This one is very interesting to me as it covers a few different (but connected) topics.
1. Autonomous weapons: i.e. the ability of machines to independently target and kill (or otherwise neutralise) people. **** that.
2. Face recognition and tracking: these are two distinct things but are often thought of together. Recognition: a system can recognise a face generically (e.g. it sees a face for the first time, and the next time it recognises it as one it has seen before, without identifying the individual). Tracking: not only has the system seen that face before, but different imaging systems can retrace an individual's movements and potentially identify the individual outright, or gather enough information for someone else to do so.
Now, my issue with #2: a company develops it. It's a technology, and it can be used in different ways and at different levels.
And private companies have a choice, because they can't really be stopped from doing anything with that technology unless it runs against the law. They can be pressured in the court of public opinion (to the extent that the companies themselves allow it), but that's pretty much it.
Now, we want to hold those companies accountable for how their technology is used. But it's not the companies that actually use it; it's their customers. So why do we want to hold companies accountable for how their technology might be misused? Why not the customers?
We jump at law enforcement only AFTER they've done something. Law enforcement has A LOT of leeway to use firearms, yet people take to the streets only AFTER those firearms are used. Political pressure to change that is only for show, and only briefly.
The sad irony is that we live in a system that's supposed to provide checks and balances across the whole of public service and across the different powers of the state, AND we can influence those directly by electing people. Yet when it comes to accountability, we make a presumption of guilt against a private entity for the potential actions of someone else, YET that very someone else is presumed innocent at all times.