The UK population is among the most watched in the world. London has one CCTV camera for every 14 citizens, and every day these are joined by more shop monitors, ANPR systems, body-worn and dashboard cameras, and so on. But for every shoplifter or dangerous driver caught on video, many millions of faces are captured, stored and processed by who knows whom?
What happens to all the images scooped up by public and private cameras is anyone’s guess, but the signs aren’t encouraging. Facial recognition company Clearview AI built a lucrative business scraping billions of faces from social media to feed its models, selling its surveillance system to law enforcement and private businesses alike.
At the same time, the law is starting to catch up with intrusive bulk surveillance. The GDPR allows EU citizens to ask organisations for personally identifiable footage of them; a similar regulation has recently been passed in California, with a federal bill under discussion in the US. India, Japan, Brazil, South Africa, the UAE and other countries are also introducing data privacy legislation.
See also: Clearview AI faces possible £17m fine for violating Britain’s privacy laws
Under the GDPR, individuals can issue data subject access requests to organisations that hold images and video they appear in, with those organisations obliged to provide that data and, on request, remove or obscure the faces of people who are not directly relevant. Manually, this is not a practical proposition – an hour’s footage at a busy station might capture a million faces – with the result that our images no doubt end up in numerous databases without our knowledge.
Simon Randall, CEO of UK AI startup Pimloc, says that cameras are here to stay, and indeed offer safety and security benefits when used responsibly, but that the lack of clarity over what happens to people’s personal data hoovered up by monitoring systems is problematic.
“It will be used—but it will also be abused,” he said.
With footage increasingly analysed by AI, the situation becomes more concerning still, putting human oversight at a further remove. After Cambridge Analytica and numerous high-profile breaches involving identity theft, the public is increasingly distrustful of what happens to their personal data, Randall added.
Pimloc, which has just raised £5.6 million in venture capital funding, has developed a cloud-based service called Secure Redact that automatically, and configurably, obscures personal data such as faces and number plates, making it easier for organisations to comply with privacy regulations. They can review the redacted footage and de-anonymise just the parts they need – although what they do with the original, unredacted footage may of course remain an issue.
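Pimloc has not published how Secure Redact works under the hood, but the basic shape of automated video redaction – detect personal data in each frame, irreversibly blur it, and keep the detection coordinates so a reviewer can later restore specific regions from the original – can be sketched with off-the-shelf tools. The Python snippet below uses OpenCV’s classic Haar cascade face detector purely for illustration; the file names and parameters are placeholders, and a production system would rely on far more accurate deep-learning detectors covering faces, number plates and more.

```python
import cv2

# OpenCV ships a pre-trained Haar cascade face detector with the
# opencv-python package; a real redaction service would use stronger models.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def redact_frame(frame):
    """Blur every detected face in a frame. Returns the redacted frame
    plus the bounding boxes, so a reviewer could later restore
    (de-anonymise) a specific region from the original footage."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in boxes:
        roi = frame[y:y+h, x:x+w]
        # A heavy Gaussian blur renders the face unrecognisable.
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame, boxes

# Process a clip frame by frame ("input.mp4" is a placeholder path).
cap = cv2.VideoCapture("input.mp4")
writer = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if writer is None:
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter("redacted.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"),
                                 cap.get(cv2.CAP_PROP_FPS), (w, h))
    redacted, _ = redact_frame(frame)
    writer.write(redacted)
cap.release()
if writer is not None:
    writer.release()
```

Retaining the bounding boxes alongside the redacted output is one way to support the selective de-anonymisation Randall describes: a reviewer can pull just the relevant region from the securely stored original rather than un-blurring the whole clip.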
Randall says there is increasing interest in systems that automate data protection compliance across sectors including government, transportation, manufacturing, education, health, autonomous vehicles, facilities management and law enforcement.
“We are seeing genuine interest from private and public sector organisations who want to keep people safe and run efficient and productive environments but want to do it responsibly and without breaking everyone’s data privacy,” he said.
“Many people put privacy and security at opposite ends of a continuum, but we need both. Companies and public organisations build trust with their employees and customers based on what they do and how they behave – which now applies as much to data handling and surveillance as it does to their customer service.”
Cameras are already everywhere: many low-end gadgets feature video capture, and the latest Teslas have eight external cameras. As smart cities and automated systems develop apace, and regulation starts, belatedly, to catch up, we can expect to see increasing interest in automating privacy controls too.