Amazon recently stopped providing facial recognition software to law enforcement, after two years of research and activism.
This MIT Tech Review article, and others like it, have been making the rounds recently. It outlines the journey an idea takes from research¹ to real-world effects. In retrospect, the conclusion seems obvious: facial recognition is unproven, and of course it should not be used in law enforcement. Be careful, though, because this is most likely a case of hindsight bias².
For the last two years, facial recognition seemed like the inevitable way of the future. The prevailing narrative was that the companies producing these products would keep improving their AI models, and the biases would eventually disappear on their own. Take a look at this article from April 2019:
> A day after this article was published, an Amazon spokeswoman responded, saying that the company had updated its Rekognition service since the M.I.T. researchers completed their study and that it had found no differences in error rates by gender and race when running similar tests.
Through popular journalism and relentless activism³ that exposed how this technology is being used, the Overton window, or the range of views considered acceptable in the mainstream, shifted over time. It is now far more acceptable to question whether this technology should be used in law enforcement contexts at all.
I recommend this interview with Timnit Gebru, one of the original authors of the MIT paper, in which she explains in plain terms the consequences of biased facial recognition systems.
² Hindsight bias refers to the common tendency for people to perceive events that have already occurred as having been more predictable than they actually were before the events took place. (Wikipedia)