
Facial recognition is among the most promising emerging technologies on the market.
Yet many facial recognition systems perform best on white men, a bias that causes errors in the recognition software.
As Joy Buolamwini, an MIT researcher and founder of the Algorithmic Justice League, says: “If your facial recognition system works worse with women or people with darker skin, it’s in your own interest to get rid of that bias.”
In fact, most of the world’s population is made up of people who don’t have European-heritage white skin. They are, as Buolamwini called them at a Women Transforming Technology conference, “the undersampled majority.”
As she said, “You have to include the undersampled majority if you have global aspirations as a company.”
Bias problems in AI stem from limitations in the data used to train the systems. When the AI is then used in the real world, it encounters far more varied faces than it was trained to recognize.
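One way such skew surfaces is through disaggregated evaluation, where accuracy is measured separately for each demographic group rather than averaged across everyone. The sketch below is illustrative only; the function name and the (group, predicted, actual) record format are assumptions for the example, not any vendor's actual API.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute classification accuracy separately per demographic group.

    records: iterable of (group, predicted_label, actual_label) tuples.
    Returns a dict mapping each group to its accuracy (0.0 to 1.0).
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        correct[group] += int(predicted == actual)
    return {group: correct[group] / total[group] for group in total}
```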
While this may seem like a minor error in a developing technology, the bias is troubling when considered in the context of potential uses.
Facial recognition software is likely to be adopted eventually by law enforcement and other government agencies for use in daily operations. A mistaken match could bring the wrong person under suspicion.
For example, Microsoft’s system correctly identified the gender of 100 percent of light-skinned men, 98.3 percent of light-skinned women, 94 percent of dark-skinned men, and 79.2 percent of dark-skinned women.
That’s a 20.8 percentage-point gap between the best-recognized group (light-skinned men) and the worst (dark-skinned women).
IBM and Face++ fared worse, with gaps of 34.4 (IBM) and 33.8 (Face++) percentage points, compared with Microsoft’s 20.8.
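To make the arithmetic concrete, here is a minimal sketch that computes that gap from the per-group figures reported above; the dictionary and its group labels are illustrative shorthand, not output from any vendor's tool.

```python
# Per-group gender-classification accuracies (percent correct),
# as reported for Microsoft's system in the article.
accuracies = {
    "light-skinned men": 100.0,
    "light-skinned women": 98.3,
    "dark-skinned men": 94.0,
    "dark-skinned women": 79.2,
}

best = max(accuracies, key=accuracies.get)
worst = min(accuracies, key=accuracies.get)
gap = accuracies[best] - accuracies[worst]

# Prints: 20.8 percentage-point gap between light-skinned men and dark-skinned women
print(f"{gap:.1f} percentage-point gap between {best} and {worst}")
```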
Many facial recognition developers, such as IBM, Microsoft and Kairos, have acknowledged that improvements are needed.