United Nations Artificial Intelligence Advisor
In 1994, Bally’s launched the use of facial recognition in Las Vegas casinos. While humans could use cameras to track a person through the facility, the advent of facial recognition technology allowed computers to track hundreds of people simultaneously, in real time. What started as
a solution to combat cheating and fraud evolved into a marketing tool, as casinos soon learned customers’ preferences in games and drinks. Fast forward to 2007, when Australia launched SmartGate, which validates travelers by matching facial recognition against the photo stored in their biometric passports. Today, facial recognition technology (FRT) has made a quantum leap forward and no longer requires comparison to a tagged image or a database of passport photos. Now, artificial intelligence (AI) technology can determine who a person is from a single unconnected image, video, or audio file by scouring the internet. Or can it?
In 2015, Google acknowledged that its face analysis technology recognized the gender of white men well but performed far worse for women and ethnic minorities. Worse, the technology labeled African Americans as “gorillas.” Even though these were not intentional slurs or deliberate errors on Google’s part, they raise the question of how this could have happened. That’s the challenge we face. In developing these technologies, we need diverse teams to train the AI systems. We could show a million photos of different faces of all types to train an AI system; however, if none of the photos shows a person wearing glasses, then the AI will never learn that people can wear glasses. It seems trivial, but think of all the variations a facial recognition system must handle. Missing a variation means missing a segment of the population.
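The coverage problem described above can be checked before training ever begins. As a minimal, hypothetical sketch (the dataset, attribute names, and `audit` helper below are invented for illustration, not part of any real pipeline), one could tally how often each expected variation actually appears in a training set and flag the gaps:

```python
from collections import Counter

# Hypothetical metadata for a face-image training set; in practice
# these labels would come from the dataset's annotation files.
samples = [
    {"skin_tone": "light", "glasses": False},
    {"skin_tone": "light", "glasses": False},
    {"skin_tone": "dark",  "glasses": False},
    {"skin_tone": "light", "glasses": False},
]

def audit(samples, attribute, expected_values):
    """Count how often each expected value of an attribute appears.

    A count of zero means that variation is entirely absent from the
    training data, so the model cannot learn to handle it.
    """
    counts = Counter(s[attribute] for s in samples)
    return {value: counts.get(value, 0) for value in expected_values}

# No sample has glasses=True, so the "glasses" variation is missing.
print(audit(samples, "glasses", [True, False]))
print(audit(samples, "skin_tone", ["light", "dark"]))
```

Such an audit catches only the variations one thinks to list, which is exactly why the paragraph above argues for diverse teams: they widen the list of variations anyone thinks to check.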