Researchers Find Facial Recognition Tech's Bias Based on 'Gender and Skin-Type'

A recent study conducted by researchers from the Massachusetts Institute of Technology and Stanford University revealed that several commercially available facial recognition tools show an accuracy bias based on the subject's gender and skin type.

According to MIT News, the tests showed that when the subjects were light-skinned men, the facial analysis tools performed with almost perfect accuracy, with error rates of only about 0.8 percent.

Unfortunately, the tested programs' error rates rose dramatically when analyzing the faces of dark-skinned women: one of the tools misclassified more than 20 percent of such images, while the other two had error rates of more than 34 percent.

The researchers -- led by MIT's Joy Buolamwini and Stanford University alumna Timnit Gebru -- focused their study on three facial recognition programs "from major technology companies" that had already been released to the market at the time of the analysis.

The programs were reportedly intended for "general-purpose facial-analysis" and offered services like matching different photos of the same face and assessing a subject's age, gender, and other characteristics.

The study also reportedly pointed out that one "major U.S. technology company" had claimed that a certain facial recognition tool had a 97 percent accuracy rate. However, the research revealed that the data set used to measure its precision was composed of 77 percent male and over 83 percent light-skinned subjects.
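A quick back-of-the-envelope calculation shows why such a benchmark can flatter a system. The Python sketch below is purely illustrative: the 83 percent light-skinned share comes from the article, but the per-group accuracy figures are assumptions chosen for the example, not numbers from the study.

```python
# Illustrative arithmetic only: a benchmark that is 83 percent light-skinned
# lets a high headline accuracy hide much weaker subgroup performance.
light_share, dark_share = 0.83, 0.17  # benchmark composition (from the article)
acc_light, acc_dark = 0.995, 0.85     # assumed per-group accuracies (invented)

overall = light_share * acc_light + dark_share * acc_dark
print(f"headline accuracy: {overall:.1%}")  # -> headline accuracy: 97.0%
```

Under these assumed numbers, a system could advertise roughly 97 percent accuracy while being markedly less reliable for dark-skinned subjects.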

For their research, Buolamwini's team collected 1,200 images in which women and dark-skinned people were well represented, alongside subjects of other genders and skin types.

It is important to note that while facial recognition systems work with artificial intelligence and neural networks, these programs' knowledge is still based on the sets of data entered by their developers. From that data, facial analysis tools learn the patterns they use to come up with a result.
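A toy example can make that point concrete. The Python sketch below is not any of the systems tested; the subgroup names, the one-dimensional "feature," the distributions, and the 95/5 training split are all invented for illustration. It fits a naive midpoint classifier to data dominated by one subgroup and then measures error rates on a balanced test set.

```python
# A minimal sketch (invented data, not the study's systems): when training
# data is dominated by one subgroup, the learned decision rule fits that
# subgroup well and fails far more often on the underrepresented one.
import numpy as np

rng = np.random.default_rng(0)

def sample(group, label, n):
    """Synthetic 1-D feature; subgroup 'B' is shifted relative to 'A'."""
    center = {("A", 0): -2.0, ("A", 1): 2.0,   # well-represented subgroup
              ("B", 0):  0.0, ("B", 1): 4.0}[(group, label)]  # shifted subgroup
    return rng.normal(center, 0.5, n)

# Training set: 95 percent subgroup A, 5 percent subgroup B.
train = {(g, y): sample(g, y, n)
         for g, n in [("A", 950), ("B", 50)] for y in (0, 1)}

# "Learn" a threshold halfway between the pooled class means.
mean0 = np.concatenate([train[("A", 0)], train[("B", 0)]]).mean()
mean1 = np.concatenate([train[("A", 1)], train[("B", 1)]]).mean()
threshold = (mean0 + mean1) / 2

# Balanced test set: error rates diverge sharply by subgroup.
for g in ("A", "B"):
    x0, x1 = sample(g, 0, 5000), sample(g, 1, 5000)
    errors = (x0 > threshold).sum() + (x1 <= threshold).sum()
    print(f"subgroup {g}: error rate = {errors / 10000:.1%}")
```

Because the threshold is fit almost entirely to subgroup A's data, subgroup A's error rate comes out near zero while subgroup B's exceeds 20 percent, echoing the kind of gap the researchers measured.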

"To fail on one in three, in a commercial system, on something that's been reduced to a binary classification task, you have to ask, would that have been permitted if those failure rates were in a different subgroup?" Buolamwini said in the report.

She added: "The other big lesson ... is that our benchmarks, the standards by which we measure success, themselves can give us a false sense of progress."
