The researchers behind the study ran a machine vision test on the artificial intelligence services of Google and its rivals Microsoft and Amazon. Crowdworkers were paid to analyze the annotations that the AI services assigned to both official and tweeted photos of lawmakers.
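To give a sense of the kind of annotation the study relied on, here is a minimal sketch of requesting labels for a photo from Google's Cloud Vision API using its Python client. The filename is a placeholder, and this is an illustration of the general approach, not the researchers' actual pipeline.

```python
# A minimal sketch of label detection with Google's Cloud Vision API.
# Requires the google-cloud-vision package and API credentials;
# "lawmaker.jpg" is a placeholder filename.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("lawmaker.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Ask the service for descriptive labels and print each with its confidence score.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```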
Each of the image recognition services picked up on basic things anyone would notice, but the AI also saw women and men differently. The services frequently characterized women by their appearance, labeling the women lawmakers with terms like “beauty” and “girl”. The services were also reported to fail to detect women in images at all far more often than they failed to detect men.
What this study reveals is that algorithms do not see the world with mathematical detachment; they replicate and even amplify cultural bias. The study was motivated by a 2018 project called Gender Shades, which exposed gender biases in IBM’s and Microsoft’s cloud software. Women were also given appearance-based tags that men did not receive, such as “hairstyle”, “skin”, and “neck”. The results showed that Microsoft’s and Amazon’s image recognition services exhibited less overt bias: Microsoft’s service established the gender of all the men but only eight of the women, tagging one woman as a man and leaving another without a tag.
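The kind of comparison described here amounts to counting how often each label appears for each group. Below is a hedged sketch of that idea; the label lists are entirely hypothetical and stand in for annotations collected from the services.

```python
# A sketch of comparing AI-generated labels across groups of lawmakers.
# The input data below is hypothetical, for illustration only.
from collections import Counter

labels_by_group = {
    "women": ["official", "smile", "beauty", "girl", "hairstyle"],
    "men": ["official", "smile", "suit", "spokesperson"],
}

# Count label frequencies per group and show the most common ones,
# making appearance-focused labeling visible as a measurable pattern.
for group, labels in labels_by_group.items():
    counts = Counter(labels)
    print(group, counts.most_common(3))
```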
However, Google has previously turned off its gender recognition feature, stating that a person’s gender cannot be accurately determined from their appearance.
Schwemmer, along with his colleagues, began with simple tests of Google’s services, aiming to identify measurable patterns in how people talk about politics online through images. What he uncovered about gender biases convinced him that the technology is not suitable for research in that form. It could also result in companies that use the AI services facing distasteful results.
Schwemmer believes that biases like those found in Google’s AI services result from unequal training data.
Fixing these biases has become a point of interest in recent years. With artificial intelligence focused on pixels and patterns, there is a large chance of misinterpretation. The issue has grown as algorithms have become better at processing images.
Google, along with its competitors in the artificial intelligence field, contributes heavily to research on bias and fairness in AI. This includes proposals for a standard way to share the contents and limitations of datasets and AI software with developers.
Thanks for reading!