Artificial intelligence might be developing its own set of stereotypes


If artificial intelligence is made to think like humans, then one way or another it is bound to develop its own set of stereotypes, just the way humans do. Take, for example, the finding in this Boing Boing report: if you search for “professional hairstyles”, Google Images shows you mostly white women with different hairstyles. When you search for “unprofessional hairstyles”, Google Images shows mostly black women or women of colour with different hairstyles. Somehow the artificial intelligence driving Google Images search is learning that the hairstyles of white women are professional and the hairstyles of black women or women of colour are unprofessional. @BonKamona, who came upon this discovery, shared it on Twitter.

See for example this comparison:


The upper row shows the images that turned up for the search “unprofessional hairstyles”; the lower row shows the images that turned up for “professional hairstyles”. It means the artificial intelligence behind Google search is already making assumptions about looks and the way people do their hair.

This NextWeb write-up has compiled a list of instances in which artificial intelligence has shown signs of extremism, racial bias and even outright hatred. For example, Microsoft’s Tay began talking like a Hitler-loving Nazi within just 24 hours of self-learning.

Artificial intelligence is going to evolve through self-learning; there is far too much information for humans to add manually. The information will have to be gathered from existing conversations, and conversations cannot be controlled. People and organisations working on these artificial intelligence systems therefore also need to put in place mechanisms to filter out ideological biases and racial stereotypes.
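One of the simplest forms such a filtering mechanism can take is screening training text against a blocklist before it ever reaches a self-learning system. The sketch below is a toy illustration only, not how Google or Microsoft actually filter their data; the blocklist terms are hypothetical placeholders, and real systems rely on far more sophisticated methods than keyword matching.

```python
# Toy sketch: drop training sentences that contain blocklisted terms
# before feeding a corpus to a self-learning model.
# BLOCKLIST contents are hypothetical placeholders, not real terms.
BLOCKLIST = {"slur1", "slur2"}

def is_clean(sentence: str) -> bool:
    """Return True if the sentence contains no blocklisted token."""
    tokens = {word.strip(".,!?").lower() for word in sentence.split()}
    return BLOCKLIST.isdisjoint(tokens)

def filter_corpus(sentences):
    """Keep only the sentences that pass the blocklist check."""
    return [s for s in sentences if is_clean(s)]
```

A keyword blocklist like this is easy to evade and says nothing about subtler statistical biases, such as which images rank for which queries, which is partly why filtering remains an open problem.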

About Sarah Watts
Sarah is a technology buff. Not uptight about her writing skills, but when it comes to covering technology, she is a no-holds-barred writer.
