25 May 2015

Technology is not racist, but it still needs to change

Tags and categories on social media can be offensive.

William has wide, expressive eyes and a kind face. He’s 56 years old but his smooth skin and soft afro make him look 10 years younger. He lives in Los Angeles and preaches kindness to the local kids. And a few weeks ago a computer algorithm decided that he was an “animal” and an “ape”.

The algorithm in question was introduced by Flickr, a photo sharing site, at the beginning of May. It automatically scans each image uploaded to the site and categorises or “tags” it according to the objects it recognises.

When it works properly it is like magic. A picture of your cat lying on your couch is automatically tagged “cat”, and the same applies to sunsets, bridges, buildings and countless other things.

But Yahoo, which owns Flickr, did not anticipate that its new algorithm might confuse species and do so in such an offensive way. William, whose picture was uploaded to the site in 2014, is the most prominent example discovered, but the same algorithm also tagged a picture of a white woman as “animal” and “ape”.

It’s important to note that this algorithm is not overtly racist. No closet bigot at Yahoo secretly programmed it to mislabel photos. Instead it uses a technique called “machine learning” to scan through the billions of photos in Flickr’s archives and literally learn to recognise the contents of images through sheer repetition. 

Only a few people bother to tag the images they upload to the site, but the algorithm can use the tags that do exist to extrapolate to all the other images with similar content.
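To make the idea concrete, here is a toy sketch in Python, with made-up data and a deliberately crude nearest-neighbour rule rather than anything Flickr actually uses: the minority of hand-tagged photos serve as examples, and each untagged photo simply inherits the tag of the tagged photo it most resembles.

# A toy illustration of tag propagation. Everything here is hypothetical:
# real systems use features learned by image-recognition models, not random
# numbers, and far more sophisticated classifiers than nearest neighbour.
import numpy as np

rng = np.random.default_rng(0)

# Pretend each photo has already been reduced to a small feature vector.
tagged_features = rng.normal(size=(100, 16))    # photos people tagged by hand
tagged_labels = rng.choice(["cat", "sunset", "bridge"], size=100)  # their human-supplied tags
untagged_features = rng.normal(size=(5, 16))    # photos with no tags at all

def propagate_tags(untagged, tagged, labels):
    # Give each untagged photo the tag of the tagged photo it most resembles.
    guesses = []
    for photo in untagged:
        distances = np.linalg.norm(tagged - photo, axis=1)
        guesses.append(labels[np.argmin(distances)])
    return guesses

print(propagate_tags(untagged_features, tagged_features, tagged_labels))

The principle is the same in production systems: a handful of human labels are spread across millions of unlabelled images, and any blind spot in those labels travels along with them.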

In that context it can be forgiven for misidentifying a few humans as apes, since we are part of the same family of primates, the Hominidae. Just like other hominids, we have large heads with forward-facing eyes and protruding noses positioned above our mouths.

Given how relatively crude our machine learning tools still are, these kinds of anomalies were bound to crop up. Computers are still far too limited and stupid to understand how offensive these kinds of mistakes can be. The same algorithm blithely tagged pictures of Dachau, the Nazi concentration camp, with “jungle gym” and “sport”.

This is not the first time a piece of technology has been accused of inherent bias. In 2009 a pair of co-workers, one black and one white, uploaded a video to YouTube using a new HP laptop. The laptop boasted a camera with “face tracking” technology, intended to keep it focused on your head as you moved around in front of your computer. 

The camera could easily track the face of the white person, but when the black person entered the frame, it immediately stopped tracking and fell back to a neutral position. The video is compelling evidence: there’s no mistaking the camera’s total inability to cope with darker skin.

But skin colour isn’t the only racial characteristic that algorithms have trouble parsing. In 2010 a Taiwanese-American blogger found that her new digital camera kept warning that someone in her family photos had blinked. She quickly realised that the camera was misinterpreting the epicanthic folds of their eyes as blinking.

No one is seriously accusing any of these companies of overt bigotry. There’s no coven of white male geeks with a racist axe to grind. But in many ways this is a product of something much worse: obliviousness. 

At the moment the vast majority of consumer technology is designed by white men, usually from privileged backgrounds. Asian men also feature quite strongly in the sector, with white women coming in a distant third. 

When these white men test their products, they do so for their main markets: the people most likely to buy them. That means affluent North Americans and Europeans, the majority of whom are also white. The net result is technology designed by, tested on and marketed to white people.

In most cases this has no obvious negative effects. An iPhone or a FitBit doesn’t work better if you’re white or malfunction if you’re Asian. But it does speak to a narrowness of perspective in the people who control the technology sector. If these machines cannot understand our faces, how will they understand our credit scores or our medical records?

But the tide of history is already shifting and technology will be dragged with it. In a few short decades the majority of middle-class people on the planet will no longer be white. As new powers emerge in Asia, South America and Africa, technology will be forced to broaden its perspective and become more inclusive.

And the same thing will happen to the people making the technology: they will start to look more like most people on the planet, and less like the cast of Friends. Silicon Valley will not be the centre of the technology world forever. But until then, we need to keep reminding the geeks that our planet is a lot bigger and more varied than they realise.