The Wall Street Journal has a review of recent research on algorithmic bias. If you’ve been following this blog, the articles it references are not new, but it’s nice to see a prominent news outlet covering an issue that perhaps gets less attention than it deserves.
There’s an interesting anecdote about face-detection software that failed to recognize non-white faces:
Back then, he built a software program that would comb through images online and try to detect objects in them. The program could easily recognize white faces, but it had trouble detecting faces of Asians and blacks. Mr. Viola eventually traced the error back to the source: In his original data set of about 5,000 images, whites predominated.
The problem got worse as the program processed images it found on the Internet, he said, because the Internet, too, had more images of whites than blacks. The software’s familiarity with a larger set of pictures sharpened its knowledge of faces, but it also solidified the program’s limited understanding of human differences.
To fix the problem, Mr. Viola added more images of diverse faces into his training data, he said.
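The mechanism in the anecdote, a detector trained on skewed data that underperforms on the minority group until the training set is rebalanced, can be sketched with a toy example. Everything here is made up for illustration (the 2-D feature vectors, the centroid-plus-threshold detector, the group centers, the 4500/500 split); it is not Viola’s actual system, just a minimal model of how data imbalance produces unequal error rates.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_group(center, n):
    # Synthetic 2-D "face" feature vectors clustered around a group-specific center.
    return rng.normal(loc=center, scale=1.0, size=(n, 2))

def train_detector(faces):
    # Toy detector: a face centroid plus a distance threshold set from the
    # spread of the training faces (mean + 2 standard deviations).
    centroid = faces.mean(axis=0)
    dists = np.linalg.norm(faces - centroid, axis=1)
    return centroid, dists.mean() + 2 * dists.std()

def recall(centroid, threshold, faces):
    # Fraction of test faces the detector accepts as faces.
    return float((np.linalg.norm(faces - centroid, axis=1) < threshold).mean())

# Imbalanced training set, loosely mirroring the ~5,000-image set in the story:
# group A dominates, group B is a small minority.
train_a = sample_group([0.0, 0.0], 4500)
train_b = sample_group([3.0, 3.0], 500)
test_a = sample_group([0.0, 0.0], 1000)
test_b = sample_group([3.0, 3.0], 1000)

# Train on the skewed data: the learned centroid sits near group A,
# so many group-B faces fall outside the threshold.
centroid, threshold = train_detector(np.vstack([train_a, train_b]))
recall_b_before = recall(centroid, threshold, test_b)

# The fix described in the article: add more minority-group images
# (here, oversample group B until the two groups are balanced).
train_b_balanced = train_b[rng.integers(0, len(train_b), size=4500)]
centroid2, threshold2 = train_detector(np.vstack([train_a, train_b_balanced]))
recall_b_after = recall(centroid2, threshold2, test_b)

print(f"group-B recall, imbalanced data: {recall_b_before:.2f}")
print(f"group-B recall, balanced data:   {recall_b_after:.2f}")
```

Running this, group-B recall jumps once the training set is balanced, while group-A recall stays high, which is the shape of the problem and of the fix in the quote.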
This shows why efforts like the World White Web, although a tad quixotic, might still be useful.