Here’s Why People Trust Human Judgment Over Algorithms
“…people are even less trusting of algorithms if they’ve seen them fail, even a little. And they’re harder on algorithms in this way than they are on other people. To err is human, but when an algorithm makes a mistake we’re not likely to trust it again.”
The article references research showing that even when presented with evidence that algorithms outperform people at a task, humans still trust other humans more. This aversion is somewhat reduced if the human is allowed to “tweak” the algorithm’s result.
YC-Backed Pomello Helps Teams Determine Whether Job Applicants Will Fit In
A start-up called Pomello would like to help companies hire applicants who match the notoriously unfair “culture fit” criterion. They claim that their start-up is in fact working against this practice:
“Of course, a common criticism of Silicon Valley is that it promotes its own monoculture of nerdy dudes hiring and spending time with others just like them. The team is aware of the stereotype and says that its method actually pushes against that problem. Ke says the questions they ask ‘are less biased than looking at a person’s particular interests and background.’”
I’d like to see them look past simply asking new questions and critically examine their own data and recommendations.
President Tweaks the Rules on Data Collection
“[T]he administration will announce new rules requiring intelligence analysts to delete private information they may incidentally collect about Americans that has no intelligence purpose, and to delete similar information about foreigners within five years.”
Are these decisions being made individually by people? Given the vast amounts of data involved, that seems unlikely. So how will these rules be encoded in software? Will they be applied fairly? And what does fairness mean in this context: whose data is considered to have “no intelligence purpose,” and whose data is kept?
AI Has Arrived, and That Really Worries the World’s Brightest Minds
Elon Musk and Stephen Hawking, along with others at a recent conference, raised concerns about ethics in AI and the potential unfairness that results from an unthinking use of algorithms.