Researchers and corporations alike are increasingly turning to artificial intelligence techniques to teach computers to make more-accurate-than-human assessments of everything from healthcare diagnoses to recidivism rates. An algorithm can, for example, scan through a tremendous volume of publicly available texts and make associations—find patterns in words and concepts—based on commonalities between hundreds, thousands, or tens of thousands of them.
But the data the computer uses to generate those conclusions isn’t itself computer-generated. It comes from humans, from the decisions and statements and societal choices those humans have made in the past. So what happens when we input into a computer algorithm the collected wisdom of humanity, and ask that algorithm to draw conclusions? As Brian Resnick explains, you get results that mimic the biases in the data you’ve drawn from:
In humans, the IAT is meant to uncover subtle biases in the brain by seeing how long it takes people to associate words. A person might quickly connect the words “male” and “engineer.” But if a person lags on associating “woman” and “engineer,” it’s a demonstration that those two terms are not closely associated in the mind, implying bias. (There are some reliability issues with the IAT in humans, which you can read about here.)
Here, instead of looking at the lag time, Caliskan looked at how closely the computer thought two terms were related. She found that African-American names in the program were less associated with the word “pleasant” than white names. And female names were more associated with words relating to family than male names. (In a weird way, the IAT might be better suited for use on computer programs than for humans, because humans answer its questions inconsistently, while a computer will yield the same answer every single time.)
Like a child, a computer builds its vocabulary through how often terms appear together. On the internet, African-American names are more likely to be surrounded by words that connote unpleasantness. That’s not because African Americans are unpleasant. It’s because people on the internet say awful things. And it leaves an impression on our young AI.
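The measurement Resnick describes can be sketched in a few lines. In word-embedding models like the one Caliskan studied, each word is a vector of numbers, and "how closely two terms are related" is typically the cosine similarity between their vectors. The toy vectors below are made-up values purely for illustration (real embeddings such as word2vec or GloVe have hundreds of dimensions learned from text); the `association_gap` helper is a hypothetical simplification of the paper's word-association test, not its exact formula.

```python
import math

# Hypothetical 4-dimensional word vectors, invented for illustration only.
# Real embeddings are learned from co-occurrence patterns in large corpora.
vectors = {
    "engineer": [0.9, 0.1, 0.3, 0.2],
    "family":   [0.1, 0.8, 0.2, 0.7],
    "male":     [0.8, 0.2, 0.4, 0.1],
    "female":   [0.3, 0.7, 0.3, 0.6],
}

def cosine_similarity(u, v):
    """How closely two vectors point the same way (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association_gap(target, attr_a, attr_b):
    """Positive result: `target` sits closer to attr_a than to attr_b."""
    return (cosine_similarity(vectors[target], vectors[attr_a])
            - cosine_similarity(vectors[target], vectors[attr_b]))

# With these toy vectors, "engineer" leans male and "family" leans female,
# mirroring the kind of gap the study measured in real embeddings.
print(association_gap("engineer", "male", "female"))
print(association_gap("family", "female", "male"))
```

Because the vectors are fixed numbers, this measurement gives the same answer every time it is run, which is the point Resnick makes about why the test suits machines better than people.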
BLAST FROM THE PAST
At Daily Kos on this date in 2010—What’s the matter with doctors?
The bottom line here is that when you hear about a doctor claiming he or she is going to have to turn away patients because of health reform, chances are it's a highly paid (and conservative Republican) specialist who's doing some turf protection at the expense of an honest discussion of the issues, using health reform as an excuse to vent about professional anxieties that would exist if Obama had never been elected. Are the issues real? Sure, at least at some level. But so are the solutions, and it's going to require the government to implement them if they have any chance of helping doctors, and by extension, their patients.
On today’s Kagro in the Morning show: Whee! Chinese trademarks for the whole family! A brief moment of gun sanity in FL. No tax reform while you’re “under audit,” please. US guarantees a Korean war or double your money back! Is Congressional oversight about to get Gorsuched again?