How could that be? AI should be neutral, right? If the results of a study are not what was expected, the method must be biased. Or, in language more plain, white folks developed the system. Ergo, racism.
From Pluralist.
By Kyle Hooten
Researchers from Cornell University discovered that artificial intelligence systems designed to identify offensive “hate speech” flag comments purportedly made by minorities “at substantially higher rates” than remarks made by whites.
Several universities maintain artificial intelligence systems designed to monitor social media websites and report users who post “hate speech.” In a study published in May, researchers at Cornell found that such systems “flag” tweets likely written by black social media users more often, according to Campus Reform.
The study’s authors found that, according to the AI systems’ definition of abusive speech, “tweets written in African-American English are abusive at substantially higher rates.”
The research team averred that the unexpected findings could be explained by “systematic racial bias” displayed by the human beings who assisted in spotting offensive content.
“The results show evidence of systematic racial bias in all datasets, as classifiers trained on them tend to predict that tweets written in African-American English are abusive at substantially higher rates,” reads the study’s abstract. “If these abusive language detection systems are used in the field they will, therefore, have a disproportionate negative impact on African-American social media users.”
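The disparity the abstract describes boils down to a simple comparison: the rate at which a classifier flags tweets from one dialect group versus another. The sketch below is purely illustrative, with made-up records and a stand-in classifier rather than the study’s actual data or models, but it shows the kind of measurement involved.

```python
# Illustrative sketch only: comparing flag rates across dialect groups.
# The records and the classifier here are hypothetical, not the study's data or code.
from collections import defaultdict

def flag_rates(records, classify):
    """Return the fraction of tweets flagged as abusive, grouped by dialect label."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for tweet, dialect in records:
        total[dialect] += 1
        if classify(tweet):  # classify() stands in for a trained abuse classifier
            flagged[dialect] += 1
    return {d: flagged[d] / total[d] for d in total}

# Toy usage with a dummy classifier; a real run would use labeled tweets and a trained model.
sample = [("example tweet one", "AAE"), ("example tweet two", "SAE")]
print(flag_rates(sample, classify=lambda text: False))
```

A large gap between the per-group rates is exactly the “substantially higher rates” finding the researchers report.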
One of the study’s authors said that “internal biases” may be to blame for why “we may see language written in what linguists consider African American English and be more likely to think that it’s something that is offensive.”
Automated technology for identifying hate speech is not new, nor are universities the only parties developing it. Two years ago, Google unveiled its own system called “Perspective,” designed to rate phrases and sentences based on how “toxic” they might be.
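For context, Perspective is exposed as a web API: send it a snippet of text and it returns a score between 0 and 1 for how “toxic” the text is judged to be. The sketch below is not Google’s code; it follows Perspective’s publicly documented REST interface, with a placeholder API key, and the exact field names should be checked against the current documentation.

```python
# Hedged sketch of a Perspective API query; endpoint and fields per the public docs,
# which should be verified before use. The API key is a placeholder.
import requests

API_KEY = "YOUR_API_KEY"  # obtained from a Google Cloud project with Perspective enabled
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(text):
    """Return Perspective's TOXICITY score (0-1) for the given text."""
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=body)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Side-by-side tests like the one described below amount to comparing scores for paired phrases:
# print(toxicity("phrase about group A"), toxicity("the same phrase about group B"))
```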
Shortly after the release of Perspective, YouTube user Tormental made a video of the program at work, alleging inconsistencies in how it scored comparable comments.
According to Tormental, the system rated prejudicial comments against minorities as more “toxic” than equivalent statements against white people.
Google’s system showed a similar discrepancy for bigoted comments directed at women versus men.
Link at knuckledraggin.