An Algorithm That ‘Predicts’ Criminality Based on a Face Sparks a Furor

In early May, a press release from Harrisburg University claimed that two professors and a graduate student had developed a facial-recognition program that could predict whether someone would be a criminal. The release said the paper would be published in a collection by Springer Nature, a major academic publisher.

With “80 percent accuracy and with no racial bias,” the paper, “A Deep Neural Network Model to Predict Criminality Using Image Processing,” claimed its algorithm could predict “if someone is a criminal based solely on a picture of their face.” The press release has since been deleted from the university website.

On Tuesday, more than 1,000 machine-learning researchers, sociologists, historians, and ethicists released a public letter condemning the paper, and Springer Nature confirmed on Twitter that it would not publish the research.

But the researchers say the problem doesn’t stop there. Signers of the letter, collectively calling themselves the Coalition for Critical Technology (CCT), said the paper’s claims “are based on unsound scientific premises, research, and methods which … have [been] debunked over the years.” The letter argues it is impossible to predict criminality without racial bias, “because the category of ‘criminality’ itself is racially biased.”


Advances in data science and machine learning have led to numerous algorithms in recent years that purport to predict crimes or criminality. But if the data used to build those algorithms is biased, the algorithms’ predictions will also be biased. Because of the racially skewed nature of policing in the US, the letter argues, any predictive algorithm modeling criminality will only reproduce the biases already reflected in the criminal justice system.

Mapping these biases onto facial analysis recalls the abhorrent “race science” of prior centuries, which purported to use technology to identify differences between the races—in measurements such as head size or nose width—as proof of their innate intellect, virtue, or criminality.

Race science was debunked long ago, but papers that use machine learning to “predict” innate attributes or offer diagnoses are making a subtle but alarming return.

In 2016 researchers from Shanghai Jiao Tong University claimed their algorithm could predict criminality using facial analysis. Engineers from Stanford and Google refuted the paper’s claims, calling the approach a new “physiognomy,” a debunked race science popular among eugenicists, which infers personality attributes from the shape of someone’s head.

In 2017 a pair of Stanford researchers claimed their artificial intelligence could tell if someone is gay or straight based on their face. LGBTQ organizations lambasted the study, noting how harmful the notion of automated sexuality identification could be in countries that criminalize homosexuality. Last year, researchers at Keele University in England claimed their algorithm trained on YouTube videos of children could predict autism. Earlier this year, a paper in the Journal of Big Data not only attempted to “infer personality traits from facial images,” but cited Cesare Lombroso, the 19th-century scientist who championed the notion that criminality was inherited.

Each of those papers sparked a backlash, though none led to new products or medical tools. The authors of the Harrisburg paper, however, claimed their algorithm was specifically designed for use by law enforcement.

“Crime is one of the most prominent issues in modern society,” said Jonathan W. Korn, a PhD student at Harrisburg and former New York police officer, in a quote from the deleted press release. “The development of machines that are capable of performing cognitive tasks, such as identifying the criminality of [a] person from their facial image, will enable a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime from occurring in their designated areas.”

Korn didn’t respond to a request for comment. Nathaniel Ashby, one of the paper’s coauthors, declined to comment.
