AI is an increasingly important force in our data-driven world, driving decisions from job hiring to teacher rankings. But as we are discovering, the underlying algorithms are often biased and spit out tainted results. While we might want to believe that mathematical models don't lie—the numbers either add up or they don't—that misses the point. The problem often lies in the data, not the algorithm: if the training data sets are flawed, the AI's results will be as well.
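That point can be made concrete with a toy sketch (the groups, labels, and numbers here are entirely hypothetical): a "model" that simply learns hiring rates from biased historical labels reproduces the bias, even though the learning step itself is neutral.

```python
# Hypothetical illustration of "garbage in, garbage out": a model trained on
# biased historical hiring labels faithfully reproduces that bias.
from collections import defaultdict

# Made-up training set: (group, qualified, hired). The "hired" labels encode
# past human bias: equally qualified group-B candidates were often rejected.
history = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, True), ("B", False, False),
    ("B", True, False), ("A", True, True),
]

# "Training": estimate P(hired | group) straight from the labels.
counts = defaultdict(lambda: [0, 0])  # group -> [hired count, total count]
for group, _, hired in history:
    counts[group][0] += hired
    counts[group][1] += 1

rates = {g: hired / total for g, (hired, total) in counts.items()}
print(rates)  # {'A': 0.75, 'B': 0.25} -- the bias in the data becomes the model
```

Nothing in the arithmetic is biased; the skew comes entirely from the labels the model was given, which is why auditing the training data matters as much as auditing the code.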
"The big companies developing them show no interest in fixing the problem."
~Kate Crawford, speaking at the AI Now conference at MIT this week
Opaque and potentially biased mathematical models are remaking our lives—and neither the companies responsible for developing them nor the government is interested in addressing the problem.
This week a group of researchers, together with the American Civil Liberties Union, launched an effort to identify and highlight algorithmic bias. The AI Now Initiative was announced at an event held at MIT to discuss what many experts see as a growing challenge.
Algorithmic bias is shaping up to be a major societal issue at a critical moment in the evolution of machine learning and AI. If the bias lurking inside the algorithms that make ever-more-important decisions goes unrecognized and unchecked, it could have serious negative consequences, especially for poorer communities and minorities. The eventual outcry might also stymie the progress of an incredibly useful technology (see “Inspecting Algorithms for Bias”).
Algorithms that may conceal hidden biases are already routinely used to make vital financial and legal decisions. Proprietary algorithms are used to decide, for instance, who gets a job interview, who gets granted parole, and who gets a loan.
The founders of the new AI Now Initiative, Kate Crawford, a researcher at Microsoft, and Meredith Whittaker, a researcher at Google, say bias may exist in all sorts of services and products.
“It’s still early days for understanding algorithmic bias,” Crawford and Whittaker said in an e-mail. “Just this year we’ve seen more systems that have issues, and these are just the ones that have been investigated.”
Examples of algorithmic bias that have come to light lately, they say, include flawed and misrepresentative systems used to rank teachers, and gender-biased models for natural language processing.
Cathy O’Neil, a mathematician and the author of Weapons of Math Destruction, a book that highlights the risk of algorithmic bias in many contexts, says people are often too willing to trust mathematical models because they believe doing so removes human bias. “[Algorithms] replace human processes, but they’re not held to the same standards,” she says. “People trust them too much.”
A key challenge, these and other researchers say, is that crucial stakeholders, including the companies that develop and apply machine learning systems and government regulators, show little interest in monitoring and limiting algorithmic bias. Financial and technology companies use all sorts of mathematical models and aren’t transparent about how they operate. O’Neil says, for example, she is concerned about how the algorithms behind Google’s new job search service work.
O’Neil previously worked as a professor at Barnard College in New York and a quantitative analyst at the company D. E. Shaw. She is now the head of Online Risk Consulting & Algorithmic Auditing, a company set up to help businesses identify and correct biases in the algorithms they use. But O’Neil says even those who know their algorithms are at risk of bias are more interested in the bottom line than in rooting out bias. “I’ll be honest with you,” she says. “I have no clients right now.”
O’Neil, Crawford, and Whittaker all warn that the Trump administration’s lack of interest in AI—and in science generally—means there is no regulatory movement to address the problem (see “The Gaping, Dangerous Hole in the Trump Administration”).
“The Office of Science and Technology Policy is no longer actively engaged in AI policy—or much of anything according to their website,” Crawford and Whittaker write. “Policy work now must be done elsewhere.”
"Garbage in, garbage out" may be an old line, but it still speaks the truth: data-driven insights are only as good as the data they draw from. As data plays an increasingly important role in how our world operates, it's incumbent upon us to make sure that the data we use is actually correct. For NETSCOUT, that mandate has led to technologies driven by smart data that is well-structured, contextual, available in real time, and based on end-to-end pervasive visibility across the entire enterprise. To learn more, read about smart data at https://www.netscout.com/solutions/enterprise/smart-data
~Carol Hildebrand, Sr. Strategic Marketing Writer, NETSCOUT