Proponents of algorithmic repair suggest taking lessons from curatorial professionals such as librarians, who have had to think about how to ethically collect data about people and decide what belongs in a library's collection. They propose considering not only whether an AI model's performance is judged fair or good, but also whether it shifts power.
The suggestions echo earlier recommendations from former Google AI researcher Timnit Gebru, who in a 2019 paper encouraged machine learning practitioners to consider how archivists and librarians deal with issues of ethics, inclusiveness, and power. Gebru says Google fired her in late 2020, and she recently launched a distributed AI research center. A critical analysis concluded that Google subjected Gebru to a pattern of abuse historically aimed at Black women in professional settings. The authors of that analysis also urged computer scientists to look for patterns in history and society, not just in data.
Earlier this year, five U.S. senators urged Google to hire an independent auditor to assess the impact of racism on Google products and the workplace. Google did not respond to the letter.
In 2019, four Google AI researchers argued that the field of responsible AI needs critical race theory, because most work in the field either ignores the socially constructed nature of race or fails to recognize the influence of history on the datasets that are collected.
“We stress that data collection and annotation efforts must be grounded in the social and historical contexts of racial classification and racial categorization,” the paper reads. “To oversimplify is to inflict violence, or even more, to reinscribe violence on communities that are already suffering from structural violence.”
The paper's lead author, Alex Hanna, was one of the first sociologists hired by Google. She sharply criticized Google executives in the wake of Gebru's departure. Hanna says she appreciates that critical race theory centers race in conversations about what is fair or ethical and can help reveal historical patterns of oppression. Since then, Hanna has co-authored a paper, also published in Big Data & Society, that examines how facial recognition technology reinforces constructs of gender and race dating back to colonialism.
At the end of 2020, Margaret Mitchell, who led the ethical AI team at Google together with Gebru, said the company was beginning to use critical race theory to help decide what is fair or ethical. Mitchell was fired in February. A Google spokesperson said critical race theory is part of the company's review process for AI research.
Another paper, due to be published next year by Rashida Richardson, a White House science and technology policy adviser, argues that you cannot think about AI in the United States without acknowledging the influence of racial segregation. The legacy of laws and social norms designed to control, exclude, and otherwise oppress Black people is too influential.
For example, studies have shown that algorithms used to screen apartment renters and mortgage applicants disproportionately disadvantage Black people. Richardson says it is critical to remember that federal housing policy explicitly required racial segregation until the passage of civil rights laws in the 1960s. The government also colluded with developers and property owners to deny opportunities to people of color and to keep racial groups apart. She says segregation enabled “cartel-like behavior” among white people in homeowners' associations, school boards, and unions. In turn, segregated housing practices compound problems or privileges related to education and generational wealth.
Historical patterns of segregation have poisoned the data on which many algorithms are built, Richardson says, such as data used to classify what counts as a “good” school or attitudes about policing in Brown and Black neighborhoods.
“Racial segregation has played a central evolutionary role in the reproduction and amplification of racial stratification in data-driven technologies and applications. Racial segregation also constrains conceptualization of algorithmic bias problems and relevant interventions,” she wrote. “When the impact of racial segregation is ignored, issues of racial inequality appear as naturally occurring phenomena, rather than byproducts of specific policies, practices, social norms, and behaviors.”