
The movement to make AI accountable gains more steam


An upcoming report by the Algorithmic Justice League (AJL), a private nonprofit, recommends disclosure requirements when AI models are used and the creation of a public repository of incidents where AI causes harm. Such repositories can help auditors spot potential problems with algorithms and help regulators investigate or penalize repeat offenders. AJL co-founder Joy Buolamwini co-authored an influential 2018 audit that found facial recognition algorithms performed best on white men and worst on dark-skinned women.

The report says it is important that auditors be independent and that results be publicly reviewable. "Without those safeguards, there's no accountability mechanism," said Sasha Costanza-Chock, lead researcher at AJL. "If they want to, they can just bury it; if a problem is found, there's no guarantee that it gets fixed. It's toothless, it's secret, and the auditors have no leverage."

Deb Raji, an AJL member who reviews audits and who participated in the 2018 audit of facial recognition algorithms, warned that Big Tech companies appear to be taking an increasingly adversarial approach to outside auditors, sometimes threatening lawsuits on privacy or anti-hacking grounds. In August, Facebook blocked NYU academics from monitoring political ad spending and thwarted a German researcher's attempt to investigate the Instagram algorithm.

Raji called for the creation of an audit oversight board within a federal agency to do things like enforce standards or mediate disputes between auditors and companies. Such a board could be modeled after the Financial Accounting Standards Board or the Food and Drug Administration's standards for evaluating medical devices.

Standards for audits and auditors matter because growing calls to regulate AI have spawned a number of auditing startups, some critical of AI and others that may be too favorable to the companies they audit. In 2019, a coalition of AI researchers from 30 institutions recommended external audits and regulation that create a marketplace for auditors as part of building AI that people trust, with verifiable results.

Cathy O'Neil founded O'Neil Risk Consulting & Algorithmic Auditing (Orcaa) in part to evaluate AI that is invisible or otherwise inaccessible to the public. For example, Orcaa works with the attorneys general of four US states to evaluate financial or consumer-product algorithms. But O'Neil says she loses potential clients because companies want to maintain plausible deniability and don't want to know whether their AI harms people.

Earlier this year, Orcaa audited an algorithm used by HireVue to analyze people's faces during job interviews. A company press release claimed the audit found no accuracy or bias issues, but the audit did not attempt to evaluate the system's code, training data, or performance for different groups of people. Critics said HireVue's characterization of the audit was misleading and disingenuous. Shortly before releasing the audit, HireVue said it would stop using AI in video job interviews.

O'Neil thinks audits can be useful, but says in some respects it's too early to adopt the approach prescribed by the AJL, in part because there are no standards for audits and we don't fully understand the ways in which AI harms people. Instead, O'Neil advocates a different approach: algorithmic impact assessments.

While an audit might evaluate the outputs of an AI model to see whether, say, it treats men differently than women, an impact assessment can focus more on how an algorithm was designed, who could be harmed, and who is responsible if things go wrong. In Canada, businesses must assess the risk to individuals and communities of deploying an algorithm; in the United States, assessments are being developed to determine when AI is low or high risk and to quantify how much people trust AI.

The idea of measuring impact and potential harm dates back to the 1970s National Environmental Policy Act, which led to the creation of environmental impact statements. Those reports take into account factors ranging from pollution to the potential discovery of ancient artifacts; similarly, impact assessments for algorithms would consider a broad range of factors.
