Artificial Intelligence: How the Internet’s Gatekeeper Could Affect Your Civil Rights

Lindsay Nako, Director of Litigation & Training

In the modern world, artificial intelligence helps us navigate through a sea of information, curating our online experience. Which song will I want to listen to next? Is an email important or spam? What ads will interest me? Artificial intelligence prevents us from being inundated with irrelevant information – and that raises important questions.

Who determines what is relevant or irrelevant?  And how do they decide?

If an artificial intelligence program delivers me a song by Chopin instead of Lizzo, it might be surprising.  But if an artificial intelligence program delivers me a job ad for administrative assistant and prevents me from seeing one for car mechanic, that could be illegal.  

Artificial intelligence, the science and engineering of making intelligent machines, combines algorithms (unambiguous instructions) with data to perform functions similar to human decision-making. The first generation of artificial intelligence in the 1980s and 1990s could apply rules written by humans to data to create outputs. The second generation in the 2000s could “learn,” meaning that programs could take data and guidance provided by humans, independently identify rules, and then apply those rules to new data to create outputs. The third generation of artificial intelligence, which we are currently in, seeks to incorporate “deep learning.”  Deep learning will permit programs to autonomously learn rules and automatically judge new data to create outputs, without human intervention.
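To make the distinction between the generations concrete, here is a minimal sketch in Python of the first two, using a toy spam-filtering task. The blocked words, example emails, and learning rule are all invented for illustration:

```python
from collections import Counter

# First generation: a human writes the rule; the program merely applies it.
BLOCKED_WORDS = {"prize", "winner", "free"}  # hand-picked by a person

def first_gen_is_spam(email_text: str) -> bool:
    return any(word in BLOCKED_WORDS for word in email_text.lower().split())

# Second generation: humans supply labeled examples, and the program
# derives its own rule from that data.
def second_gen_learn(labeled_emails):
    """labeled_emails: list of (text, is_spam) pairs."""
    spam_words, ham_words = Counter(), Counter()
    for text, is_spam in labeled_emails:
        (spam_words if is_spam else ham_words).update(text.lower().split())
    # The "learned rule": words seen more often in spam than in legitimate mail.
    return {w for w in spam_words if spam_words[w] > ham_words[w]}

learned_rule = second_gen_learn([
    ("claim your free prize now", True),
    ("meeting notes attached", False),
])
print(first_gen_is_spam("you are a winner"))  # True, via the human-written rule
print("free" in learned_rule)                 # True, via the learned rule
```

A third-generation deep learning system would go further still, extracting its own features and rules from raw data without a human curating either.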

AI-based products may screen out vulnerable groups – people of color, women, people with disabilities, older people, members of the LGBTQ community

Over the past few years, products have been emerging that incorporate artificial intelligence to streamline processes for delivering employment and housing advertisements, identifying resumes with relevant experience, interviewing job candidates, evaluating tenant applications, and more.  These products range from first-generation AI, scanning documents for relevant words or background records for criminal convictions, to more advanced models, such as those purporting to use video analysis to identify “trustworthy” or “enthusiastic” candidates.  
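For a sense of how blunt the first-generation screening can be, consider a minimal sketch of a keyword-and-background filter. The required keywords and disqualifying records here are hypothetical:

```python
# A toy first-generation resume screen: scan for required words and
# flag background-report matches. Everything here is hypothetical.
REQUIRED_KEYWORDS = {"python", "sql"}          # chosen by a human recruiter
DISQUALIFYING_RECORDS = {"felony conviction"}  # a blunt background-check rule

def screen_resume(resume_text: str, background_report: str) -> bool:
    text = resume_text.lower()
    has_skills = all(kw in text for kw in REQUIRED_KEYWORDS)
    clean_record = not any(rec in background_report.lower()
                           for rec in DISQUALIFYING_RECORDS)
    return has_skills and clean_record

# A candidate who writes "Postgres" instead of "SQL" is rejected outright;
# the rule has no judgment beyond exact word matching.
print(screen_resume("Python and Postgres experience", "no records"))  # False
```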

Many of these products advertise their use of artificial intelligence as a way to avoid human bias, but minimizing the role of humans does not guarantee equitable results. Artificial intelligence prioritizes user preference, while our civil rights laws prioritize equality of opportunity. In the quest for relevance, AI-based products may screen out vulnerable groups – people of color, women, people with disabilities, older people, members of the LGBTQ community – and run afoul of the very laws they claim to satisfy. Our civil rights laws prohibit targeting advertisements and making employment or housing decisions based on personal characteristics; they likewise prohibit using those characteristics as inputs to AI decision-making.

Reuters and others reported an AI failure by retail giant Amazon in 2018.  After multiple years of building machine-learning computer programs to evaluate resumes and identify top candidates, Amazon reportedly abandoned the project because it could not create a program that was able to evaluate candidates for technical positions without disadvantaging women.  For example, Reuters reported that the system penalized resumes that included the word “women’s” and downgraded graduates of all-women’s colleges.
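A worked toy example shows how this kind of bias can arise. The resumes, outcomes, and scoring rule below are invented for illustration; they are not Amazon's actual data or method:

```python
# A model trained on past hiring outcomes learns word weights; if the
# historical hires skewed male, gendered words pick up negative weight.
from collections import Counter

historical_resumes = [
    ("java systems engineer", True),             # hired
    ("java developer chess club", True),         # hired
    ("women's chess club captain java", False),  # rejected
    ("women's coding society java", False),      # rejected
]

hired, rejected = Counter(), Counter()
for text, was_hired in historical_resumes:
    (hired if was_hired else rejected).update(text.split())

def score(resume: str) -> int:
    # Each word scores +1 if it appeared more among hires, -1 if more
    # among rejections: a crude stand-in for learned weights.
    return sum((hired[w] > rejected[w]) - (hired[w] < rejected[w])
               for w in resume.split())

# "women's" appears only on rejected resumes, so it drags the score down
# even though it says nothing about qualifications.
print(score("java engineer"))          # positive
print(score("women's java engineer"))  # lower, penalized for "women's"
```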

Deep learning will permit programs to autonomously learn rules and automatically judge new data to create outputs, without human intervention.

Or the problem may be as simple as a platform allowing advertisers to select which users see ads for housing, credit, and employment opportunities. A recent lawsuit brought by the ACLU and others challenged Facebook's use of ad targeting based on gender, age, zip code, and other demographic information. The settlement reached last year removes gender, age, and multiple demographic proxies from the ad targeting options. Yet another recently filed case alleges that Facebook continues to discriminate against older and female users by restricting their access to ads for financial services.

At this point, it is impossible to imagine our society without artificial intelligence.  But we also have a shared responsibility to ensure that our principles of fairness, equity, and inclusion remain at the forefront of our collective advancement.  Technology provides an opportunity to generate greater visibility and access for historically disadvantaged communities.  We cannot let that opportunity wither in the black box of artificial intelligence.

Many thanks to Professor Niloufar Salehi, UC Berkeley School of Information; Professor Catherine Albiston, UC Berkeley School of Law; Christine Webber, Cohen Milstein Sellers & Toll; and Galen Sherwin, ACLU Women’s Rights Project for our discussion on “Artificial Intelligence & the Future of Discrimination” at the 2020 Impact Fund Class Action Conference. 
