Country Case Study: USA

Background

In the US, the use of advanced analytics and artificial intelligence (including both static and machine-learning algorithms) is enabling automated decision-making across an expanding set of domains. Such tools make it possible to bring increasing volumes of disparate data to bear on decisions and to perform tasks that are difficult or cost-prohibitive for humans to do directly (e.g., real-time analytics of many streams of video data simultaneously). State actors also use, or may soon use, such algorithms in the justice system as well as in administration and regulation. Private companies use algorithms to decide, for example, who will receive loans or insurance, what rates or premiums applicants will pay, and who can attend universities or be hired for jobs.

In many cases, the goal of applying algorithms to decisions that would otherwise be made by more subjective means is to advance fairness. However, many algorithms have been criticized for replicating, magnifying, or even introducing unjustifiable biases on the basis of race, gender, religion, or other unacceptable grounds.1 Likewise, algorithmic errors that have seriously affected individuals’ lives fuel concerns about, and fears of, the automation of judgment.2 Algorithmic decisions have come under public, policy, and legal scrutiny in large part because of their transparency deficit and their manipulability.3 The transparency deficit is a result of both legal and practical factors. Companies deploying algorithms often enjoy legal protections for secrecy, designating the details of how an algorithm functions as a trade secret critical to their business success. And even without legal barriers to openness, machine-learning algorithms are famously “black boxes,” with the details of their decision-making obscure even to their developers.

Even if automated decisions are superior to human decisions on average, bias and error in decisions that have serious and long-lasting consequences for individuals are spurring two sets of counterreactions. Initial evidence suggests that substantial portions of the public perceive algorithmic decisions that affect important individual interests as unfair.4 Thus, algorithmic decisions may be – and, in some specific cases, already have been – challenged in court. Many existing legal challenges to automated decisions focus on the aforementioned problems, treating the non-transparent and unexplainable nature of the algorithm as violative of due process or related legal principles.5 How common and pervasive such challenges become will depend crucially on public perceptions of the fairness of algorithms and on the public’s willingness to resort to the justice system for redress. The future course of litigation, in turn, will likely exert a significant influence on the development and adoption of advanced analytic and AI technologies by both the private and public sectors.

The work proposed under this effort will examine cases in the U.S. along two tracks, with exchange and consultation between them.

Case Study: RAND
“Suing the Algorithm”

The ongoing RAND research project “Suing the Algorithm” seeks to understand and assess the legal vulnerabilities of algorithms, and it proceeds along two broad lines of inquiry.

First, the project will investigate people’s perceptions of the fairness of algorithmic decision-making, and their likelihood of legally challenging such decisions compared to the same decisions made by humans. For this purpose, we will conduct a survey experiment using scenarios based on one or more specific instances in which algorithmic decision-making is being used or introduced. Scenarios may include the use of algorithms to determine benefits eligibility, screen job applicants, or determine an individual’s “risk” with regard to some outcome. The survey experiment will also seek to identify the sources of perceptions of unfairness and of the inclination to sue, investigating which of the aspects noted above (i.e., bias, error, non-transparency, manipulability, or other sources) are most relevant in people’s assessments.
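As a rough illustration of the between-subjects design described above (and emphatically not the fielded instrument), the sketch below randomly assigns hypothetical respondents to read the same decision scenario attributed either to an algorithm or to a human decision-maker, then compares the two arms on rated fairness. The column names, response scales, and placeholder outcomes are assumptions introduced purely for illustration.

```python
# Minimal sketch of the survey-experiment design: random assignment of
# respondents to an "algorithm" vs. "human" decision-maker condition across
# several scenario domains, followed by a simple comparison of fairness
# ratings between arms. Outcomes here are random placeholders, NOT study data.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(seed=0)
n = 1_000

respondents = pd.DataFrame({
    "respondent_id": np.arange(n),
    # Between-subjects randomization of the decision-maker attribution
    "condition": rng.choice(["algorithm", "human"], size=n),
    # Scenario domain (benefits eligibility, job screening, risk scoring)
    "scenario": rng.choice(["benefits", "hiring", "risk_score"], size=n),
})

# Placeholder outcomes standing in for survey responses
# (e.g., a 1-7 fairness rating and a yes/no willingness to challenge in court).
respondents["fairness"] = rng.integers(1, 8, size=n)
respondents["would_sue"] = rng.integers(0, 2, size=n)

# Core experimental contrast: do fairness ratings differ by decision-maker?
alg = respondents.loc[respondents["condition"] == "algorithm", "fairness"]
hum = respondents.loc[respondents["condition"] == "human", "fairness"]
t_stat, p_value = stats.ttest_ind(alg, hum)
print(f"Mean fairness, algorithm vs. human: {alg.mean():.2f} vs. {hum.mean():.2f} (p = {p_value:.3f})")

# The same contrast broken out by scenario shows where any "algorithm penalty"
# in perceived fairness is largest.
print(respondents.groupby(["scenario", "condition"])["fairness"].mean().unstack("condition"))
```

In the actual study, attributes such as bias, error, transparency, and manipulability would be varied or probed directly to identify which of them drive perceived unfairness and the stated willingness to sue.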

Second, we will seek to understand the market and technological response to likely legal challenges to algorithmic decision-making. Could technological innovation provide solutions to the problematic properties of algorithms that contribute to perceptions of unfairness and create legal vulnerabilities? For instance, if court decisions foreclose algorithmic decisions in some domain because a transparency deficit offends due process principles, innovations in explainable AI (XAI) may remedy the problem and allow for continued use of algorithms in such domains. Similarly, if manipulability concerns prompt close regulatory scrutiny in some domains, innovations that offer greater control over harmful manipulation may address concerns that would otherwise lead to regulation. On the other hand, technical solutions to the transparency deficit or to manipulability may exacerbate the other problematic feature of algorithmic decisions: although transparency can sometimes also address manipulability concerns,6 there may be some trade-off between transparency and non-manipulability, in that transparent systems are easier to hijack. To address these questions, we will seek out unstructured interviews with representatives of tech companies working on algorithms similar to those addressed in the survey experiment, as well as with subject matter experts.

Addressing the AI FORA research questions, the research partners RAND and Arizona State University will interact in developing the U.S. country case study. In consultation with the AI FORA project leadership and the national partner, and subject to project resources, RAND will perform the following tasks pertaining to the AI application(s) for social assessment studied in the above-mentioned ongoing RAND project:

  • Provide AI FORA access to project materials, data, and literature, to enable AI FORA to answer key questions for the US case study
  • Consistent with limits imposed by human-subjects protections and permissions for the project, provide AI FORA the underlying data collected through the survey experiment and interviews conducted for the project.
  • Identify data sources which may be useful for document and discourse analysis
  • Help identify relevant literatures on the ethical and societal implications of AI decision-making in the US
  • Identify potential interview subjects (e.g., stakeholders, subject matter experts, industry representatives)
  • Participate in workshops organized by AI FORA (or, if not feasible, suggest other participants)

The prime contractors desire that RAND use the funding received under AI FORA to engage appropriate RAND expertise in research design, oversight, quality assurance, analysis, and presentation. The prime contractors will provide additional funding for background research, identification of potential interview subjects or workshop attendees, coding of workshop/interview outputs, workshop costs, travel costs, and survey administration, in a manner to be agreed upon by both RAND and the AI FORA prime contractors.

Case Study: Arizona State University
How AI can both amplify and mitigate bias in the provisioning of K-12 education

Arizona State University will leverage RAND data analytics, literature, and resources to support an existing vein of research on the provision of K-12 education in the state of Arizona. Previous analysis conducted by the ASU research team using publicly available 2016 Public Use Microdata Sample (PUMS) data from the US Census Bureau revealed that households with children, the primary consumers of public K-12 educational services, comprise only 36 percent of all Arizona households. Only one-third of adults in households with children have a postsecondary degree (an associate’s degree or higher), and the median age of adults in these households is 37 years. This stands in stark contrast to the 64 percent of households without children, 36 percent of which have a postsecondary degree, with a median adult age of 56 years.

Thus, for every Arizona household with children in the K-12 education system, there are almost two households without children that can advocate and vote for education-related policy and provision whose outcomes they will not directly experience. Given that households with children in the K-12 system are generally younger, less educated, and have lower median annual incomes than households without children, there is significant potential for the voices of the households most affected by an AI algorithm determining educational provision to be silenced or marginalized by the prevailing discourse about such an algorithm’s potential ramifications. Other biases emerge across the geographic regions of the state defined by Public Use Microdata Areas (PUMAs). Southwestern Arizona and Pinal County, the two regions with the highest percentages of households with children (73 and 69 percent, respectively), are also among the three regions with the lowest postsecondary attainment among adults aged 25-39. At the other end of the spectrum, Tempe and Scottsdale are two of the regions with the lowest percentages of households with children (35 and 36 percent, respectively), and over half of their adults aged 25-39 have some postsecondary attainment.
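The household-level figures above can, in principle, be reproduced from the public PUMS files. The sketch below shows one way to do so with the 2016 Arizona household and person files; the file names, the use of unweighted counts, and the variable codes (HUPAC for presence of children, SCHL ≥ 20 for an associate’s degree or higher) are assumptions drawn from the ACS PUMS data dictionary and should be verified against the vintage actually used.

```python
# Illustrative reproduction of the household statistics cited above from
# 2016 ACS PUMS microdata for Arizona. File names and variable codes are
# assumed (check the PUMS data dictionary); estimates are unweighted for
# brevity, whereas published figures would apply the WGTP/PWGTP weights.
import pandas as pd

hh = pd.read_csv("ss16haz.csv", usecols=["SERIALNO", "PUMA", "HUPAC", "HINCP"])
pp = pd.read_csv("ss16paz.csv", usecols=["SERIALNO", "AGEP", "SCHL"])

# Drop vacant units / group quarters, where HUPAC is blank.
hh = hh.dropna(subset=["HUPAC"])
# HUPAC codes 1-3 indicate own children under 18 in the household; 4 means none.
hh["has_children"] = hh["HUPAC"].isin([1, 2, 3])

# Share of Arizona households with children (roughly the 36 percent cited above,
# implying almost two households without children for every one with children).
print(f"Households with children: {hh['has_children'].mean():.0%}")

# Median household income by household type.
print(hh.groupby("has_children")["HINCP"].median())

# Postsecondary attainment (associate's degree or higher: SCHL >= 20)
# among adults aged 25-39, by PUMA and household type.
adults = pp[(pp["AGEP"] >= 25) & (pp["AGEP"] <= 39)].copy()
adults["postsec"] = adults["SCHL"] >= 20
by_puma = (
    adults.merge(hh[["SERIALNO", "PUMA", "has_children"]], on="SERIALNO")
          .groupby(["PUMA", "has_children"])["postsec"]
          .mean()
          .unstack("has_children")
)
print(by_puma.head())
```

Aggregating by PUMA in this way is what surfaces the regional contrasts noted above (e.g., Southwestern Arizona and Pinal County versus Tempe and Scottsdale).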

In light of the findings described above and any additional insight from RAND analysis, Arizona State University will conduct a series of stakeholder engagement workshops on the specific potential for AI algorithms to exacerbate bias when used to determine government provision of K-12 educational services in the state of Arizona. In collaboration with RAND, the ASU team will invite educational researchers, policymakers, administrators, community organizations, and others. The workshops will have two general goals:

  1. Reveal the aforementioned bias in state representation toward older, higher-income households without children, as well as any additional insight from RAND resources and analytics.
  2. Conduct an in-depth, facilitated conversation about the factors that would be necessary for an AI algorithm to mitigate, rather than exacerbate, such bias(es).

The collection of quantitative and qualitative data from workshop participants will be approved by the ASU Institutional Review Board (IRB) prior to any data collection, both to protect participants’ rights and minimize risks to them and to allow the researchers to share findings with the broader academic and policy audience.

1 E.g., Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, “Machine Bias,” ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing; Latanya Sweeney, “Discrimination in Online Ad Delivery,” Communications of the ACM, Vol. 56, No. 5 (2013), pp. 44-54.
2 E.g., Leslie Newell Peacock, “Legal Aid sues DHS again over algorithm denial of benefits to disabled: Update with DHS comment,” Arkansas Times (Jan. 27, 2017), https://www.arktimes.com/ArkansasBlog/archives/2017/01/26/legal-aid-sues-dhs-again-over-algorithm-denial-of-benefits-to-disabled
3 See Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015).
4 Aaron Smith, “Attitudes Toward Algorithmic Decision-Making,” Pew Research Center (Nov. 16, 2018), https://www.pewresearch.org/internet/2018/11/16/attitudes-toward-algorithmic-decision-making/
5 For instance, in a case that the US Supreme Court recently declined to hear, a man sentenced to prison argued that a court’s reliance at sentencing on COMPAS, a proprietary risk assessment instrument, violated his constitutional right to due process “because the proprietary nature of COMPAS prevents a defendant from challenging the accuracy and scientific validity of the risk assessment.” Loomis v. Wisconsin, petition for certiorari denied June 26, 2017. A lower-stakes example is presented by a challenge to Zillow’s house-pricing algorithm: https://www.reuters.com/article/us-zillow-group-lawsuit/zillow-wins-dismissal-of-zestimate-lawsuit-in-u-s-idUSKCN1B32RN
6 “Kaspersky Lab to open software to review, says nothing to hide,” Reuters (Oct. 23, 2017), https://www.reuters.com/article/us-usa-security-kaspersky-russia/kaspersky-lab-to-open-software-to-review-says-nothing-to-hide-idUSKBN1CS0Y1