Publications
Simone Zhang and Rebecca Johnson. 2023. “Hierarchies in the Decentralized Welfare State: Prioritization in the Housing Choice Voucher Program.” American Sociological Review 88(1), 114–153.
[Paper] [Online Supplement] [Replication Package]
Abstract
Social provision in the United States is highly decentralized. Significant federal and state funding flows to local organizational actors, who are granted discretion over how to allocate resources to people in need. In welfare states where many programs are underfunded and decoupled from local need, how does decentralization shape who gets what? This article identifies forces that shape how local actors classify help seekers when they ration scarce resources, focusing on the case of prioritization in the Housing Choice Voucher Program. We use network methods to represent and analyze 1,398 local prioritization policies. Our results reveal two patterns that challenge expectations from past literature. First, we observe classificatory restraint, or many organizations choosing not to draw fine distinctions between applicants to prioritize. Second, when organizations do institute priority categories, policies often advantage applicants with formal institutional ties to the local community. Interviews with officials in turn reveal how prioritization schemes reflect housing authorities’ position within a matrix of intra-organizational, inter-organizational, and vertical forces that structure the meaning and cost of classifying help seekers. These findings reveal how local organizations’ use of classification to solve on-the-ground organizational problems and manage scarce resources can generate additional forms of exclusion.
Rebecca A. Johnson and Simone Zhang. 2022. “What is the Bureaucratic Counterfactual? Categorical versus Algorithmic Prioritization in U.S. Social Policy.” In 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22), June 21–24, 2022, Seoul, Republic of Korea.
[Paper]
Abstract
There is growing concern about governments’ use of algorithms to make high-stakes decisions. While an early wave of research focused on algorithms that predict risk to allocate punishment and suspicion, a newer wave of research studies algorithms that predict “need” or “benefit” to target beneficial resources, such as ranking those experiencing homelessness by their need for housing. The present paper argues that existing research on the role of algorithms in social policy could benefit from a counterfactual perspective that asks: given that a social service bureaucracy needs to make some decision about whom to help, what status quo prioritization method would algorithms replace? While a large body of research contrasts human versus algorithmic decision-making, social service bureaucracies do not target help by giving street-level bureaucrats full discretion. Instead, they primarily target help through pre-algorithmic, rule-based methods. In this paper, we outline social policy’s current status quo method, categorical prioritization, in which decision-makers manually (1) decide which attributes of help seekers should confer priority, (2) simplify any continuous measures of need into categories (e.g., household income falls below a threshold), and (3) choose the decision rules that map categories to priority levels. We draw on novel data and quantitative and qualitative social science methods to outline categorical prioritization in two case studies of United States social policy: waitlists for scarce housing vouchers and K-12 school finance formulas. We outline three main differences between categorical and algorithmic prioritization: whether the basis for prioritization is formalized; what role power plays in prioritization; and whether decision rules for priority are manually chosen or inductively derived from a predictive model. Concluding, we show how the counterfactual perspective underscores both the understudied costs of categorical prioritization in social policy and the understudied potential of predictive algorithms to narrow inequalities.
Simone Zhang, Rebecca A. Johnson, John Novembre, Edward Freeland, and Dalton Conley. 2021. “Public attitudes toward genetic risk scoring in medicine and beyond.” Social Science & Medicine 274:113796.
[Paper] [Online Supplement] [Replication Package]
Abstract
Advances in genomics research have led to the development of polygenic risk scores, which numerically summarize genetic predispositions for a wide array of human outcomes. Initially developed to characterize disease risk, polygenic risk scores can now be calculated for many non-disease traits and social outcomes, with the potential to be used not only in health care but also other institutional domains. In this study, we draw on a nationally representative survey of U.S. adults to examine three sets of lay attitudes toward the deployment of genetic risk scores in a variety of medical and non-medical domains: (1) abstract beliefs about whether people should be judged on the basis of genetic predispositions; (2) concrete attitudes about whether various institutions should be permitted to use genetic information; and (3) personal willingness to provide genetic information to various institutions. Results demonstrate two striking differences across these three sets of attitudes. First, despite almost universal agreement that people should not be judged based on genetics, there is support, albeit varied, for institutions being permitted to use genetic information, with support highest for disease outcomes and in reproductive decision-making. We further find significant variation in personal willingness to provide such information, with a majority of respondents expressing willingness to provide information to health care providers and relative finder services, but less than a quarter expressing willingness to do so for an array of other institutions and services. Second, while there are no demographic differences in respondents’ abstract beliefs about judging based on genetics, demographic differences emerge in permissibility ratings and personal willingness. Our results should inform debates about the deployment of polygenic scores in domains within and beyond medicine.
Dalton Conley and Simone Zhang. 2018. “The promise of genes for understanding cause and effect.” Proceedings of the National Academy of Sciences 115(22): 5626–5628.
[Paper]
Abstract
This article discusses methodological challenges associated with causal inference using genetic instrumental variables.
Work in Progress
Tipping the Balance: Predictive Algorithms and Institutional Decision-Making in Context
[Working Paper]
Abstract
Predictive algorithms inform institutional decisions about individuals that shape the distribution of benefits and burdens in society. How do these technologies influence decision-making practices? This article argues that predictive algorithms can disrupt the balance of multiple goals central to many institutional decisions by requiring specific, measurable outcomes to model. When incorporated into deliberations among decision-making actors, algorithms add a voice that endorses a narrowed set of objectives, anchoring attention and empowering actors whose perspectives align with the algorithm’s own. I develop this argument through the case of pretrial risk assessment algorithms. Using court hearing transcripts and administrative data from a county that implemented such a tool in a randomized controlled trial, I show that risk assessments heighten concern about an adverse outcome they model (missed court dates) and serve as more effective resources for prosecutors seeking harsher pretrial conditions than for defense attorneys. These findings suggest that predictive algorithms can skew the balance of power and objectives in decision-making.
Generative AI Meets Open-Ended Survey Responses: Participant Use of AI and Homogenization (with Janet Xu and AJ Alvero). Conditionally accepted at Sociological Methods & Research.
[Working Paper]
Abstract
The growing popularity of generative AI tools presents new challenges for data quality in online surveys and experiments. This study examines participants’ use of large language models to answer open-ended survey questions and describes empirical tendencies in human vs. LLM-generated text responses. In an original survey of participants recruited from a popular online platform for sourcing social science research subjects, 34% reported using LLMs to help them answer open-ended survey questions. Simulations comparing human-written responses from three pre-ChatGPT studies with LLM-generated text reveal that LLM responses are more homogeneous and positive, particularly when they describe social groups in sensitive questions. These homogenization effects may mask important underlying social variation in attitudes and beliefs among human subjects, raising concerns about data validity. Our findings shed light on the scope and potential consequences of participants’ LLM use in online research.
Predictive Algorithms and Perceptions of Fairness: Parent Attitudes Toward Algorithmic Resource Allocation in K-12 Education (with Rebecca Johnson). Conditionally accepted at Sociological Science.
Abstract
Institutions increasingly use predictive algorithms to allocate scarce resources, sparking scholarly concern about their potential to reinforce and normalize discrimination and inequality. While research has examined how elite discourses position algorithms as fair and legitimate, we know less about how the public perceives them. Research on public trust in science and other important institutions predicts potential cleavages in people’s views. We use a vignette-based survey experiment to compare public perceptions of algorithms relative to several traditional allocation methods: administrative rules, lotteries, petitions from potential beneficiaries, and professional judgment. Fielded on a nationally representative sample of over 4,300 U.S. parents, the vignette focuses on the case of school districts using algorithms to allocate scarce tutors. Overall, we find that most parents view algorithms as fairer than traditional methods, especially lotteries, which parents disliked for imprecisely targeting resources. However, significant divides emerge along socio-economic and political lines. Lower-SES and conservative parents favor the personal knowledge about student need used by counselors or parents, while higher-SES and liberal parents prefer the impersonal logic of algorithms. We further find that informing participants about algorithmic bias significantly decreases the perceived fairness of algorithms, but the largest drops are among groups, like low-income parents, who would be most directly impacted by the bias we describe. Overall, our findings suggest that broader forces like trust in science guide perceptions of algorithms and map out the contours of public acceptance of algorithms that could shape their adoption by key social institutions.
Social Mechanisms of Performative Prediction
Abstract
Predictions of social outcomes raise concerns about performativity: the potential for predictions to influence the world, including the very outcomes they aim to forecast. Existing technical work has formalized these feedback dynamics and proposed optimization approaches in such settings. This article bridges these efforts with social science scholarship pointing to the diverse social mechanisms through which predictions can have performative effects. Focusing on prediction in policy settings and at evaluation interfaces, I present a taxonomy that disaggregates ways that decision-makers, decision subjects, and societal third parties react to predictions. Within this taxonomy, I distinguish between responses to specific predictions and responses to the broader characteristics of a prediction system. Elucidating these social mechanisms can broaden our understanding of how predictions intervene in social systems and inform technical and non-technical strategies for the responsible deployment of prediction models with performative potential.
Why’s the Power Out? Organizational Responsiveness to Everyday Resident Requests, Questions, and Complaints
Abstract
Many everyday services are increasingly structured by resident-initiated requests and complaints. Concerns that this development might exacerbate urban inequality have prompted research on where service requests originate. Less well understood is whether local service providers vary in their responsiveness to different requests. In this paper, I address two key questions: to whom are local service providers responsive? And what styles of claims-making are more likely to elicit responses? I analyze the case of public customer service interactions on Twitter between residents and local governments, electric utilities, broadband Internet service providers, and cell phone service providers. Using supervised machine learning and matching, I find that these public interactions between residents and service providers do not replicate patterns of bias based on demographic cues commonly observed in other settings. They may, however, disadvantage groups that are more collectively under-served, as organizations are more responsive to claims framed in terms of individualized needs and preferences rather than collective ones.
How Local Discretion Shapes Racial and Gender Inequality: The Case of Small Business Relief Funding (with Elizabeth Bell, Heather Kappes, Crystal C. Hall, Rebecca Johnson, and Miles Williams)
Abstract
Organizations that distribute limited resources, such as government agencies and non-profit organizations, face conflicts over how to select and evaluate recipients. Questions of fair selection were salient as local economic development organizations allocated COVID-19 relief to U.S. small businesses. We leverage observational data, simulations, and quantitative and qualitative survey responses to better understand how local discretion affected inequality in access to relief. Qualitative coding of selection procedures (Study 1) showed that local organizations varied in whom they defined as the beneficiary of relief and the criteria they used to prioritize recipients. To test how selection procedures affected access to help, we used real application data from three cities (Study 2). Simulations showed that when selection procedures used economic criteria to give higher priority to “meritorious” businesses, women- and minority-owned businesses were disadvantaged. However, selection procedures that gave a demographic plus factor to these businesses corrected for these inequalities. Finally, we used a vignette-based experiment to understand whether members of the public perceive these inequalities as fair, finding strong political polarization (Study 3). Together, these studies show how and why local discretion over scarce relief can contribute to racial, ethnic, and gender disparities in access to resources.