Winter 2026 Colloquia
Monday, January 26, 2026
Beyond "students-in-schools": A Bayesian partial pooling approach for applied research
ABSTRACT: Modern social science research often analyzes many related effects—platform-specific associations, item-specific weights, subgroup contrasts—where “run a separate regression for each effect” is noisy, but “estimate one single model for all” can hide meaningful heterogeneity. Partial pooling offers a principled middle ground by sharing information across related units and shrinking extreme estimates toward a common mean in proportion to how much data support them. This talk gives an applied introduction to Bayesian partial pooling and shows why it is often the most convenient way to fit and interpret these models in the kind of setting where each observation contributes information to multiple “higher-level” units yet key outcomes are measured once per observation. Bayesian modeling provides a particularly coherent and practical workflow when the hierarchy lives in a batch of coefficients rather than in neatly nested outcomes. Two empirical applications illustrate this approach: a cross-platform analysis that partial-pools platform-specific effects on affective polarization in a moderated mediation framework, and a certainty-weighted, belief-based attitude model that imposes a structured “item importance × universal mechanism” constraint and enables counterfactual simulations.
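The shrinkage logic described in the abstract can be sketched in a few lines. The platform names, variance values, and data below are hypothetical, and a full Bayesian treatment would fit the hierarchical model with a probabilistic-programming tool (e.g., Stan or PyMC) rather than this closed-form, precision-weighted approximation; the sketch only shows the core intuition that a group's estimate is pulled toward the grand mean in proportion to how little data supports it.

```python
# Minimal illustration of partial pooling via precision-weighted shrinkage.
# Groups, data, and variance parameters are hypothetical.

from statistics import mean

def partial_pool(groups, tau2, sigma2):
    """Shrink each group's sample mean toward the grand mean.

    tau2   : between-group variance (how much groups truly differ)
    sigma2 : within-group (observation-level) variance
    """
    grand = mean(x for xs in groups.values() for x in xs)
    pooled = {}
    for name, xs in groups.items():
        n = len(xs)
        # Weight on the group's own data: grows with n, shrinks with noise.
        w = tau2 / (tau2 + sigma2 / n)
        pooled[name] = w * mean(xs) + (1 - w) * grand
    return pooled

# Hypothetical platform-specific effects with unequal sample sizes.
groups = {
    "platform_a": [0.9, 1.1, 1.0, 0.8, 1.2, 1.0],  # well-measured
    "platform_b": [2.5],                            # one noisy observation
}
est = partial_pool(groups, tau2=0.5, sigma2=1.0)
# platform_b's extreme estimate is pulled strongly toward the grand mean,
# while platform_a, supported by more data, barely moves.
```

Note how the weight `w` is exactly the "in proportion to how much data support them" idea: with six observations, platform_a keeps 75% of its own mean; with one, platform_b keeps only a third.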
Gabriel Li, Assistant Professor of Communication Studies, Chapman University
Monday, January 26, 2026
The Fragmented Screen: How Much, in What Ways, and Why
ABSTRACT: Smartphones support multiple aspects of our daily lives, ranging from work to social interaction. As people navigate this digital environment and quickly move through a wide variety of media, their screen use is becoming more dynamic. How fragmented has our digital life become? In this talk, I will address screen activity fragmentation by asking: how much, in what ways, and why? Building on data from the Human Screenome Project, I examine how individuals' fragmentation metrics change over time and in relation to within-person change in attentional states. Furthermore, what is the psychological foundation of these fragmented behaviors? Leveraging psychological experiments in the dual-tasking paradigm, I will introduce the concepts of attentional inertia and attentional residue.
Rinseo Park, PhD Candidate, Stanford University Department of Communication
Monday, February 23, 2026
When AI Enters High-Stakes Domains: Human Involvement, Responsibility, and Cultural Expectations
ABSTRACT: AI improves efficiency and decision-making accuracy. However, humans often exhibit “AI aversion,” the tendency to distrust products and the human decision-makers associated with them once people learn that AI was involved. Prior research suggests that AI aversion varies by domain and is stronger in high-stakes contexts. Understanding AI aversion in such domains has important theoretical implications for models of trust and authority, as well as practical implications for the integration of AI into medicine, science, and other institutions that rely on public legitimacy. This talk presents three studies. Study 1 examines AI-assisted diagnosis and shows that mentioning AI involvement reduces trust in physicians and willingness to seek care, even when human oversight remains central. Study 2 focuses on academic research and finds that AI use, especially in theoretical and conceptual tasks, lowers perceived credibility and quality. Penalties are stronger when AI replaces core intellectual labor than when it is used for methodological tasks such as coding. Study 3 is an ongoing cross-national conjoint experiment investigating public perceptions of AI healthcare products in the United States and China. While product-level attributes such as regulatory approval increase acceptance in both countries, baseline demand for AI health products diverges sharply between them. Across studies, personal experience with AI attenuates aversion, suggesting that resistance reflects concerns about responsibility and authorship rather than simple technological unfamiliarity. Together, these findings indicate that AI aversion intensifies when AI is perceived to encroach upon domains associated with human accountability and epistemic authority, underscoring the institutional foundations of public trust in the age of AI.
Catherine Chen, Assistant Professor in Political Communication at Louisiana State University
Monday, March 2, 2026
A Technological Medium of Communication, Connecting Installations Across the Globe
ABSTRACT: The phrase “the next Silicon Valley” is frequently invoked to announce the rise of places as varied as Austin and Shenzhen to the forefront of the global economy, with branded nicknames like “Silicon Welly” in Wellington, “Silicon Mountain” in Buea, “Silicon Plateau” in Bangalore, “Silicon Docks” in Dublin, and “Chilecon Valley” in Santiago all signaling both the symbolic power of Silicon Valley and the promise of replicating its success. In the global rush to construct hubs of post-industrial innovation, it has become common wisdom that the history of Silicon Valley represents a universal model of how technology can transform national and global economies in the twenty-first century. But this assumption warrants scrutiny. By examining the early history of Silicon Valley alongside three other major mid-twentieth-century projects in technological and spatial investment that emerged during the same period, this paper investigates their shared political and spatial foundations, their key differences, and the ways these large-scale built environments reshaped global scientific production. It asks what these ambitious undertakings were intended to signal at their inception and how they have since influenced the social and labor conditions that define today’s innovation economies.
Jeanette Estruth, Assistant Professor of History, Santa Clara University
Monday, March 9, 2026
Experimental Designs for Learning Effective Targeting Rules
ABSTRACT: I discuss what experimental designs are needed when the goal is to learn an effective targeting rule rather than estimate a single average treatment effect. In one application, we use large-scale Facebook Messenger experiments in Kenya and Nigeria to study how personalized communication can shift behaviors. We recruit unvaccinated users via targeted ads and elicit their primary vaccine concern, partitioning respondents into pre-specified concern “regions.” Within each region, we run an adaptive learning phase—using top-two Thompson sampling—to identify which messages best address that concern. Because adaptive learning creates dependencies that complicate standard evaluation, we then implement a separate, non-adaptive evaluation phase that randomizes new participants to a control, a standard PSA, or the optimized concern-addressing messaging, measuring vaccine intentions and willingness. A parallel “infodemic” application evaluates a large factorial set of respondent- and headline-level interventions designed to reduce misinformation sharing, with outcomes defined over true/false sharing behavior. Across settings, the results highlight both the promise of personalized messaging and the design tradeoffs required to learn policies reliably under limited budgets and heterogeneous populations.
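The adaptive learning phase mentioned above can be sketched as a top-two Thompson sampling loop over messages with binary outcomes. The message arms, response rates, and the simplified challenger rule below are all hypothetical illustrations, not the study's actual design (which involves concern regions and a separate evaluation phase); the sketch only shows how the algorithm splits allocation between the current leader and a resampled runner-up.

```python
# Minimal sketch of top-two Thompson sampling with Bernoulli outcomes.
# Arms, true rates, and the challenger rule are illustrative assumptions.

import random

def top_two_thompson(true_rates, rounds=2000, beta=0.5, rng=None):
    """Adaptively allocate rounds across arms; return Beta posterior counts.

    Each round: draw one Thompson sample per arm and find the leader.
    With probability beta, play the leader; otherwise redraw and play the
    best arm *excluding* the leader (a simplified "top-two" challenger
    rule that forces exploration of the runner-up).
    """
    rng = rng or random.Random(0)
    k = len(true_rates)
    succ = [1] * k  # Beta(1, 1) priors
    fail = [1] * k
    for _ in range(rounds):
        draws = [rng.betavariate(succ[i], fail[i]) for i in range(k)]
        best = max(range(k), key=lambda i: draws[i])
        arm = best
        if rng.random() > beta:  # explore the challenger
            redraw = [rng.betavariate(succ[i], fail[i]) for i in range(k)]
            arm = max((i for i in range(k) if i != best),
                      key=lambda i: redraw[i])
        # Simulate a (hypothetical) binary participant response.
        if rng.random() < true_rates[arm]:
            succ[arm] += 1
        else:
            fail[arm] += 1
    return succ, fail

# Three hypothetical messages; the third truly works best.
succ, fail = top_two_thompson([0.30, 0.35, 0.50])
rates = [succ[i] / (succ[i] + fail[i]) for i in range(3)]
```

The adaptivity is also why the abstract describes a separate non-adaptive evaluation phase: because each round's allocation depends on earlier outcomes, the collected data are not i.i.d., and naive comparisons on them are biased.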
Molly Offer-Westort, Assistant Professor of Political Science, University of Chicago