2020-2021 Fall

Dallas Card, Postdoc at Stanford’s NLP Group

September 14, 2020

“Challenges and Opportunities in Using Text Classification for Social Science Research”

ABSTRACT: Automated text classification is a useful tool in a variety of social science research settings, especially when working with large collections of news articles or social media. However, despite impressive progress in recent years, considerable challenges remain, including the problems of bias, annotation, and meaningful evaluation. In this talk, I will provide a brief history and overview of the state of text classification today (from counting words to contextual embeddings), and discuss these challenges in relation to a variety of recent and ongoing projects, including work on automated coding of open-ended survey responses, identifying stance towards climate change across the ideological spectrum, and the risk of racial bias in hate speech identification.
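As a toy illustration of the "counting words" end of the spectrum the talk surveys (not code from the talk itself), a bag-of-words multinomial Naive Bayes classifier can be sketched in pure Python. The labels and example sentences below are invented for illustration:

```python
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (text, label) pairs. Returns per-class word counts and class counts."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in docs:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Multinomial Naive Bayes with add-one (Laplace) smoothing."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log prior + sum of smoothed log likelihoods
        score = math.log(label_counts[label] / total_docs)
        n_words = sum(word_counts[label].values())
        for w in text.lower().split():
            score += math.log((word_counts[label][w] + 1) / (n_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented training data, loosely echoing the climate-stance example in the abstract.
docs = [
    ("warming is a hoax", "denial"),
    ("climate change is real and urgent", "accept"),
    ("the science on warming is settled", "accept"),
    ("hoax pushed by the media", "denial"),
]
wc, lc = train(docs)
print(classify("climate change is urgent", wc, lc))  # → accept
```

Contextual-embedding approaches replace the raw word counts with learned dense representations, but the classification step is conceptually similar.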


Neil Malhotra, Professor of Political Economy at Stanford University’s GSB

September 21, 2020

“Distributive Politics and Policy Feedback: Evidence from Agricultural Producers”

ABSTRACT: How does participation in government programs affect political attitudes? Evaluating this relationship empirically is crucial to assessing and building theories of policy feedback. Whereas a plethora of studies have investigated how public policies affect political participation, fewer have examined the effects on political attitudes. In this study, we contribute to the policy feedback literature by exploring the relationship between participation in USDA farm aid programs and political attitudes ranging from program support to broader views of government. We contribute to the extant literature by: (1) tying administrative records to an original survey; and (2) studying a policy area that is not a social welfare policy and where most recipients are conservative. We find that program participation can affect views of the particular programs as well as more general attitudes toward government. However, there is often important heterogeneity by political predispositions; those with more negative views toward government are often resistant to feedback effects.


Jane Willenbring, Associate Professor of Geological Sciences at Stanford University

September 28, 2020

“Do attitudes on gender bias/sexual harassment mimic belief in climate change?”

ABSTRACT: In this talk, I will give an introduction to research on the prevalence of gender bias/discrimination and sexual harassment in STEM fields. Although this large body of work on bias and harassment is (mostly) reproducible, many scientists simply don’t believe it (Handley et al., 2015). It is not yet known how best to change such attitudes and the resulting behavior. I posit that some similarities could be drawn between the science of science communication and gender bias research, and that this area is rife with new opportunities to study human attitudes and behavior.


Eran Amsalem, Department of Communication at the Hebrew University in Israel

October 5, 2020

“Personality and the Policy Positions of Politicians”

ABSTRACT: Politicians’ support or opposition to concrete policies is uniquely consequential for policymaking, public opinion, and a host of other societal outcomes. Explaining their policy positions is therefore a major research agenda in political science. Here, we evaluate the role of politicians’ personality traits, measured with the Big Five typology, in shaping how liberal or conservative their economic and social policy positions are. While existing research establishes this link among non-elites, it is far from obvious that the same holds for politicians, who have systematically different personality profiles, and whose positions are constrained by party sorting. Using an in-person study of 895 incumbents in five countries who completed personality questionnaires and provided detailed issue positions, we find that Openness to Experience is a consistently strong and significant predictor of politicians’ positions, while Conscientiousness shows a null effect. We discuss implications for the role of elites’ individual characteristics in policymaking.


Jiang Bian, PhD Student in Management Science and Engineering at Stanford University

October 12, 2020

“Good to go first? Position order effects in expert evaluation of early-stage ventures”

ABSTRACT: There is considerable anxiety and conflicting advice concerning the benefits of presenting/being evaluated first. In this paper, we investigate how randomly assigned expert evaluators vary in their evaluations of proposals based on their prior experience evaluating similar proposals and position order. The context for this study is the premier innovation fund competition in Beijing, China, in which: (a) the prize money at stake is economically meaningful; (b) evaluators are randomly assigned to evaluate proposals individually, with scores then averaged for the purpose of selecting grantees; and (c) the evaluations observed in this setting result in funding that is non-zero-sum, as funds are doled out based on a proposal’s score rank until all available funds for a particular year are allocated. We also obtained and coded granular measures of the firms’ pre-proposal performance metrics used by the evaluators in their judgments. As this written material was the sole basis of evaluation used by the experts – in contrast with the persuasiveness of oral or live sports/musical presentations, which may vary considerably across performances and may be impacted by (even random) performance order – we have a compelling context to assess evaluation order effects. We can and do also estimate firm and evaluator fixed effects models that yield consistent findings. Overall, we find that an applicant that is evaluated first needs net profits in the top fifth percentile to merely equal the evaluation of an applicant in the bottom first percentile that is not evaluated first.


Melanie Green, Professor of Communication at the University at Buffalo

October 19, 2020

“Stories for Social Good”

ABSTRACT: Much of the power of stories comes from their emotional impact. Emotional shifts can occur throughout a story; a narrative may change from a negative to positive emotional tone or vice versa. This emotional flow can influence narrative outcomes such as persuasion. In particular, narratives that show a positive trajectory (restorative narratives) may be effective in helping encourage attention to difficult topics and promoting prosocial behavior. In this talk, I will present a set of studies that have investigated the outcomes of restorative narratives, stories of resilience and recovery. I will also present research examining the potential for stories to influence perceptions of societal responsibility for helping.


Steven Kull, Founder and President of Voice of the People, Member of PPRG

October 26, 2020

“Public Consultation: Answering the Public’s Demand for a Greater Voice, Finding Bipartisan Common Ground”

ABSTRACT: First, I will present the findings of an extensive study that sought to determine the roots of public dissatisfaction with the US government. It found that the central critique was that elected officials have violated a perceived social contract to serve the common good, guided by the people, responding instead to demands of campaign donors and special interests. A key antidote that receives overwhelming public support is for elected officials to systematically consult their constituents on policy issues, with voters indicating a readiness to support candidates, even crossing party lines, who commit to doing so. Second, I will present the findings of a series of more than 30 in-depth public consultation surveys that sought to give voters the means to weigh in on key Congressional decisions by going through an online process called a ‘policymaking simulation,’ with special emphasis on the question of whether the public is more apt to find bipartisan common ground than is Congress.


Sebastian Lundmark, Researcher at the Society, Opinion, and Media Institute at the University of Gothenburg, Sweden

November 2, 2020

“Exploring survey non-response: Introducing the socially impoverished areas as contextual explanations to non-response”

ABSTRACT: In 2015, the Swedish Police publicized a list of 53 neighborhoods they considered to be at risk of becoming, or that already were, socially impoverished/deprived areas. The Police defined a socially impoverished area as one where organized crime was widespread, trust in societal functions was low, the physical space was worn down, and parallel societies had emerged. In the areas defined as extremely socially impoverished, the parallel society and alternative social order had become the norm, and these parallel structures had started to replace the Swedish judicial system. This alternative social order led to a criminalizing process that permeated a majority of the young adults living in the area. The socially impoverished areas tend to be low in socioeconomic status, have widespread unemployment, and have a higher proportion of immigrants. Today, about 320,000 (4%) of the Swedish adult population live in an area considered socially impoverished.

In the present talk, the socially impoverished area definition will be presented as a new contextual variable potentially influencing survey non-response. By making use of Sweden’s extensive registry data on individual-level factors such as country of birth, citizenship, age, sex, and marital status, and neighborhood variables such as mean income and the number of people relying on social welfare, I investigate whether the socially impoverished areas help explain a portion of survey non-response.

Furthermore, the talk will detail the results of three survey experiments aimed at increasing response rates among the public in general and people living in socially impoverished areas in particular. The talk will also describe the work that we have been doing with focus groups of young adults living in one of the socially impoverished areas of Gothenburg, Sweden.


Nathan Kalmoe, Professor of Political Communication at Louisiana State University in the Manship School of Mass Communication and the Department of Political Science

November 9, 2020

“Uses and Abuses of Ideology in Political Psychology”

ABSTRACT: Ideology is a central construct in political psychology, but the field’s strong claims about an ideological public rarely engage evidence of an enormously stratified public: a minority exhibits real ideological coherence, while political belief organization is weak to nonexistent for everyone else. In this talk, I bridge disciplinary gaps by showing the limits of mass political ideology with several popular measures in representative U.S. surveys across five decades. Results show polar, coherent, stable, and potent ideological orientations only among the most knowledgeable 20-30% of citizens. Whether through top-down opinion leadership or bottom-up reasoning, organized political belief systems require political attention and understanding to form.


Matt Berent, Main Coordinator at PPRG

November 16, 2020

“A fundamental flaw in longitudinal survey projects like the ANES and the GSS”

ABSTRACT: The American National Election Studies (ANES) and the General Social Survey (GSS) have asked many survey questions repeatedly for decades. Asking the same question repeatedly is presumed to yield both accurate measurements at moments in time and accurate trends over time. This presentation reports evidence challenging these assumptions: an identical question measuring social trust in the two survey streams has consistently yielded significantly different distributions of responses and uncorrelated trends over time. New analyses document two explanations for these discrepancies: (1) the GSS has measured trust in its one and only interview with respondents each year, whereas the ANES has usually asked the trust question during a second interview with respondents, and low-trust people were especially likely to decline to complete a second interview; and (2) different questions preceding the trust question in the two surveys changed answers to the trust question. Additional analyses rejected different sample weighting methods, interview mode effects, and seasonal variation in when interviews were completed as possible explanations for the discrepancies. The findings suggest caution regarding the assumption of comparability of survey estimates across survey data streams and the accuracy of over-time trends within a data stream.