2015-2016 Spring

Cecilia Mo

March 28, 2016

When do the Advantaged See the Disadvantages of Others? A Quasi-Experiment on the Effects of Prolonged Contact

ABSTRACT: As wealth and power continue to become ever more unequally distributed, many socioeconomically privileged Americans have been found to justify their economic and social position by defending the fairness of the economic and political system and emphasizing the centrality of hard work in achieving their privileged position. Disadvantaged Americans, by contrast, often become disillusioned and convinced that America is an unfair society. This paper investigates whether deep and sustained contact between advantaged and disadvantaged groups alters perceptions of fairness. If advantaged Americans have prolonged interactions with disadvantaged populations, in which issues of inequality are not just salient but actively considered, do the former come to see the world from the vantage point of the latter? This question is crucial because advantaged Americans have disproportionate influence on public policy, and perceptions of fairness strongly shape positions in a variety of policy domains, such as social welfare, affirmative action, and criminal justice reform. We consider the case of Teach for America (TFA), a prominent national service program that places recent college graduates in low-income schools as teachers for an extended period (at least two years), with a charge to help close the educational achievement gap between high-income and low-income Americans; a large share of the students they serve are racial minorities. Equally important, in 2007 TFA began implementing a selection process with a cut-off score for admission, which makes causal inference possible using a quasi-experimental regression discontinuity design.
Using an original national survey of TFA applicants and TFA’s selection data for the 2007-2015 application cycles, we find strong evidence that prolonged contact with disadvantaged communities and issues of inequality causes advantaged Americans to adopt beliefs closer to those of disadvantaged Americans and to conclude that the economic, social, and political status quo is unfair. Moreover, this type of intergroup contact elicits less prejudice towards disadvantaged populations and fosters the perception that systemic injustices, rather than economically and socially vulnerable groups themselves, are to blame for those groups’ disadvantaged positions within the American social fabric.
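The sharp regression discontinuity logic the abstract relies on, comparing outcomes just above and just below an admission cutoff, can be sketched in a few lines. Everything below is simulated for illustration; the cutoff value, outcome scale, and effect size are hypothetical, not drawn from the TFA data.

```python
# A minimal sketch of sharp regression discontinuity: applicants scoring at
# or above a cutoff are "admitted" (treated), and the treatment effect is
# the jump between outcomes fitted on either side of the cutoff.
import random

random.seed(0)
CUTOFF = 0.0
TRUE_EFFECT = 0.5  # hypothetical jump in a "perceived unfairness" score

# Simulate applicants: the running variable is a selection score, and the
# outcome depends smoothly on the score plus a jump at the cutoff.
scores = [random.uniform(-1, 1) for _ in range(2000)]
outcomes = [0.3 * s + (TRUE_EFFECT if s >= CUTOFF else 0.0)
            + random.gauss(0, 0.2) for s in scores]

def local_linear_fit(xs, ys, bandwidth, side):
    """OLS fit of y on x within `bandwidth` of the cutoff, one side only;
    returns the fitted value at the cutoff."""
    pts = [(x, y) for x, y in zip(xs, ys)
           if abs(x - CUTOFF) <= bandwidth
           and ((x >= CUTOFF) if side == "above" else (x < CUTOFF))]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    slope = sxy / sxx
    return my + slope * (CUTOFF - mx)

rd_estimate = (local_linear_fit(scores, outcomes, 0.25, "above")
               - local_linear_fit(scores, outcomes, 0.25, "below"))
print(f"RD estimate of treatment effect: {rd_estimate:.2f}")
```

The design's identifying assumption is that applicants just above and just below the cutoff are comparable, so the fitted discontinuity at the cutoff recovers the causal effect of admission.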

Anna Boch

April 4, 2016

Which Americans Changed their Minds to Support Same-Sex Marriage? (And Why?)

ABSTRACT: Between 2006 and 2014, public support for same-sex marriage (SSM) surged from 35% to 52%, an increase of 17 percentage points in only 8 years. This change in public opinion cannot be explained solely by cohort replacement. So who were the people who changed their minds from opposing SSM to supporting it, and why did they do so? Evidence from the General Social Survey’s panel data from 2006 to 2014 suggests that changing support for same-sex marriage reflected increased polarization; in other words, the people who changed their minds came from the same demographic groups that supported SSM to begin with. Women, younger people, and those not from the South were more likely to change their minds to support SSM (though, notably, Black respondents were just as likely as white respondents to change their minds). Religious affiliation (Protestant, Catholic, or None) moderated the effect of age in predicting a change from opposing same-sex marriage to supporting it. Additionally, after controlling for the effects of constant individual traits like race and gender, decreasing religious service attendance and additional years of education (controlling for concurrent increases in age) over the observed period predicted an increase in support for SSM.

Sam Savage

April 11, 2016

The Arithmetic of Uncertainty: A Cure for the Flaw of Averages

ABSTRACT: The Flaw of Averages is a set of systematic errors that occur when uncertainties are replaced by single “average” numbers. A classic example involves the statistician who drowns in a river that is, on average, three feet deep, but it also explains why so many things are behind schedule, below projection, and beyond budget.

ProbabilityManagement.org has been cited by MIT’s Sloan Management Review as “a non-profit organization that aims at improving communication of uncertainty” and that applies “simulation-based communication to improve actual managerial decisions and public policies.” The organization’s open SIPmath™ standard helps cure the Flaw of Averages by allowing users of spreadsheets and other common software to perform arithmetic with uncertainties using the same keystrokes they would use for numbers.
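The arithmetic behind the Flaw of Averages is easy to demonstrate by simulation: for a nonlinear model, the model evaluated at the average input is not the average of the model over the input's distribution. The sketch below uses a made-up capacity-constrained profit model, not anything from the SIPmath standard itself.

```python
# Illustrating the Flaw of Averages: planning on the average input
# overstates the average outcome whenever the model is nonlinear.
import random
import statistics

random.seed(1)

def profit(demand, capacity=100, unit_margin=5):
    # You can only sell what you can produce: min(demand, capacity).
    return unit_margin * min(demand, capacity)

# Uncertain demand: uniform between 50 and 150 units, averaging 100.
demands = [random.uniform(50, 150) for _ in range(100_000)]
avg_demand = statistics.mean(demands)

profit_at_avg = profit(avg_demand)                        # "plan on the average"
avg_profit = statistics.mean(profit(d) for d in demands)  # simulate instead

print(f"profit at average demand: {profit_at_avg:.0f}")
print(f"average profit:           {avg_profit:.0f}")
# The simulated average is lower: upside demand is capped at capacity,
# but downside demand is not, so the average-based plan is systematically
# "below projection."
```

This asymmetry, losses uncapped while gains are capped, is exactly the pattern behind projects that come in behind schedule and beyond budget.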

Yongwei Yang and Mario Callegaro (with Ana Villar, Tzu Yun Chin, and Jon A. Krosnick)

April 18, 2016

Assessing the accuracy of 51 non-probability online panels and river samples: A study of the Advertising Research Foundation 2013 online panel comparison experiment

ABSTRACT: More and more survey research is conducted using online panels and river samples. With a large number of vendors available, clients need to decide which panel or river sample can produce accurate and reliable results. Previous studies comparing online panels have shown that data accuracy varies greatly across providers. This paper evaluates whether sampling strategy, such as the specific variables used in quota sampling or the use of river samples, helps explain differences in accuracy. Data come from a large study run by the Advertising Research Foundation (ARF) in 2013. We compare findings from 45 US online panels of nonprobability samples, 6 river samples, and one RDD telephone sample to population estimates obtained from large-scale surveys of probability samples with high response rates, such as the ACS, NHIS, and NHANES. The nonprobability samples were supplied by 17 major US providers that redirected respondents to a third-party website where the survey was administered. Samples were assigned to three quota methods: method A, age and gender nested within region; method B, as A plus race/ethnicity; and method C, as B plus education. The questionnaire included questions identical to those from the benchmark surveys on a large range of topics, including wellbeing, consumer purchase behavior, and brand attitudes. Mean survey length was 26 minutes, and the average sample size was 1,118 respondents. Comparisons are made using weighted data. Samples will be compared using the absolute average error method, in which the percentage of respondents who chose the modal category in the benchmark survey is compared to the same percentage in each sample (in absolute value). Mode effects and consistency within vendor will be discussed as well.
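The absolute average error metric described above reduces to a short computation: for each benchmark question, take the percentage of respondents in the benchmark's modal category, compare it with the same percentage in a vendor's sample, and average the absolute differences. The question names and figures below are made up for illustration; they are not the ARF benchmarks.

```python
# A minimal sketch of the absolute average error method for comparing a
# nonprobability sample against probability-sample benchmarks.

# Percentage choosing the benchmark's modal category, per question
# (hypothetical values, not the ACS/NHIS/NHANES figures).
benchmark_modal_pct = {"smoker": 18.0, "has_diabetes": 10.0, "excellent_health": 22.0}
sample_modal_pct = {"smoker": 24.5, "has_diabetes": 8.0, "excellent_health": 30.0}

def average_absolute_error(benchmark, sample):
    """Mean absolute difference, in percentage points, across questions."""
    errors = [abs(sample[q] - benchmark[q]) for q in benchmark]
    return sum(errors) / len(errors)

aae = average_absolute_error(benchmark_modal_pct, sample_modal_pct)
print(f"average absolute error: {aae:.2f} percentage points")
# -> (6.5 + 2.0 + 8.0) / 3 = 5.50 percentage points
```

Ranking vendors by this single number makes the 51 samples directly comparable, at the cost of collapsing each question to its modal category.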

Taylor Orth (with Jon Krosnick, David Grusky, and Bo MacInnis)

April 25, 2016

An Investigation of Climate Change Belief and Concern among U.S. Hispanics and non-Hispanic Whites

ABSTRACT: One of the largest demographic shifts in recent American history is the explosion of the American Hispanic population. Today, Latinos make up roughly 17 percent of the electorate and are expected to double to a striking 31 percent of the U.S. population by 2060. As Latinos gain a meaningful voice in government, they will be in a position to develop and influence public policies. As a result, many people are interested in the political behavior of Hispanics, because it will have increasingly significant impacts. In this paper, we use data from multiple nationally representative surveys (n = 4,808) to examine whether Hispanics and non-Hispanic whites significantly differ on six views regarding climate change, including belief, perceptions of vulnerability, personal importance, and global and national seriousness. Having demonstrated a statistically significant difference in opinion between Hispanics and whites, we then test whether these relationships remain intact after incorporating a number of theoretically motivated social and demographic controls into our models. Next, we examine the geographic distribution of Hi

Henning Silber (with Jan Karem Höhne and Stephan Schlosser)

May 2, 2016

Under-Reporting Support for Europe: Question Order Experiments in the German-European Context

ABSTRACT: The empirical findings of large-scale cross-national surveys are of great importance and relevance to politicians as well as policy makers around the world. In this paper, we investigate the context stability of measurements of questions on political issues in cross-national surveys. For this purpose, we conducted three replication studies (N1 = 213; N2 = 677; N3 = 1,489) based on nine split-ballot experiments with graduate and undergraduate students in order to test for question order effects. The questions covered the topics of democracy, economy, identity, and politics, and were replications of questions asked in the Eurobarometer 2013. Respondents were randomly assigned to one of two experimental groups. The first group received the questions regarding their home country first and then the questions regarding the European Union/Europe (original order). The second group received the questions regarding the European Union/Europe first and then the questions regarding their home country (reversed order). The results show significant question order effects between the two experimental groups in all three replication studies, irrespective of question content. In eight out of nine experiments, the European Union/Europe was evaluated significantly worse when respondents were first asked to evaluate their home country (Germany). We compare our results to those of past studies on context effects (e.g., Schwarz and Bless 1992; Schwarz, Strack, and Mai 1991; Tourangeau, Rips, and Rasinski 2000) and discuss the implications for future cross-national survey research. For example, our findings suggest that the likelihood of such context effects can be reduced by implementing informed questionnaire design strategies.
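The split-ballot comparison described above boils down to randomly assigning respondents to one of two question orders and testing whether the groups' mean evaluations differ. The sketch below simulates that comparison with a two-sample (Welch's) t-test; the ratings, group sizes, and means are all hypothetical, not the Eurobarometer replication data.

```python
# A minimal sketch of a split-ballot question order analysis: two randomly
# assigned groups rate Europe, differing only in which block of questions
# they saw first; a large |t| indicates an order effect.
import math
import random
import statistics

random.seed(2)
# Hypothetical 0-10 evaluations of Europe under each question order.
europe_first  = [random.gauss(6.0, 1.5) for _ in range(2000)]
germany_first = [random.gauss(5.5, 1.5) for _ in range(2000)]

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances allowed)."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

t = welch_t(europe_first, germany_first)
print(f"Welch t = {t:.2f}")
```

Because assignment to order is random, any reliable difference between the group means is attributable to question order rather than to respondent composition.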

LinChiat Chang (with Mary Beth Ofstedal)

May 2, 2016

Unit Nonresponse in the 2014 Health & Retirement Study Experimental Modules

ABSTRACT: The Health and Retirement Study (HRS) is a longitudinal panel survey representing Americans over age 50. After completing the 2014 HRS core interview, around 18,000 respondents were randomly assigned to 1 of 11 modules of supplemental questions, yielding approximately 1,600 respondents per module. However, not all HRS respondents who were assigned to a module completed it. Nonresponse increases the risk of bias in survey estimates. The risk of systematic bias is cause for concern when core substantive measures are associated with nonresponse propensity; conversely, nonresponse may be ignorable if it is random and not associated with core substantive measures. This analysis will address four main questions. 1. How representative are module subsamples? Module subsamples differ slightly from population demographic parameters, in part by design, because modules are administered only to non-proxy respondents who can answer for themselves. In addition, respondents who were willing to complete the module questions consistently scored higher on physical, cognitive, and emotional functioning than those who refused. 2. Did unit nonresponse vary by module topic? Nonresponse rates did not vary significantly across the 11 modules, and the completed samples exhibited similar demographic profiles, contactability, and cooperativeness. 3. Is nonresponse propensity systematically associated with substantive variables? Using paradata on respondents’ cooperativeness and contactability to simulate nonresponse propensity, we assess the extent to which nonresponse propensity is associated with substantive module variables. We demonstrate how the impact of nonresponse may be ignorable for certain module variables, whereas other module variables are more susceptible to nonresponse bias. A Bonferroni correction is applied. 4. Does nonresponse propensity change relationships between substantive variables? Multivariate models across 3 module topics (arts participation, healthcare technology, and expectations of longevity) will be fitted to determine whether nonresponse propensity moderates the strength and direction of established relationships between substantive variables.
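The Bonferroni correction mentioned above guards against false positives when many module variables are tested at once: with m tests at family-wise level alpha, each individual test is evaluated against alpha / m. The p-values below are hypothetical, not results from the HRS analysis.

```python
# A minimal sketch of the Bonferroni correction for multiple comparisons.

def bonferroni_significant(p_values, alpha=0.05):
    """Return, for each test, whether it survives the Bonferroni-corrected
    threshold alpha / m (m = number of tests)."""
    threshold = alpha / len(p_values)
    return [p <= threshold for p in p_values]

p_values = [0.001, 0.012, 0.030, 0.400]   # four hypothetical tests
flags = bonferroni_significant(p_values)  # threshold = 0.05 / 4 = 0.0125
print(flags)  # [True, True, False, False]
```

Note that the test with p = 0.030 would be significant on its own at the 0.05 level but does not survive the correction.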


LinChiat Chang (with Yongwei Yang)

May 2, 2016

Psychometric Scale Development for Arts Participation among Older Adults

ABSTRACT: The 2014 Health and Retirement Study (HRS) included a supplementary arts module that has created an opportunity to study a broad array of relationships between arts participation and the well-being of older adults. To facilitate such investigations, a valid and reliable scale is needed to assess values and perceptions relating to arts participation. To this end, this paper provides a psychometric assessment of 8 potential items in the Arts and Culture module that were measured on a 5-point Likert (agree/disagree) response scale to capture values and perceptions about the arts and participation in the arts. The sample of 1,600 respondents will be split into training and validation data sets. The training set will be subjected to pairwise correlations and exploratory factor analysis to map items onto coherent latent constructs of interest; competing factor solutions will then be fitted on the validation data set via confirmatory factor analysis to evaluate and identify the final recommended factor structure. Interim scales will also be assessed by Cronbach’s coefficient alpha, item-total correlations, and tests of convergent and discriminant validity against other HRS indices where longstanding associations, or lack thereof, have been demonstrated in past research. Strengths and limitations of agree-disagree scales will be discussed, with a focus on the negatively-worded items, to illustrate how the cognitive burden of processing double negatives may be too great for less-educated respondents. The final scale, with demonstrated psychometric properties, will facilitate future investigations.
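Cronbach's coefficient alpha, one of the interim scale checks named above, is computed as alpha = (k / (k - 1)) * (1 - sum of item variances / variance of the total score) for k items. The responses below are hypothetical 5-point Likert ratings, not HRS module data.

```python
# A minimal sketch of Cronbach's coefficient alpha for internal consistency.
import statistics

def cronbach_alpha(items):
    """`items` is a list of k item-response lists, one list per item,
    all covering the same respondents in the same order."""
    k = len(items)
    item_vars = [statistics.pvariance(item) for item in items]
    totals = [sum(resp) for resp in zip(*items)]  # each respondent's total
    total_var = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Three items rated by six respondents (rows = items, columns = people).
items = [
    [4, 5, 2, 3, 4, 1],
    [5, 5, 3, 3, 4, 2],
    [4, 4, 2, 2, 5, 1],
]
print(f"alpha = {cronbach_alpha(items):.2f}")
```

When items move together across respondents, the variance of the total score is large relative to the summed item variances, and alpha approaches 1; values around 0.7 or higher are conventionally read as acceptable internal consistency.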

Michaeleen Gallagher, Director of Education and Environmental Programs

May 9, 2016

Sunnylands Center & Gardens, The Annenberg Retreat at Sunnylands

BIOGRAPHY: Michaeleen Gallagher has worked in science and education for twenty years, in the U.S. and overseas. She joined the Annenberg Foundation Trust in 2011, one year before Sunnylands opened. As director of the education and environmental programs department, she oversees the development of public programming and environmental research projects, including the Sunnylands Monarch Project. She has been instrumental in developing the sustainability committee and sustainability messaging for Sunnylands. She has published two teacher guides for the IMAX films Magic of Flight and Everest, and in 2013 she was featured in Art & Nature: The Gardens at Sunnylands.
Prior to Sunnylands, she worked in wildlife rescue and education in North Carolina, at the Outer Banks Wildlife Shelter, and in Virginia with a bird of prey rescue. She was an education specialist at The Living Desert and at the Reuben H. Fleet Science and Technology Center in San Diego, and spent three years teaching in Japan.
Michaeleen is a member of the International Society for Sustainability Professionals, and currently serves as Board President of the California Association of Museums, co-chair of the Green Museums Initiative Committee, a Steering Committee member for The Museums Collaborative Network, and on the Programs Committee. She is currently pursuing her MS in Environmental Policy and Management at the University of Denver.

Aleksandar Matovski

May 9, 2016

Does Fear of Instability Sustain Popular Support for Electoral Autocracies? Comparative Evidence from Post-Communist Russia

ABSTRACT: Electoral authoritarian regimes have become the most numerous and persistent type of non-democracy since the Cold War, continuously ruling in as many as a third of all the countries in the world. A key but often overlooked feature of these systems is that they tend to emerge in the wake of acute political, economic, and security crises. In this paper, I argue that such contexts allow electoral autocracies to emerge and maintain their rule with minimal resort to coercion by exploiting popular anxieties and demands for stability. I show that risk-averse populations confer key electoral advantages on these regimes by (1) magnifying the impact of the valence advantages they enjoy over their challengers and (2) highlighting uncertainties about the quality of the opposition. I test these mechanisms empirically by examining the attitudinal currents that accompanied the rise of electoral authoritarianism in Russia, and with cross-national evidence from the US.

David Vannette

May 9, 2016

Assessing the Effects of Using Attention-Check Questions in Web Surveys: Evidence From a 14 Country Survey Experiment

ABSTRACT: The quality of survey responses in web surveys often leaves much to be desired. Respondents commonly engage in negative response behaviors and satisfice by taking shortcuts through the response process that researchers would like them to follow when answering questions. Many researchers have begun attempting to identify, in real time, respondents who are not being thoughtful and then to exclude them from either the rest of the survey or the final dataset. The use of attention-check questions is one extremely popular method for identifying some of these inattentive or misbehaving respondents. This paper reports the results of three experimental interventions aimed at increasing respondent attention and thoughtfulness. Unlike in a production survey context, respondents who failed the attention-check questions were allowed to remain in the sample, in order to evaluate how they influence the data and how they differ on observable characteristics from respondents who pass these attention checks. An additional control condition with no intervention was used to assess the effects of asking these types of questions at all, because simply using this type of attention check may affect responses. The effects of the different treatment conditions on data quality are assessed using outcome measures that include the quantity of text generated in open-ended responses, survey completion times, socially desirable responding, and item nonresponse. Additionally, the demographic similarities and differences that result from using these methods are assessed. The data for this study were collected from convenience samples of web survey respondents in 14 independently sampled countries. The results will inform knowledge about the effectiveness of interventions aimed at improving the quality of responses in web surveys and will be of value to all researchers using this method.

Mary Currin-Percival

May 16, 2016

Let’s Go To the Numbers: Understanding Journalists’ Decisions About Reporting the Details of Public Opinion Polls

ABSTRACT: The focus of this research is to better understand why so little methodological information is included in news media reports about public opinion polls. This area of research is important because the news media serve as the major source of polling information for most Americans, and how these details are reported affects the amount and quality of methodological information people receive. Using an original survey of newspaper and broadcast journalists and a series of field interviews with journalists and news organization polling experts, I find that specific institutional and audience factors affect journalists’ decisions about which details to include in stories reporting poll results.