Current Research


PPRG Research Project Topics:

1) Voting and Elections

An Exploration of Forces Driving Vote Choices in the 2008 American Presidential Election


(with Omair Akhtar, Josh Pasek, Keith Payne, Trevor Tompson, and Yphtach Lelkes)

At the beginning of September 2008, Barack Obama was ahead of John McCain in the polls, but by a much smaller margin than many forecasting models predicted. In conjunction with the Associated Press, Yahoo! News, and the Stanford University Institute for Research in the Social Sciences, we investigated potential explanations for this discrepancy by regressing vote choice at that time, and actual voting behavior on Election Day, on a series of predictors, including: racism; beliefs about the candidates' competence in governing, military experience, integrity, elitism, and issue priorities; the candidates' familiarity to voters; voter preference for divided government; voter perception that Obama is a Muslim; Obama's relationship with Reverend Wright; evaluations of the prospective first ladies; resentment maintained by Hillary Clinton supporters; perceptions of the economy; presidential approval ratings; party identification; and ideology.

A variety of racism measures were used, including symbolic racism, racial resentment, stereotypes of Blacks' personalities, affect toward Blacks and Whites, other explicit measures, and the Affect Misattribution Procedure, a measurement tool developed by social psychologists to gauge prejudice without explicitly asking respondents about it. This procedure asks respondents to evaluate various Chinese ideographs and measures the spillover of affect inspired by nearly subliminal exposures to Black and White faces presented immediately before each ideograph.

We identified a wide range of factors that explained vote choices as well as decisions about whether to vote or abstain.
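As a rough sketch of the kind of analysis described above, vote choice can be regressed on predictors with a simple logistic model. The data and the two predictor names below are synthetic illustrations, not the actual AP/Yahoo!/Stanford survey variables:

```python
import math
import random

def fit_logit(X, y, lr=0.5, iters=1000):
    """Fit a logistic regression (vote choice ~ predictors) by batch
    gradient descent; returns coefficients, intercept first."""
    n, k = len(X), len(X[0])
    w = [0.0] * (k + 1)
    for _ in range(iters):
        grad = [0.0] * (k + 1)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability
            grad[0] += p - yi
            for j, xj in enumerate(xi):
                grad[j + 1] += (p - yi) * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

# Synthetic respondents: two illustrative predictors scaled 0-1
# (say, party identification and a racial-attitude score);
# outcome 1 = reported voting for the Democratic candidate.
random.seed(0)
X = [[random.random(), random.random()] for _ in range(400)]
y = [1 if 1.5 * x1 - 2.0 * x2 + random.gauss(0, 0.3) > -0.25 else 0
     for x1, x2 in X]
coefs = fit_logit(X, y)
```

Under these generative assumptions, the fitted signs recover the simulated effects: a positive weight on the first predictor and a negative weight on the second.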

The Effect of Polls on Political Behavior


(with Neil Malhotra)

In recent years, there has been much speculation about the possibility that pre-election polls gauging the status of a horserace between competing candidates may affect the behavior of voters on Election Day.  Specifically, some observers have asserted that polls showing a runaway victory may discourage voters from turning out, because they think their vote will not make a difference in the outcome. 

To test this idea, we are exploring the impact of public opinion poll results on political attitudes, beliefs, and behavior in the electoral context. We are developing and testing a causal model to explain how people update their assessments of the closeness of a race using information from polls and how they then use this closeness assessment to evaluate candidates and decide whether or not to vote.

To test our model, we conducted a survey experiment (administered by Harris Interactive with a sample of the general U.S. adult population) about a hypothetical presidential race between Hillary Clinton and Jeb Bush in 2008.  Some respondents were told about a poll showing Clinton far ahead of Bush, other respondents were told about a poll showing the race as tight, and still other respondents were not told about any poll result.  Some respondents were asked about their perceptions of race closeness before being told about the poll results, and others were not asked this question. 

We explored three sets of questions:

First, we explored how people used the poll results to update their assessments of the closeness of a race, and what factors moderate this process. Previous studies have ignored this updating process – indeed, they did not even measure people’s prior beliefs before providing poll information.  We discovered that providing any poll result caused our survey respondents to update their closeness assessments in the direction of the poll result.  Interestingly, asking people about their prior beliefs before providing poll results reduced updating.

Second, we explored whether polls showing one candidate far ahead cause people to “jump on the bandwagon” and support the frontrunner. We went beyond the existing literature by analyzing the relation between “the bandwagon effect” and the updating process, the validity of various proposed causal mechanisms underlying the “bandwagon effect,” and the variables that moderate these causal processes. Among many findings, we observed that respondents were more likely to vote for Governor Bush when exposed to the tight poll result.

Third, we explored whether polls showing the race is lopsided reduced political participation by making citizens feel their actions will be inconsequential. We found that polls showing one candidate behind another had a demobilizing effect; people exposed to polls showing Governor Bush trailing were less likely to vote, volunteer for his campaign, and contribute money.
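The updating process in the first finding can be illustrated with a toy simulation. The three conditions mirror the experiment's design, but the numbers below are invented generative assumptions, not the Harris Interactive data:

```python
import random

random.seed(1)

def closeness_rating(condition):
    """Return one simulated respondent's post-treatment closeness rating
    (0 = sure landslide, 100 = dead heat), under the toy assumption that
    people average their prior belief with the poll's signal."""
    prior = random.gauss(60, 15)          # diffuse prior belief
    if condition == "tight_poll":
        return 0.5 * prior + 0.5 * 90     # poll says the race is close
    if condition == "lopsided_poll":
        return 0.5 * prior + 0.5 * 20     # poll says it's a runaway
    return prior                          # control: no poll shown

means = {c: sum(closeness_rating(c) for _ in range(500)) / 500
         for c in ("control", "tight_poll", "lopsided_poll")}
```

As in the finding above, any poll moves mean closeness assessments toward the poll result: the tight-poll condition's mean rises above the control mean, and the lopsided-poll condition's mean falls below it.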

Rationalization of Candidate Preferences and Mischaracterization of the Causes of Votes


(with Alison Pfent)

For years, researchers have been interested in what determines citizens' decisions about which candidates to vote for in presidential elections.  These decisions have intrigued political scientists and social psychologists alike, and their research has led to the discovery of numerous elements that seem to play causal roles: party identification, performance of the incumbent, the health of the national economy, candidates' stances on important national issues, perceptions of candidates' personalities, and more.  Although correlations of various political attitudes and beliefs with candidate preferences are well-documented and assumed to reflect influences on candidate preferences, social psychology (and cognitive consistency theories in particular) suggests a very different interpretation of these correlations: rationalization.

We are investigating the theory that people form candidate preferences and thereafter change related political attitudes and beliefs so they become more consistent with their candidate choice.  Using all available National Election Study data, we have found evidence of pervasive rationalization in every instance possible to analyze.  We have found evidence of rationalization in ideological self-identifications, party identification, and stances on abortion and other national policy issues. We are now attempting to employ panel data to yield strong evidence about the causes of candidate choices, eliminating the contaminating influence of rationalization.

Candidate Name Order Effects in Elections


(with Joanne Miller, Michael Tichy, Daniel Schneider, Eyal Ophir, Key Lee, Daniel Blocksom, and Alexander Tahk)

A great deal of evidence suggests that survey respondents' answers to closed-ended questions can be influenced by the order in which those choices are presented. However, the impact of order depends upon whether the choices are presented visually or orally. Under visual presentation conditions, people are inclined to select the first response options they encounter, whereas under oral presentation conditions, people are inclined to select the choices they encounter last.

Given that these order effects appear quite consistently in surveys, we were interested in whether they would appear in real elections. When they enter voting booths, citizens encounter candidates' names visually, either written on paper or displayed on a voting machine. Findings from survey research therefore suggest that people may be inclined to select names toward the top of the list. To test this idea, we collected actual voting returns for the 1992 elections in three large counties in Ohio. Precincts there were randomly assigned to receive different orders of candidate names, so we were able to analyze these data as if they resulted from an experiment. And indeed, people were inclined to vote for candidates whose names appeared toward the top of the ballot. These effects were much more common in races about which voters knew less and in which the partisan affiliations of the candidates were not listed on the ballot.

In 2001-2002, we conducted another test of these effects.  This time, we analyzed data from the 2000 general election for the entire state of Ohio, as well as the states of North Dakota and California.  In all three states, portions of the state (such as precincts or counties) were assigned to receive different orders of candidate names for all races we analyzed, so we were able to analyze these data as if they resulted from an experiment.  Again, name order effects were found in many of the races we analyzed.  A side effect of this research project was the discovery of how widely name order laws vary in the U.S.

Currently, we are in the process of two large data collection enterprises to improve our understanding of moderators of ballot order effects and to investigate the implication of different statistical approaches to testing hypotheses on ballot order effects. For the 2004 presidential race, we are focusing on the election in Ohio where ballot order was implemented on the precinct level. We are especially interested in the impact of voting method on effect strength and in irregularities in the implementation of rotation. More extensively, we are currently gathering elections results for all state-wide races in California from 1976 to 2006 to investigate the ballot order effect in over 50 different races.
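Because precincts are randomly assigned to name orders, the core analysis reduces to comparing a candidate's mean vote share across ballot positions. A minimal sketch with simulated precinct returns follows; the 2% primacy bonus is an assumption for illustration, not an estimate from the Ohio data:

```python
import random

random.seed(2)

def precinct_share(listed_first):
    """Simulate one precinct's vote share for a candidate, with a small
    assumed bonus when the name is listed first on the ballot."""
    base = random.gauss(0.48, 0.05)       # precinct-level underlying support
    return base + (0.02 if listed_first else 0.0)

# Random assignment: half the precincts list this candidate first.
returns = [(i % 2 == 0, precinct_share(i % 2 == 0)) for i in range(1000)]

first = [s for f, s in returns if f]
later = [s for f, s in returns if not f]
name_order_effect = sum(first) / len(first) - sum(later) / len(later)
```

Because assignment is random, this difference in means is an unbiased estimate of the name-order effect, just as in the precinct-randomized elections analyzed above.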

The Effect of Economic Performance on Voter Attitudes


2) Survey Research Methodology

Survey Research


(with Penny Visser & Paul Lavrakas)

Social psychologists have long recognized that every method of scientific inquiry is subject to limitations and that choosing among research methods inherently involves trade-offs. With the control of a laboratory experiment, for example, comes an artificiality that raises questions about the generalizability of results. And yet the naturalness of a field study or an observational study can jeopardize the validity of causal inferences. The inevitability of such limitations has led many methodologists to advocate the use of multiple methods and to insist that substantive conclusions can be most confidently derived by triangulating across measures and methods that have nonoverlapping strengths and weaknesses (see, e.g., Brewer, this volume, Ch. 1; Campbell & Fiske, 1959; Campbell & Stanley, 1963; Cook & Campbell, 1969; Crano & Brewer, 1986; E. Smith, this volume, Ch. 2).

This chapter describes a research methodology that we believe has much to offer social psychologists interested in a multimethod approach: survey research. Survey research is a specific type of field study that involves the collection of data from a sample of elements (e.g., adult women) drawn from a well-defined population (e.g., all adult women living in the United States) through the use of a questionnaire (for more lengthy discussions, see Babbie, 1990; Fowler, 1988; Lavrakas, 1993; Weisberg, Krosnick, & Bowen, 1996). We begin the chapter by suggesting why survey research may be valuable to social psychologists and then outline the utility of various study designs. Next, we review the basics of survey sampling and questionnaire design. Finally, we describe procedures for pretesting questionnaires and for data collection.

Moderators of Ballot Name Order Effects: The Role of Information and Ambivalence


(with Nuri Kim)

Much evidence suggests that candidate name order effects occur in elections, but we know surprisingly little about the psychological mechanism(s) responsible for these effects. A handful of past studies have identified conditions under which the effect is more or less pronounced, generally relating either to characteristics of the election (e.g., type of race) or to individual attributes (e.g., education). Adding to such external and dispositional contingencies, the current study focuses on the cognitive processes that lie beneath the observed effect. Two main moderators are examined: the amount of information voters have about the candidates, and the ambivalence voters feel toward the candidates.  An experiment embedded in a national survey was conducted to test both explanations.

Unmotivated Anonymity


(with Yphtach Lelkes, David Marx, Charles Judd, and Bernadette Park)

Public opinion researchers often assume that promising anonymity to survey respondents minimizes social desirability response bias. Anonymity may indeed encourage reporting of potentially embarrassing attitudes and behaviors and may discourage over-stating socially admirable attitudes and behaviors, but past studies have generally not tested whether anonymity makes reports more accurate. Two experimental studies demonstrated that making participants completely anonymous when answering self-administered paper and pencil questionnaires led them to report fewer socially admirable opinions and behaviors and more socially embarrassing opinions and behaviors.  But complete anonymity also induced more survey satisficing and lower accuracy of reports of factual matters.  These studies suggest that complete anonymity may not be a costless method for minimizing social desirability response bias and that less extreme versions of confidentiality may be preferable. 

Improving Survey Design and Accuracy for the National Science Foundation


(with Curtiss L. Cobb III)

The National Science Foundation (NSF) is mandated by the U.S. Congress to provide a central clearinghouse for the collection, interpretation, and analysis of data on the science and engineering resources of the nation.  NSF partially fulfills this responsibility by annually conducting three large national surveys designed to collect uniform data that allows for a detailed analysis of the employment, educational, and demographic characteristics of those trained in science and engineering fields.

To help NSF carry out this mission, we are conducting a series of experiments and analyses and providing technical expertise on three new efforts:  (1) Designing a starting salary question for the Survey of Earned Doctorates (SED).  We have designed a survey experiment to determine which question type for salary provides the best quality of data without hindering the quality of other SED data. (2) Designing a question on the field of a person's bachelor's degree for inclusion in the American Community Survey (ACS), yielding a recommendation of the best wording for a field-of-degree question for the ACS.  The accuracy of the data yielded by different question designs is being tested in a survey experiment using a sample of Stanford University undergraduate alumni and their parents. (3) Conducting statistical analyses of the Survey of Doctorate Recipients to assess whether survey responses vary systematically depending on the mode in which the data are collected, to profile post-docs in various fields, and to assess whether studying the migration of post-docs from one field to another is possible given the existing data structure.  Additional cognitive work will identify optimal ways to ask survey questions of post-docs using focus groups.

Conversational Conventions


(with Allyson Holbrook, Richard Carson, & Robert Mitchell)

Research in linguistics suggests that conventions govern the order in which words are listed in sentences during everyday conversations.  We examine one such convention: when listing two terms, one positive and the other negative, it is conventional to list the positive one first (e.g., like or dislike, for or against, support or oppose).  Specifically, we examine whether, when asking a question to gauge a person's attitude, it is conventional to offer the positive or affirmative response choice first and the negative response choice second.

We found that in everyday conversation it is conventional to offer the positive or affirmative response option first.  We have also found that violating conversational conventions can sometimes reduce the data quality of responses to attitude questions.  When the two options are presented in the unconventional order, expectations are violated, and people are surprised and distracted, so responses are made more slowly and with more error.  These effects are most apparent among respondents with the lowest levels of cognitive skill: those with low GPAs or little formal education.

If there is a convention regarding the order in which response alternatives to such questions should be offered, one might presume that researchers would never violate it, so the problems caused by violating the convention would never occur.  However, there is a reason why researchers may violate the convention: response order effects.  A great deal of research has found that the order in which response choices are offered can influence the distribution of answers to closed-ended questions, sometimes advantaging alternatives presented first and other times advantaging alternatives presented last.  In order to minimize the impact of such response order effects on response distributions, some questionnaire design experts have advised that response order be systematically rotated across respondents, and at least one major survey firm, the Gallup Organization, routinely rotates response alternatives in order to estimate and control for response order effects.

In the past, the only apparent costs of such rotation have been that it increases the complexity and expense of the survey and introduces a source of systematic measurement error that must then be modeled in multivariate statistical analyses. However, our research suggests that presenting responses in the unconventional order makes respondents' cognitive tasks more difficult and reduces data quality. Consequently, the best solution may be to use only the conventional response order and take steps to eliminate response order effects by enhancing respondents' motivation to answer survey questions thoughtfully.

Response Rates in Surveys by the News Media and Government Contractor Survey Research Firms


(with Allyson Holbrook & Alison Pfent)

In recent years, there has been widespread speculation about the possibility that response rates for national surveys have been low and are dropping due to increasing respondent reluctance to be interviewed.  This concern is accompanied by additional worry that low and dropping response rates are associated with decreased representativeness of survey samples and therefore reduced accuracy.

We initiated a project to attempt to better understand current response rates in the best and most visible surveys being done of nationally representative populations by telephone via Random Digit Dialing.  To this end, we approached the nation’s leading news media polling organizations and the nation’s leading survey research firms that do large-scale telephone surveys for agencies of the federal government.  All of the organizations we approached agreed to provide to us full disposition codes for recent national RDD telephone surveys, answers to a series of questions about how the surveys were conducted, and unweighted distributions of demographic variables for the respondents who completed interviews.

We found that response rates for the news media surveys were lower than those for the government contractors and that there was considerable variability in these response rates, with some very low and others quite high.  Observed response rates were correlated strongly with refusal rates and more weakly with contact rates.  Various aspects of survey procedure were associated with higher response rates, as would be expected, including longer field periods, shorter questionnaires, the payment of incentives, sending of advance letters, and more.
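The response and refusal rates discussed here are conventionally computed from the final disposition codes the organizations provided, using AAPOR's standard formulas. A simplified sketch, with hypothetical disposition counts and unknown-eligibility cases all counted as eligible (as in AAPOR's most conservative Response Rate 1):

```python
# Hypothetical final disposition counts for one RDD telephone survey.
dispositions = {
    "complete": 900,      # completed interviews (I)
    "partial": 50,        # partial interviews (P)
    "refusal": 1200,      # refusals and break-offs (R)
    "non_contact": 600,   # eligible but never reached (NC)
    "other": 50,          # other eligible non-interviews (O)
    "unknown": 1200,      # unknown-eligibility cases (UH + UO)
}

def response_rate_1(d):
    """Most conservative response rate: completes divided by all cases,
    treating every unknown-eligibility case as eligible."""
    return d["complete"] / sum(d.values())

def refusal_rate(d):
    """Refusals over the same denominator."""
    return d["refusal"] / sum(d.values())

rr1 = response_rate_1(dispositions)   # 900 / 4000
ref = refusal_rate(dispositions)      # 1200 / 4000
```

With these counts, RR1 is 22.5% and the refusal rate is 30%; the strong correlation reported above reflects the fact that refusals are typically the largest non-interview component of the denominator.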

Most importantly, the unweighted demographics of each survey were compared to data on the nation gathered via the U.S. Census Bureau's Current Population Survey (an authoritative benchmark).  The survey samples were remarkably similar to the nation in terms of age, race, gender, education, and income.  Higher response rate surveys manifested slightly less error than lower response rate surveys, but these differences were quite small.

The Survey Response Process in Telephone and Face-to-Face Surveys: Differences in Respondent Satisficing and Social Desirability Response Bias


(with Melanie Green & Allyson Holbrook)

In recent decades, survey research throughout the world has shifted from emphasizing in-person interviewing of block-listed samples to random digit dialing samples interviewed by telephone.  In this paper, we propose three hypotheses about how this shift may bring with it changes in the psychology of the survey response, involving survey satisficing, enhanced social desirability response bias, and compromised sample representativeness among the most socially vulnerable segments of populations.  We report tests of these hypotheses using data from three national mode experiments.  As expected, RDD-telephone samples were less representative of the population and under-represented its most socially vulnerable segments more severely.  Furthermore, telephone respondents were more likely to satisfice (as evidenced by no-opinion responding, non-differentiation, acquiescence, and interview length), less cooperative and engaged in the interview, and more likely to express dissatisfaction with the length of the interview.  Telephone respondents were also more suspicious about the interview and more likely to present themselves in socially desirable ways than were face-to-face respondents.  These findings shed light on the nature of the survey response process, on the costs and benefits associated with particular survey modes, and on the nature of social interaction generally.

Acquiescence Biases Answers to Agree/Disagree Rating Scale Questions


(with Willem Saris and Eric Schaeffer)

Agree/disagree rating scales are tremendously popular in questionnaire research, but for 50 years, researchers have known that answers to these questions are subject to acquiescence response bias.  In our new paper, we show that this and other problems significantly compromise the validity of measurements made with agree/disagree scales.  Fortunately, it is always easy to ask the same questions with construct-specific response alternatives instead, and doing so simplifies the respondent's task and gathers more useful data.

Improving Election Forecasting


(with LinChiat Chang)

Surveys that forecast election outcomes have implications for campaign strategies, financial contributions, political analysis in the mass media and academia, as well as actual electoral turnout. However, techniques for improving the accuracy of election forecasting in polling organizations are often proprietary and not amenable to comparative evaluation. With the intention of moving the field toward viewing these techniques as an appropriate terrain for scientific investigation, we investigated how election forecasting would be affected by (a) elimination of respondents who are not likely to vote, (b) allocation of undecided respondents to candidates or referendum positions, (c) weighting of samples for representativeness, (d) addition of random responses, (e) controlling for candidate name order effects.

Using data from the 1997-1999 Buckeye State Polls collected by the Center for Survey Research at The Ohio State University, we found that: (a) although better forecasting is achieved by using a combination of filters rather than separate filters to eliminate non-voters, there is a limit on the number of filters that should be applied; filtering down to 50% of the sample is optimal for forecasting candidate races, while forecasts of referenda benefit from using only 10-20% of the original samples. (b) Random allocation of undecided respondents to candidates or issue positions improved the forecasting potential of the pre-election surveys for both candidate races and referenda. (c) Although weighting of the unfiltered samples did not consistently improve forecasting, substantial improvement was attained in referenda forecasting by weighting the samples after the optimal number of filters had been applied. (d) Addition of random responses did not improve forecasting. (e) Candidate name order effects emerged on the surveys, with recency effects most pronounced in less publicized races and among respondents with less education. Based on these results, we were able to provide a set of recommendations on how to improve election forecasting using empirically validated techniques.
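Techniques (a) and (b), filtering out unlikely voters and then randomly allocating undecided respondents, can be sketched as follows, with simulated respondents standing in for the Buckeye State Poll data:

```python
import random

random.seed(3)

# Simulated respondents: (likelihood-of-voting score, stated preference).
respondents = [(random.random(), random.choice(["A", "B", "undecided"]))
               for _ in range(2000)]

def forecast_share(sample, keep_fraction):
    """Keep the respondents most likely to vote, randomly allocate the
    undecideds, and return candidate A's forecast vote share."""
    ranked = sorted(sample, key=lambda r: r[0], reverse=True)
    kept = ranked[:int(len(ranked) * keep_fraction)]
    tally = {"A": 0, "B": 0}
    for _, pref in kept:
        if pref == "undecided":
            pref = random.choice(["A", "B"])   # random allocation, step (b)
        tally[pref] += 1
    return tally["A"] / (tally["A"] + tally["B"])

share_a = forecast_share(respondents, 0.5)   # keep top half, per finding (a)
```

Since the simulated preferences are symmetric, the forecast hovers near 50%; with real data, the choice of keep_fraction is exactly what the study above tunes (about 50% of the sample for candidate races, 10-20% for referenda).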

The Optimal Length of Rating Scales to Maximize Reliability and Validity


(with Alex Tahk)

Survey research frequently uses multi-point scales to assess respondents’ views.  These scales vary from two points (e.g., agree or disagree) to 101 points (e.g., the American National Election Study’s thermometer-style ratings).  Scales can also vary in another regard: being bipolar (meaning the zero point is in the middle and the end points are opposites, such as extremely positive and extremely negative) or unipolar (meaning the zero point is at one end, as in “not at all important”).   However, different scale lengths may differ in reliability, so it is important to understand how the length of the scales affects the reliability of the responses.

To explore the relation between scale length and reliability, we conducted a meta-analysis of the results of many past studies.  Our data consist of results from 706 tests of reliability taken from thirty different between-subject studies.  We combined various measures of reliability and various sample sizes, controlling for these and other factors in determining the relation of scale length to reliability.

In general, we found that five- or seven-point scales produced the most reliable results.  Bipolar scales performed best with seven points, whereas unipolar scales performed best with five.  We also found that offering a midpoint on a bipolar scale, indicating a neutral position, increased reliability.
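A stripped-down version of such a meta-analysis pools the reliability estimates within each scale length, weighting by sample size. The records below are invented for illustration, and the full study additionally controlled for the reliability measure and other study features:

```python
# Illustrative meta-analytic records: (scale_points, reliability, sample_n).
studies = [
    (2, 0.60, 150), (3, 0.68, 200), (5, 0.80, 300),
    (5, 0.78, 120), (7, 0.82, 250), (7, 0.79, 180),
    (11, 0.72, 220), (101, 0.65, 400),
]

def pooled_reliability(records):
    """Sample-size-weighted mean reliability per scale length,
    a simplified stand-in for the controlled meta-analysis."""
    totals = {}
    for points, rel, n in records:
        w, s = totals.get(points, (0.0, 0.0))
        totals[points] = (w + n, s + rel * n)   # accumulate weight and sum
    return {p: round(s / w, 3) for p, (w, s) in totals.items()}

pooled = pooled_reliability(studies)
```

In this toy data, as in the finding above, the pooled estimates peak at five and seven points and fall off for very long scales.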

Instigators of Satisficing


(with Sowmya Anand, George Bizer, Melanie Green, Ken Mulligan, & Wendy Smith)

Satisficing theory proposes a number of survey features and individual differences that are likely to make satisficing more or less likely.  For example, individuals who have not given much thought to an issue are theorized to be especially likely to satisfice; similarly, individuals lower in education or who don't like to think may tend to satisfice.  On the other hand, individuals who are specifically instructed to try to answer questions carefully and accurately may be more likely to provide optimal answers.  We are currently subjecting these proposed moderators of satisficing to experimental tests, using both student and national samples.  In particular, these experiments will shed light on the ways in which various interpersonal and situational factors interact to influence survey responding.

Development of Survey Questionnaires for NASA to Track US Aviation Safety


(with Mike Silver)

We are currently assisting in the development of several national-level survey questionnaires for a NASA program designed to track changes in U.S. aviation safety. This project has applied research on the relationship between the organization of events in memory and the recall of those memories to survey instrument design. More specifically, this research has used focus groups and individual interviews with pilots and air traffic controllers to identify the safety-related issues that inform the content of the questionnaire, as well as preliminary information on the organization of related events, identification of a key cognitive organization scheme used by pilots in their memories of safety-related events, assessments of pilots' abilities to accurately recall events over time, and an experimental demonstration of the effectiveness of using memory cues matching the pilots' organization scheme to enhance accurate recall of safety-related events.

Comparing the Results of Probability and Non-Probability Sample Surveys

Researchers interested in collecting survey data from national samples often consider three possible ways of doing so: (1) telephone interviewing of RDD samples, (2) internet data collection from non-probability samples of volunteers, and (3) internet data collection from probability samples recruited via RDD.  In order to help inform such choices, a single questionnaire was designed and administered by each of eight survey firms (one doing RDD telephone interviewing, one doing internet data collection from a probability sample, and six doing internet data collection from non-probability samples; each sample comprised approximately 1,000 people).  The firm that conducted internet data collection from a probability sample and one firm that collected data from volunteer respondents (SPSS) were told in advance that results would be compared across firms; the remaining firms were not told in advance that such comparisons would be made.  A set of preliminary results was presented at the 2005 annual meeting of the American Association for Public Opinion Research (AAPOR).  Questions about this study can be directed to Professor Douglas Rivers at Stanford University (rivers@stanford.edu).

A Comparison of Minimally Balanced and Fully Balanced Forced Choice Items


(With Eric Shaeffer, Gary Langer, and Dan Merkle)

Survey researchers are generally mindful that balancing the wording of a question can alter the distributions of answers obtained. However, researchers who choose to use balanced questions can choose among multiple ways to achieve this aim. A "fully balanced" question involves fully restating the competing point of view, whereas a "minimally balanced" question simply uses the words "or not" or a phrase of that sort to briefly acknowledge a second viewpoint.

In two studies using national sample survey data, we compared the distributions and concurrent validity of responses across fully and minimally balanced questions. We also explored whether the impact of full balancing varied with respondent education, a variable that has been shown in prior studies to regulate the magnitude of various response effects. Across these studies, minimally balanced and fully balanced questions resulted in similar distributions of responses of equivalent validity, and this pattern did not vary with respondent education.

A third study examined the distributions of responses to factual knowledge questions using a sample of college undergraduates. Participants answered either fully balanced or minimally balanced questions that were worded either in a conversationally conventional way (e.g., “greater or less than”) or in a non-conventional way (e.g., “less than or greater than”).  The latter approach has been shown in other research to disrupt processing and reduce data quality, and we found here that the unconventional question wording yielded fewer correct answers from respondents.  When the unconventional wording was used, full balancing enhanced answer accuracy over what was obtained with minimal balancing.  But when questions were worded in a way consistent with conversational conventions, full balancing did not offer an advantage over minimal balancing in terms of response accuracy.

Therefore, when questions are worded in conversationally conventional ways, the practical benefits of minimal balancing give that approach a relative advantage over full balancing.  However, when researchers choose to violate conversational conventions, fully balanced items may offer an advantage.

Comparing the Quality of Data from Telephone and Internet Surveys


(with LinChiat Chang)

With their response rates declining and costs rising, telephone surveys are increasingly difficult to conduct.  At the same time, Internet data collection is emerging as a viable alternative, in two forms.  Some firms are distributing computer equipment to national samples recruited through RDD calling, and other firms are attracting volunteer respondents and then building panels of those individuals with some demographic characteristics distributed as they are in the nation.  Most firms assemble panels of respondents who provide data on a regular basis.

Given the obvious practical advantages of Internet-based data collection, it seems worthwhile to conduct objective tests of this relatively new method in direct comparison with the dominant alternative methodology: telephone interviewing.  To do so, we commissioned a set of side-by-side surveys using a single questionnaire to gauge public opinion and voting intentions regarding the 2000 U.S. Presidential Election from national samples of American adults.

Data were collected by three houses: The Ohio State University Center for Survey Research (CSR), Knowledge Networks (KN), and Harris Interactive (HI).  The CSR did RDD telephone interviewing.  KN recruited respondents via RDD telephone interviews and equipped them with WebTV, which then permitted Internet data collection.  HI respondents joined a panel after seeing and responding to invitations to participate in regular surveys; the invitation appeared on the Excite search engine web page and in various other places as well.  These respondents also completed Internet surveys.

This study suggests that Internet-based data collection represents a viable approach to conducting representative sample surveys.  Internet-based data collection does compromise sample representativeness somewhat, more so when respondents volunteer than when they are recruited by RDD methods.  But Internet data collection improves the accuracy of the reports respondents provide over that obtained via telephone interviews.

Response Option Order, Respondent Ability, Respondent Motivation, Task Difficulty, and Linguistic Structure


(with Allyson Holbrook)

Satisficing theory suggests that respondents may sometimes choose the first satisfactory response alternative they consider, rather than carefully considering all the response alternatives.  This theory predicts that respondents are most likely to satisfice when they are unable and/or unmotivated to think carefully about a question and when the question is difficult to answer.  When questions are presented orally, respondents typically cannot start thinking about the response alternatives until all have been read, so they more fully process response alternatives read last.  This process typically leads to recency effects when questions are presented orally.  In a meta-analysis of 212 dichotomous response order experiments in telephone surveys conducted by the Gallup Organization between 1995 and 1998, we are testing the impact of respondent ability, respondent motivation, and task difficulty on the likelihood and magnitude of response order effects.  In addition, we are exploring a new hypothesis: that the order in which response options are considered can be affected by the linguistic structure of the question.

Measuring the Frequency of Regular Behaviors: Comparing the 'Typical Week' to the 'Past Week'


(with LinChiat Chang)

Respondents’ reports of behavioral frequencies have implications for important issues ranging from unemployment rates, medical epidemiology, and neighborhood and community evaluations to transport infrastructure, crime rates, consumer behavior, and government health resource allocation. Despite numerous assumptions about the relative strengths and weaknesses of questions asking about the past week vs. a typical week, there is a lack of empirical evidence comparing the performance of these two question forms. One previous study revealed no significant difference between past week and typical week measures, but those analyses treated variances in these two question forms as if they were the same. Using more appropriate analysis techniques, we compared the validity of “typical” week and “past” week reports using data from the 1989 National Election Pilot Study, in which respondents were randomly assigned to report TV news and newspaper exposure during either a typical week or the past week. The predictive validity of the measures was assessed using objective tests of current events and political knowledge, as well as self-report assessments of political knowledge. The typical week question form proved to be consistently superior, especially among the most educated respondents. We encourage further attempts to replicate the current findings in other domains of behavioral frequencies.

Designing Good Questionnaires


(with Leandre Fabrigar)

Thousands of experimental studies have compared the effectiveness of questionnaire items written in different ways, yet these studies have never been brought together in a single review. We are now completing a book doing just that. We will draw upon this literature to recommend when to use open-ended vs. closed-ended questions, when to use rating scales vs. ranking tasks, how many points to put on rating scales and how to label the points verbally, how the order of response choices influences answers, whether to offer “don’t know” response options, how to word and order questions, whether to ask people to recall their attitudes at prior times, and whether to ask people to explain the causes of their thinking and actions. The result of our efforts is an empirically validated set of recommendations about how to maximize the reliability and validity of data collected via questionnaires.

Exploring the Impact of Sequential Ordering on the Interpretation of Fully Verbally Labeled Ratings Scales


(with Annabell Suh and Philip Garland)

Rating scales often contain fully verbally labeled response options presented in a sequential order (e.g., from extremely positive to extremely negative). There is some evidence that presenting response options sequentially aids respondents. If response options are vague, for instance, the order of presentation might help to clarify the meaning of each response option. On the other hand, other evidence suggests that verbal labels are clear enough on their own; respondents may, then, only be taking into account the literal meanings of the response options and not any context information.

This series of studies aims to examine these two questions: first, how respondents interpret the meaning of a response option scale and second, whether verbal labels are clear and unambiguous on their own. It compares the data quality and respondent satisfaction of two types of rating scales: one presented in a sequential order and one presented in an entirely random order. Initial results suggest that respondents are not aided by and do not employ order information, indicating that verbal labels may be clear enough on their own. 

The Accuracy of Direct vs. Filtered Questions


(with Rajesh Srinivisan, Annabell Suh, and Philip Garland)

A seemingly simple frequency question, for example asking how many times one has seen a movie, can be presented in two different ways. It can be asked directly, as in, “How many times in the past week have you seen a movie?” Or it can be asked first as a yes/no filter question, “Have you seen a movie in the past week?”, with the follow-up question about how many times posed only if the answer is yes.

Previous research has found that these two different types of questions lead to different frequencies, with the direct question generally leading to higher frequencies than the filtered question does. This is the first study to examine which question type is more accurate and to determine why the difference occurs. It also rules out a previously suggested explanation for the difference. 

3) Public Attitudes on Global Warming

Partisanship and Attitudes about Global Warming


(with Ariel Malka)

Since the 1990s, the volume of information about global warming (GW) transmitted to the general public has increased dramatically.  This increase in coverage was initially sparked by the emergence of a scientific consensus that human-caused GW has, in fact, been occurring and that it may have devastating consequences (Intergovernmental Panel on Climate Change, 1995).  Despite the emergence of this consensus, the messages about GW conveyed to the general public during the last decade have often been mixed.  Much mainstream news coverage has suggested that GW is real, human-caused, potentially catastrophic, and something that the federal government should deal with, but a good deal of coverage has also presented various more skeptical views as well.  Perhaps driven by a desire to appear politically impartial and/or to cover all viewpoints fully, news media outlets have often quoted individuals conveying that human-caused GW is not happening, or that the government should not take aggressive action to deal with GW.

In this research, we are examining the impact of this information flow on Americans’ attitudes and beliefs about GW.  In one study, we compared survey responses from a representative American sample in 1998 to those of another representative American sample collected in 2007.  During those years, Americans became more likely to A) hold basic beliefs about GW that are congruent with the scientific consensus, B) favor greater federal government action to deal with GW, C) perceive agreement among scientists that GW has been happening, and D) possess stronger attitudes and beliefs about GW.  However, these increases occurred only among Democrats and Non-Partisans, not among Republicans.  Consequently, Republicans and Democrats are now more different from one another than they were in the late 1990s.

In another study, we examined the relation between knowledge about GW and concern about this issue.  Information campaigns about GW are often predicated on the assumption that learning more about GW will lead people to become more concerned about it.  Using data from three surveys of nationally representative samples of American adults, we found that the relation between knowledge and concern about GW is more complex than this view suggests.  Among people who trust scientists to provide reliable information about the environment and among Democrats and Non-Partisans, increased knowledge has been associated with increased concern.  But among people who are skeptical about scientists and among Republicans, more knowledge was generally not associated with greater concern.  The association of knowledge with concern among Democrats and Non-Partisans who trust scientists was mediated by perceptions of consensus among scientists about GW’s existence and by perceptions that humans are a principal cause of GW.  Thus, when studying the relation of knowledge and concern, it is important to take into account the content of the information that different types of people acquire and choose to rely upon.

The Development of Public Beliefs and Attitudes about Global Warming


(with Penny Visser, Allyson Holbrook, & Laura Lowe)

In September, 1995, the international community of scientists who study the environment announced that they had come to a new consensus that global warming has been occurring as the result of human activities and that it will have very significant and costly consequences for the world unless some steps are taken to slow its development. This new consensus was reported to Americans via television news programs and in newspapers, but these two media carried slightly different messages. Whereas the television messages simply acknowledged the new scientific consensus, newspaper stories acknowledged that a minority of scientists disagreed with this position, and newspaper stories published in October and November, 1995, were especially skeptical.

In December, 1995, we conducted a telephone survey of a representative sample of Ohio adults to study the diffusion and impact of this information. And in short, we found that people formed their beliefs about whether or not global warming is real using both news media information and their own personal experiences. Television exposure did indeed encourage people to believe more in the existence of global warming, whereas newspaper exposure discouraged such a belief. But these media effects occurred only among people who were highly trusting of scientists to provide accurate information. People who were distrusting of scientists based their assessments of the existence of global warming on their own first-hand observations of changes in temperature and air pollution levels in recent years. Those who thought temperatures had gotten warmer and who thought pollution had increased were especially likely to believe in global warming.

We also examined the origins of people’s attitudes toward global warming. Although most people thought global warming would be negative, some felt it would be neither positive nor negative, and a few actually thought it would be positive overall. And these attitudes were apparently driven by people’s beliefs about its impact on factors immediately relevant to people’s daily lives: food, water, and shelter. People who believed global warming would hurt food and water supplies and would flood coastal living areas held negative attitudes. In contrast, global warming’s perceived impact on the beauty of natural scenery, on processes of plant and animal species extinction, on animal migration, and the like was inconsequential. Therefore, it appears that people’s attitudes were driven by their beliefs about the immediate material interests of society.

This survey project also allowed us to explore some general issues in the attitude literature. For example, we examined whether four dimensions of attitude strength (attitude importance, prior thought, certainty, and perceived knowledge) are all reflections of a single underlying construct. And although a factor analysis of them yielded a single factor, they were correlated quite differently with demographic variables, psychological antecedents, and a measure of the magnitude of the false consensus effect. This evidence reinforces the general conclusion that attitude strength is not a unitary construct.

In September and October, 1997, we conducted a telephone survey of a representative sample of adults.  In December, 1997 through February, 1998, we re-interviewed a portion of those interviewed in September and October, as well as an additional representative sample of adults who had not previously been interviewed.  Between these two sets of interviews, the White House Conference on Global Climate Change occurred, and hundreds of stories on global warming were broadcast on television and radio, and published in newspapers and magazines across the country.  Our goal was to re-examine our findings from the Ohio survey with a national sample and to study how this media coverage changed public beliefs and attitudes about global warming.

On the surface, American public opinion about global warming did not seem to change in response to media coverage of the issues.  However, changes did occur when party identification was considered.  Strong Democrats moved in the direction of the message coming from the White House (i.e., that global warming would happen, that it would be bad, and that something should be done about it) while strong Republicans moved in the opposite direction.  So even though overall attitudes did not change, opinions polarized along party lines.  In addition to this partisan polarization, the media attention led the public to do more thinking about the issue of global warming and to be more certain of their opinions about global warming.  People were also able to report their opinion about global warming more quickly during the second set of interviews, one indicator that people’s opinions about global warming were more crystallized after the media campaign.

4) Attitude Strength and Issue Publics

Exploring the Latent Structure of Strength-Related Attitude Attributes


(with Penny Visser & George Bizer)

Some attitudes are durable and impactful, whereas others are weak and inconsequential. Over the last few decades, researchers have identified roughly a dozen attributes of attitudes that differentiate the strong from the weak. However, considerable controversy remains regarding the relations among these attributes. Some scholars have suggested that the various strength-related attributes reflect a small number of latent constructs, whereas others have suggested that each is a distinct construct in its own right. We review this ongoing controversy, and we then review a diverse set of recent studies that provide new evidence in support of the latter perspective. We consider the implications of our findings for the conceptualization of attitude strength and for the methods by which it is studied.

The Measurement of Attitudes


(with Charles Judd & Bernd Wittenbrink)

Attitude measurement is pervasive. Social psychologists routinely measure attitudes when studying their causes (e.g., Fishbein & Ajzen, 1975; Tesser, Whitaker, Martin, & Ward, 1998; Zajonc, 1968), how they change (e.g., Festinger, 1957; Hovland, Janis, & Kelley, 1953; Petty & Cacioppo, 1986) and their impact on cognition and behavior (e.g., Lord, Ross, & Lepper, 1979). Attitude measurement is also frequently done by political scientists, sociologists, economists, and other academics. Commercial market researchers are constantly engaged in measuring attitudes toward real and imagined consumer products and services. Beginning in the 1990s, all agencies of the U.S. federal government initiated surveys to measure attitudes toward the services they provided. And the news media regularly conduct and report surveys assessing public attitudes toward a wide range of objects. One of the most consequential examples is the routine measurement of Americans’ approval of their president.

To gauge people’s attitudes, researchers have used a wide variety of measurement techniques. These techniques have varied across history, and they vary across professions today. This variation is due both to varying philosophies of optimal measurement and varying availability of resources that limit assessment procedures. When attitude measurement was first formalized, the pioneering scholars presumed that an attitude could be accurately assessed only using a large set of questions that were selected via an elaborate procedure (e.g., Likert, 1932; Thurstone, 1928). But today, attitudes are most often assessed using single questions with relatively simple wordings and structures, and the variability of the approaches is striking, suggesting that there is not necessarily one optimal way to achieve the goal of accurate measurement.

Recently, however, scholars have begun to recognize that the accumulating literature points to clear advantages and disadvantages of various assessment approaches, so there may in fact be ways to optimize measurement by making good choices among the available tools. Furthermore, some challenging puzzles have appeared in the literature on attitude measurement that are stimulating a reevaluation of widely shared presumptions. This makes the present a particularly exciting time for reconsidering the full range of issues relevant to attitude measurement.

In this chapter, we offer a review of issues and literatures of use to researchers interested in assessing attitudes. We begin by considering the definition of attitudes, because no measurement procedure can be designed until the construct of interest has been specified. We review a range of different definitions that have been adopted throughout the history of social psychology but settle on one that we believe captures the core essence of the notion of attitudes and that we use to shape our discussions throughout.

Because attitudes, like all psychological constructs, are latent, we cannot observe them directly. So all attitude measurement depends on those attitudes being revealed in overt responses, either verbal or nonverbal. We, therefore, turn next to outlining the processes by which we believe attitudes are expressed, so we can harness those processes to accurately gauge the construct. Finally, we outline the criteria for optimal measurement that we use throughout the rest of the chapter: reliability, validity, and generalizability.

Having thus set the stage, we turn to describing and evaluating various techniques for measuring attitudes, beginning with direct self-reports (which overtly ask participants to describe their attitudes). We outline many ways by which a researcher can design direct self-report measures well and less well. Next, we acknowledge the limits of such direct self-reports. A range of alternative assessment techniques, some old and others very new, have been developed to deal with these limitations, and we review those techniques next.

Attitude Importance and Attitude Accessibility


(with George Bizer)

Some scholars have argued that people use attitude accessibility as a heuristic with which to infer attitude importance, whereas others have argued that importance causes accessibility. Through a series of experiments, we have examined the relation between these constructs.  We failed to find an effect of accessibility on importance, whereas we did find effects of importance on accessibility.  These findings have helped us to better understand the relation between these two constructs and, perhaps more importantly, the underlying structure of attitude strength in general.  Specifically, it appears that importance and accessibility represent distinct constructs, and some apparent effects of importance may be mediated by accessibility.

The Development of Attitude Strength Over the Life-Cycle


(with Penny Visser)

A number of theories posit that people’s attitudes become stronger as they get older, though they disagree on exactly how and when this might occur. Using data from national and regional surveys of adults, we have found that people’s political attitudes are especially open to change between ages 18-25, become more resistant to change immediately thereafter, and become more open to change again at the end of the life-cycle. Other manifestations of attitude strength (e.g., the personal importance of attitudes, the confidence with which they are held, and the amount of knowledge people feel they have) also show this same surge and decline.

We have recently expanded this program of research in a number of ways.  First, we are exploring the generalizability of our results to other attitude domains.  Because our research, like virtually all of the investigations that have preceded it, has focused on attitudes toward social and political issues, it is not clear whether the observed pattern of openness to attitude change is unique to social and political attitudes, or whether it describes age-related fluctuation in openness to change more generally.

Second, in addition to resistance to attitude change, we are exploring changes over the life span in some of the other defining qualities of attitude strength. Specifically, we are assessing changes over the life span in the degree to which attitudes (1) motivate and guide behavior and (2) direct information processing.

Finally, we are moving beyond a simple description of the relation between age and openness to attitude change to explore the causal underpinnings of this relation.  Specifically, we are testing several possible mediators of the relation between age and openness to change, including (1) changes in the size, composition, and frequency of contact with people’s social networks, (2) changes in the frequency of role transitions and new social identifications, (3) changes in the nature of people’s self concept, and (4) changes in cognitive functioning over the life span.

This program of research promises to enrich our understanding of the aging process and refine our appreciation of the adult life cycle.  Equally important, however, this research will contribute to a broader understanding of the social and psychological factors that determine susceptibility to attitude change in general.

How People Form Political Attitudes


(with Allyson Holbrook)

Many researchers have argued that citizens combine information about political candidates by simply subtracting the number of unfavorable beliefs they have about a candidate from the number of favorable beliefs they have about the candidate.  This describes a symmetric linear process.  It is symmetric because favorable and unfavorable beliefs have the same magnitude of impact on attitudes.  It is linear because as beliefs are added, they have the same amount of impact as earlier beliefs.  For example, five beliefs have five times as much impact as one belief.  In addition, according to a symmetric linear model (SLM), citizens who have no favorable or unfavorable beliefs about a candidate have neutral attitudes toward him or her.

Work in psychology adopting a behavioral adaptive perspective suggests a number of amendments to the SLM.  According to this perspective, human cognitive and behavioral processes develop because they facilitate survival and reproduction in a potentially hostile world.  Approaching any new object with favorable expectations is worthwhile, because it could be food or could facilitate acquisition of food.  However, vigilantly scanning for any signs of danger an object might pose is also important, so that harmful objects can be avoided.  In the absence of any information about an object, then, attitudes toward it should be slightly positive.  And people should be especially attentive to the first information they receive about an object in order to form an accurate first impression.  Then, if the object appears to pose no immediate threat, vigilance can taper off, so that the impact of each additional piece of information acquired about the object may diminish.  However, because one must vigilantly scan for signs of danger in any object, unfavorable information should have more impact than favorable information, and vigilance to additional unfavorable information should not taper off to the same degree as attention to additional favorable information.

The model we propose, the asymmetric nonlinear model (ANM), is based on this approach and makes three predictions about attitudes toward political candidates that differ from those of the SLM. First, citizens who have no favorable or unfavorable beliefs about a candidate should have slightly positive attitudes toward him or her. Second, new information should have less impact as the amount of previously acquired information increases. And third, unfavorable information should have greater impact than favorable information, and/or as the amount of previously acquired information increases, the impact of unfavorable information should decrease more slowly than the impact of favorable information.
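To make the contrast concrete, the two models can be sketched as simple functions of the counts of favorable and unfavorable beliefs. This is an illustrative sketch, not the estimated specification from the study: the baseline value, the log functional form, and all coefficient values are assumptions chosen only to exhibit the ANM's three predicted properties.

```python
# Hypothetical sketch of the SLM vs. an ANM-style functional form.
# p = number of favorable beliefs, n = number of unfavorable beliefs.
import math

def slm(p, n, b=1.0):
    # Symmetric linear model: favorable and unfavorable beliefs weigh
    # equally, each added belief has constant impact, and zero beliefs
    # yield a neutral (zero) attitude.
    return b * (p - n)

def anm(p, n, baseline=0.5, b_pos=1.0, b_neg=1.5):
    # ANM-style form (assumed for illustration): a slightly positive
    # baseline when no information is held, diminishing impact of each
    # additional belief (log form), and unfavorable beliefs weighted
    # more heavily than favorable ones (b_neg > b_pos).
    return baseline + b_pos * math.log(1 + p) - b_neg * math.log(1 + n)
```

With these hypothetical values, `anm(0, 0)` is slightly positive, each added belief matters less than the one before it, and a first unfavorable belief moves the attitude more than a first favorable one, mirroring the three predictions above.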

We compared the SLM and the ANM using National Election Study (NES) data from presidential elections from 1972 to 1996.  Cross-sectional NES data showed that the ANM describes attitudes toward presidential candidates and political parties better than the SLM among respondents high and low in political involvement (measured using education, political knowledge, time of voting decision, and whether or not respondents voted).  Longitudinal NES data (collected before and after presidential elections between 1980 and 1996) showed that the ANM outperforms the SLM in describing the impact of beliefs on changes over time in attitudes toward presidential candidates.  And the ANM revealed that voter turnout is enhanced by a stronger preference for one’s preferred candidate, as long as at least one candidate is disliked, whereas the SLM failed to detect this effect.  Thus, the ANM appears to be superior to the SLM, and the ANM has important implications for understanding the impact of election campaigns on citizens’ preferences and actions.

Gauging the Attitude-Behavior Relation


(with Christopher Bryan)

Social psychologists have long been interested in the extent to which behavior is consistent with attitudes.  One approach to the study of this topic is to measure features of attitudes that relate to the strength of the attitudes. A great deal of research has shown that certain features of attitudes are related to the strength of the attitude-behavior relation.  Examples of such features are the personal importance of the attitude object and the certainty with which the attitude is held.  These features are referred to as Strength-Related Attitude Features (or SRAFs).

Our research focuses on a methodological issue related to the way in which the effects of SRAFs are determined.  Existing social psychological research on this topic has used one of two types of statistical analysis to gauge the effect of SRAFs on the attitude-behavior relation.  The first assumes that, to the extent that people engage in behavior related to an attitude object, it will be consistent with their attitude.  For example, it is assumed that if a person is opposed to legalized abortion and signs a petition on the issue, that person will sign a petition against legalized abortion and not for it.  Making this assumption, some studies have tested whether various SRAFs predict the number of attitude-expressive behaviors a person performs, ignoring the types of behaviors.

A second approach does not make any assumptions about valence matching between attitudes and behavior.  Studies that used this approach treated the direction of a person’s attitude as a variable in the analysis and tested whether various SRAFs interacted with a person’s attitude to predict behavior, taking into account the direction of that behavior.

Although both of these approaches are designed to test the same theoretical question, we have found that, in our data on attitudes about legalized abortion, the two methods yield very different results.  This might help to explain the fact that different studies of SRAF effects have sometimes yielded contradictory results.  Therefore, attitude researchers should choose their analytic method carefully, because that choice can affect their results.
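The difference between the two analytic strategies can be illustrated with a small sketch on synthetic data; the variable names, the data-generating process, and the use of ordinary least squares are assumptions for illustration, not the study's actual measures or models.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400

# Synthetic stand-in data: attitude direction (e.g., pro vs. anti coded
# +1/-1), an SRAF such as attitude importance, and a signed behavior
# measure in which positive values are "pro" acts and negative values
# are "anti" acts.
attitude = rng.choice([-1.0, 1.0], n)
sraf = rng.uniform(0.0, 1.0, n)
behavior = attitude * (1 + 3 * sraf) + rng.normal(0.0, 1.0, n)

def fit(predictors, y):
    # Ordinary least squares; returns [intercept, slopes...].
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Approach 1: assume valence matching, so only the *amount* of
# attitude-expressive behavior matters; regress it on the SRAF alone.
count_model = fit([sraf], np.abs(behavior))

# Approach 2: keep the direction of attitudes and behavior, and test
# whether the SRAF moderates the attitude-behavior relation via an
# attitude x SRAF interaction term.
interaction_model = fit([attitude, sraf, attitude * sraf], behavior)
```

In this contrived data the interaction coefficient in the second model recovers the moderating effect directly, while the first model conflates it with anything else that raises behavioral frequency, which is one way the two approaches can come apart.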

Presidential Approval and Gas Prices: The Bush Presidency in Historical Context


(with Laurel Harbridge)

During the last two years, journalists and scholars have speculated about the possibility that rising gasoline prices have caused the decline in President Bush’s approval ratings.  But documenting causality has been difficult, because both variables have trended together, albeit in opposite directions, since 2002.  In order to separate correlation from causation, we created a monthly time series from 1976 to 2006 to place the relation between gas prices and approval in the context of numerous presidential administrations.  Controlling for traditional economic, event, and scandal predictors of presidential approval, we implemented time series analysis to assess the historical relation between gas prices and approval and to test whether President George W. Bush’s approval has been impacted differentially by gas prices.  In addition, we tested whether data from different polling agencies produce different results and whether an average measure of approval is sufficient, collapsing across “houses”.  Preliminary evidence indicates that, when controlling for other factors, gas prices have not been a significant factor in determining presidential approval either in the past or during the current Bush presidency but that they were powerful determinants of approval during Jimmy Carter’s presidency, when the White House’s responsibility for this aspect of the economy was much more apparent to Americans.

 A Reexamination of the False Consensus Effect: Projection or Conformity?


(with Lori Gauthier)

Most explanations of the false consensus effect (FCE) presume that people exaggerate the extent to which others share their own attitudes, as the result of projection from self to others. Surprisingly, the accumulated evidence on this issue has rarely tested this proposition directly. We returned FCE research to its most basic level by examining the relation between the effect’s key variables. Study 1 manipulated respondents’ attitudes to test whether those attitudes shape their perceptions of others. Study 2 adopted procedures analogous to the Asch experimental paradigm to test whether perceptions of others’ attitudes influence one’s own attitude via conformity. Altering participants’ attitudes did not impact their perceptions of others’ attitudes, but manipulating participants’ perceptions of others’ attitudes did influence their own attitudes. These results challenge the widely held belief that people project their attitudes onto others and suggest that conformity drives the FCE.

 The Impact of Policy Change Threat on Financial Contributions to Interest Groups


(Joanne M. Miller, Jon A. Krosnick, Allyson Holbrook, Laura Lowe, Alex Tahk)

Many scholars have proposed that citizen activism intended to influence government policy is motivated primarily by citizen dissatisfaction with current life circumstances.  Less prominent—and never directly tested—is the notion that the perceived threat of undesirable policy change could motivate political behaviors aimed at averting the threat.  In this research, we explore the relationship between threat perception and financial contributions to interest groups, using data from three representative sample surveys.  Because the financial contributions do not appear to fit any standard parametric distribution, we rely on non-parametric tests of differences in means and on regressions with bootstrapped standard errors to ensure that our findings are robust to assumptions about the distribution of financial contributions.
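A minimal sketch of the bootstrapped-regression idea, using invented data rather than the survey data described above (the variable names and the skewed data-generating process are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: a binary threat-perception indicator and a heavily
# right-skewed contribution amount, mimicking financial contributions.
n = 500
threat = rng.integers(0, 2, size=n).astype(float)
contrib = np.exp(0.5 * threat + rng.normal(0, 1, size=n))

def ols_slope(y, x):
    """Slope from a bivariate OLS regression of y on x."""
    X = np.column_stack([np.ones_like(x), x])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs[1]

# Bootstrap: resample respondents with replacement and re-estimate the slope;
# the spread of the bootstrap estimates gives a standard error that does not
# depend on parametric assumptions about the contribution distribution.
boot = np.array([
    ols_slope(contrib[idx], threat[idx])
    for idx in (rng.integers(0, n, size=n) for _ in range(2000))
])
slope = ols_slope(contrib, threat)
se = boot.std(ddof=1)
```

The bootstrap standard error replaces the usual OLS formula, which can be misleading when the outcome is as skewed as contribution amounts typically are.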

The research demonstrates that the threat of an undesired policy change did motivate financial contributions to interest groups.  There is also weak evidence that policy change threat motivated activism most when people attached great personal importance to a policy issue and when they had sufficient resources to permit participation.

 Self-Interest, Values, Involvement, and Susceptibility to Attitude Change


(with Stephanie Lampron and Eric Shaeffer)

In the social psychological literature on attitude change, there has been an ongoing debate regarding how being personally involved in an issue impacts attitude change.  One perspective has been that involvement in an issue can be separated into two types: one based on self-interest and one based on value-relevance.  It was hypothesized that these two constructs would have separate effects on attitude change.  The other perspective is that involvement need not be separated into its more specific components, because they will all have the same effect on attitude change.  No previous studies have provided an adequate framework from which to test these hypotheses.

An experimental framework was created to test the effects of self-interest and value-relevance on attitude change.  In the study, participants were led to connect the issue of comprehensive graduation exams to either their self-interest or their values before they read either a strong or a weak counter-attitudinal message about the issue, which consisted of appeals to both self-interest and values.

The findings provide preliminary support for the idea that value-relevance interacts with message strength in the same manner as self-interest to affect attitude change.  In addition, high value-relevance also led to a greater resistance to attitude change than high self-interest when messages were weak, indicating that separate effects may also be plausible.

Currently, this study is being re-run with new self-interest and value-relevance manipulations as well as clearer definitions of self-interest and values. The results will hopefully elucidate the earlier findings while providing greater insight into how involvement is best conceptualized as a variable in attitude change.

 Attitude Strength, Threat, and Political Activism


(with Joanne Miller & Laura Lowe)

A great deal of research has explored the psychological origins of citizen activism intended to influence government policy in democratic societies.  Although various scholars have speculated that one motivator of such behavior is the perception of a credible threat of policy change in an undesirable direction, this hypothesis has never been directly tested using data on individual citizens and their perceptions.  Our research tests the hypothesis that policy change threat leads to activism.  More specifically, we are testing the effect of a citizen’s belief that a piece of legislation that he/she does not want to see passed (whether for health or safety reasons, or just personal preference) might actually become law.

We suspect that such threats will be more likely to inspire activism among some individuals than others, one determining factor being the personal importance of the issue to the individual. To a citizen who attaches a great deal of importance to an issue, an undesirable policy change would be personally devastating.  But no matter how much significance people attach to an issue, they cannot be immediately and vigorously responsive to a threat unless they have the necessary resources in terms of time and/or money.  Resources have, of course, long been recognized as very important determinants of activism.  But in all past research, resources have been statistically treated as having direct effects on activism, as if simply having more money or more free time, in and of itself, directly inspires political action.  Although our reasoning implies that the effect of resources on activism depends on certain conditions, this hypothesis has surprisingly never been explicitly tested, which we have now done.

We have conducted a series of studies to test the following hypotheses: 1) that policy change threat leads to political activism; 2) that the effect of policy change threat on activism is moderated by personal importance; and 3) that the effect of policy change threat on activism is moderated by income.  These studies have employed a variety of methods, including nationally representative telephone surveys and a field experiment, and have examined threat in a variety of contexts, including attitudes toward the environment, abortion, and presidential candidates.

The results of these studies have been quite compelling: in all cases, the threat of an undesirable policy change motivated activism aimed at preventing the change.  In addition, we have found that personal importance amplifies the effect of threat on activism, as does income when the type of activism examined is financial contributions (an act for which income is a necessary resource).  We are currently conducting a nationally representative telephone survey experiment and a laboratory experiment to extend our findings and provide additional causal evidence of the effect of threat on activism.

5) Cigarette Smoking

6) Contingent Valuation

Valuing Environmental Goods via Contingent Valuation: Comparing Referendum and Direct Question


(with Eric Shaeffer, Stephanie Lampron, Penny Visser, Trevor Thompson, and Daniel Schneider)

During the last two decades, numerous surveys have been done to assess the monetary value that people place on the existence of various natural resources that have been damaged by human actions. For example, the Exxon Valdez oil tanker damaged ecosystems in Prince William Sound, Alaska, and surveys later sought to ascertain the value of those ecosystems for litigation. This sort of survey research has been called “contingent valuation”.

A heated debate has been raging in the contingent valuation literature about how best to measure these monetary values. Some scholars argue that it is best to ask respondents referendum questions about whether they would be willing to pay a specified amount of money to prevent the same sort of damage from happening again in the future. Different respondents are randomly assigned to be asked about different amounts, and everyone’s answers are used to assess the public’s total willingness to pay. Other scholars argue that this approach may bias estimates of willingness to pay by anchoring thinking on the particular values offered. Therefore, these scholars suggest, it is preferable to ask direct questions about how much respondents would be willing to pay.
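The referendum design can be illustrated with a small simulation. All of the numbers below (bid amounts, sample sizes, and the willingness-to-pay distribution) are invented, and the aggregation step uses a Turnbull-style lower-bound estimator, one common way to turn yes/no referendum answers into a mean willingness to pay; it is a sketch of the design, not the estimator used in this project.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented referendum design: five bid amounts, 200 respondents per bid,
# and a lognormal willingness-to-pay (WTP) distribution.
bids = np.array([5.0, 25.0, 50.0, 100.0, 200.0])
assigned = np.repeat(bids, 200)                    # each respondent sees one bid
true_wtp = rng.lognormal(mean=3.5, sigma=0.8, size=assigned.size)
says_yes = true_wtp >= assigned                    # accept if WTP covers the bid

# Proportion saying yes at each bid = empirical survival function of WTP
surv = np.array([says_yes[assigned == b].mean() for b in bids])
surv = np.minimum.accumulate(surv)                 # enforce monotonicity

# Turnbull-style lower bound on mean WTP: credit each respondent with the
# highest bid they would accept (and zero below the lowest bid).
surv_next = np.append(surv[1:], 0.0)
mean_wtp_lb = float(np.sum(bids * (surv - surv_next)))
```

Because each respondent answers only one yes/no question, mean willingness to pay must be recovered from the pattern of acceptance rates across randomly assigned bids, which is exactly where the choice of bid amounts can matter.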

We initiated a project assessing the impact of anchoring on responses to these sorts of referendum questions. By experimentally comparing referendum questions to direct questions, we find that they do yield significantly different results. On average, the referendum questions yielded higher estimates of mean maximum willingness to pay than did direct questions.  However, the referendum questions and direct questions both manifested equivalently high levels of correlational validity via their theoretically sensible relations with an array of predictors.

 Comparing Open and Closed Questions to Measure Non-Use Values of Public Goods


(with Eric Shaeffer, Stephanie Lampron, Penny Visser & Trevor Thompson)

During the last two decades, numerous surveys have been done to assess the monetary value that people place on the existence of various natural resources that have been damaged by human actions. For example, the Exxon Valdez oil tanker damaged ecosystems in Prince William Sound, Alaska, and surveys later sought to ascertain the value of those ecosystems for litigation. This sort of survey research has been called “contingent valuation”.

A heated debate has been raging in the contingent valuation literature about how best to measure these monetary values. Some scholars argue that it is best to ask respondents closed-ended questions about whether they would be willing to pay a specified amount of money to prevent the same sort of damage from happening again in the future. Different respondents are randomly assigned to be asked about different amounts, and everyone’s answers are used to assess the public’s total willingness to pay. Other scholars argue that this approach may bias estimates of willingness to pay by anchoring people’s thinking on the particular values offered. Therefore, these scholars suggest, it is preferable to ask open-ended questions about how much respondents would be willing to pay.

We initiated a project assessing the impact of anchoring on responses to these sorts of closed ended questions. We found that although offering values close to a person’s true willingness to pay can in fact pull his or her answers in the direction of the anchor, anchors that are far from people’s true willingness to pay have no such biasing impact. Furthermore, as long as anchors are spread across a wide range of dollar values, the particular values chosen do not seem to bias estimated total willingness to pay. Thus, the open-ended and closed-ended questioning approaches perform comparably. This finding enhances our understanding of the comparative functioning of open-ended and closed-ended questions more generally.

7) News Media Influence

News Media Agenda-Setting and Priming


(with Joanne Miller)

A great deal of literature has shown that the news media have the ability to influence people’s political judgments. One particular media effect is agenda-setting: the notion that, by paying attention to a particular national problem, the media can induce people to cite it as the most important national problem. A second media effect is priming: the idea that prolonged focus on a political issue can lead Americans to derive their overall evaluations of the President’s job performance primarily from his handling of that issue.

We tested a widely held assumption about the cognitive mechanism responsible for these effects: accessibility. In short, scholars have presumed that media attention to an issue makes attitudes and beliefs about that issue especially accessible, which leads people to select the issue as the country’s most important and to place weight on it when evaluating presidential performance. However, our laboratory studies clearly refuted these hypotheses by showing that although news media attention to a problem did increase the accessibility of relevant attitudes and knowledge, this increase in accessibility did not mediate either effect. Furthermore, agenda-setting did not mediate priming; that is, considering a problem to be the nation’s most important did not lead people to place greater weight on it when evaluating presidential performance. These findings challenge prevailing wisdom about these news media effects and encourage future research seeking to identify the mechanisms that are in fact at work.

News Media Priming: Derivation or Rationalization?


(with Brent Bannon, Stanford University, and Laura Brannon, Kansas State University)

A great deal of evidence is consistent with the news media priming hypothesis, but no past study has yet used survey data to directly test whether news media coverage of an issue increases the consistency between domain-specific evaluations and overall evaluations of the president via derivation, rationalization, or both.  The research reported here applied covariance structure modeling to longitudinal data from the 1990-1992 American National Election Study panel survey to gauge the impact of the surge in media coverage of the economy between 1991 and 1992 on derivation and rationalization of overall evaluations of President George H. W. Bush.  All analytic approaches yielded support for the same conclusion: increased media attention to the economy increased derivation and reduced rationalization, consistent with presumptions about the workings of news media priming.  The reduction in rationalization caused by media coverage of an issue most likely means that past studies have under-estimated the magnitude of priming.

8) Racism and Prejudice

The Impact of Interviewer Race and Gender on Survey Results: The Case of Pre-Election Polls


(with Mario Callegaro, Robert P. Daves, Femke De Keulenaer, Daniel Schneider, and Yphtach Lelkes)

Most discussions of bias due to the race and gender of a survey interviewer center on the quality and veridicality of the responses given by the survey respondent. That is, respondents are thought to bow to social desirability pressures and give answers that they believe will be less offensive or more appealing to the interviewer given his/her race or gender. However, an often overlooked but potentially damaging source of error involves interviewer recruitment differences due to race and gender.  That is, interviewers of some races or genders may be especially effective at recruiting respondents with matching races or genders, thereby inducing sample composition bias.

Hierarchical regression models were used to disentangle various sources of interviewer error in a survey conducted in Minneapolis during the 2001 mayoral election, when a black female incumbent was challenged by a white male. Stated vote intention and favorability toward the incumbent increased when the interviewer was a black female. Furthermore, survey recruitment also varied by the race and gender of the interviewer, as expected.
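The clustering that hierarchical models address can be sketched with simulated data. The interviewer counts and variance components below are invented, and the intraclass correlation is computed with a simple one-way ANOVA estimator rather than the full hierarchical model used in the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented nesting structure: 40 interviewers, 25 respondents each, with an
# interviewer-level random intercept added to respondent-level noise.
n_int, n_resp = 40, 25
interviewer_effect = rng.normal(0, 0.4, size=n_int)     # between-interviewer variation
y = interviewer_effect[:, None] + rng.normal(0, 1.0, size=(n_int, n_resp))

# One-way ANOVA estimator of the two variance components
group_means = y.mean(axis=1)
within_var = y.var(axis=1, ddof=1).mean()
between_var = max(group_means.var(ddof=1) - within_var / n_resp, 0.0)
icc = between_var / (between_var + within_var)   # share of variance at interviewer level
```

A nonzero intraclass correlation means responses from the same interviewer are not independent, which is precisely why interviewer effects must be modeled hierarchically rather than ignored.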

Thus, this study indicates that interviewer race and gender may introduce bias in two different steps: recruitment and response generation.

9) Best Practices in Science

Best Practices in Science Conference

The aim of science is to accurately depict some aspect of the world. Recently, problems in science have become increasingly visible, from failures to replicate findings in many different fields to questionable research practices and unethical behavior. After many discussions at Stanford University’s Center for Advanced Study in the Behavioral Sciences, a group of scholars, led by Jon Krosnick, Lee Jussim, and Simine Vazire, convened the Best Practices in Science Conference at Stanford University on June 18-19, 2015. The conference gathered experts in the field to present research and discuss problems in science, their causes, and possible solutions.
 
We are writing a report that summarizes the insights from the discussions at the conference and provides empirical research questions designed to examine the extent of these problems, their sources and causes, and the impact of possible solutions. These research questions can be used in future studies to investigate empirically how scientific practice can best be improved and to further develop the behavioral science of scientific practice.
 
Funding for the conference was provided by the Fetzer Franklin Fund.

The Replication Project

(with Sebastian Lundmark)

The reproducibility of scientific findings has become an issue of growing concern in science. Disciplines including medicine, psychology, genetics, and biology have been repeatedly challenged by findings that are not as robust as they initially appeared. Shrinking effects and outright failures of replication raise questions not only about the specific findings they challenge, but more generally about the confidence that we can have in published results that have not yet been verified independently.

The Replication Project aims to investigate these issues through a collaboration between the University of California Santa Barbara, the University of California Berkeley, the University of Virginia, and Stanford University. Four research teams, one at each university, will individually produce new experimental treatment effects, which they will share with the other universities, which will in turn attempt to replicate the initial findings. This form of structured collaboration allows the four research teams to participate in one of the first multi-site, multi-replication meta-science experiments in social psychology, with the goal of investigating the reproducibility and potential decline of brand-new scientific findings.

The project is funded by the Fetzer Franklin Fund.