
2010-2011 Spring Colloquia

Event Details:

Monday, April 11, 2011 - Monday, May 16, 2011

Schedule

Monday, April 11, 2011

  • Caught in the Draft: The Effects of Vietnam Draft Lottery Status on Political Attitudes

    ABSTRACT: The 1969 Vietnam draft lottery assigned numbers to birth dates, determining which young men would be called to fight in Vietnam. We exploit this natural experiment to examine how draft vulnerability influenced political attitudes. Data are from the Political Socialization Panel Study, which surveyed high school seniors from the Class of 1965 before and after the national draft lottery was instituted. Males holding low lottery numbers became more anti-war, more liberal, and more Democratic in their voting compared to those whose high numbers protected them from the draft. They were also more likely than those with safe numbers to abandon the party identification they had held as teenagers. Trace effects are found in re-interviews from the 1990s. Draft number effects exceed those for pre-adult party identification and are not mediated by military service. The results show how profoundly political attitudes can be transformed when public policies directly impact citizens' lives.

    Laura Stoker

    Laura Stoker, Director of Graduate Affairs, Department of Political Science, University of California, Berkeley

Monday, April 18, 2011

  • Measuring Americans' Issue Priorities: A New Version of the Most Important Problem Question Reveals More Concern About Global Warming and the Environment

    ABSTRACT: For decades, numerous surveys have asked Americans the "Most Important Problem" (MIP) question: "What do you think is the most important problem facing this country today?" Global warming and the environment have rarely been cited by more than a tiny fraction of respondents in these surveys in recent years, which might seem to suggest that these have not been the most important issues to Americans.

    This paper explores the possibility that an additional method of assessing the public's priorities might support a different conclusion. Three experiments embedded in national surveys (two conducted via the Internet, the third by telephone) show that when asked the traditional MIP question, respondents rarely mentioned global warming or the environment, but when other respondents were asked to identify the most serious problem that will face the world in the future if nothing is done to stop it, global warming and the environment were the most frequently mentioned problems. Furthermore, a large majority of Americans indicated that they wanted the federal government to devote substantial effort to combating problems that the world will face in the future if nothing is done to stop them. Thus, future surveys might include both versions of the MIP question to more fully document Americans' priorities.

    Sam Larson

    Sam Larson, Senior Undergraduate, Public Policy, Stanford University

  • Optimizing the Design of a Question Intended to Measure Expected Starting Salary

    ABSTRACT: Past research suggests that when answers can be either numeric or categorical, open-ended questions yield more accurate measurements than closed-ended questions. However, many researchers prefer to ask closed questions to measure income on the assumption that an open question might be perceived as intrusive and might ask for more precision than respondents can provide. This paper reports an experimental comparison of question formats in terms of item non-response, unit non-response, and the distribution of answers. The optimal question format began by asking an open question; respondents who declined to answer the open question were then asked a closed question. In comparison to an open question alone or a closed question alone, the combination approach led to the lowest unit and item nonresponse rates and yielded the most refined data without distorting the distribution of answers. This study therefore suggests that the best measurement approach for collecting unbounded numeric answers might be the use of an open/closed pair of questions. (A schematic sketch of the open-then-closed sequencing follows this entry.)

    Curtiss Cobb

    Curtiss Cobb, Director of Survey Methodology, Knowledge Networks
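
    The open-then-closed sequencing described above can be sketched as a simple branching flow. The Python fragment below is an illustration only: the question wording, salary brackets, and function names are invented here, not taken from the paper.

      # Hypothetical sketch: ask an open numeric question first and fall back to a
      # closed (categorical) question only if the respondent declines or cannot answer.
      SALARY_BRACKETS = [
          "Under $30,000",
          "$30,000 to $49,999",
          "$50,000 to $74,999",
          "$75,000 or more",
          "Prefer not to say",
      ]

      def ask_expected_salary(ask):
          """Ask an open question; fall back to a closed question on refusal."""
          open_answer = ask("What yearly salary do you expect in your first job? "
                            "(enter a dollar amount, or leave blank to skip) ")
          if open_answer.strip():
              try:
                  return {"format": "open",
                          "value": float(open_answer.replace(",", "").replace("$", ""))}
              except ValueError:
                  pass  # unparseable answer; fall through to the closed form
          options = "\n".join(f"{i + 1}. {label}" for i, label in enumerate(SALARY_BRACKETS))
          choice = ask("Which range best describes the salary you expect?\n" + options + "\n> ")
          return {"format": "closed", "value": SALARY_BRACKETS[int(choice) - 1]}

      if __name__ == "__main__":
          print(ask_expected_salary(input))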

Monday, April 25, 2011

  • Measuring Perceptions of Probabilities: Verbal or Numerical Response Options?

    ABSTRACT: Researchers across disciplines often attempt to measure respondents' perceptions of probabilities. For example, respondents may be asked to estimate the likelihood that they will vote in an upcoming election, or to estimate the percent chance that they will lose their job in the next year. However, the measurement of perceived probabilities has been inconsistent: participants are sometimes asked to respond using verbal response options, such as "very likely" and "very unlikely," and other times asked to respond using numerical response options, such as "0%" or "100%." Since data quality is contingent on the validity of the measures, guidelines are needed so that researchers write the best possible question to gauge perceptions of probabilities. As a step in that direction, the concurrent validity of a question that used verbal response options was compared to that of the same question framed in numerical form. Respondents (N=1,203) were asked, as part of the 1998 ANES pilot study, to estimate their perceived probability of voting in upcoming elections and were randomly assigned to receive the question and response options either in verbal or numerical form. The associations between each question form and a number of correlates of voter turnout were compared across groups. In line with our predictions, the associations between the criterion variables and the numerical form of the question were consistently stronger than those for the verbal form. Surprisingly, this effect was not stronger among the more educated, whom we expected to be more adept at using percentages, than among the less educated. (A sketch of the concurrent-validity comparison follows this entry.)

    Yphtach Lelkes

    Yphtach Lelkes, PhD Candidate, Communication, Stanford University
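
    A minimal sketch of the concurrent-validity comparison, assuming a tidy respondent-level data file. The file name and column names (form, prob_vote, past_turnout, and so on) are hypothetical stand-ins, not the actual 1998 ANES pilot variable names, and the verbal options are assumed to have been recoded to an ordered numeric scale.

      # Compare, within each randomly assigned question form, how strongly the
      # probability-of-voting measure correlates with known correlates of turnout.
      import pandas as pd

      df = pd.read_csv("anes_pilot_subset.csv")  # hypothetical extract

      criteria = ["past_turnout", "political_interest", "strength_of_partisanship"]

      for form in ["verbal", "numerical"]:
          group = df[df["form"] == form]
          corrs = {c: round(group["prob_vote"].corr(group[c]), 3) for c in criteria}
          print(form, corrs)  # stronger correlations indicate higher concurrent validity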

  • Does Mentioning "Some People" and "Other People" in an Attitude Question Improve Measurement Quality?

    ABSTRACT: Researchers often measure attitudes and beliefs using "some/other" questions ("Some people think that … but other people think that …") instead of asking simpler "direct" questions. This is presumably done to decrease social desirability or acquiescence response bias by communicating that either response is normative. The present study tested the alternative hypothesis that implicitly telling respondents that the public's opinion is split 50/50 on the issue would be confusing for some respondents, thus reducing the quality of their data. Meta-analyses of thirteen original experiments embedded in national surveys of adults provided no evidence that the some/other form improves response validity. Direct questions yielded more valid reports than did some/other questions when using a conversationally natural response order (which was found to be more valid than the conversationally unnatural response order). Because some/other questions produce less valid measurements and involve more words than direct questions, and thus impose more cognitive burden on respondents, the direct form using a conversationally conventional response order seems preferable.

    David Yeager

    David Yeager, Assistant Professor of Psychology, University of Texas at Austin

Monday, May 2, 2011

  • The Impact of Healthcare Utilization on Satisfaction with Health Insurance Plans

    ABSTRACT: This paper explores the impact of healthcare utilization on perceived quality of health insurance plans, using data from a landline and cellphone, bilingual (English/Spanish) telephone survey conducted in November 2009 with a nationally representative probability sample of 1,502 adults. Although 72% of adults rated the quality of their private plans as "good" or "excellent," those who had received more inpatient care over the past 5 years gave significantly lower quality ratings, as did people who reported being in poorer health. In contrast, the negative relationship between healthcare utilization and satisfaction with insurance plans did not emerge among Medicare or Medicaid recipients, and was reversed among beneficiaries of military health plans.

    Further analyses revealed that the negative impact of inpatient care utilization on satisfaction with private insurance was more pronounced among the less empowered segments of the population – namely, ethnic minorities, women, and people in lower income groups. In contrast, the amount of outpatient care over the past 5 years was associated with higher quality ratings for private health plans only among White Americans and people in higher income groups. The impact of healthcare utilization on perceived quality of private health plans was mediated by whether the plans had refused to pay for healthcare services in the past.

    Taken together, these results suggest that one reason the majority of Americans are content with their private health insurance plans is that they have not yet had to push the boundaries of their coverage, whereas the minority of Americans who have had to cope with substantial healthcare utilization, particularly use of medical services in hospitals, are not quite as satisfied with their private plans.

    LinChiat Chang

    LinChiat Chang, Visiting Scholar at Stanford University, Data Analytics and Survey Research Methodology

  • Measuring Intent to Participate and Participation in the 2010 Census and Their Correlates and Trends: Comparisons of RDD Telephone and Non-probability Sample Internet Survey Data

    ABSTRACT: Do Internet surveys of non-probability samples yield conclusions similar to those produced by RDD telephone surveys with regard to distributions of variables, relations between variables, and trends over time? This study explored whether probability sample telephone survey data and data from non-probability sample Internet surveys yielded similar results regarding intent to complete the 2010 Census form and actual completion of the form, the correlates of these variables, and changes in these variables and their correlates over time. Using data collected between January and April 2010, we found that the telephone samples were more demographically representative of the nation's population than were the Internet samples. Furthermore, the distributions of opinions and behaviors were often significantly and substantially different across the two data streams, as were relations between the variables and changes in the variables over time. Thus, research conclusions would often differ depending on which data stream was used. Because the telephone data collection methodology rests on the well-established theory of probability sampling and produced the most demographically representative samples, the substantive results yielded by these data may also be more accurate than those generated with the non-probability sample Internet data. (A sketch of one way to quantify demographic representativeness against population benchmarks follows this entry.)

    Josh Pasek

    Josh Pasek, Professor of Communication Studies, University of Michigan
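
    One simple way to quantify "demographically representative," in the spirit of the comparison described above, is the average absolute deviation of a sample's demographic distribution from population benchmarks. The age categories and percentages below are placeholders, not figures from the study or the Census.

      # Average absolute error (in percentage points) of a sample's demographic
      # distribution relative to a population benchmark; smaller is better.
      def avg_abs_error(sample_pct, benchmark_pct):
          return sum(abs(sample_pct[k] - benchmark_pct[k]) for k in benchmark_pct) / len(benchmark_pct)

      benchmark = {"18-29": 22.0, "30-44": 26.0, "45-64": 34.0, "65+": 18.0}  # placeholder values
      telephone = {"18-29": 19.0, "30-44": 25.0, "45-64": 36.0, "65+": 20.0}  # placeholder values
      internet  = {"18-29": 12.0, "30-44": 30.0, "45-64": 40.0, "65+": 18.0}  # placeholder values

      print("telephone sample error:", avg_abs_error(telephone, benchmark))
      print("internet sample error:", avg_abs_error(internet, benchmark))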

Monday, May 9, 2011

  • More Comparisons of Probability and Non-Probability Sample Internet Surveys: The Dutch NOPVO Study

    ABSTRACT: Measurement of public perception via survey methods is an expensive endeavor. This is partially due to the cost of probability sampling in recruiting survey respondents. As a result, non-probability sampling has grown in popularity in survey research. While there appear to be some benefits to Internet-based survey methods, such as easier administration of survey questionnaires, reduced social desirability bias, and reduced survey satisficing, non-probability samples have recently been shown to be less accurate than probability samples (Yeager et al., 2009). However, if non-probability samples can be weighted or otherwise analyzed in ways that yield representative results, the merging of Internet data collection and non-probability sampling may still be a viable approach to producing accurate survey measurements.

    In this paper, we will present an analysis of non-probability and probability sampling survey data collected in the Netherlands. The Dutch Online Panel Comparison Study (NOPVO) involved 19 Internet survey companies collecting data from non-probability samples of the Dutch population using the same questionnaire, which asked questions on demographic and non-demographic topics. This paper compares the accuracy of these panels with that of probability sample surveys fielded at the same time, using benchmarks from the Dutch Census to determine accuracy. The results of this analysis will further illuminate the relationship between accuracy and web-based survey methods, particularly with respect to the viability of weighting methods for non-probability samples. (An illustrative sketch of one such weighting approach, raking to census margins, follows this entry.)

    Rebecca Weiss

    Rebecca Weiss, PhD Candidate, Communication
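
    The weighting question raised above is often addressed by raking (iterative proportional fitting) to known population margins. The sketch below is a generic illustration under invented margins and simulated data; it is not the NOPVO analysis itself.

      # Raking: repeatedly rescale weights so each margin's weighted distribution
      # matches its population target. Data and target margins below are invented.
      import numpy as np

      def rake(weights, categories, targets, n_iter=50):
          w = weights.astype(float).copy()
          for _ in range(n_iter):
              for margin, codes in categories.items():
                  total = w.sum()
                  for cat, target in targets[margin].items():
                      mask = codes == cat
                      current = w[mask].sum() / total
                      if current > 0:
                          w[mask] *= target / current
          return w

      rng = np.random.default_rng(0)
      n = 1000
      sample = {
          "sex": rng.choice([0, 1], size=n, p=[0.6, 0.4]),         # panel over-represents code 0
          "age": rng.choice([0, 1, 2], size=n, p=[0.5, 0.3, 0.2]), # panel over-represents young
      }
      targets = {"sex": {0: 0.51, 1: 0.49}, "age": {0: 0.30, 1: 0.35, 2: 0.35}}
      weights = rake(np.ones(n), sample, targets)
      # After raking, the weighted margins approximately match the targets.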

  • Complete Satisficing in Surveys: An Exploratory Investigation

    ABSTRACT: Satisficing, the compromising of the cognitive effort needed to respond optimally to survey questions, has remained an enduring topic in public opinion research since its conceptualization by Simon (1957) and its elaborate theorization by Krosnick (1991). Research on satisficing has centered on two notions, weak satisficing and strong satisficing, defined in terms of the four stages of an optimal response delineated by Tourangeau and Rasinski (1988): fully interpret the meaning of each question, search memory for relevant information, process and integrate the retrieved information into a summary judgment, and report that judgment. In weak satisficing, the second and third stages of the response process are incomplete and/or biased; in strong satisficing, they are skipped altogether. In this study, we propose a new and stronger form of satisficing, "complete satisficing," in which the first step of fully interpreting the question is compromised or omitted. We first provide an innovative method for devising questions that allow the detection of complete satisficing, proposing that such questions should possess the following design characteristics: a) they require no information retrieval or integration, b) their interpretation requires no domain-specific knowledge or interest, c) they have correct answers whose selection depends solely on comprehension of the question, and d) they impose relatively low cognitive demand, bearing much resemblance to instructions. Second, we assess the statistical properties of questions in which complete satisficing might be manifested, including reliability and validity tests. Third, we investigate the potential instigators of complete satisficing, and its relationships to common manifestations of weak and strong satisficing in a variety of survey questions. Finally, the potential impact of complete satisficing on data quality and implications for better survey designs are discussed. (A hypothetical example of such a comprehension-only item, with a simple flagging rule, follows this entry.)

    Bo MacInnis

    Bo MacInnis, Visiting Scholar, Communication, Stanford University
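
    A hypothetical illustration of the kind of item the abstract describes: a low-demand question whose correct answer depends only on comprehension, so an incorrect answer suggests the question was never fully read. The wording and the flagging rule are invented for illustration, not taken from the study.

      # Flag respondents whose answer to a comprehension-only item is wrong.
      ITEM = {
          "text": "To show that you have read this question, please choose 'Somewhat agree'.",
          "options": ["Strongly agree", "Somewhat agree", "Neither agree nor disagree",
                      "Somewhat disagree", "Strongly disagree"],
          "correct": "Somewhat agree",
      }

      def flag_complete_satisficing(responses):
          """responses: dict mapping respondent id -> chosen option."""
          return [rid for rid, answer in responses.items() if answer != ITEM["correct"]]

      responses = {"r01": "Somewhat agree", "r02": "Strongly agree", "r03": "Somewhat agree"}
      print(flag_complete_satisficing(responses))  # -> ['r02']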

Monday, May 16, 2011

  • Using Ancillary Data Available for Address-Based Sampling to Measure Self-Selection Bias

    ABSTRACT: A feature of address-based sampling (ABS) is the versatility of the sample frame: many kinds of ancillary data can be appended to an address. Commercial databases (e.g., Experian, infoUSA, Acxiom) are used to append observed and modeled information at various levels of aggregation. This enables researchers to develop more efficient sample designs and broadens analytical possibilities with expanded sets of covariates. More relevant to the workshop is the use of these ancillary data to measure self-selection bias, a form of nonresponse bias. Relying on surveys and KnowledgePanel® recruitment samples that employ ABS, the author will present results of comparisons between an array of ancillary data and corresponding observed values collected directly from the responding households. The ancillary data are used to demonstrate the ability to analyze non-response bias by comparing the ancillary data available for the invited sample and for the subset of recruited study participants. (A minimal sketch of this invited-versus-recruited comparison follows this entry.)

    Mike Dennis

    Mike Dennis, Executive Vice President, Knowledge Networks
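
    A minimal sketch of the invited-versus-recruited comparison described above, assuming an ancillary variable that is known for every sampled address. The file name and column names are hypothetical.

      # Compare the distribution of an ancillary frame variable between everyone invited
      # and the subset who actually joined; large gaps suggest self-selection bias.
      import pandas as pd

      frame = pd.read_csv("invited_addresses.csv")  # hypothetical: one row per sampled address
      # assumed columns: address_id, modeled_income_band, recruited (0/1)

      invited = frame["modeled_income_band"].value_counts(normalize=True)
      recruited = frame.loc[frame["recruited"] == 1, "modeled_income_band"].value_counts(normalize=True)

      comparison = pd.DataFrame({"invited": invited, "recruited": recruited}).fillna(0)
      comparison["difference_pct_pts"] = (comparison["recruited"] - comparison["invited"]) * 100
      print(comparison.sort_values("difference_pct_pts"))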
