
2009-2010 Spring Colloquia

Event Details:

Sunday, April 5, 2009 - Monday, May 24, 2010

Schedule

Sunday, April 5, 2009

  • Issue Publics and Candidate Evaluations: Explaining Inconsistent Results in the Moderation of Issue Agreement by Individual Issue Importance

    ABSTRACT: The idealized version of democracy entails educated individuals who base their vote choices on a wide swath of information. However, psychological models suggest that individuals selectively incorporate information based on its relative importance. Within politics, this suggests that voters weigh information about personally important political issues more heavily than issues deemed inconsequential when forming political opinions.

    While the theoretical expectations are clear, some empirical studies have yielded the expected results, while others have found no evidence of moderation by importance. Why the disparity?

    This paper argues that the disparate results stem from a series of methodological differences in how analyses were conducted, including the operationalization of issue agreement, candidate placement, policy preferences, and much more. These differences and their effects are explained and then illustrated using American National Election Studies data. The paper culminates with an explication of the methodological conditions under which this moderation occurs and when it is suppressed.

    Wendy Gross, PhD Candidate, Political Science, Stanford University

  • Toward A Standardization of Survey Weights: The American National Election Studies Weighting System

    ABSTRACT: Although a great deal has been published on poststratification in recent years, there appears to be no universally agreed-upon, straightforward method for survey weight construction. Indeed, many prominent practitioners believe that constructing weights should be an art, not a science. There is no agreed-upon standard for determining which variables should be used, what types of algorithms should be employed, whether and how weights should be truncated, and a variety of other important considerations.

    In order to identify best practices in constructing survey weights, the American National Election Studies (ANES) assembled a panel of experts in the field. Doug Rivers, Martin Frankel, Colm O’Muircheartaigh, Charles Franklin, and Andrew Gelman spent months developing a series of recommendations for how best to construct weights. Their work culminated in a September 2009 report on how weights should be computed for the ANES and other similar studies.

    In this presentation, we will describe the methods proposed in the ANES weighting report. We will also introduce a new software package designed to implement the agreed-upon procedures. The software package presents a simpler interface for specifying weighting information, comes pre-loaded with demographic information to match each year’s demographics for the United States population from 2000 to 2009, and also automatically conducts analyses of which variables account for the largest discrepancies from known population parameters. In addition, the algorithm allows the user to specify mechanisms for variable selection, for the creation of weights on separate portions of the population, and for the creation of identically parameterized weights across multiple subsamples.

    Josh Pasek, PhD Candidate in Communication, and Matt DeBell, American National Election Studies
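    The weighting procedures discussed above are, at their core, variants of raking (iterative proportional fitting): weights are repeatedly rescaled until the weighted sample margins match known population margins. As a rough illustration of that underlying idea only (not the ANES software package, whose interface is not shown here), a minimal sketch with hypothetical data:

    ```python
    # Textbook raking sketch: adjust unit weights so weighted category
    # proportions match population margins. Variable names and the sample
    # data below are hypothetical illustrations.

    def rake(rows, margins, iterations=50):
        """rows: list of dicts of categorical values per respondent.
        margins: {variable: {category: target population proportion}}.
        Returns one weight per respondent."""
        weights = [1.0] * len(rows)
        for _ in range(iterations):
            for var, targets in margins.items():
                # Current weighted total within each category of this variable.
                totals = {}
                for w, row in zip(weights, rows):
                    totals[row[var]] = totals.get(row[var], 0.0) + w
                total = sum(weights)
                # Rescale each weight so this variable's weighted margins
                # hit the population targets.
                weights = [
                    w * (targets[row[var]] * total) / totals[row[var]]
                    for w, row in zip(weights, rows)
                ]
        return weights

    # Hypothetical sample of 3 men and 1 woman; population is 50/50,
    # so the woman is weighted up and the men weighted down.
    sample = [{"sex": "m"}, {"sex": "m"}, {"sex": "m"}, {"sex": "f"}]
    weights = rake(sample, {"sex": {"m": 0.5, "f": 0.5}})
    ```

    With several margin variables, the loop cycles through them until the weights converge; real packages add the truncation and variable-selection choices the report discusses.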

Sunday, April 12, 2009

  • Comparing Web Panel Samples vs. Telephone Samples of Medical Doctors

    ABSTRACT: Although web panels are widely used in market research, there are persistent concerns about panel conditioning, panel attrition, and self-selection biases. We compared web panel samples vs. fresh samples recruited via telephone across three medical specialties: neurology (n=167 web vs. n=97 phone), pulmonology (n=83 vs. n=68), and pediatrics (n=56 vs. n=60). All physicians, regardless of recruitment mode, completed the surveys online. Comparative analyses were conducted on 162+ measures of medical practice characteristics, treatment choices, attitudes, and perceptions. Few significant differences emerged, and none of those differences suggested systematic panel biases, possibly because physicians receive the same fundamental training and their treatment choices are constrained by clinical guidelines. I have finished the core analyses, but would love to get some ideas from PPRG on additional analyses I could try.

    LinChiat Chang, Independent Consultant

Wednesday, April 28, 2010

  • Chris Bryan, PhD Candidate, Department of Psychology, Stanford University

  • Rui Wang, PhD Candidate, Department of Communication, Stanford University

Monday, May 3, 2010

  • Nuri Kim, PhD Candidate, Department of Communication, Stanford University

Wednesday, May 5, 2010

  • Ana Vilar, Research Staff, IRISS

Monday, May 10, 2010

  • David Yeager, PhD Candidate, School of Education, Stanford University

  • Gaurav Sood, PhD Candidate, Department of Communication, Stanford University

Monday, May 17, 2010

  • Jack Glaser, Professor, University of California, Berkeley

Monday, May 24, 2010

  • Adam Berinsky, Professor, MIT
