|Michal Kosinski, Assistant Professor, Stanford’s Graduate School of Business
September 26, 2016
Predicting psychological traits from digital footprints
ABSTRACT: A growing proportion of human activities, such as social interactions, entertainment, shopping, and information gathering, is now mediated by digital devices and services. Such digitally mediated activities can be easily recorded, offering an unprecedented opportunity to study and assess psychological traits using actual, rather than self-reported, behavior. Our research shows that digital records of behavior, such as samples of text, Tweets, Facebook Likes, or web-browsing logs, can be used to accurately measure a wide range of psychological traits. Such Big Data assessment has a number of advantages: it does not require participants' active involvement; it can be easily and inexpensively applied to large populations; and it is relatively immune to cheating or misrepresentation. If the ethical and methodological challenges can be overcome, Big Data has the potential to revolutionize psychological assessment, marketing, recruitment, insurance, and many other industries.
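As a purely hypothetical illustration of the kind of pipeline such Big Data assessment involves (the data and model here are invented for the sketch, not the speaker's actual method), a binary user-by-Like matrix can be reduced to a few components and fed to a regression model that predicts a trait score:

```python
# Hypothetical sketch: predicting a trait score from binary digital
# footprints (e.g., Likes). All data are randomly generated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_likes = 500, 200
X = rng.integers(0, 2, size=(n_users, n_likes)).astype(float)  # user-by-Like matrix
w_true = rng.normal(size=n_likes)
y = X @ w_true + rng.normal(scale=5.0, size=n_users)  # latent trait plus noise

# Reduce dimensionality with SVD, then fit least squares on the top components.
k = 20
U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
Z = U[:, :k] * s[:k]                       # top-k component scores per user
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
pred = Z @ beta

# Accuracy is often reported as the prediction-criterion correlation.
r = np.corrcoef(pred, y)[0, 1]
print(f"prediction-criterion correlation: {r:.2f}")
```

Because behavioral records like Likes are high-dimensional and sparse, dimensionality reduction before regression is a common design choice in this kind of work.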
|Lauren Howe, PhD Candidate, Department of Psychology, Stanford University
October 3, 2016
Creating a demonstration of random sampling to improve trust in surveys
ABSTRACT: Lauren will present initial studies showing that people distrust polls suggesting that the American public’s views conflict with theirs. Then, she will ask for feedback on a video demonstration of random sampling that she has created as an intervention to improve trust in well-conducted polls.
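The logic behind such a demonstration can be sketched in a few lines of simulation (a hypothetical illustration, not the speaker's actual materials): repeatedly drawing modest random samples from a large "population" shows that sample proportions cluster tightly around the true population value.

```python
# Hypothetical illustration of why random sampling works: the proportion
# found in a 1,000-person random sample tracks the population proportion.
import random

random.seed(42)
population = [1] * 520_000 + [0] * 480_000   # 52% of a notional public holds a view

sample_props = []
for _ in range(200):                          # 200 independent "polls"
    sample = random.sample(population, 1000)  # each polls 1,000 random people
    sample_props.append(sum(sample) / 1000)

avg = sum(sample_props) / len(sample_props)
spread = max(sample_props) - min(sample_props)
print(f"average poll estimate: {avg:.3f}, range across polls: {spread:.3f}")
```

Every simulated poll lands within a few points of the true 52%, which is the intuition a random-sampling demonstration aims to convey.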
|Adina Abeles, PhD Candidate, Department of Communication, Stanford University
October 11, 2016
Does using labels as a shorthand to describe groups of people signal bias on the part of the communicator employing those labels?
ABSTRACT: Adina will describe a current research project focused on what signals a communicator sends simply by choosing a label to describe a group of people. She uses various labels from climate change discourse in her research and will seek advice and feedback on interpreting some interesting results.
|MJ Cho, PhD Candidate, Department of Communication, Stanford University
October 18, 2016
Assessing the Accuracy of Pre-Election Polls: 2008–2012
ABSTRACT: This study tests various hypotheses about the relationships between the features and the accuracy of pre-election polls. Drawing on the literature, it systematically collects poll data and derives multiple measures of accuracy to test the hypotheses. The analyses provide evidence supporting the hypothesized influence of poll features on poll accuracy. In particular, the study suggests that election type, polling mode, and the partisanship of the polling firm are significant determinants of poll accuracy as measured by average absolute and partisan biases.
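For concreteness, the two accuracy measures mentioned, average absolute bias and partisan (signed) bias, can be computed as follows (a minimal sketch with made-up numbers, not the study's data):

```python
# Minimal sketch of two common poll-accuracy measures, using invented numbers.
# Error = poll's Democratic-minus-Republican margin minus the actual margin.

polls = [  # (poll margin in points, election margin in points), hypothetical
    (5.0, 3.0),
    (1.0, 3.0),
    (4.5, 3.0),
    (-2.0, 1.0),
]

errors = [poll - actual for poll, actual in polls]
avg_absolute_bias = sum(abs(e) for e in errors) / len(errors)  # magnitude of error
avg_partisan_bias = sum(errors) / len(errors)                  # signed direction of error

print(f"average absolute bias: {avg_absolute_bias:.2f} points")
print(f"average partisan bias: {avg_partisan_bias:.2f} points")
```

The two measures answer different questions: absolute bias captures how far polls miss on average, while partisan bias captures whether they systematically lean toward one party.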
|Soohee Kim, PhD Candidate, Department of Communication, Stanford University
October 25, 2016
An Exploration of the Mechanism of the Effect of Advertising
ABSTRACT: Much research on advertising has explored the impact of advertising on people’s attitudes and behavioral intentions. However, advertising’s potential impact on another key construct – perception of social proof, which is known to play a crucial role in shaping people’s attitudes and behaviors – has been largely overlooked in the literature. In this study, we examine the potential mediating role of perceptions of social proof in the effects of advertising on consumers. In addition, to fully understand the impact of advertising on various aspects of attitudes and purchase intentions, we propose to conceptualize the two pivotal constructs – brand attitude and purchase intention – as consisting of some potentially related sub-constructs. Using data from a nationally representative sample of U.S. adults, we find that exposure to advertisements does not directly influence purchase intention but does so indirectly through individuals’ own brand attitude and perceptions of social proof, even though the role of the latter appears to vary depending on the characteristics of the advertisement. The structural equation model analyses indicate that the indicators for brand attitude and purchase intention proposed in this study are effective measurements of the two constructs, suggesting that effects of advertising on attitudes and purchase intentions can be more comprehensively examined by conceptualizing the diverse aspects of advertising effectiveness. Implications of the findings are discussed in terms of the role of social influence in the effects of advertising as well as the importance of measurement for a more comprehensive understanding of the effects of advertising.
|Jan Karem Höhne, PhD Candidate, University of Göttingen, Germany
November 1, 2016
Survey Participation, Cognitive Effort, and Data Quality: A Comparison of Agree/Disagree and Item-Specific Question Formats
ABSTRACT: In social research, agree/disagree (A/D) questions (i.e., questions whose response categories lie on an agreement continuum) are a popular technique for measuring attitudes and opinions. Theoretical considerations, however, suggest that A/D questions require effortful cognitive processing. Many researchers therefore recommend item-specific (IS) questions (i.e., questions whose response categories match the content dimension directly), since these seem to be less burdensome. In this talk, I present the results of several survey experiments conducted to investigate the performance of A/D and IS questions. The first study compares the two formats with respect to cognitive effort (measured by response times and answer changes) and response quality (measured by speeding, non-differentiation, and dropouts). The second study investigates the cognitive processing of A/D and IS questions using eye-tracking methodology; recordings of respondents' eye movements make it possible to draw conclusions about how respondents process survey questions and evaluate information. The third study investigates the susceptibility of A/D and IS questions to response order effects; respondents were additionally asked to evaluate both formats using different adjective pairs. The final study compares A/D and IS questions on PCs and smartphones (data collection is still in progress). The results so far show that the IS format performs much better than the A/D format. Although IS questions seem to be more demanding, they encourage respondents to engage in more active and intensive cognitive processing than A/D questions, and they also appear to be more robust against response biases such as response order effects.
Given that these findings are compatible with earlier research attributing higher measurement quality to IS questions than to A/D questions (Saris, Revilla, Krosnick, & Shaeffer, 2010), survey researchers should give preference to IS over A/D questions when developing survey instruments.
|Nick Allum, Professor of Research Methodology, Department of Sociology, University of Essex
November 8, 2016
Use of closed probes in a probability panel to validate cognitive pretesting: a first-cut analysis
ABSTRACT: In standard cognitive pretesting, interviewers ask respondents to 'think aloud' and to answer verbal probes. The purpose is to understand how respondents comprehend and respond to survey questions, with the aim of fixing problems and enhancing data quality. Typically, small samples of five to fifteen participants are used and questions revised (or not) in light of what is found. Although cognitive testing is well established, quite serious questions remain as to its efficacy, which primarily have to do with the small Ns involved. Firstly, it is possible that not all important problems with a survey item will be uncovered by a small number of interviews; recent research suggests that many problems may routinely be missed when small samples are employed (Conrad and Blair 2011). Secondly, it is generally not known how significant or widespread such problems might be for the study population as a whole. Thirdly, and following on from the second, it is not known how changes made to questions affect data quality. Web surveys have begun to be used to carry out cognitive testing online, using open-ended probes, in order to address some of these issues (Behr et al.), but to our knowledge a probability-based design has never been used. In this study, we run a randomised experiment on the NatCen online probability-based panel survey, using closed-ended probes to evaluate revisions made to survey items after they have undergone standard cognitive pretesting. Analysis is at an early stage; the results are mixed, but indicate modest success for standard cognitive testing methods, at least in the cases we examine.
|David Broockman, Assistant Professor of Political Economy and Spence Faculty Scholar for 2016–17, Stanford Graduate School of Business
November 29, 2016
Taste-Based Voter Discrimination Against Nonwhite Politicians: Evidence from a Natural Experiment (co-authored with Evan Soltas)
ABSTRACT: Why are nonwhite candidates less likely to win US elections? It is widely speculated that voters engage in taste-based discrimination against nonwhite candidates, but credible evidence that voters discriminate against nonwhite candidates is limited, as is evidence that any such discrimination is taste-based and not statistical. We exploit a unique natural experiment where voters have no incentive to engage in statistical discrimination and an incentive against engaging in taste-based discrimination. We find that approximately 10% of voters avoid voting for nonwhite candidates in this setting, an effect large enough to change political outcomes.
|Melissa R. Michelson, Professor of Political Science at Menlo College
December 6, 2016
Listen, We Need to Talk: How to Change Attitudes about LGBT Rights
ABSTRACT: American public opinion tends to be sticky. Although the news cycle might temporarily affect the public zeitgeist about abortion, the death penalty, or gun control, public support or opposition on these issues has remained remarkably constant over decades. But there are notable exceptions, particularly with regard to polarizing issues that highlight identity politics. Over the past three decades, public support for same-sex marriage has risen from scarcely more than a tenth to a majority of the population. Why have people’s minds changed so dramatically on this issue, and why so quickly?
Listen, We Need to Talk tests a theory that when prominent people representing particular interest groups voice support for a culturally contentious issue, they sway the opinions of others who identify with the same group, even if the interest group and the issue at hand have no obvious connection. In fact, the book shows that the more the message counters prevailing beliefs or attitudes of a particular identity group, the more persuasive it is. While previous studies of political attitude change have looked at the effects of message priming (who delivers a message) on issues directly related to particular identity groups, this study is unique in that it looks at how identity priming affects attitudes and behaviors toward an issue that is not central or directly related to the targeted group. The authors test their theory through a series of randomized experiments examining the positive effects of identity-based messaging regarding same-sex marriage among fans of professional sports, religious groups, and ethnoracial (Black and Latino) groups.