Case: 1:22-cr-00520 Document #: 146-2 Filed: 04/08/25 Page 1 of 15 PageID #:3164
EXHIBIT B
Public Opinion Quarterly, Vol. 74, No. 1, Spring 2010, pp. 154–167
Downloaded from http://poq.oxfordjournals.org/ at Pennsylvania State University on September 19, 2016
COMPARING ORAL INTERVIEWING WITH SELF-ADMINISTERED COMPUTERIZED QUESTIONNAIRES: AN EXPERIMENT

LINCHIAT CHANG
JON A. KROSNICK*
Abstract  A previous field experiment conducted via national surveys showed that data collected via the Internet manifested higher concurrent and predictive validity and less random and systematic measurement error than data collected via telephone interviewing. To ascertain the extent to which these differences were attributable to mode per se, a laboratory experiment was conducted in which respondents were randomly assigned to answer questions either on a computer or over an intercom with an interviewer. Replicating findings from the national surveys, the laboratory experiment indicated higher concurrent validity, less survey satisficing, and less social desirability response bias in the computer mode than in the intercom mode. The mode difference in concurrent validity and non-differentiation was most pronounced among respondents with more limited cognitive skills. Taken together, these results suggest a potential inherent advantage of questionnaire self-administration on the computer over telephone administration.
As researchers are increasingly interested in conducting surveys via the Internet, it is important to understand whether shifting from oral administration of questions (in telephone or face-to-face interviews) to computer self-administered interviewing changes the answers that respondents provide. This paper reports the results of a laboratory experiment designed to assess the impact of this mode shift on survey responses.
LINCHIAT CHANG is an independent contractor in San Francisco, CA, USA. JON A. KROSNICK is with Stanford University, Stanford, CA, USA and is a University Fellow at Resources for the Future. This research was funded by an Ohio State University Graduate School Alumni Research Award to Chang and was reported in a Ph.D. dissertation submitted by Chang to Ohio State University. The authors would like to thank Elizabeth Stasny, Marilynn Brewer, Ken Mulligan, and Joanne Miller for their help and advice, and Joy Baskin, Mrinalini Raina, Kameela Majied, Crystal Velazquez, Juanita Wright, and Augustina Jay for assisting in data collection. *Address correspondence to Jon A. Krosnick, Stanford University, 434 McClatchy Hall, 450 Serra Mall, Stanford, CA 94305, USA; e-mail: krosnick@stanford.edu.

Advance Access publication February 12, 2010
doi: 10.1093/poq/nfp090
© The Author 2010. Published by Oxford University Press on behalf of the American Association for Public Opinion Research. All rights reserved. For permissions, please e-mail: journals.permissions@oxfordjournals.org
For this study, respondents were brought to the lab and randomly assigned to answer a questionnaire either self-administered on a computer or administered orally via an intercom with an interviewer. This design allowed us to assess whether the computer mode was associated with improved concurrent validity, less survey satisficing, and less social desirability response bias, as Chang and Krosnick (2009) found in a field experiment comparing these two modes.

The experimental design also allowed us to explore whether the mode effect was moderated by respondents’ cognitive skills. Oral presentation might pose the greatest challenges for respondents with limited cognitive skills, because of the added burden imposed by having to hold a question and response choices in working memory while searching long-term memory and generating a judgment. Visual presentation of a question might reduce that burden on working memory, thereby helping people with limited cognitive skills the most. However, it may be that oral presentation makes question and response choice interpretation easier for people with limited reading skills than would visual presentation. If that is so, then any advantage of computer presentation might be confined to respondents high in cognitive skills and might even reverse among respondents with more limited skills. We explored these various possibilities.

We also examined whether administration time varied across modes. Respondents answering questions via computer could answer questions at whatever pace was optimal for them. But the nature of oral exchange in the absence of visual cues might lead both interviewers and respondents to accelerate the pace of questioning over an intercom beyond what would be optimal. So we thought respondents in the computer mode might complete the questionnaire more slowly than those in the intercom mode.
Methodology

RESPONDENTS

Respondents were undergraduates enrolled in introductory psychology classes at Ohio State University in spring 2001. They accessed an online database of all experiments available for participation that quarter and chose to sign up for this experiment in exchange for course credit. Only people who had resided in the United States for at least the past five years were eligible to participate. The respondents included 174 males and 158 females, most of them born between 1979 and 1982; 78% of the respondents were White, 11% were African-American, 2% were Hispanic, 6% were Asian, and the remaining 3% were of other ethnicities.
PROCEDURE

Respondents arrived at the experimental lab at scheduled times in groups of four to six and were individually randomly assigned to go alone into a small soundproof private room containing either a computer on which to complete a self-administered questionnaire or intercom equipment. Respondents completed the questionnaire by their assigned mode and were debriefed and dismissed.
INTERVIEWERS

The interviewers were experienced research assistants who received training on how to administer the questionnaire, record answers, and manage the interview process. The procedures used for training these interviewers were those used by the Ohio State University Center for Survey Research. Following training, the interviewers practiced administering the questionnaire on the intercom. They were closely monitored during the interviewing process, and regular feedback was provided, as would be standard in any high-quality survey data collection firm.
MEASURES

The questions included many items similar to those used in Chang and Krosnick's (2009) national field experiment, including feeling thermometer ratings of political figures, approval of President Bill Clinton’s job performance, perceived changes in past national conditions, expectations of future national conditions, perceptions of the 2000 presidential candidates’ personality traits, the emotions evoked by the candidates, preferences on policy issues, political party identification, and liberal/conservative ideology. The measurement and coding of these variables are described in the appendix.

Respondents were asked to identify the most important problem facing the country, the most important problem facing young people in the country, the most important environmental problem facing the country, and the most important international problem facing the country. Each question offered respondents four response options. Half of the respondents (selected randomly) were offered the options in sequence A, B, C, D, whereas the other half were offered the options in sequence D, C, B, A.

Of the 332 total respondents, 205 granted permission authorizing us to obtain their verbal and math SAT or ACT test scores from the university registrar’s office. All ACT scores were converted into SAT scores using the concordance table available at the College Board Web site (www.collegeboard.com), which shows the equivalent SAT scores for each corresponding ACT score. Total SAT scores were recoded to range from 0 to 1; the lowest total score of 780 was coded 0, and the highest total score of 1480 was coded 1.
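The 0-to-1 recoding of total SAT scores can be sketched as a linear min-max rescaling. The paper reports only the two endpoint codings (780 → 0, 1480 → 1), so the linear form in this sketch is an assumption:

```python
def rescale_sat(total, lo=780, hi=1480):
    """Linearly rescale a total SAT score to the 0-1 range.

    lo and hi default to the sample minimum (780) and maximum (1480)
    reported in the text; the linear form between the endpoints is an
    assumption, since the paper states only the endpoint codings."""
    return (total - lo) / (hi - lo)

# Matches the coding described in the text:
# rescale_sat(780) -> 0.0, rescale_sat(1480) -> 1.0
```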
Table 1. Unstandardized Regression Coefficients of Variables Predicting the Difference Between the Gore and Bush Thermometers (standard errors in parentheses)

Predictor                              Intercom               Computer               z-test
Clinton Approval: Job                  .52** (.08), N = 166   .88** (.09), N = 166   2.95**
Clinton Approval: Economy              .35** (.12), N = 166   .78** (.13), N = 166   2.35**
Clinton Approval: Foreign Relations    .27* (.11), N = 166    .79** (.12), N = 166   3.2**
Clinton Approval: Crime                .07 (.10), N = 166     .90** (.13), N = 164   5.06**
Clinton Approval: Education            .08 (.10), N = 166     .74** (.12), N = 166   4.17**
Clinton Approval: Race Relations       .22 (.12), N = 166     .84** (.13), N = 164   3.51**
Clinton Approval: Pollution            -.13 (.10), N = 166    .69** (.14), N = 165   4.67**
Clinton Approval: Health Care          .16 (.10), N = 166     .83** (.11), N = 166   4.32**
Past Conditions: Economy               .34** (.12), N = 166   .40* (.16), N = 164    .33
Past Conditions: Foreign Relations     .29* (.11), N = 166    .56** (.15), N = 163   1.46
Past Conditions: Crime                 .07 (.11), N = 166     .54* (.14), N = 164    2.69**
Past Conditions: Education             .15 (.13), N = 166     .48** (.14), N = 164   1.75*
Past Conditions: Race Relations        .04 (.14), N = 166     .33* (.17), N = 164    1.35
Past Conditions: Pollution             -.10 (.12), N = 166    .55** (.14), N = 164   3.65**
Past Conditions: Health Care           .26* (.10), N = 166    .76** (.14), N = 162   2.92**
Expectations: Economy                  .56** (.05), N = 166   .82** (.05), N = 164   3.63**
Expectations: Foreign Relations        .47** (.05), N = 166   .76** (.05), N = 166   3.99**
Expectations: Crime                    .41** (.07), N = 166   .81** (.06), N = 166   4.34**
Expectations: Education                .52** (.06), N = 166   .79** (.05), N = 166   3.46**
Expectations: Race Relations           .54** (.09), N = 166   .82** (.07), N = 166   2.60**
Table 1. Continued

Predictor                              Intercom               Computer               z-test
Expectations: Pollution                .22** (.08), N = 166   .57** (.07), N = 166   3.31**
Expectations: Health Care              .45** (.06), N = 166   .76** (.06), N = 166   3.66**
Candidates’ Traits: Moral              .50** (.07), N = 166   .79** (.07), N = 166   2.98**
Candidates’ Traits: Really Cares       .68** (.05), N = 166   .84** (.04), N = 166   2.37**
Candidates’ Traits: Intelligent        .30** (.08), N = 166   .82** (.07), N = 166   5.08**
Candidates’ Traits: Strong Leader      .57** (.06), N = 166   .76** (.05), N = 166   2.48**
Evoked Emotions: Angry                 .74** (.05), N = 166   .85** (.05), N = 166   1.61
Evoked Emotions: Hopeful               .64** (.05), N = 166   .84** (.04), N = 166   3.02**
Evoked Emotions: Afraid                .70** (.09), N = 166   .83** (.07), N = 166   1.08
Evoked Emotions: Proud                 .67** (.05), N = 166   .88** (.05), N = 166   2.99**
Party Identification                   .77** (.09), N = 166   1.32** (.10), N = 166  4.28**
Political Ideology                     .47** (.10), N = 166   .88** (.13), N = 166   2.51**
Military Spending                      .30** (.10), N = 166   .45** (.14), N = 166   .87
Welfare Spending                       .39** (.10), N = 166   .61** (.10), N = 166   1.51
Help for Black Americans               .53** (.12), N = 166   .65** (.12), N = 166   .71
Gun Control                            .23* (.11), N = 166    .65** (.15), N = 166   2.29*
Effort to Control Crime                -.12 (.11), N = 166    .47* (.19), N = 165    2.65**
Immigration Restriction                .12 (.10), N = 166     .30* (.14), N = 166    1.02

* p < .05; ** p < .01.
Results

CONCURRENT VALIDITY

Concurrent validity of the measures was estimated using the same approach as was employed by Chang and Krosnick (2009). Table 1 displays unstandardized regression coefficients estimating the effects of 38 postulated predictors on the feeling thermometer ratings of George W. Bush subtracted from the feeling thermometer ratings of Al Gore.1 The computer data yielded significantly higher concurrent validity than did the intercom data for 29 of these predictors. In no instance did the intercom data manifest significantly higher concurrent validity than the computer data. Across all coefficients shown in Table 1, a sign test revealed statistically significantly higher concurrent validity in the computer data than in the intercom data (p < .001).
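Table 1's z-tests compare unstandardized coefficients across the two modes. The paper does not print the formula it used; a minimal sketch, assuming the standard large-sample test for the difference between two independent regression coefficients:

```python
import math

def z_diff(b1, se1, b2, se2):
    """z statistic for the difference between two independent
    regression coefficients: (b2 - b1) / sqrt(se1^2 + se2^2).
    This is an assumed form; the paper does not state its exact
    computation."""
    return (b2 - b1) / math.sqrt(se1 ** 2 + se2 ** 2)

# First row of Table 1 (Clinton Approval: Job):
# intercom .52 (.08) versus computer .88 (.09)
z = z_diff(0.52, 0.08, 0.88, 0.09)
# about 2.99; the table reports 2.95, a small gap plausibly due to
# rounding of the printed coefficients and standard errors
```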
To explore whether the mode difference varied in magnitude depending upon individual differences in cognitive skills, we regressed the difference in thermometer ratings on each predictor, a dummy variable representing mode, cognitive skills, and two-way interactions of mode x the predictor, cognitive skills x the predictor, and mode x cognitive skills, and the three-way interaction of mode x the predictor x cognitive skills.2 The three-way interaction tested whether the mode effect on concurrent validity was different for people with varying levels of cognitive skills. We estimated the parameters of this equation using each of the 38 predictors listed in Table 1.

The three-way interaction was negative for 84% (32) of the predictors (seven of them statistically significant) and positive for six predictors (none statistically significant). A sign test revealed that the three-way interaction was more likely to be negative than positive (p < .001), indicating that the mode difference was more pronounced among respondents with limited cognitive skills. Among participants in the bottom quartile of cognitive skills (N = 52), the computer data yielded significantly higher concurrent validity than the intercom data for 16 out of 38 predictors, whereas among participants in the top quartile of cognitive skills (N = 53), the two modes did not yield statistically significantly different concurrent validity for any of the 38 predictors. Thus, it seems that respondents high in cognitive skills could manage the two modes equally well, whereas respondents with more limited cognitive skills were especially challenged by oral presentation.
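The moderation model described above can be sketched as ordinary least squares on a design matrix holding the main effects, all two-way interactions, and the three-way interaction. The variable names are hypothetical, since the authors' dataset is not public:

```python
import numpy as np

def moderation_design(pred, mode, cog):
    """Design matrix for the model described in the text: intercept,
    three main effects (a Table 1 predictor, a mode dummy, cognitive
    skills), all two-way interactions, and the pred*mode*cog
    three-way interaction as the last column."""
    return np.column_stack([
        np.ones_like(pred), pred, mode, cog,
        pred * mode, pred * cog, mode * cog,
        pred * mode * cog,
    ])

def ols(y, X):
    """OLS coefficients via least squares; beta[-1] is the three-way
    interaction the paper tests for each of the 38 predictors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```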
`
`1. Policy preferences on pollution by businesses did not predict the difference in feeling ther-
`mometer ratings regardless of mode and were therefore excluded from our concurrent validity
`analyses.
`2. For efficiency, the massive tables showing detailed coefficients for all main effects and inter-
`action effects are not presented here. These tables are available from the authors upon request.
SURVEY SATISFICING

Non-differentiation: Non-differentiation was measured using responses to the eight feeling thermometer questions with a formula developed by Mulligan et al. (2001). Values can range from 0 (meaning the least non-differentiation possible) to 1 (meaning the most non-differentiation possible). Intercom respondents (M = .50) manifested significantly more non-differentiation than the computer respondents on the feeling thermometers (M = .44), t = 3.14, p < .01. To test whether the mode difference in satisficing was contingent on individual differences in cognitive skills, we ran an OLS regression predicting the non-differentiation index using mode, cognitive skills, and the interaction between mode and cognitive skills. The interaction was negative and statistically significant, indicating that the mode difference in non-differentiation was more pronounced among respondents with more limited cognitive skills (b = -.15, p < .05).
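Mulligan et al.'s (2001) exact formula is not reproduced in the text. As an illustration only, one common style of non-differentiation index inverts the mean absolute pairwise difference among a respondent's thermometer ratings (on the 0-to-1 scale); the function below is an assumed stand-in, not the authors' measure:

```python
from itertools import combinations

def non_differentiation(ratings):
    """Illustrative index: 1 minus the mean absolute difference
    across all pairs of thermometer ratings (each on a 0-1 scale).
    1 means every item got the identical answer (maximal
    non-differentiation); lower values mean more differentiated
    ratings. An assumed stand-in, not Mulligan et al.'s formula."""
    pairs = list(combinations(ratings, 2))
    return 1 - sum(abs(a - b) for a, b in pairs) / len(pairs)

# A straight-lining respondent scores 1.0:
# non_differentiation([0.5] * 8) -> 1.0
```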
Response order effects: When asked the four “most important problem” questions, half of the respondents were offered the response options in the order of A, B, C, D, whereas the other half were offered the options in the order of D, C, B, A. We computed a composite dependent variable by counting the number of times each respondent picked response option A or B, which were the first or second response option for half of the respondents and the third or fourth response option for the other half. This composite variable ranged from 0 to 4, where 0 indicates that a respondent never picked response option A or B across all four “most important problem” items, and 4 indicates that a respondent always picked response option A or B. Then, within each mode, this composite dependent variable was regressed on a dummy variable representing response choice order (coded 0 for people given order A, B, C, D and 1 for people given order D, C, B, A).

A significant recency effect emerged in the intercom mode (b = .49, p < .01), indicating that response choices were more likely to be selected if they were presented later than if they were presented earlier. In contrast, no response order effect was evident in the computer mode (b = .07, p > .60). When the composite dependent variable was regressed on the dummy variable representing response choice order, cognitive skills, and the two-way interaction between response choice order and cognitive skills, a marginally significant interaction effect emerged among respondents in the intercom mode (b = 1.77, p < .10). This interaction indicates that the mode difference was substantial among people with stronger cognitive skills (computer: b = -.10, n.s., N = 57; intercom: b = .68, p < .05, N = 68) and invisible among respondents with more limited cognitive skills (computer: b = .17, n.s., N = 49; intercom: b = .21, n.s., N = 49).
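The composite measure and the order regression described above can be sketched as follows. With a single binary regressor, the OLS slope reduces to a difference in group means; the respondent data in the sketch is hypothetical:

```python
def ab_count(picks):
    """Composite dependent variable: how many of the four 'most
    important problem' answers were option A or B (range 0-4)."""
    return sum(p in ("A", "B") for p in picks)

def order_effect(counts, order_dummy):
    """OLS slope of the composite on the order dummy (0 = options
    shown A,B,C,D; 1 = shown D,C,B,A). With one binary regressor
    this equals the difference in group means; a positive slope is
    a recency effect, as reported for the intercom mode."""
    ones = [c for c, d in zip(counts, order_dummy) if d == 1]
    zeros = [c for c, d in zip(counts, order_dummy) if d == 0]
    return sum(ones) / len(ones) - sum(zeros) / len(zeros)
```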
SOCIAL DESIRABILITY RESPONSE BIAS

Following Chang and Krosnick (2009), we explored whether social desirability response bias varied across the modes using the question asking whether the federal government should provide more or less help for African Americans. The distributions of answers from White respondents differed significantly across the two modes, χ2 = 16.78, p < .01. White intercom respondents were more likely than White computer respondents to say the government should provide more help to African Americans (49% in intercom mode versus 36% in computer mode), whereas White computer respondents were more likely to say the government should provide less help to African Americans (16% in intercom mode versus 38% in computer mode). This suggests that the computer respondents were more comfortable offering socially undesirable answers than were the intercom respondents.
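The χ2 value reported above is a Pearson chi-square on the mode-by-response contingency table of White respondents' answers. The text gives percentages rather than full cell counts, so the computation is sketched generically:

```python
import numpy as np

def pearson_chi_square(table):
    """Pearson chi-square statistic for an R x C contingency table
    of observed counts: the sum over cells of (O - E)^2 / E, where
    E is the product of the row and column totals divided by the
    grand total."""
    obs = np.asarray(table, dtype=float)
    row = obs.sum(axis=1, keepdims=True)
    col = obs.sum(axis=0, keepdims=True)
    expected = row * col / obs.sum()
    return float(((obs - expected) ** 2 / expected).sum())
```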
COMPLETION TIME

One possible reason why the intercom interviews might have yielded lower response quality is the pace at which they were completed. If the lack of visual contact in intercom interactions leads interviewers and respondents to avoid awkward pauses and rush through the exchange of questions and answers, whereas self-administration allows respondents to proceed at a more leisurely pace, then the intercom interviews may have taken less time than the computer questionnaires.

In fact, however, the intercom interviews took significantly longer to complete than the self-administered surveys on computers, t (330) = 21.68, p < .001. Respondents took an average of 17.3 minutes to complete the self-administered questionnaire, whereas the intercom interviews lasted 26.6 minutes on average.
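The reported t(330) with 332 respondents is consistent with the pooled-variance two-sample t-test, whose degrees of freedom are n1 + n2 - 2. A minimal sketch of that statistic:

```python
import math

def pooled_t(x, y):
    """Two-sample t statistic with pooled variance, the
    equal-variance form consistent with the reported
    df = n1 + n2 - 2 = 330 for 332 respondents."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    return (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))
```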
Discussion

Data collected via computer manifested higher concurrent validity than data collected via intercom, replicating the results of Chang and Krosnick’s (2009) national survey field experiment. In addition, we found more satisficing in the intercom data than the computer data, as evidenced by more non-differentiation and a stronger response order effect. This set of evidence suggests that certain features of the computer mode may have facilitated optimal responding.

The advantage of the computer over the intercom in terms of concurrent validity and non-differentiation was especially pronounced among respondents with more limited cognitive skills and was weaker among people with stronger skills. This is consistent with the notion that the computer
may have reduced the cognitive demands imposed by oral presentation, so the greatest gap between the two modes appeared among the people most likely to be over-burdened by oral presentation. However, it is important to note that moderation of the response order effect by mode ran in the reverse direction: the computer mode manifested a significantly weaker response order effect than the intercom among respondents high in cognitive skills, whereas the mode difference was invisible among people with more limited cognitive skills. This surprising finding raises the possibility that the role of cognitive skills in moderating mode effects may be complex rather than simple. We look forward to future research investigating this issue.
Some past studies have shown that visual presentation of questions on paper yielded primacy effects, whereas oral presentation yielded recency effects (Bishop et al. 1988; Schwarz, Hippler, and Noelle-Neumann 1992). The present data replicated the expected recency effects in the intercom mode, but no response order effect appeared in the computer mode. This lack of effect in the visual mode may be due to the fact that the self-administered questionnaires were presented on computers instead of paper. Past research has shown that respondents answering questions via computer made fewer completion mistakes, left fewer items blank, and refused to answer fewer items than did paper-and-pencil respondents (Kiesler and Sproull 1986). Computer-assisted self-interviewing (CASI) has worked well even with respondents with no familiarity with computers, and respondents prefer CASI to paper and pencil (Davis and Cowles 1989; O'Reilly et al. 1994). Therefore, it is conceivable that the primacy effects often documented with paper-and-pencil surveys may be weak or non-existent in the computer mode.
Perhaps due to the absence of a human interviewer, computer respondents were apparently more willing to provide honest answers that were not socially admirable. This mode difference in social desirability bias jibes nicely with a set of past relevant findings. Respondents’ reports of drinking behavior and income were more accurate in mail surveys than in face-to-face or telephone interviews (De Leeuw 1992); Catholics were more likely to endorse birth control and Jews were more likely to endorse legalized abortion on mail questionnaires than in telephone interviews (Wiseman 1972); and marital adjustment scores obtained over the telephone were higher than those obtained from mail questionnaires (Gano-Phillips and Fincham 1992). In a national follow-up survey of Medicare beneficiaries who had surgery for prostate cancer, mail respondents were more willing to report personal problems and worse health statuses than telephone respondents (Fowler, Roman, and Di 1998). Respondents were twice as likely to report unprotected sex with a non-primary partner in a mail survey than in a telephone interview, and half as likely to report volunteering in AIDS efforts (Acree et al. 1999).
In short, evidence suggests that self-administration decreases concerns with impression management, so people are less likely to conform to social desirability standards and more likely to provide honest answers to threatening or sensitive questions (Sudman and Bradburn 1974). Our evidence differs from many past studies in that random assignment to mode here means that the observed differences between modes must be due to mode effects and not to differences between the samples of people who contributed data via the two modes. The reduction in the social desirability bias in the computer mode observed here may also have partly accounted for the higher concurrent validity documented in that mode.

We hope that this experiment sets the stage for future experimental studies exploring the underlying mechanisms of the mode difference we observed. Specifically, meticulous designs are needed to investigate which features of computer self-administration account for this mode’s advantage over oral interviews. The advantage could be due to the lack of standardization of oral administration across interviewers, pacing differences between modes (allowing respondents to move quickly through items they can answer easily and more slowly through items on which they need some time for reflection), reduced working-memory demands afforded by the visual presentation of questions and response options, and more. Insights into what factors are responsible for the differences we observed may shed light on possible directions for improving oral administration of survey questionnaires.
Appendix

This appendix shows the question stem and response choice wordings shown on the computer. During the oral interviews, the response options for the questions other than the feeling thermometers were preceded by “You can choose…”
Feeling thermometer ratings: “In the following list of names, please rate how favorable or unfavorable you feel toward each person by picking a number between 0 and 100. The larger the number you pick, the more you like the person. Ratings between 50 and 100 mean that you feel favorable toward the person, and ratings between 0 and 50 mean that you feel unfavorable toward the person. You would rate a person at 50 if you don't feel favorable or unfavorable. If you don’t recognize a name, please enter the number 800 in the box next to that name.” Rated politicians were: Bill Clinton, Al Gore, George W. Bush, Dick Cheney, Colin Powell, Jesse Jackson, Janet Reno, and John Ashcroft. For half of the respondents, the sentence “You would rate a person at 50 if you don’t feel favorable or unfavorable” was not offered. All thermometer ratings were divided by 100, so that responses fell
within the range of 0 to 1, with larger numbers meaning more favorable ratings.
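The thermometer coding described above (divide by 100, with 800 serving as a "don't recognize" code) can be sketched as:

```python
def recode_thermometer(raw):
    """Rescale a 0-100 thermometer rating to the 0-1 range used in
    the analyses; the special value 800 ('don't recognize this
    name') is treated as missing."""
    if raw == 800:
        return None
    return raw / 100
```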
Approval of President Clinton’s job performance: “Do you approve, disapprove, or neither approve nor disapprove of the way Bill Clinton has handled…” “His job as president,” “the U.S. economy,” “U.S. relations with foreign countries,” “crime in America,” “education in America,” “relations between Black Americans and White Americans,” “pollution and the environment,” “health care in America.” (Response options: strongly approve, approve not strongly, neither approve nor disapprove, disapprove not strongly, strongly disapprove.) For half of the respondents, the choice “neither” was not offered. Responses were coded to range from 0 to 1, with 1 indicating the most approval.
Perceived changes in past national conditions: “Next are some questions on whether you believe some things in the country now are better or worse than how they were when Bill Clinton became president in 1993, or whether each of these things is pretty much the same now as it was then. Compared to eight years ago, would you say that each of these is now much better, somewhat better, about the same, somewhat worse, or much worse?” “The nation’s economy,” “U.S. relations with foreign countries,” “the amount of crime in America,” “education in America,” “relations between Black Americans and White Americans,” “the amount of pollution in America,” “health care in America.” For half of the respondents, the choice “about the same” was not offered. Responses were coded to range from 0 to 1, with 1 indicating the most improvement over the past eight years.
Expectations of national conditions if the candidate were elected: “Now, what would you expect to happen in the country during the next four years if Al Gore was elected president in the elections last year? If Al Gore was elected president, would you expect each of the following to get better, worse, or stay the same over the next four years?” (Response choices: much better, somewhat better, about the same, somewhat worse, much worse.) “The nation’s economy,” “U.S. relations with foreign countries,” “the amount of crime in America,” “education in America,” “relations between Black Americans and White Americans,” “the amount of pollution in America,” “health care in America.”

“Now, what would you expect to happen in the country during the next four years if George W. Bush was elected president in the elections last year? If George W. Bush was elected president, would you expect each of the following to get better, worse, or stay the same over the next four years?” (Response choices: much better, somewhat better, about the same, somewhat worse,
much worse.) “The nation’s economy,” “U.S. relations with foreign countries,” “the amount of crime in America,” “education in America,” “relations between Black Americans and White Americans,” “the amount of pollution in America,” “health care in America.” For half of the respondents, the choice “about the same” was not offered. For each issue, ratings of expectations under Bush were subtracted from expectations under Gore, and the result was coded so that it could range from 0 to 1.
Perceptions of candidates’ traits: “In your opinion, how well do each of these words and phrases describe Al Gore? Extremely well, very well, somewhat, or not at all?” “Moral,” “really cares about people like you,” “intelligent,” “can provide strong leadership.”

“In your opinion, how well do each of these words and phrases describe George W. Bush? Extremely well, very well, somewhat, or not at all?” “Moral,” “really cares about people like you,” “intelligent,” “can provide strong leadership.”

For each trait, ratings for Bush were subtracted from ratings for Gore, and the result was coded so that it could range from 0 to 1.
`
Emotions evoked by the candidates: “When you think of Al Gore, does he make you feel…” “Angry?” “Hopeful?” “Afraid?” “Proud?” (Response options: extremely, very, somewhat, a little, not at all)

“When you think of George W. Bush, does he make you feel…” “Angry?” “Hopeful?” “Afraid?” “Proud?” (Response options: extremely, very, somewhat, a little, not at all)

For the two positive emotions, ratings for Bush were subtracted from ratings for Gore, and the result was coded so it could range from 0 to 1. For the two negative emotions, ratings for Gore were subtracted from ratings for Bush, and the result was coded so it could range from 0 to 1.

Policy preferences: “Next are a set of questions about what you think the government should do on a number of issues.” “Do you think the federal government should spend more money on the military, less money on the military, or about the same amount as it spends now?” “Do you think the federal government should spend more money on social welfare programs to help the poor, less money on those programs, or about the same amount as it