Data Quality

Perhaps the single most visible and widely used index of data quality is the response rate. Simply defined, the response rate is the proportion of sampled individuals who actually give an interview. In this calculation, expressed as a ratio, the number of interviews is the numerator; the denominator varies from survey organization to survey organization. For ANES surveys, and for ISR surveys in general, the response rate denominator for cross-section, non-panel surveys has always consisted of all age-eligible citizens randomly selected among individuals living in sample households. Thus the denominator includes, besides respondents, individuals who did not give us an interview for many different reasons: people who cannot give an interview because of a physical handicap, poor health, or mental incompetence; people who do not speak English or Spanish (the only two languages in which ANES conducts its interviews); people who are away from home for the entire field period; and people who are too busy or otherwise unwilling to consent to being interviewed.
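As a concrete illustration, the following sketch works through the arithmetic just described. The interview and refusal counts are chosen to echo the 2000 row of the table below; the breakdown of the remaining non-interview categories is invented for illustration.

    # A minimal sketch of the response-rate arithmetic described above.
    # Interview and refusal counts echo the 2000 row of the table below;
    # the other non-interview categories are invented for illustration.
    dispositions = {
        "interview": 1807,               # completed interviews (the numerator)
        "refusal": 925,
        "away_entire_field_period": 100,
        "language_barrier": 40,          # speaks neither English nor Spanish
        "health_or_incapacity": 70,
        "other_noninterview": 43,
    }

    # The denominator: every age-eligible citizen selected for the sample,
    # whether or not an interview was obtained.
    eligible = sum(dispositions.values())
    response_rate = dispositions["interview"] / eligible
    refusal_rate = dispositions["refusal"] / eligible

    print(f"Response rate: {response_rate:.1%}")   # 60.5%
    print(f"Refusal rate:  {refusal_rate:.1%}")    # 31.0%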

The table below displays the response rate and the refusal rate for each of the Election Studies comprising the ANES Time Series data collection. (In presidential election years, the reported response rate is calculated from the initial, pre-election wave; in midterm years, it is calculated from the single interview, which is conducted after the election.)

The ANES Project Staff have carefully monitored non-response in ANES surveys: its nature, extent, sources, and consequences. The differences between telephone and face-to-face non-response rates have been closely monitored, and the consequences of these differences for inferences about public opinion, participation, and voting have been assessed. These reports are cited in ANES codebooks, listed in the technical report series, and available through the ANES website.

Over the past five years, ANES has collaborated with the ISR Survey Research Center to develop new methods of boosting response rates. We have devised new training procedures for interviewers; revised the introductory letters and scripts used to invite respondents to take part in an ANES survey; developed more efficacious and cost-effective methods for converting initial refusals; and tested the efficacy and cost-effectiveness of the selective use of respondent incentives and payments.

Measurement

From its inception, ANES has self-consciously and systematically attended to the quality of its measurements. Particularly important in this respect is the series of Pilot Studies that ANES has mounted since 1979. ANES Pilot Studies are vehicles for methodological improvement: for assessing the quality of measurement, identifying and understanding question-framing effects, testing new strategies for posing questions to respondents in both face-to-face and telephone surveys, and more. Thanks to the Pilot Study series, many basic concepts are measured better now than they were two decades ago: candidate evaluations, political values, economic well-being, ideological self-identification, religious affiliation, and political knowledge, to name a few.

All ANES surveys are fully pretested by experienced field interviewers. The debriefings of these field interviewers by principal investigators and project staff have long been an important way to learn how particular questions are or are not working. The 1993 Pilot Study, a “behavior coding” of the 1994 Election Study pretest, was also undertaken to observe the behavior of the interviewer as the interviewer interacted with the respondent. Audiotapes of the behavior-coding interviews were coded for departures from standard interviewing procedure, such as changing the wording of a question as it is read. This small data set can itself be analyzed and, in conjunction with the standard pretest debriefing, conveys much information about the performance of specific items.
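To give a sense of how such a data set might be analyzed, the sketch below tallies departures from standard wording by question. The codes and records are invented for illustration and do not reproduce the actual 1993 coding scheme.

    # A hypothetical tally of behavior codes from a pretest; the code
    # values and records below are invented, not the 1993 scheme.
    from collections import Counter

    # Each record pairs a question with the code a coder assigned after
    # listening to the audiotaped interview.
    behavior_codes = [
        ("Q1", "exact_wording"), ("Q1", "minor_change"),
        ("Q2", "exact_wording"), ("Q2", "major_change"),
        ("Q2", "major_change"), ("Q3", "exact_wording"),
    ]

    # Count departures from standard procedure for each question.
    departures = Counter(
        q for q, code in behavior_codes if code != "exact_wording"
    )
    for question, n in departures.most_common():
        print(f"{question}: {n} departure(s) from standard wording")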

Vote Validation

For many social scientists, the dependent variables of greatest concern in an ANES study relate to the respondent’s reported voting behavior: whether the respondent voted and, if so, for which candidate for President, House, or Senate. Because the voter turnout variable is so important, we have gone to extraordinary lengths to measure it accurately.

Almost every survey of voting behavior has produced estimates of turnout, based on reported vote, that are larger than aggregate measures of turnout. Survey reports of voter turnout run 10 to 20 percentage points higher than the “official” aggregate estimates based on the percentage of the voting-age population that cast a ballot for President. Many explanations have been advanced over the years for this discrepancy. One part of the story is that some respondents who did not actually vote tell interviewers that they did.

Unless one knows who is misreporting, there is no way to determine how harmful such misreporting is to data quality. Misreporting could be quite random with respect to variables of analytic interest, or it could systematically bias analyses of participation and vote choice. In an effort to correct the survey responses for misreporting, ANES has undertaken a series of “vote validations” for the Election Studies (in 1964, 1972, 1974, 1976, 1978, 1980, 1984, 1986, 1988, and 1990; the 1988 respondents were actually validated twice). In a vote validation study, field interviewers go to local election offices and look up each respondent in the office’s record of participation. The results of this very complicated look-up operation are included with the ANES study data so they can be compared with the original vote report. Vote validation data are also included in the Cumulative Data File.
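The sketch below illustrates, with invented records and field names, the comparison a vote validation makes possible: validated turnout can be computed alongside reported turnout, and individual misreporters can be flagged.

    # Hypothetical records pairing each respondent's self-report with the
    # result of the local election-office look-up; field names are invented.
    respondents = [
        {"id": 1, "reported_vote": True,  "validated_vote": True},
        {"id": 2, "reported_vote": True,  "validated_vote": False},  # misreporter
        {"id": 3, "reported_vote": False, "validated_vote": False},
    ]

    reported = sum(r["reported_vote"] for r in respondents)
    validated = sum(r["validated_vote"] for r in respondents)
    misreporters = [
        r["id"] for r in respondents
        if r["reported_vote"] and not r["validated_vote"]
    ]

    print(f"Reported turnout:  {reported / len(respondents):.0%}")
    print(f"Validated turnout: {validated / len(respondents):.0%}")
    print(f"Respondents who overreported: {misreporters}")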

Monitoring Field Work

The several topics listed above may be said to relate to the problem of measurement validity: are we measuring what we think we’re measuring, with a minimal amount of bias? To some extent, this topic also has to do with producing data that are reliable: as free as possible of the random errors that constitute “noise” and hence lower the reliability of inferences drawn from analysis of the survey data.

Interviewers are carefully trained in how to ask the questions in each ANES survey, both by means of a study guide containing question objectives and in pre-study training conferences conducted by their supervisors. Standard procedures are also reviewed in the pre-study conference. Nevertheless, it is unrealistic to expect that every interviewer will correctly implement the instructions for every question in every interview. Hence, field supervisors and the ANES Project Staff carefully monitor each interviewer’s production. Supervisors review initial production and, at regular intervals thereafter, monitor each interviewer’s work.

In addition, during the interviewing period, members of the ANES staff periodically review interview data as they arrive from the field, checking that proper administration and interview logic are apparent in the data. If a problem is detected, every attempt is made to correct or resolve it immediately, so that the effect on the study is as small as possible.
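The sketch below shows the flavor of such a logic check, assuming a hypothetical skip rule under which the vote-choice question is asked only of respondents who report voting; the variable names are invented.

    # A hypothetical skip-logic check on incoming interview data; the
    # variable names and the rule itself are invented for illustration.
    def check_skip_logic(interview):
        """Flag interviews where the (hypothetical) skip rule was violated:
        vote_choice should be present exactly when the respondent voted."""
        problems = []
        if interview.get("voted") is False and "vote_choice" in interview:
            problems.append("vote_choice answered by a non-voter")
        if interview.get("voted") is True and "vote_choice" not in interview:
            problems.append("vote_choice missing for a reported voter")
        return problems

    for case in [{"id": 7, "voted": False, "vote_choice": "A"},
                 {"id": 8, "voted": True}]:
        for problem in check_skip_logic(case):
            print(f"case {case['id']}: {problem}")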

Response Rates and Refusal Rates for the 1952-2000 Pre- and Post-Election Studies

Year     Response Rate (%)   Refusal Rate (%)   Number of Interviews   Sample N
2000/    60.5                31.0               1807                   2985
1998/    63.8                –                  1281                   2008
1996#    59.8                –                  398                    666
1994#    72.1                –                  1036                   –
1992#    74.0                20.8               1126                   1522
1990&    70.6                20.3               1980                   2802
1988     70.5                22.2               2040                   2893
1986     67.7                25.6               2176                   3215
1984     72.1                20.7               2257                   3131
1982     72.3                21.5               1418                   1960
1980     71.8                20.8               1614                   2249
1978     68.9                22.7               2304                   3343
1976+    70.4                –                  2248                   3191
1974+    (70.0)              16.5               1575                   –
1972     75.0                14.5               2705                   (3606)
1970*    76.6^               14.1               1507^                  1967^
1968*    77.4                13.6               1557                   2011
1966     77.1                13.8               1291                   1674
1964*    80.6                12.5               1571                   1948
1962     –                   9.0                1297                   –
1960+    –                   –                  1164~                  –
1958+    78.1                12.2               1450                   (1856)
1956     (85.0)              (7.7)              (1939)                 2281
1952     (77.2)              (6.2)              (1799)~                (2330)

/ totals include both telephone and personal interviews
# includes only the fresh cross-section sample
& numbers for 1990 were recalculated in 1992, when 20 invalid interviews were discovered
+ using unweighted Ns
* excludes supplements (i.e., cross-section only)
^ excludes 73 cases of ineligible voters
~ excludes post-election-only cases (17 in 1960, 100 in 1952)
( ) numbers in parentheses are either very close to exact, or result from response-rate calculations to which special circumstances apply
– not ascertained