Next: ASA Teaching of Statistics Up: ASA Survey Research Methods Previous: asa.survey.rm.04

asa.survey.rm.05


SRMS

Session Slot: 2:00-3:50 Monday

Estimated Audience Size: 100

AudioVisual Request: xxx


Session Title: Achieving Quality in Surveys

Theme Session: No

Applied Session: Yes


Session Organizer: Lyberg, Lars, Statistics Sweden


Address:

Phone:

Fax:

Email: lars.lyberg@scb.se


Session Timing: 110 minutes total

Opening Remarks by Chair - 10 minutes
First Speaker - 20 minutes
Second Speaker - 20 minutes
Third Speaker - 20 minutes
Floor Discussion - 20 minutes


Session Chair: Biemer, Paul P. Research Triangle Institute


Address:

Phone:

Fax:

Email:


1. Quality Improvement in Surveys - A Process Perspective

Biemer, Paul P.,   Research Triangle Institute

Lyberg, Lars, Statistics Sweden


Address:

Phone:

Fax:

Email: lars.lyberg@scb.se

Abstract: Survey quality is directly related to survey errors. For sampling errors, comprehensive theories exist; for nonsampling errors, no such theory exists. Left uncontrolled, nonsampling errors can render the resulting survey data useless for many important survey objectives. Typically, specific error sources are treated one by one using control or verification procedures, and sometimes several specific error sources are treated in an integrated fashion or modeled simultaneously. Most of these attempts lead to estimates of post-survey quality indicators, which are important for assessing data accuracy but, except for repeated surveys, may be of little value for improving the survey data. Therefore, interest must shift from post-survey quality evaluation to controlling the survey processes themselves, such as data collection, data processing, and data analysis. Process quality generates product quality.

Many survey organizations throughout the world are now working with the concepts of total quality management (TQM) in the context of survey design and survey execution. Methods for process quality, such as control charting, quality teams, customer focus, and decision-making based on scientific methods, are now being successfully applied in survey work. It is our intention to provide examples of such attempts.


2. Reporting on Data Quality and Process Quality

Dobbs, Joy,   Office for National Statistics, U.K.


Address:

Phone:

Fax:

Email:

Davies, Pam, Office for National Statistics, U.K.

Martin, Jean, Office for National Statistics, U.K.

Ruddock, Vera, Office for National Statistics, U.K.

Abstract: Users of statistics and survey data need to have information which enables them to assess the quality of the data collected: they need to be able to judge whether the data are suitable for the purposes envisaged. In Britain the Official Statistics Code of Practice asserts that the Government Statistical Service (GSS) has a duty to "provide guidance and interpretation to help users understand and use the statistics." It goes on to indicate the sort of information that should be provided to aid such interpretation. To help put this into practice, the GSS recently published a Statistical Quality Checklist which consists of a detailed list of questions which should be considered when describing the statistics or survey data in a report or publication.

Underlying the checklist is the recognition that, although ideally users require information about data quality, in practice there are many aspects of data quality which cannot be measured - or only with considerable effort and expense. It is therefore important also to provide information about the quality of the processes used in collecting and producing the statistics: process quality is often a good indicator of data quality when direct measures of the latter are not available.

This paper discusses the sort of information that can be produced to measure data and process quality, drawing on the statistical approach to quality measurement exemplified by the concept of total survey error, with its emphasis on identifying and measuring errors from different sources. However, this approach is set within a broader definition of quality derived from a TQM perspective. Here the emphasis is on customer focus and the provision of a complete service to meet customer needs: providing the right information, in the right form, at the right time and price, which goes beyond the production of high-quality data.


3. Measuring Survey Quality in a CASIC Environment

Couper, Mick P.,   Survey Research Center, University of Michigan and Joint Program in Survey Methodology


Address:

Phone:

Fax:

Email:

Abstract: Computer assisted survey information collection (CASIC) is changing the way the survey data collection process is designed, implemented, and evaluated. Methods like CATI and CAPI have made the quality control and evaluation tools used in paper-and-pencil interviewing (e.g., review of paper forms, item-missing data counts, skip errors) obsolete. At the same time, field staff (interviewers, supervisors, managers, trainers, etc.) are being asked to take on new and more complex tasks. CASIC thus presents new challenges for the measurement and evaluation of the survey process, while also providing the opportunity to develop new tools and procedures for such evaluation. This paper will review some of the key impacts of automation on quality measures and discuss alternative approaches and procedures for developing quality measures in a CASIC environment. These include both design and testing tools (e.g., usability testing) and tools to evaluate the data collection process (e.g., automated production statistics, keystroke file analysis).


4. Improving Survey Quality Through Pretesting

DeMaio, Theresa,   U. S. Bureau of the Census


Address:

Phone:

Fax:

Email:

Abstract: As the vehicle of data collection, the questionnaire is one of the critical components in achieving high quality in a survey. The best of sampling schemes and estimation strategies will not yield accurate data if the answers provided by the respondent are not meaningful.

In recent years, there has been increased emphasis on building quality into the questionnaire design process through pretesting. Methods have been developed, or adopted from other fields, that focus on the response process, using either the respondent alone or the respondent's interaction with the interviewer as the target of study. Laboratories have been established in the U.S. statistical agencies to implement the methods, and generic clearance procedures that foster pretesting of government questionnaires have been established.

In this paper, we describe three methods that are used at the U.S. Census Bureau to pretest questionnaires: cognitive interviewing, behavior coding of respondent/interviewer interaction, and respondent debriefing. Examples of their use in testing U.S. Census Bureau survey questionnaires will be presented, and examples of their results will be documented.

List of speakers who are nonmembers: None


David Scott
6/1/1998