Sponsoring Section/Society: Health Policy Statistics Section
Session Slot: 2:00-3:50 Wednesday
Estimated Audience Size: 75-125
AudioVisual Request: None
Session Title: Statistical Models and Methods for Health Care Policy Research
Theme Session: Yes
Applied Session: Yes
Session Organizer: Christiansen, Cindy L., Harvard Medical School and Harvard Pilgrim Health Care
Address: Harvard Medical School and Harvard Pilgrim Health Care, 126 Brookline Ave., Suite 200, Boston, MA 02215
Phone: (617) 421-2430
Fax: (617) 859-8112
Session Timing: 110 minutes total
3 speakers and 1 discussant:
Opening Remarks by Chair - 3 minutes
First Speaker - 30 minutes
Second Speaker - 30 minutes
Third Speaker - 30 minutes
Discussant - 10 minutes
Floor Discussion - 7 minutes
Session Chair: Bailey, R. Clifton, HCFA
1. Estimating the Effect on Survival of Detecting Cancers By Screening Using Instrumental Variables: An Example With Breast Cancer Screening
McIntosh, Martin, University of Washington, Department of Biostatistics and Fred Hutchinson Cancer Research Center
Address: Department of Biostatistics, University of Washington, F-600 Health Sciences 357232, Seattle, Washington 98195
Abstract: Evaluating the effectiveness of cancer screening trials presents many technical difficulties. It is common to evaluate a trial by computing a p-value and estimating effect sizes with comparison groups constructed from the subjects' treatment assignments, but using only those subjects who developed cancer (the ``cancer cohort''); see, for example, Shapiro (Cancer, 1997), Shapiro et al. (JNCI, 1982), and Chu et al. (JNCI, 1988) for examples with breast cancer screening. The difficulties in evaluating these data are that (1) the screened group may have cancers found earlier than in the unscreened population (``lead time'' bias), (2) the screening may find cancers that go undiagnosed in the unscreened population, and (3) some subjects assigned to be screened may have refused.
Each difficulty complicates the construction of comparison groups: to whom do you compare the screening-group cancers? One solution to (1) is to follow subjects until well after the screening has stopped and then use all cancer cases in both groups for comparison. The solution to (2) has been to hope that, over the long follow-up time, the number of cancers in the unscreened group catches up to that in the screened group. If catch-up does not occur, then a subject's cancer status depends on the treatment assignment (i.e., it is in the causal pathway), and treating it as a covariate does not lead to a valid causal interpretation. Even if catch-up does occur, including cancer cases that could not have been detected by screening attenuates the effect size estimate and leads to substantial bias. Problem (3) poses a similar difficulty, because a refuser assigned to the control group would appear there as a complier, so compliance status too is affected by the treatment assignment and cannot be treated as a covariate.
The methods used to deal with each of these problems have been ad hoc and admit substantial bias. Here we show how the Imbens and Rubin (1996a, 1996b) causal framework for dealing with noncompliance can be adapted to analyze cancer screening trials, yielding a very simple and elegant estimate of the causal effect of screening (or screen detection) on patient outcomes (survival time, cancer stage, tumor size, lead time effect). This method deals with all of these biases simultaneously.
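In its simplest form, the Imbens-Rubin noncompliance framework leads to an instrumental-variables (Wald) estimator: the intention-to-treat effect on the outcome divided by the intention-to-treat effect on receipt of screening, which identifies the effect among compliers. A minimal sketch, using purely illustrative data (not from any trial discussed here):

```python
def cace(z, d, y):
    """Complier average causal effect via the Wald/IV estimator:
    (ITT effect on the outcome) / (ITT effect on receipt of screening).
    z = randomized assignment, d = screening actually received, y = outcome."""
    def arm_mean(vals, arm):
        sel = [v for zi, v in zip(z, vals) if zi == arm]
        return sum(sel) / len(sel)
    itt_y = arm_mean(y, 1) - arm_mean(y, 0)   # ITT effect on the outcome
    itt_d = arm_mean(d, 1) - arm_mean(d, 0)   # ITT effect on compliance
    return itt_y / itt_d

# Hypothetical toy data: one subject assigned to screening refused it.
z = [1, 1, 1, 1, 0, 0, 0, 0]
d = [1, 1, 1, 0, 0, 0, 0, 0]
y = [5, 6, 7, 4, 3, 4, 3, 2]
print(round(cace(z, d, y), 3))  # ITT effect 2.5 / compliance gap 0.75 = 3.333
```

The estimator leaves the randomization-based comparison intact (both arms enter through their assignment, not their compliance), which is why it avoids treating compliance status as a covariate.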
2. Models for Health Care Planning
Carrier, K.C., University of Alberta
Address: 632 CAB, Department of Mathematical Sciences, Edmonton, Alberta, CANADA, T6G 2G1
Abstract: In health services research, as in other social science research, the first step in data analysis is to identify the factors deemed relevant for inclusion in the estimation. The next pertinent decisions are the selection of the unit of analysis and an appropriate treatment of the parameters in the equation. In health services research, where health effects are potentially attributable to area-level, or ecological, variables, there are many critical methodological issues to ponder. All analytic approaches carry some risks and disadvantages. If analytic results and recommendations are to be implemented successfully in health care planning practice, it is imperative to understand these risks and disadvantages as well as the advantages. Health care reform has recently taken place across Canada. Using administrative and census data from three Canadian provinces (Manitoba, Alberta, Nova Scotia), we explore the weaknesses and strengths of some alternative approaches, with the research objective of evaluating the health care reform efficiently and objectively. We focus on the relationship between policy changes and changes in overall utilization of hospitals and physicians according to area of residence, area of service (urban and rural), and type of hospital (teaching, urban community, rural) by groups at risk (the elderly and those in low socioeconomic conditions).
3. Using Difference (Rather Than Ratio) Measures in Clustered Healthcare Data
Localio, Russell, University of Pennsylvania, Center for Clinical Epidemiology and Biostatistics
Address: Rm 606, Blockley Hall, 423 Guardian Dr., Philadelphia PA, 19104-6021
Ten Have, Thomas, University of Pennsylvania, Center for Clinical Epidemiology and Biostatistics
Berlin, Jesse, University of Pennsylvania, Center for Clinical Epidemiology and Biostatistics
Abstract: Much theoretical and applied research on clustered categorical data relies on ratio measures (such as odds ratios) to demonstrate treatment or program effectiveness. These measures, while convenient to the statistician for theoretical reasons, are often less meaningful to a clinical or health policy audience. We investigate the methodological and practical challenges of using measures such as the risk difference in the context of clustered health care data. Our example is a two-period cross-sectional analysis of the process of cardiovascular care in 197 hospitals (and 23,000 patients) in one state, in a program sponsored by the Health Care Financing Administration and its local PRO (Keystone Peer Review, Inc.). With multiple endpoints per patient and differing numbers of endpoints per hospital, we also investigate the effects of the number of clusters (hospitals), the number of patients per hospital (cluster size), and the correlation of endpoints within hospital on the choice of estimation techniques. In addition, because process endpoints are highly correlated within hospital, we address the impact of the level of within-cluster correlation on the choice of estimates.
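The contrast between ratio and difference measures can be made concrete with a 2x2 table, and the cost of within-cluster correlation with the standard design-effect formula. The sketch below uses illustrative numbers only (not the Keystone data):

```python
def risk_measures(a, b, c, d):
    """From a 2x2 table (a = events, b = non-events in the treated arm;
    c = events, d = non-events in the control arm), return the
    risk difference, risk ratio, and odds ratio."""
    p1, p0 = a / (a + b), c / (c + d)
    return p1 - p0, p1 / p0, (a * d) / (b * c)

def design_effect(m, icc):
    """Variance inflation for a proportion under clustering:
    m = average cluster size, icc = within-cluster correlation."""
    return 1 + (m - 1) * icc

# For a common endpoint, the odds ratio looks far more impressive
# than the risk difference or risk ratio:
rd, rr, odds = risk_measures(90, 10, 80, 20)
print(round(rd, 3), round(rr, 3), round(odds, 3))  # 0.1 1.125 2.25

# Even a modest within-hospital correlation inflates the variance of a
# naive risk-difference estimate when clusters are large (values assumed):
print(round(design_effect(100, 0.02), 2))  # 2.98
```

The first print illustrates why ratio measures can overstate effects for common endpoints relative to the risk difference; the second, why the number of clusters and cluster size matter for choosing an estimation technique.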
Discussant: Gatsonis, Constantine, Brown University
List of speakers who are nonmembers: None