Chapter Two: Survey and Sample Description

In this chapter, we describe the population we surveyed and the survey instrument in some detail, including how we administered the survey. We also address the methods we used to aid respondents' recall and describe a second, smaller follow-up survey we fielded to assess the reliability of the responses--in the sense of how much they were subject to change over time. We conclude by discussing the survey response rates.

THE SURVEY SAMPLE

We drew our sample from records of Army, Air Force, Marine Corps, and Navy personnel who were reported to have served in ODS/DS between August 1, 1990, and July 31, 1991. We focused on the subset of personnel who served on the ground (as opposed to personnel who were located at sea in the Persian Gulf or who only flew over the area) in the Kuwaiti Theater of Operations (KTO). Personnel who were ultimately eligible to be surveyed consisted of:

To identify personnel meeting these criteria, we used data supplied by the U.S. Armed Services Center for Unit Records Research (USASCURR) with assistance from OSAGWI and the Center for Health Promotion and Preventive Medicine (CHPPM). In brief, we augmented a database of personnel who were in ODS/DS, originally compiled by the Defense Manpower Data Center (DMDC), with information linking persons to units and units to locations. From this combined database, personnel in units that were known not to have been located in theater between August 1, 1990, and July 31, 1991, were ineligible to be sampled. We erred on the side of inclusion: a record was excluded only if it was known to be ineligible. Personnel who could not be linked to units, or who were in units that could not be linked to locations, remained eligible to be sampled.
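To make this inclusion rule concrete, the following Python sketch illustrates the logic: a record is excluded only when its unit is affirmatively known to have been out of theater for the entire window. The field names and data layout are hypothetical and do not reflect the actual DMDC or unit-location databases.

    # Sketch of the inclusion rule. unit_in_theater maps a unit to whether
    # it is known to have been in theater between August 1, 1990, and
    # July 31, 1991; units absent from the map could not be linked to a
    # location. All names here are hypothetical.
    def is_eligible(person, unit_in_theater):
        unit = person.get("unit_id")
        if unit is None:
            return True                      # no person-to-unit link: keep
        if unit not in unit_in_theater:
            return True                      # no unit-to-location link: keep
        return unit_in_theater[unit]         # exclude only known absences

    print(is_eligible({"unit_id": "A1"}, {"A1": False}))  # False: excluded
    print(is_eligible({"unit_id": "B2"}, {}))             # True: kept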

The sample was stratified by branch of service, occupational specialty, rank, and unit location. We stratified along these dimensions (1) to achieve equal precision in the estimates across services and (2) to interview sufficient numbers of personnel with special knowledge or special living conditions. In particular:

In all, 3,264 records were sampled from 536,790 eligible records, evenly divided across the Army, the Air Force, and the Marine Corps combined with the Navy. Additional details on the specific definition of the sampling frame, the sample selection methodology, and oversampling are contained in Appendix B.
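The sample draw can be sketched as follows. In this illustration, the 3,264 records are divided evenly across the three service groups, with a simple random draw inside each; in the actual design, the strata further crossed occupational specialty, rank, and unit location (see Appendix B). The record structure is hypothetical.

    import random

    def draw_sample(frame, n_total=3264):
        # Partition the frame into the three service groups.
        groups = {"Army": [], "Air Force": [], "Marines/Navy": []}
        for record in frame:
            groups[record["service_group"]].append(record)
        per_group = n_total // len(groups)   # 1,088 records per group
        sample = []
        for records in groups.values():
            sample.extend(random.sample(records, per_group))
        return sample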

DESCRIPTION OF THE MAIN SURVEY INSTRUMENT

We designed the survey instrument around two primary objectives:

Our definition of "appropriate data" was driven by what information would be necessary to portray use and exposure levels accurately. We also carefully organized and presented the survey questions so that veterans' memories of pesticide details from eight to nine years earlier would be most likely to sharpen during the interview.

We conducted an extensive literature review of other retrospective studies to evaluate survey methods used to reduce recall bias. This review and its findings, in combination with the insight gained from our initial pretests, guided the survey's final organization and grouping of topics. Table 2.1 outlines the instrument. Appendix A provides additional details about the survey instrument design process, and Appendix D provides additional details and a complete discussion of the recall bias results.

Table 2.1
Survey Instrument Outline

Module I: Introduction and Screener
Respondent verification
Informed consent, privacy statement, and confidentiality pledge
Assessment of general awareness of Gulf War issues to measure recall bias

Module II: One Month in the Persian Gulf
Month chosen at random
Reminder given of landmark events in that month
Data on physical environment collected

Module III: Pest Problems and Pesticide Use
List of pests respondent encountered in Gulf
Record form of pesticides respondent used on body or uniform
Record details of use by individual pesticide form and product
Record form of pesticides respondent and others used around physical environment
Record details of use by individual pesticide form and product

Module IV: Background Questions
Used to explore patterns of differential response
Data on other respondent demographics collected

Instrument Format and Branching

We organized the survey instrument into three primary sections: (1) a description of each respondent's physical environment; (2) the personal use of pesticides by respondents on their bodies or uniforms; and (3) the field use of pesticides in their personal environment by the respondent or others. Each respondent was asked to provide this information for a randomly chosen month during his or her service in theater. The three sections were preceded by an introduction and a series of identity verification questions and followed by concluding questions regarding respondent education and willingness to take part in a follow-up survey.

The survey was designed to be completed in 30 minutes on average. To achieve this, we had to make a tradeoff between collecting detailed data for a specific, short time frame and collecting more general data about the respondent's entire ODS/DS experience. To collect the most information possible while still allowing extrapolation to the entire ODS/DS period, we chose to select one month at random out of each respondent's tour, creating a cumulative database of detailed information across respondent experiences.
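The following Python sketch shows how such a reference month could be drawn uniformly from a respondent's months in theater; the tour dates and the month-enumeration details are illustrative only.

    import random
    from datetime import date

    def months_between(start, end):
        # Enumerate (year, month) pairs from start to end, inclusive.
        months, y, m = [], start.year, start.month
        while (y, m) <= (end.year, end.month):
            months.append((y, m))
            y, m = (y + 1, 1) if m == 12 else (y, m + 1)
        return months

    def pick_reference_month(tour_start, tour_end):
        return random.choice(months_between(tour_start, tour_end))

    # Example: a tour from October 1990 through March 1991.
    print(pick_reference_month(date(1990, 10, 1), date(1991, 3, 31)))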

We specifically organized the instrument to trigger memories of the time period by asking respondents first to visualize their location during the randomly selected month, then by asking specific questions regarding their living quarters, bathroom facilities, eating arrangements, and work areas. This preliminary information was of interest in its own right for our analysis, but it also served to set the scene for respondents, preparing them for the more detailed questions about pesticide use by establishing at the outset of the interview a contextual memory on which they could draw.

Drawing primarily on feedback from pretest respondents, we organized the questions so that information was first collected on the various pesticide forms used. We then asked specific questions addressing each form the respondent indicated. Personal-use and field-use pesticides were queried separately in this same format. This approach paralleled the way respondents recollected their use of pesticides: First the various products were listed, then each type and its use were described.

Elicitation of Pesticide Information

We conducted this survey between May and October 1999. Because it was fielded years after the end of ODS/DS, we had little expectation that respondents would be able to recall the names of all the products used, especially those used in the field. In fact, Gambel et al. (1998) indicate that Army soldiers deployed in Kuwait, Haiti, and Bosnia had difficulty identifying military-issue personal-use pesticides even during their deployment. To overcome this challenge, we developed a strategy in the pretest rounds in which respondents could focus primarily on the forms of the pesticides throughout the survey, rather than on their names. This method of questioning proved effective: Most veterans were readily able to recall the forms of the pesticides used during the war and were consequently able to discuss further details surrounding their use (even when the particular names of the pesticides were not known). Based on the responses of pretest veterans, the forms included in the final survey were lotions, sprays, powders, liquids, flea collars, small solids (specifically, pellets, crystals, and granules), and "other." For each form a respondent indicated using, the pesticide name was first solicited, either by active ingredient or by trade name. If the name could not be recalled, the respondent was asked to describe the pesticide in terms of its color and smell.

The list of possible smells was constructed by including all the smells that characterize the different pesticides[3] shown in Table 2.2. These were cooking oil, rotten eggs or sulfur, gasoline, insecticide, kerosene, chemical, sweet, and musty. Similarly, the colors were defined as colorless/clear, light brown, dark brown, gray, orange, red, white, opaque/cloudy/milky, and yellow. For both color and smell, respondents could select multiple answers and add other comments. The combination of form, color, smell, and the location where the pesticide was used can often be used to specify a unique active ingredient.
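The following sketch illustrates how these attributes jointly narrow the candidates. The locations listed for each ingredient follow Table 2.2, but the forms and smells shown are hypothetical fragments rather than the actual coding table.

    # Two candidate ingredients; attribute sets are illustrative fragments.
    CANDIDATES = [
        {"name": "malathion",
         "forms": {"liquid", "spray"},
         "smells": {"rotten eggs or sulfur"},            # hypothetical
         "locations": {"outdoors", "mess hall", "latrine"}},
        {"name": "dichlorvos",
         "forms": {"spray", "small solid"},
         "smells": {"chemical"},                         # hypothetical
         "locations": {"building", "tent", "mess hall", "latrine"}},
    ]

    def match(form, smell, location):
        return [c["name"] for c in CANDIDATES
                if form in c["forms"]
                and smell in c["smells"]
                and location in c["locations"]]

    print(match("spray", "chemical", "latrine"))  # ['dichlorvos']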

We separately grouped the possible uses of pesticides into those for personal use and those for field use. Personal-use pesticides were defined as those used directly on the skin or uniform by the respondent. For field-use pesticides, the user could have been either the respondent or another individual observed by the respondent.

The specific pesticides used by or near the respondent depended on the type of location where he or she spent most of the time during the period in question and on the type of pests present. Some pesticides were used only indoors, others only outdoors, others in latrines, etc. Some were used on the skin and others on uniforms or netting only.

We divided the types of geographical locations where people spent most time while in the Persian Gulf into the following categories:

Within these categories, there were different types of facilities where people slept, ate, and worked. These places are listed in Table 2.2, along with the list of possible pesticides used to code smells and colors.

Table 2.2
Possible Pesticides Used in Different Types of Situations

Situation Type Possible Pesticides Used
Sleeping or working areas
Building or warehouse Allethrin/permethrin/resmethrin, azamethiphos, cypermethrin, deltamethrin, dichlorvos, diphacinone, methomyl, d-phenothrin
Tent Allethrin/permethrin/resmethrin, azamethiphos, dichlorvos, diphacinone, methomyl, d-phenothrin, valone
Military vehicle Azamethiphos, diphacinone, methomyl, valone
Outdoors Aluminum phosphide, azamethiphos, B.T., carbaryl, cypermethrin, deltamethrin, diazinon, lindane, malathion, methomyl, parathion, propoxur, pyrethrin, valone
Other places
Mess hall/eating area Allethrin/permethrin/resmethrin, azamethiphos, chlorpyrifos, diazinon, dichlorvos, diphacinone, malathion, methomyl, d-phenothrin, propoxur, valone
Latrine Azamethiphos, chlorpyrifos, dichlorvos, diazinon, malathion, methomyl, d-phenothrin, propoxur, valone

SOURCE: OSAGWI.

For each pesticide used by the respondent, we asked questions about where it was obtained, its frequency of use, and, if the respondent stopped using it, the reason why. For the nonliquid/nonspray pesticides, we also recorded information on disposal.

For the field use of sprays, we recorded information on the type of sprayer used (hand-held, truck, or plane fogger[4]) as well as the areas sprayed (indoors, outdoors, outside the camp perimeter, and specific areas inside the camp).

Advance Recall Aids

We sent a letter to each respondent in advance of the interview explaining the study's purpose, its sponsor, and what the interview would be covering. We also enclosed a brochure with answers to frequently asked questions and materials we developed to aid recall. These materials included a map of the Persian Gulf, a calendar with key events highlighted for the months August 1990 through July 1991--key events that respondents could use to bound experiences during their tour--and a Gulf War Service Fact Sheet (mimicking questions from Module I) to be filled out in advance of the interview.[5] This fact sheet, in addition to the other materials, was intended to initiate recall of the respondent's tour in advance of the interview.

Assessing Recall Bias Through Re-Survey

We also randomly selected a subset of 8 percent of the respondents, from among those who agreed to be reinterviewed, and re-asked them selected questions from the original survey to assess the reliability of answers about exposure to pesticides during ODS/DS. We administered the second survey after about six weeks, during which time respondents were generally expected to forget, at least in part, how they had answered the first survey. In this way, we were able to examine what fraction of their answers changed. The interpretation of this change can be ambiguous, but re-testing helped us to assess how reliable the answers were over time.

We administered the recall survey to a random subsample of the original respondents.[6] We re-asked the location and timeline questions to reestablish context, and then asked about the types of personal and field uses of pesticides respondents participated in or observed. If they indicated a pesticide type that matched something they listed the first time, we continued with the more detailed questions about names, sources, and frequency of use. (If there was no match, we had no data to compare against and so did not collect additional information.)
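As a sketch of the comparison, the reliability of the re-asked items can be summarized as the fraction of matched items on which the second answer agreed with the first; the data structures below are hypothetical.

    def agreement_rate(first, second):
        # first, second: dicts keyed by (respondent_id, item) -> answer.
        common = set(first) & set(second)
        if not common:
            return None                      # nothing to compare
        matches = sum(first[k] == second[k] for k in common)
        return matches / len(common)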

MODE OF DATA COLLECTION

We administered the surveys by telephone via a centralized telephone interviewing facility located in RAND's Santa Monica, California, office. The facility was designed for use with the Berkeley computer-assisted telephone interviewing (CATI) system, making online data collection possible. RAND's CATI system runs a current version of Berkeley's Computer-Assisted Survey Execution System (CASES). The CATI system displays each interview question on the computer screen for interviewers to read and allows direct entry of responses into the computer database at the time of the interview, while employing real-time edit and logic checks. It facilitates the implementation of complex survey designs such as this one because it can quickly determine sample eligibility, provide appropriate skipping and branching routines, and tailor question wording to respondent characteristics or previous answers. The system includes sample management as well as automatic call scheduling and case delivery, which allowed our interviewing staff to maintain the status of each case in the sample and to work the sample efficiently according to survey priorities and scheduled appointments. Some respondent tracking was also done with the help of this system. The central monitoring system enabled both audio and visual real-time monitoring of interviews in progress, for both quality control and interviewer guidance.
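The skip-and-branch logic such a system automates can be illustrated with a small sketch. This is only an illustration in Python, not the CASES scripting language, and the questions shown are hypothetical.

    # Each answer determines the next question; "*" is a default branch.
    QUESTIONS = {
        "used_lotion": {"text": "Did you use a pesticide lotion?",
                        "branch": {"yes": "lotion_name", "no": "used_spray"}},
        "lotion_name": {"text": "What was the lotion called?",
                        "branch": {"*": "used_spray"}},
        "used_spray":  {"text": "Did you use a pesticide spray?",
                        "branch": {}},
    }

    def next_question(current, answer):
        branch = QUESTIONS[current]["branch"]
        return branch.get(answer, branch.get("*"))

    print(next_question("used_lotion", "no"))  # used_spray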

INTERVIEWER TRAINING

Interviewers participated in ten days of structured training, with an additional week allowed for unstructured CATI practice before initiating calls. During training, interviewers were instructed in survey interviewing methodology, in the use of the CATI system, in basic tracking methods, and in specific details relating to the interviewing of Gulf War veterans and issues surrounding the study. Interviewers were required to pass a checkout interview with a mock respondent before they were allowed to proceed with calling. Over the course of the study, 20 percent of all interviews were monitored by a supervisor for quality control purposes.

In addition, once the study was under way, a specialist interviewer was trained in refusal conversion. This person followed up with all respondents who had previously refused participation, in an effort to better inform them of the nature of the project and to give them another chance to participate. Of those recontacted, approximately 70 percent agreed to the interview.

RESPONDENT COOPERATION AND RESPONSE RATES

Gulf War veterans were very cooperative with the survey effort. Only 4 percent of the veterans contacted refused to participate. Of those interviewed, the RAND interviewers rated cooperation as "good" to "very good" for 95 percent of the respondents; less than 1 percent were rated as "poor" to "very poor." Similarly, interviewers rated almost 97 percent of the respondents' interest in the survey as "average" to "very high."

Response and nonresponse rates are summarized in Table 2.3. The interviewers were able to contact 76 percent of the personnel in the initial sample. Interviews were completed for 2,005 out of the original 3,264 personnel selected.
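These counts imply the completion rate shown in the table:

    # 2,005 completed interviews out of 3,264 sampled records.
    print(round(100 * 2005 / 3264))  # 61, matching Table 2.3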

Table 2.3

Survey Response Rates

Response Status                        Percentage of    Percentage of
                                       Subgroup         Sample
Respondent not in Gulf War                                    7
Interview completed                                          61
Respondent located, no interview
  Not interviewed                            3
  Refused interview                          3
  Unable to respond                          2
  Total                                                       8
Respondent not located
  Full tracking                             14
  Reduced tracking                           9
  Total                                                      23
Other                                                         1

NOTE: The original sample size was 3,264.

In all, only 3 percent of the sampling frame refused to participate, or about 4 percent of those contacted. Two percent were deceased or not able to respond--deployed active duty personnel, for example, often could not respond. Another 7 percent of the personnel in the sampling frame were actually reached but indicated that they had not served in ODS/DS.

As we anticipated, the most common reason for nonresponse was an inability to locate the individual. A detailed and disciplined approach was undertaken to find as many personnel in the sampling frame as possible. However, in spite of these efforts, 23 percent could not be located before the conclusion of the interview period. Individuals who were in the Air Force during ODS/DS were easier to locate than those in the other services. Retired personnel were easier to locate than personnel still on active duty or in the reserves, and civilians were harder to locate. Finally, minorities and females were more difficult to locate than white males.

RESPONDENT AND POPULATION DEMOGRAPHICS

Table 2.4 shows the demographic characteristics of the survey respondents and the entire Gulf War population on the ground in theater. The first column, "Survey Respondents," contains the statistics for the 2,005 survey respondents. The "Population Estimates" column contains the statistics after the respondents are statistically adjusted to reflect the population of interest. As we discuss in Appendix C, the adjustments account for oversampling, nonresponse, and other phenomena. One specific adjustment we made was for personnel listed in the ODS/DS database but who did not serve in theater. As a result, we estimate that of the 536,790 personnel in the sampling frame, only 469,047 were actually in ODS/DS. This represents a population error rate of about 12 percent,[7] although this rate varies by particular characteristics such as service.[8]
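The arithmetic behind the population error rate can be sketched as follows. The two totals come from the text; in the actual computation, each respondent's survey weight (not shown here) determines the weighted share who reported not serving.

    def weighted_error_rate(respondents):
        # respondents: iterable of (weight, served_in_odsds) pairs.
        total = sum(w for w, _ in respondents)
        not_served = sum(w for w, served in respondents if not served)
        return not_served / total

    # Check against the reported totals:
    print((536_790 - 469_047) / 536_790)   # ~0.126, about 12 percent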

Table 2.4

Sample and Population Demographics

Demographics                    Survey Respondents    Population Estimates
                                (n = 2,005)           (n = 469,047)
Gender
  Male                          94.1                  92.6
  Female                         5.7                   6.7
  Unknown                        0.3                   0.7
Service
  Air Force                     36.0                  14.7
  Army                          32.5                  65.0
  Coast Guard                    0.1                  <0.1
  Marine Corps                  27.8                  18.2
  Navy                           3.6                   2.0
Food service
  No                            90.8                  97.2
  Yes                            9.2                   2.5
Built-up area
  No                            84.4                  94.4
  Yes                           15.6                   5.4
Rank
  E-1 to E-3                    14.4                  16.6
  E-4 to E-5                    44.0                  54.1
  E-6 to E-9                    30.7                  17.8
  Enlisted, unknown              0.3                   0.7
  Officer                       10.6                  10.8
Race
  Caucasian                     72.6                  67.4
  African-American              18.3                  24.5
  Hispanic                       4.8                   4.0
  Other                          3.7                   4.3
  Unknown                        0.5                   0.3

NOTE: Totals may not sum to 100 percent because of rounding.

An analysis of those who said that they were not in ODS/DS shows that (1) junior enlisted and female personnel (in the database) were less likely to have served in ODS/DS, and (2) personnel who were located in urban areas or had food service occupations were more likely to have been correctly listed as being in ODS/DS.


[1]Coast Guard personnel were also included in the sampling frame, and two members of the Coast Guard were actually surveyed. (Five were originally selected to be interviewed. Of these, two were located and interviewed, two more were located but did not meet the survey eligibility criteria, and one could not be located.) Their results are not included in the tabulations because the results could not be generalized to the Coast Guard population on the ground in theater. However, we carefully read these respondents' answers individually and found nothing unusual. These two personnel used typical pesticides in typical ways with typical frequency.

[2]We did not oversample preventive medicine personnel, who have had special training and have knowledge of pesticides, as they had been previously separately interviewed by OSAGWI.

[3]Colors and smells were derived from NIOSH (1997); the Merck Index, 12th edition; the Hazardous Substances Data Bank (HSDB) (sponsored by the National Library of Medicine); and Toxicology Literature Online (TOXLINE) (sponsored by the National Library of Medicine).

[4]Although the best information available indicates that aerial spraying was never authorized or used, we included the category for completeness and as an external validation of official reports.

[5]Examples of these materials can be found in Spektor et al. (forthcoming).

[6]More precisely, we administered the recall survey to a random subset of those who agreed to be recontacted. However, 97.4 percent of those surveyed the first time agreed to be recontacted.

[7]The population error rate differs from the sample error rate of 7 percent (Table 2.3) because of weighting. The difference occurs because the sample respondents who did not serve had higher weights than those who did; hence the population percentage is larger than the sample percentage.

[8]Also, note that this 12 percent error rate does not capture the reverse type of error: personnel who served in the Gulf War but are not in the Gulf War database. From our survey, we have no way of quantifying this type of error.

