Chapter Seven

WORLD WAR II

World War II marked a watershed in both the approaches to and understanding of the psychological consequences of combat and the war zone. In contrast to previous thinking and concepts, it became clear that while some men were more vulnerable than others to the development of psychological symptoms and syndromes, all men, no matter how brave or courageous, were vulnerable. World War II began with major reliance on psychological screening. It ended denying the efficacy of screening, contending that "every man has his breaking point" and crystallizing the concept of stress as a psychophysiological reality. When soldiers were believed to have predispositions and special vulnerabilities, the most commonly invoked models continued to be drawn from the wellsprings of psychoanalytic thought, with vulnerability determined by personality patterns established in infancy and early childhood. Ultimately, during World War II, the dynamics of soldier breakdown and symptom formation shifted from the previous "biological" perception (primarily a function of constitutional nervous system inadequacy) to an appreciation of the battlefield and war zone as stressors that interact with soldiers and their social environment to alter psychological and physiological behavior. It was established that for most soldiers, external events had internal consequences and that, in part, postevent expectations and beliefs about cause and outcome could shape such consequences.

Selection

The military and the nation went into World War II believing, almost as an article of faith, that soldier selection would be the solution to all military mental health problems. When selective service came into being, the Veterans Administration "exerted pressure to focus the attention of selective service on the stupendous burden from neuropsychiatric problems which resulted from the last conscription [World War I]" (Sullivan, 1964, p. 126). Sullivan was the first psychiatric adviser to the Selective Service System. He was deeply concerned by the economic costs of neuropsychiatric casualties:

The taxpayers of the United States have spent on neuropsychiatric disabilities related to the conscription and war of 1917–1918, 946-odd million dollars. The cost is still going up. Everything else has gone down . . . but the neuropsychiatric load goes up steadily, in its magnitude, year by year (Sullivan, 1964, p. 129).

He was also concerned about how soldiers who had become neuropsychiatric patients negatively affected others in their units:

The disorder, inefficiency, and grave risk which these patients caused in combat units were very sharply impressed on everyone serving in combat troops in the last war; and we have every reason to believe that the fundamental stability of American youth has diminished and that many of the strains of warfare have increased (Sullivan, 1964, p. 129).

Sullivan saw massive psychiatric screening and "selection out" as the primary solutions both to the severe wartime problem of having psychologically vulnerable soldiers in units and to the extreme economic cost of long-term treatment for masses of neuropsychiatric patients.[1] While Sullivan advocated the use of the psychiatric screening interview to select out the vulnerable and the unfit, he did not think it would screen out more than 50 percent of those who would ultimately become psychiatric problems for the military. He was most concerned with those who might develop long-term psychoses. Ultimately, Sullivan was dropped as psychiatric consultant, but the concept of screening and selecting out the vulnerable remained a core element of the Selective Service System.

Although the psychiatric screening was cursory, massive screening did take place during World War II as part of the Selective Service assessment and induction system. Initially, 1,681,000 men were rejected and excluded from the draft for emotional, mental, or educational disorders or deficiencies. Between 1942 and 1945, over 500,000 more (Ginzberg, 1959) were separated from the Army on psychiatric or behavioral grounds. In addition to these separations, a constant and consistent process of weeding out men from combat units took place at training centers, during divisional training, upon notice of embarkation for an overseas theater, and in the staging areas prior to deployment into battle. Almost all World War II division psychiatrists cite such precombat screening as one of their ongoing tasks. It is impossible to know the actual number of soldiers who were considered potentially vulnerable, since most were transferred to support and service functions and not actually discharged.

Combat Realities and the Failure of Selection

When the United States entered World War II, little preparation (beyond the concept of selection) had been done to deal with psychological casualties. Despite the use of selection, during World War II the United States suffered an average of one diagnosed psychological casualty for every four wounded.[2] The first two major commitments of American forces to battle made it clear that, screening notwithstanding, U.S. forces were going to experience many psychiatric casualties during the war. The battle of Guadalcanal in the Pacific and the battles of the Kasserine and Faid passes in North Africa generated large numbers of psychiatric casualties.

Guadalcanal produced extraordinary levels of psychiatric casualties in the 1st Marine Division and the Army units sent in to reinforce it. Rosner (1944) reported that 40 percent of the casualties evacuated from Guadalcanal "suffered from disabling neuro-mental disease" (compared with only 5 percent following the attack on Pearl Harbor). He described the psychiatric casualties as

reduced to a pitiable state of military ineffectiveness after prolonged exposure under severest tropical conditions to exhaustion, fear, malaria, and sudden violent death at the hands of an insidious and ruthless enemy (Rosner, 1944, p. 770).

Rosner felt that the consequences of Guadalcanal presented a specific challenge to the idea of predisposed psychological vulnerability that developed during the interwar years:

Since World War I neurosis incurred in wartime or under combat conditions has come to be considered pre-conditioned and non-specific. The term "shell shock" for example is supplanted by the less suggestive and less militant "anxiety neurosis"; similarly "gas neurosis" becomes "acute psychoneurotic respiratory syndrome." There has been a de-emphasis of exciting cause and reemphasis of individual personality and other predisposition factors . . . [however] . . . a condition designated by the title "Guadalcanal Neurosis" [indicates that] . . . the brutal combat situation responsible for this interesting psychiatric aberration, however, redirects attention to the importance of exciting cause (Rosner, 1944, p. 774).

Rosner went on to attempt to distinguish between those soldiers who broke down early in their military careers or in the course of battle because of predispositional factors, and those who broke down as a consequence of continuing battle experiences and had no detectable predispositional factors.

Reviewing the issue of psychiatric casualties from Guadalcanal in a 1946 article, Theodore Lidz, an Army psychiatrist who had treated evacuees in the Pacific, noted that "even the non-psychiatric casualties showed emotional reactions of a severity that would have been considered incapacitating in later campaigns." In addition to anxiety and depression, symptoms included "headaches, anorexia, . . . tremors, insomnia, nightmares and palpitation [which] were individual symptoms or could all be present in one man." Trying to understand what had contributed to the tremendous psychiatric casualty levels of this prolonged battle, Lidz (1946, p. 194) concluded that:

there were many factors preying on the emotional stability of the men. The tension of suspense in one form or another was among the most serious; waiting to be killed, for death had begun to seem inevitable to many, and some walked out to meet it rather than continue to endure the unbearable waiting; waiting for the next air raid and the minutes of trembling after the final warning; waiting for the relief ships; waiting without acting through the jungle nights, listening for the sounds of Japs crawling, or for the sudden noise that might herald an attack; waiting even in sleep for the many warning sounds. The fears were numerous: of death, of permanent crippling, of capture and torture, of ultimate defeat in a war that was starting so badly . . . [as well as] fear of cowardice . . . and of madness.

As he cogently put it, "In this first offensive battle of the war it became clear that the incapacitating wound could arrive with the mail from home . . . the loss of a girlfriend, the fight with parents" (Lidz, 1946, p. 195).

In contrast to Rosner, Lidz felt that all of those who became psychological casualties had predisposing factors in their preservice family relationships and life courses. These men, he felt, would have survived the trauma of briefer and less intense combat and indeed would have behaved heroically, as they did initially on Guadalcanal. The weaknesses in their personality structures and inner resources combined with the continuous daily trauma of war to ultimately undo them. In Lidz’s thinking, a single traumatic combat event was meaningless; rather, it was the cumulative stress of many such events that produced psychological breakdown.

The next great wave of psychiatric casualties came shortly after Guadalcanal, in the battle of the Kasserine and Faid passes in North Africa. Here, a poorly trained, equipped, and led American division met Rommel’s superior Afrika Korps. An American participant noted the feeling of absolute helplessness as he watched shells from his unit’s short-barreled, low-velocity 75mm howitzers bounce off the attacking German Panzers. The American division experienced an exceptionally high proportion of psychiatric casualties, almost equal to the number of killed and wounded. Men were overwhelmed by the shattering reality when their own poorly trained and equipped troops met a highly skilled and better-equipped enemy. This breakdown of a division, depicted in the opening of the movie Patton, was a cause of great concern to senior Army commanders. As with Guadalcanal, the United States had "selected out" those assumed to be most vulnerable, but that did not prevent large numbers of psychological casualties.

It was clear that all of the lessons of World War I had essentially been forgotten, particularly in terms of fielding organizations to deal with widespread psychological dysfunction. Casualties were being evacuated directly back to the United States. Among the questions asked were: What happened? What maintains soldiers in the combat zone and in combat? A psychiatrist, Herbert X. Spiegel, was sent to Tunisia to evaluate the situation and to develop preventive recommendations. Spiegel’s observations set the stage for much of the thinking that became central to military psychiatry, establishing the critical role of the primary group in maintaining soldier mental health and in buffering the effects of the stresses of the battlefield and events at home. Spiegel (1944, pp. 311–312) noted:

If abstract ideas–hate or desire to kill–did not serve as strong motivating forces, then what did serve them in the critical time? . . . It seemed to me that the drive was more a positive than a negative one. It was love more than hate. Love manifested by 1) regard for their comrades who shared the same dangers, 2) respect for their platoon leader or company commander who led them wisely and backed them with everything at his command, 3) concern for their reputation with their commander and leaders, and 4) an urge to contribute to the task and success of their group and unit. . . . They seemed to be fighting for somebody rather than against somebody.

If that be so, what practical significance does it have to psychiatrists? Let us first consider the psychiatric casualties. A considerable amount of the ordinary combat accomplishment was performed by ordinary men experiencing rather severe anxiety.

The overt symptoms varied from a feeling of tension, dry mouth, palpitation, perhaps mild tremors, through a more marked tension with increasing sensitivity to noises of any kind, to the extreme of gross trembling, screaming, crying, running about in confusion, and almost complete disorientation. These extreme cases were not common. . . . If there was anything that appeared to be common to all these states besides fear, it was the factor of fatigue or exhaustion. Fatigue not only as a result of physical exertion and lack of restful sleep, but also as a result of a constant state of tension and anxiety.

Another component was something . . . which . . . might be referred to as the X factor. It was something which corresponds to whatever courage is; something which, when present, indicated good morale . . . it was influenced greatly by devotion to their group or unit, by regard for their leader, and by conviction for their cause. It seems to explain why a tired, uninspired, disgusted soldier had the clinical appearance of an anxiety state. It seemed to explain why some units could outdo others; it seemed to aid in controlling the ever present fear; and it seemed to aid in resisting fatigue. . . . Here was a critical, vulnerable and, to be precise, an easily influenced component that often decided whether or not a man would be overwhelmed by his fear, anxiety, or fatigue. Here was a factor that decided whether or not a man became a psychiatric casualty.

THE ROLE OF THE GROUP

Spiegel’s observations became the hallmark of preventive psychiatric thinking later in World War II. The primary mediating structure that enabled the soldier to cope with stress, preventing breakdown and longer-term psychological damage, was the support provided by the soldier’s group, particularly the combat or task group and its immediate leadership. This group structure rested upon relationships within the primary group (i.e., the crew or squad, nested within the relationships of the platoon and the company) and was most protective when coupled with trust and confidence in the relationship with unit leaders at each level.[3] Where there were no strong primary group affiliations, the potential for breakdown was very high. Where there was a real break between leaders and led, a lack of trust or a lack of communication, the potential for breakdown was even higher. As pointed out earlier, a certain amount of the psychological sustaining power of the group was undoubtedly due to the evolution of tactics and weaponry. The establishment of companies, platoons, and squads as maneuver elements (as opposed to the mass of the "line of battle") endowed these groups with new value to the soldier in combat. A soldier’s life and survival were now in the hands of a small interdependent group. Decisions made by his sergeants, lieutenants, and captains now had determining power far beyond any they might have exercised in the past.

Glass (1973, p. 995), who became the dominant figure in military psychiatry for more than two decades after the war, summed up the powerful effects of these observations:

Perhaps the most significant contribution of World War II military psychiatry was recognition of the sustaining influence of the small combat group or of particular members thereof, variously termed "group identification," "group cohesiveness," "the buddy system," and "leadership." This was also operative in non-combat situations. Repeated observations indicated that the absence or inadequacy of such sustaining influences or their disruption during combat was mainly responsible for psychiatric breakdown in battle. These group or relationship phenomena explained marked differences in the psychiatric casualty rates of various units who were exposed to a similar intensity of battle stress. The frequency of psychiatric disorders seemed to be more related to the characteristics of the group than to the character traits of the involved individuals. Thus World War II clearly showed that interpersonal relationships and other social and situational circumstances were at least as important as personality configuration or individual assets and liabilities in the effectiveness of coping behavior.

It is clear that World War II marked an extraordinary paradigmatic shift from a doctrine of vulnerability based upon constitutional and inherited factors to one based almost entirely upon environmental determinacy. In a sense, a wide-scale leveling of the "personality/constitutional–predispositional" playing field occurred. It was agreed that a modest proportion of men bore psychic wounds from their past that made them exceptionally vulnerable and that these men usually broke down very quickly in training or in the initial commitment to battle. As for the rest, the model adopted fit the democratic values that underlay our national commitment to the war. In general, all men were more or less equally endowed to bear the vicissitudes of war, but all were behaviorally and psychophysiologically "plastic." Each was at risk of being stressed by the strains, fears, and anxieties of combat, the combat zone, separation from family, etc., to a point of possible breakdown or symptomatic expression. It must be remembered that where scientifically rigorous chains of causality, such as smallpox virus leading directly to smallpox, do not exist, we ordinarily reason from situational and correlational determinants. When men of demonstrated bravery in past combat, with no asserted or recorded prior psychiatric problems, broke down, the obvious contributing correlates were environmental. The specious racism of the eugenics movement had so tainted consideration of biological factors that they were looked upon askance by much of the medical community. In addition, in our national dialogue, the sacrifices of brave men who suffered combat fatigue were not to be stigmatized as involving individual constitutional weaknesses–as General Patton discovered to his severe discomfort.

Thus the real predisposing factors were seen not as internal to the men themselves but as external and environmental. The following are some of these factors: threat–the erosive anxiety that comes from being in a situation where one’s life is in danger, where people (the enemy) are trying to kill you, either directly in firefights or with the more random and more difficult-to-defend-against agencies of bombs and artillery; primitive living conditions–soldiers often bedded down in a hole in the ground and might not have access to latrines; hunger and thirst abated only by unappetizing combat rations; periods of intense fear alternating with periods of boredom; disease and accident; grief over the loss of buddies; anxiety about events at home; and, above all, interactions with those in the unit and the unit climate.

The power of the unit and of organizational climate can be measured by comparing soldier breakdown rates for different regiments and divisions engaged in equivalent combat scenarios. Psychological casualties ranged from 3 percent to 54 percent. The 442nd Regimental Combat Team[4] had almost no psychiatric casualties throughout the Italian and German campaigns. In contrast, the 24th and 43rd Divisions in the Pacific Theater had major and chronic problems with combat stress and general psychiatric casualties. The 24th Division, which had been responsible for the defense of Oahu at the time of the attack on Pearl Harbor, seemed to function as if it were under the perpetual stigma of failure and incompetence. In addition to large numbers of psychiatric casualties in its first campaign, the division was characterized by an unusual amount of situational homosexuality during its deployment, as well as by extremely high levels of sick call and somatic symptoms. The 43rd Division, which was characterized by poor morale and leadership problems, lost almost 10 percent of its manpower as psychiatric casualties in New Georgia. In combat, psychiatric breakdown appeared to be contagious–mass breakdown occurred among small groups, such as infantry squads. The relationship between the pattern of breakdown and the divisions’ organizational problems is indicated by the fact that the number evacuated from each company was directly proportional to the number of unit leaders evacuated (see Marlowe, 1986). In this latter case, as Coleman (1973, p. 637) pointed out, the pattern underlined "again the paramount importance of qualified combat leadership in maintaining morale and preventing combat disturbances."

BATTLE FATIGUE/COMBAT FATIGUE

In response to the belief in universal vulnerability, the U.S. Army adopted the official slogan "Every Man Has His Breaking Point" with respect to the problem of combat stress or, as it became popularly known, "combat fatigue" or "battle fatigue." It was established that even the bravest and strongest, exposed to combat for a long enough period, would break down. The legitimization of psychological or behavioral breakdown led to a shift in what might be termed the "behavioral metaphors" used to express the consequences of stressful events. Psychological and behavioral symptoms became predominant. Physical symptoms remained, but they tended to be less dramatic, chronic discomforts rather than disabling ones. Weinstein,[5] who commanded the 601st Neuropsychiatric Unit, which supported the 5th Army in Italy, estimated that some 15 to 20 percent of patients suffered conversion or equivalent symptoms. The drop in the commonality of conversion reactions was not, however, universal. They remained the dominant symptoms of breakdown in elite organizations such as the Rangers and the airborne divisions. In units that set a cultural premium upon psychological as well as physical toughness, psychological symptomatic expression was far less acceptable than in ordinary infantry divisions.[6]

The issue of culturally and socially acceptable metaphors as aspects of the expression of illness is one that remains difficult for some to grasp. The "pure disease medical model" (i.e., that model in which symptoms are the fixed and invariant expression of the disease entity–be it pathogen, toxin, or malignancy) has truth and utility, but it is a limited truth and utility when dealing with symptomatic expressions of the wider category of "illness." There remains a vast part of the spectrum of "illness"–the individual and cultural expression of the perceived disease state–in which symptomatic expression is channeled into a culturally agreed-upon narrative acceptable in terms of the patient’s role, status, and image.

In a simpler and more dramatic sense, the case of the Wehrmacht during World War II is illustrative. On a formal level, the Nazi government and military leadership banned the concept of psychological breakdown in the mode of shell shock, war neurosis, or battle fatigue. A behavioral breakdown or the exhibition of psychological symptoms other than those of "insanity" was considered both cowardly and treasonous.[7] The penalty, since such breakdown was often considered refusal to do one’s duty in the face of the enemy, was frequently punishment or death. German military physicians were well aware of the realities of combat stress, but soldiers with combat-stress-induced illnesses were usually diagnosed in terms of physical symptoms with little or no reference to a psychological component (see Schneider, 1986). The pattern of symptomatic presentation of the combat stress reaction thus tended to differ from that of the U.S. or British forces even though the basic ailment was the same.

The overwhelming focus of military psychiatry on the problem of combat fatigue persisted throughout the war and for decades afterward, diminishing concern about other psychologically relevant phenomena affecting troops during deployment. The reason was quite simple–strategy and tactics focused military concern upon those who actually carried the war to the enemy, and these soldiers had rapidly demonstrated that they were the group most vulnerable to breakdown during or following combat. Well over 90 percent of all combat fatigue cases came from infantry maneuver regiments, followed by more modest numbers from armor and even fewer from artillery (see Mullins and Glass, 1973).

In the mid-century period of mass armies engaged in extensive and intensive ground war, large drafts of manpower were critical to maintaining the war effort. Owing to selection processes, combat losses, and the demands of competing theaters, as well as the forces required to maintain a massive logistics effort, troops were in short supply. By the latter part of 1944, the European Theater of Operations was scraping the "bottom of the barrel" for replacements. This led to the termination of the Army Specialized Training Program, which had been designed to continue the university education of those in "vital" specialties, and to the rapid movement of its students to Europe as infantry replacements. Some infantry regiments were suffering casualty rates of 1,600 per thousand per year. The only nonland forces that suffered equivalent levels of behavioral and psychophysiological breakdown were the heavy bomber forces, particularly the 8th Air Force. Their air war, with casualty rates ranging from 15 to 40 percent of those involved in deep penetration raids, shared the traumatic intensity of ground warfare (see Mullins and Glass, 1973). In this situation, combat fatigue losses represented a major problem. Yet, in looking at the war-related problems of the second half of the 20th century, and in particular those of the Gulf War, it is important to realize the extent to which other elements contributed to, and in a number of cases drove, psychological, psychosocial, and stress-based illness.

A NONCOMBAT HYSTERIFORM BEHAVIORAL EPIDEMIC: THE CASE OF ATABRINE

A primary example (and one that has resonance today in terms of a psychosocially structured, rumor-driven series of events with extreme medical and behavioral consequences) was the Atabrine/Mepacrine problem of World War II, which perhaps best illustrates the effect of belief on behavior and illness. Compliance with taking Atabrine/Mepacrine, a synthetic antimalaria drug,[8] was persistently undermined by a combination of moderate side effects and a continual barrage of rumor and folklore passed among soldiers. As Field Marshal Slim (1956, p. 180), who commanded the British Fourteenth Army in Burma, described the phenomenon:

When Mepacrine was first introduced and turned men a jaundiced yellow, there was the usual whispering campaign among troops that greets every new remedy–the drug would render them impotent–so, often the little tablet was not swallowed.

Noncompliance had significant deleterious tactical effects. The consequences of these rumors and beliefs were pointed out by W. J. Officer (1969, p. 274):

The periodic rises in the incidence of malaria occurring at intervals of six weeks were very successfully overcome by increasing the dosage to three tablets per diem for 5 days before they were expected. . . . In spite of this large intake of Mepacrine over a prolonged period, no toxic effects were recorded, although some individuals exhibited an idiosyncrasy to it at the commencement and required quinine for suppression.

Unfortunately, there was a somewhat widespread belief that Mepacrine produced impotence, and in one battalion the administration of the drug was suspended before the troops went into action as it was considered by the combatant officers to reduce the fighting efficiency of the unit. As such fallacies have a tendency to spread rapidly and become exaggerated and gain greater credence during circulation, every opportunity must be seized to discredit them.

Over and over again in the Burma campaign, the issue of "Atabrine discipline" (that is, compliance with a standardized regimen of Atabrine intake) played a central role in the medical and tactical breakdown of forces in the theater because it was enmeshed in rumor, folklore, and distrust. For Merrill’s Marauders–the 5307th Composite Unit (Provisional), the American deep penetration counterpart to the Chindits–noncompliance with the Atabrine regimen was cited as a key factor in the unit’s disintegration:

Suppressive practices apparently held up well until the battle of Nphum Ga. By then some cases were "breaking through" on the march, and they became very numerous during and immediately after the siege. Evacuations and medication produced some relief. But as the troops struggled over the trail to Myitkyina and lost momentum in the fight for the town, malaria overwhelmed the force.

The most probable cause of the outbreak was a serious breach in Atabrine suppressive discipline. In the midst of a crisis in morale such an explanation became especially convincing.

The outbreak at Nphum Ga, however, revived old doubts [about the efficacy of Atabrine].

It is doubtful whether the command and the medical establishment ever regained control of the situation. Some semblance of Atabrine discipline had been reinstated before the march to Myitkyina began. But "breakthroughs" and new cases immediately appeared again. Those who did not fall by the wayside with malaria were thoroughly ill when they staggered into the aid stations at Myitkyina. Sent off after the usual onsite treatment, they soon returned as sick as ever. Outraged by restrictions on evacuation and the pressure to continue the campaign, genuinely dazed with fatigue and suffering from other diseases, more and more men repudiated Atabrine therapy. The sicker they became, the lower fell their morale. The lower their morale, the less hope there was of restoring Atabrine discipline and curbing malaria.

Thus were the Marauders destroyed, not by misleadership . . . nor by the enemy. (Hopkins, Stelling, and Tracy, 1969, pp. 394–395).

Rumor, folk belief, fear, and anxiety, reinforced by the overt side effects of Atabrine, all combined in the Burma tactical scenario to undermine compliance and destroy Atabrine discipline. Burma was not unique: Equivalent problems with compliance were seen in North Africa, the Mediterranean, and the South Pacific. Describing the early efforts of suppression in the South Pacific, Baker (1963, p. 465) pointed out that, "with these early efforts at suppressive drug control, malaria rates of combat troops in the range of 1,500 to 2,000 per thousand troops per annum were common." Again, the problem of providing successful suppression with Atabrine had little to do with its ultimate efficacy, which was established, and much to do with the perception of the drug.[9] Baker (1963, p. 466) continues:

When Atabrine was initially administered it frequently led to nausea, vomiting, and diarrhea. This was particularly likely to occur when the administration was begun on shipboard, where anxiety and seasickness contributed to the prevalence of gastrointestinal upsets. Confusion, too, between the skin discoloration due to Atabrine and cases of infectious hepatitis [a confusion added to by some medical officers] increased fear of the drug. There were rumors that Atabrine caused impotence. In addition soldiers soon learned that if they acquired malaria they would be removed from combat areas to more adequate hospital facilities. Altogether, the value and safety of administration of this drug was not wholeheartedly accepted by the troops, and forward medical officers themselves became lukewarm regarding it. . . . The result of all of this was poor discipline in the use of suppressive therapy and consequent failure in control of clinical malaria.

Thus, rumor, folklore, and myth, particularly when combined with perceivable side effects and possible future reproductive consequences, can adversely affect compliance with necessary or recommended prophylactic drug regimens, alter behavior, or generate or exacerbate functional physical symptoms–particularly those reputed to be created by the agent involved.[10] These phenomena, involving the attribution of feared future effects to medication, appear to prefigure the same sorts of responses to chloroquine-primaquine in Vietnam and to several agents (including anthrax vaccine and pyridostigmine) given during the Gulf War.

OTHER NON-BATTLE-FATIGUE PSYCHOLOGICALLY IMPLICATED DISORDERS

In addition to the usual symptoms of battle fatigue,[11] the war also generated other psychophysiological and psychosomatic disorders. Many of these were the same disorders that characterized ailments of troops in World War I. Lewis and Engel (1954) note that neurasthenia (neurocirculatory asthenia) was more common among troops in World War II than in World War I. They also point out (p. 139) that there appeared to be predisposing factors for a range of physical ailments causing hospitalization. Based on the data, they assert that "a high incidence of personality disorders characterized patients hospitalized for acute upper respiratory infections and for hemorrhoids" and that "those with high neurotic potential, as shown by Cornell Service Index studies, were hospitalized more frequently than the average." Almost all functional disorders appeared to have had fatigue as an overriding characteristic. Gastrointestinal disorders[12] were epidemic in both combat and noncombat soldiers. Psychogenic rheumatism and skin reactions[13] were also common, as was headache. The observation was made that levels of somatization–i.e., the reporting of physical symptoms–among troops in combat line organizations were proportional to levels of psychiatric casualties in combat and the "climate" of the units involved (Kaufman, 1947).

It was observed that in many situations, particularly those involving some level of social isolation, climatic stressors, and environmental austerity, support forces were at significant risk for "neurotic" patterns of response, particularly for the generation of somatic symptoms and other physical expressions of illness. These medical problems were often accompanied by rises in the level of psychiatric referrals and by such behavioral phenomena as rises in situational homosexuality. This was particularly true of troops manning the logistical pipelines supporting the forward forces in such places as the South and Southwest Pacific and the Persian Gulf. (See, for example, Mullins and Glass, 1973.) Emotional problems and minor illnesses were amplified in support forces stationed in such urban centers as Naples because of "relative deprivation" (that is, living in an environment that was in some ways "out of the war" and conceptually more like home). These living conditions appeared to markedly increase the stressfulness of separation from home and family, of poor food quality, and of the lack of entertainment. The observation was sometimes made that stressors that appeared to be trivial for combat troops in the line in Italy and elsewhere were major chronic stressors for rear-echelon personnel (Stouffer, 1949).

One observation was made by almost all physicians who treated battle fatigue and psychogenic disorders: Patients with functional disorders tended to fix upon, and continue to exhibit, the symptoms identified at the initial screening or diagnosis. This concept came to be known in medical sociology and in social and military psychiatry as "labeling theory." Attention to labeling was considered of paramount importance for treatment, for symptom resolution, and for limiting disabilities to the short term rather than the long term.

Weinstein has noted that in the Italian theater, the most potent tool for alleviating the symptoms that might develop into battle fatigue was a simple and reasonable explanation of their source and normality. When it was explained to the soldier that a churning stomach, dry mouth, pounding heart, and trembling hands were normal ways the body responded to a situation of high apprehension and anxiety, the symptoms usually moderated as the soldier breathed a sigh of relief. He was not, after all, going mad or about to collapse from some terrible internal disorder.[14] When the soldier was told that he had an ailment of unknown origin, the symptoms tended to persist and amplify. The more attention the medical system paid to him, and the farther the patient moved through rearward and more sophisticated medical echelons, the more intractable the symptoms often became. Soldiers with cases of battle fatigue that might have been treated easily and readily in forward areas often became long-term psychiatric ward patients if evacuated to hospitals in the United States. Patients with physical symptoms often became chronic cases and had greater levels of disability when diagnosis, rest, and medications failed to relieve their symptoms and in many cases amplified them. The interaction between the patient’s expectations and the medical system’s expectations became, once again, a significant contributor to the course of the illness and its short-term or long-term outcome.

During World War II, the lessons of World War I with respect to the handling and treatment of combat psychiatric casualties (treat quickly, rest briefly, and explain and act with the expectation that the soldier will return to his unit) were initially forgotten. Soldiers and Marines who broke down during the early phases of World War II were usually evacuated, and many became long-term psychiatric patients. This situation was eventually remedied,[15] and ultimately, as in World War I, a wide variety of therapeutic modalities were utilized. These included brief therapy, simple encouragement, hypnosis, and even sodium amytal usage. All worked to some degree, at least from the practitioner’s viewpoint. If there was a common underpinning, it lay in the trust soldiers had in the medical system and in the physicians who were treating them, and in the belief of both soldiers and practitioners that the therapy would work. As The American Soldier studies point out, this belief in the efficacy of the military medical system was a salient factor in maintaining morale (see Stouffer et al., 1949).

In addition to the important principles of not labeling soldiers with a psychiatric or physical diagnosis and not withdrawing them from combat or the combat zone, another principle of treatment emerged in World War II–never, if at all possible, let troops know what the range of symptoms of psychological disorders was.[16] Indeed, the terms battle fatigue and combat fatigue were themselves picked because they were essentially neutral, did not indicate the specific elements of a syndrome, and were considered nonstigmatizing. It was assumed that the underlying psychological dynamic, derived in good part from psychoanalytic thought, was the "primitive" desire of the soldier to find a legitimate pathway through which to withdraw himself from the terrible threat of death and dismemberment presented by the combat zone.[17] Because most men did not want to die or be maimed, many military psychiatrists felt that, given the strength of the "survival instinct," if soldiers knew what symptoms would lead to a diagnosis of combat fatigue, these symptoms would be produced (from subconscious or unconscious motivations) by many soldiers, and there would be epidemics of battle fatigue. During World War II, it was reported that in areas where the symptoms were widely known by the troops, there were large increases in the number of soldiers coming in for treatment.

We do not know whether this supposition, grounded in long-standing psychological beliefs, was true. The only tests of it came almost 50 years later, in a markedly changed military and a changed culture, when in Panama and in the Persian Gulf every soldier had access to materials delineating all of the symptoms of combat stress. These were, however, short wars with combat dramatically different from the grinding battles of World War II and Korea. We do know that these short wars produced no marked increase in stress-related symptoms or patients during the initial period of commitment to battle. In fact, there were proportionally fewer than during past wars. This may well be analogous to Weinstein’s observations, reflecting widespread knowledge of the "normalcy" of combat stress.

It should be noted that the emphasis on psychological dysfunction as a legitimate pathway out of the combat zone served to mask the roles played by both wounding and disease in the genesis of psychological dysfunction. World War II produced, at least among the American forces, the widespread myth of the "million dollar" wound: a wound severe enough to remove the soldier permanently from the combat zone, but not severe enough to maim or disable. The "million dollar wound" and its disease analogs were seen by psychiatrists as legitimating the soldier’s withdrawal from battle and abandonment of his primary group and therefore as precluding development of the symptoms of combat fatigue or other psychological symptoms. This masking of the physical-psychological relationship was unfortunate and led to a lack of focus on the many reports of combat fatigue symptoms among the wounded and the physically ill (see Lewis and Engel, 1954). It also contributed to the establishment of an implicit model in which the "psychological" was seen as driving the "psychosomatic" consequences of experiences in the combat zone, without specific attention to the interaction and interdependency of mental and physical health.

The World War II Paradigm Shift

The paradigm shift that took place during World War II moved from causation based upon constitutional predisposition in markedly vulnerable population subsets to the concept that all normal human beings could break down. Any soldier could be made behaviorally dysfunctional as well as physically symptomatic by the stresses, anxieties, and strains affecting him in the war zone environment. Human response to the extreme stresses of the combat environment was seen as variable. Some soldiers fell prey to the stresses of combat or even training sooner than others did. In time, however, all were vulnerable. Combat events and other environmental stresses were perceived as altering the internal environment of the body in ways that were destructive to mental and physical health. Prolonged combat-environment exposure could alter the human capacity to maintain a reasonable level of performance, and in such situations, the power to maintain physical and mental homeostasis was seen as limited. This led to the belief that all combatants would ultimately become psychological casualties. Projected curves demonstrating the relationship of force sustainment to intensity of battle were developed (see Swank and Marchand, 1946; Swank, 1949; and Appel and Beebe, 1946), and in all, the curve extrapolated to a "real point" at which 100 percent of the force would have become combat psychiatric casualties. The three most pertinent mediating variables were seen as the intensity of combat, the cohesion of the unit, and normal human variability. Breakdown, or the exhibition of either stress-related physical or psychological symptoms, was not simply due to the genetics, early childhood experiences, or ethnic group membership of the soldier, as many had believed in the past.
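The logic behind such extrapolations can be made concrete with a minimal sketch; this constant-hazard model is offered here as an illustration, not as the actual curve fitted by Swank and Marchand or by Appel and Beebe. If each day of combat is assumed to carry a roughly constant probability $p$ of psychiatric breakdown, the cumulative proportion of casualties after $d$ days of continuous combat is

$$C(d) = 1 - (1 - p)^{d},$$

which rises toward 100 percent as exposure lengthens; with an assumed $p = 0.02$, for example, roughly 70 percent of a force would have broken down after 60 days. Any positive daily risk, however small, drives such a curve toward total loss, which is why the wartime projections all reached a point of complete psychiatric attrition.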

Despite the asserted ease of treatment and the comparatively radical environmental situationalism, a profound concern developed over the possibility that soldiers might suffer the kinds of long-term stress-related psychological disabling disorders that had been seen after World War I and that had been so costly in both economic and human terms. This possibility led to the passage of the National Mental Health Act in 1946, which provided for a vast expansion of mental health facilities, including the establishment of Veterans Administration "store-front" local treatment facilities to deal with the anticipated wave of mental health and adjustment problems of veterans.

ENDNOTES

[1] His arguments are of great interest because they illuminate the conceptual structure of the time as held by an advanced psychiatric thinker–the father of interpersonal and social psychiatry.

[2] According to estimates made by Albert Glass and others, four times as many were treated locally and never recorded as psychological casualties.

[3] This relationship is characterized by implicit agape-like love between the members of a good military unit.

[4] This unit was composed primarily of Nisei from Hawaii and relocation centers and was the most decorated unit in the U.S. Army.

[5] Personal communication with Edwin Weinstein, M.D., 1995.

[6] The reality of battle fatigue was certainly not universally accepted within the Allied forces. The infamous incident in which General George Patton slapped a hospitalized soldier presumably suffering from combat fatigue is one well-known instance.

[7] The phrase used was ohne Führergeist (roughly, "without leader spirit").

[8] Taking Atabrine/Mepacrine was essential because of the loss of natural sources of quinine in Southeast Asian plantations.

[9] The medical community compounded the Atabrine/Mepacrine issue by making statements of mistrust, despite the fact that it represented the only malaria suppressant (other than limited supplies of quinine) available to U.S. and allied forces.

[10] Problems like those seen with Atabrine/Mepacrine in World War II also surfaced during the Vietnam conflict. There were many reports of soldiers refusing to take the chloroquine-primaquine antimalarial drugs. In addition to beliefs that it would bring on impotence and other untoward side effects, it was credited with causing genetic damage that would lead to severe birth defects in future children conceived by soldiers.

[11] Symptoms included trembling, palpitations, narrowing of the visual field, startle response, fatigue and weakness causing failure to continue one’s duties, problems with memory, etc.

[12] Or as it was termed then, "gastrointestinal neuroses." These were almost always prolonged gastrointestinal disorders without peptic or duodenal ulcer.

[13] Skin diseases included neurodermatitis, urticaria, pruritus, etc.

[14] Personal communication with Edwin Weinstein, M.D., 1996.

[15] Following the debacle in North Africa, Hanson, Glass, and others recreated the forward-based psychiatric treatment models of World War I. The emphasis was on brief intervention psychotherapy, rest, a hot meal, and the concept of restoration.

[16] Personal communication with Albert J. Glass, 1982.

[17] This was perceived–in part in terms of the Freudian concept of Thanatos–as the primary drive informing the soldiers’ quest for survival.

