Technical Writing course-
Write a 200-word post about the most interesting insight you got from the chapter. You can choose any aspect of the chapter you want to write about. Make sure to include specific references to the book (and quotes where necessary) as you describe what you found most interesting in the text.
Text attached to this submission.
Chapter Title: Organizational Culture
Book Title: Organizational Learning at NASA
Book Subtitle: The Challenger and Columbia Accidents
Book Author(s): Julianne G. Mahler and Maureen Hogan Casamayou
Published by: Georgetown University Press
Stable URL: https://www.jstor.org/stable/j.ctt2tt559.10
This content downloaded from 126.96.36.199 on Fri, 15 Oct 2021 17:51:28 UTC. All use subject to https://about.jstor.org/terms

CHAPTER 6: Organizational Culture

NASA's organizational culture has been the subject of an enormous number of popular and scholarly works. Thomas Wolfe's 1995 The Right Stuff, and the subsequent film, observed the self-confident, can-do ethos of test pilots and its early influence on NASA and its astronaut corps. The 1995 film Apollo 13 celebrated the dedication and ingenuity of NASA's engineers on the ground, who were able to improvise a device to scrub carbon dioxide from the air and replot a return path for the crew of the badly damaged lunar landing craft in 1970. A darker view of the agency's culture is described by Diane Vaughan (1996), who traced the widespread acceptance of the increasingly clear evidence of faulty seals in the solid rocket boosters before the Challenger accident.
Adams and Balfour in Unmasking Administrative Evil (1998) attribute what they see as the isolated and "defensive organizational culture" (108) of the Marshall Space Flight Center to its early management by a team of German rocket scientists with links to Nazi forced labor camps. Cultural elements are also thought to have contributed to the two shuttle accidents. Both of the official investigations of the shuttle disasters identify culture as a cause. The Rogers Commission "found that Marshall Space Flight Center project managers, because of a tendency at Marshall to management isolation, failed to provide full and timely information bearing on the safety of flight 51-L to other vital elements of shuttle Program management" (Rogers Commission 1986, 200). Based on this finding, the commission indirectly recommended culture change as one remedy: "NASA should take energetic steps to eliminate this tendency at Marshall Space Flight Center, whether by changes of personnel, organization, indoctrination or all three" (200; emphasis added). The Columbia Accident Investigation Board went into much more detail about the failings of the shuttle program culture, identifying cultural issues behind several of the patterns of behavior that led to the accident. The board found that a "culture of invincibility" permeated the management (CAIB, 199), particularly as it used past successes to justify current risks (179). There were also "'blind spots' in NASA's safety culture" (184). Excessive hierarchy and formalization, intolerance of criticism, and fear of retribution kept those with concerns silent.
The CAIB identified lapses in trust and openness, contributing to blocked communication across the shuttle organization, a finding that had also been identified in the Rogers Commission report and in 2000 by the Shuttle Independent Assessment Team (179). In both accidents, information and events had been interpreted through cultural frames of reference built up over years of experience (CAIB, 200).

In line with the approach taken in the previous three chapters, we will examine the evidence that similar patterns of cultural beliefs and assumptions contributed to both accidents. We then look at the efforts that were made to change these assumptions or learn to overcome their effects after the Challenger accident. Because underlying cultural assumptions tend to persist in organizations, we do not necessarily expect to find wholesale or rapid changes in the agency's culture, but rather some recognition that certain cultural beliefs contributed to the management patterns that led to the accidents. Finally, we search out the efforts that were made to understand the impact of cultural beliefs, to initiate changes when possible, or to make intelligent adaptations. There are many characterizations of NASA's culture and subcultures. We cannot hope to track them all. Instead, we will consider the evidence surrounding four core cultural beliefs and assumptions about the work at NASA and the shuttle program particularly, each of which bears directly on the decisions and actions surrounding the accidents.
They are, very briefly: the sense of rivalry and grievance that contributed to lapses in reporting and management isolation at Marshall; the dismantling of the hands-on laboratory culture at Marshall that left engineers without an effective means of judging reliability; the low status of safety work that contributed in both cases to a silent safety program; and the unwillingness to report unresolved problems based on what some have termed the "climate of fear" in the agency or, less elegantly, what contractors have called "NASA chicken" (Wald and Schwartz 2003).

INVESTIGATING THE CULTURE OF THE SHUTTLE PROGRAM

All of the examples offered in the paragraphs above illustrate NASA's organizational culture, defined as the deeply held, widely shared beliefs about the character of work, the mission, the identity of the workforce, and the legacy of the organization's founders. These kinds of cultural beliefs may not be overtly acknowledged by members. In fact, such beliefs may not be recognized as "the culture" but rather simply as the way things are done (Martin 2002). This "taken for granted" character of organizational culture makes it especially difficult to identify or to change, but it also means that the beliefs exercise a significant and lasting effect on the perceptions and actions of members. Culture is especially important to organizational learning because lessons learned across organization levels become embedded in culture as often-told stories, rituals, or tacit knowledge, as well as in new formal policies and procedures (Schein 1992; Levitt and March 1988). This link to learning also reflects the ways in which culture evolves as a product of the history of an organization.
Van Maanen and Barley note that "culture can be understood as a set of solutions devised by a group of people to meet specific problems posed by the situations they faced in common" (1985, 33). Thus cultural meanings accrete over time and uniquely in response to the experiences of organization members.

There are many ways to conceptualize and study the culture of an organization (Martin 2002). Particularly useful here is Schein's (1992, 1999) approach, which distinguishes the visible artifacts, behavior patterns, and articulated or espoused values from the underlying cultural beliefs and assumptions that may not be either visible or overtly articulated. These core assumptions describe the patterns of meaning in an organization (Martin 2002, 3), and they help account for how members think, feel, and act. Assumptions about the worth of the mission, the identity of members, and professional norms all inform the meaning of artifacts to organization actors. The overt manifestations of these underlying beliefs may include stories, architecture, and rituals, but also structures and policies (Martin 2002, 55). Public statements about the underlying beliefs may or may not be accurate. The tensions between core beliefs and the exigencies of the day may emerge in espoused statements of values that are clearly at odds with actions or with the actual operative beliefs about the organization and its members (Schein 1999).

INTERCENTER RIVALRIES AND GRIEVANCES

Rivalry among the centers, poor communication within the shuttle program hierarchy, and a reluctance to share information across centers emerged as patterns in both shuttle accidents, though much more strongly in the case of the Challenger.
The Rogers Commission directly identified intercenter rivalries as a factor in communication lapses between the Marshall Space Flight Center in Alabama and the Johnson Space Center in Houston. Marshall was criticized for management isolation and failing to provide "full and timely" information to other program offices (Rogers Commission 1986, 200). The CAIB found that one of the reasons the mission management team failed to take vigorous action to acquire images of the orbiter wings was that Marshall and Langley engineers had been the first to identify the problem (CAIB, 201), and "the initial request for imagery came from the 'low status' Kennedy Space Center" (201). In contrast, another member of the debris assessment team, who was without particular credentials on this issue but who had an office in the higher-status shuttle program office at Johnson, was instrumental in defining the problems as inconsequential early on (201–2). The implication is that the Johnson-based mission management team was unwilling to listen carefully to outsiders. While it appears that the rivalries between Johnson and Marshall were more directly implicated in the Challenger accident than in the Columbia disaster, struggles over status had impacts on communication and coordination in both cases.

The rivalry between the Marshall center and the Johnson center was of long standing and has been well documented. The centers had competed for resources and control of projects since at least the early 1960s, when they began to plan for the lunar programs.
The emerging proposals pitted Marshall's labs against the Manned Spacecraft Center in Houston, which became the Johnson Space Center in 1973, over whether a lunar landing craft would launch from an Earth orbit, favoring Marshall's heavy-propulsion systems, or would rely on new, lighter lunar orbital spacecraft designed at Houston. Lobbying by the Manned Spacecraft Center resulted in success for the latter plan, giving Houston the lead in the lunar program and setting up a pattern that was replicated many times in subsequent years (Dunar and Waring 1999, 56). In the Apollo program, Marshall's role was to provide the propulsion systems, the Saturn rockets. Wernher von Braun, who was then center director, accepted this resolution so as not to jeopardize the project (56), but this was only one of many compromises. Marshall engineers were later given the Lunar Rover project as well, but they were required to design it to Houston's specifications. This was irksome to the engineering teams that had been successful in launching the first U.S. satellite, and it left them with a sense of grievance. Describing the arrangements that essentially made Marshall a contractor working for Houston, the Lunar Rover's project manager regretted that Marshall "'always seemed to get the short end of the string'" (102).

As noted in previous chapters, the funding for NASA projects diminished rapidly even before the lunar landing was achieved, as national attention began to focus elsewhere. To improve prospects for new projects to keep his team of scientists together, von Braun led efforts to diversify Marshall's capabilities and enlarge its mission. The center was able to secure initial work on space telescope projects for the Apollo astronomical observatories and later on the Voyager probes to Mars.
Such strategies often put the centers in new competition as each jealously guarded its own project areas (Dunar and Waring 1999, 138). Dunar and Waring, for example, note that "Houston challenged any proposal from Marshall that related to operations, astronauts, or manned systems" (139). Rivalry intensified in planning for the post-Apollo projects. Marshall proposed a small space station based on reusing the spent rocket stage, making the best use of the center's own technologies, but Houston strenuously objected since this crossed the line into their province of manned spacecraft (181).

To clarify the division of labor and head off some of the increasingly bitter feuds, the headquarters associate administrator for manned space flight, George Mueller, brought these and other center staff to a retreat in 1966 to work out a formal division of labor (Dunar and Waring 1999, 139). It was here that the concept of lead center was formalized. The lead would have overall managerial responsibility and set hardware requirements for the support centers. In principle, Marshall and Houston would each be lead centers on different elements of the Apollo projects. But in practice the division between the modules was difficult to specify, and the sniping continued. Rivalries also continued in planning for the shuttle, but by the mid-1960s, the centers had signed on to an agreement similar to that worked out for the Apollo project. Marshall would design and manage the contracts for the solid rocket boosters and, when it was added to the plan in 1971, the external tank, while Houston would manage the orbiter project. This arrangement, however, effectively made Houston the lead center on the shuttle project. Commenting on this, a shuttle program developer noted, "There is a certain amount of competitiveness and parochialism between the Centers that makes it difficult for one Center to be able to objectively lead the other. . . .
That was the real flaw in that arrangement" (282). In fact, Houston took firm control of the shuttle project management and disapproved some of Marshall's facility requests while filling its own. Again, Marshall staff agreed to this so as not to imperil the project overall (285), but it was another example of the center's "short end of the string."

In fact, relinquishing lead status to Houston was a blow to the sense of exceptionalism that was the hallmark of the engineering culture at Marshall. When von Braun and his 127 fellow engineers and scientists came to the United States in 1945 and later in 1950 to what was then the Redstone Arsenal in Huntsville (Dunar and Waring 1999, 11), they brought with them a team identity and a culture built upon German technical education and years together under his leadership in the German military rocket program. Others have described the moral and ethical issues surrounding their work on the American rocket program (Adams and Balfour 1998; Dunar and Waring 1999), but here we focus on the laboratory culture they established at Huntsville. Hands-on experience and the combination of theory and practice were part of their training and defined the culture they came to call the "arsenal system" (Dunar and Waring 1999, 19). They prided themselves on their capacity for "dirty hands engineering," which combined design and development work with the ability to execute and exhaustively test the project. They designed and built their own prototypes with a staff of technicians. In contrast, at most other centers, contractors typically did this work to the specifications of NASA employees. The hallmark of the arsenal system was conservative engineering. Engineers deliberately designed in redundancy to improve reliability and performance, and would then "test to failure" (Sato 2005, 572; Dunar and Waring 1999, 43) to determine the limits of the design.

The system was very successful. In the 1960s, no Saturn launch had failed, a remarkable record of reliability given the complexity of the technology (Dunar and Waring 1999, 92). This approach was also known for building in wide margins of safety and containing costs while still speeding the development of new designs. In one often-told story, to avoid delay and the $75,000 that a contractor wanted to charge for a rocket test stand, Huntsville engineers cobbled together their own stand for $1,000 in materials (20).

The technical strengths of the arsenal system as a whole also came under severe pressure as the resource-rich early years of the Apollo program came to an end. The hands-on lab culture eroded as reductions in force took first the younger engineers and then cut into the original team of German engineers. This was a particular blow to the Marshall workforce under von Braun, who believed that a good team

did not work by a clear-cut division of labor. Rather, it depended on identity, honesty, mutual respect, and trust, which could develop only through a long period of collective effort . . . [and were] the prerequisites for any sound rocket-building organization. (Sato 2005, 566–67)

Von Braun's group was forced to move from its heavy reliance on in-house work to the more typical Air Force model of development by contracting out its work. Using a wide array of contractors also helped NASA maintain its political base by creating constituency support at a time when space exploration became a lower national policy priority. However, as a result, the Marshall team's ability to develop, manufacture, and test its prototypes was diminished.
It lost its shops and technicians and with them some of the basis for its lab culture (Dunar and Waring 1999, 165; McCurdy 1993, 136). The cuts also compromised the ability of the engineers to closely monitor contractors' work and thus maintain their own quality and reliability standards. Earlier, 10 percent of the Marshall workforce had been on permanent assignment at contractor facilities to monitor the quality of the work, a system they called "penetration." Marshall managers felt forced to accept these changes even though they threatened their identity as an "exceptional workforce." As McCurdy summarized it: "The requirements of survival were simply more powerful than the organization culture during the period of decline" (1993, 138).

The management isolation cited by the Rogers Commission and the unwillingness of the Marshall managers to report unresolved problems thus emerge in the midst of a long history of competition and distrust between the centers. Recognizing the cultural context of their actions, and the severe threats to Marshall's distinctive and successful lab culture, adds to our understanding of Marshall's reluctance to acknowledge problems with the solid rocket booster joints or to delay launches to deal with the redundancy status of the seals. It helps explain Houston's sometimes dismissive responses toward Marshall's concerns about possible damage to Columbia's wing. As the CAIB noted regarding both the Challenger and the Columbia cases, "All new information was weighed and interpreted against past experience" (2003, 200). Long-held cultural beliefs affected judgments about the launch and influenced the assessment and communication of information about risks.
The Rogers Commission recommendations for remedying the management isolation at Marshall included changes in personnel and in their training or "indoctrination" (Rogers Commission 1986, 200). While the commission's discussion of its findings appears to recognize the issues of competition and rivalry behind the charges of management isolation, NASA's response to the recommendations took a formal, structural approach. As noted in chapter 3, in the report on the implementation of recommendations issued a year after the Rogers Commission Report, NASA strengthened the shuttle program office at Houston's Johnson Space Center by making it a headquarters office and adding a shuttle project manager at Marshall to coordinate the Marshall elements and report on them to the shuttle program office. This solution addressed the formal communication issues, but not the underlying cultural beliefs that complicated the command and communication structures, because Houston was still the site of the "headquarters" office. By 1990 the Augustine Report on NASA's role in U.S. space policy noted internal and external evidence of continuing reluctance by "the various NASA centers to energetically support one another or take direction from headquarters," but also noted that "an intense effort by the current center and headquarters managements has been underway to redress these long-building trends, yet much remains to be accomplished in this most difficult of management challenges, a cultural shift" (NASA 1990, subsection "Institutional Aging"). In any case, the structure reverted to the concentration of program authority at the Johnson center when the shuttle program office was repositioned under the control of that center in 1996.
Reorganizations and downsizing also created overlaps between program management roles and safety functions. These changes were linked to efforts to improve the willingness of NASA employees and contract personnel to voice their safety concerns. We consider these issues below.

RELIABILITY AND RISK ASSESSMENT

Investigations of both shuttle accidents found that the shuttle program suffered from lapses in safety and reliability. The Rogers Commission found that NASA failed to track trends in the erosion of the O-rings and failed to act on the evidence it did have. The commission stated that "a careful analysis of the flight history of O-ring performance would have revealed the correlation of O-ring damage and low temperature" (Rogers Commission 1986, 148), and it concluded that "NASA and Thiokol accepted escalating risk apparently because they 'got away with it last time'" (148). The Columbia accident investigators found similar evidence of a willingness to assume that known hazards that had not produced a catastrophic accident did not require urgent action. As Vaughan found in the case of the Challenger (1996), the damage from foam debris had become "normalized" and was seen as simply a maintenance issue by program managers (CAIB, 181). The board concluded, "NASA's safety culture has become reactive, complacent, and dominated by unjustified optimism" (180). But the assumptions about safety and how to judge safety were also rooted in the culture of the organization.

Reliability had been a central value of the arsenal system at Marshall under von Braun. The center's tradition of conservative engineering meant that engineers included wide margins of safety and reliability with built-in redundancies (Dunar and Waring 1999, 44, 103). Testing was done both on project components and on the assembled result, and the severity of test conditions was increased to allow engineers to find the point at which the design would fail.
Then engineers would isolate and fix the problems (100). As noted, early results were impressive. Under this model, however, quantitative risk analysis was not seen as appropriate or feasible. Statistical analysis and random testing, while acceptable in a mass-production setting, were not appropriate in the zero-error environment of manned flight (45), in which failure was not permitted. Reliability assurance at Marshall thus rested on these premises, among others: a fundamental belief in engineers' vigilance, an emphasis on engineering judgment founded on broad experience, and a conservatism nurtured in a particular social structure (Sato 2005, 571–72). However, in the shuttle era, as resources were cut and the in-house arsenal system was dismantled, the "agency could no longer afford the conservative engineering approaches of the Apollo years, and had to accept risks that never confronted an earlier generation of rocket engineers" (Dunar and Waring 1999, 324). They did not do this willingly, holding instead to the importance of conservative engineering and the need for thorough testing even though they no longer had the resources to sustain that approach (McCurdy 1993, 150). The reliability of the arsenal system had been lost, but no culturally workable replacement was forthcoming.

The antipathy of shuttle program managers to statistical analysis has also been linked to the many frighteningly pessimistic figures about the shuttle's survival that had been circulated. Estimates of the reliability of the shuttle's rockets varied by source. A NASA ad hoc group put the failure rate at 1 in 10,000, while a 1983 Teledyne study estimated failures at 1 in 100 flights (Dunar and Waring 1999, 399).
Johnson Space Center managers offered figures of 1 in 100,000 flights, while Rogers Commission member Richard Feynman polled NASA engineers, who put the rate at 1 in 200–300 launches (399). All these, of course, were inferences drawn from a very slim database. Casamayou traces the low estimates by management to a desire to avoid publicizing unacceptable estimates of risk during the Apollo program (1993, 177). An estimated probability of overall failure of that spacecraft of 1 in 20 prompted a project manager to respond: "Bury that number, disband the group, I don't want to hear about anything like this again" (McKean 1986, 48). More recently, GAO reported that in 1995 a contractor offered a median estimate of catastrophic shuttle failure of 1 in 145 launches (GAO 1996, 10). NASA managers recognized that these numbers represented a level of risk that would be perceived as unacceptable by the public, but they did not consider the numbers to be accurate.

In addition, public tolerance of risk was diminishing with the increasing routinization of space operations (McCurdy 1993, 151), just as the complexity of the programs was increasing and the capacity of the agency to detect problems was declining. The greater ambitions of the shuttle and space station programs brought with them more complex and tightly coupled technologies, with more parts and more potential interactions among flaws. Yet NASA generally and Marshall in particular were cutting back on personnel and testing to reduce costs. They were also more dependent on contractors and less able to closely monitor them. This dependency was not entirely voluntary, but a result of repeated waves of personnel downsizing since the late 1960s. Nevertheless, it had the effect of reducing the close supervision and penetration of contractor facilities.
The rapid decline of the arsenal system also meant that Marshall had a reduced ability to conduct its own tests and learn the limits of its designs. Where formerly individual components of rockets were tested before assembly, now much of the testing was "all up," reserved for the final product.

All these changes meant that engineers had become cut off from their traditional means of judging reliability, without being able to establish effective substitutes (McCurdy 1993, 150). While probabilistic risk analysis was the method that hazardous program administrators typically adopted for risk management, NASA resisted using it. That approach was not well integrated into the agency culture generally, and it was especially inconsistent with the lab culture at Marshall. The Rogers Commission identified the absence of trend analysis as a factor in the loss of the Challenger and recommended improvements in documenting, reporting, and analyzing performance (1986, 201). But little was done to adopt these quantitative methods. In the mid-1990s GAO and the National Research Council faulted the agency for still using its qualitative risk-management method, based on engineering judgments that classify components by their criticality and redundancy, instead of an integrated, quantitative risk-analysis system based on records of past performance (GAO 1996, 37). NASA maintained that such analysis was not appropriate because the shuttle flights were too few to offer a basis for probabilistic analysis, but the agency agreed to have contractors study the feasibility of such an analysis. After the loss of the Columbia, the CAIB found that despite external recommendations NASA still did not have a trend- and risk-assessment system for the components and the whole project (183, 188, and 193). The CAIB also criticized NASA's inability to integrate its "massive amounts of data" (180) about the shuttle into usable information for decision making.
They noted, "The Space Shuttle Program has a wealth of data tucked away in multiple databases without a convenient way to integrate and use the data for management, engineering, or safety decisions" (193). While there were strategic reasons for not publicizing risk estimates, it appears that there were also deeply held cultural reasons behind adhering to engineering judgments as a basis for assessing risk and reliability even in the face of changed lab conditions. The long-term evolution of the professional culture and significant losses of personnel had conspired to turn assumptions about professional engineering into an organizational liability.

There was another path, too, by which reliability judgments were compromised. Anomalies seen in the erosion and charring of the O-rings and the debris shedding came to be considered acceptable because they did not lead to catastrophe (Casamayou 1993; Vaughan 1996). The absence of disaster, even as a result of deviations from specifications, became a rationale to continue. Standards for the O-rings changed from "no erosion" to "acceptable erosion" (Dunar and Waring 1999, 355), and debris strikes ceased being anomalies and became a maintenance or turnaround issue (CAIB, 135). Vaughan found that NASA decision makers would routinely define such data on anomalies as nondeviant by recasting their interpretation of the range of acceptable risk. Before the Challenger, the flight readiness reviews had become to some degree ritualized (Dunar and Waring 1999, 360), and they were not always carried out face to face with all the key participants. The CAIB similarly identified a "culture of invincibility" that permeated NASA management, particularly as it used past successes to justify current risks (179, 199).
In each case, the deviations that became acceptable were part of systems that were, objectively, among the "least worrisome" in the program (353). But in the absence of cultural agreement on how to determine reliability, judgments were cut adrift and became subject to other imperatives.

THE STATUS OF SAFETY WORK

Another aspect of NASA's risk and reliability culture was the seemingly passive and underdeveloped safety system that was silent at crucial points in both accidents. After Challenger, the safety offices were found not only to be dependent on the programs they were supposed to monitor, but also to be significantly understaffed, with few resources to attract those most qualified for the painstaking safety work. Commenting on the lack of information on trends in seal erosion and the status of redundancy, the Rogers Commission found that "a properly staffed, supported, and robust safety organization might well have avoided these faults and thus eliminated the communication failures" (1986, 152).

Reinforcing the safety organizations was not a priority in the shuttle program, particularly after it was declared operational. The number of safety and reliability officers across the organization was reduced after the shuttle was declared operational, and the offices were reorganized. Marshall's safety staff declined as the flight rate increased, and at headquarters only a few staff were detailed to the shuttle program (Rogers Commission 1986, 160). But the safety officials who were in place also appear to have been ignored by the other actors. The Rogers Commission noted:

No one thought to invite a safety representative or a reliability and quality assurance engineer to the January 27, 1986, teleconference between Marshall and Thiokol.
Similarly, there was no representative of safety on the mission management team that made key decisions during the countdown on January 28, 1986. (1986, 152)

Within a year of the loss of the Challenger, NASA had established a goal for SRM&QA to "develop an SRM&QA work force that is manned with quality people who are properly trained and equipped, who are dedicated to superior performance and the pursuit of excellence" (Rogers Commission 1987, 45). But, as noted in earlier chapters, the changes in the independence and the scope of safety work were short-lived. The 1995 Kraft Report found, in fact, that the burden of safety checks since Challenger had grown excessive.

By the time of the loss of Columbia, safety organizations had again been downsized and reorganized. Mirroring the Rogers Commission findings, the CAIB found that the safety officers were "largely silent during the events leading up to the loss of Columbia" (192). As noted in chapter 3, both NASA and contract safety personnel were passive during the meetings of the debris assessment team, the mission evaluation room, and the mission management team (170). They did not press for more information, but "deferred to management" (170). The CAIB also points out repeatedly that the safety offices were no longer independent of program offices in the centers, so their ability to report critically might have been compromised.

While the pressures of schedules and budget reductions certainly explain the loss of safety personnel and the marginalizing of safety concerns, there is also evidence that safety work itself was not a valued assignment for NASA personnel. Safety monitoring in an operational shuttle program did not have the scientific or technical interest of testing the limits of new rocket designs.
McCurdy's detailed study of NASA history and culture argues that NASA underwent a slow shift in its culture as its original technical, space-science identity was tempered by the management requirements of making the shuttle safe, routine, and operational (1993, 141). Steady funding and organizational survival became priorities. Governmental oversight, which had become more searching and less tolerant of failure, even in the name of scientific advance, led to a more conservative and preservation-minded NASA administration (McCurdy 1993, 163–72; Romzek and Dubnick 1987). Budget cuts had forced the agency to skimp on prototypes and flight tests, eroding NASA's capacity to pursue space research rather than routine operational missions.

But, McCurdy argues, these major changes in practice were not followed by changes in fundamental cultural assumptions. Engineers were reluctant to accept the changes in the character of their work identity that would go along with the administration's push for operational status, and a major cultural schism emerged between the engineers and scientists devoted to space exploration and the administrators (1993, 145). For space scientists, missions that did not test new ideas and take risks would not bring new knowledge but would be mere routine. At its core, the agency retained the assumption that cutting-edge research is inherently complex and risky and can be made safe only to a degree. Risk was justified as a means to ultimate reliability and, along the way, scientific advance.
How the shift to the routine was viewed in the agency is exemplified in the comments of an executive from the earlier Apollo project, who said that if the shuttle were becoming an operational rather than a research and development project, NASA "ought to paint the damn thing blue and send it across the river" (McCurdy 1993, 145) to the Air Force.

As requirements for routine and efficient launches increased, the interest of engineering personnel in the program seems to have flagged. Marshall administrators appeared bored when briefed on the O-ring task force and devoted minimal attention to routine concerns about the solid rocket motor joints (Dunar and Waring 1999, 367). The Marshall director who came on after the shuttle accident noted, "When the Shuttle got operational, NASA 'got too comfortable'" and stopped looking for problems (Dunar and Waring 1999, 408).

After the Challenger accident, NASA clearly did make efforts to change the culture surrounding the safety function as well as increase the size and scope of the safety offices. NASA managers were to establish an open-door policy, and new formal and informal means of reporting problems confidentially were created. A campaign to promote reporting about safety issues by all was initiated. Posters were displayed, declaring "If it's not safe, say so." And some learning appeared to occur. A GAO study of NASA's safety culture in the mid-1990s found that all of the groups reported that

the shuttle program's organizational culture encourages people to discuss safety concerns and bring concerns to higher management if they believe the issues were not adequately addressed at lower levels. . . . NASA managers at the three field centers with primary responsibility for managing shuttle elements and at NASA headquarters reported having taken steps to create an organizational environment that encourages personnel at all levels to voice their views on safety to management.
(1996, 19)

A full 90 percent of NASA employees responding to a survey by GAO in 1996 said "NASA's organizational culture encourages civil service employees to discuss safety concerns with management" (GAO 1996, 21).

However, such responses may be good examples of Schein's notion of espoused values: overt statements of publicly acceptable values that do not actually reflect underlying cultural beliefs. Several events leading up to the loss of the Columbia suggest this to have been the case. The CAIB noted that NASA's testimony regarding its risk-averse culture stated that it encouraged employees to "stop an operation at the mere glimmer of a problem," but this did not accord with reality (177). In fact, members feared retribution for bringing up unresolved safety concerns (192). Safety officials did not respond to signals of problems with debris shedding and were again silent during key deliberations. As noted in chapter 3, Rodney Rocha, one of the cochairs of the debris assessment team, invoked the "if it's not safe . . ." slogan even as he declined to press further for images of the shuttle's wing. In 2004, a survey of NASA employees found that the agency had "not yet created a culture that is fully supportive of safety" and that workers were still "uneasy about raising safety issues" (David 2004).

All this suggests a confluence of factors that made the safety organizations weak. The founders, history, and mission of the organization led to a cultural bias for technological progress over routine operations and for space science over safety monitoring. Given the training and background of NASA personnel and their contractors, concentrated ever more in the ranks of senior staff by repeated downsizing, safety was not as interesting as science.
The organization was hard-pressed to make it so and to keep the attention of qualified, respected personnel on these concerns. Even with ample outside attention directed to the problem of the safety culture, it is not surprising that the agency could not wholly shake its core professional identity. Moving the safety functions to contractors simply exacerbated the separation of safety from the core NASA technology. Genuine cultural assumptions are often not recognized as such. Taken for granted, they are often not subject to analysis, which is partly why the absence of cultural change at NASA is not surprising. Core beliefs may also have been reinforced by the fear of recriminations for reporting delay-causing problems up the chain of command.

NASA CHICKEN

A fourth pattern of behavior linked to the culture of Marshall, and of NASA generally, was the unwillingness to report bad news. This reluctance seems repeatedly to have overcome engineers' and managers' concerns for safety. One part of this pattern was an intolerance for disagreement or criticism, often seen in the responses of some NASA managers, and the unwillingness of some to listen to the concerns of the working engineers. We see these behaviors in the events surrounding both accidents and in the intervening years.

Early on in the life of the shuttle, in 1978, two NASA engineers had observed the joint rotation and the problems with the O-rings in the solid rocket booster. They complained that Thiokol was "lowering the requirement for the joint" and should redesign it (Dunar and Waring 1999, 345). As we have seen, both Thiokol and Marshall managers rejected this contention.
We have also seen how Marshall managers avoided acting on the unresolved problems with the joint and how Thiokol's concerns about temperatures on the eve of the launch were suppressed by Marshall managers. Intimidation and concern for preserving the chain of command also led to the suppression of information. Several engineers at Marshall had also been concerned about the temperature issues long before the accident, but they did not pass these doubts up to project managers. Nor were they heard by Hardy, Marshall's deputy director for science and engineering, the safety organization at Marshall. One of the concerned engineers stated he did not speak up himself because "you don't override your chain of command" (Dunar and Waring 1999, 377). Lawrence Wear, the director of the rocket motor project at Marshall, admitted that at Marshall "everyone does not feel free to go around and babble opinions all the time to higher management" (377), though he acknowledged that the dissenters may have been intimidated by the Marshall management pronouncements about the seals. Lawrence Mulloy, manager of the solid rocket booster project at Marshall, revealed before Senate investigators that the seal decision had suffered from groupthink and that other participants censored themselves in the context of established management statements that the seals constituted an acceptable risk (377).

Excessive hierarchy and formalization were also seen during the Columbia flight, when the leadership of the mission management team, without probing into the reasons behind the requests for images of the shuttle's wing, cancelled the requests because they had not been made through the correct channels.
During the flight of Columbia, the debris assessment team cochair, Rodney Rocha, expressed serious doubts to colleagues about the decision of the mission management team not to obtain images of the wing: "In my humble technical opinion, this is the wrong (and bordering on irresponsible) answer from the SSP and Orbiter [managers] not to request additional imaging help from any outside source." But he did not press the matter further, noting that "he did not want to jump the chain of command" (CAIB, 157).

The leadership of the mission management team also stifled discussion of the possible dangers from the foam strike by rushing to the conclusion that the strikes did not pose safety-of-flight issues. As the CAIB reported, "Program managers created huge barriers against dissenting opinions by stating preconceived conclusions based on subjective knowledge and experience, rather than on solid data" (192). The debris assessment team, too, admitted to self-censoring in failing to challenge these decisions (192). The CAIB reported that even members of the mission management team felt pressure not to dissent or challenge the apparent consensus.

NASA contractors call this unwillingness to be the first to speak out "NASA chicken" (Wald and Schwartz 2003). One NASA engineer described the cultural basis for the phenomenon when he explained that "the NASA culture does not accept being wrong." Within the agency, "the humiliation factor always runs high" (Wald and Schwartz 2003).

The mission management team, like the Marshall actors before the Challenger launch, did not seek out information from lower-level engineering staff. They did not probe the reasons for the debris assessment team's requests, get frequent status reports on analyses, or investigate the preliminary assessments of the debris-strike analysis.
The CAIB concluded that "managers' claims that they didn't hear the engineers' concerns were due in part to their not asking or listening" (170). Fear of criticism also affected the flow of information: "When asked by investigators why they were not more vocal about their concerns, debris assessment team members opined that by raising contrary points of view about Shuttle mission safety, they would be singled out for possible ridicule by their peers and managers" (169). In another instance, engineers reported that they had simulated the effects of a blown tire on the orbiter after-hours because they were reluctant to raise their concerns through established channels (192).

Ample evidence exists that many of the same kinds of failings of management openness and willingness to investigate possible hazards were factors in both accidents. This pattern was a major theme in the CAIB report overall, and one reason the investigators questioned whether NASA could be characterized as a learning organization. It also raises questions about the cultural assumptions behind the management failings and what efforts NASA made to change these assumptions.

Here we see some contradictory historical precedents in the agency. Management in the Huntsville facility under von Braun in the 1950s and '60s was characterized by close teamwork and a nonbureaucratic ethos. Decentralization was essential for technical specialists to organize themselves to solve novel problems, but central control under von Braun served to resolve conflicts (Dunar and Waring 1999, 50). The Huntsville facility, which became the Marshall Space Flight Center in 1960, was organized into eight labs, each with its own test facilities and program management functions.
The labs included such specialties as aero-astrodynamics, propulsion and vehicle engineering, and space sciences. They constituted almost self-sufficient space agencies with duplicated administrative functions (40), leading Marshall historians Dunar and Waring to call them "imperial laboratories" (164). Another old hand described the German lab directors as having a "fiefdom philosophy where each one ran their own little kingdom" (197).

Von Braun himself was characterized as a reluctant supervisor, but stories told about him suggest he was held in high regard by most and seen as charismatic by some. This is illustrated in an account of him that evokes the classic founder story form:

Center veterans recollected how von Braun had responded to a young engineer who admitted an error. The man had violated a launch rule by making a last-minute adjustment to a control device on a Redstone, and thereby had caused the vehicle to fly out of control. Afterwards the engineer admitted his mistake, and von Braun, happy to learn the source of the failure and wanting to reward honesty, brought the man a bottle of champagne. (Dunar and Waring 1999, 49)

He was also famous for his "weekly notes," summarized from lab directors' reports. He would circulate them around the Marshall labs, adding his handwritten comments and recommendations. They became a forum for considering technical problems and policy controversies and provided a way for Marshall managers to acquire a "holistic view of the Center" and learn how to coordinate their projects (51). Decades later the technique was still in use under William Lucas, who became center director in 1974, soon after von Braun left, and who remained in charge until retiring after the Challenger accident.
Yet others gave von Braun mixed reviews as a supervisor, noting that he was too autocratic, stifled criticism, was harsh on those who disagreed, and was too secretive (154). Adams and Balfour found that later, in the 1970s and '80s, even after the German team had largely left the top leadership positions, "Marshall was run like a Teutonic empire" (1998, 124). Lucas, who worked in the structures and mechanics lab at Marshall for many years, may not have had von Braun's leadership skills, and he appears to have relied more on hierarchical control than teamwork. Only total loyalty led to advancement (McConnell 1987, 108). Under Lucas, "this autocratic leadership style grew over the years to create an atmosphere of rigid, almost fearful, conformity among Marshall managers. Unlike other senior NASA officials, who reprimanded subordinates in private, Lucas reportedly used open meetings to criticize lax performance" (McConnell 1987, 108). The aggressive management style at Marshall was said to have engendered an "underground decision-making process" that prevented open discussion of problems (Dunar and Waring 1999, 312).

Many factors may have contributed to this authoritarian climate at Marshall. The pressures of adhering to a relentless launch schedule, the competition for project control, and resentments about Houston's role as lead center for the shuttle may all have pushed center leaders to show that they could successfully handle the challenges of the solid rocket boosters and the external tank. Vaughan describes these motives when she observes, "No manager wanted his hardware or people to be responsible for a technical failure" (1996, 218). But Lucas was also known for insisting on tight bureaucratic accountability and absolute conformity to the specifications of the preflight readiness reviews (218).
Dunar and Waring quote an administrator at the Johnson Space Center saying, "Nothing was allowed to leave Marshall that would suggest that Marshall was not doing its job" (1999, 402). To admit that a step had been dropped or a problem was not resolved was to call down Lucas's scorn in open meeting (Vaughan 1996, 219–21). Lucas's subordinates "feared his tendency to 'kill the messenger'" (Dunar and Waring 1999, 403).

This aggressive management style also extended to contract management. Oversight of contractors was said to be harsh and excessive, though generally effective in the long run (Dunar and Waring 1999, 312). Marshall was very critical of Thiokol management, in particular, and blamed them for repeated problems with manufacturing safety and quality. Without their previous capacity to monitor contractor facilities, Marshall inspectors would wait until a problem surfaced and then impose heavy fees and penalties (312). The aggressiveness of their oversight reportedly made Thiokol project managers reluctant to open up to monitors, and the flow of information was blocked.

A study of "lessons learned" commissioned after Challenger by the new safety organization, the Office of Safety, Reliability, Maintainability and Quality Assurance, found careless mistakes and other evidence of substandard contractor workmanship at the time of the Challenger launch. Some required quality verifications were not being conducted, and some inadvertent damage sustained during prelaunch processing went unreported. Workers lacked confidence in the contractors' worker-error-forgiveness policies and consequently feared losing their jobs (NASA 1988, 13). In addition, the study found that NASA's own error-forgiveness policy was harsh (Klerkx 2004, 242), so that even NASA's own technicians were "hesitant to report problems" (NASA 1988, 27).
The report also offered suggestions for improving the morale and staffing of the safety functions, but the reaction of one agency safety representative did not bode well: "At the meeting where we presented the original version of the report, the safety guy in charge threw it in the waste basket. He said he was sick of hearing bad things about NASA. . . . This man was supposed to make sure these things didn't happen in the future, but he didn't want to hear anything had ever been wrong" (Klerkx 2004, 247).

While rivalry and budget and schedule pressures undoubtedly lay behind such management reactions, they also mirrored the management style of NASA administrator Beggs at headquarters and his top associate administrators. Years later, a similar pattern of pressure was seen under administrators Goldin and O'Keefe. In 2000, internal and external studies described a "workforce apparently paralyzed by a fear of displeasing their boss" (Lawler 2000b). Whistleblowers and public critics were not tolerated, and fear of reprisals for criticizing the agency "was deeply engrained in the NASA culture" (Klerkx 2004, 243). Fear of the consequences of admitting delay-causing problems reportedly led to defensive decision making and a "bunker mentality" (Dunar and Waring 1999, 402). The result of these patterns was to block the upward communication channels. One Marshall administrator explained:

For a combination of semi-political reasons, the bad news was kept from coming forward. Contractors did not want to admit trouble; Centers didn't want Headquarters to know they had not lived up to their promises; and Headquarters staffs didn't want to risk program funding with bad news.
(Dunar and Waring 1999, 312)

Efforts to change the culture to encourage upward information flows focused on trying to change the willingness of lower-level personnel to report problems rather than trying to alter the prevailing climate of aggressive accountability and resistance to seeking out, or even listening to, difficult news. Some of these efforts were summarized earlier in the chapter. Managers were coached to adopt the open-door policy and to establish alternative channels for critical information or safety concerns that personnel did not feel comfortable raising through formal channels. As noted in the CAIB report, however, these changes were of questionable effectiveness.

LEARNING ABOUT CULTURE

Genuine culture change is a difficult and very long-range task, and it was not accomplished with the structural changes and short-term attempts at cultural intervention that followed the Challenger accident. Rocha's admission that he could not "say so" regarding what he saw as a seriously hazardous situation in the Columbia case demonstrates the limited success of creating alternative channels for safety information. It is not even clear how widely the direct and indirect recommendations about the need for culture change were accepted by NASA management. While safety was clearly espoused as a higher priority after the loss of the Challenger, we do not see actual changes in the cultural impediments to creating a more effective and aggressive safety organization. The 1996 GAO study does show that the agency was making structural adaptations to recognized cultural problems, but in the end, when immediate hazards appeared, the changes were not enough. Learning, as we are using the term, about how to change the cultural beliefs that affected shuttle management in dangerous ways occurred only very partially and temporarily.
These cultural patterns also influenced the kinds of learning that could have occurred about the structural, contractual, and political dimensions of shuttle management. Communication gaps among program modules created by the dispersion of program elements across separate centers were exacerbated by jealousies, grievances, and differences in laboratory styles. Reporting on hazards by contract management, already subject to perverse incentives, was made more difficult and less likely by the aggressive accountability practices and harsh treatment of violations. Even coping with external political and media pressure was complicated by the competitive stance among centers and the unwillingness of programs to admit to each other that the workload was putting an impossible strain on their capacities.

The effects of these and other long-lived cultural beliefs were to filter negative information about performance and hazards and to offer interpretations of results that confirmed managers' inclinations and allowed them to pursue other priorities. The CAIB suggests that managers lost their wariness of the dangers and actually learned an attitude of risk acceptance (CAIB, 181). We next turn to what we can learn about organizational learning from the evidence collected in the past four chapters.