http://ejbo.jyu.fi/
EJBO - Electronic Journal of Business Ethics and 
Organization Studies

Vol. 12, No. 2
ISSN 1239-2685
Publisher: Business and Organization Ethics Network (BON)
Publishing date: 2007-11-12

Sound Engineering Practices and Ethics in Technology Business

By: Lauri Pirttiaho

Introduction

It is a common perception that when everything goes well in the technology business, there is no need to discuss responsibility. Only when something goes wrong, when a project or a technology fails, money is lost, laws are violated, or people are injured or killed, does the subject of responsibility enter the discussion. Blame is laid on the people who made the decisions and ultimately on the people who carried those decisions out.

A disaster seldom happens without anyone noticing that something is going awry. Usually it does not even happen without anyone warning about it ahead of time. The risk is considered and communicated to the management. Enter politics and various business issues. The engineering evidence is overruled by non-technical arguments. The risk materializes and a disaster follows. Sometimes even the engineers make such decisions: when faced with clear and well-understood evidence, their pride may override the technical facts and prevent them from acting responsibly.

Engineers are often not the most diplomatic people around when difficult things are being discussed. Their language is not the language of everyday human communication, the language used to influence other people, to negotiate trade-offs and to win support. Their language is that of technical data, measurements, and the probabilities and conjectures drawn from them. Saying in a normal discussion that something is unlikely to happen is entirely different from the engineering statement that the probability of a structure failing is one in fifty under certain conditions and greater under more aggravated ones. The former is a feeling that can be brushed off with a wish that unlikely things will not happen today. The latter is a statement bound to given conditions. When such a risk is discussed, the reasons why it can be accepted, and why the worse conditions are considered unlikely, also have to be made clear.

In discussions concerning risk, engineers often have the disadvantage of seeing the meaning of the data clearly while not being able to influence the decisions. The non-technical decision-makers, on the other hand, often have no experience or up-to-date knowledge with which to appreciate the engineering data. Even some engineers may be blind to the fact that they are making promises not only for themselves but for a group of people who may not have the capabilities they think they have; engineers are usually not experts in human issues in that respect. The question becomes: what can an engineer do in such a situation?

In this paper I have collected some published and rather well-known cases from the technology business. They include the widely discussed Ford Pinto safety problem of the 1970s, which, surprisingly or perhaps not, was repeated with the Ford S-10 pick-up truck two decades later. There is also the probably even more widely discussed case of the space shuttle Challenger. From the currently important field of software engineering I have chosen the costly failure of the CONFIRM hotel and car rental reservation system development project. Another software-related case is the recent controlled flight into Martian terrain of the first Mars weather satellite. Finally, I include the case of the Citicorp Tower, in which disaster was prevented by engineering analysis and by taking the necessary actions based on the presented facts. The purpose of the examples is to discuss the role of engineering and engineers in the technology business, showing how communication problems and other factors may lead to ethically problematic situations.

Some of the presented cases led only to financial losses, but the most regrettable ones led to the loss of human life. We cannot know whether a more diplomatic approach from the engineering side could have affected the outcome in the cases that ended in disaster. Given that most engineering business is very successful, there must be an abundance of cases in which disasters have been avoided through engineering influence and through the action of responsible and knowledgeable management. Not very many such cases have been reported, but a few can be found, such as the one described below. Even in those cases, the technical understanding of the decision-makers has been a more important factor than the diplomatic approach of the engineers. The most unfortunate aspect of these examples is that similar disasters are still materializing today, as if we were unable to learn from previous failures.

The price of a human life

At the very beginning of the Nicomachean Ethics (Crisp, 2000) Aristotle tells us about the relations of various sciences. In this context "science" refers to practically any class of human pursuit: arts, crafts or other tasks. These sciences form a hierarchy in which some are subordinate to others, as bridle making is subordinate to horsemanship, which in turn is subordinate to military science. Of this hierarchy he says that "the end of the master science is more worthy of choice than the ends of the subordinate sciences."

Engineering in an industrial organization can be considered subordinate to business. The only rationale for engineering is to make money by making technological products that can be traded profitably. The question is whether business is in turn subordinate to some higher "science", such as improving people's lives, or whether it exists for itself only. Some hard-core capitalists like Milton Friedman (Friedman, 1970) have argued that the only way business is responsible to society is by making bigger profits and thereby more money for its owners, who can then choose to use that money in a beneficial way. This thinking leads to a separation between the interest of making money on the one hand and guaranteeing the welfare of people on the other, as the following example, told by De George (1981), shows.

In the mid-1960s, Japanese car manufacturers began entering the US market with so-called sub-compact cars, small cars weighing less than 2000 pounds. In 1968, feeling this competition, Ford ordered its engineering department to produce a sub-compact car, to be introduced for model year 1971, that could be sold for less than $2000: the car that came to be known as the infamous Ford Pinto. The engineering department was in an unusual situation, being asked to come up with a new design in 25 months instead of the typical lead time of 43 months.

Because of the light weight, small size and low price requirements, certain design decisions were made that proved problematic in rear-impact collision tests conducted during 1970 and 1971. An impact at as low as 20 mph would cause the rear end to collapse and the fuel tank, which was mounted behind the rear axle, to rupture, facilitating an explosive fire. Following the tests, the engineers suggested an engineering change: adding a part that would cost $6.65 to reduce the risk of the fuel tank being punctured. An executive decision was made against the change, based on projected total cost savings of $20.6 million during the years 1971-1976. The new law mandating better rear-impact safety was expected to become effective in 1977, which it did, and the design was changed that year.
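
The arithmetic behind such a decision is easy to reconstruct. The following is a minimal back-of-envelope sketch, assuming that the $6.65 is a per-vehicle part cost and that the projected saving comes from production volume alone; the implied vehicle count is back-calculated here and is not a figure from the case description.

    # Back-of-envelope view of the Pinto cost decision (illustrative only).
    # Assumption: $6.65 is the per-vehicle cost of the fix and the projected
    # $20.6 million saving for 1971-1976 comes from production volume alone.
    part_cost_per_vehicle = 6.65        # USD, from the case description
    projected_total_saving = 20.6e6     # USD, from the case description

    implied_vehicles = projected_total_saving / part_cost_per_vehicle
    print(f"Implied production volume: {implied_vehicles:,.0f} vehicles")
    # Roughly 3.1 million vehicles: a certain, visible per-unit cost was
    # weighed against a statistical burn risk carried by the customers.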

By 1978, about 50 suits related to burn fatalities in rear-end collisions involving Ford Pintos had been brought against Ford. Besides the financial impact, the Pinto court cases turned out to be a public relations disaster for Ford. The question is: what could the engineers have done to prevent this from happening?

In a wild flight of imagination, the engineers might have colluded in making the engineering change without telling the executives the reason, so that the executives could not have judged the actual motivation for the change. After all, executives presumably cannot go through the complete technical design and strip parts away at random, just because of their cost, until the design barely holds together and is still sellable. However, in mechanical engineering organizations, unlike in modern IT ones, the executives usually have a technical background and can inspect the designs and spot possible points of saving themselves.

An action often called for on the part of engineers, especially in cases like these related to consumer safety, is to go public with the findings, which is called "whistle blowing", or to leak the information to the press. Neither of these is guaranteed to work. De George gives a list of five conditions that should be fulfilled before an insider should consider whistle blowing. First, the harm to the public must be serious and considerable. Second, the concerns must first be made known to the superiors. Third, no satisfactory resolution has been achieved even after exhausting the channels available inside the corporation, including alerting the board of directors. Fourth, there is documented evidence that can convince a reasonable, impartial observer of the problem. The last condition is that making the evidence public has a strong chance of preventing the serious harm, for instance by calling the attention of authorities who have legislative power in such a case. The problem in the described case, as usual, is the last condition. Not until 1977, with the introduction of the new safety regulations, did the authorities have the power to require Ford to change the design. Therefore, the Ford engineers could not reasonably have been expected to start publicly fighting their employer over this issue and possibly risk their personal interests, including their employment.

Fear of the loss of business

Sometimes keeping the business is the most important thing of the moment to the managers responsible for a company that depends on a few customers. This has been the case in some governmental and military contracts where bids have been made below cost. Sometimes this kind of behavior has led to risks involving human life, as in the case of the infamous Challenger disaster that stopped the space shuttle program for over two years.

The Rogers Commission report on the Challenger disaster describes a classic case in which any one of a number of things could have become the seed of a major fatality. It just happened that, owing to the weather conditions, certain O-rings were the ones that led to the failure of one booster rocket and the explosion of the whole vehicle. Thanks to a personal account by one of the engineers, Roger Boisjoly (1987), an account of the work of the Rogers Commission by Nobel laureate Richard Feynman (1993), and several other books written about the incident, it and its background have become well known.

In a post-flight inspection of the reusable booster rockets after the January 1985 Space Shuttle flight, Boisjoly noticed that some of the rubber O-rings sealing the booster rocket joints were eroded. There were indications that hot gases could blow past the seals, and it was concluded that this kind of blow-by could endanger the shuttle. The reason for the erosion was not definitively established, but the data showed that the most probable cause was a temporary loss of contact between the sealing surface and the rubber seal due to the reduced resiliency of the rubber material at low temperatures. Some tests indicated that the seal could fail for a time ranging from a few seconds to several minutes, depending on the temperature. The effect was considered very risky at 50 °F. The findings and the projected dangers were communicated to the management and to the flight readiness review board. The engineering recommendation was that the shuttle should not be launched at temperatures below 53 °F.

Preceding the fatal launch of Challenger, planned for January 28, 1986, a video conference was held on the previous evening between the Kennedy Space Center (the launch site), the Marshall Space Flight Center (the space flight center in charge), and Morton Thiokol Inc., the subcontractor supplying the booster rockets. After a presentation and a recommendation by Morton Thiokol not to launch the shuttle if the temperature was below 53 °F, the decision not to launch was reluctantly accepted by NASA management, since the forecast temperature for the next morning was below 20 °F. Brief off-line discussions were held at the various sites after this. Feeling the pressure, the general manager of Morton Thiokol decided "to make a management decision". Ignoring the data, the management, including one reluctant engineering manager who was asked to "take off his engineering hat and wear the management hat" while making the decision, voted four to none to recommend the launch.

The exact reasons for the management decision are diverse and quite likely include the usual ones, such as personal ambitions and the risk-taking inherent in business. The latter point is supported by the fact that at the time the decision was made, Morton Thiokol was negotiating a new contract with NASA for the booster rockets (Whitbeck, 1998). To the management, a decision not to launch probably seemed like an admission that there were problems with their engineering competence. During the Rogers Commission investigation the Morton Thiokol management presented a smoothed-out view of what had actually happened when the launch decision was made. However, Boisjoly and two other Morton Thiokol engineers presented the actual data and documents regarding the events, which showed that the data argued against the launch and that the decision to launch was made against better knowledge. This led to a deterioration of the relationship between the management and these engineers, who ended up being removed from their previous positions.

This case shows that engineers are often severely constrained in making safety-critical decisions when those decisions go against the apparent business case. There are two possible outcomes in these cases: either the management sees the acceptance of the facts as an indication of good engineering and puts enough weight on it in the decision-making, or it goes with a management decision against the engineering recommendation. It is the nature of real-world engineering that measurements and the conclusions derived from them are never absolutely certain. There is some amount of risk involved in any engineering decision. Usually, however, that risk is weighed against other possible solutions and the apparently lowest acceptable risk is taken. Management decisions, in contrast, are based not only on engineering risks but also on business risks and opportunities, and are usually weighted towards the latter, especially when the engineering competence of the management is not very good. In such a situation the question is: what can an engineer do when facing such a case?

Whitbeck (1998) has analyzed Boisjoly's case and gives several guidelines worth considering, and they are in agreement with what De George recommends. Most of them are concerned with keeping to good engineering practice. Before other actions, the engineer should try to collect the most convincing material possible about the case. This material and the conclusions drawn from it should be made known to all affected parties, including, if necessary, the highest managerial level of the company that can affect the decisions. Keeping a personal journal is a good option, even though collecting material privately about what is going on in a company may be regarded as espionage. The purpose of the journal is not just to provide an escape route in case something goes wrong, but to serve as a record to learn from so that the same errors are not made again. In the end, if the risk materializes and a significant failure occurs, the engineers will be responsible for helping to figure out the real causes and for preventing similar errors in the future. It is also the engineer's responsibility to educate others about these matters, as Boisjoly's example shows.

Technical incompetence cured by wishing

The production of software for computers is considered a field of engineering and is often called software engineering. Therefore, the ethical recommendations that bind engineers should also provide a foundation for ethics in software engineering. To this end, codes of ethics have been published by computing-related organizations such as the Association for Computing Machinery (ACM), the Institute of Certified Computer Professionals (ICCP), the Information Technology Association of America (ITAA) and the British Computer Society (BCS).

As in other fields of engineering where business considerations have to be weighed alongside technical decisions, the two are sometimes in dissonance in software engineering and information technology. Project plans and product delivery agreements are made on over-optimistic assumptions, just as may happen in building construction. And just as in construction, when the profit margin diminishes, inferior material or work may be substituted for better and more costly ones, and short cuts may be taken where possible.

In building construction there are building codes that describe the minimum standards for various aspects such as material strength, thermal insulation and electrical safety. Also, building construction is a relatively well-known field, so project estimation can be done with some confidence. In contrast, no such standards exist in software construction. The productivity of different programmers may differ by a factor of ten or more, and no widely known effort estimation methods are in common use. Best practice in the field requires iterative development and modification of the plans and estimates during the project. That way the stakeholders stay informed and can affect the decisions if they see it necessary. Often, however, no such method is used; instead, delays from the original plans are explained away with the hope of catching up at a later phase. This has very seldom been seen to work and is one of the major reasons why a majority of all software projects are cancelled before they are finished, or the products are never used for anything. The following story, told by Effy Oz (1994), is a typical example, even though an unusually costly one.

In the mid-1980s some 80% of airline reservations were being made through centralized, computerized reservation systems. In contrast, in the hotel business only about 20% of bookings were handled centrally. American Airlines ran one of the most successful airline reservation systems, the SABRE system. The information systems subsidiary of American Airlines, AMR Information Services, Inc. (AMRIS), saw hotel reservations as a prospective new field in which to apply the knowledge gained from SABRE and to make money by providing centralized reservation services to hotel chains. AMRIS prepared a proposal for leading hotel chains and finally got an agreement to build such a system for Marriott, Hilton and Budget Rent-a-Car. In fall 1988 the original estimate for the development cost of this CONFIRM reservation system was $55.7 million, and the estimated operating cost was to be $1.05 per reservation. The plan was that the initial design of the system would take seven months and that development would be completed 45 months after signing the agreement. The agreement was signed in September 1988, and the system was to be ready for use in June 1992.

The problems began appearing at the very beginning of the project. At the end of 1988, AMRIS came out with an initial design that the Marriott people did not consider adequate. In March 1989, when the initial design was due to be ready, AMRIS presented specifications and development plans that were not acceptable to the other partners, mostly because of technical flaws. Another six months were devoted to fixing them. In September 1989 the initial design and development plan were considered ready, but the price estimate had gone up to $72.6 million and the estimated operating cost to $1.30 per reservation. It later turned out that even those numbers were doctored and that the actual estimate would have been $2.00 per reservation.

In October 1989 AMRIS assured the partners that they were on schedule, but already at the beginning of the next year they missed two agreed deadlines. In February 1990 AMRIS admitted that they were more than three months behind schedule but hoped they would catch up later. In August AMRIS declared the first development phase completed but could not show any deliverables for that phase. In October they admitted to being one year behind schedule. That means the schedule slipped by a further nine months during eight months of work, a usual situation in software development projects where schedule estimates are not based on data from previous, similar projects. At that point some employees of AMRIS began showing uneasiness with the project, since the management was obviously not admitting the facts and acting responsibly, and began leaving the company. Again, a typical situation in a so-called death-march software development project. The problems continued, and there was a high turnover in development and management staff, with some being dismissed and some resigning voluntarily. Still AMRIS assured the other partners that they could build the system in about the agreed time and budget. In July 1992, when the system should have been delivered and after $125 million had been spent, it was agreed that, due to technological problems, completing the system as planned was not feasible and would have required at least another two years of work. After two years of legal battles, AMRIS is reported to have agreed to pay the other partners $160 million in a settlement to avoid suits over the contract and related damages.
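
The slip figures quoted above already determine the outcome. The following is a minimal sketch of the naive extrapolation, using only the delay admissions of February and October 1990 from the case description; the extrapolation itself is my own illustration.

    # Naive extrapolation of the CONFIRM schedule slip (illustrative only).
    slip_feb_1990 = 3      # months behind schedule (at least), admitted February 1990
    slip_oct_1990 = 12     # months behind schedule, admitted October 1990
    months_worked = 8      # elapsed months of work between the two admissions

    slip_rate = (slip_oct_1990 - slip_feb_1990) / months_worked
    print(f"New delay per month of work: {slip_rate:.2f} months")
    # When each month of work adds more than a month of new delay, the projected
    # completion date recedes rather than approaches: "catching up later" is
    # arithmetically impossible without re-planning the project.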

The development of complex information systems involving large software development efforts is commonly difficult to plan and manage. Most of the reasons stem from the enormous complexity of the systems. It is very difficult to find people who can handle the complex and multidisciplinary technical aspects efficiently. Further, it is very difficult to estimate the performance of software developers, just as it is impossible to estimate the productivity of an author writing novels or poems, since the work is almost purely mental. However, it is usually visible in the very early phases of such a project that the actual progress does not match the plans. In such a situation almost any developer can see the problem, and many do raise concerns. Mostly, though, the management does not respond adequately to the warning signs, since it is not accustomed to project management methods that would allow the plans to be modified dynamically. Also, the contracts are often constrained in all three dimensions of time, cost and deliverables, leaving the management no room to move when necessary. The question here is: what can a developer or a manager do?

The grass-roots software engineer does not have many options. He or she can only voice opinions, try to influence the closest managers, and do his or her best regarding the development. It is of no use to alert external authorities in this unregulated field, and leaking information to the customer has no better effect than leaking it to one's own management after notable amounts of money and time have already been invested. The situation is often a typical case of throwing good money after bad for irrational reasons, as in the dollar auction game described by Mérö (1998). Usually the failure of a software project inflicts only financial harm on the involved parties, and it is the nature of business that risks are taken. In such circumstances the engineers have no binding obligation to work overtime to save the game if the management does not respond in a sensible way to the issues raised. Also, in the strongly competitive field of software development there is a constant demand for programmers and other software people. This has led to a situation where the mobility of people is high and usually the most productive developers are the first to abandon badly managed projects.

An entirely different question is what the management can do in such a situation. With the engineering hat on, a project manager might have the courage to admit the problems and respond to them by re-negotiating the terms and re-planning the project. However, such re-negotiations practically always have financial consequences, and with the business hat on these steps are seldom taken. One reason is that middle management is often well versed in neither human resource issues (Humphrey, 1997; DeMarco & Lister, 1999; Pfeffer, 1998) nor the nature of software development work (Weinberg, 1988; Weinberg, 1998). Therefore they handle the only resource they have, the engineering team, in a way that creates turnover which makes the development work difficult if not impossible.

Requested to prove the risk

The normal engineering approach to handling a safety risk is that whenever one is detected, it is analyzed to determine whether the design is safe with respect to it. Sometimes, however, an attitude of denying the existence of the risk sets in, especially when a more junior or lower-ranked person raises an issue against something a more senior person has been responsible for designing. This is related to a human relations issue that consultant Jerry Weinberg (1985) has described in one of his famous rules, known as Spark's Law of Problem Solution: "The chances of solving a problem decline the closer you get to finding out who was the cause of the problem." The first reaction you encounter when raising an issue against someone's work is defensive.

To handle technical issues that may be entangled with personal sensitivities, some engineering organizations have established formal procedures for raising them. Sometimes, however, people prefer more informal ways of communicating, which may lead to issues slipping through the cracks and never being officially considered. This was one of the major reasons that led NASA to lose the $125 million Mars Climate Orbiter on September 23, 1999. The following is based on the NASA investigation board report (NASA, 1999) and an article in IEEE Spectrum (Oberg, 1999).

The Mars Climate Orbiter (MCO) was to be the first weather satellite to orbit another planet. Unlike earlier, amply funded space missions, it was the product of a trimmed-down organization. It was to be one of the satellites in a series of low-cost Mars missions. Some of the people working on the project were part-timers shared with other missions, some of the work was outsourced, and some of the work products came out of simplified working processes. This led to some systems being only partially tested, some decisions being based on incomplete information, and some people not knowing exactly what they were working on. But the final nail in the coffin of the mission was inefficient communication and the consequent inaction in the face of the emerging disaster.

The MCO was launched on December 11, 1998. On four occasions along its route to Mars its direction was corrected in what are called Trajectory Correction Maneuvers (TCM). These maneuvers rely on information obtained from velocity measurements and on knowledge of the forces acting on the craft. Some of the forces are due to the gravitational pull of the planets and the Sun, some to the Sun's radiation pressure, and some to the operation of the on-board thrusters that correct the craft's angular momentum.

Because of an error in a specification, a subcontractor had delivered a piece of software that reported the small forces due to the operation of the thrusters in imperial units instead of metric ones. The discrepancy in the values, by a factor of 4.45, caused the trajectory model to go off by about one millimeter per second for every momentum correction event. The accumulation of this small error caused the modeled trajectory to differ from the actual one. By the time the probe was approaching its destination the error was probably a few tens of kilometers.
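
The factor of 4.45 is simply the conversion between pound-force and newton. The sketch below illustrates the mismatch; the per-event impulse and spacecraft mass are chosen for illustration only and are not figures from the mishap report.

    # Illustration of the MCO unit mismatch: thruster impulse data delivered
    # in pound-force seconds (lbf*s) but interpreted as newton seconds (N*s).
    LBF_TO_N = 4.44822            # newtons per pound-force, the "factor of 4.45"

    impulse_lbf_s = 0.2           # hypothetical impulse of one desaturation event
    spacecraft_mass_kg = 600.0    # hypothetical spacecraft mass

    true_dv = impulse_lbf_s * LBF_TO_N / spacecraft_mass_kg   # m/s actually imparted
    modeled_dv = impulse_lbf_s / spacecraft_mass_kg           # m/s assumed by the model

    error_mm_per_s = (true_dv - modeled_dv) * 1000.0
    print(f"Unmodeled velocity change per event: {error_mm_per_s:.2f} mm/s")
    # On the order of a millimeter per second per event; hundreds of events
    # over the cruise add up to tens of kilometers of position error at Mars.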

Some measurements along the trajectory had indicated a discrepancy between the measured and modeled trajectories, but the reason was not found and the anomaly was communicated only informally. The standard procedure would have required the submission of an "Incident, Surprise, Anomaly" (ISA) report. Nevertheless, even a big meeting was arranged to discuss the findings, but no cause was found. Officially nothing was wrong, even though practically everyone suspected that there was a problem with the navigation of the craft.

The probe was approaching Mars and accelerating under the planet's gravitational pull. The measured trajectory near Mars was at first uncertain because of an unfavorable geometry, but as the probe gained velocity the measurements got better. Based on the earlier, less reliable measurements, the position of the probe was not very far from the expected one: the measured error was about 30 km towards the surface, while the expected error was about 10 km in any direction. The later, more reliable measurements showed that the probe might be off course by more than 70 km, again towards the planet. Believing that the earlier measurements were the right ones, the project management decided not to perform the final planned TCM, since the probe appeared not to be too far astray. Some calculations indicated, however, that the probe was approaching the planet too closely, at the closest approach possibly touching the atmosphere, which would destroy it. Therefore some had requested that the final TCM be used to lift the craft high enough to be on the safe side. This was not done, since there was not enough evidence that something was wrong.

The classical engineering approach is that when facing uncertain data, one should err on the safe side, preferably by a large enough margin. This approach was not followed. Instead, a managerial decision was made not to act, on the grounds of a lack of direct evidence. The question is: would submitting a formal ISA report earlier in the process have prevented the accident?

Since the project was one of the cost-cut space missions, it seems improbable that the minor evidence of the trajectory discrepancy would have prompted enough resources to discover the software problem that was found in the later investigation. It is possible that if some high-enough-ranking manager had been convinced of the possible problem and of a proper way to handle the situation, the final TCM might have been made and the probe saved and later returned to the correct trajectory around the planet. The usual problem in this kind of case, however, is that uncertainty is met by showing the courage to take risks rather than by admitting the uncertainty and behaving in a safe way. After all, people often end up as managers because they are risk-takers by nature, while the less self-assured people tend to stay in engineering. The problem is that seeing the difference between a mission-critical technological risk and a non-critical one requires understanding the technological aspects well enough.

A way to convince the affected parties

Most of the publicized cases of engineering failure are ones that ended in an accident. Not many reported cases describe close shaves that did not end in disaster. The story of how the Citicorp Tower in New York City (Whitbeck, 1998) was saved is one of those. It can be found online at onlineethics.org, so this is just a brief sketch of what took place.

In 1977 a structurally unprecedented building designed by William J. LeMessurier, the Citicorp Tower, was completed in New York City. The building involved novel solutions for binding the structure together using very long steel beams. After the completion of the construction, the designer learned that the beams had been joined with bolts rather than welded as the original design called for. This change made the structure weaker than designed, and it became apparent that the building would not withstand the strongest foreseeable wind loads. The joints had to be strengthened, but before that could be done the designer had to convince the other parties that the repair was really necessary, since it was going to be costly. The repairs were estimated to cost about one million dollars.

LeMessurier did structural computations and wind tunnel tests using scale models. With these he was able to show that the structure would not withstand even the wind load that could be expected from a storm that occurs on average once in sixteen years. After consulting lawyers and a colleague, LeMessurier decided to contact the highest-ranking Citicorp executives and let them know about the problem. It happened that a vice president of Citicorp who had been involved in the construction had an engineering background and could appreciate the technical facts. Agreement on the urgency of the repairs was reached, and the joints were reinforced by welding.
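
The "once in sixteen years" figure is easy to put in perspective with a simple probability calculation. The sketch below assumes a constant annual exceedance probability of 1/16 and independent years; the time horizons are illustrative assumptions, not figures from the case.

    # Probability that a "sixteen-year storm" occurs at least once within a
    # given horizon, assuming independent years with probability 1/16 each.
    p_annual = 1 / 16

    for years in (1, 5, 16, 50):
        p_at_least_once = 1 - (1 - p_annual) ** years
        print(f"{years:>3} years: {p_at_least_once:.0%}")
    # Over any realistic service life of a skyscraper the critical wind load
    # is practically certain to occur at least once, hence the urgency.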

The question here is: what if the Citicorp executives had not been convinced of the need for the repairs? Could there have been alternative ways to get the message through? I do not think so. The nature of this kind of decision is that the people responsible for making it have to be made aware of all the information. After that, it is entirely up to them to make the decision as best they can. If the responsible people do not make the necessary decisions to avoid a disaster, then alerting the public and the authorities might be the way to go. Still, when a large corporation stands against the general public, the chances are that the decisions are not made before it is too late, as some of the previous examples show.

Discussion

The ethical problems facing engineers arise from a large number of sources, all presenting different kinds of challenges but all involving higher, usually non-engineering-oriented, management. Of the cases presented above, the one that led to a satisfactory conclusion was the one in which the people making the final decisions could be convinced of the real danger: it is not an option to let a large building collapse and possibly destroy other buildings, too. In this case the decision-makers had enough technical competence to judge the request and see that the action was really needed. In the other cases the necessary decisions were inhibited by personal ambitions, financial considerations, technical incompetence or possibly a fear of failing in a new way.

Many of the cases presented above call for consideration of the ethics of professionals and of the special requirements imposed on them by their special status in society. Oz (1994) discusses some of these, particularly the professional's obligations to the client based on the codes of ethics certain organizations have set. He notes that a professional employed by a company has obligations both to the client and to the company and may thus face the dilemma of choosing between them. A manager running a project for a client, however, has obligations only to that client. It is therefore in the best interests of both the company and the client that the management listens carefully to the warning signs of possible problems and acts responsibly to prevent disasters.

References

(Boisjoly, 1987) Roger M. Boisjoly: The Challenger Disaster: Moral Responsibility and the Working Engineer. Speech given at MIT, January 7, 1987. Printed in Johnson, 1991.

(Crisp, 2000) Roger Crisp (ed.), 2000: Aristotle: Nicomachean Ethics. Cambridge: Cambridge University Press.

(De George, 1981) Richard T. De George: Ethical Responsibilities of Engineers in Large Organizations: The Pinto Case. Business and Professional Ethics Journal, vol. 1, no. 1, pp. 1-14. Reprinted in Johnson, 1991. (Information about the Ford Pinto case is also available online in http://www.safetyforum.com/fordfuelfires/ (URL valid on December 16, 2001))

(DeMarco & Lister, 1999) Tom DeMarco and Timothy Lister, 1999: Peopleware: Productive Projects and Teams. Second ed. New York: Dorset House Publishing Co.

(Feynman, 1993) Richard P. Feynman, 1993: 'What Do You Care What Other People Think?'. London: Harper Collins Publishers.

(Friedman, 1970) Milton Friedman, 1970: The Social Responsibility of Business is to Increase Its Profits. The New York Times Magazine, September 13, 1970. Reprinted in Johnson, 1991.

(Humphrey, 1997) Watts S. Humphrey, 1997: Managing Technical People: Innovation, Teamwork and the Software Process. Reading, MA: Addison Wesley Longman, Inc.

(Johnson, 1991) Deborah G. Johnson, 1991: Ethical Issues in Engineering. Englewood Cliffs, NJ: Prentice-Hall Inc.

(Mérö, 1998) László Mérö, 1998: Moral Calculations: Game Theory, Logic, and Human Frailty. New York: Springer-Verlag New York Inc., Copernicus.

(NASA, 1999) Mars Climate Orbiter Mishap Investigation Board Phase I Report, November 10, 1999. Available online (URL valid on November 3, 2001): ftp://ftp.hq.nasa.gov/pub/pao/reports/1999/MCO_report.pdf

(Oberg, 1999) James Oberg: Why the Mars Probe went off course. IEEE Spectrum, vol. 36, no. 12, December 1999.

(onlineethics.org) The story about LeMessurier saving the Citicorp Tower can be read at http://onlineethics.org/moral/LeMessurier/lem.html (URL valid on November 5, 2001).

(Oz, 1994) Effy Oz: When Professional Standards are Lax: The CONFIRM Failure and its Lessons. Communications of the ACM, vol. 37, no. 10, October 1994, pp. 29-36.

(Pfeffer, 1998) Jeffrey Pfeffer, 1998: The Human Equation: Building Profits by Putting People First. Boston, MA: Harvard Business School Press.

(Weinberg, 1985) Gerald M. Weinberg, 1985: The Secrets of Consulting. New York: Dorset House Publishing Co., Inc.

(Weinberg, 1988) Gerald M. Weinberg, 1988: Understanding the Professional Programmer. New York: Dorset House Publishing Co., Inc.

(Weinberg, 1998) Gerald M. Weinberg, 1998: The Psychology of Computer Programming. Silver anniversary ed. New York: Dorset House Publishing Co., Inc.

(Whitbeck, 1998) Caroline Whitbeck, 1998: Ethics in Engineering Practice and Research. Cambridge: Cambridge University Press.