Pilot in Command or Computer in Command?
Observations on the conflict between technological progress and pilot accountability

Author: Rechtsanwalt Prof. Dr. Ronald Schmid, Frankfurt am Main / Darmstadt / Germany
[published in: 2000 (XXIV) Air & Space Law 281 - 290]


I. Issue

1. A sarcastic "joke" making the rounds in pilot circles goes like this: "What are an Airbus pilot's first and last words? The first: 'What's it doing now?' The last: 'It's never done that before!'"

This is humour in its very blackest form. Humour is often also a way of repressing fears, however, and thus this joke is indicative of an old, probably deep-seated psychological problem arising out of the man-machine relationship.

The special relationship between man and the machine is as old as man's very invention of the machine. It was relatively uncomplicated as long as man controlled the machine - or at least had the feeling he was doing so. However, it became problematic the moment man began to replace himself with the machine. This is not always a cause for concern, but it most certainly is in the areas in which safety plays a role. And that is precisely the case in the field of aviation.

The person who designs a machine and puts it into service must personally concern himself with the machine. The aviation pioneer Wilbur Wright, in referring to the "flying machine", is said to have put it this way: "If you seek perfect safety, you are well advised to sit in a tree and watch the birds. If you really want to learn, however, you have to climb in a machine and familiarize yourself with its peculiarities through experimentation."

2. From the very beginning of aviation, the following pattern could be observed: At first technology only assisted man, but soon it started replacing him to an ever greater extent. The historical development of the flight crew has thus borne a strong resemblance to the children's song "Ten Little Indians".

When my father started flying again after World War II as a commercial pilot for Lufthansa German Airlines, five or six people still sat in the cockpit of a Lockheed L-1049, the famous "Super Constellation": In addition to the two pilots (backed up by a relief pilot on long-haul flights) there was also a flight engineer, a navigator and a radio operator. The latter, however, was soon no longer needed.

Shortly after the Boeing 707 and the DC-8 were put into service (i.e. at the beginning of the jet age in the year 1960), the navigator likewise became superfluous on all but a few routes: Only on the routes over the North Pole and over the North Atlantic, where navigation with the aid of very-high-frequency omnidirectional radio range (VOR) was not possible, was a navigator still needed.

With the introduction of the inertial navigation system (INS) on aircraft such as the Boeing 747 in 1970, the on-board computer took over the work of the navigator. He could now be dispensed with completely.

At the beginning of the 1980s, the battle revolving around the "third man" in the cockpit - the flight engineer - started in earnest. The cockpits of an increasing number of short-range and medium-range Western aircraft no longer even had room for a flight engineer; his services were required only on the bigger commercial aircraft used predominantly for long-haul flights. With the introduction of the newest generation of long-range aircraft such as the Boeing 747-400 and the Airbus A 340, he was no longer needed on these flights either. As in the case of the navigator, on-board computers took over his operational and monitoring functions.

3. Anyone who has followed the development from a cockpit with six, five or four persons to one with a three- and ultimately only two-man crew can thus rightfully ask the following question: What will the cockpit crew be like in the commercial aircraft of the future?

Pessimists in professional circles already know the answer: A pilot and a dog. Yes, you heard me correctly: The pilot's only job is to feed the dog and keep him awake; the dog is supposed to make sure that the pilot doesn't touch anything.

This joke, which made the rounds among pilots in the early stages of the manufacture of second and third-generation Airbus aircraft (A 320, A 340), may be a gross exaggeration, but it reveals the - not totally unfounded - fears of aviators regarding engineers. The latter have always dreamed of resolving the problem of human error by first repressing and ideally later eliminating its "cause", namely man with his weaknesses. At the manufacturing consortium Airbus Industries - in contrast to the American aircraft manufacturers Boeing, McDonnell Douglas or Lockheed - human beings were long considered little more than archaic "software" in the aircraft, a software that was over 50,000 years old. Pilots were considered a necessary evil, one that technically could be overcome if it weren't for the need to take into account the psychology of the passengers. The one-man cockpit (which is indeed feasible, as can be seen from military fighter planes) was viewed as the "logical" intermediate step towards the ultimate goal of remote-controlled aircraft (similar to the unmanned train shuttle "Sky Train" at the Frankfurt Airport connecting the two terminals).

Thus when designing its aircraft, the European manufacturing consortium Airbus Industries focused above all on the human being as the greatest of all possible sources of error. The object was to eliminate this source of error - if not completely, then at least to a major extent. Every effort was made to design technical equipment in a manner that would rule out the possibility of human error or prevent it from affecting the safety of flight operations. This philosophy was reflected in the remark a member of the management board of Airbus Industries is said to have made in February 1990 after the crash of an Indian Airlines A 320 in India: "If only the pilots had kept their sinful fingers off the controls..."



4. The downside of this faith in technology soon becomes apparent, however. The following five examples graphically illustrate this.

a. When my brother was assigned as captain to the then newly introduced Airbus A 310 - a plane which in the 1980s was considered a high-tech aircraft but today already appears antiquated - he told me about an incident that gave me pause: During the last stage of the final approach, a bolt of lightning struck the nose of the aircraft, damaging the plane's electronic equipment in the process. The confused on-board computer still had a suggestion to make, however, and flashed it on the screen: "Shut down engines."

Now no sensible pilot in the world would do that during this stage of flight, so "Colleague Computer's" suggestion was ignored. The incident itself makes one stop and think, however: Isn't there the danger that at some point in the future the on-board computer will not merely make a suggestion but go ahead and take action itself? Isn't there perhaps even a danger that one day, in keeping with the new philosophy I mentioned earlier, the pilot will only be able to intervene to the extent permitted by the computer? No matter how enthusiastically one may basically embrace technical progress, anyone who has retained any critical perspective at all will find it impossible to answer this question with an unequivocal "no". The following additional examples make it clear that a healthy dose of scepticism is by no means unwarranted.

b. On 26 June 1988, a brand-new Air France A 320 that was participating in an air show crashed in a wooded area in the Alsatian town of Habsheim near Mulhouse while performing an extremely low-altitude fly-by. When the pilot reached the end of the runway and wanted to power up the engines from minimum thrust to the thrust required for climb, the aircraft failed to react to his command to commence the climb: Since the plane had been flying over the airfield at minimum speed (VLS), on the verge of a stall, the on-board computer refused to obey the command to lift the nose, for if the low thrust had remained unchanged, lifting the nose would have caused the plane to stall and then crash. The plane had not yet attained the higher speed necessary to avert a stall, however, because a jet engine needs several seconds to accelerate. Thus the A 320, controlled by computer logic and unresponsive to the pilot's will, flew into the adjoining woods.
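For readers who want to picture the kind of logic at issue, the following is a minimal, hypothetical sketch in Python of an angle-of-attack protection law of this sort. The thresholds, names and numbers are invented for illustration and do not reproduce the actual A 320 flight-control software.

```python
# Greatly simplified, hypothetical sketch of an angle-of-attack protection
# law; thresholds and names are invented and do not reproduce the actual
# A 320 flight-control software.

ALPHA_PROT = 14.5   # deg, assumed protection threshold
ALPHA_MAX  = 17.5   # deg, assumed maximum permitted angle of attack

def limited_pitch_command(pilot_pitch_up, current_alpha):
    """Return the nose-up command the flight computer actually passes on.

    pilot_pitch_up : nose-up demand from the sidestick (0.0 .. 1.0)
    current_alpha  : current angle of attack in degrees
    """
    if current_alpha < ALPHA_PROT:
        # Normal regime: the pilot's demand is passed through unchanged.
        return pilot_pitch_up
    if current_alpha >= ALPHA_MAX:
        # At the protection limit the computer refuses any further
        # nose-up input, regardless of what the pilot commands.
        return 0.0
    # Between the two thresholds the demand is progressively scaled back.
    scale = (ALPHA_MAX - current_alpha) / (ALPHA_MAX - ALPHA_PROT)
    return pilot_pitch_up * scale

# At Habsheim the aircraft was already close to the limit at minimum
# thrust, so a full nose-up demand was largely suppressed:
print(limited_pitch_command(1.0, 17.0))   # roughly 0.17 instead of 1.0
```

The design choice at issue is precisely this: within the protected regime, it is the computer's assessment of the situation, not the pilot's demand, that has the last word.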

c. On 14 September 1993, a Lufthansa A 320 crashed in Warsaw while landing on a wet runway in the rain. Due to the strong crosswind, the pilot banked the plane slightly to the right just before touchdown; it thus touched down first on the right main landing gear and then on the left. As a consequence of the A 320's construction at the time, the spoilers (which disrupt the airflow over the wings, reducing the lift and thus pressing the plane onto the runway) did not deploy, because the main landing gear on both sides was not fully weighted and the wheels - due in no small part to aquaplaning - were not turning at the programmed speed. In short: According to the logic of the computer, the plane had not yet landed but was still flying. Thus the spoilers, which would have created a braking effect, were not to be activated.
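The "has the aircraft landed?" test described here can be pictured, in greatly simplified form, as a logical gate. The following Python sketch is a hypothetical illustration; the thresholds, units and names are assumptions and do not reproduce the actual A 320 ground-spoiler logic of 1993.

```python
# Hypothetical, greatly simplified sketch of a ground-spoiler arming gate;
# thresholds and names are invented for illustration only.

GEAR_LOAD_THRESHOLD_KG = 6300   # assumed compression load per main gear leg
WHEEL_SPINUP_KTS       = 72     # assumed minimum wheel rotation speed

def ground_spoilers_deploy(load_left_kg, load_right_kg, wheel_speed_kts):
    """Deploy the spoilers only if the computer 'believes' the aircraft is
    on the ground: both main gear legs compressed, or the wheels spun up."""
    both_gears_compressed = (load_left_kg  >= GEAR_LOAD_THRESHOLD_KG and
                             load_right_kg >= GEAR_LOAD_THRESHOLD_KG)
    wheels_spun_up = wheel_speed_kts >= WHEEL_SPINUP_KTS
    return both_gears_compressed or wheels_spun_up

# Warsaw scenario: only one gear firmly loaded, wheels aquaplaning.
print(ground_spoilers_deploy(7000, 1200, 40))   # False -> no lift dumping
```

However reasonable each individual condition may be, the pilot rolling down a wet runway has no way of arguing with such a gate.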

At that time neither the thrust reversers nor the spoilers of an Airbus A 320 - in contrast to a Boeing 737, for instance - could be manually activated. As a result, the aircraft - braked too slowly and too late - raced towards the end of the runway. The human being (pilot) was helpless.

As if that were not enough, the on-board computer did one more thing: The pilot could not fully activate the thrust reversers to brake the plane because the engine performance had been reduced to a maximum of 71 percent of full reverse thrust in order to protect the engines. A captain friend of mine remarked: "That would not have happened with my B 737."

Conclusion: "The pilot, who in a crisis decides against protecting the engines and in favor of saving the aircraft and human lives, is rendered powerless by the "foresighted" programmer of the system."

d. Discussion of the "battle between the pilot and the aircraft" also brings to mind the tragic Birgenair Flight ALW 301, which crashed off the coast of the Dominican Republic shortly after takeoff from Puerto Plata on 7 February 1996. As you may recall: Presumably as a result of a blocked pitot tube on the Boeing 757, the air data computer was fed incorrect data concerning the speed of the aircraft. Since from the standpoint of the computer the plane was apparently flying too fast, the engaged autopilot raised the nose in order to reduce the speed, and the autothrottle system reduced the thrust. Because the aircraft was actually flying slower than the speed indicated on the display, however, these (incorrect) adjustments brought it to the verge of a stall.
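In greatly simplified terms, the chain of events can be illustrated as follows. This Python sketch is purely hypothetical; the target speed, step sizes and function names are invented and are not taken from the B 757's actual flight management system.

```python
# Hypothetical sketch of how a single false airspeed reading can drive the
# automation in exactly the wrong direction. All numbers and names are
# invented for illustration.

TARGET_SPEED_KTS = 290   # assumed programmed target speed

def automation_reaction(indicated_speed_kts, pitch_deg, thrust_pct):
    """Return the (pitch, thrust) the autopilot/autothrottle would command
    next, based solely on the *indicated* airspeed."""
    if indicated_speed_kts > TARGET_SPEED_KTS:
        # "Too fast": raise the nose and pull back the thrust.
        return pitch_deg + 1.0, thrust_pct - 5.0
    if indicated_speed_kts < TARGET_SPEED_KTS:
        # "Too slow": lower the nose and add thrust.
        return pitch_deg - 1.0, thrust_pct + 5.0
    return pitch_deg, thrust_pct

# A blocked pitot tube makes the aircraft appear far too fast while it is
# in fact flying too slowly; the automation pitches up and reduces thrust,
# pushing the real speed further towards the stall.
indicated_kts, actual_kts = 350, 240
print(automation_reaction(indicated_kts, pitch_deg=10.0, thrust_pct=80.0))
```

The automation is doing exactly what it was programmed to do; it is the input, not the logic, that is wrong - and the human being is the only element in the loop capable of recognizing this.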

It is still unclear whether the pilot - who after a brief moment of confusion caused by the conflicting information on his screen evidently suspected that the on-board computer was making incorrect decisions on the basis of erroneous data - turned off the autopilot and the thrust management system himself. It is not, however, unlikely that the autopilot system itself did this: A safety mechanism had been installed on the B 757 which automatically turns off the autopilot system as soon as the programmed pitch and bank limits are exceeded. The same thing is true of the thrust management system in the event that maximum speeds are exceeded.

It therefore cannot be ruled out that despite deactivation of the automated systems the autopilot engaged itself again and reassumed control of the flight. Boeing disputed this for a long time; the aviation journalist and pilot Tim van Beveren, however, experienced it a number of times in the flight simulator when "reflying" Flight ALW 301. If this was the case - and I say it could indeed have been the case - then the system, which was programmed for climb, interpreted the pilot's corrective nose-down command as interference with "its own agenda" and, by further altering the trim of the nose, evidently tried to "resume" (but in fact further increased!) the climb in order to attain the most recently programmed flight profile. The pilot, for his part, tried again and again to override the autopilot and regain control of the aircraft. Thus man fought against the machine until the latter finally went into a stall and crashed.

It is also conceivable - although this can no longer be proven and Boeing has long denied it - that the thrust management computer ignored the pilot's commands as well: The so-called "runaway of the autothrottle during flight", i.e. the situation in which the computer independently sends commands to the engines (more or less thrust) via the autothrottle, was evidently a problem affecting the Boeing 757. It was probably not without good reason that, in an Airworthiness Directive (AD) issued about six months after the crash of the Birgenair B 757, the manufacturer gave operators of B 757 aircraft instructions concerning measures to be taken to ensure that the problem did not occur (again?). For - according to the manufacturer - there was a danger that "the runaway of the autothrottle during flight or ground operations ... could distract the crew from normal operation of the airplane or lead to an unintended speed or altitude change". No further comment is necessary!

e. And, finally, one last example of the tendency of the computer to take things into its own hands: On one A 321 belonging to a German airline, the air data reference computer (ADR) - acting on the basis of a brief erroneous signal - independently retracted the slats and flaps (which had already been set in position 1 for takeoff) during the takeoff run. Thanks to their flying skill and great luck, the crew was able to avert a catastrophe.

5. Conclusion: The philosophy of protecting man from himself and his mistakes by means of automated machines may in principle be a correct approach. However, it is always highly questionable if the pilot is rendered powerless by the "foresighted" programmer of the system.



II. The man-machine relationship

Man and the machine (and the computer in particular) can only function properly if they work together in harmony. However, they are - and will continue to be - unequal partners due to their different strengths and weaknesses:

A computer can grasp more data and compare it far more swiftly than the human brain will ever be capable of doing. However, it can only call up and compare preconceived and preprogrammed courses of action; it cannot "think" truly independently.

Only man can find new solutions to problems and react to unanticipated situations, even though he collects, compares and processes information at a hopelessly slower rate.

If these different capabilities are recognized - and above all acknowledged - man and machine can achieve a symbiosis and form a "system". If the human element of this partnership is continually weakened, this "system" becomes vulnerable: man and machine grow increasingly estranged.

And this is precisely the development that has been observable in recent years in the area of aircraft design: Pilots have increasingly been degraded to system operators ("push-button pilots") who often have no complete picture of the technical processes involved in the aircraft they operate for a living.

In a study conducted in 1996, the Nuremberg professor Holger Ebert came to the following conclusion: "The more advanced cockpits become, the less the pilots know about the technical systems." "Many pilots" would perhaps be more correct. The ones particularly affected are often those who were still flying an old B 727 yesterday but are assigned to fly an A 320 tomorrow; this is like leaping from the Stone Age to the high-tech age overnight.

Professor James Reason of the University of Manchester summed up the inevitable approach to the resolution of this problem in a single sentence: "We cannot change the human condition, but we can change the conditions under which people work."



III. Automation and responsibility

1. We live in a world in which life without automation is virtually inconceivable. Automation means delegating human action to machines - man lets machines work for him. This has not failed to have an impact on aviation as well.

Automation: For one person it is the work of the Devil - for another it is a blessing. Niki Lauda, who as we know can not only drive Formula 1 racecars but also fly all the different aircraft in the fleet of his airline Lauda Air, once told a journalist how he went about steering his ultra-modern Boeing 777: "Look, it's this way: I still take off manually, but then I switch on the autopilot and don't switch it off until after the landing in Los Angeles." This, in my opinion, reveals an attitude that is none too safe, as can be seen from the accident report on Flight ALW 301 and the subsequent modification of the B 757 manual which I mentioned earlier (see I. 4. d.).

Other people have greater reservations about the new technology. They are concerned that human beings might no longer be capable of completely controlling it and thus might no longer be able to assume responsibility for it. "With the introduction of 'fly by wire' technology, crews can be set a limit that is calculated by a computer. For the first time, the 'final authority' of the human being is being called into question."

This, in my opinion, was made eminently clear by the aforementioned accidents involving the A 320 in Warsaw and Mulhouse and the B 757 in Puerto Plata. They are, I feel, prime examples of what can happen when the computers function properly from a technical standpoint but either ignore or replace human input.

2. Another potential source of danger is the new layout of modern aircraft cockpits.

a. The cockpits of modern commercial aircraft are tidier than ever: Monitor technology makes it possible for the multitude of one-dimensional gauges and dials of the old instruments cluttering 1950s-era cockpits to be reduced to just a few screens. There is, however, a danger that - intentionally or unintentionally - the information necessary to comprehend the sequence of technical operations is being suppressed in the process. The idea behind this development is that the pilot should not be burdened with too much information and too much knowledge and distracted from performing his job. In principle, this idea is acceptable, but the line must be drawn at the point at which the human being is confused or even deprived of the ability to make decisions.

b. The same approach must apply to the "synthetic visual system" (high-performance graphics generator) that is presently being developed at the Institut für Flugmechanik und Regelungstechnik at Darmstadt Technical University under the aegis of Professor Kubbat: This system generates a three-dimensional display of the terrain surrounding the aircraft to enable the pilot to fly the aircraft as if visibility were good even when visibility is poor. This is indisputably a great help for the pilot, serves to promote flight safety and should therefore be welcomed. It should not, however, be permitted to lead to the aforementioned deprivation of the pilot's ability to make decisions. This would, for instance, be the case if an on-board computer only permitted the aircraft to be steered within the framework of the topographical conditions generated by the computer itself or defined by the data entered into it.

c. Moreover, the cockpit layout should not deviate unnecessarily from conditioned action patterns (allocation of warning colours, for instance: orange and red for warning or prohibition, green for all-clear or permission), because human beings swiftly revert to these thought processes in stressful and above all dangerous situations. It is simply a fact that the human brain cannot be immediately reprogrammed to forget conditioned behaviour patterns. The designer of a (flying) machine must therefore also take into account the typical archaic behaviour patterns of the human being.

3. Another area in which the question arises whether pilots can always bear responsibility for the execution of a flight is that of bad-weather approaches under CAT III c conditions (the lowest category of flight operations), i.e. landings with a runway visual range of 0 meters (zero visibility). Even though such approaches are not (yet) permitted today, landings by commercial aircraft on precision approach runways with corresponding ground equipment are already technically possible. This means that the runway visual range (RVR) can be a mere 0 meters instead of the 75 meters still required today for a CAT III b approach, and the decision height (DH) can likewise be 0 meters instead of the 30 meters (100 feet) required today. In other words, landings can be made under conditions in which visibility is so poor that rescue vehicles would find it very difficult or even impossible to find an aircraft that had crashed.

To put it more specifically: When flying an aircraft like the Boeing 747, which has a landing speed of about 270 km/h, i.e. an approach speed of roughly 75 m/sec, a pilot making a CAT II approach (DH 100 feet = 30 meters, RVR 300 meters) has only two or three seconds to make a "decision" whether to touch down or to go around again. When making a CAT III b approach (DH 17 feet = 5.4 meters, RVR 75 meters), which is already common practice today, he has even less time. In the case of a CAT III c approach, however, the aircraft would already have touched down - correctly or incorrectly - by the time a "decision" was made to go around again. And even if such a decision were to be made, an aircraft still weighing as much as 285 tons at the time of landing (such as a B 747-400) would need far too long to react. Anyone can imagine how catastrophic the consequences could be if an aircraft touched down just a few meters to the side or ahead of the runway.
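A rough back-of-the-envelope check of these orders of magnitude (assuming, for the sake of the argument, that the time available is essentially the time needed to traverse the visible stretch of runway) reads as follows:

\[
v \approx \frac{270\;\text{km/h}}{3.6} \approx 75\;\text{m/s}, \qquad
t_{\mathrm{CAT\,II}} \approx \frac{300\;\text{m}}{75\;\text{m/s}} = 4\;\text{s}, \qquad
t_{\mathrm{CAT\,III\,b}} \approx \frac{75\;\text{m}}{75\;\text{m/s}} = 1\;\text{s}.
\]

Of the roughly four seconds under CAT II, a good part is consumed by perception and reaction, which leaves the two or three seconds mentioned above; under CAT III b barely a second remains, and under CAT III c nothing at all.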

The key questions to be asked, in my opinion, are the following:
Does the technology we are developing still permit responsible action?
And: With which structures is there a real danger that responsible action will ultimately no longer be possible?


4. And let me point out one more danger that has not yet materialized today but could in the not-too-distant future: In the context of the development of so-called "free-flight concepts", serious consideration is currently being given to the possibility of no longer verbally issuing air traffic control instructions to pilots over the radio but instead transmitting them by radio signal to a receiver in the aircraft which would then carry them out directly - i.e. without delay. At this point - as a first step - the idea is still to have the pilot verify and confirm the instructions before the on-board computer carries them out, but how long will this be the case? Doubts do not appear to be totally unwarranted.
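The safeguard presently envisaged - execution only after the pilot has verified and confirmed the instruction - can be pictured as a simple gate. The following Python sketch is purely hypothetical and does not describe any existing data-link implementation; the message format and names are invented.

```python
# Hypothetical sketch of the confirmation gate discussed above: an uplinked
# ATC instruction reaches the flight management computer only after the
# pilot has explicitly confirmed it.

def handle_uplinked_instruction(instruction, pilot_confirms):
    """Execute an uplinked instruction only after pilot confirmation.

    instruction    : e.g. {"type": "climb", "flight_level": 350}
    pilot_confirms : callable shown the instruction; returns True or False
    """
    if pilot_confirms(instruction):
        return f"executing: {instruction}"
    return f"rejected by pilot: {instruction}"

# The concern voiced above: remove the confirmation step and the gate
# collapses into unconditional execution of whatever is received.
print(handle_uplinked_instruction({"type": "climb", "flight_level": 350},
                                  pilot_confirms=lambda msg: True))
```

As long as that confirmation step exists, the pilot-in-command remains in the loop; the moment it is dropped, the questions raised below arise in full force.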

And if such external control of aircraft is someday possible, and if provision is no longer made for the pilot to verify and confirm incoming instructions before they are automatically carried out, who can really rule out the possibility that some crazy and/or criminal "hacker" might someday be tempted to try to "take over" an aircraft?

As a next step, the idea is to network the on-board computers of aircraft to enable them to locate and communicate with each other by independently adjusting and co-ordinating their flight plan, direction and speed with one another. The pilot would then no longer be permitted - or, as yet another step, even be able - to personally intervene. This would be the final departure from the pilot-in-command concept.  

IV. Legal aspects

1. We have seen that technology is well on the way to depriving the aircraft pilot of his decision-making capacity and, ultimately, control. From a legal standpoint, this is cause for considerable concern and therefore cannot remain unaddressed. In the Chicago Convention of 1944, the "mother of all aviation regulations", Annex 6, No. 4.5.1 states: "The pilot-in-command shall be responsible for the operation and safety of the aeroplane and for the safety of all persons on board during flight time."

This provision was universally incorporated into the national legal regimes of the ICAO member states and thus into the German Air Traffic Regulations as well. Section 3 para 1 LuftVO states: "The pilot-in-command shall have the right of decision concerning the operation of the aircraft. He must take the measures necessary to ensure safety during flight, takeoff, landing and taxiing."

Section 1 LuftVO obligates the pilot as an active participant in air traffic to conduct himself in such a manner that safety and order in air traffic are ensured and that no other person is endangered or injured. German air traffic law thus makes it clear that - all automation notwithstanding - it is not the computer but the human being who bears ultimate responsibility for safe execution of the flight. But can he do so?

2. The ideal the legislator had in mind when drafting the aforementioned provisions, which still prevail today, has been profoundly altered by the new technologies. For technical reasons (feasibility) and due to economic constraints (cost reduction), man is now building systems the monitoring and control of which necessitate a complicated servo system that is susceptible to malfunction. Often, however, this system is so complex that it can only be mastered with great difficulty. In such cases the possibilities for taking responsible action are very limited, to say the least.

The same is true of landings when the visibility is 0 meters (see III. 3. above). If a pilot has less than two seconds to correct an error he has recognized, and if the technical equipment involved (aircraft or engines) needs a few more seconds to respond after receiving his commands, it is no longer possible for any person to really act responsibly.

For the ability to exercise responsibility presupposes the freedom and opportunity to take action. If, however, the pilot in cases such as those described earlier is de facto incapable of exercising any responsibility, he cannot continue to be burdened with this responsibility de iure.

3. But that is precisely what some aircraft manufacturers apparently fail to comprehend. During the Fourth ICAO Global Flight Safety and Human Factors Symposium in Santiago (Chile) in April 1999, for instance, Curtis Graeber, a representative of the Boeing Company, stated: "In over 50% of the loss of control accidents ... complete control was available to the flight crew." That may well be. But he - intentionally! - failed to mention the other 50%. And anyone who read Boeing's first statements on the Birgenair crash will recall similar assertions made by other Boeing representatives. The fact that the operations manuals for the B 757 and the B 767 were subsequently modified more or less surreptitiously does not really fit this picture.

And during the same symposium, Etienne Tarnowski, a representative of Airbus Industries, compared the role of the pilot to that of a goalkeeper in a soccer game and voiced the opinion that "the pilot as the goalkeeper is ultimately responsible for the safe operation of the aircraft in all circumstances". I am grateful to Mr. Tarnowski for this metaphoric example because in my opinion it clearly reveals the very disturbing attitude of some aircraft designers: It goes without saying that it is precisely the goalkeeper who does not direct and control the game. He is only called upon when the team, especially the defense (in this case the computer), has failed to do its job on the playing field. Then, however, he is "the loneliest man on the field". If the game is lost because of him, no one says anything more about the poor performance of the ten other players on the field or the coach! The pilot is just as lonely as the goalkeeper when the computer doesn't know what to do next or even malfunctions. But there is one big difference: For the pilot, there is no break in the game or any opportunity to bring in a replacement. He has to take action. If things go wrong, it is simply chalked up to the famous "pilot's error" - the real cause is all too gladly forgotten.

Interestingly enough, the comments by the manufacturers' representatives contain no indication of any self-critical thinking about whether "design defects" could perhaps be the cause of aircraft accidents.

To sum it up: If the legal concept of the "pilot-in-command" is to be prevented from degenerating into "pilot-partly-in-command", steps must be taken - now and in the future - to ensure that, precisely in the case of complex flight management systems, the pilot is able to intervene in the highly automated sequence of flight operations at any time and correct it. Otherwise he would degenerate into a "pilot doing his best".

In the future, the pilot-in-command must therefore still be able to switch off the autopilot at any time and stabilize the aircraft in the desired position exclusively by manually setting the pitch, bank and thrust. Unfortunately, however, designers of high-tech aircraft have not always devoted sufficient attention to this idea of furnishing a complete "backup" by reducing the computer override. And I fear that they will devote even less attention to it in the future.

But if one wishes to deprive the human being of his freedom to take action, one must also, as a logical consequence, relieve him of his responsibility for his actions. It would then - as a further logical consequence - be necessary to abandon the legal concept of the "pilot-in-command"; section 3 LuftVO would have to be eliminated or reformulated.



V. Summary and conclusions

1. I would like to make one thing very clear: Automation in the cockpit - as in other areas - can basically be a blessing. This is indisputably true of achievements such as the autopilot or the various warning systems. My lecture is therefore not motivated by hostility towards technical equipment or technology but rather solely by a healthy scepticism. For the dream of perfectly functioning technology is just as illusory as the belief in the infallibility of the human being.

The elaborate precautions taken in connection with the Y2K problem show just how (un)certain the "EDP faithful" are about their command of modern computer-controlled technology. Even NASA, which undoubtedly has one of the highest standards of technology, decided to play it safe and brought the space shuttle Discovery and its crew (who had repaired the Hubble space telescope) back to Earth before 1 January 2000 because it could not completely rule out the possibility that a computer glitch might occur at the turn of the "millennium".

2. Man and machine are thus equally imperfect. There is a crucial difference between the two, however: A "machine" (or, to be more precise, the computer controlling it) does not think. It only processes and carries out preconceived and pre-programmed courses of action - at least at the present time. The human being, however, can think and is basically capable of deviating from conditioned patterns of action and finding new solutions. This fact cannot in all seriousness be contested and should never be forgotten.

If a pilot is called upon to solve a technical problem, the success of his efforts will hinge on his creativity and his superior understanding of the system. He must understand what is going on in the systems he is operating and what phenomena are determining the sequence of events. He must be able to correctly perceive what is happening even if he cannot look inside the system.

3. Given this situation, the role of technology must be - and must continue to be - that of a service provider: It must be limited to providing the resources and information required for flexible and effective action and to warning the pilots of dangerous developments. The aircraft designer must therefore, if necessary, even dispense with technical advances in order to ensure that the aircraft remains operable, comprehensible and thus controllable by the human being. Under no circumstances can technology be permitted to filter or block out information, much less take action on its own.

4. The conclusion: "The strengths of the human being, which lie primarily in his analytical capacity, must be accorded greater consideration in design."

And, moreover: They must be embedded as a basic philosophy in the mind of the designer and taken into account from the very beginning of the development of aircraft and aircraft systems. As long as technology is incapable of completely replacing the human being, the latter's skills - and limitations - must set the standards.

"Aircraft must be adapted to serve as useful, controllable tools for the human being, not vice versa." Or, to put it differently: Man must (continue to) control the machine; the machine must not be allowed to control man. Only then can the pilot-in-command continue to exercise and bear his responsibility.

My aim therefore has been and will be to bring this problem to the attention of those who initially feel no sense of responsibility when an aircraft accident occurs but who are indeed responsible, at least indirectly: The designers, who dream up and build machines that are so computer-controlled as to be barely or no longer controllable by the human being, and the people responsible for the business decisions to purchase and put these aircraft into service. Even though they may not be legally responsible in a given case - they cannot, after all, be reproached for putting a certified aircraft into service - their moral responsibility can, in my opinion, very well be addressed if they knowingly took risks for economic gain.