It doesn’t take a failure of anything big to cause big trouble – as in massive, catastrophic and lethal damage to a sophisticated transportation system.
The U.S. space shuttle Challenger broke apart 73 seconds after liftoff in 1986 because of a failure of O-ring gaskets: a rocket booster tore loose and ruptured the external fuel tank.
Some of the worst airline disasters in history have involved failures of instruments and warning systems: In 1997, Korean Air Flight 801 crashed three miles short of the runway in Guam, killing 228 people, after a ground-based altitude warning system failed to alert controllers to its dangerously low approach.
Defects in software code – strings of numbers and letters – can do it too. Late last month, a Lion Air Boeing 737 Max 8 jetliner crashed into the Java Sea off Indonesia, killing all 189 passengers and crew, due to what investigators described as a “glitch” in the plane’s flight-control software.
Some glitch. In most cases, that word implies a minor, temporary malfunction that can be easily fixed and doesn’t cause a major problem.
Not this time. Following the crash, the Federal Aviation Administration issued an emergency notice to operators of Boeing 737 Max 8 and 9 planes, warning that faulty “Angle of Attack” sensor readings “could cause the flight crew to have difficulty controlling the airplane.” This, it said, in a euphemistic phrase for a deadly crash, could lead to “possible impact with terrain.” Or in this case, the ocean.
All of which underscores, yet again, how dependent modern society has become on the security and integrity of software – not just for the magical conveniences that computers, smartphones, apps and smart devices provide, but also for the lives and safety of people when they travel. Clearly, that software is not always perfect.
Source: ft.com