Fifty years ago, on July 21, 1969, Neil Armstrong became the first person to walk on the moon. It was one of the most momentous occasions in human history, and more than half a billion people watched it unfold live on television. Armstrong famously stated, “That's one small step for man, one giant leap for mankind,” and he couldn’t have been more right. Consider, for a moment, just how far technology has come in the last 50 years. At the time of the lunar landing, even the most rudimentary form of what we would consider a ‘computer’ had only existed for around 30 years. This was an era when computers weighed hundreds of pounds and occupied considerable amounts of space.
The computers that powered the command module and the lunar lander weighed in excess of 70 pounds each, were built from thousands of early integrated circuits, and had performance stats dwarfed by any cell phone produced today. But at the time, they were the smallest, fastest, and most advanced computers in the world. The Apollo Guidance Computer (AGC), responsible for monitoring and correcting the flight path on the fly as well as automatically interfacing with over 150 different ship components, had approximately 4KB of RAM. In contrast, most smartphones today have 4GB of RAM. That is over a million times the amount of memory available to the entire AGC system. Likewise, the processor in the AGC clocked in at a mere 0.043MHz, while an iPhone X runs at 2,490MHz, nearly 60,000 times faster. NASA’s achievements are even more remarkable when you consider that your average microwave has more storage and computing power than the tech responsible for keeping astronauts on track as they hurtled through space.
So where did all the code that powered the Apollo 11 mission come from? At the time, the technology simply didn’t exist. Programmers and engineers at MIT’s Instrumentation Laboratory had to invent it from scratch; this was the pre-internet era, when they couldn’t just Google how to solve a problem. They developed an original form of program storage (dubbed ‘core rope memory’) whereby a series of wires was woven through magnetic cores: if a wire was wrapped around a core, it read as a “0”; if it passed through the core, it read as a “1”. After a code sequence was written out, it would be physically woven into the computer, a process that could take as long as eight weeks. Even a single wire out of place or wrapped incorrectly would mean malfunctions in the computer’s flight navigation systems and could have had catastrophic results. Fortunately, NASA’s rather analog approach to coding its flight computers rendered them safe from solar radiation and other cosmic effects. Considering the programming instructions were literally hardwired into the computers, this archaic form of software was also, technically, a kind of software-hardware hybrid.
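To make the weaving scheme concrete, here is a deliberately simplified sketch of core rope read-out in Python: a word is defined entirely by whether each wire passes through or around a core. This is a toy model under loose assumptions — the real hardware packed many sense wires per core, and the function names here are our own.

```python
# Toy model of core rope memory: a program word is defined entirely by
# the weaving pattern. A wire threaded THROUGH a core reads as 1; a wire
# woven AROUND it reads as 0. (Greatly simplified from the real hardware.)

def weave_word(bits):
    """'Weave' a word: record, per core, whether the wire passes through."""
    return ["through" if b else "around" for b in bits]

def read_word(weave):
    """Sense the rope: a wire through the core induces a pulse (1)."""
    return [1 if path == "through" else 0 for path in weave]

rope = weave_word([1, 0, 1, 1, 0, 0, 1, 0])
print(read_word(rope))  # [1, 0, 1, 1, 0, 0, 1, 0]
```

The point of the model: the “software” exists only as a physical pattern, which is why a single misplaced wire meant a rewoven rope rather than a quick patch.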
Source: https://imgur.com/a/KQmQdiS
A photo shows Margaret Hamilton, the leader of the Apollo flight software development team, standing next to a stack of source code from the AGC. At a staggering 11,000 pages, the code is as tall as she is. Hamilton herself coined the term “software engineering” while developing these systems for NASA. In fact, her unique approach to software development may have saved the entire Apollo 11 mission.
Just as Neil Armstrong and Buzz Aldrin were attempting to land on the moon, a malfunctioning radar began flooding the computer with garbled data, which threatened to overload the AGC and prevent it from making the necessary landing calculations. Thankfully, Hamilton had planned ahead by designing an ‘asynchronous executive’ failsafe that allowed the computer to drop non-critical tasks when overloaded. The computer could therefore dump the radar data and focus on the calculations needed to land safely. Unbelievably, Hamilton’s insistence on including these sorts of software failsafes was deemed “excessive” at the time by NASA, which maintained that the astronauts were trained to be perfect and would make no mistakes, and that the “frivolous” code was therefore unnecessary. It could be argued that her foresight is the very reason the Apollo 11 mission succeeded and the astronauts returned home safely.
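The behavior Hamilton designed can be sketched in miniature: a priority-based executive that, when its job queue overflows, sheds the lowest-priority work so critical tasks keep running. This is a hedged toy model in Python, not the actual AGC executive (which was written in AGC assembly and interpreted code); the class, the capacity, and the job names are all illustrative.

```python
import heapq

class Executive:
    """Toy priority executive in the spirit of the AGC's scheduler.
    Names and capacity are illustrative, not the real AGC design."""

    def __init__(self, capacity=7):      # the real AGC had roughly 7 job slots
        self.capacity = capacity
        self.jobs = []                   # min-heap keyed on -priority
        self.dropped = []                # record of shed work

    def schedule(self, priority, name):
        heapq.heappush(self.jobs, (-priority, name))
        if len(self.jobs) > self.capacity:
            # Overload: shed the lowest-priority job instead of crashing.
            lowest = max(self.jobs)      # largest key = smallest priority
            self.jobs.remove(lowest)
            heapq.heapify(self.jobs)
            self.dropped.append(lowest[1])

    def run_next(self):
        """Return the highest-priority pending job, if any."""
        return heapq.heappop(self.jobs)[1] if self.jobs else None

ex = Executive(capacity=2)
ex.schedule(10, "landing burn calc")     # critical
ex.schedule(1, "radar telemetry")        # non-critical
ex.schedule(9, "attitude control")       # critical; queue now overflows
print(ex.dropped)                        # ['radar telemetry']
print(ex.run_next())                     # landing burn calc
```

The design choice is the same one that saved the landing: when you cannot do everything, do the most important things and explicitly discard the rest.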
Source: https://imgur.com/a/PBAfMfQ
With the lunar landing accomplished, the Space Race had come to a close. Cold War tensions with the Soviet Union began to ease and inflation was on the rise. Passionate support for space exploration waned quickly, and pressure on the government to reduce spending meant the Apollo program was shuttered in 1972 after only six successful lunar landings. Gene Cernan became the last man to walk on the moon in December 1972. After that, space travel and exploration moved to the back burner of American priorities, and the industry stagnated for decades.
That is, until a handful of billionaires decided to get involved. Between 2000 and 2004, three separate private aerospace companies emerged that refocused America’s attention on the cosmos: Jeff Bezos, Elon Musk, and Richard Branson founded Blue Origin, SpaceX, and Virgin Galactic, respectively. They recruited thousands of engineers, scientists, programmers, and software developers to create the next generation of technology to explore space. It marked the first time private-sector companies could compete with NASA, and it heralded the start of the Billionaire Space Race.
Each company has a unique goal: Blue Origin seeks to shift heavy industry from Earth into space; SpaceX aims to reduce the cost of space travel via reusable rockets and eventually colonize Mars; and Virgin Galactic wants to bring tourism into space. The result is less a head-to-head competition for a single objective and more a general rivalry between companies.
Each of these companies is at a different stage in its bid to reach the stars. Blue Origin has completed a number of sub-orbital flights carrying consumer payloads. Virgin Galactic has reached an altitude just shy of 56 miles and has even carried its first test passenger. SpaceX has made a number of successful launches and landings, including supply runs to the International Space Station. Overall it is clear that they are making progress, as each flight seems to go better than the last.
So what kind of sweet new tech and software powers these futuristic rockets and shuttles? Sadly for us, much of the truly interesting stuff is classified for security reasons. However, even with the limited information available, we can make a few educated guesses at the sort of problems the developers will need to prepare for. First and foremost, the computers will need to survive the incredible forces applied to the shuttle at launch. This includes not only the obvious intense heat, but also the jarring vibrations that would threaten to pull the devices apart.
Things don’t get any easier once the shuttle reaches outer space. Once in orbit, computer systems need to survive massive fluctuations in temperature. While the ship is between the Sun and the Earth, temperatures can exceed 250°F; when the Earth is between the Sun and the ship, temperatures can drop below -240°F. Once engineers have nailed down solutions for these problems, the next obstacle to conquer is solar radiation.
The Sun emits high-energy radioactive particles that can damage sensitive instruments as well as the onboard computer systems themselves. It is rather safe to assume that modern aerospace companies such as SpaceX no longer rely on antiquated systems like core rope memory and have moved on to more modern (and digital) means of coding their devices. This leaves them vulnerable to what is called a ‘bit flip’: when a radioactive particle passes through key components within the computer, stored data may be altered. If one of these particles strikes memory, it can change a “0” to a “1” or vice versa, corrupting the data. Likewise, calculations can become wildly erratic if a particle strikes the processor. Changing even a single value in the extraordinarily complex calculations these machines make could drastically change their outputs.
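A bit flip is easy to demonstrate in software. The sketch below (the variable name is purely illustrative) simulates a single-event upset by XOR-ing one bit of a stored value, and shows how far a result can drift from one flipped bit:

```python
# A single-event upset modeled as an XOR with one bit: flipping just one
# bit of a stored value can change a calculation's result dramatically.

def flip_bit(value, bit):
    """Simulate a high-energy particle flipping one bit in memory."""
    return value ^ (1 << bit)

altitude_m = 100_000                    # stored value: 100 km (illustrative)
corrupted = flip_bit(altitude_m, 17)    # one particle strike, one flipped bit
print(altitude_m, "->", corrupted)      # 100000 -> 231072
```

One flipped bit more than doubled the stored altitude — exactly the kind of silent corruption the techniques below are designed to catch.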
How would software developers and engineers tackle these problems? The answer is twofold: parity bits and redundancy. Software onboard SpaceX’s vehicles can be designed to monitor memory, and if an element becomes corrupted, the affected sequence can quickly be detected and amended using parity bits. When it comes to the processor, SpaceX opts for redundancy rather than radiation-shielded components. According to John Muratore, the former director of SpaceX vehicle certification, each Dragon capsule is equipped with three flight computers. Their dual-core processors run the calculations independently and then compare the results. If one of the cores is hit by a high-energy particle, there are five other calculations to compare it against. When the malfunctioning core is identified, the system automatically reboots it and re-syncs it with the other cores. In addition to the flight computers, there are 18 other onboard systems that use triple-redundant computers, for a total of 54 processors running a single Dragon capsule.
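Both techniques can be sketched in a few lines of Python. This is a hedged illustration of the general ideas — parity checking and majority voting across redundant results — and not SpaceX’s actual flight code:

```python
# Parity bit: detect a single flipped bit in a stored word.
# Majority vote: let redundant computations outvote a corrupted one.

def parity(word):
    """Return 1 if the word has an odd number of set bits, else 0."""
    return bin(word).count("1") % 2

def store(word):
    """Store a word alongside its parity bit."""
    return (word, parity(word))

def check(stored):
    """Recompute parity; a mismatch means a bit flipped since storage."""
    word, p = stored
    return parity(word) == p

def vote(a, b, c):
    """Majority vote across three redundant results."""
    return a if a == b or a == c else b

# One redundant processor returns a corrupted result;
# the vote still yields the right answer.
print(vote(42, 42, 231072))  # 42
```

In a real system the outvoted unit would then be rebooted and re-synced with its peers, as described above; parity schemes in practice are also usually extended to error-correcting codes that can repair, not just detect, a flipped bit.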
Beyond that, it is rather difficult to guess what these companies could have under the hood of their shuttles. If you look back on history, it is clear that software development and space exploration go hand in hand. The harsh conditions of space helped shape the adaptability of software. Likewise, software contributed greatly to the resilience of the space industry and the progress of technology as a whole.
Contact us to see why the brightest companies trust Lithios.
Get in touch