A view of the world from my own unique perspective

Archive for the ‘Science’ Category

Did the Dinosaurs Die From Boredom?

On the surface, this sounds like a ridiculous title. Everyone knows that the dinosaurs were wiped out in a mass extinction event about 65 million years ago. An asteroid hit the Earth near the Yucatan Peninsula, and the debris displaced by the impact caused an extended global winter. The dinosaurs, being cold-blooded, were unable to maintain their body temperature as the Earth started cooling, and they eventually died.

Asteroid Impact Location v2

So, what’s boredom got to do with anything? As you know, this is The Bob Angle, so naturally there has to be another way of looking at it…

A few years ago, I was watching the movie Men In Black. Toward the end of the movie, there is a CGI scene in which the camera zooms out from Battery Park in Manhattan, into orbit, past our solar system, beyond the Milky Way … until it reaches the edge of the universe itself. Then it keeps zooming out until it is revealed that our universe is actually contained within a marble-like object, which is resting on the ground of a world from a higher plane of existence.

I found this sequence fascinating because it reminded me of Stephen Hawking’s concept of multiverses – multiple universes. Hawking speculated that ours isn’t the only universe; there might be hundreds or thousands of other universes, each formed by their own Big Bang, and each governed by different laws of physics.

Marble Universe

This scene also proposes the notion of life on a higher plane of existence. Instead of a singular, managerial God, an entire society of superior beings may exist on some unreachable, god-like realm. In fact, our entire universe may be nothing more than a sophisticated physics experiment to the creatures who inhabit this plane. Naturally, every one of these creatures would be considered a god to us mere mortals.

If our snow-globe universe is a classroom experiment, then it’s possible that we are being observed by several of these god-like creatures, or perhaps an entire room full of them. That thought alone should make you want to be on your best behaviour – if Humankind destroys itself in a nuclear war (after evolving from single-celled organisms) the superior being in charge of our celestial marble may receive a lower grade for this science project.


The Game of Life

No, I’m not talking about the board game that you probably played during your childhood; this is a more esoteric life simulation, formally known as a cellular automaton. If you took computer science in university, then you will undoubtedly be familiar with it.

The concept was developed in the 1940s by John von Neumann and Stanislaw Ulam, and was turned into a simulation by John Conway in 1970. Calling it a game is a bit of a misnomer, since there is no continual user interaction; it’s essentially a simulation. It can also be run within a browser, if you’d like to give it a try.

You begin with a blank grid. Each square, or cell, represents a life form. You add one or more life forms by highlighting some of the cells. Each cell has eight neighbouring cells. Whether a particular cell survives to the next generation depends on the following set of rules:

Automata Grid

  • 0-1 Neighbours: The cell dies from underpopulation (or loneliness).
  • 2-3 Neighbours: The cell survives until the next generation.
  • 4-8 Neighbours: The cell dies from overcrowding.
  • A dead cell with three neighbours will come to life in the next generation.

Cell Neighbours

It seems absurdly simple, but this simulation can generate some surprisingly complex behaviour. Cellular automata are used in encryption, random number generation, and the arrangement of processing elements in CPUs. If you’d like to take a deep dive into this topic, M. Mitchell Waldrop’s book, Complexity: The Emerging Science at the Edge of Order and Chaos, is an excellent place to start. Waldrop elaborates on research into complex systems done at the Santa Fe Institute in New Mexico. Systems with just a few simple rules can generate complex, even unpredictable behaviour, and even act as if they’re intelligent.

In the Game of Life, the initial configuration of cells is called the seed. Seeds can evolve into stable, complex or chaotic patterns. However, many will become static patterns, and others will simply oscillate forever.
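The rules above can be sketched in a few lines of Python. This is a minimal sketch of my own (the function and variable names are mine, not part of any standard implementation), storing live cells as a set of coordinates:

```python
from collections import Counter

# Minimal Game of Life step: live cells are stored as a set of (x, y) tuples.
def step(live):
    # Count how many live neighbours each cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbours
    # (birth), or if it is already alive and has 2 or 3 (survival).
    return {
        cell
        for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A "blinker" - three cells in a row - oscillates with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))        # vertical: {(1, 0), (1, 1), (1, 2)}
print(step(step(blinker)))  # back to the original horizontal row
```

Running the blinker through two generations brings it back to its starting configuration, which is exactly the "oscillate forever" behaviour described above.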

Automata oscillator


The Dinosaur / Cellular Automata Connection

Dinosaurs ruled the Earth from 225 to 65 million years ago – a period lasting 160 million years. During this time, they were in perfect harmony with nature, having established an ecological equilibrium. In fact, one might argue that the dinosaurs were better stewards of the planet than we are, despite our larger brains and lofty perch at the top of the evolutionary ladder.

Dinosaur Illustration 1

What the Game of Life is to us – a simplistic model of evolution – is probably what our universe is to these superior beings. It’s just a dark snow globe, or perhaps a novelty item sold in their museum gift shops. It might be a science experiment or a game: generate an initial seed value by adjusting the laws of physics (and other parameters), and then sit back and watch the universe unfold – see if life develops, or how advanced the civilizations will become before they destroy their environment or themselves. On a god-like plane of existence, this might actually be amusing!

Once the simulation began, and the primordial matter coalesced into stars and planets, circumstances on Earth grew interesting as it slowly took shape and developed. Life began and started to evolve, growing increasingly complex.

Now consider the reign of the dinosaurs. They evolved into a stable configuration, and remained that way for 160 million years. After this ecological equilibrium was established, things plateaued, evolution-wise. If I were a superior being, I would quickly become bored. Life on Earth would be much like watching a static or oscillating cellular automaton pattern… dull as ditch water. At this point, I could throw out my snow globe universe, or being the resourceful being that I am, I could find a way to hack it… just a little.

This is what I think might have happened. The owner of our universe-in-a-marble decided to make an infinitesimal change to a tiny sliver of our universe – just enough to disrupt the equilibrium on a single planet. Our universe owner either created a new asteroid by breaking apart a large object, or nudged an existing asteroid so that its orbit would collide with Earth. The impact wouldn’t be forceful enough to destroy the planet or all life on it, but enough to cause a global extinction event and leave a few survivors who would take evolution in a new direction. In fact, the BBC reported that the asteroid hit the Earth in just the right spot to accomplish this.

Another article speculates that our sun had a sister star that hurled a few meteors in our direction every 27 million years. One of them hit the Earth and wiped out the dinosaurs. There are countless ways in which one can hack the universe.


Could This Happen To Us?

In a word: no. We Homo sapiens are just too darn interesting. We’ve settled all over the planet. We’ve drawn and redrawn political boundaries as empires rose and fell. We’re constantly inventing new things and are now extending our reach past the planet itself.

If that weren’t enough, we’re already the architects of our own demise. For over a century, we’ve been extracting raw materials from the ground, processing them, and then feeding them back to the planet in an indigestible (plastic), or even poisonous configuration (spent nuclear fuel rods) – a perverse form of reverse dialysis on a planetary scale.

Will we smarten up in time to stop the ecological damage we’ve caused to our planet, or will we perish as a result of our own stupidity? Even to a superior being, that’s some pretty decent cliffhanger material!

Even if we do manage to save ourselves, we still won’t be out of the proverbial woods. Achieving a net zero carbon footprint sounds like an admirable goal, but let’s not rest on our laurels for too long. After 160 million years of living in harmony with the planet, the watchers may once again decide to stir things up…

Asteroid Earth



The Apollo Code Redundancy Speculation

About 12-15 years ago, some friends and I were discussing Moore’s Law, but from a slightly different angle. While we’ve enjoyed exponential increases in computer memory and storage space over the past couple of decades, we were nevertheless impressed by the programmers of the early personal computers. They were able to write useful programs and very enjoyable games that were less than 64K in size. I don’t think that any of today’s programmers would have the talent or resourcefulness to do something like that now – packing that much functionality into such a small space requires not only proficiency in a low-level programming language, but also an intimate knowledge of the computer hardware itself (along with its limitations and idiosyncrasies).

Mission Control Console

Apollo Mission Control Center

One of us then took the comparison a step further and said “What about the Apollo engineers during the 1960s? They had even less memory, and their code had to send men to the moon and back!”. Another friend added “Did you know that 90% of the computer code used during the Apollo missions was redundant? Only 10% of the code was needed to run the computer – the rest was used for error checking and to ensure that the computers never crashed”.

Windows BSOD

I can usually identify an urban legend or a hoax fairly quickly, but this one – despite the lack of references or source material – actually sounded plausible. The thought of a computer miscalculation, a crash, or the Apollo equivalent of the dreaded Microsoft Windows BSoD (Blue Screen of Death) would be simply terrifying! It seemed reasonable to me that the Apollo engineers would add as much extra error-trapping code as necessary to ensure that the onboard computers never crashed.

So I filed that story away in the back of my mind as something that would likely remain one of life’s great mysteries.


Fast forward to July 2017. I was attending the American Mensa Annual Gathering, and deciding which lecture to see next. There are typically 6-7 simultaneous lecture streams, and naturally, I think they’re all interesting; it’s exceedingly difficult to settle on just one. For the 10:30 a.m. slot, I finally decided to go with the one billed as “A Behind-The-Scenes Look at the Apollo Moon Landing”. The lecturer was Martha Lemasters, who was a member of IBM’s Launch Support Team as a PR writer during the Apollo missions (IBM was a NASA contractor). After the end of the Apollo program, she worked on the Skylab and Soyuz programs.

Lemasters Lecture

Martha Lemasters’ Mensa lecture.

Lemasters had also written a book about her time at IBM, called The Step: One Woman’s Journey to Finding her Own Happiness and Success During the Apollo Space Program. Her engaging, 75-minute presentation included numerous facts and trivia about NASA and the Apollo missions, stories about her job and the working conditions, excerpts from her book, and a slide presentation filled with photos that I had never seen before. The room full of Mensa members enjoyed themselves thoroughly. Lemasters is a natural storyteller, and she effortlessly took the audience with her on a journey back in time, to a challenging, fast-paced working environment, but also one that may seem insufferably chauvinistic by today’s standards. For example: women were not allowed to wear dresses on the launch platform because it would be too much of a distraction for their male coworkers. Of course, that’s not quite how NASA phrased it – they said that dresses were a “safety hazard” because a distracted male working on an elevated platform might drop a wrench and injure someone working below.

Personally, I found this directive puzzling: IBM employs only intelligent, educated, ambitious, disciplined and professional people – the best of the best. Surely these men wouldn’t be reduced to salivating teenagers at the sight of a woman in a dress.

Lemasters finished her presentation with a Q&A session, which was a welcome surprise and a wonderful opportunity – a chance to speak with someone who actually worked on the Apollo missions and was embedded with the engineers. As she pointed out during her lecture, “There aren’t too many Apollo veterans left”. I raised my hand, recited my friend’s claim about the redundant computer code, and asked her if it was actually true.

Unfortunately, she didn’t know the answer herself, since she didn’t work directly with the computer systems. Most presenters, when faced with such a question, would simply say that they don’t know, and then move on. Instead, she did something that really impressed me: she said that she still keeps in touch with many of the engineers from the Apollo project, and that if I’d like to write down my question and give her my e-mail address, she would forward it to them.

Well, this was much more than I could have hoped for! I never thought that the redundant code story would ever be verified, and now my question was about to be forwarded right to the source – engineers and programmers who actually worked on Apollo 11 (the first moon landing)!

A few days later, I received e-mail messages from Martha Lemasters, and two former Apollo Mission veterans, James Handley and Kenneth Clark (both of whom Lemasters described as “geniuses”). They not only answered my question, but were kind enough to send several e-mail messages over the next few days, containing an incredible amount of detail. I was impressed with the amount of information they provided, and also astounded that they were able to recall these technical details so vividly after almost half a century.


James Handley was in charge of the design and programming effort for the SLCC (Saturn Ground Computer Launch Checkout System) in Huntsville, Alabama, and then transferred to the Kennedy Space Center in Florida, to oversee the installation and maintenance of the software. Using one of the first IBM 360 mainframe computers, Handley and his team developed the SIRS (Saturn Information Management System), a workload management system. He also headed the NASA Flight Crew Training Directorate contract. Handley eventually managed a staff of 90, and was responsible for all Saturn programming efforts, the facility computer, and all new business activities. Later in his career, Handley worked on the design, development and installation of the Space Shuttle Ground Checkout System.

Kenneth Clark summarized his role in the Apollo / Saturn project as follows: “I was a programmer and launch team member for IBM’s part of the project at the KSC (Kennedy Space Center). My earliest job was writing programs to check out the Saturn IB & V launch vehicles. I later became a member of the launch team and the ‘go to’ guy for anything bad that happened to the software in the Ground Launch Computers (RCA 110As). Later I was the leader of the design / development team for the Space Shuttle Launch Processing System.”

NASA Code Redundancy – The Real Story

Here is their response, pieced together from our e-mail conversations:

The Launch Vehicle Digital Computer (LVDC), made by IBM in Owego, NY, was called a Triple Modular Redundant (TMR) computer. That meant that the guidance equations (or code) were simultaneously being solved by three different circuits, then compared and voted on, so that if there was a single-point failure in the computer, two answers would agree and the third would be discarded. This was done to achieve the close-to-100% reliability desired. So the computer was effectively three computers, plus the circuits to compare their results. On the issue of code redundancy, I think there was only one set of code in the computer, and the TMR logic all operated on that set of code. Therefore the code itself was not replicated, although I think there were checks and balances in the code as well – but I don’t think the 10% vs. 90% figure is true.
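The voting scheme described above is easy to picture in code. Here is a minimal sketch of my own of TMR majority voting – in the real LVDC this was done in hardware, not software, so this is purely illustrative:

```python
# Sketch of Triple Modular Redundancy (TMR) voting: the same computation
# runs on three independent channels, and a voter keeps the majority
# answer, masking any single-channel failure.
def tmr_vote(a, b, c):
    """Return the majority value of three redundant results."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    # All three disagree: more than one channel has failed,
    # so the fault cannot be masked by voting.
    raise RuntimeError("all three results disagree - unmaskable fault")

# One faulty channel is simply outvoted by the other two.
print(tmr_vote(42, 42, 41))  # -> 42
```

The key property is that a single wrong answer never propagates: the two agreeing channels always win the vote.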

The term “code redundant” implies that there is code that recomputes a value for which the answer is known, in order to verify correctness. There were two Apollo Guidance Computers in the spacecraft: one in the Command Module and one in the Lunar Module. I doubt there was any of that in the flight computers, and know for a fact there was none in the ground computers. The Launch Vehicle Digital Computer used Triple Modular Redundancy (TMR) logic, but I don’t believe the code was replicated. The Saturn Ground Launch Computers were not TMR. However, the Mobile Launcher Computer did contain a redundant set of code, which was switched to if the primary memory encountered a parity error, or if there was a no-instruction alarm during execution.

On the subject of error checking, not even close to 90% of the code would be allocated to that task. The amount of memory in any of the computers made it absolutely impossible for there to be much if any code in the computers to be used for error checking. During the Apollo era memory was big, bulky, and most of all, heavy. They just couldn’t afford to launch much of it. Having redundant code would require redundant memory. The error checking that existed was to determine if an operation requested or commanded by a program completed successfully. There were some checks even in the Lunar Lander to report on unexpected errors. An example of this was the Lunar Module program alarms minutes into the landing sequence (Error codes 1201 & 1202).

The memory used in the computers was mostly magnetic core. Here are some examples of the memory sizes used in the computers:

  • Saturn Ground Launch Computers (RCA 110A) – 32 K 24-bit words + 1 parity bit
  • Instrument Unit Launch Vehicle Digital Computer – 32 K 28-bit words including 2 parity bits
  • Apollo Guidance Computers — 2 K (2,048) 16-bit words of erasable magnetic core memory and 36 K 16-bit words of read-only core rope memory.
Apollo Guidance Computer

Apollo Guidance Computer
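For a sense of scale, the memory sizes listed above amount to remarkably little by modern standards. Here is a rough back-of-the-envelope conversion in Python (the machine labels are my own shorthand, and I’m ignoring parity bits, so treat the figures as approximations):

```python
# Approximate capacity in KiB from word count and data bits per word,
# ignoring parity bits.
def capacity_kib(words, data_bits):
    return words * data_bits / 8 / 1024

machines = {
    "RCA 110A (ground launch)":  (32 * 1024, 24),  # 24 data bits + 1 parity
    "LVDC (instrument unit)":    (32 * 1024, 26),  # 28-bit words incl. 2 parity
    "AGC core rope (read-only)": (36 * 1024, 16),
}
for name, (words, bits) in machines.items():
    print(f"{name}: ~{capacity_kib(words, bits):.0f} KiB")
```

Even the largest of these works out to roughly 100 KiB – a fraction of a single photo on a modern phone.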

The Space Shuttle Program carried redundancy to the ultimate level. The computers on the Space Shuttle were AP-101s manufactured in Owego by IBM. They were called the Space Shuttle General Purpose Computers or GPCs for short. There were five GPCs on board the Space Shuttle. During launch, four of the GPCs were executing 100% redundant code programmed by IBM Houston. Each output from this “Redundant Set” was voted by hardware logic. If one of the computers came up with a different answer it was voted out by the hardware. The fifth computer was running software programmed by MIT Labs. The backup flight computer could take over if the “Redundant Set” experienced multiple failures or some other failure took out the “Redundant Set”.

There you have it, right from the source. An urban legend debunked with a mixture of curiosity, serendipity and the graciousness of some people who actually worked on NASA’s Apollo mission. Thank you so much Martha Lemasters, Kenneth Clark and James Handley!



Is There a Hidden Inspirational Message In Einstein’s Theory of Relativity?

Have you ever experienced a really profound dream – one in which you’ve stumbled upon the hidden mysteries of the universe, and one so intense that it actually woke you up in the middle of the night? Upon awakening, you think to yourself “This is it – I’ve discovered the secret! Yes, it all makes sense now!” Then you roll over and go back to sleep, and when you wake up in the morning, you’ve completely forgotten what your dream was about. I had one of those dreams a few weeks ago, but this time it happened just a few minutes before I was supposed to wake up, so I was able to remember it. It doesn’t seem as profound now as it did when I was dreaming it, but for what it’s worth, here it is…

In my dream, I uncovered a secret inspirational message contained within Einstein’s Theory of Relativity. Of course, since Einstein died in 1955, we can’t ask him if it’s true, so this will be nothing more than the whimsical nocturnal speculations of my overactive imagination.

Albert Einstein

I suspect that I was able to connect the dots because I’m a fan of Leonard Bernstein and had recently been watching his Harvard lectures. In 1973, this Harvard alumnus delivered a series of lectures at his alma mater called The Unanswered Question. In the first lecture, Musical Phonology, he told the students that the principal thing he learned from his masters at Harvard was a sense of interdisciplinary spirit, and that “the best way to know a thing is in the context of another discipline.”

It was in a similar interdisciplinary spirit that I was dreaming about something very analytical, which appeals exclusively to the left hemisphere of our brains – Einstein’s Theory of Relativity – from a decidedly right-hemisphere point of view. I was contemplating relativity from a new and unique vantage point: the self-help section of a bookstore.


Even if you don’t understand it, you are undoubtedly familiar with Einstein’s famous equation: E = mc². It states that energy (E) equals mass (m) times the speed of light (c) squared. It’s also important to know a couple of facts about the speed of light, which is 186,000 miles per second, or about 300,000 kilometres per second. Einstein stated that the speed of light is always constant, and that nothing (or at least nothing with any mass) can travel at or faster than light. I admit that it does seem strange that there could be a maximum speed for anything in the universe, but the concept of light’s maximum velocity can be illustrated in the following graph:

Energy vs Speed Graph

This graph displays speed along the x-axis (horizontally) and energy along the y-axis (vertically). The faster an object travels, the more energy is required to reach that speed. As you can see, there is a vertical asymptote at c (the speed of light). I’m sure that you already know that a vertical asymptote is a vertical line that the curve approaches but never actually touches (because its value would have to be infinite in order to reach it). In this graph, it means that it would take an infinite amount of energy to propel anything at the speed of light. That’s why nothing (with mass) can travel that fast – there just isn’t enough energy in the universe to do it.
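That blow-up near c comes from the relativistic kinetic-energy formula E = (γ − 1)mc², where γ = 1/√(1 − v²/c²). A quick Python sketch (my own illustration, not anything from Einstein) makes the asymptote tangible:

```python
import math

C = 299_792_458  # speed of light in a vacuum, m/s

def kinetic_energy(mass_kg, v):
    """Relativistic kinetic energy: E = (gamma - 1) * m * c^2."""
    gamma = 1 / math.sqrt(1 - (v / C) ** 2)
    return (gamma - 1) * mass_kg * C**2

# Energy needed to push 1 kg to various fractions of c.
# Note how the numbers explode as v approaches c.
for frac in (0.5, 0.9, 0.99, 0.999, 0.9999):
    print(f"{frac:.4f}c: {kinetic_energy(1, frac * C):.3e} J")
```

Each step closer to c costs disproportionately more energy than the last, which is exactly the shape of the curve in the graph.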

And now, the essence of the dream… was Einstein an even greater genius than we thought? While E=MC² was certainly a groundbreaking equation for physicists, it could also be interpreted as an important social statement. Einstein’s Theory of Relativity might actually be a parable – much like one of Aesop’s Fables – disguised as an equation. I had finally decoded the secret, inspirational message contained within the equation, because I (much like Leonard Bernstein’s professors) was examining it within the context of another discipline.

The 80/20 Rule and Project Management

If that graph looks familiar to you, then this might be why. If your job is at a manager’s level or higher, then you probably know about the 80/20 Rule, known formally as The Pareto Principle. It’s embraced by many different industries, and each one places their own personalized spin on it:

  • 80% of your sales will come from 20% of your clients
  • 80% of network traffic occurs during 20% of the day
  • 20% of computer code contains 80% of the errors
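The first of those claims can be made concrete with a toy example. The sales figures below are invented purely for illustration:

```python
# Toy illustration of the 80/20 pattern: a few clients dominate revenue.
sales = sorted([500, 300, 80, 40, 30, 20, 15, 10, 3, 2], reverse=True)
total = sum(sales)                          # 1000 in this made-up data set
top_20_percent = sales[: len(sales) // 5]   # the top 2 of 10 clients
share = sum(top_20_percent) / total
print(f"Top 20% of clients -> {share:.0%} of revenue")  # -> 80%
```

In this contrived data set, the top two clients account for exactly 80% of revenue; real-world distributions are rarely this tidy, but the skew is often just as dramatic.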

In project management, there is a popular maxim, paraphrased as follows: “80% of a project can be completed in 20% of the time… but it’s that final 20% that requires 80% of the project’s timeline (or even more, in many cases)”. This graph illustrates that maxim quite well.

Take a look at the graph from a Project Manager’s point of view, but relabel the x-axis as “Percent Complete” and the y-axis as “Time”. At the 80% mark, the project time requirements start to skyrocket, and soon it becomes clear that delivering every feature (flawlessly) within the initial time frame will not be possible. Compromises are inevitable. Did Einstein leave this message for Project Managers in his Theory of Relativity?

Perfectionist Personalities

We all know people who are perfectionists, and I’m sure you’ll agree that they can often be trying. Some of these folks – those who insist that others should rise to their perfectionist standards – can be annoying or even insufferable. Personally, I think that perfectionists are generally not very happy, since they have set for themselves a goal that cannot realistically be achieved, and therefore they exist in a continual state of disappointment.


In that same graph, let’s relabel the axes once again and assume that the x-axis represents our own perceived level of perfectionism, and that the y-axis represents the time, money and energy required to reach this level of perfection. Since we are all imperfect beings, targeting 100% is a pointless exercise. In fact, I would love to show this graph to a perfectionist and say “Study this graph, and then please abandon your quest for perfectionism. None of us will ever be perfect, so stop trying. As you can see, you can reach and maintain a fairly respectable level without even breaking a sweat, but as soon as you set your sights on 100%, the effort (relative to the gains) rises exponentially. The graph is speaking to you!”

Could Einstein have coded this sage and practical advice for the perfectionists in our lives into his equation?

Reinterpreting Relativity

For more than a century, Einstein’s concept of relativity has been viewed only one way. Could it also be examined within a social context? I’m going to propose that Einstein embedded a behavioural allegory in his Theory of Relativity, and that the following is his hidden personal and motivational message for all of us: What relativity really means is that you must measure yourself relative to those around you, and not on an absolute scale of perfection. Since none of us is perfect, your life is really a lot better than you realize. If you’re a perfectionist, then trying to achieve 100% perfection is merely an exercise in futility. Do the best you can, but as you can see from the graph, anything more than that will take a disproportionate amount of time, energy and money.

Einstein was certainly a genius, but I’m going to propose that he was also a cross-disciplinary visionary who purposely designed his Theory of Relativity to appeal to both hemispheres of our brain. This theory challenged Newtonian physics and also contained an inspirational message for everyone. It simply took the rest of us a century to decode this second component. Who could have guessed that analyzing a graph of the speed of light might make us a little more… enlightened?

And now, I’d like to pose what I call The Grand Unifying Question: should books about Einstein’s Theory of Relativity also be placed in the self-help section of your local bookstore?