The Robots are Coming!

Nassim Taleb literally wrote the book on Black Swan events. The examples he uses are things like 9/11 and the 2008 crash. If you recall, lots of people had long been worried about a major terrorist attack on American soil before 9/11, and I believe a guy named Gary Shilling warned of a major housing bubble in 2006.

Black Swans are improbable and unpredictable in terms of the details: what specifically happens and when. But that doesn’t mean one cannot expect them in a more general sense and prepare for them. It is why we have a vice president.

Tipping points are generally unpredictable and can be improbable depending on one’s biases and assumptions. If one happens, it’s a black swan.


I’m sorry, but I disagree with your view. From the CFI website: “A black swan event, a phrase commonly used in the world of [finance] is an extremely negative event or occurrence that is impossibly difficult to predict. In other words, black swan events are events that are unexpected and unknowable. The term was popularized by former Wall Street trader Nassim Nicholas Taleb, who wrote about the concept in his 2001 book [Fooled by Randomness].” Or you can just review Wikipedia. Neither 9/11 nor 2008 was a Black Swan event despite their significance. And just like the major collapse of the world’s climate and the consequent extinction of life on the planet, it will not be a Black or Blue Swan event!



No need to apologize but note that it is not simply my view:

But the unpredictable nature of a black swan event is often the reason for its powerful effect. The 9/11 terrorist attacks, the 2008 financial crisis, and the COVID-19 pandemic are commonly cited examples of black swan events. What Is a Black Swan Event? | The Motley Fool

Further down that CFI website are their examples of Black Swan events that include 9/11 and the 2008 crisis. Black Swan Event - Definition, Examples, Attributes

You are certainly welcome to use your own definition of Black Swan, but it might be helpful if you gave what you think is a real life example of such an event.


What is or is not a Black Swan actually comes down to how ignorant, or how informed and thoughtful, people are. I know people who, based on their spending habits, find foreclosures on debt they themselves incurred to be (“Oh poor me, how could this possibly be happening!!!”) a black swan from nowhere!

d fb


I would put Chernobyl and Fukushima in that category, even though we have had nuclear incidents and tsunamis before. But nobody saw either of these coming beforehand; obviously they weren’t even contemplated, or the plants wouldn’t have been designed in a way that allowed them to happen.

I would put Covid-19 and its aftermath on my list.

I would not put your garden-variety stock fraud, even Enron, on the list, but Bernie Madoff might make it simply because it was so unexpected, the results so wide-ranging, and the after-the-fact analysis too facile to have predicted it in advance, even as “everybody should have known.”

There have been bank panics and market crashes that I would call “black swan events” (again: unforeseeable, huge impact, armchair quarterbacking after), but most are not. I would not put 2008 in that category; heck, I got out of the way of that train myself. Neither would I put the implosion of the dot-com boom there, as it was obvious (to some) before and to everyone afterwards.

I might be drawing the line too finely for some; a black swan event doesn’t have to have “never happened before in history”, it just has to be “completely unexpected in the current context”. That’s why a pandemic qualifies, even though there have been plenty of them before.

I hope that helps, but I recognize that it probably doesn’t :wink:


Oh, it’s just that you suggested that the nature of the AI brain that “evolves” as we develop the technology might end up being more human if the bodies are more human-shaped. That just seemed wrong to me, because my understanding is that the AI brains aren’t evolved in the bodies. Unlike organic life forms, they’re developed in massive disembodied supercomputers. Only after they’ve been developed are they installed into actual bodies, which makes it impossible for the shape of the body to feed back directly into whether the brains evolve in a more human-like way.


According to the book, a Black Swan is a surprise to the observer. I don’t think many people are going to be surprised by AI.

It depends on how the system is set up, specifically what information the supercomputer is receiving. If all the inputs the disembodied AI receives come from humanoid robots, then for all intents and purposes relative to evolving cognition, that AI has a humanoid body form.

In other words, it doesn’t matter where the AI “brain” is located. What matters is that the AI brain is connected to humanoid bots such that it experiences what the bots do and evolves based on that experience.

If instead the disembodied AI is connected to bots shaped like a spider, the experiences driving the evolution of that AI will be based on a spider form factor.


This perspective is as good as any, IMO. Like most things, it all starts getting arbitrary after a while. The main point of the concept (to me) is that the unexpected happens more frequently than most expect, so financial institutions (as well as financial planning) should make resiliency as much a priority as profitability.


I would love to respond, but I have absolutely no idea what you are talking about. I admit to being old, with a barely functioning brain that might not have ever been particularly bright; but what connection does this have with something being a ‘black swan event’ or not? And what does “in the future perfect way” mean?

Apologies for explaining my point of view with several concepts at once. I could have probably clarified a bit to assist understanding.

I used the black hole as a concept because, despite considerable mental horsepower devoted to the topic, we really DO NOT UNDERSTAND what is on the other side of the event horizon.

This is very similar in concept to a black swan event: there is an event horizon, a few specialists who really study the topic can observe that something is going on, and there are obvious signs of the phenomenon present (for both black holes and black swans).

There are further similarities in that we can study the event horizon from information we have on hand:

Black hole: physics-based modeling of black holes, gravitational behavior in the vicinity of the black hole, radiation emission streams from black holes (x-ray, infrared, etc.), gravitational lensing around the event horizon.

HOWEVER: None of those really describe what occurs on the other side of the singularity.

AGI and robotic androids: current state of compute; current form factors for actuators, packaging, and control; modeling state and recent successes for neural net algorithms. All of this is information we have today indicating that something is possible.

This thread is so long because people are passionately trying to predict something that will be predicted incrementally (weeks/months) along the way, but that will not likely resemble any prediction further into the future (years). This is the AGI singularity.

Both of these have black swan implications because:

  1. Information is not generally known.
  2. Specialists are learning incrementally as they go (but cannot see beyond the near term; vision is not fact, theory is not law).
  3. General public behavior may be dramatically different based on retrospective study of these events.

Future perfect is the grammatical tense for describing an event as already completed at some future point (“by then, X will have happened”); writing in it predicts how an event will go. This is the reason for including that in the post.


Thanks for asking the questions.
Your questions and @GDavenport’s excellent explanation taught me a different way to understand, think about, and observe black holes and black swan events.

:+1:Well done


Just wanted to chime in and say your post really makes me miss being able to mark a poster as one of our favorites à la the old board.

Who may have already had you marked as a favorite but now will never know!


It would be cool to get a robot that does all the housework for me and also cooks meals. Well, one single robot, not the countless small appliances I have.


Don’t mean to be dense, but I’m not sure this helps. Given these criteria, it seems that any highly impactful event that is unexpected by most is a Black Swan, from Watergate to 9/11 to the pandemic.

Well, you’re right!

Lumpers and splitters:

Lumpers vs. Splitters: Economists as Lumpers; Psychologists as Splitters — Confessions of a Supply-Side Liberal

Lumpers: People who generally place all variations into one bucket that looks sort of close.

Splitters: People who seek to categorize everything into a myriad of colors, shapes, lengths, columns, rows and… even the Dewey Decimal System.

I would rehash my 3 points above by saying that all black swan events have these characteristics, but not all events that have these characteristics are black swans.

(e.g. a square is a type of rectangle, but a rectangle is not a type of square)


Oh, I get that. No doubt there are nuances that help people make finer distinctions. I think the audience understands and accepts that. The problem is that there is no agreement on what those nuances should be. Swan color seems disturbingly subjective and arbitrary.

Here’s an example that might help clarify. We’ve long understood that a lot of older coastal cities are under increasing threat of flooding from weather events. We even prepare for such events by building levees, funding FEMA, planning evacuation strategies, and mandating flood insurance. We just can’t predict when or where such events will occur until just before they happen.

So was Katrina a Black Swan?

Interesting question.

Let’s see:

  1. Knowable possible outcome, with attendant fat-tail probability, but not widely understood* - at all.
  2. Unknown impact after the event (true risk unknown), but with models showing some aspects in enough detail to attempt a risk plan - inadequately.
  3. Fundamental change in virtually every aspect of behavior and value for the system impacted.

Seems like it meets the criteria.

*The general public has actionable knowledge AND sufficient understanding to take the implied actions.

Nassim Taleb describes the black swan with respect to a system. At a high level, we should not try to gauge the coincidental or causal changes to other systems as a feature (optional or mandatory), even though these might be an individual outcome from a long-tail risk event (black swan).

The very notion that long-tail events can occur implies that they are part of the model.

The construct of them being described as “fat tail” indicates that they are not being modeled correctly (not truly a normal distribution).


Clearly, the trend to the negative is not normally distributed, and the curve has a natural limit of some kind at about value 25.

This is NOT a normal distribution and cannot really be called a classic bell curve. If the true system behaves this way but is modeled as a normal distribution, you will have attendant fat-tail risks.
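The mismatch between a normal model and a fat-tailed reality is easy to demonstrate numerically. Here is a minimal pure-Python sketch (the Student-t distribution with 3 degrees of freedom and the 4-sigma threshold are my own illustrative choices, not anything from the posts above) comparing how often “extreme” moves occur under each model:

```python
import random

random.seed(42)  # reproducible illustration

def sample_t(df):
    """One Student-t draw: a standard normal divided by sqrt(chi-square/df)."""
    z = random.gauss(0.0, 1.0)
    v = random.gammavariate(df / 2.0, 2.0)  # chi-square with `df` degrees of freedom
    return z / (v / df) ** 0.5

N = 200_000
thresh = 4.0  # a "4-sigma" move

# Frequency of extreme moves under a normal model vs. a fat-tailed one.
normal_tail = sum(abs(random.gauss(0.0, 1.0)) > thresh for _ in range(N)) / N
fat_tail = sum(abs(sample_t(3)) > thresh for _ in range(N)) / N

print(f"P(|X| > 4) under the normal model:  {normal_tail:.5f}")
print(f"P(|X| > 4) under Student-t (df=3):  {fat_tail:.5f}")
```

The fat-tailed model assigns orders of magnitude more probability to the extreme moves; a risk plan built on the normal model would treat those moves as essentially impossible.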

Turning this back to robots and AI and AGI and form factors and and and.

The evolution of a species follows a fractal pattern.

Attempts to speculate on one leg of a fractal belie the inherent increasing entropy of the system. Each step begets options. Each answer begets 2 (or more!) questions. (Incremental knowledge supports only a near-term view of the path to singularity - or form variety.)

It seems that one group here is climbing the pyramid to find the capstone, speculating on the distance to it and the shape (form) within.

The other group is climbing out of an inverted pyramid, acknowledging that there are many paths and many destinations.

(I’m sure some are vacillating between one structure and the other!)


The swan song for lip sync-ers.


Don’t think it meets Taleb’s criteria. His three attributes of a black swan are:

First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme ‘impact’. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.

Katrina would seem to fail with respect to the first point in that we have had several historical examples of devastating hurricanes, so we certainly knew from experience that a Katrina-like event was possible. It also fails with respect to point three in that no one is suggesting in hindsight that the occurrence and path of the hurricane should have been better predicted.
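Taleb’s three attributes read as a conjunctive test: an event must satisfy all of them to qualify. A toy sketch of that logic (the criterion names and the Katrina assessment below just encode the argument above, not anything from Taleb’s text):

```python
# An event qualifies as a black swan only if ALL three Taleb attributes hold.
TALEB_CRITERIA = (
    "outlier beyond regular expectations",
    "extreme impact",
    "retrospectively rationalized as predictable",
)

def is_black_swan(assessment):
    """`assessment` maps each criterion to True/False for one event."""
    return all(assessment[c] for c in TALEB_CRITERIA)

# Katrina, per the argument above: huge impact, but prior hurricanes made a
# Katrina-like event foreseeable, and no one claims in hindsight that its
# occurrence and path should have been predicted.
katrina = {
    "outlier beyond regular expectations": False,
    "extreme impact": True,
    "retrospectively rationalized as predictable": False,
}

print(is_black_swan(katrina))  # prints False
```

Failing even one criterion (here, the first and third) is enough to disqualify an event, which is why “huge and surprising to many” alone doesn’t make a black swan.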

Sometimes it does. Sometimes it occurs incrementally, as Darwin noted with beak morphology in what has become known as “Darwin’s finches”.

I think invoking complexity theory is a bridge too far and a move in the wrong direction. Something chaotic is not predictable, but if we have experienced it in the past, like a hurricane or earthquake, we know it is possible and have some idea of the consequences. A Black Swan event is something that happens that we didn’t know was possible and whose consequences we don’t know. An example would be the discovery of antibiotics.


I agree with you that Katrina was probably not a statistical outlier by the event definition.

The RISK profile was not modeled correctly, which would make the event plus the damage fall outside the realm of the expected.

The fat tail came from the outsized risk contribution more so than from the storm’s strength and location.

Your discussion of Darwin’s finches is too high-level a view of the problem. Of course he EVENTUALLY noticed the finches oscillating between a couple of prominent features based on their environment. This is a closed system with a variable but reinforcing tendency toward a central median.

In a sense, it is a textbook demonstration of median/mean drift over time.

The fractal pattern’s chaos was clipped in each generation by the finches not conforming to the optimum conditions.

Increase the variation in the inputs, wait long enough for generations to occur, and you would end up with big finches and small finches, finches with long beaks and finches with no beaks…

and other species