Sunday 8 February 2009

Never Say Never

I should imagine a fair number of financial experts who would have said the present turmoil could never happen, or was at least as unlikely as a lottery win, are now crying into their flat champagne. Similarly, I bumped into a pilot yesterday who said the likelihood of a plane being struck by birds at take-off and then landing safely in the Hudson River had so many 'impossibilities' attached to it that it should never happen.
But they did.

So when a few boffins tell us that the risk of a black hole opening to gobble up the Swiss Alps when the LHC (Large Hadron Collider) is switched on is no more than 1 in a billion, how sure are they that they have calculated the right numbers?

The Possibility of The Impossible

If you want a cerebral read, go to New Scientist and read Mark Buchanan's essay on this subject, which is scary. I will attempt to put it in terms that I can understand, and hopefully a monkey can then interpret it too. Firstly, we are a clever species, you know. Better than apes and dolphins, we have a large capacity for deductive reasoning, although you would not always think so, because we also have the capacity to deceive ourselves over the significance of our conclusions.

The financial chaos we are in is a prime example. At loggerheads with the views of Taleb et al., I don't think it was a Black Swan phenomenon that popped up randomly to wreck the system; it was poor deductive reasoning by the combined financial army of gurus, who thought their schemes were bullet-proof when a cursory conversation with an average chap in the pub would have revealed several flaws in their thinking. That, and their judgement was clouded by greed.

The message here is that just because sophisticated people make sophisticated arguments about sophisticated systems, it doesn't mean they are right. Just like the paranoids who reckon that just because they are paranoid, it doesn't mean people are not out to get them. Well, it isn't quite like that, but I like the idea.

So when some chap makes a seminal study of the risks of the LHC creating a black hole small but still deadly enough to kill us, and tells us it's virtually impossible, we might ask just how impossible is impossible?

Do We Know What We Are Doing?

There is no real way to test the theory without switching the thing on, at which point it may be too late, as some pixelated monster from a Nintendo game may start chomping its way around the Earth's crust and gobble us up. Then we would look pretty silly and dead.

Well, let me stun you with science. You see, boffins invoke quantum chromodynamics, which is nothing to do with making a multi-coloured pizza but helps define the conditions under which tiny black holes could form. If you combine this with our vast knowledge (?) of near-Earth high-energy collisions due to cosmic rays, you can calculate the risk of a 'dangerous event', which could be defined as merely Earth-extinction, as being a paltry 1 in a billion. This is very comforting.
Or is it?

Now a chap called Ord has done a study on this, and he concludes that this small risk is fine if, and only if, the boffins who came up with it made the correct assumptions in the first place. Even better, he tells us there is a calculation to see how wrong they could be, which goes something along the lines of: take the probability of the event happening if the argument is right and multiply it by the probability that the argument is right, then add the chance that the event will happen if the argument is wrong multiplied by the probability of the argument being wrong.

With me so far? Well, it doesn't matter much if the value of X is high, like 50% or even 1%. But darn me, if X is very small, like 10 to the power of minus 20 or so, and the chance the argument is wrong is 1 in a billion, then we should be quite scared, as the first number becomes meaningless.
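
To make that arithmetic concrete, here is a minimal Python sketch of that sort of sum, using the illustrative numbers from this post. The 50:50 figure for the chance of disaster if the argument turns out to be flawed is purely my own made-up assumption, not anything from Ord.

```python
# A rough sketch of the Ord-style sum: chance of the event if the argument
# is right, weighted by the chance it is right, plus the chance of the event
# if the argument is wrong, weighted by the chance it is wrong.

def total_risk(p_event_if_right, p_argument_wrong, p_event_if_wrong):
    p_argument_right = 1 - p_argument_wrong
    return (p_event_if_right * p_argument_right
            + p_event_if_wrong * p_argument_wrong)

# If the estimate itself is large-ish, a one-in-a-billion flaw barely matters.
print(total_risk(0.01, 1e-9, 0.5))   # ~0.01, dominated by the estimate

# If the estimate is tiny (10^-20) and the argument is wrong one time in a
# billion, the flaw term swamps the estimate completely.
print(total_risk(1e-20, 1e-9, 0.5))  # ~5e-10, nothing like 10^-20
```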

Eh?

Put in terms we can understand - it means that if the probability estimate given by the argument is dwarfed by the chance of the argument being wrong, then run and take cover, because the estimate they come up with is likely to be highly suspect.

What they are trying to say is that the more suspect your logic, the more likely something is to happen, even if you think the possibility is small. You see, in comparison to 10 to the minus 20, 1 in a billion is a huge chance. What it does mean is that if you are going to make very small probability predictions, you had better be very, very, very sure of your facts.

When I studied for my sciences degree, everyone was obsessed with margins of error in calculations, but we rarely operated at more than plus or minus a few percent because we knew our measuring kit was pretty accurate. Here we are playing with incredibly small numbers, so the difference between 10 to the power of -20 and 10 to the power of -18 is a factor of 100, and that's the issue.
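
For what it's worth, a two-line comparison shows why a small slip matters so much more here than it ever did in the lab:

```python
# A lab measurement out by a few percent shifts the answer by a factor of ~1.03.
print(1.03 / 1.00)    # 1.03

# Being out by just two in the exponent shifts the answer by a factor of 100.
print(1e-18 / 1e-20)  # 100.0
```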

The LHC Argument

Now the boffins point out that the energy densities frequently created by cosmic rays colliding with particles in the Earth's atmosphere are comparable to those in the LHC - so if such collisions were truly dangerous, a big enough one would already have happened and killed us all. That sounds very reassuring. But is it?

Here is a scary bit about science. The boffin who looked at this risk also looked at the proportion of scientific papers which proved to be wrong and concluded that 1 in 10,000 were wrong. Sound small? Remember, the conclusion was that the probability of a nasty event at the LHC was 10 to the power of -9. Worse still, he concluded that when you account for the fact that the journal he chose covered only top-ranking papers, the real error rate was likely to be 1 in 1,000 across all papers published. Even if it were a million times smaller, it would still be a cause for concern given that interesting calculation.
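
Plugging those flaw rates into the same kind of sum makes the point. Again, the figure I use for the chance of disaster if the safety argument is flawed is a made-up illustration of my own, not something taken from the papers.

```python
# The same Ord-style sum, with the paper-flaw rates quoted above.
p_if_argument_right = 1e-9   # the 1-in-a-billion LHC estimate
p_if_argument_wrong = 1e-3   # hypothetical figure for what a flawed case might allow

for flaw_rate in (1e-3, 1e-4):   # all papers vs top-ranking journals
    flaw_term = flaw_rate * p_if_argument_wrong
    total = p_if_argument_right * (1 - flaw_rate) + flaw_term
    print(f"flaw rate {flaw_rate:g}: total ~ {total:.1e} "
          f"(the flaw term {flaw_term:.1e} dwarfs the stated {p_if_argument_right:.1e})")
```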

In other words, we have a nasty record of getting our predictions wrong.

And when you think of it, people like Newton, Einstein and Darwin, whom we revere so much, have subsequently been shown to be not exactly right about everything they said. And this invokes the Donald Rumsfeldism of 'known unknowns' and 'unknown unknowns' - if you don't know everything that underpins your argument at the outset, then you are building in a margin of error; Newton did not know about quantum mechanics and Darwin knew nothing of DNA.

It's Impossible To Prove The Impossible

Here is the ironic thing - it's pretty much true that, despite our longing for certainty, we can never definitively prove anything, as we can never be certain we have not got something wrong in the process - and even if we ask others to check, they can make mistakes too.

So after all that nasty maths at school, we can be certain of one thing and that is that mathematical certainty is not certain. But we can't even be certain of that.

The end message is, when they switch that LHC thing on, hide under the stairs and cover your ears. You can never be too sure.........
