AI is all the rage nowadays. The WEF are wetting their knickers over the potential of machines to read human thoughts and emotions and control human behaviour, whilst climastrologists are turning to AI to make sense of the mess of global warming entrails and soggy tea leaves, hoping it will tell them the most likely date when man-made Thermageddon (2C warming, or 1.5C, according to the ‘settled science’) becomes a terrifying reality.
We have become enthralled by the technical wonders of our age - to the exclusion of all else. Whether it's dodgy epidemiology models or dodgy climate predictions, the algorithm has become the Great White Hope of the failing West - as evinced by the WEF.
There are good reasons to believe that we're heading for a period of global cooling over the next couple of decades, and if that turns out to be right, how will the catastrophists handle it?
As someone deeply and daily involved in the AI industry, I can tell you it's all bollocks. Skynet is still a long way off. For managing huge, relatively simple datasets and sussing out potentially meaningful insights from patterns, AI is invaluable. But it's still GIGO, just on a massive scale now.
We're a long way from anything reading human thoughts when we don't have even a rudimentary understanding of consciousness.
Anyone who claims that a purported computer game – er, sorry – climate simulation is capable of making meaningful predictions over any significant time period is either a charlatan or a computer salesman. The climate is an effectively infinitely large, open-ended, non-linear, feedback-driven chaotic system – one where we don’t know all the feedbacks, and even for the ones we do know, we are unsure of the signs of some critical ones – and it is therefore subject to, inter alia, extreme sensitivity to initial conditions, strange attractors and bifurcation.
Ironically, the first person to point this out was Edward Lorenz – a climate scientist.
Lorenz’s early insights marked the beginning of a new field of study that impacted not just the field of mathematics but virtually every branch of science–biological, physical and social. In meteorology, it led to the conclusion that it may be fundamentally impossible to predict weather beyond two or three weeks with a reasonable degree of accuracy.
Some scientists have since asserted that the 20th century will be remembered for three scientific revolutions–relativity, quantum mechanics and chaos.
You can add as much computing power as you like; all that does is produce the wrong answer faster. But for some climate “scientists” I suppose it pays the mortgage…
Models are very often better reflections of the modeler than they are of that which is modeled.
‘AI’ isn’t going to predict the climate: non-linear, chaotic systems are more sensitive to initial conditions than current technology is capable of measuring those initial conditions.
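A minimal sketch of that sensitivity, using Lorenz’s own classic three-variable convection model with the standard textbook parameters (the `lorenz`, `rk4_step` and `separation` names are just for illustration). A perturbation of one part in a billion in a single coordinate – far below what any real instrument could resolve – is allowed to evolve alongside the unperturbed trajectory:

```python
# Sensitive dependence on initial conditions in the Lorenz (1963) system,
# integrated with a plain fixed-step RK4 scheme. No external libraries.

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz equations (standard parameters)."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(f, state, dt):
    """One classical Runge-Kutta 4 step for a tuple-valued ODE state."""
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def separation(t_end, dt=0.001, perturbation=1e-9):
    """Distance at time t_end between two runs whose initial conditions
    differ by `perturbation` in the x coordinate only."""
    a = (1.0, 1.0, 1.0)
    b = (1.0 + perturbation, 1.0, 1.0)  # "measurement error" in x
    for _ in range(int(t_end / dt)):
        a = rk4_step(lorenz, a, dt)
        b = rk4_step(lorenz, b, dt)
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

# The gap grows roughly exponentially until it saturates at the size of
# the attractor (exact numbers depend on dt and floating-point details):
for t in (5, 15, 30):
    print(t, separation(t))
```

Shrinking `dt` tightens the numerics but does nothing to stop the divergence: past a few dozen time units the two trajectories bear no relation to each other, which is precisely the point about initial-condition uncertainty.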
Current ‘climate models’ are to a greater or lesser degree, nothing more than a technique for adding a patina of legitimacy to the policy preferences (or purchased conclusions) of the modelers. There’s nothing about ‘AI’ that improves on that in any qualitative fashion. It’s still garbage in, garbage out.
The understanding of the functioning of the brain in relation to the mind is even more crude and childlike than the understanding of antibodies and immunity.
I think ‘AI’ is probably valuable research because it will help develop new statistical techniques, and may help automate some mundane tasks, but if the state of the art is reflected in the Musk robot, which had to be carried onto the stage and which had zero interaction with its surroundings, or in my robot vacuum cleaner, which is confused by the rug being at a slight angle to the wall, then ‘AI’ is vastly oversold.
http://news.mit.edu/2008/obit-lorenz-0416
If AI predicts global warming, then it's been programmed to predict it.
Tell me, how much sunspot and ice core data did they feed it? None, I'd bet money on it.
So if it's going to happen anyway, we can stop all the net zero nonsense now, because it makes no difference!