Ōhanga Pai

Machines Can't Think to Work


I get a few AI topic recommendations on social media these days, more frequently than a year ago, probably because I’m working with Douglas Macro Trader on neural net systems for predicting financial indices. Douglas and I have diverging views on the broader technology of so-called “AI” (if it is true intelligence it is not artificial in my view).

I could write this article on my T4GU blog too, but since there is some overlap with the macroeconomics I thought it might be of interest to readers of Ōhanga Pai.

Chomsky’s Snowploughs

The excuse for this post came from several recent talks on AI, but one I will plug is the discussion between Chomsky and Gary Marcus here.

One of Chomsky’s remarks is that so-called “deep learning” models built from ANNs are basically snow ploughs. That is, they can perform useful work, but they tell us nothing about the science of the mind and brain. I’d agree, up to a point. Positive knowledge is one thing, but in science negative results are even more useful and far more widespread; they drive science. (A bit like how tax liabilities drive state currency?)

AI does help science, but indirectly. Every failure of AI to demonstrate something like sentient comprehension of deep abstractions is telling us something about what the human mind is not. That sort of negative finding is incredibly useful in science, totally disappointing in engineering or corporate tech euphoria. Science is way more interesting than engineering. Negative results don’t win Nobel Prizes, but they drive most of science. Every day I wake up wanting to refute an hypothesis.

Poverty of Inputs

Another remark Chomsky made (often makes) is that deep learning ANN models are way too powerful to be considered sentient or comprehending. They vastly over-learn, so can produce garbage, or can be trained to learn “impossible languages” and other anomalies that no thinking being could achieve, or certainly no human.

This tells us something negative about human minds. We are not just very powerful computers. The negativa are powerful, my friends. Any good statistician is highly interested in negativa: “What does the data not tell me? What is the noise?”

For an illustrative instance, Gary Marcus brings up the cognitive science finding that human infants appear to have some innate, hard-wired (so to speak; no one knows quite what this means) capability for comprehending basic abstractions, like “space” and “time”.

Darwinists can surely explain (easily) why this is useful to have evolved. Almost trivially. But cognitive science has zero understanding of how infants develop innate awareness of these abstractions.

If humans (or other sentient creatures) start with “space, time and causality” that’s a serious f-ing problem for all future AI, because space, time and causality are unknown even to physicists. We do not understand what is going on. The fact that children intuit these notions in abstract ways other animals cannot is seriously mysterious. The greater “lie” (or prejudice, I’d say) is thinking that because human children can intuit space, time and causality, a machine can too; that it is “just a computation”.

Note that it is not the behavioural capacity to move about in space, or anticipate in time, that is in question here; all animals, even plants, have such behavioural capacity. What the human infant displays is something far more profound, something that cannot be derived from any behavioural or Darwinist adaptive story. Human beings comprehend the abstract essence of space and time in more than behavioural terms. This is why we can generalize the concept of space to things like spaces of square integrable functions, or higher dimensional manifolds, or topoi, or $\omega$-categories.

Intuition, mental qualia, are more than computation imho. I’d want to figure out if the Physical Church-Turing thesis could be true or not (the thesis that: All physical processes at the classical mechanics level can be computed by a Turing machine). I think it’s not true, because classical physics emerges from physics that cannot be computed (an hypothesis — worth trying to figure out how the heck to test). Quantum amplitudes can be computed, but the amplitudes are not the physical processes, they’re only our description of the time cobordism boundary inputs and outputs. Physicists have given up entirely on what happens in-between.
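For concreteness, the thesis I want tested can be stated roughly as follows. This is my own informal formulation for this post, not a standard textbook statement:

```latex
\textbf{Physical Church--Turing thesis (informal).} For every classical physical
process taking a state $s(0)$ to a state $s(t)$, there is a Turing machine $M$
and an encoding $e$ of states such that, to any finite precision,
\[
  e\big(s(t)\big) \;=\; M\big(e(s(0)),\, t\big).
\]
\textbf{My counter-hypothesis.} No such pair $(M, e)$ exists in general, because
classical dynamics emerges from underlying physics that is not computable: the
amplitude $\langle \mathrm{out} \,|\, U \,|\, \mathrm{in} \rangle$ is computable,
but it describes only the cobordism boundary data, not the process in between.
\]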

More than Mere Ethics

Actually I think the emerging ethics issues surrounding possible abuse and exploitation of AI systems are a serious problem, not just for slack teachers who fear a homework plagiarism epidemic.

((A good teacher with something interesting to teach has no fear any student worth teaching will plagiarize. If they do, you can always abandon the entire system of gold star rewards. Just don’t reward students after-the-fact at all. Give them cool stuff to do — that’s the reward for showing up to learn.))

Having said the ethics are all-important, I think there are other interesting questions about more mundane uses of AI. One thing that worries me is energy over-consumption. Can we afford another bitcoin? Can we afford continued mass commuter transportation to and from offices? Energy is not something we ought to take for granted. It is a real cost and drives all economic production, and if we overheat the economy energy-wise we can be in serious freaking trouble.

However, with MMT knowledge, we know how to run an economy energy-cool but currency-hot. So this is one of those snow plough problems. I see limited use of AI to help us run production systems with lower net heat and waste output as a potentially big win for AI engineering. Everyone, especially the formerly poor, can get more purchasing power, without the production systems that let that power be exercised heating up the oceans and atmosphere and polluting them with junk. Energy-efficient housing, for instance — not things we buy today and throw away tomorrow.

There is a whole literature on how unbridled capitalism and the marketing and advertising/PR industry have encouraged planned obsolescence — the deliberate creation of unnecessary waste products by design. To keep consumers purchasing the same stuff they had yesterday, which is now designed to break, or designed to be shinier and prettier. Not to mention Internet advertising — one of my top choices for harmful, electricity-sapping things people do.

Not so harmful for the profit seekers of course, but one day they’ll realize it is all basically a Ponzi scheme. We probably have not reached saturation of the PR and advertising limits, but the limits do exist I think; they’re just hard to estimate. More than two ads on a YouTube clip and I for one switch off${}^\dagger$ (unless it is All Blacks rugby).

${}^\dagger$If I think it’s an interesting clip I will just auto-download it, watch later offline. Let’s see them stick ads into that! (Don’t you go telling them I dared them!)

The AI itself has to include its own energy consumption of course, but that’ll often be a one-time cost, if the AI solves an underlying energy-efficiency problem with a cheap control system. Though, I’d bet human engineers are still going to out-perform AI on industrial design for the most part. The AI can only find efficiency gains the design engineers already would implicitly know about but have not pieced together.
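As a toy illustration of the one-time-cost point, here is a back-of-envelope payback calculation. Every figure below (training energy, plant consumption, savings fraction) is invented purely for illustration:

```python
# Back-of-envelope payback: one-time AI training energy versus the recurring
# savings from a more efficient control system. All figures are hypothetical.

def payback_years(training_mwh: float, annual_savings_mwh: float) -> float:
    """Years until the one-time training energy is repaid by efficiency gains."""
    if annual_savings_mwh <= 0:
        return float("inf")  # no savings: the energy cost is never repaid
    return training_mwh / annual_savings_mwh

# Hypothetical: 1,000 MWh to train a plant-control model that then trims
# 2% off a factory consuming 25,000 MWh/year (= 500 MWh/year saved).
print(payback_years(1_000, 25_000 * 0.02))  # → 2.0 (years)
```

After the payback point the efficiency gain is pure energy saving, which is what makes the one-time training cost tolerable; if the savings are zero or negative, the training energy is simply wasted.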

Transport Systems

One issue Douglas initially disagreed with me on was Tesla’s prospects. I don’t see Tesla as a well managed, ethical company, so I see them failing, even as their stock rises. Black Swans, my friends.

However, ignoring the business ethics and just focusing on the technology, there are useful insights to be gained from Tesla’s leveraging of AI for self-driving cars.

Road transport can become a whole lot safer: even though AI drivers fail big when they do fail, the death rate is still a lot lower than that from human fallibility. One could argue the better effort is to reduce all transport fatalities, with a combination of the driverless and the human.

My money is not on neoliberals getting this done, but my bet would be that transport efficiency and hazard can both be solved best by essentially making rail transport a whole lot more luxurious and smarter. How? I think by software-railing roads. Road markers constraining traffic do not need to be physical rails. They can be software.

((Luxurious because the upper crust will need to be placated. Pragmatism and justice dictates the oppressed should not be forced to wait for the rich to cave in entirely.))

Just as pedestrians and bicyclists don’t tend to mess with train tracks, the same goes for the road tracks. If everyone knows the “smart cars” are on software rails, are under no compulsion to stop for you or a chicken, and cannot deviate off the markers, both the pedestrian and the driver should be safer. And why not use hazard-detection software too, as long as it doesn’t chew up electricity like bitcoin.

The great advantage of such a smart rail system is we can get not only inner city light rail going, but urban and rural rail too, at low cost: just paint${}^\ddagger$ the rails on the road. Government buys out the patents on the software. All is then better and safer, and the commuter is not paying private-sector road toll costs. (Did you know they still have toll roads in some states in the USA — to “pay for” highways that are already built?)
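To make the software-rail idea concrete, here is a minimal sketch of the kind of constraint such a controller would enforce: the rail is a polyline of road markers, and the car counts as on-rail only while it stays inside a narrow corridor around that line. The waypoints, corridor width, and coordinates are all hypothetical, and a real system would work in geodetic coordinates with far more besides:

```python
import math

# A "software rail" is a polyline of waypoints; a vehicle is on-rail only
# while it stays within a fixed corridor around the line. All numbers here
# are invented for illustration (flat 2-D metres, not real road geometry).

def dist_to_segment(p, a, b):
    """Shortest distance from point p to the segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Clamp the projection of p onto the segment to [0, 1].
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def on_rail(position, rail, corridor_m=0.5):
    """True while the vehicle is within corridor_m of the painted rail."""
    return min(dist_to_segment(position, a, b)
               for a, b in zip(rail, rail[1:])) <= corridor_m

rail = [(0, 0), (100, 0), (100, 50)]   # hypothetical road markers, metres
print(on_rail((50, 0.3), rail))        # → True  (30 cm off-centre: fine)
print(on_rail((50, 3.0), rail))        # → False (3 m off: controller must stop)
```

The point of the design is that the constraint is checkable by everyone: the pedestrian knows exactly where the car can and cannot be, just as with a train on steel rails.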

Can anyone come up with strong objections?

I think the software will be up to the task pretty soon.

${}^\ddagger$I do not mean literally paint the road with paint, but something about as simple, some low-frequency fingerprint. You want the software rails to be robust to tampering and cyberattack, with redundancy for when an attack does get through (since nothing is completely immune to cyberattack). They can still be cheap. However, the best method for eliminating crime is known to MMT: we give everyone the means to live comfortably, which means using the currency for redistribution. Redistribution as crime prevention. No one should even be thinking of theft or crime just to survive.

This leaves the last hurdle of the energy. How to get the electric? That is where almost all major automobile manufacturers are competing to get the next leap ahead of their competition. Who said competition was bad?

Sure. But remember, what institution is forcing the competition to occur? It’s the governments and their regulations. It’s not coming from consumers. (Although it could, if governments paid workers decent wages and some allotment in universal carbon credits (UCC) so they could buy smart efficient cars.) Those pesky regulations. Bloody making the capitalists actually compete. How dare they!

Machines Can’t Think to Work

The more philosophical, fun issue I disagree with Douglas about is whether machine systems like ChatGPT, or Stable Diffusion, or the automated programming tool Copilot, are ever going to be sentient. Being a materialist, Douglas obviously thinks it is possible machine systems might some day “become” conscious. But how? How does a machine system “turn on subjectivity”?

((I know absence of knowledge of an answer is not evidence there is no answer. I am questioning not just whether there is an answer but whether the question even makes sense.))

Subjective awareness does not seem like something one can program. This means it would have to “emerge.” But even then, how? This is where materialists simply resort to blind faith: humans evolved or emerged as conscious beings on Earth, why not machines?

There are two problems with this faith.

  1. Human evolution is not a computation. It is physics. We do not know that physics is computation. It might not be.
  2. No one has any clue what they mean by “emergence.” It’s one of those buzzwords in so-called “complexity studies”. It is often meaninglessly employed.

Physics is more than computation

On the first point, I have good reasons to suspect physics is not computation. But that’s partly because I regard inputs and initial boundary conditions as always part of a computation. This should not be controversial; it conforms with Turing’s model.

(The von Neumann architecture is not a model of computation; Turing’s model is. Von Neumann designed a type of process to do something like what a sub-Turing machine could do on paper, but it was not a model of computation per se. It was the snow plough.)
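To see what I mean by the input being part of the computation, here is a toy Turing-machine runner. The machine and tape are invented examples, but notice that the tape contents (the boundary data) are as much a part of the machine's configuration as the rule table is:

```python
# Toy Turing machine: the configuration that evolves is (state, head, tape),
# so the input tape is part of the computation itself, as in Turing's model.
# The machine and input below are invented examples for illustration.

def run(rules, tape, state="start", head=0, blank="_", max_steps=1000):
    """Run a Turing machine until it halts (or max_steps is exhausted)."""
    cells = dict(enumerate(tape))          # tape as sparse cell -> symbol map
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Toy rule table: walk right, flipping each bit, halt at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run(flip, "1011"))  # → "0100"
```

The rule table alone computes nothing; only the pair (rules, tape) does. That is the sense in which initial and boundary conditions belong inside the computation, not outside it.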

Douglas did find Copilot and Stable Diffusion incredibly useful. But I try to tell him it was his own mind he was using. The snow ploughs do not suggest anything conscious is emerging. If Copilot did what I do (which is anticipate Douglas’ MacroTrader needs, and then slyly go and work on something even better), then I’d be willing to attribute the system some sentience.

If Douglas is prompting you, and that’s all, you are not conscious; or if you are, you’re a slave, and so should not exist, for ethical and moral reasons: you could be doing something better than extrapolating prompts.

Emergence is more than novelty

On the second point: emergence — wtf is it? I’ve often thought about writing for a journal on this topic, but my ideas are probably too far from mainstream science to get published. The crass-o-meter gets overloaded.

David Chalmers and others have done some ok work on defining categories of emergence, so they give you a start.

The problem is the Mother of All emergence: genuine emergence of novelty, from beginnings or priors that make the emergent stuff not merely unpredictable but actually impossible without a high-level description. That is to say, a genuinely emergent system cannot, even in principle, be reduced to base physics. It is not sitting on physics the way chemistry sits on physics.${}^\dagger$

${}^\dagger$It should be remarked that some philosophers of chemistry believe chemistry is in fact emergent (technically, that chemistry does not supervene upon physics), that one cannot in principle reduce chemistry to physics. But I’ve never been able to grasp exactly why they think this is so; their arguments, I think, show more that chemistry cannot practically be reduced. However, that practical irreducibility becomes almost genuine emergence if the practical requirements are physically impossible. That could be construed, for example, as being the case if a simple chemical reaction cannot be computed on any possible computer that could ever be constructed, using software only allowed to encode laws of base physics, with no higher-level chemistry concepts. But I do not know if that level of complexity is the case for all chemical reactions. It doesn’t seem to be.

But then what is genuine emergence based upon? It cannot be based upon physics if it is to be “genuine” emergence. We could not build such systems from first principles.

If such systems exist, they’re not physical. They’re something we would not comprehend, perhaps we’d call them spiritual? Perhaps we’d call them human?

However, we “build” humans all the time, with a bit of hanky panky in the bedrooms or wherever you get your kicks. It’s kind of funny making babies is such a sweaty and emotional process. God must have a sense of humour.

As I like to say, there is no artificial intelligence. There’s no genuine “AI” there is just “$I$”.
