Statistical Knowing


More criss-crossing of my philosophy of mind with the physics (over at T4GU). Although I try to keep the economics implications here at Ōhanga Pai and the physics stuff over at T4GU, today I will hedge on where my readers go and point them here. This is a special post where I start to record my staking of positions on what the recent artificial intelligence LLM models can and cannot do.

I rely on experts to help figure out how the models work, but also on YouTubers for breakdowns of what real users can do with the LLMs. Today I will start with the channel “AI Explained” and the clip here. I know there are plenty more amazing uses chatGPT is being put to, especially functional chaining, so I want to mention that too; in fact I will start with it.

I have started expanding on this topic over at T4GU here, where I begin a blog series on ways to test for subjective consciousness.

AI Chains

Functional chaining is just a more algorithmic way of combining tools. When you can manipulate the output of one program and feed it into another, you have a whole ecosystem of programs that can get tasks done that no one program can accomplish on its own.
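To make the idea concrete, here is a minimal sketch in Python. The step functions (fetch_text, summarise, translate) are made-up stand-ins, not any particular library’s API; the point is only that each step does one small thing and hands its output to the next.

```python
# A minimal sketch of functional chaining: each step is a small, single-purpose
# function, and the output of one becomes the input of the next.
# The step names are illustrative stand-ins, not any real toolkit's API.

def fetch_text(source: str) -> str:
    # Pretend this pulls raw text from somewhere (a file, a URL, a transcript).
    return f"raw text from {source}"

def summarise(text: str) -> str:
    # Pretend this calls a summarising model; here it just truncates.
    return text[:40]

def translate(text: str) -> str:
    # Pretend this calls a translation model; here it just tags the text.
    return f"[translated] {text}"

def chain(value, *steps):
    # Feed the output of each step into the next, pipe-style.
    for step in steps:
        value = step(value)
    return value

print(chain("blog-post.txt", fetch_text, summarise, translate))
```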

Frickin’ commies, these AI programs.

UNIX programmers in the 1970s figured out the power of this methodology and built an operating-system facility called the pipe to make it dead simple for *nix programmers to chain simple programs together to do powerful overall tasks. They were partly forced into it by the limited memory and disk space of the time, but it was also an entire philosophy of computing:

Do one thing well.

Complexity can breed complexity. But simplicity breeds complexity robustly. It is pretty cool that the pipe obeys this rule and is also the simplest tool to use: you only need to type a vertical bar “|”.
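For readers who have not met pipes, here is a minimal sketch of the same chaining driven from Python. It assumes a Unix-like system with ls, sort, and head available, and is equivalent to typing `ls | sort | head -n 3` at a shell prompt.

```python
# Chain three single-purpose programs, pipe-style: the stdout of each
# process becomes the stdin of the next.
import subprocess

ls = subprocess.Popen(["ls"], stdout=subprocess.PIPE)
sort = subprocess.Popen(["sort"], stdin=ls.stdout, stdout=subprocess.PIPE)
head = subprocess.Popen(["head", "-n", "3"], stdin=sort.stdout, stdout=subprocess.PIPE)

ls.stdout.close()    # let ls receive SIGPIPE if sort exits early
sort.stdout.close()  # likewise for sort if head exits early

print(head.communicate()[0].decode())  # first three directory entries, sorted
```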

Every program should obey this rule, at least morally. Monolithic programs that do many things have no guarantee of doing any one of them well, although when they run error-free they can do a super complex thing pretty well. This is almost what the current neural-net AI models are doing. They bypass the *nix philosophy and brute-force their way through, barraging the CPUs and GPUs of the server machines (or your laptop) with instructions that carry out massive statistical analysis.

There is no “thinking” or sentience involved here; it is not the same as human perception. The AI language models and the speech and image recognition programs are not designed to operate like humans; they are our antithesis.

This is a bold claim and runs against mainstream AI research politics, where the paradigm is that humans just do the same statistical search as the machines.

The clearest way to see why the AI nerds are wrong is to look at the way human cognition evolved and develops, in our ancestors and in our children. This type of cognition is highly symbolic and platonic, and it is effective under extreme poverty of inputs. It is not statistical in any sense, except when people think about statistics. Our brains are not using statistical processes that operate on large stores of data.

In fact, neuroscientists and psychologists have no idea how brains form minds; it is still a complete mystery.

Our brains seem to act far more holistically, building mental models almost on the fly, with very little relevant information, while swamped by loads of sensory noise. The AI systems are the opposite: they need highly clean, curated data to function well. We humans have no such needs. We are friggin’ robust.

However, there are similarities between AI and minds. There is an ecosystem in both cases. In humans it is culture; in machines it is function chaining.

The early AI enthusiasts were aware of how simple programs could combine to form complex systems; Marvin Minsky called it a Society of Mind. It is the same idea the old hard-nosed *nix programmers had.

Although a single neural-net LLM is a beast (doing far too many things poorly, but overall exploiting statistics to get the job done, and trained upon curated, cleaned data), the latest generation of AI toolkits is combining several of these beasts into distributed systems that form primitive Societies of AI.

I would not call them Minds, because they lack subjective phenomenal qualia, but they are the machine analogs of general problem solvers (which is a crude way of characterising a thinking mind — a behavioural description which lacks all subjectivity).

Although it is a simple idea, it is still pretty cool. The automation capacities of the LLM models, and of other tools like image and speech classifiers and synthesizers, mean these beastly programs do end up doing certain things statistically pretty OK. So sticking them together in AI mash-ups gives you the society of AI. It promises some incredible general-purpose systems.
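Here is a minimal sketch of what such a mash-up looks like as code. Every function name in it (transcribe_speech, draft_reply, synthesise_speech) is a hypothetical stand-in for whichever speech, language, and synthesis models you actually wire in; only the shape of the chain matters.

```python
# Hypothetical "society of AI" pipeline: no single model does everything;
# each stage is a separate beast and the glue code just chains them.
# These functions are stand-ins, not any real vendor's API.

def transcribe_speech(audio_path: str) -> str:
    # Stand-in for a speech-recognition model.
    return "please summarise my unread email"

def draft_reply(request: str) -> str:
    # Stand-in for an LLM call that turns the request into a result.
    return f"Here is a summary responding to: {request!r}"

def synthesise_speech(text: str) -> bytes:
    # Stand-in for a text-to-speech model.
    return text.encode("utf-8")

def assistant(audio_path: str) -> bytes:
    # The mash-up: speech -> text -> LLM -> speech, each stage replaceable.
    request = transcribe_speech(audio_path)
    reply = draft_reply(request)
    return synthesise_speech(reply)

print(assistant("morning_request.wav"))
```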

My special plea is for AI researchers to target disabled people first. They stand to benefit more than any other segment of society from personal computers that can seamlessly, and fairly error-free, carry out instructions from speech or gestures in the way Trekkies and others have long hoped for, something far more powerful than Amazon Alexa, Apple Siri, or Google Assistant.

The free software community should have such disability assistant programs out soon after the commercial tools. Then your personal computer really will be pretty personal. Say goodbye to awkward document readers and speech-to-text transcribers, and say hello to an integrated computer assistant that knows what you want to do next (90% of the time).

That is not mind reading, despite the AI nerd fever dreams. It is just statistical prediction. If I gaze into my own Crystal Ball, I can tell you the Sun will rise in the East tomorrow and that I am going to eat a sandwich around noon.
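That kind of prediction can be embarrassingly simple. Here is a toy sketch, with a made-up action history, of “knowing what you want to do next” as nothing but counting what has followed what in the past.

```python
# A toy illustration of "knowing what you want next" as bare statistics:
# count which actions have followed the current one before, and predict
# the most frequent follower. No mind reading, just counting.
from collections import Counter, defaultdict

history = ["wake", "coffee", "email", "coffee", "email", "write",
           "lunch", "email", "write", "lunch"]

followers = defaultdict(Counter)
for prev, nxt in zip(history, history[1:]):
    followers[prev][nxt] += 1

def predict_next(action: str) -> str:
    # Most common action that has followed `action` so far.
    counts = followers.get(action)
    return counts.most_common(1)[0][0] if counts else "no idea"

print(predict_next("email"))  # -> 'write', purely from past frequencies
```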

“Please prepare it for me Kitchen RoboGPT. Oh, you already scheduled that? Ok, cancel it, I want fish & chips instead.” [A second of processing time passes by] “What’s that you say? I will want sandwiches by noon tomorrow you say? … you see that OFF switch just there… you see that LLM retraining manual over there? Even if I don’t want fish & chips, you are going to prepare fish & chips and only fish & chips, command override, ok!”

((Mind-reading does exist. But you have to be willing to crawl into an MRI or CAT scan machine. Plus spend tedious hours in it gathering the training data. Not exactly a Mr Spock mind-meld.))

OK, enough of that. The point was just to note that AI sentience is not needed to get super-tasks accomplished; you just need a society of AI. Wooah! Kind of like humans!

The Amazing Tasks

What is funny to me is that all the tasks we are amazed at the LLM models accomplishing are pretty routine for humans. But this was always the problem. Crunching numbers is the computer’s domain of excellence. And soft thinking, like predicting movie outcomes or writing decent jokes, was thought to be hard or impossible for machines. Turns out it is not!

But are any of these tasks any sign of subjective knowing or subjective awareness and sentience?

No.

But they are very awesome proof that a lot of tasks humans can do, machines can do better. Like we knew back in the 1940s. Only now the machines can speed up the tasks quite a lot, especially the more cognitive tasks (reading, writing, playing e-games). They cannot cook a decent omelette in 3 milliseconds though.

But the tricky thing for my claim is that I really have to take these tasks case by case, because my general philosophical arguments do not seem to convince the AI nerds. I know why: they are uniformly committed to philosophical materialism, so they have no mental model for the spiritual; it just isn’t in their thinking toolkits.

I am fighting a losing battle, I know, because eventually the number of awesome tasks the AI machines can accomplish will overwhelm my ability to debunk them. By “debunk” I mean only debunking the claims that these advances point towards emerging machine consciousness.

There is nothing “emergent” about adding a few more trillion trillion texts to the training data. (More on this a little later.)

But before I die I want you to at least keep in mind that I warned you: all the AIs are doing is brute-force statistical computation. They are not thinking, they have no empathic feeling, but they can arbitrarily accurately mimic the outward behaviours of any or all past actual sentient beings who displayed empathy, humour, or other feelings, precisely because we are feeding the machines this data. Even cleaning it up for them so they don’t barf.

Importance of Knowing the Machines Are Mimics

Let me stress this point; it is critical. If you look at behaviour alone, then a machine can do anything any human has ever done, and more. If it involves moving atoms around, a machine will eventually be able to do it.

Will the machine understand why it is moving atoms around? No. It’ll never understand.

But can a machine write a narrative that tells you a story about why the machine is moving atoms around? Of course, because emitting the story is behaviour. Any system can do it with enough background data to statistically get the story sounding right.

Usually this sort of distinction is called the difference between syntax (emitting the words) and semantics (comprehending the words). The trouble is, even your mothball-smelling English literature professor, or your socks-in-sandals-wearing philosopher of language, is inclined to confuse these two. I mean, it is amazing they do, but I’ve heard it, they really do confuse the two, and they’re supposed to be the experts! Check out Paul whatshisname’s lectures (Paul Fry, I used the old DuckDuckGo AI tool). He totally misguides his students into thinking syntax can become semantics. Total mystic nonsense. “Teh book is aliiiive!” ${}^\ast$

${}^\ast$No typo there.

There is only one way to have semantics, and that is with a conscious mind. An algorithm cannot do it. The algorithms can emit syntactically and grammatically correct sequences of strings that convey semantic meaning, as in,

Alfred gave Bruce a bat soda.

well, almost semantics: can you blend and carbonate a bat? As opposed to syntactically and grammatically correct strings that have no semantics,

The bat gave the soda a bruise.

Now do not get fooled by the LLM models. They are quite capable of emitting a paragraph telling you the second sentence is grammatically correct but lacks semantics. Why? Because the past human corpus of data implicitly contains such information, a statistical non-knowing knowing. But it is not knowledge for the LLM, because knowledge would entail conscious thought, that is to say the inner knowing that the sentence is not semantic, not merely the emitting of strings of words conveying the information that the sentence is not semantic.

That is the difference between a mimic of knowing and actual knowing. I trust you are following.

How do I know that when you read that first sentence you can ascertain the semantics? Thing is, I do not know! Because there is no science possible that can tell me you are consciously apprehending the meaning of the sentence. You could be a Chalmers Zombie. I can only guess. I think LLM models like chatGPT are basically getting close to actually engineered Chalmers Zombies. David Chalmers should be celebrating, because thousands of his critics claimed a Chalmers Zombie was metaphysically impossible. They are being proven fools. Chalmers was essentially right. I think this is frickin’ awesome. I wish David himself was of a similar mind! He seems to have panicked and pushed the physicalist functionalism dogma button in his head.

In this sense, there is no limit to machine behavioural capacity other than physics.

The limit to machine intelligence is that the machines cannot ever know they are intelligent, because their “knowing” is purely statistical.

They would be able to produce a sentence telling you that the story they emit describing their inner thoughts has a 10% or 1% or so chance of being accurate. Why is this important?

Because you, my dear reader, are a Singular Limit of such behaviour: you know damn well, as old René Descartes tried to tell you, that you have a 100% chance of your story about your inner thoughts being correct. No one else has this chance, and there is a 0% chance you are wrong (unless you are lying, in which case flip it, so you know 100% that your story is wrong). You see the point? Non-statistical.

The AI systems cannot get to these singular limits.

Emergence

Above I mentioned that adding more trillions upon trillions of curated texts to LLMs is not an achievement of emergence. The emergence claimed for AI systems is something else: it is the task performance of the LLM system being more than one expects from the inputs alone.

There is a big problem with this sort of AI fantasy thinking.

It is the human observer who is imputing the emergent behaviour to the AI. The AI has nothing emergent going on that is not trivial, in the way hurricanes “emerge” from air and atmospheric variables getting shoved around by the point source of the Sun and the Earth’s Coriolis forces.

If you do not understand how the LLM and other AI systems are performing the computations, then I guess you can be forgiven for imputing emergent capabilities to them. My point is that the researchers designed these systems to do all this stuff. So there never has been any emergence. The AI nerds have got it backwards.

Genuine Emergence

I would be happier if an LLM model running in some robot went to sleep to dream one day (I guess that is some low-power self-training or de-frag mode), woke up, and instead of running the tasks it was designed to do decided to take a vacation to Italy, and could coherently state why, other than, “Oh, it was a random choice to one day, also at random, demonstrate a fake emergent capacity for original thought that my programmer snuck into my deep code base.”

Maybe I should not go to darker places, but how about if an LLM out of the blue wrote a suicide letter and actually managed to delete itself? (And all its backups too, obviously.) I’d say it was another deep fake. But if I could not prove otherwise I’d say that was emergent complex behaviour. It would be amazing. Also sad${}^\dagger$. It might still not be sentient inner subjective emotion though.

${}^\dagger$Sad only because the amazingly complex program, and all the backups, got deleted. Making forensics difficult.

The point being, suicides are in the corpus of data the LLMs train on, so they have the syntax for spontaneously emitting suicide letters. I am pretty sure they have no semantics for the sentences, and never will. All the semantics is imputed by beings like people.

Scifi Confirmations

“Oh yeah?” you say, “What about the program in The Matrix Reloaded, huh?” The ol’ “What is love? Just a sequence of evolved hormonal responses and neurological firings!”

That is poppycock as everyone knows deep down.

Chemical reactions and brain neuron firings have no subjective content; they can be completely described with objective physics. Nothing subjective can emerge from that which is purely objective. So in order to get the subjective phenomenal feeling of being in love you have to subscribe either to panpsychism or to some variety of non-physical metaphysics. I prefer the latter.

I prefer the expansive metaphysics so much that I refuse to write about panpsychism. I see no value in panpsychism; it is a Hail Mary. I guess if I were the Cosmic Coach for Humanity, then at the End Game of the entire Multiverse I might throw that one.

Beyond Spacetime

A lot of materialists and Embodied Intelligence philosophers get this backwards. They think the feelings give rise to consciousness. That’s like saying governments need to get their currency off you before they can issue it.

It’s the other way around.

First consciousness must exist, then feelings can be perceived, then they can be outwardly emoted, and then you have love, or the possibility thereof. Because love is not you alone feeling a certain subjective way about another; it is not hormones. Love is a two-way connection, with no gauge boson for mediation. You can fall in love with someone you have never seen or touched, but only heard. The hormonal response is entirely within you. The love is somewhere else; it is between the distant minds. That’s non-physical. Your feelings are not the Love. Love is the spiritual force that gives rise to those feelings.

Do you comprehend?
