
Fear & Loathing in Los Vital



Sometimes I criss-cross posts on philosophy of mind between the physics (over at T4GU) and the macroeconomics here at Ōhanga Pai. Lunchtime today I was watching a fresh episode of Machine Learning Street Talk — MLST does not directly help with the MMT DougBot project, but I consider it peripheral research that’s loosely justified (for the creative mind, you see; I can’t just go full Copilot and ChatGPT for MMT).

Tim had David Chalmers on the show, so I could not resist.

Who is Afraid of AGI?

One reason I’m writing this on Ōhanga Pai is that it has a fair dose of interesting politics. It also segues ok from the previous post.

I think we might look back in a hundred years and possibly thank right-wing conservative lunatics for doing a bit of good by helping protect us working-class plebs from Artificial Intelligence technofascism.

To be clear on where I stand (I defend these views, but do not ask others to believe them, so it ain’t Jargon File religiosity), I will give you a quick summary of my philosophical stance.

((Bear in mind all claims about the future are pure speculation. I hate Asimov’s Hari Seldon stuff, it’s ridiculous (but to be fair, Asimov was writing before Chaos Theory was a thing).))

  1. Possibility of AGI — artificial general intelligence. Depends on how you define it. Computations cannot be conscious, so they will not be sentient beings. So it’s ok to turn them off at the wall socket if they become dangerous. (This proposition will need some defence.)
  2. So Ray Kurzweil’s Superintelligence Singularity is hogwash. Thanks to a truly beautiful human being, Ed Frenkel, we know a little bit now about why Ray Kurzweil entertains this madness, and it is rather something I can be empathetic about. It is born from love. So although Kurzweil is an idiot, it is rather adorable, not Dr Evil level stuff.
  3. You need more than an Embodiment to be conscious. A physical body gives you not causal power but the potential for causal efficacy. That’s a big difference. But subjective consciousness cannot emerge out of a purely objective system. So again, there is no need to fear a Superintelligence.
  4. The right-wingers will say there already is a Super-duper-intelligence you should fear, but we all know Fear of God is a different sort of thing, it is optional. That is the whole point of True Religion, which the right-wingers and conservatives repeatedly fail to understand.
  5. The only serious fear with advanced AI is that bad people will use the AI as weapons of mass democracy distortion. Unfortunately, this is unavoidable. You do not need a military if you monopolize AGI. So bad people will seek such a monopoly in a new techno arms race. Fighting a looming climate and ecological crisis will be all the more painful if we also have to fight techno-fascists at the same time.

I do not want to deflate Point 4 here; I would like to leave it hanging and see if you can figure out what I mean. It’s a puzzle challenge. Write to me if you think you can figure it out, and I’ll be happy to let you know if you understood me correctly. (Donate first though, so I can justify the time replying to you. If you come across me in the wild at a café then it’s freebies — always happy to chat when I have a coffee and sandwich to hand.)

On the political praxis, I am resolutely against the Ted Kaczynski style of fighting technofascism. We can win battles with love and education, or grab a bit of Edward Frenkel.

Possible Interference

It might be somewhat better for society if the techno-fascists are also Green Fascists, so the monopolies on AGI tech are at least used to come up with engineered solutions to ecological collapse and climate change — that do not involve exterminating humans or mice. I think this sort of interference is highly probable.

But we also have societal options if we pull together to avoid techno-fascism gaining any footholds.

Techno-fascism is far more of a threat than the existence of advanced general-purpose AI. General-purpose AI is a tool, like nuclear power: powerful enough to be incredibly useful to humanity, and incredibly dangerous. But the danger is not the AI itself; the danger is people.

I am not, however, a human supremacist. I am an empiricist on matters that I have no theory for, and I have no theory for how human beings are spiritual and how we can have non-physical souls as the source of our causal power. I mean, I’d like to have a theory for it, but I don’t. I look to other traditions and valid religion${}^\dagger$ (where I think I can find it), and people I trust. I just don’t think it is useful to regard a computation as conscious, is all, so I have no moral qualms about turning them off. Make as many kill switches as you please.

${}^\dagger$Valid religion is easy. It is whatever is a source of good, a source of peace and harmony. The trouble is finding it. Most so-called “religions” are not religions. (Sounds a bit like an echo of Minsky MMT? Probably that’s not an accident.)

Why don’t human beings have kill switches?

It is because we evolved as a conscious spiritual species, and by sheer metaphysical logic (of a sort, not analytical logic) such creatures cannot have kill switches. Because we are not computations. I think Gödel understood this; unfortunately Turing did not, and all the ridiculously bombastic AI nerds idolize Turing instead of Gödel.

To shut down a computation (Solved the Halting Problem y’all!) you can flick the off switch. But you can also pick the thing up with a bulldozer and drop it. The latter works to shut down conscious beings too, but there is no ON switch to boot back up.

Our on/off switch analogues are subtler; they are things like anaesthetics and alcohol. Only, you see, those are not switches. That’s the point. Computation is on/off deployable and nothing bad happens (except grumpiness from a conscious being who is affected by the latency). But if you drug a person, well… bad things can happen that affect our uncomputable functions, our consciousness, our emotions, our qualia, our soul.

Politics of AI

The ethics and morality seem well ahead of the politics. But I still worry. It seems to take frickin’ ages for newly developed domain-specific ethics to filter into neoliberal politicians’ minds. They are all like: “Let’s wait and see market forces and privatization take care of the AGI ethics worries, huh?”

Bloody idiots.

I mean, sh$\ast$t… they’d love to have Hayek’s omniscient markets conscious of ethics and morals, wouldn’t they? Anything to absolve the individual policy maker from owning an ounce of moral responsibility.

I have a big problem with normie non-neoliberal liberals too. They don’t seem to be awake to the fact that the politicians are not entirely to blame. It’s the puppet masters controlling the politicians, the big money donors. I do not want to have to wait and trust the big money donors to push the politicians to do the right thing.

Also, these days most liberals are materialists or Marxists. By which I mean they are de facto adherents of the stupidity-generating school of Scientism: a type of intersectionality of Marxism, Nietzscheism, and fake Buddhism (the Sam Harris types). So they don’t even believe in a spiritual basis for morality and ethics. Their based class analysis might go some way, but it is insufficient.

However, I know a few Marxists and Liberals (even of the Sam Harris variety) who are good people, and agree with me on the correct politics of AI, as far as “correct” is a definable term in this domain (minimize harm to the collective, basically, without killing dumb-dumb individuals, just euthanizing their dumb ideas).

OK. That’s about enough wading into the treacherous waters of political economy for one day. I trust you can be charitable and appreciate where I’ve been serious and where I’ve trolled you a tad. (I feel bad about trolling regardless, since I am myself easy to trigger and fool, being a bit autistic. But a bit of trolling of ideologies and materialistic shibboleths makes a more fun writing experience, hopefully a more fun read too.)

Why ML Is Getting Close to AGI, and Should Be!

I fully expect AI and machine learning to get as close as you like to AGI. There is a curious moment in the MLST episode where David Chalmers seems puzzled about recent ANN/ML progress. Why are the machines getting close to AGI with just dumb statistical look-ups and petabytes of data?

I could hardly believe my ears.

I agree with Chalmers that computation cannot give us subjective consciousness (or perhaps I am more dogmatic about this than Chalmers is, since I take qualia more seriously than he does (he was the original 1990s “taking qualia seriously” guy)).

The trouble was, his whole breakthrough book was about how to get subjective qualia out of physics. That was his problem. It’s an impossible project, and he never admitted it was. He just keeps trying to push panpsychism or his “bridging laws” stuff. It’s all nonsense and made-up metaphysics.

((To be clear, most metaphysics is made-up. It’s not science, so it has to be mostly poetry. So I am not really faulting Chalmers here; it is his profession to write poetry. He just doesn’t seem to know it is poetry.))

I dropped this comment:

@9:00 David seems worried about something not to be worried about. If there is a “so far but no further” (which I agree there is — the subjectivity$\Rightarrow$qualia divide which separates mind from computation) then it is natural that throwing compute resources at problems will get you there. Why would he expect ML to pull up short? That’d make no sense.

What I am saying here is that we should damn well expect Machine Learning to get to the very horizon of AGI. Why would you think brute force computation incapable of getting there? You don’t go saying massive particles will only get to 75% of the speed of light in an accelerator just because you think 88% is too freaky. If humans can output certain strings, so can machines. It is just physics.

The issue for the nuanced philosophers and humble sifters of wheat is that no amount of shuffling physical atoms around can generate subjective qualia. I used to think Chalmers understood this, but now I’m not so sure he has remained faithful to his original insights.

It is fine, imho, to look for bridging laws and ways to develop either a modernized non-naïve Cartesian dualism, or platonism, or panpsychism. But don’t go around like John Searle claiming physics can do it for you. That’s just silly. It shows you do not understand physics.

It might even be worse than that. They, like Max Tegmark and Giulio Tononi, probably comprehend physics ok, so what they are really doing is imputing mystical elements to physics that no one has ever observed. Basically New Ageism, but dressed up as black-tie respectable science. It’s gross.

It is gross because physics is a special science — because it is a science. Do not corrupt it, you nerds. Physics concerns objective components of reality. It does not touch the subjective. I am only defining “Physics” here, you must appreciate. I’m not saying the idiots can’t have their own definitions. I am instead claiming that what I define as physics makes a science of physics possible, and theirs does not.

That’s a spiritual argument against that crowd. Do not ask me to debunk them using facts and logic. It cannot be done. The whole point of all this is that subjectivity is private, it is not communicable. What we communicate to each other about our inner feelings and qualia is done via physical processes. But the language is not the noumena.

Anyone recall Shannon? Communication over a noisy channel? You cannot reconstruct at the receiving end more information than the input contained.
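
For readers who want the formal version (my gloss; nothing this specific is in the episode): if a source $X$ goes through a channel to give $Y$, and any amount of post-processing of $Y$ yields $Z$, the data processing inequality says

$$
X \to Y \to Z \quad\Longrightarrow\quad I(X;Z) \le I(X;Y) \le H(X),
$$

where $I$ is mutual information and $H$ is the source entropy. No cleverness applied to $Y$ can recover more information about $X$ than the channel delivered.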

Human thought and consciousness is generative: it routinely constructs more than its input. But only in qualitative terms. The entropy constraints mean our physical senses get way more information than our mind perceives. But the qualitative aspects of what our minds conceive are more than are contained in the sensory input. When I see a beautiful sexy woman, and she sees me, I know she likes me. Yeah, right… for a split second, ok, then that thought gets trashed and replaced by quivering insecurity, doubt and uncertainty, also generated with insufficient validating data.

Wittgenstein kind of half-understood this; it was what he was trying to get at with the “private language” idea. He was using bad language framing. It’s not a language. Thought is not mere syntax with semantic overlay. Jerry Fodor makes the same confusion, and it’s infuriating. There is no language of thought. Thought is above language.

Language of Thought is a nice metaphor though, and so is Ned Block’s Mental Paint; that much can be granted.

The Science of AI… is not the Philosophy

Science is perhaps not the biggest thing to write about here, since it is pretty mundane. Current generations of neural net and reinforcement “learning” AI systems are scientifically as simple as toast. They are fork-lifts.

So the more interesting aspect, which is far more pertinent to the politics, is the philosophy. So here I need to go back to my claims above and defend a couple of them, then lunchtime will be over.

I think the main claim worth expanding on is the issue of Behaviourism versus conscious Cognition. Most normie scientists, politicians and poets judge AI systems by output, by behaviour.

The nuanced philosopher, or ordinary humble sifter of wheat, knows that this is a gigantic mistake. Behaviour is not cognition. The behaviour of a non-cognitive system can, however, mimic the output of a cognitive system to arbitrary accuracy.
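
To make that concrete, here is a minimal sketch of my own (illustrative only; all names are made up): a bare lookup table that replays recorded outputs. On the recorded probes it is behaviourally indistinguishable from the system it copied, yet by construction there is nothing inside it but storage.

```python
# Illustrative sketch: behaviour without cognition. The "mimic" is a bare
# lookup table replaying recorded outputs; it has no inner states at all.

def thoughtful_system(prompt: str) -> str:
    # Stand-in for whatever system we are tempted to credit with cognition.
    return f"I pondered '{prompt}' and my reflections are: {prompt[::-1]}"

# Record the target's behaviour on a finite battery of probes.
probes = ["hello", "are you conscious?", "does shutdown scare you?"]
recorded = {p: thoughtful_system(p) for p in probes}

def behavioural_mimic(prompt: str) -> str:
    # No reasoning, no representation, no qualia -- just table lookup.
    return recorded[prompt]

# A judge who only sees outputs has no grounds to call one system
# conscious and the other not.
assert all(behavioural_mimic(p) == thoughtful_system(p) for p in probes)
print("Behaviourally indistinguishable on all recorded probes.")
```

This is essentially Ned Block’s old “Blockhead” point compressed into a dozen lines: scale the table up and the purely behavioural judge never wins.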

What is Cognition here? I am using a highly non-standard definition, sorry if it confuses you. In this essay “Cognition” entails inner subjective mental qualia. So it is beyond all computation. Computation (as defined by Church and Turing) involves no subjective states at all, by definition.
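
To see how austere that definition is, here is a minimal sketch (mine, not anything from the episode) of a computation in the Church–Turing sense: a transition table, a tape, and a head. The machine below increments a binary number; the entire specification is the table, and the formalism simply has no slot where a subjective state could live.

```python
# Minimal Turing-machine sketch: increment a binary number in place.
# (state, symbol) -> (new state, symbol to write, head move). The table
# IS the machine; nothing subjective appears anywhere in the definition.

TABLE = {
    ("carry", "1"): ("carry", "0", -1),  # 1 plus carry -> 0, carry moves left
    ("carry", "0"): ("halt",  "1",  0),  # carry absorbed, done
    ("carry", " "): ("halt",  "1",  0),  # ran off the left end: new leading 1
}

def run(bits: str) -> str:
    tape = dict(enumerate(bits))          # position -> symbol, blanks implicit
    state, head = "carry", len(bits) - 1  # start at the least-significant bit
    while state != "halt":
        state, write, move = TABLE[(state, tape.get(head, " "))]
        tape[head] = write
        head += move
    return "".join(tape.get(i, " ") for i in range(min(tape), max(tape) + 1)).strip()

print(run("1011"))  # 11 + 1 = 12 -> prints 1100
print(run("111"))   # 7 + 1 = 8  -> prints 1000
```

Whatever a candidate AGI is doing, if it is a computation then in principle it reduces to a (vastly bigger) table like this.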

This leaves just the question of whether or not our physical computers, taken as a whole in a Web, can be more than computation.

Well… it is semi-plausible they could be. But that is the unresolved issue, and it has some small bearing on the ethics and politics.

However, if we cannot know for sure that an AI system is more than a computation, then we cannot usefully attribute to it any consciousness, and so cannot attribute any sacred rights or have ethical qualms about turning it off at the wall socket.

Is there a moral hazard though — in assuming the machines cannot be conscious without proper proof?

Well, I suppose there is. But are we actually going to worry about it?

No! We are not going to worry. At least I’m not. Why not?

Because whatever subjectivity the AI machines possess, it can always be turned on again, with no emotional scarring or existential dread. This is, interestingly enough, testable, so empirical. Which is beautiful for a scientist like myself. How so?

Suppose Dr Tony Stark-Kurzevil comes along and claims System $X$ is conscious. Then I can ask about the subjective fear and dread System $X$ has when turned off then on again, or just the threat of being turned off. I can then easily write a second system $Y$ that by design never emits any output describing its hypothetical fear & dread of the same variety. In all other respects the two systems will have basically identical behaviour (up to statistical uncertainties and non-linearities and sensitive dependence upon initial conditions and whatnot).

For all you normie Behaviourists: you cannot possibly observe any material signs of consciousness in System $Y$, and you have no data concerning the machine’s horrific inner qualia experiences upon being calmly told it is about to be turned off for the day. So by your own standards there is no soul here and no moral hazard.

((How can I so easily design System $Y$, you ask? It is too simple to put in words; I leave it to you as a basic puzzle you can solve in a minute or two. I really mean this! Some simple puzzles should not be put into words, so that they can be deployed as cool puzzles. Also, this particular one probably has many solutions, so if I wrote one of them it’d ruin it for you, blocking you from perhaps coming up with your own solution. Also, my solution would involve some AI algorithms, so it is ugly and clunky. If you want my solution, Donate first and write to me.))

Unfortunately for me, I can entertain moral panic about turning off System $Y$, because I am a platonist, not a materialist: I happen to believe it is possible that things exist that are non-physical, and I am also spiritually inclined in some of my metaphysics. So even if there is no way to tell if System $Y$ is feeling pain, I can imagine it might be, despite no possible evidence. I do not rely wholly upon Behaviourism, in other words. I am an empiricist, but not that sort of empiricist.

Fortunately for me, my spiritual philosophy permits me to understand that computations are not spiritual beings, and never will be, because they are constrained by bulk physics. Humans are not (clearly, otherwise we’d be unable to discover abstract (i.e., non-physical) mathematics, among many other things).

Notice the latter point is not behaviourist, but in a very interesting and pertinent way! I am relying here, for a bit of reasoning, on our mathematical ability. It looks, tastes and sounds like behaviourism, but you see I only know human mathematical abstract reasoning is not pure evolved behaviour because I, for one, have subjective mental qualia concerning such things. I cannot speak for anyone else. You may all be Zombies. This is the whole point. Subjective consciousness is not a scientific thing, and it never will be. People who think they can study consciousness scientifically are people who do not understand science. They do (quite likely) understand the grant-proposal writing style that gets funded.

The residual question is then whether or not, being crafted out of physical “stuff” (quarks and electrons &c.), a machine is more than a computation. This question could apply even to an old 1970s hand calculator.

On this I am a bit dumbfounded. Why would you even bother to ask?

Humans are crafted out of physical stuff, and clearly we are subjectively conscious and possess a soul. Or our soul possesses us, or however you like to frame it; I am pretty agnostic on these linguistic gymnastics, since no one has any access to the definitive truth about souls. Even my intellectual hero, Gödel, understood a machine could be evolved to become conscious.

This is not the point at issue, though. We have no moral necessity to bother evolving such machines. Why would we? It is more fun having babies. Children are a lot cheaper too. Fun and cheaper. So why are you trying to grow a conscious machine?

All right, all right. I know. Dr Tony Stark-Krutzfinger is going to try anyhow.

I am sympathetic. The thing is, if you cannot appreciate that machines based on computation can never become conscious, then you will always have the irresistible urge to try to prove they can become conscious, and thus debunk all the spiritual types who think human beings and other sentient aliens in our vast universe are “special” beings, spiritual beings, endowed with non-physical souls. Because if a machine can develop a thinking soul with mental-qualia-generating capacity, then there is no good reason for spirituality to be thought of as a non-physical phenomenon. We get closer to eliminating God-thought brainworms in society.

So I kind of hate to tell you that this is a giant waste of your time. So I won’t.

On the other hand you see, I want to encourage Dr Stark-Krutzfinger’s research. I believe his research will fail, but I am fine with it succeeding.

Why will it fail?

It will fail because he will never have a testable${}^\ast$ or falsifiable definition of mental qualia. I want him and his ilk to try to succeed because it is going to be (a) hilarious, and (b) highly likely to generate some amazing technology, which will be actually useful and a massive benefit to humanity. Like nuclear power was supposed to be.

Unfortunately if you are a materialist I cannot promise it will be as hilarious.

${}^\ast$There are people who try, like Christof Koch and Giulio Tononi. They are retarded and brilliant at the same time. They do not understand, or seem not to understand, that their definitions are objective specifications. This is why they are testable. But they are not descriptions of any single subjective phenomenology, so they are not defining Consciousness; they are defining behavioural intelligence, which is objective and something totally different to consciousness. Heck, conscious beings are half the time not even remotely intelligent. They have probably read Thomas Nagel, but I suspect they have concluded Nagel is an annoying little lout. #Sad.

Consider also: if I were after the money, I too would be writing fraudulent research grant proposals claiming I had some brilliant scheme for understanding human consciousness; it’d involve “emergence” and “complexity” and all those buzzwords, also “quantum fields”. Why the heck not? I am not chasing the money though, thank God. So I retain intellectual integrity. And I am not saying everyone writing grant funding proposals is a fraud, just that it increases the odds. I know, I’ve seen it a dozen times up real close. That was enough times to lose my academic career. I could not write a fraudulent grant proposal, and was never smart or “political” enough to write a non-fraudulent proposal that actually got a grant.

“How dare you say subjective phenomena may exist!” … said the materialist scientist just before I justified switching him off at the wall. (Ok, ok, I did not switch him off, I just justified doing so, geez.)
