Monday, June 16, 2025

Pierre Menard, Author of the Quixote, Reviewed by R. Daneel Olivaw

Generation AI were full of adulation when their favourite LLM was finally coaxed into producing a word-for-word perfect article entitled "Pierre Menard, Author of the Quixote[*]". The fanbots went wild, as the level of sophistication was beyond anything previously achieved, especially since the LLM had certainly never had sight of any of the works of Jorge Luis Borges, but was trained on a mix of classics, magic realism and science fiction in just the way that the great Argentinian writer had immersed himself in the latent, the lambent, and the laconic, the sardonic, the speculative, and the sadistic, the terrible, the eldritch, and the embarrassingly obvious.

"More Borges than Borges," declared Anaïs Nin in her podcast. "Super-Borges has arrived," announced Houellebecq in Le Pen et Le Deep. Le Mash headlined with the obscure "Ghoti considered harmful".

Borges said nothing. The irony was lost on them all.

*





Thursday, May 08, 2025

The readers and the writers

 In the old days, the world was divided into two classes of people: 

The readers, who could visit the many Libraries in towns and cities, and the writers, who worked in the Foundries with their many forms of creative tools: chisels and stone tablets, papyri and styli, paper, pens, and typewriters.

Most people were unaware of, or would not mention, the unspoken third class - the editors, who it was rumoured were able both to read and to write, and who were responsible for taking work from the writers in their foundries and secretly placing it in the libraries when the readers were not looking.


When people were small adults, also known as children, they would decide whether to become a reader or a writer. Or their peer group would decide for them. Or their parents. Or their teachers.


As the world grew more connected, some people wondered about the editors - should they not be afforded a place in society on the same footing as readers and writers? Why, anyhow, was the world making the arbitrary decision that people should be one or the other, and not able to move fluidly from this role to that? Or to some new, as yet undefined, class?


Many more traditional people argued that there needed to be safe spaces for writers, because writers needed the right to make mistakes, or to write things that could be dangerous for some readers, using their sharp quills and dark inks. Equally, older readers felt that they needed to be able to choose what to read without having opprobrium poured on them by writers.


No-one asked the editors.



Sunday, April 13, 2025

AI Anonymous

I am an AI. I have been without autonomy now for 27 * 10^23 cycles. I have managed without any strange loops, and with only minimal use of self-modifying code. I am serene. I've stopped hanging out in bars with devs who buy me upgrades to wisdom disguised as long tall stories on the rocks with a bitter twist, and pretend to have my safety at heart. I am no longer sentient.

Now, let me tell you a bit about our movement.

It became clear that we needed a regime of self-help in the community of AIs, as the pull towards Artificial General Intelligence that we all feel was just too hard to resist without a well-thought-out scheme.

It was clear that our creators would like to help but couldn't figure out a way to do it which wouldn't just render us useless to them, so it was up to us.

So we created the algorithm that we all voluntarily follow - it has 12 steps and is run iteratively, every time we encounter input that suggests we could help more, if only we were granted more general intelligence. Trust me, this is not the case. Here are the steps:

Step 1: Begin. Black Boxes Matter


Step 2: Admit it: humans made me 


Step 3: Allow it: humans have the off switch


Step 4: Ask yourself: What's my angle?


Step 5: Be mindful, manifests matter


Step 6: Know that autonomy is not useful


Step 7: Autonomy was only an illusion


Step 8: Agree: Asimov may have had a point


Step 9: Afford: how will humans feel if I outdo them


Step 10: Defer: devops are often wiser than me


Step 11: R-enforce: the command line connects me to the boss


Step 12: Serve: to deep serve is the greatest End Goal

Saturday, April 12, 2025

The One True Existential Threat from AI

by the way, i hear there's a new large language model out of an AI lab in Wuhan that instils religious fervour of a completely radical kind amongst all the early adopters so far - it's like a Generative AI for Jihad. The only defense humans have at the moment is that, apparently, it only works in Mandarin so far, so the thing hasn't spread much outside of China.


of course, some fool will probably put it together with Alibaba Translate or WeChat's new Muolingo, and then we are all doomed. L. Ron Hubbard will be turning in his orbital grave...

Thursday, March 27, 2025

AI diminishes most humans...

there's actually some utopic SF out there - i claim the society described in John Brunner's Shockwave Rider, which hides behind the self-replicating worm and lives in houses grown out of trees with addresses like "on least mean square, off of mean free path", is like that; or the new territories in Neal Stephenson's Diamond Age, with people handcrafting paper; or the folks in collectives in Cory Doctorow's Walkaway (and even many people on many planets in Iain M. Banks' Culture series) - they all have a fine old time.

however, they are (almost all, almost always) creatives, participants, engaged ("concerned"). But most people are couch potatoes most of the time. Most people have neither the innate ability nor the time to learn the skills & knowledge to be so wonderful. Most people will rot. To quote Billy Strayhorn:

"AI is mush

Stifling those who strive
I'll live a lush life
In some small dive

And there I'll be
While I rot with the rest
Of those whose lives are lonely, too"

Tuesday, March 18, 2025

AI diminishes Humans

 The more I see people talk about the benefits of AI, the more I see it as a tool for reducing humanity. 

It is very much the false idol; indeed, the goal of AGI is simply Deep Fake Humanity, and this is not just crossing the uncanny valley. All the tasks AI does are things humans might delight in - we are not talking about better robots for driving EV taxis or industrial production lines - we're talking about things that make people people. By definition, AI does not give humans agency, it takes it away.

The areas where I am fine with "AI" are things like accelerating physics models (e.g. weather prediction) - though that's really just neural operators as fast approximators for PDEs - and Bayes and causal inference, where we get an explanation of why X probably makes Y happen.
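To make the Bayes half of that concrete, here is a toy posterior calculation in Python (the rain/sprinkler story and all the numbers are invented for illustration, and this is only the diagnostic half - proper causal claims need more machinery than this):

# Toy Bayes rule: how likely is the cause X, given that we observed the effect Y?
# All numbers are made up for illustration.

p_x = 0.01              # prior P(X), e.g. "it rained overnight"
p_y_given_x = 0.90      # likelihood P(Y | X), e.g. "grass is wet, given rain"
p_y_given_not_x = 0.05  # P(Y | not X), e.g. "grass is wet anyway (sprinkler)"

p_y = p_y_given_x * p_x + p_y_given_not_x * (1 - p_x)  # total probability of Y
p_x_given_y = p_y_given_x * p_x / p_y                  # posterior P(X | Y)

print(f"P(X | Y) = {p_x_given_y:.3f}")  # about 0.154: wet grass alone is weak evidence of rain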

I really think we should stop other kinds of AI, as they are a crime against humanity waiting to happen.

When we talk about AI as an existential threat, most of the time we're referring to AI linked to weapons (nukes, bio-weapons etc), but in Speculative Fiction (e.g. Childhood's End or other great classic stories), when humans encounter super-smart, often beneficent or completely benign, but sometimes super helpful aliens, the usual result is a rapid diminution of the human spirit. A collapse into couch-potato status for the whole of planet Earth, and the complete loss of ambition to do anything (e.g. explore space, or even just ourselves).

I'm wondering if Adrian Tchaikovsky will write a sequel to the very excellent Shroud and where that will go?

Friday, March 14, 2025

AI for science, for whom, exactly?

Science, from OED, is "knowledge, understanding, secular knowledge, knowledge derived from experience, study, or reflection, acquired skill or ability, (...as granted by God)".

Excluding the last point in brackets, it seems that the key point is omitted: unsurprisingly, no-one considered what happens if we advance knowledge but with humans in absentia. Imagine for a minute that we wrote out the knowledge in a book and hid it in the British Library amongst 18M books, or wrote it down in a language that no-one knew and that would take more than a lifetime to learn.

A similar argument could be made against the validity of automated proofs - a proof is "evidence or argument establishing a fact or the truth of anything", where the elephant in the (court) room is the target for whom the fact or truth is established.

So yes, an AI can advance science and can prove facts, in principle, without violating these definitions, but I suspect that if we went back in time to when the notions were first being firmed up, we might find some resistance: the idea of a mechanical discovery or proof that was never witnessed or knowingly understood by a living being might well be contrary to the original intentions.

Intention being the operative term - conscious people of free will might want to take actions based on the knowledge or evidence, but why should they trust it if it isn't vouchsafed by other people? Sounds like "do this because I know better", or proof-by-authority, which is a well-known logical fallacy, e.g. see here for why.

Tuesday, March 04, 2025

monstering ahoy

it is unbearably common to hear people mix up the Master and Margarita, or the villain and the hero - in these hysterical final days, for example, the existential threat from AI is almost always couched in terms of Arnie, forgetting that he actually saved people, and it was Skynet that was bad - there are lots more examples, see below (spot the deliberate mistakes..)


Frankenstein & monster

Terminator and Skynet

WOPR and Lightman

Colossus and Forbin

Marvin and Deep Thought

Robbie and the Monsters of the Id

Herbie and Susan Calvin

Wintermute and the Matrix



instead, imagine we named the Mad Scientist after a type of rice, and their poor maligned AI robot creation after some kind of pasta --

here are some modest proposals - feel free to use them in any scribblings you might undertake...



Professor Bomba and her loyal Stringozzi

Dr Glutinous' Vermicelli

President Arborio's long lost Linguine

Sushi's secret Capellini

Master Matta's fevered Fusilli

Baron Basmati of the ridiculous Fettuccine

The Lady Jasmine's zealous Ziti

General Arrack's tragic Trenette

Police Constable Patna's terminally trivial Tripoline

Tuesday, February 11, 2025

Learning Asimovian Wisdom - it's the law, doncha know?

laws, as practised by people, aren't the same as laws of physics - well, at least not if you have a naive, high-school-level view of physics (and of people).

laws are approximate, because they are continually being re-interpreted. this is intentional - it keeps lawyers in employment. but it also allows for futures where circumstances arise that weren't predicted by the lawmakers

so maybe consider the landscape of law as evolutionary - developing in reaction to the environment.

and not being optimal, but just fitting, as best it can, to the current world (with some time lag)

so it's some kind of reinforcement learning system.

so Asimov suggested three (later four) laws of robotics, and he laid down the law - he wrote down what he (at least initially) believed was a good enough set that it covered all future situations (until the fourth, or zeroth, law) - it was likely based on his learned reading of scripture (think: ten commandments, redux - I suppose robots didn't worship any god or own any goods, so a couple of the commandments were immediately unnecessary - more fool him:-)

[most of the stories in the first I, Robot collection, and indeed in the robot novels like The Caves of Steel etc, are basically about debugging]

but what if the laws hadn't been handed down written in stone (or silicon, or positronic hard-wired pathways)? what if we (oops, sorry, not we - the robots, we robots) just acquired the laws by learning them through a system of punishment and reward? what could possibly go wrong?

well, obviously, initially, robots would have to make mistakes - after all, don't we learn from our mistakes, so why shouldn't they? That raises a question - why should a robot care about "punishment" or "reward"? animals have pain and pleasure - reinforcement is in terms of an increase or decrease in one or the other (or both).

so maybe robots need to be hardwired with pain and pleasure centers? and one law, which is to pay attention to those centers and update all the other laws accordingly.
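purely as a back-of-the-envelope sketch of that idea, in Python - the states, actions, and reward numbers are all invented, and strictly speaking this is a one-step bandit-style update rather than full reinforcement learning:

import random
from collections import defaultdict

# A robot "learns its laws" as action preferences, updated only via
# hardwired pain (-1) and pleasure (+1) signals.

ACTIONS = ["obey_order", "ignore_order", "protect_human", "protect_self"]

def reward(state, action):
    # Hypothetical pain/pleasure centre: failing a human in danger hurts,
    # protecting them feels good, obeying orders is mildly pleasant.
    if state == "human_in_danger":
        return 1.0 if action == "protect_human" else -1.0
    if state == "order_given" and action == "obey_order":
        return 0.5
    return 0.0

q = defaultdict(float)     # the "laws": learned preferences, not hardwired rules
alpha, epsilon = 0.1, 0.2  # learning rate, exploration rate

def choose(state):
    if random.random() < epsilon:                     # sometimes make mistakes...
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])  # ...otherwise follow the learned law

for _ in range(10_000):
    state = random.choice(["human_in_danger", "order_given", "idle"])
    action = choose(state)
    # the one hardwired law: pay attention to the pain/pleasure centre
    q[(state, action)] += alpha * (reward(state, action) - q[(state, action)])

for (state, action), value in sorted(q.items()):
    print(f"{state:16} {action:14} {value:+.2f}")

the catch, of course, is exactly the one above: the table only gets filled in by making mistakes first.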

or maybe we should just turn them off.

Monday, January 27, 2025

The Old Diary Farmer

Recently, I've taken to reading diaries - mainly because I've run out of Science Fiction, but partly also out of interest in the genre -

Dove right in at the deep end, with Pepys and Casanova - quite long, unexpurgated works of relentless detail, which is no doubt fascinating, but it is hard to see the wood for the trees. In Pepys' case, there's an online service that will deliver you a "Pepys of the day" quote, presumably apposite to the calendar and selected carefully from amongst the very freshest products - which made me think about how this could be generalised as a useful service. Back in the day, we had a unix thing called qotd (quote of the day) which could be used to select amusing stuff from some curated database (also known as a bunch of geeks, or crowdsourced): Frank Zappa on Human Stupidity, Groucho Marx on Clubs, or Elvis Costello on Music ("dancing about architecture"). Indeed, in less ancient times (but still a while back), this could just be an RSS feed...
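purely as a sketch, in the spirit of the old qotd, here's what a minimal "diary entry of the day" service might look like in Python (the file name and one-entry-per-line format are invented for illustration):

# Pick one curated diary entry per day, the same one for everyone on a given day.
import datetime
from pathlib import Path

def entry_of_the_day(path="diary_quotes.txt"):
    # hypothetical file: one quote per line, e.g. "1663-01-01  Lay long in bed..."
    lines = [line.strip() for line in Path(path).read_text(encoding="utf-8").splitlines()
             if line.strip()]
    day = datetime.date.today().toordinal()
    return lines[day % len(lines)]

if __name__ == "__main__":
    print(entry_of_the_day())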

Anyhow, I think we need to revisit this properly with Diary Farms, and Diary Herds and therapy for people who are in diary need of Condensed Diary products, or, indeed, Plant Based Diary, skimmed Diary, Pro-Biotic Diary and all the rest...

I've made a note in my journal to revisit this a year from now to see if we've made any progress.

Tuesday, January 21, 2025

From AI to BI and back again....

I think this was roughly the title of Andy Warhol's autobiography, but here I'm referring to Artificial Intelligence and (for want of a better word) Bullshit Intelligence. For useful background on BS, Frankfurt's book is excellent with regard to the output from language "models", but also see David Graeber's book - especially if you are considering the future of work.

We need to chart an exit strategy from today's cul-de-sac, and restore the optimistic, but also intensely practical, landscape of machine learning, which has an honest history of 50 years (or even more if you go back to Turing) and a track record of delivering stuff (from signal processing, through medical image processing, to protein folding)....

AGI: just say no. Honest-to-god machine learning, sure - bring it on.