Tuesday, February 11, 2025

Learning Asimovian Wisdom - it's the law, doncha know?

laws, as practised by people, aren't the same as laws of physics - well, at least if you have a naive, high school level understanding of physics (and of people).

laws are approximate, because they are continually being re-interpreted. this is intentional - it keeps lawyers in employment. but it also allows for futures where circumstances arise that weren't predicted by the lawmakers.

so maybe consider the landscape of law as evolutionary - developing in reaction to the environment.

and not being optimal, but just fitting, as best it can, to the current world (with some time lag)

so it's some kind of reinforcement learning system.

so asimov suggested 3 (4 later) laws of robotics, and he laid down the law - he wrote down what he (at least initially) believed was a good enough set that it covered all future situations (until the 4th, or zeroth, law) - it was likely based on his learned reading of scripture (think: ten commandments, redux - I suppose robots didn't worship any god or own any goods, so a couple of the commandments were immediately unnecessary - more fool him:-)

[most of the stories in the first I Robot collection, and indeed in the robot novels like caves of steel etc, are basically about debugging]

but what if the laws hadn't been handed down written in stone (or silicon, or positronic hard-wired pathways)? what if we (oops, sorry, not we - the robots, we robots) just acquired the laws by learning them through a system of punishment and reward? what could possibly go wrong?

well, obviously, initially, robots would have to make mistakes - after all, don't we learn from our mistakes, so why shouldn't they? That raises a question - why should a robot care about "punishment" or "reward"? animals have pain and pleasure - reinforcement is in terms of an increase or decrease in one or the other (or both).

so maybe robots need to be hardwired with pain and pleasure centers? and one law, which is to pay attention to those centers and update all the other laws accordingly.
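This one-law-to-rule-them-all idea can be sketched as a toy reinforcement loop. Everything here - the law names, the hardwired reward signal - is made up purely for illustration, not any real robotics framework:

```python
import random

# Toy sketch: a robot learns weights for its "laws" from a hardwired
# pain/pleasure signal, rather than having them carved into positronic stone.
# The law names and the reward function are invented for this illustration.

LAWS = ["protect_humans", "obey_orders", "protect_self"]

def reward(weights):
    # Hardwired pleasure/pain centre: pleasure when human protection
    # dominates, pain when self-preservation does. A stand-in for the
    # real-world feedback the post imagines.
    return weights["protect_humans"] - 0.5 * weights["protect_self"]

def learn(steps=1000, lr=0.05, seed=42):
    rng = random.Random(seed)
    w = {law: 1.0 for law in LAWS}  # start with all laws weighted equally
    for _ in range(steps):
        law = rng.choice(LAWS)
        old = w[law]
        w[law] = max(0.0, old + lr * rng.uniform(-1, 1))  # try a perturbation
        if reward(w) < reward({**w, law: old}):
            w[law] = old  # "punished": revert the change
    return w

weights = learn()
print(weights)  # protect_humans drifts upward, protect_self toward zero
```

Which, of course, is exactly where the "what could possibly go wrong" question bites: the learned weights are only as good as the hardwired reward.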

or maybe we should just turn them off.

Monday, January 27, 2025

The Old Diary Farmer

Recently, I've taken to reading diaries - mainly because I've run out of Science Fiction, but partly also out of interest in this genre.

Dove right in at the deep end, with Pepys and Casanova - quite long, unexpurgated works of relentless detail, which is no doubt fascinating, but it is hard to see the wood for the trees. In Pepys' case, there's an online service that will deliver you a "pepys of the day" quote, presumably apposite to the calendar and selected carefully from amongst the very freshest products - which made me think about how this could be generalised as a useful service. back in the day, we had a unix thing called qotd (quote of the day) which could be used to select, from some curated database (also known as a bunch of geeks, or crowdsourced), amusing stuff, like Frank Zappa on Human Stupidity or Groucho Marx on Clubs, or Elvis Costello on Music ("dancing about architecture"). Indeed, in less ancient times (but still a while back), this could just be an RSS feed...

Anyhow, I think we need to revisit this properly with Diary Farms, and Diary Herds and therapy for people who are in diary need of Condensed Diary products, or, indeed, Plant Based Diary, skimmed Diary, Pro-Biotic Diary and all the rest...

I've made a note in my journal to revisit this a year from now to see if we've made any progress.

Tuesday, January 21, 2025

From AI to BI and back again....

I think this was roughly the title of Andy Warhol's autobiography, but here I'm referring to Artificial Intelligence and (for want of a better word) Bullshit Intelligence. For useful background on BS, Frankfurt's book is excellent with regard to the output from language "models", but also see David Graeber's equally excellent book - especially if you are considering the future of work.

We need to chart an exit strategy from today's cul-de-sac, and restore the optimistic, but also intensely practical, landscape of machine learning - one that has an honest history of 50 years (or even more if you go back to Turing), and a track record of delivering stuff (from signal processing, through medical image processing, to protein folding)....

AGI: just say no. Honest-to-god machine learning, sure - bring it on.

Monday, December 09, 2024

The Knowledge

"Was it actually getting harder to find a taxi?", Brian wondered to himself as he found he had walked half way from Paddington to King's Cross station without seeing a single one.

And thus began one of the most fascinating, dangerous, and scandalous investigations of his long and intrepid journalistic career.

Starting with visits to Knowledge Corner, the legendary cafe and on-ramp point for all would-be Hansom Cab drivers, and moving on to the hushed corridors of the hyperscale self-driving e-bike company, Fahrt, the trail would peter out, only for his pride to be piqued once more when narrowly missed by one of said company's vehicles, apparently sans rider - "ha!" he exclaimed silently to himself, "this ain't no sleepy hollow".

Indeed it wasn't, as the clues led him through Limehouse past the charnel houses, to the great Koala tea warehouses of the Tai Chi Chai conglomerate.

On his mind, the constant mantra: "where are all the drivers going? plenty of them are still dithering around town on those very e-bikes, learning the statutory 320 routes in the bluebook, and coping with the vagaries of roadworks and christmas lights and unexploded traffic cones."

As luck would have it, one day, he managed to hail a ride in a good old fashioned diesel smoke spewing black cab, and, no luck needed there, the driver wouldn't shut up about it.

"You'll never believe the signup fee they now pay - back in my day, you had to buy a moped and clipboard and waterproof all yourself, and ride up and down until you could pass the test - the inspector, no-one ever learned his name, was one hard b*rd, i can tell you. Never let you by if you made one mistake - things like thinking swiss cottage was on the way to st johns wood, or crystal palace was close to ally pally. na, pain in the backside, even more than the seats in these things"...

"so" started Brian in a millisecond gap in the constant stream of nostalgia and cursing "where are all the drivers going then, if they are paid so well?".

"now theres the thing - its a real conundrum and I can tell you " continued keef almost without taking a breath and chewing on a cheese and onion white bread sandwich whilst executing a sudden u-turn right in front of an ambulance, nearly executing a traffic warden in the same manoeuvre. "they aint going home, which just adds to the mistry. and they ain't showing up at the footie " (Brian consulted his smart watch and ascertained that gate was indeed way down at Millwall). "But" and here the cabbie touched his nose, winked and nearly took out two nuns on a pedestrian crossing "they are drinking an awful lot of tea. You can find them lined up all weekend down in the docks, you know, by that big old pagoda between limehouse basin and cannery row or wharf or whatever its called".

This was the big break Brian had been waiting for. He hotfooted it (actually pedalled) down there right away, having paid off keef for his interesting route (from paddington to kings cross via alexandra and crystal palace really showed creativity, especially in diverse use of bridges over the river thames, approaching that of US movies allegedly set in London).

And that's where he found them all, stretched out in the tea dens in the catacombs beneath the Master Ting Academy. Drinking tea, talking nonsense, but all the while, their heads in some weird contrivance that looked for all the world like an old fashioned digital perm machine.

Brian used his smart watch again to track the signals coming out of these machines and determined that, yes, indeed, they were heading back to Fahrt HQ, in grammerly square, behind, surprise surprise, Kings Cross again. "what are they doing?" he asked himself, especially as there wasn't anyone else there who would know.

Then a terrible thought struck him. Wasn't the founder of Fahrt also the guy who'd been going on about neural implants and direct mind control? Wasn't there a scandal when it was discovered that while the devices worked, they were once-only use, since they erased the part of the brain that they interfaced to, in the process of sending the signals to the metacloud? What if his new outfit were crowdsourcing smart routes for their self-driving bikes, and hadn't figured out how to record the routes, so needed a constant supply of new graduates of the Knowledge, to keep the whole enterprise from collapsing? What if the answer to all these questions was "yes"?

Sadly, we will never know, as, ironically, he was knocked into the river by a black cab driven by someone the police say is probably called keef, although witnesses said it was hard to make out his features for all the blue air around the vehicle at the time.


Wednesday, November 13, 2024

Quantum Ransomware...or Squid Inkjection Attacks

With apologies to whoever coined the term SQUID, here's a thought experiment.

Imagine for a moment that I choose to entangle a couple of particles, kind of QKD-style.

Now I use one of these to encode my e-mail to you. I can use this nearly innocently to delete your copy of my email. But imagine, for a moment, that you are fortunate (or gullible) enough to run your email service on a Quantum Computer. I can now use my entanglement to de-cohere your processor - given it is a switched program (not really a stored program) computer, I can really spin it down. I even get notified when you try to restore it and find which e-mails are causing the problem (like when you read/delete a message with one of my QUnicode bits in, my copy gets altered too - hey, that's the physics, don't mess with that:-)

What fun!


In general, has there been much analysis of side channel attacks and denial-of-service threats to QC?

Friday, September 20, 2024

weaponising the supply chain - thanks but no thanks, mossad

By intermediating the supply chain, Mossad appears to have been able to subvert safety in various mobile devices (so far, pagers, walkie talkies, possibly some phones) - one speculation is that they undermined the current limiter or other safety feature that stops the battery overheating and catching fire - and put in some interface to allow software (e.g. via specific messages to the device) to trigger this behaviour - rather than, say, just putting a few grams of semtex in the device and turning it into a small IED.

Because this was done at scale, and somewhat scattershot, the normal trust in the safety of devices bought through regular supply chains has been undermined. Imagine if Mossad had decided to do this via several made-up intermediaries who sold through the Amazon Marketplace, for example, especially with relatively low value items that aren't typically checked when being shipped internationally. Great. Now the idea is out of the box, it is another extreme example of asymmetric warfare - lots of organisations could re-implement it easily[*].

Thus, instantly, a lot of organisations now have to worry about people who, having innocently bought such a device (or indeed bought one off of someone else who got one of these exploited gadgets), want to travel.

We currently ban e-bikes on trains in the UK because, even without state-sponsored terrorism, safety features on e-bikes are not terribly well checked (they don't have the equivalent of a regular MoT/roadworthiness/emissions check, which might help a bit). We also don't allow really large capacity power banks on planes. In these cases, the risks are higher even if the occurrence of fire/explosion is rare, because the energy in the device is so much greater. Nevertheless, the explosions seen in Lebanon would be extremely dangerous on a plane, whether in the passenger cabin or the hold.

Or we'll only allow devices where the battery can be removed and kept separate from any trigger circuit (useful for consumers who want to replace old, knackered batteries too).

So maybe we will see a ban on carrying any mobile/rechargeable device on planes for a while, until some certification (including tamper proof sealing of post-certified devices) is available.

Of course, Mossad will then subvert the certification labs next, no doubt.

Just to start with, I think we need to stop people with Israeli passports traveling outside of their country, as they represent a clear and present danger to everyone in the world, not just to innocent bystanders in marketplaces in lebanon (or gaza) - at least until they can assure us that they are not going to behave so irresponsibly, and regain any possible level of trust they might once have enjoyed. Of course, most of their agents will have other passports too.

Update - this Bunniestudios blog is a very useful detailed analysis of the howto....

* footnote - the reason the west didn't launch cyber attacks on Russia's infrastructure (e.g. taking down all their power and comms) when they invaded Ukraine is that a) revealing the tools the west has and b) inviting a retaliation which would also have succeeded were both v. bad ideas. Mossad has just revealed that it has no clue about this type of precautionary principle. Well done, guys.

Thursday, September 12, 2024

explainability, next to reversibility?

XAI has many flavours (including interpretability as well as explainability) - au fond, the idea is to shine a light into the black box, and not just say why an input produced an output, but potentially show the workings, and, in the process, quantify uncertainty in the output (confidence). In the process of using an AI that does produce these outputs, the user can gradually construct a model of what the AI is doing (and why, given the user knows the inputs too). Hence, in a sense, this is like debugging the AI, or indeed, modelling the AI, i.e. reproducing the AI's model. In the end, the user will have reverse engineered the AI. This is an indirect, and possibly time consuming, way of reproducing the model, effectively if not actually. Ironically, in some cases, we may end up with a more accurate, or a cheaper, model, or both.
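This indirect route - query the black box, then fit a simpler, fully transparent model to its answers - can be sketched in a few lines. The black box here is a deliberately trivial stand-in (a linear function), purely for illustration; a real AI would need a much richer surrogate and many more queries:

```python
# Sketch of surrogate modelling: treat the model under study as a black
# box, query it on chosen inputs, and fit an interpretable model to the
# (input, output) pairs we observe.

def black_box(x):
    # stand-in for the opaque AI: we pretend we can only see its outputs
    return 3.0 * x + 1.0

def fit_linear_surrogate(f, xs):
    # ordinary least squares for y = a*x + b over the queried points
    ys = [f(x) for x in xs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

a, b = fit_linear_surrogate(black_box, [0.0, 1.0, 2.0, 3.0])
print(a, b)  # the surrogate recovers the stand-in exactly: 3.0 1.0
```

For this toy case the surrogate is a perfect copy, which is exactly the point made below: a sufficiently thorough explanation ends up being a reproduction.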


Of course, you may object that the model we learn is not actually the same as the thing inside the black box - the analogy of boxes and lights is, of course, nonsense. If we were to know the actual machine learning model (linear regression, random forest, convolutional neural net, bayesian inferencer etc) and the actual weights (model parameters etc), then it wouldn't be a black box, and we'd be able to simply copy it. Various techniques can be used, even for quite complex machines, to relate the model parameters (e.g. CNN weights and clustering) to the features the model is able to detect or predict. This is the direct approach. In this approach, we are also able, potentially, to simplify the actual model, removing components that serve no useful purpose ("junk dna"?).

Either way, any sufficiently advanced and thorough explanation of an AI is going to be a copy.

I wonder if the world of LLMs is resistant to XAI techniques partly (honestly) because very large models would be very expensive to re-model in these ways, but also partly because some of the proponents of GenAI technologies like to retain the mystery -- "it's magic", or perhaps less cynically, "it's commercial in confidence".

However, if we want to depend on an AI technology for (say) safety critical activities, I think it had better be fully explainable. And that means it will be transparent, actually open, and reversible (in the reverse engineering sense).