Friday, September 20, 2024

weaponising the supply chain - thanks but no thanks, mossad

By intermediating the supply chain, Mossad appears to have been able to subvert safety in various mobile devices (so far pagers, walkie-talkies, and possibly some phones). One speculation is that they undermined the current limiter or other safety feature that stops the battery overheating and catching fire, and put in some interface to allow software (e.g. via specific messages to the device) to trigger this behaviour - rather than, say, just putting a few grams of Semtex in the device and turning it into a small IED.

Because this was done at scale, and somewhat scattershot, the normal trust in the safety of devices bought through regular supply chains has been undermined. Imagine if Mossad had decided to do this via several made-up intermediaries who sold through the Amazon Marketplace, for example, especially with relatively low-value items that aren't typically checked when being shipped internationally. Great. Now that the idea is out of the box, it is another extreme example of asymmetric warfare - lots of organisations could re-implement it easily[*].

So instantly, a lot of organisations now have to worry about people who, having innocently bought such a device (or indeed bought one second-hand from someone who got one of these exploited gadgets), want to travel.

We currently ban e-bikes on trains in the UK because, even without state-sponsored terrorism, safety features on e-bikes are not terribly well checked (they don't have the equivalent of a regular MoT/roadworthiness/emissions check, which might help a bit). We also don't allow really large-capacity power banks on planes. In these cases the risks are higher even if the occurrence of fire/explosion is rare, because the energy stored in the device is so much greater. Nevertheless, the explosions seen in Lebanon would be extremely dangerous on a plane, whether in the passenger cabin or the hold.

Or perhaps we'll only allow devices where the battery can be removed and kept separate from any trigger circuit (useful for consumers who want to replace old, knackered batteries too).

So maybe we will see a ban on carrying any mobile/rechargeable device on planes for a while, until some certification (including tamper-proof sealing of post-certified devices) is available.

Of course, Mossad will then subvert the certification labs next, no doubt.

Just to start with, I think we need to stop people with Israeli passports travelling outside of their country, as they represent a clear and present danger to everyone in the world, not just to innocent bystanders in marketplaces in Lebanon (or Gaza) - until they can assure us that they are not going to behave so irresponsibly, and regain any possible level of trust they might once have enjoyed. Of course, most of their agents will have other passports too.

Update - this Bunniestudios blog is a very useful detailed analysis of the how-to...

* footnote - the reason the west didn't launch cyber attacks on Russia's infrastructure (e.g. taking down all their power and comms) when it invaded Ukraine is that a) revealing the tools the west has, and b) inviting a retaliation which would also have succeeded, were both very bad ideas. Mossad has just revealed that it has no clue about this type of precautionary principle. Well done, guys.

Thursday, September 12, 2024

explainability, next to reversibility?

XAI has many flavours (including interpretability as well as explainability). Au fond, the idea is to shine a light into the black box: not just to say why an input produced an output, but potentially to show the workings and, in the process, quantify uncertainty in the output (confidence). In the course of using an AI that does produce these outputs, the user can gradually construct a model of what the AI is doing (and why, given the user knows the inputs too). Hence, in a sense, this is like debugging the AI, or indeed modelling the AI, i.e. reproducing the AI's model. In the end, the user will have reverse engineered the AI. This is an indirect, and possibly time-consuming, way of reproducing the model, effectively if not actually. Ironically, in some cases we may end up with a more accurate or a cheaper model, or both.
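This indirect route can be sketched in miniature. A minimal toy, assuming (purely for illustration) that the black box happens to be linear in its inputs: the names `black_box`, `probe` and `surrogate` are hypothetical, and real models would need statistical fitting rather than exact probing - but the shape of the exercise is the same: query, observe, reconstruct.

```python
# Reverse engineering a black box purely from input/output pairs.
# We never look inside; we only ask questions and record answers.

def black_box(x):
    """Stand-in for an opaque AI: secretly computes 3*x0 - 2*x1 + 5."""
    return 3 * x[0] - 2 * x[1] + 5

def probe(f, dim):
    """Recover bias and per-input weights by differencing unit probes."""
    bias = f([0] * dim)              # output with all inputs zero
    weights = []
    for i in range(dim):
        unit = [0] * dim
        unit[i] = 1                  # perturb one input at a time
        weights.append(f(unit) - bias)
    return weights, bias

def surrogate(x, weights, bias):
    """Our reconstructed copy of the model."""
    return sum(w * xi for w, xi in zip(weights, x)) + bias

weights, bias = probe(black_box, 2)
print(weights, bias)                                          # [3, -2] 5
print(surrogate([4, 7], weights, bias) == black_box([4, 7]))  # True
```

For a genuinely nonlinear model the surrogate would only approximate the black box, which is exactly the "effectively if not actually" caveat above.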


Of course, you may object that the model we learn is not actually the same as the thing inside the black box - the analogy of boxes and lights is, of course, nonsense. If we were to know the actual machine learning model (linear regression, random forest, convolutional neural net, Bayesian inferencer etc.) and the actual weights (model parameters etc.), then it wouldn't be a black box, and we'd be able to simply copy it. Various techniques can be used, even for quite complex machines, to relate the model parameters (e.g. CNN weights and clustering) to the features the model is able to detect or predict. This is the direct approach. In this approach we are also able, potentially, to simplify the actual model, removing components that serve no useful purpose ("junk dna"?).
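A toy sketch of that direct approach, assuming (hypothetically) that the model's parameters are a plain weight vector we can inspect: components with negligible weight contribute nothing to any prediction - the "junk dna" - and can be pruned without changing the model's behaviour.

```python
# Simplifying an inspectable model by removing useless components.

weights = [3.0, -2.0, 1e-9, 0.0, 4.5]   # visible model parameters

def predict(x, w):
    """A plain linear model over the inputs."""
    return sum(wi * xi for wi, xi in zip(w, x))

def prune(w, threshold=1e-6):
    """Zero out components whose magnitude is below the threshold."""
    return [wi if abs(wi) >= threshold else 0.0 for wi in w]

pruned = prune(weights)
x = [1.0, 1.0, 1.0, 1.0, 1.0]
print(pruned)                                                # [3.0, -2.0, 0.0, 0.0, 4.5]
print(abs(predict(x, weights) - predict(x, pruned)) < 1e-6)  # True
```

In a real network the same idea appears as weight pruning, where the simplified model is both cheaper to run and, sometimes, easier to explain.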

Either way, any sufficiently advanced and thorough explanation of an AI is going to be a copy.

I wonder if the world of LLMs is resistant to XAI techniques partly (honestly) because very large models would be very expensive to re-model in these ways, but also partly because some of the proponents of GenAI technologies like to retain the mystery -- "it's magic", or perhaps less cynically, "it's commercial in confidence".

However, if we want to depend on an AI technology for (say) safety-critical activities, I think it had better be fully explainable. And that means it will be transparent, actually open, and reversible (in the reverse engineering sense).

Monday, September 02, 2024

what if cat species were named after greek food?

moussaka, the mouser

calamari, the cat of nine lives and nine tales

tsigarides, the top cat

kleftiko, the clever cat

stifado, the sedentary cat

marathopitakia, the mischief maker

dolmadakia, sleeps all day

add yours here...