Saturday, March 07, 2026

Responsible AI for War

As with Military Intelligence, Responsible AI for War is an oxymoron.

As demonstrated in the US/Israeli attack on Iran, the use of today's AI technology in warfare is fraught with problems. The NY Times reported that precision weapons had hit a girls' primary school. The site may have been a (possibly) legitimate target fifteen years earlier, when it was used by the IRGC, but that intelligence was completely out of date. As with earlier intelligence failures, such as when the US bombed the Chinese Embassy in Belgrade, the old adage "garbage in, garbage out" clearly applies to the state of the art in AI. You would be sued for using such unreliable and unpredictable systems in medicine or finance (or even in self-driving cars), but in armed conflict, collateral damage and friendly fire just go with the territory.
Of course, there is a whole moral-philosophical debate about the ethics of autonomous weapons, but right now, empirically, they are simply bad, as a matter of objectively verifiable fact.