Monday, July 31, 2023

Fermi's last paradox.

Recent revelations about UFO sightings in the USA have driven some people to revisit Fermi's paradox once more.

Let's revisit the basic idea: we have some notion of the number of galaxies in the universe, the number of stars per galaxy, the distribution of stars with exoplanets, and the probability that those exoplanets are in the Goldilocks zone - and so some idea, based on one prior (life on Earth), that intelligent beings must be out there, and indeed out there in pretty large numbers.
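The back-of-envelope version of this counting argument is the Drake equation. Here's a sketch; every factor below is an illustrative guess on my part, not a measurement, and the point is just how sensitive the answer is to them:

```python
# Drake-style estimate of detectable civilisations in our galaxy.
# All factors are illustrative assumptions, not measurements.
R_star = 1.5      # new stars formed per year in the Milky Way
f_p    = 0.9      # fraction of stars with planets
n_e    = 0.5      # habitable ("goldilocks") planets per such star
f_l    = 0.1      # fraction of those where life appears
f_i    = 0.01     # fraction of those developing intelligence
f_c    = 0.1      # fraction of those that become detectable
L      = 1000     # years a detectable civilisation lasts

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(N)  # with a short lifetime L, N comes out well below 1
```

Note that with a short lifetime term L - which is exactly the point argued below - the estimate collapses to "nobody around to hear".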

This is a sampling bias of massive proportions.

While the idea that life should be likely given the right circumstances is supported by the observed statistics of the basic ingredients (at the planetary level), the Fermi paradox is this: given how long we have been around as an intelligent species, capable of observing stuff like this, the chances that others are around at the same time should be very high - and yet we have not (convincingly) encountered any of them, or at least not visibly, nor have they deigned to contact us.

So I have two arguments about this not being a paradox at all, one based on science fiction, the other based on fantasy.

1. SF - Asimov's Foundation series started as a riff on Gibbon's Decline and Fall of the Roman Empire. In it, he posits a galaxy-wide empire which will eventually collapse due to internal contradictions (actually, he'd never have used such an overtly Marxist term, since he was an ardent capitalist, but his psychohistory theory smacks of Marxist dialectical materialism). In his telling, the only way the "dark ages" that follow an empire's collapse could be mitigated was via the Foundation, and even that couldn't cope with the noise introduced by accumulated small deviations in individual behaviour, so a Second Foundation was needed to correct for the noise (and even they nearly lost to the Mule, an extreme perturbation). So good old Prof Isaac basically invoked magic (ok, in the spirit of Arthur C. Clarke, it was dressed up as a technology so advanced as to be indistinguishable from magic). Indeed, in later books, when he merged two of his future histories - the robots and the Empire - together, he made up yet more magic (Gaia, and emergent, ethical, self-modifying sentient artificial beings) ... sigh

The key point here is that empires don't last. We have a set of past "civilisations" whose lifetime distribution we could model - Mesopotamia, Songhai, the Mayans, the aforesaid Romans, the Brits, the EU, the USSR, etc etc - look, they don't last long.

Worse, when they encounter another empire (British v. Moghul), inevitably one disappears.

So in space, chances are most organised tech societies don't last long enough to find one another, but occasionally, when they do, one absorbs the other. Only it still fades away after (say) 100 years.

And note that if FTL is not possible, we need to sustain a tech society across generation-starship voyage times, which could be tens of centuries at least.

2. Fantasy. Maybe dark matter and dark energy are anathema to intelligence. Maybe there is a "dark intelligence" which absorbs all smart beings trying to make it through to the next light zone. Maybe there is some sort of truth behind Good Omens and other stories - there's a bunch of adversaries out there, but they aren't what we would call "civilised" - they are creatures from hell. After all, didn't a very smart human once say "Hell is other people"? Maybe Fermi's paradox isn't a paradox; it is just that between all the little possible utopias is a vast abyss, not full of nothing, or of zero-point energy, but full of demons.


Note Frank Zappa already remarked that Hydrogen is not the most abundant substance in the Universe, and that stupidity was far more abundant. Hence, Dark Matter, Dark Energy, Dark Intelligence.

Tuesday, July 11, 2023

larging it language models

so let's deconstruct this bullshit.


generative models - i wrote a random number generator in 1976. it used an ancient technique still in use today coz it works - it was generative - that doesn't mean anything.
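For what it's worth, the "ancient technique" was presumably something like a linear congruential generator (an assumption on my part - the post doesn't say which technique), and it is indeed still in use:

```python
# Linear congruential generator - a decades-old technique still found in
# many standard libraries. The constants are the Numerical Recipes ones.
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Yield an endless stream of pseudo-random integers in [0, m)."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state

gen = lcg(1)
print([next(gen) for _ in range(3)])
```

It "generates" numbers in exactly the same trivial sense: a deterministic rule cranked forward from a seed.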


foundation ai - what's that about? the mule defeated the foundation, last I heard. what a load of tosh.


large language models - not at all - statistics of utterances, unutterable bollocks. that's not language, that's verbal diarrhoea.


"attention is all you need"? sure, if you have nothing to tell people about, it sure is.


interestingly (since penning this rant) I've read a whole slew of papers about shrinking NNs in general, and pruning LLMs specifically... still a research thing, but the Lottery Ticket Hypothesis suggests this needn't remain just a post-training step (though simply starting from a smaller architecture produces lower-accuracy models... hmmm, why?)
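A minimal sketch of what "pruning" means here, assuming simple magnitude pruning on a weight matrix (illustrative only - real LLM pruning is structured and far more involved):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    flat = np.abs(weights).flatten()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
Wp = magnitude_prune(W, 0.5)
print((Wp == 0).mean())  # roughly half the weights removed
```

The Lottery Ticket Hypothesis observation is that the surviving sparse subnetwork, rewound to its original initialisation, can often be retrained to match the full model - which is why pruning needn't be only a post-training step.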

phew. let's get shot of this hype cycle and back to fixing the planet.