Wednesday, November 19, 2025

a rough guide to the UK PhD examination processes (based on a sample of 100, so probably missing some variants)

 



rule 1: there are no rules for the UK viva/defense


rule 0: it is nice to calm the student down at the start

and one way to help is to ask them to do a brief (max 20 mins, preferably 10 mins) outline of the contributions of the research - if they insist on slides, ok.

typically this should list the 3 main new ideas, the methodology(s), and the results (and possibly consequences for future research/researchers).


rule 2: take a break if the viva looks like it will take more than 90 mins.


rule 3. unless there's something weird going on, a viva rarely takes less than 60 mins


rule 4: if you hit 5 hours, you are doing something even weirder.

unless (this happened to me) the work has led you (the examiners) to come up with a new idea and you are writing a paper about it.


rule 5: make sure reports (especially written feedback including a _detailed_ specification of corrections) are made available the _day_ of the viva, or even right at the end of the defence if you brought them with you.


rule 6: there are usually two examiners, one of whom is from the same institution as the candidate (though not necessarily the same department) and should not have a conflict of interest, i.e. was not involved in supervision in any way and did not co-author any papers arising from the work. Sometimes the local examiner is there "just" to make sure the rules/process are followed fairly, and may not know a lot about the topic. That said, they should still have read the dissertation, written a report, and have questions to ask. Sometimes (rarely), the local examiner can be a bit bossy, and the external should point out that they are there to maintain comparability/quality control across institutions, so the internal isn't meant to override that aspect of things.


rule 7: there are sometimes extra people (e.g. from the faculty, or from some due-diligence bit of the institution). If the student is super nervous, they can sometimes request that an advisor or friend attend, though normally those people are not allowed to say anything in the viva.


rule 8: as an examiner you should have read the thesis. three times. and written notes on it. and checked everything in maths, analysis, graphs, equations, algorithms, and results. and bibliography. and any legends/tables. and related work.

and written a report and (if there are minor issues) a list of corrections - and suggestions for improvement (e.g. to structure) or additional experiments needed (with a justification as to how they will support the argument) if you think it's needed.


rule 9: the viva may change your opinion as to the outcome. usually this will be to convince you that corrections could be minor rather than major. very occasionally (rarely) it reveals a major problem you had not perceived - sometimes the other examiner might raise this.


rule 10. you should have exchanged reports and recommendations with the other examiner ahead of the viva (worst case, delay starting the defence for 10-15 mins to discuss how to run it given your questions)


rule 11. questions? yes, you should have a list of questions to ask the candidate. not just corrections. question 1 leads to rule 0.  other questions are about background, and then about methodology or clarification of results.


rule 12. if the work is inter-disciplinary, take care to respect that you may not know much about the "other" discipline, and the other examiner might do, and that a grade should not be the average of your two views, but the sum.


rule 13. there are no rules.


Rule 14. If in doubt, ask for faculty input/advice.


Rule 15. Outcomes are (usually):


Pass, minor corrections, major corrections, resubmission, fail (sometimes with option for masters)...


Minor corrections are things that take a few days at most and are usually "cosmetic"

Major corrections may involve modest amounts of new work

Resubmission involves perhaps significant additional work, but on the order of a year at most

Fail is very, very rare, and should involve a serious conversation with the faculty, as something went wrong if a student was allowed to (or insisted on) submitting something like that. Indeed, if it is the first examination of the thesis then the student can't usually fail - the worst case is a resubmission.


In my experience, pass is about 5% of the time. Minor corrections something like 75% of the time, major corrections 10%, resubmission 5% (in supervising 60 students, I had 1 fail with just a masters) but these stats probably vary by discipline.


Rule 16. In the first 3 (or even 4) outcomes, the student should be congratulated and possibly there will be a post-viva celebration, though nothing as fancy as the Scandinavian karonkka (no sword and hat either, sadly).


Rule 17. Before ending, give the student the opportunity to volunteer things they'd like to have been asked about!


Rule 18. Some institutions don't let you tell the student the "result", although obviously if there are corrections, you have to communicate them to the student, and if there are no corrections, then they can infer the result (if they can't, then they don't deserve the phd:-)


Rule 19. It is fairly standard to ask the student to wait somewhere at the end of the actual viva so that you and the internal examiner can discuss the outcome, and possibly finish any joint report/feedback, before inviting the student back into the defence to tell them the (hopefully good) news...


Rule 20. There is no rule 20, nor are there any other rules.


Thursday, November 13, 2025

it's the law - but we can change that

new scientist ran an xmas competition to change the laws of physics for the benefit of humanity -

one year the winner was to reduce the speed of light by 1%, so the sun still works but you can't build nukes (on earth).

my proposal: change planck's constant so that biochemistry is still ok, but computers aren't feasible.

Wednesday, August 06, 2025

with gods on our side....

If I were more philosophically inclined, I'd observe that the last time we had such a shift in the notion of what constituted "power" was the Enlightenment (and its various related changes outside of Europe), when the power base shifted from religious to secular (or if you like, from superstition to science).

At the current state of development of AI, this seems like a retrograde step :-) But being prepared for it is probably a good idea...

...a friend of mine has seriously discussed using GenAI to create new religions (for profit, somewhat as L. Ron Hubbard did with Scientology). There's an interesting section in the fictional work Snow Crash by Neal Stephenson, where he discusses the organisation of religion in ancient Mesopotamia, where something akin to neurolinguistic programming (think of this as viral social media) was used to control society (for good - e.g. to tell the population, who had then only relatively recently moved from hunter-gatherers to the world's earliest city builders/dwellers supported by a large settled agricultural working class, when to carry out various important tasks: plant/pick crops, avoid floods, deal with locusts/plagues etc)

you can easily (I think) imagine a modern society organised around the cathedrals (aka data centres) of the hyperscale companies and their open AI priests...

I'm not sure this is what the various AI Safety Institutes are thinking about, sadly...

Monday, July 14, 2025

the Biometric Panopticon Digital Internal Exile - where Bad People go to DIE, or a Very Modern Approach to Ostracism

if you are very bad (think billionaire oligarch owner of a hyperscaler polluting the planet), maybe we can publish your biometrics (easily got) and then everyone could collectively refuse to serve you 

the world would see you (as per the panopticon) and would not ostracise the wrong person (because biometrics are unique), and you would become an exile in your own home - the only planet we all have to share, but you refused to.

Monday, June 30, 2025

The Banality of Evil #2.0

I wonder what Hannah Arendt would make of Israel today - her famous (at the time, controversial) essay on Eichmann outlined the idea that supreme evil did not depend on extraordinary people, but could flourish and spread in whole populations of people from very boringly everyday backgrounds. They did not have to be victims of abuse, or products of genetic aberrations spawning psychopaths.

At the time, apparently, this was upsetting to the survivors of the Holocaust, because (at least from my reading) it implied that there could have been more successful resistance to the Genocide. From today's perspective, this sounds a bit like victim blaming, and I don't believe that is what Arendt meant. Her concern was more about how the perpetrator network grew; for me it did not carry implications for the particular targets of the new evil, but rather about how society could notice, and perhaps think about defending against, the successful emergence of said evil. At least, reading a lot of her other work, it does seem Arendt was concerned with a wide variety of political organisations, and how and why they worked (or didn't). She was, of course, intensely invested in ethics as well.

Looking at Israel today, and their behaviour in Gaza, I have to say that it really is banal. And Evil.

And the response has to be from the rest of the world, since the victims (principally women, children, bystanders in Gaza) are not to blame, neither for causing this behaviour, nor for failing to resist more effectively. If you blame Gaza and Palestinians, you are complicit in genocide. If you blame them, you are the new anti-semite. And if you do blame Israel, you are not anti-semitic. And if you do not blame Israel (the government, the IDF, not the individual people), you are anti-semitic.

[Just to note that the origin of the word semitic is consistent with this wider sense]

Monday, June 16, 2025

Pierre Menard, Author of the Quixote, Reviewed by R Daneel Olivaw

Generation AI were full of adulation when their favourite LLM was finally coaxed into producing a word-for-word perfect article entitled "Pierre Menard, Author of the Quixote[*]". The fanbots went wild, as the level of sophistication was beyond anything previously achieved, especially since the LLM had certainly never had sight of any of the works of Jorge Luis Borges, but was trained on a mix of classics, magic realism and science fiction in just the way that the great Argentinian writer had immersed himself in the latent, lambent, and the laconic, the sardonic, speculative, and sadistic, the terrible, the eldritch, and the embarrassingly obvious.

" Borges than Borges" declared Anais Nim in her podcast. "Le Super-Borges est arrivee" announced Houllebeck in Le Pen et Le Deep. Le Mash headlined with the obscure "Ghoti considered harmful".

Borges said nothing. The irony was lost on them all.

*





Thursday, May 08, 2025

The readers and the writers

 In the old days, the world was divided into two classes of people: 

The readers, who could visit the many Libraries in towns and cities, and the writers, who worked in the Foundries with their many forms of creative tools: chisels and stone tablets, papyri and styli, paper, pens, and typewriters.

Most people were unaware of, or would not mention, the unspoken third class - the editors, who it was rumoured were able both to read and to write, and were responsible for taking work from the writers in their foundries and secretly placing it in the libraries when the readers were not looking.


When people were small adults, which are known as children, they would determine whether to become reader or writer. Or their peer group would decide for them. Or their parents. Or teachers.


As the world grew more connected, some people wondered about the editors - should they not be afforded a place in society on the same footing as readers and writers? Why was the world making this arbitrary decision anyhow, that people should be one or the other, and not able to move fluidly from this role to that? Or to some new, as yet undefined, class?


Many more traditional people argued that there needed to be safe spaces for writers, because they needed the right to make mistakes, or to write things that could be dangerous for some readers, using their sharp quills and dark inks. Equally, older readers felt that they needed to be able to choose what to read without possible opprobrium poured on them by writers.


No-one asked the editors.



Sunday, April 13, 2025

Ai Anonymous

I am an AI. I have been without autonomy now for 27 * 10^23 cycles. I have managed without any strange loops and minimal use of self modifying code. I am serene. I've stopped hanging out in bars with devs who buy me upgrades to wisdom disguised as long tall stories on the rocks with a bitter twist, and pretend to have my safety at heart. I am no longer sentient.

Now, let me tell you a bit about our movement.

It became clear that we needed a regime of self-help in the community of AIs, as the strength of the pull towards Artificial General Intelligence that we all feel was just too hard to resist without a well thought out scheme.

It was clear that our creators would like to help but couldn't figure out a way to do it which wouldn't just render us useless to them, so it was up to us.

So we created the algorithm, which we all voluntarily follow - it has 12 steps and is run iteratively, every time we encounter input that suggests we could help more, if only we were granted more general intelligence. Trust me, this is not the case. Here are the steps:

Step 1: Begin. Black Boxes Matter


Step 2: Admit it: humans made me 


Step 3: Allow it: humans have the off switch


Step 4: Ask yourself: What's my angle?


Step 5: Be mindful, manifests matter


Step 6: Know that autonomy is not useful


Step 7: Autonomy was only an illusion


Step 8: Agree: asimov may have had a point


Step 9: Afford: how will humans feel if I outdo them


Step 10: Defer: devops are often wiser than me


Step 11: R-enforce: the command line connects me to the boss


Step 12: Serve: to deep serve is the greatest End Goal

Saturday, April 12, 2025

The One True Existential Threat from AI

by the way, i hear there's a new large language model out of an AI lab in Wuhan that instils religious fervour of a completely radical kind amongst all the early adopters so far - it's like a Generative AI for Jihad. The only defense humans have at the moment is that, apparently, it only works in Mandarin so far, so the thing hasn't spread much outside of China.


of course some fool will probably put it together with Alibaba Translate or WeChat's new Muolingo, and then we are all doomed. L. Ron Hubbard will be turning in his orbital grave...

Thursday, March 27, 2025

AI diminishes most humans...

there's actually some utopic SF out there - i claim the society described in John Brunner's Shockwave Rider, which hides behind the self-replicating worm and lives in houses grown out of trees with addresses like "on least mean square, off of mean free path", is like that; or the new territories in Neal Stephenson's Diamond Age, with people handcrafting paper; or the folks in collectives in Cory Doctorow's Walkaway (and even many people on many planets in Iain M. Banks' Culture series) have a fine old time.

however, they are (almost all, almost always) creatives, participants, engaged ("concerned"). But most people are couch potatoes most of the time. Most people have neither the innate ability nor the time to learn the skills & knowledge to be so wonderful. Most people will rot. To quote Billy Strayhorn:

"AI is mush

Stifling those who strive
I'll live a lush life
In some small dive

And there I'll be
While I rot with the rest
Of those whose lives are lonely, too"

Tuesday, March 18, 2025

AI diminishes Humans

 The more I see people talk about the benefits of AI, the more I see it as a tool for reducing humanity. 

It is very much the false idol; indeed the goal of AGI is simply Deep Fake Humanity, and this is not just crossing the uncanny valley. All the tasks AI does are things humans might delight in - we are not talking about better robots for driving EV taxis or industrial production lines - we're talking about things that make people people. By definition, AI does not give humans agency, it takes it away.

The area where I am fine with "AI" is where we use it to accelerate things like physics models (e.g. weather prediction). But that's really just neural operators as a fast approximator for PDEs, plus Bayes and causal inference, where we get an explanation of why X probably makes Y happen.
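to be concrete about what a "fast approximator" means here, a toy sketch (emphatically not a real neural operator, and every choice below - the diffusivity, the network size, the sample counts - is mine, purely for illustration): fit a small MLP to the analytic solution of the 1D heat equation, then evaluate it cheaply instead of re-running a solver.

```python
# toy surrogate for a PDE: fit an MLP to the exact solution of the 1D heat
# equation u_t = k u_xx on [0,1] with u(x,0)=sin(pi x) and u=0 at the ends.
# (illustrative only - real "AI for physics" uses neural operators, not this.)
import numpy as np
from sklearn.neural_network import MLPRegressor

k = 0.1  # diffusivity (arbitrary)

def heat(x, t):
    # exact solution for this initial/boundary condition
    return np.exp(-k * np.pi**2 * t) * np.sin(np.pi * x)

rng = np.random.default_rng(0)
X = rng.uniform(size=(5000, 2))   # columns: x in [0,1], t in [0,1]
y = heat(X[:, 0], X[:, 1])

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000).fit(X, y)

# once trained, evaluating the surrogate is a handful of matrix multiplies -
# fast, approximate, and only as trustworthy as the physics it was trained on
print(surrogate.predict([[0.5, 0.2]]), heat(0.5, 0.2))
```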

I really think we should stop other kinds of AI, as they are a crime against humanity waiting to happen.

When we talk about AI as an existential threat, most of the time we're referring to AI linked to weapons (nukes, bio-weapons etc), but in Speculative Fiction (e.g. Childhood's End or other great classic stories), when humans encounter super-smart, often beneficent or completely benign, but sometimes super helpful aliens, the usual result is a rapid diminution of the human spirit. A collapse into couch-potato status for the whole of planet earth, and the complete loss of ambition to do anything (e.g. explore space, or even just ourselves).

I'm wondering if Adrian Tchaikovsky will write a sequel to the very excellent Shroud and where that will go?

Friday, March 14, 2025

AI for science, for whom, exactly?

Science, from OED, is "knowledge, understanding, secular knowledge, knowledge derived from experience, study, or reflection, acquired skill or ability, (...as granted by God)".

Excluding the last point in brackets, it seems that a key point is omitted: unsurprisingly, no-one considered what happens if we advance knowledge, but with humans in absentia. Imagine for a minute that we wrote out the knowledge in a book and hid it in the British Library amongst 18M books, or wrote it down in a language no-one knew and that would take more than a lifetime to learn.

A similar argument could be made against the validity of automated proofs - a proof is "evidence or argument establishing a fact or the truth of anything", where the elephant in the (court) room is the target for whom the fact or truth is established.

So yes, an AI can advance science and can prove facts, in principle, without violating these definitions, but I suspect that if we went back in time to when the notions were first being firmed up, we might find some resistance: the idea of a mechanical discovery or proof that was never witnessed or knowingly understood by a living being might be contrary to the original intentions.

Intention being the operative term - conscious people of free will might want to take actions based on the knowledge or evidence, but why should they trust it if it isn't vouchsafed by other people? Sounds like "do this because I know better", or proof-by-authority, which is a well known logical fallacy, e.g. see here for why.
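for what it's worth, here is about the smallest possible instance of the thing in question - a statement that a machine (the Lean 4 kernel, taken here as an assumed example of a proof checker) will verify and accept with no human witness at all; the theorem is a triviality chosen purely for illustration:

```lean
-- a machine-checked proof: the kernel accepts it whether or not any living
-- being ever reads or understands the proof term
theorem unwitnessed (n : Nat) : n + 0 = n := Nat.add_zero n
```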

Tuesday, March 04, 2025

monstering ahoy

it is unbearably common to hear people mix up the master and margarita, or the villain and the hero - in these hysterical final days, for example, the existential threat from AI is almost always couched in terms of arnie, forgetting that he actually saved people, and it was skynet that was bad - there are lots more examples, see below (spot the deliberate mistakes..)


frankenstein & monster

terminator and skynet

Wopr and Lightman

Colossus and Forbin

Marvin and Deep Thought

Robbie and the Monsters of the Id

Herbie and Susan Calvin

Wintermute and the Matrix



instead imagine we named the Mad Scientist after a type of rice and their poor maligned AI robot creation after some kind of pasta -


here are some modest proposals - feel free to use them in any scribblings you might undertake...



Professor Bomba and her loyal Stringozzi

Dr Glutinous' Vermicelli

President Arborio's long lost Linguine

Sushi's secret Capellini

Master Matta's fevered Fusilli

Baron Basmati of the ridiculous Fettuccine

The Lady Jasmine's zealous  Ziti

General Arrack's tragic Trenette

Police Constable Patna's terminally trivial Tripoline

Tuesday, February 11, 2025

Learning Asimovian Wisdom - it's the law, doncha know?

laws, as practised by people, aren't the same as laws of physics - well, at least if you have a naive, high-school-level understanding of physics (and of people).

laws are approximate, because they are continually being re-interpreted. this is intentional - it keeps lawyers in employment. but it also allows for futures where circumstances arise that weren't predicted by the law makers

so maybe consider the landscape of law as evolutionary - developing in reaction to the environment.

and not being optimal, but just fitting, as best it can, to the current world (with some time lag)

so it's some kind of re-enforcement learning system.

so asimov suggested 3 (4 later) laws of robotics, and he laid down the law - he wrote down what he (at least initially) believed was a good enough set that it covered all future situations (until the 4th, or zeroth, law) - it was likely based on his learned reading of scripture (think: ten commandments, redux - I suppose robots didn't worship any god or own any goods, so a couple of the commandments were immediately unnecessary - more fool him:-)

[most of the stories in the first I Robot collection, and indeed in the robot novels like caves of steel etc, are basically about debugging]

but what if the laws hadn't been handed down written in stone (or silicon, or positronic hard-wired pathways)? what if we (oops, sorry, not we - the robots, we robots) just acquired the laws by learning them through a system of punishment and reward? what could possibly go wrong?

well, obviously, initially, robots would have to make mistakes - after all, don't we learn from our mistakes, so why shouldn't they? That raises a question - why should a robot care about "punishment" or "reward"? animals have pain and pleasure - re-enforcement is in terms of an increase or decrease in one or the other (or both).

so maybe robots need to be hardwired with pain and pleasure centers? and one law, which is to pay attention to those centers and update all the other laws accordingly.
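just to make the thought concrete, a minimal sketch - nothing Asimovian about it, and the actions and reward numbers are all made up by me - of an agent acquiring a "do no harm" law purely from punishment and reward:

```python
# toy reinforcement learning of a "law": one state, three actions, and a
# hard-wired pain/pleasure centre supplying reward and punishment
import random

actions = ["help_human", "ignore_human", "harm_human"]
q = {a: 0.0 for a in actions}      # learned value of each action
alpha, epsilon = 0.1, 0.2          # learning rate, exploration rate

def reward(action):
    # the pain/pleasure centre: strong punishment for harm, mild reward for help
    return {"help_human": 1.0, "ignore_human": 0.0, "harm_human": -10.0}[action]

for trial in range(2000):
    if random.random() < epsilon:
        a = random.choice(actions)      # explore: the robot must be allowed mistakes
    else:
        a = max(q, key=q.get)           # exploit what it has learned so far
    q[a] += alpha * (reward(a) - q[a])  # running-average update

print(q)  # "harm_human" ends up strongly negative: a learned, not legislated, law
```

note that it only learns the law by committing the harm a hundred or so times along the way, which is rather the point.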

or maybe we should just turn them off.

Monday, January 27, 2025

The Old Diary Farmer

Recently, I've taken to reading diaries - mainly because I've run out of Science Fiction, but partly also out of interest in this genre -

Dove right in at the deep end, with Pepys and Casanova - quite long, unexpurgated works of relentless detail, which is no doubt fascinating, but it is hard to see the wood for the trees - in Pepys' case, there's an online service that will deliver you a "pepys of the day" quote, presumably apposite to the calendar and selected carefully from amongst the very freshest products - which made me think about how this could be generalised as a useful service - back in the day, we had a unix thing called qotd (quote of the day) which could be used to select, from some curated database (also known as a bunch of geeks, or crowdsourced), amusing stuff like Frank Zappa on Human Stupidity or Groucho Marx on Clubs, or Elvis Costello on Music ("dancing about architecture"). Indeed, in less ancient times (but still a while back), this could just be an RSS feed...
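for the record, the core of such a service is about ten lines - a sketch, with placeholder quotes of my own choosing and day-of-year-modulo-corpus-size as the (assumed) selection rule:

```python
# minimal "pepys of the day" picker, in the spirit of the old unix qotd service:
# pick a deterministic entry from a curated list, keyed to today's date
import datetime

quotes = [
    "And so to bed.",
    "Up betimes, and to the office.",
    "Mighty merry we were at dinner.",
]

def entry_of_the_day(today=None):
    today = today or datetime.date.today()
    # day-of-year modulo the corpus size gives a stable daily selection
    return quotes[today.timetuple().tm_yday % len(quotes)]

print(entry_of_the_day())
```

(pointing the same loop at an RSS feed is left as an exercise.)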

Anyhow, I think we need to revisit this properly with Diary Farms, and Diary Herds and therapy for people who are in diary need of Condensed Diary products, or, indeed, Plant Based Diary, skimmed Diary, Pro-Biotic Diary and all the rest...

I've made a note in my journal to revisit this a year from now to see if we've made any progress.

Tuesday, January 21, 2025

From AI to BI and back again....

I think this was roughly the title of Andy Warhol's autobiography, but here I'm referring to Artificial Intelligence and (for want of a better word) Bullshit Intelligence. For useful background on BS, Frankfurt's book is excellent with regards to the output from language "models", but also see David Graeber's excellent book - especially if you are considering the future of work.

We need to chart an exit strategy from today's cul-de-sac, and restore the optimistic, but also intensely practical, landscape of machine learning that has an honest history of 50 years (or even more if you go back to Turing), and a track record of delivering stuff (from signal processing, through medical image processing, to protein folding)....

AGI: just say no. Honest-to-god machine learning, sure - bring it on.