re-reading the Altered Carbon trilogy, I'm struck by a fundamental philosophical problem with re-sleeving (upload) - while the book runs with the paradigm of digitally recording the essence of consciousness, and skates over the thin ice of embodied intelligence (the meat is just a machine the s/w of a human being runs on), there's a separate idea of continuity; as with perfect forward and backward secrecy, how is the new running copy anything to do with previous copies, and how do previous copies anticipate the new copy? they don't - they are copies. so philosophically, they aren't the same person, they are separate instances. but worse, if there's any notion of "eventual consistency" in how anticipation (and memory) overlap and interleave, then this just doesn't work at all. not one bit.
Tuesday, March 27, 2018
Thursday, March 08, 2018
What isn't AI?
so data science and its typical tools aren't really AI - machine learning, even deep learning, even generative adversarial nets style deep learning, still isn't intelligent, though it sure is artificial - it's useful, but claiming that a classifier trained on zillions of human-labelled images containing cats and no cats is recognizing cats is just stupid - a human can see a handful of cats, including cartoons of pink panthers, and lions and tigers and panthers, and can then not only recognize many other types of cat, but even if they lose their sight, might have a pretty good go at telling whether they are holding their moggy or their doggy - how? well, because humans (probably) have a large collection of tools evolved (and trained) in the brain, and the brain is embodied, and so has perception, interaction, motion, a sense of things like touch (how furry is that fuzzy looking cat's tail, how a cat holds themselves when prowling, playing or just chilling... etc.)
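to make concrete what "a classifier trained on zillions of human-labelled images" actually does, here's a deliberately tiny sketch: a nearest-centroid classifier in plain Python. the features and data are made up for illustration - the point is that the thing only interpolates over the labelled examples it has been fed, it has no model of what a cat *is*:

```python
# A minimal nearest-centroid classifier: each training item is a made-up
# feature vector plus a human-supplied label; "learning" is just averaging.

def train(examples):
    """examples: list of (features, label). Returns per-label mean feature vectors."""
    sums, counts = {}, {}
    for feats, label in examples:
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, f in enumerate(feats):
            acc[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc] for label, acc in sums.items()}

def classify(centroids, feats):
    """Assign the label whose centroid is nearest (squared Euclidean distance)."""
    return min(centroids,
               key=lambda lb: sum((a - b) ** 2 for a, b in zip(centroids[lb], feats)))

labelled = [([0.9, 0.8], "cat"), ([0.8, 0.9], "cat"),
            ([0.1, 0.2], "not cat"), ([0.2, 0.1], "not cat")]
model = train(labelled)
print(classify(model, [0.85, 0.75]))  # → cat
```

hand it anything outside the distribution it was trained on (a cartoon pink panther, say) and all bets are off - which is rather the point of the paragraph above.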
these tools operate at many levels - some may just be context/recall, some may effectively be analogue programmes that model gravity or other physics things (stuff games software writers call "physics models"), and some may very well look like artificial neural nets (things to de-noise complex signals, and to turn moving images (the retina has gaps and doesn't refresh infinitely fast) into representations that let you find objects and name them (that cool cat is a song by Squeeze :-) )
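the "physics model" kind of tool can be sketched in a few lines - this is the sort of thing games software does, here just a point mass dropped under gravity and stepped with simple Euler integration (the constants and function names are illustrative, not from anywhere in particular):

```python
# A minimal "physics model": how long does a dropped object take to land?

G = 9.81  # gravitational acceleration, m/s^2

def simulate_fall(height, dt=0.01):
    """Time (s) for an object dropped from `height` metres to hit the ground."""
    y, v, t = height, 0.0, 0.0
    while y > 0:
        v += G * dt   # gravity accelerates the object
        y -= v * dt   # position updates from velocity
        t += dt
    return t

# A 5 m drop: close to the analytic sqrt(2h/g) ≈ 1.01 s
print(round(simulate_fall(5.0), 1))  # → 1.0
```

a brain's version would be analogue, approximate and learned, of course - but having *any* such model is what lets you predict where the cat lands before it does.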
there are feedback loops between the "low level" perception stuff and the high level models, so the models are (surprise surprise) nature & nurture... but prefacing learning with a model is going to help with unsupervised learning a lot - if we understand 3D, motion, gravity, materials (skin, bone, fur, muscle, fat, what we're made of, and what these other things are made of: wood, grass, mud, water, air etc), then we don't have to see zillions of images of an X, because we generalize from one or two images to the lots of views we'd expect an X to have when it's curled up asleep, or jumping up 3 times its height to catch a bird.
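the "generalize from one or two images" point can be sketched crudely: if you have a model of how the world transforms things (here just rotation and mirroring, standing in for real models of 3D, pose and motion - a deliberately toy assumption), a single example yields many expected views for free:

```python
# One example, many predicted views: a toy stand-in for model-based generalization.

def rotate90(img):
    """Rotate a small 2D grid (list of lists) a quarter turn clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def expected_views(img):
    """Generate the views our 'model' predicts: 4 rotations x 2 mirrorings."""
    views = []
    for flipped in (img, [row[::-1] for row in img]):
        v = flipped
        for _ in range(4):
            views.append(v)
            v = rotate90(v)
    return views

one_example = [[1, 0],
               [1, 1]]
print(len(expected_views(one_example)))  # → 8 views from a single example
```

a classifier without the model needs a labelled example of every view; one with the model amortizes a single example across all of them - which is the whole argument of the paragraph above in miniature.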
So there's a higher level still than all that, as if that wasn't complicated enough for you: humans (most animals) are pretty autonomous - they have goals (tropisms - find food, find partners, survive, enjoy, avoid pain etc), and they have some slightly less obvious tools like curiosity, imagination and creativity - all with a smattering of randomness. These can help seek out diverse input (and create different interactions) so our low level perception & interaction are constantly refreshed, and our models are updated by challenges (think scientific method & falsification & parsimony/occam's razor etc).
Then there can be another level still - things like self-awareness, consciousness, beliefs, neuroses, and even "taste"/aesthetics, and of course, we are social beings, so ethics and theory of mind, and daft stuff like religion and manifest destiny and other collective psychoses. Ghost bugs in the machine.