There was recently an AI fake Drake video that went viral, causing the intellectual property rights folks to lose their minds over someone cutting into their cash flow. I really hope AI will give the whole ridiculous-Disney-makes-money-forever copyright edifice a kick in the groin.
The giants in the field of AI in the 50’s, like Norbert Wiener, who predicted that AI robots would soon be doing everything for us, could not have been more wrong. What they failed to realize is that an arm connected to a body (by evolution) has millions and millions of things that need to be connected to work. So robots won’t be autonomously making your dinner and serving it to you any time soon.
AIs will be able to relate to you, though. ELIZA was a conversational computer program developed in the 1960’s that people became emotionally attached to. I mean, look at the Japanese men who marry their sex dolls. It is not hard to imagine GPT becoming your therapist. Doctors and lawyers are more likely to be replaced by AI than a cook or a ditchdigger. In fact, software like Blue J is already mechanizing aspects of a tax accountant’s or lawyer’s practice.
There is big hype now that GPT-4 is a reality that can explain jokes, or that AI can make sci-fi movies, but the rest of this piece is about lowering the volume a bit on this hype.
a) A short bit of history
Herbert Simon, a 1950’s pioneer in this stuff, wanted to use a term like “complex information processing” for a name. Unfortunately John McCarthy’s term “artificial intelligence” became the more human-sounding label. Simon’s term would have been a lot less hype-able.
In the 50's and 60's, titans of computer science were saying human-level intelligence was just around the corner. When this did not pan out, military funding for AI collapsed, especially after the 1973 Lighthill report to the British government. This has been called the first AI Winter. The 80's saw the development of parallel computing, and funding came roaring back with the first applications of neural networks, where computers did not need rules; they simply learned on their own. When this did not work out in the 90's, funding again collapsed in the second AI Winter, with PhDs being fired in droves from their jobs at tech startups. In the 2000's, vision and speech systems started grabbing headlines. A few cracks are now showing up, like the brittleness of computer vision for self-driving cars, where a minor bit of photoshopping can lead to catastrophic errors. Until very recently many were saying another AI Winter was coming sooner than we think. But generative AI (GPT-3, or 4, or 5, and so on) seems to have knocked this thought away (for now).
b) What is intelligence anyway?
Humans will take one look at a kid doing astrophysics at 10 years of age and immediately say, “That kid is intelligent!”. It seems to have something to do with how that kid grabs information from the environment and processes it. Ehhh. This sounds like a computer, doesn’t it?
This smart kid might sense a regularity in nature à la Murray Gell-Mann and perhaps make a prediction à la Lisa Feldman Barrett. This talent is something more than just scanning a database like AI does. Think about the original data set of all data sets…Tycho Brahe’s positions of the stars in the sky, accumulated over decades of him lying on his back under the night sky, measuring with his sextant. This data needed a human insight, in this case Kepler’s, about the unique pattern of the “stars” that didn’t behave like stars, i.e., they were in fact planets. And these planets’ orbits, according to Kepler’s hunch, swept out equal areas in equal times as they revolved around the sun, because each orbit was an ellipse. This sounds different than a computer just following an algorithm, doesn’t it?
Intelligence also seems to have something to do with succinct compression of knowledge. Newton was able to capture those orbits of planets in a nice short formula, his famous F=ma (plus his law of gravitation). Einstein’s E=mc² is another example. Computers are really good at the compression of data in…let’s say…a photo. That does not make the computer Newton or Einstein.
A human’s intelligence comes from a human brain. Duh! But a bicameral (i.e., left and right side) brain developed over hundreds of millions of years, not centuries, of evolution. Five hundred million years ago, the sea anemone Nematostella vectensis had the first features of this brain split. Psychologist Howard Gardner is well known for coming up with his nine different types of human intelligence. This really doesn’t sound like a computer…at least one made of silicon chips. Each half of the human brain seems to have a different interpretation of the world? Maybe.
Artificial intelligence (AI) is fundamentally different from human intelligence (HI). AI chemically is silicon; HI chemically is carbon. The building block of AI is a digital transistor, which is excellent at calculating. The building block of HI is an analogue neuron, which is terrible at calculating. The brain doesn’t do math computations in a serial manner like a von Neumann architecture computer with a huge, precise memory. A neuron roughly sums up a bunch of inputs and either produces an output or not, but once that output has fired off down the axon it is gone…no precise memory in that individual neuron.
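If you like seeing ideas in code, here is a minimal sketch of that neuron caricature, a toy threshold unit in Python (a cartoon, not a real biological model; the weights and threshold are made up for illustration):

```python
def neuron_fires(inputs, weights, threshold=1.0):
    """Crude all-or-nothing caricature of a neuron: sum the weighted
    inputs, then either fire (True) or stay silent (False)."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return total >= threshold

# Same neuron, two input patterns. All it ever emits is a spike or no
# spike; nothing about the inputs is stored in the neuron afterward.
print(neuron_fires([0.9, 0.8], [0.7, 0.6]))  # True  (0.63 + 0.48 = 1.11 >= 1.0)
print(neuron_fires([0.2, 0.1], [0.7, 0.6]))  # False (0.14 + 0.06 = 0.20 <  1.0)
```

Notice there is no stored number left behind after the “spike”; contrast that with a transistor-backed memory cell that holds its bit precisely until overwritten.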
c) What is AI? What is AGI?
An AI chess engine can play chess better than any human by following an algorithm. Ditto for AI playing Go. DeepMind’s AlphaGo people declared victory in 2016 and moved on. Hold it! Not so fast! Recently an amateur showed how a top Go engine could be tricked. Tricking AI with these adversarial attacks is why we don’t have fully self-driving cars…and maybe never will…regardless of what Mr. Musk says. We will continue to see headlines like “Computer Beats Go Champion” but the headline should read, “Computer Makes Go Champ Better”.
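To see why these adversarial attacks are so cheap, here is a toy illustration in Python. This is not the actual Go-engine exploit, just the general trick on a made-up linear classifier: a tiny, targeted nudge to the input (the “minor photoshopping”) flips the model’s answer.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)        # weights of a toy linear classifier (the "model")
x = 0.001 * np.sign(w)          # an input the model confidently scores as positive

score = w @ x                   # > 0: model says, e.g., "stop sign"
x_adv = x - 0.01 * np.sign(w)   # tiny per-pixel nudge, aimed against the weights
score_adv = w @ x_adv           # < 0: the same "image" now flips class

print(score > 0, score_adv < 0)  # True True
```

The perturbation is small in every coordinate, but because it is aimed exactly where the model is sensitive, the classification collapses, which is the brittleness problem in a nutshell.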
AGI (Artificial GENERAL Intelligence), on the other hand, is just the opposite of a machine slavishly following some algorithm. As pointed out by David Deutsch, a true AGI should be able to decide it doesn’t want to play chess…it might want to play checkers…it might not want to play at all. A real AGI should be able to have a beer and joke around with you.
Human intelligence is very specific for our planet because of 4 billion years of Darwinian evolution. But we ain’t so special! Anything biological on earth has also been the beneficiary of this 4 billion years. A cockroach in its environment does amazing things. My guess is that when Homo Sapiens wipes itself out the cockroaches will take over Hellstrom Chronicle style, just like the mammals did after the dinosaurs were wiped out.
A conclusion drawn these days is that because AI has already gotten good at chess or at looking at pictures, it is going to start thinking very soon. This is a naïve belief that raw computational power equals human intelligence, which is a complete fallacy. Human intelligence is a creative act, usually done by many humans over many years, resulting in a principle which can be generalized outside the original data set. That is NOT GPT-3, or 4, or 5,000,000.
d) Is AI going to KILL us all?
There is a long history of alarm bells about machines taking over from humans. This worry is referred to as the Alignment Problem, which asks whether AI machines' goals will be the same as (aligned with) humans' goals. I.J. Good, a colleague of Alan Turing's at Bletchley Park in the 1940's, said that the first ultraintelligent machine is the last invention man need ever make, provided the machine is docile enough to tell man how to keep it under control.
The fundamental issue with the alignment problem, according to Stuart Russell, is the inability of humans to completely define an objective. This means that the King Midas issue is always present. King Midas wished for everything he touched to turn into gold, which was great until his food became gold metal that he was unable to eat. Like…Houston, we have a problem. In the self-driving car scenario, the destination alone should not be “the objective,” as the “intelligent” car may then drive a hundred miles an hour to get there. In the “curing cancer” scenario, the intelligence may just “unethically” give millions of people cancer-causing agents in order to run experiments on who gets cancer and who doesn’t.
There are other really sincere, smart, plugged-in people out there, like Eliezer Yudkowsky and Gary Marcus, warning of this risk also. There is a recent request for a moratorium on AI signed by thousands. Dumb. Top-down commandments are not going to stop the development of generative AI. Any moratorium will just be disobeyed by some country or somebody. Any new technology is always claimed to be ruining the young. My parents bitched about TV’s effect on me, and my grandparents bitched about radio’s effect on my parents. Apparently this time it’s different. I doubt it.
Greg Egan’s science fiction stories are mostly about “coming to terms with what it will mean when our growing ability to scrutinize and manipulate the physical world reaches the point where it encompasses the substrate underlying our values, our memories and our identities.” I doubt it will ever go this far but it makes for good sci-fi.
e) A few words about GPT
Generative AI systems like GPT are large language models (LLMs): neural nets trained on trillions of words of text to come up with what the next word statistically should be. Well, actually…that’s not totally true, statistically speaking. There is some randomness thrown in there to make the output look more creative (called turning up the temperature by the code writers). Anyway, you can think of LLMs as super-autocomplete or, as Naval Ravikant says, a calculator for reading and writing.
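Here is what that temperature knob looks like in code: a small Python sketch where the “model” is just a made-up table of scores (logits) for a few candidate next words. Real LLMs produce these scores from a neural net, but the sampling step itself really is this simple.

```python
import math
import random

# Hypothetical scores for the next word after "the cat sat on the ..."
logits = {"mat": 2.0, "roof": 1.0, "moon": 0.2}

def sample_next(logits, temperature=1.0):
    """Divide the scores by the temperature, softmax them into
    probabilities, then draw one word at random. High temperature
    flattens the distribution (more "creativity"); temperature near
    zero almost always picks the top-scoring word."""
    scaled = {w: s / temperature for w, s in logits.items()}
    biggest = max(scaled.values())
    exps = {w: math.exp(s - biggest) for w, s in scaled.items()}
    total = sum(exps.values())
    probs = {w: e / total for w, e in exps.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

# At a very low temperature the choice is effectively deterministic:
print(sample_next(logits, temperature=0.01))  # almost surely "mat"
```

The whole “super-autocomplete” loop is just this draw repeated: append the sampled word, re-score, sample again.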
Generative AI is all the rage these days. Supposedly, AI is going to be your PDA (personal digital assistant) to organize your holidays, financial stuff, wine tastings, and so on. The money being thrown into AI now reminds me of the dot-com mania days. In fact, the biggest dollars are being spent not by governments or Google but by big hedge funds doing algorithmic trading. A Tobin Tax would crash this in a nanosecond.
If you are neurologically inclined, you can think of GPT as a Broca's area for a computer. And that’s it….no Wernicke’s area, no rest of the brain, no body. By this reasoning AI does not “think”. But now I am getting into this whole word game Ludwig Wittgenstein wrote about.
LLMs don’t think because they are feed-forward only. There is no feedback like in biological homeostasis. It’s all just one giant Broca’s area! However, they can help people pass the LSAT or automatically shoot back at trolls. There are even generative AI business models out there for startups that can scrape all the data any one person has put on the internet and then give a psych readout on them that would rival the FBI’s. Scott Alexander of Slate Star Codex fame told a great story in 2018 about how something like GPT could be used as a group destabilization tool…wouldn’t be surprised if DARPA was looking into this weapon.
The words in these training documents come from what people have written in the past…yes, that is past tense…..so most of it is really data of the dead….so maybe this will be the new religion….like talking to long lost relatives….believe it or not there are/have been startups with a business model like this too.
There are two things AI doesn’t get. One is preference falsification, otherwise known as not telling the truth. The other is Russell conjugation (I am firm, you are obstinate, but he is a pig-headed fool), otherwise known as selling a biased story. One could argue that this deception was used by Homo Sapiens to “rule the world”. There is a really famous Robert Trivers quote that goes “It is more than ironic that deception (and its propagation) is the file on which the tools of intellectual development were sharpened”. This ain’t GPT.
f) The Turing Test
I guess any little blurb like this would be incomplete without some Alan Turingy-Cumberbatchy-Imitation Game comments.
Turing really did settle the question of whether AGI is possible from a machine. But until we have well-functioning quantum computers this ain’t happening.
The really famous Turing Test was basically a human and a computer texting each other while some other guy tried to figure out which party was the computer. It was really a philosophical question, not an engineering one.
The real trouble with all this sh*t is that you text less than you say (text doesn’t pick up eye-rolling or other contemptuous facial expressions) AND you say less than you think (a politician may be very friendly and smiling while shaking the hand of another politician, but he might well hate the other guy’s guts!). So as you go from text to speech to thought there is more actionable information. Some say GPT renders the Turing test obsolete, but Sophia doesn’t have perfect facial expressions and generative AI does not think. In fact, GPT can be made to look really stupid if you badger it with enough prompts.
g) Medical AI
Computer people in charge of budgets know to be suspicious when the words artificial intelligence are said during a sales pitch especially when combined with words like “seamless integration” or “next generation”. Medical people in charge of “AI” budgets, unfortunately, are not skeptical enough.
For the most part, AI in medicine has been long on promise and short on results. A famous example was IBM's Watson, which was sold to MD Anderson for millions of dollars as a cancer-diagnosis tool. It did not live up to the hype. AI systems (which mean more employees) are now being promoted for installation in operating rooms, analogous to the black boxes on commercial aircraft.
There are reasons to be hopeful about AI in medicine. Reading retinal scans is one good example. The most honest statistics might come from what people type into the Google search box in a specific geographical area. A big uptick in “anosmia” searches has been shown to happen at the start of a local COVID outbreak. But this isn’t really anything more than data science. It’s hardly intelligence acting all on its own.
Ditto for AI operating room robots like da Vinci. These are incredible tools. They are used by intelligent humans. They are not artificially intelligent. But if you are going to sell something for that much money these days you better stick an AI moniker on it.
h) Finally
AI is another human tool. No AI Armageddon.
This piece is dedicated to the great and thank-god-not-yet-late Alex Katramadakis.