On 31 January 2024 at 22:36 Nealeb said:
…and Artificial Intelligence isn’t really intelligence! It does a pretty good imitation of something that looks like intelligence, but the large language models behind what we tend to think of as AI systems are just using very sophisticated text analysis to make a stab at “what word would come next in this context?” questions. They draw on all the text they have ever seen to make that decision. Train a system purely on material from the Flat Earth Society and it will respond to “Is the earth flat?” with apparently reasoned answers saying yes, it is. Given some of the nonsense one sees reported from social media, the reason we should support the social media companies who don’t want their material used free of charge to feed what is effectively a commercial AI offering is that we don’t want such systems trained on the kind of nonsense that is freely uploaded and then copied across social media. The more times it is copied, the greater weight an AI system might give it. The system simply does not have the intelligence to distinguish fact from plausible fiction. Although, having said that, neither do a lot of social media users…
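To make that concrete, here is a minimal sketch in Python of the “what word comes next” idea – a toy bigram counter, nothing like a real large language model, trained on a made-up flat-earth corpus. The point is that the output can only echo whatever the training text contained:

```python
import random
from collections import Counter, defaultdict

# Hypothetical corpus standing in for "only Flat Earth Society material".
corpus = (
    "the earth is flat . the earth is flat and the ice wall is real . "
    "is the earth flat ? yes the earth is flat ."
).split()

# Toy bigram model: count which word follows which in the training text.
follows = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1][w2] += 1  # tally every observed next-word

def next_word(word):
    """Pick a continuation in proportion to how often it was seen."""
    counts = follows[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Generate text starting from "earth": it can only reproduce its training data.
word, out = "earth", ["earth"]
for _ in range(6):
    word = next_word(word)
    out.append(word)
print(" ".join(out))  # e.g. "earth is flat . the earth is"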
Someone mentioned above, in the context of the current Fujitsu/Post Office fiasco, the “GIGO” principle – garbage in, garbage out. Not entirely relevant to the Fujitsu situation – where it is more like good data in, muck it around with faulty algorithms, garbage out – but it is certainly true in the case of today’s AI systems.
They are astonishingly good at what they do – but that doesn’t mean you can trust them in all situations. Just think of them as “distilled essence of something I read on the Internet” and ask whether you would bet the farm on that!
My intelligence disagrees with Neil’s! That’s because, just like an AI, we are both struggling to understand a complicated dataset: what we have seen, been taught, read, listened to, and eaten, and where and by whom we were brought up. Everything depends on how our brain interprets sense inputs. Computers are analogous.
One model suggests people have two brains. The original is primitive and deals with challenges requiring quick reaction: fight or flee. Its response is emotional, not logical – when attacked by a bear, there isn’t time to out-think the beast. The logical brain evolved much later, and it is mostly responsible for what we call intelligence.
Humans are not unique in displaying intelligence. Crows can count up to four, my cat learned how to open doors by jumping on the handle, and dogs train their human slaves to pick up poo. In addition, human intelligence takes many forms. A talented colleague had a photographic memory, which made him extraordinarily good at his job, but he couldn’t innovate or work in a team. Another could speed-read, was a good manager, and made smart decisions, but hated pressure. I used to play chess with a friend during the lunch-hour: I always won if we played to finish the game within the hour; he always won if we played slowly. Small children learn to talk within a few years of birth, and then most of us lose that facility for language. MRI scanning reveals that the human brain goes through at least two major structural changes before adulthood, and one of them explains adolescent behaviour. Thereafter the brain slowly degrades with age, and older folk tend to lose the ability to innovate, relying ever more on experience. This is very evident in mathematics, where nearly all original work is done before age 30.
So ‘intelligence’ covers a wide range of different attributes, and I don’t believe there is a single definition that covers them all. All forms of intelligence are useful.
Alan Turing is justly famous for proving what computers could and could not do: ‘The Entscheidungsproblem (decision problem) was originally posed by German mathematician David Hilbert in 1928. Turing proved that his “universal computing machine” would be capable of performing any conceivable mathematical computation if it were representable as an algorithm. He went on to prove that there was no solution to the decision problem by first showing that the halting problem for Turing machines is undecidable: it is not possible to decide algorithmically whether a Turing machine will ever halt. This paper has been called “easily the most influential math paper in history”.’ (from Wikipedia)
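The heart of that undecidability proof fits in a few lines. Here is a sketch in Python – pseudocode really, since the oracle is the hypothetical part – of Turing’s diagonal argument:

```python
# Suppose, for contradiction, someone supplies a perfect oracle:
def halts(program) -> bool:
    """Hypothetical: returns True iff program() eventually halts."""
    ...

# Then we can build a program that defeats the oracle:
def contrary():
    if halts(contrary):   # ask the oracle about ourselves...
        while True:       # ...and do the opposite of its verdict:
            pass          # loop forever if it said "halts"
    # halt immediately if it said "loops forever"

# If halts(contrary) returns True, contrary loops forever – the oracle is wrong.
# If it returns False, contrary halts – wrong again.
# So no such halts() can exist: the halting problem is undecidable.
```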
Turing’s proof opens the door to the possibility of Artificial Intelligence, though not with the machines of his time. Nonetheless Turing was able to suggest a measure: basically, if a human conversing with a teletype can’t tell if the correspondent is a machine or a human, then the correspondent is intelligent. The Turing Test does not depend on opinion, beliefs, or preconceptions, and it requires the human to prove his intelligence too.
Turing didn’t know how AI might be implemented, but he did know it was possible. Over the last 70 years the design issues have become clearer, several problems have been fixed, and the technology has improved immeasurably. Processors are thousands of times faster, memory is cheap and plentiful, software can run on many processors in parallel, and maths – like graphics – is done with high-speed hardware. It turns out the hardware needed for high-performance graphics is also well suited to machine learning.
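The reason, if an illustration helps, is that both workloads boil down to multiplying big arrays of numbers, which graphics hardware does thousands at a time. A trivial NumPy sketch, with shapes and data made up for the example:

```python
import numpy as np

# Both workloads reduce to the same operation: matrix multiplication.
# Graphics: transform a batch of 3D vertices by a 3x3 matrix.
vertices = np.random.rand(100_000, 3)      # 100k points of a 3D model
transform = np.eye(3)                      # identity transform as placeholder
moved = vertices @ transform.T

# Machine learning: push a batch of inputs through one network layer.
inputs = np.random.rand(100_000, 3)        # 100k samples, 3 features each
weights = np.random.rand(3, 64)            # a layer with 64 units
activations = np.maximum(inputs @ weights, 0.0)  # matmul + ReLU

print(moved.shape, activations.shape)      # (100000, 3) (100000, 64)
```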
Machine learning is a breakthrough technology. A machine learning program isn’t written to perform a specific task; rather, it is written to extract knowledge from a large dataset, and that knowledge can be built on. The result is analogous to how an organic brain develops. The data is anything that has been digitized: images, text, radio signals, voice, music, weather, medical records, aircraft performance, whatever.
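As a minimal illustration of the difference, here is a sketch using Python’s scikit-learn with an invented two-feature dataset – no task-specific rules are coded anywhere; the program extracts the pattern from the examples it is given:

```python
from sklearn.linear_model import LogisticRegression

# No rules are written here. The program is generic: it extracts a
# decision boundary from whatever examples it is shown.
X = [[0, 0], [0, 1], [1, 0], [1, 1],       # made-up 2-feature samples
     [2, 2], [2, 3], [3, 2], [3, 3]]
y = [0, 0, 0, 0, 1, 1, 1, 1]               # labels supplied with the data

model = LogisticRegression().fit(X, y)      # the "knowledge" comes from X, y
print(model.predict([[0.5, 0.5], [2.5, 2.5]]))  # -> [0 1]
```

Swap in a different X and y – pixels and diagnoses, say – and the same generic program learns a different task.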
Within a narrow range AI already outperforms human specialists. AI is better at spotting anomalous cancer cells than a human microscopist, leading to more reliable diagnoses and to new clues. AI can also be used to analyse fiction and generate new books or film scripts, and an AI-generated film script can be turned into a computer-generated film. I don’t think a major film has been made this way yet, but shorts and sequences certainly have. AI writes better computer programs than most people. Above all, AI can only improve – it isn’t fully developed yet.
Of course, AI suffers the same problem as humans – rubbish ideas due to faulty input! On the other hand, AI is less likely to stick with nonsense. It won’t ignore evidence and facts just because they contradict a human belief. AI is unlikely to start a religious cult, develop toxic masculinity, fall for a Ponzi scheme, support -isms, blame minorities, deny unpleasant realities, jump on the bandwagon, commit crime, or play politics!
The most serious limitation of human intelligence is our inability to process large amounts of information. As such, the human race is pretty ignorant: no-one understands everything. So humans jump to conclusions, and are prone to wishful thinking and paranoia. A tool like AI that provides a dose of reality is surely useful! It is of course possible that lies are the only proof of intelligence.
Humans hate change, even when it’s good for them. Unfortunately AI is going to alter the workplace on a grand scale. Lots of people will have to find new jobs, and it will be painful. There’s nothing new in this: the motor car put millions of horsey workers out of business. My village had two forges, and no-one cares that the blacksmiths have gone.
Dave