
ELIZA: The sixty-year-old chatbot that defeated ChatGPT

MIT Laboratory, the year 1966. The air smelled of a mixture of coffee and ozone from large mainframe computers, their flashing lights dancing hypnotically on the walls.

"Leave us alone..."

Professor Joseph Weizenbaum, a serious computer scientist, was sitting at the keyboard next to his secretary – who had just said, "Leave us alone..."

And she was saying this to her boss! She wasn't asking for privacy with a colleague, a lover, or a friend. She wanted to be alone with his creation: with ELIZA – a chatbot so convincing that it made people feel it understood them!

The machine that listened

Weizenbaum wasn't creating "intelligence" in the modern sense of the word. ELIZA was in fact a simple program that did one thing: it searched sentences for keywords and turned them back into questions. But that was enough.

For example, if you wrote:
"I feel tired and lost."
ELIZA replied:
"Why do you feel tired and lost?"

And people kept talking. And on. And on.
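The trick above can be sketched in a few lines of Python. This is a minimal, illustrative reconstruction of ELIZA-style keyword reflection – the rules and pronoun table are invented for the example and are not Weizenbaum's original MAD-SLIP script:

```python
import re

# Pronoun swaps so the reply mirrors the speaker's words back at them.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "your": "my"}

# (keyword pattern, response template) rules, tried in order.
RULES = [
    (re.compile(r"\bi feel (.*)", re.I), "Why do you feel {}?"),
    (re.compile(r"\bi am (.*)", re.I), "How long have you been {}?"),
]

def reflect(fragment):
    """Swap first-person words for second-person ones ("my" -> "your")."""
    words = fragment.rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in words)

def respond(sentence):
    """Fill the first matching rule's template with the reflected
    fragment; fall back to a neutral prompt when nothing matches."""
    for pattern, template in RULES:
        m = pattern.search(sentence)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please tell me more."

print(respond("I feel tired and lost."))
# -> Why do you feel tired and lost?
```

There is no understanding anywhere in this code – just pattern matching and word substitution. That is the whole "mind" people were confiding in.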

The most famous version of the program – DOCTOR – imitated a Rogerian psychotherapist. No complicated advice, no clever analysis. Just mirroring your words and subtle questions that made you open up even more.

Psychologists then began to talk about a phenomenon that was given the name ELIZA effect: the human tendency to attribute feelings, intentions, and "minds" to machines, even when there is really nothing there but a few lines of code.

And Weizenbaum? He was horrified.

An experiment that got out of control

Originally, it was just supposed to be a test of human-machine communication. But when he saw how even his own secretary – who had followed the program's creation day by day – began forming an emotional relationship with ELIZA, something disturbing occurred to him.

Maybe the truth isn't enough for people. Maybe the illusion that someone is listening to them is enough for them.

Weizenbaum later wrote a book, Computer Power and Human Reason, in which he warned against overestimating the capabilities of computers. He also described the programmers of his time as "unkempt, sunken-eyed young men sitting at consoles with their fingers poised to attack the keyboard like gamblers watching the dice" – a description that still feels eerily relevant today.

And then ELIZA disappeared. The code was lost in the MIT archives, and it became nothing more than a footnote in history.

Forgotten code and return from the past

The year is 2021. In the dusty archives of MIT, Jeff Shrager discovered a folder titled "computer conversations". Inside was a treasure: faded printouts, punched tapes, handwritten notes, and an almost complete ELIZA source code in the MAD-SLIP language.

But bringing it back to life wasn't easy. It required building an emulator of the long-forgotten IBM 7094 computer, deciphering old dumps, and putting the pieces back together. It took three years before Rupert Lane's team could announce in December 2024:

"ELIZA is speaking again."

And not only that. The scientists also discovered a feature Weizenbaum had only mentioned in passing – ELIZA could be taught new answers. You just typed the "+" sign and the program would ask: "PLEASE INSTRUCT ME".

It was like an ancient version of ChatGPT – only instead of billions of neural network parameters, you had to enter each new conversation rule by hand.
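The teach mode described above can be imagined as something like this hypothetical sketch: typing "+" triggers the instruction prompt, and each taught answer is just another hand-entered rule appended to the list. The function names and storage format here are assumptions for illustration, not the original MAD-SLIP implementation:

```python
import re

# One built-in rule: (keyword pattern, response template).
rules = [(re.compile(r"\bi feel (.*)", re.I), "Why do you feel {}?")]

def handle(line):
    """Answer a line of input; "+" switches to instruction mode."""
    if line.strip() == "+":
        return "PLEASE INSTRUCT ME"
    for pattern, template in rules:
        m = pattern.search(line)
        if m:
            return template.format(m.group(1).rstrip(".!?"))
    return "Please go on."

def teach(keyword_pattern, response_template):
    # A "learned" answer is nothing more than a new hand-entered rule.
    rules.append((re.compile(keyword_pattern, re.I), response_template))

print(handle("+"))  # -> PLEASE INSTRUCT ME
teach(r"\bmy mother (.*)", "Tell me more about a mother who {}.")
print(handle("My mother worries a lot"))
# -> Tell me more about a mother who worries a lot.
```

No training, no statistics – "learning" here means a human typing in one rule at a time.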

Turing test – who is human and who is machine?

Meanwhile, on the other side of America, something big was brewing. Cameron Jones and Benjamin Bergen recruited nearly 2,000 volunteers and pitted three "rivals" against each other in the legendary Turing test: the state-of-the-art GPT-4, its predecessor GPT-3.5… and good old ELIZA.

And here's a little digression: What exactly is the Turing test?

It was proposed in 1950 by the British mathematician and brilliant pioneer of computer science Alan Turing (you can find more about his life and genius in our article Alan Turing – The Genius Who Defeated the Nazis).

The Turing test is simple:

  • You have a human, a machine, and a judge who communicates with them only through text.
  • The judge cannot see who is who and tries to figure out who is human and who is machine.
  • If the machine can fool the judge into thinking it is human, it has passed the test.

In other words – it’s not about whether the machine thinks. The question is whether it can pretend that it thinks.

And the winner is…


The results of the experiment were surprising.

  • GPT-4 fooled humans 49.7% of the time – a decent performance for modern AI.
  • GPT-3.5 reached only 20%.
  • And ELIZA? With its primitive trick of reflecting emotions, it reached 22% – defeating GPT-3.5.

How is this possible? It turns out that people judge the “humanity” of chatbots mainly based on how they handle emotions and social interaction. And that's exactly what ELIZA was surprisingly good at.

While modern AI models sometimes show off their intelligence a bit too ostentatiously, ELIZA just... listened.

What exactly is "human"?

ELIZA was created as a simple experiment between 1964 and 1967. It was not “intelligence.” Nor was it an attempt at the perfect conversational agent. And yet it showed something that hasn’t changed in 60 years:

People don't just want smart answers. They want to feel heard.

Maybe we don't need AI that thinks. Maybe we need AI that can be a silent mirror of our words.

And perhaps that's why even today - in the age of neural networks, billions of parameters and ultra-modern algorithms - good old ELIZA still has something to say.

✨ Want to read more about how AI thinks? Try the article – How artificial intelligence thinks (and why it sometimes makes up ideas)

Or what AI would advise “Big Girls” – New life after 50 through the eyes of artificial intelligence

Note for our readers: Some of the links in our articles may be affiliate links (but definitely not all of them!). That means if you buy something through them, we may receive a small commission – at no extra cost to you! 💛 These little rewards help us keep the blog running: covering hosting, tools, and most importantly – the time and love we put into creating meaningful content for you. Thank you for supporting us. Every click is a sign that what we do matters.
