ChatGPT seemingly came out of nowhere and rode a wave of hype this spring. The AI bubble hasn't popped, but the initial fascination has dulled: what was once fresh has become routine, and what once seemed to be doing the impossible has been exposed for its flaws.
People still use ChatGPT, but what is it good for? Even its creators don't know. Sam Altman, the CEO of OpenAI, the company behind ChatGPT, said AI would cause a technological revolution leading to "a new kind of society."
What does that even mean? To be fair, computers also changed society, and Altman, as a CEO, has to use hyperbolic promotional language.
But the promise of AI is being sold as something greater than anything ever conceived. Altman told The Atlantic that his younger self dreamed of creating some kind of force that would "far surpass us."
That echoes what many AI evangelists have testified before the U.S. Congress. Samuel Hammond, a writer on AI and economics, warned in Politico of the possibility "that a super-intelligent machine might quickly start working against its human creators."
This image of AI as a sentient tool is, I think, contributing to a false idea of what AI really is. AI can't think. Yet many everyday users treat it as if it can.
On social media, I've seen all kinds of posts claiming to have gotten the answer to a question from ChatGPT. On Quora, a question-and-answer website, users often post ChatGPT's responses and declare the problem solved.
But ChatGPT cannot answer a question. It can only generate a string of text that looks like the text it has studied.
There. I did it myself. I said ChatGPT "studied" text. It didn't. It was trained on text. ChatGPT doesn't do anything in the active voice.
An American football fan on Facebook posted a list of wide receivers that he said "were the best, according to ChatGPT." No, the players on that list were some of the best, according to fans and writers. ChatGPT just scanned existing information and spat out a general summary. Ask it 10 times, and the list would come out slightly different each time.
Many people on Twitter even said they address ChatGPT with "please" and "thank you." Do they say "please" when they press the elevator button?
Maybe it's just an innocent quirk. It doesn't really affect anyone. But I worry it reflects a distorted and harmful perception of AI. Its creators, its users, and the politicians who might regulate AI are imagining it is more than it is.
Exaggerating AI's capabilities could create more problems than AI itself does. If people treat ChatGPT as a living being they can converse with, they risk deepening their own loneliness and letting their social skills deteriorate.
Health magazine reported that people are "using artificial intelligence in place of real therapy for mental health issues." BizAdmark encourages people to use ChatGPT to "cure your loneliness" and "provide emotional support." This points to a future in which people sit alone in front of a screen, directing a web application to display blocks of text for them.
The veneration of AI also leads people to wrongly trust it to answer their questions. A study in Nature found that when users turned to ChatGPT with moral questions, their viewpoints were influenced by its responses, and "they underestimate how much they are influenced."
Moral questions are usually subjective; the correct response is an opinion. Someone discussing or debating a moral question with another human typically knows that the other person is offering an opinion, and will be willing to push back if they don't accept it. But this tech-supremacy viewpoint may lead some people to accept every byte of text AI programs generate without skepticism.
We should instead be thinking of AI as a set of technologies that work with applications to streamline tasks. It can quickly and competently handle repetitive tasks that follow syntactical rules. Most of those benefits will be reaped by companies through B2B and behind-the-scenes processes, not by consumer-facing chat applications.
AI cannot create. The sooner we realize that, the better off we will be.
Mitchell Blatt is a columnist with China.org.cn. For more information, please visit:
http://www.keyanhelp.cn/opinion/MitchellBlatt.htm
Opinion articles reflect the views of their authors, not necessarily those of China.org.cn.