The book deciphers the hidden aspects of artificial intelligence. Photo: Provided.
Artificial intelligence (AI) garnered widespread attention when ChatGPT launched in November 2022. Initially, the developer OpenAI didn't heavily promote it, treating it as just an "early test." Yet ChatGPT quickly became a phenomenon: within two months of launch it had more than 100 million users, and OpenAI didn't even have enough computing power to handle the massive volume of traffic it generated.
Will artificial intelligence work miracles for society? Two experts, Arvind Narayanan and Sayash Kapoor, co-authored the book "AI – The Benefits and Harms" (original title: AI Snake Oil) to decipher the true nature of AI. Are the two authors swimming against the tide of history? Hardly: both are leading technology experts. Arvind Narayanan is a professor of computer science and director of the Center for Information Technology Policy at Princeton University in the US. Sayash Kapoor, formerly a software engineer at Facebook, is now Narayanan's colleague at Princeton. Both appear on TIME magazine's list of the "100 Most Influential People in Artificial Intelligence."
Arvind Narayanan and Sayash Kapoor chose the original title, "AI Snake Oil," with the history of snake oil in mind. In the late 19th and early 20th centuries, traders exploited the unscientific belief that oil extracted from snakes offered numerous health benefits. Snake oil was, in short, a scam: some versions were useless but harmless, while many others caused serious harm, even death.
Today, we likewise face all sorts of "AI snake oil": artificial intelligence systems that are overhyped and promoted as tools capable of replacing humans, while in reality they only work well under very limited conditions.
Take text generation as an example. Chatbots can make a strong impression by giving convincing answers on almost any topic, but nothing in their training verifies the accuracy of the information. Generative AI of this kind does not store facts; instead, it learns statistical patterns in language and stitches them together when producing text. In effect, such systems are trained to produce text that sounds plausible, not text that is accurate, and much of what they generate is simply fabricated.
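The pattern-stitching idea above can be illustrated with a deliberately tiny sketch (not an example from the book): a Markov chain that learns only which word tends to follow which, then chains those patterns into fluent-looking sentences. The corpus and every name here are invented for illustration; the point is that the output is grammatical-sounding recombination, with no notion of whether a claim is true.

```python
import random

# Toy training text (invented for illustration).
corpus = (
    "snake oil cures headaches . snake oil cures fatigue . "
    "snake oil causes serious harm . oil cures nothing ."
).split()

# "Training": record which words follow each word in the corpus.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def generate(start, max_words=8):
    """Chain learned word-to-word patterns into a sentence."""
    word, out = start, [start]
    for _ in range(max_words):
        word = random.choice(follows.get(word, ["."]))
        out.append(word)
        if word == ".":
            break
    return " ".join(out)

# Output reads fluently but may combine fragments into claims
# the corpus never made -- plausible, not verified.
print(generate("snake"))
```

Real language models are vastly more sophisticated, but the core limitation sketched here carries over: they optimize for plausible continuations of text, not for factual accuracy.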
Another type of AI is predictive AI, which generates predictions about the future to support present-day decision-making. Developers claim such systems can predict human behavior, such as whether a convicted offender will reoffend or whether a job applicant will perform well. In reality, predictive AI is often inaccurate for many reasons, including variability in the data, irreducible randomness, and the inherent limits of predicting human behavior.
The problem is that the inaccuracies of AI predictions don't affect everyone equally. Those most affected first and most severely are often the disadvantaged, minority groups, and the poor. This further exacerbates prejudices and inequalities in society, especially in sensitive areas like employment, healthcare, and the judiciary.
AI learns from data, and data is generated by humans. This means that every bias, inequality, or distortion in society can be "encoded" into algorithms. When deployed on a large scale, AI not only reflects but also amplifies these problems, making them harder to identify and control.
When artificial intelligence is used to evaluate job applications, classify patients, or assist in legal decision-making, even a small discrepancy can directly impact a person's opportunities, health, and even destiny.
From a broader perspective, the two authors of "AI – The Benefits and Harms" argue that rather than fearing that artificial intelligence will turn deceitful, we should recognize a greater risk: human overreliance on it. Many news sites have been found publishing articles riddled with AI-generated errors on important topics, such as financial advice, and they continue to use AI even after those errors are discovered. Amazon is flooded with AI-generated books, including guides on how to identify mushrooms, an area where errors can be deadly if readers trust the content.
On the other hand, artificial intelligence also poses new challenges regarding the authenticity of information, as the line between truth and falsehood becomes increasingly blurred. Deepfakes, fake news, and mass-produced content are not just technical issues, but also a matter of trust in the digital society.
Regarding whether artificial intelligence threatens the survival of humankind, Arvind Narayanan and Sayash Kapoor argue that the development of AI is similar to that of the Internet. In its early stages, people only used the Internet for specific tasks such as checking email or searching for information on a particular website.
Today, the Internet has become the primary platform for most daily communication and work activities. A similar shift will occur with generative AI as the technology continues to develop. In that scenario, generative AI will no longer be a tool serving individual needs, but will become part of the digital infrastructure, acting as an environment for many knowledge-based activities.
On the other hand, the two technology experts also believe that artificial intelligence is unlikely to cause a sudden mass unemployment crisis, but it will change the nature of many jobs, reducing demand for some professions while increasing demand for others, and even creating entirely new jobs. This trend is similar to past waves of automation. Those directly affected will have to seek immediate sources of income, learn new skills, or switch to a completely different career.
If operated in an uncontrolled and irresponsible environment, artificial intelligence could become a means of manipulating information, violating privacy, or causing widespread harm. Conversely, if properly understood and used cautiously, artificial intelligence can become a powerful tool to help humans solve complex problems. Therefore, there is an urgent need to equip people with the necessary knowledge, critical thinking skills, and a suitable legal framework.