So You Think There Are No Dangers to Using AI Technology Like ChatGPT?
23 world-class scholars map the risk landscape
No doubt you have heard or read something about ChatGPT by now. Its fans hail and hype it as the next major tech breakthrough; its detractors claim it has designs on ending the human race. Regardless of your own view, so-called Artificial Intelligence programs and applications built on Large Language Models (LMs) are making breakthrough advances and enjoying rapid adoption in classroom and professional settings. But the 23 authors who collaborated on the paper "Ethical and social risks of harm from Language Models" believe more work needs to be done to identify and reduce the risks of using these tools.
They published a detailed report to “help structure the risk landscape” (Weidinger et al.). In other words, their work maps out where the potential problems lie, where they come from, and where we should expect to see them in real-world usage. The authors hope tools based on LMs will be used safely, responsibly, and fairly. But in the high-tech world, whose motto is “Move fast and break things,” they realize that hopes alone won’t get the job done.
So, what is a Large Language Model?
Many people are eager to use the emerging technology based on LMs. OpenAI's ChatGPT reached the one-million-user milestone in just five days, faster than any social media platform: quicker than Facebook, Twitter, or Instagram (even faster than Netflix!). Despite that popularity, relatively few users understand the complexity behind these new systems' proprietary curtains. Many conceive of them as having cognitive abilities reflecting human communication. But, as discussed below, they don't.
Addressing these misconceptions is one of the report's goals. To define LMs and the computer-science jargon surrounding A.I. "conversational" systems, or "chatbots," the authors included an appendix of definitions, a thorough bibliography (more than 300 citations), and an abridged table arranged by risk classification. These added resources inform readers who want to dive deeper.