ChatGPT is fun, but not an author
In less than 2 months, the artificial intelligence (AI) program ChatGPT has become a cultural sensation. It is freely accessible through a web portal created by the tool’s developer, OpenAI. The program—which automatically creates text based on written prompts—is so popular that it’s likely to be “at capacity right now” if you attempt to use it. When you do get through, ChatGPT provides endless entertainment. I asked it to rewrite the first scene of the classic American play Death of a Salesman, but to feature Princess Elsa from the animated movie Frozen as the main character instead of Willy Loman. The output was an amusing conversation in which Elsa—who has come home from a tough day of selling—is told by her son Happy, “Come on, Mom. You’re Elsa from Frozen. You have ice powers and you’re a queen. You’re unstoppable.” Mash-ups like this are certainly fun, but there are serious implications for generative AI programs like ChatGPT in science and academia.
ChatGPT (Generative Pretrained Transformer) was developed with a technique called Reinforcement Learning from Human Feedback to train the language model, enabling it to be highly conversational. Nevertheless, as the website states, “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.” Several examples show the glaring mistakes it can make, including referencing a scientific study that does not exist.
Many concerns relate to how ChatGPT will change education. It certainly can write essays about a range of topics. I gave it both an exam and a final project that I had assigned students in a class I taught on science denial at George Washington University. It did well finding factual answers, but the scholarly writing still has a long way to go. If anything, the implications for education may push academics to rethink their courses in innovative ways and give assignments that aren’t easily solved by AI. That could be for the best.