ChatGPT might be great for answering quick questions or even helping you get started on a blog post, but one thing the chatbot isn’t good at is writing code.
A recent study from Purdue University looked at how ChatGPT responded to 517 different questions from Stack Overflow (SO). The results were pretty underwhelming.
“Our examination revealed that 52 percent of ChatGPT’s answers contain inaccuracies and 77 percent are verbose,” the researchers wrote in the paper, which has not been peer-reviewed and was published on a preprint site.
Even more concerning, the study found that 54% of the chatbot’s errors appeared to stem from it not understanding the concept behind the question being asked. Even when it did understand the question, it often struggled to provide a correct answer.
“In many cases, we saw ChatGPT give a solution, code, or formula without foresight or thinking about the outcome,” they said.
The study underscores the importance of fact-checking ChatGPT’s answers.
It’s worth noting that in January, Insider reported that an Amazon engineer used ChatGPT to answer interview questions for a software-coding job at the company, and the bot got them all right.
While it might not be the best coder, the chatbot is still expected to put a dent in the US job market and potentially disrupt 19% of professions.
A study from OpenAI in March found that the technology could be used in place of human translators and interpreters. Long term, it could also affect professions such as writers and authors, mathematicians, tax preparers, accountants, and auditors, among others.
That study also noted that ChatGPT has a tendency to make up answers, so even if it can handle work typically done by humans, a human will still need to oversee that work to ensure it is correct.