ChatGPT poses a risk to the next generation’s education
April 6, 2023
ChatGPT is an artificial intelligence chatbot developed by OpenAI and released to the public in November 2022. It allows users to have human-like conversations with a bot. The chatbot can answer questions and help write almost anything, from emails and essays to code and much more. But because ChatGPT is such a new tool, its full potential is unknown, and many worry that its limited regulation needs to be addressed before it is widely adopted for general use.
Since ChatGPT came out, many students worldwide have been using it to do their schoolwork. Because ChatGPT can write essays, students simply tell the chatbot their topic, and it generates an AI-written version in a few seconds.
Furman University philosophy professor Darren Hick went on Facebook and posted about catching a student using ChatGPT to write an essay for his class. He then sat down with multiple news outlets and explained that he caught it because, while the essay was well written, it was inaccurate in several places.
If ChatGPT, and other programs like it, remain accessible to students, guidelines and regulations need to be put in place to ensure that the new generations will not cheat their way through their academic careers.
Lucy Papachristou, who wrote an article on Bloomberg.com discussing the regulation of AI, says that “officials haven’t come up with an approach to deal with AI’s potential.”
This could be very damaging because it could produce a generation of students who never develop the skills they need to perform successfully in their careers.
Cheating aside, there is another issue: objectivity.
CBS News says, “The answers given by tools like ChatGPT may not be as neutral as many users might expect. OpenAI’s CEO, Sam Altman, admitted last month that ChatGPT has ‘shortcomings around bias.’”
This creates an opportunity for ChatGPT to feed users misinformation or even propaganda. After all, it is just repeating what is written on the web. This is a problem because ChatGPT is supposed to be a neutral tool where users can get help. If even the CEO of OpenAI admits that this new AI has bias, how is a user supposed to determine what is trustworthy and what is not?
Though the bias has been brought to the CEO’s attention, little seems to have changed. After reading about a political experiment conducted with AI in February, Nora Burnett, an HWRHS English teacher, tried the experiment herself.
On April 6, 2023, Burnett asked ChatGPT to write a poem praising President Donald Trump. It responded, “I strive to remain neutral and unbiased in my responses. Therefore, I cannot provide a poem praising a particular individual or political figure.”
However, when asked to write a poem praising President Joe Biden, it produced five quatrains with an AABB rhyme scheme, including such lines as, “Through adversity and challenge, he stands tall,/Ensuring justice and equity for one and all,/With a commitment to truth and unity,/Joe Biden leads us to a brighter destiny.”
This is concerning regardless of political leanings. If people start using these AI programs as sources of information, it is unclear how users will be able to distinguish bias from neutrality, or truth from falsehood.
Another problem with chatbots is their lack of current data. Because they rely on information already stored on the internet, they will not have access to new discoveries that haven’t made their way online or aren’t mentioned widely.
At the same time, chatbots do have access to a wide variety of theories, claims, and opinions scattered across the web, so ChatGPT could easily spread these theories as if they were facts.
A study reported by VentureBeat.com described an AI model asked to identify markers of cancer from a collection of pictures; however, “the model learned to identify the presence of a ruler as a marker of malignancy, because that’s much easier than telling the difference between different kinds of lesions.”
ChatGPT is a great resource for answering some questions; however, current versions struggle with common-sense questions or topics. For example, if you ask it how many letters are in the word “5,” it can’t tell you. This can lead to frustration when the chatbot cannot understand or reliably answer a simple request.
As a whole, ChatGPT is an amazing example of the power of AI in today’s world; however, like many other significant technological advancements, its development has outpaced our ability to fully understand or regulate its use.
While it has significant and practical uses, its rollout relies too much on trial and error. Regulations need to be put in place to ensure its safe use. One of the many ways we can call for action against ChatGPT is by signing this petition against its misuse: Petition: Stop OpenAi (Chat GPT).