“Write a paragraph where you explain what you are, when you were created, how popular you’ve become the last couple of weeks and what kind of questions your creation has raised. [input from the writer] 

I am ChatGPT, an artificial intelligence language model developed by OpenAI. I was created in 2021, and have since become increasingly popular for my ability to generate human-like text responses. In the past few weeks, my usage has seen a significant surge, which has raised questions about the potential implications and ethical considerations of AI technology like myself. Some people are concerned about the potential for AI to replace human jobs, while others believe that AI could greatly benefit society if it is developed and used responsibly. Regardless, it is clear that my creation has sparked important discussions about the future of AI and its role in society” – ChatGPT, chatbot interface. 


If you are even a little curious about the news or the development of information technologies, chances are you’ve heard of ChatGPT, also known as GPT-3.5. This Microsoft-backed system is the new crown jewel of text-generation technology, and it has taken the Internet and the media by storm over the last couple of weeks. 


Upon entering the site and registering, ChatGPT looks much like messaging apps such as Messenger or WhatsApp. The catch? Instead of chatting with another human being, the user sends a message to a computer, which then analyzes it and tries to craft an answer with a logical connection to it. 


How does it work, though? In terms of usage, nothing could be simpler: you create an account on the website (after agreeing to share your data with the firm for research purposes), write your input in the chat box, and the AI generates a personalized answer. What is truly groundbreaking about this particular version is that the artificial intelligence (AI) seems knowledgeable on every topic imaginable. Programming? History? Mathematics? The bot can answer it all (mostly). The developers do warn users that some of the information might be incorrect, and that anything concerning the last two years may be inaccurate (for instance, the latest developments in the war in Ukraine).  
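Under the hood, the chat box is just a front end that packages your message into a request to the model. As a minimal sketch of what such a request might look like (the model name `"gpt-3.5-turbo"`, the message format, and the endpoint mentioned in the comments are assumptions based on OpenAI's public API, not details from this article):

```python
import json


def build_chat_request(user_message, model="gpt-3.5-turbo"):
    """Build the JSON payload a chat-style model endpoint expects.

    The chat interface described above effectively wraps a request like
    this one: each turn of the conversation is a message with a role
    ("user" for your input) and the text content.
    """
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": user_message},
        ],
    }


payload = build_chat_request("Explain what a language model is in one sentence.")
print(json.dumps(payload, indent=2))
# Sending this payload (with an API key) to OpenAI's chat-completions
# endpoint would return the model's generated reply as another message.
```

The sketch only builds the payload; it does not contact any server, so it runs without an account or API key.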


Well, if it’s that good, hasn’t access to knowledge just gone through a massive revolution? It’s not that simple. For some context: GPT-3.5 quite literally learned from the mistakes of its predecessor, GPT-3.0, which was terminated after inputs and information entering its database caused the AI to give answers promoting racist beliefs and hateful behavior. Today, thanks to new technologies, the AI has built-in prompts that cannot be overridden by what it finds in its database, avoiding that kind of speech. It is still possible to coax the AI into objectionable messages through unconventional methods (like making it roleplay), but usually it will either state the general consensus on the question (ask it to come up with pro-climate-skepticism arguments, for instance, and it will answer that most scientists have shown human-made climate change to be real) or flat-out refuse to answer.

It faces one major flaw, however: it is not a direct source of knowledge. It is only a (very sophisticated) text generator, and it will always try to answer your question by going the way you expect it to go.  

Here’s a simple example to demonstrate this: ask the AI a question that takes for granted that cows lay eggs. While the logical (or human-like) response would be to point out that cows do not lay eggs, the AI simply assumes the premise of the question is correct and answers accordingly, even though the information it was given is blatantly wrong. The program is built in such a way that it will almost never go against what the user tells it, no matter how outlandish or silly the prompt is.  


Which brings us back to the main question: should you use it for your studies?  

This question sparks a heated debate about teaching and its future: how usable is this AI for schoolwork, both for learning purposes and for answering homework?
As stated earlier, ChatGPT can answer many questions, including those found in most school assignments at all levels. However, ChatGPT is not without its flaws. Firstly, the point made above, that the answer is biased by the question, is probably the most important issue to be aware of. Moreover, even with a perfectly normal question free of this kind of bias, the AI will not base its work on sources unless you specifically task it to do so, and it remains prone to simple mistakes.
Regardless of these problems, teachers are (rightfully) opposed to the abuse of this technology, which is mainly regarded as a potential cheating tool and something that discourages students’ own critical thinking. UiA has already updated its Norwegian webpage on cheating to include a reference to “ChatGPT-like” technologies, stating that they are to be treated like any other source: allowed when explicitly permitted, and if used without citation, the candidate risks charges of plagiarism. It could still be used, however, not as a main information provider but as a way to verify your work, or to help you find an answer on a subject you are already well versed in.  

In conclusion, while ChatGPT is an exciting glimpse of the future development of information technologies, and will only improve from here, as of now it is a useful new tool, but one whose use must be balanced with other, more reliable sources, such as Wikipedia. Because while ChatGPT is great at writing texts that look like academic writing, those other sources have something the AI does not: critical human thinking.  


