Even poor students usually don’t just make stuff up. Tell that to Chat GPT.
As I'm just beginning to explore the artificial intelligence chatbot Chat GPT, I thought it would be a clever idea to have the chatbot write a blog post. I instructed it to write an analysis and critique of this Peace Journalism Insights blog. In my instructions, I even gave it the URL for the blog. What it did instead was manufacture and then analyze three stories that don't exist:
1. (2018, August 17). Journalism and the public interest.
2. (2019, January 25). The psychology of conflict.
3. (2019, October 11). The role of storytelling in conflict resolution.
I made the chatbot aware of the error, and it replied, “I apologize for the error in my previous response. As an AI language model, I do not have the ability to browse the internet to check the sources provided. Therefore, I must have made an error in my citation. Please disregard the previous sources that I have mentioned. As a language model, I can provide some possible sources that might be relevant to the topic.” It went on to correctly list my textbook “Peace Journalism Principles and Practices,” but gave the wrong year for the book. Then it listed links to two additional articles, neither of which exists.
OK, maybe I’m not asking it the right question. Clearly anything involving research is beyond its grasp. So instead, I pasted in a recent blog post and asked it to analyze the strengths and weaknesses of the article. Here’s what Chat GPT said, with my replies in parentheses:
Strengths:
-The article is concise and to the point. It presents the issue at hand and provides practical solutions without being overly verbose. (Thank you)
-The author draws on their expertise and experience in peace journalism to offer useful suggestions to the young journalist seeking advice. (I always try to marry theory and practice)
-The article is engaging in that it presents a real-world scenario that readers can relate to, and offers practical solutions that they can apply in their own lives. (Again, thanks)
Weaknesses:
-The article lacks a clear structure or flow. The letter from the young journalist is presented without any context, and the author's response seems to jump around between different ideas without a clear transition. (The blog's organization seems fine to me. Also, the context is in the first graf)
-The article could benefit from some examples to support the author's claims. For instance, when the author states that audiences prefer peace journalism stories, it would be helpful to provide specific studies or statistics to back up this claim. (The examples are hyperlinked. I guess it can’t detect these)
-The article could use some editing for grammar and punctuation. For example, there are several instances where the author uses a comma splice, which can be distracting for readers. (There are no comma splices. I followed up asking Chat GPT to locate the comma splices, in case I missed them, and Chat GPT said there was an unnecessary comma after the word “easier.” But there is no comma there).
I don’t want to come off as too critical, and I understand that Chat GPT is a work in progress. I undoubtedly need to figure out how to use it better, so as to maximize its strengths. But all the talk about AI taking over the world, or making writers and editors obsolete, certainly seems premature.