It has been only a year since ChatGPT brought artificial
intelligence to the attention of the education community, opening a new phase
of the Information Revolution for both K-12 and higher education. This fall, the question administrators
brought to classroom teachers at both levels was, “How are we going to handle
it when students turn in essays written by AI?”
What is driving the increasingly intense interest in ChatGPT and
artificial intelligence in general is how ChatGPT, as Wikipedia
puts it, “enables users to refine and steer a conversation towards a
desired length, format, style, level of detail, and language used.” At both the high school and college levels,
faculty have become concerned that students will use ChatGPT to generate assignment
papers. eCampus News, which has published a number of articles about AI in
the classroom, carried an article
by Dr. Steven Baule on September 5, 2023, with “6 tips
to detect AI-generated student work.”
The six tips:
1. Look for typos. AI-generated text tends not to include typos, and the small
errors that make our writing human are often a sign that the submission was
created by a person.
2. A lack of personal experiences, or only generalized examples, is another
potential sign of AI-generated writing. For instance, “My family went to the
beach in the car” is more likely to be AI-generated than “Mom, Betty, and Rose
went to the 3rd Street beach to swim.”
3. AI-generated text is produced by finding patterns in large samples of
writing. As a result, very common words, such as the, it, and is, along with
common stock phrases, are more likely to appear in AI-generated submissions.
4. Instructors should look for unusual or complex phrases that a student would
not normally employ. A high school student referring to a lacuna in his
school records, for example, might be a sign the paper was AI-generated.
5. Inconsistent style, tone, or tense changes may be a sign of AI-derived
material. Inaccurate citations are also common in AI-generated papers: the
format is correct, but the author, title, and journal information were simply
thrown together and do not represent an actual article. This and other
inaccurate output from a generative AI tool is sometimes called a
hallucination.
6. Current generative AI tends to be based on training materials developed no
later than 2021. Text that references events from 2022 or later is therefore
less likely to be AI-generated. Of course, this will continue to change as AI
engines are improved.
Leon Furze noted in his blog
that the rapid growth of AI in education has led to a “widespread fear” that it
will be used by students for cheating.
However, he adds that “The truth is, we have little idea of
the impact the technology will have on education. . . Some states are still
deciding whether to ban the technology outright, while others try to grapple
with the ethical and academic implications of permitting its use.” Furze also noted that ChatGPT prohibits
people under 18 years old from signing up for access. “However,” he notes, “there are many ways
teachers might use ChatGPT . . . and it is almost certain that many students will be using the technology. This means that one of the biggest factors in
education should be the discussion of the technology’s ethical and appropriate
use.”
Interestingly, shortly after
publishing Steven Baule’s “six tips” article, eCampus News posted “Coming Out of the AI Closet: A Scholar’s
Embrace of ChatGPT-4,” a pro-AI statement by Dr.
John Johnston.
Johnston argues that “ChatGPT-4 has ushered in a new era of
brainstorming, structuring, and drafting academic papers. Understanding that
this cannot be equated to outsourcing my work to AI is crucial. Instead,
ChatGPT-4 acts as an enhancer for my innate critical thinking and creative
prowess.”
The previous week in eCampus News, Roger
Hamilton had argued that “In the realm of higher education, this
marriage of AI and learning is ushering in a new era that holds the potential
to not only disseminate knowledge, but also cultivate the entrepreneurs of
tomorrow.” He added, “By acquainting
learners with cutting-edge technologies like AR, VR, and the metaverse via
innovative methodologies, this approach hones their ability to tackle
challenges that may not even be conceivable in the present.”
This year we recognize the thirtieth
anniversary of the Internet browser, a tool that has, over the past generation,
revolutionized how we communicate, how we work together, how we build bridges
across the old barriers of geography and time. It is not hard to imagine that
AI will be of similar—if not greater—significance, as K-12 schools and
universities together innovate to use this new tool to change how students use
technology to find meaning in their areas of study and learn how to better
communicate that meaning. The rapid
movement of AI into the mainstream is already creating disruption. Laura Ascione reported in eCampus News
on a Cengage Group survey of 1,000 degree graduates, finding that roughly half of graduates
feel threatened by AI (46 percent) and question their workforce readiness (52
percent). The challenge facing both
K-12 and higher education leaders is how to create a new approach to educational
methods and content to prepare students to work in an environment that is just
now taking shape but that will evolve rapidly over the coming years. Like the Internet browser three decades ago,
AI will stimulate some dramatic changes in how we educate citizens for the
future.