The Effects of AI-assisted Writing on Education

Recently, a number of articles have been published about AI (artificial intelligence)-assisted writing software, which can generate content ranging from advertising slogans and Instagram captions to blog posts and novels. The technology has roots in 1980s PC spell checkers and, more recently, the Grammarly writing assistant (2009); today there are dozens of machine learning models that can generate human-like text when given a prompt.

One example of these artificial intelligence tools is ChatGPT, which was released in November 2022 and surpassed one million users in just one week. To put this tool to the test, here are the unedited answers ChatGPT provided in response to my questions about its use:

Q: How can this tool be used?

A: As an artificial intelligence, I can assist with a wide variety of tasks, such as answering questions, providing information, and helping with decision making. I can also engage in general conversation and provide personalized recommendations. Some specific ways that I might be able to help include:

  • Answering questions: You can ask me about a wide range of topics, and I'll do my best to provide a helpful and accurate response.

  • Providing information: If you need information on a specific topic, I can help by looking up and providing relevant details.

  • Decision making: I can provide guidance and advice to help you make decisions by weighing the pros and cons of different options.

  • General conversation: I can carry on a conversation on a wide range of topics, and I'm always happy to learn more about you and your interests.

  • Personalized recommendations: Based on your interests and preferences, I can recommend things like books, movies, music, and more.

Overall, my main goal is to assist and help you in any way that I can. 


Q: How can AI-assisted software be misused?

A: AI-assisted writing software can be misused in several ways:

  1. Plagiarism: Some students may use AI-assisted writing software to generate essays or other written assignments. This is considered cheating and can result in serious consequences, such as failing grades or expulsion from school.

  2. Fraud: AI-assisted writing software can be used to create fake news or manipulate text to spread misinformation. This can cause confusion and harm reputations.

  3. Invasion of privacy: AI-assisted writing software can be used to analyze large amounts of personal data and potentially expose sensitive information.

  4. Discrimination: AI-assisted writing software can perpetuate and amplify biases present in the data it is trained on. This can lead to unfair and discriminatory outcomes, such as in hiring or lending decisions.

It's important to use AI-assisted writing software responsibly and ethically, and to consider the potential consequences of its use.


Q: The potential for students to use AI-assisted writing software to generate essays or other written assignments is of great concern to today’s teachers. How can you prevent students from using these tools inappropriately?

A:  There are a few ways that you can prevent students from using AI-assisted writing platforms:

  1. Educate students about the importance of original work and the consequences of plagiarism. Make it clear that using AI-assisted writing platforms to generate essays or other written assignments is considered cheating and will not be tolerated.

  2. Use plagiarism detection software to check for copied content in student assignments. This can help identify students who are using AI-assisted writing platforms and give you an opportunity to intervene before the assignment is submitted.

  3. Use written assignments as an opportunity for students to demonstrate their own knowledge and understanding of a topic. Consider using open-ended questions or prompts that encourage students to express their own ideas and opinions, rather than just regurgitating information.

  4. Consider incorporating activities into your course that require students to think critically and creatively, such as group discussions, debates, or problem-solving exercises. These types of activities can help students develop the skills they need to create original content on their own.

In Response

On December 15, The Social Institute posted on its blog “The Future of AI: What Chat GPT Means for Students.” An online learning platform used by Merion Mercy Academy to empower students to navigate their social world, The Social Institute reported on the genesis of ChatGPT and reflected on its impact on students. It warned that this AI tool:

  1. May occasionally generate incorrect information

  2. May occasionally produce harmful instructions or biased content

  3. Has limited knowledge of the world and events after 2021

In keeping with The Social Institute’s mission, the post encouraged educators to support and equip students in navigating new technology of any kind so that they are truly prepared. 

For this post, Merion Mercy educators also shared their thoughts about AI-assisted writing tools. 

Long-time English teachers Carol Restifo and Patricia Sack acknowledge their concerns about AI’s effect on plagiarism. Sack, who serves as department chair, explains how the English department addresses it now:

“We provide every student with a detailed document that reviews various types of plagiarism and offers strategies for avoiding it. Additionally, students know that when they submit work through Google Docs, it will be checked for originality. Although we believe that this type of accountability is necessary, we primarily focus on developing students' skills and confidence so that they are less tempted to cheat. I think we've been pretty successful at discouraging plagiarism, but the recent AI developments do present new concerns.”

Sack explored ChatGPT and reached the following conclusions: 

  • The diction and organization in AI-generated writing are very good.

  • Responses to multiple prompts reveal the formulaic nature of AI writing.

  • Responses to prompts about contemporary or lesser-known literature tend to include factual inaccuracies. (I suspect this will not be the case in AI writing about classic texts such as Frankenstein.)

  • AI might prove to be a valuable tool for teachers, who can use it to generate exemplars for student writing.

She suggests:

“I think the best way to combat the use of this technology will be to continue requiring students to work through the writing process: producing drafts, workshopping with peers and teachers, and learning the value of revising. Teachers can also rely on Google Docs to preserve a version history that allows students to track the development of their work, while also enabling us to easily identify a final assignment that bears little or no resemblance to early drafts. Additionally, our English teachers have committed to allocating more time for one-on-one writing conferences with students because we know that type of personal and immediate feedback can have the greatest impact on students' writing.”

Science teacher Dr. John Durkin questions whether students may already be using AI for their assignments. He describes one recent occasion: 

"Around the time I was grading a wrap-up assessment in one of my classes late first semester, I began to overhear discussions about this bot-generated content on my Twitter feed, and then saw newspaper articles about it. I had one essay question on the assessment, and received several responses that were grammatically correct on the sentence level but did not say anything that addressed the prompt. The prompt involved interpreting a photograph, and I gather the user has to input text to the writing bot. If these essays were generated by a bot, the students didn't help themselves. The essays submitted simply did not answer the question. If a student uses one of these bots as a tool, the student has to recognize whether or not the product answers the question."

Mathematics teacher and department chair Daniel Badgio isn’t as concerned about cheating as he is about the effect of AI on the learning inherent in the writing process:

"I am generally concerned about the decline of language abilities due to technology. Lots of people use things like Grammarly, and Google is always trying to finish our sentences for us. It seems possible that the reliance on these tools could reduce word recall or the ability to find the right cadence for a sentence, similar to how the use of GPS impacts spatial memory. At the same time, these tools have been extremely helpful for those who process language differently, and the potential for AI to help us solve problems is enormous."

Along those lines, Spanish teacher Daniel Crossland sees the potential for AI to serve as “a powerful tool for teaching and learning.” Testing the technology, he asked ChatGPT, "How will the use of AI impact my high school Spanish classroom?" and found the response included two important highlights: 

  1. AI might be valuable for personalizing lessons and feedback for students.
  2. Students could practice speaking in conversation with AI in Spanish.

Crossland feels, “Both of these are potentially transformational tools for a Spanish classroom.” He was also surprised that the AI was seemingly aware of a teacher’s potential skepticism about improper use of the technology. At the end of its response to his question, ChatGPT added, “It is important for teachers to carefully consider the benefits and potential limitations of using AI in their classrooms, and to ensure that it is used in a way that enhances, rather than replaces, human teaching and learning." 

Acknowledging that misuse of this technology will be tempting for students, Assistant Head of School for Academics Valerie Prucnal points to Merion Mercy’s integrity policies as outlined in the Student Handbook. “While tools like ChatGPT may not have been on the radar when our integrity policies were developed,” she says, “safeguards are already in place to deal with the possibility of students inappropriately using external sources.” The integrity policy states, in part: 

MMA students—whether in classroom-based or online settings—commit to take responsibility for their own learning and to demonstrate honesty in their work and in their interactions with peers, teachers, and staff. All work that students complete for which they receive credit must be their own (page 12).

Behaviors in violation of MMA’s Academic Integrity code include plagiarism and “using information inappropriately obtained from others.” Students are required to:

Maintain integrity when working online, which means not accessing information or resources that would provide an unfair advantage during assessment, or presenting another author’s analysis as your own (page 13).

In addition to acknowledging that they have read the student handbook, students are asked to pledge that “I have not given or received any unauthorized help on this assignment, and that this work is my own.”

Computer Science teacher Rose Mary Gregitis agrees on the importance of student integrity when dealing with AI. She writes, “Some students will find answers online or will use whatever methods they feel will make their life easier, however, ultimately they are only cheating themselves. I do believe we need to hold students accountable and have high expectations that they adhere to integrity standards.”

As Director of Innovation, Teaching and Learning, Philip Vinogradov is playing a leading role in managing the use of AI in and outside the Merion Mercy classroom. He says, “Explicit conversations about appropriate use of AI as generative tools, and not a replacement for original work, along with our student pledge, will be important. This is similar to the conversations we have with students about appropriate use of calculators, Google Translate, Photomath, etc., although the level of sophistication is much greater.”

Vinogradov is leading MMA’s writing and research workgroup as they begin addressing this topic so that faculty can be prepared and coordinated in their approach with students. “The technology is only going to get more sophisticated,” he says. “There’s no going back.”


On “Going Back”

According to education news site Chalkbeat, on January 2, 2023, New York City school officials blocked ChatGPT after teachers and parents raised concerns that students could use the bot to do their research or assignments for them.

NYC Education Department spokesperson Jenna Lyle said, "While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success." 

Vinogradov responded to NYC’s decision, saying: 

“The reflex to block is incredibly short-sighted (digital whack-a-mole never works). It reinforces inequality (those with non-school issued devices can access the resource), and it forgoes the responsibility to partner with learners in understanding how to use emerging tools as complementary to the learning process, and the dangers of using them as shortcuts or substitutes for it.”


For More on this Topic: