A Brief Start to Defining Ethical AI Usage
I like to post funny stories about my students using AI on my Facebook page, so multiple people have started asking for my opinion on AI usage. I have never tried to outline a consistent philosophy, so I would like to do that now. This may evolve, and there are arguably holes in my argument, but as more and more of our digital communication becomes the product of AI, we need to figure out our rights and responsibilities when we decide to use these tools.
AI is not a monolithic entity, however. You could argue that even something as simple as spellcheck is a form of AI. After all, an artificial tool applies some form of logic, recognizing that you misspelled a word and suggesting the right one. It is a very basic form, but it arguably uses artificial intelligence to build upon human intellect.
The next level of AI would be a tool like Grammarly. It performs the basic editing functions of the first category, but it begins to suggest improvements to your writing that you haven't considered. If you are relying too heavily on one particular word, it might suggest synonyms to add variety. It may suggest ways to change the tone of your writing. This is clearly a step up in intelligence, one that offers not only technical advice but creative advice.
Lastly, we can consider large language models like ChatGPT or Grok. They can take a simple sentence and develop hundreds of words of content about whatever you want. They are not just taking your writing and helping you make it better; they are actually doing the writing for you based on a prompt.
There is very little debate about the first two classifications. I don't know anyone who objects to editing programs, because you, the author, have generated the content being edited. Even the wording and tone suggestions from a tool like Grammarly require your thoughts to exist first before it suggests modifications. In other words, the authorial intent precedes the AI intervention and advice. You have done the thinking, and you still have to decide whether to accept or reject the suggestions. While the argument could be made that this software makes us ignorant of the rules of grammar, human editors have existed for a very long time and have always helped us write better. When I published my book, the publisher's editor gave me feedback and pointed out areas I needed to improve. That doesn't mean I didn't write the book, even though I took her advice into account. She helped me get better and supported my development. Grammarly is different from a human editor, but it does not seem to be an inherently problematic application of AI for most situations (for example, if you were taking a grammar test, it would be unethical to use AI, since you are being tested on your individual ability to understand grammar).
Large language models are where the real debate lies, because many of them can write a reasonably well-developed paper from nothing more than the professor's prompt. Rather than a person creating content that is improved by AI, the large language model creates the content. Unattributed copying of AI output seems to be simple plagiarism. You are taking content created by someone else and calling it your own; at its root, an AI program compiles large amounts of information from other sources and summarizes it. Just as taking a block of Mark Twain's writing and calling it my own is plagiarism, taking a block of AI text and calling it my own is unethical.
I was talking to a college student from a different university a few months ago, and he told me how he used an AI to create an outline. He used it to generate ideas for a paper that he then legitimately wrote himself. Unattributed use of ideas that are not your own remains a problem and is therefore unethical, but there is an additional educational problem: stunting your own development. Part of the exercise of writing a paper is learning how to think creatively and formulate arguments cohesively. Some might argue that this is less unethical than having a large language model write the entire work, and while I would still put it in the category of unethical, I would contend that it is a hazard to education and therefore ought to be avoided by anyone who values their own development, regardless of whether you draw the line exactly where I do.
This does not mean all use of large language models is unethical. For example, I can certainly quote someone else's work in my own. When I write a paper about J.R.R. Tolkien, I can quote scholars who have written about J.R.R. Tolkien, properly attributing and referencing their work and using it to support my points. Similarly, if I found a piece of AI-generated text that I thought was particularly compelling, I don't think using it with a footnote would be a problem. Granted, it might not be wise to rely on AI, given some of the ridiculous things I have seen it say and the fictitious sources it often invents for my students. Some people will probably think that AI usage diminishes the quality of your argument, so it might be wise to avoid it for credibility's sake as well. However, in terms of pure ethics, it does not seem to be absolutely wrong to quote from an AI source. It is similar to quoting from an encyclopedia: a compilation of various sources summarized into a blurb of highlights. In fact, so much of the Internet is now generated by AI that many people have probably quoted AI sources without even knowing it.
It is similarly not unethical to use an AI as a research assistant. For example, if you ask an AI to find a variety of sources about the Battle of Waterloo, it can be a great timesaver. If it can get you to reliable sources faster than a Google search, there doesn't seem to be anything unethical about that. Some say that it cheapens research, stripping away the virtues of perseverance and hard work, the discipline of digging through all the volumes yourself until you find what you want. There is an argument to be made for that, I suppose, but it is like refusing to use a Google search because you should be doing the hard work of looking through books in the library. That objection doesn't seem to hold up as an outright prohibition. If the tool simply makes the process of finding sources easier, I do not see how that, in and of itself, makes using AI as a research assistant unethical.
I know this is only a brief introduction to AI usage. I might expand on this later, because there is a great deal of nuance, and applications in different fields might call for different answers. For example, what about AI computer coding? Is there anything fundamentally different about writing computer code as opposed to a paper? How about AI artwork? Is using an AI image and passing it off as your own a problem? Is that different from computer coding or writing? There are many more discussions that have to take place, but I think these two key principles summarize most of my current thoughts on AI. I would love to discuss them and wrestle with this technological force that grows more and more powerful.
1. Unattributed AI usage is plagiarism. It is not your work, and you cannot call it your own. This applies from the idea-generation stage all the way through the actual writing.
2. An AI is a source, similar to an encyclopedia, that compresses a lot of information into a short summary, and it must be treated as such, perhaps with additional scrutiny given its unreliability in its current form.