Last updated on October 6th, 2023 at 09:39 pm
Generative AI is more than just big business. Its tentacles are spreading into every facet of society, from labour movements to code to copyright lawsuits.
The recent strike by the Writers Guild of America had AI policy at its core. Cautionary letters, signed by leaders in science and tech, are being written (and, to date, ignored), decrying the lack of seriousness with which governments are approaching an ill-understood, extremely powerful leap forward in technical capability and capacity for rapid, uncontrolled change. There are concerns about self-governance and oversight, and concerns about the legislation proposed to manage it.
Once again, powerful, society-changing tech is already in wide use, possibly uncontrollably, before legislators even understand how it functions. ChatGPT is just one of many rapidly evolving AI applications, but it is a microcosm of the questions we are already confronting, with increasing rapidity, in so many areas of work and life.
The Advent of ChatGPT
ChatGPT has a relatively short but impactful history in the realm of content creation. It emerged as a groundbreaking AI language model, developed by OpenAI, initially designed for conversational interactions. Over time, users recognized its potential for generating high-quality and coherent articles across various domains. Then code. Now voice, video, and audio.
Use is now widespread, with over 100 million users globally, and questions are being raised about everything: how it is being trained (with multiple copyright lawsuits pending from Sarah Silverman, George R.R. Martin, and others) and how it is being used to create. Universities and other institutions are banning its use. How do we reconcile these two realities? Do we now need to distinguish between what is being written and the criteria by which it is being evaluated to decide whether ChatGPT support is ethical? How do we look at authorship? And which tools are acceptable to use in writing?
A New Creative Landscape – With Bots
The question of ethics and acceptability is predicated on two things: our historical beliefs about copyright and creative ownership, and the exacting standards we apply to protect originality, authorship, and creativity. Writing has always been assumed to start from scratch with an individual, and copyright law works hard to protect those rights. Does a creative work need to be written yourself, from scratch? In many circumstances, the answer is yes. If you are being evaluated on your creative writing skills, the use of ChatGPT is widely considered inappropriate.
But it may not be. Artificial intelligence is introducing a new capability that we need to see differently: an assistant rather than a co-author or ghostwriter. An essay for school? A creative work of original fiction? Even these can benefit from ChatGPT's help with ideas, structure, and sourcing. It's a creative tool the likes of which we've never worked with before, and our previous conventions may no longer apply.
After months of experience with the tech, we should all know by this point that ChatGPT is never, ever going to write an award-winning work of original fiction on its own. But a lot of the work of writing is structure, setup, information, and flow; help with this can be invaluable. Is this cheating? It is definitely support. It can even help with writer’s block; getting something down on paper or a blank screen can be a huge assist, a shortcut to needed inspiration. Should this be disclosed?
A Tool Unlike Any Other
If you have spent any time using ChatGPT for productivity in writing, you'll understand that it is a highly useful tool. Using ChatGPT isn't like staring at a typewriter, Barton Fink style. It's a different kind of creation. It's an "intelligent" prop, similar to a word processor or writing app, just far more helpful and useful.
The way I use it, at least (and everyone seems to have a different method), is similar to sculpture.
Creativity in writing may no longer be strictly additive; sculpture may be the more useful metaphor. When sculptors carve something out of a block of marble, they are cutting away the unnecessary parts. Michelangelo created the Pietà not by creating marble, but by cutting away where the marble wasn't needed, where it impeded the vision. In the future (even in the present!), writing may be about editing and enhancing, once we all have our own ghostwriter. Virtuosity lies not just in painstaking words written one by one but in the end result. Those who will be read will still be those who write in an original, distinctive, informative way, with or without the support of AI.
So maybe the question isn't whether it is ethical to use ChatGPT, but how well it was used to set up original thinking by taking away the drudge work. And frankly, if you *can* tell at this point that someone is using ChatGPT, it is lazy output.
Is detecting ChatGPT even the right goal any longer? This will be a source of considerable debate for a long time. It matters to the creators who are suing OpenAI for using their content to train the AI. There may be legal liability and required disclosure. The expectation is that no one is going to see an original Sarah Silverman joke in the middle of an AI-generated comedy routine (and if you do, please share!); that is not how generative AI training should work.
But beyond that, maybe we need to look at space for creativity as an assisted function going forward. Inspiration can come from all kinds of places. You can’t say you won’t be inspired by a turn of phrase or a fact in something an AI creates to help you write your next work.
And maybe we should prepare ourselves for exactly that kind of assistive role AI can play in other parts of our lives, while recognizing that unforeseeable risk, uncontrolled innovation, huge liabilities, and genuine fears are an important part of this reality, and should temper widespread excitement at the potential.
The Next Phase of Generative AI
The utility of generative AI is unquestioned. What is less clear is where it fits into our existing human models for creation. Maybe soon some generative AI model will become an automated professional writing support function, but at this point, it’s still the human deciding to use the words. Offering all the creativity. Deciding on the final output.
At the same time, it is an artificial assistant. Is it the equivalent of performance-enhancing drugs for the literary and journalistic set? Is it a form of plagiarism? Or is it something like plagiarism, just enhanced and in such mass use that we accept it? In other words, do we just accept progress and use it? Is it even stoppable or controllable now? Ethicists are going to be very busy.
The answer to that last question is probably no, and this is what is giving so many people pause. There's another component to using ChatGPT that is far less examined in this discussion: our input into the AI. It is hungry for training data. Everything we input into it changes and enhances its capabilities.
And it remembers. We have no way of knowing what will be done with that data. Will it be attached to us? Will other parts of ChatGPT, other LLMs, or future AI have access to that data? What will it do with it? These are completely unanswerable questions in 2023. With lawsuits and legislation pending and so much so new and evolving, this tech will be figuring itself and its norms out, and answering the question "Is it ethical to use ChatGPT?" (and how) will be a conversation for a long while to come.
And keep in mind that every time you engage with or ask a question of the app, you are not just using ChatGPT.
It is using you too.
If you really need to verify whether something was written using ChatGPT, here are a few tips:
- Human detection. How deep is the content? How current? How fresh is the analysis? Are the sources real and the stats accurate? How much personality and creativity is in the copy? Think of it like a black and white image versus one in colour. Black and white does the job, but colour really brings the moment to life.
- Structure. The structure of its output is one of the biggest indicators. ChatGPT loves lists and often repeats phrases.
- Detection tools. There is a whole crop of new AI auto-detection services and apps. They all work quite differently, and some of them may be effective if you really want to sniff out AI support: https://www.pcguide.com/apps/chat-gpt-detectors/
- Watermarks may be coming. There is talk of a future version of ChatGPT including tokens or other traceable elements in its output. It's hard to imagine how that could work, but we will have to wait and see!
- Writing style. You may be able to train an AI to mimic the style of Haruki Murakami or Kara Swisher, but never the personality. In the end, anyone reading what you’ve put in front of them will either be moved, informed, or compelled by the content. Or not.
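As a rough illustration, two of the structural signals above (list density and repeated phrasing) can be measured mechanically. The following Python sketch is a toy demonstration, not a real detector; the heuristics and any thresholds you might apply to its scores are assumptions for illustration only.

```python
# Toy illustration of two structural signals: list-heavy output and
# repeated phrasing. These are crude heuristics, not a reliable detector.
from collections import Counter


def ai_style_signals(text: str) -> dict:
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    # Signal 1: fraction of lines that look like bullet or numbered list items.
    list_lines = sum(
        1 for ln in lines
        if ln.startswith(("-", "*")) or ln[:2].rstrip(".").isdigit()
    )
    list_ratio = list_lines / len(lines) if lines else 0.0
    # Signal 2: how often the same three-word phrase recurs in the text.
    words = text.lower().split()
    trigrams = Counter(tuple(words[i:i + 3]) for i in range(len(words) - 2))
    repeats = sum(count - 1 for count in trigrams.values() if count > 1)
    repeat_ratio = repeats / max(len(words), 1)
    return {"list_ratio": list_ratio, "repeat_ratio": repeat_ratio}
```

Higher scores on both signals would only suggest, never prove, machine involvement; human writers also use lists and repeat themselves, which is part of why dedicated detection services exist.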
Here are ChatGPT's own suggestions, unedited, on how to detect whether something was written by ChatGPT:
“Detecting whether someone is using ChatGPT-4 specifically for an article may not be straightforward, as it depends on the intentions and actions of the user. However, there are some indicators and methods that could help:
1. User Disclosure: The most direct way is if the user explicitly mentions that they are using ChatGPT-4 to generate an article. They might include this information in their conversation or in the article itself.
2. Output Style and Quality: The quality and style of the generated content may be an indicator. ChatGPT-4 should produce more coherent and contextually relevant articles compared to earlier versions, so exceptionally well-written articles could raise suspicion.
3. Content-Length: ChatGPT-4 can generate longer content with better structure. If the article is unusually long and well-structured for a human writer, it might be an indication.
4. AI Attribution: Some platforms and services require AI-generated content to be clearly attributed as such. If you suspect AI involvement, check for any required disclosures or attributions.
5. Plagiarism Checks: You can use plagiarism detection tools to see if the article matches content from other sources or if it appears to be generated by an AI model.
6. Technical Terminology: AI-generated articles may contain complex technical terminology or jargon that might not be typical for a human writer.
7. Inconsistencies: Look for inconsistencies in the article, such as abrupt shifts in tone, style, or topic, which could indicate AI involvement.
8. Language Model Signatures: Some language models might leave subtle “signatures” in the text, like peculiar phrasing or errors that are common to that model. This can be hard to detect, but it’s not impossible.
To detect the use of ChatGPT-4, you may also consider using services that specialize in AI detection, like content verification platforms or AI plagiarism detectors. These services are designed to identify AI-generated content or check for plagiarism. However, keep in mind that no method is foolproof, and it may be challenging to definitively determine whether ChatGPT-4 was used without direct disclosure from the user.”
When ChatGPT is asked whether it matters if AI is being used, here’s its unedited response:
“Whether it matters if someone is using ChatGPT or any other AI for a particular task depends on the context and the nature of the task. Here are a few considerations:
1. Transparency and Disclosure: In many contexts, it’s important for users to be transparent about the use of AI, especially if the content generated will be attributed to a human. Failing to disclose AI involvement can be seen as unethical or misleading.
2. Quality and Reliability: The use of AI like ChatGPT can be beneficial when it enhances the quality and efficiency of tasks, such as content generation or data analysis. However, it’s essential to ensure that AI-generated content meets quality standards and is accurate.
3. Plagiarism and Copyright: AI-generated content should not be used to plagiarize or infringe on copyright. It’s important to respect intellectual property rights and use AI ethically.
4. Context Matters: In some creative or experimental contexts, using AI like ChatGPT might be entirely acceptable and even encouraged. For example, in art or literature, AI-generated content can be a form of creative exploration.
5. Regulatory Compliance: Depending on the industry and jurisdiction, there may be regulations or guidelines regarding the use of AI in specific applications. It’s essential to comply with these regulations.”