Yes, it’s me, Ed Barks, a living, breathing, sentient human being. Not some AI-generated bot.
Some people are excited by artificial intelligence writing tools like ChatGPT, Google’s T5, and Microsoft’s various AI entries. I get it. It’s a shiny new object that appears fascinating from a technology point of view, and it might prove to be a time saver for some uses. I’d remain cautious, though. Remember how advances like email and the internet were supposed to be time saving tools? They can be, but they have also glued too many of us to our devices, proving to be most excellent time wasters.
Dumbing Down Our Writing Talents
Count me out from the AI writing schemes. Oh, I can hear the allegations. Luddite. Technophobe. Enemy of progress.
Why the resistance? Simple. It is growing difficult to discern what is written by humans and what is slapped together by “intelligence” masquerading as human.
I should note here that this discussion centers on AI writing systems, not the notion of artificial intelligence writ large. That is a more substantial concern.
Storming the Schoolroom
Schoolteachers and college professors may have the toughest road. How will they be able to tell when students take the easy way out by submitting AI-generated content? Will it be necessary to rely more heavily on class discussions and on handwritten assignments and tests that cannot be faked?
As Rutgers professor of sociology Rina Bliss writes in her Washington Post op-ed “AI Can’t Teach Children to Learn. What’s Missing?”, “while AI can assist in getting information to a learner, it cannot do the thinking for them — it cannot help them truly learn” [emphasis hers].
Writer, Beware
For writers, the threats of plagiarism and theft of intellectual property rear their ugly heads. A Washington Post analysis notes, “The copyright symbol — which denotes a work registered as intellectual property — appears more than 200 million times in the [Google] C4 data set.”
Two AI writing tools — Google’s T5 and Facebook’s LLaMA — at least disclose the sources their offerings use. OpenAI, which operates the popular ChatGPT, “does not disclose what datasets it uses to train the models backing its popular chatbot,” reports the Post.
I have little doubt that, in the near future, I will be able to spot in some company’s presentation or student’s report excerpts lifted directly (and illegally) from content I have labored to create. The offender may have little idea that they have committed a crime. That’s no defense. As the old saying goes, ignorance of the law is no excuse.
Disinformation Dupes
Some of the systems go so far as to scrape matter from Wikipedia, a notoriously erratic source. Worse, those who lean on AI writing schemes may become ignorant pawns in spreading disinformation.
The Post reports that it “found several media outlets that rank low on NewsGuard’s independent scale for trustworthiness.” Among them (h/t Dave Pell for pointing out the highlights):
- rt.com, featuring Russian propaganda
- Alt-right darling breitbart.com
- White supremacist site stormfront.org
- Anti-trans site kiwifarms.net
- Harassment specialist 4chan.org
Consider the student who unknowingly uses claims from such sources as gospel. Not only does it make for a botched report, it also poisons their psyche, making them susceptible to disinformation.
Issuing a Disclaimer
What is a writer to do? First, vet your sources. Once vetted, it is imperative to then inform readers.
How? From this point forward, I plan to include a statement in my books, research reports, and position papers along the lines of, “Every word and thought in this paper was written by a living, breathing human being. No artificial intelligence schemes were used. Remedies for violation of copyright laws by users, whether intentional or unintentional, may be pursued” (I’m still fine-tuning the exact verbiage, so reader suggestions are welcome). And rest assured that every jot of content on this C-suite Blueprint blog will remain written by this real human.
I labor under no delusions. There is no putting the genie back in the bottle. ChatGPT is just the beginning, with more to come (if you doubt this trend of obsolescence, consider past “groundbreakers” like MySpace and AltaVista). Writers, I urge you to refuse to give in to an all-too-easy (and perhaps inaccurate) tool. Write your own emails, memos, and thank-you notes. The personal touch counts in business.
Reader Responsibility
Readers, push for answers if something looks fake to you. Pose straightforward questions like, “Did you write this entirely on your own or did you have help?” If they reply that they did not create all of the material, follow up with, “Did you use any type of artificial intelligence tool to write your content? If so, which one?” Then follow up with a discussion of the pluses and minuses.
Back to the perspective of Rina Bliss. Most troubling is a passage describing her reaction when her young children arrived home from school with AI tools: “Could AI offset our struggle to get our children to grasp new concepts and skills? Might AI be better equipped to help them tap into their own intelligence? After a few days watching my bouncy twins scroll and click, I can tell you the short answer is no.”