The Key Question Is Not When To Use It, But When Not To Use It.
By Thomas J. Roach
AI is here to stay. It has many useful applications, and probably all of us do or will use it. However, AI-composed messages are not always appropriate. Since AI's applications are ubiquitous, the key question is not when to use it, but when not to use it.
The answer is that AI should not be used in any way that makes it seem like the words and thoughts are your own. To do so is essentially plagiarism.
When it comes to personal feelings and expertise, AI is somewhat like greeting cards. A company puts some cute words together to be entertaining or to express deep feelings. You buy the card, sign it and send it.
The person receiving the card knows that it isn't your clever witticism or emotional insight. It is a greeting card with a picture of a turkey or heart on it. You are saying, in effect, that this mass-produced rhetoric reflects how you feel, and the recipient is usually OK with that.
A significant difference with AI is that it doesn't come with a picture of a turkey or a pastel heart. It can come across as if it were your sentiment in your words. That may sound desirable, but it is not.
Would you compose a letter or poem using language from William Butler Yeats or Elizabeth Barrett Browning and expect your significant other to believe they were your original thoughts in your words? Hopefully not.
Our thoughts and the words we use to express them are like a fingerprint, and people with whom we live or work don’t need to take them to a crime lab to know if they are ours.
Meaningful Communication
All meaningful communication is based on the assumption of sincerity and objectivity. When someone violates this social rule, they begin to lose their audience.
A sincere love letter, even one with grammar problems, communicates your true feelings, but a perfectly composed love poem in iambic pentameter that you claim you wrote looks suspicious and communicates that you are not to be trusted.
Similarly, at work, a report or presentation that you wrote communicates more than the data in the report; it shows your effort, knowledge, and desire to fulfill expectations. A report composed by AI may have the same information and meet expectations, but if it isn’t your vocabulary and sentence structure, and if it isn’t within your area of expertise, then it also conveys that you didn’t have the knowledge and didn’t make the effort to compose it yourself.
In either case, the lover or the work group may decide they should let you go and try to recruit the person or software you are mimicking.
On Another Level
Using AI to ghostwrite your letters or presentations or reports is problematic because it is an insult to your audience. Once your posturing becomes apparent, the recipients of your message realize you underestimated their intelligence, and they may respond by aggressively asking a lot of questions that you can’t answer.
A year ago someone might have been fooled, but society adapts quickly to cultural changes. Today, everyone knows about AI, and because they know what it can do, they become suspicious when something sounds too good.
When I was an undergraduate in the English department at Northern Illinois University, I wrote an inspired analysis of Shakespeare’s “Richard III” the night before it was due. The professor returned it with an “A+” and this comment: “This is so good that if you plagiarized it, I don’t even want to know.”
He knew I wrote it and he meant it as a compliment, but today he might have hedged his bet and written “A+, see me.”
Thomas J. Roach, Ph.D., has 30 years' experience in communication as a journalist, media coordinator, communication director and consultant. He has taught at Purdue University Northwest since 1987, and is the author of "An Interviewing Rhetoric." He can be reached at [email protected].