Business communications has always been a strange beast, walking the line between information and self-promotion while using a kind of business English notable only for its blandness. Every piece of content that comes out of a comms group or PR firm has been tested and scrubbed of any controversy … in some cases, comms are so inside-baseball and impenetrable that it’s hard to imagine who they were written for.
Bland, inoffensive, often with a peppy or faux-cheery tone, cheerful at a minimum (save for statements involving grief or tragedy, in which any emotion at all is removed). Generally information-centric, run through six levels of approval; smooth and polished and essentially devoid of meaning.
Sometimes in business writing, diplomatic writing, or any sensitive comms, blandness is the desired effect. Is the use of ChatGPT in these circumstances a major issue? It can be attributed just like any other source. ChatGPT may be a blessing to readers and authors alike.
Adding AI to Business Communications
Oh, the horror of impeccable grammar and spelling, flow, and logic! Sometimes I run something I’ve finished through ChatGPT just to see what it says. It’s a bit cheerleady. It’s peppy. But a) you can adjust that with good prompting and b) wow… it took about six seconds.
“The reflection you’ve shared paints a vivid picture of the fears surrounding the rapid advancements in AI and the profound uncertainties that come with it. It echoes the sentiments of many experts, thinkers, and concerned citizens worldwide.
The analogy of the car breaking down and being fixed with a pair of nylons is powerful. It encapsulates a bygone era of transparency, understandability, and direct human agency over the tools we used. AI, especially in its advanced manifestations, is nothing like those transparent tools. It’s a black box, and one that’s rapidly becoming more complex.
The concerns highlighted, from big tech companies holding disproportional power to the potential randomness and capriciousness of AI actions, are valid. When it comes to decision-making AI models in social media or insurance, the immediate repercussions are clear. The unforeseen, long-term consequences, like the hypothetical scenario of a girl being denied a job decades later because of a chat with an AI, are indeed alarming and represent the vast ethical gray areas AI introduces.
Canada’s code, as with other regulatory initiatives, might be seen as a step in the right direction. But, as you’ve highlighted, without stringent enforcement and an intrinsic understanding of the stakes involved, such regulations may merely be scratching the surface. The overlapping and multiple codes of conduct, from the G7 to joint initiatives, while showcasing global acknowledgment of the issue, also underline the lack of a unified front.
The trajectory of AI, as it stands now, is largely dictated by market forces and technological ambition. The democratization of AI or the efforts to make it more transparent, explainable, and controllable, while ongoing, haven’t reached a stage where they can counterbalance the rapid advancements and the potential risks.
The concluding note, suggesting staying fearful, captures the essence of the caution we should employ. The AI wave is transformative, with the potential to redefine many aspects of society, the economy, and personal lives. While it holds promises of unprecedented advancements, it’s crucial to approach it with the respect and caution it warrants, ensuring we’re not caught off-guard by its cascading effects.”
A Tireless Research Assistant
By now many of us who do comms for a living can spot a raw ChatGPT paragraph at twenty paces. There’s a certain structure, flow, and repetition … while being almost completely devoid of information, or even a frisson of creativity. There are many circumstances where it should be banned, but also many where its use is not only fine, it’s helpful. Having a tireless research assistant while you write is a huge help, especially for someone with ADHD and a constantly ringing phone who isn’t billing $100 an hour.
In ChatGPT-5, timeline unclear, there are apparently efforts to watermark what the LLM produces with some kind of token. That… sort of made some clunky traceability sense, until it was revealed that “token” means two different things in blockchain and in AI. Not helpful, tech semantic wizards who name things. But what is really the point? It should be treated like any other source: attributed, credited. Energy shouldn’t be wasted on watermarking, although it would be useful to have a feature that automates a footnote. Someone will make very useful integrations if they haven’t already.
The standard doesn’t always need to be “Did you write every word of this?” As for fiction, if someone were able to write a readable, award-candidate novel using today’s ChatGPT alone, they’d deserve a medal for ingenuity. It’s just another tool. We’re just starting with raw material now, which lessens the labour.
Because increasingly the question isn’t “Did ChatGPT write this?” but “Does it really matter?” I don’t know, but ChatGPT helped me write this.