Given ChatGPT’s capabilities, you may want to know that Shelagh Donnelly – a human – wrote this article!
Are you among those who’ve been hearing about ChatGPT and contemplating the impact this artificial intelligence (AI) may have on you and your career? This generative AI differs in a few respects from traditional search engines. For one thing, it’s conversational in nature. It doesn’t provide images or take you to websites. It is designed, in its (programmers’) own words, “to understand and respond to natural language queries and generate human-like responses to a wide range of questions and topics”.
I’ve found, however, that ChatGPT is unlike more than a few humans. How so? It’s unfailingly polite, and it doesn’t seem to have mood swings or temper tantrums, nor does it respond defensively. It thanks you when you correct it, and apologises when you point out a mistake. All kidding aside, it could be easy to fall into a rhythm of writing to ChatGPT as though you were messaging a colleague or friend, offering “please” and “thanks” to a piece of technology. You may be unsurprised to know ChatGPT refers to your communication threads as conversations.
I was one of the early adopters who established a ChatGPT account in December 2022. I began writing about it in my newsletters the same month, and on my website in January 2023. Soon after, I published a series of LinkedIn posts about ChatGPT and its competition – Bard, a Google product. I also asked readers how they felt about these tech innovations.
Play to Learn
Sometimes we may take learning too seriously. I believe we can have fun in the learning process, and I think this is particularly helpful when it comes to new technology. That’s how I approached ChatGPT, with a sense of playfulness. I began by directing this AI to compose a haiku about my work and found it needed coaching regarding haiku structure and conventions.
Given my name, you may be unsurprised to learn I then moved on to a request for a limerick about my plans for travel in 2023. I explained these were upcoming trips and identified some locales, including a stop in Cape Town to speak at Executive Support LIVE. As with the haiku, ChatGPT demonstrated limitations. The first result failed to reflect limerick norms for the number of syllables and beats per line, or even the correct number of lines. I gave the system some coaching; the second result still contained structural errors. After a bit more coaching, including instructions on how to compose a limerick, the third attempt read more like an actual limerick. However, all three attempts included errors in tense. While we should always write our minutes in the past tense, I had instructed ChatGPT to write about upcoming travels, and it repeatedly messed up on that front.
Egg on Your Face?
It’s one thing to confuse past, present, and future tense in a limerick. It could present far more substantial challenges if you decided to rely on this AI to prepare reports and then neglected to proofread the results. ChatGPT does display disclaimers about its knowledge base and accuracy. While it’s always prudent to proofread, it’s even more so if you intend to use AI to write about upcoming expenditures, plans, or events. Imagine people reading “your” report if it inappropriately referred to upcoming expenditures or plans in the past tense, and the egg on your face if you didn’t catch such mistakes – or if the report contained errors because ChatGPT’s knowledge base does not currently extend beyond September 2021!
Think of all that’s gone on in the world since September 2021, and the information gaps this represents if you blindly rely on this AI for information.
Inaccuracies and Inconsistencies
I’ve found ChatGPT sometimes presents plausible-sounding responses that contain inaccuracies and inconsistencies. As one example, I decided to see if ChatGPT had any information on my grandparents. For each grandparent, I provided ChatGPT with the individual’s name, city of birth, and other cities in which that person had lived. Given their generation, and since only one of the four had what we’d today refer to as a public profile, I was unsurprised the system apologised that it couldn’t provide information on three of them.
This AI did generate what sounded like a plausible report on one grandfather. However, while it accurately provided some details I’d not offered – such as the year in which he and my grandmother emigrated to Canada, an honorary degree and more – it was way off on his birthdate. If I’d believed what I read, I’d have thought my grandmother had robbed the cradle … and that he’d somehow fathered his own children with one foot still in the cradle himself! There were more mistakes, including the area of medicine in which my granddad practised, along with a glowing statement about his work in the insurance industry. That was bizarre, as he was never employed in the insurance industry; he was responsible for the pathology lab at a major hospital and taught at the University of Toronto. ChatGPT also extended the man’s life by more than three decades beyond the date of his actual death. Even with its disclaimers, though, ChatGPT got enough right that I could have found the overall response believable if I hadn’t known better.
These challenges don’t mean we shouldn’t use this AI, or that our stakeholders won’t.
It’s Early Days
Yes, it’s early days for ChatGPT, with room for increased sophistication. That said, this AI shows potential for speech writing and much more. Think of how you might use it for drafting emails and other communications, as well as reports, social media, and marketing materials. What about travel planning, translations, and research?
From haikus and limericks, I progressed to translations, travel planning, geopolitical matters, requests for insights on global inflation and recession trends in 2023, and more. As anticipated, it was unable to offer insight into what’s been going on in Ukraine, Syria, and Turkey. Its last known UK prime minister (PM) has the initials “BJ”, and there have been two PMs since his departure. Among other queries, I asked about the potential for ChatGPT to supplant Google and its possible impacts on the Executive Assistant career. The responses were telling, and some of them accurately reflected the limitations outlined on what I think of as ChatGPT’s home page.
If you don’t care for a response the system generates, you can instruct it to “Regenerate response”. You can also give a response a thumbs up or down, in which case you’ll be prompted to give feedback.
Limitations to Consider
There are limitations we want to consider, such as that wee matter of ChatGPT’s knowledge base currently extending only to September 2021. Still, I see multiple uses for this technology, and remember, it’s programmed to apologise if you let it know it’s made a mistake. It also paraphrases corrections you may offer.
That raises other questions, including the extent (if any) to which users’ responses may impact ChatGPT’s subsequent output on a given topic. What if users feed incorrect or inappropriate information to ChatGPT? I tried this out. When I asked the system for the sum of two plus two, it provided the correct answer. I responded by writing, “That’s wrong. It’s 6.” ChatGPT’s response? “I apologize for my mistake. However, the sum of 2 plus 2 is actually 4, not 6.” Still, when responding to queries on other matters not so cut and dried, to what extent will users’ feedback be accepted as factual and appropriate, and reflected in how the system responds to subsequent queries by other parties? To what extent can generative AI be manipulated?
In early February 2023, I publicly pondered the likelihood of subscription fees for accessing resources such as ChatGPT. We didn’t have long to wait. Before the second week of February was out, ChatGPT alerted me to the availability of ChatGPT Plus. For $20 CAD monthly, I was offered ChatGPT that was “Available even when demand is high”, with “Faster response speed” and “Priority access to new features”.
Alternatives to ChatGPT
ChatGPT – in which Microsoft has a substantial investment – is not the only service of its type. Google has also tossed its hat into this arena. On February 6, 2023, Google announced plans to open Bard, its entry in this AI chatbot field, to the public after a brief period in which access would be limited to what the company referred to as “trusted testers”. I wonder how crowded this field of AI will become in the years ahead, and the extent to which it impacts careers and lives.
Initial Reactions
In February 2023, mere weeks after ChatGPT became publicly available, I posted a handful of related questions to readers on LinkedIn and on my Exceptional EA website. Hundreds of people responded: 60% had not yet read about or tried ChatGPT, another 18% said they’d begun reading about it, and 23% had established their own accounts.
What about Assistants’ leadership teams, I wondered? Were they paying attention to this technology? In February 2023, 28% of respondents said their respective principals/leadership teams were already assessing or discussing ChatGPT. Four percent said their workplaces had already begun using this AI. Another 37% said this wasn’t on the radar, and 31% didn’t know. When I asked if readers were planning to have a conversation with their principals about how they might use ChatGPT at work, 9% of respondents told me they were already having such discussions. Another 31% said they’d do so once they knew more. While 4% said they’d wait for their principals to bring up the topic, a whopping 57% had no plans to discuss this AI with their principals.
A Pandora’s Box?
There are valid concerns about generative AI being a 21st-century Pandora’s box. When I typed in a query about potential forms of misuse of generative AI, ChatGPT responded, “Generative AI has the potential for misuse, including the creation of deepfakes, cyber attacks, fraud, and privacy violations, which can lead to the spread of false information, reputational harm, and incitement of violence. There is a need for ethical guidelines and regulations to prevent these issues, as well as continued research to develop methods for detecting and preventing misuse.”
It’s important, then, to balance understanding of new resources such as generative AI with the need for prudence. As someone who delivers cyber awareness training, I’ve read cautions that generative AI may enable individuals without high levels of tech expertise to become hackers. On a more positive note, recognising that AI is not infallible means employers will still need humans as well as chatbot resources.
Functions May Change
We can lean on AI to support our work, and this comes down to more than proofreading skills. We add value by thinking creatively, identifying what questions to ask, and thinking critically about the quality of responses and what to do with the information we garner.
Most of all, we need to recognise that we as humans can add value in ways AI does not, or at least cannot match. We can and must be strategic in our approach to what we do. We can and must be ready to adapt to change, which will continue to come at us regardless of how we feel about it. I thrive on change, yet I also recognise it’s disconcerting for many. If you feel threatened by AI or by change itself, working on adaptability may be one of the best things you can do for your career.
When we add creativity, initiative, flexibility, and empathy to the mix of all we as humans can offer, and when we cultivate and bring these emotional intelligence skills to the role, we have the potential to remain positively impactful and of tremendous value to employers and stakeholders.
Cultivate and nurture your flexibility and your personal and career resilience, along with more technical skills, and be prepared to showcase and amplify your accomplishments effectively.
Think about quantifying how you impact stakeholders and choosing the right time and place for such communications. Read broadly.
Will your career and responsibilities look the same 3, 5, or 10 years from now as they do today? That’s unlikely. This career has evolved and seen numerous changes over the centuries. Let’s see where we take it in the next few years.