Generative artificial intelligence (GenAI) burst onto our screens in late 2022 and continues to disrupt our perception of our role, says Shelagh Donnelly

Are you among the 29% of Assistants using GenAI daily or almost daily? That’s the percentage of respondents to my early 2025 survey who reported that level of use of one form of GenAI or another. Another 49% reported using such tech resources on occasion. Anecdotally, based on conversations with Assistants when I travel to deliver keynotes and other presentations, we’re all over the map. Some are in workplaces where the use of GenAI is prohibited or restricted. For others who’ve not yet used GenAI at work, it’s simply not yet on the radar.

We’re now in our third year of a world with GenAI office resources, and change is a given.

Time flies when you’re having fun… and when the world is awash with change and disruption, positive or otherwise. Generative artificial intelligence, commonly known as GenAI, burst onto screens in November 2022, and it’s not going away. I’ve been writing about GenAI since December 2022, and my March 2023 cover article for this magazine may well have been one of the first pieces you read about ChatGPT.

Since then, we’ve seen the emergence of Claude, Copilot, Gemini (originally known as Bard), Perplexity and more… including the disruptive January 2025 launch of DeepSeek, a Chinese product.

DeepSeek: “Into the Unknown”

If ChatGPT burst onto screens in late 2022, DeepSeek crashed right through them. Less than a week after its product release, DeepSeek blasted to the top of Apple’s App Store downloads, overtaking ChatGPT while also triggering almost $600 billion in one-day stock market losses. Chip maker Nvidia and other tech-related stocks such as Alphabet (Google), Dell Technologies and Microsoft were among those impacted.

DeepSeek’s home page features a whale logo and the phrase, “Into the Unknown”. It seems I wasn’t the only one who considered those words apt, as a number of governments began seeking information from DeepSeek to determine whether users’ data would be appropriately safeguarded. In the US, some Department of Defense employees downloaded and began using DeepSeek within the Pentagon1 before officials banned workplace use of the product. Other nations have also banned government bodies’ use of the product, or entirely banned its use within their countries.

New Iterations of Earlier GenAI Resources

In the closing days of February 2025, OpenAI launched its latest iteration of ChatGPT, known as GPT-4.5, and we’re told it will hallucinate less than earlier iterations. That would be genuinely helpful: I’ve found numerous instances in which GenAI may as well have been on magic mushrooms, producing fictitious statements that sound entirely plausible and presenting them alongside factual content.

We’re also told GPT-4.5 is better at following instructions, recognising patterns and drawing connections than its predecessors. I’ve played around with the “Deep research” option, which, in my view, produced results that continue to reflect biases. ChatGPT continues to offer free versions, which have limitations. If you want to access GPT-4.5, you’ll need to be a paid subscriber.

Whether you have a paid or unpaid subscription, it’s costly for providers to deliver such services, and the large language models (LLMs) that drove ChatGPT’s first couple of years have been generating diminishing returns for its developer, OpenAI. Those diminishing returns on a costly product are among the reasons OpenAI is moving onward from its days of “unsupervised learning”, or “pretraining”, in order to deliver increased “world knowledge”. What’s next? We’re also told we may anticipate improvements in this AI resource’s capacity to reason.

How GenAI Developed Its Knowledge

GenAI is very big business, and it consumes significant resources. Over the space of months last year, OpenAI’s value more than doubled – from $70B to $157B in October 2024. In early 2025, the ChatGPT developer was engaged in further funding discussions, with valuations ranging from $260B to as high as $300B.

OpenAI may require some of that funding to settle litigation over copyright infringements. Have you ever paused to think about the sources GenAI relies on for all that learning? In 2025, I posed this question to ChatGPT. ChatGPT responded, in part, “ChatGPT was trained on a vast dataset that includes books, academic papers, articles, publicly available content, and licensed data.”

Such practices have not been well received by various parties who have concerns over copyright infringements. The Authors Guild, which represents almost 14,000 authors, launched a lawsuit2 against OpenAI in September 2023. Its website cites compelling issues. The New York Times filed suit3 against OpenAI and Microsoft in December 2023, while a coalition of Canadian media outlets also filed suit4 against OpenAI in November 2024.

These lawsuits may be among the reasons ChatGPT’s early 2025 response to my question about its information sources also included the following: “ChatGPT cannot access subscription-based news sites (like the Wall Street Journal, Financial Times) or private databases (such as LinkedIn Premium, corporate reports, or confidential research papers).”

GenAI and Energy Consumption

It’s not only financial resources that are fuelling GenAI. What about our environment? We can turn to various sources to explore GenAI’s energy usage. When I asked ChatGPT about this in February 2025, it gave me the option of choosing one of two available responses. I chose both.

The first response stated that a typical ChatGPT query consumes 0.3 watt-hours (Wh) of electricity. Likely anticipating this wouldn’t mean a heck of a lot to a layperson, ChatGPT went on to explain this would be the equivalent of having a 10-watt LED light bulb on for about two minutes. It added, “For instance, with hundreds of millions of daily queries, the total energy consumption can reach around 1 gigawatt-hour (GWh) per day, equating to the daily energy usage of approximately 33,000 U.S. households.”
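For readers who like to check such figures, the arithmetic behind ChatGPT’s comparisons can be verified in a few lines of Python. The numbers below are the reported estimates quoted above, not measurements of my own:

```python
# Sanity check of the energy figures quoted above. All inputs are the
# article's reported estimates, not independent measurements.

WH_PER_QUERY = 0.3      # reported watt-hours per ChatGPT query
LED_WATTS = 10          # rating of a common LED bulb

# Minutes a 10 W bulb could run on one query's worth of energy
led_minutes = WH_PER_QUERY / LED_WATTS * 60
print(f"One query = {led_minutes:.1f} minutes of a 10 W LED")  # 1.8 minutes

# 1 GWh/day spread across ~33,000 US households implies daily use of:
GWH_PER_DAY = 1.0
HOUSEHOLDS = 33_000
kwh_per_household = GWH_PER_DAY * 1_000_000 / HOUSEHOLDS
print(f"About {kwh_per_household:.0f} kWh per household per day")  # ~30 kWh
```

The implied 30 kWh per household per day is roughly in line with typical US household electricity usage, so the comparison ChatGPT offered is at least internally plausible.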

That’s a lot of energy we’re consuming.

ChatGPT’s second response noted a British newspaper’s report of studies suggesting GenAI has been driving energy consumption at levels 662% higher than the big tech companies have been claiming. It also reported that Google and Microsoft had announced or were investing in transitions to nuclear power.

Minutes: Who’ll Be Producing Them Next Year, You or GenAI?

Just as OpenAI continues to update its products, Microsoft and other GenAI providers are also pressing forward with new developments. Microsoft, as one example, has big plans for efficiencies – efficiencies which I suggest may or may not ultimately be to your advantage. This includes the capacity to record minutes. This isn’t entirely new. A bit over a decade ago, I recommended a software as a service (SaaS) portal to the board with which I worked, and we acquired it. By 2017 or so, we could have used that software to generate board minutes. We chose against that option with, I believe, good reason.

We’re told Microsoft will position workplaces to support minute preparation in part by uploading historical data including past minutes, policies and more. The intent is that organisations can also identify and establish controls related to production of future minutes and documents. You’ll already find online statements to the effect of, “Copilot will not generate a summary for meetings with content that includes potentially harmful or offensive content”.

Last December, I served on an international panel in which we discussed the future of the Assistant career. A couple of us had differing views on the extent to which recording minutes will – or won’t – be part of the Assistant career in the short term. A fellow trainer, one who’s earned recognition as a Microsoft MVP and is increasingly specialising in AI-related training, projected that – with exceptions – people won’t be taking minutes by the end of 2025.

As someone who delivers training on how to produce quality minutes with confidence, I can appreciate how that concept may be music to many ears. That said, I respectfully challenged my fellow panelist’s view. I do believe there will be early adopters who’ll be more than ready to tap into GenAI to help with minutes. I also believe many Assistants will still be recording minutes themselves in 2026 and beyond – in part because of risk management practices, beginning with cyber risks. Data breaches can have significant financial, reputational and legal ramifications.

There’s also the reality that, whether by choice or as dictated by resources (or the lack thereof), many organisations and workplaces don’t implement changes at the pace at which tech resources are advancing.

As well, I think many employers will continue to want to rely upon a well-informed human’s understanding of nuances and organisational and sectoral knowledge. These factors help in understanding what should and shouldn’t be recorded – whether we’re considering a routine meeting or one called to deal with crisis management. A skilled, knowledgeable Assistant will have insights that help determine what should never be recorded, whether or not content can be deleted after the fact.

With my perspective contrasting with that of my fellow panelist, I put the question to both the general LinkedIn community and to the Executive Support LinkedIn group. With the advance knowledge of my fellow panelist, in December 2024 I asked, “If your role involves preparing minutes, what approach do you see your employer incorporating and expecting of you in 2026?”

I gave readers three responses from which to choose, and, on average, 51.5% said they anticipated their employers will expect them to both record and edit minutes. Another 42% said they anticipated their employers would expect AI to record meetings with them doing the editing.

The remaining percentage of respondents selected the “Other – please add a comment” option I’d included. In those comments, one respondent wrote, “I record and I also take notes. AI tends to capture all the wrong info.” Another wrote, “I shall continue to take minutes in the room or via Zoom – and no – I don’t use the recording. Call it professional pride but I really don’t need it. AI won’t be used at all.” People spoke to a few points, including wanting to maintain a human element in the process, cyber risks, information governance, and the reality I’d noted in posing the question: not all workplaces can or want to tap into tech resources at the pace at which they’re progressing.

As With the Opportunities, the Risks Are Real

Whatever resources they access, Assistants who use GenAI are finding multiple uses for it. The speed with which we can generate results is impressive. I have and use a paid ChatGPT subscription, and dabble with Copilot. The words you see here are my own, though, and not regurgitated results of GenAI having vacuumed up people’s thoughts.

In using AI, it’s wise to be cautious about feeding it information you or your employer would consider proprietary, confidential or sensitive. We need to exercise discretion. We do have the option to head to GenAI tools’ privacy settings and submit privacy requests, asking that the data we enter not be used for training. You’ll notice, though, that the term is “privacy request”. How do we know with certainty that our data will not go further than we’d like, or be breached?

Cyber attacks continue to rise in both pace and sophistication5. If you’ve heard me present on cyber awareness, you may remember me saying there are three types of organisations when it comes to cybersecurity: those that have been breached, those that will be breached, and those that have been breached but don’t yet know it. Microsoft, which brings Copilot our way, is among the entities that know they’ve experienced cyber breaches.

Reputation Management

Most of us who’ve built solid reputations within our careers have earned them over time. Both trust and credibility take time to establish, yet certain faux pas or lapses in judgement can shred reputations in a heartbeat. While there are many good reasons to utilise GenAI where appropriate and approved, we should never stop proofreading and scrutinising outputs for accuracy.

Take the case of a Vancouver lawyer who used ChatGPT in preparing for a 2023 court case. Her client, a divorced father with a net worth estimated at between $70 million and $90 million, wanted to take the family’s three children from Canada to another country. Do you remember me mentioning GenAI hallucinations? This lawyer presented what proved to be non-existent case law in court. She lost both the case and face with her client, and more. In February 2024, a BC Supreme Court judge formally reprimanded the lawyer and ordered her to pay special costs. Her name has also been splashed across news feeds internationally.

Whether you’re preparing minutes, a report or other deliverables, it’s not just your reputation and credibility that are at stake. Your colleague(s) and workplace may also be impacted. An Assistant may not receive credit for positive reputational outcomes, yet it’s safe to venture adverse outcomes can and will be traced back to you.

Is Your Principal Aware of the Extent to Which You’re Using GenAI in Your Career?

This is another question I posed to three groups in early 2025, on my website and to both the general LinkedIn community and the Executive Support LinkedIn group. The results surprised me, as less than 50% of all respondents chose the affirmative response. An average of 17% of respondents (with the median response being 16%) said they weren’t sure, and an average of 39% (with the median being 41%) said no.
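As an aside on the arithmetic: when results are pooled across three survey groups, the mean and the median can tell slightly different stories, which is why I report both. A minimal sketch, using hypothetical per-group percentages rather than my actual survey data:

```python
from statistics import mean, median

# Hypothetical "not sure" percentages from three survey groups --
# illustrative values only, not the actual survey results.
not_sure = [14, 16, 21]

print(f"mean:   {mean(not_sure):.0f}%")    # 17% -- pulled up by the high group
print(f"median: {median(not_sure):.0f}%")  # 16% -- the middle group's figure
```

With only three groups, a single outlying group shifts the mean while leaving the median untouched, so quoting both gives a fairer picture.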

The results have me asking even more questions. Are some Assistants reluctant to let their principals know they might otherwise struggle to keep up with heavy workloads? Are some quietly using GenAI to impress with the volume and quality of their output, or is it simply a case of taking for granted that AI is now part of our careers? Are there instances in which this absence of communication reflects concern about job security, given that GenAI can increasingly handle tasks that would otherwise demand more time and effort?

Or does this point in part to the state of communications in some workplaces? In response to another question among my early 2025 surveys, 23% (the median among the three groups I surveyed was 22%) said there’d been no workplace communication on use of GenAI. Another 18% (with the median being 21%) said policy development was underway.

We’ve come a long way from 2023, our first full year with ChatGPT, when I posed similar questions. That year, double the percentage of respondents – 46% – said there’d been no communications on policies or guidelines associated with the use of GenAI in the workplace.

I suggest that, as employers increasingly recognise GenAI as an efficiency driver, these percentages will continue to decline and more employers will develop not only policies or guidelines, but also expectations. Some will certainly and understandably continue to prohibit or limit the use of GenAI, yet the reality is that GenAI use continues to rise. It’s practical, then, to continue exploring these resources, and to do so with a healthy mix of curiosity and caution.

References

1 https://news.bloomberglaw.com/daily-labor-report/pentagon-workers-used-deepseeks-chatbot-for-days-before-block

2 https://authorsguild.org/news/ag-and-authors-file-class-action-suit-against-openai/

3 https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html

4 https://www.cbc.ca/news/business/openai-canadian-lawsuit-1.7396940

5 https://news.microsoft.com/en-cee/2024/11/29/microsoft-digital-defense-report-600-million-cyberattacks-per-day-around-the-globe/

Authentic, expert and inspiring, Canadian Shelagh (pronounced “Sheila”) Donnelly speaks internationally, and Assistants have been turning to her Exceptional EA website since 2013. Shelagh delivers future-focused training.
