
Beyond Better Emails: Building Confidence with AI in the Workplace

Summary

The author attended DataFest 2025, a conference themed “Navigating Data and AI”, which revealed how many people limit AI use to basic tasks like email writing, largely because they are unsure what the tools can do and whether workplace use is appropriate. While developers enthusiastically embrace AI for productivity, non-technical users remain hesitant, a usage gap that highlights the need for better AI literacy training across all workplace roles. The key takeaway: building confidence with AI means starting small with practical experiments rather than trying to master everything at once, and treating it like any other workplace tool, learned gradually through hands-on experience rather than comprehensive technical knowledge.

I attended the DataFest conference in early May 2025. This year’s theme, “Navigating Data and AI,” felt especially timely as both interest and uncertainty around these tools continue to grow. With speakers like Dr. Daniel Hulme from WPP, Colin Jarvis from OpenAI, and Zoe Kleinman from BBC News, the event explored topics ranging from machine consciousness to the ethics of digital twins, alongside real-world applications in design, education, and governance. 

Across sessions and conversations, one thing was evident: AI is stirring a powerful mix of excitement and anxiety. You could sense it in the questions people asked—hopeful curiosity about possibilities, paired with caution about the unknowns. 

As someone who already explores how generative AI can support tasks across different kinds of training—whether that’s simplifying code in Python workshops or generating examples in SPSS sessions—these conversations felt especially relevant. They echoed questions I often hear from learners: What can I use AI for? Where does it help? Where does it fall short? 

Here are a few reflections and takeaways I left the event with—ideas I’ve been thinking about ever since. 

 

  1. From Writing Emails to Rethinking Workflows

What stood out most to me—especially as someone who uses large language models (LLMs) every day—is how often people equate AI with just one task: writing better emails. Tools like ChatGPT or Microsoft Copilot have become common, but their use often stops at grammar correction, rewording, or quick summaries. 

This isn’t because people lack interest. More often, it’s because they’re unsure what else these tools can do, or whether it’s appropriate to use them beyond surface-level tasks. In training sessions, I regularly hear questions like “Am I even allowed to use this?”, “What if it gets something wrong?”, and “Is this cheating?”. 

These are valid concerns—but they can also limit the very experimentation that unlocks real value. What I appreciated about DataFest was how it moved beyond the hype to encourage a more nuanced view: AI isn’t here to replace thought—it’s here to support it. 

 

  2. Not Just for Developers

A recurring theme across sessions was the difference in how AI is perceived by those who build it versus those who use it. Developers spoke with optimism: LLMs are speeding up development cycles, assisting with boilerplate code, and freeing time for more creative problem-solving. But in many non-technical industries, the response is more hesitant. 

In the workplace, this split often shows up in two ways: 

  • People feel they’re “not technical enough” to explore GenAI meaningfully. 
  • Or they use it only in limited, repetitive ways—without seeing its potential to reshape how they communicate, learn, and collaborate. 

Personally, I believe the best way to move past that hesitation is simply to start. Even if it’s just using GenAI to rephrase an email or summarise a document, the more you use these tools, the more you begin to understand where they shine—and where they fall short. That gap isn’t discouraging; it’s empowering. It reminds you that at the end of the day, it’s still you—the human—who’s writing the prompt, setting the task, and making the decisions. 

It’s not about mastering the tool overnight. It’s about building familiarity through small, everyday use—and discovering, with time, how it might genuinely support your thinking and your work. 

 

  3. Preparing People for a Changing Workplace

During one panel, a speaker posed the question: “Are we preparing students for a world shaped by AI?” It struck a chord. But I’d take it further and ask: Are we preparing staff—across departments and roles—for the reality of working alongside AI? 

Many people entering the workforce today, or adapting to new digital tools in their current roles, have never received formal guidance on how to engage with AI thoughtfully. By offering clear, ethical, and practical learning pathways, we can ensure they’re not just passive users—but confident, critical thinkers. 

That’s the difference between digital literacy and digital confidence. 

 

  4. What I’m Taking Back

While the conference touched on topics as wide-ranging as AI in healthcare, ethics, and cognitive science, what I’m taking back with me is something simpler—and perhaps more reassuring. 

One thing is clear: generative AI is going to reshape the way we work. That truth surfaced in almost every session, across sectors and disciplines. And yes, that shift can feel overwhelming or anxiety-provoking. But at the end of the day, it’s just another tool—something that can be learned, adapted to, and integrated into daily workflows, just like Excel or any other piece of software we’ve come to rely on. 

That’s the message I want to carry into my training sessions: you don’t need to know everything to get started. You just need to start. With small, useful tasks. With curiosity. With the understanding that confidence comes from doing. 

Because the more we demystify these tools, the more we open the door for people to not just use AI—but to use it well. 

 
