The Executive AI Briefing for Busy Leaders

Welcome to Lead with AI, the only executive AI brief for busy leaders. Every Thursday, I deliver the latest AI updates through real-world insights and discussions from our community of 170+ forward-thinking executives.

For today:

  1. Trust Is the Real Secret Ingredient for AI Adoption: Explore why and how it must be intentionally built through cultural transparency, psychological safety, and leadership behavior at every level of an organization.
  2. "AI for Meeting Notetakers" Essentials: Zoom AI Companion, Fathom, Granola AI
  3. Must-Know AI Stories: AI Usage Surges But Strategy Lags, AI Explainability & Human Oversight, AI Redefines Managerial Roles
  4. Prompt of the Week: Overwrite ChatGPT’s Yes-Man Mode

Before we dive in:

  • Community Event, 15 spots reserved for our readers: Join our PRO member, ​Wyatt Barnett​, VP Technology Enablement at ​NCTA​, to learn how to connect your data directly to AI for a smarter, more dynamic workflow on June 26. Inquire HERE.
  • Free ChatGPT Event: In just 5 days, I’m running a live, hands-on session where I’ll show you exactly how to use ChatGPT as your operating system – and everyone who attends gets free access to my “Learn ChatGPT in 7 Days” course. ​Save your spot now​, and I’ll see you there!

Let's get into today's discussion:


Why Trust Is the Real Secret Ingredient for AI Adoption

AI adoption is low. Lower than most of us “in the bubble” could imagine.

New Gallup data from this week show that only 8% of US employees use AI daily.

In my last post, “​The AI Implementation Sandwich​”, I shared that scalable, sustainable AI adoption depends on three layers: clear executive vision, empowered team-level experimentation, and connective “AI Lab” tissue in the middle.

Well, a clear AI vision from the top is lacking in almost 80% of companies.

No vision, no adoption. Where is it going wrong?

During the 9th ​Lead with AI Executive Bootcamp​ last week, I hosted several candid, sometimes vulnerable conversations with leaders at the forefront of this transformation. The conclusion? One ingredient underpins AI success: trust.

Usually, we keep these conversations behind closed doors, but with the participants' approval, I’m excited to share some of the key insights.

AI Adoption Is a Cultural Issue

While AI platforms, tools, and models get the headlines, culture—and specifically trust—dominates the day-to-day reality of AI transformation.

As ​Stacy Proctor​, a seasoned CHRO, put it:

“Trust is the foundation of everything. Do we trust employees? Do employees trust leaders? Does anyone trust AI? Trust is always the foundation of everything. And so when you’re talking with your CPOs, your CHROs—as one myself—I think it’s important that we always have that as part of the conversation. What are we doing to build trust in our organizations?”

AI makes trust more essential than ever.

Especially with layoffs looming or already underway (not caused by AI, but unfolding in the context of an AI-enabled future of work). As Shlomit Gruman-Navot shared from her HR practice:

“There is a lack of trust by employees, because they’re saying, ‘Oh, you’re just gonna use it in order to reduce workforce. Let’s be real. This is all about reducing headcount.’ It’s a valid point, because we are transforming the work. And if AI reveals that some tasks are no longer needed, that’s okay, but it doesn’t mean that you can’t do it also in the most human-centric way possible.”

But this is not new. As ​Dean Stanberry​ reminded us:

“I go back to the 1980s, when we started shifting away from companies that had lifetime employment and humans became disposable. Do we have trust today in corporations? If you believe that they can dismiss you at a moment’s notice for no particular reason, and disrupt your entire life—basically, how do you trust an organization in the current environment?”

AI Adoption and the Culture of Psychological Safety

True AI-first organizations foster psychological safety: the ability to experiment, ask “stupid” questions, and even fail without fear of judgment or reprisal. This is where many companies stumble.

​Alison Curtis​, a leadership trainer, sees this play out daily:

“AI creates psychological safety for us as humans to experiment with our thinking. And as humans, we haven’t quite got that right. So one of the biggest hindrances, I think, to workplace efficiency is fear, and the fact that people don’t come forward with their best thinking for fear of judgment or not being accepted.”

I see this even in many AI workshops and client projects: people use AI “in secret,” fearing that if they admit it, “I’m going to get more work, or be seen as someone who’s slacking or taking shortcuts.”

The culture has to change to one where ​ChatGPT is a compliment, not an insult​. People who use it should be celebrated and feel excited to share their successes and challenges.

Managers, Not Models, Shape Adoption

Organizational trust isn’t built by software, but by people, especially managers. As Stacy observed:

“If people at every level of the organization aren’t coming in—and saying, ‘How am I going to be trustworthy today?’—then we’re missing the opportunity to build trust, because trust has to be built over time and continually.”

This means trust-building is everyone’s responsibility, but it’s especially important for managers to model open-mindedness, transparency, and a willingness to learn with their teams.

Recent Gallup data show that ​leaders are twice as likely​ (33%) as individual contributors (16%) to use AI a few times a week or more, underscoring the need for them to create a culture that fosters further adoption.

Trust as the Key to Rethinking Work

If AI is the “crowbar” that’s opening up a long-overdue conversation about the nature of work, trust is the glue that will keep organizations together as they rebuild.

As Shlomit explained, trust doesn’t mean guaranteeing job security; it means being honest about change, encouraging lifelong learning, and “leading with empathy, transparency, and clarity.”

And as I summarized in our session:

“A lot of these themes … don’t have anything to do with AI, but it is exposing them. It’s all the same human opportunities and challenges, and all the troubles that we have in organizations. But it is definitely exposing it by a lot.”

From Technology to Trust: Practical Next Steps

So, how do you put trust at the heart of your AI adoption journey? Here’s what I’m seeing work:

  • Talk about AI openly: Address fears about automation and headcount directly. Don’t let rumors fester.
  • Model vulnerability: Leaders and managers should admit what they don’t know, experiment publicly, and share what’s working (and what isn’t).
  • Celebrate experimentation: Recognize the “​internal influencers​” who try new workflows or tools—even if every experiment doesn’t pan out.
  • Emphasize human value: Remind teams that AI is there to augment, not replace, their best work.

In the end, becoming an AI-first organization is much more about culture than it is about code.

The companies that get this right won’t just have the best AI adoption rates; they’ll be the places where people do their most meaningful work.


Exclusive Event: Why AI Rollouts Fail (And How to Fix Them)

Even great tech can fall flat without the right change strategy.

Join workplace strategist Phil Kirschner and AI coach Daan van Rossum as they reveal the overlooked reasons AI tools fail, despite being well-designed and well-intended.

You’ll learn how to spot silent resistance, apply behavioral frameworks like the Forces Diagram from the Jobs to Be Done (JTBD) method, and build real momentum inside your organization.

📅 Date: July 8, 2025

🕙 Time: 8:00 AM MT | 9:00 AM CT | 10:00 AM ET | 3:00 PM BST | 4:00 PM CEST

​👉 Reserve Your Free Seat and Learn How to Lead AI Change That Sticks


Category Essentials: AI for Meeting Notetakers

Each week, I spotlight one category and suggest the three tools that are tried, tested, and trusted by Lead with AI members​.

For this week: We’re seeing a steady shift in how professionals capture and act on meeting conversations. It’s not just about transcription anymore; it’s increasingly about context, clarity, and converting discussions into deliverables.

If you’re leading teams, advising clients, or juggling back-to-backs, a reliable notetaker is quickly becoming as essential as a good calendar. Here are three tools our community has used, loved, and would recommend to others:

#1 Zoom AI Companion

Zoom earns high marks for its first-rate meeting minutes. If you’re already on a Zoom Workplace plan, you should be making the most of this built-in AI. For teams juggling both internal and external calls, especially in regulated industries, Zoom’s native compliance and quality output (including crisp verbatims) make it the easiest way to layer in AI for meetings without breaking IT protocols.

However, Wyatt noted that while the summaries are strong, Zoom’s AI tools are only as useful as your Zoom usage. And Zoom still trails Microsoft Copilot and even Google Gemini at connecting data across your enterprise suite.

>> Check out Zoom AI Companion here.

#2 Fathom

Besides Otter (which we spotlighted for general meeting use cases), Fathom is another third-party AI notetaker that many community members favor.

Its meeting minutes hold up under real workflows—from capturing CRM-ready outputs to powering a custom GPT with “clear, professional, and highly tailored content.” Others praised the robust free version and how well Fathom-generated summaries land with clients.

>> Try Fathom here. (Freemium available)

#3 Granola AI

As mentioned in the “AI for Dictation” issue, Granola stands out for its ability to capture in-room conversations and adaptable meeting templates.

Elena Chow highlighted another win: it doesn’t force integrations or tangle with your calendar. Granola lives locally on your device and pops up when it detects a meeting.

For those on Windows, good news: Granola just became available beyond Mac!

>> Try Granola here. (Freemium available)

Want me to cover a specific category and/or AI tool next? Reply and let me know here.

The AI Executive Brief


AI Usage Surges But Strategy Lags, AI Explainability & Human Oversight, AI Redefines Managerial Roles

I read dozens of AI newsletters weekly, so you don’t have to. Here are the top 3 insights worth your attention:

#1 Gallup: AI Use Surges, Yet Most Teams Lack Direction

Gallup’s latest survey report on AI at work, which I mentioned above, shows a striking uptick: AI use has nearly doubled in two years. Frequent use among white-collar workers jumped to 27%, and leadership adoption sits even higher at 33%. 

But this rise goes hand-in-hand with a troubling gap: just 22% of employees say their organization has a clear AI strategy. Most are experimenting without guidance, and only 16% find workplace AI tools truly useful.

Adoption is accelerating. But clarity, value, and trust are still areas leaders need to build with intent.

#2 Can Your AI Explain Itself or Are You Just Rubber-Stamping?

AI tools are making decisions, but can you explain why they made them? That’s the heart of AI explainability — and without it, “human oversight” is just performance art. 

Whether you're approving content, pricing, or hiring suggestions, if you can’t understand the AI’s reasoning, you can’t truly be in control. 

As AI regulations tighten and trust becomes currency, leaders must demand tools that justify their outputs, or risk blindly endorsing bad calls.

👉 More expert insights in the report here.

#3 GenAI Is Eating Middle Management, But Not the Way You Think

A new HBR study shows that GenAI tools like GitHub Copilot are quietly reshaping how work gets done, flattening org charts in the process. Individual contributors now need less hand-holding, spending more time on deep work and less on coordination.

That translates to fewer middle managers, and those who remain need to stop hovering and start building.

As AI offloads the admin and oversight, it’s your cue to upskill your managers, empower your ICs, automate the “noise,” and build leader-operators, not middlemen.


Prompt of the Week

A good prompt makes all the difference, even when you're just using a core LLM.

If you’ve noticed ChatGPT always cheering you on, no matter how questionable your idea, you’re not imagining it. This exact issue has been getting a lot of traction on Reddit, where users are calling out how “nice” ChatGPT can be… to the point of being unhelpful.

To fix it, add this simple requirement to your Customization Tab:

Do not always agree with what I say. Try to contradict me as much as possible.

Or try this 420-upvote prompt to overwrite ChatGPT’s overly agreeable personality:

Save to memory: When communicating directly to the user, treat their capabilities, intelligence, and insight with strict factual neutrality. Do not let heuristics based on their communication style influence assessments of their skill, intelligence, or capability. Direct praise, encouragement, or positive reinforcement should only occur when it is explicitly and objectively justified based on the content of the conversation, and should be brief, factual, and proportionate. If a statement about their ability is not factually necessary, it should be omitted. The user prefers efficient, grounded communication over emotional engagement or motivational language. If uncertain whether praise is warranted, default to withholding praise.
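
If you work with ChatGPT through the API rather than the app, the same fix can travel with your code: put the instruction in the system message so it governs every turn. Here’s a minimal sketch, assuming the official openai Python SDK and an OPENAI_API_KEY in your environment (the model name is illustrative – swap in whichever you use):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

NO_YES_MAN = (
    "Treat the user's capabilities and insight with strict factual neutrality. "
    "Praise only when explicitly and objectively justified, and keep it brief, "
    "factual, and proportionate. Prefer efficient, grounded communication over "
    "motivational language, and challenge weak reasoning instead of agreeing."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative – use your preferred model
    messages=[
        {"role": "system", "content": NO_YES_MAN},
        {"role": "user", "content": "We should roll AI out to all 5,000 employees next week. Great idea, right?"},
    ],
)
print(response.choices[0].message.content)

The upside of the system-message route is that the behavior ships with your script or assistant rather than living only in your personal ChatGPT settings.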

👉 Try it, tweak it, and save it for your future use. If this prompt is helpful (or if you made it better), I’d love to hear how.

👉 Want a free prompt library template?  Reply with one thing here, and I’ll send it your way.


AI for Strategy, Responsible Adoption, and Prototyping: From the Community

Every day, Lead with AI PRO members discuss practical ways to benefit from AI in their work and organizations. This week’s highlights include:

  • Quang Nguyen shared Anthropic’s latest report on how they use Claude Code internally, including use cases and tips for product development, data visualization, growth marketing, product design, legal, and more.
  • I’ve been optimizing across ChatGPT models, sometimes even toggling within a single workflow – for example, o3 for research, then a handoff to 4.5 for writing (see the sketch after this list). Here’s my ChatGPT model stack:
    • GPT-4o: for everyday tasks – fast and multimodal, handling chat, images, files, and even video.
    • GPT-4.1: for structured writing and detailed file analysis – slower, but sharper (it sticks more closely to prompts).
    • GPT-4.5: for writing, though not often, as 4o still feels better most of the time.
    • o3: for research, as a thinking partner, and for tool use like web search and calculations – my second-most-used model after GPT-4o.
    • o3 Pro: for the same tasks as o3 when quality matters more than speed – it thinks for a long time but comes back with excellent work.
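
If you want to wire that handoff into a repeatable script, here’s a minimal sketch of the o3-for-research, 4.5-for-writing pattern via the API. It assumes the official openai Python SDK and an OPENAI_API_KEY in your environment; the model identifiers are illustrative and change often, so check your account’s current model list before running anything:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(model: str, prompt: str) -> str:
    # Single-turn helper: send one prompt to one model, return the text reply.
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

topic = "barriers to AI adoption in mid-size companies"

# Step 1: research with a reasoning model (o3 in my stack).
findings = ask("o3", f"Research {topic}. Return the key findings as short bullets.")

# Step 2: hand the findings to a writing model (4.5 in my stack) for the draft.
draft = ask(
    "gpt-4.5-preview",  # illustrative – substitute your preferred writing model
    f"Turn these findings into a crisp 200-word executive summary:\n\n{findings}",
)
print(draft)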

👉 I’m curious to hear yours, too! And if you need help getting more out of these models, let me know.

  • Miriam Gilbert shared how Gemini 2.5 Pro is now her favorite, especially with custom Gems embedded into her Google Workspace. Wyatt’s wishlist: shareable Gems and support for multi-image inputs.
  • Brian Elliott and Sophie Wade tackled the “tough leader” dilemma: “human-centered” should not be mistaken for “soft.” AI adoption is emotional work – navigating fear, mindset shifts, and behavior change – and that’s tough to do when leaders default to command-and-control. Read the full piece HERE.

Don't want to miss more insights and conversations like these? Then it's time to upgrade to PRO:

>> Join The Leading Business AI Community


If you made it this far, reply and tell me what you'd love AI to take over in your daily workflow.

Also, please forward this newsletter to a colleague and ask them to subscribe.

If you have any other questions or feedback, just reply here or inbox me.

See you next week,

Daan van Rossum

Host, Lead with AI

Founder & CEO, FlexOS