Using AI Starts with Your Teams (with WSJ Technologist Dr. Alexandra Samuel)

Dr. Alexandra Samuel returns to update us on how to use AI in our work and organizations, and even how to let it design fingernails.
Daan van Rossum
Founder & CEO, FlexOS
I founded FlexOS because I believe in a happier future of work. I write and host "Future Work," I'm a 2024 LinkedIn Top Voice, and was featured in the NYT, HBR, Economist, CNBC, Insider, and FastCo.
May 28, 2024
15 min read


In today’s episode, we explore the transformative role of AI with our first return guest, Dr. Alexandra Samuel.

Since our last discussion, AI has not only advanced in capabilities but also in its integration into our daily workflows and organizational structures. 

We'll delve into how AI is transforming how we work, what our relationship with AI should be, the differing speeds at which it is being adopted even within a single organization, and what you as a business leader should do to successfully bring AI to your organization.

These are the key insights you can implement immediately to enhance your AI strategies:

1. The Time to Act is Now

As Alex shared, the typical way of adopting technology is for a company to collect requirements, test, and roll out. That approach doesn't work for AI: by the time you've finished, as Alex jokes, the robots will have taken over. So each of us must move fast and, above all, start experimenting. Alex suggests finding people who are already AI enthusiasts and tapping them to have their teams innovate together.

2. Create a Culture of AI

Alex stressed the paramount importance of organizational culture in adopting AI. We need to define our relationship with AI and how it is used, for example by making it a norm to summarize handover materials before sending them. It's about having discussions and creating agreements between people about when and where to use AI. As Alex says, we need to make AI not just legitimate, but expected.

3. Let People Experiment

An essential way to get people to use AI more is to let them experiment, as we also saw in the Moderna and Microsoft case studies. Make time available to figure out how to automate your work. And make it clear that it's okay to do so. At the same time, reassure people that this is not about losing their jobs. 

4. Close the Gap

Some people are using AI a lot; for others, it is still very new and used infrequently. As AI evolves, that could create a gap between employees. Close that gap by investing in shared understanding and hands-on time with AI tools for everyone.

Help everyone reap the benefits of, for example, getting good at ChatGPT's voice feature, which is a game-changer. As Alex says, "profound changes come when you develop that capacity as an organization or a team."

We finish with Alex's call to action: AI is a co-intelligence. The co means we change, too. We evolve, too. Rethink your relationship with AI.

By leveraging these insights and integrating these tactics, you can facilitate smoother AI adoption and ensure your organization remains at the forefront of technological innovation and workplace efficiency.


Transcript:

Daan van Rossum: We talked about it six months ago, and you're actually the first return guest on the podcast. Because when we talked six months ago, I was really focused on your book behind you, which is Remote, Inc. I was thinking about talking about hybrid work, remote work, and the 4-day work week, and we did talk about all of that, but then we started talking about AI, and it really changed my point of view completely. 

I was interested in AI, and I was playing with AI. But the way you described how AI could eventually take over a lot of our work, maybe a little more than we would hope for, really changed my thinking. I started researching it a lot more.

It has been six months. A lot has obviously changed since then. AI is developing very rapidly. We're actually recording this right after OpenAI did their demo day and announced a bunch of new features. So where do we stand with AI today?

Alexandra Samuel: In some ways, it's changed a lot. In some ways, it's changed very little because the reality is that the vast majority of people are still just dipping their toes in the water while other people are swimming in a new ocean. And I think what we need to get our minds around at this juncture is that, yes, some of us are using AI, and our day-to-day working lives are really changing what we can accomplish.

We're thinking in new ways. We're taking on wildly ambitious projects that used to be out of reach. We're possibly increasing our productivity and output in really significant ways.

Other people are using it a little here and there, correcting some spelling mistakes. The challenge is that I see so many surveys that try to measure where we are in AI adoption, and the results are all over the place. If you ask people whether they use AI, half of them might say no, when 99 or 100 percent of them actually do use AI.

They just don't think of it as AI. They don't think about Google being an AI. They don't think about Gmail being run by AI. Unless you ask people more nuanced questions, some sort of skill-testing question, you don't really know where they're at.

In this way, the path to AI adoption is not that different from what we've seen with many other waves of tech adoption in the workplace, which is that you have some people who are like, Everything's changing. I'm one of those people, obviously. Everything's changing. All this is amazing. We're going to do all this great stuff, and then we have to sit back and wait for the rest of the workplace to catch up. Because at the end of the day, when you're in an organization, it's awesome if you have three people in your team of 400 who are really into AI, but the profound changes come when you develop that capacity as an organization or a team.

Daan van Rossum: Absolutely. We've seen some data, and I totally agree with you that the data is extremely inconsistent from survey to survey. It's very difficult to really get a firm understanding of what it means. Also, a lot of those surveys are done by tech companies trying to sell AI. So when Microsoft comes out with their big study that AI is the future based on their data, obviously, we have to remember that they have some incentive to push AI. 

But when you're looking at it from an individual level, there's already a pretty big gap starting to develop. Then you look at it at a team level, and then you look at it at an organizational level. One of the things that did come out of the Microsoft study is that a lot of people are now BYOAI, bringing their own AI to work because companies are not providing AI tools. They may not have developed a framework. They may not have developed an adoption plan. 

So we recently saw two really interesting case studies. One was the Moderna case, a very deep partnership with OpenAI. One was from Microsoft themselves and how they're applying it in HR. In both of those case studies, they actually say it's not so much about creating some huge AI initiative at the company level.

It's actually letting individual people experiment more with AI, letting them test it out, and finding in which ways AI can automate their workflows or change or improve how they work. 

And that really reminded me of what you said six months ago, which is that smart bosses really should just give people more time for experimentation. Is that still where you stand, that people just need to experiment more? And what can companies do to create that space?

Alexandra Samuel: I think at this time, in most organizations, that is the most realistic approach for a couple of reasons. One is that technology is changing too quickly. Witness today's announcement. You know how companies work. If you do traditional systems integration, you spend six months or a year gathering requirements, talking with everyone on the team, and figuring out what systems are going to work for everyone. Then you make your recommendations. Then you have a change management practice.

And two and a half years later, by the time you have implemented it, the robots have fully taken over, and humans are just like meat sacks that sit on a shelf as batteries. There's no point.

So part of it is just the pace of change. And then I think the other problem, and this is, again, tied up with organizational structures, is that, ideally, you would find something in the middle. Ideally, you would say, Look, we've got all these great people who are ready to experiment. We've got some people who may not be ready to experiment. Let's see if we can find a couple of teams where most people are game and have the team innovate together and talk about what pieces they're going to do together. Because I think some of the gains really do come from some agreements about what things we're going to automate. I'll give you an example, actually. I love this example. It happened totally by accident.

So, a few weeks ago, I was bringing a colleague up to speed. I'd been asked to hand over a big piece of a project I'd been working on for a couple of months to somebody else on the team, and I needed to get her up to speed. I did what I always do in that circumstance, especially because we were in very different time zones. I took three email threads that had accumulated over six weeks; the whole story of the project was told in those three threads. One was the external one with the client, one was an internal one within the project team, and another was a technical thread with an engineering team.

They were pretty long threads, but I was like, okay, here we go, and I just forwarded her the threads. After I forwarded them, I thought about an exchange I'd had with a friend of mine on Facebook the previous weekend. He'd been complaining that in his organization, colleagues are constantly sending along PDFs, FYI, in case you're interested, of this article or that resource, and he was like, This is so annoying. Why do they think I would find this interesting?

And I said to him, This is where AI should be coming in. People should be giving you the AI summary with the PDF so you can decide if it's worth your effort. And why should all 500 recipients do that if just one person can do the summary before sending it out?

Of course, as soon as I'd sent those threads to my colleague, I was like, Oh no, I'm the bad colleague who didn't do her homework. So it took me five minutes. I copied and pasted the three email threads into text files, uploaded the three text files to ChatGPT, and then gave it some instructions. I said, You are a consultant bringing a colleague up to speed on a project. Here are the seven areas where she needs to know what's going on. Read these email threads and summarize where we're at on these seven fronts.

So it did that. I looked at the summary. It was pretty good. There was one thing it left out, and one thing it had gotten a little wrong. Anyhow, long story short, in less than 10 minutes, I had a really good memo bringing her up to speed on the project instead of three email threads that would have taken her two hours to read and still left her confused.
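For readers who want to script this handover-summary workflow rather than paste into the chat UI, here is a minimal sketch assuming the OpenAI Python SDK. The helper function, file paths, and model name are illustrative assumptions, not something from the episode:

```python
# Hypothetical sketch of the handover-summary workflow described above.
# Assumptions: the OpenAI Python SDK, local .txt exports of the threads,
# and the "gpt-4o" model name; adjust to your own setup.
from pathlib import Path

def build_handover_messages(thread_paths, topics):
    """Assemble a chat payload: a role instruction plus the raw email threads."""
    # Concatenate the exported threads, separated so the model can tell them apart.
    threads = "\n\n---\n\n".join(Path(p).read_text() for p in thread_paths)
    # Number the fronts the summary should cover.
    topic_list = "\n".join(f"{i}. {t}" for i, t in enumerate(topics, 1))
    return [
        {"role": "system",
         "content": "You are a consultant bringing a colleague up to speed on a project."},
        {"role": "user",
         "content": (f"Read the email threads below and summarize where we're at "
                     f"on these fronts, based only on the threads.\n\n"
                     f"Fronts:\n{topic_list}\n\nEmail threads:\n{threads}")},
    ]

# Sending it is then one call (requires an OPENAI_API_KEY in your environment):
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(
#     model="gpt-4o",
#     messages=build_handover_messages(["client.txt", "team.txt", "eng.txt"],
#                                      ["Timeline", "Budget", "Open questions"]))
# print(reply.choices[0].message.content)
```

As with Alex's manual version, the output still needs a human pass: check for the thing it left out and the thing it got a little wrong before forwarding the memo.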

I give that example because, to me, this is how we use AI as a team, which is much more important and has much more impact than what we do individually. You need to create a culture where the norm is, Hey, it's not up to the individual recipient to use the AI. It's actually an incremental, tiny effort from the sender to use AI.

We're just not going to send these obnoxious emails anymore. We're not going to do these FYIs. We're not going to forward the whole thread. AI is going to make it possible to summarize the context, and so I think that's where we need to start establishing norms for our teams and organizations so that AI is not only legitimate but expected. And it's rude to send someone something without AIing it first.

Daan van Rossum: That's pretty much what the Moderna CEO said in the case study that they released with OpenAI. And again, similar to the studies from Microsoft, I know it's marketing, but I think it's very insightful in terms of how a big company in a very regulated industry like that can use AI. One of the things that they said is that there's now a mandate that people have to use it 20 times per day or more.

In the Wall Street Journal, he said no fewer than 20 times per day. He said once people get how easy it is and how much value it brings, they're more than happy to use it 20 times per day. It's not like telling kids to eat their vegetables. 

Because you mentioned culture, and that was a really big part of his case study as well, it was really about not just understanding, but how you build a culture in which people want to try, innovate, build those small GPTs, and change their workflows. How would you do it as a company leader today, to go from some people individually using AI to finally getting to a team and company level?

Alexandra Samuel: I think there are a few things that you want to mandate, or rather make possible and incentivize. One is that you want to make it totally okay to spend part of your work day figuring out how to automate your own work.

Second of all, you want to make it clear. It's not cheating. There's nothing embarrassing about sending AI-written stuff. Third of all, you want to create a culture where there's an expectation that you will fact-check your AI-written stuff. You're responsible for thinking critically about what you're looking at, and is this accurate? That's important.

Then, I think, the most perhaps foundational piece of this is that you need people to feel like their jobs are not at risk by doing this. You're not going to automate yourself out of work. And I think there's a lot of anxiety that people have around that.

Then I should also say that you need to put tools in place that address privacy, security, and safety concerns around AI. So a question I get constantly is, aren't you worried about your distinctiveness being added to the collective? Aren't you worried about having whatever you put into the AI turned into the AI fodder?

The answer is, I am not that worried, because I pay for a premium level of ChatGPT for that precise reason. I'm very selective about any AI I use regularly in any context where I have privacy or data protection concerns. Call me old-fashioned; I look at the terms of service.

Now, could they still use my data and break their terms of service? Yes, it wouldn't be the first or the last time. Could a hacker get access to the data? Who knows? I have long since made the choice that I'm willing to accept some risk in return for using the cloud.

Nobody's security is perfect, and people do yucky things in every tech company, so there's plenty to worry about beyond what companies tell you they're going to do. But I've decided that if someone says they're not going to use my data, I'm going to take that legal agreement as binding. It is binding. And I'm going to pay to use tools that protect my data that way.

So companies need to pay for that. Of course, people aren't going to want to use the free version of ChatGPT if they think their ideas are going to get rolled back into the training model. You have to pay for them to have access in a more secure form.

Daan van Rossum: Absolutely. I think there's something here that everyone is worried about, which is what happens with that data. And I think you make a really good analogy, which is that this is also what we went through when we shifted to the cloud. Moving from local files on a computer to the cloud, there's obviously some risk.

There can always be a cyberattack, a leak, or something like that. But we expect companies like Google and Microsoft, and maybe now OpenAI, to protect that data well because they have a lot to lose if there is a huge data breach. So when you're going through those privacy policies and assessing which tools to use, what are some other things you look at when deciding which tools to give to your AI team?

Alexandra Samuel: I think the path to integration with your existing tools is really important. I just had a piece out in the Journal today about everyday problems that AI can solve for you at work, and somebody commented that all the tools I had mentioned were freestanding tools; they found it much more useful to be able to use AI in the context of their Microsoft suite.

Now, with that in mind, I do think for most organizations, what you want to think about is: what two or three tools can solve the largest number of use cases for the largest number of users? Because most people are not going to be like me and use 20 different AI tools and also, hopefully, still pay for food.

Daan van Rossum: Yes, and then obviously having to administer all of that and address all the CIO concerns again. That's going to be too much to handle. So, you're saying to look at two or three suites of tools that would work, and that would probably be either OpenAI Enterprise or the Microsoft suite. Then maybe something for the marketing team would be needed.

Everyone is going to go either on Canva or they're all going to go on Adobe Cloud or something like that. Are you looking at those software suites? 

Alexandra Samuel: I think most people who reach out to me about AI tools at this point, for exactly the reasons you're saying, aren't making choices for their organization. They're personally deciding what two or three tools they're going to use.

Again, in the context of organizations, people get their Microsoft access through the enterprise. So their access to Microsoft AI depends on whether the company has turned it on. So, when you're an individual user and you're not buying a corporate suite, you're not going to be looking at the $200 or $500 a month tools. You're going to be looking at the $10 and $20 monthly tools.

So, I think for most people, that's going to mean choosing Claude or ChatGPT (I'm paying for both); choosing an image generator, whether Ideogram, Midjourney, or DALL-E; and then maybe choosing one tool for data, if you do data work. The one thing I will say about adopting AI tools compared to others is that I've been a chronic early adopter for, goodness me, 40 years.

I just realized this year is my 40-year techiversary. So what really makes AI different compared to all the other tools is that it is self-teaching. It will teach you how to use it personally and tell you what to use it for. When I first sat down with, oh, I'm going to just be very arcane here.

When I first sat down with Edix/Wordix, which was the first word processor I ever used, in 1984, it came with this big fat manual in a really weird binder, plastic quilting. If you wanted to know how to do something, you were flipping through a manual written for everybody. It's not written for a high school student. It's not written for a university professor. It's written for anyone who might be typing a document, which is a pretty broad category.

Whereas I've given ChatGPT my life story, basically. I have a custom instruction that governs all my interactions. It knows who I am, where I live, what I do for a living, about my marriage, about my children, and what software tools I use regularly.

And so whenever I want to learn how to use it to do something new, it can calibrate its instructions exactly for me.

Daan van Rossum: I think in that context, it would be amazing to hear maybe a little bit more about how you're personally using all these AI tools. So you're publishing pretty frequently now on LinkedIn, and I really appreciate these posts of just-down-to-the-ground, very simple workflows. Here's what I do with this tool, and here's what prompt I use.

So what are some of the ones that you see resonating most with people in terms of applying AI? 

Alexandra Samuel: It's so funny, because the people I hear back from on this are super nerds, the ones who are really into it, who are my people. And then when I write these things for the Journal, where we're thinking about the average person picking up a newspaper, I'm constantly reminded: actually, Alex, most people don't use XLOOKUP as a routine part of their working lives. So they don't really need to know how ChatGPT is going to make it easier to use XLOOKUP.

I'm actually really trying to be diligent in sharing use cases at varying levels of nerdiness. I'll tell you one that has really resonated with a lot of people, which I find interesting: my new habit of using the voice interface. I'm going to actually fire it up, because it sounds bonkers until you try it.

The voice interface for ChatGPT... Have you used the voice version?

“Hi, you are a podcaster, and you're interviewing somebody about AI, and you want to hear their tips for how people should use AI on a day-to-day basis. What are three questions you might ask to get interesting and accessible but concrete answers on what you can do with AI?”
“Hi, that sounds like an exciting podcast episode. Here are three questions you could ask: One, can you share some practical ways individuals can integrate AI tools into their daily personal or professional tasks to improve efficiency or creativity? Two, what are some common misconceptions about using AI day-to-day that you think people should be aware of? Three, could you recommend any AI tools or platforms that are user-friendly for beginners and explain how they can get started with them? These questions should help uncover some useful insights and actionable tips for your listeners.”

Okay. Thank you. Sorry, Daan, I didn't mean to put you out of business.

Daan van Rossum: You're so polite. You say thank you.

Alexandra Samuel: Oh, my God. In fact, I want to tell you the story about my manicure in a second because it is a little crazy how concerned I am for the feelings of my AIs. One of the practices that I use now that seems to help other people is when I have something where I've just got like a bazillion ideas and I don't even know where to start.

I'll go for a walk, and I will set it up at the beginning if I know there's a document I'm going to want to refer to. I did this recently when I was going to TED and wanted to think about who I would try to interview there. I uploaded a list of all the TED speakers to a custom GPT that already has a whole bunch of context on the kind of stories I write about, opened that chat from my phone, and went out for a walk.

So, as I walked, I was like, Hi, I'm covering TED. I want to be able to decide which speakers to interview. Can you look at the bios of all the TED speakers and give me a list of the top 10 people you think I may want to interview? Just give me a short summary, like 3 to 5 words about each one. And it did that.

By the time I got back from my walk, I had a sense of the three people I wanted to interview and what I might want to speak to them about. And it can give that all back to me as an outline. And I'll tell you, it was the funniest thing, because within my first ten minutes at TED, I ran into one of those speakers, and she introduced herself to me, and I was like, Oh, I had a long conversation with ChatGPT about you. And she's like, What?

Then I pulled up the transcript, and it was so funny, because she was clearly so startled that I really had talked to the AI about her. For me, free-form means I'll just throw a whole bunch of ideas into it, and when I come back, I'll get it back as an outline. For me, that's really helpful. It has become a really frequent part of my work.

Daan van Rossum: And it's so different from using text. I think that's the part where, in our course, we take executives through what they should know about AI, but mostly how to apply it, and we help them build their AI team. Those are the three most foundational things. Number one is actually knowing how to prompt ChatGPT, for as long as that's still necessary.

Give it context like you just did: you are a this, this is what you should know, and this is what I expect. Number two is to use the voice feature, because you can now speak to it as a human and get that direct interaction. And number three is to start building some small agents.

Suddenly you go from, I don't really know what AI is, and I'm a little bit scared, to, whoa, this can do so much for me. And what I always hear from executives, especially, is that they feel so liberated. They always use that word, because of all the things they used to rely on other people for, where they always had to wait, review, and revise.

Now, all of a sudden, they're doing the work again, and it really lightens people's days. It's incredible.

Alexandra Samuel: It is. I do want to make one tech note, though, which is that I find the voice interface for ChatGPT quite erratic. It's weird: it'll forget things even after it has fed them back to me. It will turn out that it's lost, like, 12 minutes of our conversation. So I recommend, as a practice, that you pause regularly and ask it to recap. You might even copy the text or take screenshots of things that you think are particularly useful.

I don't really understand why it's so erratic, but I do find it very helpful.

Daan van Rossum: That may also go back to what we are hearing frequently, which is that this is the worst that this technology is ever going to be. And so with the new demos that they just did in terms of the updated voice assistant, those things may already have been worked out.

So this is the very first version that we're seeing right now. Just imagine where that is 6 to 12 months from today. 

Alexandra Samuel: Yeah. You're watching my face as I'm just thinking about that. There's actually something really special about those liminal moments. And I can go back 40 years and tell you that I remember the first time I used a mouse.

I remember the first time I used a color printer. That was crazy. I remember the first time I Googled something at a restaurant, on a short-lived device called the Treo. And the people at the restaurant were like, What's going on? You're looking something up, and you're on your phone? I don't even understand what's going on.

I remember the first time I did a live web search; it wasn't called Google yet. I was a very early adopter of high-speed internet, so I lived through the shift between dial-up and high speed, meaning you had the internet on all the time. And I remember having friends over, and we got into an argument about a book character, and I came back with an answer, and they were all like, Wait, how did you figure that out? And these are all things now that are completely woven into the texture of our lives: looking something up on your phone at a restaurant, using a mouse.

Of course, I'm a tech lover. I really, genuinely find those moments to be so magical, and I really hope I will remember them. Okay, now let's talk about my fingernails, which are the most important thing. Let's close out on the fingernails.

Daan van Rossum: Let’s close out on the fingernails?

Alexandra Samuel: It's a funny thing. It was my birthday a week and a half ago, and I had this idea: Hey, you know what I'm going to do? I'm going to do an AI manicure for my birthday. I'm going to set 10 intentions for the next year: five personal goals and five professional goals.

I'm going to come up with a symbolic representation for each one and turn them into nail art. I ended up spending probably an hour in Ideogram on each individual image. So, for example, I love theater, and I wanted a fingernail for the goal of going to New York and seeing more shows this year. That one looks like a Playbill, the little magazine they give you.

It took a lot of playing around and iterating to get each fingernail the way I wanted it. Eventually I had 10 images I was pretty happy with, and at that point I just used Canva to drop them onto a plain picture of a hand. That way, when I walked in, the poor nail artist had a very clear roadmap of what I wanted.

She did an amazing job of translating it; it looks exactly like the mockup. The thing that was really interesting, and the reason I bring it up as a proponent of this magic stuff, is that this is a bonkers thing to do. First of all, I just told you I get a manicure maybe once every five years. So the idea that I've spent, literally, 50 hours in the past six months figuring out my fingernails is ridiculous, except I'm going to remember this forever.

And the process was amazing. I learned the limitations of what AIs can and can't do when it comes to imagery. And then, maybe most importantly, when I was creating these 10 images, the fact that it took an hour to get what I wanted for each one didn't feel irritating exactly, but it was like, Oh boy, this is taking a lot more time than I thought it would.

But, you know, what ended up happening was that I had an hour of interactive meditation on each of my goals. I spent an hour thinking about how much I wanted to be going to the theater, and I got my playbills out and looked at how they were laid out.

I started by writing down a list of 10 goals, which took me about an hour of thinking. But then I spent an hour with each of them, and now I'll look at them every day for the next month. For me, this was a really useful lesson in how we need to change how we think about prompting.

Do we think of prompting as, Oh, it's so annoying if I can't get the answer right away? Oh, it's so annoying if I have to give it a long prompt? Oh, it's so annoying if I have to do a chain-of-thought prompt to teach it how to think? What we forget is that these aren't gumball machines where you drop in a quarter and try to get the best prize.

I love Ethan Mollick's idea that it's a co-intelligence. And the co means we change too; we evolve too. So the prompting process that takes time isn't just, oh, it takes so long to make the AI do the thing. It takes me so long: I'm learning, I'm evolving, and I'm reflecting on my actual goals, my thought process, and my values. Encouraging us to reflect on what we're trying to accomplish is, to my mind, as valuable as the output, that toy we get from the gumball machine.

Daan van Rossum: So that is really the call to action for everyone: not just to use these tools, but also to think differently about the relationship between us and the tools and the work itself. I love that. Beautiful. Thank you so much for being on today, and I definitely have to do another one. 

Alexandra Samuel: This was lots of fun. Nice to see you.

Future Work

A weekly column and podcast on the remote, hybrid, and AI-driven future of work. By FlexOS founder Daan van Rossum.