Five Paradigms In AI Adoption
As we see how various companies are adopting artificial intelligence this year, there are some through lines that remain critically important.
Some of them are related to personal values and personal preferences – in other words, how people are embracing (or not embracing) the move toward new frameworks and processes.
I’ve been hearing a lot about this lately, so it makes sense to look at a recent Imagination in Action panel featuring my friend and colleague Jeremy Wertheimer, along with Amy Edmondson of Harvard Business School and Johnny Ho of Perplexity, who probably needs no further introduction here. Alisa Cohn moderated.
The group really zeroed in on some of the overall ideas around AI adoption, based on what you might call “personal thinking” – on how each individual human approaches this new reality.
I want to go over some of these broader ideas that inform what companies are doing today:
One of the things the panelists discussed is people using AI, but not talking about it much. Cohn, for her part, sorted adopters (and non-adopters) into three “buckets”: those who are using AI but are afraid to talk about it, for whatever reason; those who are not using AI because they don’t like it; and leaders who are mandating AI use in their companies. Cohn mentioned Tobi Lütke’s famous Shopify memo as a prime example, and talked about how companies are limiting headcount unless the human in the loop comes with a clear value proposition.
“The two takeaways I have were, ‘I've suggested that everybody use AI, but suggestions are not enough. I'm now going to (have to) force it.’ And (the second takeaway is) again, you won’t be able to get head count, unless you can prove to that person that job can't be done by AI. So, like, that's a big sea change, obviously.”
“People are finding their feet,” Edmondson said, delineating between the mandate approach and a softer touch. “There (are) two ways to close that gap, broadly speaking. And one is to just require people to close the gap, which is more or less what that message (Lutke’s message) implied or sent, probably not intentionally. And the other is to make it attractive for people to close the gap.”
Panelists also talked about the process of “try, try again” that comes with AI use in today’s environment.
“What people get wrong about psychological safety,” Edmondson said, “it's not about being nice, … it's not about feeling comfortable. It's about feeling uncomfortable. Learning is uncomfortable. So psychological safety describes a learning environment, an environment where you know you can be wrong and it won't be catastrophic, you know you can ask for help, or raise a crazy idea, or admit a mistake, or report bad news, and that all of those will not feel easy. … but you are confident that they're welcome – (it’s being) in a state of constant discovery and learning like a great scientific laboratory, right? A good laboratory has that environment.”
The panel, and Jeremy Wertheimer especially, talked about how things change in a period that is anything but stable.
“Most people, most of us, are used to things being relatively stable, and a set of practices that work when things are relatively stable,” he said. “It's just that right now, it's sort of the absolute wrong thing right now.”
Bringing an anecdote to the table, he noted how people will try something half-heartedly and then give up entirely.
“I'll talk to an academic, and I'll say, have you looked at this?” he explained. “(And they’ll say) ‘Oh, yes, I tried AI. In fact, I tried it six months ago. It didn't work. I wrote a nice paper about that.’ And then I have to just as politely as I possibly can, say, ‘you might want to think about trying again.’”
The mindset, he noted, is important.
“Things are changing all the time, and I think you have to redefine (the approach) now,” he said. “Success is not like, ‘yes, we tried it. It doesn’t work.’ … (That’s) wrong today. It changed yesterday. I think that openness is a new thing we need.”
In times of change, Wertheimer argues, there are no laurels to rest on. People have to innovate. And so they need to be clear-eyed about that.
At Perplexity, Johnny Ho said, AI is the status quo.
“I think for us, it’s actually more (that) the expectations are always rising over time, so there’s always that gap … we see AI as just getting stronger at an exponential rate right now. There’s a question whether that’ll keep going, like in the next five years or not, but at least for us, we just expect it to be higher. We expect each individual contributor to have a higher scope over time, because we’ve expanded from a web app to a variety of platforms.”
“I think … leaders in … even the oldest, most apparently stable organizations, they can benefit by shifting their mindset,” Edmondson said. “Think like a scientist. It's a profoundly different mindset, right? You're free to set directions, but you don’t have the answers. Obviously, you're setting direction, not providing the answers. You're helping people articulate better hypotheses.”
Here’s another aspect of this that I have written about a lot over the last six months: there’s a general consensus that humans need to move toward a different range of skills, including management-type skills. In the IIA group, panelists talked about these new in-demand human skill sets (I’m going to try to paraphrase):
Cracking the code – looking at the often enigmatic outputs of AI projects, and translating them for general use.
Oversight – making sure that AI efforts are properly targeted and converge where they make sense.
Changing course – knowing when to chart a new course, for a person or for a company.
Overcoming friction – getting past the barriers (think data silos, etc.) that hamper progress.
In analyzing success rates for projects, they suggested, we can move toward better measurements of achievement.
At the end, the panelists talked more about the personal emotions involved in the process. You can watch the video for the full discussion.
In any case, I think each of these ideas shows us a different aspect of how people are embracing AI in the 21st century. And the 21st century is still new; we’re only a quarter of the way in. Every year brings leaps and bounds in how we use these new technologies, and we have to adapt quickly. Some of these concepts are likely to help us get there.