Generative AI and the Risk of Careful Consulting

Last year, I wrote a blog post about Kinetic West’s intentions for 2023. I wrote about our efforts to build a culture of trust and to break through the fear of failure, both within our organization and through our client work. In that post, I wrote about the idea of “Careful Consulting” – the idea that many of the tools we use in client service to keep us from “messing up” our projects (e.g., being attentive to clients, socializing and refining recommendations, citing best practice case studies) have the potential to inhibit truly creative and innovative consulting work.

Around that same time, Generative AI started to seriously enter the public discourse, as tools like Midjourney, ChatGPT, and DALL-E became widely available. In February of this year, the Kinetic West team came together to talk about the implications of these Generative AI tools for our field: social impact consulting. The conversation was open and exploratory, but I think the team left with three takeaways:

  • A healthy skepticism about whether these tools would truly increase consultants’ ability to do the type of bold and innovative work our clients need;

  • An understanding of how tools could make life easier for consultants and help them break through some of the challenges that we all face when staring at the blank page;

  • The need for a critical eye on equity and the ethical use of Generative AI, like validating responses, mitigating bias, and avoiding plagiarism or infringing on copyrights.

Kinetic West at our 2023 Spring retreat on Vashon Island, WA.

We also concluded that we likely had at least a year or two to keep grappling with how and when to use Generative AI tools in our consulting work. Wow, were we naïve.

Since that time, I’ve been confronted with Generative AI's use in the world of management consulting nearly every day. A few examples:

  • Teammates or partners seeding brainstorming sessions with ChatGPT “to get the ball rolling”;

  • Clients sending us ChatGPT-generated recommendations on questions that we’ve been working on for months (or even years) as “food for thought”;

  • Reading blogs from very reputable consulting firms on the buzzy topics of the moment and suspecting that they might actually have been written by ChatGPT;

  • Getting suggestions from friends about using AI tools to get over writer’s block (for this very blog post incidentally, which has been many, many months in the making!);

  • A partner of ours using heavily human-refined and edited Midjourney images on one of our big projects this summer, images that were received with both praise and skepticism.

So as you read what follows, know that I’m not coming to you from an ivory tower or fronting as holier-than-thou. I’m speaking as a fellow traveler who has used these tools in my work and is confronted daily with the temptation to use them more. Furthermore, I'm 100% prepared to be proven totally wrong on this blog post and make myself and our firm look hypocritical, given the real possibility that both the entire management consulting industry and KW will be leveraging generative AI tools in the near future. Chalk this up as my commitment to break through the fear of failure and take more risks. Sigh.

I believe that Generative AI tools are a dangerous force for the management consulting industry and our clients. There are several reasons to feel this way, including:

  • Ethical concerns around reusing the work of other consultants, artists, and designers without the ability to properly attribute or compensate them;

  • Equity concerns about the fact that AI recycles and perpetuates the thinking of white-dominant institutions and consultants, creating an echo chamber that makes it harder for voices of color and other marginalized people to break through and influence policy and practice;

  • Labor market concerns related to AI eliminating jobs within our industry, especially entry-level jobs where new professionals build their networks, gain experience, and learn.

We could write a blog post about each of these topics, but I'm going to focus on how Generative AI tools can perpetuate a challenge already prevalent in our industry, which we call “careful consulting.”

As I shared above, “Careful Consulting” is the idea that many of the tools we use in client service to keep us from “messing up” our projects have, in practice, the potential to inhibit truly creative and innovative consulting work.

The tools and techniques that I most associate with “careful consulting” are best practice reviews, case studies, landscape analyses, studying trends, interviews with thought leaders, literature reviews, and even excessively “socializing” recommendations with clients.
If you’ve worked with me or KW over the years, you might be thinking, “Wait! Doesn’t Marc put something like this in every scope?” and you’d be right! I’m not saying that we’re above using these techniques, or even that they don’t have a place within great project work.
What I am saying is that these tools, definitionally, ground our work in the work of others and what has been done before, and that over-relying on “careful consulting” keeps us from innovating and pushing our clients into difficult places or finding solutions that depart from conventional thinking.

Today, it seems our communities and our clients are faced with intractable challenges like structural racial inequity, climate change, housing affordability and growing homelessness, and rising costs and declining outcomes in healthcare and education, to name a few. Working on these challenges year-over-year, our clients can feel stuck, like they’re running in circles.

And while there's much to learn from conventional thinking and how other cities, states, and countries do this work, for many of these problems, we're going to need to start with a blank page and test ideas and practices that haven’t been tried before. For those problems, careful consulting is just not going to cut it.

So what does this all have to do with Generative AI? AI tools learn by being trained on vast quantities of existing published and publicly available information, ideas, and solutions, and by processing that data to identify patterns and summarize themes.

So if you ask ChatGPT a question like “How do you improve affordable housing and address rising homelessness within a major American city?” or “What are best practices for improving higher education graduation rates within a rural community?” it will give you an answer. That answer will sound very convincing and professional, because what the tool is doing is finding the studies and previous ideas that have been published on the topic and summarizing an eloquent response based on them. It’s a Careful Consulting Engine.
