

Harnessing AI for Trust and Transformation | Data-Smart City Solutions


In this episode, Professor Stephen Goldsmith is joined by Miguel Carrasco, Global Leader for Boston Consulting Group’s Center for Digital Government, connecting from Australia to share a worldwide view of how artificial intelligence and digital tools are transforming public service. Carrasco reveals how governments are leveraging generative and agentic AI to cut through bureaucracy, empower front line workers, and streamline services. They also discuss how public leaders can use AI to rebuild trust between government and residents.

Listen here, or wherever you get your podcasts. The following is a transcript of their conversation.

This is Stephen Goldsmith, professor of Urban Policy at the Bloomberg Center for Cities at Harvard University, with another episode of our podcast. Today we're talking with someone we're very excited to speak with, who's giving us his time all the way from Australia, where I think you're already talking to us from tomorrow, which is proverbial and literal at the same time: Miguel Carrasco, who is the global leader for Boston Consulting Group's Center for Digital Government and global lead for BCG X Public Sector. Welcome, Miguel.

Thank you very much for having me, Stephen.

We are delighted to have you. But before we get into your work, give our listeners a little bit of information about your background, how you came to this intersection of tech and AI and government.

Of course. Yeah, look, so my history at BCG goes back 25 years, actually, and I started here in our technology practice and then later on specialized in particular in government and the public sector. So the Center for Digital Government, which I started with a number of colleagues about six or seven years ago, is at the intersection of technology, digital, AI, and government. And as part of continuing my learning journey, I've actually been undertaking an executive certificate at the Kennedy School. I've done a couple of subjects and I need to do one more in order to get that certificate, but it's been great engaging with lots of people all around the world. In this role I get to speak to senior leaders in federal, state, and local governments about what they're doing in digital and AI, and hopefully we can talk about some of that with your listeners.

I assume you're teaching that course and not taking it; that would be the best thing for those of us at the Kennedy School.

[laughing] There's always something you can learn. Trust me, as much as I have been the beneficiary of doing some great work with clients all around the world, I've always found it a great learning experience to keep asking questions and keep discovering new things with my fellow colleagues.

Well, that's great. So you can tell us about a few of those things. Today we've seen a lot of different trends and transitions in the space…I'll call it e-government, though I think that's a little outdated, and digital government, but there are so many more opportunities now. About a decade ago I was deputy mayor for Mike Bloomberg in New York City, and we thought then we were using pretty cutting-edge technologies. Looking back, they're not so cutting edge by today's standards. So talk a little bit about how digital tools can cut through bureaucracy. You've written about boosting efficiency and building trust in government, as have we. So what's the overarching theme about the connection between digital tools and accountability and efficiency?

So I've always been super interested in the opportunity technology presents, how you could leverage it to create greater public value. That was sort of my main motivation, and it's a lot of the work that we do. And I think what we're seeing at the moment is another step change in the potential of technology, digital, and AI, with the current evolution and breakthroughs in AI, not just GenAI but also agentic AI. And what we are finding through the work and the research that we're doing is another level of transformational potential, where you can really make it a lot easier for citizens who are trying to interact with and navigate the complexity of government. And you can speed up processing times significantly. Traditionally you could get efficiencies of maybe 15 or 20 percent, but with AI-enabled processing, AI-enabled servicing, et cetera, you can actually see 30 or 40-plus percent improvement. And that's good from a community perspective, because it means you can reallocate resources to where the greatest need is, and the efficiency and productivity improvements can be reinvested in better addressing unmet needs or better helping people who really need more help and support. And that's what I think is the exciting potential of AI for government.

Give us an illustrative example of what you're referring to...

Some of the things that we've seen already: for example, in the customer servicing space, a lot of what governments do is responding to queries or helping citizens with problems, complaints, and service requests, in the case of, say, local councils and local government. And we're seeing agents used in call centers or contact centers to support the operators, like a copilot in the call center, to help them respond to queries more efficiently. But in some cases governments are making those customer service bots or agents available directly to citizens so they can self-serve their queries. We're also seeing it in assessments and processing work. It might be permits or licenses, it might be applications for grants and things like that. You have to assess the application against a set of criteria or rules, and there are lots of compliance checks that need to happen. We've seen examples in Europe, for instance, of one government using it to streamline the time it takes to assess environmental permits and radically reduce the effort required by public servants to conduct all of those compliance checks. So it's getting rid of some of that grunt work and allowing public servants to focus on the more important, higher value-added activities, to exercise judgment and to make decisions. So it's very much still a human in the loop, but using AI and the technology to really streamline and accelerate a lot of that processing work.

Let me give you a hypothesis of ours. And since you're my guest, I assume you'll agree with me, but let's try this to begin with. We've got a number of large cities working on how GenAI tools can open up the use of data to a much broader set of government actors, right? So how can the mid-level bureaucrat, and I don't mean bureaucrat in the negative sense, the mid-level manager, use generative AI to understand the data, which will allow them to intervene earlier, preemptively, and causally organize their resources? I understand the examples you gave so far, right? One is offloading the commodity work and redirecting the time, and the second is some self-help tools. How about the empowerment tools, the opportunity of generative AI to empower smarter work by public employees?

Yeah, look, that's a great point. We talk about the way that AI can democratize information and data, make it more accessible and more available, so that the things that used to require advanced analytics or data science degrees can now be done by regular folks. The interfaces have made conversational analytics possible: being able to interrogate data sets, ask questions, and look across a whole range of different data sets. You've seen, I'm sure, great examples of governments doing things like open data and open government and publishing all these data sets, but the value wasn't really realized in many cases, because the kind of skill sets you needed to do that sort of analysis required a lot of training and weren't very accessible. What this does is make the universe of information available to anyone, so that in the case of, say, government policy analysis, you can look at it and ask sensible questions like: tell me about the housing policies of the last 10 years; which ones have been effective and which ones haven't?

And it just enables you to do research much faster. It enables you to look at a much broader set of examples and use cases, both qualitative and quantitative. There's still an element of relying on reliable data sources and data sets, et cetera; otherwise the outputs aren't going to be very useful. But with the right sort of training, making sure that you are prompting it correctly and verifying the veracity of the things that are coming back, it can be an excellent thought partner, a brainstorming partner, a critical friend. It's a really useful way, I think, to get started. And then obviously you can keep using some of the traditional tools and other resources that are available to make sure that it's robust and rigorous.

It all seems so exciting. I've been around since the beginning of the e-government initiatives; I was early on with Mike Bloomberg in creating a data analytics center. This seems even more promising in terms of the opportunity. What's standing in the way? What's keeping national, state, and local governments from accelerating their use?

I think there are a couple of things that are probably getting in the way. One is training, developing the basic skills that are necessary to be able to use and leverage some of these tools, and the extent to which public servants and people in government have access to them. There are often concerns about data sovereignty, data going offshore, whether the data is being ingested into the models, and whether that will result in breaches of privacy or confidential information. So for all of those reasons, there's been caution in government about people using AI tools, et cetera, and with good reason. But I think what we're starting to see is that those concerns can be addressed by creating secure environments where the data stays onshore and where the data doesn't get used to train the models and therefore isn't at risk of leaking. And then the more that people use these sorts of tools and the more comfortable they become with them, the more they see the benefits outweighing the risks.

And I think what we're finding is that it is gradually weaving its way into all the work that we do. And it will become really important for government to keep up and to be able to meet citizens' expectations. We've seen in our research in the past on the trust imperative that if you don't deliver an experience that is contemporary and meets citizens' expectations, then people lose faith and lose trust in government and in the legitimacy of government. So I think it is imperative for leaders in the public service to really lead by example as users, to provide the infrastructure and ensure that their staff and their people have the tools and are able to make the most of all of the data and all the resources that they've got available, and then for individual bureaucrats and public servants to be willing and open to embrace the tools in their own day-to-day work and learn by doing over time.

We work with cities around the world, and there seems to be something of a difference of opinion between one group that says, I have serious data quality issues, so I can't use generative AI, and another, equally informed group that says, look, this is the art of the possible; you have more than enough data to get started. Do you have a camp that you would choose to be in?

Maybe I have a foot in both camps. I do think it is important to ensure that there's a level of accuracy and robustness about the data you're using, because obviously the AI tools are relying on that data and the outputs will only be as good as the inputs, so to speak. However, one of the things that I think the AI tools are very good at is that even if your data is incomplete or not perfect, the more data sources you use and the broader the knowledge base you draw upon, the more you reduce the risk of getting it wrong and improve the accuracy, by being able to draw on the broadest possible data sources. And that was difficult in the past because data was stored in different databases, in different silos, with different data hierarchies and data dictionaries and things like that.

What the tools we have today enable us to do is actually combine the myriad data sets in whatever formats they're in, wherever they are. And by doing so, you can actually improve the accuracy versus, say, relying on any one individual data source or repository. The other thing I have seen is people using AI tools to help them find the errors in the data and then correct them: looking for anomalies, looking for inconsistencies, looking for patterns where there could be errors. So you're using the AI tools to do what would have been difficult and very time consuming for humans to do in the past, and you can actually use them to improve the quality of the data itself, which you can then use for analysis and synthesis.

Yes, I've thought about the same point in a different way. There's a lot of conversation, of course, about algorithmic bias and bias in the data, which is true, but those conversations seem to miss the point that AI, generative AI, should be used to uncover bias in the data, right? It's kind of your same point: if you have a lot of data, you can determine the anomalies. It feels to me like we should be much more rigorous in using the tools to uncover these biases, which are latent in everything we do every day.

Yeah, I agree. I think it's not a panacea, but the tools can actually help us address some of the challenges themselves. I am a member of BCG's Global Responsible AI Council, and we review a number of use cases that come to us for approval. Those are the sorts of things we look for: bias in the data, or any kind of risks, if you like, automation bias, judgment risks, and so on. And part of our role is to make sure that there are safeguards in place around those use cases to mitigate, manage, or address those potential risks. It's not perfect. That's the other thing I would say: sometimes I hear the argument, well, the model isn't perfect or there are errors, and that's true. The reality is these models, as good as they are, are not perfect.

I've always found that a combination of humans and machines working together gives us the best outcome. The thing I would caution against is defending the status quo. In some cases, there is error in decision-making today. There is bias in administrative decision-making today. And the question I would pose is, should you continue and persist with that, or should you look for and leverage the best tools that are available to reduce some of that error and some of that margin? And if by using some of the AI models, et cetera, you can improve the consistency of decision-making, improve the fairness and accuracy of decision-making in government, then I think we have an ethical obligation and responsibility to do that, to get the best outcome for our citizens.

I won't ask you who's doing this badly, but who's doing this well? What country or state? I mean, you're close to New South Wales; they've had a long-time reputation with respect to using data generally. I'm not sure about generative AI, but as you look around the country, who would you point our listeners to as good examples?

Yeah, there are lots of really good examples all around the world. I think everyone is learning from each other at the moment; it's quite a leading-edge space, and there are lots of experiments and pilots going on here in Australia. As you said, New South Wales in many ways took its inspiration many years ago from Michael Bloomberg and New York 311, and the initiatives he put in place using data to improve services for the people of New York. And I think the leaders here in government over the years have adopted a similar approach, a very data-driven approach to service delivery. And I think that has come through in a very positive way, with some of the best government service delivery in the world here through things like Service New South Wales, which has now been replicated and repeated in a number of other jurisdictions.

Service Oklahoma was one that I'm aware of that was actually modeled in a similar way. So in this space of AI adoption, et cetera, there is a lot of really good work happening, in particular in non-OECD countries that are trying to leapfrog and get ahead. And I think what we're finding is that, perhaps because they're unshackled from a lot of legacy systems and legacy infrastructure, they have a bit more freedom to adopt contemporary technologies and contemporary platforms really quickly. We see a lot of really good things coming out of the Middle East, from the UAE, from Saudi Arabia, from Qatar, et cetera, with very ambitious leaders in those countries who want to be amongst the world's best. But having said that, there are great examples from the UK too; recently I saw some applications of AI in case processing there, which I think really demonstrate the art of the possible, using it for things like language translation.

And even in the US, some of my colleagues, I know, are working with a number of state governments on how to make things easier, particularly for citizens who may be from a non-English-speaking background: getting information from forms, transferring that information into the digital form it's needed in for processing purposes, giving staff the tools to interpret the policy, the rules, and the legislation, then being able to make decisions on those applications and communicate those decisions back to citizens in a clear and concise way. At all of those different steps in the process, you can use AI to really accelerate it, improve it, and make it more consistent.

That was a great list. Thanks very much. So, last question. I get confused as I'm reading my notes on some of your publications; they sound like they have almost the exact same titles as our publications. So I'm very excited about what you write, because it feels like it's the same thing, and a couple of the articles deal with trust, which obviously is lacking in most governments around the country. So as you think through to the future, help us understand how an enlightened leader can use these tools to build trust on the part of their constituents.

Yeah, it's great. I think there's such a vibrant community around the world looking into these sorts of topics, and I'm always happy to find and engage with others who are doing research and publishing on this topic of trust in government. I think what we've found through our research over many years now is that there's a direct correlation between the quality of service delivery, and in particular digital service delivery, and trust in government; in fact, a direct relationship. So great experiences enhance trust and poor experiences erode it. Citizens' expectations are actually quite high. They expect the quality of services from their governments to be as good as or better than many of the best private sector organizations or some of the leading technology companies. And when you don't meet those expectations, people start to question your general competence and capability as an administration. So I think it's really important.

There's a flywheel effect that we've talked about: if you can use data, AI, and technology to really deliver a great customer experience, a great public service that is, say, personalized, tailored, relevant, fast, efficient, timely, and effective, then that builds trust in government. And with that trust, people are more willing to share their data and give consent for their data to be used to improve service delivery and improve the policy that governments develop as well. So there's a virtuous cycle: if you deliver great services, people will trust you, you'll get better policy and better service delivery, and the cycle continues. And I think that is the secret to unlocking, or reversing, some of the erosion of trust that we've seen in our governments over time. And I think that's why some of the leaders and jurisdictions that are making great progress on digital technology and AI have realized that this is a way to better meet citizens' expectations and meet that constant demand to do more with less.

That's a very important answer about trust, and for our listeners who want to know more, they can look at your writings on trust, accountability, and efficiency, or they can look at some of the essays posted on Data-Smart City Solutions at Harvard. I appreciate that you've got worldwide responsibilities and we've already taken enough of your time, so let me just thank you. This is Stephen Goldsmith with Miguel Carrasco, global leader for Boston Consulting Group's Center for Digital Government and global lead for BCG X Public Sector. Thank you very much for your time today.

Thank you very much.
