Angus Ward, CEO, Beyond Now, leads this Fireside Chat at TM Forum Digital Transformation World 2024, together with Duarte Begonha, Partner and Head of IT in Telecom, McKinsey & Company, and Takao Imanaka, Director, Global Business Office, NTT Comware.
Angus
I’d like to do a level set on AI to really understand who’s getting value, why they’re getting value and what needs to be done in order to achieve such value. There is some great new McKinsey research on this topic, so I thought that in this session we can dive into how to really use AI and the practical realities of it. I’m delighted to have two fantastic panelists with me: Takao Imanaka and Duarte Begonha. Could you quickly introduce yourselves?
Takao
NTT is a Japanese telco, and I would like to share NTT’s mid-term strategy, which has three pillars. One is to create new value and accelerate a globally sustainable society. Another is to upgrade the customer experience. And the third is to improve the employee experience. We believe that AI will help us achieve the latter two goals: upgrading the customer experience and improving the employee experience.
I personally belong to a company called NTT Comware, which is the IT services company within NTT Group, serving NTT’s telco operating company from an IT standpoint. I am also a director of the Global Business Office, which is responsible for introducing our solutions to the global market. For me, it's about exploring how we could make better use of the marketplace to connect the ecosystem digitally with our partners.
Duarte
AI is one of the biggest topics at the moment, and we’ve been expanding our research, also leveraging a company we have called QuantumBlack, to help harness the power of hybrid intelligence. So today we have more than 5,000 people working in AI globally, and telecom is a big part of the work we do.
Duarte
The research came about through all the talk of AI, and especially about how to scale it. We need to look at how to move on from the honeymoon phase. In reality, you can always question whether the glass is half full or half empty… and I still see it a little empty.
In reality, only about 10% of organizations are able to really scale AI, and even then the results are often modest. Organizations that take it more seriously are still only thinking in terms of 15% growth, not the four-or-five-times impact we can see everywhere. It's also interesting to note that, of all AI projects, only 10% actually go into production.
For gen AI, it’s only half of that – 5%. So it's still in the early stages. I think there's currently a lot of experimentation, and we are trying to understand it. There are seven hard truths for CIOs looking to move past gen AI’s honeymoon phase.
To go into a bit more detail: firstly, many organizations are still taking a pilot approach. They're not yet thinking about how to transform, and they need to start thinking differently.
Secondly, it’s not about the pieces. You need an orchestration that brings people, data, components, the business and change management all together. And often these things are very siloed, with very small teams not working as a whole.
You also need to start thinking about cost. With a lot of these pilots, the cost becomes too high. Modeling is a small part of the cost, but the cost of doing the change management – the adoption – is brutal. So if you don't take a structural approach, the cost of deploying AI in an organization can be quite substantial.
There are also a lot of different technologies available, so you need to make your choices wisely and pick your battles. And you need to ensure the organization will adopt it. Most of the problems in AI come from adoption. Even if we build a lot of cool solutions, if the frontline – the business areas, the networks, the operations – doesn't accept or believe in them, they can’t succeed. This makes adoption really important. And data quality is a topic – we need to get the data right. You can use AI to fix data – but you need to get it done – and then reuse it, rather than reinventing the wheel every time. These are all things we found need to be fixed in order to really be able to scale AI.
Angus
One thing that struck me about the research was that only 15% of companies are delivering real value to the bottom line… to EBIT. You can either look at it as being the top of the hype curve – and therefore 15% is really, really good. Or, given that machine learning gave us a sort of segue into AI from lots of earlier examples, 15% is very, very low. Why do you think it's only 15%?
Duarte
I think it's because of the experimentation logic. People are doing a lot of pilots and a lot of experimentation, and nothing is big enough to really transform a need in an organization. Even some of the use cases are quite mature, like call centers. Whenever we do this in call centers, we start in one call center. It's first a pilot for some clients, but then it never scales – not to the full call center, nor to all the other call centers in the organization. Why not? I think it's because people start questioning it and thinking about risk.
You need to think at scale, with a use case you can take everywhere – call centers, workforce optimization, and there are many things you can do in marketing and sales. Or take a domain, an area like B2B sales, and make a decision to transform B2B sales with AI. But the whole of it – not just a few things within it.
Duarte
The use cases a telco should pick are the ones with the fewest barriers to adoption by the business areas – which can probably differ from organization to organization. You should try to pick the ones where you have business areas on board – and business areas doesn't just mean commercial operations; it can also be network engineering, HR or legal, for example.
Often marketing and sales are areas that have been quite quick to adopt, which I think is also because they don't think like engineers – engineers are not the biggest adopters of AI because they question everything. I'm trying to implement some of these things in network engineering, and I can tell you it's really a nightmare. Whereas marketing and salespeople are typically more open. So this makes it a good area – not because the impact is more or less, but because adoption is higher.
Takao
I think ease of adoption and business readiness are the two major points we need to keep in mind to find the right focus for these AI initiatives. For example, NTT Comware, as the IT services company, is also trying to adopt AI in code generation to improve efficiency in system development. There is the possibility that we could apply AI to various processes, but we need to find the right targets to get the low-hanging fruit.
Angus
The next area we can move on to is the scaling of AI. So not the pilots, not the experiments, but really scaling it. It's about investing in integration and automation of the end-to-end process. There are components here – vector databases, models, libraries, applications and so on. There's a whole supply chain of different things. And all testing provides new data. Adding components adds exponential complexity to everything. So the real heavy lifting is in the data, and in the end-to-end automation and testing of that value chain.
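To make that "supply chain" point a little more concrete, here is a minimal, purely illustrative sketch of one link in it: a toy in-memory vector store with a retrieval step and a test that exercises the small pipeline end to end. The class, data and embeddings are invented for this write-up, not part of any platform discussed on the panel.

```python
# A toy link in the AI "supply chain": an in-memory vector store,
# a retrieval step, and an end-to-end test. Standard library only;
# all names and data are illustrative.
import math


class ToyVectorStore:
    def __init__(self):
        self._items: list[tuple[str, list[float]]] = []

    def add(self, text: str, embedding: list[float]) -> None:
        self._items.append((text, embedding))

    def search(self, query: list[float], top_k: int = 1) -> list[str]:
        def cosine(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
            return dot / norm if norm else 0.0

        ranked = sorted(self._items, key=lambda item: cosine(query, item[1]), reverse=True)
        return [text for text, _ in ranked[:top_k]]


def test_retrieval_pipeline() -> None:
    store = ToyVectorStore()
    store.add("fibre fault escalation runbook", [0.9, 0.1])
    store.add("roaming tariff FAQ", [0.1, 0.9])
    # A query embedding close to the first document should retrieve it.
    assert store.search([0.8, 0.2], top_k=1) == ["fibre fault escalation runbook"]


test_retrieval_pipeline()
```

Even in a toy like this, the point stands: the lookup itself is trivial, while the data that feeds it and the testing around it are where the heavy lifting sits.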
Takao
NTT is still in the midst of this journey itself. We haven't completed the full transformation, but we do agree that end-to-end operations are very important. Even now within NTT, we have many systems in silos, with one system for certain processes and another system for other offerings.
So even within the same process, if we are serving a different offering, we’re using different systems. All these silos are preventing data unification and data distribution. Therefore, we are currently standardizing and consolidating systems using TM Forum ODA. We have recently been awarded Running on ODA status, so that’s our initial step in the process.
Takao
We all know that a single system or a single platform cannot cover all the processes and operations, so we still need to live with multiple systems integrated on a single platform. Therefore, integrating APIs and orchestrating all those processes centrally is a very important thing to do. In NTT, we're now trying to move in that direction.
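As an illustration of that central-orchestration pattern – and only an illustration, not NTT's actual architecture – a sketch might wrap each siloed system behind a common interface and let one orchestrator drive the process:

```python
# Hypothetical sketch: siloed per-offering systems exposed through one
# interface, with a central orchestrator routing the process.
from abc import ABC, abstractmethod


class OrderSystem(ABC):
    @abstractmethod
    def create_order(self, customer_id: str, offering: str) -> str: ...


class FibreOrderSystem(OrderSystem):
    def create_order(self, customer_id: str, offering: str) -> str:
        return f"FIBRE-{customer_id}-{offering}"


class MobileOrderSystem(OrderSystem):
    def create_order(self, customer_id: str, offering: str) -> str:
        return f"MOBILE-{customer_id}-{offering}"


class CentralOrchestrator:
    """Routes one logical process to whichever siloed system owns the offering."""

    def __init__(self, systems: dict[str, OrderSystem]):
        self.systems = systems

    def place_order(self, customer_id: str, offering: str) -> str:
        system = self.systems.get(offering)
        if system is None:
            raise KeyError(f"No system registered for offering '{offering}'")
        return system.create_order(customer_id, offering)


orchestrator = CentralOrchestrator({"fibre": FibreOrderSystem(), "mobile": MobileOrderSystem()})
print(orchestrator.place_order("C123", "fibre"))  # -> FIBRE-C123-fibre
```

In practice the common interface would be standardized APIs (such as TM Forum Open APIs) rather than Python classes, but the shape of the problem is the same: the silos stay, while the orchestration becomes central.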
Duarte
I think this is about fully building the AI factory. That starts with the data pipelines, where you get the data you need to evolve. You need to ensure that you are enabling all the testing and all the monitoring, and then have APIs to do the orchestration. But the APIs should not be a one-way thing… only feeding data into the prompts. You also need to get feedback on the prompts. This is a complication, because a lot of these models need to learn. So whenever we generate prompts, we need to ask the users: “is this prompt good or bad?” And if it's bad, they need to provide feedback, which needs to go back into the model for the model to improve and learn.
So just assembling the factory, with all these different pieces, is quite important. It's not just about creating a repository of models; it's about building the end-to-end component that can be used at scale. Often what we see is that various things exist partially, and they can be broken. Then it takes a long time to deliver new use cases.
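To illustrate the two-way loop Duarte describes, here is a minimal, assumed sketch of how "good/bad" ratings on generated output could be captured and routed back for the model to learn from. The class names, fields and rules are hypothetical, not a description of any real product:

```python
# Hypothetical feedback loop: generated output goes out, user ratings and
# comments come back, and the negative cases are queued for model review.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class FeedbackRecord:
    prompt: str
    response: str
    rating: str                      # "good" or "bad"
    comment: str = ""
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class FeedbackLog:
    def __init__(self):
        self.records: list[FeedbackRecord] = []

    def record(self, prompt: str, response: str, rating: str, comment: str = "") -> None:
        if rating == "bad" and not comment:
            raise ValueError("Bad ratings need a comment so the model team can learn from them")
        self.records.append(FeedbackRecord(prompt, response, rating, comment))

    def review_queue(self) -> list[FeedbackRecord]:
        # Only negative feedback is routed back for review and retraining.
        return [r for r in self.records if r.rating == "bad"]


log = FeedbackLog()
log.record("Summarise this trouble ticket", "Customer reports a fibre outage…", "good")
log.record("Summarise this trouble ticket", "An unrelated answer", "bad", "Missed the outage entirely")
print(len(log.review_queue()))  # -> 1
```

The design choice worth noting is that the API is deliberately two-way: the same channel that delivers the output also carries the rating back, so learning is built into normal use rather than bolted on later.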
Angus
Well, I guess this is the fifth truth, going back to the idea that only 15% of companies are really getting value, and picking up the point you made about engineers challenging things. Is it being run too much as a technology program and not being business driven enough? Is it too tactical and not strategic enough? Given everything else we've learned from digital transformation about being business driven, with business needs setting the priorities, why do you think we're still seeing this kind of finding?
Duarte
I think we are not investing enough. For organizations launching AI at scale in some areas… for each dollar spent on building the models, they spend $3 on change. How many of you are spending $1 versus $3? More often you spend one, and then you spend not even half a dollar on the change. And this change is about changing the applications to incorporate the prompts into the users’ front ends – basically generating the prompts in the back end and surfacing them as a button for the frontline, to drive adoption.
It comes back to training. You need to work to manage the risks to make sure that things are working properly. So really thinking ahead about these changes is critical. Often what we found is that many of these programs begin with AI teams that come up with good ideas, but nobody thinks about how they are going to generate the adoption and fund the project with all these different pieces. Then you have a nice model, but it’s only used by 2% of the organization. It never scales. It's a nice project, but it's not more than that.
Takao
I firmly believe that the business is the one that needs to own the outcome of using AI. So there needs to be more involvement from the business side. And I believe that involvement is still low at the moment.
How can we prepare our system landscape for scaling AI? A lot of companies have legacy system landscapes, and it’s a lot of cost to modernize. What would be the best solution to prepare that old system landscape to leverage AI capabilities?
Takao
I believe that master data management is key to achieving all these objectives, because if the master data and the transactional data are in silos, we won't get the best outcome from this new technology. In NTT, it is still a challenge to achieve these goals, but MDM, for me, is one of the key drivers.
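As a rough illustration of the MDM point – with purely hypothetical data and field names – the core idea is to resolve the siloed copies of the same customer into one golden record:

```python
# Hypothetical golden-record merge: the same customer exists in several
# siloed systems; the most recent non-empty value wins for each field.
def build_golden_record(records: list[dict]) -> dict:
    merged: dict = {}
    for record in sorted(records, key=lambda r: r["updated_at"]):
        for key, value in record.items():
            if key != "updated_at" and value:
                merged[key] = value
    return merged


crm_record     = {"name": "A. Suzuki", "email": "", "updated_at": "2023-01-10"}
billing_record = {"name": "Akira Suzuki", "email": "a.suzuki@example.com", "updated_at": "2024-06-02"}

print(build_golden_record([crm_record, billing_record]))
# -> {'name': 'Akira Suzuki', 'email': 'a.suzuki@example.com'}
```

Real MDM also needs matching rules, survivorship policies and governance, but the sketch shows why siloed master data blocks AI: without a single resolved record there is nothing reliable for a model to reason over.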
Angus
I think another way to look at that is the choice you've got: do you build capabilities – build it and they will come – or do you build for each use case?
Duarte
I think there are two types of platforms. One is this AI factory component: getting the data right, MLOps, the orchestration, all the APIs… That’s basically the data component of it, but that’s just one side. If you want adoption, you also need to change the legacy systems so that they are able to consume the data.
This is where a lot of the challenges are: if you’re using a very old application that doesn't have the ability to consume data from APIs, integrating it becomes much more complex. I think often what we see is that bundling this AI into something bigger is a good excuse to replace or refactor part of the application – to say, listen, it's time for us now, not to change the full application, but maybe to change its front end, because we need a decoupled front end that can take data not just from the backend, but also from AI and from other sources. Other times you need to create middleware components where you can ingest the data and bring it into the front end. This can be more costly, but I think this is part of the challenge. When you think about transforming, it is not just about building the factory; you're also going to need to change some of the legacy systems.
But I fully agree that it's a barrier you need to overcome. What we've seen is that many organizations end up being forced into decisions that have been sitting there for a long time. There’s been delay after delay, and now there’s no time for more delay, so the change has to be made now – and you’re trying to reuse components that don’t necessarily have those capabilities.
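To illustrate the middleware option Duarte mentions – a thin layer that lets a legacy application's front end consume AI output it could never fetch itself – here is a hypothetical sketch in which every component is a stand-in:

```python
# Hypothetical middleware: merge a legacy backend's response with an
# AI-generated suggestion so a decoupled front end can consume both.
from typing import Callable


def legacy_backend(customer_id: str) -> dict:
    # Stand-in for an old system that cannot call AI services itself.
    return {"customer_id": customer_id, "open_tickets": 2}


def ai_service(context: dict) -> str:
    # Stand-in for a model endpoint; here just a canned suggestion.
    return "Offer a proactive status update on the open tickets."


def middleware(customer_id: str,
               backend: Callable[[str], dict] = legacy_backend,
               model: Callable[[dict], str] = ai_service) -> dict:
    data = backend(customer_id)
    data["ai_suggestion"] = model(data)  # enrich before it reaches the front end
    return data


print(middleware("C123"))
# -> {'customer_id': 'C123', 'open_tickets': 2,
#     'ai_suggestion': 'Offer a proactive status update on the open tickets.'}
```

The trade-off is the one raised in the conversation: the middleware is cheaper than rebuilding the application, but it adds another component to run, so it only makes sense where the legacy front end genuinely cannot be decoupled yet.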