In this interview, Dotscience Founder & CEO Luke Marsden and Charles Radclyffe, Head of AI at Fidelity International reveal the Fidelity Maturity Matrix for evaluating and driving AI projects in large enterprises, and show how Dotscience model inventory, model management and infrastructure management accelerates progression through all three of the Demand, Scale and Governance Pillars.
- To help organizations like Fidelity get the most from an early investment in technologies like AI. So I’ve done roles like this before: at Deutsche Bank, I was head of technology in their innovation lab.
The other half of my career has been working in the startup community, I built and sold 3 tech businesses, all B2B, selling into large financial organizations like Fidelity, which was a client of mine in a previous firm.
Sure. I’d start off by saying, regardless of whatever technology you’re working with, you’ve got to get to the money bit as fast as possible. Otherwise it’s just playtime. And playtime is fun, but playtime doesn’t pay the bills.
So I think the key piece is to understand what the art of the possible is for a particular capability, and where it can be used by the organisation in order to achieve financial objectives.
There are really only three things that any firm can hope to achieve:
- Revenue growth
- Cost reduction
- Risk mitigation
And the great thing about AI is that it can help with all three of these.
Particularly a firm like ours in the financial sector - we manage investments on behalf of lots of types of people and organizations. The opportunity for us is to use machine learning to be better at making investment decisions.
My colleagues on the investment management side of the business will say hey - there’s nothing new about this - we’ve been doing this for years - all the fancy maths that takes place in investment decision making is the same as what we’d call data science & ML. But the problem is that if you only have capability that’s front-office focused and can’t be leveraged by the rest of the organisation, at best you’re being inefficient and not making the most of your investments. At worst, you’re leaving value on the table from capabilities that could be reused elsewhere.
So a large part of what I’ve been doing is trying to understand where the best pockets of excellence lie, and then to potentially help raise the bar and set up multiple centres of excellence.
What’s really interesting about AI vs. data analytics, which is probably what we’d be talking about if we were having this conversation a decade ago, is that with data analytics you were looking at data integration or visualisation as just one set of capabilities you were building. But AI is the whole set of things - everything from chatbots and robotic process automation to natural language processing and natural language generation to cybersecurity etc. So actually the roadmap for a firm like ours probably looks like multiple centres of excellence. So my job is to get set up for that.
The last point to mention at a high level: I mentioned financial objectives, but there are obviously things where it’s difficult to measure the financial impact. What we’ve got to keep an eye on is another objective, which is how well our AI initiatives are aligned to the firm’s strategic priorities. It may be that for a particular initiative it’s hard to quantify the financial impact, but easier to see how it aligns strategically - to customer satisfaction, for example.
It’s important to keep that lens.
There are two things in tension - strategic alignment and financial impact. My job is to make sure we’re able to quantify how each initiative aligns to strategy or delivers value.
That’s the high level piece.
What we’ve developed is essentially a maturity matrix. From my personal learning in this role, at least to me, it’s been quite clear what’s important and what “good” looks like.
Something about AI makes it a bit more confusing for people.
What’s interesting is that some people come at this from a data background, some from a maths background, and others from a software background. Each of those lenses is important but none of them gives you enough - so we’ve had to figure out how to get everyone on the same page. One thing that was helpful was to look at best practice, in our industry and more broadly - what does “good” look like, where are we - and then put a plan together to get us to a higher level. That exercise we’ve just completed: we’ve got a really clear mandate from our CTO for what our ambition looks like, and I’m just finalising that plan.
It won’t come as a surprise - it must sound obvious what we measure against. There are three major themes:
- Demand: execution, and focus on demand
- Scale and efficiency
- Governance
Within each theme there are three work streams which we’re organising and verbalising against.
Demand is really about the execution on demand:
- Skill & capability - our internal capability and our relationships with SIs or boutique firms, because that can be important
- Ecosystem engagement
What’s really interesting comparing AI… AI is not like Java. It’s not one thing. To get the best time to market you’ve got to make a build vs. buy decision. Sometimes your best time to market is to take Azure and get some people to develop something on your data, and sometimes the best thing is to bring something in from the outside. Vendors, academia and start-ups/scale-ups are a critical part of what will enable us to execute.
The third one needs a little more explanation because some people don’t get it intuitively.
If you don’t do that you end up in a situation where the things in the portfolio are either senior people’s pet projects, or passion projects from junior people who are managed by permission - and that’s not using the org’s resources in the best way. The piece that’s really important is to make sure that every initiative we have is measured across those three vectors: risk, cost … objectives.
When you start those conversations you don’t walk into the chief people officer or the head of finance and say - hey we’re experts in AI … how can we help you? Or we’d like to build some AI solutions to fix your problems. Instead you start from a problem perspective and you don’t at all think about the technology solution. A lot of what we come up with is nothing that ML is ever going to fix but that’s fine because we’ve got other colleagues who we can push things to. But it’s the purest way of making sure we’re focusing on the right stuff.
For example - chatbots are one of the most transformational applications of this new generation of technology. For many reasons, but one big reason: the ecosystem/marketplace around the channel is undergoing the same fundamental shift we saw 20 years ago with the move from brochures to web, and a decade ago from web to apps. The next few years are going to be the years where chat and conversational agents really define the marketplace. That’s really interesting, but it’s hard to bring that to people and say “look, hey, this is going to change your world” - they just don’t care.
A friend of mine once said to me “the trick is to find the party trick” if you want to communicate impact. You’ve got to find something that everyone will resonate with. He used the example of Google DeepMind, no one really understands how it works apart from the people who built it but everyone understands what it can do - a machine playing video games better than humans. At that level that was good enough.
So I think finding these party tricks is one of the most valuable things you can do. There are a couple of good example party tricks that every org will resonate with. Any firm over a certain size will struggle with things like meeting room booking - something most people above a certain seniority get other people to do for them. The way you want to do it is very much a conversational interface, and the way we actually do it is through a clumsy web interface. So you say to a chatbot “hey, I need a meeting room for four people with video conferencing capability”, and it can say “when do you need it, and for how long?” - “oh, I need it at 11 and for an hour”. That’s the perfect interface. And if you have decent APIs internally you can find loads of those sorts of use cases.
Another good one is “how many days of holiday do I have left this year?” - most people will have asked themselves that, but they can’t be bothered to log into the HR system because it’s clumsy. Chat interfaces are good at those sorts of things.
And regardless of what it is - ML in its purest form, or NLP - you’ve got to find a party trick if you recognise potential transformational value but haven’t yet found the use cases that will deliver it. Meeting room booking and holiday lookup are never going to change the world, but what they do is help people understand what the opportunities look like and get better at bringing them.
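The meeting-room example above can be sketched as a simple slot-filling loop - the kind of pattern most conversational-agent frameworks implement. Everything here (slot names, prompts, function names) is invented for illustration, not any real Fidelity or vendor system:

```python
# Minimal slot-filling sketch for a meeting-room-booking chatbot.
# All identifiers are hypothetical, for illustration only.

REQUIRED_SLOTS = ["capacity", "start_time", "duration", "video_conferencing"]

PROMPTS = {
    "capacity": "How many people is the meeting for?",
    "start_time": "When do you need it?",
    "duration": "How long do you need it for?",
    "video_conferencing": "Do you need video conferencing?",
}

def next_prompt(filled):
    """Return the follow-up question for the first missing slot,
    or None once every slot is filled (ready to call the booking API)."""
    for slot in REQUIRED_SLOTS:
        if slot not in filled:
            return PROMPTS[slot]
    return None

# "I need a meeting room for four people with video conferencing"
filled = {"capacity": 4, "video_conferencing": True}
print(next_prompt(filled))  # -> When do you need it?
```

The point of the pattern is that the user supplies slots in any order and in any phrasing; the bot only asks for what is still missing, which is exactly the “when do you need it and for how long?” exchange described above.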
If you look at the nine work streams you can get a sense of how bullish people feel: whether the heat map forms a diagonal from top right to bottom left, runs top-to-bottom, or is heavy bottom right and lighter top left - you can see if people are feeling bullish or cautious. I’d say most orgs in the financial sector are likely to be feeling more cautious than bullish. But you never know. It’s useful to be able to identify that.
Within the Governance framework, I really thought when I started this that it would be the easiest - that we’d just be able to take some thinking a consulting firm had done off the shelf and implement it, and it would need no more thought than that. But we realised pretty early on, and were disappointed to find, that the market wasn’t as mature as we’d hoped.
A couple of things are a nuance for Fidelity: we operate in many countries, and countries that are culturally diverse - Japan, Singapore, HK, Mainland China, the UK. That’s different to a company that operates across the EU or UK/US.
What we need to understand is how to protect the org in a way that’s sympathetic to each of those regions. So we’ve identified three workstreams:
Regulatory - quite easy to understand, but the nuance is that we need to map out the regulatory regime, and if you want to get ahead of the curve, not just what the regulators are telling you today (there isn’t much today) but what’s coming down the pipe tomorrow. We have to make sure we’re building accordingly - if the Monetary Authority of Singapore says you can’t do this for this type of application, or might say you “have to use ML for AML” with a human to check, you have to make sure you have early sight of that. And you may decide you want to influence regulators through public policy discussion - there may be a lobbying point there if you get particularly advanced.
Other two areas: we’re a bit ahead of the curve here, we split out Risk and Safety from Ethics.
What we’ve realised is that a lot of what people call Ethics isn’t really ethics - it’s just engineering best practice. So it’s really important to make sure that algorithms don’t discriminate on gender, race, religion or any other protected category, and that the values we hold as a firm aren’t inadvertently breached by our code. That’s obvious, and every firm understands that by now. There’s a lot of noise about explainability. No one’s ever explained to me why we call it explainability and not explicability, which I’m pretty sure is the word that’s in the Oxford dictionary. Maybe data scientists are just illiterate. Maybe one of your readers will explain it and put me in my place.
People get a bit carried away with that - humans aren’t very good at explaining their decisions either. What we typically do when making decisions is post-rationalise them. Why did I cut in front of that Tesla? Well, there was plenty of space - and it slowed down? Why did I actually do it? There’s no way of knowing. We get so hung up on algos being black boxes, but I think it’s maybe a little overplayed - that’s my personal view, not Fidelity’s.
But there are emerging best practices about how you manage the risk and safety aspects.
And then what we’ve been building is effectively a materiality framework - we want to make it easy for an engineer to look at what they’re doing and ask: is this a high level of materiality, and do I need to risk assess it in a particular way and follow some process to make sure the firm is protected? It should be quite formulaic - the engineers should like that - and that’s our risk and safety exercise.
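As a rough illustration of how “formulaic” such a materiality triage could be, here is a minimal sketch. The factors, weights and thresholds are entirely invented for illustration - they are not Fidelity’s actual framework:

```python
# Hypothetical materiality triage: score a model's characteristics,
# then map the score to the review process it must follow.
# Factors, weights and thresholds are illustrative inventions.

def materiality_score(model):
    score = 0
    if model.get("customer_facing"):
        score += 2
    if model.get("automated_decision"):        # no human in the loop
        score += 2
    if model.get("uses_personal_data"):
        score += 1
    if model.get("financial_impact_gbp", 0) > 1_000_000:
        score += 2
    return score

def required_process(model):
    score = materiality_score(model)
    if score >= 5:
        return "full risk assessment + sign-off"
    if score >= 3:
        return "standard review"
    return "self-certification"

print(required_process({"customer_facing": True, "automated_decision": True}))
# -> standard review
```

The appeal of this shape is exactly what the interview describes: an engineer can answer a handful of yes/no questions and immediately know which process applies, with no judgment call needed at triage time.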
Whereas Ethics we break out as something different - it’s more about the application of technology, and this is actually the area that’s under-addressed by the market. What I’ve certainly identified is that in the last few years we’ve seen a growing techlash, especially towards big tech. And I think a large reason for that is that people feel uncomfortable about the encroachment of technology into various parts of their lives. Whether that’s rational or irrational, we need to have a dialogue with people as to where the boundaries should lie, whether there should be boundaries, and what people feel OK with. Unless you have that dialogue we’re going to see people pushing back - and facial recognition is definitely in the crosshairs at the moment. We’re going to see firms with the best of intentions make mistakes - we’ve seen Facebook lose a lot of goodwill over the last few years because of that. The risk is that we make those sorts of mistakes, and we don’t want to.
By separating out ethics from risk and safety, it’s easier for us to recognise that we need a different strategy to handle each: risk is all about standards, process and engineering best practices; ethics is all about how we have structured conversations at scale across our constituent stakeholder groups.
We’re ahead of most firms because we’ve thought about this and have a clear articulation of how these things interrelate. But we’re not yet at the point where we feel we’ve got it nailed. And I think what’s most concerning is firms that say “hey, we’ve got the ethics bit nailed, we’ve got an ethics board” - and yet, not to pick on Facebook too much, they spend a huge amount on ML and ethics and still have pretty big reputational problems because they keep getting it wrong.
You might be surprised: “I never realised that was possible - in which case I’ve got this other problem which is totally fair game to solve”. So I think there’s actually a commercial advantage besides just doing the right thing.
- Lots of data
- Fancy maths
- Powerful compute on tap to be able to do the fancy maths
That’s the scale and efficiency piece, so these are our three work streams:
- Model lifecycle - how do we originate the fancy maths, deploy it, monitor it and make sure it doesn’t go wrong?
- Data strategy - is it mature enough to support the use of machine learning across the firm?
- Compute roadmap - what does ours look like?
Another dimension to ML is that it’s not just about having access to a cluster of CPUs you can push workloads to - you need to think about GPUs or ASICs, which adds more complexity. But keep it simple: those three things.
That 3x3 of work streams has been helpful for me to understand.
| Pillar | Work streams |
| --- | --- |
| Demand | Skill & capability, Ecosystem engagement, … |
| Governance | Regulatory, Risk & Safety, Ethics |
| Scale & efficiency | Model lifecycle, Data strategy, Compute roadmap |
- 0-1 Passive
- 2-3 Active
- 4-5 Strategic
This helps particularly non-techie people understand “we’re only passive on governance”, so even without looking at the substance they feel inclined to push somewhere.
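The 0-5 scale above maps mechanically onto the three bands, which is what makes the nine-work-stream heat map easy to read at a glance. A minimal sketch of that mapping (the work-stream scores below are invented for illustration):

```python
# Map a 0-5 maturity score to the Passive/Active/Strategic bands
# described in the interview.

def maturity_band(score):
    if not 0 <= score <= 5:
        raise ValueError("maturity scores run from 0 to 5")
    if score <= 1:
        return "Passive"
    if score <= 3:
        return "Active"
    return "Strategic"

# Scoring each of the nine work streams gives the heat map discussed
# earlier; these example scores are illustrative, not Fidelity's.
scores = {"Skill & capability": 3, "Regulatory": 1, "Model lifecycle": 2}
print({name: maturity_band(s) for name, s in scores.items()})
```

This is also why the banding works for non-technical stakeholders: “we’re only Passive on governance” is a statement anyone can act on without reading the underlying assessment.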
A lot of orgs, when they’re first playing with the tech, will be doing the same thing - giving people permission to develop skills using company resources in their spare time or as part of their …, and even having a bit of budget to bring in some external capability to test & evaluate. That’s the early stage stuff.
Active is when you can really identify that there is value - and you have a plan for how to attack it in a structured way. That’s the major difference in the gearing. And Strategic is when you realise that being good at this - top quartile in your industry - is a competitive advantage & opportunity for you. I’ll give you an example: level 5 in our matrix for Demand is “to be able to deliver products or services or address markets in such a way that would have been impossible but for AI”.
This really speaks to what good looks like - if we hadn’t built this muscle, that kind of outcome wouldn’t be possible for us.
Rather than me saying where we are, we get our stakeholders to say where they feel they are. Sometimes they have a different view, and that in itself is interesting - especially if they think they’re more mature than they really are. What I’ve definitely learned from this process is not to assume that everyone wants to get to 5; in fact sometimes people can feel quite threatened by the idea of 5, and it’s OK to say we want to be somewhere in the middle. But what you absolutely must do - and I wouldn’t compromise on this - is have that goal stated. Even if you’re further ahead than you want to be, it’s critical to know that. I think you can easily spot the difference between firms that have a clear articulation of ambition vs. those who are just moving forwards with the best of intentions.
The second thing is around model management - we obviously want to allow as much innovation as possible and make it easy for people to engineer models: not necessarily deploy them, but test them on real data in a beta environment and use them behind the scenes. We want to build CoEs - we don’t know how many there will be - but to do this, we need to be able to do it in production. At some point we need a platform to manage the path to production, and we’re on the hunt. The hunch is that Dotscience is going to be able to help us there. (Either you guys or someone like you guys.)
In fact it touches three workstreams - we wouldn’t have met you and found Dotscience unless we had a reasonably developed ecosystem management approach. It’s partly someone like myself scouting pretty actively, but it’s also about having an org that recognises you can’t just buy everything from Microsoft - as wonderful as they are to work with, they’re not the answer to everything. And that’s why AI’s not like Java. It’s much more exciting than Java because there’s so much diversity in the ecosystem you’ve got to tap into.
And you want to be able to sit down with a technology audit team - if you’re a firm like ours, with enough scale to have such a team - and say “we’re good, and this is why we’re good”. You want to have the platform to be able to say that. The truth is we all do that, but we do it in Excel. I think I’ve created more Excel spreadsheets in my lifetime than I’ve killed, and - it’s like a carbon footprint - it’s one metric I’d like to do better at in my life.
It’s hard to fully appreciate - we’ve got to get the basics in place first. That’s where talking within my peer group at other orgs helps: the mistake people can make is to overengineer this from day one, when there’s a huge amount you can do just by being pragmatic. This is why the maturity assessment matrix is so useful. If you’re on a 1 and you want to get to a 3, 2 might just be getting that inventory to a place where it’s not being manually created by one person and manually checked by another.
Is it important to have provenance? I’m sure it will be, and to get to a higher state of maturity it will be. But the risk is that it adds manual work, and people won’t be happy with that. We need to give them tools to control and govern without feeling like this guy Charles is taking away their time to innovate and making them do admin. That’s my immediate need.
It comes back to the beginning of the point - particularly on the money-making side of things - why is everyone getting excited about AI? It’s really ML, and we’ve been doing it for years already. We’ve got quants in our firm. They’d hate it if they had to deal with me imposing a bunch of process and admin on them because we’ve put together a risk management policy. You will have them kicking and screaming all the way unless you make life easier for them.
I think you guys can help me with that - and if it goes further, even better. We’re just trying to look at it.
The model inventory is non-negotiable. We need a single repository of where our stuff is. Even if we have four, someone has to manually consolidate them into one!
But if you’re saying you want to take a very strong view on governance and risk, with an ambition and a timeline, the risk is you end up with a whole bunch of manual steps, which defeats the point - the whole point is to automate.
You also touched on the topic of infrastructure management, the need for data scientists to be able to self serve compute resources? Do you have thoughts around the requirements on that?
I’ve definitely got colleagues - and other people in my role - who will make very strong cases for why they need a DGX-1, and that it’s the thing holding them back. Unless you’ve got a fairly well advanced compute roadmap you won’t be able to hold your own against that. There are definitely firms spending money they’re not ready for - the techies are going to be very happy, but my job is to make sure the firm is getting the most value all the time. You don’t want a new Tesla when there’s nowhere to charge it, and you don’t want a leaded-petrol car when there’s a fuel crisis and nowhere to fill it up. You want to be in the mainstream most of the time with this stuff.
We need to make sure we’re getting a positive ROI and managing the risk.
- Having a model inventory: rather than many systems where people keep track of things manually, you can see the different models being developed across the org and the state of those models.
- Model management: being able to easily deploy models into test, staging and production environments.
- Infrastructure management: giving data scientists access to the compute they need, when they need it, across both on-prem and cloud, rather than an ad-hoc approach to provisioning and consuming resources.
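The first bullet - one queryable record per model rather than scattered spreadsheets - can be sketched in a few lines. The field names and example records below are invented for illustration; they are not Dotscience’s actual schema:

```python
# Hypothetical minimal model-inventory record: one structured entry
# per model, so "what do we have, and in what state?" is one query.

from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    owner: str
    stage: str = "development"   # development -> test -> staging -> production
    materiality: str = "low"
    datasets: list = field(default_factory=list)

# Illustrative inventory entries (invented names).
inventory = [
    ModelRecord("aml-screening", "quant-team",
                stage="production", materiality="high"),
    ModelRecord("room-booking-nlu", "innovation-lab"),
]

# A single place to answer audit questions that would otherwise
# mean consolidating several hand-maintained spreadsheets.
in_production = [m.name for m in inventory if m.stage == "production"]
print(in_production)  # -> ['aml-screening']
```

Even this toy version shows why a single inventory beats four Excel files: the consolidation the interview describes as someone’s manual job becomes a one-line query.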