Accenture AI Expert on How First Principles Prevent Problems
As more organizations begin employing AI in production environments, it’s clear not everyone has completely thought through how AI will fundamentally change their business.

Most of the focus today tends to be on AI to reduce operational costs in the wake of the economic downturn brought on by the COVID-19 pandemic.

VentureBeat talked with Fernando Lucini, global data science and machine learning engineering lead for Accenture, about why organizations shouldn't focus solely on initial success. Lucini stressed how important it is for organizations adopting AI to keep first principles uppermost in mind.

VentureBeat: Prior to the COVID-19 pandemic, most organizations were struggling when it came to AI. Now we’re seeing more AI than ever. How has the pandemic impacted those projects?

Fernando Lucini: It’s been a confluence of events. CEOs are starting to ask, “Where has all the money gone?” People started to ask some really deep questions about those investments. We’re thinking more about the value of AI. From a human perspective, the companies that were hit hardest needed to get smart quickly because COVID squeezed them.

VentureBeat: Is there a danger organizations are now moving too quickly without really understanding AI?

Lucini: We all get very excited about AI, but it needs to run with the right kind of controls and ethics. Three years from now, you’re going to be in a land where there’s a model that connects to a model that connects to a model. It will all be intertwined in a complex way. I think there’s a ways to go.

VentureBeat: Will different models conflict with one another?

Lucini: There are no models interacting yet, but synthetic data is quite exciting. We have customers who literally can’t get ahold of their own data because it’s so protected, so there’s going to be in the modeling world the concept of synthetic data that is a true synthesis. It’s not a copy anymore. It reflects the original pattern but never has any of the original data. I think there’s going to be a lot of synthetic data out in the world. That’s when you’ll see a model created by a bank interacting with a model from an insurance company. As we move along and we get into more complex models, the winners are going to be those that actually have a great handle on things. They understand how things are happening, why they’re happening, and have strong controls and strong governance around how they do things.
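Lucini’s point about synthetic data — that it reflects the original pattern but never contains any of the original records — can be sketched in a few lines. This is a hypothetical, minimal illustration (fitting a multivariate Gaussian to made-up “sensitive” data and sampling fresh rows from it), not how any particular vendor’s synthetic-data product works:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "original" sensitive data: two correlated columns
# (say, income and spending) -- illustrative numbers only.
original = rng.multivariate_normal(
    mean=[50_000, 2_000],
    cov=[[1e8, 4e5], [4e5, 1e4]],
    size=1_000,
)

# Learn only summary statistics (the "pattern"), not the rows.
mean = original.mean(axis=0)
cov = np.cov(original, rowvar=False)

# Sample brand-new records from the learned distribution: none of
# the original rows are copied, but the correlations carry over.
synthetic = rng.multivariate_normal(mean, cov, size=1_000)

orig_corr = np.corrcoef(original, rowvar=False)[0, 1]
synth_corr = np.corrcoef(synthetic, rowvar=False)[0, 1]
print(round(orig_corr, 2), round(synth_corr, 2))
```

Real synthetic-data generators use far richer models (copulas, GANs, diffusion models) and add formal privacy guarantees, but the principle is the same: share the distribution, not the data.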

VentureBeat: Right now it takes a fair amount of time to train an AI model. Will that process become faster?

Lucini: I always joke that if you put five software engineers in a room and you give them five hours, no code will be written but they will know how to compile everything and what standards to use. If you put five data scientists in the next room for the same five hours, you’ll get five models based on five different mechanisms that are badly coded but very brilliant. We need to bring those two things together if you want to get the kind of speed of innovation we need. If you just have a few patterns, it’s very clear that you can go from data to model to production in an industrialized way. Where people fall down at the moment is that there have been loads of pilots in the last six months, but none of them can go to production.

VentureBeat: Machine learning operations (MLOps) has emerged as an IT discipline for implementing AI. Does this need to be folded into traditional IT operations?

Lucini: In time. Data science and ML engineering are in the same group at Accenture. These folks need to have quite a deep understanding of the mechanisms to make these things. They need to have knowledge that is a little bit more specific to the model. I suspect there’ll be specialization for a while. I don’t think that’s going to go away anytime soon.

VentureBeat: There’s a lot of talk about the democratization of AI these days using AutoML frameworks. Is that really possible to achieve?

Lucini: It’s inevitable that some of these platforms are doing more and more AutoML. I was speaking to a professor at Stanford a couple of weeks ago, and he was telling me that 90% of the people that go to his course on neural nets are not computer science students. The average education of people understanding statistical mathematics is going up. You also need industry expertise. Having somebody who understands how to use a model but doesn’t understand the problem at hand quite as deeply doesn’t work. My view is you’re going to have more AutoML that people can use, but we’re also going to need more guardrails to make sure that whatever it is they’re using is within the scope of safety.
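The AutoML-plus-guardrails idea Lucini describes can be sketched as a tiny model-selection loop: try several candidate models, keep the best on held-out data, and refuse to ship anything that isn’t clearly better than a trivial baseline. This is a hedged toy illustration, not a real AutoML framework; the dataset, candidate models, and the 50%-improvement guardrail threshold are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: a linear signal plus noise (illustrative only).
X = rng.uniform(-1, 1, size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.1, size=200)

X_train, X_val = X[:150], X[150:]
y_train, y_val = y[:150], y[150:]

def fit_mean(X, y):
    # Trivial baseline: always predict the training mean.
    m = y.mean()
    return lambda X: np.full(len(X), m)

def fit_linear(X, y):
    # Ordinary least squares with a bias column.
    A = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda X: np.hstack([X, np.ones((len(X), 1))]) @ w

# A minimal "AutoML" loop: fit each candidate, score on held-out data.
candidates = {"baseline": fit_mean, "linear": fit_linear}
scores = {}
for name, fit in candidates.items():
    model = fit(X_train, y_train)
    scores[name] = np.mean((model(X_val) - y_val) ** 2)

best = min(scores, key=scores.get)

# Guardrail: reject the winner if it barely beats the trivial baseline.
assert scores[best] < 0.5 * scores["baseline"], "guardrail: no real lift"
print(best)
```

Production AutoML platforms search far larger spaces, but the guardrail step is the part Lucini is arguing for: an automated check that the thing a “citizen data scientist” produced is actually safe to use.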

Education takes them to a point where they do understand whether they created a monster or not. We’re going to have to add more of these industry people that know more of the science.

There are already generalists and citizen data scientists. I joke with CIOs and CEOs that these people can also be dangerous amateurs. Then you have this debate: people don’t really understand how cars work, and they still drive them. But we still test people before they can drive, and there’s a good reason for that, so let’s do the same. It’s important to have enough of an education.

VentureBeat: What’s your best advice to organizations then?

Lucini: Think about the first principles. If you think AI is important to you, then you should ask what your business strategy for AI is, not how AI is part of your business strategy. Educate yourself sufficiently so you can apply principles to understand how AI might actually make a difference to what you’re doing. The truth is AI has a hidden cost of learning how to do it at scale. “Think 10 times” is the first principle of education.

Source: VentureBeat
