Digital supply chains: Why AI literacy and an understanding of reasons for deploying AI is crucial

Digitalisation – and specifically the deployment of artificial intelligence – is amongst the top priorities for many in the automotive logistics and supply chain sector, but in order to get the most out of an AI deployment, firms must be disciplined and understand what they are aiming to achieve with the technology.


Although AI is undoubtedly a transformative technology for supply chains, it's important to remember that it is a tool, and – like all tools – the limits of what it can achieve are largely dependent on the user. During Automotive Logistics' recent 'Digital supply chains: Beyond AI' livestream – sponsored by Loftware – editor Emily Uwemedimo spoke with John Rich, director of AI transformation at Mazda North America, and Paul Harris, director of solution consulting at Loftware, about why AI literacy is so important when taking advantage of new technologies.

"Don't get caught in the AI bubble, because bubbles always pop," Uwemedimo quoted, referencing a conversation between the panellists before the livestream began. This is not to diminish the potential of AI in any way, but it is an important sentiment to keep in mind when discussing the technology. Amid the enthusiasm for the many possibilities AI brings, it is worth remembering that behind the tool lies an operational reality: deploying AI is a daily discipline, and it requires staying grounded in trusted data, business realities and what it takes to implement AI safely and profitably within the supply chain.

"Grounded AI is not sceptical, you have to be disciplined," said Rich. "The difference really, I find, lies in whether you really know the value before you deploy an AI solution, not after." He explained that understanding and articulating the key objectives of an AI solution before it is deployed is essential – only through that level of understanding can you hold the solution accountable for changes to KPIs from day one.

Balancing enthusiasm for AI adoption with the reality of the daily discipline required to ensure a meaningful deployment is no mean feat, and the day-to-day realities must be communicated to teams looking to take advantage of AI solutions. "AI is not going to fix bad data, and if you put AI on top of bad data, all you end up with is bad AI output," explained Harris.

To truly benefit from AI, Harris emphasised Rich's point that identifying a problem to solve and setting measurable KPIs from the beginning is vital, whether the goal is to streamline an existing process or improve decision-making. "You've got to get the basics right first, because if you don't do that, you're going to end up six, 12, 18 months down the line, and you won't have had that impact and that value realisation you were looking for at the outset," Harris added.
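The discipline Rich and Harris describe – fixing measurable KPIs before deployment so the solution can be held accountable from day one – can be sketched in code. This is a minimal illustration, not anything the panellists showed; the KPI names, baselines and targets are hypothetical assumptions.

```python
# Hypothetical sketch: record a baseline and target for each KPI *before*
# deploying an AI solution, so impact can be measured from day one.

kpis = {
    # name: (baseline measured pre-deployment, target after deployment)
    "order_cycle_time_hours": (48.0, 36.0),
    "forecast_error_pct": (12.0, 8.0),
}

def on_track(name: str, observed: float) -> bool:
    """A KPI is on track once the observed value beats its pre-deployment
    baseline in the direction of the target."""
    baseline, target = kpis[name]
    if target < baseline:  # lower is better for this KPI
        return observed < baseline
    return observed > baseline

print(on_track("order_cycle_time_hours", 41.0))  # True: below the 48h baseline
print(on_track("forecast_error_pct", 13.5))      # False: worse than baseline
```

The point of the sketch is simply that the baseline must exist before go-live; without it, the "value realisation" Harris mentions cannot be demonstrated six, 12 or 18 months later.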

Of course, a number of different roles in the supply chain have a vested interest in AI adoption, from logistics leaders to operational executives to digital transformation heads, and KPIs for each of those people could look very different. That makes ensuring AI literacy throughout the supply chain an important but challenging task. "Literacy is not a uniform language," said Rich. "It really depends on the level of audience."

Rich elaborated that at each level and department, a different vocabulary is needed based on a different threshold of understanding. Executive literacy, for example, is all about risk and accountability, not technical depth. "The dangerous executive is not the one who doesn't understand transformer models or all of the technical breadth of things behind the scenes, it's the one who can't ask the hard question about model confidence," Rich shared.

Both Rich and Harris highlighted the need for quality data in order to get meaningful output from an AI model. Rich focused on three core data streams that every AI solution should have access to: what was planned, what physically happened and when. "Ask yourself: does this model or AI solution objectively have access to clean, complete, consistently structured operational data?" he said. "If the answer requires a caveat, you aren't ready to deploy AI but you are ready to invest in your data."
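Rich's readiness question – does the model have clean, complete, consistently structured access to what was planned, what happened and when – lends itself to a simple check. The sketch below is illustrative only; the record fields and the example data are assumptions, not anything from Mazda or Loftware.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record joining the three streams Rich describes:
# what was planned, what physically happened, and when.
@dataclass
class ShipmentRecord:
    planned_qty: Optional[int]      # what was planned
    actual_qty: Optional[int]       # what physically happened
    event_timestamp: Optional[str]  # when (ISO 8601 expected)

def data_readiness(records: list[ShipmentRecord]) -> float:
    """Share of records that are complete across all three streams."""
    if not records:
        return 0.0
    complete = sum(
        1 for r in records
        if r.planned_qty is not None
        and r.actual_qty is not None
        and r.event_timestamp is not None
    )
    return complete / len(records)

records = [
    ShipmentRecord(100, 98, "2024-05-01T08:00:00"),
    ShipmentRecord(50, None, "2024-05-01T09:30:00"),  # actuals never captured
    ShipmentRecord(None, 75, None),                   # no plan, no timestamp
]
print(f"Readiness: {data_readiness(records):.0%}")  # only 1 of 3 records is complete
```

If a check like this returns anything that needs a caveat, the answer by Rich's test is to invest in the data first, not in the model.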

Today's human-in-the-loop approach, based on active participation from human beings collaborating with AI systems, is starting to be questioned, as some consider the potential benefits of removing that human involvement from the process. However, Rich asserted that until analysis is conducted, using real production data, into the expected cost of errors should that checkpoint be removed, removing it can't be considered a viable option. "The rush to remove human-in-the-loop is driven by throughput pressure, not necessarily value analysis," he noted.
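The value analysis Rich calls for is, at its simplest, an expected-cost comparison. The figures below are entirely hypothetical assumptions chosen for illustration; in practice they would come from real production data, as he stresses.

```python
# Illustrative expected-cost comparison for removing a human review checkpoint.
# All figures are hypothetical assumptions, not production data.

decisions_per_day = 500
review_cost_per_decision = 2.00    # cost of the human checkpoint per decision
model_error_rate = 0.03            # share of errors the reviewer would have caught
cost_per_uncaught_error = 400.00   # e.g. a mis-routed shipment

daily_review_cost = decisions_per_day * review_cost_per_decision
daily_expected_error_cost = (
    decisions_per_day * model_error_rate * cost_per_uncaught_error
)

print(f"Keep human-in-the-loop: ${daily_review_cost:,.2f}/day")
print(f"Remove it (expected):   ${daily_expected_error_cost:,.2f}/day")
# With these assumed numbers, the expected error cost ($6,000/day) far exceeds
# the review cost ($1,000/day), so throughput savings alone don't justify removal.
```

The arithmetic is trivial, but it makes Rich's point concrete: throughput pressure argues only about the left-hand number, while the decision depends on both.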