Companies investing in artificial intelligence will face increasing pressure to demonstrate returns on their investments in 2025 and beyond, according to Hitachi Vantara CTO for AI Jason Hardy.
“ROI is going to be very important,” Hardy, who leads AI at the digital transformation arm of Japanese multinational Hitachi, told PYMNTS in an interview. While he doesn’t expect overall AI investment to decrease, Hardy believes the level of future funding will increasingly depend on demonstrable results.
A 2024 PYMNTS Intelligence report validates his observations. While most CFOs surveyed were using generative AI for increasingly core business tasks, such as financial reporting, only 13% saw a “very positive” ROI. However, this outcome did not deter their plans to continue investing in AI.
This focus on ROI comes as many organizations struggle with fundamental challenges in implementing AI. According to a survey by Hitachi Vantara, a fifth of IT leaders say their AI models hallucinate or make things up. Only 36% trust their AI outputs more than half of the time.
A critical step to AI success that companies pay insufficient attention to is data quality, Hardy said. High-quality data is essential for training language models and other AI systems; bad data leads to poor results. Yet 6 out of 10 IT leaders do not review their datasets for quality or do not tag their data. Moreover, 18% say their data is “dark” — stored but not used.
“They’re just not in a state that’s ready for this [AI] journey,” Hardy said.
Another complication: 76% of IT leaders say their data is unstructured, meaning it is not stored in a set format, such as the rows and columns of a traditional database, that machines can process more easily. Examples of unstructured data include social media posts, images, videos and audio files.
Data Prep Can Be ‘Overwhelming’

Hardy said Hitachi Vantara’s customers can find it “very overwhelming” to tackle the mountain of data that needs to be organized, labeled and prepared for AI use. “They’re still trying to figure it out, honestly, and that’s one of the big things that we’ve been preaching to them.”
Hitachi Vantara’s approach is to first ask the customer to identify an AI use case they’d like to target and then determine what data it requires. Once the necessary data is identified, the customer can start preparing that dataset. Hardy said data preparation could take anywhere from weeks to years, depending on the organization’s size and complexity.
Hardy said he advises clients to start with use cases at the periphery before tackling core business functions, building expertise while minimizing risk. “Don’t start [with core tasks] because it’s probably not a good idea. Let’s start out here a little bit on the edge and then work our way in as a goal,” said Hardy, describing the implementation strategy.
Hitachi Vantara’s approach to helping clients prepare their data varies based on use case requirements. Hardy advised not to let “perfection be the enemy of good enough.” In some cases, 85% data readiness is sufficient; in others, the client wants 95%, Hardy said. The target depends on factors such as the use case’s exposure and the desired time to value.
This week, a World Economic Forum panel revealed that 74% of companies struggled to scale AI, and only 16% were prepared for AI-enabled reinvention. On the panel were the CEOs of AWS, PepsiCo, Sanofi and Accenture, as well as the president of Saudi Aramco. They said factors behind this impasse include relegating AI to the IT department, a lack of AI-ready staff, and workers who distrust AI and fear losing their jobs, among other challenges.
Trends in Gen AI and Hitachi’s LLMs

Hardy said he sees generative AI use cases trending toward customer service and data analytics. Another hot trend is agentic AI, in which several AI models interact with each other to complete tasks. AI agents can be experts at certain tasks, such as a manufacturing process, he said. The user’s task is broken down into small parts that a group of AI agents handle.
Hardy said Hitachi Vantara’s 13,000 global clients can choose anything from turnkey AI solutions — essentially an installed AI model that’s up and running — to fully managed solutions in which Hitachi Vantara does everything from sourcing Nvidia GPU chips and other hardware to deploying and managing the AI. Hitachi Vantara has partnered with Nvidia to offer both on-premises and as-a-service options for GPU computing.
Hitachi Vantara is also developing its own family of foundation models, Hardy said. Rather than fine-tuning existing open-source models, the company has opted to build from scratch, focusing on industries where it has deep expertise. Hardy declined to reveal the models’ names or parameter counts. He did say these models are not as expensive to train as OpenAI’s GPT series, which cost billions of dollars to train on data scraped from across the internet.
“We don’t need all of Reddit to build a large language model,” Hardy said, highlighting the company’s strategy of leveraging its parent firm’s extensive industrial data and domain knowledge. Hitachi Vantara is developing foundation models for multiple industries where the parent has a presence, including the manufacturing, transportation and energy sectors.
Looking ahead, Hardy sees sustainability becoming a crucial focus in enterprise AI deployments. While he acknowledges that direct responsibility for AI energy use lies with data centers and AI foundation model developers, Hitachi Vantara contributes to that energy use through the GPUs it supplies and the data it sends to those servers. “It’s our responsibility as the peripheral of the GPU to be as sustainable as possible,” Hardy said.