4 AI trends: It’s all about scale in 2022 (so far)

The heat of July is upon us, which also means we’re exactly halfway through 2022. So it seems like a good time to pause and ask: What are the biggest AI trends so far this year? 

The colossal AI trend that all other AI trends serve is the increased scale of artificial intelligence in organizations, said Whit Andrews, vice president and distinguished analyst at Gartner Research. That is, more and more companies are entering an era where AI is an aspect of every new project. 

“If you want to think of a new thing, the new thing that is going to be most attractive is going to be something that you can do with scaled AI,” Andrews said. “The human skills are present, the tools are cheaper, and it’s easier now to get access to data that might be relevant to what you’re trying to accomplish.”  

According to Sameer Maskey, founder and CEO at Fusemachines and an adjunct associate professor at Columbia University, the move toward scaling AI is made possible by more data, a stronger focus on data strategy and cheaper compute power. 

“We’re also at the point where a lot of enterprises are now seeing the value in AI,” Maskey said. “And they want to do it at scale.” 

Additionally, Julian Sanchez, director of emerging technology at John Deere, points out that the thing about AI is that it “looks like magic.” There’s a natural leap, he explained, from the idea of “look what this can do” to “I just want the magic to scale.” 

AI at scale is not magic, it’s data

“Everybody’s trying to figure out how to go to the next level,” Sanchez said. But the real reason AI can be used at scale, he emphasized, has nothing to do with magic. It’s because of data.  

“I know that the only way John Deere got there was through a rigorous and extensive process of data collection and data labeling,” he said. “So now we have to figure out a way to get the right data collected and implemented in a way that is not so onerous.”  

But some experts emphasize that most companies remain immature in their AI efforts – in terms of having the right data, resources and literacy needed to scale. 

“I think there is still a bit of conflict around testing capability and use cases vs scaling AI,” said Di Mayze, global head of data and AI at agency holding company WPP. One client, she added, described their efforts as “pilot-palooza.” “They’re trying to find ways to link all the various trials to enable a scaled AI capability, but companies are realizing they need to get their data in order before they can worry about scaling AI,” she said.

Here are four AI trends related to scale that are all the rage in mid-2022:

Synthetic data offers speed and scale

Kevin Dunlap, founder and managing partner at early-stage venture capital firm Calibrate Ventures, said organizations use synthetic data – defined as data that is created algorithmically rather than collected via real-world events – to improve software development, speed up R&D, train machine learning models, better understand their own internal data and products, and improve business processes. 

“Synthetic data can stand in for real datasets and be used to validate mathematical models,” he said. “I’ve seen companies in fields such as healthcare, finance, insurance, cybersecurity, manufacturing, robotics, and autonomous vehicles use synthetic data to speed up development and time-to-market so they can scale faster.” 

To scale more quickly, he added, companies are combining synthetic data with real data to get a better understanding of their product, go-to-market strategies, customers and operations. Healthcare companies, for example, use synthetic data to make more accurate diagnoses without compromising patient data, while financial institutions use it to spot fraud. 

“Companies can also build synthetic twins of their own data to see blind spots,” he said. “GE, for example, creates synthetic twins of data from turbines to improve engineering and mechanical designs.”

John Deere’s Sanchez said that in 2021 he heard chatter about synthetic data, but now, this year, he has seen its use firsthand. “Our teams generate synthetic data and try to use it to validate a model or even try to incorporate it into the training data sets,” he said. 

In some ways, the use of synthetic data remains an experiment, he cautioned.

“The whole point of training an AI algorithm is you’re showing it a variety of features and letting it learn, so you’re always so cautious to say, does my simulated data have biases that I don’t want in my algorithm?” Still, he said, “I have seen way more of it this year.” 
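
To make the idea concrete, here is a minimal, hypothetical Python sketch – not John Deere’s or any vendor’s actual pipeline – of blending synthetic samples into a real training set and then validating only against held-out real data, which is one way to catch the kind of simulated-data bias Sanchez describes:

```python
# Minimal sketch: blend synthetic samples into a real training set, then score
# only on held-out real data to see whether the synthetic portion helps or hurts.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in for real, labeled data (in practice: collected and labeled by hand).
X_real, y_real = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X_real, y_real, test_size=0.3, random_state=0
)

# Stand-in for algorithmically generated (synthetic) data; a different generator
# mimics the risk that simulated data carries biases the real world does not.
X_syn, y_syn = make_classification(n_samples=2000, n_features=20, random_state=1)

baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)
augmented = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_train, X_syn]), np.concatenate([y_train, y_syn])
)

# Always validate against real held-out data: the question is whether the
# synthetic portion improved or degraded real-world accuracy.
print("real-only model:", accuracy_score(y_test, baseline.predict(X_test)))
print("real+synthetic :", accuracy_score(y_test, augmented.predict(X_test)))
```

Scoring both models on the same real test set captures the trade-off: synthetic data only earns its place in the training set if it improves, or at least does not hurt, real-world accuracy.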

AI models: Scale or bust

Scale has been the name of the game in machine learning and deep learning research for the past few years, and ever-bigger models continue to dominate the landscape in 2022, said Melanie Beck, manager of research engineering at software company Cloudera. 

“From the release of OpenAI’s DALL-E 2 image generation model to Google’s LaMDA conversation agent, the key to high-performance has been bigger models trained on more data and for far longer – all of which requires vastly more computing resources,” she said. “This raises the question: how can organizations that may not have the resources of these tech giants get in and stay in the game?” 

The research community has been most surprised by the unexpected capabilities that emerge from large-scale AI models, or foundation models, added Nicolas Chapados, vice president of research at ServiceNow. Originally built as large language models, these are now trained on massive multimodal datasets and can adapt to new “downstream” tasks very quickly, sometimes with no new data at all. 

“These models are equally good at dialog, question-answering, describing images in words, translating text to code, and sometimes playing video games and controlling robot arms,” Chapados said. 

What’s surprising, he explained, is that beyond 100 billion parameters, these models exhibit emergent behavior their designers didn’t anticipate, such as the ability to give a step-by-step explanation when answering a question, given the right “prompting.” 

“The top challenges in 2022 are for organizations to understand which use cases — especially in the enterprise world — truly benefit from this scale, how to successfully and profitably operationalize these capabilities, as well as how to manage other inhibitors such as access to suitable and sufficient data, and safety risks such as possible model toxicity,” he added. 
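
As a purely illustrative sketch of the prompting Chapados describes – the `generate` function below is a hypothetical stand-in for whatever large-model API an organization uses, not any specific vendor’s SDK – the pattern is simply to ask the model for its reasoning:

```python
# Hypothetical stand-in: in practice this would call a hosted large language
# model; the vendor-specific call is deliberately omitted here.
def generate(prompt: str) -> str:
    return "<model response would appear here>"

# Prompting a sufficiently large model to reason step by step is the kind of
# emergent question-answering behavior Chapados describes.
prompt = (
    "Q: A warehouse ships 120 orders a day and 3% arrive damaged. "
    "How many damaged orders should it expect over 30 days?\n"
    "A: Let's think step by step."
)
print(generate(prompt))
```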

MLops on the rise

Kavita Ganesan, founder of Opinosis Analytics and author of The Business Case for AI, said that one of the problems companies have faced in the past is scaling the number of deployed models. 

“Every time a new model is developed, it often has its own deployment requirements, adding friction to each development and deployment cycle,” Ganesan said. “This has caused a slowdown in many machine learning initiatives, and some even had to be shelved because of the work involved in each deployment cycle.” 

That is slowly changing with the growing number of MLops platforms, she explained, which allow organizations to develop, deploy, integrate and monitor models.

“Even better, some of these platforms allow you to autoscale computing resources and other infrastructure requirements, making the deployment of machine learning models for business use cases less painful and more repeatable,” she explained. “Specific vendors also allow businesses to use on-premise or cloud resources depending on needs.”
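
What that standardization can look like in practice is sketched below; the class and method names are illustrative assumptions rather than any particular MLops vendor’s API, but they show the common load/predict/log contract that lets each new model ride the same deployment and monitoring path instead of bringing its own requirements:

```python
# Illustrative sketch of a standardized serving wrapper: every model artifact is
# loaded, queried and logged the same way, so deployment friction doesn't grow
# with each new model.
import json
import pickle
from pathlib import Path

class ModelService:
    def __init__(self, model_path: str):
        # One loading convention for every model artifact (assumes a pickled,
        # scikit-learn-style model with a .predict method).
        self.model = pickle.loads(Path(model_path).read_bytes())

    def predict(self, features: list[list[float]]) -> list:
        return self.model.predict(features).tolist()

    def log_prediction(self, features, outputs, log_path: str = "predictions.jsonl"):
        # Uniform logging makes monitoring identical across deployed models.
        with open(log_path, "a") as f:
            f.write(json.dumps({"inputs": features, "outputs": outputs}) + "\n")
```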

John Deere’s Sanchez added that the current crop of reliable, commercially available MLops platforms is a big shift from three years ago, when the tools were “almost like homegrown systems.” But, he said, they are also a double-edged sword.

“Now I can take a good software developer and once they learn some of the tools that are available, they quickly can behave like an experienced AI developer,” Sanchez said. “But sometimes they may decide to use those tools when they should be trying something else – often it can give you a solution and they’re not quite sure why it works or how it works.”

Scaling AI responsibly

From Microsoft’s recent moves toward “responsible AI” to companies taking on the issue of AI safety, discussion about how to scale AI responsibly – that is, ethically and without bias – is everywhere in 2022. 

WPP’s Mayze pointed out that businesses need to be conscious of what they are asking the machines to do and fully review whether the KPIs are correct. 

“For example, if you are trying to optimize revenue per customer, AI will find ways to do this that may not look so ethical in the cold light of day,” Mayze said. “So creating an environment where people can explore the unintended consequences of AI use and establish the boundaries of any organization is important.” 

However, applying the principles of responsible AI – such as transparency and explainability – may be an easy answer to societal concerns about how companies might use AI, but it is not sufficient, said François Candelon, global director of the BCG Henderson Institute. 

“It is a good and necessary start, but I believe companies must go beyond being responsible and develop a true social contract with their customers based on dialogue, trust, and a transparent cost/benefits evaluation of AI impact to earn what I call their ‘social license’ – a form of acceptance that companies must gain through consistent and trustworthy behavior and stakeholder interactions,” Candelon said.

AI at scale means adapting to change

No matter how organizations move toward scaling AI in the coming year, it’s important to understand the significant differences between using AI as a ‘proof of concept’ and scaling those efforts, said Bret Greenstein, data, analytics and AI partner at PwC.

“The difference is between making a great sandwich and opening a successful restaurant,” Greenstein said. “You have to think about all the things that need to be available when you need them, ensure things are in the form you need to be useful, and ensure you can adapt your systems to changes.” 

A scaled AI solution, for example, needs to be fed new data as a pipeline, not just a snapshot of data. And while a proof of concept can tolerate incomplete or bad data since it is not mission-critical, data preparation is still 80-90% of the work needed to make AI successful. Changing conditions can have severe impacts on models in production: in scaled, production AI systems, models are retrained as the data changes and accuracy is monitored as conditions shift.
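
A minimal sketch of that monitor-and-retrain loop, assuming labeled batches arrive over time and an accuracy threshold chosen for the use case (both assumptions, not PwC’s methodology), might look like this:

```python
# Minimal sketch: score the production model on each new batch of labeled data
# and retrain when accuracy drifts below a floor chosen for the use case.
from sklearn.metrics import accuracy_score

ACCURACY_FLOOR = 0.90  # illustrative threshold, tuned per application

def monitor_and_retrain(model, new_batches, retrain_fn):
    history = []
    for X_new, y_new in new_batches:       # a pipeline of fresh data, not a snapshot
        score = accuracy_score(y_new, model.predict(X_new))
        history.append(score)
        if score < ACCURACY_FLOOR:         # conditions changed: retrain on recent data
            model = retrain_fn(X_new, y_new)
    return model, history
```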

“The key lesson in all of this is to think of AI as a learning-based system,” Greenstein said. “People need to continue to learn with the latest data, and to be aware of changes so they can apply that learning to make accurate decisions today.” 

For John Deere, scaling AI has been all about working with large data sets to train models, giving the organization an important perspective on change. 

“Someone new coming in might say, ‘There’s a tool and I can do this thing once and it’s magic,’” Sanchez said. “But when you scale solutions into a product, it’s not just one-time magic – you have to understand how that product gets used in the real world and all of the different corner cases.” 

Clearly, the 2022 AI trends so far indicate how AI is becoming useful at a greater scale within organizations, said Gartner analyst Andrews. 

“More people are able to use it, they’re able to accomplish things they could never have accomplished before,” Andrews said. “So the big AI trend in 2022 is every time we do something new, AI is a part of it.” 
