5 ways enterprise leaders can use large language models to unlock new possibilities

It’s highly unlikely that you’ve missed the buzz surrounding generative AI, and specifically large language models (LLMs) like ChatGPT. In recent months, these have been hot topics everywhere, from social media to the news to everyday conversations, and we’ve only just begun to learn what generative AI could be capable of.

Generally speaking, gen AI refers to a category of machine learning (ML) techniques that can create content like images, music and text that closely resembles human-created content. LLMs, in particular, are neural networks with billions of parameters that have been trained on vast amounts of text data, which enables them to understand, process and generate human-like language.

Together, these technologies offer a diverse range of applications that hold the potential to reshape industries and amplify the quality of interactions between humans and machines. By exploring these applications, business owners and enterprise decision-makers can gain valuable inspiration, drive accelerated growth and achieve tangibly improved results through rapid prototyping. The added advantage of gen AI is that most of these applications demand minimal expertise and no further model training.

Quick disclaimer: People often associate gen AI exclusively with ChatGPT, but numerous models from other providers are available, like Google’s T5, Meta’s Llama, TII’s Falcon and Anthropic’s Claude. While most of the applications discussed in this article use OpenAI’s ChatGPT, you can readily swap the underlying LLM to align with your compute budget, latency requirements (smaller models load faster and reduce inference latency) and downstream task.


1. Connect LLMs to external data

LLMs demonstrate impressive capabilities at many tasks right out of the box, such as translation and summarization, without requiring initial customization. The reason they are so good at these generic tasks is that the underlying foundation model has been trained on large yet generic datasets. However, this competence does not seamlessly extend to domain-specific tasks, such as answering questions about your company’s annual report. This is where Retrieval Augmented Generation (RAG) comes into the picture.

RAG is a framework for building LLM-powered systems that make use of external data sources. RAG gives an LLM access to data it would not have seen during pre-training, but that it needs to provide relevant and accurate responses. RAG enables language models like ChatGPT to provide better answers to domain-specific questions by combining their natural language processing (NLP) abilities with external knowledge, mitigating instances of inaccurate information or “hallucinations.” It does so by:

  • Retrieving relevant information from external knowledge sources, such as large-scale document collections, databases or the internet. The relevance is based on the semantic similarity (measured using, say, cosine similarity) to the user’s question.
  • Augmenting the prompt with the retrieved information (to provide helpful context for answering the question) and passing it to the LLM so it can produce a more informed, contextually relevant and accurate response, as sketched in the code below.
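To make the two steps concrete, here is a minimal sketch of the retrieve-and-augment loop in Python. The `embed` and `complete` functions are placeholders, not real APIs: wire them up to the embedding model and LLM of your choice (OpenAI, Llama 2, Claude and so on).

```python
import numpy as np

# Placeholders: connect these to your embedding model and LLM of choice.
def embed(text: str) -> np.ndarray:
    raise NotImplementedError("return an embedding vector for `text`")

def complete(prompt: str) -> str:
    raise NotImplementedError("return the LLM's completion for `prompt`")

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def answer(question: str, documents: list[str], top_k: int = 3) -> str:
    # Retrieve: rank documents by semantic similarity to the question.
    q_vec = embed(question)
    ranked = sorted(documents,
                    key=lambda doc: cosine_similarity(q_vec, embed(doc)),
                    reverse=True)
    # Augment: prepend the top-ranked documents to the prompt as context.
    context = "\n\n".join(ranked[:top_k])
    prompt = (f"Use only the context below to answer the question.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
    return complete(prompt)
```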

This approach makes LLMs more versatile and useful across various domains and applications, including question-answering, content creation and interactive conversation with access to real-time data. Podurama, a podcast app, has leveraged similar techniques to build its AI-powered recommender chatbots. These bots adeptly suggest relevant shows based on user queries, drawing insights from podcast transcripts to refine their recommendations.

This approach is also valuable in crisis management. PagerDuty, a SaaS incident response platform, uses LLMs to generate incident summaries from basic data such as title and severity, augmented with internal Slack data, where responders discuss details and share troubleshooting updates, to refine the quality of the summaries.

While RAG may appear intricate, the LangChain library offers developers the tools needed to implement RAG and build sophisticated question-answering systems (in many cases, a single line of code is enough to get started). LangChain is a powerful library that can augment and enhance the performance of an LLM at runtime by providing access to external data sources or connecting to existing APIs of other applications.
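As an illustration, here is a minimal LangChain sketch for question answering over a PDF. The module paths and class names reflect LangChain as of the time of writing and may move in newer releases; the file name and question are placeholders, and the example assumes an OpenAI API key is set in the environment.

```python
# pip install langchain openai faiss-cpu pypdf
# Module paths reflect LangChain circa 2023 and may differ in newer releases.
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

# "annual_report.pdf" is a placeholder document.
docs = PyPDFLoader("annual_report.pdf").load_and_split()
vectorstore = FAISS.from_documents(docs, OpenAIEmbeddings())

# RetrievalQA wires the retrieve-and-augment steps together in one chain.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(temperature=0),
    retriever=vectorstore.as_retriever(),
)
print(qa.run("What was revenue growth in the last fiscal year?"))
```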

When combined with open-source LLMs (such as Llama 2 or BLOOM), RAG emerges as an exceptionally potent architecture for handling confidential documents. What’s particularly interesting is that LangChain boasts over 120 integrations (at the time of writing), enabling seamless functionality with structured data (SQL), unstructured content (PDFs), code snippets and even YouTube videos.

2. Connect LLMs to external applications

Much like utilizing external data sources, LLMs can establish connections with external applications tailored to specific tasks. This is particularly valuable when a model produces inaccuracies due to outdated information. For example, when asked about the current Prime Minister of the UK, ChatGPT might still answer Boris Johnson, even though he left office in September 2022. This limitation arises because the model’s knowledge is frozen at its pretraining period and doesn’t encompass post-training events like Rishi Sunak’s appointment.

To address such challenges, LLMs can be enhanced by integrating them with the external world through agents. These agents serve to mitigate the absence of internet access inherent in LLMs, allowing them to engage with tools like a weather API (for real-time weather data) or SerpAPI (for web searches). A notable example is Expedia’s chatbot, which guides users in discovering and reserving hotels, responding to queries about accommodations, and delivering personalized travel suggestions.
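Below is a minimal sketch of such an agent using LangChain’s agent tooling with SerpAPI for web search. The module paths reflect LangChain as of this writing, and the example assumes both an OpenAI key and a SerpAPI key are set in the environment.

```python
# pip install langchain openai google-search-results
# Assumes OPENAI_API_KEY and SERPAPI_API_KEY are set in the environment.
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0)
tools = load_tools(["serpapi"])  # web search via SerpAPI

# The agent lets the LLM decide when to call the search tool to answer
# questions its (frozen) training data cannot.
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("Who is the current Prime Minister of the UK?")
```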

Another captivating application involves the automatic labeling of tweets in real-time with specific attributes such as sentiment, aggression and language. From a marketing and advertising perspective, an agent connecting to e-commerce tools can help the LLM recommend products or packages based on user interests and content. 
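For the tweet-labeling case, a single prompt is often enough. The sketch below uses the OpenAI Python SDK as it existed at the time of writing (pre-1.0 interface); the tweet text is a made-up example, and the API key is read from the environment.

```python
# Label a tweet with sentiment, aggression and language in one prompt.
# Uses the pre-1.0 OpenAI SDK; reads OPENAI_API_KEY from the environment.
import openai

tweet = "Just tried the new update and it keeps crashing. Fix it!!"
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,
    messages=[{
        "role": "user",
        "content": (
            "Label the tweet below. Return JSON with keys "
            "'sentiment' (positive/neutral/negative), "
            "'aggression' (low/medium/high) and 'language'.\n\n"
            f"Tweet: {tweet}"
        ),
    }],
)
print(response["choices"][0]["message"]["content"])
```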

3. Chaining LLMs

LLMs are commonly used in isolation. Recently, however, LLM chaining has gained traction for complex applications: linking multiple LLMs in sequence, with each specializing in a specific aspect, so that they collaborate to generate comprehensive and refined outputs.

This approach has been applied in language translation, where LLMs are used successively to convert text from one language to another. Companies like Microsoft have proposed LLM chaining for translation services in the case of low-resource languages, enabling more accurate and context-aware translations of rare words.

This approach can offer valuable use cases in other domains as well. For consumer-facing companies, LLM chaining can create a dynamic customer support experience that enhances customer interactions, service quality and operational efficiency.

For instance, the first LLM can triage customer inquiries and categorize them, passing them on to specialized LLMs for more accurate responses. In manufacturing, LLM chaining can be employed to optimize the end-to-end supply chain processes by chaining specialized LLMs for demand forecasting, inventory management, supplier selection and risk assessment.
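A minimal sketch of that triage-then-respond chain is shown below. The `complete` function is a placeholder for whichever LLM API you use, and the categories and prompt templates are illustrative.

```python
# A two-step LLM chain: the first call triages the inquiry, the second
# answers with a category-specific prompt (or a specialized model).
def complete(prompt: str) -> str:
    raise NotImplementedError("call your LLM provider here")

SPECIALIST_PROMPTS = {
    "billing":   "You are a billing specialist. Resolve this inquiry:\n{q}",
    "technical": "You are a support engineer. Troubleshoot this issue:\n{q}",
    "general":   "You are a friendly support agent. Help with:\n{q}",
}

def handle_inquiry(inquiry: str) -> str:
    # Step 1: the triage LLM categorizes the inquiry.
    category = complete(
        "Classify this customer inquiry as billing, technical, or general. "
        f"Reply with one word.\n\nInquiry: {inquiry}"
    ).strip().lower()
    # Step 2: a specialized prompt produces the final answer.
    template = SPECIALIST_PROMPTS.get(category, SPECIALIST_PROMPTS["general"])
    return complete(template.format(q=inquiry))
```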

4. Extracting entities with LLMs

Prior to the emergence of LLMs, entity extraction relied on labor-intensive ML approaches involving data collection, labeling and complex model training. This process was cumbersome and resource-demanding. With LLMs, the paradigm has shifted: Entity extraction is reduced to a simple prompt, where users can query the model to extract entities from text. More interestingly, when extracting entities from unstructured text like PDFs, you can even define a schema and the attributes of interest within the prompt.
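The sketch below illustrates schema-in-the-prompt extraction. Again, `complete` is a placeholder for your LLM call, and the schema is a made-up example for financial news.

```python
# Schema-guided entity extraction with a single prompt.
import json

def complete(prompt: str) -> str:
    raise NotImplementedError("call your LLM provider here")

# Illustrative schema for financial news articles.
SCHEMA = {
    "company_name": "string",
    "ticker_symbol": "string",
    "financial_figures": "list of {metric, value, period}",
}

def extract_entities(text: str) -> dict:
    prompt = (
        "Extract the entities below from the text and return valid JSON "
        f"matching this schema:\n{json.dumps(SCHEMA, indent=2)}\n\n"
        f"Text:\n{text}"
    )
    return json.loads(complete(prompt))
```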

Financial institutions, for example, can use LLMs to extract crucial financial entities like company names, ticker symbols and financial figures from news articles, enabling timely and accurate market analysis. Similarly, advertising and marketing agencies can manage their digital assets by employing LLM-driven entity extraction to categorize ad scripts, actors, locations and dates, facilitating efficient content indexing and asset reuse.

5. Enhancing transparency of LLMs with ReAct prompts

While receiving direct responses from LLMs is undoubtedly valuable, the opacity of the black-box approach often makes users hesitant. Additionally, when confronted with an inaccurate response to a complex query, pinpointing the exact step at which it failed becomes challenging. A systematic breakdown of the reasoning steps would greatly assist debugging. This is precisely where the Reason and Act (ReAct) framework comes into play, offering a solution to these challenges.

ReAct emphasizes step-by-step reasoning to make the LLM generate solutions the way a human would. The goal is to make the model think through tasks like humans do and explain its reasoning in language. This approach is easy to operationalize, as generating ReAct prompts is straightforward: Human annotators express their thoughts in natural language alongside the corresponding actions they have executed. With only a handful of such examples, the model learns to generalize well to new tasks.
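The stylized prompt below shows the flavor of a ReAct few-shot example, interleaving Thought, Action and Observation steps. The question, answers and `search` tool are invented for illustration.

```python
# A few-shot ReAct prompt: the annotated example teaches the model to
# externalize its reasoning and name the actions it takes.
REACT_PROMPT = """Answer the question by interleaving Thought, Action, and Observation steps.

Question: Who is older, the CEO of Company A or the CEO of Company B?
Thought: I need to find the age of each CEO.
Action: search("Company A CEO age")
Observation: The CEO of Company A is 52.
Thought: Now I need the age of Company B's CEO.
Action: search("Company B CEO age")
Observation: The CEO of Company B is 47.
Thought: 52 is greater than 47, so Company A's CEO is older.
Final Answer: The CEO of Company A.

Question: {question}
"""

# Usage: fill in the user's question, then send the prompt to your LLM.
prompt = REACT_PROMPT.format(question="Which of our two newest products launched first?")
```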

Taking inspiration from this framework, many ed-tech companies are piloting tools that offer learners personalized assistance with coursework and assignments, and give instructors AI-powered lesson plans. To this end, Khan Academy developed Khanmigo, a chatbot designed to guide students through math problems and coding exercises. Instead of merely delivering answers upon request, Khanmigo encourages thoughtful problem-solving by walking students through the reasoning process. This approach not only helps prevent plagiarism but also empowers students to grasp concepts independently.

Conclusion

While the debate may be ongoing about the potential for AI to replace humans in their roles or the eventual achievement of technological singularity (as predicted by the godfather of AI, Geoffrey Hinton), one thing remains certain: LLMs will undoubtedly play a pivotal role in expediting various tasks across a range of domains. They have the power to enhance efficiency, foster creativity and refine decision-making processes, all while simplifying complex tasks.

For professionals in various tech roles, such as data scientists, software developers and product owners, LLMs can offer valuable tools to streamline workflows, gather insights and unlock new possibilities.

Varshita Sher is a data scientist, a dedicated blogger and podcast curator, and leads the NLP and generative AI team at Haleon.
