Deep Dive: Demystifying LLM Technology Volume 3
Utilizing LLMs for Business and Innovation
Mornin’ miners⛏️,
Happy Tuesday!
Digger Insights is your easy-to-read daily digest about tech. We gather tech insights that help you gain a competitive advantage!
Let’s get to it!
Today’s Deep Dive: 🤖Demystifying LLM Technology Volume 3 - Utilizing LLMs for Business and Innovation💻
Demystifying LLM Technology Volume 3 - Utilizing LLMs for Business and Innovation
Large language models are incredibly versatile, and one of the main reasons they have gained so much traction is their ability to perform natural language processing (NLP) tasks: tasks that require understanding the way humans write and speak, handled with high accuracy and fluency.
As we learned last week in volume 2 of this Deep Dive series, titled Behind the Scenes of Training LLMs, these models are trained on vast datasets, and that training can be done using different methods. The training phase is what gives LLMs their wide range of capabilities, whether that's summarizing text, solving mathematical or scientific problems, or more practical tasks that help individuals and businesses get their work done more effectively.
This time, we’ll look at what follows training and how LLMs can be put to use by businesses and anyone looking to innovate.
Leveraging LLMs for NLP Tasks
One training method we covered briefly was transfer learning, in which a pre-trained model's existing knowledge is reused when building a newer model. Training a model to generate coherent, human-like text from scratch is almost always computationally expensive and time-consuming. By building on models already trained by academic institutions and big tech companies, such as OpenAI's GPT models, businesses can use a single model that performs multiple tasks instead of training a separate model for every task.
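To make the transfer-learning idea concrete, here's a toy sketch: a "pretrained" feature extractor (stood in for by a fixed random projection; in reality these would be layers trained on a huge corpus) is kept frozen, and only a small task-specific head is trained on a business's own labeled data. Everything here is illustrative, not a real model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained model's feature layers. In real transfer
# learning these weights come from a model trained on a huge corpus;
# here they are just a fixed random projection.
W_pretrained = rng.normal(size=(4, 8))

def extract_features(x):
    # Frozen "pretrained" layers: we only read them, never update them.
    return np.tanh(x @ W_pretrained)

# Small task-specific dataset (e.g., a business's own labeled examples).
X = rng.normal(size=(64, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Train only a small new head (logistic regression) on frozen features.
feats = extract_features(X)
w_head = np.zeros(8)
b_head = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w_head + b_head)))  # sigmoid
    grad = p - y                                          # logistic-loss gradient
    w_head -= 0.1 * feats.T @ grad / len(X)
    b_head -= 0.1 * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(feats @ w_head + b_head)))) > 0.5
acc = (pred == (y == 1)).mean()
print(f"training accuracy: {acc:.2f}")
```

Because the expensive part (the feature extractor) is reused as-is, only the tiny head needs training, which is the same economics that make pre-trained LLMs attractive to businesses.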
This way, businesses can not only put LLMs to work in their day-to-day operations but also develop their own models more easily, as McKinsey & Company recently did with “Lilli.”
When leveraging pre-trained models, it’s crucial for a company to choose the right model for its specific needs, especially with new language models entering the AI market every day. Each pre-trained model comes with different pre-existing knowledge and its own strengths on different tasks. An article published on Towards Data Science compares these differences.
Photo Courtesy of Towards Data Science
A platform built by Allganize, a company that provides Natural Language Understanding API and conversational AI for enterprises, enables companies wanting to build their own AI-based applications to choose different base LLM models.
Content Generation
With the many benefits LLMs can offer thanks to their comprehensive capabilities, it’s not surprising that many have deemed these models revolutionary. Content generation has become one of the tasks for which many individuals and businesses use LLMs.
LLMs can generate many types of text, such as news articles, poems, and legal documents, in a wide variety of styles. This has allowed businesses working in content creation, translation, or data augmentation to put LLMs to use.
According to an article published in Harvard Business Review, there is no question that LLMs have grown adept at mimicking human creative work. LLMs are sensitive to the prompts fed to them, make appropriate word choices, and can write without grammatical errors. But although LLMs and generative AI can do a lot, human editing is still required to avoid legal and ethical mishaps.
The article, written by Thomas H. Davenport and Nitin Mittal, notes that LLMs’ content-generation abilities can be applied across marketing, from email generation to product campaigns and social media copy. LLMs can also be used for code generation, not to replace human programmers but to help them work more quickly and effectively, as Microsoft did with Codex and Copilot.
Voice Assistants for Customer Support
LLMs’ strong grasp of human speech and conversation, along with their context awareness, has carried them far. At the moment, LLMs are increasingly being used for conversational AI and chatbots.
Designed for dialogue, LLMs can carry on long conversations without losing track of earlier turns, which keeps context intact, and they are quickly becoming an important tool in customer support. In many cases, these models are used to streamline, augment, and automate the customer experience for greater efficiency and improved customer satisfaction.
A few ways LLMs can be used in customer support are answering frequently asked questions, giving product recommendations, and resolving customer complaints. Because LLMs can conduct sentiment analysis, they can help businesses respond to customer complaints more quickly.
To implement this, businesses need to ensure that the model used is trained to deal with relevant customer support information, such as order numbers, shipping tracking links, and so on. One platform that can assist businesses in this area is DigitalGenius, an AI auto-pilot platform that understands conversation and automates repetitive customer service processes.
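As an illustration of that sentiment-based triage idea, here's a minimal Python sketch. The keyword scorer is a deliberately crude stand-in for a real LLM sentiment call (which in practice you'd make through a model API); the routing logic around it is the point, and all names here are hypothetical.

```python
# Minimal sketch of sentiment-based triage for incoming support messages.
# The keyword scorer below stands in for an LLM sentiment call; a real
# system would prompt a model to classify each message instead.

NEGATIVE_WORDS = {"angry", "broken", "refund", "terrible", "late", "never"}

def sentiment_score(message: str) -> float:
    # Crude proxy: fraction of words that signal frustration.
    words = message.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in NEGATIVE_WORDS for w in words) / len(words)

def triage(messages):
    # Most negative messages first, so complaints are answered sooner.
    return sorted(messages, key=sentiment_score, reverse=True)

inbox = [
    "Where can I find your size chart?",
    "My order arrived broken and I am angry, I want a refund!",
    "Thanks, the flowers were lovely.",
]
for msg in triage(inbox):
    print(msg)
```

Swapping the scorer for an actual LLM call leaves the rest of the pipeline unchanged, which is what makes this pattern easy to adopt incrementally.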
Big companies such as Apple, Amazon, and Google have long used AI voice assistants: Siri, Alexa, and Assistant, respectively. But smaller companies are now building digital voice assistants as well, such as Sasha from workout education company Sensory Fitness. Sasha handles customer service phone calls and carries on dynamic conversations; according to its developer, FrontdeskAI, the voice assistant saves Sensory Fitness $30,000 per year.
UK-based flower dealer Flower Station and online floral retailer 1-800-Flowers have also deployed AI customer service bots. Flower Station’s chatbot, Tars, tracks and analyzes customer feedback and reviews so the company can identify areas for improvement and address customer concerns; Flower Station says customer satisfaction has improved by 20%. 1-800-Flowers’ digital concierge “GWYN” (Gifts When You Need) takes customer orders: a customer types “I’m looking for a birthday gift,” and the bot responds with questions like “Is it for a male or female?” and “Age 30 or 35?”
Photo Courtesy of 1-800-Flowers
Personalization and Recommendation Systems
Few seem to doubt that LLMs will power, or become, virtual assistants, to the point where prediction markets are taking bets on which incumbent voice assistant will be the first to incorporate an LLM.
Photo Courtesy of Manifold Markets
The training LLMs go through has produced unforeseen applications and use cases beyond virtual assistants and customer support bots. Because LLMs can understand human conversation and context, they have also enabled recommendation systems.
LLM-based recommendation systems filter and rank relevant items to satisfy the needs of users and customers. These systems come in several categories, including conversational recommendations, sequential recommendations, rating predictions, and text embedding-based recommendations.
Conversational recommendation is what Bard does when you ask it for movie suggestions in conversation. For example, I asked Bard for movies to watch on date night.
Photo Courtesy of Bard
This kind of interface lets chatbots help businesses by guiding customers through a fluid, personalized experience in a shopping app, for example.
Sequential recommendations involve an LLM looking at a user’s historical activity, the sequence of items they have interacted with, to infer what to recommend next. Rating predictions sort recommendations by certain criteria using a learning-to-rank library*: you ask the LLM to rate each candidate movie one by one, sort the candidates by rating, and then recommend according to the criteria.
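The rating-prediction loop can be sketched as follows. Here `llm_rate` is a hypothetical placeholder for a real prompt to an LLM asking for a fit score; the canned scores exist only so the sketch runs standalone.

```python
# Sketch of the rating-prediction pattern: score each candidate one by
# one, then sort candidates by score before recommending the top ones.
# `llm_rate` stands in for a real LLM call that would return a 1-10
# rating for "how well does this movie fit the criteria?".

def llm_rate(movie: str, criteria: str) -> int:
    # Placeholder scores; a real system would prompt an LLM here.
    canned = {"Before Sunrise": 9, "Alien": 4, "La La Land": 8}
    return canned.get(movie, 5)

def recommend(candidates, criteria, top_k=2):
    rated = [(llm_rate(m, criteria), m) for m in candidates]
    rated.sort(reverse=True)                 # highest rating first
    return [m for _, m in rated[:top_k]]

print(recommend(["Alien", "La La Land", "Before Sunrise"], "date night"))
# → ['Before Sunrise', 'La La Land']
```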
Text embedding-based recommendations work by embedding the text associated with your items into vectors; the system then uses nearest neighbor search* techniques to identify similar items to recommend based on the user’s query. Text embeddings can also be used as a side feature, capturing semantic information* about candidate items; providing description text this way can improve the model’s accuracy.
*Learning-to-Rank Library: the application of supervised or semi-supervised machine learning in the construction of ranking models for information retrieval systems.
*Nearest Neighbor Search: the optimization problem of finding the point in a given set that is closest or most similar to a given point.
*Semantic Information: interpretation of language or data by computers by combining machine learning and natural language processing.
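Here is a minimal sketch of the embedding-plus-nearest-neighbor idea, using toy 3-D vectors in place of real model embeddings and cosine similarity as the closeness measure. All item names and vectors are made up for illustration.

```python
import numpy as np

# Each item's description is embedded into a vector (here: toy 3-D
# vectors standing in for real embeddings from a model). We recommend
# the nearest neighbors of the query embedding by cosine similarity.

items = {
    "red roses bouquet":    np.array([0.9, 0.1, 0.0]),
    "birthday balloon set": np.array([0.1, 0.9, 0.2]),
    "romantic tulip mix":   np.array([0.8, 0.2, 0.1]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, 0.0 means unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest(query_vec, k=2):
    # Rank all items by similarity to the query, keep the top k.
    ranked = sorted(items, key=lambda name: cosine(query_vec, items[name]),
                    reverse=True)
    return ranked[:k]

# Pretend this is the embedding of "anniversary flowers for my wife".
query = np.array([0.85, 0.15, 0.05])
print(nearest(query))
```

A production system would replace the toy vectors with embeddings from a real model and the linear scan with an approximate nearest-neighbor index, but the shape of the computation is the same.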
LLM use cases are already exceptionally varied, and as the models continue to develop, there is no telling how far their applications can reach across sectors and industries.
Meme & AI-Generated Picture
Job Posting
GitLab - Senior Product Designer, Switchboard - United States (Remote)
Path - Senior Software Engineer, Full Stack - Los Angeles, CA (Remote)
Tempus - Machine Learning Scientist, Generative AI - Chicago, IL (Remote/Hybrid)
Zoom Video Communications - Machine Learning Engineer - San Francisco, CA (Remote)
Promote your product/service to Digger Insights’ Community
Advertise with Digger Insights. Digger Insights’ Miners are professionals and business owners from diverse industry backgrounds who are looking for interesting and helpful tools, products, services, jobs, events, apps, and books. Email us at [email protected]
Give us feedback at [email protected]