Berri AI's LiteLLM Simplifies LLM API Integration

Berri AI Seed Funded by Y Combinator

Programming is no easy feat. It is a difficult discipline to master: it involves complicated languages and vocabulary, and it requires a person to understand logical statements and work with abstract concepts.

Some may say foundational programming concepts aren’t too grueling, and whether a discipline is difficult or not arguably depends on a person’s proclivities and interests.

It might not also be completely accurate to say that all programming is hard, just like it isn’t accurate to say all cooking is hard. It may be easy to cook an egg or make yourself a peanut butter and jelly sandwich, but more advanced cooking skills are certainly needed to cook a lobster thermidor or a beef Wellington.

Nevertheless, programming can get difficult, not just for newcomers but even for programmers who code day in and day out. More and more enterprises are recognizing this and offering solutions to the problems that make programming even more strenuous for people in the field. One of them is Berri AI.

Programming and Coding

One of the essential purposes of programming is to automate tasks and processes, achieved by writing a sequence of instructions in code, otherwise known as coding. Coding is extremely vast, with foundational basics every coder can tinker with. Though all coders share the same ultimate goal of automation, the ways they achieve it can differ.

With no hard-and-fast rules in coding, one coder may find one method perfect while another prefers a different approach. What matters most is understanding the logic behind a programming language well enough to achieve efficiency and readability in one’s code.

There are many aspects to consider when writing clean code that runs efficiently, some easier to implement than others. This is where Berri AI, a San Francisco-based startup specializing in LLM app production and LLM API calls, aims to assist coders and programmers with “LiteLLM.”

Photo Courtesy of LiteLLM

The Problem

With more and more LLM APIs coming out into the world, like Google’s PaLM, Meta’s LLaMA, and Anthropic’s Claude, each provider ships its own package with its own input/output format, so calling them involves messy, unclean code.

The inconsistency in I/O forces anyone wanting to incorporate multiple LLM APIs into an application to go through complicated measures. One complication programmers may encounter is the need for error-handling or model-fallback logic, which in this case means testing every provider one by one until the integration works, an incredibly time-consuming process.

According to Berri AI’s founders, Ishaan Jaffer and Krrish Dholakia, implementations that are specific to different providers also require an increasingly large number of for-loops (iterative statements that repeatedly execute a block of code as long as a condition is met) alongside if/else statements (conditionals that execute one block of code when a specified condition is true and, optionally, a different block when it is false).
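As an illustration of that branching problem, here is a hypothetical sketch (not Berri AI’s code; the payload shapes below are invented, not real SDK calls) of what per-provider request building tends to look like:

```python
# Hypothetical sketch of provider-specific branching (illustrative only;
# the request shapes below are made up, not real provider SDK payloads).

def build_request(provider: str, prompt: str) -> dict:
    # Each provider expects a different input shape, so the calling code
    # accumulates another if/else branch for every integration it supports.
    if provider == "openai":
        return {"model": "gpt-3.5-turbo",
                "messages": [{"role": "user", "content": prompt}]}
    elif provider == "anthropic":
        return {"model": "claude-2",
                "prompt": f"\n\nHuman: {prompt}\n\nAssistant:"}
    elif provider == "cohere":
        return {"model": "command", "message": prompt}
    else:
        raise ValueError(f"unsupported provider: {provider}")
```

Every new provider means another branch, plus its own response parsing and error types, which is exactly the duplication LiteLLM sets out to remove.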

Another issue Jaffer and Dholakia have encountered is the increased difficulty of debugging such code. To keep what was supposed to be simple and automated from becoming much too complicated, the pair offer a solution through LiteLLM’s features.

The Solution

LiteLLM is a single package that can call Azure, Anthropic, Cohere, and Replicate with consistent I/O in the OpenAI format, removing the need for multiple if/else statements. Fewer if/else statements make code easier to read and maintain.
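To picture the idea of one OpenAI-format interface over many providers, here is a minimal pure-Python sketch; the backends are stubs standing in for real provider calls, and the function name `completion` mirrors the OpenAI-style convention rather than quoting LiteLLM’s internals:

```python
# Minimal sketch of a unified, OpenAI-format interface over several
# providers. The backends here are stubs; the real library does network calls.

def _openai_stub(model: str, messages: list) -> str:
    return "openai says: " + messages[-1]["content"]

def _anthropic_stub(model: str, messages: list) -> str:
    return "anthropic says: " + messages[-1]["content"]

# One routing table instead of scattered if/else statements.
_BACKENDS = {"gpt-3.5-turbo": _openai_stub, "claude-2": _anthropic_stub}

def completion(model: str, messages: list) -> dict:
    """Call whichever backend serves `model`, then normalize the reply
    into a single OpenAI-style response shape."""
    reply = _BACKENDS[model](model, messages)
    return {"choices": [{"message": {"role": "assistant", "content": reply}}]}

msgs = [{"role": "user", "content": "Hello"}]
# The caller's code is identical no matter which provider serves the model.
out = completion("claude-2", msgs)
```

Swapping providers becomes a one-string change to `model`, with the response always read the same way.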

LiteLLM has also been tested against more than 50 cases, is claimed to be reliable every time, and is observable, producing no obscure errors. If a request does happen to fail, LiteLLM can report what happened and why.

Another feature the two founders believe can greatly assist programmers is LiteLLM’s easily understandable user interface. LiteLLM provides an updated list of available models users can call from, a key added to LiteLLM’s .env file, and the ability to map intricate (or otherwise confusing) model names to simpler user-facing aliases.
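The alias idea can be as small as a single lookup table. This is a hypothetical sketch (the alias names are invented, not LiteLLM’s configuration format):

```python
# Hypothetical alias table: friendly user-facing names on the left,
# provider-specific model identifiers on the right.
MODEL_ALIASES = {
    "chat-fast": "gpt-3.5-turbo",
    "chat-smart": "claude-2",
}

def resolve_model(name: str) -> str:
    # Fall through to the raw name so full identifiers still work too.
    return MODEL_ALIASES.get(name, name)
```

Callers then ask for `"chat-fast"` and never need to know, or update, the provider-specific identifier behind it.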

Photo Courtesy of LiteLLM

With LiteLLM integrated into every LLM API, zero configuration is required. The hassle of translating inputs, standardizing exceptions, and guaranteeing consistent outputs for calls disappears, and LiteLLM’s single environment variable allows users to automatically add 100+ LLM API integrations without modifying code.
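From the user’s side, environment-variable configuration typically looks like the sketch below; the variable name `PROVIDER_API_KEY` and the demo value are illustrative, not LiteLLM’s actual setting:

```python
import os

# Illustrative only: read a provider credential from the environment
# instead of hard-coding it, so integrations can be switched on per
# deployment without touching application code.
os.environ.setdefault("PROVIDER_API_KEY", "sk-demo-key")  # demo value
api_key = os.environ["PROVIDER_API_KEY"]
```

Keeping credentials in the environment is also what lets one codebase target many providers: the code never changes, only the deployment’s variables do.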

Enterprises that assist with complications like these already exist in the industry, examples being LangChain, which provides developers with tools to build LLM apps, and Gorilla, which connects LLMs with massive API collections so they can make appropriate API calls, similar to LiteLLM.

Photo Courtesy of Gorilla

Berri AI believes that LiteLLM can assist developers greatly by simplifying the process of calling LLM providers without any clunky or bloated techniques and processes.

Meme & AI-Generated Picture

Job Posting

  • Axio - Cyber Risk Engineer - New York City, NY (Remote/Hybrid)

  • Optum - Senior Advisory Services Consultant - Nashville, TN (Remote)

  • PagerDuty - Senior Product Designer - Atlanta, GA (Remote)

  • Yieldmo - Creative Director, Tech and Ad Experiences - New York City, NY (Remote)

Promote your product/service to Digger Insights’ Community

Advertise with Digger Insights. Digger Insights’ Miners are professionals and business owners with diverse industry backgrounds who are looking for interesting and helpful tools, products, services, jobs, events, apps, and books. Email us at [email protected]

Your feedback would be greatly appreciated; send it to [email protected]
