LangChain 101 - Hello LangChain (Java)
GETTING STARTED WITH LARGE LANGUAGE MODELS USING JAVA
The world of large language models (LLMs) is brimming with potential: they can generate text in many formats, translate languages, and write different kinds of creative content. But how do you harness this power to build practical applications? Enter LangChain, a framework that acts as a bridge between LLMs and developers, simplifying the process of interacting with these powerful models.
What is LangChain?
Imagine a world where you can create applications powered by LLMs without getting bogged down in complex code or wrestling with individual LLM APIs. That's the magic of LangChain. It provides a user-friendly library that streamlines interacting with various LLMs. Here's what makes LangChain so valuable:
Effortless LLM Integration: LangChain abstracts away the complexities of interacting with different LLMs through various APIs. You can focus on crafting effective prompts and building your application logic, leaving the LLM interaction details to LangChain.
Prompt Engineering Made Easy: Crafting effective prompts is crucial for unlocking the true potential of LLMs. LangChain provides tools for building and managing prompts, ensuring you get the most out of your LLM interactions. By providing the right prompts and context, you can guide the LLM towards the desired outcome for your application (a short sketch follows below).
Chaining Power: LangChain goes beyond basic LLM calls. It allows you to chain multiple LLM interactions together, enabling the creation of complex multi-step workflows. This opens doors to building more intricate applications that leverage the strengths of LLMs in various stages of the process.
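To make the prompt-handling point above concrete, here is a minimal sketch using LangChain4j's PromptTemplate class (the template text and the class name PromptTemplateSketch are just illustrations):
import java.util.Map;

import dev.langchain4j.model.input.Prompt;
import dev.langchain4j.model.input.PromptTemplate;

public class PromptTemplateSketch {

    public static void main(String[] args) {
        // A reusable template with a named placeholder.
        PromptTemplate template = PromptTemplate.from(
                "Summarize the following text in one sentence:\n\n{{text}}");

        // Fill in the placeholder to produce a concrete prompt.
        Prompt prompt = template.apply(
                Map.of("text", "LangChain4j is a Java library for building LLM-powered applications."));

        // prompt.text() is the string you would send to a chat model (shown later in this article).
        System.out.println(prompt.text());
    }
}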
Prerequisites
This tutorial uses Ollama to run LLMs locally. You can download and install Ollama from here. If you have access to hosted LLMs (OpenAI, Gemini, etc.), you can use them as well.
While LangChain is primarily available in Python and JavaScript, there is a nice community-maintained Java library called LangChain4j that borrows the good parts from frameworks like LangChain, LlamaIndex, and Haystack. It works with Java 8 or higher and supports Spring Boot 2 and 3.
We will be using Java as the programming language and LangChain4j as the framework for this tutorial.
Installation and Setup
Ollama
Download and run the open-source llama3 model using Ollama.
ollama run llama3
This will pull the llama3 model from Ollama's repository and run it locally. Alternatively, you can pull and use any other model of your choice from the Ollama model repository.
LangChain4j
Install the required LangChain4j dependencies.
For Maven, add the following in your pom.xml file.
<dependencies>
    <dependency>
        <groupId>dev.langchain4j</groupId>
        <artifactId>langchain4j</artifactId>
        <version>0.31.0</version>
    </dependency>
    <dependency>
        <groupId>dev.langchain4j</groupId>
        <artifactId>langchain4j-ollama</artifactId>
        <version>0.31.0</version>
    </dependency>
</dependencies>
For Gradle, add the following in your build.gradle file.
implementation 'dev.langchain4j:langchain4j:0.31.0'
implementation 'dev.langchain4j:langchain4j-ollama:0.31.0'
How to Chat with LLMs
Chat Models are a key part of LangChain. These models take chat messages as inputs and return chat messages as outputs, instead of just plain text. LangChain works with many model providers, such as OpenAI, Cohere, Ollama, and Hugging Face, and offers a simple interface for interacting with all of them.
ChatLanguageModel is the low-level API in LangChain4j, offering the most power and flexibility. Let's start by creating a ChatLanguageModel instance that can connect to Ollama and chat with our locally running llama3 model.
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.ollama.OllamaChatModel;

// Connect to the Ollama server running locally on its default port.
ChatLanguageModel chatLanguageModel = OllamaChatModel.builder()
        .baseUrl("http://localhost:11434")
        .modelName("llama3")
        .temperature(0.1)
        .build();
Note: Read more about controlling various model parameters like temperature, top-k, etc. here.
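For illustration, here is a sketch of the same builder with a few more of these parameters set. The topK, topP, and timeout builder methods are assumed to be available on OllamaChatModel in version 0.31.0; check the builder's documentation for your version.
import java.time.Duration;

import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.ollama.OllamaChatModel;

// Assumed builder options; tune the values for your use case.
ChatLanguageModel tunedModel = OllamaChatModel.builder()
        .baseUrl("http://localhost:11434")
        .modelName("llama3")
        .temperature(0.1)                // lower values give more deterministic output
        .topK(40)                        // sample only from the 40 most likely tokens
        .topP(0.9)                       // nucleus sampling threshold
        .timeout(Duration.ofSeconds(60)) // fail the request if the model takes too long
        .build();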
Now, we need just one line of code to interact with llama3.
String answer = chatLanguageModel.generate("Hello");
System.out.println(answer);
The above code will issue a Chat Completion request to the locally running llama3 model, and the LLM will respond with something like:
Hi there! How can I assist you today?
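Because chat models work with messages rather than plain strings, you can also pass an explicit conversation history. Here is a minimal sketch that reuses the chatLanguageModel built above, sends a user message, keeps the model's reply, and asks a follow-up question (the actual replies will vary):
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.output.Response;

// First turn: send a user message and capture the model's reply.
UserMessage firstQuestion = UserMessage.from("In one sentence, what is LangChain4j?");
Response<AiMessage> firstReply = chatLanguageModel.generate(firstQuestion);
System.out.println(firstReply.content().text());

// Second turn: include the earlier messages so the model has the conversation context.
UserMessage followUp = UserMessage.from("Can you give a one-line Java example?");
Response<AiMessage> secondReply =
        chatLanguageModel.generate(firstQuestion, firstReply.content(), followUp);
System.out.println(secondReply.content().text());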
Conclusion
That's it! You just created a Java application that uses AI. LangChain has abstracted away the complexity of interacting with LLMs, and we can start building AI-based applications in just a few lines of code. In the next few articles, we will learn how to develop more applications using LangChain that leverage advanced concepts like memory, chaining, retrieval, and agents.
As usual, the code for this article can be found on my GitHub.
Thank you for staying with me so far. Hope you liked the article. You can connect with me on LinkedIn where I regularly discuss technology and life. Also, take a look at some of my other articles and my YouTube channel. You can also book a 1:1 or a Mock Interview with me on Topmate. Happy reading. 🙂