LangChain
About
LangChain is a framework for developing applications powered by language models. It is written in Python and has a strong focus on composability. As a language model integration framework, LangChain's use cases largely overlap with those of language models in general, including document analysis and summarization, chatbots, and code analysis.
The LangChain adapter for CrateDB lets you use CrateDB as a vector store database, load documents using LangChain's DocumentLoader, and use LangChain's conversational memory subsystem.
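As an illustration of what the adapter enables, the following sketch stores a few documents in CrateDB and runs a similarity search. The CrateDBVectorSearch import path and the connection_string parameter are assumptions based on the adapter's PGVector-style API, and the example additionally assumes the langchain-openai package plus an OpenAI API key for computing embeddings; consult the adapter documentation for the exact names.
# Sketch: CrateDB as a LangChain vector store (class and parameter names are assumptions).
from langchain_community.vectorstores import CrateDBVectorSearch
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

CONNECTION_STRING = "crate://crate@localhost:4200/"  # adjust to your CrateDB instance

documents = [
    Document(page_content="CrateDB is a distributed SQL database."),
    Document(page_content="LangChain is a framework for LLM-powered applications."),
]

# Compute embeddings for the documents and store them in CrateDB.
store = CrateDBVectorSearch.from_documents(
    documents=documents,
    embedding=OpenAIEmbeddings(),
    connection_string=CONNECTION_STRING,
)

# Query the vector store for the most similar document.
for doc in store.similarity_search("What is CrateDB?", k=1):
    print(doc.page_content)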
RAG
LangChain supports retrieval-augmented generation (RAG), a technique for augmenting LLM knowledge with additional, often private or real-time, data. RAG combines retrieval with prompt engineering, the process of structuring text so that a generative AI model can interpret and understand it. A prompt is natural language text describing the task that an AI should perform.
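To make that concrete, here is a minimal sketch of the RAG loop: retrieve context relevant to a question, fill it into a prompt template, and pass the result to a chat model. It reuses the vector store from the sketch above; the model name and prompt wording are illustrative, not prescribed by LangChain.
# Minimal RAG sketch: retrieve context, build a prompt, call the model.
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below.\n\n"
    "Context:\n{context}\n\n"
    "Question: {question}"
)
llm = ChatOpenAI(model="gpt-4o-mini")

question = "What is CrateDB?"
# Retrieve relevant documents from the vector store and join them into context text.
context = "\n".join(doc.page_content for doc in store.as_retriever().invoke(question))

# The filled-in prompt is the natural language text describing the task for the model.
answer = (prompt | llm).invoke({"context": context, "question": question})
print(answer.content)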
Use case examples
LangChain has a number of components designed to help build Q&A applications, and RAG applications more generally. These are typical applications you can build using LLMs.
Install
pip install \
'langchain-community @ git+https://github.com/crate-workbench/langchain.git@cratedb#subdirectory=libs/community' \
'sqlalchemy-cratedb>=0.40.0'
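After installing, a quick way to verify the setup is a plain SQLAlchemy round trip against CrateDB; the connection URL below assumes a local instance with default credentials.
# Verify that SQLAlchemy can reach CrateDB (assumes a local instance, default credentials).
import sqlalchemy as sa

engine = sa.create_engine("crate://crate@localhost:4200/")
with engine.connect() as connection:
    row = connection.execute(sa.text("SELECT name, version['number'] FROM sys.nodes LIMIT 1")).first()
    print(f"Connected to CrateDB node {row[0]}, version {row[1]}")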
Learn
Tutorials and Notebooks about using LangChain together with CrateDB.
Tutorials
Tutorial: Set up LangChain with CrateDB
LangChain is a framework for developing applications powered by language models. For this tutorial, we are going to use it to interact with CrateDB using only natural language without writing any SQL.
To achieve that, you will need a running CrateDB instance, an OpenAI API key, and some Python knowledge.
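As a rough sketch of what the tutorial builds, rather than its exact code, the idea is to wrap the CrateDB connection in LangChain's SQLDatabase utility and let a chain translate a natural-language question into SQL. The example assumes the langchain and langchain-openai packages are installed alongside the adapter; the table name and question are placeholders.
# Rough sketch: ask CrateDB a question in natural language (placeholder table and question).
from langchain.chains import create_sql_query_chain
from langchain_community.utilities import SQLDatabase
from langchain_openai import ChatOpenAI

db = SQLDatabase.from_uri("crate://crate@localhost:4200/")  # uses sqlalchemy-cratedb
llm = ChatOpenAI(model="gpt-4o-mini")

chain = create_sql_query_chain(llm, db)
sql = chain.invoke({"question": "How many records are in the table 'my_table'?"})
print(sql)          # the SQL statement generated by the model
print(db.run(sql))  # run it against CrateDB and print the result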
Fundamentals
Vector Store
LLM
RAG
Webinars
LangChain Cookbook
The LangChain Cookbook is based on the LangChain Conceptual Documentation.
Its goal is to provide an introductory understanding of the components, use cases, and concepts of LangChain via ELI5 examples and code snippets.
How to Use Private Data in Generative AI
In this video, recorded at FOSDEM 2024, we explain how to leverage private data in generative AI by walking through an end-to-end Retrieval-Augmented Generation (RAG) solution.