Question Answering
Use case
Suppose you have some text documents (PDF, blog, Notion pages, etc.) and want to ask questions related to the contents of those documents. LLMs, given their proficiency in understanding text, are a great tool for this.
In this walkthrough we'll go over how to build a question-answering application over documents using LLMs. Two closely related use cases, which we cover elsewhere, are:
- QA over structured data (e.g., SQL)
- QA over code (e.g., Python)
Overview
The pipeline for converting raw unstructured data into a QA chain looks like this:
- Loading: First we need to load our data. Unstructured data can be loaded from many sources. Use the LangChain integration hub to browse the full set of loaders. Each loader returns data as a LangChain Document.
- Splitting: Text splitters break Documents into splits of a specified size.
- Storage: Storage (e.g., often a vectorstore) will house and often embed the splits.
- Retrieval: The app retrieves splits from storage (e.g., often with similar embeddings to the input question).
- Generation: An LLM produces an answer using a prompt that includes the question and the retrieved data.
- Conversation (Extension): Hold a multi-turn conversation by adding Memory to your QA chain.
Quickstart
To give you a sneak preview, the above pipeline can all be wrapped in a single object: VectorstoreIndexCreator. Suppose we want a QA app over this blog post. We can create this in a few lines of code. First set environment variables and install packages:
pip install openai chromadb
# Set env var OPENAI_API_KEY or load from a .env file
# import dotenv
# dotenv.load_dotenv()
from langchain.document_loaders import WebBaseLoader
from langchain.indexes import VectorstoreIndexCreator
loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
index = VectorstoreIndexCreator().from_loaders([loader])
index.query("What is Task Decomposition?")
' Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. It can be done using LLM with simple prompting, task-specific instructions, or with human inputs. Tree of Thoughts (Yao et al. 2023) is an extension of Chain of Thought (Wei et al. 2022) which explores multiple reasoning possibilities at each step.'
Ok, but what's going on under the hood, and how could we customize this for our specific use case? For that, let's take a look at how we can construct this pipeline piece by piece.
Step 1. Load
Specify a DocumentLoader to load in your unstructured data as Documents. A Document is a piece of text (the page_content) and associated metadata.
from langchain.document_loaders import WebBaseLoader
loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
data = loader.load()
Go deeper
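The use case above also mentions PDFs and Notion pages. As a minimal sketch (the file path is hypothetical, and PyPDFLoader assumes the pypdf package is installed), any other loader can be swapped in the same way:
from langchain.document_loaders import PyPDFLoader
# Hypothetical local file; PyPDFLoader returns one Document per page,
# with the page number recorded in each Document's metadata.
pdf_loader = PyPDFLoader("example_data/my_document.pdf")
pdf_data = pdf_loader.load()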
Step 2. Split
Split the Document into chunks for embedding and vector storage.
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
all_splits = text_splitter.split_documents(data)
Go deeper
DocumentSplitters are just one type of the more generic DocumentTransformers, which can all be useful in this preprocessing step.
- See further documentation on transformers here.
Context-aware splitters keep the location ("context") of each split in the original Document.
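As a minimal sketch of a context-aware splitter (the markdown string here is made up for illustration), MarkdownHeaderTextSplitter records the headers each split appeared under:
from langchain.text_splitter import MarkdownHeaderTextSplitter
headers_to_split_on = [("#", "Header 1"), ("##", "Header 2")]
markdown_splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
# Each resulting split keeps the headers it appeared under in its metadata.
md_splits = markdown_splitter.split_text("# Title\n\n## Section\n\nSome body text.")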
Step 3. Store
To be able to look up our document splits later, we first need to store them. The most common way to do this is to embed the contents of each document, then store the embedding and document in a vector store, with the embedding being used to index the document.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
vectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())
Go deeper
- Browse the > 40 vectorstore integrations here.
- See further documentation on vectorstores here.
- Browse the > 30 text embedding integrations here.
- See further documentation on embedding models here.
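As a minimal sketch, any other vector store integration can be dropped in the same way; for example, FAISS (this assumes the faiss-cpu package is installed):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
# Same splits and embeddings as above, but stored in an in-memory FAISS index.
faiss_store = FAISS.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())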
Step 4. Retrieve
Retrieve relevant splits for any question using similarity search.
question = "What are the approaches to Task Decomposition?"
docs = vectorstore.similarity_search(question)
len(docs)
4
Go deeper
Vectorstores are commonly used for retrieval, but they are not the only option. For example, SVMs (see thread here) can also be used.
LangChain has many retrievers including, but not limited to, vectorstores. All retrievers implement a common method get_relevant_documents() (and its asynchronous variant aget_relevant_documents()).
from langchain.retrievers import SVMRetriever
svm_retriever = SVMRetriever.from_documents(all_splits, OpenAIEmbeddings())
docs_svm = svm_retriever.get_relevant_documents(question)
len(docs_svm)
4
Some common ways to improve on vector similarity search include:
- MultiQueryRetriever generates variants of the input question to improve retrieval (see the example below).
- Max marginal relevance selects for relevance and diversity among the retrieved documents (sketched after the MultiQueryRetriever example).
- Documents can be filtered during retrieval using metadata filters (also sketched below).
import logging
from langchain.chat_models import ChatOpenAI
from langchain.retrievers.multi_query import MultiQueryRetriever
logging.basicConfig()
logging.getLogger('langchain.retrievers.multi_query').setLevel(logging.INFO)
retriever_from_llm = MultiQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(), llm=ChatOpenAI(temperature=0)
)
unique_docs = retriever_from_llm.get_relevant_documents(query=question)
len(unique_docs)
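The other two options listed above can be sketched directly against the vector store. A minimal sketch follows; the filter syntax shown is Chroma's and varies by vector store:
# Max marginal relevance trades off relevance against diversity among the results.
mmr_docs = vectorstore.max_marginal_relevance_search(question, k=4, fetch_k=20)

# Metadata filtering restricts the search to documents whose metadata matches.
filtered_docs = vectorstore.similarity_search(
    question,
    filter={"source": "https://lilianweng.github.io/posts/2023-06-23-agent/"},
)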
Step 5. Generate
Distill the retrieved documents into an answer using an LLM/chat model (e.g., gpt-3.5-turbo) with the RetrievalQA chain.
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
qa_chain = RetrievalQA.from_chain_type(llm, retriever=vectorstore.as_retriever())
qa_chain({"query": question})
{'query': 'What are the approaches to Task Decomposition?',
'result': 'The approaches to task decomposition include:\n\n1. Simple prompting: This approach involves using simple prompts or questions to guide the agent in breaking down a task into smaller subgoals. For example, the agent can be prompted with "Steps for XYZ" or "What are the subgoals for achieving XYZ?" to facilitate task decomposition.\n\n2. Task-specific instructions: In this approach, task-specific instructions are provided to the agent to guide the decomposition process. For example, if the task is to write a novel, the agent can be instructed to "Write a story outline" as a step in the task decomposition.\n\n3. Human inputs: This approach involves incorporating human inputs in the task decomposition process. Humans can provide guidance, feedback, and assistance to the agent in breaking down complex tasks into manageable subgoals.\n\nThese approaches aim to enable efficient handling of complex tasks by breaking them down into smaller, more manageable subgoals.'}
Note that you can pass in an LLM or a ChatModel (like we did here) to the RetrievalQA chain.
Go deeper
Choosing LLMs
- Browse the > 55 LLM and chat model integrations here.
- See further documentation on LLMs and chat models here.
- See a guide on local LLMs here; a minimal sketch of swapping one in follows below.
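As a minimal sketch of swapping in a local model (the model path is hypothetical, and LlamaCpp assumes the llama-cpp-python package is installed), the rest of the chain stays the same:
from langchain.llms import LlamaCpp
from langchain.chains import RetrievalQA
# Hypothetical path to a local llama.cpp-compatible model file.
local_llm = LlamaCpp(model_path="/path/to/local-model.bin", n_ctx=2048)
qa_chain_local = RetrievalQA.from_chain_type(local_llm, retriever=vectorstore.as_retriever())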
Customizing the prompt
The prompt in the RetrievalQA chain can easily be customized.
from langchain.chains import RetrievalQA
from langchain.prompts import PromptTemplate
template = """Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Use three sentences maximum and keep the answer as concise as possible.
Always say "thanks for asking!" at the end of the answer.
{context}
Question: {question}
Helpful Answer:"""
QA_CHAIN_PROMPT = PromptTemplate.from_template(template)
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
qa_chain = RetrievalQA.from_chain_type(
llm,
retriever=vectorstore.as_retriever(),
chain_type_kwargs={"prompt": QA_CHAIN_PROMPT}
)
result = qa_chain({"query": question})
result["result"]
'The approaches to Task Decomposition are (1) using simple prompting by LLM, (2) using task-specific instructions, and (3) incorporating human inputs. Thanks for asking!'
We can also store and fetch prompts from the LangChain prompt hub. This works with your LangSmith API key. For example, here is a common prompt for RAG, which we can load as follows:
pip install langchainhub
# RAG prompt
from langchain import hub
QA_CHAIN_PROMPT_HUB = hub.pull("rlm/rag-prompt")
qa_chain = RetrievalQA.from_chain_type(
llm,
retriever=vectorstore.as_retriever(),
chain_type_kwargs={"prompt": QA_CHAIN_PROMPT_HUB}
)
result = qa_chain({"query": question})
result["result"]
'The approaches to task decomposition include using LLM with simple prompting, task-specific instructions, and human inputs.'
Return source documents
The full set of retrieved documents used for answer distillation can be returned using return_source_documents=True.
from langchain.chains import RetrievalQA
qa_chain = RetrievalQA.from_chain_type(
    llm, retriever=vectorstore.as_retriever(), return_source_documents=True
)
result = qa_chain({"query": question})
print(len(result['source_documents']))
result['source_documents'][0]
4
Document(page_content='Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1.", "What are the subgoals for achieving XYZ?", (2) by using task-specific instructions; e.g. "Write a story outline." for writing a novel, or (3) with human inputs.', metadata={'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agentβs brain, complemented by several key components:', 'language': 'en', 'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': "LLM Powered Autonomous Agents | Lil'Log"})
Return citations
Answer citations can be returned using RetrievalQAWithSourcesChain.
from langchain.chains import RetrievalQAWithSourcesChain
qa_chain = RetrievalQAWithSourcesChain.from_chain_type(llm, retriever=vectorstore.as_retriever())
result = qa_chain({"question": question})
result
{'question': 'What are the approaches to Task Decomposition?',
'answer': 'The approaches to Task Decomposition include:\n1. Using LLM with simple prompting, such as providing steps or subgoals for achieving a task.\n2. Using task-specific instructions, such as providing a specific instruction like "Write a story outline" for writing a novel.\n3. Using human inputs to decompose the task.\nAnother approach is the Tree of Thoughts, which extends the Chain of Thought (CoT) technique by exploring multiple reasoning possibilities at each step and generating multiple thoughts per step, creating a tree structure. The search process can be BFS or DFS, and each state can be evaluated by a classifier or majority vote.\nSources: https://lilianweng.github.io/posts/2023-06-23-agent/',
'sources': ''}
Customizing retrieved document processing
Retrieved documents can be fed to an LLM for answer distillation in a few different ways.
The stuff, refine, map-reduce, and map-rerank chains for passing documents to an LLM prompt are well summarized here. stuff is commonly used because it simply "stuffs" all retrieved documents into the prompt.
The load_qa_chain is an easy way to pass documents to an LLM using these various approaches (e.g., see the chain_type argument).
from langchain.chains.question_answering import load_qa_chain
chain = load_qa_chain(llm, chain_type="stuff")
chain({"input_documents": unique_docs, "question": question}, return_only_outputs=True)
{'output_text': 'The approaches to task decomposition mentioned in the provided context are:\n\n1. Chain of thought (CoT): This approach involves instructing the language model to "think step by step" and decompose complex tasks into smaller and simpler steps. It enhances model performance on complex tasks by utilizing more test-time computation.\n\n2. Tree of Thoughts: This approach extends CoT by exploring multiple reasoning possibilities at each step. It decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS or DFS, and each state is evaluated by a classifier or majority vote.\n\n3. LLM with simple prompting: This approach involves using a language model with simple prompts like "Steps for XYZ" or "What are the subgoals for achieving XYZ?" to perform task decomposition.\n\n4. Task-specific instructions: This approach involves providing task-specific instructions to guide the language model in decomposing the task. For example, providing the instruction "Write a story outline" for the task of writing a novel.\n\n5. Human inputs: Task decomposition can also be done with human inputs, where humans provide guidance and input to break down the task into smaller subtasks.'}
We can also pass the chain_type to RetrievalQA.
qa_chain = RetrievalQA.from_chain_type(
    llm, retriever=vectorstore.as_retriever(), chain_type="stuff"
)
result = qa_chain({"query": question})
In summary, the user can choose the desired level of abstraction for QA: the all-in-one VectorstoreIndexCreator, the RetrievalQA chain, or the lower-level load_qa_chain.
Step 6. Chat
See our chat use case for more detail on this!
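As a sneak preview, here is a minimal sketch of the conversational extension using ConversationalRetrievalChain with a memory object, which condenses follow-up questions using the stored chat history:
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
# Memory stores prior turns so follow-up questions can reference earlier answers.
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
chat_chain = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=vectorstore.as_retriever(),
    memory=memory,
)
chat_chain({"question": "What is Task Decomposition?"})["answer"]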