# Log10
This page covers how to use Log10 within LangChain.
## What is Log10?
Log10 is an open-source, proxiless LLM data management and application development platform that lets you log, debug, and tag your LangChain calls.
## Quick start
- Create your free account at log10.io
- Add your `LOG10_TOKEN` and `LOG10_ORG_ID` from the Settings and Organization tabs, respectively, as environment variables.
- Also add `LOG10_URL=https://log10.io` and your usual LLM API key (e.g. `OPENAI_API_KEY` or `ANTHROPIC_API_KEY`) to your environment.
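The setup above amounts to exporting a few environment variables before running your code. A minimal sketch (the token and ID values below are placeholders — substitute your own from the Log10 dashboard):

```shell
# Replace the placeholder values with your own credentials
export LOG10_URL="https://log10.io"
export LOG10_TOKEN="<your-log10-token>"    # from the Settings tab
export LOG10_ORG_ID="<your-org-id>"        # from the Organization tab
export OPENAI_API_KEY="<your-openai-key>"  # or ANTHROPIC_API_KEY, etc.
```

You can also set these in a `.env` file if your project loads one.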
## How to enable Log10 data management for LangChain
Integrating with Log10 is a simple one-line `log10_callback` integration, as shown below:
```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

from log10.langchain import Log10Callback
from log10.llm import Log10Config

log10_callback = Log10Callback(log10_config=Log10Config())

messages = [
    HumanMessage(content="You are a ping pong machine"),
    HumanMessage(content="Ping?"),
]

# Pass the callback to the model; every call is then logged to Log10
llm = ChatOpenAI(model_name="gpt-3.5-turbo", callbacks=[log10_callback])
```
More details and screenshots, including instructions for self-hosting logs, are available in the Log10 documentation.
## How to use tags with Log10
```python
from langchain.llms import OpenAI
from langchain.chat_models import ChatAnthropic
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

from log10.langchain import Log10Callback
from log10.llm import Log10Config

log10_callback = Log10Callback(log10_config=Log10Config())

messages = [
    HumanMessage(content="You are a ping pong machine"),
    HumanMessage(content="Ping?"),
]

# Tags set on the model apply to all of its calls;
# per-call tags can be layered on top
llm = ChatOpenAI(model_name="gpt-3.5-turbo", callbacks=[log10_callback], temperature=0.5, tags=["test"])
completion = llm.predict_messages(messages, tags=["foobar"])
print(completion)

llm = ChatAnthropic(model="claude-2", callbacks=[log10_callback], temperature=0.7, tags=["baz"])
completion = llm.predict_messages(messages)
print(completion)

llm = OpenAI(model_name="text-davinci-003", callbacks=[log10_callback], temperature=0.5)
completion = llm.predict("You are a ping pong machine.\nPing?\n")
print(completion)
```
You can also intermix direct OpenAI calls and LangChain LLM calls:
```python
import os

from log10.load import log10, log10_session
import openai
from langchain.llms import OpenAI

# Patch the openai module so that direct calls are also logged to Log10
log10(openai)

# Every call made inside the session is logged with these tags
with log10_session(tags=["foo", "bar"]):
    # Log a direct OpenAI call
    response = openai.Completion.create(
        model="text-ada-001",
        prompt="Where is the Eiffel Tower?",
        temperature=0,
        max_tokens=1024,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0,
    )
    print(response)

    # Log a call via LangChain
    llm = OpenAI(model_name="text-ada-001", temperature=0.5)
    response = llm.predict("You are a ping pong machine.\nPing?\n")
    print(response)
```