Gradient
Gradient allows you to fine-tune and get completions on LLMs with a simple web API.
This notebook goes over how to use LangChain with Gradient.
Imports
import os
import requests
from langchain.llms import GradientLLM
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
Set the Environment API Key
Make sure to get your API key from Gradient AI. You are given $10 in free credits to test and fine-tune different models.
from getpass import getpass
if not os.environ.get("GRADIENT_ACCESS_TOKEN",None):
# Access token under https://auth.gradient.ai/select-workspace
os.environ["GRADIENT_ACCESS_TOKEN"] = getpass("gradient.ai access token:")
if not os.environ.get("GRADIENT_WORKSPACE_ID",None):
# `ID` listed in `$ gradient workspace list`
# also displayed after login at at https://auth.gradient.ai/select-workspace
os.environ["GRADIENT_WORKSPACE_ID"] = getpass("gradient.ai workspace id:")
Optional: Validate your environment variables GRADIENT_ACCESS_TOKEN and GRADIENT_WORKSPACE_ID by listing the currently deployed models.
import requests

resp = requests.get(
    "https://api.gradient.ai/api/models",
    headers={
        "authorization": f"Bearer {os.environ['GRADIENT_ACCESS_TOKEN']}",
        "x-gradient-workspace-id": f"{os.environ['GRADIENT_WORKSPACE_ID']}",
    },
)
if resp.status_code == 200:
    models = resp.json()
    print("Credentials valid.\nPossible values for `model_id` are:\n", models)
else:
    print("Error when listing models. Are your credentials valid?", resp.text)
Credentials valid.
Possible values for `model_id` are:
{'models': [{'id': '99148c6d-c2a0-4fbe-a4a7-e7c05bdb8a09_base_ml_model', 'name': 'bloom-560m', 'slug': 'bloom-560m', 'type': 'baseModel'}, {'id': 'f0b97d96-51a8-4040-8b22-7940ee1fa24e_base_ml_model', 'name': 'llama2-7b-chat', 'slug': 'llama2-7b-chat', 'type': 'baseModel'}, {'id': 'cc2dafce-9e6e-4a23-a918-cad6ba89e42e_base_ml_model', 'name': 'nous-hermes2', 'slug': 'nous-hermes2', 'type': 'baseModel'}, {'baseModelId': 'f0b97d96-51a8-4040-8b22-7940ee1fa24e_base_ml_model', 'id': 'bb7b9865-0ce3-41a8-8e2b-5cbcbe1262eb_model_adapter', 'name': 'optical-transmitting-sensor', 'type': 'modelAdapter'}]}
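To pick a model_id from this response, you can pull out just the ids; this small snippet relies only on the response shape shown above.
# List only the ids from the `/api/models` response above.
print([model["id"] for model in models["models"]])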
Create the Gradient instance
You can specify different parameters such as the model name, max tokens generated, temperature, etc.
llm = GradientLLM(
    # `ID` listed in `$ gradient model list`
    model_id="99148c6d-c2a0-4fbe-a4a7-e7c05bdb8a09_base_ml_model",
    # optional: set new credentials, they default to environment variables
    # gradient_workspace_id=os.environ["GRADIENT_WORKSPACE_ID"],
    # gradient_access_token=os.environ["GRADIENT_ACCESS_TOKEN"],
)
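To control generation, a model_kwargs dict can be passed along; the snippet below is a minimal sketch assuming GradientLLM forwards keys such as max_generated_token_count and temperature to the completion endpoint, so verify the exact key names against the current Gradient API reference.
# A sketch, assuming `model_kwargs` is forwarded with each completion request.
llm = GradientLLM(
    model_id="99148c6d-c2a0-4fbe-a4a7-e7c05bdb8a09_base_ml_model",
    model_kwargs={
        "max_generated_token_count": 128,  # assumed cap on generated tokens
        "temperature": 0.7,  # assumed sampling temperature
    },
)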
Create a Prompt Template
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
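To inspect the exact string that will be sent to the model, render the template with a sample question; PromptTemplate.format is standard LangChain API.
# Render the template to see the final prompt.
print(prompt.format(question="What NFL team won the Super Bowl in 1994?"))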
Initiate the LLMChain
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in 1994?"
llm_chain.run(question=question)
' The first team to win the Super Bowl was the New England Patriots. The Patriots won the'
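Note that the completion from this small base model is factually wrong (the Dallas Cowboys won the Super Bowl played in 1994). Equivalently to the chain call, you can invoke the LLM directly with a formatted prompt; calling an LLM instance on a string is standard LangChain usage.
# Call the LLM directly with the fully formatted prompt, without the chain.
llm(prompt.format(question=question))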