Using Hugging Face Models
Before using Hugging Face models, you first need to set the `HUGGINGFACEHUB_API_TOKEN` environment variable:
```python
import os

# Paste your Hugging Face access token here (left empty in the original).
os.environ['HUGGINGFACEHUB_API_TOKEN'] = ''
```
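If you would rather not hard-code the token, a minimal alternative sketch (assuming an interactive session) reads it with the standard-library `getpass`; the token itself comes from your Hugging Face account settings page:

```python
import os
from getpass import getpass

# Prompt for the token at runtime instead of embedding it in the script.
os.environ['HUGGINGFACEHUB_API_TOKEN'] = getpass('Hugging Face API token: ')
```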
Using an online Hugging Face model
```python
from langchain import PromptTemplate, HuggingFaceHub, LLMChain

# A simple chain-of-thought style prompt with one input variable.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

# Call the flan-t5-xl model hosted on the Hugging Face Hub.
llm = HuggingFaceHub(repo_id="google/flan-t5-xl", model_kwargs={"temperature": 0, "max_length": 64})

llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
print(llm_chain.run(question))
```
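The chain can be reused for other inputs as well. As a quick sketch (the questions below are hypothetical examples), `LLMChain.apply` runs the same chain over a batch of inputs:

```python
# Run the chain over several questions at once; each dict supplies the
# prompt's input variables, and a list of outputs comes back.
questions = [
    {"question": "Which city hosted the 2012 Summer Olympics?"},
    {"question": "How many planets are in the solar system?"},
]
print(llm_chain.apply(questions))
```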
Pulling a Hugging Face model down to run locally
```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import HuggingFacePipeline
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

# Download the model and tokenizer from the Hub (cached locally after the first run).
model_id = 'google/flan-t5-large'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# flan-t5 is an encoder-decoder model, so it uses the text2text-generation task.
pipe = pipeline(
    "text2text-generation",
    model=model,
    tokenizer=tokenizer,
    max_length=100
)

# Wrap the transformers pipeline so LangChain can use it as an LLM.
local_llm = HuggingFacePipeline(pipeline=pipe)
print(local_llm('What is the capital of France? '))
```
The local pipeline plugs into an LLMChain exactly like the hosted model:

```python
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

llm_chain = LLMChain(prompt=prompt, llm=local_llm)

question = "What is the capital of England?"
print(llm_chain.run(question))
```
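Once downloaded, the weights can also be saved to a directory of your choosing and loaded back later without touching the Hub. A minimal sketch (the `./flan-t5-large-local` path is just an example):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Save the downloaded weights and tokenizer to a local directory (example path).
save_dir = './flan-t5-large-local'
model.save_pretrained(save_dir)
tokenizer.save_pretrained(save_dir)

# Later, load them back from disk instead of from the Hub.
tokenizer = AutoTokenizer.from_pretrained(save_dir)
model = AutoModelForSeq2SeqLM.from_pretrained(save_dir)
```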
Benefits of pulling a model down to run locally:
- You can train (fine-tune) the model
- You can use a local GPU (see the sketch after this list)
- Some models cannot be run through Hugging Face's hosted service
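To illustrate the GPU point, here is a minimal sketch of the same pipeline placed on the first CUDA device, assuming a CUDA-capable GPU and a CUDA build of PyTorch are installed:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

model_id = 'google/flan-t5-large'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# device=0 puts the pipeline on the first GPU; -1 falls back to CPU.
device = 0 if torch.cuda.is_available() else -1
pipe = pipeline(
    "text2text-generation",
    model=model,
    tokenizer=tokenizer,
    max_length=100,
    device=device,
)
```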