
Using OpenAI and ChatOpenAI in LangChain
BaseOpenAI enumerates OpenAI's models. For details on what each model can do, see the OpenAI website: https://platform.openai.com/docs/models/overview
@staticmethod
def modelname_to_contextsize(modelname: str) -> int:
    ...
    model_token_mapping = {
        "gpt-4": 8192,
        "gpt-4-0314": 8192,
        "gpt-4-0613": 8192,
        "gpt-4-32k": 32768,
        "gpt-4-32k-0314": 32768,
        "gpt-4-32k-0613": 32768,
        "gpt-3.5-turbo": 4096,
        "gpt-3.5-turbo-0301": 4096,
        "gpt-3.5-turbo-0613": 4096,
        "gpt-3.5-turbo-16k": 16385,
        "gpt-3.5-turbo-16k-0613": 16385,
        "gpt-3.5-turbo-instruct": 4096,
        "text-ada-001": 2049,
        "ada": 2049,
        "text-babbage-001": 2040,
        "babbage": 2049,
        "text-curie-001": 2049,
        "curie": 2049,
        "davinci": 2049,
        "text-davinci-003": 4097,
        "text-davinci-002": 4097,
        "code-davinci-002": 8001,
        "code-davinci-001": 8001,
        "code-cushman-002": 2048,
        "code-cushman-001": 2048,
    }
    # handling finetuned models
    if "ft-" in modelname:
        modelname = modelname.split(":")[0]
    context_size = model_token_mapping.get(modelname, None)
    if context_size is None:
        raise ValueError(
            f"Unknown model: {modelname}. Please provide a valid OpenAI model name."
            "Known models are: " + ", ".join(model_token_mapping.keys())
        )
    return context_size
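The fine-tuned-model branch above strips everything after the first colon, so a name in OpenAI's legacy fine-tune format (e.g. "davinci:ft-my-org:suffix") falls back to its base model's entry. A minimal standalone sketch of that lookup logic, with a trimmed-down mapping (the full table is the one above):

```python
# Trimmed-down mapping for illustration; the real table is the one above.
model_token_mapping = {
    "gpt-4": 8192,
    "gpt-3.5-turbo": 4096,
    "davinci": 2049,
}

def modelname_to_contextsize(modelname: str) -> int:
    # Legacy fine-tuned names look like "<base>:ft-<org>:<suffix>";
    # keep only the part before the first colon to use the base entry.
    if "ft-" in modelname:
        modelname = modelname.split(":")[0]
    context_size = model_token_mapping.get(modelname)
    if context_size is None:
        raise ValueError(f"Unknown model: {modelname}")
    return context_size

print(modelname_to_contextsize("gpt-4"))                             # 8192
print(modelname_to_contextsize("davinci:ft-my-org:custom-2023-01-01"))  # 2049
```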
The modelname_to_contextsize method of the OpenAI class lists the models that LangChain's OpenAI supports, and its constructor shows that not all of them can actually be used through this class.
def __new__(cls, **data: Any) -> Union[OpenAIChat, BaseOpenAI]:  # type: ignore
    """Initialize the OpenAI object."""
    model_name = data.get("model_name", "")
    if (
        model_name.startswith("gpt-3.5-turbo") or model_name.startswith("gpt-4")
    ) and "-instruct" not in model_name:
        warnings.warn(
            "You are trying to use a chat model. This way of initializing it is "
            "no longer supported. Instead, please use: "
            "`from langchain_community.chat_models import ChatOpenAI`"
        )
        return OpenAIChat(**data)
    return super().__new__(cls)
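The dispatch rule in __new__ boils down to a simple predicate on the model name; here is a standalone sketch of that check (the function name is ours, not LangChain's):

```python
def is_chat_model(model_name: str) -> bool:
    # Mirrors the check in BaseOpenAI.__new__: names starting with
    # "gpt-3.5-turbo" or "gpt-4" are chat models, unless they contain
    # "-instruct", which marks a completion-style model.
    return (
        model_name.startswith("gpt-3.5-turbo") or model_name.startswith("gpt-4")
    ) and "-instruct" not in model_name

print(is_chat_model("gpt-4"))                   # True
print(is_chat_model("gpt-3.5-turbo-instruct"))  # False
print(is_chat_model("text-davinci-003"))        # False
```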
From BaseOpenAI's `__new__` method you can see that model names starting with "gpt-3.5-turbo" or "gpt-4" and not containing "-instruct" are chat models. In other words, among the models OpenAI offers, those starting with gpt-3.5-turbo or gpt-4 are supported by ChatOpenAI, and the rest are supported by OpenAI.
Models supported by OpenAI:
Of course, LangChain may update the set of OpenAI models it supports; the latest LangChain source code is authoritative.
Models supported by ChatOpenAI:
Now, straight to the official examples:
(1) Using OpenAI:
Official link: https://python.langchain.com/docs/modules/model_io/llms/quick_start
Large language models (LLMs) are a core component of LangChain. LangChain does not serve its own LLMs; instead, it provides a standard interface for interacting with many different LLMs.
There are many LLM providers (OpenAI, Cohere, Hugging Face, etc.), and the LLM class provides a standard interface to all of them.
Install the OpenAI Python package:
pip install openai
Accessing the API requires an API key, which you can obtain by creating an account and heading here. Once we have a key, we need to set it as an environment variable by running:
export OPENAI_API_KEY="..."
If you'd rather not set an environment variable, you can pass the key directly via the openai_api_key named parameter when initializing the OpenAI LLM class:
from langchain_openai import OpenAI
llm = OpenAI(openai_api_key="...")
LLMs implement the Runnable interface, the basic building block of the LangChain Expression Language (LCEL). This means they support invoke, ainvoke, stream, astream, batch, abatch, and astream_log calls.
LLMs accept strings as input, or objects that can be coerced to string prompts, including List[BaseMessage] and PromptValue.
llm.invoke(
    "What are some theories about the relationship between unemployment and inflation?"
)
'\n\n1. The Phillips Curve Theory: This suggests that there is an inverse relationship between unemployment and inflation, meaning that when unemployment is low, inflation will be higher, and when unemployment is high, inflation will be lower.\n\n2. The Monetarist Theory: This theory suggests that the relationship between unemployment and inflation is weak, and that changes in the money supply are more important in determining inflation.\n\n3. The Resource Utilization Theory: This suggests that when unemployment is low, firms are able to raise wages and prices in order to take advantage of the increased demand for their products and services. This leads to higher inflation.'
(2) Using ChatOpenAI:
Official link: https://python.langchain.com/docs/modules/model_io/chat/quick_start
Chat models are a variant of language models. While they use language models under the hood, they expose a somewhat different interface: rather than "text in, text out", they take and return "chat messages".
Like OpenAI, the ChatOpenAI class wraps OpenAI's official models, so it requires the same API key.
Likewise, if you don't want to set an environment variable, you can pass the key directly via the openai_api_key named parameter when initializing the ChatOpenAI class:
from langchain_openai import ChatOpenAI
chat = ChatOpenAI(openai_api_key="...")
The chat model interface is based on messages rather than raw text. The message types LangChain currently supports are AIMessage, HumanMessage, SystemMessage, FunctionMessage, and ChatMessage (ChatMessage accepts an arbitrary role parameter). Most of the time you will only deal with HumanMessage, AIMessage, and SystemMessage.
Chat models implement the Runnable interface, the basic building block of the LangChain Expression Language (LCEL). This means they support invoke, ainvoke, stream, astream, batch, abatch, and astream_log calls.
Chat models accept List[BaseMessage] as input, or objects that can be coerced to messages, including str (converted to HumanMessage) and PromptValue.
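Each of the message types above corresponds to a chat "role" on the wire. The dict and helper below are an illustrative simplification, not LangChain's own code (the real classes live in langchain_core.messages):

```python
from typing import Optional

# Each LangChain message type maps onto a chat "role".
ROLE_FOR_MESSAGE_TYPE = {
    "SystemMessage": "system",
    "HumanMessage": "user",
    "AIMessage": "assistant",
    "FunctionMessage": "function",
}

def to_openai_dict(message_type: str, content: str, role: Optional[str] = None) -> dict:
    """Sketch of converting a message to the wire format the chat API expects."""
    # ChatMessage is the escape hatch: the caller supplies the role directly.
    resolved = role if role is not None else ROLE_FOR_MESSAGE_TYPE[message_type]
    return {"role": resolved, "content": content}

print(to_openai_dict("HumanMessage", "Hello!"))  # {'role': 'user', 'content': 'Hello!'}
```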
from langchain_core.messages import HumanMessage, SystemMessage
messages = [
    SystemMessage(content="You're a helpful assistant"),
    HumanMessage(content="What is the purpose of model regularization?"),
]
chat.invoke(messages)
AIMessage(content="The purpose of model regularization is to prevent overfitting in machine learning models. Overfitting occurs when a model becomes too complex and starts to fit the noise in the training data, leading to poor generalization on unseen data. Regularization techniques introduce additional constraints or penalties to the model's objective function, discouraging it from becoming overly complex and promoting simpler and more generalizable models. Regularization helps to strike a balance between fitting the training data well and avoiding overfitting, leading to better performance on new, unseen data.")
While exploring LangChain's ChatOpenAI and OpenAI classes, you will notice that the two classes call different OpenAI endpoints: OpenAI uses /v1/completions, while ChatOpenAI uses /v1/chat/completions.
For details, see the OpenAI website:
https://platform.openai.com/docs/models/model-endpoint-compatibility
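The two endpoints also take differently shaped request bodies: /v1/completions takes a single prompt string, while /v1/chat/completions takes a list of role/content messages. A sketch of the two payloads (field names follow the OpenAI API; the model names are just examples):

```python
# /v1/completions: a single prompt string.
completions_payload = {
    "model": "gpt-3.5-turbo-instruct",
    "prompt": "Say hello",
    "max_tokens": 16,
}

# /v1/chat/completions: a list of messages, each with a role.
chat_completions_payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You're a helpful assistant"},
        {"role": "user", "content": "Say hello"},
    ],
}

print(sorted(completions_payload))
print(sorted(chat_completions_payload))
```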
This article is reproduced from the WeChat official account @神州數碼云基地.