
```
For more information, please refer to
[the documentation of vllm](https://docs.vllm.ai/en/stable/).
Now, you can have fun with Qwen2.5 models.
```
This is a good example of a chunk that benefits from context found elsewhere in the document. On its own, the chunk carries relatively little information. Now let's look at the same chunk with added context:

Example chunk with context:
```
For more information, please refer to
[the documentation of vllm](https://docs.vllm.ai/en/stable/).
Now, you can have fun with Qwen2.5 models.

The chunk is situated at the end of the document, following the section on
deploying Qwen2.5 models with vLLM, and serves as a concluding remark
encouraging users to explore the capabilities of Qwen2.5 models.
```
You can imagine that when the model receives this chunk, it has a much better grasp of the surrounding context and can give more accurate answers. Let's build the pipeline that creates these chunks.

Contextual Retrieval (introduced by Anthropic) addresses a common problem in traditional Retrieval-Augmented Generation (RAG) systems: individual text chunks often lack enough context to be retrieved and understood accurately.

Contextual Retrieval enhances each chunk by adding specific explanatory context before it is embedded or indexed. This preserves the relationship between each chunk and its broader document, significantly improving the system's ability to retrieve and use the most relevant information.
According to Anthropic's experiments:

- Contextual Embeddings reduced the top-20-chunk retrieval failure rate by 35% (5.7% → 3.7%).
- Combining Contextual Embeddings with Contextual BM25 reduced it by 49% (5.7% → 2.9%).
- Adding a reranking step on top reduced it by 67% (5.7% → 1.9%).

These improvements highlight the potential of Contextual Retrieval to make AI-powered question-answering systems more accurate and context-aware.
We'll use two example documents to demonstrate how Contextual Retrieval improves a question-answering system. Our system will:

- split each document into chunks and generate a short, document-aware context for every chunk with an LLM;
- embed the contextualized chunks and store them in SQLite using the sqlite-vec extension;
- retrieve the most relevant chunks for a query and answer it with Llama 3.1.
First, let's install the necessary libraries:
```bash
pip install -Uqqq pip --progress-bar off
pip install -qqq fastembed==0.3.6 --progress-bar off
pip install -qqq sqlite-vec==0.1.2 --progress-bar off
pip install -qqq groq==0.11.0 --progress-bar off
pip install -qqq langchain-text-splitters==0.3.0 --progress-bar off
```
Now, let's import the required modules:
```python
import sqlite3
from textwrap import dedent
from typing import List

import sqlite_vec
from fastembed import TextEmbedding
from google.colab import userdata
from groq import Groq
from groq.types.chat import ChatCompletionMessage
from langchain_text_splitters import RecursiveCharacterTextSplitter
from sqlite_vec import serialize_float32
from tqdm import tqdm
```
We'll use Llama 3.1 through the Groq API. First, let's set up the client:
```python
client = Groq(api_key=userdata.get("GROQ_API_KEY"))
MODEL = "llama-3.1-70b-versatile"
TEMPERATURE = 0
```
Next, we'll create a helper function to interact with the model. It takes a prompt and an optional message history:
```python
def call_model(prompt: str, messages=None) -> str:
    # Use a fresh history unless the caller provides one (avoids the
    # mutable-default pitfall that would leak prompts across calls).
    if messages is None:
        messages = []
    messages.append({
        "role": "user",
        "content": prompt,
    })
    response = client.chat.completions.create(
        model=MODEL,
        messages=messages,
        temperature=TEMPERATURE,
    )
    return response.choices[0].message.content
```
This function sends a prompt to the model and returns the model's response. You can also pass in a message history to maintain the context of a conversation.
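For example, here's a minimal sketch (with hypothetical prompts) of how the optional history keeps a conversation going across calls:

```python
# Hypothetical usage: the same list is reused so the model sees earlier turns.
history = []
first_answer = call_model("What is Contextual Retrieval?", history)
# call_model appended our first prompt to `history`; add the model's reply
# before asking a follow-up so the full conversation is sent next time.
history.append({"role": "assistant", "content": first_answer})
follow_up = call_model("How does it differ from plain RAG?", history)
```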
We'll use SQLite with the sqlite-vec extension to store our documents and their embeddings. Here's how to set up the database:
```python
db = sqlite3.connect("readmes.sqlite3")
db.enable_load_extension(True)
sqlite_vec.load(db)
db.enable_load_extension(False)
```
With the connection open, let's create the necessary tables:
```python
db.execute("""
CREATE TABLE documents(
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    text TEXT
);
""")

db.execute("""
CREATE TABLE chunks(
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    document_id INTEGER,
    text TEXT,
    FOREIGN KEY(document_id) REFERENCES documents(id)
);
""")

# The vector table's dimensionality must match the embedding model used below.
# The default fastembed model (BAAI/bge-small-en-v1.5) produces 384-dim vectors.
EMBEDDING_DIM = 384

db.execute(f"""
CREATE VIRTUAL TABLE chunk_embeddings USING vec0(
    id INTEGER PRIMARY KEY,
    embedding FLOAT[{EMBEDDING_DIM}]
);
""")
```
Here's a breakdown of the tables:

- `documents`: stores the full text of each document.
- `chunks`: stores the smaller text chunks split from each document.
- `chunk_embeddings`: stores the embedding of each chunk for similarity search.

This database setup lets us store, embed, and retrieve chunks efficiently, making similarity search easy later on.
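If you want to double-check the setup, a quick (optional) query against sqlite_master shows what was created; note that the vec0 virtual table also creates a few internal shadow tables of its own:

```python
# Optional sanity check: list the tables in the database.
tables = db.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
).fetchall()
print(tables)
```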
Next, we'll break the documents into manageable chunks and enrich them with context for better retrieval.

The documents we'll use are the README files of the Qwen 2.5 models and the LangGraph project.

First, let's save the documents in the database:
```python
# qwen_doc and langgraph_doc hold the text of the two README files
documents = [qwen_doc, langgraph_doc]

with db:
    for doc in documents:
        db.execute("INSERT INTO documents(text) VALUES(?)", [doc])
```
To split the documents into smaller chunks, we'll use the RecursiveCharacterTextSplitter from LangChain:
```python
text_splitter = RecursiveCharacterTextSplitter(chunk_size=2048, chunk_overlap=128)
```
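As a quick illustration (assuming qwen_doc holds the Qwen README text, as above), you can preview how the splitter behaves before wiring it into the pipeline:

```python
# Illustrative preview of the splitter output on one document.
preview_chunks = text_splitter.split_text(qwen_doc)
print(len(preview_chunks), "chunks")
print(preview_chunks[0][:200])  # first 200 characters of the first chunk
```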
We can now create the chunks and store them in the database (the create_contextual_chunks and save_chunks helpers are defined below):
```python
with db:
    document_rows = db.execute("SELECT id, text FROM documents").fetchall()
    for row in document_rows:
        doc_id, doc_text = row
        chunks = text_splitter.split_text(doc_text)
        contextual_chunks = create_contextual_chunks(chunks, doc_text)
        save_chunks(doc_id, contextual_chunks)
```
To give each chunk additional context, we'll ask the model to generate a short summary using the following prompt:
```python
CONTEXTUAL_EMBEDDING_PROMPT = """
Here is the chunk we want to situate within the whole document:
<chunk>
{chunk}
</chunk>
Here is the content of the whole document:
<document>
{document}
</document>
Please provide a short, succinct context to situate this chunk within the overall document to improve search retrieval. Respond only with the context.
"""
```
Here's the function that builds the contextual chunks:
```python
def create_contextual_chunks(chunks: List[str], document: str) -> List[str]:
    contextual_chunks = []
    for chunk in chunks:
        prompt = CONTEXTUAL_EMBEDDING_PROMPT.format(chunk=chunk, document=document)
        chunk_context = call_model(prompt)
        contextual_chunks.append(f"{chunk}\n{chunk_context}")
    return contextual_chunks
```
This function sends each chunk, along with the whole document, to the model, which generates a short context to improve retrieval accuracy. The generated context is then attached to the chunk (appended after the chunk text, as in the example at the top of this post).
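As an illustrative check (it makes one LLM call and reuses the preview_chunks and qwen_doc from the splitter preview above), you can contextualize a single chunk and inspect the result:

```python
# Illustrative only: contextualize the first chunk of the Qwen README.
sample = create_contextual_chunks(preview_chunks[:1], qwen_doc)[0]
print(sample)  # the original chunk followed by its generated context
```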
We'll use the fastembed library to create embeddings for the chunks:
```python
embedding_model = TextEmbedding()
```
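By default, fastembed uses the BAAI/bge-small-en-v1.5 model, which produces 384-dimensional vectors; that's the dimensionality the chunk_embeddings table was declared with. A quick check:

```python
# Confirm the embedding dimensionality matches the vec0 table definition (384).
sample_embedding = list(embedding_model.embed(["hello world"]))[0]
print(sample_embedding.shape)  # (384,)
```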
Finally, let's save the chunks and their embeddings in the database:
```python
def save_chunks(doc_id: int, chunks: List[str]):
    # Embed all chunks in one batch, then store each chunk and its embedding.
    chunk_embeddings = list(embedding_model.embed(chunks))
    for chunk, embedding in zip(chunks, chunk_embeddings):
        result = db.execute(
            "INSERT INTO chunks(document_id, text) VALUES(?, ?)", [doc_id, chunk]
        )
        chunk_id = result.lastrowid
        db.execute(
            "INSERT INTO chunk_embeddings(id, embedding) VALUES (?, ?)",
            [chunk_id, serialize_float32(embedding)],
        )
```
This function saves each chunk and its embedding to the chunks and chunk_embeddings tables. The serialize_float32 helper from sqlite-vec converts the embedding into a format that can be stored and retrieved efficiently.
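To see what serialize_float32 actually produces, here's a tiny illustration: it packs the vector into a compact float32 BLOB that sqlite-vec can search:

```python
# Three float32 values take 4 bytes each, so the BLOB is 12 bytes long.
blob = serialize_float32([0.1, 0.2, 0.3])
print(type(blob), len(blob))  # <class 'bytes'> 12
```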
Once the chunks and their embeddings are in the database, we can retrieve the most relevant context for a given query. Here's the function that does that:
```python
def retrieve_context(
    query: str, k: int = 3, embedding_model: TextEmbedding = embedding_model
) -> str:
    # Embed the query and run a KNN search against the vec0 virtual table.
    query_embedding = list(embedding_model.embed([query]))[0]
    results = db.execute(
        """
        SELECT
            chunk_embeddings.id,
            distance,
            text
        FROM chunk_embeddings
        LEFT JOIN chunks ON chunks.id = chunk_embeddings.id
        WHERE embedding MATCH ? AND k = ?
        ORDER BY distance
        """,
        [serialize_float32(query_embedding), k],
    ).fetchall()
    return "\n-----\n".join([item[2] for item in results])
```
To generate answers, we combine a system prompt with the retrieved context. This helps the model produce accurate, contextually relevant responses.

The system prompt sets the tone and expectations for how the model should respond:
```python
SYSTEM_PROMPT = """
You're an expert AI/ML engineer with a background in software development.
You're answering questions about technical topics and projects.
If you don't know the answer, simply state that you don't know.
Keep your answers brief and to the point. Be kind and respectful.
Use the provided context for your answers. The most relevant information is
at the top. Each piece of information is separated by -----.
"""
```
Here's the function that ties everything together:
````python
def ask_question(query: str) -> tuple[str, str]:
    messages = [
        {
            "role": "system",
            "content": SYSTEM_PROMPT,
        },
    ]
    context = retrieve_context(query)
    prompt = dedent(
        f"""
        Use the following information:
        ```
        {context}
        ```
        to answer the question:
        {query}
        """
    )
    return call_model(prompt, messages), context
````
To answer a question, call the function like this:
```python
answer, context = ask_question("How does Contextual Retrieval improve RAG performance?")
print("Answer:", answer)
print("Context used:", context)
```
This gives you both the answer and the context the model used to generate it.

Now we can test the system with a few questions. Let's start with a simple one about the Qwen models:
query = "How many parameters does Qwen have?"
response, context = ask_question(query)
print(response)
Output:
Qwen2.5 models are available in various sizes, with the number of parameters
ranging from 0.5B to 72B. The specific model mentioned in the text has 32.5B
parameters, with 31.0B non-embedding parameters.
Great, the model provided accurate information based on the retrieved context. Let's try something more technical:
query = "How should one deploy Qwen model on a private server?"
response, context = ask_question(query)
print(response)
Output:
To deploy Qwen2.5 on a private server, you can use vLLM, a fast and easy-to-use
framework for LLM inference and serving. First, install `vllm>=0.4.0` using pip.
Then, run the following command to build up a vLLM service:
```bash
python -m vllm.entrypoints.openai.api_server --model Qwen/Qwen2.5-7B-Instruct
```
Alternatively, with `vllm>=0.5.3`, you can use:
```bash
vllm serve Qwen/Qwen2.5-7B-Instruct
```
This will start a service that you can interact with using the OpenAI API.
That's a good summary of the deployment section of the documentation. Let's try one more question:
query = "I have a RTX 4090 (24GB). Which version of the model can I run with good inference speed?"
response, context = ask_question(query)
print(response)
Output:
Based on the provided information, the model sizes available for Qwen2.5 are
0.5B, 1.5B, 3B, 7B, 14B, 32B, and 72B.
Considering your RTX 4090 has 24GB of memory, you can likely run the 7B or 14B
models with good inference speed. However, the 14B model might be pushing the
limits of your GPU's memory, so the 7B model would be a safer choice.
Keep in mind that the actual performance will also depend on other factors such
as your system's CPU, RAM, and the specific use case.
This information isn't in the documents, but the model gave a reasonable answer based on the retrieved context and its own reasoning abilities.
You've built a RAG system that uses:

- contextual chunks enriched with LLM-generated summaries;
- fastembed embeddings stored in SQLite with the sqlite-vec extension;
- Llama 3.1 (via the Groq API) to answer questions from the retrieved context.
As you continue to refine this system, you could extend it further, for example by combining vector search with keyword (BM25) search or adding a reranking step, as in Anthropic's full Contextual Retrieval setup.
Tell me what you plan to build with this system!

Original article: https://www.mlexpert.io/blog/rag-contextual-retrieval