---
sidebar_position: 5
---
:::tip NOTE
Run the following command to download the Python SDK:

```bash
pip install ragflow-sdk
```
:::
| Code | Message | Description |
|---|---|---|
| 400 | Bad Request | Invalid request parameters |
| 401 | Unauthorized | Unauthorized access |
| 403 | Forbidden | Access denied |
| 404 | Not Found | Resource not found |
| 500 | Internal Server Error | Server internal error |
| 1001 | Invalid Chunk ID | Invalid Chunk ID |
| 1002 | Chunk Update Failed | Chunk update failed |
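The SDK surfaces these errors as raised `Exception`s rather than return values. A minimal defensive sketch (the dataset name is illustrative, and the exact exception message format may vary by version):

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")

try:
    dataset = rag_object.create_dataset(name="kb_1")
except Exception as exc:
    # The exception message typically carries a code/message pair
    # corresponding to the table above.
    print(f"Request failed: {exc}")
```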
Creates a model response for a given historical chat conversation, via an OpenAI-compatible API.
- `model` (`str`, *Required*): The model used to generate the response. The server parses this automatically, so you can set it to any value for now.
- `messages` (`list[object]`, *Required*): A list of historical chat messages used to generate the response. It must contain at least one message with the `user` role.
- `stream` (`bool`): Whether to receive the response as a stream. Set this to `false` explicitly if you prefer to receive the entire response in one go instead of as a stream.
Returns the model's response on success; raises an `Exception` on failure.

```python
from openai import OpenAI

model = "model"
client = OpenAI(
    api_key="ragflow-api-key",
    base_url="http://ragflow_address/api/v1/chats_openai/<chat_id>",
)

stream = True
completion = client.chat.completions.create(
    model=model,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who are you?"},
    ],
    stream=stream,
)

if stream:
    for chunk in completion:
        print(chunk)
else:
    print(completion.choices[0].message.content)
```
```python
RAGFlow.create_dataset(
    name: str,
    avatar: Optional[str] = None,
    description: Optional[str] = None,
    embedding_model: Optional[str] = "BAAI/bge-large-zh-v1.5@BAAI",
    permission: str = "me",
    chunk_method: str = "naive",
    pagerank: int = 0,
    parser_config: DataSet.ParserConfig = None
) -> DataSet
```
Creates a dataset.
- `name` (`str`, *Required*): The unique name of the dataset to create. It must adhere to the dataset-naming requirements.
- `avatar` (`str`): Base64 encoding of the avatar. Defaults to `None`.
- `description` (`str`): A brief description of the dataset to create. Defaults to `None`.
- `embedding_model` (`str`): The name of the embedding model to use. Defaults to `"BAAI/bge-large-zh-v1.5@BAAI"`.
- `permission` (`str`): Specifies who can access the dataset to create. Available options:
  - `"me"`: (Default) Only you can manage the dataset.
  - `"team"`: All team members can manage the dataset.
- `chunk_method` (`str`): The chunking method of the dataset to create. Available options:
  - `"naive"`: General (default)
  - `"manual"`: Manual
  - `"qa"`: Q&A
  - `"table"`: Table
  - `"paper"`: Paper
  - `"book"`: Book
  - `"laws"`: Laws
  - `"presentation"`: Presentation
  - `"picture"`: Picture
  - `"one"`: One
  - `"email"`: Email
- `pagerank` (`int`): The pagerank of the dataset to create. Defaults to `0`.
- `parser_config` (`DataSet.ParserConfig`): The parser configuration of the dataset. A `ParserConfig` object's attributes vary based on the selected `chunk_method`:
  - `chunk_method="naive"`: `{"chunk_token_num":128,"delimiter":"\\n","html4excel":False,"layout_recognize":True,"raptor":{"use_raptor":False}}`
  - `chunk_method="qa"`: `{"raptor": {"use_raptor": False}}`
  - `chunk_method="manual"`: `{"raptor": {"use_raptor": False}}`
  - `chunk_method="table"`: `None`
  - `chunk_method="paper"`: `{"raptor": {"use_raptor": False}}`
  - `chunk_method="book"`: `{"raptor": {"use_raptor": False}}`
  - `chunk_method="laws"`: `{"raptor": {"use_raptor": False}}`
  - `chunk_method="picture"`: `None`
  - `chunk_method="presentation"`: `{"raptor": {"use_raptor": False}}`
  - `chunk_method="one"`: `None`
  - `chunk_method="knowledge-graph"`: `{"chunk_token_num":128,"delimiter":"\\n","entity_types":["organization","person","location","event","time"]}`
  - `chunk_method="email"`: `None`

Returns a `DataSet` object on success; raises an `Exception` on failure.

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.create_dataset(name="kb_1")
```
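A slightly fuller sketch, assuming you want Q&A chunking and a pagerank boost; the name, description, and values are illustrative:

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")

# Hypothetical dataset using the "qa" chunking method listed above.
dataset = rag_object.create_dataset(
    name="faq_kb",
    description="Frequently asked questions",
    chunk_method="qa",
    pagerank=10,
)
print(dataset.id)
```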
```python
RAGFlow.delete_datasets(ids: list[str] | None = None)
```
Deletes datasets by ID.
- `ids` (`list[str] | None`, *Required*): The IDs of the datasets to delete. Defaults to `None`. If it is `None`, all datasets will be deleted.

Returns no value on success; raises an `Exception` on failure.

```python
rag_object.delete_datasets(ids=["d94a8dc02c9711f0930f7fbc369eab6d","e94a8dc02c9711f0930f7fbc369eab6e"])
```
```python
RAGFlow.list_datasets(
    page: int = 1,
    page_size: int = 30,
    orderby: str = "create_time",
    desc: bool = True,
    id: str = None,
    name: str = None
) -> list[DataSet]
```
Lists datasets.
- `page` (`int`): Specifies the page on which the datasets will be displayed. Defaults to `1`.
- `page_size` (`int`): The number of datasets on each page. Defaults to `30`.
- `orderby` (`str`): The field by which datasets should be sorted. Available options:
  - `"create_time"` (default)
  - `"update_time"`
- `desc` (`bool`): Indicates whether the retrieved datasets should be sorted in descending order. Defaults to `True`.
- `id` (`str`): The ID of the dataset to retrieve. Defaults to `None`.
- `name` (`str`): The name of the dataset to retrieve. Defaults to `None`.

Returns a list of `DataSet` objects on success; raises an `Exception` on failure.

```python
for dataset in rag_object.list_datasets():
    print(dataset)

dataset = rag_object.list_datasets(id="id_1")
print(dataset[0])
```
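To walk a large workspace page by page, you can loop until an empty page comes back. A sketch, assuming `DataSet` exposes the `id` and `name` attributes used elsewhere in these examples:

```python
page = 1
while True:
    batch = rag_object.list_datasets(page=page, page_size=30)
    if not batch:
        break  # no more datasets
    for ds in batch:
        print(ds.id, ds.name)
    page += 1
```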
```python
DataSet.update(update_message: dict)
```
Updates configurations for the current dataset.
- `update_message` (`dict[str, str|int]`, *Required*): A dictionary representing the attributes to update, with the following keys:
  - `"name"`: `str` The revised name of the dataset.
  - `"avatar"`: `str` Base64 encoding of the avatar.
  - `"embedding_model"`: `str` The updated embedding model name, in `model_name@model_factory` format. Ensure that `"chunk_count"` is `0` before updating `"embedding_model"`.
  - `"permission"`: `str` Specifies who can manage the dataset. `"me"`: (Default) Only you can manage the dataset. `"team"`: All team members can manage the dataset.
  - `"pagerank"`: `int` The pagerank of the dataset. Defaults to `0`. Minimum: `0`; maximum: `100`.
  - `"chunk_method"`: `enum<string>` The chunking method. Available options: `"naive"`: General (default), `"book"`: Book, `"email"`: Email, `"laws"`: Laws, `"manual"`: Manual, `"one"`: One, `"paper"`: Paper, `"picture"`: Picture, `"presentation"`: Presentation, `"qa"`: Q&A, `"table"`: Table, `"tag"`: Tag.

Returns no value on success; raises an `Exception` on failure.

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.list_datasets(name="kb_name")
dataset = dataset[0]
dataset.update({"embedding_model":"BAAI/bge-zh-v1.5", "chunk_method":"manual"})
```
```python
DataSet.upload_documents(document_list: list[dict])
```
Uploads documents to the current dataset.
- `document_list` (`list[dict]`, *Required*): A list of dictionaries representing the documents to upload, each containing the following keys:
  - `"display_name"`: (Optional) The file name to display in the dataset.
  - `"blob"`: (Optional) The binary content of the file to upload.

Returns no value on success; raises an `Exception` on failure.

```python
dataset = rag_object.create_dataset(name="kb_name")
dataset.upload_documents([
    {"display_name": "1.txt", "blob": "<BINARY_CONTENT_OF_THE_DOC>"},
    {"display_name": "2.pdf", "blob": "<BINARY_CONTENT_OF_THE_DOC>"},
])
```
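In practice, `"blob"` carries the file's raw bytes read from disk. A minimal sketch with a hypothetical local path:

```python
# Read a local file and upload its raw bytes; the path is illustrative.
with open("./docs/manual.pdf", "rb") as f:
    blob = f.read()

dataset.upload_documents([{"display_name": "manual.pdf", "blob": blob}])
```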
```python
Document.update(update_message: dict)
```
Updates configurations for the current document.
- `update_message` (`dict[str, str|dict[]]`, *Required*): A dictionary representing the attributes to update, with the following keys:
  - `"display_name"`: `str` The name of the document to update.
  - `"meta_fields"`: `dict[str, Any]` The meta fields of the document.
  - `"chunk_method"`: `str` The parsing method to apply to the document. Available options: `"naive"`: General, `"manual"`: Manual, `"qa"`: Q&A, `"table"`: Table, `"paper"`: Paper, `"book"`: Book, `"laws"`: Laws, `"presentation"`: Presentation, `"picture"`: Picture, `"one"`: One, `"email"`: Email.
  - `"parser_config"`: `dict[str, Any]` The parsing configuration for the document. Its attributes vary based on the selected `"chunk_method"`:
    - `"chunk_method"="naive"`: `{"chunk_token_num":128,"delimiter":"\\n","html4excel":False,"layout_recognize":True,"raptor":{"use_raptor":False}}`
    - `"chunk_method"="qa"`: `{"raptor": {"use_raptor": False}}`
    - `"chunk_method"="manual"`: `{"raptor": {"use_raptor": False}}`
    - `"chunk_method"="table"`: `None`
    - `"chunk_method"="paper"`: `{"raptor": {"use_raptor": False}}`
    - `"chunk_method"="book"`: `{"raptor": {"use_raptor": False}}`
    - `"chunk_method"="laws"`: `{"raptor": {"use_raptor": False}}`
    - `"chunk_method"="presentation"`: `{"raptor": {"use_raptor": False}}`
    - `"chunk_method"="picture"`: `None`
    - `"chunk_method"="one"`: `None`
    - `"chunk_method"="knowledge-graph"`: `{"chunk_token_num":128,"delimiter":"\\n","entity_types":["organization","person","location","event","time"]}`
    - `"chunk_method"="email"`: `None`

Returns no value on success; raises an `Exception` on failure.

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.list_datasets(id='id')
dataset = dataset[0]
doc = dataset.list_documents(id="wdfxb5t547d")
doc = doc[0]
doc.update({"parser_config": {"chunk_token_num": 256}, "chunk_method": "manual"})
```
```python
Document.download() -> bytes
```
Downloads the current document.
Returns the downloaded document in `bytes`.
```python
import os

from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.list_datasets(id="id")
dataset = dataset[0]
doc = dataset.list_documents(id="wdfxb5t547d")
doc = doc[0]
# Expand "~" explicitly; open() does not do this on its own.
with open(os.path.expanduser("~/ragflow.txt"), "wb+") as f:
    f.write(doc.download())
print(doc)
```
```python
DataSet.list_documents(
    id: str = None,
    keywords: str = None,
    page: int = 1,
    page_size: int = 30,
    order_by: str = "create_time",
    desc: bool = True
) -> list[Document]
```
Lists documents in the current dataset.
- `id` (`str`): The ID of the document to retrieve. Defaults to `None`.
- `keywords` (`str`): The keywords used to match document titles. Defaults to `None`.
- `page` (`int`): Specifies the page on which the documents will be displayed. Defaults to `1`.
- `page_size` (`int`): The maximum number of documents on each page. Defaults to `30`.
- `order_by` (`str`): The field by which documents should be sorted. Available options:
  - `"create_time"` (default)
  - `"update_time"`
- `desc` (`bool`): Indicates whether the retrieved documents should be sorted in descending order. Defaults to `True`.

Returns a list of `Document` objects on success; raises an `Exception` on failure. A `Document` object contains the following attributes:

- `id`: The document ID. Defaults to `""`.
- `name`: The document name. Defaults to `""`.
- `thumbnail`: The thumbnail image of the document. Defaults to `None`.
- `dataset_id`: The dataset ID associated with the document. Defaults to `None`.
- `chunk_method`: The chunking method name. Defaults to `"naive"`.
- `source_type`: The source type of the document. Defaults to `"local"`.
- `type`: Type or category of the document. Defaults to `""`. Reserved for future use.
- `created_by`: `str` The creator of the document. Defaults to `""`.
- `size`: `int` The document size in bytes. Defaults to `0`.
- `token_count`: `int` The number of tokens in the document. Defaults to `0`.
- `chunk_count`: `int` The number of chunks in the document. Defaults to `0`.
- `progress`: `float` The current processing progress as a percentage. Defaults to `0.0`.
- `progress_msg`: `str` A message indicating the current progress status. Defaults to `""`.
- `process_begin_at`: `datetime` The start time of document processing. Defaults to `None`.
- `process_duration`: `float` Duration of the processing in seconds. Defaults to `0.0`.
- `run`: `str` The document's processing status:
  - `"UNSTART"` (default)
  - `"RUNNING"`
  - `"CANCEL"`
  - `"DONE"`
  - `"FAIL"`
- `status`: `str` Reserved for future use.
- `parser_config`: `ParserConfig` Configuration object for the parser. Its attributes vary based on the selected `chunk_method`:
  - `chunk_method="naive"`: `{"chunk_token_num":128,"delimiter":"\\n","html4excel":False,"layout_recognize":True,"raptor":{"use_raptor":False}}`
  - `chunk_method="qa"`: `{"raptor": {"use_raptor": False}}`
  - `chunk_method="manual"`: `{"raptor": {"use_raptor": False}}`
  - `chunk_method="table"`: `None`
  - `chunk_method="paper"`: `{"raptor": {"use_raptor": False}}`
  - `chunk_method="book"`: `{"raptor": {"use_raptor": False}}`
  - `chunk_method="laws"`: `{"raptor": {"use_raptor": False}}`
  - `chunk_method="presentation"`: `{"raptor": {"use_raptor": False}}`
  - `chunk_method="picture"`: `None`
  - `chunk_method="one"`: `None`
  - `chunk_method="email"`: `None`

```python
import os

from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.create_dataset(name="kb_1")

filename1 = "~/ragflow.txt"
blob = open(os.path.expanduser(filename1), "rb").read()
dataset.upload_documents([{"display_name": filename1, "blob": blob}])

for doc in dataset.list_documents(keywords="rag", page=1, page_size=12):
    print(doc)
```
```python
DataSet.delete_documents(ids: list[str] = None)
```
Deletes documents by ID.
- `ids` (`list[str]`): The IDs of the documents to delete. Defaults to `None`. If it is not specified, all documents in the dataset will be deleted.

Returns no value on success; raises an `Exception` on failure.

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.list_datasets(name="kb_1")
dataset = dataset[0]
dataset.delete_documents(ids=["id_1","id_2"])
```
```python
DataSet.async_parse_documents(document_ids: list[str]) -> None
```
Parses documents in the current dataset.
- `document_ids` (`list[str]`, *Required*): The IDs of the documents to parse.

Returns no value on success; raises an `Exception` on failure.

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.create_dataset(name="dataset_name")
documents = [
    {'display_name': 'test1.txt', 'blob': open('./test_data/test1.txt', "rb").read()},
    {'display_name': 'test2.txt', 'blob': open('./test_data/test2.txt', "rb").read()},
    {'display_name': 'test3.txt', 'blob': open('./test_data/test3.txt', "rb").read()}
]
dataset.upload_documents(documents)
documents = dataset.list_documents(keywords="test")
ids = []
for document in documents:
    ids.append(document.id)
dataset.async_parse_documents(ids)
print("Async bulk parsing initiated.")
```
```python
DataSet.async_cancel_parse_documents(document_ids: list[str]) -> None
```
Stops parsing specified documents.
- `document_ids` (`list[str]`, *Required*): The IDs of the documents for which parsing should be stopped.

Returns no value on success; raises an `Exception` on failure.

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.create_dataset(name="dataset_name")
documents = [
    {'display_name': 'test1.txt', 'blob': open('./test_data/test1.txt', "rb").read()},
    {'display_name': 'test2.txt', 'blob': open('./test_data/test2.txt', "rb").read()},
    {'display_name': 'test3.txt', 'blob': open('./test_data/test3.txt', "rb").read()}
]
dataset.upload_documents(documents)
documents = dataset.list_documents(keywords="test")
ids = []
for document in documents:
    ids.append(document.id)
dataset.async_parse_documents(ids)
print("Async bulk parsing initiated.")
dataset.async_cancel_parse_documents(ids)
print("Async bulk parsing cancelled.")
```
```python
Document.add_chunk(content: str, important_keywords: list[str] = []) -> Chunk
```
Adds a chunk to the current document.
- `content` (`str`, *Required*): The text content of the chunk.
- `important_keywords` (`list[str]`): The key terms or phrases to tag with the chunk.

Returns a `Chunk` object on success; raises an `Exception` on failure. A `Chunk` object contains the following attributes:

- `id`: `str` The chunk ID.
- `content`: `str` The text content of the chunk.
- `important_keywords`: `list[str]` A list of key terms or phrases tagged with the chunk.
- `create_time`: `str` The time when the chunk was created (added to the document).
- `create_timestamp`: `float` The timestamp representing the creation time of the chunk, expressed in seconds since January 1, 1970.
- `dataset_id`: `str` The ID of the associated dataset.
- `document_name`: `str` The name of the associated document.
- `document_id`: `str` The ID of the associated document.
- `available`: `bool` The chunk's availability status in the dataset. Value options:
  - `False`: Unavailable
  - `True`: Available (default)

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
datasets = rag_object.list_datasets(id="123")
dataset = datasets[0]
doc = dataset.list_documents(id="wdfxb5t547d")
doc = doc[0]
chunk = doc.add_chunk(content="xxxxxxx")
```
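A variant that also tags key terms via `important_keywords`; the content and keywords are illustrative:

```python
chunk = doc.add_chunk(
    content="RAGFlow supports tagging key terms on a chunk.",
    important_keywords=["RAGFlow", "chunk tagging"],
)
print(chunk.id)
```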
```python
Document.list_chunks(keywords: str = None, page: int = 1, page_size: int = 30, id: str = None) -> list[Chunk]
```
Lists chunks in the current document.
- `keywords` (`str`): The keywords used to match chunk content. Defaults to `None`.
- `page` (`int`): Specifies the page on which the chunks will be displayed. Defaults to `1`.
- `page_size` (`int`): The maximum number of chunks on each page. Defaults to `30`.
- `id` (`str`): The ID of the chunk to retrieve. Defaults to `None`.

Returns a list of `Chunk` objects on success; raises an `Exception` on failure.

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.list_datasets(id="123")
dataset = dataset[0]
docs = dataset.list_documents(keywords="test", page=1, page_size=12)
for chunk in docs[0].list_chunks(keywords="rag", page=1, page_size=12):
    print(chunk)
```
```python
Document.delete_chunks(chunk_ids: list[str])
```
Deletes chunks by ID.
- `chunk_ids` (`list[str]`): The IDs of the chunks to delete. Defaults to `None`. If it is not specified, all chunks of the current document will be deleted.

Returns no value on success; raises an `Exception` on failure.

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.list_datasets(id="123")
dataset = dataset[0]
doc = dataset.list_documents(id="wdfxb5t547d")
doc = doc[0]
chunk = doc.add_chunk(content="xxxxxxx")
doc.delete_chunks(["id_1","id_2"])
```
```python
Chunk.update(update_message: dict)
```
Updates content or configurations for the current chunk.
- `update_message` (`dict[str, str|list[str]|int]`, *Required*): A dictionary representing the attributes to update, with the following keys:
  - `"content"`: `str` The text content of the chunk.
  - `"important_keywords"`: `list[str]` A list of key terms or phrases to tag with the chunk.
  - `"available"`: `bool` The chunk's availability status in the dataset. Value options:
    - `False`: Unavailable
    - `True`: Available (default)

Returns no value on success; raises an `Exception` on failure.

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.list_datasets(id="123")
dataset = dataset[0]
doc = dataset.list_documents(id="wdfxb5t547d")
doc = doc[0]
chunk = doc.add_chunk(content="xxxxxxx")
chunk.update({"content":"sdfx..."})
```
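A sketch combining the documented keys to hide a chunk from retrieval while retagging it; the keyword list is illustrative:

```python
# Mark the chunk unavailable and replace its tagged keywords.
chunk.update({"available": False, "important_keywords": ["draft"]})
```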
```python
RAGFlow.retrieve(
    question: str = "",
    dataset_ids: list[str] = None,
    document_ids: list[str] = None,
    page: int = 1,
    page_size: int = 30,
    similarity_threshold: float = 0.2,
    vector_similarity_weight: float = 0.3,
    top_k: int = 1024,
    rerank_id: str = None,
    keyword: bool = False,
    highlight: bool = False
) -> list[Chunk]
```
Retrieves chunks from specified datasets.
- `question` (`str`, *Required*): The user query or query keywords. Defaults to `""`.
- `dataset_ids` (`list[str]`, *Required*): The IDs of the datasets to search. Defaults to `None`.
- `document_ids` (`list[str]`): The IDs of the documents to search. Defaults to `None`. You must ensure all selected documents use the same embedding model; otherwise, an error will occur.
- `page` (`int`): The starting index for the documents to retrieve. Defaults to `1`.
- `page_size` (`int`): The maximum number of chunks to retrieve. Defaults to `30`.
- `similarity_threshold` (`float`): The minimum similarity score. Defaults to `0.2`.
- `vector_similarity_weight` (`float`): The weight of vector cosine similarity. Defaults to `0.3`. If `x` is the vector cosine similarity weight, then `(1 - x)` is the term similarity weight.
- `top_k` (`int`): The number of chunks engaged in vector cosine computation. Defaults to `1024`.
- `rerank_id` (`str`): The ID of the rerank model. Defaults to `None`.
- `keyword` (`bool`): Indicates whether to enable keyword-based matching:
  - `True`: Enable keyword-based matching.
  - `False`: Disable keyword-based matching (default).
- `highlight` (`bool`): Specifies whether to enable highlighting of matched terms in the results:
  - `True`: Enable highlighting of matched terms.
  - `False`: Disable highlighting of matched terms (default).

Returns a list of `Chunk` objects representing the document chunks on success; raises an `Exception` on failure.

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.list_datasets(name="ragflow")
dataset = dataset[0]
path = './test_data/ragflow_test.txt'
documents = [{"display_name": "test_retrieve_chunks.txt", "blob": open(path, "rb").read()}]
docs = dataset.upload_documents(documents)
doc = docs[0]
doc.add_chunk(content="This is a chunk addition test")
for c in rag_object.retrieve(dataset_ids=[dataset.id], document_ids=[doc.id]):
    print(c)
```
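A tuned retrieval sketch building on the dataset above; the query text and parameter values are illustrative, not recommendations:

```python
for chunk in rag_object.retrieve(
    question="how to add a chunk",   # illustrative query
    dataset_ids=[dataset.id],
    similarity_threshold=0.3,        # stricter than the 0.2 default
    vector_similarity_weight=0.5,    # balance vector and term similarity
    keyword=True,                    # enable keyword-based matching
    highlight=True,                  # highlight matched terms
):
    print(chunk.content)
```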
```python
RAGFlow.create_chat(
    name: str,
    avatar: str = "",
    dataset_ids: list[str] = [],
    llm: Chat.LLM = None,
    prompt: Chat.Prompt = None
) -> Chat
```
Creates a chat assistant.
- `name` (`str`, *Required*): The name of the chat assistant.
- `avatar` (`str`): Base64 encoding of the avatar. Defaults to `""`.
- `dataset_ids` (`list[str]`): The IDs of the associated datasets. Defaults to `[]`.
- `llm` (`Chat.LLM`): The LLM settings for the chat assistant to create. Defaults to `None`. When the value is `None`, a dictionary with the following values will be generated as the default. An `LLM` object contains the following attributes:
  - `model_name`: `str` The chat model name. Defaults to `None`, in which case the user's default chat model will be used.
  - `temperature`: `float` Defaults to `0.1`.
  - `top_p`: `float` Defaults to `0.3`.
  - `presence_penalty`: `float` Defaults to `0.2`.
  - `frequency_penalty`: `float` Defaults to `0.7`.
- `prompt` (`Chat.Prompt`): Instructions for the LLM to follow. A `Prompt` object contains the following attributes:
  - `similarity_threshold`: `float` RAGFlow employs either a combination of weighted keyword similarity and weighted vector cosine similarity, or a combination of weighted keyword similarity and weighted reranking score during retrieval. If a similarity score falls below this threshold, the corresponding chunk will be excluded from the results. The default value is `0.2`.
  - `keywords_similarity_weight`: `float` This argument sets the weight of keyword similarity in the hybrid similarity score with vector cosine similarity or reranking model similarity. By adjusting this weight, you can control the influence of keyword similarity in relation to other similarity measures. The default value is `0.7`.
  - `top_n`: `int` This argument specifies the number of top chunks with similarity scores above the `similarity_threshold` that are fed to the LLM. The LLM will only access these 'top N' chunks. The default value is `8`.
  - `variables`: `list[dict[]]` This argument lists the variables to use in the 'System' field of **Chat Configurations**. Note that `knowledge` is a reserved variable, which represents the retrieved chunks. Defaults to `[{"key": "knowledge", "optional": True}]`.
  - `rerank_model`: `str` If it is not specified, vector cosine similarity will be used; otherwise, reranking score will be used. Defaults to `""`.
  - `top_k`: `int` Refers to the process of reordering or selecting the top-k items from a list or set based on a specific ranking criterion. Defaults to `1024`.
  - `empty_response`: `str` If nothing is retrieved in the dataset for the user's question, this will be used as the response. To allow the LLM to improvise when nothing is found, leave this blank. Defaults to `None`.
  - `opener`: `str` The opening greeting for the user. Defaults to `"Hi! I am your assistant, can I help you?"`.
  - `show_quote`: `bool` Indicates whether the source of text should be displayed. Defaults to `True`.
  - `prompt`: `str` The prompt content.

Returns a `Chat` object representing the chat assistant on success; raises an `Exception` on failure.

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
datasets = rag_object.list_datasets(name="kb_1")
dataset_ids = []
for dataset in datasets:
    dataset_ids.append(dataset.id)
assistant = rag_object.create_chat("Miss R", dataset_ids=dataset_ids)
```
```python
Chat.update(update_message: dict)
```
Updates configurations for the current chat assistant.
- `update_message` (`dict[str, str|list[str]|dict[]]`, *Required*): A dictionary representing the attributes to update, with the following keys:
  - `"name"`: `str` The revised name of the chat assistant.
  - `"avatar"`: `str` Base64 encoding of the avatar. Defaults to `""`.
  - `"dataset_ids"`: `list[str]` The datasets to update.
  - `"llm"`: `dict` The LLM settings:
    - `"model_name"`: `str` The chat model name.
    - `"temperature"`: `float` Controls the randomness of the model's predictions. A lower temperature results in more conservative responses, while a higher temperature yields more creative and diverse responses.
    - `"top_p"`: `float` Also known as "nucleus sampling", this parameter sets a threshold to select a smaller set of words to sample from.
    - `"presence_penalty"`: `float` This discourages the model from repeating the same information by penalizing words that have appeared in the conversation.
    - `"frequency_penalty"`: `float` Similar to presence penalty, this reduces the model's tendency to repeat the same words.
  - `"prompt"`: `dict` Instructions for the LLM to follow:
    - `"similarity_threshold"`: `float` RAGFlow employs either a combination of weighted keyword similarity and weighted vector cosine similarity, or a combination of weighted keyword similarity and weighted rerank score during retrieval. This argument sets the threshold for similarities between the user query and chunks. If a similarity score falls below this threshold, the corresponding chunk will be excluded from the results. The default value is `0.2`.
    - `"keywords_similarity_weight"`: `float` This argument sets the weight of keyword similarity in the hybrid similarity score with vector cosine similarity or reranking model similarity. By adjusting this weight, you can control the influence of keyword similarity in relation to other similarity measures. The default value is `0.7`.
    - `"top_n"`: `int` This argument specifies the number of top chunks with similarity scores above the `similarity_threshold` that are fed to the LLM. The LLM will only access these 'top N' chunks. The default value is `8`.
    - `"variables"`: `list[dict[]]` This argument lists the variables to use in the 'System' field of **Chat Configurations**. Note that `knowledge` is a reserved variable, which represents the retrieved chunks. Defaults to `[{"key": "knowledge", "optional": True}]`.
    - `"rerank_model"`: `str` If it is not specified, vector cosine similarity will be used; otherwise, reranking score will be used. Defaults to `""`.
    - `"empty_response"`: `str` If nothing is retrieved in the dataset for the user's question, this will be used as the response. To allow the LLM to improvise when nothing is retrieved, leave this blank. Defaults to `None`.
    - `"opener"`: `str` The opening greeting for the user. Defaults to `"Hi! I am your assistant, can I help you?"`.
    - `"show_quote"`: `bool` Indicates whether the source of text should be displayed. Defaults to `True`.
    - `"prompt"`: `str` The prompt content.

Returns no value on success; raises an `Exception` on failure.

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
datasets = rag_object.list_datasets(name="kb_1")
dataset_id = datasets[0].id
assistant = rag_object.create_chat("Miss R", dataset_ids=[dataset_id])
assistant.update({"name": "Stefan", "llm": {"temperature": 0.8}, "prompt": {"top_n": 8}})
```
```python
RAGFlow.delete_chats(ids: list[str] = None)
```
Deletes chat assistants by ID.
- `ids` (`list[str]`): The IDs of the chat assistants to delete. Defaults to `None`. If it is empty or not specified, all chat assistants in the system will be deleted.

Returns no value on success; raises an `Exception` on failure.

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
rag_object.delete_chats(ids=["id_1","id_2"])
```
```python
RAGFlow.list_chats(
    page: int = 1,
    page_size: int = 30,
    orderby: str = "create_time",
    desc: bool = True,
    id: str = None,
    name: str = None
) -> list[Chat]
```
Lists chat assistants.
- `page` (`int`): Specifies the page on which the chat assistants will be displayed. Defaults to `1`.
- `page_size` (`int`): The number of chat assistants on each page. Defaults to `30`.
- `orderby` (`str`): The attribute by which the results are sorted. Available options:
  - `"create_time"` (default)
  - `"update_time"`
- `desc` (`bool`): Indicates whether the retrieved chat assistants should be sorted in descending order. Defaults to `True`.
- `id` (`str`): The ID of the chat assistant to retrieve. Defaults to `None`.
- `name` (`str`): The name of the chat assistant to retrieve. Defaults to `None`.

Returns a list of `Chat` objects on success; raises an `Exception` on failure.

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
for assistant in rag_object.list_chats():
    print(assistant)
```
```python
Chat.create_session(name: str = "New session") -> Session
```
Creates a session with the current chat assistant.
- `name` (`str`): The name of the chat session to create.

Returns a `Session` object on success; raises an `Exception` on failure. The `Session` object contains the following attributes:

- `id`: `str` The auto-generated unique identifier of the created session.
- `name`: `str` The name of the created session.
- `message`: `list[Message]` The opening message of the created session. Default: `[{"role": "assistant", "content": "Hi! I am your assistant,can I help you?"}]`
- `chat_id`: `str` The ID of the associated chat assistant.

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
assistant = rag_object.list_chats(name="Miss R")
assistant = assistant[0]
session = assistant.create_session()
```
```python
Session.update(update_message: dict)
```
Updates the current session of the current chat assistant.
- `update_message` (`dict[str, Any]`, *Required*): A dictionary representing the attributes to update, with only one key:
  - `"name"`: `str` The revised name of the session.

Returns no value on success; raises an `Exception` on failure.

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
assistant = rag_object.list_chats(name="Miss R")
assistant = assistant[0]
session = assistant.create_session("session_name")
session.update({"name": "updated_name"})
```
```python
Chat.list_sessions(
    page: int = 1,
    page_size: int = 30,
    orderby: str = "create_time",
    desc: bool = True,
    id: str = None,
    name: str = None
) -> list[Session]
```
Lists sessions associated with the current chat assistant.
- `page` (`int`): Specifies the page on which the sessions will be displayed. Defaults to `1`.
- `page_size` (`int`): The number of sessions on each page. Defaults to `30`.
- `orderby` (`str`): The field by which sessions should be sorted. Available options:
  - `"create_time"` (default)
  - `"update_time"`
- `desc` (`bool`): Indicates whether the retrieved sessions should be sorted in descending order. Defaults to `True`.
- `id` (`str`): The ID of the chat session to retrieve. Defaults to `None`.
- `name` (`str`): The name of the chat session to retrieve. Defaults to `None`.

Returns a list of `Session` objects associated with the current chat assistant on success; raises an `Exception` on failure.

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
assistant = rag_object.list_chats(name="Miss R")
assistant = assistant[0]
for session in assistant.list_sessions():
    print(session)
```
```python
Chat.delete_sessions(ids: list[str] = None)
```
Deletes sessions of the current chat assistant by ID.
- `ids` (`list[str]`): The IDs of the sessions to delete. Defaults to `None`. If it is not specified, all sessions associated with the current chat assistant will be deleted.

Returns no value on success; raises an `Exception` on failure.

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
assistant = rag_object.list_chats(name="Miss R")
assistant = assistant[0]
assistant.delete_sessions(ids=["id_1","id_2"])
```
```python
Session.ask(question: str = "", stream: bool = False, **kwargs) -> Optional[Message, iter[Message]]
```
- `question` (`str`, *Required*): The question to start an AI-powered conversation. Defaults to `""`.
- `stream` (`bool`): Indicates whether to output responses in a streaming way:
  - `True`: Enable streaming.
  - `False`: (Default) Disable streaming.
- `**kwargs`: The parameters in the prompt (system).

Returns a `Message` object containing the response to the question if `stream` is set to `False`, or an iterator of `Message` objects (`iter[Message]`) if `stream` is set to `True`.

The following shows the attributes of a `Message` object:

- `id` (`str`): The auto-generated message ID.
- `content` (`str`): The content of the message. Defaults to `"Hi! I am your assistant, can I help you?"`.
- `reference` (`list[Chunk]`): A list of `Chunk` objects representing references to the message, each containing the following attributes:
  - `id`: `str`
  - `content`: `str`
  - `img_id`: `str`
  - `document_id`: `str`
  - `document_name`: `str`
  - `position`: `list[str]`
  - `dataset_id`: `str`
  - `similarity`: `float` A value in the range 0 to 1, with a higher value indicating greater similarity. It is the weighted sum of `vector_similarity` and `term_similarity`.
  - `vector_similarity`: `float` A value in the range 0 to 1, with a higher value indicating greater similarity between vector embeddings.
  - `term_similarity`: `float` A value in the range 0 to 1, with a higher value indicating greater similarity between keywords.

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
assistant = rag_object.list_chats(name="Miss R")
assistant = assistant[0]
session = assistant.create_session()

print("\n==================== Miss R =====================\n")
print("Hello. What can I do for you?")

while True:
    question = input("\n==================== User =====================\n> ")
    print("\n==================== Miss R =====================\n")

    cont = ""
    for ans in session.ask(question, stream=True):
        print(ans.content[len(cont):], end='', flush=True)
        cont = ans.content
```
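To inspect the sources behind an answer, read the `reference` list documented above from the final streamed `Message`. A sketch (that the last streamed message carries the complete reference list is an assumption):

```python
answer = None
for ans in session.ask("Who are you?", stream=True):
    answer = ans  # keep the final, most complete Message

if answer is not None:
    for ref in answer.reference:
        print(ref.document_name, ref.similarity)
```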
```python
Agent.create_session(**kwargs) -> Session
```
Creates a session with the current agent.
- `**kwargs`: The parameters defined in the agent's **Begin** component.

Returns a `Session` object on success; raises an `Exception` on failure. The `Session` object contains the following attributes:

- `id`: `str` The auto-generated unique identifier of the created session.
- `message`: `list[Message]` The messages of the created session assistant. Default: `[{"role": "assistant", "content": "Hi! I am your assistant,can I help you?"}]`
- `agent_id`: `str` The ID of the associated agent.

```python
from ragflow_sdk import RAGFlow, Agent

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
agent_id = "AGENT_ID"
agent = rag_object.list_agents(id=agent_id)[0]
session = agent.create_session()
```
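If the agent's **Begin** component declares inputs, pass them as keyword arguments. A sketch with a hypothetical Begin parameter named `lang`:

```python
# "lang" stands in for whatever inputs your Begin component defines.
session = agent.create_session(lang="en")
```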
```python
Session.ask(question: str = "", stream: bool = False) -> Optional[Message, iter[Message]]
```
- `question` (`str`): The question to start an AI-powered conversation. If the **Begin** component takes parameters, a question is not required.
- `stream` (`bool`): Indicates whether to output responses in a streaming way:
  - `True`: Enable streaming.
  - `False`: (Default) Disable streaming.

Returns a `Message` object containing the response to the question if `stream` is set to `False`, or an iterator of `Message` objects (`iter[Message]`) if `stream` is set to `True`.

The following shows the attributes of a `Message` object:

- `id` (`str`): The auto-generated message ID.
- `content` (`str`): The content of the message. Defaults to `"Hi! I am your assistant, can I help you?"`.
- `reference` (`list[Chunk]`): A list of `Chunk` objects representing references to the message, each containing the following attributes:
  - `id`: `str`
  - `content`: `str`
  - `image_id`: `str`
  - `document_id`: `str`
  - `document_name`: `str`
  - `position`: `list[str]`
  - `dataset_id`: `str`
  - `similarity`: `float` A value in the range 0 to 1, with a higher value indicating greater similarity. It is the weighted sum of `vector_similarity` and `term_similarity`.
  - `vector_similarity`: `float` A value in the range 0 to 1, with a higher value indicating greater similarity between vector embeddings.
  - `term_similarity`: `float` A value in the range 0 to 1, with a higher value indicating greater similarity between keywords.

```python
from ragflow_sdk import RAGFlow, Agent

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
agent_id = "AGENT_ID"
agent = rag_object.list_agents(id=agent_id)[0]
session = agent.create_session()

print("\n===== Miss R =====\n")
print("Hello. What can I do for you?")

while True:
    question = input("\n===== User =====\n> ")
    print("\n===== Miss R =====\n")

    cont = ""
    for ans in session.ask(question, stream=True):
        print(ans.content[len(cont):], end='', flush=True)
        cont = ans.content
```
```python
Agent.list_sessions(
    page: int = 1,
    page_size: int = 30,
    orderby: str = "update_time",
    desc: bool = True,
    id: str = None
) -> List[Session]
```
Lists sessions associated with the current agent.
- `page` (`int`): Specifies the page on which the sessions will be displayed. Defaults to `1`.
- `page_size` (`int`): The number of sessions on each page. Defaults to `30`.
- `orderby` (`str`): The field by which sessions should be sorted. Available options:
  - `"create_time"`
  - `"update_time"` (default)
- `desc` (`bool`): Indicates whether the retrieved sessions should be sorted in descending order. Defaults to `True`.
- `id` (`str`): The ID of the agent session to retrieve. Defaults to `None`.

Returns a list of `Session` objects associated with the current agent on success; raises an `Exception` on failure.

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
agent_id = "AGENT_ID"
agent = rag_object.list_agents(id=agent_id)[0]
sessions = agent.list_sessions()
for session in sessions:
    print(session)
```
```python
Agent.delete_sessions(ids: list[str] = None)
```
Deletes sessions of an agent by ID.
- `ids` (`list[str]`): The IDs of the sessions to delete. Defaults to `None`. If it is not specified, all sessions associated with the agent will be deleted.

Returns no value on success; raises an `Exception` on failure.

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
agent_id = "AGENT_ID"
agent = rag_object.list_agents(id=agent_id)[0]
agent.delete_sessions(ids=["id_1","id_2"])
```
```python
RAGFlow.list_agents(
    page: int = 1,
    page_size: int = 30,
    orderby: str = "create_time",
    desc: bool = True,
    id: str = None,
    title: str = None
) -> List[Agent]
```
Lists agents.
- `page` (`int`): Specifies the page on which the agents will be displayed. Defaults to `1`.
- `page_size` (`int`): The number of agents on each page. Defaults to `30`.
- `orderby` (`str`): The attribute by which the results are sorted. Available options:
  - `"create_time"` (default)
  - `"update_time"`
- `desc` (`bool`): Indicates whether the retrieved agents should be sorted in descending order. Defaults to `True`.
- `id` (`str`): The ID of the agent to retrieve. Defaults to `None`.
- `title` (`str`): The name of the agent to retrieve. Defaults to `None`.

Returns a list of `Agent` objects on success; raises an `Exception` on failure.

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
for agent in rag_object.list_agents():
    print(agent)
```
```python
RAGFlow.create_agent(
    title: str,
    dsl: dict,
    description: str | None = None
) -> None
```
Creates an agent.
- `title` (`str`): Specifies the title of the agent.
- `dsl` (`dict`): Specifies the canvas DSL of the agent.
- `description` (`str`): The description of the agent. Defaults to `None`.

Returns no value on success; raises an `Exception` on failure.

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
rag_object.create_agent(
    title="Test Agent",
    description="A test agent",
    dsl={
        # ... canvas DSL here ...
    }
)
```
```python
RAGFlow.update_agent(
    agent_id: str,
    title: str | None = None,
    description: str | None = None,
    dsl: dict | None = None
) -> None
```
Updates an agent.
- `agent_id` (`str`): Specifies the ID of the agent to be updated.
- `title` (`str`): Specifies the new title of the agent. `None` if you do not want to update this.
- `description` (`str`): The new description of the agent. `None` if you do not want to update this.
- `dsl` (`dict`): Specifies the new canvas DSL of the agent. `None` if you do not want to update this.

Returns no value on success; raises an `Exception` on failure.

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
rag_object.update_agent(
    agent_id="58af890a2a8911f0a71a11b922ed82d6",
    title="Test Agent",
    description="A test agent",
    dsl={
        # ... canvas DSL here ...
    }
)
```
```python
RAGFlow.delete_agent(
    agent_id: str
) -> None
```
Deletes an agent.
- `agent_id` (`str`): Specifies the ID of the agent to be deleted.

Returns no value on success; raises an `Exception` on failure.

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
rag_object.delete_agent("58af890a2a8911f0a71a11b922ed82d6")
```