
Fix agent completion requiring calling twice with parameters in begin component (#6659)

### What problem does this PR solve?

Fix #5418

The fix in #4329 actually works for agent flows with parameters as well, so this PR just relaxes the `else` branch of that change. With this PR the flow works fine on my side; it may need more testing to make sure it does not break anything else. I suspect the real problem is hidden deeper in the code that handles conversation and canvas execution.

After a few hours of debugging, the only difference I can see between running with and without parameters in the `begin` component is the `history` field of the canvas data. When the `begin` component contains parameters, the debug log shows:

```
2025-03-29 19:50:38,521 DEBUG 356590 { "component_name": "Begin", "params": {"output_var_name": "output", "message_history_window_size": 22, "query": [{"type": "fileUrls", "key": "fileUrls", "name": "files", "optional": true, "value": "问题.txt\n今天天气怎么样"}], "inputs": [], "debug_inputs": [], "prologue": "你好! 我是你的助理,有什么可以帮到你的吗?", "output": null}, "output": null, "inputs": [] }, history: [["user", "请回答我上传文件中的问题。"]], kwargs: {"stream": false}
2025-03-29 19:50:38,523 DEBUG 356590 { "component_name": "Answer", "params": {"output_var_name": "output", "message_history_window_size": 22, "query": [], "inputs": [], "debug_inputs": [], "post_answers": [], "output": null}, "output": null, "inputs": [] }, history: [["user", "请回答我上传文件中的问题。"]], kwargs: {"stream": false}
```

Execution then stops and does not go further along the flow. When the `begin` component does not contain any parameter, the debug log shows:

```
2025-03-29 19:41:13,518 DEBUG 353596 { "component_name": "Begin", "params": {"output_var_name": "output", "message_history_window_size": 22, "query": [], "inputs": [], "debug_inputs": [], "prologue": "你好! 我是你的助理,有什么可以帮到你的吗?", "output": null}, "output": null, "inputs": [] }, history: [], kwargs: {"stream": false}
2025-03-29 19:41:13,520 DEBUG 353596 { "component_name": "Answer", "params": {"output_var_name": "output", "message_history_window_size": 22, "query": [], "inputs": [], "debug_inputs": [], "post_answers": [], "output": null}, "output": null, "inputs": [] }, history: [], kwargs: {"stream": false}
2025-03-29 19:41:13,556 INFO 353596 127.0.0.1 - - [29/Mar/2025 19:41:13] "POST /api/v1/agents/fee6886a0c6f11f09b48eb8798e9aa9b/sessions?user_id=123 HTTP/1.1" 200 -
2025-03-29 19:41:21,115 DEBUG 353596 Canvas.prepare2run: Retrieval:LateGuestsNotice
2025-03-29 19:41:21,116 DEBUG 353596 { "component_name": "Retrieval", "params": {"output_var_name": "output", "message_history_window_size": 22, "query": [], "inputs": [], "debug_inputs": [], "similarity_threshold": 0.2, "keywords_similarity_weight": 0.3, "top_n": 8, "top_k": 1024, "kb_ids": ["9aca3c700c5911f0811caf35658b9385"], "rerank_id": "", "empty_response": "", "tavily_api_key": "", "use_kg": false, "output": null}, "output": null, "inputs": [] }, history: [["user", "请回答我上传文件中的问题。"]], kwargs: {"stream": false}
```

This time execution correctly proceeds along the flow and generates the correct answer. The difference is clear: when the `begin` component has any parameter, the `history` field is filled from the very beginning, while it is just `[]` when the `begin` component has no parameter.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
7 months ago
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import json
import re
import time

import tiktoken
from flask import Response, jsonify, request

from api.db.services.conversation_service import ConversationService, iframe_completion
from api.db.services.conversation_service import completion as rag_completion
from api.db.services.canvas_service import completion as agent_completion, completionOpenAI
from agent.canvas import Canvas
from api.db import LLMType, StatusEnum
from api.db.db_models import APIToken
from api.db.services.api_service import API4ConversationService
from api.db.services.canvas_service import UserCanvasService
from api.db.services.dialog_service import DialogService, ask, chat
from api.db.services.file_service import FileService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.utils import get_uuid
from api.utils.api_utils import get_result, token_required, get_data_openai, get_error_data_result, validate_request, check_duplicate_ids
from api.db.services.llm_service import LLMBundle


@manager.route("/chats/<chat_id>/sessions", methods=["POST"])  # noqa: F821
@token_required
def create(tenant_id, chat_id):
    req = request.json
    req["dialog_id"] = chat_id
    dia = DialogService.query(tenant_id=tenant_id, id=req["dialog_id"], status=StatusEnum.VALID.value)
    if not dia:
        return get_error_data_result(message="You do not own the assistant.")
    conv = {
        "id": get_uuid(),
        "dialog_id": req["dialog_id"],
        "name": req.get("name", "New session"),
        "message": [{"role": "assistant", "content": dia[0].prompt_config.get("prologue")}],
        "user_id": req.get("user_id", ""),
    }
    if not conv.get("name"):
        return get_error_data_result(message="`name` can not be empty.")
    ConversationService.save(**conv)
    e, conv = ConversationService.get_by_id(conv["id"])
    if not e:
        return get_error_data_result(message="Fail to create a session!")
    conv = conv.to_dict()
    conv["messages"] = conv.pop("message")
    conv["chat_id"] = conv.pop("dialog_id")
    del conv["reference"]
    return get_result(data=conv)


@manager.route("/agents/<agent_id>/sessions", methods=["POST"])  # noqa: F821
@token_required
def create_agent_session(tenant_id, agent_id):
    req = request.json
    if not request.is_json:
        req = request.form
    files = request.files
    user_id = request.args.get("user_id", "")
    e, cvs = UserCanvasService.get_by_id(agent_id)
    if not e:
        return get_error_data_result("Agent not found.")
    if not UserCanvasService.query(user_id=tenant_id, id=agent_id):
        return get_error_data_result("You cannot access the agent.")
    if not isinstance(cvs.dsl, str):
        cvs.dsl = json.dumps(cvs.dsl, ensure_ascii=False)
    canvas = Canvas(cvs.dsl, tenant_id)
    canvas.reset()
    query = canvas.get_preset_param()
    if query:
        for ele in query:
            if not ele["optional"]:
                if ele["type"] == "file":
                    if files is None or not files.get(ele["key"]):
                        return get_error_data_result(f"`{ele['key']}` with type `{ele['type']}` is required")
                    upload_file = files.get(ele["key"])
                    file_content = FileService.parse_docs([upload_file], user_id)
                    file_name = upload_file.filename
                    ele["value"] = file_name + "\n" + file_content
                else:
                    if req is None or not req.get(ele["key"]):
                        return get_error_data_result(f"`{ele['key']}` with type `{ele['type']}` is required")
                    ele["value"] = req[ele["key"]]
            else:
                if ele["type"] == "file":
                    if files is not None and files.get(ele["key"]):
                        upload_file = files.get(ele["key"])
                        file_content = FileService.parse_docs([upload_file], user_id)
                        file_name = upload_file.filename
                        ele["value"] = file_name + "\n" + file_content
                    else:
                        if "value" in ele:
                            ele.pop("value")
                else:
                    if req is not None and req.get(ele["key"]):
                        ele["value"] = req[ele["key"]]
                    else:
                        if "value" in ele:
                            ele.pop("value")
    for ans in canvas.run(stream=False):
        pass
    cvs.dsl = json.loads(str(canvas))
    conv = {"id": get_uuid(), "dialog_id": cvs.id, "user_id": user_id, "message": [{"role": "assistant", "content": canvas.get_prologue()}], "source": "agent", "dsl": cvs.dsl}
    API4ConversationService.save(**conv)
    conv["agent_id"] = conv.pop("dialog_id")
    return get_result(data=conv)
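The preset-parameter handling above reduces to a small set of rules: a required parameter must be supplied (as an uploaded file or a form value), while an optional one silently drops any stale `value`. A minimal standalone sketch of those rules (the helper name and signature are hypothetical, not part of the API):

```python
def resolve_preset_param(ele: dict, form_value=None, file_text=None) -> dict:
    """Sketch of the preset-parameter rules in create_agent_session.

    Required params must be supplied (ValueError otherwise); optional
    params keep no "value" key when nothing was provided.
    """
    supplied = file_text if ele["type"] == "file" else form_value
    if supplied:
        ele["value"] = supplied
    elif not ele["optional"]:
        raise ValueError(f"`{ele['key']}` with type `{ele['type']}` is required")
    else:
        # Optional and not supplied: drop any leftover value.
        ele.pop("value", None)
    return ele
```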


@manager.route("/chats/<chat_id>/sessions/<session_id>", methods=["PUT"])  # noqa: F821
@token_required
def update(tenant_id, chat_id, session_id):
    req = request.json
    req["dialog_id"] = chat_id
    conv_id = session_id
    conv = ConversationService.query(id=conv_id, dialog_id=chat_id)
    if not conv:
        return get_error_data_result(message="Session does not exist")
    if not DialogService.query(id=chat_id, tenant_id=tenant_id, status=StatusEnum.VALID.value):
        return get_error_data_result(message="You do not own the session")
    if "message" in req or "messages" in req:
        return get_error_data_result(message="`message` can not be changed")
    if "reference" in req:
        return get_error_data_result(message="`reference` can not be changed")
    if "name" in req and not req.get("name"):
        return get_error_data_result(message="`name` can not be empty.")
    if not ConversationService.update_by_id(conv_id, req):
        return get_error_data_result(message="Session updates error")
    return get_result()


@manager.route("/chats/<chat_id>/completions", methods=["POST"])  # noqa: F821
@token_required
def chat_completion(tenant_id, chat_id):
    req = request.json
    if not req:
        req = {"question": ""}
    if not req.get("session_id"):
        req["question"] = ""
    if not DialogService.query(tenant_id=tenant_id, id=chat_id, status=StatusEnum.VALID.value):
        return get_error_data_result(f"You don't own the chat {chat_id}")
    if req.get("session_id"):
        if not ConversationService.query(id=req["session_id"], dialog_id=chat_id):
            return get_error_data_result(f"You don't own the session {req['session_id']}")
    if req.get("stream", True):
        resp = Response(rag_completion(tenant_id, chat_id, **req), mimetype="text/event-stream")
        resp.headers.add_header("Cache-control", "no-cache")
        resp.headers.add_header("Connection", "keep-alive")
        resp.headers.add_header("X-Accel-Buffering", "no")
        resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
        return resp
    else:
        answer = None
        for ans in rag_completion(tenant_id, chat_id, **req):
            answer = ans
            break
        return get_result(data=answer)


@manager.route("/chats_openai/<chat_id>/chat/completions", methods=["POST"])  # noqa: F821
@validate_request("model", "messages")  # noqa: F821
@token_required
def chat_completion_openai_like(tenant_id, chat_id):
    """
    OpenAI-like chat completion API that simulates the behavior of OpenAI's completions endpoint.

    This function allows users to interact with a model and receive responses based on a series of historical messages.
    If `stream` is set to True (the default), the response is streamed in chunks, mimicking the OpenAI-style API.
    If `stream` is set to False explicitly, the response is returned in a single complete answer.

    Example usage:

    curl -X POST https://ragflow_address.com/api/v1/chats_openai/<chat_id>/chat/completions \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer $RAGFLOW_API_KEY" \
        -d '{
            "model": "model",
            "messages": [{"role": "user", "content": "Say this is a test!"}],
            "stream": true
        }'

    Alternatively, you can use Python's `OpenAI` client:

    from openai import OpenAI

    model = "model"
    client = OpenAI(api_key="ragflow-api-key", base_url=f"http://ragflow_address/api/v1/chats_openai/<chat_id>")

    completion = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who are you?"},
            {"role": "assistant", "content": "I am an AI assistant named..."},
            {"role": "user", "content": "Can you tell me how to install neovim"},
        ],
        stream=True,
    )

    stream = True
    if stream:
        for chunk in completion:
            print(chunk)
    else:
        print(completion.choices[0].message.content)
    """
    req = request.json

    messages = req.get("messages", [])
    # To prevent empty [] input
    if len(messages) < 1:
        return get_error_data_result("You have to provide messages.")
    if messages[-1]["role"] != "user":
        return get_error_data_result("The last content of this conversation is not from user.")

    prompt = messages[-1]["content"]
    # Treat context tokens as reasoning tokens
    context_token_used = sum(len(message["content"]) for message in messages)

    dia = DialogService.query(tenant_id=tenant_id, id=chat_id, status=StatusEnum.VALID.value)
    if not dia:
        return get_error_data_result(f"You don't own the chat {chat_id}")
    dia = dia[0]

    # Filter system and non-sense assistant messages
    msg = []
    for m in messages:
        if m["role"] == "system":
            continue
        if m["role"] == "assistant" and not msg:
            continue
        msg.append(m)

    # tools = get_tools()
    # toolcall_session = SimpleFunctionCallServer()
    tools = None
    toolcall_session = None

    if req.get("stream", True):
        # The value for the usage field on all chunks except for the last one will be null.
        # The usage field on the last chunk contains token usage statistics for the entire request.
        # The choices field on the last chunk will always be an empty array [].
        def streamed_response_generator(chat_id, dia, msg):
            token_used = 0
            answer_cache = ""
            reasoning_cache = ""
            response = {
                "id": f"chatcmpl-{chat_id}",
                "choices": [{"delta": {"content": "", "role": "assistant", "function_call": None, "tool_calls": None, "reasoning_content": ""}, "finish_reason": None, "index": 0, "logprobs": None}],
                "created": int(time.time()),
                "model": "model",
                "object": "chat.completion.chunk",
                "system_fingerprint": "",
                "usage": None,
            }

            try:
                for ans in chat(dia, msg, True, toolcall_session=toolcall_session, tools=tools):
                    answer = ans["answer"]

                    reasoning_match = re.search(r"<think>(.*?)</think>", answer, flags=re.DOTALL)
                    if reasoning_match:
                        reasoning_part = reasoning_match.group(1)
                        content_part = answer[reasoning_match.end():]
                    else:
                        reasoning_part = ""
                        content_part = answer

                    reasoning_incremental = ""
                    if reasoning_part:
                        if reasoning_part.startswith(reasoning_cache):
                            reasoning_incremental = reasoning_part.replace(reasoning_cache, "", 1)
                        else:
                            reasoning_incremental = reasoning_part
                        reasoning_cache = reasoning_part

                    content_incremental = ""
                    if content_part:
                        if content_part.startswith(answer_cache):
                            content_incremental = content_part.replace(answer_cache, "", 1)
                        else:
                            content_incremental = content_part
                        answer_cache = content_part

                    token_used += len(reasoning_incremental) + len(content_incremental)

                    if not any([reasoning_incremental, content_incremental]):
                        continue
                    if reasoning_incremental:
                        response["choices"][0]["delta"]["reasoning_content"] = reasoning_incremental
                    else:
                        response["choices"][0]["delta"]["reasoning_content"] = None
                    if content_incremental:
                        response["choices"][0]["delta"]["content"] = content_incremental
                    else:
                        response["choices"][0]["delta"]["content"] = None
                    yield f"data:{json.dumps(response, ensure_ascii=False)}\n\n"
            except Exception as e:
                response["choices"][0]["delta"]["content"] = "**ERROR**: " + str(e)
                yield f"data:{json.dumps(response, ensure_ascii=False)}\n\n"

            # The last chunk
            response["choices"][0]["delta"]["content"] = None
            response["choices"][0]["delta"]["reasoning_content"] = None
            response["choices"][0]["finish_reason"] = "stop"
            response["usage"] = {"prompt_tokens": len(prompt), "completion_tokens": token_used, "total_tokens": len(prompt) + token_used}
            yield f"data:{json.dumps(response, ensure_ascii=False)}\n\n"
            yield "data:[DONE]\n\n"

        resp = Response(streamed_response_generator(chat_id, dia, msg), mimetype="text/event-stream")
        resp.headers.add_header("Cache-control", "no-cache")
        resp.headers.add_header("Connection", "keep-alive")
        resp.headers.add_header("X-Accel-Buffering", "no")
        resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
        return resp
    else:
        answer = None
        for ans in chat(dia, msg, False, toolcall_session=toolcall_session, tools=tools):
            # focus answer content only
            answer = ans
            break
        content = answer["answer"]

        response = {
            "id": f"chatcmpl-{chat_id}",
            "object": "chat.completion",
            "created": int(time.time()),
            "model": req.get("model", ""),
            "usage": {
                "prompt_tokens": len(prompt),
                "completion_tokens": len(content),
                "total_tokens": len(prompt) + len(content),
                "completion_tokens_details": {
                    "reasoning_tokens": context_token_used,
                    "accepted_prediction_tokens": len(content),
                    "rejected_prediction_tokens": 0,  # 0 for simplicity
                },
            },
            "choices": [{"message": {"role": "assistant", "content": content}, "logprobs": None, "finish_reason": "stop", "index": 0}],
        }
        return jsonify(response)
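The streaming branch receives each answer as a cumulative snapshot and turns it into an incremental delta by prefix comparison against a cache (the route above uses `str.replace(cache, "", 1)`; the slicing version below is an equivalent sketch of the same idea, not the route's exact code):

```python
def next_delta(snapshot: str, cache: str) -> tuple[str, str]:
    """Return (delta, new_cache) for a cumulative answer snapshot.

    If the new snapshot extends the cached text, emit only the unseen
    suffix; if the model rewrote earlier text, resend the whole snapshot.
    """
    if snapshot.startswith(cache):
        delta = snapshot[len(cache):]
    else:
        delta = snapshot
    return delta, snapshot
```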


@manager.route('/agents_openai/<agent_id>/chat/completions', methods=['POST'])  # noqa: F821
@validate_request("model", "messages")  # noqa: F821
@token_required
def agents_completion_openai_compatibility(tenant_id, agent_id):
    req = request.json
    tiktokenenc = tiktoken.get_encoding("cl100k_base")
    messages = req.get("messages", [])
    if not messages:
        return get_error_data_result("You must provide at least one message.")
    if not UserCanvasService.query(user_id=tenant_id, id=agent_id):
        return get_error_data_result(f"You don't own the agent {agent_id}")

    filtered_messages = [m for m in messages if m["role"] in ["user", "assistant"]]
    prompt_tokens = sum(len(tiktokenenc.encode(m["content"])) for m in filtered_messages)
    if not filtered_messages:
        return jsonify(get_data_openai(
            id=agent_id,
            content="No valid messages found (user or assistant).",
            finish_reason="stop",
            model=req.get("model", ""),
            completion_tokens=len(tiktokenenc.encode("No valid messages found (user or assistant).")),
            prompt_tokens=prompt_tokens,
        ))

    # Get the last user message as the question
    question = next((m["content"] for m in reversed(messages) if m["role"] == "user"), "")

    if req.get("stream", True):
        return Response(completionOpenAI(tenant_id, agent_id, question, session_id=req.get("id", ""), stream=True), mimetype="text/event-stream")
    else:
        # For non-streaming, just return the response directly
        response = next(completionOpenAI(tenant_id, agent_id, question, session_id=req.get("id", ""), stream=False))
        return jsonify(response)


@manager.route("/agents/<agent_id>/completions", methods=["POST"])  # noqa: F821
@token_required
def agent_completions(tenant_id, agent_id):
    req = request.json
    cvs = UserCanvasService.query(user_id=tenant_id, id=agent_id)
    if not cvs:
        return get_error_data_result(f"You don't own the agent {agent_id}")
    if req.get("session_id"):
        dsl = cvs[0].dsl
        if not isinstance(dsl, str):
            dsl = json.dumps(dsl)
        conv = API4ConversationService.query(id=req["session_id"], dialog_id=agent_id)
        if not conv:
            return get_error_data_result(f"You don't own the session {req['session_id']}")
        # If an update to UserCanvas is detected, update the API4Conversation.dsl
        sync_dsl = req.get("sync_dsl", False)
        if sync_dsl is True and cvs[0].update_time > conv[0].update_time:
            current_dsl = conv[0].dsl
            new_dsl = json.loads(dsl)
            state_fields = ["history", "messages", "path", "reference"]
            states = {field: current_dsl.get(field, []) for field in state_fields}
            current_dsl.update(new_dsl)
            current_dsl.update(states)
            API4ConversationService.update_by_id(req["session_id"], {"dsl": current_dsl})
    else:
        req["question"] = ""
    if req.get("stream", True):
        resp = Response(agent_completion(tenant_id, agent_id, **req), mimetype="text/event-stream")
        resp.headers.add_header("Cache-control", "no-cache")
        resp.headers.add_header("Connection", "keep-alive")
        resp.headers.add_header("X-Accel-Buffering", "no")
        resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
        return resp
    try:
        for answer in agent_completion(tenant_id, agent_id, **req):
            return get_result(data=answer)
    except Exception as e:
        return get_error_data_result(str(e))
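The `sync_dsl` branch adopts the freshly saved canvas definition while preserving the session's own runtime state (conversation history, messages, execution path, references). Factored out as a pure-dict sketch (the helper name is illustrative, not part of the codebase):

```python
def merge_dsl(current_dsl: dict, new_dsl: dict) -> dict:
    """Merge an updated canvas DSL into a session's stored DSL.

    Everything comes from the new canvas except the per-session state
    fields, which are kept from the stored session DSL.
    """
    state_fields = ["history", "messages", "path", "reference"]
    states = {field: current_dsl.get(field, []) for field in state_fields}
    merged = dict(current_dsl)
    merged.update(new_dsl)   # take the updated graph definition
    merged.update(states)    # restore the session's runtime state
    return merged
```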


@manager.route("/chats/<chat_id>/sessions", methods=["GET"])  # noqa: F821
@token_required
def list_session(tenant_id, chat_id):
    if not DialogService.query(tenant_id=tenant_id, id=chat_id, status=StatusEnum.VALID.value):
        return get_error_data_result(message=f"You don't own the assistant {chat_id}.")
    id = request.args.get("id")
    name = request.args.get("name")
    page_number = int(request.args.get("page", 1))
    items_per_page = int(request.args.get("page_size", 30))
    orderby = request.args.get("orderby", "create_time")
    user_id = request.args.get("user_id")
    if request.args.get("desc") == "False" or request.args.get("desc") == "false":
        desc = False
    else:
        desc = True
    convs = ConversationService.get_list(chat_id, page_number, items_per_page, orderby, desc, id, name, user_id)
    if not convs:
        return get_result(data=[])
    for conv in convs:
        conv["messages"] = conv.pop("message")
        infos = conv["messages"]
        for info in infos:
            if "prompt" in info:
                info.pop("prompt")
        conv["chat_id"] = conv.pop("dialog_id")
        if conv["reference"]:
            messages = conv["messages"]
            message_num = 0
            while message_num < len(messages) and message_num < len(conv["reference"]):
                if message_num != 0 and messages[message_num]["role"] != "user":
                    chunk_list = []
                    if "chunks" in conv["reference"][message_num]:
                        chunks = conv["reference"][message_num]["chunks"]
                        for chunk in chunks:
                            new_chunk = {
                                "id": chunk.get("chunk_id", chunk.get("id")),
                                "content": chunk.get("content_with_weight", chunk.get("content")),
                                "document_id": chunk.get("doc_id", chunk.get("document_id")),
                                "document_name": chunk.get("docnm_kwd", chunk.get("document_name")),
                                "dataset_id": chunk.get("kb_id", chunk.get("dataset_id")),
                                "image_id": chunk.get("image_id", chunk.get("img_id")),
                                "positions": chunk.get("positions", chunk.get("position_int")),
                            }
                            chunk_list.append(new_chunk)
                    messages[message_num]["reference"] = chunk_list
                message_num += 1
        del conv["reference"]
    return get_result(data=convs)


@manager.route("/agents/<agent_id>/sessions", methods=["GET"])  # noqa: F821
@token_required
def list_agent_session(tenant_id, agent_id):
    if not UserCanvasService.query(user_id=tenant_id, id=agent_id):
        return get_error_data_result(message=f"You don't own the agent {agent_id}.")
    id = request.args.get("id")
    user_id = request.args.get("user_id")
    page_number = int(request.args.get("page", 1))
    items_per_page = int(request.args.get("page_size", 30))
    orderby = request.args.get("orderby", "update_time")
    if request.args.get("desc") == "False" or request.args.get("desc") == "false":
        desc = False
    else:
        desc = True
    # dsl defaults to True in all cases except for False and false
    include_dsl = request.args.get("dsl") != "False" and request.args.get("dsl") != "false"
    convs = API4ConversationService.get_list(agent_id, tenant_id, page_number, items_per_page, orderby, desc, id, user_id, include_dsl)
    if not convs:
        return get_result(data=[])
    for conv in convs:
        conv["messages"] = conv.pop("message")
        infos = conv["messages"]
        for info in infos:
            if "prompt" in info:
                info.pop("prompt")
        conv["agent_id"] = conv.pop("dialog_id")
        if conv["reference"]:
            messages = conv["messages"]
            message_num = 0
            chunk_num = 0
            while message_num < len(messages):
                if message_num != 0 and messages[message_num]["role"] != "user":
                    chunk_list = []
                    if "chunks" in conv["reference"][chunk_num]:
                        chunks = conv["reference"][chunk_num]["chunks"]
                        for chunk in chunks:
                            new_chunk = {
                                "id": chunk.get("chunk_id", chunk.get("id")),
                                "content": chunk.get("content_with_weight", chunk.get("content")),
                                "document_id": chunk.get("doc_id", chunk.get("document_id")),
                                "document_name": chunk.get("docnm_kwd", chunk.get("document_name")),
                                "dataset_id": chunk.get("kb_id", chunk.get("dataset_id")),
                                "image_id": chunk.get("image_id", chunk.get("img_id")),
                                "positions": chunk.get("positions", chunk.get("position_int")),
                            }
                            chunk_list.append(new_chunk)
                    chunk_num += 1
                    messages[message_num]["reference"] = chunk_list
                message_num += 1
        del conv["reference"]
    return get_result(data=convs)


@manager.route("/chats/<chat_id>/sessions", methods=["DELETE"])  # noqa: F821
@token_required
def delete(tenant_id, chat_id):
    if not DialogService.query(id=chat_id, tenant_id=tenant_id, status=StatusEnum.VALID.value):
        return get_error_data_result(message="You don't own the chat")
    errors = []
    success_count = 0
    req = request.json
    convs = ConversationService.query(dialog_id=chat_id)
    if not req:
        ids = None
    else:
        ids = req.get("ids")
    if not ids:
        conv_list = []
        for conv in convs:
            conv_list.append(conv.id)
    else:
        conv_list = ids
    unique_conv_ids, duplicate_messages = check_duplicate_ids(conv_list, "session")
    conv_list = unique_conv_ids
    for id in conv_list:
        conv = ConversationService.query(id=id, dialog_id=chat_id)
        if not conv:
            errors.append(f"The chat doesn't own the session {id}")
            continue
        ConversationService.delete_by_id(id)
        success_count += 1
    if errors:
        if success_count > 0:
            return get_result(
                data={"success_count": success_count, "errors": errors},
                message=f"Partially deleted {success_count} sessions with {len(errors)} errors",
            )
        else:
            return get_error_data_result(message="; ".join(errors))
    if duplicate_messages:
        if success_count > 0:
            return get_result(
                message=f"Partially deleted {success_count} sessions with {len(duplicate_messages)} errors",
                data={"success_count": success_count, "errors": duplicate_messages},
            )
        else:
            return get_error_data_result(message=";".join(duplicate_messages))
    return get_result()
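Both delete endpoints run requested ids through `check_duplicate_ids` before deleting, so repeats are removed and reported rather than deleted twice. A hypothetical stand-in showing the expected shape of that helper (its real implementation lives in `api.utils.api_utils` and may differ):

```python
def dedupe_ids(ids, kind="session"):
    """Keep the first occurrence of each id; report repeats as messages."""
    seen, unique, duplicates = set(), [], []
    for item in ids:
        if item in seen:
            duplicates.append(f"Duplicate {kind} id: {item}")
        else:
            seen.add(item)
            unique.append(item)
    return unique, duplicates
```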

@manager.route("/agents/<agent_id>/sessions", methods=["DELETE"])  # noqa: F821
@token_required
def delete_agent_session(tenant_id, agent_id):
    errors = []
    success_count = 0
    req = request.json
    cvs = UserCanvasService.query(user_id=tenant_id, id=agent_id)
    if not cvs:
        return get_error_data_result(f"You don't own the agent {agent_id}")

    convs = API4ConversationService.query(dialog_id=agent_id)
    if not convs:
        return get_error_data_result(f"Agent {agent_id} has no sessions")

    if not req:
        ids = None
    else:
        ids = req.get("ids")

    if not ids:
        conv_list = [conv.id for conv in convs]
    else:
        conv_list = ids

    unique_conv_ids, duplicate_messages = check_duplicate_ids(conv_list, "session")
    conv_list = unique_conv_ids

    for session_id in conv_list:
        conv = API4ConversationService.query(id=session_id, dialog_id=agent_id)
        if not conv:
            errors.append(f"The agent doesn't own the session {session_id}")
            continue
        API4ConversationService.delete_by_id(session_id)
        success_count += 1

    if errors:
        if success_count > 0:
            return get_result(
                data={"success_count": success_count, "errors": errors},
                message=f"Partially deleted {success_count} sessions with {len(errors)} errors"
            )
        else:
            return get_error_data_result(message="; ".join(errors))

    if duplicate_messages:
        if success_count > 0:
            return get_result(
                message=f"Partially deleted {success_count} sessions with {len(duplicate_messages)} errors",
                data={"success_count": success_count, "errors": duplicate_messages}
            )
        else:
            return get_error_data_result(message=";".join(duplicate_messages))

    return get_result()
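`check_duplicate_ids` is imported from elsewhere in the project; the handler above only relies on it returning the first-seen unique ids plus one message per duplicate. A minimal sketch of that contract (the function name suffix and the message wording are assumptions for illustration) could look like:

```python
def check_duplicate_ids_sketch(ids, id_type="session"):
    """Hypothetical re-implementation of the check_duplicate_ids contract:
    split `ids` into first-seen unique ids and a message per duplicate."""
    seen = set()
    unique_ids = []
    duplicate_messages = []
    for i in ids:
        if i in seen:
            # Each repeated id yields a human-readable message for the response.
            duplicate_messages.append(f"Duplicate {id_type} ids: {i}")
        else:
            seen.add(i)
            unique_ids.append(i)
    return unique_ids, duplicate_messages
```

This is why the handler can report partial success: deletions run only over the unique ids, while the duplicate messages are surfaced in the final response.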

@manager.route("/sessions/ask", methods=["POST"])  # noqa: F821
@token_required
def ask_about(tenant_id):
    req = request.json
    if not req.get("question"):
        return get_error_data_result("`question` is required.")
    if not req.get("dataset_ids"):
        return get_error_data_result("`dataset_ids` is required.")
    if not isinstance(req.get("dataset_ids"), list):
        return get_error_data_result("`dataset_ids` should be a list.")
    req["kb_ids"] = req.pop("dataset_ids")
    for kb_id in req["kb_ids"]:
        if not KnowledgebaseService.accessible(kb_id, tenant_id):
            return get_error_data_result(f"You don't own the dataset {kb_id}.")
        kbs = KnowledgebaseService.query(id=kb_id)
        kb = kbs[0]
        if kb.chunk_num == 0:
            return get_error_data_result(f"The dataset {kb_id} doesn't own parsed file")
    uid = tenant_id

    def stream():
        nonlocal req, uid
        try:
            for ans in ask(req["question"], req["kb_ids"], uid):
                yield "data:" + json.dumps({"code": 0, "message": "", "data": ans}, ensure_ascii=False) + "\n\n"
        except Exception as e:
            yield "data:" + json.dumps({"code": 500, "message": str(e), "data": {"answer": "**ERROR**: " + str(e), "reference": []}}, ensure_ascii=False) + "\n\n"
        yield "data:" + json.dumps({"code": 0, "message": "", "data": True}, ensure_ascii=False) + "\n\n"

    resp = Response(stream(), mimetype="text/event-stream")
    resp.headers.add_header("Cache-control", "no-cache")
    resp.headers.add_header("Connection", "keep-alive")
    resp.headers.add_header("X-Accel-Buffering", "no")
    resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
    return resp
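Each event emitted by the `stream()` generator above is one `data:`-prefixed JSON payload terminated by a blank line, with a final `"data": true` event marking the end of the stream. A minimal client-side parser for that framing (the helper name is hypothetical) might be:

```python
import json

def parse_sse_events(raw):
    """Parse the "data:<json>\\n\\n" frames emitted by /sessions/ask
    and return the decoded JSON payloads in order."""
    events = []
    for block in raw.split("\n\n"):
        block = block.strip()
        if block.startswith("data:"):
            events.append(json.loads(block[len("data:"):]))
    return events
```

A consumer would read payloads until one arrives whose `"data"` field is `True`, which signals that the answer stream is complete.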

@manager.route("/sessions/related_questions", methods=["POST"])  # noqa: F821
@token_required
def related_questions(tenant_id):
    req = request.json
    if not req.get("question"):
        return get_error_data_result("`question` is required.")
    question = req["question"]
    chat_mdl = LLMBundle(tenant_id, LLMType.CHAT)
    prompt = """
Objective: To generate search terms related to the user's search keywords, helping users find more valuable information.

Instructions:
 - Based on the keywords provided by the user, generate 5-10 related search terms.
 - Each search term should be directly or indirectly related to the keyword, guiding the user to find more valuable information.
 - Use common, general terms as much as possible, avoiding obscure words or technical jargon.
 - Keep the term length between 2-4 words, concise and clear.
 - DO NOT translate, use the language of the original keywords.

### Example:
Keywords: Chinese football
Related search terms:
1. Current status of Chinese football
2. Reform of Chinese football
3. Youth training of Chinese football
4. Chinese football in the Asian Cup
5. Chinese football in the World Cup

Reason:
 - When searching, users often only use one or two keywords, making it difficult to fully express their information needs.
 - Generating related search terms can help users dig deeper into relevant information and improve search efficiency.
 - At the same time, related terms can also help search engines better understand user needs and return more accurate search results.
"""
    ans = chat_mdl.chat(
        prompt,
        [
            {
                "role": "user",
                "content": f"""
Keywords: {question}
Related search terms:
""",
            }
        ],
        {"temperature": 0.9},
    )
    return get_result(data=[re.sub(r"^[0-9]\. ", "", a) for a in ans.split("\n") if re.match(r"^[0-9]\. ", a)])
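The return line above keeps only the numbered lines of the model's answer and strips their `N. ` prefixes. Run against a made-up sample completion, the filtering behaves like this:

```python
import re

# Hypothetical model output: a header, two numbered terms, and trailing prose.
sample_answer = """Related search terms:
1. Current status of Chinese football
2. Reform of Chinese football
Reason: these expand the original query."""

# Same comprehension as in related_questions: keep lines matching "N. ",
# then strip that prefix.
terms = [re.sub(r"^[0-9]\. ", "", a) for a in sample_answer.split("\n") if re.match(r"^[0-9]\. ", a)]
```

Note that the pattern matches exactly one leading digit, so a tenth item (`10. ...`) would fail the `re.match` and be dropped entirely; the prompt's "generate 5-10 related search terms" keeps that edge case mostly out of reach.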

@manager.route("/chatbots/<dialog_id>/completions", methods=["POST"])  # noqa: F821
def chatbot_completions(dialog_id):
    req = request.json
    # Default to "" so a missing Authorization header fails the length check
    # instead of raising AttributeError on None.split().
    token = request.headers.get("Authorization", "").split()
    if len(token) != 2:
        return get_error_data_result(message="Authorization is not valid!")
    token = token[1]
    objs = APIToken.query(beta=token)
    if not objs:
        return get_error_data_result(message="Authentication error: API key is invalid!")
    if "quote" not in req:
        req["quote"] = False

    if req.get("stream", True):
        resp = Response(iframe_completion(dialog_id, **req), mimetype="text/event-stream")
        resp.headers.add_header("Cache-control", "no-cache")
        resp.headers.add_header("Connection", "keep-alive")
        resp.headers.add_header("X-Accel-Buffering", "no")
        resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
        return resp

    for answer in iframe_completion(dialog_id, **req):
        return get_result(data=answer)

@manager.route("/agentbots/<agent_id>/completions", methods=["POST"])  # noqa: F821
def agent_bot_completions(agent_id):
    req = request.json
    # Default to "" so a missing Authorization header fails the length check
    # instead of raising AttributeError on None.split().
    token = request.headers.get("Authorization", "").split()
    if len(token) != 2:
        return get_error_data_result(message="Authorization is not valid!")
    token = token[1]
    objs = APIToken.query(beta=token)
    if not objs:
        return get_error_data_result(message="Authentication error: API key is invalid!")
    if "quote" not in req:
        req["quote"] = False

    if req.get("stream", True):
        resp = Response(agent_completion(objs[0].tenant_id, agent_id, **req), mimetype="text/event-stream")
        resp.headers.add_header("Cache-control", "no-cache")
        resp.headers.add_header("Connection", "keep-alive")
        resp.headers.add_header("X-Accel-Buffering", "no")
        resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
        return resp

    for answer in agent_completion(objs[0].tenant_id, agent_id, **req):
        return get_result(data=answer)
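Both bot endpoints share the same bearer-token extraction: split the `Authorization` header into two parts and take the second as the beta token. Isolated as a standalone helper (the function name is hypothetical), the logic is:

```python
def extract_beta_token(auth_header):
    """Return the token from a "Bearer <token>"-style header, or None
    when the header is absent or not exactly two whitespace-separated parts.
    Mirrors the parsing in chatbot_completions / agent_bot_completions."""
    parts = (auth_header or "").split()
    if len(parts) != 2:
        return None
    return parts[1]
```

Centralizing this would let both routes reject malformed headers the same way before hitting `APIToken.query(beta=token)`.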