
Fix: raptor overloading (#7889)

### What problem does this PR solve?

#7840

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
tags/v0.19.1
Kevin Hu, 5 months ago
Commit 28cb4df127
1 file changed, 2 insertions(+), 1 deletion(-)
rag/svr/task_executor.py

```diff
@@ -537,7 +537,8 @@ async def do_handle_task(task):
         # bind LLM for raptor
         chat_model = LLMBundle(task_tenant_id, LLMType.CHAT, llm_name=task_llm_id, lang=task_language)
         # run RAPTOR
-        chunks, token_count = await run_raptor(task, chat_model, embedding_model, vector_size, progress_callback)
+        async with kg_limiter:
+            chunks, token_count = await run_raptor(task, chat_model, embedding_model, vector_size, progress_callback)
     # Either using graphrag or Standard chunking methods
     elif task.get("task_type", "") == "graphrag":
         if not task_parser_config.get("graphrag", {}).get("use_graphrag", False):
```
