Fix xinference chat role order issue. (#4898)

### What problem does this PR solve?

#4831

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
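For context, not part of the commit itself: the linked issue reports that Xinference rejects chat histories whose roles are out of order. A plausible reading of that constraint, sketched below with a hypothetical `roles_alternate` helper (not repository code), is that after an optional `system` message the roles must alternate `user`/`assistant` and end on a `user` turn, which is what the fixed history guarantees.

```python
# Sketch of the role-order constraint that strict chat backends such as
# Xinference appear to enforce: after an optional system message, roles
# must alternate user/assistant, starting and ending with "user".
def roles_alternate(messages: list[dict]) -> bool:
    roles = [m["role"] for m in messages if m["role"] != "system"]
    if not roles or roles[-1] != "user":
        return False
    expected = "user"
    for role in roles:
        if role != expected:
            return False
        expected = "assistant" if expected == "user" else "user"
    return True

# Before the fix, the gleaning history ended on an assistant turn:
before = [
    {"role": "user", "content": "hint prompt ..."},
    {"role": "assistant", "content": "first extraction ..."},
]
assert not roles_alternate(before)  # rejected: last turn is not "user"

# After the fix, the history always ends on a user turn:
after = before + [{"role": "user", "content": "continue prompt ..."}]
assert roles_alternate(after)
```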
tags/v0.17.0
Kevin Hu, 8 months ago
Commit 1287558f24
1 changed file with 4 additions and 4 deletions
graphrag/light/graph_extractor.py (+4 −4)

```diff
@@ -94,11 +94,11 @@ class GraphExtractor(Extractor):
         gen_conf = {"temperature": 0.8}
         final_result = self._chat(hint_prompt, [{"role": "user", "content": "Output:"}], gen_conf)
         token_count += num_tokens_from_string(hint_prompt + final_result)
-        history = pack_user_ass_to_openai_messages(hint_prompt, final_result)
+        history = pack_user_ass_to_openai_messages("Output:", final_result, self._continue_prompt)
         for now_glean_index in range(self._max_gleanings):
-            glean_result = self._chat(self._continue_prompt, history, gen_conf)
-            token_count += num_tokens_from_string("\n".join([m["content"] for m in history]) + glean_result + self._continue_prompt)
-            history += pack_user_ass_to_openai_messages(self._continue_prompt, glean_result)
+            glean_result = self._chat(hint_prompt, history, gen_conf)
+            history.extend([{"role": "assistant", "content": glean_result}, {"role": "user", "content": self._continue_prompt}])
+            token_count += num_tokens_from_string("\n".join([m["content"] for m in history]) + hint_prompt + self._continue_prompt)
             final_result += glean_result
             if now_glean_index == self._max_gleanings - 1:
                 break
```
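A sketch of the conversation the fixed code sends on each gleaning round. It assumes, which this diff alone does not confirm, that `self._chat(system, history, conf)` prepends its first argument as the `system` message and that `pack_user_ass_to_openai_messages("Output:", final_result, self._continue_prompt)` packs its arguments as alternating `user`/`assistant`/`user` turns:

```python
# Assumed shape of the messages after the fix; contents are placeholders.
conversation = [
    {"role": "system", "content": "hint_prompt ..."},      # extraction instructions
    {"role": "user", "content": "Output:"},
    {"role": "assistant", "content": "final_result ..."},  # first extraction pass
    {"role": "user", "content": "continue_prompt ..."},    # ask the model to glean more
    # Each loop iteration appends one assistant turn and one user turn,
    # so roles keep alternating and the list always ends with "user".
]
```

Under those assumptions, the old code instead sent `self._continue_prompt` as the system message with a history that ended on an assistant turn, which is what a strict role-order check like Xinference's would reject.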
