
fix: correct indentation in TokenBufferMemory get_history_prompt_messages method

tags/2.0.0-beta.2^2
-LAN- 1 month ago
Parent
Commit cc1d437dc1
No account linked to committer's email address
1 changed file with 4 additions and 4 deletions

api/core/memory/token_buffer_memory.py  +4 −4

@@ -167,11 +167,11 @@ class TokenBufferMemory:
             else:
                 prompt_messages.append(AssistantPromptMessage(content=message.answer))
 
-            if not prompt_messages:
-                return []
+        if not prompt_messages:
+            return []
 
-            # prune the chat message if it exceeds the max token limit
-            curr_message_tokens = self.model_instance.get_llm_num_tokens(prompt_messages)
+        # prune the chat message if it exceeds the max token limit
+        curr_message_tokens = self.model_instance.get_llm_num_tokens(prompt_messages)
 
         if curr_message_tokens > max_token_limit:
             while curr_message_tokens > max_token_limit and len(prompt_messages) > 1:
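The change is a one-level dedent: the empty-history check and the token-pruning block move from inside the `for` loop over messages to after it. A minimal sketch of what that placement changes, using toy stand-ins (hypothetical `build_prompt_*` functions and a whitespace tokenizer, not the real Dify `TokenBufferMemory` or `model_instance.get_llm_num_tokens`):

```python
# Toy sketch of the indentation fix. Pre-fix, the pruning block was
# indented inside the message loop, so token counting ran on every
# iteration; post-fix it runs once, after the full history is collected.

CALLS = {"n": 0}  # counts token-counting invocations

def num_tokens(msgs):
    """Toy stand-in for model_instance.get_llm_num_tokens: whitespace tokens."""
    CALLS["n"] += 1
    return sum(len(m.split()) for m in msgs)

def build_prompt_pre_fix(messages, max_token_limit):
    prompt_messages = []
    for text in messages:
        prompt_messages.append(text)

        # Over-indented: these run once per appended message.
        if not prompt_messages:
            return []
        curr = num_tokens(prompt_messages)
        while curr > max_token_limit and len(prompt_messages) > 1:
            prompt_messages.pop(0)   # drop oldest message first
            curr = num_tokens(prompt_messages)
    return prompt_messages

def build_prompt_post_fix(messages, max_token_limit):
    prompt_messages = list(messages)

    # Dedented: empty-check and pruning happen once, after the loop.
    if not prompt_messages:
        return []
    curr = num_tokens(prompt_messages)
    while curr > max_token_limit and len(prompt_messages) > 1:
        prompt_messages.pop(0)
        curr = num_tokens(prompt_messages)
    return prompt_messages

history = ["a b", "c d e", "f g h i j", "k"]

CALLS["n"] = 0
pre = build_prompt_pre_fix(history, max_token_limit=6)
pre_calls = CALLS["n"]

CALLS["n"] = 0
post = build_prompt_post_fix(history, max_token_limit=6)
post_calls = CALLS["n"]
```

With this toy tokenizer both variants happen to keep the same tail of the history, but the pre-fix placement counts tokens six times versus three, and in the real method each count is a model-tokenizer call, so the overhead grows with history length.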
