
fix: correct indentation in TokenBufferMemory get_history_prompt_messages method

tags/2.0.0-beta.2^2
-LAN- committed 1 month ago
parent revision cc1d437dc1
The commit author's email address is not associated with any account.
1 changed file with 4 additions and 4 deletions

api/core/memory/token_buffer_memory.py (+4, −4)

@@ -167,11 +167,11 @@ class TokenBufferMemory:
             else:
                 prompt_messages.append(AssistantPromptMessage(content=message.answer))

-            if not prompt_messages:
-                return []
+        if not prompt_messages:
+            return []

-            # prune the chat message if it exceeds the max token limit
-            curr_message_tokens = self.model_instance.get_llm_num_tokens(prompt_messages)
+        # prune the chat message if it exceeds the max token limit
+        curr_message_tokens = self.model_instance.get_llm_num_tokens(prompt_messages)

         if curr_message_tokens > max_token_limit:
             while curr_message_tokens > max_token_limit and len(prompt_messages) > 1:
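The loop at the end of the hunk implements the pruning strategy named in the comment: drop the oldest messages until the estimated token count fits under the limit. A minimal, self-contained sketch of that idea follows; the function name `prune_messages` and the `count_tokens` callback are illustrative stand-ins for `self.model_instance.get_llm_num_tokens`, not Dify's actual API:

```python
def prune_messages(prompt_messages, count_tokens, max_token_limit):
    """Drop the oldest messages until the token estimate fits the limit.

    Always keeps at least one message, mirroring the
    ``len(prompt_messages) > 1`` guard in the diff above.
    """
    curr_message_tokens = count_tokens(prompt_messages)
    while curr_message_tokens > max_token_limit and len(prompt_messages) > 1:
        prompt_messages.pop(0)  # discard the oldest message first
        curr_message_tokens = count_tokens(prompt_messages)
    return prompt_messages


# Example with a toy token counter (one "token" per character):
history = ["aaaa", "bbbb", "cc"]
pruned = prune_messages(history, lambda ms: sum(len(m) for m in ms), 5)
# pruned is ["cc"]: "aaaa" and "bbbb" are dropped to get under the limit
```

Pruning from the front preserves the most recent turns, which is the usual choice for chat memory buffers.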
