
feat: implement NotImplementedError for token counting in LLMs and reintroduce disabled token count method

Signed-off-by: -LAN- <laipz8200@outlook.com>
tags/0.15.6-alpha.1^0
-LAN- committed 7 months ago
commit fa6fa730b5
1 file changed, 5 insertions(+), 2 deletions(-)

api/core/model_runtime/model_providers/__base/large_language_model.py (+5, -2)

@@ ... @@
         :param tools: tools for tool calling
         :return:
         """
-        # Disable the token count in LLMs for performance testing.
-        return 0
+        raise NotImplementedError

     def enforce_stop_tokens(self, text: str, stop: list[str]) -> str:
         """Cut off the text as soon as any stop words occur."""
@@ ... @@
             filtered_model_parameters[parameter_name] = parameter_value

         return filtered_model_parameters
+
+    def _get_num_tokens_by_gpt2(self, text: str) -> int:
+        # Disable the token count in LLMs for performance testing.
+        return 0
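
After this change, the base class raises NotImplementedError instead of silently reporting a zero token count, so each model provider is expected to supply its own counter, while the reintroduced _get_num_tokens_by_gpt2 helper stays stubbed out to return 0 during performance testing. The following is a minimal sketch of that pattern; the signatures are simplified, and the DummyProviderModel subclass with its whitespace tokenizer is hypothetical, not code from the repository.

# Minimal sketch of the pattern introduced by this commit (simplified
# signatures; the subclass below is hypothetical illustration only).


class LargeLanguageModel:
    def get_num_tokens(self, model: str, prompt: str) -> int:
        # The base class no longer returns a fake count; providers must override this.
        raise NotImplementedError

    def _get_num_tokens_by_gpt2(self, text: str) -> int:
        # Disabled for performance testing: always reports zero tokens.
        return 0


class DummyProviderModel(LargeLanguageModel):
    def get_num_tokens(self, model: str, prompt: str) -> int:
        # A real provider would call its own tokenizer here; a whitespace
        # split is only a stand-in for illustration.
        return len(prompt.split())


if __name__ == "__main__":
    # Prints 3: the hypothetical provider counts whitespace-separated tokens.
    print(DummyProviderModel().get_num_tokens("dummy-model", "hello token counting"))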
