
feat: implement NotImplementedError for token counting in LLMs and reintroduce disabled token count method

Signed-off-by: -LAN- <laipz8200@outlook.com>
tags/0.15.6-alpha.1^0
-LAN- committed 7 months ago
Current commit: fa6fa730b5
1 changed file with 5 additions and 2 deletions

api/core/model_runtime/model_providers/__base/large_language_model.py

@@ -553,8 +553,7 @@ if you are not sure about the structure.
         :param tools: tools for tool calling
         :return:
         """
-        # Disable the token count in LLMs for profermance testing.
-        return 0
+        raise NotImplementedError

def enforce_stop_tokens(self, text: str, stop: list[str]) -> str:
"""Cut off the text as soon as any stop words occur."""
@@ -915,3 +914,7 @@ if you are not sure about the structure.
                 filtered_model_parameters[parameter_name] = parameter_value
 
         return filtered_model_parameters
+
+    def _get_num_tokens_by_gpt2(self, text: str) -> int:
+        # Disable the token count in LLMs for profermance testing.
+        return 0
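For context, the pattern this commit applies can be sketched as follows: the base class raises `NotImplementedError` for token counting instead of silently returning a stub, while the reintroduced `_get_num_tokens_by_gpt2` helper stays disabled and returns 0. This is a minimal illustration only; the names mirror the diff, but the real Dify `LargeLanguageModel` class has many more members, and `FakeProviderModel` with its whitespace-based count is a hypothetical subclass invented here.

```python
class LargeLanguageModel:
    """Minimal sketch of the base class touched by this commit."""

    def get_num_tokens(self, text: str) -> int:
        # After this commit, the base class no longer returns a stubbed
        # value; providers that support token counting must override this.
        raise NotImplementedError

    def _get_num_tokens_by_gpt2(self, text: str) -> int:
        # Reintroduced in disabled form: token counting is turned off
        # for performance testing, as the diff's comment states.
        return 0


class FakeProviderModel(LargeLanguageModel):
    """Hypothetical provider subclass, for illustration only."""

    def get_num_tokens(self, text: str) -> int:
        # Crude approximation: count whitespace-separated words.
        return len(text.split())
```

A caller hitting the base class now fails loudly rather than receiving a misleading count of 0, while subclasses that do implement counting keep working.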
