
add locally deployed llm (#841)

### What problem does this PR solve?

Adds a streaming chat interface (`chat_streamly`) for locally deployed LLMs in `rag/llm/chat_model.py`, so answers can be returned incrementally instead of only as a single final response.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
tags/v0.6.0
KevinHuSh, 1 year ago
Commit a7bd427116
1 changed file, with 16 additions and 1 deletion

+16 -1  rag/llm/chat_model.py

@@ -298,4 +298,19 @@ class LocalLLM(Base):
             )
             return ans, num_tokens_from_string(ans)
         except Exception as e:
-            return "**ERROR**: " + str(e), 0
+            return "**ERROR**: " + str(e), 0
+
+    def chat_streamly(self, system, history, gen_conf):
+        if system:
+            history.insert(0, {"role": "system", "content": system})
+        token_count = 0
+        answer = ""
+        try:
+            for ans in self.client.chat_streamly(history, gen_conf):
+                answer += ans
+                token_count += 1
+                yield answer
+        except Exception as e:
+            yield answer + "\n**ERROR**: " + str(e)
+
+        yield token_count
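
For reviewers, a minimal sketch of how a caller might consume the new generator follows. It assumes `mdl` is an already-constructed `LocalLLM` instance; the sample `history` and `gen_conf` values are illustrative and not part of this PR. Every yield except the last is the accumulated answer so far, and the final yield is the running token count.

```python
# Usage sketch (assumption: `mdl` is an initialized LocalLLM instance;
# the history/gen_conf values below are illustrative only).
history = [{"role": "user", "content": "Summarize the attached document."}]
gen_conf = {"temperature": 0.7, "max_tokens": 512}

answer, token_count = "", 0
for chunk in mdl.chat_streamly("You are a helpful assistant.", history, gen_conf):
    if isinstance(chunk, int):
        # Final yield: the running token count (incremented once per streamed chunk).
        token_count = chunk
    else:
        # Earlier yields: the accumulated answer so far, so the last string
        # received is the complete answer (or answer + "**ERROR**: ..." on failure).
        answer = chunk

print(answer)
print("tokens:", token_count)
```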
