
add locally deployed llm (#841)

### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
tags/v0.6.0
KevinHuSh, 1 year ago
parent commit a7bd427116
No account linked to committer's email address
1 file changed, 16 insertions, 1 deletion
rag/llm/chat_model.py

@@ -298,4 +298,19 @@ class LocalLLM(Base):
             )
             return ans, num_tokens_from_string(ans)
         except Exception as e:
-            return "**ERROR**: " + str(e), 0
+            return "**ERROR**: " + str(e), 0
+
+    def chat_streamly(self, system, history, gen_conf):
+        if system:
+            history.insert(0, {"role": "system", "content": system})
+        token_count = 0
+        answer = ""
+        try:
+            for ans in self.client.chat_streamly(history, gen_conf):
+                answer += ans
+                token_count += 1
+                yield answer
+        except Exception as e:
+            yield answer + "\n**ERROR**: " + str(e)
+
+        yield token_count
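The added `chat_streamly` method is a generator: after each streamed chunk it yields the full answer accumulated so far, and after the stream ends it yields the chunk count as a final integer item. A minimal, self-contained sketch of this contract is below; `FakeClient` is a hypothetical stand-in for the locally deployed LLM client (its real interface is not shown in this diff), and `LocalLLMSketch` only mirrors the logic of the added method.

```python
class FakeClient:
    """Hypothetical stand-in for the local-LLM client assumed by the diff.

    Its chat_streamly() is assumed to yield one text chunk at a time.
    """

    def chat_streamly(self, history, gen_conf):
        # Pretend the local model streams the reply in three chunks.
        yield "Hello"
        yield ", "
        yield "world"


class LocalLLMSketch:
    """Mirrors the generator logic added to LocalLLM in the diff."""

    def __init__(self):
        self.client = FakeClient()

    def chat_streamly(self, system, history, gen_conf):
        # Prepend the system prompt, as in the added method.
        if system:
            history.insert(0, {"role": "system", "content": system})
        token_count = 0
        answer = ""
        try:
            for ans in self.client.chat_streamly(history, gen_conf):
                answer += ans        # accumulate the partial answer
                token_count += 1     # counts chunks, not model tokens
                yield answer         # emit the answer-so-far each time
        except Exception as e:
            yield answer + "\n**ERROR**: " + str(e)
        # Final yielded item is the count, so callers must treat the
        # last element of the stream specially.
        yield token_count


llm = LocalLLMSketch()
pieces = list(llm.chat_streamly("You are helpful.", [], {}))
print(pieces)  # ['Hello', 'Hello, ', 'Hello, world', 3]
```

Note one design consequence visible in the sketch: the stream mixes types (strings, then a trailing `int`), so a consumer has to peel off the last item to get the count; and `token_count` here counts streamed chunks rather than true model tokens.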
