show error log of KG extraction (#2045)

### What problem does this PR solve?

Surface knowledge graph (KG) extraction errors to the caller: exceptions raised during extraction are now reported through the progress callback, and an **ERROR** response from the LLM raises instead of being silently appended to the results.

### Type of change


- [x] Performance Improvement
tags/v0.10.0
Kevin Hu, 1 year ago
Parent revision 4580ad2fd7
1 file changed, 2 insertions, 0 deletions

graphrag/graph_extractor.py  +2  -0

         total_token_count += token_count
         if callback: callback(msg=f"{doc_index+1}/{total}, elapsed: {timer() - st}s, used tokens: {total_token_count}")
     except Exception as e:
+        if callback: callback("Knowledge graph extraction error:{}".format(str(e)))
         logging.exception("error extracting graph")
         self._on_error(
             e,

     text = perform_variable_replacements(CONTINUE_PROMPT, history=history, variables=variables)
     history.append({"role": "user", "content": text})
     response = self._llm.chat("", history, gen_conf)
+    if response.find("**ERROR**") >=0: raise Exception(response)
     results += response or ""

     # if this is the final glean, don't bother updating the continuation flag
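Both additions follow the same idea: a failure during KG extraction is surfaced to the caller instead of only being written to the log. The sketch below illustrates that pattern in isolation; `glean_once`, `progress`, and the fake LLM are hypothetical stand-ins, and only the `**ERROR**` check and the callback message format are taken from the diff.

```python
import logging


def progress(msg: str = "", **kwargs):
    # Stand-in for the task callback: the extractor calls it both as
    # callback(msg=...) for progress and callback("...") for errors.
    print(msg)


def glean_once(llm_chat, history, gen_conf, callback=None):
    # Hypothetical helper mirroring the two added lines: an in-band
    # "**ERROR**" reply from the LLM becomes an exception, and any
    # exception is reported through the callback before re-raising.
    try:
        response = llm_chat("", history, gen_conf)
        if response.find("**ERROR**") >= 0:
            raise Exception(response)
        return response or ""
    except Exception as e:
        logging.exception("error extracting graph")
        if callback:
            callback("Knowledge graph extraction error:{}".format(str(e)))
        raise


# A failing fake LLM makes the error visible to the caller via the callback.
try:
    glean_once(lambda system, history, conf: "**ERROR**: model timeout",
               [], {}, callback=progress)
except Exception:
    pass
```

The design choice is simply to reuse the existing progress callback as the error channel, so the caller sees extraction failures without any new interface being added.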
