
Resolve 8475 support rerank model from infinity (#10939)

Co-authored-by: linyanxu <linyanxu2@qq.com>
tags/0.12.0
LastHopeOfGPNU 11 months ago
Parent
Commit
1a6b961b5f
1 changed file with 8 additions and 2 deletions
api/core/model_runtime/model_providers/openai_api_compatible/rerank/rerank.py (+8 / -2)



         # TODO: Do we need truncate docs to avoid llama.cpp return error?
-        data = {"model": model_name, "query": query, "documents": docs, "top_n": top_n}
+        data = {"model": model_name, "query": query, "documents": docs, "top_n": top_n, "return_documents": True}
 
         try:
             response = post(str(URL(url) / "rerank"), headers=headers, data=dumps(data), timeout=60)
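
The payload change above adds "return_documents": True, which asks the rerank server to echo each scored document back in its response. As a minimal sketch (not part of this commit), here is how a direct call to an OpenAI-compatible /rerank endpoint could look; the server URL, API key and model name are placeholder assumptions, and requests stands in for whatever HTTP helper the module actually uses:

# Sketch only: probe an OpenAI-compatible /rerank endpoint directly.
# The URL, Authorization header and model name are placeholders, not values
# taken from this commit.
from json import dumps

import requests

url = "http://localhost:7997/rerank"  # e.g. a locally running rerank server (assumed)
headers = {"Content-Type": "application/json", "Authorization": "Bearer sk-placeholder"}
data = {
    "model": "example-reranker",      # placeholder model name
    "query": "What does Dify do?",
    "documents": ["Dify is an LLM app platform.", "Bananas are yellow."],
    "top_n": 2,
    "return_documents": True,         # the field added by this commit
}

response = requests.post(url, headers=headers, data=dumps(data), timeout=60)
for result in response.json()["results"]:
    # Depending on the backend, "document" may be a dict with a "text" field,
    # a bare string, or missing entirely; the parsing change below handles all three.
    print(result["index"], result["relevance_score"], result.get("document"))
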
                 index = result["index"]
 
                 # Retrieve document text (fallback if llama.cpp rerank doesn't return it)
-                text = result.get("document", {}).get("text", docs[index])
+                text = docs[index]
+                document = result.get("document", {})
+                if document:
+                    if isinstance(document, dict):
+                        text = document.get("text", docs[index])
+                    elif isinstance(document, str):
+                        text = document
 
                 # Normalize the score
                 normalized_score = (result["relevance_score"] - min_score) / score_range
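
The new extraction logic tolerates three response shapes: "document" as a dict carrying a "text" field, "document" as a bare string, or no "document" at all (in which case the original input text is reused). Purely for illustration, here is the same fallback pulled out into a hypothetical helper so each shape can be exercised; the function name and sample payloads are assumptions, not part of the commit:

# Illustration only: the fallback from the diff above, factored into a helper.
def extract_document_text(result: dict, docs: list[str]) -> str:
    index = result["index"]
    text = docs[index]                    # default: fall back to the original document
    document = result.get("document", {})
    if document:
        if isinstance(document, dict):
            text = document.get("text", docs[index])   # shape: {"document": {"text": "..."}}
        elif isinstance(document, str):
            text = document                            # shape: {"document": "..."}
    return text

docs = ["first passage", "second passage"]
assert extract_document_text({"index": 1, "document": {"text": "second passage"}}, docs) == "second passage"
assert extract_document_text({"index": 0, "document": "first passage"}, docs) == "first passage"
assert extract_document_text({"index": 0}, docs) == "first passage"  # server returned no document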
