Fix: CoHereRerank not respecting base_url when provided (#5784)

### What problem does this PR solve?

The vLLM provider with a reranking model does not work: vLLM uses the [CoHereRerank
provider](https://github.com/infiniflow/ragflow/blob/v0.17.0/rag/llm/__init__.py#L250)
under the hood with a `base_url`, but this URL [is not passed to the Cohere
client](https://github.com/infiniflow/ragflow/blob/v0.17.0/rag/llm/rerank_model.py#L379-L382),
so every attempt ends up on the Cohere SaaS (sending your private API key
in the process) instead of your vLLM instance.
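
For illustration, a minimal sketch of the behaviour before and after the fix, using the cohere Python SDK as the class does; the local URL is a hypothetical vLLM deployment:

```python
from cohere import Client

VLLM_RERANK_URL = "http://localhost:8000"  # hypothetical local vLLM instance

# Before the fix: base_url is dropped, so the client targets Cohere's hosted
# API and the private key is sent there instead of to the vLLM server.
leaky_client = Client(api_key="my-private-key")

# After the fix: base_url is forwarded, so requests stay on the vLLM host.
local_client = Client(api_key="my-private-key", base_url=VLLM_RERANK_URL)
```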

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
tags/v0.17.1
Edouard Hur · 7 months ago
Commit b29539b442 · 1 changed file, 1 addition and 1 deletion

rag/llm/rerank_model.py (+1 / -1)

```diff
 def __init__(self, key, model_name, base_url=None):
     from cohere import Client

-    self.client = Client(api_key=key)
+    self.client = Client(api_key=key, base_url=base_url)
     self.model_name = model_name

 def similarity(self, query: str, texts: list):
```