### What problem does this PR solve?

The vLLM provider with a reranking model does not work: vLLM reuses the [CoHereRerank provider](https://github.com/infiniflow/ragflow/blob/v0.17.0/rag/llm/__init__.py#L250) with a `base_url`, but since this URL [is not passed to the Cohere client](https://github.com/infiniflow/ragflow/blob/v0.17.0/rag/llm/rerank_model.py#L379-L382), every rerank request ends up on the Cohere SaaS (sending your private API key in the process) instead of your vLLM instance.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
```diff
@@ -381,7 +381,7 @@ class CoHereRerank(Base):
     def __init__(self, key, model_name, base_url=None):
         from cohere import Client
-        self.client = Client(api_key=key)
+        self.client = Client(api_key=key, base_url=base_url)
         self.model_name = model_name

     def similarity(self, query: str, texts: list):
```
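To make the failure mode concrete, here is a minimal sketch of why omitting `base_url` silently routes traffic to the hosted service. `FakeCohereClient`, `COHERE_SAAS_URL`, and the localhost vLLM URL are illustrative stand-ins, not the real `cohere` SDK internals; the assumption is only that the client falls back to its SaaS endpoint when no `base_url` is given.

```python
# Hypothetical stand-in for the cohere SDK client, for illustration only.
COHERE_SAAS_URL = "https://api.cohere.com"  # assumed SaaS default

class FakeCohereClient:
    def __init__(self, api_key, base_url=None):
        # When base_url is None, the client falls back to the hosted
        # service, so the API key is sent to Cohere instead of the
        # self-hosted vLLM instance.
        self.api_key = api_key
        self.base_url = base_url or COHERE_SAAS_URL

# Before the fix: base_url was never forwarded, so requests go to SaaS.
before = FakeCohereClient(api_key="my-private-key")

# After the fix: base_url is forwarded, so requests stay on vLLM.
after = FakeCohereClient(api_key="my-private-key",
                         base_url="http://localhost:8000/v1")
```

With the one-line change above, `before` becomes impossible for vLLM users: the `base_url` configured in RAGFlow reaches the client constructor and the key never leaves the local network.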