
Fix Ollama instructions (#7478)

Fix instructions for Ollama

### What problem does this PR solve?

The guide for connecting RAGFlow to a locally deployed Ollama instance was misleading: it did not explain how to determine the correct Ollama base URL (including the `host.docker.internal` mapping when RAGFlow runs in Docker) and omitted the required `/v1` suffix. This PR fixes `docs/guides/models/deploy_local_llm.mdx` accordingly.

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [x] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
tags/v0.19.0
Raffaele Mancuso, 6 months ago
Commit 60787f8d5d (no account is linked to the committer's email)
1 file changed, 8 insertions(+), 6 deletions(-): docs/guides/models/deploy_local_llm.mdx



### 1. Deploy Ollama using Docker


Ollama can be [installed from binaries](https://ollama.com/download) or [deployed with Docker](https://hub.docker.com/r/ollama/ollama). Here are the instructions to deploy with Docker:

```bash
$ sudo docker run --name ollama -p 11434:11434 ollama/ollama
> time=2024-12-02T02:20:21.360Z level=INFO source=routes.go:1248 msg="Listening on [::]:11434 (version 0.4.6)"
> success
```
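
Step 3 below expects the model names you pull at this step. As a minimal sketch, assuming the container is named `ollama` as in the command above and using the example models mentioned later in this guide, you could pull a chat model and an embedding model:

```bash
# Pull a chat model and an embedding model into the running Ollama container.
# llama3.2 and bge-m3 are examples; any model from https://ollama.com/library works.
$ sudo docker exec -it ollama ollama pull llama3.2
$ sudo docker exec -it ollama ollama pull bge-m3
```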


### 2. Find Ollama URL and ensure it is accessible


- If RAGFlow runs in Docker, the host machine's localhost is mapped inside the RAGFlow Docker container as `host.docker.internal`. If Ollama runs on the same host machine, the right URL to use for Ollama is `http://host.docker.internal:11434/`, and you should check that Ollama is accessible from inside the RAGFlow container (on Linux hosts, see also the note after this list):
```bash
$ sudo docker exec -it ragflow-server bash
$ curl http://host.docker.internal:11434/
> Ollama is running
```


- If RAGFlow is launched from source code and Ollama runs on the same host machine as RAGFlow, check if Ollama is accessible from RAGFlow's host machine:
```bash
$ curl http://localhost:11434/
> Ollama is running
```


- If RAGFlow and Ollama run on different machines, check if Ollama is accessible from RAGFlow's host machine:
```bash
$ curl http://${IP_OF_OLLAMA_MACHINE}:11434/
> Ollama is running
```
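
Note for Linux hosts: `host.docker.internal` is not always defined inside containers by default. As a hedged sketch, assuming Docker 20.10+ and using the public `curlimages/curl` image for a throwaway test, you can map the name to the host gateway explicitly; if this test passes but the RAGFlow container check fails, add the same `--add-host` mapping (or an `extra_hosts` entry in Docker Compose) to the RAGFlow container:

```bash
# On Linux, map host.docker.internal to the host gateway explicitly
# (supported since Docker 20.10) and test reachability from a container:
$ sudo docker run --rm --add-host=host.docker.internal:host-gateway \
    curlimages/curl http://host.docker.internal:11434/
> Ollama is running
```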

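Once the base URL responds, you can also confirm that the models pulled in step 1 are visible through it. A minimal check using Ollama's `/api/tags` endpoint, substituting the URL you determined above for `http://localhost:11434`:

```bash
# List the models Ollama has available locally; the models pulled in step 1
# (e.g. llama3.2 and bge-m3) should appear in the JSON response.
$ curl http://localhost:11434/api/tags
```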

In the popup window, complete basic settings for Ollama:


1. Ensure that your model name and type match those pulled at step 1 (Deploy Ollama using Docker), for example (`llama3.2` and `chat`) or (`bge-m3` and `embedding`).
2. In Ollama base URL, put the URL you found in step 2 followed by `/v1` (see the check after this list), e.g. `http://host.docker.internal:11434/v1`, `http://localhost:11434/v1`, or `http://${IP_OF_OLLAMA_MACHINE}:11434/v1`.
3. OPTIONAL: Switch on the toggle under **Does it support Vision?** if your model includes an image-to-text model.
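
The base URL points at Ollama's OpenAI-compatible API, so a quick way to verify that both the `/v1` URL and the model name are correct is a one-shot chat completion. A sketch for the `localhost` case with the `llama3.2` chat model; substitute your own URL and model name:

```bash
# Send a single chat completion through Ollama's OpenAI-compatible /v1 API.
$ curl http://localhost:11434/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "llama3.2", "messages": [{"role": "user", "content": "Say hello"}]}'
```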




