---
sidebar_position: 5
---
RAGFlow supports deploying models locally using Ollama or Xinference. If you have models deployed locally that you wish to leverage, or want to enable GPU or CUDA for inference acceleration, you can bind Ollama or Xinference to RAGFlow and use either of them as a local “server” for interacting with your local models.
To deploy a local model, e.g., Llama3, using Ollama:
Ensure that your host machine’s firewall allows inbound connections on port 11434. For example:

```bash
sudo ufw allow 11434/tcp
```
Restart your system and use curl or your web browser to check whether your Ollama service at http://localhost:11434 is accessible. If it is, you should see the response:

```bash
Ollama is running
```
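For example, a quick check with curl, assuming Ollama listens on its default port on the same host:

```bash
# Query the Ollama root endpoint; the expected reply is shown above.
curl http://localhost:11434
```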
Run your local model, for example Llama3:

```bash
ollama run llama3
```

If Ollama runs inside a Docker container, run the command through the container instead:

```bash
docker exec -it ollama ollama run llama3
```
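Optionally, you can confirm that the model answers requests through Ollama's HTTP API before wiring it into RAGFlow. A minimal, non-streaming check (the prompt text is only an illustration):

```bash
# Ask the locally served llama3 model for a single completion.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Reply with the single word: ready",
  "stream": false
}'
```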
In RAGFlow, click on your logo on the top right of the page > Model Providers and add Ollama to RAGFlow:
In the popup window, complete the basic settings for Ollama: select the model type (chat, for llama3), enter the model name exactly as it is run with Ollama (llama3), and set a base URL that RAGFlow can reach:
:::caution NOTE
- If RAGFlow and Ollama run on the same machine, use http://localhost:11434 as base URL.
- If RAGFlow and Ollama run on the same machine and Ollama runs in a Docker container, use http://host.docker.internal:11434 as base URL.
- If Ollama runs on a different machine from RAGFlow, use http://<IP_OF_OLLAMA_MACHINE>:11434 as base URL.
:::

:::danger WARNING
If your Ollama runs on a different machine, you may also need to set the OLLAMA_HOST environment variable to 0.0.0.0 in ollama.service (Note that this is NOT the base URL):
Environment="OLLAMA_HOST=0.0.0.0"
:::caution WARNING Improper base URL settings will trigger the following error:
Max retries exceeded with url: /api/chat (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0xffff98b81ff0>: Failed to establish a new connection: [Errno 111] Connection refused'))
:::
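If the remote Ollama instance is managed by systemd, one way to apply the OLLAMA_HOST setting above is a drop-in override. This is a sketch assuming a systemd-based install; the unit name and paths may differ on your system:

```bash
# Open a drop-in override for the Ollama unit and add:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl edit ollama.service

# Reload unit files and restart Ollama so the new binding takes effect.
sudo systemctl daemon-reload
sudo systemctl restart ollama
```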
Click on your logo > Model Providers > System Model Settings to update your model:
*You should now be able to find llama3 from the dropdown list under Chat model.*
If your local model is an embedding model, you should find your local model under Embedding model.
Update your chat model accordingly in Chat Configuration:
If your local model is an embedding model, update it on the configuration page of your knowledge base.
To deploy a local model, e.g., Mistral, using Xinference:
Ensure that your host machine’s firewall allows inbound connections on port 9997.
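For example, with ufw (mirroring the Ollama firewall step above; adjust for the firewall you actually use):

```bash
sudo ufw allow 9997/tcp
```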
Start an Xinference instance:

```bash
$ xinference-local --host 0.0.0.0 --port 9997
```
Launch your local model (Mistral), ensuring that you replace ${quantization} with your chosen quantization method:

```bash
$ xinference launch -u mistral --model-name mistral-v0.1 --size-in-billions 7 --model-format pytorch --quantization ${quantization}
```
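To confirm that the model launched successfully, you can query the instance's OpenAI-compatible API. A quick check, assuming the default host and port used above:

```bash
# The launched model (UID "mistral") should appear in the returned model list.
curl http://localhost:9997/v1/models
```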
In RAGFlow, click on your logo on the top right of the page > Model Providers and add Xinference to RAGFlow:
Enter an accessible base URL, such as http://<your-xinference-endpoint-domain>:9997/v1.
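Before adding the URL to RAGFlow, you can verify that it is reachable and serves completions. A sketch, using the model UID (mistral) assigned at launch and the placeholder domain from above:

```bash
curl http://<your-xinference-endpoint-domain>:9997/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "mistral", "messages": [{"role": "user", "content": "Hello"}]}'
```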
Click on your logo > Model Providers > System Model Settings to update your model.
*You should now be able to find mistral from the dropdown list under Chat model.*
If your local model is an embedding model, you should find your local model under Embedding model.
Update your chat model accordingly in Chat Configuration:
If your local model is an embedding model, update it on the configuration page of your knowledge base.