---
sidebar_position: 5
slug: /deploy_local_llm
---

# Deploy a local LLM

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
RAGFlow supports deploying models locally using Ollama, Xinference, IPEX-LLM, or jina. If you have locally deployed models to leverage or wish to enable GPU or CUDA for inference acceleration, you can bind Ollama or Xinference into RAGFlow and use either of them as a local "server" for interacting with your local models.

RAGFlow seamlessly integrates with Ollama and Xinference, without the need for further environment configurations. You can use them to deploy two types of local models in RAGFlow: chat models and embedding models.

:::tip NOTE
This user guide does not cover the installation or configuration of Ollama or Xinference in depth; its focus is on configurations inside RAGFlow. For the most current information, check the official site of Ollama or Xinference.
:::
## Deploy a local model using jina

[Jina](https://github.com/jina-ai/jina) lets you build AI services and pipelines that communicate via gRPC, HTTP, and WebSockets, then scale them up and deploy to production.

To deploy a local model, e.g., **gpt2**, using Jina:

### 1. Check firewall settings

Ensure that your host machine's firewall allows inbound connections on port 12345.

```bash
sudo ufw allow 12345/tcp
```

### 2. Install the jina package

```bash
pip install jina
```

### 3. Deploy the local model

Step 1: Navigate to the **rag/svr** directory.

```bash
cd rag/svr
```

Step 2: Run the **jina_server.py** script with Python, passing in the model name or the local path of the model (the script only supports loading models downloaded from Hugging Face):

```bash
python jina_server.py --model_name gpt2
```
## Deploy a local model using Ollama

[Ollama](https://github.com/ollama/ollama) enables you to run open-source large language models locally. It bundles model weights, configurations, and data into a single package, defined by a Modelfile, and optimizes setup and configurations, including GPU usage.

:::note
- For information about downloading Ollama, see [here](https://github.com/ollama/ollama?tab=readme-ov-file#ollama).
- For information about configuring the Ollama server, see [here](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server).
- For a complete list of supported models and variants, see the [Ollama model library](https://ollama.com/library).
:::

To deploy a local model, e.g., **Llama3**, using Ollama:

### 1. Check firewall settings

Ensure that your host machine's firewall allows inbound connections on port 11434. For example:

```bash
sudo ufw allow 11434/tcp
```
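
The `ufw` command above assumes an Ubuntu-style firewall. If your distribution uses firewalld instead, a roughly equivalent rule (a sketch; adapt it to your setup) is:

```bash
sudo firewall-cmd --permanent --add-port=11434/tcp
sudo firewall-cmd --reload
```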

### 2. Ensure Ollama is accessible

Restart your system and use curl or your web browser to check whether the Ollama service at `http://localhost:11434` is accessible. If it is, it returns the following response:

```bash
Ollama is running
```
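
For example, a quick command-line check (assuming `curl` is installed on the host):

```bash
curl http://localhost:11434
```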

### 3. Run your local model

```bash
ollama run llama3
```

<details>
<summary>If your Ollama is installed through Docker, run the following instead:</summary>

```bash
docker exec -it ollama ollama run llama3
```

</details>
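
You can also confirm that the model is available locally by listing Ollama's models (prefix the command with `docker exec -it ollama` if Ollama runs in Docker):

```bash
ollama list
```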

### 4. Add Ollama

In RAGFlow, click on your logo on the top right of the page **>** **Model Providers** and add Ollama to RAGFlow:

![add ollama](https://github.com/infiniflow/ragflow/assets/93570324/10635088-028b-4b3d-add9-5c5a6e626814)

### 5. Complete basic Ollama settings

In the popup window, complete basic settings for Ollama:

1. Because **llama3** is a chat model, choose **chat** as the model type.
2. Ensure that the model name you enter here *precisely* matches the name of the local model you are running with Ollama.
3. Ensure that the base URL you enter is accessible to RAGFlow.
4. OPTIONAL: Switch on the toggle under **Does it support Vision?** if your model supports image-to-text (vision).

:::caution NOTE
- If your Ollama and RAGFlow run on the same machine, use `http://localhost:11434` as base URL.
- If your Ollama and RAGFlow run on the same machine and Ollama is in Docker, use `http://host.docker.internal:11434` as base URL.
- If your Ollama runs on a different machine from RAGFlow, use `http://<IP_OF_OLLAMA_MACHINE>:11434` as base URL.
:::
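
One way to verify that your chosen base URL is actually reachable from where RAGFlow runs is to curl it from inside the RAGFlow container. This is only a sketch: the container name `ragflow-server` and the availability of `curl` inside it are assumptions, and you should substitute whichever base URL from the list above applies to you.

```bash
# Assumes RAGFlow runs in a container named ragflow-server and curl is available inside it.
docker exec -it ragflow-server curl http://host.docker.internal:11434
# A reachable Ollama responds with: Ollama is running
```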

:::danger WARNING
If your Ollama runs on a different machine, you may also need to set the `OLLAMA_HOST` environment variable to `0.0.0.0` in **ollama.service** (Note that this is *NOT* the base URL):

```bash
Environment="OLLAMA_HOST=0.0.0.0"
```
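
On a systemd-based Linux install, one way to apply this setting (following Ollama's FAQ; the service name and paths may differ on your system) is:

```bash
sudo systemctl edit ollama.service
# In the editor, add the following under the [Service] section:
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl daemon-reload
sudo systemctl restart ollama
```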

See [this guide](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server) for more information.
:::

:::caution WARNING
Improper base URL settings will trigger the following error:

```bash
Max retries exceeded with url: /api/chat (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0xffff98b81ff0>: Failed to establish a new connection: [Errno 111] Connection refused'))
```
:::

### 6. Update System Model Settings

Click on your logo **>** **Model Providers** **>** **System Model Settings** to update your model:

*You should now be able to find **llama3** from the dropdown list under **Chat model**.*

> If your local model is an embedding model, you should find your local model under **Embedding model**.

### 7. Update Chat Configuration

Update your chat model accordingly in **Chat Configuration**:

> If your local model is an embedding model, update it on the configuration page of your knowledge base.

## Deploy a local model using Xinference

Xorbits Inference ([Xinference](https://github.com/xorbitsai/inference)) enables you to unleash the full potential of cutting-edge AI models.

:::note
- For information about installing Xinference, see [here](https://inference.readthedocs.io/en/latest/getting_started/).
- For a complete list of supported models, see the [Builtin Models](https://inference.readthedocs.io/en/latest/models/builtin/).
:::

To deploy a local model, e.g., **Mistral**, using Xinference:

### 1. Check firewall settings

Ensure that your host machine's firewall allows inbound connections on port 9997.
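
For example, assuming an Ubuntu-style `ufw` firewall as in the Ollama section above:

```bash
sudo ufw allow 9997/tcp
```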

### 2. Start an Xinference instance

```bash
$ xinference-local --host 0.0.0.0 --port 9997
```

### 3. Launch your local model

Launch your local model (**Mistral**), ensuring that you replace `${quantization}` with your chosen quantization method:

```bash
$ xinference launch -u mistral --model-name mistral-v0.1 --size-in-billions 7 --model-format pytorch --quantization ${quantization}
```
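
To confirm that the model is up before wiring it into RAGFlow, you can query the instance's OpenAI-compatible model list (host and port assume the defaults used above):

```bash
curl http://localhost:9997/v1/models
```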

### 4. Add Xinference

In RAGFlow, click on your logo on the top right of the page **>** **Model Providers** and add Xinference to RAGFlow:

![add xinference](https://github.com/infiniflow/ragflow/assets/93570324/10635088-028b-4b3d-add9-5c5a6e626814)

### 5. Complete basic Xinference settings

Enter an accessible base URL, such as `http://<your-xinference-endpoint-domain>:9997/v1`.

> For a rerank model, use `http://<your-xinference-endpoint-domain>:9997/v1/rerank` as the base URL.

### 6. Update System Model Settings

Click on your logo **>** **Model Providers** **>** **System Model Settings** to update your model.

*You should now be able to find **mistral** from the dropdown list under **Chat model**.*

> If your local model is an embedding model, you should find your local model under **Embedding model**.

### 7. Update Chat Configuration

Update your chat model accordingly in **Chat Configuration**:

> If your local model is an embedding model, update it on the configuration page of your knowledge base.

## Deploy a local model using IPEX-LLM

[IPEX-LLM](https://github.com/intel-analytics/ipex-llm) is a PyTorch library for running LLMs on local Intel CPUs or GPUs (including iGPU or discrete GPUs like Arc, Flex, and Max) with low latency. It supports Ollama on Linux and Windows systems.

To deploy a local model, e.g., **Qwen2**, using IPEX-LLM-accelerated Ollama:

### 1. Check firewall settings

Ensure that your host machine's firewall allows inbound connections on port 11434. For example:

```bash
sudo ufw allow 11434/tcp
```

### 2. Launch Ollama service using IPEX-LLM

#### 2.1 Install IPEX-LLM for Ollama

:::tip NOTE
IPEX-LLM supports Ollama on Linux and Windows systems.
:::

For detailed information about installing IPEX-LLM for Ollama, see [Run llama.cpp with IPEX-LLM on Intel GPU Guide](https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/llama_cpp_quickstart.md):

- [Prerequisites](https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/llama_cpp_quickstart.md#0-prerequisites)
- [Install IPEX-LLM cpp with Ollama binaries](https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/llama_cpp_quickstart.md#1-install-ipex-llm-for-llamacpp)

*After the installation, you should have created a Conda environment, e.g., `llm-cpp`, for running Ollama commands with IPEX-LLM.*

#### 2.2 Initialize Ollama

1. Activate the `llm-cpp` Conda environment and initialize Ollama:

<Tabs
  defaultValue="linux"
  values={[
    {label: 'Linux', value: 'linux'},
    {label: 'Windows', value: 'windows'},
  ]}>
<TabItem value="linux">

```bash
conda activate llm-cpp
init-ollama
```

</TabItem>
<TabItem value="windows">

Run these commands with *administrator privileges in Miniforge Prompt*:

```cmd
conda activate llm-cpp
init-ollama.bat
```

</TabItem>
</Tabs>

2. If the installed `ipex-llm[cpp]` requires an upgrade to the Ollama binary files, remove the old binary files and reinitialize Ollama using `init-ollama` (Linux) or `init-ollama.bat` (Windows).

*A symbolic link to Ollama appears in your current directory, and you can use this executable file following standard Ollama commands.*

#### 2.3 Launch Ollama service

1. Set the environment variable `OLLAMA_NUM_GPU` to `999` to ensure that all layers of your model run on the Intel GPU; otherwise, some layers may default to CPU.
2. For optimal performance on Intel Arc™ A-Series Graphics with Linux OS (Kernel 6.2), set the following environment variable before launching the Ollama service:

```bash
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
```

3. Launch the Ollama service:

<Tabs
  defaultValue="linux"
  values={[
    {label: 'Linux', value: 'linux'},
    {label: 'Windows', value: 'windows'},
  ]}>
<TabItem value="linux">

```bash
export OLLAMA_NUM_GPU=999
export no_proxy=localhost,127.0.0.1
export ZES_ENABLE_SYSMAN=1
source /opt/intel/oneapi/setvars.sh
export SYCL_CACHE_PERSISTENT=1
./ollama serve
```

</TabItem>
<TabItem value="windows">

Run the following command *in Miniforge Prompt*:

```cmd
set OLLAMA_NUM_GPU=999
set no_proxy=localhost,127.0.0.1
set ZES_ENABLE_SYSMAN=1
set SYCL_CACHE_PERSISTENT=1
ollama serve
```

</TabItem>
</Tabs>

:::tip NOTE
To enable the Ollama service to accept connections from all IP addresses, use `OLLAMA_HOST=0.0.0.0 ./ollama serve` rather than simply `./ollama serve`.
:::

*The console displays messages similar to the following:*

![](https://llm-assets.readthedocs.io/en/latest/_images/ollama_serve.png)

### 3. Pull and Run Ollama model

#### 3.1 Pull Ollama model

With the Ollama service running, open a new terminal and run `./ollama pull <model_name>` (Linux) or `ollama.exe pull <model_name>` (Windows) to pull the desired model, e.g., `qwen2:latest`:

![](https://llm-assets.readthedocs.io/en/latest/_images/ollama_pull.png)
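
For example, to pull the model used in the next step (the `./ollama` path assumes the symbolic link created by `init-ollama` in your current directory):

```bash
./ollama pull qwen2:latest
```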

#### 3.2 Run Ollama model

<Tabs
  defaultValue="linux"
  values={[
    {label: 'Linux', value: 'linux'},
    {label: 'Windows', value: 'windows'},
  ]}>
<TabItem value="linux">

```bash
./ollama run qwen2:latest
```

</TabItem>
<TabItem value="windows">

```cmd
ollama run qwen2:latest
```

</TabItem>
</Tabs>
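
Optionally, verify from another terminal that the model is being served by querying Ollama's REST API (the port assumes the default 11434 used above):

```bash
curl http://localhost:11434/api/tags
```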

### 4. Configure RAGFlow

To enable IPEX-LLM-accelerated Ollama in RAGFlow, you must also complete the configurations in RAGFlow. The steps are identical to those outlined in the *Deploy a local model using Ollama* section:

1. [Add Ollama](#4-add-ollama)
2. [Complete basic Ollama settings](#5-complete-basic-ollama-settings)
3. [Update System Model Settings](#6-update-system-model-settings)
4. [Update Chat Configuration](#7-update-chat-configuration)