
Fixed a Docusaurus display issue (#1431)

### What problem does this PR solve?

The screenshots in `docs/guides/deploy_local_llm.md` were embedded with raw HTML `<a>`/`<img>` tags, which do not display correctly on the Docusaurus-built documentation site. This PR replaces them with standard Markdown image syntax.

### Type of change


- [x] Documentation Update
1 changed file with 5 additions and 9 deletions: `docs/guides/deploy_local_llm.md`

```bash
ollama serve
```


> [!NOTE]
> Please set the environment variable `OLLAMA_NUM_GPU` to `999` to make sure all layers of your model run on the Intel GPU; otherwise, some layers may run on the CPU.
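
For example, in the terminal where you launch Ollama (a minimal sketch, assuming a Linux shell):

```bash
# Force all model layers onto the Intel GPU before starting the service.
export OLLAMA_NUM_GPU=999
./ollama serve
```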


> [!TIP]
> If your local LLM is running on Intel Arc™ A-Series Graphics with Linux OS (Kernel 6.2), it is recommended to additionally set the following environment variable for optimal performance before executing `ollama serve`:
>
> ```bash
> export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
> ```


> [!NOTE]
> To allow the service to accept connections from all IP addresses, use `OLLAMA_HOST=0.0.0.0 ./ollama serve` instead of just `./ollama serve`.
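
Once the service is up, you can verify it is reachable (a quick check, assuming Ollama's default port `11434`):

```bash
# The root endpoint returns a short status message when the server is healthy.
curl http://localhost:11434
# Expected reply: Ollama is running
```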


The console will display messages similar to the following:


![](https://llm-assets.readthedocs.io/en/latest/_images/ollama_serve.png)


### 3. Pull and Run Ollama Model


Keep the Ollama service running, open another terminal, and run `./ollama pull <model_name>` on Linux (`ollama.exe pull <model_name>` on Windows) to automatically pull a model, e.g. `qwen2:latest`:


![](https://llm-assets.readthedocs.io/en/latest/_images/ollama_pull.png)
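
For reference, the pull step shown in the screenshot corresponds to a command like the following (assuming the Linux binary and the example model above):

```bash
# Download the model while `ollama serve` keeps running in the other terminal.
./ollama pull qwen2:latest
```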


#### Run Ollama Model
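
As a minimal sketch of this step (assuming the `qwen2:latest` model pulled above), running the model follows the same pattern:

```bash
# Start an interactive session with the pulled model.
./ollama run qwen2:latest
```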


