
# README

<details open>
<summary><b>📗 Table of Contents</b></summary>

- 🐳 [Docker Compose](#-docker-compose)
- 🐬 [Docker environment variables](#-docker-environment-variables)
- 🐋 [Service configuration](#-service-configuration)
- 📋 [Setup Examples](#-setup-examples)

</details>

## 🐳 Docker Compose

- **docker-compose.yml**
  Sets up the environment for RAGFlow and its dependencies.
- **docker-compose-base.yml**
  Sets up the environment for RAGFlow's dependencies: Elasticsearch/[Infinity](https://github.com/infiniflow/infinity), MySQL, MinIO, and Redis.

> [!CAUTION]
> We do not actively maintain **docker-compose-CN-oc9.yml**, **docker-compose-gpu-CN-oc9.yml**, or **docker-compose-gpu.yml**, so use them at your own risk. However, you are welcome to file a pull request to improve any of them.
## 🐬 Docker environment variables

The [.env](./.env) file contains important environment variables for Docker.

### Elasticsearch

- `STACK_VERSION`
  The version of Elasticsearch. Defaults to `8.11.3`.
- `ES_PORT`
  The port used to expose the Elasticsearch service to the host machine, allowing **external** access to the service running inside the Docker container. Defaults to `1200`.
- `ELASTIC_PASSWORD`
  The password for Elasticsearch.

### Kibana

- `KIBANA_PORT`
  The port used to expose the Kibana service to the host machine, allowing **external** access to the service running inside the Docker container. Defaults to `6601`.
- `KIBANA_USER`
  The username for Kibana. Defaults to `rag_flow`.
- `KIBANA_PASSWORD`
  The password for Kibana. Defaults to `infini_rag_flow`.

### Resource management

- `MEM_LIMIT`
  The maximum amount of memory, in bytes, that *a specific* Docker container can use while running. Defaults to `8073741824`.

### MySQL

- `MYSQL_PASSWORD`
  The password for MySQL.
- `MYSQL_PORT`
  The port used to expose the MySQL service to the host machine, allowing **external** access to the MySQL database running inside the Docker container. Defaults to `5455`.

### MinIO

- `MINIO_CONSOLE_PORT`
  The port used to expose the MinIO console interface to the host machine, allowing **external** access to the web-based console running inside the Docker container. Defaults to `9001`.
- `MINIO_PORT`
  The port used to expose the MinIO API service to the host machine, allowing **external** access to the MinIO object storage service running inside the Docker container. Defaults to `9000`.
- `MINIO_USER`
  The username for MinIO.
- `MINIO_PASSWORD`
  The password for MinIO.

### Redis

- `REDIS_PORT`
  The port used to expose the Redis service to the host machine, allowing **external** access to the Redis service running inside the Docker container. Defaults to `6379`.
- `REDIS_PASSWORD`
  The password for Redis.

### RAGFlow

- `SVR_HTTP_PORT`
  The port used to expose RAGFlow's HTTP API service to the host machine, allowing **external** access to the service running inside the Docker container. Defaults to `9380`.
- `RAGFLOW_IMAGE`
  The Docker image edition. Available editions:

  - `infiniflow/ragflow:v0.20.0-slim` (default): The RAGFlow Docker image without embedding models.
  - `infiniflow/ragflow:v0.20.0`: The RAGFlow Docker image with embedding models, including:
    - Built-in embedding models:
      - `BAAI/bge-large-zh-v1.5`
      - `maidalun1020/bce-embedding-base_v1`

> [!TIP]
> If you cannot download the RAGFlow Docker image, try the following mirrors.
>
> - For the `nightly-slim` edition:
>   - `RAGFLOW_IMAGE=swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:nightly-slim` or,
>   - `RAGFLOW_IMAGE=registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow:nightly-slim`.
> - For the `nightly` edition:
>   - `RAGFLOW_IMAGE=swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:nightly` or,
>   - `RAGFLOW_IMAGE=registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow:nightly`.
### Timezone

- `TIMEZONE`
  The local time zone. Defaults to `'Asia/Shanghai'`.

### Hugging Face mirror site

- `HF_ENDPOINT`
  The mirror site for huggingface.co. It is disabled by default. You can uncomment this line if you have limited access to the primary Hugging Face domain.

### macOS

- `MACOS`
  Optimizations for macOS. It is disabled by default. You can uncomment this line if your OS is macOS.

### Maximum file size

- `MAX_CONTENT_LENGTH`
  The maximum file size for each uploaded file, in bytes. You can uncomment this line if you wish to change the 128M file size limit. After making the change, ensure you update `client_max_body_size` in **nginx/nginx.conf** correspondingly.
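For instance, to raise the limit to 256 MB, you might set the variable as follows (the value shown is illustrative; 256 MB = 268435456 bytes):

```bash
# .env — uncomment and set the upload limit in bytes (256 MB here)
MAX_CONTENT_LENGTH=268435456
```

Remember to raise nginx's own limit to match (e.g. `client_max_body_size 256m;` in **nginx/nginx.conf**) and restart the containers, or nginx will still reject large uploads.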
### Doc bulk size

- `DOC_BULK_SIZE`
  The number of document chunks processed in a single batch during document parsing. Defaults to `4`.

### Embedding batch size

- `EMBEDDING_BATCH_SIZE`
  The number of text chunks processed in a single batch during embedding vectorization. Defaults to `16`.
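A sketch of how these two variables might be tuned in `.env` on hardware with headroom to spare (the values shown are illustrative, not recommendations; larger batches raise throughput at the cost of memory):

```bash
# .env — batch-size tuning (illustrative values)
DOC_BULK_SIZE=8          # document chunks parsed per batch (default: 4)
EMBEDDING_BATCH_SIZE=32  # text chunks embedded per batch (default: 16)
```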
## 🐋 Service configuration

[service_conf.yaml](./service_conf.yaml) specifies the system-level configuration for RAGFlow and is used by its API server and task executor. In a dockerized setup, this file is automatically created based on the [service_conf.yaml.template](./service_conf.yaml.template) file (replacing all environment variables with their values).

- `ragflow`
  - `host`: The API server's IP address inside the Docker container. Defaults to `0.0.0.0`.
  - `port`: The API server's serving port inside the Docker container. Defaults to `9380`.
- `mysql`
  - `name`: The MySQL database name. Defaults to `rag_flow`.
  - `user`: The username for MySQL.
  - `password`: The password for MySQL.
  - `port`: The MySQL serving port inside the Docker container. Defaults to `3306`.
  - `max_connections`: The maximum number of concurrent connections to the MySQL database. Defaults to `100`.
  - `stale_timeout`: Timeout in seconds.
- `minio`
  - `user`: The username for MinIO.
  - `password`: The password for MinIO.
  - `host`: The MinIO serving IP *and* port inside the Docker container. Defaults to `minio:9000`.
- `oss`
  - `access_key`: The access key ID used to authenticate requests to the OSS service.
  - `secret_key`: The secret access key used to authenticate requests to the OSS service.
  - `endpoint_url`: The URL of the OSS service endpoint.
  - `region`: The OSS region where the bucket is located.
  - `bucket`: The name of the OSS bucket where files will be stored. Set this item if you want to store all files in one specified bucket.
  - `prefix_path`: Optional. A prefix path to prepend to file names in the OSS bucket, which can help organize files within the bucket.
- `s3`:
  - `access_key`: The access key ID used to authenticate requests to the S3 service.
  - `secret_key`: The secret access key used to authenticate requests to the S3 service.
  - `endpoint_url`: The URL of the S3-compatible service endpoint. This is necessary when using an S3-compatible protocol instead of the default AWS S3 endpoint.
  - `bucket`: The name of the S3 bucket where files will be stored. Set this item if you want to store all files in one specified bucket.
  - `region`: The AWS region where the S3 bucket is located. This is important for directing requests to the correct data center.
  - `signature_version`: Optional. The version of the signature to use for authenticating requests. Common versions include `v4`.
  - `addressing_style`: Optional. The addressing style to use for the S3 endpoint. This can be `path` or `virtual`.
  - `prefix_path`: Optional. A prefix path to prepend to file names in the S3 bucket, which can help organize files within the bucket.
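As an illustration, an `s3` block in **service_conf.yaml.template** might look like the following sketch. All values are placeholders, and only the keys listed above are assumed:

```yaml
# Illustrative s3 storage configuration (placeholder values)
s3:
  access_key: 'YOUR_ACCESS_KEY'
  secret_key: 'YOUR_SECRET_KEY'
  endpoint_url: 'https://s3.your-provider.example.com'
  bucket: 'ragflow-files'
  region: 'us-east-1'
  signature_version: 'v4'
  addressing_style: 'path'
  prefix_path: 'ragflow'
```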
- `oauth`
  The OAuth configuration for signing up or signing in to RAGFlow using a third-party account.
  - `<channel>`: Custom channel ID.
    - `type`: The authentication type: `oauth2`, `oidc`, or `github`. Defaults to `oauth2`; when the `issuer` parameter is provided, defaults to `oidc`.
    - `icon`: The icon ID: `github` or `sso`. Defaults to `sso`.
    - `display_name`: The channel name. Defaults to the Title Case form of the channel ID.
    - `client_id`: Required. The unique identifier assigned to the client application.
    - `client_secret`: Required. The secret key for the client application, used for communication with the authentication server.
    - `authorization_url`: The base URL for obtaining user authorization.
    - `token_url`: The URL for exchanging the authorization code for an access token.
    - `userinfo_url`: The URL for obtaining user information (username, email, etc.).
    - `issuer`: The base URL of the identity provider. OIDC clients can dynamically obtain the identity provider's metadata (`authorization_url`, `token_url`, `userinfo_url`) through `issuer`.
    - `scope`: The requested permission scope, a space-separated string. For example, `openid profile email`.
    - `redirect_uri`: Required. The URI to which the authorization server redirects during the authentication flow to return results. It must match the callback URI registered with the authentication server. Format: `https://your-app.com/v1/user/oauth/callback/<channel>`. For a local configuration, you can use `http://127.0.0.1:80/v1/user/oauth/callback/<channel>` directly.
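For example, a GitHub channel could be sketched as follows. The channel ID, credentials, and domain are placeholders, and the nesting follows the parameter list above:

```yaml
# Illustrative OAuth channel (placeholder values)
oauth:
  github:
    type: 'github'
    icon: 'github'
    display_name: 'GitHub'
    client_id: 'YOUR_CLIENT_ID'
    client_secret: 'YOUR_CLIENT_SECRET'
    redirect_uri: 'https://your-app.com/v1/user/oauth/callback/github'
```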
- `user_default_llm`
  The default LLM to use for a new RAGFlow user. It is disabled by default. To enable this feature, uncomment the corresponding lines in **service_conf.yaml.template**.
  - `factory`: The LLM supplier. Available options:
    - `"OpenAI"`
    - `"DeepSeek"`
    - `"Moonshot"`
    - `"Tongyi-Qianwen"`
    - `"VolcEngine"`
    - `"ZHIPU-AI"`
  - `api_key`: The API key for the specified LLM. You will need to apply for your model API key online.

> [!TIP]
> If you do not set the default LLM here, configure the default LLM on the **Settings** page in the RAGFlow UI.
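A sketch of what the uncommented section might look like, assuming only the two parameters described above (the supplier and key values are placeholders):

```yaml
# Illustrative default-LLM configuration (placeholder values)
user_default_llm:
  factory: 'OpenAI'
  api_key: 'YOUR_API_KEY'
```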
## 📋 Setup Examples

### 🔒 HTTPS Setup

#### Prerequisites

- A registered domain name pointing to your server
- Ports 80 and 443 open on your server
- Docker and Docker Compose installed

#### Getting and configuring certificates (Let's Encrypt)

If you want your instance to be available over `https`, follow these steps:

1. **Install Certbot and obtain certificates**

   ```bash
   # Ubuntu/Debian
   sudo apt update && sudo apt install certbot

   # CentOS/RHEL
   sudo yum install certbot

   # Obtain certificates (replace with your actual domain)
   sudo certbot certonly --standalone -d your-ragflow-domain.com
   ```
2. **Locate your certificates**

   Once generated, your certificates will be located at:

   - Certificate: `/etc/letsencrypt/live/your-ragflow-domain.com/fullchain.pem`
   - Private key: `/etc/letsencrypt/live/your-ragflow-domain.com/privkey.pem`

3. **Update docker-compose.yml**

   Add the certificate volumes to the `ragflow` service in your `docker-compose.yml`:

   ```yaml
   services:
     ragflow:
       # ...existing configuration...
       volumes:
         # SSL certificates
         - /etc/letsencrypt/live/your-ragflow-domain.com/fullchain.pem:/etc/nginx/ssl/fullchain.pem:ro
         - /etc/letsencrypt/live/your-ragflow-domain.com/privkey.pem:/etc/nginx/ssl/privkey.pem:ro
         # Switch to HTTPS nginx configuration
         - ./nginx/ragflow.https.conf:/etc/nginx/conf.d/ragflow.conf
         # ...other existing volumes...
   ```
4. **Update nginx configuration**

   Edit `nginx/ragflow.https.conf` and replace `my_ragflow_domain.com` with your actual domain name.

5. **Restart the services**

   ```bash
   docker-compose down
   docker-compose up -d
   ```

> [!IMPORTANT]
>
> - Ensure your domain's DNS A record points to your server's IP address.
> - Stop any services running on ports 80/443 before obtaining certificates with `--standalone`.

> [!TIP]
> For development or testing, you can use self-signed certificates, but browsers will show security warnings.
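If you go the self-signed route, a certificate/key pair can be generated with `openssl`; the domain and output file names below are placeholders, and you would mount the two files in place of the Let's Encrypt paths shown earlier:

```bash
# Generate a self-signed certificate and key, valid for 365 days
# (testing only — replace your-ragflow-domain.com with your domain)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout privkey.pem -out fullchain.pem \
  -subj "/CN=your-ragflow-domain.com"
```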
#### Alternative: Using existing certificates

If you already have SSL certificates from another provider:

1. Place your certificates in a directory accessible to Docker.
2. Update the volume paths in `docker-compose.yml` to point to your certificate files.
3. Ensure the certificate file contains the full certificate chain.
4. Follow steps 4-5 from the Let's Encrypt guide above.