sidebar_position: 10
Answers to questions about general features, troubleshooting, usage, and more.
Although LLMs have significantly advanced Natural Language Processing (NLP), the “garbage in, garbage out” status quo remains unchanged. In response, RAGFlow introduces two unique features compared to other Retrieval-Augmented Generation (RAG) products.
You can find the RAGFlow version number on the System page of the UI:
If you build RAGFlow from source, the version number is also in the system log:
____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
2025-02-18 10:10:43,835 INFO 1445658 RAGFlow version: v0.15.0-50-g6daae7f2 full
Where:
- `v0.15.0`: The officially published release.
- `50`: The number of git commits since the official release.
- `g6daae7f2`: `g` is the prefix, and `6daae7f2` is the first seven characters of the current commit ID.
- `full`/`slim`: The RAGFlow edition.
  - `full`: The full RAGFlow edition.
  - `slim`: The RAGFlow edition without embedding models and Python packages.

We put painstaking effort into document pre-processing tasks like layout analysis, table structure recognition, and OCR (Optical Character Recognition) using our vision models. This contributes to the additional time required.
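As an aside, the version string shown in the log (`v0.15.0-50-g6daae7f2`) follows the `git describe --tags` convention (`<tag>-<commits>-g<sha>`), so its parts can be split with plain shell parameter expansion. A minimal sketch; the variable names are illustrative:

```shell
# Split a RAGFlow version string of the form v<release>-<commits>-g<sha>.
ver="v0.15.0-50-g6daae7f2"
release="${ver%%-*}"    # trim from the first "-": "v0.15.0"
rest="${ver#*-}"        # "50-g6daae7f2"
commits="${rest%%-*}"   # "50"
sha="${rest#*-g}"       # "6daae7f2"
echo "$release $commits $sha"
```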
RAGFlow has a number of built-in models for document structure parsing, which account for the additional computational resources.
We officially support x86 CPUs and NVIDIA GPUs. While we also test RAGFlow on ARM64 platforms, we do not maintain RAGFlow Docker images for ARM. If you are on an ARM platform, follow this guide to build a RAGFlow Docker image.
RAGFlow offers two Docker image editions, v0.17.2-slim and v0.17.2:
- `infiniflow/ragflow:v0.17.2-slim` (default): The RAGFlow Docker image without embedding models.
- `infiniflow/ragflow:v0.17.2`: The RAGFlow Docker image with embedding models, including:
  - BAAI/bge-large-zh-v1.5
  - BAAI/bge-reranker-v2-m3
  - maidalun1020/bce-embedding-base_v1
  - maidalun1020/bce-reranker-base_v1
  - BAAI/bge-base-en-v1.5
  - BAAI/bge-large-en-v1.5
  - BAAI/bge-small-en-v1.5
  - BAAI/bge-small-zh-v1.5
  - jinaai/jina-embeddings-v2-base-en
  - jinaai/jina-embeddings-v2-small-en
  - nomic-ai/nomic-embed-text-v1.5
  - sentence-transformers/all-MiniLM-L6-v2

The corresponding APIs are now available. See the RAGFlow HTTP API Reference or the RAGFlow Python API Reference for more information.
Yes, we do.
No, this feature is not supported.
Yes, we support enhancing user queries based on existing context of an ongoing conversation:
See Build a RAGFlow Docker image.
A locally deployed RAGFlow downloads OCR and embedding modules from the Hugging Face website by default. If your machine cannot access this site, the following error occurs and PDF parsing fails:
FileNotFoundError: [Errno 2] No such file or directory: '/root/.cache/huggingface/hub/models--InfiniFlow--deepdoc/snapshots/be0c1e50eef6047b412d1800aa89aba4d275f997/ocr.res'
To fix this issue, use https://hf-mirror.com instead:
cd ragflow/docker/
docker compose down
# Uncomment HF_ENDPOINT=https://hf-mirror.com in docker/.env
docker compose up -d
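If you run RAGFlow from source rather than through Docker Compose, the same mirror can be selected via the `HF_ENDPOINT` environment variable (recognized by the Hugging Face client libraries) before launching. A minimal sketch:

```shell
# Point Hugging Face downloads at the mirror for this shell session.
export HF_ENDPOINT=https://hf-mirror.com
echo "HF_ENDPOINT is set to: $HF_ENDPOINT"
```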
`MaxRetryError: HTTPSConnectionPool(host='hf-mirror.com', port=443)`

This error suggests that you do not have Internet access or are unable to connect to hf-mirror.com. Try the following:
- ~/deepdoc:/ragflow/rag/res/deepdoc
`WARNING: can't find /raglof/rag/res/borker.tm`

Ignore this warning and continue. All system warnings can be ignored.
`network anomaly There is an abnormality in your network and you cannot connect to the server.`

You will not be able to log in to RAGFlow until the server is fully initialized. Run `docker logs -f ragflow-server` to check the server's initialization status.
The server is successfully initialized if your system displays the following:
____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:9380
* Running on http://x.x.x.x:9380
INFO:werkzeug:Press CTRL+C to quit
`Realtime synonym is disabled, since no redis connection`

Ignore this warning and continue. All system warnings can be ignored.
Click the red cross beside the ‘parsing status’ bar, then restart the parsing process to see if the issue remains. If the issue persists and your RAGFlow is deployed locally, try the following:
docker logs -f ragflow-server
Increase the `MEM_LIMIT` value in **docker/.env**.

:::note Ensure that you restart your RAGFlow server for your changes to take effect!
docker compose stop
docker compose up -d
:::
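`MEM_LIMIT` is specified in bytes (an assumption based on the default value in **docker/.env**); a quick way to compute the value for a target size in gigabytes:

```shell
# Compute a MEM_LIMIT value for an 8 GB cap (bytes = GB * 1024^3).
gb=8
mem_limit=$((gb * 1024 * 1024 * 1024))
echo "MEM_LIMIT=${mem_limit}"
```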
`Index failure`

An index failure usually indicates an unavailable Elasticsearch service.
tail -f ragflow/docker/ragflow-logs/*.log
$ docker ps
The following is an example result:
5bc45806b680 infiniflow/ragflow:latest "./entrypoint.sh" 11 hours ago Up 11 hours 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp, 0.0.0.0:9380->9380/tcp, :::9380->9380/tcp ragflow-server
91220e3285dd docker.elastic.co/elasticsearch/elasticsearch:8.11.3 "/bin/tini -- /usr/l…" 11 hours ago Up 11 hours (healthy) 9300/tcp, 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp ragflow-es-01
d8c86f06c56b mysql:5.7.18 "docker-entrypoint.s…" 7 days ago Up 16 seconds (healthy) 0.0.0.0:3306->3306/tcp, :::3306->3306/tcp ragflow-mysql
cd29bcb254bc quay.io/minio/minio:RELEASE.2023-12-20T01-00-02Z "/usr/bin/docker-ent…" 2 weeks ago Up 11 hours 0.0.0.0:9001->9001/tcp, :::9001->9001/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp ragflow-minio
:::danger IMPORTANT The status of a Docker container does not necessarily reflect the status of the service. You may find that your services are unhealthy even when the corresponding Docker containers are up and running. Possible reasons for this include network failures, incorrect port numbers, or DNS issues. :::
`Exception: Can't connect to ES cluster`

Check the status of your Elasticsearch component:

$ docker ps
The status of a healthy Elasticsearch component should look as follows:
91220e3285dd docker.elastic.co/elasticsearch/elasticsearch:8.11.3 "/bin/tini -- /usr/l…" 11 hours ago Up 11 hours (healthy) 9300/tcp, 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp ragflow-es-01
:::danger IMPORTANT The status of a Docker container does not necessarily reflect the status of the service. You may find that your services are unhealthy even when the corresponding Docker containers are up and running. Possible reasons for this include network failures, incorrect port numbers, or DNS issues. :::
Ensure `vm.max_map_count` >= 262144 as per this README. Updating the `vm.max_map_count` value in **/etc/sysctl.conf** is required if you wish to keep your change permanent. Note that this configuration works only for Linux.

`Elasticsearch did not exit normally`

This is because you forgot to update the `vm.max_map_count` value in **/etc/sysctl.conf**, and your change to this value was reset after a system reboot.
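To confirm the setting on a Linux host, a quick check along these lines can help (a sketch; `sysctl` availability and output vary by distribution):

```shell
# Verify that vm.max_map_count meets Elasticsearch's minimum (Linux only).
required=262144
current=$(sysctl -n vm.max_map_count 2>/dev/null || echo 0)
if [ "$current" -ge "$required" ]; then
  echo "vm.max_map_count OK ($current)"
else
  echo "vm.max_map_count too low ($current); run: sudo sysctl -w vm.max_map_count=$required"
fi
```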
`{"data":null,"code":100,"message":"<NotFound '404: Not Found'>"}`

Your IP address or port number may be incorrect. If you are using the default configurations, enter http://<IP_OF_YOUR_MACHINE> (NOT 9380, and no port number required!) in your browser. This should work.
`Ollama - Mistral instance running at 127.0.0.1:11434 but cannot add Ollama as model in RAGFlow`

A correct Ollama IP address and port are crucial to adding models via Ollama:
See Deploy a local LLM for more information.
Yes, we do. See the Python files under the rag/app folder.
Ensure that you update the `MAX_CONTENT_LENGTH` environment variable:

1. In **docker/.env**, set `MAX_CONTENT_LENGTH`:

   MAX_CONTENT_LENGTH=176160768 # 168MB

2. In **nginx/nginx.conf**, update `client_max_body_size` accordingly:

   client_max_body_size 168M

3. Restart the RAGFlow server:

   docker compose up ragflow -d
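The byte value is simply megabytes times 1024²; a quick sanity check for the 168 MB figure above:

```shell
# 168 MB expressed in bytes, matching the MAX_CONTENT_LENGTH value above.
mb=168
max_content_length=$((mb * 1024 * 1024))
echo "MAX_CONTENT_LENGTH=${max_content_length}"
```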
`FileNotFoundError: [Errno 2] No such file or directory`

Check the status of your MinIO component:

$ docker ps

The status of a healthy MinIO component should look as follows:
cd29bcb254bc quay.io/minio/minio:RELEASE.2023-12-20T01-00-02Z "/usr/bin/docker-ent…" 2 weeks ago Up 11 hours 0.0.0.0:9001->9001/tcp, :::9001->9001/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp ragflow-minio
:::danger IMPORTANT The status of a Docker container does not necessarily reflect the status of the service. You may find that your services are unhealthy even when the corresponding Docker containers are up and running. Possible reasons for this include network failures, incorrect port numbers, or DNS issues. :::
You can use Ollama or Xinference to deploy a local LLM. See here for more information.
If your model is not currently supported but has APIs compatible with those of OpenAI, click OpenAI-API-Compatible on the Model providers page to configure your model:
See here for more information.
`Error: Range of input length should be [1, 30000]`

This error occurs because there are too many chunks matching your search criteria. Try reducing **TopN** and increasing **Similarity threshold** to fix this issue:
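To illustrate how the two knobs interact (an illustrative sketch, not RAGFlow's actual retrieval code): the similarity threshold first filters out low-scoring chunks, and TopN then caps how many survivors are kept:

```shell
# Toy retrieval list, one "<similarity> <chunk-id>" pair per line.
threshold="0.5"
topn=2
kept=$(printf '%s\n' \
    "0.91 chunkA" "0.42 chunkB" "0.77 chunkC" "0.63 chunkD" \
  | awk -v t="$threshold" '$1 >= t' \
  | sort -rn \
  | head -n "$topn")
echo "$kept"
```

Raising the threshold or lowering TopN shrinks the final set, which is what resolves the input-length error.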
See Acquire a RAGFlow API key.
See Upgrade RAGFlow for more information.