RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It offers a streamlined RAG workflow for businesses of any scale, combining LLM (Large Language Models) to provide truthful question-answering capabilities, backed by well-founded citations from various complex formatted data.
Try our demo at https://demo.ragflow.io.
1. Ensure `vm.max_map_count` >= 262144:

   > To check the value of `vm.max_map_count`:
   >
   > ```bash
   > $ sysctl vm.max_map_count
   > ```
   >
   > Reset `vm.max_map_count` to a value at least 262144 if it is not.
   >
   > ```bash
   > # In this case, we set it to 262144:
   > $ sudo sysctl -w vm.max_map_count=262144
   > ```
   >
   > This change will be reset after a system reboot. To ensure your change remains permanent, add or update the `vm.max_map_count` value in **/etc/sysctl.conf** accordingly:
   >
   > ```bash
   > vm.max_map_count=262144
   > ```

2. Clone the repo:

   ```bash
   $ git clone https://github.com/infiniflow/ragflow.git
   ```
Running the following commands automatically downloads the *dev* version of the RAGFlow Docker image. To download and run a specified Docker version, update `RAGFLOW_VERSION` in **docker/.env** to the intended version, for example `RAGFLOW_VERSION=v0.11.0`, before running the following commands.
   $ cd ragflow/docker
   $ chmod +x ./entrypoint.sh
   $ docker compose up -d
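If you pin a release as described above, the edit to **docker/.env** can also be scripted. A minimal sketch, using a throwaway copy of the file so it is safe to run anywhere; in a real checkout you would point `sed` at **docker/.env** itself, and `v0.11.0` is just an example version:

```shell
# Pin the RAGFlow image version before running `docker compose up`.
# This works on a throwaway stand-in for docker/.env so the example
# is self-contained.
mkdir -p /tmp/ragflow-env-demo
printf 'RAGFLOW_VERSION=dev\n' > /tmp/ragflow-env-demo/.env   # stand-in for docker/.env
sed -i 's/^RAGFLOW_VERSION=.*/RAGFLOW_VERSION=v0.11.0/' /tmp/ragflow-env-demo/.env
grep '^RAGFLOW_VERSION=' /tmp/ragflow-env-demo/.env
```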
The core image is about 9 GB in size and may take a while to load.
   $ docker logs -f ragflow-server
The following output confirms a successful launch of the system:
       ____                 ______ __
      / __ \ ____ _ ____ _ / ____// /____  _      __
     / /_/ // __ `// __ `// /_   / // __ \| | /| / /
    / _, _// /_/ // /_/ // __/  / // /_/ /| |/ |/ /
   /_/ |_| \__,_/ \__, //_/    /_/ \____/ |__/|__/
                 /____/
    * Running on all addresses (0.0.0.0)
    * Running on http://127.0.0.1:9380
    * Running on http://x.x.x.x:9380
    INFO:werkzeug:Press CTRL+C to quit
If you skip this confirmation step and directly log in to RAGFlow, your browser may prompt a `network abnormal` error because, at that moment, your RAGFlow may not be fully initialized.
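Rather than checking manually, you can poll the server before opening the browser. A minimal sketch, assuming `curl` is installed and the default backend port 9380; `wait_for` is a hypothetical helper, not part of RAGFlow:

```shell
# wait_for: poll a URL until it answers, or give up after N tries.
# Hypothetical helper; curl and the port in the usage line are assumptions.
wait_for() {
  url=$1
  tries=${2:-30}
  i=0
  until curl -fsS "$url" > /dev/null 2>&1; do
    i=$((i + 1))
    [ "$i" -ge "$tries" ] && return 1
    sleep 2
  done
  return 0
}

# Usage, once `docker compose up -d` has returned:
# wait_for http://127.0.0.1:9380 60 && echo "RAGFlow answered; safe to log in"
```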
In your web browser, enter http://IP_OF_YOUR_MACHINE (sans port number) and log in to RAGFlow: the default HTTP serving port 80 can be omitted when using the default configurations.

In **service_conf.yaml**, select the desired LLM factory in `user_default_llm` and update the `API_KEY` field with the corresponding API key. See llm_api_key_setup for more information.
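For orientation, the relevant section of **service_conf.yaml** has roughly the following shape. This is illustrative only: the factory name and exact keys below are placeholders, so check the file shipped with your release.

```yaml
user_default_llm:
  factory: "OpenAI"        # the LLM factory you selected (placeholder)
  api_key: "YOUR_API_KEY"  # the API_KEY field mentioned above
```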
The show is now on!
When it comes to system configurations, you will need to manage the following files:
`SVR_HTTP_PORT`, `MYSQL_PASSWORD`, and `MINIO_PASSWORD`. You must ensure that changes to the **.env** file are in line with what is in the **service_conf.yaml** file.
The ./docker/README file provides a detailed description of the environment settings and service configurations, and you are REQUIRED to ensure that all environment settings listed in the ./docker/README file are aligned with the corresponding configurations in the service_conf.yaml file.
To update the default HTTP serving port (80), go to **docker-compose.yml** and change `80:80` to `<YOUR_SERVING_PORT>:80`.
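For example, to serve on host port 8080 instead, the mapping would look like this (the service name below is assumed from the repository's **docker-compose.yml**; verify it against your copy):

```yaml
services:
  ragflow:
    ports:
      - "8080:80"   # host port 8080 -> container port 80
```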
Updates to all system configurations require a system reboot to take effect:
```bash
$ docker-compose up -d
```

## 🛠️ Build from source

To build the Docker images from source:

```bash
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/
$ docker build -t infiniflow/ragflow:dev .
$ cd ragflow/docker
$ chmod +x ./entrypoint.sh
$ docker compose up -d
```
To launch the service from source:
   $ git clone https://github.com/infiniflow/ragflow.git
   $ cd ragflow/
   $ conda create -n ragflow python=3.11.0
   $ conda activate ragflow
   $ pip install -r requirements.txt
   # If your CUDA version is higher than 12.0, run the following additional commands:
   $ pip uninstall -y onnxruntime-gpu
   $ pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
   # Get the Python path:
   $ which python
   # Get the ragflow project path:
   $ pwd
   $ cp docker/entrypoint.sh .
   $ vi entrypoint.sh
   # Adjust configurations according to your actual situation (the following two export commands are newly added):
   # - Assign the result of `which python` to `PY`.
   # - Assign the result of `pwd` to `PYTHONPATH`.
   # - Comment out `LD_LIBRARY_PATH`, if it is configured.
   # - Optional: Add Hugging Face mirror.
   PY=${PY}
   export PYTHONPATH=${PYTHONPATH}
   export HF_ENDPOINT=https://hf-mirror.com
   $ cd docker
   $ docker compose -f docker-compose-base.yml up -d 
Check the configuration files, ensuring that:

- The settings in **docker/.env** match those in **conf/service_conf.yaml**.
- The IP addresses and ports for related services in **service_conf.yaml** match the local machine IP and the ports exposed by the container.
Launch the RAGFlow backend service:
   $ chmod +x ./entrypoint.sh
   $ bash ./entrypoint.sh
   $ cd web
   $ npm install --registry=https://registry.npmmirror.com --force
   $ vim .umirc.ts
   # Update proxy.target to http://127.0.0.1:9380
   $ npm run dev 
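The proxy edit above amounts to something like the following fragment in **.umirc.ts**. The `/v1` path prefix here is an assumption: mirror whatever paths the file already proxies in your checkout.

```typescript
// .umirc.ts (fragment): point the dev-server proxy at the local backend.
proxy: {
  '/v1': {
    target: 'http://127.0.0.1:9380',
    changeOrigin: true,
  },
},
```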
   $ cd web
   $ npm install --registry=https://registry.npmmirror.com --force
   $ umi build
   $ mkdir -p /ragflow/web
   $ cp -r dist /ragflow/web
   $ apt install nginx -y
   $ cp ../docker/nginx/proxy.conf /etc/nginx
   $ cp ../docker/nginx/nginx.conf /etc/nginx
   $ cp ../docker/nginx/ragflow.conf /etc/nginx/conf.d
   $ systemctl start nginx
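For reference, a `ragflow.conf` server block has roughly the following shape. This is illustrative only: the file copied above from **docker/nginx** is authoritative, and the `/v1` location prefix is an assumption.

```nginx
server {
    listen 80;

    # Serve the built frontend
    location / {
        root /ragflow/web/dist;
        index index.html;
    }

    # Forward API calls to the backend (path prefix is an assumption)
    location /v1 {
        proxy_pass http://127.0.0.1:9380;
    }
}
```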
See the RAGFlow Roadmap 2024.
RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part, review our Contribution Guidelines first.