
English | 简体中文 | 日本語 | 한국어

Document | Roadmap | Twitter | Discord | Demo

📕 Table of Contents

  • 💡 What is RAGFlow?
  • 🎮 Demo
  • 🔥 Latest Updates
  • 🌟 Key Features
  • 🔎 System Architecture
  • 🎬 Get Started
  • 🔧 Configurations
  • 🛠️ Build from source
  • 🛠️ Launch service from source
  • 📚 Documentation
  • 📜 Roadmap
  • 🏄 Community
  • 🙌 Contributing

💡 What is RAGFlow?

RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It offers a streamlined RAG workflow for businesses of any scale, combining LLM (Large Language Models) to provide truthful question-answering capabilities, backed by well-founded citations from various complex formatted data.

🎮 Demo

Try our demo at https://demo.ragflow.io.

🔥 Latest Updates

  • 2024-09-13 Adds search mode for knowledge base Q&A.
  • 2024-09-09 Adds a medical consultant agent template.
  • 2024-08-22 Supports converting text to SQL statements through RAG.
  • 2024-08-02 Supports GraphRAG inspired by graphrag and mind map.
  • 2024-07-23 Supports audio file parsing.
  • 2024-07-08 Supports workflow based on Graph.
  • 2024-06-27 Supports Markdown and Docx in the Q&A parsing method, extracts images from Docx files, and extracts tables from Markdown files.
  • 2024-05-23 Supports RAPTOR for better text retrieval.

🌟 Key Features

🍭 “Quality in, quality out”

  • Deep document understanding-based knowledge extraction from unstructured data with complicated formats.
  • Finds “needle in a data haystack” of literally unlimited tokens.

🍱 Template-based chunking

  • Intelligent and explainable.
  • Plenty of template options to choose from.

🌱 Grounded citations with reduced hallucinations

  • Visualization of text chunking to allow human intervention.
  • Quick view of the key references and traceable citations to support grounded answers.

🍔 Compatibility with heterogeneous data sources

  • Supports Word, slides, Excel, txt, images, scanned copies, structured data, web pages, and more.

🛀 Automated and effortless RAG workflow

  • Streamlined RAG orchestration catered to both personal and large businesses.
  • Configurable LLMs as well as embedding models.
  • Multiple recall paired with fused re-ranking.
  • Intuitive APIs for seamless integration with business.

🔎 System Architecture

🎬 Get Started

📝 Prerequisites

  • CPU >= 4 cores
  • RAM >= 16 GB
  • Disk >= 50 GB
  • Docker >= 24.0.0 & Docker Compose >= v2.26.1

  > If you have not installed Docker on your local machine (Windows, Mac, or Linux), see Install Docker Engine.
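A quick way to verify the hardware and Docker prerequisites on a Linux host (a convenience check only, not part of the official setup):

```bash
# Check CPU cores, memory, and free disk space
$ nproc
$ free -h
$ df -h .
# Check Docker and Docker Compose versions
$ docker --version
$ docker compose version
```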

🚀 Start up the server

  1. Ensure `vm.max_map_count` >= 262144:

     > To check the value of `vm.max_map_count`:
     >
     > ```bash
     > $ sysctl vm.max_map_count
     > ```
     >
     > Reset `vm.max_map_count` to a value of at least 262144 if it is not.
     >
     > ```bash
     > # In this case, we set it to 262144:
     > $ sudo sysctl -w vm.max_map_count=262144
     > ```
     >
     > This change will be reset after a system reboot. To ensure your change remains permanent, add or update the `vm.max_map_count` value in **/etc/sysctl.conf** accordingly:
     >
     > ```bash
     > vm.max_map_count=262144
     > ```

  2. Clone the repo:

     ```bash
     $ git clone https://github.com/infiniflow/ragflow.git
     ```

  3. Start up the server using the pre-built Docker images:

     > Running the following commands automatically downloads the dev version of the RAGFlow Docker image. To download and run a specific version, update `RAGFLOW_VERSION` in docker/.env to the intended version, for example `RAGFLOW_VERSION=v0.11.0`, before running the following commands (see the example below).

     ```bash
     $ cd ragflow/docker
     $ chmod +x ./entrypoint.sh
     $ docker compose up -d
     ```

     The core image is about 9 GB in size and may take a while to load.
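     For reference, pinning the image version comes down to a single line in docker/.env (a minimal sketch; `v0.11.0` is just the example version mentioned above):

     ```bash
     # docker/.env
     RAGFLOW_VERSION=v0.11.0
     ```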

  4. Check the server status after having the server up and running:

     ```bash
     $ docker logs -f ragflow-server
     ```

     The following output confirms a successful launch of the system:

     ```bash
         ____                 ______ __
        / __ \ ____ _ ____ _ / ____// /____  _      __
       / /_/ // __ `// __ `// /_   / // __ \| | /| / /
      / _, _// /_/ // /_/ // __/  / // /_/ /| |/ |/ /
     /_/ |_| \__,_/ \__, //_/    /_/ \____/ |__/|__/
                   /____/

      * Running on all addresses (0.0.0.0)
      * Running on http://127.0.0.1:9380
      * Running on http://x.x.x.x:9380
      INFO:werkzeug:Press CTRL+C to quit
     ```

If you skip this confirmation step and log in to RAGFlow directly, your browser may prompt a `network abnormal` error because, at that moment, your RAGFlow may not be fully initialized.

  5. In your web browser, enter the IP address of your server and log in to RAGFlow.

     > With the default settings, you only need to enter http://IP_OF_YOUR_MACHINE (sans port number): the default HTTP serving port 80 can be omitted when using the default configurations.

  6. In service_conf.yaml, select the desired LLM factory in user_default_llm and update the API_KEY field with the corresponding API key.

See llm_api_key_setup for more information.
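For illustration, the relevant section of service_conf.yaml might look roughly like the following (a hypothetical sketch only; the exact key names and available LLM factories depend on your release, so check the file shipped with your version):

```yaml
# service_conf.yaml (illustrative sketch)
user_default_llm:
  factory: 'OpenAI'          # the LLM factory you want to use (example choice)
  api_key: 'sk-xxxxxxxxxxxx' # replace with your own API key
```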

The show is now on!

🔧 Configurations

When it comes to system configurations, you will need to manage the following files: docker/.env, conf/service_conf.yaml, and docker/docker-compose.yml.

You must ensure that changes to the .env file are in line with what is in the service_conf.yaml file.

The ./docker/README file provides a detailed description of the environment settings and service configurations. You are REQUIRED to ensure that all environment settings listed in ./docker/README are aligned with the corresponding configurations in service_conf.yaml.

To update the default HTTP serving port (80), go to docker-compose.yml and change 80:80 to <YOUR_SERVING_PORT>:80.
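For example, mapping host port 8080 to the container's port 80 would look roughly like this (a sketch only; the service name and surrounding fields are abbreviated and may differ in your docker-compose.yml):

```yaml
services:
  ragflow:
    # ...
    ports:
      - 8080:80   # <YOUR_SERVING_PORT>:80
```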

Updates to the system configurations require a restart of the containers to take effect:

```bash
$ docker-compose up -d
```

🛠️ Build from source

To build the Docker images from source:

```bash
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/
$ docker build -t infiniflow/ragflow:dev .
$ cd docker
$ chmod +x ./entrypoint.sh
$ docker compose up -d
```

🛠️ Launch service from source

To launch the service from source:

  1. Clone the repository:

     ```bash
     $ git clone https://github.com/infiniflow/ragflow.git
     $ cd ragflow/
     ```
  2. Create a virtual environment, ensuring that Anaconda or Miniconda is installed:

     ```bash
     $ conda create -n ragflow python=3.11.0
     $ conda activate ragflow
     $ pip install -r requirements.txt
     # If your CUDA version is higher than 12.0, run the following additional commands:
     $ pip uninstall -y onnxruntime-gpu
     $ pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
     ```
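     If you switched to the CUDA 12 build of onnxruntime-gpu, a quick optional sanity check (not part of the official steps) is:

     ```bash
     $ python -c "import onnxruntime; print(onnxruntime.get_device())"
     # Prints GPU if the GPU build of onnxruntime is installed
     ```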
  3. Copy the entry script and configure environment variables:

     ```bash
     # Get the Python path:
     $ which python
     # Get the ragflow project path:
     $ pwd

     $ cp docker/entrypoint.sh .
     $ vi entrypoint.sh
     # Adjust configurations according to your actual situation (the following two export commands are newly added):
     # - Assign the result of `which python` to `PY`.
     # - Assign the result of `pwd` to `PYTHONPATH`.
     # - Comment out `LD_LIBRARY_PATH`, if it is configured.
     # - Optional: Add Hugging Face mirror.
     PY=${PY}
     export PYTHONPATH=${PYTHONPATH}
     export HF_ENDPOINT=https://hf-mirror.com
     ```
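     As a concrete illustration, assuming a conda environment named ragflow and the repository cloned to /home/user/ragflow (both hypothetical paths), the added lines might read:

     ```bash
     # entrypoint.sh (illustrative values only)
     PY=/home/user/miniconda3/envs/ragflow/bin/python
     export PYTHONPATH=/home/user/ragflow
     # Optional: use a Hugging Face mirror
     export HF_ENDPOINT=https://hf-mirror.com
     ```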
  4. Launch the third-party services (MinIO, Elasticsearch, Redis, and MySQL):

     ```bash
     $ cd docker
     $ docker compose -f docker-compose-base.yml up -d
     ```
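     To confirm these containers are up before moving on, you can list them (an optional check):

     ```bash
     $ docker compose -f docker-compose-base.yml ps
     ```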
  5. Check the configuration files, ensuring that:

     • The settings in docker/.env match those in conf/service_conf.yaml.
     • The IP addresses and ports for related services in service_conf.yaml match the local machine IP and the ports exposed by the container.
  6. Launch the RAGFlow backend service:

     ```bash
     $ chmod +x ./entrypoint.sh
     $ bash ./entrypoint.sh
     ```
  7. Launch the frontend service:

     ```bash
     $ cd web
     $ npm install --registry=https://registry.npmmirror.com --force
     $ vim .umirc.ts
     # Update proxy.target to http://127.0.0.1:9380
     $ npm run dev
     ```
  8. Deploy the frontend service:

     ```bash
     $ cd web
     $ npm install --registry=https://registry.npmmirror.com --force
     $ umi build
     $ mkdir -p /ragflow/web
     $ cp -r dist /ragflow/web
     $ apt install nginx -y
     $ cp ../docker/nginx/proxy.conf /etc/nginx
     $ cp ../docker/nginx/nginx.conf /etc/nginx
     $ cp ../docker/nginx/ragflow.conf /etc/nginx/conf.d
     $ systemctl start nginx
     ```
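     A quick way to confirm that nginx is serving the built frontend (assuming the default HTTP port 80) is to request the page headers:

     ```bash
     $ curl -I http://127.0.0.1
     ```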

📚 Documentation

📜 Roadmap

See the RAGFlow Roadmap 2024.

🏄 Community

🙌 Contributing

RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to contribute, please review our Contribution Guidelines first.