### What problem does this PR solve?

### Type of change

- [x] Documentation Update
```json
{
  "id": 10,
  "title": "Research report generator",
  "description": "A report generator that creates a research report from a given title, in the specified target language. It generates queries from the input title, then uses these to create subtitles and sections, compiling everything into a comprehensive report.",
  "canvas_type": "chatbot",
  "dsl": {
    "answer": [],
```
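The fragment above is an excerpt from an agent-template JSON record. As a quick sanity check, a sketch like the following can confirm such a record serializes and round-trips as valid JSON. Note this is an illustrative subset only: in the real template, `dsl` holds a complete canvas graph, not just an empty `answer` list.

```python
import json

# Minimal agent-template record mirroring the excerpt above.
# NOTE: illustrative subset -- the real "dsl" field carries a full
# canvas graph, and the description is abbreviated here.
template = {
    "id": 10,
    "title": "Research report generator",
    "description": (
        "A report generator that creates a research report from a given "
        "title, in the specified target language."
    ),
    "canvas_type": "chatbot",
    "dsl": {"answer": []},
}

# Round-trip through JSON to verify the record is well-formed.
restored = json.loads(json.dumps(template))
print(restored["title"])  # -> Research report generator
```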
- In the **Prompt Engine** tab of your **Chat Configuration** dialogue, disabling **Multi-turn optimization** will reduce the time required to get an answer from the LLM.
- In the **Prompt Engine** tab of your **Chat Configuration** dialogue, leaving the **Rerank model** field empty will significantly decrease retrieval time.
- In the **Assistant Setting** tab of your **Chat Configuration** dialogue, disabling **Keyword analysis** will reduce the time to receive an answer from the LLM.
- When chatting with your chat assistant, click the light bulb icon above the *current* dialogue and scroll down the popup window to view the time taken for each task:


This image is approximately 2 GB in size and relies on external LLM and embedding services.

:::tip NOTE
While we also test RAGFlow on ARM64 platforms, we do not maintain RAGFlow Docker images for ARM. However, you can build an image yourself on a `linux/arm64` or `darwin/arm64` host machine as well.
:::
This image is approximately 9 GB in size. As it includes embedding models, it relies on external LLM services only.

:::tip NOTE
While we also test RAGFlow on ARM64 platforms, we do not maintain RAGFlow Docker images for ARM. However, you can build an image yourself on a `linux/arm64` or `darwin/arm64` host machine.
:::
- Establishing an AI chat based on your datasets.

:::danger IMPORTANT
We officially support x86 CPU and Nvidia GPU, and this document offers instructions on deploying RAGFlow using Docker on x86 platforms. While we also test RAGFlow on ARM64 platforms, we do not maintain RAGFlow Docker images for ARM.
If you are on an ARM platform, follow [this guide](https://ragflow.io/docs/dev/build_docker_image) to build a RAGFlow Docker image.
:::
- Supports ARM64 platforms.

:::danger IMPORTANT
While we also test RAGFlow on ARM64 platforms, we do not maintain RAGFlow Docker images for ARM.
If you are on an ARM platform, follow [this guide](https://ragflow.io/docs/dev/build_docker_image) to build a RAGFlow Docker image.
:::
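For orientation, building the image on an ARM host broadly follows the steps below. This is a hedged outline, not the authoritative procedure: the repository URL points at the public RAGFlow repo, and the Dockerfile location, image tag, and any required build arguments are assumptions — the linked guide is the source of truth.

```shell
# Hedged sketch: build a RAGFlow image on a linux/arm64 or darwin/arm64 host.
# Dockerfile path, tag, and build arguments may differ -- follow the linked guide.
git clone https://github.com/infiniflow/ragflow.git
cd ragflow

# Build for the ARM64 platform and tag the result locally.
docker build --platform linux/arm64 -t ragflow:nightly-arm64 .
```

Once built, reference your local tag in `docker-compose` in place of the official x86 image.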