Dify Backend API

Usage

[!IMPORTANT] In the v0.6.12 release, we deprecated pip as the package management tool for Dify API Backend service and replaced it with poetry.

  1. Start the docker-compose stack

The backend requires some middleware, including PostgreSQL, Redis, and Weaviate, which can be started together using docker-compose.

   cd ../docker
   cp middleware.env.example middleware.env
   # change the profile to other vector database if you are not using weaviate
   docker compose -f docker-compose.middleware.yaml --profile weaviate -p dify up -d
   cd ../api
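Before moving on, it can help to confirm that the middleware containers are actually listening. A minimal sketch, assuming the default ports (5432 for PostgreSQL, 6379 for Redis, 8080 for Weaviate) and localhost; adjust these to your compose configuration:

```python
# Quick TCP reachability check for the docker-compose middleware.
# Host and ports are assumed defaults, not taken from your .env.
import socket


def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    for name, port in [("PostgreSQL", 5432), ("Redis", 6379), ("Weaviate", 8080)]:
        status = "up" if port_open("localhost", port) else "down"
        print(f"{name} ({port}): {status}")
```

If any service reports `down`, re-check the `docker compose` profile and project name used above before continuing.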
  2. Copy .env.example to .env
  3. Generate a SECRET_KEY in the .env file.
   # Linux (GNU sed)
   sed -i "/^SECRET_KEY=/c\SECRET_KEY=$(openssl rand -base64 42)" .env
   # macOS (BSD sed)
   secret_key=$(openssl rand -base64 42)
   sed -i '' "/^SECRET_KEY=/c\\
   SECRET_KEY=${secret_key}" .env
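The two sed invocations above exist because GNU and BSD sed handle `-i` differently. As a portable alternative, the same replacement can be sketched in Python; the `.env` path and the 42-byte key size mirror the commands above:

```python
# Portable equivalent of the sed one-liners: generate a random key with the
# same shape as `openssl rand -base64 42` and rewrite the SECRET_KEY line.
import base64
import secrets
from pathlib import Path


def set_secret_key(env_path: str = ".env") -> str:
    """Replace the SECRET_KEY= line in env_path with a fresh random key."""
    key = base64.b64encode(secrets.token_bytes(42)).decode()
    path = Path(env_path)
    lines = path.read_text().splitlines()
    lines = [
        f"SECRET_KEY={key}" if line.startswith("SECRET_KEY=") else line
        for line in lines
    ]
    path.write_text("\n".join(lines) + "\n")
    return key
```

Other lines in the file are left untouched, matching the behavior of the `/^SECRET_KEY=/` address in the sed commands.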
  4. Create the environment.

Dify API service uses Poetry to manage dependencies. You can execute poetry shell to activate the environment.

  5. Install dependencies
   poetry env use 3.10
   poetry install

If contributors have not updated the dependencies in pyproject.toml, you can run the following shell commands instead.

   poetry shell                                        # activate the current environment
   poetry add $(cat requirements.txt)                  # install production dependencies and update pyproject.toml
   poetry add $(cat requirements-dev.txt) --group dev  # install development dependencies and update pyproject.toml
  6. Run migrate

Before the first launch, migrate the database to the latest version.

   poetry run python -m flask db upgrade
  7. Start backend
   poetry run python -m flask run --host 0.0.0.0 --port=5001 --debug
  8. Start the Dify web service.
  9. Set up your application by visiting http://localhost:3000
  10. If you need to debug local async processing, please start the worker service.
   poetry run python -m celery -A app.celery worker -P gevent -c 1 --loglevel INFO -Q dataset,generation,mail,ops_trace,app_deletion

The started Celery app handles async tasks, e.g. dataset importing and document indexing.
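Once the backend (and, if needed, the worker) are running, a quick smoke check can confirm the API is reachable. A hedged sketch: port 5001 follows the flask command above, and the `/health` endpoint is an assumption based on this repository's health metrics; verify the route exists in your version:

```python
# Minimal smoke check against the running API backend.
import urllib.request


def backend_healthy(base_url: str = "http://localhost:5001") -> bool:
    """Return True if GET {base_url}/health answers with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=5) as resp:
            return resp.status == 200
    except OSError:  # also covers URLError/HTTPError, which subclass OSError
        return False
```

A `False` result usually means the backend is not listening yet, the port differs, or the route is named differently in your release.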

Testing

  1. Install dependencies for both the backend and the test environment
   poetry install --with dev
  2. Run the tests locally with the mocked system environment variables in the tool.pytest_env section of pyproject.toml
   cd ../
   poetry run -C api bash dev/pytest/pytest_all_tests.sh