This is the workflow graph engine module of Dify, implementing a queue-based distributed workflow execution system. The engine handles agentic AI workflows with support for parallel execution, node iteration, conditional logic, and external command control.
The graph engine follows a layered architecture with strict dependency rules:
- **Graph Engine** (`graph_engine/`) - Orchestrates workflow execution
- **Graph** (`graph/`) - Graph structure and runtime state
- **Nodes** (`nodes/`) - Node implementations
- **Events** (`node_events/`) - Event system
- **Entities** (`entities/`) - Domain models
External workflow control via Redis or in-memory channels:
```python
# Send a stop command to a running workflow
channel = RedisChannel(redis_client, f"workflow:{task_id}:commands")
channel.send_command(AbortCommand(reason="User requested"))
```
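To illustrate the channel contract, here is a minimal in-memory channel sketch. The `send_command`/`fetch_commands` names and the `AbortCommand` shape are assumptions for illustration, not the exact Dify API:

```python
import queue
from dataclasses import dataclass


@dataclass
class AbortCommand:
    reason: str


class InMemoryChannel:
    """Toy command channel: a producer sends commands, the engine polls them."""

    def __init__(self) -> None:
        self._queue: queue.Queue = queue.Queue()

    def send_command(self, command) -> None:
        self._queue.put(command)

    def fetch_commands(self) -> list:
        """Drain all pending commands without blocking."""
        commands = []
        while True:
            try:
                commands.append(self._queue.get_nowait())
            except queue.Empty:
                return commands


channel = InMemoryChannel()
channel.send_command(AbortCommand(reason="User requested"))
pending = channel.fetch_commands()
```

An in-memory channel like this suits single-process deployments and tests; a Redis-backed channel plays the same role across processes.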
Extensible middleware for cross-cutting concerns:
```python
engine = GraphEngine(graph)
engine.add_layer(DebugLoggingLayer(level="INFO"))
engine.add_layer(ExecutionLimitsLayer(max_nodes=100))
```
All node executions emit events for monitoring and integration:

- `NodeRunStartedEvent` - Node execution begins
- `NodeRunSucceededEvent` - Node completes successfully
- `NodeRunFailedEvent` - Node encounters an error
- `GraphRunStartedEvent` / `GraphRunCompletedEvent` - Workflow lifecycle

Centralized variable storage with namespace isolation:

```python
# Variables scoped by node_id
pool.add(["node1", "output"], value)
result = pool.get(["node1", "output"])
```
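To make the namespace-isolation idea concrete, here is a minimal pool sketch. This is not Dify's actual variable pool implementation; it only mirrors the `[node_id, name]` selector shape from the example above:

```python
class VariablePool:
    """Minimal sketch: values keyed by (node_id, variable_name) tuples,
    so two nodes can both define 'output' without colliding."""

    def __init__(self) -> None:
        self._store: dict[tuple[str, ...], object] = {}

    def add(self, selector: list[str], value: object) -> None:
        self._store[tuple(selector)] = value

    def get(self, selector: list[str], default=None):
        return self._store.get(tuple(selector), default)


pool = VariablePool()
pool.add(["node1", "output"], "hello")
pool.add(["node2", "output"], "world")  # same name, different namespace
result = pool.get(["node1", "output"])
```

Because the node ID is part of the key, downstream nodes can reference an upstream output unambiguously even when many nodes expose a variable with the same name.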
The codebase enforces strict layering via import-linter contracts:

- Workflow layers (top to bottom)
- Graph engine internal layers
- Domain isolation
- Command channel independence
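As a hedged sketch of what a layers contract can look like, using the layer order from the architecture list above (the actual configuration in the repo may name its contracts and modules differently):

```ini
[importlinter]
root_package = core.workflow

[importlinter:contract:graph-engine-layers]
name = Graph engine layered architecture
type = layers
layers =
    core.workflow.graph_engine
    core.workflow.graph
    core.workflow.nodes
    core.workflow.node_events
    core.workflow.entities
```

With a `layers` contract, import-linter fails the build whenever a lower layer imports from a higher one, so the dependency rules stay enforced rather than aspirational.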
To add a new node type:

1. Create a package under `nodes/<node_type>/`
2. Extend `BaseNode` or the appropriate base class
3. Implement the `_run()` method
4. Register the node in `nodes/node_mapping.py`
5. Add tests under `tests/unit_tests/core/workflow/nodes/`

To create a custom layer:

1. Extend the `Layer` base class
2. Implement `on_graph_start()`, `on_event()`, and `on_graph_end()`
3. Attach it with `engine.add_layer()`

Enable the debug logging layer:
```python
debug_layer = DebugLoggingLayer(
    level="DEBUG",
    include_inputs=True,
    include_outputs=True
)
```
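A minimal custom layer might look like the following sketch. The `Layer` base class and hook signatures here are assumptions based on the hook names above, not the exact Dify interface:

```python
class Layer:
    """Assumed base class with no-op lifecycle hooks."""

    def on_graph_start(self) -> None: ...
    def on_event(self, event) -> None: ...
    def on_graph_end(self, error=None) -> None: ...


class EventCountingLayer(Layer):
    """Counts events emitted during a single graph run."""

    def __init__(self) -> None:
        self.count = 0
        self.finished = False

    def on_graph_start(self) -> None:
        self.count = 0  # reset per run

    def on_event(self, event) -> None:
        self.count += 1

    def on_graph_end(self, error=None) -> None:
        self.finished = True


# Simulated run; a real engine would invoke these hooks itself
# after engine.add_layer(layer):
layer = EventCountingLayer()
layer.on_graph_start()
for evt in ("NodeRunStartedEvent", "NodeRunSucceededEvent"):
    layer.on_event(evt)
layer.on_graph_end()
```

Because layers only observe the event stream, cross-cutting concerns such as logging, metrics, or execution limits stay out of the node implementations themselves.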