Plugins¶
aigise.plugins.build_verifier_plugin ¶
Plugin to verify code builds before allowing finish_task to complete.
For known project types (Go, Python), it runs build verification automatically. For unknown project types, it prompts the agent on the first finish attempt and allows the second attempt to proceed.
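Project-type detection for the known types above might look like the sketch below. The marker files are illustrative guesses, not the plugin's actual detection list:

```python
from pathlib import Path
from typing import Optional

def detect_project_type(root: Path) -> Optional[str]:
    # Hypothetical marker files for the two known project types.
    if (root / "go.mod").exists():
        return "go"
    if (root / "pyproject.toml").exists() or (root / "setup.py").exists():
        return "python"
    return None  # unknown: fall back to prompt-then-allow behaviour
```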
aigise.plugins.validator_plugin ¶
Plugin to detect and warn about common test execution pitfalls.
This plugin is designed with high precision in mind:

- Prefers false negatives over false positives
- Only warns when patterns are unambiguous
- Uses strict pattern matching
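The precision-first stance above can be illustrated with strict, anchored matching. The pitfall pattern below is hypothetical, not one the plugin ships:

```python
import re

# Anchored, fully specified pattern: pytest invoked with an empty -k
# expression, an unambiguous mistake. A loose search for "pytest" alone
# would flag harmless commands, trading precision for recall.
EMPTY_K_EXPR = re.compile(r"pytest\s+-k\s+(''|\"\")\s*$")

def warn_for(command: str) -> bool:
    # Warn only on an exact, unambiguous match; otherwise stay silent,
    # accepting false negatives rather than risking false positives.
    return EMPTY_K_EXPR.search(command) is not None
```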
logger = logging.getLogger(__name__) module-attribute ¶
ValidatorPlugin ¶
Bases: BasePlugin
Plugin to validate test commands and provide actionable warnings.
after_tool_callback(*, tool: BaseTool, tool_args: dict, tool_context: ToolContext, result: dict) -> Optional[Dict[str, Any]] async ¶
Analyze test command results and add warnings if issues detected.
aigise.plugins.memory_observer_plugin ¶
Memory observer plugin for async tool result storage.
logger = logging.getLogger(__name__) module-attribute ¶
MemoryObserverPlugin ¶
Bases: BasePlugin
Plugin that observes tool results and stores valuable information in memory.
This plugin monitors each tool execution's result and uses an LLM-based StorageDecider to determine whether the result contains valuable information worth persisting to the memory graph. Storage happens asynchronously to avoid blocking agent execution.
Example configuration in TOML:

```toml
[plugins]
enabled = ["memory_observer_plugin"]
```
memory_controller: MemoryUpdateController property ¶
Lazily initialize the memory controller.
__init__(enable_llm_decision: bool = True, fire_and_forget: bool = True, decider_model: Optional[str] = None) -> None ¶
Initialize the memory observer plugin.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| enable_llm_decision | bool | Whether to use LLM for storage decisions. If False, stores all non-excluded tool results above MIN_RESULT_LENGTH. | True |
| fire_and_forget | bool | Whether to run storage in background without waiting. If True, tool execution is not blocked by storage operations. | True |
| decider_model | Optional[str] | LiteLLM model identifier for the storage decider. If None, reads from [memory].llm_model in config. | None |
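When enable_llm_decision is False, the fallback reduces to a length-and-exclusion check. The constant value and exclusion set below are assumptions for illustration only:

```python
MIN_RESULT_LENGTH = 100          # assumed value; the real constant lives in the plugin
EXCLUDED_TOOLS = {"read_file"}   # hypothetical exclusion list

def heuristic_should_store(tool_name: str, result_text: str) -> bool:
    # Store every non-excluded result long enough to be worth keeping.
    return tool_name not in EXCLUDED_TOOLS and len(result_text) >= MIN_RESULT_LENGTH
```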
after_tool_callback(*, tool: BaseTool, tool_args: dict, tool_context: ToolContext, result: dict) -> None async ¶
Observe tool results and potentially store valuable information.
This callback is invoked after each tool execution. It evaluates the result and, if deemed valuable, stores it in the memory graph.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| tool | BaseTool | The tool that was executed. | required |
| tool_args | dict | Arguments passed to the tool. | required |
| tool_context | ToolContext | Tool execution context. | required |
| result | dict | The tool's result dictionary. | required |
cleanup() -> None async ¶
Wait for all pending storage tasks to complete.
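The fire-and-forget pattern that cleanup() drains can be sketched with plain asyncio. This is a minimal stand-in for the plugin's task bookkeeping, not its actual code:

```python
import asyncio

class FireAndForgetStore:
    # Storage runs in the background without blocking the caller,
    # and cleanup() awaits whatever is still in flight.
    def __init__(self) -> None:
        self._pending: set = set()
        self.stored: list = []

    def store_async(self, payload: str) -> None:
        # Fire and forget: schedule the write and return immediately.
        task = asyncio.create_task(self._store(payload))
        self._pending.add(task)
        task.add_done_callback(self._pending.discard)

    async def _store(self, payload: str) -> None:
        await asyncio.sleep(0)  # stand-in for the real graph write
        self.stored.append(payload)

    async def cleanup(self) -> None:
        # Wait for all pending storage tasks to complete.
        await asyncio.gather(*self._pending)
```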
MemoryUpdateController ¶
Controller for orchestrating memory update operations.
This controller:

1. Extracts entities from content (using EntityExtractor)
2. Discovers relationships between entities (using RelationshipDiscoverer)
3. Decides operation type for each entity (using LLMOperationDecider)
4. Executes graph operations (using GraphOperations)
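The four stages above compose into a straightforward pipeline. The functions below are stub stages showing the shape of the flow, not the controller itself:

```python
from typing import List

def extract_entities(content: str) -> List[dict]:
    # Stub for EntityExtractor: one entity per non-empty line.
    return [{"name": line.strip()} for line in content.splitlines() if line.strip()]

def discover_relationships(entities: List[dict]) -> List[tuple]:
    # Stub for RelationshipDiscoverer: chain consecutive entities.
    return [(a["name"], "RELATES_TO", b["name"])
            for a, b in zip(entities, entities[1:])]

def update_graph(content: str, graph: dict) -> None:
    # Stage order mirrors the controller: extract -> discover -> decide -> execute.
    entities = extract_entities(content)
    rels = discover_relationships(entities)
    for e in entities:  # decide + execute, trivially: always ADD
        graph.setdefault("nodes", []).append(e)
    graph.setdefault("edges", []).extend(rels)
```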
__init__(domain_config: Optional['DomainConfig'] = None, use_llm_extraction: bool = True, generate_embeddings: bool = True, similarity_threshold: float = 0.7, use_llm_decision: bool = False) ¶
Initialize the update controller.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| domain_config | Optional['DomainConfig'] | Domain configuration defining entity types. | None |
| use_llm_extraction | bool | Whether to use LLM for semantic extraction. | True |
| generate_embeddings | bool | Whether to generate embeddings for entities. | True |
| similarity_threshold | float | Threshold for similarity-based relationships. | 0.7 |
| use_llm_decision | bool | Whether to use LLM for operation type decisions. | False |
delete_entity(label: str, match_key: Dict[str, Any], client: Any) -> OperationResult async ¶
Delete an entity from the memory graph.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| label | str | Node label (e.g., "Question", "Topic"). | required |
| match_key | Dict[str, Any] | Properties to identify the node. | required |
| client | Any | Neo4j client. | required |
Returns:
| Type | Description |
|---|---|
| OperationResult | OperationResult with operation details. |
delete_relationship(rel_type: str, source_label: str, source_key: Dict[str, Any], target_label: str, target_key: Dict[str, Any], client: Any) -> OperationResult async ¶
Delete a relationship from the memory graph.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| rel_type | str | Relationship type. | required |
| source_label | str | Label of source node. | required |
| source_key | Dict[str, Any] | Properties to identify source node. | required |
| target_label | str | Label of target node. | required |
| target_key | Dict[str, Any] | Properties to identify target node. | required |
| client | Any | Neo4j client. | required |
Returns:
| Type | Description |
|---|---|
| OperationResult | OperationResult with operation details. |
link_entities(source_label: str, source_key: Dict[str, Any], target_label: str, target_key: Dict[str, Any], relationship_type: str, client: Any, properties: Optional[Dict[str, Any]] = None) -> OperationResult async ¶
Create a relationship between two existing entities.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| source_label | str | Label of source node. | required |
| source_key | Dict[str, Any] | Properties to identify source node. | required |
| target_label | str | Label of target node. | required |
| target_key | Dict[str, Any] | Properties to identify target node. | required |
| relationship_type | str | Type of relationship to create. | required |
| client | Any | Neo4j client. | required |
| properties | Optional[Dict[str, Any]] | Optional relationship properties. | None |
Returns:
| Type | Description |
|---|---|
| OperationResult | OperationResult for the relationship creation. |
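A plausible Cypher shape for link_entities is sketched below. This is an assumption about the implementation, not the actual query; note that labels and relationship types cannot be bound as Cypher parameters, so they are interpolated while node-identifying properties go in as parameters:

```python
def build_link_query(source_label: str, target_label: str, relationship_type: str) -> str:
    # Only structural parts (labels, relationship type) are interpolated;
    # matching properties would be supplied as query parameters at run time.
    return (
        f"MATCH (a:{source_label} {{name: $source_name}}), "
        f"(b:{target_label} {{name: $target_name}}) "
        f"MERGE (a)-[r:{relationship_type}]->(b) "
        f"RETURN r"
    )
```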
store_knowledge(content: str, content_type: str = 'text', client: Any = None, aigise_session_id: Optional[str] = None, metadata: Optional[Dict[str, Any]] = None) -> UpdateResult async ¶
Store knowledge in the memory graph.
Generic method for storing any type of content.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| content | str | Content to store. | required |
| content_type | str | Type of content ('text', 'code', 'question', 'answer'). | 'text' |
| client | Any | Neo4j client. | None |
| aigise_session_id | Optional[str] | Optional session ID. | None |
| metadata | Optional[Dict[str, Any]] | Additional metadata. | None |
Returns:
| Type | Description |
|---|---|
| UpdateResult | UpdateResult with operation details. |
store_knowledge_with_decision(content: str, content_type: str = 'text', client: Any = None, aigise_session_id: Optional[str] = None, metadata: Optional[Dict[str, Any]] = None) -> UpdateResult async ¶
Store knowledge using LLM to decide operation type for each entity.
This method uses the LLMOperationDecider to intelligently decide whether to ADD, UPDATE, DELETE, or skip each extracted entity based on what already exists in the graph.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| content | str | Content to store. | required |
| content_type | str | Type of content. | 'text' |
| client | Any | Neo4j client. | None |
| aigise_session_id | Optional[str] | Optional session ID. | None |
| metadata | Optional[Dict[str, Any]] | Additional metadata. | None |
Returns:
| Type | Description |
|---|---|
| UpdateResult | UpdateResult with operation details. |
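The per-entity ADD/UPDATE/DELETE/skip dispatch can be sketched as follows. The entity shape and the dict-backed graph are illustrative stand-ins:

```python
from enum import Enum

class Operation(Enum):
    ADD = "add"
    UPDATE = "update"
    DELETE = "delete"
    SKIP = "skip"

def apply_decision(op: Operation, entity: dict, graph: dict) -> None:
    # ADD and UPDATE both upsert here; DELETE removes; SKIP leaves the graph alone.
    key = entity["name"]
    if op in (Operation.ADD, Operation.UPDATE):
        graph[key] = entity
    elif op is Operation.DELETE:
        graph.pop(key, None)
```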
store_qa_pair(question: str, answer: str, answering_agent: str, answering_model: str, client: Any, aigise_session_id: Optional[str] = None, metadata: Optional[Dict[str, Any]] = None) -> UpdateResult async ¶
Store a question-answer pair in the memory graph.
This is the main entry point for storing Q&A knowledge. It extracts entities, discovers relationships, and persists to Neo4j.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| question | str | The question text. | required |
| answer | str | The answer text. | required |
| answering_agent | str | Name of the agent that generated the answer. | required |
| answering_model | str | Model used to generate the answer. | required |
| client | Any | Neo4j client. | required |
| aigise_session_id | Optional[str] | Optional session ID for tracking. | None |
| metadata | Optional[Dict[str, Any]] | Additional metadata to store. | None |
Returns:
| Type | Description |
|---|---|
| UpdateResult | UpdateResult with operation details. |
StorageDecider ¶
LLM-based decision maker that evaluates whether tool results should be stored.
__init__(model_name: str = 'gemini/gemini-2.5-flash-lite', temperature: float = 1.0, max_result_preview: int = 2000) ¶
Initialize the storage decider.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| model_name | str | LiteLLM model identifier for decision making. | 'gemini/gemini-2.5-flash-lite' |
| temperature | float | LLM temperature for decisions (lower = more consistent). | 1.0 |
| max_result_preview | int | Maximum characters of tool result to include in prompt. | 2000 |
should_store(tool_name: str, tool_args: dict, tool_result: Any, full_output_file: Optional[str] = None) -> StorageDecision async ¶
Decide whether a tool result should be stored in memory.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| tool_name | str | Name of the tool that produced the result. | required |
| tool_args | dict | Arguments passed to the tool. | required |
| tool_result | Any | The result returned by the tool. | required |
| full_output_file | Optional[str] | Path to file containing full output if truncated. | None |
Returns:
| Type | Description |
|---|---|
| StorageDecision | StorageDecision indicating whether and how to store the content. |
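The max_result_preview cap suggests prompt construction along these lines. This is a sketch; the real prompt wording is not documented here:

```python
def build_decision_prompt(tool_name: str, tool_args: dict, tool_result,
                          max_result_preview: int = 2000) -> str:
    # Truncate the result so very large tool outputs do not blow up the prompt.
    text = str(tool_result)
    preview = text[:max_result_preview]
    truncated = len(text) > max_result_preview
    return (
        f"Tool: {tool_name}\n"
        f"Args: {tool_args}\n"
        f"Result preview{' (truncated)' if truncated else ''}:\n{preview}"
    )
```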
StorageDecision dataclass ¶
Result of storage decision analysis.
confidence: float = 0.0 class-attribute instance-attribute ¶
Confidence score (0.0 to 1.0) for the decision.
content_type: str instance-attribute ¶
Type of content: 'code', 'text', 'finding', 'error', 'search_result', etc.
reason: str = '' class-attribute instance-attribute ¶
Explanation of why the decision was made.
should_store: bool instance-attribute ¶
Whether the content should be stored in memory.
summary: Optional[str] = None class-attribute instance-attribute ¶
Optional condensed version of the content for storage.
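Putting the documented fields together, the dataclass can be reconstructed roughly as below. Field order is an assumption (required fields must precede defaulted ones):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StorageDecision:
    content_type: str              # 'code', 'text', 'finding', 'error', ...
    should_store: bool             # whether to persist the content
    confidence: float = 0.0        # 0.0 to 1.0
    reason: str = ''               # why the decision was made
    summary: Optional[str] = None  # optional condensed form for storage

decision = StorageDecision(content_type='finding', should_store=True, confidence=0.85)
```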