AI Agents That Users Actually Use
Build intelligent agents that solve real problems - from simple chatbots to complex business automation. Deploy to the marketplace where users discover and install your AI solutions.
Agent Types
Choose the right agent architecture for your use case
Custom Python Agents
Build sophisticated agents with full programmatic control and advanced business logic.
from fiberwise_sdk import FiberAgent

class ContentGenerator(FiberAgent):
    def run_agent(self, input_data: dict,
                  llm_provider: LLMProviderService):
        topic = input_data.get('topic', '')
        result = llm_provider.complete(
            f"Write content about {topic}"
        )
        return {
            'status': 'success',
            'content': result.text,
            'word_count': len(result.text.split())
        }
- Full programmatic control
- Custom business logic
- Service injection support
- Advanced error handling
LLM Agents (Configuration-Based)
Rapidly deploy AI agents using simple YAML configuration without custom code.
agents:
  - name: Smart Assistant
    type: llm
    provider: openai
    model: gpt-4
    system_prompt: |
      You are a helpful AI assistant that
      provides accurate, concise responses.
    temperature: 0.7
    max_tokens: 500
- No-code deployment
- YAML configuration
- Multiple LLM providers
- Quick prototyping
- Complex reasoning pipelines with simple agents
Modern Service Injection Architecture
Automatic dependency resolution with type-hint-based injection
def run_agent(self, input_data: dict,
              llm_provider: LLMProviderService,
              email_service: EmailService,
              fiber_app: FiberApp):
    """Services are automatically injected by type hints"""
    # Process with LLM
    response = llm_provider.complete(input_data['prompt'])

    # Send notification
    email_service.send({
        'to': input_data['user_email'],
        'subject': 'Task Complete',
        'body': response.text
    })

    # Store in app data
    fiber_app.data.create_item({
        'user_id': input_data['user_id'],
        'response': response.text,
        'timestamp': self.get_current_time()
    })

    return {'status': 'success', 'response': response.text}
Benefits of Service Injection
Automatic Resolution
Services are injected based on type hints - no manual wiring required
Testable Architecture
Easy to mock and test individual components in isolation
Modular Design
Clean separation of concerns with reusable service components
Performance Optimized
Efficient pattern detection and service lifecycle management
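The core idea behind type-hint-based injection can be sketched in a few lines of standard Python: inspect the handler's signature and supply a matching service for each annotated parameter. The resolver below (`REGISTRY`, `inject_and_run`, and the stub `LLMProviderService`) is an illustrative sketch of the mechanism, not the actual Fiberwise internals.

```python
import inspect

# Stub service standing in for the platform's LLM provider.
class LLMProviderService:
    def complete(self, prompt):
        return f"[completion for: {prompt}]"

# Hypothetical registry mapping a service type to its instance.
REGISTRY = {LLMProviderService: LLMProviderService()}

def inject_and_run(handler, input_data):
    """Inspect the handler's type hints and pass in matching services."""
    kwargs = {}
    for name, param in inspect.signature(handler).parameters.items():
        if param.annotation in REGISTRY:
            kwargs[name] = REGISTRY[param.annotation]
    return handler(input_data, **kwargs)

# An agent function declares what it needs via annotations only.
def run_agent(input_data: dict, llm_provider: LLMProviderService):
    return {'status': 'success',
            'content': llm_provider.complete(input_data['topic'])}

result = inject_and_run(run_agent, {'topic': 'AI automation'})
```

Because resolution is driven entirely by annotations, swapping the registry entry for a mock is all it takes to test the handler in isolation.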
Agent Activation System
Powerful execution engine with intelligent lifecycle management
Pattern Detection
Analyzes agent code for service dependencies automatically
Service Injection
Provides required services based on detected patterns
Lifecycle Management
Handles execution, error handling, and cleanup automatically
Performance Monitoring
Tracks execution time and resource usage in real-time
# Agent activation via API
result = await fiber.agents.activate('content-generator', {
    'topic': 'AI automation',
    'type': 'blog_post',
    'tone': 'professional',
    'target_audience': 'developers'
})

# Response includes execution metadata
{
    "status": "success",
    "content": "Generated blog post content...",
    "metadata": {
        "execution_time": "2.3s",
        "tokens_used": 1250,
        "services_injected": ["llm_provider", "content_service"],
        "activation_id": "act_123456789"
    }
}
Core Capabilities
Multi-Language SDKs
Build agents in Python or Node.js with consistent APIs and seamless integration across languages.
- Python SDK with async support
- Node.js SDK with TypeScript
- Consistent API patterns
- Built-in error handling
Dependency Injection
Powerful dependency injection system for complex services and seamless agent coordination.
- Service container management
- Automatic dependency resolution
- Testable architecture
- Configuration-driven setup
Real-Time Tracking
Monitor agent performance, track activations, and analyze execution patterns in real-time.
- Activation history and metrics
- Performance monitoring
- Error tracking and debugging
- Usage analytics
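The execution-time tracking that feeds these metrics can be approximated locally with a small decorator. This is an illustrative sketch only - the platform records activation metrics server-side, and the `tracked` decorator and `METRICS` list here are hypothetical names, not Fiberwise API.

```python
import time
import functools

# Hypothetical local metrics sink; the platform stores these server-side.
METRICS = []

def tracked(agent_name):
    """Record execution time for every call to the wrapped agent."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                METRICS.append({
                    'agent': agent_name,
                    'execution_time': time.perf_counter() - start,
                })
        return wrapper
    return decorator

@tracked('content-generator')
def run_agent(input_data):
    return {'status': 'success'}

out = run_agent({'topic': 'AI'})
```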
Multi-Agent Coordination
Orchestrate multiple agents working together with conversation, chain, and parallel modes.
- Conversation coordination
- Sequential chain execution
- Parallel processing
- Workflow templates
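The chain and parallel modes listed above can be sketched with plain `asyncio`, assuming each agent is an async callable that takes and returns a dict (on the platform, `fiber.agents.activate` plays that role). The agent names below are invented for illustration.

```python
import asyncio

# Two toy agents standing in for platform-deployed agents.
async def summarize(data):
    return {'summary': data['text'][:20]}

async def translate(data):
    return {'translated': data['summary'].upper()}

async def run_chain(agents, data):
    """Sequential chain: each agent's output feeds the next."""
    for agent in agents:
        data = await agent(data)
    return data

async def run_parallel(agents, data):
    """Parallel mode: all agents receive the same input concurrently."""
    return await asyncio.gather(*(agent(dict(data)) for agent in agents))

chained = asyncio.run(run_chain(
    [summarize, translate],
    {'text': 'Multi-agent coordination in Fiberwise'}))
fanned = asyncio.run(run_parallel(
    [summarize, summarize], {'text': 'Hello agents'}))
```

Conversation mode adds shared history on top of the chain pattern; the data-flow skeleton is the same.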
Agent Development Examples
class ChatAgent:
    async def run_agent(self, input_data, fiber, llm_service):
        """AI agent with dependency injection"""
        # Get message from input
        message = input_data.get('message', '')
        chat_id = input_data.get('chat_id')

        # Generate AI response
        response = await llm_service.generate_completion(
            prompt=message,
            temperature=0.7,
            max_tokens=1000
        )

        # Store conversation history
        await fiber.storage.store_chat_message(
            chat_id=chat_id,
            message=message,
            response=response.get("text")
        )

        return {
            "status": "success",
            "response": response.get("text"),
            "metadata": {
                "model": response.get("model"),
                "provider": response.get("provider"),
                "tokens_used": response.get("usage", {}).get("total_tokens", 0)
            }
        }
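Because the services arrive as arguments, an agent like this can be tested without a live LLM or storage backend by passing in mocks. The sketch below uses a condensed copy of the agent so it runs standalone; the mock wiring is an example pattern, not part of the SDK.

```python
import asyncio
from unittest.mock import AsyncMock

# Condensed copy of the ChatAgent above so this example is self-contained.
class ChatAgent:
    async def run_agent(self, input_data, fiber, llm_service):
        message = input_data.get('message', '')
        response = await llm_service.generate_completion(
            prompt=message, temperature=0.7, max_tokens=1000)
        await fiber.storage.store_chat_message(
            chat_id=input_data.get('chat_id'),
            message=message,
            response=response.get('text'))
        return {'status': 'success', 'response': response.get('text')}

# Stand-in services: no network calls, fully inspectable afterwards.
llm_service = AsyncMock()
llm_service.generate_completion.return_value = {'text': 'Hi there!'}
fiber = AsyncMock()

result = asyncio.run(ChatAgent().run_agent(
    {'message': 'Hello', 'chat_id': 'chat-1'}, fiber, llm_service))
```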
Quick Start Guide
Create Agent
Build your agent with the Fiberwise SDK
fiber agent create my-chat-agent
Test Locally
Test your agent with sample inputs
fiber agent test my-chat-agent --input "Hello world"
Deploy Agent
Deploy to Fiberwise platform
fiber agent deploy my-chat-agent
Activate & Use
Activate and start using your agent
fiber activate my-chat-agent