The framework is defined. The theoretical foundations are laid. The 15 pillars illuminate the path forward. But now comes the moment of truth: how do you transform a vision into working code?
Every system architect knows that the first brick is the most important. It determines the stability of everything that comes after. In our case, this first brick wasn't a database, an API, or a user interface. It was something much more specific and strategic: our first AI agent.
The Fundamental Question: What Kind of Agent?
Facing the blank page in VS Code, the first question we asked ourselves wasn't "what technology to use?" or "how to structure the database?". It was a much more strategic question: what kind of AI personality should we create first?
A generic agent, capable of doing a bit of everything? Or a specialized agent, expert in a specific domain?
The answer came from our Pillar #4 (Scalable & Self-Learning). Instead of building an intelligent monolith, we had to think from the beginning about a system of specialists. Like a company that hires experts in different fields rather than generalists, our AI team had to be composed of digital professionals, each excellent in their own domain.
| Specialist Approach Advantage | Description | Reference Pillar |
|---|---|---|
| Scalability | We can add new roles (e.g., "Data Scientist") without modifying code, simply by adding a new configuration to the database. | #4 (Scalable & Self-Learning) |
| Maintainability | It's much simpler to debug and improve the prompt of an "Email Copywriter" than to modify a monolithic 2000-line prompt. | #10 (Production-Ready Code) |
| AI Performance | An LLM given a specific role and context ("You are a finance expert...") produces significantly higher-quality results than a generic prompt. | #2 (AI-Driven) |
| Reusability | The same `SpecialistAgent` can be instantiated with different configurations in different workspaces, promoting code reuse. | #4 (Reusable Components) |
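The scalability claim above can be illustrated with a minimal sketch. Note that the `ROLE_CONFIGS` registry and `add_role` helper below are hypothetical stand-ins for the database table, not the project's actual schema: the point is that adding a new specialist is a configuration insert, not a code change.

```python
# Hypothetical role registry: in the real system this is a database table;
# here a plain list of dicts stands in for it.
ROLE_CONFIGS = [
    {"role": "Email Copywriter", "seniority": "senior",
     "system_prompt": "You are an expert email copywriter..."},
    {"role": "Finance Expert", "seniority": "senior",
     "system_prompt": "You are a finance expert..."},
]

def add_role(configs, role, seniority, system_prompt):
    """Register a new specialist without touching any agent code."""
    configs.append({"role": role, "seniority": seniority,
                    "system_prompt": system_prompt})

# Adding a "Data Scientist" is pure configuration:
add_role(ROLE_CONFIGS, "Data Scientist", "senior",
         "You are a senior data scientist...")

print([c["role"] for c in ROLE_CONFIGS])
# → ['Email Copywriter', 'Finance Expert', 'Data Scientist']
```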
💡 Insight: The "AI Micromanaging" Problem
As Tomasz Tunguz points out in his article "Micromanaging AI" (2024), today we treat LLMs like "high school interns": extremely high motivation, but still low competence that requires step-by-step micromanagement.
This approach works for the first agent, but becomes a scalability nightmare. Imagine managing 10 agents, each requiring constant clarifications, authorizations, and manual corrections. It's the perfect "human switchboard" scenario: copying and pasting outputs between Slack channels.
Our solution: instead of treating each agent as an intern, we design them as specialized senior consultants, with clear roles, defined processes, and above all, controlled autonomy. It's the transition from the era of "artisanal prompting" to systems that scale without continuous human supervision.
This philosophy is reflected directly in the data model: each specialist is, at its core, a configuration record, and its "personality" lives in data rather than in code.

```python
from typing import Any, Dict, List, Optional
from uuid import UUID, uuid4

from pydantic import BaseModel, Field

class Agent(BaseModel):
    id: UUID = Field(default_factory=uuid4)
    workspace_id: UUID
    name: str
    role: str
    seniority: str
    status: str = "active"

    # Fields that define "personality" and competencies
    system_prompt: Optional[str] = None
    llm_config: Optional[Dict[str, Any]] = None
    tools: Optional[List[Dict[str, Any]]] = []

    # Details for deeper intelligence
    hard_skills: Optional[List[Dict]] = []
    soft_skills: Optional[List[Dict]] = []
    background_story: Optional[str] = None
```
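The reusability promise follows directly from this model: one class, many personalities. The sketch below uses a trimmed-down copy of the `Agent` model (and invented names and workspaces, purely for illustration) to show two different specialists instantiated from the same class.

```python
from typing import Optional
from uuid import UUID, uuid4

from pydantic import BaseModel, Field

# Trimmed-down copy of the Agent model, just enough to demonstrate
# that a single class serves many specialist configurations.
class Agent(BaseModel):
    id: UUID = Field(default_factory=uuid4)
    workspace_id: UUID
    name: str
    role: str
    seniority: str
    status: str = "active"
    system_prompt: Optional[str] = None

# Two specialists, one class — only the configuration differs.
copywriter = Agent(
    workspace_id=uuid4(), name="Elena", role="Email Copywriter",
    seniority="senior", system_prompt="You are an expert email copywriter...",
)
analyst = Agent(
    workspace_id=uuid4(), name="Marco", role="Finance Expert",
    seniority="senior", system_prompt="You are a finance expert...",
)

print(copywriter.role, "|", analyst.role)
```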
The execution logic, instead, resides in the `specialist_enhanced.py` module. The `execute` function is the beating heart of the agent: it doesn't contain business logic, but orchestrates the phases of an agent's "reasoning".
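The shape of that orchestration can be sketched as follows. This is a hypothetical simplification, not the real `specialist_enhanced.py`: the phase names and stub implementations are assumptions, but they illustrate the key property, namely that `execute` only chains phases and holds no business logic itself.

```python
# Hypothetical sketch of the execute() orchestration.
class SpecialistAgent:
    def execute(self, task: dict) -> dict:
        context = self._gather_context(task)        # phase 1: memory/context
        prompt = self._build_prompt(task, context)  # phase 2: prompt assembly
        raw = self._call_llm(prompt)                # phase 3: LLM call
        return self._validate_output(raw)           # phase 4: validation

    # Stub phases so the sketch runs end to end.
    def _gather_context(self, task: dict) -> dict:
        return {"task_name": task.get("name", "N/A")}

    def _build_prompt(self, task: dict, context: dict) -> str:
        return f"Task: {context['task_name']}"

    def _call_llm(self, prompt: str) -> str:
        return f"[LLM output for: {prompt}]"

    def _validate_output(self, raw: str) -> dict:
        return {"status": "completed", "result": raw}

result = SpecialistAgent().execute({"name": "Write launch email"})
print(result["result"])
```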
"War Story": The First Crash – Object vs. Dictionary
Our first `SpecialistAgent` was ready. We launched the first integration test and, almost immediately, the system crashed.
```
ERROR: 'Task' object has no attribute 'get'
  File "/app/backend/ai_agents/tools.py", line 123, in get_memory_context_for_task
    task_name = current_task.get("name", "N/A")
AttributeError: 'Task' object has no attribute 'get'
```
This seemingly trivial error hid one of the most important lessons of our entire journey. The problem wasn't missing data, but a "type" misalignment between system components.
| Component | Data Type Handled | Problem |
|---|---|---|
| Executor | Pydantic `Task` object | Passed a structured and typed object. |
| Tool `get_memory_context` | Python `dict` | Expected a plain dictionary, in order to use the `.get()` method. |
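The mismatch is easy to reproduce in isolation. The sketch below uses a plain class as a stand-in for the real Pydantic `Task` model, to keep the example dependency-free; the failure mode is the same, since neither a plain object nor a Pydantic model implements the `dict` interface.

```python
# Minimal stand-in for the Pydantic Task model: attribute access works,
# dict-style access does not.
class Task:
    def __init__(self, name: str):
        self.name = name

current_task = Task(name="Write launch email")

print(current_task.name)  # attribute access: fine

try:
    current_task.get("name", "N/A")  # dict-style access: crashes
except AttributeError as exc:
    print(exc)  # 'Task' object has no attribute 'get'
```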
The immediate solution was simple, but the lesson was profound.
Reference correction code: `backend/ai_agents/tools.py`

```python
# The current task could be a Pydantic object or a dictionary
if isinstance(current_task, Task):
    # If it's a Pydantic object, we convert it to a dictionary
    # to ensure compatibility with downstream functions.
    current_task_dict = current_task.dict()
else:
    # If it's already a dictionary, we use it directly.
    current_task_dict = current_task

# From here on, we always use current_task_dict
task_name = current_task_dict.get("name", "N/A")
```
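A more general defense is to normalize every task to a dict at the boundary, so downstream tools never care what type they received. The `ensure_dict` helper below is a sketch, not the project's actual code; it also accounts for Pydantic v2, where `.dict()` was deprecated in favor of `.model_dump()`.

```python
from typing import Any, Dict

def ensure_dict(task: Any) -> Dict[str, Any]:
    """Normalize a task to a plain dict, whatever form it arrives in.

    Handles plain dicts, Pydantic v2 models (.model_dump()) and
    Pydantic v1 models (.dict()), without importing Pydantic here.
    """
    if isinstance(task, dict):
        return task
    if hasattr(task, "model_dump"):  # Pydantic v2
        return task.model_dump()
    if hasattr(task, "dict"):        # Pydantic v1
        return task.dict()
    raise TypeError(f"Cannot convert {type(task).__name__} to dict")

# Works the same regardless of the input shape:
print(ensure_dict({"name": "Write launch email"}).get("name", "N/A"))
```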