AI Agent 90-Day Training Framework

OpenClaw requires a 90-day training cycle, like human employees
92% ai_automation · Matthew Ganzak · 2m 52s
Do this: Stop treating AI agents like magic tools and start treating them as trainable employees. We're wasting cycles on creative tasks that lack feedback signals while neglecting the data-driven conversation optimization that actually books appointments.

Comparison to Current State

Summary DIFFERENT ANGLE

Current: Enable AI-generated video content for DDB by upgrading OpenClaw infrastructure and installing the Pexo skill.

New: Matthew Ganzak explains that OpenClaw fails for most users because of unrealistic expectations; it requires a 4-step training process aimed at repetitive, data-driven tasks rather than creative work, and success takes 90 days of consistent feedback.

The existing plan focuses on a specific skill for video generation, while the new analysis redefines OpenClaw's core utility and training requirements.

Implicit focus/purpose of OpenClaw WORSE

Current: OpenClaw adds Pexo skill for automated video generation in chat.

New: OpenClaw is a blank slate requiring a 4-step training process (Tell, Show, Do, Feedback) focused on repetitive data-driven tasks, not creative work.

The new analysis completely contradicts the premise of the existing plan, suggesting that creative tasks like video generation are not suitable for OpenClaw.

Relevance/Impact DIFFERENT ANGLE

Current: Reduces video content production costs and time for DDB brand, enabling higher volume social media posting without additional editor overhead.

New: Key insights include auditing existing skills, implementing explicit feedback loops for AIAS SMS, applying the 4-step framework to Claude Code, shifting ReelBot toward data-driven task execution, and creating specific training SOPs.

The existing plan highlights the direct impact of a new feature on DDB's content production, while the new analysis outlines strategic and operational changes for OpenClaw's overall training and application.

OpenClaw's core problem/need DIFFERENT ANGLE

Current: OpenClaw requires infrastructure stabilization to productize organic growth from creator outreach.

New: OpenClaw fails due to unrealistic user expectations and lack of proper training, requiring a 90-day training cycle like human employees.

The existing plan focuses on technical issues preventing productization, while the new analysis identifies fundamental user misconception and training deficiencies as the core problem.

OpenClaw's application and capabilities WORSE

Current: OpenClaw is an 'insane' bot that signs hundreds of creators automatically by finding, scoring, personalizing emails, sending follow-ups, and booking calls 24/7.

New: OpenClaw is a blank slate requiring a 4-step training process (Tell, Show, Do, Feedback) focused on repetitive data-driven tasks, not creative work or immediate high-autonomy success.

The new analysis directly contrasts Julian Goldie's depiction of a fully autonomous, high-performing OpenClaw, suggesting the video's portrayal is misleading or based on an untrained context.

Actionable insights/next steps BETTER

Current: Stabilize OpenClaw infrastructure to capitalize on organic validation and potential new revenue streams in creator economy tooling.

New: Implement explicit feedback loops for AIAS SMS conversations, apply the 4-step framework to Claude Code, shift ReelBot's focus to data-driven task execution, and create 'Training SOPs' for OpenClaw skills.

The new analysis provides highly specific, actionable steps across multiple related projects, moving beyond general infrastructure stabilization to detailed process improvements and skill reevaluation.

Overall theme/focus DIFFERENT ANGLE

Current: The existing plan focuses on creating a persistent 'second brain' for Claude Code using Obsidian to reduce context recreation overhead.

New: The new analysis introduces the theme of training AI (specifically OpenClaw and implicitly Claude Code) over a 90-day cycle, akin to human employees, to set realistic expectations and achieve success.

The new analysis shifts from a tool-centric, efficiency-driven approach to an AI-training and expectation-management perspective, which could inform broader AI strategy.

Context/purpose of Claude Code BETTER

Current: The existing plan positions Claude Code as a tool that benefits from persistent knowledge management via Obsidian to avoid re-explaining project context in coding sessions.

New: The new analysis implies Claude Code, while initially 'pre-trained as a developer,' can also benefit from the 'Tell, Show, Do, Feedback' training framework, especially for repeatable, data-driven tasks, moving away from subjective creative work.

This new perspective refines the understanding of Claude Code's optimal application, suggesting a structured training approach even for 'pre-trained' AI to maximize its utility for concrete tasks.

Actionable insights/implementation BETTER

Current: The existing plan provides concrete steps for setting up Obsidian and Claude Code for context persistence, including file conventions and web scraping.

New: The new analysis provides actionable insights on auditing existing AI skills, implementing explicit feedback loops, documenting 'Show' steps, and creating 'Training SOPs' for AI skills, similar to employee onboarding.

The new analysis offers a more strategic, long-term implementation framework for AI integration, focusing on iterative improvement and clear success metrics rather than just initial setup.

Similar to: OpenClaw Pexo Video Generation for DDB (65% overlap)
Overlap: OpenClaw-specific focus; clear need for task definition/optimization implicitly tied to success metrics
Different enough to proceed.
Prevents wasted development cycles on OpenClaw skills that are too broad or creative to train effectively; accelerates AIAS optimization by focusing on measurable conversation outcomes rather than subjective 'quality' improvements.

Implements Tell/Show/Do/Feedback methodology for OpenClaw and AIAS, treating AI as trainable employees focused on repetitive data tasks with explicit success metrics.

Business Applications

HIGH AI agent training methodology (general)

Restructure OpenClaw skills documentation to follow the Tell/Show/Do/Feedback framework. Each skill needs explicit success metrics (data) rather than subjective quality assessments.
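
A minimal sketch of what one skill documented this way could look like. All names here (SkillSOP, the example paths, the metric and target) are hypothetical illustrations, not OpenClaw's actual skill format:

```python
from dataclasses import dataclass


@dataclass
class SkillSOP:
    """One OpenClaw skill documented as a Tell/Show/Do/Feedback training SOP.

    Field names are hypothetical; adapt them to however workspace
    skills are actually stored.
    """
    name: str
    tell: str            # plain-language task definition
    show: list[str]      # links/paths to worked examples
    do: str              # the concrete, repeatable task the agent runs
    success_metric: str  # explicit, measurable outcome (data, not vibes)
    target: float        # numeric goal for the metric
    current: float = 0.0 # latest measured value from the feedback loop

    def is_trained(self) -> bool:
        """A skill counts as 'trained' only when its metric meets the target."""
        return self.current >= self.target


# Example: a narrow, repetitive, data-driven skill -- not open-ended creative work.
sms_followup = SkillSOP(
    name="sms-followup",
    tell="Send a follow-up SMS to leads who have not replied within 24h.",
    show=["examples/sms_followup_good.md", "examples/sms_followup_bad.md"],
    do="Draft and queue one follow-up per stale lead, max one per day.",
    success_metric="reply_rate",
    target=0.15,
)
```

The point of the structure is that every skill carries a numeric success metric, so "is this skill trained yet?" becomes a data question rather than a judgment call.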

MEDIUM AIAS conversation optimization (aias)

Use the existing A/B testing framework to create feedback loops for SMS responses. Tag outcomes (booked, qualified, disqualified) and feed results back into prompt engineering—treating conversation handling as a repetitive data task rather than creative writing.
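
A sketch of that feedback loop, assuming a simple in-memory log. The outcome tags come from the paragraph above, but the function names and prompt-variant labels are illustrative, not the actual AIAS A/B testing API:

```python
from collections import Counter, defaultdict

# Outcome tags for each closed SMS conversation.
OUTCOMES = {"booked", "qualified", "disqualified"}


def record_outcome(log: list[dict], variant: str, outcome: str) -> None:
    """Tag one finished conversation with the prompt variant that handled it."""
    assert outcome in OUTCOMES, f"unknown outcome: {outcome}"
    log.append({"variant": variant, "outcome": outcome})


def booking_rate_by_variant(log: list[dict]) -> dict[str, float]:
    """The feedback signal: booked / total, per prompt variant.

    This number (not subjective 'quality') decides which variant wins
    the next prompt-engineering iteration.
    """
    counts: dict[str, Counter] = defaultdict(Counter)
    for entry in log:
        counts[entry["variant"]][entry["outcome"]] += 1
    return {
        variant: c["booked"] / sum(c.values())
        for variant, c in counts.items()
    }


log: list[dict] = []
record_outcome(log, "prompt_a", "booked")
record_outcome(log, "prompt_a", "disqualified")
record_outcome(log, "prompt_b", "booked")
print(booking_rate_by_variant(log))  # {'prompt_a': 0.5, 'prompt_b': 1.0}
```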

MEDIUM Claude Code workflow standardization (general)

Leverage the existing context-handoff skill and standards.md as the 'Show' step. Implement mandatory post-session review (Feedback step) before ending Claude sessions to capture what worked/didn't for the next handoff.
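
One way such a mandatory Feedback step could be automated, as a sketch: a small helper that appends a structured review to the handoff file before the session ends. The handoff.md path and the template fields are assumptions, not the actual context-handoff skill's format:

```python
from datetime import date
from pathlib import Path

# Hypothetical location; point this at whatever file the
# context-handoff skill actually reads from.
HANDOFF = Path("handoff.md")

REVIEW_TEMPLATE = """\
## Session review -- {day}
- What worked: {worked}
- What didn't: {failed}
- Carry forward: {next_step}
"""


def post_session_review(worked: str, failed: str, next_step: str) -> None:
    """Append the Feedback step to the handoff file before ending a session.

    The next session's 'Show' step then includes this review automatically,
    closing the Tell/Show/Do/Feedback loop across sessions.
    """
    entry = REVIEW_TEMPLATE.format(
        day=date.today().isoformat(),
        worked=worked,
        failed=failed,
        next_step=next_step,
    )
    with HANDOFF.open("a", encoding="utf-8") as f:
        f.write("\n" + entry)


post_session_review(
    worked="Refactored SMS router with passing tests.",
    failed="Spent 20 min rediscovering the webhook retry config.",
    next_step="Document retry config in standards.md.",
)
```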


Social Media Play

React Angle

We should validate this framework with our own spin: 'We learned this the hard way with OpenClaw—tried to make it 'do marketing' and failed. Now we treat it like a new SDR: daily feedback loops on SMS conversion data only.' Positions us as practitioners who figured out the same lesson independently.

Engagement Hook

Love the delegation point in the comments. We've seen the same with our SMS AI—treating it like a human SDR with daily feedback loops changed everything. The 90-day rule is real.

What This Video Covers

Matthew Ganzak is the creator of OpenClaw (mentioned by name as his product). He positions himself as an experienced operator (25 years management, $500M revenue) rather than a technical "vibe coder," giving him authority to critique how people deploy AI agents in business contexts.
Hook: Direct authority play: "Number one problem with OpenClaw" combined with credibility markers (25 years management, $500M revenue) to distinguish from "vibe coders"
“Open Claw right out of the box has no skills, no knowledge, no abilities”
“The problem is your expectations and not training it on a skill”
“AI is exposing people who are shit at delegation”
“You need to pick a task that is rinse and repeat... data is the best thing that you can feed back to Open Claw”

Analysis Notes

What it is: A methodology for training autonomous AI agents (specifically OpenClaw) using the same framework as onboarding human employees. Emphasizes task selection, documentation, and feedback loops over creative work.

How it helps us: Directly applicable to our OpenClaw VPS deployment and Claude Code usage. We already have OpenClaw running with 21 workspace skills, but the framework suggests we need to narrow focus to specific repetitive tasks (like the ReelBot agent loop or AIAS conversation handling) rather than expecting broad creative output. Validates our current cron-based monitoring approach as "data feedback."

Limitations: The "90 days" timeline may be excessive for our specific use cases—we're already seeing results with Claude Code in shorter cycles. The advice to avoid creative tasks conflicts slightly with our content generation needs, though we can frame copywriting as data-driven (conversion metrics) rather than pure creativity.

Who should see this: Dylan + Dev team (OpenClaw/AIAS architecture decisions)

Reality Check

🤔 [PLAUSIBLE] "OpenClaw has no skills out of the box and requires 90 days of training like a human employee" — Audience comment 'AI is exposing people who are shit at delegation' confirms the underlying thesis about training requirements. However, our existing OpenClaw deployment already has 21 functioning workspace skills, suggesting the 'blank slate' description is slightly overstated for marketing effect—it's configurable but not literally empty.
Instead: Treat OpenClaw as having 'base capability' but requiring domain-specific training (skills) for business tasks. The 90-day timeline is aspirational; focus on achieving competency in specific skills within 2-4 weeks through iterative feedback.
✅ [SOLID] "You should only assign repetitive, data-driven tasks to AI agents, not creative work like image/video generation" — Aligns with our current architecture where creative tasks (content) go through ReelBot's review pipeline while operational tasks (calendar polling, SMS routing) run autonomously. Data feedback loops (cron monitoring, conversion tracking) provide the necessary training signal the creator emphasizes.

Cost Breakdown

Step          Prompt    Completion    Cost
analysis      12,195    2,386         $0.0107
similarity    982       270           $0.0003
plan          8,203     8,192         $0.0217
Total                                 $0.0328