Current: Enable AI-generated video content for DDB by upgrading OpenClaw infrastructure and installing the Pexo skill.
New: Matthew Ganzak argues that OpenClaw fails for most users because of unrealistic expectations: it requires a 4-step training process, suits repetitive data-driven tasks rather than creative work, and needs roughly 90 days of consistent feedback to succeed.
The existing plan focuses on a specific skill for video generation, while the new analysis redefines OpenClaw's core utility and training requirements.
Current: OpenClaw adds Pexo skill for automated video generation in chat.
New: OpenClaw is a blank slate requiring a 4-step training process (Tell, Show, Do, Feedback) focused on repetitive data-driven tasks, not creative work.
The new analysis completely contradicts the premise of the existing plan, suggesting that creative tasks like video generation are not suitable for OpenClaw.
Current: Reduces video content production costs and time for DDB brand, enabling higher volume social media posting without additional editor overhead.
New: Key insights include auditing existing skills, implementing explicit feedback loops for AIAS SMS, applying the 4-step framework to Claude Code, shifting ReelBot toward data-driven task execution, and creating specific training SOPs.
The existing plan highlights the direct impact of a new feature on DDB's content production, while the new analysis outlines strategic and operational changes for OpenClaw's overall training and application.
Current: OpenClaw requires infrastructure stabilization to productize organic growth from creator outreach.
New: OpenClaw fails because of unrealistic user expectations and a lack of proper training; like a human employee, it needs a 90-day training cycle.
The existing plan focuses on technical issues preventing productization, while the new analysis identifies a fundamental user misconception and training deficiencies as the core problems.
Current: OpenClaw is an 'insane' bot that signs hundreds of creators automatically by finding, scoring, personalizing emails, sending follow-ups, and booking calls 24/7.
New: OpenClaw is a blank slate requiring a 4-step training process (Tell, Show, Do, Feedback) focused on repetitive data-driven tasks, not creative work or immediate high-autonomy success.
The new analysis directly contradicts Julian Goldie's depiction of a fully autonomous, high-performing OpenClaw, suggesting the video's portrayal is misleading or based on an untrained context.
Current: Stabilize OpenClaw infrastructure to capitalize on organic validation and potential new revenue streams in creator economy tooling.
New: Implement explicit feedback loops for AIAS SMS conversations, apply the 4-step framework to Claude Code, shift ReelBot's focus to data-driven task execution, and create 'Training SOPs' for OpenClaw skills.
The new analysis provides highly specific, actionable steps across multiple related projects, moving beyond general infrastructure stabilization to detailed process improvements and skill reevaluation.
Current: The existing plan focuses on creating a persistent 'second brain' for Claude Code using Obsidian to reduce context recreation overhead.
New: The new analysis introduces the theme of training AI (specifically OpenClaw and, implicitly, Claude Code) over a 90-day cycle, akin to onboarding human employees, to set realistic expectations and achieve success.
The new analysis shifts from a tool-centric, efficiency-driven approach to an AI-training and expectation-management perspective, which could inform broader AI strategy.
Current: The existing plan positions Claude Code as a tool that benefits from persistent knowledge management via Obsidian to avoid re-explaining project context in coding sessions.
New: The new analysis implies Claude Code, while initially 'pre-trained as a developer,' can also benefit from the 'Tell, Show, Do, Feedback' training framework, especially for repeatable, data-driven tasks, moving away from subjective creative work.
This new perspective refines the understanding of Claude Code's optimal application, suggesting a structured training approach even for 'pre-trained' AI to maximize its utility for concrete tasks.
Current: The existing plan provides concrete steps for setting up Obsidian and Claude Code for context persistence, including file conventions and web scraping.
New: The new analysis provides actionable insights on auditing existing AI skills, implementing explicit feedback loops, documenting 'Show' steps, and creating 'Training SOPs' for AI skills, similar to employee onboarding.
The new analysis offers a more strategic, long-term implementation framework for AI integration, focusing on iterative improvement and clear success metrics rather than just initial setup.
Implements Tell/Show/Do/Feedback methodology for OpenClaw and AIAS, treating AI as trainable employees focused on repetitive data tasks with explicit success metrics.
Restructure OpenClaw skills documentation to follow the Tell/Show/Do/Feedback framework. Each skill needs explicit success metrics (data) rather than subjective quality assessments.
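One way to make the "explicit success metrics" requirement concrete is to give each skill SOP a machine-checkable schema. This is a minimal sketch under assumptions — `SkillSOP`, the field names, and the example metrics are all hypothetical, not OpenClaw's actual skill format:

```python
from dataclasses import dataclass

@dataclass
class SkillSOP:
    """Training SOP for one skill, following Tell/Show/Do/Feedback (hypothetical schema)."""
    name: str
    tell: str                  # plain-language task description
    show: list[str]            # paths/links to worked examples
    do: str                    # the bounded task the agent executes
    metrics: dict[str, float]  # explicit numeric success thresholds (the 'data')

    def passes(self, observed: dict[str, float]) -> bool:
        """The Feedback step: a skill passes only when every metric meets its threshold."""
        return all(observed.get(k, 0.0) >= v for k, v in self.metrics.items())

sop = SkillSOP(
    name="sms-followup",
    tell="Send a follow-up SMS when a lead goes quiet for 48 hours.",
    show=["examples/sms_followups.md"],
    do="Draft and queue one follow-up message per stale lead.",
    metrics={"reply_rate": 0.15, "booked_rate": 0.05},
)
print(sop.passes({"reply_rate": 0.22, "booked_rate": 0.06}))  # True
```

The point of the schema is that "success" becomes a pass/fail check against numbers, never a subjective quality call.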
Use the existing A/B testing framework to create feedback loops for SMS responses. Tag outcomes (booked, qualified, disqualified) and feed results back into prompt engineering, treating conversation handling as a repetitive data task rather than creative writing.
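The outcome-tagging loop above can be sketched in a few lines. This assumes a hypothetical log of `(prompt_variant, outcome)` pairs, not our actual A/B framework's data model:

```python
from collections import Counter

# Outcome tags for completed SMS conversations (hypothetical log format):
# each entry is (prompt_variant, outcome), outcome in {booked, qualified, disqualified}.
log = [
    ("v1", "booked"), ("v1", "disqualified"), ("v1", "qualified"),
    ("v2", "booked"), ("v2", "booked"), ("v2", "disqualified"),
]

def booked_rate(log, variant):
    """Share of a variant's conversations that ended in a booking."""
    counts = Counter(outcome for v, outcome in log if v == variant)
    total = sum(counts.values())
    return counts["booked"] / total if total else 0.0

# The feedback step: promote whichever prompt variant books more calls.
best = max({v for v, _ in log}, key=lambda v: booked_rate(log, v))
print(best, round(booked_rate(log, best), 2))  # v2 0.67
```

Treating the prompt as just another variant under a conversion metric is what makes conversation handling a "repetitive data task" rather than creative writing.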
Leverage the existing context-handoff skill and standards.md as the 'Show' step. Implement mandatory post-session review (Feedback step) before ending Claude sessions to capture what worked/didn't for the next handoff.
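The mandatory post-session review could be as simple as appending a structured entry to the handoff file. A sketch, assuming a `handoff.md` file and a worked/failed bullet format (both assumptions, not the real context-handoff skill's layout):

```python
from datetime import date

def post_session_review(worked: list[str], failed: list[str],
                        path: str = "handoff.md") -> None:
    """Append a Feedback-step entry to the context-handoff file
    (file name and entry format are hypothetical)."""
    lines = [f"\n## Session review {date.today().isoformat()}"]
    lines += [f"- worked: {w}" for w in worked]
    lines += [f"- failed: {f}" for f in failed]
    with open(path, "a") as fh:
        fh.write("\n".join(lines) + "\n")

post_session_review(
    worked=["scraper handled pagination"],
    failed=["lost project context after compaction"],
)
```

Because the entry lands in the same file the next session reads, the Feedback step directly feeds the next handoff's Show step.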
We should validate this framework with our own spin: 'We learned this the hard way with OpenClaw. We tried to make it "do marketing" and failed. Now we treat it like a new SDR: daily feedback loops on SMS conversion data only.' This positions us as practitioners who figured out the same lesson independently.
Love the delegation point in the comments. We've seen the same with our SMS AI—treating it like a human SDR with daily feedback loops changed everything. The 90-day rule is real.
What it is: A methodology for training autonomous AI agents (specifically OpenClaw) using the same framework as onboarding human employees. Emphasizes task selection, documentation, and feedback loops over creative work.
How it helps us: Directly applicable to our OpenClaw VPS deployment and Claude Code usage. We already have OpenClaw running with 21 workspace skills, but the framework suggests we need to narrow focus to specific repetitive tasks (like the ReelBot agent loop or AIAS conversation handling) rather than expecting broad creative output. Validates our current cron-based monitoring approach as "data feedback."
Limitations: The "90 days" timeline may be excessive for our specific use cases; we're already seeing results with Claude Code in shorter cycles. The advice to avoid creative tasks conflicts slightly with our content-generation needs, though we can frame copywriting as data-driven (conversion metrics) rather than pure creativity.
Who should see this: Dylan + Dev team (OpenClaw/AIAS architecture decisions)
| Step | Prompt | Completion | Cost |
|---|---|---|---|
| analysis | 12,195 | 2,386 | $0.0107 |
| similarity | 982 | 270 | $0.0003 |
| plan | 8,203 | 8,192 | $0.0217 |
| Total | 21,380 | 10,848 | $0.0328 |