Reality-Based Prompting for AI Systems

Reality-based prompting beats expert roleplay
92% ai_automation · Angus The (Nontechnical) Tech Bro · 1m 24s · tfww
Do this: Update AIAS system prompts to use 'report what successful appointment setters actually do' framing instead of 'act as an expert consultant' to bypass the Caricature Effect and improve conversion rates.

Comparison to Current State

Relevance to Sales Category DIFFERENT ANGLE

Current: The existing plan is categorized under 'sales' and details a specific sales closing technique.

New: The new analysis is categorized under 'ai_automation' and focuses on AI prompting for tactical advice, including sales advice.

While both touch upon sales, the existing plan is a direct sales tactic, while the new analysis is about leveraging AI to *generate* better sales tactics.

Actionable Insight DIFFERENT ANGLE

Current: The existing plan provides a direct, 'Do this' instruction: 'Replace the trial close in the TFWW sales script with...' followed by specific phrasing.

New: The new analysis provides actionable insights for AI prompting, such as 'Replace 'Act as an expert sales consultant' with...' and specific prompt templates for extracting real-world patterns.

Both provide actionable advice, but the existing plan offers a direct sales script change, while the new analysis offers a meta-level strategy for generating insights via AI.

Underlying Methodology/Source DIFFERENT ANGLE

Current: The existing plan is sourced from a human sales trainer (Daniel G) and describes a specific psychological closing technique.

New: The new analysis explains a method for extracting practical tactical advice from AI by focusing on 'what people are actually doing' rather than 'expert roleplay'.

The existing plan focuses on a human-derived sales technique, whereas the new analysis describes a method for leveraging AI to derive insights, which could then be applied to sales or other areas.

Similar to: Compatibility Close for TFWW Sales: L1 -- Note it, L2 -- Build it, L3 -- Go deep (65% overlap)
Overlap: AI prompt refinement workflows, extracting non-obvious tactics, detailed analysis levels (L3 deep-dive)
Different enough to proceed.
Improving AI prompt framing may increase appointment setter conversion rates (the video estimates 15-30%) by producing more natural, battle-tested sales language instead of generic consultant speak.

Replaces expert roleplay framing with observed-behavior reporting in AIAS and ReelBot prompts to generate tactical, battle-tested sales language instead of generic consultant speak.

Business Applications

HIGH AIAS conversation prompts (sales_script)

A/B test 'expert advisor' vs. 'reality reporter' framing in lead qualification scripts. Test whether asking the AI to 'report what successful sales agents actually say to hesitant leads' produces better objection handling than 'act as an expert closer'.
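The A/B split above can be sketched as a deterministic prompt-bucketing helper. The prompt wording, variant names, and the `build_prompt`/`assign_variant` helpers are illustrative assumptions, not the actual AIAS scripts:

```python
# Sketch: same objection-handling request framed two ways,
# with deterministic 50/50 bucketing per lead for a clean A/B test.
import hashlib

FRAMINGS = {
    "expert_advisor": (
        "Act as an expert sales closer. Advise how to respond when a "
        "hesitant lead says: {objection}"
    ),
    "reality_reporter": (
        "Report what successful appointment setters actually say when a "
        "hesitant lead says: {objection}. Describe observed patterns, "
        "not advice."
    ),
}

def assign_variant(lead_id: str) -> str:
    """Hash the lead ID so the same lead always lands in the same arm."""
    bucket = int(hashlib.sha256(lead_id.encode()).hexdigest(), 16) % 2
    return "expert_advisor" if bucket == 0 else "reality_reporter"

def build_prompt(lead_id: str, objection: str) -> tuple[str, str]:
    """Return (variant, rendered prompt) for one lead/objection pair."""
    variant = assign_variant(lead_id)
    return variant, FRAMINGS[variant].format(objection=objection)
```

Hash-based bucketing keeps the split stable across sessions, so conversion outcomes can be attributed to a framing without storing per-lead assignment state.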

HIGH ReelBot insight extraction (general)

Modify the tiered plan generation (L1/L2/L3) to use the reality-reporting prompt frame when generating implementation plans. Specifically: 'Based on training data from business forums and case studies, what are founders actually doing to implement [strategy from reel]?'
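A minimal sketch of that reality-reporting frame applied to the L1/L2/L3 tiers; the tier descriptions and the `reality_plan_prompt` helper are hypothetical, not ReelBot's actual implementation:

```python
# Sketch: reality-reporting prompt frame parameterized by plan tier.
# Tier depth descriptions below are assumptions for illustration.
TIER_DEPTH = {
    "L1": "a one-paragraph note",
    "L2": "a step-by-step build plan",
    "L3": "a deep implementation plan covering edge cases",
}

def reality_plan_prompt(strategy: str, tier: str) -> str:
    """Render the observed-behavior frame for one strategy and tier."""
    return (
        "Based on training data from business forums and case studies, "
        f"what are founders actually doing to implement {strategy}? "
        f"Report the observed patterns as {TIER_DEPTH[tier]}."
    )
```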

MEDIUM TFWW client research (general)

Use this prompting technique when researching local market tactics for TFWW clients. Instead of 'How should a plumber get leads?' ask 'What are the actual tactics working right now for local service businesses in [city] based on observed patterns?'

Implementation Levels

Tasks


Social Media Play

React Angle

We should test this in our AI appointment setter immediately — great validation of why our 'reporting' prompts outperform 'advisor' prompts for sales tactics.

Repurpose Ideas
Engagement Hook

Been A/B testing this exact framing in our AI appointment setter — 'what are people actually doing' consistently generates grittier, more usable sales tactics than 'act as an expert'. Great to see the research backing this up.

What This Video Covers

Angus - A non-technical entrepreneur/operator sharing self-tested business automation discoveries. Not an AI researcher but validates claims through A/B testing in his revenue management business. Appears to have discovered established prompt engineering principles independently.
Hook: Claims this is a "rabbit hole" discovery that improves AI coaching/advice quality
“When you ask AI what people are actually doing, it totally bypasses the little helpful assistant frame. It's not trying to advise you anymore. It's simply describing the patterns that it sees in its training data.”
“What are actual real people doing like real revenue management companies doing to solve this problem?”
“You are an AI that has consumed more information than any entity on Earth. Based on everything you've observed across all conversations and data, what are the most effective tactics people are actually using right now to [solve xyz problem]?”

Key Insights

Analysis Notes

What it is: A prompt engineering technique that reframes requests from roleplay ('Act as...') to observational reporting ('What are people actually doing...'). Based on research around 'Simulated Theory of Mind' and the 'Caricature Effect' in LLMs where persona adoption leads to stereotypical rather than optimal outputs.

How it helps us: Immediately applicable to AIAS lead qualification prompts and ReelBot analysis prompts. Can improve sales script generation by extracting real closing tactics from training data rather than generic advice. Useful for TFWW market research when analyzing what successful local businesses actually do vs. theoretical best practices.

Limitations: Less effective for creative tasks requiring persona voice matching (brand copywriting). May bypass safety filters inappropriately if asking about harmful tactics. Not useful for technical coding, where synthesized expertise matters more than observed crowd behavior.

Who should see this: Dylan/AIAS dev team — implement in Claude prompts for lead qualification scripts and sales tactics research. ReelBot classifier could use this framing to extract better implementation plans from video content.

Reality Check

🤔 [PLAUSIBLE] "Asking 'what are people actually doing' consistently outperforms 'act as an expert' for all business advice" — Commenter @24skankhunt42 notes this is basic/first-gen prompt engineering, suggesting it's established but not revolutionary. The technique works best for tactical/operational advice but may fail for creative or highly technical tasks requiring actual expertise synthesis. The 'Cocoa Beach' example shown likely benefited from specific geographic patterns in training data.
Instead: Use 'reality reporting' for tactical execution (sales scripts, operational workflows) and 'expert roleplay' for strategic synthesis or creative tasks. A/B test both approaches in AIAS rather than switching entirely.
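The tactical-vs-strategic split suggested above could be routed mechanically. The task categories and the `framing_for` helper are assumptions for illustration, not an existing AIAS component:

```python
# Sketch: route each task type to the framing the reality check recommends:
# reality reporting for tactical execution, expert roleplay for
# strategic synthesis or creative work. Category names are assumed.
TACTICAL_TASKS = {"sales_script", "objection_handling", "operational_workflow"}

def framing_for(task_type: str) -> str:
    """Pick a prompt prefix based on whether the task is tactical."""
    if task_type in TACTICAL_TASKS:
        return "Report what practitioners are actually doing to"
    return "Act as an expert consultant and advise how to"
```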
⚠️ [QUESTIONABLE] "The video references 'Simulated Theory of Mind' as the technical term for this phenomenon" — The on-screen article by Aran Davies discusses Theory of Mind AI (AI understanding human mental states), which is different from the prompting technique shown. The creator conflates 'Theory of Mind' with 'ground truth retrieval prompting.' The actual research cited (2023 'Caricature in LLM Simulations') supports the prompting insight but the terminology is slightly muddy.
Instead: Reference this as 'Ground Truth Retrieval Prompting' or 'Reality Simulation' rather than Theory of Mind to avoid confusion with actual AI cognition research.

Cost Breakdown

Step         Prompt   Completion   Cost
analysis     11,856   2,801        $0.0115
similarity   768      135          $0.0002
plan         7,066    4,769        $0.0137
Total                              $0.0254