Brave New Bookshelf Episode 62 – Critical Thinking and AI Myths with Steph Pajonas and Danica Favorite
In this episode, Steph Pajonas and Danica Favorite take a thoughtful look at the real conversations surrounding AI. From media headlines to medical literacy, from writing workflows to “brain atrophy” myths, this discussion explores what actually happens when humans and AI work together.
This is not a hype episode. And it’s not a fear episode either. It’s about critical thinking.
Apple Podcasts | Spotify | Amazon Music | YouTube | iHeartRadio | RSS Feed
Meet Steph Pajonas and Danica Favorite
Steph Pajonas is the CTO of Future Fiction Academy and Future Fiction Press, where she teaches authors how to integrate AI across writing, editing, and publishing workflows.
Danica Favorite is the Community Manager at PublishDrive, helping authors navigate metadata, distribution, and royalties so their books reach readers worldwide.
Together, they connect creativity and infrastructure, showing how AI fits into a modern publishing ecosystem without replacing the human core of storytelling.
"It does not atrophy your brain or cause you to not be able to critically think anymore. It's actually a part of the whole process."
— Steph Pajonas, on debunking myths about AI and brain health
AI as Support, Not Replacement
Danica shares her experience in a year-long RIM (Regenerating Images in Memory) training program, where human intuition is essential. Yet even in deeply personal work, AI has found a place.
Instructors use it to:
- Create memorable theme songs for concepts
- Generate worksheets and coloring pages
- Support students with quick answers through expert bots
The key insight is simple: when AI handles repetitive or administrative tasks, humans have more space for meaningful connection.
Steph reinforces this idea throughout the episode. AI is not the main character. It is an auxiliary tool that clears the runway for deeper work.
The “Brain Atrophy” Narrative
One of the biggest talking points comes from a widely discussed MIT study that many headlines framed as proof that AI harms critical thinking.
Steph breaks this down carefully.
The original study measured cognitive load, not intelligence. The researcher later clarified that the findings did not mean people were becoming less intelligent or experiencing “brain rot.”
Instead, the question is how we use AI.
When an AI suggests unexpected plot points or angles, it can expand the thinking process. Rather than replacing thought, it introduces more variables. The creator still evaluates, selects, rejects, and refines.
As Steph puts it, AI does not stop you from thinking. It gives you more to think about.
Media Bias and the Rage-Click Economy
Why do so many AI headlines feel dramatic?
Steph and Danica explore how negative framing often drives engagement. Sensational claims travel faster than nuance.
They encourage listeners to pause and ask:
- Who benefits from this headline?
- Is the summary accurate to the full study?
- Am I reacting because it confirms a fear?
Danica shares an example of a meteorologist criticized for using AI imagery despite decades of expertise. The backlash said more about public anxiety than about his actual knowledge.
The broader takeaway is not about defending AI. It is about defending critical thinking.
"Who's making the money off of you rage clicking, rage sharing, rage commenting? That's what makes the money."
— Danica Favorite, on the motivation behind sensationalist and negative AI news coverage
AI in Everyday Life
The conversation moves beyond publishing into daily life.
Steph uses AI to:
- Design fitness progressions
- Break down complex grammar rules from Duolingo lessons
- Compare products and research purchases
Danica uses AI to:
- Interpret medical lab results before appointments
- Reduce anxiety through clearer understanding of technical data
- Explore lifestyle questions in a structured way
Both emphasize the same boundary: AI is not a doctor, a lawyer, or a replacement for expertise. It is preparation. It is literacy. It is clarity.
And clarity reduces fear.
The “Stolen Data” Debate
The episode also addresses one of the most emotionally charged AI discussions: training data.
Steph and Danica reference the Anthropic settlement and highlight a detail often overlooked in media coverage. Court documents indicated that certain pirated books were stored on servers but were not part of the training corpus for released models.
They also discuss a broader philosophical point.
Human authors are shaped by everything they read. Styles evolve from exposure to language patterns. AI systems learn patterns at scale in a similar way.
The conversation does not dismiss ethical concerns. Instead, it asks for consistency. If someone rejects AI on principle, that is a position. But selectively using it for marketing while condemning it elsewhere creates tension.
The core theme returns again: think critically. Stay consistent. Stay informed.
Tools Mentioned in This Episode
- Gemini for research, shopping comparisons, and lifestyle support
- ChatGPT for explaining complex data and technical concepts
- Claude for its constitutional AI approach to model training
- Duolingo for language learning
- Dragon Dictation as an early example of AI-driven workflow support
Key Takeaways
- AI works best as a junior collaborator.
- Sensational headlines deserve scrutiny.
- Using AI does not eliminate critical thinking.
- Clarity reduces fear.
- Human connection remains the point.
This episode invites you to step back from the noise and examine your own relationship with AI. Not through hype. Not through panic. But through thoughtful evaluation.
Want more insights on the evolving role of AI in publishing? Listen to this episode of Brave New Bookshelf on your favorite podcast platform.
Apple Podcasts | Spotify | Amazon Music | YouTube | iHeartRadio | RSS Feed