The Claim-Evidence-Reasoning Video Series
Bridging AI Innovation with Cognitive Science
Role: Instructional Designer & Creative Producer
Tools: HeyGen (Avatar Animation), Midjourney (Visual Assets), LLMs (Scripting & Iteration), Canva (Visual Assets & Post-production)
This series was developed with an agile-inspired ADDIE framework designed to address learner variability and reduce extraneous cognitive load in the science classroom.
Analysis: Needs Assessment & Scoping
The project began with a rigorous front-end analysis to identify the gap between student performance and scientific literacy requirements.
Criteria & Constraints: Defined the technical constraints (duration, accessibility) and instructional criteria (NGSS alignment).
Concept Mapping: Mapped the CER framework against sensemaking models to ensure the content promotes high-level inquiry.
Cognitive Load Management: Applied Mayer’s Principles of Multimedia Learning to determine how to scaffold complex logic through low-stakes, relatable "anchor" examples.
Design: Iterative Rapid Prototyping
Using a human-in-the-loop AI workflow, the design phase focused on creating a cohesive, inclusive learning experience.
Scripting & LLM Collaboration: Leveraged LLMs for iterative scripting, working through multiple drafts to refine tone, pedagogical flow, and clarity.
Visual Asset Generation: Utilized Midjourney for the development of consistent representational imagery and character archetypes to foster learner connection and social presence.
Formative Testing: Conducted "thin-slice" prototyping with HeyGen to validate the signaling principle—ensuring on-screen avatars and text cues effectively direct learner attention.
Development: Modular Content Production
I adopted a modular development strategy to make the series scalable and maintainable.
Chunking: Content was broken into 2–5 minute micro-learning modules to support retention and enable asynchronous learning.
Concurrent Workflows: Managed a parallel production pipeline, generating high-fidelity video assets via HeyGen while designing cognitive scaffolding tools alongside them.
Multimodal Integration: Ensured tight alignment between the narration and the visual "Logic Bridge" to reduce the split-attention effect.
Implementation: Ecosystem Deployment
The rollout strategy focused on transfer of learning, ensuring the video content translates into classroom practice.
Instructional Scaffolding: Developed supplementary performance support tools (a teacher guide) to help learners bridge the gap between video observation and independent application.
Implementation Guides: Authored facilitator guides detailing strategic questioning techniques to help educators pivot from simple observations to complex inquiry.
Evaluation: Quality Assurance & Stakeholder Feedback
The project concluded with a multi-layered evaluation to ensure instructional integrity.
Expert Review (SME): Engaged subject matter experts and independent advisors to validate the scientific claims and logical frameworks.
Stakeholder Feedback Loops: Conducted reviews to identify areas for refinement in future iterations, ensuring the product meets the evolving needs of both educators and students.
Key Instructional Design Competencies Demonstrated:
Cognitive Science Application: Intentional use of Cognitive Load Theory and Mayer's multimedia learning principles.
AI Tool Fluency: Sophisticated integration of GenAI (Midjourney, HeyGen, LLMs) into a professional production pipeline.
Modular Design: Expertise in creating reusable, scalable learning assets.
The Technical Stack: AI-Powered Learning Production
| Tool | Instructional Design Purpose |
|---|---|
| LLMs (Gemini/ChatGPT) | Scripting & Alignment: Iterative drafting to ensure pedagogical tone and curriculum alignment. |
| Midjourney | Visual Assets & B-Roll: Creating consistent character archetypes and high-fidelity video snippets. |
| HeyGen | Avatar Synthesis: Generating the instructional "face" of the series to build instructor immediacy. |
| Canva | Assembly & Fidelity: Final compositing, audio balancing, and applying consistent UI elements. |