Why Better Prompts Lead To Better Learning—And Why We Should Prompt Together
What I’ve found is that generative AI (Gen AI) becomes a true partner in learning design when we approach it with purpose. It’s not just a time-saver. It’s a prototype generator, a sounding board, and—when prompted well—a cocreator of rich, personalized, and reusable learning assets. The key isn’t just in using AI—it’s in how we prompt it and, even more importantly, who we prompt it with. As an Instructional Designer, I’m constantly looking for ways to scale our work without compromising quality or intent. The demand for timely, engaging, and outcome-aligned learning content continues to grow across departments, campuses, and organizations. Meeting that demand isn’t just about working faster; it’s about working smarter and more collaboratively.
Some of the most effective prompts I’ve used weren’t crafted in isolation. They came out of live cocreation sessions with faculty, Subject Matter Experts (SMEs), team leads, and even learners themselves. Because when we prompt together, we’re not just generating content—we’re building shared understanding. That understanding turns into templates, not one-offs. Into systems, not just solutions. Let’s explore how to do that using three interconnected frameworks—starting with the one I come back to most.
Prompting Is Design—Not Just A Command
In Instructional Design, we use frameworks like ADDIE, SAM, and Bloom’s taxonomy to bring structure and clarity to what we build. Prompting, when done well, is no different. It’s not a one-line question we toss to a machine—it’s an intentional design move.
When we align prompt creation with thoughtful frameworks, we get better outputs. But more importantly, we create scalable, repeatable, and teachable systems that others on our team can use and adapt. One of the simplest and most powerful tools I use to do this is the pentagon model.
The Pentagon Model: Make Prompts Transferable
The pentagon model breaks down the key ingredients of a well-structured prompt into five core components: persona, context, task, output, and constraint. When each of these is clearly defined, the prompt becomes specific enough to deliver relevant results and general enough to be reused across different learning scenarios. Let’s break this down:
- Persona is about role. Who is the AI responding as? A professor, a nurse, a coach, a historian? Giving AI a defined persona gives its output voice, perspective, and credibility.
- Context frames the environment or situation. Is the content meant for onboarding, clinical practice, student projects, or leadership coaching? Providing that background ensures the AI understands how to tailor its response.
- Task clarifies the purpose. Are we asking AI to summarize, generate dialogue, simulate a scenario, or create an outline? A clearly defined task keeps the output focused and useful.
- Output defines the format. Do we need a bulleted list, a dialogue script, a quiz, a chart? By setting this expectation, we reduce editing and improve usability.
- Constraint adds guardrails. Should the tone be conversational or academic? Does the response need to fit within a 200-word limit? Should it be appropriate for learners with different reading levels?
Using the pentagon model, teams can cocreate prompt templates that aren’t tied to one situation but can be adapted across departments and use cases. For instance, a prompt we originally created to generate nursing case studies was later adapted for HR onboarding materials, just by tweaking the role, audience, and context. The structure stayed the same, which meant the process didn’t have to start from scratch. That’s how we scale content creation with consistency and quality intact.
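One way to make the pentagon model concrete is to treat it as a data structure rather than prose. The sketch below is a minimal, illustrative example, not a production tool: the class name, field values, and wording are all invented for demonstration. It shows how holding the five components in a template lets a team swap persona and context (as in the nursing-to-HR example) while the structure stays fixed.

```python
from dataclasses import dataclass


@dataclass
class PentagonPrompt:
    """A reusable prompt template built from the pentagon model's five parts."""
    persona: str     # who the AI responds as
    context: str     # the situation or audience the content serves
    task: str        # what we are asking the AI to do
    output: str      # the format we expect back
    constraint: str  # guardrails: tone, length, reading level

    def render(self) -> str:
        """Assemble the five components into one prompt string."""
        return (
            f"You are {self.persona}. {self.context} "
            f"{self.task} {self.output} {self.constraint}"
        )


# Original use case: nursing case studies (example values are hypothetical)
nursing = PentagonPrompt(
    persona="an experienced clinical nursing instructor",
    context="The content is for a first-year clinical practice course.",
    task="Write a patient case study involving medication reconciliation.",
    output="Format it as a one-page narrative followed by three discussion questions.",
    constraint="Keep the tone professional and under 400 words.",
)

# Adapted for HR onboarding by changing the fields, not the structure
hr = PentagonPrompt(
    persona="a senior HR onboarding specialist",
    context="The content is for new-hire orientation at a mid-size company.",
    task="Write a workplace scenario about requesting time off correctly.",
    output="Format it as a short dialogue followed by three discussion questions.",
    constraint="Keep the tone friendly and under 400 words.",
)
```

Because the template is shared, a colleague in another department only has to fill in five labeled slots rather than reverse-engineer a one-off prompt.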
Design Thinking: Prompting As A Team Process
While the pentagon model provides the anatomy of a good prompt, design thinking provides the mindset. It invites empathy, iteration, and collaboration—all of which make prompting more meaningful and sustainable. Design thinking isn’t just for product development—it’s a creative and human-centered way to write better AI prompts. Instead of jumping straight to the output, you step into the user’s shoes, experiment, and refine. The goal? Prompts that make AI responses more useful, personalized, and actionable.
When Instructional Designers work side-by-side with faculty, staff, and learners to create prompts, something important happens: we stop guessing what people need and start building with them. Prompting becomes less of a solo act and more of a cocreation process.
In one project, we developed a set of AI prompts to simulate real-world conflict resolution scenarios for a professional development course. But rather than just designing the content ourselves, we invited managers, support staff, and even interns into the prompting session. Their lived experiences shaped the tone, complexity, and vocabulary of the scenarios. The result? Content that felt immediately real and useful—because it was.
This collaborative approach speeds up iteration and increases buy-in. Instead of revisiting and revising content after it misses the mark, you’re aligning from the start. And because the knowledge is shared, the process becomes scalable. Others in the organization can take the same design approach and generate new content without depending on a single gatekeeper or team.
Backward Design: Align Prompts With Learning Goals
If the pentagon model gives you structure and design thinking brings collaboration, backward design ensures everything we create actually supports learning outcomes. Backward design for AI prompts borrows from the well-known Wiggins and McTighe framework, but with a twist: it’s all about crafting prompts that get the results you actually need. Whether you’re asking AI to help design a lesson, write a script, generate images, or break down data, this approach helps you stay focused on outcomes, not just outputs.
Backward design starts with the end in mind: what should learners know, do, or feel after this experience? From there, we decide how we’ll measure success (the assessment), and only then do we design the learning experience—and the prompts to support it.
For example, in a customer service training, we needed learners to demonstrate empathy and problem-solving skills in real-time conversations. Instead of starting by asking AI to “write a scenario,” we started with the learning goal: “Employees will de-escalate a frustrated customer using active listening techniques.” That drove the task (“create a realistic conversation”), the context (“in a retail setting with long wait times”), and the output (“a role-play script with labeled speaker turns”).
Because we tied the prompt to a performance goal, the output was immediately aligned. Better yet, the structure could be reused in different industries—just substitute a hospital, university, or call center as the setting, and the same framework applies. Prompts rooted in outcomes don’t drift. They scale, translate, and evolve.
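The outcome-first flow described above can also be sketched in code. This is a hedged illustration, not a prescribed implementation: the function name and strings are assumptions I've made up to mirror the customer service example, with only the setting swapped when the prompt moves to a new industry.

```python
def backward_design_prompt(outcome: str, setting: str) -> str:
    """Build a role-play prompt by working backward from a learning outcome.

    The outcome drives the task, context, and output format; only the
    setting changes when the prompt is reused in another industry.
    """
    task = f"Create a realistic conversation in which {outcome}."
    context = f"The scene takes place {setting}."
    output = "Write it as a role-play script with labeled speaker turns."
    return f"{task} {context} {output}"


# The customer service version from the example above
retail = backward_design_prompt(
    outcome=(
        "an employee de-escalates a frustrated customer "
        "using active listening techniques"
    ),
    setting="in a retail setting with long wait times",
)

# The same outcome-driven structure, reused in a healthcare context
hospital = backward_design_prompt(
    outcome=(
        "a nurse de-escalates a frustrated family member "
        "using active listening techniques"
    ),
    setting="in a busy hospital emergency department",
)
```

Notice that the assessment-relevant pieces (the observable behavior and the output format) are fixed by the function, which is what keeps reused prompts from drifting away from the learning goal.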
Why Prompting Should Be A Collaborative Habit
Working with AI can feel fast—but working with AI together, using a shared prompt model, is not only faster but smarter. When we involve stakeholders early in the prompting process, we avoid the typical back-and-forth that comes from misaligned expectations. Cocreated prompts reflect real needs, use shared language, and generate reusable formats. Over time, these prompts become part of your design toolkit—a library of modular components you can mix, match, and adapt.
Even more powerful? Prompting collaboratively is a form of upskilling. Faculty, staff, and designers learn how to speak AI’s language together. They start to think in frameworks, articulate tasks more clearly, and use AI more effectively on their own. Prompting becomes a shared literacy—and that’s what makes it sustainable.
Building A Scalable Prompting Culture
Scaling content doesn’t mean creating more from scratch. It means creating smarter, reusable systems through collaboration. AI can help—but only when we use it with intention, and when we prompt with purpose. Here’s what I’ve learned really works:
- Use frameworks like the pentagon model, design thinking, and backward design to structure your prompts
- Involve stakeholders early, not just at review stages
- Build shared prompt templates and store them where others can easily access and adapt them
- Host prompt jam sessions during planning or sprint cycles to normalize the practice
In short: treat prompting like design. Make it collaborative, purposeful, and repeatable. You’ll move faster. You’ll align better. And most importantly, you’ll build a learning ecosystem where content isn’t just generated—it’s strategically created, built in community, and made to scale.