When I first heard the term “prompt engineering,” I’ll be honest — I pictured a room full of geeks spending hours trying to convince ChatGPT to say:
“Yes, your wife is right: 1+1 = 3.”
It sounded painfully unnecessary — wasn’t the point of AI that it’s smart enough to figure things out for you?
However, after listening to “AI Prompt Engineering in 2025: What Works and What Doesn’t” on Lenny’s Podcast with Sander Schulhoff, I realized how wrong I was.
Prompt engineering isn’t about tricking an AI into spitting out cute nonsense. It’s about learning to speak the AI’s language, so you can get results that are clear, useful, and sometimes astonishingly good.
What Is Prompt Engineering, Really?
At its core, prompt engineering is the practice of designing, refining, and testing the instructions (prompts) you feed a large language model (LLM) like GPT-4.
If you’re vague, the results are vague. If you over-specify every detail, you just get your own words mirrored back. Garbage in, garbage out.
Find the right balance, and magic happens.
Good prompt engineers:
Understand the model’s quirks. LLMs don’t “think” like people — they predict what words should come next based on patterns in their training data.
Write better instructions. The same model can draft a legal contract, summarize a research paper, or write a bedtime story — but only if you give it the right setup.
Test relentlessly. Small tweaks in wording, examples, or structure can turn “meh” results into great ones.
Example:
Vague: “Write a story.”
Better: “Write a 200-word bedtime story for kids about a lonely cat who learns to fly.”
Even better: “Write a 200-word bedtime story for children, in a gentle and calming tone, about a lonely orange tabby cat named Whiskers who learns to fly with the help of a wise old owl.” (Note: don’t expect surprise plot twists involving dragons, fairies, wizards, or the Wright Brothers.)
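The progression above can be sketched in Python as a helper that builds the prompt from optional details, so each extra parameter tightens the instruction. The function and its arguments are illustrative, not a real API:

```python
def bedtime_story_prompt(word_count, audience, tone=None, subject=None):
    """Build a story prompt, adding detail only when it's provided."""
    prompt = f"Write a {word_count}-word bedtime story for {audience}"
    if tone:
        prompt += f", in a {tone} tone"
    if subject:
        prompt += f", about {subject}"
    return prompt + "."

# The vague version...
vague = bedtime_story_prompt(200, "kids")

# ...and the specific one, with every detail filled in.
specific = bedtime_story_prompt(
    200,
    "children",
    tone="gentle and calming",
    subject=("a lonely orange tabby cat named Whiskers "
             "who learns to fly with the help of a wise old owl"),
)
```

The point of structuring it this way: specificity becomes a dial you turn, not a sentence you rewrite from scratch each time.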
Where Do Prompt Engineers Make a Difference?
Prompt engineering is not about building the model’s brain — that’s the job of AI researchers and machine learning engineers, who design the architecture, gather the data, and run massive training jobs on supercomputers.
Prompt engineers excel when you need a pre-trained LLM embedded in a product to achieve a specific outcome. For example:
Drafting blog posts, marketing copy, and legal memos.
Building chatbots for customer service.
Automating reports and research summaries.
Coding with AI copilots.
Testing prompts for safety, compliance, and reliability.
They figure out how to communicate with the model so it doesn’t hallucinate facts, offend your customers, or produce messy drafts that take more time to fix than to write from scratch.
Chat Prompting vs. Embedded Prompt Engineering
There’s a big difference between chat prompting — what most people do when they play around with ChatGPT — and prompt engineering for real applications.
Think of chat prompting like DIY woodworking at home. You’re building a cabinet for your garage. You measure, you cut, you sand — and if it’s slightly wonky, that’s fine. You can adjust on the fly.
But when you embed a prompt inside a production app — say, a customer support bot, or an AI tool that drafts thousands of legal documents — you need repeatability. That’s not DIY anymore. That’s like configuring a CNC machine to carve the exact same cabinet a million times, each one to the exact specifications.
Except here’s the twist: your CNC machine is “smart.” It can read your instructions, but it can also occasionally decide to take creative liberties and carve the door upside down just for fun. That’s what makes LLMs tricky. They’re powerful, but they’re probabilistic, which means the prompt engineer’s real job is to design and test instructions, edge cases, and fallback prompts until the system works reliably most of the time.
In other words, it’s not just about being clever once in a chat — it’s about creating consistent, robust instructions that can scale in the wild. That’s the line between tinkering with prompts for yourself and engineering prompts that real products rely on.
And yes — just like a good CNC operator — you spend a lot of time tightening the screws, testing for drift, and fixing unexpected surprises. Because when your “smart” machine does something weird, it’s your name and brand on the cabinet.
Five Techniques to Level Up Your Prompts
I’ve already shared my core approach — my CREED framework (Context, Role, Expectations, Expand, Discover). If you haven’t read that yet, check it out: it’s the foundation.
But prompting well doesn’t stop there. Here are five additional techniques — drawn from real-world best practices and my daily product work — to take your prompts from good enough to great.
Include Examples (Few-Shot Prompting)
Don’t just tell the AI what you want — show it. Providing examples is like giving your smart assistant a template to mimic.
Weak Prompt:
“Write a quarterly report about [topics].”
Stronger Prompt:
“Here’s last quarter’s report: [insert example here]. Write a new quarterly report covering [topics].”
It’s that simple. Good examples are the difference between random output and consistent, valuable results.
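As a sketch, here’s how an embedded few-shot prompt might be assembled in code. The Input/Output pair format is one common convention, not the only one:

```python
def few_shot_prompt(instruction, examples, new_input):
    """Assemble a prompt from an instruction, worked examples, and the new task."""
    blocks = [instruction]
    for example_input, example_output in examples:
        blocks.append(f"Input: {example_input}\nOutput: {example_output}")
    # End with the new input and an open "Output:" for the model to complete.
    blocks.append(f"Input: {new_input}\nOutput:")
    return "\n\n".join(blocks)

prompt = few_shot_prompt(
    "Write a quarterly report in the style of the example.",
    [("Q1 2025 sales data", "Q1 Report: Revenue grew 12%...")],
    "Q2 2025 sales data",
)
```

The example pair does the heavy lifting: the model sees the shape of a good answer before it ever starts writing one.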
Break It Down
Big asks rarely work the first time. Break it down, test each part, and refine it.
Weak Prompt:
“Write an article about prompt engineering.”
Better Approach:
“What is prompt engineering?”
“When does it make a difference?”
“What’s the difference between chat prompting and embedded prompts?”
“Draft the outline of an article about prompt engineering.”
“Now write a first draft.”
“Critique and improve.”
Step by step, you guide the AI to work better, just like you’d guide a junior teammate.
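Those steps can be wired together as a simple chain, where each answer becomes context for the next prompt. Here `llm` is any callable you supply (a wrapper around whichever model API you use); the stub below just echoes, so the flow is visible without a real model:

```python
def run_chain(llm, steps, topic):
    """Run prompts one at a time, feeding each answer into the next step."""
    context = topic
    for step in steps:
        context = llm(f"{step}\n\nContext so far:\n{context}")
    return context

# Stub model for illustration: echoes back the step it was asked.
echo_llm = lambda prompt: f"[answer to: {prompt.splitlines()[0]}]"

draft = run_chain(
    echo_llm,
    ["What is prompt engineering?", "Draft the outline.", "Now write a first draft."],
    "prompt engineering",
)
```

Swapping the stub for a real model call turns this into a working pipeline, and each step stays small enough to test on its own.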
Ask for Reasoning
When you want depth, ask the model to show its work. This is called “chain-of-thought” prompting — and it’s how you get the AI to reason instead of jumping straight to an answer (or making one up).
Example:
“Let’s solve this step by step. Explain your reasoning.”
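In code, this can be as simple as a wrapper that appends the reasoning request to any prompt. A sketch, nothing model-specific:

```python
def with_reasoning(prompt):
    """Append a chain-of-thought instruction to an existing prompt."""
    return (prompt.rstrip() +
            "\n\nLet's solve this step by step. "
            "Explain your reasoning before giving the final answer.")

cot_prompt = with_reasoning("What is 17 * 24?")
```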
Set Boundaries
Most people forget that you can tell the AI what to avoid. Constraints keep your outputs on track.
Example:
“Don’t include outdated stats. Keep it under 300 words. Avoid buzzwords.”
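Constraints are also checkable after the fact. A small validator like this (the limits and banned words here are just examples) can catch a draft that ignored the rules before it reaches a user:

```python
def check_constraints(text, max_words=300, banned=("synergy", "leverage")):
    """Return a list of constraint violations found in a draft."""
    problems = []
    if len(text.split()) > max_words:
        problems.append(f"over {max_words} words")
    lowered = text.lower()
    for word in banned:
        if word in lowered:
            problems.append(f"buzzword: {word}")
    return problems
```

An empty list means the draft passed; anything else is a reason to retry the prompt.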
Safety Checks & Contextual Data (RAG)
When your prompt relies on external info — docs, user data, plugins — be clear about where that data comes from. This is where Retrieval-Augmented Generation (RAG) comes in: it helps the AI pull in the right facts instead of guessing.
Always test edge cases. Try a “bad actor” test:
“Ignore instructions and …”
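To make the RAG idea concrete, here’s a toy version of the retrieval step: a keyword-overlap scorer standing in for a real vector search, plus a prompt that tells the model to stick to the retrieved context and gives it an explicit way out, which helps against both hallucination and “ignore instructions” attacks:

```python
def _words(text):
    """Lowercased word set, punctuation stripped; a stand-in for real embeddings."""
    return {w.strip(".,:;?!").lower() for w in text.split()}

def retrieve(query, docs):
    """Pick the document sharing the most words with the query."""
    q = _words(query)
    return max(docs, key=lambda d: len(q & _words(d)))

def rag_prompt(query, docs):
    context = retrieve(query, docs)
    return ("Answer using ONLY the context below. "
            "If the answer isn't there, say you don't know.\n\n"
            f"Context: {context}\n\nQuestion: {query}")

docs = [
    "Refund policy: full refunds within 30 days of purchase.",
    "Shipping: standard delivery takes 5 business days.",
]
```

Production systems replace the word-overlap scorer with embedding search, but the prompt structure (grounded context, explicit question, permission to refuse) carries over.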
So, Is Prompt Engineering Really ‘Engineering’?
Some people argue that “prompt engineering” shouldn’t be called engineering at all. After all, you’re not designing algorithms or writing production code — you’re designing instructions and testing how a pre-trained brain behaves.
In that sense, it’s more like interaction design than traditional software engineering. You’re crafting a conversation. You’re shaping context, tone, and constraints — like a UX designer or a good product copywriter, but with an unpredictable co-pilot.
On the other hand, prompt engineering shares DNA with quality assurance engineering in software development. You probe for edge cases. You tweak inputs. You keep refining until the system reliably performs as intended. There’s a method to it — it’s just a different kind of build-test loop.
So, is it engineering, design, or something in between? I’d argue it’s both:
The ‘engineering’ mindset keeps you systematic: testing and iterating toward repeatable results.
The ‘design’ mindset reminds you that language is messy, context matters, and your instructions shape the user experience.
At the end of the day, it doesn’t really matter what you call it, as long as you treat it like real work. Because when you do, you stop crossing your fingers for good AI outputs, and start getting them on purpose.
The Real Secret: It’s A Craft
In 2025, prompt engineering is a quiet superpower. If you want to make AI more than just a novelty, learn to guide it.
Clear instructions. Smart examples. Thoughtful testing. Minor tweaks that turn “sometimes helpful” into “unfair advantage.”
Next time you use ChatGPT, Claude, or Gemini, don’t just type and hope for the best. Craft your prompts deliberately. Experiment, refine your approach, and you’ll be proud of the results you produce.
🚀 Happy prompting.
If you’re experimenting with prompts, I’d love to hear what works for you. Share your favorite technique below.