Introduction to Prompt Engineering
From vague asks to precise instructions: the craft of steering models on purpose
Prompt engineering is the practice of turning a messy goal into a crisp specification the model can follow. Not magic words—just clear intent, constraints, and expected form.
Prompt engineering is a relatively new discipline focused on developing and optimizing prompts to use language models (LMs) effectively across a wide variety of applications and research topics. Prompt engineering skills also help you better understand the capabilities and limitations of large language models (LLMs).
Prompt engineering is not just about designing and developing prompts. It encompasses a wide range of skills and techniques for interacting with and building on top of LLMs, and it is a key skill for interfacing with these models and understanding what they can do.
You can also use prompt engineering to improve the safety of LLMs and to build new capabilities, such as augmenting LLMs with domain knowledge and external tools.
What prompt engineering is (and isn't)
- Is: Task design + instruction design + output design.
- Is: Making intent explicit: role, objective, inputs, rules, format.
- Is not: Secret spells or one‑off hacks.
- Is not: A substitute for data/tools/guardrails—just the interface.
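As a sketch of "making intent explicit," here is a minimal helper that assembles a prompt from the pieces above. The function name and template layout are illustrative assumptions, not a standard API:

```python
def build_prompt(role: str, objective: str, inputs: str,
                 rules: list[str], output_format: str) -> str:
    """Assemble an explicit prompt: role, objective, rules, format, inputs."""
    rules_text = "\n".join(f"- {r}" for r in rules)
    return (
        f"You are {role}.\n"
        f"Objective: {objective}\n"
        f"Rules:\n{rules_text}\n"
        f"Output format: {output_format}\n\n"
        f"Input:\n{inputs}"
    )

prompt = build_prompt(
    role="a support-ticket triage assistant",
    objective="Convert the user's message into a support ticket.",
    inputs="Hey, my laptop won't turn on after the last windows update...",
    rules=["Do not invent details.",
           "Mark the ticket urgent only if the user states a deadline."],
    output_format="JSON with fields: summary, category, priority",
)
print(prompt)
```

Nothing here is model-specific; the point is that each part of the intent gets its own explicit slot instead of being left implicit.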
Don't worry if this seems like a lot for now. Throughout this module we'll discuss and practice a range of techniques that will help you improve your prompting skills, and by the end you'll be well on your way to writing expert-level prompts.
For now, let's look closely at some example prompts to see what makes a good prompt "good."
Example — Bad vs Better (same task)
Say we receive the following message and want to build a tool that turns messages like it into support tickets:
Hey, my laptop won't turn on after the last windows update 😡 also my charger gets super hot. i need someone to fix it today because i have a deadline tomorrow.
Example 1
Prompt
Summarize this issue.
Response
The user’s laptop will not power on after a Windows update. The charger is overheating. They have a deadline tomorrow and need urgent help.
On the surface, this works: it's a correct summary. But if our goal is to feed this into a structured ticketing system, the format isn't very useful. A human still has to copy the details into the right fields.
Let's see if we can improve upon this.
Example 2
Prompt
Extract a support ticket from the message above. Respond with only valid JSON containing these fields: "summary" (string), "category" (one of "hardware", "software", "account"), and "priority" (one of "low", "medium", "high", "urgent").
Response
{
  "summary": "Laptop won't power on after a Windows update; charger overheats.",
  "category": "hardware",
  "priority": "urgent"
}
Did you notice the difference? The second example produced a much more structured response. Some advantages of this structure:
- Directly actionable: the ticket can be auto-routed to hardware support.
- Consistent categories & priority: enables filtering, reporting, SLAs.
- Automation-ready: can trigger notifications for urgent cases without a human reading it.
- No post-processing guesswork: every field is explicit and type-safe.
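The advantages above can be sketched in code: a minimal routing function that consumes the ticket JSON directly, with no human in the loop. The field names and queue names are illustrative assumptions:

```python
import json

# Illustrative vocabularies; a real system would define its own.
VALID_CATEGORIES = {"hardware", "software", "account"}
VALID_PRIORITIES = {"low", "medium", "high", "urgent"}

def route_ticket(raw: str) -> str:
    """Validate the model's JSON output and return the queue to route it to."""
    ticket = json.loads(raw)
    if ticket["category"] not in VALID_CATEGORIES:
        raise ValueError(f"Unknown category: {ticket['category']}")
    if ticket["priority"] not in VALID_PRIORITIES:
        raise ValueError(f"Unknown priority: {ticket['priority']}")
    queue = f"{ticket['category']}-support"
    if ticket["priority"] == "urgent":
        queue += "-escalated"  # urgent tickets trigger escalation automatically
    return queue

response = ('{"summary": "Laptop won\'t power on after a Windows update; '
            'charger overheats.", "category": "hardware", "priority": "urgent"}')
print(route_ticket(response))  # hardware-support-escalated
```

Because every field is explicit and drawn from a fixed vocabulary, bad output fails loudly at validation time instead of silently producing a mis-routed ticket.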
Notice how a simple change in the prompt leads to outsized advantages? In this module we'll cover many techniques for writing better prompts that take full advantage of an LLM.