Basics of Prompting

From simple asks to reliable instructions: how to talk to models so they listen


Prompting an LLM

You can achieve a lot with simple prompts, but the quality of results depends on how much information you provide the model and how well-crafted the prompt is. A prompt can contain information like the instruction or question you are passing to the model and include other details such as context, inputs, or examples. You can use these elements to instruct the model more effectively and improve the quality of results.

Let's get started by going over a basic example of a simple prompt:

Prompt:

The sky is

Given the prompt above, the language model responds with a sequence of tokens that make sense following the context "The sky is". Even so, the output might be unexpected or far from the task you want to accomplish.

In fact, this basic example highlights the need to provide more context or instructions about what, specifically, you want to achieve. This is what prompt engineering is all about.
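If you want to reproduce this outside the interactive examples, here is a minimal sketch using the OpenAI Python client. The model name is a placeholder, and the setup assumes the openai package is installed with an OPENAI_API_KEY in the environment:

# Minimal sketch: sending the bare prompt "The sky is" to a chat model.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; any chat-capable model works
    messages=[{"role": "user", "content": "The sky is"}],
)

print(response.choices[0].message.content)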

Let's try to improve it a bit:

Prompt:

Complete the sentence:
The sky is

Is that better? With the prompt above you are explicitly instructing the model to complete the sentence, so the output follows exactly what you told it to do ("complete the sentence").

This approach of designing effective prompts to instruct the model to perform a desired task is what's referred to as prompt engineering in this module.

The example above is a basic illustration of what's possible with LLMs today, which can handle all kinds of advanced tasks ranging from text summarization to mathematical reasoning to code generation.


Roles in Prompts

When working with chat-style LLMs like gpt-4 or gpt-3.5-turbo, your prompt can be structured into different roles:

  • System — sets the overall behavior, tone, or rules for the model. For example: "You are a helpful travel assistant."
  • User — represents what you or the end-user is asking the model. This is the main instruction or question.
  • Assistant — the model’s reply. You can also pre-fill it with examples in few-shot setups to demonstrate desired style or structure.

Roles help guide the model’s behavior more consistently, especially for multi-turn conversations. The system role acts like the high-level brief, while the user role provides the specific task or question, and the assistant role is the output.
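As a quick illustration, here is how these roles map onto a chat API call. This is a minimal sketch using the OpenAI Python client; the travel-assistant brief and the user question are just placeholders:

# Minimal sketch: the three roles as chat messages.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; any chat model works
    messages=[
        # System: the high-level brief
        {"role": "system", "content": "You are a helpful travel assistant."},
        # User: the specific task or question
        {"role": "user", "content": "Suggest a two-day itinerary for Kyoto."},
    ],
)

# Assistant: the model's reply comes back as the assistant turn
print(response.choices[0].message.content)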

For the rest of the examples in this lesson, we'll focus only on the user prompts to keep things simple.


Prompt Formatting

You tried a simple prompt above. In general, prompts are either a direct Question or an Instruction. You can also wrap them in a QA-style format when it helps clarity.

Minimal formats:

<Question>?
<Instruction>

QA-style formatting, common in datasets:

Q: <Question>?
A:

When you prompt like this without examples, it's called zero-shot prompting. Some models handle zero-shot well, but mileage varies with task complexity and training.

Concrete example (the question here is just illustrative):

Prompt:

Q: What is the capital of France?
A:

With newer models you can often skip the "Q:" since the intent is obvious:

Prompt:

What is the capital of France?

Few-shot prompting adds brief demonstrations. Format options:

Plain format:

<Question>?
<Answer>
<Question>?
<Answer>
<Question>?
<Answer>
<Question>?

QA format:

Q: <Question>?
A: <Answer>
Q: <Question>?
A: <Answer>
Q: <Question>?
A: <Answer>
Q: <Question>?
A:
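In code, you might assemble this template from a list of exemplars. Here is a small sketch; build_qa_prompt is a hypothetical helper name, not a library function:

# Sketch: render (question, answer) exemplars into the QA few-shot
# format above, ending with an open "A:" for the model to complete.
def build_qa_prompt(examples, new_question):
    lines = []
    for question, answer in examples:
        lines.append(f"Q: {question}")
        lines.append(f"A: {answer}")
    lines.append(f"Q: {new_question}")
    lines.append("A:")
    return "\n".join(lines)

examples = [
    ("What is the capital of France?", "Paris."),
    ("What is the capital of Japan?", "Tokyo."),
]
print(build_qa_prompt(examples, "What is the capital of Italy?"))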

You don't have to use QA. Pick a format that matches the task. For example, a tiny sentiment classifier with exemplars (labels and phrasing are illustrative):

Prompt:

This is awesome! // Positive
This is awful! // Negative
Wow, that movie was great! // Positive
What a horrible show! //

Response:

Negative

Few-shot prompts enable in-context learning: the model infers the task from your examples. We will go deeper on zero-shot vs. few-shot next.
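If you are using a chat API, the same few-shot idea can be expressed as pre-filled assistant turns, as mentioned in the roles section above. A minimal sketch, with illustrative labels and phrasing:

# Sketch: the sentiment task as chat turns, pre-filling assistant
# replies as exemplars. Assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; any chat model works
    messages=[
        {"role": "system", "content": "Classify each message as Positive or Negative."},
        {"role": "user", "content": "This is awesome!"},
        {"role": "assistant", "content": "Positive"},
        {"role": "user", "content": "This is awful!"},
        {"role": "assistant", "content": "Negative"},
        {"role": "user", "content": "What a lovely day!"},
    ],
)

print(response.choices[0].message.content)  # expected label: Positive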


Next up: Prompt Builder