Prompting: The Art of Briefing Your AI Intern

“Prompting is like briefing a highly talented, slightly literal intern.”

In reality, it’s the process of providing specific instructions to a GenAI tool (like ChatGPT, Gemini, or Claude) to receive new information or achieve a specific outcome. Good prompting is more than just a command; it’s a blend of context, constraint, and conversation.

From Bad to Good: An Example

The Bad Prompt: “Give me a chicken recipe.”

The Good Prompt: “Act as a resourceful home cook. I need a dinner recipe using chicken breast, spinach, and heavy cream.
Here is your context:
I only have 30 minutes to cook.
I want to use only one pan because I hate doing dishes.
I only have the basics like salt, pepper, oil, and garlic.

Give me a simple, step-by-step recipe with the following constraints:
Use the metric system.
No fancy equipment.
Keep instructions short and concise.
Suggest one side dish that goes well with it.”


The C.R.E.A.F. Acronym

When you’re stuck, check your prompt against these five letters:

C — Context: Does the AI know why I’m asking this?
R — Role: Did I tell it who to be? (Expert, friend, critic?)
E — Exclusions: Did I tell it what not to include?
A — Audience: Does it know who is reading the output?
F — Format: Do I want a table, a list, an email, or a poem?


How it Actually Works

GenAI doesn’t know stuff the way we do. It predicts the next most likely word in a sequence, much like an auto-complete feature. This is why Context is so important. You are narrowing the probability of which words it chooses so they match your intent.

1. Tokenisation. Computers read numbers, not letters. Your prompt is broken into “tokens.” For example, “Tokenisation is fun!” becomes Token + isation + is + fun + !. These are then converted into numeric IDs like [34502, 421, 328, …].
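The idea can be sketched as a toy greedy longest-match tokeniser. The vocabulary and the numeric IDs below are made up for illustration; real tokenisers (e.g. byte-pair encoding) learn their vocabularies from data.

```python
# Toy subword tokeniser: greedy longest-match against a tiny, made-up vocabulary.
# Real tokenisers learn far larger vocabularies; the mechanics are similar.
VOCAB = {"Token": 34502, "isation": 421, " is": 328, " fun": 917, "!": 0}

def tokenise(text: str) -> list[tuple[str, int]]:
    tokens = []
    while text:
        # Find the longest vocabulary entry that prefixes the remaining text.
        match = max((t for t in VOCAB if text.startswith(t)), key=len, default=None)
        if match is None:
            raise ValueError(f"No token matches: {text!r}")
        tokens.append((match, VOCAB[match]))
        text = text[len(match):]
    return tokens

print(tokenise("Tokenisation is fun!"))
# → [('Token', 34502), ('isation', 421), (' is', 328), (' fun', 917), ('!', 0)]
```

The model never sees the letters, only the ID sequence on the right.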

2. Embeddings. The AI looks these numbers up on a massive multi-dimensional map. In this map, similar words are physically closer to each other. “Apple” is near “Pear,” but far from “House.”

Imagine a 3D map where:
Up/Down: How formal the word is.
Left/Right: How “technical” the word is.
Forward/Backward: How “happy” the word is.

3. The Transformer & Attention. The AI calculates how much “attention” to pay to each word. In the sentence “The bank was closed because the river flooded,” the attention mechanism realises “bank” refers to a river bank, not a financial one, because it saw the word “flooded” nearby.
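The attention calculation itself is a short formula: compare a query word against every context word, then turn the scores into weights. The 2-D vectors below are toy values chosen so that “flooded” looks similar to the river sense of “bank”; real models learn these vectors.

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys):
    """Scaled dot-product attention: weights of `query` over each key vector."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    return softmax(scores)

# Toy context for "bank": "flooded" is given a key vector close to the query,
# so it receives the largest share of attention.
words = ["the", "river", "flooded"]
keys = [(0.1, 0.0), (0.9, 0.8), (1.0, 0.9)]
for word, weight in zip(words, attention(query=(1.0, 0.9), keys=keys)):
    print(f"{word}: {weight:.2f}")
```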

4. Token Prediction. This is the “guessing game.” The AI writes one token at a time by looking at your prompt plus what it has already written, calculating the probability of the next token, and repeating the loop until it reaches a stopping point (an end-of-sequence token).
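The loop can be sketched with a hand-made probability table. Real models condition on the entire context rather than just the previous word, but the generate-one-token-then-repeat structure is the same.

```python
import random

# Toy next-token table: for each word, the probability of each possible
# successor. The words and probabilities are invented for illustration.
NEXT = {
    "<start>": {"The": 1.0},
    "The":     {"cat": 0.6, "dog": 0.4},
    "cat":     {"sat": 0.7, "ran": 0.3},
    "dog":     {"sat": 0.5, "ran": 0.5},
    "sat":     {"<end>": 1.0},
    "ran":     {"<end>": 1.0},
}

def generate(seed: int = 0) -> str:
    random.seed(seed)  # fixed seed so the "guessing" is reproducible
    token, out = "<start>", []
    while token != "<end>":
        choices = NEXT[token]
        # Sample the next token in proportion to its probability, then loop.
        token = random.choices(list(choices), weights=list(choices.values()))[0]
        if token != "<end>":
            out.append(token)
    return " ".join(out)

print(generate())
```

Note that nothing in the loop checks whether the sentence is *true*; it only checks what is *likely*, which is exactly the weakness the next section covers.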


Hallucinations

The fact is, GenAI does not THINK or WRITE like we do. It is a probability machine.

Because it prioritises looking correct over being factually true, it can “hallucinate.” If a fake name or a made-up date sounds mathematically plausible in a sentence, the AI will print it confidently. It isn’t lying; it’s just following the math of what word usually follows another.

Conclusion

Knowing how the gears turn makes us better users. It’s time to get our prompts in order.
