Navigating Generative AI Tools: Think of it Like Running a Kitchen

A lot of people use generative Artificial Intelligence (genAI) tools every day now. Chat tools. AI agents. Writing assistants. Workflow automations. In personal life. At work. On the phone. In between meetings.

Maybe you do too. (I know I do.)

Perhaps you notice that when something doesn’t work — when the output isn’t quite right — the first instinct is usually the same: rewrite the prompt. Quickly try again.

The prompt is visible. So that’s where the attention goes.

Which, honestly, is also a very human way of solving problems.

We usually start with the part we can immediately see and change — even when the real issue sits somewhere deeper in the system around it. I wrote recently about how this pattern shows up in more parts of life than we might care to admit.

To be honest, rewriting prompts did matter more a year ago. But more and more, these systems don’t just respond to one prompt at a time. They work with things like saved instructions, connected documents, and workflows that sit around the conversation itself.

Which means the result depends less on writing one “perfect” prompt and more on how the overall working environment is structured.

What better way to simplify this than with a kitchen metaphor? Here we go:

It’s less like writing prompts, more like running a kitchen

A useful way to think about modern genAI is not as a tool you simply “prompt,” but as a kitchen you’re operating.

The quality of the outcome depends not just on one instruction, but on how the entire kitchen is organised:

  • the structure of the task (the recipe)
  • the available inputs (the ingredients)
  • the system doing the work (the cook or model)
  • how tasks are divided and coordinated across the kitchen
  • and whether anyone actually checks the result before serving it (tasting it)

In Short

  • When using genAI, a prompt is just the recipe — not the whole kitchen.
  • Better outputs depend on ingredients, structure, timing, and coordination.
  • More context is not always better. Sometimes it’s just a chaotic pantry.
  • Modern AI workflows increasingly behave more like coordinated kitchen staff.
  • And if nobody tastes the dish before serving it, surprises are inevitable.

The recipe isn’t just instructions — it’s structure

Remember that recipes are not just descriptions. They are step-by-step instructions. Structures of dependency. They define what happens first, what depends on what, and what cannot be done all at once. That’s why recipes have a list of ingredients and numbered steps with temperatures and timings.

Prompts behave the same way. When too much responsibility gets pushed into a single request, the system has to solve structure, context, tone, dependencies, and execution all at once. Which is a bit like asking one cook to prep ingredients, manage timing, and plate dishes all simultaneously.

Unsurprisingly, that is usually where things start to degrade when using genAI.

Because we expected structure without actually providing it.

Ingredients matter more than the recipe suggests

Even a perfect recipe fails without the right ingredients.

You can have a perfectly written recipe for blue cheese pasta. But if there’s no blue cheese in the kitchen, the dish simply isn’t happening. No amount of confidence in the recipe will fix that.

With genAI, the same constraint exists. Just less visibly.

What the system can produce depends on three different kinds of “ingredients”:

  • what it was trained on (its baseline knowledge, i.e. the pantry it already has)
  • what it can access during the task (documents, retrieval systems, connected data)
  • what you explicitly provide in the moment (your actual inputs)

In many real use cases, people only control the last category. Sometimes the second.
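To make the layering concrete, here is a rough sketch of how those three ingredient categories might come together in a chat-style request. The message format and the function itself are illustrative assumptions, not any particular vendor's SDK:

```python
def build_messages(saved_instructions, retrieved_docs, user_input):
    """Combine the ingredient layers the user controls into one request."""
    # Layer 1, the model's training data, is the pantry it already
    # has; nothing here can change it.
    messages = []
    # Layer 2: what the system can access during the task.
    if saved_instructions:
        messages.append({"role": "system", "content": saved_instructions})
    for doc in retrieved_docs:
        messages.append({"role": "system", "content": f"Reference:\n{doc}"})
    # Layer 3: what you explicitly provide in the moment.
    messages.append({"role": "user", "content": user_input})
    return messages

msgs = build_messages(
    saved_instructions="Answer in plain English, two paragraphs max.",
    retrieved_docs=["Excerpt from the quarterly report."],
    user_input="Summarise the results for a non-finance audience.",
)
```

Notice that the "prompt" is only the last line; the rest is pantry stocking.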

This is also why people who get better outputs don’t just write better prompts. They are more deliberate about sourcing ingredients: providing real examples instead of writing “it should be kind of like…”, and attaching the relevant documents.

It’s less “prompt engineering” and more “designing a kitchen that can produce good dishes.”

Which is also how most reliable human systems work: not through perfect individual decisions every time, but through environments that make good outcomes easier to produce consistently.

But there’s a limit.

A tomato sauce works best when it has a few clear, intentional ingredients.

Add a bit of garlic, basil, oregano and you get something balanced. But start throwing everything in — thyme, chilli, random herbs from the shelf because they were there — and it doesn’t become richer. It becomes harder to tell what you’re even tasting.

It’s the same with genAI. More context helps… until it becomes noise. Funnily enough, even machines don’t appreciate a chaotic pantry.

This is also why many newer AI workflows rely less on sets of prompts, and more on building reusable context around the work itself. Things like folders of reference material, saved instructions, example outputs, or connected knowledge bases.

In other words: instead of re-explaining what everyone in the kitchen should do and how to do it every time, people are increasingly trying to build a working kitchen staff that stays organized between tasks. Because no one would explain how to make the salad dressing or sauce to their cook every single day.
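As a toy illustration of that standing-kitchen idea: saved instructions and example outputs can live in a folder that every task loads once, instead of being retyped into each conversation. The folder layout here is purely an assumption:

```python
from pathlib import Path

def load_kitchen(context_dir):
    """Gather saved instructions and example outputs once, reuse per task."""
    context_dir = Path(context_dir)
    return {
        # Standing instructions, written once, reused everywhere.
        "instructions": (context_dir / "instructions.txt").read_text(),
        # Example outputs: "the dressing should taste like this."
        "examples": sorted(
            p.read_text() for p in context_dir.glob("examples/*.txt")
        ),
    }
```

The point is not the file format; it is that the kitchen stays organised between tasks.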

Not all cooks behave the same way

Even with the same recipe and ingredients, the outcome depends heavily on who is actually doing the cooking.

Some people work like a fine-dining kitchen: careful, structured, breaking tasks into stages, constantly checking details and adjusting as they go.

Others operate more like a busy lunch service: fast-paced, efficient, aiming for something that works under pressure because there are five other orders waiting.

And some follow instructions very literally: they execute the steps as written, and then quietly panic when something slightly unexpected happens, because the recipe definitely did not mention this part.

Same inputs. Same kitchen. Very different outcomes.

And in more complex workflows, different “cooks” may even handle different stages of the same task.

GenAI systems reflect this more than we might think. ChatGPT. Claude. Take your pick.

Some are structured and deliberate. Some are fast and pragmatic. Some are fluent enough that they sound confident even when they’re filling in gaps. Which sounds impressive at first, but becomes dangerous surprisingly quickly.

But that means the difference is not just what you ask for. It’s how the “cook” actually handles the task once it starts.

And not every cook, both human and model, is suited for every dish.

The shift happening now reflects this perfectly: AI workflows are becoming less like single conversations and more like coordinated systems. One tool retrieves information. Another summarizes it. Another formats, checks, or rewrites the result.

Which means that, more and more, our job is designing how the work moves through the kitchen, from one staff member to the next.
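Sketched in code, such a kitchen is just stages chained in order. The stages below are toy stand-ins, not a real retrieval or summarisation tool, but the shape is the point:

```python
def retrieve(query):
    # Stand-in for a retrieval stage (search, database, knowledge base).
    return f"raw notes about {query}"

def summarise(text):
    # Stand-in for a summarisation stage.
    return f"summary: {text}"

def check(text):
    # The tasting stage: refuse to pass on an empty result.
    if not text.strip():
        raise ValueError("nothing to serve")
    return text

def run_kitchen(query, stages=(retrieve, summarise, check)):
    """Move the work through each 'cook' in turn."""
    result = query
    for stage in stages:
        result = stage(result)
    return result
```

Designing the workflow means choosing the stages and their order, not writing one heroic prompt.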

Where things usually break: nothing gets tasted

Even when everything else is right — structure, ingredients, system choice — there is still one step that often disappears.

Tasting the dish.

In a professional kitchen, a dish doesn’t go straight onto the menu. You don’t cook it once, wave your hand, and assume it’s magically turned out perfect.

You make it. You taste it. You adjust seasoning. You test it again.

Only when it is stable, consistent, and actually good does it make it onto the menu.

With genAI, this step is often skipped.

Not because people don’t know it exists, but because the output looks finished enough that questioning it feels unnecessary.

It’s fluent. Structured. Confident. So it gets used immediately.

And that’s where things go wrong way too often. Small inaccuracies. Confident but incomplete answers. All because nobody bothered to check whether it actually worked before serving it to an audience.
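A taste test doesn’t have to be elaborate. Even a few explicit checks, run before the output gets used, catch the obvious problems. These particular checks are illustrative assumptions, not a standard:

```python
def taste_test(output, required_terms=(), max_words=300):
    """Return a list of problems; an empty list means it can be served."""
    problems = []
    if not output.strip():
        problems.append("empty output")
    if len(output.split()) > max_words:
        problems.append("too long for the format")
    for term in required_terms:
        if term.lower() not in output.lower():
            problems.append(f"missing required term: {term}")
    return problems

issues = taste_test("Revenue grew 4% in Q3.", required_terms=["revenue", "Q3"])
```

If `issues` comes back non-empty, the dish goes back to the kitchen instead of onto the menu.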

What this actually means

When something doesn’t work as hoped, it’s easy to reach for the prompt. Blame the recipe. It feels like the easiest thing to change.

But the real issue is usually one of a few things:

  • the task wasn’t structured clearly enough (hello recipe steps)
  • the system didn’t have the right inputs (ingredient check)
  • the tool wasn’t suited for the job (who’s cooking?)
  • or the output wasn’t properly checked (always taste test!)

Because in a kitchen, you wouldn’t:

  • expect a full dish to appear in a single step
  • assume ingredients are optional
  • assign every task to the same cook and expect every dish to turn out equally well
  • or serve something you haven’t tasted (unless you enjoy surprises)

But with AI, that’s often exactly what happens. Mostly because the result already looks finished the moment it appears.

And that’s the real trap.

A finished-looking dish isn’t necessarily a usable one.

Because humans also tend to trust things that look finished before checking whether they actually work.

Closing

Increasingly, genAI behaves like a working environment.

A kitchen.

And good kitchens do not run on recipes alone. They run on structure, coordination, good ingredients, clear roles, and constant tasting.

And increasingly, the real skill is not writing a better prompt. It’s learning how to run the kitchen.

Which, honestly, is probably true of more systems in life than we would like to admit.


Discover more from At The Overlap

Subscribe to get the latest posts sent to your email.


I’m Anisa Heck

— and this is At The Overlap

Making complexity legible — without pretending it’s simple.

Science evolves. Policies shift. Technology accelerates. Life changes.

Instead of asking, “Why can’t this stay consistent?”

I’m more interested in asking, “What’s actually happening underneath?”

Here you’ll find reflections at the intersection of science, work, people, and lived experience — exploring how stability is maintained through movement, and why visible change isn’t the same as failure.

Thanks for stopping by — I’m glad you’re here.

