Category: AI Tools & Prompt Engineering

  • 4 Research‑Backed Prompting Techniques to Master Any LLM

    From Frustration to Fluency

    We’ve all been there. You ask a Large Language Model (LLM) a seemingly simple question and get back a generic, unhelpful, or just plain wrong answer. It’s a common frustration that can make you feel like you’re not speaking the AI’s language. But what if the problem isn’t the AI, but the way we’re asking?

    Crafting effective prompts is a skill—an iterative process of designing high‑quality inputs that guide models to produce accurate and relevant outputs. This discipline is called “prompt engineering.” The good news is that you don’t need a specialized degree to master it. I recently dove into a detailed Google whitepaper on the subject, and my goal here is to share the most surprising and impactful takeaways, making these advanced techniques accessible to everyone.

    The paper emphasizes this accessibility with a powerful, standalone statement:

    You don’t need to be a data scientist or a machine learning engineer – everyone can write a prompt.

    Technique | Goal | Stable Recipe
    Positive Instruction | Give specific directions, not a list of ‘don’ts’ | Write a one‑paragraph overview of [topic]. Include [A], [B], and [C]. Exclude everything else.
    Role Prompting | Control tone, expertise, and audience fit | Role + Audience + Tone + Task + Constraints
    Step‑Back Prompting | Activate broader knowledge before execution | Turn 1: Frame the problem. Turn 2: Execute with the chosen frame.
    Chain‑of‑Thought (CoT) | Improve math/logic/planning accuracy | Let’s think step by step. Show your work. Answer:

    1. Positive Instruction Prompting: Tell the LLM What to Do

    One of the most immediate and practical shifts you can make is to stop telling the LLM what to avoid and start telling it what you want. The whitepaper makes it clear: it is more effective to give the model an instruction (what to do) rather than a constraint (what not to do).[1]

    The reasoning is intuitive. Instructions provide clear, positive direction, giving the model a target to aim for while encouraging creativity within defined boundaries. Constraints can be confusing, limit the model’s potential, or even clash with each other, leaving the model guessing what is actually allowed.

    Consider this example for generating a blog post about video game consoles:

    • DO: Generate a 1 paragraph blog post about the top 5 video game consoles. Only discuss the console, the company who made it, the year, and total sales.
    • DO NOT: Generate a 1 paragraph blog post about the top 5 video game consoles. Do not list video game names.

    The “DO” example is specific and gives the model a clear structure to follow. The “DO NOT” example is less direct and forces the model to work around a negative rule.

    Use This Prompt

    Write a one‑paragraph overview of [topic]. Include [A], [B], and [C]. Exclude everything else. If anything is ambiguous, ask 1 clarifying question before writing.
    

    💡 Constraints can still help when they are crisp and few. Prefer 1–2 clear inclusions over long lists of “don’ts,” as also recommended in Google’s guide.
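
    The template above is easy to generate programmatically. Here is a minimal Python sketch; the function name and parameters are illustrative assumptions, not from the whitepaper:

```python
def positive_instruction_prompt(topic: str, include: list[str]) -> str:
    """Build a 'what to do' prompt: explicit inclusions instead of a
    list of don'ts. Parameter names and wording are illustrative."""
    if len(include) > 1:
        items = ", ".join(include[:-1]) + ", and " + include[-1]
    else:
        items = include[0]
    return (
        f"Write a one-paragraph overview of {topic}. "
        f"Include {items}. Exclude everything else. "
        "If anything is ambiguous, ask 1 clarifying question before writing."
    )

print(positive_instruction_prompt(
    "the top 5 video game consoles",
    ["the console", "the company who made it", "the year", "total sales"],
))
```

    Reusing one tested template like this keeps every prompt in the "instruction" style instead of drifting back into ad‑hoc constraints.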

    2. Role Prompting: Setting the Stage for the Perfect Answer

    This technique, called “role prompting,” involves assigning a specific character or identity to the AI model before you give it a task. By telling the model to act as a travel guide, a book editor, or a kindergarten teacher, you provide it with a blueprint for the desired tone, style, and expertise. This simple act dramatically improves the quality and relevance of its output.

    For example, asking the model to act as a travel guide for Amsterdam yields a competent but straightforward list of museums.

    But watch what happens when we build on that role by specifying a tone. By adding a simple stylistic instruction—in a humorous style—to the same travel guide role for a trip to Manhattan, the entire output is transformed. Instead of a dry list, you get suggestions like these:

    • Behold the Empire State of Mind: Ascend to the dizzying heights of the Empire State Building and bask in the glory of Manhattan’s skyline. Prepare to feel like King Kong atop the Big Apple, minus the giant ape‑sized banana.
    • Get Artsy‑Fartsy at MoMA: Unleash your inner art aficionado at the Museum of Modern Art (MoMA). Gaze upon masterpieces that will boggle your mind and make you question whether your stick‑figure drawings have any artistic merit.

    Assigning a role gives the model context for how to answer, not just what to answer.

    Use This Prompt

    Act as a [role]. Audience: [who]. Tone: [style].
    Task: [what to produce]. Inputs: [bullets, links, or pasted text].
    Constraints: [length, format, must‑include items].
    

    🧭 Stable recipe: role + audience + tone + task + constraints. This consistently improves relevance and control.
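
    The recipe lends itself to a simple template function. Only the role + audience + tone + task + constraints structure comes from the guide; the helper itself is my sketch:

```python
def role_prompt(role: str, audience: str, tone: str,
                task: str, constraints: str) -> str:
    """Assemble the role + audience + tone + task + constraints recipe
    into a single prompt string."""
    return "\n".join([
        f"Act as a {role}. Audience: {audience}. Tone: {tone}.",
        f"Task: {task}.",
        f"Constraints: {constraints}.",
    ])

print(role_prompt(
    role="travel guide",
    audience="first-time visitors to Manhattan",
    tone="humorous",
    task="suggest 3 must-see attractions, one line each",
    constraints="under 120 words, no restaurants",
))
```

    Swapping only the `tone` argument is exactly the Amsterdam‑to‑Manhattan experiment above: same role, different voice, very different output.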

    3. Step‑Back Prompting: Activating Higher‑Level Reasoning

    Sometimes, to get a highly specific and creative answer, you have to do something counter‑intuitive: ask a more general question first. This technique is called “Step‑back prompting.” It works by prompting the LLM to consider a broader, higher‑level concept related to your task before you ask it to execute the specific details. This activates the model’s relevant background knowledge and establishes a stronger foundation for reasoning.

    For example, if you ask the model to create a storyline for a first‑person shooter video game, you might get a generic paragraph. It’s functional, but not particularly inspiring.

    The “step‑back” approach changes the game. First, you ask the model a broader question: Based on popular first‑person shooter action games, what are 5 fictional key settings that contribute to a challenging and engaging level storyline?

    The model responds with themes: Abandoned Military Base, Cyberpunk City, Alien Spaceship, Zombie‑Infested Town, and Underwater Research Facility.

    Now, you use one of those settings as context for your original prompt. By taking the underwater theme and asking for a storyline, you get a far more creative and detailed result, like this:

    In the heart of a murky abyss, lies a dilapidated underwater research facility… The player, an elite marine equipped with advanced diving gear and experimental weaponry, is tasked with venturing into this aquatic realm of terror… they must navigate treacherous corridors, solve cryptic puzzles, and confront gruesome sea monstrosities that lurk in the depths.

    By taking a step back, you force the model to build a strong conceptual framework before getting lost in the details, leading to a richer final output.

    Use This Prompt

    Before answering, list 5 higher‑level frames for [problem]. For each, note pros/cons.
    Then pick the best frame and produce [deliverable] with [constraints].
    

    🪜 Two‑turn workflow works best: Turn 1 frames the problem. Turn 2 executes using the selected frame.
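
    The two‑turn workflow can be wrapped in a small driver. In this sketch, `ask_llm` is a stand‑in for any callable that sends a prompt string to a model and returns its reply; no real API or client library is assumed:

```python
def step_back(ask_llm, problem: str, deliverable: str, constraints: str) -> str:
    """Two-turn step-back workflow around any prompt->reply callable."""
    # Turn 1: frame the problem at a higher level.
    frames = ask_llm(
        f"Before answering, list 5 higher-level frames for {problem}. "
        "For each, note pros/cons. Put the single best frame on the last line."
    )
    chosen = frames.strip().splitlines()[-1]
    # Turn 2: execute using the chosen frame as context.
    return ask_llm(
        f"Using the frame '{chosen}', produce {deliverable} with {constraints}."
    )
```

    The key design choice is that the model's own Turn‑1 output becomes context for Turn 2, rather than asking for everything in one shot.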

    4. The Magic Words That Fix AI Math: Unpacking “Chain of Thought”

    It’s no secret that LLMs often struggle with math and logic problems. They are prediction engines trained on text, not calculators. But a remarkably simple technique called Chain of Thought (CoT) prompting can dramatically improve their reasoning ability. It works by instructing the model to generate intermediate steps instead of jumping straight to an answer.

    Consider this prompt: When I was 3 years old, my partner was 3 times my age. Now, I am 20 years old. How old is my partner?

    Without guidance, the model might incorrectly calculate the answer as 63 years old.

    But by adding a simple, almost magical phrase—”Let’s think step by step.”—you fundamentally change the process. The model is now forced to outline its reasoning before providing a conclusion.

    Its new output looks like this:

    1. When I was 3 years old, my age was 3 years.
    2. My partner’s age at that time was 3 times my age, which means my partner’s age was 3 * 3 = 9 years.
    3. Now, I am 20 years old, which means my age has increased by 20 – 3 = 17 years.
    4. Since my partner’s age was 9 years when I was 3, and my age has increased by 17 years, my partner’s age must have also increased by 17 years.
    5. Therefore, my partner’s current age is 9 + 17 = 26 years.

    Answer: 26 years old.

    By showing its work, the model follows a logical path and arrives at the correct answer. This technique also makes the AI’s “thinking” interpretable, so if it does make a mistake, you can see exactly where the reasoning went wrong.
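
    The model's five steps reduce to arithmetic we can verify directly:

```python
# Verify the chain-of-thought arithmetic from the age puzzle.
my_age_then, my_age_now = 3, 20
partner_then = 3 * my_age_then            # 3 * 3 = 9
years_elapsed = my_age_now - my_age_then  # 20 - 3 = 17
partner_now = partner_then + years_elapsed
print(partner_now)  # 26
```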

    Use This Prompt

    Let’s think step by step. Show your work and calculations. Give the final answer on the last line prefixed with “Answer:”.
    

    ⏱️ Lightweight variants: “think step by step,” “briefly outline your reasoning,” or “show your work.” Skip CoT for trivial tasks or when latency or token cost matters. Use it for math, planning, multi‑constraint writing, or logic puzzles.
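
    Anchoring the final line with an “Answer:” prefix also makes CoT replies easy to parse. A small helper sketch (the function is mine, not from the guide):

```python
def extract_answer(reply: str) -> str:
    """Return the text after the last 'Answer:' line in a CoT reply."""
    for line in reversed(reply.strip().splitlines()):
        if line.strip().startswith("Answer:"):
            return line.strip()[len("Answer:"):].strip()
    raise ValueError("no 'Answer:' line found")

reply = "1. ...\n2. ...\nAnswer: 26 years old."
print(extract_answer(reply))  # 26 years old.
```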

    The Future is a Conversation

    These four takeaways—using positive instructions, assigning roles, taking a step back, and showing the steps—all point to a single, powerful idea: effective prompting is about guiding a model’s reasoning process, not just giving it a command. It’s less about programming and more about structuring a clear, thoughtful dialogue.

    Prompt engineering isn’t a purely technical discipline reserved for data scientists. It’s the emerging skill of having a structured and intentional conversation.

    As these models become more integrated into our lives, what if the most crucial skill isn’t coding, but conversation?