Prompting primer
Much of working with AI tooling involves "prompting," and writing great prompts is the heart of "prompt engineering." Essentially, prompting means writing instructions into a text box that the AI will act on.
For the entire history of computing, programming has relied on getting consistent outputs from consistent inputs. This does not hold when working with AI tools, so knowing how to write good inputs that yield predictably good outputs becomes critically important.
Many factors shape the results you get: the model you are working with (for example, Claude Sonnet by Anthropic or ChatGPT by OpenAI), the context it has, and above all the quality of your prompt.
When an LLM responds, it generates the most likely next word, word by word.
It will compare the text that you have written to everything that it already knows to create the most likely response to your prompt.
So writing a prompt is less about getting an answer and more about shaping the most likely response.
If you ask a question in your prompt, the model isn't following a separate “Q&A” code path, but rather it appears to answer the question because an answer is the most likely sort of response for the given question as input.
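The word-by-word selection described above can be sketched with a toy distribution. Everything here is invented for illustration: a real LLM scores tens of thousands of tokens with a neural network, while this "model" is just a hand-written table of made-up probabilities.

```python
# Toy sketch of next-token selection. The probabilities below are
# fabricated for illustration only -- the point is the mechanism:
# the "answer" is simply the most likely continuation of the input.
def next_word(context: str, model: dict[str, dict[str, float]]) -> str:
    """Return the most likely next word for the given context."""
    distribution = model.get(context, {})
    return max(distribution, key=distribution.get)

# After a question, an answer wins because it is the most probable
# continuation -- there is no separate Q&A code path.
toy_model = {
    "What is 2 + 2?": {"4": 0.92, "The": 0.05, "five": 0.03},
}

print(next_word("What is 2 + 2?", toy_model))  # "4"
```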
Prompts are the default way most people interact with LLMs. Try any of the examples below in ChatGPT, Claude, or Grok.
Through a lot of hype and the need for attention, AI tools have been largely oversold in terms of the scale of what they can do with short prompts and large codebases.
If you have only seen a few tweets and demos, you may expect to be able to write a short description of what you need, watch the computer magically do the rest, and put your feet up.
This is not the case. Bad prompting will lead to frustration. Great prompting leads to success.
On the first page of Anthropic's Prompt Engineering documentation, they recommend having a clear definition of success before you prompt.
Without knowing what "good" or "finished" looks like, how can you begin to evaluate the work the AI produces in response to your prompt?
A strong recommendation for new and existing projects is to create a "product requirements document" that lives at the root of your code base to provide as much context as possible about its aims and functionality.
So before you next prompt, think: "I need X that achieves Y, measured by Z."
Please review this marketing copy. It is intentionally short, deliberately to the point, and conversational.
The goal of this copy is to dramatically increase conversions. The target audience is busy professionals. If this landing page works I'm more likely to get a promotion.

The first part of this prompt is the "what," the second part explains the "why" and the benefits of being successful.
While prompts describe what you want, the best way to keep an LLM on track is to show it what you want.
A simple prompt without examples might ask a question or define a task to be done. But the response will be greatly enhanced by an example of what you are going to supply and how you would like the result returned.
I'm writing a course on how to build web applications with AI tooling. It is primarily directed at people with application development experience, but some novices want to take the course as well. I want to build out a glossary of terms page for topics that they may be expected to know. Here are some examples. Can you think of any other useful terminology that a web development novice or beginner may not know? And write out more explanations of the same style as the examples that I'm providing.
INPUT:
<examples><item>## GitHub
The world's most popular service for version controlling files. Developers make changes to files locally and "commit" their files to a GitHub repository in order to keep projects organized.</item></examples>
OUTPUT:
## {{ title }}
{{ description }}

Without the example "input" and "output," the model would likely have given me far too much information, formatted in some random way. With the examples I got exactly what I wanted, in the short manner in which I described.
When working with code, it is useful to provide snippets of what works or patterns you already implemented that should be copied.
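A few-shot prompt like the glossary one above can be assembled programmatically. This sketch reuses the `<examples>`/`<item>` tag format from the prompt above, but the helper name and the title/description split are illustrative assumptions, not a library API:

```python
# Hypothetical helper for building a few-shot prompt: each example is
# wrapped in tags so the model can copy the shape of the output.
def build_few_shot_prompt(task: str, examples: list[tuple[str, str]]) -> str:
    """Combine a task with (title, description) examples to imitate."""
    items = "".join(
        f"<item>{title}: {description}</item>" for title, description in examples
    )
    return f"{task}\n\n<examples>{items}</examples>"

prompt = build_few_shot_prompt(
    "Write more glossary entries in the same style as these examples.",
    [("GitHub", "The world's most popular service for version controlling files.")],
)
print(prompt)
```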
The default setting with most LLMs is subservient agreement. You can ask for some pretty ridiculous things, and the model will cheerfully enable you.
For this reason, it can be useful—particularly when you are less familiar with the code—to invite criticism and enable a failure mode if your ask is poorly defined or will lead to bad results.
Change the button color to lime green. The intention is to make it more eye-catching so that more people click on the button.
If this will cause accessibility problems, or goes against our brand style guide, don't make this change and instead suggest alternatives toward the same goal.

Perhaps the most common reason prompts fall short of a user's expectations is a lack of context. LLMs simply do not know everything that you know.
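One lightweight way to make this a habit is to append a standing "escape hatch" clause to every task prompt. The clause wording below is only an example; adapt it to your own project's constraints:

```python
# Append a failure-mode clause so the model is allowed to decline a
# bad request instead of cheerfully complying. Example wording only.
ESCAPE_HATCH = (
    "If this request causes accessibility problems, conflicts with our "
    "style guide, or will lead to bad results, do not make the change; "
    "explain why and suggest alternatives toward the same goal."
)

def with_escape_hatch(task: str) -> str:
    """Combine a task prompt with the standing escape-hatch clause."""
    return f"{task}\n\n{ESCAPE_HATCH}"

print(with_escape_hatch("Change the button color to lime green."))
```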
Bad prompting highlights the "curse of knowledge," where a person finds it difficult to communicate a requirement to someone who does not share their context.
Whatever context you leave out of your prompt will be filled in by the LLM's own knowledge, which makes getting the response you want a roll of the dice.
It can be time-consuming, but even a small amount of context in the prompt will greatly improve the responses you receive.
This is also why it is a major benefit to put together files that contain your context that can be fed into every prompt—such as rules or product requirements documents.
Many LLMs will accept URLs, which can be fetched and read for additional context.
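Feeding a standing context file into every prompt can be sketched as below. The file name `requirements.md` is an assumption — use whatever product requirements document lives at the root of your code base:

```python
# Sketch of prepending a project's requirements document to each
# prompt. "requirements.md" is an assumed file name for illustration.
from pathlib import Path

def prompt_with_context(task: str, context_file: str = "requirements.md") -> str:
    """Prepend the contents of a standing context file to a prompt."""
    path = Path(context_file)
    context = path.read_text() if path.exists() else ""
    return f"<context>{context}</context>\n\n{task}" if context else task
```

If the file is missing, the task is sent as-is rather than failing, so the same helper works in projects that have not written a requirements document yet.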
"System prompting" refers to beginning your prompt by asking the LLM to inhabit a character or role for its response.
You may have seen silly demos of this where LLMs respond by talking like a pirate or a fictional character.
The same method can be applied to tell the model that it is an expert in a particular topic or has a particular role within your organization.
You are an expert in mobile application onboarding with high activation rates.
Please review our onboarding flow as described in the product requirements document and identify any friction that may lead users to bounce or churn.

Without the first part of this prompt, the response may have been generic, addressing any website or application. Since we've narrowed the role down to "mobile," the response should be more useful.
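In code, the role is usually supplied separately from the task. The message shape below follows the common chat-message convention of "system" and "user" roles; the exact client call varies by provider, so none is shown here:

```python
# Build a chat-style message list with a separate system role.
# Role names follow the common chat-message convention; check your
# provider's API for how the system message is actually passed.
def build_messages(system_role: str, user_prompt: str) -> list[dict[str, str]]:
    """Pair a system-role persona with the user's task."""
    return [
        {"role": "system", "content": system_role},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "You are an expert in mobile application onboarding with high activation rates.",
    "Review our onboarding flow and identify friction that may lead users to churn.",
)
```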
- Before prompting, create a definition of success to evaluate your responses
- Provide examples of what you want the response to contain
- Enable criticism and allow failure to reduce blind prompt following
- Provide context to reduce guess-work by LLMs
- Make the LLM a domain expert with a system role to narrow the response