Prompts instruct Aimée on how to write ad creative for each of your products.
After the onboarding process you will have one or more prompts already set up, which you can modify as needed, for example to change the tonality or which product aspects Aimée writes about.
You can create new prompts from the project or the catalog. When doing so you'll have the option to select a prompt template to start from; there are different templates available, geared towards different use cases. For example, copywriting of descriptions, headlines, reasons to buy, or even text-to-voice scripts that pitch your product in an animated video ad.
Executing prompts
Prompts are automatically executed whenever Aimée processes a product. This happens when new products appear in the feed or when they're changed in some way, for example:

- The first time you set up a project, Aimée will process all the products.
- When you add new products to your catalog, they will be imported, processed, and appear in the outgoing channel as soon as possible.
- When products are modified in some way, for example by changing the product description or adding images, Aimée will reprocess them and update all affected channels.
You can disable a prompt by toggling off the "Activate prompt" checkbox, which will prevent that prompt from executing the next time Aimée processes a product.
Reprocessing products
When you create new prompts or change existing ones, you may need to ask Aimée to reprocess all your products; otherwise the new prompt will only be applied to products as they are added or modified. This option is available from the "..." menu on the project page.
Instruction section
This section tells Aimée what task she should perform. This is the so-called "system prompt", which provides general context and instructions.
You can instruct Aimée using natural language, much as you would instruct a freelance copywriter who has just started working for you. The prompts are written in English by default, but you can also use your own language. In either case, the instructions contain a statement to write the creative in the native language of your products.
It can also be useful to give some examples of what the desired output should look like, which helps Aimée mimic the tonality and formatting of your existing creatives.
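For illustration, a minimal instruction for a headline prompt might look something like this (a hypothetical sketch; the store, tone and length requirements are made up, so adapt them to your own brand):

```
You are a copywriter for an online store selling outdoor equipment.
Write three short, punchy headlines for the product described in the input.
Keep each headline under 40 characters and avoid exclamation marks.
Write the headlines in the product's native language.
```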
Input section
This section contains all the relevant product information that Aimée should use when writing creative, for example the product name and description.
Product information is supplied to Aimée using placeholder expressions, so your prompt shouldn't contain any product-specific text; all of that is filled in later when the prompt is executed. For example, you'd instead refer to the (Title) and (Description) placeholders in the input, and those expressions are then replaced with the actual product information before the prompt is executed.
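As a sketch, an input section using those placeholders could look like this (which placeholders are available depends on the fields in your product feed):

```
Product name: (Title)
Product description: (Description)
```

If the prompt is executed for a product named, say, "Trail Runner 2", the (Title) expression is replaced with that name before the text is sent to the model.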
Previewing the prompt output
When you're working with prompts, you're encouraged to use the preview function to test the prompt on one or more products. In the upper right you'll find a search box that lets you find a product and execute the prompt on it, so you can see what Aimée will write for you.
Use as section
This section tells Aimée what metadata to annotate the written creative with, for example what type of creative it should be and what catalogs, product sets and tags it should be used for.
This lets you restrict where the resulting creative is used. For example, you may only want this prompt to apply to a specific product set. Or you may want the resulting creative tagged in a specific way, so that you can use this particular creative in a particular channel field or template placeholder.
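As a hypothetical illustration (the values are made up), the metadata for a headline prompt could look along these lines:

```
Type: Headline
Product sets: Summer Sale
Tags: short-headline
```

With annotations like these, the creative would only be generated for products in the "Summer Sale" set, and a channel field or template placeholder could pick it up via the "short-headline" tag.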
Optional fact-checking
Aimée may sometimes write things that are plausible but not backed up by the product information, and therefore perhaps not strictly true. You can filter out such creative by turning on the Fact-checking feature. This will run each output through another special prompt, which verifies that all the facts mentioned in the creative are indeed backed up by the given product information. You'll find this setting under the Advanced settings.
Advanced section
This section lets you change advanced model settings, for example which particular model version to use. By default the GPT-4o Mini model is used (see the table of available models below), which provides a good balance between computing power consumption and writing skill.
Prompts consume a varying amount of computing power, depending on the length of the instructions, input and output, and on which model you select. When you preview a prompt you'll also see an estimate of the expected credit consumption (credits are an in-platform currency and not equal to your dollar cost).
Available models
You can choose between different models, and different kinds of models:

- LLMs (Large Language Models) take instructions and input text in a chat format.
- Image captioning models take a single question to answer about an input image.
| Model | Description |
| --- | --- |
| Claude 3 - Large (Opus) | A very capable LLM from Anthropic, at least as good as GPT-4 and possibly better. |
| Claude 3.5 - Medium (Sonnet) | A capable LLM from Anthropic, possibly as good as GPT-4o and significantly cheaper. |
| Claude 3 - Small (Haiku) | A capable LLM from Anthropic, possibly as good as GPT-3.5 Turbo and even cheaper. |
| GPT-3.5 Turbo | A capable and very low-cost LLM from OpenAI. This is the original model used for ChatGPT. |
| GPT-4 | A very capable LLM from OpenAI at a premium price. |
| GPT-4 Turbo | A very capable LLM from OpenAI which you would normally choose over the original GPT-4, since it's just as capable at a lower price. |
| GPT-4o | A very capable LLM from OpenAI at a premium price. |
| GPT-4o Mini | A capable and very low-cost LLM from OpenAI. This is the default choice for most prompts because it provides a good balance of capability and low cost. |
| Image caption V1 | Writes a simple description of images. |
| Image caption V2 | Can write longer descriptions of images and answer questions about them. |
| Image caption V3 | Can write longer descriptions of images and answer questions about them. |
Cost of running prompts
The various features in our platform consume computing power, which is our cost for delivering the service to you. Some features, like prompting, require a lot of computing power, and are thus more expensive for us to run.
Our Pricing Tiers have been set generously, based on reasonable usage of all features for each Tier's number of products. However, repeated and extended overuse of our platform causes significant overspending on our side, which may, in the worst cases, block further product processing for your account. If that happens, you will be notified.
Read more about how to avoid overuse in our pricing FAQ, under the question "How can I reduce my cost?"
Troubleshooting prompts
Sometimes you may have to tweak your prompts in order to get good results.
Outputting list of texts
The default prompts instruct the LLMs to use a special format, called JSON, for the text fragments. For example:

```json
["The first headline", "Another headline", "A third headline"]
```

But some models (especially Claude, Llama and Mistral) may respond with some extra explanatory preamble. For example:

```
Here are the headlines you asked for:
["The first headline", "Another headline", "A third headline"]
```
In this case you'll need to force the model to write just JSON, by adding another prompt step and toggling it to "Example response". This acts as a primer for the model to start writing JSON directly, without any preamble. Please note that the GPT family of models does not support this kind of primer.

1. Click the "..." menu on the Input step for the prompt and select "Add prompt step".
2. Click the "Input" icon/text to toggle the new step to "Example response" instead.
3. Add a JSON prefix to the "Example response" prompt step. This will prime the model with this text and force it to continue writing from there:

```json
["
```
Afterwards, your prompt will contain the new "Example response" step directly after the existing Input step.