How to create a Botly Prompt as an ice breaker using Vision AI?

Create your own AI prompt templates to help you chat on dating apps such as Tinder and Bumble.

Botly Prompts is one of the most requested features of the Chrome extension. Online dating is competitive, and we all get writer’s block. Sometimes one smooth line is all that stands between a dead chat and the start of something fun.

Out of the box, models can produce dull or very AI-sounding replies. You can adjust that by creating your own flavor of prompts for each situation.

You have complete autonomy over your Botly prompts. You can create as many as you’d like, customize the prompt, when it is shown, how it should respond, your voice and tone, and even add facts about yourself that the model should know.

Let’s walk through it step by step. The best way to get started is with a real-world example.

First, we will create a super simple prompt. Then we will leverage a reasoning/thinking approach to improve the quality of suggestions. Read until the end. :)

Ice breakers with a Botly Prompt

One of the most challenging parts is crafting the first message. The “hey”s and “how are you”s typically don’t work wonders.

Let’s open Tinder, click on a new match, and click the cog icon next to the text box.

Then, in the Prompts tab, you can either start fresh by creating a New (A) one, or Duplicate (B) an existing prompt into a new one.

We now see the full dialog for creating our prompt. It’s actually super simple yet powerful. Let’s Name (A) it concisely using an emoji, and add a label (B) so a week later we don’t forget what it is about.

The best prompts are written from scratch, but here we start from the initial template (C) and then express our instructions.

Make sure to include the context of the match by using variables (D). These are words wrapped in curly braces. For example, you can tell the AI that you matched with Emma by using {matchName}, and that you’re on Tinder by using {platform}.
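Conceptually, the substitution works like simple find-and-replace. Here is a minimal sketch (not Botly’s actual implementation; the template text and values are made up for illustration):

```python
def fill_variables(template: str, context: dict) -> str:
    # Replace each {variable} placeholder with its value from the context.
    for name, value in context.items():
        template = template.replace("{" + name + "}", value)
    return template

prompt = fill_variables(
    "Write an opener for {matchName} on {platform}.",
    {"matchName": "Emma", "platform": "Tinder"},
)
print(prompt)  # Write an opener for Emma on Tinder.
```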

Because this is an opener, we want to give the model access to images. Let’s click on the Options section - it will expand, revealing a bunch of customization options.

We select Include match photos, Allow running on empty chats, and Only show on empty chats. We unselect the other “Only show on…” options as they are irrelevant here. We also select Stop on new line. This instructs the AI provider to stop generation as soon as the model emits a double new line (think of pressing Enter twice in a row). Otherwise these models can be very chatty.
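The effect of a double-newline stop is easy to picture: everything after the first blank line is discarded. A minimal sketch of that behavior (an illustration of the idea, not Botly’s real code - providers usually enforce the stop sequence server-side during generation):

```python
def stop_on_double_newline(text: str) -> str:
    # Keep only the content before the first blank line,
    # mimicking a "\n\n" stop sequence applied client-side.
    return text.split("\n\n", 1)[0]

reply = "Nice hiking photo! Which trail is that?\n\nAlso, here are ten more ideas..."
print(stop_on_double_newline(reply))  # Nice hiking photo! Which trail is that?
```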

It is wise to know that…

including match photos can substantially increase the token input to the AI model. That means it becomes slower and more expensive. At times, LLMs (Large Language Models) can also get confused by the extra input.

Only submit images for prompts you don’t use too often, such as opener lines, or when you’ve been ghosted.

Certain LLMs have image limits: Mistral’s Pixtral only supports up to 8 images (note a Tinder profile can have up to 9, so we drop the last one), while Facebook’s Llama 3 only supports one image. We don’t recommend using the latter at all.
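Trimming the photo list to a model’s cap can be sketched like this (the cap table is a hypothetical illustration based on the limits mentioned above, not Botly’s real configuration):

```python
# Hypothetical per-model image caps, for illustration only.
IMAGE_CAPS = {"pixtral": 8, "llama-3": 1}

def trim_photos(photos: list, model: str) -> list:
    # Drop trailing photos beyond what the model accepts; e.g. a
    # 9-photo Tinder profile loses its last photo on Pixtral.
    return photos[: IMAGE_CAPS.get(model, len(photos))]

photos = [f"photo_{i}.jpg" for i in range(1, 10)]  # 9 Tinder photos
print(len(trim_photos(photos, "pixtral")))  # 8
print(len(trim_photos(photos, "llama-3")))  # 1
```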

If you are using your own API key, you can use any model you would like.

Now we are ready to test. In the right section we can try various models with this prompt using our own API key. Not a bad suggestion for a first try, huh? ;)

You can also click Minimize and switch between matches to see what lines it generates. Your prompt, meanwhile, is saved to the browser’s local storage.

Let AI think before answering

You have probably heard about DeepSeek’s R1 and OpenAI’s o1/o3 models. They are designed to think out loud before spitting out a response.

Unfortunately, the full thinking output is not yet available to custom apps via the API (i.e. outside ChatGPT), but we can approximate the technique with ordinary models.

Let’s update the system prompt by adding instructions to think (the complete prompt is available at the end):

Before generating a line, let's think step by step. Separate your thinking process between <think> and </think> tags. After you're done analyzing {matchName}'s photos and profile, close the tag and write your suggestion. Don't wrap it in quotes, don't write anything else.

Then we have to disable Stop on new line (1) and increase the token limit (2) so the response doesn’t get cut off - 600 tokens is plenty.

Now we generate another suggestion and, boom, we get smarter lines.

Remember, each model and provider behaves differently. Some ignore the instructions to <think/>, while others hide their thought process entirely (such as OpenAI).
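When the model does follow the instructions, separating the visible suggestion from the thinking is a simple split on the closing tag. A minimal sketch (assuming the model obeyed the prompt; the example response is invented):

```python
import re

def split_thinking(response: str):
    # Separate the <think>...</think> block from the final suggestion.
    # If the model ignored the instructions, treat the whole
    # response as the suggestion.
    match = re.search(r"<think>(.*?)</think>", response, re.DOTALL)
    if not match:
        return None, response.strip()
    thoughts = match.group(1).strip()
    suggestion = response[match.end():].strip()
    return thoughts, suggestion

thoughts, line = split_thinking(
    "<think>Her bio mentions climbing.</think>\nSo, bouldering or ropes?"
)
print(line)  # So, bouldering or ropes?
```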

If you’re using a model that reasons by design, such as OpenAI’s o1 or Gemini Think, you probably don’t want to instruct it to think manually.

Rinse and repeat until you find a result you are ultimately happy with. But for now, we are done. As you can see, you have a variety of options: prompts for new chats, for ongoing texting ideas, for dead chats. Your only limit is your imagination.

Thoughts in Botly Row

When a response is generated using explicit thinking, a small brains button will appear. Clicking it reveals the entire response so you can better understand how the model works inside.

Complete prompt used in this example
