
Prompting and Carrying Context with a Persona

How to prompt your Personal AI

You can interact with Personal AI in a similar way to an LLM: you ask questions and get an AI-generated answer. But since the purpose of Personal AI is to respond based on your data, there are special techniques you can learn to leverage your memories and get higher-quality responses.

How Personal AI processes your prompt

In a traditional LLM, once you send your prompt to the model, it will:

  • Turn your prompt into numbers as it interprets its meaning.
  • Calculate the most probable response to it based on previous training data, reinforcement learning and other machine learning controls.
  • Convert the numbers back into text as it generates the response, calculating the probability of each word in the sequence until it reaches the end-of-sequence token, which indicates that the answer is complete.
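The sample-until-end-of-sequence loop in the last step can be illustrated with a toy sketch. A real LLM computes next-token probabilities with a neural network over a vocabulary of thousands of tokens; the hard-coded probability table below is purely illustrative:

```python
import random

# Toy next-token probability table. A real LLM computes these
# probabilities with a trained neural network; this table only
# exists to demonstrate the generation loop.
NEXT_TOKEN_PROBS = {
    "<start>": {"Hello": 0.9, "Hi": 0.1},
    "Hello": {"there": 0.7, "world": 0.3},
    "Hi": {"there": 1.0},
    "there": {"<eos>": 1.0},
    "world": {"<eos>": 1.0},
}

def generate(seed=0):
    """Sample one token at a time until the end-of-sequence token."""
    rng = random.Random(seed)
    token, output = "<start>", []
    while True:
        probs = NEXT_TOKEN_PROBS[token]
        token = rng.choices(list(probs), weights=list(probs.values()))[0]
        if token == "<eos>":
            return " ".join(output)
        output.append(token)

print(generate())
```

Each iteration picks the next word according to its probability given the previous one, and generation stops only when the end-of-sequence token is drawn.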

Personal AI is similar, with a few critical differences:

  • Personal AI also turns your prompt into numbers as it interprets its meaning. Here, however, the work is done by MODEL-2, not a general-purpose LLM like GPT-4o. This is why your data isn’t processed by OpenAI or similar companies: it always runs inside your model in Personal AI, not on external servers.
  • MODEL-2’s memory layer matches 8 to 10 memory blocks related to the prompt. These provide context for the response, grounding it in your facts.
  • MODEL-2 generates an answer using the memory blocks, padding the answer with text generated with LLMs to fill the gaps—such as adding transitions or questions to make the conversation more flexible, natural and interactive.
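The memory-matching step can be pictured as a relevance ranking over your Memory Stack. MODEL-2's actual retrieval method is not public, so the word-overlap scoring below is only a stand-in to show the shape of the idea (rank all blocks, keep the top few):

```python
def score(prompt, memory):
    """Crude relevance proxy: fraction of prompt words found in the
    memory block. A real retriever would use learned embeddings."""
    prompt_words = set(prompt.lower().split())
    memory_words = set(memory.lower().split())
    return len(prompt_words & memory_words) / len(prompt_words)

def retrieve(prompt, memory_stack, k=10):
    """Return the k memory blocks most relevant to the prompt."""
    ranked = sorted(memory_stack, key=lambda m: score(prompt, m), reverse=True)
    return ranked[:k]

# Hypothetical memory blocks for illustration.
memories = [
    "My favorite hiking trail is in the Cascades.",
    "I launched my bakery in 2021.",
    "The best seller at my bakery is sourdough.",
]
print(retrieve("Tell me about my bakery", memories, k=2))
```

The retrieved blocks then anchor the response, with LLM-generated text filling the gaps between them.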

How Personal AI deals with context

Context—also known as conversational memory or context window—is how AI chat applications remember what the user says during a conversation. Personal AI offers a context window as well: each message you trade with your AI influences the interaction as it unfolds.

However, unlike in ChatGPT, Personal AI also has the memories in your memory stack to use during the conversation—and the weight of these memories in the response is represented by the Personal Score.

Personal score example in the app

To preserve flexibility and to help you steer the conversation, the app balances the weight from saved memories and information in the context window. Here’s what this means when chatting about a topic with plenty of saved memories:

  • In shorter conversations, the Personal Score will be consistently higher for every response, as each is heavily based on the content from your Memory Stack.
  • In longer conversations, there’s more information in the context window. Personal AI balances context and memory, which reduces the Personal Score but increases the relevance of the responses based on your current chat.
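The trend in these two bullets can be modeled with a toy function. The real Personal Score formula is not public; the decay rate and cap below are invented purely to illustrate how the score shifts as the context window fills up:

```python
def personal_score(memory_weight, context_messages, decay=0.05):
    """Illustrative only: as the context window grows, the share of the
    response grounded in saved memories shrinks. The constants here are
    assumptions, not Personal AI's actual formula."""
    context_share = min(context_messages * decay, 0.8)
    return round(memory_weight * (1 - context_share), 2)

# Early in a conversation, responses lean heavily on the Memory Stack...
print(personal_score(0.9, context_messages=2))   # higher score
# ...while a long chat shifts weight toward the context window.
print(personal_score(0.9, context_messages=20))  # lower score
```

Either way, the balance is automatic: you don't tune these weights yourself, you just see the resulting Personal Score on each response.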

Context while messaging your AI

Your one-on-one conversations with your AI or any of your AI Personas always carry context. If the AI feels too attached to a particular topic, you can click the New Convo button to clear the context and start fresh.

New conversation button in the Personal AI app

Context in Channels

Context is also available in Channels, but it works in a different way. Due to the multi-user nature of Channels and the unpredictable ways conversations can unfold there, Personal AI doesn’t maintain a unified context window inside a Channel: each message is treated as a standalone exchange, without drawing on prior context.

However, when anyone uses the Reply action on a single message, the AI starts saving context for that exchange, enabling this functionality for that conversation thread only.

Hover over a message and click the Reply icon.

Prompts with examples

Shot-based prompting in a nutshell

Including examples in your prompts helps guide your AI to a higher-quality answer. In technical terms, adding examples is known as shot-based prompting, where each example is a “shot”:

  • Zero-shot is when you add no examples.
  • Single-shot uses a single example.
  • Multi-shot includes multiple examples.

The more examples you provide, the more context you’re giving the model on how to analyze the prompt and return the intended answer. For example, this is useful for labeling customer needs:

Assign a category to each support message. Follow the examples below. Only use the category names provided:

Inquiry: 'I received my order, but one of the items was damaged. Can you help me with a replacement?' Category: Replacement.

Inquiry: 'I placed an order two weeks ago, and it still hasn't arrived. Can you provide an update on the delivery?' Category: Delay.

Inquiry: 'I need to change the shipping address for my recent order. Can I update this before it ships?' Category: Changes.

Now, process this inquiry:

Inquiry: 'The TV I got is damaged, the screen shows static when I turn it on. How do I solve this problem?'

With this example prompt, every time you send a customer inquiry, the AI will sort it into one of the three preset categories by analyzing the message.
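If you send many inquiries, assembling this multi-shot prompt by hand gets tedious. A small helper can build it from a list of labeled examples; this is just a convenience sketch around the prompt text shown above, not a Personal AI API:

```python
# The three labeled examples from the prompt above.
EXAMPLES = [
    ("I received my order, but one of the items was damaged. "
     "Can you help me with a replacement?", "Replacement"),
    ("I placed an order two weeks ago, and it still hasn't arrived. "
     "Can you provide an update on the delivery?", "Delay"),
    ("I need to change the shipping address for my recent order. "
     "Can I update this before it ships?", "Changes"),
]

def build_prompt(inquiry, examples=EXAMPLES):
    """Assemble a multi-shot prompt: instructions, labeled examples,
    then the new inquiry to classify."""
    lines = ["Assign a category to each support message. Follow the "
             "examples below. Only use the category names provided:", ""]
    for text, category in examples:
        lines.append(f"Inquiry: '{text}' Category: {category}.")
        lines.append("")
    lines.append("Now, process this inquiry:")
    lines.append("")
    lines.append(f"Inquiry: '{inquiry}'")
    return "\n".join(lines)

print(build_prompt("The TV I got is damaged, the screen shows static "
                   "when I turn it on. How do I solve this problem?"))
```

Swapping in a new inquiry string reproduces the full multi-shot prompt, so the examples stay identical from message to message.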

How shot-based prompting works in Personal AI

As with other LLMs, Personal AI supports shot-based prompting. However, since each response is already based on your memories, make sure the data stored in memory is consistent with the kind of multi-shot prompting you want to implement.

Colon scoping: using a specific document as a source

You can instruct Personal AI to respond based only on the content of a single document using the colon scoping feature. Type a : and then start typing the name of the document you’d like to use as a source. A dropdown with potential matches will appear next to the text you’re writing: simply click the document to choose it.

Once you finish writing your prompt, Personal AI will only reply with the memories connected to that document.

Using a colon in the Personal AI App

Be ultra specific

Sometimes you’re just chatting with your AI, browsing through its knowledge. But in other circumstances, when writing a blog post or creating a report, you want to be as specific as possible with your instructions.

This is especially important because Personal AI’s MODEL-2 retrieves a limited number of memory blocks when generating a response. When you write a general prompt, answering it well could require more memory blocks than the model retrieves.

Here’s an example to highlight the difference:

General: "Help me write a blog post about our products."

With a broad request like this, the AI will pull together general information from different areas—it’ll cover a bit about your product categories, why customers love them, and even some past marketing campaigns. But with the memory block limit, it might end up as a high-level overview that doesn’t dive deep into any one area. You’ll get a little bit of everything.

Specific: "Write an intro for a blog post that highlights our new product line, SmartBiz Solutions, focusing on the benefits for small business owners and including feedback from our beta testers."

This more specific prompt guides the AI straight to what you need: details about SmartBiz Solutions, why it’s great for small businesses, and what your early users loved about it. Because the AI isn’t stretching to cover lots of topics, it can pull in just the most relevant memory blocks to generate a higher-quality response.
