You can interact with Personal AI in a similar way to an LLM: you ask questions and get an AI-generated answer. But since the purpose of Personal AI is to respond based on your data, there are special techniques you can learn to leverage your memories and get higher-quality responses.
In a traditional LLM, once you send your prompt to the model, it will:
Personal AI is similar, with a few critical differences:
Context—also known as conversational memory or context window—is how AI chat applications remember what the user says during a conversation. Personal AI offers a context window as well: each message you trade with your AI influences the interaction as it unfolds.
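To make the idea concrete, here is a minimal conceptual sketch of a context window, assuming a simple message-list design with a fixed turn limit. This is an illustration of the general mechanism, not Personal AI's actual implementation or API; `MAX_TURNS` is a stand-in for a real token budget.

```python
# Conceptual sketch: a context window is the running list of messages
# that gets re-sent to the model on every turn, trimmed to a size limit.

MAX_TURNS = 6  # illustrative limit, standing in for a real token budget

def build_context(history, new_message, max_turns=MAX_TURNS):
    """Append the new message and keep only the most recent turns."""
    history = history + [new_message]
    return history[-max_turns:]

history = []
for text in ["Hi!", "Summarize my notes", "Shorter, please"]:
    history = build_context(history, {"role": "user", "content": text})

print(len(history))  # the whole short conversation still fits: 3
```

Each new turn is appended and older turns eventually fall out of the window, which is why long conversations gradually "forget" their earliest messages.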
However, unlike in ChatGPT, Personal AI also has the memories in your memory stack to use during the conversation—and the weight of these memories in the response is represented by the Personal Score.
To preserve flexibility and to help you steer the conversation, the app balances the weight from saved memories and information in the context window. Here’s what this means when chatting about a topic with plenty of saved memories:
Your one-on-one conversation with your AI or any of your AI Personas always carries the context. If you feel it’s too attached to a particular topic, you can click the New Convo button to clear and start fresh.
Context is also available in Channels, but it works differently. Because Channels are multi-user and conversations there can unfold in unpredictable ways, Personal AI doesn't keep a unified context window inside a Channel: each message sent is treated as a standalone exchange, without drawing on prior context.
However, when anyone uses the Reply action on a single message, the AI starts saving context for that exchange, enabling this functionality for that conversation thread only.
Including examples in your prompts helps guide your AI to a higher-quality answer. In technical terms, adding examples is known as shot-based prompting, where each example is a “shot”:
The more examples you provide, the more context you’re giving the model on how to analyze the prompt and return the intended answer. For example, this is useful for labeling customer needs:
Assign a category to each support message. Follow the examples below. Only use the category names provided:
Inquiry: 'I received my order, but one of the items was damaged. Can you help me with a replacement?' Category: Replacement.
Inquiry: 'I placed an order two weeks ago, and it still hasn't arrived. Can you provide an update on the delivery?' Category: Delay.
Inquiry: 'I need to change the shipping address for my recent order. Can I update this before it ships?' Category: Changes.
Now, process this inquiry:
Inquiry: 'The TV I got is damaged, the screen shows static when I turn it on. How do I solve this problem?'
With this example prompt, every time you send a customer inquiry, the AI will analyze the message and sort it into one of the three preset categories.
Like with other LLMs, Personal AI also supports shot-based prompting. However, since each response is already based on your memories, you need to make sure that the data stored in the memory is consistent with the kind of multi-shot prompting that you want to implement.
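The pattern above can be sketched programmatically. This is an illustrative helper, not part of Personal AI's product, that assembles a few-shot classification prompt from (inquiry, category) pairs, where each pair is one "shot":

```python
# Illustrative sketch of shot-based prompting: each (inquiry, category)
# pair is one "shot" the model can imitate when labeling a new message.

EXAMPLES = [
    ("I received my order, but one of the items was damaged. "
     "Can you help me with a replacement?", "Replacement"),
    ("I placed an order two weeks ago, and it still hasn't arrived. "
     "Can you provide an update on the delivery?", "Delay"),
    ("I need to change the shipping address for my recent order. "
     "Can I update this before it ships?", "Changes"),
]

def few_shot_prompt(new_inquiry, examples=EXAMPLES):
    """Build a few-shot classification prompt from labeled examples."""
    lines = ["Assign a category to each support message. "
             "Follow the examples below. Only use the category names provided:",
             ""]
    for inquiry, category in examples:
        lines.append(f"Inquiry: '{inquiry}' Category: {category}.")
    lines += ["", "Now, process this inquiry:", f"Inquiry: '{new_inquiry}'"]
    return "\n".join(lines)

prompt = few_shot_prompt("The TV I got is damaged, the screen shows static "
                         "when I turn it on. How do I solve this problem?")
print(prompt.count("Category:"))  # three labeled shots: 3
```

Keeping the examples as data makes it easy to add or swap shots as your categories evolve, which mirrors the advice here: the quality of the answer tracks the quality and consistency of the examples you provide.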
You can instruct Personal AI to respond based only on the content of a single document using the colon scoping feature. Type a : and then start typing the name of the document you'd like to use as a source. A dropdown with potential matches will appear near the text you're writing: click the document you want.
Once you finish writing your prompt, Personal AI will only reply with the memories connected to that document.
Sometimes you’re just chatting with your AI, browsing through its knowledge. But in other circumstances, when writing a blog post or creating a report, you want to be as specific as possible with your instructions.
This is especially important since Personal AI's MODEL-2 retrieves a limited number of memory blocks when generating a response. When you write a general prompt, answering it well could require far more memory blocks than the model can retrieve at once.
Here’s an example to highlight the difference:
General: "Help me write a blog post about our products."
With a broad request like this, the AI will pull together general information from different areas—it’ll cover a bit about your product categories, why customers love them, and even some past marketing campaigns. But with the memory block limit, it might end up as a high-level overview that doesn’t dive deep into any one area. You’ll get a little bit of everything.
Specific: "Write an intro for a blog post that highlights our new product line, SmartBiz Solutions, focusing on the benefits for small business owners and including feedback from our beta testers."
This more specific prompt guides the AI straight to what you need: details about SmartBiz Solutions, why it’s great for small businesses, and what your early users loved about it. Because the AI isn’t stretching to cover lots of topics, it can pull in just the most relevant memory blocks to generate a higher-quality response.