PRE-Trained vs. PRO-Trained Models: What You Need to Know

Have you ever wondered what “GPT” actually stands for?

G - generative (meaning the model can generate outputs).

P - pre-trained (the model has already learned patterns from a large, fixed data set).

T - transformer (the neural network architecture that is used).

“Generative” and “Transformer” make sense, but “Pre-trained”? Why can’t it continuously train? Why must it be fixed in time? Who trained it? How did they determine the training data set? Where can I find this data? Is there another way?

Large Language Models (LLMs) like ChatGPT, Claude, and LLaMA have demonstrated the impressive capabilities of pre-trained AI systems. However, a new frontier is emerging - pro-trained Personal Language Models (PLMs), also known as Small Language Models (SLMs), that are meticulously customized to each individual user and can be rapidly retrained.

The key distinction lies in the training approach. Pre-trained models are exposed to vast datasets from the internet to acquire general knowledge. In contrast, pro-trained models undergo continuous training on a single user's unique data - emails, documents, messages and more. This allows them to deeply understand and accurately replicate that person's communication style, expertise, and nuanced perspectives.
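To make the distinction concrete, here is a minimal sketch of what continuous training on a user's own text could look like, assuming a small open model (distilgpt2 via Hugging Face Transformers) and a tiny in-memory set of personal documents. The model choice, sample data, and hyperparameters are illustrative assumptions, not a description of any vendor's actual pipeline.

```python
# Minimal sketch: continually fine-tuning a small open model on a user's own text.
# Assumptions: distilgpt2 as the base model, a tiny in-memory document list,
# and deliberately simple hyperparameters. Real pro-training pipelines differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
model.train()

# Stand-in for the user's emails, notes, and documents.
user_documents = [
    "Thanks for the update. Let's move the review to Thursday at 10am.",
    "Draft proposal: our Q3 focus is onboarding speed and support quality.",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

for epoch in range(3):  # a few quick passes; small models can be retrained rapidly
    for text in user_documents:
        inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
        # For causal language models, the labels are the input tokens themselves.
        outputs = model(**inputs, labels=inputs["input_ids"])
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Save the updated, personalized weights for reuse.
model.save_pretrained("./my-personal-model")
tokenizer.save_pretrained("./my-personal-model")
```

Because the base model is small and the data set is just one person's corpus, a pass like this can be repeated whenever new emails or documents arrive, which is what allows pro-trained models to stay current rather than fixed in time.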

Pro-trained models are built on advanced architectures like Generative Grounded Transformers (GGTs) that unify all your data into a centralized "Memory Stack." From that grounded context, they can generate highly contextualized responses tailored specifically to you. While pre-trained models offer broad but shallow capabilities, pro-trained models go narrow and deep into the domains most relevant to each user.
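"Generative Grounded Transformer" and "Memory Stack" are Personal AI's own terms and their internals are not public, but the grounding idea can be sketched generically: retrieve the user's most relevant memories, then condition generation on them. The snippet below is an illustrative stand-in using simple TF-IDF retrieval (scikit-learn); the memory contents and prompt format are invented for the example.

```python
# Generic sketch of retrieval-grounded generation over a personal "memory" store.
# This is NOT the GGT architecture; TF-IDF retrieval is an illustrative stand-in.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in for a user's memory stack built from their own data.
memory_stack = [
    "Met with Acme Corp on Tuesday; they want the proposal by Friday.",
    "My preferred sign-off for client emails is 'Warm regards'.",
    "Q3 priorities: ship the onboarding revamp, hire two engineers.",
]

def retrieve(query: str, memories: list[str], k: int = 2) -> list[str]:
    """Return the k memories most similar to the query."""
    vectorizer = TfidfVectorizer()
    vectors = vectorizer.fit_transform(memories + [query])
    scores = cosine_similarity(vectors[-1], vectors[:-1]).ravel()
    top = scores.argsort()[::-1][:k]
    return [memories[i] for i in top]

def grounded_prompt(query: str, memories: list[str]) -> str:
    """Assemble a prompt that grounds the model's answer in retrieved memories."""
    context = "\n".join(f"- {m}" for m in retrieve(query, memories))
    return (
        "Using only the personal context below, answer the question.\n"
        f"{context}\n\nQuestion: {query}"
    )

print(grounded_prompt("What did Acme ask for?", memory_stack))
```

In a full system, the grounded prompt would be passed to the personalized model from the previous sketch rather than printed, so the response reflects both the user's stored facts and their learned style.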

The true power of pro-trained models emerges when they are expertly crafted through a white-glove / custom training process. Rather than attempting to fine-tune open-source tools yourself, having experienced professionals meticulously train your Personal AI ensures it captures the full depth of your knowledge and preferences. This collaborative approach unlocks unprecedented levels of personalization and accuracy.

As AI becomes increasingly grounded in our individual data and experiences, pro-trained models will likely surpass pre-trained ones for many practical applications. From writing assistance to executive support to domain-specific question answering, pro-trained AIs can become powerful extensions of ourselves.

If you are a subject matter expert (SME), attorney, healthcare professional, consultant, writer, or other knowledge worker, the pro-trained approach might be right for you.
