Introduction
Personal AI (Human AI Labs, Inc.) is a private, for-profit entity headquartered in New York, NY. The Company has led the creation of the Personal Language Model (PLM), which offers a contrasting approach to Large Language Models (LLMs): user ownership, control, and privacy are foundational principles. The Company's mission is to unlock the vast potential of personal memory and knowledge through a democratized approach to AI, one in which Artificial Personal Intelligence (API) takes priority over Artificial General Intelligence (AGI).
This submission highlights our unique approach, which aligns with the RFI's objectives for responsible AI procurement in the US Government, and addresses questions 7, 2, 9, and 10 of the RFI.
Key Questions for Adopting AI Technologies
As the US Government seeks to adopt new AI technologies, we believe the Office of Management and Budget (OMB) should keep the following critical questions in mind when crafting terms for agencies to include in contracts:
Who owns the data? (Question 7)
It is crucial to determine whether data is owned by big tech companies or by the government and other contracting entities. The ownership of data directly determines who can access, view, and amend it. When an individual or entity is deemed merely a “user” online, that designation typically limits the legal basis for ownership and control. At Personal AI, we firmly believe that individuals, companies, and government agencies (i.e., the client) should have full control and ownership of their data. This means that client data should not be used to train any models outside the contract’s scope (e.g., client data should not be used to train next-generation LLMs).
Is open-sourcing enough? (Question 7)
While open-sourcing AI models is a step toward transparency, it does not absolve companies of responsibility. The term “open source” can mean different things to different players, so it may not always be obvious which entities actually control the creation, development, and deployment of a particular AI model, and in what ways. It is also essential to understand that open-sourcing a model does not necessarily mean revealing its training dataset, which can be biased, flawed, or influenced by external factors. A thorough understanding of the training data is therefore vital, whether or not the model is open-sourced.
AI as a service or AI as an asset? (Question 7)
Governmental bodies must consider, as a matter of law and policy, whether AI should be treated as a service offering or as a unique asset. Some stakeholders advocate treating AI as a service, which means it is largely defined and controlled by the service providers. Personal AI disagrees. The type of PLM that Personal AI provides to its clients is a resource, a unique asset closer in kind to land or other real property. On this view, AI is not a commodity; it is non-fungible (carrying unique value and meaning) and non-rivalrous (usable by anyone without reducing its overall value). Personal AI sees AI as empowering individuals and preventing the concentration of power and value in the hands of a few incumbents.
Response to Question 2:
How can government procurement drive innovation and competition?
The US Government can substantially shape the AI industry and the lives of the American people through its procurement decisions. By using the “power of the purse” to support innovative, principled young companies rather than incumbents, the government can foster faster growth, spur research, and drive competition in the market. This approach will lead to more innovation, lower prices, and solutions that better serve the public.
Below are some ideas on promoting robust competition and attracting new entrants:
Diversifying Suppliers: OMB can encourage agencies to diversify their supplier base by allocating a specific portion of their procurement budgets to small businesses and startups that abide by certain data-protecting and PLM-promoting principles. This can be implemented through programs similar to the Small Business Innovation Research (SBIR) program.
Creating Transparent and Accessible Procurement Processes: To reduce barriers to entry for new and smaller vendors, OMB can simplify and standardize the procurement process. This includes providing clear and consistent information about bidding opportunities, streamlining application and compliance requirements, and offering training and support to help new entrants navigate the process.
Fostering Open Innovation and Collaboration: OMB can encourage agencies to establish public-private partnerships and innovation hubs where businesses of all sizes collaborate on government projects. These hubs can serve as incubators and allow small businesses to demonstrate their capabilities in real-world settings.
Response to Question 9:
Agencies should structure their AI procurement practices with specific use cases in mind. Ensuring that procured AI models are trained and tuned for specific use cases allows for clear objectives and well-defined measures of efficacy. Improper use of a “one-size-fits-all” model may lead to harmful or unintended outcomes.
Below are select examples of how tailored PLMs can address specific use cases while minimizing the risk of adverse outcomes.
Healthcare (Veterans Affairs, CMS): For veterans suffering from PTSD, Personal AI can provide personalized, continuous support and monitoring. For veterans and other individuals suffering from dementia or trauma-induced memory loss, Personal AI can augment biological memory. For those who can no longer tell their own stories, Personal AI can preserve personal histories, lessons, stories, and wisdom. PLMs can improve the quality of care, reduce readmission rates, and enhance the overall well-being of veterans.
Law Enforcement & Intelligence Community (IC): Personal AI enables law enforcement agencies to handle sensitive data more securely and efficiently. PLMs can also reduce the information silos that stifle communication within law enforcement. Furthermore, Personal AI's ability to capture the personality, voice, and likeness of an individual is valuable to the IC, which must monitor, simulate, and understand persons of interest.
Legislative Support (Congress): Personal AI can improve constituent communications in the legislative domain. Each congressperson's Personal AI can understand constituent inquiries and draft personalized responses that are accurate to the congressperson's position. This both improves the efficiency of congressional offices and fosters greater public engagement and trust in the legislative process.
Foreign Policy and Information (State Department): In the realm of foreign policy, where information and priorities can shift between administrations, Personal AI offers a secure and adaptable model. PLMs can be tailored to different tiers of security clearance, ensuring that sensitive information is handled appropriately. Unlike pre-trained models or models with closed datasets, Personal AI mitigates the risks of information leaks and disinformation campaigns.
Response to Question 10:
Advancing Equitable Outcomes and Mitigating Risks with Personal AI
Personal AI's approach to AI development and deployment centers on the principles of privacy, transparency, and user control. By empowering individuals with ownership over their data and AI models, we aim to foster trust in AI technologies and ensure that they serve the interests of the people. This user-centric model helps to mitigate risks to privacy, civil rights, and civil liberties by giving individuals the ability to control how their data is used and preventing the concentration of power in the hands of a few entities.
Our commitment to ethical AI practices extends to making AI accessible and inclusive. Personal AI's PLMs are designed to be user-friendly, enabling individuals from diverse backgrounds to benefit from AI without requiring specialized knowledge. This approach helps democratize the benefits of AI across society, reducing barriers to access and promoting equitable outcomes.
Indeed, under one scenario, PLMs can eventually evolve into an individual's trusted digital agents, interacting on that person's behalf with an array of companies and other entities. In that role, the digital agent can query, interrogate, agree to, or even challenge decisions ostensibly made on behalf of the individual. To the extent that LLMs and other AI platforms are poised to become the consequential decision engines for our society, the PLM as an authentic digital agent can become a powerful tool for ordinary people to represent their own best interests. Among other benefits, this bottom-up, grassroots approach can help mitigate the societal risks that larger players otherwise could introduce.
Within the US Government, we believe that this approach is crucial for building public trust and ensuring that AI technologies are deployed in a manner that upholds societal values and promotes the well-being of all individuals.
Conclusion
Personal AI is at the forefront of AI innovation and ethical technology deployment. Our decentralized, user-centric approach addresses the current challenges in AI governance and offers a future-proof solution for responsible AI adoption in government. We are excited about the prospect of collaborating with federal agencies to unlock the transformative potential of AI while upholding the highest standards of responsibility and integrity.
For Further Information
We welcome the opportunity to engage in further discussions and provide additional information regarding our submission. Please feel free to contact us directly at the email address or phone number provided. We are eager to explore how Personal AI can contribute to the government's objectives for responsible AI adoption and look forward to the possibility of future collaboration.
Sincerely,
/S/ Jonathan Bikoff, Head of Strategy
Personal AI (Human AI Labs, Inc.)