How can we ensure AI systems remain trustworthy and aligned with human values as their capabilities grow? I sat down with David Danks, Professor of Data Science & Philosophy at UC San Diego, to explore this critical issue.
Danks has spent over two decades at the intersection of AI, cognitive science, and ethics. His work has spanned from developing novel AI algorithms to understanding the ethical implications as these powerful technologies are applied in sensitive real-world domains like healthcare and national security.
When I ask him to define "responsible AI," Danks emphasizes the need for AI systems to genuinely support the values, needs and interests of the humans using them. "It's about me having good reason to think the AI is going to do what I would have done if I had the time, energy and knowledge it has - that it's acting on my behalf, rather than just a company's."
This notion of AI aiding rather than undermining human agency is central to Danks' vision of trustworthy AI. He sees a key role for personalized AI that can be carefully tuned and customized to each individual's priorities.
"The big three language models out there - they're just not going to ever be able to offer that level of personalization at scale," Danks explains. "If we want these technologies to be truly empowering, we need AI that can remain under the control of each user while still being highly capable."
Beyond the technical challenges, achieving responsible AI will require an ecosystem of new practices. Danks advocates for governance models moving beyond binary approve/deny decisions to more dynamic certification processes contingent on use case and context. He also emphasizes the need for developers to embrace an ethical culture of data responsibility, avoiding excessive collection "just because it might be useful later."
On the topic of AI regulation, Danks stresses that policymakers must resist taking a one-size-fits-all approach and develop nuanced frameworks attuned to the new realities these systems create:
"AI doesn't just make existing technologies a bit fancier - it often introduces entirely new capabilities we haven't had to consider before. The regulators may need to ask a totally different set of questions than they're used to."
As our conversation turns to the future of AI and education, Danks draws a parallel to the evolution of graphing calculators. Once banned from classrooms, they eventually became embraced tools as pedagogy adapted to leverage them for genuine understanding rather than rote calculation.
"We're in those early days with AI where we don't yet know how to wield it to actually improve learning outcomes rather than just spouting answers," Danks observes. "But like graphing calculators, I believe we'll get there - the key is being intentional about using AI as an aid to deeper understanding, not just a shortcut."
For everyday people grappling with AI's societal impacts, Danks advocates a mindset shift from simply understanding the technology to understanding ourselves and our values. "The education we need is about reflecting: How could this AI system really benefit me or undermine what matters to me? Too often, people adopt AI tools without that purposeful consideration."
Danks points to his own use of AI writing tools as an example of that kind of purposeful adoption: "The value for me is that it's another way I don't have to think about logistical details like phrasing and mechanics. I can simply focus on communicating my ideas as effectively as possible to my audience, whether it's one person or many."
As our wide-ranging discussion made clear, keeping AI responsible and trustworthy as it grows more powerful is a multifaceted challenge - a constant negotiation between technical innovation, cultural norms, policy guardrails, and human self-awareness.
But Danks' work counsels an approach of clear-eyed pragmatism and empowerment rather than fear. If we can create AI systems that genuinely put human values at the center, enhancing our agency rather than undermining it, we may yet harness AI's transformative potential as a society.
"AI can be incredibly powerful and enable people to innovate in astonishing ways," Danks affirms. "The key is ensuring we're using it intentionally to create value aligned with what truly matters to us as individuals."