

Salt Lake City, UT – June 18, 2025
The University of Utah’s John and Marcia Price College of Engineering hosted its inaugural AI Summit today, drawing approximately 400 researchers, students, policymakers, and industry leaders to explore artificial intelligence’s transformative potential in fields such as medicine, quantum technologies, smart infrastructure, and transportation. Held on the top floor of the S. J. Quinney College of Law, the sold-out event highlighted Utah’s growing prominence as a hub for AI innovation and collaboration.
In his welcome address, Charles Musgrave, Dean of the Price College of Engineering, described AI as a “multiplier” for innovation, urging attendees to foster a culture that embraces risk-taking, learns from failures, and iterates rapidly. “The key to winning this race is the pace of innovation,” Musgrave declared. “The outcomes will shape the future for decades, possibly centuries, shifting wealth, influence, and geopolitical power.”

The summit showcased the Price College’s latest advances in AI education and research, with the Kahlert School of Computing—identified as a vital school within the Price College—playing a central role. The Kahlert School taught 57% of all AI-related courses on campus last year, as stated in the opening remarks, cementing its position as a cornerstone of AI expertise at the U of U. However, Musgrave emphasized that AI leadership must extend beyond engineering. “We believe deeply that AI leadership must be shared across disciplines,” he said, highlighting initiatives like an AI upskilling course for faculty campus-wide and a new program to help non-engineering faculty teach AI.
Ethics and responsibility were central themes. Musgrave underscored the university’s Responsible AI Initiative, stressing that advancements must be guided by integrity. “The question is not just what can AI do, but what should it do,” he said, advocating for transparency and service to the public good.
Panel Highlights: AI in Sensing, Seeing, and Securing the World
A standout session was the panel "AI in Sensing, Seeing, and Securing the World," moderated by Varun Shankar, Assistant Professor in the Kahlert School of Computing, Associate Director of the Master of Software Development (MSD) program, and Associate Chair of the Stena Center for Financial Technology within the College of Engineering. Featuring three experts from the Price College, the panel explored cutting-edge AI applications, from optical computing to multimodal intelligence and secure large language models (LLMs).
Weilu Gao, Assistant Professor in Electrical and Computer Engineering, opened with a visionary talk on machine learning with optics. Gao highlighted the limitations of traditional electronic chips, constrained by physics as Moore’s Law slows. He proposed optical computing as a solution, leveraging light’s parallelism to overcome computational and memory bottlenecks. “Optics can offer high-speed, energy-efficient AI hardware acceleration,” Gao explained, showcasing his group’s work on diffractive neural networks. These tabletop optical systems, built with liquid crystal technology, mimic neural network architectures to perform tasks like image classification and material property prediction. Gao’s team has also applied these systems to drug screening and autonomous driving, demonstrating their potential for real-world impact. By solving Maxwell’s equations optically, his group aims to design self-evolving hardware, pushing the boundaries of AI efficiency.
Ziad Al-Halah, Assistant Professor in the Kahlert School of Computing, focused on multimodal embedded intelligence, enabling AI to “see, hear, and act” like humans. He emphasized the gap between current AI models, like ChatGPT, and human-like intelligence, which relies on multi-sensory interaction with the world. Al-Halah’s lab develops AI systems that combine audio and visual inputs for tasks like localizing sounding objects in 3D environments—critical for applications like search-and-rescue robots or home assistants. His team has trained models to identify active speakers in crowded settings, even when faces are occluded, and to use echolocation for navigation in darkness, inspired by biological systems like bats. “Our intelligence evolves through interacting with the world,” Al-Halah said, underscoring the need for embodied AI to learn from real-world experiences.
Guanhong Tao, also from the Kahlert School of Computing, addressed the critical need for safe and secure large language models. Tao’s focus on LLMs highlighted the importance of mitigating risks in AI systems that underpin applications in sensitive domains like healthcare and national security, aligning with the summit’s emphasis on responsible AI.

A lively Q&A session revealed the technical and societal challenges of translating these innovations into real-world applications. Gao noted that scaling optical computing from lab prototypes to market-ready devices requires significant resources and interdisciplinary engineering efforts, from optical design to electronics integration. Al-Halah highlighted efficiency as a major hurdle, pointing out that current AI models, like ChatGPT, consume vast energy and data compared to human learning. He aims to develop low-power models for devices like robots and smart glasses over the next five years. Tao addressed AI security, describing it as a “cat-and-mouse game” where vulnerabilities persist. He advocated for designing LLMs from scratch with security in mind, incorporating techniques like model unlearning to protect privacy. Responding to an audience question from undergraduate Ivan about “jailbreaking” LLMs (e.g., tricking models into providing harmful outputs), Tao explained that input and output protections, such as filtering malicious responses, are key to ensuring safety. These discussions underscored the panel’s commitment to overcoming technical barriers while prioritizing ethical and secure AI deployment.
The panel underscored the Price College’s interdisciplinary approach, blending physics, computing, and ethics to advance AI. From Gao’s optical innovations to Al-Halah’s multimodal systems and Tao’s secure LLMs, the session illustrated how Utah’s researchers are inventing the future of AI, tackling both technical and societal challenges with ingenuity and responsibility.
Panel Highlights: Next-Gen AI: From Supervision to Autonomy
Another compelling session was the panel "Next-Gen AI: From Supervision to Autonomy," moderated by Tucker Hermans, featuring Jacob Hochhalter, Daniel Brown, and Vivek Srikumar from the Price College of Engineering. The panel explored innovative approaches to reducing AI training costs, enhancing human-AI interaction, and advancing human language technologies.
Daniel Brown, Assistant Professor in the Kahlert School of Computing, delivered a particularly captivating talk on developing robust, interactive, and human-aligned AI systems. Leading the Aligned, Robust, Interactive Autonomy (ARIA) Lab, Brown emphasized the "alignment problem"—ensuring AI systems do what humans intend. He highlighted the difficulty of specifying objectives, quoting Stuart Russell: “For any given incorrectly stated objective, the better a system is at that objective, the worse it is at correction.” Brown illustrated this with examples like social media recommender systems, where maximizing click-through rates can lead to polarization and misinformation. His lab focuses on incorporating human feedback—through natural language, facial expressions, demonstrations, and comparisons—to create AI that adapts to user preferences despite noisy or ambiguous input.

Brown’s research leverages reinforcement learning from human feedback, a technique powering generative AI like ChatGPT, to train reward models based on qualitative human comparisons. His team applies this to complex tasks, such as robotic surgery, where precise objectives are hard to define, and assistive robotics, enabling wheelchair-mounted robotic arms to infer user intent for tasks like fetching water. “We want to bring AI systems out of the lab and have them adapt to different users’ preferences,” Brown said, emphasizing human-centered AI that fosters synergy between human intent and robotic capability.
Addressing robustness, Brown highlighted vulnerabilities in machine learning models, noting that minor pixel noise can mislead robot systems, with attacks often transferable across tasks. His lab develops self-assessing robots that gauge data sufficiency and query humans when uncertain, critical for applications like surgical robotics. “We want robots to know when to ask for help,” he said, underscoring the importance of uncertainty-aware systems that improve over time through human interaction.
The panel showcased the Price College’s forward-thinking approach to AI, with Brown’s work exemplifying how human feedback and robustness can bridge the gap between supervised AI and autonomous, human-aligned systems, paving the way for safer and more adaptive technologies.

Smarter Models with Less Data: Hochhalter’s Derivative-Driven Machine Learning Breakthrough
Why train AI on massive datasets when a handful of carefully measured points — enriched with derivative data — can yield even deeper insights?
That’s the question Jacob Hochhalter, Associate Professor of Mechanical Engineering, tackled in his mathematically rigorous talk at the summit.
In a landscape dominated by data-hungry models, Hochhalter’s approach flips the paradigm. Instead of scaling up with more data, his team uses Hypercomplex Automatic Differentiation — a technique based on complex numbers — to extract high-order derivatives from just a few data points. Think Taylor Series, but applied to machine learning: a way of approximating a whole function from knowledge at a single point.
This method is paired with symbolic regression, a machine learning technique that discovers explicit mathematical relationships rather than opaque, black-box models. The result? Interpretable equations that often align with real-world physics — a major advantage in scientific and engineering domains.
Here’s the core idea: if you know not just a value, but its slope, curvature, and rate of curvature at a point, you can infer far more about the system than from values alone. Feeding this derivative-rich data into a model dramatically boosts learning efficiency.
“I don’t just know where I am — I know where I’m headed, how fast I’m speeding up, and so on.”

To demonstrate, Hochhalter applied the method to a pressurized cylinder — a model relevant to everything from blood vessels to aircraft fuselages. Traditional training on 10 data points led to error rates around 200%. Using derivative-informed training, error dropped to near zero, matching analytical solutions with only minor numerical noise.
Hochhalter closed with a call to rethink how engineering integrates AI: not by throwing more hardware or data at the problem, but by infusing models with richer, mathematically grounded information.
In fields like aerospace, bioengineering, and materials science — where experiments are costly, slow, or ethically constrained — it’s not just a breakthrough. It’s a necessity.
Summit Impact and Call to Action
In addition to the featuring U of U faculty, as mentioned above, the AI summit included a student poster session showcasing next-generation AI research, and plenty of networking opportunities, some of which on the stylish rooftop overlooking Rice Stadium, to spark collaborations.
Musgrave invoked a quote from computer science pioneer and Price College alumnus Alan Kay: “The best way to predict the future is to invent it.” This ethos drove the event, encouraging attendees to shape AI’s trajectory in science, economics, national security, and even art and literature. This message resonated with the event's attendees from Utah’s tech community, known for its entrepreneurial spirit and collaborative ecosystem.
For more information about the AI Summit or on AI initiatives at the University of Utah, visit price.utah.edu.
