Salt Lake City, UT – June 18, 2025

The University of Utah’s John and Marcia Price College of Engineering hosted its inaugural AI Summit today, drawing approximately 400 researchers, students, policymakers, and industry leaders to explore artificial intelligence’s transformative potential in fields such as medicine, quantum technologies, smart infrastructure, and transportation. Held on the top floor of the S. J. Quinney College of Law, the sold-out event highlighted Utah’s growing prominence as a hub for AI innovation and collaboration.

In his welcome address, Charles Musgrave, Dean of the Price College of Engineering, described AI as a “multiplier” for innovation, urging attendees to foster a culture that embraces risk-taking, learns from failures, and iterates rapidly. “The key to winning this race is the pace of innovation,” Musgrave declared. “The outcomes will shape the future for decades, possibly centuries, shifting wealth, influence, and geopolitical power.”

Charles Musgrave, Dean, John and Marcia Price College of Engineering, University of Utah

The summit showcased the Price College’s latest advances in AI education and research, with the Kahlert School of Computing playing a central role. According to the opening remarks, the Kahlert School taught 57% of all AI-related courses on campus last year, cementing its position as a cornerstone of AI expertise at the U of U. However, Musgrave emphasized that AI leadership must extend beyond engineering. “We believe deeply that AI leadership must be shared across disciplines,” he said, highlighting initiatives like a campus-wide AI upskilling course for faculty and a new program to help non-engineering faculty teach AI.

Ethics and responsibility were central themes. Musgrave underscored the university’s Responsible AI Initiative, stressing that advancements must be guided by integrity. “The question is not just what can AI do, but what should it do,” he said, advocating for transparency and service to the public good.

Panel Highlights: AI in Sensing, Seeing, and Securing the World

A standout session was the panel "AI in Sensing, Seeing, and Securing the World," moderated by Varun Shankar, Assistant Professor in the Kahlert School of Computing, Associate Director of the Master of Software Development (MSD) program, and Associate Chair of the Stena Center for Financial Technology within the College of Engineering. Featuring three experts from the Price College, the panel explored cutting-edge AI applications, from optical computing to multimodal intelligence and secure large language models (LLMs).

Weilu Gao, Assistant Professor in Electrical and Computer Engineering, opened with a visionary talk on machine learning with optics. Gao highlighted the limitations of traditional electronic chips, constrained by physics as Moore’s Law slows. He proposed optical computing as a solution, leveraging light’s parallelism to overcome computational and memory bottlenecks. “Optics can offer high-speed, energy-efficient AI hardware acceleration,” Gao explained, showcasing his group’s work on diffractive neural networks. These tabletop optical systems, built with liquid crystal technology, mimic neural network architectures to perform tasks like image classification and material property prediction. Gao’s team has also applied these systems to drug screening and autonomous driving, demonstrating their potential for real-world impact. By solving Maxwell’s equations optically, his group aims to design self-evolving hardware, pushing the boundaries of AI efficiency.
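
For readers curious how a diffractive network can be simulated in software, the sketch below captures the core idea: trainable phase masks separated by free-space light propagation, modeled here with the standard angular-spectrum method. It is a minimal illustration, not code from Gao’s group, and every physical parameter is a placeholder.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex optical field a distance z (angular-spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0))
    H = np.exp(1j * kz * z) * (arg > 0)        # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

def diffractive_forward(intensity, phase_masks, wavelength=750e-9, dx=400e-9, z=40e-6):
    """Cascade of phase masks (e.g., liquid-crystal layers) with propagation between them."""
    field = np.sqrt(intensity).astype(complex)
    for phi in phase_masks:                    # phi plays the role of trainable weights
        field = angular_spectrum(field, wavelength, dx, z)
        field *= np.exp(1j * phi)
    field = angular_spectrum(field, wavelength, dx, z)
    return np.abs(field) ** 2                  # a detector reads out intensity

# Toy usage: three random 64x64 phase masks acting on a square input.
rng = np.random.default_rng(0)
masks = [rng.uniform(0, 2 * np.pi, (64, 64)) for _ in range(3)]
img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
print(diffractive_forward(img, masks).shape)
```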

Ziad Al-Halah, Assistant Professor in the Kahlert School of Computing, focused on multimodal embedded intelligence, enabling AI to “see, hear, and act” like humans. He emphasized the gap between current AI models, like ChatGPT, and human-like intelligence, which relies on multi-sensory interaction with the world. Al-Halah’s lab develops AI systems that combine audio and visual inputs for tasks like localizing sounding objects in 3D environments—critical for applications like search-and-rescue robots or home assistants. His team has trained models to identify active speakers in crowded settings, even when faces are occluded, and to use echolocation for navigation in darkness, inspired by biological systems like bats. “Our intelligence evolves through interacting with the world,” Al-Halah said, underscoring the need for embodied AI to learn from real-world experiences.
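
One common pattern behind such audio-visual systems is late fusion: encode each modality separately, concatenate the embeddings, and predict from the joint representation. The toy sketch below shows only the shape of that pipeline; the dimensions and linear encoders are hypothetical stand-ins for real spectrogram and CNN embeddings, not Al-Halah’s models.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(x, W):
    """Toy encoder: linear projection plus ReLU."""
    return np.maximum(W @ x, 0)

# Hypothetical dimensions: 128-d audio features, 256-d visual features.
W_audio = 0.1 * rng.normal(size=(64, 128))
W_visual = 0.1 * rng.normal(size=(64, 256))
W_head = 0.1 * rng.normal(size=(3, 128))       # regress a 3D position

def localize(audio_feat, visual_feat):
    """Late fusion: encode each modality, concatenate, predict (x, y, z)."""
    joint = np.concatenate([encode(audio_feat, W_audio),
                            encode(visual_feat, W_visual)])
    return W_head @ joint

print(localize(rng.normal(size=128), rng.normal(size=256)))
```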

Guanhong Tao, also from the Kahlert School of Computing, addressed the critical need for safe and secure large language models. Tao’s focus on LLMs highlighted the importance of mitigating risks in AI systems that underpin applications in sensitive domains like healthcare and national security, aligning with the summit’s emphasis on responsible AI.

Guanhong Tao, Assistant Professor, Kahlert School of Computing, University of Utah

A lively Q&A session revealed the technical and societal challenges of translating these innovations into real-world applications. Gao noted that scaling optical computing from lab prototypes to market-ready devices requires significant resources and interdisciplinary engineering efforts, from optical design to electronics integration. Al-Halah highlighted efficiency as a major hurdle, pointing out that current AI models, like ChatGPT, consume vast energy and data compared to human learning. He aims to develop low-power models for devices like robots and smart glasses over the next five years. Tao addressed AI security, describing it as a “cat-and-mouse game” where vulnerabilities persist. He advocated for designing LLMs from scratch with security in mind, incorporating techniques like model unlearning to protect privacy. Responding to an audience question from undergraduate Ivan about “jailbreaking” LLMs (e.g., tricking models into providing harmful outputs), Tao explained that input and output protections, such as filtering malicious responses, are key to ensuring safety. These discussions underscored the panel’s commitment to overcoming technical barriers while prioritizing ethical and secure AI deployment.
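
The input- and output-side protections Tao described can be pictured as a thin wrapper around the model. The sketch below uses a keyword blocklist purely for illustration; production guardrails typically rely on trained safety classifiers rather than regular expressions.

```python
import re

# Illustrative blocklist only; real systems use learned classifiers.
UNSAFE_PATTERNS = [r"\bbuild\s+a\s+bomb\b", r"\bsynthesi[sz]e\s+\w*toxin\b"]

def flagged(text):
    return any(re.search(p, text, re.IGNORECASE) for p in UNSAFE_PATTERNS)

def guarded_generate(prompt, model):
    """Screen both the user's prompt and the model's response."""
    if flagged(prompt):                  # input-side protection
        return "I can't help with that."
    response = model(prompt)
    if flagged(response):                # output-side protection
        return "I can't help with that."
    return response

print(guarded_generate("How do I build a bomb?", model=lambda p: "..."))
```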

The panel underscored the Price College’s interdisciplinary approach, blending physics, computing, and ethics to advance AI. From Gao’s optical innovations to Al-Halah’s multimodal systems and Tao’s secure LLMs, the session illustrated how Utah’s researchers are inventing the future of AI, tackling both technical and societal challenges with ingenuity and responsibility.

Panel Highlights: Next-Gen AI: From Supervision to Autonomy

Another compelling session was the panel "Next-Gen AI: From Supervision to Autonomy," moderated by Tucker Hermans, featuring Jacob Hochhalter, Daniel Brown, and Vivek Srikumar from the Price College of Engineering. The panel explored innovative approaches to reducing AI training costs, enhancing human-AI interaction, and advancing human language technologies.

Daniel Brown, Assistant Professor in the Kahlert School of Computing, delivered a particularly captivating talk on developing robust, interactive, and human-aligned AI systems. Leading the Aligned, Robust, Interactive Autonomy (ARIA) Lab, Brown emphasized the "alignment problem"—ensuring AI systems do what humans intend. He highlighted the difficulty of specifying objectives, quoting Stuart Russell: “For any given incorrectly stated objective, the better a system is at that objective, the worse it is at correction.” Brown illustrated this with examples like social media recommender systems, where maximizing click-through rates can lead to polarization and misinformation. His lab focuses on incorporating human feedback—through natural language, facial expressions, demonstrations, and comparisons—to create AI that adapts to user preferences despite noisy or ambiguous input.

Brown’s research leverages reinforcement learning from human feedback, a technique powering generative AI like ChatGPT, to train reward models based on qualitative human comparisons. His team applies this to complex tasks, such as robotic surgery, where precise objectives are hard to define, and assistive robotics, enabling wheelchair-mounted robotic arms to infer user intent for tasks like fetching water. “We want to bring AI systems out of the lab and have them adapt to different users’ preferences,” Brown said, emphasizing human-centered AI that fosters synergy between human intent and robotic capability.
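
The comparison-based training Brown described is usually formalized with a Bradley-Terry model: the probability that humans prefer outcome A over B is a sigmoid of the difference in learned rewards. Here is a self-contained toy version with a linear reward model (real systems use neural networks); all data is synthetic.

```python
import numpy as np

def reward(feats, w):
    """Linear reward model r(x) = w . x; a stand-in for a neural network."""
    return feats @ w

def preference_loss(w, better, worse):
    """Bradley-Terry negative log-likelihood:
    P(A preferred over B) = sigmoid(r(A) - r(B))."""
    margin = reward(better, w) - reward(worse, w)
    return -np.mean(np.log(1.0 / (1.0 + np.exp(-margin))))

# Synthetic comparisons labeled by a hidden "true" preference direction.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(200, 3))
idx = rng.integers(0, 200, size=(500, 2))
pairs = [(i, j) if X[i] @ true_w > X[j] @ true_w else (j, i) for i, j in idx]
better = X[[p[0] for p in pairs]]
worse = X[[p[1] for p in pairs]]

# Gradient ascent on the pairwise log-likelihood.
w = np.zeros(3)
for _ in range(500):
    margin = reward(better, w) - reward(worse, w)
    g = 1.0 / (1.0 + np.exp(margin))           # sigmoid(-margin)
    w += 0.1 * ((better - worse) * g[:, None]).mean(axis=0)

print(preference_loss(w, better, worse))       # shrinks as w aligns with true_w
```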

Addressing robustness, Brown highlighted vulnerabilities in machine learning models, noting that minor pixel noise can mislead robot systems, with attacks often transferable across tasks. His lab develops self-assessing robots that gauge data sufficiency and query humans when uncertain, critical for applications like surgical robotics. “We want robots to know when to ask for help,” he said, underscoring the importance of uncertainty-aware systems that improve over time through human interaction.
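
A standard recipe for that kind of self-assessment is an ensemble: train several models and treat their disagreement as a proxy for uncertainty, deferring to a human when it spikes. The sketch below is schematic rather than Brown’s lab’s actual method; the models, threshold, and states are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five toy "policies"; disagreement across members approximates uncertainty.
ensemble = [lambda s, w=rng.normal(size=3): float(s @ w) for _ in range(5)]

def act_or_ask(state, threshold=0.5):
    preds = np.array([m(state) for m in ensemble])
    if preds.std() > threshold:   # members disagree: data was insufficient here
        return "ASK_HUMAN"
    return preds.mean()           # members agree: act autonomously

print(act_or_ask(np.array([0.01, -0.02, 0.005])))   # familiar state: acts
print(act_or_ask(np.array([5.0, -8.0, 3.0])))       # novel state: asks for help
```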

The panel showcased the Price College’s forward-thinking approach to AI, with Brown’s work exemplifying how human feedback and robustness can bridge the gap between supervised AI and autonomous, human-aligned systems, paving the way for safer and more adaptive technologies.

Smarter Models with Less Data: Hochhalter’s Derivative-Driven Machine Learning Breakthrough

Jacob Hochhalter, Associate Professor of Mechanical Engineering and research lead, delivered a technically rigorous, math-intensive presentation built on a compelling idea: why gather massive datasets when a few carefully measured points, enriched with derivative data, yield deeper insights?

In a field dominated by data-hungry models, Hochhalter proposes a method that flips the script. Instead of blindly increasing the volume of training data, his team uses a method called Hypercomplex Automatic Differentiation (a derivative calculation technique based on complex numbers) to extract high-order derivative information from a small number of data points — think Taylor Series on steroids.
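
The first-order ancestor of this technique is the well-known complex-step derivative, which perturbs the input along the imaginary axis and reads the derivative off the imaginary part of the output. Because no nearly equal numbers are subtracted, the step can be made absurdly small with no loss of precision, something finite differences cannot do. Hochhalter’s hypercomplex approach generalizes the trick to second and higher derivatives via hyper-dual numbers; the sketch below shows only the first-order version.

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-20):
    """f'(x) ~ Im(f(x + ih)) / h. No subtractive cancellation, so h can be
    tiny and the result is accurate to machine precision."""
    return f(complex(x, h)).imag / h

f = lambda x: np.exp(x) / np.sqrt(x)            # any smooth (analytic) function
x = 1.5
exact = np.exp(x) * (1 / np.sqrt(x) - 0.5 * x ** -1.5)
print(complex_step_derivative(f, x), exact)     # agree to ~16 digits
```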

This is paired with symbolic regression, a machine learning technique that searches for explicit mathematical formulas (rather than opaque black-box models) without presupposing what those formulas look like. It’s a math-first approach, rather than a physics-first one, allowing models to emerge that are interpretable and often physically meaningful — crucial in scientific and engineering applications.
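
Real symbolic regression tools search enormous spaces of candidate expressions, typically with genetic programming. The toy below conveys the spirit in a few lines by scoring a handful of hand-picked basis formulas against noisy data; the hidden law and the candidate set are invented for illustration.

```python
import numpy as np

# Noisy data from a hidden law, y = 3x^2 + 1.
rng = np.random.default_rng(0)
x = rng.uniform(0.5, 2.0, 30)
y = 3 * x**2 + 1 + rng.normal(0, 0.01, 30)

# A tiny "grammar" of candidate formulas; real tools search expression trees.
primitives = {"x": x, "x^2": x**2, "exp(x)": np.exp(x), "1/x": 1 / x}

best = None
for name, basis in primitives.items():
    # Fit y ~ a*basis + b by least squares and keep the best-scoring formula.
    A = np.column_stack([basis, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    mse = np.mean((A @ coef - y) ** 2)
    if best is None or mse < best[0]:
        best = (mse, f"y = {coef[0]:.2f}*{name} + {coef[1]:.2f}")

print(best[1])   # recovers approximately: y = 3.00*x^2 + 1.00
```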

Imagine trying to predict a curve with only one data point — impossible. But if you also know the slope at that point (how steep the curve is), and how the slope is changing (its curvature), you can start to guess what the shape of the curve looks like in that area. This is the essence of a Taylor Series — a math trick that uses derivatives at a single point to approximate the entire surrounding function.
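
In symbols, the second-order Taylor expansion around a point a reads:

$$
f(x) \approx f(a) + f'(a)\,(x - a) + \frac{f''(a)}{2}\,(x - a)^2 + \cdots
$$

Each additional derivative measured at the same point adds another term, and another constraint on what the nearby function can look like.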

Jacob Hochhalter, Associate Professor, Department of Mechanical Engineering, University of Utah

Instead of training models only on the values of data points (e.g., displacement under pressure), Hochhalter and his team also feed in their first, second, or even third derivatives — things like how fast that displacement is changing, or how the rate of change itself is changing. This gives the model far more information from far fewer data points.

“I don’t just know where I am — I know where I’m headed, how fast I’m speeding up, and so on.”

That’s essentially the Taylor Series mindset — and it’s a game-changer in data-limited environments like aerospace, bioengineering, or materials science.
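
A toy calculation makes the payoff concrete. Below, a single measurement point, enriched with its first and second derivatives, pins down all three coefficients of a quadratic model exactly; the hidden law and the numbers are invented for illustration, not taken from the talk.

```python
import numpy as np

# Hidden law: y = 2x^2 - 3x + 1. At one point x0 we record the value plus
# its first and second derivatives (obtainable via hypercomplex AD).
x0, f0, df0, d2f0 = 2.0, 3.0, 5.0, 4.0

# Fit y = a*x^2 + b*x + c so the model matches all three measurements:
#   value:      a*x0^2 + b*x0 + c = f0
#   slope:      2*a*x0 + b        = df0
#   curvature:  2*a               = d2f0
A = np.array([[x0**2, x0, 1.0],
              [2 * x0, 1.0, 0.0],
              [2.0, 0.0, 0.0]])
a, b, c = np.linalg.solve(A, [f0, df0, d2f0])
print(a, b, c)   # 2.0 -3.0 1.0: one enriched point determines the model
```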

To demonstrate, Hochhalter applied the method to a pressurized cylinder — a scenario equally applicable to blood vessels, aircraft fuselages, and gas pipelines. Using traditional training, a model with 10 data points delivered ~200% error. With derivative-informed training, the error plummeted to near-zero — matching analytic solutions almost exactly, except for minor numerical noise.
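
The standard closed-form benchmark for this geometry is the Lamé solution for a thick-walled cylinder under internal pressure p, with inner radius a and outer radius b (whether the talk used exactly this form was not specified):

$$
\sigma_r(r) = \frac{p\,a^2}{b^2 - a^2}\left(1 - \frac{b^2}{r^2}\right), \qquad
\sigma_\theta(r) = \frac{p\,a^2}{b^2 - a^2}\left(1 + \frac{b^2}{r^2}\right)
$$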

Hochhalter wrapped up with a challenge to reimagine how engineering disciplines approach AI — not by out-scaling with bigger GPUs or deeper networks, but by infusing models with better mathematical structure and informative data. It’s a message that resonates across industries where experimentation is costly, time-consuming, or ethically constrained.

Summit Impact and Call to Action

In addition to the faculty talks highlighted above, the AI Summit included a student poster session showcasing next-generation AI research and plenty of networking opportunities, some of them on the stylish rooftop overlooking Rice Stadium, to spark collaborations.

Musgrave invoked a quote from computer science pioneer and Price College alumnus Alan Kay: “The best way to predict the future is to invent it.” This ethos drove the event, encouraging attendees to shape AI’s trajectory in science, economics, national security, and even art and literature. This message resonated with the event's attendees from Utah’s tech community, known for its entrepreneurial spirit and collaborative ecosystem.

For more information about the AI Summit or on AI initiatives at the University of Utah, visit price.utah.edu.

Interior rendering of the future $194M John and Marcia Price Computing and Engineering Building. Housing the Kahlert School of Computing and AI, FinTech, and cybersecurity programs, it will increase the Price College of Engineering’s graduating class by more than 500 students per year.