Amazon gift supports 10 Penn Ph.D. students working on trustworthy AI

Today, Amazon Web Services (AWS) announced a $700,000 gift to the University of Pennsylvania School of Engineering and Applied Science to support research in fair and trustworthy AI. The funds will be distributed among 10 engineering Ph.D. students conducting research in this field.

The Penn students receiving funding are conducting their research under the auspices of the ASSET Center, part of Penn Engineering’s IDEAS Initiative.

University of Pennsylvania

The students will carry out their research under the auspices of the ASSET (AI-Enabled Systems: Safe, Explainable and Trustworthy) Center, part of Penn Engineering’s Innovation in Data Engineering and Science (IDEAS) Initiative. ASSET’s mission is to advance the “science and tools for developing AI-enabled, data-driven engineering systems, so that designers can guarantee that they do what they are designed to do, and users can trust them to do what they expect.”

“The ASSET Center is proud to receive Amazon’s support for these doctoral students, who are working to ensure that systems that depend on artificial intelligence are trustworthy,” said Rajeev Alur, director of ASSET and Zisman Family Professor in the Department of Computer and Information Science (CIS). “Penn’s interdisciplinary research teams are leading the way in answering the key questions that will define the future of AI and its acceptance by society: How do we make sure AI-enabled systems are safe? How can we provide assurances and guarantees against harm? How can their decisions be explained in ways that are understandable to stakeholders?”

“It’s great to collaborate with Penn on important topics such as trust, safety and interpretability,” said Stefano Soatto, vice president of Applied Science for Amazon Web Services (AWS) Artificial Intelligence (AI). “These are key to the long-term beneficial impact of AI, and Penn has a leadership position in this field. I look forward to seeing the students’ work in action in the real world.”

The funded research projects center on the themes of machine learning algorithms with fairness, privacy, robustness and security guarantees; analysis of AI-enabled systems for assurance; explainability and interpretability; neurosymbolic learning; and human-centric design.

Responsible AI

Generative AI raises new challenges in defining, measuring and mitigating concerns about fairness, toxicity and intellectual property. But work on solutions has begun.

“This gift from AWS comes at an important time for research in responsible AI,” said Michael Kearns, an Amazon Scholar and National Center Professor of Management and Technology. “Our students are working hard to create the knowledge that industry requires for the commercial technologies that will define so much of our lives, and it is important to invest in talented researchers who focus on technically rigorous and socially engaged ways of using it.”

Below are the 10 students receiving funding, along with details of their research.

  • Eshwar Ram Arunachaleswaran is a second-year Ph.D. student, advised by Sampath Kannan, Henry Salvatori Professor in the Department of Computer and Information Science, and Anindya De, assistant professor in computer science. Arunachaleswaran’s research focuses on notions of fairness and on fair algorithms for settings where individuals are classified by a network of classifiers, possibly with feedback.
  • Natalie Collina is a second-year Ph.D. student, advised by Kearns and Aaron Roth, Henry Salvatori Professor of Computer and Cognitive Science, who, like Kearns, is also an Amazon Scholar. Collina is investigating models for data markets in which a seller may choose to add noise to query answers, for both privacy and revenue purposes. Her goal is to place the study of data markets on firm algorithmic and microeconomic foundations.
  • Ziyang Li is a fourth-year Ph.D. student, advised by Mayur Naik, professor of computer science. Li is developing a programming language and open-source framework called Scallop for building neurosymbolic AI applications. Li sees neurosymbolic AI as a growing paradigm that seeks to integrate deep learning and classical algorithms to get the best of both worlds.
  • Stephen Mell is a fourth-year Ph.D. student, advised by Osbert Bastani, assistant professor in CIS, and Steve Zdancewic, Schlein Family Distinguished Professor and associate chair of CIS. Mell is currently studying how to make machine learning algorithms more robust and data-efficient by leveraging neurosymbolic techniques. His goal is to design algorithms that can learn from just a handful of examples in safety-critical settings.
  • Georgy Noarov is a third-year Ph.D. student, advised by Kearns and Roth. Noarov is studying methods for uncertainty quantification of black-box machine learning models, including strong variants of calibration and conformal prediction.
  • Artemis Panagopoulou is a second-year Ph.D. student, advised by Chris Callison-Burch, associate professor in CIS, and Mark Yatskar, assistant professor in CIS. Panagopoulou designs explainable image classification models that use large language models to generate the concepts used for classification. The goal of Panagopoulou’s research is to produce more trustworthy AI systems by creating human-readable features that are faithfully used by the model during classification.
  • Jianing Qian is a third-year Ph.D. student, advised by Dinesh Jayaraman, assistant professor in CIS. Qian’s research focuses on acquiring hierarchical, object-centric visual representations that are interpretable to humans, and on learning structured visuomotor control policies for robots that exploit these visual representations through imitation and reinforcement learning.
  • Alex Robey is a third-year Ph.D. student, advised by George Pappas, UPS Foundation Professor and chair of the Department of Electrical and Systems Engineering, and Hamed Hassani, assistant professor in the Department of Electrical and Systems Engineering. Robey works on deep learning that is robust to distribution shifts arising from natural variation, such as changes in lighting, background and weather.
  • Anton Xue is a fourth-year Ph.D. student, advised by Alur. Xue’s research focuses on the robustness and interpretability of deep learning. He is currently investigating techniques to compare and analyze the effectiveness of interpretable learning methods.
  • Yahan Yang is a third-year Ph.D. student, advised by Insup Lee, Cecilia Fitler Moore Professor in the Department of Computer and Information Science and director of the PRECISE Center in the School of Engineering and Applied Science. Yang has developed a two-stage classification technique, called memory classifiers, that can improve robustness to distribution shifts. Her approach combines expert knowledge of the “high-level” structure of the data with standard classifiers.
