Johns Hopkins University and Amazon announce second year of Interactive AI (AI2AI) research awards

Amazon and Johns Hopkins University (JHU) announced the second-year recipients of PhD fellowships and faculty research awards as part of the JHU + Amazon Initiative for Interactive AI (AI2AI).

The AI2AI initiative, launched in April 2022 and housed in the Whiting School of Engineering, is focused on driving pioneering advances in AI, with an emphasis on machine learning, computer vision, natural language understanding, and speech processing.

As part of the initiative, annual Amazon fellowships are awarded to PhD students enrolled in the Whiting School of Engineering. Amazon also funds research projects led by JHU faculty in collaboration with postdoctoral researchers, undergraduate and graduate students, and research staff.

Below is a list of the awarded fellows and their research interests, followed by the faculty award recipients and their research projects.

Fellowship recipients

From left to right: Jiang Liu, Ambar Pal, Aniket Roy, and Xuan Zhang.

Jiang Liu is a fifth-year PhD student studying electrical and computer engineering, advised by Rama Chellappa. His research focuses on building secure and reliable AI systems, including computer vision algorithms that are robust to adversarial attacks, defenses against facial forgeries, and multimodal AI algorithms that can understand both vision and language.

Ambar Pal is a final-year PhD student studying computer science, advised by René Vidal and Jeremias Sulam. His research focuses on the theory and practice of secure AI, with the central philosophy that incorporating structural constraints from data can provably mitigate vulnerabilities to malicious agents in current ML systems.

Aniket Roy is a fourth-year PhD student studying computer science under the guidance of Chellappa. His research interests are computer vision and machine learning, specifically few-shot learning, multimodal learning, and generative AI, including diffusion models and large language models.

Xuan Zhang is a fifth-year PhD student studying computer science under the guidance of Kevin Duh. Her research focuses on sign language processing, with an emphasis on sign language recognition and translation.

Faculty award recipients

Top row, left to right: Rama Chellappa, Anjalie Field, Philipp Koehn, and Leibny Paola Garcia Perera; bottom row, left to right: Vishal Patel, Carey Priebe, Jan Trmal, and Masha Yarmohammadi.

Rama Chellappa, Bloomberg Distinguished Professor in the Department of Electrical and Computer Engineering and the Department of Biomedical Engineering: “Self-supervision for skeleton-based learning of actions”

“Supervised learning of skeleton encoders for action recognition has received significant attention. However, learning such encoders without labels remains a challenging problem. In this work, we propose to build on a contrastive-learning approach, collaborating with Amazon scientists on further refining the proposed approach and testing it on real-world sequences to validate its effectiveness and robustness.”

Anjalie Field, Assistant Professor of Computer Science: “Fair and private NLP for high-risk data”

“This proposal aims to develop text generation tools that create realistic synthetic data, which can facilitate research and model development while minimizing privacy violations.”

Philipp Koehn, Professor of Computer Science: “Convergence of language and translation models”

“We propose to combine the strengths of large language models (LLMs) and neural machine translation: the ability of LLMs to model broader, multi-sentence context and to learn from larger training data, and the focus of translation models on the actual task in a supervised fashion.”

Leibny Paola Garcia Perera, Assistant Research Scientist: “On-device compressed models for speaker diarization”

“In this proposal, we will study how to build efficient diarization models, based on self-supervised models, that can be deployed on device.”

Vishal Patel, Associate Professor, Vision & Image Understanding (VIU) Lab: “Language-guided universal domain adaptation”

“Recent advances in deep learning have led to the development of accurate and efficient models for various computer vision applications such as classification, segmentation, and detection. However, learning highly accurate models relies on the availability of large-scale annotated datasets. Performance degrades when models are evaluated on images drawn from a distribution different from that of the training images, as is common in the wild.”

Carey Priebe, Professor, Department of Applied Mathematics and Statistics, and Director, Mathematical Institute for Data Science (MINDS): “Comparing large language models using data kernels”

“We propose a framework for comparing and contrasting the representation spaces of deep neural networks, specifically large language models (LLMs) before and after the introduction of reinforcement learning from human feedback (RLHF), that is computationally practical, statistically sound, mathematically tractable, and visually interpretable.”

Jan Trmal, Associate Research Scientist, Center for Language and Speech Processing (CLSP), and Masha Yarmohammadi, Assistant Research Scientist, CLSP: “Developing an evaluation protocol for contextualized ASR”

“We propose to develop an evaluation protocol that covers a wide range of scenarios in which speaker context can be incorporated into the recognition process.”
