The UT Austin-Amazon Science Hub has announced the inaugural winners of two gift project awards and a doctoral fellowship. The awards recognize researchers whose work advances the hub’s goals: tackling current challenges through advanced technological solutions that benefit society as a whole.
The Amazon-funded collaboration, launched in April 2023 and hosted by UT Austin’s Cockrell School of Engineering, aims to foster partnerships among faculty, students and other leading scholars and to promote a sustainable pipeline of research talent.
In line with the hub’s goals, this year’s award winners are conducting research in artificial intelligence, machine learning and large language models (LLMs).
The fellowship provides selected UT Austin doctoral students with up to a full year of funding to pursue independent research projects. The two selected research projects will be led by UT faculty principal investigators.
The winners of the awards are as follows:
Doctoral Fellowship Award
Ajay Jaiswal, PhD candidate, Visual Informatics Group
Jaiswal’s research focuses on efficient and scalable learning, deep neural network compression, sparse networks and efficient inference. He is a member of the Visual Informatics Group at UT Austin (VITA). His current research project focuses on efficiently scaling multimodal models on the server while also making them deployable at the edge. His advisers are Ying Ding, Bill and Lewis Professor in the School of Information and herself a recipient of an Amazon Research Award; and Atlas Wang, Jack Kilby/Texas Instruments Endowed Assistant Professor in the Chandra Family Department of Electrical and Computer Engineering.
Gift Project Awards
“Verifying the Factuality of LLMs, with LLMs”
Greg Durrett, associate professor of computer science, and his team plan to build on prior work in political fact-checking and large language models to improve machine-written text. The team has previously critiqued the output of summarization models, and this project’s goal is to break down and verify the answers in paragraph-length text. The system has three phases: decomposition, sourcing and verification. This mimics the process a human uses to fact-check content.
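For readers curious what a decompose-source-verify flow can look like in code, here is a minimal, hypothetical sketch. The function names, prompts, and the generic `llm` and `retrieve` callables are illustrative assumptions, not the team’s actual system.

```python
# A minimal sketch of a three-phase fact-checking pipeline:
# decomposition -> sourcing -> verification. All prompts and helpers
# here are hypothetical stand-ins, not the project's real implementation.
from typing import Callable, Dict, List

def decompose(paragraph: str, llm: Callable[[str], str]) -> List[str]:
    """Phase 1: break a paragraph-length answer into atomic claims."""
    raw = llm(f"Split the following text into one factual claim per line:\n{paragraph}")
    return [line.strip() for line in raw.splitlines() if line.strip()]

def source(claim: str, retrieve: Callable[[str], List[str]]) -> List[str]:
    """Phase 2: retrieve candidate evidence passages for a claim."""
    return retrieve(claim)

def verify(claim: str, evidence: List[str], llm: Callable[[str], str]) -> str:
    """Phase 3: label the claim against the retrieved evidence."""
    joined = "\n".join(evidence)
    return llm(f"Evidence:\n{joined}\n\nClaim: {claim}\nAnswer 'supported' or 'unsupported':")

def fact_check(paragraph: str, llm, retrieve) -> Dict[str, str]:
    """Run all three phases and return a label for each extracted claim."""
    return {c: verify(c, source(c, retrieve), llm) for c in decompose(paragraph, llm)}

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs without any real model or search index.
    stub_llm = lambda prompt: "supported" if "Claim:" in prompt else "The sky is blue.\nWater is dry."
    stub_retrieve = lambda claim: ["(retrieved passage placeholder)"]
    print(fact_check("The sky is blue. Water is dry.", stub_llm, stub_retrieve))
```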
“TinyCLIP: Transferable Smaller Multimodal Vision-Language Models”
The transferability of CLIP (Contrastive Language-Image Pretraining) models is crucial to many vision-language tasks. Sujay Sanghavi, associate professor of electrical and computer engineering, and his team aim to develop smaller CLIP models that remain fully transferable. The project will use several new algorithmic ideas to ensure the new models are functional and will also involve dataset creation.
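One common way to obtain a smaller CLIP-style model is to distill a compact student so its embeddings align with a frozen teacher’s in the shared image-text space. The sketch below illustrates that general idea only; the architectures, loss, and random data are assumptions for illustration and do not represent the team’s algorithmic approach.

```python
# Minimal, hypothetical sketch of embedding distillation for a smaller
# CLIP-style model. Dimensions, loss, and data are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Toy student encoder projecting features into a shared embedding space."""
    def __init__(self, in_dim: int, embed_dim: int):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim))

    def forward(self, x):
        return F.normalize(self.proj(x), dim=-1)  # unit-norm embeddings, as in CLIP

def distill_step(student_img, student_txt, teacher_img_emb, teacher_txt_emb,
                 img_feats, txt_feats, optimizer):
    """One step: align student image/text embeddings with frozen teacher embeddings."""
    s_img = student_img(img_feats)
    s_txt = student_txt(txt_feats)
    # Cosine-alignment loss against precomputed teacher embeddings.
    loss = (1 - (s_img * teacher_img_emb).sum(-1).mean()) + \
           (1 - (s_txt * teacher_txt_emb).sum(-1).mean())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    embed_dim, batch = 64, 8
    student_img = TinyEncoder(in_dim=128, embed_dim=embed_dim)
    student_txt = TinyEncoder(in_dim=96, embed_dim=embed_dim)
    opt = torch.optim.Adam(
        list(student_img.parameters()) + list(student_txt.parameters()), lr=1e-3)
    # Random stand-ins for real image/text features and frozen teacher embeddings.
    img_feats = torch.randn(batch, 128)
    txt_feats = torch.randn(batch, 96)
    teacher_img = F.normalize(torch.randn(batch, embed_dim), dim=-1)
    teacher_txt = F.normalize(torch.randn(batch, embed_dim), dim=-1)
    for step in range(5):
        print(distill_step(student_img, student_txt, teacher_img, teacher_txt,
                           img_feats, txt_feats, opt))
```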