The Science Hub for Humanity and Artificial Intelligence at UCLA has announced three gift-level awards and a sponsored project recognizing researchers who study the societal impact of artificial intelligence (AI).
The Science Hub was launched in October 2021 to support projects that investigate how AI can help solve humanity's most pressing challenges while addressing critical questions about bias, fairness, accountability, and responsible AI. The hub seeks to foster collaboration between Amazon scientists and academic researchers across disciplines, including computer science, electrical and computer engineering, and mechanical and aerospace engineering.
Funded by Amazon and housed at the UCLA Samueli School of Engineering, the Science Hub supports a number of research projects and doctoral fellowships. In May 2022, Amazon and UCLA announced the hub's first awards, which focused on topics ranging from computational neuroscience and children's automatic speech recognition to human-robot collaboration and privacy-preserving machine learning.
The award recipients and their respective projects are as follows:
Kai-Wei Chang, associate professor and Amazon Scholar, and Nanyun (Violet) Peng, assistant professor, Department of Computer Science and Amazon Visiting Academic: "Contextualized document understanding: learning to incorporate relevant information across documents"
"Documents, such as receipts, tax forms, and resumes, are critical for communication between businesses and individuals," Chang and Peng write. "However, processing them manually is tedious, time-consuming, and error-prone. Automatically extracting information from scanned documents using an AI system is therefore a valuable solution. However, the variation in document layouts poses challenges for AI systems that aim to understand documents.
"In this project, we explore the potential of using contextual information to improve AI's ability to process, interpret, and extract information from documents," they continue. "We propose a multimodal foundation model based on denoising sequence-to-sequence pretraining and examine how contextual information, such as document type, purpose, and filling instructions, can be leveraged to improve document understanding."
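One common way to condition a sequence-to-sequence model on such metadata is simply to serialize the context and prepend it to the document text. The sketch below illustrates that general pattern only; the field names, tag format, and example data are invented for illustration and are not taken from the project.

```python
# Illustrative sketch: conditioning a T5-style seq2seq document model on
# contextual metadata by serializing it into the input string.
# All tag names and example values here are hypothetical.

def build_model_input(ocr_text, doc_type, purpose, instructions):
    """Combine contextual metadata with the (OCR'd) document text into
    a single input string for a text-to-text model."""
    context = f"type: {doc_type} | purpose: {purpose} | instructions: {instructions}"
    return f"context: {context} document: {ocr_text}"

inp = build_model_input(
    ocr_text="Total due: $42.00  Date: 2023-04-01",
    doc_type="receipt",
    purpose="expense reimbursement",
    instructions="extract total and date",
)
print(inp)
```

The model can then learn, during pretraining or fine-tuning, to use the prefixed context when deciding which spans of the document are relevant.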
Cho-Jui Hsieh, associate professor, Department of Computer Science: "Making large language models small and efficient"
"Large language models (LLMs) have shown remarkable capabilities across a wide range of tasks. However, these models come with high computation and memory costs," Hsieh writes. "The open-sourced T5 model contains 770 million parameters, and advanced models such as GPT and PaLM typically have tens of billions of parameters. The gigantic model size also results in significant computational overhead during inference, making it challenging to deploy language models in real-time applications, not to mention on edge devices with limited capacity.
"We will introduce a new family of data-aware compression algorithms that take into account both the structure and the semantics of language," he continues. "For example, the importance of words in a text can vary widely, which opens the possibility of filtering out unimportant tokens in the model. In addition, texts often have a strong low-rank or cluster structure, which provides an opportunity to improve existing compression methods by exploiting the structure of language and developing new formulations for faster inference."
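The token-filtering idea can be sketched very simply: score each token's importance and keep only the highest-scoring fraction before running inference. The scoring function, example tokens, and keep ratio below are stand-ins invented for illustration; in practice the scores might come from attention weights or a learned lightweight scorer, not hard-coded values.

```python
# Hypothetical sketch of data-aware token pruning: drop the tokens whose
# importance score is lowest, preserving the original order of the rest.

def prune_tokens(tokens, scores, keep_ratio=0.5):
    """Keep the top `keep_ratio` fraction of tokens by score, in order."""
    k = max(1, int(len(tokens) * keep_ratio))
    # Indices of the k highest-scoring tokens.
    top = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)[:k]
    keep = set(top)
    return [t for i, t in enumerate(tokens) if i in keep]

tokens = ["the", "invoice", "total", "is", "42", "dollars"]
scores = [0.05, 0.90, 0.85, 0.10, 0.95, 0.60]  # invented importance scores
print(prune_tokens(tokens, scores))  # → ['invoice', 'total', '42']
```

Shortening the input sequence this way reduces inference cost roughly in proportion to the tokens removed, since attention cost grows with sequence length.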
Chenfanfu Jiang, associate professor, Department of Mathematics: "Differentiable-physics-augmented neural radiance fields for real-to-sim and manufacturing-ready 3D garment reconstruction"
"The core challenge is to digitally reconstruct garments in a way that not only accurately models their 3D shape, but also predicts how they move and how they can be manufactured," Jiang writes. "Traditional methods capture shape but overlook the fabric's material properties and sewing patterns, which are essential for realistic simulation and production. Addressing this gap has broad implications, from faster, less wasteful design processes in the fashion industry to improved realism in virtual worlds.
"We integrate physics-aware machine learning models with existing 3D geometry techniques," he continues. "The goal is to recover the 2D sewing patterns and material parameters from images or videos of the garment. This enables both virtual try-on simulation and real-world manufacturing."
Jens Palsberg, professor, Department of Computer Science: "Learning to prune false positives from static program analysis"
"Static program analyses can detect security vulnerabilities and support program verification," Palsberg writes. "If we can prune the false positives that these tools sometimes produce, they will become even more useful for developers.
"Our society produces more code for more tasks than ever before, but that code is of mixed quality. Fortunately, we have tools that can detect many of the problems, and if we can make those tools more useful, we will be on a path to solving more of them," he continues. "Our idea is to use machine learning to prune false positives. Our goal is to reach a false-positive rate of no higher than 15-20 percent."
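The idea of learning to prune false positives can be illustrated with a toy classifier over features of each analysis warning. Everything below is invented for illustration: the features (call-chain depth, analysis confidence), the training data, and the choice of a perceptron as the model; the project's actual features and model are not described in the announcement.

```python
# Hypothetical sketch: train a tiny perceptron on features of past
# static-analysis warnings, labeled 1 (true positive) or 0 (false positive),
# then use it to decide which new warnings to keep.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Classic perceptron update rule over a small labeled dataset."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def keep_warning(w, b, x):
    """True means show the warning to the developer; False means prune it."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0

# Invented features: [depth of warning in call chain, analysis confidence].
train_x = [[1, 0.9], [2, 0.8], [7, 0.2], [8, 0.1]]
train_y = [1, 1, 0, 0]  # deep, low-confidence warnings were false positives
w, b = train_perceptron(train_x, train_y)
print(keep_warning(w, b, [1, 0.85]))  # → True  (shallow, high confidence: kept)
print(keep_warning(w, b, [9, 0.15]))  # → False (deep, low confidence: pruned)
```

In a real system, the filter's threshold would be tuned so that pruning aggressively enough to hit the 15-20 percent false-positive target does not also discard too many genuine vulnerabilities.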