Speaker
Description
There has been significant progress in the development of large language models in industry. These models use self-supervised learning methods through which a large AI model can learn to effectively capture a broad scope of contexts, resulting in so-called Foundation Models (FMs). Thanks to their strong encoding capability, which extracts a comprehensive set of key features from data, FMs trained on big data can be applied to a spectrum of tasks with high-quality output, often competitive with traditional deep learning models trained via supervised learning. There has also been active research toward scientific FMs: large models trained with self-supervision on scientific datasets. While progress has been made, unique challenges and potential AI/ML research opportunities have been identified for scientific datasets. In this talk, I will give brief examples of FMs and their applications in High Energy Physics. I will also discuss FM research as an opportunity to develop a greater AI/ML research ecosystem that can benefit multiple domains of science.