At the heart of designing Machine Learning (ML) algorithms lies the extraction of representations, or features, from data that 'better' facilitate downstream tasks such as classification, regression, and sampling. Deep Neural Networks have demonstrated success in learning representations for several data domains, including images, audio, and text, although they still suffer from problems such as the need for large amounts of labelled training data, poor generalization across heterogeneous data sources, and lack of robustness. In this talk, we argue that guiding representation learning with priors/regularizers that are non-oblivious to the data/problem at hand alleviates some of these problems. Specifically, we discuss two questions: (a) How can ML models trained on a set of source domains be generalized to an unseen target domain with different statistics? (b) What is the optimal prior distribution to impose in a generative auto-encoder used for sampling from an unknown distribution?
Please click the link https://zoom.us/j/92676735430?pwd=UG5WVHlzTFViQ0NONksrYWJoblNYdz09 to join the Zoom meeting.
Meeting ID: 926 7673 5430