Links: github | gscholar | CV | roadmap
I am currently a machine learning researcher (post-college appointee) at Lawrence Livermore National Laboratory, where I mostly work on data science projects: calculating risk factors for COVID-19 patients, predicting incipient failures in the energy grid, and developing principles for interpretable and explainable AI. I graduated from the University of California, Berkeley with a BA in Applied Mathematics (EECS). Since November 2019, I have been working in Yi Ma's lab, researching topics such as low-dimensional models in deep learning, nonconvex optimization, and dictionary learning. Before that, I worked with Dylan Paiton in Bruno Olshausen's lab on sparse coding theory, including the Locally Competitive Algorithm.
I am interested in both the theory and practice of deep learning, especially 1) the development of algorithms for generalizable, robust, and interpretable models, using tools from classical low-dimensional methods as well as newer fields such as information theory and high-dimensional statistics; and 2) the application of data-driven models to practical domains such as neuroscience and healthcare, to better understand and solve real-life problems.
[Dec 19] L0-norm, L1-norm, L4-norm in n-Sphere writeup repo
[Feb 19] Brief Introduction to Independent Subspace Analysis writeup
[Oct 17] Artistic Style Transfer repo
Special Thanks: I want to take this opportunity to thank my family for the support they have given me.
Last updated: 6 June 2020