
Dongkuan Xu

IST Ph.D. Student
Graduate Adviser: Xiang Zhang
Graduate Cohort: 2017
Biography

Hello! I am a Ph.D. student at Penn State, where I work on machine learning, natural language processing, and data mining, advised by Xiang Zhang. I received my M.S. in Optimization from the University of Chinese Academy of Sciences, advised by Yingjie Tian, and my B.E. from Renmin University of China, advised by Wei Xu.

In the summer of 2021, I was a research intern at Microsoft Research (MSR), working with Subho, Xiaodong, and Dey on neural architecture search for efficient Transformer models. In 2020, I was an intern research scientist at Moffett AI, where I was fortunate to be advised by Ian En-Hsu Yen, studying model compression and few-shot knowledge distillation. I also spent two wonderful summers (2019 and 2018) as a research intern at NEC Labs America, advised by Wei Cheng and Haifeng Chen, working on contrastive learning and multi-task learning for multivariate time series.

Outside of work, I am a big fan of American football. I love the Nittany Lions and the New York Giants. I also enjoy working out, playing soccer, and hotpot.

Research Interests

I am interested in efficient AI, including parameter efficiency, data efficiency, and sparse learning. My current research investigates how to improve the efficiency of deep learning methods to achieve Pareto optimality between resources (e.g., computation, data) and performance (e.g., inference, training). My goal is to make AI learning in low-resource scenarios efficient, effective, and elastic.

  • Parameter Efficiency: Neural Architecture Search, Pruning, Knowledge Distillation
  • Data Efficiency: Few-shot Compression, Contrastive Learning, Generator Learning
  • Computation Efficiency: Weight-sharing Learning, Reduced-cost Training, Training-free Proxies
  • Model Architectures: Transformers, Temporal Networks, Graph Neural Networks