
Statistical Inference &
Information Theory Laboratory


The Statistical Inference and Information Theory Laboratory has been directed by Professor Junmo Kim since 2009. Our research focuses on the development of theoretical methods that can be applied to image processing, computer vision, pattern recognition, and machine learning.

Recruit
If you are interested in joining our lab, please contact us using the information below.

   •  Position
       -  Ph.D. / M.S. students
       -  Internship
   •  Requirements
       -  Resume (free form)
       -  Academic transcript
Professor: junmo.kim@kaist.ac.kr
Lab head: kmc0207@kaist.ac.kr

Research Topics 

We mainly focus on developing deep learning algorithms to solve problems in computer vision. In addition to general tasks such as classification and detection, we study applied topics including generative models (GANs and diffusion models), depth estimation, medical imaging, learning with noisy data, and semantic segmentation. Our laboratory also conducts research on the fundamental structure of neural networks, such as network minimization and architecture search. Details on our research topics can be found in the Research and Publication sections.
Deep Learning Applications
Our applied deep learning research focuses on generative models (GANs, diffusion models), multi-modal learning, autonomous driving (depth estimation, object tracking), and anomaly detection (manufacturing, medical imaging).
Neural Network Architecture
Our research on neural network architectures aims to find optimal structures for convolutional neural networks. Recent topics include vision transformers, the design of new architectures and network operations, and automated architecture search.
Learning Methods
We study various learning methods to address problems in the training data, such as noisy labels and biases. We also aim to improve the real-world applicability of neural networks through trustworthy machine learning, domain adaptation, and continual learning.

 News    [More]   

[New] We have three publications accepted for NeurIPS 2025. 
  1.  Preference Distillation via Value based Reinforcement Learning
                      Minchan Kwon, Junwon Ko, Kangil Kim†, and Junmo Kim† (Corresponding authors†)
  2.  Frequency-Aware Token Reduction for Efficient Vision Transformer
                      Dong-Jae Lee, Jiwan Hur, Jaehyun Choi, Jaemyung Yu, and Junmo Kim
  3.  Video Diffusion Models Excel at Tracking Similar-Looking Objects Without Supervision
                      Chenshuang Zhang, Kang Zhang, Joon Son Chung†, In So Kweon†, Junmo Kim†, Chengzhi Mao† (Corresponding authors†)

[New] We have one publication accepted for ACMMM 2025.
  1. B4DL: A Benchmark for 4D LiDAR LLM in Spatio-Temporal Understanding
                    Changho Choi*, Youngwoo Shin*, Gyojin Han, Dong-Jae Lee, and Junmo Kim (*equal contribution)

  We have three publications accepted for ICCV 2025. 
  1.  Controllable Feature Whitening for Hyperparameter-Free Bias Mitigation
                      Yooshin Cho, Hanbyel Cho, Janghyeon Lee, HyeongGwon Hong, Jaesung Ahn, and Junmo Kim
  2.  SynAD: Enhancing Real-World End-to-End Autonomous Driving Models through Synthetic Data Integration
                      Jongsuk Kim, Jae Young Lee, Gyojin Han, Dong-Jae Lee, Minki Jeong, and Junmo Kim
  3.  DMQ: Dissecting Outliers of Diffusion Models for Post-Training Quantization
                      Dongyeun Lee, Jiwan Hur, Hyounguk Shon, Jae Young Lee, and Junmo Kim

   We have two publications accepted for INTERSPEECH 2025. 
  1.  InfiniteAudio: Infinite-Length Audio Generation with Consistency
                      Chaeyoung Jung, Hojoon Ki, Ji-Hoon Kim, Junmo Kim†, and Joon Son Chung† (Corresponding authors†)
  2.  FairASR: Fair Audio Contrastive Learning for Automatic Speech Recognition
                      Jongsuk Kim, Jaemyung Yu, Minchan Kwon, and Junmo Kim

   We have one publication accepted for CVPR 2025.
  1. Efficient Dynamic Scene Editing via 4D Gaussian-based Static-Dynamic Separation
                    Joohyun Kwon*, Hanbyel Cho*, and Junmo Kim (*equal contribution)