
Statistical Inference &
Information Theory Laboratory


The Statistical Inference and Information Theory Laboratory has been directed by Professor Junmo Kim since 2009. Our research focuses on the development of theoretical methods that can be applied to image processing, computer vision, pattern recognition, and machine learning.

Recruitment
If you are interested in joining our lab, please contact us using the information below.
Note: We have filled all available positions for 2025 and are not accepting new student applications for Fall 2025.

   •  Positions
       -  Ph.D. / M.S. students
       -  Internships
   •  Requirements
       -  Resume (free form)
       -  Academic transcript

Professor: junmo.kim@kaist.ac.kr
Lab head: kmc0207@kaist.ac.kr

Research Topics 

We mainly focus on developing deep learning algorithms to solve problems in computer vision. Beyond general tasks such as classification and detection, we also study applied topics including generative models (GANs and diffusion models), depth estimation, medical imaging, learning from noisy data, and semantic segmentation. Our laboratory also conducts research on the fundamental structure of neural networks, such as network minimization and architecture search. Details on our research topics can be found in the Research and Publication sections.
Deep Learning Applications
Our applied deep learning research focuses on generative models (GANs, diffusion models), multi-modal learning, autonomous driving (depth estimation, object tracking), and anomaly detection (manufacturing, medical imaging).
Neural Network Architecture
Our research on neural network architectures aims to find optimal structures for convolutional neural networks. Recent topics include vision transformers, the design of new architectures and network operations, and automated architecture search.
Learning Methods
We study various learning methods that address problems in training data, such as noisy labels and dataset biases. We also aim to improve the practical utility of neural networks through trustworthy ML, domain adaptation, and continual learning.

News

[New] We have one publication accepted for CVPR 2025.
  1. Efficient Dynamic Scene Editing via 4D Gaussian-based Static-Dynamic Separation
                   Joohyun Kwon*, Hanbyel Cho*, Junmo Kim (equally contributed by the authors*)

[New] We have three publications accepted for WACV 2025. 
  1.  AH-OCDA: Amplitude-based Curriculum Learning and Hopfield Segmentation Model for Open Compound Domain Adaptation 
                     Jaehyun Choi*, Junwon Ko*, Dong-Jae Lee*, and Junmo Kim (equally contributed by the authors*) 
  2.  Beta Sampling is All You Need: Efficient Image Generation Strategy for Diffusion Models using Stepwise Spectral Analysis
                      Haeil Lee*, Hansang Lee*, Seoyeon Gye, and Junmo Kim (equally contributed by the authors*)
  3.  Reducing the Content Bias for AI-generated Image Detection
                      Seoyeon Gye*, Junwon Ko*, Hyounguk Shon*, Minchan Kwon, and Junmo Kim (equally contributed by the authors*)

[New] We have two publications accepted for NeurIPS 2024.
  1. Unlocking the Capabilities of Masked Generative Models for Image Synthesis via Self-Guidance
                   Jiwan Hur, Dong-Jae Lee, Gyojin Han, Jaehyun Choi, Yunho Jeon†, and Junmo Kim† (Corresponding authors†)
  2. Self-supervised Transformation Learning for Equivariant Representations
                   Jaemyung Yu, Jaehyun Choi, Dong-Jae Lee, HyeongGwon Hong, and Junmo Kim

[New] We have two publications accepted for EMNLP 2024.
  1. Preserving Multi-Modal Capabilities of Pre-trained VLMs for Improving Vision-Linguistic Compositionality
                   Youngtaek Oh, Jae Won Cho, Dong-Jin Kim, In So Kweon†, and Junmo Kim† (Corresponding authors†)
  2. StablePrompt: Automatic Prompt Tuning using Reinforcement Learning for Large Language Model
                   Minchan Kwon, Gaeun Kim, Jongsuk Kim, Haeil Lee, and Junmo Kim

[New] We have two publications accepted for ECCV 2024.
  1. Implicit Steganography Beyond the Constraints of Modality
                   Sojeong Song*, Seoyun Yang*, Chang D. Yoo†, and Junmo Kim† (equally contributed by the authors*, Corresponding authors†)
  2. Learning Neural Deformation Representation for 4D Dynamic Shape Generation
                   Gyojin Han, Jiwan Hur, Jaehyun Choi, and Junmo Kim

[New] We have one publication accepted for Interspeech 2024.
  1. AVCap: Leveraging Audio-Visual Features as Text Tokens for Captioning
                   Jongsuk Kim, Jiwon Shin, and Junmo Kim

[New] We have one publication accepted for ICML 2024.
  1. EquiAV: Leveraging Equivariance for Audio-Visual Contrastive Learning
                   Jongsuk Kim*, Hyeongkeun Lee*, Kyeongha Rho*, Junmo Kim, and Joon Son Chung (equally contributed by the authors*)

[New] We have one publication accepted for ICASSP 2024.
  1. Stereo-Matching Knowledge Distilled Monocular Depth Estimation Filtered by Multiple Disparity Consistency
                   Woonghyun Ka*, Jae Young Lee*, Jaehyun Choi, and Junmo Kim (equally contributed by the authors*)

[New] We have one publication accepted for CVPR 2024.
  1. ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object 
                   Chenshuang Zhang, Fei Pan, Junmo Kim*, In So Kweon*, and Chengzhi Mao* (Corresponding authors*)

[New] We have three publications accepted for AAAI 2024.
  1. FRED: Towards a Full Rotation-Equivariance in Aerial Image Object Detection
                    Chanho Lee, Jinsu Son, Hyounguk Shon, Yunho Jeon, and Junmo Kim
  2. Foreseeing Reconstruction Quality of Gradient Inversion: An Optimization Perspective
                     HyeongGwon Hong, Yooshin Cho, Hanbyel Cho, Jaesung Ahn, and Junmo Kim
  3. Modeling Stereo-Confidence Out of the End-to-End Stereo-Matching Network via Disparity Plane Sweep
                     Jae Young Lee*, Woonghyun Ka*, Jaehyun Choi, and Junmo Kim (equally contributed by the authors*)