
Statistical Inference &
Information Theory Laboratory


The Statistical Inference and Information Theory Laboratory has been directed by Professor Junmo Kim since 2009. Our research focuses on developing theoretical methods that can be applied to image processing, computer vision, pattern recognition, and machine learning.

Recruit
If you are interested in our lab, please contact us using the information below.
   •  Position
       -  Ph.D. / M.S. students
       -  Internship
   •  Requirements
       -  Resume (free form)
       -  Academic transcript
Professor: junmo.kim@kaist.ac.kr
Lab head: jskpop@kaist.ac.kr

Research Topics 

We mainly focus on developing deep learning algorithms to solve problems in computer vision. Beyond general tasks such as classification and detection, we also study applied topics including generative models (GANs and diffusion models), depth estimation, medical imaging, learning with noisy data, and semantic segmentation. In addition, our laboratory conducts research on the fundamental structure of neural networks, such as network minimization and architecture search. Details of our research topics can be found in the Research and Publication sections.
Deep Learning Applications
Our applied deep learning research focuses on generative models (GANs, diffusion models), multi-modal learning, autonomous driving (depth estimation, object tracking), and anomaly detection (manufacturing, medical imaging).
Neural Network Architecture
Our research on neural network architectures aims to find optimal structures for convolutional neural networks. Recent topics include vision transformers, the design of new architectures and operations, and automated architecture search.
Learning Methods
We study various learning methods that address problems in training data, such as noisy labels and biases. We also aim to improve the practical utility of neural networks through trustworthy ML, domain adaptation, and continual learning.

News [More]

[New] We have a publication accepted for CVPR 2023.
"Implicit 3D Human Mesh Recovery using Consistency with Pose and Shape from Unseen-view", Hanbyel Cho, Yooshin Cho, Jaesung Ahn, and Junmo Kim

[New] We have a publication accepted for CVPR 2023.
"Fix the Noise: Disentangling Source Feature for Controllable Domain Translation", Dongyeun Lee, Jae Young Lee, Doyeon Kim, Jaehyun Choi, Jaejun Yoo, and Junmo Kim

[New] We have a publication accepted for CVPR 2023.
"Reinforcement Learning Based Black-Box Model Inversion Attacks", Gyojin Han, Jaehyun Choi, Haeil Lee, and Junmo Kim

[New] We have a publication accepted for ICRA 2023.
"Test-Time Synthetic-to-Real Adaptive Depth Estimation", Eojindl Yi and Junmo Kim

[New] We have a publication accepted for ICRA 2023.
"Lightweight Monocular Depth Estimation via Token-Sharing Transformer", Dong-Jae Lee*, Jae Young Lee*, Hyounguk Shon, Eojindl Yi, Yeong-Hun Park, Sung-Sik Cho, and Junmo Kim (equally contributed by the authors*)

[New] We have a publication accepted for AAAI 2023.
"Frequency Selective Augmentation for Video Representation Learning", Jinhyung Kim, Taeoh Kim, Minho Shim, Dongyoon Han, Dongyoon Wee, and Junmo Kim

[New] We have a publication accepted for NeurIPSW 2022.
"Training Time Adversarial Attack Aiming the Vulnerability of Continual Learning", Gyojin Han*, Jaehyun Choi*, Hyeong Gwon Hong, and Junmo Kim (equally contributed by the authors*)

[New] We have a publication accepted for WACV 2023.
"I See-Through You: A Framework for Removing Foreground Occlusion in Both Sparse and Dense Light Field Images", Jiwan Hur*, Jae Young Lee*, Jaehyun Choi, and Junmo Kim (equally contributed by the authors*)

[New] We have a publication accepted for NeurIPS 2022.
"UniCLIP: Unified Framework for Contrastive Language-Image Pre-training", Janghyeon Lee*, Jongsuk Kim*, Hyounguk Shon, Bumsoo Kim, Seung Hwan Kim, Honglak Lee, and Junmo Kim (equally contributed by the authors*)
