
Statistical Inference &
Information Theory Laboratory


The Statistical Inference and Information Theory Laboratory has been directed by Professor Junmo Kim since 2009. Our research focuses on the development of theoretical methods that can be applied to image processing, computer vision, pattern recognition, and machine learning.

Recruitment
If you are interested in our lab, please contact us using the information below.

   •  Position
       -  Ph.D. / M.S. students
       -  Individual research student
   •  Requirements
       -  Resume (free form)
       -  Transcript

Professor: junmo.kim@kaist.ac.kr
Lab head: jskpop@kaist.ac.kr

Research Topics 

We mainly focus on developing deep learning algorithms to solve problems in computer vision. In addition to general tasks such as classification and detection, we study applied work including generative models (GANs), depth estimation, medical imaging, learning with noisy data, and semantic segmentation. Our laboratory also conducts research on the fundamental structure of neural networks, such as network minimization and architecture search. Details of our various research topics can be found in the Research and Publication sections.
Deep Learning Applications

Our application-oriented research in deep learning focuses on autonomous driving (depth estimation, object tracking, multi-sensor recognition), video recognition (video Turing test, video captioning), and defect detection (manufacturing, medical imaging).

Neural Network Architecture

Our research on neural network architectures aims to find optimal structures for convolutional neural networks. Recent topics include designing new architectures, devising new operations within networks, and automated architecture search.

Learning Methods

We study various learning methods that address problems in training data, such as noisy labels and dataset biases. We also aim to improve the practical utility of neural networks through knowledge distillation, network pruning, and continual learning.


News

[New] We have a publication accepted for ICRA 2023.
"Lightweight Monocular Depth Estimation via Token-Sharing Transformer", Dong-Jae Lee, Jae Young Lee, Hyounguk Shon, Eojindl Yi, Yeong-Hun Park, Sung-Sik Cho, Junmo Kim

[New] We have a publication accepted for AAAI 2023.
"Frequency Selective Augmentation for Video Representation Learning", Jinhyung Kim, Taeoh Kim , Minho Shim, Dongyoon Han, Dongyoon Wee, and Junmo Kim

[New] We have a publication accepted for NeurIPSW 2022.
"Training Time Adversarial Attack Aiming the Vulnerability of Continual Learning", Gyojin Han*, Jaehyun Choi*, Hyeong Gwon Hong, and Junmo Kim (equally contributed by the authors*)

[New] We have a publication accepted for WACV 2023.
"I See-Through You: A Framework for Removing Foreground Occlusion in Both Sparse and Dense Light Field Images", Jiwan Hur*, Jae Young Lee*, Jaehyun Choi, and Junmo Kim (equally contributed by the authors*)

[New] We have a publication accepted for NeurIPS 2022.
"UniCLIP: Unified Framework for Contrastive Language-Image Pre-training", Janghyeon Lee*, Jongsuk Kim*, Hyounguk Shon, Bumsoo Kim, Seung Hwan Kim, Honglak Lee and Junmo Kim (equally contributed by the authors*)

[New] We have a publication accepted for ACSAC 2022.
"Closing the Loophole: Rethinking Reconstruction Attacks in Federated Learning from a Privacy Standpoint", Seung Ho Na, Hyeong Gwon Hong, Junmo Kim and Seungwon Shin

[New] We have a publication accepted for ECCV 2022.
"‌DLCFT: Deep Linear Continual Fine-Tuning for General Incremental Learning", Hyounguk Shon, Janghyeon Lee, Seung Hwan Kim, Junmo Kim

[New] We have a publication accepted for ECCV 2022.
"‌On the Angular Update and Hyperparameter Tuning of a Scale-Invariant Network", Juseung Yun, Janghyeon Lee, Hyounguk Shon, Eojindl Yi, Seung Hwan Kim, Junmo Kim
