We focus on developing deep learning algorithms to solve problems in computer vision. Beyond general tasks such as classification and detection, we also study applied topics including generative models (GANs), depth estimation, medical imaging, learning from noisy data, and semantic segmentation. In addition, our laboratory conducts research on the fundamental structure of neural networks, such as network minimization and architecture search. Our research topics fall into three categories: Neural Network Architecture, Learning Methods, and Deep Learning Applications.
Our research on deep learning applications focuses on autonomous driving (depth estimation, object tracking, multi-sensor recognition), video recognition (video Turing test, video captioning), and defect detection (manufacturing, medical imaging).
Generating a fusion image that combines one person's identity with another person's pose
Applying deep learning methods to 3D point cloud data acquired by LiDAR sensors
Generating captions or answering questions on video
Road environment recognition for autonomous driving
Detecting abnormalities in manufacturing processes or medical images
Our research on neural network architectures aims to find optimal structures for convolutional neural networks. Recent topics include designing new architectures, designing new operations within the network, and automated architecture search.
Learning the shape of convolution kernels in convolutional neural networks (Active Convolution)
Deconstructing 3x3 convolutions into a learnable shift operation followed by efficient 1x1 convolutions (Active Shift; a rough sketch follows this list)
Designing a new CNN architecture whose feature dimension increases gradually across layers (PyramidNet; its widening schedule is sketched after this list)
Automated architecture search with deep learning
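As a rough illustration of the Active Shift item above: a 3x3 spatial convolution can be replaced by a per-channel shift of the feature maps followed by a 1x1 (pointwise) convolution. The PyTorch sketch below only illustrates that idea and is not our implementation; the module name ShiftConv and the fixed integer shifts are assumptions (the actual method treats the shift amounts as learnable, real-valued parameters).

    import torch
    import torch.nn as nn

    class ShiftConv(nn.Module):
        """Sketch: replace a 3x3 spatial convolution with a per-channel
        shift followed by a 1x1 (pointwise) convolution."""
        def __init__(self, in_channels, out_channels):
            super().__init__()
            # One (dy, dx) shift per input channel; fixed here for simplicity,
            # whereas the learnable-shift formulation optimizes these amounts.
            candidates = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            self.shifts = [candidates[c % len(candidates)] for c in range(in_channels)]
            self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1)

        def forward(self, x):
            shifted = []
            for c, (dy, dx) in enumerate(self.shifts):
                # torch.roll shifts circularly; a faithful version would zero-pad instead.
                shifted.append(torch.roll(x[:, c:c + 1], shifts=(dy, dx), dims=(2, 3)))
            return self.pointwise(torch.cat(shifted, dim=1))

    # Usage: a drop-in replacement for a 3x3 convolution layer.
    layer = ShiftConv(16, 32)
    out = layer(torch.randn(1, 16, 8, 8))   # shape: (1, 32, 8, 8)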
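The PyramidNet item can likewise be illustrated with a simple widening schedule: instead of doubling the number of channels only at downsampling stages, the width grows by a small amount at every residual unit. The snippet below sketches such an additive schedule; the parameter names and values are illustrative assumptions.

    def pyramidal_widths(base_channels, alpha, num_units):
        """Sketch of an additive widening schedule: the channel count grows
        by alpha / num_units at every residual unit rather than jumping
        only at a few downsampling stages."""
        widths = []
        width = float(base_channels)
        for _ in range(num_units):
            width += alpha / num_units
            widths.append(int(round(width)))
        return widths

    # Example: start from 16 channels and add alpha = 48 channels in total,
    # spread evenly over 12 residual units.
    print(pyramidal_widths(16, 48, 12))   # [20, 24, 28, ..., 64]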
We study various learning methods that address problems in training data, such as noisy labels and biases. We also aim to make neural networks more practical for real-world use through knowledge distillation, network pruning, and continual learning.
Utilizing negative ground truth ("this sample does not belong to class k") for robustness to noisily labeled data (Negative Learning; see the loss sketch after this list)
Learning useful information without learning the biases present in the data
Transferring knowledge from one network to another (knowledge distillation; see the sketch after this list)
Minimizing neural networks by removing unnecessary parameters (network pruning)
Training a neural network on new data while retaining information learned from the original data (continual learning)
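To make the Negative Learning item above concrete: where ordinary cross-entropy pulls the predicted probability of the given label toward 1, a negative-learning loss takes a complementary label ("this image is not class k") and pushes its predicted probability toward 0. The PyTorch sketch below illustrates this contrast under that reading; it is not the exact formulation used in our papers.

    import torch
    import torch.nn.functional as F

    def positive_learning_loss(logits, labels):
        # Standard cross-entropy: raise the probability of the given label.
        return F.cross_entropy(logits, labels)

    def negative_learning_loss(logits, complementary_labels, eps=1e-7):
        # Negative learning: for a complementary label ("not this class"),
        # minimize -log(1 - p), pushing that class probability toward 0.
        probs = F.softmax(logits, dim=1)
        p_comp = probs.gather(1, complementary_labels.unsqueeze(1)).squeeze(1)
        return -torch.log(1.0 - p_comp + eps).mean()

    # Usage: complementary labels can be sampled cheaply from classes other
    # than the (possibly noisy) given label.
    logits = torch.randn(4, 10)
    comp_labels = torch.randint(0, 10, (4,))
    loss = negative_learning_loss(logits, comp_labels)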
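Similarly, the knowledge-transfer item can be illustrated with the widely used softened-logit distillation loss, in which a smaller student network is trained to match a larger teacher's temperature-scaled output distribution in addition to the ordinary label loss. The temperature and weighting values below are illustrative assumptions, not settings from our work.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
        # Hard-label term: ordinary cross-entropy against the ground truth.
        hard = F.cross_entropy(student_logits, labels)
        # Soft-label term: KL divergence between temperature-softened teacher
        # and student distributions (scaled by T^2 to keep gradients comparable).
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        return alpha * soft + (1.0 - alpha) * hard

    # Usage: teacher_logits come from a frozen, pre-trained larger network.
    student_logits = torch.randn(8, 100)
    teacher_logits = torch.randn(8, 100)
    labels = torch.randint(0, 100, (8,))
    loss = distillation_loss(student_logits, teacher_logits, labels)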