
Research Topics


We mainly focus on developing deep learning algorithms to solve problems in computer vision. Beyond general tasks such as classification and detection, we also study applied topics including generative models (GANs), depth estimation, medical imaging, learning from noisy data, and semantic segmentation. In addition, our laboratory conducts research on the fundamental structure of neural networks, such as network minimization and architecture search. Our research topics can be grouped into three areas: Neural Network Architecture, Learning Methods, and Deep Learning Applications.


 

Deep Learning Applications


Our research on deep learning applications focuses on autonomous driving (depth estimation, object tracking, multi-sensor recognition), video recognition (video Turing test, video captioning), and defect detection (manufacturing, medical imaging).

Image Generation


Generating a fused image that combines one person's identity with another's pose
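
The sketch below is a generic illustration of this kind of identity/pose fusion (an assumed toy architecture, not our actual model): an identity embedding is broadcast over the spatial grid of a target pose map, and a small convolutional generator produces the fused image. In practice such a generator would be trained adversarially together with a discriminator.

    import torch
    import torch.nn as nn

    # Toy generator fusing an identity embedding with a target pose map
    # (assumed example architecture, not the lab's actual model).
    class PoseIdentityGenerator(nn.Module):
        def __init__(self, id_dim=128, pose_channels=18):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(id_dim + pose_channels, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),  # RGB output in [-1, 1]
            )

        def forward(self, id_vec, pose_map):
            # Broadcast the identity vector over the spatial grid of the pose map.
            b, _, h, w = pose_map.shape
            id_map = id_vec[:, :, None, None].expand(b, -1, h, w)
            return self.net(torch.cat([id_map, pose_map], dim=1))

    g = PoseIdentityGenerator()
    fake = g(torch.randn(2, 128), torch.randn(2, 18, 64, 64))  # -> (2, 3, 64, 64)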

Point Cloud Recognition


Applying deep learning methods to 3D point cloud data acquired by LiDAR sensors
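
As a minimal sketch of one standard approach (a PointNet-style baseline, assumed here for illustration): a shared per-point MLP followed by order-invariant max pooling yields a global feature for classifying a point cloud.

    import torch
    import torch.nn as nn

    # PointNet-style baseline: shared per-point MLP + max pooling, so the
    # prediction is invariant to the ordering of the points.
    class PointCloudClassifier(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            self.point_mlp = nn.Sequential(
                nn.Linear(3, 64), nn.ReLU(),
                nn.Linear(64, 256), nn.ReLU(),
            )
            self.head = nn.Linear(256, num_classes)

        def forward(self, points):                 # points: (batch, num_points, 3)
            feats = self.point_mlp(points)         # per-point features
            global_feat = feats.max(dim=1).values  # symmetric pooling over points
            return self.head(global_feat)

    model = PointCloudClassifier()
    logits = model(torch.randn(4, 1024, 3))        # -> (4, 10)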

Video Recognition


Generating captions for videos or answering questions about video content

Object Tracking 


Road environment recognition for autonomous driving

Defect Detection


Detecting abnormalities in manufacturing processes or medical images
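
A common baseline in this setting (shown as a generic, assumed example rather than our specific method) is to train a convolutional autoencoder only on defect-free samples and flag inputs with a high reconstruction error as potential defects.

    import torch
    import torch.nn as nn

    # Convolutional autoencoder for anomaly scoring: trained on normal images,
    # it reconstructs defects poorly, so reconstruction error flags anomalies.
    class ConvAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    ae = ConvAutoencoder()
    x = torch.rand(8, 1, 64, 64)
    score = ((ae(x) - x) ** 2).mean(dim=(1, 2, 3))  # per-image anomaly score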


 

Neural Network Architecture


Our research on neural network architectures aims to find optimal structures for convolutional neural networks. Recent topics include designing new architectures, new operations within networks, and automated architecture search.

Shape of Convolution Kernels


Learning the shape of convolution kernels in convolutional neural networks (Active Convolution)
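
The snippet below is a simplified sketch of a convolution whose sampling positions are learned (in the spirit of Active Convolution; initialization and other details differ from the paper): each "synapse" samples the input at a learned fractional offset via bilinear interpolation, and a 1x1 convolution mixes channels.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Convolution with learnable sampling offsets (simplified illustration).
    class LearnableShapeConv(nn.Module):
        def __init__(self, in_ch, out_ch, num_synapses=4):
            super().__init__()
            # one learnable (dx, dy) offset, in pixels, per synapse
            self.offsets = nn.Parameter(torch.randn(num_synapses, 2) * 0.5)
            self.mix = nn.ModuleList(nn.Conv2d(in_ch, out_ch, 1) for _ in range(num_synapses))

        def forward(self, x):
            b, _, h, w = x.shape
            ys, xs = torch.meshgrid(
                torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
            base = torch.stack([xs, ys], dim=-1).expand(b, h, w, 2)  # identity sampling grid
            out = 0
            for k, conv in enumerate(self.mix):
                # convert pixel offsets to the normalized coordinates used by grid_sample
                dx = 2 * self.offsets[k, 0] / max(w - 1, 1)
                dy = 2 * self.offsets[k, 1] / max(h - 1, 1)
                grid = base + torch.stack([dx, dy]).view(1, 1, 1, 2)
                out = out + conv(F.grid_sample(x, grid, align_corners=True))
            return out

    layer = LearnableShapeConv(16, 32)
    y = layer(torch.randn(2, 16, 8, 8))   # -> (2, 32, 8, 8)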

Fast Convolution Operation


Deconstructing 3x3 convolutions into channel-wise shifts and efficient 1x1 convolutions (Active Shift)
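
The idea can be sketched as follows (a minimal illustration: here each channel gets a fixed integer shift, whereas Active Shift learns real-valued shifts end to end). Spatial context comes from shifting the channels, so the only learned weights are cheap 1x1 (pointwise) convolutions.

    import torch
    import torch.nn as nn

    # Shift-then-1x1 layer: per-channel spatial shifts replace the spatial
    # extent of a 3x3 kernel; a pointwise convolution then mixes channels.
    class ShiftConv(nn.Module):
        def __init__(self, in_ch, out_ch):
            super().__init__()
            # one (dy, dx) shift per input channel, cycling through a 3x3 neighborhood
            offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            self.shifts = [offsets[c % len(offsets)] for c in range(in_ch)]
            self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

        def forward(self, x):
            shifted = torch.stack(
                [torch.roll(x[:, c], shifts=s, dims=(-2, -1)) for c, s in enumerate(self.shifts)],
                dim=1)
            return self.pointwise(shifted)

    layer = ShiftConv(16, 32)
    y = layer(torch.randn(2, 16, 8, 8))   # -> (2, 32, 8, 8)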

Network Architecture


Designing a new CNN architecture with gradually increasing feature dimension (PyramidNet)
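
The channel schedule behind PyramidNet can be illustrated with a short calculation (simplified; the actual network also involves residual blocks and rounding details from the paper): rather than doubling the width only at downsampling stages, the number of channels grows by a roughly constant amount at every block.

    # Additive widening: width increases by alpha / num_blocks at each block.
    def pyramid_widths(base=16, alpha=48, num_blocks=12):
        return [round(base + alpha * k / num_blocks) for k in range(num_blocks + 1)]

    print(pyramid_widths())   # [16, 20, 24, ..., 64]: a gradual "pyramid" of widths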

Neural Architecture Search


Automated architecture search with deep learning
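
As a toy illustration of the search loop (purely illustrative; real architecture search trains or weight-shares each candidate rather than using the placeholder score below), random search over a tiny space of depths and widths looks like this:

    import random
    import torch.nn as nn

    def sample_architecture():
        # sample a candidate from a tiny, assumed search space
        depth = random.choice([2, 4, 6])
        width = random.choice([16, 32, 64])
        layers = [nn.Conv2d(3, width, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 1):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
        return nn.Sequential(*layers), (depth, width)

    def proxy_score(model):
        # Placeholder objective: in practice each candidate is trained and
        # scored by validation accuracy (often with a size/latency penalty).
        return -sum(p.numel() for p in model.parameters())

    best = max((sample_architecture() for _ in range(10)), key=lambda mw: proxy_score(mw[0]))
    print("selected (depth, width):", best[1])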


 

Learning Methods


We study various learning methods to address problems in training data, such as noisy labels and biases. We also aim to make neural networks more practical to deploy through knowledge distillation, network pruning, and continual learning.

Noisy Labeled Data


Utilizing negative ground truth for robustness to noisy-labeled data (Negative Learning)
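
The core of the loss can be sketched in a few lines (simplified): instead of asserting which class an image is, a randomly drawn complementary label asserts which class it is not, and the corresponding probability is pushed toward zero.

    import torch
    import torch.nn.functional as F

    # Negative Learning loss: L = -log(1 - p_complementary), i.e. minimize the
    # probability assigned to a class the image is known NOT to belong to.
    def negative_learning_loss(logits, complementary_labels, eps=1e-7):
        probs = F.softmax(logits, dim=1)
        p_not = probs.gather(1, complementary_labels.unsqueeze(1)).squeeze(1)
        return -torch.log(1.0 - p_not + eps).mean()

    logits = torch.randn(8, 10)
    comp = torch.randint(0, 10, (8,))   # "this image is NOT class comp[i]"
    loss = negative_learning_loss(logits, comp)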

Biased Data


Learning useful information while avoiding biases present in the data

Knowledge Distillation


Transferring knowledge from one network to another
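
A standard form of this is soft-target distillation (Hinton et al.), shown below as a generic example: the student matches the teacher's temperature-softened outputs while also fitting the ground-truth labels.

    import torch
    import torch.nn.functional as F

    # Distillation loss: KL between softened student and teacher distributions,
    # blended with the usual cross-entropy on hard labels.
    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)                       # rescale so gradients stay comparable across T
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

    s, t = torch.randn(8, 10), torch.randn(8, 10)
    loss = distillation_loss(s, t, torch.randint(0, 10, (8,)))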

Network Pruning


Minimizing neural networks by removing unnecessary parameters
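
A simple instance of this is global magnitude pruning (a generic sketch, not a specific method of ours): weights with the smallest absolute values are zeroed, and the network is then fine-tuned to recover accuracy.

    import torch
    import torch.nn as nn

    # Zero out the fraction `sparsity` of weights with the smallest magnitude,
    # pooled globally over all Conv2d/Linear layers.
    def magnitude_prune(model, sparsity=0.5):
        weights = [m.weight for m in model.modules() if isinstance(m, (nn.Conv2d, nn.Linear))]
        all_vals = torch.cat([w.detach().abs().flatten() for w in weights])
        threshold = torch.quantile(all_vals, sparsity)
        with torch.no_grad():
            for w in weights:
                w.mul_((w.abs() > threshold).float())

    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
    magnitude_prune(model, sparsity=0.7)
    remaining = sum((p != 0).sum().item() for p in model.parameters())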

Incremental Learning


Training a neural network on new data while retaining information learned from previous data
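
One way to see the objective (a simplified sketch in the spirit of Learning without Forgetting, not a specific method of ours): while fitting the new-task labels, the model is also penalized for drifting away from the predictions of a frozen copy of itself taken before the new data arrived.

    import torch
    import torch.nn.functional as F

    # New-task loss plus a distillation term toward the frozen old model's outputs.
    def incremental_loss(new_logits, old_logits_frozen, new_labels, lam=1.0, T=2.0):
        task_loss = F.cross_entropy(new_logits, new_labels)
        retain_loss = F.kl_div(
            F.log_softmax(new_logits / T, dim=1),
            F.softmax(old_logits_frozen / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        return task_loss + lam * retain_loss

    new_out = torch.randn(8, 10)
    old_out = torch.randn(8, 10).detach()   # outputs of the frozen pre-update model
    loss = incremental_loss(new_out, old_out, torch.randint(0, 10, (8,)))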