
Program


IW-FCV 2019 Schedule

Click here to download the detailed program of IW-FCV2019.

Click here to download the booklet of IW-FCV2019.

 

IW-FCV 2019 Overall Program

FEBRUARY 20, WEDNESDAY
08:30-10:00 Registration
10:00-10:05 Opening
10:05-10:55 Invited Speaker 1: Dr. Yeunbae Kim (Executive PM, IITP)
– Talk title: AI & ICT Technologies and Social Problem Solving
11:00-12:00 Oral Session 1
12:00-13:00 Lunch
13:00-15:00 Oral Session 2
15:00-16:30 Poster & Demo Session 1
16:40-17:30 Invited Speaker 2: Prof. Yusuke Sugano (Osaka University)
– Talk title: Appearance-based Gaze Estimation for Real-World Eye Tracking Applications

 

FEBRUARY 21, THURSDAY
10:00-11:00 Oral Session 3
11:10-12:00 Invited Speaker 3: Seungjae Lee (ETRI)
– Talk title: Visual Searching: Engineering Aspects
12:00-13:00 Lunch
13:30-15:00 Oral Session 4
15:00-16:30 Poster & Demo Session 2
17:30-18:00 Performance (Korean Traditional Masque) with Cocktail Party
18:00-20:00 Banquet

 

FEBRUARY 22, FRIDAY
09:00-10:00 Co-operative Workshop Session 1 (ETRI)
– Workshop title: Object recognition and manipulation intelligence in robotics
10:10-12:00 Co-operative Workshop Session 2 (University of Ulsan, UNIST, Saitama Univ.)
– Workshop title: CV based Intelligent Systems
12:00-13:30 Farewell Lunch

 

 

 

Invited Talks


  1. Invited Talk 1

– Talk Title: AI & ICT Technologies and Social Problem Solving

– Speaker: Yeunbae Kim (Executive PM, IITP: Institute of Information & Communications Technology Planning and Evaluation)

– CV: Dr. Yeunbae Kim is an Executive PM at IITP, where he works with the Korean Ministry of Science and ICT on R&D planning and policy making. He was previously a professor at Hanyang University and a vice president at Samsung Electronics, where he led numerous projects in the field of AI.

– Talk Abstract: Modern societal issues span a broad spectrum and involve very high levels of complexity, and many of them are becoming increasingly difficult to address without the aid of cutting-edge technology. To alleviate these social problems, the Korean government recently announced the implementation of mega-projects that apply AI and ICBM (IoT, Cloud Computing, Big Data, Mobile) technologies. In this talk, I will explain the Korean government's policies and approaches toward social problem solving, together with actual project results.

 

  2. Invited Talk 2

– Talk Title: Appearance-based Gaze Estimation for Real-World Eye Tracking Applications

– Speaker: Prof. Yusuke Sugano (Associate Professor, Osaka University, Japan)

– CV: Yusuke Sugano is an associate professor at the Graduate School of Information Science and Technology, Osaka University. His research interests focus on computer vision and human-computer interaction. He received his Ph.D. in information science and technology from the University of Tokyo in 2010. He was previously a postdoctoral researcher at the Max Planck Institute for Informatics and a project research associate at the Institute of Industrial Science, the University of Tokyo.

– Talk Abstract: Gaze plays an important role in analyzing human attention and behavior. Although gaze estimation techniques have been actively studied, it is still quite challenging to estimate gaze direction from ordinary camera images. This talk will introduce recent attempts at learning-based gaze estimation using large-scale training data. I will also discuss applied research on deploying learning-based gaze estimation in real-world environments, and illustrate the potential of learning-based estimation for daily-life eye tracking applications.

 

  3. Invited Talk 3

– Talk Title: Visual Searching: Engineering Aspects

– Speaker: Seungjae Lee (Senior Researcher and Project Leader, ETRI)

– CV: Seungjae Lee is a senior researcher and the project leader of visual browsing technology development at ETRI. He joined the creative content research division at ETRI in 2005 and has researched content identification, classification, and retrieval systems. He and his team have participated in visual-search-related challenges such as the ImageNet challenge (classification and localization: 5th place in 2016; detection: 3rd place in 2017), Google Landmark Retrieval (8th place in 2018), and the Low Power ImageNet Recognition Challenge (1st place in 2018).

– Talk Abstract: Visual searching is one of the most complex problems in computer vision, and all of the tech giants are competing fiercely in this area. Recent advances in deep learning and data exploration show meaningful results and a promising future for visual searching. In this talk, we briefly review the visual searching problem from an engineering perspective. First, we will present a visual place recognition case study to explain how to solve the visual searching problem from an engineering standpoint. Second, the ImageNet challenge will be reviewed to show how datasets and deep learning boost visual searching tasks such as object classification and detection. Finally, we will address the speed-accuracy trade-off and efficient visual search for future visual searching applications.

Detailed Presentation Schedule


Click here to download the detailed program of IW-FCV2019.

Click here to download the booklet of IW-FCV2019.

Awards


Best Paper:

Tsuyoshi Migita, Ryuichi Saito, Takeshi Shakunaga,

“Batch Estimation for Face Modeling with Tracking on Image Sequence”,

Okayama Univ.

 

Jeong Inho, Lee Chul,

“Low-Light Video Enhancement Based on Optimal Gamma Correction Parameter Estimation”,

Pukyong National Univ.

 

Tadashi Matsuo, Nobutaka Shimada,

“Auto-encoder Factorizing into Transform Invariants and Transform Parameters”,

Ritsumeikan Univ.

 

 

Best Student Paper:

Hunjun Yang, Wonkeun Lee, Kyungtae Kim, Jin-Gyeom Kim, Sanghong Kim, Bowon Lee,

“SDM: Squeeze and Excitation Deformable Mask-RCNN”,

Inha Univ.

 

Best Poster Presentation:

Maxence Remy, Hideo Saito, Hideaki Uchiyama, Hiroshi Kawasaki, Vincent Nozick, Diego Thomas,

“Merging SLAM and photometric stereo for 3D reconstruction with a moving camera”,

Keio Univ., Kyushu Univ.

 

Sanghong Kim, Taeyong Kim and Bowon Lee,

“Real-Time Facial Expression Recognition System Using Raspberry Pi”,

McGill Univ., Inha Univ.

 

Masato Fukuzaki, Seiya Ito, Naoshi Kaneko and Kazuhiko Sumi,

“Robot Grasp Planning with Integration Map of Graspability and Object Occupancy”,

Aoyama Gakuin Univ.