IWFCV 2017 Schedule
Click here to download the full program of IW-FCV2017.
Special Talks – 16:50 ~ 17:50 (Feb.1)
KIZKI-algorithm as OMNIPOTENT Image Defect Inspection and Its Applications
Prof. Hiroyasu Koshimizu
Chukyo University, Japan
Prof. H. Koshimizu received his Dr.Eng. from the Graduate School of Nagoya University in 1975. He served as an Assistant Professor at Nagoya University and as a research staff member at NMIRI before becoming a Professor at Chukyo University in 1986. He was Dean of SCCS in 2004, Dean of SIST in 2006, and later Dean of GSCCS. He is now Director of IASAI at Chukyo University and a Councillor of Umemura Gakuen Chukyo University.
He has long been active in the research fields of image sensing, image processing, facial studies, the sampling and quantization theory OKQT, the Hough transform, and their industrial applications.
He has been active in academic societies such as IEEE (Senior Member), IEE (Senior Member), IPSJ (Fellow), IEICE, SICE, JSPE (IAIP Counselor), JFACE (President), SSII (President), and JSAI, as well as in conferences including QCAV, FCV, MVA, SSII, ViEW, and DIA.
His awards include the Odawara Prize (IAIP/JSPE, 2002, 2005, 2012, 2014), the JSPE Technology Award (2016), the Society of Automotive Engineers of Japan Asahara Academic Encouraging Prize (2014), IEEJ Excellent Presentation Awards (2004, 2009, 2010, 2011, 2012, 2014), and the SSII Excellent Academic Award (2010).
He currently serves as Vice Chair of the ASTF Committee and as Manager of the IPSJ Tokai Branch.
Since inspection is an indispensable process for every kind of production, the number of image inspection algorithms is effectively as large as the number of products. Individually customized image processing systems have therefore been built for individual inspection tasks, because we do not yet have a general-purpose, or OMNIPOTENT, method for coping with the diversity of defects found in industrial fields. The ultimate goal is to establish such an 'OMNIPOTENT' method. The KIZKI (defect-awareness) algorithm is a smart image processing technology inspired by the human visual inspection mechanism. In the proposed KIZKI algorithm (Japanese Patent No. 5821708), we developed a simple iterative and simultaneous scheme of multiple image processing stages, realized by coarse-to-fine spatial resolution processing similar to human peripheral vision, combined with spatial tremor-phase processing similar to human micro-saccadic vision.
Keynote Speech – 9:00 ~ 10:00 (Feb.2)
ICT-Vehicle convergence research and education
Prof. Kunsoo Huh
Hanyang Univ, Korea
Prof. Kunsoo Huh received the Ph.D. degree from the University of Michigan, Ann Arbor, in 1992.
He is currently a Professor in the Department of Automotive Engineering at Hanyang University in Korea and the Director of the government-supported ICT-Vehicle Convergence Research Center. His current research interests include fault-tolerant control, sensor-based active safety control, V2X-based connected safety control, and autonomous vehicle control systems. He has served as an Editor of the International Journal of Automotive Technology since 2008 and is currently a Vice President of the Korean Society of Automotive Engineers (KSAE).
Even though significant progress has been made in both ICT (Information and Communication Technology) and vehicle technology, more than one million people lose their lives in traffic accidents worldwide every year. Moreover, the number of traffic fatalities has changed little even in leading OECD member countries such as the US, Germany, and Japan. From this perspective, the necessity of and progress toward ICT-Vehicle convergence are described, with a particular focus on Korea. In addition, the incubation and training of convergence engineers for ICT-Vehicle research and development in Korea are explained.
Invited Talk – 16:10 ~ 17:10 (Feb.2)
Video Highlight Detection at Yahoo!
Dr. Yale Song
Yahoo Research, USA
Yale Song is a Senior Research Scientist at Yahoo Research in New York City. He graduated with a Ph.D. in Computer Science from MIT in 2014. He is interested in innovative techniques for video understanding using computer vision and deep learning. His current research projects include video highlighting and summarization, video captioning and visual question answering, and generative modeling for video prediction. At Yahoo Research, he works on various real-world problems involving Yahoo's web-scale image and video data. Some of his work has been deployed in various products at Yahoo, including Flickr, Tumblr, Video Guide, and Yahoo Esports, and has been featured in MIT News, The Economist, and Vice Motherboard, among others.
The sheer amount of video produced each day makes it increasingly difficult to search, browse, and watch desired content efficiently. Video highlight detection has the potential to alleviate this issue by presenting users with the most interesting moments from a video. In this talk, I will give an overview of various video highlighting techniques we developed at Yahoo, and show how we use them to power innovative product features that serve millions of users each day. Specifically, I will show how we detect highlights from live broadcast eSports matches (pro gaming events), how we create animated GIFs automatically from videos, how we leverage textual descriptions for video summarization, and how we exploit visual aesthetics to select the most beautiful thumbnails from videos.
Detailed Presentation Schedule