IEEE MIPR Keynote Speakers

Edward Y. Chang
President, HTC Research and Healthcare
Keynote Talk Title: Representation Learning on Big and Small Data
Time: April 11, 2018 1:30 PM

Abstract:
Approaches to feature extraction can be divided into two categories: model-centric and data-driven. The model-centric approach relies on human heuristics to develop a computer model to extract features from data. These models were engineered by scientists and then validated via empirical studies. A major shortcoming of the model-centric approach is that unusual circumstances not taken into consideration during a model's design can render the engineered features less effective. In contrast to the model-centric approach, which dictates representations independent of data, the data-driven approach learns representations from data. Example data-driven algorithms are the multilayer perceptron (MLP) and the convolutional neural network (CNN), which belong to the general category of neural networks and deep learning. In this talk, I will first explain why my team at Google embarked on the data-driven approach in 2006. In 2010, we funded the ImageNet project at Stanford, and subsequently, in 2011, filed two data-driven deep learning patents, one on feature extraction and the other on object recognition. We parallelized five widely used machine learning algorithms, including SVMs, PFP, LDA, spectral clustering, and CNN, and open-sourced all of them. I will present our latest work in accelerating CNN training using second-order methods and in reducing CNN model size. In the second half of this presentation, I will share our experience in the healthcare domain, where small data is the norm. I will discuss our experiences, both positive and negative, with transfer learning and GANs. This talk concludes with a list of open research issues.
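For readers unfamiliar with the distinction above, the following minimal sketch (illustrative only, not code from the talk; the histogram descriptor, layer sizes, and use of PyTorch are assumptions) contrasts a model-centric, hand-engineered feature with a data-driven feature learned by a small CNN.

import torch
import torch.nn as nn

def handcrafted_feature(image: torch.Tensor) -> torch.Tensor:
    # Model-centric: a fixed, human-designed descriptor (a 16-bin grayscale
    # intensity histogram); nothing here is learned from data.
    gray = image.mean(dim=0)  # (H, W) from a (C, H, W) image with values in [0, 1]
    return torch.histc(gray, bins=16, min=0.0, max=1.0) / gray.numel()

class LearnedFeature(nn.Module):
    # Data-driven: a small CNN whose filters are learned from data;
    # the layer sizes are arbitrary, chosen only for illustration.
    def __init__(self, feature_dim: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pooling
        )
        self.head = nn.Linear(32, feature_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.body(x).flatten(1))

image = torch.rand(3, 64, 64)                       # a dummy RGB image
print(handcrafted_feature(image).shape)             # torch.Size([16])
print(LearnedFeature()(image.unsqueeze(0)).shape)   # torch.Size([1, 64])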
Biography:

Edward Y. Chang currently serves as the President of Research and Healthcare (DeepQ) at HTC. Ed's most notable work is co-leading the DeepQ project (with Prof. CK Peng at Harvard), working with a team of physicians, scientists, and engineers to design and develop mobile wireless diagnostic instruments that can help consumers make their own reliable health diagnoses anywhere, at any time. The project entered the Tricorder XPRIZE competition in 2013 alongside 310 other entrants and was awarded second place, with a US$1M prize, in April 2017. DeepQ is powered by a deep architecture built to quest for cures. A similar deep architecture also powers Vivepaper, an AR product Ed's team launched in 2016 to support immersive augmented-reality experiences (for education, training, and entertainment).


Prior to his HTC post, Ed was a director of Google Research for 6.5 years, leading research and development in several areas, including scalable machine learning, indoor localization, social networking and search integration, and Web search (spam fighting). His contributions in parallel machine learning algorithms and data-driven deep learning (US patents 8798375 and 9547914) have been recognized through several keynote invitations, and the open-source code his team developed has been downloaded over 30,000 times collectively. His work on IMU calibration/fusion (US patents 8362949, 9135802, 9295027, 9383202, and 9625290) with Project X was first deployed via Google Maps (see the XINX paper and ASIST/ACM SIGIR/ICADL keynotes) and is now widely used on mobile phones and VR/AR devices. Ed's team also developed the Google Q&A system (codenamed Confucius), which was launched in over 60 countries.


Prior to Google, Ed was a full professor of Electrical Engineering at the University of California, Santa Barbara (UCSB). He joined UCSB in 1999 after receiving his PhD from Stanford University, was tenured in 2003, and was promoted to full professor in 2006. Ed has served on ACM (SIGMOD, KDD, MM, CIKM), VLDB, IEEE, WWW, and SIAM conference program committees, and has co-chaired several conferences, including MMM, ACM MM, ICDE, and WWW. He is a recipient of the NSF CAREER Award, the IBM Faculty Partnership Award, and the Google Innovation Award. He is a Fellow of the IEEE for his contributions to scalable machine learning.


Chang Wen Chen
IEEE Fellow, SPIE Fellow
Professor, State University of New York at Buffalo, USA
Dean and Professor, School of Science and Engineering, The Chinese University of Hong Kong, China
Keynote Talk Title: Intelligent Multimedia Delivery in 5G Mobile Networks
Time: April 12, 2018 8:00 AM

Abstract:
Great efforts have been made by both commercial companies and government agencies to define the key requirements of 5G mobile networks, develop the 5G standards, and perform technology trials aiming at the initial deployment of 5G in 2020. In this talk, we shall point out that, unlike the one-size-fits-all 4G core networks, the 5G core networks must be flexible and adaptable, and are expected to provide simultaneous, optimized support for several 5G usage scenarios. We shall show that intelligent, scenario-aware strategies are needed to best match usage scenarios to the different quality-of-experience demands of various applications of multimedia data transport over 5G networks. Several specific multimedia delivery examples will be presented to illustrate how to take full advantage of the scenario-aware capability of these applications.
Biography:

Chang Wen Chen has been an Empire Innovation Professor of Computer Science and Engineering at the University at Buffalo, State University of New York, since 2008. He was the Allen Henry Endowed Chair Professor at the Florida Institute of Technology from July 2003 to December 2007. He was on the faculty of Electrical and Computer Engineering at the University of Rochester from 1992 to 1996 and on the faculty of Electrical and Computer Engineering at the University of Missouri-Columbia from 1996 to 2003.


He has been the Editor-in-Chief of IEEE Transactions on Multimedia since January 2014. He served as the Editor-in-Chief of IEEE Transactions on Circuits and Systems for Video Technology from 2006 to 2009. He has been an Editor for several other major IEEE transactions and journals, including the Proceedings of the IEEE, the IEEE Journal on Selected Areas in Communications, and the IEEE Journal on Emerging and Selected Topics in Circuits and Systems. He has served as Conference Chair for several major IEEE, ACM, and SPIE conferences related to multimedia, video communications, and signal processing. His research has been supported by NSF, DARPA, the Air Force, NASA, the Whitaker Foundation, Microsoft, Intel, Kodak, Huawei, and Technicolor.


He received his BS from the University of Science and Technology of China in 1983, his MSEE from the University of Southern California in 1986, and his Ph.D. from the University of Illinois at Urbana-Champaign in 1992. He and his students have received eight Best Paper Awards or Best Student Paper Awards over the past two decades. He has also received several research and professional achievement awards, including the Sigma Xi Excellence in Graduate Research Mentoring Award in 2003, the Alexander von Humboldt Research Award in 2009, the University at Buffalo Exceptional Scholar - Sustained Achievement Award in 2012, and the State University of New York System Chancellor's Award for Excellence in Scholarship and Creative Activities in 2016. He has been an IEEE Fellow since 2004 and an SPIE Fellow since 2007.



Wen Gao
ACM Fellow, IEEE Fellow, Member of the Chinese Academy of Engineering
Professor, Peking University, China
President, China Computer Federation (CCF)
Keynote Talk Title: Digital Retina: A New Feature Surveillance Device for Smart City
Time: April 12, 2018 1:30 PM

Abstract:
A city brain is the central decision system in the smart/intelligent city system. In this talk, I will present several grand challenges for a smart/intelligent city brain, and then present our solution, called the digital retina: a new-generation evolutionary city eye, or a new feature surveillance device/camera. Like the human eye, this new feature surveillance camera not only performs video coding for storage and offline viewing, but can also perform feature coding for pattern recognition and scene understanding in real time and with higher accuracy. Towards this end, I will share some of our recent achievements, including but not limited to background-modeling-based surveillance video coding, visual feature compression for visual search, joint R-D and R-A optimization, and software-defined image quantization. Some standards are also available for this new feature surveillance device, such as IEEE 1857.4/AVS2 and MPEG CDVS/CDVA. With this device, we can expect a key step in the evolution towards a massive artificial visual system for the smart/intelligent city.
Biography:

Wen Gao is a Boya Chair Professor at Peking University. He has also served as vice president of the National Natural Science Foundation of China (NSFC) since 2013 and as president of the China Computer Federation (CCF) since 2016.


He received his Ph.D. degree in electronics engineering from the University of Tokyo in 1991. He was with the Harbin Institute of Technology from 1991 to 1995, and with the Institute of Computing Technology (ICT), Chinese Academy of Sciences (CAS), from 1996 to 2005. He joined Peking University in 2006.


Prof. Gao works in the areas of multimedia and computer vision, on topics including video coding, video analysis, multimedia retrieval, face recognition, multimodal interfaces, and virtual reality. His most cited contributions are in model-based video coding and face recognition. He has published seven books, over 220 papers in refereed journals, and over 600 papers in selected international conferences. He has earned many awards, including six State Awards in Science and Technology Achievements. He was featured by IEEE Spectrum in June 2005 as one of the "Ten-To-Watch" among China's leading technologists. He is a fellow of the IEEE, a fellow of the ACM, and a member of the Chinese Academy of Engineering.




Ramesh Jain
ACM Fellow, IEEE Fellow, AAAS Fellow
Professor, University of California, Irvine, USA
Keynote Talk Title: Multimedia Health: Designing Personal Navigators
Time: April 10, 2018 8:15 AM

Abstract:
Health is a continuous state of complete physical, mental, and social well-being. A person's health is the result of genetics, lifestyle, environment, and socio-economic situation. Advances in smartphones, sensors, and wearable technology are now making it possible to analyze and understand an individual's lifestyle from mostly passively collected, objective multimodal data streams, to build a model of her and predict important health events in her life. Multimedia technology may help people manage lifestyle and environment for many chronic conditions, such as diabetes. Three major components in building such systems are: building a personal model, using diverse observations to estimate the current health state, and guiding people through lifestyle and environment for the best results. The Institute for Future Health at UCI is exploring these approaches. We will present our approach to developing such health navigators to help people reach desirable health states.
Biography:

Ramesh Jain is an entrepreneur, researcher, and educator.


He is a Donald Bren Professor in Information & Computer Sciences at the University of California, Irvine. His current research passion is addressing health issues using cybernetic principles, building on progress in sensor, mobile, processing, and storage technologies. He is the founding director of the Institute for Future Health at UCI. Earlier, he served on the faculty of Georgia Tech, the University of California at San Diego, the University of Michigan, Ann Arbor, Wayne State University, and the Indian Institute of Technology, Kharagpur. He is a Fellow of AAAS, ACM, IEEE, AAAI, IAPR, and SPIE.


Ramesh co-founded several companies, managed them in their initial stages, and then turned them over to professional management. He enjoys new challenges and likes to use technology to solve them. He is now participating in addressing the biggest challenge for us all: how to live long in good health.



Klara Nahrstedt
ACM Fellow, IEEE Fellow
Professor, University of Illinois at Urbana-Champaign, USA
Keynote Talk Title: Multi-Camera Content Delivery Systems: Challenges and Opportunities
Time: April 11, 2018 8:00 AM

Abstract:
Multi-camera content delivery systems are becoming an integral part of our video consumption, enabling advanced 3D teleimmersive interactions in the arts and sciences, 360-degree viewing of YouTube content, and enhanced broadcasting experiences of live events. However, multi-camera content systems generate high-volume, high-variety, and high-velocity data that place demands on bandwidth and latency multiple orders of magnitude higher than current network systems and their providers can accommodate. In this talk, we discuss bridging the gap between multi-camera content and network systems. We elaborate on the challenges of this “semantic gap” between multi-camera processing applications and the underlying network systems, and we present opportunities to bridge this gap with “semantic hints” to network delivery systems, such as users’ desired views, summaries of activities, camera locality, and other sensory context information. These semantic hints improve differentiation and selection among multi-camera sources, thereby decreasing bandwidth demands, giving network systems valuable information for possible network resource adaptation, and delivering the desired interactive latency and viewing experience to users.
Biography:

Klara Nahrstedt is a full professor in the Computer Science Department at the University of Illinois at Urbana-Champaign and the Director of the Coordinated Science Laboratory (an interdisciplinary research institute in the College of Engineering at UIUC). Her research interests are directed toward teleimmersive systems, multi-camera systems, multimedia mobile systems, quality of service (QoS) management in multimedia networks, QoS-aware resource management for cloud-based operating cyber-infrastructures, and real-time security in cyber-physical systems. She is the co-author of the widely used multimedia books ‘Multimedia: Computing, Communications and Applications’ (Prentice Hall) and ‘Multimedia Systems’ (Springer Verlag). She is a recipient of the IEEE Computer Society Technical Achievement Award, the ACM SIGMM Technical Achievement Award, the Ralph and Catherine Fisher Professorship, the University Scholar designation, and the Humboldt Research Award. She served as chair of ACM SIGMM (2007-2013) and as a member of CRA's Computing Community Consortium (2014-2017). She was the general chair of ACM Multimedia 2006, ACM NOSSDAV 2007, and IEEE PerCom 2009, and TPC co-chair of IEEE/ACM IoTDI 2018.


Klara Nahrstedt received her Diploma in Mathematics, with a specialization in numerical analysis, from Humboldt University, Berlin, Germany, in 1985. In 1995, she received her PhD from the Department of Computer and Information Science at the University of Pennsylvania. She is an ACM Fellow, an IEEE Fellow, and a member of the German National Academy of Sciences (Leopoldina).



© 2018. IEEE International Conference on Multimedia Information Processing and Retrieval (MIPR), All Rights Reserved.