Keynote Talks

Prof. Matthew Taylor

Assistant Professor, Allred Distinguished Professorship in Artificial Intelligence, School of Electrical Engineering and Computer Science, Washington State University, USA

Matthew E. Taylor received his PhD from the University of Texas at Austin, supervised by Peter Stone. Matt then completed a two-year postdoctoral research position at the University of Southern California with Milind Tambe and spent 2.5 years as an assistant professor in the computer science department at Lafayette College. He currently holds the Allred Professorship in Artificial Intelligence as an assistant professor at Washington State University in the School of Electrical Engineering and Computer Science. He and his group have published over 100 peer-reviewed papers, and his funding support includes a National Science Foundation CAREER award. Current research interests include intelligent agents, multi-agent systems, reinforcement learning, transfer learning, and robotics.

Title: Improving Reinforcement Learning with Human Input

Abstract: Reinforcement learning (RL) has had many successes, from controlling video games and robots to web server and data center optimization. However, significant amounts of time and/or data can be required to reach acceptable performance. If agents or robots are to be deployed in real-world environments, it is critical that our algorithms take advantage of existing human knowledge. This talk will discuss a selection of recent work that improves RL by leveraging 1) demonstrations and 2) reward feedback from imperfect users, with an emphasis on how interactive machine learning can be extended to best leverage the unique abilities of both computers and humans.
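As a small, hedged illustration of the second idea (leveraging reward feedback from imperfect users), and not the speaker's specific algorithm, one common scheme is to blend a noisy human signal with the environment reward inside an ordinary tabular Q-learning update. The sketch below assumes a toy chain environment, a made-up `human_feedback` function that approves "correct" actions only 80% of the time, and an assumed weighting factor `beta`:

```python
import random
from collections import defaultdict

# Toy 1-D chain: states 0..N-1, actions 0 (left) / 1 (right), reward 1 at the rightmost state.
N_STATES, ACTIONS = 6, (0, 1)

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def human_feedback(state, action):
    """Hypothetical imperfect user: usually approves moving right, but is wrong 20% of the time."""
    correct = 1.0 if action == 1 else -1.0
    return correct if random.random() < 0.8 else -correct

Q = defaultdict(float)
alpha, gamma, epsilon, beta = 0.5, 0.95, 0.1, 0.2  # beta weights the human signal (assumed value)

for episode in range(200):
    s = 0
    for _ in range(100):  # cap episode length
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a_: Q[(s, a_)])
        s2, r_env, done = step(s, a)
        # Combine the environment reward with the (noisy) human feedback before the TD update.
        r = r_env + beta * human_feedback(s, a)
        target = r + (0.0 if done else gamma * max(Q[(s2, a_)] for a_ in ACTIONS))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2
        if done:
            break

print("Greedy action per state:", [max(ACTIONS, key=lambda a_: Q[(st, a_)]) for st in range(N_STATES)])
```

Even with a 20% error rate in the feedback, the extra signal typically speeds up learning on this toy task, which is the spirit of interactive machine learning: the human supplies cheap guidance and the agent supplies exhaustive trial and error.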


Prof. Yang Yu

Associate Professor, LAMDA Group, Department of Computer Science, National Key Laboratory for Novel Software Technology, Nanjing University, China

Yang Yu is an Associate Professor in the Department of Computer Science, Nanjing University, China. His research interest is in artificial intelligence, mainly derivative-free reinforcement learning, theoretically grounded evolutionary algorithms, and ensemble learning. His work has been published in Artificial Intelligence, IJCAI, AAAI, NIPS, KDD, etc. He has received several awards, including the best paper awards of IDEAL'16, GECCO'11, and PAKDD'08. He has served as a Senior PC member of IJCAI'15/17, a Publicity Chair of IJCAI'16/17 and IEEE ICDM'16, and a Workshop Chair of ACML'16.

Title: Derivative-free Optimization — Towards More Possibilities for Learning

Abstract: Machine learning systems are commonly rooted in optimization. Optimization ability restricts what a learning system can represent and learn. Convex programming and gradient-based methods are widely adopted optimization tools in machine learning, but they are applicable only under limited conditions. Derivative-free optimization, with recent progress in both theoretical foundations and practical advantages, is catching up. Because it does not require gradients, derivative-free optimization has a much broader range of applicability. In this talk, we will introduce recent progress in derivative-free optimization and demonstrate its usefulness in creating more possibilities for learning system design.
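To make the "no gradients needed" point concrete, here is a minimal, hedged sketch (not taken from the talk) of a (1+1)-style random local search: it only ever compares function values, so it can be applied to a non-smooth, non-differentiable objective where gradient-based methods do not apply. The objective function is a made-up example:

```python
import random

def objective(x):
    """A non-smooth, non-differentiable toy objective: gradients are unavailable or uninformative."""
    return sum(abs(round(xi * 3)) + abs(xi) for xi in x)

def random_local_search(f, dim=5, iters=2000, sigma=0.5, seed=0):
    """(1+1)-style derivative-free search: perturb the current point, keep it if the value improves."""
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = f(x)
    for _ in range(iters):
        candidate = [xi + rng.gauss(0, sigma) for xi in x]
        fc = f(candidate)
        if fc <= fx:  # only function evaluations are compared; no gradient is ever computed
            x, fx = candidate, fc
    return x, fx

best_x, best_f = random_local_search(objective)
print("best value found:", round(best_f, 3))
```

More sophisticated derivative-free methods (evolutionary strategies, Bayesian optimization, classification-based optimization) refine how candidate points are proposed, but share this evaluation-only interface.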



Prof. Pingzhong Tang

Assistant Professor, IIIS, Tsinghua University, China

Dr. Pingzhong Tang is a National Youth 1000-Talents assistant professor and head of the Computational Economics Group at the Institute for Interdisciplinary Information Sciences (IIIS, also known as the Yao class), Tsinghua University. Before joining Tsinghua, he spent two years as a postdoc in the Computer Science Department at CMU. He obtained his PhD from the Department of Computer Science and Engineering at HKUST. He has been a visiting scientist at Stanford University, Harvard University, the University of California at Berkeley, and Microsoft Research Asia.

Title: Large-Scale Mechanism Design

Abstract: In this talk, I will summarize our recent efforts on applying the theory of mechanism design to nationwide-scale industrial settings, around the theme of resource allocation. The results are a set of mechanisms that satisfy both economic and computational constraints.
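The abstract does not specify the mechanisms themselves, but a standard textbook example of what "economic and computational constraints" means is the sealed-bid second-price (Vickrey) auction: truthful bidding is a dominant strategy (the economic property), and the allocation and payment can be computed in near-linear time (the computational property). A minimal illustrative sketch, not the speaker's mechanism:

```python
def vickrey_auction(bids):
    """Single-item sealed-bid second-price auction.

    bids: dict mapping bidder id -> bid value.
    The highest bidder wins and pays the second-highest bid, which makes
    truthful bidding a (weakly) dominant strategy.
    """
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]
    return winner, price

# Example: bidder "b" wins and pays 7 (the second-highest bid).
print(vickrey_auction({"a": 5, "b": 9, "c": 7}))
```

Industrial-scale settings typically involve many items, combinatorial preferences, and budget or fairness constraints, which is where the design problem becomes both economically and computationally hard.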


Prof. Takeshi Nagae

Associate Professor, Tohoku University, Japan

Takeshi Nagae is an Associate Professor in the Graduate School of Engineering, Tohoku University. He received his Ph.D. from Tohoku University. He then spent one year at Kyoto University (in disaster management), four years at Kobe University (in traffic engineering), and four years at the University of Electro-Communications (in information science and mechanism design) before returning to Tohoku University in 2012, right after the Great East Japan Earthquake. His research interests include traffic engineering, analysis and management of urban road networks, pre- and post-disaster management, mechanism design and auction theory, stochastic control, and financial engineering.

Title: Multi-Agent Systems and Mechanism Design in Urban Road Transportation

Abstract: In this talk, the following three topics are introduced to demonstrate how multi-agent systems in transportation (e.g., the traffic flow on an arterial road section, a fleet of buses and their users in a demand-driven bus system, or shared cars and their users) can be captured as mathematical models, and how these models can be used to manage such systems:
1) A simplified traffic flow model for risk-sensitive control of an arterial road section with multiple signalized intersections;
2) Adaptive pricing and scheduling of a bus fleet for the morning commute without demand estimation;
3) Coordination of multiple stakeholders via price adjustment in a car-sharing system with limited shared-vehicle resources and parking-slot capacities.