Prof. Jürgen Schmidhuber
University of Lugano
Title: Modern Artificial Intelligence 1980s-2021 and Beyond
Significant historic events appear to be occurring more frequently as time goes on. Interestingly, the intervals between successive events seem to be shrinking exponentially, each a quarter of the previous one, a process that looks as if it should converge around the year 2040. The last of these major events occurred around 1990, when the Cold War ended, the WWW was born, mobile phones became mainstream, the first self-driving cars appeared, and modern AI with very deep neural networks came into being. In this talk, I’ll focus on the latter, with emphasis on metalearning since 1987 and what I call “the miraculous year of deep learning,” which saw the birth of, among other things, (1) very deep learning through unsupervised pre-training, (2) the vanishing gradient analysis that led to the LSTMs running on your smartphones and to the really deep Highway Nets/ResNets, (3) neural fast weight programmers that are formally equivalent to what’s now called linear Transformers, (4) artificial curiosity for agents that invent their own problems (familiar to many nowadays in the form of GANs), (5) the learning of sequential neural attention, (6) the distilling of teacher nets into student nets, and (7) reinforcement learning and planning with recurrent world models. I’ll discuss how in the 2000s much of this began to impact billions of human lives, how the timeline predicts the next big event around 2030, what the final decade until convergence might hold, and what will happen in the subsequent 40 billion years. Take all of this with a grain of salt, though.
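The equivalence claimed in item (3) can be checked in a few lines. Below is a minimal numerical sketch (toy data, hypothetical variable names): a fast weight programmer additively "programs" a weight matrix with outer products of value/key vectors, and querying that matrix gives exactly the output of unnormalized linear attention.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
keys = rng.normal(size=(5, d))    # toy key vectors k_i
values = rng.normal(size=(5, d))  # toy value vectors v_i
query = rng.normal(size=d)        # toy query q

# Fast-weight view: one net's outputs program W via outer products v_i k_i^T.
W = np.zeros((d, d))
for k, v in zip(keys, values):
    W += np.outer(v, k)
out_fast = W @ query

# Linear-attention view: sum_i v_i (k_i . q), no softmax.
out_attn = sum(v * (k @ query) for k, v in zip(keys, values))

# The two views coincide term by term, since (v k^T) q = v (k . q).
assert np.allclose(out_fast, out_attn)
```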
Prof. Soo-Young Lee
Korea Advanced Institute of Science and Technology, Korea
Title: Mindful Conversational Agents with Emotion and Personality
For successful interaction between humans and digital companions, i.e., machine agents, the companions need to have human-like personality, conduct emotional dialogue, understand human emotion, and express emotions of their own. In this talk we will report our continuing efforts and recent results in developing human-like emotional conversational agents as part of the Korean National Flagship AI Program. The emotion of human users is estimated from text, audio, and facial expressions during verbal conversation, and the emotion of the intelligent agents is expressed in speech and facial images. We will first show how our ensemble of neural networks won the Emotion Recognition in the Wild (EmotiW2015) challenge with 61.6% accuracy in recognizing seven emotions from facial expressions. Then, a top-down attention mechanism provides multimodal integration of text, voice, and facial images for better accuracy. Also, a deep-learning-based Text-to-Speech (TTS) system will be introduced to express emotions in the dialogue as well as personal speech styles. The emotions of human users and agents interact with each other during the conversation, and the agents respond differently to different emotional states in chitchat mode. Then, emotion as an internal brain state will be further extended to trustworthiness, implicit intention, and preference. We will report cognitive neuroscientific experiments on these internal brain states as well as deep learning models for emotion recognition, representation, and dialogue generation.
Prof. P.N. Suganthan
Nanyang Technological University, Singapore
Title: Randomization Based Deep and Shallow Learning for Classification
This talk will first introduce randomization-based neural networks and then present their origins. The popular feedforward instantiation called the random vector functional link neural network (RVFL) originated in the early 1990s. Other randomized feedforward models that will be briefly mentioned include random weight neural networks (RWNN), extreme learning machines (ELM), stochastic configuration networks (SCN), the broad learning system (BLS), etc. Recently developed deep implementations of the RVFL will be presented in detail. The talk will also include extensive benchmarking studies using tabular classification datasets.
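The defining traits of the RVFL can be sketched compactly: hidden weights are drawn at random and never trained, a direct link passes the raw inputs to the output layer alongside the random features, and only the output weights are fit, here by ridge regression in closed form. The sketch below uses toy data and hypothetical names; it is an illustration of the model family, not any specific implementation from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: label depends linearly on two features.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

n_hidden = 50
W = rng.uniform(-1, 1, size=(4, n_hidden))  # random input-to-hidden weights, fixed
b = rng.uniform(-1, 1, size=n_hidden)       # random biases, fixed

H = np.tanh(X @ W + b)   # random nonlinear features
D = np.hstack([H, X])    # direct link: concatenate raw inputs to the features

# Only the output weights are learned, via closed-form ridge regression.
lam = 1e-2
beta = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)

pred = (D @ beta > 0.5).astype(float)
acc = (pred == y).mean()  # training accuracy on the toy problem
```

Because training reduces to one linear solve, fitting is orders of magnitude cheaper than backpropagation, which is the main appeal of this model family.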
Prof. Guang-Bin Huang
Nanyang Technological University, Singapore
Title: Extreme Learning Machines (ELM) – When ELM and Deep Learning Synergize
One of the most curious questions in the world is how brains produce intelligence. The objectives of this talk are threefold: 1) there exists some convergence between machine learning and biological learning. Although there are many different machine learning techniques and many different learning mechanisms in brains, Extreme Learning Machines (ELM), as a common learning mechanism, may fill the gap between machine learning and biological learning. In fact, ELM theories have recently been validated by more and more direct biological evidence. ELM theories suggest that the secret of the learning capabilities of brains may lie in a globally ordered and structured mechanism built from locally random individual neurons, and that such a learning system happens to have regression, classification, sparse coding, clustering, compression, and feature learning capabilities, which are fundamental to cognition and reasoning; 2) the single hidden layer of ELM unifies Support Vector Machines (SVM), Principal Component Analysis (PCA), and Non-negative Matrix Factorization (NMF); 3) ELM provides some theoretical support for the universal approximation and classification capabilities of Convolutional Neural Networks (CNN). In addition to its good performance on small to medium datasets, hierarchical ELM is catching up with Deep Learning on some benchmark big datasets on which Deep Learning used to perform well.
Prof. Naoyuki Kubota
Tokyo Metropolitan University
Title: Multiscopic Topological Twin in Robotics
Recently, various approaches to Digital Transformation, Cyber-Physical Systems, and Digital Twins have been proposed and discussed based on the integration of information, intelligence, communication, and robot technologies. The essence of these approaches is to realize super real-time measurement, monitoring, simulation, prediction, search, adaptation, and control, mutually integrated from micro-, meso-, and macroscopic points of view. In particular, feature extraction from big data is important for realizing super real-time information processing. The methodology of topological mapping, knowledge graphs, and graph neural networks is very useful for such feature-based information processing. Topological mapping methods are used for 3D modeling suitable for accurate physics simulation from the microscopic point of view. In contrast, graph-based methods are used for knowledge representation suitable for huge-scale rule-based inference from the macroscopic point of view. Furthermore, we can build a topological model and knowledge according to a mesoscopic modeling and simulation approach that integrates microscopic models and macroscopic knowledge, called the Topological Twin. In this talk, we discuss the concept of the topological twin in robotics in order to bridge the cyber-physical gap from the multiscopic point of view. First, we introduce various types of topological mapping methods, unsupervised learning methods, and graph-based methods. One of them is the Growing Neural Gas (GNG), which can dynamically change its topological structure composed of nodes and edges. One important advantage of GNG is its capability for incremental learning of nodes and edges according to a target data distribution. Next, we explain multi-layer GNG to extract hierarchical features from environmental maps as a multi-scale approach, a batch learning algorithm for GNG (BL-GNG) to improve the convergence property, and a modified version of GNG with utility (GNG-U), called GNG-U2.
Furthermore, we show several experimental results of mobile robots and multi-legged robots. Finally, we discuss the applicability and future direction of multiscopic topological twin in robotics.
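The incremental node-and-edge learning that makes GNG attractive can be illustrated with a stripped-down sketch. This is a simplification under stated assumptions: toy 2-D data, hypothetical parameter names, and no node-removal step (standard GNG also deletes nodes left without edges); it is not the BL-GNG or GNG-U2 variant discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(1000, 2))  # toy 2-D input distribution

# State: node positions, per-node accumulated error, edges keyed (i, j) -> age.
nodes = [rng.normal(size=2), rng.normal(size=2)]
errors = [0.0, 0.0]
edges = {}

eps_b, eps_n = 0.05, 0.005  # winner / neighbor adaptation rates
age_max = 50                # edges older than this are pruned
grow_every = 100            # insert a new node every 'grow_every' inputs
alpha, d = 0.5, 0.995       # error decay factors

def key(i, j):
    return (min(i, j), max(i, j))

for t, x in enumerate(data, 1):
    # 1. Find the two nearest nodes (winner s1, runner-up s2).
    dists = [np.sum((x - p) ** 2) for p in nodes]
    s1, s2 = np.argsort(dists)[:2]
    errors[s1] += dists[s1]
    # 2. Move the winner and its topological neighbors toward the input;
    #    age the winner's edges.
    nodes[s1] = nodes[s1] + eps_b * (x - nodes[s1])
    for (i, j) in list(edges):
        if s1 in (i, j):
            n = j if i == s1 else i
            nodes[n] = nodes[n] + eps_n * (x - nodes[n])
            edges[(i, j)] += 1
    # 3. Connect winner and runner-up with a fresh edge; prune old edges.
    edges[key(s1, s2)] = 0
    edges = {e: a for e, a in edges.items() if a <= age_max}
    # 4. Periodically grow: insert a node between the highest-error node
    #    and its highest-error neighbor. This is the incremental step that
    #    lets the graph track the data distribution.
    if t % grow_every == 0:
        q = int(np.argmax(errors))
        nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
        if nbrs:
            f = max(nbrs, key=lambda n: errors[n])
            r = len(nodes)
            nodes.append(0.5 * (nodes[q] + nodes[f]))
            errors[q] *= alpha
            errors[f] *= alpha
            errors.append(errors[q])
            edges.pop(key(q, f), None)
            edges[key(q, r)] = 0
            edges[key(f, r)] = 0
    errors = [e * d for e in errors]
```

Starting from just two nodes, the graph grows and rewires itself online as inputs arrive, which is what allows GNG-style methods to build topological maps incrementally from streaming robot sensor data.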
Prof. Tom Gedeon
Australian National University
Title: Detecting doubt and deception from multimodal data
The use of neural networks and deep learning on human physiological data poses particular challenges as well as rewards. In this talk, I will discuss a number of approaches we have found necessary for making robust predictions from such inherently noisy and expensive-to-label data. I will also share some of our results in predicting human internal states, such as a speaker’s doubt in what they are saying, and show how we can leak personal information in many ways, using the example of deception. I will close with a discussion of the privacy-by-design principles we will need in the tools we develop as our societies monitor us more and more.
Prof. Nojun Kwak
Seoul National University
Title: AI in edge devices
Nowadays, Deep Neural Networks (DNNs) have become prevalent, with tons of real-world use cases. As edge devices such as smartphones and IoT devices become widespread, there is intense demand for deploying DNNs on them. However, unlike DNNs in the core, which have (virtually) unlimited access to computational resources and datasets, several challenges hinder the successful deployment of deep networks to the edge.
The main challenges are twofold: 1) problems caused by limited resources at the edge and 2) problems caused by limited data both in the core and at the edge. In this talk, some remedies for these problems are introduced. More specifically, as answers to the first problem (the resource-limited environment), we treat 1) knowledge distillation, which boosts the performance of a student model, 2) model compression methods, including quantization and pruning, for reduced-size deep models, and 3) dynamic inference paths for efficient inference adaptive to inputs and/or resources. The second problem (the data-limited environment) is tackled with respect to 1) federated learning and 2) personalization.
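Knowledge distillation, the first remedy above, trains the small student to match the teacher's temperature-softened class probabilities. The sketch below shows the classic Hinton-style distillation loss on toy logits; this is an assumption for illustration, as the abstract does not specify which distillation variant is used.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Mean KL(teacher || student) on softened distributions, scaled by
    T^2 so gradient magnitudes stay comparable across temperatures."""
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))) / len(p))

# Toy logits: a student close to the teacher vs. an uninformative one.
teacher = np.array([[10.0, 1.0, 0.5], [0.2, 8.0, 1.0]])
student_good = teacher * 0.9
student_bad = np.zeros_like(teacher)

loss_good = kd_loss(student_good, teacher)
loss_bad = kd_loss(student_bad, teacher)
```

In practice this soft-label term is mixed with the ordinary cross-entropy on hard labels; the soft targets carry the teacher's inter-class similarity structure, which is what boosts the student beyond training on labels alone.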