Workshop: BabyMind: How Babies Learn and How Machines Can Imitate
Byoung-Tak Zhang, Gary Marcus, Angelo Cangelosi, Pia Knoeferle, Klaus Obermayer, David Vernon, Chen Yu
2020-12-11T08:40:00-08:00 - 2020-12-11T17:30:00-08:00
Abstract: Deep neural network models have shown remarkable performance in tasks such as visual object recognition, speech recognition, and autonomous robot control. We have seen continuous improvements over the years, which have led to these models surpassing human performance in a variety of tasks such as image classification, video games, and board games. However, the performance of deep learning models relies heavily on massive amounts of data, which require enormous time and effort to collect and label.
Recently, to overcome these weaknesses and limitations, attention has shifted towards machine learning paradigms such as semi-supervised learning, incremental learning, and meta-learning, which aim to be more data-efficient. However, these learning models still require a huge amount of data to achieve high performance on real-world problems. There have been few achievements or breakthroughs, especially in terms of the ability to grasp abstract concepts and to generalize across problems.
In contrast, human babies gradually make sense of the environment through their experiences, a process known as learning by doing, without a large amount of labeled data. They actively engage with their surroundings and explore the world through their own interactions. They gradually acquire abstract concepts of objects and develop the ability to generalize across problems. Thus, if we understand how a baby's mind develops, we can imitate those learning processes in machines and thereby address previously unsolved problems such as domain generalization and the stability-plasticity dilemma. In this workshop, we explore how these learning mechanisms can help us build human-level intelligence in machines.
In this interdisciplinary workshop, we bring together eminent researchers in Computer Science, Cognitive Science, Psychology, Brain Science, Developmental Robotics, and various other related fields to discuss the following questions on babies vs. machines.
■ How far is state-of-the-art machine intelligence from babies?
■ How do babies learn from their own interactions and experiences?
■ What sort of insights can we acquire from the baby's mind?
■ How can those insights help us build smart machines with baby-like intelligence?
■ How can machines learn from babies to do better?
■ How can these machines further contribute to solving real-world problems?
We will invite selected experts in related fields to give insightful talks, and we encourage interdisciplinary contributions from researchers working on the above topics. We expect this workshop to be a good starting point for participants from various fields to discuss theoretical fundamentals, open problems, and major directions for further development in an exciting new area.
Schedule
2020-12-11T08:40:00-08:00 - 2020-12-11T09:00:00-08:00
Opening Remarks: BabyMind, Byoung-Tak Zhang and Gary Marcus
Byoung-Tak Zhang, Gary Marcus
2020-12-11T09:00:00-08:00 - 2020-12-11T09:40:00-08:00
Invited Talk: Latent Diversity in Human Concepts
Celeste Kidd
2020-12-11T09:40:00-08:00 - 2020-12-11T10:20:00-08:00
Invited Talk: The Role of Embodiment in Development
Oliver Brock
2020-12-11T10:20:00-08:00 - 2020-12-11T10:45:00-08:00
Coffee Break
2020-12-11T10:45:00-08:00 - 2020-12-11T11:00:00-08:00
Contributed Talk: Automatic Recall Machines: Internal Replay, Continual Learning and the Brain
Xu Ji
2020-12-11T11:00:00-08:00 - 2020-12-11T11:15:00-08:00
Contributed Talk: Architecture Agnostic Neural Networks
Sabera Talukder
2020-12-11T11:15:00-08:00 - 2020-12-11T11:30:00-08:00
Contributed Talk: Human-Like Active Learning: Machines Simulating Human Learning Process
Jaeseo Lim
2020-12-11T11:30:00-08:00 - 2020-12-11T12:30:00-08:00
Poster Session
Kwanyoung Park, Haizi Yu, Alban Laflaquière, Yizhou Zhang, Hugo Caselles-Dupré, Charlie Snell, Phil Ball, Jhoseph Shin, Jelena Sucevic, Kezhen Chen, Won-Seok Choi, Eon-Suk Ko, Xu Ji
2020-12-11T12:30:00-08:00 - 2020-12-11T13:30:00-08:00
Lunch Break
2020-12-11T13:30:00-08:00 - 2020-12-11T13:45:00-08:00
Contributed Talk: What can babies teach us about contrastive methods?
Jovana Mitrovic
2020-12-11T13:45:00-08:00 - 2020-12-11T14:00:00-08:00
Contributed Talk: Learning Canonical Transformations
Zack Dulberg
2020-12-11T14:00:00-08:00 - 2020-12-11T14:15:00-08:00
Contributed Talk: Modeling Social Interaction for Baby in Simulated Environment for Developmental Robotics
Rubel Mondol, Deokgun Park
2020-12-11T14:15:00-08:00 - 2020-12-11T14:30:00-08:00
Contributed Talk: ARLET: Adaptive Representation Learning with End-to-end Training
Won-Seok Choi
2020-12-11T14:30:00-08:00 - 2020-12-11T14:45:00-08:00
Contributed Talk: Not all input is equal: the efficacy of book reading in infants' word learning is mediated by child-directed speech
Eon-Suk Ko
2020-12-11T14:45:00-08:00 - 2020-12-11T15:30:00-08:00
Coffee Break & Poster Session
2020-12-11T15:30:00-08:00 - 2020-12-11T16:10:00-08:00
Invited Talk: Developmental Robotics: Language Learning, Trust and Theory of Mind
Angelo Cangelosi
2020-12-11T16:10:00-08:00 - 2020-12-11T16:50:00-08:00
Invited Talk 4
Josh Tenenbaum
2020-12-11T16:50:00-08:00 - 2020-12-11T17:30:00-08:00