
Building Better Reinforcement Learning With World Models & Self-Attention Methods

Posted Oct 06
# TransformX 2021
# Breakout Session
SPEAKER
David Ha
Research Scientist @ Google Japan

David is a Research Scientist at Google Brain. His research interests include recurrent neural networks, creative AI, and evolutionary computing. Prior to joining Google, he worked at Goldman Sachs as a Managing Director, where he co-ran the fixed-income trading business in Japan. He obtained undergraduate and graduate degrees in Engineering Science and Applied Mathematics from the University of Toronto.

SUMMARY

Internal mental models, along with consciousness and the concept of mind modeling, are major themes in neuroscience and psychology. However, we do not yet understand them well enough to create conscious artificial intelligence. In this talk, David Ha, Research Scientist at Google Brain, explores building "world models" for artificial agents. Such a world model constructs an abstract representation of the agent's environment, which the agent can then use to navigate that environment. David discusses how artificial agents can use world models and self-attention as a form of bottleneck, connecting these ideas to methods from evolutionary computation and artificial life. The goal of the presentation is to motivate scientists working toward conscious machines by encouraging them to build artificial life that includes an internal mental model.
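The world-model decomposition the summary describes can be sketched in a few lines. The sketch below is a toy illustration, not the talk's actual implementation: a "vision" component compresses each raw observation into a small latent code, a "memory" component carries a recurrent model state forward, and a small controller acts only on that compact internal representation. All dimensions and weights here are hypothetical and randomly initialized.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only.
OBS_DIM, Z_DIM, H_DIM, ACT_DIM = 64, 8, 16, 2

# V: "vision" component compressing a raw observation into a latent code z.
W_enc = rng.normal(0, 0.1, (Z_DIM, OBS_DIM))

def encode(obs):
    return np.tanh(W_enc @ obs)

# M: "memory" component updating the model's recurrent state from (z, h, action).
W_mem = rng.normal(0, 0.1, (H_DIM, Z_DIM + H_DIM + ACT_DIM))

def step_memory(z, h, action):
    return np.tanh(W_mem @ np.concatenate([z, h, action]))

# C: small controller that never sees the raw observation, only the
# compact internal state (z, h) -- the "bottleneck" idea.
W_c = rng.normal(0, 0.1, (ACT_DIM, Z_DIM + H_DIM))

def act(z, h):
    return np.tanh(W_c @ np.concatenate([z, h]))

# One rollout step against a dummy observation.
obs = rng.normal(size=OBS_DIM)
h = np.zeros(H_DIM)
z = encode(obs)
action = act(z, h)
h = step_memory(z, h, action)
print(action.shape, h.shape)
```

The point of the decomposition is that the controller is tiny: it operates on an abstract model of the world rather than raw pixels, which is what makes it amenable to evolutionary search.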

