Kelly Lane is twelve years old and in the G.A.T.E. (Gifted and Talented Education) program in middle school. He is an avid reader. His two recent favorite books are Unbroken by Laura Hillenbrand and Dawn of the New Everything by Jaron Lanier. Kelly also
loves playing video games, particularly on the Oculus Rift and Oculus Go VR headsets. He enjoys surfing with his family and has traveled to England, France, and India. His first book was When Computers Become Human: A Kid's Guide to the Future of Artificial Intelligence, which was also the basis of a short film. Huazhang Company in Beijing, China, has translated Kelly's book into Mandarin Chinese and has issued it in a new edition under its scientific publishing division. Kelly is currently working on a new fantasy novel.
Technologically speaking, we are creating our own version of the Oracle of Delphi. But this time we are not relying on outdated mythological beliefs or superstitious thinking. Instead of invoking magical incantations (or succumbing to misunderstood gaseous fumes), we are designing elaborate software programs that allow machines to learn autonomously and then analyze tomorrow's outcomes today. Science, in other words, is taking the promise of Pythia and making it a reality in machine hardware. This can and will occur because artificial intelligence (unlike our own) can evolve at an ever-accelerating rate. Vestri is merely the forerunner of a new kind of Oracle, one that is algorithmically programmed, can access enormous amounts of data worldwide, and (on the mathematical basis of game theory) can divine what the Greek gods of yesteryear, such as Apollo, could not. This book is richly illustrated in color. It is written by Kelly Lane, a twelve-year-old middle school student whose first book, When Computers Become Human, was published by the MSAC Philosophy Group when he was eleven and has subsequently been translated into Mandarin and published in China.
ABSTRACT: A key challenge in scaling up robot learning to many skills and environments is removing the need for human supervision, so that robots can collect their own data and improve their own performance without being limited by the cost of requesting human feedback. Model-based reinforcement learning holds the promise of enabling an agent to learn to predict the effects of its actions, which could provide flexible predictive models for a wide range of tasks and environments without detailed human supervision. We develop a method for combining deep action-conditioned video prediction models with model-predictive control that uses entirely unlabeled training data. Our approach does not require a calibrated camera, an instrumented training set-up, or precise sensing and actuation. Our results show that our method enables a real robot to perform nonprehensile manipulation -- pushing objects -- and can handle novel objects not seen during training.
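To make the abstract's central idea concrete, the following is a minimal Python sketch of how an action-conditioned video prediction model might be combined with sampling-based model-predictive control. It is an illustration under stated assumptions, not the authors' implementation: the function predict_pixel_distributions is a hypothetical stand-in for a trained prediction network, and random-shooting search with an expected-pixel-distance cost is just one plausible way to select actions.

import numpy as np

def predict_pixel_distributions(frame, designated_pixel, actions):
    # Hypothetical stand-in for a trained action-conditioned video prediction
    # model: given the current camera frame, a user-designated pixel, and a
    # candidate action sequence, return an array of shape (horizon, H, W)
    # giving the predicted location distribution of that pixel at each step.
    raise NotImplementedError("replace with a learned prediction model")

def rollout_cost(pixel_dists, goal_pixel):
    # Expected distance between the designated pixel and its goal location,
    # summed over the prediction horizon.
    horizon, H, W = pixel_dists.shape
    ys, xs = np.mgrid[0:H, 0:W]
    dist_to_goal = np.sqrt((ys - goal_pixel[0]) ** 2 + (xs - goal_pixel[1]) ** 2)
    return float(np.sum(pixel_dists * dist_to_goal))

def mpc_step(frame, designated_pixel, goal_pixel,
             num_samples=100, horizon=5, action_dim=4, rng=None):
    # One step of sampling-based model-predictive control: sample random
    # action sequences, score each with the prediction model, and return the
    # first action of the lowest-cost sequence.
    rng = np.random.default_rng() if rng is None else rng
    best_cost, best_action = np.inf, None
    for _ in range(num_samples):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))
        pixel_dists = predict_pixel_distributions(frame, designated_pixel, actions)
        cost = rollout_cost(pixel_dists, goal_pixel)
        if cost < best_cost:
            best_cost, best_action = cost, actions[0]
    return best_action

In closed-loop use, the robot would execute the returned action, observe a new camera frame, and call mpc_step again, so that planning is repeated at every step rather than committing to a full action sequence in advance.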