The Future of Artificial Intelligence
Scooped by Juliette Decugis

How Machine Learning Became Useful: Reflecting on a Decade of Research (Joseph Gonzalez)


How did a field born out of mathematics and theoretical computer science join forces with rapid innovation in data and computer systems to change the modern world? What enabled the ML revolution, and what critical problems are left to solve?

Juliette Decugis's insight:

Professor Joseph Gonzalez, who specializes in ML and data systems, traces the evolution of the AI field from his time as a graduate student to today. He points to the shift from statistical graphical models to more computationally expensive, data-driven models.

 

As someone new to the field, I find it interesting to see the innovations that enabled, and continue to motivate, the growth of deep learning.

 

He highlights key innovations:

  • Python data processing --> easier access to data
  • big data systems for ML
  • development of common model APIs (scikit-learn, XGBoost, Keras), sketched below
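
To make that third point concrete, here is a minimal sketch of the uniform fit/predict interface that scikit-learn popularized; the data and model choices below are my illustration, not from the article:

  # The "estimator" API: every model exposes fit() and predict(),
  # so swapping algorithms is a one-line change. Data is synthetic.
  import numpy as np
  from sklearn.linear_model import LogisticRegression
  from sklearn.ensemble import RandomForestClassifier

  X = np.random.rand(100, 4)                 # 100 samples, 4 features
  y = (X[:, 0] + X[:, 1] > 1).astype(int)    # toy binary labels

  for model in (LogisticRegression(), RandomForestClassifier()):
      model.fit(X, y)                        # same call for every estimator
      accuracy = (model.predict(X) == y).mean()
      print(type(model).__name__, accuracy)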

 

According to Prof. Gonzalez, the next step in ML is "reliably deploying and managing these trained models to render predictions in real-world settings."
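
As a rough illustration of what that deployment step can look like, here is a hypothetical sketch that wraps a previously trained (pickled) model in a small HTTP service; the file name, route, and payload format are assumptions for the example, not anything from the article:

  # Serve a trained model behind a /predict endpoint using Flask.
  import pickle
  from flask import Flask, request, jsonify

  app = Flask(__name__)
  with open("model.pkl", "rb") as f:         # hypothetical trained model
      model = pickle.load(f)

  @app.route("/predict", methods=["POST"])
  def predict():
      features = request.get_json()["features"]   # e.g. [[0.2, 0.5, 0.1, 0.9]]
      return jsonify(prediction=model.predict(features).tolist())

  if __name__ == "__main__":
      app.run(port=8000)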

Scooped by Juliette Decugis

Meta's AI Chief Publishes Paper on Creating ‘Autonomous’ Artificial Intelligence

In a new paper, Yann LeCun, machine learning pioneer and head of AI at Meta, lays out a vision for AIs that learn about the world more like humans do.
Juliette Decugis's insight:

In a talk at UC Berkeley this Tuesday, Yann LeCun, one of the founding fathers of deep learning, discussed approaches for more generalizable and autonomous AI.

 

Current deep learning frameworks require extensive trial-and-error training to learn very specific tasks, and they often fail to generalize even to out-of-distribution inputs on the same task. With reinforcement learning in particular, a model needs to "fail" hundreds of times before it starts learning.
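
A toy example (my illustration, not from the talk) makes that trial-and-error cost visible: a tabular Q-learner on a five-state chain fails many episodes by random wandering before reward propagates back through its Q-table and it reliably reaches the goal.

  # Tabular Q-learning on a tiny chain world: count failed episodes.
  import random

  N_STATES, GOAL, EPISODES = 5, 4, 500
  Q = [[0.0, 0.0] for _ in range(N_STATES)]  # actions: 0 = left, 1 = right
  failures = 0

  for ep in range(EPISODES):
      state, reached = 0, False
      for step in range(10):                 # cap episode length
          # epsilon-greedy with random tie-breaking: early on, all
          # Q-values are zero, so the agent wanders at random
          if random.random() < 0.3:
              action = random.randint(0, 1)
          else:
              best = max(Q[state])
              action = random.choice([a for a in (0, 1) if Q[state][a] == best])
          nxt = min(max(state + (1 if action == 1 else -1), 0), N_STATES - 1)
          reward = 1.0 if nxt == GOAL else 0.0
          # standard Q-learning update (alpha = 0.1, gamma = 0.9)
          Q[state][action] += 0.1 * (reward + 0.9 * max(Q[nxt]) - Q[state][action])
          state = nxt
          if state == GOAL:
              reached = True
              break
      if not reached:
          failures += 1

  print(f"failed episodes: {failures} out of {EPISODES}")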

 

As a potential path away from specialized AI, LeCun proposes a novel architecture composed of five sub-models mirroring different parts of our brain. One of these modules, a world model, would act as a kind of memory. Instead of each model learning its own task-specific representation of the world, the framework would maintain a single world model usable across tasks by the different modules.
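
To make that idea concrete, here is a deliberately simplified structural sketch (my illustration, not LeCun's actual architecture) of separate modules reading from and writing to one shared world model:

  # Several modules share one world model instead of each learning
  # its own task-specific representation of the world.
  class WorldModel:
      """Shared internal state of the environment, used by every module."""
      def __init__(self):
          self.state = {}

      def update(self, observation):
          self.state.update(observation)     # fold new percepts into state

      def predict(self, action):
          # placeholder: a real world model would simulate the action
          return dict(self.state, last_action=action)

  class Perception:
      def observe(self, raw_input, world):
          world.update({"percept": raw_input})   # write to the shared model

  class Actor:
      def act(self, goal, world):
          # pick the action whose predicted outcome matches the goal
          for action in ("wait", "move"):
              if world.predict(action)["last_action"] == goal:
                  return action
          return "wait"

  world = WorldModel()                       # one model, many modules
  Perception().observe("obstacle ahead", world)
  print(Actor().act("move", world))          # -> "move"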

 

See full paper: https://openreview.net/pdf?id=BZ5a1r-kVsf
