
Commit

Update 2024-12-13-Superintelligence.md
ashishtele authored Dec 17, 2024
1 parent d11fb93 commit cc9f558
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion _posts/2024-12-13-Superintelligence.md
@@ -27,7 +27,7 @@ Hi There,

Sutskever began by looking back at a talk he gave 10 years prior, in 2014, where he presented the "Deep Learning Hypothesis." The core idea was that a large neural network with 10 layers could perform any task a human could do in a fraction of a second. This was based on the assumption that artificial and biological neurons are similar and that real neurons are slow. While this hypothesis proved largely correct, the field was still in its early stages, and some of Sutskever's predictions didn't quite pan out.

- One of the key ideas of deep learning is connectionism, which holds that a large artificial neural network can be configured to do many of the same things that humans can. This has led to the age of pre-training, where very large neural networks are trained on huge datasets. Sutskever mentioned GPT-2, GPT-3, and scaling laws as examples of this progress. However, he believes that pre-training as we know it will eventually end because while computing power is growing through better hardware, algorithms, and larger clusters, data is not growing at the same pace. He refers to data as the "fossil fuel of AI."
+ One of the key ideas of deep learning is connectionism, which holds that a large artificial neural network can be configured to do many of the same things humans can. This has led to the age of pre-training, where very large neural networks are trained on huge datasets. Sutskever mentioned GPT-2, GPT-3, and scaling laws as examples of this progress. However, he believes that pre-training as we know it will eventually end: while computing power is growing through better hardware, algorithms, and larger clusters, data is not increasing at the same pace. Not all researchers agree, though. [Logan](https://x.com/OfficialLoganK) says, "Pre-training is only over if you have no imagination."

Sutskever then shifted his focus to the longer term, speculating about the future of superintelligence. He mentioned two potential developments: agents and synthetic data. Agents are AI systems that can act autonomously, while synthetic data is artificially generated data that can be used to train AI models. He sees both of these as promising areas for future research.
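The "scaling laws" mentioned in the diff above describe the empirical observation that a model's loss falls predictably, roughly as a power law, as data and compute grow. A minimal illustrative sketch of that shape (all constants here are hypothetical, chosen only to show the diminishing-returns behavior, not taken from the post or any published fit):

```python
def loss(n_tokens: float, a: float = 400.0, alpha: float = 0.3, c: float = 1.7) -> float:
    """Illustrative power-law loss curve: L(N) = a * N^(-alpha) + c.

    All constants are hypothetical; real scaling-law fits estimate them
    empirically from training runs.
    """
    return a * n_tokens ** (-alpha) + c

# Each 10x increase in data buys a smaller absolute drop in loss.
drops = [loss(10**k) - loss(10**(k + 1)) for k in range(6, 11)]
assert all(d > 0 for d in drops)                      # loss keeps falling...
assert all(x > y for x, y in zip(drops, drops[1:]))   # ...but ever more slowly
```

This diminishing-returns shape is one way to read the post's point: compute can keep scaling, but if the data term stops growing, the curve flattens.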

