THE FUTURE IS HERE

Language Model Safety Explained!

Sources used:
– Open letter: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
– CBS full interview with Geoffrey Hinton: https://www.youtube.com/watch?v=qpoRO378qRY
– 2022’s Expert survey on AI: https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/
– Stanford’s AI Index report: https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report_2023.pdf
– Yann LeCun’s proposal for new architecture: https://drive.google.com/file/d/1BU5bV3X5w65DwSMapKcsr0ZvrMRU_Nbi/view
– Wait But Why blog on AI: https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
– 📚 Superintelligence by Nick Bostrom: https://amzn.to/41JCjiM
– The AI Dilemma (highly recommended!): https://www.youtube.com/watch?v=xoVJKj8lcNQ
– The Alignment Problem from a Deep Learning Perspective: https://arxiv.org/pdf/2209.00626.pdf
– Predictability and Surprise in Large Generative Models: https://arxiv.org/pdf/2202.07785.pdf
– Max Tegmark lecture on Life 3.0: https://www.youtube.com/watch?v=1MqukDzhlqA
– What should we learn from past AI forecasts: https://www.openphilanthropy.org/research/what-should-we-learn-from-past-ai-forecasts/
– ABC News interview with Sam Altman: https://abcnews.go.com/Technology/openai-ceo-sam-altman-ai-reshape-society-acknowledges/
– Chris Olah on what the hell is going on inside neural networks: https://80000hours.org/podcast/episodes/chris-olah-interpretability-research/
– X-Risk Analysis for AI Research: https://arxiv.org/pdf/2206.05862.pdf
– The Case for Halting AI Development | Max Tegmark on Lex Fridman Podcast: https://www.youtube.com/watch?v=VcVfceTsD0A
– Sparks of Artificial General Intelligence: Early experiments with GPT-4: https://arxiv.org/pdf/2303.12712.pdf
– Goal Misgeneralisation: https://shorturl.at/ijmqx

Honestly, the recent AI developments have left me feeling a bit restless. They are very exciting and terrifying at the same time. So I dove into this topic and did some research, hoping to understand a bit better what’s going on. AI risks are sometimes so conceptual and hard to put into words. I hope this video gives you a glimpse of the discussions around these difficult topics and of what still needs to be done to make AI truly benefit humans instead of doing harm.

Hope you enjoyed the vid! 🤗

🔑 TIMESTAMPS
================================
0:00 – Open letter & Debate about AI risks
3:26 – When will AGI exist?
7:45 – AI forecasts have been wrong before
10:02 – Ways AI could be dangerous
15:37 – Solving the AI alignment problem

👩🏻‍💻 COURSES & RESOURCES
================================
📖 Learn SQL Basics for Data Science Specialization 👉 https://imp.i384100.net/AovPnJ
📖 Excel Skills for Business 👉 https://coursera.pxf.io/doPaoy
📖 Machine Learning Specialization 👉 https://imp.i384100.net/RyjykN
📖 Data Visualization with Tableau Specialization 👉 https://imp.i384100.net/n15XWR
📖 Deep Learning Specialization 👉 https://imp.i384100.net/zavBA0
📖 Mathematics for Machine Learning and Data Science Specialization 👉 https://imp.i384100.net/LXK0gj
📖 Google Data Analytics Certificate 👉 https://imp.i384100.net/15v9y6
📖 Applied Data Science with Python 👉 https://imp.i384100.net/gbxOqv

🙋🏻‍♀️ LET’S CONNECT!
================================
🤓 Join my Discord server: https://discord.gg/SK7ZC5XhcS
📩 Newsletter: https://substack.com/profile/87689887-thu-vu
✍ Medium: https://medium.com/@vuthihienthu.ueb
🔗 All links: https://linktr.ee/thuvuanalytics

As a member of the Amazon and Coursera Affiliate Programs, I earn a commission from qualifying purchases made through the links above. By using these links, you help support this channel at no cost to you.

#artificialintelligence #datascience #ThuVu