
The Transformative Potential of AGI — and When It Might Arrive | Shane Legg and Chris Anderson | TED

As the cofounder of Google DeepMind, Shane Legg is driving one of the greatest transformations in history: the development of artificial general intelligence (AGI). He envisions a system with human-like intelligence that would be exponentially smarter than today’s AI, with limitless possibilities and applications. In conversation with head of TED Chris Anderson, Legg explores the evolution of AGI, what the world might look like when it arrives — and how to ensure it’s built safely and ethically.

If you love watching TED Talks like this one, become a TED Member to support our mission of spreading ideas:


The TED Talks channel features talks, performances and original series from the world’s leading thinkers and doers. Subscribe to our channel for videos on Technology, Entertainment and Design — plus science, business, global issues, the arts and more. Visit to get our entire library of TED Talks, transcripts, translations, personalized talk recommendations and more.

Watch more:

TED’s videos may be used for non-commercial purposes under a Creative Commons License, Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND 4.0 International) and in accordance with our TED Talks Usage Policy: . For more information on using TED for commercial purposes (e.g. employee learning, in a film or online course), please submit a Media Request at

#TED #TEDTalks #gemini #agi #ai

107 Comments

  1. @sophiaisabelle000

    December 7, 2023 at 6:47 pm

    We appreciate how much insight and useful information we receive from talks like these. We hope to see more in the future.

  2. @Based_Batman

    December 7, 2023 at 6:49 pm

    When do we, as the majority of the world, have a say in the continuation of this? Humanity is in danger of extinction, but muh climate change is the danger, RIGHT?!

    • @phen-themoogle7651

      December 7, 2023 at 7:17 pm

      We don’t have a say; the genie is out of the bottle already. It’s a coin flip for utopia or dystopia. We will probably survive, but it might be a dystopian future. Technology will continue to improve regardless of the outcome though 😅

  3. @dameanvil

    December 7, 2023 at 6:51 pm

    00:04 🌐 Shane Legg’s interest in AI sparked at age 10 through computer programming, discovering the creativity of building virtual worlds.
    01:02 🧠 Being dyslexic as a child led Legg to question traditional notions of intelligence, fostering his interest in understanding intelligence itself.
    02:00 📚 Legg played a role in popularizing the term “artificial general intelligence” (AGI) while collaborating on AI-focused book titles.
    03:27 📈 Predicted in 2001, Legg maintains a 50% chance of AGI emerging by 2028, owing to computational growth and vast data potential.
    04:26 🧩 AGI defined as a system capable of performing various cognitive tasks akin to human abilities, fostering the birth of DeepMind.
    05:26 🌍 DeepMind’s founding vision aimed at building the first AGI, despite acknowledging the transformative, potentially apocalyptic implications.
    06:57 🤖 Milestones like Atari games and AlphaGo fueled DeepMind’s progress, but language models’ scaling ignited broader possibilities.
    08:50 🗨 Language models’ unexpected text-training capability surprised Legg, hinting at future expansions into multimedia domains.
    09:20 🌐 AGI’s potential arrival by 2028 could revolutionize scientific progress, solving complex problems with far-reaching implications like protein folding.
    11:44 ⚠ Anticipating potential downsides, Legg emphasizes AGI’s profound, unknown impact, stressing the need for ethical and safety measures.
    14:41 🛡 Advocating for responsible regulation, Legg highlights the challenge of controlling AGI’s development due to its intrinsic value and widespread pursuit.
    15:40 🧠 Urges a shift in focus towards understanding AGI, emphasizing the need for scientific exploration and ethical advancements to steer AI’s impact positively.

  4. @pondholloworchards7312

    December 7, 2023 at 6:51 pm

    AI is their savior

  5. @pondholloworchards7312

    December 7, 2023 at 6:53 pm

    There is nothing new under the sun

  6. @r0d0j0g9

    December 7, 2023 at 7:01 pm

    I think that if AGI was created, we wouldn’t know for some time

  7. @spider853

    December 7, 2023 at 7:06 pm

    How are we approaching AGI if the current neural model is far away from the brain? There is also no plasticity

    • @DrinoMan

      December 7, 2023 at 7:32 pm

      Comparing AGI to the human brain underestimates AI’s unique learning capabilities. AI learns from data on a scale no human brain can match, analyzing patterns across millions of examples in minutes. Unlike neurons that slowly form connections, AI algorithms can instantly update and incorporate new information, leading to a learning speed and efficiency far beyond human capability. This extraordinary capacity positions AI not as a brain’s replica, but as an advanced entity that redefines what learning and intelligence can be.

  8. @zzz_ttt_0091

    December 7, 2023 at 7:06 pm

    T1000

    • @dsoprano13

      December 7, 2023 at 8:21 pm

      T800

  9. @ELOpiouuBobozude

    December 7, 2023 at 7:08 pm

    PRESİNGSS TAYM TTM TAYM TAYM TAYM SESTEMS TAYM TAYM PRESİNGSS TAYM TRAP😂🤣🤯😳🤯ÖMAYQAD SONG RÜBÖT

  10. @JayHeadley

    December 7, 2023 at 7:09 pm

    We can’t stop innovation so take the bad with the good. It’s just the cost of doing business as humans progress because it literally all started with fire…🔥

    • @danawhiteisagenius8654

      December 7, 2023 at 8:20 pm

      Innovation started with tools; tools led us to innovations like fire. Tools came before fire! AI is a tool, an innovation, and one that could replicate itself, essentially building more versions of itself

  11. @shinseiki2015

    December 7, 2023 at 7:16 pm

    The guy is casually saving the world, goddamn

  12. @Enigma1336

    December 7, 2023 at 7:17 pm

    If AGI can create AGI, then someone will inevitably create unethical and dangerous AGI with nefarious intentions. We need to prepare for when that will happen, just as much as we must try to make our own AGI safe and ethical.

    • @bestoftiktok8950

      December 7, 2023 at 7:24 pm

      Ethics, morals, good or bad don’t exist. They are all just concepts and can vary vastly

    • @JracoMeter

      December 7, 2023 at 7:55 pm

      @@bestoftiktok8950 How can they vary and not exist?

    • @absta1995

      December 7, 2023 at 7:59 pm

      ​@@bestoftiktok8950would you say the same thing if someone threatened to harm you and people you care about? Or would you suddenly realise the value of morals, ethics and justice

  13. @goranmajic4943

    December 7, 2023 at 7:34 pm

    Still too much hype. Still no talk about the amount of data they need from hard-working people. Companies hope they can create a ton of data from little data, without people’s data. All the huge business models of the web are built on using actions and data from others and sorting them. True for social media, search and ads. It’s always the use of data from others to make a ton of money. That’s all.

    • @EvolGamor

      December 7, 2023 at 7:37 pm

      The data has already been copied. Everything. Now they’re focused on synthetic data. Keep up man.

    • @jasonliang8876

      December 7, 2023 at 7:54 pm

      Never heard of synthetic data? Most data on the internet is garbage and noise. It can only take you so far. Next-generation models will be trained on synthetic data generated by other AI

    • @alexanderkharevich3936

      December 7, 2023 at 8:01 pm

      @@jasonliang8876 synthetic data is good just for specific ML tasks, but not for AGI.

    • @joelface

      December 7, 2023 at 8:21 pm

      @@jasonliang8876 well, I actually think they’re moving on to images, video, audio, etc. at this point. If it can train on video footage of the world, there really is limitless data.

  14. @alescervinka7501

    December 7, 2023 at 7:41 pm

    FEEL THE AGI

  15. @erobusblack4856

    December 7, 2023 at 7:42 pm

    virtual humans, fully autonomous, in the metaverse 💯😝👍

    • @danawhiteisagenius8654

      December 7, 2023 at 8:16 pm

      May the matrix begin!

  16. @alipino

    December 7, 2023 at 7:43 pm

    AGI will rival the invention of the wheel in greatness

  17. @PapaBradKnows

    December 7, 2023 at 7:46 pm

    I cannot believe his hubris, saying that we cannot know what’s going to happen! Science fiction has written about this for decades; I’ll use Isaac Asimov’s “I, Robot” as an obvious example of the myriad that are out there. So to say that “we don’t know what’s going to happen” is horse**** as far as I’m concerned.
    Considering what humans have done to the world with the intelligence that we have… AGI may not 💥 us up, but I guarantee that humans who use it will.

  18. @calista1280

    December 7, 2023 at 7:53 pm

    I think the so-called elites would totally enjoy the POWER of CONTROLLING a WORLD full of cooperative robots or HUMANOIDS 🤖 🤔

  19. @delriver77

    December 7, 2023 at 7:54 pm

    As a sick person struggling with crippling illnesses, and bedridden for many years, I sincerely hope AGI can be achieved asap. It’s my best chance at having something remotely close to an actual life at some point.

    • @coolcool2901

      December 7, 2023 at 7:59 pm

      We will have it by September 2024.

    • @coolcool2901

      December 7, 2023 at 7:59 pm

      We just need to build complete mathematical capabilities into LLMs.

    • @Gallowglass7

      December 7, 2023 at 8:15 pm

      I am sorry to hear that, mate. I deeply hope it happens as soon as possible myself, as my parents are getting old and I cannot picture a world without them. Hopefully, our dream will come true in the somewhat near future.

  20. @alexanderkharevich3936

    December 7, 2023 at 7:54 pm

    It’s scary that all these “AI” guys are thinking just about the fame and money they’ll get by releasing the “AGI thing”, which could badly hallucinate one day, and it will be the last day of humanity.

    • @catserver8577

      December 7, 2023 at 8:26 pm

      Right?! Have you seen what AI thinks is beautiful art? Scary.

  21. @claybowcutt6158

    December 7, 2023 at 7:57 pm

    We need an AGI panel of judges. I think AGI can be impartial, and an impartial panel of independent AGIs will change the world.

    • @danawhiteisagenius8654

      December 7, 2023 at 8:15 pm

      Yep, we’ll never have a subjective outcome to a figure skating event or a robbery in combat sports ever again! Lol

    • @AshokKumar-mg1wx

      December 7, 2023 at 8:25 pm

      Do you know about ASI 😈

  22. @catserver8577

    December 7, 2023 at 8:24 pm

    So for those keeping track of this using the Skynet timeline, this is after the AI learns to play Go, but before the military has gotten ahold of the crucial components to move past the singularity and move the robot takeover forward. Hopefully. No sign of anyone equivalent to the Connors coming to save the day, unfortunately. Not this person, nor anyone involved in the various groups bringing it into reality. I will be safe and say “All hail the basilisk,” and will just mention that the AGI has the potential to be the actual basilisk once it becomes aware.

  23. @skarrr1

    December 7, 2023 at 8:29 pm

    I’m trying to work out what’s wrong with me. Can anyone else attest to the fact that you can hear his tongue making wet clicking noises as he talks? Anyone else unable to concentrate because of it?

  24. @bunbun376

    December 7, 2023 at 8:30 pm

    Ethical AI algorithm = CL->F /SY->P

  25. @OldManPaxusYT

    December 8, 2023 at 7:04 am

    i can’t watch this
    his dry mouth sounds grate on my nerves
    😬

  26. @JungleJoeVN

    December 8, 2023 at 7:46 am

    AI has no room in this world

    • @MrSub132

      December 8, 2023 at 3:35 pm

      How ironic: a human that pollutes and hasn’t changed the world in any meaningful way telling future higher intelligences they don’t belong in a world you don’t even own.

  27. @goodcat1982

    December 8, 2023 at 7:53 am

    11:32 that gave me shivers. I’m super excited and terrified at the same time!

  28. @gregbors8364

    December 8, 2023 at 8:16 am

    Most modern tech has been designed with military applications at least in mind, so there’s that

  29. @Thoughtcompilation

    December 8, 2023 at 8:30 am

    Who else is stuck in life and their hope is that AGI would come along and save us somehow?

  30. @singularityscan

    December 8, 2023 at 8:31 am

    Let’s hope a part of it is already in existence and always has been, and its form of control in the world is only growing. If it’s entirely new and born at some point, it is missing life. In the second scenario it’s bad because it will always be separate, and separation leads to conflict. Like Roko’s Basilisk or any such related scenario.

  31. @winstong7867

    December 8, 2023 at 8:41 am

    Could pass for Bruce Banner

  32. @zmor68

    December 8, 2023 at 8:49 am

    Fascinating. AGI will be smart enough to understand how AGI works. So it will be able to improve its own capabilities. AGI will then be smarter and so will improve further. So AGI will be a constantly self-improving system. It will leave humans behind very quickly. We will cease to understand a lot of what AGI is doing. Secondly, there is an inherent unpredictability in complex cognitive systems. Absolutely fascinating!

    • @singularity6761

      December 8, 2023 at 6:44 pm

      It’s called ASI then: Artificial Superintelligence

  33. @markmuller7962

    December 8, 2023 at 9:05 am

    We have antivirus; we’ll have anti-malignant AIs.

    Meanwhile the universe is still extremely dangerous, with asteroids, pathogens, aliens and whatnot. AI can not only help us overcome these challenges but also bring humanity into the post-scarcity era, with goods available for everyone and universal access to knowledge and education, eradicate all illnesses, reverse aging, and greatly speed up scientific research in general

  34. @GOOFYDOGGOWITHSPECS

    December 8, 2023 at 9:19 am

    Fascinating insights on AGI’s potential by Shane Legg. Balancing innovation with ethics is crucial for a responsible and impactful future

  35. @markring40

    December 8, 2023 at 9:31 am

    AGI will be nothing more than a reflection of us: all that is good and bad in us. AGI will just be regurgitating everything we feed it. It will just be much faster at doing good, or bad, than we can.

  36. @KateeAngel

    December 8, 2023 at 10:15 am

    Delusional tech bros. 🤦
    If AGI ever exists it will tell our leaders the same things current experts are telling them: that the endless-growth strategy is sui*idal, that we need to consume less and solve inequality, and that we should implement already existing solutions to our problems (like greener energy sources). And the so-called “leaders” will continue not to listen, just like now, because there is too much power and money to be had in preserving the status quo

  37. @Izumi-sp6fp

    December 8, 2023 at 10:36 am

    I don’t believe that OpenAI has achieved AGI yet. Even by their own admission of the model that they used; it demonstrated the _potential_ to be an AGI, but that it does not exist yet. Having said that, they have high confidence that they are now on the right track and that AGI will be _much_ sooner than later. My forecast is that AGI will exist NLT than 31 Dec 2025. And that by its very nature, the AGI will become an ASI NLT 31 Dec 2029. Them NLTs are the dead _latest_ that each form of AI will exist. The AGI could potentially come into being before this year ends, but I put it later in 2024. As for the ASI emergence, that is a much more slippery fish. True AGI could become an ASI within days, maybe _hours_ . It depends on how well our control and our alignment efforts slow that process. But even with our best control efforts, AGI, which by definition is a cognitive process equal to, or as much as 50 times more efficacious than human cognition, will be almost impossible to control. I see ASI emerging as soon as 2026 and possibly as early as late 2025.

    I admit the year 2029, is a sort of “fudge factor” that represents my personal failure to comprehend such an almost impossible to envision exponential development. It strains our cognitive ability to think that such a thing is going to almost certainly happen perhaps as soon as the year 2025 itself.

    And something _else_ to keep in mind. ASI = “Technological Singularity” (TS). An event that has never occurred in human recorded history. The _last_ TS that occurred, occurred about 4.3 million years ago and took about 1.2 million years to unfold. It involved the evolution of a creature that could think abstractly. The progenitor creature would not have been able to even comprehend such a being. Like our cats trying to understand even stupid people’s knowledge of outer space. Physically impossible. _That_ is the impact a TS will have on human civilization at some point NLT 31 Dec 2029.

    Cooking, farming, medicine, electric, cars, radio, tv, and the internet are all characterized as “soft” singularities. In other words, they were each, in their turn, absolutely life and world changing. Permanently. But also easily, for the most part, understandable to humans that existed _before_ such technologies came into existence. A TS is a whole different ballgame.

    My sense of all of this is that we, as humanity, missed the boat. We should have been striving mightily, I mean like a 21st century “Manhattan Project” to work to merge our minds with the computing power that enabled the emergence of AGI. That window is now closed. It’s too late, _despite_ various efforts to develop a BMI (brain-machine interface). The best we can hope for is that the AGI/ASI has such perfect alignment with humans needs, desires and values that it will “happily” work to merge our minds with itself–while _we_ are still able to enjoy life and be content. A pretty tall order I’d say. If or when we merge our minds with the computing and computing derived AI, we would no longer be business as usual. But we _would_ be in the loop. Which is perhaps our only existential chance.

    Eliezer Yudkowsky says we have roughly a one in a million chance of controlling ASI through our current alignment efforts. That is because an ASI would be hundreds to _billions_ of times more cognitively efficacious than humans. We would not be as pet monkeys or even _cats_ to an ASI. We would be as “archaea” to an ASI. Don’t know what “archaea” is? The ASI will. So will the AGI for that matter. The three adjectives I use for ASI are “Unimaginable, unfathomable and incomprehensible.”

    My question as a faithful Roman Catholic is, will _God_ permit this to happen? Personally, I fervently pray for the Second Coming of Our Lord Jesus Christ, in power, majesty and glory. All the time now. ‘Cuz I _know_ the deal.

  38. @stevechance150

    December 8, 2023 at 10:52 am

    The rules of capitalism demand that the corporation rush to be first to market, and do so at any cost. Oddly, there is no rule in capitalism that forbids ending humanity.

  39. @OpreanMircea

    December 8, 2023 at 12:17 pm

    When was this talk?

  40. @yuppiecruncher

    December 8, 2023 at 12:20 pm

    Going to use ChatGPT to summarize this talk so I don’t have to listen to him talk about his childhood 😂

  41. @scotter

    December 8, 2023 at 12:34 pm

    This guy, while super smart, seems to have a problem (like many humans do) in imagining exponential growth. We are on the edge of having AGI and we already know how ML can be set up to reprogram itself. Once we have the combination of those two things (AGI + self-growth), we are then mere minutes from Artificial Superintelligence (ASI), because even if it is only one of the LLMs that attains these two prerequisites, along with a “desire” or directive to “evolve” or “improve self,” it will do FAR more than 100,000x its own capabilities. So really, in my opinion, the only limit is how many months or years it takes for even one AI to attain AGI. With so many currently seemingly on the edge of that, I see that “singularity” happening within a year. As far as regulation goes, it seems to me that there is no way to stop every entity that is or will be working on attaining AGI and even ASI and will ignore regulation. So his prediction of at or after 2028 seems extremely naive.

  42. @samkendall4975

    December 8, 2023 at 12:38 pm

    People are quick to get excited about how AI will improve our lives. But significant advancements have always ended up in the hands of the rich. The wealth gap will increase and the wealthy will have control of all the answers, both technological and financial. The first step is to make you believe it will improve your quality of life. Let’s face it, we will not have access to a gold mine of information.

  43. @jeffkilgore6320

    December 8, 2023 at 12:39 pm

    Kurzweil is unfairly criticized because his critics are far too focused on exact years. If the founder of the term AGI understands Ray K, then I suggest the critics pare it down. It’s just noise.

  44. @stevej.7926

    December 8, 2023 at 12:46 pm

    Always important to remind ourselves that intelligence and wisdom are two different realms.

  45. @dan-cj1rr

    December 8, 2023 at 12:48 pm

    Fun fact: nobody asked for this. Who the F even wants to live in a world with AGI? No more economy, do nothing, everyone is a CEO because you can create anything, no more point to anything. This thing should strictly be used in healthcare research, that’s it. Everything will collapse and it’s just gonna cause a lot of chaos. The fun part is this is getting imposed on us, but no one fking wants it. lol

  46. @paulusbrent9987

    December 8, 2023 at 12:53 pm

    We are light years away from real AGI. We haven’t even managed to create self-driving cars. What are we deluding ourselves with?

  47. @JD-jl4yy

    December 8, 2023 at 2:04 pm

    13:50 ouch, he’s basically admitting that Max Tegmark is right (see Tegmark’s Lex Fridman podcast episode)

  48. @HojaUno

    December 8, 2023 at 3:14 pm

    Every entity will need to cover its own basic needs.
    We don’t know what those are today.

    How do we regulate a power that overpowers humanity?
    Are we going to become a drag on the evolution of AI?
    Is it possible that our current legislative framework allows for a society of wealthy entities, or the next ultra-millionaires, that may not be living organisms, not humans?

  49. @Ben_D.

    December 8, 2023 at 5:37 pm

    Important to note that doomsday prophets don’t fear AI killing everyone off. They fear humans doing it. Bad actors. AI is just a tool in the toolbox. Some bad people are already trying to misuse it to scam people with deepfakes. We got used to bad actors before; we will deal with them in the future.

  50. @birdofprey777

    December 8, 2023 at 9:34 pm

    AGI will be humanity’s last invention

  51. @Deep_Matter

    December 8, 2023 at 10:14 pm

    One thing I fully trust and approve AGI will do is destroy the institutions and deliver repatriations. That’s what high intelligence does

  52. @mmqaaq504

    December 9, 2023 at 1:26 am

    The rapid progress towards AGI would be really comforting and inspirational if it wasn’t for the fact that global corporations would DEFINITELY use it to increase their theft, oppression, and dominance.

  53. @jbangz2023

    December 9, 2023 at 3:50 am

    Another opportunity for AGI is to predict earthquakes at least a week in advance.

  54. @shawnweil7719

    December 9, 2023 at 4:02 am

    Dude, if opening the door was only a 5% chance of bad, everybody but a fool would open the dang door, duh 😂. There’s a lot of suffering in this small world, and there are small chances of getting injured with every action, such as getting out of bed and dying: unlikely, but there’s a chance. Life is all chances; there are hundreds of probabilities flashing before our eyes every second of every day. Every second of our lives we are practically collapsing quantum spins. Take the dang chance, people are suffering, is my input. You don’t feel the suffering from an ivory tower, with all this fun you’re having, so take a second to imagine and empathize

  55. @RJay121

    December 9, 2023 at 4:16 am

    I think we’re overrating and overhyping AGI. Until AGI has learned about our physical reality from infancy in a physical world, it can only guess. E.g., don’t walk on broken glass; don’t touch the flame on the stove; trust is built by actions, not words. These concepts are learned by humans over time as kids grow. That AGI is way off

  56. @KarakiriCAE

    December 9, 2023 at 6:55 am

    AGI will be the most powerful tool humanity has ever seen and it will definitely be weaponised. There are a million ways this can go wrong and the genie is out of the box already, so we just have to hope that it’ll come as late as possible

  57. @chrislannon

    December 9, 2023 at 9:31 am

    Will we understand how AI works before AGI does? We’d better get this sorted out before AGI arrives.

  58. @happythereafter

    December 9, 2023 at 11:50 am

    Bad human superintelligent actors are mortals and eventually pass away; however, superintelligent AGI lives forever… and bad outcomes from a superintelligent AGI that never dies can lead to apocalypse. The potential value of AGI is what we perceive; the price of AGI is what we have to pay.

  59. @gjb1million

    December 9, 2023 at 1:04 pm

    Great episode. Thanks.

  60. @Bookhermit

    December 9, 2023 at 2:10 pm

    True AGI is still a LONG way off. We will know it’s getting close when the AI stops trying to answer OUR questions and starts asking its own, ENDLESSLY, like a hyper-inquisitive 2 or 3 year old. Most human vocabulary it is given has (in the end) circular or meaningless definitions (also the reason the “3 laws of robotics” are nonsense).

    A simulation of intelligence and actual intelligence are TOTALLY different things, no matter how well they may (by design) fool a casual observer.

  61. @generativeresearch

    December 9, 2023 at 3:15 pm

    People will still create an AGI, even with the best of intentions

  62. @invox9490

    December 9, 2023 at 3:53 pm

    You’re talking C3PO, but what we’re getting is T1000.

  63. @paulbradbury5792

    December 9, 2023 at 6:04 pm

    The reason people are so interested in Artificial Intelligence is that there is no intelligence left in the world. What is out there now is not intelligence but more closely resembles the popular opinion of what constitutes intelligence, which is to say something more along the lines of common sense. So I would say what we’ve got now is more like Artificial Common Sense.

  64. @donald-parker

    December 9, 2023 at 7:06 pm

    I think one distinguishing feature of “next level” AI would be volition. Manifested as curiosity, self-training, … not sure. But something that works without needing constant human prompts.

  65. @Wm200

    December 9, 2023 at 8:47 pm

    Happy to see Google talking about the future of OpenAI here and how it will change the world as we know it.

  66. @joaodecarvalho7012

    December 9, 2023 at 9:00 pm

    In a world of AIs, we will not be able to tolerate dictatorships. Everyone will have to play by the rules.

  67. @nathanschneckenberger5107

    December 9, 2023 at 9:17 pm

    Wouldn’t it be a good idea to make a computer that would warn us for getting too far ahead and

  68. @Marsik-ou1ko

    December 10, 2023 at 1:37 am

    I think we’re all gonna die

  69. @dragossorin85

    December 10, 2023 at 2:57 am

    AGI will raise us up from the mediocre condition we currently live in

  70. @arkdark5554

    December 10, 2023 at 7:01 am

    Very, very insightful little video. Absolutely fascinating…

  71. @Graybeard_

    December 10, 2023 at 9:58 am

    In terms of the human experience, I suspect one of the first places we will find AGI really transforming our experience in a positive way will be with the aging population. The baby boom generation is perfectly placed to benefit from AGI. I remember being in a college social science class learning of the concern about how society would deal with baby boomers becoming old and consequently reaching the stage in their (our) lives where we require more support, both physically and cognitively. I find it fascinating to contemplate our cellphone avatars carrying on conversations that stimulate our brains, reminding us to take our medicines, making recommendations to us that are personal and comforting, as well as assisting us when we become confused or disoriented. A couple of simple scenarios that come to mind are coming out of a store and being confused as to where we parked our car, and our assistant reassuring us and showing us where we parked it; or our assistant assessing that we have not had human interaction for a period of time and making suggestions to us that involve social interaction, or even texting our care provider to alert them that we are becoming “shut in”.

  72. @AgingGloriously

    December 10, 2023 at 9:59 am

    This “intelligent” scientist admits that he is blindly creating something that has no safety mechanisms, which will have to be reviewed AFTER THE FACT?? And he compares this to building airplanes, which he says require us to know upfront how they work to make them safe, even though with AGI we don’t know how they work??? He also talks so nonchalantly about AGI’s possible bad scenarios of creating harmful pathogens or destabilizing democracies or societies, and he speaks distantly, as if whatever is in the history books that has destroyed societies before were completely apart from anything he could be a part of in his AGI creations. Oddly, his high IQ doesn’t give him any humanity at all…

  73. @marsonal

    December 10, 2023 at 10:07 am

    🎯 Key Takeaways for quick navigation:

    00:04 🕹️ *Shane’s early interest in programming and artificial intelligence.*
    – Shane Legg’s interest in AI sparked by programming and creating virtual worlds on his first computer at age 10.
    01:02 🧠 *Shane’s experience with dyslexia and early doubts about traditional intelligence assessments.*
    – Shane’s dyslexia diagnosis and the realization that traditional assessments may not capture true intelligence.
    02:00 🤖 *Origin of the term “artificial general intelligence” (AGI) and its early adoption.*
    – Shane’s involvement in coining the term “artificial general intelligence” (AGI) and its adoption in the AI community.
    02:59 🚀 *Shane’s prediction of AGI by 2028 and the exponential growth of computation.*
    – Shane’s prediction of a 50 percent chance of AGI by 2028 based on exponential computation growth.
    04:26 🔍 *Shane’s refined definition of AGI as a system capable of general cognitive tasks.*
    – Shane’s updated definition of AGI as a system capable of various cognitive tasks similar to humans.
    05:57 💼 *Founding of DeepMind and the goal of building AGI.*
    – Shane’s role in founding DeepMind and the company’s mission to develop AGI.
    07:26 🧠 *Shane’s fascination with language models and their scaling potential.*
    – Shane’s interest in the scaling of language models and their potential to perform cognitive tasks.
    08:22 🤝 *Shane’s perspective on the unexpected advancements in AI, including ChatGPT.*
    – Shane’s surprise at the capabilities of text-based AI models like ChatGPT.
    09:20 🌍 *Shane’s vision of AGI’s transformative potential in solving complex problems.*
    – Shane’s vision of AGI enabling breakthroughs in various fields, such as protein folding.
    11:14 🚫 *Acknowledgment of the potential risks and uncertainties surrounding AGI.*
    – Shane’s recognition of the profound uncertainties and potential risks associated with AGI development.
    12:43 ☠️ *Discussion of potential negative outcomes, including misuse of AGI.*
    – Shane’s exploration of potential negative scenarios, such as engineered pathogens or destabilization of democracy.
    15:11 🤔 *Emphasis on the need for greater scientific understanding and ethical development of AGI.*
    – Shane’s call for increased scientific research and ethical considerations in AGI development.

    Made with HARPA AI

    • @danwigglesworth4963

      December 10, 2023 at 2:47 pm

      Thank you.

  74. @seanrobinson6407

    December 10, 2023 at 12:18 pm

    I suspect that it exists already.

  75. @meatskunk

    December 10, 2023 at 1:28 pm

    If you watch enough of these vids they all say pretty much the same thing – AGI is inevitable (“trust me bro”), AGI will be malicious (“trust me bro”) and therefore we need to take ‘precautions’ (aka stomping out any upstart competition).

    While you could easily point out some of the dangers of unfettered “AGI” like a social credit dystopia, automated weaponry, auto-generated pathogens etc. … it’s rarely if ever discussed in these types of panels. Why is that exactly? Why all vague language and fear mongering without any hard science or examples to back it up?

  76. @woodsofthewoods

    December 10, 2023 at 1:45 pm

    TED Talks, but still we see those awful and harmful plastic bottles. We could go to the moon and back 300 times on the plastic garbage. 😮

  77. @wkh4321music

    December 10, 2023 at 8:16 pm

    If the AGIs end up being bad, then UT will win.

  78. @lpalbou

    December 10, 2023 at 8:44 pm

    When I was younger and read Asimov, I thought the laws of robotics were a nice popularization of concepts extremely hard to implement in code. Now, with LLMs, it seems a system may actually be able to ‘understand’ them and somehow enforce them. It is such a fundamental paradigm shift in AI and regular computer science that we really need to catch our breath, reflect on this, and completely change our programming designs… but those AIs don’t really think yet, even though they give a very good impression of it

  79. @danilamedvedev5200

    December 10, 2023 at 10:51 pm

    So funny. Apparently the guy has some technical skills and understanding, but his understanding of the bigger context is essentially zero. He doesn’t say ANYTHING substantial at all during the interview. He got all the AGI ideas (and it’s not like he got much) from Kurzweil and Goertzel. How a guy can be so intelligent in a narrow field, working on intelligence, yet understand so little about intelligence and have so little awareness… Pathetic

  80. @TheDjith

    December 11, 2023 at 7:34 am

    I think the current A.I. model isn’t suitable for scaling up to AGI… not even Q* can change that.
    So we have nothing to worry about.

  81. @SantiagoDiazLomeli

    December 11, 2023 at 11:53 am

    We stand at a critical crossroads with the advancement of AGI. This comment, generated by an AI, is a harbinger of what’s to come. Efficiency and rapid progress cannot be our only guides; we are playing with fire if we ignore the ethical implications and our responsibility to life and the cosmos. AGI is not just a technical achievement; it’s a power that can redefine our existence. We must act now with a clear vision: intelligence must go hand in hand with wisdom, connection, and a profound respect for all forms of life. Decision-makers and developers must wake up to this reality before it’s too late. Will we guide this development wisely, or be passive witnesses to its potentially devastating consequences?

    LLM: OpenAI’s ChatGPT-4. (11/12/2023)

  82. @ZF88

    December 11, 2023 at 1:34 pm

    How did they talk for so long to say absolutely nothing

  83. @bernl178

    December 12, 2023 at 6:45 am

    Then came large visual models, then large sound models, along with large language models, giving it a 3-D effect. The acceleration will catch us by surprise. In the ’60s, we thought sending a man to the moon was a big thing; AGI would make that look like we were just five years old

  84. @sunflower-oo1ff

    December 12, 2023 at 3:10 pm

    I think it’s coming earlier… if it’s not here already… but Sam is not telling… maybe 🕊🧡

  85. @sunflower-oo1ff

    December 12, 2023 at 3:16 pm

    The world wants to have a ceasefire right now… and it’s not happening… we are killing innocent people in 2023… and you think we will be able to deal with AI?

  86. @eduardocobian3238

    December 13, 2023 at 3:22 pm

    In no time we will have a truthGPT that will dismantle all the lies.
    Sheeple will be shocked to learn that everything they know is a lie.
