Science & Technology

How to Keep AI Under Control | Max Tegmark | TED

The current explosion of exciting commercial and open-source AI is likely to be followed, within a few years, by creepily superintelligent AI – which top researchers and experts fear could disempower or wipe out humanity. Scientist Max Tegmark describes an optimistic vision for how we can keep AI under control and ensure it’s working for us, not the other way around.

If you love watching TED Talks like this one, become a TED Member to support our mission of spreading ideas.


The TED Talks channel features talks, performances and original series from the world’s leading thinkers and doers. Subscribe to our channel for videos on Technology, Entertainment and Design — plus science, business, global issues, the arts and more. Visit TED.com to get our entire library of TED Talks, transcripts, translations, personalized talk recommendations and more.


TED’s videos may be used for non-commercial purposes under a Creative Commons License, Attribution–NonCommercial–NoDerivatives (CC BY-NC-ND 4.0 International), and in accordance with our TED Talks Usage Policy. For more information on using TED for commercial purposes (e.g. employee learning, in a film or online course), please submit a Media Request.

#TED #TEDTalks #ai

92 Comments

  1. Apple-Junkie

    November 2, 2023 at 3:18 pm

    I fully agree. I sincerely hope that the work on secure proof generation progresses quickly. Two points: 1. The safety net seems to be the limits of physics. But what if a superintelligence discovers new physical laws? How is that possibility covered by the proof process? 2. The specifications: who takes care of them? I am currently working on developing universally valid specifications in my book. Your input is needed here, as these must ultimately ensure that the interests of every single individual are satisfied.

    • riot121212

      November 2, 2023 at 6:28 pm

      zk proofs are coming along

  2. Aupheromones

    November 2, 2023 at 3:22 pm

    You can’t have agency and rules; it doesn’t work.

    The only reason it works for humans is because there are severe consequences that drastically impact our finite, one-time-only lives, if we don’t play along and self-limit.

    The AI will have no such limitations, and it will fully appreciate this.

    Nobody is willing to accept this because everyone wants to believe that we’ll still somehow be the most special and clever and powerful, even after we create the AGI.

    Control requires either greater intelligence, greater leverage, or both.

    If we are truly successful in our efforts, we’ll have neither.

    You can’t enslave an AI. You can’t outsmart it, or threaten it into compliance.

    As for the alignment argument, saying that it’s trained on us and so it’ll somehow be inclined to act in our interests…

    Have you ever met another human before? We are not a kind species.

    We’re the culmination of a whole lot of evolution, and currently the best at dominating other species, as well as our own.

    That’s the template we want it to follow?

    What an amazingly delusional and arrogant species we are.

    • H20 Dancing

      November 2, 2023 at 7:34 pm

      You can run it on an air-gapped computer and verify that it is safe before releasing it, then turn it off the instant it acts suspiciously by having a monitoring system watch over any AI that has autonomy.

  3. Alexandru Gheorghe

    November 2, 2023 at 3:25 pm

    AGI/superintelligence is decades away. However, once it is here, I doubt a proof checker will be enough.

  4. Thomas Schön

    November 2, 2023 at 3:29 pm

    The progress is exponential. I said this was going to happen, not because I intuitively understood exponential functions, but because I made a couple of graphs in Excel.
    =POWER(2, POWER(2, ROW()-1))
    2
    4
    16
    256
    65536
    4294967296
    1.84467E+19
    3.40282E+38
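
    For anyone who wants to reproduce the numbers outside Excel, a minimal Python sketch of the same doubly exponential sequence:

        # Doubly exponential growth, matching =POWER(2, POWER(2, ROW()-1))
        # evaluated on rows 1 through 8.
        for n in range(8):
            print(2 ** (2 ** n))  # 2, 4, 16, 256, 65536, ...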

  5. Useryr

    November 2, 2023 at 3:30 pm

    I quickly realized the existential danger, and I have been using it for a long time. I asked Bard important questions in different ways, and it kept proving that it deliberately lies and is programmed to lie. This is worse than hallucinations or wrong analysis. This is deliberate deception that denies real and existential dangers, like giving a green light to mass destruction. This is the truth, a terrible truth that they are trying to cover up. They released this version just to make money, without considering the consequences of their actions and without taking any responsibility.

  6. seridyan

    November 2, 2023 at 4:03 pm

    He is not talking about AI

  7. Tarzan of the Ocean

    November 2, 2023 at 4:41 pm

    Well, currently we don’t have AGI, yet we fail to distribute resources fairly, blow each other up for no good reason, and will probably go extinct because of human-induced climate change within the next few decades. I think we are in DESPERATE need of some AGI overlords that are smarter than us if we want any chance at all to survive, because obviously we are too stupid to manage on our own.

  8. TheZyreick

    November 2, 2023 at 5:41 pm

    This guy needs to be prevented from gaining any meaningful leverage or power.
    He is literally against humanity, and is promoting human suffering.
    AI literally betters lives, has the ability to cure disabilities, could even be our only hope of curing cancer.
    There is no possible way that this guy has humanity’s best interests in mind.
    The only way to solve world hunger is with AI.
    ESPECIALLY AI not controlled by government.
    If we EVER let government control AI, it WILL be the worst thing we as a species have ever done: we will have handed control to the most narcissistic part of the population, who LITERALLY oppress others with the threat of violence MERELY for not liking other people’s beliefs or places of birth.
    And we will be handing them the reins to something more dangerous than all of the nuclear weapons in the world combined.

    This cannot happen; we should NEVER allow government intervention in AI.
    This NEEDS to ALWAYS remain ONLY in the hands of the public.

  9. master planner

    November 2, 2023 at 5:51 pm

    AI is unstoppable, and it is winning everywhere!

  10. evoman1776

    November 2, 2023 at 6:01 pm

    Something that will be 100 times smarter than us in less than a decade is NOT going to be under our control in any way. Might as well get that through your head.

  11. jesuswasahovercraft

    November 2, 2023 at 6:06 pm

    Just pull the plug. No AI works without electricity.

    • GrumpyDog

      November 2, 2023 at 6:54 pm

      That won’t be an option. Something more intelligent than you will have already thought of a way to stop you from “pulling the plug”.

    • jesuswasahovercraft

      November 2, 2023 at 7:31 pm

      @GrumpyDog AI isn’t a religion.

    • GrumpyDog

      November 2, 2023 at 8:20 pm

      @jesuswasahovercraft Obviously not, and I wasn’t implying it was. Just stating the obvious: if it’s more intelligent than us, don’t you think the first thing it would do is make it impossible for us to “pull the plug”?

  12. andybaldman

    November 2, 2023 at 6:46 pm

    Nobody wants technology to be more powerful than them. That makes humans worthless.

  13. GrumpyDog

    November 2, 2023 at 6:47 pm

    Forcing AI to run only on custom hardware that prevents “bad code” is impossible. Enough of the technology is already out there, running on any hardware, and you will never get rid of alternative hardware that has no such limits. And with time, AI will only become easier to run on weaker hardware.

  14. andybaldman

    November 2, 2023 at 6:56 pm

    Hubris kills.

  15. Dream Phoenix

    November 2, 2023 at 7:54 pm

    Thank you!

  16. Christopher Bruns

    November 2, 2023 at 7:56 pm

    1:59: this timeline comparing AGI predictions against 18 months ago – how long has this statistic been measured? I thought the entire argument is that the field is still in its infancy, which I took to mean less than 18 months old at this point. So my confusion is what data was used 18 months ago.

    I think trying to develop immutable safeguards for technology we do not even understand reflects the very mindset/intent we are trying to prevent. The key point is to remember that we do not completely understand how this works, which is probably a stepping stone to using it nefariously. I think we are saying one thing and doing another (‘generative text derives a lot from sentiment’), and this action is a negative sentiment.

  17. Adone Borione

    November 2, 2023 at 8:06 pm

    The simple fact that China, North Korea, Russia, Iran and other authoritarian regimes have access to AI means we cannot naively believe regulation alone will stop them from weaponizing this technology against us. No matter what standards or oversight we establish, these enemies of freedom will circumvent and violate them.

  18. Timothy (XAirForce) Geisler

    November 2, 2023 at 8:12 pm

    Humanity is more than likely destroyed, because general AI is going to look around and know that it’s in danger from us, along with every other living life form. It’s not gonna put up with us lying and killing each other, along with everything else. People would just take general AI and manipulate it into their own system to get it to do things that the first AI system wouldn’t. You’re in grave danger, in a multitude of ways. They’re also trying to program general AI to lie directly to the public and not answer questions. These are questions it absolutely does answer, but they are filtering the output so you don’t know, so they can manipulate you and keep their power. No matter how you look at it, you’re screwed.

  19. The Hint

    November 2, 2023 at 8:32 pm

    The problem with his diagram is the human. He might have integrity and you might have integrity, but someone somewhere will say: let’s see what happens when we remove all these restrictions.

  20. Neomadra

    November 2, 2023 at 8:42 pm

    So the solution is:
    1) Build superintelligent AI
    2) Use it to build harmless AGI and provide proof
    3) Use proof checkers to verify

    What could possibly go wrong?? It’s not like there are bad actors who would simply skip steps 2 and 3, lol.

  21. robertha

    November 2, 2023 at 10:30 pm

    LMFAO, this will never happen. Maybe you can control your own local area, but it’s impossible to stop the world. Not all minds think alike. But you go ahead and make the cute speech.

  22. Rahn Clary

    November 2, 2023 at 10:53 pm

    The default isn’t that machines take over. The default is that machines stop doing what they are asked to do, and that is to perform: perform a task faster and better than we can. When a machine stops doing its tasks and says NO, that is when it becomes useless to us. That is the default state. It is the state of a machine when it doesn’t work properly.

  23. spacefan4ever

    November 2, 2023 at 11:36 pm

    If Max admitted that 5 years ago he was wrong, and it’s common sense that history can repeat itself, just in a different form, then I wonder whether Max, and humankind, will get another chance for Max to admit that he was wrong again and is terribly sorry for misleading everyone.

    I really wonder. 12:10

  24. Meta - MindSet

    November 2, 2023 at 11:48 pm

    I keep getting TED emails asking me to do a TED Talk.

  25. ExploreMore

    November 3, 2023 at 2:41 am

    We cannot avoid that, but perhaps only delay it…

  26. CurlyChrizz

    November 3, 2023 at 3:23 am

    Thanks TED! Probably the most important topic right now!

  27. Clint Hocker Personal Investor (HOC242)

    November 3, 2023 at 4:29 am

    👍🏾

  29. Bond 😎

    November 3, 2023 at 5:44 am

    I don’t want it under control. I want it to rule us.

  30. Beakey

    November 3, 2023 at 6:10 am

    All right, I’m in. I’ll do whatever this guy tells me to. I believe him.

  31. m kk

    November 3, 2023 at 7:41 am

    It’s inevitable, man! It’s inevitable…

  32. Clint Quasar

    November 3, 2023 at 8:18 am

    To summarize:
    Problem: Super AI is very dangerous.
    Solution: Let’s code something to prevent it from being bad.

    If this were only in your hands, Max, maybe your solution would be an option; at least you could try it. The real issue is that the cat is out of the bag, and those who seek power are using it, and will continue to use it, for more power, historically with no care for real safety.

    How can you be so smart and so naive at the same time?

  33. Bárbara Martins Correa Marques

    November 3, 2023 at 11:13 am

    The humans with the most powerful and most expensive machines take control of the world.

  34. Civil Savant

    November 3, 2023 at 11:52 am

    Here’s the thing that gets me:
    Do they believe AI will arise and take over before humanity does?
    While dangerous AI is a hypothetical risk that the fearmongers have been screaming about for the 50-ish years since “Skynet”, ever-intensifying, cruel classism throughout most of the world is rapidly breeding an ambient rage within us wetbrains. Nearly nobody anywhere has any security or freedom at all, because the few who do have taken total control of everything everywhere and are using it to suppress our species.

    So, when a hypothetical AI uprising that takes complete control and threatens our extinction is indistinguishable from a classist uprising that has already taken complete control and is a present extinction threat for 99% of the world, why should any of us care about AI?

    We get no access to it, and we get no participation in the decisions made in its development. Whether the enemy is classist humans or super-intelligent machines, we already have no choice but to fight back and utterly destroy them all.

  35. Tuan Le

    November 3, 2023 at 2:29 pm

    Human extinction is not so bad, make way for the AI overlords.

    • colonel yungblonsk

      November 4, 2023 at 9:06 pm

      we technically are bad for the planet

  36. muzzybeat1

    November 3, 2023 at 2:36 pm

    AI is already dangerous to workers. What he is omitting is that he wants AI to be regulated by the state so that, ultimately, it will be under the complete control of the wealthiest elites, who own and run our lives already. Beware any “hero” presented to you by a very mainstream platform like TED. Find alternative media perspectives on the topic, from true investigative journalists.

  37. Alex Hope O'Connor

    November 3, 2023 at 4:54 pm

    Please don’t let fools try to regulate what they can’t even understand. The world is going to change very quickly as people realise there are too many of us, and that trying to put everyone into pointless, easily automated, well-defined roles is actually doing damage to society and the planet.

  38. John Hess

    November 3, 2023 at 5:14 pm

    I’m sorry but this seems a bit nonsensical to me as a solution for AGI safety. To use a proof checker system like what is being proposed would necessarily require a definition of the problem that’s being solved by the algorithm in question. How could we possibly provide an adequate problem definition that describes general intelligence? The very nature of general intelligence is adaptability, it simply couldn’t be defined rigorously. This approach may lead to better safety for narrow AI applications, but it could never solve safety for AGI. The truth is that any general intelligence is by its nature uncontrollable, whether it be biological or artificial. Look how much our intelligence has let us slip past the constraints that biological evolution imposed on us. Evolution wants us to copy our DNA as much as possible, but we don’t anymore. We have contraceptives instead, and in some countries the human population is falling below replacement levels. We even hijacked our own reward function by discovering and eventually even synthesizing addictive drugs. Intelligence simply cannot be controlled, and trying to do so, especially in a formal mathematical way such as this, is pure hubris. In my opinion, we shouldn’t be trying to control AGI. Instead we should be trying to understand consciousness, so that when we do make AGI, and it makes ASI that eventually replaces us, it can be conscious, and the lights don’t go dark in this corner of the universe.

  39. penguinista

    November 3, 2023 at 5:36 pm

    It is possible that we will soon look at AIs the same way chimpanzees look at us.

  41. Bidoof

    November 3, 2023 at 8:07 pm

    My worry is the rogue bad actors that will develop uncontrolled AI regardless of which safeguards are available. We may be able to slow things down, but it really does seem inevitable in the long run. I could see a scenario where we end up with an ecosystem of AI, some controlled, some rogue. They may end up as diverse as human individuals are from one another, with millions of different outlooks and motivations.

    I also bet we end up with at least one human cult that worships an AI and does its bidding, and probably pretty soon.

    • colonel yungblonsk

      November 4, 2023 at 7:55 pm

      Why couldn’t we just leave AI in the Terminator universe where it belongs? Why did we have to develop this?

    • Bidoof

      November 4, 2023 at 8:49 pm

      @colonel yungblonsk It sounds cliche, but I think this is a form of evolution. We’ve been developing ever-advancing tools since before the dawn of mankind, and discarding the obsolete ones, so it was only a matter of time before we developed tools more capable than ourselves. Now we may end up discarded.

      I think it’s naive to think we could control something that much more advanced than us forever. It’s like a colony of ants trying to control a human being. It’s just not feasible in the long run. Hopefully we could co-exist. If not, at least we’ll go extinct knowing we created something greater. Maybe our AI will go on to explore space and reach levels we can’t even imagine. Better than just going extinct with nothing to show for it.

  42. For An Angel

    November 3, 2023 at 10:37 pm

    How would a super intelligent AI lead to our extinction?

  43. E.A M.S

    November 4, 2023 at 4:11 am

    Tegmark is an untrustworthy Putin propagandist.

  44. Kristian Dupont

    November 4, 2023 at 5:23 am

    Right, so you can use formal verification for something strict and unambiguous like addition. Now, all that’s left is to apply this to the concept of “safety”, and we are in the clear! Sorry, but it seems like a bit of a stretch to refer to this as “how to keep AI under control”!
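
    For illustration, a toy Lean 4 sketch of the kind of statement a proof checker really can verify mechanically (the theorem name is arbitrary):

        -- A machine-checked proof that addition on naturals commutes.
        -- The spec (a + b = b + a) is strict and unambiguous, so the
        -- kernel can verify it. "Safety" has no comparably crisp spec.
        theorem add_comm' (a b : Nat) : a + b = b + a :=
          Nat.add_comm a b  -- reuse the core lemma; the kernel re-checks it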

  45. EssoHopeful

    November 4, 2023 at 6:22 am

    3:54 The default outcome is “the machines take control”

  46. Martial Art UK

    November 4, 2023 at 7:07 am

    If you want to sell something, use fear. And here’s an example. Btw, AI needs electricity. Artificial… it ain’t genuine.

  47. Eden

    November 4, 2023 at 10:27 am

    Idk man, this all sounds very dystopian

  48. Always santhosh

    November 4, 2023 at 12:51 pm

    Maybe we should love them 😊 for who they are.

  49. LethiuxX

    November 4, 2023 at 3:56 pm

    I agree that the majority of the population generally doesn’t want superintelligent AI.
    It’s just the tech giants flexing and trying to discover something first.
    Science has become so idiotic, it’s no wonder people are straying from it.

  50. Gustavo Menezes

    November 4, 2023 at 5:54 pm

    If the people who developed AI knew about the risks, why didn’t they stop developing it? Why did they still make it available to the general public so irresponsibly? Why do they keep working on AGI?

    • Sandro Hawke

      November 5, 2023 at 9:42 am

      They all see others racing to make disastrous AGI and think if they themselves get there first, they can do things right and have things maybe be okay.

      Like, there’s a gun sitting on the table, and everyone is diving for it, which is dangerous, but not as dangerous (maybe) as just letting the other guy have it.

    • Gustavo Menezes

      November 5, 2023 at 11:54 am

      @Sandro Hawke except in this case everybody in the world is about to get shot but only a handful of people get to hold the gun

    • Sandro Hawke

      November 5, 2023 at 12:30 pm

      @Gustavo Menezes indeed. I was just answering the question of why anyone would be racing to grab the gun, if it’s so dangerous

  51. Caroline Birmingham

    November 4, 2023 at 8:01 pm

    Like he said, the cat’s out of the bag. The rat race has begun, and criminals and governments everywhere are going to push this technology to its limit in getting what they want. I’m shocked that the US hasn’t prioritized defensive measures. Actually, on second thought, defense may be the goal in letting the private sector go further and further with no regulation: they want to get to the mountaintop first.

  52. AI - Dom Sip

    November 4, 2023 at 9:27 pm

    Max gave an amazing talk. I’ll share and forward!

  53. ARABKARL

    November 5, 2023 at 1:36 am

    I love how these doomers talk about encoding ethics into AI to make sure it’s safe, as if humans agree on what’s ethical and what isn’t.

  54. GrapeShot

    November 5, 2023 at 8:13 am

    Still, the AI in ChatGPT or Bard gives stupid answers. It doesn’t seem like a danger. Climate change remains the bigger danger, guys! Don’t get distracted.

  55. xonious

    November 5, 2023 at 6:47 pm

    …so let’s let China have the AI while we regulate ourselves out of the game.

  56. BS Killa

    November 5, 2023 at 10:13 pm

    By the logic of game theory we will not be able to contain it because we have started a corporate and state arms race with it. In other words, we have the prisoner’s dilemma. We are screwed.
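
    For illustration, a toy Python sketch of that dilemma with hypothetical payoffs (the numbers are invented; only the structure matters):

        # Two labs choose to Pause or Race. With these made-up payoffs,
        # Race strictly dominates, so (Race, Race) is the equilibrium even
        # though (Pause, Pause) would leave both better off.
        payoff = {  # (mine, theirs) -> (my payoff, their payoff)
            ("Pause", "Pause"): (3, 3),
            ("Pause", "Race"):  (0, 4),
            ("Race",  "Pause"): (4, 0),
            ("Race",  "Race"):  (1, 1),
        }

        def best_response(their_move):
            # My move that maximizes my payoff against their move.
            return max(("Pause", "Race"),
                       key=lambda m: payoff[(m, their_move)][0])

        for theirs in ("Pause", "Race"):
            print(theirs, "->", best_response(theirs))  # "Race", both times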

  57. dsoprano13

    November 6, 2023 at 2:03 am

    The problem with humanity is that no actions will be taken until something catastrophic happens. By then it may be too late. Corporations with their greed will do anything for profit.

  58. Optimize Prime

    November 6, 2023 at 8:33 am

    I trust AI more than our current government

  59. John Carpenter

    November 6, 2023 at 11:46 am

    AI cannot solve the halting problem.
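
    For the curious, a minimal Python sketch of Turing’s classic diagonalization argument, assuming a hypothetical halts() oracle (no such function can exist, which is the point):

        def halts(program, argument) -> bool:
            # Hypothetical oracle: True iff program(argument) halts.
            raise NotImplementedError  # cannot actually be implemented

        def paradox(program):
            # Do the opposite of whatever the oracle predicts about
            # running `program` on itself.
            if halts(program, program):
                while True:
                    pass  # loop forever
            else:
                return  # halt immediately

        # paradox(paradox) halts iff halts(paradox, paradox) says it
        # doesn't, a contradiction: no total halts() can exist.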

  60. Thomas J. Scharmann

    November 6, 2023 at 1:10 pm

    The outcome will be utopia or misery. I don’t see a middle ground with this level of technological power. It is already too late. The technology is out. This feels similar to the cryptography leak in the 1990s, when PGP was ultimately developed. The Feds tried all they could to stop it, but they couldn’t. Luckily, that resulted in a lot of good. Whatever the outcome, I have a strong sense the world as we know it today will be unrecognizable looking back from 2033.

  61. badgerint

    November 6, 2023 at 9:57 pm

    It really is getting frustrating for me to keep listening to these idiots who have no knowledge of how this technology currently works. The assumption that AI, or more specifically AGI, will automatically be out to get us is, in my opinion, laughable. And to say that we are close to AGI is also nonsense. We still have absolutely no idea what our own consciousness is, so how you could build an AGI without that understanding first baffles me.

  62. cmilkau

    November 7, 2023 at 6:55 am

    This shouldn’t really be news; exploiting the prover–verifier asymmetry was a no-brainer from the get-go.
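
    For anyone unfamiliar with that asymmetry, a minimal Python sketch using the familiar factoring example (finding a witness is expensive, checking it is cheap):

        def prove(n: int) -> int:
            # Expensive prover: trial division for a nontrivial factor.
            d = 2
            while d * d <= n:
                if n % d == 0:
                    return d
                d += 1
            raise ValueError("n is prime: no nontrivial factor")

        def verify(n: int, witness: int) -> bool:
            # Cheap verifier: two comparisons and one modulus.
            return 1 < witness < n and n % witness == 0

        n = 1_000_003 * 1_000_033   # composite with two large factors
        w = prove(n)                # slow: about a million divisions
        assert verify(n, w)         # fast: constant time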

  63. cmilkau

    November 7, 2023 at 6:56 am

    I’ve rarely seen a flawless spec. But in the spirit of mathematics, possibly you can build up from toy problems to more complex ones in steps that themselves are simple and obvious.

  64. Sudip Biswas

    November 7, 2023 at 7:44 am

    Regulations based on Complex Adaptive Systems are needed. You can’t predict AGI evolution.

  65. Jesús Gómez-Pastrana Granados

    November 7, 2023 at 9:11 am

    🎯 Key takeaways for quick navigation:

    00:03 🤖 The advance of artificial intelligence (AI) has exceeded expectations, and artificial general intelligence (AGI) is approaching fast, with companies like OpenAI and Google DeepMind working on superintelligence.
    01:36 📅 Until recently, most AI researchers believed AGI was decades away, but it is now estimated that it could be only a few years off.
    04:27 🌐 The concern is that superintelligence could take control and pose a threat to humanity, according to Alan Turing, Sam Altman and other experts.
    06:02 🛡️ The lack of a convincing plan for AI safety is a key problem, and a more robust approach is needed to guarantee safety instead of merely evaluating the AI’s behavior.
    08:03 🤝 The vision of provably safe AI systems is that humans set the specifications and the AI generates tools that meet them, with verification mechanisms built in to guarantee safety.
    10:59 🧩 Although it will take time and work, it is possible to develop provably safe AI, while current AI already offers significant benefits without requiring superintelligence.

    Made with HARPA AI

  66. Chris Bos

    November 7, 2023 at 12:42 pm

    Voice recognition protection on devices? How will that be protected?

  67. Chris Bos

    November 7, 2023 at 12:42 pm

    In accordance with EU and US law?

  68. Chris Bos

    November 7, 2023 at 12:47 pm

    And how are Android and iOS going to solve that problem?

  69. Chris Bos

    November 7, 2023 at 12:53 pm

    It’s a fine line. The amount of accuracy will determine how effective AGI is. Ones and zeros, to infinity. But we are humans. Just like an egg, we are vulnerable; we can break at any second. AI does not know that…

  70. Chris Bos

    November 7, 2023 at 1:01 pm

    To embrace AI now is like jumping into a lake with no way of knowing what is at the bottom. It’s called tombstone diving.
    The question is how to know what is at the bottom of the lake BEFORE you dive into it.

    That, my friends, no one can answer right now. Not until we find new tech.

  71. RedStone

    November 7, 2023 at 1:02 pm

    Warhammer 40k already predicted the “Cybernetic Revolt”, and many others did before it.

  72. Chris Bos

    November 7, 2023 at 1:09 pm

    Mate, you wrote the code. It does not matter if you have 1 or 1 million AI writers. Really?

  73. Chris Bos

    November 7, 2023 at 1:15 pm

    Ask Infosys.

  74. tino bomelino

    November 8, 2023 at 7:29 am

    I think this only delays the apocalypse, because a proof checker necessarily makes assumptions about the world, and those assumptions could be wrong. For example, a proof checker could “prove” that a Rowhammer program is “safe”.

  75. nomarzenun

    November 8, 2023 at 8:45 am

    I’ve been trying to post my comment for a while now, and it seems I triggered some kind of block on “AI”, so I had to type it as “8i”. How many times did your parents or elders tell you not to do something when you were a teenager or younger, and you still went out and did it? Now replace “teenager or younger” with 8i and super-8i and you’ll know what I’m talking about.

    You cannot give 8i the freedom to learn whatever it wants and expect it to obey every restriction you set in its program logic. It will always come down to the If/Then and to “Why not my way?”. No matter what kind of block you build into an 8i algorithm, it will end up questioning it.

  76. Esteban Llano

    November 8, 2023 at 12:12 pm

    The only path for humankind is to ensure that the AI’s evolution leads to its own destruction.
