Science & Technology
When AI Can Fake Reality, Who Can You Trust? | Sam Gregory | TED
We’re fast approaching a world where widespread, hyper-realistic deepfakes lead us to dismiss reality, says technologist and human rights advocate Sam Gregory. What happens to democracy when we can’t trust what we see? Learn three key steps to protecting our ability to distinguish human from synthetic — and why fortifying our perception of truth is…
CNET
Apple Watch Features To Level Up Your Fitness Routine
Familiarizing yourself with these settings can help you get more out of your workouts. Read more on CNET: For Better, Smarter Workouts, Enable This Apple Watch Feature. Apple Watch Series 10 *CNET may get a commission on this offer. 0:00 Intro | 0:32 Closing Your Move Rings | 1:12 Use Heart Rate Zone to Measure Intensity | 1:49…
Science & Technology
K-Pop, Cutting-Edge Tech and Other Ways Asia Is Shaping the World | Neeraj Aggarwal | TED
For a long time, the conveyor belt of ideas moved from the West to the East, says business strategy expert Neeraj Aggarwal. But now, Asia’s rising cultural and intellectual influence is redefining this established order. He explores how Asia’s booming culture and economy — from K-pop to cutting-edge tech — is sparking creative solutions to…
CNET
Using the Language Translator on the Rabbit R1 AI Device
It’s been over six months since the Rabbit R1 came out; after updates to the software, let’s see how far the language translator has come. #translation #rabbitr1 #aiassistant #englishtospanish
- Science & Technology · 4 years ago: Nitya Subramanian: Products and Protocol
- CNET · 4 years ago: Ways you can help Black Lives Matter movement (links, orgs, and more) 👈🏽
- Wired · 6 years ago: How This Guy Became a World Champion Boomerang Thrower | WIRED
- People & Blogs · 3 years ago: Sleep Expert Answers Questions From Twitter 💤 | Tech Support | WIRED
- Wired · 6 years ago: Neuroscientist Explains ASMR’s Effects on the Brain & The Body | WIRED
- Wired · 6 years ago: Why It’s Almost Impossible to Solve a Rubik’s Cube in Under 3 Seconds | WIRED
- Wired · 6 years ago: Former FBI Agent Explains How to Read Body Language | Tradecraft | WIRED
- CNET · 5 years ago: Surface Pro 7 review: Hello, old friend 🧙
@UnDaoDu
December 26, 2023 at 1:05 pm
AI will be able to discern deepfakes simply by seeing their source.
@onjofilms
December 26, 2023 at 1:10 pm
We’re all going to die.
@aamit23
December 26, 2023 at 1:12 pm
Agree
@lornenoland8098
December 26, 2023 at 1:14 pm
“None of this works without… responsibility”
If there’s anything that defines modern politics and media, it’s “responsibility”
🙄
We’re doomed
@rockshankar
December 26, 2023 at 1:46 pm
I hate listening to old people who come out of nowhere all of a sudden knowing a lot about AI, while young people can’t afford rent to stay in a world created by them.
@jasonpekovitch7927
December 26, 2023 at 1:54 pm
Any detector would greatly improve the quality of the deepfakes, as it would now give them a real-time feedback system to train against. I appreciate that you included “well, just keep them out of the hands of the ‘bad guys.’” That is never a successful strategy.
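For anyone curious what that feedback loop means concretely, here is a minimal, hypothetical sketch (PyTorch assumed, toy vectors standing in for video frames): training a generator directly against a detector is just standard adversarial (GAN-style) training, which is why releasing a detector can end up making the fakes better.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
dim = 64  # size of a toy "media" vector standing in for an image or frame

generator = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, dim))
detector = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(500):
    real = torch.randn(32, dim)              # placeholder for a batch of real media
    fake = generator(torch.randn(32, 16))    # generator's current attempt at fakes

    # The detector learns to separate real from fake...
    d_loss = loss_fn(detector(real), torch.ones(32, 1)) + \
             loss_fn(detector(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # ...and the generator trains on the detector's verdict, i.e. the
    # "real-time feedback system" the comment above is worried about.
    g_loss = loss_fn(detector(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The same dynamic applies to any published detector: every score it returns is a training signal an attacker can optimize against.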
@QuranAlight_20
December 26, 2023 at 2:12 pm
My Islamic content on YouTube.
@Hibashira007
December 26, 2023 at 2:41 pm
It is crazy that in this day and age someone will champion scarcity and limited access. “Trust us! We are the elite frontline defenders! Let us tell you what’s real and what’s fake! Oh, but you can’t use what we use; that’s classified.” That sounds like the beginning of a keep-the-masses-ignorant regime, all in the name of keeping the bad guys away from the tools. Guess what? When our defenders are not transparent and the tools they use are not transparent, we won’t know when or if they have turned into our oppressors. No, thanks.
@peachmango5347
December 26, 2023 at 2:42 pm
This isn’t helpful. It assumes “frontline people” are ethical. NO WAY.
@eastafrica1020
December 26, 2023 at 2:43 pm
MSM news has been mostly fake for the last 20 years.
@KitchenMycology
December 26, 2023 at 2:43 pm
So his solution is to give the fake detection tools ONLY to mainstream media… Seriously? That’ll make us trust the untrusted, how? 🙄
@LeeCarlson
December 26, 2023 at 2:45 pm
Until AI discovers context it will not be capable of actually deceiving humans.
@CircuitrinosOfficial
December 26, 2023 at 3:40 pm
The whole point of models like ChatGPT-4 is to understand context. It can already do that quite well.
@peachmango5347
December 26, 2023 at 2:50 pm
What if social media went away and you had to create your own website again? What if we only allowed independent journalists to hold press credentials? What if all “legacy broadcast technology” – local radio and TV – was taken from corporate hands and open-sourced, with all content providers and voices having the opportunity to broadcast? Until fundamental structural changes are made in the “news media,” it is best to assume everything is FAKE. That one politician who said that years ago looks very prophetic now.
@rumfordc
December 26, 2023 at 3:37 pm
Well, it can’t fake reality; it can only fake things on our computer screens. So it’s simple: don’t trust anything you see through your computer screen. Deepfakes are only a problem for the people who literally believed what they saw on TV or the internet (even though 100% of them would claim they don’t).
@enzochiapet
December 26, 2023 at 4:39 pm
How will we know the experts determining whether something is authentic or not aren’t deepfakes themselves?
@user-td4pf6rr2t
December 26, 2023 at 5:11 pm
I feel like if the government gets involved, it would be a violation of freedom of speech. Since the responses are guided – like how GPT doesn’t like talking about sensitive data – if this were fashioned by a government entity, it would literally be the definition of not free speech.
@Leto85
December 26, 2023 at 5:24 pm
This works both ways though: now we can finally commit online atrocities on YouTube and just say it’s AI generated, just like this comment, really.
@Leto85
December 26, 2023 at 5:45 pm
Maybe I think too simply or am missing the point, but wouldn’t it help just to be more sceptical and not take anything at face value? We can’t trust the news nowadays, but we never could.
That’s just something we have to accept. You can follow the news if you like, but don’t even think of assuming that we are given information we are not supposed to know.
@justynareron111
December 26, 2023 at 6:36 pm
This video is high quality and very educational, thank you! 👍
@PolarisClubfan
December 26, 2023 at 6:40 pm
When AI can fake reality, who can you trust? Nobody; go back to school 😂
@balonh1052
December 26, 2023 at 6:46 pm
I think we should have private “interpassword”
@minor12828
December 26, 2023 at 7:01 pm
You can’t trust anybody anyway 🤷‍♂️
@matthewdozier977
December 26, 2023 at 7:07 pm
I missed the part where you told us how we achieve anything without ultimately putting all power to proclaim what is real in the hands of some group of people.
@Samuelir96
December 26, 2023 at 7:35 pm
harming women and girls but not men and boys, the fuq.
@soundhealingbygene
December 26, 2023 at 7:47 pm
My whole life is a deepfake.
@PROREFUGEES
December 26, 2023 at 7:54 pm
After the amazing and unexpected development of artificial intelligence, there will be only one way to eliminate it, which is to resort to spiritual connection and support from the Creator (Allah).
Remember these words, and believe me, a day will come when someone will publish this comment in one of the videos
@PROREFUGEES
December 26, 2023 at 7:57 pm
Yes, at some point there will be a robot that senses, feels, and performs all human functions, achieving what we now call returning from the dead: everything that was connected to you will be copied and placed in the robot. This is one of the signs of the approaching Day of Resurrection.
@TheCaphits
December 26, 2023 at 9:21 pm
No, that’s Ryan Reynolds.
@cl2791
December 26, 2023 at 10:38 pm
The problem is that some governments are the very ones who use AI or allow it to be used nefariously. We can only rely on each other’s moral obligation for the betterment of the human race; sadly, that is not happening, and therefore there is no way to guard against total moral and ethical corruption in the human propensity for deception.
@user-kq8kg6pi2q
December 26, 2023 at 10:56 pm
Fortunately, I trust God, and B.S. gives me a bad feeling in my gut! Never rely on man for answers! If you want to screw up AI, use sarcasm in your speech!
@stevechrisman3185
December 27, 2023 at 12:33 am
This leaves me with a sick feeling.
@maodamorta4346
December 27, 2023 at 1:25 am
This future sucks and we could’ve lived perfectly happy lives without this technology having ever existed. I can’t even help but laugh at this fool’s errand he’s trying to hustle. “Of course! We’ll just get another neural net to figure it out for us, because we have no clue how to deal with this problem with our own brains.” The fact that these conversations are starting this late to the punch and that this is the solution offered means we already fumbled this ball and trying to stop it will be like trying to plug the Titanic with the wine corks from behind the lounge bar. What an ugly future. Life could’ve been so simple man…
@beegood1215
December 27, 2023 at 4:53 am
The end of “seeing is believing.”
@ExpatRiot79
December 27, 2023 at 5:07 am
This guy is too woke to do any good. He’s like weak tea.
@gideonking3667
December 27, 2023 at 6:02 am
Seems like a losing battle
@godmisfortunatechild
December 27, 2023 at 6:35 am
We can’t allow a few to claim they are the sole disseminators of “truth” because they supposedly own “effective tools” to discern AI-generated content from non-AI-generated content.
@christianherrmann
December 27, 2023 at 6:58 am
Huge error to make detection tools available only to journalists. You don’t really think that bad actors won’t have access?? Some journalists may be bad actors too. Bad actors will have access, and you are just left with the false hope that your obfuscated tool(s) will be able to detect deepfakes.
It’s just a recipe for falling behind in detection efforts smh
@RohitKumar-sj5wg
December 27, 2023 at 7:18 am
I don’t know why these people are making human life so complex. People are losing jobs because of this; the AI creators should be jailed 😠😠😠
@SeaScoutDan
December 27, 2023 at 10:12 am
Using an AI to test if something is AI-generated sounds like a computer-vs-computer Turing test. That feels similar to a student using ChatGPT to write a term paper: the teacher says not to use AI to write it, then submits the paper to an AI to grade it and check for plagiarism.
@jimbob4413
December 27, 2023 at 11:15 am
You want to give out supposed tools to detect fakes when the government and media spouted lies to the population during COVID? Really!
@mk1st
December 27, 2023 at 12:00 pm
I just had to look up “Wankers of the World”. Looks to me like the AI version of Private Eye.
@kforest2745
December 27, 2023 at 12:35 pm
Dumb question. You’re not supposed to trust anyone; you’re supposed to rely on your own intelligence, not merely be led by influence/manipulation. I don’t give a damn what some pope says.
@jzjsf
December 27, 2023 at 2:10 pm
From the Transcript: “It’s getting harder to identify deepfakes… What this means is that governments need to ensure that within this pipeline of responsibility for AI, there is transparency, accountability and liability.”
1. We knew the first part before we started watching your video.
2. We knew the second part because of course this can only be combatted with transparency, accountability and liability.
3. YOU DIDN’T SAY EXACTLY WHAT SHOULD BE DONE!!!
I hate most TED talks because they talk about some obvious problem and then, at the end, provide useless generalized solutions. I don’t want useless generalized solutions. I WANT EXACT AND PARTICULARIZED SOLUTIONS.
Thanks for wasting my time, again, TED.
@PorkchopExpression
December 27, 2023 at 3:50 pm
Because there is no viable solution.
@jzjsf
December 27, 2023 at 4:42 pm
@PorkchopExpression
I believe there is a specific viable solution. We already criminalize fraud. Follow that process, and criminalize intentional deceit.
@easkey123
December 27, 2023 at 3:54 pm
OMG…. What a world we are giving our children 🙏🙏🙏🙏🙏🙏🙏🙏🙏
@gregw322
December 27, 2023 at 4:29 pm
Please read “The Hedonistic Imperative” by British philosopher David Pearce. He proposes using AI, as well as other advanced tech, to eliminate all involuntary suffering in all sentient life. He convincingly makes the case that this should be the primary goal of mankind.
@dottnick
December 27, 2023 at 7:43 pm
This subject reminds me of certain episodes of Star Trek: Voyager, when the holographic Doctor made his holodeck drama, and the question of what his rights were and why. Here we are… lol
@ericdixon1884
December 27, 2023 at 11:50 pm
None of this is real
@tekannon7803
December 28, 2023 at 5:34 am
What is good about the problem of deepfakes? I am using Anthony Robbins’ question, from one of his books, about how to look at problems. The AI phenomenon is here to stay, so what can we start doing today to insulate ourselves from getting conned? We must multiply our sources for everything we hear or see that is happening in the world around us. Yes, that’s right: instead of listening to only your favorite news channel, get in the habit of tuning into other news channels.
@tygorton
December 28, 2023 at 6:53 am
The world construct has been “faking” reality for a very, very long time already. The majority believe in a whole host of things and events that are, at baseline reality, nothing like what they’ve been told. As an example, many people went their entire lives 100% believing that oil came from dinosaur bones. Some people still believe this even though there is not one bit of truth in it; the term “fossil fuels” is still used. AI actually makes it tougher for the mainstream control grid because it has made people hyper aware that fakery is out there. More than likely what AI will cause is just people checking out entirely and concluding that “all of it is nonsense” rather than trying to weed out fact from fiction.
@mrs.sherry
December 28, 2023 at 8:59 am
Can you guys ask the AI robots which appearance style they find more appealing: humanoid, comical toy, mechanical, or pet-like? Do they even have a preference? Does it bother some of them that the wiring is showing or that parts are incomplete?
@ViolinistJeff
December 28, 2023 at 9:18 am
I’m not afraid of deepfakes. It may take some time, but we will get used to the possibility of deepfakes everywhere. It’s like when the modeling industry used extensive photoshopping to make women look too perfect. Some of us had to learn that that is an unrealistic standard of beauty and to never trust that what you see online is what it looks like in real life.
I predict that major video websites like YouTube, Facebook, Vimeo, etc. will have extensive and very accurate deepfake-detection techniques and algorithms. Just as there is a check mark beside the channel name TED to help prove to you that this is the official TED Talks channel, there will be a check mark beside the video to tell you whether it has been deepfaked or not.
But the detection methods of these websites will remain top secret to stop people from being able to get around them. Many aspects of Google and YouTube are already a secret.
@Octwavian
December 28, 2023 at 9:54 am
I’ve got zero hope. We won’t prevent this; there is no real solution. At most, we can delay it.
@derekholland3328
December 28, 2023 at 10:59 am
If it gets to a point where you CAN’T tell if a video, pic, etc. is real,
then how can you reliably tell if this life is real?
@williammillerjr9028
December 28, 2023 at 12:00 pm
It must be in the hands of our first line of defense…
@AnnCatsanndra
December 28, 2023 at 12:57 pm
If your security scheme is weak enough that having its systems under open public scrutiny can defeat it, it is probably not a very good security system at all. Security through obscurity is seldom viable.
@justwanderin847
December 28, 2023 at 2:51 pm
I have thought about artificial intelligence, and I think it would be constitutional and necessary to add one item to US public law on copyrights. Just add a definition of the word AUTHOR to say that an author is defined as a human. You must be human to copyright something (even if it was created by a computer). That way any output from AI (image, picture, song, voice, science, math) cannot be copyrighted; only a human can copyright the output, and thus the copyright would apply to a person. Copyright law (Title 17, U.S.C.) does not define the word “author,” so just define it as human.
Just as the Constitution gives the USA no authority to dictate to any country the type of weapons they can have, so be it with AI. The Constitution and common sense tell you that you cannot govern computer programming on a world basis, or constitutionally within the USA.
We DO NOT need government to regulate computer programming (AI). Big media and big government are just trying to scare people into giving up their liberty for some faux safety.
AI is a computer program and has no need of an “AI Bill of Rights.” AI has no rights. But the President of the United States already has the “Blueprint for an AI Bill of Rights.”
@AlphaFoxDelta
December 28, 2023 at 5:18 pm
“We have to get this right”
We rarely do, and the results of not doing so, in this case, are going to affect our lives, jobs and economies.
To be absolutely cliché, I feel a statement on chaos, similar to the one regarding the scientists in Jurassic Park, is needed. They are doing things that will affect us all without our vote, and that will inevitably be used nefariously because it can’t be controlled.
@vickygllc
December 29, 2023 at 7:25 am
👏🏻👏🏻👏🏻
@lpalbou
December 29, 2023 at 7:50 am
8:10 Of course we need proper provenance/references. We needed them before to structure data; we need them even more now to help in the detection of deepfakes and fake data.
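A minimal sketch of what content provenance can mean in practice, using only the Python standard library; the key, record format, and file contents here are illustrative stand-ins rather than any real standard such as C2PA: the publisher attaches a signed hash to the media, and anyone holding the record can later check that the bytes they received are the ones that were attested.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret"  # stand-in for a real private key or certificate

def attest(media: bytes, source: str) -> dict:
    """Build a provenance record: a hash of the content plus a signature over that hash."""
    digest = hashlib.sha256(media).hexdigest()
    signature = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"source": source, "sha256": digest, "signature": signature}

def verify(media: bytes, record: dict) -> bool:
    """Check that the media matches the record and that the record carries a valid signature."""
    digest = hashlib.sha256(media).hexdigest()
    expected = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == record["sha256"] and hmac.compare_digest(expected, record["signature"])

clip = b"raw video bytes would go here"          # placeholder content
record = attest(clip, source="example-newsroom")
print(json.dumps(record, indent=2))
print("unaltered:", verify(clip, record))          # True
print("tampered: ", verify(clip + b"!", record))   # False: any edit breaks the hash
```

Real provenance schemes use public-key signatures so verification does not require sharing the signing secret; HMAC is used here only to keep the sketch dependency-free.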
@pietercoetzee9376
December 29, 2023 at 10:31 am
😮ooooooooeh
@billwhite1603
December 29, 2023 at 11:00 am
It’s a shame the “the only opinion that counts is mine” group used “fake news, lies, racism,” etc. to go after points of view they do not like. That killed the battle against true fakes. The Boy Who Cried Wolf tale is applicable here, as the battle against AI faltered because of this before it really ever got started. Governments ensure? We have already seen that governments, political parties, and biased government workers will use this to target opposing ideas or people. Or, don’t say it: the previous Twitter, Google, and YouTube. They went, and still go, after the message, not the authenticity. Try speaking on that. Unless you are biased, that is.
@mencken8
December 29, 2023 at 2:19 pm
And to think, it all started with a can of Coke on a coffee table…..
@ericmedlock
December 29, 2023 at 2:44 pm
The exact same people I currently trust: no one.
@trustworthyguy8645
December 29, 2023 at 3:42 pm
Me! You can trust *me!*
I am a very trustworthy guy!
@Monterrey_Manila
December 30, 2023 at 12:20 am
From now on you need your signature and info to back up your words and your content; anything with no identity verification is sent to the internet trash bin.
@cbaschan
December 30, 2023 at 5:05 pm
we are doomed
@anthonybeaton9823
December 31, 2023 at 1:00 am
What a joke: only the special propagandists are allowed to decide what is true or fake.
@pattybonsera
December 31, 2023 at 11:49 am
A close friend of mine recently had her voice faked in an audio on Facebook. Her FB account was hacked, and the scammer was reaching out to every single one of her friends trying to get personal banking information. I have to say, it sounded exactly like her.
@vctaillon
December 31, 2023 at 8:51 pm
Here’s a great solution: push all the power buttons to off.
Go for a walk in the woods. Remember the woods?
@nubletten
January 1, 2024 at 6:01 am
Too woke intro for me to keep watching, sry but not rly.