CNET
ChatGPT-OSS Is Here And Can Run Locally On Your PC
You can now run ChatGPT locally on your own computer. Let’s see how it holds up. Read more about GPT-OSS-20B on CNET.com: OpenAI’s New Models Aren’t Really Open: What to Know About Open-Weights AI. 0:00 OpenAI’s local AI model is here 0:07 GPT-OSS 0:21 Custom-built local AI PC 0:28 GPT-OSS-20B vs. ChatGPT 1:04…

@witness1013
August 11, 2025 at 8:30 am
I can also stick a fork in my eyeball – but why would I? Same goes for OSS.
@OnurOzalp-personal
August 11, 2025 at 11:59 am
Some companies prefer not to share sensitive information with GPT, so maybe you can sell custom GPT-OSS deployments to them.
@Sebpv2006
August 11, 2025 at 9:58 am
RTX 5090 – $2,000…?
@hunterhealer8022
August 11, 2025 at 11:31 am
Bought mine at that price
@Chewbucksa
August 11, 2025 at 10:27 am
can it generate images and videos?
@OnurOzalp-personal
August 11, 2025 at 11:58 am
“We introduce gpt-oss-120b and gpt-oss-20b, two open-weight reasoning models available under the
Apache 2.0 license and our gpt-oss usage policy. Developed with feedback from the open-source
community, these *text-only* models are compatible with our Responses API and are designed to
be used within agentic workflows with strong instruction following, tool use like web search and
Python code execution, and reasoning capabilities—including the ability to adjust the reasoning
effort for tasks that don’t require complex reasoning”
@SonicVibe
August 11, 2025 at 11:17 am
i like perplexity
@grtninja
August 11, 2025 at 6:18 pm
PSA: you don’t need a 5090 to run GPT-OSS-20B, just any GPU with 16 GB of VRAM, and LM Studio allows multiple CUDA GPUs to share the load, which means multiple 8 GB GPUs can run it. Works with AMD too, just download the right Ollama version for your system.
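The setup described in the comment above can be sketched in code: both LM Studio and Ollama expose an OpenAI-compatible HTTP API for locally loaded models, so a small script can query gpt-oss-20b with no cloud dependency. This is a minimal sketch, not an official example; the endpoint URL, port, and model tag are assumptions that depend on your installation (Ollama typically serves on port 11434, LM Studio on 1234).

```python
# Minimal sketch: query a locally served gpt-oss-20b through the
# OpenAI-compatible chat completions endpoint that LM Studio and
# Ollama both expose. Endpoint URL and model tag are assumptions;
# adjust them for your own setup.
import json
import urllib.request

LOCAL_ENDPOINT = "http://localhost:11434/v1/chat/completions"  # assumed Ollama default

def build_request(prompt: str, model: str = "gpt-oss:20b") -> dict:
    """Build an OpenAI-style chat completion payload for the local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask_local_model(prompt: str) -> str:
    """POST the prompt to the local server and return the reply text."""
    payload = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires a running local server):
#   print(ask_local_model("Say hello in one sentence."))
```

Because the API shape matches OpenAI's, the same script points at LM Studio by only changing `LOCAL_ENDPOINT` and the model name.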
@radish6691
August 12, 2025 at 8:10 pm
PSA I’m running gpt-oss-20b on a 3070 with 8GB and get 8-16 tokens/sec with time to first token typically a couple of seconds and often < 1 second, low reasoning effort. It’s *much* faster and gives better answers than any other model I’ve run locally. My only complaint is about LM Studio…I wish they’d add chat export to PDF.
@acasualviewer5861
August 12, 2025 at 11:29 pm
@radish6691 I ran it on my M4 Pro MBP with 48 GB of RAM, and I was surprised how fast it was. For simple hello-world prompts it was faster than 60 tok/s; for slower, more complex ones it went down to 30 tok/s.
But it felt as fast as online ChatGPT.
@JasonB808
August 11, 2025 at 8:13 pm
I just use the online version via Copilot.
@HokgiartoSaliem
August 12, 2025 at 6:20 am
Try it on the latest Blackwell: how fast or slow does it respond on an RTX 5050?
@malcomf7991
August 12, 2025 at 12:36 pm
Relax
@tigernikesh7358
August 12, 2025 at 1:08 pm
❤
@acasualviewer5861
August 12, 2025 at 11:28 pm
It ran at 30–60 tokens/sec on my 48 GB M4 Pro MacBook Pro. It didn’t feel slow.
(I’m talking about the 20B model; I couldn’t run the 120B model with only 48 GB of RAM.)
@Maceyee1
August 13, 2025 at 2:06 pm
Why does this guy talk so fast
@CNET
August 13, 2025 at 2:22 pm
Read more about ChatGPT OSS-20B on CNET.com: OpenAI’s New Models Aren’t Really Open: What to Know About Open-Weights AI
@Jean-Sylvain-v5t
August 13, 2025 at 8:06 pm
There’s something faster than local AI. A presenter on caffeine (or whatever).
@ShibaVid
August 15, 2025 at 8:17 am
What is the point of using inferior, error-prone models that require powerful GPUs to run locally?