CNET

ChatGPT-OSS Is Here And Can Run Locally On Your PC


You can now run ChatGPT locally on your own computer. Let’s see how it holds up.

Read more about ChatGPT OSS-20B on CNET.com
OpenAI’s New Models Aren’t Really Open: What to Know About Open-Weights AI

0:00 OpenAI’s local AI model is here
0:07 GPT-OSS
0:21 Custom-built local AI PC
0:28 GPT-OSS-20B vs. ChatGPT
1:04 GPT-OSS efficiency
1:17 GPT-OSS results
1:28 GPT-OSS and ChatGPT write a poem
1:51 GPT-OSS answers a research question
2:26 Final first impressions of GPT-OSS


#openai #chatgpt #ai #chatbot #artificialintelligence

19 Comments

  1. @witness1013

    August 11, 2025 at 8:30 am

I can also stick a fork in my eyeball – but why would I? Same goes for OSS

    • @OnurOzalp-personal

      August 11, 2025 at 11:59 am

Some companies prefer not to share sensitive information with GPT, so maybe you can sell custom GPT-OSS setups to them.

  2. @Sebpv2006

    August 11, 2025 at 9:58 am

RTX 5090 – $2,000 … ?

    • @hunterhealer8022

      August 11, 2025 at 11:31 am

      Bought mine at that price

  3. @Chewbucksa

    August 11, 2025 at 10:27 am

    can it generate images and videos?

    • @OnurOzalp-personal

      August 11, 2025 at 11:58 am

      “We introduce gpt-oss-120b and gpt-oss-20b, two open-weight reasoning models available under the
      Apache 2.0 license and our gpt-oss usage policy. Developed with feedback from the open-source
      community, these *text-only* models are compatible with our Responses API and are designed to
      be used within agentic workflows with strong instruction following, tool use like web search and
      Python code execution, and reasoning capabilities—including the ability to adjust the reasoning
      effort for tasks that don’t require complex reasoning”

  4. @SonicVibe

    August 11, 2025 at 11:17 am

    i like perplexity

  5. @grtninja

    August 11, 2025 at 6:18 pm

PSA: you don’t need a 5090 to run OSS 20B, just any GPU with 16GB, and LM Studio allows multiple CUDA GPUs to share the load, which means multiple 8GB GPUs can run it. Works with AMD too; just download the right Ollama version for your system.

    • @radish6691

      August 12, 2025 at 8:10 pm

PSA: I’m running gpt-oss-20b on a 3070 with 8GB and get 8-16 tokens/sec, with time to first token typically a couple of seconds and often < 1 second at low reasoning effort. It’s *much* faster and gives better answers than any other model I’ve run locally. My only complaint is about LM Studio… I wish they’d add chat export to PDF.

    • @acasualviewer5861

      August 12, 2025 at 11:29 pm

@radish6691 I ran it on my M4 Pro MBP with 48GB of RAM, and I was surprised how fast it was. For simple hello-world-type prompts it was faster than 60 tok/s; for slower, more complex ones it went down to 30 tok/s.

But it felt as fast as online ChatGPT.

  6. @JasonB808

    August 11, 2025 at 8:13 pm

    I just use the online version via Copilot.

  7. @HokgiartoSaliem

    August 12, 2025 at 6:20 am

Try the latest Blackwell: how fast/slow does it respond? RTX 5050.

  8. @malcomf7991

    August 12, 2025 at 12:36 pm

    Relax

  9. @tigernikesh7358

    August 12, 2025 at 1:08 pm

  10. @acasualviewer5861

    August 12, 2025 at 11:28 pm

It ran 30-60 tokens/sec on my 48GB M4 Pro MacBook Pro. It didn’t feel slow.
(I’m talking about the 20B model; couldn’t run the 120B model with only 48GB of RAM.)

  11. @Maceyee1

    August 13, 2025 at 2:06 pm

    Why does this guy talk so fast

  12. @CNET

    August 13, 2025 at 2:22 pm

    Read more about ChatGPT OSS-20B on CNET.com: OpenAI’s New Models Aren’t Really Open: What to Know About Open-Weights AI

  13. @Jean-Sylvain-v5t

    August 13, 2025 at 8:06 pm

    There’s something faster than local AI. A presenter on caffeine (or whatever).

  14. @ShibaVid

    August 15, 2025 at 8:17 am

What is the point of using inferior, error-prone models that require powerful GPUs to run locally?
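The 16GB claim in the thread above is consistent with a back-of-envelope memory estimate. A minimal sketch, assuming gpt-oss-20b’s roughly 21B total parameters stored at about 4.25 bits per parameter under MXFP4 quantization; the 2GB runtime overhead for KV cache and activations is a loose assumption, and real usage varies by runtime and context length:

```python
# Rough VRAM estimate for running gpt-oss-20b locally.
# Assumptions: ~21e9 total parameters, weights at ~4.25 bits/param
# (MXFP4), plus ~2 GB of overhead for KV cache and activations.

def vram_estimate_gb(params: float = 21e9,
                     bits_per_param: float = 4.25,
                     overhead_gb: float = 2.0) -> float:
    weights_gb = params * bits_per_param / 8 / 1e9  # bits -> bytes -> GB
    return weights_gb + overhead_gb

print(round(vram_estimate_gb(), 1))  # prints 13.2, under a 16 GB card
```

Under these assumptions the model fits on a single 16GB GPU with headroom, which matches both the 16GB PSA and the report of it running (partially offloaded) on an 8GB 3070.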
