This Week in AI: A Big Leap for Desktop AI
For Oct. 17, 2025: NVIDIA DGX Spark, Apple's M5 chip, California responds to Tilly Norwood, and AI is writing more of the Internet.

NVIDIA's $3,999 DGX Spark could end up being one of those moments, like Apple's 2007 iPhone introduction, when the tech industry could tell things were about to change rapidly. The device landed on store shelves this week, promising server-class AI hardware in a package the size of a hardcover book. It is built around NVIDIA's Grace Blackwell architecture: a system-on-a-chip (SoC) that pairs a 20-core Arm CPU (10 Cortex-X925 performance cores and 10 Cortex-A725 efficiency cores) with a Blackwell GPU (6,144 CUDA cores) on the same package. The system is rated for up to 1 petaFLOP of AI performance at FP4 precision, a number that until recently was reserved for massive data center servers.
That compute is backed by 128GB of LPDDR5X unified system memory on a 256-bit interface, delivering 273 GB/s of bandwidth that the CPU and GPU share dynamically. That is far more than the 8GB-32GB typical of consumer computers, and it means the DGX Spark can hold and run far larger and more complex AI workloads than consumer or enthusiast machines typically can.
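The memory capacity is the key number here, because a model's weights have to fit in memory before it can run at all. A rough, back-of-envelope sketch (illustrative numbers only; a real deployment also needs headroom for the KV cache and activations):

```python
# Back-of-envelope sketch: which model sizes fit in a given pool of
# unified memory at different quantization levels. Illustrative only;
# real runtimes also need room for the KV cache and activations.

def model_footprint_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight storage for a model, in gigabytes."""
    bytes_per_weight = bits_per_weight / 8
    return params_billions * 1e9 * bytes_per_weight / 1e9

MEMORY_GB = 128  # the DGX Spark's unified memory pool

for params in (8, 70, 120, 200):
    for bits in (16, 4):
        gb = model_footprint_gb(params, bits)
        verdict = "fits" if gb < MEMORY_GB else "too big"
        print(f"{params}B params @ {bits}-bit: ~{gb:.0f} GB ({verdict})")
```

The takeaway: a 70-billion-parameter model at 16-bit weights needs roughly 140GB and won't fit, but quantized to 4 bits it shrinks to about 35GB, which is why FP4 support and the 128GB pool matter together.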
There are a few hurdles for the DGX Spark, of course. The first is price. The second is that NVIDIA's software and documentation focus on fine-tuning models rather than simply running an open-source model locally. Depending on community support, that could change.
For now, the DGX Spark is a glimpse of what the future could be: high-end AI development tools on our desks that don't rely on remote cloud servers to work.
In the meantime...
Apple bumps MacBook Pro and iPad Pro to M5
No, you didn't miss it. Apple decided to announce its newest laptops in a press release rather than at a flashy in-person event. The newest 14-inch MacBook Pro starts at $1,599, promising new M5 silicon, which Apple says has a redesigned 10-core GPU with a new architecture and neural accelerators to speed up AI workloads.
Apple said the new design delivers over four times the peak GPU compute performance for AI compared to the M4 chip, with unified memory bandwidth of up to 153GB/s.
Apple also upgraded its 11-inch iPad Pro to the M5, starting at $999. Apple said its M5 iPad Pro is up to 3.5x faster than the M4 iPad Pro, and 5.6x faster than the M1 iPad Pro.
Whether the M5 lives up to Apple's promises remains to be seen. So far, Apple has largely kept the promises it makes, aside from features it said would come to Apple Intelligence. Reviewers, of course, will have their own say around the time both devices launch on Oct. 22.
California responds to AI actors
California Governor Gavin Newsom has signed into law a new bill that sets rules around how people's likenesses can be used by AI. The new law, called SB 683, will help remove "unlawful content" that misappropriates a person's voice or likeness in commercial settings. The Wrap noted that this law follows a series of incidents in which celebrities including Tom Hanks, Keanu Reeves, and Jamie Lee Curtis have said their likeness was being used in ads without their permission.
Though the law has been years in the making, it comes at a particularly tense time in the entertainment industry.
Two years ago, actors and other Hollywood unions went on strike as they battled with movie studios over a new contract that, among other things, included language about the future use of AI in making movies. Then this fall, the twin releases of OpenAI's Sora 2 and the AI "actress" Tilly Norwood raised many more questions about the future of entertainment, and how people's likenesses may be abused by new AI tools.
Google and OpenAI's video tools can create remarkably realistic video and audio of pretty much anything based on a text prompt entered into a computer. And since these technologies have been built by using information gathered across the Internet, they are also very good at reproducing the likeness and voices of celebrities.
The end result has been a flood of videos, including physicist Stephen Hawking fighting in a wrestling match and comedian Robin Williams performing skits he never made.
California’s new law certainly draws a line in the sand, but it’s unclear how well it will stand up to scrutiny. Courts have already begun ruling that companies are allowed to train their AI on information across the Internet, and even create facsimiles, under fair use protections. Those cases haven’t specifically involved the likenesses of celebrities, but it’s clear this debate will drag on for a while.
The web is still mostly written by humans, for now
One of the most interesting debates about the Internet these days is something called the dead internet theory. The idea goes something like this: so much of the web's traffic and content is generated by automated programs that the Internet's true size and activity are significantly different than we think.
Said another way, a website might report that 1,000 people visited its homepage when, in fact, half of them were computer programs tracking how much that homepage changed. Imagine if this were a news site, a press release blog, or a collection of sports scores, and it sounds increasingly plausible.
This matters because businesses make many decisions based on trends they see from the internet. Imagine sustained traffic spikes to a particular product's information page, or a certain type of news story.
Advertisers also commonly judge the value of an ad on a website based on its views. So, if half the views are computer programs that are never going to buy anything anyway, then in truth, the advertiser shouldn’t be paying as much.
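The arithmetic behind that point is simple: if an advertiser pays per thousand views but some fraction of those views are bots, the real cost per human impression is inflated. A toy sketch, with made-up numbers:

```python
# Toy illustration of the ad-pricing point above: if a fraction of
# "views" are bots, the effective price per human view is inflated.
# All numbers here are made up for the example.

def effective_cpm(paid_cpm: float, bot_fraction: float) -> float:
    """Cost per 1,000 *human* impressions, given the CPM paid for
    all impressions and the fraction of impressions that are bots."""
    human_fraction = 1.0 - bot_fraction
    return paid_cpm / human_fraction

# Paying a $2.00 CPM when half the views are bots means each
# 1,000 human views really costs $4.00.
print(effective_cpm(2.00, 0.5))
```

Put another way, a 50% bot share silently doubles what the advertiser pays per actual person reached, which is exactly why the scale of bot traffic matters to the ad market.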
It’s almost impossible to accurately track how much of this is true, especially because these computer programs and bots are designed to act like human beings in order to gain access to the information they're seeking.
It's not just standard websites, either. Bad actors on social media use fake profiles for all sorts of reasons, whether to make someone look more important than they are or to persuade people that a certain point of view has more supporters than it does. And all of these accounts have become harder to identify thanks to AI programs that make them and their posts seem more human.
The next step in this debate is the question of what share of the websites, news articles, images, and videos on the internet are currently created by humans. Axios reports that a new study from SEO firm Graphite says we are nearing a tipping point where AI-generated content will outpace human-made content.
The study used AI-detection tools on many different websites, but of course, Axios adds that AI researchers say it "isn't possible" to get a definitive count of AI-made content. One reason is that AI is just getting better at seeming human, and another is that humans are increasingly using AI in their daily work.
It’s unclear where any of this will eventually end up. "For now, humans still want to read content that is written mostly by humans," Axios added.
More from MC News
- Hands-on with the NVIDIA DGX Spark
- How to Build a PC with a Hardline Water-Cooling Loop
- 3D Print a Mac Mini Monitor Mount
- The End Has Come for Windows 10: Four Tips to Make the Most of Windows 11
- Everything You Need to Know About WiFi 7
- Keyboard 101: Intro to Computer Keyboards
- Can Your PC Run OpenAI's New GPT-OSS Large Language Models?
- Fix It Yourself: Talking to iFixit on Why Repairable Tech Matters
Ian Sherr is a widely published journalist who's covered nearly every major tech company from Apple to Netflix, Facebook, Google, Microsoft, and more for CBS News, The Wall Street Journal, Reuters, and CNET.
