This Week in AI: The New AI Browser Wars Begin
For Oct. 24, 2025: Google Veo takes on OpenAI's Sora, Netflix pushes more generative video, AI babysitting, Reddit sues Perplexity.

How much is AI shaking up the tech industry? Google has gone from being seen as one of the most innovative companies in the industry to being a sort-of underdog against massive startups like OpenAI and Anthropic. For Google's competitors, this is the moment to go after some of its most powerful positions in the industry.
That is why OpenAI has released its own web browser, Atlas, which is designed to take on the most popular web browser in the world, Google Chrome.
Atlas takes a much different approach to the internet, building the browsing experience around the prompt box you know from ChatGPT: you can ask it any question, or type in a web address to surf to instead.
Atlas also includes an in-browser AI agent that can help you automate certain tasks on a webpage because it can see what you’re doing. For example, you could ask ChatGPT to automatically apply to jobs it sees on LinkedIn's website while you do other things in another browser tab. Agentic AI, especially running in your local browser (instead of in a remote instance), is one way to make that technology more practical for people. For now, at least, Atlas is Mac-only.
OpenAI isn’t the only company offering a web browser. Perplexity has also launched its own web browser called Comet. Getting people to switch web browsers will be hard enough, but the potential upside if successful is enormous.
Google Chrome isn’t just a popular web browser; it is also Google’s primary window into who we are and how we behave on the internet. Chrome is designed to provide different signals, sending data to advertisers and website owners about who you are, what you’re doing, and where you came from.
Ultimately, that data has helped Google grow into the multi-trillion-dollar company it is today, and solidify its place as one of the biggest companies on earth. Just as with Microsoft 20 years ago, Google's dominance of the web browser has become a key topic of conversation in the legal and tech worlds.
OpenAI, like any other AI company, doesn’t just have an opportunity to take on Chrome; if successful, it will also gain an unprecedented view into our lives. OpenAI has already shown how ChatGPT watches what you are browsing and learns from your behavior. For OpenAI, the data that comes from your web browsing habits could significantly shift the way it understands human behavior, and ultimately trains its AI.
"It’s too early to evaluate whether Atlas’s new artificial intelligence capabilities are useful enough to make it worth all the data gathering," my former coworker and forever colleague Geoffrey Fowler wrote in the Washington Post. "But the implications for privacy are vast, and the controls for managing what Atlas remembers are confusing at best."
In the short term, this increased competition for web browsers will look good. It'll turn into what many of us techies love: A showdown of who has the best and most useful features. Two decades ago that pressure helped Mozilla's Firefox stand out against Microsoft's dominant Internet Explorer. Google Chrome then came and disrupted the industry again, providing technology and tools that made the modern internet possible. Atlas may be the next step, or it may not.
Google's Veo takes on Sora
AI video has been rapidly evolving over the last few months with the announcements of the digital actor Tilly Norwood and the highly publicized launch of OpenAI's updated Sora. Now, not to be outdone, Google is updating its app, Veo, with new features to take on Sora.
Among the changes announced, Veo 3.1 can now work with multiple reference images, allowing you to upload several photos of different people, for example, and have the AI create videos based on those likenesses. Google also said it supports longer videos and better editing.
Google's push further into AI video is just the latest way it is trying to remain competitive in the ultra competitive AI industry. When it comes to video, in particular, the race currently stands largely between OpenAI and Google, though other companies, including Meta, are attempting to build similar tools.
A key question about Google's Veo will be how much the company has learned so far from Sora's launch. Artists have expressed concern about how easily AI video tools can create deepfakes, effectively putting real people in situations that never happened and making them appear to do things they hadn’t done. OpenAI eventually blocked the use of some public figures in its Sora videos, but depictions of other celebrities continue to proliferate.
Netflix goes all-in on AI
You may not think of Netflix as an AI company, but you absolutely should. After all, its recommendation system is one of the most advanced examples of machine learning and artificial intelligence in the tech world, and the company has been pushing ever deeper into AI alongside those efforts. Now, as Hollywood debates modern AI and generative technologies, Netflix has made clear that it stands firmly on the side of technology companies.
The company has said that it is increasingly using generative AI video technologies in its productions, including to create scenes for shows and movies. Earlier this year, that included footage for Netflix's Argentine show "The Eternaut," TechCrunch reported. Netflix has also used AI to help with de-aging stars.
“It takes a great artist to make something great,” Netflix CEO Ted Sarandos said on a conference call with investors earlier this week. “AI can give creatives better tools to enhance their overall TV/movie experience for our members, but it doesn’t automatically make you a great storyteller if you’re not.”
Researchers task AIs with babysitting one another
There’s a lot of really interesting debate about how to handle the question of trusting an AI. The larger issue many companies face is that AI is not always going to provide the same answer every time you ask it the same question. Because of the nature of how generative AI works, it’s possible that in some cases, you could get radically different answers for the same question, whether you're using a cloud-based AI service or running AI on your own local hardware.
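That variability isn't a bug so much as a design choice: models pick their next word by sampling from a probability distribution, and a "temperature" setting controls how adventurous that sampling is. Here's a toy sketch of the idea in plain Python; the scores and token counts are made up for illustration, not taken from any real model.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample one token index from raw model scores ("logits").

    Higher temperature flattens the probability distribution, so
    repeated runs are more likely to pick different tokens -- one
    reason the same prompt can produce different answers.
    """
    rng = rng or random.Random()
    scaled = [score / temperature for score in logits]
    # Softmax (shifted by the max for numerical stability).
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy scores for four candidate tokens; token 0 is the clear favorite.
logits = [4.0, 2.0, 1.0, 0.5]
low_t = {sample_next_token(logits, temperature=0.2) for _ in range(50)}
high_t = {sample_next_token(logits, temperature=2.0) for _ in range(50)}
# At low temperature, nearly every draw picks the favorite; at high
# temperature, the other candidates show up far more often.
```

Real systems add many layers on top of this (top-k and top-p filtering, for instance), but the core reason two identical prompts can diverge is this weighted coin flip at every word.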
There is a lot of work being done around how to mitigate this issue and ensure consistent quality across the board. The latest answer from Meta's AI teams is that instead of attempting to manage one AI on its own, why not have two work together? After all, one AI might hallucinate and make something up in an effort to answer a question, but if another AI has the job of double-checking that work, it's not likely to hallucinate in the same way.
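The generator-plus-checker idea is easy to sketch. Below is a toy version in Python, with stub functions standing in for the two models; the question-and-answer pairs (including the deliberate wrong answer) are invented purely for illustration and don't reflect Meta's actual system.

```python
def draft_answer(question):
    """Stand-in for the first "generator" model; it sometimes gets things wrong."""
    canned = {
        "capital of France?": "Paris",
        "capital of Australia?": "Sydney",  # a deliberate, plausible-looking error
    }
    return canned.get(question, "I don't know")

def check_answer(question, answer):
    """Stand-in for a second "verifier" model with its own knowledge.

    Returns (ok, correction): ok is True when the draft checks out,
    and correction is what the verifier believes the answer should be.
    """
    facts = {
        "capital of France?": "Paris",
        "capital of Australia?": "Canberra",
    }
    expected = facts.get(question)
    if expected is None:
        return True, answer  # nothing to check against; pass the draft through
    return expected == answer, expected

def answer_with_review(question):
    """Generate a draft, then let the checker veto or correct it."""
    draft = draft_answer(question)
    ok, correction = check_answer(question, draft)
    return draft if ok else correction
```

The pattern only helps when the two models fail differently: if the checker shares the generator's blind spots, the error sails through review, which is exactly the caveat the researchers flag.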
Now, of course, researchers were quick to note that this is not a guaranteed solution. "Large language models (LLMs) present immense potential for both positive impact, and significant risks if not managed responsibly," researchers wrote in their paper, published online earlier this week.
Meta's research may lead to better AI, but it doesn't solve the problem entirely. As we’ve learned with other systems, backups and fail-safes can reduce a problem's impact, but they are never a complete fix.
As you dig deeper into this debate, there’s another thing to consider: whether we know it or not, many of us tend to trust a machine more than we do other people.
For example, many of us know when to apply skeptical analysis to other people's statements. So far, most people have not been taught that type of skill with computers. In fact, we’ve been taught to do the opposite, especially when it comes to asking computers for help when researching facts or doing math.
Using multiple AIs may help to mitigate some of these concerns, but the concerns haven’t gone away completely. It’s likely that in the coming years we will develop much better skills and training to suss out when an AI is right or wrong, rather than just relying on tools built by tech companies to mitigate their own risks.
In the meantime, it’s a reminder that we still have a lot of improvements to make, both with AI and the way we think about it. It’s also a fascinating window into a conversation about the nature of truth, what authority really is, and how we think of facts and our shared reality.
Reddit sues Perplexity
A few months ago, it seemed as though copyright lawsuits over AI were largely over. Courts largely seemed to say that AI companies could use legally acquired content to train their AI without facing copyright violation lawsuits. There were still some instances where companies could get in trouble, including if they somehow stole content in order to train their AI. But otherwise, the wild west seemed to remain untamed.
Despite those court rulings, Reddit has sued AI startup Perplexity, saying the company illegally stole information from its service by scraping, or copying, Reddit’s sites.
"Reddit alleged that the three smaller entities were able to extract its copyrighted content 'by masking their identities, hiding their locations and disguising their web scrapers as regular people,'" CNBC reported.
Perplexity denied the allegations, while also accusing Reddit of "extortion."
The suit will mark an interesting test of copyright law, potentially setting clear rules not just for how copyrighted works can be used to train AI, but also for how they can be acquired.
More from MC News
- Hands-on with the NVIDIA DGX Spark
- How to Build a PC with a Hardline Water-Cooling Loop
- 3D Print a Mac Mini Monitor Mount
- The End Has Come for Windows 10: Four Tips to Make the Most of Windows 11
- Everything You Need to Know About WiFi 7
- Keyboard 101: Intro to Computer Keyboards
- Can Your PC Run OpenAI's New GPT-OSS Large Language Models?
- Fix It Yourself: Talking to iFixit on Why Repairable Tech Matters
Ian Sherr is a widely published journalist who's covered nearly every major tech company from Apple to Netflix, Facebook, Google, Microsoft, and more for CBS News, The Wall Street Journal, Reuters, and CNET.
