Newsom Signs Law Requiring Online Age Verification
Authored by Arjun Singh via The Epoch Times,
California Gov. Gavin Newsom has signed into law a series of state bills that will require age verification to access Apple and Google devices, impose social media warning labels, and regulate artificial intelligence (AI) chatbots and the creation of “deepfake” videos.
The seven bills were passed by the California Legislature during its 2025–2026 legislative session and were signed on Oct. 12.
Some of the statutes will take effect on Jan. 1, 2027, while others—related to “deepfake” pornography and legal defenses against liability for AI usage—are effective immediately.
“We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue,” Newsom wrote in a statement.
The bill on age verification, Assembly Bill 1043, would require operating systems, such as Apple’s iOS and Google’s Android, to determine whether a user is under 13 years old, between 13 and 16, between 16 and 18, or 18 or older, and then curate the content available to them accordingly.
The restriction would apply to software built into those operating systems, such as the Apple App Store and the Google Play Store, which are used to download mobile applications.
Violations of this rule may incur civil penalties of $2,500 per child for each incident and up to $7,500 per child for intentional violations.
Assembly Bill 1043 does not impose age verification requirements for access to pornographic websites, a measure that 25 other states have recently adopted and that has prompted several sites, such as Pornhub, to shut down operations in those states.
A separate bill to impose such requirements in California, Assembly Bill 3080, was not passed by the Legislature this year and “died in the Senate Appropriations Committee,” according to a state Senate report.
Another piece of legislation, Assembly Bill 621, imposes steep penalties for producing or sharing “deepfake” pornographic videos, in which AI programs are used to create realistic depictions of real people engaging in sexual activity, often with faces adapted from publicly available photographs.
The bill allows plaintiffs to recover up to $250,000 from defendants who create or share such content with malice, along with punitive damages and legal fees.
The bill does not, however, impose liability on social media companies for content posted on their platforms, consistent with federal law under Section 230 of the Communications Act.
A separate bill signed by Newsom, Assembly Bill 316, prevents users of AI from shielding themselves from liability when an AI system, acting on their prompts, creates content that harms another person.
A bill regarding social media warning labels, Assembly Bill 56, would require companies to warn users under 17 once they have spent more than three cumulative hours a day on a platform.
It does not apply to specific content on those platforms, which have in the past been heavily criticized by conservatives as politically biased and censorious.
Tyler Durden
Tue, 10/14/2025 – 17:00