AI Law Year in Review: The Major Legal Cases of 2025 and What They Mean for 2026

We can confidently say that artificial intelligence law stopped being “emerging” in 2025. This was the year courts, regulators, and legislators around the world started drawing real lines in the sand on copyright, data use, AI-washing, and high-risk systems—with obligations that will fully bite in 2026 and beyond. For in-house teams, founders, and boards, this year was less about theoretical risk and more about practical questions: what, exactly, is now illegal, what must we document, and how do we keep launching AI products without stepping on a legal landmine?
Copyright & IP: The “Fair Use Triangle” Takes Shape
This year gave us the first real cluster of U.S. decisions on whether using copyrighted works to train AI is fair use. The answer so far: it depends heavily on how you got the data and what you do with it.
Thomson Reuters v. ROSS (D. Del.) – “Headnotes are not a free training set”
In February 2025, a Delaware federal court issued one of the first major training-data decisions in Thomson Reuters Enterprise Centre GmbH v. ROSS Intelligence Inc. Thomson Reuters, owner of Westlaw, accused ROSS of using Westlaw headnotes to train a competing AI-driven legal research tool. The court rejected ROSS’s fair-use defense at summary judgment and found infringement, emphasizing the commercial, competitive use and the creative value of the curated headnotes.
Takeaway: Scraping proprietary, value-added content from a competitor to build a directly competing AI product is a high-risk strategy.
Bartz v. Anthropic (N.D. Cal.) – Lawful copies vs. pirated “central library”
In June 2025, Judge William Alsup issued a pivotal summary-judgment ruling in Bartz v. Anthropic:
Training on lawfully acquired books was “quintessentially transformative” and fair use.
But creating and retaining a “central library” of pirated books raised serious infringement concerns, and fair use was denied for those works.
The case later settled on the eve of trial in September 2025 for a reported $1.5 billion, underscoring the stakes of training-data decisions. So, the courts are increasingly drawing a line between lawfully acquired corpora (more defensible) and pirated or unauthorized data.
Kadrey v. Meta & other N.D. Cal. cases – More nuance on fair use
Companion cases out of the Northern District of California (including Kadrey v. Meta) produced additional rulings that, on their face, are more favorable to AI developers, finding fair use in some training scenarios involving lawfully sourced content.
Collectively, practitioners talk about a “fair use triangle”:
Delaware (Thomson Reuters) – highly skeptical when AI is trained on proprietary, curated content to build a direct competitor.
N.D. Cal. (Anthropic / Meta) – more open to fair use where content is lawfully acquired and the AI model is considered transformative, but not when developers hoard pirated content.
Media & music: NYT v. OpenAI, Disney/Universal v. Midjourney, and Suno/Udio
Meanwhile, The New York Times v. OpenAI / Microsoft continued as one of the most closely watched AI cases. In 2025, the court issued a sweeping preservation order requiring OpenAI to retain and segregate ChatGPT and API output logs, then later allowed OpenAI to resume normal deletion after the order expired in September. In November, Magistrate Judge Ona Wang ordered OpenAI to produce some 20 million ChatGPT logs, a stark reminder that product logs can become discoverable evidence in AI litigation.
In the media space, Disney and Universal sued Midjourney this year for alleged copyright infringement related to image training, marking the first major visual-media plaintiffs in the AI space.
Music labels likewise intensified litigation against AI music generators like Suno and Udio; by late 2025, Warner Music had settled and pivoted into a licensing partnership with Suno, allowing licensed AI models and artist opt-ins. This signals a likely future: litigation leading to structured licensing deals instead of pure prohibition.
Emerging frontiers: Trade secrets, trademarks, and data promises
This year, we also saw new angles:
A proposed class action against Figma alleges the company used customers’ design files to train AI without consent, focusing on misappropriation of confidential information and broken data promises rather than pure copyright.
OverDrive v. OpenAI accuses OpenAI of trademark infringement for naming its video model “Sora” in a way that allegedly conflicts with OverDrive’s existing “Sora” library app.
Strategic IP lesson for 2026: Build a documented data-provenance strategy. Track what data is used, how it was obtained, and under what license; wall off dubious sources (pirated sites, competitor headnotes, confidential customer content) and revisit your public promises about “never” using certain data for training.
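The documented data-provenance strategy above can be sketched as a simple ledger that records how each corpus was obtained and walls off dubious sources before training. This is a minimal illustration under stated assumptions, not a compliance tool; every class, field, and source name here is hypothetical:

```python
from dataclasses import dataclass, field
from enum import Enum

class SourceRisk(Enum):
    """Coarse risk tiers suggested by the 2025 rulings."""
    LICENSED = "licensed"            # explicit license in hand
    LAWFULLY_ACQUIRED = "lawful"     # purchased or public, no license dispute
    DUBIOUS = "dubious"              # pirated sites, competitor content, confidential customer data

@dataclass
class DataSource:
    name: str
    acquisition: str      # how it was obtained (purchase, scrape, customer upload, ...)
    license_terms: str    # the governing license, or "none"
    risk: SourceRisk

@dataclass
class ProvenanceLedger:
    sources: list = field(default_factory=list)

    def register(self, source: DataSource) -> None:
        """Record a corpus and its provenance before it touches training."""
        self.sources.append(source)

    def training_eligible(self) -> list:
        """Wall off dubious sources from the training pipeline."""
        return [s for s in self.sources if s.risk is not SourceRisk.DUBIOUS]
```

Even a ledger this simple gives you the artifact the 2025 cases reward: a record of what data you used, how you got it, and why you believed you could train on it.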
Enforcement: FTC’s “AI-Washing” Crackdown and Agentic AI Claims
On the enforcement front, the Federal Trade Commission (FTC) made clear that there is no AI exemption from existing consumer-protection laws.
Operation AI Comply and “AI-washing”
Building on a 2024 announcement that “using AI tools to trick, mislead, or defraud people is illegal,” the FTC has now brought at least a dozen “AI-washing” cases, targeting companies that overstate what their AI does or mislead consumers about AI-powered earnings and performance claims. In August 2025, the FTC sued Air AI, alleging deceptive claims that its agentic AI could fully replace human sales reps and deliver unrealistic business results, while also raising concerns about exaggerated “AI-powered” marketing around a business opportunity scheme.
Key themes:
Claiming “full automation” or “no humans needed” without proof is risky.
Exaggerated ROI/earnings tied to “AI” are classic unsubstantiated claims.
Labeling something as “AI-powered” when it’s not meaningfully different from a standard SaaS tool can be deceptive.
Strategic enforcement lesson for 2026:
Run all AI product and marketing copy through a truth-in-advertising filter:
Can we prove this claim with competent evidence?
Are we implying capabilities (e.g., “human-level,” “guaranteed replacement of employees”) we can’t substantiate?
Are we clear about limitations, guardrails, and human oversight?
New Statutes & Regulatory Frameworks: EU AI Act, Colorado, and State Patchwork
EU AI Act: Obligations start phasing in
The EU Artificial Intelligence Act formally entered into force on August 1, 2024, but 2025 is when the first obligations started to bite.
Key 2025–2026 milestones:
Feb 2, 2025 – Ban on “unacceptable-risk” AI systems (e.g., social scoring, certain manipulative systems) and AI literacy obligations.
Aug 2, 2025 – Governance rules and obligations for general-purpose AI (GPAI) providers take effect, including documentation, transparency and some risk-management obligations.
Aug 2, 2026 – Aug 2, 2027 – The full high-risk framework for AI embedded in regulated products, sectoral compliance rules, and national AI sandboxes come online.
If you build or deploy AI in the EU (or serve EU users), 2025 was the year to start classifying use cases and mapping them to future obligations.
Colorado AI Act: The first comprehensive U.S. AI statute
Colorado’s SB 24-205 (Colorado Artificial Intelligence Act / CAIA), signed in 2024, has been under intense scrutiny in 2025 but remains on track to take effect February 1, 2026.
Key features:
Risk-based approach similar to the EU AI Act.
Focus on preventing algorithmic discrimination by “high-risk” AI systems. (NAAG)
Obligations for both developers and deployers, including risk assessments, notice to consumers, and documentation.
This is the first broad, state-level AI framework in the U.S.—and it’s influencing drafts in other states.
California & other states: Deepfakes, elections, and transparency
California and other states continued to enact narrower, issue-specific AI laws, often around election integrity and deepfakes:
California has laws requiring disclosures on AI-generated political ads and manipulated media used in campaign communications.
Additional bills (like AB 2839 and AB 2655) target election-related deepfake disinformation and require platforms to block or label deceptive AI-generated political content during sensitive pre-election periods.
California also advanced an AI Transparency Act aimed at labeling or watermarking AI content and addressing harms from non-consensual sexual deepfakes.
In November 2025, a bipartisan group of 35 state attorneys general urged Congress not to preempt state AI laws, highlighting state-level momentum around AI harms such as chatbots causing injuries, discrimination, and deepfake abuse.
Strategic regulatory lesson for 2026:
You should assume a patchwork:
EU: horizontal, comprehensive AI Act.
U.S. states: sector- and harm-specific rules (discrimination, elections, deepfakes, consumer AI).
Vertical rules: financial, health, employment, housing, etc.
Building a single, global AI risk-management framework that can be tuned to local rules will be more sustainable than playing whack-a-mole with individual laws.
Strategy for 2026: Practical AI Compliance Priorities
Given this 2025 landscape, here are concrete planning priorities for 2026.
Build an AI inventory and risk map
Catalogue all AI systems you develop or deploy (internal tools, customer-facing features, vendor models).
Tag each system by jurisdiction, purpose, and risk (e.g., customer scoring, hiring, health, safety-critical, election content).
Map each category to obligations under the EU AI Act, Colorado AI Act, and relevant state deepfake / discrimination laws.
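The inventory-and-mapping steps above could be captured in a structure as simple as the following sketch. The obligation lists and risk labels are illustrative placeholders only, not legal classifications under the EU AI Act, CAIA, or any other statute; real mapping requires counsel's analysis:

```python
from dataclasses import dataclass

# Hypothetical obligation map keyed on (jurisdiction, risk tier).
# Entries here are simplified stand-ins for the real statutory duties.
OBLIGATIONS = {
    ("EU", "high"): ["conformity assessment", "technical documentation", "human oversight"],
    ("Colorado", "high"): ["risk assessment", "consumer notice", "documentation"],
}

@dataclass
class AISystem:
    name: str
    jurisdictions: tuple   # where it is deployed or serves users
    purpose: str           # e.g. "hiring screen", "support chatbot"
    risk: str              # "high" / "limited" / "minimal", per the applicable framework

    def obligations(self) -> list:
        """Collect the mapped duties across every jurisdiction the system touches."""
        duties = []
        for jurisdiction in self.jurisdictions:
            duties.extend(OBLIGATIONS.get((jurisdiction, self.risk), []))
        return duties
```

For example, a high-risk hiring screen deployed in both the EU and Colorado would surface the combined duty list, while a minimal-risk support chatbot maps to none, which is exactly the triage the inventory is meant to produce.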
Clean up your training data and contracts
Document sources and licenses for training corpora.
Avoid or segregate pirated or obviously unauthorized content; it is now clearly litigated territory.
Update customer contracts and privacy notices to be explicit (and honest) about whether customer data will be used for training, and on what terms. Cases like the Figma lawsuit show how quickly this can become a trade secret and data-privacy problem.
Tighten AI marketing and sales claims
In light of the FTC’s “AI-washing” enforcement:
Scrub your website, decks, and sales scripts for overblown AI claims (“fully autonomous,” “guaranteed 10x revenue,” “no human oversight needed”).
Document evidence for material claims, including benchmarks, A/B tests, or client case studies.
Train marketing and sales teams on what they can and cannot say about AI.
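A first-pass scrub of marketing copy can be as simple as a phrase scan that flags the claim patterns the FTC has targeted. The pattern list below is a hypothetical starting point, and pattern matching is no substitute for legal review of what each claim actually implies:

```python
import re

# Hypothetical phrases drawn from the FTC themes discussed above;
# extend and tune this list with counsel before relying on it.
RISKY_PATTERNS = [
    r"fully autonomous",
    r"no human (oversight|needed)",
    r"guaranteed \d+x",
    r"replaces? (all )?(your )?employees",
]

def flag_risky_claims(copy: str) -> list:
    """Return risky AI-claim phrases found in marketing copy, for human review."""
    hits = []
    for pattern in RISKY_PATTERNS:
        for match in re.finditer(pattern, copy, flags=re.IGNORECASE):
            hits.append(match.group(0))
    return hits
```

Running every website page, deck, and sales script through a scan like this before publication gives the marketing-review step a concrete gate, with flagged phrases routed to legal for substantiation.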
Prepare for discovery in AI litigation
Cases like NYT v. OpenAI show that courts are willing to order production of massive volumes of logs and training records.
Implement data-retention policies that balance privacy, storage cost, and anticipated litigation needs.
Ensure your logging and observability systems avoid storing more personal data than necessary but still capture enough metadata to defend your systems (e.g., to show filtering, safety measures, and provenance).
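One way to sketch that logging balance: keep the defensibility metadata, pseudonymize identifiers, and drop free text. The field names below are assumptions for illustration, not a real logging schema:

```python
import hashlib

# Metadata worth retaining to defend the system (filtering, safety, provenance).
METADATA_KEYS = {"timestamp", "model_version", "safety_filter_triggered", "source_corpus"}

def minimize_log(record: dict) -> dict:
    """Keep defensibility metadata, pseudonymize the user, drop free-text content."""
    out = {k: v for k, v in record.items() if k in METADATA_KEYS}
    if "user_id" in record:
        # A salted or keyed hash would be stronger; plain SHA-256 keeps the sketch short.
        out["user_hash"] = hashlib.sha256(str(record["user_id"]).encode()).hexdigest()[:16]
    return out
```

Note the tension the NYT v. OpenAI preservation order exposed: any minimization or deletion policy must yield to active litigation holds, so the retention logic needs an explicit hold-override path.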
Stand up cross-functional AI governance
For most organizations, AI is no longer “just an IT issue.” Consider:
An AI Governance Committee with legal, security, product, and compliance represented.
A lightweight but formal AI impact assessment process for higher-risk deployments (hiring, lending, health, elections, safety-critical use).
Regular updates to the board on AI risk and opportunity, especially as EU and state laws phase in by 2026.
What All of This Means for 2026
If 2023–2024 were the years of AI experimentation, 2025 was the year courts and regulators began to tighten the frame. The pattern is clear:
Data provenance and licensing will decide many copyright disputes.
Truthfulness and transparency will drive enforcement around AI marketing and consumer protection.
Risk-based frameworks (EU, Colorado, state laws) will reward organizations that can explain how their models work, what data they use, and what safeguards they put in place.
For companies building or deploying AI, 2026 is not the time to pause innovation—but it is the time to professionalize your AI compliance program. Please feel free to contact our law firm if you’d like help auditing your AI systems, updating your contracts and product claims, or building an AI governance framework tailored to your risk profile.
December 29, 2025 | by Law Offices of Salar Atrizadeh
