X says its Terms of Service will change Jan. 15, 2026, expanding how the platform defines user “Content” and adding contract language tied to the operation and protection of its AI systems.
The current terms, dated Nov. 15, 2024, remain in effect until the 2026 version takes over.
A core revision is that X now treats AI-era interactions as “Content” that users are responsible for, alongside posts and other materials.
How X’s updated terms redefine ownership and responsibility in the AI era
Users are responsible for Content that includes “inputs, prompts, outputs,” and information “obtained or created through the Services,” and X cautions users to provide, create, or generate only what they are comfortable sharing.
The 2024 terms framed responsibility around “any Content you provide,” without expressly naming prompts and outputs. That framing left Grok-style usage outside the contract’s core vocabulary.
That broadened definition sits alongside a license that already grants X wide reuse rights.
Users grant a worldwide, royalty-free, sublicensable license to use, copy, reproduce, process, adapt, modify, publish, transmit, display, and distribute Content “for any purpose,” including analyzing it and training machine learning and AI models.
X also states that no compensation is paid for those uses and that access to the service is “sufficient compensation.” That makes the prompts and outputs language consequential for users who treat AI chats as separate from public posting.
The 2026 draft also adds a specific prohibited-conduct clause aimed at AI circumvention.
“Misuse” now includes attempts to bypass platform controls, “including through ‘jailbreaking’, ‘prompt engineering or injection’.”
That phrasing does not appear in the comparable misuse list in the 2024 terms. It gives X a contract-based hook to cite when enforcing against attempts to defeat safeguards on AI features, rather than relying solely on product rules or policy guidance.
Europe-specific language changes how the document describes content enforcement and user challenges.
The summary and content rules note that EU and UK law can require enforcement not only against illegal content but also against content described as “harmful” or “unsafe.”
Examples include bullying or humiliating content, eating disorder content, and content about methods of self-harm or suicide.
The 2026 terms add UK-specific language describing how users can challenge enforcement actions under the UK Online Safety Act 2023.
How X’s updated terms expand enforcement, data controls, and user liability
The updated terms keep X’s restrictions on automated access and data collection, including a liquidated-damages schedule tied to large-scale viewing.
Crawling or scraping is barred “in any form, for any purpose” without prior written consent, and access is generally limited to “published interfaces.”
The terms set liquidated damages at $15,000 per 1,000,000 posts requested, viewed, or accessed in any 24-hour period when a violation involves that volume.
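To put that schedule’s scale in plain numbers, here is a minimal illustrative sketch. It uses only the $15,000-per-1,000,000-posts-in-24-hours figure quoted above; the assumption that damages accrue per whole block of one million posts is mine, and the contract’s own text governs how volumes are actually counted.

```python
# Illustrative only: applies the quoted $15,000 per 1,000,000 posts per
# 24-hour period. Counting whole one-million-post blocks is an assumption,
# not language taken from the terms.
def liquidated_damages(posts_accessed_in_24h: int) -> int:
    """Scheduled damages in dollars for a single 24-hour period."""
    million_post_blocks = posts_accessed_in_24h // 1_000_000
    return million_post_blocks * 15_000

print(liquidated_damages(5_000_000))  # 75000: five million posts in one day
```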
The 2026 draft narrows the related facilitation wording so that it applies when a user induces or knowingly facilitates such violations.
Dispute provisions remain anchored in Texas, with narrower changes that can extend the deadline for some state-law claims.
Disputes must proceed in federal or state courts in Tarrant County, Texas. The 2026 text adds that the forum and choice-of-law provisions apply to “pending and future disputes” regardless of when the underlying conduct occurred.
The 2024 terms more specifically referenced the U.S. District Court for the Northern District of Texas as the federal venue option, alongside Tarrant County state courts.
The 2026 draft also splits time limits: one year for federal claims and two years for state claims. That replaces the single one-year clock in the earlier language.
X also continues to limit how users can pursue claims and what they can recover if they win. The agreement includes a class-action waiver that bars users from bringing claims as a class or in a representative proceeding in many cases, and caps X’s liability at $100 per covered dispute.
Those provisions have drawn criticism in broader commentary about whether the terms reduce practical remedies even when users allege material harm.
Critics warn X’s terms changes could chill research and speech
Public pushback around the shift has often centered on provisions that predate the 2026 draft and still appear in it, including venue selection and scraping penalties.
The Knight First Amendment Institute said X’s terms “will stifle independent research” and called the approach “a disturbing move that the company should reverse,” according to its statement.
The Center for Countering Digital Hate said in November 2024 that it would quit X ahead of a terms change and criticized the Texas venue requirement as a tactic to steer disputes toward favorable courts.
The Reuters Institute for the Study of Journalism has also described how lawsuits can have “a chilling effect” on critics.
Coverage of users leaving the platform has often framed the AI training and licensing provisions as the consumer-facing hook.
| Clause | Current ToS (Nov. 15, 2024) | Future ToS (effective Jan. 15, 2026) |
|---|---|---|
| What counts as “Content” | User responsibility centered on content a user provides | Explicitly includes “inputs, prompts, outputs” and information obtained or created through the services |
| AI circumvention | No explicit “jailbreaking” or prompt-injection clause | Bans bypass attempts “including through ‘jailbreaking’, ‘prompt engineering or injection’” |
| EU/UK enforcement framing | No UK Online Safety Act challenge-process callout in the summary | Adds “harmful/unsafe” examples and UK Online Safety Act 2023 redress language |
| U.S. venue and claim windows | Northern District of Texas (federal) or Tarrant County (state); one-year deadline | Tarrant County federal or state courts; one year for federal claims, two years for state claims; forum provisions apply to pending and future disputes |
| Scraping penalty | $15,000 per 1,000,000 posts requested, viewed, or accessed in 24 hours when tied to a violation | Same schedule, with facilitation narrowed to conduct a user “induces or knowingly facilitates” |
With the Jan. 15, 2026, effective date, X’s contract language treats prompts and generated outputs as user Content under the platform’s licensing and enforcement framework.
It also adds “jailbreaking” and prompt injection to its prohibited-conduct list.
Author: Liam ‘Akiba’ Wright
