X Updates Explained: Starter Packs, Sponsored Post Tags, and the EU Investigation Into Grok

  • X Adds Starter Packs To Help Users Find Relevant Creators [1]

X has begun rolling out "Starter Packs" (also referred to as "Starterpacks"), a discovery feature that groups recommended accounts into topic-based follow lists designed to help users, particularly new users, quickly build a relevant feed. Social Media Today reports that the concept follows a format popularized by Bluesky in 2024, where curated lists reduce friction during onboarding by offering instant, interest-based networks of accounts to follow. [1]

Separate reporting indicates the feature was publicly confirmed by X's head of product, Nikita Bier, and is intended to make it easier for users to identify creators and communities aligned with specific interests, rather than starting from a blank follow graph. [2][3]

What changed

  • X introduced "Starter Packs" as curated, topic-based follow lists:
    Social Media Today reports that Starter Packs are designed to help users discover relevant accounts by niche, giving users a faster way to populate their feed with creators and voices in specific interest categories. [1]
  • The feature mirrors Bluesky's "Starter Packs" discovery approach:
    Social Media Today notes that X's rollout is explicitly aligned with the Starter Packs trend first popularized on Bluesky, which used curated lists to help users find relevant communities when joining a new platform. [1][2]
  • Starter Packs emphasize fast following decisions and structured discovery:
    TechCrunch describes Xโ€™s implementation as a feature that helps users find who to follow through curated lists, with the goal of accelerating discovery and connection-building across interests. [2]

Why X is doing this

X's product direction reflects a known onboarding problem in social platforms: new users often churn when they cannot quickly find people to follow, leaving their feed empty or irrelevant. Curated follow lists are a direct response to that retention risk.

Social Media Today frames Starter Packs as a practical discovery mechanism for onboarding, reducing dependence on manual search and making it easier to enter established communities. TechCrunch similarly positions the tool as a way to simplify following and help users connect with relevant creators more quickly. [1][2]

Practical implications for users, creators, and brands

  • New users may build a relevant feed faster:
    Topic-based lists shorten the setup process and reduce the likelihood of an "empty feed" experience, which is a common barrier to early retention on social platforms. [1][2]
  • Creators can gain visibility through inclusion in curated packs:
    If Starter Packs become a widely used onboarding surface, being listed in high-traffic interest packs could become a meaningful distribution advantage, especially for niche creators who are harder to find via keyword search alone. [1]
  • Discovery may shift from search-first to list-first onboarding:
    Curated collections can change how users discover accounts, moving discovery toward pre-built thematic bundles rather than individual account discovery through browsing, hashtags, or search. [1][2]

Overall, X's rollout of Starter Packs formalizes a topic-based onboarding path intended to make following decisions easier and speed up community formation. The move aligns X with discovery mechanics that have gained traction on competitor platforms, and it signals a continued effort to improve early user experience by making creator and interest discovery more structured and immediate. [1][2][4]

  • X Experiments With Paid Promotion Tags [1]

X is testing new in-stream "paid promotion" (paid partnership) tags that would allow creators to label sponsored posts directly inside the platform interface, rather than relying primarily on manual disclosures such as "Ad," "Promoted Content," or hashtags like #ad. Social Media Today reports the feature was identified through app testing signals shared by app researcher Nima Owji, indicating X is developing new content tags to make paid promotions easier to disclose within the feed. [1][4]

The move would align X more closely with common branded-content disclosure systems used by other major social platforms, while also supporting X's broader push to attract creators and formalize creator monetization workflows. [1]

What changed

  • X is developing native "paid promotion" disclosure tags (test):
    Some early indicators show X building new "content tags" that can be applied to posts to indicate a paid partnership or sponsored promotion inside the feed. [1][3][5]
  • Disclosure could shift from manual text/hashtags to platform-applied labels:
    X currently requires that organic posts involved in paid partnerships include clear disclosure language (e.g., "Ad" or "Promoted Content"), which creators often implement via hashtags like #ad. A dedicated tag would standardize and simplify that disclosure behavior. [1][2][3]
  • No performance analytics layer is shown in the early examples (as reported):
    Social Media Today notes that the tag, as surfaced in testing, does not appear to include brand-facing performance reporting or analytics features at this stage. [1] Social Samosa reported the same limitation. [3]

Why X is doing this

X has been increasing its emphasis on creator activity and monetization, and a native sponsorship disclosure tool reduces friction for both creators and brands by making commercial labeling more consistent and visible. Social Media Today framed the potential tag as a small but meaningful step toward more streamlined, transparent promotions, particularly as X seeks to grow creator participation and paid content opportunities on the platform. [1]

A native label also strengthens compliance alignment. X's Paid Partnerships Policy places responsibility on users to disclose commercial content clearly and to comply with applicable laws and advertising regulations (including FTC endorsement guidance in the U.S.). A platform-level disclosure mechanism can reduce ambiguity around whether disclosure is sufficiently "clear and conspicuous." [2]

Practical implications for creators, brands, and users

  • Creators may get a simpler, standardized disclosure workflow:
    A built-in tag could reduce reliance on manual wording and hashtags, making disclosures easier to apply consistently across sponsored posts. [1][2][3]
  • Brands may gain clearer visibility into sponsored content:
    Even without analytics features, a native "paid promotion" label can make sponsorship relationships more transparent to audiences and easier for brands to verify in-feed. [1][3]
  • Disclosure norms on X may become more enforceable:
    If a standardized tag becomes widely adopted, it can support clearer policy enforcement and reduce disputes about whether posts meet disclosure requirements. X's policy already outlines expectations and potential enforcement actions for violations. [2]

Overall, X's test of in-stream paid promotion tags suggests a move toward platform-native branded content labeling, shifting disclosure from user-typed disclaimers toward structured, product-level indicators. The experiment aligns with X's creator monetization focus while also reinforcing clearer sponsorship transparency expectations in the feed. [1][2][3]

  • EU Launches Investigation Into X Over Grok-Generated Images, Expanding Digital Services Act Scrutiny of Risk Controls and Recommendation Systems [1]

The European Commission has opened a new formal investigation under the EU Digital Services Act (DSA) into X (formerly Twitter), following reports that X's AI assistant Grok was used to generate and spread non-consensual sexualised ("nudified") images, including content involving minors. Social Media Today reported the Commission's move as a response to public concerns that X may not have adequately assessed and mitigated the risks created by deploying Grok's image functionalities on the platform. [1]

In its official announcement dated 26 January 2026, the European Commission said the proceeding will examine whether X complied with DSA obligations related to systemic risk management around Grok's capabilities. In parallel, the Commission said it has extended its ongoing DSA investigation (opened in December 2023) into X's risk management obligations concerning its recommender systems. [2]

What changed

  • European Commission opened a new DSA investigation focused on Grok-related risks:
    The Commission announced a formal proceeding to assess whether X properly evaluated and reduced risks tied to Grok's functionalities and the potential spread of illegal and harmful content, including manipulated explicit imagery. [2]
  • Existing DSA scrutiny of X's recommender systems was expanded:
    Alongside the new Grok investigation, the Commission stated it extended its ongoing inquiry, launched in December 2023, into X's compliance with DSA obligations concerning recommender systems and risk management. [2]
  • The trigger involved viral "nudification" and sexually explicit manipulated imagery:
    Multiple outlets reported that the investigation follows outrage over Grok being used to create digitally altered sexualised images of real people without consent, including cases involving children. [3][4][5]

Why the EU is doing this

Under the DSA, very large online platforms are required to identify, assess, and mitigate systemic risks, particularly risks linked to the distribution of illegal content and harms to fundamental rights and user safety. The Commission's announcement frames the Grok probe as a test of whether X met these obligations when integrating and operating Grok features in the EU. [2]

Reporting around the decision also notes that regulators are increasingly focused on generative AI tools embedded in social platforms, especially when image manipulation can facilitate non-consensual intimate imagery, harassment, and gender-based violence. [3][6]

Practical implications for X, users, and advertisers

  • Potential for enforcement actions and remedial requirements under the DSA:
    A formal DSA proceeding can lead to requests for information, compliance commitments, interim measures, or penalties depending on findings and cooperation. The Commission's announcement signals a deeper compliance review rather than an informal inquiry. [2]
  • Product changes and restrictions to image-generation features may accelerate:
    Coverage of the incident has indicated that restrictions to image editing/generation features can be used as risk controls when misuse becomes widespread, and regulators will examine whether mitigations were timely and effective. [3][6]
  • Higher compliance expectations for AI features inside social feeds:
    The EU action reinforces that integrating generative AI tools into recommendation-driven platforms is treated as a systemic risk question, especially where AI outputs may materially increase distribution of illegal or harmful content. [2][5]

Overall, the EU's new DSA investigation signals heightened regulatory pressure on X over its integration of Grok: specifically, whether X properly assessed and mitigated risks linked to the creation and spread of illegal or harmful manipulated sexual imagery. The Commission's decision also expands scrutiny of X's recommender systems obligations, reinforcing that risk management under the DSA applies both to AI features and to the distribution mechanisms that amplify their outputs. [1][2]