
Conversation

@touchmeangel touchmeangel commented Oct 4, 2025

Check issue #389. Updated get_trends from the legacy guide.json endpoint to GenericTimelineById

Summary by Sourcery

Switch get_trends to use the GenericTimelineById GraphQL endpoint instead of the legacy v1.1 guide API, introduce feature flags and helper methods for explore_page and generic_timeline_by_id, add a mapping of timeline IDs for trending categories, and update the Trend model to parse snake_case fields from the new response.

Enhancements:

  • Introduce EXPLORE_PAGE and GENERIC_TIMELINE_BY_ID GraphQL endpoints and corresponding client methods
  • Add EXPLORE_PAGE_FEATURES, GENERIC_TIMELINE_FEATURES feature-flag constants and TIMELINE_IDS mapping for trend categories
  • Refactor get_trends to retrieve trends via generic_timeline_by_id and simplify category handling

Summary by CodeRabbit

  • New Features

    • Added support for exploring trends by category (trending, for-you, news, sports, entertainment).
    • Introduced new endpoint for accessing explore page content.
  • Bug Fixes

    • Improved data extraction and cursor handling for trends and replies, ensuring more reliable pagination.
  • Refactor

    • Standardized internal data model naming conventions for consistency.
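The category-to-timeline dispatch described in these summaries can be sketched as a simple mapping lookup with an early return for unmapped categories. The `TIMELINE_IDS` values below are illustrative placeholders, not the real encoded IDs from twikit/constants.py:

```python
# Sketch of the category -> timeline ID dispatch; the IDs are placeholders.
TIMELINE_IDS = {
    'trending': 'VGltZWxpbmU6-trending-placeholder',
    'for-you': 'VGltZWxpbmU6-for-you-placeholder',
    'news': 'VGltZWxpbmU6-news-placeholder',
}

def resolve_timeline_id(category: str):
    """Return the encoded timeline ID for a category, or None if unmapped."""
    return TIMELINE_IDS.get(category)
```

An unmapped category yields `None`, which lets the caller return an empty trend list without issuing a malformed API request.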

sourcery-ai bot commented Oct 4, 2025

Reviewer's Guide

This PR replaces the old v1.1 guide-based trends fetch with a GraphQL GenericTimelineById call, adds new endpoints and feature-flag maps, and updates the Trend model to align with the new response shape.

Sequence diagram for updated get_trends flow using GenericTimelineById

sequenceDiagram
    participant Client
    participant GQL
    participant Endpoint as GenericTimelineById Endpoint
    participant Trend
    Client->>GQL: generic_timeline_by_id(timeline_id, count)
    GQL->>Endpoint: gql_get(...)
    Endpoint-->>GQL: Response (entries)
    GQL-->>Client: Response (entries)
    loop For each entry
        Client->>Trend: Trend(self, entry['content']['itemContent'])
    end
    Client-->>Client: Return list of Trend objects

Updated class diagram for Trend model

classDiagram
    class Trend {
        - _client: Client
        - name: str
        - tweets_count: str | None
        - domain_context: str
        - grouped_trends: list[str]
        + __init__(client: Client, data: dict)
        + __repr__() -> str
    }
    Trend <-- Client

Class diagram for new GQL methods and endpoint constants

classDiagram
    class GQL {
        + explore_page()
        + generic_timeline_by_id(timeline_id, count)
    }
    class Endpoint {
        + EXPLORE_PAGE
        + GENERIC_TIMELINE_BY_ID
    }
    GQL --> Endpoint

File-Level Changes

Change / Details / Files
Refactor get_trends to use GenericTimelineById
  • Map input category to timeline_id via TIMELINE_IDS
  • Replace v11.guide call with gql.generic_timeline_by_id
  • Early-return empty list when timeline_id is missing
  • Adjust retry logic indentation
  • Filter entries by 'trend' prefix and build Trend objects
twikit/client/client.py
Add new GraphQL endpoints and client methods
  • Introduce EXPLORE_PAGE and GENERIC_TIMELINE_BY_ID endpoints
  • Implement explore_page() with EXPLORE_PAGE_FEATURES
  • Implement generic_timeline_by_id() with GENERIC_TIMELINE_FEATURES
twikit/client/gql.py
Define feature-flag constants and timeline IDs
  • Add EXPLORE_PAGE_FEATURES and GENERIC_TIMELINE_FEATURES maps
  • Add TIMELINE_IDS mapping for trending categories
twikit/constants.py
Update Trend model to match new API schema
  • Rename trendMetadata to trend_metadata and related fields
  • Rename metaDescription/domainContext/groupedTrends keys
  • Adjust tweets_count type and TODO-int conversion
twikit/trend.py
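The Trend-model renames listed above follow the usual camelCase-to-snake_case convention. A generic conversion helper illustrates the mapping; the actual PR renames the keys by hand rather than using a helper like this:

```python
import re

# Illustrative camelCase -> snake_case conversion mirroring the Trend
# model key renames (trendMetadata -> trend_metadata, etc.).
def to_snake(name: str) -> str:
    # Insert an underscore before each uppercase letter (except a leading
    # one), then lowercase the whole string.
    return re.sub(r'(?<!^)(?=[A-Z])', '_', name).lower()
```

For example, `to_snake('trendMetadata')` yields `'trend_metadata'` and `to_snake('groupedTrends')` yields `'grouped_trends'`.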

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.



coderabbitai bot commented Oct 4, 2025

Warning

Rate limit exceeded

@touchmeangel has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 16 minutes and 20 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 4844287 and a51f349.

📒 Files selected for processing (1)
  • twikit/client/client.py (5 hunks)

Walkthrough

The PR refactors data retrieval for trends and timelines by introducing new GraphQL endpoints (EXPLORE_PAGE, GENERIC_TIMELINE_BY_ID) with accompanying feature flags and a timeline ID mapping. Client logic is updated to extract data from itemContent and safely handle cursors. The trend module adopts snake_case naming conventions for consistency.

Changes

Cohort / File(s) Summary
Build & Environment
.gitignore
Added __pycache__ to ignored patterns; re-specified /node_modules entry with no functional change.
Constants & Configuration
twikit/constants.py
Added three new public constants: EXPLORE_PAGE_FEATURES (feature flags for explore page), GENERIC_TIMELINE_FEATURES (feature flags for generic timelines), and TIMELINE_IDS (mapping of timeline categories to encoded IDs).
GraphQL API Layer
twikit/client/gql.py
Introduced two new endpoints: EXPLORE_PAGE and GENERIC_TIMELINE_BY_ID in Endpoint enum; added corresponding GQLClient methods explore_page() and generic_timeline_by_id() with appropriate feature flag configurations; updated imports for new feature constants.
Client Logic Updates
twikit/client/client.py
Added TIMELINE_IDS import; refactored _get_more_replies() to extract next_cursor from itemContent and only schedule fetch if cursor exists; updated get_tweet_by_id() to derive sr_cursor from itemContent with null-safety guards; refactored get_trends() to use new generic_timeline_by_id endpoint, extract trend data from itemContent, and apply updated cursor handling for reply continuation.
Data Model
twikit/trend.py
Renamed keys from camelCase to snake_case: trendMetadata → trend_metadata, metaDescription → meta_description, domainContext → domain_context, groupedTrends → grouped_trends; updated the tweets_count type hint from `int` to `str | None`.
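The null-safe cursor handling the walkthrough describes can be sketched with chained `dict.get` calls, so a missing 'content' or 'itemContent' key yields `None` instead of raising `KeyError`. The entry shapes below are illustrative, not real API payloads:

```python
# Minimal sketch of null-safe cursor extraction from timeline entries.
def extract_cursor(entry: dict):
    # Each .get falls back to an empty dict, so missing keys never raise.
    item_content = entry.get('content', {}).get('itemContent', {})
    return item_content.get('value')

entries = [
    {'entryId': 'trend-1', 'content': {'itemContent': {'trend': {'name': 'example'}}}},
    {'entryId': 'cursor-bottom-1', 'content': {'itemContent': {'value': 'ABC123'}}},
]

# Only treat the last entry as a cursor when its ID says so.
last = entries[-1]
cursor = extract_cursor(last) if last['entryId'].startswith('cursor') else None
```

A malformed entry such as `{'entryId': 'cursor-x'}` simply produces `None`, which the caller can use to skip scheduling a follow-up fetch.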

Sequence Diagram

sequenceDiagram
    actor User
    participant Client as twikit.client
    participant GQL as gql
    participant API as Twitter API

    User->>Client: get_trends()
    
    alt No timeline_id mapped
        Client-->>User: return []
    else Valid timeline_id exists
        Client->>GQL: generic_timeline_by_id(timeline_id, count)
        GQL->>API: POST GENERIC_TIMELINE_BY_ID endpoint
        API-->>GQL: response with entries
        
        GQL-->>Client: entries
        
        Client->>Client: Extract trend data from itemContent
        
        alt Valid cursor found
            Client->>Client: Schedule follow-up fetch for more replies
        end
        
        Client-->>User: Trend objects
    end

Estimated Code Review Effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • twikit/client/gql.py: Review new endpoint definitions and GQLClient methods to verify correct feature flag assignments and parameter passing.
  • twikit/client/client.py: Verify refactored cursor extraction logic (itemContent-based) across _get_more_replies(), get_tweet_by_id(), and get_trends() for consistency and null-safety; confirm early-exit conditions prevent malformed API calls.
  • twikit/trend.py: Validate snake_case renaming consistency; confirm tweets_count type change and TODO note align with actual data structure from API.

Poem

🐰 Whiskers twitching with delight,
New endpoints hopping into sight!
snake_case trails through trends so bright,
Cursors safely guarded right,
Timelines fetch with features tight! 🎉

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

Docstring Coverage ⚠️ Warning — Docstring coverage is 75.00%, below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (2 passed)

Description Check ✅ Passed — Check skipped because CodeRabbit's high-level summary is enabled.
Title Check ✅ Passed — The title 'get_trends deprecation update/fix' accurately reflects the main change: updating get_trends from the legacy API to GraphQL.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@touchmeangel changed the title from get_trends update/fix to get_trends deprecation update/fix Oct 4, 2025

@sourcery-ai sourcery-ai bot left a comment


Hey there - I've reviewed your changes and they look great!


Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3b18105 and 9ace371.

📒 Files selected for processing (5)
  • .gitignore (1 hunks)
  • twikit/client/client.py (2 hunks)
  • twikit/client/gql.py (3 hunks)
  • twikit/constants.py (1 hunks)
  • twikit/trend.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
twikit/client/client.py (2)
twikit/client/gql.py (1)
  • generic_timeline_by_id (312-317)
twikit/utils.py (1)
  • find_dict (111-127)
twikit/client/gql.py (1)
twikit/client/v11.py (1)
  • Endpoint (14-50)


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
twikit/client/client.py (1)

2602-2626: Consider converting recursive retry to a loop with backoff to prevent unbounded recursion.

The refactor correctly implements the timeline-based trend fetching with proper nested extraction. However, the recursive retry at line 2617 lacks protection against unbounded recursion. Since retry defaults to True and is passed unchanged through recursion, if the API consistently returns empty entries, the method will recurse indefinitely until hitting Python's recursion limit.

The codebase uses sleep-based polling for similar scenarios (e.g., media processing at line 1072). Consider replacing the recursive retry with an iterative loop:

-        if not entries:
-          if not retry:
-              return []
-          # Recall the method again, as the trend information
-          # may not be returned due to a Twitter error.
-          return await self.get_trends(category, count, retry, additional_request_params)
+        attempt = 0
+        while not entries and retry and attempt < 3:
+            await asyncio.sleep(1)
+            attempt += 1
+            response, _ = await self.gql.generic_timeline_by_id(timeline_id, count)
+            entries = [
+                i for i in find_dict(response, 'entries', find_one=True)[0]
+                if i['entryId'].startswith(entry_id_prefix)
+            ]
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ed6c000 and caa3e2c.

📒 Files selected for processing (1)
  • twikit/client/client.py (4 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
twikit/client/client.py (2)
twikit/client/gql.py (1)
  • generic_timeline_by_id (312-317)
twikit/utils.py (1)
  • find_dict (111-127)
🔇 Additional comments (1)
twikit/client/client.py (1)

21-21: LGTM: Import addition supports get_trends refactor.

The addition of TIMELINE_IDS to the constants import is necessary for the refactored get_trends method and follows the existing import pattern.

Comment on lines 1638 to 1642
            item_content = entry['item'].get('itemContent', {})
            reply_next_cursor = item_content.get('value')
            if reply_next_cursor:
                _fetch_more_replies = partial(self._get_more_replies,
                                              tweet_id, reply_next_cursor)

⚠️ Potential issue | 🔴 Critical

Fix cursor extraction to use correct entry and path.

Two issues here:

  1. The code checks entries[-1] but then accesses the stale loop variable entry (which happens to be the last loop value)
  2. Uses ['item']['itemContent'] instead of ['content']['itemContent'] (inconsistent with line 1526)

This breaks pagination for loading more top-level replies.

Apply this diff:

         if entries[-1]['entryId'].startswith('cursor'):
             # if has more replies
-            item_content = entry['item'].get('itemContent', {})
+            item_content = entries[-1].get('content', {}).get('itemContent', {})
             reply_next_cursor = item_content.get('value')
             if reply_next_cursor:
               _fetch_more_replies = partial(self._get_more_replies,
                                             tweet_id, reply_next_cursor)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

        if entries[-1]['entryId'].startswith('cursor'):
            # if has more replies
            item_content = entries[-1].get('content', {}).get('itemContent', {})
            reply_next_cursor = item_content.get('value')
            if reply_next_cursor:
                _fetch_more_replies = partial(self._get_more_replies,
                                              tweet_id, reply_next_cursor)
🤖 Prompt for AI Agents
In twikit/client/client.py around lines 1638 to 1642, the code incorrectly uses
the loop variable `entry` after iterating instead of explicitly referencing the
last entry and also uses the wrong key path ('item' instead of 'content') to
extract the cursor; change the logic to assign last_entry = entries[-1], read
item_content = last_entry.get('content', {}).get('itemContent', {}), extract
reply_next_cursor = item_content.get('value'), and if present set
_fetch_more_replies = partial(self._get_more_replies, tweet_id,
reply_next_cursor) so pagination uses the correct entry and path.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
twikit/client/client.py (1)

2565-2565: Remove unused additional_request_params parameter.

The additional_request_params parameter is declared but never used in the function body. This is dead code and should be removed, or implemented if the functionality is intended.

♻️ Duplicate comments (2)
twikit/client/client.py (2)

1618-1621: Fix cursor extraction to use correct source and path.

The code checks if reply is a cursor entry but then extracts from entry['item'] instead of reply['content']. This is inconsistent with the cursor extraction pattern used elsewhere (e.g., line 1526) and will fail to extract the cursor correctly.

Apply this diff:

                     if 'cursor' in reply.get('entryId'):
-                        item_content = entry['item'].get('itemContent', {})
+                        item_content = reply.get('content', {}).get('itemContent', {})
                         sr_cursor = item_content.get('value')
                         if not sr_cursor:
                             continue

1640-1644: Fix cursor extraction to use correct entry reference and path.

Two issues:

  1. The code checks entries[-1] but uses the stale loop variable entry (which is the last iterated value, not necessarily entries[-1])
  2. Uses entry['item'] instead of the correct path entries[-1]['content']

This breaks pagination for top-level replies.

Apply this diff:

     if entries[-1]['entryId'].startswith('cursor'):
         # if has more replies
-        item_content = entry['item'].get('itemContent', {})
+        item_content = entries[-1].get('content', {}).get('itemContent', {})
         reply_next_cursor = item_content.get('value')
         if reply_next_cursor:
           _fetch_more_replies = partial(self._get_more_replies,
                                         tweet_id, reply_next_cursor)
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e71a746 and 4844287.

📒 Files selected for processing (1)
  • twikit/client/client.py (5 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
twikit/client/client.py (2)
twikit/client/gql.py (1)
  • generic_timeline_by_id (312-317)
twikit/utils.py (1)
  • find_dict (111-127)
🔇 Additional comments (2)
twikit/client/client.py (2)

21-21: LGTM - Import supports refactored get_trends method.

The TIMELINE_IDS import is correctly added to support the new GraphQL-based trend retrieval.


2622-2627: LGTM - Trend extraction correctly handles nested payload.

The code now properly extracts the nested 'trend' key from itemContent and skips entries where trend data is missing, addressing the issue raised in previous reviews.

Comment on lines +2615 to +2619
          if not retry:
              return []
          # Recall the method again, as the trend information
          # may not be returned due to a Twitter error.
          return await self.get_trends(category, count, retry, additional_request_params)

⚠️ Potential issue | 🟠 Major

Prevent infinite recursion in retry logic.

The recursive call to get_trends on line 2619 doesn't set retry=False, which could cause infinite recursion if the API consistently returns no trend entries. Each retry will attempt another retry indefinitely.

Apply this diff:

       if not entries:
         if not retry:
             return []
         # Recall the method again, as the trend information
         # may not be returned due to a Twitter error.
-        return await self.get_trends(category, count, retry, additional_request_params)
+        return await self.get_trends(category, count, False, additional_request_params)
🤖 Prompt for AI Agents
twikit/client/client.py around lines 2615 to 2619: the retry path calls
get_trends recursively without flipping the retry flag, risking infinite
recursion if the API keeps returning no trends; change the recursive call to
pass retry=False (i.e., return await self.get_trends(category, count, False,
additional_request_params)) so only one retry is attempted, or alternatively
replace the recursion with a loop that decrements a retry counter and stops when
it reaches zero.
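A bounded-retry loop along the lines both reviewers suggest can be sketched as below. `fetch_entries` is a placeholder for the real generic_timeline_by_id call, and `max_retries` and `delay` are assumed knobs, not parameters from the codebase:

```python
import asyncio

# Illustrative bounded-retry loop, replacing the unbounded recursion
# flagged above: stop after max_retries attempts instead of recursing.
async def fetch_with_retry(fetch_entries, max_retries: int = 3, delay: float = 1.0):
    for attempt in range(max_retries + 1):
        entries = await fetch_entries()
        if entries:
            return entries
        if attempt < max_retries:
            # Back off briefly before retrying, mirroring the sleep-based
            # polling used elsewhere in the codebase.
            await asyncio.sleep(delay)
    return []
```

Because the loop counter is local, a persistently empty API response terminates after `max_retries + 1` calls rather than exhausting the recursion limit.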
