get_trends deprecation update/fix #390
Conversation
Reviewer's Guide

This PR replaces the old v11 guide-based trends fetch with a GraphQL GenericTimelineById call, adding new endpoints, feature-flag maps, and updating the Trend model to align with the new response shape.

Sequence diagram for the updated get_trends flow using GenericTimelineById

```mermaid
sequenceDiagram
    participant Client
    participant GQL
    participant "GenericTimelineById Endpoint"
    participant Trend
    Client->>GQL: generic_timeline_by_id(timeline_id, count)
    GQL->>"GenericTimelineById Endpoint": gql_get(...)
    "GenericTimelineById Endpoint"-->>GQL: Response (entries)
    GQL-->>Client: Response (entries)
    loop For each entry
        Client->>Trend: Trend(self, entry['content']['itemContent'])
    end
    Client-->>Client: Return list of Trend objects
```
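To make the flow above concrete, here is a minimal illustrative sketch of how the pieces could fit together. TIMELINE_IDS, find_dict, Trend, and generic_timeline_by_id are names from this PR, but the import paths, exact signatures, and the nested-'trend' check are assumptions, not the PR's literal code.

```python
# Illustrative sketch only: mirrors the sequence above, not the exact PR code.
from twikit.constants import TIMELINE_IDS   # category -> timeline ID mapping added by this PR
from twikit.trend import Trend
from twikit.utils import find_dict


async def get_trends_sketch(client, category: str, count: int = 20) -> list:
    timeline_id = TIMELINE_IDS.get(category)
    if timeline_id is None:
        return []  # no timeline ID mapped for this category

    # GenericTimelineById GraphQL call (added to twikit/client/gql.py in this PR)
    response, _ = await client.gql.generic_timeline_by_id(timeline_id, count)

    entries = find_dict(response, 'entries', find_one=True)[0]
    trends = []
    for entry in entries:
        item_content = entry.get('content', {}).get('itemContent', {})
        if 'trend' not in item_content:
            continue  # skip cursor and other non-trend entries
        trends.append(Trend(client, item_content))
    return trends
```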
Updated class diagram for the Trend model

```mermaid
classDiagram
    class Trend {
        - _client: Client
        - name: str
        - tweets_count: str | None
        - domain_context: str
        - grouped_trends: list[str]
        + __init__(client: Client, data: dict)
        + __repr__() -> str
    }
    Trend <-- Client
```
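A hedged reconstruction of the model above is sketched below; the snake_case payload keys (trend, trend_metadata, meta_description, domain_context, grouped_trends) are assumptions inferred from the PR summary and review comments rather than confirmed field names.

```python
class Trend:
    """Sketch of the updated Trend model per the diagram above, not the PR's literal code."""

    def __init__(self, client, data: dict) -> None:
        self._client = client
        # Assumed payload shape: snake_case fields from the new response,
        # with the trend object possibly nested under a 'trend' key.
        trend = data.get('trend', data)
        meta = trend.get('trend_metadata', {})
        self.name = trend.get('name')
        self.tweets_count = meta.get('meta_description')  # e.g. "12.3K posts" (assumed key)
        self.domain_context = meta.get('domain_context')
        self.grouped_trends = [t.get('name') for t in trend.get('grouped_trends', [])]

    def __repr__(self) -> str:
        return f'<Trend name="{self.name}">'
```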
Class diagram for the new GQL methods and endpoint constants

```mermaid
classDiagram
    class GQL {
        + explore_page()
        + generic_timeline_by_id(timeline_id, count)
    }
    class Endpoint {
        + EXPLORE_PAGE
        + GENERIC_TIMELINE_BY_ID
    }
    GQL --> Endpoint
```
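For orientation, a rough sketch of what the new constants and helpers might look like; the endpoint URLs use a '<query_id>' placeholder, and the gql_get signature, the variable names, and the feature-flag contents are assumptions, not values taken from the PR.

```python
GENERIC_TIMELINE_FEATURES = {}  # placeholder for the feature-flag map introduced by the PR


class Endpoint:
    # '<query_id>' is a placeholder; the real GraphQL query IDs are defined in the PR.
    EXPLORE_PAGE = 'https://x.com/i/api/graphql/<query_id>/ExplorePage'
    GENERIC_TIMELINE_BY_ID = 'https://x.com/i/api/graphql/<query_id>/GenericTimelineById'


class GQL:
    def __init__(self, http):
        # `http` stands in for whatever object exposes the authenticated gql_get helper.
        self.http = http

    async def explore_page(self):
        return await self.http.gql_get(Endpoint.EXPLORE_PAGE, {}, GENERIC_TIMELINE_FEATURES)

    async def generic_timeline_by_id(self, timeline_id: str, count: int = 20):
        # The 'timelineId' variable name is an assumption about the GraphQL contract.
        variables = {'timelineId': timeline_id, 'count': count}
        return await self.http.gql_get(
            Endpoint.GENERIC_TIMELINE_BY_ID, variables, GENERIC_TIMELINE_FEATURES
        )
```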
Walkthrough

The PR refactors data retrieval for trends and timelines by introducing new GraphQL endpoints (EXPLORE_PAGE, GENERIC_TIMELINE_BY_ID) with accompanying feature flags and a timeline ID mapping. Client logic is updated to extract data from itemContent and to handle cursors safely. The trend module adopts snake_case naming conventions for consistency.
Sequence diagram

```mermaid
sequenceDiagram
    actor User
    participant Client as twikit.client
    participant GQL as gql
    participant API as Twitter API
    User->>Client: get_trends()
    alt No timeline_id mapped
        Client-->>User: return []
    else Valid timeline_id exists
        Client->>GQL: generic_timeline_by_id(timeline_id, count)
        GQL->>API: POST GENERIC_TIMELINE_BY_ID endpoint
        API-->>GQL: response with entries
        GQL-->>Client: entries
        Client->>Client: Extract trend data from itemContent
        alt Valid cursor found
            Client->>Client: Schedule follow-up fetch for more replies
        end
        Client-->>User: Trend objects
    end
```
Estimated Code Review Effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Actionable comments posted: 1
📒 Files selected for processing (5)
- .gitignore (1 hunks)
- twikit/client/client.py (2 hunks)
- twikit/client/gql.py (3 hunks)
- twikit/constants.py (1 hunks)
- twikit/trend.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
twikit/client/client.py (2)
- twikit/client/gql.py (1): generic_timeline_by_id (312-317)
- twikit/utils.py (1): find_dict (111-127)

twikit/client/gql.py (1)
- twikit/client/v11.py (1): Endpoint (14-50)
Actionable comments posted: 2
🧹 Nitpick comments (1)
twikit/client/client.py (1)
2602-2626: Consider converting the recursive retry to a loop with backoff to prevent unbounded recursion.

The refactor correctly implements the timeline-based trend fetching with proper nested extraction. However, the recursive retry at line 2617 lacks protection against unbounded recursion. Since retry defaults to True and is passed unchanged through recursion, if the API consistently returns empty entries, the method will recurse indefinitely until hitting Python's recursion limit.

The codebase uses sleep-based polling for similar scenarios (e.g., media processing at line 1072). Consider replacing the recursive retry with an iterative loop:
```diff
-        if not entries:
-            if not retry:
-                return []
-            # Recall the method again, as the trend information
-            # may not be returned due to a Twitter error.
-            return await self.get_trends(category, count, retry, additional_request_params)
+        attempt = 0
+        while not entries and retry and attempt < 3:
+            await asyncio.sleep(1)
+            attempt += 1
+            response, _ = await self.gql.generic_timeline_by_id(timeline_id, count)
+            entries = [
+                i for i in find_dict(response, 'entries', find_one=True)[0]
+                if i['entryId'].startswith(entry_id_prefix)
+            ]
```
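As a standalone illustration of the suggested pattern, here is a self-contained bounded retry-with-backoff sketch; the flaky_fetch coroutine is a hypothetical stand-in for the trends request, not part of twikit.

```python
import asyncio


async def fetch_with_bounded_retry(fetch_entries, max_attempts=3, delay=1.0):
    """Retry an async fetch a bounded number of times instead of recursing."""
    entries = await fetch_entries()
    attempt = 0
    while not entries and attempt < max_attempts:
        await asyncio.sleep(delay)  # fixed backoff between attempts
        attempt += 1
        entries = await fetch_entries()
    return entries


async def main():
    calls = {'n': 0}

    async def flaky_fetch():
        # Hypothetical fetcher that returns nothing for the first two calls.
        calls['n'] += 1
        return [] if calls['n'] < 3 else ['entry-1', 'entry-2']

    print(await fetch_with_bounded_retry(flaky_fetch))


if __name__ == '__main__':
    asyncio.run(main())
```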
📒 Files selected for processing (1)
- twikit/client/client.py (4 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
twikit/client/client.py (2)
- twikit/client/gql.py (1): generic_timeline_by_id (312-317)
- twikit/utils.py (1): find_dict (111-127)
🔇 Additional comments (1)
twikit/client/client.py (1)
21-21: LGTM: Import addition supports the get_trends refactor.

The addition of TIMELINE_IDS to the constants import is necessary for the refactored get_trends method and follows the existing import pattern.
twikit/client/client.py
Outdated
```python
                item_content = entry['item'].get('itemContent', {})
                reply_next_cursor = item_content.get('value')
                if reply_next_cursor:
                    _fetch_more_replies = partial(self._get_more_replies,
                                                  tweet_id, reply_next_cursor)
```
Fix cursor extraction to use correct entry and path.
Two issues here:
- The code checks entries[-1] but then accesses the stale loop variable entry (which happens to be the last loop value)
- It uses ['item']['itemContent'] instead of ['content']['itemContent'] (inconsistent with line 1526)

This breaks pagination for loading more top-level replies.
Apply this diff:
```diff
             if entries[-1]['entryId'].startswith('cursor'):
                 # if has more replies
-                item_content = entry['item'].get('itemContent', {})
+                item_content = entries[-1].get('content', {}).get('itemContent', {})
                 reply_next_cursor = item_content.get('value')
                 if reply_next_cursor:
                     _fetch_more_replies = partial(self._get_more_replies,
                                                   tweet_id, reply_next_cursor)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
            if entries[-1]['entryId'].startswith('cursor'):
                # if has more replies
                item_content = entries[-1].get('content', {}).get('itemContent', {})
                reply_next_cursor = item_content.get('value')
                if reply_next_cursor:
                    _fetch_more_replies = partial(self._get_more_replies,
                                                  tweet_id, reply_next_cursor)
```
🤖 Prompt for AI Agents
In twikit/client/client.py around lines 1638 to 1642, the code incorrectly uses
the loop variable `entry` after iterating instead of explicitly referencing the
last entry and also uses the wrong key path ('item' instead of 'content') to
extract the cursor; change the logic to assign last_entry = entries[-1], read
item_content = last_entry.get('content', {}).get('itemContent', {}), extract
reply_next_cursor = item_content.get('value'), and if present set
_fetch_more_replies = partial(self._get_more_replies, tweet_id,
reply_next_cursor) so pagination uses the correct entry and path.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
twikit/client/client.py (1)
2565-2565: Remove the unused additional_request_params parameter.

The additional_request_params parameter is declared but never used in the function body. This is dead code and should be removed, or implemented if the functionality is intended.
♻️ Duplicate comments (2)
twikit/client/client.py (2)
1618-1621: Fix cursor extraction to use the correct source and path.

The code checks if reply is a cursor entry but then extracts from entry['item'] instead of reply['content']. This is inconsistent with the cursor extraction pattern used elsewhere (e.g., line 1526) and will fail to extract the cursor correctly.

Apply this diff:

```diff
             if 'cursor' in reply.get('entryId'):
-                item_content = entry['item'].get('itemContent', {})
+                item_content = reply.get('content', {}).get('itemContent', {})
                 sr_cursor = item_content.get('value')
                 if not sr_cursor:
                     continue
```
1640-1644: Fix cursor extraction to use the correct entry reference and path.

Two issues:

- The code checks entries[-1] but uses the stale loop variable entry (which is the last iterated value, not necessarily entries[-1])
- It uses entry['item'] instead of the correct path entries[-1]['content']

This breaks pagination for top-level replies.

Apply this diff:

```diff
             if entries[-1]['entryId'].startswith('cursor'):
                 # if has more replies
-                item_content = entry['item'].get('itemContent', {})
+                item_content = entries[-1].get('content', {}).get('itemContent', {})
                 reply_next_cursor = item_content.get('value')
                 if reply_next_cursor:
                     _fetch_more_replies = partial(self._get_more_replies,
                                                   tweet_id, reply_next_cursor)
```
📒 Files selected for processing (1)
- twikit/client/client.py (5 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
twikit/client/client.py (2)
- twikit/client/gql.py (1): generic_timeline_by_id (312-317)
- twikit/utils.py (1): find_dict (111-127)
🔇 Additional comments (2)
twikit/client/client.py (2)
21-21: LGTM - Import supports the refactored get_trends method.

The TIMELINE_IDS import is correctly added to support the new GraphQL-based trend retrieval.
2622-2627: LGTM - Trend extraction correctly handles the nested payload.

The code now properly extracts the nested 'trend' key from itemContent and skips entries where trend data is missing, addressing the issue raised in previous reviews.
```python
            if not retry:
                return []
            # Recall the method again, as the trend information
            # may not be returned due to a Twitter error.
            return await self.get_trends(category, count, retry, additional_request_params)
```
Prevent infinite recursion in retry logic.
The recursive call to get_trends on line 2619 doesn't set retry=False, which could cause infinite recursion if the API consistently returns no trend entries. Each retry will attempt another retry indefinitely.
Apply this diff:
```diff
         if not entries:
             if not retry:
                 return []
             # Recall the method again, as the trend information
             # may not be returned due to a Twitter error.
-            return await self.get_trends(category, count, retry, additional_request_params)
+            return await self.get_trends(category, count, False, additional_request_params)
```

🤖 Prompt for AI Agents
+ return await self.get_trends(category, count, False, additional_request_params)🤖 Prompt for AI Agents
twikit/client/client.py around lines 2615 to 2619: the retry path calls
get_trends recursively without flipping the retry flag, risking infinite
recursion if the API keeps returning no trends; change the recursive call to
pass retry=False (i.e., return await self.get_trends(category, count, False,
additional_request_params)) so only one retry is attempted, or alternatively
replace the recursion with a loop that decrements a retry counter and stops when
it reaches zero.
Check issue #389. Updated from guilds.json to GenericTimelineById.

Summary by Sourcery
Switch get_trends to use the GenericTimelineById GraphQL endpoint instead of the legacy v1.1 guide API, introduce feature flags and helper methods for explore_page and generic_timeline_by_id, add a mapping of timeline IDs for trending categories, and update the Trend model to parse snake_case fields from the new response.
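For orientation, a hedged usage sketch from a library user's point of view; the 'trending' category name, the cookie-based session handling, and the printed attributes follow twikit's existing public API and the Trend fields described in this PR, but exact behavior depends on the merged code.

```python
import asyncio
from twikit import Client


async def main():
    client = Client('en-US')
    # Assumes a previously saved, authenticated session (e.g. via save_cookies()).
    client.load_cookies('cookies.json')

    # After this PR, the category should be resolved to a timeline ID and fetched
    # through the GenericTimelineById GraphQL endpoint instead of the legacy guide API.
    trends = await client.get_trends('trending', count=20)
    for trend in trends:
        print(trend.name, trend.tweets_count)


asyncio.run(main())
```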
Summary by CodeRabbit

- New Features
- Bug Fixes
- Refactor