# Twitter keyword search, monitoring, and trend analysis via GraphQL

Search Twitter by keyword, collect high-engagement tweets, analyze trends over time, and generate structured reports. Powered by `rnet_twitter.py` GraphQL search (no browser automation needed).
## Phase 1: On-demand Search (user-triggered)

User says "search OpenAI on twitter" -> search -> filter -> report

## Phase 2: Keyword Monitoring (cron-driven)

Config defines keywords -> scheduled search -> diff with last run -> alert on new high-engagement tweets

## Phase 3: Trend Analysis (on-demand or weekly)

Aggregate saved searches -> group by week -> detect topic shifts -> generate narrative
## Setup

```bash
# Install rnet (Rust HTTP client with TLS fingerprint emulation)
pip install "rnet>=3.0.0rc20" --pre
```

Required files:

1. `rnet_twitter.py` — lightweight async Twitter GraphQL client.
   Get it: https://github.com/PHY041/rnet-twitter-client
2. `twitter_cookies.json` — your auth cookies.
   Format: `[{"name": "auth_token", "value": "..."}, {"name": "ct0", "value": "..."}]`
   Get cookies: Chrome DevTools → Application → Cookies → x.com.
   Cookies expire after ~2 weeks; refresh when you get 403 errors.

Set the `TWITTER_COOKIES_PATH` env var to your cookies file location.
When the user says "search [keyword] on twitter", "twitter intel [topic]", or "find tweets about [X]":
```python
import asyncio
import os

from rnet_twitter import RnetTwitterClient

async def search(query, count=200):
    client = RnetTwitterClient()
    cookies_path = os.environ.get("TWITTER_COOKIES_PATH", "twitter_cookies.json")
    client.load_cookies(cookies_path)
    tweets = await client.search_tweets(query, count=count, product="Top")
    return tweets
```
Search modes:
| Mode | product= | Use case |
|---|---|---|
| High-engagement | "Top" | Find influential tweets, content analysis |
| Real-time | "Latest" | Monitor breaking discussions, live tracking |
Useful Twitter search operators:
| Operator | Example | Effect |
|---|---|---|
| lang:en | OpenAI lang:en | English only |
| since: / until: | since:2026-01-24 until:2026-02-24 | Date range |
| -filter:replies | OpenAI -filter:replies | Original tweets only |
| min_faves:N | min_faves:50 | Minimum likes (only works with Latest) |
| from: | from:karpathy | Specific author |
| "exact" | "AI agent" | Exact phrase |
After raw search, filter for quality:
```python
filtered = [
    t for t in tweets
    if keyword.lower() in t["text"].lower()
    and (t["favorite_count"] >= 10 or t["retweet_count"] >= 5)
    and not t["is_reply"]
]
```
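For the report, the filtered tweets still need deduplicating and ranking. A sketch, assuming each tweet dict carries an `"id"` field (an assumption about rnet_twitter's output shape; the 3x retweet weight is an arbitrary illustrative choice):

```python
def rank_tweets(filtered, top_n=10):
    """Deduplicate by tweet id, then rank by combined engagement.

    Assumes each tweet dict has "id", "favorite_count", and
    "retweet_count" keys. The 3x retweet weight is illustrative,
    not prescribed by rnet_twitter.
    """
    seen, unique = set(), []
    for t in filtered:
        if t["id"] not in seen:
            seen.add(t["id"])
            unique.append(t)
    unique.sort(key=lambda t: t["favorite_count"] + 3 * t["retweet_count"],
                reverse=True)
    return unique[:top_n]
```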
Output a structured summary:

```markdown
## Twitter Intel: [keyword]

**Period:** [date range] | **Tweets found:** N | **After filter:** N

### Top Tweets (by engagement)
1. @author (X likes, Y RTs, Z views) — date
   "tweet text..."
   [link]

### Key Themes
- Theme 1: [description] (N tweets)
- Theme 2: [description] (N tweets)

### Notable Authors
| Author | Followers | Tweets in set | Total engagement |
|---|---|---|---|
```
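The Notable Authors rows can be computed with a simple aggregation. A sketch — the field names (`author`, `followers`) are assumptions about the tweet dict shape, not confirmed rnet_twitter keys:

```python
from collections import defaultdict

def notable_authors(tweets):
    """Aggregate per-author stats for the Notable Authors table.

    Field names "author" and "followers" are assumptions about
    the tweet dict shape; adjust to match rnet_twitter's output.
    """
    stats = defaultdict(lambda: {"followers": 0, "count": 0, "engagement": 0})
    for t in tweets:
        s = stats[t["author"]]
        s["followers"] = max(s["followers"], t.get("followers", 0))
        s["count"] += 1
        s["engagement"] += t["favorite_count"] + t["retweet_count"]
    # Sort authors by total engagement, highest first
    return sorted(stats.items(), key=lambda kv: kv[1]["engagement"], reverse=True)
```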
Phase 2 monitoring config (defines the keywords to track):

```json
{
  "monitors": [
    {
      "id": "my-product-en",
      "query": "MyProduct lang:en -filter:replies",
      "product": "Top",
      "count": 100,
      "min_likes": 10,
      "alert_threshold": 100,
      "enabled": true
    }
  ]
}
```
State file (tracks the last run per monitor, used to diff for new tweets):

```json
{
  "my-product-en": {
    "last_run": "2026-02-24T12:00:00Z",
    "last_tweet_ids": ["id1", "id2"],
    "total_collected": 450
  }
}
```
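The diff-and-alert step of a monitoring run can be sketched against that state shape. A minimal sketch, assuming tweets are keyed by an `"id"` field and `state` is the parsed state file:

```python
def diff_new_tweets(tweets, state, monitor_id, alert_threshold):
    """Return (new_tweets, alerts) for one monitor and update its state.

    Assumes each tweet dict has "id" and "favorite_count" keys;
    `state` mirrors the JSON state file above.
    """
    entry = state.setdefault(
        monitor_id, {"last_tweet_ids": [], "total_collected": 0}
    )
    seen = set(entry["last_tweet_ids"])
    new = [t for t in tweets if t["id"] not in seen]
    alerts = [t for t in new if t["favorite_count"] >= alert_threshold]
    # Record this run's ids and the running total for the next diff
    entry["last_tweet_ids"] = [t["id"] for t in tweets]
    entry["total_collected"] += len(new)
    return new, alerts
```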
Monitoring flow per run:

1. For each enabled monitor, call `search_tweets(query, count, product)`
2. Filter by `min_likes`
3. Diff against `last_tweet_ids` -> find NEW tweets only
4. Any new tweet with `favorite_count >= alert_threshold` -> immediate alert
5. Save results to `{monitor_id}/YYYY-MM-DD.json`

When user says "analyze twitter trend for [keyword]" or "twitter trend report":
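The group-by-week step of trend analysis can be sketched as follows, assuming each saved tweet carries an ISO-8601 `"created_at"` timestamp (an assumption about the saved JSON shape):

```python
from collections import defaultdict
from datetime import datetime

def group_by_week(tweets):
    """Bucket tweets by ISO year-week for trend analysis.

    Assumes each tweet dict has a "created_at" ISO-8601 string,
    e.g. "2026-02-24T12:00:00Z".
    """
    weeks = defaultdict(list)
    for t in tweets:
        # fromisoformat() in older Pythons rejects a trailing "Z"
        dt = datetime.fromisoformat(t["created_at"].replace("Z", "+00:00"))
        year, week, _ = dt.isocalendar()
        weeks[f"{year}-W{week:02d}"].append(t)
    return dict(weeks)
```

Week-over-week counts and per-week top themes then come from comparing adjacent buckets.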
Load the saved daily files from `{monitor_id}/`, aggregate across runs, and generate the trend narrative.

## Commands

| User Says | Agent Does |
|---|---|
| /twitter-intel [keyword] | Search + filter + report (Top, 200 tweets) |
| /twitter-intel "[phrase]" --latest | Search in Latest mode |
| monitor "[keyword]" on twitter | Add to monitoring config |
| twitter intel status | Show all active monitors + last run |
| twitter trend report [keyword] | Analyze saved data, generate trend narrative |
| refresh twitter cookies | Guide user through cookie refresh |
## Notes

- `rnet_twitter.py` is tied to Twitter's web client bundle at `https://abs.twimg.com/responsive-web/client-web/main.*.js`; if searches break, the bundle may have changed.
- `auth_token` expires after ~2 weeks. Monitor for 403 errors.