Real-time Twitter data scraping via standard OpenAI SDK. Search tweets, users, followers, following lists — all through familiar chat completions format.
OpenX API is 100% compatible with the OpenAI Chat Completions format. You can use any OpenAI SDK to access Twitter data.
| Item | Value |
|---|---|
| base_url | `https://api.gugou.ai/v1` |
| api_key | Your API key (starts with `sk-`) |
| model | One of the 5 supported models below |
```python
from openai import OpenAI
import json

client = OpenAI(
    api_key="sk-your-api-key",
    base_url="https://api.gugou.ai/v1",
)

resp = client.chat.completions.create(
    model="x-search-tweets",
    messages=[{
        "role": "user",
        "content": json.dumps({
            "query": "bitcoin",
            "limit": 5,
            "product": "Latest"
        })
    }]
)

tweets = json.loads(resp.choices[0].message.content)
for t in tweets:
    print(f"@{t['user']['username']}: {t['rawContent'][:80]}")
```
```bash
curl -X POST https://api.gugou.ai/v1/chat/completions \
  -H "Authorization: Bearer sk-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "x-search-tweets",
    "messages": [{"role": "user", "content": "{\"query\":\"bitcoin\",\"limit\":5,\"product\":\"Latest\"}"}]
  }'
```
All requests require an API key in the Authorization header:
```
Authorization: Bearer sk-your-api-key
```
**x-search-tweets**: Search tweets by keyword or query string.
Params: `query`, `limit`, `product`
For stock-related posts, use `$` + ticker, such as `$TSLA`, `$AAPL`, `$NVDA`.

**x-search-users**: Search Twitter users by keyword.
Params: `query`, `limit`

**x-user-tweets**: Get tweets from a specific user.
Params: `user_id`, `limit`

**x-user-followers**: Get a user's followers list.
Params: `user_id`, `limit`

**x-user-following**: Get accounts a user follows.
Params: `user_id`, `limit`
| Parameter | Type | Required | Description |
|---|---|---|---|
| `query` | string | For search models | Search keyword or query string |
| `user_id` | integer | For user models | Twitter numeric user ID |
| `limit` | integer | No (default: 20) | Max results to return (1–500) |
| `product` | string | No (default: `"Top"`) | Search type: `Top`, `Latest`, `People`, `Photos`, `Videos` |
Parameters are passed as a JSON string inside the user message `content`:

```json
{
  "model": "x-search-tweets",
  "messages": [
    {
      "role": "user",
      "content": "{\"query\": \"bitcoin\", \"limit\": 10, \"product\": \"Latest\"}"
    }
  ],
  "stream": false
}
```
The `content` field must be a JSON string, not a nested object. Use `json.dumps()` in Python or `JSON.stringify()` in JavaScript.
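To keep the encoding rule out of every call site, a small helper can build the message list. This is a sketch; `build_messages` is a made-up name, not part of the API:

```python
import json

def build_messages(params: dict) -> list:
    # Serialize the parameters to a JSON *string* — the API expects
    # a string in the content field, not a nested object.
    return [{"role": "user", "content": json.dumps(params)}]

messages = build_messages({"query": "bitcoin", "limit": 10, "product": "Latest"})
# messages[0]["content"] is now a plain string such as '{"query": "bitcoin", ...}'
```

The same helper then works for every model, since they all take their parameters the same way.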
When searching for stock-related tweets, use `$` + stock symbol for better precision.
```json
{
  "model": "x-search-tweets",
  "messages": [
    {
      "role": "user",
      "content": "{\"query\": \"$TSLA\", \"limit\": 10, \"product\": \"Latest\"}"
    }
  ]
}
```
Examples: `$TSLA`, `$AAPL`, `$NVDA`, `$MSFT`. In many cases, the ticker symbol is more accurate than the company name alone.
You can combine ticker symbols with company names or other query operators for broader or more precise matching.
```json
{
  "model": "x-search-tweets",
  "messages": [
    {
      "role": "user",
      "content": "{\"query\": \"$TSLA OR Tesla\", \"limit\": 20, \"product\": \"Latest\"}"
    }
  ]
}
```

```json
{
  "model": "x-search-tweets",
  "messages": [
    {
      "role": "user",
      "content": "{\"query\": \"$AAPL lang:en\", \"limit\": 20, \"product\": \"Latest\"}"
    }
  ]
}
```
Example query strings: `$TSLA OR Tesla`, `$NVDA OR Nvidia`, `$AAPL lang:en`, `$MSFT -filter:replies`. These query strings are passed directly to X search.
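Such queries can also be assembled programmatically. The helper below is a sketch (the function name is made up), joining tickers as cashtags and appending optional operators:

```python
def cashtag_query(tickers, extra=""):
    # Join tickers as $-prefixed cashtags with OR, then append any
    # extra operators such as lang:en or -filter:replies.
    base = " OR ".join(f"${t.upper()}" for t in tickers)
    return f"{base} {extra}".strip()

print(cashtag_query(["tsla", "nvda"], "lang:en"))  # $TSLA OR $NVDA lang:en
```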
```json
{
  "model": "x-user-tweets",
  "messages": [
    {
      "role": "user",
      "content": "{\"user_id\": 44196397, \"limit\": 10}"
    }
  ]
}
```
If you do not know a user's numeric ID, call `x-search-users` first: the `id` field in the response is the numeric user ID.
Responses follow the standard OpenAI Chat Completions format. The actual Twitter data is in `choices[0].message.content` as a JSON string that needs to be parsed.
```json
{
  "id": "chatcmpl-openx-7663002f5683",
  "object": "chat.completion",
  "created": 1774489310,
  "model": "x-search-tweets",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "[{\"id\": 123, \"rawContent\": \"...\", \"user\": {...}}, ...]"
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 100,
    "completion_tokens": 900,
    "total_tokens": 1000
  }
}
```
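Because the payload always arrives as a JSON string inside the message content, a tiny parsing helper keeps call sites clean. This is a sketch (the helper name is made up), exercised here against a mock object mirroring the response shape rather than a live call:

```python
import json
from types import SimpleNamespace

def parse_openx(resp):
    # Extract and decode the JSON string the API places in the
    # assistant message's content field.
    return json.loads(resp.choices[0].message.content)

# Mock response object for illustration only.
mock = SimpleNamespace(choices=[SimpleNamespace(
    message=SimpleNamespace(content='[{"id": 123, "rawContent": "hello"}]'))])

tweets = parse_openx(mock)
print(tweets[0]["rawContent"])  # hello
```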
| Field | Type | Description |
|---|---|---|
| `id` | integer | Tweet numeric ID |
| `url` | string | Direct link to the tweet |
| `date` | string | Timestamp |
| `rawContent` | string | Full tweet text |
| `user` | object | Author info (username, displayname, followersCount, etc.) |
| `replyCount` | integer | Number of replies |
| `retweetCount` | integer | Number of retweets |
| `likeCount` | integer | Number of likes |
| `viewCount` | integer | Number of views |
| `hashtags` | array | Hashtags in the tweet |
| `media` | object | Attached photos, videos, GIFs |
| Field | Type | Description |
|---|---|---|
| `id` | integer | User numeric ID |
| `username` | string | Handle (without @) |
| `displayname` | string | Display name |
| `rawDescription` | string | Bio text |
| `followersCount` | integer | Number of followers |
| `friendsCount` | integer | Number of following |
| `statusesCount` | integer | Total tweets |
| `profileImageUrl` | string | Avatar URL |
| `verified` | boolean | Legacy verification badge |
| `blue` | boolean | Twitter Blue subscriber |
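With these fields, a parsed user list can be filtered client-side. The sketch below (a hypothetical helper, shown on sample data) keeps only accounts above a follower threshold:

```python
def top_users(users, min_followers=10_000):
    # Keep accounts at or above the threshold, largest first.
    hits = [u for u in users if u.get("followersCount", 0) >= min_followers]
    return sorted(hits, key=lambda u: u["followersCount"], reverse=True)

sample = [
    {"username": "alice", "followersCount": 50_000},
    {"username": "bob", "followersCount": 300},
]
print([u["username"] for u in top_users(sample)])  # ['alice']
```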
```python
from openai import OpenAI
import json

client = OpenAI(api_key="sk-your-key", base_url="https://api.gugou.ai/v1")

resp = client.chat.completions.create(
    model="x-search-tweets",
    messages=[{"role": "user", "content": json.dumps({
        "query": "$TSLA OR $BTC",
        "limit": 20,
        "product": "Latest"
    })}]
)

tweets = json.loads(resp.choices[0].message.content)
for t in tweets:
    print(f"@{t['user']['username']} [{t['likeCount']} likes]: {t['rawContent'][:100]}")
```
Use the `$` + ticker format, such as `$TSLA` or `$AAPL`, instead of only using the company name.
```python
resp = client.chat.completions.create(
    model="x-search-tweets",
    messages=[{
        "role": "user",
        "content": json.dumps({
            "query": "$TSLA OR Tesla lang:en",
            "limit": 20,
            "product": "Latest"
        })
    }]
)

tweets = json.loads(resp.choices[0].message.content)
for t in tweets:
    print(f"@{t['user']['username']}: {t['rawContent'][:120]}")
```
```python
# Step 1: Find user ID
resp = client.chat.completions.create(
    model="x-search-users",
    messages=[{"role": "user", "content": json.dumps({"query": "elonmusk", "limit": 1})}]
)
users = json.loads(resp.choices[0].message.content)
user_id = users[0]["id"]
print(f"Found: @{users[0]['username']} (ID: {user_id}, Followers: {users[0]['followersCount']})")

# Step 2: Get their tweets
resp = client.chat.completions.create(
    model="x-user-tweets",
    messages=[{"role": "user", "content": json.dumps({"user_id": user_id, "limit": 5})}]
)
tweets = json.loads(resp.choices[0].message.content)
for t in tweets:
    print(f"  {t['rawContent'][:100]}")
```
```javascript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "sk-your-key",
  baseURL: "https://api.gugou.ai/v1",
});

const resp = await client.chat.completions.create({
  model: "x-search-users",
  messages: [{ role: "user", content: JSON.stringify({ query: "AI", limit: 5 }) }],
});

const users = JSON.parse(resp.choices[0].message.content);
users.forEach(u => console.log(`@${u.username} - ${u.followersCount} followers`));
```
```bash
curl -X POST https://api.gugou.ai/v1/chat/completions \
  -H "Authorization: Bearer sk-your-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "x-user-followers",
    "messages": [{"role": "user", "content": "{\"user_id\": 44196397, \"limit\": 10}"}]
  }'
```
Set `"stream": true` to receive results as Server-Sent Events. Each chunk contains one page of results.
```python
resp = client.chat.completions.create(
    model="x-search-tweets",
    messages=[{"role": "user", "content": json.dumps({
        "query": "breaking news",
        "limit": 50,
        "product": "Latest"
    })}],
    stream=True
)

for chunk in resp:
    if chunk.choices[0].delta.content:
        page = json.loads(chunk.choices[0].delta.content)
        print(f"Received {len(page)} tweets in this chunk")
```
Streaming is most useful with `x-search-tweets` and large `limit` values. Results are delivered page by page as they are scraped.
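The per-page chunks can be folded into a single result list. The accumulator below is a sketch; it is exercised here with fake chunk objects that mirror the streaming shape rather than a live stream:

```python
import json
from types import SimpleNamespace

def collect_pages(stream):
    # Concatenate every non-empty delta.content page into one list.
    tweets = []
    for chunk in stream:
        content = chunk.choices[0].delta.content
        if content:
            tweets.extend(json.loads(content))
    return tweets

def fake_chunk(payload):
    # Stand-in for a streaming chunk, for illustration only.
    return SimpleNamespace(choices=[SimpleNamespace(
        delta=SimpleNamespace(content=payload))])

stream = [fake_chunk('[{"id": 1}]'), fake_chunk(None), fake_chunk('[{"id": 2}]')]
print(len(collect_pages(stream)))  # 2
```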
| Error | HTTP Status | Description |
|---|---|---|
| `model_not_found` | 400 | Invalid model name. Check the 5 supported models above. |
| `invalid_request_error` | 400 | Missing or invalid parameters (e.g., `query` not provided for search models). |
| `scraping_error` | 500 | Internal scraping failure. Usually temporary; retry after a few seconds. |
| `invalid token` | 401 | API key is invalid or expired. |
| `insufficient quota` | 403 | Account balance exhausted. Please top up. |
```json
{
  "error": {
    "message": "Model 'x-bad-model' not found. Available: ['x-search-tweets', ...]",
    "type": "model_not_found",
    "code": "model_not_found"
  }
}
```
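Since `scraping_error` is usually transient, a small retry wrapper helps. The sketch below retries any callable; in real use you would catch the OpenAI SDK's `APIStatusError` and retry only on status 500, but a generic exception check stands in here:

```python
import time

def with_retries(fn, attempts=3, delay=1.0):
    # Call fn(); on failure, wait and retry, re-raising the last
    # error if every attempt fails.
    last = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as exc:  # in practice: check for a 500 scraping_error
            last = exc
            if i < attempts - 1:
                time.sleep(delay)
    raise last
```

Only retry server-side (5xx) failures; 4xx errors such as `invalid_request_error` will not succeed on retry.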
- Keep `limit` at 20–50 per request for optimal latency.
- The maximum `limit` per request is 500.
- For large result sets, use `stream: true`.
- Retry transient `scraping_error` responses.