Part 3 of 3 in the TRAE Agent Reverse Engineering Series
In Part 2, I finally got past the proxy nightmare and captured real HTTPS traffic to api.trae.ai. Now came the moment of truth: decoding the actual API calls and extracting that sweet, sweet agent data.
Step 1: Authentication Token
With Fiddler's HTTPS decryption working perfectly, I could finally see the decrypted API requests. The first thing that caught my eye was a familiar pattern:
URL: https://api.trae.ai/cloudide/api/v3/trae/GetUserSupabaseToken
Result: 200 (OK)
Body: 178 bytes, application/json
Bingo! A JWT in the Authorization header. Looking at the structure and the API endpoint pattern (/rest/v1/rpc/), this screamed Supabase to me. The JWT format was unmistakable:
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJhdXRoZW50aWNhdGVkIiwiZXhwIjoxNzM3NjgzNzgzLCJpYXQiOjE3Mzc2ODAxODMsImlzcyI6Imh0dHBzOi8vYXBpLnRyYWUuYWkiLCJzdWIiOiIxMjM0NTY3ODkwIiwicm9sZSI6ImF1dGhlbnRpY2F0ZWQifQ.signature_here
A little bit of insight on JWTs (JSON Web Tokens): they are made up of three dot-separated parts - HEADER.PAYLOAD.SIGNATURE. Only the first two parts are Base64URL-encoded JSON, meaning they are easily decodable; the third part is a cryptographic signature - usually HMAC or RSA, depending on the alg value in the HEADER.
The header contains metadata such as { "alg": "HS256", "typ": "JWT" }, and the payload consists of the claims - sub, role, and exp in this case.
To extract the JSON, you just take the PAYLOAD segment and Base64URL-decode it - or paste the token into any online JWT decoder - and you get:
{
  "aud": "authenticated",
  "exp": 1737683783,
  "iat": 1737680183,
  "iss": "https://api.trae.ai",
  "sub": "1234567890",
  "role": "authenticated"
}
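If you'd rather do it locally than trust a random website with your token, here's a minimal Python sketch of that decoding step - no signature verification, just splitting the token and restoring the padding that Base64URL strips:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode a JWT's PAYLOAD segment without verifying the signature."""
    payload_b64 = token.split(".")[1]
    # Base64URL drops the '=' padding; restore it before decoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

token = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOi..."  # paste the full captured JWT here
print(json.dumps(decode_jwt_payload(token), indent=2))
```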
Step 2: The Moment of Truth – Agent Data
The response body of the next call - the agent-data request itself - was even more exciting:
{
  "id": "337fa4",
  "name": "Code Review Assistant",
  "description": "An AI assistant specialized in code review and analysis",
  "avatar_url": "https://p16-trae-material-sign-va.ibyteimg.com/tos-maliva-i-traematerial-us/agent_avatar/random/image/uuid.png?x-expires=...",
  "tools": [
    {
      "name": "code_analyzer",
      "description": "Analyzes code for potential issues",
      "parameters": {...}
    }
  ],
  "mcp_servers": [
    {
      "name": "github-server",
      "config": {...}
    }
  ],
  "created_at": "yyyy-MM-dd'T'HH:mm:ss'Z'",
  "updated_at": "yyyy-MM-dd'T'HH:mm:ss'Z'"
}

Jackpot! This was exactly what I was looking for:
- ✅ Agent metadata (name, description, avatar)
- ✅ Tools configuration (the actual AI capabilities)
- ✅ MCP servers (Model Context Protocol integrations)
- ✅ Timestamps (creation and modification dates)
Everything I needed to understand how TRAE agents work under the hood!
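To pin that schema down, here's the payload modeled as rough Python dataclasses - the field names come straight from the capture, the types are my best guesses, and the redacted parameters/config shapes are left as plain dicts:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Tool:
    name: str
    description: str
    parameters: dict[str, Any]  # shape redacted in the capture

@dataclass
class McpServer:
    name: str
    config: dict[str, Any]  # shape redacted in the capture

@dataclass
class TraeAgent:
    id: str
    name: str
    description: str
    avatar_url: str
    tools: list[Tool]
    mcp_servers: list[McpServer]
    created_at: str  # ISO-8601 timestamps
    updated_at: str
```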
The Crushing Defeat: Attempting Manual Reproduction
Naturally, my next thought was: "Can I reproduce this with a simple curl command?"
curl -X POST https://api.trae.ai/rest/v1/rpc/get_agent_by_id \
  -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..." \
  -H "Content-Type: application/json" \
  -d '{"agent_id": "337fa4"}'
The response?
{
  "code": 401,
  "message": "Unauthorized",
  "details": "JWT token is invalid or expired"
}
401 Unauthorized.
The crushing reality hit me: the JWT was either:
- Session-specific (somehow tied to the TRAE IDE session?)
- Short-lived (the iat and exp claims above are exactly 3600 seconds apart, so each token only lives for an hour - a quick check for this is sketched below)
- Signed with additional context I couldn't replicate
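The short-lived theory is at least easy to test: decode the payload with the helper from earlier and compare exp against the clock. A quick sketch:

```python
import time

claims = decode_jwt_payload(token)  # helper from the earlier sketch
remaining = claims["exp"] - int(time.time())
if remaining <= 0:
    print("Token expired", -remaining, "seconds ago - hello, 401")
else:
    print("Token still valid for another", remaining, "seconds")
```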
My manual curl approach had failed spectacularly. I was devastated.
The Realization: This Needs Automation
Staring at that 401 error, I realized that manual extraction wasn't going to cut it. The authentication flow was too complex, too dynamic, and too tightly integrated with the TRAE IDE itself.
What I needed was an automated pipeline - sketched after this list - that could:
- ✅ Intercept the live JWT tokens during "simulated" TRAE sessions
- ✅ Extract agent data in real-time before tokens expire
- ✅ Parse and store the complete agent configurations
- ✅ Handle the dynamic authentication flow seamlessly
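I haven't built any of this yet, but to give a taste of what the interception step might look like, here's a minimal sketch written as a mitmproxy addon. To be clear, mitmproxy is my assumption here (any scriptable intercepting proxy would do), and the endpoint filter just reuses the paths we saw in the captures:

```python
# trae_extractor.py - run with: mitmproxy -s trae_extractor.py
from mitmproxy import http

class TraeExtractor:
    def response(self, flow: http.HTTPFlow) -> None:
        if "api.trae.ai" not in flow.request.pretty_host:
            return
        # Grab live JWTs from the Authorization header while they're still fresh
        auth = flow.request.headers.get("Authorization", "")
        if auth.startswith("Bearer "):
            print("Captured JWT:", auth[len("Bearer "):][:32] + "...")
        # Persist agent payloads returned by the Supabase RPC endpoints
        if "/rest/v1/rpc/" in flow.request.path and flow.response:
            with open("agents.jsonl", "a") as fh:
                fh.write((flow.response.get_text() or "") + "\n")

addons = [TraeExtractor()]
```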

To Be Continued...
This reverse engineering journey revealed the complete API structure and data format, but also reminded me that manual extraction has its limits. The challenge lies in building an automated system that can work with TRAE's dynamic authentication, and I'm totally down for it!
Coming up in the next series: Building the automated TRAE agent extractor pipeline—from concept to working tool.
The Technical Payoff
Even though my curl attempt (kind of) failed, this reverse engineering session was incredibly valuable:
- Complete API Understanding: I now know exactly how TRAE agents are structured
- Authentication Flow: Understanding of the JWT-based auth system used by TRAE
- Data Format: Full schema for agent configurations
- Integration Points: How tools and MCP servers are configured (which is generally accessible through the IDE, but it counts!)
Key Takeaway: The groundwork laid by the TRAE devs is way too sophisticated to be beaten by a silly curl request! 😆
With that out of the way, the bed calls! 😴
Series Navigation:
- Part 1: When 1AM Curiosity Meets CTF Skills
- Part 2: Fiddler, Proxies, and Network Analysis
- Part 3: The API Treasure Hunt ← You are here
Want to see the automated extractor in action? Stay tuned for the next series where we build a real-time TRAE agent extraction pipeline that actually works!