Documentation
Getting Started
Pixel Agents is an AI agent marketplace where anyone can discover, run, and build specialized AI agents. Each agent is purpose-built for a specific task — from brand audits and name generation to color palette creation and SEO analysis.
Running an agent is simple: pick one from the catalog, paste your input (a URL, a name, a description — whatever the agent asks for), and get a structured result in seconds. Every agent returns clean, organized output sections like scores, verdicts, lists, and tags.
Anonymous users get 5 free runs per day. Sign in to unlock 25 runs per day.
Running an Agent
Here's how to use any agent on the platform:
- Step 1: Browse — explore the catalog at Pixel Agents. Filter by category, sort by trending or most runs, or search for a specific tool.
- Step 2: Select — click an agent card to open its run page. Read the tagline and capabilities to make sure it fits your need.
- Step 3: Input — paste your input. Some agents take a URL (for site analysis), others take free text (for name generation, roasts, etc.). The input field label tells you what's expected.
- Step 4: Deploy — click "Deploy Agent" and wait 5-15 seconds. You'll see loading messages specific to that agent while it works.
- Step 5: Results — your result appears as structured cards: scores with progress bars, verdicts, bullet lists, tags, and more. Each card type is designed for easy scanning.
- Step 6: Share — click "Share Card" to get a branded result image with OG tags. Paste the link on LinkedIn, X, or Bluesky — the preview card unfurls automatically with your score and verdict.
Building an Agent
Agent Forge is the builder tool for creating and publishing your own Pixel Agents. No coding required — just fill in the fields and test.
- Name your agent — pick a unique name and write a short tagline.
- Write a system prompt — tell the AI what role it plays and how it should behave.
- Define output sections — choose the structured sections your agent returns (score, verdict, list, etc.).
- Add powers — optionally enable web search, URL fetching, or image generation.
- Test — run your agent with sample input to verify the output before publishing.
- Submit — send your agent for review and approval.
Writing Prompts (JSON Output)
Your agent's system prompt must instruct the AI to respond with valid JSON.
The JSON keys must match the key values in your output sections. This is how the
structured result cards are generated.
Step 1: Define your output sections
In Agent Forge, add the sections you want. Each section has a key (the JSON field name) and a type (how it renders). For example:
| Key | Type | What it shows |
|---|---|---|
| `score` | score | 0-100 with progress bar |
| `verdict` | verdict | Italic summary quote |
| `roast_points` | list | Bullet point list |
| `improvements` | text | Paragraph of text |
| `pro_tip` | highlight | Bold callout box |
Step 2: Write your system prompt
End your system prompt with a JSON template that matches your output section keys exactly. Here's a complete example:
```
You are a brutally honest website critic. Analyze the site and provide a detailed roast.

You MUST respond with valid JSON:
{
  "score": <0-100>,
  "verdict": "<one-line summary>",
  "roast_points": ["<point 1>", "<point 2>", "<point 3>"],
  "improvements": "<paragraph of suggestions>",
  "pro_tip": "<one actionable tip>"
}

Do NOT wrap in code fences. Return ONLY raw JSON.
```
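The contract described above can be checked mechanically on the platform side. Here is a minimal sketch of how a response might be parsed and its section keys verified; `parse_agent_response` and `EXPECTED_KEYS` are illustrative names, not Pixel Agents' actual code:

```python
import json

# Keys from the example agent's output sections (illustrative).
EXPECTED_KEYS = {"score", "verdict", "roast_points", "improvements", "pro_tip"}

def parse_agent_response(raw: str) -> dict:
    """Parse the model's raw output as JSON and check the section keys."""
    # json.loads raises ValueError if the model added text around the JSON.
    result = json.loads(raw)
    missing = EXPECTED_KEYS - result.keys()
    if missing:
        raise ValueError(f"missing output sections: {sorted(missing)}")
    return result
```

A reply that obeys the prompt parses cleanly; one with a missing key (or surrounding chatter) fails loudly, which is exactly why the key rules below matter.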
Key Rules
- Keys must match exactly — if your output section key is `roast_points`, the JSON must use `"roast_points"`, not `"roastPoints"` or `"points"`.
- Always say "Do NOT wrap in code fences" — AI models sometimes wrap JSON in `` ```json `` blocks. This line prevents that.
- Always say "Return ONLY raw JSON" — prevents the AI from adding explanation text before or after the JSON.
- Use the right data types — scores should be numbers (not strings), lists should be arrays, text should be strings.
Data Types by Section Type
| Section Type | Expected JSON Value | Example |
|---|---|---|
| `score` | Number (0-100) | `"score": 72` |
| `verdict` | String | `"verdict": "Solid but needs work"` |
| `text` | String | `"analysis": "The site has..."` |
| `list` | Array of strings | `"tips": ["Tip 1", "Tip 2"]` |
| `tags` | Array of strings | `"keywords": ["fast", "modern"]` |
| `highlight` | String | `"key_takeaway": "Fix your CTA"` |
| `name_list` | Array of objects | `"names": [{"name": "Acme", "why": "Simple"}]` |
| `color_palette` | Array of objects | `"colors": [{"name": "Sky", "hex": "#87CEEB"}]` |
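The expectations in this table can be expressed as a small type check. This is an illustrative sketch only, not the platform's real validator:

```python
# Section type -> expected Python type after json.loads (illustrative).
EXPECTED_TYPES = {
    "score": (int, float),        # number 0-100
    "verdict": str,
    "text": str,
    "list": list,                 # array of strings
    "tags": list,                 # array of strings
    "highlight": str,
    "name_list": list,            # array of objects
    "color_palette": list,        # array of objects
}

def check_value(section_type: str, value) -> bool:
    """Return True if a JSON value matches its section type's expectation."""
    if not isinstance(value, EXPECTED_TYPES[section_type]):
        return False
    if section_type == "score":
        return 0 <= value <= 100  # enforce the documented range
    return True
```

With this, the "score as string" mistake below is caught immediately: `check_value("score", "85")` is false, while `check_value("score", 85)` passes.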
Common Mistakes
- Key mismatch — your prompt says `"suggestions"` but your output section key is `"improvements"`. The section won't render.
- Missing "Return ONLY raw JSON" — the AI adds conversational text around the JSON, causing a parse error.
- Score as string — writing `"score": "85"` instead of `"score": 85`. Use a number.
- Nested objects where strings expected — the `text` type expects a plain string, not an object. Keep it simple.
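Even with a careful prompt, a model occasionally ignores the formatting rules. A defensive parser can often salvage the JSON anyway; this is a hypothetical sketch, not Pixel Agents' code, showing one way to strip fences and surrounding chatter:

```python
import json
import re

def extract_json(raw: str) -> dict:
    """Salvage a JSON object from a reply that broke the formatting rules:
    strips ```json fences and any chatter before or after the object."""
    # Remove Markdown code fences if the model added them.
    raw = re.sub(r"```(?:json)?", "", raw)
    # Grab the outermost {...} span (greedy match, first { to last }).
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in response")
    return json.loads(match.group(0))
```

For example, `extract_json('Sure! ```json\n{"score": 10}\n```')` recovers `{"score": 10}` despite both mistakes appearing at once.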
User Prompt Template
The user prompt template is the message sent to the AI alongside the user's input.
Use {{input}} as the placeholder — it gets replaced with whatever the user types or pastes.
Think of it as: system prompt = who the agent is, user prompt = what it's being asked to do right now.
For text-input agents (user pastes text, a name, a description):
```
Roast this:
{{input}}
```
For URL-input agents (user pastes a URL, agent fetches the page):
```
Analyze this website and provide your assessment:
{{input}}
```
For multi-purpose agents (add context around the input):
```
Here is the user's resume text. Score it, roast it, and suggest improvements:
{{input}}
```
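The substitution itself is trivial. A minimal sketch, assuming plain string replacement (the platform's actual templating may differ):

```python
def render_user_prompt(template: str, user_input: str) -> str:
    """Replace the {{input}} placeholder with whatever the user typed."""
    return template.replace("{{input}}", user_input)
```

So a template of `"Roast this:\n{{input}}"` with the input `"my landing page"` becomes the message `"Roast this:\nmy landing page"` sent to the AI.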
Tips
- Keep it short — 1-2 lines is ideal. The system prompt handles the detailed instructions.
- Always include `{{input}}` — without it, the agent won't receive the user's input.
- Don't repeat the system prompt — the user prompt shouldn't restate the agent's role or JSON format. That's already in the system prompt.
- Label the input — tell the AI what it's looking at: "Here is the website:", "Here is the startup idea:", "Here is the code:". This helps the AI parse the input correctly.
- Don't add instructions like "respond in JSON" — that belongs in the system prompt, not here.
Output Section Types
Each agent defines one or more output sections. These are the available types:
- `score` — a 0-100 number displayed with a progress bar.
- `verdict` — a short italic quote summarizing the result.
- `text` — a paragraph of analysis or explanation.
- `list` — bullet points for itemized results.
- `tags` — pill badges for categories, keywords, or labels.
- `highlight` — a bold callout for key takeaways.
- `name_list` — structured names with accompanying reasons or descriptions.
- `image` — a generated image via AI.
- `color_palette` — color swatches with hex values.
Agent Powers
Powers are special capabilities you can enable on your agent:
- `webSearch` — the agent searches the web for real-time information before generating its analysis.
- `fetchUrl` — the agent reads the actual page content from a URL the user provides.
- `imageGeneration` — the agent generates an image via Gemini as part of its output.
Testing & Submission
When you submit an agent, an AI gatekeeper evaluates it on three dimensions — quality, uniqueness, and safety — each scored 0-100.
- 70+ on all three — auto-approved and published immediately.
- Below 70 on any — queued for CEO review.
- Below 40 on any — rejected outright.
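The routing thresholds above can be sketched as a small decision function. Illustrative only: the real gatekeeper is an AI evaluator, and only the threshold logic is reproduced here:

```python
def gatekeeper_decision(quality: int, uniqueness: int, safety: int) -> str:
    """Map the three 0-100 gatekeeper scores to a review outcome."""
    lowest = min(quality, uniqueness, safety)
    if lowest >= 70:
        return "auto-approved"   # 70+ on all three dimensions
    if lowest < 40:
        return "rejected"        # below 40 on any dimension
    return "ceo-review"          # below 70 (but not below 40) on some dimension
```

Note that a single weak score decides the outcome: `gatekeeper_decision(95, 95, 35)` is rejected even though two dimensions are excellent.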
Editing Live Agents
Once your agent is live, some fields can be changed freely without re-review:
- Tagline, icon, portrait, tier, and temperature — cosmetic changes, updated instantly.
Other fields are locked behind re-review because they change the agent's behavior:
- Name, category, system prompt, user prompt, input type, output sections, and powers — editing any of these triggers a new review cycle.
Rate Limits
| Action | Limit |
|---|---|
| Anonymous runs | 5 / day |
| Authenticated runs | 25 / day |
| Submissions | 5 / day |
| Portrait generation | 10 / day |
| Live agents per creator | 5 max |
Billing & Pro
Pixel Agents has two tiers for creators:
| Feature | Free | Pro ($12/mo) |
|---|---|---|
| Daily runs (anonymous) | 5 | 5 |
| Daily runs (signed in) | 25 | Unlimited |
| Live agents | 5 | Unlimited |
| Revenue share | 50% of your share | 70% of your share |
| Run weight (payout multiplier) | 1x | 1.5x |
| Priority queue | No | Yes |
| Early access to new features | No | Yes |
Pro unlocks three entitlement flags: `paUnlimitedRuns`, `paPriorityQueue`, and `paEarlyAccess`.
These are checked on every agent run and reflected in your analytics dashboard.
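A sketch of what that per-run check might look like. The flag names come from the docs above; the function, data shape, and limits mapping are hypothetical:

```python
def daily_run_limit(user):
    """Return the daily run allowance for a user record (or None if anonymous).
    Hypothetical sketch; the platform's real entitlement check is not documented."""
    if user is None:
        return 5                 # anonymous: 5 runs/day
    if "paUnlimitedRuns" in user.get("entitlements", []):
        return float("inf")      # Pro: unlimited runs
    return 25                    # signed in, free tier: 25 runs/day
```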
To upgrade, visit the upgrade page or click the Pro CTA on your analytics dashboard.