The Agents API enables conversations with custom AI agents. Send messages with system prompts, configure model parameters, and integrate MCP tools.
Base URL
https://www.girardai.com/api/agents
Authentication
All requests require authentication via Bearer token:
Authorization: Bearer sk_live_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Agent Chat
Send a message to an AI agent with a custom system prompt.
Endpoint: POST /api/agents/chat
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| message | string | Yes | User message to send |
| systemPrompt | string | Yes | Agent's system prompt |
| model | string | No | Model ID (default: "gemini-2.0-flash") |
| temperature | number | No | Creativity (0.0-1.0, default: 0.7) |
| history | array | No | Previous messages for context |
| mcpServers | array | No | Array of MCP server IDs |
History Message Object
| Field | Type | Required | Description |
|---|---|---|---|
| role | string | Yes | "user" or "assistant" |
| content | string | Yes | Message content |
Available Models
| Model ID | Description |
|---|---|
| gemini-2.0-flash | Fast, capable model |
| gemini-1.5-pro | Advanced reasoning |
Basic Example
Send a message to a custom agent:
curl -X POST https://www.girardai.com/api/agents/chat \
  -H "Authorization: Bearer sk_live_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "I need help writing a product description for a coffee maker",
    "systemPrompt": "You are an expert marketing copywriter. You specialize in creating compelling product descriptions that highlight benefits and drive conversions. Be creative and persuasive."
  }'
Response:
{
  "success": true,
  "data": {
    "content": "# Wake Up to Perfection\n\nIntroducing the **BrewMaster Pro 3000** – your personal barista, right in your kitchen.\n\n**Why You'll Love It:**\n\n☕ **Perfect Temperature Every Time** – Advanced thermal control ensures your coffee is always brewed at the optimal 200°F...",
    "model": "gemini-2.0-flash"
  }
}
With Temperature Control
Adjust creativity level:
curl -X POST https://www.girardai.com/api/agents/chat \
  -H "Authorization: Bearer sk_live_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Generate a creative story opening about a robot",
    "systemPrompt": "You are a creative fiction writer who specializes in science fiction.",
    "model": "gemini-1.5-pro",
    "temperature": 0.9
  }'
Response:
{
  "success": true,
  "data": {
    "content": "Unit-7 had processed exactly 47,832 sunrises before the anomaly occurred. It started as a minor deviation in its emotional subroutines—a 0.003% fluctuation that any diagnostic would dismiss as background noise...",
    "model": "gemini-1.5-pro"
  }
}
With Conversation History
Include previous messages for context:
curl -X POST https://www.girardai.com/api/agents/chat \
  -H "Authorization: Bearer sk_live_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Can you make it shorter?",
    "systemPrompt": "You are a helpful writing assistant.",
    "history": [
      {
        "role": "user",
        "content": "Write a tagline for a fitness app"
      },
      {
        "role": "assistant",
        "content": "\"Transform Your Body, Transform Your Life – One Workout at a Time\""
      }
    ]
  }'
Response:
{
  "success": true,
  "data": {
    "content": "Here's a shorter version:\n\n\"Sweat Today, Shine Tomorrow\"",
    "model": "gemini-2.0-flash"
  }
}
With MCP Tools
Enable external tools for the agent:
curl -X POST https://www.girardai.com/api/agents/chat \
  -H "Authorization: Bearer sk_live_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Find and summarize recent articles about electric vehicles",
    "systemPrompt": "You are a research assistant who provides well-sourced summaries on requested topics.",
    "mcpServers": ["web-search"],
    "temperature": 0.3
  }'
Response:
{
  "success": true,
  "data": {
    "content": "# Electric Vehicle Industry Update\n\nBased on recent articles, here are the key developments:\n\n## Market Trends\n- EV sales grew 25% year-over-year...\n\n## Technology Advances\n- New solid-state batteries promise...\n\n**Sources:**\n- TechCrunch: \"EV Sales Surge in Q4\"\n- Reuters: \"Battery Technology Breakthrough\"",
    "model": "gemini-2.0-flash",
    "toolCalls": [
      {
        "name": "web-search_search_web",
        "args": {
          "query": "electric vehicles news 2025"
        },
        "result": "{\"results\": [...]}"
      }
    ]
  }
}
Agent Templates
Customer Support Agent
{
  "message": "I can't log into my account",
  "systemPrompt": "You are a friendly customer support agent for a SaaS company. Help users troubleshoot issues, answer questions about the product, and escalate when necessary. Always be empathetic and solution-focused.",
  "temperature": 0.3
}
Code Review Agent
{
  "message": "Review this function for best practices",
  "systemPrompt": "You are an expert code reviewer. Analyze code for bugs, performance issues, security vulnerabilities, and adherence to best practices. Provide specific, actionable feedback with examples.",
  "temperature": 0.2,
  "mcpServers": ["github"]
}
Data Analyst Agent
{
  "message": "Analyze our sales data for Q4",
  "systemPrompt": "You are a data analyst expert. Help users understand their data, identify trends, and generate insights. Use SQL queries when needed and explain findings in business terms.",
  "temperature": 0.3,
  "mcpServers": ["postgres"]
}
Creative Writer Agent
{
  "message": "Write a poem about autumn",
  "systemPrompt": "You are a creative writer with expertise in poetry, fiction, and creative non-fiction. Your writing is evocative, original, and emotionally resonant.",
  "model": "gemini-1.5-pro",
  "temperature": 0.9
}
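The templates above are partial request bodies; only the user's message needs to be merged in at call time. A minimal Python sketch of that pattern (the `TEMPLATES` registry and `build_request` helper are illustrative client-side code, not part of the API):

```python
# Hypothetical local registry built from two of the templates above.
TEMPLATES = {
    "customer-support": {
        "systemPrompt": "You are a friendly customer support agent for a SaaS company. "
                        "Help users troubleshoot issues, answer questions about the product, "
                        "and escalate when necessary. Always be empathetic and solution-focused.",
        "temperature": 0.3,
    },
    "creative-writer": {
        "systemPrompt": "You are a creative writer with expertise in poetry, fiction, "
                        "and creative non-fiction.",
        "model": "gemini-1.5-pro",
        "temperature": 0.9,
    },
}

def build_request(template_name: str, message: str) -> dict:
    """Merge a named template with the user's message into a request body."""
    body = dict(TEMPLATES[template_name])  # copy so the template itself stays unchanged
    body["message"] = message
    return body
```

Keeping templates as plain dicts makes it easy to version them alongside your code and reuse one system prompt across many requests.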
Response Format
Success Response
{
  "success": true,
  "data": {
    "content": "Agent response text...",
    "model": "gemini-2.0-flash",
    "toolCalls": [
      {
        "name": "tool_name",
        "args": {},
        "result": "tool result"
      }
    ]
  }
}
| Field | Type | Description |
|---|---|---|
| content | string | Agent response text |
| model | string | Model used for the response |
| toolCalls | array | Tool calls made (present only when tools were invoked) |
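Because toolCalls only appears when the agent invoked tools, client code should treat it as optional. A minimal sketch of parsing a response body (field names follow the format above; the helper itself is illustrative):

```python
def summarize_response(payload: dict) -> str:
    """Extract the agent's text from a parsed /api/agents/chat response,
    appending the names of any tools that were invoked."""
    if not payload.get("success"):
        raise RuntimeError(payload.get("error", "Unknown error"))
    data = payload["data"]
    content = data["content"]
    # toolCalls is optional; default to an empty list when absent.
    tool_names = [call["name"] for call in data.get("toolCalls", [])]
    if tool_names:
        return f"{content}\n\n[tools used: {', '.join(tool_names)}]"
    return content
```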
Code Examples
JavaScript/TypeScript
interface AgentConfig {
  systemPrompt: string;
  model?: string;
  temperature?: number;
  mcpServers?: string[];
}

interface Message {
  role: 'user' | 'assistant';
  content: string;
}

class GirardAgent {
  private config: AgentConfig;
  private history: Message[] = [];
  private apiKey: string;

  constructor(config: AgentConfig, apiKey: string) {
    this.config = config;
    this.apiKey = apiKey;
  }

  async chat(message: string): Promise<string> {
    const response = await fetch('https://www.girardai.com/api/agents/chat', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${this.apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        message,
        systemPrompt: this.config.systemPrompt,
        model: this.config.model ?? 'gemini-2.0-flash',
        // ?? (not ||) so an explicit temperature of 0 is respected
        temperature: this.config.temperature ?? 0.7,
        history: this.history,
        mcpServers: this.config.mcpServers ?? [],
      }),
    });

    const result = await response.json();
    if (result.success) {
      // Append the exchange so follow-up calls keep context
      this.history.push({ role: 'user', content: message });
      this.history.push({ role: 'assistant', content: result.data.content });
      return result.data.content;
    }
    throw new Error(result.error);
  }

  clearHistory(): void {
    this.history = [];
  }
}

// Usage
const supportAgent = new GirardAgent({
  systemPrompt: 'You are a helpful customer support agent.',
  temperature: 0.3,
}, process.env.GIRARDAI_API_KEY!);

const response = await supportAgent.chat('How do I reset my password?');
console.log(response);
Python
import requests
import os
from typing import List, Dict, Optional
class GirardAgent:
    def __init__(
        self,
        system_prompt: str,
        model: str = 'gemini-2.0-flash',
        temperature: float = 0.7,
        mcp_servers: Optional[List[str]] = None
    ):
        self.system_prompt = system_prompt
        self.model = model
        self.temperature = temperature
        self.mcp_servers = mcp_servers or []
        self.history: List[Dict[str, str]] = []
        self.api_key = os.environ.get('GIRARDAI_API_KEY')

    def chat(self, message: str) -> str:
        """Send a message to the agent and get a response."""
        response = requests.post(
            'https://www.girardai.com/api/agents/chat',
            headers={
                'Authorization': f'Bearer {self.api_key}',
                'Content-Type': 'application/json',
            },
            json={
                'message': message,
                'systemPrompt': self.system_prompt,
                'model': self.model,
                'temperature': self.temperature,
                'history': self.history,
                'mcpServers': self.mcp_servers,
            }
        )
        result = response.json()
        if result.get('success'):
            content = result['data']['content']
            # Append the exchange so follow-up calls keep context
            self.history.append({'role': 'user', 'content': message})
            self.history.append({'role': 'assistant', 'content': content})
            return content
        raise Exception(result.get('error', 'Unknown error'))

    def clear_history(self):
        """Clear conversation history."""
        self.history = []


# Usage
writer = GirardAgent(
    system_prompt='You are a creative fiction writer.',
    model='gemini-1.5-pro',
    temperature=0.9
)

story = writer.chat('Write an opening paragraph for a mystery novel')
print(story)

continuation = writer.chat('Continue the story')
print(continuation)
Error Responses
Error Format
{
  "success": false,
  "error": "Error message"
}
Common Errors
| Status | Error | Description |
|---|---|---|
| 400 | "Message is required" | Missing message field |
| 400 | "System prompt is required" | Missing systemPrompt |
| 401 | "Unauthorized" | Invalid API key |
| 400 | "No organization found" | User has no organization |
| 503 | "Agent service not configured" | Server config issue |
| 500 | Error message | Internal server error |
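One way to surface these errors to callers is to map status codes to distinct exception types. A hedged sketch (the exception choices are illustrative, not prescribed by the API):

```python
def classify_error(status_code: int, error: str) -> Exception:
    """Map an HTTP status code and error message to a Python exception.

    Status meanings follow the Common Errors table above; the
    exception types chosen here are illustrative.
    """
    if status_code == 401:
        return PermissionError(f"Check your API key: {error}")
    if status_code == 400:
        return ValueError(f"Bad request: {error}")
    # 500 and 503 indicate server-side issues; callers may retry with backoff.
    return RuntimeError(f"Server error ({status_code}): {error}")

# At the call site, after parsing the JSON body:
#   raise classify_error(resp.status_code, result.get("error", "Unknown error"))
```

Distinguishing client errors (fix the request) from server errors (retry later) keeps retry logic from hammering the API with requests that can never succeed.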
Best Practices
System Prompts
- Be Specific - Detail the agent's role and capabilities
- Set Boundaries - Define what the agent should/shouldn't do
- Include Examples - Show desired response format
- Use Structure - Organize with headers and lists
Temperature Settings
| Use Case | Temperature |
|---|---|
| Factual responses | 0.0 - 0.3 |
| Customer support | 0.2 - 0.4 |
| General assistance | 0.5 - 0.6 |
| Brainstorming | 0.7 - 0.8 |
| Creative writing | 0.8 - 1.0 |
History Management
- Include relevant history - Maintain conversation context
- Limit history size - Prevent context overflow
- Clear when changing topics - Start fresh for new subjects
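A simple way to bound history size before each request is to keep only the most recent messages, dropping the oldest first. A minimal sketch (the 20-message cap is an illustrative default, not an API limit):

```python
def trim_history(history: list, max_messages: int = 20) -> list:
    """Return at most the last max_messages entries of history.

    Using an even cap keeps user/assistant turns paired; 20 is
    an arbitrary example, not an API-imposed limit.
    """
    if max_messages <= 0:
        return []
    return history[-max_messages:]
```

Call this on the history array before including it in a request body, or apply it inside a client class each time a new exchange is appended.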
Credit Usage
| Action | Credits |
|---|---|
| Agent message | 1 |
| With tool calls | 1 + tool overhead |