- [2025.12] 🎯 [New Product] Published RunAgent Pulse: Scheduling & Orchestration, a self-hosted "Google Calendar for your AI agents".
- [2025.12] 🎯 [Integration] Integrated the PaperFlow arXiv research agent with RunAgent Serverless and RunAgent Pulse for end-to-end scheduled arXiv monitoring and email notifications.
A lightweight, self-hosted scheduling service designed for AI agents and developers.
Schedule agent executions with second-level precision, natural language scheduling, and seamless integration with RunAgent Serverless.
We've unveiled this as a companion project:
- GitHub: RunAgent-Pulse
Use it together with this repo: deploy agents to our serverless cloud (RunAgent), then orchestrate and schedule them (Pulse).
RunAgent is an agentic ecosystem that enables developers to build AI agents once in Python, using any Python agentic framework such as LangGraph, CrewAI, Letta, or LlamaIndex, then access them natively from any programming language. The platform features stateful self-learning capabilities with RunAgent Memory (coming soon), allowing agents to retain context and improve their action memory over time.
RunAgent has multi-language SDK support for seamless integration across TypeScript, JavaScript, Go, and other languages, eliminating the need to rewrite agents for different tech stacks. RunAgent Cloud provides automated deployment with serverless auto-scaling, comprehensive agent security, and real-time monitoring capabilities.
pip install runagent  # The basic install
runagent init my-agent # Basic template
# You can also choose from various frameworks
runagent init my-agent --langgraph # LangGraph template
runagent init my-agent --crewai # CrewAI template
runagent init my-agent --letta # Letta template
Every RunAgent project requires a runagent.config.json file that defines your agent's structure and capabilities.
This configuration file specifies basic metadata (name, framework, version), defines entrypoints for either Python functions or external webhooks, and sets environment variables like API keys. The entrypoints array is the core component, allowing you to expose functions from any Python framework (LangGraph, CrewAI, OpenAI) or integrate external services (N8N, Zapier) through a unified interface accessible from any programming language.
{
  "agent_name": "LangGraph Problem Solver",
  "description": "Multi-step problem analysis and solution validation agent",
  "framework": "langgraph",
  "version": "1.0.0",
  "agent_architecture": {
    "entrypoints": [
      {
        "file": "agent.py",
        "module": "solve_problem",
        "tag": "solve_problem"
      },
      {
        "file": "agent.py",
        "module": "solve_problem_stream",
        "tag": "solve_problem_stream"
      }
    ]
  },
  "env_vars": {
    "OPENAIAPI_KEY": "your-api-key"
  }
}
Deploy and test your agents locally with full debugging capabilities before deploying to RunAgent Cloud.
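A quick way to catch config mistakes before serving is a small validation script. The `check_config` helper below is purely illustrative (it is not part of the RunAgent CLI) and checks only the fields shown in the example above:

```python
import json

# Required top-level fields, based on the example config above
REQUIRED_KEYS = {"agent_name", "framework", "version", "agent_architecture"}

def check_config(raw):
    """Return a list of problems found in a runagent.config.json string."""
    cfg = json.loads(raw)
    problems = [f"missing field: {key}" for key in sorted(REQUIRED_KEYS - cfg.keys())]
    entrypoints = cfg.get("agent_architecture", {}).get("entrypoints", [])
    if not entrypoints:
        problems.append("no entrypoints defined")
    for ep in entrypoints:
        for field in ("file", "module", "tag"):
            if field not in ep:
                problems.append(f"entrypoint missing '{field}'")
    return problems

minimal = (
    '{"agent_name": "demo", "framework": "langgraph", "version": "1.0.0",'
    ' "agent_architecture": {"entrypoints": [{"file": "agent.py",'
    ' "module": "solve_problem", "tag": "solve_problem"}]}}'
)
print(check_config(minimal))  # an empty list means no problems found
```

Running this against your own `runagent.config.json` (e.g. `check_config(open("runagent.config.json").read())`) surfaces missing fields before the server ever starts.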
cd my-agent
runagent serve .
This starts a local FastAPI server with:
- Auto-allocated ports to avoid conflicts
- Real-time debugging and logging
- WebSocket support for streaming
- Built-in API documentation at `/docs`
Once your agent is tested locally, deploy to production:
# Authenticate (first time only)
runagent setup --api-key <your-api-key>
# Deploy to cloud
runagent deploy --folder .
Your agent will be live globally with automatic scaling, monitoring, and enterprise security. View all your agents and execution metrics in the dashboard.
# agent.py
from langgraph.graph import StateGraph
from typing import TypedDict, List

class ProblemState(TypedDict):
    query: str
    num_solutions: int
    constraints: List[dict]
    solutions: List[str]
    validated: bool

def analyze_problem(state):
    # Problem analysis logic
    return {"solutions": [...]}

def validate_solutions(state):
    # Validation logic
    return {"validated": True}

# Build the graph
workflow = StateGraph(ProblemState)
workflow.add_node("analyze", analyze_problem)
workflow.add_node("validate", validate_solutions)
workflow.add_edge("analyze", "validate")
workflow.set_entry_point("analyze")
app = workflow.compile()

def solve_problem(query, num_solutions, constraints):
    result = app.invoke({
        "query": query,
        "num_solutions": num_solutions,
        "constraints": constraints
    })
    return result

async def solve_problem_stream(query, num_solutions, constraints):
    async for event in app.astream({
        "query": query,
        "num_solutions": num_solutions,
        "constraints": constraints
    }):
        yield event
🌍 Access from any language:
RunAgent offers multi-language SDKs: Rust, TypeScript, JavaScript, Go, Dart, C#/.NET, and beyond, so you can integrate seamlessly without ever rewriting your agents for different stacks.
| Python SDK | JavaScript SDK | Rust SDK | Go SDK | Dart SDK | C# SDK |
from runagent import RunAgentClient

client = RunAgentClient(
    agent_id="lg-solver-123",
    entrypoint_tag="solve_problem",
    local=True
)

result = client.run(
    query="My laptop is slow",
    num_solutions=3,
    constraints=[{
        "type": "budget",
        "value": 100
    }]
)
print(result)

# Streaming
for chunk in client.run(
    query="Fix my phone",
    num_solutions=4
):
    print(chunk) |
import { RunAgentClient } from 'runagent';

const client = new RunAgentClient({
  agentId: "lg-solver-123",
  entrypointTag: "solve_problem",
  local: true
});
await client.initialize();

const result = await client.run({
  query: "My laptop is slow",
  num_solutions: 3,
  constraints: [{
    type: "budget",
    value: 100
  }]
});
console.log(result);

// Streaming
for await (const chunk of client.run({
  query: "Fix my phone",
  num_solutions: 4
})) {
  process.stdout.write(chunk);
} |
use runagent::client::RunAgentClient;
use serde_json::json;
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = RunAgentClient::new(
        "lg-solver-123",
        "solve_problem",
        true
    ).await?;

    let result = client.run(&[
        ("query", json!("My laptop is slow")),
        ("num_solutions", json!(3)),
        ("constraints", json!([{
            "type": "budget",
            "value": 100
        }]))
    ]).await?;
    println!("Result: {}", result);

    // Streaming
    let mut stream = client.run_stream(&[
        ("query", json!("Fix my phone")),
        ("num_solutions", json!(4))
    ]).await?;
    while let Some(chunk) = stream.next().await {
        print!("{}", chunk?);
    }
    Ok(())
} |
package main

import (
	"context"
	"fmt"

	"github.com/runagent-dev/runagent-go/pkg/client"
)

func main() {
	client, _ := client.New(
		"lg-solver-123",
		"solve_problem",
		true,
	)
	defer client.Close()

	result, _ := client.Run(
		context.Background(),
		map[string]interface{}{
			"query":         "My laptop is slow",
			"num_solutions": 3,
			"constraints": []map[string]interface{}{
				{"type": "budget", "value": 100},
			},
		},
	)
	fmt.Printf("Result: %v\n", result)

	// Streaming
	stream, _ := client.RunStream(
		context.Background(),
		map[string]interface{}{
			"query":         "Fix my phone",
			"num_solutions": 4,
		},
	)
	defer stream.Close()
	for {
		chunk, hasMore, _ := stream.Next(context.Background())
		if !hasMore {
			break
		}
		fmt.Print(chunk)
	}
} |
import 'package:runagent/runagent.dart';

void main() async {
  final client = await RunAgentClient.create(
    RunAgentClientConfig.create(
      agentId: "lg-solver-123",
      entrypointTag: "solve_problem",
      local: true,
    ),
  );

  final result = await client.run({
    "query": "My laptop is slow",
    "num_solutions": 3,
    "constraints": [
      {"type": "budget", "value": 100}
    ],
  });
  print(result);

  // Streaming
  await for (final chunk in client.runStream({
    "query": "Fix my phone",
    "num_solutions": 4,
  })) {
    print(chunk);
  }
} |
using RunAgent.Client;
using RunAgent.Types;

class Program
{
    static async Task Main()
    {
        var config = RunAgentClientConfig
            .Create("lg-solver-123", "solve_problem")
            .WithLocal(true);
        var client = await RunAgentClient.CreateAsync(config);

        var result = await client.RunAsync(
            new Dictionary<string, object>
            {
                ["query"] = "My laptop is slow",
                ["num_solutions"] = 3,
                ["constraints"] = new List<object>
                {
                    new Dictionary<string, object>
                    {
                        ["type"] = "budget",
                        ["value"] = 100
                    }
                }
            }
        );
        Console.WriteLine(result);

        // Streaming
        await foreach (var chunk in client.RunStreamAsync(
            new Dictionary<string, object>
            {
                ["query"] = "Fix my phone",
                ["num_solutions"] = 4
            }
        ))
        {
            Console.Write(chunk);
        }
    }
} |
Now Available: Production-Ready Cloud Infrastructure
Deploy to production in seconds with enterprise-grade infrastructure
| Sign Up | Dashboard | Documentation |
Deploy your agents to RunAgent Cloud with enterprise-grade infrastructure and experience the fastest agent deployment. RunAgent Cloud provides serverless auto-scaling, comprehensive security, and real-time monitoring - all managed for you.
- Sign up at app.run-agent.ai
- Generate API Key: After signing in, go to Settings → API Keys → Generate API Key
- Authenticate CLI: Configure your CLI with your API key
- Deploy: Deploy your agents with a single command
# Authenticate with RunAgent Cloud
runagent setup --api-key <your-api-key>
# Deploy your agent
runagent deploy --folder ./my-agent
From zero to production in seconds. RunAgent Cloud automatically selects the optimal VM image based on your agent's requirements, with deployment typically completing in 30-60 seconds for standard images, or up to 2 minutes for specialized configurations.
Every agent runs in its own isolated sandbox environment:
- Complete process isolation
- Network segmentation
- Resource limits and monitoring
- Zero data leakage between agents
The RunAgent Cloud dashboard provides comprehensive insights into your agents:
- Agent Execution Metadata - Detailed information about each execution
- Execution Time Tracking - Monitor performance and optimize accordingly
- Agent Management - View and manage all your deployed agents
- Usage Analytics - Track usage patterns and resource consumption
- Real-time Monitoring - Live status and health checks
- Execution History - Complete audit trail of all agent invocations
Access your dashboard at app.run-agent.ai/dashboard after signing in.
RunAgent Cloud provides:
- Auto-scaling - Automatically scales based on demand
- Global Edge Distribution - Low-latency access worldwide
- Built-in Monitoring - Comprehensive analytics and observability
- Production-Grade Security - Enterprise security and compliance
- Multiple VM Images - Automatic image selection optimized for your agent
- Serverless Infrastructure - Zero infrastructure management
RunAgent introduces Persistent Memory - the fastest serverless memory system for AI agents. Unlike traditional stateless serverless architectures, RunAgent enables your agents to maintain context and state across executions, creating truly intelligent and context-aware applications.
Traditional serverless functions are stateless by design, meaning each invocation starts fresh with no memory of previous interactions. RunAgent's Persistent Memory breaks this limitation, allowing your agents to:
- Remember Context - Maintain conversation history and user preferences across sessions
- Learn from Interactions - Build upon previous executions to improve responses
- Stateful Workflows - Create multi-step processes that remember where they left off
- Cross-Language Persistence - Memory works seamlessly across all SDK languages (Python, JavaScript, Rust, Go, Dart)
Persistent Memory in RunAgent is designed for speed and reliability:
from runagent import RunAgentClient

# Create a client with persistent memory enabled
client = RunAgentClient(
    agent_id="my-agent-id",
    entrypoint_tag="chat",
    user_id="user123",  # User identifier for memory isolation
    persistent_memory=True  # Enable persistent memory
)

# First interaction - agent learns user preferences
result1 = client.run(message="I prefer dark mode interfaces")

# Second interaction - agent remembers the preference
result2 = client.run(message="What's my UI preference?")
# Agent responds: "You prefer dark mode interfaces"
Persistent Memory works identically across all SDKs:
Python:
client = RunAgentClient(
    agent_id="agent-id",
    entrypoint_tag="entrypoint",
    user_id="user123",
    persistent_memory=True
)
JavaScript:
const client = new RunAgentClient({
  agentId: "agent-id",
  entrypointTag: "entrypoint",
  userId: "user123",
  persistentMemory: true
});
Rust:
let client = RunAgentClient::new(
    RunAgentClientConfig::new("agent-id", "entrypoint")
        .with_user_id("user123")
        .with_persistent_memory(true)
).await?;
Dart:
final client = await RunAgentClient.create(
  RunAgentClientConfig.create(
    agentId: "agent-id",
    entrypointTag: "entrypoint",
    userId: "user123",
    persistentMemory: true,
  ),
);
C#:
var config = RunAgentClientConfig
    .Create("agent-id", "entrypoint")
    .WithUserId("user123")
    .WithPersistentMemory(true);
var client = await RunAgentClient.CreateAsync(config);
- ⚡ Fastest Serverless Memory - Optimized for low-latency access and updates
- 🔒 Secure & Isolated - Each `user_id` has an isolated memory space
- 🌐 Universal - Works with any framework (LangGraph, CrewAI, Letta, etc.)
- 📈 Scalable - Built on serverless infrastructure that scales automatically
- 🔄 Stateful Workflows - Enable complex multi-turn conversations and workflows
- Conversational AI - Maintain context across multiple user interactions
- Personalization - Remember user preferences and adapt responses
- Multi-Step Processes - Track progress through complex workflows
- Learning Systems - Agents that improve based on interaction history
- Session Management - Maintain state across distributed systems
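These stateful patterns can be prototyped before deploying anything. The `EchoMemoryClient` below is a hypothetical in-memory stand-in (it is not part of any RunAgent SDK); it only imitates the documented `run()` call shape and per-`user_id` memory isolation so you can sketch a multi-turn flow locally:

```python
# Hypothetical stand-in for RunAgentClient: mimics per-user persistent
# memory locally so a multi-turn flow can be sketched without deploying.
class EchoMemoryClient:
    _memory = {}  # shared store keyed by user_id, like isolated memory spaces

    def __init__(self, user_id):
        self.user_id = user_id
        self._memory.setdefault(user_id, [])

    def run(self, message):
        history = self._memory[self.user_id]
        history.append(message)
        # "Recall" the earliest stored message when asked a question
        if message.endswith("?") and len(history) > 1:
            return f"You told me: {history[0]}"
        return "Noted."

alice = EchoMemoryClient(user_id="alice")
bob = EchoMemoryClient(user_id="bob")

alice.run("I prefer dark mode interfaces")
print(alice.run("What's my UI preference?"))  # recalls Alice's earlier message
print(bob.run("What's my UI preference?"))    # Bob's memory is separate, nothing to recall
```

Swapping `EchoMemoryClient` for a real `RunAgentClient` with `persistent_memory=True` keeps the same call structure while moving the state into RunAgent's memory system.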
- Getting Started - Deploy your first agent in 5 minutes
- CLI Reference - Complete command-line interface guide
- SDK Documentation - Multi-language SDK guides
- Framework Guides - Framework-specific tutorials
- API Reference - REST API documentation
- RunAgent Cloud Guide - Complete cloud deployment guide
Building on our Persistent Memory foundation, RunAgent is introducing Action Memory - an advanced approach to agent reliability that focuses on how to remember rather than what to remember.
- Action-Centric: Instead of storing raw conversation data, it captures decision patterns and successful action sequences
- Cross-Language: Memory persists across all SDK languages seamlessly
- Reliability Focus: Learns from successful outcomes to improve future decisions
- Ecosystem Integration: Works with any framework - LangGraph, CrewAI, Letta, and more
This will ensure your agents become more reliable over time, building upon the Persistent Memory system to create truly intelligent, context-aware agents.
Discord Community • Documentation • GitHub
Ready to build universal AI agents?
🌟 Star us on GitHub • 💬 Join Discord • 📖 Read the Docs
Made with ❤️ by the RunAgent Team