
RunAgent simplifies serverless deployment of your AI agents, with a powerful CLI, multi-language SDK support, and built-in agent invocation and streaming support.


runagent-dev/runagent


RunAgent Logo

Secured, reliable AI agent deployment at scale

Run your stack. Let us run your agents.

Read the Docs


πŸŽ‰ News

We have published RunAgent Pulse

  • [2025.12] 🎯 [New Product] Published RunAgent Pulse – Scheduling & Orchestration, a self-hosted "Google Calendar for your AI agents".
  • [2025.12] 🎯 [Integration] Integrated the PaperFlow arXiv research agent with RunAgent Serverless and RunAgent Pulse for end-to-end scheduled arXiv monitoring and email notifications.

What is RunAgent Pulse?

RunAgent Pulse is a "Google Calendar" for your AI agents:

A lightweight, self-hosted scheduling service designed for AI agents and developers.
Schedule agent executions with second-level precision, natural-language scheduling, and seamless integration with RunAgent Serverless.

Pulse is a companion project to this repo: deploy your agents to our serverless cloud (RunAgent), then orchestrate and schedule them (Pulse).


What is RunAgent?

RunAgent is an agentic ecosystem that lets developers build AI agents once in Python, using any Python agentic framework (LangGraph, CrewAI, Letta, LlamaIndex, and more), then access them natively from any programming language. The platform features stateful self-learning capabilities with RunAgent Memory (coming soon), allowing agents to retain context and improve their action memory over time.


RunAgent has multi-language SDK support for seamless integration across TypeScript, JavaScript, Go, and other languages, eliminating the need to rewrite agents for different tech stacks. RunAgent Cloud provides automated deployment with serverless auto-scaling, comprehensive agent security, and real-time monitoring capabilities.

Quick Start

Installation

pip install runagent

Initialize Your First Agent

runagent init my-agent                # Basic template

# Or choose a framework-specific template
runagent init my-agent --langgraph    # LangGraph template
runagent init my-agent --crewai       # CrewAI template
runagent init my-agent --letta        # Letta template

Agent Configuration

Every RunAgent project requires a runagent.config.json file that defines your agent's structure and capabilities.

This configuration file specifies basic metadata (name, framework, version), defines entrypoints for either Python functions or external webhooks, and sets environment variables like API keys. The entrypoints array is the core component, allowing you to expose functions from any Python framework (LangGraph, CrewAI, OpenAI) or integrate external services (N8N, Zapier) through a unified interface accessible from any programming language.

Example Configuration

{
  "agent_name": "LangGraph Problem Solver",
  "description": "Multi-step problem analysis and solution validation agent",
  "framework": "langgraph",
  "version": "1.0.0",
  "agent_architecture": {
    "entrypoints": [
      {
        "file": "agent.py",
        "module": "solve_problem",
        "tag": "solve_problem"
      },
      {
        "file": "agent.py",
        "module": "solve_problem_stream",
        "tag": "solve_problem_stream"
      }
    ]
  },
  "env_vars": {
    "OPENAI_API_KEY": "your-api-key"
  }
}
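
As a quick sanity check before serving, the required fields above can be verified programmatically. The sketch below infers its rules from the example config shown here (it is not an official schema, and `validate_config` is a hypothetical helper, not part of the RunAgent package):

```python
import json

# Illustrative sanity check for runagent.config.json. The required keys
# below are inferred from the example in this README, not an official schema.
REQUIRED_KEYS = {"agent_name", "framework", "version", "agent_architecture"}
ENTRYPOINT_FIELDS = ("file", "module", "tag")

def validate_config(raw: str) -> list:
    """Return a list of problems found in a raw config document."""
    cfg = json.loads(raw)
    problems = [f"missing key: {key}" for key in sorted(REQUIRED_KEYS - cfg.keys())]
    entrypoints = cfg.get("agent_architecture", {}).get("entrypoints", [])
    if not entrypoints:
        problems.append("no entrypoints defined")
    for ep in entrypoints:
        problems += [f"entrypoint missing '{f}'" for f in ENTRYPOINT_FIELDS if f not in ep]
    return problems
```

An empty list means the config has the shape shown above; anything else names the missing pieces.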

Local Development

Deploy and test your agents locally with full debugging capabilities before deploying to RunAgent Cloud.

Deploy Agent Locally

cd my-agent
runagent serve .

This starts a local FastAPI server with:

  • Auto-allocated ports to avoid conflicts
  • Real-time debugging and logging
  • WebSocket support for streaming
  • Built-in API documentation at /docs
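
The "auto-allocated ports" behavior can be illustrated with the standard OS trick of binding to port 0. This is a generic sketch of the technique, not RunAgent's actual allocation code:

```python
import socket

def find_free_port() -> int:
    # Bind to port 0 and let the OS pick a free ephemeral port, the usual
    # mechanism behind "auto-allocated ports". Illustrative only; not
    # RunAgent's internals.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

print(f"a local server could listen on http://127.0.0.1:{find_free_port()}/docs")
```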

Deploy to RunAgent Cloud

Once your agent is tested locally, deploy to production:

# Authenticate (first time only)
runagent setup --api-key <your-api-key>

# Deploy to cloud
runagent deploy --folder .

Your agent will be live globally with automatic scaling, monitoring, and enterprise security. View all your agents and execution metrics in the dashboard.

Example: LangGraph Problem Solver Agent

# agent.py
from langgraph.graph import StateGraph, END
from typing import TypedDict, List

class ProblemState(TypedDict):
    query: str
    num_solutions: int
    constraints: List[dict]
    solutions: List[str]
    validated: bool

def analyze_problem(state):
    # Problem analysis logic (placeholder): one candidate per requested solution
    return {"solutions": [f"Solution {i + 1} for: {state['query']}"
                          for i in range(state["num_solutions"])]}

def validate_solutions(state):
    # Validation logic (placeholder)
    return {"validated": True}

# Build the graph
workflow = StateGraph(ProblemState)
workflow.add_node("analyze", analyze_problem)
workflow.add_node("validate", validate_solutions)
workflow.add_edge("analyze", "validate")
workflow.add_edge("validate", END)
workflow.set_entry_point("analyze")

app = workflow.compile()

def solve_problem(query, num_solutions, constraints):
    result = app.invoke({
        "query": query,
        "num_solutions": num_solutions,
        "constraints": constraints
    })
    return result

async def solve_problem_stream(query, num_solutions, constraints):
    async for event in app.astream({
        "query": query,
        "num_solutions": num_solutions,
        "constraints": constraints
    }):
        yield event

🌐 Access from any language:

RunAgent offers multi-language SDKs (Rust, TypeScript, JavaScript, Go, Dart, C#/.NET, and more), so you can integrate seamlessly without rewriting your agents for different stacks.

Python:

from runagent import RunAgentClient

client = RunAgentClient(
    agent_id="lg-solver-123",
    entrypoint_tag="solve_problem",
    local=True
)

result = client.run(
    query="My laptop is slow",
    num_solutions=3,
    constraints=[{
        "type": "budget", 
        "value": 100
    }]
)
print(result)

# Streaming
for chunk in client.run(
    query="Fix my phone", 
    num_solutions=4
):
    print(chunk)

JavaScript:

import { RunAgentClient } from 'runagent';

const client = new RunAgentClient({
  agentId: "lg-solver-123",
  entrypointTag: "solve_problem",
  local: true
});

await client.initialize();
const result = await client.run({
  query: "My laptop is slow",
  num_solutions: 3,
  constraints: [{
    type: "budget",
    value: 100
  }]
});
console.log(result);

// Streaming
for await (const chunk of client.run({
  query: "Fix my phone",
  num_solutions: 4
})) {
  process.stdout.write(chunk);
}

Rust:

use runagent::client::RunAgentClient;
use serde_json::json;
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = RunAgentClient::new(
        "lg-solver-123", 
        "solve_problem", 
        true
    ).await?;
    
    let result = client.run(&[
        ("query", json!("My laptop is slow")),
        ("num_solutions", json!(3)),
        ("constraints", json!([{
            "type": "budget", 
            "value": 100
        }]))
    ]).await?;
    
    println!("Result: {}", result);
    
    // Streaming
    let mut stream = client.run_stream(&[
        ("query", json!("Fix my phone")),
        ("num_solutions", json!(4))
    ]).await?;
    
    while let Some(chunk) = stream.next().await {
        print!("{}", chunk?);
    }
    
    Ok(())
}

Go:

package main

import (
    "context"
    "fmt"
    "github.com/runagent-dev/runagent-go/pkg/client"
)

func main() {
    client, _ := client.New(
        "lg-solver-123", 
        "solve_problem", 
        true
    )
    defer client.Close()

    result, _ := client.Run(
        context.Background(), 
        map[string]interface{}{
            "query": "My laptop is slow",
            "num_solutions": 3,
            "constraints": []map[string]interface{}{
                {"type": "budget", "value": 100},
            },
        }
    )
    fmt.Printf("Result: %v\n", result)

    // Streaming
    stream, _ := client.RunStream(
        context.Background(),
        map[string]interface{}{
            "query": "Fix my phone",
            "num_solutions": 4,
        }
    )
    defer stream.Close()

    for {
        chunk, hasMore, _ := stream.Next(context.Background())
        if !hasMore { break }
        fmt.Print(chunk)
    }
}

Dart:

import 'package:runagent/runagent.dart';

void main() async {
  final client = await RunAgentClient.create(
    RunAgentClientConfig.create(
      agentId: "lg-solver-123",
      entrypointTag: "solve_problem",
      local: true,
    ),
  );

  final result = await client.run({
    "query": "My laptop is slow",
    "num_solutions": 3,
    "constraints": [
      {"type": "budget", "value": 100}
    ],
  });
  print(result);

  // Streaming
  await for (final chunk in client.runStream({
    "query": "Fix my phone",
    "num_solutions": 4,
  })) {
    print(chunk);
  }
}

C#:

using RunAgent.Client;
using RunAgent.Types;

class Program
{
    static async Task Main()
    {
        var config = RunAgentClientConfig
            .Create("lg-solver-123", "solve_problem")
            .WithLocal(true);

        var client = await RunAgentClient
            .CreateAsync(config);

        var result = await client.RunAsync(
            new Dictionary<string, object>
            {
                ["query"] = "My laptop is slow",
                ["num_solutions"] = 3,
                ["constraints"] = new List<object>
                {
                    new Dictionary<string, object>
                    {
                        ["type"] = "budget",
                        ["value"] = 100
                    }
                }
            }
        );
        Console.WriteLine(result);

        // Streaming
        await foreach (var chunk in client.RunStreamAsync(
            new Dictionary<string, object>
            {
                ["query"] = "Fix my phone",
                ["num_solutions"] = 4
            }
        ))
        {
            Console.Write(chunk);
        }
    }
}

🚀 RunAgent Cloud Deployment

Now Available: Production-Ready Cloud Infrastructure

Deploy to production in seconds with enterprise-grade infrastructure


Sign Up Dashboard Documentation

Deploy your agents to RunAgent Cloud with enterprise-grade infrastructure and experience fast agent deployment. RunAgent Cloud provides serverless auto-scaling, comprehensive security, and real-time monitoring, all managed for you.

Get Started with RunAgent Cloud

  1. Sign up at app.run-agent.ai
  2. Generate API Key: After signing in, go to Settings → API Keys → Generate API Key
  3. Authenticate CLI: Configure your CLI with your API key
  4. Deploy: Deploy your agents with a single command
# Authenticate with RunAgent Cloud
runagent setup --api-key <your-api-key>

# Deploy your agent
runagent deploy --folder ./my-agent

Fastest Agent Deployment

From zero to production in seconds. RunAgent Cloud automatically selects the optimal VM image based on your agent's requirements, with deployment typically completing in 30-60 seconds for standard images, or up to 2 minutes for specialized configurations.

Security-First Architecture

Every agent runs in its own isolated sandbox environment:

  • Complete process isolation
  • Network segmentation
  • Resource limits and monitoring
  • Zero data leakage between agents

Dashboard & Monitoring

The RunAgent Cloud dashboard provides comprehensive insights into your agents:

  • Agent Execution Metadata - Detailed information about each execution
  • Execution Time Tracking - Monitor performance and optimize accordingly
  • Agent Management - View and manage all your deployed agents
  • Usage Analytics - Track usage patterns and resource consumption
  • Real-time Monitoring - Live status and health checks
  • Execution History - Complete audit trail of all agent invocations

Access your dashboard at app.run-agent.ai/dashboard after signing in.

Enterprise-Grade Features

RunAgent Cloud provides:

  • Auto-scaling - Automatically scales based on demand
  • Global Edge Distribution - Low-latency access worldwide
  • Built-in Monitoring - Comprehensive analytics and observability
  • Production-Grade Security - Enterprise security and compliance
  • Multiple VM Images - Automatic image selection optimized for your agent
  • Serverless Infrastructure - Zero infrastructure management

🧠 Persistent Memory: Revolutionary Serverless Memory System

RunAgent introduces Persistent Memory - the fastest serverless memory system for AI agents. Unlike traditional stateless serverless architectures, RunAgent enables your agents to maintain context and state across executions, creating truly intelligent and context-aware applications.

Why Persistent Memory Matters

Traditional serverless functions are stateless by design, meaning each invocation starts fresh with no memory of previous interactions. RunAgent's Persistent Memory breaks this limitation, allowing your agents to:

  • Remember Context - Maintain conversation history and user preferences across sessions
  • Learn from Interactions - Build upon previous executions to improve responses
  • Stateful Workflows - Create multi-step processes that remember where they left off
  • Cross-Language Persistence - Memory works seamlessly across all SDK languages (Python, JavaScript, Rust, Go, Dart)

How It Works

Persistent Memory in RunAgent is designed for speed and reliability:

from runagent import RunAgentClient

# Create a client with persistent memory enabled
client = RunAgentClient(
    agent_id="my-agent-id",
    entrypoint_tag="chat",
    user_id="user123",           # User identifier for memory isolation
    persistent_memory=True        # Enable persistent memory
)

# First interaction - agent learns user preferences
result1 = client.run(message="I prefer dark mode interfaces")

# Second interaction - agent remembers the preference
result2 = client.run(message="What's my UI preference?")
# Agent responds: "You prefer dark mode interfaces"
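
Conceptually, memory isolation keys everything by user_id. The toy `MemoryStore` below is a hypothetical illustration of that keying scheme only; the real RunAgent memory backend is server-side and is not shown in this README:

```python
from collections import defaultdict

class MemoryStore:
    """Toy sketch of user_id-scoped memory isolation (hypothetical;
    illustrates the keying scheme implied by the user_id parameter)."""

    def __init__(self):
        self._store = defaultdict(dict)  # user_id -> {key: value}

    def remember(self, user_id, key, value):
        self._store[user_id][key] = value

    def recall(self, user_id, key, default=None):
        # Lookups never cross user boundaries.
        return self._store[user_id].get(key, default)

store = MemoryStore()
store.remember("user123", "ui_preference", "dark mode")
store.remember("user456", "ui_preference", "light mode")
```

Because every read and write is namespaced by user_id, one user's agent can never see another user's state.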

Multi-Language Support

Persistent Memory works identically across all SDKs:

Python:

client = RunAgentClient(
    agent_id="agent-id",
    entrypoint_tag="entrypoint",
    user_id="user123",
    persistent_memory=True
)

JavaScript:

const client = new RunAgentClient({
  agentId: "agent-id",
  entrypointTag: "entrypoint",
  userId: "user123",
  persistentMemory: true
});

Rust:

let client = RunAgentClient::new(
    RunAgentClientConfig::new("agent-id", "entrypoint")
        .with_user_id("user123")
        .with_persistent_memory(true)
).await?;

Dart:

final client = await RunAgentClient.create(
  RunAgentClientConfig.create(
    agentId: "agent-id",
    entrypointTag: "entrypoint",
    userId: "user123",
    persistentMemory: true,
  ),
);

C#:

var config = RunAgentClientConfig
    .Create("agent-id", "entrypoint")
    .WithUserId("user123")
    .WithPersistentMemory(true);

var client = await RunAgentClient.CreateAsync(config);

Key Benefits

  • ⚡ Fastest Serverless Memory - Optimized for low-latency access and updates
  • 🔒 Secure & Isolated - Each user_id has isolated memory space
  • 🌐 Universal - Works with any framework (LangGraph, CrewAI, Letta, etc.)
  • 📈 Scalable - Built on serverless infrastructure that scales automatically
  • 🔄 Stateful Workflows - Enable complex multi-turn conversations and workflows

Use Cases

  • Conversational AI - Maintain context across multiple user interactions
  • Personalization - Remember user preferences and adapt responses
  • Multi-Step Processes - Track progress through complex workflows
  • Learning Systems - Agents that improve based on interaction history
  • Session Management - Maintain state across distributed systems

📚 Documentation


🧠 Action Memory System (Coming Soon)

Building on our Persistent Memory foundation, RunAgent is introducing Action Memory - an advanced approach to agent reliability that focuses on how to remember rather than what to remember.

How It Will Work

  • Action-Centric: Instead of storing raw conversation data, it captures decision patterns and successful action sequences
  • Cross-Language: Memory persists across all SDK languages seamlessly
  • Reliability Focus: Learns from successful outcomes to improve future decisions
  • Ecosystem Integration: Works with any framework - LangGraph, CrewAI, Letta, and more

This will ensure your agents become more reliable over time, building upon the Persistent Memory system to create truly intelligent, context-aware agents.


Community & Support


Ready to build universal AI agents?

Get Started with Local Development →

🌟 Star us on GitHub • 💬 Join Discord • 📚 Read the Docs


Made with ❤️ by the RunAgent Team
