    ProviderPlaneAI


    ProviderPlaneAI is a workflow-first AI orchestration framework for Node.js. It provides a provider-agnostic workflow layer above raw model SDKs:

    • Provider-agnostic orchestration across OpenAI, Anthropic, and Gemini, with additional providers planned
    • Workflow-first API with jobs available as the lower-level execution layer
    • Multimodal workflows across text, audio, image, video, moderation, and embeddings
    • Retry, fallback, persistence, and workflow-level observability

    See providerplane.dev for guides, examples, configuration, changelog, and API reference. See providerplane.ai for the main project site.


    npm install providerplaneai
    
    • Node.js 20+
    • TypeScript 5+

    ProviderPlaneAI loads configuration via node-config + dotenv.

    Create config/default.json (or environment-specific config files) with a providerplane section containing appConfig and providers.

    Minimal example:

{
  "providerplane": {
    "appConfig": {
      "executionPolicy": {
        "providerChain": [
          { "providerType": "openai", "connectionName": "default" }
        ]
      }
    },
    "providers": {
      "openai": {
        "default": {
          "type": "openai",
          "apiKeyEnvVar": "OPENAI_API_KEY_1",
          "defaultModel": "gpt-5"
        }
      }
    }
  }
}

    Minimal .env for the config above:

    OPENAI_API_KEY_1=your_openai_api_key
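The apiKeyEnvVar field is an indirection: the config names an environment variable, and the actual key is read from the environment when the provider is initialized. A minimal self-contained sketch of that lookup (illustrative only, not the library's loader; `resolveApiKey` and `ProviderConnection` are hypothetical names):

```typescript
// Shape of one provider connection entry, mirroring the config above.
interface ProviderConnection {
  type: string;
  apiKeyEnvVar: string;
  defaultModel?: string;
}

// Resolve a connection's API key from the environment variable it names.
function resolveApiKey(
  conn: ProviderConnection,
  env: Record<string, string | undefined> = process.env
): string {
  const key = env[conn.apiKeyEnvVar];
  if (!key) {
    throw new Error(`Missing environment variable: ${conn.apiKeyEnvVar}`);
  }
  return key;
}

// Example: the "default" OpenAI connection from the minimal config.
const conn: ProviderConnection = {
  type: "openai",
  apiKeyEnvVar: "OPENAI_API_KEY_1",
  defaultModel: "gpt-5"
};
```

This is why the .env file and the JSON config must agree: the config stores only the variable name, never the key itself.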
    

    For full multi-provider config and environment examples covering OpenAI, Gemini, Anthropic, and Voyage, see providerplane.dev.

import {
  AIClient,
  MultiModalExecutionContext,
  Pipeline,
  WorkflowRunner
} from "providerplaneai";

const client = new AIClient();
const runner = new WorkflowRunner({ jobManager: client.jobManager, client });
const ctx = new MultiModalExecutionContext();

const pipeline = new Pipeline<{
  generatedText: string;
  transcriptText: string;
  audioArtifactId: string;
}>("readme-workflow-1", {});

// Typed step handles keep `source` and `after` references readable and safe
const generateText = pipeline.step("generateText");
const tts = pipeline.step("tts");
const transcribe = pipeline.step("transcribe");

// Build a workflow: chat -> tts -> transcribe
const workflow = pipeline
  .chat(generateText.id, "Generate one short inspirational quote in French.", {
    normalize: "text"
  })
  .tts(tts.id, { voice: "alloy", format: "mp3" }, { source: generateText })
  .transcribe(transcribe.id, { responseFormat: "text" }, { source: tts, normalize: "text" })
  .output((values) => ({
    generatedText: String(values.generateText ?? ""),
    transcriptText: String(values.transcribe ?? ""),
    audioArtifactId: String((values.tts as any[])?.[0]?.id ?? "")
  }))
  .build();

// Run the workflow
const execution = await runner.run(workflow, ctx);

console.log("Output", execution.output);

graph TD
  n0["generateText"]
  n1["tts"]
  n2["transcribe"]
  n0 --> n1
  n1 --> n2

For most applications, the workflow layer (Pipeline plus WorkflowRunner) is the right abstraction level.

    Use direct jobs only when you need low-level control outside a workflow DAG, are integrating with an external scheduler, or are building custom orchestration on top of the library.

Supported providers:

• OpenAI
    • Anthropic
    • Gemini

    Providers listed in appConfig.executionPolicy.providerChain are initialized automatically when AIClient is constructed.
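Conceptually, a provider chain is an ordered fallback list: each entry is tried in turn until one succeeds. A self-contained sketch of that general pattern (illustrative only, not AIClient's actual initialization or dispatch logic; `callWithFallback` is a hypothetical helper):

```typescript
// One entry of the providerChain, mirroring the config format above.
interface ChainEntry {
  providerType: string;
  connectionName: string;
}

// Try each provider in order; return the first successful result,
// or rethrow the last error if every entry fails.
async function callWithFallback<T>(
  chain: ChainEntry[],
  call: (entry: ChainEntry) => Promise<T>
): Promise<T> {
  let lastError: unknown = new Error("empty provider chain");
  for (const entry of chain) {
    try {
      return await call(entry);
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```

The ordering of providerChain in config therefore doubles as a failover priority list.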

    ProviderPlaneAI includes a DAG workflow engine for orchestrating multi-step AI workflows. Pipeline is the recommended authoring API. WorkflowBuilder remains available for advanced node-level control.

    • Deterministic DAG execution with explicit dependencies
    • Parallel fan-out and fan-in
    • Single-source and multi-source step inputs via source
    • Conditional step execution via when
    • Per-step retry and timeout policies
    • Per-step provider and provider-chain overrides
    • Streaming and non-streaming workflow nodes
    • Nested workflows
• Export to JSON, Mermaid, DOT, and D3

Key classes:

• Pipeline for most workflows
    • WorkflowRunner for execution
    • WorkflowExporter for visualization and export
    • WorkflowBuilder for advanced custom graph construction
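The per-step retry and timeout policies listed above can be pictured as a bounded loop around the step function, racing each attempt against a deadline. A self-contained sketch of the general pattern (illustrative only, not the engine's actual policy implementation; `RetryPolicy` and `runWithPolicy` are hypothetical names):

```typescript
interface RetryPolicy {
  maxAttempts: number; // total tries, including the first
  timeoutMs: number;   // per-attempt deadline
}

// Run `fn`, retrying on failure or per-attempt timeout, up to maxAttempts.
async function runWithPolicy<T>(
  fn: () => Promise<T>,
  policy: RetryPolicy
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= policy.maxAttempts; attempt++) {
    let timer: ReturnType<typeof setTimeout> | undefined;
    const deadline = new Promise<never>((_, reject) => {
      timer = setTimeout(() => reject(new Error("step timed out")), policy.timeoutMs);
    });
    try {
      // Whichever settles first wins: the step or the timeout.
      return await Promise.race([fn(), deadline]);
    } catch (err) {
      lastError = err;
    } finally {
      clearTimeout(timer); // avoid a stray rejection after the step finishes
    }
  }
  throw lastError;
}
```

A workflow engine applies a loop like this per node, so transient provider failures retry without rerunning upstream steps.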

const client = new AIClient();
const runner = new WorkflowRunner({ jobManager: client.jobManager, client });
const ctx = new MultiModalExecutionContext();

const pipeline = new Pipeline<{
  generatedText: string;
  transcriptText: string;
  translationText: string;
  moderationFlagged: boolean;
}>("readme-workflow-2", {});

// Typed step handles keep `source` and `after` references readable and safe
const generateText = pipeline.step("generateText");
const tts = pipeline.step("tts");
const transcribe = pipeline.step("transcribe");
const translate = pipeline.step("translate");
const moderate = pipeline.step("moderate");

// Build a workflow: chat -> tts -> transcribe + translate -> moderate
const workflow = pipeline
  .chat(generateText.id, "Generate one short inspirational quote in French.", { normalize: "text" })
  .tts(tts.id, { voice: "alloy", format: "mp3" }, { source: generateText })
  .transcribe(transcribe.id, { responseFormat: "text" }, { source: tts, normalize: "text" })
  .translate(translate.id, { targetLanguage: "english", responseFormat: "text" }, { source: tts, normalize: "text" })
  .moderate(moderate.id, {}, { source: [transcribe, translate] })
  .output((values) => ({
    generatedText: String(values.generateText ?? ""),
    transcriptText: String(values.transcribe ?? ""),
    translationText: String(values.translate ?? ""),
    moderationFlagged: Boolean((values.moderate as any[])?.[0]?.flagged ?? false)
  }))
  .build();

// Run the workflow
const execution = await runner.run(workflow, ctx);

console.log("Output", execution.output);

    Notes:

    • source binds step input to upstream output and can be either a single step or an array of steps.
    • after adds ordering dependencies when you need sequencing without data binding.
    • Typed step handles created with pipeline.step("...") reduce stringly-typed wiring mistakes.
    • custom(...) and customAfter(...) are escape hatches for custom capability steps without dropping to WorkflowBuilder.
    • If you find yourself reaching for createCapabilityJob in application code, you are usually below the preferred abstraction level.

graph TD
  n0["generateText"]
  n1["tts"]
  n2["transcribe"]
  n3["translate"]
  n4["moderate"]
  n0 --> n1
  n1 --> n2
  n1 --> n3
  n2 --> n4
  n3 --> n4
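Under the hood, `source` wiring is just DAG edges, and a valid run order is a topological sort: a step becomes runnable once every upstream step it names has finished. A self-contained sketch of resolving that order (illustrative only, not WorkflowRunner; `executionOrder` is a hypothetical helper using Kahn's algorithm):

```typescript
type Sources = Record<string, string[]>; // step id -> upstream step ids

// Kahn's algorithm: return step ids in a valid execution order.
function executionOrder(sources: Sources): string[] {
  const indegree = new Map<string, number>();
  const downstream = new Map<string, string[]>();
  for (const step of Object.keys(sources)) {
    indegree.set(step, sources[step].length);
    for (const dep of sources[step]) {
      if (!downstream.has(dep)) downstream.set(dep, []);
      downstream.get(dep)!.push(step);
    }
  }
  // Start from steps with no upstream dependencies.
  const ready = [...indegree].filter(([, d]) => d === 0).map(([s]) => s);
  const order: string[] = [];
  while (ready.length > 0) {
    const step = ready.shift()!;
    order.push(step);
    for (const next of downstream.get(step) ?? []) {
      const d = indegree.get(next)! - 1;
      indegree.set(next, d);
      if (d === 0) ready.push(next);
    }
  }
  return order;
}

// Mirrors the chat -> tts -> transcribe + translate -> moderate workflow.
const order = executionOrder({
  generateText: [],
  tts: ["generateText"],
  transcribe: ["tts"],
  translate: ["tts"],
  moderate: ["transcribe", "translate"]
});
```

In the diagram above, transcribe and translate share indegree-zero status after tts completes (fan-out), and moderate waits for both (fan-in).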

    For the full Pipeline method reference and step-by-step DSL documentation, see providerplane.dev.

Built-in workflow steps:

• approvalGate
• saveFile

    These are registered by default and are intended for workflow authoring rather than provider-specific model calls.

    • Use WorkflowBuilder when you need direct node functions or full control over graph construction.
    • Use WorkflowExporter to render workflows as Mermaid, DOT, D3, or JSON.
    • Keep advanced builder/export usage in docs and internal tooling; use Pipeline for the common path.
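The Mermaid diagrams shown earlier are plain text, so export is essentially mapping nodes and edges to `graph TD` lines. A self-contained sketch of that mapping (illustrative only, not WorkflowExporter; `toMermaid` is a hypothetical helper):

```typescript
// Render step names and (from, to) edges as a Mermaid `graph TD` definition,
// matching the diagram style used in this README.
function toMermaid(steps: string[], edges: Array<[string, string]>): string {
  // Assign stable short ids (n0, n1, ...) in step order.
  const id = new Map(steps.map((s, i) => [s, `n${i}`] as [string, string]));
  const nodes = steps.map((s) => `  ${id.get(s)}["${s}"]`);
  const links = edges.map(([a, b]) => `  ${id.get(a)} --> ${id.get(b)}`);
  return ["graph TD", ...nodes, ...links].join("\n");
}

// The first README workflow: chat -> tts -> transcribe.
const mermaid = toMermaid(
  ["generateText", "tts", "transcribe"],
  [["generateText", "tts"], ["tts", "transcribe"]]
);
```

Because the output is text, it drops straight into markdown docs or a Mermaid live editor; DOT and D3 exports follow the same node-and-edge shape.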

    npm run build
    npm run test
    npm run lint
    npm run perf:quick

    For integration testing, PR title conventions, release workflow notes, and contribution guidance, see CONTRIBUTING.md.


License: MIT