The input type for the job.
The output type for the job.
Constructs a new GenericJob instance.
The input type for the job.
The output type for the job.
The job input.
Whether streaming is enabled for this job.
The function to execute the job.
Optional hooks: JobLifecycleHooks<TOutput>. Optional lifecycle hooks.
Optional maxStoredResponseChunks: number. Max number of response chunks to store.
Optional executionMetadata. Optional metadata for execution (capability, provider chain, etc.).
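Taken together, the constructor parameters above suggest an options shape along the following lines. This is a hypothetical sketch inferred from the parameter list only: the `GenericJobOptions` interface, the members of `JobLifecycleHooks` (`onComplete`, `onError`), and the `executionMetadata` shape are assumptions, not the documented API, and the real constructor may take positional arguments rather than an options object.

```typescript
// Hypothetical sketch of the constructor inputs, inferred from the
// parameter list above. Names and shapes are assumptions.
interface JobLifecycleHooks<TOutput> {
  onComplete?: (output: TOutput) => void; // assumed hook names
  onError?: (error: Error) => void;
}

interface GenericJobOptions<TInput, TOutput> {
  input: TInput;                                  // the job input
  streaming: boolean;                             // whether streaming is enabled
  executor: (input: TInput) => Promise<TOutput>;  // the function to execute the job
  hooks?: JobLifecycleHooks<TOutput>;             // optional lifecycle hooks
  maxStoredResponseChunks?: number;               // cap on stored response chunks
  executionMetadata?: Record<string, unknown>;    // capability, provider chain, etc.
}

const options: GenericJobOptions<string, string> = {
  input: "summarize this text",
  streaming: false,
  executor: async (input) => input.toUpperCase(),
};
```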
Readonly input. The job input.
Optional on… Optional streaming callback.
Optional on… Called whenever the status changes.
Internal diagnostic view of the final orchestration response (read-only).
Internal diagnostic view of the orchestration chunks (read-only snapshot copy).
Optional reason: Error
Hydrate this job from a persisted snapshot. Internal response envelopes/chunks are intentionally not restored, because rerun semantics are deterministic replay from input + executor, not raw resume.
Run the job. NOTE: Consumers should typically call JobManager.runJob() instead of invoking this directly, to ensure proper concurrency management, hooks, and persistence.
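The note above can be illustrated with a toy sketch of why JobManager.runJob() is preferred over calling run() directly: the manager owns the concurrency accounting (and, in the real system, hooks and persistence) that a direct call would bypass. Everything here, including the simplified class shapes, the `maxConcurrent` limit, and the polling wait, is an assumption for illustration, not the real implementation.

```typescript
// Toy sketch only: a direct job.run() skips the manager's bookkeeping,
// while manager.runJob() enforces a concurrency limit around it.
type Executor<I, O> = (input: I) => Promise<O>;

class GenericJob<I, O> {
  status: "pending" | "running" | "completed" = "pending";
  output?: O;
  constructor(readonly input: I, private executor: Executor<I, O>) {}
  async run(): Promise<O> {
    // Direct run: no concurrency management, hooks, or persistence.
    this.status = "running";
    this.output = await this.executor(this.input);
    this.status = "completed";
    return this.output;
  }
}

class JobManager {
  private active = 0;
  constructor(private maxConcurrent = 4) {}
  async runJob<I, O>(job: GenericJob<I, O>): Promise<O> {
    // Naive wait for a free slot; the real manager would queue properly.
    while (this.active >= this.maxConcurrent) {
      await new Promise((resolve) => setTimeout(resolve, 10));
    }
    this.active++;
    try {
      return await job.run(); // hooks/persistence would wrap this call
    } finally {
      this.active--;
    }
  }
}
```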
MultiModalExecutionContext
Optional signal: AbortSignal. Optional abort signal.
Optional onChunk: (chunk: JobChunk<TOutput>) => void. Optional streaming callback (overrides the existing onChunk).
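Assuming run() accepts an optional AbortSignal and an optional onChunk override as listed above, the interplay between abort and streaming can be sketched as follows. The `runWithStreaming` helper, the `delta` field on `JobChunk`, and the in-memory chunk source are hypothetical stand-ins, not the documented API.

```typescript
// Minimal sketch: deliver chunks through an onChunk callback, bailing
// out if the caller's AbortSignal fires. Names here are assumptions.
type JobChunk<T> = { delta: T };

async function runWithStreaming<T>(
  chunks: T[],
  signal: AbortSignal,
  onChunk: (chunk: JobChunk<T>) => void,
): Promise<T[]> {
  const received: T[] = [];
  for (const delta of chunks) {
    if (signal.aborted) throw new Error("aborted"); // honor the signal
    onChunk({ delta });
    received.push(delta);
  }
  return received;
}

const controller = new AbortController();
const seen: string[] = [];
runWithStreaming(["a", "b"], controller.signal, (c) => seen.push(c.delta));
```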
GenericJob manages the lifecycle, execution, and state of an AI job, supporting both streaming and non-streaming modes. It handles input, output, error, status, artifact, and raw-response management, with hooks for orchestration.