With the ALTER JOB statement, you can suspend, resume, or restart a job that is running on a cluster. You can also update the configuration of a suspended job and resume it.

Syntax Summary

The following code block is a quick reference to all the parameters that you can use with the ALTER JOB statement. For practical examples, see the examples later on this page.

ALTER JOB job_name { SUSPEND | [OPTIONS ( 'option_name' = 'option_value' [, ...] )] RESUME | RESTART }
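When issuing these statements from Java code, it can help to assemble them programmatically so that the grammar above is followed consistently. The following is a minimal sketch; the AlterJob class and its statement() helper are hypothetical illustrations, not part of the Hazelcast API.

```java
import java.util.Map;
import java.util.StringJoiner;

// Hypothetical helper that assembles ALTER JOB statements matching the
// grammar above; an illustration, not part of the Hazelcast API.
public class AlterJob {

    // action is SUSPEND, RESUME, or RESTART; the OPTIONS clause is only
    // valid immediately before RESUME.
    public static String statement(String jobName, String action, Map<String, String> options) {
        StringBuilder sql = new StringBuilder("ALTER JOB ").append(jobName).append(' ');
        if (options != null && !options.isEmpty()) {
            if (!"RESUME".equals(action)) {
                throw new IllegalArgumentException("OPTIONS is only valid with RESUME");
            }
            StringJoiner clause = new StringJoiner(", ", "OPTIONS (", ") ");
            options.forEach((k, v) -> clause.add("'" + k + "'='" + v + "'"));
            sql.append(clause);
        }
        return sql.append(action).toString();
    }

    public static void main(String[] args) {
        // Prints: ALTER JOB track_trades SUSPEND
        System.out.println(statement("track_trades", "SUSPEND", null));
    }
}
```

The resulting string can then be submitted like any other SQL statement.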


The ALTER JOB statement accepts the following parameters. The job_name parameter is required.

job_name

The name of the job to suspend, resume, or restart.

SUSPEND

Suspend the job. For details, see the API reference for the suspend() method.


OPTIONS

Applies new configuration options to a suspended job. The following option_name parameters are accepted:

  • autoScaling

  • maxProcessorAccumulatedRecords

  • metricsEnabled

  • snapshotIntervalMillis

  • splitBrainProtection

  • storeMetricsAfterJobCompletion

  • suspendOnFailure

  • timeoutMillis

See job configuration options for valid values for each of the listed parameters. For more details, see the API reference for the Job interface's updateConfig(DeltaJobConfig deltaConfig) method.
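Because the accepted option names form a fixed set, a client can validate them before submitting the statement. A minimal sketch, assuming only the option names listed above (the JobOptions class and isAccepted() helper are hypothetical, not part of the Hazelcast API):

```java
import java.util.Set;

// Hypothetical client-side check for ALTER JOB option names;
// an illustration, not part of the Hazelcast API.
public class JobOptions {

    // The option_name values accepted by ALTER JOB ... OPTIONS, as listed above.
    static final Set<String> ACCEPTED = Set.of(
            "autoScaling",
            "maxProcessorAccumulatedRecords",
            "metricsEnabled",
            "snapshotIntervalMillis",
            "splitBrainProtection",
            "storeMetricsAfterJobCompletion",
            "suspendOnFailure",
            "timeoutMillis");

    static boolean isAccepted(String optionName) {
        return ACCEPTED.contains(optionName);
    }
}
```

For example, isAccepted("snapshotIntervalMillis") returns true, while a misspelling such as "snapshotInterval" returns false.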


RESUME

Resume a suspended job. For details, see the API reference for the resume() method.


RESTART

Suspend and then resume the job. For details, see the API reference for the restart() method.


Examples

This section lists some example SQL queries that show you how to use the ALTER JOB statement.

Suspend and Resume a Job

You may want to suspend a job to perform maintenance on a source or a sink without disrupting the job.

ALTER JOB track_trades SUSPEND

When maintenance is finished, resume the job to continue processing.

ALTER JOB track_trades RESUME

Restart a Job

You may want to restart a job if you want to distribute it over some new members in your cluster and auto-scaling is disabled.

ALTER JOB track_trades RESTART

Update the Configuration of a Suspended Job

Currently, Jet processors implement basic memory management by limiting the number of objects individual processors store. When this number is exceeded, the job fails. To recover the failed job, try updating the job configuration to increase the processor limit, and resume the job.

ALTER JOB "hello-world" OPTIONS ('maxProcessorAccumulatedRecords'='100') RESUME
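Note that hello-world contains a hyphen, so it is not a plain SQL identifier; such names generally need to be wrapped in double quotes. A minimal sketch of that quoting rule, assuming standard SQL identifier conventions (JobNameQuoting and quoteIfNeeded() are hypothetical helpers, not part of the Hazelcast API):

```java
// Hypothetical helper illustrating standard SQL identifier quoting for
// job names; not part of the Hazelcast API.
public class JobNameQuoting {

    static String quoteIfNeeded(String jobName) {
        // Plain identifiers (letters, digits, underscores, not starting
        // with a digit) can be used as-is.
        if (jobName.matches("[A-Za-z_][A-Za-z0-9_]*")) {
            return jobName;
        }
        // Otherwise, escape embedded double quotes by doubling them and
        // wrap the whole name in double quotes.
        return '"' + jobName.replace("\"", "\"\"") + '"';
    }
}
```

For example, quoteIfNeeded("track_trades") leaves the name unchanged, while quoteIfNeeded("hello-world") wraps it in double quotes.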

You might also consider increasing the number of records that each processor can accumulate if SQL operations such as grouping, sorting, or joining fail with errors.

By default, all streaming jobs are automatically suspended on failure.