
Batch Jobs

Overview

  • Batch jobs are compute resources that enable developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs.

  • A single job instance is simply an execution of an arbitrary container. The execution of a job is initiated by an event (see the section Triggering batch jobs for more info).

  • When using batch jobs, you do NOT need to set up and manage a batch computing environment. You simply define a container and the resources (CPU, memory, GPU) the job needs, and the underlying infrastructure takes care of the rest. In the background, your self-managed compute environment uses optimal EC2 instances from the 'c', 'm', and 'r' instance families. For batch jobs that use a GPU, the instances are chosen from the 'p2', 'p3', 'g3', 'g3s' and 'g4' families.

  • The best thing about batch jobs is that you pay only for what you use. The self-managed compute environment spins up compute resources only when there are jobs to execute. If there are NO jobs to execute, there are NO compute resources to pay for.

  • Moreover, the self-managed compute environment can be configured to use spot instances, which can help you save up to 90% of computation costs.


When to use

Batch jobs are made for running long-running or resource-demanding containerized tasks.

The main difference between batch jobs and container workloads is that container workloads are designed for applications that run continuously, while batch jobs are great for one-off or regularly scheduled task executions.

Also, while container workloads are made to scale horizontally and are limited in how much CPU and memory a single container workload instance can have, with batch jobs there are almost no limits. The self-managed environment in which your batch jobs are executed always tries to meet your needs by spinning up the correct EC2 instance to run your batch job.


To better understand when to use batch jobs, consider the following examples:

  • Machine learning training - suppose you want to train a machine learning model. The training process runs regularly, every day after new training data arrives. Your training job requires a substantial amount of compute resources to finish in a feasible time, but outside of training you have no use for these resources. This is a great use-case for a batch job. Moreover, you can configure your batch job to use spot instances, which can significantly reduce your computation costs. An example scenario similar to this one is discussed here.

  • Batch data processing - suppose you are continuously receiving log events from your applications. Once a week, you upload your log files to a storage bucket to persist them. Each time a log file is uploaded to the bucket, you would like to run a complex analytics job to gain actionable insights from your logs. This is another great use-case for a batch job.

Advantages

  • Pay-per-use - You only pay for compute resources your jobs use.
  • Resource flexibility - Whether your job requires 1 CPU or 50 CPUs, 1 GiB or 128 GiB of memory, the self-managed compute environment always meets your needs by spinning up the optimal instance to run your job.
  • Time flexibility - Your job's execution time can span from a few minutes up to hours or days, depending on what you need.
  • Secure by default - The underlying environment is securely managed by AWS.
  • Easy integration - Batch jobs can be invoked by events from a wide variety of services.

Disadvantages

  • Slow start time - After a job execution is triggered, the job is put into an execution queue. Since the self-managed compute environment scales according to your needs, it can take from a few seconds up to a few minutes for it to scale up and meet the requirements of your queued job. This makes batch jobs ideal for long-running tasks that do not require an immediate start.

Usage

➡️ Definition breakdown

A batch job definition is made up of multiple segments:

resources:
  batchJobs:
    jsBatchJob:
      # REQUIRED specification of the container that defines the batch job
      container:
        # REQUIRED specification of the image the container uses. There are multiple ways to define an image using imageConfig
        imageConfig:
          ...
        # OPTIONAL Environment variables that are injected into the container.
        # Environment variables can be used to inject information about other parts of the infrastructure.
        environment:
          ...
      # REQUIRED specification of the resources the batch job needs for execution
      # you can define "cpu", "memory" and "gpu"
      resources:
        cpu: 2
        memory: 3800
        ...
      # REQUIRED specification of the strategy to use for the batch job in terms of attempts (retries),
      # timeouts (e.g. to prevent the job from running too long) and underlying resources.
      # Multiple strategies can be used; all of them are discussed further down in the documentation.
      strategy:
        onDemand:
          attempts: 2
          attemptDurationSeconds: 3600
        ...
      # OPTIONAL list of other stack resources which you want to access from your batch job.
      # For example, you can give the batch job permission to access a specific bucket to store the results of the job.
      allowAccessTo:
        ...
      # OPTIONAL ADVANCED by using iamRoleStatements, you have the flexibility to allow fine-grained access even to resources that are not part of your stack
      iamRoleStatements:
        ...
      # OPTIONAL ADVANCED specify a role which the batch job will use during execution
      iamRoleArn: ...
      # events upon which the batch job is triggered
      # in this example, the batch job is triggered when an object is created in (uploaded to) the referenced bucket
      events:
        - s3:
            bucketArn: $GetParam('myStpBucket', 'Bucket::Arn')
            event: 's3:ObjectCreated:Put'


➡️ Specifying container image

In essence, batch jobs are just containers triggered by various events and executed in a self-managed environment. The container itself is defined by the image it uses.

There are three ways to define the container image in your batch job definition:

  1. Defining the container image by specifying filePath.
  2. Defining the container image by specifying a Dockerfile.
  3. Defining the container image by specifying an image name.

Each way of defining an image is described below.

Defining container image by specifying filePath

The "filePath" parameter points to a local (JavaScript or TypeScript) file containing your code.

During deployment:

  • First, your code is bundled using esbuild.
  • The resulting bundle is then copied into the image during its creation.
  • The image is then uploaded into your private repository (created as a part of the deployment process).

resources:
  batchJobs:
    filePathBatchJob:
      container:
        imageConfig:
          # REQUIRED path to the file which will be bundled within the image
          filePath: '_example-configs/batch-jobs/js-batch-job.js'
          # OPTIONAL you can provide paths to files (static assets) that should be included in your bundle
          includeFiles: '_example-configs/include/explicitly-included.txt'
      resources:
        cpu: 2
        memory: 3800
      strategy:
        onDemand:
          attempts: 2
          attemptDurationSeconds: 3600
      events:
        - schedule:
            scheduleRate: 'cron(0 14 * * ? *)' # every day at 14:00 UTC
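
For illustration, the file referenced by filePath could be as simple as the following hypothetical js-batch-job.js. It assumes the usual container convention: the attempt succeeds when the process exits with code 0 and fails otherwise.

// js-batch-job.js - hypothetical example of a file referenced by "filePath".
// The file is bundled using esbuild and executed inside the job's container.
const main = async () => {
  console.log('batch job started');
  // ... perform the actual long-running or resource-demanding work here ...
  console.log('batch job finished');
};

main().catch((err) => {
  console.error(err);
  process.exit(1); // a non-zero exit code marks the attempt as failed
});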

Defining container image by specifying a Dockerfile

The "dockerfilePath" parameter points to a Dockerfile containing the definition of your image.

During deployment:

  • First, your image is built according to the provided Dockerfile.
  • The image is then uploaded into your private repository (created as a part of the deployment process).

resources:
  batchJobs:
    dockerfilePathBatchJob:
      container:
        imageConfig:
          # REQUIRED path to the Dockerfile
          dockerfilePath: 'my-batch-job/Dockerfile'
          # OPTIONAL provide the context path for building the image
          dockerBuildContextPath: 'my-batch-job'
          # OPTIONAL provide a command to execute in the image
          # if the Dockerfile contains a CMD directive, this overrides it
          command: ['node', 'index.js']
      resources:
        cpu: 2
        memory: 3800
      strategy:
        spot:
          attempts: 2
          attemptDurationSeconds: 3600
      events:
        - schedule:
            scheduleRate: 'cron(0 14 * * ? *)' # every day at 14:00 UTC

Defining container image by specifying an image name

The "image" parameter points to an image in a public Docker repository.

The image is pulled directly from the Docker repository when the job is invoked.

resources:
  batchJobs:
    pythonBatchJob:
      container:
        imageConfig:
          # REQUIRED name of the image stored in a public Docker repository
          image: 'mypublicrepo/python-batch-job'
          # OPTIONAL provide a command to execute in the image
          # if the Dockerfile contains a CMD directive, this overrides it
          command: ['python', 'main.py']
      resources:
        cpu: 2
        memory: 3800
      strategy:
        spot:
          attempts: 2
          attemptDurationSeconds: 3600
      events:
        - s3:
            bucketArn: $GetParam('myStpBucket', 'Bucket::Arn')
            event: 's3:ObjectCreated:Put'


➡️ Specifying compute resources

You can configure the number of CPUs and the amount of memory required for batch job execution.

The minimum memory value is 4 MB and the minimum number of CPUs is 1.

resources:
  batchJobs:
    dockerfilePathBatchJob:
      container:
        imageConfig:
          dockerfilePath: 'my-batch-job/Dockerfile'
          dockerBuildContextPath: 'my-batch-job'
          command: ['node', 'index.js']
      # specification of the resources the batch job needs for execution
      # you can define "cpu", "memory" and "gpu"
      resources:
        cpu: 2
        memory: 3800
      strategy:
        spot:
          attempts: 2
          attemptDurationSeconds: 3600
      events:
        - schedule:
            scheduleRate: 'cron(0 14 * * ? *)' # every day at 14:00 UTC

If you define the memory required for your batch job in multiples of 1024, be aware: your self-managed environment might spin up instances that are much bigger than you expect. This is because the instances spun up in your self-managed environment also need memory to handle the management processes associated with running the batch job container.

Example: If you specify memory 8192 for your batch job, you might expect the self-managed environment to primarily spin up an instance from the used families with 8 GiB (8192 MiB) of memory. However, the self-managed environment knows that an instance with that much memory would not be sufficient for both the batch job and the management processes. As a result, it will try to spin up a bigger instance. More on this issue here.

Due to this behaviour, we advise specifying the memory for your batch jobs wisely: instead of specifying 8192, consider a lower value such as 7680. This way the self-managed environment is able to use instances with 8 GiB (8192 MiB) of memory, which can lead to cost savings.

➡️ Triggering batch jobs

Batch job execution can be triggered by various events, and each batch job can have multiple event integrations attached.


Stacktape implements the triggering of batch jobs by leveraging functions (AWS Lambda). Each batch job has a dedicated "trigger function" that can be invoked to spawn the "batchJob state machine". This means that any event integration supported by functions is also supported by batch jobs.

Commonly, batch jobs are triggered by events such as:
  • a schedule (i.e. daily, weekly, monthly or at any specific time)
  • an event received by an event bus
  • a file being uploaded to a bucket
  • an HTTP request arriving at an HTTP API or load balancer
  • ...
Since batch jobs use "trigger functions", each time the "trigger function" is invoked, a new batch job instance (batch job container) is spawned. With event integrations such as httpApi, kinesis or dynamoDb, this can happen fairly often. As batch jobs usually run longer and often require more resources than functions, spawning many batch jobs can be costly.

This is why we recommend using event integrations such as eventBus, schedule or s3.

Remember: batch jobs are meant for long-running and resource-demanding operations; for most other use cases, using a function might be a better choice.

To better understand how triggering batch jobs works, see the example scenario below.

Triggering flow explained

The flow of triggering a batch job can be summarized in the following steps:

  1. The "trigger function" receives an event from one of its integrations.
  2. The "trigger function" starts the execution of the "batchJob state machine".
  3. The "batchJob state machine" spawns the batch job instance (batch job container) and controls the execution according to the specified strategy.

The flow is also illustrated by the figure in the following example scenario.

Example scenario

In this example, we have two event integrations attached to the batch job:

  • the first triggers the batch job when an object is created in the myStpBucket bucket
  • the second triggers the batch job when an event matching eventPattern is received by myEventBus.

resources:
  batchJobs:
    pythonBatchJob:
      container:
        imageConfig:
          dockerfilePath: 'my-batch-job/Dockerfile'
          dockerBuildContextPath: 'my-batch-job'
      resources:
        cpu: 2
        memory: 3800
      strategy:
        spot:
          attempts: 2
          attemptDurationSeconds: 3600
      # this batch job can be triggered in two ways:
      # 1. an object was created in the myStpBucket bucket
      # 2. an event that matches eventPattern was received by myEventBus
      events:
        - s3:
            bucketArn: "$GetParam('myStpBucket', 'Bucket::Arn')"
            event: 's3:ObjectCreated:*'
        - eventBridge:
            customEventBusArn: "$GetParam('myEventBus', 'EventBus::Arn')"
            eventPattern:
              detail:
                name: ['run-python-job']
  # the eventBus and bucket are created as a part of the stack
  buckets:
    myStpBucket:
      accessibility: 'private'
  eventBuses:
    myEventBus: {}
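
For illustration, an event matching the eventPattern above could be published to myEventBus using the AWS SDK for JavaScript. This is a minimal sketch; the EVENT_BUS_NAME environment variable is a hypothetical way of passing the bus name (or ARN) to the publishing code:

// publish an event whose detail matches the eventPattern above
// (detail.name must equal 'run-python-job')
const { EventBridgeClient, PutEventsCommand } = require('@aws-sdk/client-eventbridge');

const client = new EventBridgeClient({});

const triggerJob = async () => {
  await client.send(
    new PutEventsCommand({
      Entries: [
        {
          EventBusName: process.env.EVENT_BUS_NAME, // hypothetical; name or ARN of myEventBus
          Source: 'my-application', // arbitrary source identifier
          DetailType: 'job-request',
          Detail: JSON.stringify({ name: 'run-python-job' }),
        },
      ],
    })
  );
};

triggerJob().catch(console.error);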


The following figure illustrates the architecture of the batchJob from this example scenario:

[Figure: Architecture of the batchJob]


➡️ Specifying strategy

The strategy for a batch job is specified within the strategy block of the batch job definition.

Each strategy is ideal in different situations and scenarios; however, all of the strategies share these similarities:

  • Your batch job container is always executed in your secure, self-managed environment, powered in the background by well-known and reliable EC2 instances.
  • The self-managed environment only spawns instances when there are jobs to be executed. Once there are no jobs to be executed, the instances are terminated.
  • Instances in the self-managed environment are charged on a per-second basis.
  • The self-managed environment always tries to find the optimal instance(s) for running the batch jobs that are waiting to be executed.
  • You are not charged for having a self-managed environment; you are only charged for the instances spawned into it when there are jobs to be executed.

When choosing a strategy for your batch job, there are currently three options, each of which is explained further below:

  1. OnDemand strategy - uses only onDemand instances.
  2. Spot strategy - uses only spot instances.
  3. Combined strategy - uses both types of instances.

OnDemand strategy

A strategy where you use only onDemand instances.

  • Pro - once your batch job begins execution, it will never be interrupted by AWS.

  • Con - you always pay the full price for the instance the batch job is running on.

Example scenario

Assume you have deployed a stack containing the following resource:

resources:
  batchJobs:
    filePathBatchJob:
      container:
        imageConfig:
          filePath: '_example-configs/batch-jobs/js-batch-job.js'
          includeFiles: '_example-configs/include/explicitly-included.txt'
      resources:
        cpu: 2
        memory: 3800
      strategy:
        onDemand:
          # REQUIRED total number of execution attempts. In this case, if the job fails for any reason, it will be retried at most twice.
          attempts: 3
          # OPTIONAL if the job runs for longer than 3600 seconds, it will be terminated and considered failed
          attemptDurationSeconds: 3600
          # OPTIONAL number of seconds before the first retry attempt (1 by default)
          retryIntervalSeconds: 90
          # OPTIONAL the multiplier by which the retry interval increases with each attempt (1 by default)
          retryIntervalMultiplier: 2
      events:
        - schedule:
            scheduleRate: 'cron(0 14 * * ? *)' # every day at 14:00 UTC

  1. Following the stack deployment, the batch job is ready to be invoked. Your self-managed environment is idle, waiting to spawn instances.
  2. After you trigger the batch job execution (see the section Triggering batch jobs), the batch job is put into the execution queue. The self-managed environment immediately tries to find and deploy the optimal EC2 instance to run your batch job.
  3. Once the EC2 instance is deployed, the batch job is immediately scheduled to run on it.
  4. Information about the event which triggered the batch job is injected into your container as the environment variable TRIGGER_EVENT in the form of a JSON string (see the sketch after this list).
  5. The execution is completed once the container execution SUCCEEDS or FAILS. Upon failure, the job gets retried (by being put into the execution queue again) if there are retry attempts left.
  6. If there are no other batch jobs waiting for execution, the self-managed environment terminates the EC2 instance and becomes idle.
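
Inside the container, you can read the triggering event from the TRIGGER_EVENT environment variable. A minimal Node.js sketch (the exact shape of the parsed payload depends on the event integration and is not assumed here):

// read the event that triggered the job from the TRIGGER_EVENT environment
// variable (injected as a JSON string, as described in step 4 above)
const rawEvent = process.env.TRIGGER_EVENT;
if (!rawEvent) {
  throw new Error('TRIGGER_EVENT is not set');
}

const triggerEvent = JSON.parse(rawEvent);
// the payload shape depends on the integration; for an s3 integration it
// carries information about the uploaded object
console.log('job was triggered by:', JSON.stringify(triggerEvent, null, 2));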

Spot strategy

A strategy where you use only spot instances.

  • Pro - you pay only a fraction of the price compared to onDemand instances (a discount of up to 90%).

  • Con - your batch job execution can be interrupted by AWS.

The only difference between an onDemand instance and a spot instance is that a spot instance can be interrupted by Amazon EC2 with a two-minute notification when EC2 needs the capacity back. At the time of writing this documentation, interruptions are fairly infrequent; less than 10% of batch instances get interrupted by AWS.

For some batch jobs, interruptions can be very costly, especially if your batch job needs to restart work from the beginning. This is common in sustained load-testing scenarios and in machine learning training. To lower the cost of interruption, investigate patterns for implementing checkpointing within your application (checkpointing is also discussed in a following section; a sketch of watching for the interruption notice follows below).
If a spot instance gets interrupted within the first hour of being spawned, there is no charge for the instance.
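
If your job can checkpoint on demand, one possible pattern is to watch for the spot interruption notice and save a checkpoint when it appears. The following is a minimal Node.js sketch, assuming the container can reach the EC2 instance metadata endpoint (whether it can depends on how the underlying environment exposes instance metadata to containers):

// poll the EC2 instance metadata service for a spot interruption notice;
// the endpoint returns 404 until an interruption is scheduled, after which
// roughly two minutes remain to persist progress
const http = require('http');

const interruptionScheduled = () =>
  new Promise((resolve) => {
    http
      .get('http://169.254.169.254/latest/meta-data/spot/instance-action', (res) => {
        res.resume(); // drain the response body
        resolve(res.statusCode === 200);
      })
      .on('error', () => resolve(false));
  });

// saveCheckpoint is a hypothetical callback that persists progress (e.g. to a bucket)
const watchForInterruption = (saveCheckpoint) => {
  const timer = setInterval(async () => {
    if (await interruptionScheduled()) {
      clearInterval(timer);
      await saveCheckpoint();
    }
  }, 5000); // check every 5 seconds
};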

Example scenario

Assume you have deployed stack containing following resource

resources:
  batchJobs:
    filePathBatchJob:
      container:
        imageConfig:
          filePath: '_example-configs/batch-jobs/js-batch-job.js'
          includeFiles: '_example-configs/include/explicitly-included.txt'
      resources:
        cpu: 2
        memory: 3800
      strategy:
        spot:
          # REQUIRED total number of execution attempts. In this case, if the job fails for any reason, it will be retried at most once.
          attempts: 2
          # OPTIONAL if the job runs for longer than 1800 seconds, it will be terminated and considered failed
          attemptDurationSeconds: 1800
          # OPTIONAL number of seconds before the first retry attempt (1 by default)
          retryIntervalSeconds: 300
      events:
        - schedule:
            scheduleRate: 'cron(0 14 * * ? *)' # every day at 14:00 UTC

The overall flow is the same as in the example scenario shown in the OnDemand strategy section.

The only difference is the one discussed above: when using the spot strategy, your batch job can get interrupted. We advise familiarizing yourself with the "spot instances" concept before using spot batch jobs in production.

Combined strategy

  • In this strategy, both spot and onDemand instances are used.
  • The spot strategy is prioritized, i.e. the spot strategy attempts are executed first.
  • If the batch job does not succeed using spot instances, the onDemand strategy is executed.

Example scenario

Assume you have deployed a stack containing the following resource:

resources:
  batchJobs:
    trainingBatchJob:
      container:
        imageConfig:
          dockerfilePath: 'training-batch-job/Dockerfile'
          command: ['python', 'train.py']
        environment:
          CHECKPOINTING_BUCKET: "$GetParam('myCheckpointBucket', 'Bucket::Name')"
          TRAINING_BUCKET: "$GetParam('trainingDataBucket', 'Bucket::Name')"
      resources:
        cpu: 4
        memory: 8192
      strategy:
        spot:
          attempts: 2
          attemptDurationSeconds: 10800 # 3 hours
        onDemand:
          attempts: 1
          attemptDurationSeconds: 10800 # 3 hours
      allowAccessTo:
        - 'myCheckpointBucket'
        - 'trainingDataBucket'
      events:
        - schedule:
            scheduleRate: 'cron(0 14 * * ? *)' # every day at 14:00 UTC
  buckets:
    myCheckpointBucket:
      accessibility: 'private'
    trainingDataBucket:
      accessibility: 'private'

The overall flow is the same as in the example scenario shown in the OnDemand strategy section.


However, as you can see, this example scenario is a bit more complex:
  • We are creating a batch job (whose purpose is to train a model).
  • We are creating buckets used to persist data during checkpointing and to store the training data.
  • We are injecting information about the buckets into the batch job container and allowing the batch job container to access these buckets.

The idea behind the strategy used here is:

  • Primarily, we try to use spot instances to run our batch job.
  • If our batch job running on spot instances fails twice, an onDemand instance is used to finish the job.
  • As we are checkpointing the progress of our batch job, each time the batch job gets retried it can pick up the work where it left off (see the sketch below).
When it comes to checkpointing your batch job's progress, the key is to persist data externally and then reload it once the batch job gets retried. For example, TensorFlow, a popular open-source library for deep learning, has a mechanism for executing callbacks during the execution of deep-learning jobs. Custom callbacks can be invoked at the completion of each training epoch and can perform custom actions, such as saving checkpoint data to a bucket.
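
The training examples in this document run Python, so treat the following Node.js sketch purely as an illustration of the checkpointing pattern: persist progress to the bucket injected via CHECKPOINTING_BUCKET, and reload it when a retried attempt starts. The checkpoint key and the epoch-based state are assumptions.

// checkpointing sketch: persist progress to the bucket injected via the
// CHECKPOINTING_BUCKET environment variable and reload it on retry
const { S3Client, GetObjectCommand, PutObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({});
const Bucket = process.env.CHECKPOINTING_BUCKET;
const Key = 'training-checkpoint.json'; // assumed checkpoint location

const saveCheckpoint = async (state) => {
  await s3.send(new PutObjectCommand({ Bucket, Key, Body: JSON.stringify(state) }));
};

const loadCheckpoint = async () => {
  try {
    const { Body } = await s3.send(new GetObjectCommand({ Bucket, Key }));
    return JSON.parse(await Body.transformToString());
  } catch {
    return null; // no checkpoint yet - start from the beginning
  }
};

const main = async () => {
  // resume from the last completed epoch if a previous attempt was interrupted
  const checkpoint = (await loadCheckpoint()) ?? { epoch: 0 };
  for (let epoch = checkpoint.epoch; epoch < 100; epoch++) {
    // ... run one training epoch ...
    await saveCheckpoint({ epoch: epoch + 1 });
  }
};

main().catch((err) => {
  console.error(err);
  process.exit(1);
});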

The result of using this strategy:

  • Best case scenario - our model gets trained using spot instances only (we have saved a lot of money).
  • Average case scenario - our model gets trained mostly using spot instances; some of the training at the end needs to be done by an onDemand instance.
  • Worst case scenario - our model gets trained mostly by onDemand instances. This might be due to spot instances being interrupted too soon. If your spot instances were interrupted in the first hour, you are not charged for them.

➡️ Accessing other resources

Depending on your use case, your batchJob might need to access other resources of your stack.

By default, batchJobs have access to the internet. However, many resources, such as eventBuses, buckets or even cloudformationResources, are protected within your stack.

There are two approaches to granting a batchJob access to a resource:

  1. allowAccessTo property - a list of strings. The elements of this list are the resources of your stack which the batchJob should be able to access, such as eventBuses, buckets... allowAccessTo gives you the ability to easily and transparently control access to the resources of your stack. More on using the allowAccessTo property here.

  2. iamRoleStatements property - a list of IAM role statement objects. The elements of this list are IAM role statements which will be appended to your batchJob's role. This is an advanced feature and should be used with caution. iamRoleStatements is useful when you want more granular control over access, or when you need to control access to resources that cannot be scoped by (listed in) the allowAccessTo property. More on using the iamRoleStatements property here.

The allowAccessTo property does NOT work with resources defined in the cloudformationResources section. See which resources can be scoped by allowAccessTo.

Allow access using allowAccessTo

To allow your batchJob to access other resources that are part of your stack's resources section, you can use the allowAccessTo property.

Listing a resource in the allowAccessTo property gives the batchJob a basic set of permissions over the resource (the permissions differ based on the type of resource; see this section).

Databases and Atlas Mongo clusters are accessible from batchJobs by default IF their allowConnectionsFrom property is set to "internet" or "vpc".

Example scenario

In this example scenario, we are giving the batchJob permission to access the specified buckets.

resources:
  batchJobs:
    trainingBatchJob:
      container:
        imageConfig:
          dockerfilePath: 'training-batch-job/Dockerfile'
          command: ['python', 'train.py']
        # we are injecting the AWS names of the buckets into container environment variables, to make them accessible during batch job execution
        environment:
          CHECKPOINTING_BUCKET: "$GetParam('myCheckpointBucket', 'Bucket::Name')"
          TRAINING_BUCKET: "$GetParam('trainingDataBucket', 'Bucket::Name')"
      resources:
        cpu: 4
        memory: 8192
      strategy:
        spot:
          attempts: 2
          attemptDurationSeconds: 10800 # 3 hours
      # here we are giving the batch job basic access to the buckets "myCheckpointBucket" and "trainingDataBucket" defined in our template
      allowAccessTo:
        - 'myCheckpointBucket'
        - 'trainingDataBucket'
      events:
        - schedule:
            scheduleRate: 'cron(0 14 * * ? *)' # every day at 14:00 UTC
  buckets:
    myCheckpointBucket:
      accessibility: 'private'
    trainingDataBucket:
      accessibility: 'private'

Scope of allowAccessTo

Resources which can be scoped by allowAccessTo, and the permissions granted on each:

| resource type | granted permissions on resource |
| --- | --- |
| bucket | list objects in bucket; create/get/delete/tag objects in bucket |
| atlasMongoCluster, database | allows the batchJob to connect to a database/cluster which has restricted allowConnectionsFrom. See the databases docs. |
| eventBus | publish events into the specified eventBus |
| function | invoke the specified function |
| batchJob | submit a batchJob instance into the batchJob queue; list submitted job instances in the batchJob queue; describe/terminate a specific batchJob instance; list executions of the state machine which executes the batchJob according to its strategy; start/terminate execution of that state machine |
| stateMachine | list executions of the stateMachine; start/terminate execution of the stateMachine |

Allow access using iamRoleStatements

If you need to access resources which you cannot scope with allowAccessTo, or you need fine-grained resource access control, you can use the iamRoleStatements property. Stacktape adds the listed iamRoleStatements to the role which the batchJob uses during execution.

You can also use iamRoleStatements to give the batchJob access to resources defined in the cloudformationResources section.

Example scenario

In this example scenario, we are giving the batchJob permission to access a DynamoDB table defined in the cloudformationResources section.

resources:
  batchJobs:
    trainingBatchJob:
      container:
        imageConfig:
          dockerfilePath: 'training-batch-job/Dockerfile'
          command: ['python', 'train.py']
        # we are injecting the Dynamo table name into container environment variables, to make it accessible during execution
        environment:
          TABLE_NAME: "$GetParam('ResultsTable', 'Name')"
      resources:
        cpu: 4
        memory: 7800
      strategy:
        spot:
          attempts: 2
          attemptDurationSeconds: 10800 # 3 hours
      # giving the batchJob permissions to get and put items into the dynamoDB table defined in cloudformation resources
      iamRoleStatements:
        - Resource: $GetParam('ResultsTable', 'Arn')
          Effect: 'Allow'
          Action:
            - 'dynamodb:PutItem'
            - 'dynamodb:Get*'
      events:
        - schedule:
            scheduleRate: 'cron(0 14 * * ? *)' # every day at 14:00 UTC

cloudformationResources:
  ResultsTable:
    Type: 'AWS::DynamoDB::Table'
    Properties:
      TableName: "$Format('{}-results-table', $GetCliArgs().stage)"
      BillingMode: 'PAY_PER_REQUEST'
      AttributeDefinitions:
        - AttributeName: 'resultDate'
          AttributeType: 'S'
      KeySchema:
        - AttributeName: 'resultDate'
          KeyType: 'HASH'
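
Inside the job container, the table can then be accessed using the injected TABLE_NAME variable, within the bounds of the granted dynamodb:PutItem and dynamodb:Get* permissions. The example above runs Python; the following Node.js sketch just illustrates the pattern, and the item attributes besides the key are hypothetical:

// write a result item to the DynamoDB table whose name was injected via TABLE_NAME
const { DynamoDBClient, PutItemCommand } = require('@aws-sdk/client-dynamodb');

const client = new DynamoDBClient({});

const storeResult = async (result) => {
  // allowed by the 'dynamodb:PutItem' statement in iamRoleStatements
  await client.send(
    new PutItemCommand({
      TableName: process.env.TABLE_NAME,
      Item: {
        resultDate: { S: new Date().toISOString() }, // the table's HASH key
        payload: { S: JSON.stringify(result) },      // hypothetical attribute
      },
    })
  );
};

storeResult({ accuracy: 0.97 }).catch(console.error);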


API Reference

| Property in Stacktape config | Allowed types |
| --- | --- |
| resources.batchJobs.{name}.allowAccessTo[] | string |
| resources.batchJobs.{name}.container.environment.{name} | string, number, boolean |
| resources.batchJobs.{name}.container.imageConfig | FilepathImage, DockerfileImage, ExistingImage (required) |
| resources.batchJobs.{name}.events[].cloudwatchLog | CloudwatchLog |
| resources.batchJobs.{name}.events[].dynamoDb.batchSize | number |
| resources.batchJobs.{name}.events[].dynamoDb.batchWindow | number |
| resources.batchJobs.{name}.events[].dynamoDb.bisectBatchOnFunctionError | boolean |
| resources.batchJobs.{name}.events[].dynamoDb.destinations.onFailure | OnFailure (required) |
| resources.batchJobs.{name}.events[].dynamoDb.enabled | boolean |
| resources.batchJobs.{name}.events[].dynamoDb.maximumRetryAttempts | number |
| resources.batchJobs.{name}.events[].dynamoDb.parallelizationFactor | number |
| resources.batchJobs.{name}.events[].dynamoDb.startingPosition | string |
| resources.batchJobs.{name}.events[].dynamoDb.streamArn | string (required) |
| resources.batchJobs.{name}.events[].eventBus.customEventBusArn | string |
| resources.batchJobs.{name}.events[].eventBus.description | string |
| resources.batchJobs.{name}.events[].eventBus.eventPattern | EventPattern (required) |
| resources.batchJobs.{name}.events[].eventBus.input | any |
| resources.batchJobs.{name}.events[].eventBus.inputPath | string |
| resources.batchJobs.{name}.events[].eventBus.inputTransformer | InputTransformer |
| resources.batchJobs.{name}.events[].httpApi.authorizer | Authorizer |
| resources.batchJobs.{name}.events[].httpApi.httpApiGatewayName | string (required) |
| resources.batchJobs.{name}.events[].httpApi.method | enum (required) |
| resources.batchJobs.{name}.events[].httpApi.path | string (required) |
| resources.batchJobs.{name}.events[].httpApi.payloadFormat | enum |
| resources.batchJobs.{name}.events[].iot | Iot |
| resources.batchJobs.{name}.events[].kinesis.autoCreateConsumer | boolean |
| resources.batchJobs.{name}.events[].kinesis.batchSize | number |
| resources.batchJobs.{name}.events[].kinesis.bisectBatchOnFunctionError | boolean |
| resources.batchJobs.{name}.events[].kinesis.consumerArn | string |
| resources.batchJobs.{name}.events[].kinesis.destinations.onFailure | OnFailure (required) |
| resources.batchJobs.{name}.events[].kinesis.enabled | boolean |
| resources.batchJobs.{name}.events[].kinesis.maxBatchWindowSeconds | number |
| resources.batchJobs.{name}.events[].kinesis.maximumRetryAttempts | number |
| resources.batchJobs.{name}.events[].kinesis.parallelizationFactor | number |
| resources.batchJobs.{name}.events[].kinesis.startingPosition | enum |
| resources.batchJobs.{name}.events[].kinesis.streamArn | string (required) |
| resources.batchJobs.{name}.events[].loadBalancer | LoadBalancer |
| resources.batchJobs.{name}.events[].s3.bucketArn | string (required) |
| resources.batchJobs.{name}.events[].s3.event | string[], string (required) |
| resources.batchJobs.{name}.events[].s3.filterRules[] | FilterRules |
| resources.batchJobs.{name}.events[].schedule.description | string |
| resources.batchJobs.{name}.events[].schedule.input | any |
| resources.batchJobs.{name}.events[].schedule.inputPath | string |
| resources.batchJobs.{name}.events[].schedule.inputTransformer | InputTransformer |
| resources.batchJobs.{name}.events[].schedule.scheduleRate | string (required) |
| resources.batchJobs.{name}.events[].sns.destinations.onDeliveryFailure | OnDeliveryFailure (required) |
| resources.batchJobs.{name}.events[].sns.filterPolicy | any |
| resources.batchJobs.{name}.events[].sns.topicArn | string (required) |
| resources.batchJobs.{name}.events[].sqs | Sqs |
| resources.batchJobs.{name}.iamRoleArn | string |
| resources.batchJobs.{name}.iamRoleStatements[] | IamRoleStatements |
| resources.batchJobs.{name}.resources | Resources |
| resources.batchJobs.{name}.strategy.onDemand | OnDemand |
| resources.batchJobs.{name}.strategy.spot | Spot |