



Batch Jobs

Overview and basic concepts

  • A batch job is a computing resource - it runs your code. The batch job runs until your code finishes processing.

  • The execution of a batch job is initiated by an event (such as an incoming request to an HTTP API Gateway, a message arriving to an SQS queue, or an object being created in an S3 bucket).

  • Batch jobs can be configured to use spot instances, which can help you save up to 90% of computing costs.

  • Similarly to functions and container workloads, batch jobs are serverless and fully managed. This means you don't have to worry about administration tasks such as provisioning and managing servers, scaling, VM security, OS security & much more.

  • Stacktape batch job consists of:

    • User-defined Docker container (runs your code)
    • Lambda function & State-machine (stacktape-managed, used to manage the lifecycle, integrations and execution of the batch job)
  • The container image can be supplied in 3 different ways:

    • built automatically from your source code by Stacktape
    • built using a supplied Dockerfile by Stacktape
    • supplied as a pre-built image
  • In addition to CPU and RAM, you can also configure a GPU for your batch job's environment.

When to use

Batch jobs are ideal for long-running and resource-demanding tasks, such as data-processing and ETL pipelines, training a machine-learning model, etc.

Advantages

  • Pay-per-use - You only pay for the compute resources your jobs use.
  • Resource flexibility - Whether your job requires 1 CPU or 50 CPUs, 1GiB or 128GiB of memory, the self-managed compute environment always meets your needs by spinning up the optimal instance to run your job.
  • Time flexibility - Unlike functions, batch jobs can run indefinitely.
  • Secure by default - Underlying environment is securely managed by AWS.
  • Easy integration - batch-job can be invoked by events from a wide variety of services.

Disadvantages

  • Slow start time - After a job execution is triggered, the job instance is put into an execution queue and can take anywhere from a few seconds up to a few minutes to start.

Basic usage

resources:
  myBatchJob:
    type: batch-job
    properties:
      container:
        imageConfig:
          filePath: path/to/my/batch-job.ts
      resources:
        cpu: 2
        memory: 1800
      events:
        - type: schedule
          properties:
            scheduleRate: cron(0 14 * * ? *) # every day at 14:00 UTC

(async () => {
  const event = JSON.parse(process.env.STP_TRIGGER_EVENT_DATA);
  // process the event
})();

Container

  • A batch job execution runs a Docker container inside a fully managed batch environment.
  • You can configure the following properties of the container:
imageConfig
Required

Configures an image for the container used in this batch job

Type: (FilePathBasedImage or BatchJobDockerfileBasedImage or BatchJobPrebuiltImage)

environment

Environment variables injected to the batch job's environment

Type: Array of EnvironmentVar

  • Environment variables are often used to inject information about other parts of the infrastructure (such as database URLs, secrets, etc.).

Image

  • A Docker container is a running instance of a Docker image.
  • The image for your container can be supplied in 3 different ways:

Built from source code

  • Stacktape bundles your source code with all of its dependencies and builds the image for you.
  • During deployment, Stacktape pushes the image to the stack's private AWS container registry.
FilePathBasedImage  API reference
Parent API reference: BatchJobContainer
filePath
Required

Path to the entry point of your workload (relative to the stacktape config file)

Type: string

  • Stacktape tries to bundle all your source code with its dependencies into a single file.
  • If a certain dependency doesn't support static bundling (because it has a binary, uses dynamic require() calls, etc.), Stacktape will install it and copy it to the bundle.

languageSpecificConfig

Configuration of packaging properties specific to given language

Type: ContainerLanguageSpecificConfig

includeFiles

Files that should be explicitly included in the deployment package (glob pattern)

Type: Array of string

  • Example glob pattern: images/*.jpg

excludeFiles

Files that should be explicitly excluded from deployment package (glob pattern)

Type: Array of string

Example glob pattern: images/*.jpg

dependenciesToIgnore

Dependencies to ignore.

Type: Array of string

  • These dependencies won't be a part of your deployment package.

resources:
  myBatchJob:
    type: batch-job
    properties:
      container:
        imageConfig:
          filePath: path/to/my/batch-job.ts

Built from a Dockerfile

  • Stacktape builds the image using the specified Dockerfile.
  • During the deployment, Stacktape pushes the image to the stack's private AWS container registry.
  • If you are not familiar with Docker or writing Dockerfiles, you can refer to Docker's official guide on writing Dockerfiles.

resources:
  myBatchJob:
    type: batch-job
    properties:
      container:
        imageConfig:
          dockerfilePath: src/Dockerfile
          buildContextPath: src

BatchJobDockerfileBasedImage  API reference
Parent API reference: BatchJobContainer
buildContextPath
Required

Path to directory (relative to stacktape config file) used as build context

Type: string

dockerfilePath

Path to Dockerfile (relative to buildContextPath) used to build application image.

Type: string

buildArgs

List of arguments passed to the docker build command when building the image

Type: Array of DockerBuildArg

command

Command to be executed when the container starts. Overrides CMD instruction in the Dockerfile.

Type: Array of string

Pre-built ahead of time

  • Pre-built image from the Docker registry is used. Stacktape does not build the image for you.
BatchJobPrebuiltImage  API reference
Parent API reference: BatchJobContainer
image
Required

Name or the URL of the image used in this workload.

Type: string

command

Command to be executed when the container starts. Overrides CMD instruction in the Dockerfile.

Type: Array of string

resources:
  myBatchJob:
    type: batch-job
    properties:
      container:
        imageConfig:
          image: mypublicrepo/my-container
          command: ['index.js']

Environment variables

Most commonly used types of environment variables:

environment:
  STATIC_ENV_VAR: 'my-env-var'
  DYNAMICALLY_SET_ENV_VAR: "$MyCustomDirective('input-for-my-directive')"
  DB_HOST: "$Param('myPgSql', 'DbInstance::Endpoint.Address')"
  DB_PASSWORD: "$Secret('dbSecret.password')"


Pre-set environment variables

Stacktape pre-sets the following environment variables:

Name                     Value
STP_TRIGGER_EVENT_DATA   Contains the JSON-stringified event from the event integration that triggered this batch job.
STP_MAXIMUM_ATTEMPTS     Absolute number of attempts this batch job gets before it is considered failed.
STP_CURRENT_ATTEMPT      Serial number of this attempt.
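A handler can use these variables, for example, to run cleanup or alerting logic only on the final attempt. A minimal sketch (the helper below is illustrative, not part of Stacktape):

```javascript
// Decide whether this run is the last allowed attempt, based on the
// pre-set STP_* environment variables (both arrive as strings).
const isLastAttempt = (env = process.env) => {
  const maxAttempts = Number(env.STP_MAXIMUM_ATTEMPTS);
  const currentAttempt = Number(env.STP_CURRENT_ATTEMPT);
  return currentAttempt >= maxAttempts;
};

// Example: with 3 attempts configured, only the 3rd run reports true.
console.log(isLastAttempt({ STP_MAXIMUM_ATTEMPTS: '3', STP_CURRENT_ATTEMPT: '1' })); // false
console.log(isLastAttempt({ STP_MAXIMUM_ATTEMPTS: '3', STP_CURRENT_ATTEMPT: '3' })); // true
```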

Logging

  • Every time your code outputs (prints) something to the stdout or stderr, your log will be captured and stored in an AWS CloudWatch log group.
  • You can browse your logs in 2 ways:
    • go to your batch job's log-group in the AWS CloudWatch console. You can use stacktape stack-info command to get a direct link.
    • use stacktape logs command that will print logs to the console

Computing resources

  • You can configure the amount of resources your batch job will have access to.
  • In addition to CPU and RAM, batch jobs also allow you to configure GPU. To learn more about GPU instances, refer to AWS Docs.
  • Behind the scenes, AWS Batch selects an instance type (from the C4, M4, and R4 instance families) that best fits the needs of the jobs with a preference for the lowest-cost instance type (BEST_FIT strategy).

If you define the memory required for your batch job in multiples of 1024, be aware: your self-managed environment might spin up instances that are much bigger than expected. This happens because the instances in your environment need extra memory for the management processes (managed by AWS) associated with running the batch job. Example: if you define 8192 memory for your batch job, you might expect the self-managed environment to spin up an instance from the used families with 8GiB (8192MB) of memory. However, the environment knows that such an instance would not be sufficient for both the batch job and the management processes, so it will spin up a bigger instance instead. To learn more about this issue, refer to AWS Docs. Due to this behavior, we advise specifying the memory for your batch jobs conservatively: instead of 8192, consider specifying a lower value such as 7680. This way the environment can use instances with 8GiB (8192MB) of memory, which can lead to cost savings.

If you define GPUs, instances are chosen according to your need from the GPU accelerated families:

  • p2 family: NVIDIA K80 GPU. More in AWS Docs
  • p3 family: NVIDIA V100 Tensor Core. More in AWS Docs
  • g3 family and g3s family: Tesla M60 GPU. More in AWS Docs
  • g4 family: T4 Tensor Core GPU. More in AWS Docs
BatchJobResources  API reference
Parent API reference: BatchJob
cpu
Required

Amount of virtual CPUs accessible to the batch job

Type: number

memory
Required

Amount of memory accessible to the batch job

Type: number

If you define the memory required for your batch job in multiples of 1024, be aware: your self-managed environment might spin up instances that are much bigger than expected. This happens because the instances in your environment need extra memory for the management processes (managed by AWS) associated with running the batch job. Example: if you define 8192 memory for your batch job, you might expect the self-managed environment to spin up an instance from the used families with 8GiB (8192MB) of memory. However, the environment knows that such an instance would not be sufficient for both the batch job and the management processes, so it will spin up a bigger instance instead. To learn more about this issue, refer to AWS Docs. Due to this behavior, we advise specifying the memory for your batch jobs conservatively: instead of 8192, consider specifying a lower value such as 7680. This way the environment can use instances with 8GiB (8192MB) of memory, which can lead to cost savings.

gpu

Number of physical GPUs accessible to the batch job

Type: number

If you define GPUs, instances are chosen according to your need from the GPU accelerated families:

  • p2 family: NVIDIA K80 GPU. More in AWS Docs
  • p3 family: NVIDIA V100 Tensor Core. More in AWS Docs
  • g3 family and g3s family: Tesla M60 GPU. More in AWS Docs
  • g4 family: T4 Tensor Core GPU. More in AWS Docs

resources:
  myBatchJob:
    type: batch-job
    properties:
      container:
        imageConfig:
          filePath: batch-jobs/js-batch-job.js
      resources:
        cpu: 2
        memory: 1800
      events:
        - type: schedule
          properties:
            scheduleRate: 'cron(0 14 * * ? *)' # every day at 14:00 UTC

Spot instances

  • Batch jobs can be configured to use spot instances.
  • Spot instances leverage AWS's spare computing capacity and can cost up to 90% less than "onDemand" (normal) instances.
  • However, your batch job can be interrupted at any time if AWS needs the capacity back. When this happens, your batch job receives a SIGTERM signal, and you then have 120 seconds to save your progress or clean up.
  • Interruptions are usually infrequent as can be seen in the AWS Spot instance advisor.
  • To learn more about spot instances, refer to AWS Docs.
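A sketch of reacting to the interruption signal in a Node.js batch job (the checkpointing routine is hypothetical; replace it with your own persistence logic):

```javascript
// On spot interruption the container receives SIGTERM and has ~120 seconds
// of grace period before it is stopped.
let checkpointSaved = false;

// Hypothetical checkpointing routine - persist progress to durable storage here.
const saveCheckpoint = () => {
  checkpointSaved = true;
};

process.on('SIGTERM', () => {
  saveCheckpoint();
  // Exit promptly so the remaining grace period is not wasted.
  process.exit(0);
});
```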

resources:
  myBatchJob:
    type: batch-job
    properties:
      container:
        imageConfig:
          filePath: path/to/my/batch-job.ts
      resources:
        cpu: 2
        memory: 1800
      useSpotInstances: true

Retries

  • If the batch job exits with a non-zero exit code (due to internal failure, timeout, spot instance interruption by AWS, etc.) and attempts are not exhausted, it can be retried.
BatchJobRetryConfiguration  API reference
Parent API reference: BatchJob
attempts
Default: 1

Maximum number of times the batch job will try to execute before considered failed.

Type: number

  • If the batch job exits with a non-zero exit code (due to internal failure, timeout, spot instance interruption by AWS, etc.) and attempts are not exhausted, it will be retried.
  • When there are no attempts left, the batch job is considered failed.

retryIntervalSeconds
Default: 0

Amount of time (in seconds) to wait between the attempts.

Type: number

retryIntervalMultiplier
Default: 1

Multiplier for retryIntervalSeconds

Type: number

  • Every time the batch job is retried, the amount of time between the executions will be multiplied by this number.
  • This can be used to implement a backoff strategy.
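The resulting wait times follow a geometric progression: retryIntervalSeconds multiplied by retryIntervalMultiplier once per preceding retry. A quick sketch of the arithmetic:

```javascript
// Wait time (in seconds) before retry number `n` (1-based), given the
// retry configuration described above.
const waitBeforeRetry = (n, retryIntervalSeconds, retryIntervalMultiplier = 1) =>
  retryIntervalSeconds * retryIntervalMultiplier ** (n - 1);

// With retryIntervalSeconds: 10 and retryIntervalMultiplier: 2,
// successive retries wait 10s, 20s, 40s, ...
console.log([1, 2, 3].map((n) => waitBeforeRetry(n, 10, 2))); // [ 10, 20, 40 ]
```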

Timeout

  • When the timeout is reached, the batch job will be stopped.
  • If the batch job fails and maximum attempts are not yet exhausted, it will be retried.

resources:
  myBatchJob:
    type: batch-job
    properties:
      container:
        imageConfig:
          filePath: path/to/my/batch-job.ts
      resources:
        cpu: 2
        memory: 1800
      timeout: 1200

Storage

  • Each batch job instance has access to its own ephemeral storage. It's removed after the batch job finishes processing or fails.
  • It has a fixed size of 20GB.
  • To store data persistently, consider using Buckets.

Lifecycle process

The lifecycle of your batch job is fully managed. Stacktape leverages 2 extra resources to achieve this:

  • Trigger function

    • Stacktape-managed AWS lambda function used to connect event integration to the batch job and start the execution of the batch job state machine
  • Batch job state machine

    • Stacktape-managed AWS State machine used to control the lifecycle of the batch job container.

Batch job execution flow:

  1. Trigger function receives the event from one of its integrations
  2. Trigger function starts the execution of the batch job state machine
  3. Batch job state machine spawns the batch job instance (Docker container) and controls its lifecycle.

Trigger events

  • Batch jobs are invoked ("triggered") in a reaction to an event.
  • Each batch job can have multiple event integrations.
  • Payload (data) received by the batch job depends on the event integration. It is accessible using the STP_TRIGGER_EVENT_DATA environment variable as a JSON stringified value.
  • Be careful when connecting your batch jobs to event integrations that can trigger them at a high rate. Your batch job can get triggered many times per second, and this can get very costly.
  • Example: connecting your batch job to an HTTP API Gateway and receiving 1000 HTTP requests will result in 1000 invocations.

HTTP Api event

  • The batch job is triggered in a reaction to an incoming request to the specified HTTP API Gateway.
  • HTTP API Gateway selects the route with the most-specific match. To learn more about how paths are evaluated, refer to AWS Docs

resources:
  myHttpApi:
    type: http-api-gateway
  myBatchJob:
    type: batch-job
    properties:
      container:
        imageConfig:
          filePath: path/to/my/batch-job.ts
      resources:
        cpu: 2
        memory: 1800
      events:
        - type: http-api-gateway
          properties:
            httpApiGatewayName: myHttpApi
            path: /hello
            method: GET

Batch job connected to an HTTP API Gateway "myHttpApi"

HttpApiIntegration  API reference
Parent API reference: BatchJob
type
Required

Type of the event integration

Type: string "http-api-gateway"

properties.httpApiGatewayName
Required

Name of the HTTP API Gateway

Type: string

properties.method
Required

HTTP method that the request should match to be routed by this event integration

Type: string ENUM

Possible values: *, DELETE, GET, HEAD, OPTIONS, PATCH, POST, PUT

Can be either:

  • exact method (e.g. GET or PUT)
  • wildcard matching any method (*)

properties.path
Required

URL path that the request should match to be routed by this event integration

Type: string

Can be either:

  • Exact URL Path - e.g. /post
  • Path with a positional parameter - e.g. /post/{id}. This matches any id parameter, e.g. /post/6. The parameter will be available to the workload using event.pathParameters.id
  • Greedy path variable - e.g. /pets/{anything+}. This catches all child resources of the route. Example: /post/{anything+} catches both /post/something/param1 and /post/something2/param
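Inside the batch job, the matched parameters arrive as part of the trigger event. A sketch of reading them (the event shape shown is a simulated HTTP API payload; exact fields depend on the configured payloadFormat):

```javascript
// The HTTP API event is delivered JSON-stringified in STP_TRIGGER_EVENT_DATA.
// For a route like /post/{id}, the matched id is available under pathParameters.
const getPostId = (rawEvent) => {
  const event = JSON.parse(rawEvent);
  return event.pathParameters?.id;
};

// Simulated trigger event for a request to /post/6:
const rawHttpEvent = JSON.stringify({ pathParameters: { id: '6' } });
console.log(getPostId(rawHttpEvent)); // prints: 6
```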

properties.authorizer

Configures authorization rules for this event integration

Type: (CognitoAuthorizer or LambdaAuthorizer)

  • Only the authorized requests will be forwarded to the workload.
  • All other requests will receive { "message": "Unauthorized" }

properties.payloadFormat
Default: "1.0"

The format of the payload that the workload will receive with this integration.

Type: string ENUM

Possible values: 1.0, 2.0

  • To learn more about the differences between the formats, refer to AWS Docs

Cognito authorizer

  • Using a Cognito authorizer allows only the users authenticated with a User pool to access your batch job.
  • The request must include an access token (specified as a bearer token, { Authorization: "<your-access-token>" }).
  • If the request is successfully authorized, your batch job will receive some authorization claims in its payload. To get more information about the user, you can use the getUser API method.
  • HTTP API uses JWT (JSON Web Token)-based authorization. To learn more about how requests are authorized, refer to AWS Docs.

resources:
  myGateway:
    type: http-api-gateway
  myUserPool:
    type: user-auth-pool
    properties:
      userVerificationType: email-code
  myBatchJob:
    type: batch-job
    properties:
      container:
        imageConfig:
          filePath: path/to/my/batch-job.ts
      resources:
        cpu: 2
        memory: 1800
      events:
        - type: http-api-gateway
          properties:
            httpApiGatewayName: myGateway
            path: /some-path
            method: '*'
            authorizer:
              type: cognito
              properties:
                userPoolName: myUserPool

Example cognito authorizer

import { CognitoIdentityProvider } from '@aws-sdk/client-cognito-identity-provider';

const cognito = new CognitoIdentityProvider({});

(async () => {
  const event = JSON.parse(process.env.STP_TRIGGER_EVENT_DATA);
  const userData = await cognito.getUser({ AccessToken: event.headers.authorization });
  // do something with your user data
})();

Example batch job that fetches user data from Cognito

CognitoAuthorizer  API reference
Parent API reference: HttpApiIntegration
type
Required

No description

Type: string "cognito"

properties.userPoolName
Required

No description

Type: string

properties.identitySources

No description

Type: Array of string


Lambda authorizer

  • When using Lambda authorizer, a special lambda function determines if the client can access your batch job.
  • When a request arrives to the HTTP API Gateway, lambda authorizer function is invoked. It must return either a simple response indicating if the client is authorized
{
  "isAuthorized": true,
  "context": {
    "exampleKey": "exampleValue"
  }
}

Simple lambda authorizer response format

or an IAM Policy document (when the iamResponse property is set to true, you can further configure permissions of the target batch job)

{
  "principalId": "abcdef", // The principal user identification associated with the token sent by the client.
  "policyDocument": {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": "execute-api:Invoke",
        "Effect": "Allow|Deny",
        "Resource": "arn:aws:execute-api:{regionId}:{accountId}:{apiId}/{stage}/{httpVerb}/[{resource}/[{child-resources}]]"
      }
    ]
  },
  "context": {
    "exampleKey": "exampleValue"
  }
}

IAM Policy document lambda authorizer response format

  • Data returned in the context property will be available to the batch job.
  • You can configure identitySources that specify the location of data that's required to authorize a request. If they are not included in the request, the Lambda authorizer won't be invoked, and the client receives a 401 error. The following identity sources are supported: $request.header.name, $request.querystring.name and $context.variableName.
  • When caching is enabled for an authorizer, API Gateway uses the authorizer's identity sources as the cache key. If a client specifies the same parameters in identity sources within the configured TTL, API Gateway uses the cached authorizer result, rather than invoking the authorizer Lambda function.
  • By default, API Gateway uses the cached authorizer response for all routes of an API that use the authorizer. To cache responses per route, add $context.routeKey to your authorizer's identity sources.
  • To learn more about Lambda authorizers, refer to AWS Docs
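A minimal simple-response Lambda authorizer might look like the sketch below. The token check is a placeholder (a real authorizer would verify a JWT or look the token up):

```javascript
// Simple-response Lambda authorizer sketch. API Gateway invokes the handler
// with the request; the handler returns { isAuthorized, context }.
const authorizerHandler = async (event) => {
  // Placeholder check - replace with real token validation (e.g. JWT verification).
  const token = event.headers?.authorization;
  const isAuthorized = token === 'valid-token';
  return {
    isAuthorized,
    // Arbitrary data passed through to the target workload via the context property.
    context: { tokenPresent: Boolean(token) },
  };
};

authorizerHandler({ headers: { authorization: 'valid-token' } }).then((res) => {
  console.log(res.isAuthorized); // prints: true
});
```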
LambdaAuthorizer  API reference
Parent API reference: HttpApiIntegration
type
Required

No description

Type: string "lambda"

properties.lambdaName
Required

No description

Type: string

properties.iamResponse

No description

Type: boolean

properties.identitySources

No description

Type: Array of string

properties.cacheResultSeconds

No description

Type: number

Schedule event

The batch job is triggered on a specified schedule. You can use 2 different schedule types:

resources:
  myBatchJob:
    type: batch-job
    properties:
      container:
        imageConfig:
          filePath: path/to/my/batch-job.ts
      resources:
        cpu: 2
        memory: 1800
      events:
        # invoke the batch job every two hours
        - type: schedule
          properties:
            scheduleRate: rate(2 hours)
        # invoke the batch job at 10:00 UTC every day
        - type: schedule
          properties:
            scheduleRate: cron(0 10 * * ? *)

ScheduleIntegration  API reference
Parent API reference: BatchJob
type
Required

Type of the event integration

Type: string "schedule"

properties.scheduleRate
Required

Invocation schedule rate

Type: string

2 different formats are supported:

  • rate expression - example: rate(2 hours) or rate(20 seconds)
  • cron - example: cron(0 10 * * ? *) or cron(0 15 3 * ? *)

properties.input

No description

Type: UNSPECIFIED

properties.inputPath

No description

Type: string

properties.inputTransformer

No description

Type: EventInputTransformer

Event Bus event

The batch job is triggered when the specified event bus receives an event matching the specified pattern.

2 types of event buses can be used:


  • Default event bus

    • Default event bus is pre-created by AWS and shared by the whole AWS account.
    • Can receive events from multiple AWS services. Full list of supported services.
    • To use the default event bus, set the useDefaultBus property.

resources:
  myBatchJob:
    type: batch-job
    properties:
      container:
        imageConfig:
          filePath: path/to/my/batch-job.ts
      resources:
        cpu: 2
        memory: 1800
      events:
        - type: event-bus
          properties:
            useDefaultBus: true
            eventPattern:
              source:
                - 'aws.autoscaling'
              region:
                - 'us-west-2'

Batch job connected to the default event bus

  • Custom event bus
    • Your own, custom Event bus.
    • This event bus can receive your own, custom events.
    • To use custom event bus, specify either eventBusArn or eventBusName property.

resources:
  myEventBus:
    type: event-bus
  myBatchJob:
    type: batch-job
    properties:
      container:
        imageConfig:
          filePath: path/to/my/batch-job.ts
      resources:
        cpu: 2
        memory: 1800
      events:
        - type: event-bus
          properties:
            eventBusName: myEventBus
            eventPattern:
              source:
                - 'mycustomsource'

Batch job connected to a custom event bus
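To trigger the batch job above, publish an event whose source matches the configured pattern. A sketch of the entry you would pass to EventBridge's PutEvents call (actually sending it requires the AWS SDK and credentials, shown only as a comment):

```javascript
// Build an EventBridge PutEvents entry matching the pattern
// { source: ['mycustomsource'] } from the custom event bus example.
const buildEntry = (detail) => ({
  EventBusName: 'myEventBus', // name of the custom bus from the config
  Source: 'mycustomsource',   // must match the integration's eventPattern
  DetailType: 'my-event',     // free-form event classification
  Detail: JSON.stringify(detail),
});

const entry = buildEntry({ orderId: 123 });
console.log(entry.Source); // prints: mycustomsource

// With AWS SDK v3 this entry would be sent roughly as:
//   new EventBridgeClient({}).send(new PutEventsCommand({ Entries: [entry] }));
```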

EventBusIntegration  API reference
Parent API reference: BatchJob
type
Required

Type of the event integration

Type: string "event-bus"

properties.eventPattern
Required

Used to filter the events from the event bus based on a pattern

Type: EventBusIntegrationPattern

  • Each event received by the Event Bus gets evaluated against this pattern. If the event matches this pattern, the integration invokes the workload.
  • To learn more about the event bus filter pattern syntax, refer to AWS Docs

properties.eventBusArn

Arn of the event-bus

Type: string

  • Use this, if you want to use an event bus defined outside of the stack resources.
  • You need to specify exactly one of eventBusArn, eventBusName or useDefaultBus.

properties.eventBusName

Name of the Event Bus defined within the Stacktape resources

Type: string

  • Use this, if you want to use an event bus defined within the stack resources.
  • You need to specify exactly one of eventBusArn, eventBusName or useDefaultBus.

properties.useDefaultBus

Configures the integration to use the default (AWS created) event bus

Type: boolean

  • You need to specify exactly one of eventBusArn, eventBusName or useDefaultBus.

properties.input

No description

Type: UNSPECIFIED

properties.inputPath

No description

Type: string

properties.inputTransformer

No description

Type: EventInputTransformer

SNS event

The batch job is triggered every time a specified SNS topic receives a new message.

  • Amazon SNS is a fully managed messaging service for both application-to-application (A2A) and application-to-person (A2P) communication.
  • Messages (notifications) are published to the topics
  • To add your custom SNS topic to your stack, add a CloudFormation resource to the cloudformationResources section of your config.

resources:
  myBatchJob:
    type: batch-job
    properties:
      container:
        imageConfig:
          filePath: path/to/my/batch-job.ts
      resources:
        cpu: 2
        memory: 1800
      events:
        - type: sns
          properties:
            topicArn: $Param('mySnsTopic', 'Arn')
            onDeliveryFailure:
              sqsQueueArn: $Param('mySqsQueue', 'Arn')
              sqsQueueUrl: $Param('mySqsQueue', 'QueueURL')

cloudformationResources:
  mySnsTopic:
    Type: AWS::SNS::Topic
  mySqsQueue:
    Type: AWS::SQS::Queue

SnsIntegration  API reference
Parent API reference: BatchJob
type
Required

Type of the event integration

Type: string "sns"

properties.topicArn
Required

Arn of the SNS topic. Messages arriving to this topic will invoke the workload.

Type: string

properties.filterPolicy

No description

Type: UNSPECIFIED

properties.onDeliveryFailure

SQS Destination for messages that fail to be delivered to the workload

Type: SnsOnDeliveryFailure

  • Failure to deliver can happen in rare cases, e.g. when the function is not able to scale fast enough to react to incoming messages.

SQS event

The batch job is triggered whenever there are messages in the specified SQS queue.

  • Messages are processed in batches
  • If the SQS queue contains multiple messages, the batch job is invoked with multiple messages in its payload
  • A single queue should always be "consumed" by a single workload. An SQS message can only be read once from the queue, and while it's being processed, it's invisible to other workloads. If multiple different workloads are processing messages from the same queue, each will get their share of the messages, but one message won't be delivered to more than one workload at a time. If you need to consume the same message by multiple consumers (fan-out pattern), consider using the EventBus integration or the SNS integration.
  • To add your custom SQS queue to your stack, simply add a CloudFormation resource to the cloudformationResources section of your config.

Batching behavior can be configured. The batch job is triggered when any of the following things happen:

  • Batch window expires. Batch window can be configured using maxBatchWindowSeconds property.
  • Maximum Batch size (amount of messages in the queue) is reached. Batch size can be configured using the batchSize property.
  • Maximum Payload limit is reached. Maximum payload size is 6 MB.
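A sketch of processing an SQS batch inside the batch job (the record fields follow the standard SQS event shape; the simulated event below stands in for real trigger data):

```javascript
// SQS deliveries arrive as a batch: the trigger event contains a Records
// array, each record carrying one message body.
const processBatch = (rawEvent) => {
  const event = JSON.parse(rawEvent);
  return event.Records.map((record) => {
    // Message payloads are commonly JSON-encoded strings.
    const message = JSON.parse(record.body);
    // ... process the message ...
    return message.id;
  });
};

// Simulated trigger event containing two messages:
const rawSqsEvent = JSON.stringify({
  Records: [{ body: JSON.stringify({ id: 1 }) }, { body: JSON.stringify({ id: 2 }) }],
});
console.log(processBatch(rawSqsEvent)); // [ 1, 2 ]
```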

resources:
  myBatchJob:
    type: batch-job
    properties:
      container:
        imageConfig:
          filePath: path/to/my/batch-job.ts
      resources:
        cpu: 2
        memory: 1800
      events:
        - type: sqs
          properties:
            queueArn: $Param('mySqsQueue', 'Arn')

cloudformationResources:
  mySqsQueue:
    Type: AWS::SQS::Queue

SqsIntegration  API reference
Parent API reference: BatchJob
type
Required

Type of the event integration

Type: string "sqs"

properties.queueArn
Required

Arn of sqs queue from which function consumes messages.

Type: string


properties.batchSize
Default: 10

Configures how many records to collect in a batch before the workload is invoked.

Type: number

  • Maximum 10,000

properties.maxBatchWindowSeconds

Configures maximum amount of time (in seconds) to gather records before invoking the workload

Type: number

  • By default, the batch window is not configured
  • Maximum 300 seconds

Kinesis event

The batch job is triggered whenever there are messages in the specified Kinesis Stream.

  • Messages are processed in batches.
  • If the stream contains multiple messages, the batch job is invoked with multiple messages in its payload.
  • To add a custom Kinesis stream to your stack, simply add a CloudFormation resource to the cloudformationResources section of your config.
  • Similarly to SQS, Kinesis is used to process messages in batches. To learn the differences, refer to AWS Docs

Batching behavior can be configured. The batch job is triggered when any of the following things happen:

  • Batch window expires. Batch window can be configured using maxBatchWindowSeconds property.
  • Maximum Batch size (amount of messages in the queue) is reached. Batch size can be configured using the batchSize property.
  • Maximum Payload limit is reached. Maximum payload size is 6 MB.

Consuming messages from a Kinesis stream can be done in 2 ways:

  • Consuming directly from the stream - polling each shard in your Kinesis stream for records once per second. Read throughput of the kinesis shard is shared with other stream consumers.
  • Consuming using a stream consumer - To minimize latency and maximize read throughput, use "stream consumer" with enhanced fan-out. Enhanced fan-out consumers get a dedicated connection to each shard that doesn't impact other applications reading from the stream. You can either pass reference to the consumer using consumerArn property, or you can let Stacktape auto-create consumer using autoCreateConsumer property.

resources:
  myBatchJob:
    type: batch-job
    properties:
      container:
        imageConfig:
          filePath: path/to/my/batch-job.ts
      resources:
        cpu: 2
        memory: 1800
      events:
        - type: kinesis
          properties:
            autoCreateConsumer: true
            maxBatchWindowSeconds: 30
            batchSize: 200
            streamArn: $Param('myKinesisStream', 'Arn')
            onFailure:
              arn: $Param('myOnFailureSqsQueue', 'Arn')
              type: sqs

cloudformationResources:
  myKinesisStream:
    Type: AWS::Kinesis::Stream
    Properties:
      ShardCount: 1
  myOnFailureSqsQueue:
    Type: AWS::SQS::Queue

KinesisIntegration  API reference
Parent API reference: BatchJob
type
Required

Type of the event integration

Type: string "kinesis"

properties.streamArn
Required

Arn of Kinesis stream from which function consumes records.

Type: string

properties.consumerArn

Arn of the consumer which will be used by integration.

Type: string

  • This parameter CANNOT be used in combination with autoCreateConsumer.

properties.autoCreateConsumer

Specifies whether to create separate consumer for this integration

Type: boolean

  • Specifies whether Stacktape creates the consumer for this integration
  • Using a consumer can help minimize latency and maximize read throughput
  • To learn more about stream consumers, refer to AWS Docs
  • This parameter CANNOT be used in combination with consumerArn.

properties.maxBatchWindowSeconds

Configures maximum amount of time (in seconds) to gather the records before invoking the workload

Type: number

  • By default, the batch window is not configured
  • Maximum 300 seconds

properties.batchSize

Configures how many records to collect in a batch before the workload is invoked.

Type: number

  • Maximum 10,000
  • Default: 10

properties.startingPosition
Default: "TRIM_HORIZON"

Specifies position in the stream from which to start reading.

Type: string ENUM

Possible values: LATEST, TRIM_HORIZON

Available values are:

  • LATEST - Read only new records.
  • TRIM_HORIZON - Process all available records

properties.maximumRetryAttempts

Configures the number of times failed "record batches" are retried

Type: number

  • If the workload fails, the entire batch of records is retried (not only the failed ones). This means that even the records that you processed successfully can get retried. You should implement your function with idempotency in mind.
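One common way to make batch processing idempotent is to track already-processed record identifiers and skip them when a retried batch arrives. A sketch (using an in-memory set for illustration; a real job would use a durable store such as a database table):

```javascript
// Idempotent record processing sketch: a retried batch may contain records
// that already succeeded, so track processed ids and skip duplicates.
const processedIds = new Set(); // stand-in for a durable store

const processRecords = (records, handleRecord) => {
  for (const record of records) {
    if (processedIds.has(record.id)) continue; // already done - skip
    handleRecord(record);
    processedIds.add(record.id);
  }
};

const seen = [];
processRecords([{ id: 'a' }, { id: 'b' }], (r) => seen.push(r.id));
// A retried batch containing 'b' again does not re-process it:
processRecords([{ id: 'b' }, { id: 'c' }], (r) => seen.push(r.id));
console.log(seen); // [ 'a', 'b', 'c' ]
```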

properties.onFailure

Configures the on-failure destination for failed record batches

Type: DestinationOnFailure

  • SQS queue or SNS topic

properties.parallelizationFactor

Allows you to process more than one shard of the stream simultaneously

Type: number

properties.bisectBatchOnFunctionError

If the workload returns an error, split the batch in two before retrying.

Type: boolean

  • This can help when the failure happens because the batch is too large to be processed successfully.
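
Combining the options above, a kinesis event integration with tuned batching and failure handling might look like this (a sketch — the stream and queue ARNs are illustrative placeholders, not references to resources in this guide):

```yml
events:
  - type: kinesis
    properties:
      # illustrative ARN - reference your own stream here
      streamArn: arn:aws:kinesis:eu-west-1:123456789012:stream/my-stream
      autoCreateConsumer: true
      maxBatchWindowSeconds: 30
      batchSize: 1000
      startingPosition: LATEST
      maximumRetryAttempts: 2
      bisectBatchOnFunctionError: true
      onFailure:
        # illustrative ARN of an SQS queue receiving failed batches
        arn: arn:aws:sqs:eu-west-1:123456789012:my-failure-queue
        type: sqs
```

Because the whole batch is retried on failure, pairing maximumRetryAttempts with an onFailure destination keeps poison records from blocking the stream indefinitely.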

DynamoDb event

The batch job is triggered whenever there are processable records in the specified DynamoDB stream.

  • DynamoDB stream captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours.
  • Records from the stream are processed in batches. This means that multiple records are included in a single batch job invocation.
  • DynamoDB stream must be enabled in a DynamoDB table definition. Learn how to enable streams in dynamo-table docs

resources:
  myDynamoDbTable:
    type: dynamo-db-table
    properties:
      primaryKey:
        partitionKey:
          attributeName: id
          attributeType: string
      dynamoStreamType: NEW_AND_OLD_IMAGES

  myBatchJob:
    type: batch-job
    properties:
      container:
        imageConfig:
          filePath: path/to/my/batch-job.ts
      resources:
        cpu: 2
        memory: 1800
      events:
        - type: dynamo-db
          properties:
            streamArn: $Param('myDynamoDbTable', 'DynamoTable::StreamArn')
            # OPTIONAL
            batchSize: 200

DynamoDbIntegration  API reference
Parent API reference: BatchJob
type
Required

Type of the event integration

Type: string "dynamo-db"

properties.streamArn
Required

Arn of the DynamoDb table stream from which the workload consumes records.

Type: string

properties.maxBatchWindowSeconds

Configures the maximum amount of time (in seconds) to gather records before invoking the workload

Type: number

  • By default, the batch window is not configured

properties.batchSize
Default: 100

Configures how many records to collect in a batch, before the workload is invoked.

Type: number

  • Maximum 1000

properties.startingPosition
Default: "TRIM_HORIZON"

Specifies position in the stream from which to start reading.

Type: string

Available values are:

  • LATEST - Read only new records.
  • TRIM_HORIZON - Process all available records

properties.maximumRetryAttempts

Configures the number of times failed "record batches" are retried

Type: number

  • If the workload fails, the entire batch of records is retried (not only the failed ones). This means that even the records that you processed successfully can get retried. You should implement your function with idempotency in mind.

properties.onFailure

Configures the on-failure destination for failed record batches

Type: DestinationOnFailure

  • SQS queue or SNS topic

properties.parallelizationFactor

Allows you to process more than one shard of the stream simultaneously

Type: number

properties.bisectBatchOnFunctionError

If the workload returns an error, split the batch in two before retrying.

Type: boolean

  • This can help when the failure happens because the batch is too large to be processed successfully.

S3 event

The batch job is triggered when a specified event occurs in your bucket.

  • Supported events are listed in the s3EventType API Reference.

  • To learn more about the event types, refer to AWS Docs.

resources:
  myBucket:
    type: bucket

  myBatchJob:
    type: batch-job
    properties:
      container:
        imageConfig:
          filePath: path/to/my/batch-job.ts
      resources:
        cpu: 2
        memory: 1800
      events:
        - type: s3
          properties:
            bucketArn: $Param('myBucket', 'Bucket::Arn')
            s3EventType: 's3:ObjectCreated:*'
            filterRule:
              prefix: order-
              suffix: .jpg

S3Integration  API reference
Parent API reference: BatchJob
type
Required

Type of the event integration

Type: string "s3"

properties.bucketArn
Required

Arn of the S3 bucket whose events can invoke the workload

Type: string

properties.s3EventType
Required

Specifies which event types invoke the workload

Type: string ENUM

Possible values: s3:ObjectCreated:*, s3:ObjectCreated:CompleteMultipartUpload, s3:ObjectCreated:Copy, s3:ObjectCreated:Post, s3:ObjectCreated:Put, s3:ObjectRemoved:*, s3:ObjectRemoved:Delete, s3:ObjectRemoved:DeleteMarkerCreated, s3:ObjectRestore:*, s3:ObjectRestore:Completed, s3:ObjectRestore:Post, s3:ReducedRedundancyLostObject, s3:Replication:*, s3:Replication:OperationFailedReplication, s3:Replication:OperationMissedThreshold, s3:Replication:OperationNotTracked, s3:Replication:OperationReplicatedAfterThreshold

properties.filterRule

Allows you to filter the objects that can invoke the workload

Type: S3FilterRule

S3FilterRule  API reference
Parent API reference: S3Integration
prefix

Prefix of the object which can invoke function

Type: string

suffix

Suffix of the object which can invoke function

Type: string

Cloudwatch Log event

The batch job is triggered when a log record arrives to the specified log group.

  • Event payload arriving to the batch job is BASE64 encoded and has the following format: { "awslogs": { "data": "BASE64ENCODED_GZIP_COMPRESSED_DATA" } }
  • To access the log data, the event payload needs to be decoded and decompressed first.

resources:
  myLogProducingLambda:
    type: function
    properties:
      packageConfig:
        filePath: lambdas/log-producer.ts

  myBatchJob:
    type: batch-job
    properties:
      container:
        imageConfig:
          filePath: path/to/my/batch-job.ts
      resources:
        cpu: 2
        memory: 1800
      events:
        - type: cloudwatch-log
          properties:
            logGroupArn: $Param('myLogProducingLambda', 'LogGroup::Arn')
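
The decoding step described above can be sketched in the batch job's code. A minimal example (the event shape follows the payload format shown above; the function name is illustrative):

```typescript
import * as zlib from "zlib";

// Payload shape delivered to the batch job, as described above:
// { "awslogs": { "data": "BASE64ENCODED_GZIP_COMPRESSED_DATA" } }
interface CloudwatchLogsEvent {
  awslogs: { data: string };
}

// BASE64-decode and gunzip the payload, then parse the JSON log data
export function decodeLogPayload(event: CloudwatchLogsEvent): any {
  const compressed = Buffer.from(event.awslogs.data, "base64");
  const json = zlib.gunzipSync(compressed).toString("utf8");
  return JSON.parse(json);
}
```

The decoded object contains the log group, log stream, and the individual log events.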

CloudwatchLogIntegration  API reference
Parent API reference: BatchJob
type
Required

Type of the event integration

Type: string "cloudwatch-log"

properties.logGroupArn
Required

Arn of the watched Log group

Type: string

properties.filter

Allows you to filter the logs that invoke the workload based on a pattern

Type: string

  • To learn more about the filter pattern, refer to AWS Docs

Application Load Balancer event

The batch job is triggered when the specified Application Load Balancer receives an HTTP request that matches the integration's conditions.

  • You can filter requests based on HTTP Method, Path, Headers, Query parameters, and IP Address.

resources:
  # load balancer which routes traffic to the function
  myLoadBalancer:
    type: application-load-balancer
    properties:
      listeners:
        - port: 80
          protocol: HTTP

  myBatchJob:
    type: batch-job
    properties:
      container:
        imageConfig:
          filePath: path/to/my/batch-job.ts
      resources:
        cpu: 2
        memory: 1800
      events:
        - type: application-load-balancer
          properties:
            # referencing load balancer defined above
            priority: 1
            loadBalancerName: myLoadBalancer
            listenerPort: 80
            paths:
              - /invoke-my-lambda
              - /another-path

LoadBalancerIntegration  API reference
Parent API reference: BatchJob
type
Required

Type of the event integration

Type: string "application-load-balancer"

properties.loadBalancerName
Required

Name of the Load balancer

Type: string

properties.listenerPort
Required

Port of the Load balancer listener

Type: number

properties.priority
Required

Priority of the integration

Type: number

  • Load balancers evaluate integrations according to priority.
  • If multiple event integrations match the same conditions (paths, methods ...), the request will be forwarded to the event integration with the highest priority.

properties.paths

List of URL paths that the request should match to be routed by this event integration

Type: Array of string

  • The condition is satisfied if any of the paths matches the request URL
  • The maximum size is 128 characters
  • The comparison is case sensitive

The following patterns are supported:

  • basic URL path, i.e. /post
  • * - wildcard (matches 0 or more characters)
  • ? - wildcard (matches exactly 1 character)

properties.methods

List of HTTP methods that the request should match to be routed by this event integration

Type: Array of string

properties.hosts

List of hostnames that the request should match to be routed by this event integration

Type: Array of string

  • Hostname is parsed from the host header of the request

The following wildcard patterns are supported:

  • * - wildcard (matches 0 or more characters)
  • ? - wildcard (matches exactly 1 character)

properties.headers

List of header conditions that the request should match to be routed by this event integration

Type: Array of LbHeaderCondition

  • All conditions must be satisfied.

properties.queryParams

List of query parameters conditions that the request should match to be routed by this event integration

Type: Array of LbQueryParamCondition

  • All conditions must be satisfied.

properties.sourceIps

List of IP addresses that the request should match to be routed by this event integration

Type: Array of string

  • IP addresses must be in a CIDR format.
  • If a client is behind a proxy, this is the IP address of the proxy, not the IP address of the client.
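
As a sketch, an integration combining several of the conditions above might look like this (the header, query parameter, and CIDR values are illustrative):

```yml
events:
  - type: application-load-balancer
    properties:
      loadBalancerName: myLoadBalancer
      listenerPort: 80
      priority: 2
      methods:
        - POST
      paths:
        - /jobs/*
      headers:
        - headerName: x-api-key   # illustrative header
          values:
            - my-secret-key
      queryParams:
        - paramName: source       # illustrative parameter
          values:
            - scheduler
      sourceIps:
        - 10.0.0.0/16
```

All configured condition types (methods, paths, headers, queryParams, sourceIps) must be satisfied for the request to trigger the batch job.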

Accessing other resources

  • For most of the AWS resources, resource-to-resource communication is not allowed by default. This helps to enforce security and resource isolation. Access must be explicitly granted using IAM (Identity and Access Management) permissions.

  • Access control of Relational Databases is not managed by IAM. These resources are not "cloud-native" by design and have their own access control mechanism (connection string with username and password). They are accessible by default, and you don't need to grant any extra IAM permissions. You can further restrict the access to your relational databases by configuring their access control mode.

  • Stacktape automatically handles IAM permissions for the underlying AWS services that it creates (i.e. granting functions permission to write logs to Cloudwatch, allowing functions to communicate with their event source and many others).

If your workload needs to communicate with other infrastructure components, you need to add permissions manually. You can do this in 2 ways:

Using allowAccessTo

  • List of resource names that this workload will be able to access (basic IAM permissions will be granted automatically). Granted permissions differ based on the resource.
  • Works only for resources managed by Stacktape (not arbitrary Cloudformation resources)
  • This is useful if you don't want to deal with IAM permissions yourself. Handling permissions using raw IAM role statements can be cumbersome, time-consuming and error-prone.

resources:
  photosBucket:
    type: bucket

  myBatchJob:
    type: batch-job
    properties:
      container:
        imageConfig:
          filePath: path/to/my/batch-job.ts
      accessControl:
        allowAccessTo:
          - photosBucket


Granted permissions:

Bucket

  • list objects in a bucket
  • create / get / delete / tag object in a bucket

DynamoDb Table

  • get / put / update / delete item in a table
  • scan / query a table
  • describe table stream

MongoDb Atlas Cluster

  • Allows connection to a cluster with accessibilityMode set to scoping-workloads-in-vpc. To learn more about MongoDb Atlas clusters accessibility modes, refer to MongoDB Atlas cluster docs.

Relational database

  • Allows connection to a relational database with accessibilityMode set to scoping-workloads-in-vpc. To learn more about relational database accessibility modes, refer to Relational databases docs.

Redis cluster

  • Allows connection to a redis cluster with accessibilityMode set to scoping-workloads-in-vpc. To learn more about redis cluster accessibility modes, refer to Redis clusters docs.

Event bus

  • publish events to the specified Event bus

Function

  • invoke the specified function

Batch job

  • submit batch-job instance into batch-job queue
  • list submitted job instances in a batch-job queue
  • describe / terminate a batch-job instance
  • list executions of state machine which executes the batch-job according to its strategy
  • start / terminate execution of a state machine which executes the batch-job according to its strategy

Using iamRoleStatements

  • List of raw IAM role statement objects. These will be appended to the workload's role.
  • Allows you to set granular control over your workload's permissions.
  • Can be used to give access to any Cloudformation resource

resources:
  myBatchJob:
    type: batch-job
    properties:
      resources:
        cpu: 2
        memory: 1800
      container:
        imageConfig:
          filePath: path/to/my/batch-job.ts
      accessControl:
        iamRoleStatements:
          - Resource:
              - $Param('NotificationTopic', 'Arn')
            Effect: Allow
            Action:
              - 'sns:Publish'

cloudformationResources:
  NotificationTopic:
    Type: AWS::SNS::Topic

Default VPC connection

Pricing

  • You are charged for the instances running in your batch job compute environment.
  • Instance sizes are automatically chosen to best suit the needs of your batch jobs.
  • You are charged only for the time your batch job runs. After your batch job finishes processing, the instances are automatically killed.
  • Price depends on region and instance used. (https://aws.amazon.com/ec2/pricing/on-demand/)
  • You can use spot instances to save costs. These instances can be up to 90% cheaper. (https://aws.amazon.com/ec2/spot/pricing/)
  • You also pay a negligible price for the lambda functions and state machines used to manage the execution and integrations of your batch job.

API reference

BatchJob  API reference
type
Required

No description

Type: string "batch-job"

properties.container
Required

Configures properties of the Docker container that will run in this batch job.

Type: BatchJobContainer

properties.resources
Required

Configures computing resources for this batch job.

Type: BatchJobResources

properties.timeout

Maximum number of seconds the batch job is allowed to run.

Type: number

  • When the timeout is reached, the batch job will be stopped.
  • If the batch job fails and maximum attempts are not yet exhausted, it will be retried.

properties.useSpotInstances
Default: false

Configures the batch job to use spot instances

Type: boolean

  • Batch jobs can be configured to use spot instances.
  • Spot instances leverage AWS's spare computing capacity and can cost up to 90% less than "onDemand" (normal) instances.
  • However, your batch job can be interrupted at any time if AWS needs the capacity back. When this happens, your batch job receives a SIGTERM signal and you then have 120 seconds to save your progress or clean up.
  • Interruptions are usually infrequent as can be seen in the AWS Spot instance advisor.
  • To learn more about spot instances, refer to AWS Docs.

properties.logging

Configures logging behavior for the batch job

Type: BatchJobLogging

  • Container logs (stdout and stderr) are automatically sent to a pre-created CloudWatch log group.
  • By default, logs are retained for 180 days.
  • You can browse your logs in 2 ways:
    • go to the log group page in the AWS CloudWatch console. You can use stacktape stack-info command to get a direct link.
    • use stacktape logs command to print logs to the console

properties.retryConfig

Configures retries for the batch job

Type: BatchJobRetryConfiguration

  • If the batch job exits with a non-zero exit code (due to internal failure, timeout, spot instance interruption by AWS, etc.) and attempts are not exhausted, it can be retried.
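
A sketch combining the timeout, spot instance, and retry options described above (the attempts property name inside retryConfig is an assumption, since BatchJobRetryConfiguration's fields are not listed here):

```yml
resources:
  myBatchJob:
    type: batch-job
    properties:
      container:
        imageConfig:
          filePath: path/to/my/batch-job.ts
      resources:
        cpu: 2
        memory: 1800
      timeout: 3600          # stop the job after 1 hour
      useSpotInstances: true # up to ~90% cheaper, but may be interrupted
      retryConfig:
        attempts: 3          # property name is an assumption
```

Retries are what make spot instances practical: if the job is interrupted, it exits with a non-zero code and is resubmitted while attempts remain.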

properties.events

Configures events (triggers) that will trigger the execution of this batch job.

Type: Array of (LoadBalancerIntegration or SnsIntegration or SqsIntegration or KinesisIntegration or DynamoDbIntegration or S3Integration or ScheduleIntegration or CloudwatchLogIntegration or HttpApiIntegration or EventBusIntegration)

  • Triggering of batch jobs leverages trigger functions (special purpose lambda functions).
  • Event integrations are attached to the trigger function

properties.accessControl

Configures access to other resources of your stack (such as relational-databases, buckets, event-buses, etc.).

Type: AccessControl

overrides

Overrides one or more properties of the specified child resource.

Type: Object

  • Child resources are specified using their descriptive name (e.g. DbInstance or Events.0.HttpApiRoute).
  • To see all configurable child resources for a given Stacktape resource, use the stacktape stack-info --detailed command.
  • To see the list of properties that can be overridden, refer to AWS Cloudformation docs.

DockerBuildArg  API reference
Parent API reference: BatchJobDockerfileBasedImage
argName
Required

Argument name

Type: string

value
Required

Argument value

Type: string

CognitoAuthorizer  API reference
Parent API reference: HttpApiIntegration
type
Required

No description

Type: string "cognito"

properties.userPoolName
Required

No description

Type: string

properties.identitySources

No description

Type: Array of string

LambdaAuthorizer  API reference
Parent API reference: HttpApiIntegration
type
Required

No description

Type: string "lambda"

properties.lambdaName
Required

No description

Type: string

properties.iamResponse

No description

Type: boolean

properties.identitySources

No description

Type: Array of string

properties.cacheResultSeconds

No description

Type: number

EventInputTransformer  API reference
Parent API reference: (EventBusIntegration or ScheduleIntegration)
inputTemplate
Required

No description

Type: string

inputPathsMap

No description

Type: UNSPECIFIED

EventBusIntegrationPattern  API reference
Parent API reference: EventBusIntegration
version

No description

Type: UNSPECIFIED

detail-type

No description

Type: UNSPECIFIED

source

No description

Type: UNSPECIFIED

account

No description

Type: UNSPECIFIED

region

No description

Type: UNSPECIFIED

resources

No description

Type: UNSPECIFIED

detail

No description

Type: UNSPECIFIED

replay-name

No description

Type: UNSPECIFIED

SnsOnDeliveryFailure  API reference
Parent API reference: SnsIntegration
sqsQueueArn
Required

Arn of the SQS queue

Type: string

sqsQueueUrl
Required

Url of the SQS queue

Type: string

DestinationOnFailure  API reference
Parent API reference: (DynamoDbIntegration or KinesisIntegration)
arn
Required

Arn of the SNS topic or SQS queue into which failed record batches are sent

Type: string

type
Required

Type of the destination

Type: string ENUM

Possible values: sns, sqs

S3FilterRule  API reference
Parent API reference: S3Integration
prefix

Prefix of the object which can invoke function

Type: string

suffix

Suffix of the object which can invoke function

Type: string

LbHeaderCondition  API reference
Parent API reference: LoadBalancerIntegration
headerName
Required

Header name

Type: string

values
Required

List of header values

Type: Array of string

  • The Condition is satisfied if at least one of the request headers matches the values in this list.

LbQueryParamCondition  API reference
Parent API reference: LoadBalancerIntegration
paramName
Required

Name of the query parameter

Type: string

values
Required

List of query parameter values

Type: Array of string

  • The Condition is satisfied if at least one of the request query parameters matches the values in this list.

ContainerLanguageSpecificConfig  API reference
Parent API reference: FilePathBasedImage
requiresGlibcBinaries

Builds image with support for glibc-based binaries

Type: boolean

  • You can use this option to add support for glibc-based native dependencies.
  • This means that Stacktape will use a different (and significantly larger) base image for your container.
  • Stacktape uses alpine Docker images by default. These images use musl instead of glibc.
  • Packages with C-based binaries compiled using glibc don't work with musl.

tsConfigPath

Path to tsconfig.json file to use.

Type: string

This is used mostly to resolve path aliases.

emitTsDecoratorMetadata

Emits decorator metadata to the final bundle.

Type: boolean

  • This is used by frameworks like NestJS or ORMs like TypeORM.
  • This is not turned on by default, because it can slow down the build process.

dependenciesToExcludeFromBundle

Dependencies to exclude from main bundle.

Type: Array of string

  • These dependencies will be treated as external and won't be statically built into the main bundle
  • Instead, they will be installed and copied to the deployment package.
  • Using * means all of the workload's dependencies will be treated as external
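
A sketch of how these options might be set on a batch job's image config (the languageSpecificConfig property name and its placement under imageConfig are assumptions based on the parent type FilePathBasedImage; the excluded dependency is illustrative):

```yml
resources:
  myBatchJob:
    type: batch-job
    properties:
      container:
        imageConfig:
          filePath: path/to/my/batch-job.ts
          # property name below is an assumption
          languageSpecificConfig:
            requiresGlibcBinaries: true
            tsConfigPath: path/to/tsconfig.json
            emitTsDecoratorMetadata: true
            dependenciesToExcludeFromBundle:
              - sharp   # illustrative glibc-based native dependency
      resources:
        cpu: 2
        memory: 1800
```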

KinesisIntegration  API reference
Parent API reference: BatchJob
type
Required

Type of the event integration

Type: string "kinesis"

properties.streamArn
Required

Arn of the Kinesis stream from which the workload consumes records.

Type: string

properties.consumerArn

Arn of the consumer that will be used by the integration.

Type: string

  • This parameter CAN NOT be used in combination with autoCreateConsumer

properties.autoCreateConsumer

Specifies whether to create a separate consumer for this integration

Type: boolean

  • Specifies whether Stacktape creates the consumer for this integration
  • Using a consumer can help minimize latency and maximize read throughput
  • To learn more about stream consumers, refer to AWS Docs
  • This parameter CAN NOT be used in combination with consumerArn

properties.maxBatchWindowSeconds

Configures the maximum amount of time (in seconds) to gather records before invoking the workload

Type: number

  • By default batch window is not configured
  • Maximum 300 seconds

properties.batchSize

Configures how many records to collect in a batch before the workload is invoked.

Type: number

  • Maximum 10,000
  • Default: 10

properties.startingPosition
Default: "TRIM_HORIZON"

Specifies position in the stream from which to start reading.

Type: string ENUM

Possible values: LATEST, TRIM_HORIZON

Available values are:

  • LATEST - Read only new records.
  • TRIM_HORIZON - Process all available records

properties.maximumRetryAttempts

Configures the number of times failed "record batches" are retried

Type: number

  • If the workload fails, the entire batch of records is retried (not only the failed ones). This means that even the records that you processed successfully can get retried. You should implement your function with idempotency in mind.

properties.onFailure

Configures the on-failure destination for failed record batches

Type: DestinationOnFailure

  • SQS queue or SNS topic

properties.parallelizationFactor

Allows you to process more than one shard of the stream simultaneously

Type: number

properties.bisectBatchOnFunctionError

If the workload returns an error, split the batch in two before retrying.

Type: boolean

  • This can help when the failure happens because the batch is too large to be processed successfully.

EnvironmentVar  API reference
Parent API reference: BatchJobContainer
name
Required

Name of the environment variable

Type: string

value
Required

Value of the environment variable

Type: (string or number or boolean)
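
A sketch of passing environment variables to the batch job container (the environment property name is an assumption based on the parent type BatchJobContainer; the $Param reference is illustrative):

```yml
resources:
  myBatchJob:
    type: batch-job
    properties:
      container:
        imageConfig:
          filePath: path/to/my/batch-job.ts
        # property name below is an assumption
        environment:
          - name: STAGE
            value: production
          - name: DATABASE_URL
            # illustrative reference to another resource's output
            value: $Param('myDatabase', 'ConnectionString')
      resources:
        cpu: 2
        memory: 1800
```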

AccessControl  API reference
Parent API reference: BatchJob
iamRoleStatements

Raw AWS IAM role statements appended to your resource's role.

Type: Array of StpIamRoleStatement

allowAccessTo

Names of the resources that will receive basic permissions.

Type: Array of string

Granted permissions:

Bucket

  • list objects in a bucket
  • create / get / delete / tag object in a bucket

DynamoDb Table

  • get / put / update / delete item in a table
  • scan / query a table
  • describe table stream

MongoDb Atlas Cluster

  • Allows connection to a cluster with accessibilityMode set to scoping-workloads-in-vpc. To learn more about MongoDb Atlas clusters accessibility modes, refer to MongoDB Atlas cluster docs.

Relational database

  • Allows connection to a relational database with accessibilityMode set to scoping-workloads-in-vpc. To learn more about relational database accessibility modes, refer to Relational databases docs.

Redis cluster

  • Allows connection to a redis cluster with accessibilityMode set to scoping-workloads-in-vpc. To learn more about redis cluster accessibility modes, refer to Redis clusters docs.

Event bus

  • publish events to the specified Event bus

Function

  • invoke the specified function

Batch job

  • submit batch-job instance into batch-job queue
  • list submitted job instances in a batch-job queue
  • describe / terminate a batch-job instance
  • list executions of state machine which executes the batch-job according to its strategy
  • start / terminate execution of a state machine which executes the batch-job according to its strategy

StpIamRoleStatement  API reference
Parent API reference: AccessControl
Resource
Required

List of resources we want to access

Type: Array of string

  • See AWS reference here.

Sid

Statement identifier.

Type: string

  • See AWS reference here.

Effect

Effect of the statement

Type: string

  • See AWS reference here.

Action

List of actions allowed/denied by the statement

Type: Array of string

See AWS reference here.

Condition

No description

Type: UNSPECIFIED