Batch Job

This page shows a basic batch job configuration, followed by the available packaging, log-forwarding, and event (trigger) alternatives.

Batch job resource

  • Fully managed, on-demand runtime for your container jobs with pay-per-use pricing.
  • Supports GPU compute resources.

Basic example

resources:
myBatchJob:
type: batch-job
properties:
# Configures properties of the batch job Docker container.
#
# - Type: object
# - Required: true
container:
# Configures what image to use for the batch job container
#
# - Type: union (anyOf)
# - Required: true
#
# - Type: object
# - Required: true
packaging:
#
# - Type: string
# - Required: true
type: stacktape-image-buildpack
# Configures properties for the image automatically built by Stacktape from the source code.
#
# - Type: object
# - Required: true
properties:
# Path to the entry point of your compute resource (relative to the stacktape config file)
#
# - Stacktape tries to bundle all your source code with its dependencies into a single file.
# - If a certain dependency doesn't support static bundling (because it depends on a binary executable, uses dynamic require() calls, etc.),
# Stacktape will install it and copy it to the bundle
#
# - Type: string
# - Required: true
entryfilePath: ./src/index.ts
# Configuration of packaging properties specific to given language
#
# - Type: union (anyOf)
# - Required: false
# Builds image with support for glibc-based binaries
#
# - You can use this option to add support for glibc-based native dependencies.
# - This means that Stacktape will use different (and significantly larger) base-image for your container.
# - Stacktape uses alpine Docker images by default. These images use musl, instead of glibc.
# - Packages with C-based binaries compiled using glibc don't work with musl.
#
# - Type: boolean
# - Required: false
requiresGlibcBinaries: true
# List of commands to be executed during docker image building.
#
# - This property enables you to execute custom commands in your container during image building.
# - Commands are executed using docker `RUN` directive.
# - Commands can be used to install required additional dependencies into your container.
#
# - Type: array<string>
# - Required: false
customDockerBuildCommands:
- apt-get update && apt-get install -y curl
- npm install -g pm2
# Files that should be explicitly included in the deployment package (glob pattern)
#
# - Example glob pattern: `images/*.jpg`
# - The path is relative to the stacktape configuration file location or to `cwd` if configured using `--currentWorkingDirectory` command line option.
#
# - Type: array<string>
# - Required: false
includeFiles:
- public/**/*
- assets/*.png
# Files that should be explicitly excluded from deployment package (glob pattern)
#
# Example glob pattern: `images/*.jpg`
#
# - Type: array<string>
# - Required: false
excludeFiles:
- '*.test.ts'
- node_modules/**
# Dependencies to ignore.
#
# - These dependencies won't be a part of your deployment package.
#
# - Type: array<string>
# - Required: false
excludeDependencies:
- example-value
# Environment variables injected to the batch job container at runtime
#
# - Environment variables are often used to inject information about other parts of the infrastructure (such as database URLs, secrets, etc.).
#
# - Type: array<object (reference)>
# - Required: false
environment:
- name: NODE_ENV
value: production
- name: DATABASE_URL
value: $ResourceParam(myDatabase, connectionString)
# Configures computing resources for this batch job.
#
# - Use this property to select the amount of CPU, memory, and GPU your job needs.
# - Based on these needs, a suitable instance is chosen to run the job at runtime.
#
# - Type: object
# - Required: true
resources:
# Amount of virtual CPUs accessible to the batch job
#
# - Type: number
# - Required: true
cpu: 0.5
# Amount of memory accessible to the batch job
#
# > If you define memory required for your batch-job in multiples of 1024 be aware:
# > Your self managed environment might spin up instances that are much bigger than
# > expected. This can happen because the instances in your environment need memory to handle the
# > `management processes` (managed by AWS) associated with running the batch job.
# > **Example:** If you define 8192 memory for your batch-job, you might expect
# > that the self managed environment will primarily try to spin up
# > [one of the instances from used families](https://docs.stacktape.com/compute-resources/batch-jobs/#computing-resources) with memory
# > 8GiB(8192MB). However, the self managed environment knows that instance with
# > such memory would not be sufficient for both the batch job and management
# > processes. As a result, it will try to spin up a bigger instance. To learn more about this issue, refer to
# > [AWS Docs](https://docs.aws.amazon.com/batch/latest/userguide/memory-management.html#ecs-reserved-memory)
# > Due to this behaviour, we advise specifying memory for your batch jobs carefully.
# > For example, instead of specifying 8192, consider specifying a slightly lower value such as 7680. This
# > way the self managed environment will be able to use instances with 8GiB
# > (8192MB) of memory, which can lead to cost savings.
#
# - Type: number
# - Required: true
memory: 2048
# Number of physical GPUs accessible to the batch job
#
# If you define GPUs, instances are chosen according to your need from the GPU accelerated families:
# - `p4d family`: uses Tesla A100 GPU. More in [AWS Docs](https://aws.amazon.com/ec2/instance-types/p4/)
# - `g5 family`: uses NVIDIA A10G GPU. More in [AWS Docs](https://aws.amazon.com/ec2/instance-types/g5/)
#
# - Type: number
# - Required: false
gpu: 100
# Maximum number of seconds the batch job is allowed to run.
#
# - When the timeout is reached, the batch job will be stopped.
# - If the batch job fails and maximum attempts are not exhausted, it will be retried.
#
# - Type: number
# - Required: false
timeout: 3600
# Configures the batch job to use spot instances
#
# - Batch jobs can be configured to use spot instances.
# - Spot instances leverage AWS's spare computing capacity and can cost up to 90% less than "onDemand" (normal) instances.
# - However, your batch job can be interrupted at any time, if AWS needs the capacity back. When this happens,
# your batch job receives a SIGTERM signal and you then have 120 seconds to save your progress or clean up.
# - Interruptions are usually infrequent as can be seen in the
# [AWS Spot instance advisor](https://aws.amazon.com/ec2/spot/instance-advisor/).
# - To learn more about spot instances, refer to [AWS Docs](https://aws.amazon.com/ec2/spot/use-case/batch/).
#
# - Type: boolean
# - Required: false
# - Default: false
useSpotInstances: false
# Configures logging behavior for the batch job
#
# - Container logs (stdout and stderr) are automatically sent to a pre-created CloudWatch log group.
# - By default, logs are retained for 180 days.
# - You can browse your logs in 2 ways:
# - go to the log group page in the AWS CloudWatch console. You can use `stacktape stack-info` command to get a
# direct link.
# - use [stacktape logs command](https://docs.stacktape.com/cli/commands/logs/) to print logs to the console
#
# - Type: object
# - Required: false
logging:
# Disables the collection of the container's application logs (stdout and stderr) to CloudWatch
#
# - Type: boolean
# - Required: false
# - Default: false
disabled: false
# Amount of days the logs will be retained in the log group
#
# - Type: enum: [1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, 3653]
# - Required: false
# - Default: 90
# - Allowed values: [1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, 3653]
retentionDays: 90
# Configures forwarding of logs to specified destination
#
# - Log forwarding is done using [Amazon Kinesis Data Firehose](https://aws.amazon.com/kinesis/data-firehose/) delivery stream.
# - When using log forwarding, you will incur costs based on the amount of data being transferred to the destination (~$0.03 per transferred GB).
# Refer to [AWS Kinesis Firehose Pricing](https://aws.amazon.com/kinesis/data-firehose/pricing/?nc=sn&loc=3) page to see details.
# - Currently supported destinations for logs:
# - `http-endpoint`
# - delivers logs to any HTTP endpoint.
# - The endpoint must follow [Firehose request and response specifications](https://docs.aws.amazon.com/firehose/latest/dev/httpdeliveryrequestresponse.html).
# (Many third-party vendors are compliant with these specifications out of the box.)
# - `datadog`
# - delivers logs to [Datadog](https://www.datadoghq.com/).
# - `highlight`
# - delivers logs to [Highlight.io](https://www.highlight.io/) project.
#
# Refer to [our docs](https://docs.stacktape.com/configuration/log-forwarding/) for more information.
#
# > Logs that fail to be delivered to the destination even after multiple retries (time spent on retries can be configured) are put into a bucket named `{stackName}-{resourceName}-logs-{generatedHash}`
#
# - Type: union (anyOf)
# - Required: false
#
# - Type: object
# - Required: false
logForwarding:
#
# - Type: string
# - Required: true
type: http-endpoint
#
# - Type: object
# - Required: true
properties:
# HTTPS endpoint where logs will be forwarded
#
# - Type: string
# - Required: true
endpointUrl: https://example.com
# Specifies whether to use GZIP compression for the request
#
# - When enabled, Firehose uses the content encoding to compress the body of a request before sending the request to the destination
#
# - Type: boolean
# - Required: false
gzipEncodingEnabled: true
# Parameters included in each call to HTTP endpoint
#
# - Key/Value pairs containing additional metadata you wish to send to the HTTP endpoint.
# - Parameters are delivered within the **X-Amz-Firehose-Common-Attributes** header as a JSON object with the following format: `{"commonAttributes":{"param1":"val1", "param2":"val2"}}`
#
# - Type: object
# - Required: false
# Amount of time spent on retries.
#
# - The total amount of time that Kinesis Data Firehose spends on retries.
# - This duration starts after the initial attempt to send data to the custom destination via HTTPS endpoint fails.
# - Logs that fail to be delivered to the HTTP endpoint even after multiple retries (time spent on retries can be configured) are put into a bucket named `{stackName}-{resourceName}-logs-{generatedHash}`
#
# - Type: number
# - Required: false
retryDuration: 100
# Access key (credentials), needed for authenticating with endpoint
#
# - The access key is carried within an **X-Amz-Firehose-Access-Key** header
# - The configured key is copied verbatim into the value of this header. The contents can be arbitrary and can potentially represent a JWT token or an ACCESS_KEY.
# - It is recommended to use [secret](https://docs.stacktape.com/resources/secrets/) for storing your access key.
#
# - Type: string
# - Required: false
accessKey: example-value
# Configures retries for the batch job
#
# - If the batch job exits with non-zero exit code (due to internal failure, timeout,
# spot instance interruption from AWS, etc.) and attempts are not exhausted, it can be retried.
#
# - Type: object
# - Required: false
retryConfig:
# Maximum number of times the batch job will try to execute before considered failed.
#
# - If the batch job exits with non-zero exit code (due to internal failure, timeout,
# spot instance interruption from AWS, etc.) and attempts are not exhausted, it will be retried.
# - When there are no attempts left, the batch job is considered failed.
#
# - Type: number
# - Required: false
# - Default: 1
attempts: 1
# Amount of time (in seconds) to wait between the attempts.
#
# - Type: number
# - Required: false
# - Default: 0
retryIntervalSeconds: 0
# Multiplier for `retryIntervalSeconds`
#
# - Every time the batch job is retried, the amount of time between the executions will be multiplied by this number.
# - This can be used to implement a backoff strategy.
#
# - Type: number
# - Required: false
# - Default: 1
retryIntervalMultiplier: 1
# Configures events (triggers) that will trigger the execution of this batch job.
#
# - Triggering of batch jobs leverages `trigger functions` (special purpose lambda functions).
# - Event integrations are attached to the `trigger function`
#
# - Type: array<union (anyOf)>
# - Required: false
events:
# Array of objects - see the `Events alternatives` section below for examples.
# Configures access to other resources of your stack (such as databases, buckets, event-buses, etc.) and aws services
#
# By referencing resources (or services) in `connectTo` list, Stacktape automatically:
# - configures correct compute resource's **IAM role permissions** if needed
# - sets up correct **security group rules** to allow access if needed
# - **injects relevant environment variables** containing information about resource you are connecting to into the compute resource's runtime
# - names of environment variables use upper-snake-case and are in the form `STP_[RESOURCE_NAME]_[VARIABLE_NAME]`,
# - examples: `STP_MY_DATABASE_CONNECTION_STRING` or `STP_MY_EVENT_BUS_ARN`,
# - list of injected variables for each resource type can be seen below.
#
#
# Granted permissions and injected environment variables are different depending on resource type:
#
#
# `Bucket`
# - **Permissions:**
# - list objects in a bucket
# - create / get / delete / tag object in a bucket
# - **Injected env variables**: `NAME`, `ARN`
#
#
# `DynamoDB table`
# - **Permissions:**
# - get / put / update / delete item in a table
# - scan / query a table
# - describe table stream
# - **Injected env variables**: `NAME`, `ARN`, `STREAM_ARN`
#
#
# `MongoDB Atlas cluster`
# - **Permissions:**
# - Allows connection to a cluster with `accessibilityMode` set to `scoping-workloads-in-vpc`. To learn more about
# MongoDB Atlas clusters accessibility modes, refer to
# [MongoDB Atlas cluster docs](https://docs.stacktape.com/3rd-party-resources/mongo-db-atlas-clusters/#accessibility).
# - Creates access "user" associated with compute resource's role to allow for secure credential-less access to the the cluster
# - **Injected env variables**: `CONNECTION_STRING`
#
#
# `Relational(SQL) database`
# - **Permissions:**
# - Allows connection to a relational database with `accessibilityMode` set to `scoping-workloads-in-vpc`. To learn more about
# relational database accessibility modes, refer to [Relational databases docs](https://docs.stacktape.com/resources/relational-databases#accessibility).
# - **Injected env variables**: `CONNECTION_STRING`, `JDBC_CONNECTION_STRING`, `HOST`, `PORT`
# (in case of aurora multi instance cluster additionally: `READER_CONNECTION_STRING`, `READER_JDBC_CONNECTION_STRING`, `READER_HOST`)
#
#
# `Redis cluster`
# - **Permissions:**
# - Allows connection to a redis cluster with `accessibilityMode` set to `scoping-workloads-in-vpc`. To learn more about
# redis cluster accessibility modes, refer to [Redis clusters docs](https://docs.stacktape.com/resources/redis-clusters#accessibility).
# - **Injected env variables**: `HOST`, `READER_HOST`, `PORT`
#
#
# `Event bus`
# - **Permissions:**
# - publish events to the specified Event bus
# - **Injected env variables**: `ARN`
#
#
# `Function`
# - **Permissions:**
# - invoke the specified function
# - invoke the specified function via url (if lambda has URL enabled)
# - **Injected env variables**: `ARN`
#
#
# `Batch job`
# - **Permissions:**
# - submit batch-job instance into batch-job queue
# - list submitted job instances in a batch-job queue
# - describe / terminate a batch-job instance
# - list executions of state machine which executes the batch-job according to its strategy
# - start / terminate execution of a state machine which executes the batch-job according to its strategy
# - **Injected env variables**: `JOB_DEFINITION_ARN`, `STATE_MACHINE_ARN`
#
#
# `User auth pool`
# - **Permissions:**
# - full control over the user pool (`cognito-idp:*`)
# - for more information about allowed methods refer to [AWS docs](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazoncognitouserpools.html)
# - **Injected env variables**: `ID`, `CLIENT_ID`, `ARN`
#
#
#
# `SNS Topic`
# - **Permissions:**
# - confirm/list subscriptions of the topic
# - publish/subscribe to the topic
# - unsubscribe from the topic
# - **Injected env variables**: `ARN`, `NAME`
#
#
#
# `SQS Queue`
# - **Permissions:**
# - send/receive/delete message
# - change visibility of message
# - purge queue
# - **Injected env variables**: `ARN`, `NAME`, `URL`
#
#
# `Upstash Kafka topic`
# - **Injected env variables**: `TOPIC_NAME`, `TOPIC_ID`, `USERNAME`, `PASSWORD`, `TCP_ENDPOINT`, `REST_URL`
#
#
# `Upstash Redis`
# - **Injected env variables**: `HOST`, `PORT`, `PASSWORD`, `REST_TOKEN`, `REST_URL`, `REDIS_URL`
#
#
# `Private service`
# - **Injected env variables**: `ADDRESS`
#
#
# `aws:ses`(Macro)
# - **Permissions:**
# - gives full permissions to aws ses (`ses:*`).
# - for more information about allowed methods refer to [AWS docs](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonses.html)
#
# - Type: array<string>
# - Required: false
connectTo:
- myDatabase
- myBucket
# Raw AWS IAM role statements appended to your resource's role.
#
# - Type: array<object (reference)>
# - Required: false
iamRoleStatements:
- Resource: ["example-value"]
Sid: example-value
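
For orientation, here is the same resource condensed to its core properties, with the nesting written out explicitly. The entry file path, the `myDatabase` resource, and the specific values are placeholders, not recommendations.

resources:
  myBatchJob:
    type: batch-job
    properties:
      container:
        packaging:
          type: stacktape-image-buildpack
          properties:
            entryfilePath: ./src/index.ts
        environment:
          - name: NODE_ENV
            value: production
      resources:
        cpu: 0.5
        memory: 2048
      timeout: 3600
      retryConfig:
        attempts: 2
      connectTo:
        - myDatabase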

Packaging alternatives

stacktape-image-buildpack

This example shows how to configure packaging using stacktape-image-buildpack.

resources:
myBatchJob:
type: batch-job
properties:
container:
# Configures what image to use for the batch job container
#
# - Type: object
# - Required: true
packaging:
#
# - Type: string
# - Required: true
type: stacktape-image-buildpack
# Configures properties for the image automatically built by Stacktape from the source code.
#
# - Type: object
# - Required: true
properties:
# Path to the entry point of your compute resource (relative to the stacktape config file)
#
# - Stacktape tries to bundle all your source code with its dependencies into a single file.
# - If a certain dependency doesn't support static bundling (because it depends on a binary executable, uses dynamic require() calls, etc.),
# Stacktape will install it and copy it to the bundle
#
# - Type: string
# - Required: true
entryfilePath: ./src/index.ts
# Configuration of packaging properties specific to given language
#
# - Type: union (anyOf)
# - Required: false
# Builds image with support for glibc-based binaries
#
# - You can use this option to add support for glibc-based native dependencies.
# - This means that Stacktape will use different (and significantly larger) base-image for your container.
# - Stacktape uses alpine Docker images by default. These images use musl, instead of glibc.
# - Packages with C-based binaries compiled using glibc don't work with musl.
#
# - Type: boolean
# - Required: false
requiresGlibcBinaries: true
# List of commands to be executed during docker image building.
#
# - This property enables you to execute custom commands in your container during image building.
# - Commands are executed using docker `RUN` directive.
# - Commands can be used to install required additional dependencies into your container.
#
# - Type: array<string>
# - Required: false
customDockerBuildCommands:
- apt-get update && apt-get install -y curl
- npm install -g pm2
# Files that should be explicitly included in the deployment package (glob pattern)
#
# - Example glob pattern: `images/*.jpg`
# - The path is relative to the stacktape configuration file location or to `cwd` if configured using `--currentWorkingDirectory` command line option.
#
# - Type: array<string>
# - Required: false
includeFiles:
- public/**/*
- assets/*.png
# Files that should be explicitly excluded from deployment package (glob pattern)
#
# Example glob pattern: `images/*.jpg`
#
# - Type: array<string>
# - Required: false
excludeFiles:
- '*.test.ts'
- node_modules/**
# Dependencies to ignore.
#
# - These dependencies won't be a part of your deployment package.
#
# - Type: array<string>
# - Required: false
excludeDependencies:
- example-value

external-buildpack

This example shows how to configure packaging using external-buildpack.

resources:
myBatchJob:
type: batch-job
properties:
container:
# Configures what image to use for the batch job container
#
# - Type: object
# - Required: true
packaging:
#
# - Type: string
# - Required: true
type: external-buildpack
#
# - Type: object
# - Required: true
properties:
# Path to the directory where the buildpack will be executed
#
# - Type: string
# - Required: true
sourceDirectoryPath: ./
# Buildpack Builder to use
#
# - By default, [paketobuildpacks/builder-jammy-base](https://github.com/paketo-buildpacks/builder-jammy-base) is used.
#
# - Type: string
# - Required: false
# - Default: paketobuildpacks/builder-jammy-base
builder: paketobuildpacks/builder-jammy-base
# Buildpack to use
#
# - By default, buildpacks are detected automatically.
#
# - Type: array<string>
# - Required: false
buildpacks:
- example-value
# Command to be executed when the container starts.
#
# - Example: `['app.py']`.
#
# - Type: array<string>
# - Required: false
command:
- node
- dist/index.js
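
A condensed sketch of the external-buildpack variant above; the builder shown is the documented default, and the command is a placeholder for your application's start command.

resources:
  myBatchJob:
    type: batch-job
    properties:
      container:
        packaging:
          type: external-buildpack
          properties:
            sourceDirectoryPath: ./
            builder: paketobuildpacks/builder-jammy-base
            command:
              - node
              - dist/index.js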

prebuilt-image

This example shows how to configure packaging using prebuilt-image.

resources:
myBatchJob:
type: batch-job
properties:
container:
# Configures what image to use for the batch job container
#
# - Type: object
# - Required: true
packaging:
#
# - Type: string
# - Required: true
type: prebuilt-image
# Configures properties for the image pre-built by the user.
#
# - Type: object
# - Required: true
properties:
# Name or the URL of the image
#
# - Type: string
# - Required: true
image: example-value
# Command to be executed when the container starts. Overrides CMD instruction in the Dockerfile.
#
# - Example: `['app.py']`
#
# - Type: array<string>
# - Required: false
command:
- node
- dist/index.js
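
A condensed sketch of the prebuilt-image variant above; the image reference and the command are placeholders for your own image and start command.

resources:
  myBatchJob:
    type: batch-job
    properties:
      container:
        packaging:
          type: prebuilt-image
          properties:
            image: my-registry/my-batch-job:latest
            command:
              - node
              - dist/index.js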

custom-dockerfile

This example shows how to configure packaging using custom-dockerfile.

resources:
myBatchJob:
type: batch-job
properties:
container:
# Configures what image to use for the batch job container
#
# - Type: object
# - Required: true
packaging:
#
# - Type: string
# - Required: true
type: custom-dockerfile
# Configures properties for image built from the specified Dockerfile by Stacktape
#
# - Type: object
# - Required: true
properties:
# Path to directory (relative to stacktape config file) used as build context
#
# - Type: string
# - Required: true
buildContextPath: ./
# Path to Dockerfile (relative to `buildContextPath`) used to build application image.
#
# - Type: string
# - Required: false
dockerfilePath: Dockerfile
# List of arguments passed to the `docker build` command when building the image
#
# - Type: array<object (reference)>
# - Required: false
buildArgs:
- argName: NODE_ENV
value: production
- argName: BUILD_VERSION
value: 1.0.0
# Command to be executed when the container starts. Overrides CMD instruction in the Dockerfile.
#
# - Example: `['app.py']`
#
# - Type: array<string>
# - Required: false
command:
- node
- dist/index.js
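
A condensed sketch of the custom-dockerfile variant above; the build argument is a placeholder and only makes sense if your Dockerfile declares a matching `ARG` instruction.

resources:
  myBatchJob:
    type: batch-job
    properties:
      container:
        packaging:
          type: custom-dockerfile
          properties:
            buildContextPath: ./
            dockerfilePath: Dockerfile
            buildArgs:
              - argName: NODE_ENV
                value: production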

nixpacks

This example shows how to configure packaging using nixpacks.

resources:
myBatchJob:
type: batch-job
properties:
container:
# Configures what image to use for the batch job container
#
# - Type: object
# - Required: true
packaging:
#
# - Type: string
# - Required: true
type: nixpacks
#
# - Type: object
# - Required: true
properties:
# Path to the directory where the buildpack will be executed
#
# - Type: string
# - Required: true
sourceDirectoryPath: ./
# Build Image
#
# - The image to use as the base when building the application.
# - To learn more, refer to [nixpacks docs](https://nixpacks.com/docs/configuration/file#build-image)
#
# - Type: string
# - Required: false
buildImage: example-value
# Providers
#
# - A list of provider names used to determine build and runtime environments.
#
# - Type: array<string>
# - Required: false
providers:
- example-value
# Start Command
#
# - The command to execute when starting the application.
# - Overrides default start commands inferred by nixpacks.
#
# - Type: string
# - Required: false
startCmd: example-value
# Start Run Image
#
# - The image to use as the base when running the application.
#
# - Type: string
# - Required: false
startRunImage: example-value
# Start Only Include Files
#
# - A list of file paths to include in the runtime environment.
# - Other files will be excluded.
#
# - Type: array<string>
# - Required: false
startOnlyIncludeFiles:
- example-value
# Phases
#
# - Defines the build phases for the application.
# - Each phase specifies commands, dependencies, and settings.
#
# - Type: array<object (reference)>
# - Required: false
phases:
- name: example-name
cmds: ["example-value"]

LogForwarding alternatives

http-endpoint

This example shows how to configure logforwarding using http-endpoint.

resources:
myBatchJob:
type: batch-job
properties:
logging:
# Configures forwarding of logs to specified destination
#
# - Log forwarding is done using [Amazon Kinesis Data Firehose](https://aws.amazon.com/kinesis/data-firehose/) delivery stream.
# - When using log forwarding, you will incur costs based on the amount of data being transferred to the destination (~$0.03 per transferred GB).
# Refer to [AWS Kinesis Firehose Pricing](https://aws.amazon.com/kinesis/data-firehose/pricing/?nc=sn&loc=3) page to see details.
# - Currently supported destinations for logs:
# - `http-endpoint`
# - delivers logs to any HTTP endpoint.
# - The endpoint must follow [Firehose request and response specifications](https://docs.aws.amazon.com/firehose/latest/dev/httpdeliveryrequestresponse.html).
# (Many third-party vendors are compliant with these specifications out of the box.)
# - `datadog`
# - delivers logs to [Datadog](https://www.datadoghq.com/).
# - `highlight`
# - delivers logs to [Highlight.io](https://www.highlight.io/) project.
#
# Refer to [our docs](https://docs.stacktape.com/configuration/log-forwarding/) for more information.
#
# > Logs that fail to be delivered to the destination even after multiple retries (time spent on retries can be configured) are put into a bucket named `{stackName}-{resourceName}-logs-{generatedHash}`
#
# - Type: object
# - Required: true
logForwarding:
#
# - Type: string
# - Required: true
type: http-endpoint
#
# - Type: object
# - Required: true
properties:
# HTTPS endpoint where logs will be forwarded
#
# - Type: string
# - Required: true
endpointUrl: https://example.com
# Specifies whether to use GZIP compression for the request
#
# - When enabled, Firehose uses the content encoding to compress the body of a request before sending the request to the destination
#
# - Type: boolean
# - Required: false
gzipEncodingEnabled: true
# Parameters included in each call to HTTP endpoint
#
# - Key/Value pairs containing additional metadata you wish to send to the HTTP endpoint.
# - Parameters are delivered within the **X-Amz-Firehose-Common-Attributes** header as a JSON object with the following format: `{"commonAttributes":{"param1":"val1", "param2":"val2"}}`
#
# - Type: object
# - Required: false
# Amount of time spent on retries.
#
# - The total amount of time that Kinesis Data Firehose spends on retries.
# - This duration starts after the initial attempt to send data to the custom destination via HTTPS endpoint fails.
# - Logs that fail to be delivered to the HTTP endpoint even after multiple retries (time spent on retries can be configured) are put into a bucket named `{stackName}-{resourceName}-logs-{generatedHash}`
#
# - Type: number
# - Required: false
retryDuration: 100
# Access key (credentials), needed for authenticating with endpoint
#
# - The access key is carried within an **X-Amz-Firehose-Access-Key** header
# - The configured key is copied verbatim into the value of this header. The contents can be arbitrary and can potentially represent a JWT token or an ACCESS_KEY.
# - It is recommended to use [secret](https://docs.stacktape.com/resources/secrets/) for storing your access key.
#
# - Type: string
# - Required: false
accessKey: example-value
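
A condensed sketch of the http-endpoint forwarding above; the endpoint URL is a placeholder, and the access key is shown inline only for brevity (the docs recommend storing it in a secret instead).

resources:
  myBatchJob:
    type: batch-job
    properties:
      logging:
        logForwarding:
          type: http-endpoint
          properties:
            endpointUrl: https://logs.example.com/firehose
            gzipEncodingEnabled: true
            retryDuration: 100
            accessKey: my-access-key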

highlight

This example shows how to configure logforwarding using highlight.

resources:
myBatchJob:
type: batch-job
properties:
logging:
# Configures forwarding of logs to specified destination
#
# - Log forwarding is done using [Amazon Kinesis Data Firehose](https://aws.amazon.com/kinesis/data-firehose/) delivery stream.
# - When using log forwarding, you will incur costs based on the amount of data being transferred to the destination (~$0.03 per transferred GB).
# Refer to [AWS Kinesis Firehose Pricing](https://aws.amazon.com/kinesis/data-firehose/pricing/?nc=sn&loc=3) page to see details.
# - Currently supported destinations for logs:
# - `http-endpoint`
# - delivers logs to any HTTP endpoint.
# - The endpoint must follow [Firehose request and response specifications](https://docs.aws.amazon.com/firehose/latest/dev/httpdeliveryrequestresponse.html).
# (Many third-party vendors are compliant with these specifications out of the box.)
# - `datadog`
# - delivers logs to [Datadog](https://www.datadoghq.com/).
# - `highlight`
# - delivers logs to [Highlight.io](https://www.highlight.io/) project.
#
# Refer to [our docs](https://docs.stacktape.com/configuration/log-forwarding/) for more information.
#
# > Logs that fail to be delivered to the destination even after multiple retries (time spent on retries can be configured) are put into a bucket named `{stackName}-{resourceName}-logs-{generatedHash}`
#
# - Type: object
# - Required: true
logForwarding:
#
# - Type: string
# - Required: true
type: highlight
#
# - Type: object
# - Required: true
properties:
# Id of a [highlight.io](https://www.highlight.io/) project.
#
# - You can get the id of your project in your [highlight.io console](https://app.highlight.io/).
#
# - Type: string
# - Required: true
projectId: example-value
# HTTPS endpoint where logs will be forwarded
#
# - By default Stacktape uses `https://pub.highlight.io/v1/logs/firehose`
#
# - Type: string
# - Required: false
# - Default: https://pub.highlight.io/v1/logs/firehose
endpointUrl: https://pub.highlight.io/v1/logs/firehose

datadog

This example shows how to configure logforwarding using datadog.

resources:
myBatchJob:
type: batch-job
properties:
logging:
# Configures forwarding of logs to specified destination
#
# - Log forwarding is done using [Amazon Kinesis Data Firehose](https://aws.amazon.com/kinesis/data-firehose/) delivery stream.
# - When using log forwarding, you will incur costs based on the amount of data being transferred to the destination (~$0.03 per transferred GB).
# Refer to [AWS Kinesis Firehose Pricing](https://aws.amazon.com/kinesis/data-firehose/pricing/?nc=sn&loc=3) page to see details.
# - Currently supported destinations for logs:
# - `http-endpoint`
# - delivers logs to any HTTP endpoint.
# - The endpoint must follow [Firehose request and response specifications](https://docs.aws.amazon.com/firehose/latest/dev/httpdeliveryrequestresponse.html).
# (Many third-party vendors are compliant with these specifications out of the box.)
# - `datadog`
# - delivers logs to [Datadog](https://www.datadoghq.com/).
# - `highlight`
# - delivers logs to [Highlight.io](https://www.highlight.io/) project.
#
# Refer to [our docs](https://docs.stacktape.com/configuration/log-forwarding/) for more information.
#
# > Logs that fail to be delivered to the destination even after multiple retries (time spent on retries can be configured) are put into a bucket named `{stackName}-{resourceName}-logs-{generatedHash}`
#
# - Type: object
# - Required: true
logForwarding:
#
# - Type: string
# - Required: true
type: datadog
#
# - Type: object
# - Required: true
properties:
# API key required to enable delivery of logs to Datadog
#
# - You can get your Datadog API key in [Datadog console](https://app.datadoghq.com/organization-settings/api-keys)
# - It is recommended to use [secret](https://docs.stacktape.com/resources/secrets/) for storing your api key.
#
# - Type: string
# - Required: true
apiKey: example-value
# HTTPS endpoint where logs will be forwarded
#
# - By default Stacktape uses `https://aws-kinesis-http-intake.logs.datadoghq.com/v1/input`
# - If your Datadog site is in EU you should probably use `https://aws-kinesis-http-intake.logs.datadoghq.eu/v1/input`
#
# - Type: string
# - Required: false
# - Default: https://aws-kinesis-http-intake.logs.datadoghq.com/v1/input
endpointUrl: https://aws-kinesis-http-intake.logs.datadoghq.com/v1/input

Events alternatives

application-load-balancer

This example shows how to configure events using application-load-balancer.

resources:
myBatchJob:
type: batch-job
properties:
# Configures events (triggers) that will trigger the execution of this batch job.
#
# - Triggering of batch jobs leverages `trigger functions` (special purpose lambda functions).
# - Event integrations are attached to the `trigger function`
#
# - Type: object
# - Required: true
events:
# The function is triggered when the specified Application Load Balancer receives an HTTP request that matches the integration's conditions.
#
# - You can filter requests based on **HTTP Method**, **Path**, **Headers**, **Query parameters**, and **IP Address**.
#
# - Type: string
# - Required: true
type: application-load-balancer
# Properties of the integration
#
# - Type: object
# - Required: true
properties:
# Name of the Load balancer
#
# - Reference to the load balancer
#
# - Type: string
# - Required: true
loadBalancerName: myLoadBalancerName
# Priority of the integration
#
# - Load balancers evaluate integrations according to priority (from lowest to highest).
# - An incoming request is always sent to the first integration that matches the conditions (path, method, ...).
#
# - Type: number
# - Required: true
priority: 100
# Port of the Load balancer listener
#
# - You need to specify the listener port if the referenced load balancer uses custom listeners. Otherwise, do not specify this property.
#
# - Type: number
# - Required: false
listenerPort: 3000
# List of URL paths that the request must match to be routed by this event integration
#
# - The condition is satisfied if any of the paths matches the request URL
# - The maximum size is 128 characters
# - The comparison is case sensitive
#
# The following patterns are supported:
# - basic URL path, i.e. `/posts`
# - `*` - wildcard (matches 0 or more characters)
# - `?` - wildcard (matches exactly 1 character)
#
# - Type: array<string>
# - Required: false
paths:
- example-value
# List of HTTP methods that the request must match to be routed by this event integration
#
# - Type: array<string>
# - Required: false
methods:
- example-value
# List of hostnames that the request must match to be routed by this event integration
#
# - Hostname is parsed from the host header of the request
#
# The following wildcard patterns are supported:
# - `*` - wildcard (matches 0 or more characters)
# - `?` - wildcard (matches exactly 1 character)
#
# - Type: array<string>
# - Required: false
hosts:
- example-value
# List of header conditions that the request must match to be routed by this event integration
#
# - All conditions must be satisfied.
#
# - Type: array<object (reference)>
# - Required: false
headers:
- headerName: myHeaderName
values: ["example-value"]
# List of query parameters conditions that the request must match to be routed by this event integration
#
# - All conditions must be satisfied.
#
# - Type: array<object (reference)>
# - Required: false
queryParams:
- paramName: myParamName
values: ["example-value"]
# List of IP addresses that the request must match to be routed by this event integration
#
# - IP addresses must be in a CIDR format.
# - If a client is behind a proxy, this is the IP address of the proxy, not the IP address of the client.
#
# - Type: array<string>
# - Required: false
sourceIps:
- example-value
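
A condensed sketch of the application-load-balancer trigger above, written as a list item under `events`; the load balancer name, path, and method are placeholders.

resources:
  myBatchJob:
    type: batch-job
    properties:
      events:
        - type: application-load-balancer
          properties:
            loadBalancerName: myLoadBalancer
            priority: 1
            paths:
              - /start-job
            methods:
              - POST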

sns

This example shows how to configure events using sns.

resources:
myBatchJob:
type: batch-job
properties:
# Configures events (triggers) that will trigger the execution of this batch job.
#
# - Triggering of batch jobs leverages `trigger functions` (special purpose lambda functions).
# - Event integrations are attached to the `trigger function`
#
# - Type: object
# - Required: true
events:
#
# - Type: string
# - Required: true
type: sns
# Properties of the integration
#
# - Type: object
# - Required: true
properties:
# Name of the sns-topic defined within resources
#
# - Use this, if you want to use an sns topic defined within the stack resources.
# - You need to specify exactly one of `snsTopicName` or `snsTopicArn`.
#
# - Type: string
# - Required: false
snsTopicName: mySnsTopicName
# Arn of the SNS topic. Messages arriving to this topic will invoke the workload.
#
# - Use this, if you want to use an sns topic defined outside of the stack resources.
# - You need to specify exactly one of `snsTopicName` or `snsTopicArn`.
#
# - Type: string
# - Required: false
snsTopicArn: example-value
# Allows you to filter messages based on the message `attributes`
#
# - Filters messages based on the message `attributes`
# - If you need to filter based on the content of the message, use an [Event bus integration](#event-bus).
# - To learn more about filter policies, refer to [AWS Docs](https://docs.aws.amazon.com/sns/latest/dg/sns-subscription-filter-policies.html)
#
# - Required: false
# SQS Destination for messages that fail to be delivered to the workload
#
# - Failure to deliver can happen in rare cases, e.g. when the function is not able to scale fast enough to react to incoming messages.
#
# - Type: object
# - Required: false
onDeliveryFailure:
# Arn of the SQS queue
#
# - Type: string
# - Required: false
sqsQueueArn: example-value
# Name of the SQS queue in Stacktape config
#
# - Type: string
# - Required: false
sqsQueueName: mySqsQueueName
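
A condensed sketch of the sns trigger above; the topic and queue names are placeholders referring to resources defined elsewhere in the stack.

resources:
  myBatchJob:
    type: batch-job
    properties:
      events:
        - type: sns
          properties:
            snsTopicName: mySnsTopic
            onDeliveryFailure:
              sqsQueueName: myDeadLetterQueue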

sqs

This example shows how to configure events using sqs.

resources:
myBatchJob:
type: batch-job
properties:
# Configures events (triggers) that will trigger the execution of this batch job.
#
# - Triggering of batch jobs leverages `trigger functions` (special purpose lambda functions).
# - Event integrations are attached to the `trigger function`
#
# - Type: object
# - Required: true
events:
#
# - Type: string
# - Required: true
type: sqs
# Properties of the integration
#
# - Type: object
# - Required: true
properties:
# Name of the sqs-queue defined within resources
#
# - Use this, if you want to use an sqs queue defined within the stack resources.
# - You need to specify exactly one of `sqsQueueName` or `sqsQueueArn`.
#
# - Type: string
# - Required: false
sqsQueueName: mySqsQueueName
# Arn of sqs queue from which function consumes messages.
#
# - Use this, if you want to use an sqs queue defined outside of the stack resources.
# - You need to specify exactly one of `sqsQueueName` or `sqsQueueArn`.
#
# - Type: string
# - Required: false
sqsQueueArn: example-value
# Configures how many records to collect in a batch, before the function is invoked.
#
# - Maximum `10,000`
#
# - Type: number
# - Required: false
# - Default: 10
batchSize: 10
# Configures the maximum amount of time (in seconds) to gather records before invoking the workload
#
# - By default, the batch window is not configured
# - Maximum 300 seconds
#
# - Type: number
# - Required: false
maxBatchWindowSeconds: 100
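
A condensed sketch of the sqs trigger above; the queue name is a placeholder and the batching values are illustrative choices within the documented limits.

resources:
  myBatchJob:
    type: batch-job
    properties:
      events:
        - type: sqs
          properties:
            sqsQueueName: myQueue
            batchSize: 10
            maxBatchWindowSeconds: 30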

kinesis-stream

This example shows how to configure events using kinesis-stream.

resources:
myBatchJob:
type: batch-job
properties:
# Configures events (triggers) that will trigger the execution of this batch job.
#
# - Triggering of batch jobs leverages `trigger functions` (special purpose lambda functions).
# - Event integrations are attached to the `trigger function`
#
# - Type: object
# - Required: true
events:
#
# - Type: string
# - Required: true
type: kinesis-stream
# Properties of the integration
#
# - Type: object
# - Required: true
properties:
# Arn of Kinesis stream from which function consumes records.
#
# - Type: string
# - Required: true
streamArn: example-value
# Arn of the consumer which will be used by integration.
#
# - This parameter CAN NOT be used in combination with `autoCreateConsumer`
#
# - Type: string
# - Required: false
consumerArn: example-value
# Specifies whether to create separate consumer for this integration
#
# - Specifies whether Stacktape creates the consumer for this integration
# - Using a consumer can help minimize latency and maximize read throughput
# - To learn more about stream consumers, refer to [AWS Docs](https://docs.aws.amazon.com/streams/latest/dev/amazon-kinesis-consumers.html)
# - This parameter CAN NOT be used in combination with `consumerArn`
#
# - Type: boolean
# - Required: false
autoCreateConsumer: true
# Configures the maximum amount of time (in seconds) to gather records before invoking the workload
#
# - By default, the batch window is not configured
# - Maximum `300` seconds
#
# - Type: number
# - Required: false
maxBatchWindowSeconds: 100
# Configures how many records to collect in a batch, before the function is invoked.
#
# - Maximum `10,000`
#
# - Type: number
# - Required: false
# - Default: 10
batchSize: 10
# Specifies position in the stream from which to start reading.
#
# Available values are:
# - `LATEST` - Read only new records.
# - `TRIM_HORIZON` - Process all available records
#
# - Type: enum: [LATEST, TRIM_HORIZON]
# - Required: false
# - Default: TRIM_HORIZON
# - Allowed values: [LATEST, TRIM_HORIZON]
startingPosition: TRIM_HORIZON
# Configures the number of times failed "record batches" are retried
#
# - If the compute resource fails, the entire batch of records is retried (not only the failed ones).
# This means that even the records that you processed successfully can get retried.
# You should implement your function with idempotency in mind.
#
# - Type: number
# - Required: false
maximumRetryAttempts: 100
# Configures the on-failure destination for failed record batches
#
# - `SQS queue` or `SNS topic`
#
# - Type: object
# - Required: false
onFailure:
# Arn of the SNS topic or SQS queue into which failed record batches are sent
#
# - Type: string
# - Required: true
arn: example-value
# Type of the destination being used
#
# - Type: enum: [sns, sqs]
# - Required: true
# - Allowed values: [sns, sqs]
type: sns
# Allows processing more than one shard of the stream simultaneously
#
# - Type: number
# - Required: false
parallelizationFactor: 100
# If the compute resource returns an error, split the batch in two before retrying.
#
# - This can help in cases when the failure happened because the batch was too large to be processed successfully.
#
# - Type: boolean
# - Required: false
bisectBatchOnFunctionError: true
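
A condensed sketch of the kinesis-stream trigger above; the stream and queue ARNs are placeholders, and the remaining values are illustrative choices within the documented options.

resources:
  myBatchJob:
    type: batch-job
    properties:
      events:
        - type: kinesis-stream
          properties:
            streamArn: arn:aws:kinesis:eu-west-1:123456789012:stream/my-stream
            autoCreateConsumer: true
            batchSize: 100
            startingPosition: LATEST
            onFailure:
              type: sqs
              arn: arn:aws:sqs:eu-west-1:123456789012:my-dlq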

dynamo-db-stream

This example shows how to configure events using dynamo-db-stream.

resources:
myBatchJob:
type: batch-job
properties:
# Configures events (triggers) that will trigger the execution of this batch job.
#
# - Triggering of batch jobs leverages `trigger functions` (special purpose lambda functions).
# - Event integrations are attached to the `trigger function`
#
# - Type: object
# - Required: true
events:
#
# - Type: string
# - Required: true
type: dynamo-db-stream
# Properties of the integration
#
# - Type: object
# - Required: true
properties:
# Arn of the DynamoDb table stream from which the compute resource consumes records.
#
# - Type: string
# - Required: true
streamArn: example-value
# Configures maximum amount of time (in seconds) to gather records before invoking the workload
#
# - By default, the batch window is not configured
#
# - Type: number
# - Required: false
maxBatchWindowSeconds: 100
# Configures how many records to collect in a batch, before the compute resource is invoked.
#
# - Maximum `1000`
#
# - Type: number
# - Required: false
# - Default: 100
batchSize: 10
# Specifies position in the stream from which to start reading.
#
# Available values are:
# - `LATEST` - Read only new records.
# - `TRIM_HORIZON` - Process all available records
#
# - Type: string
# - Required: false
# - Default: TRIM_HORIZON
startingPosition: TRIM_HORIZON
# Configures the number of times failed "record batches" are retried
#
# - If the compute resource fails, the entire batch of records is retried (not only the failed ones).
# This means that even the records that you processed successfully can get retried.
# You should implement your function with idempotency in mind.
#
# - Type: number
# - Required: false
maximumRetryAttempts: 100
# Configures the on-failure destination for failed record batches
#
# - `SQS queue` or `SNS topic`
#
# - Type: object
# - Required: false
onFailure:
# Arn of the SNS topic or SQS queue into which failed record batches are sent
#
# - Type: string
# - Required: true
arn: example-value
# Type of the destination being used
#
# - Type: enum: [sns, sqs]
# - Required: true
# - Allowed values: [sns, sqs]
type: sns
# Allows processing more than one shard of the stream simultaneously
#
# - Type: number
# - Required: false
parallelizationFactor: 100
# If the compute resource returns an error, split the batch in two before retrying.
#
# - This can help in cases when the failure happened because the batch was too large to be processed successfully.
#
# - Type: boolean
# - Required: false
bisectBatchOnFunctionError: true

s3

This example shows how to configure events using s3.

resources:
myBatchJob:
type: batch-job
properties:
# Configures events (triggers) that will trigger the execution of this batch job.
#
# - Triggering of batch jobs leverages `trigger functions` (special purpose lambda functions).
# - Event integrations are attached to the `trigger function`
#
# - Type: object
# - Required: true
events:
#
# - Type: string
# - Required: true
type: s3
# Properties of the integration
#
# - Type: object
# - Required: true
properties:
# Arn of the S3 bucket whose events can invoke the workload
#
# - Type: string
# - Required: true
bucketArn: example-value
# Specifies which event types invoke the workload
#
# - Type: enum: [s3:ObjectCreated:*, s3:ObjectCreated:CompleteMultipartUpload, s3:ObjectCreated:Copy, s3:ObjectCreated:Post, s3:ObjectCreated:Put, s3:ObjectRemoved:*, s3:ObjectRemoved:Delete, s3:ObjectRemoved:DeleteMarkerCreated, s3:ObjectRestore:*, s3:ObjectRestore:Completed, s3:ObjectRestore:Post, s3:ReducedRedundancyLostObject, s3:Replication:*, s3:Replication:OperationFailedReplication, s3:Replication:OperationMissedThreshold, s3:Replication:OperationNotTracked, s3:Replication:OperationReplicatedAfterThreshold]
# - Required: true
# - Allowed values: [s3:ObjectCreated:*, s3:ObjectCreated:CompleteMultipartUpload, s3:ObjectCreated:Copy, s3:ObjectCreated:Post, s3:ObjectCreated:Put, s3:ObjectRemoved:*, s3:ObjectRemoved:Delete, s3:ObjectRemoved:DeleteMarkerCreated, s3:ObjectRestore:*, s3:ObjectRestore:Completed, s3:ObjectRestore:Post, s3:ReducedRedundancyLostObject, s3:Replication:*, s3:Replication:OperationFailedReplication, s3:Replication:OperationMissedThreshold, s3:Replication:OperationNotTracked, s3:Replication:OperationReplicatedAfterThreshold]
s3EventType: s3:ObjectCreated:*
# Allows filtering the objects that can invoke the workload
#
# - Type: object
# - Required: false
filterRule:
# Prefix of the object which can invoke function
#
# - Type: string
# - Required: false
prefix: example-value
# Suffix of the object which can invoke function
#
# - Type: string
# - Required: false
suffix: example-value
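
A condensed sketch of the s3 trigger above; the bucket ARN and the prefix/suffix filter values are placeholders.

resources:
  myBatchJob:
    type: batch-job
    properties:
      events:
        - type: s3
          properties:
            bucketArn: arn:aws:s3:::my-bucket
            s3EventType: 's3:ObjectCreated:*'
            filterRule:
              prefix: uploads/
              suffix: .csv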

schedule

This example shows how to configure events using schedule.

resources:
myBatchJob:
type: batch-job
properties:
# Configures events (triggers) that will trigger the execution of this batch job.
#
# - Triggering of batch jobs leverages `trigger functions` (special purpose lambda functions).
# - Event integrations are attached to the `trigger function`
#
# - Type: object
# - Required: true
events:
#
# - Type: string
# - Required: true
type: schedule
# Properties of the integration
#
# - Type: object
# - Required: true
properties:
# Invocation schedule rate
#
# 2 different formats are supported:
# - `rate expression` - example: `rate(2 hours)` or `rate(20 seconds)`
# - `cron` - example: `cron(0 10 * * ? *)` or `cron(0 15 3 * ? *)`
#
# - Type: string
# - Required: true
scheduleRate: example-value
# Valid JSON event passed to the target instead of the original event
#
# - Use this property, if the delivered event should always be the same.
# - If you wish to use parts of the original event or directives in your event, use `inputTransformer`.
#
# Example:
#
# ```yml
# ...
# events:
#   - type: schedule
#     properties:
#       input:
#         property1: always-same
# ```
#
# > You can only specify one of `input`, `inputPath` or `inputTransformer`
#
# - Required: false
# The JSON path that is used for extracting part of the matched event when passing it to the target
#
# - Use this property, if you wish to deliver only specific part of the event to the target
# - If you wish to use parts of the original event or directives in your event, use `inputTransformer`.
#
# Example (passing only "detail" portion of event to the result):
#
# ```yml
# ...
# events:
#   - type: schedule
#     properties:
#       inputPath: $.detail
# ```
#
# > You can only specify one of `input`, `inputPath` or `inputTransformer`
#
# - Type: string
# - Required: false
inputPath: ./path/to/inputPath
# Enables you to provide custom input to a target based on certain event data
#
# - Use this property, if you wish to extract one or more key-value pairs from the event and then use that data to send customized input to the target.
#
# Example (extracting information from original event and passing into new event):
#
# ```yml
# ...
# events:
#   - type: schedule
#     properties:
#       inputTransformer:
#         inputPathsMap:
#           time: $.time
#         inputTemplate:
#           message: 'event with time <time>'
# ```
#
# > You can only specify one of `input`, `inputPath` or `inputTransformer`
#
# - Type: object
# - Required: false
inputTransformer:
# Template where you specify placeholders that will be filled with the values of the keys from InputPathsMap to customize the data sent to the target.
#
# - Enclose each `inputPathsMap` value in angle brackets: `<value>`
#
# - Required: true
# Map of JSON paths to be extracted from the event
#
# - You can then insert these in the template in `inputTemplate` to produce the output you want to be sent to the target.
# - `inputPathsMap` is an array of key-value pairs, where each value is a valid JSON path.
#
# - Required: false
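
A condensed sketch of the schedule trigger above, reusing the rate expression and static `input` shown in the comments; both are placeholders.

resources:
  myBatchJob:
    type: batch-job
    properties:
      events:
        - type: schedule
          properties:
            scheduleRate: rate(2 hours)
            input:
              property1: always-same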

cloudwatch-log

This example shows how to configure events using cloudwatch-log.

resources:
myBatchJob:
type: batch-job
properties:
# Configures events (triggers) that will trigger the execution of this batch job.
#
# - Triggering of batch jobs leverages `trigger functions` (special purpose lambda functions).
# - Event integrations are attached to the `trigger function`
#
# - Type: object
# - Required: true
events:
#
# - Type: string
# - Required: true
type: cloudwatch-log
# Properties of the integration
#
# - Type: object
# - Required: true
properties:
# Arn of the watched Log group
#
# - Type: string
# - Required: true
logGroupArn: example-value
# Allows filtering the logs that invoke the compute resource based on a pattern
#
# - To learn more about the filter pattern, refer to [AWS Docs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html)
#
# - Type: string
# - Required: false
filter: example-value

http-api-gateway

This example shows how to configure events using http-api-gateway.

resources:
myBatchJob:
type: batch-job
properties:
# Configures events (triggers) that will trigger the execution of this batch job.
#
# - Triggering of batch jobs leverages `trigger functions` (special purpose lambda functions).
# - Event integrations are attached to the `trigger function`
#
# - Type: object
# - Required: true
events:
#
# - Type: string
# - Required: true
type: http-api-gateway
# Properties of the integration
#
# - Type: object
# - Required: true
properties:
# Name of the HTTP API Gateway
#
# - Type: string
# - Required: true
httpApiGatewayName: myHttpApiGatewayName
# HTTP method that the request should match to be routed by this event integration
#
# Can be either:
# - exact method (e.g. `GET` or `PUT`)
# - wildcard matching any method (`*`)
#
# - Type: enum: [*, DELETE, GET, HEAD, OPTIONS, PATCH, POST, PUT]
# - Required: true
# - Allowed values: [*, DELETE, GET, HEAD, OPTIONS, PATCH, POST, PUT]
method: '*'
# URL path that the request should match to be routed by this event integration
#
# Can be either:
# - **Exact URL Path** - e.g. `/posts`
# - **Path with a positional parameter** - e.g. `/post/{id}`. This matches any `id` parameter, e.g. `/post/6`.
# The parameter will be available to the compute resource using `event.pathParameters.id`
# - **Greedy path variable** - e.g. `/post/{anything+}`. This catches all child resources of the route.
# Example: `/post/{anything+}` catches both `/post/something/param1` and `/post/something2/param`
#
# - Type: string
# - Required: true
path: example-value
# Configures authorization rules for this event integration
#
# - Only the authorized requests will be forwarded to the workload.
# - All other requests will receive `{ "message": "Unauthorized" }`
#
# - Type: union (anyOf)
# - Required: false
# The format of the payload that the compute resource will receive with this integration.
#
# - To learn more about the differences between the formats, refer to
# [AWS Docs](https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-develop-integrations-lambda.html)
#
# - Type: enum: [1.0, 2.0]
# - Required: false
# - Default: '1.0'
# - Allowed values: [1.0, 2.0]
payloadFormat: '1.0'
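
A condensed sketch of the http-api-gateway trigger above; the gateway name and path are placeholders.

resources:
  myBatchJob:
    type: batch-job
    properties:
      events:
        - type: http-api-gateway
          properties:
            httpApiGatewayName: myHttpApiGateway
            method: POST
            path: /start-job
            payloadFormat: '1.0'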

event-bus

This example shows how to configure events using event-bus.

resources:
myBatchJob:
type: batch-job
properties:
# Configures events (triggers) that will trigger the execution of this batch job.
#
# - Triggering of batch jobs leverages `trigger functions` (special purpose lambda functions).
# - Event integrations are attached to the `trigger function`
#
# - Type: object
# - Required: true
events:
#
# - Type: string
# - Required: true
type: event-bus
# Properties of the integration
#
# - Type: object
# - Required: true
properties:
# Used to filter the events from the event bus based on a pattern
#
# - Each event received by the Event Bus gets evaluated against this pattern. If the event matches this pattern, the integration invokes the workload.
# - To learn more about the event bus filter pattern syntax, refer to [AWS Docs](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-patterns.html)
#
# - Type: object
# - Required: true
eventPattern:
# Version property filter
#
# - If you do not specify this filter, version field of the event is ignored.
# - To learn more about event patterns, refer to [AWS Docs](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-patterns.html)
#
# - Required: false
version: 1.0.0
# Detail-type property filter
#
# - If you do not specify this filter, detail-type field of the event is ignored.
# - To learn more about event patterns, refer to [AWS Docs](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-patterns.html)
#
# - Required: false
# Source property filter
#
# - If you do not specify this filter, source field of the event is ignored.
# - To learn more about event patterns, refer to [AWS Docs](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-patterns.html)
#
# - Required: false
# Account property filter
#
# - If you do not specify this filter, account field of the event is ignored.
# - To learn more about event patterns, refer to [AWS Docs](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-patterns.html)
#
# - Required: false
# Region property filter
#
# - If you do not specify this filter, region field of the event is ignored.
# - To learn more about event patterns, refer to [AWS Docs](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-patterns.html)
#
# - Required: false
region: us-east-1
# Resources property filter
#
# - If you do not specify this filter, resources field of the event is ignored.
# - To learn more about event patterns, refer to [AWS Docs](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-patterns.html)
#
# - Required: false
# Detail property filter
#
# - Detail property contains the custom message of an event. The message is always valid JSON.
# - If you do not specify this filter, detail of event is ignored.
#
# - Required: false
# Replay-name property filter
#
# - If you do not specify this filter, replay-name field of the event is ignored.
# - To learn more about event patterns, refer to [AWS Docs](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-patterns.html)
#
# - Required: false
# Arn of the event-bus
#
# - Use this, if you want to use an event bus defined outside of the stack resources.
# - You need to specify exactly one of `eventBusArn`, `eventBusName` or `useDefaultBus`.
#
# - Type: string
# - Required: false
eventBusArn: example-value
# Name of the event-bus defined within resources
#
# - Use this, if you want to use an event bus defined within the stack resources.
# - You need to specify exactly one of `eventBusArn`, `eventBusName` or `useDefaultBus`.
#
# - Type: string
# - Required: false
eventBusName: myEventBusName
# Configures the integration to use the default (AWS created) event bus
#
# - You need to specify exactly one of `eventBusArn`, `eventBusName` or `useDefaultBus`.
#
# - Type: boolean
# - Required: false
useDefaultBus: true
# SQS Destination for messages that fail to be delivered to the workload
#
# - Failure to deliver can happen in rare cases, e.g. when the function is not able to scale fast enough to react to incoming messages.
#
# - Type: object
# - Required: false
onDeliveryFailure:
# Arn of the SQS queue
#
# - Type: string
# - Required: false
sqsQueueArn: example-value
# Name of the SQS queue in Stacktape config
#
# - Type: string
# - Required: false
sqsQueueName: mySqsQueueName
# Valid JSON event passed to the target instead of the original event
#
# - Use this property, if the delivered event should always be the same.
# - If you wish to use parts of the original event or directives in your event, use `inputTransformer`.
#
# Example:
#
# ```yml
# ...
# events:
#   - type: event-bus
#     properties:
#       useDefaultBus: true
#       input:
#         property1: always-same
# ```
#
# > You can only specify one of `input`, `inputPath` or `inputTransformer`
#
# - Required: false
# The JSON path that is used for extracting part of the matched event when passing it to the target
#
# - Use this property, if you wish to deliver only specific part of the event to the target
# - If you wish to use parts of the original event or directives in your event, use `inputTransformer`.
#
# Example (passing only "detail" portion of event to the result):
#
# ```yml
# ...
# events:
#   - type: event-bus
#     properties:
#       useDefaultBus: true
#       inputPath: $.detail
# ```
#
# > You can only specify one of `input`, `inputPath` or `inputTransformer`
#
# - Type: string
# - Required: false
inputPath: ./path/to/inputPath
# Enables you to provide custom input to a target based on certain event data
#
# - Use this property, if you wish to extract one or more key-value pairs from the event and then use that data to send customized input to the target.
#
# Example (extracting information from original event and passing into new event):
#
# ```yml
# ...
# events:
#   - type: event-bus
#     properties:
#       useDefaultBus: true
#       inputTransformer:
#         inputPathsMap:
#           instanceFromDetail: $.detail.instance
#           statusFromDetail: $.detail.status
#         inputTemplate:
#           instance: <instanceFromDetail>
#           status: <statusFromDetail>
# ```
#
# > You can only specify one of `input`, `inputPath` or `inputTransformer`
#
# - Type: object
# - Required: false
inputTransformer:
# Template where you specify placeholders that will be filled with the values of the keys from InputPathsMap to customize the data sent to the target.
#
# - Enclose each `inputPathsMap` value in angle brackets: `<value>`
#
# - Required: true
# Map of JSON paths to be extracted from the event
#
# - You can then insert these in the template in `inputTemplate` to produce the output you want to be sent to the target.
# - `inputPathsMap` is an array of key-value pairs, where each value is a valid JSON path.
#
# - Required: false
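
A condensed sketch of the event-bus trigger above, combining the default bus with the input transformer example from the comments; the `source` pattern value is an illustrative EventBridge pattern, not a value from this reference.

resources:
  myBatchJob:
    type: batch-job
    properties:
      events:
        - type: event-bus
          properties:
            useDefaultBus: true
            eventPattern:
              source:
                - aws.ec2
            inputTransformer:
              inputPathsMap:
                instanceFromDetail: $.detail.instance
                statusFromDetail: $.detail.status
              inputTemplate:
                instance: <instanceFromDetail>
                status: <statusFromDetail>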
