Stacktape


Deployment Scripts



Deployment scripts allow you to execute custom logic as part of your deployment process. You can pass information about your infrastructure to the script and grant it permissions to interact with other resources in your stack.

Under the hood, a deployment script is packaged as an AWS Lambda function and triggered during the deployment or delete process. Deployment scripts are not executed during hot-swap deployments.

When to use them

Deployment scripts are useful for tasks that need to run as part of your infrastructure provisioning, such as:

  • Seeding a database with initial data.
  • Running database migrations.
  • Running smoke tests to ensure that your application is running correctly after a deployment.
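For instance, a database-seeding script is just an exported async handler that does its work and throws on failure. The sketch below is illustrative and assumes hypothetical table and row names; a real script would open a connection with a database client (e.g. `pg`) and insert the rows, throwing on failure so the deployment is rolled back.

```typescript
// Hypothetical seeding handler. `buildSeedRows` and the role names are
// illustrative, not part of Stacktape; a real script would insert the
// rows with your database client and throw on failure.
type SeedRow = { id: number; name: string };

export const buildSeedRows = (names: string[]): SeedRow[] =>
  names.map((name, index) => ({ id: index + 1, name }));

export const handler = async (event: { tableName?: string }) => {
  const rows = buildSeedRows(['admin', 'editor', 'viewer']);
  // a real implementation would INSERT `rows` into event.tableName here
  return { table: event.tableName ?? 'users', inserted: rows.length };
};

export default handler;
```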

Basic usage

This example uses a deployment script to test a public API endpoint after a deployment.

DeploymentScript  API reference

  • type (required)
  • properties.trigger (required)
  • properties.packaging (required)
  • properties.runtime
  • properties.environment
  • properties.parameters
  • properties.memory
  • properties.timeout (default: 10)
  • properties.joinDefaultVpc
  • properties.storage (default: 512)
  • properties.connectTo
  • properties.iamRoleStatements
  • overrides
import fetch from 'node-fetch';

export default async (event) => {
  const { apiURL } = event;
  // do whatever you want with apiURL ...
  const result = await fetch(apiURL);
  // fail the script if the test fails
  if (result.status === 404) {
    throw new Error('API test failed');
  }
};

A deployment script in TypeScript (test-url.ts).

resources:
  myHttpApi:
    type: http-api-gateway

  testApiMethods:
    type: deployment-script
    properties:
      trigger: after:deploy
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: test-url.ts
      parameters:
        apiURL: $ResourceParam('myHttpApi', 'url')

The Stacktape configuration for the deployment script.

Trigger

The trigger property determines when the script is executed.

  • Currently available triggers are:
    • after:deploy - executes the script at the end of the stack deploy operation (after all resources are deployed). If the script fails, the whole deployment fails and the stack is rolled back.
    • before:delete - executes the script before the stack delete operation starts deleting resources. NOTE that even if the script fails, the delete operation continues and deletes all resources.
  • Besides triggering the script during stack operations, you can trigger it manually using the stacktape deployment-script:run command.
resources:
  myHttpApi:
    type: http-api-gateway

  testApiMethods:
    type: deployment-script
    properties:
      trigger: after:deploy
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: test-url.ts
      parameters:
        apiURL: $ResourceParam('myHttpApi', 'url')

Scripts triggered before stack deletion (before:delete) must have been part of the stack during the last deployment in order to be executed.

Packaging

Deployment scripts are packaged and executed as Lambda functions. For more information, see the documentation on packaging Lambda functions.

Parameters

You can pass parameters to your deployment script.

  • Parameters can be used to pass complex information to your script handler.

You cannot pass secret values (i.e. values resolved using the $Secret directive) as parameters. To pass secret values, use environment variables instead.

resources:
  myHttpApi:
    type: http-api-gateway

  testApiMethods:
    type: deployment-script
    properties:
      trigger: after:deploy
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: test-url.ts
      parameters:
        apiURL: $ResourceParam('myHttpApi', 'url')
        testPaths:
          - my/path/1
          - my/path/2
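Parameters arrive in the handler's event object. A minimal sketch, assuming the apiURL and testPaths parameters from the configuration above (the real fetch calls are left as comments so the shape of the handler stays in focus):

```typescript
// Parameters declared in the config arrive in the handler's event.
type Event = { apiURL: string; testPaths: string[] };

// Assemble the full URLs to test from the base URL and the paths.
export const buildTestUrls = ({ apiURL, testPaths }: Event): string[] =>
  testPaths.map((path) => `${apiURL.replace(/\/$/, '')}/${path}`);

export default async (event: Event) => {
  for (const url of buildTestUrls(event)) {
    // const res = await fetch(url);
    // if (!res.ok) throw new Error(`Test failed for ${url}`);
  }
};
```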

Environment variables

  • Environment variables can be used to inject information about the infrastructure (database URLs, secrets, ...) into the script's runtime.
  • To pass complex objects into your script, use parameters instead.

Each environment variable entry has the following properties:

  • name (required)
  • value (required)
resources:
  myDatabase:
    type: relational-database
    properties:
      credentials:
        masterUserPassword: $Secret('my-database-password')
      engine:
        type: aurora-postgresql-serverless

  testDatabase:
    type: deployment-script
    properties:
      trigger: after:deploy
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: test-url.ts
      environment:
        - name: DATABASE_URL
          value: $ResourceParam('myDatabase', 'connectionString')
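Inside the script, the declared variables are read from process.env as usual. A small guard helper (illustrative, not part of Stacktape) keeps a missing variable from surfacing as a confusing error deeper in the script:

```typescript
// Read a required environment variable, failing fast when it is missing.
export const requireEnv = (name: string): string => {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
};

export default async () => {
  const databaseUrl = requireEnv('DATABASE_URL');
  // connect to the database using databaseUrl ...
};
```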

Accessing other resources

By default, AWS resources cannot communicate with each other. Access must be granted using IAM permissions.

Stacktape automatically handles the necessary permissions for the services it manages. For example, it allows a deployment script to write logs to CloudWatch.

However, if your script needs to access other resources, you must grant permissions manually. You can do this in two ways:

Using connectTo

The connectTo property lets you grant access to other Stacktape-managed resources by simply listing their names. Stacktape automatically configures the required IAM permissions and injects connection details as environment variables into your script.

resources:
  myScript:
    type: deployment-script
    properties:
      trigger: after:deploy
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: path/to/my-script.ts
      environment:
        - name: MY_BUCKET_NAME
          value: $ResourceParam('myBucket', 'name')
      connectTo:
        # access to the bucket
        - myBucket
        # access to AWS SES
        - aws:ses

  myBucket:
    type: bucket

By referencing resources (or services) in the connectTo list, Stacktape automatically:

  • configures the compute resource's IAM role permissions if needed,
  • sets up the security group rules required to allow access if needed,
  • injects environment variables containing information about the connected resource into the compute resource's runtime:
    • names of the environment variables use upper snake case and have the form STP_[RESOURCE_NAME]_[VARIABLE_NAME],
    • examples: STP_MY_DATABASE_CONNECTION_STRING or STP_MY_EVENT_BUS_ARN,
    • the list of injected variables for each resource type is shown below.
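The naming scheme can be expressed as a small helper. This is an illustrative sketch of the convention described above, not a Stacktape utility: camel-case resource and variable names are converted to upper snake case and joined under the STP_ prefix.

```typescript
// Convert a camelCase name to UPPER_SNAKE_CASE.
const toUpperSnakeCase = (name: string): string =>
  name.replace(/([a-z0-9])([A-Z])/g, '$1_$2').toUpperCase();

// Derive the name of the environment variable Stacktape injects for a
// given resource and variable (per the convention described above).
export const injectedEnvName = (resourceName: string, variableName: string): string =>
  `STP_${toUpperSnakeCase(resourceName)}_${toUpperSnakeCase(variableName)}`;
```

For example, `injectedEnvName('myDatabase', 'connectionString')` yields `STP_MY_DATABASE_CONNECTION_STRING`.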

The granted permissions and injected environment variables depend on the resource type:


Bucket

  • Permissions:
    • list objects in a bucket
    • create / get / delete / tag object in a bucket
  • Injected env variables: NAME, ARN

DynamoDB table

  • Permissions:
    • get / put / update / delete item in a table
    • scan / query a table
    • describe table stream
  • Injected env variables: NAME, ARN, STREAM_ARN

MongoDB Atlas cluster

  • Permissions:
    • Allows connection to a cluster with accessibilityMode set to scoping-workloads-in-vpc. To learn more about MongoDB Atlas clusters accessibility modes, refer to MongoDB Atlas cluster docs.
    • Creates an access "user" associated with the compute resource's role to allow secure, credential-less access to the cluster
  • Injected env variables: CONNECTION_STRING

Relational (SQL) database

  • Permissions:
    • Allows connection to a relational database with accessibilityMode set to scoping-workloads-in-vpc. To learn more about relational database accessibility modes, refer to Relational databases docs.
  • Injected env variables: CONNECTION_STRING, JDBC_CONNECTION_STRING, HOST, PORT (in case of aurora multi instance cluster additionally: READER_CONNECTION_STRING, READER_JDBC_CONNECTION_STRING, READER_HOST)

Redis cluster

  • Permissions:
    • Allows connection to a Redis cluster with accessibilityMode set to scoping-workloads-in-vpc. To learn more about Redis cluster accessibility modes, refer to the Redis clusters docs.
  • Injected env variables: HOST, READER_HOST, PORT

Event bus

  • Permissions:
    • publish events to the specified Event bus
  • Injected env variables: ARN

Function

  • Permissions:
    • invoke the specified function
    • invoke the specified function via url (if lambda has URL enabled)
  • Injected env variables: ARN

Batch job

  • Permissions:
    • submit batch-job instance into batch-job queue
    • list submitted job instances in a batch-job queue
    • describe / terminate a batch-job instance
    • list executions of state machine which executes the batch-job according to its strategy
    • start / terminate execution of a state machine which executes the batch-job according to its strategy
  • Injected env variables: JOB_DEFINITION_ARN, STATE_MACHINE_ARN

User auth pool

  • Permissions:
    • full control over the user pool (cognito-idp:*)
    • for more information about allowed methods refer to AWS docs
  • Injected env variables: ID, CLIENT_ID, ARN


SNS Topic

  • Permissions:
    • confirm/list subscriptions of the topic
    • publish/subscribe to the topic
    • unsubscribe from the topic
  • Injected env variables: ARN, NAME


SQS Queue

  • Permissions:
    • send/receive/delete message
    • change visibility of message
    • purge queue
  • Injected env variables: ARN, NAME, URL

Upstash Kafka topic

  • Injected env variables: TOPIC_NAME, TOPIC_ID, USERNAME, PASSWORD, TCP_ENDPOINT, REST_URL

Upstash Redis

  • Injected env variables: HOST, PORT, PASSWORD, REST_TOKEN, REST_URL, REDIS_URL

Private service

  • Injected env variables: ADDRESS

aws:ses (Macro)

  • Permissions:
    • gives full permissions to AWS SES (ses:*).
    • for more information about allowed methods refer to AWS docs

Using iamRoleStatements

For more granular control, you can provide a list of raw IAM role statements. These statements are added to the script's IAM role, allowing you to define precise permissions for any AWS resource.

resources:
  myScript:
    type: deployment-script
    properties:
      trigger: after:deploy
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: path/to/my-script.ts
      environment:
        - name: TOPIC_ARN
          value: $CfResourceParam('NotificationTopic', 'Arn')
      iamRoleStatements:
        - Resource:
            - $CfResourceParam('NotificationTopic', 'Arn')
          Effect: 'Allow'
          Action:
            - 'sns:Publish'

cloudformationResources:
  NotificationTopic:
    Type: AWS::SNS::Topic

API reference

StpIamRoleStatement  API reference

  • Resource (required)
  • Sid
  • Effect
  • Action
  • Condition
