Overview and basic concepts
Deployment script resources enable you to execute a custom script as part of the deployment process. In the background, the script is bundled into a lambda function which is triggered during the deploy or delete process. You can pass information about your infrastructure into the script using parameters and environment variables, and grant the script permissions to stack resources. This gives the script the ability to interact with other parts of the deployed infrastructure.
When to use
Performing provisioning steps related to infrastructure - Deployment of infrastructure is only a part of having the application successfully running. You might need to seed the database or run migrations, both of which are a great fit for a deployment script. Another example might be running a smoke test from your deployment script to ensure that everything is running correctly after deployment.
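As a sketch of the smoke-test idea, the handler below validates the API URL it receives and fails the deployment by throwing. The `apiURL` parameter name and the https check are illustrative assumptions for this sketch, not part of the Stacktape API.

```typescript
// Illustrative smoke-test skeleton for a deployment script handler.
// The `apiURL` parameter name is an assumption for this sketch.
type ScriptEvent = { apiURL?: string };

export const assertApiUrl = (event: ScriptEvent): string => {
  if (!event.apiURL || !event.apiURL.startsWith('https://')) {
    // throwing inside the script makes the whole deployment fail
    throw new Error('apiURL parameter is missing or not https');
  }
  return event.apiURL;
};

export default async (event: ScriptEvent) => {
  const url = assertApiUrl(event);
  // a real smoke test would now call the API, e.g. with fetch(url)
  return { tested: url };
};
```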
Basic usage
Type: string "deployment-script"
Configures trigger for the script
Type: string ENUM
Possible values: after:deploy, before:delete
- Currently available options for triggering the script are:
  - after:deploy - executes the script at the end of the stack deploy operation (after all resources are deployed). If the script fails, the whole deployment fails and the stack will be rolled back.
  - before:delete - executes the script before the stack delete operation starts deleting resources. NOTE that even if the script fails, delete will continue and delete all resources.
- Besides triggering the script during stack operations, you can trigger it manually using the stacktape deployment-script:run command.
Configures how your script code is turned into a deployment package (deployment artifact)
Type: (StpBuildpackLambdaPackaging or CustomArtifactLambdaPackaging)
- Currently supported packaging types are:
  - stacktape-lambda-buildpack - Stacktape automatically builds your source code from the supplied source file path.
  - custom-artifact - You provide a path to your own lambda artifact. Stacktape will zip it for you if it's not zipped.
- Your deployment artifacts are automatically uploaded to the S3 deployment bucket.
Runtime used for script lambda execution environment
Type: string ENUM
Possible values: dotnetcore2.1, go1.x, java11, java8, nodejs12.x, nodejs14.x, python2.7, python3.6, python3.7, python3.8, python3.9, ruby2.5
- Stacktape automatically detects the script's language and uses the latest runtime version associated with that language
- Example: uses nodejs14.x for all files ending with .js and .ts
- You might want to use an older version if some of your dependencies are not compatible with the default runtime version
Environment variables injected to the script.
Type: Array of EnvironmentVar
- Environment variables can be used to inject information about infrastructure (database URLs, secrets, ...) into the script's runtime
- If you wish to pass complex objects into your script, use parameters instead
Parameters which will be passed to the script handler during execution
Type: Object
- Parameters can be used to pass complex information to your script handler
You cannot pass secret values (i.e. using the $Secret directive) using parameters. To pass secret values, use environment variables instead.
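To illustrate the split, a handler might read complex values from the event (passed as parameters) and secrets from environment variables. The `testPaths` and `DATABASE_URL` names below are assumptions for this sketch, not part of the Stacktape API.

```typescript
// Sketch: complex values arrive on the handler's event (parameters),
// while secrets and simple values arrive via environment variables.
// `testPaths` and `DATABASE_URL` are illustrative names.
export const handler = async (event: { testPaths?: string[] }) => {
  const paths = event.testPaths ?? []; // from `parameters`
  const dbUrl = process.env.DATABASE_URL; // from `environment`
  return { pathCount: paths.length, hasDbUrl: Boolean(dbUrl) };
};

export default handler;
```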
Amount of memory (in MB) available to the script lambda execution environment.
Type: number
- Must be between 128 MB and 10,240 MB in 1-MB increments.
- The amount of CPU power available to the script lambda execution environment is also set using the memory property - it's proportionate to the amount of available memory.
- A lambda function with 1,797 MB of memory has CPU power equal to 1 virtual CPU. A lambda function can have a maximum of 6 vCPUs (at 10,240 MB of RAM).
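Given the ratio above (roughly 1 vCPU per 1,797 MB), you can estimate the CPU share for a given memory setting. `approxVcpus` is a hypothetical helper for illustration, not a Stacktape function.

```typescript
// Estimate vCPUs available to the script lambda from its memory setting,
// using the ~1 vCPU per 1797 MB ratio stated above.
// Hypothetical helper, not part of Stacktape.
const MB_PER_VCPU = 1797;

export const approxVcpus = (memoryMb: number): number => {
  if (memoryMb < 128 || memoryMb > 10240) {
    throw new Error('memory must be between 128 and 10240 MB');
  }
  return memoryMb / MB_PER_VCPU;
};
```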
Maximum amount of time (in seconds) the script is allowed to run
Type: number
Maximum allowed time is 900 seconds.
Connects the lambda function that executes the script to the default VPC
Type: boolean
- Functions are NOT connected to the default VPC of your stack by default.
- To communicate with certain resources inside your VPC, you need to connect your function to the VPC. The most common use-case for this is accessing a relational-database or a mongo-db-atlas-cluster that is configured to only allow connections from a VPC.
- Connecting a function to the VPC makes it lose connection to the internet (outbound requests will fail). To restore a connection to the internet, you would need to use a NAT Gateway. We don't recommend this, and advise you to re-architect your application instead.
- To learn more about VPCs, refer to VPCs Stacktape documentation.
Configures access to other resources of your stack (such as relational-databases, buckets, event-buses, etc.).
Type: AccessControl
Overrides one or more properties of the specified child resource.
Type: Object
- Child resources are specified using their cloudformation logical id (e.g. MyBucketBucket).
- To see all configurable child resources for a given Stacktape resource, use the stacktape stack-info --detailed command.
- To see the list of properties that can be overridden, refer to AWS Cloudformation docs.
In this example we are using a deployment-script to test a public API endpoint and integrations after deployment.
```typescript
import fetch from 'node-fetch';

export default async (event) => {
  const { apiURL } = event;
  // do whatever you want with apiURL ...
  const result = await fetch(apiURL);
  // fail the script if the test fails
  if (result.status === 404) {
    throw Error('API test failed');
  }
};
```
Example deployment script function written in TypeScript (test-url.ts)
```yaml
resources:
  myHttpApi:
    type: http-api-gateway

  testApiMethods:
    type: deployment-script
    properties:
      trigger: after:deploy
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: test-url.ts
      parameters:
        apiURL: $ResourceParam('myHttpApi', 'url')
```
Stacktape configuration with deployment script
Trigger
The trigger property determines when the script is triggered.
- Currently available options for triggering the script are:
  - after:deploy - executes the script at the end of the stack deploy operation (after all resources are deployed). If the script fails, the whole deployment fails and the stack will be rolled back.
  - before:delete - executes the script before the stack delete operation starts deleting resources. NOTE that even if the script fails, delete will continue and delete all resources.
- Besides triggering the script during stack operations, you can trigger it manually using the stacktape deployment-script:run command.
```yaml
resources:
  myHttpApi:
    type: http-api-gateway

  testApiMethods:
    type: deployment-script
    properties:
      trigger: after:deploy
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: test-url.ts
      parameters:
        apiURL: $ResourceParam('myHttpApi', 'url')
```
Scripts that use triggers associated with the stack delete operation (before:delete) are only executed if the script was present during the last deployment. In other words, the script must first be added to the stack during a deployment so that it can be triggered during the delete operation.
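A common before:delete use-case is cleaning up data the stack created, for example emptying a bucket. Since S3's DeleteObjects call accepts at most 1,000 keys per request, a cleanup script typically batches keys first; `chunkKeys` below is a hypothetical helper sketching that step.

```typescript
// Hypothetical helper for a `before:delete` cleanup script: S3's
// DeleteObjects API accepts at most 1000 keys per request, so object
// keys must be split into batches before deleting them.
export const chunkKeys = (keys: string[], batchSize = 1000): string[][] => {
  const batches: string[][] = [];
  for (let i = 0; i < keys.length; i += batchSize) {
    batches.push(keys.slice(i, i + batchSize));
  }
  return batches;
};
```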
Packaging
During deployment, scripts are packaged and executed as lambda functions. Refer to the lambda functions packaging docs.
Parameters
- Parameters can be used to pass complex information to your script handler
You cannot pass secret values (i.e. using the $Secret directive) using parameters. To pass secret values, use environment variables instead.
```yaml
resources:
  myHttpApi:
    type: http-api-gateway

  testApiMethods:
    type: deployment-script
    properties:
      trigger: after:deploy
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: test-url.ts
      parameters:
        apiURL: $ResourceParam('myHttpApi', 'url')
        testPaths:
          - my/path/1
          - my/path/2
```
Environment
- Environment variables can be used to inject information about infrastructure (database URLs, secrets, ...) into the script's runtime
- If you wish to pass complex objects into your script, use parameters instead
Name of the environment variable
Type: string
Value of the environment variable
Type: (string or number or boolean)
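Note that even values declared as number or boolean reach the handler as strings in process.env, since environment variables are always strings at runtime. A small coercion helper (an illustrative sketch, not part of Stacktape) can restore the types:

```typescript
// Environment variables always arrive as strings at runtime, even when
// declared as number or boolean in the config. Illustrative coercion helper.
export const parseEnvValue = (raw: string): string | number | boolean => {
  if (raw === 'true') return true;
  if (raw === 'false') return false;
  const asNumber = Number(raw);
  if (raw.trim() !== '' && !Number.isNaN(asNumber)) return asNumber;
  return raw;
};
```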
```yaml
resources:
  myDatabase:
    type: relational-database
    properties:
      credentials:
        masterUserName: adminuser
        masterUserPassword: $Secret('my-database-password')
      engine:
        type: aurora-postgresql-serverless

  testDatabase:
    type: deployment-script
    properties:
      trigger: after:deploy
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: test-url.ts
      environment:
        - name: DATABASE_URL
          value: $ResourceParam('myDatabase', 'connectionString')
```
Accessing other resources
For most of the AWS resources, resource-to-resource communication is not allowed by default. This helps to enforce security and resource isolation. Access must be explicitly granted using IAM (Identity and Access Management) permissions.
Access control of Relational Databases is not managed by IAM. These resources are not "cloud-native" by design and have their own access control mechanism (connection string with username and password). They are accessible by default, and you don't need to grant any extra IAM permissions. You can further restrict the access to your relational databases by configuring their access control mode.
Stacktape automatically handles IAM permissions for the underlying AWS services that it creates (i.e. granting script permission to write logs to Cloudwatch).
Raw AWS IAM role statements appended to your resource's role.
Type: Array of StpIamRoleStatement
Names of the resources that will receive basic permissions.
Type: Array of string
Granted permissions:
Bucket
- list objects in a bucket
- create / get / delete / tag object in a bucket
DynamoDb Table
- get / put / update / delete item in a table
- scan / query a table
- describe table stream
MongoDb Atlas Cluster
- Allows connection to a cluster with accessibilityMode set to scoping-workloads-in-vpc. To learn more about MongoDb Atlas clusters accessibility modes, refer to MongoDB Atlas cluster docs.
Relational database
- Allows connection to a relational database with accessibilityMode set to scoping-workloads-in-vpc. To learn more about relational database accessibility modes, refer to Relational databases docs.
Redis cluster
- Allows connection to a redis cluster with accessibilityMode set to scoping-workloads-in-vpc. To learn more about redis cluster accessibility modes, refer to Redis clusters docs.
Event bus
- publish events to the specified Event bus
Function
- invoke the specified function
Batch job
- submit batch-job instance into batch-job queue
- list submitted job instances in a batch-job queue
- describe / terminate a batch-job instance
- list executions of state machine which executes the batch-job according to its strategy
- start / terminate execution of a state machine which executes the batch-job according to its strategy
If your script needs to communicate with other infrastructure components, you need to add permissions manually. You can do this in two ways:
Using allowAccessTo
- List of resource names that this script will be able to access (basic IAM permissions will be granted automatically). Granted permissions differ based on the resource.
- Works only for resources managed by Stacktape (not arbitrary Cloudformation resources)
- This is useful if you don't want to deal with IAM permissions yourself. Handling permissions using raw IAM role statements can be cumbersome, time-consuming and error-prone.
```yaml
resources:
  myScript:
    type: deployment-script
    properties:
      trigger: after:deploy
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: path/to/my-script.ts
      environment:
        - name: MY_BUCKET_NAME
          value: $ResourceParam('myBucket', 'name')
      accessControl:
        allowAccessTo:
          - myBucket

  myBucket:
    type: bucket
```
Using iamRoleStatements
- IAM Role statements are a low-level, granular and AWS-native way of controlling access to your resources.
- IAM Role statements can be used to add permissions to any Cloudformation resource.
- Configured IAM role statement objects will be appended to the script's role.
```yaml
resources:
  myScript:
    type: deployment-script
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: path/to/my-script.ts
      environment:
        - name: TOPIC_ARN
          value: $CfResourceParam('NotificationTopic', 'Arn')
      accessControl:
        iamRoleStatements:
          - Resource:
              - $CfResourceParam('NotificationTopic', 'Arn')
            Effect: 'Allow'
            Action:
              - 'sns:Publish'

cloudformationResources:
  NotificationTopic:
    Type: AWS::SNS::Topic
```
API reference