Stacktape


Multi-Container workloads



A multi-container workload is a compute resource that runs one or more containers continuously. Unlike functions and batch jobs, which are event-driven, container workloads are designed for long-running applications and scale based on CPU and memory usage.

Like other Stacktape compute resources, container workloads are serverless, meaning you don't need to manage the underlying infrastructure. You can provide your container image by building it from source code, using a Dockerfile, or pulling a pre-built image.

Workloads run securely within a VPC, and you can expose container ports to the internet using integrations with HTTP API Gateways and Load Balancers.

Under the hood

Stacktape uses AWS Elastic Container Service (ECS) to orchestrate containers. You can run your containers using two launch types:

  • Fargate: A serverless compute engine that runs containers without requiring you to manage servers.
  • EC2 instances: Virtual machines that give you more control over the operating environment.

ECS services are self-healing, automatically replacing any unhealthy container instances. They also provide auto-scaling out of the box.

When to use

If you're unsure which compute resource to use, this table provides a comparison of container-based resources in Stacktape:

  • web-service: a continuously running container with a public endpoint and URL. Use-cases: public APIs, websites.
  • private-service: a continuously running container with a private endpoint. Use-cases: private APIs, services.
  • worker-service: a continuously running container not accessible from outside. Use-cases: continuous processing.
  • multi-container-workload: a custom multi-container workload where you can customize accessibility for each container. Use-cases: more complex scenarios requiring customization.
  • batch-job: a simple container job; the container is destroyed after the job is done. Use-cases: one-off/scheduled processing jobs.

Advantages

  • Control over environment: You can run any Docker image or build from your own Dockerfile.
  • Cost-effective for predictable loads: More economical than functions for applications with consistent traffic.
  • Load-balanced and auto-scalable: Automatically scales horizontally based on CPU and memory utilization.
  • High availability: Runs in multiple Availability Zones for resilience.
  • Secure by default: The underlying environment is securely managed by AWS.

Disadvantages

  • Slower scaling: Adding new container instances takes longer than scaling functions.
  • Not fully serverless: Cannot scale to zero, meaning you will always pay for at least one running instance.

Basic usage

import express from 'express';

const app = express();

app.get('/', async (req, res) => {
  res.send({ message: 'Hello' });
});

app.listen(process.env.PORT, () => {
  console.info(`Server running on port ${process.env.PORT}`);
});

Example server container written in TypeScript

resources:
  mainGateway:
    type: http-api-gateway

  apiServer:
    type: multi-container-workload
    properties:
      resources:
        cpu: 2
        memory: 2048
      scaling:
        minInstances: 1
        maxInstances: 5
      containers:
        - name: api-container
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/main.ts
          environment:
            - name: PORT
              value: 3000
          events:
            - type: http-api-gateway
              properties:
                method: '*'
                path: /{proxy+}
                containerPort: 3000
                httpApiGatewayName: mainGateway

Container connected to HTTP API Gateway

Containers

Every workload consists of one or more containers. You can configure the following properties for each container:

ContainerWorkloadContainer  API reference
  • name (Required)
  • packaging (Required)
  • essential
  • logging
  • dependsOn
  • environment
  • events
  • loadBalancerHealthCheck
  • internalHealthCheck
  • stopTimeout (Default: 2)
  • volumeMounts

Image

You can provide a container image in four ways:
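For example, this page's examples build the image from source with stacktape-image-buildpack; a pre-built image can be referenced instead. The sketch below assumes the `prebuilt-image` packaging type and its `image` property as documented elsewhere in the Stacktape docs - treat the exact schema as an assumption and check the packaging reference:

```yml
containers:
  - name: api-container
    packaging:
      # built from source by Stacktape (as used throughout this page)
      type: stacktape-image-buildpack
      properties:
        entryfilePath: src/main.ts
  - name: proxy
    packaging:
      # pulling an existing image instead (property names are an assumption)
      type: prebuilt-image
      properties:
        image: public.ecr.aws/nginx/nginx:latest
```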

Environment variables

The most commonly used types of environment variables:

  • Static - a string, number, or boolean (will be stringified).
  • Result of a custom directive.
  • Referenced property of another resource (using the $ResourceParam directive). To learn more, refer to the referencing parameters guide. If you use environment variables to inject information about resources into your script, see also the connectTo property, which simplifies this process.
  • Value of a secret (using the $Secret directive).
environment:
  - name: STATIC_ENV_VAR
    value: my-env-var
  - name: DYNAMICALLY_SET_ENV_VAR
    value: $MyCustomDirective('input-for-my-directive')
  - name: DB_HOST
    value: $ResourceParam('myDatabase', 'host')
  - name: DB_PASSWORD
    value: $Secret('dbSecret.password')

EnvironmentVar  API reference
  • name (Required)
  • value (Required)

Dependencies between containers

You can define dependencies between containers to control their startup order.

ContainerDependency  API reference
  • containerName (Required)
  • condition (Required)

For example, the frontend container will only start after the api-container has started.

resources:
  myApiGateway:
    type: http-api-gateway

  myMultiContainerWorkload:
    type: multi-container-workload
    properties:
      containers:
        - name: frontend-container
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/client/index.ts
          dependsOn:
            - containerName: api-container
              condition: START
          environment:
            - name: PORT
              value: 80
            - name: API_CONTAINER_PORT
              value: 3000
          events:
            - type: http-api-gateway
              properties:
                httpApiGatewayName: myApiGateway
                containerPort: 80
                path: '*'
                method: '*'
        - name: api-container
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/server/index.ts
          environment:
            - name: PORT
              value: 3000
          events:
            - type: workload-internal
              properties:
                containerPort: 3000
      resources:
        cpu: 2
        memory: 2048

Healthcheck

A health check monitors the container from within. If an essential container becomes unhealthy, the entire instance is automatically replaced.

ContainerHealthCheck  API reference
  • healthCheckCommand (Required)
  • intervalSeconds (Default: 30)
  • timeoutSeconds (Default: 5)
  • retries (Default: 3)
  • startPeriodSeconds

resources:
  myContainerWorkload:
    type: multi-container-workload
    properties:
      containers:
        - name: api-container
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/index.ts
          internalHealthCheck:
            healthCheckCommand: ['CMD-SHELL', 'curl -f http://localhost/ || exit 1']
            intervalSeconds: 20
            timeoutSeconds: 5
            startPeriodSeconds: 150
            retries: 2
      resources:
        cpu: 2
        memory: 2048

This example uses a shell command to send a curl request every 20 seconds. If the request fails or times out, the health check fails.

Shutdown

When a container instance is shut down, all containers receive a SIGTERM signal, giving them a chance to clean up gracefully. By default, they have 2 seconds before a SIGKILL signal is sent. You can adjust this with the stopTimeout property.

process.on('SIGTERM', () => {
  console.info('Received SIGTERM signal. Cleaning up and exiting process...');
  // Finish any outstanding requests, or close a database connection...
  process.exit(0);
});

Example of cleaning up before container shutdown.

Logging

Any output to stdout or stderr is captured and stored in a CloudWatch log group. You can view logs through the Stacktape Console, the stacktape stack-info command, or by streaming them with the stacktape logs command.
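Since every line written to stdout becomes a log event in CloudWatch, a common pattern is to emit one JSON object per line so entries stay searchable. A minimal sketch (the helper is illustrative, not a Stacktape API):

```typescript
// Minimal structured logger: one JSON object per line on stdout.
// CloudWatch stores each line as a separate log event.
const logLine = (
  level: 'info' | 'error',
  message: string,
  extra: Record<string, unknown> = {}
): string =>
  JSON.stringify({ timestamp: new Date().toISOString(), level, message, ...extra });

console.log(logLine('info', 'server started', { port: 3000 }));
```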

Forwarding logs

You can forward logs to third-party services. See the Log Forwarding documentation for more details.

Events

Events route traffic from an integration to a specified port on your container.

HTTP API event

Forwards requests from an HTTP API Gateway.

ContainerWorkloadHttpApiIntegration  API reference
  • type (Required)
  • properties.containerPort (Required)
  • properties.httpApiGatewayName (Required)
  • properties.method (Required)
  • properties.path (Required)
  • properties.authorizer
  • properties.payloadFormat (Default: '1.0')

resources:
  myApiGateway:
    type: http-api-gateway

  myApp:
    type: multi-container-workload
    properties:
      containers:
        - name: api-container
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/index.ts
          events:
            - type: http-api-gateway
              properties:
                httpApiGatewayName: myApiGateway
                containerPort: 80
                path: '/my-path'
                method: GET
      resources:
        cpu: 2
        memory: 2048

Incoming GET requests to /my-path on myApiGateway are routed to port 80 of the api-container.

Application Load Balancer event

Forwards requests from an Application Load Balancer. This allows for advanced routing based on path, query parameters, headers, and more.

ContainerWorkloadLoadBalancerIntegration  API reference
  • type (Required)
  • properties.containerPort (Required)
  • properties.loadBalancerName (Required)
  • properties.priority (Required)
  • properties.listenerPort
  • properties.paths
  • properties.methods
  • properties.hosts
  • properties.headers
  • properties.queryParams
  • properties.sourceIps

LbHeaderCondition  API reference
  • headerName (Required)
  • values (Required)

LbQueryParamCondition  API reference
  • paramName (Required)
  • values (Required)
resources:
  myLoadBalancer:
    type: application-load-balancer

  myApp:
    type: multi-container-workload
    properties:
      containers:
        - name: api-container
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/index.ts
          events:
            - type: application-load-balancer
              properties:
                loadBalancerName: myLoadBalancer
                containerPort: 80
                priority: 1
                paths: ['*']
      resources:
        cpu: 2
        memory: 2048

Network Load Balancer event

Forwards traffic from a Network Load Balancer.

resources:
  myLoadBalancer:
    type: 'network-load-balancer'
    properties:
      listeners:
        - port: 8080
          protocol: TLS

  myWorkload:
    type: 'multi-container-workload'
    properties:
      containers:
        - name: container1
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: containers/ts-container.ts
          events:
            - type: network-load-balancer
              properties:
                loadBalancerName: myLoadBalancer
                listenerPort: 8080
                containerPort: 8080
      resources:
        cpu: 0.25
        memory: 512

Internal port (workload-internal)

Opens a port for communication with other containers within the same workload.

ContainerWorkloadInternalIntegration  API reference
  • type (Required)
  • properties.containerPort (Required)
resources:
  myApiGateway:
    type: http-api-gateway

  myApp:
    type: multi-container-workload
    properties:
      containers:
        - name: frontend
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/frontend/index.ts
          dependsOn:
            - containerName: backend
              condition: START
          environment:
            - name: PORT
              value: 80
            - name: BACKEND_PORT
              value: 3000
          events:
            - type: http-api-gateway
              properties:
                httpApiGatewayName: myApiGateway
                containerPort: 80
                path: /my-path
                method: GET
        - name: backend
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/backend/index.ts
          environment:
            - name: PORT
              value: 3000
          events:
            - type: workload-internal
              properties:
                containerPort: 3000
      resources:
        cpu: 2
        memory: 2048

Private port (service-connect)

Opens a port for communication with other workloads in the same stack.

  • The combination of alias and container port creates a unique identifier. Other workloads can then reach the compute resource using a URL in the form protocol://alias:containerPort, for example http://my-service:8080 or grpc://appserver:8080.
  • By default, the alias is derived from the names of your resource and container, i.e. resourceName-containerName.
ContainerWorkloadServiceConnectIntegration  API reference
  • type (Required)
  • properties.containerPort (Required)
  • properties.alias
  • properties.protocol
resources:
  internalService:
    type: multi-container-workload
    properties:
      containers:
        - name: api
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/private/index.ts
          events:
            - type: service-connect
              properties:
                containerPort: 3000
      resources:
        cpu: 2
        memory: 2048

  publicService:
    type: multi-container-workload
    properties:
      containers:
        - name: api
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/public/index.ts
      resources:
        cpu: 2
        memory: 2048
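With the default alias derivation (resourceName-containerName), the publicService above could construct the URL of the internal service like this. The helper is illustrative, not part of Stacktape:

```typescript
// Build a Service Connect URL, mirroring the default alias derivation
// described above: resourceName-containerName.
const serviceConnectUrl = (
  resourceName: string,
  containerName: string,
  port: number,
  protocol = 'http'
): string => `${protocol}://${resourceName}-${containerName}:${port}`;

// internalService's "api" container listens on port 3000 (see config above)
const url = serviceConnectUrl('internalService', 'api', 3000);
// → 'http://internalService-api:3000'
```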

Resources

You can specify the CPU, memory, and EC2 instance types for your workload.

  • There are two underlying compute engines to choose from:

    • Fargate - abstracts server and cluster management away, letting you run containers without managing the underlying servers. Deployment and management are simpler, but you get less control over the computing environment.
    • EC2 (Elastic Compute Cloud) - provides granular control over the underlying servers (instances). By choosing instanceTypes, you get complete control over the computing environment and the ability to optimize for specific workloads.
  • To use Fargate: do NOT specify instanceTypes; specify the cpu and memory properties instead.

  • To use EC2 instances: specify instanceTypes.

If your workload has multiple containers, the assigned resources are shared between them.

ContainerWorkloadResourcesConfig  API reference
  • cpu
  • memory
  • instanceTypes
  • enableWarmPool

Using Fargate

If you omit the instanceTypes property, your workload will run on Fargate.

resources:
  myContainerWorkload:
    type: multi-container-workload
    properties:
      containers:
        - name: api-container
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/index.ts
      resources:
        cpu: 0.25
        memory: 512

Using EC2 instances

If you specify instanceTypes, your workload will run on EC2 instances.

  • EC2 instances are automatically added or removed to meet the scaling needs of your compute resource (see also the scaling property).
  • When using instanceTypes, we recommend specifying only one instance type and NOT setting the cpu or memory properties. Stacktape will then set cpu and memory to fit the instance precisely, resulting in optimal resource utilization.
  • Stacktape leverages ECS Managed Scaling with a target utilization of 100%. This means no unused EC2 instances (unused = not running your workload/service) are kept running; unused instances are terminated.
  • Ordering in the instanceTypes list matters. Instance types higher on the list are preferred. Only when an instance type higher on the list is unavailable will the next one be used.
  • For an exhaustive list of available EC2 instance types, refer to the AWS docs.

To ensure that your containers run on patched and up-to-date EC2 instances, instances are automatically refreshed (replaced) once a week (Sunday 00:00 UTC). Your compute resource stays available throughout this process.

resources:
  myContainerWorkload:
    type: multi-container-workload
    properties:
      containers:
        - name: api-container
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/index.ts
      resources:
        instanceTypes:
          - c5.large

Placing containers on EC2

Stacktape optimizes for 100% utilization of your EC2 instances. If you specify cpu and memory, AWS uses a binpack strategy to place as many workload instances as possible onto the available EC2 instances.
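As an illustration of what binpack placement does, here is a simplified sketch (not the actual ECS scheduler): each task goes onto the instance with the least remaining capacity that still fits it, packing instances densely before touching empty ones.

```typescript
// Simplified binpack placement on CPU: pick the instance with the least
// remaining CPU that can still fit the task.
interface Instance { id: string; freeCpu: number }

const placeTask = (instances: Instance[], taskCpu: number): Instance | undefined => {
  const candidates = instances
    .filter((i) => i.freeCpu >= taskCpu)
    .sort((a, b) => a.freeCpu - b.freeCpu); // least remaining capacity first
  const target = candidates[0];
  if (target) target.freeCpu -= taskCpu;
  return target;
};

// Two c5.large-sized instances (2 vCPU = 2048 CPU units each)
const fleet: Instance[] = [
  { id: 'i-1', freeCpu: 2048 },
  { id: 'i-2', freeCpu: 2048 },
];
placeTask(fleet, 512); // lands on i-1
placeTask(fleet, 512); // also lands on i-1 (binpack), leaving i-2 empty
```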

Using warm pool

Enable a warm pool to keep pre-initialized EC2 instances in a stopped state, ready for faster scaling. This is only supported for workloads with a single instance type.

resources:
  myWebService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/index.ts
      resources:
        instanceTypes:
          - c5.large
        enableWarmPool: true

Scaling

Configure the minimum and maximum number of concurrent workload instances and define a scaling policy based on CPU and memory utilization.

ContainerWorkloadScaling  API reference
  • minInstances (Default: 1)
  • maxInstances (Default: 1)
  • scalingPolicy

Scaling policy

A scaling policy triggers scaling actions when CPU or memory thresholds are crossed. The workload scales out aggressively when metrics are high and scales in more cautiously when they are low.

ContainerWorkloadScalingPolicy  API reference
  • keepAvgCpuUtilizationUnder (Default: 80)
  • keepAvgMemoryUtilizationUnder (Default: 80)
resources:
  myApiGateway:
    type: http-api-gateway

  myContainerWorkload:
    type: multi-container-workload
    properties:
      containers:
        - name: container-1
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/cont1/index.ts
          events:
            - type: http-api-gateway
              properties:
                httpApiGatewayName: myApiGateway
                containerPort: 80
                method: '*'
                path: '*'
        - name: container-2
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/cont1/index.ts
          events:
            - type: workload-internal
              properties:
                containerPort: 3000
      resources:
        cpu: 0.5
        memory: 1024
      scaling:
        minInstances: 1
        maxInstances: 5
        scalingPolicy:
          keepAvgMemoryUtilizationUnder: 80
          keepAvgCpuUtilizationUnder: 80
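Target-tracking scaling of this kind computes the desired instance count roughly as currentCount × currentUtilization / targetUtilization, rounded up and clamped to the min/max bounds. A sketch of the arithmetic (an approximation of the AWS behavior, not Stacktape code):

```typescript
// Approximate target-tracking math behind the scaling policy:
// scale proportionally to how far utilization is from the target.
const desiredInstances = (
  current: number,
  avgUtilization: number, // observed CPU or memory utilization, percent
  target: number,         // keepAvg...Under threshold, percent
  min: number,
  max: number
): number => {
  const raw = Math.ceil(current * (avgUtilization / target));
  return Math.min(max, Math.max(min, raw));
};

desiredInstances(2, 90, 80, 1, 5); // → 3: two instances at 90% exceed the 80% target
```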

Storage

Each workload instance has 20GB of ephemeral storage, which is shared among all containers within that instance. This storage is deleted when the instance is removed. For persistent storage, use Buckets.

Accessing other resources

By default, workloads cannot access other AWS resources. You must grant permissions using IAM.

Using connectTo

The connectTo property is a simplified way to grant access to other Stacktape-managed resources.

resources:
  photosBucket:
    type: bucket

  myContainerWorkload:
    type: multi-container-workload
    properties:
      containers:
        - name: apiContainer
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/index.ts
      connectTo:
        # access to the bucket
        - photosBucket
        # access to AWS SES
        - aws:ses
      resources:
        cpu: 0.25
        memory: 512

By referencing resources (or services) in the connectTo list, Stacktape automatically:

  • configures the compute resource's IAM role permissions, if needed
  • sets up the security group rules required to allow access, if needed
  • injects environment variables containing information about the connected resource into the compute resource's runtime
    • environment variable names use upper snake case and are in the form STP_[RESOURCE_NAME]_[VARIABLE_NAME]
    • examples: STP_MY_DATABASE_CONNECTION_STRING or STP_MY_EVENT_BUS_ARN
    • the list of injected variables for each resource type is shown below
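The naming scheme can be expressed as a small helper (illustrative, not part of Stacktape):

```typescript
// Derive an injected env var name: STP_[RESOURCE_NAME]_[VARIABLE_NAME],
// where camelCase names become UPPER_SNAKE_CASE.
const toUpperSnake = (s: string): string =>
  s.replace(/([a-z0-9])([A-Z])/g, '$1_$2').toUpperCase();

const injectedEnvVarName = (resourceName: string, param: string): string =>
  `STP_${toUpperSnake(resourceName)}_${toUpperSnake(param)}`;

injectedEnvVarName('myDatabase', 'connectionString'); // → 'STP_MY_DATABASE_CONNECTION_STRING'
injectedEnvVarName('myEventBus', 'arn');              // → 'STP_MY_EVENT_BUS_ARN'
```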

Granted permissions and injected environment variables are different depending on resource type:


Bucket

  • Permissions:
    • list objects in a bucket
    • create / get / delete / tag object in a bucket
  • Injected env variables: NAME, ARN

DynamoDB table

  • Permissions:
    • get / put / update / delete item in a table
    • scan / query a table
    • describe table stream
  • Injected env variables: NAME, ARN, STREAM_ARN

MongoDB Atlas cluster

  • Permissions:
    • Allows connection to a cluster with accessibilityMode set to scoping-workloads-in-vpc. To learn more about MongoDB Atlas clusters accessibility modes, refer to MongoDB Atlas cluster docs.
    • Creates an access "user" associated with the compute resource's role to allow for secure credential-less access to the cluster
  • Injected env variables: CONNECTION_STRING

Relational (SQL) database

  • Permissions:
    • Allows connection to a relational database with accessibilityMode set to scoping-workloads-in-vpc. To learn more about relational database accessibility modes, refer to Relational databases docs.
  • Injected env variables: CONNECTION_STRING, JDBC_CONNECTION_STRING, HOST, PORT (in case of aurora multi instance cluster additionally: READER_CONNECTION_STRING, READER_JDBC_CONNECTION_STRING, READER_HOST)

Redis cluster

  • Permissions:
    • Allows connection to a redis cluster with accessibilityMode set to scoping-workloads-in-vpc. To learn more about redis cluster accessibility modes, refer to Redis clusters docs.
  • Injected env variables: HOST, READER_HOST, PORT

Event bus

  • Permissions:
    • publish events to the specified Event bus
  • Injected env variables: ARN

Function

  • Permissions:
    • invoke the specified function
    • invoke the specified function via url (if lambda has URL enabled)
  • Injected env variables: ARN

Batch job

  • Permissions:
    • submit batch-job instance into batch-job queue
    • list submitted job instances in a batch-job queue
    • describe / terminate a batch-job instance
    • list executions of state machine which executes the batch-job according to its strategy
    • start / terminate execution of a state machine which executes the batch-job according to its strategy
  • Injected env variables: JOB_DEFINITION_ARN, STATE_MACHINE_ARN

User auth pool

  • Permissions:
    • full control over the user pool (cognito-idp:*)
    • for more information about allowed methods refer to AWS docs
  • Injected env variables: ID, CLIENT_ID, ARN


SNS Topic

  • Permissions:
    • confirm/list subscriptions of the topic
    • publish/subscribe to the topic
    • unsubscribe from the topic
  • Injected env variables: ARN, NAME


SQS Queue

  • Permissions:
    • send/receive/delete message
    • change visibility of message
    • purge queue
  • Injected env variables: ARN, NAME, URL

Upstash Kafka topic

  • Injected env variables: TOPIC_NAME, TOPIC_ID, USERNAME, PASSWORD, TCP_ENDPOINT, REST_URL

Upstash Redis

  • Injected env variables: HOST, PORT, PASSWORD, REST_TOKEN, REST_URL, REDIS_URL

Private service

  • Injected env variables: ADDRESS

aws:ses (Macro)

  • Permissions:
    • gives full permissions to aws ses (ses:*).
    • for more information about allowed methods refer to AWS docs

Using iamRoleStatements

For fine-grained control, you can provide raw IAM role statements.

resources:
  myContainerWorkload:
    type: multi-container-workload
    properties:
      containers:
        - name: apiContainer
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: server/index.ts
      iamRoleStatements:
        - Resource:
            - $CfResourceParam('NotificationTopic', 'Arn')
          Effect: 'Allow'
          Action:
            - 'sns:Publish'
      resources:
        cpu: 2
        memory: 2048

cloudformationResources:
  NotificationTopic:
    Type: 'AWS::SNS::Topic'
StpIamRoleStatement  API reference
  • Resource (Required)
  • Sid
  • Effect
  • Action
  • Condition

Deployment strategies

By default, Stacktape uses a rolling update strategy. You can choose a different strategy using the deployment property.

  • Using deployment, you can update the container workload in a live environment safely, by shifting traffic to the new version gradually.
  • The gradual traffic shift gives you the opportunity to test and monitor the workload during the update and, in case of a problem, roll back quickly.
  • Deployment supports multiple strategies:
    • Canary10Percent5Minutes - shifts 10 percent of traffic in the first increment. The remaining 90 percent is deployed five minutes later.
    • Canary10Percent15Minutes - shifts 10 percent of traffic in the first increment. The remaining 90 percent is deployed 15 minutes later.
    • Linear10PercentEvery1Minute - shifts 10 percent of traffic every minute until all traffic is shifted.
    • Linear10PercentEvery3Minutes - shifts 10 percent of traffic every three minutes until all traffic is shifted.
    • AllAtOnce - shifts all traffic to the updated container workload at once.
  • You can validate or abort a deployment (update) using lambda-function hooks.

    When using deployment, your container workload must use the application-load-balancer event integration.
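The strategies differ only in step size and interval. The fraction of traffic on the new version over time can be sketched as follows (an illustration of the CodeDeploy behavior, not a Stacktape API):

```typescript
// Percent of traffic on the new version after a given number of minutes,
// for the deployment strategies listed above.
const trafficOnNewVersion = (strategy: string, minutesElapsed: number): number => {
  switch (strategy) {
    case 'AllAtOnce':
      return 100;
    case 'Canary10Percent5Minutes':
      return minutesElapsed < 5 ? 10 : 100;
    case 'Canary10Percent15Minutes':
      return minutesElapsed < 15 ? 10 : 100;
    case 'Linear10PercentEvery1Minute':
      return Math.min(100, 10 + 10 * Math.floor(minutesElapsed));
    case 'Linear10PercentEvery3Minutes':
      return Math.min(100, 10 + 10 * Math.floor(minutesElapsed / 3));
    default:
      throw new Error(`unknown strategy: ${strategy}`);
  }
};

trafficOnNewVersion('Canary10Percent5Minutes', 0);     // → 10
trafficOnNewVersion('Linear10PercentEvery1Minute', 4); // → 50
```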

ContainerWorkloadDeploymentConfig  API reference
  • strategy (Required)
  • beforeAllowTrafficFunction
  • afterTrafficShiftFunction
  • testListenerPort
resources:
  myLoadBalancer:
    type: application-load-balancer

  myApp:
    type: multi-container-workload
    properties:
      containers:
        - name: api-container
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/index.ts
          events:
            - type: application-load-balancer
              properties:
                loadBalancerName: myLoadBalancer
                containerPort: 80
                priority: 1
                paths: ['*']
      resources:
        cpu: 2
        memory: 2048
      deployment:
        strategy: Canary10Percent5Minutes

Hook functions

You can use hook functions to perform checks during deployment, including sending test traffic to a new version before it receives production traffic.

resources:
  myLoadBalancer:
    type: application-load-balancer

  myApp:
    type: multi-container-workload
    properties:
      containers:
        - name: api-container
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/index.ts
          events:
            - type: application-load-balancer
              properties:
                loadBalancerName: myLoadBalancer
                containerPort: 80
                priority: 1
                paths: ['*']
      resources:
        cpu: 2
        memory: 2048
      deployment:
        strategy: Canary10Percent5Minutes
        afterTrafficShiftFunction: validateDeployment

  validateDeployment:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: src/validate-deployment.ts
import { CodeDeployClient, PutLifecycleEventHookExecutionStatusCommand } from '@aws-sdk/client-codedeploy';

const client = new CodeDeployClient({});

export default async (event) => {
  // read DeploymentId and LifecycleEventHookExecutionId from the payload
  const { DeploymentId, LifecycleEventHookExecutionId } = event;
  // perform validations here
  await client.send(
    new PutLifecycleEventHookExecutionStatusCommand({
      deploymentId: DeploymentId,
      lifecycleEventHookExecutionId: LifecycleEventHookExecutionId,
      status: 'Succeeded' // status can be 'Succeeded' or 'Failed'
    })
  );
};

Default VPC connection

Container workloads are connected to the default VPC of your stack by default. This allows them to communicate with other VPC-enabled resources without extra configuration.

Referenceable parameters

Currently, no parameters can be referenced.

Pricing

You are charged for:

  • Virtual CPU per hour
  • Memory per hour

Pricing is rounded to the nearest second with a one-minute minimum. For details, see the Fargate pricing page.
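The billing arithmetic can be sketched as follows. The rates below are hypothetical placeholders, not actual Fargate prices; only the structure (vCPU-hours plus GB-hours, per-second billing with a one-minute minimum) reflects the text above:

```typescript
// Fargate-style billing sketch. Rates are HYPOTHETICAL placeholders;
// see the Fargate pricing page for real, region-specific rates.
const HYPOTHETICAL_VCPU_PER_HOUR = 0.04; // USD, placeholder
const HYPOTHETICAL_GB_PER_HOUR = 0.004;  // USD, placeholder

const fargateCost = (vCpu: number, memoryGb: number, seconds: number): number => {
  const billedSeconds = Math.max(seconds, 60); // one-minute minimum
  const hours = billedSeconds / 3600;
  return vCpu * HYPOTHETICAL_VCPU_PER_HOUR * hours
       + memoryGb * HYPOTHETICAL_GB_PER_HOUR * hours;
};

// 0.25 vCPU / 0.5 GB running for a full day (24 h), at the placeholder rates:
fargateCost(0.25, 0.5, 24 * 3600);
```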

API reference

ContainerWorkload  API reference
  • type (Required)
  • properties.containers (Required)
  • properties.resources (Required)
  • properties.scaling
  • properties.deployment
  • properties.enableRemoteSessions
  • properties.connectTo
  • properties.iamRoleStatements
  • overrides

CognitoAuthorizer  API reference
  • type (Required)
  • properties.userPoolName (Required)
  • properties.identitySources

LambdaAuthorizer  API reference
  • type (Required)
  • properties.functionName (Required)
  • properties.iamResponse
  • properties.identitySources
  • properties.cacheResultSeconds

StpIamRoleStatement  API reference
  • Resource (Required)
  • Sid
  • Effect
  • Action
  • Condition
