Multi-container workloads
A multi-container workload is a compute resource that runs one or more containers continuously. Unlike functions and batch jobs, which are event-driven, container workloads are designed for long-running applications and scale based on CPU and memory usage.
Like other Stacktape compute resources, container workloads are serverless, meaning you don't need to manage the underlying infrastructure. You can provide your container image by building it from source code, using a Dockerfile, or pulling a pre-built image.
Workloads run securely within a VPC, and you can expose container ports to the internet using integrations with HTTP API Gateways and Load Balancers.
Under the hood
Stacktape uses AWS Elastic Container Service (ECS) to orchestrate containers. You can run your containers using two launch types:
- Fargate: A serverless compute engine that runs containers without requiring you to manage servers.
- EC2 instances: Virtual machines that give you more control over the operating environment.
ECS services are self-healing, automatically replacing any unhealthy container instances. They also provide auto-scaling out of the box.
When to use
If you're unsure which compute resource to use, this table provides a comparison of container-based resources in Stacktape:
| Resource type | Description | Use-cases |
|---|---|---|
| web-service | continuously running container with public endpoint and URL | public APIs, websites |
| private-service | continuously running container with private endpoint | private APIs, services |
| worker-service | continuously running container not accessible from outside | continuous processing |
| multi-container-workload | custom multi container workload - you can customize accessibility for each container | more complex use-cases requiring customization |
| batch-job | simple container job - container is destroyed after job is done | one-off/scheduled processing jobs |
Advantages
- Control over environment: You can run any Docker image or build from your own Dockerfile.
- Cost-effective for predictable loads: More economical than functions for applications with consistent traffic.
- Load-balanced and auto-scalable: Automatically scales horizontally based on CPU and memory utilization.
- High availability: Runs in multiple Availability Zones for resilience.
- Secure by default: The underlying environment is securely managed by AWS.
Disadvantages
- Slower scaling: Adding new container instances takes longer than scaling functions.
- Not fully serverless: Cannot scale to zero, meaning you will always pay for at least one running instance.
Basic usage
```typescript
import express from 'express';

const app = express();

app.get('/', async (req, res) => {
  res.send({ message: 'Hello' });
});

app.listen(process.env.PORT, () => {
  console.info(`Server running on port ${process.env.PORT}`);
});
```
Example server container written in TypeScript
```yml
resources:
  mainGateway:
    type: http-api-gateway

  apiServer:
    type: multi-container-workload
    properties:
      resources:
        cpu: 2
        memory: 2048
      scaling:
        minInstances: 1
        maxInstances: 5
      containers:
        - name: api-container
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/main.ts
          environment:
            - name: PORT
              value: 3000
          events:
            - type: http-api-gateway
              properties:
                method: '*'
                path: /{proxy+}
                containerPort: 3000
                httpApiGatewayName: mainGateway
```
Container connected to HTTP API Gateway
Containers
Every workload consists of one or more containers. You can configure the following properties for each container:
Image
You can provide a container image in four ways:
Environment variables
A list of environment variables passed to the container at runtime.
Values can be:
- A static string, number, or boolean.
- The result of a custom directive.
- A reference to another resource's parameter using the `$ResourceParam` directive.
- A value from a secret using the `$Secret` directive.
```yml
environment:
  - name: STATIC_ENV_VAR
    value: my-env-var
  - name: DYNAMICALLY_SET_ENV_VAR
    value: $MyCustomDirective('input-for-my-directive')
  - name: DB_HOST
    value: $ResourceParam('myDatabase', 'host')
  - name: DB_PASSWORD
    value: $Secret('dbSecret.password')
```
Dependencies between containers
You can define dependencies between containers to control their startup order.
For example, the frontend container will only start after the backend container is running successfully.
```yml
resources:
  myApiGateway:
    type: http-api-gateway

  myMultiContainerWorkload:
    type: multi-container-workload
    properties:
      containers:
        - name: frontend-container
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/client/index.ts
          dependsOn:
            - containerName: api-container
              condition: START
          environment:
            - name: PORT
              value: 80
            - name: API_CONTAINER_PORT
              value: 3000
          events:
            - type: http-api-gateway
              properties:
                httpApiGatewayName: myApiGateway
                containerPort: 80
                path: '*'
                method: '*'
        - name: api-container
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/server/index.ts
          environment:
            - name: PORT
              value: 3000
          events:
            - type: workload-internal
              properties:
                containerPort: 3000
      resources:
        cpu: 2
        memory: 2048
```
Healthcheck
A health check monitors the container from within. If an essential container becomes unhealthy, the entire instance is automatically replaced.
```yml
resources:
  myContainerWorkload:
    type: multi-container-workload
    properties:
      containers:
        - name: api-container
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/index.ts
          internalHealthCheck:
            healthCheckCommand: ['CMD-SHELL', 'curl -f http://localhost/ || exit 1']
            intervalSeconds: 20
            timeoutSeconds: 5
            startPeriodSeconds: 150
            retries: 2
      resources:
        cpu: 2
        memory: 2048
```
This example uses a shell command to send a curl request every 20 seconds. If the request fails or times out, the health check fails.
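For the curl-based check above to pass, the container must return a successful response on `/`. A minimal sketch of readiness logic (the `dbConnected` flag and the wiring comment are illustrative, not part of the Stacktape API):

```typescript
// Illustrative readiness logic: report healthy only once dependencies are up.
// `dbConnected` is a stand-in for real dependency checks (database, cache, ...).
function healthStatus(dbConnected: boolean): { code: number; body: string } {
  return dbConnected ? { code: 200, body: 'ok' } : { code: 503, body: 'unavailable' };
}

// Wiring into the Express server from the basic usage example:
// app.get('/', (req, res) => {
//   const { code, body } = healthStatus(isDbConnected());
//   res.status(code).send(body);
// });
```

Returning 503 while a dependency is down lets the health check replace only genuinely stuck instances, while `startPeriodSeconds` gives the container time to initialize before checks count against it.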
Shutdown
When a container instance is shut down, all containers receive a SIGTERM signal, giving them a chance to clean up gracefully. By default, they have 2 seconds before a SIGKILL signal is sent. You can adjust this with the stopTimeout property.
```typescript
process.on('SIGTERM', () => {
  console.info('Received SIGTERM signal. Cleaning up and exiting process...');
  // Finish any outstanding requests, or close a database connection...
  process.exit(0);
});
```
Example of cleaning up before container shutdown.
Logging
Any output to stdout or stderr is captured and stored in a CloudWatch log group. You can view logs through the Stacktape Console, the stacktape stack-info command, or by streaming them with the stacktape logs command.
Forwarding logs
You can forward logs to third-party services. See the Log Forwarding documentation for more details.
Events
Events route traffic from an integration to a specified port on your container.
HTTP API event
Forwards requests from an HTTP API Gateway.
```yml
resources:
  myApiGateway:
    type: http-api-gateway

  myApp:
    type: multi-container-workload
    properties:
      containers:
        - name: api-container
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/index.ts
          events:
            - type: http-api-gateway
              properties:
                httpApiGatewayName: myApiGateway
                containerPort: 80
                path: '/my-path'
                method: GET
      resources:
        cpu: 2
        memory: 2048
```
Incoming GET requests to /my-path on myApiGateway are routed to port 80 of the api-container.
Application Load Balancer event
Forwards requests from an Application Load Balancer. This allows for advanced routing based on path, query parameters, headers, and more.
```yml
resources:
  myLoadBalancer:
    type: application-load-balancer

  myApp:
    type: multi-container-workload
    properties:
      containers:
        - name: api-container
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/index.ts
          events:
            - type: application-load-balancer
              properties:
                loadBalancerName: myLoadBalancer
                containerPort: 80
                priority: 1
                paths: ['*']
      resources:
        cpu: 2
        memory: 2048
```
Network Load Balancer event
Forwards traffic from a Network Load Balancer.
```yml
resources:
  myLoadBalancer:
    type: 'network-load-balancer'
    properties:
      listeners:
        - port: 8080
          protocol: TLS

  myWorkload:
    type: 'multi-container-workload'
    properties:
      containers:
        - name: container1
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: containers/ts-container.ts
          events:
            - type: network-load-balancer
              properties:
                loadBalancerName: myLoadBalancer
                listenerPort: 8080
                containerPort: 8080
      resources:
        cpu: 0.25
        memory: 512
```
Internal port (workload-internal)
Opens a port for communication with other containers within the same workload.
```yml
resources:
  myApiGateway:
    type: http-api-gateway

  myApp:
    type: multi-container-workload
    properties:
      containers:
        - name: frontend
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/frontend/index.ts
          dependsOn:
            - containerName: backend
              condition: START
          environment:
            - name: PORT
              value: 80
            - name: BACKEND_PORT
              value: 3000
          events:
            - type: http-api-gateway
              properties:
                httpApiGatewayName: myApiGateway
                containerPort: 80
                path: /my-path
                method: GET
        - name: backend
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/backend/index.ts
          environment:
            - name: PORT
              value: 3000
          events:
            - type: workload-internal
              properties:
                containerPort: 3000
      resources:
        cpu: 2
        memory: 2048
```
Private port (service-connect)
Opens a port for communication with other workloads in the same stack.
Other resources in the stack can connect to this service using a URL like protocol://alias:port (e.g., http://my-service:8080).
By default, the alias is derived from the resource and container names (e.g., my-resource-my-container).
```yml
resources:
  internalService:
    type: multi-container-workload
    properties:
      containers:
        - name: api
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/private/index.ts
          events:
            - type: service-connect
              properties:
                containerPort: 3000
      resources:
        cpu: 2
        memory: 2048

  publicService:
    type: multi-container-workload
    properties:
      containers:
        - name: api
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/public/index.ts
      resources:
        cpu: 2
        memory: 2048
```
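Following the default alias rule (resource name plus container name), `publicService` could reach `internalService` at `http://internalService-api:3000`. The helper below only assembles that URL and is a sketch, not part of the Stacktape API:

```typescript
// Builds the default service-connect URL: protocol://<resource>-<container>:<port>
function serviceConnectUrl(
  resourceName: string,
  containerName: string,
  port: number,
  protocol = 'http'
): string {
  return `${protocol}://${resourceName}-${containerName}:${port}`;
}

// e.g. inside publicService, calling the internal API:
// const res = await fetch(serviceConnectUrl('internalService', 'api', 3000) + '/users');
```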
Resources
You can specify the CPU, memory, and EC2 instance types for your workload.
You can choose between two compute engines:
- Fargate: A serverless option where you don't need to manage the underlying servers. You specify CPU and memory, and AWS handles the rest. This is the simplest way to run containers.
- EC2: This option gives you fine-grained control over the underlying virtual machines (instances). You can choose specific EC2 instance types to optimize for your workload's needs.
To use Fargate, specify the cpu and memory properties. To use EC2, specify the instanceTypes property.
If your workload has multiple containers, the assigned resources are shared between them.
Using Fargate
If you omit the instanceTypes property, your workload will run on Fargate.
```yml
resources:
  myContainerWorkload:
    type: multi-container-workload
    properties:
      containers:
        - name: api-container
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/index.ts
      resources:
        cpu: 0.25
        memory: 512
```
Using EC2 instances
If you specify instanceTypes, your workload will run on EC2 instances.
Instances are automatically added or removed to meet scaling demands.
Recommendation: For optimal resource utilization, specify a single instance type and omit the cpu and memory properties. Stacktape will then size the containers to fit the instance perfectly.
The order of instance types matters; the first in the list is preferred. For a full list of instance types, see the AWS EC2 instance types documentation.
Instances are automatically refreshed weekly to ensure they are patched and up-to-date. Your workload remains available during this process.
```yml
resources:
  myContainerWorkload:
    type: multi-container-workload
    properties:
      containers:
        - name: api-container
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/index.ts
      resources:
        instanceTypes:
          - c5.large
```
Placing containers on EC2
Stacktape optimizes for 100% utilization of your EC2 instances. If you specify cpu and memory, AWS uses a binpack strategy to place as many workload instances as possible onto the available EC2 instances.
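As an illustration of binpack placement: a c5.large provides 2 vCPU and 4 GiB of memory, so a workload instance requesting `cpu: 0.5` and `memory: 1024` fits four times per EC2 instance, limited by whichever resource runs out first. (In practice, a small slice of capacity is reserved for system agents, so real numbers can be slightly lower.) A sketch of the calculation:

```typescript
// How many workload instances fit on one EC2 instance,
// bounded by whichever resource (CPU or memory) is exhausted first.
function tasksPerInstance(
  instanceCpu: number,
  instanceMemMiB: number,
  taskCpu: number,
  taskMemMiB: number
): number {
  return Math.min(
    Math.floor(instanceCpu / taskCpu),
    Math.floor(instanceMemMiB / taskMemMiB)
  );
}

// c5.large (2 vCPU, 4096 MiB) with tasks of 0.5 vCPU / 1024 MiB -> 4 tasks
```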
Using warm pool
Enable a warm pool to keep pre-initialized EC2 instances in a stopped state, ready for faster scaling. This is only supported for workloads with a single instance type.
```yml
resources:
  myWebService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/index.ts
      resources:
        instanceTypes:
          - c5.large
        enableWarmPool: true
```
Scaling
Configure the minimum and maximum number of concurrent workload instances and define a scaling policy based on CPU and memory utilization.
Scaling policy
A scaling policy triggers scaling actions when CPU or memory thresholds are crossed. The workload scales out aggressively when metrics are high and scales in more cautiously when they are low.
```yml
resources:
  myContainerWorkload:
    type: multi-container-workload
    properties:
      containers:
        - name: container-1
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/cont1/index.ts
          events:
            - type: http-api-gateway
              properties:
                httpApiGatewayName: myApiGateway
                containerPort: 80
                method: '*'
                path: '*'
        - name: container-2
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/cont1/index.ts
          events:
            - type: workload-internal
              properties:
                containerPort: 3000
      resources:
        cpu: 0.5
        memory: 1024
      scaling:
        minInstances: 1
        maxInstances: 5
        scalingPolicy:
          keepAvgMemoryUtilizationUnder: 80
          keepAvgCpuUtilizationUnder: 80
```
Storage
Each workload instance has 20GB of ephemeral storage, which is shared among all containers within that instance. This storage is deleted when the instance is removed. For persistent storage, use Buckets.
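Ephemeral storage behaves like a normal local filesystem, so containers can write scratch files anywhere they have write access; just don't rely on anything surviving an instance replacement. A minimal sketch (the temp-dir path is illustrative):

```typescript
import { writeFileSync, readFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// Write a scratch file to local (ephemeral) storage and read it back.
// Anything written here is lost when the workload instance is replaced;
// durable data belongs in a Bucket or database instead.
function writeScratch(name: string, data: string): string {
  const file = join(tmpdir(), name);
  writeFileSync(file, data);
  return readFileSync(file, 'utf8');
}
```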
Accessing other resources
By default, workloads cannot access other AWS resources. You must grant permissions using IAM.
Using connectTo
The connectTo property is a simplified way to grant access to other Stacktape-managed resources.
```yml
resources:
  photosBucket:
    type: bucket

  myContainerWorkload:
    type: multi-container-workload
    properties:
      containers:
        - name: apiContainer
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/index.ts
      connectTo:
        # access to the bucket
        - photosBucket
        # access to AWS SES
        - aws:ses
      resources:
        cpu: 0.25
        memory: 512
```
Configures access to other resources in your stack and AWS services. By specifying resources here, Stacktape automatically:
- Configures IAM role permissions.
- Sets up security group rules to allow network traffic.
- Injects environment variables with connection details into the compute resource.
Environment variables are named STP_[RESOURCE_NAME]_[VARIABLE_NAME] (e.g., STP_MY_DATABASE_CONNECTION_STRING).
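Injected variables are then read from `process.env` inside the container. The helper below derives the variable name from camelCase resource and parameter names; the conversion rule is inferred from the `STP_MY_DATABASE_CONNECTION_STRING` example above, so treat it as a sketch:

```typescript
// Convert camelCase to UPPER_SNAKE: myDatabase -> MY_DATABASE
function upperSnake(name: string): string {
  return name.replace(/([a-z0-9])([A-Z])/g, '$1_$2').toUpperCase();
}

// STP_[RESOURCE_NAME]_[VARIABLE_NAME], e.g. STP_MY_DATABASE_CONNECTION_STRING
function stpEnvName(resourceName: string, paramName: string): string {
  return `STP_${upperSnake(resourceName)}_${upperSnake(paramName)}`;
}

// Usage inside the container:
// const connectionString = process.env[stpEnvName('myDatabase', 'connectionString')];
```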
Using iamRoleStatements
For fine-grained control, you can provide raw IAM role statements.
```yml
resources:
  myContainerWorkload:
    type: multi-container-workload
    properties:
      containers:
        - name: apiContainer
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: server/index.ts
      iamRoleStatements:
        - Resource:
            - $CfResourceParam('NotificationTopic', 'Arn')
          Effect: 'Allow'
          Action:
            - 'sns:Publish'
      resources:
        cpu: 2
        memory: 2048

cloudformationResources:
  NotificationTopic:
    Type: 'AWS::SNS::Topic'
```
Deployment strategies
By default, Stacktape uses a rolling update strategy. You can choose a different strategy using the deployment property.
This allows for safe, gradual deployments. Instead of instantly replacing the old version, traffic is shifted to the new version over time. This provides an opportunity to monitor for issues and roll back if necessary.
Supported strategies include Canary, Linear, and AllAtOnce deployments.
Note: To use gradual deployments, your workload must be integrated with an Application Load Balancer.
```yml
resources:
  myLoadBalancer:
    type: application-load-balancer

  myApp:
    type: multi-container-workload
    properties:
      containers:
        - name: api-container
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/index.ts
          events:
            - type: application-load-balancer
              properties:
                loadBalancerName: myLoadBalancer
                containerPort: 80
                priority: 1
                paths: ['*']
      resources:
        cpu: 2
        memory: 2048
      deployment:
        strategy: Canary10Percent5Minutes
```
Hook functions
You can use hook functions to perform checks during deployment, including sending test traffic to a new version before it receives production traffic.
```yml
resources:
  myLoadBalancer:
    type: application-load-balancer

  myApp:
    type: multi-container-workload
    properties:
      containers:
        - name: api-container
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/index.ts
          events:
            - type: application-load-balancer
              properties:
                loadBalancerName: myLoadBalancer
                containerPort: 80
                priority: 1
                paths: ['*']
      resources:
        cpu: 2
        memory: 2048
      deployment:
        strategy: Canary10Percent5Minutes
        afterTrafficShiftFunction: validateDeployment

  validateDeployment:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: src/validate-deployment.ts
```
```typescript
import {
  CodeDeployClient,
  PutLifecycleEventHookExecutionStatusCommand
} from '@aws-sdk/client-codedeploy';

const client = new CodeDeployClient({});

export default async (event) => {
  // read DeploymentId and LifecycleEventHookExecutionId from the payload
  const { DeploymentId, LifecycleEventHookExecutionId } = event;

  // perform validations here

  await client.send(
    new PutLifecycleEventHookExecutionStatusCommand({
      deploymentId: DeploymentId,
      lifecycleEventHookExecutionId: LifecycleEventHookExecutionId,
      status: 'Succeeded' // status can be 'Succeeded' or 'Failed'
    })
  );
};
```
Default VPC connection
Container workloads are connected to the default VPC of your stack by default. This allows them to communicate with other VPC-enabled resources without extra configuration.
Referenceable parameters
Currently, no parameters can be referenced.
Pricing
You are charged for:
- Virtual CPU per hour
- Memory per hour
Pricing is rounded to the nearest second with a one-minute minimum. For details, see the Fargate pricing page.
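As a worked example with illustrative rates (the per-hour prices below are assumptions, not current AWS prices; they vary by region, so check the pricing page), a Fargate workload with 0.25 vCPU and 0.5 GB of memory running for a full 720-hour month:

```typescript
// Illustrative only: these per-hour rates are assumptions, not current AWS prices.
const VCPU_PER_HOUR = 0.04048; // USD per vCPU-hour (assumed)
const GB_PER_HOUR = 0.004445;  // USD per GB-hour (assumed)

// Monthly cost = (vCPU rate + memory rate) x hours the instance runs.
function monthlyFargateCost(vcpu: number, memoryGb: number, hours = 720): number {
  return (vcpu * VCPU_PER_HOUR + memoryGb * GB_PER_HOUR) * hours;
}

// 0.25 vCPU + 0.5 GB for 720 hours comes to roughly $8.89 at the assumed rates
```

Because the workload cannot scale to zero, this per-instance baseline accrues continuously; multiply by `minInstances` for the monthly floor.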