

Worker Services



A worker service is a continuously running container that is not directly accessible from outside your stack. It's ideal for background jobs, such as processing items from a message queue or handling other asynchronous tasks.

Key features include:

  • Automatic scaling: Scales based on CPU or memory usage.
  • Zero-downtime deployments: New versions are deployed without interrupting the service.
  • Flexible container images: Supports various methods for providing a container image, including auto-packaging for popular languages.
  • Fully managed: No need to manage servers, operating systems, or virtual machines.
  • Seamless connectivity: Easily connects to other resources within your stack.

How it works

Stacktape uses AWS Elastic Container Service (ECS) to run your containers on either Fargate or EC2 instances.

  • Fargate is a serverless compute engine that runs containers without requiring you to manage the underlying servers.
  • EC2 instances are virtual servers that give you more control over the computing environment.

ECS services are self-healing, automatically replacing any container that fails. They also scale automatically based on the rules you define.

When to use it

This table helps you choose the right container-based resource for your needs:

Resource type            | Description                                                                                        | Use-cases
web-service              | A container with a public endpoint and URL.                                                        | Public APIs, websites
private-service          | A container with a private endpoint, accessible only within your stack.                           | Private APIs, internal services
worker-service           | A container that runs continuously but is not directly accessible.                                | Background processing, message queue consumers
multi-container-workload | A customizable workload with multiple containers, where you define the accessibility of each one. | Complex, multi-component services
batch-job                | A container that runs a single job and then terminates.                                           | One-off or scheduled data processing tasks

Advantages

  • Control over the environment: Runs any Docker image or an image built from a Dockerfile.
  • Cost-effective for predictable loads: Cheaper than Lambda functions for services with steady traffic.
  • Load-balanced and scalable: Automatically scales horizontally based on CPU and memory usage.
  • Highly available: Runs across multiple Availability Zones to ensure resilience.
  • Secure by default: The underlying environment is managed and secured by AWS.

Disadvantages

  • Slower scaling: Adding new container instances can take several seconds to a few minutes, which is slower than the nearly-instant scaling of Lambda functions.
  • Not fully serverless: Cannot scale down to zero. You pay for at least one running instance (starting at ~$8/month), even if it's idle.

Basic usage

Here's a basic example of a worker service configuration:

resources:
  myWorkerService:
    type: worker-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/main.ts
      resources:
        cpu: 2
        memory: 2048

Example worker service configuration.

And here's the corresponding application code:

// my-app exports a factory that creates the long-running application
import myContinuouslyRunningApp from './my-app';

const app = myContinuouslyRunningApp();
app.run();

Example worker container in TypeScript (main.ts).


WorkerService API reference

  • type (required)
  • properties.packaging (required)
  • properties.resources (required)
  • properties.environment
  • properties.logging
  • properties.scaling
  • properties.internalHealthCheck
  • properties.stopTimeout
  • properties.enableRemoteSessions
  • properties.volumeMounts
  • properties.connectTo
  • properties.iamRoleStatements
  • overrides

Image

A worker service runs a Docker image. The image can be provided in four ways, configured through the packaging property (see the API reference above).

Environment variables

The most commonly used types of environment variables are:

  • Static - a string, number, or boolean (will be stringified).
  • Result of a custom directive.
  • Referenced property of another resource (using the $ResourceParam directive). To learn more, refer to the referencing parameters guide. If you use environment variables to inject information about resources into your script, see also the connectTo property, which simplifies this process.
  • Value of a secret (using the $Secret directive).
resources:
  myWorkerService:
    type: worker-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/main.ts
      environment:
        - name: STATIC_ENV_VAR
          value: my-env-var
        - name: DYNAMICALLY_SET_ENV_VAR
          value: $MyCustomDirective('input-for-my-directive')
        - name: DB_HOST
          value: $ResourceParam('myDatabase', 'host')
        - name: DB_PASSWORD
          value: $Secret('dbSecret.password')
      resources:
        cpu: 2
        memory: 2048

Environment variable properties:

  • name (required)
  • value (required)

Health check

Health checks monitor your container to ensure it's running correctly. If a container fails its health check, it's automatically terminated and replaced with a new one.

ContainerHealthCheck API reference

  • healthCheckCommand (required)
  • intervalSeconds (default: 30)
  • timeoutSeconds (default: 5)
  • retries (default: 3)
  • startPeriodSeconds

For example, this health check uses curl to send a request to the service every 20 seconds. If the request fails or takes longer than 5 seconds, the check is considered failed.

resources:
  myWorkerService:
    type: worker-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/index.ts
      internalHealthCheck:
        healthCheckCommand: ['CMD-SHELL', 'curl -f http://localhost/ || exit 1']
        intervalSeconds: 20
        timeoutSeconds: 5
        startPeriodSeconds: 150
        retries: 2
      resources:
        cpu: 2
        memory: 2048

Shutdown

When a service instance is shut down (for example, during a deployment or when the stack is deleted), all of its containers receive a SIGTERM signal. This gives your application a chance to shut down gracefully.

By default, the application has 2 seconds to clean up before it's forcefully stopped with a SIGKILL signal. You can change this with the stopTimeout property (from 2 to 120 seconds).

process.on('SIGTERM', () => {
  console.info('Received SIGTERM signal. Cleaning up and exiting process...');
  // Finish any outstanding requests, or close a database connection...
  process.exit(0);
});

Example of a cleanup function that runs before the container shuts down.
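
The grace period itself is configured on the service with the stopTimeout property. A minimal sketch giving the application 30 seconds to clean up (the value is illustrative; anything from 2 to 120 seconds is allowed):

resources:
  myWorkerService:
    type: worker-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/main.ts
      stopTimeout: 30
      resources:
        cpu: 2
        memory: 2048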

Logging

Anything your application writes to stdout or stderr is captured and stored in AWS CloudWatch.

You can view logs in a few ways:

  • Stacktape Console: Find a direct link to the logs in the Stacktape Console.
  • Stacktape CLI: Use the stacktape logs command to stream logs to your terminal.
  • AWS Console: Browse logs directly in the AWS CloudWatch console. The stacktape stack-info command can provide a link.

Log storage can be expensive. To manage costs, you can configure retentionDays to automatically delete logs after a certain period.

ContainerWorkloadContainerLogging API reference

  • disabled
  • retentionDays (default: 90)
  • logForwarding
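
For example, a minimal sketch that keeps logs for 30 days before they are automatically deleted (the retention value is illustrative):

resources:
  myWorkerService:
    type: worker-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/main.ts
      logging:
        retentionDays: 30
      resources:
        cpu: 2
        memory: 2048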

Forwarding logs

You can forward logs to third-party services. See Forwarding Logs for more information.

Compute resources

In the resources section, you configure the CPU, memory, and instance types for your service. You can run your containers using either Fargate or EC2 instances.

  • Fargate is a serverless option that lets you run containers without managing servers. You only need to specify the cpu and memory your service requires. It's a good choice for applications that need to meet high security standards like PCI DSS Level 1 and SOC 2.
  • EC2 (Elastic Compute Cloud) instances are virtual servers that give you granular control over the computing environment. You choose the instanceTypes that best fit your workload, and ECS places your containers on them.

Regardless of whether you use Fargate or EC2 instances, your containers run securely within a VPC.

To choose between the two compute engines:

  • To use Fargate: specify the cpu and memory properties and do NOT specify instanceTypes.
  • To use EC2 instances: specify instanceTypes.

ContainerWorkloadResourcesConfig API reference

  • cpu
  • memory
  • instanceTypes
  • enableWarmPool

Using Fargate

To use Fargate, specify cpu and memory in the resources section without including instanceTypes.

resources:
  myWorkerService:
    type: worker-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/index.ts
      resources:
        cpu: 0.25
        memory: 512

Example of a service running on Fargate.

Using EC2 instances

To use EC2 instances, specify a list of instanceTypes in the resources section.

  • EC2 instances are automatically added or removed to meet the scaling needs of your compute resource (see also the scaling property).
  • When using instanceTypes, we recommend specifying only one instance type and NOT setting the cpu or memory properties. Stacktape then sets cpu and memory to fit the instance precisely, resulting in optimal resource utilization.
  • Stacktape leverages ECS Managed Scaling with a target utilization of 100%. This means no unused EC2 instances (instances not running your workload/service) are kept running; unused instances are terminated.
  • Ordering in the instanceTypes list matters. Instance types higher on the list are preferred over those lower on the list. The next type on the list is used only when a type higher on the list is unavailable.
  • For an exhaustive list of available EC2 instance types, refer to the AWS docs.

To ensure that your containers run on patched, up-to-date EC2 instances, the instances are automatically refreshed (replaced) once a week (Sunday 00:00 UTC). Your compute resource stays available throughout this process.

resources:
  myWorkerService:
    type: worker-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/index.ts
      resources:
        instanceTypes:
          - c5.large

Example of a service running on EC2 instances.

Container placement on EC2

Stacktape tries to use your EC2 instances as efficiently as possible.

  • If you specify instanceTypes without cpu and memory, Stacktape configures each service instance to use the full resources of one EC2 instance. When the service scales out, a new EC2 instance is added for each new service instance.
  • If you specify cpu and memory, AWS will place multiple service instances on a single EC2 instance if there's enough capacity, maximizing utilization.

Default CPU and memory for EC2

  • If cpu is not specified, containers on an EC2 instance share its CPU capacity.
  • If memory is not specified, Stacktape sets the memory to the maximum amount available on the smallest instance type in your instanceTypes list.
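
For example, a minimal sketch where explicit cpu and memory values allow AWS to place two service instances onto a single c5.large (2 vCPU, 4 GB) instance; the values are illustrative:

resources:
  myWorkerService:
    type: worker-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/index.ts
      resources:
        instanceTypes:
          - c5.large
        cpu: 1
        memory: 1700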

Using a warm pool

A warm pool keeps pre-initialized EC2 instances in a stopped state, allowing your service to scale out much faster. This is useful for handling sudden traffic spikes. You only pay for the storage of stopped instances, not for compute time.

To enable it, set enableWarmPool to true. This feature is only available when you specify exactly one instance type.

For more details, see the AWS Auto Scaling warm pools documentation.

resources:
  myWorkerService:
    type: worker-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/index.ts
      resources:
        instanceTypes:
          - c5.large
        enableWarmPool: true

Scaling

The scaling section lets you control how your service scales. You can set the minimum and maximum number of running instances and define a policy that triggers scaling actions.

ContainerWorkloadScaling API reference

  • minInstances (default: 1)
  • maxInstances (default: 1)
  • scalingPolicy

Scaling policy

A scaling policy defines the CPU and memory thresholds that trigger scaling.

  • Scaling out (adding instances): The service scales out if either the average CPU or memory utilization exceeds the target you set.
  • Scaling in (removing instances): The service scales in only when both CPU and memory utilization are below their target values.

The scaling process is more aggressive when adding capacity than when removing it. This helps ensure your application can handle sudden increases in load, while scaling in more cautiously to prevent flapping (scaling in and out too frequently).

ContainerWorkloadScalingPolicy API reference

  • keepAvgCpuUtilizationUnder (default: 80)
  • keepAvgMemoryUtilizationUnder (default: 80)

resources:
  myWorkerService:
    type: worker-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/index.ts
      resources:
        cpu: 0.5
        memory: 1024
      scaling:
        minInstances: 1
        maxInstances: 5
        scalingPolicy:
          keepAvgMemoryUtilizationUnder: 80
          keepAvgCpuUtilizationUnder: 80

Example of a scaling configuration.

Storage

Each service instance has its own temporary (ephemeral) storage with a fixed size of 20 GB. This storage is deleted when the instance is removed, and different instances of the same service do not share it.

For persistent data storage, use Buckets.

Accessing other resources

By default, AWS resources cannot communicate with each other. Access must be granted using IAM permissions.

Stacktape automatically configures the necessary permissions for the services it manages. For example, it allows a worker service to write logs to CloudWatch.

However, if your application needs to access other resources, you must grant permissions manually. You can do this in two ways:

Using connectTo

The connectTo property lets you grant access to other Stacktape-managed resources by simply listing their names. Stacktape automatically configures the required IAM permissions and injects connection details as environment variables into your service.

resources:
  photosBucket:
    type: bucket

  myWorkerService:
    type: worker-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/index.ts
      connectTo:
        # access to the bucket
        - photosBucket
        # access to AWS SES
        - aws:ses
      resources:
        cpu: 0.25
        memory: 512

By referencing resources (or services) in the connectTo list, Stacktape automatically:

  • configures the compute resource's IAM role permissions, if needed
  • sets up the security group rules required to allow access, if needed
  • injects environment variables containing information about the connected resource into the compute resource's runtime
    • names of the environment variables use upper snake case and take the form STP_[RESOURCE_NAME]_[VARIABLE_NAME],
    • examples: STP_MY_DATABASE_CONNECTION_STRING or STP_MY_EVENT_BUS_ARN,
    • the variables injected for each resource type are listed below.

The granted permissions and injected environment variables differ depending on the resource type:


Bucket

  • Permissions:
    • list objects in a bucket
    • create / get / delete / tag object in a bucket
  • Injected env variables: NAME, ARN
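
For example, the photosBucket from the connectTo example above can be used from application code through the injected variables. A minimal sketch using the AWS SDK for JavaScript v3 (the STP_PHOTOS_BUCKET_NAME name follows the naming convention described above):

import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

// Credentials are resolved automatically from the service's IAM role.
const s3 = new S3Client({});

export const uploadPhoto = async (key: string, body: Buffer): Promise<void> => {
  // Injected by Stacktape because photosBucket is listed in connectTo
  const bucketName = process.env.STP_PHOTOS_BUCKET_NAME;
  if (!bucketName) throw new Error('STP_PHOTOS_BUCKET_NAME is not set');
  await s3.send(new PutObjectCommand({ Bucket: bucketName, Key: key, Body: body }));
};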

DynamoDB table

  • Permissions:
    • get / put / update / delete item in a table
    • scan / query a table
    • describe table stream
  • Injected env variables: NAME, ARN, STREAM_ARN

MongoDB Atlas cluster

  • Permissions:
    • Allows connection to a cluster with accessibilityMode set to scoping-workloads-in-vpc. To learn more about MongoDB Atlas clusters accessibility modes, refer to MongoDB Atlas cluster docs.
    • Creates an access "user" associated with the compute resource's role to allow secure, credential-less access to the cluster
  • Injected env variables: CONNECTION_STRING

Relational (SQL) database

  • Permissions:
    • Allows connection to a relational database with accessibilityMode set to scoping-workloads-in-vpc. To learn more about relational database accessibility modes, refer to Relational databases docs.
  • Injected env variables: CONNECTION_STRING, JDBC_CONNECTION_STRING, HOST, PORT (for Aurora multi-instance clusters, additionally: READER_CONNECTION_STRING, READER_JDBC_CONNECTION_STRING, READER_HOST)

Redis cluster

  • Permissions:
    • Allows connection to a Redis cluster with accessibilityMode set to scoping-workloads-in-vpc. To learn more about Redis cluster accessibility modes, refer to the Redis clusters docs.
  • Injected env variables: HOST, READER_HOST, PORT

Event bus

  • Permissions:
    • publish events to the specified Event bus
  • Injected env variables: ARN

Function

  • Permissions:
    • invoke the specified function
    • invoke the specified function via url (if lambda has URL enabled)
  • Injected env variables: ARN

Batch job

  • Permissions:
    • submit batch-job instance into batch-job queue
    • list submitted job instances in a batch-job queue
    • describe / terminate a batch-job instance
    • list executions of state machine which executes the batch-job according to its strategy
    • start / terminate execution of a state machine which executes the batch-job according to its strategy
  • Injected env variables: JOB_DEFINITION_ARN, STATE_MACHINE_ARN

User auth pool

  • Permissions:
    • full control over the user pool (cognito-idp:*)
    • for more information about allowed methods refer to AWS docs
  • Injected env variables: ID, CLIENT_ID, ARN


SNS Topic

  • Permissions:
    • confirm/list subscriptions of the topic
    • publish/subscribe to the topic
    • unsubscribe from the topic
  • Injected env variables: ARN, NAME


SQS Queue

  • Permissions:
    • send/receive/delete message
    • change visibility of message
    • purge queue
  • Injected env variables: ARN, NAME, URL
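
Since message queue consumers are a typical worker-service use case, here is a minimal sketch of a polling loop. It assumes a hypothetical myQueue sqs-queue resource listed in connectTo (the STP_MY_QUEUE_URL name follows the naming convention described above):

import { SQSClient, ReceiveMessageCommand, DeleteMessageCommand } from '@aws-sdk/client-sqs';

const sqs = new SQSClient({});
// Injected by Stacktape because myQueue is listed in connectTo
const queueUrl = process.env.STP_MY_QUEUE_URL;

const poll = async (): Promise<void> => {
  while (true) {
    // Long-poll for up to 10 messages at a time
    const { Messages } = await sqs.send(
      new ReceiveMessageCommand({ QueueUrl: queueUrl, MaxNumberOfMessages: 10, WaitTimeSeconds: 20 })
    );
    for (const message of Messages ?? []) {
      // ... process the message, then delete it from the queue ...
      await sqs.send(new DeleteMessageCommand({ QueueUrl: queueUrl, ReceiptHandle: message.ReceiptHandle }));
    }
  }
};

poll();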

Upstash Kafka topic

  • Injected env variables: TOPIC_NAME, TOPIC_ID, USERNAME, PASSWORD, TCP_ENDPOINT, REST_URL

Upstash Redis

  • Injected env variables: HOST, PORT, PASSWORD, REST_TOKEN, REST_URL, REDIS_URL

Private service

  • Injected env variables: ADDRESS

aws:ses (macro)

  • Permissions:
    • grants full permissions to AWS SES (ses:*)
    • for more information about the allowed methods, refer to the AWS docs

Using iamRoleStatements

For more granular control, you can provide a list of raw IAM role statements. These statements are added to the service's IAM role, allowing you to define precise permissions for any AWS resource.

resources:
  myWorkerService:
    type: worker-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: server/index.ts
      iamRoleStatements:
        - Resource:
            - $CfResourceParam('NotificationTopic', 'Arn')
          Effect: 'Allow'
          Action:
            - 'sns:Publish'
      resources:
        cpu: 2
        memory: 2048

cloudformationResources:
  NotificationTopic:
    Type: 'AWS::SNS::Topic'

StpIamRoleStatement API reference

  • Resource (required)
  • Sid
  • Effect
  • Action
  • Condition

Default VPC connection

Some AWS services, like relational databases, must be deployed within a VPC. If your stack includes such resources, Stacktape automatically creates a default VPC and connects them to it.

Worker services are connected to this default VPC by default, allowing them to communicate with other VPC-based resources without extra configuration.

To learn more, see the documentation on VPCs and resource accessibility.

Pricing

When using Fargate, you are charged for:

  • vCPU per hour: ~$0.04 - $0.07, depending on the region.
  • Memory (GB) per hour: ~$0.004 - $0.008, depending on the region.

Usage is billed by the second, with a one-minute minimum. For more details, see AWS Fargate pricing.
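
For illustration, using the lower-bound rates above: the smallest Fargate configuration (0.25 vCPU, 512 MB) running continuously for a 730-hour month costs roughly 0.25 × $0.04 × 730 ≈ $7.30 for vCPU plus 0.5 × $0.004 × 730 ≈ $1.46 for memory, about $8.76 per month in total, which is in line with the ~$8/month minimum mentioned in the Disadvantages section.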
