Stacktape


Web Services



A web service is a continuously running container with a public endpoint, making it ideal for hosting public APIs and websites.

Key features include:

  • Automatic scaling: Scales based on CPU or memory usage.
  • Zero-downtime deployments: Supports various deployment strategies, including blue/green, to ensure your service is always available.
  • Flexible container images: Supports multiple ways to provide a container image, including automatic packaging for popular languages.
  • Easy domain management: Simplifies using custom domains with SSL/TLS certificates.
  • CDN integration: Can be fronted by a CDN to cache content and improve performance.
  • Fully managed: No need to manage servers, operating systems, or virtual machines.
  • Seamless connectivity: Easily connects to other resources in your stack.

Example

Here's an example of a simple web service that listens for HTTP requests.

import express from 'express';

const app = express();

app.get('/', async (req, res) => {
  res.send({ message: 'Hello' });
});

// this environment variable is automatically injected by Stacktape
app.listen(process.env.PORT, () => {
  console.info(`Server running on port ${process.env.PORT}`);
});

Example server code in TypeScript.

Stacktape automatically injects a PORT environment variable into your container. Your application must bind to this port to receive traffic.

And here's the corresponding configuration:

resources:
  webService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/main.ts
      resources:
        cpu: 2
        memory: 2048

Example web service configuration.

WebService  API reference
type
Required
properties.packaging
Required
properties.resources
Required
properties.cors
properties.customDomains
properties.loadBalancing
properties.cdn
properties.alarms
properties.disabledGlobalAlarms
properties.deployment
properties.useFirewall
properties.environment
properties.logging
properties.scaling
properties.internalHealthCheck
properties.stopTimeout
Default: 2
properties.enableRemoteSessions
properties.volumeMounts
properties.sideContainers
properties.usePrivateSubnetsWithNAT
properties.connectTo
properties.iamRoleStatements
overrides

How it works

Stacktape uses AWS Elastic Container Service (ECS) to run your containers on either Fargate or EC2 instances.

  • Fargate is a serverless compute engine that runs containers without requiring you to manage the underlying servers.
  • EC2 instances are virtual servers that give you more control over the computing environment.

ECS services are self-healing, automatically replacing any container that fails. They also scale automatically based on the rules you define.

Traffic is routed to your containers using one of the following, depending on your configuration:

  • HTTP API Gateway (default): A lightweight, cost-effective solution for HTTP APIs.
  • Application Load Balancer (ALB): A more powerful load balancer that supports features like WebSockets and sticky sessions.
  • Network Load Balancer (NLB): A high-performance load balancer that can handle millions of requests per second and supports protocols other than HTTP/S.

Stacktape automatically provisions and configures the chosen entry point for you.

When to use it

This table helps you choose the right container-based resource for your needs:

Resource type            | Description                                                                                       | Use-cases
web-service              | A container with a public endpoint and URL.                                                       | Public APIs, websites
private-service          | A container with a private endpoint, accessible only within your stack.                           | Private APIs, internal services
worker-service           | A container that runs continuously but is not directly accessible.                                | Background processing, message queue consumers
multi-container-workload | A customizable workload with multiple containers, where you define the accessibility of each one. | Complex, multi-component services
batch-job                | A container that runs a single job and then terminates.                                           | One-off or scheduled data processing tasks

Advantages

  • Control over the environment: Runs any Docker image or an image built from a Dockerfile.
  • Cost-effective for predictable loads: Cheaper than Lambda functions for services with steady traffic.
  • Load-balanced and scalable: Automatically scales horizontally based on CPU and memory usage.
  • Highly available: Runs across multiple Availability Zones to ensure resilience.
  • Secure by default: The underlying environment is managed and secured by AWS.

Disadvantages

  • Slower scaling: Adding new container instances can take several seconds to a few minutes, which is slower than the nearly-instant scaling of Lambda functions.
  • Not fully serverless: Cannot scale down to zero. You pay for at least one running instance (starting at ~$8/month), even if it's idle.

Image

A web service runs a Docker image. Stacktape supports multiple ways to provide this image: it can be packaged automatically from your source code (as the stacktape-image-buildpack examples in this guide show), built from a Dockerfile, or supplied as a prebuilt image. See the packaging property in the API reference for all options.

Environment variables

A list of environment variables to pass to the container.

Values can be:

  • Static values (e.g. strings or numbers).
  • Results of custom directives.
  • Parameters of other resources in your stack, using the $ResourceParam directive.
  • Values stored in secrets, using the $Secret directive.

resources:
  webService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/main.ts
      environment:
        - name: STATIC_ENV_VAR
          value: my-env-var
        - name: DYNAMICALLY_SET_ENV_VAR
          value: $MyCustomDirective('input-for-my-directive')
        - name: DB_HOST
          value: $ResourceParam('myDatabase', 'host')
        - name: DB_PASSWORD
          value: $Secret('dbSecret.password')
      resources:
        cpu: 2
        memory: 2048
EnvironmentVar  API reference
name
Required
value
Required

Health check

Health checks monitor your container to ensure it's running correctly. If a container fails its health check, it's automatically terminated and replaced with a new one.

ContainerHealthCheck  API reference
healthCheckCommand
Required
intervalSeconds
Default: 30
timeoutSeconds
Default: 5
retries
Default: 3
startPeriodSeconds

For example, this health check uses curl to send a request to the service every 20 seconds. If the request fails or takes longer than 5 seconds, the check is considered failed.

resources:
  myWebService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/index.ts
      internalHealthCheck:
        healthCheckCommand: ['CMD-SHELL', 'curl -f http://localhost/ || exit 1']
        intervalSeconds: 20
        timeoutSeconds: 5
        startPeriodSeconds: 150
        retries: 2
      resources:
        cpu: 2
        memory: 2048

Shutdown

When a service instance is shut down (for example, during a deployment or when the stack is deleted), all of its containers receive a SIGTERM signal. This gives your application a chance to shut down gracefully.

By default, the application has 2 seconds to clean up before it's forcefully stopped with a SIGKILL signal. You can change this with the stopTimeout property (from 2 to 120 seconds).
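For example, to give your application more time to drain connections before it is killed, you might raise the timeout (a sketch; the values are illustrative):

```yaml
resources:
  myWebService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/index.ts
      resources:
        cpu: 0.25
        memory: 512
      # give the app up to 60 seconds after SIGTERM before SIGKILL
      stopTimeout: 60
```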

process.on('SIGTERM', () => {
  console.info('Received SIGTERM signal. Cleaning up and exiting process...');
  // Finish any outstanding requests, or close a database connection...
  process.exit(0);
});

Example of a cleanup function that runs before the container shuts down.

Logging

Anything your application writes to stdout or stderr is captured and stored in AWS CloudWatch.

You can view logs in a few ways:

  • Stacktape Console: Find a direct link to the logs in the Stacktape Console.
  • Stacktape CLI: Use the stacktape logs command to stream logs to your terminal.
  • AWS Console: Browse logs directly in the AWS CloudWatch console. The stacktape stack-info command can provide a link.

Log storage can be expensive. To manage costs, you can configure retentionDays to automatically delete logs after a certain period.
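For example, a sketch that keeps logs for only 30 days instead of the 90-day default (values are illustrative):

```yaml
resources:
  myWebService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/index.ts
      resources:
        cpu: 0.25
        memory: 512
      logging:
        # automatically delete logs older than 30 days
        retentionDays: 30
```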

ContainerWorkloadContainerLogging  API reference
disabled
retentionDays
Default: 90
logForwarding

Forwarding logs

You can forward logs to third-party services. See Forwarding Logs for more information.

Compute resources

In the resources section, you configure the CPU, memory, and instance types for your service. You can run your containers using either Fargate or EC2 instances.

  • Fargate is a serverless option that lets you run containers without managing servers. You only need to specify the cpu and memory your service requires. It's a good choice for applications that need to meet high security standards like PCI DSS Level 1 and SOC 2.
  • EC2 instances are virtual servers that give you more control. You choose the instance types that best fit your needs, and ECS places your containers on them.

Regardless of whether you use Fargate or EC2 instances, your containers run securely within a VPC.

Configures the CPU, memory, and underlying compute engine for the service container.

You can choose between two compute engines:

  • Fargate: A serverless engine that abstracts away server management. To use Fargate, specify cpu and memory without instanceTypes.
  • EC2: Provides direct control over the underlying virtual servers. To use EC2, specify the desired instanceTypes.
ContainerWorkloadResourcesConfig  API reference
Parent:WebService
cpu
memory
instanceTypes
enableWarmPool
architecture
Default: 'x86_64'

Using Fargate

To use Fargate, specify cpu and memory in the resources section without including instanceTypes.

resources:
  myWebService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/index.ts
      resources:
        cpu: 0.25
        memory: 512

Example of a service running on Fargate.

Using EC2 instances

To use EC2 instances, specify a list of instanceTypes in the resources section.

Instances are automatically added or removed to meet scaling demands.

Recommendation: For optimal resource utilization, specify a single instance type and omit the cpu and memory properties. Stacktape will then size the containers to fit the instance perfectly.

The order of instance types matters; the first in the list is preferred. For a full list of instance types, see the AWS EC2 instance types documentation.

Instances are automatically refreshed weekly to ensure they are patched and up-to-date. Your workload remains available during this process.

resources:
  myWebService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/index.ts
      resources:
        instanceTypes:
          - c5.large

Example of a service running on EC2 instances.

Container placement on EC2

Stacktape tries to use your EC2 instances as efficiently as possible.

  • If you specify instanceTypes without cpu and memory, Stacktape configures each service instance to use the full resources of one EC2 instance. When the service scales out, a new EC2 instance is added for each new service instance.
  • If you specify cpu and memory, AWS will place multiple service instances on a single EC2 instance if there's enough capacity, maximizing utilization.

Default CPU and memory for EC2

  • If cpu is not specified, containers on an EC2 instance share its CPU capacity.
  • If memory is not specified, Stacktape sets the memory to the maximum amount available on the smallest instance type in your instanceTypes list.

Using a warm pool

A warm pool keeps pre-initialized EC2 instances in a stopped state, allowing your service to scale out much faster. This is useful for handling sudden traffic spikes. You only pay for the storage of stopped instances, not for compute time.

To enable it, set enableWarmPool to true. This feature is only available when you specify exactly one instance type.

For more details, see the AWS Auto Scaling warm pools documentation.

resources:
  myWebService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/index.ts
      resources:
        instanceTypes:
          - c5.large
        enableWarmPool: true

Example of using a warm pool with EC2 instances.

Scaling

The scaling section lets you control how your service scales. You can set the minimum and maximum number of running instances and define a policy that triggers scaling actions.

ContainerWorkloadScaling  API reference
Parent:WebService
minInstances
Default: 1
maxInstances
Default: 1
scalingPolicy

Scaling policy

A scaling policy defines the CPU and memory thresholds that trigger scaling.

  • Scaling out (adding instances): The service scales out if either the average CPU or memory utilization exceeds the target you set.
  • Scaling in (removing instances): The service scales in only when both CPU and memory utilization are below their target values.

The scaling process is more aggressive when adding capacity than when removing it. This helps ensure your application can handle sudden increases in load, while scaling in more cautiously to prevent flapping (scaling in and out too frequently).
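The either/both asymmetry above can be sketched as a pair of predicates (illustrative only, not Stacktape's actual implementation):

```typescript
// Scale OUT when EITHER metric exceeds its target; scale IN only when
// BOTH metrics are below their targets. Targets default to 80%, matching
// the documented defaults of the scaling policy.
const shouldScaleOut = (avgCpu: number, avgMemory: number, cpuTarget = 80, memoryTarget = 80): boolean =>
  avgCpu > cpuTarget || avgMemory > memoryTarget;

const shouldScaleIn = (avgCpu: number, avgMemory: number, cpuTarget = 80, memoryTarget = 80): boolean =>
  avgCpu < cpuTarget && avgMemory < memoryTarget;

console.log(shouldScaleOut(90, 40)); // true  (CPU above target)
console.log(shouldScaleIn(90, 40)); // false (CPU still above target)
```

Note that a service with high CPU but low memory utilization keeps its capacity: it neither scales out further once CPU drops below the target, nor scales in while CPU remains above it.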

ContainerWorkloadScalingPolicy  API reference
keepAvgCpuUtilizationUnder
Default: 80
keepAvgMemoryUtilizationUnder
Default: 80
resources:
  myWebService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/index.ts
      resources:
        cpu: 0.5
        memory: 1024
      scaling:
        minInstances: 1
        maxInstances: 5
        scalingPolicy:
          keepAvgMemoryUtilizationUnder: 80
          keepAvgCpuUtilizationUnder: 80

Example of a scaling configuration.

Storage

Each service instance has its own temporary (ephemeral) storage with a fixed size of 20 GB. This storage is deleted when the instance is removed, and instances of the same service do not share it.

For persistent data storage, use Buckets.

Accessing other resources

By default, AWS resources cannot communicate with each other. Access must be granted using IAM permissions.

Stacktape automatically configures the necessary permissions for the services it manages. For example, it allows a web service to write logs to CloudWatch.

However, if your application needs to access other resources, you must grant permissions manually. You can do this in two ways:

Using connectTo

The connectTo property lets you grant access to other Stacktape-managed resources by simply listing their names. Stacktape automatically configures the required IAM permissions and injects connection details as environment variables into your service.

resources:
  photosBucket:
    type: bucket

  myWebService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/index.ts
      connectTo:
        # access to the bucket
        - photosBucket
        # access to AWS SES
        - aws:ses
      resources:
        cpu: 0.25
        memory: 512

Configures access to other resources in your stack and AWS services. By specifying resources here, Stacktape automatically:

  • Configures IAM role permissions.
  • Sets up security group rules to allow network traffic.
  • Injects environment variables with connection details into the compute resource.

Environment variables are named STP_[RESOURCE_NAME]_[VARIABLE_NAME] (e.g., STP_MY_DATABASE_CONNECTION_STRING).
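As a sketch of that naming convention (this helper is illustrative, not part of Stacktape — camelCase resource names are upper-cased with underscores at word boundaries):

```typescript
// Derive the environment variable name for an injected connection parameter,
// following the documented STP_[RESOURCE_NAME]_[VARIABLE_NAME] convention.
const stpEnvName = (resourceName: string, variableName: string): string =>
  `STP_${resourceName}_${variableName}`
    .replace(/([a-z0-9])([A-Z])/g, '$1_$2')
    .toUpperCase();

console.log(stpEnvName('myDatabase', 'connectionString'));
// STP_MY_DATABASE_CONNECTION_STRING
```

Your application then reads the value as usual, e.g. `process.env.STP_MY_DATABASE_CONNECTION_STRING`.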

Using iamRoleStatements

For more granular control, you can provide a list of raw IAM role statements. These statements are added to the service's IAM role, allowing you to define precise permissions for any AWS resource.

resources:
  myWebService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: server/index.ts
      iamRoleStatements:
        - Resource:
            - $CfResourceParam('NotificationTopic', 'Arn')
          Effect: 'Allow'
          Action:
            - 'sns:Publish'
      resources:
        cpu: 2
        memory: 2048

cloudformationResources:
  NotificationTopic:
    Type: 'AWS::SNS::Topic'
StpIamRoleStatement  API reference
Parent:WebService
Resource
Required
Sid
Effect
Default: Allow
Action
Condition

Load balancing

The loadBalancing property configures how traffic is distributed to your containers.

The following entry point types are supported:

  • http-api-gateway (default):

    • Distributes traffic to available containers randomly.
    • Uses a pay-per-use pricing model (~$1 per million requests).
    • Ideal for most workloads, but an application-load-balancer may be more cost-effective if you exceed ~500,000 requests per day.
  • application-load-balancer:

    • Distributes traffic to available containers in a round-robin fashion.
    • Uses a pricing model that combines a flat hourly charge ($0.0252/hour) with usage-based charges for LCUs (Load Balancer Capacity Units) ($0.08/hour).
    • Eligible for the AWS Free Tier. For more details, see the AWS pricing documentation.
  • network-load-balancer:

    • Supports TCP and TLS protocols.
    • Uses the same pricing model as the application-load-balancer.
    • Also eligible for the AWS Free Tier.

Application Load Balancer

An Application Load Balancer (ALB) is a good choice when you need features like WebSocket support or sticky sessions, or if you expect high traffic volumes, as it can be more cost-effective than an HTTP API Gateway at scale.

WebServiceAlbLoadBalancing  API reference
Parent:WebService
type
Required
properties.healthcheckPath
Default: /
properties.healthcheckInterval
Default: 5
properties.healthcheckTimeout
Default: 4
resources:
  webService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/main.ts
      resources:
        cpu: 2
        memory: 2048
      loadBalancing:
        type: application-load-balancer # default is http-api-gateway

Network Load Balancer

A Network Load Balancer (NLB) is ideal for applications that require extreme performance, need to expose multiple ports, or use protocols other than HTTP/S.

WebServiceNlbLoadBalancing  API reference
Parent:WebService
type
Required
properties.ports
Required
properties.healthcheckPath
Default: /
properties.healthcheckInterval
Default: 5
properties.healthcheckTimeout
Default: 4
properties.healthCheckProtocol
Default: TCP
properties.healthCheckPort
resources:
  webService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/main.ts
      resources:
        cpu: 2
        memory: 2048
      loadBalancing:
        type: network-load-balancer
        properties:
          ports:
            - port: 443
              containerPort: 80 # OPTIONAL: specify if the target container port differs from `port`
              protocol: TLS # OPTIONAL: supported protocols are TLS and TCP. Default is TLS.

Deployment strategies

By default, Stacktape uses a rolling update strategy to deploy new versions of your service. You can use the deployment property to choose a different strategy, such as blue/green.

This allows you to safely update your service in a live environment by gradually shifting traffic to the new version. This gives you the opportunity to monitor the workload during the update and quickly roll back in case of any issues.

The following deployment strategies are supported:

  • Canary10Percent5Minutes: Shifts 10% of traffic, then the remaining 90% five minutes later.
  • Canary10Percent15Minutes: Shifts 10% of traffic, then the remaining 90% fifteen minutes later.
  • Linear10PercentEvery1Minute: Shifts 10% of traffic every minute until all traffic is shifted.
  • Linear10PercentEvery3Minutes: Shifts 10% of traffic every three minutes until all traffic is shifted.
  • AllAtOnce: Shifts all traffic to the updated service at once.

You can use Lambda function hooks to validate or abort the deployment.

This feature requires the loadBalancing type to be set to application-load-balancer.

ContainerWorkloadDeploymentConfig  API reference
Parent:WebService
strategy
Required
beforeAllowTrafficFunction
afterTrafficShiftFunction
testListenerPort
resources:
  webService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/main.ts
      resources:
        cpu: 2
        memory: 2048
      loadBalancing:
        type: application-load-balancer
      deployment:
        strategy: Canary10Percent5Minutes

Hook functions

You can use hook functions to run checks before, during, or after a deployment.

resources:
  webService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/main.ts
      resources:
        cpu: 2
        memory: 2048
      loadBalancing:
        type: application-load-balancer
      deployment:
        strategy: Canary10Percent5Minutes
        afterTrafficShiftFunction: validateDeployment

  validateDeployment:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: src/validate-deployment.ts
import { CodeDeployClient, PutLifecycleEventHookExecutionStatusCommand } from '@aws-sdk/client-codedeploy';

const client = new CodeDeployClient({});

export default async (event) => {
  // read DeploymentId and LifecycleEventHookExecutionId from the payload
  const { DeploymentId, LifecycleEventHookExecutionId } = event;
  // perform validations here
  await client.send(
    new PutLifecycleEventHookExecutionStatusCommand({
      deploymentId: DeploymentId,
      lifecycleEventHookExecutionId: LifecycleEventHookExecutionId,
      status: 'Succeeded' // status can be 'Succeeded' or 'Failed'
    })
  );
};

Code for the validateDeployment function.

Test traffic listener

When using the beforeAllowTraffic hook, you can use a test listener to send traffic to the new version of your service before it receives production traffic. By default, the test listener is created on port 8080.

resources:
  webService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/main.ts
      resources:
        cpu: 2
        memory: 2048
      loadBalancing:
        type: application-load-balancer
      deployment:
        strategy: Canary10Percent5Minutes
        beforeAllowTrafficFunction: testDeployment

  testDeployment:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: src/test-deployment.ts
      environment:
        - name: WEB_SERVICE_URL
          value: $ResourceParam('webService', 'url')
        - name: TEST_LISTENER_PORT
          value: 8080
import { CodeDeployClient, PutLifecycleEventHookExecutionStatusCommand } from '@aws-sdk/client-codedeploy';
import fetch from 'node-fetch';

const client = new CodeDeployClient({});

export default async (event: { DeploymentId: string; LifecycleEventHookExecutionId: string }) => {
  const { DeploymentId: deploymentId, LifecycleEventHookExecutionId: lifecycleEventHookExecutionId } = event;
  try {
    // test the new version using the test listener port
    await fetch(`${process.env.WEB_SERVICE_URL}:${process.env.TEST_LISTENER_PORT}`);
    // validate the result
    // do some other tests ...
  } catch (err) {
    // send 'Failed' status if an error occurred
    await client.send(
      new PutLifecycleEventHookExecutionStatusCommand({
        deploymentId,
        lifecycleEventHookExecutionId,
        status: 'Failed'
      })
    );
    throw err;
  }
  // send 'Succeeded' status after successful testing
  await client.send(
    new PutLifecycleEventHookExecutionStatusCommand({
      deploymentId,
      lifecycleEventHookExecutionId,
      status: 'Succeeded'
    })
  );
};

Code for the testDeployment function.

Default VPC connection

Some AWS services, like relational databases, must be deployed within a VPC. If your stack includes such resources, Stacktape automatically creates a default VPC and connects them to it.

Web services are connected to this default VPC by default, allowing them to communicate with other VPC-based resources without extra configuration.

To learn more, see the documentation on VPCs and resource accessibility.

CORS

Cross-Origin Resource Sharing (CORS) is a security feature that controls how web browsers handle requests to a different domain than the one the user is currently on.

If your frontend and backend are on different domains (e.g., mydomain.com and api.mydomain.com), you'll need to configure CORS.

If you are already handling CORS in your application code, you don't need to enable it in your Stacktape configuration. Also, the cors property cannot be used when the loadBalancing type is application-load-balancer.
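If you handle CORS in application code, the response simply needs the appropriate headers. A minimal sketch (the origin and header values here are illustrative assumptions — adjust them to your frontend):

```typescript
// Build the CORS response headers your handlers can attach. A preflight
// (OPTIONS) request should be answered with these headers and an empty body.
const corsHeaders = (allowedOrigin: string): Record<string, string> => ({
  'Access-Control-Allow-Origin': allowedOrigin,
  'Access-Control-Allow-Headers': 'Content-Type, Authorization',
  'Access-Control-Allow-Methods': 'GET, POST, OPTIONS'
});

console.log(corsHeaders('https://mydomain.com')['Access-Control-Allow-Origin']);
// https://mydomain.com
```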

You can enable CORS with a single line:

resources:
  myWebService:
    type: 'web-service'
    properties:
      resources:
        cpu: 2
        memory: 2048
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/main.ts
      cors:
        enabled: true

A web service with CORS enabled.

You can also customize the CORS behavior. If you do not specify any additional properties, a default configuration is used:

  • allowedMethods: Inferred from the methods used by integrations associated with the API Gateway.
  • allowedOrigins: *
  • allowedHeaders: Content-Type, X-Amz-Date, Authorization, X-Api-Key, X-Amz-Security-Token, X-Amz-User-Agent
HttpApiCorsConfig  API reference
Parent:WebService
enabled
Required
allowedOrigins
Default: *
allowedHeaders
allowedMethods
allowCredentials
exposedResponseHeaders
maxAge

Custom domain names

You can use a custom domain for your web service. If you don't have one, you can register one through Stacktape.

If you already have a domain, you can either let Stacktape manage it (if you use AWS Route 53 for DNS) or use a third-party DNS provider.

For more details, see the Domains and Certificates page.

Using Stacktape to manage domains and certs

Stacktape allows you to connect custom domains to various resources, including Web Services, HTTP API Gateways, Application Load Balancers, and Buckets with CDNs.

When you connect a custom domain, Stacktape automatically:

  • Creates DNS records: A DNS record is created to point your domain name to the resource.
  • Adds TLS certificates: If the resource uses HTTPS, Stacktape issues and attaches a free, AWS-managed TLS certificate, handling TLS termination for you.

If you want to use your own certificate, you can configure customCertificateArn.

To manage a custom domain, it must first be added to your AWS account as a hosted zone, and your domain registrar's name servers must point to it. For more details, see the Adding a domain guide.

DomainConfiguration  API reference
domainName
Required
customCertificateArn
disableDnsRecordCreation
resources:
  myWebService:
    type: web-service
    properties:
      resources:
        cpu: 2
        memory: 2048
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/main.ts
      customDomains:
        - domainName: whatever.mydomain.com

Using a 3rd-party DNS

To use a domain from a provider like GoDaddy or Cloudflare:

  1. Create or import a TLS certificate for your domain in the AWS Certificate Manager console and copy its ARN.
  2. Add the customDomains configuration to your service, using the certificate ARN and disabling DNS record creation.
resources:
  apiService:
    type: web-service
    properties:
      # ...
      customDomains:
        - domainName: mydomain.com
          disableDnsRecordCreation: true
          customCertificateArn: <<ARN_OF_YOUR_CERTIFICATE>>
  3. After deploying, find the service's domain name in the Stacktape Console.
  4. In your DNS provider's dashboard, create a CNAME or ALIAS record pointing to the service's domain name.

CDN

You can place an AWS CloudFront CDN in front of your web service to cache content and reduce latency.

A CDN is a globally distributed network of edge locations that caches responses from your Web Service, bringing content closer to your users.

Using a CDN can:

  • Reduce latency and improve load times.
  • Lower bandwidth costs.
  • Decrease the amount of traffic hitting your origin (the Web Service containers).
  • Enhance security.

The CDN caches responses from the origin at the edge for a specified amount of time.

For more information, see the CDN documentation.

resources:
  webService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/main.ts
      resources:
        cpu: 2
        memory: 2048
      cdn:
        enabled: true

Firewall

You can protect your web service with a web application firewall (WAF).

A web-app-firewall can protect your resources from common web exploits that could affect availability, compromise security, or consume excessive resources. The firewall works by filtering malicious requests before they reach your application.

To learn more, see the Web Application Firewall documentation.

resources:
  myFirewall:
    type: web-app-firewall
    properties:
      scope: regional

  webService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/main.ts
      resources:
        cpu: 2
        memory: 2048
      useFirewall: myFirewall

Referenceable parameters

The following parameters can be referenced using the $ResourceParam directive.

To learn more, see Referencing parameters.

domain
  • Web service default domain name

  • Usage: $ResourceParam('<<resource-name>>', 'domain')
url
  • Web service default URL

  • Usage: $ResourceParam('<<resource-name>>', 'url')
customDomains
  • Comma-separated list of custom domain names assigned to the Web Service (only available if you use custom domain names)

  • Usage: $ResourceParam('<<resource-name>>', 'customDomains')
customDomainUrls
  • Comma-separated list of custom domain name URLs (only available if you use custom domain names)

  • Usage: $ResourceParam('<<resource-name>>', 'customDomainUrls')
cdnDomain
  • Default domain of the CDN distribution (only available if you DO NOT configure custom domain names for the CDN).

  • Usage: $ResourceParam('<<resource-name>>', 'cdnDomain')
cdnUrl
  • Default URL of the CDN distribution (only available if you DO NOT configure custom domain names for the CDN).

  • Usage: $ResourceParam('<<resource-name>>', 'cdnUrl')
cdnCustomDomains
  • Comma-separated list of custom domain names assigned to the CDN (only available if you configure custom domain names for the CDN).

  • Usage: $ResourceParam('<<resource-name>>', 'cdnCustomDomains')
cdnCustomDomainUrls
  • Comma-separated list of custom domain name URLs of the CDN (only available if you configure custom domain names for the CDN).

  • Usage: $ResourceParam('<<resource-name>>', 'cdnCustomDomainUrls')

Pricing

When using Fargate, you are charged for:

  • vCPU per hour: ~$0.04 - $0.07, depending on the region.
  • Memory (GB) per hour: ~$0.004 - $0.008, depending on the region.

Usage is billed by the second, with a one-minute minimum. For more details, see AWS Fargate pricing.
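As a rough worked example (assuming mid-range prices of $0.04 per vCPU-hour and $0.004 per GB-hour; actual rates vary by region), the smallest Fargate size comes out near the ~$8/month floor mentioned earlier:

```typescript
// Approximate monthly cost of the smallest Fargate size (0.25 vCPU, 512 MB),
// assuming $0.04 per vCPU-hour and $0.004 per GB-hour (region-dependent).
const hoursPerMonth = 730;
const vcpuCost = 0.25 * 0.04 * hoursPerMonth; // ~ $7.30
const memoryCost = 0.5 * 0.004 * hoursPerMonth; // ~ $1.46
console.log(`$${(vcpuCost + memoryCost).toFixed(2)} per month`); // $8.76 per month
```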

API reference

StpIamRoleStatement  API reference
Parent:WebService
Resource
Required
Sid
Effect
Default: Allow
Action
Condition
WebServiceNlbLoadBalancingPort  API reference
port
Required
protocol
Default: TLS
containerPort
