Web Services
A web service is a continuously running container with a public endpoint, making it ideal for hosting public APIs and websites.
Key features include:
- Automatic scaling: Scales based on CPU or memory usage.
- Zero-downtime deployments: Supports various deployment strategies, including blue/green, to ensure your service is always available.
- Flexible container images: Supports multiple ways to provide a container image, including automatic packaging for popular languages.
- Easy domain management: Simplifies using custom domains with SSL/TLS certificates.
- CDN integration: Can be fronted by a CDN to cache content and improve performance.
- Fully managed: No need to manage servers, operating systems, or virtual machines.
- Seamless connectivity: Easily connects to other resources in your stack.
Example
Here's an example of a simple web service that listens for HTTP requests.
```typescript
import express from 'express';

const app = express();

app.get('/', async (req, res) => {
  res.send({ message: 'Hello' });
});

// Use the port number stored in the PORT environment variable.
// This environment variable is automatically injected by Stacktape.
app.listen(process.env.PORT, () => {
  console.info(`Server running on port ${process.env.PORT}`);
});
```
Example server code in TypeScript.
Stacktape automatically injects a `PORT` environment variable into your container. Your application must bind to this port to receive traffic.
And here's the corresponding configuration:
```yaml
resources:
  webService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/main.ts
      resources:
        cpu: 2
        memory: 2048
```
Example web service configuration.
How it works
Stacktape uses AWS Elastic Container Service (ECS) to run your containers on either Fargate or EC2 instances.
- Fargate is a serverless compute engine that runs containers without requiring you to manage the underlying servers.
- EC2 instances are virtual servers that give you more control over the computing environment.
ECS services are self-healing, automatically replacing any container that fails. They also scale automatically based on the rules you define.
Traffic is routed to your containers using one of the following, depending on your configuration:
- HTTP API Gateway (default): A lightweight, cost-effective solution for HTTP APIs.
- Application Load Balancer (ALB): A more powerful load balancer that supports features like WebSockets and sticky sessions.
- Network Load Balancer (NLB): A high-performance load balancer that can handle millions of requests per second and supports protocols other than HTTP/S.
Stacktape automatically provisions and configures the chosen entry point for you.
When to use it
This table helps you choose the right container-based resource for your needs:
Resource type | Description | Use-cases |
---|---|---|
web-service | A container with a public endpoint and URL. | Public APIs, websites |
private-service | A container with a private endpoint, accessible only within your stack. | Private APIs, internal services |
worker-service | A container that runs continuously but is not directly accessible. | Background processing, message queue consumers |
multi-container-workload | A customizable workload with multiple containers, where you define the accessibility of each one. | Complex, multi-component services |
batch-job | A container that runs a single job and then terminates. | One-off or scheduled data processing tasks |
Advantages
- Control over the environment: Runs any Docker image or an image built from a Dockerfile.
- Cost-effective for predictable loads: Cheaper than Lambda functions for services with steady traffic.
- Load-balanced and scalable: Automatically scales horizontally based on CPU and memory usage.
- Highly available: Runs across multiple Availability Zones to ensure resilience.
- Secure by default: The underlying environment is managed and secured by AWS.
Disadvantages
- Slower scaling: Adding new container instances can take several seconds to a few minutes, which is slower than the nearly-instant scaling of Lambda functions.
- Not fully serverless: Cannot scale down to zero. You pay for at least one running instance (starting at ~$8/month), even if it's idle.
Image
A web service runs a Docker image. You can provide this image in four ways:
- stacktape-image-buildpack: Automatically packages your code without needing a Dockerfile.
- external-buildpack: Uses external buildpacks to create an image.
- custom-dockerfile: Builds an image from your own Dockerfile.
- prebuilt-images: Uses an existing image from a container registry.
Environment variables
Most commonly used types of environment variables:
- Static - a string, number, or boolean (will be stringified).
- Result of a custom directive.
- Referenced property of another resource (using the $ResourceParam directive). To learn more, refer to the referencing parameters guide. If you use environment variables to inject information about resources into your application, see also the connectTo property, which simplifies this process.
- Value of a secret (using the $Secret directive).
```yaml
resources:
  webService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/main.ts
      environment:
        - name: STATIC_ENV_VAR
          value: my-env-var
        - name: DYNAMICALLY_SET_ENV_VAR
          value: $MyCustomDirective('input-for-my-directive')
        - name: DB_HOST
          value: $ResourceParam('myDatabase', 'host')
        - name: DB_PASSWORD
          value: $Secret('dbSecret.password')
      resources:
        cpu: 2
        memory: 2048
```
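At runtime, all of these variables arrive as ordinary strings in `process.env`. A minimal sketch of reading them at startup; the `requireEnv` helper is illustrative and not part of Stacktape:

```typescript
// Illustrative helper: fail fast at startup if a required variable is missing,
// instead of discovering the misconfiguration on the first request.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value === '') {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// DB_HOST and DB_PASSWORD correspond to the names defined in the config above.
const dbConfig = {
  host: process.env.DB_HOST,
  password: process.env.DB_PASSWORD
};
```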
Health check
Health checks monitor your container to ensure it's running correctly. If a container fails its health check, it's automatically terminated and replaced with a new one.
For example, this health check uses `curl` to send a request to the service every 20 seconds. If the request fails or takes longer than 5 seconds, the check is considered failed.
```yaml
resources:
  myWebService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/index.ts
      internalHealthCheck:
        healthCheckCommand: ['CMD-SHELL', 'curl -f http://localhost/ || exit 1']
        intervalSeconds: 20
        timeoutSeconds: 5
        startPeriodSeconds: 150
        retries: 2
      resources:
        cpu: 2
        memory: 2048
```
Shutdown
When a service instance is shut down (for example, during a deployment or when the stack is deleted), all of its containers receive a `SIGTERM` signal. This gives your application a chance to shut down gracefully.
By default, the application has 2 seconds to clean up before it's forcefully stopped with a `SIGKILL` signal. You can change this with the `stopTimeout` property (from 2 to 120 seconds).
```typescript
process.on('SIGTERM', () => {
  console.info('Received SIGTERM signal. Cleaning up and exiting process...');
  // Finish any outstanding requests, or close a database connection...
  process.exit(0);
});
```
Example of a cleanup function that runs before the container shuts down.
Logging
Anything your application writes to `stdout` or `stderr` is captured and stored in AWS CloudWatch.
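Because `stdout` is the log transport, emitting one JSON object per line makes the logs easy to filter and query in CloudWatch later. A minimal sketch; the `formatLog` helper is illustrative, not a Stacktape API:

```typescript
// Illustrative structured logger: one JSON object per line on stdout.
function formatLog(
  level: 'info' | 'warn' | 'error',
  message: string,
  fields: Record<string, unknown> = {}
): string {
  return JSON.stringify({ timestamp: new Date().toISOString(), level, message, ...fields });
}

// Each line lands in CloudWatch as a single, machine-parseable log event.
console.log(formatLog('info', 'request handled', { path: '/', statusCode: 200 }));
```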
You can view logs in a few ways:
- Stacktape Console: Find a direct link to the logs in the Stacktape Console.
- Stacktape CLI: Use the `stacktape logs` command to stream logs to your terminal.
- AWS Console: Browse logs directly in the AWS CloudWatch console. The `stacktape stack-info` command can provide a link.
Log storage can be expensive. To manage costs, you can configure `retentionDays` to automatically delete logs after a certain period.
Forwarding logs
You can forward logs to third-party services. See Forwarding Logs for more information.
Compute resources
In the `resources` section, you configure the CPU, memory, and instance types for your service. You can run your containers using either Fargate or EC2 instances.
- Fargate is a serverless option that lets you run containers without managing servers. You only need to specify the `cpu` and `memory` your service requires. It's a good choice for applications that need to meet high security standards like PCI DSS Level 1 and SOC 2.
- EC2 instances are virtual servers that give you more control. You choose the instance types that best fit your needs, and ECS places your containers on them.
Regardless of whether you use Fargate or EC2 instances, your containers run securely within a VPC.
When specifying resources, there are two underlying compute engines to choose from:
- Fargate - abstracts server and cluster management away from you, letting you run containers without managing the underlying servers. This simplifies deployment and management but offers less control over the computing environment.
- EC2 (Elastic Compute Cloud) - provides granular control over the underlying servers (instances). By choosing `instanceTypes`, you get complete control over the computing environment and the ability to optimize for specific workloads.
To use Fargate: do NOT specify `instanceTypes`; specify the `cpu` and `memory` properties instead.
To use EC2 instances: specify `instanceTypes`.
Using Fargate
To use Fargate, specify `cpu` and `memory` in the `resources` section without including `instanceTypes`.
```yaml
resources:
  myWebService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/index.ts
      resources:
        cpu: 0.25
        memory: 512
```
Example of a service running on Fargate.
Using EC2 instances
To use EC2 instances, specify a list of `instanceTypes` in the `resources` section.
- EC2 instances are automatically added or removed to meet the scaling needs of your compute resource (see also the `scaling` property).
- When using `instanceTypes`, we recommend specifying only one instance type and NOT setting the `cpu` or `memory` properties. This way, Stacktape sets the cpu and memory to fit the instance precisely, resulting in optimal resource utilization.
- Stacktape leverages ECS Managed Scaling with a target utilization of 100%. This means no unused EC2 instances (i.e., instances not running your workload/service) are kept running; unused instances are terminated.
- The order of the `instanceTypes` list matters. Instance types higher on the list are preferred over those lower on the list. The next instance type on the list is used only when the preceding one is unavailable.
- For an exhaustive list of available EC2 instance types, refer to the AWS docs.
To ensure that your containers are running on patched and up-to-date EC2 instances, your instances are automatically refreshed (replaced) once a week (Sunday 00:00 UTC). Your compute resource stays available throughout this process.
```yaml
resources:
  myWebService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/index.ts
      resources:
        instanceTypes:
          - c5.large
```
Example of a service running on EC2 instances.
Container placement on EC2
Stacktape tries to use your EC2 instances as efficiently as possible.
- If you specify `instanceTypes` without `cpu` and `memory`, Stacktape configures each service instance to use the full resources of one EC2 instance. When the service scales out, a new EC2 instance is added for each new service instance.
- If you specify `cpu` and `memory`, AWS will place multiple service instances on a single EC2 instance if there's enough capacity, maximizing utilization.
Default CPU and memory for EC2
- If `cpu` is not specified, containers on an EC2 instance share its CPU capacity.
- If `memory` is not specified, Stacktape sets the memory to the maximum amount available on the smallest instance type in your `instanceTypes` list.
Using a warm pool
A warm pool keeps pre-initialized EC2 instances in a stopped state, allowing your service to scale out much faster. This is useful for handling sudden traffic spikes. You only pay for the storage of stopped instances, not for compute time.
To enable it, set `enableWarmPool` to `true`. This feature is only available when you specify exactly one instance type.
For more details, see the AWS Auto Scaling warm pools documentation.
```yaml
resources:
  myWebService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/index.ts
      resources:
        instanceTypes:
          - c5.large
        enableWarmPool: true
```
Example of using a warm pool with EC2 instances.
Scaling
The `scaling` section lets you control how your service scales. You can set the minimum and maximum number of running instances and define a policy that triggers scaling actions.
Scaling policy
A scaling policy defines the CPU and memory thresholds that trigger scaling.
- Scaling out (adding instances): The service scales out if either the average CPU or memory utilization exceeds the target you set.
- Scaling in (removing instances): The service scales in only when both CPU and memory utilization are below their target values.
The scaling process is more aggressive when adding capacity than when removing it. This helps ensure your application can handle sudden increases in load, while scaling in more cautiously to prevent flapping (scaling in and out too frequently).
```yaml
resources:
  myWebService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/index.ts
      resources:
        cpu: 0.5
        memory: 1024
      scaling:
        minInstances: 1
        maxInstances: 5
        scalingPolicy:
          keepAvgMemoryUtilizationUnder: 80
          keepAvgCpuUtilizationUnder: 80
```
Example of a scaling configuration.
Storage
Each service instance has its own temporary (ephemeral) storage with a fixed size of 20 GB. This storage is deleted when the instance is removed. Different instances of the same service do not share their storage.
For persistent data storage, use Buckets.
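Ephemeral storage is still useful as per-instance scratch space, e.g. for temporary downloads or caches. A sketch under the assumption above: anything written there can disappear whenever the instance is replaced, and is never shared between instances.

```typescript
import { writeFileSync, readFileSync, existsSync } from 'fs';
import { join } from 'path';
import { tmpdir } from 'os';

// Scratch file in the instance's ephemeral storage. Each service instance
// has its own copy: do not rely on it surviving a restart or being shared.
const cachePath = join(tmpdir(), 'render-cache.json');

function readCache(): Record<string, string> {
  return existsSync(cachePath) ? JSON.parse(readFileSync(cachePath, 'utf8')) : {};
}

function writeCache(cache: Record<string, string>): void {
  writeFileSync(cachePath, JSON.stringify(cache));
}
```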
Accessing other resources
By default, AWS resources cannot communicate with each other. Access must be granted using IAM permissions.
Stacktape automatically configures the necessary permissions for the services it manages. For example, it allows a web service to write logs to CloudWatch.
However, if your application needs to access other resources, you must grant permissions manually. You can do this in two ways:
Using connectTo
The `connectTo` property lets you grant access to other Stacktape-managed resources by simply listing their names. Stacktape automatically configures the required IAM permissions and injects connection details as environment variables into your service.
```yaml
resources:
  photosBucket:
    type: bucket

  myWebService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/index.ts
      connectTo:
        # access to the bucket
        - photosBucket
        # access to AWS SES
        - aws:ses
      resources:
        cpu: 0.25
        memory: 512
```
By referencing resources (or services) in the `connectTo` list, Stacktape automatically:
- configures the compute resource's IAM role permissions, if needed
- sets up the security group rules to allow access, if needed
- injects relevant environment variables with information about the connected resource into the compute resource's runtime:
  - names of environment variables use upper-snake-case and have the form `STP_[RESOURCE_NAME]_[VARIABLE_NAME]`
  - examples: `STP_MY_DATABASE_CONNECTION_STRING` or `STP_MY_EVENT_BUS_ARN`
  - the list of injected variables for each resource type is shown below
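Given that naming scheme, the variable name for a connected resource can be derived mechanically. A sketch; the `stpEnvName` helper is illustrative and not part of any Stacktape SDK:

```typescript
// Convert a camelCase resource name plus a variable name into the
// STP_[RESOURCE_NAME]_[VARIABLE_NAME] upper-snake-case form described above.
function stpEnvName(resourceName: string, variableName: string): string {
  const upperSnake = resourceName.replace(/([a-z0-9])([A-Z])/g, '$1_$2').toUpperCase();
  return `STP_${upperSnake}_${variableName}`;
}

// e.g. for the photosBucket resource from the example config above:
const bucketName = process.env[stpEnvName('photosBucket', 'NAME')];
```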
Granted permissions and injected environment variables differ by resource type:
Bucket
- Permissions:
  - list objects in a bucket
  - create / get / delete / tag object in a bucket
- Injected env variables: `NAME`, `ARN`

DynamoDB table
- Permissions:
  - get / put / update / delete item in a table
  - scan / query a table
  - describe table stream
- Injected env variables: `NAME`, `ARN`, `STREAM_ARN`
MongoDB Atlas cluster
- Permissions:
  - Allows connection to a cluster with `accessibilityMode` set to `scoping-workloads-in-vpc`. To learn more about MongoDB Atlas cluster accessibility modes, refer to the MongoDB Atlas cluster docs.
  - Creates an access "user" associated with the compute resource's role to allow secure, credential-less access to the cluster.
- Injected env variables: `CONNECTION_STRING`

Relational (SQL) database
- Permissions:
  - Allows connection to a relational database with `accessibilityMode` set to `scoping-workloads-in-vpc`. To learn more about relational database accessibility modes, refer to the Relational databases docs.
- Injected env variables: `CONNECTION_STRING`, `JDBC_CONNECTION_STRING`, `HOST`, `PORT` (for an Aurora multi-instance cluster additionally: `READER_CONNECTION_STRING`, `READER_JDBC_CONNECTION_STRING`, `READER_HOST`)
Redis cluster
- Permissions:
  - Allows connection to a redis cluster with `accessibilityMode` set to `scoping-workloads-in-vpc`. To learn more about redis cluster accessibility modes, refer to the Redis clusters docs.
- Injected env variables: `HOST`, `READER_HOST`, `PORT`

Event bus
- Permissions:
  - publish events to the specified Event bus
- Injected env variables: `ARN`

Function
- Permissions:
  - invoke the specified function
  - invoke the specified function via URL (if the lambda has URL enabled)
- Injected env variables: `ARN`

Batch job
- Permissions:
  - submit a batch-job instance into the batch-job queue
  - list submitted job instances in a batch-job queue
  - describe / terminate a batch-job instance
  - list executions of the state machine which executes the batch-job according to its strategy
  - start / terminate execution of the state machine which executes the batch-job according to its strategy
- Injected env variables: `JOB_DEFINITION_ARN`, `STATE_MACHINE_ARN`
User auth pool
- Permissions:
  - full control over the user pool (`cognito-idp:*`)
  - for more information about allowed methods, refer to the AWS docs
- Injected env variables: `ID`, `CLIENT_ID`, `ARN`

SNS Topic
- Permissions:
  - confirm / list subscriptions of the topic
  - publish / subscribe to the topic
  - unsubscribe from the topic
- Injected env variables: `ARN`, `NAME`

SQS Queue
- Permissions:
  - send / receive / delete message
  - change visibility of message
  - purge queue
- Injected env variables: `ARN`, `NAME`, `URL`

Upstash Kafka topic
- Injected env variables: `TOPIC_NAME`, `TOPIC_ID`, `USERNAME`, `PASSWORD`, `TCP_ENDPOINT`, `REST_URL`

Upstash Redis
- Injected env variables: `HOST`, `PORT`, `PASSWORD`, `REST_TOKEN`, `REST_URL`, `REDIS_URL`

Private service
- Injected env variables: `ADDRESS`

aws:ses (Macro)
- Permissions:
  - gives full permissions to AWS SES (`ses:*`)
  - for more information about allowed methods, refer to the AWS docs
Using iamRoleStatements
For more granular control, you can provide a list of raw IAM role statements. These statements are added to the service's IAM role, allowing you to define precise permissions for any AWS resource.
```yaml
resources:
  myWebService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: server/index.ts
      iamRoleStatements:
        - Resource:
            - $CfResourceParam('NotificationTopic', 'Arn')
          Effect: 'Allow'
          Action:
            - 'sns:Publish'
      resources:
        cpu: 2
        memory: 2048

cloudformationResources:
  NotificationTopic:
    Type: 'AWS::SNS::Topic'
```
Load balancing
The `loadBalancing` property configures how traffic is distributed to your containers. Supported entry point types are `http-api-gateway`, `application-load-balancer`, and `network-load-balancer`. To understand when to use which, refer to the sections below.
`http-api-gateway` (default)
- distributes traffic to the available containers randomly
- uses a pay-per-use pricing model (~$1 per million requests)
- the pay-per-use pricing model is ideal for most workloads, but once you cross ~500,000 requests per day, it might be cheaper to use an application-load-balancer

`application-load-balancer`
- distributes traffic to the available containers in a round-robin fashion
- pricing is a combination of a flat hourly charge (~$0.0252/hour) and used LCUs (Load Balancer Capacity Units, ~$0.08/hour)
- is eligible for the free tier; for a better understanding of pricing, refer to the AWS docs

`network-load-balancer`
- supports TCP and TLS protocols
- pricing is a combination of a flat hourly charge (~$0.0252/hour) and used LCUs (Load Balancer Capacity Units, ~$0.08/hour)
- is eligible for the free tier; for a better understanding of pricing, refer to the AWS docs
Application Load Balancer
An Application Load Balancer (ALB) is a good choice when you need features like WebSocket support or sticky sessions, or if you expect high traffic volumes, as it can be more cost-effective than an HTTP API Gateway at scale.
```yaml
resources:
  webService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/main.ts
      resources:
        cpu: 2
        memory: 2048
      loadBalancing:
        type: application-load-balancer # default is http-api-gateway
```
Network Load Balancer
A Network Load Balancer (NLB) is ideal for applications that require extreme performance, need to expose multiple ports, or use protocols other than HTTP/S.
```yaml
resources:
  webService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/main.ts
      resources:
        cpu: 2
        memory: 2048
      loadBalancing:
        type: network-load-balancer
        properties:
          ports:
            - port: 443
              containerPort: 80 # OPTIONAL: specify if target container port is different from `port`
              protocol: TLS # OPTIONAL: Supported protocols are TLS and TCP. Default is TLS.
```
Deployment strategies
By default, Stacktape uses a rolling update strategy to deploy new versions of your service. You can use the `deployment` property to choose a different strategy, such as blue/green.
- Using `deployment`, you can update the web-service in a live environment safely, by shifting traffic to the new version gradually.
- A gradual traffic shift gives you the opportunity to test and monitor the workload during the update and, in case of a problem, quickly roll back.
- Deployment supports multiple strategies:
  - Canary10Percent5Minutes - Shifts 10 percent of traffic in the first increment. The remaining 90 percent is deployed five minutes later.
  - Canary10Percent15Minutes - Shifts 10 percent of traffic in the first increment. The remaining 90 percent is deployed 15 minutes later.
  - Linear10PercentEvery1Minute - Shifts 10 percent of traffic every minute until all traffic is shifted.
  - Linear10PercentEvery3Minutes - Shifts 10 percent of traffic every three minutes until all traffic is shifted.
  - AllAtOnce - Shifts all traffic to the updated web-service at once.
- You can validate or abort a deployment (update) using lambda-function hooks.

When using deployment, your web-service must use the application-load-balancer load balancing type.
```yaml
resources:
  webService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/main.ts
      resources:
        cpu: 2
        memory: 2048
      loadBalancing:
        type: application-load-balancer
      deployment:
        strategy: Canary10Percent5Minutes
```
Hook functions
You can use hook functions to run checks before, during, or after a deployment.
```yaml
resources:
  webService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/main.ts
      resources:
        cpu: 2
        memory: 2048
      loadBalancing:
        type: application-load-balancer
      deployment:
        strategy: Canary10Percent5Minutes
        afterTrafficShiftFunction: validateDeployment

  validateDeployment:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: src/validate-deployment.ts
```
```typescript
import { CodeDeployClient, PutLifecycleEventHookExecutionStatusCommand } from '@aws-sdk/client-codedeploy';

const client = new CodeDeployClient({});

export default async (event) => {
  // read DeploymentId and LifecycleEventHookExecutionId from the payload
  const { DeploymentId, LifecycleEventHookExecutionId } = event;

  // perform validations here

  await client.send(
    new PutLifecycleEventHookExecutionStatusCommand({
      deploymentId: DeploymentId,
      lifecycleEventHookExecutionId: LifecycleEventHookExecutionId,
      status: 'Succeeded' // status can be 'Succeeded' or 'Failed'
    })
  );
};
```
Code for the `validateDeployment` function.
Test traffic listener
When using the `beforeAllowTraffic` hook, you can use a test listener to send traffic to the new version of your service before it receives production traffic. By default, the test listener is created on port `8080`.
```yaml
resources:
  webService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/main.ts
      resources:
        cpu: 2
        memory: 2048
      loadBalancing:
        type: application-load-balancer
      deployment:
        strategy: Canary10Percent5Minutes
        beforeAllowTrafficFunction: testDeployment

  testDeployment:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: src/test-deployment.ts
      environment:
        - name: WEB_SERVICE_URL
          value: $ResourceParam('webService', 'url')
        - name: TEST_LISTENER_PORT
          value: 8080
```
```typescript
import { CodeDeployClient, PutLifecycleEventHookExecutionStatusCommand } from '@aws-sdk/client-codedeploy';
import fetch from 'node-fetch';

const client = new CodeDeployClient({});

export default async (event: { DeploymentId: string; LifecycleEventHookExecutionId: string }) => {
  const { DeploymentId: deploymentId, LifecycleEventHookExecutionId: lifecycleEventHookExecutionId } = event;

  try {
    // test the new version using the test listener port
    await fetch(`${process.env.WEB_SERVICE_URL}:${process.env.TEST_LISTENER_PORT}`);
    // validate the result
    // do some other tests ...
  } catch (err) {
    // send FAILED status if an error occurred
    await client.send(
      new PutLifecycleEventHookExecutionStatusCommand({
        deploymentId,
        lifecycleEventHookExecutionId,
        status: 'Failed'
      })
    );
    throw err;
  }

  // send SUCCEEDED status after successful testing
  await client.send(
    new PutLifecycleEventHookExecutionStatusCommand({
      deploymentId,
      lifecycleEventHookExecutionId,
      status: 'Succeeded'
    })
  );
};
```
Code for the `testDeployment` function.
Default VPC connection
Some AWS services, like relational databases, must be deployed within a VPC. If your stack includes such resources, Stacktape automatically creates a default VPC and connects them to it.
Web services are connected to this default VPC by default, allowing them to communicate with other VPC-based resources without extra configuration.
To learn more, see the documentation on VPCs and resource accessibility.
CORS
Cross-Origin Resource Sharing (CORS) is a security feature that controls how web browsers handle requests to a different domain than the one the user is currently on.
If your frontend and backend are on different domains (e.g., `mydomain.com` and `api.mydomain.com`), you'll need to configure CORS.
If you are already handling CORS in your application code, you don't need to enable it in your Stacktape configuration. Also, the `cors` property cannot be used when the `loadBalancing` type is `application-load-balancer`.
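If you do handle CORS in application code, the core of it is echoing an allowed origin back in the response headers. A minimal framework-agnostic sketch; the allow-list and header values are examples, not a Stacktape API:

```typescript
// Example allow-list; replace with your own origins.
const allowedOrigins = ['https://mydomain.com', 'https://www.mydomain.com'];

// Compute the CORS response headers for a request's Origin header.
// Returns an empty object when the origin is not allowed.
function corsHeaders(requestOrigin: string | undefined): Record<string, string> {
  if (!requestOrigin || !allowedOrigins.includes(requestOrigin)) {
    return {};
  }
  return {
    'Access-Control-Allow-Origin': requestOrigin,
    'Access-Control-Allow-Methods': 'GET,POST,PUT,DELETE,OPTIONS',
    'Access-Control-Allow-Headers': 'Content-Type,Authorization',
    Vary: 'Origin' // caches must key responses on the Origin header
  };
}
```

In an Express handler, you would apply these with `res.set(corsHeaders(req.headers.origin))` before sending the response.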
You can enable CORS with a single line:
```yaml
resources:
  myWebService:
    type: 'web-service'
    properties:
      resources:
        cpu: 2
        memory: 2048
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/main.ts
      cors:
        enabled: true
```
A web service with CORS enabled.
You can also customize the CORS headers:
If you do not specify any additional properties, the default CORS configuration is used:
- `AllowedMethods`: inferred from the methods used by integrations associated with the API gateway
- `AllowedOrigins`: `*`
- `AllowedHeaders`: `Content-Type`, `X-Amz-Date`, `Authorization`, `X-Api-Key`, `X-Amz-Security-Token`, `X-Amz-User-Agent`
Custom domain names
You can use a custom domain for your web service. If you don't have one, you can register one through Stacktape.
If you already have a domain, you can either let Stacktape manage it (if you use AWS Route 53 for DNS) or use a third-party DNS provider.
For more details, see the Domains and Certificates page.
Using Stacktape to manage domains and certs
Stacktape allows you to connect your custom domain names to some of your resources (Web Service, HTTP API Gateways, Application Load Balancers and Buckets with CDNs).
Connecting a custom domain to the resource does 2 things:
- Creates DNS records:
  - If you use your custom domain with a resource, Stacktape automatically creates a DNS record (during deploy) pointing the specified domain name to the resource.
- Adds TLS certificates:
  - If the origin resource (HTTP API Gateway, Application Load Balancer or CDN) uses the HTTPS protocol, Stacktape takes care of issuing and attaching a correct (free, AWS-managed) certificate to the resource. This means you do not have to deal with TLS termination; it is handled by the connected resource.
  - If you want to use your own certificates, you can configure `customCertificateArns`.
To manage a custom domain, it first needs to be added to your AWS account. This means that a hosted zone (collection of records managed together for a given domain) for your domain exists in your AWS account and your domain registrar's name servers are pointing to it. To learn more, refer to Adding a domain guide.
```yaml
resources:
  myWebService:
    type: web-service
    properties:
      resources:
        cpu: 2
        memory: 2048
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/main.ts
      customDomains:
        - domainName: whatever.mydomain.com
```
Using a 3rd-party DNS
To use a domain from a provider like GoDaddy or Cloudflare:
- Create or import a TLS certificate for your domain in the AWS Certificate Manager console and copy its ARN.
- Add the `customDomains` configuration to your service, using the certificate ARN and disabling DNS record creation.
```yaml
resources:
  apiService:
    type: web-service
    properties:
      # ...
      customDomains:
        - domainName: mydomain.com
          disableDnsRecordCreation: true
          customCertificateArn: <<ARN_OF_YOUR_CERTIFICATE>>
```
- After deploying, find the service's domain name in the Stacktape Console.
- In your DNS provider's dashboard, create a `CNAME` or `ALIAS` record pointing to the service's domain name.
CDN
You can place an AWS CloudFront CDN in front of your web service to cache content and reduce latency.
- A CDN is a globally distributed network that can cache responses from your Web Service at the edge, close to your users.
- AWS CloudFront has 205 edge locations on 6 continents.
- The CDN is used to:
  - reduce latency & improve load times
  - reduce bandwidth costs
  - reduce the amount of traffic coming to the origin (Web Service containers)
  - improve security
- The CDN caches responses from the origin at the edge for a specified amount of time.
- If the content requested by the client is in the CDN cache, the CDN immediately returns it to the client without making a request to the origin.
- If the content is NOT in the cache, the CDN makes a request to the origin. The response from the origin is then forwarded to the client and cached at the edge.
For more information, see the CDN documentation.
```yaml
resources:
  webService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/main.ts
      resources:
        cpu: 2
        memory: 2048
      cdn:
        enabled: true
```
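How long the CDN keeps a response at the edge is typically driven by the origin's `Cache-Control` response header. A sketch of how the service itself might decide that header per route; the route prefixes and max-age values are illustrative assumptions, not Stacktape defaults:

```typescript
// Decide the Cache-Control header for a response. Static, shared content is
// safe to cache at the edge; per-user content is not. Values are examples.
function cacheControlFor(path: string): string {
  const cacheablePrefixes = ['/assets/', '/products'];
  const cacheable = cacheablePrefixes.some((prefix) => path.startsWith(prefix));
  // 'public, max-age=60' lets the CDN serve the response for up to 60 seconds;
  // 'private, no-store' keeps per-user responses out of shared caches.
  return cacheable ? 'public, max-age=60' : 'private, no-store';
}
```

In an Express handler, you would apply this with `res.set('Cache-Control', cacheControlFor(req.path))` before sending the response.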
Firewall
You can protect your web service with a web application firewall (WAF).
- You can use a `web-app-firewall` to protect your resources from common web exploits that could affect application availability, compromise security, or consume excessive resources.
- The web app firewall protects your application by filtering dangerous requests coming to your app.
To learn more, see the Web Application Firewall documentation.
```yaml
resources:
  myFirewall:
    type: web-app-firewall
    properties:
      scope: regional

  webService:
    type: web-service
    properties:
      packaging:
        type: stacktape-image-buildpack
        properties:
          entryfilePath: src/main.ts
      resources:
        cpu: 2
        memory: 2048
      useFirewall: myFirewall
```
Referenceable parameters
The following parameters can be easily referenced using the $ResourceParam directive.
To learn more about referencing parameters, refer to referencing parameters.
Web service default domain name
- Usage: `$ResourceParam('<<resource-name>>', 'domain')`

Web service default URL
- Usage: `$ResourceParam('<<resource-name>>', 'url')`

Comma-separated list of custom domain names assigned to the Web Service (only available if you use custom domain names)
- Usage: `$ResourceParam('<<resource-name>>', 'customDomains')`

Comma-separated list of custom domain name URLs (only available if you use custom domain names)
- Usage: `$ResourceParam('<<resource-name>>', 'customDomainUrls')`

Default domain of the CDN distribution (only available if you DO NOT configure custom domain names for the CDN)
- Usage: `$ResourceParam('<<resource-name>>', 'cdnDomain')`

Default URL of the CDN distribution (only available if you DO NOT configure custom domain names for the CDN)
- Usage: `$ResourceParam('<<resource-name>>', 'cdnUrl')`

Comma-separated list of custom domain names assigned to the CDN (only available if you configure custom domain names for the CDN)
- Usage: `$ResourceParam('<<resource-name>>', 'cdnCustomDomains')`

Comma-separated list of custom domain name URLs of the CDN (only available if you configure custom domain names for the CDN)
- Usage: `$ResourceParam('<<resource-name>>', 'cdnCustomDomainUrls')`
Pricing
When using Fargate, you are charged for:
- vCPU per hour: ~$0.04 - $0.07, depending on the region.
- Memory (GB) per hour: ~$0.004 - $0.008, depending on the region.
Usage is billed by the second, with a one-minute minimum. For more details, see AWS Fargate pricing.