Stacktape


Buckets (S3)



Buckets provide a way to store and retrieve any amount of data, at any time, from anywhere on the web. They are a good choice for a variety of use cases, including hosting websites, storing user-generated content, and for backup and disaster recovery.

A bucket is a collection of objects, where an object is a file and any metadata that describes it. Buckets have a flat structure, but you can simulate a folder hierarchy by using a common prefix for your object names (e.g., photos/).
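As an illustration of how a flat key space can be presented as folders, the sketch below (not part of Stacktape; purely illustrative) groups object keys under a prefix the same way S3's ListObjectsV2 does when you pass the Prefix and Delimiter parameters:

```typescript
// Illustrative sketch: present a flat list of object keys as a "folder"
// listing, mirroring S3 ListObjectsV2 with Prefix + Delimiter.
type FolderListing = { folders: string[]; files: string[] };

const listAsFolder = (keys: string[], prefix: string, delimiter = '/'): FolderListing => {
  const folders = new Set<string>();
  const files: string[] = [];
  for (const key of keys) {
    if (!key.startsWith(prefix)) continue;
    const rest = key.slice(prefix.length);
    const idx = rest.indexOf(delimiter);
    if (idx === -1) {
      files.push(key); // object directly "inside" the prefix
    } else {
      folders.add(prefix + rest.slice(0, idx + 1)); // simulated subfolder ("common prefix")
    }
  }
  return { folders: [...folders], files };
};

// Objects stored under a common "photos/" prefix
const keys = ['photos/2023/a.jpg', 'photos/2023/b.jpg', 'photos/readme.txt', 'other.txt'];
const listing = listAsFolder(keys, 'photos/');
// listing.folders -> ['photos/2023/'], listing.files -> ['photos/readme.txt']
```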

Under the hood, Stacktape uses Amazon S3 for buckets.

When to use them

Buckets are ideal for storing large amounts of data. They are not recommended for use cases that require very low read/write latency.

Advantages

  • Easy to use: S3 provides a simple, HTTP-based API.
  • Serverless: Buckets scale automatically, and you only pay for what you use.
  • Highly available and durable: S3 is designed for 99.999999999% (eleven 9s) of data durability, storing objects across multiple Availability Zones.
  • Flexible storage classes: You can choose from multiple storage classes with different latencies, prices, and durability characteristics.
  • Access control: You can easily control who can access your bucket.
  • Encryption: Supports server-side encryption.
  • Integrations: You can trigger a function or a batch job in response to bucket events.

Disadvantages

  • Performance: Read and write operations are significantly slower than with block storage (a physical disk attached to a machine).

Basic usage

resources:
  myBucket:
    type: bucket

  myFunction:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: path/to/my/lambda.ts
      environment:
        - name: BUCKET_NAME
          value: $ResourceParam('myBucket', 'name')
      connectTo:
        - myBucket

A Lambda function connected to a bucket.

import { S3 } from '@aws-sdk/client-s3';

const s3 = new S3({});

// getObject returns a readable stream, so we need to transform it to a string
const streamToString = (stream) => {
  const chunks = [];
  return new Promise((resolve, reject) => {
    stream.on('data', (chunk) => chunks.push(Buffer.from(chunk)));
    stream.on('error', (err) => reject(err));
    stream.on('end', () => resolve(Buffer.concat(chunks).toString('utf8')));
  });
};

const handler = async (event, context) => {
  await s3.putObject({
    Bucket: process.env.BUCKET_NAME,
    Key: 'my-file.json',
    Body: JSON.stringify({ message: 'hello' }) // or fs.createReadStream('my-source-file.json')
  });
  const res = await s3.getObject({
    Bucket: process.env.BUCKET_NAME,
    Key: 'my-file.json'
  });
  const body = await streamToString(res.Body);
};

export default handler;

A Lambda function that uploads and downloads a file from a bucket.
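The streamToString helper above works with any Node.js readable stream, so you can verify it locally without touching S3, as in the sketch below. (Recent versions of the AWS SDK v3 also expose a transformToString() method on the returned Body, which can replace the manual helper; availability depends on your SDK version.)

```typescript
import { Readable } from 'node:stream';

// Same helper as in the example above
const streamToString = (stream: Readable): Promise<string> => {
  const chunks: Buffer[] = [];
  return new Promise((resolve, reject) => {
    stream.on('data', (chunk) => chunks.push(Buffer.from(chunk)));
    stream.on('error', (err) => reject(err));
    stream.on('end', () => resolve(Buffer.concat(chunks).toString('utf8')));
  });
};

const main = async () => {
  // Simulate the body returned by getObject with a local readable stream
  const fakeBody = Readable.from([Buffer.from('{"message":'), Buffer.from('"hello"}')]);
  const body = await streamToString(fakeBody);
  const parsed = JSON.parse(body);
  console.log(parsed.message); // prints "hello"
};

main();
```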

Directory upload

You can automatically upload the contents of a local directory to a bucket during deployment.

  • Allows you to upload a specified directory to the bucket on every deployment
  • After the upload is finished, your bucket will contain the contents of the local folder.
  • Files are uploaded using parallel, multipart uploads.

Existing contents of the bucket will be deleted and replaced with the contents of the local directory. You should not use directoryUpload for buckets with application-generated or user-generated content.

resources:
  myBucket:
    type: bucket
    properties:
      directoryUpload:
        directoryPath: ../public

This configuration uploads the public folder to the bucket on every deployment.

DirectoryUpload  API reference
directoryPath
Required
fileOptions
excludeFilesPatterns
headersPreset
disableS3TransferAcceleration

Adding metadata

You can add metadata, such as headers and tags, to the files you upload. This is useful for setting Cache-Control headers for a static website or for filtering objects in lifecycle rules.

DirectoryUploadFilter  API reference
includePattern
Required
excludePattern
headers
tags
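As a sketch (field names taken from the API reference above; verify the exact schema against the full reference), a filter that sets a long-lived Cache-Control header on images and tags them could look like:

```yaml
resources:
  myBucket:
    type: bucket
    properties:
      directoryUpload:
        directoryPath: ../public
        fileOptions:
          - includePattern: '**/*.png'
            headers:
              - key: Cache-Control
                value: 'public, max-age=31536000'
            tags:
              - key: content-type
                value: image
```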

Encryption

If enabled, all objects uploaded to the bucket will be encrypted on the server side using the AES-256 algorithm.

resources:
  myBucket:
    type: bucket
    properties:
      encryption: true

CORS

You can configure Cross-Origin Resource Sharing (CORS) to allow web applications from other domains to access the resources in your bucket.

  • Web browsers enforce the same-origin policy, which blocks a website from making requests to a different origin (server) than the one the website is served from. This means that a request from a website served from https://my-website.s3.eu-west-1.amazonaws.com/ to https://my-api.my-domain.com will fail unless the target server explicitly allows it using CORS (Cross-Origin Resource Sharing).

  • If you enable CORS and do not specify any CORS rules, a default rule with the following configuration is used:

    • AllowedMethods: GET, PUT, HEAD, POST, DELETE
    • AllowedOrigins: '*'
    • AllowedHeaders: Authorization, Content-Length, Content-Type, Content-MD5, Date, Expect, Host, x-amz-content-sha256, x-amz-date, x-amz-security-token
  • When the bucket receives a preflight request from a browser, it evaluates the CORS configuration for the bucket and uses the first CORS rule that matches the incoming browser request to enable a cross-origin request. For a rule to match, the following conditions must be met:

    • The request's Origin header must match one of the allowedOrigins elements.
    • The request method (for example, GET or PUT), or the Access-Control-Request-Method header in the case of a preflight OPTIONS request, must be one of the allowedMethods.
    • Every header listed in the request's Access-Control-Request-Headers header on the preflight request must match one of the allowedHeaders.
resources:
  myBucket:
    type: bucket
    properties:
      cors:
        enabled: true
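The matching conditions above can be sketched as a small function (illustrative only, not Stacktape or S3 source code; real S3 origin matching also supports wildcards inside origin strings):

```typescript
// Illustrative sketch of the CORS rule-matching described above: a rule
// matches when origin, method, and every requested header are allowed;
// the first matching rule wins.
type CorsRule = {
  allowedOrigins: string[];
  allowedMethods: string[];
  allowedHeaders: string[];
};

type CorsRequest = { origin: string; method: string; requestHeaders: string[] };

const ruleMatches = (rule: CorsRule, req: CorsRequest): boolean =>
  rule.allowedOrigins.some((o) => o === '*' || o === req.origin) &&
  rule.allowedMethods.includes(req.method) &&
  req.requestHeaders.every((h) =>
    rule.allowedHeaders.some((a) => a.toLowerCase() === h.toLowerCase())
  );

const firstMatchingRule = (rules: CorsRule[], req: CorsRequest) =>
  rules.find((r) => ruleMatches(r, req));

// The default rule from the docs above
const defaultRule: CorsRule = {
  allowedOrigins: ['*'],
  allowedMethods: ['GET', 'PUT', 'HEAD', 'POST', 'DELETE'],
  allowedHeaders: [
    'Authorization', 'Content-Length', 'Content-Type', 'Content-MD5', 'Date',
    'Expect', 'Host', 'x-amz-content-sha256', 'x-amz-date', 'x-amz-security-token'
  ]
};

const match = firstMatchingRule([defaultRule], {
  origin: 'https://my-website.example.com', // hypothetical origin
  method: 'PUT',
  requestHeaders: ['Content-Type']
});
// match is the default rule, so this preflight would succeed
```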
BucketCorsConfig  API reference
Parent:Bucket
enabled
Required
corsRules
BucketCorsRule  API reference
allowedOrigins
allowedHeaders
allowedMethods
exposedResponseHeaders
maxAge

Versioning

You can enable versioning to keep a complete history of all object versions. This is useful for protecting against accidental deletions or overwrites.

  • If enabled, the bucket keeps multiple variants of an object.
  • This can help you recover objects from an accidental deletion or overwrite, or store multiple versions of an object under the same name.
resources:
  myBucket:
    type: bucket
    properties:
      versioning: true

CDN

You can place an AWS CloudFront CDN in front of your bucket to cache its content in edge locations around the world, reducing latency for your users. This is a common pattern for serving static websites.

For more information, see the CDN documentation.

resources:
  myBucket:
    type: bucket
    properties:
      directoryUpload:
        directoryPath: my-web/build
        headersPreset: static-website
      cdn:
        enabled: true

A bucket with a CDN and directory upload enabled.

If you're hosting a static website, consider using a hosting bucket, which is pre-configured for that use case.

Object lifecycle rules

Lifecycle rules allow you to automate the management of your objects. You can define rules to transition objects to different storage classes or delete them after a certain period.

Storage class transition

You can transition objects to a different storage class to save costs. For example, you can move infrequently accessed data to a cheaper, long-term storage class like Glacier.

  • By default, all objects are in the standard (general purpose) class.
  • Depending on your access patterns, you can transition your objects to a different storage class to save costs.
  • To better understand differences between storage classes, refer to AWS Docs
  • To learn more about storage class transitions, refer to AWS Docs
resources:
  myBucket:
    type: bucket
    properties:
      lifecycleRules:
        - type: class-transition
          properties:
            daysAfterUpload: 90
            storageClass: 'GLACIER'

This configuration transfers all objects to the Glacier storage class 90 days after they are uploaded.

ClassTransition  API reference
Parent:Bucket
type
Required
properties.daysAfterUpload
Required
properties.storageClass
Required
properties.prefix
properties.tags

Expiration

You can configure objects to be automatically deleted after a specified number of days.

Expiration  API reference
Parent:Bucket
type
Required
properties.daysAfterUpload
Required
properties.prefix
properties.tags
resources:
  myBucket:
    type: bucket
    properties:
      lifecycleRules:
        - type: class-transition
          properties:
            daysAfterUpload: 90
            storageClass: 'GLACIER'
        - type: expiration
          properties:
            daysAfterUpload: 365

Non-current version class transition

You can transition previous versions of an object to a different storage class.

NonCurrentVersionClassTransition  API reference
Parent:Bucket
type
Required
properties.daysAfterVersioned
Required
properties.storageClass
Required
properties.prefix
properties.tags
resources:
  myBucket:
    type: bucket
    properties:
      versioning: true
      lifecycleRules:
        - type: non-current-version-class-transition
          properties:
            daysAfterVersioned: 10
            storageClass: 'DEEP_ARCHIVE'

Non-current version expiration

You can automatically delete previous versions of an object after a specified number of days.

NonCurrentVersionExpiration  API reference
Parent:Bucket
type
Required
properties.daysAfterVersioned
Required
properties.prefix
properties.tags
resources:
  myBucket:
    type: bucket
    properties:
      versioning: true
      lifecycleRules:
        - type: non-current-version-expiration
          properties:
            daysAfterVersioned: 10

Abort incomplete multipart upload

You can abort multipart uploads that do not complete within a specified number of days to avoid storing incomplete object parts.

AbortIncompleteMultipartUpload  API reference
Parent:Bucket
type
Required
properties.daysAfterInitiation
Required
properties.prefix
properties.tags
resources:
  myBucket:
    type: bucket
    properties:
      lifecycleRules:
        - type: abort-incomplete-multipart-upload
          properties:
            daysAfterInitiation: 5

Accessibility

You can control who can access your bucket and its objects.

BucketAccessibility  API reference
Parent:Bucket
accessibilityMode
Default: private
Required
accessPolicyStatements

Accessibility modes

  • Allows you to easily configure the most commonly used access patterns.
  • Available modes:
    • public-read-write - Everyone can read from and write to the bucket.
    • public-read - Everyone can read from the bucket. Only compute resources and entities with sufficient IAM permissions can write to the bucket.
    • private - (default) Only compute resources and entities with sufficient IAM permissions can read from or write to the bucket.
  • For functions, batch jobs, and container workloads, you can grant the required IAM permissions to read from and write to the bucket using connectTo or iamRoleStatements in their configuration.

Access policy statements

For fine-grained control, you can define custom access policy statements. This requires knowledge of AWS IAM. For examples, see the AWS documentation.

resources:
  myBucket:
    type: bucket
    properties:
      accessibility:
        accessibilityMode: private
        accessPolicyStatements:
          - Resource:
              - $ResourceParam('myBucket', 'arn')
            Action:
              - 's3:ListBucket'
            Principal: '*'
BucketPolicyIamRoleStatement  API reference
Principal
Required
Resource
Required
Sid
Effect
Action
Condition

Referenceable parameters

The following parameters can be easily referenced using the $ResourceParam directive.

To learn more about referencing parameters, refer to referencing parameters.

name
  • AWS (physical) name of the bucket

  • Usage: $ResourceParam('<<resource-name>>', 'name')
arn
  • ARN of the bucket

  • Usage: $ResourceParam('<<resource-name>>', 'arn')
cdnDomain
  • Default domain of the CDN distribution (only available if you DO NOT configure custom domain names for the CDN).

  • Usage: $ResourceParam('<<resource-name>>', 'cdnDomain')
cdnUrl
  • Default url of the CDN distribution (only available if you DO NOT configure custom domain names for the CDN).

  • Usage: $ResourceParam('<<resource-name>>', 'cdnUrl')
cdnCustomDomains
  • Comma-separated list of custom domain names assigned to the CDN (only available if you configure custom domain names for the CDN).

  • Usage: $ResourceParam('<<resource-name>>', 'cdnCustomDomains')
cdnCustomDomainUrls
  • Comma-separated list of custom domain name URLs of the CDN (only available if you configure custom domain names for the CDN).

  • Usage: $ResourceParam('<<resource-name>>', 'cdnCustomDomainUrls')

API reference

KeyValuePair  API reference
key
Required
value
Required
Bucket  API reference
type
Required
properties.directoryUpload
properties.accessibility
properties.cors
properties.versioning
properties.encryption
properties.lifecycleRules
properties.enableEventBusNotifications
properties.cdn
overrides
