Stacktape


Lambda Functions

Overview and basic concepts

  • Lambda function is a scalable and highly available computing resource that runs your code.

  • Function execution is initiated by an event (such as an HTTP request arriving at an API Gateway, a message arriving in an SQS queue, or an object being created in an S3 bucket). The function runs until your code finishes processing the request (maximum 15 minutes).

  • Lambda functions are "serverless" and fully managed. You don't have to worry about provisioning and managing servers, container and OS security, patching, scaling & many other DevOps tasks.

  • Supported runtimes are Node.js (Javascript and Typescript), Python, Ruby, Java, Go and .NET Core (C#).

When to use

Lambda functions work well for many use-cases (HTTP APIs, scheduled tasks, integrations & more). However, they can't be used for long-running tasks and tasks that require a higher degree of control over an execution environment.

Advantages

  • Pay-per-use - You only pay for the compute time you consume (rounded to 1ms)
  • Massive & fast scaling - Can scale up to 1000s of parallel executions. New containers running your code can be spawned in milliseconds.
  • High availability - AWS Lambda runs your function in multiple Availability Zones
  • Secure by default - Underlying environment is securely managed by AWS
  • Lots of integrations - Function can be invoked by events from a wide variety of services

Disadvantages

  • Limited execution time - Can run only up to 15 minutes
  • Limited configuration of lambda environment - You can configure only memory (CPU power scales with it). The maximum amount of memory is 10GB (6 virtual CPUs).
  • More expensive for certain tasks - Continuously running tasks and tasks with predictable load can be performed for less using batch jobs and container workloads.
  • Cold starts - Lambda function can take some additional time (usually ~0.2 to 4 seconds) to execute when it runs for the first time. To learn more, refer to cold starts

Basic usage

Copy

// Stacktape will automatically package any library for you
import anyLibrary from 'any-library';
import { initializeDatabaseConnection } from './database';

// Everything outside of the handler function will be executed only once (on every cold-start).
// You can execute any code that should be "cached" here (such as initializing a database connection)
const myDatabaseConnection = initializeDatabaseConnection();

// handler will be executed on every function invocation
const handler = async (event, context) => {
  // This log will be published to a CloudWatch log group
  console.log(event, context);
  const posts = await myDatabaseConnection.query('SELECT * FROM posts');
  return { result: posts };
};

export default handler;

Example lambda function written in Typescript

Copy

resources:
  myLambda:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: path/to/my-lambda.ts

Stacktape configuration of a basic lambda function

Packaging

Refer to packaging docs.

Computing resources

  • Lambda function environment is fully managed. You can't directly configure the type of virtual machine that runs your workload.
  • Amount of memory available to the function can be set using memory property. This value should be between 128 MB and 10,240 MB in 1-MB increments.
  • Amount of CPU power available to the function is also set using memory property - it's proportionate to the amount of available RAM. Function with 1797MB has a CPU power equal to 1 virtual CPU. Lambda function can have a maximum of 6 vCPUs (at 10,240 MB of RAM).

Copy

resources:
  myLambda:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: path/to/my-lambda.ts
      memory: 1024

Runtime

  • Stacktape automatically detects the function's language and uses the latest runtime version associated with that language
  • Example: nodejs14.x is used for all files ending with .js and .ts
  • You might want to use an older version if some of your dependencies are not compatible with the default runtime version

Timeout

  • Sets the maximum amount of time (in seconds) that a function can run before a timeout error is thrown.
  • Maximum allowed time is 900 seconds.
  • The default is 3 seconds.

Copy

resources:
  myLambda:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: path/to/my-lambda.ts
      timeout: 300

Environment variables

Most commonly used types of environment variables:

Copy

environment:
  - name: STATIC_ENV_VAR
    value: my-env-var
  - name: DYNAMICALLY_SET_ENV_VAR
    value: $MyCustomDirective('input-for-my-directive')
  - name: DB_HOST
    value: $ResourceParam('myDatabase', 'host')
  - name: DB_PASSWORD
    value: $Secret('dbSecret.password')

Logging

  • Every time your code outputs (prints) something to stdout or stderr, the log is captured and stored in an AWS CloudWatch log group.
  • You can browse your logs in 2 ways:
    • go to your function's log-group in the AWS CloudWatch console. You can use stacktape stack-info command to get a direct link.
    • use stacktape logs command that will print logs to the console
  • Please note that storing log data can become costly over time. To avoid excessive charges, you can configure logRetentionDays.
disabled
retentionDays
Default: 180

Storage

  • Each lambda function has access to its own ephemeral, temporary storage.
  • It's available at /tmp and has a fixed size of 512MB.
  • This storage is NOT shared between multiple execution environments. If there are 2 or more concurrently running functions, they don't share this storage.
  • This storage can be used to cache certain data between function executions.
  • To store data persistently, consider using Buckets.

Trigger events

  • Functions are invoked ("triggered") in a reaction to an event.
  • When you specify an event, Stacktape creates an event integration and adds all the required permissions to invoke the function.
  • Each function can have multiple event integrations.
  • Payload (data) received by the function is based on the event integration.

HTTP Api event

  • The function is triggered in a reaction to an incoming request to the specified HTTP API Gateway.
  • HTTP API Gateway selects the route with the most-specific match. To learn more about how paths are evaluated, refer to AWS Docs

Copy

resources:
  myHttpApi:
    type: http-api-gateway
  myLambda:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: path/to/my-lambda.ts
      events:
        - type: http-api-gateway
          properties:
            httpApiGatewayName: myHttpApi
            path: /hello
            method: GET

Lambda function connected to an HTTP API Gateway "myHttpApi"

HttpApiIntegration  API reference
Parent API reference: LambdaFunction
type
Required
properties.httpApiGatewayName
Required
properties.method
Required
properties.path
Required
properties.authorizer
properties.payloadFormat
Default: '1.0'

Cognito authorizer

  • Using a Cognito authorizer allows only users authenticated with a User pool to access your function.
  • The request must include an access token (specified as a bearer token, { Authorization: "<<your-access-token>>" })
  • If the request is successfully authorized, your function will receive some authorization claims in its payload. To get more information about the user, you can use the getUser API Method
  • HTTP API uses JWT (JSON Web Token)-based authorization. To learn more about how requests are authorized, refer to AWS Docs.

Copy

resources:
  myGateway:
    type: http-api-gateway
  myUserPool:
    type: user-auth-pool
    properties:
      userVerificationType: email-code
  myLambda:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: src/my-lambda.ts
      events:
        - type: http-api-gateway
          properties:
            httpApiGatewayName: myGateway
            path: /some-path
            method: '*'
            authorizer:
              type: cognito
              properties:
                userPoolName: myUserPool

Example cognito authorizer

Copy

import { CognitoIdentityProvider } from '@aws-sdk/client-cognito-identity-provider';

const cognito = new CognitoIdentityProvider({});

const handler = async (event, context) => {
  const userData = await cognito.getUser({ AccessToken: event.headers.authorization });
  // do something with your user data
};

export default handler;

Example lambda function that fetches user data from Cognito

CognitoAuthorizer  API reference
Parent API reference: HttpApiIntegration
type
Required
properties.userPoolName
Required
properties.identitySources

Lambda authorizer

  • When using Lambda authorizer, a special lambda function determines if the client can access your function.
  • When a request arrives to the HTTP API Gateway, lambda authorizer function is invoked. It must return either a simple response indicating if the client is authorized

Copy

{
  "isAuthorized": true,
  "context": {
    "exampleKey": "exampleValue"
  }
}

Simple lambda authorizer response format

or an IAM Policy document (when the iamResponse property is set to true, you can further configure permissions of the target lambda function)

Copy

{
  "principalId": "abcdef", // The principal user identification associated with the token sent by the client.
  "policyDocument": {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": "execute-api:Invoke",
        "Effect": "Allow|Deny",
        "Resource": "arn:aws:execute-api:{regionId}:{accountId}:{apiId}/{stage}/{httpVerb}/[{resource}/[{child-resources}]]"
      }
    ]
  },
  "context": {
    "exampleKey": "exampleValue"
  }
}

IAM Policy document lambda authorizer response format

  • Data returned in the context property will be available to the function.
  • You can configure identitySources that specify the location of data that's required to authorize a request. If they are not included in the request, the Lambda authorizer won't be invoked, and the client receives a 401 error. The following identity sources are supported: $request.header.name, $request.querystring.name and $context.variableName.
  • When caching is enabled for an authorizer, API Gateway uses the authorizer's identity sources as the cache key. If a client specifies the same parameters in identity sources within the configured TTL, API Gateway uses the cached authorizer result, rather than invoking your Lambda function.
  • By default, API Gateway uses the cached authorizer response for all routes of an API that use the authorizer. To cache responses per route, add $context.routeKey to your authorizer's identity sources.
  • To learn more about Lambda authorizers, refer to AWS Docs
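A minimal sketch of a Lambda authorizer using the simple response format shown above (iamResponse not enabled). The "x-api-key" header and its expected value are hypothetical placeholders for your own authorization logic:

```typescript
// Hypothetical shared secret - replace with a real token validation step
const EXPECTED_API_KEY = 'my-secret-api-key';

export const handler = async (event: { headers?: Record<string, string> }) => {
  const apiKey = event.headers?.['x-api-key'];
  return {
    // allow the request only when the expected key was sent
    isAuthorized: apiKey === EXPECTED_API_KEY,
    // data in "context" is passed along to the target lambda function
    context: { keyPresent: apiKey !== undefined }
  };
};
```
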
LambdaAuthorizer  API reference
Parent API reference: HttpApiIntegration
type
Required
properties.functionName
Required
properties.iamResponse
properties.identitySources
properties.cacheResultSeconds

Schedule event

The function is triggered on a specified schedule. You can use 2 different schedule types:

Copy

resources:
  myLambda:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: path/to/my-lambda.ts
      events:
        # invoke function every two hours
        - type: schedule
          properties:
            scheduleRate: rate(2 hours)
        # invoke function at 10:00 UTC every day
        - type: schedule
          properties:
            scheduleRate: cron(0 10 * * ? *)

ScheduleIntegration  API reference
Parent API reference: LambdaFunction
type
Required
properties.scheduleRate
Required
properties.input
properties.inputPath
properties.inputTransformer

Event Bus event

The function is triggered when the specified event bus receives an event matching the specified pattern.

2 types of event buses can be used:


  • Default event bus

    • Default event bus is pre-created by AWS and shared by the whole AWS account.
    • Can receive events from multiple AWS services. Full list of supported services.
    • To use the default event bus, set the useDefaultBus property.

Copy

resources:
  myLambda:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: path/to/my-lambda.ts
      events:
        - type: event-bus
          properties:
            useDefaultBus: true
            eventPattern:
              source:
                - 'aws.autoscaling'
              region:
                - 'us-west-2'

Lambda function connected to the default event bus

  • Custom event bus
    • Your own, custom Event bus.
    • This event bus can receive your own, custom events.
    • To use custom event bus, specify either eventBusArn or eventBusName property.

Copy

resources:
  myEventBus:
    type: event-bus
  myLambda:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: path/to/my-lambda.ts
      events:
        - type: event-bus
          properties:
            eventBusName: myEventBus
            eventPattern:
              source:
                - 'mycustomsource'

Lambda function connected to a custom event bus

EventBusIntegration  API reference
Parent API reference: LambdaFunction
type
Required
properties.eventPattern
Required
properties.eventBusArn
properties.eventBusName
properties.useDefaultBus
properties.input
properties.inputPath
properties.inputTransformer

Kafka Topic event

The function is triggered whenever there are messages in the specified Kafka Topic.

  • Messages are processed in batches.
  • If the Kafka topic contains multiple messages, the function is invoked with multiple messages in its payload.

Batching behavior can be configured. The function is triggered when any of the following things happen:

  • Batch window expires. Batch window can be configured using maxBatchWindowSeconds property.
  • Maximum Batch size (amount of messages fetched from the topic) is reached. Batch size can be configured using batchSize property.
  • Maximum Payload limit is reached. Maximum payload size is 6 MB.
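A sketch of the consumer side, assuming the standard AWS Lambda Kafka event shape (records grouped per topic-partition, values base64-encoded):

```typescript
// Event shape of the AWS Lambda Kafka integration: records grouped by
// "topic-partition" key, each record's value base64-encoded
type KafkaEvent = { records: Record<string, { value: string }[]> };

export const decodeKafkaMessages = (event: KafkaEvent): string[] =>
  Object.values(event.records)
    .flat()
    .map((record) => Buffer.from(record.value, 'base64').toString('utf-8'));

export const handler = async (event: KafkaEvent) => {
  // the whole batch is delivered in a single invocation
  for (const message of decodeKafkaMessages(event)) {
    console.log(message);
  }
};
```
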

Upstash Kafka

Stacktape provides a simple integration with Upstash Kafka topics. You can also integrate with any custom Kafka cluster and topic.

Copy

providerConfig:
  upstash:
    accountEmail: xxxxx.yyyy@example.com
    apiKey: $Secret('upstash-api-key')

resources:
  myTopic:
    type: upstash-kafka-topic

  # is triggered when there are records in upstash kafka topic
  myConsumer:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: consumer.ts
      events:
        - type: kafka-topic
          properties:
            upstashKafkaTopic: myTopic

Integrating with Upstash Kafka topic

Custom Kafka

When configuring a custom Kafka integration, you need to specify bootstrapServers, topicName and an authentication method.

Copy

resources:
  # is triggered when there are records in custom configured kafka topic
  myConsumer:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: consumer.ts
      events:
        - type: kafka-topic
          properties:
            customKafkaConfiguration:
              bootstrapServers:
                - my-kafka-broker-1.my-domain.com:9092
                - my-kafka-broker-2.my-domain.com:9092
              topicName: myTopic
              authentication:
                type: SASL_SCRAM_256_AUTH
                properties:
                  authenticationSecretArn: arn:aws:secretsmanager:eu-west-1:xxxxxxxxxxx:secret:mySecret

Custom Kafka integration

KafkaTopicIntegration  API reference
Parent API reference: LambdaFunction
type
Required
properties.upstashKafkaTopic
properties.customKafkaConfiguration
properties.batchSize
Default: 100
properties.maxBatchWindowSeconds

SNS event

The function is triggered every time a specified SNS topic receives a new message.

  • Amazon SNS is a fully managed messaging service for both application-to-application (A2A) and application-to-person (A2P) communication.
  • Messages (notifications) are published to topics
  • To add your custom SNS topic to your stack, add a Cloudformation resource to the cloudformationResources section of your config.

Copy

resources:
  myLambda:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: path/to/my-lambda.ts
      events:
        - type: sns
          properties:
            topicArn: $CfResourceParam('mySnsTopic', 'Arn')
            onDeliveryFailure:
              sqsQueueArn: $CfResourceParam('mySqsQueue', 'Arn')
              sqsQueueUrl: $CfResourceParam('mySqsQueue', 'QueueURL')

cloudformationResources:
  mySnsTopic:
    Type: AWS::SNS::Topic
  mySqsQueue:
    Type: AWS::SQS::Queue
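A sketch of the consuming handler, assuming the standard AWS Lambda SNS event shape (each record carries the published message in Records[i].Sns.Message):

```typescript
// Standard SNS event shape delivered to a lambda function
type SnsEvent = { Records: { Sns: { Message: string; Subject?: string } }[] };

export const handler = async (event: SnsEvent) => {
  for (const record of event.Records) {
    // if the publisher sent JSON, parse record.Sns.Message here
    console.log(record.Sns.Subject, record.Sns.Message);
  }
  return { processed: event.Records.length };
};
```
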

SnsIntegration  API reference
Parent API reference: LambdaFunction
type
Required
properties.topicArn
Required
properties.filterPolicy
properties.onDeliveryFailure

SQS event

The function is triggered whenever there are messages in the specified SQS Queue.

  • Messages are processed in batches
  • If the SQS queue contains multiple messages, the function is invoked with multiple messages in its payload
  • A single queue should always be "consumed" by a single function. SQS message can only be read once from the queue and while it's being processed, it's invisible to other functions. If multiple different functions are processing messages from the same queue, each will get their share of the messages, but one message won't be delivered to more than one function at a time. If you need to consume the same message by multiple consumers (Fanout pattern), consider using EventBus integration or SNS integration.
  • To add your custom SQS queue to your stack, simply add a Cloudformation resource to the cloudformationResources section of your config.

Batching behavior can be configured. The function is triggered when any of the following things happen:

  • Batch window expires. Batch window can be configured using maxBatchWindowSeconds property.
  • Maximum Batch size (amount of messages in the queue) is reached. Batch size can be configured using batchSize property.
  • Maximum Payload limit is reached. Maximum payload size is 6 MB.

Copy

resources:
  myLambda:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: path/to/my-lambda.ts
      events:
        - type: sqs
          properties:
            queueArn: $CfResourceParam('mySqsQueue', 'Arn')

cloudformationResources:
  mySqsQueue:
    Type: AWS::SQS::Queue
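A sketch of the consuming handler, assuming the standard AWS Lambda SQS event shape (the batch arrives as event.Records, with the message payload in each record's body):

```typescript
// Standard SQS event shape delivered to a lambda function
type SqsEvent = { Records: { messageId: string; body: string }[] };

export const handler = async (event: SqsEvent) => {
  for (const record of event.Records) {
    console.log(record.messageId, record.body);
  }
  // if processing throws, the whole batch becomes visible in the queue again
  // and is retried
  return { processed: event.Records.length };
};
```
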

SqsIntegration  API reference
Parent API reference: LambdaFunction
type
Required
properties.queueArn
Required
properties.batchSize
Default: 10
properties.maxBatchWindowSeconds

Kinesis stream event

The function is triggered whenever there are messages in the specified Kinesis Stream.

  • Messages are processed in batches.
  • If the stream contains multiple messages, the function is invoked with multiple messages in its payload.
  • To add a custom Kinesis stream to your stack, simply add a Cloudformation resource to the cloudformationResources section of your config.
  • Similarly to SQS, Kinesis is used to process messages in batches. To learn the differences, refer to AWS Docs

Batching behavior can be configured. The function is triggered when any of the following things happen:

  • Batch window expires. Batch window can be configured using maxBatchWindowSeconds property.
  • Maximum Batch size (amount of messages in the queue) is reached. Batch size can be configured using batchSize property.
  • Maximum Payload limit is reached. Maximum payload size is 6 MB.

Consuming messages from a Kinesis stream can be done in 2 ways:

  • Consuming directly from the stream - polling each shard in your Kinesis stream for records once per second. Read throughput of the kinesis shard is shared with other stream consumers.
  • Consuming using a stream consumer - To minimize latency and maximize read throughput, use "stream consumer" with enhanced fan-out. Enhanced fan-out consumers get a dedicated connection to each shard that doesn't impact other applications reading from the stream. You can either pass reference to the consumer using consumerArn property, or you can let Stacktape auto-create consumer using autoCreateConsumer property.

Copy

resources:
  myLambda:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: 'path/to/my-lambda.ts'
      events:
        - type: kinesis-stream
          properties:
            autoCreateConsumer: true
            maxBatchWindowSeconds: 30
            batchSize: 200
            streamArn: $CfResourceParam('myKinesisStream', 'Arn')
            onFailure:
              arn: $CfResourceParam('myOnFailureSqsQueue', 'Arn')
              type: sqs

cloudformationResources:
  myKinesisStream:
    Type: AWS::Kinesis::Stream
    Properties:
      ShardCount: 1
  myOnFailureSqsQueue:
    Type: AWS::SQS::Queue
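A sketch of the consuming handler, assuming the standard AWS Lambda Kinesis event shape (record data arrives base64-encoded in Records[i].kinesis.data):

```typescript
// Standard Kinesis event shape delivered to a lambda function
type KinesisEvent = { Records: { kinesis: { data: string; partitionKey: string } }[] };

export const decodeKinesisRecords = (event: KinesisEvent): string[] =>
  event.Records.map((record) => Buffer.from(record.kinesis.data, 'base64').toString('utf-8'));

export const handler = async (event: KinesisEvent) => {
  for (const payload of decodeKinesisRecords(event)) {
    console.log(payload);
  }
};
```
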

KinesisIntegration  API reference
Parent API reference: LambdaFunction
type
Required
properties.streamArn
Required
properties.consumerArn
properties.autoCreateConsumer
properties.maxBatchWindowSeconds
properties.batchSize
Default: 10
properties.startingPosition
Default: TRIM_HORIZON
properties.maximumRetryAttempts
properties.onFailure
properties.parallelizationFactor
properties.bisectBatchOnFunctionError

DynamoDb stream event

The function is triggered whenever there are processable records in the specified DynamoDB streams.

  • DynamoDB stream captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours.
  • Records from the stream are processed in batches. This means that multiple records are included in a single function invocation.
  • DynamoDB stream must be enabled in a DynamoDB table definition. Learn how to enable streams in dynamo-table docs

Copy

resources:
  myDynamoDbTable:
    type: dynamo-db-table
    properties:
      primaryKey:
        partitionKey:
          name: id
          type: string
      streamType: NEW_AND_OLD_IMAGES

  myLambda:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: path/to/my-lambda.ts
      events:
        - type: dynamo-db-stream
          properties:
            streamArn: $ResourceParam('myDynamoDbTable', 'streamArn')
            # OPTIONAL
            batchSize: 200

DynamoDbIntegration  API reference
Parent API reference: LambdaFunction
type
Required
properties.streamArn
Required
properties.maxBatchWindowSeconds
properties.batchSize
Default: 100
properties.startingPosition
Default: TRIM_HORIZON
properties.maximumRetryAttempts
properties.onFailure
properties.parallelizationFactor
properties.bisectBatchOnFunctionError

S3 event

The function is triggered when a specified event occurs in your bucket.

  • To learn more about supported event types, refer to AWS Docs.

Copy

resources:
  myBucket:
    type: bucket

  myLambda:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: path/to/my-lambda.ts
      events:
        - type: s3
          properties:
            bucketArn: $ResourceParam('myBucket', 'arn')
            s3EventType: 's3:ObjectCreated:*'
            filterRule:
              prefix: order-
              suffix: .jpg
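A sketch of the consuming handler, assuming the standard S3 event shape (each record identifies the bucket and the affected object; object keys arrive URL-encoded):

```typescript
// Standard S3 event shape delivered to a lambda function
type S3Event = {
  Records: { s3: { bucket: { name: string }; object: { key: string } } }[];
};

export const affectedObjects = (event: S3Event) =>
  event.Records.map((record) => ({
    bucket: record.s3.bucket.name,
    // object keys are URL-encoded (spaces arrive as '+')
    key: decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '))
  }));

export const handler = async (event: S3Event) => {
  for (const { bucket, key } of affectedObjects(event)) {
    console.log(`object ${key} changed in bucket ${bucket}`);
  }
};
```
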

S3Integration  API reference
Parent API reference: LambdaFunction
type
Required
properties.bucketArn
Required
properties.s3EventType
Required
properties.filterRule
S3FilterRule  API reference
Parent API reference: S3Integration
prefix
suffix

Cloudwatch Log event

The function is triggered when a log record arrives to the specified log group.

  • Event payload arriving to the function is BASE64 encoded and has the following format: { "awslogs": { "data": "BASE64ENCODED_GZIP_COMPRESSED_DATA" } }
  • To access the log data, the event payload needs to be decoded and decompressed first.

Copy

resources:
  myLogProducingLambda:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: lambdas/log-producer.ts

  myLogConsumingLambda:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: lambdas/log-consumer.ts
      events:
        - type: cloudwatch-log
          properties:
            logGroupArn: $ResourceParam('myLogProducingLambda', 'logGroupArn')

CloudwatchLogIntegration  API reference
Parent API reference: LambdaFunction
type
Required
properties.logGroupArn
Required
properties.filter

Application Load Balancer event

The function is triggered when the specified Application Load Balancer receives an HTTP request that matches the integration's conditions.

  • You can filter requests based on HTTP Method, Path, Headers, Query parameters, and IP Address.

Copy

resources:
  # load balancer which routes traffic to the function
  myLoadBalancer:
    type: application-load-balancer

  myLambda:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: path/to/my-lambda.ts
      events:
        - type: application-load-balancer
          properties:
            # referencing load balancer defined above
            loadBalancerName: myLoadBalancer
            priority: 1
            paths:
              - /invoke-my-lambda
              - /another-path

ApplicationLoadBalancerIntegration  API reference
Parent API reference: LambdaFunction
type
Required
properties.loadBalancerName
Required
properties.priority
Required
properties.listenerPort
properties.paths
properties.methods
properties.hosts
properties.headers
properties.queryParams
properties.sourceIps

Sync vs. Async invocations

Functions can be invoked in 2 different ways. Which one is used depends on the event integration that invokes your function.

Synchronous invocation

  • AWS Lambda runtime invokes your function, waits for it to complete, and then returns the result to the caller.
  • Synchronous invocation can be performed by these callers:
    • HTTP API Gateway event integration
    • Application Load balancer event integration
    • Amazon Cognito
    • Directly calling invokeSync method (or similar method, depending on the language used) from the aws-sdk. This method then directly returns the result of your function.

Asynchronous invocation

  • AWS Lambda runtime invokes your function but doesn't wait for it to complete. The caller only receives the information whether the invocation was successfully enqueued.
  • Asynchronous invocation can be performed by these callers:
    • SNS event integration
    • SQS event integration
    • Event-bus event integration
    • Schedule event integration
    • S3 event integration
    • Cloudwatch Log event integration
    • DynamoDB event integration
    • Kinesis event integration
    • Directly calling invoke method (or similar method, depending on the language used) from the aws-sdk. This method doesn't directly return the result of your function, only the information whether the invocation was successfully started.
  • If the function execution fails, lambda retries the function 2 more times. Please note that this can sometimes cause issues, if the function is not idempotent.
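Because of these retries, a handler may perform its side effect more than once for the same logical event, so the side effect should be made idempotent. A minimal per-container sketch, keyed on a hypothetical business id (orderId) carried in the event; in practice you would deduplicate in a durable store (e.g. a DynamoDB table with a conditional write), since the in-memory set doesn't survive across containers:

```typescript
// Per-container cache of already-processed ids (illustrative only - a
// durable store is needed to deduplicate across containers)
const processedOrderIds = new Set<string>();

export const handler = async (event: { orderId: string }) => {
  if (processedOrderIds.has(event.orderId)) {
    // duplicate delivery of an already-processed event - safe to skip
    return { status: 'skipped' };
  }
  // ... perform the side effect (e.g. charge the order) ...
  // mark as processed only after the side effect succeeded, so that a
  // failed attempt is still retried
  processedOrderIds.add(event.orderId);
  return { status: 'processed' };
};
```
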

Lambda Destinations

Lambda Destinations allow you to orchestrate simple, lambda-based, event-driven workflows.

  • Works only for asynchronous invocations
  • You can hook into onSuccess or onFailure events
  • 4 different destinations are supported:
    • SQS queue
    • SNS topic
    • Event bus
    • other lambda function
  • The destination receives both the function's result (or error) and the original event.
  • To learn more about Lambda destinations, refer to AWS blog post.
  • Defined using a destinations property on the function
  • For SNS, DynamoDB and Kinesis event integrations, onFailure destination can be set per event integration.

Copy

resources:
  myEventBus:
    type: event-bus

  mySuccessLambda:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: lambdas/success-handler.ts

  myLambda:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: path/to/my-lambda.ts
      destinations:
        # if function succeeds, invoke the mySuccessLambda with the result data
        onSuccess: $ResourceParam('mySuccessLambda', 'arn')
        # if the function fails, send the result to "myEventBus"
        onFailure: $ResourceParam('myEventBus', 'arn')

LambdaFunctionDestinations  API reference
Parent API reference: LambdaFunction
onSuccess
onFailure

Accessing other resources

  • For most of the AWS resources, resource-to-resource communication is not allowed by default. This helps to enforce security and resource isolation. Access must be explicitly granted using IAM (Identity and Access Management) permissions.

  • Access control of Relational Databases is not managed by IAM. These resources are not "cloud-native" and have their own access control mechanism (connection string with username and password). They are accessible by default, and you don't need to grant any extra IAM permissions. If the default, connection-string-based access-control is not sufficient for your use case, you can restrict connection to only resources in the same VPC. In that case, your function must join that VPC to access them.

  • Stacktape automatically handles IAM permissions for the underlying AWS services that it creates (i.e. granting container workload permission to write logs to Cloudwatch, or allowing to communicate with their event source and many others).

  • If your workload needs to communicate with other infrastructure components, you need to add permissions manually. You can do this in 2 ways listed below.

AccessControl  API reference
Parent API reference: LambdaFunction
iamRoleStatements
allowAccessTo

Using allowAccessTo

  • List of resource names that this function will be able to access (basic IAM permissions will be granted automatically). Granted permissions differ based on the resource.
  • Works only for resources managed by Stacktape (not arbitrary Cloudformation resources)
  • This is useful if you don't want to deal with IAM permissions yourself. Handling permissions using raw IAM role statements can be cumbersome, time-consuming and error-prone.

Copy

resources:
  myLambda:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: path/to/my-lambda.ts
      environment:
        - name: MY_BUCKET_NAME
          value: $ResourceParam('myBucket', 'name')
      accessControl:
        allowAccessTo:
          - myBucket

  myBucket:
    type: bucket


Granted permissions:

Bucket

  • list objects in a bucket
  • create / get / delete / tag object in a bucket

DynamoDb Table

  • get / put / update / delete item in a table
  • scan / query a table
  • describe table stream

MongoDb Atlas Cluster

  • Allows connection to a cluster with accessibilityMode set to scoping-workloads-in-vpc. To learn more about MongoDb Atlas clusters accessibility modes, refer to MongoDB Atlas cluster docs.

Relational database

  • Allows connection to a relational database with accessibilityMode set to scoping-workloads-in-vpc. To learn more about relational database accessibility modes, refer to Relational databases docs.

Redis cluster

  • Allows connection to a redis cluster with accessibilityMode set to scoping-workloads-in-vpc. To learn more about redis cluster accessibility modes, refer to Redis clusters docs.

Event bus

  • publish events to the specified Event bus

Function

  • invoke the specified function

Batch job

  • submit batch-job instance into batch-job queue
  • list submitted job instances in a batch-job queue
  • describe / terminate a batch-job instance
  • list executions of state machine which executes the batch-job according to its strategy
  • start / terminate execution of a state machine which executes the batch-job according to its strategy

Using iamRoleStatements

  • IAM Role statements are a low-level, granular and AWS-native way of controlling access to your resources.
  • IAM Role statements can be used to add permissions to any Cloudformation resource.
  • Configured IAM role statement objects will be appended to the function's role.

Copy

resources:
  myFunction:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: path/to/my-lambda.ts
      environment:
        - name: TOPIC_ARN
          value: $CfResourceParam('NotificationTopic', 'Arn')
      accessControl:
        iamRoleStatements:
          - Resource:
              - $CfResourceParam('NotificationTopic', 'Arn')
            Effect: 'Allow'
            Action:
              - 'sns:Publish'

cloudformationResources:
  NotificationTopic:
    Type: AWS::SNS::Topic

Cold starts

  • Lambda function can take some additional time to execute when it runs for the first time.
  • Behind the scenes, AWS runs your function inside a container. A cold start happens every time a new container is added to run your function. This happens when:
    • your function is invoked for the first time after deployment
    • your function has not been invoked for some time (~15-45 minutes) and the container is removed
    • existing containers can't handle the load and new container(s) must be added
  • Cold starts usually take from ~0.2 to several seconds. The duration depends on:
    • Runtime used (Java and .NET are usually slower).
    • Duration of execution of code that runs outside the function handler (executed only once, on every cold start)
    • Size of your lambda function. (Stacktape does everything it can to reduce the size of your lambda function as much as possible.)

Default VPC connection

  • Certain AWS services (such as Relational Databases) must be connected to a VPC (Virtual private cloud) to be able to run. Stacktape automatically creates a default VPC for stacks that include these resources and connects them to the VPC.
  • Functions are NOT connected to the default VPC of your stack by default.
  • To communicate with resources inside the default VPC whose accessibility mode only allows connections from within the VPC, you need to connect your function to that VPC.
  • Connecting a function to a VPC cuts off its access to the internet (outbound requests will fail). To restore internet access, you need to use a NAT Gateway. We do not recommend using NAT Gateways and advise you to re-architect your application instead.
  • To learn more about VPCs and accessibility modes, refer to VPC docs, accessing relational databases, accessing redis clusters and accessing MongoDB Atlas clusters

```yml
resources:
  myLambda:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: path/to/my-lambda.ts
      joinDefaultVpc: true
```

Function connected to the default VPC

Pricing

You are charged for:

  • Total compute (gigabyte-seconds):
    • amount of memory * execution time
    • the price for 128MB per 1ms of execution is $0.0000000021
  • Request charges (invocations):
    • $0.20 per 1 million invocations

The (forever) FREE TIER includes 1 million free requests per month and 400,000 GB-seconds of compute time.
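As a rough worked example using the list prices above (ignoring the free tier), the monthly bill for a hypothetical 512 MB function averaging 100 ms per invocation at 1 million invocations can be estimated as:

```typescript
// Prices from the list above.
const PRICE_PER_128MB_MS = 0.0000000021; // $ per 1 ms of execution at 128 MB
const PRICE_PER_MILLION_REQUESTS = 0.2;  // $ per 1 million invocations

// Estimate a monthly bill for a function (free tier not subtracted).
const estimateMonthlyCost = (
  memoryMb: number,
  avgDurationMs: number,
  invocations: number
): number => {
  // Compute charge scales linearly with memory and duration.
  const computeCost =
    (memoryMb / 128) * avgDurationMs * PRICE_PER_128MB_MS * invocations;
  const requestCost = (invocations / 1_000_000) * PRICE_PER_MILLION_REQUESTS;
  return computeCost + requestCost;
};

// 512 MB, 100 ms average, 1M invocations:
// compute: (512/128) * 100 * $0.0000000021 * 1,000,000 = $0.84
// requests: $0.20
console.log(estimateMonthlyCost(512, 100, 1_000_000)); // ≈ 1.04
```

For this hypothetical workload the estimate comes out to about $0.84 of compute plus $0.20 of request charges, roughly $1.04 per month.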


To learn more about lambda functions pricing, refer to AWS pricing page

Referenceable parameters

The following parameters can be easily referenced using the $ResourceParam directive.

To learn more about referencing parameters, refer to referencing parameters.

arn
  • Arn of the function

  • Usage: $ResourceParam('<<resource-name>>', 'arn')
logGroupArn
  • Arn of the log group aggregating logs from the function

  • Usage: $ResourceParam('<<resource-name>>', 'logGroupArn')
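For illustration, a hypothetical stack could pass one function's ARN to another function through an environment variable (the resource names here are made up):

```yml
resources:
  myLambda:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: path/to/my-lambda.ts

  myConsumerFunction:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: path/to/consumer.ts
      environment:
        - name: TARGET_LAMBDA_ARN
          value: $ResourceParam('myLambda', 'arn')
```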

API reference

LambdaFunction  API reference
  • type (required)
  • properties.packaging (required)
  • properties.events
  • properties.environment
  • properties.runtime
  • properties.memory
  • properties.timeout (default: 10)
  • properties.joinDefaultVpc
  • properties.tags
  • properties.destinations
  • properties.accessControl
  • properties.logging
  • properties.deployment
  • overrides

EventInputTransformer  API reference
Parent API reference: EventBusIntegration or ScheduleIntegration
  • inputTemplate (required)
  • inputPathsMap

EventBusIntegrationPattern  API reference
Parent API reference: EventBusIntegration
  • version
  • detail-type
  • source
  • account
  • region
  • resources
  • detail
  • replay-name

CustomKafkaEventSource  API reference
Parent API reference: KafkaTopicIntegration
  • bootstrapServers (required)
  • topicName (required)
  • authentication (required)

KafkaSASLAuth  API reference
Parent API reference: CustomKafkaEventSource
  • type (required)
  • properties.authenticationSecretArn (required)

KafkaMTLSAuth  API reference
Parent API reference: CustomKafkaEventSource
  • type (required)
  • properties.clientCertificate (required)
  • properties.serverRootCaCertificate
SnsOnDeliveryFailure  API reference
Parent API reference: SnsIntegration
  • sqsQueueArn (required)
  • sqsQueueUrl (required)

DestinationOnFailure  API reference
Parent API reference: DynamoDbIntegration or KinesisIntegration
  • arn (required)
  • type (required)

LbHeaderCondition  API reference
  • headerName (required)
  • values (required)

LbQueryParamCondition  API reference
  • paramName (required)
  • values (required)

EnvironmentVar  API reference
Parent API reference: LambdaFunction
  • name (required)
  • value (required)

CloudformationTag  API reference
Parent API reference: LambdaFunction
  • name (required)
  • value (required)

StpIamRoleStatement  API reference
Parent API reference: AccessControl
  • Resource (required)
  • Sid
  • Effect
  • Action
  • Condition
Need help? Ask a question on Slack or Discord, or email info@stacktape.com.