
Functions

Overview

AWS Lambda functions are compute resources: they run your code in an environment completely managed by AWS.

Supported languages are: Node.js, Python, Ruby, Java, Go and C#.

They start executing in reaction to an event and stop when they finish executing your code.


When to use

Lambda functions are a great option for the majority of use cases. They work well for HTTP APIs, scheduled jobs, integrations & much more. However, they don't work well for long-running jobs or jobs that require a higher degree of control over the execution environment.

Advantages

  • Pay-per-use - You only pay for the compute time you consume (billed in 1 ms increments)
  • Massive & fast scaling - Can scale up to 1000s of parallel executions. New containers running your code are added within milliseconds to seconds
  • High availability - Lambda runs your function in multiple Availability Zones
  • Secure by default - The underlying environment is securely managed by AWS
  • Easy integration - The function can be invoked by events from a wide variety of services

Disadvantages

  • Limited execution time - A single invocation can run for at most 15 minutes
  • Limited configuration of the lambda environment - You can configure only the function's memory
  • More expensive for some jobs - You can get a better price with other services when your workloads run continuously and have a predictable load
  • Cold starts - Functions take an additional ~0.5-3 seconds to execute when they haven't run for some time, or when a new container must be added while scaling

Usage

➡️ Basic usage

The only required property of a lambda function is the path to its source code.
Stacktape automatically builds your source code and sets the runtime based on the source file's extension.

resources:
  functions:
    myLambda:
      filePath: 'path/to/my-lambda.ts'
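The file referenced by filePath is expected to export the function's handler. A minimal TypeScript sketch is shown below; the named handler export and the generic event shape are assumptions here (the concrete event type depends on the integration that invokes the function).

```typescript
// path/to/my-lambda.ts
// Minimal handler sketch: receives the trigger event, returns a result.
// The concrete event/result shapes depend on the event integration used.
export const handler = async (event: unknown): Promise<{ message: string }> => {
  console.log('received event:', JSON.stringify(event));
  return { message: 'hello from lambda' };
};
```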


➡️ Configuring resources

You can configure the amount of memory available to the function while it is running. The value must be between 128 MB and 3,008 MB, in 64-MB increments.
CPU power is proportional to the amount of memory configured; configuring more than 1,792 MB adds a second virtual CPU.

resources:
  functions:
    myLambda:
      filePath: 'path/to/my-lambda.ts'
      memory: 1024


➡️ Configuring timeout

The timeout property defines the amount of time (in seconds) that a single function invocation is allowed to run before a timeout error is thrown.

  • Maximum allowed time is 900 seconds.
  • The default is three seconds.

resources:
  functions:
    myLambda:
      filePath: 'path/to/my-lambda.ts'
      # upper time limit the function is allowed to run before time-out error is thrown
      timeout: 300


➡️ Configuring environment variables

Adding static environment variables is straightforward.
Your variables can have dynamic values, if you use your own directive. Learn more about directives.
Your variables can reference other resources or secrets. Learn more about referencing resources.

resources:
  functions:
    myLambda:
      filePath: 'path/to/my-lambda.ts'
      environment:
        STATIC_ENV_VAR: 'my-env-var'
        DYNAMICALLY_SET_ENV_VAR: "@MyCustomDirective('input-for-my-directive')"
        DB_HOST: "$GetParam('myPgSql', 'DbInstance::Endpoint.Address')"
        DB_PASSWORD: "$GetSecret('dbSecret.password')"
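At runtime, the configured variables are available on process.env, as in any Node.js program. A sketch reading the variables from the example above (the fallback value is just for local runs, an assumption of this sketch):

```typescript
// Read environment variables configured in the Stacktape config at runtime.
// Stacktape resolves dynamic values (directives, secrets) before deployment.
export const handler = async (): Promise<string> => {
  const dbHost = process.env.DB_HOST ?? 'localhost'; // fallback for local runs
  const staticVar = process.env.STATIC_ENV_VAR;
  return `connecting to ${dbHost} (static var: ${staticVar})`;
};
```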


➡️ Join VPC

VPC (Virtual Private Cloud) is an isolated virtual network created for each stack you create. This virtual network closely resembles a traditional network that you'd operate in your own data center. Databases, containerWorkloads and batchJobs are by default connected to this network.

By default, functions are NOT connected to the VPC of your stack. However, in some cases you might need to connect the function to the VPC: for example, when you need to access a database or an atlasMongoCluster which is set up to only allow connections from the VPC.

To connect a function to the VPC, use the joinVpc property.

Be aware: when you connect a function to the VPC, it loses access to the public internet.

resources:
  functions:
    myLambda:
      filePath: 'path/to/my-lambda.ts'
      joinVpc: true


➡️ Triggering function

To specify how a function can be invoked, use the events list property. Events configure event integrations, which invoke the associated function when the specified event occurs.


Each function can have multiple event integrations configured.


Currently, Stacktape supports the following event integrations:


httpApi

The httpApi event integration connects the function to the specified httpApiGateway. When the httpApiGateway receives a request matching the configured integration's conditions, the function is invoked.

resources:
  # http api gateway, to which function is connected
  httpApiGateways:
    myHttpApi: {}
  functions:
    myLambda:
      filePath: 'path/to/my-lambda.ts'
      events:
        - httpApi:
            # referencing http api gateway defined above
            httpApiGatewayName: 'myHttpApi'
            path: '/invoke-my-lambda'
            method: 'GET'
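A handler behind an httpApi integration receives an API Gateway event and returns an HTTP-shaped response. A sketch for a route such as GET /pets/{petID}, trimmed to the fields it uses (the event shape follows the standard API Gateway HTTP API payload; treat the exact fields as an assumption of this sketch):

```typescript
// Sketch of an httpApi handler for a route like GET /pets/{petID}.
interface HttpApiEvent {
  pathParameters?: Record<string, string>;
}

export const handler = async (event: HttpApiEvent) => {
  const petId = event.pathParameters?.petID;
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ petId }),
  };
};
```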

REQUIRED parameters

  • httpApiGatewayName - name of the httpApiGateway created as part of your stack
  • method - HTTP method for which the function will be invoked, for example GET or PUT. If you want to invoke the function for any method, use '*'
  • path - URL path, for example /pets. You can use path variables in HTTP API routes, for example /pets/{petID}; this path catches a request that a client submits to /pets/6 or /pets/dog. You can also use a greedy path variable, which catches all child resources of a route. To create a greedy path variable, add + to the variable name, for example /pets/{anything+}; this path catches /pets/dog, /pets/dog/rusty or /pets/cat/tony. To understand how paths are evaluated, see AWS docs.

OPTIONAL parameters

@todo authorizer


schedule

With the schedule integration, the function is invoked on a regular schedule, defined either by a rate expression or by a cron expression.

The following example demonstrates both cases:

resources:
  functions:
    myLambda:
      filePath: 'path/to/my-lambda.ts'
      events:
        # invoke function every two hours
        - schedule:
            scheduleRate: 'rate(2 hours)'
        # invoke function at 10:00 UTC every day
        - schedule:
            scheduleRate: 'cron(0 10 * * ? *)' # CRON expression is always in UTC time

REQUIRED parameters

  • scheduleRate - can take one of the following forms:
      • rate expression - for example rate(2 hours) or rate(20 seconds)
      • cron expression - for example cron(0 10 * * ? *) or cron(0 15 3 * ? *)

OPTIONAL parameters

  • input - @todo
  • inputPath - @todo
  • inputTransformer - @todo

eventBus

The eventBus event integration connects the function to an eventBus. Event buses receive events from various services and applications.

  • Each event received by the eventBus gets evaluated. If the event matches the eventPattern defined in your event integration, the function is invoked.
  • The function can be integrated either with the AWS pre-created default eventBus or with a user-created custom eventBus. To understand the difference, read the respective sections.

Integrate with default event bus

  • The default eventBus is pre-created by AWS and shared by the entire AWS account.
  • The default eventBus receives events from various AWS services.
  • The list of event types received by the default eventBus can be found in AWS docs.
  • The default eventBus is used if you do NOT specify the customEventBusArn property.

resources:
  functions:
    myLambda:
      filePath: 'path/to/my-lambda.ts'
      events:
        - eventBus:
            eventPattern:
              source:
                - 'aws.autoscaling'
              region:
                - 'us-west-2'

Integrate with custom event bus

  • A custom eventBus can receive events from your custom applications and services.
  • You can create a custom eventBus as a part of your stack.
  • A custom eventBus is used if you specify the customEventBusArn property.

resources:
  # custom event bus we create as part of the stack
  eventBuses:
    myEventBus: {}
  functions:
    myLambda:
      filePath: 'path/to/my-lambda.ts'
      events:
        - eventBus:
            # we associate function with custom event bus by setting customEventBusArn property
            customEventBusArn: "$GetParam('myEventBus', 'EventBus::Arn')"
            eventPattern:
              source:
                - 'mycustomsource'

REQUIRED parameters

  • eventPattern - each event received by the eventBus is evaluated against this pattern. If the event matches the pattern, the function is invoked. A guide to writing event patterns can be found in AWS docs.

OPTIONAL parameters

  • customEventBusArn - arn of the user-created custom eventBus
  • input - @todo
  • inputPath - @todo
  • inputTransformer - @todo

sns

With the sns event integration, you can subscribe the function to an SNS topic.

  • The function is invoked for each message received by the SNS topic.
  • You can create an SNS topic as a part of your stack using the cloudformationResources section.

resources:
  functions:
    myLambda:
      filePath: 'path/to/my-lambda.ts'
      events:
        - sns:
            topicArn: "$GetParam('mySnsTopic', 'Arn')"
            # OPTIONAL send messages that failed to deliver to SQS queue
            destinations:
              onDeliveryFailure:
                sqsQueueArn: "$GetParam('myFailedToDeliverQueue', 'Arn')"
                sqsQueueUrl: "$GetParam('myFailedToDeliverQueue', 'QueueURL')"
cloudformationResources:
  mySnsTopic:
    Type: 'AWS::SNS::Topic'
    Properties:
      TopicName: "$CfFormat('{}-{}', $GetCliArgs().stage, 'myTopic')"
  myFailedToDeliverQueue:
    Type: 'AWS::SQS::Queue'
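Inside the function, each delivered message arrives wrapped in a Records array, with the payload under Sns.Message. A handler sketch, trimmed to the fields it uses (the record shape follows the standard SNS-to-Lambda event; treat the exact fields as an assumption of this sketch):

```typescript
interface SnsRecord {
  Sns: { Message: string; Subject?: string };
}
interface SnsEvent {
  Records: SnsRecord[];
}

// Process every SNS message delivered in the event.
export const handler = async (event: SnsEvent): Promise<string[]> => {
  return event.Records.map((record) => {
    console.log('processing message:', record.Sns.Message);
    return record.Sns.Message;
  });
};
```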

REQUIRED parameters

  • topicArn - arn of the SNS topic the function is subscribed to

OPTIONAL parameters

  • filterPolicy - allows you to filter messages based on message attributes (if you need to filter based on message content, use the eventBus event). Only messages that pass the filter will invoke the function. More on filter policies can be found in AWS docs.
  • destinations - currently, destinations only supports the onDeliveryFailure sub-property. You can use it to specify a target SQS queue for messages that fail to be delivered to the function (this happens in rare cases, usually when the function is not able to scale fast enough to react to incoming messages)

sqs

With the sqs event integration, the function is triggered whenever there are messages in the given SQS queue.

  • Messages are processed in batches. This means that multiple messages can be included in a single function invocation.
  • You can create an SQS queue as a part of your stack using the cloudformationResources section.
  • The function is invoked when one of the following happens: the batch window expires, the full batch size is reached, OR the payload limit is reached. See the parameters below to better understand how to configure the integration properly.

A single queue should always be "consumed" by a single function. An SQS message can only be read once from the queue, and while it is being processed by one function, it is invisible to other functions. If multiple functions consume the same queue, each gets its share of the messages, but one message won't be delivered to more than one function at a time. If you need to consume incoming messages with multiple functions, try using an event bus with the eventBus integration or an SNS topic with the sns integration.

resources:
  functions:
    myLambda:
      filePath: 'path/to/my-lambda.ts'
      events:
        - sqs:
            queueArn: "$GetParam('mySqsQueue', 'Arn')"
            # OPTIONAL
            maxBatchWindowSeconds: 30
            # OPTIONAL
            batchSize: 100
cloudformationResources:
  mySqsQueue:
    Type: 'AWS::SQS::Queue'
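Because messages arrive in batches, the handler iterates over event.Records; each record's body holds the raw message text. A sketch trimmed to the fields it uses (the record shape follows the standard SQS-to-Lambda event, and JSON message bodies are an assumption of this sketch):

```typescript
interface SqsRecord {
  messageId: string;
  body: string;
}
interface SqsEvent {
  Records: SqsRecord[];
}

// Process the whole batch. Throwing here causes the batch to be retried,
// so per-message processing should be idempotent.
export const handler = async (event: SqsEvent): Promise<number> => {
  for (const record of event.Records) {
    const payload = JSON.parse(record.body); // assuming JSON message bodies
    console.log(`message ${record.messageId}:`, payload);
  }
  return event.Records.length;
};
```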

REQUIRED parameters

  • queueArn - arn of the SQS queue from which the function consumes messages

OPTIONAL parameters

  • batchSize - configures how many records to collect in a batch before the function is invoked. Default 10. Max 10,000.
  • maxBatchWindowSeconds - configures the maximum amount of time (in seconds) to gather records before invoking the function (by default, the batch window is not configured)

kinesis

With the kinesis integration, the function consumes records from a Kinesis stream.

  • Records are processed in batches. This means that multiple records can be included in a single function invocation.
  • You can create a Kinesis stream as a part of your stack using the cloudformationResources section.
  • The function is invoked when one of the following happens: the batch window expires, the full batch size is reached, OR the payload limit is reached. See the parameters below to better understand how to configure the integration properly.

You can consume from a Kinesis stream in 2 ways:

  • consuming directly from the stream - the integration polls each shard in your Kinesis stream for records at a base rate of once per second, and shares read throughput with the shard's other consumers.
  • consuming using a "stream consumer" - to minimize latency and maximize read throughput, use a "stream consumer" with enhanced fan-out. Enhanced fan-out consumers get a dedicated connection to each shard that doesn't impact other applications reading from the stream. You can either pass a reference to the consumer using the consumerArn property, or let Stacktape auto-create the consumer by setting autoCreateConsumer to true.

resources:
  functions:
    myLambda:
      filePath: 'path/to/my-lambda.ts'
      events:
        - kinesis:
            streamArn: "$GetParam('myKinesisStream', 'Arn')"
            # OPTIONAL
            autoCreateConsumer: true
            # OPTIONAL
            maxBatchWindowSeconds: 30
            # OPTIONAL
            batchSize: 200
            # OPTIONAL
            startingPosition: 'LATEST'
            # OPTIONAL
            destinations:
              onFailure:
                type: 'sqs'
                arn: "$GetParam('myOnFailureSqsQueue', 'Arn')"
cloudformationResources:
  myKinesisStream:
    Type: 'AWS::Kinesis::Stream'
    Properties:
      ShardCount: 1
  myOnFailureSqsQueue:
    Type: 'AWS::SQS::Queue'
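Kinesis record payloads arrive base64-encoded in the record's kinesis.data field, so the handler must decode them. A sketch trimmed to the fields it uses (the record shape follows the standard Kinesis-to-Lambda event; treat the exact fields as an assumption of this sketch):

```typescript
interface KinesisRecord {
  kinesis: { data: string; partitionKey: string };
}
interface KinesisEvent {
  Records: KinesisRecord[];
}

// Decode each record's base64 payload back into a UTF-8 string.
export const handler = async (event: KinesisEvent): Promise<string[]> => {
  return event.Records.map((record) =>
    Buffer.from(record.kinesis.data, 'base64').toString('utf-8')
  );
};
```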

REQUIRED parameters

  • streamArn - arn of the Kinesis stream from which the function consumes messages

OPTIONAL parameters

  • consumerArn - arn of the consumer which will be used by the integration. This parameter CANNOT be used together with autoCreateConsumer.
  • autoCreateConsumer - specifies whether Stacktape creates a consumer for this integration, which can help minimize latency and maximize read throughput. This parameter CANNOT be used together with consumerArn.
  • batchSize - configures how many records to collect in a batch before the function is invoked. Default 100. Max 10,000.
  • maxBatchWindowSeconds - configures the maximum amount of time (in seconds) to gather records before invoking the function (by default, the batch window is not configured)
  • startingPosition - specifies the position in the stream from which to start reading. Possible values are:
      • LATEST - read only new records
      • TRIM_HORIZON - process all available records (default value)
  • maximumRetryAttempts - configures the number of times failed "record batches" are retried (by default, records are retried until expired). Be aware that if the function fails, the entire batch of records is retried (not only the failed ones). You should implement your function with idempotency in mind.
  • destinations - currently, destinations only supports the onFailure sub-property. You can use it to specify a target SQS queue or SNS topic for record batches that fail to be processed, i.e. batches for which the function returns an error and retry attempts are exhausted.
  • bisectBatchOnFunctionError - when enabled and the function returns an error, the batch is split into two batches and retried

dynamoDb

With the dynamoDb integration, the function consumes records from DynamoDB streams.

  • A DynamoDB stream captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours.
  • Records from the stream are processed in batches. This means that multiple records can be included in a single function invocation.
  • A stream can be easily enabled for your DynamoDB table using the StreamSpecification property when defining the table (see the example below).

resources:
  functions:
    myLambda:
      filePath: 'path/to/my-lambda.ts'
      events:
        - dynamoDb:
            streamArn: "$GetParam('myDynamoDbTable', 'StreamArn')"
            # OPTIONAL
            batchSize: 200
cloudformationResources:
  myDynamoDbTable:
    Type: 'AWS::DynamoDB::Table'
    Properties:
      TableName: "$Format('{}-{}', $GetCliArgs().stage, 'mytable')"
      # by specifying StreamSpecification property we enable dynamoDb stream on table
      StreamSpecification:
        StreamViewType: 'NEW_IMAGE'
      AttributeDefinitions:
        - AttributeName: 'myAttributeName'
          AttributeType: 'S'
      KeySchema:
        - AttributeName: 'myAttributeName'
          KeyType: 'HASH'
      BillingMode: 'PAY_PER_REQUEST'
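Stream records carry item changes in DynamoDB's attribute-value format (for example { S: 'text' } for strings). A handler sketch reading the new image of each inserted or modified item, trimmed to the fields it uses (the record shape follows the standard DynamoDB-streams-to-Lambda event; the attribute name matches the table example above):

```typescript
interface DynamoDbStreamRecord {
  eventName: 'INSERT' | 'MODIFY' | 'REMOVE';
  dynamodb: { NewImage?: Record<string, { S?: string }> };
}
interface DynamoDbStreamEvent {
  Records: DynamoDbStreamRecord[];
}

// Collect the 'myAttributeName' value from every inserted/modified item.
export const handler = async (event: DynamoDbStreamEvent): Promise<string[]> => {
  return event.Records
    .filter((r) => r.eventName !== 'REMOVE' && r.dynamodb.NewImage)
    .map((r) => r.dynamodb.NewImage!['myAttributeName']?.S ?? '');
};
```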

REQUIRED parameters

  • streamArn - arn of the DynamoDB table stream from which the function consumes messages

OPTIONAL parameters

  • batchSize - configures how many records to collect in a batch before the function is invoked. Default 100. Max 1,000.
  • maxBatchWindowSeconds - configures the maximum amount of time (in seconds) to gather records before invoking the function (by default, the batch window is not configured)
  • startingPosition - specifies the position in the stream from which to start reading. Possible values are:
      • LATEST - read only new records
      • TRIM_HORIZON - process all available records (default value)
  • maximumRetryAttempts - configures the number of times failed "record batches" are retried (by default, records are retried until expired). Be aware that if the function fails, the entire batch of records is retried (not only the failed ones). You should implement your function with idempotency in mind.
  • destinations - currently, destinations only supports the onFailure sub-property. You can use it to specify a target SQS queue or SNS topic for record batches that fail to be processed, i.e. batches for which the function returns an error and retry attempts are exhausted.
  • bisectBatchOnFunctionError - when enabled and the function returns an error, the batch is split into two batches and retried

s3

The s3 integration listens to events from an S3 bucket and invokes the function when an object is created, modified or removed from the bucket.

resources:
  functions:
    myLambda:
      filePath: 'path/to/my-lambda.ts'
      events:
        - s3:
            bucketArn: "$GetParam('myBasicBucket', 'Bucket::Arn')"
            event:
              - 's3:ObjectCreated:*'
              - 's3:ObjectRemoved:*'
            # OPTIONAL filterRules will only invoke function when created/removed objects end with ".png" or ".jpg"
            filterRules:
              - suffix: '.png'
              - suffix: '.jpg'
  buckets:
    # bucket to which function integration is attached
    myBasicBucket:
      accessibility: 'private'
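The handler receives one Records entry per object event, with the bucket and object key under the s3 field. A sketch trimmed to the fields it uses (the record shape follows the standard S3-to-Lambda event; note that object keys arrive URL-encoded, with spaces as '+'):

```typescript
interface S3Record {
  eventName: string;
  s3: { bucket: { name: string }; object: { key: string } };
}
interface S3Event {
  Records: S3Record[];
}

// Log every affected object; keys arrive URL-encoded (spaces as '+').
export const handler = async (event: S3Event): Promise<string[]> => {
  return event.Records.map((record) => {
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
    console.log(`${record.eventName}: s3://${record.s3.bucket.name}/${key}`);
    return key;
  });
};
```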

REQUIRED parameters

  • bucketArn - arn of the S3 bucket whose events can invoke the function
  • event - array of S3 event types which can invoke the function. Supported events are:
      • s3:ReducedRedundancyLostObject
      • s3:ObjectCreated:*
      • s3:ObjectCreated:Put
      • s3:ObjectCreated:Post
      • s3:ObjectCreated:Copy
      • s3:ObjectCreated:CompleteMultipartUpload
      • s3:ObjectRemoved:*
      • s3:ObjectRemoved:Delete
      • s3:ObjectRemoved:DeleteMarkerCreated
      • s3:ObjectRestore:*
      • s3:ObjectRestore:Post
      • s3:ObjectRestore:Completed
      • s3:Replication:*
      • s3:Replication:OperationFailedReplication
      • s3:Replication:OperationNotTracked
      • s3:Replication:OperationMissedThreshold
      • s3:Replication:OperationReplicatedAfterThreshold

OPTIONAL parameters:

  • filterRules - array of rules filtering which objects can trigger the function. Each rule can have exactly one of these properties:
      • prefix - the function is invoked only when the object key starts with the specified prefix
      • suffix - the function is invoked only when the object key ends with the specified suffix

cloudwatchLog

The cloudwatchLog integration listens for logs incoming to the specified log group.

  • The function receives the event in the following format: { "awslogs": { "data": "BASE64ENCODED_GZIP_COMPRESSED_DATA" } }. This means that the content must first be Base64-decoded and decompressed.
  • After decoding, the event has the following format (this is an EXAMPLE; the format of the logEvent inner message can differ):
    {
      "owner": "123456789012",
      "logGroup": "<<log_group_name>>",
      "logStream": "<<log_stream>>",
      "subscriptionFilters": ["Destination"],
      "messageType": "DATA_MESSAGE",
      "logEvents": [
        {
          "id": "31953106606966983378809025079804211143289615424298221568",
          "timestamp": 1432826855000,
          "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}"
        },
        {
          "id": "31953106606966983378809025079804211143289615424298221569",
          "timestamp": 1432826855000,
          "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}"
        }
      ]
    }

REQUIRED parameters:

  • logGroupArn - arn of the logGroup whose logs are being watched. This can be a logGroup of another workload or any other logGroup you wish to watch.

OPTIONAL parameters:

  • filter - allows you to specify a filter pattern to filter which logs invoke the function. For example, you might want to invoke the function only when there is an ERROR string in the log; in that case, the pattern would be ERROR. A guide on how to write filter patterns can be found in AWS docs.

loadBalancer

The loadBalancer event integration connects the function to the specified load balancer. When the load balancer receives a request (event) matching the specified integration's conditions, the function is invoked.

resources:
  # load balancer which routes traffic to the function
  loadBalancers:
    myLoadBalancer:
      interface: 'internet'
      ports:
        80:
          protocol: 'HTTP'
  functions:
    myLambda:
      filePath: 'path/to/my-lambda.ts'
      events:
        - loadBalancer:
            # referencing load balancer defined above
            loadBalancerName: 'myLoadBalancer'
            loadBalancerPort: 80
            path: '/invoke-my-lambda'
            method: 'GET'
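Functions behind a load balancer respond in the ALB target format: statusCode, headers, a string body and an isBase64Encoded flag. A sketch trimmed to the fields it uses (the event and response shapes follow the standard ALB-to-Lambda integration; treat the exact fields as an assumption of this sketch):

```typescript
interface AlbEvent {
  httpMethod: string;
  path: string;
  queryStringParameters?: Record<string, string>;
}

// Respond in the format the load balancer expects from a Lambda target.
export const handler = async (event: AlbEvent) => {
  return {
    statusCode: 200,
    statusDescription: '200 OK',
    isBase64Encoded: false,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ method: event.httpMethod, path: event.path }),
  };
};
```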

REQUIRED parameters

  • loadBalancerName - name of the loadBalancer created as part of your stack
  • loadBalancerPort - port of the loadBalancer on which it listens for traffic
  • priority - priority of the integration. Each load balancer (with its exposed port) can route traffic to different workloads through the workloads' respective event integrations. Once a request (event) arrives at the load balancer, the load balancer evaluates all its integrations in priority order, from the lowest value to the highest. If it finds an integration that matches the conditions (see OPTIONAL parameters below), the request (event) is forwarded to the workload associated with that integration.

OPTIONAL parameters

  • method - the HTTP method a request (event) must use in order to be forwarded to the function, for example GET or PUT. The value can be an array of methods; in that case, the condition is satisfied if one of the strings matches the HTTP request method.
  • path - path pattern to compare against the request (event) URL, for example /pets. The maximum size is 128 characters. The comparison is case sensitive. The following wildcard characters are supported: * (matches 0 or more characters) and ? (matches exactly 1 character). The value can be an array of patterns; in that case, the condition is satisfied if one of them matches the request URL.
  • host - hostname to which the request is sent. The comparison is case insensitive. The following wildcard characters are supported: * (matches 0 or more characters) and ? (matches exactly 1 character). The value can be an array of hostnames; in that case, the condition is satisfied if one of them matches the request host name.
  • sourceIp - source IP address in CIDR format. You can use both IPv4 and IPv6 addresses. Wildcards are not supported. The value can be an array of addresses; in that case, the condition is satisfied if the source IP address of the request matches one of the CIDR blocks. This condition is not satisfied by the addresses in the X-Forwarded-For header.
  • header - object property. Each key of the object represents a single condition and specifies the name of the header to which the condition applies. The value can be either a single string or a list of strings; in the latter case, the condition for the given header is satisfied if one of the strings in the list matches the header value of the incoming request. Conditions for each specified header must be satisfied in order for the entire condition to succeed.

    resources:
      functions:
        myFunc:
          ...
          events:
            - loadBalancer:
                ...
                header:
                  x-my-header: 'hello'
                  x-another:
                    - 'ola'
                    - 'adios'

    In this example, the condition succeeds if the evaluated request has:
      • x-my-header header set to hello
      • x-another header set to ola or adios
  • query - object property. Each key of the object represents a single condition and specifies the query parameter name. The value can be either a single string or a list of strings; in the latter case, the condition for the given query parameter is satisfied if one of the strings in the list matches the query parameter value of the incoming request. Conditions for each specified query parameter must be satisfied in order for the entire condition to succeed.


➡️ Destination onSuccess/onFailure

The concept of destinations was introduced to simplify event-driven applications and reduce their code complexity.

  • Destinations allow you to send the result of a function invocation to an SQS queue, SNS topic, event bus or another function.
  • You can choose to send only the results of failed invocations, only the results of successful invocations, or both.
  • The destination (where the result is sent) can be different for successful and failed invocations.
  • The result object delivered to the destination also includes the original event.
  • More information about destinations can be found in this AWS blog post.
Destinations only work with event integrations that invoke the function asynchronously. For kinesis and dynamoDb integrations, you can define destinations.onFailure directly on the integration (see the kinesis example).

Example 1

  • If function myLambda fails, the result (which also includes details about error and original event) is sent into myEventBus.

resources:
  # custom event bus we create as part of the stack
  eventBuses:
    myEventBus: {}
  functions:
    myLambda:
      filePath: 'path/to/my-lambda.ts'
      # if function fails, send the result into "myEventBus"
      destinations:
        onFailure: "$GetParam('myEventBus', 'EventBus::Arn')"
      events:
        # invoke function every two hours
        - schedule:
            scheduleRate: 'rate(2 hours)'

Example 2

  • If function myLambda fails, the result (which also includes details about error and original event) is sent into myEventBus.
  • If function myLambda succeeds, mySuccessLambda is invoked with the myLambda's result passed as invoke payload.

resources:
  # custom event bus we create as part of the stack
  eventBuses:
    myEventBus: {}
  functions:
    mySuccessLambda:
      filePath: 'path/to/my-success-lambda.ts'
    myLambda:
      filePath: 'path/to/my-lambda.ts'
      # if function succeeds, invoke mySuccessLambda with result data
      # if function fails, send the result into "myEventBus"
      destinations:
        onSuccess: "$GetParam('mySuccessLambda', 'LambdaFunction::Arn')"
        onFailure: "$GetParam('myEventBus', 'EventBus::Arn')"
      events:
        # invoke function every two hours
        - schedule:
            scheduleRate: 'rate(2 hours)'


➡️ Accessing other resources

Depending on your use case, your function might need to access other resources of your stack.

By default, functions have access to the internet. However, many resources, such as eventBuses, buckets or even cloudformationResources, are protected within your stack.

There are two approaches when granting function access to a resource:

  1. allowAccessTo property - a list of strings. Elements of this list are resources of your stack which the function should be able to access, such as eventBuses, buckets... allowAccessTo gives you the ability to easily and transparently control access to resources of your stack. More on using the allowAccessTo property here.

  2. iamRoleStatements property - a list of IAM role statement objects. These statements are appended to your function's role. This is an advanced feature and should be used with caution. iamRoleStatements is useful when you want more granular control of access, or when you need to control access to resources that cannot be scoped by (listed in) the allowAccessTo property. More on using the iamRoleStatements property here.

AllowAccessTo property does NOT work with resources defined in section cloudformationResources. See which resources can be scoped by allowAccessTo.

Allow access using allowAccessTo

To allow your function to access other resources that are part of your stack's resources section, use the allowAccessTo property.

Listing a resource in the allowAccessTo property gives the function a basic set of permissions over that resource (the permissions differ based on the resource type; see this section).

Databases and atlas mongo clusters are accessible from functions by default IF their allowConnectionsFrom property is set to "internet". However, when using a more strict allowConnectionsFrom setting, the function must be connected to the VPC.

Example scenario

In this example scenario, we are giving the function permission to access a specific bucket.

resources:
  functions:
    myLambda:
      filePath: 'path/to/my-photo-processing-lambda.ts'
      # we are injecting AWS names of buckets into the function's environment variables, to make these names accessible during function execution
      environment:
        PHOTOS_BUCKET: "$GetParam('photosBucket', 'Bucket::Name')"
      allowAccessTo:
        - 'photosBucket'
  buckets:
    photosBucket:
      accessibility: 'private'

Scope of allowAccessTo

Resources which can be scoped by allowAccessTo, and the permissions granted on each:

  • bucket
      • list objects in the bucket
      • create/get/delete/tag objects in the bucket
  • atlasMongoCluster
  • database
      • allows the function to connect to a database which has restricted allowConnectionsFrom. See databases docs.
  • eventBus
      • publish events into the specified eventBus
  • function
      • invoke the specified function
  • batchJob
      • submit a batchJob instance into the batchJob queue
      • list submitted job instances in the batchJob queue
      • describe/terminate a specific batchJob instance
      • list executions of the state machine which executes the batchJob according to its strategy
      • start/terminate execution of the state machine which executes the batchJob according to its strategy
  • stateMachine
      • list executions of the stateMachine
      • start/terminate execution of the stateMachine

Allow access using iamRoleStatements

If you need to access resources which you cannot scope with allowAccessTo, or you need fine-grained resource access control, you can use the iamRoleStatements property. Stacktape adds the listed iamRoleStatements to the role which the function uses during execution.

You can also use iamRoleStatements to give function access to cloudformation resources defined in cloudformationResources section.

Example scenario

In this example scenario, we are giving the function permission to access a DynamoDB table defined in the cloudformationResources section.

resources:
  functions:
    myLambda:
      filePath: 'path/to/my-processing-lambda.ts'
      # we are injecting DynamoDB table name into environment variables, to make it accessible during execution
      environment:
        RESULT_TABLE_NAME: "$GetParam('ResultsTable', 'Name')"
      # giving function permissions to get and put items into dynamoDB table defined in cloudformation resources
      iamRoleStatements:
        - Resource: "$GetParam('ResultsTable', 'Arn')"
          Effect: 'Allow'
          Action:
            - 'dynamodb:PutItem'
            - 'dynamodb:Get*'
cloudformationResources:
  ResultsTable:
    Type: 'AWS::DynamoDB::Table'
    Properties:
      TableName: "$Format('{}-results-table', $GetCliArgs().stage)"
      BillingMode: 'PAY_PER_REQUEST'
      AttributeDefinitions:
        - AttributeName: 'resultDate'
          AttributeType: 'S'
      KeySchema:
        - AttributeName: 'resultDate'
          KeyType: 'HASH'


API Reference

Property in Stacktape config - Allowed types

resources.functions.{name}.allowAccessTo - []string
resources.functions.{name}.deadLetterQueueArn - string
resources.functions.{name}.dependsOn - []string
resources.functions.{name}.description - string
resources.functions.{name}.destinations - Destinations
resources.functions.{name}.disableLogs - boolean
resources.functions.{name}.environment.{name} - string | number | boolean
resources.functions.{name}.events[].cloudwatchLog - CloudwatchLog
resources.functions.{name}.events[].dynamoDb.batchSize - number
resources.functions.{name}.events[].dynamoDb.batchWindow - number
resources.functions.{name}.events[].dynamoDb.bisectBatchOnFunctionError - boolean
resources.functions.{name}.events[].dynamoDb.destinations.onFailure - OnFailure (Required)
resources.functions.{name}.events[].dynamoDb.enabled - boolean
resources.functions.{name}.events[].dynamoDb.maximumRetryAttempts - number
resources.functions.{name}.events[].dynamoDb.parallelizationFactor - number
resources.functions.{name}.events[].dynamoDb.startingPosition - string
resources.functions.{name}.events[].dynamoDb.streamArn - string (Required)
resources.functions.{name}.events[].eventBus.customEventBusArn - string
resources.functions.{name}.events[].eventBus.description - string
resources.functions.{name}.events[].eventBus.eventPattern - EventPattern (Required)
resources.functions.{name}.events[].eventBus.input - any
resources.functions.{name}.events[].eventBus.inputPath - string
resources.functions.{name}.events[].eventBus.inputTransformer - InputTransformer
resources.functions.{name}.events[].httpApi.authorizer - Authorizer
resources.functions.{name}.events[].httpApi.httpApiGatewayName - string (Required)
resources.functions.{name}.events[].httpApi.method - enum (Required)
resources.functions.{name}.events[].httpApi.path - string (Required)
resources.functions.{name}.events[].httpApi.payloadFormat - enum
resources.functions.{name}.events[].iot - Iot
resources.functions.{name}.events[].kinesis.autoCreateConsumer - boolean
resources.functions.{name}.events[].kinesis.batchSize - number
resources.functions.{name}.events[].kinesis.bisectBatchOnFunctionError - boolean
resources.functions.{name}.events[].kinesis.consumerArn - string
resources.functions.{name}.events[].kinesis.destinations.onFailure - OnFailure (Required)
resources.functions.{name}.events[].kinesis.enabled - boolean
resources.functions.{name}.events[].kinesis.maxBatchWindowSeconds - number
resources.functions.{name}.events[].kinesis.maximumRetryAttempts - number
resources.functions.{name}.events[].kinesis.parallelizationFactor - number
resources.functions.{name}.events[].kinesis.startingPosition - enum
resources.functions.{name}.events[].kinesis.streamArn - string (Required)
resources.functions.{name}.events[].loadBalancer - LoadBalancer
resources.functions.{name}.events[].s3.bucketArn - string (Required)
resources.functions.{name}.events[].s3.event - string | []string (Required)
resources.functions.{name}.events[].s3.filterRules - []FilterRules
resources.functions.{name}.events[].schedule.description - string
resources.functions.{name}.events[].schedule.input - any
resources.functions.{name}.events[].schedule.inputPath - string
resources.functions.{name}.events[].schedule.inputTransformer - InputTransformer
resources.functions.{name}.events[].schedule.scheduleRate - string (Required)
resources.functions.{name}.events[].sns.destinations.onDeliveryFailure - OnDeliveryFailure (Required)
resources.functions.{name}.events[].sns.filterPolicy - any
resources.functions.{name}.events[].sns.topicArn - string (Required)
resources.functions.{name}.events[].sqs - Sqs
resources.functions.{name}.filePath - string (Required)
resources.functions.{name}.handler - string
resources.functions.{name}.iamRoleArn - string
resources.functions.{name}.iamRoleStatements - []IamRoleStatements
resources.functions.{name}.includeFiles - string | []string
resources.functions.{name}.joinVpc - boolean
resources.functions.{name}.layers - []string
resources.functions.{name}.logRetentionDays - number
resources.functions.{name}.memory - number
resources.functions.{name}.provisionedConcurrency - number
resources.functions.{name}.reservedConcurrency - number
resources.functions.{name}.runtime - enum
resources.functions.{name}.tags.{name} - string
resources.functions.{name}.timeout - number
resources.functions.{name}.tracing - enum