DynamoDB tables

Overview

Amazon DynamoDB is a key-value and JSON document database that delivers single-digit millisecond performance at any scale. It's a fully managed, multi-region, multi-active, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications.

Advantages

  • Performance at scale - DynamoDB supports some of the world’s largest scale applications by providing consistent, single-digit millisecond response times at any scale. You can build applications with virtually unlimited throughput and storage.
  • No servers to manage - DynamoDB is serverless with no servers to provision, patch, or manage and no software to install, maintain, or operate.
  • Enterprise ready - DynamoDB supports ACID transactions to enable you to build business-critical applications at scale. DynamoDB encrypts all data by default and provides fine-grained identity and access control on all your tables.

Basic usage

DynamoTable  API reference
Required
type
Type: string "dynamo-table"
Type of the resource
properties.provisionedThroughput

Throughput specification for the specified table.

properties.pointInTimeRecoveryEnabled
Type: boolean

Enables continuous backups with point-in-time recovery capability.

properties.dynamoStreamType
Type: string ENUM

Enables a stream (and determines the stream type) which captures changes to items stored in the table.

overrides
Type: Object

Overrides properties of the specified sub-resource.

The only thing required to create a DynamoDB table is specifying the primary key. The primaryKey uniquely identifies each item in the table.

resources:
  myDynamoTable:
    type: dynamo-table
    properties:
      primaryKey:
        partitionKey:
          attributeName: this_attribute_will_be_id
          attributeType: string

Accessing table

You can access the table from your workloads once you grant the workload permission to access it.

In the following configuration, we grant the function myFunction access to myDynamoTable.

resources:
  myDynamoTable:
    type: dynamo-table
    properties:
      primaryKey:
        partitionKey:
          attributeName: id
          attributeType: string

  myFunction:
    type: function
    properties:
      packageConfig:
        filePath: 'my-lambda.ts'
      environment:
        - name: TABLE_NAME
          value: $GetParam('myDynamoTable', 'DynamoTable::Name')
      accessControl:
        allowAccessTo:
          - myDynamoTable

The code of the function (file my-lambda.ts) might look like this:

import { DynamoDBClient, GetItemCommand, PutItemCommand } from '@aws-sdk/client-dynamodb';

// creating dynamo client
const client = new DynamoDBClient({});

export default async (event, context) => {
  // put item into the table
  await client.send(
    new PutItemCommand({
      Item: { id: { S: 'my_id_1' }, writeTimestamp: { S: new Date().toLocaleTimeString() } },
      TableName: process.env.TABLE_NAME
    })
  );
  // get item from the table
  const result = await client.send(
    new GetItemCommand({ Key: { id: { S: 'my_id_1' } }, TableName: process.env.TABLE_NAME })
  );
  // ... other code
};
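
The example above uses the low-level client, where attribute values are written in DynamoDB's typed format ({ S: ... }). If you prefer working with plain JavaScript objects, you can wrap the client with the document client from the @aws-sdk/lib-dynamodb package. A minimal sketch, assuming the same TABLE_NAME environment variable:

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, PutCommand, GetCommand } from '@aws-sdk/lib-dynamodb';

// the document client marshalls plain JS objects to DynamoDB attribute values and back
const documentClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export default async (event, context) => {
  // put an item using a plain object (no { S: ... } wrappers)
  await documentClient.send(
    new PutCommand({
      TableName: process.env.TABLE_NAME,
      Item: { id: 'my_id_1', writeTimestamp: new Date().toISOString() }
    })
  );

  // read the item back; result.Item is a plain object (or undefined if not found)
  const result = await documentClient.send(
    new GetCommand({ TableName: process.env.TABLE_NAME, Key: { id: 'my_id_1' } })
  );
  // ... other code
};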

Designing primary key

  • The primary key uniquely identifies each item in the table, so that no two items can have the same key.

  • Two different kinds of primary keys are supported:

    1. simple primary key - you only specify partitionKey
    2. composite primary key - you specify both partitionKey and sortKey
  • Primary key specification cannot be modified during updates (after the table is created).

  • To better understand primary keys, see the AWS docs

TablePrimaryKey  API reference
Parent API reference: DynamoTable
Required
partitionKey

Specifies a single top-level attribute which must be included in each table item.

sortKey

If specified, this attribute becomes part of the primary key together with the partitionKey.

The attribute you choose must be a top-level (non-nested) attribute (field) of the JSON document. Every JSON document inserted into the table must contain this attribute.

KeyAttribute  API reference
Parent API reference: TablePrimaryKey
Required
attributeName
Type: string

Name of the top-level attribute

Required
attributeType
Type: string ENUM

Type of the key attribute

Example of a composite primary key.

resources:
  myDynamoTable:
    type: dynamo-table
    properties:
      primaryKey:
        partitionKey:
          attributeName: this_attribute_will_be_partition_key
          attributeType: string
        sortKey:
          attributeName: this_attribute_will_be_sort_key
          attributeType: number
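
With a composite primary key, you can query all items that share a partition key value and optionally narrow the result by the sort key. A minimal sketch using the AWS SDK (the attribute names follow the example above; the queried values are hypothetical):

import { DynamoDBClient, QueryCommand } from '@aws-sdk/client-dynamodb';

const client = new DynamoDBClient({});

// fetch all items sharing one partition key value whose sort key exceeds a threshold
export const queryItems = async () => {
  const result = await client.send(
    new QueryCommand({
      TableName: process.env.TABLE_NAME,
      KeyConditionExpression: '#pk = :pk AND #sk > :sk',
      ExpressionAttributeNames: {
        '#pk': 'this_attribute_will_be_partition_key',
        '#sk': 'this_attribute_will_be_sort_key'
      },
      ExpressionAttributeValues: {
        ':pk': { S: 'some_partition_value' },
        ':sk': { N: '100' } // numbers are passed as strings in the low-level API
      }
    })
  );
  // result.Items contains the matching items (sorted by the sort key)
  return result.Items;
};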

Configuring provisioned throughput

  • When you specify provisionedThroughput, the table runs in provisioned mode and you need to specify read and write throughput for your table. Provisioned mode gives you the ability to stay at or below a defined request rate in order to obtain cost predictability.
  • When you do NOT specify provisionedThroughput, the table runs and is billed in on-demand mode. This means you only pay for what you use. However, if you have a predictable read/write load, it might be cheaper to specify provisionedThroughput.
  • To understand the differences between provisioned and on-demand mode, see the AWS docs
ProvisionedThroughput  API reference
Parent API reference: DynamoTable
Required
readUnitsPerSecond
Type: number

Number of read units available every second (if exceeded, DynamoDB returns a ThrottlingException)

Required
writeUnitsPerSecond
Type: number

Number of write units available every second (if exceeded, DynamoDB returns a ThrottlingException)

autoScaling

Auto scaling configuration for read/write units

resources:
  myDynamoTable:
    type: dynamo-table
    properties:
      primaryKey:
        partitionKey:
          attributeName: this_attribute_will_be_id
          attributeType: string
      provisionedThroughput:
        readUnitsPerSecond: 4
        writeUnitsPerSecond: 4
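
If your traffic exceeds the provisioned throughput, requests can be rejected. The AWS SDK retries throttled requests automatically, but you may also want to handle the error explicitly. A minimal sketch, assuming the same TABLE_NAME environment variable (the fallback behaviour is only illustrative):

import {
  DynamoDBClient,
  PutItemCommand,
  ProvisionedThroughputExceededException
} from '@aws-sdk/client-dynamodb';

const client = new DynamoDBClient({});

export const putWithFallback = async (id: string) => {
  try {
    await client.send(
      new PutItemCommand({
        TableName: process.env.TABLE_NAME,
        Item: { id: { S: id } }
      })
    );
  } catch (error) {
    if (error instanceof ProvisionedThroughputExceededException) {
      // the SDK has already retried; at this point you might queue the write for later
      console.warn(`Write of ${id} was throttled, deferring`);
      return;
    }
    throw error;
  }
};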

Autoscale provisioned throughput

If you operate your table in provisioned mode, you can set up provisioned throughput autoscaling. Scaling provisioned throughput up and down based on demand can lead to significant cost savings.

Autoscaling is useful if your table is not utilized consistently at all times. For example, many consumer web applications are heavily visited throughout the day, but are less utilized during nights. In these scenarios, autoscaling allows you to scale the table throughput up/down once the specified thresholds are met.

Great information on throughput autoscaling can be found in this AWS article

ThroughputScaling  API reference
Parent API reference: ProvisionedThroughput
readScaling

Auto scaling specification for read units

writeScaling

Auto scaling specifications for write units

resources:
  myDynamoTable:
    type: dynamo-table
    properties:
      primaryKey:
        partitionKey:
          attributeName: this_attribute_will_be_id
          attributeType: string
      provisionedThroughput:
        readUnitsPerSecond: 4
        writeUnitsPerSecond: 4
        autoScaling:
          readScaling:
            minReadUnits: 4
            maxReadUnits: 10
            keepReadUtilizationBelowPercent: 80

ReadScaling  API reference
Parent API reference: ThroughputScaling
Required
minReadUnits
Type: number

Minimum number of provisioned read units available every second.

Required
maxReadUnits
Type: number

Maximum number of provisioned read units the table can scale up to

Required
keepReadUtilizationBelowPercent
Type: number

Utilization threshold (in percent) for scaling

WriteScaling  API reference
Parent API reference: ThroughputScaling
Required
minWriteUnits
Type: number

Minimum number of provisioned write units available every second.

Required
maxWriteUnits
Type: number

Maximum number of provisioned write units the table can scale up to

Required
keepWriteUtilizationBelowPercent
Type: number

Utilization threshold (in percent) for scaling

Enable point in time recovery

  • With point-in-time recovery, you can restore the table to any point in time during the last 35 days

The point-in-time recovery process always restores to a new table.

Enabling point-in-time recovery can result in increased AWS charges. See DynamoDB pricing to understand the costs

resources:
  myDynamoTable:
    type: dynamo-table
    properties:
      primaryKey:
        partitionKey:
          attributeName: this_attribute_will_be_id
          attributeType: string
      pointInTimeRecoveryEnabled: true
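
Restores are typically performed from the AWS console or CLI, but they can also be triggered programmatically with the AWS SDK. A minimal sketch (the table names and timestamp are hypothetical; the restore always creates a new table):

import { DynamoDBClient, RestoreTableToPointInTimeCommand } from '@aws-sdk/client-dynamodb';

const client = new DynamoDBClient({});

// restore the table contents as they were at a specific timestamp into a NEW table
export const restoreTable = async () => {
  await client.send(
    new RestoreTableToPointInTimeCommand({
      SourceTableName: 'my-existing-table',
      TargetTableName: 'my-restored-table',
      RestoreDateTime: new Date('2023-05-01T12:00:00Z')
    })
  );
};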

Enable dynamo stream

By specifying the dynamoStreamType property, you enable a DynamoDB stream, which captures changes to items stored in the table.

The stream can be easily consumed by a function.

  • The stream type determines what information is written to the stream when an item in the table is modified.
  • Allowed values are:
    1. KEYS_ONLY - Only the key attributes of the modified item are written to the stream.
    2. NEW_IMAGE - The entire item, as it appears after it was modified, is written to the stream.
    3. OLD_IMAGE - The entire item, as it appeared before it was modified, is written to the stream.
    4. NEW_AND_OLD_IMAGES - Both the new and the old item images of the item are written to the stream.
  • Streams can be consumed by a function (a sketch of a consumer handler follows the example below).

resources:
  myDynamoTable:
    type: dynamo-table
    properties:
      primaryKey:
        partitionKey:
          attributeName: this_attribute_will_be_id
          attributeType: string
      dynamoStreamType: NEW_AND_OLD_IMAGES
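
A function consuming the stream receives batches of change records. A minimal sketch of such a handler, assuming the @types/aws-lambda package is installed for the event type (the logging is only illustrative):

import type { DynamoDBStreamEvent } from 'aws-lambda';

export default async (event: DynamoDBStreamEvent) => {
  for (const record of event.Records) {
    // eventName is INSERT, MODIFY or REMOVE
    console.log(`Received ${record.eventName} event`);

    // with NEW_AND_OLD_IMAGES, both images are present (in DynamoDB attribute-value format)
    console.log('New image:', record.dynamodb?.NewImage);
    console.log('Old image:', record.dynamodb?.OldImage);
  }
};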