Relational (SQL) databases

Overview

  • Relational databases allow you to deploy a fully managed relational (SQL) database to your stack. You can choose from multiple database types, such as PostgreSQL, MySQL, MariaDB, Oracle Database or SQL Server.

  • They are easy to set up, operate and scale. Capacity scaling, hardware & VM provisioning, database setup, patching, logging, backups and more are provided out of the box.

Under the hood

Under the hood, Stacktape uses AWS RDS service to provision databases.

Three engine categories, each with a different topology and scaling behavior, are supported:

  • Basic RDS engines: the default and cheapest option. A single-node database with optional read replicas for higher read performance and Multi-AZ standby instances for increased resilience and fault tolerance. Available engines are PostgreSQL, MySQL, MariaDB, Oracle Database and SQL Server. For more information, refer to the AWS website.
  • Aurora engine: a multi-node, highly available database cluster with increased durability and fault tolerance. Read requests are automatically balanced between available nodes, resulting in higher performance. Supported engines are PostgreSQL and MySQL. For more information, refer to the AWS website.
  • Aurora serverless engine: similar to the Aurora engine, but with support for automatic usage-based scaling. The database compute capacity scales up and down (within configured boundaries) depending on the load, and the database can be completely paused after a configured period of inactivity, saving you resources and money. Supported engines are PostgreSQL and MySQL. For more information, refer to the AWS website.

Security

Databases are always deployed within your stack's VPC (Virtual Private Cloud), in your private network. You can configure database accessibility, i.e. whether the database is accessible from the internet or only available to other resources in the VPC.

Basic usage

  • Relational databases require an active connection. To establish a connection, you typically need a connection string.
  • You can use the connectTo property on the connecting resource to automatically inject the required variables.

resources:
  myDatabase:
    type: relational-database
    properties:
      credentials:
        masterUserPassword: $Secret('database.password')
      engine:
        type: postgres
        properties:
          version: '16.2'
          primaryInstance:
            instanceSize: db.t3.micro
  apiServer:
    type: multi-container-workload
    properties:
      resources:
        cpu: 1
        memory: 1024
      containers:
        - name: api-container
          packaging:
            type: stacktape-image-buildpack
            properties:
              entryfilePath: src/main.ts
          connectTo:
            - myDatabase

Single-node Postgres database that uses RDS Engine

import express from 'express';
import { Pool } from 'pg';

const pgPool = new Pool({
  connectionString: process.env.STP_MY_DATABASE_CONNECTION_STRING // env variable was automatically injected by Stacktape
});

const app = express();

app.get('/time', async (req, res) => {
  const result = await pgPool.query('SELECT NOW()');
  const time = result.rows[0];
  res.send(time);
});

app.listen(3000, () => {
  console.info('Server running on port 3000');
});

Container workload connecting to the database

Database credentials

  • Configures credentials for the database master user.
  • You should not specify these directly in your configuration. The recommended way is to use a secret.

resources:
  myRelationalDatabase:
    type: relational-database
    properties:
      credentials:
        masterUserName: $File('.env').DB_USER_NAME # OPTIONAL
        masterUserPassword: $Secret('dbCredentials.password')
      engine:
        type: postgres
        properties:
          version: '16.2'
          port: 5432
          primaryInstance:
            instanceSize: db.t2.micro

  • masterUserPassword (Required)
  • masterUserName (Default: db_master_user)

Engine

  • The database engine determines multiple important properties of your database:
    • Database type (PostgreSQL, MySQL, MariaDB, Oracle Database or SQL Server)
    • Performance
    • High availability and fault tolerance
    • Scaling behavior
    • Pricing

Engine Version

Depending on the engine type, you must choose one of the available engine versions (listed below).

resources:
  myDatabase:
    type: relational-database
    properties:
      credentials:
        masterUserPassword: $Secret('dbPassword')
      engine:
        type: postgres
        properties:
          version: '16.2'
          port: 5432
          primaryInstance:
            instanceSize: db.t3.micro

Available versions

  • aurora-mysql: 8.0.mysql_aurora.3.07.1, 8.0.mysql_aurora.3.07.0, 8.0.mysql_aurora.3.06.1
  • aurora-mysql-serverless: 5.7.mysql_aurora.2.11.4
  • aurora-postgresql: 16.4, 16.3, 16.2
  • aurora-postgresql-serverless: 13.12
  • mariadb: 11.4.3, 10.11.9, 10.11.8
  • mysql: 8.0.39, 8.0.37, 8.0.36
  • oracle-ee: 19.0.0.0.ru-2024-07.rur-2024-07.r1, 19.0.0.0.ru-2024-04.rur-2024-04.r1, 19.0.0.0.ru-2024-01.rur-2024-01.r1
  • oracle-se2: 19.0.0.0.ru-2024-07.rur-2024-07.r1, 19.0.0.0.ru-2024-04.rur-2024-04.r1, 19.0.0.0.ru-2024-01.rur-2024-01.r1
  • postgres: 16.4, 16.3, 16.2
  • sqlserver-ee: 16.00.4140.3.v1, 16.00.4135.4.v1, 16.00.4131.2.v1
  • sqlserver-ex: 16.00.4140.3.v1, 16.00.4135.4.v1, 16.00.4131.2.v1
  • sqlserver-se: 16.00.4140.3.v1, 16.00.4135.4.v1, 16.00.4131.2.v1
  • sqlserver-web: 16.00.4140.3.v1, 16.00.4135.4.v1, 16.00.4131.2.v1

RDS Engine

  • To use the RDS engine, set the engine.type property to postgres, mysql, mariadb, oracle-ee, oracle-se2, sqlserver-ee, sqlserver-ex, sqlserver-se or sqlserver-web.
  • The RDS engine is the default, cheapest, single-node, fully managed database engine.
  • The RDS engine is not highly available or fault-tolerant by default. However, Stacktape allows you to configure a standby instance in a different AZ (Availability Zone) to increase resilience, or to add read replicas to increase read performance.
RdsEngine API reference

  • type (Required)
  • properties.primaryInstance (Required)
  • properties.version (Required)
  • properties.dbName
  • properties.port
  • properties.storage
  • properties.readReplicas
  • properties.disableAutoMinorVersionUpgrade

Instance size

  • Allows you to choose the database instance size. Each instance size offers a different combination of CPU, memory, storage, and networking capacity.
  • To see a list of available instances, refer to AWS docs.
  • Not every instance size is supported for every database engine, version or region.
  • Be aware that some instance families (such as t3 or t4) are mostly meant for testing, not production workloads.
  • Refer to AWS Docs for a detailed breakdown of supported combinations.
  • Instance size can be configured for both the primary instance and read replicas.

resources:
  myDatabase:
    type: relational-database
    properties:
      credentials:
        masterUserPassword: $Secret('dbPassword')
      engine:
        type: postgres
        properties:
          version: '16.2'
          port: 5432
          primaryInstance:
            instanceSize: db.t3.micro

MultiAz mode

  • When enabled, the data is replicated to a standby instance in a different AZ (Availability Zone).
  • If the default (primary) instance fails, a failover to the standby instance in another AZ is performed.
  • Replication to the standby instance is synchronous (highly durable).
  • The standby instance cannot be accessed directly (it doesn't have its own database endpoint).
  • In contrast to read replicas, the standby instance can fully take over the responsibilities of the primary instance, while replicas can only be used for read operations.
  • Reduces the impact of maintenance: maintenance is performed on the standby instance first, the standby is then promoted to the primary instance, and maintenance is finally performed on the old primary instance, which is now the standby.
  • Multi-AZ mode can be configured for both primary instances and read replicas.

resources:
  myDatabase:
    type: relational-database
    properties:
      credentials:
        masterUserPassword: $Secret('dbPassword')
      engine:
        type: postgres
        properties:
          version: '16.2'
          port: 5432
          primaryInstance:
            instanceSize: db.t2.micro
            multiAz: true

RdsEnginePrimaryInstance API reference

  • instanceSize (Required)
  • multiAz

Read replicas

  • Read replicas help decrease the load on the primary instance by serving read-only database requests.
  • Replication uses the database engine's native asynchronous replication to update the read replicas whenever there is a change to the primary instance.
  • Each read replica has its own database endpoint.

resources:
  myDatabase:
    type: relational-database
    properties:
      credentials:
        masterUserPassword: $Secret('dbPassword')
      engine:
        type: postgres
        properties:
          version: '16.2'
          port: 5432
          primaryInstance:
            instanceSize: db.t3.micro
          readReplicas:
            - instanceSize: db.t3.micro
            - instanceSize: db.t3.micro

RdsEngineReadReplica API reference

  • instanceSize (Required)
  • multiAz

Storage

  • When you run out of free database space, your storage will automatically scale up.
  • By default, storage scales between 20 GB and 200 GB.
  • The scaling process happens when the following conditions are met:
    • Free available space is less than 10 percent of the allocated storage.
    • The low-storage condition lasts at least five minutes.
    • At least six hours have passed since the last storage modification.
  • To learn more about storage autoscaling, refer to AWS Docs

resources:
  myDatabase:
    type: relational-database
    properties:
      credentials:
        masterUserPassword: $Secret('dbPassword')
      engine:
        type: postgres
        properties:
          version: '16.2'
          port: 5432
          primaryInstance:
            instanceSize: db.t3.micro
          storage:
            initialSize: 40
            maxSize: 400

RdsEngineStorage API reference

  • initialSize (Default: 20)
  • maxSize (Default: 200)

Aurora Engine

  • To use the Aurora engine, set the engine.type property to aurora-postgresql or aurora-mysql.
  • A fully managed, AWS-developed engine with clustering support, high availability, and increased durability & performance.
  • Compute instances (nodes) run in a single Availability Zone. Storage is automatically replicated six ways across three Availability Zones.
  • Read operations are automatically load-balanced between nodes.
  • Automatic failover: if the primary instance fails, one of the read replicas is elected as the new primary instance.
  • To learn more about the Aurora engine, refer to AWS Docs.

resources:
  auroraSlsPostgres:
    type: relational-database
    properties:
      credentials:
        masterUserPassword: $Secret('dbSecret.password')
      engine:
        type: aurora-postgresql
        properties:
          version: '16.2'
          instances:
            - instanceSize: db.t3.medium
          port: 5432

AuroraEngine API reference

  • type (Required)
  • properties.instances (Required)
  • properties.version (Required)
  • properties.dbName
  • properties.port
  • properties.disableAutoMinorVersionUpgrade

AuroraEngineInstance API reference

  • instanceSize (Required)

Aurora Serverless Engine

  • To use the Aurora Serverless engine, set the engine.type property to aurora-postgresql-serverless or aurora-mysql-serverless.
  • A fully managed, AWS-developed engine with clustering support, high availability, and increased durability & performance.
  • Similar to the Aurora engine, but automatically scales based on usage. Scaling is done in ACUs (Aurora Capacity Units).
  • Each ACU has ~2 GB of RAM and 1 virtual CPU.
  • Can scale down to 0 ACUs (the database is paused and you don't pay for compute).
  • To learn more about the Aurora Serverless engine, refer to AWS Docs.
AuroraServerlessEngine API reference

  • type (Required)
  • properties.version
  • properties.dbName
  • properties.minCapacity (Default: 2)
  • properties.maxCapacity (Default: 4)
  • properties.pauseAfterSeconds
  • properties.disableAutoMinorVersionUpgrade

resources:
  myDatabase:
    type: relational-database
    properties:
      credentials:
        masterUserPassword: $Secret('dbSecret.password')
      engine:
        type: aurora-postgresql-serverless
        properties:
          version: '13.12'
          minCapacity: 4
          maxCapacity: 8
          pauseAfterSeconds: 600

Backups

  • Databases are automatically backed up once a day.
  • The maximum retention period is 35 days.
  • You can disable automated backups by setting automatedBackupRetentionDays to 0 (works only for RDS engines).
  • You can also take manual backup snapshots (in the console or using the API). The retention period does not apply to manual backups.
  • By default, backups are retained for 1 day.
  • To learn more about RDS engine backups, refer to RDS engine backups AWS Docs.
  • To learn more about Aurora engine backups, refer to Aurora engine backups AWS Docs.

resources:
  myDatabase:
    type: relational-database
    properties:
      credentials:
        masterUserPassword: my_secret_password
      engine:
        type: postgres
        properties:
          version: '16.2'
          primaryInstance:
            instanceSize: db.t3.micro
      automatedBackupRetentionDays: 5

Logging

  • By default, logging is enabled and logs are preserved for 90 days.
  • The logged data depends on the engine used. You can log information about connections, disconnections, executed queries & much more.
RelationalDatabaseLogging API reference

  • disabled
  • retentionDays (Default: 90)
  • logTypes
  • engineSpecificOptions
  • logForwarding

resources:
  myDatabase:
    type: relational-database
    properties:
      credentials:
        masterUserPassword: my_secret_password
      engine:
        type: postgres
        properties:
          version: '16.2'
          primaryInstance:
            instanceSize: db.t3.micro
      logging:
        retentionDays: 30
        engineSpecificOptions:
          log_connections: true

Forwarding logs

It is possible to forward logs to third-party services/databases. See the Forwarding logs page for more information and examples.
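
For illustration, below is a minimal sketch of what forwarding database logs to an HTTP endpoint could look like. The logForwarding property exists on the logging configuration (see the API reference above), but the exact shape shown here (the http-endpoint type, its properties and endpointUrl) is an assumption; consult the Forwarding logs page for the authoritative schema.

resources:
  myDatabase:
    type: relational-database
    properties:
      credentials:
        masterUserPassword: $Secret('dbPassword')
      engine:
        type: postgres
        properties:
          version: '16.2'
          primaryInstance:
            instanceSize: db.t3.micro
      logging:
        logForwarding:
          # hypothetical forwarding target - verify property names on the Forwarding logs page
          type: http-endpoint
          properties:
            endpointUrl: https://example.com/database-logs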

Closing zombie connections

  • Connections initiated by resources that are no longer running (for example, stopped containers) can result in so-called "zombie" connections.
  • Modern, cloud-native architectures usually include horizontally scalable, ephemeral (short-lived) resources. When using these resources, you should have a strategy for dealing with zombie connections.

From container workloads

  • For connections initiated from container workloads, don't forget to close the connection before your container exits. For example, you can hook into the SIGTERM signal.

const connectionPool = createConnectionPool();

// remember to close the connection even on errors
process
  .on('uncaughtException', () => {
    connectionPool.close();
    process.exit(1);
  })
  .on('unhandledRejection', () => {
    connectionPool.close();
    process.exit(1);
  });

process.on('SIGTERM', () => {
  connectionPool.close();
  process.exit(0);
});

Example of closing connections before the container exits

From batch jobs

  • For connections initiated from batch jobs, don't forget to close the connection before your batch job finishes and its container exits.

const connectionPool = createConnectionPool();
connectionPool.connect();

// remember to close the connection even on errors
process
  .on('uncaughtException', () => {
    connectionPool.close();
    process.exit(1);
  })
  .on('unhandledRejection', () => {
    connectionPool.close();
    process.exit(1);
  });

doSomethingWithYourConnection();

connectionPool.close();

From lambda functions

  • For connections initiated from lambda functions, you have 2 options:

Initialize and close the connection INSIDE the function handler.

  • This way you can avoid having a zombie connection.
  • This approach is not optimal, because creating a database connection can be slow (it can take tens to hundreds of milliseconds).

    import { Client } from 'pg';

    const handler = async (event, context) => {
      const pgClient = new Client({
        user: process.env.DB_USER,
        host: process.env.DB_HOST,
        database: process.env.DB_NAME,
        password: process.env.DB_PASSWORD,
        port: process.env.DB_PORT
      });
      await pgClient.connect();
      const result = await pgClient.query('SELECT NOW()');
      const time = result.rows[0];
      await pgClient.end();
      return { result: time };
    };

    export default handler;

Initialize connections OUTSIDE the function handler

  • Initialize the connection once, outside the handler, and reuse it across function invocations.

  • This WILL result in zombie connections when the container running your function stops (you can't hook into a SIGTERM signal of a lambda container to close the connection). In this case, you should do two things:

    • lower your database connection timeouts (using database parameters specific to the engine used):

      • for Postgres, set idle_in_transaction_session_timeout to something like 30 minutes, tcp_keepalives_idle to 30 minutes and tcp_keepalives_interval to 1 minute. This way, inactive connections will be closed by the database.
    • if a database request fails because of a closed connection, re-create the connection within your application code (see the sketch after the example below).

      import { Client } from 'pg';

      const pgClient = new Client({
        user: process.env.DB_USER,
        host: process.env.DB_HOST,
        database: process.env.DB_NAME,
        password: process.env.DB_PASSWORD,
        port: process.env.DB_PORT
      });

      (async () => {
        await pgClient.connect();
      })();

      const handler = async (event, context) => {
        const result = await pgClient.query('SELECT NOW()');
        const time = result.rows[0];
        return { result: time };
      };

      export default handler;
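
      A minimal sketch of re-creating the connection inside your application code when a query fails because the database has closed the connection. The retry condition is intentionally simplified (any error triggers a single reconnect-and-retry); adapt it to the specific errors your driver returns:

      import { Client } from 'pg';

      // helper that builds a fresh client from the same environment variables
      const createClient = () =>
        new Client({
          user: process.env.DB_USER,
          host: process.env.DB_HOST,
          database: process.env.DB_NAME,
          password: process.env.DB_PASSWORD,
          port: process.env.DB_PORT
        });

      let pgClient = createClient();
      let connecting = pgClient.connect();

      const queryWithReconnect = async (sql, params) => {
        await connecting;
        try {
          return await pgClient.query(sql, params);
        } catch (err) {
          // the connection was likely closed by the database - re-create it and retry once
          pgClient = createClient();
          connecting = pgClient.connect();
          await connecting;
          return pgClient.query(sql, params);
        }
      };

      const handler = async (event, context) => {
        const result = await queryWithReconnect('SELECT NOW()');
        return { result: result.rows[0] };
      };

      export default handler;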

Accessibility

  • You can configure which resources and hosts can access your database.
  • To access your database, you always need to use your database user credentials.
  • On top of that, Stacktape allows you to restrict the accessibility of your database to only certain resources or hosts.
DatabaseAccessibility API reference

  • accessibilityMode (Default: internet)
  • forceDisablePublicIp
  • whitelistedIps

Internet mode

  • Default mode.
  • Least restrictive. The database can be accessed from anywhere on the internet.

VPC mode

  • The database can be accessed only from resources within the default VPC.
  • Any function (provided it has joinDefaultVpc set to true), batch job, container workload or service within your stack can access the database.
  • Additionally, IP addresses configured in whitelistedIps can also access the database (even from the internet). To disable this behavior and enforce database isolation to the VPC only, set the forceDisablePublicIp property.

resources:
  myDatabase:
    type: relational-database
    properties:
      credentials:
        masterUserPassword: $Secret('dbPassword')
      engine:
        type: aurora-postgresql
        properties:
          version: '16.2'
          instances:
            - instanceSize: db.t3.medium
          port: 5432
      accessibility:
        accessibilityMode: vpc
  myFunction:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: path/to/my/function.ts
      joinDefaultVpc: true

Scoping workloads in VPC mode

  • Similar to vpc mode, but even more restrictive. In addition to being in the VPC, any resource trying to access your database must explicitly include the database in its connectTo list.
  • Additionally, IP addresses configured in whitelistedIps can also access the database (even from the internet). To disable this behavior and enforce database isolation, set the forceDisablePublicIp property.

resources:
  myDatabase:
    type: relational-database
    properties:
      credentials:
        masterUserPassword: $Secret('dbPassword')
      engine:
        type: aurora-postgresql
        properties:
          version: '16.2'
          instances:
            - instanceSize: db.t3.medium
          port: 5432
      accessibility:
        accessibilityMode: scoping-workloads-in-vpc
  myFunction:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: path/to/my/function.ts
      joinDefaultVpc: true
      connectTo:
        - myDatabase

Whitelisted IPs only mode

  • The database can only be accessed from the IP addresses and CIDR blocks listed in the whitelistedIps list.

resources:
  myDatabase:
    type: relational-database
    properties:
      credentials:
        masterUserPassword: $Secret('dbPassword')
      engine:
        type: aurora-postgresql
        properties:
          version: '16.2'
          instances:
            - instanceSize: db.t3.medium
          port: 5432
      accessibility:
        accessibilityMode: whitelisted-ips-only
        whitelistedIps:
          - '147.25.33.12'

Referenceable parameters

The following parameters can be easily referenced using the $ResourceParam directive.

To learn more about referencing parameters, refer to referencing parameters.
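
For example, a database parameter can be passed to another resource as an environment variable. A minimal sketch (the environment property shape shown on the function is an assumption based on common Stacktape usage; adjust it to your resource type):

resources:
  myDatabase:
    type: relational-database
    properties:
      credentials:
        masterUserPassword: $Secret('dbPassword')
      engine:
        type: postgres
        properties:
          version: '16.2'
          primaryInstance:
            instanceSize: db.t3.micro
  myFunction:
    type: function
    properties:
      packaging:
        type: stacktape-lambda-buildpack
        properties:
          entryfilePath: path/to/my/function.ts
      environment:
        # injects the full connection string of myDatabase into the function
        - name: DB_CONNECTION_STRING
          value: $ResourceParam('myDatabase', 'connectionString')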

connectionString
  • Fully-formed connection string that can be used to access the primary instance. For Aurora databases, this is the connection string to the cluster endpoint, which can be used for both reads and writes. Includes host, port, username, password and dbName.

  • Usage: $ResourceParam('<<resource-name>>', 'connectionString')
jdbcConnectionString
  • Fully-formed connection string in JDBC form that can be used to access the primary instance. Includes host, port, username, password and dbName.

  • Usage: $ResourceParam('<<resource-name>>', 'jdbcConnectionString')
host
  • Hostname (address) of the primary instance, usable for both reads and writes. For Aurora databases, this is the hostname of the cluster endpoint, which can be used for both reads and writes.

  • Usage: $ResourceParam('<<resource-name>>', 'host')
port
  • Port of the database.

  • Usage: $ResourceParam('<<resource-name>>', 'port')
dbName
  • Name of the automatically created database (can be configured using the dbName property).

  • Usage: $ResourceParam('<<resource-name>>', 'dbName')
readerHost
  • Hostname (address) used for reads only (only available for aurora-postgresql and aurora-mysql engines). If you have multiple instances, it is advised to use readerHost for reads to offload the primary (read/write) host. Connections are automatically balanced among the available reader instances.

  • Usage: $ResourceParam('<<resource-name>>', 'readerHost')
readerConnectionString
  • Same as connectionString, but targets the reader host (only available for aurora-postgresql and aurora-mysql engines). Connections are automatically balanced among the available reader instances.

  • Usage: $ResourceParam('<<resource-name>>', 'readerConnectionString')
readerJdbcConnectionString
  • Same as readerConnectionString but in JDBC format (only available for aurora-postgresql and aurora-mysql engines).

  • Usage: $ResourceParam('<<resource-name>>', 'readerJdbcConnectionString')
readReplicaHosts
  • Comma-separated list of read replica hostnames (only available if read replicas are configured). Read replicas can only be used for read operations.

  • Usage: $ResourceParam('<<resource-name>>', 'readReplicaHosts')
readReplicaConnectionStrings
  • Comma-separated list of connection strings (URLs) used to connect to read replicas (only available when read replicas are configured). Read replicas can only be used for read operations.

  • Usage: $ResourceParam('<<resource-name>>', 'readReplicaConnectionStrings')
readReplicaJdbcConnectionStrings
  • Same as readReplicaConnectionStrings but in JDBC format (only available when read replicas are configured).

  • Usage: $ResourceParam('<<resource-name>>', 'readReplicaJdbcConnectionStrings')
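
When using an Aurora engine with multiple instances, a common pattern is to keep two connection pools: one for the cluster (read/write) endpoint and one for the reader endpoint. A minimal sketch, assuming the variables were injected via connectTo; the STP_MY_DATABASE_READER_CONNECTION_STRING name is an assumption derived from the readerConnectionString parameter (verify the exact injected name in your deployed stack), and the users table is only for illustration:

import { Pool } from 'pg';

// writer pool - uses the cluster (read/write) endpoint
const writerPool = new Pool({
  connectionString: process.env.STP_MY_DATABASE_CONNECTION_STRING
});

// reader pool - uses the load-balanced reader endpoint
const readerPool = new Pool({
  connectionString: process.env.STP_MY_DATABASE_READER_CONNECTION_STRING
});

// reads go to the reader endpoint, writes go to the cluster endpoint
export const getUserById = (id) => readerPool.query('SELECT * FROM users WHERE id = $1', [id]);
export const createUser = (name) => writerPool.query('INSERT INTO users (name) VALUES ($1)', [name]);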

Pricing

Pricing heavily depends on the engine used.


RDS engines:

  • Database server instance
    • Price depends on the instance size and region. To see exact prices, refer to the AWS pricing page.
  • Storage
    • $0.115 - $0.238 per GB-month
  • Backups:
    • For automated backups with default retention setting (where you never store more than 100% of your total database storage), there is no additional charge.
    • Additional backup storage is $0.095 per GB-month.

Aurora engines:

  • Database server instance
    • Price depends on the instance size and region. To see exact prices, refer to the AWS pricing page.
    • Price starts at $0.073 / hour.
  • Storage
    • $0.10 - $0.19 per GB-month
  • I/O Rate
    • $0.20 - $0.28 per 1 million read/write operations
  • Backups
    • For automated backups with default retention setting (where you never store more than 100% of your total database storage), there is no additional charge.
    • Additional backup storage is $0.021 - $0.037 per GB-month.
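
As a rough worked example (using the lowest prices listed above, so treat it only as an approximation): a single Aurora instance at $0.073 per hour running for a 730-hour month costs about $53.29. Adding 20 GB of storage at $0.10 per GB-month ($2.00) and 5 million I/O operations at $0.20 per million ($1.00) brings the total to roughly $56 per month, before backups and data transfer.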

Aurora serverless:

  • ACUs (Aurora Capacity Units)
    • Each capacity unit has ~2 GB of memory, ~1 virtual CPU and corresponding network capabilities.
    • 1 ACU costs $0.06 - $0.10 per hour (depending on the region).
    • Aurora Serverless databases can scale down to 0 ACUs (with no compute costs).
  • Storage, I/O Rate and Backups cost the same amount as Aurora non-serverless.

Data transfer charges (same for all engines):

  • IN transfer: free
  • OUT to VPC (subnet in the same Availability zone): free
  • OUT to VPC (subnet in different Availability zone): $0.01 per GB ($0.02, because you are paying on both sides)
  • OUT to Internet: first 1 GB free, then $0.09 - $0.15 per GB

FREE TIER (eligible for first 12 months)

  • 750 Hours per month of db.t2.micro database usage (applicable DB engines)
  • 20 GB of General Purpose (SSD) database storage
  • 20 GB of storage for database backups and DB Snapshots

API reference

RelationalDatabase API reference

  • type (Required)
  • properties.credentials (Required)
  • properties.engine (Required)
  • properties.accessibility
  • properties.deletionProtection
  • properties.automatedBackupRetentionDays (Default: 1)
  • properties.alarms
  • properties.disabledGlobalAlarms
  • properties.logging
  • overrides

RdsEngineProperties API reference (parent: RdsEngine)

  • primaryInstance (Required)
  • version (Required)
  • dbName
  • port
  • storage
  • readReplicas
  • disableAutoMinorVersionUpgrade

AuroraEngineProperties API reference

  • instances (Required)
  • version (Required)
  • dbName
  • port
  • disableAutoMinorVersionUpgrade

AuroraServerlessEngineProperties API reference

  • version
  • dbName
  • minCapacity (Default: 2)
  • maxCapacity (Default: 4)
  • pauseAfterSeconds
  • disableAutoMinorVersionUpgrade

PostgresLoggingOptions API reference

  • log_connections
  • log_disconnections
  • log_lock_waits
  • log_min_duration_statement (Default: 10000)
  • log_statement (Default: ddl)

MysqlLoggingOptions API reference

  • server_audit_events (Default: [QUERY_DDL])
  • long_query_time (Default: 10)

Need help? Ask a question on Discord or info@stacktape.com.