
TypeScript AWS Policies with @cdklib/aws-policy

· 5 min read
Koby Bass
Koby-One Kenoby

For more details and CDK libraries, check out the @cdklib project readme.

After working with AWS for a while, I've found myself writing the same IAM policy patterns over and over. Whether you're using CDK, Terraform, or just the AWS console, policy creation often involves copying JSON snippets and tweaking them for your specific resources.

I wanted a more TypeScript-friendly way to handle this common task, so I built @cdklib/aws-policy - a simple library that brings type safety to AWS IAM policies.

It's designed to work with any TypeScript project, whether you're using infrastructure as code tools or creating resources dynamically (tenant provisioning, etc).

Life As We Know It

If you've worked with AWS IAM policies in TypeScript, you're probably familiar with awkward patterns like these:

// Approach 1: JSON.stringify a raw object
const bucketPolicy = JSON.stringify({
  Version: "2012-10-17",
  Statement: [
    {
      Effect: "Allow",
      Action: ["s3:GetObject", "s3:ListBucket"],
      Resource: ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"],
    },
  ],
});

// Approach 2: Template literals
const bucketName = "app-assets";
const policyJson = `{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::${bucketName}",
        "arn:aws:s3:::${bucketName}/*"
      ]
    }
  ]
}`;

// Approach 3: CDK policies with duplicated statement wrappers
new iam.PolicyDocument({
  statements: [
    new iam.PolicyStatement({
      effect: iam.Effect.ALLOW,
      actions: ["s3:GetObject", "s3:ListBucket"],
      resources: [`arn:aws:s3:::${bucketName}`, `arn:aws:s3:::${bucketName}/*`],
    }),
  ],
});

These approaches have several drawbacks:

  • No TypeScript intellisense for action names or effect types
  • Duplication of Version and Statement wrapper boilerplate (that never changes)
  • Error-prone when you need to modify for multiple resources
  • Inconsistent approaches across your codebase

Life with @cdklib/aws-policy

At its core, @cdklib/aws-policy lets you:

  1. Create policies with TypeScript instead of JSON
  2. Get intellisense and type checking for your policy statements
  3. Build reusable policy templates with parameters
  4. Easily convert AWS examples into type-safe code

Let's look at how it works.

Basic Usage

First, install the package:

npm install @cdklib/aws-policy

Here's a simple example of creating a policy:

import { AwsPolicy } from "@cdklib/aws-policy";

// Create a policy with multiple statements
const bucketPolicy = AwsPolicy.from(
  {
    Effect: "Allow",
    Action: ["s3:GetObject", "s3:ListBucket"],
    Resource: ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"],
  },
  {
    Effect: "Deny",
    Action: "s3:DeleteObject",
    Resource: "arn:aws:s3:::my-bucket/*",
  }
);

// Get JSON output - the Version is automatically added
const policyJson = bucketPolicy.toJson();

This gives you the same JSON policy you'd write by hand, but with TypeScript's help along the way. If you try to use an invalid effect type or forget a required field, your editor will let you know immediately.
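
For reference, the output of toJson() looks roughly like this (illustrative - exact formatting may differ, but the Version and both statements are there):

// {
//   "Version": "2012-10-17",
//   "Statement": [
//     {
//       "Effect": "Allow",
//       "Action": ["s3:GetObject", "s3:ListBucket"],
//       "Resource": ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"]
//     },
//     {
//       "Effect": "Deny",
//       "Action": "s3:DeleteObject",
//       "Resource": "arn:aws:s3:::my-bucket/*"
//     }
//   ]
// }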

Importing AWS Examples

Many times you just want to copy an example from the AWS docs and use it in your code.

The library makes this extremely easy - just copy-paste the statements, and format-on-save will do the rest:

// Example straight from AWS docs:
// {
//   "Effect": "Allow",
//   "Action": "s3:ListBucket",
//   "Resource": "arn:aws:s3:::example_bucket"
// }

// Import into TypeScript
const policy = AwsPolicy.from({
  Effect: "Allow",
  Action: "s3:ListBucket",
  Resource: "arn:aws:s3:::example_bucket",
});

You can also import existing policy JSON from files or APIs:

const rawStatement = JSON.parse(fs.readFileSync("policy.json", "utf8"));
const importedPolicy = AwsPolicy.fromRaw(rawStatement);

Reusable Policy Templates

As you build more AWS resources, you'll find yourself creating similar policies with slight variations. For example, you might need S3 bucket policies with different bucket names. That's where prepared policies become useful:

import { AwsPreparedPolicy } from "@cdklib/aws-policy";

// Define a reusable policy template
const s3BucketPolicy = new AwsPreparedPolicy<{
  bucketName: string;
}>((params) => ({
  Effect: "Allow",
  Action: ["s3:GetObject", "s3:ListBucket"],
  Resource: [
    `arn:aws:s3:::${params.bucketName}`,
    `arn:aws:s3:::${params.bucketName}/*`,
  ],
}));

// Use it for different buckets
const userDataPolicy = s3BucketPolicy.fill({
  bucketName: "user-data",
});

const appAssetsPolicy = s3BucketPolicy.fill({
  bucketName: "app-assets",
});

// Templates with several parameters can also be filled progressively with .fillPartial()
// (illustrative - assumes a template that takes both bucketName and otherParam)
const partialPolicy = multiParamPolicy.fillPartial({ bucketName: "user-data" });
const fullPolicy = partialPolicy.fill({ otherParam: "value" });

This approach helps eliminate duplicate code while keeping your policies type-safe.

Integration with CdkConfig

If you're using the @cdklib/config library I mentioned in my previous post, you can create policies that use the CDK scope to access configuration:

import { AwsPreparedPolicy } from "@cdklib/aws-policy";
import { Construct } from "constructs";
import { awsConfig } from "./config/aws";

// Define a policy that includes scope as a parameter
const s3BucketPolicy = new AwsPreparedPolicy<{
  scope: Construct;
  bucketName: string;
}>(({ scope, bucketName }) => {
  // Get config values from scope
  const { accountId } = awsConfig.get(scope);

  return {
    Effect: "Allow",
    Action: ["s3:GetObject", "s3:ListBucket"],
    Resource: [`arn:aws:s3:::${bucketName}`, `arn:aws:s3:::${bucketName}/*`],
    Principal: {
      AWS: `arn:aws:iam::${accountId}:root`,
    },
  };
});

// Provide scope and parameters
const policy = s3BucketPolicy.fill({
  scope: myApp,
  bucketName: "app-assets",
});

Combining Policies

You can combine multiple policies together, for example granting S3 read access and Lambda invoke access.

The policy statements are combined - the library does not attempt to merge policies logically.

// Define individual policies
const s3ReadPolicy = new AwsPreparedPolicy<{ bucketName: string }>(
  (params) => ({
    Effect: "Allow",
    Action: ["s3:GetObject", "s3:ListBucket"],
    Resource: [
      `arn:aws:s3:::${params.bucketName}`,
      `arn:aws:s3:::${params.bucketName}/*`,
    ],
  })
);

const lambdaInvokePolicy = new AwsPreparedPolicy<{ functionName: string }>(
  (params) => ({
    Effect: "Allow",
    Action: "lambda:InvokeFunction",
    Resource: `arn:aws:lambda:*:*:function:${params.functionName}`,
  })
);

// Combine policies - parameters are combined
const combinedPolicy = AwsPreparedPolicy.combine(
  s3ReadPolicy,
  lambdaInvokePolicy
);

// Fill with all required parameters
const policy = combinedPolicy.fill({
  bucketName: "my-bucket",
  functionName: "my-function",
});
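
For reference, the resulting document simply contains both statements side by side in a single Statement array (illustrative output - no logical merging happens):

// {
//   "Version": "2012-10-17",
//   "Statement": [
//     {
//       "Effect": "Allow",
//       "Action": ["s3:GetObject", "s3:ListBucket"],
//       "Resource": ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"]
//     },
//     {
//       "Effect": "Allow",
//       "Action": "lambda:InvokeFunction",
//       "Resource": "arn:aws:lambda:*:*:function:my-function"
//     }
//   ]
// }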

Closing Thoughts

The @cdklib/aws-policy library is a small utility that makes working with AWS IAM policies a bit nicer in TypeScript projects.

The library is open source and available on GitHub, where you can find more examples and documentation. Feel free to use it, modify it, or build on it to fit your needs.

CDK Config with @cdklib/config

· 7 min read
Koby Bass
Koby-One Kenoby

Preface

Over the last 3 years I've started numerous IaC projects that utilized CDKs heavily -

  • AWS CDK for CloudFormation template generation
  • cdktf for infrastructure as code with Terraform
  • cdk8s with ArgoCD for Kubernetes

The main reason I like this stack so much is the type safety and easier learning curve it provides.

New people can easily navigate the project, with the best intellisense and type safety that TypeScript can provide.

While I really recommend these tools, they're still an emerging technology and can be hard to migrate to. If you're starting a new project, or a new company, I'd highly recommend you consider them.

I'll cover these topics more thoroughly in future posts, but for now I'll focus on the configuration library that I've been developing over the last few years.

note

I've met and talked to many DevOps engineers who are against using anything other than plain Terraform and Helm. I've converted some of them, but as they say -

Different strokes for different folks.

Configuration

Have you ever struggled with managing configuration across your dev, staging, and prod environments? You're not alone. While tools like Terragrunt (for Terraform) and Helm (for Kubernetes) handle this beautifully, we've been missing something similar in the CDK world.

That's why I built @cdklib/config - a simple, type-safe configuration library for CDK projects. Whether you're using AWS CDK, cdktf, or cdk8s, this tool can help bring some sanity to your infrastructure code.

The Configuration Headache in CDK

If you've worked on CDK projects with multiple environments, you've probably faced these issues:

  • Values hard-coded all over your code
  • Messy if/else statements for different environments
  • No type checking for your config values
  • Each team member handling config differently

These issues might not matter much in small projects, but they become real headaches as things grow.

Another issue you may encounter is having to copy a bunch of role ARNs into your Kubernetes / Helm charts.

How @cdklib/config Helps

This library brings a simple approach to configuration with several key benefits:

  • Type safety - get TypeScript help with your configuration (using Zod for validation)
  • Nested environments - organize configs like dev → dev/staging → dev/east/staging
  • Calculated values - compute values based on environment and other settings
  • Modular design - organize config logically for your needs
  • Easy CDK integration - works with the CDK context system you already use

Getting Started

Installing is simple:

npm install @cdklib/config

Customizing Environment IDs (Optional)

By default, an environment ID is any string. This is undesirable, since it's prone to typos and mistakes.

For better type safety, you can define your own environment IDs in a .d.ts file:

// cdklib-config.d.ts
declare module "@cdklib/config/types" {
  export type EnvId = "global" | "dev/staging" | "dev/qa" | "prod/us-central-1";
}

Then add this file to your tsconfig.json:

{
"include": ["...", "path/to/cdklib-config.d.ts"]
}

This gives you autocompletion for your environments when using set and get methods.

Basic Example

Here's how to configure AWS account info across environments:

import { CdkConfig } from "@cdklib/config";
import { z } from "zod";

// Define your configuration schema
const awsSchema = z.object({
  accountId: z.string(),
  region: z.string(),
  tags: z.record(z.string()).optional(),
});

// Create and configure
export const awsConfig = new CdkConfig(awsSchema)
  .setDefault({
    tags: { ManagedBy: "CDK" },
  })
  .set("dev", {
    accountId: "123456789012",
    region: "us-east-1",
    tags: { Environment: "development" },
  })
  .set("prod", {
    accountId: "987654321098",
    region: "us-west-2",
    tags: { Environment: "production" },
  });

// Get configuration for a specific environment
const devConfig = awsConfig.get("dev/staging");
console.log(devConfig);
// {
//   accountId: '123456789012',
//   region: 'us-east-1',
//   tags: { ManagedBy: 'CDK', Environment: 'development' }
// }

What I love about this approach:

  1. Your configuration is type-safe - TypeScript helps you include all the required fields
  2. It's validated at runtime - you get clear errors about missing values before you start applying your infrastructure (see the sketch after this list)
  3. It's all in one place - no more hunting through code for environment settings
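
As a quick illustration (a minimal sketch - it assumes validation happens when the config is read, with the error reported by Zod):

// Hypothetical mistake: "prod" is missing the required `region` field
const incompleteConfig = new CdkConfig(awsSchema).set("prod", {
  accountId: "987654321098",
});

// Reading the config triggers schema validation and fails with a clear error,
// long before any infrastructure is applied
incompleteConfig.get("prod"); // throws a validation error pointing at the missing `region`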

Building Nested Environments

As projects grow, you often need more detailed environment definitions. The library makes this simple:

export const awsConfig = new CdkConfig(awsSchema)
  .set("dev", {
    accountId: "123456789012",
    region: "us-east-1",
  })
  .set("dev/staging", {
    tags: { Environment: "staging" },
  });

const stagingConfig = awsConfig.get("dev/staging");
// {
//   accountId: '123456789012', // Inherited from 'dev'
//   region: 'us-east-1',       // Inherited from 'dev'
//   tags: { Environment: 'staging' }
// }

Child environments inherit values from their parent paths, which means less copy-pasting and more consistency.

Using Runtime Config for EKS Clusters

A common challenge is configuring resources consistently across environments. Here's a simple example for EKS clusters:

// Define EKS configuration
const eksSchema = z.object({
  clusterName: z.string().optional(),
  nodeSize: z.string(),
  minNodes: z.number(),
  maxNodes: z.number(),
});

export const eksConfig = new CdkConfig(eksSchema)
  // Set base config for all environments
  .setDefault({
    nodeSize: "m5.large",
    minNodes: 2,
    maxNodes: 10,
  })
  // Set environment-specific values
  .set("staging", {
    minNodes: 2,
    maxNodes: 5,
  })
  .set("prod", {
    nodeSize: "m5.xlarge",
    minNodes: 3,
    maxNodes: 20,
  })
  // Add computed values that use the environment ID
  .addRuntime((envId, config) => {
    // Generate cluster name from environment ID if not specified
    const clusterName = config.clusterName || `${envId}-eks`;

    // Get AWS account details from another config
    const aws = awsConfig.get(envId);

    return {
      // Set the cluster name if not explicitly provided
      clusterName,

      // Generate the cluster ARN
      clusterArn: `arn:aws:eks:${aws.region}:${aws.accountId}:cluster/${clusterName}`,
    };
  });

// Usage
const stagingEks = eksConfig.get("staging");
console.log(stagingEks.clusterName); // "staging-eks"
console.log(stagingEks.clusterArn); // "arn:aws:eks:us-east-1:123456789012:cluster/staging-eks"

const prodEks = eksConfig.get("prod");
console.log(prodEks.nodeSize); // "m5.xlarge" - overridden for prod
console.log(prodEks.minNodes); // 3 - overridden for prod
console.log(prodEks.clusterName); // "prod-eks"

This approach lets you derive values based on the environment ID while still allowing overrides when needed. The runtime function gives you the flexibility to generate consistent resource names and ARNs across your infrastructure.

note

This is a simple example; I'd recommend putting this logic in a function like getConsistentName to follow your naming conventions.

For example - dev-staging-eks or ProdUsCentral1Eks (for you AWS CDK lovers).
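
A minimal sketch of what such a helper might look like (getConsistentName and its naming scheme are purely illustrative):

// Hypothetical helper: derive a consistent resource name from the environment ID
const getConsistentName = (envId: string, resource: string): string =>
  `${envId.replace(/\//g, "-")}-${resource}`;

getConsistentName("dev/staging", "eks"); // "dev-staging-eks"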

Adding to Your CDK Projects

Integrating with CDK is straightforward:

import { App, Stack } from "aws-cdk-lib";
import { Construct } from "constructs";
import { getEnvId, initialContext } from "@cdklib/config";
import { awsConfig } from "./config/aws";
import { eksConfig } from "./config/eks";

// Initialize app with an environment context
const app = new App({
  context: initialContext("dev/staging"),
});

class MyInfraStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    // Get configuration for this environment
    const aws = awsConfig.get(this);
    const eks = eksConfig.get(this);

    // Use the configuration values in your constructs
    // (EksCluster is a placeholder for whichever cluster construct you use)
    new EksCluster(this, "EksCluster", {
      clusterName: eks.clusterName,
      nodeSize: eks.nodeSize,
      minNodes: eks.minNodes,
      maxNodes: eks.maxNodes,
      tags: aws.tags,
    });
  }
}

The getEnvId utility lets you access the environment ID from any construct, so you can get the right config wherever you need it.
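
For example (getEnvId is already imported in the snippet above):

// e.g. inside the MyInfraStack constructor
const envId = getEnvId(this); // "dev/staging"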

Tags, specifically, are better managed using Aspects, which I'll cover separately.

CDKTF Integration

If you're using Terraform CDK, @cdklib/config works just as well with it. You can use either an app-per-environment or stack-per-environment approach, depending on your workflow. The library integrates seamlessly with the CDKTF context system, similar to how it works with AWS CDK.
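
Here's a minimal sketch of the app-per-environment flavor (assuming the same initialContext and get APIs carry over to cdktf, as described above):

import { Construct } from "constructs";
import { App, TerraformStack } from "cdktf";
import { initialContext } from "@cdklib/config";
import { awsConfig } from "./config/aws";

// One cdktf app per environment
const app = new App({ context: initialContext("prod") });

class NetworkStack extends TerraformStack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    // Same lookup as with AWS CDK - config resolves from the construct tree
    const aws = awsConfig.get(this);
    // ... define providers and resources using aws.accountId, aws.region, etc.
  }
}

new NetworkStack(app, "network");
app.synth();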

Check out the readme for more examples of CDKTF integration.

Wrapping Up

Managing configuration is one of those things that's easy to overlook but can make a huge difference in your daily CDK work. With @cdklib/config, you get a simple, type-safe way to handle configuration across all your CDK projects.

The library is lightweight and can dramatically simplify how you manage environment settings in your infrastructure code.

For more examples and best practices, take a look at the project readme.

npm install @cdklib/config

One final note: this library is intentionally lightweight and simple.

The core functionality is just a few hundred lines of code, which means you can easily copy it into your project and modify it to fit your specific needs if you prefer.

AWS EKS Addon Policies Cheat Sheet

· 2 min read
Koby Bass
Koby-One Kenoby

To provide basic functionality for your Kubernetes cluster, we often need to configure addons (usually in the form of Helm charts).

Understanding which permissions are required for the Helm charts is crucial. Too many permissions, and you open yourself up to a breach if the service account is compromised. Too few, and you will run into runtime issues.

Documentation around the required permissions for each addon is often limited, and we have to scour Google to find the right permissions.

AWS EKS Blueprints

EKS Blueprints is an official repository managed by AWS for provisioning EKS clusters using AWS CDK.

The repository contains a lib/addons directory. Each addon defines the permissions it requires to function. The policies are nested inside some functions, but they're simple enough to understand and copy.

LB Controller Policy

For example, the AWS Load Balancer Controller addon defines its required IAM policy there.

We can further refine the search of all available policies by using GitHub search filters. The following filter finds all files whose name contains "polic" under the lib/addons folder:

repo:aws-quickstart/cdk-eks-blueprints path:/^lib\/addons\// path:*polic*.ts

How to use

warning

I do not recommend using the blueprints directly for serious environments

  • The code is overly complex, and a lot of it is auto-generated.
  • Addons should be managed using GitOps in production clusters, not AWS CDK.

If you're unsure about permissions for a specific addon, you can look at the EKS blueprints to help you figure them out. This is not a catch-all solution, but it may help you define fine-grained permissions for your addons.

You can take these permissions, copy them into your Terraform / Pulumi / AWS CDK code, and reference the blueprints as the source.
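
For instance, a statement lifted from the blueprints' Load Balancer Controller policy might end up in your AWS CDK code like this (only a tiny, illustrative subset of the real policy is shown):

import * as iam from "aws-cdk-lib/aws-iam";

// A small excerpt - the full policy in the blueprints is much longer
new iam.PolicyStatement({
  effect: iam.Effect.ALLOW,
  actions: [
    "elasticloadbalancing:DescribeLoadBalancers",
    "elasticloadbalancing:DescribeTargetGroups",
  ],
  resources: ["*"],
});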

  • Addon is a Helm chart that adds functionality to your cluster
  • EKS is the managed Kubernetes service on AWS
  • IAM is the AWS service for creating users, roles, and policies.

In-House Developer blog

· 2 min read
Koby Bass
Koby-One Kenoby
tip

The blog's GitHub repository is public - you're welcome to take a look :)

Here's a quick explanation of why I decided to start my own blog, and how I manage it.

I will keep updating it throughout the journey!

Motivation

If you're considering creating your own blog, I'd encourage you to do so!

It's quite simple to do, and has some real advantages -

  • You own your content, and can manage it in git.
  • Your domain will gain popularity through SEO.
    • The older your domain and the more links it gets, the higher the SEO score.
    • → The quicker you start, the more your domain will be indexed.
  • Helps attract clients as a contractor.

Stack

The stack I use for the blog is quite simple -

Kubernetes Multi-AZ Block Storage on AWS

· 5 min read
Koby Bass
Koby-One Kenoby

This blog post is about highly available block storage on AWS EKS.

The default block storage, EBS, does not replicate across availability zones. This has major implications during AZ downtime. We explore how Rook, a CNCF distributed storage solution, can be used to provide a more resilient infrastructure. We look at the pros and cons of managing a Rook cluster and some use cases for it.

Why do I need distributed block storage?

A common problem people run into is deploying a high-availability service on top of AWS EKS.

A few years ago, we deployed a Prometheus instance on AWS. Everything ran smoothly, until the availability zone went down.

Suddenly, the pod was unschedulable. We found out the hard way that EBS storage is restricted to the AZ it was created in.

If you want to build a highly available system, your storage needs to be highly available as well.

warning

While AWS EFS supports Multi-AZ storage, many services require block storage.

Using the wrong storage may lead to data corruption.

Rook - Distributed Storage

Rook allows us to create highly available, distributed storage on EKS.

Simply put, Rook:

  • Spins up a Ceph cluster across multiple AZs with provisioned capacity
  • Lets you define a Storage Class
  • PVCs (persistent volume claims) can then use this storage class for multi-AZ storage

Rook has been a CNCF graduate since 2020, so it's extremely stable.

Setup

note

Install Rook using a Helm chart managed with GitOps.

There's a great video by Red Hat Developer on how to set up Rook on AWS.

The video demonstrates a basic Rook setup, but I'd recommend installing Rook with the Rook Ceph Helm chart (and its CRDs) to provision the Ceph cluster, so it integrates better with your GitOps environment.

Considerations

  • Rook comes with a significant learning curve for understanding and using Ceph
  • Ceph requires a lot of resources
    • Multi-AZ requires multiple instances
    • Minimum recommended storage of 100GB per node (totaling 300GB for the cluster)

For these reasons, deploying Rook only makes sense when your storage requirements are high enough to justify the cost overhead.

Use Cases

Prometheus High Availability

tip

Without Rook, you can set up two Prometheus instances with the same configuration for an HA setup.

If you decide to go with this solution, you can use node selectors or affinity to place each instance in a different AZ.

Prometheus is a great candidate for Rook:

  • Prometheus scrapes metrics periodically (usually every 30 seconds)
    • having it down for two minutes is not a deal breaker.
  • Simplifies the HA setup for Prometheus
  • Removes the need to send duplicate metrics to third parties, which can get expensive.

Kafka

Kafka relies on block storage, which means the storage will not be available during AZ downtime.

While Kafka is distributed by design, an AZ outage still has performance implications for your cluster. Here's a quick breakdown of how Kafka manages partitions:

  • Topics are partitioned and saved to disk.
  • Partitions are replicated across brokers using a replication factor.
  • Each partition is assigned a partition leader that serves all reads and writes to the partition.
info

MSK relies on EBS behind the scenes, so it doesn't solve these issues.

Using distributed storage will allow you to avoid the following Kafka shortcomings:

Scenario 1 - (Likely) A new partition leader is elected

tip

Without distributed storage, you can mitigate this shortcoming by over-provisioning your cluster to 150% of its usage (depending on your number of AZs).

  • AZ goes down
  • A new partition leader is elected
  • All requests are routed to the new leader

While this looks OK on paper, there's an underlying problem: only 66% of your cluster is available!

The new partition leaders will have a lot more work to do, stalling your cluster throughput. Depending on the partition assignments, it may lead to significant lag in your system.

Scenario 2 - (Unfortunate) All of a partition's replicas are in the same AZ

warning

Without distributed storage, I'm unaware of any non-manual method to verify this doesn't happen.

If all replicas of a partition are in the same AZ, that partition's data becomes unavailable.

Whether you're using MSK or Strimzi, your data will be unavailable until the AZ comes back up.

This can happen when:

  • Replication factor is set to 1 (no additional replicas)
  • Multiple brokers are running in the same AZ, and the partition's replicas were all assigned to them.

With distributed storage in place, in both scenarios the broker can simply be rescheduled in another AZ.

Notable Mentions

  • Redis (in non-cluster mode) can also benefit from distributed storage.

Summary (TL;DR)

Overall, distributed storage can be very useful, and Rook provides an easy setup for it.

Higher costs and Ceph maintenance should be weighed against the disaster scenarios you're protecting against, to understand whether it's worth it.

The larger your cluster and storage requirements, the more cost-efficient distributed storage becomes.

For non-production clusters, distributed storage makes little sense.