Enabling Fargate ECS Exec via CDK

Configure command-line access to your AWS Fargate-deployed ECS containers via the CDK
02.03.2023

Assumptions

This tutorial assumes that:

  • you have some knowledge of:
      • AWS ECS with a Fargate deployment configuration
      • the AWS CDK
  • you have an AWS account
  • you have the AWS CLI installed, with credentials that have adequate permissions to deploy AWS resources such as S3, ECS and IAM

The Need

As every developer knows, using SSH to connect to instances or containers deployed in the cloud is bad practice. It widens your attack surface (open ports, long-lived credentials) and exposes you to a number of potential issues.

Nevertheless, it is sometimes necessary, especially during the development phase or in a staging environment, to access an instance or a container for a variety of reasons. For example:

  • to figure out why an IAM policy is not granting the permissions you expect
  • to check network connectivity or access to other services
  • to verify that a storage system is properly mounted

This also applies if you are deploying containers by using AWS ECS.

The Problem

If you use ECS with an EC2 deployment configuration, then achieving such a connection is relatively easy. You have two options:

  • the old-fashioned way: configure your EC2 instance to accept SSH connections, log into it, then use docker exec to access a container running on it and debug from there
  • the safer, modern way: use the Systems Manager (SSM) Session Manager feature to get into the instance, then use docker exec (see the sketch after this list)
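
As a rough sketch of that second option (the instance and container IDs below are placeholders, and the instance is assumed to run the SSM agent with an instance profile that allows Session Manager):

$ aws ssm start-session --target <EC2_INSTANCE_ID>   # open a shell on the container instance, no SSH port or key pair needed
$ docker ps                                           # find the container you are interested in
$ docker exec -it <CONTAINER_ID> /bin/sh              # get a shell inside the container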

Unfortunately, if you use ECS with Fargate this is not possible, since there are no instances to access.

The Solution

To allow access to ECS Fargate containers (and to ECS EC2-based containers), AWS provides the so-called ECS Exec feature. This AWS blog post provides all the details you need to implement such a solution via the CLI. However, it does not cover how you would implement it via infrastructure as code (CloudFormation, CDK).

The purpose of the next section is to demonstrate how to configure ECS Exec when deploying an ECS Fargate cluster with the CDK.

Configuration via CDK

The code mentioned in this article can be found on GitHub in this repository. The simplest way to try it is to clone the repository and deploy the sample ECS cluster, as shown below.
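
Deploying the sample is the usual CDK workflow. In the sketch below, the repository URL and directory are placeholders for the repository linked above, and a standard TypeScript CDK project using npm is assumed:

$ git clone <REPOSITORY_URL> && cd <REPOSITORY_DIRECTORY>
$ npm install          # install the project dependencies
$ npx cdk deploy       # deploy the sample ECS cluster to your AWS account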

We will start by creating a basic ECS Fargate cluster, with a single task and service deploying an Apache Server.

// CDK v2 imports assumed by the snippets in this article
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as elbv2 from 'aws-cdk-lib/aws-elasticloadbalancingv2';
import * as iam from 'aws-cdk-lib/aws-iam';
import * as kms from 'aws-cdk-lib/aws-kms';

// maxAzs of 99 effectively requests all availability zones available to the stack
const mainVpc = new ec2.Vpc(this, 'ECSVpc', {
  ipAddresses: ec2.IpAddresses.cidr('10.0.0.0/16'),
  maxAzs: 99
});

const ecsCluster = new ecs.Cluster(this, 'SimpleCluster', {
  vpc: mainVpc
});

const ecsTaskDefinition = new ecs.FargateTaskDefinition(this, 'SimpleTask', {
  cpu: 256,
  memoryLimitMiB: 512
});

// A single container running the Apache HTTP server image from Docker Hub
ecsTaskDefinition.addContainer('SimpleContainer', {
  image: ecs.ContainerImage.fromRegistry('httpd:2.4'),
  portMappings: [{ containerPort: 80 }],
  logging: new ecs.AwsLogDriver({ streamPrefix: 'SimpleLogging', mode: ecs.AwsLogDriverMode.NON_BLOCKING }),
});

const ecsService = new ecs.FargateService(this, 'SimpleService', {
  cluster: ecsCluster,
  taskDefinition: ecsTaskDefinition,
  desiredCount: 1,
  assignPublicIp: false
});

// An internet-facing load balancer exposing the Apache server on port 80
const loadBalancer = new elbv2.ApplicationLoadBalancer(this, 'SimpleLoadBalancer', {
  vpc: mainVpc,
  internetFacing: true,
});

const listener = loadBalancer.addListener('SimpleListener', { port: 80 });

ecsService.registerLoadBalancerTargets({
  containerName: 'SimpleContainer',
  containerPort: 80,
  newTargetGroupId: 'SimpleTargetGroup',
  listener: ecs.ListenerConfig.applicationListener(listener, {
    protocol: elbv2.ApplicationProtocol.HTTP
  })
});

Once we have that, adding the configuration required to allow connections via SSM sessions is pretty simple.

First, create a new KMS key (or use an existing one) and pass it to the ECS cluster. It will be used to encrypt the tunnel established for the secure SSM session.

const kmsEncryptionKey = new kms.Key(this, 'ECSClusterKey', {
  enableKeyRotation: true,
});

const ecsCluster = new ecs.Cluster(this, 'SimpleCluster', {
  ...
  executeCommandConfiguration: { kmsKey: kmsEncryptionKey }
});

Then add the task role permissions that enable the connection via SSM and allow the task to use the KMS key.

// Allow the task to open the SSM message channels used by ECS Exec
ecsTaskDefinition.addToTaskRolePolicy(
  new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ['ssmmessages:CreateControlChannel', 'ssmmessages:CreateDataChannel', 'ssmmessages:OpenControlChannel', 'ssmmessages:OpenDataChannel'],
    resources: ['*']
  }),
);

// Allow the task to decrypt the session traffic encrypted with the KMS key
ecsTaskDefinition.addToTaskRolePolicy(
  new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ['kms:Decrypt'],
    resources: [kmsEncryptionKey.keyArn]
  }),
);

And finally, explicitly enable the ExecuteCommand (or ECS Exec) feature on the Service.

const ecsService = new ecs.FargateService(this, 'SimpleService', {
  ...
  enableExecuteCommand: true,
});

Connection to the container via the AWS SSM CLI plugin

Once you have deployed this configuration via cdk deploy, all that remains is to open a session inside the container.

You will first need to install the Session Manager plugin for the AWS CLI, for example as shown below.
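
At the time of writing, AWS documents per-platform installers for the plugin. The commands below are examples for two common platforms; the download URL may change, so check the Session Manager plugin installation page for your platform:

# macOS, via Homebrew
$ brew install --cask session-manager-plugin

# Ubuntu / Debian (64-bit)
$ curl -o session-manager-plugin.deb "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.deb"
$ sudo dpkg -i session-manager-plugin.deb

# Verify the installation
$ session-manager-plugin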

You will also need your ECS cluster ARN and your task ARN, which you can obtain with the following commands.

$ aws ecs list-clusters                      # Copy the obtained ECS cluster ARN
$ aws ecs list-tasks --cluster <CLUSTER_ARN> # Copy the obtained task ARN

Then use the SSM plugin to open a session inside your container.

$ aws ecs execute-command --cluster <CLUSTER_ARN> --task <TASK_ARN> --container SimpleContainer --interactive --command "/bin/sh"

The Session Manager plugin was installed successfully. Use the AWS CLI to start a session.

Starting session with SessionId: ecs-execute-command-0da29166452f7f66d
This session is encrypted using AWS KMS.
# ls
bin  build  cgi-bin  conf  error  htdocs  icons  include  logs	modules
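
If the command fails instead (a TargetNotConnectedException is a common symptom), check that the running task has actually picked up the new configuration; tasks started before the change need to be replaced by a new deployment. Two checks that usually help, with query paths taken from the describe-tasks output:

$ aws ecs describe-tasks --cluster <CLUSTER_ARN> --tasks <TASK_ARN> \
    --query 'tasks[0].enableExecuteCommand'        # should return true
$ aws ecs describe-tasks --cluster <CLUSTER_ARN> --tasks <TASK_ARN> \
    --query 'tasks[0].containers[0].managedAgents' # the ExecuteCommandAgent should be RUNNING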

Conclusion

Hopefully this tutorial gave you all the information you need to configure ECS Exec via the CDK and establish a secure connection via SSM Session Manager. The number of changes required is minimal, and the result is a powerful debugging and administrative tool. And since it is configured via the CDK, you can easily replicate it or even make it part of your standard staging deployment setup.

If you found this article useful, do share it with your friends! And if you know of other AWS solutions that can be applied to this use case, let me know. I'd be happy to learn more about them.