Back in March we announced Aqua MicroEnforcer, a new deployment technology that enables us to secure runtime workloads running on AWS Fargate and Azure Container Instances. Since then we’ve seen a lot of interest from customers who view these services not only as a way to deploy containers on demand for spillover capacity or ad hoc needs, but also as a way to accelerate deployment of development, testing, and production workloads.
Naturally, they are concerned about security, and they know that Aqua has the only solution that addresses their needs across Fargate as well as “regular” container workloads that run on nodes/hosts. Fargate “breaks” Aqua’s sidecar container deployment model because the customer has no visibility into, or administrative access to, the underlying VM instance running their containers. AWS completely abstracts this layer from the user, and automagically runs their containers somewhere in its vast virtual infrastructure.
When we examined how to secure container workloads that have no visible or accessible host, we considered several options:
- Relying on cloud provider APIs to allow us to deploy a sidecar container in the right namespace: This is a valid approach, but one that relies on third parties and makes it difficult to deploy uniformly across different clouds. It was also not available at the time (nor is it now, for the most part), and there’s no knowing if and when it will become available, and crucially what capabilities it would offer. The assumption was that each application container would need its own sidecar (unlike the host-based model, where one sidecar can monitor and control dozens of containers). Customers were also concerned that if every container deployed had an entourage of sidecars, this would become a very expensive endeavor.
- Injecting Aqua code into containers as they are instantiated: Embedding code into the container seemed like the right approach, but where and when do you inject it? This option seemed feasible, potentially using orchestrator-based automation to add our code as a container is instantiated. However, it presented several complications. It could have a performance impact at the most crucial point, container startup, which customers would find unacceptable. It also presents compatibility challenges across runtime engines (Docker, CRI-O, containerd, rkt) that the cloud provider might use now or in the future. And finally, it breaks image integrity and container immutability – the container you’re running is no longer identical to its originating image, and that’s a problem in an environment where immutability is not only cherished, but used as a security measure.
- Injecting Aqua code into the image during build: While this too ultimately changes the container, it does so at the source, which maintains the immutability principle. Furthermore, as a full-lifecycle solution we already automate security into CI builds, so adding a step to insert our code is already taken care of from a process standpoint. In recent months, we’ve learned from customers that they like this approach because it also gives them visibility into where those containers are running, even in non-production environments. One caveat is that if you don’t know in advance which images will run on Fargate vs. elsewhere, you have to inject the MicroEnforcer into all images. We solved this by making our solution “aware” of the presence of an Aqua Enforcer (the sidecar); if one exists, the MicroEnforcer is not activated. A rough sketch of what such a build step might look like follows this list.
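To make the build-time approach concrete, here is a minimal sketch of a CI step that appends a MicroEnforcer layer to an already-built image. The binary location, the /.aqua/ path, and the entrypoint wiring are illustrative assumptions for the sketch, not Aqua’s documented procedure; the blog post referenced below walks through the actual steps.

```python
#!/usr/bin/env python3
"""Illustrative CI step: add an Aqua MicroEnforcer layer to an existing image.

The names used here (binary path, /.aqua/ location, entrypoint wiring) are
assumptions for the sake of the sketch, not Aqua's documented interface.
"""
import pathlib
import shutil
import subprocess
import tempfile

BASE_IMAGE = "myapp:1.0"                 # image built earlier in the pipeline (hypothetical)
PROTECTED_IMAGE = "myapp:1.0-protected"  # image that will actually ship to Fargate
MICROENFORCER_BIN = "./microenforcer"    # MicroEnforcer binary obtained from the Aqua console

# The MicroEnforcer becomes the entrypoint so it starts first and then
# launches the original application command (the command path is hypothetical).
DOCKERFILE = f"""\
FROM {BASE_IMAGE}
COPY microenforcer /.aqua/microenforcer
ENTRYPOINT ["/.aqua/microenforcer"]
CMD ["/usr/local/bin/myapp"]
"""

def build_protected_image() -> None:
    # Build in a throwaway context so the CI workspace stays clean.
    with tempfile.TemporaryDirectory() as ctx:
        ctx_path = pathlib.Path(ctx)
        (ctx_path / "Dockerfile").write_text(DOCKERFILE)
        shutil.copy2(MICROENFORCER_BIN, ctx_path / "microenforcer")  # preserves the exec bit
        subprocess.run(["docker", "build", "-t", PROTECTED_IMAGE, str(ctx)], check=True)

if __name__ == "__main__":
    build_protected_image()
```

Because the insertion happens at build time, the image that eventually runs on Fargate is the same image that was scanned in CI, which is exactly what keeps the immutability principle intact.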
Protecting Workloads in Fargate
As part of our Aqua 3.0 release, Liz wrote a blog post describing in detail the process of embedding the MicroEnforcer. This time I’d like to focus on the runtime aspects of how this works on Fargate, and for good measure we threw in the AWS CloudWatch integration.
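As a generic illustration of the CloudWatch side, the sketch below registers a Fargate task definition whose container output (including anything the embedded MicroEnforcer writes to stdout/stderr) is routed to CloudWatch Logs via the awslogs driver. The account ID, role, image URI, and log group name are placeholders, and this is standard ECS/Fargate plumbing rather than the specific Aqua integration shown in the video.

```python
import boto3

REGION = "us-east-1"                                                          # placeholder
IMAGE = "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:1.0-protected"    # placeholder
EXECUTION_ROLE = "arn:aws:iam::123456789012:role/ecsTaskExecutionRole"        # placeholder
LOG_GROUP = "/ecs/myapp-protected"

logs = boto3.client("logs", region_name=REGION)
ecs = boto3.client("ecs", region_name=REGION)

# Make sure the CloudWatch Logs group exists before the task starts writing to it.
try:
    logs.create_log_group(logGroupName=LOG_GROUP)
except logs.exceptions.ResourceAlreadyExistsException:
    pass

# Register a Fargate task definition that sends container stdout/stderr to CloudWatch Logs.
task_def = ecs.register_task_definition(
    family="myapp-protected",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn=EXECUTION_ROLE,
    containerDefinitions=[
        {
            "name": "myapp",
            "image": IMAGE,
            "essential": True,
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": LOG_GROUP,
                    "awslogs-region": REGION,
                    "awslogs-stream-prefix": "myapp",
                },
            },
        }
    ],
)
print("Registered:", task_def["taskDefinition"]["taskDefinitionArn"])
```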
Watch this 4-minute video to see it in action: