Containers alone are not the be-all and end-all for DevOps. Corporate culture, teamwork and other factors play important roles too.
Containers by themselves do not enable the speed and scalability required for DevOps. A mature platform that developers can use to build business value requires many components working in concert.
Meanwhile, organizations need to balance the need to secure systems with the imperative to move faster. To make this happen, we need to focus on providing developers with components that are secure and ready to use.
DigiconAsia speaks to Jerome Walter, Field CISO, APJ, Pivotal, about the benefits of containers in the cloud era, and the best practices and pitfalls to watch out for when embarking on a container strategy.
What are the benefits that containers offer over virtualization?
Walter: Virtual Machines (VMs) triggered the advent of modern computing and ultimately, cloud technology. By providing a layer of abstraction above the hardware, VMs enabled the automation of computer provisioning tasks and advanced redundancy capabilities.
The ability to run several logical servers on the same physical device unlocked the consolidation of computing resources to reduce waste. Leading IT companies quickly jumped on the bandwagon to leverage the massive economies of scale enabled by this automation and consolidation. Thus began the endless quest to automate everything.
Virtual machines (VMs) are still the base layer of IT infrastructures today. However, the need for automation and consolidation has moved a notch higher, from the operating system to the application itself. As the “Accelerate State of DevOps” report from DevOps Research and Assessment (DORA) highlights, leading IT practitioners leverage Continuous Integration and Continuous Deployment (CI/CD) to reduce the effort of deploying applications, enabling fast iterations and increased reliability.
Containers are the cornerstone of this capability: they allow developers to ship lightweight self-sufficient application packages providing enough consistency and replicability for scalability, and ensuring separation for security.
With container orchestration, several applications are hosted on the same server and allocated dynamically, thus further consolidating the utilisation of resources (memory, CPU cycles, disk, network) compared to manually allocated VMs. Mature application and container platforms also automate networking and security controls, allowing developers to focus on business value rather than the full stack of infrastructure components, and provide a level of self-healing capability to increase reliability.
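The consolidation idea behind dynamic allocation can be illustrated with a minimal first-fit packing sketch. This is not any orchestrator's actual algorithm (real schedulers such as Kubernetes use much richer scoring and constraints); the host names and resource numbers here are purely illustrative.

```python
# Minimal sketch: first-fit placement of containers onto shared hosts
# by CPU and memory request. Illustrative only -- real orchestrators
# consider affinity, spreading, taints and live utilisation as well.

def first_fit(containers, hosts):
    """Assign each (name, cpu, mem) container to the first host with room."""
    placements = {}
    for name, cpu, mem in containers:
        for host in hosts:
            if host["cpu"] >= cpu and host["mem"] >= mem:
                host["cpu"] -= cpu   # reserve the requested resources
                host["mem"] -= mem
                placements[name] = host["name"]
                break
    return placements

hosts = [{"name": "node-1", "cpu": 4.0, "mem": 8.0},
         {"name": "node-2", "cpu": 4.0, "mem": 8.0}]
apps = [("api", 1.0, 2.0), ("worker", 2.0, 4.0), ("cache", 2.0, 3.0)]
print(first_fit(apps, hosts))  # api and worker share node-1; cache spills to node-2
```

Even this toy version shows why packing many workloads onto shared hosts beats one manually sized VM per application: the leftover capacity on each host stays available for the next container.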
The combination of container orchestration and CI/CD creates an immutable environment where no manual change is made: this reduces incidents and increases the auditability of the platform. Also, the full technology stack (from the operating system to runtime) is built from code and designed to be replaced at any time: containers have an average lifespan of 3.5 days.
The ephemeral nature of cloud-native applications inherently reduces the risk of the dreaded long and slow Advanced Persistent Threat (APT) attacks and radically facilitates incident response.
Overall, organizations adopting a platform-approach for their architecture are seeking improvements in speed of development, stability and scalability of the workloads, security and cost-efficiency.
Please share some best practices to keep containers secure and to get the best out of them.
Walter: An enterprise-grade container platform would provide out-of-the-box mitigations for the most common container security challenges:
- Hardened operating systems and containers
- Reduction of privileges in the container and the control plane
- A segmented registry, only accessible through APIs
- A mechanism to scan the images and report vulnerabilities
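To make the last point concrete, here is a minimal sketch of how a CI pipeline might gate an image on its scan results. The JSON report format and the `gate_image` helper are assumptions for illustration; real scanners each define their own schemas and severity taxonomies.

```python
import json

# Hypothetical, simplified scan-report schema for illustration.
SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate_image(report_json: str, max_severity: str = "medium") -> bool:
    """Return True if no reported vulnerability exceeds max_severity."""
    findings = json.loads(report_json)["vulnerabilities"]
    threshold = SEVERITY_ORDER[max_severity]
    return all(SEVERITY_ORDER[f["severity"]] <= threshold for f in findings)

sample = json.dumps({"vulnerabilities": [
    {"id": "CVE-2019-0001", "severity": "low"},
    {"id": "CVE-2019-0002", "severity": "critical"},
]})
print(gate_image(sample))  # the critical finding fails the gate
```

Wiring a check like this into the pipeline turns the vulnerability report from a document someone reads into a hard stop that blocks an unsafe image from ever reaching the registry.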
However, to really leverage the value of containers and cloud-native applications for security, some deeply ingrained traditional practices and cultural assumptions need to be rethought.
Traditional servers were usually built for the long term, with security patches applied during specific time windows, manual changes requiring review, and obstructive network defenses to compensate for the risk and the difficulty of detecting compromise. Regular breach reports remind us of the inefficacy of this approach: despite extensive security programs, the overwhelming majority of breaches are still due to unpatched software, leaked credentials or misconfiguration.
On the other hand, the ephemeral nature of the workloads typically running in containers and the extensive automation around them provide a fresh approach to security. Applications designed for continuous change enable the emergence of new security practices which leverage IT production tools rather than a separate toolset built around them.
We can see the following practices being deployed successfully among the cloud-native community:
- Repaving: With automation, servers and containers can be rebuilt from source regularly. This increases the resilience of the platform and wipes clean any potential snowflake configuration or malicious persistent threat. Some of the biggest banks in the world practice a systematic repave of their platforms on a weekly basis.
- Removing and rotating credentials: As mentioned earlier, service credentials left in the code remain an important source of breaches. Scanning the code for credentials and keys, and automating the generation of credentials will help significantly reduce the risk of leakage or unauthorized access. It also reduces the risk in case a malware steals the code from a developer’s desktop.
- Continuous assurance as code: As applications are changed continuously, traditional paper-based reviews are not efficient. The security teams of leading companies are developing their own programs to test and scan the configuration and find potential weaknesses, then working with the platform operators to remediate the risk.
- Continuous adversarial testing (Bug Bounties, Red teaming, Chaos engineering): While they can significantly improve resilience and security, microservices also increase the complexity of the overall architecture. Leading companies embrace this continuous state of change and focus on improving their velocity to detect vulnerabilities, weaknesses and attacks.
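The credential-scanning practice above can be sketched in a few lines. The patterns here are deliberately minimal illustrations; dedicated secret-scanning tools ship far larger rule sets plus entropy-based detection, and would normally run as a pre-commit hook or pipeline stage.

```python
import re

# Illustrative detection rules -- real tools cover many more credential types.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "password_assignment": re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),
}

def scan_source(text: str):
    """Return a list of (rule_name, matched_text) pairs found in the source."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

snippet = 'db_password = "hunter2"\nkey = "AKIAABCDEFGHIJKLMNOP"'
for rule, match in scan_source(snippet):
    print(rule, "->", match)
```

Running a scan like this on every commit, combined with automated credential generation and rotation, means a key that does slip into the code is caught early and is short-lived anyway.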
The core of the change however is cultural. Organizations embracing containers want to go faster, often because of a business imperative to become more competitive. Thus, while security professionals must understand the benefits and risks of container architecture, they should also be careful not to impede speed and focus instead on measuring and improving security outcomes with the tools used to deliver value.
The key metrics of DevOps highlighted by DORA have demonstrated that focusing on improving shared metrics pushes different teams to work on the underlying bottlenecks in their own processes. This is a fundamental lesson of Lean. Focusing the organization’s efforts on improving its capacity to find and repair vulnerabilities, repave servers more frequently, rotate credentials, and detect attacks could also create the shared understanding needed to close the divide between developers and security.
What are some tips to keep in mind when embarking on a container strategy?
Walter: First and foremost, do not underestimate the complexity of the architecture required to push applications rapidly.
Containers by themselves are not what delivers the speed of delivery seen in leading companies. Achieving it requires hundreds of components that must be finely glued together, including networking, load balancing, blue-green deployment, secrets management and hardening.
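Blue-green deployment, one of the components listed above, is a good example of glue that has to work exactly right. The sketch below shows only the core idea, two identical environments with an atomic traffic cut-over and instant rollback; the class and its health-check flag are illustrative, not any platform's actual API.

```python
# Minimal sketch of blue-green deployment: deploy to the idle environment,
# switch traffic only if it comes up healthy, keep the old one for rollback.

class BlueGreenRouter:
    def __init__(self, blue_version: str, green_version: str):
        self.environments = {"blue": blue_version, "green": green_version}
        self.live = "blue"  # environment currently receiving traffic

    @property
    def idle(self) -> str:
        return "green" if self.live == "blue" else "blue"

    def deploy(self, version: str, healthy: bool) -> str:
        """Stage a release on the idle side; cut over only on a healthy check."""
        self.environments[self.idle] = version
        if healthy:
            self.live = self.idle  # atomic switch; previous env untouched
        return self.environments[self.live]

router = BlueGreenRouter("v1", "v1")
print(router.deploy("v2", healthy=True))   # traffic now served by v2
print(router.deploy("v3", healthy=False))  # failed check: traffic stays on v2
```

Note what the sketch leaves out: in production this switch sits behind a load balancer or router, health checks are real probes, and database migrations need their own compatibility strategy, which is precisely why building all of this in-house is costly.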
Organizations that do not already have these capabilities should consider the cost of building and maintaining a custom-built platform, including the cost of the delays created when developers are not working on business value.
After all, Netflix recently announced that it was moving to Spring Cloud instead of its now-famous in-house tools, because developers are better used creating customer value.
Secondly, “Perfection is the enemy of good”. Focus on delivering security improvements quickly, then continue improving. It is better to fix 80% of your risk in a few weeks by focusing on high value items than spending years trying to be exhaustive.
Hunt down business-as-usual (BAU) tasks. Automate the security toil and free up your security specialists to work with developers and operators on shared pain points. Container platforms deliver better security outcomes by fixing some of the existing IT practices, but there are still plenty of shared pain points to remediate before CISOs can all sleep at night.
Finally, companies need to innovate. Do not let your guardrails impede the competitiveness of your company; that would be a disservice. But do not remove security for the sake of being “frictionless” either.