We are Codebeez. We specialise in everything Python and help our clients build custom, scalable, robust, and maintainable solutions.
Introduction
If you are developing a Python application with the FastAPI framework, it will most likely run either as part of a microservices architecture on Kubernetes or as a standalone application inside a Docker container. Most people who do not deal with Docker and Kubernetes as their full-time job assume that, thanks to their roots in process isolation, they are safe by default. Unfortunately, that is not the case. There are many scenarios in which an overlooked configuration can lead to severe security compromises. Imagine, for example, that you have built and deployed your FastAPI application inside a Docker container, thinking it is securely isolated from the host system. If not configured correctly, that container could be used to attack the host, leveraging the way Docker maps user IDs (UIDs) and mounts the host filesystem.
Understanding Docker and UID Mapping
Docker is a powerful tool for creating isolated environments, known as containers, that can run applications independently from the host system. One of the key features that Docker uses to provide isolation is user namespaces, which allow the remapping of user IDs (UIDs) and group IDs (GIDs) inside a container to different UIDs and GIDs on the host system. This feature is critical for security, as it helps to limit the permissions that processes running inside the container have on the host system.
In a typical Linux environment, each process runs with a specific UID, which determines its permissions on the system. When a Docker container is run, by default, the processes inside the container run with the same UID and GID as specified in the Dockerfile or by the user. If not properly managed, this could result in the container's root user (UID 0) having root-level access to the host system, which is extremely dangerous from a security perspective.
To mitigate that, Docker provides a mechanism known as user namespace remapping (userns-remap), which allows you to map UIDs and GIDs inside the container to different UIDs and GIDs on the host system. For example, the root user inside the container (UID 0) could be mapped to a non-privileged user on the host system (e.g., UID 1000), effectively reducing the risk of privilege escalation.
Potential Security Risks with Improper UID Mapping
While UID remapping provides a layer of security, improper configuration or ignorance of how Docker handles UIDs can lead to significant security vulnerabilities. One of the critical aspects to understand is how UID and GID mapping interacts with filesystem permissions, particularly when you mount the host filesystem into a container.
When you mount a directory from the host into a container using the -v or --mount option, the files and directories within the mounted volume retain their original UIDs and GIDs from the host system. If the container is running with a user that has matching UIDs or GIDs, the containerized process can potentially gain access to sensitive files on the host.
For instance, consider a situation where you mount the /etc directory from the host into the container. If the container is running with root privileges and without proper UID remapping, it could modify critical system files like /etc/passwd or /etc/shadow, leading to a complete compromise of the host system.
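You can see the effect for yourself with a small experiment. The following sketch (assuming a default Docker installation without userns-remap and the public alpine image) writes a root-owned file onto the host through a bind mount:

mkdir -p /tmp/uid-demo
docker run --rm -v /tmp/uid-demo:/data alpine sh -c 'id && echo owned-by-container-root > /data/proof'
# Back on the host: the file is owned by root, because UID 0 inside the
# container is UID 0 on the host when no remapping is configured
ls -l /tmp/uid-demo/proof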
Attack Scenario: Exploiting UID Mapping and Host Filesystem Mounting
To illustrate the risks, let’s walk through a hypothetical attack scenario:
- Scenario Setup: You are running a FastAPI application in a Docker container. To allow the application to read configuration files from the host system, you mount a directory from the host (e.g., /host/config) into the container's /app/config directory.
- Weakness: The Docker container is configured to run as the root user (UID 0) inside the container, and no UID remapping is enabled. Additionally, the /host/config directory on the host is owned by a user with UID 1000, a non-privileged user on the host.
- Attack Execution: An attacker gains access to the container, either by exploiting a vulnerability in the FastAPI application or by other means (e.g., a misconfigured SSH service running inside the container). Once inside, the attacker realizes that they are running as root within the container and notices the mounted /app/config directory.
- Host Filesystem Manipulation: The attacker observes that the /app/config directory is writable and proceeds to create or modify files within it. Because the container's root user corresponds to the root user on the host (due to the lack of UID remapping), every file the attacker writes through the mount lands on the host owned by root.
- Privilege Escalation: If the mount (or any additional mount) exposes a home directory or system paths, the attacker could create or modify files like .bashrc, .ssh/authorized_keys, or even critical system files like /etc/passwd on the host. This could allow them to escalate privileges, create new root users, or execute arbitrary commands with root privileges on the host system.
Mitigating the Risks
To prevent such attacks, it’s crucial to adopt best practices when dealing with UID mapping and filesystem mounting in Docker:
- Enable User Namespace Remapping: By enabling user namespace remapping (userns-remap), you ensure that the root user inside the container is mapped to a non-privileged user on the host system. This reduces the risk of the container's root user affecting the host. You can enable user namespace remapping by adding the following configuration to Docker's daemon.json file and restarting the Docker daemon:

{ "userns-remap": "default" }

With "default", Docker sets up a dedicated remapped user (dockremap) on the host, so the container's root user no longer corresponds to root on the host, effectively reducing the risk of host compromise.
- Run Containers as Non-Root Users: Instead of running your application as root inside the container, configure it to run as a non-root user by specifying the USER directive in your Dockerfile:

FROM python:3.8-slim
RUN adduser --disabled-password --gecos '' appuser
USER appuser

This ensures that even if an attacker gains access to the container, they have limited permissions and cannot easily escalate privileges.
- Avoid Mounting Sensitive Directories: Be cautious when mounting host directories into containers, especially sensitive directories like /etc, /var, or /root. Only mount the directories that are absolutely necessary for the application to function. If you must mount a directory, consider mounting it read-only to prevent any modifications from the container:

docker run -v /host/config:/app/config:ro myapp
- Use Docker Volumes Instead of Bind Mounts: When possible, use Docker volumes instead of bind mounts. Volumes are created and managed by Docker in its own storage area rather than at an arbitrary host path, which reduces the risk of unintended host modifications, and they are generally easier to manage than bind mounts (see the short example after this list).
- Implement Proper Access Controls: Use Docker's security features such as AppArmor, SELinux, and seccomp to enforce strict access controls within the container. These tools limit what the containerized processes are allowed to do, further reducing the attack surface. For example, you can apply a seccomp profile to your container that restricts certain system calls:

docker run --security-opt seccomp=default.json myapp
- Regularly Update and Patch Docker and Containers: Ensure that both Docker and the images you use are regularly updated to the latest versions. Updates often include security patches for vulnerabilities that could otherwise be exploited.
- Monitor and Audit Container Activity: Implement logging and monitoring to track the activities of containers. Tools like Docker's built-in logging drivers, as well as third-party solutions like Fluentd or the ELK Stack, can help you monitor container logs and detect suspicious activity. Regular audits of container configurations and access logs help you identify and respond to potential security incidents promptly.
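As a minimal illustration of the volume approach mentioned above (the volume and image names are placeholders):

# Create a named volume managed by Docker and mount it instead of a host path
docker volume create app-config
docker run -d -v app-config:/app/config myapp
# Inspect where Docker stores the volume and what it contains
docker volume inspect app-config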
Docker provides a powerful and flexible environment for running applications like FastAPI in isolated containers. However, the assumption that Docker containers are secure by default can lead to serious security vulnerabilities, particularly when it comes to UID mapping and filesystem mounts.
Understanding how Docker maps UIDs and GIDs between the container and the host is crucial for maintaining a secure environment. Improper configuration, such as running containers as root or failing to enable user namespace remapping, can lead to scenarios where a compromised container can escalate privileges and attack the host system.
By following the best practices listed above, such as enabling user namespace remapping, running containers as non-root users, carefully managing filesystem mounts, leveraging Docker’s own security features, and keeping your environment up to date and well monitored, you can greatly reduce the risk of such attacks.
Hidden in layers
Another issue most developers overlook is that the information used in the building of a Docker image is not necessarily removed from the image, even after the build process is complete. As a Python developer working with FastAPI, you might assume that the intermediate files, secrets, or sensitive data used during the image build process are not present in the final image if they are deleted in the later stages of your Dockerfile. However, due to the layered nature of Docker images, this information might still be accessible, which could lead to security vulnerabilities or unintentional exposure of sensitive data.
Understanding the persistence of data within Docker Image Layers
Docker images are constructed in layers, with each layer representing the state of the filesystem after a particular command in the Dockerfile is executed. These layers are immutable and stored as a series of changes from the previous layer. When you build an image, Docker caches these layers to optimize the build process. However, this caching mechanism also means that any files created in a layer are persisted in that layer, even if they are deleted in a subsequent layer.
For instance, if you were to copy sensitive configuration files into your Docker image, install dependencies, and then delete those files in the same Dockerfile, the files would still exist in the layer where they were initially copied. This can be a serious security risk, especially if those layers are accessible to others, or if the image is distributed through public or private registries.
Example: Extracting Information from Docker Image Layers
Imagine that you are a Python developer, working on a FastAPI application, and you are building a Docker image that inadvertently retains sensitive data in its layers. We’ll walk through the process of building the image, identifying the issue, and extracting the retained information.
Step 1: Building the Docker Image
Suppose you have a FastAPI application wrapped in a Docker image built from the following Dockerfile:
FROM python:3.9-slim
# Step 1: Copy sensitive files
COPY config/secrets.env /app/secrets.env
# Step 2: Install dependencies
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
# Step 3: Delete sensitive files
RUN rm /app/secrets.env
# Step 4: Copy application code
COPY . /app
# Step 5: Set the working directory
WORKDIR /app
# Step 6: Start the application
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80"]
In this Dockerfile, a secrets.env file is copied to the /app directory in Step 1. This file might contain sensitive environment variables or API keys. In Step 3, the file is deleted to prevent it from being exposed in the final image. However, due to the way Docker layers work, this file is still present in the layer created by Step 1, even though it was deleted in Step 3.
Step 2: Inspecting the Layers with docker history
To confirm that the sensitive information is still present, you can inspect the layers of the built image using the docker history command:
docker history myfastapiapp:latest
This command will show a list of all the layers in the image, along with the commands that created them. Each layer corresponds to a specific instruction in the Dockerfile.
IMAGE CREATED CREATED BY SIZE COMMENT
<image_id> 2 minutes ago /bin/sh -c #(nop) CMD ["uvicorn" "main:app... 0B
<image_id> 2 minutes ago /bin/sh -c #(nop) WORKDIR /app 0B
<image_id> 2 minutes ago /bin/sh -c #(nop) COPY dir:c0fdcfe94dd23dbf... 10kB
<image_id> 2 minutes ago /bin/sh -c rm /app/secrets.env 0B
<image_id> 3 minutes ago /bin/sh -c pip install -r requirements.txt 20MB
<image_id> 5 minutes ago /bin/sh -c #(nop) COPY file:acb12345678901... 5kB
<image_id> 5 minutes ago /bin/sh -c #(nop) ENV DEBIAN_FRONTEND=non... 0B
<image_id> 5 minutes ago /bin/sh -c #(nop) ENV PYTHON_VERSION=3.9.... 0B
In this output, you can see both the COPY and rm commands from the Dockerfile. The COPY command in Step 1 created a layer that contains the secrets.env file, and that layer is still part of the image, even though the file was deleted in a subsequent layer.
Step 3: Extracting the Sensitive Information
To extract the contents of the Docker image layers, you can use the docker save command, which saves an image as a tarball, and then manually inspect the files within the layers.
docker save -o myfastapiapp.tar myfastapiapp:latest
After saving the image to a tarball, you can extract it:
mkdir extracted_image
tar -xf myfastapiapp.tar -C extracted_image
Within the extracted directory, you'll find metadata files plus one folder per layer; the exact layout depends on your Docker version, but each layer's filesystem changes are stored as a tar archive (typically layer.tar) inside its folder. You can search the extracted content for the secrets.env file, either directly or inside the layer archives:

find extracted_image -name "secrets.env"

The file turns up in the layer created by Step 1, which confirms that the sensitive information is still present in an image layer, even though it was deleted in the final image state.
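If the file does not show up directly because the layer contents are packed as layer.tar archives, a small loop like the following (a sketch, assuming the classic docker save layout) finds and prints it:

# Each layer in the saved image is itself a tar archive; search every
# layer archive for the deleted file and print its contents when found
for layer in extracted_image/*/layer.tar; do
  if tar -tf "$layer" | grep -q "app/secrets.env"; then
    echo "secrets.env found in layer: $layer"
    tar -xOf "$layer" app/secrets.env   # -O extracts the file to stdout
  fi
done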
Best Practices for Preventing Information Leakage from Intermediate Layers
Given the potential risks of retaining sensitive information in Docker image layers, it’s crucial to adopt the following best practices to prevent such issues.
1. Use Multi-Stage Builds
One of the most effective ways to prevent information leakage is to use multi-stage builds. In a multi-stage build, you can copy sensitive files or build dependencies in one stage, and then only copy the necessary artifacts into the final image. This ensures that the intermediate files are not included in the final image.
Here’s an example of how you can modify the previous Dockerfile to use a multi-stage build:
# Stage 1: Build dependencies
FROM python:3.9-slim AS builder
# Copy and install dependencies
COPY requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r /app/requirements.txt
# Stage 2: Final image
FROM python:3.9-slim
# Copy dependencies from the builder stage
COPY --from=builder /usr/local/lib/python3.9/site-packages /usr/local/lib/python3.9/site-packages
# Copy application code
COPY . /app
# Set the working directory
WORKDIR /app
# Start the application
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80"]
In this Dockerfile, the dependencies are installed in the builder stage, and only the installed packages are copied to the final image. None of the builder stage's intermediate files end up in the final image. Keep in mind, however, that the final stage still runs COPY . /app, so anything present in the build context (including a stray secrets.env) would be copied in; keep secrets out of the build context or exclude them with a .dockerignore file, as sketched below.
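A minimal .dockerignore along these lines (the entries are examples; adjust them to your project) keeps the secrets file and other local clutter out of the build context entirely:

# .dockerignore
config/secrets.env
.env
.git
__pycache__/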
2. Minimize the Use of Secrets in Dockerfiles
Another important practice is to minimize the use of secrets in Dockerfiles. If possible, avoid copying sensitive files directly into the image. Instead, consider using environment variables, Docker secrets, or external configuration management tools like HashiCorp Vault to manage sensitive information.
For example, instead of copying a secrets.env file, you could use environment variables that are injected at runtime:
FROM python:3.9-slim
# Install dependencies
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
# Copy application code
COPY . /app
# Set the working directory
WORKDIR /app
# Note: SECRET_KEY is deliberately not declared here; it is supplied at
# runtime, so it never becomes part of an image layer
# Start the application
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80"]
In this scenario, the SECRET_KEY environment variable is passed to the container at runtime rather than being baked into the image, which removes the risk of the sensitive value ending up in the image layers.
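At runtime the value can then be injected when the container is started; for instance (the image name and secret source are assumptions):

# Read the secret from a local file and pass it as an environment variable
docker run -d -p 80:80 -e SECRET_KEY="$(cat ./secret_key.txt)" myfastapiapp:latest

In Kubernetes, the same effect is achieved by referencing a Secret via secretKeyRef in the pod specification, as shown later in this post.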
3. Regularly Scan Images for Vulnerabilities
It’s also essential to regularly scan your Docker images for vulnerabilities, including the presence of sensitive files or credentials. Tools like Docker’s built-in docker scan command (replaced by docker scout in newer Docker releases), as well as third-party tools like Trivy, can help you identify and remediate potential security issues in your images.
docker scan myfastapiapp:latest
This command will scan the image for known vulnerabilities and provide a report. It’s a good practice to include such scans as part of your CI/CD pipeline to catch issues before they make it to production.
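With Trivy, a comparable scan looks like this (assuming Trivy is installed locally):

trivy image myfastapiapp:latest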
4. Clean Up After Each Layer
Although it’s not always foolproof, cleaning up sensitive files immediately after they are used within the same Dockerfile instruction can reduce the risk of leaving sensitive data in the image layers. However, be aware that this method is not as effective as using multi-stage builds, as it still leaves room for error.
FROM python:3.9-slim
# Copy and install dependencies, then clean up in the same instruction
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt && rm /app/requirements.txt
# Note: the COPY layer above still contains requirements.txt; for truly
# sensitive files, create and delete them within a single RUN instruction
# or use a multi-stage build instead
RBAC
The next subject that often gets overlooked is improperly configured Role-Based Access Control (RBAC) for your Python application. Imagine this scenario: you’ve developed a FastAPI application that handles various user roles, such as administrators, regular users, and guest accounts. Your application is deployed on a Kubernetes cluster, and you assume that your RBAC configuration ensures that each user or service has the appropriate permissions. However, what if a minor oversight in your RBAC settings allowed a regular user to gain access to administrator-level functions? Or what if a misconfigured Kubernetes RBAC policy inadvertently granted broad access to sensitive resources? Such mistakes could lead to serious security breaches, data leaks, or even total system compromise.
Understanding RBAC in Application and Infrastructure
Role-Based Access Control (RBAC) is a critical security feature that allows the administrator to control who can access specific resources in the application and infrastructure. By defining roles and assigning permissions to individual roles, you ensure that only authorized users can perform certain actions. RBAC can be applied at multiple levels: within the application, at the database level, and across your Kubernetes cluster.
In the context of a FastAPI application, RBAC is typically used to manage user access to different API endpoints. For example, an administrator might have full access to all endpoints, while a regular user might only be able to access their own data. In Kubernetes, RBAC is used to control access to the cluster’s resources, such as pods, services, and secrets.
The Risks of Improperly Configured RBAC
Improper RBAC configuration can lead to various security issues, including:
- Privilege Escalation: A user might gain more privileges than intended, allowing them to perform actions that should be restricted. For example, a regular user might gain access to administrative endpoints.
- Unauthorized Access: Users or services might access resources they shouldn’t have access to. For instance, a service account might be able to access secrets or configmaps containing sensitive data.
- Data Leakage: Misconfigured RBAC can lead to unauthorized users viewing or modifying sensitive data, leading to data breaches.
- Compliance Violations: Many industries require strict control over data access to comply with regulations like GDPR or HIPAA. Improper RBAC could lead to non-compliance and legal penalties.
Example: Misconfigured RBAC in a FastAPI Application
Let’s consider an example where RBAC is improperly configured in a FastAPI application, leading to potential security issues.
Step 1: Defining Roles and Permissions
Imagine you have developed a FastAPI application that provides API endpoints for managing a system of records. You have defined three roles: admin, manager, and employee.
- admin: Has full access to all records and can perform any operation, including creating, reading, updating, and deleting records.
- manager: Can view and update records for employees within their department but cannot delete records or access system-wide settings.
- employee: Can only view their own records and cannot modify them.
The RBAC configuration might be implemented as follows:
roles_permissions = {
    "admin": ["read_all", "write_all", "delete_all", "manage_settings"],
    "manager": ["read_department", "write_department"],
    "employee": ["read_own"]
}
Step 2: Implementing RBAC in FastAPI
You implement a dependency in FastAPI to check roles and permissions:
from fastapi import FastAPI, Depends, HTTPException, Security
from fastapi.security import OAuth2PasswordBearer
app = FastAPI()
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")
def get_current_user(token: str = Depends(oauth2_scheme)):
    # Decode the JWT token and return the user object
    user = decode_token(token)
    return user

def check_permissions(required_permissions: list):
    def permission_checker(current_user: dict = Depends(get_current_user)):
        user_permissions = roles_permissions.get(current_user["role"], [])
        if not any(permission in user_permissions for permission in required_permissions):
            raise HTTPException(status_code=403, detail="Forbidden")
        return current_user
    return permission_checker

@app.get("/records", dependencies=[Security(check_permissions(["read_all"]))])
def read_records():
    return {"message": "Reading all records"}

@app.post("/records", dependencies=[Security(check_permissions(["write_all"]))])
def write_record():
    return {"message": "Writing a record"}

@app.delete("/records", dependencies=[Security(check_permissions(["delete_all"]))])
def delete_record():
    return {"message": "Deleting a record"}
Step 3: Misconfiguration Leading to Security Issues
Now, let’s explore what could go wrong if RBAC is misconfigured. Suppose that, during the deployment of your FastAPI application, you accidentally assign the manager role the delete_all permission, assuming it was necessary for their role. This misconfiguration means that any user with the manager role now has the ability to delete records across the entire system, not just within their department.
Additionally, if you incorrectly configure the employee role with the read_all permission instead of read_own, employees would be able to view records of other employees, leading to a breach of privacy.
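Spelled out as code, the misconfiguration described above would look something like this:

# Misconfigured mapping: "manager" accidentally receives delete_all,
# and "employee" receives read_all instead of read_own
roles_permissions = {
    "admin": ["read_all", "write_all", "delete_all", "manage_settings"],
    "manager": ["read_department", "write_department", "delete_all"],  # too broad
    "employee": ["read_all"],  # should have been read_own
}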
Understanding Kubernetes RBAC and Its Importance
When deploying the FastAPI application in a Kubernetes environment, RBAC becomes even more critical, as Kubernetes uses RBAC to control access to the API server, which governs the entire cluster. Kubernetes RBAC policies determine which users or service accounts can perform actions such as creating or modifying pods, accessing secrets, or managing network policies.
In Kubernetes, RBAC is configured using roles, role bindings, cluster roles, and cluster role bindings:
- Role: Defines permissions for resources within a specific namespace.
- ClusterRole: Defines permissions for resources across the entire cluster.
- RoleBinding: Binds a role to a user or group within a specific namespace.
- ClusterRoleBinding: Binds a cluster role to a user or group across the entire cluster.
Example of Misconfigured Kubernetes RBAC
Let’s consider a Kubernetes RBAC configuration where a service account used by the FastAPI application is granted excessive privileges.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fastapi-admin
rules:
- apiGroups: [""]
  resources: ["pods", "services", "secrets", "configmaps"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
In this example, the fastapi-admin ClusterRole is granted full access to critical resources like pods, services, secrets, and configmaps across the entire cluster. If your FastAPI application is compromised, the attacker could use the application’s service account to gain full control of these resources, potentially leading to cluster-wide breaches.
A more secure approach would involve creating a Role with more limited permissions within a specific namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-namespace
  name: fastapi-role
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch"]
This Role limits access to pods and services within the my-namespace namespace, reducing the risk of a compromised service account leading to a cluster-wide breach.
Best Practices for Configuring RBAC
To prevent the risks associated with improper RBAC configuration, it’s essential to follow best practices when defining roles and permissions in both your FastAPI application and Kubernetes environment.
1. Principle of Least Privilege
The principle of least privilege dictates that users, roles, and services should have the minimum permissions necessary to perform their tasks. This reduces the risk of accidental or malicious actions that could lead to security breaches.
- Application RBAC: Ensure that each role is only granted the permissions necessary for its functions. For example, a "manager" role should only have access to the data and actions needed for managing their department, not the entire system.
- Kubernetes RBAC: Limit the permissions of service accounts and users to the specific actions they need to perform within the cluster. Avoid granting broad permissions that could be exploited if the account is compromised.
2. Audit and Review Roles and Permissions
Roles and permissions should not be static. Regularly audit and review the RBAC configurations in your FastAPI application and Kubernetes environment to ensure they align with current security requirements.
- Application RBAC: Periodically review the roles and permissions defined in your application to ensure they are still relevant and that no users have more access than necessary.
- Kubernetes RBAC: Use tools like kubectl auth can-i to test and verify what actions a particular role or user can perform, as shown in the example after this list. Additionally, Kubernetes audit logs can help track and analyze access patterns to detect potential misconfigurations or unauthorized access.
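For instance, you can impersonate the application's service account and check whether it can perform actions it should not be able to (the names and namespace are assumptions matching the earlier examples):

kubectl auth can-i get secrets --as=system:serviceaccount:my-namespace:fastapi-service-account -n my-namespace
kubectl auth can-i delete pods --as=system:serviceaccount:my-namespace:fastapi-service-account -n my-namespace

A "yes" answer to either of these should prompt a review of the role bindings involved.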
3. Implement Attribute-Based Access Control (ABAC)
While RBAC is widely used, it may not always provide the granularity needed for complex access control scenarios. Attribute-Based Access Control (ABAC) can be implemented as an additional layer of security. ABAC allows access decisions to be based on attributes such as user roles, resource types, and environmental factors.
In FastAPI, you can implement ABAC by adding custom logic in your access control checks, considering factors like user attributes, the context of the request, and more.
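As a rough illustration of what such a check might look like inside a FastAPI dependency (the user and record attributes are hypothetical):

def can_access_record(user: dict, record: dict) -> bool:
    # Hypothetical attribute-based check: admins see everything, managers only
    # records in their own department, everyone else only their own records
    if user["role"] == "admin":
        return True
    if user["role"] == "manager":
        return record["department"] == user["department"]
    return record["owner_id"] == user["id"]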
4. Use Namespaced Roles When Possible
In Kubernetes, prefer using namespaced roles (Role and RoleBinding) over cluster-wide roles (ClusterRole and ClusterRoleBinding) whenever possible. Namespaced roles limit the scope of permissions to a specific namespace, reducing the impact of a compromised role or service account.
Default network policies
The next critical aspect that is often overlooked in Kubernetes security is the network policy configuration, or more accurately, the lack thereof. Many default Kubernetes network settings do not include restrictive network policies, leading to a situation where pods within a cluster can communicate freely with each other. This unrestricted communication can pose a significant security risk, particularly for a FastAPI application, where an attacker could exploit these open network channels to move laterally within the cluster, access sensitive data, or disrupt services.
Understanding Kubernetes Network Policies
Kubernetes network policies are essentially firewall rules that control the flow of traffic between pods and between pods and other network endpoints. These policies allow you to specify which pods are allowed to communicate with each other and under what circumstances. Without network policies, Kubernetes allows unrestricted communication between all pods within a cluster, which can be dangerous in a production environment.
In a typical deployment, you might have several services running in different pods within your Kubernetes cluster. For example, your FastAPI application might be communicating with a backend database, a caching layer, and other microservices. If you don’t configure network policies, every pod in the cluster, including those that should not have access, could potentially connect to any other pod, leading to a broad attack surface.
The Risks of Unrestricted Pod Communication
When network policies are not properly configured, it creates several security vulnerabilities:
- Lateral Movement: If an attacker gains access to one pod, they can move laterally within the cluster, probing and accessing other services that they should not be able to reach. For example, if your FastAPI pod is compromised, the attacker could access other critical services like your database or internal APIs.
- Data Exfiltration: Without network policies, sensitive data can be copied out of the cluster. An attacker could send data from one compromised pod to another, or even outside the cluster, without any restrictions.
- Denial of Service (DoS): An attacker could flood another service with traffic, causing a denial of service. Without network policies to limit which pods can communicate with each service, it’s difficult to prevent this type of attack.
- Service Disruption: If an attacker can send malicious traffic between pods, they could disrupt services, leading to application downtime or inconsistent behavior.
Example: Exploiting Lack of Network Policies in a FastAPI Application
Let’s consider a scenario where a FastAPI application is deployed on a Kubernetes cluster without any network policies. We’ll walk through how an attacker can exploit this situation to compromise the entire cluster.
Step 1: Initial Compromise
Imagine your FastAPI application has a vulnerability in one of its API endpoints that allows an attacker to execute arbitrary code within the pod. The attacker exploits this vulnerability and gains a foothold within the FastAPI pod. At this point, the attacker has access to the filesystem, environment variables, and network within the pod.
Step 2: Scanning for Other Services
With access to the FastAPI pod, the attacker can now start scanning the network to discover other services running within the cluster. Since there are no network policies in place, the attacker can freely probe other pods, looking for open ports and vulnerable services.
For instance, the attacker might find that there is a PostgreSQL database running in another pod, accessible at its default port (5432). The attacker could then attempt to connect to this database using default or weak credentials.
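From inside the compromised pod, that kind of probing can be as simple as the following sketch (assuming a typical pod CIDR such as 10.244.0.0/24 and that nc is available in the container image):

# Sweep the pod network for anything answering on the PostgreSQL port
for ip in 10.244.0.{1..254}; do
  nc -z -w1 "$ip" 5432 2>/dev/null && echo "open 5432 on $ip"
done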
Step 3: Lateral Movement and Data Access
After discovering the database pod, the attacker attempts to connect to it. If they succeed, thanks to the lack of network segmentation, they can query the database and extract sensitive information such as user data, application logs, or other critical data.
The attacker might also deploy additional tools within the compromised FastAPI pod to further explore the network, looking for other critical services, such as internal APIs, message queues, or even management interfaces that might be exposed.
Step 4: Escalation and Persistent Access
With unrestricted access, the attacker can move laterally to other pods, compromising additional services and expanding their control over the cluster. They might deploy a backdoor to maintain persistent access, allowing them to re-enter the environment even if the initial vulnerability is patched.
Moreover, the attacker could use the compromised services to stage a more sophisticated attack, such as escalating privileges to gain access to the Kubernetes API server, or disrupting services by flooding key components with malicious traffic.
Best Practices for Configuring Network Policies
To prevent such attacks, it’s crucial to implement network policies that restrict communication between pods to only what is necessary. Here are some best practices for configuring network policies in a Kubernetes environment running a FastAPI application:
1. Implement the Principle of Least Privilege
Just as with RBAC, you should apply the principle of least privilege to network communication. Only allow the minimal necessary communication between pods. For example, if your FastAPI application only needs to communicate with a database and a caching service, the network policy should explicitly allow this traffic and block all others.
Here’s an example of a simple network policy that restricts traffic to only allow communication between the FastAPI pod and the PostgreSQL database pod:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-fastapi-db
  namespace: mynamespace
spec:
  podSelector:
    matchLabels:
      app: postgres
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: fastapi
    ports:
    - protocol: TCP
      port: 5432
This policy is applied to the PostgreSQL pods and ensures that they only accept traffic from the FastAPI pods, and only on port 5432. All other ingress traffic to the database is denied.
2. Default to Deny All Traffic
A good practice is to create a default deny-all network policy, which blocks all traffic by default. You can then create specific network policies to allow only the necessary traffic between your services. This approach ensures that any services not explicitly permitted to communicate are isolated from each other.
Here’s an example of a default deny policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: mynamespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
This policy blocks all ingress and egress traffic for all pods in the mynamespace namespace unless explicitly allowed by other network policies.
3. Use Namespace Isolation
In Kubernetes, namespaces provide a way to isolate resources within the same cluster. By implementing network policies that restrict traffic between namespaces, you can further segment your services and reduce the risk of lateral movement within the cluster.
For example, you might have separate namespaces for frontend, backend, and database services. You can configure network policies to restrict communication between these namespaces, ensuring that only the necessary traffic is allowed.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-cross-namespace
  namespace: frontend
spec:
  podSelector:
    matchLabels:
      app: frontend-app
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: backend
    ports:
    - protocol: TCP
      port: 80
In this example, pods in the frontend namespace are only allowed to receive traffic from the backend namespace, and only on port 80.
4. Regularly Audit Network Policies
As your application evolves, so too will your network requirements. Regularly audit your network policies to ensure they are still appropriate for your current architecture. Remove any outdated policies that are no longer needed, and update existing policies to reflect any changes in your network topology.
Kubernetes provides tools to help you audit and monitor network policies. For example, you can use kubectl commands to view all network policies in a namespace:
kubectl get networkpolicies -n mynamespace
You can also inspect individual policies to verify their configurations:
kubectl describe networkpolicy <policy-name> -n mynamespace
5. Leverage Network Policy Logging and Monitoring
Enable logging and monitoring for your network policies to track traffic flows and detect any anomalies. Tools like Cilium, Calico, and Weave Net provide advanced network policy capabilities, including logging and monitoring.
By analyzing network logs, you can identify unexpected traffic patterns, potential security incidents, and areas where your policies may need adjustment.
Conclusion
In a Kubernetes environment, the absence of properly configured network policies can leave your FastAPI application and the entire cluster vulnerable to attacks. Without these policies, an attacker who gains access to one pod can potentially move laterally within the cluster, access sensitive data, disrupt services, and escalate privileges.
By implementing restrictive network policies based on the principle of least privilege, you can significantly reduce the attack surface of your application. Network policies should be an integral part of your Kubernetes security strategy, ensuring that only authorized communication is allowed between pods and that your application remains secure.
Secrets management
Probably the first thing that comes to mind when thinking about securing the deployment of your Python application is secrets management, especially when sensitive data such as API keys, passwords, and certificates are involved. In Kubernetes, secrets are effectively stored as plain text by default, which poses significant risks if not properly addressed. In this section, we’ll look at the challenges and best practices of secrets management for a FastAPI application deployed on Kubernetes: how an attacker can exploit improperly managed secrets, and how to secure them effectively.
Understanding Kubernetes Secrets
Kubernetes secrets are designed to store sensitive information such as API keys, database credentials, and TLS certificates. These secrets are usually injected into pods as environment variables or mounted as files. While Kubernetes provides a convenient way to manage sensitive data, the default configuration has several security shortcomings.
- Plaintext Storage: By default, Kubernetes secrets are stored as base64-encoded strings, which are not encrypted. This means that anyone with access to the etcd database (where Kubernetes stores its cluster state) or with sufficient privileges can easily decode and access these secrets.
- Broad Access: Secrets in Kubernetes can be accessed by any pod or user with sufficient permissions. If access controls are not properly configured, secrets might be exposed to unauthorized users or services.
- Lack of Auditing: Without proper auditing, it’s challenging to track who accessed which secrets and when. This makes it difficult to detect unauthorized access or suspicious activity involving sensitive data.
Risks of Insecure Secrets Management
When secrets are not properly secured in a Kubernetes environment, they can be easily compromised. Let’s consider some of the primary risks associated with insecure secrets management:
- Etcd Exposure: Kubernetes stores secrets in etcd, the key-value store that holds the cluster state. If etcd is not encrypted or properly secured, anyone with access to etcd can retrieve and decode secrets, leading to a full compromise of sensitive information.
- Compromised Pods: If a pod is compromised (e.g., through a vulnerability in your FastAPI application), the attacker could extract secrets that are mounted as environment variables or files. These secrets could then be used to gain further access to other services or systems.
- Unauthorized Access: Misconfigured RBAC (Role-Based Access Control) can allow unauthorized users or services to access secrets. This can lead to data leaks or privilege escalation within the cluster.
- Lack of Rotation: If secrets are not regularly rotated, an attacker who gains access to a secret might have long-term access to sensitive data, even after the initial vulnerability is patched.
Example: Exploiting Insecure Secrets Management in a FastAPI Application
Let’s walk through a scenario where a FastAPI application deployed on Kubernetes is compromised due to insecure secrets management. We’ll explore how an attacker can exploit this weakness to gain access to sensitive data and escalate their privileges within the cluster.
Step 1: Initial Compromised Pod
Imagine your FastAPI application has a vulnerability in one of its API endpoints that allows an attacker to execute arbitrary code from within the pod. The attacker exploits this vulnerability and gains a foothold within the FastAPI pod. At this point, the attacker has access to the filesystem, environment variables, and network within the pod.
# Vulnerable FastAPI endpoint
from fastapi import File, UploadFile

@app.post("/upload")
async def upload_file(file: UploadFile = File(...)):
    file_location = f"/tmp/{file.filename}"
    with open(file_location, "wb+") as file_object:
        file_object.write(file.file.read())
    return {"info": f"file '{file.filename}' saved at '{file_location}'"}
In the above example, an improperly handled file upload could allow the attacker to upload a malicious script and execute it within the pod.
Step 2: Extracting Secrets from Environment Variables
The FastAPI application uses secrets stored as environment variables for connecting to a PostgreSQL database. These secrets are injected into the pod using a Kubernetes Secret object, which is mounted as environment variables.
apiVersion: v1
kind: Secret
metadata:
  name: db-secrets
type: Opaque
data:
  POSTGRES_USER: cG9zdGdyZXM=
  POSTGRES_PASSWORD: c2VjdXJlcGFzcw==
  POSTGRES_DB: ZGF0YWJhc2U=
These base64-encoded secrets are mounted into the FastAPI pod as environment variables:
apiVersion: v1
kind: Pod
metadata:
  name: fastapi-app
spec:
  containers:
  - name: fastapi
    image: my-fastapi-app:latest
    env:
    - name: POSTGRES_USER
      valueFrom:
        secretKeyRef:
          name: db-secrets
          key: POSTGRES_USER
    - name: POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-secrets
          key: POSTGRES_PASSWORD
    - name: POSTGRES_DB
      valueFrom:
        secretKeyRef:
          name: db-secrets
          key: POSTGRES_DB
Once inside the compromised pod, the attacker can read these values directly. Kubernetes decodes the base64-encoded Secret data when injecting it, so the environment variables already contain the plaintext credentials:

echo $POSTGRES_USER
echo $POSTGRES_PASSWORD
echo $POSTGRES_DB
With these credentials, the attacker can connect to the PostgreSQL database, extract sensitive data, and even modify the database if write access is granted.
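With a PostgreSQL client available in (or copied into) the pod, using the stolen credentials is then a one-liner (the service name and table are assumptions):

PGPASSWORD="$POSTGRES_PASSWORD" psql -h postgres-service -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "SELECT * FROM users LIMIT 10;"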
Step 3: Accessing Secrets from the Filesystem
In addition to environment variables, Kubernetes secrets can be mounted as files in the pod’s filesystem. The attacker, with access to the pod’s shell, can easily locate and read these files.
For example, if the secrets are mounted as files under /etc/secrets, the attacker can list the files and read their contents:
ls /etc/secrets
cat /etc/secrets/POSTGRES_USER
cat /etc/secrets/POSTGRES_PASSWORD
cat /etc/secrets/POSTGRES_DB
This exposes the same sensitive data that was stored as environment variables but now accessed through the filesystem. The attacker can use these secrets to further infiltrate the application’s backend services.
Step 4: Moving Laterally and Escalating Privileges
With the database credentials in hand, the attacker could attempt to connect to the PostgreSQL database from within the pod. Depending on the security configuration of the database, this might give the attacker access to user data, configuration settings, and other sensitive information.
The attacker might also attempt to escalate privileges by searching for additional secrets or credentials within the pod or other connected services. For instance, if the pod has access to other Kubernetes secrets or service accounts with elevated permissions, the attacker could use these to gain further control over the cluster.
Best Practices for Secure Secrets Management
To prevent such attacks, it’s crucial to implement secure secrets management practices within your Kubernetes environment. Here are some best practices to follow:
1. Encrypt Secrets at Rest
By default, Kubernetes stores secrets in etcd as base64-encoded strings, which are not encrypted. It’s essential to enable encryption at rest for etcd to ensure that secrets are stored securely.
Kubernetes provides a built-in mechanism to encrypt secrets at rest. You can configure this by creating an EncryptionConfiguration file and pointing the API server at it.
Here’s an example configuration for enabling encryption at rest:
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      - identity: {}
This configuration encrypts all secrets using the AES-CBC algorithm. The encryption key should be securely stored and rotated regularly.
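The configuration only takes effect once the API server is pointed at it and existing secrets are rewritten; on a kubeadm-style control plane that looks roughly like this (the file paths are assumptions):

# Add the flag to the kube-apiserver manifest / command line
kube-apiserver ... --encryption-provider-config=/etc/kubernetes/enc/encryption-config.yaml
# Rewrite all existing secrets so they are stored encrypted
kubectl get secrets --all-namespaces -o json | kubectl replace -f -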
2. Use External Secrets Management Tools
Instead of relying solely on Kubernetes secrets, consider using an external secrets management tool like HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault. These tools provide advanced features like secret rotation, auditing, and fine-grained access control.
For example, with HashiCorp Vault, you can dynamically generate secrets that are short-lived, reducing the risk of exposure if a secret is compromised.
Integrating Vault with Kubernetes involves configuring Vault to inject secrets into pods securely. Here’s an example of how you might configure a FastAPI application to use Vault:
- Configure Vault: Set up Vault and enable the Kubernetes authentication method.
- Create a Vault Policy: Define a policy that limits access to only the necessary secrets.
path "secret/data/db-secrets" {
capabilities = ["read"]
}
- Annotate the Pod: Modify the pod configuration to include Vault annotations for injecting secrets:
apiVersion: v1
kind: Pod
metadata:
  name: fastapi-app
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/role: "my-fastapi-role"
    vault.hashicorp.com/agent-inject-secret-db-secrets: "secret/data/db-secrets"
    vault.hashicorp.com/agent-inject-template-db-secrets: |
      {{- with secret "secret/data/db-secrets" -}}
      export POSTGRES_USER={{ .Data.data.POSTGRES_USER }}
      export POSTGRES_PASSWORD={{ .Data.data.POSTGRES_PASSWORD }}
      export POSTGRES_DB={{ .Data.data.POSTGRES_DB }}
      {{- end }}
spec:
  containers:
  - name: fastapi
    image: my-fastapi-app:latest
    # The Vault agent sidecar renders the template above to
    # /vault/secrets/db-secrets; the container sources that file on startup
    command: ["/bin/sh", "-c"]
    args: [". /vault/secrets/db-secrets && uvicorn main:app --host 0.0.0.0 --port 80"]
- Deploy the Application: When the pod starts, the Vault agent automatically renders the secrets into files under /vault/secrets inside the pod (sourced here as environment variables on startup). This approach ensures that your secrets are never stored unencrypted in Kubernetes and that they are securely delivered to the application at runtime.
By using external secrets management tools like Vault, you not only protect secrets with robust encryption and access controls, but you also gain additional features like dynamic secret generation, automatic rotation, and detailed auditing.
3. Limit Access to Secrets with RBAC
As mentioned in the previous sections, Role-Based Access Control (RBAC) is essential for securing your Kubernetes environment. You should carefully control which users and service accounts have access to secrets. Misconfigured RBAC settings can lead to unauthorized access to sensitive data, so it’s crucial to ensure that only the necessary roles have permission to read or modify secrets.
Here’s how you can use RBAC to restrict access to secrets:
- Create a Role with Limited Access: Define a role that grants access only to the specific secrets required by your application.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: mynamespace
  name: fastapi-secrets-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["db-secrets"]
  verbs: ["get"]
This role allows access only to the db-secrets secret in the mynamespace namespace.
- Bind the Role to a Service Account: Use a RoleBinding to associate the role with a specific service account that your FastAPI application will use.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: fastapi-secrets-binding
  namespace: mynamespace
subjects:
- kind: ServiceAccount
  name: fastapi-service-account
  namespace: mynamespace
roleRef:
  kind: Role
  name: fastapi-secrets-reader
  apiGroup: rbac.authorization.k8s.io
This RoleBinding ensures that only the fastapi-service-account service account has permission to read the db-secrets secret.
- Use the Service Account in Your Pod: Finally, ensure that your FastAPI pod uses the specified service account.
apiVersion: v1
kind: Pod
metadata:
  name: fastapi-app
  namespace: mynamespace
spec:
  serviceAccountName: fastapi-service-account
  containers:
  - name: fastapi
    image: my-fastapi-app:latest
By following these steps, you can tightly control which entities within your Kubernetes cluster have access to secrets, reducing the risk of unauthorized access.
4. Regularly Rotate Secrets
Secrets should not be static. Regularly rotating secrets is a best practice that reduces the window of opportunity for an attacker who might have gained access to a secret. If a secret is compromised, rotating it immediately can mitigate the impact.
Automating secret rotation is a powerful approach. For example, if you’re using HashiCorp Vault or AWS Secrets Manager, these tools can automatically rotate secrets at defined intervals or upon demand. You should also ensure that your FastAPI application can handle secret rotation without downtime.
Here’s an example of how you might implement secret rotation in Vault:
- Enable Dynamic Secrets: For example, enable the PostgreSQL secrets engine in Vault to dynamically generate database credentials:
vault secrets enable database
vault write database/config/my-postgresql-database \
plugin_name=postgresql-database-plugin \
allowed_roles="fastapi-app" \
connection_url="postgresql://{{username}}:{{password}}@db.example.com:5432/dbname?sslmode=disable"
- Create a Role for Dynamic Credentials: Define a role in Vault that controls how dynamic credentials are generated:
vault write database/roles/fastapi-app \
db_name=my-postgresql-database \
creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';" \
default_ttl="1h" \
max_ttl="24h"
- Use Dynamic Secrets in Your Application: Modify your FastAPI application to request dynamic credentials from Vault:
import hvac
client = hvac.Client(url='http://127.0.0.1:8200', token='your-vault-token')
db_creds = client.secrets.database.generate_credentials(name='fastapi-app')
db_username = db_creds['data']['username']
db_password = db_creds['data']['password']
# Use these credentials to connect to your database
In this setup, Vault generates a new database username and password each time the FastAPI application requests them. These credentials are short-lived and automatically expire, reducing the risk of exposure.
5. Avoid Hardcoding Secrets in Code
One of the most basic yet often overlooked practices is avoiding the hardcoding of secrets directly in your source code. Hardcoding secrets can lead to accidental exposure, especially if the code is shared, pushed to a public repository, or included in CI/CD pipelines.
Instead of hardcoding, use environment variables, external configuration files, or secrets management tools to inject secrets into your application at runtime.
For example, instead of hardcoding a database password:
DATABASE_PASSWORD = "mysecretpassword"
Use environment variables:
import os
DATABASE_PASSWORD = os.getenv("DATABASE_PASSWORD")
Or, if using a secrets management tool:
import hvac

client = hvac.Client(url='http://127.0.0.1:8200', token='your-vault-token')
# For the KV v2 engine, the secret payload is nested one level deeper: data -> data
secret = client.read('secret/data/db-secrets')
DATABASE_PASSWORD = secret['data']['data']['password']
This approach ensures that sensitive information is not directly embedded in your codebase and is only accessible at runtime.
6. Monitor and Audit Secret Access
Monitoring and auditing access to secrets is crucial for detecting unauthorized access or suspicious behavior. Enable logging for all secret access operations and regularly review these logs to identify potential security incidents.
If you’re using external secrets management tools, they often provide built-in auditing capabilities. For instance, Vault logs all access events, including which user accessed which secret and when. These logs can be integrated with your SIEM (Security Information and Event Management) system for real-time analysis and alerting.
In Kubernetes, you can enable auditing for API requests, including those involving secrets. Configure the audit policy to log secret access events and analyze these logs to detect any abnormal patterns.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
  resources:
  - group: ""
    resources: ["secrets"]
This policy logs both the request and response for all secret-related API calls, providing detailed information that can be used for security analysis.
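Note that the policy file by itself does nothing until the API server is told to use it; typically that means starting kube-apiserver with flags along these lines (the file paths are assumptions):

kube-apiserver ... \
  --audit-policy-file=/etc/kubernetes/audit/policy.yaml \
  --audit-log-path=/var/log/kubernetes/audit.log \
  --audit-log-maxage=30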
7. Use Kubernetes Secrets Appropriately
While Kubernetes secrets are not encrypted by default, they still offer a convenient way to manage sensitive data within the cluster. If you must use Kubernetes secrets, take the following precautions:
- Enable etcd Encryption: As mentioned earlier, ensure that etcd encryption is enabled to protect secrets stored in the cluster’s key-value store.
- Limit Secret Access: Use RBAC to restrict which pods and users can access specific secrets. Ensure that only the pods that absolutely need a secret have access to it.
- Avoid Excessive Secrets: Don’t overload your Kubernetes secrets with too much data. Keep secrets minimal, storing only what’s necessary. For large amounts of sensitive data, consider using a more robust secrets management solution.
- Regularly Rotate and Update Secrets: Even if you’re using Kubernetes secrets, regularly rotate and update them. This practice minimizes the risk of long-term exposure if a secret is compromised.
Conclusion
In this blog post, we have walked through some of the most important topics in securing your Python application running in Kubernetes. There are, of course, many more aspects that you need to pay attention to. Those will be covered in the upcoming posts.