James Skliros

Secret Management and Kubernetes Part 2

Shhh Kubernetes.. it’s a Secret

In part one of this series, we looked at how to build a sound Secret Management strategy and surveyed a few of the options available, noting that cloud-provider-managed Secret Management solutions offer an advantage compared to self-managed or self-hosted solutions.

Regardless of which Secret Management solution you choose, connecting to it and accessing secrets securely remains a fundamental problem.

In this post, we’ll look at how to solve that problem in a Kubernetes environment using the Secrets Store CSI Driver, which provides a native Kubernetes integration experience and allows applications running on Kubernetes clusters to access secret content securely.

Why do we need a Storage Driver?

When it comes to containers, stateless containers are always preferable. Occasionally, however, you’ll come across an application, whether commercially available off-the-shelf (COTS) or a containerised version of an old-school app, that requires state management. Hence the need to manage storage systems in the container ecosystem through a standard, uniform, API-driven approach, which allows portability of storage across multiple orchestration engines such as Swarm, Kubernetes, and Mesos.

[Image: why we need a storage driver, given the current problems and issues]

In the early days of Kubernetes, storage vendors had to develop and maintain custom drivers that lived inside the Kubernetes codebase. As a result, each storage driver release had to be coordinated with the Kubernetes source code lifecycle and release frequency.

Solution

The Container Storage Interface (CSI) is a community initiative led by Google, Mesosphere, and Docker that was formed to address these issues. The participants agreed on a standardised format for drivers as an extension: a set of established standards for how containerised workloads should handle block and file storage systems, similar to the approach employed for the Container Network Interface (CNI).


CSI + Kubernetes + Secrets = Secret Store CSI Driver


The Secrets Store CSI Driver integrates external secrets stores with Kubernetes via a Container Storage Interface (CSI) volume, allowing Kubernetes to mount multiple secrets, keys, and certificates stored in enterprise-grade external secrets stores into pods as a volume. Once the volume is attached, the data in it is mounted into the container’s file system.

By leveraging this driver, you get the benefits of managing the lifecycle of your secrets in an enterprise-grade secure provider while gaining a native Kubernetes integration of these secrets into Kubernetes pods.

Architecture

The Secrets Store CSI Driver is a daemonset that allows the kubelet on each node to communicate with the driver, and the driver with the external providers, over gRPC on pod start and restart. The secret content is retrieved from the external secrets store specified in the SecretProviderClass custom resource, after which it is written to a volume mounted as tmpfs in the pod, just as Kubernetes Secrets are by default. On pod delete, the corresponding volume is cleaned up and deleted.

Each driver pod includes the following containers, which perform these functions:

[Image: the containers included in the CSI driver pod, namely node-driver-registrar, secrets-store, liveness-probe]

  • node-driver-registrar: Responsible for registering the CSI driver with the kubelet so that it knows which Unix domain socket to issue CSI calls on.

  • secrets-store: Implements the CSI Node service gRPC services described in the CSI specification. It is in charge of mounting and unmounting volumes during pod creation and deletion.

  • liveness-probe: Responsible for monitoring the health of the CSI driver and reporting it to Kubernetes. This enables Kubernetes to automatically detect issues with the driver and restart the pod to try to fix them.
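
If the driver is already installed, these containers can be observed directly. The following is a quick sketch, assuming the driver was deployed into kube-system with its Helm chart's default app=secrets-store-csi-driver label:

$ kubectl get pods -n kube-system -l app=secrets-store-csi-driver

$ kubectl get pods -n kube-system -l app=secrets-store-csi-driver \
    -o jsonpath='{.items[0].spec.containers[*].name}'
node-driver-registrar secrets-store liveness-probe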

External Secret Providers

Currently, there are four enterprise-grade external secrets store providers (AWS, GCP, Vault, and Azure). These act as plugins that the CSI driver communicates with over gRPC to fetch secrets. Each provider has its own set of adjustable parameters for activities like authentication, as well as for the location and name of the secret, certificate, key, and other provider-specific requirements.

We’ll go through some of the prerequisites for a few of these providers and look at how they can make working with secrets easier.

Amazon Secrets Manager: To use Amazon’s Secrets Manager, make sure the Kubernetes service account is associated with an IAM role that is attached to an IAM policy with the appropriate permissions. See IAM roles for service accounts.
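
As a rough sketch, eksctl can create such a service account with an associated IAM role in one step; the cluster name and policy ARN below are placeholders:

$ eksctl create iamserviceaccount \
    --name csi-provider-user \
    --namespace application-one \
    --cluster <cluster-name> \
    --attach-policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/<secrets-read-policy> \
    --approve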

Google Secret Manager: To gain access to the Google Secret Manager API, the pod must use Workload Identity and be configured with the appropriate rights and IAM binding. See Using Workload Identity.
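
A minimal sketch of the IAM side, assuming the Google service account already exists; GSA_NAME and PROJECT_ID are placeholders, and the binding references the Kubernetes service account used later in this post:

$ gcloud projects add-iam-policy-binding PROJECT_ID \
    --member "serviceAccount:GSA_NAME@PROJECT_ID.iam.gserviceaccount.com" \
    --role roles/secretmanager.secretAccessor

$ gcloud iam service-accounts add-iam-policy-binding \
    GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:PROJECT_ID.svc.id.goog[application-one/csi-provider-user]"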

Vault: When using Vault as the chosen secret store, you must integrate Vault with Kubernetes by setting up the Kubernetes authentication method, if this is not already in place.

The Kubernetes Service Account Admission Controller creates service account tokens by default, and upon inspection of the JSON Web Token (JWT), an issuer value of kubernetes/serviceaccount can be seen. Everything generally “just works” because this default issuer value is also the default used by the Kubernetes auth method in Vault.

When using managed clusters, however, the service account tokens created and used by the Vault CSI provider for authentication take the value of the kube-apiserver’s service-account-issuer flag, which differs. As a result, when setting up Kubernetes authentication in Vault, the right issuer value must be specified depending on the managed cluster you are using.

Below are the rules for defining this value based on your cluster provider, followed by a sketch of the corresponding Vault configuration.

  • EKS: https://oidc.eks.<region>.amazonaws.com/id/<ID> (same as OpenID Connect provider URL)

  • AKS: "<dns-prefix>.hcp.<region>.azmk8s.io" (same as API server address, but with quotes)

  • GKE: https://container.googleapis.com/v1/projects/<project>/locations/<az|region>/clusters/<cluster-name>
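
A minimal sketch of the Vault side, assuming an EKS cluster and the issuer format above; the API server address and CA certificate file are placeholders:

$ vault auth enable kubernetes
$ vault write auth/kubernetes/config \
    kubernetes_host="https://<kube-apiserver>:443" \
    kubernetes_ca_cert=@ca.crt \
    issuer="https://oidc.eks.<region>.amazonaws.com/id/<ID>"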

More information about each of these providers and how to install and configure them can be found at the links below; a minimal sketch of installing the driver itself follows.

  • AWS Provider

  • Azure Provider

  • GCP Provider

  • Vault Provider
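
The driver itself is typically installed via its Helm chart before any of the providers; the following is a minimal sketch using the chart from the project’s repository:

$ helm repo add secrets-store-csi-driver https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
$ helm install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver --namespace kube-system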

Secret Provider Classes

To take advantage of the Secrets Store CSI Driver and external secret providers, the first resource that must be created is the SecretProviderClass, a custom resource that tells the CSI driver which external secret provider is home to the secret, along with the provider-specific parameters.

All SecretProviderClass resources must include at minimum the following fields:

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: my-provider                           # The name of the SecretProviderClass
  namespace: application-ns                   # Must be the same namespace as the Pod
spec:
  provider: vault                             # Accepted provider options: vault, gcp, aws, azure
  parameters:                                 # Provider-specific parameters

The SecretProviderClass also allows us to fetch as many secrets from a single provider as we want, and a cluster can have numerous SecretProviderClasses that pull from different providers. With additional configuration in the SecretProviderClass, we can also use the externally provided secrets to create a Kubernetes Secret, or sync with an existing one, in addition to mounting them within a volume. We can include the following components in the SecretProviderClass resource:

spec:
  provider: <provider>
  secretObjects:
  - data:
    - key: username                           # Data field to populate
      objectName: foo1                        # Name of the mounted content to sync; this can be the object name or the object alias
    secretName: foosecret                     # Name of the Kubernetes Secret object
    type: Opaque                              # Type of the Kubernetes Secret object, e.g. Opaque, kubernetes.io/tls

This even allows us to consume our secret as an environment variable within our Pod; for full details see Sync as Kubernetes Secret - The Secrets Store CSI Driver Book.

Secret Retrieval Workflow

The following diagram outlines the workflow of retrieving a secret using the Secrets Store CSI Driver.

[Image: secret retrieval workflow, explained in the numbered steps below]

  1. The Pod request is communicated via the API Server.

  2. The request is sent to the Secrets Store CSI Driver, which establishes a connection to the requested store using the SecretProviderClass.

  3. The External Provider is queried for the Secret.

  4. The secret is then written to a volume, which the pod then mounts.

  5. The secret can then be read from the volume’s file system.

Secrets Store CSI Driver Example

In this example, we’ll illustrate how to set up a Pod to access a secret from the Amazon, Google, or Vault providers using the Secrets Store CSI Driver.

Secret Provider Class

The following shows how a SecretProviderClass looks and feels, and how it can be configured depending on the provider. Assume one of these providers contains the secret:

name: password
value: h+7Q?D7ydVXrzCXn

For the purposes of our example, we’ll call our SecretProviderClass application-one-spc and store it in the application-one namespace.

Secret Provider Class - AWS Provider

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: application-one-spc
  namespace: application-one
spec:
  provider: aws
  parameters:
    objects: |
        - objectName: password
          objectType: "secretsmanager"            # "secretsmanager" or "ssmparameter"

Secret Provider Class - Google Provider

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: application-one-spc
  namespace:  application-one 
spec:
  provider: gcp
  parameters:
    secrets: |
      - resourceName: <resource path>                # "projects/$PROJECT_ID/secrets/testsecret/versions/latest"
        fileName: password.txt                       # file that contains the contents of the secret

Secret Provider Class - Vault Provider

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: application-one-spc
  namespace: application-one
spec:
  provider: vault
  parameters:
    roleName: <vault-role>                                      # The Vault role managing access to the secret, e.g. prod-read-only
    vaultAddress: 'https://VAULT_SERVICE_IP'                    # The URL of the Vault server, e.g. vault.prod.innablr.secure
    vaultNamespace: <vault-namespace>                           # The Vault namespace of the secret, e.g. innablr-application
    vaultKubernetesMountPath: <auth-mount-path>                 # The mount path of the Kubernetes auth method, e.g. kubernetes/innablr-prod/
    vaultCACertPath: /mnt/ca-certificates/cacert.pem            # The CA cert path used to authenticate to Vault
    objects: |
      - objectName: password
        secretPath: <path to secret in vault>                   # e.g. "secret/application/data/"
        secretKey: <key reference to secret in vault>           # e.g. "MyKey"

Service Account

When using Google or Amazon as our provider, we’ll need to make sure we have a service account associated with an IAM role and policy that has the right permissions and access to get the secret, as previously mentioned.

To do so, we’ll need to build a service account and annotate it suitably according to the provider.

Amazon

apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-provider-user 
  namespace: application-one 
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/<IAM_ROLE_NAME>

Google

apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-provider-user 
  namespace: application-one 
  annotations:
    iam.gke.io/gcp-service-account: GSA_NAME@PROJECT_ID.iam.gserviceaccount.com

Vault

In the case of Vault, we can take advantage of the default service account configuration.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-provider-user
  namespace: application-one

Pod

To use the SecretProviderClass and retrieve the secret, we’ll utilise volumes and volumeMounts in the Kubernetes Pod. The nice part is that it doesn’t matter which provider we use; the setup within the pod is the same in all situations. We’ll start by constructing a volume within our Pod that references our SecretProviderClass resource.

volumes:
  - name: secrets-store-inline
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: "application-one-spc"

Ensure that the secretProviderClass value matches the name of the SecretProviderClass and that both are located in the same namespace as the Pod. Next, we’ll mount this volume within our container using the path /mnt/secrets-store; the volume is mounted as tmpfs and the secret contents are written to it.

volumeMounts:
  - name: secrets-store-inline
    mountPath: "/mnt/secrets-store"
    readOnly: true

Let’s bring everything together now, remembering to include the reference to our service account.

kind: Pod
apiVersion: v1
metadata:
  name: nginx-secrets-store-inline
  namespace: application-one
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - name: secrets-store-inline
      mountPath: "/mnt/secrets-store"
      readOnly: true
  serviceAccountName: csi-provider-user
  volumes:
    - name: secrets-store-inline
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: "application-one-spc"

If we had also created a Kubernetes Secret called foosecret-1 and wanted to reference it as the environment variable SECRET_PASSWORD within our pod, we would include:

env:
  - name: SECRET_PASSWORD
    valueFrom:
      secretKeyRef:
        name: foosecret-1
        key: password

Validation

The secret can be verified by exec-ing into the pod and viewing the contents of the mount point, or by viewing the environment variables within the Pod and using grep to filter for the variable we want.

$ kubectl exec -it nginx-secrets-store-inline -- sh

# cat /mnt/secrets-store/password
h+7Q?D7ydVXrzCXn

# printenv | grep SECRET_PASSWORD
SECRET_PASSWORD=h+7Q?D7ydVXrzCXn

Secret Rotation and Versioning

The ability to rotate secrets is one of the most important security requirements, especially when data has been exposed.

  • Rotation can be enabled in the CSI driver by setting the --enable-secret-rotation flag on the secrets-store container (see the Helm sketch after this list). In addition, a polling interval can be set to control how often the mounted contents for all pods and synced Kubernetes Secrets are updated to the newest version.

  • The --rotation-poll-interval is set to 2 minutes by default; however, it can be changed as needed.

  • SecretProviderClassPodStatus is a custom resource created by the driver to track the binding between a pod and a SecretProviderClass. Its status also shows the secrets and versions that are currently loaded in the pod mount.
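
As a sketch, both rotation settings can also be applied through the driver’s Helm chart values, assuming the chart install shown earlier; enableSecretRotation and rotationPollInterval are the chart’s value names:

$ helm upgrade csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver \
    --namespace kube-system \
    --set enableSecretRotation=true \
    --set rotationPollInterval=120s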

View the secret versions that are currently loaded in the pod mount.

$ kubectl get secretproviderclasspodstatus pod-vault-deploy-6b69bbb67c-9jktp-innablr-csi-driver-app-secrets-vault -o yaml  
status:
  mounted: true
  objects:
  - id: 'TestSecret:gcp/au/kv/data/corp/production/gcp/au/kv:'
    version: "0"
  podName: pod-vault-deploy-6b69bbb67c-9jktp
  secretProviderClassName: app-secrets-vault
  targetPath: /var/lib/kubelet/pods/a0a83b5e-e8b4-4fa6-af44-3ae8c5f0a35f/volumes/kubernetes.io~csi/secrets-store-inline/mount

Conclusion

To summarise, we demonstrated the ability of the Secrets Store CSI Driver to leverage the power of cloud providers, as well as HashiCorp Vault, to use secrets in Kubernetes in a secure manner, which is exactly what we set out to achieve.

We also determined that storing our secrets in an external enterprise-grade secrets store provides the highest level of protection for our secrets, and that leveraging the CSI driver provides a Kubernetes-native integration of secrets into the applications running on Kubernetes clusters. Along the way, we discovered that the Secrets Store CSI Driver can work in scenarios where several cloud providers or HashiCorp Vault instances co-exist, using SecretProviderClasses.

Finally, the Secrets Store CSI Driver is only one of many solutions for securing secrets in Kubernetes; whether or not it is the best option for your organisation is entirely up to you.

Innablr is a leading cloud engineering and next generation platform consultancy. We are hiring! Want to be an Innablr? Reach out!

