At-rest and PersistentVolume encryption with an external KMS

1. Overview

The intent of this guide is to detail the steps and commands necessary to configure OpenShift Data Foundation (ODF) 4.7 to enable the use of an external HashiCorp Vault instance for storing the at-rest or PersistentVolume encryption keys.

The necessary components are one OCP 4.6 (or greater) cluster and the OpenShift Data Foundation (ODF) operator installed in version 4.7 (or greater).

Encryption at rest can be used to provide protection against the following threats:

  1. Cluster wide encryption to protect the organization against device theft (all servers or a single device)

  2. PV level encryption to guarantee isolation and confidentiality between tenants (applications).

  3. Combined PV level encryption on top of cluster-wide encryption to offer both protections above at the same time

Starting April 2021, OpenShift Container Storage (OCS) has been rebranded to OpenShift Data Foundation (ODF).

This guide walks you through the following steps:
  1. Deploy HashiCorp Vault.
    Download HashiCorp Vault and configure it so it can be used by ODF.

  2. Install ODF 4.7.
    Install the ODF operator using OperatorHub.

  3. Create your Storage Cluster.
    Deploy your storage cluster pointing to HashiCorp Vault to provide cluster wide at-rest encryption.

  4. Deploy an application with PV encryption.
    Deploy your application using a storage class pointing to HashiCorp Vault to provide PersistentVolume granular at-rest encryption.

2. HashiCorp Vault deployment & configuration

2.1. Installation

2.1.1. Using homebrew

If your operating system supports Homebrew, it is an easy way to install HashiCorp Vault. If your system does not support Homebrew, go to Downloading.

brew tap hashicorp/tap
brew install hashicorp/tap/vault
which vault
Example output
$ brew tap hashicorp/tap
==> Tapping hashicorp/tap
Cloning into '/usr/local/Homebrew/Library/Taps/hashicorp/homebrew-tap'...
remote: Enumerating objects: 1090, done.
remote: Counting objects: 100% (278/278), done.
remote: Compressing objects: 100% (170/170), done.
remote: Total 1090 (delta 162), reused 211 (delta 108), pack-reused 812
Receiving objects: 100% (1090/1090), 212.54 KiB | 1.37 MiB/s, done.
Resolving deltas: 100% (582/582), done.
Tapped 1 cask and 8 formulae (52 files, 333.3KB).
$ brew install hashicorp/tap/vault
Updating Homebrew...
==> Auto-updated Homebrew!
Updated 2 taps (homebrew/core and noobaa/noobaa).
==> New Formulae
[...]
==> Installing vault from hashicorp/tap
==> Downloading https://releases.hashicorp.com/vault/1.7.0/vault_1.7.0_darwin_amd64.zip
######################################################################## 100.0%
[...]
==> Summary
🍺  /usr/local/Cellar/vault/1.7.0: 4 files, 188.7MB, built in 12 seconds
==> `brew cleanup` has not been run in 30 days, running now...
[...]
$ which vault
/usr/local/bin/vault

2.1.2. Downloading

You can download the appropriate HashiCorp Vault packages for your Operating System by visiting this page

Download the appropriate binary by selecting the correct tab as illustrated below.

Choose correct operating system
Figure 1. HashiCorp Vault download page

2.2. Customization

To configure Vault, follow these steps.

2.2.1. HTTPS configuration

This section details the HTTPS-specific commands using a RHEL node. If your OS is different, you will have to adapt the steps for installing certbot.
For certbot to run properly, port 80 of the node where Vault is running must be reachable from the Internet so the Let's Encrypt HTTP challenge can complete. If not configuring HTTPS, go to General configuration.
mkdir -p ./vault/config/vault-server-tls
sudo yum install -y certbot
sudo certbot certonly --standalone --noninteractive --agree-tos -m {your-email} -d {your-vault-dns-name}
Example output
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Starting new HTTPS connection (1): acme-v02.api.letsencrypt.org
Requesting a certificate for external-vault.ocstraining.com
Performing the following challenges:
http-01 challenge for external-vault.ocstraining.com
Waiting for verification...
Cleaning up challenges

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/external-vault.ocstraining.com/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/external-vault.ocstraining.com/privkey.pem
   Your certificate will expire on 2021-06-15. To obtain a new or
   tweaked version of this certificate in the future, simply run
   certbot again. To non-interactively renew *all* of your
   certificates, run "certbot renew"
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le
Copy the files in /etc/letsencrypt/live/{your-vault-dns-name} to ./vault/config/vault-server-tls and adjust file permissions so the vault binary has access to them when running.
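
For example, a minimal sketch of copying the certificates and adjusting ownership, assuming Vault runs as your current user (adjust paths and permissions to your environment):

# Copy the Let's Encrypt files (dereferencing the symlinks) into the Vault TLS directory
sudo cp -L /etc/letsencrypt/live/{your-vault-dns-name}/*.pem ./vault/config/vault-server-tls/
# Give the user running the vault binary ownership of the copies
sudo chown -R $(whoami) ./vault/config/vault-server-tls
# Restrict access to the private key
chmod 600 ./vault/config/vault-server-tls/privkey.pem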

2.3. General configuration

In order to start vault, create a valid configuration file ./vault/config/vault-server-hcl using this template.

disable_mlock = true
ui = true
listener "tcp" {
  address = "{ip_to_bind_to}:8200"
  tls_disable = "false" 	# <- Change to true if not configuring https
  tls_cert_file = "{home-directory}/vault/config/vault-server-tls/fullchain.pem" # <- Omit if not doing https
  tls_key_file  = "{home-directory}/vault/config/vault-server-tls/privkey.pem" # <- Omit if not doing https
  tls_client_ca_file = "{home-directory}/vault/config/vault-server-tls/chain.pem" # <- Omit if not doing https

}

cluster_name = "localvault"
api_addr = "https://{fqdn-hostname}:8200" # <- Change to http if not using https
cluster_addr = "https://{fqdn-hostname}:8201" # <- Change to http if not using https

storage "file" {
  path = "./vault/data"
}

Create the required subdirectories for vault and verify the content of your configuration file.

mkdir -p ./vault/data
mkdir -p ./vault/config
cat ./vault/config/vault-server-hcl
Example output
$ mkdir -p ./vault/data
$ mkdir -p ./vault/config
$ cat ./vault/config/vault-server-hcl
disable_mlock = true
ui = true
listener "tcp" {
  address = "172.31.14.45:8200"
  tls_disable = "false"
  tls_cert_file = "/home/ec2-user/vault/config/vault-server-tls/fullchain.pem"
  tls_key_file  = "/home/ec2-user/vault/config/vault-server-tls/privkey.pem"
  tls_client_ca_file = "/home/ec2-user/vault/config/vault-server-tls/chain.pem"

}

cluster_name = "localvault"
api_addr = "https://ip-172-31-14-45.us-east-2.compute.internal:8200"
cluster_addr = "https://ip-172-31-14-45.us-east-2.compute.internal:8201"

storage "file" {
  path = "./vault/data"
}

2.4. Starting HashiCorp Vault

Start vault with the following command.

By default vault runs in the foreground, so we suggest using tmux or screen to run the command below.
vault server -config ./vault/config/vault-server-hcl
Example output
==> Vault server configuration:

             Api Address: https://ip-172-31-14-45.us-east-2.compute.internal:8200
                     Cgo: disabled
         Cluster Address: https://ip-172-31-14-45.us-east-2.compute.internal:8201
              Go Version: go1.15.8
              Listener 1: tcp (addr: "172.31.14.45:8200", cluster address: "172.31.14.45:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "enabled")
               Log Level: info
                   Mlock: supported: true, enabled: false
           Recovery Mode: false
                 Storage: file
                 Version: Vault v1.7.0-rc1
             Version Sha: 9af08a1c5f0f855984a1fa56d236675d167f578e

==> Vault server started! Log data will stream in below:
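
If tmux or screen is not available, a simple alternative is to send the server to the background and capture its output in a log file; a minimal sketch (the log file path is only an example):

# Run Vault in the background and redirect its output to a log file
nohup vault server -config ./vault/config/vault-server-hcl > ./vault/vault.log 2>&1 &
# Follow the log to confirm the server started
tail -f ./vault/vault.log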

At this point vault is started but not initialized. Check the status of vault before initializing the KMS.

If you have enabled https, export this specific environment variable.

export VAULT_SKIP_VERIFY=true
If you have enabled https, the -ca-cert ./vault/config/vault-server-tls/cert.pem option must be added to every vault command entered. e.g. vault status -ca-cert ./vault/config/vault-server-tls/cert.pem.
export VAULT_ADDR="https://$(hostname):8200" # <- Change to http if not using https
vault status
Example output
$ vault status
Key                Value
---                -----
Seal Type          shamir
Initialized        false (1)
Sealed             true (2)
Total Shares       0
Threshold          0
Unseal Progress    0/0
Unseal Nonce       n/a
Version            1.7.0
Storage Type       file
HA Enabled         false
1 The KMS is not initialized
2 The vault is sealed

2.5. Initializing HashiCorp Vault

To initialize your HashiCorp Vault, use the following command:

vault operator init
Example output
$ vault operator init
Unseal Key 1: ipjXvCrThyh8WM2wmEIkWWWXRe3IFNPwoxNfNndbLjxU (1)
Unseal Key 2: ENbgK3UsA+mNWIZ5NKQXlGR+Sd7NzHnPGSRoaZeRRPoE
Unseal Key 3: mKPWCEU7KMSOpLDdEgxFxLzHrqMi4MI1g1DaPsK2An6O
Unseal Key 4: 7V2hdNMp+HB9DrQqi0jn1KPjSYfXwPkw4U99N+KUD/wu
Unseal Key 5: AfQkqT+Z/O+eBcbK1gq2PiVYwzMU6Ijl6oRkUWfQumNC

Initial Root Token: s.BdZ4mPw3J6MdjUyPA5oLum7R (2)

Vault initialized with 5 key shares and a key threshold of 3. Please securely
distribute the key shares printed above. When the Vault is re-sealed,
restarted, or stopped, you must supply at least 3 of these keys to unseal it
before it can start servicing requests.

Vault does not store the generated master key. Without at least 3 keys to
reconstruct the master key, Vault will remain permanently sealed!

It is possible to generate new unseal keys, provided you have a quorum of
existing unseal keys shares. See "vault operator rekey" for more information.
1 A set of 5 Unseal Keys. You will need at least 3 to unseal the vault
2 The Root Token to grant root access to your KMS and configure it
Save the information above; it is not stored anywhere else and cannot be recovered.

Now that the vault is initialized, it must be unsealed so its configuration can be modified or customized. Use the command below to unseal the vault. When prompted, enter one of the Unseal Keys.

vault operator unseal
Example output
Unseal Key (will be hidden):
Key                Value
---                -----
Seal Type          shamir
Initialized        true
Sealed             true
Total Shares       5
Threshold          3
Unseal Progress    1/3 (1)
Unseal Nonce       8c3df261-8318-0ed6-d15c-45f62e34c0ab
Version            1.7.0
Storage Type       file
HA Enabled         false
1 This field shows the progress of the unsealing sequence.
Repeat the vault operator unseal command two more times, entering a different Unseal Key each time.

Once the third Unseal key is successfully entered the status of the vault will change as illustrated below.

Example output
$ vault operator unseal
Unseal Key (will be hidden):
Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false (1)
Total Shares    5
Threshold       3
Version         1.7.0
Storage Type    file
Cluster Name    localvault
Cluster ID      c4f770b8-b571-8c4f-b668-9dcf7cbf0c33
HA Enabled      false
1 The vault is now unsealed.

2.6. Security configuration

You can enable the username and password login capability, which is disabled by default, so you can log in with a standard username and password rather than the Root Token.

vault login {Root Token}
vault auth enable userpass
vault write auth/userpass/users/{username} password='{password}' policies=admins
Example output
$ vault login s.BdZ4mPw3J6MdjUyPA5oLum7R
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.

Key                  Value
---                  -----
token                s.BdZ4mPw3J6MdjUyPA5oLum7R
token_accessor       oy8eRQyt1IdDcUnuHudSh7qX
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]
$ vault auth enable userpass
Success! Enabled userpass auth method at: userpass/
$ vault write auth/userpass/users/myuser password='RedHat' policies=admins
Success! Data written to: auth/userpass/users/myuser
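
The admins policy referenced above does not exist by default. A minimal sketch of creating it, assuming a broad administrative policy is acceptable in your environment (tighten the paths and capabilities to your security requirements):

echo 'path "*" {
  capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}' | vault policy write admins -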

2.7. Create dedicated KV store

Create a dedicated key-value store engine as a receptacle for the ODF keys as they get generated during the deployment of an OSD. Together with the key-value store, create a dedicated security policy and a specific security token to be used by ODF to interact with the vault.

vault secrets enable -path=ocs kv
echo 'path "ocs/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
  path "sys/mounts" {
  capabilities = ["read"]
 }'| vault policy write ocs -
vault token create -policy=ocs -format json
Example output
$ vault secrets enable -path=ocs kv (1)
Success! Enabled the kv secrets engine at: ocs/
$ echo 'path "ocs/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
  path "sys/mounts" {
  capabilities = ["read"]
 }'| vault policy write ocs -
Success! Uploaded policy: ocs
$ vault token create -policy=ocs -format json
{
  "request_id": "f3fd9e21-24bd-0685-b9ba-d40c34701abd",
  "lease_id": "",
  "lease_duration": 0,
  "renewable": false,
  "data": null,
  "warnings": null,
  "auth": {
    "client_token": "s.jEQgA9dTDudlGrTUFnn3c45q", (2)
    "accessor": "ZtyshPTy4ltNNDXW6s0zl6F0",
    "policies": [
      "default",
      "ocs"
    ],
    "token_policies": [
      "default",
      "ocs"
    ],
    "identity_policies": null,
    "metadata": null,
    "orphan": false,
    "entity_id": "",
    "lease_duration": 2764800,
    "renewable": true
  }
}
1 ocs is the name of the key-value store dedicated to ODF. It is also known as the KV backend path.
2 This is the token to be used by ODF to authenticate with vault.
At this point your vault configuration is ready.
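
Optionally, you can sanity check the new key-value store from your current CLI session; a minimal sketch using a throwaway key (the key name is arbitrary):

# Write, read back and remove a test key under the ocs path
vault kv put ocs/sanity-check test=ok
vault kv get ocs/sanity-check
vault kv delete ocs/sanity-check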

3. Cluster-wide at-rest encryption

In this section you will be using an OCP cluster to deploy ODF 4.7 using OperatorHub. The following will be installed:

  • The ODF Operator

  • The ODF storage cluster (Ceph Pods, NooBaa Pods, StorageClasses)

3.1. Deploy ODF operator

Navigate to the Operators → OperatorHub menu.

OCP OperatorHub
Figure 2. OCP OperatorHub

Now type openshift container storage in the Filter by keyword… box.

OCP OperatorHub Filter
Figure 3. OCP OperatorHub filter on OpenShift Data Foundation Operator

Select OpenShift Data Foundation Operator and then select Install.

OCP OperatorHub Install
Figure 4. OCP OperatorHub Install OpenShift Data Foundation

On the next screen make sure the settings are as shown in this figure.

OCP OperatorHub Subscribe
Figure 5. OCP Subscribe to OpenShift Data Foundation

Click Install.

Verify the operator is deployed successfully.

Navigate to the Operators → Installed Operators menu.

Select the openshift-storage namespace at the top of the UI pane as illustrated below.

ODF Operator Deployed
Figure 6. Successful Operator Deployment
The operator status should be Succeeded.

To check using the CLI, use the following command.

oc get pods,csv -n openshift-storage
Example output
NAME                                        READY   STATUS    RESTARTS   AGE
pod/noobaa-operator-7d4999c99f-9l88r        1/1     Running   0          71s
pod/ocs-metrics-exporter-7b499fd65c-m89sc   1/1     Running   0          70s
pod/ocs-operator-7564cf58b7-jbmfx           1/1     Running   0          71s
pod/rook-ceph-operator-b58cfd5c-fbjlh       1/1     Running   0          71s

NAME                                                                    DISPLAY                       VERSION        REPLACES   PHASE
clusterserviceversion.operators.coreos.com/ocs-operator.v4.7.0-353.ci   OpenShift Container Storage   4.7.0-353.ci              Succeeded
The Succeeded phase status is the desired state for the Cluster Service Version (CSV). Reaching this state can take several minutes.
Your ODF version might be different from the one used during the creation of this lab environment. Just make sure it is version 4.7.0 or higher.

3.2. Create encrypted storage cluster

Navigate to the Operators → Installed Operators menu.

OCP OperatorHub
Figure 7. Locate ODF Operator

Click on Storage Cluster on the right hand side of the UI as indicated in the screen capture above.

ODF create Storage Cluster
Figure 8. ODF Storage Cluster

Click on Create Storage Cluster on the right hand side of the UI.

ODF node selection
Figure 9. ODF Select Nodes & Storage Class

Select the worker nodes for your StorageCluster as illustrated above and click Next.

KMS basic configuration
Figure 10. ODF Basic External KMS Configuration

Enter the basic details for your configuration.

  1. Enable encryption by checking this box

  2. Select cluster-wide encryption by checking this box

  3. Select external KMS by checking this box

  4. Provide a unique name for your KMS service (any character string)

  5. Provide the url to your vault configuration (can be http or https)

  6. Provide the TCP port for your vault configuration (default is 8200)

  7. Provide the security token generated for your ocs policy in chapter Create dedicated KV store

Click Advanced Settings to provide the specific HashiCorp Vault parameters.

KMS advanced configuration
Figure 11. ODF Advanced External KMS Configuration

Enter the advanced details for your configuration.

  1. Enter the name of the KV store you created for ODF (ocs in this guide)

  2. Enter your HashiCorp Vault server FQDN

  3. Use the browse button to select the fullchain.pem file generated by certbot

  4. Use the browse button to select the cert.pem file generated by certbot

  5. Use the browse button to select the privkey.pem file generated by certbot

The Vault Enterprise Namespace can be ignored for this setup.
If you have not configured HashiCorp Vault to use https, simply enter the Backend Path parameter and ignore the other parameters (2 through 5).

Click Save to return to the previous screen.

Click Next to go to the Storage Cluster Review screen.

Storage Cluster parameter review
Figure 12. ODF Review Cluster Parameters

Click Create to start the deployment of the ODF cluster.

After a while the cluster should be deployed and its status should be Ready as illustrated below.

Storage Cluster ready
Figure 13. ODF Cluster Ready

3.3. Verify encryption keys

Open a web browser and point it to {vault-fqdn}:8200/ui/vault/auth?with=token.

Vault login page
Figure 14. Vault Login UI
  1. In the Token field, enter the token you created for your ODF security policy in Create dedicated KV store

Click Sign In.

Vault secret engines
Figure 15. Vault Secret Engines

Click on the secret engine you created for ODF, in our example ocs.

Vault ODF key list
Figure 16. Vault ODF Key List

As you can see, secret keys were generated for the OSDs in your storage cluster. They are physically stored in the HashiCorp Vault instance.
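
If you prefer the CLI over the UI, a quick check of the same keys can be run from the Vault host, assuming you are still logged in to Vault from the previous chapter:

# List the keys stored under the ocs path
vault kv list ocs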

3.4. Expand ODF cluster

Expand the cluster through the UI, as with existing versions of ODF, and verify that additional encryption keys are generated and stored in your HashiCorp Vault instance, as illustrated below.

Vault ODF additional key list
Figure 17. Vault ODF Expansion Key List

We now have a total of 6 encryption keys.

4. Granular PersistentVolume at-rest encryption

To use PersistentVolume encryption, you must set up a new storage class configured to use the external Key Management System configured in the previous sections of this guide.

The current version does not allow PersistentVolume level encryption to use a separate KMS backend. The only customization allowed for this type of encryption is the access token used to store the key generated by the application.

4.1. Specific storage class

Navigate to the Storage → Storage Classes menu.

OCP Storage Classes
Figure 18. OCP Storage Classes

Click Create Storage Class in the top right of the UI.

Enter the details for your new storage class as illustrated below.

Encrypted storage class details
Figure 19. Encrypted Storage Class
  1. Specify the name of your storage class

  2. Select the Ceph CSI RBD provisioner

  3. Choose the Ceph pool receiving the PersistentVolumes

  4. Enable encryption for this storage class

The pool can be the same as the default pool.
CephFS based PV encryption is not yet available.

Click Create in the UI.

4.2. Test application

Create a new project for your test application using the following command:

oc new-project my-rbd-storage
Example output
Now using project "my-rbd-storage" on server "https://api.ocp45.ocstraining.com:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app rails-postgresql-example

to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=k8s.gcr.io/serve_hostname

Create a secret to hold the vault access token specific to this project. Use the following template to create the secret.

---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-csi-kms-token
  namespace: my-rbd-storage
stringData:
  token: "{application_vault_token}"

Replace {application_vault_token} with your actual token.
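
If you prefer the CLI, the same secret can be created with a command along these lines (the secret name and key match the template above; replace the token placeholder with your actual token):

oc create secret generic ceph-csi-kms-token \
  --from-literal=token={application_vault_token} \
  -n my-rbd-storage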

Deploy your application using the dedicated storage class you just created. Use the following command to do so:

cat <<EOF | oc create -f -
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-cephrbd1
  namespace: my-rbd-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
  storageClassName: encrypted-rbd
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-cephrbd2
  namespace: my-rbd-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  storageClassName: encrypted-rbd
---
apiVersion: batch/v1
kind: Job
metadata:
  name: batch2
  namespace: my-rbd-storage
  labels:
    app: batch2
spec:
  template:
    metadata:
      labels:
        app: batch2
    spec:
      restartPolicy: OnFailure
      containers:
      - name: batch2
        image: amazon/aws-cli:latest
        command: ["sh"]
        args:
          - '-c'
          - 'while true; do echo "Creating temporary file"; export mystamp=$(date +%Y%m%d_%H%M%S); dd if=/dev/urandom of=/mnt/file_${mystamp} bs=1M count=1; echo "Copying temporary file"; cp /mnt/file_${mystamp} /tmp/file_${mystamp}; echo "Going to sleep"; sleep 60; echo "Removing temporary file"; rm /mnt/file_${mystamp}; done'
        volumeMounts:
        - name: tmp-store
          mountPath: /tmp
        - name: tmp-file
          mountPath: /mnt
      volumes:
      - name: tmp-store
        persistentVolumeClaim:
          claimName: pvc-cephrbd1
          readOnly: false
      - name: tmp-file
        persistentVolumeClaim:
          claimName: pvc-cephrbd2
          readOnly: false
EOF
Example output
persistentvolumeclaim/pvc-cephrbd1 created
persistentvolumeclaim/pvc-cephrbd2 created
job.batch/batch2 created

Verify the status of the application and its different components.

oc describe pod
Example output
[...]
Volumes:
  tmp-store:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-cephrbd1
    ReadOnly:   false
  tmp-file:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-cephrbd2
    ReadOnly:   false
  default-token-rghg5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-rghg5
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age    From                     Message
  ----     ------                  ----   ----                     -------
  Warning  FailedScheduling        8m45s  default-scheduler        0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling        8m45s  default-scheduler        0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled               8m42s  default-scheduler        Successfully assigned my-rbd-storage/batch2-n4cqv to ip-10-0-202-113.us-east-2.compute.internal
  Normal   SuccessfulAttachVolume  8m43s  attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-f884eadc-9d37-4111-85ea-123c78b646a7"
  Normal   SuccessfulAttachVolume  8m43s  attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-93affaed-40f4-4fba-b907-53fbeefbd03f"
  Normal   AddedInterface          8m24s  multus                   Add eth0 [10.128.2.19/23]
  Normal   Pulling                 8m23s  kubelet                  Pulling image "amazon/aws-cli:latest"
  Normal   Pulled                  8m23s  kubelet                  Successfully pulled image "amazon/aws-cli:latest" in 563.111829ms
  Normal   Created                 8m23s  kubelet                  Created container batch2
  Normal   Started                 8m23s  kubelet                  Started container batch2
oc get pvc
Example output
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
pvc-cephrbd1   Bound    pvc-93affaed-40f4-4fba-b907-53fbeefbd03f   500Gi      RWO            encrypted-rbd   9m30s
pvc-cephrbd2   Bound    pvc-f884eadc-9d37-4111-85ea-123c78b646a7   500Mi      RWO            encrypted-rbd   9m30s

You can also verify that the HashiCorp Vault secret engine now contains two PersistentVolume specific keys.

PV specific keys created
Figure 20. Vault PV Specific Keys
When deleting your application, make sure you delete your application pods and PVCs before deleting the secret that contains your access token to the vault. If you fail to do so, you will end up with orphaned PV keys in your vault.

5. CLI deployment

If needed, an encrypted at-rest cluster that uses HashiCorp Vault can be deployed using the CLI. This section covers this specific procedure:

  1. Deploy ODF operator

  2. Create your KMS specific configuration

  3. Create your customized StorageCluster configuration

  4. Deploy your ODF cluster

5.1. Deploy ODF operator

Depending on your environment, you might have to deploy the Local Storage Operator and configure it. Follow the procedure here on this web site.

Label the nodes to be used by ODF.

oc label node -l node-role.kubernetes.io/worker="" cluster.ocs.openshift.io/openshift-storage=''
Example output
oc label node -l node-role.kubernetes.io/worker="" cluster.ocs.openshift.io/openshift-storage=''
node/ip-10-0-134-254.us-east-2.compute.internal labeled
node/ip-10-0-186-246.us-east-2.compute.internal labeled
node/ip-10-0-194-104.us-east-2.compute.internal labeled

Create openshift-storage namespace.

cat <<EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
  name: openshift-storage
spec: {}
EOF

Create Operator Group for ODF Operator.

cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-storage-operatorgroup
  namespace: openshift-storage
spec:
  targetNamespaces:
  - openshift-storage
EOF

Subscribe to ODF Operator.

cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ocs-operator
  namespace: openshift-storage
spec:
  channel: "stable-4.7"
  installPlanApproval: Automatic
  name: ocs-operator
  source: redhat-operators  # <-- Modify the name of the redhat-operators catalogsource if not default
  sourceNamespace: openshift-marketplace
EOF
Verify your ODF Operator has been deployed using the oc get pods -n openshift-storage or oc get csv -n openshift-storage commands.

5.2. Create KMS configuration

Create a KMS configuration in the openshift-storage namespace.

  1. If using https configure secrets

  2. Create the external vault configuration map

    1. For ODF

    2. For CSI

  3. Create the vault access token secret

5.2.1. https CLI configuration

All secrets for https are base64 encoded. Encode each of the following files using the following command: cat {filename.pem} | base64 (see the sketch after this list).

  • fullchain.pem

  • cert.pem

  • privkey.pem
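
For example, a minimal sketch of encoding the three files, assuming they were copied to ./vault/config/vault-server-tls as described earlier (on Linux, -w0 keeps each encoded value on a single line):

# Print the base64-encoded value of each PEM file
for f in fullchain.pem cert.pem privkey.pem; do
  echo "${f}:"
  base64 -w0 ./vault/config/vault-server-tls/${f}
  echo
done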

Create the following secrets in the openshift-storage namespace.

If you have not configured HashiCorp Vault with https, just go to ODF CLI configuration
apiVersion: v1
data:
  cert: {fullchain.pem_encoded_value}
kind: Secret
metadata:
  name: ocs-kms-ca-secret
  namespace: openshift-storage
type: Opaque
---
apiVersion: v1
data:
  cert: {cert.pem_encoded_value}
kind: Secret
metadata:
  name: ocs-kms-client-cert
  namespace: openshift-storage
type: Opaque
---
apiVersion: v1
data:
  cert: {privkey.pem_encoded_value}
kind: Secret
metadata:
  name: ocs-kms-client-key
  namespace: openshift-storage
type: Opaque
---
apiVersion: v1
data:
  token: {vault_token_encoded_value}
kind: Secret
metadata:
  name: ocs-kms-token
  namespace: openshift-storage
type: Opaque
Example output
secret/ocs-kms-ca-secret created
secret/ocs-kms-client-cert created
secret/ocs-kms-client-key created
secret/ocs-kms-token created

5.2.2. ODF CLI configuration

Create the external HashiCorp Vault configuration for ODF using the secrets above.

apiVersion: v1
data:
  KMS_PROVIDER: vault
  KMS_SERVICE_NAME: {vault_service_name} (1)
  VAULT_ADDR: {vault_url}:{vault_port} (2)
  VAULT_BACKEND_PATH: {backend_path} (3)
  VAULT_CACERT: ocs-kms-ca-secret
  VAULT_CLIENT_CERT: ocs-kms-client-cert
  VAULT_CLIENT_KEY: ocs-kms-client-key
  VAULT_NAMESPACE: ""
  VAULT_TLS_SERVER_NAME: {vault_name} (4)
kind: ConfigMap
metadata:
  name: ocs-kms-connection-details
  namespace: openshift-storage
1 Name your KMS configuration e.g. external-vault
2 Replace with your vault FQDN e.g. https://external-vault.ocstraining.com:8200
3 Replace with your vault secret engine path e.g. ocs/
4 Specify a name for your server e.g. external-vault.ocstraining.com
If HashiCorp Vault is not configured with https you can omit the VAULT_CACERT, VAULT_CLIENT_CERT, VAULT_CLIENT_KEY and VAULT_TLS_SERVER_NAME parameters.

5.2.3. CSI configuration

Create the external HashiCorp Vault configuration for CSI using the secrets above.

apiVersion: v1
data:
  1-external-vault: '{"KMS_PROVIDER":"vaulttokens","KMS_SERVICE_NAME":"{vault_service_name}","VAULT_ADDR":"{vault_url}:{vault_port}","VAULT_BACKEND_PATH":"{backend_path}","VAULT_CACERT":"ocs-kms-ca-secret","VAULT_TLS_SERVER_NAME":"{vault_name}","VAULT_CLIENT_CERT":"ocs-kms-client-cert","VAULT_CLIENT_KEY":"ocs-kms-client-key","VAULT_NAMESPACE":"","VAULT_TOKEN_NAME":"ocs-kms-token","VAULT_CACERT_FILE":"fullchain.pem","VAULT_CLIENT_CERT_FILE":"cert.pem","VAULT_CLIENT_KEY_FILE":"privkey.pem"}'
kind: ConfigMap
metadata:
  name: csi-kms-connection-details
  namespace: openshift-storage
Replace the values {vault_service_name}, {vault_url}, {vault_port}, {backend_path} and {vault_name} with the values you have configured.
If HashiCorp Vault is not configured with https assign a "" value to the VAULT_CACERT, VAULT_CLIENT_CERT, VAULT_CLIENT_KEY and VAULT_TLS_SERVER_NAME parameters.
Example output
configmap/ocs-kms-connection-details created
configmap/csi-kms-connection-details created

5.3. Create custom ODF cluster configuration

Create a storagecluster.yaml configuration that contains the parameters to enable at-rest encryption using an external HashiCorp Vault server. The template below can be used to create your StorageCluster CR.

---
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  annotations:
    uninstall.ocs.openshift.io/cleanup-policy: delete
    uninstall.ocs.openshift.io/mode: graceful
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  arbiter: {}
  encryption:
    enable: true				# <- Enable at-rest encryption
    kms:
      enable: true				# <- Enable external KMS service for your keys
  externalStorage: {}
  managedResources:
    cephBlockPools: {}
    cephConfig: {}
    cephFilesystems: {}
    cephObjectStoreUsers: {}
    cephObjectStores: {}
  nodeTopologies: {}
  storageDeviceSets:
  - config: {}
    count: 1
    dataPVCTemplate:
      metadata: {}
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: {size}			# <- Use the desired size for your storage class
        storageClassName: {storageclass}	# <- Use the desired storage class for your environment
        volumeMode: Block
    name: ocs-deviceset-{storageclass}		# <- Customize the PVC name for your environment
    portable: true
    preparePlacement: {}
    replica: 3
  version: 4.7.0

5.4. Deploy ODF cluster

Create your ODF cluster using the template file above.

Example output
oc create -f storagecluster-encrypted-kms.yaml
storagecluster.ocs.openshift.io/ocs-storagecluster created

Monitor the openshift-storage namespace to verify your cluster is coming online.

oc get pod,pvc -n openshift-storage
oc get storagecluster -n openshift-storage
oc get cephcluster -n openshift-storage
Example output
$ oc get pod,pvc -n openshift-storage
NAME                                                                  READY   STATUS      RESTARTS   AGE
pod/csi-cephfsplugin-mjj7b                                            3/3     Running     0          7m26s
pod/csi-cephfsplugin-p6pff                                            3/3     Running     0          7m26s
pod/csi-cephfsplugin-provisioner-f975d886c-6trbh                      6/6     Running     0          7m25s
pod/csi-cephfsplugin-provisioner-f975d886c-8tgws                      6/6     Running     0          7m26s
pod/csi-cephfsplugin-s7h6g                                            3/3     Running     0          7m26s
pod/csi-rbdplugin-9bq45                                               3/3     Running     0          7m26s
pod/csi-rbdplugin-provisioner-6bbf798bfb-9lttr                        6/6     Running     0          7m26s
pod/csi-rbdplugin-provisioner-6bbf798bfb-n5gxr                        6/6     Running     0          7m26s
pod/csi-rbdplugin-tpcvv                                               3/3     Running     0          7m26s
pod/csi-rbdplugin-wkplf                                               3/3     Running     0          7m26s
pod/noobaa-core-0                                                     1/1     Running     0          4m3s
pod/noobaa-db-pg-0                                                    1/1     Running     0          4m3s
pod/noobaa-endpoint-b6f7fb9c8-6mx58                                   1/1     Running     0          2m32s
pod/noobaa-operator-67dc46d9d5-v9q5m                                  1/1     Running     0          37m
pod/ocs-metrics-exporter-7c44944fd6-fzdfh                             1/1     Running     0          37m
pod/ocs-operator-5d55f4d88b-jptqr                                     1/1     Running     0          37m
pod/rook-ceph-crashcollector-ip-10-0-134-254-6f4545b94b-hz42l         1/1     Running     0          6m39s
pod/rook-ceph-crashcollector-ip-10-0-186-246-5d8496576-w9vwx          1/1     Running     0          5m43s
pod/rook-ceph-crashcollector-ip-10-0-194-104-6df5597756-wcwbj         1/1     Running     0          6m14s
pod/rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-5b9f876cwg59f   2/2     Running     0          3m53s
pod/rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-5547d7cf9655x   2/2     Running     0          3m52s
pod/rook-ceph-mgr-a-5bc78f6d94-h6gpq                                  2/2     Running     0          4m55s
pod/rook-ceph-mon-a-866fdd69b7-gmk5g                                  2/2     Running     0          6m52s
pod/rook-ceph-mon-b-6bdb9f966c-qj7j2                                  2/2     Running     0          6m14s
pod/rook-ceph-mon-c-7c9cdc7f47-v4tlc                                  2/2     Running     0          5m43s
pod/rook-ceph-operator-6ddb556fd7-6pbqs                               1/1     Running     0          37m
pod/rook-ceph-osd-0-5f8b85475b-cp955                                  2/2     Running     0          4m9s
pod/rook-ceph-osd-1-7b66f8d755-jzvgp                                  2/2     Running     0          4m8s
pod/rook-ceph-osd-2-d765b96f5-snkjs                                   2/2     Running     0          4m4s
pod/rook-ceph-osd-prepare-ocs-deviceset-gp2-0-data-0vgg9c-j4lrn       0/1     Completed   0          4m53s
pod/rook-ceph-osd-prepare-ocs-deviceset-gp2-1-data-07nkxq-bpmcz       0/1     Completed   0          4m51s
pod/rook-ceph-osd-prepare-ocs-deviceset-gp2-2-data-09x8d4-nrq6h       0/1     Completed   0          4m50s

NAME                                                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
persistentvolumeclaim/db-noobaa-db-pg-0                 Bound    pvc-f903e155-a6be-4272-9780-4057cf1f9146   50Gi       RWO            ocs-storagecluster-ceph-rbd   4m4s
persistentvolumeclaim/ocs-deviceset-gp2-0-data-0vgg9c   Bound    pvc-356fce40-5f7e-4c88-8744-22e965420bf7   2Ti        RWO            gp2                           4m55s
persistentvolumeclaim/ocs-deviceset-gp2-1-data-07nkxq   Bound    pvc-2a1e7ae5-20dc-4696-b247-a055d24c0396   2Ti        RWO            gp2                           4m55s
persistentvolumeclaim/ocs-deviceset-gp2-2-data-09x8d4   Bound    pvc-189d0d6e-707d-4409-bde9-fd303a30940b   2Ti        RWO            gp2                           4m55s
persistentvolumeclaim/rook-ceph-mon-a                   Bound    pvc-5740c8aa-3a52-4a41-9989-5197fc052c09   10Gi       RWO            gp2                           7m5s
persistentvolumeclaim/rook-ceph-mon-b                   Bound    pvc-7d870739-1e26-4b50-adde-4c941f4e5551   10Gi       RWO            gp2                           7m5s
persistentvolumeclaim/rook-ceph-mon-c                   Bound    pvc-57a7906b-33bf-4764-be8a-ab4ac72a9b27   10Gi       RWO            gp2                           7m4s
$ oc get storagecluster -n openshift-storage
NAME                 AGE   PHASE   EXTERNAL   CREATED AT             VERSION
ocs-storagecluster   10m   Ready              2021-04-21T20:55:57Z   4.7.0
$ oc get cephcluster -n openshift-storage
NAME                             DATADIRHOSTPATH   MONCOUNT   AGE     PHASE   MESSAGE                        HEALTH
ocs-storagecluster-cephcluster   /var/lib/rook     3          7m34s   Ready   Cluster created successfully   HEALTH_OK

6. Granular PersistentVolume at-rest encryption without cluster-wide encryption

It is possible to provide PV level encryption on a cluster without cluster-wide at-rest encryption.

Create a KMS configuration in the openshift-storage namespace.

  1. If using https configure secrets

  2. Create the external vault configuration map

    1. For ODF

    2. For CSI

  3. Create the HashiCorp Vault access token secret

6.1. https configuration

All secrets for https are base64 encoded. Encode each of the following files using the following command: cat {filename.pem} | base64

  • fullchain.pem

  • cert.pem

  • privkey.pem

Create the following secrets in the openshift-storage namespace.

apiVersion: v1
data:
  cert: {fullchain.pem_encoded_value}
kind: Secret
metadata:
  name: ocs-kms-ca-secret
  namespace: openshift-storage
type: Opaque
---
apiVersion: v1
data:
  cert: {cert.pem_encoded_value}
kind: Secret
metadata:
  name: ocs-kms-client-cert
  namespace: openshift-storage
type: Opaque
---
apiVersion: v1
data:
  cert: {privkey.pem_encoded_value}
kind: Secret
metadata:
  name: ocs-kms-client-key
  namespace: openshift-storage
type: Opaque
The vault access token secret to be used by the application is created in the application namespace and not in the openshift-storage namespace. See Test application

6.2. ODF configuration

Create the external vault configuration for ODF using the secrets above.

apiVersion: v1
data:
  KMS_PROVIDER: vault
  KMS_SERVICE_NAME: {vault_service_name} (1)
  VAULT_ADDR: {vault_url}:{vault_port} (2)
  VAULT_BACKEND_PATH: {backend_path} (3)
  VAULT_CACERT: ocs-kms-ca-secret
  VAULT_CLIENT_CERT: ocs-kms-client-cert
  VAULT_CLIENT_KEY: ocs-kms-client-key
  VAULT_NAMESPACE: ""
  VAULT_TLS_SERVER_NAME: {vault_name} (4)
kind: ConfigMap
metadata:
  name: ocs-kms-connection-details
  namespace: openshift-storage
1 Name your KMS configuration e.g. external-vault
2 Replace with your vault FQDN e.g. https://external-vault.ocstraining.com:8200
3 Replace with your vault secret engine path e.g. ocs/
4 Specify a name for your server e.g. external-vault.ocstraining.com
If HashiCorp Vault is not configured with https you can omit the VAULT_CACERT, VAULT_CLIENT_CERT, VAULT_CLIENT_KEY and VAULT_TLS_SERVER_NAME parameters.

6.3. CSI configuration

Create the external vault configuration for CSI using the secret above.

apiVersion: v1
data:
  1-external-vault: '{"KMS_PROVIDER":"vaulttokens","KMS_SERVICE_NAME":"{vault_service_name}","VAULT_ADDR":"{vault_url}:{vault_port}","VAULT_BACKEND_PATH":"{backend_path}","VAULT_CACERT":"ocs-kms-ca-secret","VAULT_TLS_SERVER_NAME":"{vault_name}","VAULT_CLIENT_CERT":"ocs-kms-client-cert","VAULT_CLIENT_KEY":"ocs-kms-client-key","VAULT_NAMESPACE":"","VAULT_TOKEN_NAME":"ocs-kms-token","VAULT_CACERT_FILE":"fullchain.pem","VAULT_CLIENT_CERT_FILE":"cert.pem","VAULT_CLIENT_KEY_FILE":"privkey.pem"}'
kind: ConfigMap
metadata:
  name: csi-kms-connection-details
  namespace: openshift-storage
Replace the values {vault_service_name}, {vault_url}, {vault_port}, {backend_path} and {vault_name} with the values you have configured.
If HashiCorp Vault is not configured with https assign a "" value to the VAULT_CACERT, VAULT_CLIENT_CERT, VAULT_CLIENT_KEY and VAULT_TLS_SERVER_NAME parameters.
You can combine PV level encryption, which can only be configured with an external KMS, with cluster-wide at-rest encryption using locally stored keys (ODF 4.6+).

7. Granular PersistentVolume at-rest encryption without cluster-wide encryption: Kubernetes Auth Method - ServiceAccounts

Before ODF can use external Vault with service account tokens, we have to configure the authentication method and the Vault roles and policies. Details of the Kubernetes Authentication Method are out of scope for this document but are well explained by Vault Integration Using Kubernetes Authentication Method. In addition, HashiCorp Vault provides tutorials on general integration between Vault and Kubernetes.

This section outlines steps for using an external Vault instance with the following configuration options:

  • Vault instance external to the OpenShift cluster

  • An OpenShift ServiceAccount token is used to authenticate with Vault via the Kubernetes Authentication method

  • Vault namespace for reading/writing secrets used for encryption (optional)

  • Secrets engine not enabled under the Vault namespace but at a different path (optional)

  • Vault Backend is kv v2 (optional)

  • HTTPS (optional)

  • Vault Certificate Authority verified (optional)

Please gather the following information and verify the following:

  1. Your Vault Address (i.e. vault.myvault.com:8200)

  2. Ensure your Vault instance has access to your cluster API endpoint.

  3. Access to configure Vault auth methods, policies, and secrets engine enablement, or access to a Vault administrator who can do it for you.

  4. If you are verifying the Vault CA certificate, please have your Vault CA cert (PEM) available as you will need to base64 encode this cert in a secret. Detailed steps below.

7.1. ServiceAccounts, bindings and roles

Setup for using the Kubernetes Authentication method must be configured before ODF can authenticate with and start using Vault. The instructions below create and configure the ServiceAccounts, ClusterRole and ClusterRoleBinding required to allow the ODF default ServiceAccount to authenticate with Vault.

Apply the following to your Openshift cluster:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-csi-vault-token-review
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-vault-token-review
rules:
  - apiGroups: ["authentication.k8s.io"]
    resources: ["tokenreviews"]
    verbs: ["create", "get", "list"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-vault-token-review
subjects:
  - kind: ServiceAccount
    name: rbd-csi-vault-token-review
    namespace: default
  - kind: ServiceAccount
    name: default
    namespace: openshift-storage
  - kind: ServiceAccount
    name: rook-csi-rbd-plugin-sa
    namespace: openshift-storage
  - kind: ServiceAccount
    name: rook-csi-rbd-provisioner-sa
    namespace: openshift-storage
roleRef:
  kind: ClusterRole
  name: rbd-csi-vault-token-review
  apiGroup: rbac.authorization.k8s.io
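
A brief usage sketch, assuming you saved the manifest above as rbd-csi-vault-token-review.yaml (the file name is only an example):

# Apply the ServiceAccount, ClusterRole and ClusterRoleBinding
oc apply -f rbd-csi-vault-token-review.yaml
# Confirm the binding exists
oc get clusterrolebinding rbd-csi-vault-token-review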

7.2. Configure/Verify Vault Authentication

As mentioned in the previous step, setup for using the Kubernetes Authentication method must be configured before ODF can authenticate with and start using Vault. In addition to the previous step, ensure Vault has the right roles and policies created by applying the steps below.

The steps below assume you have root access to Vault. If you do not have root access to Vault (i.e. Vault is administered by another team) please forward these instructions to your Vault administrator.

Apply the following to your cluster:
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: rbd-csi-vault-token-review-psp
spec:
  fsGroup:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - 'configMap'
    - 'secret'

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: openshift-storage
  name: rbd-csi-vault-token-review-psp
rules:
  - apiGroups: ['policy']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['rbd-csi-vault-token-review-psp']

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-vault-token-review-psp
  namespace: openshift-storage
subjects:
  - kind: ServiceAccount
    name: rbd-csi-vault-token-review
    namespace: openshift-storage
roleRef:
  kind: Role
  name: rbd-csi-vault-token-review-psp
  apiGroup: rbac.authorization.k8s.io

7.3. Configure/Verify Vault Policies for ODF

The following step requires customization to your exact environment. Please do not apply to your cluster until after making all necessary changes. Details for each configuration option are below.

Apply the following for the Vault setup configuration:
---
apiVersion: v1
kind: Service
metadata:
  name: vault
  labels:
    app: vault-api
spec:
  ports:
    - name: vault-api
      port: 8200
  clusterIP: None
  selector:
    app: vault
    role: server

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vault
  labels:
    app: vault
    role: server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vault
      role: server
  template:
    metadata:
      labels:
        app: vault
        role: server
    spec:
      containers:
        - name: vault
          image: docker.io/library/vault:latest
          imagePullPolicy: "IfNotPresent"
          securityContext:
            runAsUser: 100
          env:
            - name: VAULT_DEV_ROOT_TOKEN_ID
              value: sample_root_token (1)
            - name: SKIP_SETCAP
              value: any
          livenessProbe:
            exec:
              command:
                - pidof
                - vault
            initialDelaySeconds: 5
            timeoutSeconds: 2
          ports:
            - containerPort: 8200
              name: vault-api
---
apiVersion: v1
items:
  - apiVersion: v1
    data:
      init-vault.sh: |
        set -x -e

        timeout 300 sh -c 'until vault status; do sleep 5; done'

        # login into vault to retrieve token
        vault login ${VAULT_DEV_ROOT_TOKEN_ID}

        # enable kubernetes auth method under specific path:
        vault auth enable -path="/${CLUSTER_IDENTIFIER}" kubernetes

        # write configuration to use your cluster
        vault write auth/${CLUSTER_IDENTIFIER}/config \
          token_reviewer_jwt=@${SERVICE_ACCOUNT_TOKEN_PATH}/token \
          kubernetes_host="${K8S_HOST}" \
          kubernetes_ca_cert=@${SERVICE_ACCOUNT_TOKEN_PATH}/ca.crt

        # create policy to use keys related to the cluster
        vault policy write "${CLUSTER_IDENTIFIER}" - << EOS
        path "secret/data/ceph-csi/*" {
          capabilities = ["create", "update", "delete", "read", "list"]
        }

        path "secret/metadata/ceph-csi/*" {
          capabilities = ["read", "delete", "list"]
        }

        path "sys/mounts" {
          capabilities = ["read"]
        }
        EOS

        # create a role
        vault write "auth/${CLUSTER_IDENTIFIER}/role/${PLUGIN_ROLE}" \
            bound_service_account_names="${SERVICE_ACCOUNTS}" \
            bound_service_account_namespaces="${SERVICE_ACCOUNTS_NAMESPACE}" \
            kubernetes_ca_cert=@${SERVICE_ACCOUNT_TOKEN_PATH}/ca.crt \
            policies="${CLUSTER_IDENTIFIER}"

        # disable iss validation
        # from: external-secrets/kubernetes-external-secrets#721
        vault write auth/kubernetes/config \
          kubernetes_host="${K8S_HOST}" \
          kubernetes_ca_cert=@${SERVICE_ACCOUNT_TOKEN_PATH}/ca.crt \
          disable_iss_validation=true
    kind: ConfigMap
    metadata:
      creationTimestamp: null
      name: init-scripts
kind: List
metadata: {}

---
apiVersion: batch/v1
kind: Job
metadata:
  name: vault-init-job
spec:
  parallelism: 1
  completions: 1
  template:
    metadata:
      name: vault-init-job
    spec:
      serviceAccount: rbd-csi-vault-token-review
      volumes:
        - name: init-scripts-volume
          configMap:
            name: init-scripts
      containers:
        - name: vault-init-job
          image: docker.io/library/vault:latest
          volumeMounts:
            - mountPath: /init-scripts
              name: init-scripts-volume
          env:
            - name: HOME
              value: /tmp
            - name: CLUSTER_IDENTIFIER
              value: kubernetes
            - name: SERVICE_ACCOUNT_TOKEN_PATH
              value: /var/run/secrets/kubernetes.io/serviceaccount
            - name: K8S_HOST
              value: https://{your_openshift_APIServer_external_endpoint} (2)
            - name: PLUGIN_ROLE
              value: csi-kubernetes
            - name: SERVICE_ACCOUNTS
              value: rbd-csi-nodeplugin,rbd-csi-provisioner,csi-rbdplugin,csi-rbdplugin-provisioner,rook-csi-rbd-provisioner-sa,rook-csi-rbd-plugin-sa
            - name: SERVICE_ACCOUNTS_NAMESPACE
              value: openshift-storage
            - name: VAULT_ADDR
              value: {your_vault_url} (3)
            - name: VAULT_DEV_ROOT_TOKEN_ID
              value: sample_root_token (1)
          command:
            - /bin/sh
            - /init-scripts/init-vault.sh
          imagePullPolicy: "IfNotPresent"
      restartPolicy: Never
1 Replace with a Vault token that allows policy creation. This token is only used during Vault configuration and may be revoked after the job above completes.
2 Replace with your Openshift API server external endpoint. (i.e. api.ocp47.myopenshift.com:6443)
3 Replace with your vault url. (i.e. vault.myvault.com:8200/)

Verify the job in the yaml above completed without error:

oc -n openshift-storage get pods | grep vault-init
oc -n openshift-storage logs pods/{POD from previous command}

7.4. Create an Encrypted StorageClass

In order to create a storage class that uses our external Vault, we must create and configure a ConfigMap named csi-kms-connection-details that will hold all the information needed to establish the connection. Our storage class needs to contain the field encryptionKMSID, whose value is used as a lookup into cm/csi-kms-connection-details.

Create the csi-kms-connection-details ConfigMap by applying the yaml below. If you change "vault-test" to a more meaningful name for your environment, do not forget to also use the new name in the encryptionKMSID field of your new StorageClass.

---
apiVersion: v1
kind: ConfigMap
data:
  vault-test : |- (1)
    {
      "encryptionKMSType": "vault",
      "vaultAddress": "{URL to your vault address, http or https, and port}", (2)
      "vaultAuthPath": "/v1/auth/kubernetes/login",
      "vaultRole": "csi-kubernetes",
      "vaultPassphraseRoot": "/v1/secret",
      "vaultPassphrasePath": "ceph-csi/",
      "vaultCAVerify": "true" (3)
    }
metadata:
  name: csi-kms-connection-details
1 You may change vault-test to a more meaningful name for your environment. Just remember to use the same value for encryptionKMSID in your StorageClass.
2 Replace with your Vault URL.
3 Change to "false" if your Vault CA should not be verified.
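
A brief usage sketch, assuming the ConfigMap above was saved as csi-kms-connection-details.yaml and belongs in the openshift-storage namespace, as in the earlier CLI sections (the file name is only an example):

oc apply -f csi-kms-connection-details.yaml -n openshift-storage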

Create a StorageClass that uses the service account for authentication.

allowVolumeExpansion: false
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: {NEW STORAGECLASS NAME} (1)
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  encrypted: "true"
  encryptionKMSID: vault-test (2)

  imageFeatures: layering
  imageFormat: "2"
  pool: ocs-storagecluster-cephblockpool
provisioner: openshift-storage.rbd.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
1 Replace with what you would like to call your new storageclass.
2 Make sure this value matches the entry in cm/csi-kms-connection-details you want this storageclass to use. See item 1 in the yaml for cm/csi-kms-connection-details above.

7.5. Create a PVC using this StorageClass

Apply the following to create PVC using your new storageclass that uses the Kubernetes Auth Method with Vault:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {PVC NAME} (1)
  namespace: {YOUR NAMESPACE} (2)
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: {NEW STORAGECLASS NAME} (3)
  volumeMode: Filesystem
1 Name your new PVC
2 Replace with your desired namespace
3 Replace with the storageclass name you previously created.
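
A brief usage sketch, assuming you saved the claim above as encrypted-pvc.yaml (the file name is only an example):

# Create the claim and check that it reaches the Bound status
oc apply -f encrypted-pvc.yaml
oc -n {YOUR NAMESPACE} get pvc {PVC NAME}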

Your PVC should bind within seconds. If your PVC is stuck in Pending, review the events and logs for possible reasons. Look for mismatches between the namespace, the ServiceAccount, and the Vault policy and role.

If you need to troubleshoot, try these steps:

oc -n openshift-storage run tmp --rm -i --tty  --serviceaccount=rook-csi-rbd-plugin-sa --image ubi8:latest

This starts a container with the ServiceAccount rook-csi-rbd-plugin-sa attached. Install jq (yum install jq) and run the following to verify that rook-csi-rbd-plugin-sa can retrieve a Vault client token:

export VAULT_ADDR={VAULT ADDR i.e. https://vault.myvault.com:8200}
export KUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
export VT=$(curl -s --request POST --data '{"jwt": "'"$KUBE_TOKEN"'", "role": "csi-kubernetes"}' $VAULT_ADDR/v1/auth/kubernetes/login | jq -r '.auth.client_token')
echo $VT

If the last command above did not return any value, there is a mismatch between the SA and namespace the pod is running as, and how the Vault policy was configured. Double check your configuration for typos.