Data Segregation
This feature is in Development Preview. |
This feature allows you to create a custom Ceph block pool that maps only to OSDs located on specific hosts or backed by a specific device type.
1. Prerequisites
The OpenShift Data Foundation operator must be installed, together with the Local Storage Operator. Note that this feature works with both dynamic provisioning based deployments (e.g. AWS gp2) and LSO based deployments.
2. Label Nodes
The first step prior to deployment is to label your OpenShift Data Foundation nodes with a specific label that will provide a placement tag for your device set. In the example below we will use set1 and set2 to create 2 sets of nodes.
$ oc label nodes ip-10-0-133-24.us-east-2.compute.internal cluster.ocs.openshift.io/openshift-storage-device-class=set1
node/ip-10-0-133-24.us-east-2.compute.internal labeled
$ oc label nodes ip-10-0-141-180.us-east-2.compute.internal cluster.ocs.openshift.io/openshift-storage-device-class=set2
node/ip-10-0-141-180.us-east-2.compute.internal labeled
$ oc label nodes ip-10-0-184-165.us-east-2.compute.internal cluster.ocs.openshift.io/openshift-storage-device-class=set1
node/ip-10-0-184-165.us-east-2.compute.internal labeled
$ oc label nodes ip-10-0-186-145.us-east-2.compute.internal cluster.ocs.openshift.io/openshift-storage-device-class=set2
node/ip-10-0-186-145.us-east-2.compute.internal labeled
$ oc label nodes ip-10-0-209-55.us-east-2.compute.internal cluster.ocs.openshift.io/openshift-storage-device-class=set1
node/ip-10-0-209-55.us-east-2.compute.internal labeled
$ oc label nodes ip-10-0-220-5.us-east-2.compute.internal cluster.ocs.openshift.io/openshift-storage-device-class=set2
node/ip-10-0-220-5.us-east-2.compute.internal labeled
Apply the standard OpenShift Data Foundation label to all worker nodes that will be part of the cluster.
$ oc label node -l node-role.kubernetes.io/worker cluster.ocs.openshift.io/openshift-storage=''
node/ip-10-0-133-24.us-east-2.compute.internal labeled
node/ip-10-0-141-180.us-east-2.compute.internal labeled
node/ip-10-0-184-165.us-east-2.compute.internal labeled
node/ip-10-0-186-145.us-east-2.compute.internal labeled
node/ip-10-0-209-55.us-east-2.compute.internal labeled
node/ip-10-0-220-5.us-east-2.compute.internal labeled
Verify your nodes are correctly labelled.
oc get nodes -L topology.kubernetes.io/zone,cluster.ocs.openshift.io/openshift-storage-device-class -l node-role.kubernetes.io/worker
NAME STATUS ROLES AGE VERSION ZONE OPENSHIFT-STORAGE-DEVICE-CLASS
ip-10-0-133-24.us-east-2.compute.internal Ready worker 139m v1.21.1+051ac4f us-east-2a set1
ip-10-0-141-180.us-east-2.compute.internal Ready worker 139m v1.21.1+051ac4f us-east-2a set2
ip-10-0-184-165.us-east-2.compute.internal Ready worker 141m v1.21.1+051ac4f us-east-2b set1
ip-10-0-186-145.us-east-2.compute.internal Ready worker 141m v1.21.1+051ac4f us-east-2b set2
ip-10-0-209-55.us-east-2.compute.internal Ready worker 141m v1.21.1+051ac4f us-east-2c set1
ip-10-0-220-5.us-east-2.compute.internal Ready worker 141m v1.21.1+051ac4f us-east-2c set2
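If a node was labelled with the wrong value, you can correct it before continuing. A minimal sketch reusing a node name from the example above; the --overwrite flag replaces an existing value and a trailing dash on the key removes the label entirely.
# Replace an incorrect device-class value on a node
oc label node ip-10-0-133-24.us-east-2.compute.internal cluster.ocs.openshift.io/openshift-storage-device-class=set2 --overwrite
# Or remove the label completely so it can be re-applied
oc label node ip-10-0-133-24.us-east-2.compute.internal cluster.ocs.openshift.io/openshift-storage-device-class-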
In this document we use a Local Storage Operator based deployment on AWS. Each node is an i3.4xlarge instance with 2 NVMe drives.
3. Local Storage Operator Configuration
Deploy the Local Storage Operator using the following command.
cat <<EOF | oc create -f -
---
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-local-storage
spec: {}
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: local-operator-group
  namespace: openshift-local-storage
spec:
  targetNamespaces:
  - openshift-local-storage
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: local-storage-operator
  namespace: openshift-local-storage
spec:
  channel: "4.8"
  installPlanApproval: Automatic
  name: local-storage-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
namespace/openshift-local-storage created
operatorgroup.operators.coreos.com/local-operator-group created
subscription.operators.coreos.com/local-storage-operator created
Verify your operator is successfully deployed.
oc get csv -n openshift-local-storage
NAME DISPLAY VERSION REPLACES PHASE
local-storage-operator.4.8.0-202106291913 Local Storage 4.8.0-202106291913 Succeeded (1)
1 | Operator deployment status |
Only proceed to the next step when the status of the deployment is Succeeded. |
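If you want to wait for the operator from a script rather than re-running the command manually, a minimal sketch is shown below; it assumes the Local Storage Operator CSV is the only CSV present in the openshift-local-storage namespace.
# Poll until the CSV reports the Succeeded phase
until oc get csv -n openshift-local-storage -o jsonpath='{.items[0].status.phase}' 2>/dev/null | grep -q Succeeded; do
  sleep 10
done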
Configure the Local Storage Operator to consume all available NVMes.
cat <<EOF | oc create -f -
---
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeDiscovery
metadata:
  name: auto-discover-devices
  namespace: openshift-local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
      - key: cluster.ocs.openshift.io/openshift-storage
        operator: In
        values:
        - ""
EOF
localvolumediscovery.local.storage.openshift.io/auto-discover-devices created
Wait for the localvolumediscoveryresults objects to be available.
oc get localvolumediscoveryresults -n openshift-local-storage
NAME AGE
discovery-result-ip-10-0-133-24.us-east-2.compute.internal 27s
discovery-result-ip-10-0-141-180.us-east-2.compute.internal 27s
discovery-result-ip-10-0-184-165.us-east-2.compute.internal 27s
discovery-result-ip-10-0-186-145.us-east-2.compute.internal 27s
discovery-result-ip-10-0-209-55.us-east-2.compute.internal 27s
discovery-result-ip-10-0-220-5.us-east-2.compute.internal 27s
Only proceed to the next step when the number of objects is equal to the number of nodes labelled with cluster.ocs.openshift.io/openshift-storage. |
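To confirm that the expected NVMe drives were actually discovered on a node, you can inspect one of the result objects. The sketch below reuses a result name from the example output and assumes the discovered devices are reported under status.discoveredDevices.
# Show the devices discovered on one node
oc get localvolumediscoveryresults -n openshift-local-storage \
  discovery-result-ip-10-0-133-24.us-east-2.compute.internal -o yaml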
Configure a LocalVolumeSet to create the PersistentVolumes that will be consumed by ODF.
cat <<EOF | oc create -f -
---
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeSet
metadata:
  name: local-block
  namespace: openshift-local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
      - key: cluster.ocs.openshift.io/openshift-storage
        operator: In
        values:
        - ""
  storageClassName: localblock
  volumeMode: Block
  fstype: ext4
  maxDeviceCount: 2
  deviceInclusionSpec:
    deviceTypes:
    - disk
    deviceMechanicalProperties:
    - NonRotational
EOF
localvolumeset.local.storage.openshift.io/local-block created
Wait 60 seconds and verify the LSO PVs get created.
oc get pv | grep localblock
local-pv-1713478d 1769Gi RWO Delete Available localblock 7s
local-pv-1b5f29ce 1769Gi RWO Delete Available localblock 7s
local-pv-25362600 1769Gi RWO Delete Available localblock 6s
local-pv-322d07e 1769Gi RWO Delete Available localblock 6s
local-pv-38cab179 1769Gi RWO Delete Available localblock 6s
local-pv-4417fa00 1769Gi RWO Delete Available localblock 6s
local-pv-445859c3 1769Gi RWO Delete Available localblock 7s
local-pv-4ba9fc11 1769Gi RWO Delete Available localblock 6s
local-pv-58d7b728 1769Gi RWO Delete Available localblock 7s
local-pv-8317069a 1769Gi RWO Delete Available localblock 6s
local-pv-c2fa64b5 1769Gi RWO Delete Available localblock 6s
local-pv-c6855919 1769Gi RWO Delete Available localblock 6s
In the environment used to illustrate this exercise we have a total of 12 local disk devices available on the ODF-labelled nodes. |
You should have as many PersistentVolumes as you have local disk devices. If the count differs, wait until all PersistentVolumes have been created before proceeding. |
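A quick way to compare the PV count against the number of local disk devices is to count the PersistentVolumes created for the localblock storage class; a minimal sketch:
# Count the PVs created by the LocalVolumeSet (12 in this example)
oc get pv --no-headers | grep -c localblock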
4. Storage Cluster Configuration
After installing the OpenShift Data Foundation Operator via OperatorHub, deploy the OpenShift Data Foundation storage cluster using the following CustomResource file, or an equivalent based on your exact configuration.
cat <<EOF | oc create -f -
---
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  monDataDirHostPath: /var/lib/rook
  #
  # Set 1 device set
  #
  storageDeviceSets:
  - name: ssd-set1
    count: 6 (1)
    replica: 1 (2)
    deviceType: "ssd"
    deviceClass: "set1" (3)
    dataPVCTemplate:
      spec:
        storageClassName: localblock (4)
        accessModes:
        - ReadWriteOnce
        volumeMode: Block
        resources:
          requests:
            storage: 1 (5)
    portable: false (6)
    #
    # Schedule OSDs for this storageDeviceSet on nodes with the preset label
    #
    placement:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: "cluster.ocs.openshift.io/openshift-storage-device-class"
              operator: In
              values:
              - "set1" (7)
  #
  # Set 2 device set
  #
  - name: ssd-set2
    count: 6
    replica: 1
    deviceType: "ssd"
    deviceClass: "set2"
    dataPVCTemplate:
      spec:
        storageClassName: localblock
        accessModes:
        - ReadWriteOnce
        volumeMode: Block
        resources:
          requests:
            storage: 1
    portable: false
    #
    # Schedule OSDs for this storageDeviceSet on nodes with the preset label
    #
    placement:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: "cluster.ocs.openshift.io/openshift-storage-device-class"
              operator: In
              values:
              - "set2"
EOF
1 | Count is the number of devices to deploy for this storageDeviceSet (total OSDs = count × replica) |
2 | Replica for this storageDeviceSet (how many OSDs to deploy each time count is increased by 1) |
3 | CRUSH device class to be assigned to the OSDs within the storageDeviceSet |
4 | Storage class to use for the OSD PVCs in this storageDeviceSet |
5 | Minimum size to claim for the OSD PVC (1 byte) |
6 | In our example we use local storage, hence the OSDs are not portable |
7 | Placement affinity specifies which label value to look for |
storagecluster.ocs.openshift.io/ocs-storagecluster created
The deviceClass parameter allows you to override the deviceType parameter described in the ODF 4.7 ocs4-additionalfeatures.adoc#_mixed_osd_device_type_configuration chapter. As a result you can now choose custom character strings beyond the hdd, ssd or nvme options. |
Wait for your cluster to be fully deployed.
oc get pod -n openshift-storage
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-24z7g 3/3 Running 0 4m19s
csi-cephfsplugin-545pc 3/3 Running 0 4m19s
csi-cephfsplugin-89d2r 3/3 Running 0 4m19s
csi-cephfsplugin-fzckf 3/3 Running 0 4m19s
csi-cephfsplugin-provisioner-5dd599f584-h95ff 6/6 Running 0 4m19s
csi-cephfsplugin-provisioner-5dd599f584-m8q8z 6/6 Running 0 4m19s
csi-cephfsplugin-v6lrh 3/3 Running 0 4m19s
csi-cephfsplugin-v6wfk 3/3 Running 0 4m19s
csi-rbdplugin-52vfz 3/3 Running 0 4m20s
csi-rbdplugin-b9tsn 3/3 Running 0 4m20s
csi-rbdplugin-h9pkd 3/3 Running 0 4m20s
csi-rbdplugin-jbb55 3/3 Running 0 4m20s
csi-rbdplugin-pl7tc 3/3 Running 0 4m20s
csi-rbdplugin-provisioner-85b4b68989-4zlhg 6/6 Running 0 4m20s
csi-rbdplugin-provisioner-85b4b68989-ghwj7 6/6 Running 0 4m20s
csi-rbdplugin-vfptq 3/3 Running 0 4m20s
noobaa-core-0 1/1 Running 0 2m32s
noobaa-db-pg-0 1/1 Running 0 2m32s
noobaa-endpoint-5d6488db87-m5r6m 1/1 Running 0 49s
noobaa-operator-67786dd498-45ktn 1/1 Running 0 124m
ocs-metrics-exporter-795b66d6c5-qghx8 1/1 Running 0 124m
ocs-operator-6fc4f459fb-bdlqs 1/1 Running 0 124m
rook-ceph-crashcollector-ip-10-0-133-24-6b98c55978-qk7pt 1/1 Running 0 3m2s
rook-ceph-crashcollector-ip-10-0-141-180-9cbdc6d98-dsmp2 1/1 Running 0 2m42s
rook-ceph-crashcollector-ip-10-0-184-165-6c94b557d8-scddp 1/1 Running 0 2m33s
rook-ceph-crashcollector-ip-10-0-186-145-67798f4888-r8chx 1/1 Running 0 3m1s
rook-ceph-crashcollector-ip-10-0-209-55-7dbfb485f4-9k9jt 1/1 Running 0 2m54s
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-676fd48874ltd 2/2 Running 0 2m11s
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-79dd8bf9rjtpf 2/2 Running 0 2m10s
rook-ceph-mgr-a-5cc898c4bc-xxm6w 2/2 Running 0 3m10s
rook-ceph-mon-a-6fb6d9775c-xhtxw 2/2 Running 0 3m53s
rook-ceph-mon-b-dbb555bd4-s2h5h 2/2 Running 0 3m42s
rook-ceph-mon-c-858ffd4f5-69z4q 2/2 Running 0 3m24s
rook-ceph-operator-759d8c4b4c-xhtmc 1/1 Running 0 124m
rook-ceph-osd-0-597596cdb8-5k4gn 2/2 Running 0 7m26s
rook-ceph-osd-1-59b6b8f94b-hk9qs 2/2 Running 0 7m25s
rook-ceph-osd-10-5f58667698-crgqs 2/2 Running 0 61s
rook-ceph-osd-11-df6479cb9-kvzvh 2/2 Running 0 61s
rook-ceph-osd-2-5b8bb7bb4b-cjsp6 2/2 Running 0 7m24s
rook-ceph-osd-3-7b9b85d9c8-g4xgh 2/2 Running 0 7m24s
rook-ceph-osd-4-664b589f9d-bxj8f 2/2 Running 0 7m24s
rook-ceph-osd-5-675f87f9b8-ptz6z 2/2 Running 0 7m15s
rook-ceph-osd-6-76b4cc899c-v6h86 2/2 Running 0 72s
rook-ceph-osd-7-74c6488f6b-m42d9 2/2 Running 0 72s
rook-ceph-osd-8-867dbc6fd8-ng6qx 2/2 Running 0 72s
rook-ceph-osd-9-6fdfc7698c-wbbsd 2/2 Running 0 70s
rook-ceph-osd-prepare-ssd-set1-0-data-07m4ql-w5c5s 0/1 Completed 0 7m45s
rook-ceph-osd-prepare-ssd-set1-0-data-1frdkz-jtgbz 0/1 Completed 0 7m45s
rook-ceph-osd-prepare-ssd-set1-0-data-2mslwh-58r7x 0/1 Completed 0 7m45s
rook-ceph-osd-prepare-ssd-set1-0-data-387vpx-zvb46 0/1 Completed 0 92s
rook-ceph-osd-prepare-ssd-set1-0-data-4n6rtc-mdl6t 0/1 Completed 0 92s
rook-ceph-osd-prepare-ssd-set1-0-data-55gm4c-c65mb 0/1 Completed 0 91s
rook-ceph-osd-prepare-ssd-set2-0-data-0pr4ms-d8frs 0/1 Completed 0 7m44s
rook-ceph-osd-prepare-ssd-set2-0-data-1hrrpt-fbsvq 0/1 Completed 0 7m44s
rook-ceph-osd-prepare-ssd-set2-0-data-2ff7p5-mrmh2 0/1 Completed 0 7m43s
rook-ceph-osd-prepare-ssd-set2-0-data-3mvtsm-2mtkm 0/1 Completed 0 91s
rook-ceph-osd-prepare-ssd-set2-0-data-4fdbf5-s45b6 0/1 Completed 0 91s
rook-ceph-osd-prepare-ssd-set2-0-data-5zdcvs-27n6x 0/1 Completed 0 90s
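Before looking at Ceph itself, you can confirm from the OpenShift side that the OSD pods were scheduled on the intended nodes; app=rook-ceph-osd is the standard Rook label for OSD pods, so the sketch below should apply unchanged.
# List the OSD pods together with the node each one runs on
oc get pods -n openshift-storage -l app=rook-ceph-osd -o wide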
Verify your cluster is fully operational and in a healthy state using the ODF toolbox.
oc patch OCSInitialization ocsinit -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/enableCephTools", "value": true }]'
ocsinitialization.ocs.openshift.io/ocsinit patched
Connect to the toolbox pod.
TOOLS_POD=$(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name)
oc rsh -n openshift-storage $TOOLS_POD
Once inside the toolbox, check the status of the cluster.
sh-4.4# ceph -s
  cluster:
    id:     4dc85b62-d688-45cc-9224-74005839e500
    health: HEALTH_OK (1)

  services:
    mon: 3 daemons, quorum a,b,c (age 7m)
    mgr: a(active, since 7m)
    mds: ocs-storagecluster-cephfilesystem:1 {0=ocs-storagecluster-cephfilesystem-a=up:active} 1 up:standby-replay
    osd: 12 osds: 12 up (since 32s), 12 in (since 32s)

  data:
    pools:   3 pools, 288 pgs
    objects: 99 objects, 132 MiB
    usage:   12 GiB used, 21 TiB / 21 TiB avail
    pgs:     288 active+clean (2)
1 | Ceph cluster general health status |
2 | Total number of Placement Groups in your cluster |
The status of your cluster should be HEALTH_OK. If it is not, something went wrong and you will need to troubleshoot your deployment before you can continue. |
Write down the total number of Placement Groups present in the cluster after deployment. In the example above, the value is 288. |
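If you prefer to capture the Placement Group count programmatically rather than reading it from ceph -s, a minimal sketch from inside the toolbox is shown below; it assumes the jq utility is available in the toolbox image, which may not always be the case.
# Extract the total number of Placement Groups from the JSON status
ceph status -f json | jq '.pgmap.num_pgs'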
The next step is to identify which OSDs belong to set1 and which belong to set2.
ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 20.73596 root default
-6 20.73596 region us-east-2
-5 6.91199 zone us-east-2a
-4 3.45599 host ip-10-0-133-24
0 set1 1.72800 osd.0 up 1.00000 1.00000 (1)
1 set1 1.72800 osd.1 up 1.00000 1.00000 (1)
-19 3.45599 host ip-10-0-141-180
4 set2 1.72800 osd.4 up 1.00000 1.00000 (2)
8 set2 1.72800 osd.8 up 1.00000 1.00000 (2)
-14 6.91199 zone us-east-2b
-22 3.45599 host ip-10-0-184-165
5 set1 1.72800 osd.5 up 1.00000 1.00000 (1)
9 set1 1.72800 osd.9 up 1.00000 1.00000 (1)
-13 3.45599 host ip-10-0-186-145
2 set2 1.72800 osd.2 up 1.00000 1.00000 (2)
3 set2 1.72800 osd.3 up 1.00000 1.00000 (2)
-26 6.91199 zone us-east-2c
-25 3.45599 host ip-10-0-209-55
6 set1 1.72800 osd.6 up 1.00000 1.00000 (1)
7 set1 1.72800 osd.7 up 1.00000 1.00000 (1)
-31 3.45599 host ip-10-0-220-5
10 set2 1.72800 osd.10 up 1.00000 1.00000 (2)
11 set2 1.72800 osd.11 up 1.00000 1.00000 (2)
1 | The CLASS column indicates the OSD has a CRUSH device class set to set1 |
2 | The CLASS column indicates the OSD has a CRUSH device class set to set2 |
For this particular environment here are the respective OSD IDs for each set:
- set1 OSDs are: 0, 1, 5, 6, 7, 9
- set2 OSDs are: 2, 3, 4, 8, 10, 11
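Rather than reading the tree by hand, you can also query the OSD IDs assigned to a CRUSH device class directly:
# List the OSD IDs that carry each custom device class
ceph osd crush class ls-osd set1
ceph osd crush class ls-osd set2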
Disconnect from the toolbox.
exit
The next step is to create a custom cephblockpool that only uses the OSDs that belong to set1. This means that all Placement Groups for the pool will be mapped to OSDs with ID 0, 1, 5, 6, 7 or 9.
cat <<EOF | oc create -f -
---
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: ssd-set1-pool (1)
  namespace: openshift-storage
spec:
  failureDomain: zone (2)
  replicated:
    size: 3 (3)
    requireSafeReplicaSize: true
  deviceClass: set1 (4)
  mirroring:
    enabled: false
    mode: image
  statusCheck:
    mirror:
      disabled: false
      interval: 60s
EOF
1 | Custom pool name |
2 | Failure domain to use for the CRUSH rule assigned to the pool |
3 | Size parameter to assign to the pool |
4 | Device class to use for the CRUSH rule assigned to the pool |
cephblockpool.ceph.rook.io/ssd-set1-pool created
Verify the CR has been created successfully.
oc get cephblockpool -n openshift-storage
NAME AGE
ocs-storagecluster-cephblockpool 15m
ssd-set1-pool 22s
We now have 2 cephblockpool CRs: the default one, ocs-storagecluster-cephblockpool, created during the deployment of the cluster, and one for our new test pool, ssd-set1-pool. |
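To actually consume the new pool from applications you would typically create a StorageClass that references it. The sketch below is a minimal example only: it assumes the default ODF RBD CSI provisioner name and the default CSI secret names created by the operator in the openshift-storage namespace, and the storage class name itself is hypothetical; verify all of these against your own deployment before creating it.
cat <<EOF | oc create -f -
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-set1-rbd   # hypothetical name used for illustration
provisioner: openshift-storage.rbd.csi.ceph.com
parameters:
  clusterID: openshift-storage
  pool: ssd-set1-pool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
reclaimPolicy: Delete
allowVolumeExpansion: true
EOF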
Let’s get back into the toolbox pod and check the status of the cluster.
oc rsh -n openshift-storage $TOOLS_POD
Verify the status of the cluster and check the total number of Placement Groups present in the cluster.
ceph -s
  cluster:
    id:     4dc85b62-d688-45cc-9224-74005839e500
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 37m)
    mgr: a(active, since 37m)
    mds: ocs-storagecluster-cephfilesystem:1 {0=ocs-storagecluster-cephfilesystem-a=up:active} 1 up:standby-replay
    osd: 12 osds: 12 up (since 30m), 12 in (since 30m)

  data:
    pools:   4 pools, 320 pgs
    objects: 92 objects, 138 MiB
    usage:   12 GiB used, 21 TiB / 21 TiB avail
    pgs:     320 active+clean (1)

  io:
    client: 852 B/s rd, 2.3 KiB/s wr, 1 op/s rd, 0 op/s wr
1 | Total number of Placement Groups in your cluster |
In this example, the total number of Placement Groups is now 320. The original number before the creation of the additional pool was 288. This tells us the new pool has 32 Placement Groups. |
Now check the specific Ceph pool we have created.
ceph osd pool ls detail
pool 1 'ocs-storagecluster-cephblockpool' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode on last_change 46 lfor 0/0/28 flags hashpspool,selfmanaged_snaps stripe_width 0 compression_mode none target_size_ratio 0.49 application rbd
removed_snaps [1~3]
pool 2 'ocs-storagecluster-cephfilesystem-metadata' replicated size 3 min_size 2 crush_rule 2 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 25 flags hashpspool stripe_width 0 compression_mode none pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs
pool 3 'ocs-storagecluster-cephfilesystem-data0' replicated size 3 min_size 2 crush_rule 3 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode on last_change 46 lfor 0/0/28 flags hashpspool stripe_width 0 compression_mode none target_size_ratio 0.49 application cephfs
pool 4 'ssd-set1-pool' replicated size 3 min_size 2 crush_rule 4 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 83 flags hashpspool stripe_width 0 compression_mode none application rbd (1)
1 | The custom pool created as an example |
Write down the ID (column 2) of the new pool ssd-set1-pool we have created (column 3). In this example our pool ID is 4. |
You can also check the CRUSH rule ID in column 10 for the new pool. In our example it is 4 (crush_rule 4). |
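If you only need the pool ID, it can also be listed without the full detail output:
# List pool IDs and names only
ceph osd lspools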
Now verify the CRUSH rule that was generated for the new pool.
The Rook operator always creates a CRUSH rule with the same name as the pool name. |
ceph osd crush rule dump ssd-set1-pool
{
    "rule_id": 4, (1)
    "rule_name": "ssd-set1-pool", (2)
    "ruleset": 4,
    "type": 1,
    "min_size": 1,
    "max_size": 10,
    "steps": [
        {
            "op": "take",
            "item": -2,
            "item_name": "default~set1" (3)
        },
        {
            "op": "chooseleaf_firstn",
            "num": 0,
            "type": "zone" (4)
        },
        {
            "op": "emit"
        }
    ]
}
1 | Matches the ID in the ceph osd pool ls detail command above |
2 | The rule name is generated based on the pool name |
3 | Rule uses devices with CRUSH device class set1 |
4 | Failure domain is set to zone |
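The default~set1 item name refers to the shadow CRUSH hierarchy that Ceph maintains for each device class. You can display these shadow trees to see exactly which hosts and OSDs the rule can draw from:
# Show the per-device-class shadow hierarchies next to the regular CRUSH tree
ceph osd crush tree --show-shadow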
Now verify that all the Placement Groups for the custom pool are mapped to the correct OSDs.
The first step is to understand what we are looking for by inspecting a single Placement Group from the new pool.
ceph pg dump | grep '^4.' | head -1 (1)
4.b 0 0 0 0 0 0 0 0 0 0 active+clean 2021-08-02 21:40:11.496593 0'0 82:12 [7,1,5] 7 [7,1,5] 7 0'0 2021-08-02 21:40:10.466388 0'0 2021-08-02 21:40:10.466388 0 (2)
1 | In the grep command above the number 4 is the ID of the pool we want to inspect. |
2 | The first set of numbers between brackets (column 17) is the acting set of OSDs to which the Placement Group is mapped. In the example above, the primary OSD is 7 and the secondary OSDs are 1 and 5 . |
Now verify that all the Placement Groups of the new pool have an acting set containing only OSD IDs from set1, as identified earlier (in our test environment: 0, 1, 5, 6, 7, 9).
ceph pg dump | grep '^4.' | awk '{ print $17 }' | grep '\[[015679],[015679],[015679]\]' | wc -l
grep '^4.' (1)
awk '{ print $17 }' (2)
grep '\[[015679],[015679],[015679]\]' (3)
1 | Search for Placement Groups that belong to pool with ID 4 |
2 | Select the acting set column in the list returned |
3 | Match only acting sets composed exclusively of the set1 OSD IDs |
dumped all
32
If the configuration is correct the command should return a value of 32, as all the Placement Groups for the pool should only use an acting set in which every OSD belongs to set1. |
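An alternative check that avoids hard-coding the OSD IDs in a regular expression is to list the pool's Placement Groups by name and compare their acting sets against the device class membership; a minimal sketch:
# OSD IDs that carry the set1 device class
ceph osd crush class ls-osd set1
# Placement Groups of the custom pool, including their UP and ACTING sets
ceph pg ls-by-pool ssd-set1-pool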
Disconnect from the toolbox.
exit