Extending Ceph storage

This guide describes the procedure for increasing the disk space of the Ceph storage platform on an OpenShift (OKD 4.x) cluster.

Increasing the disk space may be part of a scheduled upgrade, or it may become necessary when storage usage reaches 85% of capacity.
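
If you need to check current usage first, one option is to run ceph df from the Rook toolbox; this is a sketch that assumes the rook-ceph-tools toolbox pod is enabled in the openshift-storage namespace.

# Show Ceph raw and per-pool usage from the toolbox pod (assumes rook-ceph-tools is enabled).
oc -n openshift-storage exec deploy/rook-ceph-tools -- ceph df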

1. Prerequisites

A Platform administrator must have access to the cluster with the cluster-admin role.
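
To confirm that your user has the required permissions, a quick check with oc auth can-i can help; the resource name below assumes the OCS StorageCluster CRD (storageclusters.ocs.openshift.io).

# Show the current user and check permission to patch the StorageCluster resource.
oc whoami
oc auth can-i patch storageclusters.ocs.openshift.io -n openshift-storage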

2. Procedure

  1. Expand the root volumes at the cloud provider level (AWS is used as an example). To do this, perform the following steps:

    1. Go to the OKD web console.

    2. Switch to the openshift-storage namespace.

    3. Open the Persistent Volume Claims section and select the Expand PVC option from the context menu for each of the following three volumes:

      • ocs-deviceset-gp2-0-data-0-xxx

      • ocs-deviceset-gp2-1-data-0-xxx

      • ocs-deviceset-gp2-2-data-0-xxx

    4. Specify the new size for these volumes.

      (Figure: Ceph volumes)
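
      If you prefer the CLI, the same expansion can be done by patching each PVC. The sketch below assumes the backing gp2 storage class allows volume expansion; the -xxx suffixes stand in for the real PVC names, and 768Gi is the example target size.

      # List the OCS device-set PVCs to confirm their exact names.
      oc get pvc -n openshift-storage | grep ocs-deviceset

      # Expand one PVC; repeat for each of the three device-set PVCs.
      oc patch pvc ocs-deviceset-gp2-0-data-0-xxx -n openshift-storage \
        --type merge -p '{"spec":{"resources":{"requests":{"storage":"768Gi"}}}}'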

  2. Modify the ocs-storagecluster custom resource (CR):

    For details, refer to the OKD documentation: Managing resources from Custom Resource Definitions.
    • Find ocs-storagecluster (an instance of the storagecluster.ocs.openshift.io CRD).

    • In its .yaml configuration, change the value of the storage parameter to the size you specified when expanding the root volumes in step 1, for example storage: 768Gi.

      (Figure: Ceph CR)

      Alternatively, you can change this value using a command-line interface (CLI):

      oc patch...
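
      A minimal sketch of such a patch is shown below; the field path assumes the default StorageCluster layout, so verify it in your cluster first with oc get storagecluster ocs-storagecluster -n openshift-storage -o yaml.

      # Update the requested size of the device-set data PVC template (device set 0 shown; repeat for others if present).
      oc patch storagecluster ocs-storagecluster -n openshift-storage --type json \
        -p '[{"op": "replace", "path": "/spec/storageDeviceSets/0/dataPVCTemplate/spec/resources/requests/storage", "value": "768Gi"}]'

      You can also change the value interactively with oc edit storagecluster ocs-storagecluster -n openshift-storage.
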
  3. In the openshift-storage namespace, restart the necessary pods:

    (Figure: Ceph pods)

    Alternatively, restart all pods in this namespace.
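
    From the CLI, restarting amounts to deleting the pods so that their controllers recreate them. The label selectors below are the usual rook-ceph labels in openshift-storage; verify them with oc get pods -n openshift-storage --show-labels.

    # Restart the Ceph OSD and MON pods (their deployments recreate them automatically).
    oc delete pod -n openshift-storage -l app=rook-ceph-osd
    oc delete pod -n openshift-storage -l app=rook-ceph-mon

    # Or restart every pod in the namespace.
    oc delete pods --all -n openshift-storage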

    For details on working with pods in OpenShift, refer to the Origin Kubernetes Distribution (OKD) documentation: Using pods.

After the automatic Ceph cluster procedures that follow are complete, the disk space is extended to the size you specified.

If the Ceph disk space does not expand after step 3 and Ceph stops working, force-restart the instances in the Ceph MachineSet of the OpenShift cluster.
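
One way to force such a restart is to delete the corresponding Machine objects so that the MachineSet provisions replacement instances; the machine name below is a placeholder for the storage machines in your cluster.

# List the machine sets and their machines.
oc get machinesets -n openshift-machine-api
oc get machines -n openshift-machine-api

# Delete a machine that belongs to the Ceph/storage machine set; the MachineSet
# creates a replacement instance. Do this one machine at a time.
oc delete machine <storage-machine-name> -n openshift-machine-api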