Deploy OKD 4.10 Cluster⚓︎
This instruction provides detailed information on deploying an OKD 4.10 cluster in the AWS Cloud and covers the additional setup required for the managed infrastructure.
A full description of the cluster deployment can be found in the official documentation.
Prerequisites⚓︎
Before the OKD cluster deployment and configuration, make sure to check the prerequisites.
Required Tools⚓︎
- Install the required tools: this guide uses the AWS CLI (`aws`), the OpenShift CLI (`oc`), the `openshift-install` program, and the `ccoctl` tool.
- Create the AWS IAM user with the required permissions. Make sure the AWS account is active and the user doesn't have a permission boundary. Remove any Service Control Policy (SCP) restrictions from the AWS account.
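    As an illustration, such a user can be created with the AWS CLI. The user name `okd-installer` and the broad `AdministratorAccess` policy below are assumptions, not requirements; scope the permissions according to your organization's policy:

    ```bash
    # Hypothetical user name; adjust to your naming convention.
    aws iam create-user --user-name okd-installer

    # Example policy only; restrict to the permissions the installer actually needs.
    aws iam attach-user-policy \
      --user-name okd-installer \
      --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

    # Create access keys for the installer user.
    aws iam create-access-key --user-name okd-installer
    ```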
- Generate a key pair for cluster node SSH access. Please perform the steps below:

    - Generate the SSH key. Specify the path and file name, such as `~/.ssh/id_ed25519`, for the new SSH key. If there is an existing key pair, ensure that the public key is in the `~/.ssh` directory.
    - Add the SSH private key identity to the SSH agent for the local user, if it has not already been added, and then add the SSH private key to the `ssh-agent`, as sketched below.
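    A minimal sketch of these steps, assuming the `~/.ssh/id_ed25519` path from above:

    ```bash
    # Generate a new ed25519 key pair; add a passphrase if your policy requires one.
    ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519

    # Start the ssh-agent if it is not already running.
    eval "$(ssh-agent -s)"

    # Add the private key identity to the agent.
    ssh-add ~/.ssh/id_ed25519
    ```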
- Build the `ccoctl` tool:

    - Clone the `cloud-credential-operator` repository.
    - Move to the `cloud-credential-operator` folder and build the `ccoctl` tool, as sketched below.
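    A sketch of these two steps, assuming a local Go toolchain and the upstream repository location:

    ```bash
    # Clone the Cloud Credential Operator repository.
    git clone https://github.com/openshift/cloud-credential-operator.git

    # Move to the repository folder and build the ccoctl binary.
    cd cloud-credential-operator
    go build ./cmd/ccoctl
    ```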
Prepare for the Deployment Process⚓︎
Before deploying the OKD cluster, please perform the steps below:
Create AWS Resources⚓︎
Create the AWS resources with the Cloud Credential Operator utility (the `ccoctl` tool):
- Generate the public and private RSA key files that are used to set up the OpenID Connect identity provider for the cluster:
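    A minimal invocation using the `ccoctl aws create-key-pair` subcommand, which writes `serviceaccount-signer.private` and `serviceaccount-signer.public` to the current working directory (the public key file is referenced in the next step):

    ```bash
    ./ccoctl aws create-key-pair
    ```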
- Create an OpenID Connect identity provider and an S3 bucket on AWS:

    ```bash
    ./ccoctl aws create-identity-provider \
      --name=<NAME> \
      --region=<AWS_REGION> \
      --public-key-file=./serviceaccount-signer.public
    ```
    where:

    - `NAME` - is the name used to tag any cloud resources created for tracking,
    - `AWS_REGION` - is the AWS region in which cloud resources will be created.
- Create the IAM roles for each component in the cluster:
    - Extract the list of `CredentialsRequest` objects from the OpenShift Container Platform release image:

        ```bash
        oc adm release extract \
          --credentials-requests \
          --cloud=aws \
          --to=./credrequests \
          quay.io/openshift-release-dev/ocp-release:4.10.25-x86_64
        ```
        Note

        A version of the `openshift-release-dev` container image can be found in the Quay registry.
    - Use the `ccoctl` tool to process all `CredentialsRequest` objects in the `credrequests` directory:
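        A sketch using the `ccoctl aws create-iam-roles` subcommand; the `--identity-provider-arn` value must point at the OIDC provider created earlier, and the account ID and provider name below are placeholders:

        ```bash
        ./ccoctl aws create-iam-roles \
          --name=<NAME> \
          --region=<AWS_REGION> \
          --credentials-requests-dir=./credrequests \
          --identity-provider-arn=arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/<OIDC_PROVIDER_NAME>
        ```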
Create OKD Manifests⚓︎
Before deploying the OKD cluster, please perform the steps below:
- Download the OKD installer.
- Extract the installation program:

    ```bash
    tar -xvf openshift-install-linux.tar.gz
    ```
- Download the installation pull secret for any private registry. This pull secret allows you to authenticate with the services that are provided by the authorities, including Quay.io, which serve the container images for OKD components. For example, here is the shape of a pull secret for Docker Hub:
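    A minimal sketch of the pull secret format with placeholder Docker Hub credentials; the base64 string and email are stand-ins for your own values:

    ```json
    {
      "auths": {
        "https://index.docker.io/v1/": {
          "auth": "<BASE64_ENCODED_USERNAME_COLON_PASSWORD>",
          "email": "<YOUR_EMAIL>"
        }
      }
    }
    ```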
- Create a deployment directory and the install-config.yaml file:
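    For example, assuming the `okd-deployment` directory name used later in this guide:

    ```bash
    # Create the deployment directory.
    mkdir okd-deployment

    # Generate a starting install-config.yaml interactively.
    ./openshift-install create install-config --dir=okd-deployment
    ```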
To specify more details about the OKD cluster platform or to modify the values of the required parameters, customize the install-config.yaml file for AWS. Please see below an example of the customized file:
install-config.yaml - OKD cluster’s platform installation configuration file
```yaml
apiVersion: v1
baseDomain: <YOUR_DOMAIN>
credentialsMode: Manual
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    aws:
      rootVolume:
        size: 30
      zones:
      - eu-central-1a
      type: r5.large
  replicas: 3
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform:
    aws:
      rootVolume:
        size: 50
      zones:
      - eu-central-1a
      type: m5.xlarge
  replicas: 3
metadata:
  creationTimestamp: null
  name: 4-10-okd-sandbox
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: eu-central-1
    userTags:
      user:tag: 4-10-okd-sandbox
publish: External
pullSecret: <PULL_SECRET>
sshKey: |
  <SSH_KEY>
```
where:

- `YOUR_DOMAIN` - is a base domain,
- `PULL_SECRET` - is the pull secret created for a private registry,
- `SSH_KEY` - is the SSH key created earlier.
- Create the required OpenShift Container Platform installation manifests:
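    A sketch, again assuming the `okd-deployment` directory:

    ```bash
    ./openshift-install create manifests --dir=okd-deployment
    ```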
- Copy the manifests generated by the `ccoctl` tool to the `manifests` directory created by the installation program:
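    A sketch, assuming `ccoctl` wrote its manifests to a local `manifests` directory:

    ```bash
    cp ./manifests/* ./okd-deployment/manifests/
    ```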
- Copy the private key generated in the `tls` directory by the `ccoctl` tool to the installation directory:
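    For example:

    ```bash
    cp -a ./tls ./okd-deployment/
    ```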
Deploy the Cluster⚓︎
To initialize the cluster deployment, run the following command:
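A sketch of the deployment command, assuming the `okd-deployment` directory created above:

```bash
./openshift-install create cluster --dir=okd-deployment --log-level=info
```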
Note
If the cloud provider account configured on the host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment is completed, directions for accessing the cluster are displayed in the terminal, including a link to the web console and credentials for the kubeadmin user. The `kubeconfig` for the cluster will be located in `okd-deployment/auth/kubeconfig`.
Example output

```text
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with the user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s
```
Warning

The Ignition config files contain certificates that expire after 24 hours and are renewed at that time. Do not turn off the cluster during this period, or you will have to update the certificates manually. See the OpenShift Container Platform documentation for more information.
Log Into the Cluster⚓︎
To log into the cluster, export the `kubeconfig`:

```bash
export KUBECONFIG=<installation_directory>/auth/kubeconfig
```
Manage OKD Cluster Without the Inbound Rules⚓︎
In order to manage the OKD cluster without the `0.0.0.0/0` inbound rules, please perform the steps below:
- Create a Security Group with a list of your external IPs:
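    A sketch with the AWS CLI; the group name, the port 6443 (the Kubernetes API port), and the example CIDR are assumptions to adapt to your environment:

    ```bash
    # Hypothetical group name; create the group in the cluster VPC.
    aws ec2 create-security-group \
      --group-name custom-okd-api-access \
      --description "API access from trusted IPs" \
      --vpc-id <VPC_ID>

    # Allow the Kubernetes API port from a trusted external IP;
    # repeat this rule for each IP on your list.
    aws ec2 authorize-security-group-ingress \
      --group-id <SECURITY_GROUP_ID> \
      --protocol tcp \
      --port 6443 \
      --cidr <YOUR_EXTERNAL_IP>/32
    ```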
- Manually attach this new Security Group to all master nodes of the cluster.
- Create another Security Group with an Elastic IP of the Cluster VPC (note that restricting the rules to ports 80 and 443 requires `--protocol tcp`; with `--protocol all`, AWS ignores the `--port` value):

    ```bash
    aws ec2 create-security-group \
      --group-name custom-okd-4-10 \
      --description "Cluster IP to 80, 443" \
      --vpc-id <VPC_ID>

    aws ec2 authorize-security-group-ingress \
      --group-id '<SECURITY_GROUP_ID>' \
      --protocol tcp \
      --port 80 \
      --cidr <ELASTIC_IP_OF_CLUSTER_VPC>

    aws ec2 authorize-security-group-ingress \
      --group-id '<SECURITY_GROUP_ID>' \
      --protocol tcp \
      --port 443 \
      --cidr <ELASTIC_IP_OF_CLUSTER_VPC>
    ```
- Modify the cluster load balancer via the `router-default` service in the `openshift-ingress` namespace, attaching the two Security Groups created in the previous steps; one possible approach is sketched below.
Optimize Spot Instances Usage⚓︎
In order to optimize the usage of Spot Instances on AWS, add the following line under the `providerSpec` field in the MachineSet of Worker Nodes:
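The field in question is `spotMarketOptions`. A sketch of the relevant fragment of a worker MachineSet, with surrounding fields abridged:

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
spec:
  template:
    spec:
      providerSpec:
        value:
          # An empty spotMarketOptions requests Spot capacity
          # with the maximum price capped at the on-demand price.
          spotMarketOptions: {}
```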