Instructions to Install DKube on an Amazon EKS Cluster

This section describes the steps required to install DKube on an existing EKS cluster.

EKS Cluster Installation Files

When installing DKube on an existing EKS cluster, there are files required to set up the installation node and provide access to the EKS cluster.

The EKS preinstallation files are cloned from a GitHub repository.

git clone https://github.com/oneconvergence/dkube-eks

Running the EKS Preinstallation Script

Navigate to the dkube-eks folder. Before running the preinstall script, the [AWS] section of the terraform-eks.ini file needs to be completed to provide access to the EKS cluster.

The following information needs to be completed in the terraform-eks.ini file:

  • aws_access_key_id

  • aws_secret_access_key

The rest of the fields are not used by the preinstall script and can be left at their default values.
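For reference, a minimal sketch of the completed [AWS] section is shown below. The key values are placeholders; substitute the credentials for an account with access to the EKS cluster:

[AWS]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxX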

After running the preinstall script, the kubeconfig file is updated by providing:

  • The region where the EKS cluster is running (e.g. us-west-2)

  • The EKS cluster name

The following commands will accomplish these steps:

bash preinstall.sh
source ~/.bashrc
aws eks --region <EKS Region> update-kubeconfig --name <EKS Cluster Name>
kubectl get nodes -o wide

The output from the “kubectl get nodes” command will provide the list of IP addresses and node names to use in the k8s.ini and dkube.ini files later in the installation procedure.
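For example, the output of "kubectl get nodes -o wide" resembles the following (abbreviated columns; the node names and addresses are illustrative):

NAME                                          STATUS   VERSION   INTERNAL-IP     OS-IMAGE
ip-192-168-12-34.us-west-2.compute.internal   Ready    v1.21.5   192.168.12.34   Amazon Linux 2
ip-192-168-56-78.us-west-2.compute.internal   Ready    v1.21.5   192.168.56.78   Amazon Linux 2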

EKS Cluster Access

The section Getting the DKube Files provides instructions on getting the DKube installation files. The installation is performed from the $HOME/.dkube folder.

In order to set up cluster access, all of the nodes in the EKS cluster must be identified in the k8s.ini file, as shown below. The following fields must be completed; the rest of the fields can stay at their default values.

Field       Value
provider    eks
distro      Operating system type
nodes       IP addresses for the EKS cluster
user        Username on the EKS cluster

Note

The IP addresses were obtained in the previous step with the “kubectl get nodes” command

[Image: k8s.ini file for EKS]
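As a hypothetical sketch (the exact layout of the k8s.ini file may differ in your release; the IP addresses and username below are examples only):

provider = eks
distro = ubuntu
nodes = 192.168.12.34,192.168.56.78,192.168.90.12
user = ubuntu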

To access an EKS cluster that is already installed, the .pem key file from the AWS cluster is used. To use that file, follow these steps:

  • Copy the .pem key to the $HOME/.dkube folder

  • Use the following commands in the $HOME/.dkube folder to set up cluster access and install the necessary packages

sudo chmod 400 <pem file>
sudo ./setup-ssh.sh --key=<pem file>

DKube Installation

The rest of the installation is executed from the $HOME/.dkube folder.

There are 2 configuration files that need to be edited for installation of k8s and DKube.

File        Description
k8s.ini     Configuration for cluster node setup
dkube.ini   Configuration for DKube installation

Important

Both ini files should be configured before executing any commands

Editing the DKube ini File

Before installing DKube, the dkube.ini configuration file must be completed.

[Image: dkube.ini file for EKS, basic HA configuration]

Field           Value
KUBE_PROVIDER   eks
HA              Set true or false to enable/disable DKube resiliency
USERNAME        User-chosen initial login username
PASSWORD        User-chosen initial login password
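As a sketch, the completed fields might look as follows (the username and password are placeholders for the user-chosen values):

KUBE_PROVIDER = eks
HA = true
USERNAME = <initial login username>
PASSWORD = <initial login password>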

Resilient Operation

DKube can run as a resilient system that guarantees the databases remain usable when one of the nodes becomes inoperable. This requires at least 3 schedulable nodes in the cluster, as explained in the section Cluster and DKube Resiliency. Note that this applies only to DKube resiliency: the k8s cluster itself can be resilient or not and still run DKube in HA mode, as long as the DKube resiliency requirements are met. If you have provided that minimum configuration, you can set the HA field to true for resilient DKube operation.

Username and Password

This provides the credentials for initial DKube local login. The initial login user has both Operator and Data Scientist access.

The Username must not be one of the following reserved names:

  • dkube

  • monitoring

  • kubeflow

Storage Options

The storage options are configured in the [STORAGE] section of the dkube.ini file. The settings depend upon the type of storage configured, and whether the DKube installation will be HA or non-HA.

Storage Type   Instructions
Local          DKube Installation with Local Storage
NFS            DKube Installation with NFS
Ceph           DKube Installation with Ceph

DKube Installation with Local Storage

DKube can be configured to use local storage on the nodes. The storage configuration depends upon whether DKube is in HA or non-HA mode. To select local storage, set the following field:

Field          Value
STORAGE_TYPE   disk

The STORAGE_DISK_NODE field depends upon the resiliency configuration (HA or non-HA).

Field               Resiliency   Value
STORAGE_DISK_NODE   non-HA       EKS host name
STORAGE_DISK_NODE   HA           Value ignored - DKube will create an internal Ceph cluster using the disks from all of the nodes

The “EKS Host Name” for each node in the cluster can be identified using the “kubectl get nodes” command. For this field, choose the node that will be used for DKube storage.

[Image: dkube.ini file for EKS, non-HA storage]
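A sketch of the [STORAGE] entries for a non-HA installation with local storage, using a hypothetical EKS host name taken from the "kubectl get nodes" output:

[STORAGE]
STORAGE_TYPE = disk
STORAGE_DISK_NODE = ip-192-168-12-34.us-west-2.compute.internal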

Proceed to Cluster Access Options.

DKube Installation with NFS

NFS is configured the same way for HA and non-HA. To configure an external NFS server for DKube use, fill in the following fields:

Field                Value
STORAGE_TYPE         nfs
STORAGE_NFS_SERVER   Internal IP address of the NFS server
STORAGE_NFS_PATH     Absolute path of the exported share

Note

The path must exist on the share, but should not be mounted. DKube will perform its own mount

[Image: dkube.ini file with NFS storage configuration]
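A sketch of the [STORAGE] entries for an external NFS server (the server address and export path below are examples):

[STORAGE]
STORAGE_TYPE = nfs
STORAGE_NFS_SERVER = 10.0.0.5
STORAGE_NFS_PATH = /exports/dkube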

Proceed to Cluster Access Options.

DKube Installation with Ceph

Ceph is configured the same way for HA and non-HA. For an external Ceph configuration, fill in the following fields:

Field                   Value
STORAGE_TYPE            ceph
STORAGE_CEPH_MONITORS   IP addresses of the Ceph monitors
STORAGE_CEPH_SECRET     Ceph token

Important

Ceph must be installed with 3 monitors

[Image: dkube.ini file for Ceph]
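A sketch of the [STORAGE] entries for an external Ceph cluster. The monitor addresses are examples, the comma-separated address:port syntax is an assumption, and the secret is a placeholder for the Ceph token:

[STORAGE]
STORAGE_TYPE = ceph
STORAGE_CEPH_MONITORS = 10.0.0.11:6789,10.0.0.12:6789,10.0.0.13:6789
STORAGE_CEPH_SECRET = <ceph token>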

Cluster Access Options

Cluster access is configured in the [EXTERNAL] section of the dkube.ini file. The fields should be configured as follows, depending upon the load balancer installed.

IP Access or External Load Balancer

Use the following configuration if the cluster is accessed by:

  • The IPs of the cluster nodes, or

  • A VIP on a load balancer that is external to the k8s cluster

Field                  Value
ACCESS                 nodeport
INSTALL_LOADBALANCER   false

[Image: dkube.ini [EXTERNAL] section, default configuration]
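A sketch of the corresponding [EXTERNAL] entries:

[EXTERNAL]
ACCESS = nodeport
INSTALL_LOADBALANCER = false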

Proceed to Node Setup.

DKube-Installed Load Balancer

If the cluster is accessed by the MetalLB load balancer provided by DKube, use the following configuration:

Field                  Value
ACCESS                 loadbalancer
INSTALL_LOADBALANCER   true
LB_VIP_POOL            Pool of IP addresses used to provision the VIPs for the load balancer
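A sketch of the corresponding [EXTERNAL] entries. The VIP pool value is an example and the range notation is an assumption; use addresses that are routable in your environment:

[EXTERNAL]
ACCESS = loadbalancer
INSTALL_LOADBALANCER = true
LB_VIP_POOL = 192.168.50.10-192.168.50.20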

Proceed to Node Setup.

User-Deployed Load Balancer

If the cluster is accessed by a user-deployed load balancer that is aware of the k8s cluster, use the following configuration:

Field                  Value
ACCESS                 loadbalancer
INSTALL_LOADBALANCER   false
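A sketch of the corresponding [EXTERNAL] entries:

[EXTERNAL]
ACCESS = loadbalancer
INSTALL_LOADBALANCER = false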

Node Setup

Before installing DKube, the appropriate software packages must be installed on each node of the DKube cluster. This is accomplished with the following command:

sudo ./dkubeadm node setup

Completion of the Installation

The rest of the steps depend upon whether DKube is being installed in an HA or non-HA configuration, as described in Cluster and DKube Resiliency.

Configuration                 Instructions
HA DKube Installation         DKube HA Installation on EKS
non-HA DKube Installation     DKube non-HA Installation on EKS

DKube HA Installation on EKS

This section describes how to install DKube in an HA configuration. To install DKube in HA mode, the cluster must have at least 3 schedulable nodes, as described in the section Cluster and DKube Resiliency.

Installing DKube

sudo ./dkubeadm dkube install

[Image: DKube installation log on EKS]

Accessing the Installer UI

The output log provides the URL format for accessing the installer UI. To get the external address to fill in, run the following command:

kubectl get svc -n dkube dkube-installer-service -o=jsonpath='{.status.loadBalancer.ingress[0].hostname}'

Accessing DKube

The DKube UI can be accessed from the installation dashboard. The URL can also be obtained by running the following command:

kubectl get svc -n dkube dkube-proxy -o=jsonpath='{.status.loadBalancer.ingress[0].hostname}'

The output of that command should be put into the following URL format:

https://<IP Address Returned>/

Once complete, go to Initial Login.

DKube non-HA Installation on EKS

This section describes how to install DKube in a non-HA configuration.

Installing DKube

sudo ./dkubeadm dkube install

Accessing the Installer UI

The installation progress can be viewed at the following URL:

http://<masternode-ip>:32323/ui

Accessing DKube

The DKube UI can be accessed from the installation dashboard, or at the following URL:

https://<masternode-ip>:32222

Initial Login

The initial login after installation is accomplished with the username and password entered in the dkube.ini file. Authorization is based on a backend mechanism, which is explained in the User Guide in the section “Getting Started”.


DKube Installation Failure

If the DKube install procedure detects that some of the prerequisites are not correct, the first troubleshooting step is to uninstall, clean up the system, and re-run the node setup. The commands are:

sudo ./dkubeadm dkube uninstall
sudo ./dkubeadm node cleanup
sudo ./dkubeadm node setup

After this successfully completes, run the DKube install command again, as described in Installing DKube. If it still fails, contact your IT manager.


Uninstalling DKube

DKube can be uninstalled by running the following commands from the $HOME/.dkube directory:

sudo ./dkubeadm dkube uninstall
sudo ./dkubeadm node cleanup