First and foremost, you need an AWS account with API access.
Next, download and install all required software: at a minimum you will need the AWS CLI, kops, kubectl, and helm, all of which are used in the steps below.
Then generate an SSH key pair for the cluster. It is used for the Kubernetes master and worker nodes.
# generate ~/.ssh/testground_rsa
#          ~/.ssh/testground_rsa.pub

$ ssh-keygen -t rsa -b 4096 -C "your_email@example.com" \
    -f ~/.ssh/testground_rsa -q -P ""
Next, create an S3 bucket for the kops state store. This is similar to a Terraform state bucket.
$ aws s3api create-bucket \
    --bucket <bucket_name> \
    --region <region> \
    --create-bucket-configuration LocationConstraint=<region>
Where:

* <bucket_name> is an AWS account-wide unique bucket name to store this cluster's kops state, e.g. kops-backend-bucket-<your_username>.
* <region> is an AWS region like eu-central-1 or us-west-2.
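For example, a filled-in invocation might look like this (the bucket name and region are illustrative; substitute your own):

$ aws s3api create-bucket \
    --bucket kops-backend-bucket-jdoe \
    --region eu-central-1 \
    --create-bucket-configuration LocationConstraint=eu-central-1

Note that for the us-east-1 region the --create-bucket-configuration flag must be omitted.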
Next, decide on the following cluster settings:

* a cluster name (for example name.k8s.local)
* the AWS region
* AWS availability zone A (not a region; for example us-west-2a) - used for the master node and worker nodes
* AWS availability zone B (not a region; for example us-west-2b) - used for more worker nodes
* the kops state store bucket (the bucket created in the section above)
* the number of worker nodes
* the master node instance type (read the best practices at https://kubernetes.io/docs/setup/best-practices/cluster-large/#size-of-master-and-master-components)
* the worker node instance type
* the location of your cluster SSH public key (for example ~/.ssh/testground_rsa.pub generated above)
* a team and project name - these values are used as tags in AWS for cost allocation purposes
You might want to add them to your rc file (.zshrc, .bashrc, etc.), or to an .env.sh file that you source.
export NAME=<desired kubernetes cluster name (e.g. mycluster.k8s.local)>
export KOPS_STATE_STORE=s3://<kops state s3 bucket>
export AWS_REGION=<aws region, for example eu-central-1>
export ZONE_A=<aws availability zone, for example eu-central-1a>
export ZONE_B=<aws availability zone, for example eu-central-1b>
export WORKER_NODES=4
export MASTER_NODE_TYPE=c5.2xlarge
export WORKER_NODE_TYPE=c5.2xlarge
export PUBKEY=$HOME/.ssh/testground_rsa.pub
export TEAM=<your team name ; tag is used for cost allocation purposes>
export PROJECT=<your project name ; tag is used for cost allocation purposes>
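If you go the .env.sh route, loading and sanity-checking the values is quick; the file path below is only an example:

# Load the cluster settings into the current shell (example path).
$ source ~/.env.sh

# Quick check that the variables kops will rely on are set.
$ echo "$NAME $AWS_REGION $ZONE_A $ZONE_B $KOPS_STATE_STORE"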
Add the required Helm chart repositories and refresh the index:

$ helm repo add stable https://kubernetes-charts.storage.googleapis.com/
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo add influxdata https://helm.influxdata.com/
$ helm repo update
Create a .env.toml file in your $TESTGROUND_HOME and add your AWS region to the ["aws"]
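As a rough sketch, and assuming the setting in the ["aws"] section is simply named region (check your Testground version's documentation if unsure), you could append it straight from the shell, reusing the AWS_REGION variable exported above:

# Append an ["aws"] section to Testground's .env.toml.
# The key name `region` is an assumption; adjust it if your version differs.
$ cat >> $TESTGROUND_HOME/.env.toml <<EOF
["aws"]
region = "$AWS_REGION"
EOF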
Creating the cluster takes about 10-15 minutes to complete.
Once you run this command, take some time to walk the dog, clean up around the office, or go get yourself some coffee! When you return, your shiny new Kubernetes cluster will be ready to run Testground plans.
$ git clone https://github.com/testground/infra
$ cd infra
$ ./k8s/install.sh ./k8s/cluster.yaml
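Once the script finishes, you can optionally confirm the cluster is healthy before scheduling test plans. These are standard kops and kubectl checks, not part of the install script itself:

# Validate the cluster against the kops state store.
$ kops validate cluster --state $KOPS_STATE_STORE --name $NAME

# You should see one master and $WORKER_NODES worker nodes in the Ready state.
$ kubectl get nodes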
Do not forget to delete the cluster once you are done running test plans.
$ ./k8s/delete.sh
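As an optional extra check (and to avoid surprise AWS charges), you can confirm that no clusters remain in the state store, and remove the state bucket itself if you no longer need it:

# List clusters still registered in the kops state store.
$ kops get clusters --state $KOPS_STATE_STORE

# Only if the bucket is no longer needed: delete it and its contents.
$ aws s3 rb s3://<bucket_name> --force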
To resize the cluster, edit the instance group and change the number of nodes.
$ kops edit ig nodes
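kops edit ig nodes opens the worker instance group spec in your editor. As a rough guide (the exact layout may vary with your kops version), the values to change are spec.minSize and spec.maxSize; you can inspect the current spec first without editing it:

# Print the worker instance group; look for minSize and maxSize under spec.
$ kops get ig nodes --state $KOPS_STATE_STORE --name $NAME -o yaml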
Apply the new configuration:
$ kops update cluster $NAME --yes
Wait for the nodes to come up and for the DaemonSets to be Running on all new nodes:
$ watch 'kubectl get pods'
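As a complementary check, you can also watch the nodes themselves until the new ones report Ready:

$ watch 'kubectl get nodes'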
kops lets you download the entire Kubernetes context config. If you want to let other people on your team connect to your Kubernetes cluster, share this configuration with them.
$ kops export kubecfg --state $KOPS_STATE_STORE --name=$NAME
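For example, a teammate with credentials for the same AWS account can run the export on their own machine and then confirm that kubectl points at the cluster; the commands below are standard kubectl checks shown only for illustration:

# Run on the teammate's machine, using the shared state store and cluster name.
$ kops export kubecfg --state s3://<kops state s3 bucket> --name <cluster name>

# Confirm the active context and connectivity.
$ kubectl config current-context
$ kubectl get nodes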