Kubernetes Installation
This installation was performed on Ubuntu 20.04.
- Master = kube01
- Worker = kube02
- Worker = kube03
OS Update
The OS should be up to date.
On all nodes:
apt update
apt dist-upgrade -y
Create a user
Create the user that will later be used to administer Kubernetes.
On all nodes:
groupadd -g 8001 k8s
useradd -u 8001 -g 8001 -G sudo -m -s /bin/bash k8s
passwd k8s
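To confirm the account looks as intended, a quick check (UID/GID values as chosen above; the group list is what I would expect on stock Ubuntu, where sudo is GID 27):
id k8s
# expected output along the lines of:
# uid=8001(k8s) gid=8001(k8s) groups=8001(k8s),27(sudo)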
Disable swap
Ideally the system is installed without swap in the first place. If swap is present it must be disabled, since by default the kubelet refuses to run with swap enabled.
On all nodes:
swapoff -a
vim /etc/fstab    # comment out the swap entry
On Ubuntu 20.04 I had the problem that swap was mounted anyway. If that is the case, simply overwrite the swap partition with dd.
On all nodes (replace /dev/sdX3 with your swap partition):
dd if=/dev/zero of=/dev/sdX3 bs=1048576 count=10 oflag=direct status=progress
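Afterwards it is worth verifying that no swap is active anymore:
swapon --show    # no output means no active swap
free -h          # the Swap line should show 0B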
Disable the firewall
On all nodes:
ufw disable
Load kernel modules
On all nodes:
modprobe overlay
modprobe br_netfilter
echo "overlay" | tee -a /etc/modules
echo "br_netfilter" | tee -a /etc/modules
Kernel Settings
On all nodes:
tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

root@kube01:~# sysctl --system
...
* Applying /etc/sysctl.d/kubernetes.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
...
Prepare the hosts file
On all nodes:
vim /etc/hosts

192.168.88.121 kube01
192.168.88.122 kube02
192.168.88.123 kube03
Prepare packages
On all nodes:
apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates vim git nmon
Install the container runtime
I use Docker as the runtime.
On all nodes:
sudo apt install -y docker.io
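A quick sanity check that the runtime is installed (the exact version depends on the Ubuntu package at the time):
docker --version
# e.g. Docker version 19.03.x, build ...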
Create the directory
On all nodes:
mkdir -p /etc/systemd/system/docker.service.d
Create the config
On all nodes:
tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
Start and enable Services
On all nodes:
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
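The daemon.json above switches Docker to the systemd cgroup driver, which should match the kubelet's driver. Verifying that the setting took effect:
docker info 2>/dev/null | grep -i 'cgroup driver'
# expected: Cgroup Driver: systemd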
Install Kubernetes
Currently the Xenial repos are used, as there are no newer ones. The software in them is nevertheless up to date.
On all nodes:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
apt update
apt install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
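To confirm the packages are installed and pinned:
kubeadm version -o short    # e.g. v1.19.3
apt-mark showhold           # should list kubeadm, kubectl, kubelet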
Create the master node
Check the modules
On the master:
root@kube01:~# lsmod | grep br_netfilter
br_netfilter           28672  0
bridge                176128  1 br_netfilter
Enable the kubelet service
On the master:
systemctl enable kubelet
Pull the images
On the master:
root@kube01:~# kubeadm config images pull
W1106 10:37:14.055425    5909 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.19.3
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.19.3
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.19.3
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.19.3
[config/images] Pulled k8s.gcr.io/pause:3.2
[config/images] Pulled k8s.gcr.io/etcd:3.4.13-0
[config/images] Pulled k8s.gcr.io/coredns:1.7.0
Create the cluster
There are a few parameters to consider here:
- --control-plane-endpoint: set the shared endpoint for all control-plane nodes. Can be DNS/IP
- --pod-network-cidr: used to set a Pod network add-on CIDR
- --cri-socket: use to set the runtime socket path if you have more than one container runtime
- --apiserver-advertise-address: set the advertise address for this particular control-plane node's API server
My choice
On the master:
root@kube01:~# kubeadm init --pod-network-cidr=10.0.0.0/16
...
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.88.121:6443 --token ycak9e.7wybnyv36v6rpwxg \
--discovery-token-ca-cert-hash sha256:9a46394c97a91b147fd5eeefb9f6b8d6fd39eed40c9122fc0a2e0c0c141d2543
Configure kubectl for the user k8s
On the master:
root@kube01:~# su - k8s
k8s@kube01:~$ mkdir -p $HOME/.kube
k8s@kube01:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
k8s@kube01:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check the cluster status
On the master:
k8s@kube01:~$ kubectl cluster-info
Kubernetes master is running at https://192.168.88.121:6443
KubeDNS is running at https://192.168.88.121:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Create the cluster network
There is quite a lot of choice here. I decided on Calico.
On the master:
k8s@kube01:~$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
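One thing worth checking is that the Calico IP pool matches the --pod-network-cidr chosen above, since depending on the Calico version the manifest may fall back to its own default of 192.168.0.0/16 unless CALICO_IPV4POOL_CIDR is set in calico.yaml. The pool name default-ipv4-ippool is Calico's default and an assumption here:
kubectl get ippools.crd.projectcalico.org default-ipv4-ippool -o jsonpath='{.spec.cidr}'
# should print 10.0.0.0/16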
Wait until everything is ready
On the master:
watch -n 2 "kubectl get pods --all-namespaces"

NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-7d569d95-k5tmm   1/1     Running   0          50m
kube-system   calico-node-5bj9s                        1/1     Running   0          46m
kube-system   calico-node-5g2pf                        1/1     Running   0          47m
kube-system   calico-node-kxfjc                        1/1     Running   0          50m
kube-system   coredns-f9fd979d6-mns27                  1/1     Running   0          89m
kube-system   coredns-f9fd979d6-qwzgq                  1/1     Running   0          89m
kube-system   etcd-kube01                              1/1     Running   0          90m
kube-system   kube-apiserver-kube01                    1/1     Running   0          90m
kube-system   kube-controller-manager-kube01           1/1     Running   0          90m
kube-system   kube-proxy-2qcz4                         1/1     Running   0          89m
kube-system   kube-proxy-9s9qd                         1/1     Running   0          46m
kube-system   kube-proxy-xfknb                         1/1     Running   0          47m
kube-system   kube-scheduler-kube01                    1/1     Running   0          90m
Check that the master node has status Ready
On the master:
k8s@kube01:~$ kubectl get nodes -o wide
NAME     STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
kube01   Ready    master   90m   v1.19.3   192.168.88.121   <none>        Ubuntu 20.04.1 LTS   5.4.0-52-generic   docker://19.3.13
Join workers to the cluster
Generate the join command
This is only necessary if you no longer have the command from the cluster creation.
On the master:
k8s@kube01:~$ kubeadm token create --print-join-command
kubeadm join 192.168.88.121:6443 --token 34iy1h.c9wbsge61tk22xhb --discovery-token-ca-cert-hash sha256:ea899aafc76fbade4b9c48c812981ed703dbc0523ee8a3282147ea2cb06a5a95
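Tokens are time-limited (24 hours by default), so an older join command may have expired. Existing tokens can be listed with:
kubeadm token list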
Join the workers
On a worker:
root@kube02:~# kubeadm join 192.168.88.121:6443 --token ls037w.js0whlvpk2csm8ck --discovery-token-ca-cert-hash sha256:ea899aafc76fbade4b9c48c812981ed703dbc0523ee8a3282147ea2cb06a5a95
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Check that all workers are ready
On the master:
k8s@kube01:~$ kubectl get nodes -o wide
NAME     STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
kube01   Ready    master   93m   v1.19.3   192.168.88.121   <none>        Ubuntu 20.04.1 LTS   5.4.0-52-generic   docker://19.3.13
kube02   Ready    <none>   49m   v1.19.3   192.168.88.122   <none>        Ubuntu 20.04.1 LTS   5.4.0-52-generic   docker://19.3.13
kube03   Ready    <none>   49m   v1.19.3   192.168.88.123   <none>        Ubuntu 20.04.1 LTS   5.4.0-52-generic   docker://19.3.13
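The workers show ROLES <none> because kubeadm only labels the control plane. Purely for readability, a worker role label can optionally be attached; kubectl derives the ROLES column from node-role.kubernetes.io/* labels:
kubectl label node kube02 node-role.kubernetes.io/worker=
kubectl label node kube03 node-role.kubernetes.io/worker=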
Bash autocompletion for Kubernetes
On the master:
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc
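Optionally, an alias can be set up as well; completion then has to be wired to the alias too (snippet as suggested in the kubectl completion help):
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -F __start_kubectl k' >> ~/.bashrc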
Deploy Test Pod
On the master:
k8s@kube01:~$ kubectl create deployment my-dep --image=nginx
deployment.apps/my-dep created
k8s@kube01:~$ kubectl expose deployment my-dep --name=my-svc --port 80 --type=NodePort
service/my-svc exposed
k8s@kube01:~$ kubectl get deployments.apps -o wide
NAME     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES   SELECTOR
my-dep   1/1     1            1           35s   nginx        nginx    app=my-dep
k8s@kube01:~$ kubectl get pod -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP          NODE     NOMINATED NODE   READINESS GATES
my-dep-5b7868d854-8zz2g   1/1     Running   0          72s   10.0.41.3   kube03   <none>           <none>
k8s@kube01:~$ kubectl get service -o wide
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE    SELECTOR
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        116m   <none>
my-svc       NodePort    10.106.175.99   <none>        80:30265/TCP   10s    app=my-dep
The service is now reachable via NodePort, in this example at http://192.168.88.121:30265.
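Since a NodePort is served on every node, any of the three node IPs should answer. A quick test from any machine that can reach the nodes:
curl -I http://192.168.88.121:30265
# expect an HTTP/1.1 200 OK with an nginx Server header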
