Kubernetes commands
General#
For more detail: Helm in short, Install Helm (prefer Scoop for Windows users), Install krew.
docker login
docker push <username>/<image>:<tag>
# =========
kubectl config get-contexts
kubectl config current-context
kubectl config set-context --current --namespace=<desired-default-namespace>
# =========
kubectl get ns
kubectl create ns <namespace>
# =========
# Set namespace for the current session
kubectl config set-context --current --namespace=<namespace>
# Validate it
kubectl config view --minify | grep namespace:
# =========
kubectl apply -f <file name> [-n] <namespace>
kubectl get deploy [-n] <namespace>
kubectl rollout restart deploy <deployment name>
kubectl delete deploy <deployment name> [-n] <namespace>
kubectl get pods [-n] <namespace>
# =========
kubectl exec -it <pod name> -- /bin/bash
# =========
# By default, Services are assigned a DNS A record of the form <service-name>.<namespace>.svc.<cluster-domain>
# The suffix usually looks like .svc.cluster.local
kubectl get svc [-n] <namespace>
kubectl delete svc <service name> [-n] <namespace>
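# To check Service DNS from inside the cluster, a quick sketch: run a throwaway pod and resolve the name
# (the pod name dns-test and the busybox image tag are arbitrary choices here)
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup <service-name>.<namespace>.svc.cluster.local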
# =========
kubectl get sc [-n] <namespace> # storage class
kubectl get pvc [-n] <namespace> # Persistent volume claim
# =========
kubectl get ingress [-n] <namespace> [-o yaml]
kubectl describe ingress <ingress name>
kubectl get ingressclass # IngressClass is cluster-scoped, no namespace needed
kubectl edit deploy my-release-nginx-ingress
# =========
kubectl get secrets
kubectl create secret generic <secret name> --from-literal=<key>="<value>"
kubectl delete secret <secret name>
# =========
echo '<plain text>' | base64
echo '<base64 text>' | base64 --decode
base64 <plain text file> [> <output base64 file>]
base64 -d <base64 file> [> <output plain text file>]
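# To read a stored Secret value back (Secret data is base64-encoded), a small sketch using placeholder names
kubectl get secret <secret name> -o jsonpath='{.data.<key>}' | base64 --decode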
K3s#
References: Install kubectl, Install Helm, Connect kubectl to k3s, Default pods and services DNS.
# K3s installation without Traefik
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable=traefik" sh -s -
# Connect kubectl to k3s
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
# If you get a permission error, copy the config file and grant yourself read permission on the copy
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chmod +r ~/.kube/config
export KUBECONFIG=~/.kube/config
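# Quick check that kubectl is talking to the k3s cluster
kubectl get nodes
kubectl cluster-info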
# Nginx installation script
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx --create-namespace \
--set "controller.extraArgs.enable-ssl-passthrough="
# helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
# --set "controller.extraArgs.enable-ssl-passthrough="
# Check that SSL passthrough is enabled (needs the controller pod name)
kubectl exec -n ingress-nginx ingress-nginx-controller-578c8cd8f4-jtgfs -- cat nginx.conf | grep is_ssl
# kubectl ingress-nginx backends -n ingress-nginx | grep sslPassthrough
# Cert-manager, visit https://cert-manager.io/docs/installation/helm/ for latest version
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.11.0 \
--set installCRDs=true
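# Quick check that the cert-manager pods came up (namespace matches the install above);
# expect the cert-manager, cainjector and webhook pods in Running state
kubectl get pods --namespace cert-manager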
# Install krew first
# Install nginx plugin
kubectl krew install ingress-nginx
Helm#
# Create a chart scaffold in its own folder; the structure is documented at https://helm.sh/docs/topics/charts/#the-chart-file-structure
helm create <application-name>
# Render the YAML from the chart templates locally to test them
helm template <release-name> <folder-location>
# Install the application into the Kubernetes cluster using the chart
helm install <release-name> <folder-location> [--values <value-file-path>] [--namespace <namespace>]
# List releases
# Every time a chart is deployed, Helm creates a release and tracks its revisions
helm list
# Upgrade a release to a new version of a chart
helm upgrade <release-name> <folder-location> [--values <value-file-path>] [--namespace <namespace>]
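# A minimal values override sketch, assuming the default scaffold produced by helm create
# (replicaCount and image.tag exist in that scaffold); the file name my-values.yaml is arbitrary
cat > my-values.yaml <<'EOF'
replicaCount: 2
image:
  tag: "1.0.1"
EOF
helm upgrade <release-name> <folder-location> --values my-values.yaml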
Ingress | TLS#
Installing Nginx Ingress
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx --create-namespace \
--set "controller.extraArgs.enable-ssl-passthrough="
Ensure:#
- The domains are routable
- NGINX Ingress and cert-manager are in place
- The cluster issuer is in place
- Ingress resources have:
  - annotations: cert-manager.io/cluster-issuer: "ClusterIssuerName"
  - spec.tls: hosts and secretName in place (the certificate and key will be generated and stored under this name)
  - spec.ingressClassName: nginx
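A minimal Ingress sketch tying these pieces together; the host, backend Service name, port, and issuer name are hypothetical placeholders:
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"   # must match your ClusterIssuer name
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - my-app.example.com
      secretName: my-app-tls          # cert-manager stores the certificate and key in this Secret
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app          # hypothetical backend Service
                port:
                  number: 80
EOF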
How does this work#
cert-manager has a sub-component called ingress-shim that watches Ingress resources across your cluster. When it observes an Ingress with the annotations described in the Supported Annotations section, it ensures that a Certificate resource, named after the tls.secretName field and configured as described on the Ingress, exists in the Ingress's namespace.
Whenever the annotation is in place, cert-manager sets up a challenge key that is reachable via the domain name, then requests a certificate from the Let's Encrypt ACME server. That server attempts to pull the key back from the domain and verify it. If verification succeeds, the certificate is generated and automatically stored in the corresponding Certificate and Secret resources within your cluster. NGINX Ingress handles the rest of HTTPS based on that material.
Cert manager#
Basically a set of resources that help manage certificates for websites. The three most commonly used parts are Issuer, Certificate, and ACME. More detail.
In short:
- Issuer: A resource that can be either cluster-scoped or namespace-scoped. It represents a certificate authority (CA) that is able to generate signed certificates by honoring certificate signing requests.
- Certificate: A resource that defines a desired X.509 certificate which will be renewed and kept up to date. It is a namespaced resource that references an Issuer or ClusterIssuer, which determines what will honor the certificate request.
- ACME: A standardized process to acquire the actual certificate (one per domain, not the resource definition above).
  - HTTP01: A mechanism for the ACME server to validate that the domain is actually under the requester's control:
    - The owner has a registered domain that is already routable over the internet.
    - A URL under that domain presents a computed key.
    - When an HTTP01 challenge is created, cert-manager automatically configures your cluster ingress to route traffic for this URL to a small web server that presents this key.
    - The ACME server is then able to get this key from this URL over the internet.
    - Therefore the ACME server can validate that you are the owner of the domain.
- For the Self-signed and CA approaches, the Certificate must be created manually.
General flow
- ACME: Install cert-manager + ingress => Issuer => Ingress (see the ClusterIssuer sketch below).
- Self-signed: Install cert-manager + ingress => Issuer => Certificate => Ingress
- CA: Install cert-manager + ingress => Secret => Issuer => Certificate => Ingress
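A ClusterIssuer sketch for the ACME flow, using Let's Encrypt with the HTTP01 solver; the issuer name, email, and account-key Secret name are placeholders:
kubectl apply -f - <<'EOF'
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com                    # placeholder contact email
    privateKeySecretRef:
      name: letsencrypt-prod-account-key      # Secret that stores the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx
EOF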
Cert-manager verifier: confirms cert-manager was successfully installed and is ready to use. Cert-manager kubectl plugin: a kubectl plugin that provides a more convenient CLI experience for working with cert-manager itself.
A useful tutorial, along with the official tutorial on securing ingress using the ACME process.
Monitoring#
Prometheus and Grafana#
Helm approach#
kube-prometheus-stack is a community chart that installs the Prometheus monitoring stack (Prometheus Operator, Prometheus, Alertmanager, Grafana) on Kubernetes.
# Create a namespace first
kubectl create ns monitoring
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
# Templating script
# helm template [RELEASE_NAME] prometheus-community/kube-prometheus-stack [--values pathToValueFile > pathToOutputFile]
helm install [RELEASE_NAME] prometheus-community/kube-prometheus-stack [--values pathToValueFile -n monitoring]
# Upgrading script
# helm upgrade [RELEASE_NAME] prometheus-community/kube-prometheus-stack [-n monitoring]
# Uninstall script
# helm uninstall [RELEASE_NAME] [-n monitoring]
# kubectl delete crd alertmanagerconfigs.monitoring.coreos.com
# kubectl delete crd alertmanagers.monitoring.coreos.com
# kubectl delete crd podmonitors.monitoring.coreos.com
# kubectl delete crd probes.monitoring.coreos.com
# kubectl delete crd prometheuses.monitoring.coreos.com
# kubectl delete crd prometheusrules.monitoring.coreos.com
# kubectl delete crd servicemonitors.monitoring.coreos.com
# kubectl delete crd thanosrulers.monitoring.coreos.com
Default user/password: admin/prom-operator.
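To reach the Grafana UI locally, a port-forward sketch; the Service name and port here are assumptions based on the chart's default naming (<release-name>-grafana on port 80), adjust if customized:
kubectl port-forward svc/<release-name>-grafana 3000:80 -n monitoring
# Then open http://localhost:3000 and log in with admin / prom-operator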
Manual approach#
- Prometheus Operator
  - Service monitor: allows Prometheus to find and scrape metrics from other resources. Prometheus points at the ServiceMonitor to grab the metrics that have already been scraped. This ServiceMonitor can be placed as part of gitops (a hedged ServiceMonitor sketch follows this list).
- Monitoring components:
  - Application monitoring:
    - Custom metrics: everything the application needs to trace (like requests/s, time per request to calculate averages, ...), typically exposed as Counter and Histogram metrics.
    - Process metrics: resources the developer does not control (CPU/memory/network, ...). Some components need to be deployed outside the application's own monitoring code, since they track things the developer does not control.
  - Infrastructure monitoring:
    - Node exporter: roughly equivalent to a single service deployed on each node of the cluster. It grabs all the information needed to track the health of the node.
  - Kubernetes monitoring:
    - kube-state-metrics: monitors pods and workloads (CPU, memory, network of all the pods).
    - API server: life cycle of the pods, daemon sets, deployments, workload status, ...
    - Kubelet: container metrics (CPU, memory, network, ...)
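A ServiceMonitor sketch for the ServiceMonitor idea above; the names, selector label, port name, and namespaces are hypothetical and must match the target Service and your Helm release:
kubectl apply -f - <<'EOF'
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: monitoring
  labels:
    release: kube-prometheus-stack    # assumption: the stack's Prometheus selects ServiceMonitors by the release label
spec:
  selector:
    matchLabels:
      app: my-app                     # must match the labels on the target Service
  namespaceSelector:
    matchNames:
      - default                       # namespace where the target Service lives
  endpoints:
    - port: http                      # named port on the Service that exposes /metrics
      path: /metrics
      interval: 30s
EOF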