DevOps Rich - DevOps content for Infrastructure Engineers

Linux - How to Set Up an Apache Web Server

Installation

  • Install apache2:
    sudo apt install apache2
    systemctl status apache2
    lynx http://192.168.101.83
    
  • The main configuration file is /etc/apache2/apache2.conf; the document root is /var/www/html, with a default landing page at /var/www/html/index.html.

Configuring an Additional Site

  • You can host multiple websites using /etc/apache2/sites-available, which contains a configuration file for each website along with its matching criteria. If hosting a single site then 000-default.conf will suffice.
  • Copy the /etc/apache2/sites-available/000-default.conf file.
    • Amend the VirtualHost value to match an additional hostname or IP address, for example (note that the ServerName directive takes no colon):
<VirtualHost 192.168.101.83:80>
<VirtualHost *:80>
ServerName hexale.net
<VirtualHost *:443>
ServerName hexale.net:443
  • Amend the DocumentRoot value to a subdirectory of /var/www.
  • Amend the ErrorLog and CustomLog values to include the site name (a complete example configuration is sketched after this list).
  • You can enable a site configuration using sudo a2ensite hexale.net.conf and then reload the apache2 configuration using sudo systemctl reload apache2. a2dissite may be used to disable a site configuration.

Note: The commands add/delete a symbolic link to the .conf file in /etc/apache2/sites-enabled to enable/disable the site.
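
For reference, a minimal sketch of what the copied configuration might look like after the amendments above (hexale.net and the /var/www/hexale.net document root are assumed example values):

<VirtualHost *:80>
    ServerName hexale.net
    DocumentRoot /var/www/hexale.net
    ErrorLog ${APACHE_LOG_DIR}/hexale.net-error.log
    CustomLog ${APACHE_LOG_DIR}/hexale.net-access.log combined
</VirtualHost>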

Installing Additional Apache Modules

  • apt search libapache2-mod to see a list of modules that may be installed.
  • sudo apt install libapache2-mod-php8.1 will install a module to support PHP.
  • a2enmod run without arguments will list all available modules, and sudo a2enmod php8.1 would enable a specific module. a2dismod may be used to disable a module.
  • Once a module is enabled you must restart the apache2 service using sudo systemctl restart apache2.

Note: apache2 -l will display builtin modules.
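
As a quick check that a newly enabled module works, a minimal sketch assuming the php8.1 module above and the default document root (info.php is just a throwaway example file):

    echo "<?php phpinfo();" | sudo tee /var/www/html/info.php
    curl -s http://localhost/info.php | head
    sudo rm /var/www/html/info.php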

Secure a Site with TLS

  • Enable SSL Apache Module:
    sudo a2enmod ssl
    sudo systemctl restart apache2
    

Note: At this stage a new site configuration will be available at /etc/apache2/sites-available/default-ssl.conf.

  • Generate a certificate
    • Self-signed certificate example:
      sudo mkdir /etc/apache2/certs 
      sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/apache2/certs/mysite.key -out /etc/apache2/certs/mysite.crt
      
    • 3rd Party Certificate Authority example (this generates a CSR, server.csr, to submit to the CA):
      sudo openssl req -new -newkey rsa:2048 -nodes -keyout server.key -out server.csr
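
    • To review the contents of the CSR before submitting it to the CA, you could run (a quick sketch):
      openssl req -in server.csr -noout -text | head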
      
  • Edit /etc/apache2/sites-available/default-ssl.conf and set the SSLCertificateFile and SSLCertificateKeyFile values as follows:
    SSLCertificateFile /etc/apache2/certs/mysite.crt
    SSLCertificateKeyFile /etc/apache2/certs/mysite.key
    
  • Enable the site configuration:
    sudo a2ensite default-ssl.conf
    sudo systemctl reload apache2
    ls /etc/apache2/sites-enabled
    
  • If using a self-signed certificate you may carry out the following additional steps to ensure the certificate is trusted:
    sudo cp /etc/apache2/certs/mysite.crt /usr/local/share/ca-certificates/
    sudo update-ca-certificates
    ls -l /etc/ssl/certs
  • If you are not using a DNS Server you may wish to add an entry to /etc/hosts:
    192.168.101.83 devopsrich.com
    
  • Test connectivity using curl https://devopsrich.com.
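
To inspect the certificate the server actually presents (beyond the basic curl test), a sketch using openssl s_client and the devopsrich.com hosts entry above:

    openssl s_client -connect devopsrich.com:443 -servername devopsrich.com </dev/null | openssl x509 -noout -subject -dates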

Linux - How to Deploy Containers to a Kubernetes Cluster

Prerequisites

  • A working Kubernetes Cluster with kubectl access on the Controller - see Linux - How to Set Up a Kubernetes Cluster below.

Steps - Performed on the Controller

Create K8s Deployment YAML for a Pod and Service

  • Create a directory to store the deployment files: mkdir k8s_services
  • Create a YAML file that describes the Pod and Container specification - vi pod.yml:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-example
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: linuxserver/nginx
      ports:
        - containerPort: 80
          name: "nginx-http"

Note: labels are key/value pairs used for reference. linuxserver/nginx is an image from the https://linuxserver.io repository that supports both x86 and ARM architectures (Raspberry Pis). containerPort is the port the container exposes, at this stage only to the internal K8s network.

  • Create a YAML file that describes the NodePort service, which provides the ability to map a network port on a Pod to the Node it is running on - vi service-nodeport.yml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-example
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      nodePort: 30080
      targetPort: nginx-http
  selector:
    app: nginx

Note: nodePort may be between 30000 and 32767. The selector value references the Pods that the Service applies to, using the Pod's label.

Apply the Deployment YAML to the K8s Cluster

  • Apply the Pod and Container specification: kubectl apply -f pod.yml
    • Check the pod status with kubectl get pods; you can see additional fields with kubectl get pods -o wide.
  • Apply the NodePort service specification: kubectl apply -f service-nodeport.yml
    • Check the status of the service with kubectl get services
    • Test connectivity using:
      curl http://192.168.101.90:30080
      curl http://192.168.101.91:30080
      curl http://192.168.101.92:30080
      

      Note: If you browse to any Node IP address in the cluster on the given port, the request is directed to the Node that the Pod is running on.

Remove the Pod and Service from the K8s Cluster

  • Remove the Pod and Container specification:
    kubectl delete pod nginx-example
    kubectl get pods
    
  • Remove the NodePort service specification:
    kubectl delete service nginx-example
    kubectl get services
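
Equivalently, you can delete both objects using the same YAML files you applied (a sketch):

    kubectl delete -f service-nodeport.yml
    kubectl delete -f pod.yml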
    

Linux - How to Set Up a Kubernetes Cluster

Lab Requirements

  • 3 Virtual Machines with Ubuntu 22.04.1 LTS installed and a static IP address all residing on the same subnet.
    • A Controller (controller) with 2 vCPUs, 2GB vRAM and IP address 192.168.101.90.
    • 2 Nodes (node1 & node2) with 1 vCPU, 2GB vRAM and IP addresses 192.168.101.91 and 192.168.101.92.

Steps

Prerequisites to Setup Kubernetes Cluster - Applies to all VMs

  • Update the Operating System and packages using:
    • sudo apt update
    • sudo apt dist-upgrade
  • Install a container runtime so that K8s may run containers:
    • sudo apt install containerd
    • systemctl status containerd
  • Create a default configuration for containerd:
    • sudo mkdir /etc/containerd
    • containerd config default | sudo tee /etc/containerd/config.toml
  • Set the cgroup driver to systemd by editing sudo vi /etc/containerd/config.toml (a scripted equivalent is sketched after this list):
    • Under [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] set SystemdCgroup = true.
  • Disable swap:
    • sudo swapoff -a
    • free -m
    • Comment out the swap entry within /etc/fstab.
  • Enable IPv4 packet forwarding by editing sudo vi /etc/sysctl.conf and setting net.ipv4.ip_forward=1.
    • sudo sysctl --system reloads the kernel runtime parameters from all configuration files and displays them as they are applied.
  • Enable Kernel modules by editing sudo vi /etc/modules-load.d/k8s.conf and adding:
    br_netfilter
    overlay
    
    • lsmod displays the status of modules in the Linux Kernel.
  • Reboot the Server.
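
If you prefer to script the file edits above rather than make them by hand, a minimal non-interactive sketch of equivalents (review each file afterwards before rebooting):

    # set the containerd cgroup driver to systemd
    sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
    # comment out the swap entry in /etc/fstab
    sudo sed -i '/swap/ s/^/#/' /etc/fstab
    # enable IPv4 forwarding and the required kernel modules
    echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf
    printf 'br_netfilter\noverlay\n' | sudo tee /etc/modules-load.d/k8s.conf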

Install K8s - Applies to all VMs

  • Add the key for the K8s package repository so that your server trusts it:
    • sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
    • ls -l /usr/share/keyrings/
  • Add the K8s package repository as a source:
    • echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
    • sudo cat /etc/apt/sources.list.d/kubernetes.list
  • Install the K8s packages:
    • sudo apt update
    • sudo apt install kubeadm kubectl kubelet
    • sudo apt-mark hold kubelet kubeadm kubectl
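
A quick way to confirm the tools are installed and the packages are pinned (a sketch):

    kubeadm version
    kubectl version --client
    apt-mark showhold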

Configure the K8s Cluster using the Controller

  • Initialise the K8s Cluster:
    • sudo kubeadm init --control-plane-endpoint=192.168.101.90 --node-name controller --pod-network-cidr=10.244.0.0/16
      • control-plane-endpoint is the IP address of the controller server.
      • node-name is the hostname of the controller server.
      • pod-network-cidr is an internal IP address space used within the K8s Cluster. Note: 10.244.0.0/16 is the default range expected by the flannel overlay network deployed later; if you change this value you would have to change other settings too, so there should be no reason to change it.
  • The output of kubeadm init will generate a join command that may be run on each worker node to add it to the K8s Cluster:
    sudo kubeadm join 192.168.101.90:6443 --token 1u8i34.gr3ji3jxl7qj0x9m --discovery-token-ca-cert-hash sha256:c8df85b95eb98f2cf21ef0b6382d478d596245850a27a46d92587a70e248d6d3
    
  • You will also receive a set of commands that you may run in the context of your standard user to allow the K8s Cluster to be managed without root or sudo access:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    ls -l ~/.kube
    
  • kubectl get pods --all-namespaces displays the kube-system pods running on the controller. Note: the coredns pods are in a pending state because an overlay network has not been deployed yet. The overlay network is a network layered on top of another network, used here for pod-to-pod communication between nodes.
  • Deploy a network model using:
    kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
    kubectl get pods --all-namespaces
    

    Note: There are multiple network models but flannel is fine for a lab.

Join the Worker Nodes to the K8s Cluster

  • Use the command output by the K8s Cluster initialisation:
    sudo kubeadm join 192.168.101.90:6443 --token 1u8i34.gr3ji3jxl7qj0x9m --discovery-token-ca-cert-hash sha256:c8df85b95eb98f2cf21ef0b6382d478d596245850a27a46d92587a70e248d6d3
    
  • The token has a timeout (24 hours by default), so if you need to regenerate the join command on the Controller run kubeadm token create --print-join-command.
  • From the Controller, check the nodes have joined successfully by running kubectl get nodes.

Troubleshooting Advice for Cluster Initialisation

  • Check the containerd and kubelet services are running: systemctl status containerd kubelet.
  • Review logs for a failed service using journalctl -xu kubelet.
  • Check that all Prerequisites to Setup Kubernetes Cluster are in place.
  • I found an issue with the version of kubelet and had to downgrade to an earlier version using the following commands:
    sudo apt remove --purge kubelet
    sudo apt install kubelet=1.25.5-00
    sudo apt-mark hold kubelet
    sudo apt list --installed | grep kubelet
    sudo kubeadm reset
    sudo kubeadm init --control-plane-endpoint=192.168.101.90 --node-name controller --pod-network-cidr=10.244.0.0/16
    

Troubleshooting Advice for coredns pods in a pending state

  • kubectl describe pod coredns -n kube-system may provide more information.
  • Check whether a node is in a tainted state with kubectl describe node controller | grep Taints. Note: node.kubernetes.io/disk-pressure would indicate that more disk space is required.

Linux - How to Set Up a Virtual Machine Server

Prerequisites

  • Ubuntu 22.04.1 LTS installed on a computer. If you are using a Virtual Machine you must ensure that nested virtualisation is supported and enabled. egrep -i '(vmx|svm)' /proc/cpuinfo will show if virtualisation is supported.
  • A Workstation to connect to the Virtual Machine Server.
  • The Virtual Machine Server should have 2 NICs, 1 for SSH connectivity and a 2nd that can be used to configure a network bridge device. If you are using a Virtual Machine you must ensure that MAC address spoofing is allowed for the 2nd NIC.
  • A DHCP Server on the same network as the 2nd NIC.
  • Adequate storage to store vDisks and VM Images. These instructions assume that you have created a new logical volume for this purpose (a sketch of creating one follows below).
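
If the logical volume does not exist yet, a minimal sketch of creating and formatting it, assuming a volume group named vg-images with free space (the names match the fstab entry used later; the 100G size is an arbitrary example):

    sudo lvcreate -L 100G -n lv-images vg-images
    sudo mkfs.ext4 /dev/vg-images/lv-images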

Steps

Virtual Machine Server Setup

  • Install packages and check libvirtd is running:
    • sudo apt update
    • sudo apt install bridge-utils libvirt-clients libvirt-daemon-system qemu-system-x86
    • systemctl status libvirtd
  • Configure logical volume for vDisk and VM Image storage:
    • sudo vi /etc/fstab and add the following line: /dev/vg-images/lv-images /var/lib/libvirt/images ext4 defaults 0 1
    • Mount the filesystem using sudo mount -a
    • Grant the kvm group permission to /var/lib/libvirt/images using:
      • sudo chown :kvm /var/lib/libvirt/images
      • sudo chmod g+rw /var/lib/libvirt/images
      • sudo systemctl restart libvirtd
      • sudo systemctl status libvirtd
  • Grant your user permission to manage the Virtual Machine Server:
    • sudo usermod -aG kvm rich
    • sudo usermod -aG libvirt rich
  • Update the network configuration to include a network bridge device so that Virtual Machines may connect to the network - sudo vi /etc/netplan/00-installer-config.yaml:
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 192.168.101.83/24
      nameservers:
        addresses: [192.168.101.81]
      routes:
        - to: default
          via: 192.168.101.1
    eth1:
      dhcp4: false

  bridges:
    br0:
      interfaces: [eth1]
      dhcp4: true
      parameters:
        stp: false
        forward-delay: 0
  • sudo netplan apply
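
To confirm the bridge and libvirt are healthy after applying the netplan configuration, a few quick checks (a sketch; brctl comes from the bridge-utils package installed earlier):

    brctl show br0
    ip addr show br0
    virsh list --all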

Virtual Machine Manager Workstation Setup

  • Install Virtual Machine Manager to connect to Virtual Machine Server:
    • sudo apt install ssh-askpass virt-manager
    • Connect to the Virtual Machine Server using SSH before you attempt to use Virtual Machine Manager.
    • Open Virtual Machine Manager and add a connection to the Virtual Machine Server (File > Add Connection, connecting over SSH).
  • Create a storage pool to store ISOs:
    • Right-click the Connection > Details, then select the Storage tab.
    • Create a storage pool with Target Path: /var/lib/libvirt/images/ISO.

Virtual Machine Server ISO Library Setup

  • Make sure the permissions are set correctly for /var/lib/libvirt/images/ISO:
    • sudo chown root:kvm /var/lib/libvirt/images/ISO
    • sudo chmod g+rw /var/lib/libvirt/images/ISO
  • Download ISO Image ready to create a Virtual Machine:
    • cd /var/lib/libvirt/images/ISO
    • wget https://releases.ubuntu.com/22.04.1/ubuntu-22.04.1-live-server-amd64.iso

Virtual Machine Deployment

  • Within Virtual Machine Manager, right-click a Connection and create New VM.
    • Use the ISO Library path to select an ISO.
    • A Bridge device… with Device name: br0 should be set for your Network selection.

Linux OpenSUSE - Enable Hyper-V Enhanced Session

Assumptions

  • A Linux OpenSUSE Virtual Machine is already configured and working using a console connection.
  • Wayland is in use as the Windowing System.
  • A GNOME desktop environment is being used.

Steps

  • Install the hyper-v-enhanced-session package:
    sudo zypper install hyper-v-enhanced-session
    
  • Configure Xorg as the Windowing System.
  • Create a certificate and private key for xrdp:
    openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem -days 365
    
  • Edit the xrdp configuration - sudo vi /etc/xrdp/xrdp.ini - to use the cert.pem and key.pem files from the previous step (see the snippet after this list).
  • Create an xrdp session preference file startwm.sh in the user's home directory:
    cp /etc/xrdp/startwm.sh.userwindowmanager-sample ~/
    cd ~
    mv startwm.sh.userwindowmanager-sample startwm.sh
    
  • Edit the xrdp session preference file - vi startwm.sh - and uncomment PREF_SESSION='gnome'.
  • Reboot the computer for the settings to apply.
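
For reference, the xrdp.ini certificate settings might look like the following (a sketch assuming the cert.pem and key.pem files were moved to /etc/xrdp; adjust the paths to wherever you stored them):

    [Globals]
    certificate=/etc/xrdp/cert.pem
    key_file=/etc/xrdp/key.pem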