diff --git a/G000 - Table Of Contents.md b/G000 - Table Of Contents.md index 48f87fd..915c7c9 100644 --- a/G000 - Table Of Contents.md +++ b/G000 - Table Of Contents.md @@ -337,62 +337,17 @@ - [Be watchful of your system's resources usage](G032%20-%20Deploying%20services%2001%20~%20Considerations.md#be-watchful-of-your-systems-resources-usage) - [Do not fill your cluster up to the brim](G032%20-%20Deploying%20services%2001%20~%20Considerations.md#do-not-fill-your-cluster-up-to-the-brim) -### [**G033** - Deploying services 02 ~ **Nextcloud - Part 1** - Outlining setup, arranging storage and choosing service IPs](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%201%20-%20Outlining%20setup%2C%20arranging%20storage%20and%20choosing%20service%20IPs.md#g033-deploying-services-02-nextcloud-part-1-outlining-setup-arranging-storage-and-choosing-service-ips) - -- [Outlining Nextcloud's setup](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%201%20-%20Outlining%20setup%2C%20arranging%20storage%20and%20choosing%20service%20IPs.md#outlining-nextclouds-setup) -- [Setting up new storage drives in the K3s agent](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%201%20-%20Outlining%20setup%2C%20arranging%20storage%20and%20choosing%20service%20IPs.md#setting-up-new-storage-drives-in-the-k3s-agent) -- [Choosing static cluster IPs for Nextcloud related services](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%201%20-%20Outlining%20setup%2C%20arranging%20storage%20and%20choosing%20service%20IPs.md#choosing-static-cluster-ips-for-nextcloud-related-services) -- [Relevant system paths](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%201%20-%20Outlining%20setup%2C%20arranging%20storage%20and%20choosing%20service%20IPs.md#relevant-system-paths) - -### [**G033** - Deploying services 02 ~ **Nextcloud - Part 2** - Redis cache server](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%202%20-%20Redis%20cache%20server.md#g033-deploying-services-02-nextcloud-part-2-redis-cache-server) - -- [Kustomize project folders for Nextcloud and Redis](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%202%20-%20Redis%20cache%20server.md#kustomize-project-folders-for-nextcloud-and-redis) -- [Redis configuration file](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%202%20-%20Redis%20cache%20server.md#redis-configuration-file) -- [Redis password](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%202%20-%20Redis%20cache%20server.md#redis-password) -- [Redis Deployment resource](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%202%20-%20Redis%20cache%20server.md#redis-deployment-resource) -- [Redis Service resource](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%202%20-%20Redis%20cache%20server.md#redis-service-resource) -- [Redis Kustomize project](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%202%20-%20Redis%20cache%20server.md#redis-kustomize-project) -- [Don't deploy this Redis project on its own](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%202%20-%20Redis%20cache%20server.md#dont-deploy-this-redis-project-on-its-own) -- [Relevant system paths](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%202%20-%20Redis%20cache%20server.md#relevant-system-paths) -- [References](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%202%20-%20Redis%20cache%20server.md#references) - -### [**G033** - Deploying services 02 ~ 
**Nextcloud - Part 3** - MariaDB database server](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%203%20-%20MariaDB%20database%20server.md#g033-deploying-services-02-nextcloud-part-3-mariadb-database-server) - -- [MariaDB Kustomize project's folders](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%203%20-%20MariaDB%20database%20server.md#mariadb-kustomize-projects-folders) -- [MariaDB configuration files](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%203%20-%20MariaDB%20database%20server.md#mariadb-configuration-files) -- [MariaDB passwords](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%203%20-%20MariaDB%20database%20server.md#mariadb-passwords) -- [MariaDB storage](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%203%20-%20MariaDB%20database%20server.md#mariadb-storage) -- [MariaDB StatefulSet resource](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%203%20-%20MariaDB%20database%20server.md#mariadb-statefulset-resource) -- [MariaDB Service resource](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%203%20-%20MariaDB%20database%20server.md#mariadb-service-resource) -- [MariaDB Kustomize project](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%203%20-%20MariaDB%20database%20server.md#mariadb-kustomize-project) -- [Don't deploy this MariaDB project on its own](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%203%20-%20MariaDB%20database%20server.md#dont-deploy-this-mariadb-project-on-its-own) -- [Relevant system paths](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%203%20-%20MariaDB%20database%20server.md#relevant-system-paths) -- [References](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%203%20-%20MariaDB%20database%20server.md#references) - -### [**G033** - Deploying services 02 ~ **Nextcloud - Part 4** - Nextcloud server](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%204%20-%20Nextcloud%20server.md#g033-deploying-services-02-nextcloud-part-4-nextcloud-server) - -- [Considerations about the Nextcloud server](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%204%20-%20Nextcloud%20server.md#considerations-about-the-nextcloud-server) -- [Nextcloud server Kustomize project's folders](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%204%20-%20Nextcloud%20server.md#nextcloud-server-kustomize-projects-folders) -- [Nextcloud server configuration files](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%204%20-%20Nextcloud%20server.md#nextcloud-server-configuration-files) -- [Nextcloud server password](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%204%20-%20Nextcloud%20server.md#nextcloud-server-password) -- [Nextcloud server storage](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%204%20-%20Nextcloud%20server.md#nextcloud-server-storage) -- [Nextcloud server Stateful resource](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%204%20-%20Nextcloud%20server.md#nextcloud-server-stateful-resource) -- [Nextcloud server Service resource](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%204%20-%20Nextcloud%20server.md#nextcloud-server-service-resource) -- [Nextcloud server Kustomize project](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%204%20-%20Nextcloud%20server.md#nextcloud-server-kustomize-project) -- [Don't deploy this Nextcloud server project on its 
own](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%204%20-%20Nextcloud%20server.md#dont-deploy-this-nextcloud-server-project-on-its-own)
-- [Background jobs on Nextcloud](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%204%20-%20Nextcloud%20server.md#background-jobs-on-nextcloud)
-- [Relevant system paths](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%204%20-%20Nextcloud%20server.md#relevant-system-paths)
-- [References](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%204%20-%20Nextcloud%20server.md#references)
-
-### [**G033** - Deploying services 02 ~ **Nextcloud - Part 5** - Complete Nextcloud platform](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%205%20-%20Complete%20Nextcloud%20platform.md#g033-deploying-services-02-nextcloud-part-5-complete-nextcloud-platform)
-
-- [Preparing pending Nextcloud platform elements](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%205%20-%20Complete%20Nextcloud%20platform.md#preparing-pending-nextcloud-platform-elements)
-- [Kustomize project for Nextcloud platform](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%205%20-%20Complete%20Nextcloud%20platform.md#kustomize-project-for-nextcloud-platform)
-- [Logging and checking the background jobs configuration on your Nextcloud platform](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%205%20-%20Complete%20Nextcloud%20platform.md#logging-and-checking-the-background-jobs-configuration-on-your-nextcloud-platform)
-- [Security considerations in Nextcloud](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%205%20-%20Complete%20Nextcloud%20platform.md#security-considerations-in-nextcloud)
-- [Nextcloud platform's Kustomize project attached to this guide series](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%205%20-%20Complete%20Nextcloud%20platform.md#nextcloud-platforms-kustomize-project-attached-to-this-guide-series)
-- [Relevant system paths](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%205%20-%20Complete%20Nextcloud%20platform.md#relevant-system-paths)
-- [References](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%205%20-%20Complete%20Nextcloud%20platform.md#references)
+### [**G033** - Deploying services 02 ~ **Ghost - Part 1** - Outlining setup, arranging storage and choosing service IPs](G033%20-%20Deploying%20services%2002%20~%20Ghost%20-%20Part%201%20-%20Outlining%20setup,%20arranging%20storage%20and%20choosing%20service%20IPs.md#g033-deploying-services-02-ghost-part-1-outlining-setup-arranging-storage-and-choosing-service-ips)
+
+- [Beginning with Ghost](G033%20-%20Deploying%20services%2002%20~%20Ghost%20-%20Part%201%20-%20Outlining%20setup,%20arranging%20storage%20and%20choosing%20service%20IPs.md#beginning-with-ghost)
+- [Outlining Ghost's setup](G033%20-%20Deploying%20services%2002%20~%20Ghost%20-%20Part%201%20-%20Outlining%20setup,%20arranging%20storage%20and%20choosing%20service%20IPs.md#outlining-ghosts-setup)
+- [Setting up new storage drives in the K3s agent node](G033%20-%20Deploying%20services%2002%20~%20Ghost%20-%20Part%201%20-%20Outlining%20setup,%20arranging%20storage%20and%20choosing%20service%20IPs.md#setting-up-new-storage-drives-in-the-k3s-agent-node)
+- [Choosing static cluster IPs for Ghost-related services](G033%20-%20Deploying%20services%2002%20~%20Ghost%20-%20Part%201%20-%20Outlining%20setup,%20arranging%20storage%20and%20choosing%20service%20IPs.md#choosing-static-cluster-ips-for-ghost-related-services)
+- [Relevant system paths](G033%20-%20Deploying%20services%2002%20~%20Ghost%20-%20Part%201%20-%20Outlining%20setup,%20arranging%20storage%20and%20choosing%20service%20IPs.md#relevant-system-paths)
+- [References](G033%20-%20Deploying%20services%2002%20~%20Ghost%20-%20Part%201%20-%20Outlining%20setup,%20arranging%20storage%20and%20choosing%20service%20IPs.md#references)
+
+### [**G033** - Deploying services 02 ~ **Ghost - Part 2** - Valkey cache server](G033%20-%20Deploying%20services%2002%20~%20Ghost%20-%20Part%202%20-%20Valkey%20cache%20server.md#g033-deploying-services-02-ghost-part-2-valkey-cache-server)
+
+- [Kustomize project folders for Ghost and Valkey](G033%20-%20Deploying%20services%2002%20~%20Ghost%20-%20Part%202%20-%20Valkey%20cache%20server.md#kustomize-project-folders-for-ghost-and-valkey)
+- [Valkey configuration file](G033%20-%20Deploying%20services%2002%20~%20Ghost%20-%20Part%202%20-%20Valkey%20cache%20server.md#valkey-configuration-file)
+- [Valkey ACL user list](G033%20-%20Deploying%20services%2002%20~%20Ghost%20-%20Part%202%20-%20Valkey%20cache%20server.md#valkey-acl-user-list)
+- [Valkey Deployment resource](G033%20-%20Deploying%20services%2002%20~%20Ghost%20-%20Part%202%20-%20Valkey%20cache%20server.md#valkey-deployment-resource)
+- [Valkey Service resource](G033%20-%20Deploying%20services%2002%20~%20Ghost%20-%20Part%202%20-%20Valkey%20cache%20server.md#valkey-service-resource)
+- [Valkey Kustomize project](G033%20-%20Deploying%20services%2002%20~%20Ghost%20-%20Part%202%20-%20Valkey%20cache%20server.md#valkey-kustomize-project)
+- [Do not deploy this Valkey project on its own](G033%20-%20Deploying%20services%2002%20~%20Ghost%20-%20Part%202%20-%20Valkey%20cache%20server.md#do-not-deploy-this-valkey-project-on-its-own)
+- [Relevant system paths](G033%20-%20Deploying%20services%2002%20~%20Ghost%20-%20Part%202%20-%20Valkey%20cache%20server.md#relevant-system-paths)
+- [References](G033%20-%20Deploying%20services%2002%20~%20Ghost%20-%20Part%202%20-%20Valkey%20cache%20server.md#references)
+
### [**G034** - Deploying services 03 ~ **Gitea - Part 1** - Outlining setup and arranging storage](G034%20-%20Deploying%20services%2003%20~%20Gitea%20-%20Part%201%20-%20Outlining%20setup%20and%20arranging%20storage.md#g034-deploying-services-03-gitea-part-1-outlining-setup-and-arranging-storage)
@@ -675,36 +630,23 @@
- [Relevant system paths](G910%20-%20Appendix%2010%20~%20Setting%20up%20virtual%20network%20with%20Open%20vSwitch.md#relevant-system-paths)
- [References](G910%20-%20Appendix%2010%20~%20Setting%20up%20virtual%20network%20with%20Open%20vSwitch.md#references)

-### [**G911** - Appendix 11 ~ Alternative Nextcloud web server setups](G911%20-%20Appendix%2011%20~%20Alternative%20Nextcloud%20web%20server%20setups.md)
-
-- [Ideas for the Apache setup](G911%20-%20Appendix%2011%20~%20Alternative%20Nextcloud%20web%20server%20setups.md#ideas-for-the-apache-setup)
-- [Nginx 
setup](G911%20-%20Appendix%2011%20~%20Alternative%20Nextcloud%20web%20server%20setups.md#nginx-setup)
-- [Relevant system paths](G911%20-%20Appendix%2011%20~%20Alternative%20Nextcloud%20web%20server%20setups.md#relevant-system-paths)
-- [References](G911%20-%20Appendix%2011%20~%20Alternative%20Nextcloud%20web%20server%20setups.md#references)
-
-### [**G912** - Appendix 12 ~ Checking the K8s API endpoints' status](G912%20-%20Appendix%2012%20~%20Checking%20the%20K8s%20API%20endpoints%20status.md)
+### [**G911** - Appendix 11 ~ Checking the K8s API endpoints' status](G911%20-%20Appendix%2011%20~%20Checking%20the%20K8s%20API%20endpoints%20status.md)

-- [References](G912%20-%20Appendix%2012%20~%20Checking%20the%20K8s%20API%20endpoints%20status.md#references)
+- [References](G911%20-%20Appendix%2011%20~%20Checking%20the%20K8s%20API%20endpoints%20status.md#references)

-### [**G913** - Appendix 13 ~ Post-update manual maintenance tasks for Nextcloud](G913%20-%20Appendix%2013%20~%20Post-update%20manual%20maintenance%20tasks%20for%20Nextcloud.md)
-
-- [Concerns](G913%20-%20Appendix%2013%20~%20Post-update%20manual%20maintenance%20tasks%20for%20Nextcloud.md#concerns)
-- [Procedure](G913%20-%20Appendix%2013%20~%20Post-update%20manual%20maintenance%20tasks%20for%20Nextcloud.md#procedure)
-- [References](G913%20-%20Appendix%2013%20~%20Post-update%20manual%20maintenance%20tasks%20for%20Nextcloud.md#references)
-
-### [**G914** - Appendix 14 ~ Updating MariaDB to a newer major version](G914%20-%20Appendix%2014%20~%20Updating%20MariaDB%20to%20a%20newer%20major%20version.md)
+### [**G912** - Appendix 12 ~ Updating MariaDB to a newer major version](G912%20-%20Appendix%2012%20~%20Updating%20MariaDB%20to%20a%20newer%20major%20version.md)

-- [Concerns](G914%20-%20Appendix%2014%20~%20Updating%20MariaDB%20to%20a%20newer%20major%20version.md#concerns)
-- [Enabling the update procedure](G914%20-%20Appendix%2014%20~%20Updating%20MariaDB%20to%20a%20newer%20major%20version.md#enabling-the-update-procedure)
-- [References](G914%20-%20Appendix%2014%20~%20Updating%20MariaDB%20to%20a%20newer%20major%20version.md#references)
+- [Concerns](G912%20-%20Appendix%2012%20~%20Updating%20MariaDB%20to%20a%20newer%20major%20version.md#concerns)
+- [Enabling the update procedure](G912%20-%20Appendix%2012%20~%20Updating%20MariaDB%20to%20a%20newer%20major%20version.md#enabling-the-update-procedure)
+- [References](G912%20-%20Appendix%2012%20~%20Updating%20MariaDB%20to%20a%20newer%20major%20version.md#references)

-### [**G915** - Appendix 15 ~ Updating PostgreSQL to a newer major version](G915%20-%20Appendix%2015%20~%20Updating%20PostgreSQL%20to%20a%20newer%20major%20version.md)
+### [**G913** - Appendix 13 ~ Updating PostgreSQL to a newer major version](G913%20-%20Appendix%2013%20~%20Updating%20PostgreSQL%20to%20a%20newer%20major%20version.md)

-- [Concerns](G915%20-%20Appendix%2015%20~%20Updating%20PostgreSQL%20to%20a%20newer%20major%20version.md#concerns)
-- [Upgrade procedure (for Gitea's PostgreSQL instance)](G915%20-%20Appendix%2015%20~%20Updating%20PostgreSQL%20to%20a%20newer%20major%20version.md#upgrade-procedure-for-gitea-s-postgresql-instance)
-- [Kustomize project only for updating PostgreSQL included in this guide series](G915%20-%20Appendix%2015%20~%20Updating%20PostgreSQL%20to%20a%20newer%20major%20version.md#kustomize-project-only-for-updating-postgresql-included-in-this-guide-series)
-- [Relevant system paths](G915%20-%20Appendix%2015%20~%20Updating%20PostgreSQL%20to%20a%20newer%20major%20version.md#relevant-system-paths)
- 
[References](G915%20-%20Appendix%2015%20~%20Updating%20PostgreSQL%20to%20a%20newer%20major%20version.md#references) +- [Concerns](G913%20-%20Appendix%2013%20~%20Updating%20PostgreSQL%20to%20a%20newer%20major%20version.md#concerns) +- [Upgrade procedure (for Gitea's PostgreSQL instance)](G913%20-%20Appendix%2013%20~%20Updating%20PostgreSQL%20to%20a%20newer%20major%20version.md#upgrade-procedure-for-gitea-s-postgresql-instance) +- [Kustomize project only for updating PostgreSQL included in this guide series](G913%20-%20Appendix%2013%20~%20Updating%20PostgreSQL%20to%20a%20newer%20major%20version.md#kustomize-project-only-for-updating-postgresql-included-in-this-guide-series) +- [Relevant system paths](G913%20-%20Appendix%2013%20~%20Updating%20PostgreSQL%20to%20a%20newer%20major%20version.md#relevant-system-paths) +- [References](G913%20-%20Appendix%2013%20~%20Updating%20PostgreSQL%20to%20a%20newer%20major%20version.md#references) ## Navigation diff --git a/G001 - Hardware setup.md b/G001 - Hardware setup.md index e24fce7..4b8a597 100644 --- a/G001 - Hardware setup.md +++ b/G001 - Hardware setup.md @@ -9,7 +9,7 @@ ## You just need a capable enough computer -In the [README](README.md) I talk about a small or low-end consumer-grade computer, meaning that you don't need the latest and fastest machine available in the market. Any relatively modern small tower or mini PC, or even a normal laptop, could be adequate. Still, your computer must meet certain minimum requirements, or it won't be able to run the Kubernetes cluster the way it's explained in this guide. +In this project's [README](README.md) I talk about a small or low-end consumer-grade computer, meaning that you don't need the latest and fastest machine available in the market. Any relatively modern small tower or mini PC, or even a normal laptop, could be adequate. Still, your computer must meet certain minimum requirements, or it won't be able to run the Kubernetes cluster the way it's explained in this guide. > [!NOTE] > **Virtualizing the Proxmox VE setup is problematic**\ @@ -19,7 +19,7 @@ In the [README](README.md) I talk about a small or low-end consumer-grade comput ## The reference hardware setup -The hardware used in this guide is an upgraded [Packard Bell iMedia S2883 desktop computer](https://archive.org/details/manualzilla-id-7098831) from around 2014. This quite old and rather limited computer has the following specifications (after the upgrade): +The hardware used in this guide is an upgraded [Packard Bell iMedia S2883 desktop computer](https://archive.org/details/manualzilla-id-7098831) from around 2014. This quite old and limited computer has the following specifications (after the upgrade): - The BIOS firmware is UEFI (Secure Boot) but also provides a CSM mode. 
diff --git a/G018 - K3s cluster setup 01 ~ Requirements and arrangement.md b/G018 - K3s cluster setup 01 ~ Requirements and arrangement.md
index da5b331..7a119da 100644
--- a/G018 - K3s cluster setup 01 ~ Requirements and arrangement.md
+++ b/G018 - K3s cluster setup 01 ~ Requirements and arrangement.md
@@ -3,7 +3,7 @@
- [Gearing up for your K3s cluster](#gearing-up-for-your-k3s-cluster)
- [Requirements for the K3s cluster and the services to deploy in it](#requirements-for-the-k3s-cluster-and-the-services-to-deploy-in-it)
  - [Rancher K3s Kubernetes cluster](#rancher-k3s-kubernetes-cluster)
-  - [Nextcloud](#nextcloud)
+  - [Ghost](#ghost)
  - [Gitea](#gitea)
  - [Kubernetes cluster monitoring stack](#kubernetes-cluster-monitoring-stack)
  - [Prometheus](#prometheus)
@@ -12,7 +12,7 @@
- [References](#references)
  - [About Kubernetes](#about-kubernetes)
  - [About Rancher K3s](#about-rancher-k3s)
-  - [About Nextcloud](#about-nextcloud)
+  - [About Ghost](#about-ghost)
  - [About Gitea](#about-gitea)
  - [About Prometheus](#about-prometheus)
  - [About Grafana](#about-grafana)
@@ -37,18 +37,18 @@ Since the virtual hardware I'm using in this guide series is rather limited, ins
| Server | 2 cores | 2 GB |
| Agent  | 1 core  | 512 MB |

-### Nextcloud
+### Ghost

-[Nextcloud](https://nextcloud.com/) is a software mainly for file syncing and sharing, so it's main requirement will always be storage room for saving data. Still, it has some [recommended system requirements](https://docs.nextcloud.com/server/latest/admin_manual/installation/system_requirements.html) to work properly.
+The [Ghost](https://ghost.org/) publishing platform [has a different set of prerequisites specified in its official documentation depending on how it is installed](https://docs.ghost.org/install). Listed next are the minimum requirements (extracted from those prerequisites) that are relevant to this guide:

-- Database: MySQL 8.4 or MariaDB 10.11.
-- Web server: Apache 2.4 with mod_php or php-fpm.
-- PHP Runtime: 8.3
-- RAM: 512 MiB per process.
+- RAM: 1 GB.
+- CPU: 1 core.
+- OS: Ubuntu 22.04 or 24.04.
+- Database: MySQL 8.

### Gitea

-[Gitea](https://gitea.io/) is a lightweight self-hosted git service, so its main requirement will be storage space.
+[Gitea](https://gitea.io/) is a lightweight self-hosted git service that requires the following:

- Database: PostgreSQL (>= 12), MySQL (>= 8.0), MariaDB (>= 10.4), SQLite (builtin), and MSSQL (>= 2012 SP4).
- Git version >= 2.0.
@@ -75,7 +75,7 @@ For monitoring the K3s Kubernetes cluster, you will install a stack which includ
Now that you have a rough idea about what each software requires, it's time to stablish a proper arrangement for them. So, in my virtual hardware of four-single-threaded cores CPU and 8 GiB of RAM, I'll go with three VMs with the hardware configuration listed next:

-- **One VM with 2 vCPU and 2 GiB of RAM**\
+- **One VM with 2 vCPU and 1.50 GiB of RAM**\
  This will become the K3s **server** (_master_) node of the Kubernetes cluster.

- **Two VMs with 3 vCPU and 2 GiB of RAM**\
@@ -93,9 +93,11 @@ If your hardware setup has more RAM and cores than the one used in this guide, y
- [Docs. Installation. Requirements](https://docs.k3s.io/installation/requirements#hardware)

-### About [Nextcloud](https://nextcloud.com/)
+### About [Ghost](https://ghost.org/)

-- [Nextcloud system requirements](https://docs.nextcloud.com/server/latest/admin_manual/installation/system_requirements.html)
+- [Getting Started. 
How To Install Ghost](https://docs.ghost.org/install)
+  - [Documentation. How To Install Ghost On Ubuntu](https://docs.ghost.org/install/ubuntu)
+  - [How To Install Ghost With Docker (preview)](https://docs.ghost.org/install/docker)

### About [Gitea](https://gitea.io/)

diff --git a/G025 - K3s cluster setup 08 ~ K3s Kubernetes cluster setup.md b/G025 - K3s cluster setup 08 ~ K3s Kubernetes cluster setup.md
index 8f68f30..f4b3ccb 100644
--- a/G025 - K3s cluster setup 08 ~ K3s Kubernetes cluster setup.md
+++ b/G025 - K3s cluster setup 08 ~ K3s Kubernetes cluster setup.md
@@ -107,7 +107,7 @@ A server can also act as an agent at the same time, but this chapter only explai

### Criteria for IPs

-I'll assume the most simple scenario, which is a single local network behind one router. This means that everything falls within a [private network IPv4 range](https://en.wikipedia.org/wiki/Reserved_IP_addresses#IPv4) such as `10.0.0.0/8`, and no other subnets are present.
+This guide assumes the simplest scenario, which is a single local network behind one router. This means that everything falls within a [private network IPv4 range](https://en.wikipedia.org/wiki/Reserved_IP_addresses#IPv4) such as `10.0.0.0/8`, and no other subnets are present.

> [!NOTE]
> **I picked a big private network IP range to minimize conflicts**\
diff --git a/G031 - K3s cluster setup 14 ~ Deploying the Headlamp dashboard.md b/G031 - K3s cluster setup 14 ~ Deploying the Headlamp dashboard.md
index 32418b4..1a5130b 100644
--- a/G031 - K3s cluster setup 14 ~ Deploying the Headlamp dashboard.md
+++ b/G031 - K3s cluster setup 14 ~ Deploying the Headlamp dashboard.md
@@ -151,7 +151,7 @@ All the components will be part of the same Kustomize project for deploying Head
      - headlamp.homelab.cloud
      - hdl.homelab.cloud
    ipAddresses:
-      - 10.7.0.2
+      - 10.7.0.3
    privateKey:
      algorithm: Ed25519
      encoding: PKCS8
diff --git a/G032 - Deploying services 01 ~ Considerations.md b/G032 - Deploying services 01 ~ Considerations.md
index ed4b548..6fb240b 100644
--- a/G032 - Deploying services 01 ~ Considerations.md
+++ b/G032 - Deploying services 01 ~ Considerations.md
@@ -11,7 +11,7 @@ The next chapters of this guide will show you how to deploy in your K3s cluster

## Be watchful of your system's resources usage

-Your K3s Kubernetes cluster is not running "empty" at this point, it already has a fair number of services running which already eat up a good chunk of your hardware's resources. Be always aware of the current resources usage in your setup before you deploy any new app or service in your cluster.
+Your K3s Kubernetes cluster is not running "empty" at this point. It has a number of services running that already eat up a good chunk of your hardware's resources. Always be aware of the current resources usage in your setup before you deploy any new app or service in your cluster.

Remember that you can get the resource usages from your setup in these ways:

@@ -33,4 +33,4 @@ Just because you still have free RAM or a not so high CPU usage, it does not mea

## Navigation

-[<< Previous (**G031. K3s cluster setup 14**)](G031%20-%20K3s%20cluster%20setup%2014%20~%20Deploying%20the%20Headlamp%20dashboard.md) | [+Table Of Contents+](G000%20-%20Table%20Of%20Contents.md) | [Next (**G033. Deploying services 02. Nextcloud Part 1**) >>](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%201%20-%20Outlining%20setup%2C%20arranging%20storage%20and%20choosing%20service%20IPs.md)
+[<< Previous (**G031. 
K3s cluster setup 14**)](G031%20-%20K3s%20cluster%20setup%2014%20~%20Deploying%20the%20Headlamp%20dashboard.md) | [+Table Of Contents+](G000%20-%20Table%20Of%20Contents.md) | [Next (**G033. Deploying services 02. Ghost Part 1**) >>](G033%20-%20Deploying%20services%2002%20~%20Ghost%20-%20Part%201%20-%20Outlining%20setup,%20arranging%20storage%20and%20choosing%20service%20IPs.md)
diff --git a/G033 - Deploying services 02 ~ Ghost - Part 1 - Outlining setup, arranging storage and choosing service IPs.md b/G033 - Deploying services 02 ~ Ghost - Part 1 - Outlining setup, arranging storage and choosing service IPs.md
new file mode 100644
index 0000000..c22e724
--- /dev/null
+++ b/G033 - Deploying services 02 ~ Ghost - Part 1 - Outlining setup, arranging storage and choosing service IPs.md
@@ -0,0 +1,456 @@
+# G033 - Deploying services 02 ~ Ghost - Part 1 - Outlining setup, arranging storage and choosing service IPs
+
+- [Beginning with Ghost](#beginning-with-ghost)
+- [Outlining Ghost's setup](#outlining-ghosts-setup)
+  - [Choosing the K3s agent](#choosing-the-k3s-agent)
+- [Setting up new storage drives in the K3s agent node](#setting-up-new-storage-drives-in-the-k3s-agent-node)
+  - [Adding the new storage drives to the K3s agent node's VM](#adding-the-new-storage-drives-to-the-k3s-agent-nodes-vm)
+  - [LVM storage set up](#lvm-storage-set-up)
+  - [Formatting and mounting the new LVs](#formatting-and-mounting-the-new-lvs)
+  - [Storage mount points for the Ghost pods](#storage-mount-points-for-the-ghost-pods)
+  - [About increasing the size of volumes](#about-increasing-the-size-of-volumes)
+- [Choosing static cluster IPs for Ghost-related services](#choosing-static-cluster-ips-for-ghost-related-services)
+- [Relevant system paths](#relevant-system-paths)
+  - [Folders in K3s agent node's VM](#folders-in-k3s-agent-nodes-vm)
+  - [Files in K3s agent node's VM](#files-in-k3s-agent-nodes-vm)
+- [References](#references)
+  - [Ghost](#ghost)
+  - [Cache servers](#cache-servers)
+  - [Database engines](#database-engines)
+- [Navigation](#navigation)
+
+## Beginning with Ghost
+
+From the services listed in the [chapter **G018**](G018%20-%20K3s%20cluster%20setup%2001%20~%20Requirements%20and%20arrangement.md#ghost), let's begin with the publishing platform **Ghost**. Since it requires configuring and deploying several different components, the procedure for deploying Ghost is split into five parts, this chapter being the first of them.
+
+In this part, you will see how to outline the setup of your Ghost platform, then work on arranging the storage drives needed to store Ghost's data, and finally choose some required IPs.
+
+## Outlining Ghost's setup
+
+First, you must define how you want to set up Ghost in your cluster. This means that you have to decide beforehand how to solve the following points:
+
+- For storing operation-related data, Ghost requires a database. Which one are you going to use, and how will you set it up?
+
+- Ghost can also use a cache server. Which one will you use, and will it be exclusive to the Ghost instance?
+
+- Where in your K3s cluster should Ghost's data be placed?
+
+This guide solves the previous points as follows.
+
+- **Database**\
+  [Ghost's documentation indicates](https://docs.ghost.org/install/ubuntu) using [MySQL](https://www.mysql.com/) as the database, but this guide will use the compatible alternative [MariaDB](https://mariadb.org/) instead, with its data saved on a local SSD storage drive.
+
+- **Cache server**\
+  Ghost can work with [Redis](https://redis.io/), but this guide opts instead for the compatible alternative [Valkey](https://valkey.io/), configured to have data persistence on a local SSD storage drive.
+
+- **Ghost server's data**\
+  Stored in a persistent volume prepared on a local HDD storage drive.
+
+Also be aware that all the services making up this Ghost platform will run on the same K3s agent node. This is because all the local storage will be set up on one agent node, and Kubernetes applies the affinity rule of making pods run on the same nodes that provide their storage.
+
+### Choosing the K3s agent
+
+Your cluster has only two K3s agent nodes, and the two of them are already running services. Choose the one that currently has the lowest CPU and RAM usage of the two. The node picked in this guide is the `k3sagent02` node.
+
+## Setting up new storage drives in the K3s agent node
+
+Given how the K3s cluster has been configured in this guide, the only persistent volumes you can use are local ones. They rely on paths found in the K3s node VM's host system, but you do not want to use the root filesystem in which the underlying Debian OS is installed. It is better and safer to have separate drives for each persistent volume, and you can create those storage drives directly from your Proxmox VE web console.
+
+### Adding the new storage drives to the K3s agent node's VM
+
+Here you are going to create two virtual storage drives (called "hard disks" in Proxmox), one in the real HDD drive and another in the SSD unit:
+
+1. Log in to your Proxmox VE web console and go to the `Hardware` page of your chosen K3s agent's VM. There, click on the `Add` button and then on the `Hard Disk` option:
+
+    ![Add Hard Disk option in K3s agent node](images/g033/pve_k3sagent_hardware_add_hard_disk_option.webp "Add Hard Disk option in K3s agent node")
+
+2. Use the "Hard Disk" form (already seen back in the [chapter **G020**](G020%20-%20K3s%20cluster%20setup%2003%20~%20Debian%20VM%20creation.md#setting-up-a-new-virtual-machine)) to add two "hard disks" to your VM:
+
+    ![Add Hard Disk window for K3s agent node with advanced options enabled](images/g033/pve_k3sagent_hardware_add_hard_disk_window.webp "Add Hard Disk window for K3s agent node with advanced options enabled")
+
+    Notice the highlighted elements in the capture above. When creating each hard disk, make sure the `Advanced` checkbox is enabled and edit only the featured fields as follows:
+
+    - **SSD drive**\
+      Storage `ssd_disks`, Discard `ENABLED`, disk size `10 GiB`, SSD emulation `ENABLED`.
+
+    - **HDD drive**\
+      Storage `hdd_data`, Discard `ENABLED`, disk size `10 GiB`, SSD emulation `DISABLED`.
+
+    After adding the new storage drives, they should appear in your K3s agent node VM's hardware list.
+
+    ![New Hard Disks added to K3s agent node VM](images/g033/pve_k3sagent_hardware_hard_disks_added.webp "New Hard Disks added to K3s agent node VM")
+
+3. 
Open a shell in your K3s agent node VM and check with `fdisk` that the new drives are already active and running: + + ~~~sh + $ sudo fdisk -l + Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors + Disk model: QEMU HARDDISK + Units: sectors of 1 * 512 = 512 bytes + Sector size (logical/physical): 512 bytes / 512 bytes + I/O size (minimum/optimal): 512 bytes / 512 bytes + Disklabel type: dos + Disk identifier: 0x5dc9a39f + + Device Boot Start End Sectors Size Id Type + /dev/sda1 * 2048 1556479 1554432 759M 83 Linux + /dev/sda2 1558526 20969471 19410946 9.3G f W95 Ext'd (LBA) + /dev/sda5 1558528 20969471 19410944 9.3G 8e Linux LVM + + + Disk /dev/mapper/k3snode--vg-root: 9.25 GiB, 9936306176 bytes, 19406848 sectors + Units: sectors of 1 * 512 = 512 bytes + Sector size (logical/physical): 512 bytes / 512 bytes + I/O size (minimum/optimal): 512 bytes / 512 bytes + + + Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors + Disk model: QEMU HARDDISK + Units: sectors of 1 * 512 = 512 bytes + Sector size (logical/physical): 512 bytes / 512 bytes + I/O size (minimum/optimal): 512 bytes / 512 bytes + + + Disk /dev/sdc: 10 GiB, 10737418240 bytes, 20971520 sectors + Disk model: QEMU HARDDISK + Units: sectors of 1 * 512 = 512 bytes + Sector size (logical/physical): 512 bytes / 512 bytes + I/O size (minimum/optimal): 512 bytes / 512 bytes + ~~~ + + The new drives are `/dev/sdb` and `/dev/sdc`. Unsurprisingly, they are of the same `Disk model` as `/dev/sda`: `QEMU HARDDISK`. + +### LVM storage set up + +Your new storage drives are now available in your K3s agent node VM, but you still have to configure the required LVM volumes within them: + +1. Create a new GPT partition on each of the new storage drives with `sgdisk`. Remember that these new drives are the `/dev/sdb` and `/dev/sdc` devices you saw before with `fdisk`: + + ~~~sh + $ sudo sgdisk -N 1 /dev/sdb + $ sudo sgdisk -N 1 /dev/sdc + ~~~ + +2. Check with `fdisk` that now you have a new partition on each storage drive: + + ~~~sh + $ sudo fdisk -l /dev/sdb /dev/sdc + Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors + Disk model: QEMU HARDDISK + Units: sectors of 1 * 512 = 512 bytes + Sector size (logical/physical): 512 bytes / 512 bytes + I/O size (minimum/optimal): 512 bytes / 512 bytes + Disklabel type: gpt + Disk identifier: 0DF6943D-791B-4D7B-9946-36648735901E + + Device Start End Sectors Size Type + /dev/sdb1 2048 20971486 20969439 10G Linux filesystem + + + Disk /dev/sdc: 10 GiB, 10737418240 bytes, 20971520 sectors + Disk model: QEMU HARDDISK + Units: sectors of 1 * 512 = 512 bytes + Sector size (logical/physical): 512 bytes / 512 bytes + I/O size (minimum/optimal): 512 bytes / 512 bytes + Disklabel type: gpt + Disk identifier: FFC445C4-C09D-4689-9A31-7BB4B0DCFB95 + + Device Start End Sectors Size Type + /dev/sdc1 2048 20971486 20969439 10G Linux filesystem + ~~~ + + See above the `/dev/sdb1` and `/dev/sdc1` partitions ("devices" for `fdisk`) under their respective "disks". + +3. Use `pvcreate` to create a new LVM physical volume, or PV, out of each partition: + + ~~~sh + $ sudo pvcreate --metadatasize 10m -y -ff /dev/sdb1 + $ sudo pvcreate --metadatasize 10m -y -ff /dev/sdc1 + ~~~ + + To determine the metadata size, I've used the rule of thumb of allocating 1 MiB per 1 GiB present in the PV. + + Check with `pvs` that the PVs have been created. + + ~~~sh + $ sudo pvs + PV VG Fmt Attr PSize PFree + /dev/sda5 k3snode-vg lvm2 a-- 9.25g 0 + /dev/sdb1 lvm2 --- <10.00g <10.00g + /dev/sdc1 lvm2 --- <10.00g <10.00g + ~~~ + +4. 
You also need to assign a volume group, or VG, to each PV, bearing in mind the following:
+
+    - The two drives are running on different storage hardware, so you must clearly differentiate their storage space.
+    - Ghost's database and cache data will be stored in `/dev/sdb1`, on the SSD drive.
+    - Ghost's other data will be kept in `/dev/sdc1`, on the HDD drive.
+
+    Knowing all this, create two VGs with `vgcreate`:
+
+    ~~~sh
+    $ sudo vgcreate ghost-ssd /dev/sdb1
+    $ sudo vgcreate ghost-hdd /dev/sdc1
+    ~~~
+
+    Notice how each VG's name refers to Ghost and to the kind of underlying drive used. Then, with `pvs` you can see how each PV is now assigned to its respective VG:
+
+    ~~~sh
+    $ sudo pvs
+      PV         VG         Fmt  Attr PSize   PFree
+      /dev/sda5  k3snode-vg lvm2 a--    9.25g      0
+      /dev/sdb1  ghost-ssd  lvm2 a--    9.98g   9.98g
+      /dev/sdc1  ghost-hdd  lvm2 a--    9.98g   9.98g
+    ~~~
+
+    Also check with `vgs` the current status of the VGs in your VM.
+
+    ~~~sh
+    $ sudo vgs
+      VG         #PV #LV #SN Attr   VSize  VFree
+      ghost-hdd    1   0   0 wz--n-  9.98g 9.98g
+      ghost-ssd    1   0   0 wz--n-  9.98g 9.98g
+      k3snode-vg   1   1   0 wz--n-  9.25g     0
+    ~~~
+
+5. Now you can create the required logical volumes, or LVs, on each VG with `lvcreate`. Remember the purpose of each LV and give them meaningful names:
+
+    ~~~sh
+    $ sudo lvcreate -l 70%FREE -n db ghost-ssd
+    $ sudo lvcreate -l 100%FREE -n cache ghost-ssd
+    $ sudo lvcreate -l 100%FREE -n srv ghost-hdd
+    ~~~
+
+    Notice how the `db` LV takes `70%` of the currently free space in the `ghost-ssd` VG, and how the `cache` LV uses all the **remaining** space available on the same `ghost-ssd` VG. On the other hand, `srv` takes all the storage available in the `ghost-hdd` VG. Also see how all the LVs' names are short reminders of what kind of data they will store later.
+
+    Check with `lvs` the new LVs in your VM:
+
+    ~~~sh
+    $ sudo lvs
+      LV    VG         Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
+      srv   ghost-hdd  -wi-a-----  9.98g
+      cache ghost-ssd  -wi-a----- <3.00g
+      db    ghost-ssd  -wi-a----- <6.99g
+      root  k3snode-vg -wi-ao----  9.25g
+    ~~~
+
+    See how `db` has the size corresponding to the 70% (6.99 GiB) of the 10 GiB that were available in the `ghost-ssd` VG, while `cache` took the remaining 30% (3.00 GiB).
+
+    With `vgs` you can verify that there is no space left (`VFree` column) in the VGs:
+
+    ~~~sh
+    $ sudo vgs
+      VG         #PV #LV #SN Attr   VSize  VFree
+      ghost-hdd    1   1   0 wz--n-  9.98g    0
+      ghost-ssd    1   2   0 wz--n-  9.98g    0
+      k3snode-vg   1   1   0 wz--n-  9.25g    0
+    ~~~
+
+    Notice how the count of LVs in the `ghost-ssd` VG is `2`, while in the rest it is just `1`.
+
+### Formatting and mounting the new LVs
+
+Your new LVs need to be formatted as ext4 filesystems and then mounted in the K3s agent node's system:
+
+1. Before you format the new LVs, you need to see their `/dev/mapper/` paths with `fdisk`. To get only the Ghost-related paths, you can filter their lines with `grep`, since the `ghost` string will be part of their paths:
+
+    ~~~sh
+    $ sudo fdisk -l | grep ghost
+    Disk /dev/mapper/ghost--ssd-db: 6.99 GiB, 7503609856 bytes, 14655488 sectors
+    Disk /dev/mapper/ghost--ssd-cache: 3 GiB, 3217031168 bytes, 6283264 sectors
+    Disk /dev/mapper/ghost--hdd-srv: 9.98 GiB, 10720641024 bytes, 20938752 sectors
+    ~~~
+
+2. Call the `mkfs.ext4` command on their `/dev/mapper/ghost` paths:
+
+    ~~~sh
+    $ sudo mkfs.ext4 /dev/mapper/ghost--ssd-db
+    $ sudo mkfs.ext4 /dev/mapper/ghost--ssd-cache
+    $ sudo mkfs.ext4 /dev/mapper/ghost--hdd-srv
+    ~~~
+
+3. 
Create a directory structure that provides mount points for the LVs with `mkdir`:
+
+    ~~~sh
+    $ sudo mkdir -p /mnt/ghost-ssd/{cache,db} /mnt/ghost-hdd/srv
+    ~~~
+
+    You have to use `sudo` to create those folders, because the system will use its `root` user to mount them later on each boot. Check, with the `tree` command, that they have been created correctly:
+
+    ~~~sh
+    $ tree -F /mnt
+    /mnt/
+    ├── ghost-hdd/
+    │   └── srv/
+    └── ghost-ssd/
+        ├── cache/
+        └── db/
+
+    6 directories, 0 files
+    ~~~
+
+4. Using the `mount` command, mount the LVs in their respective mount points:
+
+    ~~~sh
+    $ sudo mount /dev/mapper/ghost--ssd-db /mnt/ghost-ssd/db
+    $ sudo mount /dev/mapper/ghost--ssd-cache /mnt/ghost-ssd/cache
+    $ sudo mount /dev/mapper/ghost--hdd-srv /mnt/ghost-hdd/srv
+    ~~~
+
+    Verify with `df` that they have been mounted in the system. Since you're working in a K3s agent node, you will also see a bunch of containerd-related filesystems mounted. Your newly mounted LVs will appear at the bottom of the list:
+
+    ~~~sh
+    $ df -h
+    Filesystem                    Size  Used Avail Use% Mounted on
+    udev                          965M     0  965M   0% /dev
+    tmpfs                         198M  780K  197M   1% /run
+    /dev/mapper/k3snode--vg-root  9.1G  4.3G  4.4G  50% /
+    tmpfs                         987M     0  987M   0% /dev/shm
+    tmpfs                         5.0M     0  5.0M   0% /run/lock
+    tmpfs                         987M     0  987M   0% /tmp
+    tmpfs                         1.0M     0  1.0M   0% /run/credentials/systemd-journald.service
+    /dev/sda1                     730M  111M  567M  17% /boot
+    tmpfs                         1.0M     0  1.0M   0% /run/credentials/getty@tty1.service
+    shm                            64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/42476a81294a94e7af0604956f2f7bf9f3e6250c78221cc5d0b01d560a5809a5/shm
+    tmpfs                         198M  8.0K  198M   1% /run/user/1000
+    /dev/mapper/ghost--ssd-db     6.8G  1.8M  6.5G   1% /mnt/ghost-ssd/db
+    /dev/mapper/ghost--ssd-cache  2.9G  788K  2.8G   1% /mnt/ghost-ssd/cache
+    /dev/mapper/ghost--hdd-srv    9.8G  2.1M  9.3G   1% /mnt/ghost-hdd/srv
+    ~~~
+
+5. To make the mounts permanent, append them to the `/etc/fstab` file of the VM. First, make a backup of the `fstab` file:
+
+    ~~~sh
+    $ sudo cp /etc/fstab /etc/fstab.bkp
+    ~~~
+
+    Then **append** all these lines to the `fstab` file:
+
+    ~~~sh
+    # Ghost volumes
+    /dev/mapper/ghost--ssd-db /mnt/ghost-ssd/db ext4 defaults,nofail 0 0
+    /dev/mapper/ghost--ssd-cache /mnt/ghost-ssd/cache ext4 defaults,nofail 0 0
+    /dev/mapper/ghost--hdd-srv /mnt/ghost-hdd/srv ext4 defaults,nofail 0 0
+    ~~~
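+
+    To check the new entries without rebooting, you can validate the `fstab` syntax and then try to mount anything not mounted yet. This is just an optional sanity check, assuming the `findmnt` version shipped with your Debian release supports the `--verify` flag (any recent one does):
+
+    ~~~sh
+    $ sudo findmnt --verify
+    $ sudo mount -a
+    ~~~
+
+    If neither command complains about the `ghost` volumes, the new `fstab` lines will be usable on the next boot.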
+### Storage mount points for the Ghost pods
+
+> [!WARNING]
+> **Create an inner mount point for pods**\
+> Do not use the directories where you have mounted the new storage volumes as mount points for the persistent volumes you'll enable later for the Ghost deployment.
+
+Kubernetes pods can change the owner user and group, and also the permission mode, applied to those folders. This can cause a failure when, after a reboot, your K3s agent node tries to mount its storage volumes again. The issue will happen because it will not have the right user or permissions anymore to access the mount point folders. The best thing to do then is to create another folder within each storage volume that can be used safely as a mount point by the Ghost pods:
+
+1. For the LVM storage volumes created before, you have to execute a `mkdir` command like this:
+
+    ~~~sh
+    $ sudo mkdir /mnt/{ghost-hdd/srv,ghost-ssd/cache,ghost-ssd/db}/k3smnt
+    ~~~
+
+    As you did with the mount point folders, these new directories also have to be owned initially by `root`. This is because the K3s service is running under that user.
+
+2. Check the updated folder structure with `tree`:
+
+    ~~~sh
+    $ tree -F /mnt
+    /mnt/
+    ├── ghost-hdd/
+    │   └── srv/
+    │       ├── k3smnt/
+    │       └── lost+found/ [error opening dir]
+    └── ghost-ssd/
+        ├── cache/
+        │   ├── k3smnt/
+        │   └── lost+found/ [error opening dir]
+        └── db/
+            ├── k3smnt/
+            └── lost+found/ [error opening dir]
+
+    12 directories, 0 files
+    ~~~
+
+    Do not mind the `lost+found` folders, they are created automatically by the Linux system.
+
+> [!WARNING]
+> **The `k3smnt` folders exist within the already mounted LVM storage volumes!**\
+> You cannot create those folders without mounting the logical volumes first.
+
+### About increasing the size of volumes
+
+If, after some time using and filling up these volumes, you need to increase their size, take a look at the [appendix chapter **G907**](G907%20-%20Appendix%2007%20~%20Resizing%20a%20root%20LVM%20volume.md). It shows you how to extend a partition and the LVM filesystem within it, although in that case it is done on an LV that also happens to be the root filesystem of a VM.
+
+## Choosing static cluster IPs for Ghost-related services
+
+For all the main components of your Ghost setup, you are going to create `Service` resources. To make them reachable internally to any pod within your Kubernetes cluster, one way is by assigning them a static cluster IP. With it, you get to know beforehand which internal IP the services have, allowing you to point the Ghost server instance to the right ones. To determine which cluster IP to assign to those services, take a look with `kubectl` at which cluster IPs are currently in use in your Kubernetes cluster:
+
+~~~sh
+$ kubectl get svc -A
+NAMESPACE        NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
+cert-manager     cert-manager              ClusterIP      10.43.153.243   <none>        9402/TCP                     21d
+cert-manager     cert-manager-cainjector   ClusterIP      10.43.131.203   <none>        9402/TCP                     21d
+cert-manager     cert-manager-webhook      ClusterIP      10.43.118.87    <none>        443/TCP,9402/TCP             21d
+default          kubernetes                ClusterIP      10.43.0.1       <none>        443/TCP                      51d
+kube-system      headlamp                  LoadBalancer   10.43.119.9     10.7.0.2      80:31146/TCP                 17d
+kube-system      kube-dns                  ClusterIP      10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP       51d
+kube-system      metrics-server            ClusterIP      10.43.50.63     <none>        443/TCP                      38d
+kube-system      traefik                   LoadBalancer   10.43.174.63    10.7.0.0      80:30512/TCP,443:32647/TCP   51d
+kube-system      traefik-dashboard         LoadBalancer   10.43.216.2     10.7.0.1      443:31622/TCP                23d
+metallb-system   metallb-webhook-service   ClusterIP      10.43.126.18    <none>        443/TCP                      47d
+~~~
+
+Check the values under the `CLUSTER-IP` column, and notice how all of them are in the `10.43` subnet. What you have to do now is just choose IPs that fall into that subnet but do not collide with the ones currently in use by other services. Let's say you choose the following ones:
+
+- `10.43.100.1` for the Ghost server instance.
+- `10.43.100.2` for the Valkey cache server instance.
+- `10.43.100.3` for the MariaDB database server instance.
+
+> [!IMPORTANT]
+> **Remember that the internal Kubernetes cluster communications run in a separate virtual network**\
+> In this guide's setup, the cluster IPs will not collide in any way with the external IPs because they exist in completely separate virtual networks run through independent virtual bridges.
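+
+As an optional sanity check, you can confirm that none of the three chosen IPs is already taken. The one-liner below is just one way of doing it: it prints every cluster IP currently assigned and looks for exact matches of the candidates, so any line printed means that IP is already in use:
+
+~~~sh
+$ kubectl get svc -A -o jsonpath='{.items[*].spec.clusterIP}' | tr ' ' '\n' | grep -x -E '10\.43\.100\.[123]' || echo "Chosen IPs are free"
+~~~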
+
+## Relevant system paths
+
+### Folders in K3s agent node's VM
+
+- `/etc`
+- `/mnt`
+- `/mnt/ghost-hdd`
+- `/mnt/ghost-hdd/srv`
+- `/mnt/ghost-hdd/srv/k3smnt`
+- `/mnt/ghost-ssd`
+- `/mnt/ghost-ssd/cache`
+- `/mnt/ghost-ssd/cache/k3smnt`
+- `/mnt/ghost-ssd/db`
+- `/mnt/ghost-ssd/db/k3smnt`
+
+### Files in K3s agent node's VM
+
+- `/dev/mapper/ghost--hdd-srv`
+- `/dev/mapper/ghost--ssd-cache`
+- `/dev/mapper/ghost--ssd-db`
+- `/dev/sdb`
+- `/dev/sdb1`
+- `/dev/sdc`
+- `/dev/sdc1`
+- `/etc/fstab`
+- `/etc/fstab.bkp`
+
+## References
+
+### [Ghost](https://docs.ghost.org/)
+
+- [How To Install Ghost On Ubuntu](https://docs.ghost.org/install/ubuntu)
+
+### Cache servers
+
+- [Redis](https://redis.io/)
+- [Valkey](https://valkey.io/)
+
+### Database engines
+
+- [MariaDB](https://mariadb.org/)
+- [MySQL](https://www.mysql.com/)
+
+## Navigation
+
+[<< Previous (**G032. Deploying services 01**)](G032%20-%20Deploying%20services%2001%20~%20Considerations.md) | [+Table Of Contents+](G000%20-%20Table%20Of%20Contents.md) | [Next (**G033. Deploying services 02. Ghost Part 2**) >>](G033%20-%20Deploying%20services%2002%20~%20Ghost%20-%20Part%202%20-%20Valkey%20cache%20server.md)
\ No newline at end of file
diff --git a/G033 - Deploying services 02 ~ Ghost - Part 2 - Valkey cache server.md b/G033 - Deploying services 02 ~ Ghost - Part 2 - Valkey cache server.md
new file mode 100644
index 0000000..582b9cb
--- /dev/null
+++ b/G033 - Deploying services 02 ~ Ghost - Part 2 - Valkey cache server.md
@@ -0,0 +1,666 @@
+# G033 - Deploying services 02 ~ Ghost - Part 2 - Valkey cache server
+
+- [You can use Valkey instead of Redis as caching server for Ghost](#you-can-use-valkey-instead-of-redis-as-caching-server-for-ghost)
+- [Kustomize project folders for Ghost and Valkey](#kustomize-project-folders-for-ghost-and-valkey)
+- [Valkey configuration file](#valkey-configuration-file)
+- [Valkey ACL user list](#valkey-acl-user-list)
+- [Valkey Deployment resource](#valkey-deployment-resource)
+- [Valkey Service resource](#valkey-service-resource)
+- [Valkey Kustomize project](#valkey-kustomize-project)
+  - [Validating the Kustomize YAML output](#validating-the-kustomize-yaml-output)
+- [Do not deploy this Valkey project on its own](#do-not-deploy-this-valkey-project-on-its-own)
+- [Relevant system paths](#relevant-system-paths)
+  - [Folders in `kubectl` client system](#folders-in-kubectl-client-system)
+  - [Files in `kubectl` client system](#files-in-kubectl-client-system)
+- [References](#references)
+  - [Kubernetes](#kubernetes)
+    - [Concepts](#concepts)
+    - [Tasks](#tasks)
+    - [Tutorials](#tutorials)
+    - [Reference. Kubernetes API](#reference-kubernetes-api)
+    - [Articles about pod scheduling](#articles-about-pod-scheduling)
+    - [Articles about ConfigMaps and Secrets](#articles-about-configmaps-and-secrets)
+  - [Valkey](#valkey)
+    - [Articles about Valkey](#articles-about-valkey)
+  - [Redis](#redis)
+    - [Articles about Redis](#articles-about-redis)
+- [Navigation](#navigation)
+
+## You can use Valkey instead of Redis as caching server for Ghost
+
+This second part of the Ghost deployment procedure is where you begin working with the Kustomize project for the whole platform's setup. In particular, you will start by preparing the Kustomize subproject of the caching server for Ghost. The official Ghost documentation mentions [Redis](https://redis.io/), but it is possible to use [Valkey](https://valkey.io/) instead.
+
+## Kustomize project folders for Ghost and Valkey
+
+You need a main Kustomize project for the deployment of your Ghost platform. In it, you will contain the subprojects for components like Valkey. Start by executing the following `mkdir` command to create the necessary project folder structure for this part:
+
+~~~sh
+$ mkdir -p $HOME/k8sprjs/ghost/components/cache-valkey/{configs,resources,secrets}
+~~~
+
+The main folder for the Valkey Kustomize subproject, `cache-valkey`, is named following the pattern `-` that this guide will also use to name the root directories for the remaining component subprojects. There are also `configs`, `resources` and `secrets` subfolders to better differentiate the files declaring the Kubernetes resources from those related to configurations or secrets.
+
+## Valkey configuration file
+
+You need to fit Valkey to your needs, and the best way is by setting its parameters in a configuration file:
+
+1. In the `configs` subfolder of the Valkey project, create a `valkey.conf` file:
+
+    ~~~sh
+    $ touch $HOME/k8sprjs/ghost/components/cache-valkey/configs/valkey.conf
+    ~~~
+
+    The name `valkey.conf` is the default one for the Valkey configuration file.
+
+2. Enter the custom configuration for Valkey in the `configs/valkey.conf` file:
+
+    ~~~properties
+    # Custom Valkey configuration
+    bind 0.0.0.0
+    protected-mode no
+    port 6379
+    maxmemory 64mb
+    maxmemory-policy allkeys-lru
+    aclfile /etc/valkey/users.acl
+    ~~~
+
+    The parameters above mean the following:
+
+    - `bind`\
+      Makes the Valkey server listen on specific interfaces. With `0.0.0.0` it listens on all available ones.
+
+      > [!NOTE]
+      > **Do not specify here the cluster IP you chose for the Valkey service**\
+      > It is better to leave this parameter with a "flexible" value to avoid worrying about putting a particular IP in several places.
+
+    - `protected-mode`\
+      Security option that restricts Valkey from listening on interfaces other than localhost. Enabled by default, it is disabled here with the value `no` so Valkey can listen through the interface that will have the service cluster IP assigned.
+
+    - `port`\
+      The default Valkey port is `6379`, specified here just for clarity.
+
+    - `maxmemory`\
+      Limits the memory used by the Valkey server. When the limit is reached, it'll try to remove keys according to the eviction policy set in the `maxmemory-policy` parameter.
+
+    - `maxmemory-policy`\
+      Policy for evicting keys from memory when the `maxmemory` limit is reached. Here it is set to `allkeys-lru` so any key can be removed according to an LRU (Least Recently Used) algorithm.
+
+    - `aclfile`\
+      Path to the file containing the Access Control List (ACL) specifying the users authorized to use this Valkey instance. This is the default path, but it is set explicitly as a reminder of where the file is.
+
+      [This ACL file is specified in the next section](#valkey-acl-user-list).
+
+    > [!NOTE]
+    > **The Valkey configuration parameters are described in the official example configuration file**\
+    > Each Valkey release has its own example `valkey.conf` file, and [the version this guide deploys is the 9.0 one](https://raw.githubusercontent.com/valkey-io/valkey/9.0/valkey.conf).
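+
+Optionally, if you happen to have Docker available on your `kubectl` client system, you can smoke-test this configuration file by running it with the same image and command the deployment will use later. The empty `users.acl` placeholder below just stands in for the real ACL file you will prepare in the next section:
+
+~~~sh
+$ touch /tmp/users.acl
+$ docker run --rm \
+    -v $HOME/k8sprjs/ghost/components/cache-valkey/configs/valkey.conf:/etc/valkey/valkey.conf:ro \
+    -v /tmp/users.acl:/etc/valkey/users.acl:ro \
+    valkey/valkey:9.0-alpine valkey-server /etc/valkey/valkey.conf
+~~~
+
+A valid file makes the server start and stay in the foreground (stop it with `Ctrl+C`), while a bad parameter makes it exit immediately with a fatal error.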
+
+## Valkey ACL user list
+
+You need to secure the access to this Valkey instance. Valkey comes with a `default` user that you could use, but it is better to declare a more specific one for Ghost. Since [Valkey supports Access Control Lists](https://valkey.io/topics/acl/), you can declare the users you need in an ACL file:
+
+1. Create a new `users.acl` file in the `secrets` folder:
+
+    ~~~sh
+    $ touch $HOME/k8sprjs/ghost/components/cache-valkey/secrets/users.acl
+    ~~~
+
+2. In `secrets/users.acl`, enter the ACL rules redefining the `default` user and specifying the user for Ghost:
+
+    ~~~acl
+    user default off ~* &* +@all >P4s5W0rd_FOr_7h3_DeF4u1t_uSEr
+    user ghostcache on ~ghost:* &* allcommands >pAS2wORT_f0r_T#e_Gh05T_Us3R
+    ~~~
+
+    Each rule declared above has a different purpose in this setup:
+
+    - `user default`\
+      Valkey's `default` user comes enabled with no password by default:
+
+      - `off`\
+        Disables the `default` user, making it impossible to authenticate with it in the Valkey instance. This is because only the Ghost platform will access this Valkey instance, and with its own specific user, not with this `default` one.
+
+      - `~*`\
+        Indicates that this `default` user can access all the keys stored in the Valkey instance.
+
+      - `&*`\
+        Allows the `default` user to access all channels existing in the Valkey instance.
+
+      - `+@all`\
+        Enables the `default` user to use all commands.
+
+      - `>P4s5W0rd_FOr_7h3_DeF4u1t_uSEr`\
+        A cleartext string declaring the password for the `default` user. Although this user is disabled, it is convenient not to leave it passwordless, to harden its access in case the user gets re-enabled in the future.
+
+        Also notice the initial `>` character: **it is not part of the password string**, it is just the indication that the string is the user's password in the rule.
+
+    - `user ghostcache`\
+      Declares a specific `ghostcache` user meant only for the Ghost platform:
+
+      - `on`\
+        Enables the `ghostcache` user for authentication in the Valkey instance.
+
+      - `~ghost:*`\
+        Restricts the keys the `ghostcache` user can access to only those having the `ghost:` prefix.
+
+      - `&*`\
+        Allows the `ghostcache` user to access all channels existing in the Valkey instance.
+
+      - `allcommands`\
+        Alias for the `+@all` option. Enables the `ghostcache` user to use all commands.
+
+      - `>pAS2wORT_f0r_T#e_Gh05T_Us3R`\
+        A cleartext string declaring the password for the `ghostcache` user. **Remember that the initial `>` character is not part of the password**, it is just indicating that the following string is the user's password within the rule.
+
+    > [!WARNING]
+    > **The passwords in this `secrets/users.acl` file are plain unencrypted strings**\
+    > Be careful of who can access this `users.acl` file.
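+
+Keep in mind a quick way of verifying these ACL rules later: once the whole Ghost platform is deployed (something you will do in the final part), you can authenticate from a throwaway pod. The command below assumes the Valkey service ends up reachable at the cluster IP chosen in the previous chapter:
+
+~~~sh
+$ kubectl run -it --rm valkey-acl-test --image=valkey/valkey:9.0-alpine --restart=Never -- \
+    valkey-cli -h 10.43.100.2 --user ghostcache --pass 'pAS2wORT_f0r_T#e_Gh05T_Us3R' ACL WHOAMI
+~~~
+
+The answer should be `ghostcache`, while the same call with the `default` user should be refused, since that user is disabled in the ACL file.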
+
+## Valkey Deployment resource
+
+The next thing to do is to set up the `Deployment` resource that will install Valkey in your K3s cluster:
+
+1. Create a `cache-valkey.deployment.yaml` file under the `resources` subfolder:
+
+    ~~~sh
+    $ touch $HOME/k8sprjs/ghost/components/cache-valkey/resources/cache-valkey.deployment.yaml
+    ~~~
+
+2. Declare the `Deployment` resource for Valkey in `resources/cache-valkey.deployment.yaml`:
+
+    ~~~yaml
+    apiVersion: apps/v1
+    kind: Deployment
+
+    metadata:
+      name: cache-valkey
+    spec:
+      replicas: 1
+      template:
+        spec:
+          containers:
+          - name: server
+            image: valkey/valkey:9.0-alpine
+            command:
+            - valkey-server
+            - "/etc/valkey/valkey.conf"
+            - "--requirepass $(VALKEY_PASSWORD)"
+            env:
+            - name: VALKEY_PASSWORD
+              valueFrom:
+                secretKeyRef:
+                  name: cache-valkey
+                  key: valkey-password
+            ports:
+            - containerPort: 6379
+            resources:
+              limits:
+                cpu: "0.5"
+                memory: 64Mi
+            volumeMounts:
+            - name: valkey-config
+              subPath: valkey.conf
+              mountPath: /etc/valkey/valkey.conf
+          - name: metrics
+            image: oliver006/redis_exporter:v1.80.0-alpine
+            env:
+            - name: REDIS_PASSWORD
+              valueFrom:
+                secretKeyRef:
+                  name: cache-valkey
+                  key: valkey-password
+            resources:
+              limits:
+                cpu: "0.25"
+                memory: 32Mi
+            ports:
+            - containerPort: 9121
+          volumes:
+          - name: valkey-config
+            configMap:
+              name: cache-valkey
+              defaultMode: 0444
+              items:
+              - key: valkey.conf
+                path: valkey.conf
+          affinity:
+            podAffinity:
+              requiredDuringSchedulingIgnoredDuringExecution:
+              - labelSelector:
+                  matchExpressions:
+                  - key: app
+                    operator: In
+                    values:
+                    - server-ghost
+                topologyKey: "kubernetes.io/hostname"
+    ~~~
+
+    This `Deployment` resource describes the template for the pod that will contain the Valkey server and its Prometheus metrics exporter service, each running in its own container.
+
+    - `replicas`\
+      Given the limitations of the cluster, only one instance of the Valkey pod is requested.
+
+    - `template`\
+      Describes how the pod resulting from this `Deployment` should be.
+
+      - `spec.containers`\
+        This pod template has two containers running in it, arranged in what is known as a _sidecar_ pattern:
+
+        - Container `server`\
+          Container that runs the Valkey server itself:
+
+          - The container `image` used is the Alpine Linux variant of [the most recent 9.0 version](https://hub.docker.com/r/valkey/valkey).
+
+          - In the `command` section you can see how the configuration file path is directly specified to the service, and also how the Valkey password is obtained from a `cache-valkey` secret (which you will declare later), then turned into an environment variable (`env` section) to be passed to the `--requirepass` option.
+
+          - The `containerPort` is the same as the `port` set in the `valkey.conf` file.
+
+          - The container is set with a RAM and CPU usage limit in the `resources` section.
+
+        - Container `metrics`\
+          Container that runs a service specialized in getting statistics from the Valkey server in a format that a Prometheus server can read:
+
+          - The Docker `image` is an Alpine Linux variant of [the 1.80 version of this exporter](https://hub.docker.com/r/oliver006/redis_exporter).
+
+          - By default, this exporter tries to connect to `redis://localhost:6379`, which fits the configuration applied to the Valkey service.
+
+          - In the `env` section, the Valkey password is set as the `REDIS_PASSWORD` environment parameter so the exporter can pick it from the pod's environment and authenticate with it in the Valkey server.
+
+          - It also has limited RAM and CPU `resources`, and its `containerPort` is the one used by default by the exporter, which also matches the one you will see declared [in the next section within the corresponding Valkey's `Service` resource](#valkey-service-resource).
+
+## Valkey Service resource
+
+You have defined the pod that will execute the containers running the Valkey server and its Prometheus statistics exporter; now you need to define the `Service` resource that will give access to them:
+
+1. Generate a new file named `cache-valkey.service.yaml`, also under the `resources` subfolder:
+
+   ~~~sh
+   $ touch $HOME/k8sprjs/seafile/components/cache-valkey/resources/cache-valkey.service.yaml
+   ~~~
+
+2. Declare the Valkey `Service` resource in `cache-valkey.service.yaml`:
+
+   ~~~yaml
+   apiVersion: v1
+   kind: Service
+
+   metadata:
+     name: cache-valkey
+     annotations:
+       prometheus.io/scrape: "true"
+       prometheus.io/port: "9121"
+   spec:
+     type: ClusterIP
+     clusterIP: 10.43.100.2
+     ports:
+     - port: 6379
+       protocol: TCP
+       name: server
+     - port: 9121
+       protocol: TCP
+       name: metrics
+   ~~~
+
+   This `Service` resource specifies how to access the services running in the Valkey pod's containers:
+
+   - `metadata.annotations`\
+     Two annotations required for the Prometheus data scraping service (you will see how to deploy Prometheus in a later chapter). These annotations inform Prometheus about which port to scrape for getting metrics of your Valkey service, which is data provided by the specialized metrics service that runs in the `metrics` container of the Valkey pod.
+
+   - `spec.type`\
+     By default, any `Service` resource is of type `ClusterIP`, meaning that the service is only reachable from within the cluster's internal network. You can omit this parameter altogether from the yaml when you're using this default type, although it is better to leave it specified for clarity.
+
+   - `spec.clusterIP`\
+     Where you put the chosen static cluster IP for this service.
+
+   - `spec.ports`\
+     Describes the ports open in this service. Notice how I made the `name` and `port` on each port of this `Service` to be the same as the ones already defined for the containers in the previous `Deployment` resource.
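+
+After the whole platform is deployed, you could check what Prometheus will eventually scrape by querying the exporter through this service's static cluster IP. A quick, hypothetical way of doing it is with a throwaway curl pod; the `curlimages/curl` image is just one convenient option here:
+
+~~~sh
+# Temporary pod that curls the metrics port from inside the cluster network.
+$ kubectl run curl-probe --rm -it --restart=Never \
+    --image=curlimages/curl -- -s http://10.43.100.2:9121/metrics
+~~~
+
+The output should be a long plain-text list of `redis_*` metrics, since the exporter keeps the Redis metric names even when pointed at a Valkey server.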
+
+## Valkey Kustomize project
+
+What remains to set up is the main `kustomization.yaml` file that describes the whole Valkey Kustomize project.
+
+1. In the main `cache-valkey` folder, create a `kustomization.yaml` file:
+
+   ~~~sh
+   $ touch $HOME/k8sprjs/seafile/components/cache-valkey/kustomization.yaml
+   ~~~
+
+2. Enter the following `Kustomization` declaration in the `kustomization.yaml` file:
+
+   ~~~yaml
+   # Seafile Valkey setup
+   apiVersion: kustomize.config.k8s.io/v1beta1
+   kind: Kustomization
+
+   labels:
+   - pairs:
+       app: cache-valkey
+     includeSelectors: true
+     includeTemplates: true
+
+   resources:
+   - resources/cache-valkey.deployment.yaml
+   - resources/cache-valkey.service.yaml
+
+   replicas:
+   - name: cache-valkey
+     count: 1
+
+   images:
+   - name: valkey/valkey
+     newTag: 9.0-alpine
+   - name: oliver006/redis_exporter
+     newTag: v1.80.0-alpine
+
+   configMapGenerator:
+   - name: cache-valkey
+     files:
+     - configs/valkey.conf
+
+   secretGenerator:
+   - name: cache-valkey
+     files:
+     - valkey-password=secrets/valkey.pwd
+   ~~~
+
+   This `kustomization.yaml` file has elements you've already seen in previous deployments, plus a few extra ones:
+
+   - With `labels` you can set up [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) on all the resources generated from this `kustomization.yaml` file. In this case, there is only one label, `app: cache-valkey`, to indicate that the resources declared in this Kustomize project belong to the Valkey caching server.
+
+     The `includeSelectors` and `includeTemplates` parameters control whether the labels must also be included within the `spec.selector` and `spec.template` blocks of resource declarations, such as the `Deployment` one you have for your Valkey server.
+
+   - The `replicas` section allows you to handle the number of replicas you want for deployments, overriding whatever number is already set in their base declaration. In this case you only have one deployment listed, and the value put here is the same as the one set in the `cache-valkey` deployment definition.
+
+   - The `images` block gives you a handy way of changing the images specified within deployments, particularly useful for when you want to upgrade to newer minor versions without changing anything else in the deployment declaration.
+
+   - There are two details to notice about the `configMapGenerator` and `secretGenerator`:
+
+     - In the `secretGenerator`'s `cache-valkey` definition, the file `secrets/valkey.pwd` is renamed to `valkey-password`, which is the key expected within the `cache-valkey` deployment definition.
+
+     - None of these generator blocks have the `disableNameSuffixHash` option enabled, because the name of the resources they generate is only used in standard Kubernetes parameters that are recognized by Kustomize. There is a sketch of that option right after this list.
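+
+As a side note, if you ever needed a generated ConfigMap or Secret to keep a stable, predictable name (for instance, to reference it from something Kustomize does not rewrite), recent Kustomize versions accept an `options` block on each generator entry. This is only an illustrative variation, not something this project needs:
+
+~~~yaml
+# Hypothetical variation: keep a stable name, with no content hash appended.
+configMapGenerator:
+- name: cache-valkey
+  files:
+  - configs/valkey.conf
+  options:
+    disableNameSuffixHash: true
+~~~
+
+Bear in mind that dropping the hash also drops a useful side effect: since the hash changes with the content, any edit to `valkey.conf` renames the ConfigMap and makes the `Deployment` roll out the change automatically.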
+
+### Validating the Kustomize YAML output
+
+With everything in place, you can check out the YAML resulting from the Seafile Valkey Kustomize subproject:
+
+1. Execute the `kubectl kustomize` command on the Valkey Kustomize subproject's root folder, piped to `less` to get the output paginated:
+
+   ~~~sh
+   $ kubectl kustomize $HOME/k8sprjs/seafile/components/cache-valkey | less
+   ~~~
+
+   Alternatively, you could just dump the YAML output into a file, called `cache-valkey.k.output.yaml` for instance.
+
+   ~~~sh
+   $ kubectl kustomize $HOME/k8sprjs/seafile/components/cache-valkey > cache-valkey.k.output.yaml
+   ~~~
+
+2. The resulting YAML should look like this one:
+
+   ~~~yaml
+   apiVersion: v1
+   data:
+     valkey.conf: |-
+       # Custom Valkey configuration
+       bind 0.0.0.0
+       protected-mode no
+       port 6379
+       maxmemory 64mb
+       maxmemory-policy allkeys-lru
+   kind: ConfigMap
+   metadata:
+     labels:
+       app: cache-valkey
+     name: cache-valkey-g48dg5ktt4
+   ---
+   apiVersion: v1
+   data:
+     valkey-password: |
+       WTB1cl9yRTNlNDFMeS5sYWzDsWpmbMOxa2FlcnV0YW9uZ3ZvYW46YcOxb2RrbzM0OTQ4dX
+       lPbmctUzNrcmVUX1A0czV3b1JkLWhlUkUhCg==
+   kind: Secret
+   metadata:
+     labels:
+       app: cache-valkey
+     name: cache-valkey-5m6ckg5cd4
+   type: Opaque
+   ---
+   apiVersion: v1
+   kind: Service
+   metadata:
+     annotations:
+       prometheus.io/port: "9121"
+       prometheus.io/scrape: "true"
+     labels:
+       app: cache-valkey
+     name: cache-valkey
+   spec:
+     clusterIP: 10.43.100.2
+     ports:
+     - name: server
+       port: 6379
+       protocol: TCP
+     - name: metrics
+       port: 9121
+       protocol: TCP
+     selector:
+       app: cache-valkey
+     type: ClusterIP
+   ---
+   apiVersion: apps/v1
+   kind: Deployment
+   metadata:
+     labels:
+       app: cache-valkey
+     name: cache-valkey
+   spec:
+     replicas: 1
+     selector:
+       matchLabels:
+         app: cache-valkey
+     template:
+       metadata:
+         labels:
+           app: cache-valkey
+       spec:
+         affinity:
+           podAffinity:
+             requiredDuringSchedulingIgnoredDuringExecution:
+             - labelSelector:
+                 matchExpressions:
+                 - key: app
+                   operator: In
+                   values:
+                   - server-seafile
+               topologyKey: kubernetes.io/hostname
+         containers:
+         - command:
+           - valkey-server
+           - /etc/valkey/valkey.conf
+           - --requirepass $(VALKEY_PASSWORD)
+           env:
+           - name: VALKEY_PASSWORD
+             valueFrom:
+               secretKeyRef:
+                 key: valkey-password
+                 name: cache-valkey-5m6ckg5cd4
+           image: valkey/valkey:9.0-alpine
+           name: server
+           ports:
+           - containerPort: 6379
+           resources:
+             limits:
+               cpu: "0.5"
+               memory: 64Mi
+           volumeMounts:
+           - mountPath: /etc/valkey/valkey.conf
+             name: valkey-config
+             subPath: valkey.conf
+         - env:
+           - name: REDIS_PASSWORD
+             valueFrom:
+               secretKeyRef:
+                 key: valkey-password
+                 name: cache-valkey-5m6ckg5cd4
+           image: oliver006/redis_exporter:v1.80.0-alpine
+           name: metrics
+           ports:
+           - containerPort: 9121
+           resources:
+             limits:
+               cpu: "0.25"
+               memory: 32Mi
+         volumes:
+         - configMap:
+             defaultMode: 292
+             items:
+             - key: valkey.conf
+               path: valkey.conf
+             name: cache-valkey-g48dg5ktt4
+           name: valkey-config
+   ~~~
+
+   There are a few things to highlight in the YAML output above:
+
+   - You might have noticed this in the previous Kustomize projects you have deployed before, but the generated YAML output has the parameters within each resource sorted alphabetically. Be aware of this when you compare this output with the files you created and your expected results.
+
+   - The names of the `cache-valkey` config map and `cache-valkey` secret have a hash as a suffix, added by Kustomize. The hash is calculated from the content of each generated resource, so it will change whenever that content does.
+
+   - Another detail to notice is how the label `app: cache-valkey` appears not only as a label in the `metadata` section of all the resources, but Kustomize has also set it as `selector` both in the `Service` and the `Deployment` resources definitions.
+
+   - There's also a particularity that might seem odd. The `defaultMode` of the `valkey-config` volume is shown as `292` instead of the `0444` value you set in the Deployment resource definition. It's not a mistake: `292` is simply the decimal representation of the octal `0444` (4×64 + 4×8 + 4 = 292). The file will have the permission mode set as it is specified in the original `Deployment` resource.
+
+3. If you installed the `kubeconform` command in your `kubectl` client system (as explained in the [G026 guide](G026%20-%20K3s%20cluster%20setup%2009%20~%20Setting%20up%20a%20kubectl%20client%20for%20remote%20access.md#validate-kubernetes-configuration-files-with-kubeconform)), you can validate the Kustomize output with it. So, assuming you have dumped the output in a `cache-valkey.k.output.yaml` file, execute the following:
+
+   ~~~sh
+   $ kubeconform -summary cache-valkey.k.output.yaml
+   Summary: 4 resources found in 1 file - Valid: 4, Invalid: 0, Errors: 0, Skipped: 0
+   ~~~
+
+   Notice the `-summary` option in the shell command above. It is what makes the `kubeconform` command print a results summary when it finishes.
+
+   > [!NOTE]
+   > **`kubeconform` does not produce an output when the input is valid**\
+   > With a completely valid input as in this case and no option specified, `kubeconform` does not print anything in the shell.
+   >
+   > On the other hand, `kubeconform` (at least, in the version `0.7.0` installed with this guide) is not yet able to understand Kustomize projects and ends up finding errors in them.
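+
+Since `kubeconform` cannot read the Kustomize project directly, piping the rendered output into it is a handy shortcut that skips the intermediate dump file. It should work like this, assuming your `kubeconform` build reads from standard input when given `-` as its target:
+
+~~~sh
+$ kubectl kustomize $HOME/k8sprjs/seafile/components/cache-valkey | kubeconform -summary -
+Summary: 4 resources found in 1 file - Valid: 4, Invalid: 0, Errors: 0, Skipped: 0
+~~~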
+
+## Do not deploy this Valkey project on its own
+
+Although you technically can deploy this Valkey Kustomize project, wait until you have all the components and the main Seafile Kustomize project ready. Then, you will deploy the whole lot at once with just one `kubectl` command.
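+
+Just as a preview of that moment, and assuming the main project ends up in the `$HOME/k8sprjs/seafile` folder used throughout this series, that final deployment will boil down to a single command like the following:
+
+~~~sh
+# Hypothetical preview; the main Seafile Kustomize project is built in a later part.
+$ kubectl apply -k $HOME/k8sprjs/seafile
+~~~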
+
+## Relevant system paths
+
+### Folders in `kubectl` client system
+
+- `$HOME/k8sprjs/seafile`
+- `$HOME/k8sprjs/seafile/components`
+- `$HOME/k8sprjs/seafile/components/cache-valkey`
+- `$HOME/k8sprjs/seafile/components/cache-valkey/configs`
+- `$HOME/k8sprjs/seafile/components/cache-valkey/resources`
+- `$HOME/k8sprjs/seafile/components/cache-valkey/secrets`
+
+### Files in `kubectl` client system
+
+- `$HOME/k8sprjs/seafile/components/cache-valkey/kustomization.yaml`
+- `$HOME/k8sprjs/seafile/components/cache-valkey/configs/valkey.conf`
+- `$HOME/k8sprjs/seafile/components/cache-valkey/resources/cache-valkey.deployment.yaml`
+- `$HOME/k8sprjs/seafile/components/cache-valkey/resources/cache-valkey.service.yaml`
+- `$HOME/k8sprjs/seafile/components/cache-valkey/secrets/users.acl`
+- `$HOME/k8sprjs/seafile/components/cache-valkey/secrets/valkey.pwd`
+
+## References
+
+### [Kubernetes](https://kubernetes.io/docs/)
+
+#### [Concepts](https://kubernetes.io/docs/concepts/)
+
+- [Overview. Objects in Kubernetes](https://kubernetes.io/docs/concepts/overview/working-with-objects/)
+  - [Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/)
+
+- [Configuration](https://kubernetes.io/docs/concepts/configuration/)
+  - [ConfigMaps](https://kubernetes.io/docs/concepts/configuration/configmap/)
+
+- [Scheduling, Preemption and Eviction](https://kubernetes.io/docs/concepts/scheduling-eviction/)
+  - [Assigning Pods to Nodes](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/)
+
+#### [Tasks](https://kubernetes.io/docs/tasks/)
+
+- [Configure Pods and Containers](https://kubernetes.io/docs/tasks/configure-pod-container/)
+  - [Assign Pods to Nodes using Node Affinity](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/)
+  - [Configure a Pod to Use a ConfigMap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/)
+
+- [Inject Data Into Applications](https://kubernetes.io/docs/tasks/inject-data-application/)
+  - [Define Dependent Environment Variables](https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/)
+  - [Define Environment Variables for a Container](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/)
+
+#### [Tutorials](https://kubernetes.io/docs/tutorials/)
+
+- [Configuration](https://kubernetes.io/docs/tutorials/configuration/)
+  - [Configuring Redis using a ConfigMap](https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/)
+
+#### [Reference. Kubernetes API](https://kubernetes.io/docs/reference/kubernetes-api/)
+
+- [Workload Resources](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/)
+  - [Pod](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/)
+    - [Scheduling](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling)
+
+#### Articles about pod scheduling
+
+- [TheNewStack. Strategies for Kubernetes Pod Placement and Scheduling](https://thenewstack.io/strategies-for-kubernetes-pod-placement-and-scheduling/)
+- [TheNewStack. Implement Node and Pod Affinity/Anti-Affinity in Kubernetes: A Practical Example](https://thenewstack.io/implement-node-and-pod-affinity-anti-affinity-in-kubernetes-a-practical-example/)
+- [TheNewStack. Tutorial: Apply the Sidecar Pattern to Deploy Redis in Kubernetes](https://thenewstack.io/tutorial-apply-the-sidecar-pattern-to-deploy-redis-in-kubernetes/)
+
+#### Articles about ConfigMaps and Secrets
+
+- [Opensource.com. An Introduction to Kubernetes Secrets and ConfigMaps](https://opensource.com/article/19/6/introduction-kubernetes-secrets-and-configmaps)
+- [Dev. Kubernetes - Using ConfigMap SubPaths to Mount Files](https://dev.to/joshduffney/kubernetes-using-configmap-subpaths-to-mount-files-3a1i)
+- [GoLinuxCloud. Kubernetes Secrets | Declare confidential data with examples](https://www.golinuxcloud.com/kubernetes-secrets/)
+- [StackOverflow. Import data to config map from kubernetes secret](https://stackoverflow.com/questions/50452665/import-data-to-config-map-from-kubernetes-secret)
+
+### [Valkey](https://valkey.io/)
+
+- [Documentation by Topic](https://valkey.io/topics/)
+  - [The Valkey server](https://valkey.io/topics/server/)
+  - [ACL](https://valkey.io/topics/acl/)
+  - [Cluster tutorial](https://valkey.io/topics/cluster-tutorial/)
+
+- [Docker Hub. Valkey](https://hub.docker.com/r/valkey/valkey)
+- [Docker Hub. 
Prometheus Valkey & Redis Metrics Exporter](https://hub.docker.com/r/oliver006/redis_exporter) + +- [GitHub. Example `valkey.conf` for Valkey 9.0](https://raw.githubusercontent.com/valkey-io/valkey/9.0/valkey.conf) + +#### Articles about Valkey + +- [XDA Developers. I set up a Valkey Cache with Nextcloud, and it fixed my biggest complaint](https://www.xda-developers.com/set-up-valkey-cache-with-nextcloud-it-fixed-my-biggest-complaint/) + +### [Redis](https://redis.io/) + +- [Redis FAQ](https://redis.io/topics/faq) +- [Redis administration](https://redis.io/topics/admin) + +#### Articles about Redis + +- [rpi4cluster. K3s Kubernetes Redis](https://rpi4cluster.com/k3s-redis/) +- [Medium. Simple Redis Cache on Kubernetes with Prometheus Metrics](https://itnext.io/simple-Redis-cache-on-kubernetes-with-prometheus-metrics-8667baceab6b) +- [Medium. Deploy and Operate a Redis Cluster in Kubernetes](https://marklu-sf.medium.com/deploy-and-operate-a-redis-cluster-in-kubernetes-94fde7853001) +- [Suse Rancher Blog. Deploying Redis Cluster on Top of Kubernetes](https://www.suse.com/c/rancher_blog/deploying-redis-cluster-on-top-of-kubernetes/) +- [StackOverflow. Redis sentinel vs clustering](https://stackoverflow.com/questions/31143072/redis-sentinel-vs-clustering) + +## Navigation + +[<< Previous (**G033. Deploying services 02. Seafile Part 1**)](G033%20-%20Deploying%20services%2002%20~%20Seafile%20-%20Part%201%20-%20Outlining%20setup,%20arranging%20storage%20and%20choosing%20service%20IPs.md) | [+Table Of Contents+](G000%20-%20Table%20Of%20Contents.md) | [Next (**G033. Deploying services 02. Seafile Part 3**) >>](G033%20-%20Deploying%20services%2002%20~%20Seafile%20-%20Part%203%20-%20MariaDB%20database%20server.md) diff --git a/G033 - Deploying services 02 ~ Nextcloud - Part 1 - Outlining setup, arranging storage and choosing service IPs.md b/G033 - Deploying services 02 ~ Nextcloud - Part 1 - Outlining setup, arranging storage and choosing service IPs.md deleted file mode 100644 index ada062b..0000000 --- a/G033 - Deploying services 02 ~ Nextcloud - Part 1 - Outlining setup, arranging storage and choosing service IPs.md +++ /dev/null @@ -1,412 +0,0 @@ -# G033 - Deploying services 02 ~ Nextcloud - Part 1 - Outlining setup, arranging storage and choosing service IPs - -From the services listed in the [**G018** guide](G018%20-%20K3s%20cluster%20setup%2001%20~%20Requirements%20and%20arrangement.md#nextcloud), let's begin with the **Nextcloud** file-sharing platform. Since deploying it requires the configuration and deployment of several different components, I've split the Nextcloud guide in five parts, being this the first one of them. In this part, you'll see how to outline the setup of your Nextcloud platform, then work in the arrangement of the storage drives needed to store Nextcloud's data, and finally choose some required IPs. - -## Outlining Nextcloud's setup - -First, you must define how you want to setup Nextcloud in your cluster. This means that you'll have to decide beforehand how to solve the following points. - -- For storing metadata, Nextcloud requires a database. Which one are you going to use and how you'll setup it? -- Will you use a cache server to improve the performance? Will it be exclusive to the Nextcloud instance? -- Where in your K3s cluster the Nextcloud's data and html folders should be placed? - -This guide solves the previous points as follows. - -- **Database**: MariaDB with its data saved in a local SSD storage drive. 
-- **Cache server**: Redis instance configured to be just an in-memory cache for sessions. -- **Nextcloud's folder for html and installation-related files**: persistent volume prepared on a local SSD storage drive. -- **Nextcloud users' data**: persistent volume prepared on a local HDD storage drive. -- Using kubernetes affinity rules, **all the Nextcloud-related services are deployed in pods running in the same K3s agent node**. This implies that the persistent volumes must be also available in the same K3s node. - -### _Choosing the K3s agent_ - -Your cluster has only two K3s agent nodes, and the two of them are already running services. Choose the one that currently has the lowest CPU and RAM usage of the two. In my case it happened to be the `k3sagent02` node. - -## Setting up new storage drives in the K3s agent - -Given how the K3s cluster has been configured, the only persistent volumes you can use are local ones. This means that they rely on paths found in the K3s node VM's host system, but you don't want to use the root filesystem in which the underlying Debian OS is installed. It's better to have separate drives for each persistent volume, which you can create directly from your Proxmox VE web console. - -### _Adding the new storage drives to the K3s agent node's VM_ - -1. Log in your Proxmox VE web console and go to the `Hardware` page of your chosen K3s agent's VM. There, click on the `Add` button and then on the `Hard Disk` option. - - ![Add Hard Disk option](images/g033/pve_k3snode_hardware_add_hard_disk_option.png "Add Hard Disk option") - -2. You'll meet a window (already seen back in the [**G020** guide](G020%20-%20K3s%20cluster%20setup%2003%20~%20Debian%20VM%20creation.md#setting-up-a-new-virtual-machine)) where you can define a new storage drive for your VM. - - ![Add Hard Disk window](images/g033/pve_k3snode_hardware_add_hard_disk_window.png "Add Hard Disk window") - - Add two new "hard disks" to your VM, one SSD drive and a HDD drive. Notice the highlighted elements in the capture above: be sure of having the `Advanced` checkbox enabled and edit only the featured fields as follows. - - - **SSD drive**: storage `ssd_disks`, Discard `ENABLED`, disk size `5 GiB`, SSD emulation `ENABLED`, IO thread `ENABLED`. - - - **HDD drive**: storage `hdd_data`, Discard `ENABLED`, disk size `10 GiB`, SSD emulation `DISABLED`, IO thread `ENABLED`. - - After adding the new drives, they should appear in your VM's hardware list. - - ![New Hard Disks added to VM](images/g033/pve_k3snode_hardware_hard_disks_added.png "New Hard Disks added to VM") - -3. Now, open a shell in your VM and check with `fdisk` that the new drives are already active and running. 
- - ~~~bash - $ sudo fdisk -l - - - Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors - Disk model: QEMU HARDDISK - Units: sectors of 1 * 512 = 512 bytes - Sector size (logical/physical): 512 bytes / 512 bytes - I/O size (minimum/optimal): 512 bytes / 512 bytes - Disklabel type: dos - Disk identifier: 0x76bd2712 - - Device Boot Start End Sectors Size Id Type - /dev/sda1 * 2048 999423 997376 487M 83 Linux - /dev/sda2 1001470 20969471 19968002 9.5G 5 Extended - /dev/sda5 1001472 20969471 19968000 9.5G 8e Linux LVM - - - Disk /dev/mapper/k3snode--vg-root: 9.52 GiB, 10221518848 bytes, 19963904 sectors - Units: sectors of 1 * 512 = 512 bytes - Sector size (logical/physical): 512 bytes / 512 bytes - I/O size (minimum/optimal): 512 bytes / 512 bytes - - - Disk /dev/sdb: 5 GiB, 5368709120 bytes, 10485760 sectors - Disk model: QEMU HARDDISK - Units: sectors of 1 * 512 = 512 bytes - Sector size (logical/physical): 512 bytes / 512 bytes - I/O size (minimum/optimal): 512 bytes / 512 bytes - - - Disk /dev/sdc: 10 GiB, 10737418240 bytes, 20971520 sectors - Disk model: QEMU HARDDISK - Units: sectors of 1 * 512 = 512 bytes - Sector size (logical/physical): 512 bytes / 512 bytes - I/O size (minimum/optimal): 512 bytes / 512 bytes - ~~~ - - The new drives are `/dev/sdb` and `/dev/sdc`. Unsurprisingly, they are of the same `Disk model` as `/dev/sda`: `QEMU HARDDISK`. - -### _LVM storage set up_ - -You have the new storage drives available in your VM, but you still have to configure the required LVM volumes in them. - -1. Create a new GPT partition on each of the new storage drives with `sgdisk`. Remember that these new drives are the `/dev/sdb` and `/dev/sdc` devices you saw before with `fdisk`. - - ~~~bash - $ sudo sgdisk -N 1 /dev/sdb - $ sudo sgdisk -N 1 /dev/sdc - ~~~ - - You might see the following warning when executing the `sgdisk` commands. - - ~~~bash - Warning: Partition table header claims that the size of partition table - entries is 0 bytes, but this program supports only 128-byte entries. - Adjusting accordingly, but partition table may be garbage. - ~~~ - - Don't worry about it, the partitions will work fine. This warning may be some odd consequence due to the drives' virtual nature. - -2. Check with `fdisk` that now you have a new partition on each storage drive. - - ~~~bash - $ sudo fdisk -l /dev/sdb /dev/sdc - Disk /dev/sdb: 5 GiB, 5368709120 bytes, 10485760 sectors - Disk model: QEMU HARDDISK - Units: sectors of 1 * 512 = 512 bytes - Sector size (logical/physical): 512 bytes / 512 bytes - I/O size (minimum/optimal): 512 bytes / 512 bytes - Disklabel type: gpt - Disk identifier: B9CBA7C7-78E5-4EC7-9243-F3CB7ED69B6E - - Device Start End Sectors Size Type - /dev/sdb1 2048 10485726 10483679 5G Linux filesystem - - - Disk /dev/sdc: 10 GiB, 10737418240 bytes, 20971520 sectors - Disk model: QEMU HARDDISK - Units: sectors of 1 * 512 = 512 bytes - Sector size (logical/physical): 512 bytes / 512 bytes - I/O size (minimum/optimal): 512 bytes / 512 bytes - Disklabel type: gpt - Disk identifier: B4168FB4-8501-4763-B89F-B3CEDB30B698 - - Device Start End Sectors Size Type - /dev/sdc1 2048 20971486 20969439 10G Linux filesystem - ~~~ - - Above you can find the `/dev/sdb1` and `/dev/sdc1` partitions under their respective drives. - -3. Use `pvcreate` to create a new LVM physical volume, or PV, out of each partition. 
- - ~~~bash - $ sudo pvcreate --metadatasize 5m -y -ff /dev/sdb1 - $ sudo pvcreate --metadatasize 10m -y -ff /dev/sdc1 - ~~~ - - To determine the metadata size, I've used the rule of thumb of allocating 1 MiB per 1 GiB present in the PV. - - Check with `pvs` that the PVs have been created. - - ~~~bash - $ sudo pvs - PV VG Fmt Attr PSize PFree - /dev/sda5 k3snode-vg lvm2 a-- <9.52g 0 - /dev/sdb1 lvm2 --- <5.00g <5.00g - /dev/sdc1 lvm2 --- <10.00g <10.00g - ~~~ - -4. You also need to assign a volume group, or VG, to each PV, bearing in mind the following. - - - The two drives are running on different storage hardware, so you must clearly differentiate their storage space. - - Nextcloud's database data will be stored in `/dev/sdb1`, on the SSD drive. - - Nextcloud's html and installation-related files will be also stored in `/dev/sdb1`, on the SSD drive. - - Nextcloud's users files will be kept in `/dev/sdc1`, on the HDD drive. - - Knowing that, create two VGs with `vgcreate`. - - ~~~bash - $ sudo vgcreate nextcloud-ssd /dev/sdb1 - $ sudo vgcreate nextcloud-hdd /dev/sdc1 - ~~~ - - See how I named each VG related to Nextcloud and the kind of underlying drive used. Then, with `pvs` you can see how each PV is now assigned to their respective VG. - - ~~~bash - $ sudo pvs - PV VG Fmt Attr PSize PFree - /dev/sda5 k3snode-vg lvm2 a-- <9.52g 0 - /dev/sdb1 nextcloud-ssd lvm2 a-- 4.99g 4.99g - /dev/sdc1 nextcloud-hdd lvm2 a-- 9.98g 9.98g - ~~~ - - Also check with `vgs` the current status of the VGs in your VM. - - ~~~bash - $ sudo vgs - VG #PV #LV #SN Attr VSize VFree - k3snode-vg 1 1 0 wz--n- <9.52g 0 - nextcloud-hdd 1 0 0 wz--n- 9.98g 9.98g - nextcloud-ssd 1 0 0 wz--n- 4.99g 4.99g - ~~~ - -5. Then, you can create the required light volumes on each VG with `lvcreate`. Remember the purpose of each LV and give them meaningful names. - - ~~~bash - $ sudo lvcreate -l 75%FREE -n nextcloud-db nextcloud-ssd - $ sudo lvcreate -l 100%FREE -n nextcloud-html nextcloud-ssd - $ sudo lvcreate -l 100%FREE -n nextcloud-data nextcloud-hdd - ~~~ - - Notice how the `nextcloud-db` LV only takes the `75%` of the currently free space on the `nextcloud-ssd` VG, and how the `nextcloud-html` LV uses all the **remaining** space available on the same VG. On the other hand, `nextcloud-data` takes all the storage available in the `nextcloud-hdd` VG. Also see how all the LV's names are reminders of the kind of data they'll store later. - - Check with `lvs` the new LVs in your VM. - - ~~~bash - $ sudo lvs - LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert - root k3snode-vg -wi-ao---- <9.52g - nextcloud-data nextcloud-hdd -wi-a----- 9.98g - nextcloud-db nextcloud-ssd -wi-a----- 3.74g - nextcloud-html nextcloud-ssd -wi-a----- 1.25g - ~~~ - - See how `nextcloud-db` has the size corresponding to the 75% (3.74 GiB) of the 5 GiB that were available in the `nextcloud-ssd` VG, while `nextcloud-html` took the remaining 25% (1.25 GiB). - - With `vgs` you can verify that there's no space left (`VFree` column) in the VGs. - - ~~~bash - $ sudo vgs - VG #PV #LV #SN Attr VSize VFree - k3snode-vg 1 1 0 wz--n- <9.52g 0 - nextcloud-hdd 1 1 0 wz--n- 9.98g 0 - nextcloud-ssd 1 2 0 wz--n- 4.99g 0 - ~~~ - - Notice how the count of LVs in the `nextcloud-ssd` VG is `2`, while in the rest of them is just `1`. - -### _Formatting and mounting the new LVs_ - -Your new LVs need to be formatted as ext4 filesystems and then mounted. - -1. Before you format the new LVs, you need to see their `/dev/mapper/` paths with `fdisk`. 
To get only the Nextcloud related paths, I filter out their lines with `grep` because the `nextcloud` string will be part of the paths. - - ~~~bash - $ sudo fdisk -l | grep nextcloud - Disk /dev/mapper/nextcloud--ssd-nextcloud--db: 3.74 GiB, 4018143232 bytes, 7847936 sectors - Disk /dev/mapper/nextcloud--ssd-nextcloud--html: 1.25 GiB, 1342177280 bytes, 2621440 sectors - Disk /dev/mapper/nextcloud--hdd-nextcloud--data: 9.98 GiB, 10720641024 bytes, 20938752 sectors - ~~~ - -2. Call the `mkfs.ext4` command on their `/dev/mapper/nextcloud` paths. - - ~~~bash - $ sudo mkfs.ext4 /dev/mapper/nextcloud--ssd-nextcloud--db - $ sudo mkfs.ext4 /dev/mapper/nextcloud--ssd-nextcloud--html - $ sudo mkfs.ext4 /dev/mapper/nextcloud--hdd-nextcloud--data - ~~~ - -3. Next you need to create a directory structure that provides mount points for the LVs. It could be like the one created with the following `mkdir` command. - - ~~~bash - $ sudo mkdir -p /mnt/nextcloud-ssd/{db,html} /mnt/nextcloud-hdd/data - ~~~ - - Notice that you have to use `sudo` for creating those folders, because the system will use its `root` user to mount them later on each boot up. Check, with the `tree` command, that they've been created correctly. - - ~~~bash - $ tree -F /mnt/ - /mnt/ - ├── nextcloud-hdd/ - │   └── data/ - └── nextcloud-ssd/ - ├── db/ - └── html/ - - 5 directories, 0 files - ~~~ - -4. Using the `mount` command, mount the LVs in their respective mount points. - - ~~~bash - $ sudo mount /dev/mapper/nextcloud--ssd-nextcloud--db /mnt/nextcloud-ssd/db - $ sudo mount /dev/mapper/nextcloud--ssd-nextcloud--html /mnt/nextcloud-ssd/html - $ sudo mount /dev/mapper/nextcloud--hdd-nextcloud--data /mnt/nextcloud-hdd/data - ~~~ - - Verify with `df` that they've been mounted in the system. Since you're working in a K3s agent node, you'll also see a bunch of containerd-related filesystems mounted. Your newly mounted LVs will appear at the bottom of the list. - - ~~~bash - $ df -h - Filesystem Size Used Avail Use% Mounted on - udev 974M 0 974M 0% /dev - tmpfs 199M 1.2M 198M 1% /run - /dev/mapper/k3snode--vg-root 9.3G 1.8G 7.1G 21% / - tmpfs 992M 0 992M 0% /dev/shm - tmpfs 5.0M 0 5.0M 0% /run/lock - /dev/sda1 470M 48M 398M 11% /boot - shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/0a3ab27d616453de9f6723df7e3f98a68842dc4cd6c53546ea3aac7a61f2120f/shm - shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/a78e5eca7c8310b13e61d9205e54a53279bc78054490f48303052d4670f7c70a/shm - shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/4aa9bb992317404351deadd41c339928c0bbbb0d6bd46f8f496a56761270fdfb/shm - shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/dca9bec1a9e1b900335800fc883617a13539e683caf518176c1eefccabc44525/shm - shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/0e4e4877c0b21e59e60bac8f58fcb760c24d4121a88cea3ec0797cdc14ae6bb1/shm - shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/2fcb1ecfc4dd0902d12c26ce4d31b2067ff2dd6c77ce0ddd936a1907573e280b/shm - tmpfs 173M 0 173M 0% /run/user/1000 - /dev/mapper/nextcloud--ssd-nextcloud--db 3.7G 24K 3.5G 1% /mnt/nextcloud-ssd/db - /dev/mapper/nextcloud--ssd-nextcloud--html 1.2G 24K 1.2G 1% /mnt/nextcloud-ssd/html - /dev/mapper/nextcloud--hdd-nextcloud--data 9.8G 24K 9.3G 1% /mnt/nextcloud-hdd/data - ~~~ - -5. To make the mountings permanent, append them to the `/etc/fstab` file of the VM. First, make a backup of the `fstab` file. 
- - ~~~bash - $ sudo cp /etc/fstab /etc/fstab.bkp - ~~~ - - Then **append** the following lines to the `fstab` file. - - ~~~bash - # Nextcloud volumes - /dev/mapper/nextcloud--ssd-nextcloud--db /mnt/nextcloud-ssd/db ext4 defaults,nofail 0 0 - /dev/mapper/nextcloud--ssd-nextcloud--html /mnt/nextcloud-ssd/html ext4 defaults,nofail 0 0 - /dev/mapper/nextcloud--hdd-nextcloud--data /mnt/nextcloud-hdd/data ext4 defaults,nofail 0 0 - ~~~ - -### _Storage mount points for Nextcloud containers_ - -You must not use the directories in which you've mounted the new storage volumes as mount points for the persistent volumes you'll enable later for the containers where the Nexcloud components will run. This is because the containers can change the owner user and group, and also the permission mode, applied to those folders. This could cause a failure when, after a reboot, your K3s agent node tries to mount again its storage volumes. The issue will happen because it won't have the right user or permissions anymore to access the mount point folders. The best thing to do then is to create another folder within each storage volume that can be used safely by its respective container. - -1. For the case of the LVM storage volumes created before, you just have to execute a `mkdir` command like the following. - - ~~~bash - $ sudo mkdir /mnt/{nextcloud-ssd/db,nextcloud-ssd/html,nextcloud-hdd/data}/k3smnt - ~~~ - - Like you did with the mount point folders, these new directories also have to be owned by `root`. This is because the K3s service is running under that user. - -2. Check the new folder structure with `tree`. - - ~~~bash - $ tree -F /mnt/ - /mnt/ - ├── nextcloud-hdd/ - │   └── data/ - │   ├── k3smnt/ - │   └── lost+found/ [error opening dir] - └── nextcloud-ssd/ - ├── db/ - │   ├── k3smnt/ - │   └── lost+found/ [error opening dir] - └── html/ - ├── k3smnt/ - └── lost+found/ [error opening dir] - - 11 directories, 0 files - ~~~ - - Don't mind the `lost+found` folders, they are created by the system automatically. - -> **BEWARE!** -> Realize that the `k3smnt` folders are **within** the already mounted LVM storage volumes, meaning that you cannot create them without mounting the light volumes first. - -### _About increasing the size of volumes_ - -If, after a time using and filling up these volumes, you need to increase their size, take a look to the [**G907** appendix guide](G907%20-%20Appendix%2007%20~%20Resizing%20a%20root%20LVM%20volume.md). It shows you how to extend a partition and the LVM filesystem within it, although in that case it works on a LV volume that happens to be also the root filesystem of a VM. - -## Choosing static cluster IPs for Nextcloud related services - -For all the main components of your Nextcloud setup, you're going to create `Service` resources. To make them reachable internally for any pod within your Kubernetes cluster, one way is by assigning them a static cluster IP. With this you get to know beforehand which internal IP the services have, allowing you pointing the Nextcloud server instance to the right ones. To determine which cluster IP is correct for those future services, you need to take a look at which cluster IPs are currently in use in your Kubernetes cluster. 
- -~~~bash -$ kubectl get svc -A -NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -default kubernetes ClusterIP 10.43.0.1 443/TCP 8d -kube-system kube-dns ClusterIP 10.43.0.10 53/UDP,53/TCP,9153/TCP 8d -kube-system traefik LoadBalancer 10.43.110.37 192.168.1.41 80:30963/TCP,443:32446/TCP 7d23h -kube-system metrics-server ClusterIP 10.43.133.41 443/TCP 6d19h -cert-manager cert-manager ClusterIP 10.43.176.145 9402/TCP 5d23h -cert-manager cert-manager-webhook ClusterIP 10.43.75.5 443/TCP 5d23h -kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.43.181.120 8000/TCP 5d1h -kubernetes-dashboard kubernetes-dashboard ClusterIP 10.43.101.112 443/TCP 5d1h -~~~ - -Check the values under the `CLUSTER-IP` column, and notice how all of them fall (in this case) under the `10.43` subnet. What you have to do now is just choose IPs that fall into that subnet but don't collide with the ones currently in use by other services. Let's say you choose the following ones: - -- `10.43.100.1` for the Redis service. -- `10.43.100.2` for the MariaDB service. -- `10.43.100.3` for the Nextcloud service. - -See that I've also chosen IP for the Nextcloud server. With the internal cluster IPs known, in a later guide you'll see how Nextcloud will be able to point at the components it needs to run. - -## Relevant system paths - -### _Folders in K3s agent node's VM_ - -- `/etc` -- `/mnt` -- `/mnt/nextcloud-hdd` -- `/mnt/nextcloud-hdd/data` -- `/mnt/nextcloud-hdd/data/k3smnt` -- `/mnt/nextcloud-ssd` -- `/mnt/nextcloud-ssd/db` -- `/mnt/nextcloud-ssd/db/k3smnt` -- `/mnt/nextcloud-ssd/html` -- `/mnt/nextcloud-ssd/html/k3smnt` - -### _Files in K3s agent node's VM_ - -- `/dev/mapper/nextcloud--hdd-nextcloud--data` -- `/dev/mapper/nextcloud--ssd-nextcloud--db` -- `/dev/mapper/nextcloud--ssd-nextcloud--html` -- `/dev/sdb` -- `/dev/sdb1` -- `/dev/sdc` -- `/dev/sdc1` -- `/etc/fstab` -- `/etc/fstab.bkp` - -## Navigation - -[<< Previous (**G032. Deploying services 01**)](G032%20-%20Deploying%20services%2001%20~%20Considerations.md) | [+Table Of Contents+](G000%20-%20Table%20Of%20Contents.md) | [Next (**G033. Deploying services 02. Nextcloud Part 2**) >>](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%202%20-%20Redis%20cache%20server.md) diff --git a/G033 - Deploying services 02 ~ Nextcloud - Part 2 - Redis cache server.md b/G033 - Deploying services 02 ~ Nextcloud - Part 2 - Redis cache server.md deleted file mode 100644 index 19d2aec..0000000 --- a/G033 - Deploying services 02 ~ Nextcloud - Part 2 - Redis cache server.md +++ /dev/null @@ -1,506 +0,0 @@ -# G033 - Deploying services 02 ~ Nextcloud - Part 2 - Redis cache server - -In this second part of the Nextcloud guide you'll start working in the Kustomize project for the whole Nextcloud platform's setup. In particular, you'll prepare the Kustomize subproject for a Redis server that will act as the session cache component of your Nextcloud platform. - -## Kustomize project folders for Nextcloud and Redis - -You need a main Kustomize project for the deployment of your Nextcloud platform and, in it, you'll contain the subprojects of its components like Redis. So, you could execute the following `mkdir` command to comply with those requirements. - -~~~bash -$ mkdir -p $HOME/k8sprjs/nextcloud/components/cache-redis/{configs,resources,secrets} -~~~ - -The main folder for the Redis Kustomize project, `cache-redis`, is named following the pattern `-` which I'll use also to name the main directories for the remaining component projects. 
This Redis project will have not only some resources to deploy, but also a configuration file and a secret, thus the need for the `configs`, `resources` and `secrets` subfolders. - -## Redis configuration file - -You need to fit Redis to your needs, and the best way is by setting its parameters in a configuration file. - -1. In the `configs` subfolder of the Redis project, create a `redis.conf` file. - - ~~~bash - $ touch $HOME/k8sprjs/nextcloud/components/cache-redis/configs/redis.conf - ~~~ - - The name `redis.conf` is the default one for the Redis configuration file. - -2. Put the lines below in the `redis.conf` file. - - ~~~properties - port 6379 - bind 0.0.0.0 - protected-mode no - maxmemory 64mb - maxmemory-policy allkeys-lru - ~~~ - - The parameters above mean the following. - - - `port`: the default Redis port is `6379`, specified here just for clarity. - - - `bind`: to make the Redis server listen in specific interfaces. With `0.0.0.0` listens to all available ones. - > **BEWARE!** - > You don't want to put here the IP you chose for the Redis service in the previous part. It's better to leave this parameter with a "flexible" value, so you don't have to worry about putting a particular IP in several places. - - - `protected-mode`: security option for restricting Redis from listening in interfaces other than localhost. Enabled by default, is disabled with value `no`. - - - `maxmemory`: limits the memory used by the Redis server. When the limit is reached, it'll try to remove keys accordingly to the eviction policy set in the `maxmemmory-policy` parameter. - - - `maxmemory-policy`: policy for evicting keys from memory when its limit is reached. Here is set to `allkeys-lru` so it can remove any key accordingly to a LRU (Least Recently Used) algorithm. - -## Redis password - -You'll want to secure the access to this Nextcloud's Redis instance. To do so you can set Redis with a password, but you don't want it lying in the clear in the `redis.conf` file. - -1. Create a new `redis.pwd` file in the `secrets` folder. - - ~~~bash - $ touch $HOME/k8sprjs/nextcloud/components/cache-redis/secrets/redis.pwd - ~~~ - -2. In `redis.pwd` just type a long alphanumeric password for your Redis instance. - - ~~~properties - Y0ur_rE3e41Ly.lOng-S3kreT_P4s5woRd-heRE! - ~~~ - - > **BEWARE!** - > Be **very careful** about not leaving any kind of spurious characters at the end of the password, like a line break (`\n`), to avoid unexpected odd issues when this password is used. - > Also notice that the password is stored as a plain text here, so be careful about who access this file. - -Later in this guide, you'll see how to set this password as a secret in the corresponding `kustomization.yaml` file of this Redis Kustomize project. - -## Redis Deployment resource - -The next thing to do is setting up the `Deployment` resource you'll use for deploying Redis in your K3s cluster. - -1. Create a `cache-redis.deployment.yaml` file under the `resources` subfolder. - - ~~~bash - $ touch $HOME/k8sprjs/nextcloud/components/cache-redis/resources/cache-redis.deployment.yaml - ~~~ - -2. In `cache-redis.deployment.yaml` copy the following yaml. 
- - ~~~yaml - apiVersion: apps/v1 - kind: Deployment - - metadata: - name: cache-redis - spec: - replicas: 1 - template: - spec: - containers: - - name: server - image: redis:6.2-alpine - command: - - redis-server - - "/etc/redis/redis.conf" - - "--requirepass $(REDIS_PASSWORD)" - env: - - name: REDIS_PASSWORD - valueFrom: - secretKeyRef: - name: cache-redis - key: redis-password - ports: - - containerPort: 6379 - resources: - limits: - memory: 64Mi - volumeMounts: - - name: redis-config - subPath: redis.conf - mountPath: /etc/redis/redis.conf - - name: metrics - image: oliver006/redis_exporter:v1.32.0-alpine - env: - - name: REDIS_PASSWORD - valueFrom: - secretKeyRef: - name: cache-redis - key: redis-password - resources: - limits: - memory: 32Mi - ports: - - containerPort: 9121 - volumes: - - name: redis-config - configMap: - name: cache-redis - defaultMode: 0444 - items: - - key: redis.conf - path: redis.conf - affinity: - podAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchExpressions: - - key: app - operator: In - values: - - server-nextcloud - topologyKey: "kubernetes.io/hostname" - ~~~ - - This `Deployment` resource describes the template for the pod that will contain the Redis server and its Prometheus metrics exporter service, running each on their own containers. - - - `replicas`: given the limitations of the cluster, only one instance of the Redis pod is requested. - - - `template`: describes how the pod resulting from this `Deployment` should be. - - - `spec.containers`: this pod template has two containers running in it, arranged in what's known as a _sidecar_ pattern. - - Container `server`: container that runs the Redis server itself. - - The Docker `image` used is the Alpine Linux variant of [the most recent 6.2 version](https://hub.docker.com/_/redis). - - In the `command` section you can see how the the configuration file path is indicated to the service, and also how the Redis password is obtained from a `cache-redis` secret (which you'll declare later), then turned into an environment variable (`env` section) to be passed to the `--requirepass` option. - - The `containerPort` is the same as the one set in the `redis.conf` file. - - The container is set with a RAM usage limit in the `resources` section. - - Container `metrics`: container that runs a service specialized in getting statistics from the Redis server in a format that Prometheus can read. - - The Docker `image` is an Alpine Linux variant of [the 1.32 version of this exporter](https://hub.docker.com/r/oliver006/redis_exporter). - - By default it tries to connect to `redis://localhost:6379`, which fits the configuration applied to the Redis service. - - In the `env` section, the Redis password is set so the exporter can authenticate in the Redis server. - - It also has limited RAM `resources` and its `containerPort` matches the one specified in the template's `metadata.annotations`. - - - `spec.volumes`: here only the `redis.conf` item, taken from a yet-to-be-defined `cache-redis` configmap object, is declared as a volume so it can be mounted by the `server` container, under its `volumeMounts` section. - - Notice here the `defaultMode` parameter, which sets to the items contained in the specified configMap a particular permission mode by default. In this case, it sets a read-only permission for all users with mode `0444` but only for the `items` listed below it (the `redis.conf` file in this case). 
- - Know that, in the items list, the `key` parameter is the name identifying the file inside the config map, and the `path` is the relative filename given to the item as volume. - - - `spec.affinity`: the `requiredDuringSchedulingIgnoredDuringExecution` affinity rule will make the Redis pod scheduled in the same node where a pod labeled with `app: server-nextcloud` is also created. This implies that the containers of the Redis pod won't be instanced until that other pod appears in the cluster. On the other hand, the Redis pod's containers won't be stopped when the pod they have the affinity with dissapears from the cluster. - -## Redis Service resource - -You have defined the pod that will execute the containers running the Redis server and its Prometheus statistics exporter, now you need to define the `Service` resource that will give access to them. - -1. Generate a new file named `cache-redis.service.yaml`, also under the `resources` subfolder. - - ~~~bash - $ touch $HOME/k8sprjs/nextcloud/components/cache-redis/resources/cache-redis.service.yaml - ~~~ - -2. Fill `cache-redis.service.yaml` with the content below. - - ~~~yaml - apiVersion: v1 - kind: Service - - metadata: - name: cache-redis - annotations: - prometheus.io/scrape: "true" - prometheus.io/port: "9121" - spec: - type: ClusterIP - clusterIP: 10.43.100.1 - ports: - - port: 6379 - protocol: TCP - name: server - - port: 9121 - protocol: TCP - name: metrics - ~~~ - - The Service resource defines how to access the Redis pod services. - - - `metadata.annotations`: two annotations required for the Prometheus data scraping service (you'll see how to deploy Prometheus in a later guide).These annotations inform Prometheus about which port to scrape for getting metrics of your Redis service, which is data provided by the specialized metrics service that runs in the second container of the Redis pod. - - - `spec.type`: by default, any `Service` resource is of type `ClusterIP`, meaning that the service is only reachable from within the cluster's internal network. You can omit this parameter altogether from the yaml when you're using the default type. - - - `spec.clusterIP`: where you put the chosen static cluster IP for this service. - - - `spec.ports`: describe the ports open in this service. Notice how I made the `name` and `port` on each port of this `Service` to be the same as the ones already defined for the containers in the previous `Deployment` resource. - -## Redis Kustomize project - -What remains to setup is the main `kustomization.yaml` file that describes the whole Redis Kustomize project. - -1. In the main `cache-redis` folder, create a `kustomization.yaml` file. - - ~~~bash - $ touch $HOME/k8sprjs/nextcloud/components/cache-redis/kustomization.yaml - ~~~ - -2. Fill the `kustomization.yaml` file with the following yaml. - - ~~~yaml - # Redis setup - apiVersion: kustomize.config.k8s.io/v1beta1 - kind: Kustomization - - commonLabels: - app: cache-redis - - resources: - - resources/cache-redis.deployment.yaml - - resources/cache-redis.service.yaml - - replicas: - - name: cache-redis - count: 1 - - images: - - name: redis - newTag: 6.2-alpine - - name: oliver006/redis_exporter - newTag: v1.32.0-alpine - - configMapGenerator: - - name: cache-redis - files: - - configs/redis.conf - - secretGenerator: - - name: cache-redis - files: - - redis-password=secrets/redis.pwd - ~~~ - - This `kustomization.yaml` file has elements you've already seen in previous deployments, plus a few extra ones. 
- - - With the `commonLabels` you can set up labels to all the resources generated from this `kustomization.yaml` file. In this case the label is `app: cache-redis`. - - - The `replicas` section allows you to handle the number of replicas you want for deployments, overriding whatever number already set in their base definition. In this case you only have one deployment listed, and the value put here is the same as the one set in the `cache-redis` deployment definition. - - - The `images` block gives you a handy way of changing the images specified within deployments, particularly useful for when you want to upgrade to newer versions. - - - There are two details to notice about the `configMapGenerator` and `secretGenerator`: - - In the `cache-redis`'s definition, the file `secrets/redis.pwd` is renamed to `redis-password`, which is the key expected within the `cache-redis` deployment definition. - - None of these generator blocks have the `disableNameSuffixHash` option enabled, because the name of the resources they generate is only used in standard Kubernetes parameters that are recognized by Kustomize. - -### _Validating the Kustomize yaml output_ - -With everything in place, you can check out the yaml resulting from the Redis' Kustomize project. - -1. Execute the `kubectl kustomize` command on the Redis Kustomize project's root folder, piped to `less` to get the output paginated. - - ~~~bash - $ kubectl kustomize $HOME/k8sprjs/nextcloud/components/cache-redis | less - ~~~ - - Alternatively, you could just dump the yaml output on a file, called `cache-redis.k.output.yaml` for instance. - - ~~~bash - $ kubectl kustomize $HOME/k8sprjs/nextcloud/components/cache-redis > cache-redis.k.output.yaml - ~~~ - -2. The resulting yaml should look like the one next. 
- - ~~~yaml - apiVersion: v1 - data: - redis.conf: | - port 6379 - bind 0.0.0.0 - protected-mode no - maxmemory 64mb - maxmemory-policy allkeys-lru - kind: ConfigMap - metadata: - labels: - app: cache-redis - name: cache-redis-6967fc5hc5 - --- - apiVersion: v1 - data: - redis-password: | - WTB1cl9yRTNlNDFMeS5sYWzDsWpmbMOxa2FlcnV0YW9uZ3ZvYW46YcOxb2RrbzM0OTQ4dX - lPbmctUzNrcmVUX1A0czV3b1JkLWhlUkUhCg== - kind: Secret - metadata: - labels: - app: cache-redis - name: cache-redis-bh9d296g5k - type: Opaque - --- - apiVersion: v1 - kind: Service - metadata: - annotations: - prometheus.io/port: "9121" - prometheus.io/scrape: "true" - labels: - app: cache-redis - name: cache-redis - spec: - clusterIP: 10.43.100.1 - ports: - - name: server - port: 6379 - protocol: TCP - - name: metrics - port: 9121 - protocol: TCP - selector: - app: cache-redis - type: ClusterIP - --- - apiVersion: apps/v1 - kind: Deployment - metadata: - labels: - app: cache-redis - name: cache-redis - spec: - replicas: 1 - selector: - matchLabels: - app: cache-redis - template: - metadata: - labels: - app: cache-redis - spec: - affinity: - podAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchExpressions: - - key: app - operator: In - values: - - server-nextcloud - topologyKey: kubernetes.io/hostname - containers: - - command: - - redis-server - - /etc/redis/redis.conf - - --requirepass $(REDIS_PASSWORD) - env: - - name: REDIS_PASSWORD - valueFrom: - secretKeyRef: - key: redis-password - name: cache-redis-bh9d296g5k - image: redis:6.2-alpine - name: server - ports: - - containerPort: 6379 - resources: - limits: - memory: 64Mi - volumeMounts: - - mountPath: /etc/redis/redis.conf - name: redis-config - subPath: redis.conf - - env: - - name: REDIS_PASSWORD - valueFrom: - secretKeyRef: - key: redis-password - name: cache-redis-bh9d296g5k - image: oliver006/redis_exporter:v1.32.0-alpine - name: metrics - ports: - - containerPort: 9121 - resources: - limits: - memory: 32Mi - volumes: - - configMap: - defaultMode: 292 - items: - - key: redis.conf - path: redis.conf - name: cache-redis-6967fc5hc5 - name: redis-config - ~~~ - - A few things to highlight in the yaml output above. - - You might have noticed this in the previous Kustomize projects you've deployed before, but the generated yaml output has the parameters within each resource sorted alphabetically. Be aware of this when you compare this output with the files you created and your expected results. - - - The names of the `cache-redis` config map and `cache-redis` secret have a hash as a suffix, added by Kustomize. The hash is calculated from the content of the renamed resources. - - - Another detail to notice is how the label `app: cache-redis` appears not only as label in the `metadata` section of all the resources, but Kustomize has also set it as `selector` both in the `Service` and the `Deployment` resources definitions. - - - There's also a particularity that might seem odd. The `defaultMode` of the `redis-config` volume is shown as `292` instead of the `0444` value you set in the Deployment resource definition. It's not a mistake, just a particularity of the Kustomize generation. The file will have the permission mode set as it is specified in the original `Deployment` resource. - -3. 
On the other hand, if you installed the `kubeval` command in your kubectl client system (as explained in the [G026 guide](G026%20-%20K3s%20cluster%20setup%2009%20~%20Setting%20up%20a%20kubectl%20client%20for%20remote%20access.md)), you can validate the Kustomize output with it. So, assuming you have dumped the output in a `cache-redis.k.output.yaml` file, you can execute the following. - - ~~~bash - $ kubeval cache-redis.k.output.yaml - PASS - cache-redis.kustomize.output.yaml contains a valid ConfigMap (cache-redis) - PASS - cache-redis.kustomize.output.yaml contains a valid Secret (cache-redis) - PASS - cache-redis.kustomize.output.yaml contains a valid Service (cache-redis) - PASS - cache-redis.kustomize.output.yaml contains a valid Deployment (cache-redis) - ~~~ - - At this `kubeval`'s output, you can see that prints one line per validated resource and, in this case, all of them `PASS` as valid resources. - -## Don't deploy this Redis project on its own - -Although you technically can deploy this Redis Kustomize project, wait until you have all the components and the main Nextcloud project ready. Then, you'll deploy the whole lot at once with just one `kubectl` command. - -## Relevant system paths - -### _Folders in `kubectl` client system_ - -- `$HOME/k8sprjs/nextcloud` -- `$HOME/k8sprjs/nextcloud/components` -- `$HOME/k8sprjs/nextcloud/components/cache-redis` -- `$HOME/k8sprjs/nextcloud/components/cache-redis/configs` -- `$HOME/k8sprjs/nextcloud/components/cache-redis/resources` -- `$HOME/k8sprjs/nextcloud/components/cache-redis/secrets` - -### _Files in `kubectl` client system_ - -- `$HOME/k8sprjs/nextcloud/components/cache-redis/kustomization.yaml` -- `$HOME/k8sprjs/nextcloud/components/cache-redis/configs/redis.conf` -- `$HOME/k8sprjs/nextcloud/components/cache-redis/resources/cache-redis.deployment.yaml` -- `$HOME/k8sprjs/nextcloud/components/cache-redis/resources/cache-redis.service.yaml` -- `$HOME/k8sprjs/nextcloud/components/cache-redis/secrets/redis.pwd` - -## References - -### _Kubernetes_ - -#### **Pod scheduling** - -- [Official Doc - Assigning Pods to Nodes](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) -- [Official Doc - Assign Pods to Nodes using Node Affinity](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) -- [Kubernetes API - Pod scheduling](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling) -- [STRATEGIES FOR KUBERNETES POD PLACEMENT AND SCHEDULING](https://thenewstack.io/strategies-for-kubernetes-pod-placement-and-scheduling/) -- [Implement Node and Pod Affinity/Anti-Affinity in Kubernetes: A Practical Example](https://thenewstack.io/implement-node-and-pod-affinity-anti-affinity-in-kubernetes-a-practical-example/) -- [Tutorial: Apply the Sidecar Pattern to Deploy Redis in Kubernetes](https://thenewstack.io/tutorial-apply-the-sidecar-pattern-to-deploy-redis-in-kubernetes/) -- [Amazon EKS Workshop Official Doc - Assigning Pods to Nodes](https://www.eksworkshop.com/beginner/140_assigning_pods/affinity_usecases/) - -#### **ConfigMaps and secrets** - -- [Official Doc - ConfigMaps](https://kubernetes.io/docs/concepts/configuration/configmap/) -- [Official Doc - Configure a Pod to Use a ConfigMap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) -- [An Introduction to Kubernetes Secrets and ConfigMaps](https://opensource.com/article/19/6/introduction-kubernetes-secrets-and-configmaps) -- [Kubernetes - Using ConfigMap 
SubPaths to Mount Files](https://dev.to/joshduffney/kubernetes-using-configmap-subpaths-to-mount-files-3a1i) -- [Kubernetes Secrets | Declare confidential data with examples](https://www.golinuxcloud.com/kubernetes-secrets/) -- [Kubernetes ConfigMaps and Secrets](https://shravan-kuchkula.github.io/kubernetes/configmaps-secrets/) -- [Import data to config map from kubernetes secret](https://stackoverflow.com/questions/50452665/import-data-to-config-map-from-kubernetes-secret) - -#### **Environment variables** - -- [Official Doc - Define Environment Variables for a Container](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) -- [Official Doc - Define Dependent Environment Variables](https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/) - -### _Redis_ - -- [Redis](https://redis.io/) -- [Redis FAQ](https://redis.io/topics/faq) -- [Redis administration](https://redis.io/topics/admin) -- [Redis on DockerHub](https://hub.docker.com/_/redis) -- [Prometheus Redis Metrics Exporter on DockerHub](https://hub.docker.com/r/oliver006/redis_exporter) -- [`redis.conf` commented example](https://gist.github.com/jeff-french/4712257) -- [Simple Redis Cache on Kubernetes with Prometheus Metrics](https://itnext.io/simple-redis-cache-on-kubernetes-with-prometheus-metrics-8667baceab6b) -- [Deploying single node redis in kubernetes environment](https://developpaper.com/deploying-single-node-redis-in-kubernetes-environment/) -- [Single server Redis](https://rpi4cluster.com/k3s/k3s-redis/) -- [Kubernetes Official Doc - Configuring Redis using a ConfigMap](https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/) -- [redis-server - Man Page](https://www.mankier.com/1/redis-server) -- [Deploy and Operate a Redis Cluster in Kubernetes](https://marklu-sf.medium.com/deploy-and-operate-a-redis-cluster-in-kubernetes-94fde7853001) -- [Redis Setup on Kubernetes](https://blog.opstree.com/2020/08/04/redis-setup-on-kubernetes/) -- [Rancher Official Doc - Deploying Redis Cluster on Top of Kubernetes](https://rancher.com/blog/2019/deploying-redis-cluster/) -- [Running Redis on Multi-Node Kubernetes Cluster in 5 Minutes](https://collabnix.com/running-redis-on-kubernetes-cluster-in-5-minutes/) -- [Redis sentinel vs clustering](https://stackoverflow.com/questions/31143072/redis-sentinel-vs-clustering) - -## Navigation - -[<< Previous (**G033. Deploying services 02. Nextcloud Part 1**)](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%201%20-%20Outlining%20setup%2C%20arranging%20storage%20and%20choosing%20service%20IPs.md) | [+Table Of Contents+](G000%20-%20Table%20Of%20Contents.md) | [Next (**G033. Deploying services 02. Nextcloud Part 3**) >>](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%203%20-%20MariaDB%20database%20server.md) diff --git a/G033 - Deploying services 02 ~ Nextcloud - Part 3 - MariaDB database server.md b/G033 - Deploying services 02 ~ Nextcloud - Part 3 - MariaDB database server.md deleted file mode 100644 index d5ec3ea..0000000 --- a/G033 - Deploying services 02 ~ Nextcloud - Part 3 - MariaDB database server.md +++ /dev/null @@ -1,812 +0,0 @@ -# G033 - Deploying services 02 ~ Nextcloud - Part 3 - MariaDB database server - -The Nextcloud platform needs a database and MariaDB is the database engine chosen for this task. 
-
-## MariaDB Kustomize project's folders
-
-Since the MariaDB database is just another component of your Nextcloud platform, you'll have to put its corresponding folders within the `nextcloud/components` path you created already in [the previous Redis guide](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%202%20-%20Redis%20cache%20server.md).
-
-~~~bash
-$ mkdir -p $HOME/k8sprjs/nextcloud/components/db-mariadb/{configs,resources,secrets}
-~~~
-
-Like Redis, MariaDB also has configuration, secret and resource files making up its Kustomize setup.
-
-## MariaDB configuration files
-
-The MariaDB deployment requires a lot of adjustments that have to be handled in configuration files.
-
-### _Configuration file `my.cnf`_
-
-The `my.cnf` file is the default configuration file for MariaDB. In it you can adjust many parameters of this database engine, something you'll need to do in this case.
-
-1. Create a `my.cnf` file in the `configs` folder.
-
-   ~~~bash
-   $ touch $HOME/k8sprjs/nextcloud/components/db-mariadb/configs/my.cnf
-   ~~~
-
-2. Edit `my.cnf` to put in it the configuration below.
-
-   ~~~properties
-   [server]
-   skip_name_resolve = 1
-   innodb_buffer_pool_size = 224M
-   innodb_flush_log_at_trx_commit = 2
-   innodb_log_buffer_size = 32M
-   query_cache_type = 1
-   query_cache_limit = 2M
-   query_cache_min_res_unit = 2k
-   query_cache_size = 64M
-   slow_query_log = 1
-   slow_query_log_file = /var/lib/mysql/slow.log
-   long_query_time = 1
-   innodb_io_capacity = 2000
-   innodb_io_capacity_max = 3000
-
-   [client-server]
-   !includedir /etc/mysql/conf.d/
-   !includedir /etc/mysql/mariadb.conf.d/
-
-   [client]
-   default-character-set = utf8mb4
-
-   [mysqld]
-   character_set_server = utf8mb4
-   collation_server = utf8mb4_general_ci
-   transaction_isolation = READ-COMMITTED
-   binlog_format = ROW
-   log_bin = /var/lib/mysql/mysql-bin.log
-   expire_logs_days = 7
-   max_binlog_size = 100M
-   innodb_file_per_table = 1
-   innodb_read_only_compressed = OFF
-   tmp_table_size = 32M
-   max_heap_table_size = 32M
-   max_connections = 512
-   ~~~
-
-   The `my.cnf` above is a modified version of an example found [in the official Nextcloud documentation](https://docs.nextcloud.com/server/latest/admin_manual/configuration_database/linux_database_configuration.html#configuring-a-mysql-or-mariadb-database).
-
-   - This configuration fits the requirements of transaction isolation level (`READ-COMMITTED`) and binlog format (`ROW`) demanded by Nextcloud.
-
-   - The `innodb_buffer_pool_size` parameter preconfigures the size of the buffer pool in memory, which should take between 60% and 80% of the RAM available for MariaDB. For instance, with the 320Mi memory limit you'll set later on the MariaDB container, the 224M configured here is roughly 70% of it, right within that band.
-
-   - The `innodb_io_capacity` and `innodb_io_capacity_max` parameters are related to the I/O capacity of the underlying storage. Here they've been increased from their default values to better fit the SSD volume used for storing the MariaDB data.
-
-   - The character set configured is `utf8mb4`, which is wider than the regular `utf8` one.
-
-   - Nextcloud uses table compression, but writing tables in that compressed format comes disabled by default since MariaDB 10.6. To enable it, the `innodb_read_only_compressed` parameter has to be set to `OFF`.
-
-   - The `max_connections` parameter caps how many simultaneous connections the instance accepts.
-
-### _Properties file `dbnames.properties`_
-
-There are a few names you need to specify in your database setup. Those names are values that you want to load as variables in the server container rather than typing them directly into MariaDB's configuration.
-
-1. Create a `dbnames.properties` file under the `configs` path.
-
-   ~~~bash
-   $ touch $HOME/k8sprjs/nextcloud/components/db-mariadb/configs/dbnames.properties
-   ~~~
-
-2. Copy the following parameter lines into `dbnames.properties`.
-
-   ~~~properties
-   nextcloud-db-name=nextcloud-db
-   nextcloud-username=nextcloud
-   prometheus-exporter-username=exporter
-   ~~~
-
-   The three key-value pairs above mean the following.
-
-   - `nextcloud-db-name`: name for Nextcloud's database.
-   - `nextcloud-username`: name for the user associated with Nextcloud's database.
-   - `prometheus-exporter-username`: name for the Prometheus metrics exporter user.
-
-### _Initializer shell script `initdb.sh`_
-
-The Prometheus metrics exporter system you'll include in the MariaDB deployment requires its own user to access certain statistical data from your MariaDB instance. You've already configured its name as a variable in the previous `dbnames.properties` file, but you also need to create the user within the MariaDB installation. The problem is that MariaDB can only create one user in its initial run, and you also need to create the user Nextcloud requires to work with its own database.
-
-To solve this issue, you can use an initializer shell script that creates that extra user you need in the MariaDB database.
-
-1. Create an `initdb.sh` file in the `configs` directory.
-
-   ~~~bash
-   $ touch $HOME/k8sprjs/nextcloud/components/db-mariadb/configs/initdb.sh
-   ~~~
-
-2. Fill the `initdb.sh` file with the following shell script.
-
-   ~~~bash
-   #!/bin/sh
-   echo ">>> Creating user for Mysql Prometheus metrics exporter"
-   mysql -u root -p$MYSQL_ROOT_PASSWORD --execute \
-   "CREATE USER '${MARIADB_PROMETHEUS_EXPORTER_USERNAME}'@'localhost' IDENTIFIED BY '${MARIADB_PROMETHEUS_EXPORTER_PASSWORD}' WITH MAX_USER_CONNECTIONS 3;
-   GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO '${MARIADB_PROMETHEUS_EXPORTER_USERNAME}'@'localhost';
-   FLUSH privileges;"
-   ~~~
-
-   See that what the script actually does is execute some SQL code through a `mysql` command to create the required user. And notice how, instead of putting raw values, environment variables (`MARIADB_PROMETHEUS_EXPORTER_USERNAME` and `MARIADB_PROMETHEUS_EXPORTER_PASSWORD`) are used as placeholders for several values. Those variables will be defined within the StatefulSet resource definition you'll prepare later in this guide.
-
-## MariaDB passwords
-
-There are a number of passwords you need to set up in the MariaDB installation.
-
-- The MariaDB root user's password.
-- The Nextcloud database user's password.
-- The Prometheus metrics exporter user's password.
-
-For convenience, let's declare all these passwords as variables in the same properties file, so they can be turned into a Secret resource later.
-
-1. Create a `dbusers.pwd` file under the `secrets` path.
-
-   ~~~bash
-   $ touch $HOME/k8sprjs/nextcloud/components/db-mariadb/secrets/dbusers.pwd
-   ~~~
-
-2. Fill `dbusers.pwd` with the following variables.
-
-   ~~~properties
-   root-password=l0nG.Pl4in_T3xt_sEkRet_p4s5wORD-FoR_rOo7_uZ3r!
-   nextcloud-user-password=l0nG.Pl4in_T3xt_sEkRet_p4s5wORD-FoR_nEx7k1OuD_uZ3r!
-   prometheus-exporter-password=l0nG.Pl4in_T3xt_sEkRet_p4s5wORD-FoR_3xP0rTeR_uZ3r!
-   ~~~
-
-   The passwords have to be put here as plain unencrypted text, so be careful about who can access this file.
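-
-By the way, nothing forces you to invent these password strings by hand. Assuming your kubectl client system has the `openssl` command available (most Linux distributions include it), you could generate random ones as in the sketch below.
-
-~~~bash
-# Prints a random 32-byte value encoded in base64, long enough to use as a password.
-$ openssl rand -base64 32
-~~~
-
-Run it once per password and paste each result as the value of the corresponding key in `dbusers.pwd`.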
-
-## MariaDB storage
-
-Storage in Kubernetes has essentially two sides: the enablement of storage as persistent volumes (PVs), and the claims (PVCs) made on those persistent volumes. For MariaDB you'll need one persistent volume, which you'll declare in the last part of this Nextcloud guide, and the claim on that particular PV.
-
-1. A persistent volume claim is a resource, so create a `db-mariadb.persistentvolumeclaim.yaml` file under the `resources` folder.
-
-   ~~~bash
-   $ touch $HOME/k8sprjs/nextcloud/components/db-mariadb/resources/db-mariadb.persistentvolumeclaim.yaml
-   ~~~
-
-2. Copy the yaml manifest below into `db-mariadb.persistentvolumeclaim.yaml`.
-
-   ~~~yaml
-   apiVersion: v1
-   kind: PersistentVolumeClaim
-
-   metadata:
-     name: db-mariadb
-   spec:
-     accessModes:
-     - ReadWriteOnce
-     storageClassName: local-path
-     volumeName: db-nextcloud
-     resources:
-       requests:
-         storage: 3.5G
-   ~~~
-
-   There are a few details to understand in the PVC above.
-
-   - The `spec.accessModes` is specified. This is mandatory in a claim, and it cannot demand a mode that's not enabled in the persistent volume itself.
-
-   - The `spec.storageClassName` parameter indicates what storage profile (a particular set of properties) to use with the persistent volume. K3s comes with just the `local-path` class included by default, something you can check out on your own K3s cluster with `kubectl`.
-
-     ~~~bash
-     $ kubectl get storageclass
-     NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
-     local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  10d
-     ~~~
-
-   - The `spec.volumeName` is the name of the persistent volume this claim binds itself to. Bear in mind that persistent volumes are cluster-wide resources, not namespaced ones.
-
-   - It's also mandatory to specify in a claim how much storage is requested, hence the `spec.resources.requests.storage` parameter. Be careful not to request more space than what's available in the volume.
-
-   - Needless to say, the persistent volume related to this claim must correspond to the values set here.
-
-## MariaDB StatefulSet resource
-
-Instead of using a Deployment resource to put MariaDB in your Kubernetes cluster, you'll use a **StatefulSet**. Stateful sets are meant for deploying apps or services that store data (_state_) permanently, as databases such as MariaDB do.
-
-1. Create a `db-mariadb.statefulset.yaml` file in the `resources` path.
-
-   ~~~bash
-   $ touch $HOME/k8sprjs/nextcloud/components/db-mariadb/resources/db-mariadb.statefulset.yaml
-   ~~~
-
-2. Put the next resource description in `db-mariadb.statefulset.yaml`.
- - ~~~yaml - apiVersion: apps/v1 - kind: StatefulSet - - metadata: - name: db-mariadb - spec: - replicas: 1 - serviceName: db-mariadb - template: - spec: - containers: - - name: server - image: mariadb:10.6-focal - ports: - - containerPort: 3306 - env: - - name: MYSQL_DATABASE - valueFrom: - configMapKeyRef: - name: db-mariadb - key: nextcloud-db-name - - name: MYSQL_ROOT_PASSWORD - valueFrom: - secretKeyRef: - name: db-mariadb - key: root-password - - name: MYSQL_USER - valueFrom: - configMapKeyRef: - name: db-mariadb - key: nextcloud-username - - name: MYSQL_PASSWORD - valueFrom: - secretKeyRef: - name: db-mariadb - key: nextcloud-user-password - - name: MARIADB_PROMETHEUS_EXPORTER_USERNAME - valueFrom: - configMapKeyRef: - name: db-mariadb - key: prometheus-exporter-username - - name: MARIADB_PROMETHEUS_EXPORTER_PASSWORD - valueFrom: - secretKeyRef: - name: db-mariadb - key: prometheus-exporter-password - resources: - limits: - memory: 320Mi - volumeMounts: - - name: mariadb-config - subPath: my.cnf - mountPath: /etc/mysql/my.cnf - - name: mariadb-config - subPath: initdb.sh - mountPath: /docker-entrypoint-initdb.d/initdb.sh - - name: mariadb-storage - mountPath: /var/lib/mysql - - name: metrics - image: prom/mysqld-exporter:v0.13.0 - ports: - - containerPort: 9104 - args: - - --collect.info_schema.tables - - --collect.info_schema.innodb_tablespaces - - --collect.info_schema.innodb_metrics - - --collect.global_status - - --collect.global_variables - - --collect.slave_status - - --collect.info_schema.processlist - - --collect.perf_schema.tablelocks - - --collect.perf_schema.eventsstatements - - --collect.perf_schema.eventsstatementssum - - --collect.perf_schema.eventswaits - - --collect.auto_increment.columns - - --collect.binlog_size - - --collect.perf_schema.tableiowaits - - --collect.perf_schema.indexiowaits - - --collect.info_schema.userstats - - --collect.info_schema.clientstats - - --collect.info_schema.tablestats - - --collect.info_schema.schemastats - - --collect.perf_schema.file_events - - --collect.perf_schema.file_instances - - --collect.perf_schema.replication_group_member_stats - - --collect.perf_schema.replication_applier_status_by_worker - - --collect.slave_hosts - - --collect.info_schema.innodb_cmp - - --collect.info_schema.innodb_cmpmem - - --collect.info_schema.query_response_time - - --collect.engine_tokudb_status - - --collect.engine_innodb_status - env: - - name: MARIADB_PROMETHEUS_EXPORTER_USERNAME - valueFrom: - configMapKeyRef: - name: db-mariadb - key: prometheus-exporter-username - - name: MARIADB_PROMETHEUS_EXPORTER_PASSWORD - valueFrom: - secretKeyRef: - name: db-mariadb - key: prometheus-exporter-password - - name: DATA_SOURCE_NAME - value: "$(MARIADB_PROMETHEUS_EXPORTER_USERNAME):$(MARIADB_PROMETHEUS_EXPORTER_PASSWORD)@(localhost:3306)/" - resources: - limits: - memory: 32Mi - volumes: - - name: mariadb-config - configMap: - name: db-mariadb - items: - - key: initdb.sh - path: initdb.sh - - key: my.cnf - path: my.cnf - - name: mariadb-storage - persistentVolumeClaim: - claimName: db-mariadb - ~~~ - - If you compare this `StatefulSet` with Redis' `Deployment` you'll find many similarities regarding parameters, but there are also several differences. - - - `serviceName`: links this `StatefulSet` to a `Service`. - > **BEWARE!** - > A `StatefulSet` can only be linked to an **already existing** `Service`. - - - `template.spec.containers`: like in the Redis case, two containers are set in the pod as sidecars. - - - Container `server`: the MariaDB server instance. 
-
-       - The `image` of MariaDB used here is based on the Focal Fossa version (20.04 LTS) of Ubuntu.
-       - The `env` section contains several environment parameters. The ones with the `MYSQL_` prefix are directly recognized by the MariaDB server. The `MARIADB_PROMETHEUS_EXPORTER_USERNAME` and `MARIADB_PROMETHEUS_EXPORTER_PASSWORD` ones are meant only for the `initdb.sh` initializer script. Notice how the values of these environment parameters are taken from a `db-mariadb` secret and a `db-mariadb` config map you'll declare later.
-       - The `volumeMounts` section contains three mount points.
-         - MountPath `/etc/mysql/my.cnf`: the default path where MariaDB has its `my.cnf` file. This `my.cnf` file is the one you created before, and you'll load it later in the `db-mariadb` config map resource.
-         - MountPath `/docker-entrypoint-initdb.d/initdb.sh`: the path `/docker-entrypoint-initdb.d` is a special one within the MariaDB container, prepared to execute (in alphabetical order) any shell or SQL scripts you put in there, but only the **first time** the container runs. This way you can initialize databases or create extra users, as your `initdb.sh` script does. You'll also include `initdb.sh` in the `db-mariadb` config map resource.
-         - MountPath `/var/lib/mysql`: this is the default data folder of MariaDB. It's where the `mariadb-storage` volume's filesystem will be mounted.
-
-     - Container `metrics`: the Prometheus metrics exporter service related to the MariaDB server.
-       - The `image` of this exporter is not clear about [what Linux distribution it's based on](https://hub.docker.com/r/prom/mysqld-exporter), although it's probably Debian.
-       - In `args`, a number of flags are set for the command that launches the service in the container.
-       - In `env` you have the environment parameters `MARIADB_PROMETHEUS_EXPORTER_USERNAME` and `MARIADB_PROMETHEUS_EXPORTER_PASSWORD` you already saw in the definition of the MariaDB container. They're defined here so the next environment parameter, `DATA_SOURCE_NAME`, can use them. This last parameter is required for this Prometheus metrics service to connect to the MariaDB instance with its own user (the one created by the `initdb.sh` script that initializes MariaDB). Also see how the URL it connects to is `localhost:3306`, which works because the two containers run in the same pod.
-
-   - `template.spec.volumes`: sets the storage volumes that are to be used in the pod described in this template.
-     - With name `mariadb-config`: the `my.cnf` and `initdb.sh` files are enabled here as volumes. The files will have the permission mode `644` by default in the container that mounts them.
-     - With name `mariadb-storage`: here the `PersistentVolumeClaim` named `db-mariadb` is enabled as a volume called `mariadb-storage`.
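-
-By the way, once the whole Nextcloud project is finally deployed, a quick way of confirming that `initdb.sh` did its job is listing the users known to your MariaDB instance. A possible check, assuming here a hypothetical `nextcloud` namespace (you'll define the real one in the final part of this guide), would be the following.
-
-~~~bash
-# The StatefulSet will name its single pod db-mariadb-0; mysql prompts for the root password.
-$ kubectl -n nextcloud exec -it db-mariadb-0 -c server -- mysql -u root -p --execute "SELECT user, host FROM mysql.user;"
-~~~
-
-The `exporter` user created by the script should show up in the list, next to the `nextcloud` user that MariaDB itself creates on its first run.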
-
-## MariaDB Service resource
-
-The previous `StatefulSet` requires a `Service` named `db-mariadb` to run, so you need to declare it.
-
-1. Create a file named `db-mariadb.service.yaml` under `resources`.
-
-   ~~~bash
-   $ touch $HOME/k8sprjs/nextcloud/components/db-mariadb/resources/db-mariadb.service.yaml
-   ~~~
-
-2. Edit `db-mariadb.service.yaml` and put the following yaml in it.
-
-   ~~~yaml
-   apiVersion: v1
-   kind: Service
-
-   metadata:
-     annotations:
-       prometheus.io/scrape: "true"
-       prometheus.io/port: "9104"
-     name: db-mariadb
-   spec:
-     type: ClusterIP
-     clusterIP: 10.43.100.2
-     ports:
-     - port: 3306
-       protocol: TCP
-       name: server
-     - port: 9104
-       protocol: TCP
-       name: metrics
-   ~~~
-
-   The main things to notice here are that the cluster IP is the one you chose beforehand, and that the `port` numbers correspond to the ones configured as `containerPort`s in the MariaDB `StatefulSet`.
-
-## MariaDB Kustomize project
-
-Now you have to create the main `kustomization.yaml` file describing your MariaDB Kustomize project.
-
-1. Under `db-mariadb`, create a `kustomization.yaml` file.
-
-   ~~~bash
-   $ touch $HOME/k8sprjs/nextcloud/components/db-mariadb/kustomization.yaml
-   ~~~
-
-2. Fill `kustomization.yaml` with the yaml definition below.
-
-   ~~~yaml
-   # MariaDB setup
-   apiVersion: kustomize.config.k8s.io/v1beta1
-   kind: Kustomization
-
-   commonLabels:
-     app: db-mariadb
-
-   resources:
-   - resources/db-mariadb.persistentvolumeclaim.yaml
-   - resources/db-mariadb.service.yaml
-   - resources/db-mariadb.statefulset.yaml
-
-   replicas:
-   - name: db-mariadb
-     count: 1
-
-   images:
-   - name: mariadb
-     newTag: 10.6-focal
-   - name: prom/mysqld-exporter
-     newTag: v0.13.0
-
-   configMapGenerator:
-   - name: db-mariadb
-     envs:
-     - configs/dbnames.properties
-     files:
-     - configs/initdb.sh
-     - configs/my.cnf
-
-   secretGenerator:
-   - name: db-mariadb
-     envs:
-     - secrets/dbusers.pwd
-   ~~~
-
-   This `kustomization.yaml` is very similar to the one you did for Redis, with the main difference being in the generator sections.
-
-   - The `configMapGenerator` sets up one `ConfigMap` resource called `db-mariadb`. When generated, it'll contain the two files specified under `files` and all the key-value pairs included in the file referenced in `envs`.
-
-   - The `secretGenerator` prepares one `Secret` resource named `db-mariadb` that only contains the key-value pairs within the file pointed at in the `envs` section.
-
-### _Checking the Kustomize yaml output_
-
-At this point, you can verify with `kubectl` that the Kustomize project for MariaDB gives you the proper yaml output.
-
-1. Execute `kubectl kustomize` and pipe the yaml output into the `less` command, or dump it into a file.
-
-   ~~~bash
-   $ kubectl kustomize $HOME/k8sprjs/nextcloud/components/db-mariadb | less
-   ~~~
-
-2. See that your yaml output is like the one below.
- - ~~~yaml - apiVersion: v1 - data: - initdb.sh: | - #!/bin/sh - echo ">>> Creating user for Mysql Prometheus metrics exporter" - mysql -u root -p$MYSQL_ROOT_PASSWORD --execute \ - "CREATE USER '${MARIADB_PROMETHEUS_EXPORTER_USERNAME}'@'localhost' IDENTIFIED BY '${MARIADB_PROMETHEUS_EXPORTER_PASSWORD}' WITH MAX_USER_CONNECTIONS 3; - GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO '${MARIADB_PROMETHEUS_EXPORTER_USERNAME}'@'localhost'; - FLUSH privileges;" - my.cnf: | - [server] - skip_name_resolve = 1 - innodb_buffer_pool_size = 224M - innodb_flush_log_at_trx_commit = 2 - innodb_log_buffer_size = 32M - query_cache_type = 1 - query_cache_limit = 2M - query_cache_min_res_unit = 2k - query_cache_size = 64M - slow_query_log = 1 - slow_query_log_file = /var/lib/mysql/slow.log - long_query_time = 1 - innodb_io_capacity = 2000 - innodb_io_capacity_max = 3000 - - [client-server] - !includedir /etc/mysql/conf.d/ - !includedir /etc/mysql/mariadb.conf.d/ - - [client] - default-character-set = utf8mb4 - - [mysqld] - character_set_server = utf8mb4 - collation_server = utf8mb4_general_ci - transaction_isolation = READ-COMMITTED - binlog_format = ROW - log_bin = /var/lib/mysql/mysql-bin.log - expire_logs_days = 7 - max_binlog_size = 100M - innodb_file_per_table=1 - innodb_read_only_compressed = OFF - tmp_table_size= 32M - max_heap_table_size= 32M - max_connections=512 - nextcloud-db-name: nextcloud-db - nextcloud-username: nextcloud - prometheus-exporter-username: exporter - kind: ConfigMap - metadata: - labels: - app: db-mariadb - name: db-mariadb-88gc2m5h46 - --- - apiVersion: v1 - data: - nextcloud-user-password: | - cTQ4OXE1NjlnYWRmamzDsWtqcXdpb2VrbnZrbG5rd2VvbG12bGtqYcOxc2RnYWlvcGgyYXNkZmFz - a2RrbmZnbDIK - prometheus-exporter-password: | - bmd1ZXVlaTVpdG52Ym52amhha29hb3BkcGRrY25naGZ1ZXI5MzlrZTIwMm1mbWZ2bHNvc2QwM2Zr - ZDkyM2zDsQo= - root-password: | - MDk0ODM1bXZuYjg5MDM4N212Mmk5M21jam5yamhya3Nkw7Fzb3B3ZWpmZ212eHNvZWRqOTNkam1k - bDI5ZG1qego= - kind: Secret - metadata: - labels: - app: db-mariadb - name: db-mariadb-dg5cm45947 - type: Opaque - --- - apiVersion: v1 - kind: Service - metadata: - annotations: - prometheus.io/port: "9104" - prometheus.io/scrape: "true" - labels: - app: db-mariadb - name: db-mariadb - spec: - clusterIP: 10.43.100.2 - ports: - - name: server - port: 3306 - protocol: TCP - - name: metrics - port: 9104 - protocol: TCP - selector: - app: db-mariadb - type: ClusterIP - --- - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - labels: - app: db-mariadb - name: db-mariadb - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 3.5G - storageClassName: local-path - volumeName: db-nextcloud - --- - apiVersion: apps/v1 - kind: StatefulSet - metadata: - labels: - app: db-mariadb - name: db-mariadb - spec: - replicas: 1 - selector: - matchLabels: - app: db-mariadb - serviceName: db-mariadb - template: - metadata: - labels: - app: db-mariadb - spec: - containers: - - env: - - name: MYSQL_DATABASE - valueFrom: - configMapKeyRef: - key: nextcloud-db-name - name: db-mariadb-88gc2m5h46 - - name: MYSQL_ROOT_PASSWORD - valueFrom: - secretKeyRef: - key: root-password - name: db-mariadb-dg5cm45947 - - name: MYSQL_USER - valueFrom: - configMapKeyRef: - key: nextcloud-username - name: db-mariadb-88gc2m5h46 - - name: MYSQL_PASSWORD - valueFrom: - secretKeyRef: - key: nextcloud-user-password - name: db-mariadb-dg5cm45947 - - name: MARIADB_PROMETHEUS_EXPORTER_USERNAME - valueFrom: - configMapKeyRef: - key: prometheus-exporter-username - name: db-mariadb-88gc2m5h46 - 
- name: MARIADB_PROMETHEUS_EXPORTER_PASSWORD - valueFrom: - secretKeyRef: - key: prometheus-exporter-password - name: db-mariadb-dg5cm45947 - image: mariadb:10.6-focal - name: server - ports: - - containerPort: 3306 - resources: - limits: - memory: 320Mi - volumeMounts: - - mountPath: /etc/mysql/my.cnf - name: mariadb-config - subPath: my.cnf - - mountPath: /docker-entrypoint-initdb.d/initdb.sh - name: mariadb-config - subPath: initdb.sh - - mountPath: /var/lib/mysql - name: mariadb-storage - - args: - - --collect.info_schema.tables - - --collect.info_schema.innodb_tablespaces - - --collect.info_schema.innodb_metrics - - --collect.global_status - - --collect.global_variables - - --collect.slave_status - - --collect.info_schema.processlist - - --collect.perf_schema.tablelocks - - --collect.perf_schema.eventsstatements - - --collect.perf_schema.eventsstatementssum - - --collect.perf_schema.eventswaits - - --collect.auto_increment.columns - - --collect.binlog_size - - --collect.perf_schema.tableiowaits - - --collect.perf_schema.indexiowaits - - --collect.info_schema.userstats - - --collect.info_schema.clientstats - - --collect.info_schema.tablestats - - --collect.info_schema.schemastats - - --collect.perf_schema.file_events - - --collect.perf_schema.file_instances - - --collect.perf_schema.replication_group_member_stats - - --collect.perf_schema.replication_applier_status_by_worker - - --collect.slave_hosts - - --collect.info_schema.innodb_cmp - - --collect.info_schema.innodb_cmpmem - - --collect.info_schema.query_response_time - - --collect.engine_tokudb_status - - --collect.engine_innodb_status - env: - - name: MARIADB_PROMETHEUS_EXPORTER_USERNAME - valueFrom: - configMapKeyRef: - key: prometheus-exporter-username - name: db-mariadb-88gc2m5h46 - - name: MARIADB_PROMETHEUS_EXPORTER_PASSWORD - valueFrom: - secretKeyRef: - key: prometheus-exporter-password - name: db-mariadb-dg5cm45947 - - name: DATA_SOURCE_NAME - value: $(MARIADB_PROMETHEUS_EXPORTER_USERNAME):$(MARIADB_PROMETHEUS_EXPORTER_PASSWORD)@(localhost:3306)/ - image: prom/mysqld-exporter:v0.13.0 - name: metrics - ports: - - containerPort: 9104 - resources: - limits: - memory: 32Mi - volumes: - - configMap: - items: - - key: initdb.sh - path: initdb.sh - - key: my.cnf - path: my.cnf - name: db-mariadb-88gc2m5h46 - name: mariadb-config - - name: mariadb-storage - persistentVolumeClaim: - claimName: db-mariadb - ~~~ - - Pay particular attention to the `ConfigMap` and `Secret` resources declared in the output. - - - Their names have a hash as a suffix appended to their names. - - - The `db-mariadb` config map has the `initdb.sh` and `my.cnf` loaded in it (filenames as keys and their full contents as values), and the key-value pairs found in `dbnames.properties` are set independently. - - - The `db-mariadb` secret has all the key-value pairs set in the `dbusers.pwd` but with the particularity that the values have been automatically encoded in base64. - -3. Remember that, if you dumped the Kustomize output into a yaml file, you can validate it with `kubeval`. - -## Don't deploy this MariaDB project on its own - -This MariaDB setup is missing one critical element, the persistent volume it needs to store data and which you must not confuse with the claim you've configured for your MariaDB server. That PV and other elements will be declared in the main Kustomize project you'll prepare in the final part of this guide. Till then, don't deploy this setup of MariaDB. 
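-
-Validating the project, on the other hand, deploys nothing, so you can keep checking it while you work on the remaining pieces. If you'd rather skip dumping the output into a file, `kubeval` can also read from its standard input, which allows a one-liner like the following (assuming you have `kubeval` installed, as in the Redis part).
-
-~~~bash
-# Pipes the generated manifests straight into kubeval instead of going through an intermediate file.
-$ kubectl kustomize $HOME/k8sprjs/nextcloud/components/db-mariadb | kubeval
-~~~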
- -## Relevant system paths - -### _Folders in `kubectl` client system_ - -- `$HOME/k8sprjs/nextcloud/components/db-mariadb` -- `$HOME/k8sprjs/nextcloud/components/db-mariadb/configs` -- `$HOME/k8sprjs/nextcloud/components/db-mariadb/resources` -- `$HOME/k8sprjs/nextcloud/components/db-mariadb/secrets` - -### _Files in `kubectl` client system_ - -- `$HOME/k8sprjs/nextcloud/components/db-mariadb/kustomization.yaml` -- `$HOME/k8sprjs/nextcloud/components/db-mariadb/configs/dbnames.properties` -- `$HOME/k8sprjs/nextcloud/components/db-mariadb/configs/initdb.sh` -- `$HOME/k8sprjs/nextcloud/components/db-mariadb/configs/my.cnf` -- `$HOME/k8sprjs/nextcloud/components/db-mariadb/resources/db-mariadb.persistentvolumeclaim.yaml` -- `$HOME/k8sprjs/nextcloud/components/db-mariadb/resources/db-mariadb.service.yaml` -- `$HOME/k8sprjs/nextcloud/components/db-mariadb/resources/db-mariadb.statefulset.yaml` -- `$HOME/k8sprjs/nextcloud/components/db-mariadb/secrets/dbusers.pwd` - -## References - -### _Kubernetes_ - -#### **ConfigMaps and secrets** - -- [Official Doc - ConfigMaps](https://kubernetes.io/docs/concepts/configuration/configmap/) -- [Official Doc - Configure a Pod to Use a ConfigMap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) -- [An Introduction to Kubernetes Secrets and ConfigMaps](https://opensource.com/article/19/6/introduction-kubernetes-secrets-and-configmaps) -- [Kubernetes - Using ConfigMap SubPaths to Mount Files](https://dev.to/joshduffney/kubernetes-using-configmap-subpaths-to-mount-files-3a1i) -- [Kubernetes Secrets | Declare confidential data with examples](https://www.golinuxcloud.com/kubernetes-secrets/) -- [Kubernetes ConfigMaps and Secrets](https://shravan-kuchkula.github.io/kubernetes/configmaps-secrets/) -- [Import data to config map from kubernetes secret](https://stackoverflow.com/questions/50452665/import-data-to-config-map-from-kubernetes-secret) - -#### **Storage** - -- [Official Doc - Local Storage Volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local) -- [Official Doc - Local Persistent Volumes for Kubernetes Goes Beta](https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/) -- [Official Doc - Reserving a PersistentVolume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reserving-a-persistentvolume) -- [Official Doc - Local StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/#local) -- [Official Doc - Reclaiming Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming) -- [Kubernetes API - PersistentVolume](https://kubernetes.io/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-v1/) -- [Kubernetes API - PersistentVolumeClaim](https://kubernetes.io/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-claim-v1/) -- [Kubernetes Persistent Volumes, Claims, Storage Classes, and More](https://cloud.netapp.com/blog/kubernetes-persistent-storage-why-where-and-how) -- [Rancher K3s - Setting up the Local Storage Provider](https://rancher.com/docs/k3s/latest/en/storage/) -- [K3s local path provisioner on GitHub](https://github.com/rancher/local-path-provisioner) -- [Using "local-path" in persistent volume requires sudo to edit files on host node?](https://github.com/k3s-io/k3s/issues/1823) -- [Kubernetes size definitions: What's the difference of "Gi" and "G"?](https://stackoverflow.com/questions/50804915/kubernetes-size-definitions-whats-the-difference-of-gi-and-g) -- [distinguish unset 
and empty values for storageClassName](https://github.com/helm/helm/issues/2600) -- [Kubernetes Mounting Volumes in Pods. Mount Path Ownership and Permissions](https://kb.novaordis.com/index.php/Kubernetes_Mounting_Volumes_in_Pods#Mount_Path_Ownership_and_Permissions) - -#### **StatefulSets** - -- [Official Doc](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) - -#### **Environment variables** - -- [Official Doc - Define Environment Variables for a Container](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) -- [Official Doc - Define Dependent Environment Variables](https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/) - -### _MariaDB_ - -- [MariaDB](https://mariadb.com/) -- [MariaDB on DockerHub](https://hub.docker.com/_/mariadb?tab=description&page=1&ordering=last_updated) -- [MySQL Server Exporter on GitHub](https://github.com/prometheus/mysqld_exporter) -- [MySQL Server Prometheus Exporter](https://hub.docker.com/r/prom/mysqld-exporter) -- [Using Prometheus to Monitor MySQL and MariaDB](https://intl.cloud.tencent.com/document/product/457/38553) -- [Server System Variables](https://mariadb.com/kb/en/server-system-variables/) -- [Configuring MariaDB with Option Files](https://mariadb.com/kb/en/configuring-mariadb-with-option-files/) -- [Add another user to MySQL in Kubernetes](https://stackoverflow.com/questions/50373869/add-another-user-to-mysql-in-kubernetes) -- [Initialize a fresh instance of Mariadb](https://github.com/helm/charts/issues/13122) -- [using environment variables in docker-compose mounted files for initializing mysql](https://stackoverflow.com/questions/68411534/using-environment-variables-in-docker-compose-mounted-files-for-initializing-mys) -- [How to initialize mysql container when created on Kubernetes?](https://stackoverflow.com/questions/45681780/how-to-initialize-mysql-container-when-created-on-kubernetes) -- [import mysql data to kubernetes pod](https://stackoverflow.com/questions/40166149/import-mysql-data-to-kubernetes-pod/50877213#50877213) -- [Binary Log Formats](https://mariadb.com/kb/en/binary-log-formats/) -- [How to Enable and Use Binary Log in MySQL/MariaDB](https://snapshooter.com/learn/mysql/enable-and-use-binary-log-mysql) -- [How to Change a Default MySQL/MariaDB Data Directory in Linux](https://www.tecmint.com/change-default-mysql-mariadb-data-directory-in-linux/) -- [`mysql` Command-line Client](https://mariadb.com/kb/en/mysql-command-line-client/) -- [How to see/get a list of MySQL/MariaDB users accounts](https://www.cyberciti.biz/faq/how-to-show-list-users-in-a-mysql-mariadb-database/) -- [Introduction to four key MariaDB client commands](https://mariadb.com/resources/blog/introduction-to-four-key-mariadb-client-commands/) -- [How to fix Nextcloud 4047 InnoDB refuses to write tables with ROW_FORMAT=COMPRESSED or KEY_BLOCK_SIZE](https://techoverflow.net/2021/08/17/how-to-fix-nextcloud-4047-innodb-refuses-to-write-tables-with-row_formatcompressed-or-key_block_size/) -- [Mariadb 10.6 won't allow writing in compressed innodb by default](https://myhub.eu.org/article/16/mariadb-10-6-wont-allow-writing-in-compressed-innodb-by-default/) -- [Configuring MariaDB for Optimal Performance](https://mariadb.com/kb/en/configuring-mariadb-for-optimal-performance/) -- [15 Useful MySQL/MariaDB Performance Tuning and Optimization Tips](https://www.tecmint.com/mysql-mariadb-performance-tuning-and-optimization/) -- [Analyze MySQL 
Performance](http://mysql.rjweb.org/doc.php/mysql_analysis)
-- [Setup innodb_io_capacity](https://dba.stackexchange.com/questions/258931/setup-innodb-io-capacity)
-
-### _Nextcloud_
-
-- [Database configuration](https://docs.nextcloud.com/server/latest/admin_manual/configuration_database/linux_database_configuration.html)
-
-## Navigation
-
-[<< Previous (**G033. Deploying services 02. Nextcloud Part 2**)](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%202%20-%20Redis%20cache%20server.md) | [+Table Of Contents+](G000%20-%20Table%20Of%20Contents.md) | [Next (**G033. Deploying services 02. Nextcloud Part 4**) >>](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%204%20-%20Nextcloud%20server.md)
diff --git a/G033 - Deploying services 02 ~ Nextcloud - Part 4 - Nextcloud server.md b/G033 - Deploying services 02 ~ Nextcloud - Part 4 - Nextcloud server.md
deleted file mode 100644
index 4380b5f..0000000
--- a/G033 - Deploying services 02 ~ Nextcloud - Part 4 - Nextcloud server.md
+++ /dev/null
@@ -1,949 +0,0 @@
-# G033 - Deploying services 02 ~ Nextcloud - Part 4 - Nextcloud server
-
-In this fourth part of the Nextcloud platform guide you'll configure the Nextcloud server itself, as another component of the whole platform.
-
-## Considerations about the Nextcloud server
-
-The Nextcloud server is a PHP application that needs a web server like Apache or Nginx to serve its pages. The default image for Nextcloud is an Apache-based one, which is more straightforward to set up, while using Nginx is rather more complex. In this document I'll show you how to declare the Nextcloud server with Apache. You'll find an alternative configuration with Nginx and other ideas in the [**G911** appendix guide](G911%20-%20Appendix%2011%20~%20Alternative%20Nextcloud%20web%20server%20setups.md).
-
-On the other hand, there's the matter of the certificate required to encrypt communications between clients and the Nextcloud server. The certificate itself is the wildcard one you already created back in the [**G029** guide](G029%20-%20K3s%20cluster%20setup%2012%20~%20Setting%20up%20cert-manager%20and%20wildcard%20certificate.md#setting-up-a-wildcard-certificate-for-a-domain), where you also deployed the cert-manager service into your Kubernetes cluster. You need to replicate the secret associated with that certificate in the namespace for this whole Nextcloud platform project, but that is something you'll do in the final part of this Nextcloud guide. Here you'll only see how that secret is referenced in the resource definition where it's required.
-
-## Nextcloud server Kustomize project's folders
-
-As with the other components, you need a folder structure for this Kustomize project.
-
-~~~bash
-$ mkdir -p $HOME/k8sprjs/nextcloud/components/server-nextcloud/{configs,resources,secrets}
-~~~
-
-## Nextcloud server configuration files
-
-This Apache-based Nextcloud server needs a couple of configuration files, plus another one with a bunch of key-value parameters.
-
-### _Configuration file `ports.conf`_
-
-The `ports.conf` file is where you must indicate to Apache which ports it has to listen on.
-
-1. Create the `ports.conf` file under the `configs` folder.
-
-   ~~~bash
-   $ touch $HOME/k8sprjs/nextcloud/components/server-nextcloud/configs/ports.conf
-   ~~~
-
-2. Copy the content below in `ports.conf`.
-
-   ~~~apache
-   Listen 443
-
-   # vim: syntax=apache ts=4 sw=4 sts=4 sr noet
-   ~~~
-
-   There's only one port configured here, the `443`, which is the one used for **HTTPS** connections.
The last line is just to inform the `vim` editor about this file's format.
-
-### _Configuration file `000-default.conf`_
-
-The `000-default.conf` file is another Apache configuration file, in which you set up all the necessary parameters to make the Nextcloud site available and SSL-enabled.
-
-1. Create `000-default.conf` in the `configs` path.
-
-   ~~~bash
-   $ touch $HOME/k8sprjs/nextcloud/components/server-nextcloud/configs/000-default.conf
-   ~~~
-
-2. Fill `000-default.conf` with the following Apache configuration.
-
-   ~~~apache
-   MinSpareServers 4
-   MaxSpareServers 16
-   StartServers 10
-   MaxConnectionsPerChild 2048
-
-   LoadModule socache_shmcb_module /usr/lib/apache2/modules/mod_socache_shmcb.so
-   SSLSessionCache shmcb:/var/tmp/apache_ssl_scache(512000)
-
-   <VirtualHost *:443>
-     Protocols http/1.1
-     ServerAdmin root@deimos.cloud
-     ServerName nextcloud.deimos.cloud
-     ServerAlias nxc.deimos.cloud
-
-     Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains"
-
-     DocumentRoot /var/www/html
-     DirectoryIndex index.php
-
-     LoadModule ssl_module /usr/lib/apache2/modules/mod_ssl.so
-     SSLEngine on
-     SSLCertificateFile /etc/ssl/certs/wildcard.deimos.cloud-tls.crt
-     SSLCertificateKeyFile /etc/ssl/certs/wildcard.deimos.cloud-tls.key
-
-     <Directory /var/www/html/>
-       Options FollowSymlinks MultiViews
-       AllowOverride All
-       Require all granted
-
-       <IfModule mod_dav.c>
-         Dav off
-       </IfModule>
-
-       SetEnv HOME /var/www/html
-       SetEnv HTTP_HOME /var/www/html
-       Satisfy Any
-     </Directory>
-
-     ErrorLog ${APACHE_LOG_DIR}/error.log
-     CustomLog ${APACHE_LOG_DIR}/access.log combined
-   </VirtualHost>
-   ~~~
-
-   The configuration above is a modified version of [one found in the official Nextcloud documentation](https://docs.nextcloud.com/server/latest/admin_manual/installation/source_installation.html#apache-web-server-configuration).
-
-   - The first thing you must be aware of is that the Apache instance you're going to run uses the default MPM (Multi-Processing Module) prefork module. This influences what parameters you have available and how they affect the configuration of the processes spawned by Apache.
-     - The parameters at the top of the configuration adjust the number of Apache processes running in parallel in your setup. The `MaxConnectionsPerChild` one indicates how many connections each process can handle before being respawned. To know more about these parameters, check [the official Apache MPM prefork documentation](https://httpd.apache.org/docs/2.4/mod/prefork.html).
-
-   - For Apache to be able to maintain a cache of SSL sessions, it needs to load (with the `LoadModule` directive) the `socache_shmcb_module`, since it doesn't come loaded by default. As you might suppose, the path to the `mod_socache_shmcb.so` library exists only within the container.
-
-   - Notice how the `VirtualHost` block is configured to listen for incoming requests on port `443`.
-
-   - Since the SSL module, named `ssl_module`, doesn't come loaded by default in Apache either, you also need to load it with its corresponding `LoadModule` directive.
-
-   - Below the SSL load module directive, you can see how the SSL engine is enabled, and that the certificate is expected to be found in the `/etc/ssl/certs` path within the container.
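-
-Apache is rather picky about its directives, so it's worth knowing that you can syntax-check this configuration from within the running container itself. A possible check, once the whole platform is deployed, would be the sketch below; it assumes a hypothetical `nextcloud` namespace and the `server-apache-nextcloud-0` pod name that the StatefulSet defined later in this guide would produce, and it should answer with `Syntax OK`.
-
-~~~bash
-# apache2ctl is Debian's wrapper around Apache's control interface; -t only tests the configuration syntax.
-$ kubectl -n nextcloud exec -it server-apache-nextcloud-0 -c server -- apache2ctl -t
-~~~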
-
-### _Properties file `params.properties`_
-
-There are a few values you want to load as parameters in your Kustomize project.
-
-1. Create the file `params.properties` under the `configs` path.
-
-   ~~~bash
-   $ touch $HOME/k8sprjs/nextcloud/components/server-nextcloud/configs/params.properties
-   ~~~
-
-2. Set the parameters below in `params.properties`.
-
-   ~~~properties
-   nextcloud-admin-username=admin
-   nextcloud-trusted-domains=192.168.1.42 nextcloud.deimos.cloud nxc.deimos.cloud
-   cache-redis-svc-cluster-ip=10.43.100.1
-   db-mariadb-svc-cluster-ip=10.43.100.2
-   ~~~
-
-   The key-value pairs above mean the following.
-
-   - `nextcloud-admin-username`: the name given to the administrator user of your Nextcloud server.
-
-   - `nextcloud-trusted-domains`: a security restriction which tells the Nextcloud server from which IPs or domains users can log into the platform. In other words, these are the only IPs or domains Nextcloud will be accessible from. In this case, I've included the IP that will be assigned by the MetalLB load balancer to the corresponding service resource, and a couple of domains which should be enabled in the network or router's DNS, or in each client's `hosts` file.
-
-   - `cache-redis-svc-cluster-ip` and `db-mariadb-svc-cluster-ip`: the internal cluster IPs of the Redis and MariaDB services. They're put here so it's easier to locate and change them if necessary.
-
-## Nextcloud server password
-
-You need to set one password for Nextcloud's administrator user as a Secret resource.
-
-1. Create a `nextcloud-admin.pwd` file in the `secrets` folder.
-
-   ~~~bash
-   $ touch $HOME/k8sprjs/nextcloud/components/server-nextcloud/secrets/nextcloud-admin.pwd
-   ~~~
-
-2. In the `nextcloud-admin.pwd` file, put just the string you want to use as the password.
-
-   ~~~properties
-   Yo4R_P4s5wORd-f0r_The.n3x7clouD_Adm1niS7r6tor.us3r
-   ~~~
-
-   Like in other cases, the string is just plain unencrypted text, so take care about who can access this file.
-
-## Nextcloud server storage
-
-Setting aside the database, which you already configured in the [previous part of this guide](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%203%20-%20MariaDB%20database%20server.md), you need two persistent volumes for Nextcloud.
-
-- One PV for storing the files (HTML files, essentially) of the Nextcloud server itself.
-- One PV to store the Nextcloud users' files.
-
-This means that you have to declare two different claim resources, one per PV.
-
-### _Claim for the server html files PV_
-
-1. Create a file named `html-server-nextcloud.persistentvolumeclaim.yaml` under `resources`.
-
-   ~~~bash
-   $ touch $HOME/k8sprjs/nextcloud/components/server-nextcloud/resources/html-server-nextcloud.persistentvolumeclaim.yaml
-   ~~~
-
-2. Copy the yaml below in `html-server-nextcloud.persistentvolumeclaim.yaml`.
-
-   ~~~yaml
-   apiVersion: v1
-   kind: PersistentVolumeClaim
-
-   metadata:
-     name: html-server-nextcloud
-   spec:
-     accessModes:
-     - ReadWriteOnce
-     storageClassName: local-path
-     volumeName: html-nextcloud
-     resources:
-       requests:
-         storage: 1.2G
-   ~~~
-
-   If you compare the yaml above with the sole PVC you created for MariaDB, you'll see that the parameters used are exactly the same except for some values (names and storage requested).
-
-### _Claim for the users data PV_
-
-1. Create a `data-server-nextcloud.persistentvolumeclaim.yaml` file under `resources`.
-
-   ~~~bash
-   $ touch $HOME/k8sprjs/nextcloud/components/server-nextcloud/resources/data-server-nextcloud.persistentvolumeclaim.yaml
-   ~~~
-
-2. Fill `data-server-nextcloud.persistentvolumeclaim.yaml` with the next declaration.
- - ~~~yaml - apiVersion: v1 - kind: PersistentVolumeClaim - - metadata: - name: data-server-nextcloud - spec: - accessModes: - - ReadWriteOnce - storageClassName: local-path - volumeName: data-nextcloud - resources: - requests: - storage: 9.3G - ~~~ - -## Nextcloud server Stateful resource - -Since the Nextcloud server stores data, it's more adequate to deploy it as a `StatefulSet` rather than a `Deployment` resource. - -1. Create a `server-apache-nextcloud.statefulset.yaml` file under the `resources` path. - - ~~~bash - $ touch $HOME/k8sprjs/nextcloud/components/server-nextcloud/resources/server-apache-nextcloud.statefulset.yaml - ~~~ - -2. Put the yaml declaration below in `server-apache-nextcloud.statefulset.yaml`. - - ~~~yaml - apiVersion: apps/v1 - kind: StatefulSet - - metadata: - name: server-apache-nextcloud - spec: - replicas: 1 - serviceName: server-apache-nextcloud - template: - spec: - containers: - - name: server - image: nextcloud:22.2-apache - ports: - - containerPort: 443 - env: - - name: NEXTCLOUD_ADMIN_USER - valueFrom: - configMapKeyRef: - name: server-apache-nextcloud - key: nextcloud-admin-username - - name: NEXTCLOUD_ADMIN_PASSWORD - valueFrom: - secretKeyRef: - name: server-nextcloud - key: nextcloud-admin-password - - name: NEXTCLOUD_TRUSTED_DOMAINS - valueFrom: - configMapKeyRef: - name: server-apache-nextcloud - key: nextcloud-trusted-domains - - name: MYSQL_HOST - valueFrom: - configMapKeyRef: - name: server-apache-nextcloud - key: db-mariadb-svc-cluster-ip - - name: MYSQL_DATABASE - valueFrom: - configMapKeyRef: - name: db-mariadb - key: nextcloud-db-name - - name: MYSQL_USER - valueFrom: - configMapKeyRef: - name: db-mariadb - key: nextcloud-username - - name: MYSQL_PASSWORD - valueFrom: - secretKeyRef: - name: db-mariadb - key: nextcloud-user-password - - name: REDIS_HOST - valueFrom: - configMapKeyRef: - name: server-apache-nextcloud - key: cache-redis-svc-cluster-ip - - name: REDIS_HOST_PASSWORD - valueFrom: - secretKeyRef: - name: cache-redis - key: redis-password - - name: APACHE_ULIMIT_MAX_FILES - value: 'ulimit -n 65536' - lifecycle: - postStart: - exec: - command: - - "sh" - - "-c" - - | - chown www-data:www-data /var/www/html/data - apt-get update - apt-get install -y openrc - start-stop-daemon --start --background --pidfile /cron.pid --exec /cron.sh - resources: - limits: - memory: 512Mi - volumeMounts: - - name: certificate - subPath: wildcard.deimos.cloud-tls.crt - mountPath: /etc/ssl/certs/wildcard.deimos.cloud-tls.crt - - name: certificate - subPath: wildcard.deimos.cloud-tls.key - mountPath: /etc/ssl/certs/wildcard.deimos.cloud-tls.key - - name: apache-config - subPath: ports.conf - mountPath: /etc/apache2/ports.conf - - name: apache-config - subPath: 000-default.conf - mountPath: /etc/apache2/sites-available/000-default.conf - - name: html-storage - mountPath: /var/www/html - - name: data-storage - mountPath: /var/www/html/data - - name: metrics - image: xperimental/nextcloud-exporter:0.4.0-15-gbb88fb6 - ports: - - containerPort: 9205 - env: - - name: NEXTCLOUD_SERVER - value: "https://localhost" - - name: NEXTCLOUD_TLS_SKIP_VERIFY - value: "true" - - name: NEXTCLOUD_USERNAME - valueFrom: - configMapKeyRef: - name: server-apache-nextcloud - key: nextcloud-admin-username - - name: NEXTCLOUD_PASSWORD - valueFrom: - secretKeyRef: - name: server-nextcloud - key: nextcloud-admin-password - resources: - limits: - memory: 32Mi - volumes: - - name: apache-config - configMap: - name: server-apache-nextcloud - defaultMode: 0644 - items: - - key: 
ports.conf
-            path: ports.conf
-          - key: 000-default.conf
-            path: 000-default.conf
-        - name: certificate
-          secret:
-            secretName: wildcard.deimos.cloud-tls
-            defaultMode: 0444
-            items:
-            - key: tls.crt
-              path: wildcard.deimos.cloud-tls.crt
-            - key: tls.key
-              path: wildcard.deimos.cloud-tls.key
-        - name: html-storage
-          persistentVolumeClaim:
-            claimName: html-server-nextcloud
-        - name: data-storage
-          persistentVolumeClaim:
-            claimName: data-server-nextcloud
-   ~~~
-
-   Like with Redis and MariaDB, the pod configured in the `StatefulSet` above has two containers in a sidecar pattern.
-
-   - `server` container: executes the Nextcloud server with its associated Apache service.
-     - The `image` used here provides [the stable version of Nextcloud](https://hub.docker.com/_/nextcloud), running on a Debian setup.
-
-     - `env` section: notice that some values are taken from the secrets and config maps defined for the Redis and MariaDB pods.
-       - `NEXTCLOUD_ADMIN_USER` and `NEXTCLOUD_ADMIN_PASSWORD`: define the administrator user for the Nextcloud server, created when the server autoinstalls itself.
-       - `NEXTCLOUD_TRUSTED_DOMAINS`: where the list of trusted domains must be set.
-       - `MYSQL_HOST` and `MYSQL_DATABASE`: the IP of the MariaDB service and the name of the database instance Nextcloud has to use.
-       - `MYSQL_USER` and `MYSQL_PASSWORD`: the user Nextcloud has to use to connect to its own database on the MariaDB server.
-       - `REDIS_HOST` and `REDIS_HOST_PASSWORD`: the IP of the Redis service and the password required to authenticate with that server.
-       - `APACHE_ULIMIT_MAX_FILES`: increases the number of files Apache can open at the same time.
-
-     - `lifecycle.postStart.exec.command`: this defines a command meant to be executed right after the container has started. In this case, a number of them are required for Nextcloud to run properly.
-       - The `sh` command is the shell that will execute the commands.
-       - The `-c` line is an option for the `sh` command that makes it read and execute the following strings as commands.
-       - `|` is the yaml syntax indicating the start of a [literal scalar](https://yaml.org/spec/1.2.2/#23-scalars) that includes the following lines. Thanks to this feature, you can just put each command line one below another without concatenating them with `&&`.
-       - The `chown` command line ensures that the folder where Nextcloud stores all the user data is owned by `www-data`, which is the user that runs the Nextcloud service in the container.
-       - The `apt-get` commands are for installing the latest version of the `openrc` command in the container.
-       - The `start-stop-daemon` command starts a `cron` service that your Nextcloud server will use to run its background jobs.
-
-       > **BEWARE!**
-       > The `apt-get` and `start-stop-daemon` command lines only work with the Debian-based images of Nextcloud, not the Alpine ones. With Alpine images you have to replace those command lines with the following ones.
-       > ~~~bash
-       > apk add openrc
-       > start-stop-daemon --background /cron.sh
-       > ~~~
-       > As you can see, these command lines do essentially the same thing, but in a way fitting an Alpine system.
-
-     - `volumeMounts` section: mounts the certificate and the two Apache configuration files, and also the two storage volumes needed by this Nextcloud setup.
-       - `wildcard.deimos.cloud-tls.crt` and `wildcard.deimos.cloud-tls.key`: these two files set up the certificate, and both must be in the `/etc/ssl/certs` path, since it's the one set in the `000-default.conf` Apache file.
These files are found within the `Secret` resource associated to the `wildcard.deimos.cloud-tls` certificate you created in the [G029 guide](G029%20-%20K3s%20cluster%20setup%2012%20~%20Setting%20up%20cert-manager%20and%20wildcard%20certificate.md). - - `ports.conf` is mounted in the default path expected by Apache, `/etc/apache2/ports.conf`. - - `000-default.conf` is also put in a default Apache path, `/etc/apache2/sites-available/000-default.conf`. - - `/var/www/html`: the path where Nextcloud is installed. - - `/var/www/html/data`: where Nextcloud stores the users' data. This folder must be owned by the `www-data` user that exists within the container. - - - `metrics` container: runs the Prometheus metrics exporter of Nextcloud. - - The `image` for this Prometheus exporter is set to be always [the latest one available](https://hub.docker.com/r/xperimental/nextcloud-exporter), and probably runs on a Debian system but its not specified. - - - `env` section: - - `NEXTCLOUD_SERVER`: since this container runs on the same pod as the Nextcloud server, it's set as `localhost`. - - `NEXTCLOUD_TLS_SKIP_VERIFY`: since the certificate is self-signed and this service is running on the same pod, there's little need of checking the certificate when this service connects to Nextcloud. - - `NEXTCLOUD_USERNAME` and `NEXTCLOUD_PASSWORD`: here you'll be forced to use the same administrator user you defined for the Nextcloud server, since its the only one you have at this point. - - - `template.spec.volumes`: here you have enabled the two volumes prepared for Nextcloud, but also the configuration files for Apache, and the certificate files available in the `wildcard.deimos.cloud-tls` secret. Notice how the permission mode of these files are handled with the `defaultMode` parameter. - -## Nextcloud server Service resource - -Your Nextcloud server requires a Service resource named `server-apache-nextcloud` to offer its functionality. - -1. Create a `server-apache-nextcloud.service.yaml` file under `resources`. - - ~~~bash - $ touch $HOME/k8sprjs/nextcloud/components/server-nextcloud/resources/server-apache-nextcloud.service.yaml - ~~~ - -2. Copy in `server-apache-nextcloud.service.yaml` the `Service` declaration next. - - ~~~yaml - apiVersion: v1 - kind: Service - - metadata: - annotations: - prometheus.io/scrape: "true" - prometheus.io/port: "9205" - name: server-apache-nextcloud - spec: - type: LoadBalancer - clusterIP: 10.43.100.3 - loadBalancerIP: 192.168.1.42 - ports: - - port: 443 - protocol: TCP - name: server - - port: 9205 - protocol: TCP - name: metrics - ~~~ - - This `Service` resource is mostly like the others you've seen before, but with the following differences. - - - This Service's `type` is `LoadBalancer`, meaning that it'll take the next available IP from the MetalLB pool to use as external public IP in your network. But you can also set manually an IP, picked from the ones available in the load balancer's pool, with the `loadBalancerIP` parameter set below. - - - A `LoadBalancer` type service can also have an internal `clusterIP`, and you can see how I've chosen to set it with an IP that follows the ones assigned to the Redis and MariaDB services. You could leave this value unassigned and allow Kubernetes to give it any internal cluster IP, but consider that having a known static IP can be an advantage on certain situations. - - - With the `loadBalancerIP` parameter you can tell a `LoadBalancer` type service what IP you want it to get from your cluster's load balancer (MetalLB in this case). 
In the yaml above, `192.168.1.42` is the next one after the IP already assigned to the Traefik service (`192.168.1.41`). By setting this manually, you ensure that the service has the same IP as the one specified in the `nextcloud-trusted-domains` parameter declared in the `params.properties` file.
-
-## Nextcloud server Kustomize project
-
-With all the necessary elements for your Nextcloud server component declared in their respective files, you can put them together as a Kustomize project.
-
-1. Create a `kustomization.yaml` file in the `server-nextcloud` folder.
-
-    ~~~bash
-    $ touch $HOME/k8sprjs/nextcloud/components/server-nextcloud/kustomization.yaml
-    ~~~
-
-2. Fill `kustomization.yaml` with the following yaml.
-
-    ~~~yaml
-    # Nextcloud server setup
-    apiVersion: kustomize.config.k8s.io/v1beta1
-    kind: Kustomization
-
-    commonLabels:
-      app: server-nextcloud
-
-    resources:
-    - resources/data-server-nextcloud.persistentvolumeclaim.yaml
-    - resources/html-server-nextcloud.persistentvolumeclaim.yaml
-    - resources/server-apache-nextcloud.service.yaml
-    - resources/server-apache-nextcloud.statefulset.yaml
-
-    replicas:
-    - name: server-apache-nextcloud
-      count: 1
-
-    images:
-    - name: nextcloud
-      newTag: 22.2-apache
-    - name: xperimental/nextcloud-exporter
-      newTag: 0.4.0-15-gbb88fb6
-
-    configMapGenerator:
-    - name: server-apache-nextcloud
-      envs:
-      - configs/params.properties
-      files:
-      - configs/000-default.conf
-      - configs/ports.conf
-
-    secretGenerator:
-    - name: server-nextcloud
-      files:
-      - nextcloud-admin-password=secrets/nextcloud-admin.pwd
-    ~~~
-
-    This `kustomization.yaml`, compared to the ones you've already set up for the Redis and MariaDB cases, doesn't have anything in particular to highlight. All the parameters specified there should be familiar to you at this point.
-
-### _Validating the Kustomize yaml output_
-
-As with the other components, you should check the output generated by this Kustomize project.
-
-1. Generate the yaml with `kubectl kustomize` as usual.
-
-    ~~~bash
-    $ kubectl kustomize $HOME/k8sprjs/nextcloud/components/server-nextcloud | less
-    ~~~
-
-2. See if your yaml output looks like the one below.
-
-    ~~~yaml
-    apiVersion: v1
-    data:
-      000-default.conf: |
-        MinSpareServers 4
-        MaxSpareServers 16
-        StartServers 10
-        MaxConnectionsPerChild 2048
-
-        LoadModule socache_shmcb_module /usr/lib/apache2/modules/mod_socache_shmcb.so
-        SSLSessionCache shmcb:/var/tmp/apache_ssl_scache(512000)
-
-        <VirtualHost _default_:443>
-          Protocols http/1.1
-          ServerAdmin root@deimos.cloud
-          ServerName nextcloud.deimos.cloud
-          ServerAlias nxc.deimos.cloud
-
-          Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains"
-
-          DocumentRoot /var/www/html
-          DirectoryIndex index.php
-
-          LoadModule ssl_module /usr/lib/apache2/modules/mod_ssl.so
-          SSLEngine on
-          SSLCertificateFile /etc/ssl/certs/wildcard.deimos.cloud-tls.crt
-          SSLCertificateKeyFile /etc/ssl/certs/wildcard.deimos.cloud-tls.key
-
-          <Directory /var/www/html/>
-            Options FollowSymlinks MultiViews
-            AllowOverride All
-            Require all granted
-
-            <IfModule mod_dav.c>
-              Dav off
-            </IfModule>
-
-            SetEnv HOME /var/www/html
-            SetEnv HTTP_HOME /var/www/html
-            Satisfy Any
-
-          </Directory>
-
-          ErrorLog ${APACHE_LOG_DIR}/error.log
-          CustomLog ${APACHE_LOG_DIR}/access.log combined
-        </VirtualHost>
-      cache-redis-svc-cluster-ip: 10.43.100.1
-      db-mariadb-svc-cluster-ip: 10.43.100.2
-      nextcloud-admin-username: admin
-      nextcloud-trusted-domains: 192.168.1.42 nextcloud.deimos.cloud nxc.deimos.cloud
-      ports.conf: |
-        Listen 443
-
-        # vim: syntax=apache ts=4 sw=4 sts=4 sr noet
-    kind: ConfigMap
-    metadata:
-      labels:
-        app: server-nextcloud
-      name: server-apache-nextcloud-fcckh8bk2d
-    ---
-    apiVersion: v1
-    data:
-      nextcloud-admin-password: |
-        OXE0OHVvbmJvaXU0ODkwdW9paG5nw6x1eTM0ODkwOTIzdWttbmFwamTEusOxamdwYmFpdTM5MHVp
-        b3UzOTAzMnVpMDl1bmdhb3BpamRkYcOxejM5a2zDkXFla2oK
-    kind: Secret
-    metadata:
-      labels:
-        app: server-nextcloud
-      name: server-nextcloud-mmd5t7577c
-    type: Opaque
-    ---
-    apiVersion: v1
-    kind: Service
-    metadata:
-      annotations:
-        prometheus.io/port: "9205"
-        prometheus.io/scrape: "true"
-      labels:
-        app: server-nextcloud
-      name: server-apache-nextcloud
-    spec:
-      clusterIP: 10.43.100.3
-      loadBalancerIP: 192.168.1.42
-      ports:
-      - name: server
-        port: 443
-        protocol: TCP
-      - name: metrics
-        port: 9205
-        protocol: TCP
-      selector:
-        app: server-nextcloud
-      type: LoadBalancer
-    ---
-    apiVersion: v1
-    kind: PersistentVolumeClaim
-    metadata:
-      labels:
-        app: server-nextcloud
-      name: data-server-nextcloud
-    spec:
-      accessModes:
-      - ReadWriteOnce
-      resources:
-        requests:
-          storage: 9.3G
-      storageClassName: local-path
-      volumeName: data-nextcloud
-    ---
-    apiVersion: v1
-    kind: PersistentVolumeClaim
-    metadata:
-      labels:
-        app: server-nextcloud
-      name: html-server-nextcloud
-    spec:
-      accessModes:
-      - ReadWriteOnce
-      resources:
-        requests:
-          storage: 1.2G
-      storageClassName: local-path
-      volumeName: html-nextcloud
-    ---
-    apiVersion: apps/v1
-    kind: StatefulSet
-    metadata:
-      labels:
-        app: server-nextcloud
-      name: server-apache-nextcloud
-    spec:
-      replicas: 1
-      selector:
-        matchLabels:
-          app: server-nextcloud
-      serviceName: server-apache-nextcloud
-      template:
-        metadata:
-          labels:
-            app: server-nextcloud
-        spec:
-          containers:
-          - env:
-            - name: NEXTCLOUD_ADMIN_USER
-              valueFrom:
-                configMapKeyRef:
-                  key: nextcloud-admin-username
-                  name: server-apache-nextcloud-fcckh8bk2d
-            - name: NEXTCLOUD_ADMIN_PASSWORD
-              valueFrom:
-                secretKeyRef:
-                  key: nextcloud-admin-password
-                  name: server-nextcloud-mmd5t7577c
-            - name: NEXTCLOUD_TRUSTED_DOMAINS
-              valueFrom:
-                configMapKeyRef:
-                  key: nextcloud-trusted-domains
-                  name: server-apache-nextcloud-fcckh8bk2d
-            - name: MYSQL_HOST
-              valueFrom:
-                configMapKeyRef:
-                  key: db-mariadb-svc-cluster-ip
-                  name: server-apache-nextcloud-fcckh8bk2d
-            - name: MYSQL_DATABASE
-              valueFrom:
-                configMapKeyRef:
-                  key: nextcloud-db-name
-                  name: db-mariadb
-            - name: MYSQL_USER
-              valueFrom:
-                configMapKeyRef:
-                  key: nextcloud-username
-                  name: db-mariadb
-            - name: MYSQL_PASSWORD
-              valueFrom:
-                secretKeyRef:
-                  key: nextcloud-user-password
-                  name: db-mariadb
-            - name: REDIS_HOST
-              valueFrom:
-                configMapKeyRef:
-                  key: cache-redis-svc-cluster-ip
-                  name: server-apache-nextcloud-fcckh8bk2d
-            - name: REDIS_HOST_PASSWORD
-              valueFrom:
-                secretKeyRef:
-                  key: redis-password
-                  name: cache-redis
-            - name: APACHE_ULIMIT_MAX_FILES
-              value: ulimit -n 65536
-            image: nextcloud:22.2-apache
-            lifecycle:
-              postStart:
-                exec:
-                  command:
-                  - sh
-                  - -c
-                  - |
-                    chown www-data:www-data /var/www/html/data
-                    apt-get update
-                    apt-get install -y openrc
-                    start-stop-daemon --start --background --pidfile /cron.pid --exec /cron.sh
-            name: server
-            ports:
-            - containerPort: 443
-            resources:
-              limits:
-                memory: 512Mi
-            volumeMounts:
-            - mountPath: /etc/ssl/certs/wildcard.deimos.cloud-tls.crt
-              name: certificate
-              subPath: wildcard.deimos.cloud-tls.crt
-            - mountPath: /etc/ssl/certs/wildcard.deimos.cloud-tls.key
-              name: certificate
-              subPath: wildcard.deimos.cloud-tls.key
-            - mountPath: /etc/apache2/ports.conf
-              name: apache-config
-              subPath: ports.conf
-            - mountPath: /etc/apache2/sites-available/000-default.conf
-              name: apache-config
-              subPath: 000-default.conf
-            - mountPath: /var/www/html
-              name: html-storage
-            - mountPath: /var/www/html/data
-              name: data-storage
-          - env:
-            - name: NEXTCLOUD_SERVER
-              value: https://localhost
-            - name: NEXTCLOUD_TLS_SKIP_VERIFY
-              value: "true"
-            - name: NEXTCLOUD_USERNAME
-              valueFrom:
-                configMapKeyRef:
-                  key: nextcloud-admin-username
-                  name: server-apache-nextcloud-fcckh8bk2d
-            - name: NEXTCLOUD_PASSWORD
-              valueFrom:
-                secretKeyRef:
-                  key: nextcloud-admin-password
-                  name: server-nextcloud-mmd5t7577c
-            image: xperimental/nextcloud-exporter:0.4.0-15-gbb88fb6
-            name: metrics
-            ports:
-            - containerPort: 9205
-            resources:
-              limits:
-                memory: 32Mi
-          volumes:
-          - configMap:
-              defaultMode: 420
-              items:
-              - key: ports.conf
-                path: ports.conf
-              - key: 000-default.conf
-                path: 000-default.conf
-              name: server-apache-nextcloud-fcckh8bk2d
-            name: apache-config
-          - name: certificate
-            secret:
-              defaultMode: 292
-              items:
-              - key: tls.crt
-                path: wildcard.deimos.cloud-tls.crt
-              - key: tls.key
-                path: wildcard.deimos.cloud-tls.key
-              secretName: wildcard.deimos.cloud-tls
-          - name: html-storage
-            persistentVolumeClaim:
-              claimName: html-server-nextcloud
-          - name: data-storage
-            persistentVolumeClaim:
-              claimName: data-server-nextcloud
-    ~~~
-
-    There are a few particular but important things you must notice in this yaml:
-
-    - As expected, the `ConfigMap` and `Secret` resources you've defined in this Kustomize project have their names appended with a hash suffix.
-
-    - On the other hand, the names of the config map `db-mariadb` and the secrets `db-mariadb` and `cache-redis` remain unchanged, because they haven't been defined in this particular Kustomize project: they're external resources just referenced here. But don't worry, this is also expected, and it will be fixed in the final part of this guide.
-
-    - Remember that Kustomize transforms the values you set in the `defaultMode` parameters of the files you configure as `volumes`. You already saw this happening while preparing the Redis Kustomize project.
-
-## Don't deploy this Nextcloud server project on its own
-
-This Nextcloud server cannot be deployed on its own because it's missing several things.
-
-- The two persistent volumes it needs to store its server files and the users' data.
-
-- It won't find the external resources it needs to run, like the config maps or secrets of the other components. Their names won't match the ones this Kustomize project has, and the certificate's secret isn't replicated in the right namespace either.
-
-So, again, I must ask you to wait for the upcoming final part of this Nextcloud guide, where you'll add the missing parts, tie everything together and deploy the whole setup in one go.
-
-## Background jobs on Nextcloud
-
-There's a particular thing I've left out of this guide, and that is the configuration of a cron job for the background tasks Nextcloud needs to run regularly. The problem is that I haven't found a proper or convincing setup to apply in this guide, so I'll leave you here a bunch of references about this matter.
-
-- First, check the official documentation about the [Nextcloud's background jobs configuration](https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/background_jobs_configuration.html).
-
-- By default, Nextcloud uses a method that relies on user interaction with the platform to launch background jobs. This method is referred to as [the AJAX method in the official documentation](https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/background_jobs_configuration.html#ajax) and, officially at least, is not considered a reliable way of launching the required background tasks.
-
-- There's a Docker image that you could use as an extra sidecarized container in the Nextcloud pod setup of this guide. This image is offered as the [Nextcloud Cron Job Docker Container](https://hub.docker.com/r/rcdailey/nextcloud-cronjob).
-
-- Be aware that there's a type of resource in Kubernetes particularly designed to run automated tasks periodically: [the CronJob](https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/). You have a rough sketch of this approach right after this list.
-
-- In [this thread on the official Nextcloud's help forum](https://help.nextcloud.com/t/verifying-where-to-add-cron-jobs-docker-compose-cron/104110) and in [this issue thread on Nextcloud's GitHub site](https://github.com/nextcloud/helm/issues/55) you'll see some interesting comments about Nextcloud's cron job.
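-
-Just to illustrate the CronJob idea, here you have a minimal sketch, **not tested in this guide**, of how such a resource might look for this setup. It assumes the `nextcloud:22.2-apache` image and the claim names declared in this component (`html-server-nextcloud` and `data-server-nextcloud`), and a five-minute schedule. Also mind that, since those claims are `ReadWriteOnce`, the job's pod would have to land on the same node as the Nextcloud server's pod, and that the names would need the prefix and namespace applied in the final part of this guide.
-
-~~~yaml
-apiVersion: batch/v1
-kind: CronJob
-
-metadata:
-  name: cron-server-nextcloud
-spec:
-  schedule: "*/5 * * * *"
-  # Avoids launching a new run while a previous one is still going.
-  concurrencyPolicy: Forbid
-  jobTemplate:
-    spec:
-      template:
-        spec:
-          restartPolicy: Never
-          containers:
-          - name: cron
-            image: nextcloud:22.2-apache
-            # Runs Nextcloud's background jobs script once, as the www-data user.
-            command:
-            - su
-            - -p
-            - -s
-            - /bin/sh
-            - -c
-            - php -f /var/www/html/cron.php
-            - www-data
-            volumeMounts:
-            - mountPath: /var/www/html
-              name: html-storage
-            - mountPath: /var/www/html/data
-              name: data-storage
-          volumes:
-          # The same claims already declared for the Nextcloud server component.
-          - name: html-storage
-            persistentVolumeClaim:
-              claimName: html-server-nextcloud
-          - name: data-storage
-            persistentVolumeClaim:
-              claimName: data-server-nextcloud
-~~~
-
-Also remember that Nextcloud itself would have to be switched to the cron method (in its basic settings page, or with the `occ background:cron` command) for a setup like this to be of any use.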
-
-## Relevant system paths
-
-### _Folders in `kubectl` client system_
-
-- `$HOME/k8sprjs/nextcloud/components/server-nextcloud`
-- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/configs`
-- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/resources`
-- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/secrets`
-
-### _Files in `kubectl` client system_
-
-- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/kustomization.yaml`
-- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/configs/000-default.conf`
-- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/configs/params.properties`
-- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/configs/ports.conf`
-- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/resources/data-server-nextcloud.persistentvolumeclaim.yaml`
-- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/resources/html-server-nextcloud.persistentvolumeclaim.yaml`
-- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/resources/server-apache-nextcloud.service.yaml`
-- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/resources/server-apache-nextcloud.statefulset.yaml`
-- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/secrets/nextcloud-admin.pwd`
-
-## References
-
-### _Kubernetes_
-
-#### **ConfigMaps and secrets**
-
-- [Official Doc - ConfigMaps](https://kubernetes.io/docs/concepts/configuration/configmap/)
-- [Official Doc - Configure a Pod to Use a ConfigMap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/)
-- [An Introduction to Kubernetes Secrets and ConfigMaps](https://opensource.com/article/19/6/introduction-kubernetes-secrets-and-configmaps)
-- [Kubernetes - Using ConfigMap SubPaths to Mount Files](https://dev.to/joshduffney/kubernetes-using-configmap-subpaths-to-mount-files-3a1i)
-- [Kubernetes Secrets | Declare confidential data with examples](https://www.golinuxcloud.com/kubernetes-secrets/)
-- [Kubernetes ConfigMaps and Secrets](https://shravan-kuchkula.github.io/kubernetes/configmaps-secrets/)
-- [Import data to config map from kubernetes secret](https://stackoverflow.com/questions/50452665/import-data-to-config-map-from-kubernetes-secret)
-
-#### **Storage**
-
-- [Official Doc - Local Storage Volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local)
-- [Official Doc - Local Persistent Volumes for Kubernetes Goes Beta](https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/)
-- [Official Doc - Reserving a PersistentVolume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reserving-a-persistentvolume)
-- [Official Doc - Local StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/#local)
-- [Official Doc - Reclaiming Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming)
-- [Kubernetes API - PersistentVolume](https://kubernetes.io/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-v1/)
-- [Kubernetes API - PersistentVolumeClaim](https://kubernetes.io/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-claim-v1/)
-- [Kubernetes Persistent Volumes, Claims, Storage Classes, and More](https://cloud.netapp.com/blog/kubernetes-persistent-storage-why-where-and-how)
-- [Rancher K3s - Setting up the Local Storage Provider](https://rancher.com/docs/k3s/latest/en/storage/)
-- [K3s local path provisioner on GitHub](https://github.com/rancher/local-path-provisioner)
-- [Using "local-path" in persistent volume requires sudo to edit files on host node?](https://github.com/k3s-io/k3s/issues/1823)
-- [Kubernetes size definitions: What's the difference of "Gi" and "G"?](https://stackoverflow.com/questions/50804915/kubernetes-size-definitions-whats-the-difference-of-gi-and-g)
-- [distinguish unset and empty values for storageClassName](https://github.com/helm/helm/issues/2600)
-- [Kubernetes Mounting Volumes in Pods. Mount Path Ownership and Permissions](https://kb.novaordis.com/index.php/Kubernetes_Mounting_Volumes_in_Pods#Mount_Path_Ownership_and_Permissions)
-
-#### **StatefulSets**
-
-- [Official Doc](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/)
-
-#### **Environment variables**
-
-- [Official Doc - Define Environment Variables for a Container](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/)
-- [Official Doc - Define Dependent Environment Variables](https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/)
-
-#### **Executing multiple commands in lifecycle (postStart/preStop) of containers**
-
-- [Kubernetes - Passing multiple commands to the container](https://stackoverflow.com/questions/33979501/kubernetes-passing-multiple-commands-to-the-container)
-- [multiple command in postStart hook of a container](https://stackoverflow.com/questions/39436845/multiple-command-in-poststart-hook-of-a-container)
-
-#### **Cronjob**
-
-- [Running Automated Tasks with a CronJob](https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/)
-- [CronJob](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/)
-- [Jobs](https://kubernetes.io/docs/concepts/workloads/controllers/job/)
-
-### _Nextcloud_
-
-- [Official Docker build of Nextcloud](https://hub.docker.com/_/nextcloud)
-- [Official Docker build of Nextcloud on GitHub](https://github.com/nextcloud/docker)
-- [Prometheus exporter for getting some metrics of a Nextcloud server instance](https://hub.docker.com/r/xperimental/nextcloud-exporter)
-- [Installation and server configuration](https://docs.nextcloud.com/server/stable/admin_manual/installation/index.html)
-- [Installation on Linux](https://docs.nextcloud.com/server/stable/admin_manual/installation/source_installation.html#installation-on-linux)
-- [Nextcloud on Kubernetes, by modzilla99](https://github.com/modzilla99/kubernetes-nextcloud)
-- [Nextcloud on Kubernetes, by andremotz](https://github.com/andremotz/nextcloud-kubernetes)
-- [Deploy Nextcloud - Yaml with application and database container and configure a secret](https://www.debontonline.com/2021/05/part-15-deploy-nextcloud-yaml-with.html)
-- [Self-Host Nextcloud Using Kubernetes](https://blog.true-kubernetes.com/self-host-nextcloud-using-kubernetes/)
-- [Deploying NextCloud on Kubernetes with Kustomize](https://medium.com/@acheaito/nextcloud-on-kubernetes-19658785b565)
-- [A NextCloud Kubernetes deployment](https://github.com/acheaito/nextcloud-kubernetes)
-- [Nextcloud self-hosting on K8s](https://eramons.github.io/techblog/post/nextcloud/)
-- [Nextcloud self-hosting on K8s on GitHub](https://github.com/eramons/kubenextcloud)
-- [Installing Nextcloud on Ubuntu with Redis, APCu, SSL & Apache](https://bayton.org/docs/nextcloud/installing-nextcloud-on-ubuntu-16-04-lts-with-redis-apcu-ssl-apache/)
-- [Setup NextCloud Server with Nginx SSL Reverse-Proxy and Apache2 Backend](https://breuer.dev/tutorial/Setup-NextCloud-FrontEnd-Nginx-SSL-Backend-Apache2)
-- [Install Nextcloud with Apache2 on Debian 10](https://vectops.com/2021/01/install-nextcloud-with-apache2-on-debian-10/)
-- [Nextcloud scale-out using Kubernetes](https://faun.pub/nextcloud-scale-out-using-kubernetes-93c9cac9e493)
-- [How to tune Nextcloud on-premise cloud server for better performance](https://www.techrepublic.com/article/how-to-tune-nextcloud-on-premise-cloud-server-for-better-performance/)
-- [Nextcloud's background jobs configuration](https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/background_jobs_configuration.html)
-- [Nextcloud Cron Job Docker Container](https://hub.docker.com/r/rcdailey/nextcloud-cronjob)
-- [Verifying where to add cron jobs docker-compose + cron](https://help.nextcloud.com/t/verifying-where-to-add-cron-jobs-docker-compose-cron/104110)
-- [System cron instead of webcron?](https://github.com/nextcloud/helm/issues/55#issuecomment-1126289717)
-
-### _Apache httpd_
-
-- [Apache HTTP Server Documentation](https://httpd.apache.org/docs/)
-- [Apache MPM prefork](https://httpd.apache.org/docs/2.4/mod/prefork.html)
-- [Apache Module mod_ssl](http://httpd.apache.org/docs/current/mod/mod_ssl.html)
-- [Kubernetes: Cannot deploy flask web app with apache and https](https://stackoverflow.com/questions/38043415/kubernetes-cannot-deploy-flask-web-app-with-apache-and-https)
-- [SSLSessionCache](https://cwiki.apache.org/confluence/display/httpd/SSLSessionCache)
-- [XAMPP - Session Cache is not configured [hint: SSLSessionCache]](https://stackoverflow.com/questions/16644064/xampp-session-cache-is-not-configured-hint-sslsessioncache)
-- [SSLSessionCache: ‘shmcb’ session cache not supported](https://bobcares.com/blog/sslsessioncache-shmcb-session-cache-not-supported/)
-
-### _YAML_
-
-- [Scalars](https://yaml.org/spec/1.2.2/#23-scalars)
-
-## Navigation
-
-[<< Previous (**G033. Deploying services 02. Nextcloud Part 3**)](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%203%20-%20MariaDB%20database%20server.md) | [+Table Of Contents+](G000%20-%20Table%20Of%20Contents.md) | [Next (**G033. Deploying services 02. Nextcloud Part 5**) >>](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%205%20-%20Complete%20Nextcloud%20platform.md)
diff --git a/G033 - Deploying services 02 ~ Nextcloud - Part 5 - Complete Nextcloud platform.md b/G033 - Deploying services 02 ~ Nextcloud - Part 5 - Complete Nextcloud platform.md
deleted file mode 100644
index b265f9b..0000000
--- a/G033 - Deploying services 02 ~ Nextcloud - Part 5 - Complete Nextcloud platform.md
+++ /dev/null
@@ -1,1354 +0,0 @@
-# G033 - Deploying services 02 ~ Nextcloud - Part 5 - Complete Nextcloud platform
-
-In this final part of the Nextcloud platform guide, you'll declare any missing elements needed in your Nextcloud platform, put everything together and deploy it all at once in your K3s Kubernetes cluster. Also, I'll show you how to check out the platform and its containers.
-
-## Preparing pending Nextcloud platform elements
-
-You've prepared the main components that make up the Nextcloud platform, but there are still a few elements you need to define: the three persistent volumes that correspond to all the claims you've defined for MariaDB and Nextcloud server, a namespace to group all the components together in your Kubernetes cluster, and the secret from the wildcard certificate you created back in the [**G029** guide about cert-manager](G029%20-%20K3s%20cluster%20setup%2012%20~%20Setting%20up%20cert-manager%20and%20wildcard%20certificate.md#setting-up-a-wildcard-certificate-for-a-domain).
The volumes and the namespace are Kubernetes resources, so prepare a folder to hold their yaml definitions in the root folder of your Nextcloud project.
-
-~~~bash
-$ mkdir -p $HOME/k8sprjs/nextcloud/resources
-~~~
-
-### _Persistent volumes_
-
-The persistent volumes are the way to use, in your K3s cluster, the LVM storage volumes you arranged in [the first part of this guide](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%201%20-%20Outlining%20setup,%20arranging%20storage%20and%20choosing%20service%20IPs.md).
-
-1. You need to create three different persistent volumes, so prepare a new file for each of them.
-
-    ~~~bash
-    $ touch $HOME/k8sprjs/nextcloud/resources/{data-nextcloud,db-nextcloud,html-nextcloud}.persistentvolume.yaml
-    ~~~
-
-2. Copy the following yaml definitions in their corresponding files.
-
-    - In `data-nextcloud.persistentvolume.yaml`.
-
-      ~~~yaml
-      apiVersion: v1
-      kind: PersistentVolume
-
-      metadata:
-        name: data-nextcloud
-      spec:
-        capacity:
-          storage: 9.3G
-        volumeMode: Filesystem
-        accessModes:
-        - ReadWriteOnce
-        storageClassName: local-path
-        persistentVolumeReclaimPolicy: Retain
-        local:
-          path: /mnt/nextcloud-hdd/data/k3smnt
-        nodeAffinity:
-          required:
-            nodeSelectorTerms:
-            - matchExpressions:
-              - key: kubernetes.io/hostname
-                operator: In
-                values:
-                - k3sagent02
-      ~~~
-
-    - In `db-nextcloud.persistentvolume.yaml`.
-
-      ~~~yaml
-      apiVersion: v1
-      kind: PersistentVolume
-
-      metadata:
-        name: db-nextcloud
-      spec:
-        capacity:
-          storage: 3.5G
-        volumeMode: Filesystem
-        accessModes:
-        - ReadWriteOnce
-        storageClassName: local-path
-        persistentVolumeReclaimPolicy: Retain
-        local:
-          path: /mnt/nextcloud-ssd/db/k3smnt
-        nodeAffinity:
-          required:
-            nodeSelectorTerms:
-            - matchExpressions:
-              - key: kubernetes.io/hostname
-                operator: In
-                values:
-                - k3sagent02
-      ~~~
-
-    - In `html-nextcloud.persistentvolume.yaml`.
-
-      ~~~yaml
-      apiVersion: v1
-      kind: PersistentVolume
-
-      metadata:
-        name: html-nextcloud
-      spec:
-        capacity:
-          storage: 1.2G
-        volumeMode: Filesystem
-        accessModes:
-        - ReadWriteOnce
-        storageClassName: local-path
-        persistentVolumeReclaimPolicy: Retain
-        local:
-          path: /mnt/nextcloud-ssd/html/k3smnt
-        nodeAffinity:
-          required:
-            nodeSelectorTerms:
-            - matchExpressions:
-              - key: kubernetes.io/hostname
-                operator: In
-                values:
-                - k3sagent02
-      ~~~
-
-    The yamls above use exactly the same parameters.
-
-    - In the `metadata` section, the `name` strings are the same ones set in the claims you declared for MariaDB and Nextcloud server.
-
-    - In the `spec` section there are a number of particularities.
-
-      - The `spec.capacity.storage` is a decimal number in gigabytes (`G`). Internally, the decimal value will be converted to megabytes (`M`). Be sure not to assign more capacity than is really available in the underlying storage, something you can check on the node with `df -h`.
-        > **BEWARE!**
-        > Be careful with the units you use in Kubernetes: it's not the same to type **1G** (1000M) as **1Gi** (1024Mi). Check out [this Stackoverflow question about the matter](https://stackoverflow.com/questions/50804915/kubernetes-size-definitions-whats-the-difference-of-gi-and-g) for further clarification.
-
-      - The `spec.volumeMode` is set to `Filesystem` because the underlying LVM volume has been formatted with an ext4 filesystem. The alternative value is `Block`, and it's valid only for raw (unformatted) volumes connected through storage plugins that support that format (`local-path` doesn't).
-
-      - The `spec.accessModes` list has just the `ReadWriteOnce` value, which ensures that only one node (hence the `Once` part) can mount the volume with read and write access at a time.
-
-      - The `spec.storageClassName` is a parameter that indicates what storage profile (a particular set of properties) to use with the persistent volume. Remember that you only have one available in your K3s cluster, `local-path`, so that's the one you have to use.
-        > **BEWARE!**
-        > If you leave the `storageClassName` parameter unset, its value will be set internally to the default one (`local-path` in a K3s cluster). On the other hand, if the value is the empty string (`storageClassName: ""`), the volume is left with no storage class assigned at all.
-
-      - The `spec.persistentVolumeReclaimPolicy` parameter sets the reclaim policy to apply to this persistent volume. When all the persistent volume claims that required this volume are deleted from the cluster, the system must know what to do with this storage.
-        - There are only two policies to use here: `Retain` or `Delete`.
-        - Left unset, it'll be set to whatever reclaim policy is defined in the storage class. The `local-path` class has it set to `Delete`.
-        - `Retain` deletes the persistent volume from the cluster but not the associated storage asset. This means that whatever data was stored there will be preserved.
-        - `Delete` deletes both the persistent volume and the associated storage asset, but only if the volume plugin/storage provisioner used supports it. In the case of the Rancher [local-path-provisioner](https://github.com/rancher/local-path-provisioner) used in K3s (associated with the `local-path` storage class), it will automatically clean up the contents stored in the volume.
-
-      - `spec.local.path` is where you specify the **absolute path**, within the node's filesystem, where you want to mount this volume. Notice how, in all the PVs, it has the path to their corresponding `k3smnt` folder you already left prepared in your `k3sagent02` VM.
-
-      - The `spec.nodeAffinity` block restricts to which node in the cluster a volume can be bound. Otherwise, the Kubernetes engine would try to bind the volume to any node, which could lead to errors. In the yamls above, you can see how all the PVs have just one node affinity rule, which looks for the hostname of the node (the `key` in the `matchExpressions` section) and checks if it's `In` (the `operator` parameter) the list of admitted `values`. Since the `k3sagent02` node is the only one with the storage ready for those volumes, its hostname is the only value in the list.
-
-### _Nextcloud Namespace resource_
-
-To avoid naming conflicts with any other resources you could have running in your K3s cluster, it's better to put all the components of this Nextcloud platform under an exclusive namespace.
-
-1. A namespace is also a Kubernetes resource, so create a file for it under the `resources` folder.
-
-    ~~~bash
-    $ touch $HOME/k8sprjs/nextcloud/resources/nextcloud.namespace.yaml
-    ~~~
-
-2. Fill `nextcloud.namespace.yaml` with the resource definition next.
-
-    ~~~yaml
-    apiVersion: v1
-    kind: Namespace
-
-    metadata:
-      name: nextcloud
-    ~~~
-
-    As you see above, the Namespace is one of the simplest resources you can set in a Kubernetes cluster.
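-
-If you want an early syntax check of these new yaml files before wiring them into any Kustomize project, one option is a client-side dry run over the `resources` folder. The command below only validates the declarations, without creating anything in your cluster; it assumes the folder holds just the three persistent volumes and the namespace created in this section.
-
-~~~bash
-$ kubectl apply --dry-run=client -f $HOME/k8sprjs/nextcloud/resources/
-~~~
-
-Each resource should come out listed as `created (dry run)`; a typo in any of the yamls would raise an error instead.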
-
-### _Updating wildcard certificate's Reflector-managed namespaces_
-
-The Apache service running your Nextcloud server needs the secret from the wildcard certificate you created in the [**G029** guide](G029%20-%20K3s%20cluster%20setup%2012%20~%20Setting%20up%20cert-manager%20and%20wildcard%20certificate.md#setting-up-a-wildcard-certificate-for-a-domain). That secret contains the files that make up the certificate itself, and you need it replicated in the `nextcloud` namespace so Apache can use it. For replication of secrets and config maps, remember that you installed the Reflector addon and that you already annotated the wildcard certificate so its secret can be managed by it. What you need to do is just add the `nextcloud` namespace to those annotations.
-
-1. In the Kustomize project you created for the wildcard certificate, create a new `patches` folder.
-
-    ~~~bash
-    $ mkdir -p $HOME/k8sprjs/cert-manager/certificates/patches
-    ~~~
-
-2. Create a `wildcard.deimos.cloud-tls.certificate.cert-manager.reflector.namespaces.yaml` file under the `patches` folder.
-
-    ~~~bash
-    $ touch $HOME/k8sprjs/cert-manager/certificates/patches/wildcard.deimos.cloud-tls.certificate.cert-manager.reflector.namespaces.yaml
-    ~~~
-
-3. In `wildcard.deimos.cloud-tls.certificate.cert-manager.reflector.namespaces.yaml` copy the yaml portion below.
-
-    ~~~yaml
-    # Certificate wildcard.deimos.cloud-tls patch for Reflector-managed namespaces
-    apiVersion: cert-manager.io/v1
-    kind: Certificate
-
-    metadata:
-      name: wildcard.deimos.cloud-tls
-      namespace: certificates
-    spec:
-      secretTemplate:
-        annotations:
-          reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "kube-system,nextcloud"
-          reflector.v1.k8s.emberstack.com/reflection-auto-namespaces: "kube-system,nextcloud"
-    ~~~
-
-    This portion has been taken from the original `wildcard.deimos.cloud-tls.certificate.cert-manager.yaml` you created to define your wildcard certificate.
-
-    - Only two Reflector-related annotations have to be modified: the ones listing the namespaces where the certificate's secret is to be copied.
-
-    - The `nextcloud` namespace is added right after the originally present `kube-system` one.
-
-4. Edit the `kustomization.yaml` file from this certificate's Kustomize project to add the patch. The file should look like below.
-
-    ~~~yaml
-    # Certificates deployment
-    apiVersion: kustomize.config.k8s.io/v1beta1
-    kind: Kustomization
-
-    resources:
-    - resources/certificates.namespace.yaml
-    - resources/cluster-issuer-selfsigned.cluster-issuer.cert-manager.yaml
-    - resources/wildcard.deimos.cloud-tls.certificate.cert-manager.yaml
-
-    patches:
-    - path: patches/wildcard.deimos.cloud-tls.certificate.cert-manager.reflector.namespaces.yaml
-      target:
-        group: cert-manager.io
-        version: v1
-        kind: Certificate
-        namespace: certificates
-        name: wildcard.deimos.cloud-tls
-    ~~~
-
-    What's new in this `kustomization.yaml` file is the whole `patches` block, where you see that your `wildcard.deimos.cloud-tls.certificate.cert-manager.reflector.namespaces.yaml` is applied to a very specific `target` that corresponds to the wildcard certificate you need to change.
-
-5. Test the Kustomize yaml output to see if the patch is applied properly. The only things that should change are the two patched Reflector annotations at `spec.secretTemplate.annotations` in the wildcard's `Certificate` resource.
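-
-    As with the other Kustomize projects, you can generate this output by pointing `kubectl kustomize` at the certificate project's folder (the same path used in the apply step right after):
-
-    ~~~bash
-    $ kubectl kustomize $HOME/k8sprjs/cert-manager/certificates | less
-    ~~~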
-
-    ~~~yaml
-    apiVersion: v1
-    kind: Namespace
-    metadata:
-      name: certificates
-    ---
-    apiVersion: cert-manager.io/v1
-    kind: Certificate
-    metadata:
-      name: wildcard.deimos.cloud-tls
-      namespace: certificates
-    spec:
-      dnsNames:
-      - '*.deimos.cloud'
-      - deimos.cloud
-      duration: 8760h
-      isCA: false
-      issuerRef:
-        group: cert-manager.io
-        kind: ClusterIssuer
-        name: cluster-issuer-selfsigned
-      privateKey:
-        algorithm: ECDSA
-        encoding: PKCS8
-        rotationPolicy: Always
-        size: 384
-      renewBefore: 720h
-      secretName: wildcard.deimos.cloud-tls
-      secretTemplate:
-        annotations:
-          reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
-          reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: kube-system,nextcloud
-          reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
-          reflector.v1.k8s.emberstack.com/reflection-auto-namespaces: kube-system,nextcloud
-      subject:
-        organizations:
-        - Deimos
-    ---
-    apiVersion: cert-manager.io/v1
-    kind: ClusterIssuer
-    metadata:
-      name: cluster-issuer-selfsigned
-    spec:
-      selfSigned: {}
-    ~~~
-
-6. If you agree with the Kustomize output, reapply the Kustomize project of your certificate.
-
-    ~~~bash
-    $ kubectl apply -k $HOME/k8sprjs/cert-manager/certificates
-    ~~~
-
-> **BEWARE!**
-> Remember that the `cert-manager` system won't automatically apply this modification to the annotations in the secret already generated for your wildcard certificate. Not a problem right now, but later, after you've deployed the whole Nextcloud platform, you'll have to apply the change to the certificate's secret to make Reflector clone it into your new `nextcloud` namespace.
-
-## Kustomize project for Nextcloud platform
-
-With every required component declared or configured, now you need to put everything together under the same Kustomize project.
-
-1. Create a `kustomization.yaml` file under the `nextcloud` folder.
-
-    ~~~bash
-    $ touch $HOME/k8sprjs/nextcloud/kustomization.yaml
-    ~~~
-
-2. Copy in `kustomization.yaml` the yaml below.
-
-    ~~~yaml
-    # Nextcloud platform setup
-    apiVersion: kustomize.config.k8s.io/v1beta1
-    kind: Kustomization
-
-    namespace: nextcloud
-
-    commonLabels:
-      platform: nextcloud
-
-    namePrefix: nxcd-
-
-    resources:
-    - resources/data-nextcloud.persistentvolume.yaml
-    - resources/db-nextcloud.persistentvolume.yaml
-    - resources/html-nextcloud.persistentvolume.yaml
-    - resources/nextcloud.namespace.yaml
-    - components/cache-redis
-    - components/db-mariadb
-    - components/server-nextcloud
-    ~~~
-
-    Be aware of the following details.
-
-    - The `nextcloud` namespace is applied to all the resources coming out of this Kustomize project, except the namespace itself.
-
-    - The `commonLabels` will put a `platform` label on all your resources. Remember that you already set an `app` label within each component.
-      > **BEWARE!**
-      > A label set in a Kustomize project will overwrite the same label in any underlying subprojects.
-
-    - The `namePrefix` adds a prefix to all the `metadata.name` strings in your resources, except namespaces. It also transforms those names in the places they're referred to, although be aware that there might be exceptions to this. In this case, the prefix is an acronym of the term _Nextcloud_.
-
-    - In the `resources` list you have yaml files and also the directories of the components you have configured in the previous parts of this guide.
-      > **BEWARE!**
-      > You can list directories as resources only if they have a `kustomization.yaml` inside that can be read by Kustomize.
      > In other words, you can list Kustomize projects as resources for another Kustomize project.
-
-3. As usual, check the Kustomize yaml output for this project. Since this one is going to be particularly long, let's dump it into a file such as `nextcloud.k.output.yaml`.
-
-    ~~~bash
-    $ kubectl kustomize $HOME/k8sprjs/nextcloud > nextcloud.k.output.yaml
-    ~~~
-
-4. Open the `nextcloud.k.output.yaml` file and compare your resulting yaml output with the one next.
-
-    ~~~yaml
-    apiVersion: v1
-    kind: Namespace
-    metadata:
-      labels:
-        platform: nextcloud
-      name: nextcloud
-    ---
-    apiVersion: v1
-    data:
-      redis.conf: |
-        port 6379
-        bind 0.0.0.0
-        protected-mode no
-        maxmemory 64mb
-        maxmemory-policy allkeys-lru
-    kind: ConfigMap
-    metadata:
-      labels:
-        app: cache-redis
-        platform: nextcloud
-      name: nxcd-cache-redis-6967fc5hc5
-      namespace: nextcloud
-    ---
-    apiVersion: v1
-    data:
-      initdb.sh: |
-        #!/bin/sh
-        echo ">>> Creating user for Mysql Prometheus metrics exporter"
-        mysql -u root -p$MYSQL_ROOT_PASSWORD --execute \
-        "CREATE USER '${MARIADB_PROMETHEUS_EXPORTER_USERNAME}'@'localhost' IDENTIFIED BY '${MARIADB_PROMETHEUS_EXPORTER_PASSWORD}' WITH MAX_USER_CONNECTIONS 3;
-        GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO '${MARIADB_PROMETHEUS_EXPORTER_USERNAME}'@'localhost';
-        FLUSH privileges;"
-      my.cnf: |
-        [server]
-        skip_name_resolve = 1
-        innodb_buffer_pool_size = 224M
-        innodb_flush_log_at_trx_commit = 2
-        innodb_log_buffer_size = 32M
-        query_cache_type = 1
-        query_cache_limit = 2M
-        query_cache_min_res_unit = 2k
-        query_cache_size = 64M
-        slow_query_log = 1
-        slow_query_log_file = /var/lib/mysql/slow.log
-        long_query_time = 1
-        innodb_io_capacity = 2000
-        innodb_io_capacity_max = 3000
-
-        [client-server]
-        !includedir /etc/mysql/conf.d/
-        !includedir /etc/mysql/mariadb.conf.d/
-
-        [client]
-        default-character-set = utf8mb4
-
-        [mysqld]
-        character_set_server = utf8mb4
-        collation_server = utf8mb4_general_ci
-        transaction_isolation = READ-COMMITTED
-        binlog_format = ROW
-        log_bin = /var/lib/mysql/mysql-bin.log
-        expire_logs_days = 7
-        max_binlog_size = 100M
-        innodb_file_per_table=1
-        innodb_read_only_compressed = OFF
-        tmp_table_size= 32M
-        max_heap_table_size= 32M
-        max_connections=512
-      nextcloud-db-name: nextcloud-db
-      nextcloud-username: nextcloud
-      prometheus-exporter-username: exporter
-    kind: ConfigMap
-    metadata:
-      labels:
-        app: db-mariadb
-        platform: nextcloud
-      name: nxcd-db-mariadb-88gc2m5h46
-      namespace: nextcloud
-    ---
-    apiVersion: v1
-    data:
-      000-default.conf: |
-        MinSpareServers 4
-        MaxSpareServers 16
-        StartServers 10
-        MaxConnectionsPerChild 2048
-
-        LoadModule socache_shmcb_module /usr/lib/apache2/modules/mod_socache_shmcb.so
-        SSLSessionCache shmcb:/var/tmp/apache_ssl_scache(512000)
-
-        <VirtualHost _default_:443>
-          Protocols http/1.1
-          ServerAdmin root@deimos.cloud
-          ServerName nextcloud.deimos.cloud
-          ServerAlias nxc.deimos.cloud
-
-          Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains"
-
-          DocumentRoot /var/www/html
-          DirectoryIndex index.php
-
-          LoadModule ssl_module /usr/lib/apache2/modules/mod_ssl.so
-          SSLEngine on
-          SSLCertificateFile /etc/ssl/certs/wildcard.deimos.cloud-tls.crt
-          SSLCertificateKeyFile /etc/ssl/certs/wildcard.deimos.cloud-tls.key
-
-          <Directory /var/www/html/>
-            Options FollowSymlinks MultiViews
-            AllowOverride All
-            Require all granted
-
-            <IfModule mod_dav.c>
-              Dav off
-            </IfModule>
-
-            SetEnv HOME /var/www/html
-            SetEnv HTTP_HOME /var/www/html
-            Satisfy Any
-
-          </Directory>
-
-          ErrorLog ${APACHE_LOG_DIR}/error.log
-          CustomLog ${APACHE_LOG_DIR}/access.log combined
-        </VirtualHost>
-      cache-redis-svc-cluster-ip: 10.43.100.1
db-mariadb-svc-cluster-ip: 10.43.100.2 - nextcloud-admin-username: admin - nextcloud-trusted-domains: 192.168.1.42 nextcloud.deimos.cloud nxc.deimos.cloud - ports.conf: | - Listen 443 - - # vim: syntax=apache ts=4 sw=4 sts=4 sr noet - kind: ConfigMap - metadata: - labels: - app: server-nextcloud - platform: nextcloud - name: nxcd-server-apache-nextcloud-fcckh8bk2d - namespace: nextcloud - --- - apiVersion: v1 - data: - redis-password: | - WTB1cl9yRTNlNDFMeS5sYWzDsWpmbMOxa2FlcnV0YW9uZ3ZvYW46YcOxb2RrbzM0OTQ4dX - lPbmctUzNrcmVUX1A0czV3b1JkLWhlUkUhCg== - kind: Secret - metadata: - labels: - app: cache-redis - platform: nextcloud - name: nxcd-cache-redis-bh9d296g5k - namespace: nextcloud - type: Opaque - --- - apiVersion: v1 - data: - nextcloud-user-password: | - cTQ4OXE1NjlnYWRmamzDsWtqcXdpb2VrbnZrbG5rd2VvbG12bGtqYcOxc2RnYWlvcGgyYXNkZmFz - a2RrbmZnbDIK - prometheus-exporter-password: | - bmd1ZXVlaTVpdG52Ym52amhha29hb3BkcGRrY25naGZ1ZXI5MzlrZTIwMm1mbWZ2bHNvc2QwM2Zr - ZDkyM2zDsQo= - root-password: | - MDk0ODM1bXZuYjg5MDM4N212Mmk5M21jam5yamhya3Nkw7Fzb3B3ZWpmZ212eHNvZWRqOTNkam1k - bDI5ZG1qego= - kind: Secret - metadata: - labels: - app: db-mariadb - platform: nextcloud - name: nxcd-db-mariadb-dg5cm45947 - namespace: nextcloud - type: Opaque - --- - apiVersion: v1 - data: - nextcloud-admin-password: | - OXE0OHVvbmJvaXU0ODkwdW9paG5nw6x1eTM0ODkwOTIzdWttbmFwamTEusOxamdwYmFpdTM5MHVp - b3UzOTAzMnVpMDl1bmdhb3BpamRkYcOxejM5a2zDkXFla2oK - kind: Secret - metadata: - labels: - app: server-nextcloud - platform: nextcloud - name: nxcd-server-nextcloud-mmd5t7577c - namespace: nextcloud - type: Opaque - --- - apiVersion: v1 - kind: Service - metadata: - annotations: - prometheus.io/port: "9121" - prometheus.io/scrape: "true" - labels: - app: cache-redis - platform: nextcloud - name: nxcd-cache-redis - namespace: nextcloud - spec: - clusterIP: 10.43.100.1 - ports: - - name: server - port: 6379 - protocol: TCP - - name: metrics - port: 9121 - protocol: TCP - selector: - app: cache-redis - platform: nextcloud - type: ClusterIP - --- - apiVersion: v1 - kind: Service - metadata: - annotations: - prometheus.io/port: "9104" - prometheus.io/scrape: "true" - labels: - app: db-mariadb - platform: nextcloud - name: nxcd-db-mariadb - namespace: nextcloud - spec: - clusterIP: 10.43.100.2 - ports: - - name: server - port: 3306 - protocol: TCP - - name: metrics - port: 9104 - protocol: TCP - selector: - app: db-mariadb - platform: nextcloud - type: ClusterIP - --- - apiVersion: v1 - kind: Service - metadata: - annotations: - prometheus.io/port: "9205" - prometheus.io/scrape: "true" - labels: - app: server-nextcloud - platform: nextcloud - name: nxcd-server-apache-nextcloud - namespace: nextcloud - spec: - clusterIP: 10.43.100.3 - loadBalancerIP: 192.168.1.42 - ports: - - name: server - port: 443 - protocol: TCP - - name: metrics - port: 9205 - protocol: TCP - selector: - app: server-nextcloud - platform: nextcloud - type: LoadBalancer - --- - apiVersion: v1 - kind: PersistentVolume - metadata: - labels: - platform: nextcloud - name: nxcd-data-nextcloud - spec: - accessModes: - - ReadWriteOnce - capacity: - storage: 9.3G - local: - path: /mnt/nextcloud-hdd/data/k3smnt - nodeAffinity: - required: - nodeSelectorTerms: - - matchExpressions: - - key: kubernetes.io/hostname - operator: In - values: - - k3sagent02 - persistentVolumeReclaimPolicy: Retain - storageClassName: local-path - volumeMode: Filesystem - --- - apiVersion: v1 - kind: PersistentVolume - metadata: - labels: - platform: nextcloud - name: nxcd-db-nextcloud - spec: - 
accessModes: - - ReadWriteOnce - capacity: - storage: 3.5G - local: - path: /mnt/nextcloud-ssd/db/k3smnt - nodeAffinity: - required: - nodeSelectorTerms: - - matchExpressions: - - key: kubernetes.io/hostname - operator: In - values: - - k3sagent02 - persistentVolumeReclaimPolicy: Retain - storageClassName: local-path - volumeMode: Filesystem - --- - apiVersion: v1 - kind: PersistentVolume - metadata: - labels: - platform: nextcloud - name: nxcd-html-nextcloud - spec: - accessModes: - - ReadWriteOnce - capacity: - storage: 1.2G - local: - path: /mnt/nextcloud-ssd/html/k3smnt - nodeAffinity: - required: - nodeSelectorTerms: - - matchExpressions: - - key: kubernetes.io/hostname - operator: In - values: - - k3sagent02 - persistentVolumeReclaimPolicy: Retain - storageClassName: local-path - volumeMode: Filesystem - --- - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - labels: - app: server-nextcloud - platform: nextcloud - name: nxcd-data-server-nextcloud - namespace: nextcloud - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 9.3G - storageClassName: local-path - volumeName: nxcd-data-nextcloud - --- - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - labels: - app: db-mariadb - platform: nextcloud - name: nxcd-db-mariadb - namespace: nextcloud - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 3.5G - storageClassName: local-path - volumeName: nxcd-db-nextcloud - --- - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - labels: - app: server-nextcloud - platform: nextcloud - name: nxcd-html-server-nextcloud - namespace: nextcloud - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 1.2G - storageClassName: local-path - volumeName: nxcd-html-nextcloud - --- - apiVersion: apps/v1 - kind: Deployment - metadata: - labels: - app: cache-redis - platform: nextcloud - name: nxcd-cache-redis - namespace: nextcloud - spec: - replicas: 1 - selector: - matchLabels: - app: cache-redis - platform: nextcloud - template: - metadata: - labels: - app: cache-redis - platform: nextcloud - spec: - affinity: - podAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchExpressions: - - key: app - operator: In - values: - - server-nextcloud - topologyKey: kubernetes.io/hostname - containers: - - command: - - redis-server - - /etc/redis/redis.conf - - --requirepass $(REDIS_PASSWORD) - env: - - name: REDIS_PASSWORD - valueFrom: - secretKeyRef: - key: redis-password - name: nxcd-cache-redis-bh9d296g5k - image: redis:6.2-alpine - name: server - ports: - - containerPort: 6379 - resources: - limits: - memory: 64Mi - volumeMounts: - - mountPath: /etc/redis/redis.conf - name: redis-config - subPath: redis.conf - - env: - - name: REDIS_PASSWORD - valueFrom: - secretKeyRef: - key: redis-password - name: nxcd-cache-redis-bh9d296g5k - image: oliver006/redis_exporter:v1.32.0-alpine - name: metrics - ports: - - containerPort: 9121 - resources: - limits: - memory: 32Mi - volumes: - - configMap: - defaultMode: 292 - items: - - key: redis.conf - path: redis.conf - name: nxcd-cache-redis-6967fc5hc5 - name: redis-config - --- - apiVersion: apps/v1 - kind: StatefulSet - metadata: - labels: - app: db-mariadb - platform: nextcloud - name: nxcd-db-mariadb - namespace: nextcloud - spec: - replicas: 1 - selector: - matchLabels: - app: db-mariadb - platform: nextcloud - serviceName: nxcd-db-mariadb - template: - metadata: - labels: - app: db-mariadb - platform: nextcloud - spec: - containers: - - env: - - name: 
MYSQL_DATABASE - valueFrom: - configMapKeyRef: - key: nextcloud-db-name - name: nxcd-db-mariadb-88gc2m5h46 - - name: MYSQL_ROOT_PASSWORD - valueFrom: - secretKeyRef: - key: root-password - name: nxcd-db-mariadb-dg5cm45947 - - name: MYSQL_USER - valueFrom: - configMapKeyRef: - key: nextcloud-username - name: nxcd-db-mariadb-88gc2m5h46 - - name: MYSQL_PASSWORD - valueFrom: - secretKeyRef: - key: nextcloud-user-password - name: nxcd-db-mariadb-dg5cm45947 - - name: MARIADB_PROMETHEUS_EXPORTER_USERNAME - valueFrom: - configMapKeyRef: - key: prometheus-exporter-username - name: nxcd-db-mariadb-88gc2m5h46 - - name: MARIADB_PROMETHEUS_EXPORTER_PASSWORD - valueFrom: - secretKeyRef: - key: prometheus-exporter-password - name: nxcd-db-mariadb-dg5cm45947 - image: mariadb:10.6-focal - name: server - ports: - - containerPort: 3306 - resources: - limits: - memory: 320Mi - volumeMounts: - - mountPath: /etc/mysql/my.cnf - name: mariadb-config - subPath: my.cnf - - mountPath: /docker-entrypoint-initdb.d/initdb.sh - name: mariadb-config - subPath: initdb.sh - - mountPath: /var/lib/mysql - name: mariadb-storage - - args: - - --collect.info_schema.tables - - --collect.info_schema.innodb_tablespaces - - --collect.info_schema.innodb_metrics - - --collect.global_status - - --collect.global_variables - - --collect.slave_status - - --collect.info_schema.processlist - - --collect.perf_schema.tablelocks - - --collect.perf_schema.eventsstatements - - --collect.perf_schema.eventsstatementssum - - --collect.perf_schema.eventswaits - - --collect.auto_increment.columns - - --collect.binlog_size - - --collect.perf_schema.tableiowaits - - --collect.perf_schema.indexiowaits - - --collect.info_schema.userstats - - --collect.info_schema.clientstats - - --collect.info_schema.tablestats - - --collect.info_schema.schemastats - - --collect.perf_schema.file_events - - --collect.perf_schema.file_instances - - --collect.perf_schema.replication_group_member_stats - - --collect.perf_schema.replication_applier_status_by_worker - - --collect.slave_hosts - - --collect.info_schema.innodb_cmp - - --collect.info_schema.innodb_cmpmem - - --collect.info_schema.query_response_time - - --collect.engine_tokudb_status - - --collect.engine_innodb_status - env: - - name: MARIADB_PROMETHEUS_EXPORTER_USERNAME - valueFrom: - configMapKeyRef: - key: prometheus-exporter-username - name: nxcd-db-mariadb-88gc2m5h46 - - name: MARIADB_PROMETHEUS_EXPORTER_PASSWORD - valueFrom: - secretKeyRef: - key: prometheus-exporter-password - name: nxcd-db-mariadb-dg5cm45947 - - name: DATA_SOURCE_NAME - value: $(MARIADB_PROMETHEUS_EXPORTER_USERNAME):$(MARIADB_PROMETHEUS_EXPORTER_PASSWORD)@(localhost:3306)/ - image: prom/mysqld-exporter:v0.13.0 - name: metrics - ports: - - containerPort: 9104 - resources: - limits: - memory: 32Mi - volumes: - - configMap: - items: - - key: initdb.sh - path: initdb.sh - - key: my.cnf - path: my.cnf - name: nxcd-db-mariadb-88gc2m5h46 - name: mariadb-config - - name: mariadb-storage - persistentVolumeClaim: - claimName: nxcd-db-mariadb - --- - apiVersion: apps/v1 - kind: StatefulSet - metadata: - labels: - app: server-nextcloud - platform: nextcloud - name: nxcd-server-apache-nextcloud - namespace: nextcloud - spec: - replicas: 1 - selector: - matchLabels: - app: server-nextcloud - platform: nextcloud - serviceName: nxcd-server-apache-nextcloud - template: - metadata: - labels: - app: server-nextcloud - platform: nextcloud - spec: - containers: - - env: - - name: NEXTCLOUD_ADMIN_USER - valueFrom: - configMapKeyRef: - key: 
nextcloud-admin-username - name: nxcd-server-apache-nextcloud-fcckh8bk2d - - name: NEXTCLOUD_ADMIN_PASSWORD - valueFrom: - secretKeyRef: - key: nextcloud-admin-password - name: nxcd-server-nextcloud-mmd5t7577c - - name: NEXTCLOUD_TRUSTED_DOMAINS - valueFrom: - configMapKeyRef: - key: nextcloud-trusted-domains - name: nxcd-server-apache-nextcloud-fcckh8bk2d - - name: MYSQL_HOST - valueFrom: - configMapKeyRef: - key: db-mariadb-svc-cluster-ip - name: nxcd-server-apache-nextcloud-fcckh8bk2d - - name: MYSQL_DATABASE - valueFrom: - configMapKeyRef: - key: nextcloud-db-name - name: nxcd-db-mariadb-88gc2m5h46 - - name: MYSQL_USER - valueFrom: - configMapKeyRef: - key: nextcloud-username - name: nxcd-db-mariadb-88gc2m5h46 - - name: MYSQL_PASSWORD - valueFrom: - secretKeyRef: - key: nextcloud-user-password - name: nxcd-db-mariadb-dg5cm45947 - - name: REDIS_HOST - valueFrom: - configMapKeyRef: - key: cache-redis-svc-cluster-ip - name: nxcd-server-apache-nextcloud-fcckh8bk2d - - name: REDIS_HOST_PASSWORD - valueFrom: - secretKeyRef: - key: redis-password - name: nxcd-cache-redis-bh9d296g5k - - name: APACHE_ULIMIT_MAX_FILES - value: ulimit -n 65536 - image: nextcloud:22.2-apache - lifecycle: - postStart: - exec: - command: - - sh - - -c - - | - chown www-data:www-data /var/www/html/data - apt-get update - apt-get install -y openrc - start-stop-daemon --start --background --pidfile /cron.pid --exec /cron.sh - name: server - ports: - - containerPort: 443 - resources: - limits: - memory: 512Mi - volumeMounts: - - mountPath: /etc/ssl/certs/wildcard.deimos.cloud-tls.crt - name: certificate - subPath: wildcard.deimos.cloud-tls.crt - - mountPath: /etc/ssl/certs/wildcard.deimos.cloud-tls.key - name: certificate - subPath: wildcard.deimos.cloud-tls.key - - mountPath: /etc/apache2/ports.conf - name: apache-config - subPath: ports.conf - - mountPath: /etc/apache2/sites-available/000-default.conf - name: apache-config - subPath: 000-default.conf - - mountPath: /var/www/html - name: html-storage - - mountPath: /var/www/html/data - name: data-storage - - env: - - name: NEXTCLOUD_SERVER - value: https://localhost - - name: NEXTCLOUD_TLS_SKIP_VERIFY - value: "true" - - name: NEXTCLOUD_USERNAME - valueFrom: - configMapKeyRef: - key: nextcloud-admin-username - name: nxcd-server-apache-nextcloud-fcckh8bk2d - - name: NEXTCLOUD_PASSWORD - valueFrom: - secretKeyRef: - key: nextcloud-admin-password - name: nxcd-server-nextcloud-mmd5t7577c - image: xperimental/nextcloud-exporter:0.4.0-15-gbb88fb6 - name: metrics - ports: - - containerPort: 9205 - resources: - limits: - memory: 32Mi - volumes: - - configMap: - defaultMode: 420 - items: - - key: ports.conf - path: ports.conf - - key: 000-default.conf - path: 000-default.conf - name: nxcd-server-apache-nextcloud-fcckh8bk2d - name: apache-config - - name: certificate - secret: - defaultMode: 292 - items: - - key: tls.crt - path: wildcard.deimos.cloud-tls.crt - - key: tls.key - path: wildcard.deimos.cloud-tls.key - secretName: wildcard.deimos.cloud-tls - - name: html-storage - persistentVolumeClaim: - claimName: nxcd-html-server-nextcloud - - name: data-storage - persistentVolumeClaim: - claimName: nxcd-data-server-nextcloud - ~~~ - - The most important thing here is to verify that all the resources' names have been transformed correctly (prefixes or suffixes added) in all the places they are referred to. 
Remember that Kustomize won't change the names if they've been put in non-standard Kubernetes parameters, and Kustomize may not even touch values in certain particular standard ones. On the other hand, notice how Kustomize has grouped all the resources together according to their kind and ordered them alphabetically by `metadata.name`.
-
-5. If you agree with the final yaml, you can apply the Nextcloud platform Kustomize project to your cluster.
-
-    ~~~bash
-    $ kubectl apply -k $HOME/k8sprjs/nextcloud
-    ~~~
-
-6. Right after applying the Kustomize project, check how the deployment is going for your whole Nextcloud platform.
-
-    ~~~bash
-    $ kubectl -n nextcloud get pv,pvc,cm,secret,deployment,replicaset,statefulset,pod,svc -o wide
-    ~~~
-
-    Next you have a possible output from the `kubectl` command above.
-
-    > **BEWARE!**
-    > Notice below that the `wildcard.deimos.cloud-tls` secret is missing and the `nxcd-server-apache-nextcloud-0` pod is still in `ContainerCreating` status, waiting for that secret to be available.
-
-    ~~~bash
-    NAME                                   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                  STORAGECLASS   REASON   AGE   VOLUMEMODE
-    persistentvolume/nxcd-data-nextcloud   9300M      RWO            Retain           Bound    nextcloud/nxcd-data-server-nextcloud   local-path              18m   Filesystem
-    persistentvolume/nxcd-db-nextcloud     3500M      RWO            Retain           Bound    nextcloud/nxcd-db-mariadb              local-path              18m   Filesystem
-    persistentvolume/nxcd-html-nextcloud   1200M      RWO            Retain           Bound    nextcloud/nxcd-html-server-nextcloud   local-path              18m   Filesystem
-
-    NAME                                               STATUS   VOLUME                CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
-    persistentvolumeclaim/nxcd-html-server-nextcloud   Bound    nxcd-html-nextcloud   1200M      RWO            local-path     18m   Filesystem
-    persistentvolumeclaim/nxcd-data-server-nextcloud   Bound    nxcd-data-nextcloud   9300M      RWO            local-path     18m   Filesystem
-    persistentvolumeclaim/nxcd-db-mariadb              Bound    nxcd-db-nextcloud     3500M      RWO            local-path     18m   Filesystem
-
-    NAME                                                DATA   AGE
-    configmap/kube-root-ca.crt                          1      18m
-    configmap/nxcd-cache-redis-6967fc5hc5               1      18m
-    configmap/nxcd-db-mariadb-88gc2m5h46                5      18m
-    configmap/nxcd-server-apache-nextcloud-fcckh8bk2d   6      18m
-
-    NAME                                      TYPE                                  DATA   AGE
-    secret/default-token-l74p7                kubernetes.io/service-account-token   3      18m
-    secret/nxcd-cache-redis-7c88b92427        Opaque                                1      18m
-    secret/nxcd-db-mariadb-dg5cm45947         Opaque                                3      18m
-    secret/nxcd-server-nextcloud-mmd5t7577c   Opaque                                1      18m
-
-    NAME                               READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS       IMAGES                                                     SELECTOR
-    deployment.apps/nxcd-cache-redis   1/1     1            1           18m   server,metrics   redis:6.2-alpine,oliver006/redis_exporter:v1.32.0-alpine   app=cache-redis,platform=nextcloud
-
-    NAME                                          DESIRED   CURRENT   READY   AGE   CONTAINERS       IMAGES                                                     SELECTOR
-    replicaset.apps/nxcd-cache-redis-68984788b7   1         1         1       18m   server,metrics   redis:6.2-alpine,oliver006/redis_exporter:v1.32.0-alpine   app=cache-redis,platform=nextcloud,pod-template-hash=68984788b7
-
-    NAME                                            READY   AGE   CONTAINERS       IMAGES
-    statefulset.apps/nxcd-server-apache-nextcloud   0/1     18m   server,metrics   nextcloud:22.2-apache,xperimental/nextcloud-exporter:0.4.0-15-gbb88fb6
-    statefulset.apps/nxcd-db-mariadb                1/1     18m   server,metrics   mariadb:10.6-focal,prom/mysqld-exporter:v0.13.0
-
-    NAME                                    READY   STATUS              RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
-    pod/nxcd-server-apache-nextcloud-0      0/2     ContainerCreating   0          18m   <none>        k3sagent02   <none>           <none>
-    pod/nxcd-cache-redis-68984788b7-npjz9   2/2     Running             0          18m   10.42.2.239   k3sagent02   <none>           <none>
-    pod/nxcd-db-mariadb-0                   2/2     Running             0          18m   10.42.2.240   k3sagent02   <none>           <none>
-
-    NAME                                   TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)                        AGE   SELECTOR
-    service/nxcd-cache-redis               ClusterIP      10.43.100.1   <none>         6379/TCP,9121/TCP              18m   app=cache-redis,platform=nextcloud
app=cache-redis,platform=nextcloud - service/nxcd-db-mariadb                ClusterIP      10.43.100.2   <none>         3306/TCP,9104/TCP              18m   app=db-mariadb,platform=nextcloud - service/nxcd-server-apache-nextcloud   LoadBalancer   10.43.100.3   192.168.1.42   443:31730/TCP,9205:31398/TCP   18m   app=server-nextcloud,platform=nextcloud - ~~~ - - There are a few details about this deployment you must know, based on this `kubectl` output. - - - First are the persistent volumes with all their details. Notice how the values under the `CAPACITY` column are in megabytes (`M`), although those sizes were specified in gigabytes (`G`) in the yaml description. Also see how there's no information about which node the volumes are associated with. The status `Bound` means that the volume has been claimed, so it's not free at the moment. - - - Right below the persistent volumes you have the persistent volume claims, all of them appearing with `Bound` status and with the name of the `VOLUME` they're bound to. And, as it happened with the PVs, the PVCs' requested `CAPACITY` is shown in megabytes (`M`) rather than gigabytes (`G`). - - - The `Deployment` resource for Redis appears indicating that 1 out of 1 required pod replicas is `READY`, among other details. - - - What comes out of a `Deployment` resource is a `ReplicaSet`, like the one named `nxcd-cache-redis-68984788b7`. See how the name has a hexadecimal number as a suffix, which is added by Kubernetes automatically. This replica set will give its name to the pods that come out of it. On the other hand, notice that, instead of using one `READY` column like its parent `Deployment` resource, it has `DESIRED`, `CURRENT` and `READY` columns to tell you how many of its pods are running. - - - The `StatefulSet`s appear with their names as they are in their yaml descriptions, next to other values that inform you of the pods they have running (`READY` column) and a few other related details. - - - The resources are apparently fine, and all the expected pods are accounted for, although not all of them are `Running` yet. If, for some reason, a pod is missing some resource it needs, it'll remain in `ContainerCreating` or `Pending` status till it gets what it requires to run. - - For instance, let's suppose the wildcard certificate's secret is not available in the `nextcloud` namespace yet. This would force the `nxcd-server-apache-nextcloud-0` pod to wait with `ContainerCreating` status till that resource becomes available in the namespace. In a case like that, running `kubectl -n nextcloud describe pod nxcd-server-apache-nextcloud-0` would show you the reason for the wait in its `Events` section. - - > **BEWARE!** - > Remember that, probably due to a bug related to the graceful shutdown feature in the particular K3s version you've installed in this guide series (`v1.22.3+k3s1`), after a reboot the pods may not appear with status `Running` but `Terminated`. - - - Notice a particularity about how pods are named. - - The pods from regular Deployment resources (such as `nxcd-cache-redis-68984788b7-npjz9`) get their _unique_ name from their parent replica set combined with an autogenerated string. - - The pods from stateful sets (`nxcd-db-mariadb-0` and `nxcd-server-apache-nextcloud-0` in this case) get their names from their parent stateful set, plus a number as a suffix. When a stateful set needs to run more than one replica, each generated pod gets a different but consecutive number. Read more about stateful set behaviour [in the official Kubernetes documentation](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/).
- - - In the `Service` resources list, you can see that all the services have their cluster IP, and that the one for the Nextcloud server also has the expected external IP for accessing the platform later. On the other hand, notice that the pods also have an IP, which is on a different internal cluster subnet. - - - Finally, see how the pods are all assigned to the `k3sagent02` node of your K3s cluster, due mainly to the rules of affinity applied to the persistent volumes. Services that store data (_state_) have to be deployed where their storage is available. Redis, on the other hand, has its affinity to the Nextcloud server itself, so it ended up deployed on the same node too. - -### _Cloning wildcard certificate's secret into the `nextcloud` namespace_ - -Now that you have the Nextcloud platform deployed, you must update the annotations of your wildcard certificate's secret with cert-manager. The most straightforward and cleanest way is forcing a manual renewal of the certificate itself. To trigger this renewal, you can use the kubectl cert-manager plugin as follows. - -~~~bash -$ kubectl cert-manager -n certificates renew wildcard.deimos.cloud-tls -Manually triggered issuance of Certificate certificates/wildcard.deimos.cloud-tls -~~~ - -The `kubectl cert-manager` command above forces a `renew` on the `wildcard.deimos.cloud-tls` certificate present in the `certificates` namespace. This way, cert-manager updates the secret directly related to the certificate, also under the `certificates` namespace. In this case, since the certificate is still valid, it won't be reissued and its secret won't be regenerated either, although the annotations and labels in it will be properly updated, which is what you want. Right after the secret is updated, Reflector should replicate that secret into the `nextcloud` namespace. Check it out with `kubectl`. - -~~~bash -$ kubectl -n nextcloud get secrets -NAME                               TYPE                                  DATA   AGE -default-token-l74p7                kubernetes.io/service-account-token   3      35m -nxcd-cache-redis-7c88b92427        Opaque                                1      35m -nxcd-db-mariadb-dg5cm45947         Opaque                                3      35m -nxcd-server-nextcloud-mmd5t7577c   Opaque                                1      35m -nxcd-cache-redis-bh9d296g5k        Opaque                                1      35m -wildcard.deimos.cloud-tls          kubernetes.io/tls                     3      7m -~~~ - -See how, of all the secrets existing in the `nextcloud` namespace, the `wildcard.deimos.cloud-tls` one is the most recent. The original secret in the `certificates` namespace will be older. - -## Logging in and checking the background jobs configuration on your Nextcloud platform - -Give MariaDB and Nextcloud a few minutes to initialize themselves properly. Then, open a browser and navigate to your Nextcloud platform's main URLs, which in my case are `nextcloud.deimos.cloud` and `nxc.deimos.cloud`. Of course, if you haven't made available any URLs for Nextcloud in your network, you'll have to browse to the IP of the Nextcloud service, in this case `192.168.1.42`. - -Either way, if the deployment has gone right, you should reach the login page of your Nextcloud platform. - -![Nextcloud login page](images/g033/nextcloud_login_page.png "Nextcloud login page") - -Remember, you can log in here with the username and password of the administrator user you've configured in the Kustomize project of the Nextcloud server. After the login, you'll get to the dashboard page. - -![Nextcloud welcome screen at dashboard](images/g033/nextcloud_dashboard_welcome.png "Nextcloud welcome screen at dashboard") - -Click the welcome screen away and you'll see the dashboard itself.
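By the way, if you ever want to check the state of your Nextcloud instance without a browser, Nextcloud's own `occ` command-line tool can report it. The invocation below is just a possible sketch: it assumes the pod and container names used in this deployment, and runs `occ` as the `www-data` user, since that's the user the official Nextcloud image serves PHP under.

~~~bash
# A possible check of the Nextcloud installation state through the occ tool.
# The pod (nxcd-server-apache-nextcloud-0) and container (server) names are the
# ones from this deployment; adjust them if yours differ.
$ kubectl -n nextcloud exec nxcd-server-apache-nextcloud-0 -c server -- su -s /bin/sh -c "php occ status" www-data
~~~

If the initialization went fine, the output should report `installed: true`, among other details.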
- -![Nextcloud dashboard](images/g033/nextcloud_dashboard.png "Nextcloud dashboard") - -Now that you're in, you need to check a particular setting of your Nextcloud instance. So click on your user's avatar, at the top right of the page, then click on `Settings` in the resulting menu. - -![Nextcloud user avatar menu](images/g033/nextcloud_avatar_menu_settings_highlight.png "Nextcloud user avatar menu") - -You'll reach an administrative page with two groups of options: the ones that affect only your current user, under the `Personal` title, and the ones related to the whole Nextcloud instance, under the `Administration` title, which only an administrator user can see. - -![Nextcloud settings](images/g033/nextcloud_settings_basic_settings_highlighted.png "Nextcloud settings") - -Click on the highlighted `Basic settings` link to get to the page shown next. - -![Nextcloud Basic settings](images/g033/nextcloud_basic_settings_cron_highlighted.png "Nextcloud Basic settings") - -Ensure that the option enabled in the `Background jobs` section is the **`Cron (Recommended)`** one. This way, Nextcloud will use the cron service you declared as a `lifecycle.postStart` command in its `StatefulSet`, right in the [previous part of this Nextcloud guide](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%204%20-%20Nextcloud%20server.md#nextcloud-server-stateful-resource), to execute its background tasks. - -## Security considerations in Nextcloud - -After deploying Nextcloud, you'll only have one user available, the administrative one. That user is also used by the Prometheus metrics exporter associated with the Nextcloud server. Ideally, your next steps should be the following. - -1. Create a different administrative user for management tasks. - -2. Create a regular user for storing your personal data. Never use an administrative user for regular usage. - -3. Try to restrict the permissions and capabilities of the original administrative user so the only thing it can really do is access the URL `https://nextcloud.deimos.cloud/ocs/v2.php/apps/serverinfo/api/v1/info`. This URL just returns an XML document with statistics taken from the Nextcloud server, and it's the one the Prometheus metrics exporter uses to get the Nextcloud metrics. - -4. Check the security options available in Nextcloud, and try to increase the security of your setup by using them. In particular, you can enable two-factor authentication and server-side encryption. - - > **BEWARE!** - > The server-side encryption comes with a performance penalty, since it encrypts the files uploaded to the Nextcloud server. Of course, this also implies that they have to be decrypted when they're downloaded later. - -## Nextcloud platform's Kustomize project attached to this guide series - -You can find the Kustomize project for this Nextcloud platform deployment in the following attached folder.
- -- `k8sprjs/nextcloud` - -## Relevant system paths - -### _Folders in `kubectl` client system_ - -- `$HOME/k8sprjs/cert-manager/certificates` -- `$HOME/k8sprjs/cert-manager/certificates/patches` -- `$HOME/k8sprjs/nextcloud` -- `$HOME/k8sprjs/nextcloud/components` -- `$HOME/k8sprjs/nextcloud/components/cache-redis` -- `$HOME/k8sprjs/nextcloud/components/db-mariadb` -- `$HOME/k8sprjs/nextcloud/components/server-nextcloud` -- `$HOME/k8sprjs/nextcloud/resources` - -### _Files in `kubectl` client system_ - -- `$HOME/k8sprjs/cert-manager/certificates/kustomization.yaml` -- `$HOME/k8sprjs/cert-manager/certificates/patches/wildcard.deimos.cloud-tls.certificate.cert-manager.reflector.namespaces.yaml` -- `$HOME/k8sprjs/nextcloud/kustomization.yaml` -- `$HOME/k8sprjs/nextcloud/resources/data-nextcloud.persistentvolume.yaml` -- `$HOME/k8sprjs/nextcloud/resources/db-nextcloud.persistentvolume.yaml` -- `$HOME/k8sprjs/nextcloud/resources/html-nextcloud.persistentvolume.yaml` -- `$HOME/k8sprjs/nextcloud/resources/nextcloud.namespace.yaml` - -## References - -### _Kubernetes_ - -#### **Storage** - -- [Official Doc - Local Storage Volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local) -- [Official Doc - Local Persistent Volumes for Kubernetes Goes Beta](https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/) -- [Official Doc - Reserving a PersistentVolume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reserving-a-persistentvolume) -- [Official Doc - Local StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/#local) -- [Official Doc - Reclaiming Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming) -- [Kubernetes API - PersistentVolume](https://kubernetes.io/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-v1/) -- [Kubernetes API - PersistentVolumeClaim](https://kubernetes.io/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-claim-v1/) -- [Kubernetes Persistent Volumes, Claims, Storage Classes, and More](https://cloud.netapp.com/blog/kubernetes-persistent-storage-why-where-and-how) -- [Rancher K3s - Setting up the Local Storage Provider](https://rancher.com/docs/k3s/latest/en/storage/) -- [K3s local path provisioner on GitHub](https://github.com/rancher/local-path-provisioner) -- [Using "local-path" in persistent volume requires sudo to edit files on host node?](https://github.com/k3s-io/k3s/issues/1823) -- [Kubernetes size definitions: What's the difference of "Gi" and "G"?](https://stackoverflow.com/questions/50804915/kubernetes-size-definitions-whats-the-difference-of-gi-and-g) -- [distinguish unset and empty values for storageClassName](https://github.com/helm/helm/issues/2600) -- [Kubernetes Mounting Volumes in Pods. 
Mount Path Ownership and Permissions](https://kb.novaordis.com/index.php/Kubernetes_Mounting_Volumes_in_Pods#Mount_Path_Ownership_and_Permissions) - -#### **Pod scheduling** - -- [Official Doc - Assigning Pods to Nodes](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) -- [Official Doc - Assign Pods to Nodes using Node Affinity](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) -- [Kubernetes API - Pod scheduling](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling) -- [STRATEGIES FOR KUBERNETES POD PLACEMENT AND SCHEDULING](https://thenewstack.io/strategies-for-kubernetes-pod-placement-and-scheduling/) -- [Implement Node and Pod Affinity/Anti-Affinity in Kubernetes: A Practical Example](https://thenewstack.io/implement-node-and-pod-affinity-anti-affinity-in-kubernetes-a-practical-example/) -- [Tutorial: Apply the Sidecar Pattern to Deploy Redis in Kubernetes](https://thenewstack.io/tutorial-apply-the-sidecar-pattern-to-deploy-redis-in-kubernetes/) -- [Amazon EKS Workshop Official Doc - Assigning Pods to Nodes](https://www.eksworkshop.com/beginner/140_assigning_pods/affinity_usecases/) - -#### **StatefulSets** - -- [Official Doc](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) - -#### **ConfigMaps and secrets** - -- [Official Doc - ConfigMaps](https://kubernetes.io/docs/concepts/configuration/configmap/) -- [Official Doc - Configure a Pod to Use a ConfigMap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) -- [An Introduction to Kubernetes Secrets and ConfigMaps](https://opensource.com/article/19/6/introduction-kubernetes-secrets-and-configmaps) -- [Kubernetes - Using ConfigMap SubPaths to Mount Files](https://dev.to/joshduffney/kubernetes-using-configmap-subpaths-to-mount-files-3a1i) -- [Kubernetes Secrets | Declare confidential data with examples](https://www.golinuxcloud.com/kubernetes-secrets/) -- [Kubernetes ConfigMaps and Secrets](https://shravan-kuchkula.github.io/kubernetes/configmaps-secrets/) -- [Import data to config map from kubernetes secret](https://stackoverflow.com/questions/50452665/import-data-to-config-map-from-kubernetes-secret) - -### _cert-manager_ - -- [Kubectl cert-manager plugin](https://cert-manager.io/docs/usage/kubectl-plugin/) - -### _Nextcloud_ - -- [Official Docker build of Nextcloud](https://hub.docker.com/_/nextcloud) -- [Official Docker build of Nextcloud on GitHub](https://github.com/nextcloud/docker) -- [Prometheus exporter for getting some metrics of a Nextcloud server instance](https://hub.docker.com/r/xperimental/nextcloud-exporter) -- [Installation and server configuration](https://docs.nextcloud.com/server/stable/admin_manual/installation/index.html) -- [Installation on Linux](https://docs.nextcloud.com/server/stable/admin_manual/installation/source_installation.html#installation-on-linux) -- [Database configuration](https://docs.nextcloud.com/server/latest/admin_manual/configuration_database/linux_database_configuration.html) -- [Nextcloud on Kubernetes, by modzilla99](https://github.com/modzilla99/kubernetes-nextcloud) -- [Nextcloud on Kubernetes, by andremotz](https://github.com/andremotz/nextcloud-kubernetes) -- [Deploy Nextcloud - Yaml with application and database container and a configure a secret](https://www.debontonline.com/2021/05/part-15-deploy-nextcloud-yaml-with.html) -- [Self-Host Nextcloud Using 
Kubernetes](https://blog.true-kubernetes.com/self-host-nextcloud-using-kubernetes/) -- [Deploying NextCloud on Kubernetes with Kustomize](https://medium.com/@acheaito/nextcloud-on-kubernetes-19658785b565) -- [A NextCloud Kubernetes deployment](https://github.com/acheaito/nextcloud-kubernetes) -- [Nextcloud self-hosting on K8s](https://eramons.github.io/techblog/post/nextcloud/) -- [Nextcloud self-hosting on K8s on GitHub](https://github.com/eramons/kubenextcloud) -- [Installing Nextcloud on Ubuntu with Redis, APCu, SSL & Apache](https://bayton.org/docs/nextcloud/installing-nextcloud-on-ubuntu-16-04-lts-with-redis-apcu-ssl-apache/) -- [Install Nextcloud with Apache2 on Debian 10](https://vectops.com/2021/01/install-nextcloud-with-apache2-on-debian-10/) -- [Nextcloud scale-out using Kubernetes](https://faun.pub/nextcloud-scale-out-using-kubernetes-93c9cac9e493) -- [How to tune Nextcloud on-premise cloud server for better performance](https://www.techrepublic.com/article/how-to-tune-nextcloud-on-premise-cloud-server-for-better-performance/) - -### _Other_ - -- [How-to build your own Kubernetes cluster with a Rasberry Pi 4](https://www.debontonline.com/p/kubernetes.html) -- [Kubernetes Development Series on GitHub](https://github.com/marcel-dempers/docker-development-youtube-series/tree/master/kubernetes) -- [How to deploy WordPress and MySQL on Kubernetes](https://medium.com/@containerum/how-to-deploy-wordpress-and-mysql-on-kubernetes-bda9a3fdd2d5) -- [Alpine, Slim, Stretch, Buster, Jessie, Bullseye — What are the Differences in Docker Images?](https://medium.com/swlh/alpine-slim-stretch-buster-jessie-bullseye-bookworm-what-are-the-differences-in-docker-62171ed4531d) -- [Alpine Linux FAQ: My cron jobs don't run?](https://wiki.alpinelinux.org/wiki/Alpine_Linux:FAQ#My_cron_jobs_don.27t_run.3F) -- [Running cron jobs in a Docker Alpine container](https://devopsheaven.com/cron/docker/alpine/linux/2017/10/30/run-cron-docker-alpine.html) - -## Navigation - -[<< Previous (**G033. Deploying services 02. Nextcloud Part 4**)](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%204%20-%20Nextcloud%20server.md) | [+Table Of Contents+](G000%20-%20Table%20Of%20Contents.md) | [Next (**G034. Deploying services 03. Gitea Part 1**) >>](G034%20-%20Deploying%20services%2003%20~%20Gitea%20-%20Part%201%20-%20Outlining%20setup%20and%20arranging%20storage.md) diff --git a/G034 - Deploying services 03 ~ Gitea - Part 2 - Redis cache server.md b/G034 - Deploying services 03 ~ Gitea - Part 2 - Redis cache server.md index fdd266a..50da4b1 100644 --- a/G034 - Deploying services 03 ~ Gitea - Part 2 - Redis cache server.md +++ b/G034 - Deploying services 03 ~ Gitea - Part 2 - Redis cache server.md @@ -30,7 +30,7 @@ Prepare the `redis.conf` file that will specify some configuration values for yo maxmemory-policy allkeys-lru ~~~ - The parameters are exactly the same ones you set up in the [part 2 of the Nextcloud guide](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%202%20-%20Redis%20cache%20server.md#redis-configuration-file), go back to it if you don't remember the meaning of the values above. + The parameters are exactly the same ones you set up in the [part 2 of the Nextcloud guide](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%202%20-%20Valkey%20cache%20server.md#redis-configuration-file), go back to it if you don't remember the meaning of the values above. 
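By the way, if at some point you want to confirm that a running Redis instance has really loaded these values, you can ask the server itself with `redis-cli` and its `CONFIG GET` subcommand. The command below is only a sketch: the `gitea` namespace, the `cache-redis` deployment name and the container name are assumptions, so adjust them to whatever your setup actually uses.

~~~bash
# Hypothetical check: query the running Redis server for its effective eviction policy.
# Namespace, deployment and container names are assumptions; replace them with yours.
$ kubectl -n gitea exec deploy/cache-redis -c server -- redis-cli -a "yourRedisPassword" CONFIG GET maxmemory-policy
~~~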
## Redis password @@ -129,7 +129,7 @@ Since this Redis instance will only work with data in memory, you can set it up topologyKey: "kubernetes.io/hostname" ~~~ - This `Deployment` resource is almost identical to the one described in the [part 2 of the Nextcloud guide](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%202%20-%20Redis%20cache%20server.md#redis-deployment-resource). The only difference is in the `labelSelector` block set in the `affinity.podAffinity` section: the `key` is the same, `app`, but the value has been changed to `server-gitea`. + This `Deployment` resource is almost identical to the one described in the [part 2 of the Nextcloud guide](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%202%20-%20Valkey%20cache%20server.md#redis-deployment-resource). The only difference is in the `labelSelector` block set in the `affinity.podAffinity` section: the `key` is the same, `app`, but the value has been changed to `server-gitea`. ## Redis Service resource @@ -163,7 +163,7 @@ To expose Redis you need a `Service` resource. name: metrics - This `Service` resource is the same as the one declared [in the Nextcloud guide](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%202%20-%20Redis%20cache%20server.md#redis-service-resource), except that it doesn't have a `clusterIP` explicitly set to a particular internal IP. And, since that cluster IP can change everytime the Service is restarted, you'll have to invoke it by its FQDN. + This `Service` resource is the same as the one declared [in the Nextcloud guide](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%202%20-%20Valkey%20cache%20server.md#redis-service-resource), except that it doesn't have a `clusterIP` explicitly set to a particular internal IP. And, since that cluster IP can change every time the Service is restarted, you'll have to invoke it by its FQDN. ### _Redis `Service`'s FQDN or DNS record_ @@ -225,7 +225,7 @@ The last piece is the `kustomization.yaml` file that describes this Gitea's Redi - redis-password=secrets/redis.pwd - This `kustomization.yaml` is exactly the same as the one you declared for [the Nextcloud's Redis service](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%202%20-%20Redis%20cache%20server.md#redis-kustomize-project). No value needs to be changed here. + This `kustomization.yaml` is exactly the same as the one you declared for [the Nextcloud Redis service](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%202%20-%20Valkey%20cache%20server.md#redis-kustomize-project). No value needs to be changed here. ### _Checking the Kustomize yaml output_ diff --git a/G045 - System update 04 ~ Updating K3s and deployed apps.md b/G045 - System update 04 ~ Updating K3s and deployed apps.md index 447aa79..6aa9a45 100644 --- a/G045 - System update 04 ~ Updating K3s and deployed apps.md +++ b/G045 - System update 04 ~ Updating K3s and deployed apps.md @@ -228,7 +228,7 @@ You know what steps to follow when updating any app, but in what order do you ha > For installing this service, remember that you had to apply a patch to adjust the default configuration to something more in line with the particularities of your K3s cluster. That patch applied particular tolerations and slightly modified arguments to the `Deployment` resource of `metrics-server`, so don't forget to check that resource in the new version to see if the patch still applies as it is.
For instance, the patch is still valid as is for the version `0.6.1` of metrics-server. 6. **Nextcloud platform**: deployment procedure begins with [guide **G033** - Deploying services 02 ~ Nextcloud - Part 1](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%201%20-%20Outlining%20setup,%20arranging%20storage%20and%20choosing%20service%20IPs.md) and finishes with [Part 5](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%205%20-%20Complete%20Nextcloud%20platform.md). - 1. **Redis cache server**: : configuration procedure found in [guide **G033** - Deploying services 02 ~ Nextcloud - Part 2](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%202%20-%20Redis%20cache%20server.md). + 1. **Redis cache server**: configuration procedure found in [guide **G033** - Deploying services 02 ~ Nextcloud - Part 2](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%202%20-%20Valkey%20cache%20server.md). 2. **MariaDB server**: configuration procedure found in [guide **G033** - Deploying services 02 ~ Nextcloud - Part 3](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%203%20-%20MariaDB%20database%20server.md). > **BEWARE!** > When you upgrade MariaDB to a new **major** version, its system and other tables have to be updated too. This is not something that happens automatically: either you do it manually following a certain procedure, or you can configure your MariaDB instance to do it automatically for you. I've explained how to do it in the second, and **much** more convenient, way in the [appendix guide 15](G915%20-%20Appendix%2015%20~%20Updating%20MariaDB%20to%20a%20newer%20major%20version.md). diff --git a/G910 - Appendix 10 ~ Setting up virtual network with Open vSwitch.md b/G910 - Appendix 10 ~ Setting up virtual network with Open vSwitch.md index 272fa92..43bc6f2 100644 --- a/G910 - Appendix 10 ~ Setting up virtual network with Open vSwitch.md +++ b/G910 - Appendix 10 ~ Setting up virtual network with Open vSwitch.md @@ -20,7 +20,7 @@ Log in the PVE web console and go to the `System > Network` tab of your `pve` no In the capture above you can see the initial setup on my PVE host, which I already explained (and reconfigured) in the [G017 guide](G017%20-%20Virtual%20Networking%20~%20Network%20configuration.md). -### _Backing up the current network interfaces configuration_ +### Backing up the current network interfaces configuration Before going forward with the new OVS-based configuration, make a backup of your current network interfaces setup. To do so, open a shell on your PVE host with your `mgrsys` user and do the following. ~~~bash $ sudo cp /etc/network/interfaces /etc/network/interfaces.bkp ~~~ -### _Setting up the OVS bridge_ +### Setting up the OVS bridge Go back to the web console and, again, in the `System > Network` screen of your PVE node do the following.
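Once you've applied the new configuration, you can also double-check the resulting Open vSwitch layout from a shell on your PVE host. The `ovs-vsctl` tool comes with the standard OVS packages, so it should already be available there.

~~~bash
# Prints the OVS configuration: the bridges with their ports and interfaces.
$ sudo ovs-vsctl show
~~~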
@@ -109,18 +109,18 @@ Go back to the web console and, again, in the `System > Network` screen of your ## Relevant system paths -### _Directories_ +### Directories - `/etc/network` -### _Files_ +### Files - `/etc/network/interfaces` - `/etc/network/interfaces.bkp` ## References -### _Virtual networking with Open vSwitch (OVS)_ +### Virtual networking with Open vSwitch (OVS) - [Open vSwitch](https://www.openvswitch.org/) - [Open vSwitch in Proxmox VE wiki](https://pve.proxmox.com/wiki/Open_vSwitch) @@ -129,4 +129,4 @@ Go back to the web console and, again, in the `System > Network` screen of your ## Navigation -[<< Previous (**G909. Appendix 09**)](G909%20-%20Appendix%2009%20~%20Kubernetes%20object%20stuck%20in%20Terminating%20state.md) | [+Table Of Contents+](G000%20-%20Table%20Of%20Contents.md) | [Next (**G911. Appendix 11**) >>](G911%20-%20Appendix%2011%20~%20Alternative%20Nextcloud%20web%20server%20setups.md) +[<< Previous (**G909. Appendix 09**)](G909%20-%20Appendix%2009%20~%20Kubernetes%20object%20stuck%20in%20Terminating%20state.md) | [+Table Of Contents+](G000%20-%20Table%20Of%20Contents.md) | [Next (**G911. Appendix 11**) >>](G911%20-%20Appendix%2011%20~%20Checking%20the%20K8s%20API%20endpoints%20status.md) diff --git a/G911 - Appendix 11 ~ Alternative Nextcloud web server setups.md b/G911 - Appendix 11 ~ Alternative Nextcloud web server setups.md deleted file mode 100644 index 14d5630..0000000 --- a/G911 - Appendix 11 ~ Alternative Nextcloud web server setups.md +++ /dev/null @@ -1,1606 +0,0 @@ -# G911 - Appendix 11 ~ Alternative Nextcloud web server setups - -In the [Part 4 of the Nextcloud guide](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%204%20-%20Nextcloud%20server.md) you saw how to configure Nextcloud with the Apache web server, but you can also use Nginx. In this appendix guide you'll see what configuration to apply to use Nginx, but also a couple of ideas that you can apply to your Apache setup. - -## Ideas for the Apache setup - -### _Enabling a Traefik IngressRoute_ - -The Nextcloud server you configured in the guide is directly reachable through an IP assigned by the MetalLB load balancer of your cluster. Instead of this, you could enable an ingress route through the Traefik service. - -1. Go to the Kustomize subproject's folder of your Nextcloud server. - - ~~~bash - $ cd $HOME/k8sprjs/nextcloud/components/server-nextcloud - ~~~ - -2. Make a renamed copy of the `000-default.conf` file. - - ~~~bash - $ cp configs/000-default.conf configs/000-default.no-ssl.conf - ~~~ - - Notice that the copy has a `.no-ssl` suffix. - -3. Edit the `000-default.no-ssl.conf`, and just remove the lines related to SSL directives from it.

    ~~~apache
    MinSpareServers         4
    MaxSpareServers        16
    StartServers           10
    MaxConnectionsPerChild 2048

    <VirtualHost *:443>
        Protocols http/1.1
        ServerAdmin root@deimos.cloud
        ServerName nextcloud.deimos.cloud
        ServerAlias nxc.deimos.cloud

        Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains"

        DocumentRoot /var/www/html
        DirectoryIndex index.php

        <Directory /var/www/html/>
            Options FollowSymlinks MultiViews
            AllowOverride All
            Require all granted

            <IfModule mod_dav.c>
                Dav off
            </IfModule>

            SetEnv HOME /var/www/html
            SetEnv HTTP_HOME /var/www/html
            Satisfy Any
        </Directory>

        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined
    </VirtualHost>
    ~~~

-4. Make a renamed copy of the `server-apache-nextcloud.statefulset.yaml` file.
- - ~~~bash - $ cp resources/server-apache-nextcloud.statefulset.yaml resources/server-apache-nextcloud.statefulset.no-ssl.yaml - ~~~ - -5. Edit the new `server-apache-nextcloud.statefulset.no-ssl.yaml` file to remove all the references to the wildcard certificate files. The yaml should end looking like below. - - ~~~yaml - apiVersion: apps/v1 - kind: StatefulSet - - metadata: - name: server-apache-nextcloud - spec: - replicas: 1 - serviceName: server-apache-nextcloud - template: - spec: - containers: - - name: server - image: nextcloud:22.2-apache - ports: - - containerPort: 443 - env: - - name: NEXTCLOUD_ADMIN_USER - valueFrom: - configMapKeyRef: - name: server-apache-nextcloud - key: nextcloud-admin-username - - name: NEXTCLOUD_ADMIN_PASSWORD - valueFrom: - secretKeyRef: - name: server-nextcloud - key: nextcloud-admin-password - - name: NEXTCLOUD_TRUSTED_DOMAINS - valueFrom: - configMapKeyRef: - name: server-apache-nextcloud - key: nextcloud-trusted-domains - - name: MYSQL_HOST - valueFrom: - configMapKeyRef: - name: server-apache-nextcloud - key: db-mariadb-svc-cluster-ip - - name: MYSQL_DATABASE - valueFrom: - configMapKeyRef: - name: db-mariadb - key: nextcloud-db-name - - name: MYSQL_USER - valueFrom: - configMapKeyRef: - name: db-mariadb - key: nextcloud-username - - name: MYSQL_PASSWORD - valueFrom: - secretKeyRef: - name: db-mariadb - key: nextcloud-user-password - - name: REDIS_HOST - valueFrom: - configMapKeyRef: - name: server-apache-nextcloud - key: cache-redis-svc-cluster-ip - - name: REDIS_HOST_PASSWORD - valueFrom: - secretKeyRef: - name: cache-redis - key: redis-password - - name: APACHE_ULIMIT_MAX_FILES - value: 'ulimit -n 65536' - lifecycle: - postStart: - exec: - command: - - "sh" - - "-c" - - | - chown www-data:www-data /var/www/html/data - apt-get update - apt-get install -y openrc - start-stop-daemon --start --background --pidfile /cron.pid --exec /cron.sh - resources: - limits: - memory: 512Mi - volumeMounts: - - name: apache-config - subPath: ports.conf - mountPath: /etc/apache2/ports.conf - - name: apache-config - subPath: 000-default.conf - mountPath: /etc/apache2/sites-available/000-default.conf - - name: html-storage - mountPath: /var/www/html - - name: data-storage - mountPath: /var/www/html/data - - name: metrics - image: xperimental/nextcloud-exporter:0.4.0-15-gbb88fb6 - ports: - - containerPort: 9205 - env: - - name: NEXTCLOUD_SERVER - value: "https://localhost" - - name: NEXTCLOUD_TLS_SKIP_VERIFY - value: "true" - - name: NEXTCLOUD_USERNAME - valueFrom: - configMapKeyRef: - name: server-apache-nextcloud - key: nextcloud-admin-username - - name: NEXTCLOUD_PASSWORD - valueFrom: - secretKeyRef: - name: server-nextcloud - key: nextcloud-admin-password - resources: - limits: - memory: 32Mi - volumes: - - name: apache-config - configMap: - name: server-apache-nextcloud - defaultMode: 0644 - items: - - key: ports.conf - path: ports.conf - - key: 000-default.conf - path: 000-default.conf - - name: html-storage - persistentVolumeClaim: - claimName: html-server-nextcloud - - name: data-storage - persistentVolumeClaim: - claimName: data-server-nextcloud - ~~~ - - You just have to remove the `wildcard.deimos.cloud-tls` references from the `volumeMounts` list in the `server` container declaration, and also the `certificate` block in the `volumes` list at the end of the yaml. - -6. Make a renamed copy of the `server-apache-nextcloud.service.yaml`. 
- - ~~~bash - $ cp resources/server-apache-nextcloud.service.yaml resources/server-apache-nextcloud.service.clusterip.yaml - ~~~ - - See the `.clusterip` suffix added to the copy's filename. - -7. Edit the `server-apache-nextcloud.service.clusterip.yaml` so the `Service` resource declared inside is of the `ClusterIP` kind. - - ~~~yaml - apiVersion: v1 - kind: Service - - metadata: - annotations: - prometheus.io/scrape: "true" - prometheus.io/port: "9205" - name: server-apache-nextcloud - spec: - type: ClusterIP - clusterIP: 10.43.100.3 - ports: - - port: 443 - protocol: TCP - name: server - - port: 9205 - protocol: TCP - name: metrics - ~~~ - -8. Create a new `server-apache-nextcloud.ingressroute.traefik.yaml` file under the `resources` folder for a new Traefik `IngressRoute`. - - ~~~bash - $ touch resources/server-apache-nextcloud.ingressroute.traefik.yaml - ~~~ - -9. Put the following yaml in `server-apache-nextcloud.ingressroute.traefik.yaml`. - - ~~~yaml - apiVersion: traefik.containo.us/v1alpha1 - kind: IngressRoute - - metadata: - name: server-apache-nextcloud - spec: - entryPoints: - - websecure - tls: - secretName: wildcard.deimos.cloud-tls - routes: - - match: Host(`nextcloud.deimos.cloud`) || Host(`nxc.deimos.cloud`) - kind: Rule - services: - - name: server-apache-nextcloud - kind: Service - port: 443 - ~~~ - - This yaml might look familiar to you, since it's very similar to the one you created for accessing the Traefik dashboard, back in the [**G030** guide](G030%20-%20K3s%20cluster%20setup%2013%20~%20Enabling%20the%20Traefik%20dashboard.md#kustomize-project-for-enabling-access-to-the-traefik-dashboard). - - - This `IngressRoute` references the secret of your wildcard certificate, encrypting the traffic with the referenced `server-apache-nextcloud` service. - - - Remember that you'll need to enable in your network the domains you specify in the `IngressRoute`, bearing in mind that they'll have to point to the Traefik service's IP. - -10. Now make a backup of the `kustomization.yaml` file of this subproject. - - ~~~bash - $ cp kustomization.yaml kustomization.yaml.bkp - ~~~ - -11. Edit the `kustomization.yaml` file so it looks as follows. - - ~~~yaml - # Nextcloud server setup - apiVersion: kustomize.config.k8s.io/v1beta1 - kind: Kustomization - - commonLabels: - app: server-nextcloud - - resources: - - resources/data-server-nextcloud.persistentvolumeclaim.yaml - - resources/html-server-nextcloud.persistentvolumeclaim.yaml - - resources/server-apache-nextcloud.ingressroute.traefik.yaml - - resources/server-apache-nextcloud.service.clusterip.yaml - - resources/server-apache-nextcloud.statefulset.no-ssl.yaml - - replicas: - - name: server-apache-nextcloud - count: 1 - - images: - - name: nextcloud - newTag: 22.2-apache - - name: xperimental/nextcloud-exporter - newTag: 0.4.0-15-gbb88fb6 - - configMapGenerator: - - name: server-apache-nextcloud - envs: - - configs/params.properties - files: - - 000-default.conf=configs/000-default.no-ssl.conf - - configs/ports.conf - - secretGenerator: - - name: server-nextcloud - files: - - nextcloud-admin-password=secrets/nextcloud-admin.pwd - ~~~ - - The changes are the three new files at the `resources` list (`server-apache-nextcloud.ingressroute.traefik.yaml`, `server-apache-nextcloud.service.clusterip.yaml`, `server-apache-nextcloud.statefulset.no-ssl.yaml`) and the modified `000-default.no-ssl.conf` file at the `configMapGenerator` block. - -12.
The next thing for you to do would be validating the Kustomize subproject's output, then also validating the output for the main Kustomize project of the Nextcloud platform and, finally, reapplying the main project. I'll just note down the related `kubectl` commands below for your reference. - - ~~~bash - $ kubectl kustomize $HOME/k8sprjs/nextcloud/components/server-nextcloud | less - $ kubectl kustomize $HOME/k8sprjs/nextcloud | less - $ kubectl apply -k $HOME/k8sprjs/nextcloud - ~~~ - -### _About the HTTP/2 protocol on Apache_ - -The HTTP/2 protocol is a new version of HTTP that provides new features which increase the performance of HTTP traffic. Apache has its own module identified as `http2_module` which provides the HTTP/2 functionality, but it's incompatible with the default MPM **prefork** module (named `mpm_prefork_module`) that comes enabled in the Nextcloud default Apache image. The `http2_module` requires using one of the other Apache MPM modules that make use of threaded server processes, or _workers_, because the HTTP/2 module needs to launch its own threads for serving the requests it receives. - -I haven't tested this myself, but I can give you a hint about how to enable HTTP/2 in Apache. - -1. The first thing to attend to is switching the MPM prefork module to either the **worker** or the **event** module. - - One way would be manipulating the declaration of the container executing the Nextcloud Apache image: add a `lifecycle.postStart` section to the container, as you already saw in the [Part 4 of the Nextcloud guide](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%204%20-%20Nextcloud%20server.md), and put in it a command line like the following. - - ~~~bash - a2dismod mpm_prefork; a2enmod mpm_worker http2; apachectl restart - ~~~ - - The `lifecycle.postStart` portion of the container declaration could look like below. - - ~~~yaml - lifecycle: - postStart: - exec: - command: - - "sh" - - "-c" - - | - chown www-data:www-data /var/www/html/data; - apt-get update - apt-get install -y openrc - start-stop-daemon --start --background --pidfile /cron.pid --exec /cron.sh - a2dismod mpm_prefork; - a2enmod mpm_worker http2; - apachectl restart - ~~~ - - I've based this yaml portion on the container declaration for the Nextcloud server used in the Nextcloud platform guide. This is why you see the `chown`, `apt-get` and `start-stop-daemon` commands too. - - - If you have experience building your own Docker images, another procedure would be creating your own version of the Nextcloud Apache image in which you configure the modules as needed to support HTTP/2. - -2. After ensuring that the right modules are loaded, you need to adjust the configuration at the `000-default.conf` file. Essentially, you have to replace the directives used with the MPM prefork module with the ones of the worker or event modules and add the HTTP/2 protocol. Next I leave an example of how `000-default.conf` would look after the change.
    ~~~apache
    ServerLimit             4
    StartServers            2
    MaxRequestWorkers      30
    MinSpareThreads        10
    MaxSpareThreads        30
    ThreadsPerChild        10
    ThreadLimit            20
    MaxConnectionsPerChild  0

    H2MinWorkers  5
    H2MaxWorkers 10

    LoadModule socache_shmcb_module /usr/lib/apache2/modules/mod_socache_shmcb.so
    SSLSessionCache shmcb:/var/tmp/apache_ssl_scache(512000)

    <VirtualHost *:443>
        Protocols h2 http/1.1
        ServerAdmin root@deimos.cloud
        ServerName nextcloud.deimos.cloud
        ServerAlias nxc.deimos.cloud

        Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains"

        DocumentRoot /var/www/html
        DirectoryIndex index.php

        LoadModule ssl_module /usr/lib/apache2/modules/mod_ssl.so
        SSLEngine on
        SSLCertificateFile /etc/ssl/certs/wildcard.deimos.cloud-tls.crt
        SSLCertificateKeyFile /etc/ssl/certs/wildcard.deimos.cloud-tls.key

        <Directory /var/www/html/>
            Options FollowSymlinks MultiViews
            AllowOverride All
            Require all granted

            <IfModule mod_dav.c>
                Dav off
            </IfModule>

            SetEnv HOME /var/www/html
            SetEnv HTTP_HOME /var/www/html
            Satisfy Any
        </Directory>

        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined
    </VirtualHost>
    ~~~

 - - Notice all the directives at the top of the file, starting with `ServerLimit` up to `MaxConnectionsPerChild`. Those are all MPM worker module parameters, which are also used in the MPM event module, while the `H2MinWorkers` and `H2MaxWorkers` below them are just a couple of all the ones available at the HTTP/2 module. The HTTP/2 protocol itself is enabled at the `Protocols` directive with the `h2` value. - -3. Since there are no new files or resources used in this modification, the Kustomize project wouldn't change at all and you could just validate and apply it as you did in the Nextcloud platform's guide. - -## Nginx setup - -You've already seen how to deploy Nextcloud with its default Apache image, but you can also make this PHP-based platform be rendered by an Nginx web server. The thing is that there's no Nextcloud image with Nginx integrated, just images with no web server included whatsoever but with an alternative PHP FastCGI process manager called FPM. FPM can improve the performance of any PHP-based website, especially the ones under heavy traffic, but it needs a web server to proxy the requests towards it. So, to run the Nextcloud server properly with FPM you'll need an extra container for Nginx, making the setup use a total of three containers: - -- One container executing the FPM image of Nextcloud. -- One container to run the Nginx server acting as reverse proxy for the Nextcloud PHP-FPM instance. -- One container running the Prometheus metrics exporter for Nextcloud. - -Next, I'll show you what to add or change in the Nextcloud server's Kustomize subproject (`$HOME/k8sprjs/nextcloud/components/server-nextcloud`) for turning it into a Nextcloud server running on FPM and Nginx. - -> **BEWARE!** -> The following Nginx configurations are **not** compatible with the Apache one. So, if you already had the Apache configurations running, first you'll need to undeploy (`kubectl delete`) the whole Nextcloud platform and clean up the storage volumes (remove and then recreate the `k3smnt` folders) used in the agent node. - -### _LoadBalancer configuration_ - -The Nextcloud server setup you'll see here will have its own IP through the MetalLB load balancer. - -#### **Configuration files** - -1. You'll need to create three new configuration files in the `$HOME/k8sprjs/nextcloud/components/server-nextcloud/configs` folder.
- - ~~~bash - $ cd $HOME/k8sprjs/nextcloud/components/server-nextcloud/configs - $ touch dhparam.pem nginx.conf zz-docker.conf - ~~~ - -2. Copy in `dhparam.pem` the following content. - - ~~~txt - -----BEGIN DH PARAMETERS----- - MIIBCAKCAQEA//////////+t+FRYortKmq/cViAnPTzx2LnFg84tNpWp4TZBFGQz - +8yTnc4kmz75fS/jY2MMddj2gbICrsRhetPfHtXV/WVhJDP1H18GbtCFY2VVPe0a - 87VXE15/V8k1mE8McODmi3fipona8+/och3xWKE2rec1MKzKT0g6eXq8CrGCsyT7 - YdEIqUuyyOP7uWrat2DX9GgdT0Kj3jlN9K5W7edjcrsZCwenyO4KbXCeAvzhzffi - 7MA0BM0oNC9hkXL+nOmFg/+OTxIy7vKBg8P+OxtMb61zO7X8vC7CIAXFjvGDfRaD - ssbzSibBsu/6iGtCOGEoXJf//////////wIBAg== - -----END DH PARAMETERS----- - ~~~ - - This file is for SSL/TLS configuration on the Nginx server. Its content sets the Diffie-Hellman parameter (hence why the file is named `dhparam`) to ensure that at least 2048 bits are used in SSL communications. Any less is considered insecure. This file's content has been taken as-is [from this Mozilla-related URL](https://ssl-config.mozilla.org/ffdhe2048.txt). - -3. In `nginx.conf`, copy all the configuration below. - - ~~~nginx - worker_processes auto; - - error_log /dev/stdout debug; - pid /var/run/nginx.pid; - - events { - worker_connections 8; - } - - http { - include /etc/nginx/mime.types; - default_type application/octet-stream; - - log_format main '$remote_addr - $remote_user [$time_local] $status ' - '"$request" $body_bytes_sent "$http_referer" ' - '"$http_user_agent" "$http_x_forwarded_for"'; - - access_log /dev/stdout main; - - # Disabled because this nginx is acting - # as a reverse proxy for an application server. - sendfile off; - - upstream php-handler { - server 127.0.0.1:9000; - } - - server { - listen 443 ssl http2; - server_name nextcloud.deimos.cloud nxc.deimos.cloud; - - # You can use Mozilla's SSL Configuration Generator - # for defining your SSL/TLS settings. - # https://ssl-config.mozilla.org/ - ssl_certificate /etc/nginx/cert/wildcard.deimos.cloud-tls.crt; - ssl_certificate_key /etc/nginx/cert/wildcard.deimos.cloud-tls.key; - - ssl_session_timeout 1d; - ssl_session_cache shared:MozSSL:10m; # about 40000 sessions - ssl_session_tickets off; - - # SSL/TLS intermediate configuration - ssl_protocols TLSv1.2 TLSv1.3; - ssl_dhparam /etc/ssl/dhparam.pem; - ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384; - ssl_prefer_server_ciphers off; - - # HSTS settings - # WARNING: Only add the preload option once you read about - # the consequences in https://hstspreload.org/. This option - # will add the domain to a hardcoded list that is shipped - # in all major browsers and getting removed from this list - # could take several months. 
- #add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;" always; - - # set max upload size - client_max_body_size 10240M; - fastcgi_buffers 64 4K; - - # Enable gzip but do not remove ETag headers - gzip on; - gzip_vary on; - gzip_comp_level 4; - gzip_min_length 256; - gzip_proxied expired no-cache no-store private no_last_modified no_etag auth; - gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy; - - # Pagespeed is not supported by Nextcloud, so if your server is built - # with the `ngx_pagespeed` module, uncomment this line to disable it. - #pagespeed off; - - # HTTP response headers borrowed from Nextcloud `.htaccess` - add_header Referrer-Policy "no-referrer" always; - add_header X-Content-Type-Options "nosniff" always; - add_header X-Download-Options "noopen" always; - add_header X-Frame-Options "SAMEORIGIN" always; - add_header X-Permitted-Cross-Domain-Policies "none" always; - add_header X-Robots-Tag "none" always; - add_header X-XSS-Protection "1; mode=block" always; - - # Remove X-Powered-By, which is an information leak - fastcgi_hide_header X-Powered-By; - - # Path to the root of your installation - root /var/www/html; - - # Specify how to handle directories -- specifying `/index.php$request_uri` - # here as the fallback means that Nginx always exhibits the desired behaviour - # when a client requests a path that corresponds to a directory that exists - # on the server. In particular, if that directory contains an index.php file, - # that file is correctly served; if it doesn't, then the request is passed to - # the front-end controller. This consistent behaviour means that we don't need - # to specify custom rules for certain paths (e.g. images and other assets, - # `/updater`, `/ocm-provider`, `/ocs-provider`), and thus - # `try_files $uri $uri/ /index.php$request_uri` - # always provides the desired behaviour. - index index.php index.html /index.php$request_uri; - - # Rule borrowed from `.htaccess` to handle Microsoft DAV clients - location = / { - if ( $http_user_agent ~ ^DavClnt ) { - return 302 /remote.php/webdav/$is_args$args; - } - } - - location = /robots.txt { - allow all; - log_not_found off; - access_log off; - } - - # Make a regex exception for `/.well-known` so that clients can still - # access it despite the existence of the regex rule - # `location ~ /(\.|autotest|...)` which would otherwise handle requests - # for `/.well-known`. - location ^~ /.well-known { - # The rules in this block are an adaptation of the rules - # in `.htaccess` that concern `/.well-known`. - - location = /.well-known/carddav { return 301 /remote.php/dav/; } - location = /.well-known/caldav { return 301 /remote.php/dav/; } - - location /.well-known/acme-challenge { try_files $uri $uri/ =404; } - location /.well-known/pki-validation { try_files $uri $uri/ =404; } - - # Let Nextcloud's API for `/.well-known` URIs handle all other - # requests by passing them to the front-end controller. 
- return 301 /index.php$request_uri; - } - - # Rules borrowed from `.htaccess` to hide certain paths from clients - location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)(?:$|/) { return 404; } - location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) { return 404; } - - # Ensure this block, which passes PHP files to the PHP process, is above the blocks - # which handle static assets (as seen below). If this block is not declared first, - # then Nginx will encounter an infinite rewriting loop when it prepends `/index.php` - # to the URI, resulting in a HTTP 500 error response. - location ~ \.php(?:$|/) { - # Required for legacy support - rewrite ^/(?!index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+|.+\/richdocumentscode\/proxy) /index.php$request_uri; - - # regex to split $uri to $fastcgi_script_name and $fastcgi_path_info - fastcgi_split_path_info ^(.+?\.php)(/.*)$; - - # Check that the PHP script exists before passing it - if (!-f $document_root$fastcgi_script_name) { - return 404; - } - - # Mitigate https://httpoxy.org/ vulnerabilities - fastcgi_param HTTP_PROXY ""; - - # pass the request to the PHP handler - fastcgi_pass php-handler; - - fastcgi_index index.php; - - # Bypass the fact that try_files resets $fastcgi_path_info - # see: http://trac.nginx.org/nginx/ticket/321 - set $path_info $fastcgi_path_info; - fastcgi_param PATH_INFO $path_info; - - # Avoid sending the security headers twice - fastcgi_param modHeadersAvailable true; - # Enable pretty urls - fastcgi_param front_controller_active true; - - # set the standard fcgi paramters - include fastcgi.conf; - - fastcgi_intercept_errors on; - fastcgi_request_buffering off; - } - - location ~ \.(?:css|js|svg|gif)$ { - try_files $uri /index.php$request_uri; - expires 6M; # Cache-Control policy borrowed from `.htaccess` - access_log off; # Optional: Don't log access to assets - } - - location ~ \.woff2?$ { - try_files $uri /index.php$request_uri; - expires 7d; # Cache-Control policy borrowed from `.htaccess` - access_log off; # Optional: Don't log access to assets - } - - # Rule borrowed from `.htaccess` - location /remote { - return 301 /remote.php$request_uri; - } - - location / { - try_files $uri $uri/ /index.php$request_uri; - } - } - } - ~~~ - - This is the configuration file for the Nginx server, based mostly on the one found [in the Nextcloud documentation](https://docs.nextcloud.com/server/stable/admin_manual/installation/nginx.html). To know the meaning of all of its parameters, check its [official documentation](https://nginx.org/en/docs/) and, in particular, the [core functionality section](https://nginx.org/en/docs/ngx_core_module.html). Regardless, notice the following. - - - The `error_log` and the `http.access_log` parameters point to `/dev/stdout`, so those Nginx logs can be seen as logs of the container in which this web server will run. - - - In the `http.server` subsection, the `listen` port is specified as `443` for https connections. Also see that it has the `ssl` and `http2` options enabled, so the SSL communications are active and take advantage of the HTTP/2 protocol. - - - In the `http.server` subsection, notice the paths specified in the `ssl_certificate` and `ssl_certificate_key` parameters. They'll be used later to mount there the corresponding files from the `wildcard.deimos.cloud-tls` secret you enabled earlier in the `nextcloud` namespace. - - - Again in the `http.server` subsection, look for the `root` parameter set there. 
It has the path, within the Nginx container, to the webroot of your Nextcloud server (`/var/www/html`), where Nextcloud will install all the files it needs to run. - - > **BEWARE!** - > It is always a good idea to compare your `nginx.conf` against the one found [in the Nextcloud documentation](https://docs.nextcloud.com/server/stable/admin_manual/installation/nginx.html), for your version of Nextcloud. You may need to apply any addition found there in your config too. - - > **BEWARE!** - > If you get a message like `Error occurred while checking server setup` in the overview screen of the administration section of Nextcloud, carefully look into the logs of your nextcloud pod ([chapter on logs here](https://github.com/kriegalex/smallab-k8s-pve-guide/blob/feature/minor-improvements/G036%20-%20Host%20and%20K3s%20cluster%20~%20Monitoring%20and%20diagnosis.md#checking-the-logs)). For example, you might get an alert log telling you that 8 worker connections is not enough. - -4. Put in `zz-docker.conf` the following parameters. - - ~~~properties - [global] - daemonize = no - - [www] - listen = 9000 - pm = dynamic - pm.max_children = 16 - pm.start_servers = 10 - pm.min_spare_servers = 4 - pm.max_spare_servers = 16 - pm.max_requests = 512 - ~~~ - - This is a configuration file for the PHP-FPM engine running the Nextcloud server, meant to be used for reconfiguring FPM values, such as the number of child processes it can run. - - - It's based on the one defined in [the Dockerfile for the PHP 8 FPM Alpine image](https://github.com/docker-library/php/blob/master/8.0/alpine3.14/fpm/Dockerfile), found at the Dockerfile's bottom. - - - Notice all the `pm` (process manager) parameters set in the file; they control how the FPM child server processes are spawned and in what number. They're set here because the Nextcloud FPM image comes with default values that won't give you the best performance on your setup. You'll have to estimate how many processes you can run in your container, mainly depending on how much RAM it has. To help you calculate the `pm` values, you can use [this FPM process calculator](https://spot13.com/pmcalculator/) or just estimate that each FPM child process can use up to 32 MiB of RAM. - - - You can find the meaning of all the parameters in this [official PHP FPM documentation page](https://www.php.net/manual/en/install.fpm.configuration.php). - -#### **Resources** - -1. You need to declare a new `Service` and a new `StatefulSet`, so create their respective files under the `$HOME/k8sprjs/nextcloud/components/server-nextcloud/resources` path. - - ~~~bash - $ cd $HOME/k8sprjs/nextcloud/components/server-nextcloud/resources - $ touch server-nginx-nextcloud.service.loadbalancer.yaml server-nginx-nextcloud.statefulset.yaml - ~~~ - -2. In the `server-nginx-nextcloud.service.loadbalancer.yaml` file, copy the `Service` resource declaration next. - - ~~~yaml - apiVersion: v1 - kind: Service - - metadata: - annotations: - prometheus.io/scrape: "true" - prometheus.io/port: "9205" - name: server-nginx-nextcloud - spec: - type: LoadBalancer - clusterIP: 10.43.100.3 - loadBalancerIP: 192.168.1.42 - ports: - - port: 443 - protocol: TCP - name: server - - port: 9205 - protocol: TCP - name: metrics - ~~~ - - You'll notice that the `Service` declared above is exactly the same as the one you used with the Apache configuration, except for the `metadata.name` parameter. As you can see, the `Service` only cares about the ports and the protocols used to communicate with the service or app it connects with.
In fact, you could have used the very same `Service` resource, although setting it with a bit more generic name such as `server-nextcloud`. - -3. At `server-nginx-nextcloud.statefulset.yaml` set the `StatefulSet` resource below. - - ~~~yaml - apiVersion: apps/v1 - kind: StatefulSet - - metadata: - name: server-nginx-nextcloud - spec: - replicas: 1 - serviceName: server-nginx-nextcloud - template: - spec: - containers: - - name: fpm - image: nextcloud:22.2-fpm-alpine - ports: - - containerPort: 9000 - env: - - name: NEXTCLOUD_ADMIN_USER - valueFrom: - configMapKeyRef: - name: server-nginx-nextcloud - key: nextcloud-admin-username - - name: NEXTCLOUD_ADMIN_PASSWORD - valueFrom: - secretKeyRef: - name: server-nextcloud - key: nextcloud-admin-password - - name: NEXTCLOUD_TRUSTED_DOMAINS - valueFrom: - configMapKeyRef: - name: server-nginx-nextcloud - key: nextcloud-trusted-domains - - name: MYSQL_HOST - valueFrom: - configMapKeyRef: - name: server-nginx-nextcloud - key: db-mariadb-svc-cluster-ip - - name: MYSQL_DATABASE - valueFrom: - configMapKeyRef: - name: db-mariadb - key: nextcloud-db-name - - name: MYSQL_USER - valueFrom: - configMapKeyRef: - name: db-mariadb - key: nextcloud-username - - name: MYSQL_PASSWORD - valueFrom: - secretKeyRef: - name: db-mariadb - key: nextcloud-user-password - - name: REDIS_HOST - valueFrom: - configMapKeyRef: - name: server-nginx-nextcloud - key: cache-redis-svc-cluster-ip - - name: REDIS_HOST_PASSWORD - valueFrom: - secretKeyRef: - name: cache-redis - key: redis-password - lifecycle: - postStart: - exec: - command: - - "sh" - - "-c" - - | - chown www-data:www-data /var/www/html/data - apk add openrc - start-stop-daemon --background /cron.sh - resources: - limits: - memory: 512Mi - volumeMounts: - - name: nginx-config - subPath: zz-docker.conf - mountPath: /usr/local/etc/php-fpm.d/zz-docker.conf - - name: html-storage - mountPath: /var/www/html - - name: data-storage - mountPath: /var/www/html/data - - name: server - image: nginx:1.21-alpine - ports: - - containerPort: 443 - volumeMounts: - - name: certificates - subPath: wildcard.deimos.cloud-tls.crt - mountPath: /etc/nginx/cert/wildcard.deimos.cloud-tls.crt - - name: certificates - subPath: wildcard.deimos.cloud-tls.key - mountPath: /etc/nginx/cert/wildcard.deimos.cloud-tls.key - - name: nginx-config - subPath: dhparam.pem - mountPath: /etc/ssl/dhparam.pem - - name: nginx-config - subPath: nginx.conf - mountPath: /etc/nginx/nginx.conf - - name: html-storage - mountPath: /var/www/html - - name: data-storage - mountPath: /var/www/html/data - - name: metrics - image: xperimental/nextcloud-exporter:0.4.0-15-gbb88fb6 - ports: - - containerPort: 9205 - env: - - name: NEXTCLOUD_SERVER - value: "https://localhost" - - name: NEXTCLOUD_TLS_SKIP_VERIFY - value: "true" - - name: NEXTCLOUD_USERNAME - valueFrom: - configMapKeyRef: - name: server-nginx-nextcloud - key: nextcloud-admin-username - - name: NEXTCLOUD_PASSWORD - valueFrom: - secretKeyRef: - name: server-nextcloud - key: nextcloud-admin-password - resources: - limits: - memory: 32Mi - hostNetwork: true - volumes: - - name: nginx-config - configMap: - name: server-nginx-nextcloud - defaultMode: 0444 - items: - - key: dhparam.pem - path: dhparam.pem - - key: nginx.conf - path: nginx.conf - - key: zz-docker.conf - path: zz-docker.conf - - name: certificates - secret: - secretName: wildcard.deimos.cloud-tls - defaultMode: 0444 - items: - - key: tls.crt - path: wildcard.deimos.cloud-tls.crt - - key: tls.key - path: wildcard.deimos.cloud-tls.key - - name: 
html-storage - persistentVolumeClaim: - claimName: html-server-nextcloud - - name: data-storage - persistentVolumeClaim: - claimName: data-server-nextcloud - ~~~ - - The `StatefulSet` above might feel similar to the one you used with the Apache setup, but in this case it describes a pod holding three containers, instead of two, in a sidecar pattern. - - - `fpm` container: executes the Nextcloud server itself. - - The `image` is an Alpine Linux variant running [the FPM-based stable 22.2 version of Nextcloud](https://hub.docker.com/_/nextcloud). - - `env` section: notice that some values are taken from the secrets and config maps defined for the Redis and MariaDB pods. - - `NEXTCLOUD_ADMIN_USER` and `NEXTCLOUD_ADMIN_PASSWORD`: define the administrator user for the Nextcloud server, creating it when the server autoinstalls itself. - - `NEXTCLOUD_TRUSTED_DOMAINS`: where the list of trusted domains must be set. - - `MYSQL_HOST` and `MYSQL_DATABASE`: the IP of the MariaDB service and the name of the database instance Nextcloud has to use. - - `MYSQL_USER` and `MYSQL_PASSWORD`: the user Nextcloud has to use to connect to its own database on the MariaDB server. - - `REDIS_HOST` and `REDIS_HOST_PASSWORD`: the IP of the Redis service and the password required to authenticate with that server. - - - `lifecycle.postStart.exec.command`: as with the Debian image, Nextcloud's Alpine variant also requires executing the same commands right after starting, although adapted to the proper ones for an Alpine system. Go back to the [G033 guide](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%204%20-%20Nextcloud%20server.md#nextcloud-server-stateful-resource) and see the **BEWARE!** note about this particular detail. - - - `volumeMounts` section: mounts a configuration file and two storage volumes. - - `/usr/local/etc/php-fpm.d/zz-docker.conf`: the path for the extra configuration file for the PHP FPM engine executing Nextcloud. - - `/var/www/html`: the path where Nextcloud is installed. - - `/var/www/html/data`: where Nextcloud stores the user data. This folder must be owned by the `www-data` user that exists within the container. - - - `server` container: runs the Nginx server. - - The `image` is an Alpine Linux variant running [the latest 1.21 version of Nginx](https://hub.docker.com/_/nginx). - - - `volumeMounts` section: it has several files mounted and also the same volumes used in the Nextcloud FPM container. - - `wildcard.deimos.cloud-tls.crt` and `wildcard.deimos.cloud-tls.key`: the files that make up the certificate; both must be in the `/etc/nginx/cert` path, since it's the one set for them in the `nginx.conf` file. - - `/etc/ssl/dhparam.pem`: the file with parameters for SSL encryption. - - `/etc/nginx/nginx.conf`: path for the Nginx configuration file, provided by the config map enabled in the `volumes` section at the bottom of this yaml manifest. - - `/var/www/html` and `/var/www/html/data`: they're the same folders used by Nextcloud. Nginx needs to access them to serve their content to clients. - - - `metrics` container: runs the Prometheus metrics exporter of Nextcloud. - - The `image` for this Prometheus exporter is set to be [the latest one available](https://hub.docker.com/r/xperimental/nextcloud-exporter), and probably runs on a Debian system, although it's not specified. - - - `env` section: - - `NEXTCLOUD_SERVER`: since this container runs on the same pod as the Nextcloud server, it's set as `localhost`.
- - `NEXTCLOUD_TLS_SKIP_VERIFY`: since the certificate is self-signed and this service is running on the same pod, there's little need to verify the certificate when this service connects to Nextcloud. - - `NEXTCLOUD_USERNAME` and `NEXTCLOUD_PASSWORD`: here you'll be forced to use the same administrator user you defined for the Nextcloud server, since it's the only one you have at this point. - - - `template.spec.volumes`: here you have enabled the two volumes prepared for Nextcloud, but also the configuration files for FPM and Nginx, and the certificate files available in the `wildcard.deimos.cloud-tls` secret. Notice how all the files are enabled with a read-only permission mode, using the `defaultMode` parameter. - -#### **Kustomize subproject** - -Now you need to modify the `kustomization.yaml` file of the Nextcloud server subproject. - -1. Make a backup of the `kustomization.yaml` file. - - ~~~bash - $ cd $HOME/k8sprjs/nextcloud/components/server-nextcloud - $ cp kustomization.yaml kustomization.yaml.bkp - ~~~ - -2. Edit the `kustomization.yaml` file to make it look like below. - - ~~~yaml - # Nextcloud server setup - apiVersion: kustomize.config.k8s.io/v1beta1 - kind: Kustomization - - commonLabels: - app: server-nextcloud - - resources: - - resources/data-server-nextcloud.persistentvolumeclaim.yaml - - resources/html-server-nextcloud.persistentvolumeclaim.yaml - - resources/server-nginx-nextcloud.service.loadbalancer.yaml - - resources/server-nginx-nextcloud.statefulset.yaml - - replicas: - - name: server-nginx-nextcloud - count: 1 - - images: - - name: nextcloud - newTag: 22.2-fpm-alpine - - name: nginx - newTag: 1.21-alpine - - name: xperimental/nextcloud-exporter - newTag: 0.4.0-15-gbb88fb6 - - configMapGenerator: - - name: server-nginx-nextcloud - envs: - - configs/params.properties - files: - - configs/dhparam.pem - - configs/nginx.conf - - configs/zz-docker.conf - - secretGenerator: - - name: server-nextcloud - files: - - nextcloud-admin-password=secrets/nextcloud-admin.pwd - ~~~ - - The things that have changed are: - - In the `resources` list, the `server-nginx-nextcloud` files. - - In the `images` list, the `newTag` for the `nextcloud` image and the addition of the `nginx` image. - - In the `configMapGenerator`, the name of the only config map there has changed to `server-nginx-nextcloud`, and its `files` list now contains all the configuration files that correspond to the Nginx setup. - -3. All that remains is to validate the yaml of this Kustomize subproject and of the main Nextcloud platform project and, finally, to deploy it on your cluster. - - ~~~bash - $ kubectl kustomize $HOME/k8sprjs/nextcloud/components/server-nextcloud | less - $ kubectl kustomize $HOME/k8sprjs/nextcloud | less - $ kubectl apply -k $HOME/k8sprjs/nextcloud - ~~~ - - > **BEWARE!** - > If you had already deployed the Apache version of Nextcloud as explained in the G033 guide, you'll have to delete it from your cluster and remake the `k3smnt` folders in the corresponding storage volumes. - -### _Traefik IngressRoute configuration_ - -Instead of exposing your Nginx-based Nextcloud server through an external IP provided by the MetalLB load balancer, you can enable ingress access through Traefik to expose its service. Next, you'll see how to transform the previous load balancer setup into one using a Traefik `IngressRoute`. - -#### **Adapting the configuration** - -You only need to modify the `nginx.conf` file a bit, removing all the lines related to the SSL certificate. - -1.
Make a renamed copy of the `nginx.conf` file. - - ~~~bash - $ cd $HOME/k8sprjs/nextcloud/components/server-nextcloud/configs - $ cp nginx.conf nginx.no-ssl.conf - ~~~ - - See that I've added a `.no-ssl` suffix to the renamed copy. - -2. Edit the `nginx.no-ssl.conf` copy to remove from it only the lines related to the SSL certificate. The file should end up looking like the content below. - - ~~~nginx - worker_processes auto; - - error_log /dev/stdout debug; - pid /var/run/nginx.pid; - - events { - worker_connections 8; - } - - http { - include /etc/nginx/mime.types; - default_type application/octet-stream; - - log_format main '$remote_addr - $remote_user [$time_local] $status ' - '"$request" $body_bytes_sent "$http_referer" ' - '"$http_user_agent" "$http_x_forwarded_for"'; - - access_log /dev/stdout main; - - # Disabled because this nginx is acting - # as a reverse proxy for an application server. - sendfile off; - - upstream php-handler { - server 127.0.0.1:9000; - } - - server { - listen 443 http2; - server_name nextcloud.deimos.cloud nxc.deimos.cloud; - - # HSTS settings - # WARNING: Only add the preload option once you read about - # the consequences in https://hstspreload.org/. This option - # will add the domain to a hardcoded list that is shipped - # in all major browsers and getting removed from this list - # could take several months. - #add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;" always; - - # set max upload size - client_max_body_size 10240M; - fastcgi_buffers 64 4K; - - # Enable gzip but do not remove ETag headers - gzip on; - gzip_vary on; - gzip_comp_level 4; - gzip_min_length 256; - gzip_proxied expired no-cache no-store private no_last_modified no_etag auth; - gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy; - - # Pagespeed is not supported by Nextcloud, so if your server is built - # with the `ngx_pagespeed` module, uncomment this line to disable it. - #pagespeed off; - - # HTTP response headers borrowed from Nextcloud `.htaccess` - add_header Referrer-Policy "no-referrer" always; - add_header X-Content-Type-Options "nosniff" always; - add_header X-Download-Options "noopen" always; - add_header X-Frame-Options "SAMEORIGIN" always; - add_header X-Permitted-Cross-Domain-Policies "none" always; - add_header X-Robots-Tag "none" always; - add_header X-XSS-Protection "1; mode=block" always; - - # Remove X-Powered-By, which is an information leak - fastcgi_hide_header X-Powered-By; - - # Path to the root of your installation - root /var/www/html; - - # Specify how to handle directories -- specifying `/index.php$request_uri` - # here as the fallback means that Nginx always exhibits the desired behaviour - # when a client requests a path that corresponds to a directory that exists - # on the server. In particular, if that directory contains an index.php file, - # that file is correctly served; if it doesn't, then the request is passed to - # the front-end controller. This consistent behaviour means that we don't need - # to specify custom rules for certain paths (e.g.
images and other assets, - # `/updater`, `/ocm-provider`, `/ocs-provider`), and thus - # `try_files $uri $uri/ /index.php$request_uri` - # always provides the desired behaviour. - index index.php index.html /index.php$request_uri; - - # Rule borrowed from `.htaccess` to handle Microsoft DAV clients - location = / { - if ( $http_user_agent ~ ^DavClnt ) { - return 302 /remote.php/webdav/$is_args$args; - } - } - - location = /robots.txt { - allow all; - log_not_found off; - access_log off; - } - - # Make a regex exception for `/.well-known` so that clients can still - # access it despite the existence of the regex rule - # `location ~ /(\.|autotest|...)` which would otherwise handle requests - # for `/.well-known`. - location ^~ /.well-known { - # The rules in this block are an adaptation of the rules - # in `.htaccess` that concern `/.well-known`. - - location = /.well-known/carddav { return 301 /remote.php/dav/; } - location = /.well-known/caldav { return 301 /remote.php/dav/; } - - location /.well-known/acme-challenge { try_files $uri $uri/ =404; } - location /.well-known/pki-validation { try_files $uri $uri/ =404; } - - # Let Nextcloud's API for `/.well-known` URIs handle all other - # requests by passing them to the front-end controller. - return 301 /index.php$request_uri; - } - - # Rules borrowed from `.htaccess` to hide certain paths from clients - location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)(?:$|/) { return 404; } - location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) { return 404; } - - # Ensure this block, which passes PHP files to the PHP process, is above the blocks - # which handle static assets (as seen below). If this block is not declared first, - # then Nginx will encounter an infinite rewriting loop when it prepends `/index.php` - # to the URI, resulting in a HTTP 500 error response. 
- location ~ \.php(?:$|/) { - # Required for legacy support - rewrite ^/(?!index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+|.+\/richdocumentscode\/proxy) /index.php$request_uri; - - # regex to split $uri to $fastcgi_script_name and $fastcgi_path_info - fastcgi_split_path_info ^(.+?\.php)(/.*)$; - - # Check that the PHP script exists before passing it - if (!-f $document_root$fastcgi_script_name) { - return 404; - } - - # Mitigate https://httpoxy.org/ vulnerabilities - fastcgi_param HTTP_PROXY ""; - - # pass the request to the PHP handler - fastcgi_pass php-handler; - - fastcgi_index index.php; - - # Bypass the fact that try_files resets $fastcgi_path_info - # see: http://trac.nginx.org/nginx/ticket/321 - set $path_info $fastcgi_path_info; - fastcgi_param PATH_INFO $path_info; - - # Avoid sending the security headers twice - fastcgi_param modHeadersAvailable true; - # Enable pretty urls - fastcgi_param front_controller_active true; - - # set the standard fcgi parameters - include fastcgi.conf; - - fastcgi_intercept_errors on; - fastcgi_request_buffering off; - } - - location ~ \.(?:css|js|svg|gif)$ { - try_files $uri /index.php$request_uri; - expires 6M; # Cache-Control policy borrowed from `.htaccess` - access_log off; # Optional: Don't log access to assets - } - - location ~ \.woff2?$ { - try_files $uri /index.php$request_uri; - expires 7d; # Cache-Control policy borrowed from `.htaccess` - access_log off; # Optional: Don't log access to assets - } - - # Rule borrowed from `.htaccess` - location /remote { - return 301 /remote.php$request_uri; - } - - location / { - try_files $uri $uri/ /index.php$request_uri; - } - } - } - ~~~ - - All the changes are inside the `server` block: all the lines starting with the `ssl_` string, as well as the `ssl` value in the `listen` directive, have been removed. - -#### **Resources** - -You need to declare slightly different `Service` and `StatefulSet` resources than in the load balancer case, plus a Traefik `IngressRoute`. - -1. Go to the subproject's `resources` folder. - - ~~~bash - $ cd $HOME/k8sprjs/nextcloud/components/server-nextcloud/resources - ~~~ - -2. You need to make renamed copies of `server-nginx-nextcloud.service.loadbalancer.yaml` and `server-nginx-nextcloud.statefulset.yaml`. - - ~~~bash - $ cp server-nginx-nextcloud.service.loadbalancer.yaml server-nginx-nextcloud.service.clusterip.yaml - $ cp server-nginx-nextcloud.statefulset.yaml server-nginx-nextcloud.statefulset.no-ssl.yaml - ~~~ - - The `Service` resource copy replaces the `.loadbalancer` string with `.clusterip`, while the `StatefulSet` yaml gets a `.no-ssl` suffix. - -3. Edit the `server-nginx-nextcloud.service.clusterip.yaml` file to turn the `Service` into the `ClusterIP` type. - - ~~~yaml - apiVersion: v1 - kind: Service - - metadata: - annotations: - prometheus.io/scrape: "true" - prometheus.io/port: "9205" - name: server-nginx-nextcloud - spec: - type: ClusterIP - clusterIP: 10.43.100.3 - ports: - - port: 443 - protocol: TCP - name: server - - port: 9205 - protocol: TCP - name: metrics - ~~~ - - As with the load balancer setup, this file is almost identical to the one used in the Apache scenario, except for the `metadata.name`. - -4. Edit `server-nginx-nextcloud.statefulset.no-ssl.yaml` and remove from it the lines related to the wildcard certificate.
- - ~~~yaml - apiVersion: apps/v1 - kind: StatefulSet - - metadata: - name: server-nginx-nextcloud - spec: - replicas: 1 - serviceName: server-nginx-nextcloud - template: - spec: - containers: - - name: fpm - image: nextcloud:22.2-fpm-alpine - ports: - - containerPort: 9000 - env: - - name: NEXTCLOUD_ADMIN_USER - valueFrom: - configMapKeyRef: - name: server-nginx-nextcloud - key: nextcloud-admin-username - - name: NEXTCLOUD_ADMIN_PASSWORD - valueFrom: - secretKeyRef: - name: server-nextcloud - key: nextcloud-admin-password - - name: NEXTCLOUD_TRUSTED_DOMAINS - valueFrom: - configMapKeyRef: - name: server-nginx-nextcloud - key: nextcloud-trusted-domains - - name: MYSQL_HOST - valueFrom: - configMapKeyRef: - name: server-nginx-nextcloud - key: db-mariadb-svc-cluster-ip - - name: MYSQL_DATABASE - valueFrom: - configMapKeyRef: - name: db-mariadb - key: nextcloud-db-name - - name: MYSQL_USER - valueFrom: - configMapKeyRef: - name: db-mariadb - key: nextcloud-username - - name: MYSQL_PASSWORD - valueFrom: - secretKeyRef: - name: db-mariadb - key: nextcloud-user-password - - name: REDIS_HOST - valueFrom: - configMapKeyRef: - name: server-nginx-nextcloud - key: cache-redis-svc-cluster-ip - - name: REDIS_HOST_PASSWORD - valueFrom: - secretKeyRef: - name: cache-redis - key: redis-password - lifecycle: - postStart: - exec: - command: - - "sh" - - "-c" - - | - chown www-data:www-data /var/www/html/data - apk add openrc - start-stop-daemon --background /cron.sh - resources: - limits: - memory: 512Mi - volumeMounts: - - name: nginx-config - subPath: zz-docker.conf - mountPath: /usr/local/etc/php-fpm.d/zz-docker.conf - - name: html-storage - mountPath: /var/www/html - - name: data-storage - mountPath: /var/www/html/data - - name: server - image: nginx:1.21-alpine - ports: - - containerPort: 443 - volumeMounts: - - name: nginx-config - subPath: nginx.conf - mountPath: /etc/nginx/nginx.conf - - name: html-storage - mountPath: /var/www/html - - name: data-storage - mountPath: /var/www/html/data - - name: metrics - image: xperimental/nextcloud-exporter:0.4.0-15-gbb88fb6 - ports: - - containerPort: 9205 - env: - - name: NEXTCLOUD_SERVER - value: "https://localhost" - - name: NEXTCLOUD_TLS_SKIP_VERIFY - value: "true" - - name: NEXTCLOUD_USERNAME - valueFrom: - configMapKeyRef: - name: server-nginx-nextcloud - key: nextcloud-admin-username - - name: NEXTCLOUD_PASSWORD - valueFrom: - secretKeyRef: - name: server-nextcloud - key: nextcloud-admin-password - resources: - limits: - memory: 32Mi - hostNetwork: true - volumes: - - name: nginx-config - configMap: - name: server-nginx-nextcloud - defaultMode: 0444 - items: - - key: nginx.conf - path: nginx.conf - - key: zz-docker.conf - path: zz-docker.conf - - name: html-storage - persistentVolumeClaim: - claimName: html-server-nextcloud - - name: data-storage - persistentVolumeClaim: - claimName: data-server-nextcloud - ~~~ - - The elements to remove are all the `wildcard.deimos.cloud-tls` references from the `volumeMounts` list in the `server` container declaration, the whole `certificates` block in the `volumes` list at the end of the yaml and, since the no-SSL `nginx.conf` no longer uses it, the `dhparam.pem` mount together with its corresponding item in the `nginx-config` volume. - -5. Create a new file for the Traefik `IngressRoute`. - - ~~~bash - $ touch server-nginx-nextcloud.ingressroute.traefik.yaml - ~~~ - -6. Put in `server-nginx-nextcloud.ingressroute.traefik.yaml` the yaml below.
- - ~~~yaml - apiVersion: traefik.containo.us/v1alpha1 - kind: IngressRoute - - metadata: - name: server-nginx-nextcloud - spec: - entryPoints: - - websecure - tls: - secretName: wildcard.deimos.cloud-tls - routes: - - match: Host(`nextcloud.deimos.cloud`) || Host(`nxc.deimos.cloud`) - kind: Rule - services: - - name: server-nginx-nextcloud - kind: Service - port: 443 - ~~~ - - You might notice that this `IngressRoute` is exactly the same as the one used in the Apache scenario, except for the `metadata.name` and the referenced service `name`. - -#### **Kustomize subproject** - -You have to make the final changes to the `kustomization.yaml` file of this Nextcloud server Kustomize subproject. - -1. Make a backup of your current `kustomization.yaml` file. - - ~~~bash - $ cd $HOME/k8sprjs/nextcloud/components/server-nextcloud - $ cp kustomization.yaml kustomization.yaml.bkp - ~~~ - -2. Edit the `kustomization.yaml` file to make it look like the yaml below. - - ~~~yaml - # Nextcloud server setup - apiVersion: kustomize.config.k8s.io/v1beta1 - kind: Kustomization - - commonLabels: - app: server-nextcloud - - resources: - - resources/data-server-nextcloud.persistentvolumeclaim.yaml - - resources/html-server-nextcloud.persistentvolumeclaim.yaml - - resources/server-nginx-nextcloud.ingressroute.traefik.yaml - - resources/server-nginx-nextcloud.service.clusterip.yaml - - resources/server-nginx-nextcloud.statefulset.no-ssl.yaml - - replicas: - - name: server-nginx-nextcloud - count: 1 - - images: - - name: nextcloud - newTag: 22.2-fpm-alpine - - name: nginx - newTag: 1.21-alpine - - name: xperimental/nextcloud-exporter - newTag: 0.4.0-15-gbb88fb6 - - configMapGenerator: - - name: server-nginx-nextcloud - envs: - - configs/params.properties - files: - - nginx.conf=configs/nginx.no-ssl.conf - - configs/zz-docker.conf - - secretGenerator: - - name: server-nextcloud - files: - - nextcloud-admin-password=secrets/nextcloud-admin.pwd - ~~~ - - The modifications made above are: - - In the `resources` list, the `server-nginx-nextcloud` files. - - In the `configMapGenerator`, one file has been removed (`dhparam.pem`), and the `nginx.no-ssl.conf` file replaces the original `nginx.conf`. - -3. The last things to do are the validations and, then, deploying the project on your cluster. - - ~~~bash - $ kubectl kustomize $HOME/k8sprjs/nextcloud/components/server-nextcloud | less - $ kubectl kustomize $HOME/k8sprjs/nextcloud | less - $ kubectl apply -k $HOME/k8sprjs/nextcloud - ~~~ - - > **BEWARE!** - > If you had already deployed the Apache version of Nextcloud as explained in the G033 guide, you'll have to delete it from your cluster and remake the `k3smnt` folders in the corresponding storage volumes.
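- - To be sure the new setup is responding before you open a browser, you can run a couple of quick checks like the ones below. They assume the `nextcloud` namespace used throughout this setup, that your `kubectl` client can see Traefik's CRDs (as in a default K3s install), and that your local DNS already resolves `nextcloud.deimos.cloud` to your Traefik entry point; adjust them if your values differ. - - ~~~bash - $ kubectl get statefulset,svc,ingressroute -n nextcloud - $ curl -k https://nextcloud.deimos.cloud/status.php - ~~~ - - The `status.php` endpoint should return a small JSON string with, among other things, the installed Nextcloud version, which confirms that Traefik is reaching the Nginx container and that Nginx is handing requests over to FPM.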
- -## Relevant system paths - -### _Folders in `kubectl` client system_ - -- `$HOME/k8sprjs/nextcloud` -- `$HOME/k8sprjs/nextcloud/components/server-nextcloud` -- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/configs` -- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/resources` - -### _Files in `kubectl` client system_ - -- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/kustomization.yaml` -- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/kustomization.yaml.bkp` -- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/configs/000-default.conf` -- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/configs/000-default.no-ssl.conf` -- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/configs/dhparam.pem` -- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/configs/nginx.conf` -- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/configs/zz-docker.conf` -- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/resources/server-apache-nextcloud.ingressroute.traefik.yaml` -- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/resources/server-apache-nextcloud.service.yaml` -- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/resources/server-apache-nextcloud.service.clusterip.yaml` -- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/resources/server-apache-nextcloud.statefulset.yaml` -- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/resources/server-apache-nextcloud.statefulset.no-ssl.yaml` -- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/resources/server-nginx-nextcloud.ingressroute.traefik.yaml` -- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/resources/server-nginx-nextcloud.service.clusterip.yaml` -- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/resources/server-nginx-nextcloud.service.loadbalancer.yaml` -- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/resources/server-nginx-nextcloud.statefulset.yaml` -- `$HOME/k8sprjs/nextcloud/components/server-nextcloud/resources/server-nginx-nextcloud.statefulset.no-ssl.yaml` - -## References - -### _Kubernetes_ - -#### **Executing multiple commands in lifecycle (postStart/preStop) of containers** - -- [Kubernetes - Passing multiple commands to the container](https://stackoverflow.com/questions/33979501/kubernetes-passing-multiple-commands-to-the-container) -- [multiple command in postStart hook of a container](https://stackoverflow.com/questions/39436845/multiple-command-in-poststart-hook-of-a-container) - -### _Nextcloud_ - -- [Official Docker build of Nextcloud](https://hub.docker.com/_/nextcloud) -- [Official Docker build of Nextcloud on GitHub](https://github.com/nextcloud/docker) -- [Prometheus exporter for getting some metrics of a Nextcloud server instance](https://hub.docker.com/r/xperimental/nextcloud-exporter) -- [Installation and server configuration](https://docs.nextcloud.com/server/stable/admin_manual/installation/index.html) -- [Installation on Linux](https://docs.nextcloud.com/server/stable/admin_manual/installation/source_installation.html#installation-on-linux) -- [Database configuration](https://docs.nextcloud.com/server/latest/admin_manual/configuration_database/linux_database_configuration.html) -- [How to Fix Common NextCloud Performance Issues](https://autoize.com/nextcloud-performance-troubleshooting/) -- [Nextcloud on Kubernetes, by modzilla99](https://github.com/modzilla99/kubernetes-nextcloud) -- [Nextcloud on Kubernetes, by andremotz](https://github.com/andremotz/nextcloud-kubernetes) -- [Deploy Nextcloud - Yaml with application and database container 
and a configure a secret](https://www.debontonline.com/2021/05/part-15-deploy-nextcloud-yaml-with.html) -- [Self-Host Nextcloud Using Kubernetes](https://blog.true-kubernetes.com/self-host-nextcloud-using-kubernetes/) -- [Deploying NextCloud on Kubernetes with Kustomize](https://medium.com/@acheaito/nextcloud-on-kubernetes-19658785b565) -- [A NextCloud Kubernetes deployment](https://github.com/acheaito/nextcloud-kubernetes) -- [Install NextCloud on Ubuntu 20.04 with Apache (LAMP Stack)](https://www.techwizcr.com/install-nextcloud-on-ubuntu-20-04-with-apache-lamp-stack/nextcloud/) -- [Nextcloud Setup with Nginx](https://dev.to/yparam98/nextcloud-setup-with-nginx-2cm1) -- [Nextcloud server installation with NGINX](https://wiki.mageia.org/en/Nextcloud_server_installation_with_NGINX) -- [Nextcloud self-hosting on K8s](https://eramons.github.io/techblog/post/nextcloud/) -- [Nextcloud self-hosting on K8s on GitHub](https://github.com/eramons/kubenextcloud) -- [Installing Nextcloud on Ubuntu with Redis, APCu, SSL & Apache](https://bayton.org/docs/nextcloud/installing-nextcloud-on-ubuntu-16-04-lts-with-redis-apcu-ssl-apache/) -- [Setup NextCloud Server with Nginx SSL Reverse-Proxy and Apache2 Backend](https://breuer.dev/tutorial/Setup-NextCloud-FrontEnd-Nginx-SSL-Backend-Apache2) -- [Install Nextcloud with Apache2 on Debian 10](https://vectops.com/2021/01/install-nextcloud-with-apache2-on-debian-10/) -- [Nextcloud scale-out using Kubernetes](https://faun.pub/nextcloud-scale-out-using-kubernetes-93c9cac9e493) -- [PHP-FPM, Nginx, Kubernetes, and Docker](https://matthewpalmer.net/kubernetes-app-developer/articles/php-fpm-nginx-kubernetes.html) -- [nginx nextcloud config](https://pastebin.com/iTybBRBC) -- [How to install Nextcloud with Nginx and PHP7-FPM on CentOS 7](https://kreationnext.com/support/how-to-install-nextcloud-with-nginx-and-php7-fpm-on-centos-7/) -- [Nextcloud and PHP-FPM adjustments](https://www.reddit.com/r/NextCloud/comments/9e5ljv/nextcloud_and_phpfpm_adjustments/) -- [How to tune Nextcloud on-premise cloud server for better performance](https://www.techrepublic.com/article/how-to-tune-nextcloud-on-premise-cloud-server-for-better-performance/) -- [Nextcloud Installationsanleitung auf Basis von Ubuntu Server 20.04 focal fossa oder Debian 11 bullseye mit nginx, MariaDB, PHP 8 fpm, Let’s Encrypt (acme), Redis, ufw, Fail2ban, postfix und netdata](https://www.c-rieger.de/nextcloud-installationsanleitung/) - -### _Apache httpd_ - -- [Apache HTTP Server Documentation](https://httpd.apache.org/docs/) -- [apachectl - Apache HTTP Server Control Interface](https://httpd.apache.org/docs/2.4/programs/apachectl.html) -- [Apache Module mod_http2](https://httpd.apache.org/docs/2.4/mod/mod_http2.html) -- [Multi-Processing Modules (MPMs)](https://httpd.apache.org/docs/2.4/mpm.html) -- [Apache MPM prefork](https://httpd.apache.org/docs/2.4/mod/prefork.html) -- [Apache MPM worker](https://httpd.apache.org/docs/2.4/mod/worker.html) -- [Apache MPM event](https://httpd.apache.org/docs/2.4/mod/event.html) -- [How to enable or disable Apache modules](https://www.simplified.guide/apache/enable-disable-module) -- [How to enable HTTP/2 support in Apache](https://http2.pro/doc/Apache) -- [Using HTTP/2 with Nextcloud](https://feutl.github.io/nextcloud-http2/) - -### _PHP-FPM_ - -- [FastCGI Process Manager (FPM) Configuration](https://www.php.net/manual/en/install.fpm.configuration.php) -- [PHP-FPM Process Calculator](https://spot13.com/pmcalculator/) -- [PHP 8 Alpine 3.14 FPM image 
Dockerfile](https://github.com/docker-library/php/blob/master/8.0/alpine3.14/fpm/Dockerfile) -- [How and where to configure pm.max_children for php-fpm with Docker?](https://serverfault.com/questions/884256/how-and-where-to-configure-pm-max-children-for-php-fpm-with-docker) -- [A better way to run PHP-FPM](https://ma.ttias.be/a-better-way-to-run-php-fpm/) -- [How to reduce PHP-FPM (php5-fpm) RAM usage by about 50%](https://linuxbsdos.com/2015/02/17/how-to-reduce-php-fpm-php5-fpm-ram-usage-by-about-50/) -- [PHP-FPM settings tutorial. max_servers, min_servers, etc.](https://thisinterestsme.com/php-fpm-settings/) -- [Optimizar y reducir el uso de memoria de PHP-FPM](https://rm-rf.es/optimizar-reducir-consumo-memoria-php-fpm/) - -### _Nginx_ - -- [Official Docker build of Nginx](https://hub.docker.com/_/nginx) -- [Configuring NGINX and NGINX Plus as a Web Server](https://docs.nginx.com/nginx/admin-guide/web-server/web-server/) -- [Full Example Configuration](https://www.nginx.com/resources/wiki/start/topics/examples/full/) -- [PHP FastCGI Example](https://www.nginx.com/resources/wiki/start/topics/examples/phpfcgi/) -- [Nginx + phpFPM: PATH_INFO always empty](https://stackoverflow.com/questions/20848899/nginx-phpfpm-path-info-always-empty) -- [Nginx $document_root$fastcgi_script_name vs $request_filename](https://serverfault.com/questions/465607/nginx-document-rootfastcgi-script-name-vs-request-filename) -- [fastcgi_params Versus fastcgi.conf – Nginx Config History](https://blog.martinfjordvald.com/nginx-config-history-fastcgi_params-versus-fastcgi-conf/) -- [Problem configuring php-fpm with nginx](https://serverfault.com/questions/226779/problem-configuring-php-fpm-with-nginx) -- [Two ways of communication between nginx and PHP FPM](https://developpaper.com/two-ways-of-communication-between-nginx-and-php-fpm/) -- [Using a custom nginx.conf on GKE](https://cloud.google.com/endpoints/docs/openapi/custom-nginx) -- [Mozilla SSL Configuration Generator](https://ssl-config.mozilla.org/) -- [Setup SSL on NGINX and configure for best security](https://www.ssltrust.com.au/help/setup-guides/setup-ssl-nginx-configure-best-security) -- [How To Create a Self-Signed SSL Certificate for Nginx in Ubuntu 20.04](https://www.digitalocean.com/community/tutorials/how-to-create-a-self-signed-ssl-certificate-for-nginx-in-ubuntu-20-04-1) - -## Navigation - -[<< Previous (**G910. Appendix 10**)](G910%20-%20Appendix%2010%20~%20Setting%20up%20virtual%20network%20with%20Open%20vSwitch.md) | [+Table Of Contents+](G000%20-%20Table%20Of%20Contents.md) | [Next (**G912. Appendix 12**) >>](G912%20-%20Appendix%2012%20~%20Checking%20the%20K8s%20API%20endpoints%20status.md) diff --git a/G912 - Appendix 12 ~ Checking the K8s API endpoints status.md b/G911 - Appendix 11 ~ Checking the K8s API endpoints status.md similarity index 91% rename from G912 - Appendix 12 ~ Checking the K8s API endpoints status.md rename to G911 - Appendix 11 ~ Checking the K8s API endpoints status.md index ddd135a..3bcaf73 100644 --- a/G912 - Appendix 12 ~ Checking the K8s API endpoints status.md +++ b/G911 - Appendix 11 ~ Checking the K8s API endpoints status.md @@ -1,4 +1,4 @@ -# G912 - Appendix 12 ~ Checking the K8s API endpoints' status +# G911 - Appendix 11 ~ Checking the K8s API endpoints' status If you want or need to know the status of your Kubernetes cluster's API endpoints, you can do it with the `kubectl` command. The trick is about invoking directly certain URLs active in your cluster with the `get` action and the `--raw` flag. 
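For instance, the API server's aggregated health endpoints can be queried this way; appending the `?verbose` query string makes the API server list every individual check it runs. - - ~~~bash - $ kubectl get --raw='/readyz?verbose' - $ kubectl get --raw='/livez?verbose' - ~~~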
@@ -129,4 +129,4 @@ Notice the deprecation notice in the commands output, and also that is not reall ## Navigation -[<< Previous (**G911. Appendix 11**)](G911%20-%20Appendix%2011%20~%20Alternative%20Nextcloud%20web%20server%20setups.md) | [+Table Of Contents+](G000%20-%20Table%20Of%20Contents.md) | [Next (**G913. Appendix 13**) >>](G913%20-%20Appendix%2013%20~%20Post-update%20manual%20maintenance%20tasks%20for%20Nextcloud.md) +[<< Previous (**G910. Appendix 10**)](G910%20-%20Appendix%2010%20~%20Setting%20up%20virtual%20network%20with%20Open%20vSwitch.md) | [+Table Of Contents+](G000%20-%20Table%20Of%20Contents.md) | [Next (**G912. Appendix 12**) >>](G912%20-%20Appendix%2012%20~%20Updating%20MariaDB%20to%20a%20newer%20major%20version.md) diff --git a/G914 - Appendix 14 ~ Updating MariaDB to a newer major version.md b/G912 - Appendix 12 ~ Updating MariaDB to a newer major version.md similarity index 91% rename from G914 - Appendix 14 ~ Updating MariaDB to a newer major version.md rename to G912 - Appendix 12 ~ Updating MariaDB to a newer major version.md index 37cdc5f..81b929a 100644 --- a/G914 - Appendix 14 ~ Updating MariaDB to a newer major version.md +++ b/G912 - Appendix 12 ~ Updating MariaDB to a newer major version.md @@ -1,4 +1,4 @@ -# G914 - Appendix 14 ~ Updating MariaDB to a newer major version +# G912 - Appendix 12 ~ Updating MariaDB to a newer major version MariaDB has been designed to be easily upgraded, something helpful specially when it has been containerized. The standard procedure is explained in [this official documentation page](https://mariadb.com/kb/en/upgrading-between-major-mariadb-versions/), but you won't need to do any of it since there's a much easier way available for containerized MariaDB instances such as the one you deployed in your [Nextcloud setup](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%203%20-%20MariaDB%20database%20server.md). @@ -56,7 +56,7 @@ Rather than executing the update (or upgrade) process yourself, you'll just enab ## References -### _MariaDB_ +### MariaDB - [Upgrading Between Major MariaDB Versions](https://mariadb.com/kb/en/upgrading-between-major-mariadb-versions/) - [MariaDB Docker Environment Variables](https://mariadb.com/kb/en/mariadb-docker-environment-variables/) @@ -65,4 +65,4 @@ Rather than executing the update (or upgrade) process yourself, you'll just enab ## Navigation -[<< Previous (**G913. Appendix 13**)](G913%20-%20Appendix%2013%20~%20Post-update%20manual%20maintenance%20tasks%20for%20Nextcloud.md) | [+Table Of Contents+](G000%20-%20Table%20Of%20Contents.md) | [Next (**G915. Appendix 15**) >>](G915%20-%20Appendix%2015%20~%20Updating%20PostgreSQL%20to%20a%20newer%20major%20version.md) +[<< Previous (**G911. Appendix 11**)](G911%20-%20Appendix%2011%20~%20Checking%20the%20K8s%20API%20endpoints%20status.md) | [+Table Of Contents+](G000%20-%20Table%20Of%20Contents.md) | [Next (**G913. 
Appendix 13**) >>](G913%20-%20Appendix%2013%20~%20Updating%20PostgreSQL%20to%20a%20newer%20major%20version.md) diff --git a/G913 - Appendix 13 ~ Post-update manual maintenance tasks for Nextcloud.md b/G913 - Appendix 13 ~ Post-update manual maintenance tasks for Nextcloud.md deleted file mode 100644 index 09a52b6..0000000 --- a/G913 - Appendix 13 ~ Post-update manual maintenance tasks for Nextcloud.md +++ /dev/null @@ -1,147 +0,0 @@ -# G913 - Appendix 13 ~ Post-update manual maintenance tasks for Nextcloud - -When you update your Nextcloud instance to a new minor or major version, the update can also come with changes that affect Nextcloud's database structure. Nextcloud will warn you of this in the `Administration settings > Overview` page. - -For instance, you could be warned of missing primary keys or columns in the database, as [Basil Hendroff](https://github.com/basilhendroff) illustrates in his [Tech Diary's post](https://blog.udance.com.au/2021/02/25/nextcloud-using-occ-in-a-freenas-jail/) with the following snapshot (red highlighting mine over his screenshot). - -![Nextcloud database warnings capture by Basil Hendroff](images/g914/g914-basil-capture-warnings-nextcloud.webp "Nextcloud database warnings capture by Basil Hendroff") - -Or it could just warn you about missing indexes, as shown in the following capture taken from this [How2itsec post](https://how2itsec.blogspot.com/2021/12/nextcloud-repairing-missing-indexes-in.html) (again, red highlighting is mine). - -![Nextcloud missing indexes warning snapshot by How2itsec](images/g914/g914-how2itsec-capture-missing-indexes.webp "Nextcloud missing indexes warning snapshot by How2itsec") - -The common factor in both examples is that all those warnings indicate how you can solve them by executing a certain Nextcloud `occ` command, together with some specific option for each case. But running shell commands in Kubernetes setups is not a straightforward affair. - -## Concerns - -The main concern here is that you need to execute that `occ` command but you can't do it in any way from Nextcloud's web interface. You must get into the container running your Nextcloud instance and execute the command from there. I already explained how to get shell access into containers in the [Shell access into your containers](G036%20-%20Host%20and%20K3s%20cluster%20~%20Monitoring%20and%20diagnosis.md#shell-access-into-your-containers) section of the [G036 guide](G036%20-%20Host%20and%20K3s%20cluster%20~%20Monitoring%20and%20diagnosis.md). - -With the shell access to the container covered, there are a few extra concerns to consider before trying to execute the `occ` command. - -- The `occ` command **must be executed with the same user** under which Nextcloud is running. This user will be different depending on which Nextcloud image you're running in your setup; for an Apache setup on a Debian-based image, it'll be the `www-data` one. - - The problem here is that you won't have the `sudo` command available in the container. Also, other equivalent commands either won't be present either or, if they are, may not work. The only one I've found able to run `occ` properly is [`runuser`](https://man7.org/linux/man-pages/man1/runuser.1.html). - For safety reasons, you don't want to run updates on your software's database while the software itself is running and accessing its own database. In this case, Nextcloud is perfectly fine if you execute `occ` commands with it still running.
- - Still, you should do this only when **no user** is using your Nextcloud instance, just to be sure you avoid any corruption or similar issues happening on the database. - - In general, **Nextcloud doesn't support downgrades**. But, if you find yourself in a situation where it has to be done somehow, remember that, if you've already applied the `occ` fixes to the database after an update, that database may not work well (if at all) with previous versions of Nextcloud. - -## Procedure - -Next, I'll show you the procedure to execute the `occ` command in the container running the Apache version of Nextcloud's Debian-based image used in the configuration explained by [G033's part 4 guide](G033%20-%20Deploying%20services%2002%20~%20Nextcloud%20-%20Part%204%20-%20Nextcloud%20server.md). - -1. From your `kubectl` client system, check which pod is the one currently running your Nextcloud Apache server container. - - ~~~bash - $ kubectl get pod -n nextcloud - NAME READY STATUS RESTARTS AGE - nxcd-db-mariadb-0 1/1 Running 0 3h5m - nxcd-cache-redis-7477c5b8b4-vpg29 1/1 Running 0 3h5m - nxcd-server-apache-nextcloud-0 1/1 Running 0 3h5m - ~~~ - - In this case, the pod is the one named `nxcd-server-apache-nextcloud-0`. - -2. The container itself is called `server`, so you only have to use the right `kubectl` command to open a shell on it. - - ~~~bash - $ kubectl exec -it -n nextcloud nxcd-server-apache-nextcloud-0 -c server -- bash - ~~~ - - Notice that I'm invoking the **bash** shell in the `kubectl` call, and you should get the following prompt as a result. - - ~~~bash - root@nxcd-server-apache-nextcloud-0:/var/www/html# - ~~~ - - Notice how the prompt indicates that you are `root` in this remote session, and that you're placed directly in the `/var/www/html` folder. I'll reduce the prompt just to the '`#`' character in the upcoming shell code samples. - - > **BEWARE!** - > You're the superuser, but you **cannot execute** the `occ` command with this user. If you tried, `occ` would complain about you not having the right user to execute it. Hence the need to use the `runuser` command to invoke `occ`. - -3. Locate the `occ` command. In the Apache version of the Nextcloud image, it's in `/var/www/html`, right where you got placed when you remotely accessed the container. You can verify this by executing an `ls` there. - - ~~~bash - # ls -al - total 184 - drwxr-xr-x 15 www-data www-data 4096 Mar 16 15:21 . - drwxrwxr-x 1 www-data root 4096 Mar 2 01:25 ..
- -rw-r--r-- 1 www-data www-data 4385 Mar 16 15:20 .htaccess - -rw-r--r-- 1 www-data www-data 101 Mar 16 15:20 .user.ini - drwxr-xr-x 47 www-data www-data 4096 Mar 16 15:20 3rdparty - -rw-r--r-- 1 www-data www-data 19327 Mar 16 15:20 AUTHORS - -rw-r--r-- 1 www-data www-data 34520 Mar 16 15:20 COPYING - drwxr-xr-x 50 www-data www-data 4096 Mar 16 15:20 apps - drwxr-xr-x 2 www-data root 4096 Sep 6 2022 config - -rw-r--r-- 1 www-data www-data 4095 Mar 16 15:20 console.php - drwxr-xr-x 23 www-data www-data 4096 Mar 16 15:20 core - -rw-r--r-- 1 www-data www-data 6317 Mar 16 15:20 cron.php - drwxr-xr-x 8 www-data root 4096 Apr 2 11:05 custom_apps - drwxrwx--- 7 www-data www-data 4096 Aug 5 2022 data - drwxr-xr-x 2 www-data www-data 12288 Mar 16 15:20 dist - -rw-r--r-- 1 www-data www-data 156 Mar 16 15:20 index.html - -rw-r--r-- 1 www-data www-data 3456 Mar 16 15:20 index.php - drwxr-xr-x 6 www-data www-data 4096 Mar 16 15:20 lib - -rwxr-xr-x 1 www-data www-data 283 Mar 16 15:20 occ - drwxr-xr-x 2 www-data www-data 4096 Mar 16 15:20 ocm-provider - drwxr-xr-x 2 www-data www-data 4096 Mar 16 15:20 ocs - drwxr-xr-x 2 www-data www-data 4096 Mar 16 15:20 ocs-provider - -rw-r--r-- 1 www-data www-data 3139 Mar 16 15:20 public.php - -rw-r--r-- 1 www-data www-data 5549 Mar 16 15:20 remote.php - drwxr-xr-x 4 www-data www-data 4096 Mar 16 15:20 resources - -rw-r--r-- 1 www-data www-data 26 Mar 16 15:20 robots.txt - -rw-r--r-- 1 www-data www-data 2452 Mar 16 15:20 status.php - drwxr-xr-x 3 www-data root 4096 Aug 5 2022 themes - -rw-r--r-- 1 www-data www-data 383 Mar 16 15:20 version.php - ~~~ - - This folder is where all the Nextcloud binaries are found, including the `occ` command. See how the owner of all files and folders is `www-data`, the user running the Apache service that's serving your Nextcloud instance. - -4. Execute `occ` with the `runuser` command. Remember that you'll have to run it once per each different issue reported by Nextcloud. Let's say that you only have to fix the missing indexes problem. You would run `occ` with the `db:add-missing-indices` option as follows. - - ~~~bash - # runuser -u www-data -- /var/www/html/occ db:add-missing-indices - ~~~ - - In this case, you only need to specify two things to `runuser`. - - - `-u www-data`: the user that'll run the command launched by `runuser`. - - `-- /var/www/html/occ db:add-missing-indices`: the command that `runuser` has to execute. Notice that I've used the **absolute path** to the `occ` command. - - This command will give you an output like this. - - ~~~bash - Check indices of the share table. - Check indices of the filecache table. - Adding additional size index to the filecache table, this can take some time... - Filecache table updated successfully. - Check indices of the twofactor_providers table. - Check indices of the login_flow_v2 table. - Check indices of the whats_new table. - Check indices of the cards table. - Check indices of the cards_properties table. - Check indices of the calendarobjects_props table. - Check indices of the schedulingobjects table. - Check indices of the oc_properties table. - ~~~ - -5. After the `occ` command finishes, you can go back to your Nextcloud web console and refresh the `Administration settings > Overview` blade to check if the related warning has disappeared. - -6. When you finish applying all the pending `occ` fixes, don't forget to `exit` from the container. 
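- - By the way, if you just need to fire a single `occ` fix and you already know the pod and container names, you don't even have to open an interactive shell: you can run the whole `runuser` call directly through `kubectl exec` from your client system, reusing the very same names and options seen in the procedure above. - - ~~~bash - $ kubectl exec -n nextcloud nxcd-server-apache-nextcloud-0 -c server -- runuser -u www-data -- /var/www/html/occ db:add-missing-indices - ~~~ - - This one-liner is equivalent to steps 2 to 4 above, and is also convenient if you ever want to script these maintenance tasks, although for anything exploratory the interactive shell remains more comfortable.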
- -## References - -### _Nextcloud_ - -- [Basil's Tech Diary ~ Nextcloud: Using occ in a FreeNAS jail](https://blog.udance.com.au/2021/02/25/nextcloud-using-occ-in-a-freenas-jail/) -- [How2itsec ~ Nextcloud repairing missing indexes in database](https://how2itsec.blogspot.com/2021/12/nextcloud-repairing-missing-indexes-in.html) -- [Some indices are missing in the database! How to add them manually](https://help.nextcloud.com/t/some-indices-are-missing-in-the-database-how-to-add-them-manually/37852) -- [Help with occ db:add-missing-indices](https://help.nextcloud.com/t/help-with-occ-db-add-missing-indices/90696) - -### _About the `runuser` command_ - -- [`runuser(1)` — Linux manual page](https://man7.org/linux/man-pages/man1/runuser.1.html) -- [How to Run Commands as Another User in Linux Scripts](https://www.howtogeek.com/811255/how-to-run-commands-as-another-user-in-linux-scripts/) -- [When should I use runuser command?](https://stackoverflow.com/questions/71905063/when-should-i-use-runuser-command) - -## Navigation - -[<< Previous (**G912. Appendix 12**)](G912%20-%20Appendix%2012%20~%20Checking%20the%20K8s%20API%20endpoints%20status.md) | [+Table Of Contents+](G000%20-%20Table%20Of%20Contents.md) | [Next (**G914. Appendix 14**) >>](G914%20-%20Appendix%2014%20~%20Updating%20MariaDB%20to%20a%20newer%20major%20version.md) diff --git a/G915 - Appendix 15 ~ Updating PostgreSQL to a newer major version.md b/G913 - Appendix 13 ~ Updating PostgreSQL to a newer major version.md similarity index 97% rename from G915 - Appendix 15 ~ Updating PostgreSQL to a newer major version.md rename to G913 - Appendix 13 ~ Updating PostgreSQL to a newer major version.md index 0a110d5..e02df93 100644 --- a/G915 - Appendix 15 ~ Updating PostgreSQL to a newer major version.md +++ b/G913 - Appendix 13 ~ Updating PostgreSQL to a newer major version.md @@ -1,4 +1,4 @@ -# G915 - Appendix 15 ~ Updating PostgreSQL to a newer major version +# G913 - Appendix 13 ~ Updating PostgreSQL to a newer major version PostgreSQL runs well as a containerized instance and can be upgraded easily to new minor or debug versions. However, updating it to a new _MAJOR_ version is not a straightforward affair. Still, there's a way that simplifies this process a bit, and I'll show it to you in this guide. 
@@ -948,7 +948,7 @@ You can find the Kustomize project meant **only for updating PostgreSQL database ## Relevant system paths -### _Folders in `kubectl` client system_ +### Folders in `kubectl` client system - `$HOME` - `$HOME/k8sprjs/gitea` @@ -959,7 +959,7 @@ You can find the Kustomize project meant **only for updating PostgreSQL database - `$HOME/k8sprjs/postgres-upgrade/configs` - `$HOME/k8sprjs/postgres-upgrade/resources` -### _Files in `kubectl` client system_ +### Files in `kubectl` client system - `$HOME/gitea-db-postgres.log` - `$HOME/postgres-upgrade.log` @@ -977,13 +977,13 @@ You can find the Kustomize project meant **only for updating PostgreSQL database - `$HOME/k8sprjs/postgres-upgrade/resources/postgres-upgrade.persistentvolumeclaim.yaml` - `$HOME/k8sprjs/postgres-upgrade/resources/postgres-upgrade.statefulset.yaml` -### _Folders in the K3s agent node_ +### Folders in the K3s agent node - `/mnt/gitea-ssd/db/k3smnt/` - `/mnt/gitea-ssd/db/k3smnt/14` - `/mnt/gitea-ssd/db/k3smnt/15` -### _Files in the K3s agent node_ +### Files in the K3s agent node - `/mnt/gitea-ssd/db/k3smnt/14/pg_hba.conf` - `/mnt/gitea-ssd/db/k3smnt/14/pg_ident.conf` @@ -996,7 +996,7 @@ You can find the Kustomize project meant **only for updating PostgreSQL database - `/mnt/gitea-ssd/db/k3smnt/15/postgresql.conf` - `/mnt/gitea-ssd/db/k3smnt/15/postmaster.opts` -### _Folders in the PostgreSQL container_ +### Folders in the PostgreSQL container - `/var/lib/postgresql/data` - `/var/lib/postgresql/data/14` @@ -1004,7 +1004,7 @@ You can find the Kustomize project meant **only for updating PostgreSQL database ## References -### _PostgreSQL_ +### PostgreSQL - [PostgreSQL's official page](https://www.postgresql.org/) - [Upgrading a PostgreSQL Cluster](https://www.postgresql.org/docs/current/upgrading.html) @@ -1018,17 +1018,17 @@ You can find the Kustomize project meant **only for updating PostgreSQL database - [PostgreSQL Docker image. Quick reference. `PGDATA` environmental variable](https://github.com/docker-library/docs/blob/master/postgres/README.md#pgdata) - [The `pg_hba.conf` File](https://www.postgresql.org/docs/current/auth-pg-hba-conf.html) -### _Tianon Gravi's PostgreSQL upgrade Docker image_ +### Tianon Gravi's PostgreSQL upgrade Docker image - [pg_upgrade, Docker style on Hub Docker](https://hub.docker.com/r/tianon/postgres-upgrade/) - [pg_upgrade, Docker style on GitHub](https://github.com/tianon/docker-postgres-upgrade) - [Docker image version 14-to-15](https://hub.docker.com/layers/tianon/postgres-upgrade/14-to-15/images/sha256-18da581c7839388bb25fd3f8b3170b540556ef3eb69d9afcc0662fcaa52d864e?context=explore) - [Tianon Gravi on GitHub](https://github.com/tianon) -### _Gitea_ +### Gitea - [Installation. Database Preparation](https://docs.gitea.io/en-us/database-prep/) ## Navigation -[<< Previous (**G914. Appendix 14**)](G914%20-%20Appendix%2014%20~%20Updating%20MariaDB%20to%20a%20newer%20major%20version.md) | [+Table Of Contents+](G000%20-%20Table%20Of%20Contents.md) +[<< Previous (**G912. Appendix 12**)](G912%20-%20Appendix%2012%20~%20Updating%20MariaDB%20to%20a%20newer%20major%20version.md) | [+Table Of Contents+](G000%20-%20Table%20Of%20Contents.md) diff --git a/README.md b/README.md index 2d567fb..8787433 100644 --- a/README.md +++ b/README.md @@ -58,7 +58,7 @@ The core software used in this guide to build the homelab is: After setting up the Kubernetes cluster, the idea is to deploy in it the following software: -- File cloud: [Nextcloud](https://nextcloud.com/). 
+- Publishing platform: [Ghost](https://ghost.org/).
 - Lightweight git server: [Gitea](https://gitea.io/).
 - Kubernetes cluster monitoring stack: set of monitoring modules including [Prometheus](https://prometheus.io/), [Grafana](https://grafana.com/grafana/) and a couple of other related services.
@@ -75,7 +75,7 @@ All the chapters and their main sections are easily accessible through the [Tabl
 - [Proxmox Virtual Environment](https://www.proxmox.com/en/)
 - [Rancher K3s](https://k3s.io/)
 - [Kubernetes](https://kubernetes.io/)
-- [Nextcloud](https://nextcloud.com/)
+- [Ghost](https://ghost.org/)
 - [Gitea](https://gitea.io/)
 - [Prometheus](https://prometheus.io/)
 - [Grafana](https://grafana.com/grafana/)
diff --git a/Small homelab K8s cluster on Proxmox VE.code-workspace b/Small homelab K8s cluster on Proxmox VE.code-workspace
index 568c440..9ced32a 100644
--- a/Small homelab K8s cluster on Proxmox VE.code-workspace
+++ b/Small homelab K8s cluster on Proxmox VE.code-workspace
@@ -6,7 +6,8 @@
 	],
 	"settings": {
 		"cSpell.words": [
-			"hugepages"
+			"hugepages",
+			"Valkey"
 		]
 	}
 }
\ No newline at end of file
diff --git a/images/g033/nextcloud_avatar_menu_settings_highlight.png b/images/g033/nextcloud_avatar_menu_settings_highlight.png
deleted file mode 100644
index 67e8b04..0000000
Binary files a/images/g033/nextcloud_avatar_menu_settings_highlight.png and /dev/null differ
diff --git a/images/g033/nextcloud_basic_settings_cron_highlighted.png b/images/g033/nextcloud_basic_settings_cron_highlighted.png
deleted file mode 100644
index 9dfbc4f..0000000
Binary files a/images/g033/nextcloud_basic_settings_cron_highlighted.png and /dev/null differ
diff --git a/images/g033/nextcloud_dashboard.png b/images/g033/nextcloud_dashboard.png
deleted file mode 100644
index 98281d4..0000000
Binary files a/images/g033/nextcloud_dashboard.png and /dev/null differ
diff --git a/images/g033/nextcloud_dashboard_welcome.png b/images/g033/nextcloud_dashboard_welcome.png
deleted file mode 100644
index 87e9e46..0000000
Binary files a/images/g033/nextcloud_dashboard_welcome.png and /dev/null differ
diff --git a/images/g033/nextcloud_login_page.png b/images/g033/nextcloud_login_page.png
deleted file mode 100644
index f946e53..0000000
Binary files a/images/g033/nextcloud_login_page.png and /dev/null differ
diff --git a/images/g033/nextcloud_settings_basic_settings_highlighted.png b/images/g033/nextcloud_settings_basic_settings_highlighted.png
deleted file mode 100644
index b4259f5..0000000
Binary files a/images/g033/nextcloud_settings_basic_settings_highlighted.png and /dev/null differ
diff --git a/images/g033/pve_k3sagent_hardware_add_hard_disk_option.webp b/images/g033/pve_k3sagent_hardware_add_hard_disk_option.webp
new file mode 100644
index 0000000..9f2f6c6
Binary files /dev/null and b/images/g033/pve_k3sagent_hardware_add_hard_disk_option.webp differ
diff --git a/images/g033/pve_k3sagent_hardware_add_hard_disk_window.webp b/images/g033/pve_k3sagent_hardware_add_hard_disk_window.webp
new file mode 100644
index 0000000..4e29fdd
Binary files /dev/null and b/images/g033/pve_k3sagent_hardware_add_hard_disk_window.webp differ
diff --git a/images/g033/pve_k3sagent_hardware_hard_disks_added.webp b/images/g033/pve_k3sagent_hardware_hard_disks_added.webp
new file mode 100644
index 0000000..9caf76f
Binary files /dev/null and b/images/g033/pve_k3sagent_hardware_hard_disks_added.webp differ
diff --git a/images/g033/pve_k3snode_hardware_add_hard_disk_option.png b/images/g033/pve_k3snode_hardware_add_hard_disk_option.png
deleted file mode 100644
index c3139c4..0000000
Binary files a/images/g033/pve_k3snode_hardware_add_hard_disk_option.png and /dev/null differ
diff --git a/images/g033/pve_k3snode_hardware_add_hard_disk_window.png b/images/g033/pve_k3snode_hardware_add_hard_disk_window.png
deleted file mode 100644
index 6652466..0000000
Binary files a/images/g033/pve_k3snode_hardware_add_hard_disk_window.png and /dev/null differ
diff --git a/images/g033/pve_k3snode_hardware_hard_disks_added.png b/images/g033/pve_k3snode_hardware_hard_disks_added.png
deleted file mode 100644
index 131281c..0000000
Binary files a/images/g033/pve_k3snode_hardware_hard_disks_added.png and /dev/null differ
diff --git a/k8sprjs/gitea/components/cache-redis/configs/redis.conf b/k8sprjs/gitea.old/components/cache-redis/configs/redis.conf
similarity index 100%
rename from k8sprjs/gitea/components/cache-redis/configs/redis.conf
rename to k8sprjs/gitea.old/components/cache-redis/configs/redis.conf
diff --git a/k8sprjs/gitea/components/cache-redis/kustomization.yaml b/k8sprjs/gitea.old/components/cache-redis/kustomization.yaml
similarity index 100%
rename from k8sprjs/gitea/components/cache-redis/kustomization.yaml
rename to k8sprjs/gitea.old/components/cache-redis/kustomization.yaml
diff --git a/k8sprjs/gitea/components/cache-redis/resources/cache-redis.deployment.yaml b/k8sprjs/gitea.old/components/cache-redis/resources/cache-redis.deployment.yaml
similarity index 100%
rename from k8sprjs/gitea/components/cache-redis/resources/cache-redis.deployment.yaml
rename to k8sprjs/gitea.old/components/cache-redis/resources/cache-redis.deployment.yaml
diff --git a/k8sprjs/gitea/components/cache-redis/resources/cache-redis.service.yaml b/k8sprjs/gitea.old/components/cache-redis/resources/cache-redis.service.yaml
similarity index 100%
rename from k8sprjs/gitea/components/cache-redis/resources/cache-redis.service.yaml
rename to k8sprjs/gitea.old/components/cache-redis/resources/cache-redis.service.yaml
diff --git a/k8sprjs/gitea/components/cache-redis/secrets/redis.pwd b/k8sprjs/gitea.old/components/cache-redis/secrets/redis.pwd
similarity index 100%
rename from k8sprjs/gitea/components/cache-redis/secrets/redis.pwd
rename to k8sprjs/gitea.old/components/cache-redis/secrets/redis.pwd
diff --git a/k8sprjs/gitea/components/db-postgresql/configs/dbnames.properties b/k8sprjs/gitea.old/components/db-postgresql/configs/dbnames.properties
similarity index 100%
rename from k8sprjs/gitea/components/db-postgresql/configs/dbnames.properties
rename to k8sprjs/gitea.old/components/db-postgresql/configs/dbnames.properties
diff --git a/k8sprjs/gitea/components/db-postgresql/configs/initdb.sh b/k8sprjs/gitea.old/components/db-postgresql/configs/initdb.sh
similarity index 100%
rename from k8sprjs/gitea/components/db-postgresql/configs/initdb.sh
rename to k8sprjs/gitea.old/components/db-postgresql/configs/initdb.sh
diff --git a/k8sprjs/gitea/components/db-postgresql/configs/postgresql.conf b/k8sprjs/gitea.old/components/db-postgresql/configs/postgresql.conf
similarity index 100%
rename from k8sprjs/gitea/components/db-postgresql/configs/postgresql.conf
rename to k8sprjs/gitea.old/components/db-postgresql/configs/postgresql.conf
diff --git a/k8sprjs/gitea/components/db-postgresql/kustomization.yaml b/k8sprjs/gitea.old/components/db-postgresql/kustomization.yaml
similarity index 100%
rename from k8sprjs/gitea/components/db-postgresql/kustomization.yaml
rename to k8sprjs/gitea.old/components/db-postgresql/kustomization.yaml
diff --git a/k8sprjs/gitea/components/db-postgresql/resources/db-postgresql.persistentvolumeclaim.yaml b/k8sprjs/gitea.old/components/db-postgresql/resources/db-postgresql.persistentvolumeclaim.yaml
similarity index 100%
rename from k8sprjs/gitea/components/db-postgresql/resources/db-postgresql.persistentvolumeclaim.yaml
rename to k8sprjs/gitea.old/components/db-postgresql/resources/db-postgresql.persistentvolumeclaim.yaml
diff --git a/k8sprjs/gitea/components/db-postgresql/resources/db-postgresql.service.yaml b/k8sprjs/gitea.old/components/db-postgresql/resources/db-postgresql.service.yaml
similarity index 100%
rename from k8sprjs/gitea/components/db-postgresql/resources/db-postgresql.service.yaml
rename to k8sprjs/gitea.old/components/db-postgresql/resources/db-postgresql.service.yaml
diff --git a/k8sprjs/gitea/components/db-postgresql/resources/db-postgresql.statefulset.yaml b/k8sprjs/gitea.old/components/db-postgresql/resources/db-postgresql.statefulset.yaml
similarity index 100%
rename from k8sprjs/gitea/components/db-postgresql/resources/db-postgresql.statefulset.yaml
rename to k8sprjs/gitea.old/components/db-postgresql/resources/db-postgresql.statefulset.yaml
diff --git a/k8sprjs/gitea/components/db-postgresql/secrets/dbusers.pwd b/k8sprjs/gitea.old/components/db-postgresql/secrets/dbusers.pwd
similarity index 100%
rename from k8sprjs/gitea/components/db-postgresql/secrets/dbusers.pwd
rename to k8sprjs/gitea.old/components/db-postgresql/secrets/dbusers.pwd
diff --git a/k8sprjs/gitea/components/server-gitea/configs/params.properties b/k8sprjs/gitea.old/components/server-gitea/configs/params.properties
similarity index 100%
rename from k8sprjs/gitea/components/server-gitea/configs/params.properties
rename to k8sprjs/gitea.old/components/server-gitea/configs/params.properties
diff --git a/k8sprjs/gitea/components/server-gitea/kustomization.yaml b/k8sprjs/gitea.old/components/server-gitea/kustomization.yaml
similarity index 100%
rename from k8sprjs/gitea/components/server-gitea/kustomization.yaml
rename to k8sprjs/gitea.old/components/server-gitea/kustomization.yaml
diff --git a/k8sprjs/gitea/components/server-gitea/resources/data-server-gitea.persistentvolumeclaim.yaml b/k8sprjs/gitea.old/components/server-gitea/resources/data-server-gitea.persistentvolumeclaim.yaml
similarity index 100%
rename from k8sprjs/gitea/components/server-gitea/resources/data-server-gitea.persistentvolumeclaim.yaml
rename to k8sprjs/gitea.old/components/server-gitea/resources/data-server-gitea.persistentvolumeclaim.yaml
diff --git a/k8sprjs/gitea/components/server-gitea/resources/repos-server-gitea.persistentvolumeclaim.yaml b/k8sprjs/gitea.old/components/server-gitea/resources/repos-server-gitea.persistentvolumeclaim.yaml
similarity index 100%
rename from k8sprjs/gitea/components/server-gitea/resources/repos-server-gitea.persistentvolumeclaim.yaml
rename to k8sprjs/gitea.old/components/server-gitea/resources/repos-server-gitea.persistentvolumeclaim.yaml
diff --git a/k8sprjs/gitea/components/server-gitea/resources/server-gitea.service.yaml b/k8sprjs/gitea.old/components/server-gitea/resources/server-gitea.service.yaml
similarity index 100%
rename from k8sprjs/gitea/components/server-gitea/resources/server-gitea.service.yaml
rename to k8sprjs/gitea.old/components/server-gitea/resources/server-gitea.service.yaml
diff --git a/k8sprjs/gitea/components/server-gitea/resources/server-gitea.statefulset.yaml b/k8sprjs/gitea.old/components/server-gitea/resources/server-gitea.statefulset.yaml
similarity index 100%
rename from k8sprjs/gitea/components/server-gitea/resources/server-gitea.statefulset.yaml
rename to k8sprjs/gitea.old/components/server-gitea/resources/server-gitea.statefulset.yaml
diff --git a/k8sprjs/gitea/kustomization.yaml b/k8sprjs/gitea.old/kustomization.yaml
similarity index 100%
rename from k8sprjs/gitea/kustomization.yaml
rename to k8sprjs/gitea.old/kustomization.yaml
diff --git a/k8sprjs/gitea/resources/data-gitea.persistentvolume.yaml b/k8sprjs/gitea.old/resources/data-gitea.persistentvolume.yaml
similarity index 100%
rename from k8sprjs/gitea/resources/data-gitea.persistentvolume.yaml
rename to k8sprjs/gitea.old/resources/data-gitea.persistentvolume.yaml
diff --git a/k8sprjs/gitea/resources/db-gitea.persistentvolume.yaml b/k8sprjs/gitea.old/resources/db-gitea.persistentvolume.yaml
similarity index 100%
rename from k8sprjs/gitea/resources/db-gitea.persistentvolume.yaml
rename to k8sprjs/gitea.old/resources/db-gitea.persistentvolume.yaml
diff --git a/k8sprjs/gitea/resources/gitea.namespace.yaml b/k8sprjs/gitea.old/resources/gitea.namespace.yaml
similarity index 100%
rename from k8sprjs/gitea/resources/gitea.namespace.yaml
rename to k8sprjs/gitea.old/resources/gitea.namespace.yaml
diff --git a/k8sprjs/gitea/resources/repos-gitea.persistentvolume.yaml b/k8sprjs/gitea.old/resources/repos-gitea.persistentvolume.yaml
similarity index 100%
rename from k8sprjs/gitea/resources/repos-gitea.persistentvolume.yaml
rename to k8sprjs/gitea.old/resources/repos-gitea.persistentvolume.yaml
diff --git a/k8sprjs/monitoring/components/agent-kube-state-metrics/kustomization.yaml b/k8sprjs/monitoring.old/components/agent-kube-state-metrics/kustomization.yaml
similarity index 100%
rename from k8sprjs/monitoring/components/agent-kube-state-metrics/kustomization.yaml
rename to k8sprjs/monitoring.old/components/agent-kube-state-metrics/kustomization.yaml
diff --git a/k8sprjs/monitoring/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.clusterrole.yaml b/k8sprjs/monitoring.old/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.clusterrole.yaml
similarity index 100%
rename from k8sprjs/monitoring/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.clusterrole.yaml
rename to k8sprjs/monitoring.old/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.clusterrole.yaml
diff --git a/k8sprjs/monitoring/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.clusterrolebinding.yaml b/k8sprjs/monitoring.old/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.clusterrolebinding.yaml
similarity index 100%
rename from k8sprjs/monitoring/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.clusterrolebinding.yaml
rename to k8sprjs/monitoring.old/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.clusterrolebinding.yaml
diff --git a/k8sprjs/monitoring/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.deployment.yaml b/k8sprjs/monitoring.old/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.deployment.yaml
similarity index 100%
rename from k8sprjs/monitoring/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.deployment.yaml
rename to k8sprjs/monitoring.old/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.deployment.yaml
diff --git a/k8sprjs/monitoring/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.service.yaml b/k8sprjs/monitoring.old/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.service.yaml
similarity index 100%
rename from k8sprjs/monitoring/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.service.yaml
rename to k8sprjs/monitoring.old/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.service.yaml
diff --git a/k8sprjs/monitoring/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.serviceaccount.yaml b/k8sprjs/monitoring.old/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.serviceaccount.yaml
similarity index 100%
rename from k8sprjs/monitoring/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.serviceaccount.yaml
rename to k8sprjs/monitoring.old/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.serviceaccount.yaml
diff --git a/k8sprjs/monitoring/components/agent-prometheus-node-exporter/kustomization.yaml b/k8sprjs/monitoring.old/components/agent-prometheus-node-exporter/kustomization.yaml
similarity index 100%
rename from k8sprjs/monitoring/components/agent-prometheus-node-exporter/kustomization.yaml
rename to k8sprjs/monitoring.old/components/agent-prometheus-node-exporter/kustomization.yaml
diff --git a/k8sprjs/monitoring/components/agent-prometheus-node-exporter/resources/agent-prometheus-node-exporter.daemonset.yaml b/k8sprjs/monitoring.old/components/agent-prometheus-node-exporter/resources/agent-prometheus-node-exporter.daemonset.yaml
similarity index 100%
rename from k8sprjs/monitoring/components/agent-prometheus-node-exporter/resources/agent-prometheus-node-exporter.daemonset.yaml
rename to k8sprjs/monitoring.old/components/agent-prometheus-node-exporter/resources/agent-prometheus-node-exporter.daemonset.yaml
diff --git a/k8sprjs/monitoring/components/agent-prometheus-node-exporter/resources/agent-prometheus-node-exporter.service.yaml b/k8sprjs/monitoring.old/components/agent-prometheus-node-exporter/resources/agent-prometheus-node-exporter.service.yaml
similarity index 100%
rename from k8sprjs/monitoring/components/agent-prometheus-node-exporter/resources/agent-prometheus-node-exporter.service.yaml
rename to k8sprjs/monitoring.old/components/agent-prometheus-node-exporter/resources/agent-prometheus-node-exporter.service.yaml
diff --git a/k8sprjs/monitoring/components/server-prometheus/configs/prometheus.rules.yaml b/k8sprjs/monitoring.old/components/server-prometheus/configs/prometheus.rules.yaml
similarity index 100%
rename from k8sprjs/monitoring/components/server-prometheus/configs/prometheus.rules.yaml
rename to k8sprjs/monitoring.old/components/server-prometheus/configs/prometheus.rules.yaml
diff --git a/k8sprjs/monitoring/components/server-prometheus/configs/prometheus.yaml b/k8sprjs/monitoring.old/components/server-prometheus/configs/prometheus.yaml
similarity index 100%
rename from k8sprjs/monitoring/components/server-prometheus/configs/prometheus.yaml
rename to k8sprjs/monitoring.old/components/server-prometheus/configs/prometheus.yaml
diff --git a/k8sprjs/monitoring/components/server-prometheus/kustomization.yaml b/k8sprjs/monitoring.old/components/server-prometheus/kustomization.yaml
similarity index 100%
rename from k8sprjs/monitoring/components/server-prometheus/kustomization.yaml
rename to k8sprjs/monitoring.old/components/server-prometheus/kustomization.yaml
diff --git a/k8sprjs/monitoring/components/server-prometheus/resources/data-server-prometheus.persistentvolumeclaim.yaml b/k8sprjs/monitoring.old/components/server-prometheus/resources/data-server-prometheus.persistentvolumeclaim.yaml
similarity index 100%
rename from k8sprjs/monitoring/components/server-prometheus/resources/data-server-prometheus.persistentvolumeclaim.yaml
rename to k8sprjs/monitoring.old/components/server-prometheus/resources/data-server-prometheus.persistentvolumeclaim.yaml
diff --git a/k8sprjs/monitoring/components/server-prometheus/resources/server-prometheus.ingressroute.traefik.yaml b/k8sprjs/monitoring.old/components/server-prometheus/resources/server-prometheus.ingressroute.traefik.yaml
similarity index 100%
rename from k8sprjs/monitoring/components/server-prometheus/resources/server-prometheus.ingressroute.traefik.yaml
rename to k8sprjs/monitoring.old/components/server-prometheus/resources/server-prometheus.ingressroute.traefik.yaml
diff --git a/k8sprjs/monitoring/components/server-prometheus/resources/server-prometheus.service.yaml b/k8sprjs/monitoring.old/components/server-prometheus/resources/server-prometheus.service.yaml
similarity index 100%
rename from k8sprjs/monitoring/components/server-prometheus/resources/server-prometheus.service.yaml
rename to k8sprjs/monitoring.old/components/server-prometheus/resources/server-prometheus.service.yaml
diff --git a/k8sprjs/monitoring/components/server-prometheus/resources/server-prometheus.statefulset.yaml b/k8sprjs/monitoring.old/components/server-prometheus/resources/server-prometheus.statefulset.yaml
similarity index 100%
rename from k8sprjs/monitoring/components/server-prometheus/resources/server-prometheus.statefulset.yaml
rename to k8sprjs/monitoring.old/components/server-prometheus/resources/server-prometheus.statefulset.yaml
diff --git a/k8sprjs/monitoring/components/ui-grafana/kustomization.yaml b/k8sprjs/monitoring.old/components/ui-grafana/kustomization.yaml
similarity index 100%
rename from k8sprjs/monitoring/components/ui-grafana/kustomization.yaml
rename to k8sprjs/monitoring.old/components/ui-grafana/kustomization.yaml
diff --git a/k8sprjs/monitoring/components/ui-grafana/resources/data-ui-grafana.persistentvolumeclaim.yaml b/k8sprjs/monitoring.old/components/ui-grafana/resources/data-ui-grafana.persistentvolumeclaim.yaml
similarity index 100%
rename from k8sprjs/monitoring/components/ui-grafana/resources/data-ui-grafana.persistentvolumeclaim.yaml
rename to k8sprjs/monitoring.old/components/ui-grafana/resources/data-ui-grafana.persistentvolumeclaim.yaml
diff --git a/k8sprjs/monitoring/components/ui-grafana/resources/ui-grafana.ingressroute.traefik.yaml b/k8sprjs/monitoring.old/components/ui-grafana/resources/ui-grafana.ingressroute.traefik.yaml
similarity index 100%
rename from k8sprjs/monitoring/components/ui-grafana/resources/ui-grafana.ingressroute.traefik.yaml
rename to k8sprjs/monitoring.old/components/ui-grafana/resources/ui-grafana.ingressroute.traefik.yaml
diff --git a/k8sprjs/monitoring/components/ui-grafana/resources/ui-grafana.service.yaml b/k8sprjs/monitoring.old/components/ui-grafana/resources/ui-grafana.service.yaml
similarity index 100%
rename from k8sprjs/monitoring/components/ui-grafana/resources/ui-grafana.service.yaml
rename to k8sprjs/monitoring.old/components/ui-grafana/resources/ui-grafana.service.yaml
diff --git a/k8sprjs/monitoring/components/ui-grafana/resources/ui-grafana.statefulset.yaml b/k8sprjs/monitoring.old/components/ui-grafana/resources/ui-grafana.statefulset.yaml
similarity index 100%
rename from k8sprjs/monitoring/components/ui-grafana/resources/ui-grafana.statefulset.yaml
rename to k8sprjs/monitoring.old/components/ui-grafana/resources/ui-grafana.statefulset.yaml
diff --git a/k8sprjs/monitoring/kustomization.yaml b/k8sprjs/monitoring.old/kustomization.yaml
similarity index 100%
rename from k8sprjs/monitoring/kustomization.yaml
rename to k8sprjs/monitoring.old/kustomization.yaml
diff --git a/k8sprjs/monitoring/resources/data-grafana.persistentvolume.yaml b/k8sprjs/monitoring.old/resources/data-grafana.persistentvolume.yaml
similarity index 100%
rename from k8sprjs/monitoring/resources/data-grafana.persistentvolume.yaml
rename to k8sprjs/monitoring.old/resources/data-grafana.persistentvolume.yaml
diff --git a/k8sprjs/monitoring/resources/data-prometheus.persistentvolume.yaml b/k8sprjs/monitoring.old/resources/data-prometheus.persistentvolume.yaml
similarity index 100%
rename from k8sprjs/monitoring/resources/data-prometheus.persistentvolume.yaml
rename to k8sprjs/monitoring.old/resources/data-prometheus.persistentvolume.yaml
diff --git a/k8sprjs/monitoring/resources/monitoring.clusterrole.yaml b/k8sprjs/monitoring.old/resources/monitoring.clusterrole.yaml
similarity index 100%
rename from k8sprjs/monitoring/resources/monitoring.clusterrole.yaml
rename to k8sprjs/monitoring.old/resources/monitoring.clusterrole.yaml
diff --git a/k8sprjs/monitoring/resources/monitoring.clusterrolebinding.yaml b/k8sprjs/monitoring.old/resources/monitoring.clusterrolebinding.yaml
similarity index 100%
rename from k8sprjs/monitoring/resources/monitoring.clusterrolebinding.yaml
rename to k8sprjs/monitoring.old/resources/monitoring.clusterrolebinding.yaml
diff --git a/k8sprjs/monitoring/resources/monitoring.namespace.yaml b/k8sprjs/monitoring.old/resources/monitoring.namespace.yaml
similarity index 100%
rename from k8sprjs/monitoring/resources/monitoring.namespace.yaml
rename to k8sprjs/monitoring.old/resources/monitoring.namespace.yaml
diff --git a/k8sprjs/nextcloud/components/cache-redis/configs/redis.conf b/k8sprjs/nextcloud.old/components/cache-redis/configs/redis.conf
similarity index 100%
rename from k8sprjs/nextcloud/components/cache-redis/configs/redis.conf
rename to k8sprjs/nextcloud.old/components/cache-redis/configs/redis.conf
diff --git a/k8sprjs/nextcloud/components/cache-redis/kustomization.yaml b/k8sprjs/nextcloud.old/components/cache-redis/kustomization.yaml
similarity index 100%
rename from k8sprjs/nextcloud/components/cache-redis/kustomization.yaml
rename to k8sprjs/nextcloud.old/components/cache-redis/kustomization.yaml
diff --git a/k8sprjs/nextcloud/components/cache-redis/resources/cache-redis.deployment.yaml b/k8sprjs/nextcloud.old/components/cache-redis/resources/cache-redis.deployment.yaml
similarity index 100%
rename from k8sprjs/nextcloud/components/cache-redis/resources/cache-redis.deployment.yaml
rename to k8sprjs/nextcloud.old/components/cache-redis/resources/cache-redis.deployment.yaml
diff --git a/k8sprjs/nextcloud/components/cache-redis/resources/cache-redis.service.yaml b/k8sprjs/nextcloud.old/components/cache-redis/resources/cache-redis.service.yaml
similarity index 100%
rename from k8sprjs/nextcloud/components/cache-redis/resources/cache-redis.service.yaml
rename to k8sprjs/nextcloud.old/components/cache-redis/resources/cache-redis.service.yaml
diff --git a/k8sprjs/nextcloud/components/cache-redis/secrets/redis.pwd b/k8sprjs/nextcloud.old/components/cache-redis/secrets/redis.pwd
similarity index 100%
rename from k8sprjs/nextcloud/components/cache-redis/secrets/redis.pwd
rename to k8sprjs/nextcloud.old/components/cache-redis/secrets/redis.pwd
diff --git a/k8sprjs/nextcloud/components/db-mariadb/configs/dbnames.properties b/k8sprjs/nextcloud.old/components/db-mariadb/configs/dbnames.properties
similarity index 100%
rename from k8sprjs/nextcloud/components/db-mariadb/configs/dbnames.properties
rename to k8sprjs/nextcloud.old/components/db-mariadb/configs/dbnames.properties
diff --git a/k8sprjs/nextcloud/components/db-mariadb/configs/initdb.sh b/k8sprjs/nextcloud.old/components/db-mariadb/configs/initdb.sh
similarity index 100%
rename from k8sprjs/nextcloud/components/db-mariadb/configs/initdb.sh
rename to k8sprjs/nextcloud.old/components/db-mariadb/configs/initdb.sh
diff --git a/k8sprjs/nextcloud/components/db-mariadb/configs/my.cnf b/k8sprjs/nextcloud.old/components/db-mariadb/configs/my.cnf
similarity index 100%
rename from k8sprjs/nextcloud/components/db-mariadb/configs/my.cnf
rename to k8sprjs/nextcloud.old/components/db-mariadb/configs/my.cnf
diff --git a/k8sprjs/nextcloud/components/db-mariadb/kustomization.yaml b/k8sprjs/nextcloud.old/components/db-mariadb/kustomization.yaml
similarity index 100%
rename from k8sprjs/nextcloud/components/db-mariadb/kustomization.yaml
rename to k8sprjs/nextcloud.old/components/db-mariadb/kustomization.yaml
diff --git a/k8sprjs/nextcloud/components/db-mariadb/resources/db-mariadb.persistentvolumeclaim.yaml b/k8sprjs/nextcloud.old/components/db-mariadb/resources/db-mariadb.persistentvolumeclaim.yaml
similarity index 100%
rename from k8sprjs/nextcloud/components/db-mariadb/resources/db-mariadb.persistentvolumeclaim.yaml
rename to k8sprjs/nextcloud.old/components/db-mariadb/resources/db-mariadb.persistentvolumeclaim.yaml
diff --git a/k8sprjs/nextcloud/components/db-mariadb/resources/db-mariadb.service.yaml b/k8sprjs/nextcloud.old/components/db-mariadb/resources/db-mariadb.service.yaml
similarity index 100%
rename from k8sprjs/nextcloud/components/db-mariadb/resources/db-mariadb.service.yaml
rename to k8sprjs/nextcloud.old/components/db-mariadb/resources/db-mariadb.service.yaml
diff --git a/k8sprjs/nextcloud/components/db-mariadb/resources/db-mariadb.statefulset.yaml b/k8sprjs/nextcloud.old/components/db-mariadb/resources/db-mariadb.statefulset.yaml
similarity index 100%
rename from k8sprjs/nextcloud/components/db-mariadb/resources/db-mariadb.statefulset.yaml
rename to k8sprjs/nextcloud.old/components/db-mariadb/resources/db-mariadb.statefulset.yaml
diff --git a/k8sprjs/nextcloud/components/db-mariadb/secrets/dbusers.pwd b/k8sprjs/nextcloud.old/components/db-mariadb/secrets/dbusers.pwd
similarity index 100%
rename from k8sprjs/nextcloud/components/db-mariadb/secrets/dbusers.pwd
rename to k8sprjs/nextcloud.old/components/db-mariadb/secrets/dbusers.pwd
diff --git a/k8sprjs/nextcloud/components/server-nextcloud/configs/000-default.conf b/k8sprjs/nextcloud.old/components/server-nextcloud/configs/000-default.conf
similarity index 100%
rename from k8sprjs/nextcloud/components/server-nextcloud/configs/000-default.conf
rename to k8sprjs/nextcloud.old/components/server-nextcloud/configs/000-default.conf
diff --git a/k8sprjs/nextcloud/components/server-nextcloud/configs/000-default.no-ssl.conf b/k8sprjs/nextcloud.old/components/server-nextcloud/configs/000-default.no-ssl.conf
similarity index 100%
rename from k8sprjs/nextcloud/components/server-nextcloud/configs/000-default.no-ssl.conf
rename to k8sprjs/nextcloud.old/components/server-nextcloud/configs/000-default.no-ssl.conf
diff --git a/k8sprjs/nextcloud/components/server-nextcloud/configs/dhparam.pem b/k8sprjs/nextcloud.old/components/server-nextcloud/configs/dhparam.pem
similarity index 100%
rename from k8sprjs/nextcloud/components/server-nextcloud/configs/dhparam.pem
rename to k8sprjs/nextcloud.old/components/server-nextcloud/configs/dhparam.pem
diff --git a/k8sprjs/nextcloud/components/server-nextcloud/configs/nginx.conf b/k8sprjs/nextcloud.old/components/server-nextcloud/configs/nginx.conf
similarity index 100%
rename from k8sprjs/nextcloud/components/server-nextcloud/configs/nginx.conf
rename to k8sprjs/nextcloud.old/components/server-nextcloud/configs/nginx.conf
diff --git a/k8sprjs/nextcloud/components/server-nextcloud/configs/nginx.no-ssl.conf b/k8sprjs/nextcloud.old/components/server-nextcloud/configs/nginx.no-ssl.conf
similarity index 100%
rename from k8sprjs/nextcloud/components/server-nextcloud/configs/nginx.no-ssl.conf
rename to k8sprjs/nextcloud.old/components/server-nextcloud/configs/nginx.no-ssl.conf
diff --git a/k8sprjs/nextcloud/components/server-nextcloud/configs/params.properties b/k8sprjs/nextcloud.old/components/server-nextcloud/configs/params.properties
similarity index 100%
rename from k8sprjs/nextcloud/components/server-nextcloud/configs/params.properties
rename to k8sprjs/nextcloud.old/components/server-nextcloud/configs/params.properties
diff --git a/k8sprjs/nextcloud/components/server-nextcloud/configs/ports.conf b/k8sprjs/nextcloud.old/components/server-nextcloud/configs/ports.conf
similarity index 100%
rename from k8sprjs/nextcloud/components/server-nextcloud/configs/ports.conf
rename to k8sprjs/nextcloud.old/components/server-nextcloud/configs/ports.conf
diff --git a/k8sprjs/nextcloud/components/server-nextcloud/configs/zz-docker.conf b/k8sprjs/nextcloud.old/components/server-nextcloud/configs/zz-docker.conf
similarity index 100%
rename from k8sprjs/nextcloud/components/server-nextcloud/configs/zz-docker.conf
rename to k8sprjs/nextcloud.old/components/server-nextcloud/configs/zz-docker.conf
diff --git a/k8sprjs/nextcloud/components/server-nextcloud/kustomization.yaml b/k8sprjs/nextcloud.old/components/server-nextcloud/kustomization.yaml
similarity index 100%
rename from k8sprjs/nextcloud/components/server-nextcloud/kustomization.yaml
rename to k8sprjs/nextcloud.old/components/server-nextcloud/kustomization.yaml
diff --git a/k8sprjs/nextcloud/components/server-nextcloud/resources/data-server-nextcloud.persistentvolumeclaim.yaml b/k8sprjs/nextcloud.old/components/server-nextcloud/resources/data-server-nextcloud.persistentvolumeclaim.yaml
similarity index 100%
rename from k8sprjs/nextcloud/components/server-nextcloud/resources/data-server-nextcloud.persistentvolumeclaim.yaml
rename to k8sprjs/nextcloud.old/components/server-nextcloud/resources/data-server-nextcloud.persistentvolumeclaim.yaml
diff --git a/k8sprjs/nextcloud/components/server-nextcloud/resources/html-server-nextcloud.persistentvolumeclaim.yaml b/k8sprjs/nextcloud.old/components/server-nextcloud/resources/html-server-nextcloud.persistentvolumeclaim.yaml
similarity index 100%
rename from k8sprjs/nextcloud/components/server-nextcloud/resources/html-server-nextcloud.persistentvolumeclaim.yaml
rename to k8sprjs/nextcloud.old/components/server-nextcloud/resources/html-server-nextcloud.persistentvolumeclaim.yaml
diff --git a/k8sprjs/nextcloud/components/server-nextcloud/resources/server-apache-nextcloud.ingressroute.traefik.yaml b/k8sprjs/nextcloud.old/components/server-nextcloud/resources/server-apache-nextcloud.ingressroute.traefik.yaml
similarity index 100%
rename from k8sprjs/nextcloud/components/server-nextcloud/resources/server-apache-nextcloud.ingressroute.traefik.yaml
rename to k8sprjs/nextcloud.old/components/server-nextcloud/resources/server-apache-nextcloud.ingressroute.traefik.yaml
diff --git a/k8sprjs/nextcloud/components/server-nextcloud/resources/server-apache-nextcloud.service.clusterip.yaml b/k8sprjs/nextcloud.old/components/server-nextcloud/resources/server-apache-nextcloud.service.clusterip.yaml
similarity index 100%
rename from k8sprjs/nextcloud/components/server-nextcloud/resources/server-apache-nextcloud.service.clusterip.yaml
rename to k8sprjs/nextcloud.old/components/server-nextcloud/resources/server-apache-nextcloud.service.clusterip.yaml
diff --git a/k8sprjs/nextcloud/components/server-nextcloud/resources/server-apache-nextcloud.service.yaml b/k8sprjs/nextcloud.old/components/server-nextcloud/resources/server-apache-nextcloud.service.yaml
similarity index 100%
rename from k8sprjs/nextcloud/components/server-nextcloud/resources/server-apache-nextcloud.service.yaml
rename to k8sprjs/nextcloud.old/components/server-nextcloud/resources/server-apache-nextcloud.service.yaml
diff --git a/k8sprjs/nextcloud/components/server-nextcloud/resources/server-apache-nextcloud.statefulset.no-ssl.yaml b/k8sprjs/nextcloud.old/components/server-nextcloud/resources/server-apache-nextcloud.statefulset.no-ssl.yaml
similarity index 100%
rename from k8sprjs/nextcloud/components/server-nextcloud/resources/server-apache-nextcloud.statefulset.no-ssl.yaml
rename to k8sprjs/nextcloud.old/components/server-nextcloud/resources/server-apache-nextcloud.statefulset.no-ssl.yaml
diff --git a/k8sprjs/nextcloud/components/server-nextcloud/resources/server-apache-nextcloud.statefulset.yaml b/k8sprjs/nextcloud.old/components/server-nextcloud/resources/server-apache-nextcloud.statefulset.yaml
similarity index 100%
rename from k8sprjs/nextcloud/components/server-nextcloud/resources/server-apache-nextcloud.statefulset.yaml
rename to k8sprjs/nextcloud.old/components/server-nextcloud/resources/server-apache-nextcloud.statefulset.yaml
diff --git a/k8sprjs/nextcloud/components/server-nextcloud/resources/server-nginx-nextcloud.ingressroute.traefik.yaml b/k8sprjs/nextcloud.old/components/server-nextcloud/resources/server-nginx-nextcloud.ingressroute.traefik.yaml
similarity index 100%
rename from k8sprjs/nextcloud/components/server-nextcloud/resources/server-nginx-nextcloud.ingressroute.traefik.yaml
rename to k8sprjs/nextcloud.old/components/server-nextcloud/resources/server-nginx-nextcloud.ingressroute.traefik.yaml
diff --git a/k8sprjs/nextcloud/components/server-nextcloud/resources/server-nginx-nextcloud.service.clusterip.yaml b/k8sprjs/nextcloud.old/components/server-nextcloud/resources/server-nginx-nextcloud.service.clusterip.yaml
similarity index 100%
rename from k8sprjs/nextcloud/components/server-nextcloud/resources/server-nginx-nextcloud.service.clusterip.yaml
rename to k8sprjs/nextcloud.old/components/server-nextcloud/resources/server-nginx-nextcloud.service.clusterip.yaml
diff --git a/k8sprjs/nextcloud/components/server-nextcloud/resources/server-nginx-nextcloud.service.loadbalancer.yaml b/k8sprjs/nextcloud.old/components/server-nextcloud/resources/server-nginx-nextcloud.service.loadbalancer.yaml
similarity index 100%
rename from k8sprjs/nextcloud/components/server-nextcloud/resources/server-nginx-nextcloud.service.loadbalancer.yaml
rename to k8sprjs/nextcloud.old/components/server-nextcloud/resources/server-nginx-nextcloud.service.loadbalancer.yaml
diff --git a/k8sprjs/nextcloud/components/server-nextcloud/resources/server-nginx-nextcloud.statefulset.no-ssl.yaml b/k8sprjs/nextcloud.old/components/server-nextcloud/resources/server-nginx-nextcloud.statefulset.no-ssl.yaml
similarity index 100%
rename from k8sprjs/nextcloud/components/server-nextcloud/resources/server-nginx-nextcloud.statefulset.no-ssl.yaml
rename to k8sprjs/nextcloud.old/components/server-nextcloud/resources/server-nginx-nextcloud.statefulset.no-ssl.yaml
diff --git a/k8sprjs/nextcloud/components/server-nextcloud/resources/server-nginx-nextcloud.statefulset.yaml b/k8sprjs/nextcloud.old/components/server-nextcloud/resources/server-nginx-nextcloud.statefulset.yaml
similarity index 100%
rename from k8sprjs/nextcloud/components/server-nextcloud/resources/server-nginx-nextcloud.statefulset.yaml
rename to k8sprjs/nextcloud.old/components/server-nextcloud/resources/server-nginx-nextcloud.statefulset.yaml
diff --git a/k8sprjs/nextcloud/components/server-nextcloud/secrets/nextcloud-admin.pwd b/k8sprjs/nextcloud.old/components/server-nextcloud/secrets/nextcloud-admin.pwd
similarity index 100%
rename from k8sprjs/nextcloud/components/server-nextcloud/secrets/nextcloud-admin.pwd
rename to k8sprjs/nextcloud.old/components/server-nextcloud/secrets/nextcloud-admin.pwd
diff --git a/k8sprjs/nextcloud/kustomization.yaml b/k8sprjs/nextcloud.old/kustomization.yaml
similarity index 100%
rename from k8sprjs/nextcloud/kustomization.yaml
rename to k8sprjs/nextcloud.old/kustomization.yaml
diff --git a/k8sprjs/nextcloud/resources/data-nextcloud.persistentvolume.yaml b/k8sprjs/nextcloud.old/resources/data-nextcloud.persistentvolume.yaml
similarity index 100%
rename from k8sprjs/nextcloud/resources/data-nextcloud.persistentvolume.yaml
rename to k8sprjs/nextcloud.old/resources/data-nextcloud.persistentvolume.yaml
diff --git a/k8sprjs/nextcloud/resources/db-nextcloud.persistentvolume.yaml b/k8sprjs/nextcloud.old/resources/db-nextcloud.persistentvolume.yaml
similarity index 100%
rename from k8sprjs/nextcloud/resources/db-nextcloud.persistentvolume.yaml
rename to k8sprjs/nextcloud.old/resources/db-nextcloud.persistentvolume.yaml
diff --git a/k8sprjs/nextcloud/resources/html-nextcloud.persistentvolume.yaml b/k8sprjs/nextcloud.old/resources/html-nextcloud.persistentvolume.yaml
similarity index 100%
rename from k8sprjs/nextcloud/resources/html-nextcloud.persistentvolume.yaml
rename to k8sprjs/nextcloud.old/resources/html-nextcloud.persistentvolume.yaml
diff --git a/k8sprjs/nextcloud/resources/nextcloud.namespace.yaml b/k8sprjs/nextcloud.old/resources/nextcloud.namespace.yaml
similarity index 100%
rename from k8sprjs/nextcloud/resources/nextcloud.namespace.yaml
rename to k8sprjs/nextcloud.old/resources/nextcloud.namespace.yaml