refactor: moved to hugo
content/blog/2024/02/26/fosdem-2024/index.md

+++
title = "FOSDEM 2024"
date = 2024-02-26
tags = ["conferences", "fosdem"]
+++

![FOSDEM 2024](fosdem-2024.jpeg)

The first weekend of February brought, as usual, the FOSDEM conference in Belgium, and I could not miss it. I started attending a few years back, and since then I have tried to go whenever my schedule allowed it.

This is a very brief summary of my experience over the two days I was there, though this year I left early on Sunday, before the conference properly finished, so that day is a bit more sparse. There were also a lot of talks on the agenda I couldn't attend, so I have a **huge** backlog of videos to watch now.

<!--more-->

As usual I met up with friends and old colleagues, and the event in general was a blast. I came back with recharged batteries to dedicate to my open source projects and to revive this blog. I haven't had much spare time since then, but I will try to put in a bit of daily grind to meet some goals.

## Saturday

This is the best day for me: you arrive fresh, full of motivation (unless you got back late from Delirium Cafe the night before!) and eager to get things rolling.

After a crappy breakfast (there was no coffee shop open near our place except Starbucks) we went to the conference, and the first thing to do: donate! I got a hoodie and two t-shirts. I think it's only natural to donate _something_ to an event that is free to attend for everyone, and very well organized on top of that.

> I may be biased on that last statement, since this is the only conference I attend. Years ago I went to more (mostly local conferences in Spain), but now this is the one.

After the [Welcome to FOSDEM 2024](https://fosdem.org/2024/schedule/event/fosdem-2024-3023-welcome-to-fosdem-2024/) opening keynote, I settled into the Go track, since it was the one that interested me the most and a colleague was giving a talk there as well. I attended quite a few talks in that track (with a stop for lunch, of course):

- [The state of Go](https://fosdem.org/2024/schedule/event/fosdem-2024-1681-the-state-of-go/): A regular for the past few years, an update on the interesting changes that have happened (or are about to happen) in Go since last FOSDEM. I already knew most of them, but I made a note to review `yield` and `RangeFunc`, which I didn't know were coming to the language.

- [The secret life of a goroutine](https://fosdem.org/2024/schedule/event/fosdem-2024-1704-the-secret-life-of-a-goroutine/): An interesting talk about how goroutines work, told from the point of view of a _necromancer_. The content was good but rushed, since the speaker only had 25 minutes and had to prepare the audience before explaining the concepts, but going deeper and getting a better understanding is good homework for the audience (us) to do afterwards.

- [You're already running my code in production: My simple journey to becoming a Go contributor.](https://fosdem.org/2024/schedule/event/fosdem-2024-1813-you-re-already-running-my-code-in-production-my-simple-journey-to-becoming-a-go-contributor-/): The humble story of how a bug report became a Go contribution and the steps required to get there, showing that anyone can aim to contribute to the Go source code.

- [Maintaining Go as a day job - a year later](https://fosdem.org/2024/schedule/event/fosdem-2024-2000-maintaining-go-as-a-day-job-a-year-later/): A very descriptive and fun talk about the pros and cons of maintaining open source software, from his point of view as maintainer of the cryptography library and from that of other contractors, showing that working in open source full time is possible, just not what you might expect.

- [How we almost secured our projects by writing more tests](https://fosdem.org/2024/schedule/event/fosdem-2024-1884-how-we-almost-secured-our-projects-by-writing-more-tests/)
- [Dependency injection: a different way to structure a project](https://fosdem.org/2024/schedule/event/fosdem-2024-1868-dependency-injection-a-different-way-to-structure-a-project/)

- [Putting an end to Makefiles in go projects with GoReleaser](https://fosdem.org/2024/schedule/event/fosdem-2024-1853-putting-an-end-to-makefiles-in-go-projects-with-goreleaser/): I already knew that GoReleaser was awesome, but the statement was bold enough to catch my interest. In the end it just showed off basic concepts of GoReleaser, and I still love my Makefiles.

- [Low code graphical apps with Go top to bottom!](https://fosdem.org/2024/schedule/event/fosdem-2024-2621-low-code-graphical-apps-with-go-top-to-bottom-/): A talk about [fyne](https://fyne.io/) and the new (or at least new to me) [fyne GUI editor "defyne"](https://github.com/fyne-io/defyne), showing how you can build interfaces directly from a graphical editor, which automatically generates JSON definitions and the Go code to go with your app.

- [Creating a multiplayer game in Go, from zero](https://fosdem.org/2024/schedule/event/fosdem-2024-1886-creating-a-multiplayer-game-in-go-from-zero/)

- [Smartwatch firmware... in Go? On TinyGo, small displays, and building a delightful developer experience](https://fosdem.org/2024/schedule/event/fosdem-2024-2562-smartwatch-firmware-in-go-on-tinygo-small-displays-and-building-a-delightful-developer-experience/): The content was super interesting and fun, even though the speaker was clearly nervous. It was amazing to see a tiny firmware for the PineTime built with Go, and even with just basic features I saw on the fediverse that the battery lasted for more than a month!

- [Go Without Wires Strikes Back](https://fosdem.org/2024/schedule/event/fosdem-2024-2270-go-without-wires-strikes-back/): The title was a bit of a lie, since there were wires involved, but -as expected- it didn't disappoint, and he managed to fly a drone over the crowd, **again**. TinyGo looks awesome; if only I had a project I could build with it.

And with that, we left for the day to have dinner. The venue of choice was [BrewDog](https://www.brewdog.com/eu_en/brewdog-brussels): there had been some events happening there the day before, so we went for dinner and some drinks, and we met some people from FOSDEM as well, as more meetups were happening on Saturday too.

After that, another mandatory visit to Delirium Cafe, and the day was over.
## Sunday

I attended far fewer talks than anticipated because:

1) Sunday is usually social day, which means I run into old colleagues and friends, so I stop to chat, grab something to drink, etc.
2) This year I left on Sunday so I had to skip some of the afternoon talks.
3) I also missed some morning talks because the taxi that was supposed to pick me up flatly ignored me and drove off, so I arrived at the venue more than an hour late.

When life gives you lemons, make lemonade. I took advantage of the situation and sat down with my laptop to clear out the GitHub notifications that pile up week after week. I wanted to release [shiori](https://github.com/go-shiori/shiori) 1.6.0 during FOSDEM, but an unfortunate Windows bug had other plans.

Some of the talks I attended that day:

- [New Workflow Orchestrator in town: "Apache Airflow 2.x"](https://fosdem.org/2024/schedule/event/fosdem-2024-1652--new-workflow-orchestrator-in-town-apache-airflow-2-x-/)

- [Data workflows: translating dbt to Apache Airflow](https://fosdem.org/2024/schedule/event/fosdem-2024-1651-data-workflows-translating-dbt-to-apache-airflow/)
- [A slow migration from Django templates to Vue+GraphQL](https://fosdem.org/2024/schedule/event/fosdem-2024-2326-a-slow-migration-from-django-templates-to-vue-graphql/)
- [Chaos Engineering in Action: Enhancing Resilience in Strimzi](https://fosdem.org/2024/schedule/event/fosdem-2024-2194-chaos-engineering-in-action-enhancing-resilience-in-strimzi/)
- [Version control post-Git](https://fosdem.org/2024/schedule/event/fosdem-2024-3423-version-control-post-git/)

And from there, to the airport, and home! See you next year, FOSDEM!

## The ones I missed

These are the talks I missed because of the problems I had, conflicting events, or because I didn't make it in time or the room was full, in no particular order:

- [Where have the women of tech history gone?](https://fosdem.org/2024/schedule/event/fosdem-2024-2850-where-have-the-women-of-tech-history-gone-/)
- [DIY Private Container Registry](https://fosdem.org/2024/schedule/event/fosdem-2024-2161-diy-private-container-registry/)
- [Open Food Facts: Learning and using Perl in 2024 to transform the food system !](https://fosdem.org/2024/schedule/event/fosdem-2024-3743-open-food-facts-learning-and-using-perl-in-2024-to-transform-the-food-system-/)
- [Observations on a DNSSEC incident: the russian TLD](https://fosdem.org/2024/schedule/event/fosdem-2024-3740-observations-on-a-dnssec-incident-the-russian-tld/)
- [A simple caching service for your CI](https://fosdem.org/2024/schedule/event/fosdem-2024-2671-a-simple-caching-service-for-your-ci/)
- [The API Landscape : mapping the 2000+ API and opensource tooling for Developers](https://fosdem.org/2024/schedule/event/fosdem-2024-2952-the-api-landscape-mapping-the-2000-api-and-opensource-tooling-for-developers/)
- [Effortless Bug Hunting with Differential Fuzzing](https://fosdem.org/2024/schedule/event/fosdem-2024-1927-effortless-bug-hunting-with-differential-fuzzing/)
- [Cost-Effective AI Processing with Open Source Infrastructure](https://fosdem.org/2024/schedule/event/fosdem-2024-3656-cost-effective-ai-processing-with-open-source-infrastructure/)
- [What's new in Containerd 2.0!](https://fosdem.org/2024/schedule/event/fosdem-2024-3060-what-s-new-in-containerd-2-0-/)
- [An open-source, open-hardware offline finding system](https://fosdem.org/2024/schedule/event/fosdem-2024-3264-an-open-source-open-hardware-offline-finding-system/)
- [vscode-container-wasm: An Extension of VSCode on Browser for Running Containers Within Your Browser](https://fosdem.org/2024/schedule/event/fosdem-2024-3187-vscode-container-wasm-an-extension-of-vscode-on-browser-for-running-containers-within-your-browser/)
- [How do you write an emulator anyway ?](https://fosdem.org/2024/schedule/event/fosdem-2024-2146-how-do-you-write-an-emulator-anyway-/)
- [Panda3DS: Climbing the tree of 3DS emulation](https://fosdem.org/2024/schedule/event/fosdem-2024-1726-panda3ds-climbing-the-tree-of-3ds-emulation/)
- [Breathing Life into Legacy: An Open-Source Emulator of Legacy Apple Devices](https://fosdem.org/2024/schedule/event/fosdem-2024-2826-breathing-life-into-legacy-an-open-source-emulator-of-legacy-apple-devices/)
- [CONFEDSS: Concolic execution and the puzzling practice of peripheral emulation](https://fosdem.org/2024/schedule/event/fosdem-2024-2247-confedss-concolic-execution-and-the-puzzling-practice-of-peripheral-emulation/)
- [Arm64EC: Microsoft's emulation Frankenstein](https://fosdem.org/2024/schedule/event/fosdem-2024-1762-arm64ec-microsoft-s-emulation-frankenstein/)
- [Yet another event sourcing library](https://fosdem.org/2024/schedule/event/fosdem-2024-2255-yet-another-event-sourcing-library/)
- [Self-hosting and autonomy using guix-forge](https://fosdem.org/2024/schedule/event/fosdem-2024-2560-self-hosting-and-autonomy-using-guix-forge/)
- [Do you know YAML?](https://fosdem.org/2024/schedule/event/fosdem-2024-2046-do-you-know-yaml-/)
- [Welcome to Retrocomputing Devroom](https://fosdem.org/2024/schedule/event/fosdem-2024-3592-welcome-to-retrocomputing-devroom/)
- [Project websites that don't suck](https://fosdem.org/2024/schedule/event/fosdem-2024-3154-project-websites-that-don-t-suck/)
- [FOSS for DOCS](https://fosdem.org/2024/schedule/event/fosdem-2024-2043-foss-for-docs/)
- [Journey to an open source contribution](https://fosdem.org/2024/schedule/event/fosdem-2024-1776-journey-to-an-open-source-contribution/)
- [Gameboy Advance hacking for retrogamers](https://fosdem.org/2024/schedule/event/fosdem-2024-1771-gameboy-advance-hacking-for-retrogamers/)
- [Gotta Catch ‘Em All! Raspberry Pi and Java Pokemon Training](https://fosdem.org/2024/schedule/event/fosdem-2024-3629-gotta-catch-em-all-raspberry-pi-and-java-pokemon-training/)
- [A Game Boy and his cellphone](https://fosdem.org/2024/schedule/event/fosdem-2024-1718-a-game-boy-and-his-cellphone/)
- [The wonderful life of a SQL query in a streaming database](https://fosdem.org/2024/schedule/event/fosdem-2024-3342-the-wonderful-life-of-a-sql-query-in-a-streaming-database/)
- [Switching the FOSDEM conference management system to pretalx](https://fosdem.org/2024/schedule/event/fosdem-2024-3472-switching-the-fosdem-conference-management-system-to-pretalx/)
- [S2S: PeerTube instance dedicated to Sign Language](https://fosdem.org/2024/schedule/event/fosdem-2024-2802-s2s-peertube-instance-dedicated-to-sign-language/)
- ... and probably others
## Worth mentioning
### Cookies from Firefox!

There was a small stand giving away free cookies, courtesy of Firefox. Do you accept cookies?


### MacBook Pro M2 battery life

I unplugged my laptop early on Friday morning, and when I arrived back home on Sunday night it still had 19% battery left, after using it to take notes, watch videos and develop, with [orbstack](https://orbstack.dev/) running in the background... In terms of battery life I haven't seen anything better.

+++
title = "Create an audiobook file from several mp3 files using ffmpeg"
date = 2024-03-12
tags = ["guide", "audio", "ffmpeg"]
+++

Due to some recent traveling I have started listening to audiobooks. I love reading, but sometimes my eyes are just too tired to keep at it while I'm not sleepy at all, or maybe I just want the convenience of lying down but still _doing_ something.

Long story short: I bought some from a well-known distributor, but I'm a fan of data preservation and of actually owning what I pay for. I found an application on GitHub that let me download the files that make up an audiobook as split mp3 files, but that didn't cut it. I wanted a single file with correct metadata, so I got my hands dirty.


<!--more-->

Assuming you have the mp3 files lying around in a folder, we need to generate or find three more things:

- **`files.txt`**: A list of the MP3 files in the order we are going to combine them, in a plain text file with the format `file '<filename>'`.

You can use a simple shell command to generate the file (note that `find` does not guarantee any particular order, so sort the output if your files are numbered):

```sh
find . -type f -name "*.mp3" -exec printf "file '%s'\n" {} \; | sed "s|\.\/||g" | sort > files.txt
```
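
The resulting `files.txt` ends up looking like this (hypothetical file names):

```
file 'Chapter 01.mp3'
file 'Chapter 02.mp3'
file 'Chapter 03.mp3'
```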
- **`metadata.txt`**: A metadata file for the bundled output that contains basic information about the audiobook (title, author, narrator, ...) along with chapter information.

An example looks like this; the format is documented [in the ffmpeg documentation](https://ffmpeg.org/ffmpeg-formats.html#Metadata-1):

```ini
;FFMETADATA1
title=Book name
artist=Book author(s)
composer=Book narrator(s)
publisher=Book publisher
date=Book date of publication

; Chapter timestamps are expressed in TIMEBASE units (here: milliseconds).
; This chapter runs from 0:00:00 to 0:01:00. Note that comments must be on
; their own lines, starting with ";" or "#".
[CHAPTER]
TIMEBASE=1/1000
START=0
END=60000
title=Intro

; Repeat the [CHAPTER] block for all chapters
```
- **`cover.jpg`**: The artwork of the audiobook. The files I have been using (downloaded from the store) are 353x353 at 72dpi. I'm unsure if that's a requirement, but at least be sure to use square images.

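Computing the `START`/`END` offsets by hand gets tedious for long books, so the `[CHAPTER]` blocks can be generated from a list of chapter durations instead. A minimal sketch (the durations and titles below are hypothetical; in practice you could read each file's length with `ffprobe`):

```sh
# Generate [CHAPTER] blocks for metadata.txt from "duration-in-ms<TAB>title"
# pairs on stdin. Each chapter starts where the previous one ended.
printf '60000\tIntro\n180000\tChapter 1\n' | awk -F '\t' '
{
    printf "[CHAPTER]\nTIMEBASE=1/1000\nSTART=%d\nEND=%d\ntitle=%s\n\n", start, start + $1, $2
    start += $1
}'
```

Append the output to the tag section of `metadata.txt` and you have the full file.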
With all the files in place, we can use `ffmpeg` to do the conversion. I have followed a multi-step approach to make sure I can review the process and fix any issues that may arise. I'm pretty sure this could be simplified somehow, but I'm not an `ffmpeg` expert and I have spent too much time on this already.

1. Concatenate the files into a single `mp3` file.
```sh
ffmpeg -f concat -i files.txt -c copy build_01_concat.mp3
```

- `-f concat`: Use the `concat` demuxer, meaning we are going to concatenate files.
- `-i files.txt`: The file with the list of files to concatenate, passed as input to ffmpeg.
- `-c copy`: Stream copy each stream, meaning we are not going to re-encode the files, just copy them.
- `build_01_concat.mp3`: The output file.

2. Add the cover to the file.

```sh
ffmpeg -i build_01_concat.mp3 -i cover.jpg -c copy -map 0 -map 1 build_02_cover.mp3
```

- `-i build_01_concat.mp3`: The file created in the previous step, as first input to ffmpeg.
- `-i cover.jpg`: The cover image to be added to the file, as second input to ffmpeg.
- `-c copy`: Stream copy each stream, meaning we are not going to re-encode the files, just copy them.
- `-map 0 -map 1`: Map all streams from both inputs into the output file.
- `build_02_cover.mp3`: The output file.

3. Convert the `mp3` file to `m4a`.

```sh
ffmpeg -i build_02_cover.mp3 -c:v copy build_03_m4a.m4a
```

- `-i build_02_cover.mp3`: The file created in the previous step, as input to ffmpeg.
- `-c:v copy`: Copy the video stream (the cover art) as-is; the audio is re-encoded to AAC, the default for the `m4a` container.
- `build_03_m4a.m4a`: The output file.

4. Add the metadata to the `m4a` file and convert it to `m4b`.
```sh
ffmpeg -i build_03_m4a.m4a -i metadata.txt -map 0 -map_metadata 1 -c copy book.m4b
```

- `-i build_03_m4a.m4a`: The file created in the previous step, as first input to ffmpeg.
- `-i metadata.txt`: The metadata file, as second input to ffmpeg.
- `-map 0 -map_metadata 1`: Map all streams from the first input, and take the metadata (including chapters) from the second input.
- `-c copy`: Stream copy each stream, meaning we are not going to re-encode the files, just copy them.
- `book.m4b`: The final output file.

5. Clean up the files we created in the process that we don't need anymore.
```sh
rm build_* files.txt
```
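
For convenience, the steps above can be chained in one small script (a sketch using the file names from this post; it stops at the first failing step and only runs when the inputs are actually present):

```sh
#!/bin/sh
set -eu

# Chain the four ffmpeg steps from this post; -e aborts on the first error.
build_audiobook() {
    ffmpeg -f concat -i files.txt -c copy build_01_concat.mp3
    ffmpeg -i build_01_concat.mp3 -i cover.jpg -c copy -map 0 -map 1 build_02_cover.mp3
    ffmpeg -i build_02_cover.mp3 -c:v copy build_03_m4a.m4a
    ffmpeg -i build_03_m4a.m4a -i metadata.txt -map 0 -map_metadata 1 -c copy book.m4b
    rm build_* files.txt
}

# Guard: only run when ffmpeg and the input list exist.
if command -v ffmpeg >/dev/null && [ -f files.txt ]; then
    build_audiobook
else
    echo "skipping: ffmpeg or input files not available"
fi
```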

That's it! You should have an `m4b` file with the audiobook, ready to be imported into your favorite audiobook player. I have tested this process with a couple of books and it worked like a charm. I hope it helps you too.

+++
title = "Journey to K3S: Basic cluster setup"
date = 2024-03-14
tags = ["k3s", "homelab"]
+++

I've finally started to play with K3S, a lightweight Kubernetes distribution. I have been reading about it for a while and I'm excited to see how it performs in my home lab. My services have been running as Docker containers on an Intel NUC for some years now, but the plan is to migrate them to a k3s cluster of three NanoPC-T6 boards.

I was looking for a small form factor, low power consumption solution, and the NanoPC-T6 seems to fit the bill. I know I'm going to stumble upon some limitations, but I'm eager to see how it goes and what problems I find along the way.

My requirements are very simple: I want to run a small cluster with a few services, and I want to be able to access them from the internet and from my home. My current setup relies on Tailscale for VPN and Ingress for the services, so I'm going to try and replicate that in this new setup.

<!--more-->
## Installing DietPi on the NanoPC-T6

I'm completely new to [DietPi](https://dietpi.com), but nothing that FriendlyElec offered seemed to fit my needs. I'm not a fan of the pre-installed software and I wanted to start from scratch. I tried to find compatible OSes and there weren't many, but DietPi seemed to be a good fit and it's actively maintained.

At first I tried running from an SD card and copying the data over manually, but then I found out that the NanoPC-T6 has eMMC storage, so I decided to go with that. I flashed FriendlyWRT onto the SD card, booted it, and used its tools to flash DietPi into the eMMC.

> For this to work properly under my home network setup I had to physically connect a computer and a keyboard to the boards and disable the firewall service using `service firewall stop`. I believe this happened because the boards live in a different VLAN/subnet on my local network, since I connect new devices to a separate VLAN for security reasons. With that disabled, I could access the boards from my computer and continue the setup.


## Setting up the OS

The first thing to do once you SSH into the boards is to change the default user and global software passwords. Once the setup assistant was over, I set up a new SSH key and disabled password login.

Before installing K3s I wanted to make sure the OS was up to date and had the necessary tools to run the cluster. I used the `dietpi-software` tool to make sure some convenience utilities like `vim`, `curl` and `git` were present.

I also set the hostnames to `k3s-master-01`, `k3s-master-02` and `k3s-master-03` for the three boards using the `dietpi-config` tool.

I also installed `open-iscsi` to be prepared in case I end up setting up [Longhorn](https://longhorn.io/).

## Setting up the network

I'm using Tailscale to connect the boards to my home network and to the internet. I installed [Tailscale](https://tailscale.com) using `dietpi-software` and linked each device to my account using `tailscale up`.

I also set up static IP addresses for the boards on my home router. I'm running a custom [pfSense](https://www.pfsense.org/) router, so I reserved an IP for each board's MAC address on the VLAN they are going to reside in.

## Installing K3S
I followed the [official documentation](https://docs.k3s.io/datastore/ha-embedded) to create an embedded etcd highly available cluster.

> I'm not a fan of the `curl ... | sh` installation methods out there, but this is the official way to install K3S and I'm going to follow it for convenience. **Always check the script before running it.**

1. I created the first node using the following command:
```bash
curl -sfL https://get.k3s.io | K3S_TOKEN=<token> sh -s - server --cluster-init
```

I used the `K3S_TOKEN` environment variable to set the token for the cluster, which I will need later to join the other two nodes. Since this is the first node of the cluster, I provided the `--cluster-init` flag to initialize the cluster.

2. I joined the other two nodes to the cluster using the following command:
```bash
curl -sfL https://get.k3s.io | K3S_TOKEN=<token> sh -s - server --server https://<internal ip of the first node>:6443
```

3. Done! I have a three-node K3S cluster running in my home lab. It's **that** simple.


## Checking that it works

I'm going to deploy a simple service to check that the cluster is working properly, using the `nginx` image and exposing it through an Ingress:

1. Create the `hello-world` namespace:

```bash
kubectl create namespace hello-world
```
2. Create a simple index file:

```bash
echo "Hello, world!" > index.html
```
3. Create a `ConfigMap` with the index file:

```bash
kubectl create configmap hello-world-index-html --from-file=index.html -n hello-world
```
4. Create a deployment using the `nginx` image and the config map we just created:

```bash
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-nginx
  namespace: hello-world
spec:
  selector:
    matchLabels:
      app: hello-world
  replicas: 3
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: hello-world-volume
              mountPath: /usr/share/nginx/html
      volumes:
        - name: hello-world-volume
          configMap:
            name: hello-world-index-html
EOF
```
5. Create the service to expose the deployment:

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: hello-world
spec:
  ports:
    - port: 80
      protocol: TCP
  selector:
    app: hello-world
EOF
```
6. Create the Ingress to expose the service to the internet:

```bash
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
  namespace: hello-world
spec:
  ingressClassName: "traefik"
  rules:
    - host: hello-world.fmartingr.dev
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-world
                port:
                  number: 80
EOF
```

Done! I can access the service from my local network using the `hello-world.fmartingr.dev` domain:



That's it! I have a cluster running and I can start playing with it. There's a lot more to be done, and progress will be slow since I'm doing this in my free time to dogfood Kubernetes at home.

Will follow up with updates once I make more progress, see you in the next one.

+++
title = "Journey to K3S: Deploying the first service and its requirements"
date = 2024-03-25
tags = ["k3s", "homelab"]
edit_comment = "**2024/04/29**: Fixed a typo in the CloudNative PostgreSQL Operator chart example. The `valuesContent` was incorrect as it used attributes from the `Cluster` CRD, not the Chart."
+++

I have my K3S cluster up and running, and I'm ready to deploy my first service. I'm going to start migrating one of the simplest services I have running in my current docker setup, the RSS reader [Miniflux](https://miniflux.app/).
I'm going to use Helm charts throughout the process, since k3s supports Helm out of the box, but for this first service there's also some preparation to do. I'm missing a storage backend, a way to ingress traffic from the internet, a way to manage certificates, and the database. I also need to migrate my current data from one database to another, but both are PostgreSQL databases, so I guess simple `pg_dump`/`pg_restore` or `psql` commands will do the trick.


<!--more-->
## Setting up Longhorn for storage
The first thing I need is a storage backend for my services. I'm going to use Longhorn, since it's a simple, easy-to-use solution that works well with k3s. I'm going to install it using Helm with the default configuration for now.

```yaml
# longhorn-helm-chart.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: longhorn
  namespace: kube-system
spec:
  repo: https://charts.longhorn.io
  chart: longhorn
  targetNamespace: longhorn-system
  createNamespace: true
  version: v1.6.0
```

```bash
$ kubectl apply -f longhorn-helm-chart.yaml
```

This should generate all the required resources for Longhorn to work. In my case I also enabled an ingress for the Longhorn UI to do some setup of the node-allocated storage according to my needs and hardware, though I will not cover that in this post.

```yaml
# longhorn-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-ingress
  namespace: longhorn-system
  annotations:
    traefik.ingress.kubernetes.io/router.middlewares: longhorn-system-longhorn-auth-middleware@kubernetescrd
spec:
  ingressClassName: traefik
  rules:
    - host: longhorn.k3s-01.home.arpa
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: longhorn-frontend
                port:
                  number: 80
```

```bash
$ kubectl apply -f longhorn-ingress.yaml
```
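
The annotation on the ingress references a `longhorn-auth-middleware` that has to exist for Traefik to apply it. A minimal basic-auth middleware could look like this (a sketch: the secret name is hypothetical and must contain htpasswd-style credentials, and depending on your bundled Traefik version the API group is `traefik.io/v1alpha1` or the older `traefik.containo.us/v1alpha1`):

```yaml
# longhorn-auth-middleware.yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: longhorn-auth-middleware
  namespace: longhorn-system
spec:
  basicAuth:
    secret: longhorn-auth-secret
```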

With this you should be able to access the Longhorn UI at the domain set up in your ingress. In my case it's `longhorn.k3s-01.home.arpa`.

> Keep in mind that this is a local domain, so you might need to set up a local DNS server or add the domain to your `/etc/hosts` file.

This example is not perfect by any means; if you plan to expose this ingress, be sure to use a proper certificate and secure it with authentication and other security measures.

## Setting up cert-manager to manage certificates

The next step is to set up cert-manager to manage the certificates for my services. I'm going to use Let's Encrypt as my certificate authority and let cert-manager issue certificates for the external ingresses I'm going to set up.

```yaml
# cert-manager-helm-chart.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: cert-manager
  namespace: kube-system
spec:
  repo: https://charts.jetstack.io
  chart: cert-manager
  targetNamespace: cert-manager
  createNamespace: true
  version: v1.14.4
  valuesContent: |-
    installCRDs: true
```

```bash
$ kubectl apply -f cert-manager-helm-chart.yaml
```

To use Let's Encrypt as the certificate authority, I need to set up an issuer for it. I'm going to use the production issuer in this example, since the idea is to expose the service to the internet.

```yaml
# letsencrypt-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
  namespace: cert-manager
spec:
  acme:
    email: your@email.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-production
    solvers:
      - http01:
          ingress:
            class: traefik
```
|
||||
|
||||
```bash
|
||||
$ kubectl apply -f letsencrypt-issuer.yaml
|
||||
```
|
||||
|
||||
With this, I should be able to request certificates for my services using the `letsencrypt-production` issuer.
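
For instance, a certificate can be requested manually with a `Certificate` resource; this is just a sketch with illustrative names (`example-tls`, `example.home.arpa`):

```yaml
# example-certificate.yaml (illustrative names)
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-tls
  namespace: default
spec:
  secretName: example-tls
  dnsNames:
    - example.home.arpa
  issuerRef:
    name: letsencrypt-production
    kind: ClusterIssuer
```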

## Setting up the CloudNative PostgreSQL Operator

> The chart for Miniflux is capable of deploying a PostgreSQL instance for the service, but I'm going to use the CloudNative PostgreSQL Operator to manage the database for this service (and others) on my own. This is because I want to have the ability to manage the databases separately from the services.

Miniflux only supports PostgreSQL, so I'm going to use the CloudNative PostgreSQL Operator to manage the database. First, let's install the operator using the Helm chart:

```yaml
# cloudnative-pg-helm-chart.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: cloudnative-pg
  namespace: kube-system
spec:
  repo: https://cloudnative-pg.github.io/charts
  chart: cloudnative-pg
  targetNamespace: cnpg-system
  createNamespace: true
```

```bash
$ kubectl apply -f cloudnative-pg-helm-chart.yaml
```

This will install the CloudNative PostgreSQL Operator in the `cnpg-system` namespace. I'm going to create a PostgreSQL instance for Miniflux in the `miniflux` namespace.

```yaml
# miniflux-db.yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: miniflux-db
  namespace: miniflux
spec:
  instances: 2
  storage:
    size: 2Gi
    storageClass: longhorn
```

```bash
$ kubectl apply -f miniflux-db.yaml
```

With this, a PostgreSQL cluster with two instances and 2Gi of storage will be created in the `miniflux` namespace. Note that I have specified the `longhorn` storage class for the storage.

When this is finished, a new secret with the connection information for the database, called `miniflux-db-app`, will be created. It will look like this:

```yaml
apiVersion: v1
kind: Secret
type: kubernetes.io/basic-auth
metadata:
  name: miniflux-db-app
  namespace: miniflux
  # ...
data:
  dbname: <base64 encoded data>
  host: <base64 encoded data>
  jdbc-uri: <base64 encoded data>
  password: <base64 encoded data>
  pgpass: <base64 encoded data>
  port: <base64 encoded data>
  uri: <base64 encoded data>
  user: <base64 encoded data>
  username: <base64 encoded data>
```

We are going to reference this secret directly in the Miniflux deployment below.
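
The values are plain base64, so if you want to peek at one of them (for example the connection URI) something along these lines should work, assuming `kubectl` points at the cluster:

```bash
# Decode the "uri" key from the secret created by the operator
kubectl get secret miniflux-db-app -n miniflux -o jsonpath='{.data.uri}' | base64 -d
```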

## Deploying Miniflux

Now that we have all the requirements set up, we can deploy Miniflux.

I'm going to use [gabe565's miniflux helm chart](https://artifacthub.io/packages/helm/gabe565/miniflux) for this, since it's simple and easy to use. I tried the [TrueCharts](https://artifacthub.io/packages/helm/truecharts/miniflux) chart but I couldn't get it to work properly, since they only support amd64 and I'm running on arm64, though a few tweaks here and there _should_ make it work.

```yaml
# miniflux-helm-chart.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: miniflux
  namespace: kube-system
spec:
  repo: https://charts.gabe565.com
  chart: miniflux
  targetNamespace: miniflux
  createNamespace: true
  version: 0.8.1
  valuesContent: |-
    image:
      tag: 2.1.1
    env:
      CREATE_ADMIN: "0"
      DATABASE_URL:
        secretKeyRef:
          name: miniflux-db-app
          key: uri
    postgresql:
      enabled: false
```

> To customize Miniflux check out their [configuration](https://miniflux.app/docs/configuration.html) documentation and set the appropriate values in the `env` section.

```bash
$ kubectl apply -f miniflux-helm-chart.yaml
```

> I'm using `CREATE_ADMIN: "0"` to avoid creating an admin user for Miniflux, since I already have one in my current database after I migrated it. If you want to create an admin user you can set this to `1` and set the `ADMIN_USERNAME` and `ADMIN_PASSWORD` values in the `env` section. See the [chart documentation](https://artifacthub.io/packages/helm/gabe565/miniflux) for more information.

This will create a Miniflux deployment in the `miniflux` namespace, using the `miniflux-db-app` database secret for the database connection.

Wait until everything is ready in the `miniflux` namespace:

```bash
$ kubectl get pods -n miniflux
NAME                        READY   STATUS
miniflux-678b9c8ff5-7dbj5   1/1     Running
miniflux-db-1               1/1     Running
miniflux-db-2               1/1     Running

$ kubectl logs -n miniflux miniflux-678b9c8ff5-7dbj5
time=2024-03-24T23:00:42.487+01:00 level=INFO msg="Starting HTTP server" listen_address=0.0.0.0:8080
```

## Setting up an external ingress

> I'm not going to cover the networking setup for this, but your cluster should be able to route traffic from the internet to the ingress controller (the master nodes). In my case I'm using a zero-trust approach with Tailscale to avoid exposing my homelab directly to the internet, but there are a number of ways to do this; pick the one that suits you best.

Setting up an ingress for the service that supports SSL is easy with cert-manager and Traefik: we only need to create an `Ingress` resource in the `miniflux` namespace with the appropriate configuration and annotations, and cert-manager will take care of the rest:

```yaml
# miniflux-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: miniflux-external
  namespace: miniflux
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-production
spec:
  ingressClassName: traefik
  rules:
    - host: miniflux.fmartingr.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: miniflux
                port:
                  number: 8080
  tls:
    - secretName: miniflux-fmartingr-com-tls
      hosts:
        - miniflux.fmartingr.com
```

```bash
$ kubectl apply -f miniflux-ingress.yaml
```

This will create an ingress for Miniflux in the `miniflux` namespace, and cert-manager will take care of certificate generation and renewal using the `letsencrypt-production` issuer as specified in the `annotations` attribute.
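
If issuance takes a while, it can be followed through the `Certificate` resource that cert-manager creates from the annotation (it is named after the `secretName`); a quick check, assuming the manifest above:

```bash
kubectl get certificate -n miniflux
kubectl describe certificate miniflux-fmartingr-com-tls -n miniflux
```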

After a few minutes you should be able to access Miniflux at the domain set up in the `host` field:

```
$ curl -I https://miniflux.fmartingr.com
HTTP/2 200
server: traefik
...
```

And that's it! You should have Miniflux up and running in your k3s cluster with all the requirements set up.

I can't recommend [Miniflux](https://miniflux.app) enough: it's a great RSS reader that is simple to use and has a great UI. It was probably the first service I deployed in my homelab and I'm happy to have it running in my k3s cluster now, years later.

+++
title = "Audiobooks can be a great alternative to TV"
date = 2024-04-08
tags = ["audiobooks", "books", "opinion"]
+++

We have been doing a sort of experiment lately. My SO had eye surgery a few weeks ago, and during the first days she barely opened her eyes; even when she could open them, blue light took a toll and made her eyes dry and tired really quickly.
Since I still had to work and she had to rest, I suggested she try listening to an audiobook. She was a bit skeptical at first, but gave it a try.

I got her [Yumi and the nightmare painter](https://openlibrary.org/works/OL34050635W/Yumi_and_the_Nightmare_Painter?edition=); she already read [Tress of the emerald sea](https://openlibrary.org/works/OL28687656W/Tress_of_the_Emerald_Sea?edition=) a few months ago and wanted to read something else from the same author, and I got the feeling that this one would suit her too. I had a long plane trip ahead of me at that time, so I tried it out as well, though in my case the book wasn't new to me since I read it last year when it was released.

She loved it, both the book and the experience, and finished it in a couple of days. She even asked for more! Since _Yumi and the nightmare painter_ is a short, self-contained book, talking it over we decided to try a longer series together. A friend gave her [Steelheart](https://openlibrary.org/works/OL16807297W/Steelheart_%28The_Reckoners_Book_1%29?edition=steelheart0000sand_g0e1) for her birthday last year, so we decided to start _The Reckoners_.



<!--more-->

What happened since then? We have been ~~reading~~ listening to chapters **almost every day**. I don't remember the day we started, but we have gone through [Steelheart](https://openlibrary.org/works/OL16807297W/Steelheart_%28The_Reckoners_Book_1%29?edition=steelheart0000sand_g0e1), [Firefight](https://openlibrary.org/books/OL27097630M?edition=) and we are 75% through [Calamity](https://openlibrary.org/books/OL26885980M/Calamity). That's 35 hours of audiobooks in less than two months.

We replaced TV during lunch/dinner with audiobooks, which in our specific case means we can sit down together at the dining table instead of the couch, since not all our furniture aligns with the TV. We also listen to audiobooks while doing chores -making them less boring-, when we sit at the desk doing each our own thing, and sometimes before going to bed if we are not too tired and can pay attention properly.

Why do we like it? We are spending more time together, we are _reading_ more, and we are enjoying the books we are listening to (it would be weird otherwise, yes) with the benefits books have: we are not given everything on a silver platter, we need to use our imagination, and that sparks more conversation than just watching TV. We discuss not only what we think will happen next or how something came to be, but also how we imagine the characters, the places, and so on. Our heads and past experiences are different, so we have different ideas of how things look in these imaginary worlds.

The result? We haven't watched TV in almost two months and it doesn't seem that we are going to start watching TV regularly again.

> **Full disclosure:** While I haven't watched TV with her, I have watched a few episodes of series I'm following on my own, but I have been watching way less TV than usual. A man needs his [Solo leveling](https://anilist.co/manga/105398/Na-Honjaman-Level-Up).

We are enjoying audiobooks a lot and already planning what to listen to next.

+++
title = "Importing data manually into a longhorn volume"
date = 2024-04-09
tags = ["k3s", "homelab"]
+++

I was in the process of migrating [Shiori](https://github.com/go-shiori/shiori) from my docker environment to the new [k3s cluster I'm setting up](https://blog.josem.dev/2024-04-08-setting-up-a-k3s-cluster-on-raspberry-pi/). Shiori is a bookmarks manager that uses an SQLite database and a folder to store the data from the bookmarks. I didn't want to switch engines just yet since I want to improve SQLite's performance first, so I decided to move the data directly to a longhorn volume.

This is probably super simple and widely known, but it wasn't clear to me at first. Posting it here for future reference and for anyone who might find it useful.

<!--more-->

Considering that I already have the data from the docker volume in a `tar.gz` file exported with the correct hierarchy, the migration process is way simpler than I anticipated. I just need to create the Longhorn volume and the volume claim, create a pod that has access to the volume, and pipe the data into the pod to the appropriate location.

First, create your volume in whatever way you prefer. You can apply the YAML directly or use the Longhorn UI to create the volume. I created mine using the UI beforehand.

With the volume and volume claim (named `shiori-data`) created, I'm going to create a pod that has access to the volume via the volume claim. I'm going to use the same `shiori` image that I'm going to use in the final pod that will use the volume claim, since I'm lucky enough to have the `tar` command in there. If you don't have it, you can use a different image that bundles `tar`.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shiori-import-pod
  namespace: shiori
spec:
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: shiori-data
  containers:
    - name: shiori
      image: ghcr.io/go-shiori/shiori:v1.6.2
      volumeMounts:
        - mountPath: "/tmp/shiori-data"
          name: data
  # In my personal case, I need to specify user, group and filesystem group to match the longhorn volume
  # with the docker image specification.
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
```

With the pod running I can copy the data into the volume by piping it into an `exec` call and unpacking it with `tar` on the fly:

```bash
cat shiori_data.tar.gz | kubectl exec -i -n shiori shiori-import-pod -- tar xzvf - -C /tmp/shiori-data/
```

> **Note**: I tried using `kubectl cp` first to copy the file into the pod -since internally it uses the same approach-, but I had some issues apparently due to different `tar` versions on my host machine and the destination pod, so I decided to use the pipe approach and it worked. The result should be the same.

With the data copied into the volume I can now delete the import pod and deploy the application using the appropriate volume claim. In my case I just need to change the `mountPath` in the deployment container spec to the correct path where the application expects the data to be.

I don't know why I expected this to be harder than it really is, but I am happy that I was able to migrate everything in less than an hour.

+++
title = "Journey to K3s: Basic Cluster Backups"
date = 2024-04-21
tags = ["k3s", "backups", "homelab"]
+++

There is a time to deploy new services to the cluster, and there is a time to back up the cluster. Before I start depending more and more on the services I want to self-host, it's time to start thinking about backups and disaster recovery. My previous server has been running with a simple premise: if it breaks, I can rebuild it.

I'm going to try and keep that same simple approach here: theoretically, if something bad happens I should be able to rebuild the cluster from scratch by backing up cluster snapshots and the data stored in the persistent volumes.



<!--more-->

## Cluster resources

In my case I store all the resources I create in a git repository (namespaces, helm charts, configuration for the charts, etc.) so I can recreate them easily if needed. This is a good practice to have in place, but it's also a good idea to have a backup of the resources in the cluster to avoid problems when the cluster tries to regenerate the state from the same resources.

## Set up the NFS share

> In my case the required packages to mount NFS shares were already installed in the system; your experience may vary depending on the distribution you are using.

First I had to create the folder where the NFS share will be mounted:

```bash
mkdir -p /mnt/k3s-01
```

Mount the NFS share:

```bash
sudo mount nfs-server.home.arpa:/shares/k3s-01 /mnt/k3s-01
```

Check if the NFS share is mounted correctly by listing the contents of the folder, creating a file and checking the available disk space:

```bash
$ ls /mnt/k3s-01
k3s-master-01

$ df -h
Filesystem                           Size  Used Avail Use% Mounted on
...
nfs-server.home.arpa:/shares/k3s-01  1.8T  1.1T  682G  62% /mnt/k3s-01
...

$ touch /mnt/k3s-01/k3s-master-01/test.txt
$ ls /mnt/k3s-01/k3s-master-01
test.txt
```

With this I have the NFS share mounted and ready to be used by the cluster, and I can start storing the backups there.
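
The `mount` command above won't survive a reboot; to make it persistent, an `/etc/fstab` entry along these lines could be added (same paths as above, with a common set of NFS options):

```text
# /etc/fstab
nfs-server.home.arpa:/shares/k3s-01  /mnt/k3s-01  nfs  defaults,_netdev  0  0
```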

## The cluster snapshots

Thankfully, [k3s provides a very straightforward method to create snapshots by either using the `k3s etcd-snapshot` command](https://docs.k3s.io/datastore/backup-restore) to create them manually or by setting up a cron job to create them automatically. The cron job is set up by default, so I only had to adjust the schedule and retention to my liking and set up a proper backup location: the NFS share.

I adjusted the `etcd-snapshot-dir` in the k3s configuration file to point to the new location, along with the retention and other options:

```yaml
# /etc/rancher/k3s/config.yaml
etcd-snapshot-retention: 15
etcd-snapshot-dir: /mnt/k3s-01/k3s-master-01/snapshots
etcd-snapshot-compress: true
```

After restarting the k3s service the snapshots will be created in the new location and the old ones will be deleted after the retention period.

You can also create a snapshot manually by running the command: `k3s etcd-snapshot save`.
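
For the recovery side, the same k3s documentation describes restoring from one of these snapshots with a cluster reset; roughly like this, where the snapshot name is a placeholder:

```bash
# Stop k3s, then reset the cluster state from a snapshot (sketch, see the k3s backup/restore docs)
systemctl stop k3s
k3s server \
  --cluster-reset \
  --cluster-reset-restore-path=/mnt/k3s-01/k3s-master-01/snapshots/<snapshot-name>
```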

## Longhorn

Very easy too! I just followed the [Longhorn documentation on setting a backup target](https://longhorn.io/docs/1.6.1/snapshots-and-backups/backup-and-restore/set-backup-target/#set-up-smbcifs-backupstore) by going to the Longhorn Web UI and specifying my NFS share as the backup target.



After setting up the backup target I created a backup of the Longhorn volumes and scheduled backups to run every day at 2am with a conservative retention policy of 3 days.

## Conclusion

Yes, it was **that** easy!

With the backups in place I can now sleep a little better knowing that I can recover from a disaster if needed. The next step is to test the backups and the recovery process to make sure everything is working as expected.

I hope I never need to use this, though. :)

+++
title = "Journey to K3s: Accessing from the Outside"
date = 2024-04-28
tags = ["k3s", "networking", "homelab"]
+++

Up until now I have been working locally (on my home network). While that is enough for most of the services I'm running, I need to access some of them from the outside. For example, I want to expose this blog to the internet and access Miniflux to read my RSS feeds on the go.

There are a few ways to achieve this, but I have some specific requirements that I want to meet:

1. **Zero-trust approach**: I don't want to expose the services directly to the internet.
2. **Public services**: Other clients apart from me should be able to access some of the services.
3. **Home IP safety**: Don't expose my home IP address directly. (This is on par with #1, but I want to make it explicit.)
4. **On-transit encryption**: Full encryption in transit from the client to the cluster, with no re-encryption in the middle.
5. No Cloudflare. (Breaks #4.)
6. No Tailscale. (Breaks #2; also there are other users at home and I don't want to have the Tailscale client running all the time.)

What does this leave me? A reverse proxy server.



<!--more-->

I'm going to set up [HAProxy](https://www.haproxy.org/) on a separate external server to act as a reverse proxy that will connect to my home k3s cluster directly, but since the DNS records will point to this HAProxy server, my home IP address will not be exposed. HAProxy won't be able to decrypt the traffic, but it will leverage the [SSL SNI header](https://en.wikipedia.org/wiki/Server_Name_Indication) to route the traffic back to the cluster for the allowed domains that I set up. This way I can have a zero-trust approach and have the traffic encrypted from the client to the cluster.

So, to start working I created a new VPS in [Hetzner Cloud](https://hetzner.cloud/?ref=gSMfCgZFSz1u) _(affiliate link)_ and installed HAProxy there, which is the easy part. Once the system is up to date and running only the minimal services, I can start working on setting up HAProxy.

## Passthrough traffic

Passthrough traffic is fairly simple: just create a _frontend_ that listens on port 443 and sends the traffic to the SSL _backend_, which checks for SSL and sends data upstream to the k3s cluster. Since I'm not decrypting the traffic, [TCP mode](https://www.haproxy.com/documentation/haproxy-configuration-tutorials/load-balancing/tcp/) is used to tunnel the traffic.

I'm going to use the [`ssl-hello-chk`](https://www.haproxy.com/documentation/haproxy-configuration-manual/latest/#4-option%20ssl-hello-chk) option to ensure the traffic is SSL. This option checks if the first bytes of the connection are a valid SSL handshake, and drops the connection if not.

```cfg
frontend k3s-01-ssl
    # Listen on port 443 (HTTPS)
    bind *:443

    # Use TCP mode, meaning that HAProxy won't decrypt the traffic and will just pass it through to the upstream server
    mode tcp

    # Enable advanced logging of TCP connections with session state and timers
    option tcplog

    # Send to the backend
    use_backend k3s-01-ssl

backend k3s-01-ssl
    # Use TCP mode, meaning that HAProxy won't decrypt the traffic and will just pass it through to the upstream server
    mode tcp

    # Balance the traffic between the servers in a round-robin fashion (not needed for a single server)
    balance roundrobin

    # Retry at least 3 times before giving up
    retries 3

    # Check if the traffic is SSL
    option ssl-hello-chk

    # Send the traffic to the k3s cluster
    server home UPSTREAM_IP:443 check
```

## Force SSL (redirect HTTP to HTTPS)

Since initially I'm not going to expose plain HTTP services, I can just [redirect](https://www.haproxy.com/documentation/haproxy-configuration-tutorials/http-redirects/#sidebar) all HTTP traffic to HTTPS: I just need to create a new frontend that listens on port 80 and redirects the traffic to the HTTPS frontend. This should work transparently for the client.

```cfg
frontend k3s-01-http
    # Listen on port 80 (HTTP)
    bind *:80

    # Use HTTP mode
    mode http

    # Redirect, switching the scheme to HTTPS
    http-request redirect scheme https
```

## Deny non-allowed domains

For security reasons I want to deny access to all domains that are not in the allowed list: that is, domains that I explicitly allow for outside access.

I'm going to create a file `/etc/haproxy/allowed-domains.txt` with the list of domains separated by newlines and use the [`acl`](https://www.haproxy.com/documentation/haproxy-configuration-tutorials/core-concepts/acls/) directive to check if the domain is in the list, abruptly dropping the connection if it's not.

The file `/etc/haproxy/allowed-domains.txt` looks like this:

```text
# /etc/haproxy/allowed-domains.txt
miniflux.fmartingr.com
```

The new configuration options go in the **frontend** part; no changes are needed on the backend.

```cfg
frontend k3s-01-ssl
    # ... other configurations

    # Allow up to 5 seconds to inspect the TCP request before giving up.
    # Required since HAProxy needs to inspect the SNI header to route the traffic.
    tcp-request inspect-delay 5s

    # Accept the request only after the hello message is received (which should contain the SNI header).
    tcp-request content accept if { req_ssl_hello_type 1 }

    # Deny the request if the domain is not in the allowed list
    acl allowed_domain req.ssl_sni -m end -i -f /etc/haproxy/allowed-domains.txt

    # Send to the backend if the domain is allowed
    use_backend k3s-01-ssl if allowed_domain
```

## Conclusion

Once all these changes are in place I can restart the HAProxy service and the traffic will be routed to the k3s cluster, so I can access the services from the outside without exposing my home IP address while keeping the traffic encrypted from the client to the cluster. Though not perfect, this is a fairly simple and good setup; it requires manual labor, but that's a fair tradeoff for the requirements I have.
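
A quick way to verify the SNI routing from the outside is to ask for the served certificate through the proxy; assuming `PROXY_IP` stands in for the VPS address:

```bash
# For an allowed domain this should print the Let's Encrypt certificate served by the cluster
openssl s_client -connect PROXY_IP:443 -servername miniflux.fmartingr.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer
```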

> Back when I set up Miniflux [I created an ingress specifically for external access](/blog/2024/03/25/journey-to-k3s-deploying-the-first-service-and-its-requirements/#setting-up-an-external-ingress) that wasn't working, since my cluster could not be reached by the ACME servers on the domain I set up. Now that HAProxy is in place, the domain can be set up to point to it and the traffic will be correctly routed to the cluster, completing the configuration by requesting a certificate from Let's Encrypt and exposing the Ingress to the internet.