refactor: moved to hugo
parent 4c6912edd0
commit e77e5583c2
604 changed files with 1675 additions and 2279 deletions
@@ -0,0 +1,28 @@
+++
title = "Audiobooks can be a great alternative to TV"
date = 2024-04-08
tags = ["audiobooks", "books", "opinion"]
+++

We have been doing a sort of experiment lately. My SO had eye surgery a few weeks ago; during the first days she barely opened her eyes, and even when she could, blue light took a toll and made her eyes dry and tired really quickly.

Since I still had to work and she had to rest, I suggested she try listening to an audiobook. She was a bit skeptical at first, but gave it a try.

I got her [Yumi and the Nightmare Painter](https://openlibrary.org/works/OL34050635W/Yumi_and_the_Nightmare_Painter?edition=); she had already read [Tress of the Emerald Sea](https://openlibrary.org/works/OL28687656W/Tress_of_the_Emerald_Sea?edition=) a few months ago and wanted to read something else by the same author, and I had the feeling this one would suit her too. I also had a long plane trip ahead of me at the time, so I tried it out as well, though in my case the book wasn't new: I had read it last year when it was released.

She loved it, both the book and the experience, and finished it in a couple of days. She even asked for more! Since _Yumi and the Nightmare Painter_ is a short, self-contained book, we talked it over and decided to try a longer series together. A friend gave her [Steelheart](https://openlibrary.org/works/OL16807297W/Steelheart_%28The_Reckoners_Book_1%29?edition=steelheart0000sand_g0e1) for her birthday last year, so we decided to start _The Reckoners_.


|
||||
|
||||
<!--more-->

What has happened since then? We have been ~~reading~~ listening to chapters **almost every day**. I don't remember the day we started, but we have gone through [Steelheart](https://openlibrary.org/works/OL16807297W/Steelheart_%28The_Reckoners_Book_1%29?edition=steelheart0000sand_g0e1) and [Firefight](https://openlibrary.org/books/OL27097630M?edition=), and we are 75% through [Calamity](https://openlibrary.org/books/OL26885980M/Calamity). That's 35 hours of audiobooks in less than two months.

We replaced TV during lunch/dinner with audiobooks, which in our specific case means we can sit down together at the dining table instead of on the couch, since not all our furniture aligns with the TV. We also listen to audiobooks while doing chores (making them less boring), when we sit at the desk each doing our own thing, and sometimes before going to bed if we are not too tired and can pay proper attention.

Why do we like it? We are spending more time together, we are _reading_ more, and we are enjoying the books we are listening to (it would be weird otherwise, yes) with the benefits books have: we are not handed everything on a silver platter, so we have to use our imagination, and that sparks more conversation than just watching TV. We discuss not only what we think will happen next or how something came to be, but also how we imagine the characters, the places, and so on. Our heads and past experiences are different, so we have different ideas of how things look in these imaginary worlds.

The result? We haven't watched TV in almost two months, and it doesn't seem like we are going to start watching it regularly again.

> **Full disclosure:** While I haven't watched TV with her, I have watched a few episodes of series I'm following on my own, though way less TV than usual. A man needs his [Solo Leveling](https://anilist.co/manga/105398/Na-Honjaman-Level-Up).

We are enjoying audiobooks a lot and are already planning what to listen to next.

@@ -0,0 +1,54 @@
+++
title = "Importing data manually into a longhorn volume"
date = 2024-04-09
tags = ["k3s", "homelab"]
+++

I was in the process of migrating [Shiori](https://github.com/go-shiori/shiori) from my Docker environment to the new [k3s cluster I'm setting up](https://blog.josem.dev/2024-04-08-setting-up-a-k3s-cluster-on-raspberry-pi/). Shiori is a bookmarks manager that uses an SQLite database and a folder to store the data from the bookmarks. I didn't want to switch database engines just yet, since I want to improve SQLite's performance first, so I decided to move the data directly into a Longhorn volume.

This is probably super simple and widely known, but it wasn't clear to me at first. I'm posting it here for future reference and for anyone who might find it useful.

<!--more-->

Considering that I already had the data from the Docker volume in a `tar.gz` file, exported with the correct hierarchy, the migration process is way simpler than I anticipated. I just need to create the Longhorn volume and the volume claim, create a pod that has access to the volume, and pipe the data into the pod at the appropriate location.

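For reference, this is roughly how such an archive can be produced from a Docker named volume — a sketch, assuming the volume is called `shiori_data` (the name here is illustrative):

```bash
# Export the contents of the Docker volume into shiori_data.tar.gz.
# The volume name (shiori_data) is an assumption; adjust to your setup.
docker run --rm \
  -v shiori_data:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czvf /backup/shiori_data.tar.gz -C /data .
```
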
First, create your volume in whatever way you prefer. You can apply the YAML directly or use the Longhorn UI to create the volume; I created mine using the UI beforehand.

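If you prefer the declarative route, a claim along these lines should be enough — a minimal sketch; the size and the `longhorn` storage class name are assumptions based on a default Longhorn install:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shiori-data
  namespace: shiori
spec:
  accessModes:
    - ReadWriteOnce
  # Assumes Longhorn is installed with its default storage class name.
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi  # Hypothetical size; adjust to your data.
```
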
With the volume and volume claim (named `shiori-data`) created, I'm going to create a pod that has access to the volume via the volume claim. I'm going to use the same `shiori` image as the final pod that will use this claim, since I'm lucky enough to have the `tar` command in there. If you don't, you can use a different image that bundles `tar`.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shiori-import-pod
  namespace: shiori
spec:
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: shiori-data
  containers:
    - name: shiori
      image: ghcr.io/go-shiori/shiori:v1.6.2
      volumeMounts:
        - mountPath: "/tmp/shiori-data"
          name: data
  # In my personal case, I need to specify user, group and filesystem group
  # to match the longhorn volume with the docker image specification.
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
```

With the pod running, I can copy the data into the volume by piping it into an `exec` call and unpacking it with `tar` on the fly:

```bash
cat shiori_data.tar.gz | kubectl exec -i -n shiori shiori-import-pod -- tar xzvf - -C /tmp/shiori-data/
```

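Before deleting the pod it's worth sanity-checking the result; a plain `ls` through `kubectl exec` does the job:

```bash
# List the unpacked files and check that ownership matches the securityContext (1000:1000).
kubectl exec -n shiori shiori-import-pod -- ls -la /tmp/shiori-data/
```
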
> **Note**: I first tried using `kubectl cp` to copy the file into the pod (internally it uses the same approach), but I had issues, apparently due to different `tar` versions on my host machine and the destination pod, so I used the pipe approach and it worked. The result should be the same.

With the data copied into the volume, I can now delete the import pod and deploy the application using the appropriate volume claim. In my case I just need to change the `mountPath` in the deployment's container spec to the path where the application expects the data to be.

I don't know why I expected this to be harder than it really is, but I'm happy I was able to migrate everything in less than an hour.

@@ -0,0 +1,85 @@
+++
title = "Journey to K3s: Basic Cluster Backups"
date = 2024-04-21
tags = ["k3s", "backups", "homelab"]
+++

There is a time to deploy new services to the cluster, and there is a time to back up the cluster. Before I start depending more and more on the services I want to self-host, it's time to start thinking about backups and disaster recovery. My previous server has been running on a simple premise: if it breaks, I can rebuild it.

I'm going to try to keep that same simple approach here: theoretically, if something bad happens, I should be able to rebuild the cluster from scratch, provided I back up the cluster snapshots and the data stored in the persistent volumes.


|
||||
|
||||
<!--more-->
|
||||
|
||||
## Cluster resources

In my case I store all the resources I create in a git repository (namespaces, Helm charts, configuration for the charts, etc.) so I can recreate them easily if needed. This is a good practice to have in place, but it's also a good idea to keep a backup of the resources as they exist in the cluster, to avoid problems when recreating the state from those same definitions.

## Set up the NFS share

> In my case the packages required to mount NFS shares were already installed on the system; your mileage may vary depending on the distribution you are using.

First I had to create the folder where the NFS share will be mounted:

```bash
mkdir -p /mnt/k3s-01
```

Then mount the NFS share:

```bash
sudo mount nfs-server.home.arpa:/shares/k3s-01 /mnt/k3s-01
```

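One caveat: a manual `mount` does not survive a reboot. An `/etc/fstab` entry along these lines makes it persistent — the options here are generic defaults, not something from my actual setup:

```text
# /etc/fstab
nfs-server.home.arpa:/shares/k3s-01  /mnt/k3s-01  nfs  defaults,_netdev  0  0
```
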
Check that the NFS share is mounted correctly by listing the contents of the folder, checking the available disk space, and creating a file:

```bash
$ ls /mnt/k3s-01
k3s-master-01

$ df -h
Filesystem                           Size  Used Avail Use% Mounted on
...
nfs-server.home.arpa:/shares/k3s-01  1.8T  1.1T  682G  62% /mnt/k3s-01
...

$ touch /mnt/k3s-01/k3s-master-01/test.txt
$ ls /mnt/k3s-01/k3s-master-01
test.txt
```

With this, the NFS share is mounted and ready to be used by the cluster, and I can start storing backups there.

## The cluster snapshots

Thankfully, k3s [provides a very straightforward method to create snapshots](https://docs.k3s.io/datastore/backup-restore): either use the `k3s etcd-snapshot` command to create them manually, or let a cron job create them automatically. The cron job is set up by default, so I only had to adjust the schedule and retention to my liking and set a proper backup location: the NFS share.

I adjusted `etcd-snapshot-dir` in the k3s configuration file to point to the new location, along with the retention and other options:

```yaml
# /etc/rancher/k3s/config.yaml
etcd-snapshot-retention: 15
etcd-snapshot-dir: /mnt/k3s-01/k3s-master-01/snapshots
etcd-snapshot-compress: true
```

After restarting the k3s service, new snapshots are created in the new location, and old ones are deleted once they fall outside the retention window.

You can also create a snapshot manually by running `k3s etcd-snapshot save`.

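To apply the new settings and confirm they work, a restart plus a manual snapshot is enough — assuming k3s runs as the usual `k3s` systemd service:

```bash
# Restart k3s so it picks up the new snapshot settings.
sudo systemctl restart k3s

# Take a one-off snapshot and confirm it lands on the NFS share.
sudo k3s etcd-snapshot save
ls /mnt/k3s-01/k3s-master-01/snapshots
```
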
## Longhorn

Very easy too! I just followed the [Longhorn documentation on NFS backup stores](https://longhorn.io/docs/1.6.1/snapshots-and-backups/backup-and-restore/set-backup-target/#set-up-smbcifs-backupstore), going into the Longhorn web UI and specifying my NFS share as the backup target.


|
||||
|
||||
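For reference, the backup target for an NFS share is a URL of the form `nfs://host:/export/path`; mine looks roughly like this (the exact subfolder is illustrative, not my actual path):

```text
nfs://nfs-server.home.arpa:/shares/k3s-01/longhorn
```
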
After setting up the backup target I created a backup of the Longhorn volumes and scheduled backups to run every day at 2am with a conservative retention policy of 3 days.

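I configured the schedule through the UI, but the equivalent as a manifest would look roughly like this — a sketch based on Longhorn's `RecurringJob` resource, not the exact object I created:

```yaml
apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: daily-backup
  namespace: longhorn-system
spec:
  task: backup        # create backups (as opposed to local snapshots)
  cron: "0 2 * * *"   # every day at 2am
  retain: 3           # keep the last 3 backups
  concurrency: 1
  groups:
    - default         # apply to volumes without an explicit group
```
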
## Conclusion

Yes, it was **that** easy!

With the backups in place I can now sleep a little better knowing that I can recover from a disaster if needed. The next step is to test the backups and the recovery process to make sure everything works as expected.

I hope I never need to use any of this, though. :)

@@ -0,0 +1,119 @@
+++
title = "Journey to K3s: Accessing from the Outside"
date = 2024-04-28
tags = ["k3s", "networking", "homelab"]
+++

Up until now I have been working locally (on my home network). While that is enough for most of the services I'm running, I need to access some of them from the outside. For example, I want to expose this blog to the internet and reach Miniflux to read my RSS feeds on the go.

There are a few ways to achieve this, but I have some specific requirements I want to meet:

1. **Zero-trust approach**: I don't want to expose the services directly to the internet.
2. **Public services**: Clients other than me should be able to access some of the services.
3. **Home IP safety**: Don't expose my home IP address directly. (This overlaps with #1, but I want to make it explicit.)
4. **On-transit encryption**: Full in-transit encryption from the client to the cluster, with no re-encryption in the middle.
5. No Cloudflare. (Breaks #4.)
6. No Tailscale. (Breaks #2; there are also other users at home, and I don't want the Tailscale client running all the time.)

What does this leave me with? A reverse proxy server.


|
||||
|
||||
<!--more-->
|
||||
|
||||
I'm going to set up [HAProxy](https://www.haproxy.org/) on a separate external server to act as a reverse proxy that connects directly to my home k3s cluster. Since the DNS records will point to this HAProxy server, my home IP address won't be exposed. HAProxy won't be able to decrypt the traffic, but it can leverage the [SSL SNI header](https://en.wikipedia.org/wiki/Server_Name_Indication) to route the traffic back to the cluster for the domains I allow. This way I get a zero-trust approach with the traffic encrypted from the client to the cluster.

So, to start working, I created a new VPS in [Hetzner Cloud](https://hetzner.cloud/?ref=gSMfCgZFSz1u) _(affiliate link)_ and installed HAProxy on it, which is the easy part. Once the system is up to date and running only minimal services, I can start setting up HAProxy.

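Nothing fancy is needed on the VPS itself — a sketch, assuming a Debian-based image (the commands are the stock ones, not anything specific to my setup):

```bash
# Install HAProxy from the distribution repositories (Debian/Ubuntu assumed).
sudo apt update && sudo apt install -y haproxy

# Confirm the installed version; the configuration below assumes a recent 2.x release.
haproxy -v
```
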
## Passthrough traffic

Passthrough traffic is fairly simple: just create a _frontend_ that listens on port 443 and sends the traffic to an SSL _backend_ that checks for SSL and sends the data upstream to the k3s cluster. Since I'm not decrypting the traffic, [TCP mode](https://www.haproxy.com/documentation/haproxy-configuration-tutorials/load-balancing/tcp/) is used to tunnel it.

I'm also enabling the [`ssl-hello-chk`](https://www.haproxy.com/documentation/haproxy-configuration-manual/latest/#4-option%20ssl-hello-chk) option, which makes the health checks send an SSL hello message to the upstream server: if the upstream stops answering SSL properly, it is marked as down.

```cfg
frontend k3s-01-ssl
    # Listen on port 443 (HTTPS)
    bind *:443

    # Use TCP mode: HAProxy won't decrypt the traffic, it just passes it through to the upstream server
    mode tcp

    # Enable advanced logging of TCP connections with session state and timers
    option tcplog

    # Send to the backend
    use_backend k3s-01-ssl

backend k3s-01-ssl
    # Use TCP mode here as well
    mode tcp

    # Balance the traffic between the servers in a round-robin fashion (not needed for a single server)
    balance roundrobin

    # Retry at least 3 times before giving up
    retries 3

    # Health-check the upstream by sending an SSL hello message
    option ssl-hello-chk

    # Send the traffic to the k3s cluster
    server home UPSTREAM_IP:443 check
```

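Before reloading HAProxy it's worth validating the file; the check flag is part of the standard HAProxy CLI:

```bash
# Parse and validate the configuration without starting the proxy.
haproxy -c -f /etc/haproxy/haproxy.cfg

# Reload the service once the configuration is valid (systemd assumed).
sudo systemctl reload haproxy
```
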
## Force SSL (redirect HTTP to HTTPS)

Since I'm not initially exposing any plain-HTTP services, I can simply [redirect](https://www.haproxy.com/documentation/haproxy-configuration-tutorials/http-redirects/#sidebar) all HTTP traffic to HTTPS: I just need a new frontend that listens on port 80 and redirects the traffic to the HTTPS frontend. This should work transparently for the client.

```cfg
frontend k3s-01-http
    # Listen on port 80 (HTTP)
    bind *:80

    # Use HTTP mode
    mode http

    # Redirect, switching the scheme to HTTPS
    http-request redirect scheme https
```

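A quick way to check the redirect from any machine, using the one domain from the allow list below (`-I` fetches only the headers):

```bash
# Expect a redirect response (302 by default) with a Location header pointing at the HTTPS URL.
curl -I http://miniflux.fmartingr.com
```
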
## Deny non-allowed domains

For security reasons I want to deny access to all domains that are not on the allowed list, that is, the domains I explicitly allow for outside access.

I'm going to create a file, `/etc/haproxy/allowed-domains.txt`, with the list of domains separated by newlines, and use an [`acl`](https://www.haproxy.com/documentation/haproxy-configuration-tutorials/core-concepts/acls/) to check whether the domain is on that list, abruptly dropping the connection if it's not.

The file `/etc/haproxy/allowed-domains.txt` looks like this:

```text
# /etc/haproxy/allowed-domains.txt
miniflux.fmartingr.com
```

These are the new configuration options for the **frontend**; no changes are needed on the backend.

```cfg
frontend k3s-01-ssl
    # ... other configuration

    # Allow up to 5 seconds to inspect the TCP request before giving up.
    # Required since HAProxy needs to inspect the SNI header to route the traffic.
    tcp-request inspect-delay 5s

    # Accept the request only after the hello message is received (which should contain the SNI header).
    tcp-request content accept if { req_ssl_hello_type 1 }

    # Match requests whose SNI ends with an entry in the allowed list
    acl allowed_domain req.ssl_sni -m end -i -f /etc/haproxy/allowed-domains.txt

    # Send to the backend only if the domain is allowed; anything else is dropped
    use_backend k3s-01-ssl if allowed_domain
```

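To verify the filtering works, `openssl s_client` lets me set an arbitrary SNI value; `HAPROXY_IP` stands in for the VPS address and the second domain is a deliberately bogus name:

```bash
# Allowed domain: the TLS handshake should complete.
openssl s_client -connect HAPROXY_IP:443 -servername miniflux.fmartingr.com </dev/null

# Non-allowed domain: the connection should be dropped without a handshake.
openssl s_client -connect HAPROXY_IP:443 -servername not-allowed.example.com </dev/null
```
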
## Conclusion

Once all these changes are in place I can restart the HAProxy service, and traffic is routed to the k3s cluster: I can access the services from the outside without exposing my home IP address, and the traffic stays encrypted from the client to the cluster. Though not perfect, this is a fairly simple and solid setup; it requires some manual labor, but that's a good trade-off for the requirements I have.

> Back when I set up Miniflux [I created an ingress specifically for external access](/blog/2024/03/25/journey-to-k3s-deploying-the-first-service-and-its-requirements/#setting-up-an-external-ingress) that didn't work, since the ACME servers couldn't reach my cluster on the domain I had set up. Now that HAProxy is in place, the domain can point to it and the traffic is routed correctly to the cluster, completing the configuration: the certificate is requested from Let's Encrypt and the ingress is exposed to the internet.