refactor: moved to hugo
parent 4c6912edd0
commit e77e5583c2
604 changed files with 1675 additions and 2279 deletions
@@ -0,0 +1,101 @@
+++
title = "Create an audiobook file from several mp3 files using ffmpeg"
date = 2024-03-12
tags = ["guide", "audio", "ffmpeg"]
+++

Due to some recent traveling I have started to listen to audiobooks. I love reading, but sometimes my eyes are just too tired to go on even though I'm not sleepy at all, or maybe I just want the convenience of lying down while still _doing_ something.

Short story: I bought some audiobooks from a well-known distributor, but I'm a fan of data preservation and of actually owning what I pay for. I found an application on GitHub that allowed me to download the files that make up the audiobook as split mp3 files, but that didn't cut it. I wanted a single file with correct metadata, so I got my hands dirty.



<!--more-->

Assuming you have the mp3 files laying around in a folder, we need to generate or find three more things:

- **`files.txt`**: A list of the MP3 files in the order we are going to combine them, in a plain text file with the format `file '<filename>'`.

  You can use a simple bash command to generate the file:

  ```sh
  find . -type f -name "*.mp3" -exec printf "file '%s'\n" {} \; | sed "s|\.\/||g" > files.txt
  ```

- **`metadata.txt`**: A metadata file to embed into the bundled file that contains basic information about the audiobook (title, author, narrator, ...) along with chapter information.

  An example looks like this; the format is documented [in the ffmpeg documentation](https://ffmpeg.org/ffmpeg-formats.html#Metadata-1):

  ```ini
  ;FFMETADATA1
  title=Book name
  artist=Book author(s)
  composer=Book narrator(s)
  publisher=Book publisher
  date=Book date of publication

  [CHAPTER]
  TIMEBASE=1/1000
  # Chapter from 0:00:00 to 0:01:00
  START=0
  END=60000
  title=Intro

  # Repeat the [CHAPTER] block for all chapters
  ```
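
  Working out each chapter's `START` and `END` by hand is tedious. Assuming you have each file's duration in seconds, one value per line (for example from `ffprobe -v error -show_entries format=duration -of csv=p=0 <file>.mp3`), a small `awk` sketch can emit the `[CHAPTER]` blocks with cumulative millisecond offsets. The sample durations and generic chapter titles below are placeholders:

  ```sh
  # Example durations in seconds, one per chapter (in practice, from ffprobe):
  printf '60\n90.5\n120\n' > durations.txt

  # Emit [CHAPTER] blocks with cumulative START/END in milliseconds:
  awk '{
    start = total
    total += $1 * 1000
    printf "[CHAPTER]\nTIMEBASE=1/1000\nSTART=%d\nEND=%d\ntitle=Chapter %d\n\n", start, total, NR
  }' durations.txt > chapters.txt
  ```

  Appending `chapters.txt` to `metadata.txt` gives the full metadata file; the chapter titles still need to be filled in by hand.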

- **`cover.jpg`**: The artwork of the audiobook. The files I have been using (downloaded from the store) are 353x353 at 72dpi. I'm unsure if that's a standard, but at least be sure to use square images.

With all the files in place, we can use `ffmpeg` to do the conversion. I have followed a multi-step approach to make sure I can review the process and fix any issues that may arise. Pretty sure this can be simplified somehow, but I'm not an `ffmpeg` expert and I have spent too much time on this already.

1. Concatenate the files into a single `mp3` file.

   ```sh
   ffmpeg -f concat -i files.txt -c copy build_01_concat.mp3
   ```

   - `-f concat`: Use the `concat` format, meaning we are going to concatenate files.
   - `-i files.txt`: The file with the list of files to concatenate, as input to ffmpeg.
   - `-c copy`: Stream copy every stream, meaning we are not going to re-encode the files, just copy them.
   - `build_01_concat.mp3`: The output file.

2. Add the cover to the file.

   ```sh
   ffmpeg -i build_01_concat.mp3 -i cover.jpg -c copy -map 0 -map 1 build_02_cover.mp3
   ```

   - `-i build_01_concat.mp3`: The file created in the previous step, as first input to ffmpeg.
   - `-i cover.jpg`: The cover image to add to the file, as second input to ffmpeg.
   - `-c copy`: Stream copy every stream, meaning we are not going to re-encode the files, just copy them.
   - `-map 0 -map 1`: Map the streams from both inputs into the output file.
   - `build_02_cover.mp3`: The output file.

3. Convert the `mp3` file to `m4a`.

   ```sh
   ffmpeg -i build_02_cover.mp3 -c:v copy build_03_m4a.m4a
   ```

   - `-i build_02_cover.mp3`: The file created in the previous step, as input to ffmpeg.
   - `-c:v copy`: Copy the video stream (the cover art) as-is; the audio stream is re-encoded to AAC, the default codec for `m4a`.
   - `build_03_m4a.m4a`: The output file.

4. Add the metadata to the `m4a` file and convert it to `m4b`.

   ```sh
   ffmpeg -i build_03_m4a.m4a -i metadata.txt -map 0 -map_metadata 1 -c copy book.m4b
   ```

   - `-i build_03_m4a.m4a`: The file created in the previous step, as first input to ffmpeg.
   - `-i metadata.txt`: The metadata file, as second input to ffmpeg.
   - `-map 0 -map_metadata 1`: Map the streams from the first input and the metadata from the second input into the output file.
   - `-c copy`: Stream copy every stream, meaning we are not going to re-encode the files, just copy them.
   - `book.m4b`: The final output file.

5. Clean up the files we created in the process that we don't need anymore.

   ```sh
   rm build_* files.txt
   ```
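
For reference, the whole pipeline above can be collected into one script. This is just the same commands written to a file, nothing changed; it assumes `files.txt`, `metadata.txt` and `cover.jpg` already exist in the working directory:

```sh
# Write the full pipeline to a script we can re-run per book:
cat > make_audiobook.sh <<'EOF'
#!/bin/sh
set -eu
ffmpeg -f concat -i files.txt -c copy build_01_concat.mp3
ffmpeg -i build_01_concat.mp3 -i cover.jpg -c copy -map 0 -map 1 build_02_cover.mp3
ffmpeg -i build_02_cover.mp3 -c:v copy build_03_m4a.m4a
ffmpeg -i build_03_m4a.m4a -i metadata.txt -map 0 -map_metadata 1 -c copy book.m4b
rm build_* files.txt
EOF
chmod +x make_audiobook.sh
```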

That's it! You should have an `m4b` file with the audiobook, ready to be imported into your favorite audiobook player. I have tested this process with a couple of books and it worked like a charm. I hope it helps you too.
@@ -0,0 +1,172 @@
+++
title = "Journey to K3S: Basic cluster setup"
date = 2024-03-14
tags = ["k3s", "homelab"]
+++

I've finally started to play with K3S, a lightweight Kubernetes distribution. I have been reading about it for a while and I'm excited to see how it performs in my home lab. My services have been running on an Intel NUC running Docker containers for some years now, but the plan is to migrate them to a k3s cluster of three NanoPC-T6 boards.

I was looking for a small form factor and low power consumption, and the NanoPC-T6 seems to fit the bill. I know I'm going to stumble upon some limitations, but I'm eager to see how it goes and what problems I find along the way.

My requirements are very simple: I want to run a small cluster with a few services, and I want to be able to access them from the internet and from my home. My current setup relies on Tailscale for VPN and Ingress for the services, so I'm going to try to replicate that in this new setup.



<!--more-->

## Installing DietPi on the NanoPC-T6

I'm completely new to [DietPi](https://dietpi.com), but nothing that FriendlyElec offered seemed to fit my needs. I'm not a fan of the pre-installed software and I wanted to start from scratch. I tried to find compatible OSs and there weren't many, but DietPi seemed to be a good fit and it's actively maintained.

At first I tried to run from an SD card and copy the data over manually, but then I found out that the NanoPC-T6 has an eMMC slot, so I decided to go with that. I flashed FriendlyWRT onto the SD card, booted it and used their tools to flash DietPi into the eMMC.

> For this to work properly under my home network setup I had to physically connect a computer and a keyboard to the boards and disable the firewall service using `service firewall stop`. I believe this happened because the boards live in a different VLAN/subnet in my local network, since I connect new devices to a separate VLAN for security reasons. With that disabled I could access the boards from my computer and continue the setup.



## Setting up the OS

The first thing to do once you SSH into the boards is to change the default and global software passwords. Once the setup assistant was over I set up a new SSH key and disabled password login.

Before installing K3s I wanted to make sure the OS was up to date and had the necessary tools to run the cluster. I used the `dietpi-software` tool to make sure that some convenience utilities like `vim`, `curl` and `git` were present.

I also set the hostnames to `k3s-master-01`, `k3s-master-02` and `k3s-master-03` for the three boards using the `dietpi-config` tool.

And I installed `open-iscsi` to be prepared in case I end up setting up [Longhorn](https://longhorn.io/).

## Setting up the network

I'm using Tailscale to connect the boards to my home network and to the internet. I installed [Tailscale](https://tailscale.com) using `dietpi-software` and linked each device to my account using `tailscale up`.

I also set up static IP addresses for the boards on my home router. I'm using a custom [pfSense](https://www.pfsense.org/) router, and I assigned each board's address to its MAC address on the VLAN they are going to reside in.

## Installing K3S

I followed the [official documentation](https://docs.k3s.io/datastore/ha-embedded) to create an embedded etcd highly available cluster.

> I'm not a fan of the `curl ... | sh` installation methods around, but this is the official way to install K3S and I'm going to follow it for convenience. **Always check the script before running it.**

1. I created the first node using the following command:

   ```bash
   curl -sfL https://get.k3s.io | K3S_TOKEN=<token> sh -s - server --cluster-init
   ```

   I used the `K3S_TOKEN` environment variable to set the token for the cluster, which I will need to join the other two nodes. Since this is the first node of the cluster I had to provide the `--cluster-init` flag to initialize it.
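
   `<token>` can be any shared secret, as long as all nodes use the same value. One way to generate a random one (my own sketch, not from the K3S docs) is:

   ```sh
   # Generate a 64-hex-character random token to share between the nodes:
   K3S_TOKEN=$(head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n')
   echo "$K3S_TOKEN"
   ```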

2. I joined the other two nodes to the cluster using the following command:

   ```bash
   curl -sfL https://get.k3s.io | K3S_TOKEN=<token> sh -s - server --server https://<internal ip of the first node>:6443
   ```

3. Done! I have a three-node K3S cluster running in my home lab. It's **that** simple.



## Checking that it works

I'm going to deploy a simple service to check that the cluster is working properly. I'm going to use the `nginx` image and expose it using an Ingress:

1. Create the `hello-world` namespace:

   ```bash
   kubectl create namespace hello-world
   ```

2. Create a simple index file:

   ```bash
   echo "Hello, world!" > index.html
   ```

3. Create a `ConfigMap` with the index file:

   ```bash
   kubectl create configmap hello-world-index-html --from-file=index.html -n hello-world
   ```

4. Create a deployment using the `nginx` image and the config map we just created:

   ```bash
   kubectl apply -f - <<EOF
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: hello-world-nginx
     namespace: hello-world
   spec:
     selector:
       matchLabels:
         app: hello-world
     replicas: 3
     template:
       metadata:
         labels:
           app: hello-world
       spec:
         containers:
           - name: nginx
             image: nginx
             ports:
               - containerPort: 80
             volumeMounts:
               - name: hello-world-volume
                 mountPath: /usr/share/nginx/html
         volumes:
           - name: hello-world-volume
             configMap:
               name: hello-world-index-html
   EOF
   ```

5. Create the service to expose the deployment:

   ```bash
   kubectl apply -f - <<EOF
   apiVersion: v1
   kind: Service
   metadata:
     name: hello-world
     namespace: hello-world
   spec:
     ports:
       - port: 80
         protocol: TCP
     selector:
       app: hello-world
   EOF
   ```

6. Create the Ingress to expose the service to the internet:

   ```bash
   kubectl apply -f - <<EOF
   apiVersion: networking.k8s.io/v1
   kind: Ingress
   metadata:
     name: hello-world
     namespace: hello-world
   spec:
     ingressClassName: "traefik"
     rules:
       - host: hello-world.fmartingr.dev
         http:
           paths:
             - path: /
               pathType: Prefix
               backend:
                 service:
                   name: hello-world
                   port:
                     number: 80
   EOF
   ```

Done! I can access the service from my local network using the `hello-world.fmartingr.dev` domain:



That's it! I have a cluster running and I can start playing with it. There's a lot more to be done and progress will be slow since I'm doing this in my free time to dogfood Kubernetes at home.

Will follow up with updates once I make more progress, see you on the next one.
@@ -0,0 +1,303 @@
+++
title = "Journey to K3S: Deploying the first service and its requirements"
date = 2024-03-25
tags = ["k3s", "homelab"]
edit_comment = "**2024/04/29**: Fixed a typo in the CloudNative PostgreSQL Operator chart example. The `valuesContent` was incorrect as it used attributes from the `Cluster` CRD, not the Chart."
+++

I have my K3S cluster up and running, and I'm ready to deploy my first service. I'm going to start by migrating one of the simplest services I have running in my current Docker setup, the RSS reader [Miniflux](https://miniflux.app/).

I'm going to use Helm charts throughout the process since k3s supports Helm out of the box, but for this first service there's also some preparation to do. I'm missing the storage backend, a way to ingress traffic from the internet, a way to manage certificates, and the database. Also, I need to migrate my current data from one database to another, but those are PostgreSQL databases so I guess simple `pg_dump`/`pg_restore` or `psql` commands will do the trick.



<!--more-->

## Setting up Longhorn for storage

The first thing I need is a storage backend for my services. I'm going to use Longhorn for this, since it's a simple and easy to use solution that works well with k3s. I'm going to install it using Helm, and I'm going to use the default configuration for now.

```yaml
# longhorn-helm-chart.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: longhorn
  namespace: kube-system
spec:
  repo: https://charts.longhorn.io
  chart: longhorn
  targetNamespace: longhorn-system
  createNamespace: true
  version: v1.6.0
```

```bash
$ kubectl apply -f longhorn-helm-chart.yaml
```

This should generate all required resources for Longhorn to work. In my case I also enabled the ingress for the Longhorn UI to adjust the storage allocated per node according to my needs and hardware, though I will not cover that in this post.

```yaml
# longhorn-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-ingress
  namespace: longhorn-system
  annotations:
    traefik.ingress.kubernetes.io/router.middlewares: longhorn-system-longhorn-auth-middleware@kubernetescrd
spec:
  ingressClassName: traefik
  rules:
    - host: longhorn.k3s-01.home.arpa
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: longhorn-frontend
                port:
                  number: 80
```

```bash
$ kubectl apply -f longhorn-ingress.yaml
```

With this you should be able to access the Longhorn UI at the domain set up in your ingress. In my case it's `longhorn.k3s-01.home.arpa`.

> Keep in mind that this is a local domain, so you might need to set up a local DNS server or add the domain to your `/etc/hosts` file.
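
For example, a minimal `/etc/hosts` entry pointing the domain at one of the nodes might look like this (the IP address is made up):

```
192.168.20.11  longhorn.k3s-01.home.arpa
```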

This example is not perfect by any means, and if you plan to expose this ingress be sure to use a proper certificate and secure it with authentication and other security measures.

## Setting up cert-manager to manage certificates

The next step is to set up cert-manager to manage the certificates for my services. I'm going to use Let's Encrypt as my certificate authority and let cert-manager obtain certificates for the external ingresses I'm going to set up.

```yaml
# cert-manager-helm-chart.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: cert-manager
  namespace: kube-system
spec:
  repo: https://charts.jetstack.io
  chart: cert-manager
  targetNamespace: cert-manager
  createNamespace: true
  version: v1.14.4
  valuesContent: |-
    installCRDs: true
```

```bash
$ kubectl apply -f cert-manager-helm-chart.yaml
```

In order to use Let's Encrypt as the certificate authority, I need to set up an issuer for it. I'm going to use the production issuer in this example since the idea is to expose the service to the internet.

```yaml
# letsencrypt-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
  namespace: cert-manager
spec:
  acme:
    email: your@email.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-production
    solvers:
      - http01:
          ingress:
            class: traefik
```

```bash
$ kubectl apply -f letsencrypt-issuer.yaml
```

With this, I should be able to request certificates for my services using the `letsencrypt-production` issuer.
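
While testing, it can be worth pointing a second issuer at Let's Encrypt's staging endpoint to avoid the production rate limits. A staging variant of the same issuer would look like this (my own sketch, same structure as above):

```yaml
# letsencrypt-staging-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: your@email.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: traefik
```

Certificates issued by staging are not trusted by browsers, so switch back to the production issuer once everything works.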

## Setting up the CloudNative PostgreSQL Operator

> The chart for Miniflux is capable of deploying a PostgreSQL instance for the service, but I'm going to use the CloudNative PostgreSQL Operator to manage the database for this service (and others) on my own. This is because I want to be able to manage the databases separately from the services.

Miniflux only supports PostgreSQL, so I'm going to use the CloudNative PostgreSQL Operator to manage the database. First, let's install the operator using the Helm chart:

```yaml
# cloudnative-pg-helm-chart.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: cloudnative-pg
  namespace: kube-system
spec:
  repo: https://cloudnative-pg.github.io/charts
  chart: cloudnative-pg
  targetNamespace: cnpg-system
  createNamespace: true
```

```bash
$ kubectl apply -f cloudnative-pg-helm-chart.yaml
```

This will install the CloudNative PostgreSQL Operator in the `cnpg-system` namespace. I'm going to create a PostgreSQL instance for Miniflux in the `miniflux` namespace.

```yaml
# miniflux-db.yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: miniflux-db
  namespace: miniflux
spec:
  instances: 2
  storage:
    size: 2Gi
    storageClass: longhorn
```

```bash
$ kubectl apply -f miniflux-db.yaml
```

With this, a PostgreSQL cluster with two instances and 2Gi of storage will be created in the `miniflux` namespace. Note that I have specified the `longhorn` storage class for the storage.

When this is finished, a new secret called `miniflux-db-app` with the connection information for the database will be created. It will look like this:

```yaml
apiVersion: v1
kind: Secret
type: kubernetes.io/basic-auth
metadata:
  name: miniflux-db-app
  namespace: miniflux
  # ...
data:
  dbname: <base64 encoded data>
  host: <base64 encoded data>
  jdbc-uri: <base64 encoded data>
  password: <base64 encoded data>
  pgpass: <base64 encoded data>
  port: <base64 encoded data>
  uri: <base64 encoded data>
  user: <base64 encoded data>
  username: <base64 encoded data>
```
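
The values under `data` are plain base64. On a real cluster the connection URI would come from the secret itself, e.g. `kubectl get secret miniflux-db-app -n miniflux -o jsonpath='{.data.uri}' | base64 -d`; the decoding step itself is shown below with a made-up URI (the host name assumes CloudNativePG's `<cluster-name>-rw` service):

```sh
# Hypothetical encoded value, standing in for the secret's `uri` field:
uri_b64=$(printf 'postgresql://app:secret@miniflux-db-rw:5432/app' | base64 | tr -d '\n')

# Decode it the same way you would decode the jsonpath output:
uri=$(printf '%s' "$uri_b64" | base64 -d)
echo "$uri"
```

With that `uri` in hand, the data migration mentioned at the start can be as simple as piping `pg_dump` of the old database into `psql "$uri"`.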

We are going to reference this secret directly in the Miniflux deployment below.

## Deploying Miniflux

Now that we have all the requirements set up, we can deploy Miniflux.

I'm going to use [gabe565's Miniflux Helm chart](https://artifacthub.io/packages/helm/gabe565/miniflux) for this, since it's simple and easy to use. I tried the [TrueCharts](https://artifacthub.io/packages/helm/truecharts/miniflux) chart, but I couldn't get it to work properly since they only support amd64 and I'm running on arm64, though a few tweaks here and there _should_ make it work.

```yaml
# miniflux-helm-chart.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: miniflux
  namespace: kube-system
spec:
  repo: https://charts.gabe565.com
  chart: miniflux
  targetNamespace: miniflux
  createNamespace: true
  version: 0.8.1
  valuesContent: |-
    image:
      tag: 2.1.1
    env:
      CREATE_ADMIN: "0"
      DATABASE_URL:
        secretKeyRef:
          name: miniflux-db-app
          key: uri
    postgresql:
      enabled: false
```

> In order to customize Miniflux, check out their [configuration](https://miniflux.app/docs/configuration.html) documentation and set the appropriate values in the `env` section.

```bash
$ kubectl apply -f miniflux-helm-chart.yaml
```

> I'm using `CREATE_ADMIN: "0"` to avoid creating an admin user for Miniflux, since I already have one in my current database after migrating it. If you want to create an admin user you can set this to `"1"` and set the `ADMIN_USERNAME` and `ADMIN_PASSWORD` values in the `env` section. See the [chart documentation](https://artifacthub.io/packages/helm/gabe565/miniflux) for more information.

This will create a Miniflux deployment in the `miniflux` namespace, using the `miniflux-db-app` secret for the database connection.

Wait until everything is ready in the `miniflux` namespace:

```bash
$ kubectl get pods -n miniflux
NAME                        READY   STATUS
miniflux-678b9c8ff5-7dbj5   1/1     Running
miniflux-db-1               1/1     Running
miniflux-db-2               1/1     Running

$ kubectl logs -n miniflux miniflux-678b9c8ff5-7dbj5
time=2024-03-24T23:00:42.487+01:00 level=INFO msg="Starting HTTP server" listen_address=0.0.0.0:8080
```

## Setting up an external ingress

> I'm not going to cover the networking setup for this, but your cluster should be able to route traffic from the internet to the ingress controller (the master nodes). In my case I'm using a zero-trust approach with Tailscale to avoid exposing my homelab directly to the internet, but there are a number of ways to do this, pick the one that suits you best.

Setting up an ingress for the service that supports SSL is easy with cert-manager and Traefik. We only need to create an `Ingress` resource in the `miniflux` namespace with the appropriate configuration and annotations, and cert-manager will take care of the rest:

```yaml
# miniflux-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: miniflux-external
  namespace: miniflux
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-production
spec:
  ingressClassName: traefik
  rules:
    - host: miniflux.fmartingr.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: miniflux
                port:
                  number: 8080
  tls:
    - secretName: miniflux-fmartingr-com-tls
      hosts:
        - miniflux.fmartingr.com
```

```bash
$ kubectl apply -f miniflux-ingress.yaml
```

This will create an ingress for Miniflux in the `miniflux` namespace, and cert-manager will take care of certificate generation and renewal using the `letsencrypt-production` issuer specified in the annotations.

After a few minutes you should be able to access Miniflux at the domain set up in the `host` field:

```
$ curl -I https://miniflux.fmartingr.com
HTTP/2 200
server: traefik
...
```

And that's it! You should have Miniflux up and running in your k3s cluster with all the requirements set up.

I can't recommend [Miniflux](https://miniflux.app) enough, it's a great RSS reader that is simple to use and has a great UI. It was probably the first service I deployed in my homelab, and I'm happy to have it running in my k3s cluster now, years later.