I just installed a desktop machine with a brand new Ubuntu 26.04. One of the things I missed was the possibility to paste selected text by pressing the middle mouse button. The fix for the default Ubuntu desktop (GNOME) is as easy as: Of course the first command is only needed to check the default setting (which […]
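For reference, the elided commands most likely revolve around GNOME's `gtk-enable-primary-paste` key (an assumption based on the description, not the post's exact commands):

```shell
# Check the current value of the primary-selection paste setting
gsettings get org.gnome.desktop.interface gtk-enable-primary-paste

# Enable paste-on-middle-click (primary selection)
gsettings set org.gnome.desktop.interface gtk-enable-primary-paste true
```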
Category: Linux
Adding persistent storage to k3s
Basics Kubernetes/k3s uses the Container Storage Interface (CSI for short) to provide persistent storage. More details about that can be found here. Options By default k3s/rancher ships with a storage class called “local-path”. According to the k3s documentation an alternative is “longhorn” (see here). But there are plenty of other options based on NFS, Ceph (both […]
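As a quick illustration of the built-in storage class, a PersistentVolumeClaim using `local-path` might look like this (the claim name and size are placeholders):

```shell
# Create a PVC backed by the local-path provisioner bundled with k3s
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
EOF

# Note: local-path binds lazily, so the PVC stays Pending until a pod mounts it
kubectl get pvc demo-pvc
```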
When trying to get first access to an OpenStack-based Kubernetes cluster, you'll first need to create a correct ~/.kube/config file. OpenStack, however, only provides you with a collection of files you need to assemble yourself: You'll need all those components in the right place in a single file. Here's a short version of […]
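One way to assemble the pieces into a single ~/.kube/config is kubectl's own `config` subcommands; the file names and the endpoint below are placeholders for whatever OpenStack handed out:

```shell
# Register the cluster (API endpoint + CA certificate)
kubectl config set-cluster my-cluster \
  --server=https://<api-endpoint>:6443 \
  --certificate-authority=ca.pem \
  --embed-certs=true

# Register the client credentials
kubectl config set-credentials my-user \
  --client-certificate=client.pem \
  --client-key=client-key.pem \
  --embed-certs=true

# Tie cluster and user together in a context, then activate it
kubectl config set-context my-context --cluster=my-cluster --user=my-user
kubectl config use-context my-context
```

`--embed-certs=true` copies the certificate contents into the config file itself, so the result is a single self-contained file.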
While experimenting with open-webui I was looking for options to use local LLM resources for image creation. Besides the commercial models, open-webui offers two alternatives: Automatic1111 and ComfyUI. As ComfyUI is mentioned in several other places I decided to have a look at it. Installation In order to install ComfyUI I was looking for […]
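The usual manual installation per the ComfyUI README boils down to a git clone plus a Python virtual environment (the post's actual approach may differ):

```shell
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI

# Keep the Python dependencies isolated in a virtual environment
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt

# Start the web UI (listens on http://127.0.0.1:8188 by default)
python main.py
```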
First: Store credentials for future kcadm.sh calls: Now we can use kcadm.sh without entering passwords every time (at least for some time): Lessons learned For security reasons I had TOTP activated for my (master realm) admin account. When trying to add credentials for kcadm.sh usage I always got: So I had to dig a little […]
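The credential step referred to above is typically done with `kcadm.sh config credentials`; the server URL, install path, and user below are placeholders:

```shell
# Cache an admin session for subsequent kcadm.sh calls
/opt/keycloak/bin/kcadm.sh config credentials \
  --server https://keycloak.example.com \
  --realm master \
  --user admin
# kcadm.sh prompts for the password and stores a session token
# in ~/.keycloak/kcadm.config, valid until the token expires
```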
What is litellm? litellm is a proxy server you can add between your LLM app and the LLM service. Why use litellm? litellm can be used to configure access, load balancing, accounting and many other things you might be interested in once you’re getting into the LLM business. Installation Install litellm proxy as described here: […]
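A minimal setup along the lines of the litellm docs might look like this (the model name, backend, and port are assumptions for illustration):

```shell
pip install 'litellm[proxy]'

# Minimal proxy config routing one public model name to a local ollama backend
cat > config.yaml <<'EOF'
model_list:
  - model_name: llama3
    litellm_params:
      model: ollama/llama3
      api_base: http://localhost:11434
EOF

# Start the proxy; it exposes an OpenAI-compatible API (port 4000 by default)
litellm --config config.yaml
```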
While ollama handles multiple different LLMs quite nicely (loading/unloading on demand), there are situations where you may want to run multiple instances of the same model at the same time (e.g. to increase throughput). Here's how you can do so with minimal changes to your zero-effort ollama installation. Let's assume you just did […]
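The core trick is that each ollama instance binds its own address via the `OLLAMA_HOST` environment variable; a second instance alongside the stock one could be started like this (the port and model directory are arbitrary examples):

```shell
# First instance: the stock systemd service on the default port 11434

# Second instance: same binary, different port and its own model store
OLLAMA_HOST=127.0.0.1:11435 OLLAMA_MODELS=/var/lib/ollama2 ollama serve &

# Point a client at the second instance the same way
OLLAMA_HOST=127.0.0.1:11435 ollama run llama3 "hello"
```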
Where (and why) it all began While trying to join an Ubuntu client to my Samba domain, I got this error, which most likely prevented a successful login. From sssd's log files I got this as a first hint: So it looks like we have a problem with my Samba domain not serving GPO […]
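If the root cause is indeed GPO evaluation, a common workaround (not a fix) is to relax sssd's GPO access control; this is an assumption about where the post is heading:

```shell
# Workaround in the [domain/...] section of /etc/sssd/sssd.conf:
#
#   [domain/example.com]
#   ad_gpo_access_control = permissive
#
# 'permissive' still evaluates GPOs but never denies access on that basis;
# 'disabled' would skip GPO processing entirely. Then reload sssd:
sudo systemctl restart sssd
```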
While creating new docker instances I have recently been getting the following error message: In the beginning it helped to prune old network configs: But now I'm at a point where this no longer helps. Maybe the number of subnets is limited? I currently have 33 subnets configured for docker … sounds close enough […]
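Docker's per-network subnet allocation can be widened via `default-address-pools` in /etc/docker/daemon.json; carving a /16 into /24 subnets yields 256 networks (the address range below is just an example):

```shell
# Widen the pool Docker draws per-network subnets from
cat > /etc/docker/daemon.json <<'EOF'
{
  "default-address-pools": [
    { "base": "10.200.0.0/16", "size": 24 }
  ]
}
EOF
sudo systemctl restart docker
```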
Currently I’m using Nextcloud with snappymail as my mail client. While this works nicely, I do have a “comfort problem” since I switched my Nextcloud authentication to SAML/SSO: I can no longer use my Nextcloud credentials to log into my mail account (as Nextcloud does not know my password when using SSO). There are two things […]
