Building a 10-Node Kubernetes Cluster on Raspberry Pi & Ubuntu Server

Originally published at: Building a 10-Node Kubernetes Cluster on Raspberry Pi & Ubuntu Server – LearnLinuxTV


This was a fun video. I have several questions. First, are there limitations on PoE as to what it can power? For instance, will it have any trouble powering the Pi, a fan, and an external HD?

Second, do you have to use identical hardware for all the nodes, and if not, will the slower hardware pull down the efficiency of the faster hardware?

Third, after setting up a Kubernetes cluster and a few pods running, say, NextCloud, Plex, and a web server, would it be possible to back up the entire cluster to Backblaze or a similar service?

Fourth, is there a way to set up a cluster with redundant controllers?

Fifth, could anyone recommend a few options for PoE switches on eBay? The switch Jay used is spendy. There are a couple of affordable four-port options on The Amazon, but I would rather start with something I can grow into…without learning how to deal with Cisco and whatever that entails, as I am not doing any of this as a career path. Also, I will not have a server rack. Raspberry Pis and discarded hardware will be my entire lab, as I have nowhere to put a rack.

Finally, does anyone have an affordable UPS they would recommend for a small home lab? I am not keen on having the power reset five Pis, messing up the SD cards, and starting from scratch every tornado season. I just want something that would allow me to shut everything down gracefully.

Just a quick remark on PoE: You can see the different standards here (among other places):

You should always buy a switch that adheres to the IEEE 802.3 PoE standards (in the old days there was 24V passive PoE, but it has largely fallen out of use).

The UniFi switches support 802.3at PoE+ (Type 2), which can deliver up to 25.5W to the powered device. A cheaper switch might “only” support 802.3af PoE (Type 1), which delivers about 12.95W to the device.

This can be a deciding factor when you decide how much should be powered from a single port. For reference, a Pi with a PoE HAT (and fan) draws around 5W.

Types 3 and 4 (802.3bt) are mostly for enterprise use, and only available in really expensive hardware.
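
As a back-of-the-envelope budget check (all numbers illustrative, assuming five Pis at roughly 5W each):

# Rough PoE budget check (all numbers illustrative)
# 5 nodes x ~5 W each = 25 W of total draw
# 802.3af / Type 1: ~12.95 W per port -> plenty for a single Pi
# 802.3at / Type 2: ~25.5 W per port  -> headroom for a fan and a USB drive
# Also check the switch's TOTAL PoE budget: a 60 W budget covers 25 W easily
echo "$(( 5 * 5 )) W estimated draw for five ~5 W nodes"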


First, are there limitations on PoE as to what it can power…

Great answer from @ameinild on that one. Thanks!

Second, do you have to use identical hardware for all the nodes, and if not, will the slower hardware pull down the efficiency of the faster hardware?

I have been doing more research and watching more videos on The YouTubes, but I am still unclear about the second part of this question. Let us say I have two Raspberry Pi 3 B+ boards with 1GB of RAM and four Pi 4 B boards with 4GB of RAM. Will using the Pi 3 boards as redundant controllers have an adverse effect on the cluster as a whole? My understanding is that the controller uses fewer resources than the worker nodes…which makes sense.

…would it be possible to back up the entire cluster to Backblaze or a similar service?

Still unclear on this.

…redundant controllers?

Figured that one out…relatively.

PoE switches on eBay…

I think I will use this one: TP-Link PoE switch

…affordable UPS…

If I spend the money, I think I will go with a small APC like the 700VA BR700G. I do not plan on plugging in anything other than the cluster, switch, router, and modem.

Good question! The answer is usually no, but it depends on how your Kubernetes project is set up.
If you have, for example, a set of pods that operate independently of each other, then each Pi can work at its full potential, regardless of whether there is a slower one in the bunch.
But you can also have a project where Pod A depends on Pod B, like a web server and a database. In this case, if the Pi running the database gets swamped, then the Pi running the web server will have to wait, reducing its efficiency.
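
One way to keep the heavier pods off the slower boards is to label the nodes and pin workloads with a nodeSelector. A minimal sketch (the node names, label, and pod are just illustrative):

# Label the faster Pi 4 boards (node names are examples)
kubectl label nodes pi4-node1 pi4-node2 hardware=fast

# Pin a pod to the labelled nodes with a nodeSelector
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  nodeSelector:
    hardware: fast
  containers:
  - name: web
    image: nginx
EOF

Taints work the other way around (keeping pods off a node unless they tolerate it), but labels plus a nodeSelector is the simplest place to start.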

That makes sense. So if all the worker nodes were the same hardware, but the controllers were lesser hardware (Pi 3s instead of Pi 4s), the load on the nodes would be “fine”, assuming the Pi 3 is powerful enough to act as a controller?

The video I am watching touched on having multiple controllers, but it was unclear whether they are redundant or sharing resources, in effect creating a more powerful controller that is still liable to failure.
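
From what I have read since, multiple control-plane nodes in a kubeadm cluster are redundant replicas behind a shared endpoint, not a pooled “bigger” controller, and because etcd needs a majority to stay up, three controllers tolerate one failure while two are actually worse than one. A rough sketch of the bootstrap, assuming kubeadm (the endpoint name is a placeholder):

# First control-plane node: point the cluster at a stable, shared
# endpoint (a load balancer or DNS name), not one node's IP
sudo kubeadm init --control-plane-endpoint "k8s-endpoint:6443" --upload-certs

# Additional control-plane nodes join with --control-plane
# (kubeadm init prints the exact join command, token, and certificate key)
sudo kubeadm join k8s-endpoint:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <key>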

So the smokeping example in this video shows the data for that program as stored on the node that is running smokeping. Is it possible that the pod could be started on a different node if, say, the cluster were shut down and restarted? Or if the pod were destroyed and recreated? If that were to happen, would the data be transferred to the new node? Or would the persistent data only exist on that original node? Is it possible that the persistent data could be lost for that pod? Could/should the data be stored on an external data server? If so, how would that be set up?

I am not sure of the answer, but my minimal understanding of the topic thus far is that persistent storage is a sore spot of sorts for k8s. If/when I get my cluster up I am going to look into Longhorn. Covered (somewhat confusingly for beginners) by Techno Tim here on The YouTubes.

Absolutely! I was pressed for time, but the initial treatment of that video included me setting up an NFS share on TrueNAS, so that it would be the central storage for containers. But the video was over an hour long at that point, so I kept it more simple. I probably should’ve mentioned that.

But then again, I am in the process of developing a Kubernetes tutorial series, so that may be a better place to talk about this.
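
In the meantime, the usual pattern for NFS-backed storage is a PersistentVolume that points at the share plus a claim the pod mounts; because the data lives on the NFS server, the pod can be rescheduled to any node and still see the same files. A minimal sketch (server address, path, and sizes are placeholders):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: smokeping-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.50          # the NAS box, for example
    path: /mnt/tank/k8s/smokeping
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: smokeping-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
EOF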

After I initially watched this video I started a shopping list on The Amazon. After a week of trying to decide how big I wanted the cluster, what hardware to buy, etc., I had it nailed down. Then I started reading Mastering Ubuntu Server, 3rd edition, and realized I might want a cluster running x86 processors. Initially I found this Lenovo ThinkCentre M93P Tiny, which is spec'd as follows:

  • Intel Core i5-4570T Dual-Core Processor 2.90GHz
  • 8GB PC3-12800 1600MHz DDR3
  • 128GB Solid-state Drive
  • Intel HD Graphics 4600
  • Windows 10 Pro 64-Bit (Not sure if these licenses could be used on other machines or not)

To my surprise, after I removed all the bits and pieces needed for a five-node Pi cluster and replaced them with five of these, the end price was almost exactly the same! It seemed like a no-brainer except for two things: power and noise. While both would be low, they would not be Raspberry Pi low. I would not be using PoE, and I would need a more substantial UPS.

Then today I found the BMAX B1 Plus, spec'd as follows:

  • Intel Celeron N3350 1.1GHz base 2.4GHz burst
  • 6 GB RAM
  • 64 GB eMMC
  • Ports: RJ45, HDMI, VGA, 2 x USB 3.0, 2 x USB 2.0, Audio out, Micro SD card reader (128 GB max)
  • M.2 SATA 2280 (unpopulated, 1 TB max)

This thing is fanless and draws 4 watts! I am trying to figure out why I would not use these for an x86 full Ubuntu Server cluster. Am I missing something? Surely this would outperform a Raspberry Pi? Also, the overall cost of the cluster is cheaper than using the Lenovo or Pis, and I can power them using PoE.

Any thoughts?

I have never used anything from BMAX, so I can't say how it is on quality, etc. Patrick at Serve the Home has a ton of reviews of the ThinkCentre and other “tiny mini micro” format PCs, most of which he gets on eBay from liquidators (the PCs have typically come off a corporate lease and been refurbished), and you can get some great deals if you shop carefully.

For a home lab, you really don't have to get fancy at all. A set of 2GB RPi 4Bs would be fine, or even very inexpensive i3-based tiny mini micro PCs off of eBay. TBH, for home lab server nodes, you don't even need many cores. I think people are way over-spec'ing them; Linux is ridiculously small and lightweight as a server.

I saw you mentioned UPS; we use CyberPower ones (we have three of their 1500 model). They've been totally reliable and durable. Dad won't even look at APC anymore because too many have just randomly failed.

Oh, and for Windows licenses on those ThinkCentre etc. machines: they're tied to the hardware, so it's not legit to use them on something else (if you even can).

Patrick at Serve the Home has a ton of reviews of the ThinkCentre and other “tiny mini micro” format PCs…

Exactly where I got the idea 🙂 I have been looking on eBay, but have not found a superb deal yet. The advantage of the BMAX is what you said about power/heat…at least compared to the Tiny PC options.

I saw you mentioned UPS; we use CyberPower ones…

Good to know!

Windows licenses…[are] tied to the hardware…

Shoot


Unable to run hello-world
Hi Jay,
I have set everything up as you showed. I used 3 Raspberry Pis, one master and 2 worker nodes, and successfully installed Docker.

“sudo systemctl status docker”

docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2021-05-19 02:49:15 UTC; 27min ago
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Main PID: 1833 (dockerd)
Tasks: 12
Memory: 126.7M
CGroup: /system.slice/docker.service
└─1833 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

“docker run hello-world” gives the following error:

Unable to find image 'hello-world:latest' locally
docker: Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers).
See 'docker run --help'.

“lsb_release -a” - Environment

No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.2 LTS
Release: 20.04
Codename: focal
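
That “Client.Timeout exceeded” message means the Pi never got a response from Docker Hub at all, which usually points to DNS or outbound connectivity on the node rather than Docker itself. A few quick checks to run from the Pi (nothing here is specific to your setup):

# Basic outbound connectivity
ping -c 3 8.8.8.8

# DNS resolution (getent needs no extra packages)
getent hosts registry-1.docker.io

# Reach the registry itself: a 401 JSON response is GOOD here, since it
# means the registry answered; a timeout means the traffic is blocked
curl -v https://registry-1.docker.io/v2/

# If DNS fails but the ping works, inspect the resolver configuration
resolvectl status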

I completed this tutorial and everything is working great. The next step for me is to be able to use kubectl commands from any computer in my home network to apply configuration to my Kubernetes Pi cluster. I know that kubectl contexts exist, and I think that is the right way to accomplish this. Correct me if I'm wrong.

However, I don’t know how to gather the required information about my cluster to set the context appropriately. Can someone point me in the right direction?

I’ve tried doing kubectl config view and connecting to the cluster.server address, but I keep getting a 403 (Forbidden) when trying to issue any kubectl commands. Do I need to set up Basic Auth/certificates?
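
If the cluster was built with kubeadm as in the video, you should not need Basic Auth: the admin kubeconfig on the control-plane node already contains the cluster address plus client certificates, and a 403 is what you get when those credentials are missing. Copying that file to your workstation is usually enough (the username and hostname here are examples):

# On your workstation: pull the admin kubeconfig from the control-plane node
# (the file is root-only on the server, hence the sudo cat over ssh)
mkdir -p ~/.kube
ssh ubuntu@k8s-controller 'sudo cat /etc/kubernetes/admin.conf' > ~/.kube/config
chmod 600 ~/.kube/config

# Verify: this should list your nodes without a 403
kubectl get nodes

Longer term you would create a dedicated user with its own certificate and RBAC bindings rather than shipping the admin credentials around, but this is the quickest way to get remote kubectl working.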