I love reading about all of these exotic Raspberry Pi projects and people pushing the Raspberry Pi into applications far beyond what you'd expect.
But at the same time, I always hope that people recognize that these things are really not great solutions in general. This contraption idles at 15W (per the article) and pulls 24W during a benchmark. Within that power envelope you can pick up a more powerful tiny x86 board that takes up less space, probably costs less overall, and performs much better without requiring you to coordinate your project across 4 separate tiny computers.
The Pi is a lot of fun and the Raspberry Pi foundation has done an excellent job of bringing us powerful ARM processors at low price points, but the gap between Quad Cortex-A72 and even a cheap, low-power x86 board is still massive.
But on the other hand if you're doing this for learning and for fun, these things are awesome.
I've been meaning to build something like this for a while now, and "performs much better without requiring you to coordinate your project across 4 separate tiny computers." is pretty much my motivation to do it — I _want_ a sufficiently limited setup where I'll hit bottlenecks with relatively small workloads, because the whole point is playing with evading those bottlenecks.
I totally understand. This is a fun way to do it.
A cheaper option would also be to run a lot of virtual machines on a single device and only assign them limited CPU and RAM budgets.
You can actually do this all within a single Raspberry Pi 4 8GB with ESXi:
https://www.servethehome.com/getting-started-with-vmware-esx...
Not going to lie: I'd still enjoy having one of these Quad-Pi boards to play with, though.
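If you go the VM/container route, the resource caps are just a couple of parameters. A minimal sketch, assuming Docker plus the docker-py SDK rather than ESXi (the image name and limits are arbitrary):

```
# Simulating four "small nodes" as resource-capped containers with the
# Docker SDK for Python; image name and limits are arbitrary assumptions.
import docker

client = docker.from_env()

nodes = []
for i in range(4):
    nodes.append(client.containers.run(
        "debian:bookworm",          # assumed base image
        "sleep infinity",
        name=f"fake-node-{i}",
        detach=True,
        mem_limit="1g",             # cap RAM like a small SBC
        nano_cpus=1_000_000_000,    # roughly one CPU core's worth of time
    ))

for n in nodes:
    print(n.name, n.status)
```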
Doing it with physical machines gives the real understanding and feel that you just don't get from the VM stuff.
I'm almost wondering if this might be useful for a research lab, to prototype HPC algorithms. Like, I don't really want to run stuff on a big cluster or supercomputer while running my code. And it would be nice to have a tiny little cluster that hits bottlenecks while solving matrices that I can easily run on my desktop.
The problem here is that the fidelity of a test run on a raspberry pi cluster, as compared to one run on the actual target hardware, is about the same as running the test on a cheaper and easier simulator environment of VMs.
From what I have heard, Pi's are used by supercomputer clusters to solve the problem of testing code at scale (hundreds of nodes) but not on the expensive hardware.
Probably not a research lab, but maybe an educational lab. Experience problems with multi-node systems and learn to work around them, but expect real application problems to be different.
The Pi generally has pretty atrocious power management, with linear regulators and no option to sleep or save power in any way aside from turning off stuff like USB controllers. It would probably cost them like $2 in parts to add switch-mode power supplies and they'd reduce power consumption by at least half if not more. But no, we can't afford a few dollars' worth of parts on an $80 product to improve quality of life significantly lmao.
Imagine all the battery powered applications where this would be a night and day difference.
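For what it's worth, the rough arithmetic behind the linear-vs-switching complaint looks like this (illustrative numbers, not measurements from a Pi):

```
# A linear regulator dropping an input rail to a lower rail burns the
# difference as heat, so its best-case efficiency is Vout/Vin.
v_in, v_out, i_load = 5.0, 3.3, 1.0             # volts, volts, amps (assumed load)
linear_eff = v_out / v_in                       # ~66%
linear_loss = (v_in - v_out) * i_load           # ~1.7 W of heat

# A buck (switch-mode) converter typically manages 85-95% efficiency.
buck_eff = 0.90                                 # assumed
buck_loss = v_out * i_load * (1 / buck_eff - 1) # ~0.37 W of heat

print(f"linear: {linear_eff:.0%} efficient, {linear_loss:.2f} W lost")
print(f"buck:   {buck_eff:.0%} efficient, {buck_loss:.2f} W lost")
```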
I don't think the Pi uses linear regulators anymore - the Pi 1 certainly did, but for example the Pi 3B+ uses a MXL7704, which includes several buck regulators together.
https://assets.maxlinear.com/web/documents/mxl7704.pdf
Hmm, seems like you're right and they have in fact improved, with synchronous regulators no less, which should be really up there in efficiency. Then it really is a mystery why the Pi is still so energy inefficient when compared to other ARMs of similar capability.
It's worth noting too that the 28nm node is 10 years old now. [1] That's the biggest factor for power consumption. It's just a really cheap node to run now, which is why we're seeing it in stuff like Pi's. [2]
1: https://en.wikichip.org/wiki/28_nm_lithography_process 2: https://omdia.tech.informa.com/OM016176/28nm-to-be-a-long-lived-node-for-semiconductor-applications-in-the-next-five-years
What are you comparing it against? Even the Pi 4 is Cortex-A72 on a 28nm process node. Pretty sure any other core that’s similar on those vectors will have approximately the same consumption. It’s just 6 year old tech now.
Most smartphones draw roughly one third of the power of an idling Pi at max load (without counting the screen draw), that's what I'm comparing against. I'd check some benchmarks, but the gap in consumption is so hilariously large I don't quite see the point.
Something like 6W for the Pi 4 and 2W for the average flagship smartphone chipset, both under max load if I recall right. Some of them even have double the core count.
Yes, well you're comparing silicon photolithography process nodes that are a decade apart and core architectures that are 5+ years apart, that's the difference. The Pi 4 is using Cortex-A72 cores, released in 2016 [1]. And is made on the 28nm node, released in 2011 [2][3][4]. Compare that to the Snapdragon 888, which uses the 5nm node (and the low-power version of it, at that), and Cortex-X1 cores from 2020. That's where your difference comes from. It's like comparing a Pentium III to a Core i7.
The Pi is cost-optimized to hit an entry level price target of $35. They have to use old cores and cheap lithography processes. High-end smartphones are cost-optimized for a $700-1200 price window with massive economies of scale and stiff competition, so they will naturally use the latest process nodes and cores. It's completely apples and oranges.
Compare the RasPi to other sub-$100 SBCs and it compares quite favorably.
There are lots of reasons the 28nm node has such longevity, mostly coming down to the fact it is the last silicon process node which uses simple gate topographies and is thus highly cost-effective to produce. [5][6]
1: https://en.wikipedia.org/wiki/ARM_Cortex-A72 2: https://en.wikipedia.org/wiki/32_nm_process#28_nm_&_22_nm 3: https://www.tsmc.com/english/dedicatedFoundry/technology/logic/l_28nm 4: https://www.techinsights.com/blog/review-tsmc-28-nm-process-technology 5: https://arstechnica.com/gadgets/2021/04/chipmaker-says-it-will-ramp-up-production-of-older-28nm-chips/ 6: https://omdia.tech.informa.com/OM016176/28nm-to-be-a-long-lived-node-for-semiconductor-applications-in-the-next-five-years
If you want a Linux-capable SBC with similar cores to a recent mid-to-high-end smartphone, you need to look at the $400+ Nvidia Xavier NX or the $500+ Qualcomm RB5. Or get the $1400 Snapdragon 888 Developer Kit. Or wait til next year for the likely $1200+ Xavier Orin for something that's truly high-end.
Does anyone offer a comparable product that does that? Might be worth building one just to scare them into fixing it.
There are tons of SBCs out there, but none of them scare Raspberry Foundation because none of them have the wide distribution, brand recognition, and large community that goes with it. It's very much a popularity feeds popularity market.
There's the clusterboard from pine64 which fits up to seven CPU boards:
https://pine64.com/product-category/clusterboard/
You can use two different CPU boards with the clusterboard:
- SOPINE A64 compute module with a quad core ARM Cortex A53 and 2 GB RAM
- SOEDGE AI NEURAL MODULE with a dual-Core ARM Cortex A35, a NPU IP from Verisilicon Vivante and 2 GB RAM
Note that the pinout of the SOPINE/SOEDGE modules is not compatible with the Raspberry Pi 1 and 3 compute modules (DDR3 vs. DDR2 SO-DIMM socket and obviously a different pinout), so you unfortunately can't mix them.
Pine64 also recently announced the SOQuartz module with a more powerful SoC (Rockchip RK3566 with four ARM Cortex-A55 cores, a Mali-G52 GPU and a 0.8 TOPS NPU), which uses the same form factor as the Raspberry Pi 4 compute modules. This might be an interesting alternative for future cluster solutions based on the new form factor.
https://www.notebookcheck.net/PINE64-SOQuartz-Compute-Module...
I always reply to these comments because, well, they are simply wrong for regular Raspberry Pi clusters (this one is stupid though, but still fun to watch):
1) The Raspberry Pi 4 is THE cheapest 2Gflops/W computer ever made and probably that will ever be made in the future too! The peak of energy/resources/lithography/architectures, and the velocity of money against all that, will most likely make it so (see the rough arithmetic below).
2) You can scale the Raspberry cluster as you want it, only power the nodes you need, it's modular, and if one breaks you still have a few left. Same for the SD cards, which, BTW, while being so slow that a Raspberry Pi 2 (2W!) can saturate them, are SURPRISINGLY sturdy (my original SanDisk cards (every other brand has been a complete scam) are on their 7th year of 99.999% uptime, only down when my power company cut the electricity for an hour).
3) The Raspberry cluster is smaller, cooler and silent (if passively cooled; it's the most powerful device that can be fully passively cooled at 100% CPU (7W) without getting so hot that it wears out early) and won't fail because of failing fans!
4) For battery backup there is nothing better, because beyond a total of 100W for 24 hours you start to see the limits of what is practical to manage on an individual basis.
I post this picture every time:
http://move.rupy.se/file/final_pi_2_4_hybrid.png
(this is how you cool a Raspberry 2/4 hybrid cluster)
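A back-of-the-envelope check of the 2 Gflops/W figure, using rough, commonly quoted numbers rather than measurements:

```
# Both inputs below are assumptions in the right ballpark, not measured here.
hpl_gflops = 13.5   # Pi 4 Linpack result somewhere in the low teens of GFLOPS
watts_load = 7.0    # whole-board draw under full CPU load
print(f"{hpl_gflops / watts_load:.1f} GFLOPS/W")   # ~1.9
```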
> The Raspberry Pi 4 is THE cheapest 2Gflops/W computer ever made and probably that will ever be made in the future too!
Can you expand on this one? I was curious why you think there probably will not be a cheaper one in the future with similar or better specs.
Expanding on #2, when thinking of things like Spectre & Meltdown ie CISC HW flaws, offloading or running a variety of services/daemons on physically separate machines can also improve security. We all get told to reduce the attack surface, and this is one way to do that. The organisational hierarchy best exemplifies this security isolation point for an entity: businesses, govts, etc.
> when thinking of things like Spectre & Meltdown ie CISC HW flaws
Spectre in particular is not a CISC HW flaw. It affects ARM and other RISC architectures as well:
https://developer.arm.com/support/arm-security-updates/specu...
Know your HW.
https://www.raspberrypi.com/news/why-raspberry-pi-isnt-vulne...
"Both vulnerabilities exploit performance features (caching and speculative execution) common to many modern processors to leak data via a so-called side-channel attack. Happily, the Raspberry Pi isn’t susceptible to these vulnerabilities, because of the particular ARM cores that we use."
A72 is susceptible to Spectre because it has speculative execution I think.
That article was written before the release of Pi 4: 5th Jan 2018
https://forums.raspberrypi.com/viewtopic.php?t=243416
You don't get to 2 Gflops/W that easily!
Someone ran a checker on the RPI4
https://forums.raspberrypi.com/viewtopic.php?t=243416#p15344...
Results
"> STATUS: NOT VULNERABLE (your CPU vendor reported your CPU model as not vulne rable)
> SUMMARY: CVE-2017-5753:OK CVE-2017-5715:OK CVE-2017-5754:OK CVE-2018-3640:OK C VE-2018-3639:OK CVE-2018-3615:OK CVE-2018-3620:OK CVE-2018-3646:OK CVE-2018-1212 6:OK CVE-2018-12130:OK CVE-2018-12127:OK CVE-2019-11091:OK
We're missing some kernel info (see -v), accuracy might be reduced
Need more detailed information about mitigation options? Use --explain
A false sense of security is worse than no security at all, see --disclaimer"
Still seems to be in the clear, but some info can't be obtained, which might change the results.
There are actually some pretty good reasons to do this using a bunch of separate processors: better process isolation than you'll ever get on a VM based solution, far more predictable performance per node, physical redundancy (if one breaks you still have three left), potentially better security because being able to compromise one of these leaves the other three in a position to lock out the compromised node without anything like a VM escape or so to worry about.
Having a desktop computer built out of a cluster of small cpus each running a fragment of your workload has disadvantages, for sure, but I can see some advantages as well, besides the fun factor. It could easily be something akin to a hardware assisted version of Qubes OS.
>"There are actually some pretty good reasons to do this using a bunch of separate processors: better process isolation than you'll ever get on a VM based solution"
Things like that usually run in one's basement, running the home owner's own processes. Why I would care about my own processes "spying" on each other totally eludes me.
Because just opening a web page these days equals someone else running their code on your computer.
https://news.ycombinator.com/item?id=26438040
TLDR: That thing is a server (at least in my cases). They do not fish out and run code from some web page. They get a request, compute the result, and send that result back. Enough already of scaring everyone into "safe environments and languages".
It might make more sense with the nVidia Jetson option. [1] The Jetsons are considerably faster than the Pi CM4 at both general computational tasks and GPU-focused ones (like video decoding/encoding and machine learning). I wouldn't be too surprised if four of those outperformed a comparably priced/power-hungry/sized x86-64 machine for the right workload. YMMV. (That goes for any of the three Jetson models they list.)
I'm also curious to see what the Turing TCM turns out to be.
Though the I/O is still limited to the one GbE port + one PCIe 2.0 lane per device for anything you cram in there... and the heterogeneous use of the PCIe lane may or may not match what you want...
[1]
https://turingpi.com/turing-pi-v2-is-here/
> But at the same time, I always hope that people recognize that these things are really not great solutions in general. This contraption idles at 15W (per the article) and pulls 24W during a benchmark. Within that power envelope you can pick up a more powerful tiny x86 board that takes up less space, probably costs less overall, and performs much better without requiring you to coordinate your project across 4 separate tiny computers.
I agree with all of this and if you really need to have multiple separate systems (such as to test/develop some clustering thing) you can easily turn a 25W-35W max TDP, x86-64 motherboard and cpu combo into a xen or kvm hypervisor, on debian or centos.
And put as many VMs as you need on that.
KVM works on the Raspberry Pi 4 perfectly well
and works much better in performance on a system where you can install an affordable NVMe PCIe 3.0 or 4.0 SSD for the hypervisor and guest VMs. With a Raspberry Pi 4 you're limited to unreliable microSD cards or weird hacks like a SATA3 drive connected over USB3.
The Pi 4 has a PCIe bus and supports NVMe.
> a more powerful tiny x86 board that takes up less space, probably costs less overall
It seems like it should be this way but when it comes down to cases, for example making a network gateway, a lot of the options that get thrown around are frustratingly costly. As in, _a laptop motherboard that does twice what this x86 SBC does would cost half what this does_ costly.
I’d really like to know what x86 SBCs are now considered to be real Raspberry Pi competitors, or even considered to be in the ballpark. When the subject of ARM based RPi competitors comes up, it’s usually a conversation about the level of support those boards offer, and in that regard being x86 based is obviously a total game changer.
If this board is $200 and the 4 raspberry pi modules you plug in need an adapter at $10 each and cost between $25 (1g ram, no wireless, no storage) and $70 (8g ram, no wireless, no storage), and we consider Mini-ITX to be tiny (debatable), we've got a budget of $340 to $520 if I can arithmetic. And we need motherboard + cpu + ram.
PC parts prices aren't great right now, but an AMD Ryzen 5600G @ $240, an ASRock A520M-ITX/ac @ $105, and a Silicon Gaming 2x 16GB kit of DDR4-3200 for $90 come to $435 [1]; that assumes the firmware shipped with the board is new enough for the CPU (I didn't check if you can flash this board without a supported CPU). You can limit the CPU power in firmware settings to hold it close to the power usage of this cluster board.
If you only get 4G ram, you can save some money, but not enough to get under the lowest pi cluster with current pricing. If you need it less expensive, you could maybe get an older AMD processor or look at intel.
If you want smaller than itx, it gets hard. Maybe reuse a chromebox or similar, but then you don't have much choice of hardware and connectivity.
[1]
https://pcpartpicker.com/list/yCj2vf
In amateur astrophotography there are a few Raspberry Pi based automation rigs which retail for a few hundred dollars (e.g. ASIAIR). This is a case where cheap x86 SBCs are a much better option because they give you access to the vast suites of Windows based astrophotography software available, lately particularly N.I.N.A. I even just came across a video where someone else came to the same conclusion and went into good depth on it. [1]
The MeLe Quieter 2 [2] is a similar form factor to little ARM SBCs, is passively cooled, is a couple hundred dollars, and includes Win10Pro. For that you get a quad core 2Ghz+ 14nm processor, 8GB ram, and 128GB eMMC.
1: https://youtu.be/asSfA6HVHAc 2: https://www.cnx-software.com/2021/06/01/mele-quieter2-review-windows-10-ubuntu-20-04-egpu/
Though unfortunately the Mini-ITX space has been left to dangle for a few years now; most of the affordable (<300 money) boards have pretty old CPUs.
AMD hasn't had something new in that space for a very long time - E-350/450 based boards were just recently discontinued, and that was a 10 year old part. The only other boards still available seem to be re-spins to sell off 5-6 year old APUs. These were decent in their time, but time has moved on. The E-350/450 were also bundled with crappy southbridges which consumed way more power than the CPU itself - when you see one of these boards, the _smaller_, fanless heatsink is for the CPU, and the big one is for the southbridge.
VIA has been gone for some time.
All the Intel boards are 14nm or 22nm parts, usually from 2014-2017. Newest boards seem to use the J4125 - a Goldmont refresh from 2019.
Intel at least offers some newer stuff in the NUC form factor (though at much higher prices as well). NUCs are pretty non-standard form factor though and offer PCIe only through M.2, so for I/O you'll need an awkward M.2-to-PCIe adapter with a cable. AMD really only has that 4700S kit (which uses a PS5 SoC with defective iGPU as I understand it) at around 400 currency. The 4700S seems to have ridiculous idle power draw (~80 W - maybe power management for the GDDR and other parts are disabled together with the iGPU).
There are _some_ ARM boards now with PCIe. The RPi CM4 has a single PCIe 2.0 x1 lane, which is not a lot but maybe good enough for some use cases. The RockPro64 has a PCIe _1.0_ x4 slot; the RockPi4 has an extremely awkwardly positioned M.2 slot (same PCIe 1.0 x4, it's the same SoC as the RockPro64).
None of these are great options.
Picking up a generic Intel Mini-ITX board and dropping a cheap Celeron or i3 into it can produce some surprisingly low idle powers. Around 10-11W is not uncommon, which is lower than this quad Raspberry Pi 4 board.
The key is to pick motherboards with minimal features (few extra chips to power) and an Intel CPU. Intel historically has much better idle power than AMD, although AMD held the power efficiency crown under full load for a while.
Here's a random example from years ago of a cheap Mini-ITX motherboard and CPU with minimal idle power:
https://mattgadient.com/building-a-low-power-pc-on-skylake-1...
The ITX motherboards with built-in CPUs are great if you can find them, but they basically disappeared when the chip shortage kicked in. CPU vendors put all of their production into the expensive money-making parts.
This is a good point. AMD might actually be interesting here, at least on paper, because AM4 can be used without a chipset (A300, X300) and still provide a bunch of USB and SATA (directly out of the CPU), and if combined with an APU instead of a desktop SKU, idle power consumption should be laptop-like (there's only one die for Ryzen APUs afaik).
The problem is that I don't think boards themselves are available, only prebuilts/barebones, and of course AMD APUs have pretty high starting prices right now, and lower-end Zen 2/3 APUs weren't even announced last time I looked.
There's plenty of Mini-ITX motherboards which accept a socket-ed processor though. It's less of a specialty form factor these days where only VIA is participating in and something that more mainstream manufactures are making as well such as Gigabyte and ASUS.
The RPi4 SoC has a busted PCIe controller (can't do 64-bit operations), the Rockchip one has a very small BAR size limit, so neither can run GPUs for example.
The _real_ ARM board options are SolidRun MACCHIATObin and SolidRun HoneyComb LX2K.
Indeed, the number of acceptable ITX boards is in single digits now for last gen CPUs.
>last get CPUs
Did you mean "last _gen_ CPUs"?
If indeed that was a typo, I think it would be a difficult one to make on a QWERTY keyboard, because the N key isn't very close to the T key. Are you a Dvorak typist, by chance?
More likely to be phone keyboard auto correct.
That's right partially too. Autocorrect is tuned to qwerty for its hamming distance data used, and other statistics.
Yes, you are right. But my current laptop comes with qwerty labels. Never managed to get time to move the keycaps.
The first few lines of this article made it clear this was learning oriented and a “challenge” from a friendly competition.
I love the "lets do something just for the sake of it" approach in life.
Too often economic thinking restrict from fun creations and discoveries
> Within that power envelope you can pick up a more powerful tiny x86 board
Which is a poor solution as a local dev/test platform for apps to be deployed to ARM cloud platforms, which seems to be the selling point of this.
It's not just for fun; the Raspberry Pi is used in many real-world applications.
I personally implemented a lab equipment access control system and chemical auditing tracker using many Raspberry Pis for the Lurie Nanofabrication Facility.
The failure rate is like 1 in 20 over a period of a year (and that was in the power system); they are very sturdy.
> idles at 15W
Well, not so fast...
There is a high-performance SSD in there, and this is without power optimizations (you can go as far as shutting down entire nodes!)
There's a lot of bad things about Intel and x86 that I don't have to worry about with this:
- I don't have to worry about Intel ME backdoors.
- I don't have to worry about UEFI at all.
- I have 4 sets of cores that are physically separated, so I have a real defense against spectre-type vulnerabilities. E.g. public-facing nginx can go on one, backend application server can go on another, and SQL server on a third.
- Doesn't look like I really need any fans.
I would take UEFI over the insanity that is the Pi boot process, any day. (Agreed on the other points.)
What are some good x86-64 alternatives to the Raspberry Pi? Asking since I’m quite out-of-date in my hardware knowledge. I guess what I want is low energy consumption, ~8 GB RAM (or more) and enough CPU to run a bunch of Docker containers. And a price tag that’s not higher than the Pi.
I don't think you're going to find anything as cheap as a single Pi, but you should be able to find something cheaper than the Turing Pi 2 ($200?), 4 of its CM4 adapters ($40 total), and 4 Raspberry Pi CM4s ($140 total): $380ish.
One option is to buy a used Intel NUC on eBay. I bought one for $235 (with RAM, SSD, and case) a couple years ago that satisfies all that.
I haven't looked into a lot of the other x86-64 SBCs. I really liked the ODROID-H2+ but it's discontinued due to supply chain problems. I'm hoping an ODROID-H3 or something will come out next year.
I'm told there's an "thin client" class of computers. Ebay may have a few used variants that get close to your target cost, look for the HP t620 / t630 / t640 series for examples but I am also getting hits searching for used enterprise network thin clients, with prices in the ~$45+ range. I'm considering something like this if at some point I decide to upgrade from my RPi 4, which is currently RAM limited.
ServeTheHome did a one-liter PC series they called Project Tiny Mini Micro, that features small units that hit everything you ask for except cost (buying used might avert the ~$250-$800 price tags I see)
Some of the modern Atom CPUs might fit the bill.
I'm curious, how big is the CPU gap between 4 of these and a single cheap Atom NUC at this point? Or an i3?
I don't doubt that it's significant, but it seems like these are catching up?
> _Within that power envelope you can pick up a more powerful tiny x86 board that takes up less space, probably costs less overall, and performs much better without requiring you to coordinate your project across 4 separate tiny computers._
I am convinced that Intel _does not_ deliver on that.
What tiny x86 board is comparable to the Raspberry Pi? I'm looking for excellent Linux support, with well documented GPIO/ADC/DACs, and as little closed-source hardware as possible at a similar price point. If I want to talk directly to the ethernet controller (for example) then I should be able to look at the board's documentation without having to sign any NDA.
So you have found docs for the Broadcom SoCs on the Pi without an NDA? That would be newsworthy. The ethernet controller lives in the SoC btw, and talks to a PHY. That's another Broadcom chip. See if you can find a datasheet for the BCM54210. The Pi is _not_ open hardware; even the schematic is a joke. Broadcom is another way to spell "NDA" and "no docs for plebs."
I don't know about price, but you absolutely can find Mini-ITX (and smaller) Intel based systems that idle at 5-15 watts and perform better than the Pi. At $200 for the cluster board, $40 for the adapters, and ~$50 a pop times 4 for the CM4s, we're talking $400-$500, and that's not far off from the price of some of these low end x86-64 systems that -- yes -- are much faster than the Pi 4.
It's worth pointing out that the Pi isn't open by anyone's definition or any stretch of the imagination. But, it sounds like you want a Pi for the thing it's actually somewhat good at, which is not running Kubernetes on a 4-way cluster board with an ATX power supply.
The Pi is nice because it's a bridge between microcontrollers and normal OS-grade computing. You can have a full desktop environment and also have GPIOs and SPI peripherals talking to the outside world. This is pretty cool and is a nice little niche to be in. I don't think anything really beats the Pi here - there are Rockchip and Allwinner based boards that compete, but they're all just different facets of the same gem so to speak.
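That bridge is genuinely convenient; a minimal GPIO sketch, assuming the gpiozero library and an LED wired (through a resistor) to BCM pin 17:

```
# Blink an LED from a full Linux userland - the Pi's niche in a few lines.
from gpiozero import LED
from time import sleep

led = LED(17)          # BCM pin number is an assumption about the wiring
while True:
    led.on()
    sleep(0.5)
    led.off()
    sleep(0.5)
```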
Once you stop using those microcontroller-style functionalities and bridges to the real world and start using the Pi as a cluster server, things don't really make sense anymore to me. I get why people do this for fun (just like back in the day people would set up MPI/Beowulf clusters of whatever cast-off vintage desktops they could find), but for practicality points, there's no good argument for the Pi IMO.
Compared to this 4x cluster board, an Intel NUC 11 Tiger Canyon comes in at a similar price point, 3x+ the performance/watt, and at least 5x the performance (for real workloads that touch I/O, probably 10x or more).
>Compared to this 4x cluster board, an Intel NUC 11 Tiger Canyon comes in at a similar price point, 3x+ the performance/watt, and at least 5x the performance (for real workloads that touch I/O, probably 10x or more).
Google Shopping says not. And is the i5 3x faster than 16 ARM cores at 25W (or any W)? (I have no idea, genuine question.)
I'm finding the i5 NUC for around 450 EUR incl. VAT (notice EU prices for electronics are almost always higher than US, so you can't just take the US$ value of a Pi and do a direct conversion). I think you need to add RAM though. So it's a little bit more expensive than the four Pis plus adapters plus cluster boards, but very much in the same ballpark.
The CPU in the NUC is i5-1135G7 with a typical TDP of 15W (but allegedly can go up to 28W), and scores 10174 in passmark CPU benchmark.
https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i5-1135G...
We don't know total system power draw, but anecdotally these low power intels can be quite efficient (I have a fanless i7 system that idles around 10 watts and IIRC stays well below 20 watts at load).
Android version of Passmark gives 751 points for the Pi 4's CPU:
https://www.androidbenchmark.net/phone.php?phone=Raspberry+P...
I don't know if that is comparable with the desktop version. But there's a Phoronix comparison of Pi 4 vs G5900, and G5900 scores 2750 in Passmark.
https://www.phoronix.com/scan.php?page=news_item&px=Raspberr...
If we generously eyeball these Phoronix charts, I'd say the Pi 4 is on average no more than 1/3 as fast as the G5900. If we translate that to Passmark points, the Pi would score around 900. I saw another Google result where a similar comparison was made and they also concluded that the Pi would be around 900 passmark points.
So four Pis ganged together (assuming zero overhead) would score 3600 versus the i5-1135G7's 10k points. That's almost exactly 3x difference.
TLDR is that yes, 10W-15W Intel CPUs are a lot more powerful than a Pi4 or even four Pi4s. And if you actually try to use them, the difference in performance is very evident.
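Spelling out that arithmetic (all inputs are this thread's estimates, not fresh benchmarks):

```
pi4_passmark_est = 900        # eyeballed from the Phoronix Pi 4 vs G5900 comparison
i5_1135g7_passmark = 10174    # desktop Passmark score quoted above
cluster = 4 * pi4_passmark_est                           # assumes zero clustering overhead
print(cluster, round(i5_1135g7_passmark / cluster, 1))   # 3600, ~2.8x
```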
Ooof. That's terrible. No wonder people think ARM chips can't go fast.
Pragmatic (as usual). ;-)
I'm not so sure, though, that we aren't seeing early forays into something that will in fact come down in cost and go up in efficiency.
I am hoping someone can educate me. I do enjoy Jeff's articles and videos, but I fail to see what these could be used for. I concede that this is entirely due to my ignorance.
A few of the most pertinent applications would be running an application at a live event, where you might run off battery / solar / small generator, and you don't have much of a power budget, but would still want to replicate a small K3s cluster setup locally for simplicity's sake. (I realize the irony of including Kubernetes in a line about simplicity...).
But in general, it would be more focused on a lower budget / lower power option for exploring small cluster ideas. Much more fun to test things out or learn on a platform that costs $600 all-in and doesn't suck down a few hundred watts of power all day, than to buy four used PCs that use more power, take up a little more space, and cost maybe a little less, all-in.
It is a niche product, but nowhere near as niche as the Seaberry board I showed last week :)
I'm going to sheepishly wonder out loud if too much sponsored content will be to Jeff's channel's long-term detriment. I would be excited too to have a popular channel growing and being offered these cool toys but I have also seen blow-back of late.
Perhaps it's just par for the course on HN though. Or perhaps we're missing the more down-in-the-kernel hobbyist Jeff.
Don't worry, been doing my other stuff too, but this month for some reason all this gear showed up within a couple weeks and I wanted to make sure I gave it some air time before I go towards some more fun (to me) projects.
The key I think will be trying to make the blend be 'learning/teaching something new' along with finding the right way to fund it.
While I am extremely appreciative for every dollar that comes in through Patreon/GitHub Sponsors, the reality is that isn't enough (and except for maybe like 0.2% of creators/devs won't ever be) to cover all the bills. So I do need to work with more deep-pocketed sponsors from time to time.
It is a bit of a delicate balance, though, because it's easy to run the risk of being a hypocrite depending on who I work with and what projects I take on ;)
Cheers for this, I really enjoy the collaboration to play against the "big toys" Patrick has.
Maybe I missed it, but are there I2C or GPIO pins broken out for each module? I remember being really excited about the V1 Turing board in regard to the mashup potential between cluster-type things and the "Physical Computing" sensor / IO side, but I didn't see anyone try to solder in those pins.
Each board has a UART header exposed, and through the MCU it can be rebooted / flashed over the network. Full GPIO is only exposed in a 40 pin header on node 1, though.
Might be interesting combined with VMware's ARM hypervisor "fling" here
https://flings.vmware.com/esxi-arm-edition
to make a toy "datacenter in a box", using the third node as a NAS datastore.
Sorry about the late reply but this is something I've been thinking about for a long time and something for which I have been trying to design my own carrier board while waiting for the turing pi 2.
Think about Qubes OS, where you define different "security contexts" for different workloads and each of these runs in an isolated virtual machine. If properly implemented it's a pretty sturdy solution, but you're one hypervisor away from total compromise regardless of what you do.
Now think of the same thing, except this time you have one main system that acts as a firewall, shared resource, PXE and window server, and a few secondary nodes that boot from lightweight static images they get from the main node and mount specific resources from it to achieve a sense of persistence (i.e. firefox / thunderbird profiles, etc.).
There you have a pretty clearly delineated attack surface, so you know which services you need to audit and isolate in the main box.
In my mind I'd love to have the same thing but with the compute modules laying parallel to the main board to be able to fit it in an oversized laptop form factor (kind of like the mnt reform) but this would be a really cool middle ground.
I plan on getting to the software side of it as soon as I have some spare time because even if this doesn't work or isn't ever in stock the whole thing could be put together in a ghettoish way using regular PIs and it'd still be something that could be fit in a mini tower with a small switch which would be good enough for me.
There are a lot of industrial and commercial applications. My interest though is in running a low power cluster at home for testing random applications that I build.
For example I'm writing an app for hosting documentation for household items and integrating it with a label printer. It would be great if I could host this and many other applications locally on a K8s cluster instead of stringing together a bunch of raspberry pis with a network switch, or hosting it on a cloud service
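For illustration, a sketch of what "host it on the home cluster" could look like with the official kubernetes Python client; the image and names are placeholders, not the actual app:

```
# Create a two-replica Deployment on the cluster pointed to by ~/.kube/config.
from kubernetes import client, config

config.load_kube_config()

labels = {"app": "household-docs"}
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="household-docs"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="web",
                    image="registry.local/household-docs:latest",  # placeholder image
                    ports=[client.V1ContainerPort(container_port=8080)],
                ),
            ]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```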
> It would be great if I could host this and many other applications locally on a K8s cluster instead of stringing together a bunch of raspberry pis with a network switch, or hosting it on a cloud service.
Pardon my ignorance, but why couldn't you run this on a NUC or a NAS? Do you need HA for testing random apps? Or is it just that this is more fun and a learning experience?
My interest in this space is in the opportunity to test assumptions when scaling wide. Those of us without University compute access or Cloud budgets ( like neighborhood STEM classes ) can plunk down for a desktop learning environment that can run k8s or slurm and you can learn / teach MPI programming patterns with hands-on, and minimal risk.
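The classic first exercise on such a teaching cluster is an MPI hello-world; a minimal sketch with mpi4py (the hostfile and rank count are whatever your cluster provides):

```
# hello_mpi.py - run with something like:
#   mpirun -np 8 --hostfile hosts python3 hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
host = MPI.Get_processor_name()

# Each rank reports where it landed; rank 0 gathers and prints.
reports = comm.gather(f"rank {rank}/{size} on {host}", root=0)
if rank == 0:
    print("\n".join(reports))
```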
Yes a NUC or NAS could do the job as well, so my interest is more of a fun learning experience.
I think most of these "cluster pi" things are really just because some people don't want to have a cluster of VMs but a "real" cluster instead.
"Can I do it with a Pi?" is very similar to "Can I run Doom on it?" Most of the projects exist just to see how far the capabilities of a tiny computer can be stretched. However, someone may realized that their five-figure project could be done much cheaply with a Pi and HAT.
I think one thing that would be interesting would be a fully native build environment and full native development tools for the Raspberry Pi.
Building Ubuntu OS images from scratch on a Raspberry Pi server equivalent machine would be very interesting.
For the non-embedded firmware engineer, it is non-trivial initially to wrap your head around cross-compiling Linux distros using ARM compilers running on an x86 host.
Every time something like this gets posted on HN, my brain does the same thing: “Neat! I wonder what Jeff Geerling will think of—Oh! This _is_ him!”
Keep up the good work, and looking forward to hearing about if you’re able to get those Coral AI TPUs working!
Jeff Geerling is one of the best tech Youtube channels out there
Red Shirt Jeff Geerling though? Still one of the best, let's be real.
Jeff thank you so much for your continued work with the Pi!! I’ve learned so much from your book and YouTube content.
I have a similar-ish setup in the form of the ClusterHAT on a Pi 4 with 4x Pi Zero W connected to it. It gives me 4 individual Pi Zero machines (each with wireless) connected over the USB-Ethernet gadget interface to the Pi 4, and you can then bridge the interfaces as you need.
My purpose was to have a SFF of individual wireless clients that I could control over SSH to do some network testing. Added benefit, the whole thing can be powered from a single PSU into the Pi 4.
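A sketch of what that control loop can look like, assuming key-based SSH auth and the paramiko library (the hostnames are made up):

```
# Run a command on each Zero over SSH and collect the output.
import paramiko

ZEROS = ["p1.local", "p2.local", "p3.local", "p4.local"]   # assumed hostnames

def run_on(host, command):
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(host, username="pi")      # key-based auth assumed
    _, stdout, _ = ssh.exec_command(command)
    output = stdout.read().decode().strip()
    ssh.close()
    return output

for host in ZEROS:
    # e.g. check which SSID each wireless client is associated with
    print(host, run_on(host, "iwgetid -r"))
```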
I wonder if that will take the Zero 2W without excessive power drain.
Other than the Raspberry Pi, does anyone know of an even lower powered board which can run a very simple web server (it only needs to return a single HTML file)? I have an idea for a fun hobby project where I want to connect my echo bike (for cardio) to the board, which charges it every day, and have it return an HTML page with how much I charged it and daily cardio stats. Basically, if I don't do cardio, then the board won't be charged enough to keep the site up, so that gives me an incentive to do it regularly.
This can be achieved with the original ESP8266.
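A minimal MicroPython sketch of that idea on an ESP8266, serving one HTML page; the Wi-Fi credentials and the charge figure are placeholders:

```
import network
import socket

wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect("your-ssid", "your-password")   # placeholders
while not wlan.isconnected():
    pass

page = "<html><body><h1>Bike charge today: {} Wh</h1></body></html>"

addr = socket.getaddrinfo("0.0.0.0", 80)[0][-1]
s = socket.socket()
s.bind(addr)
s.listen(1)
while True:
    conn, _ = s.accept()
    conn.recv(1024)                          # ignore the request details
    body = page.format(42)                   # placeholder reading
    conn.send(("HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n" + body).encode())
    conn.close()
```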
Bad thing is that with a sufficiently large battery, he can probably skip a lot of days. They don't take much power.
It's the right choice, though.
Except the Raspberry Pi Compute Modules for this are now 4-5x their original price. I have a new Turing Pi in its box that I refuse to drop $500 on CM3 boards for. I'd rather just buy a surplus lot of desktop computers to build a toy cluster. I should probably just sell the Turing Pi. Useless, never even got to boot it. My Raspberry Pi 4 cluster works just fine though.
The modules are the same price... just out of stock in perpetuity.
A few months ago I'd find some here and there at Micro Center, but they've been out of stock for months.
If you're patient you can get them at list price, though—I ordered the four 8GB Lite modules I used in the video over the past year, from two different suppliers.
I have a Turing Pi v1 too, and haven't been able to play with it because of this.
Neat. How would this be used in edge infrastructure? And more broadly, what exactly is "edge infrastructure"?
Cynically "edge" is the way to say "not everything actually makes sense to run in the cloud but we can't go back on the marketing push that on prem infrastructure is a fossil so we'll call it edge". Or at least that's what it's meant at the last couple of companies I've worked for/with that were convinced they were going to "move everything to the cloud" despite the cries that the goal should be "move most things to the cloud" at the start. The marketing headline that tends to go with it is "edge is where the users are" but really it's come to mean any place you have non cloud infrastructure closer to the user than the internet (i.e. likely every place you have non cloud infrastructure).
Less cynically some try to use it as a differentiator for running cloud services dispersed as close to users as possible for performance reasons vs running cloud services in centralized locations to reap the benefits of hyper scaling. Leaving "on premise infrastructure" to mean "legacy infrastructure" instead of allowing the term to follow any changes in the way that on premise infrastructure is run over time.
I'd never considered "on premise" to be a type of "edge infrastructure" before, but technically I guess it is. But I'll stick to calling it "on premise" - because it's not "on premise" in order to have cloud services close to the user. It just hasn't yet migrated to the cloud.
So keeping with the "cloud but close to user", where would this or any similar device play a role in edge infrastructure? Don't cloud/edge providers use the same hardware for both? Nobody is going to use this in production, and I don't see a role for testing either.
It's a shitty choice for an edge device for sure, but it's still an edge device. Just like it's a shitty Linux computer but still a computer. "Edge" is about where, not what, just like whether something is a "cloud" doesn't change based on whether a Pi is used or not.
> Cynically "edge" is the way to say "not everything actually makes sense to run in the cloud but we can't go back on the marketing push that on prem infrastructure is a fossil so we'll call it edge".
“Edge” is (mostly) also cloud, but with stronger guarantees of topological proximity to users than general-purpose cloud regions provide. On-prem is, well, on-prem, not generally referred to as “edge”.
While not exactly the same, Chick-fil-A in the US runs k8s clusters on Intel NUCs in their restaurants.
Generally, the idea is that you could deploy something like this across a large number of sites (at scale). It's reasonably cheap hardware, and when paired with multiple nodes & K3s/k8s you get HA. Installation and set-up are easy, which are HUGE factors when installing equipment across a large number of sites. The system must be easy enough for a tech to go to site, plug the equipment in, and leave. Everything else is handled automatically and/or remotely, so as to avoid technician call-outs to site.
What is the metal mount used for the board in this picture:
https://www.jeffgeerling.com/sites/default/files/images/turi...
BC1 mini ITX build platform.
That's a BC1 mini from Streacom.
Why do I enjoy these raspberry pi articles so much?
For me it's because of the very idea of a $35 computer that's as powerful as a million dollar computer was when I was at university.
Each one of these Raspberry Pi's is more powerful than the first few computers that I've used and owned. So to have a cluster of them feels like 1990's ultimate power!
Unfortunately, due to the way they're mounted, it won't fit in a 1U chassis...
We have waited for this for soooo loooong!!