Two mini PCs, facing off against each other in a race that’s somewhat network dependent. What fun!
I got a couple of those weird mini NUC-style PCs: cosmetically near-identical machines of the “AK2” variety that you can get on the Amazon for between $70 and $175, depending on spec and what deals are going on. They were bought for other things, but I figured: why not see what the difference is between a couple of generations of Celeron?
Similar things on each: both have 2x HDMI ports, a smattering of USB 2 and 3 ports, RTL8111-family GbE network, onboard single-port SATA, and AC wireless (one with an Intel card, one with a Realtek). The differences are memory, CPU, and storage (beyond that shared SATA port).
- AK2 (J3455): Celeron J3455, Apollo Lake, 4c4t; 6GB RAM; 64GB eMMC; no NVMe slot (there is an open slot, but it’s Mini-PCI for some reason)
- AK2 “Pro” (N5095): Celeron N5095, Jasper Lake, 4c4t; 12GB RAM; 256GB NVMe SSD (this came with an SSD that threw a ton of errors during Ubuntu installation; I swapped in a known-good 256GB drive, though I’m not sure whether that was one-off weirdness or the pack-in drive is genuinely flaky; see the quick health check below)
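If you want to sanity-check a suspect pack-in drive like that, something along these lines is a quick first pass. This is a generic sketch rather than what I actually ran at the time, and the device path is an assumption:

```bash
# Hypothetical quick health check for a suspect NVMe drive
# (/dev/nvme0n1 is an assumed device path; check lsblk first)
sudo apt install -y smartmontools
sudo smartctl -a /dev/nvme0n1                # SMART health and error-log attributes
sudo dmesg --level=err,warn | grep -i nvme   # recent kernel complaints about the drive
```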
To run the drag race, I set both of these up with Ubuntu Server 22.04 LTS with full updates, pyenv, and Docker Engine, and connected them to my network via Ethernet. The Ethernet connection is somewhat bottlenecked: they’re on the two Ethernet ports of the TP-Link Deco P9 mesh pod in the room where they sit, and that pod generally relies on the slower HomePNA powerline backhaul to the rest of the network. In practice, throughput ranged from 7-10MB/s when both were hitting the network simultaneously to about 15MB/s when one had full shouting rights over the cable, and they ran close enough together that they were basically sharing the pipe the whole time.
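For reference, the per-machine setup looked roughly like this. It’s a sketch, not a verbatim transcript; I’m showing the upstream pyenv installer and Docker convenience script here, and your preferred install method may differ:

```bash
# Base OS: Ubuntu Server 22.04 LTS, fully updated
sudo apt update && sudo apt full-upgrade -y

# Build dependencies pyenv needs to compile CPython from source
sudo apt install -y build-essential libssl-dev zlib1g-dev libbz2-dev \
  libreadline-dev libsqlite3-dev libffi-dev liblzma-dev

# pyenv (plus pyenv-virtualenv) via the upstream installer
curl https://pyenv.run | bash

# Docker Engine via the upstream convenience script
curl -fsSL https://get.docker.com | sudo sh
```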
The workload I chose was setting up an Open edX devstack instance on each from scratch. Open edX is a pretty big thing - a full “large and slow” setup ends up with 14 Docker containers - and there’s a smattering of compiling stuff and decompression and database ops and all that, so it seemed like a good fit. (Plus, I’m really familiar with it. The day job mostly entails writing software that interfaces with Open edX in some manner, so I’ve run it on much faster systems than these two.) However, it’s worth noting that some of these steps are very network bound, and those steps are noted as such. I did include the preliminary Python setup steps here too, so that’s a lot more compiling.
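Concretely, the timed sequence on each box went along these lines. Treat it as a sketch: the virtualenv name is a placeholder of mine, and `dev.pull.l&s` in the tables below is my shorthand for the large-and-slow pull target:

```bash
# Preliminary Python setup (the compile-heavy steps)
time pyenv install 3.11.0
time pyenv virtualenv 3.11.0 devstack   # "devstack" is a placeholder name

# Open edX devstack itself
git clone https://github.com/openedx/devstack.git
cd devstack
time make requirements
time make dev.clone.https               # clones the service repos from GitHub
time make dev.pull.large-and-slow       # pulls all the Docker images
time make dev.provision                 # builds, migrates, and seeds everything
```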
Here are the results. The times listed are the “real” time from `time(1)`.
| Step | J3455 | N5095 | Notes |
|---|---|---|---|
| `pyenv install 3.11.0` | 10m40s | 05m20s | |
| `pyenv virtualenv` | 00m12s | 00m05s | |
| `make requirements` | 01m35s | 01m09s | pretty network dependent |
| `make dev.clone.https` | 04m56s | 05m00s | pretty much just network access (cloning GH repos) |
| `make dev.pull.l&s` | 10m20s | 09m39s | yup, a lot more network, this time Docker stuff |
| `make dev.provision` | 108m54s | 51m32s | this one is not network |
Round 2: now with identical 512GB TeamGroup AX2 SATA SSDs connected to the onboard storage and a fresh install of Ubuntu Server 22.04. Some of the network-bound steps sped up here; the machines got somewhat out of sync, so each had the network to itself for stretches.
| Step | J3455 | N5095 | Notes |
|---|---|---|---|
| `pyenv install 3.11.0` | 10m40s | 05m22s | |
| `pyenv virtualenv` | 00m12s | 00m05s | |
| `make requirements` | 03m35s | 01m11s | pretty network dependent |
| `make dev.clone.https` | 04m04s | 06m33s | pretty much just network access (cloning GH repos) |
| `make dev.pull.l&s` | 09m22s | 07m31s | yup, a lot more network, this time Docker stuff |
| `make dev.provision` | 90m03s | 43m48s | this one is not network |
The most telling of these are the first and last results: `pyenv install 3.11.0` and `make dev.provision` are where you can really see what a couple of generations of Intel architecture enhancement buy you. As a reminder, these two chips launched about five years apart (contemporaries of roughly Skylake through Tiger Lake; 6th gen Core to 11th gen). Interestingly, the performance difference is about the same as the cost difference: the J3455 system was about $75 and the N5095 system about $150, a 2x price gap, while round 2’s `make dev.provision` went from 90m03s to 43m48s, a roughly 2.1x speedup.
Neither of these systems is particularly performant (and they’re probably gonna lose those 512GB SSDs), but they make good point-of-need systems for lower-end tasks. They’re pretty small: roughly 5in square and about 3in high. The J3455 is going to become a Home Assistant box, since it’ll outperform the Raspberry Pi 3 currently doing that job and it’ll fit nearly anywhere.
A couple weird hardware things I’ve noticed:
Most of the inspiration here came from some videos by Goodmonkey on YouTube. He had better luck with AK2/GK2 pricing than I did. (But I might also look at deploying those TP-Link Omada WiFi dingles...)