Stupid Computer Drag Racing

Two mini PCs, facing off against each other in a race that’s somewhat network dependent. What fun!

I got a couple of those weird mini NUC-style PCs. They’re cosmetically very similar machines, of the “AK2” variety, that you can get on the Amazon for between $70 and $175 depending on spec and whatever deals are going on. They were bought for other things, but I figured: why not see what the difference is between a couple of generations of Celeron?

Similar things on each: both have 2x HDMI ports, a smattering of USB 2 and 3 ports, RTL8111-family GbE network, onboard single-port SATA, AC wireless (one with an Intel card, one with a Realtek). The differences are memory, CPU, and storage (outside the SATA).

AK2: J3455 - Celeron J3455 Apollo Lake (4c4t), 6GB RAM, 64GB eMMC, no NVMe slot (there is an open slot but it’s Mini-PCI for some reason)
AK2 “Pro”: N5095 - Celeron N5095 Jasper Lake (4c4t), 12GB RAM, 256GB NVMe SSD (the SSD it came with threw a ton of errors during Ubuntu installation.. I swapped it for a known-good 256GB drive, but I’m not sure if that was just weirdness or if the pack-in drive is flaky)

To do the drag race, I set both of these up with Ubuntu Server 22.04 LTS with full updates, pyenv, and Docker Engine, and connected them to my network via Ethernet. The Ethernet connection is somewhat bottlenecked: I’m using the two Ethernet ports on the TP-Link Deco P9 mesh pod in the room where they sit, and that pod generally uses the slower HomePNA powerline backhaul to the rest of the network. Speeds ranged from 7-10MB/s when both machines were hitting the network simultaneously to about 15MB/s when one had full shouting rights over the cable, and the runs overlapped so they were basically sharing bandwidth the whole time.
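For reference, the setup on each box looked roughly like this. This is a sketch, not a verbatim transcript - the pyenv and Docker installers are the official convenience scripts, but check their docs before piping curl into a shell:

```shell
# Ubuntu Server 22.04 LTS, fully updated
sudo apt update && sudo apt full-upgrade -y

# Build dependencies pyenv needs to compile CPython later
sudo apt install -y build-essential libssl-dev zlib1g-dev libbz2-dev \
  libreadline-dev libsqlite3-dev libffi-dev liblzma-dev

# pyenv via the official installer
curl -fsSL https://pyenv.run | bash

# Docker Engine via the official convenience script
curl -fsSL https://get.docker.com | sh
```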

The workload I chose was setting up an Open edX devstack instance on each from scratch. Open edX is a pretty big thing - a full “large and slow” setup ends up with 14 Docker containers - and there’s a smattering of compiling, decompression, database ops, and all that, so it seemed like a good fit. (Plus, I’m really familiar with it. The day job mostly entails writing software that interfaces with Open edX in some manner, so I’ve run it on much faster systems than these two.) However, some of these steps are very network bound, and those are noted as such below. I also included the preliminary Python setup steps, so there’s a lot more compiling.

Here are the results. The times listed are the real time from time(1).

                       J3455     N5095
pyenv install 3.11.0   10m40s    05m20s
pyenv virtualenv       00m12s    00m05s
make requirements      01m35s    01m09s   - this step is pretty network dependent
make dev.clone.https   04m56s    05m00s   - this step is pretty much just network access (cloning GH repos)
make dev.pull.l&s      10m20s    09m39s   - yup, a lot more network, this time Docker stuff
make dev.provision     108m54s   51m32s   - this one is not network

Round 2: now with identical 512GB TeamGroup AX2 SATA SSDs connected to the onboard SATA and a fresh install of Ubuntu Server 22.04. Some of the network speeds went up here; the machines got kinda out of sync, so each had the network to itself for stretches.

                       J3455     N5095
pyenv install 3.11.0   10m40s    05m22s
pyenv virtualenv       00m12s    00m05s
make requirements      03m35s    01m11s   - this step is pretty network dependent
make dev.clone.https   04m04s    06m33s   - this step is pretty much just network access (cloning GH repos)
make dev.pull.l&s      09m22s    07m31s   - yup, a lot more network, this time Docker stuff
make dev.provision     90m03s    43m48s   - this one is not network

The most telling of these are the first and last results - pyenv install 3.11.0 and make dev.provision are the places where you can really see what difference a couple of generations of Intel architecture enhancements make. As a reminder, these two chips are about 5 years apart (roughly the Skylake-to-Ice Lake span; 6th gen Core to 11th gen). Interestingly, the performance difference is about the same as the cost difference: the J3455 system was about $75 and the N5095 system was about $150.
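To put a rough number on that gap, here’s a quick sketch that converts the Round 2 provision times from the table above into seconds and computes the speedup (plain shell, nothing fancy):

```shell
# Convert a "MMmSSs" time (as reported above) to seconds, then compute
# how much faster the N5095 was on `make dev.provision` in Round 2.
to_seconds() {
  local t=${1%s}                   # drop trailing "s" -> "90m03"
  local m=${t%m*} s=${t#*m}        # split on the "m"
  echo $(( 10#$m * 60 + 10#$s ))   # force base 10 so "03" parses
}

j3455=$(to_seconds 90m03s)
n5095=$(to_seconds 43m48s)
awk -v a="$j3455" -v b="$n5095" 'BEGIN { printf "%.2fx faster\n", a/b }'
```

Which works out to just about a 2x speedup, matching the roughly 2x price difference.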

Neither of these systems is particularly performant (and they’re probably gonna lose those 512GB SSDs), but they make good point-of-need systems for lower-end tasks. They’re pretty small - roughly 5in square and about 3in high. The J3455 is going to become a Home Assistant box, because it’ll outperform the Raspberry Pi 3 that’s currently doing that task and it’ll fit nearly anywhere.

A couple weird hardware things I’ve noticed:

  • They both have a USB-C port under the lid. You can get power out of it, but it doesn’t seem to do anything else. I plugged a drive into it and got nothing.
  • The J3455 has a micro SD card reader on the board (that evidently works). The N5095 one doesn’t.
  • The J3455 has a mini-PCI slot on it. I was thinking maybe I could put an M.2 2242 drive in it, but nope! I suppose you could use it for a WWAN modem or something, though.. do they still make those in mini-PCI? I have a CDMA one floating around; I could try it to see if it works in the slot..
  • If you get one and take it apart, be careful with the WiFi antennas. I disconnected one while taking apart the J3455 unit, and in the process of trying to wedge the connector back underneath the plastic piece they glued to the top of the WiFi module (to keep the antennas connected..), I really broke the other one. Surprisingly, it still connects to my local network, but that may be a function of it sitting basically next to one of the mesh pods.
  • I also learned that Realtek USB WiFi NICs are less than great for use in Linux.

Most of this was from some videos by Goodmonkey on YouTube. He had some better luck with the AK2/GK2 pricing than I did. (But I might also look at deploying these TP-Link Omada WiFi dingles..)

Mastodon Week 2

After another week of Mastodon instance running, I’ve learned a few more things. So, here they are.

Sizing: Sizing was a problem! Turned out this was due to some choices I’d made (which I’ll discuss later). By the end of things, I went from a giant Sidekiq job backlog to none at all, and from a 2 vCPU/4GB droplet to a 4 vCPU/8GB droplet. This is actually too big now but it’s going to stay that way until I get some actual monitoring going on the machine. DigitalOcean provides some but I’d like some better stuff.

Relays: I added like 7 relays to the server to help grab stuff for my federated timeline. Relays are sort of a garden hose of posts - if you’re connected to a relay, your posts get aggregated and resent by the relay to the other connected instances. For me, this ended up creating two problems.

  1. I had too many of them and that spawned a ton of jobs that my smaller droplet couldn’t keep up with. If you want to add a relay to your instance, maybe only do one or two. (And you’ll want to be very judicious about what relays you add.)
  2. As for the relays I’d added from this list that I’d mentioned in the previous post: yeah, don’t do that. If you add a relay, you need to go through the published connections list to see who’s connected to the relay first or you’re going to get a bunch of hate speech and shitposting.

To be honest, once you get started following folks and rolling through hashtags and stuff like that, your instance will find things on its own. If there ends up being some focused relay systems in the future for certain communities, then I can see that being a thing to join up with; otherwise it really does seem to take care of itself.

Upgrades: As I’d noted, the DO one-click installer shipped with version 3.1.3 of the Mastodon server software. That’s pretty old, so I upgraded it to 4.0.0rc3. (Aside: it’s nice that I can do that! One of the benefits of running your own stack.) It was a pretty easy process. The Mastodon docs mostly cover it, but this is the gist of what I ended up doing (note that this is for the DigitalOcean one-click installer environment; yours will be different if it’s not that):

  1. Stop all Mastodon services.
  2. Install newer Ruby and Node. rbenv is installed already, so you just use that to install 3.0.4 or newer. I added the NodeSource packages (see nodejs.org) to the apt sources list and installed Node 16 using that.
  3. Follow the instructions in the tagged release.
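Concretely, the first two steps looked something like this. This is a sketch that assumes the DigitalOcean one-click layout (a mastodon user running the app out of /home/mastodon/live); adjust for your setup and check the release notes first:

```shell
# 1. Stop all Mastodon services
sudo systemctl stop mastodon-web mastodon-sidekiq mastodon-streaming

# 2a. Newer Ruby via the rbenv that's already installed
sudo -iu mastodon rbenv install 3.0.4

# 2b. Node 16 via the NodeSource apt repo
curl -fsSL https://deb.nodesource.com/setup_16.x | sudo -E bash -
sudo apt-get install -y nodejs

# 3. Then, as the mastodon user, follow the tagged release instructions:
#    check out the release tag, bundle install, yarn install, run the
#    database migrations and asset precompile, and start the services back up.
```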

I took a snapshot of my instance before starting. That took longer than the upgrade process. If you’re moving from an old version of Mastodon to a newer one and you’re jumping a few versions, you should go back and re-trace the release notes to see if anything special needs to be done. In my case, going from 3.1.3 to 4, the instructions for 4.0.0rc3 included all the extra steps; I would have had to do those steps even if I hadn’t gone to 4.0.0rc3, as they were recommended around version 3.2 or so. (There’s probably a reason why they’re not, but I kinda think the steps attached to the 4.0.0rc3 release should probably be the steps to do going forward.)

Hashtag pinning and following: This is in 4.0 and it’s great. I heart it.

Anyway, that’s it for now. The machine is humming along nicely, and as an added benefit I check Twitter far less often now. Don’t get me wrong, they’re still complementary services (to me), but I’m enjoying being on the pachyderm site. Next on the list is adding some more instrumentation. You get some with DigitalOcean, and I have it linked into Pulseway, but it’d be nice to get Prometheus/Grafana and/or Zabbix or something running too.

Riding a Mastodon

I rolled out a vanity Mastodon instance a week ago (as of this writing) so I could get in on the Fediverse experience and start distancing myself from the bird site. It’s been an interesting week, so here are some notes I’ve collected.

Mastodon has a pretty distinct “feel” compared to Twitter.

Especially right now, going through Mastodon feels a lot like being new at a newish community meeting spot. There are a lot of introductions, people trying to find their people and figure out how things work, and a lot fewer people overall - it’s more like the party is just getting started, while Twitter is already in the deep end and getting deeper. It definitely still feels nicer and more civil, for now at least.

Visibility is a lot different.

I used the Fedifinder tool from @Luca@vis.social and that.. really didn’t help a whole lot! It turns out a lot of my followers don’t have Mastodon handles in their bios, so it found like 5 (and hasn’t found any more since). It may work better for you. Mostly, I’ve found people I’d like to keep following from Twitter by virtue of the fact that I follow them on Twitter and they’ve posted their Mastodon handles there.

However, there’s some other things that I’ve been doing to find folks:

  • Hashtags help a lot. Especially a few that are interesting - for me, that’s #introduction, #highered, #edtech, stuff like that.
    • I wish there was a way to follow these tags - none of the clients I’ve used so far seem to do that. Update: server v4 will do this!
  • Looking through the instance list. I can find instances that are generally geared towards my interests and then go looking through their member directory to see if I can find folks to follow. This doesn’t work so much for the huge general purpose instances like mstdn or the official(?) Mastodon instance but for ones like the already mentioned vis.social or like mastodon.art, it can be pretty nice.
  • Looking through other sites. For example, I’m on MetaFilter, so I added my Mastodon handle to my profile there and I can look up a list of other folks that have handles too.
  • The federated timeline helps, of course, but I have more to say about that later.

But, there are some caveats too. Because everything is spread out, it’s sometimes hard to see content. I’ve kinda blindly followed a few people because their bio was interesting or they were connected to another account I follow, but I wasn’t able to see anything they’d posted because their last post was too old to hit whatever upper bound my instance or client has. It’s also sometimes kinda frustrating to not have an effective firehose (but that’s also sorta nice too).

Choosing an instance to be on can be easy!

You can mostly just search around and find an instance that is geared towards your interests (or a portion of them) and see if you like the vibe, and then (try to) join it. There are some more general instances too, but a lot of them have turned off signups for now because of the exodus from Twitter.

..but it’s sometimes not easy

And that’s why I fired up my own instance - I didn’t feel like I fit in with any of the instances I’d short-listed well enough to want to sign up for an account on them. So, I used the one-click installer on DigitalOcean to get my own vanity instance set up. This is a decidedly user-hostile way to go - I found it pretty easy, but I’ve been running Internet-connected servers for 20+ years; if you’re not a tech folk, this is much less of an option.

However, one thing I have learned is that migrating between instances is pretty easy. There’s just a button and it’ll move your account settings (including followers) to a new account if you want to do that, even across instances. Your history doesn’t move, but it does stay wherever you were. So, that takes some of the anxiety out of the choosing process.

Maybe don’t run your own instance?

I’ve done it and I plan on sticking with it, but there are a couple of things I wish I’d known beforehand.

  • Visibility: I said earlier that it’s a lot different, but it’s even more differenter if you’ve started up your own instance, because new instances don’t come federated out of the box. In other words, you’re staring at a blank page until your server knows about other servers. This resolves pretty quickly - as you follow people, your instance starts gathering toots from their instances - but it’s a little discouraging to begin with. I also worried a bit about whether anything I was posting was making it out of my bubble, which it probably wasn’t until I started following people.
  • Sizing: The one-click DigitalOcean installer doesn’t give you a sizing guide, and neither do the official server docs (or I never saw one, at least), but you probably need to throw more resources at the thing than you might think. I thought I was going to be OK with a 2 vCPU, 4GB RAM, 25GB disk instance and, well, no; it fell over and died a couple of times.
    • Sidekiq logs to syslog, which is fine, but Ubuntu 18.04 defaults to rolling that log daily, and Sidekiq is really, really chatty by default. It fell over the first time because I had 14GB of just /var/log/syslog after a couple of days. I changed the settings to roll the logs at 3GB, moved the logrotate cron script to run hourly, and resized the machine to a 60GB-disk instance to get back into the system.
    • It ran out of memory! I haven’t actually fixed this yet - I added an 8GB swapfile. But Mastodon really needs 6GB of memory; it’s holding tight at 5 and change now for my tiny instance.
    • So, my recommendation would be at least 2 vCPUs, at least 40GB of disk, and 6GB of RAM - and make sure you set up object storage (S3 or the like) if you’re going to post a lot of media. Then make the logging changes I mentioned. If you don’t want to do any of that, then maybe go with 100GB of disk or more.
    • Update: I ended up with a 4 vCPU, 8GB RAM, 80GB disk server, and that seems to be pretty good for now. The CPUs are mostly idle at this point - I added them to help drain the message queue, which took a few hours; my Sidekiq queue was about 100K jobs backlogged. After some further looking, it appears the DigitalOcean one-click installer also installs a pretty old version of the server (3.1.3 vs. the current 3.5.3), so now I get to figure out how to update it (on top of the other learning-how-to-admin-a-Ruby-app things). Nothing more fun than performing umpteen point upgrades for something…
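For the log change above, the fix lives in logrotate. This is a sketch of what the relevant stanza in /etc/logrotate.d/rsyslog could look like on a stock Ubuntu box - the postrotate script path is the stock Ubuntu one; adjust to taste:

```
/var/log/syslog
{
        rotate 4
        size 3G
        compress
        delaycompress
        missingok
        notifempty
        postrotate
                /usr/lib/rsyslog/rsyslog-rotate
        endscript
}
```

Since size-based rotation only triggers when logrotate actually runs, you also need to move the logrotate script from /etc/cron.daily/ to /etc/cron.hourly/ (or similar) so the size check happens often enough to matter.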

My instance specifically is as it ships, except I turned on Elasticsearch for full-text search and added about 7 relays. (I think the relays are really the cause of the logging problem - they’re just a lot more jobs getting scheduled into Sidekiq.)

Speaking of relays: I added a handful of relays off of this list to my instance, and now I’m going to clear a few of them out. Relays shuttle a bunch of toots to your instance, so your federated timeline is more “fleshed out” and your posts get to more places faster, but you should be somewhat judicious about them. Most relays give you a list of the instances they know about right on their root page, and that’s useful for seeing what kind of content will be relayed to you. In my case, I added too many; my federated feed has a lot of just sort of random stuff in it now. (I added a relay in Japan and half of it is Japanese now.) If you’re running a vanity instance like mine, a couple of relays would probably be plenty. You don’t specifically need them.

It’s pretty interesting and fun, though

It doesn’t replace Twitter for me - all the local news stuff, sports-specific stuff, and complaining-about-services stuff is just about non-existent on Mastodon right now. But it’s pretty fun to be on and keep up with. And it’s a lot slower, so it’s easier to catch up on and then be done with. (And there’s a lot less yelling. I signed up for some of that yelling, but it’s nice to have it in a separate place.) I recommend it - it’s more work to use and get into than Twitter, but it’s pretty worth it. I liked it enough to sign up for the Patreon for the developers (in lieu of a blue checkmark).

At some point I’ll swap the Twitter box on the right for a Mastodon box, and at some point I might cross-post things from Mastodon to here, or vice versa, or also to Instagram. (My ever growing cache of cat pictures could use a less Facebooky home.)

If you want to find me on the Mastodon, I’m @james@kachel.social.

Some More Information For Y'all

Hi, I'm James. Some people call me 'murgee'.

I'm a web developer, general computer nerd, and music geek based in Memphis, TN.

This blog is powered by Hexo, Bootstrap, and coffee. Hosting by DigitalOcean (referral). Fonts by Google Fonts.

Background image by Tara Evans on Unsplash.

Because I have to: unless otherwise noted, © 2019-2022 James Kachel.
