PSA: Menus not staying open in Supermicro IPMI? Here’s how to fix it:

I’ve noticed this a couple times in the last week – I had an iKVM window open on my Supermicro host, trying to control the ESXi DCUI, and the menu wouldn’t stay open. It’s very frustrating.

I don’t have a physical monitor hooked up to any of my hosts, so this is a pretty important thing to have working in the event I need to change a setting only available on the “physical” host’s menu.

Say you have two ways to interface with the IPMI firmware – like IcedTea’s javaws and Supermicro’s IPMIView:

$ ls /opt/IPMIView_2.17.0_200505
BMCSecurity          IPMIView20_User_Guide.pdf           libSharedLibrary_v11_64.jnilib
iKVM                 IPMIView_MicroBlade_User_Guide.pdf  PMingLiU-02.ttf
iKVM32.dll           IPMIView_SuperBlade_User_Guide.pdf  ReleaseNotes.txt
iKVM64.dll           jre                                 SharedLibrary32.dll
iKVM.jar             JViewerX9                           SharedLibrary64.dll
iKVM.lax             JViewerX9.jar                       SharedLibrary_v11_32.dll
iKVMMicroBlade.jar   JViewerX9.lax                       SharedLibrary_v11_64.dll
iKVM_precheck.jar    lax.jar                             TrapReceiver
iKVM_v11_64.dll      libiKVM64.jnilib                    TrapReceiver.lax
IPMIView20           libiKVM_v11_64.jnilib               TrapView.jar
IPMIView20.jar       libSharedLibrary64.jnilib
IPMIView20.lax

— VS —

 $ javaws launch.jnlp 
selected jre: /usr/lib/jvm/default-java
Codebase matches codebase manifest attribute, and application is signed. Continuing. See: for details.
Starting application [] ...
Buf size:425984
a singal 17 is raised
doris getscreen ui lang
doris 0 0

The first time this happened, I’m pretty sure it was because another instance of the iKVM window was already open – I was unwittingly launching a second iKVM (for example, opening launch.jnlp with IcedTea while an IPMIView instance was running – windows get lost sometimes!). Presumably this could also happen between two separate remote machines used at once, although I can’t confirm that from experience.

The menus won’t stay open. The viewer launches and looks like it should work, but none of the menus will open and none of the keys respond.

The second time this happened to me, there was no explanation – no other open instances. So I’m here to say:

Just reset your IPMI.

To do that, go to the IPMI’s web portal, log in, and navigate to Maintenance –> iKVM Reset.

Getting your IPMI iKVM menus to open again…

If there was any instance that wasn’t terminated properly, the reset will terminate it and allow your menus to work again – which I suspect is what happened the second time, even though no other instances appeared to be open.

Best wishes!

Automate kernel module installations for VMware Workstation on your Linux distro

VMware Robot does a little dance

In reference to this post I made earlier:

I found the most helpful script, just drop this in a text file called /etc/kernel/install.d/vmmodules.install


#!/bin/bash
# /etc/kernel/install.d/vmmodules.install
# kernel-install passes the action as $1 and the kernel version as $2

export LANG=C

COMMAND="$1"
KERNEL_VERSION="$2"

case "$COMMAND" in
    add)
        VMWARE_VERSION=$(grep player.product.version /etc/vmware/config | sed 's/.*"\(.*\)".*/\1/')

        [ -z "$VMWARE_VERSION" ] && exit 0

        mkdir -p /tmp/git; cd /tmp/git
        # clone the branch matching your Workstation version
        # (repo URL assumed: the vmware-host-modules tree on GitHub)
        git clone -b workstation-${VMWARE_VERSION} https://github.com/mkubecek/vmware-host-modules
        cd vmware-host-modules
        make install VM_UNAME=${KERNEL_VERSION}
        ;;
esac

exit 0

It’s so exciting! That should download, compile, and install the proper vmmon and vmnet kernel modules required for Workstation every time a new kernel is installed (!!).
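You can sanity-check the version-extraction line without touching /etc/vmware/config – here the same sed expression is run against a sample config line (the version string is just an example):

```shell
# simulate the player.product.version line from /etc/vmware/config
line='player.product.version = "16.2.4"'
version=$(printf '%s\n' "$line" | sed 's/.*"\(.*\)".*/\1/')
echo "$version"   # prints 16.2.4
```

If that prints the bare version number, the hook will build the right branch name for git.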

Now I just have to work out a way to get the mokutil key enrollment in there for Secure Boot, but it’s definitely a good start to making the installation of new kernels less painful for Workstation users on Linux.

Tested in: Ubuntu 20.10


Solve “The system cannot find the file specified” error in VMware Workstation

If you’ve ever run into this, it’s a real bummer. I encountered it after using rsync to clone a vm.

At the outset, I want to say that using VM -> Manage -> Clone or File -> Export to OVF are both easier options, but if you’ve already copied a VM by hand, you can try this out:

After I rsynced my VM, I wanted to rename the files to differentiate it from the old VM.

For the example, let’s say my old VM’s name was Windows Ent EFI and I wanted to rename it to winInsiders

I use this one-liner just to rename the .vmdk files:

for f in *.vmdk; do mv "$f" "$(echo "$f" | sed s/"Windows Ent EFI"/winInsiders/)"; done

and renamed the remaining files ($f.nvram, $f.vmsd, $f.vmx, $f.vmxf) by hand (wanted to be careful)

At first, I had forgotten about the references in winInsiders.vmx to the old filenames, so when I tried to re-add the hard drive, I ran into the “cannot find the file specified” error.

I deleted any .lck directories first (not sure if this is necessary, but it seemed like a good idea)

Then I looked inside the .vmx file with a text editor, and found a few keys that needed new values, because they referenced the old names.

They were the handful of entries that embed the VM’s name – typically nvram, displayName, extendedConfigFile, and the disk fileName entries (the exact keys vary by VM).

There could always be more, which is why, instead of editing by hand, you should run sed on the .vmx file – it’s faster and far less error-prone (just like the renaming command above, but with the -i flag, which makes sed edit the file in place):

sed -i s/"Windows Ent EFI"/winInsiders/g winInsiders.vmx

Then do the same for the .vmxf and first .vmdk file (the first .vmdk in a split virtual disk is just a file descriptor, plain text):

sed -i s/"Windows Ent EFI"/winInsiders/g winInsiders.vmxf
sed -i s/"Windows Ent EFI"/winInsiders/g winInsiders.vmdk

It should work now (at least, it did for me).
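The whole procedure – renaming the files, then rewriting the references inside the text-based ones – can be sketched end-to-end. This is a hedged demo run against throwaway files in /tmp, not a real VM directory:

```shell
old="Windows Ent EFI"; new="winInsiders"
demo=/tmp/vm-rename-demo
mkdir -p "$demo" && cd "$demo"

# stand-ins for the real VM files
printf 'displayName = "%s"\nnvram = "%s.nvram"\n' "$old" "$old" > "$old.vmx"
touch "$old.vmxf" "$old.vmdk" "$old-s001.vmdk"

# rename every file carrying the old name (same idea as the one-liner above)
for f in "$old"*; do mv "$f" "$(printf '%s' "$f" | sed "s/$old/$new/")"; done

# rewrite the references inside the config file
sed -i "s/$old/$new/g" "$new.vmx"

ls
cat "$new.vmx"
```

After the loop no file carries the old name, and the .vmx keys point at the new names – the same state you want your real VM directory in before re-adding the disk.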

PowerBash eases some discomfort of adjusting to PowerShell for Linux users

Want to use grep or vim from your powershell env? Now you can!

OK, I just stumbled across the coolest thing.

Occasionally I use PowerShell because it’s the easiest way to get batch processes accomplished on a Windows computer, or sometimes the only way to implement a feature (e.g. making folders case-sensitive to avoid naming conflicts when syncing with *nix computers).

But it’s always a bit of a pain because I have to look up ways to do basic things I can easily do in a POSIX-style environment without needing a reference (e.g. find, grep, sed, df, vim) and sometimes their implementation is awkward or clunky, or just not easily possible.

Enter PowerBash:

Another way of installing – import the script

From the Github repo, “PowerBash Module allows you to run bash commands and other Linux programs from the Windows Subsystem for Linux directly from PowerShell, with support for piping to and from PowerShell commands.”

Here’s the link to the repo:

They recommend getting the script from the repo and importing it using import-module, but I just installed it from the PowerShell Gallery from a PS prompt, using:
install-package powerbash

(Note: I did try installing it from the script using import-module, but it points to an older WSL directory structure than the one used in 1809, so it would need some modification for more modern WSL implementations. The PSGallery package works OTB.)

Using grep -E with find-package command in PowerShell

To install the PS Gallery and get started with the PowerShell method of package management, check this out:

It does require WSL as well, and Windows build 14316 or higher.

It makes using PowerShell a lot easier when you can pull out the random nix command in a pinch!

Looks like VIM works just fine in PowerShell now…

You can still use PowerShell in msys64 (also in “Git Bash”), which gives you some of the same functionality, but PowerShell is somewhat impaired when running inside msys64, so it’s not that great a solution.

PowerBash starts with PowerShell and adds the Linux compatibility layer to it, rather than the other way around, so it’s inherently more friendly to your PS cmdlets. Also nice that it uses WSL for compatibility instead of msys64’s re-compiled utilities. Definitely a two-fer.

PowerBash looks in locations in the Linux Subsystem’s filesystem for programs, equivalent to the following $PATH:


This is currently hardcoded (damn!) but that may be changed in the future (yay!).

Any existing commands available to the current PowerShell session, including EXEs in the PowerShell $env:PATH, cmdlets, functions, and aliases, will not be overridden by Linux programs. If you would like to use a Linux program of the same name instead, remove all functions, cmdlets, and aliases from your session prior to importing the module. For EXEs, you can pass the names of the programs you want over-ridden to the module via -ArgumentList when importing (comma separated).

Add podman controller to cockpit on Ubuntu 20.04 LTS

Podman is the bad-ass contender to Docker that uses OCI

I am heartened to see podman becoming more comfortable on Ubuntu, since although I’m excited about a lot of the software coming from RHEL’s massive portfolio of acquisitions, I still prefer Ubuntu’s support length and update schedule, and find its commands and structure more familiar.

But what about the accessories that are available for podman? Would installing it on Ubuntu make it like a fish out of water?

Thankfully, no! buildah is also available in the official podman repo for Ubuntu, and apparently cockpit-podman can be installed fairly easily, too.

cockpit-podman is one of the areas in which Fedora/RHEL has an edge out of the gate, as it’s not available in Ubuntu repos or PPAs. But I thought it might be fairly easy to build since it’s a web application. Naturally, I couldn’t find any instructions on how to do it anywhere, but I managed to figure it out on a new VPS I’m setting up. So I wanted to share how I got it working, since I’m pretty sure the procedure isn’t described anywhere else yet.

Obviously, first you want to install podman and cockpit. Cockpit’s in the Ubuntu 20.04 multiverse repo, it’s a quick:
sudo apt update && sudo apt install -y cockpit

Podman has Ubuntu in the official documentation and only requires a few easy commands – take a trip to their installation instructions and step through them real quick before continuing here:

Then go to your build directory and run this to clone the cockpit-podman repo:
git clone

Install the build requirements (listed in cockpit-podman.spec):
sudo apt update && sudo apt install -y appstream-util
(you need build-essential for this even though it uses node – so add it to the end of that last command string if you haven’t installed it yet)

(appstream-util should automatically pull in libappstream-glib8, which satisfies the libappstream-glib mentioned in the .spec file – the other prerequisites mentioned there are installed along with the cockpit meta-package)

Also, you’ll need to have nodejs installed – but don’t use the package maintainer’s version, it’s several major versions behind. Instead, manage your node installations with nvm-sh from here:

The installation’s pretty easy – nvm’s installer needs curl, so install curl first if you haven’t already, then run:
curl -o- | bash

Quick side note about choosing a node release with nvm: If you’d rather check to see what the latest available node LTS release is when you perform these operations, it’s:
nvm ls-remote --lts
(I definitely recommend you use an LTS release, as building with a bleeding-edge release of node won’t offer any benefits and is more likely not to work)

Then installing node takes a few simple commands:
source ~/.bashrc (reload your env)
nvm install 12.18.1 (as of writing, the latest node LTS)
nvm alias default 12.18.1 (set it as the default for later)
Verify it’s working by running node --version
If for some reason it’s not working, try:
nvm use 12.18.1

Now that node is installed, navigate to your newly cloned cockpit-podman repo and run make

Then run your sudo make install and it’ll install it to your existing cockpit directory.

Now the next time you log into cockpit you’ll have a super convenient podman controller, just as if you were on a Fedora or RHEL/CentOS machine. (If you’re already logged in, obviously you’ll want to log out and back in again.)

Working example on a VPS running Ubuntu 20.04 LTS

Getting touchpad scroll to work in VMWare Workstation

Shiny new Kubuntu 20.04 dev platform

So, I develop using NodeJS, and I use Windows. Anyone who uses both together and has used Linux as a dev platform has probably realized the Windows version is a little lackluster and has edge configuration issues that make it a pain.

So I was setting up a dev environment on an ESXi host so I could have one place to do my development – no more switching back and forth between Windows and Linux platforms, dealing with the painfully slow WSL, Windows ‘git bash’, etc. – and do everything natively.

That was a great idea until I got it all set up and realized my touchpad wouldn’t scroll anymore.

Not just in the new Kubuntu dev VM, but everywhere! My Thinkpad’s touchpad literally would not scroll ANYWHERE, not in Ubuntu, not in Windows, nada.

I had just installed VMware Tools 10.3.21, which as of June 2020 was still the latest release at over a year old (surprising). However, there is a KB article in which VMware recommends using the open-vm-tools package, which I had thought was an open-source knock-off – but through a little digging I found out it is actually an official release from VMware that has been open-sourced.


Thinking this might have contributed to my non-scrolling touchpad issue, I promptly uninstalled VMware Tools and installed the open-vm-tools-desktop package.

To my surprise, my touchpad started scrolling again, both in the VM and in Windows.

So, as a PSA, anyone who’s experiencing this issue (maybe it’s a Synaptics-related thing?) try using the open-vm-tools package instead of VMWare-tools and maybe your touchpad will magically start scrolling again…

“I feel so alive!”

Building required VMWare Workstation Kernel Modules Ubuntu ** updated for Secure Boot **

Ubuntu logo

VMware Workstation on Ubuntu requires an extra step before it will run. Sometimes the installer builds the kernel modules it requires on its own; other times it won’t. Either way, there’s a post-install step that might as well be documented as part of the installation process.

That step is compiling the vmmon and vmnet kernel modules – Workstation tries to compile them out of the box, but it never works, because the module sources it needs are only available from GitHub. Get used to it, because every time you upgrade your kernel, you’ll have to do this again (unless you create a hook for apt … I’ll have to put that in a later post).

I had to write a post about it since after doing it for the umpteenth time I realized this might be a problem for a lot of people that isn’t well documented and might be exceedingly confusing for first-time users.

Make sure you have the headers and modules for your kernel – this one-liner will refresh your package cache and install them if you don’t have them:

$ sudo apt update && sudo apt install -y linux-headers-$(uname -r) linux-modules-$(uname -r)

First, you’ll have to download the kernel module sources. Here’s where they’re available:

You’ll see release names that look like w15.5.5-k5.7 – match the first three numbers to your Workstation version and the last two to your kernel; this example would be for Workstation 15.5.5 and an OS with a 5.7 kernel.

Thankfully there are plenty of releases, so you can find one that’ll work for your setup. At the time of writing there are nearly 700 releases to choose from (indicating that this has probably been a major issue for a long, long time… )
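As a quick illustration of the naming convention, the two halves of a release tag can be pulled apart with shell parameter expansion (tag taken from the example above):

```shell
tag="w15.5.5-k5.7"
ws=${tag%-k*}     # strip the kernel half  -> w15.5.5
ws=${ws#w}        # drop the leading w     -> 15.5.5
kern=${tag##*-k}  # keep what follows "-k" -> 5.7
echo "Workstation $ws, kernel $kern"   # prints: Workstation 15.5.5, kernel 5.7
```

Compare $ws against your Workstation About dialog and $kern against uname -r before downloading.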

After downloading the release, unzip or untar it and it’ll create a folder called:

vmware-host-modules-<release version>

cd into this folder and try making the .ko files:

$ cd vmmon-only && make && cd ..
$ cd vmnet-only && make && cd ..

Create two .tars:

$ tar -cf vmmon.tar vmmon-only
$ tar -cf vmnet.tar vmnet-only

Then copy them to the modules source directory for Workstation (as root):

$ sudo cp -v vmmon.tar vmnet.tar /usr/lib/vmware/modules/source/

After copying the .tar files to /usr/lib/vmware/modules/source/ run this command:

$ sudo vmware-modconfig --console --install-all

If all goes as planned, you should be able to run Workstation now.

If you notice vmware-modconfig complaining about errors / failing, you’re probably booted under Secure Boot. You can disable Secure Boot if you’d like to make this process easier next time, but if not – here are some additional steps you’ll have the pleasure of performing over and over.

Note: Keep in mind that if you upgrade your kernel you’ll absolutely have to go through this process all over again. Sometimes, but not always, updates to Workstation will require it, too.

You’re sure to get just as familiar with the process as I am before long.

Regarding those additional steps on systems using Secure Boot: they may only be necessary when changing the kernel’s release version, not its point version – so 5.8.0-19 to 5.8.0-21 might not need a mokutil update, but 5.6.2 to 5.8.0 definitely would.

Generate a key pair using openssl to sign the vmmon and vmnet modules:

$ openssl req -new -x509 -newkey rsa:2048 -keyout MOK.priv -outform DER -out MOK.der -nodes -days 36500 -subj "/CN=VMware/"

Sign the modules using the generated key:

$ sudo /usr/src/linux-headers-`uname -r`/scripts/sign-file sha256 ./MOK.priv ./MOK.der $(modinfo -n vmmon)

$ sudo /usr/src/linux-headers-`uname -r`/scripts/sign-file sha256 ./MOK.priv ./MOK.der $(modinfo -n vmnet)

Import the public key to the system’s MOK list:

$ sudo mokutil --import MOK.der

Choose a password you’ll remember for the MOK key enrollment, since the next step is rebooting and enrolling it in your UEFI firmware.

Reboot your machine. Follow the instructions to enroll the MOK key in your firmware from the UEFI console, and go back to your Ubuntu terminal and run:

$ sudo vmware-modconfig --console --install-all

That should take care of it for you.

Now just make sure you bookmark this page and the Github releases page so you can come back when you upgrade your kernel and do this all over again!

Good luck!


ESXi 6.7 – Forcefully Removing a Phantom NFS Share


Interesting thing happened to me today. I had to do some work on one of my ESXi 6.7 hosts, so I powered it off (which is something I rarely ever do). I had just done some static IPv6 configuration on my domain controllers, provisioned DNSSEC, etc., so I was playing with a relatively recently modified network.

Well, it must have been a little too different for my ESXi configuration to handle, because when I powered up my ESXi host again, two out of five of my NFS datastores couldn’t reconnect. It was a little perplexing, because ESXi doesn’t give you much of a sense of what’s going on – just a message saying:

Can’t mount NFS share – Operation failed, diagnostics report: Unable to get console path for volume

There are some logs you can check, like hostd.log, vmkernel.log, etc., but they didn’t tell me much, either. (If you want to check these, they’re in /var/log, and the most convenient way I’ve found is to use:
# tail -n <number of lines> <name of log file>)
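To narrow the output down to NFS-related lines, pipe tail through grep. Here’s the shape of the command, demonstrated against a throwaway file (on the host you’d point it at /var/log/vmkernel.log; the log lines below are simulated):

```shell
# simulated vmkernel-style excerpt – real logs live in /var/log on the host
cat > /tmp/vmkernel.demo.log <<'EOF'
cpu2:65591)WARNING: NFS: 1227: Unable to get console path for volume
cpu0:65536)ScsiDeviceIO: 2968: unrelated storage chatter
cpu1:65540)NFS: 157: Command: (mount) Server: (10.0.0.5) Path: (/tank/share)
EOF
tail -n 50 /tmp/vmkernel.demo.log | grep -i nfs
```

Only the two NFS lines survive the filter, which makes the "Unable to get console path" warning much easier to spot in a busy log.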

But here’s the thing – no matter how many times I tried to reconnect, I couldn’t. I could, however, reconnect the datastore as long as I gave it a different name.

Finally, after a while, it dawned on me that the share still existed somewhere in ESXi’s configuration – even though it did not show up in:
# esxcli storage nfs list

That was the whole reason I couldn’t re-create it – ESXi thought it was still there, although it wasn’t giving me any real indication (only the failed attempts to manually re-connect).

Tl;dr – to fix the issue:

  1. Remove any references to your phantom NFS datastore – remove any HDDs that reference it, any ISO files (connected or not), and remove VMs from your inventory that are imported from it.
  2. Remove the NFS share link from SSH using the command:
    # esxcli storage nfs remove -v <phantom datastore name>
    (note, the -v will give you a little more feedback)
  3. At this point I decided to reboot, but you can probably just restart the management agents from the DCUI instead.
  4. Now you should be able to re-connect your NFS datastore, with the same name and everything.

If you have any VMs, ISOs, vHDDs, etc. on the datastore that you need to re-connect, you should now be able to do so without any problem.

Building NetBSD pkgsrc on Ubuntu 19.10

pkgsrc is a package manager and port tree, kind of like FreeBSD ports and pkg combined.

Messing around with my Ubuntu 19.10 rpool installation (ZFS is offered by default in the desktop installer now), and I couldn’t help but think to myself, “gee, it sure would be cool if the ARC was in top like it is in FreeBSD and OmniOS.”

So I thought I’d try to compile the NetBSD version of top using pkgsrc, since I figured the BSD versions of posix software might have more resources for ZFS.

Pkgsrc isn’t used on Linux very often. Maybe for some more obscure scientific and development purposes, but not common end-users. Therefore, I haven’t seen a succinct guide on how to install it that’s palatable to most users.

There IS a version of pkgsrc compiled for RHEL7, but the toolchain is super old and it doesn’t appear to be compatible with Ubuntu 19.10. Therefore, you’ll have to compile everything (including the package manager) from source if you want to use it with a modern Linux distribution.

I’m going to include resources for the information so you can dive into it in more detail (chances are you’ll need them, given the complicated process), but the goal here is to create a start-to-finish how-to that’s easy for most people to follow, without so much info that it becomes overwhelming.

Here’s how I managed to build pkgsrc in my environment. Please note, I’m using an unprivileged (read: non-sudo) environment and built everything in my $HOME dir, but you’re welcome to put it in the default /usr directory if you want – just modify accordingly.

Full documentation for pkgsrc lives here:

TIP: Keep track of the export variables so you can add them to your user’s .profile or .bashrc files later.

Download pkgsrc using cvs – install cvs first if you don’t have it:

$ cd ~/ && cvs -q -z2 -d checkout -r pkgsrc-2020Q1 -P pkgsrc

Navigate to $HOME/pkgsrc/bootstrap and invoke:

$ unset PKG_PATH && export bootstrap_sh=/bin/bash && ./bootstrap --unprivileged

If something goes wrong, delete your work folder or set a new folder with --workdir=path/to/new/tmp/dir

Also, there’s more info here that might be helpful with the bootstrapping process specific to Linux:

After pkgsrc is done bootstrapping, navigate to $HOME/pkg – you’ll want to add $HOME/pkg/bin and $HOME/pkg/sbin to your $PATH

$ export PATH=$HOME/pkg/bin:$HOME/pkg/sbin:$PATH

Set the $PKG_PATH and the $MAKECONF:

$ export PKG_PATH=$HOME/pkg
$ export MAKECONF=$PKG_PATH/etc/
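Per the TIP earlier, these exports can go straight into your ~/.profile so the environment survives new logins – a sketch assuming the $HOME/pkg layout used in this post (the MAKECONF filename mk-user.conf is my assumption; match it to whatever you name the file in the next step):

```shell
# pkgsrc unprivileged environment – append these to ~/.profile
export PATH="$HOME/pkg/bin:$HOME/pkg/sbin:$PATH"
export PKG_PATH="$HOME/pkg"
# filename assumed – point this at your custom make config file
export MAKECONF="$PKG_PATH/etc/mk-user.conf"
```

With these in place, bmake and the pkgsrc tools are on your PATH in every new shell.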

Then, add this file to $HOME/pkg/etc/:

# configuration for non-root pkgsrc builds
# set $MAKECONF to this file for BSD makefiles (/usr/share/mk/*)
# to use it
# inspired by:
#   From: "Jeremy C. Reed" <>
#   Sender:
#   To:
#   Subject: Re: pkgsrc as non-root?
#   Date: Sat, 27 Sep 2003 21:22:04 -0700 (PDT)

GROUP!=		/usr/bin/id -gn

SU_CMD=		sh -c
DISTDIR=	${HOME}/pkg/distfiles/All
PKG_DBDIR=	${HOME}/pkg/pkgdb
PKG_TOOLS_BIN=	${HOME}/pkg/sbin
#INSTALL_DATA=	install -c -o ${BINOWN} -g ${BINGRP} -m 444
WRKOBJDIR=	${HOME}/tmp		# build here instead of in pkgsrc
OBJHOSTNAME=	yes			# use work.`hostname`

CHOWN=		true
CHGRP=		true


.-include "$PKG_PATH/etc/mk.conf"

Obviously you’ll want to edit variables that you configure differently, but this setup appears to be working well in my unprivileged config.

A note about the ALLOW_VULNERABLE_PACKAGES variable: I noticed when building the pkgsrc root that a LOT of packages were flagged vulnerable, so I decided to allow them in the repo in case I needed them (e.g. as dependencies) – I figured I could audit them later. Admittedly, it looks kind of scary, but I don’t recommend disallowing them, since having that many packages unavailable might cause all sorts of build issues. You can always audit them later.

OK, so now you can go back to $HOME/pkgsrc and build the Makefile for the repo – you have to use the bmake located in $PKG_PATH/bin for all the pkgsrc Makefiles:

$ bmake configure && bmake install

This will probably take several hours to build depending on how powerful your machine is.

Note: I tried the -j flag – for instance, assigning nproc threads to the compilation – but this did not work. I got this error both with the bmake compiled by the pkgsrc bootstrap and with the one installed from the Ubuntu repo:

ERROR: This package has set PKG_FAIL_REASON:
ERROR: [] pkgsrc does not support parallel make for the infrastructure.

On my Thinkpad T460s using a single core it’s taken all night and morning, so if you’re in a “hurry”, this process is not for you. Also, it crashed on me in GNOME Terminal, so I recommend switching to another TTY (e.g. Ctrl+Alt+F2), starting the process there, and leaving the computer alone for a day or two – coming back to check it periodically, of course. Be sure to turn suspend off in your power settings!

I’ll come back and update this with how to use the repo + pkg manager itself once it’s done. It should be pretty much ready to use after all this, and we’ll see if I’ll get my shiny new build of top w/ ARC level display (excited!).

Note: I found this compilation from Joyent specifically for Ubuntu:

It might save Ubuntu users from having to compile the framework from source themselves, but it looks like they stopped building them after this one (the latest) back in October 2019, so it might not be a good long-term solution.

Want BTRFS CoW snapshots but need to use Dropbox?

Update: Not sure when this happened, but I was checking Dropbox’s file system requirements for Linux, and apparently they DO support BTRFS once again. So the process outlined in this post is no longer necessary.

See announcement:

Old article: This is less of a write up and more of a PSA:

Want to put BTRFS on every device you have like me because you can’t get enough of CoW FS? Especially for operating systems? Because snapper? cgroups? timeshift? machinectl? The list goes on…

Use BTRFS to your heart’s content! Just create a separate partition formatted with EXT4 and mount it to your Dropbox-using user’s /home/username/Dropbox in /etc/fstab
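A sketch of what that /etc/fstab entry might look like – the device path and username here are placeholders for your own setup:

```
# dedicated EXT4 partition mounted over the Dropbox folder (example values)
/dev/sdb1  /home/username/Dropbox  ext4  defaults  0  2
```

A UUID= specifier (from blkid) is sturdier than a /dev/sdX path if your disk order ever changes.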

This can be done while initially setting up your OS, which if you’re like me you’ll be doing all sorts of finagling with your partition layout anyway, or shrink one of your partitions after the fact (/home comes to mind, since that’s where Dropbox is usually located).

I’ve done this in several builds so I can have snapshot+rollback capabilities while working around the new(ish) Dropbox file system requirements in OSes like openSUSE, Debian and Ubuntu. Admittedly, fewer of those VMs are in service now that Dropbox limits free accounts to 3 devices.

What about snapshots of your Dropbox folder, you ask? Well, Dropbox has its own file versioning system built-in that should keep your files safe from accidental deletion for at least 1 month. So if you delete anything from your EXT4 Dropbox folder, just get on the Dropbox website and click “deleted files” before it’s too late (!)

In another post, I’ll have to explain how I set up a Dropbox+Syncthing VM to share files across platforms and then propagate those files to other machines using Syncthing so as to skirt the 3-device limit.

I only use about 2GB on Dropbox in the form of PDFs and text files for work – I don’t need a $10/mo 1TB of storage, but I still have LOTS of devices I like to use. Thankfully Syncthing doesn’t have any high-falutin’ requirements for underlying file system types like Dropbox does [insert pointed stink-eye at Dropbox developers].

I’ve done this specifically to retain Dropbox notifications (changed/added file, etc.) which are easily connected to IFTTT for push notifications on cell phones. It’s not a functionality I’ve found easy to recreate using other methods.

If you want to delve deeper into the partitioning issue, here’s a thread in the BTRFS subreddit about the original topic – with an interesting alternative to creating a separate partition, a loop device!

Nerd out.