ESXi 6.7 – Forcefully Removing a Phantom NFS Share


Interesting thing happened to me today. I had to do some work on one of my ESXi 6.7 hosts, so I powered it off (which is something I rarely ever do). I had just done some static IPv6 configuration on my domain controllers, provisioned DNSSEC, etc., so I was playing with a relatively recently modified network.

Well, it must have been a little too different for my ESXi configuration to handle, because when I powered up my ESXi host again, two out of five of my NFS datastores couldn’t reconnect. It was a little perplexing, because ESXi doesn’t give you much of a sense of what’s going on – just a message saying:

Can’t mount NFS share – Operation failed, diagnostics report: Unable to get console path for volume

There are some logs you can check, like hostd.log and vmkernel.log, but they didn’t tell me much, either. If you want to check them, they’re in /var/log, and the most convenient way I’ve found is:
# tail -n <number of lines> <name of log file>

But here’s the thing – no matter how many times I tried to reconnect, I was unable to. I could, however, reconnect the datastore as long as I gave it a different name.

Finally, after a while, it dawned on me that the share still existed on the host, even though it did not show up in:
# esxcli storage nfs list

That was the whole reason I couldn’t re-create it – ESXi thought it was still there, although it wasn’t giving me any real indication (only the failed attempts to manually re-connect).

TL;DR – to fix the issue:

  1. Remove any references to your phantom NFS datastore – remove any virtual disks that reference it, any ISO files (connected or not), and remove any VMs from your inventory that were imported from it.
  2. Remove the NFS share link from SSH using the command:
    # esxcli storage nfs remove -v <phantom datastore name>
    (note: -v specifies the volume name of the datastore to remove)
  3. At this point I decided to reboot, but you can probably just restart the management agents from the DCUI instead.
  4. Now you should be able to re-connect your NFS datastore, with the same name and everything.

If you have any VMs, ISOs, virtual disks, etc. on the datastore that you need to re-connect, you should now be able to do so without any problem.
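
Put together, the SSH portion of the fix looks roughly like this – the datastore name, NFS host, and export path below are placeholders, not from my actual setup:

```shell
# Confirm what ESXi thinks is mounted (the phantom share may not appear here)
esxcli storage nfs list
# Remove the stale entry by volume name
esxcli storage nfs remove -v MyNFSDatastore
# After restarting the management agents (or rebooting), re-add the share
esxcli storage nfs add -H nfs01.example.lan -s /export/MyNFSDatastore -v MyNFSDatastore
```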

Building NetBSD pkgsrc on Ubuntu 19.10

pkgsrc is a package manager and port tree, kind of like FreeBSD ports and pkg combined.

Messing around with my Ubuntu 19.10 rpool installation (ZFS is offered by default in the desktop installer now), and I couldn’t help but think to myself, “gee, it sure would be cool if the ARC was in top like it is in FreeBSD and OmniOS.”

So I thought I’d try to compile the NetBSD version of top using pkgsrc, since I figured the BSD versions of posix software might have more resources for ZFS.

Pkgsrc isn’t used on Linux very often – maybe for some more obscure scientific and development purposes, but not by common end-users – so I haven’t seen a succinct guide to installing it that’s palatable to most users.

There IS a version of pkgsrc compiled for RHEL7, but the toolchain is super old and it doesn’t appear to be compatible with Ubuntu 19.10. Therefore, you’ll have to compile everything (including the package manager) from source if you want to use it with a modern Linux distribution.

I’m going to include resources for the information so you can dive into the details, because chances are you’ll need them given the complicated process, but the goal here is to create a start-to-finish how-to article that’s easy for most people to follow without so much info that it becomes inundating.

Here’s how I managed to build pkgsrc in my environment. Please note, I’m using an unprivileged (read: non-sudo) environment and built everything in my $HOME dir, but you’re welcome to put it in the default /usr directory if you want – just modify accordingly.

Full documentation for pkgsrc lives here:

TIP: Keep track of the export variables so you can add them to your user’s .profile or .bashrc files later.

Download pkgsrc using cvs – install cvs first if you don’t have it:

$ cd ~/ && cvs -q -z2 -d checkout -r pkgsrc-2020Q1 -P pkgsrc

Navigate to $HOME/pkgsrc/bootstrap and invoke:

$ unset PKG_PATH && export bootstrap_sh=/bin/bash && ./bootstrap --unprivileged

If something goes wrong, delete your work folder or set a new folder with --workdir=path/to/new/tmp/dir

Also, there’s more info here that might be helpful with the bootstrapping process specific to Linux:

After pkgsrc is done bootstrapping, navigate to $HOME/pkg – you’ll want to add $HOME/pkg/bin and $HOME/pkg/sbin to your $PATH

$ export PATH=$HOME/pkg/bin:$HOME/pkg/sbin:$PATH

Set the $PKG_PATH and the $MAKECONF:

$ export PKG_PATH=$HOME/pkg
$ export MAKECONF=$PKG_PATH/etc/
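
Per the TIP above, here’s everything worth collecting into your user’s .profile or .bashrc. Note that the mk.conf filename for $MAKECONF is my assumption – point it at whatever file you actually create under $HOME/pkg/etc/:

```shell
# pkgsrc environment for an unprivileged $HOME install
export PATH="$HOME/pkg/bin:$HOME/pkg/sbin:$PATH"
export PKG_PATH="$HOME/pkg"
export MAKECONF="$PKG_PATH/etc/mk.conf"   # filename assumed – use your own
```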

Then, add the following file to $HOME/pkg/etc/

# configuration for non-root pkgsrc builds
# set $MAKECONF to this file for BSD makefiles (/usr/share/mk/*)
# to use it
# inspired by:
#   From: "Jeremy C. Reed" <>
#   Sender:
#   To:
#   Subject: Re: pkgsrc as non-root?
#   Date: Sat, 27 Sep 2003 21:22:04 -0700 (PDT)

GROUP!=		/usr/bin/id -gn

SU_CMD=		sh -c
DISTDIR=	${HOME}/pkg/distfiles/All
PKG_DBDIR=	${HOME}/pkg/pkgdb
PKG_TOOLS_BIN=	${HOME}/pkg/sbin
#INSTALL_DATA=	install -c -o ${BINOWN} -g ${BINGRP} -m 444
WRKOBJDIR=	${HOME}/tmp		# build here instead of in pkgsrc
OBJHOSTNAME=	yes			# use work.`hostname`

CHOWN=		true
CHGRP=		true


.-include "$PKG_PATH/etc/mk.conf"

Obviously you’ll want to edit variables that you configure differently, but this setup appears to be working well in my unprivileged config.

A note about the ALLOW_VULNERABLE_PACKAGES variable: when building the pkgsrc root, I noticed a LOT of packages were flagged as vulnerable, so I decided to allow them in the repo in case I needed them (e.g. as dependencies), figuring I could audit them later. Admittedly, it looks kind of scary, but I don’t recommend disallowing them – with that many Makefiles unavailable, it can cause all sorts of build issues. You can always audit them later.

OK, so now you can go back to $HOME/pkgsrc and build the Makefile for the repo – you have to use the bmake located in $PKG_PATH/bin to do this with all the pkgsrc Makefiles:

$ bmake configure && bmake install

This will probably take several hours to build depending on how powerful your machine is.

Note: I tried the -j flag – for instance, assigning nproc threads to the compilation – but this did not work. I got this error both with the bmake compiled by the pkgsrc bootstrap and with the one installed from the Ubuntu repo:

ERROR: This package has set PKG_FAIL_REASON:
ERROR: [] pkgsrc does not support parallel make for the infrastructure.

On my Thinkpad T460s using a single core it’s taken all night and morning, so if you’re in a “hurry” this process is not for you. Also, it crashed on me in Gnome terminal, so I recommend switching to another TTY (e.g. alt-shift-F2), starting the process, and leaving the computer alone for a day or two – checking it periodically, of course. Be sure to turn suspend off in your power settings!

I’ll come back and update this with how to use the repo + pkg manager itself once it’s done. It should be pretty much ready to use after all this, and we’ll see if I’ll get my shiny new build of top w/ ARC level display (excited!).

Note: I found this compilation from Joyent specifically for Ubuntu:

It might save Ubuntu users from having to compile the framework from source themselves, but it looks like they stopped building them after this one (the latest) back in October 2019, so it might not be a good long-term solution.

Want BTRFS CoW snapshots but need to use Dropbox?

Update: Not sure when this happened, but I was checking the Dropbox file system requirements for Linux, and apparently they DO support BTRFS once again. So the process outlined in this post is no longer necessary.

See announcement:

Old article: This is less of a write up and more of a PSA:

Want to put BTRFS on every device you have like me because you can’t get enough of CoW FS? Especially for operating systems? Because snapper? cgroups? timeshift? machinectl? The list goes on…

Use BTRFS to your heart’s content! Just create a separate partition formatted with EXT4 and mount it at your Dropbox-using user’s /home/username/Dropbox via /etc/fstab
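
A minimal /etc/fstab entry for that setup might look like this – the device node and username are placeholders; you may prefer to identify the partition by UUID (see blkid):

```
# dedicated EXT4 partition for the Dropbox folder (example values)
/dev/sdb3   /home/username/Dropbox   ext4   defaults   0   2
```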

This can be done while initially setting up your OS – if you’re like me, you’ll be doing all sorts of finagling with your partition layout anyway – or by shrinking one of your partitions after the fact (/home comes to mind, since that’s where Dropbox is usually located).

I’ve done this in several builds so I can have snapshot+rollback capabilities while working around the new(ish) Dropbox file system requirements in OSes like openSUSE, Debian and Ubuntu. Admittedly, fewer of those VMs are in service now that Dropbox limits free accounts to 3 devices.

What about snapshots of your Dropbox folder, you ask? Well, Dropbox has its own file versioning system built-in that should keep your files safe from accidental deletion for at least 1 month. So if you delete anything from your EXT4 Dropbox folder, just get on the Dropbox website and click “deleted files” before it’s too late (!)

In another post, I’ll have to explain how I set up a Dropbox+Syncthing VM to share files across platforms and then propagate those files to other machines using Syncthing so as to skirt the 3-device limit.

I only use about 2GB on Dropbox in the form of PDFs and text files for work – I don’t need a $10/mo 1TB of storage, but I still have LOTS of devices I like to use. Thankfully Syncthing doesn’t have any high-falutin’ requirements for underlying file system types like Dropbox does [insert pointed stink-eye at Dropbox developers].

I’ve done this specifically to retain Dropbox notifications (changed/added file, etc.) which are easily connected to IFTTT for push notifications on cell phones. It’s not a functionality I’ve found easy to recreate using other methods.

If you want to delve deeper into the partitioning issue, here’s a thread in the BTRFS subreddit about the original topic – with an interesting alternative to creating a separate partition, a loop device!
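
The loop-device alternative, sketched under my own assumptions – the file path and size are placeholders, and the mount step needs root, which is why it’s commented out:

```shell
# Create a sparse backing file on the BTRFS volume and format it EXT4
truncate -s 10G "$HOME/dropbox.img"
mkfs.ext4 -F -q "$HOME/dropbox.img"
# sudo mount -o loop "$HOME/dropbox.img" /home/username/Dropbox
```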

Nerd out.

Quick and dirty SSH key for pfSense / OPNsense gateway

PSA: Stop using password login for root on your router! NOW!!!

OK, now that I got your attention, here’s how I did it in Git Bash (MINGW64) on Windows LTSC. I promise this is QUICK and EASY, so there’s NO EXCUSE not to do it.

Open your terminal and run ssh-keygen – you don’t need a passphrase unless you want one (just hit enter).

Navigate to ~/.ssh/ and display your public key file:

avery@winsalad MINGW64 /c/users/avery/.ssh
$ cat

Copy the redacted portion above from ssh-rsa to the username@computername portion (get all of it)

Log into your gateway via SSH, drop to shell and run ssh-keygen (if you haven’t already):

avery@winsalad MINGW64 /c/users/avery/.ssh
$ ssh root@gateway
Last login: Mon Dec 16 21:21:17 2019 from
|      Hello, this is OPNsense 19.7          |         @@@@@@@@@@@@@@@
|                                            |        @@@@         @@@@
| Website:        |         @@@\\\   ///@@@
| Handbook:   |       ))))))))   ((((((((
| Forums:  |         @@@///   \\\@@@
| Lists:  |        @@@@         @@@@
| Code:  |         @@@@@@@@@@@@@@@
*** OPNsense 19.7.7 (amd64/OpenSSL) ***
 LAN (vmx0)      -> v4:
                    v6/t6: REDACTED ... REDACTED .../64
 WAN (em0)       -> v4/DHCP4: REDACTED ... /21
                    v6/DHCP6: REDACTED ... REDACTED .../128
               REDACTED ... REDACTED ...REDACTED ... REDACTED ...
 SSH:   SHA256 REDACTED ... REDACTED ...  (ED25519)
  0) Logout                              7) Ping host
  1) Assign interfaces                   8) Shell
  2) Set interface IP address            9) pfTop
  3) Reset the root password            10) Firewall log
  4) Reset to factory defaults          11) Reload all services
  5) Power off system                   12) Update from console
  6) Reboot system                      13) Restore a backup
Enter an option: 8
root@gateway:~ # ssh-keygen

paste your public key to the end of ~/.ssh/authorized_keys:
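
In other words, once you’re in the gateway’s shell (the key string below is just a placeholder for the line you copied from your own public key file):

```shell
# Append the copied public key and tighten permissions
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo 'ssh-rsa AAAAB3Nza...REDACTED... avery@winsalad' >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```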


Navigate to your web UI and de-select “Permit password login” in the SSH section (or similar – depending on your gateway of preference; I’m using OPNsense):

OPNsense gateway used as example

Note: You should probably create a new user and su to root once you’re connected via SSH, but this is the “quick and dirty” version, so we’ll save that for another day.

Open A NEW TERMINAL instance or tab (don’t log out of your existing SSH session yet!) and try logging into your gateway. I’ve found it good practice to always try changes that affect SSH login in a new instance, because you never know when your settings alterations will lock you out. It’s not such a big deal in this example, but it’s a good habit to be in.

If you’ve gone through these steps properly, you should be able to log into your gateway without a password now (unless you specified a passphrase using ssh-keygen).

Now you’ll be limited to connecting via SSH from only this one machine. For additional machines, there are several things you could do:

  • Copy the contents of your ~/.ssh folder to other machines
  • Repeat the ssh-keygen step on the next computer and copy the public key to the gateway’s authorized_keys again
  • My personal favorite, read this man page:

OK that’s all for now. Happy hashing!

Tricks for Importing VMware Workstation VMDK to ESXi 6.7

So I’ve been playing with some of the Turnkey Linux VMs, which are a nice, lightweight Debian Linux OS core packaged with purpose-built software.

They run lighttpd out of the box and have web portals, web-accessible shells, webmin for easy administration, Let’s Encrypt SSL certificate management automation, and usually some web administration specific to the purpose for which they are built.

They’re really pretty nice! Nothing too fancy as far as their looks, but they get the job done and seem very stable. They can really save a lot of time.

Turnkey Linux’s VM packages are available as ISO, VMDK, QCOW2, tar, and Docker images.

So I was trying a couple for a torrent server and OpenVPN, and since I run ESXi hosts, I opted for the VMDK images. I was expecting just a disk image (.vmdk), but lo and behold, they’re not bare image files but full VMs built in VMware Workstation.

This is cool but also presents some compatibility issues, as you can’t just copy a Workstation VM to your datastore and run it like anything else. I tried it personally, not knowing what I was in for, and my host couldn’t open the .vmx file when I clicked “edit settings” (it just got stuck at “loading”).

So OK, cool, let’s figure out how to get this working. The easiest way if you have Workstation is to download the VM locally and load it up in Workstation. Then connect to your ESXi host and click “upload”:

In workstation go to VM –> Manage –> Upload

Then specify the host you want to send the VM to and it will automatically convert it for you.

However, it’s not without its problems. You will probably want to upgrade the virtual hardware version – mine was woefully outdated at version 6 when the host supported version 14, which prevented me from using newer device types and changing the guest OS to Debian 9.

But even after upgrading the VM, I still couldn’t add vmxnet3 or paravirtual SCSI devices. I solved this by cloning the VM in VCSA – somehow, the cloned VM was able to add the vmxnet3 and paravirtual SCSI devices. I’m not sure why they weren’t available after just upgrading the VM, but it worked.

What if you don’t have a copy of VMware Workstation? Well, it’s not that hard to get around. This is how I did it the first time I tried importing a Turnkey appliance, since I already had the VM on my host’s datastore.

SSH into the host and invoke the following command:

[Host@VMdir] # vmkfstools -i HostedVirtualDisk ESXVirtualDisk

Where the HostedVirtualDisk is the one supplied by the Turnkey Appliance and ESXVirtualDisk is the output you’re going to use for your new VM.
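
For example, a concrete invocation from the VM’s directory on the datastore might look like this – the file names and the thin-provisioning option are my own choices, not Turnkey’s:

```shell
# Clone the Workstation-format disk into an ESXi-native, thin-provisioned one
vmkfstools -i turnkey-core.vmdk turnkey-core-esx.vmdk -d thin
```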

You can read the KB on this procedure here:

Then just manually create a VM and import the existing disk, using the one you just output with vmkfstools.

After that, you can safely delete the files that were included with the Turnkey appliance, being careful to save your vmkfstools output vmdk file.

Also, I noticed on VMFS v6 I could not re-locate the vmdk file. For some reason it had to be left in its original location (folder) where it was created. There may be a way to work around this, but it’s easy enough just to leave the dir. Maybe that’ll be for another post…

Happy virtualizing!

Asrock J4105-ITX 16GB and 32GB memory configuration tests

I saw this on a German web site, and I thought it was pretty significant, so I thought I’d share a copy of the translation to English.

The Asrock J4105-ITX is a useful low-power build that has been confirmed to work as an ESXi IGD GPU passthrough computer with 4K@60Hz HDMI 2.0. It can be had for around $110 for the motherboard.

The real sticking point for ESXi, besides the Realtek NIC, is: who wants to run an ESXi host that can only support up to 8GB of RAM? That is the official specification listed on the ASRock web site…

Well this group of hardware testers in Germany has laid that myth to rest by doing their own testing, and they confirm that configurations of up to 32GB work just fine. Have a look:

32GB Memory – ASRock J4105-ITX – overRAMing Test



By Jürgen Hartmann, 1 year ago

For all Gemini Lake processors, the current data sheets show a maximum RAM of only 8GB (2x 4GB). The Gemini Lake series includes the following Intel CPUs:

  • Pentium Silver J5005 (4M cache, up to 2.80 GHz)
  • Pentium Silver N5000 (4M cache, up to 2.70 GHz)
  • Celeron J4105 (4M cache, up to 2.50 GHz)
  • Celeron J4005 (4M cache, up to 2.70 GHz)
  • Celeron N4100 (4M cache, up to 2.40 GHz)

Using the current ASRock J4105-ITX motherboard with the Celeron J4105 processor of the same name as an example, our overRAMing test proved that the maximum-RAM values described in the specifications are incorrect: in a memory endurance test we showed that 32GB (2x 16GB) is possible.

Test scenario:

In our overRAMing test, we extensively tested the ASRock J4105-ITX Mini-ITX motherboard in continuous operation with different memory configurations.

The following memory configurations were tested:

  • 12GB memory – (8GB + 4GB)
  • 16GB memory – (8GB + 8GB)
  • 24GB memory – (16GB + 8GB)
  • 32GB memory – (16GB + 16GB)

The memory tests were performed with MemTest86.

The complete 32GB of memory was recognized and could be fully utilized. The first initialization of the 32GB requires some patience: the ASRock J4105-ITX needs approx. 35 seconds until the system starts.

The two 16GB memory modules were written and read in the ASRock motherboard for more than 6 hours, including the “Hammer Test.” All tests were performed 3 times in a row, and none of the tested memory configurations showed an abnormality, error, or problem in the endurance test.

Why the manufacturers limit the maximum RAM to only 8GB (2x 4GB) in the manuals and documentation is incomprehensible; our tests were convincing in every memory configuration.

Where are the new processors used?

The new Gemini Lake processors are used, e.g., in the following motherboards / mini-PCs:

  • ASRock motherboard J4105 Mini-ITX
  • ASRock motherboard J5005 Mini-ITX
  • Fujitsu motherboard D3543-S1 / -S2 / -S3
  • Fujitsu Mini STX boards D3544-S1 / -S2
  • Intel NUC KIT NUC7CJYH
  • Intel NUC KIT NUC7PJYH
  • MSI Cubi N 8GL (MS-B171)

Following these successful tests, the 8GB and 16GB memory modules have been tested, approved, and successfully sold by us.

Another note from the manual!

When changing memory modules, we noticed that the BIOS of the ASRock J4105-ITX motherboard does not always recognize the new memory. In this case, the motherboard no longer starts – the screen stays black.

ASRock motherboard J4105-ITX

The cause is not the memory sizes used; we were also able to reproduce this phenomenon with 2GB and 4GB RAM.

The remedy here is the CLRCMOS1 function, as described in the manual under 1.3 Jumper Settings.

Quote from the ASRock manual:
CLRCMOS1 allows you to clear the data in CMOS. The data in CMOS includes system setup information such as system password, date, time, and system setup parameters … Please turn off your computer and unplug the power cord. Then you can short-circuit the solder points on CLRCMOS1 with a metal part, such as a paperclip, for 3 seconds …

Once the CMOS was cleared, any memory configuration could be installed; it was then always recognized immediately, and the system booted on the first attempt.

Linux or MacOS lover, but have to use Windows?

editing bashrc in nano using a mingw64 (git-bash) terminal window on Windows 1809 LTSC

Make your Windows experience more nerdy!

I tend to notice Windows users fall into a few major categories:

  1. Unaware any OS besides Windows exists
  2. Uses Windows because that’s what they’re most comfortable with
  3. Doesn’t like Windows per se, but can’t afford a Mac, and doesn’t want to grapple with Linux
  4. Likes POSIX-compliant OSes better, but is trapped by vendor lock-in requiring the use of proprietary software only available on Windows

If you’re like me, you’re in the 4th category. I love package managers and bash shells, am most comfortable with *nix commands and regex, and while I think PowerShell is a step in the right direction, the syntax is unfamiliar enough that I still have to consult references in order to do most things.

But alas, I’m in an environment with proprietary Windows software. If I want to edit server settings in my Windows domain environment, there really aren’t any cross-platform tools to do it with. I could remote in using SSH and run PowerShell scripts, but that’s definitely not as easy as just using the GUI tools that are readily available to install on any Windows platform.

Well, thankfully I’ve been able to make Windows a more comfortable environment for nix users with a few add-ons that have really made life a lot easier!

In a lot of ways, it’s the best of both worlds (or having your cake and eating it too – pick your metaphor).

1) Install a package manager (or a couple!)

scoop info command showing description of the mingw package

Here’s some of what I’ve done:

I went with Scoop, which is a package manager that installs normal Windows programs through the command line.

It’s kind of interesting in that it installs programs to your user folder, negating permissions issues and UAC prompts for installing/uninstalling programs. Also, any user-specific settings will be local to each program, so they can be customized and preserved more fully than system-wide installs.

The drawback, obviously, is that if you want to install programs for multiple users, you’ll want to install them the old-fashioned way – or use Chocolatey.

Here’s an example of finding and installing QEMU using scoop:

avery@tunasalad MINGW64 /bin
$ scoop search qemu
'main' bucket:
    qemu (4.1.0-rc2)
avery@tunasalad MINGW64 /bin
$ scoop install qemu
Updating Scoop...
Updating 'extras' bucket...
 * 33f73563 oh-my-posh: Update to version 2.0.311                        2 hours ago
 * 35865425 ssh-agent-wsl: Update to version 2.4                         3 hours ago
 * 2fa7048f oh-my-posh: Update to version 2.0.307                        3 hours ago
 * 63f7fcdb gcloud: Update to version 256.0.0                            3 hours ago
 * f92493a8 kibana: Update to version 7.2.1                              4 hours ago
 * 6f9fb8d5 elasticsearch: Update to version 7.2.1                       4 hours ago
Updating 'main' bucket...
 * f8034317 aws: Update to version 1.16.209                              53 minutes ago
Scoop was updated successfully!
Installing 'qemu' (4.1.0-rc2) [64bit]
Downloading (121.3 MB)...
Checking hash of qemu-w64-setup-20190724.exe ... ok.
Extracting dl.7z ... done.
Linking ~\scoop\apps\qemu\current => ~\scoop\apps\qemu\4.1.0-rc2
Creating shim for 'qemu-edid'.
Creating shim for 'qemu-ga'.
Creating shim for 'qemu-img'.
Creating shim for 'qemu-io'.
Creating shim for 'qemu-system-aarch64'.
Creating shim for 'qemu-system-aarch64w'.
Creating shim for 'qemu-system-alpha'.
Creating shim for 'qemu-system-alphaw'.
Creating shim for 'qemu-system-arm'.
Creating shim for 'qemu-system-armw'.
Creating shim for 'qemu-system-cris'.
Creating shim for 'qemu-system-crisw'.
Creating shim for 'qemu-system-hppa'.
Creating shim for 'qemu-system-hppaw'.
Creating shim for 'qemu-system-i386'.
Creating shim for 'qemu-system-i386w'.
Creating shim for 'qemu-system-lm32'.
Creating shim for 'qemu-system-lm32w'.
Creating shim for 'qemu-system-m68k'.
Creating shim for 'qemu-system-m68kw'.
Creating shim for 'qemu-system-microblaze'.
Creating shim for 'qemu-system-microblazeel'.
Creating shim for 'qemu-system-microblazeelw'.
Creating shim for 'qemu-system-microblazew'.
Creating shim for 'qemu-system-mips'.
Creating shim for 'qemu-system-mips64'.
Creating shim for 'qemu-system-mips64el'.
Creating shim for 'qemu-system-mips64elw'.
Creating shim for 'qemu-system-mips64w'.
Creating shim for 'qemu-system-mipsel'.
Creating shim for 'qemu-system-mipselw'.
Creating shim for 'qemu-system-mipsw'.
Creating shim for 'qemu-system-moxie'.
Creating shim for 'qemu-system-moxiew'.
Creating shim for 'qemu-system-nios2'.
Creating shim for 'qemu-system-nios2w'.
Creating shim for 'qemu-system-or1k'.
Creating shim for 'qemu-system-or1kw'.
Creating shim for 'qemu-system-ppc'.
Creating shim for 'qemu-system-ppc64'.
Creating shim for 'qemu-system-ppc64w'.
Creating shim for 'qemu-system-ppcw'.
Creating shim for 'qemu-system-riscv32'.
Creating shim for 'qemu-system-riscv32w'.
Creating shim for 'qemu-system-riscv64'.
Creating shim for 'qemu-system-riscv64w'.
Creating shim for 'qemu-system-s390x'.
Creating shim for 'qemu-system-s390xw'.
Creating shim for 'qemu-system-sh4'.
Creating shim for 'qemu-system-sh4eb'.
Creating shim for 'qemu-system-sh4ebw'.
Creating shim for 'qemu-system-sh4w'.
Creating shim for 'qemu-system-sparc'.
Creating shim for 'qemu-system-sparc64'.
Creating shim for 'qemu-system-sparc64w'.
Creating shim for 'qemu-system-sparcw'.
Creating shim for 'qemu-system-tricore'.
Creating shim for 'qemu-system-tricorew'.
Creating shim for 'qemu-system-unicore32'.
Creating shim for 'qemu-system-unicore32w'.
Creating shim for 'qemu-system-x86_64'.
Creating shim for 'qemu-system-x86_64w'.
Creating shim for 'qemu-system-xtensa'.
Creating shim for 'qemu-system-xtensaeb'.
Creating shim for 'qemu-system-xtensaebw'.
Creating shim for 'qemu-system-xtensaw'.
'qemu' (4.1.0-rc2) was installed successfully!

See? That wasn’t so hard, was it?

2) Install git for Windows (and get an awesome nix-like terminal shell!)

Note: I usually install VS Code before installing git, since that way you can set it as your default editor during the git install process. Now that you have your package manager set up, you can install it by running:

$ scoop install code

Obviously git is a hugely useful program for versioning code, documents, etc., and it can be configured to work with your favorite version-tracking site, such as GitHub, Bitbucket, GitLab, et al.

Helpful info about using git:

But the beauty of the git Windows installer is that it also installs git-bash, a mintty-based terminal emulator with a very comprehensive set of POSIX tools that’ll make you feel way more at home in a Windows environment if you’re used to working on POSIX systems.

Learn more about it by checking out this discussion, “What is git-bash, anyway?”

If you have more questions about the nuts and bolts, the git-bash GitHub FAQ is a good place to start, covering things like cmd console compatibility and long path or file names.

If you REALLY want to take posix-compatibility to the next level, you could install MinGW which includes pacman package manager (from Arch Linux), makepkg, and a GNU compiler suite (!)

There’s also a 64-bit fork called MinGW-W64

If you’re having trouble keeping track of all this, check out: What is the difference between MinGW, MinGW-w64 and MinGW-builds?

I’ve tried both of these, and I personally didn’t need to compile or install packages from arch linux often enough to keep the full suite, so I usually just stick with git-bash since it’s a happy medium. But it’s pretty cool that any of that is even possible. If you’re a developer, the full development suite will probably be right up your alley. Apparently the applications you compile can even be used outside the shell (supposedly – although I wouldn’t count on it).

Some of the tools I’ve found to be a bit wonky, like “find” and “du”, but if you keep your expectations tempered I think you’ll be pleasantly surprised. I love being able to invoke “nano” or “vim” to edit text files right in the shell so much that, if nothing else worked, I’d still be fine with it.

Don’t get me started on how happy I am with “ssh” and “openssl”…

Check out a list of the /bin directory of MingW64 – I’ve listed the commands I imagine people use most often on posix platforms:


Musings about Home CCTV setup, and a review of Anpviz 5MP@15FPS POE security camera

Happy new camera watching garden grow at my modest home

This review is for the Anpviz IPC-B850W-D 5MP@15FPS security camera w/ mic and Micro-SD card support.

Full disclosure: I purchased this camera and was later reimbursed for writing this review. However, I will do my best to offer a totally objective viewpoint on the pros and cons of this model.

Since I was hired to do installation and maintenance of a TVI DVR system at a local restaurant, I’d been wanting to set up a couple cameras around my home.

I’m not so worried about theft or break-ins like the restaurant is, but even in a mellow, low-stress area like ours, recording a few key areas can still be helpful. For instance, we share a fence with a parking lot, so it gets hit by cars occasionally and we’d like to know who did it if we’re not home. Also, you never know when someone might take an Amazon delivery off your doorstep, as that is becoming common even in nice neighborhoods.

Even though I do installation professionally, I did not get the job because I’m a video security expert – I did networking for the restaurant, and they asked me if I wanted to work on the security system. I had virtually no experience with CCTV setups before starting the job, so I dove into it head-first and have been learning as I go. It’s been a great experience.

As far as DVR/NVR technology goes, I’ve found that IP cameras are better for features and quality, but TVI cameras are a lot easier to wire. However, even though wiring RG59+BNC cables is easier than terminating Cat6, the time savings is offset by the number of cameras I end up setting up, since they tend to be lower resolution.

Compared to the 1080p cameras I use with the restaurant’s TVI setup, the 5MP IP cameras I have are much more efficient. I was stunned by how much more detail there is: if it’s recording an open area, I just set up one 5MP+ camera and use digital zoom to navigate, whereas I have to use several 1080p or 720p cameras to cover the same area at the restaurant because of their resolution. Even though a single IP camera might be more expensive than a TVI camera, overall the system has been cheaper, easier to set up, and less hassle to use.

While I use almost entirely Hikvision products (and clones) for the TVI CCTV system at work, I’ve decided to stay away from the major name brands for IP cameras, because the market is moving so quickly that the newer up-and-coming companies compete much harder and produce products with more features and better value.

Of the 5-6 IP cameras I’ve tried so far, I’ve liked Anpviz best and SV3C second-best. I’d like to try Morphxstar at some point, but would be happy sticking with Anpviz going forward as they have stunning picture quality and are built like tanks.

I had been very happy with the Anpviz camera I already owned, an IPC-D350 that was an Amazon best seller, so I was excited to try another one from the same company. I also have an IP camera from SV3C, the SV-B06POE-4MP-A. The main reasons I was excited to try the review camera were:

1) Value – Anpviz makes a great product at a very affordable price. The IPC-D350 was an ‘Amazon Recommended’ camera. The IPC-B850W-D hasn’t gotten as much attention, but it’s a very similar camera picture-wise with the added benefit of having a mic and a Micro-SD card slot, at pretty much exactly the same price. Seems like a good value to me!

2) Picture and recording quality – a camera that is inexpensive but has poor picture or recording quality is a waste of money. My IPC-D350 has great picture quality and has allowed me to see detail from the front of the house to the street. I’ve only needed that single camera to identify anyone who walks anywhere in the front yard. I’m happy I have something reliable I can use to distinguish between the post carrier, family members, and Mormons when they come to knock on my door.

3) Build quality – I’ve had the IPC-B850W-D for a little over a month, and the IPC-D350 for about 6 months. Both have been stable in all sorts of weather conditions. The IPC-D350 hasn’t stalled or rebooted once in the 6+ months it’s been in use, and the IPC-B850W-D has been similarly resilient. Subjectively, they don’t feel cheap like other similarly-priced cameras, using a fair amount of metal in their build instead of being entirely plastic like most others. My SV3C, for instance, is all plastic except the base. It’s a hard ABS plastic, but it still doesn’t seem as hearty as the Anpviz cameras I’ve tried.

4) Support – Anpviz support has been great. They’re responsive and kind, and have been able to walk me through any problems I’ve had in the past.

5) Features – Anpviz cameras use a stock firmware I’ve seen in cameras from other manufacturers in the same price range, but with a few additional settings here and there that make them stand out. Also, the option to lower the sensor resolution and increase the framerate does not come at a great loss. The default of 5MP@15FPS is a good combination, and most people would be happy with that. However, if you want smoother full-motion recording, dropping from 2592×1944 (5MP) to 2688×1520 (about 4MP) will let you record at up to 25FPS without sacrificing much in the way of pixel density.

By comparison, our Hikvision DS-7332-HGHI 32-channel DVR requires us to downgrade streams from 1080P@12FPS to 720P@24FPS to get full-frame recording on any more than 8 cameras, so I was expecting a much larger sacrifice in quality!
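To put rough numbers on the resolution/framerate trade-off, here’s some back-of-the-envelope shell arithmetic using the two modes listed above (my own math, not from the camera’s docs):

```shell
# Pixels per frame in each of the two modes
full=$((2592 * 1944))      # 5MP mode, 15FPS
reduced=$((2688 * 1520))   # reduced mode, up to 25FPS

echo "5MP frame:      $full px"
echo "Reduced frame:  $reduced px"
# Per-frame detail you give up (integer percentage)
echo "Per-frame loss: $(( (full - reduced) * 100 / full ))%"
# Total pixel throughput actually goes UP in the faster mode
echo "Throughput @15FPS: $((full * 15)) px/s"
echo "Throughput @25FPS: $((reduced * 25)) px/s"
```

You only give up about 18% of per-frame detail, and the camera actually pushes more total pixels per second in the faster mode – a far gentler trade than the DVR’s 1080P-to-720P drop.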

Mounting: Mounting the camera on my shed and adjusting it was easy enough. There are three points of adjustment: 1) a 360° swivel of the base, the part with screw holes that sits against the wall; 2) an up-down/left-right joint that is limited to 90° but can be swiveled in any direction; and 3) the camera itself, which rotates 360°.

I tend to be a fan of ball-joint mounts for bullet cams when I’m first setting them up because of the ease of adjustment, but you can get the same range of motion with this arrangement of joints; it just takes a little more work. The upside of NOT having a ball joint is that ball joints tend to wear out and are more likely to become loose over time compared to discrete, range-restricted joints.

Day shot from firmware web page: Now I can watch my garden grow from anywhere!

Picture quality: I’m not sure what the difference is between the IPC-D350 and the IPC-B850W-D in terms of sensor technology, the materials over the sensor, etc., but subjectively the colors seem slightly more vibrant on the IPC-D350. It’s not something most people would notice, though, I don’t think. They look extremely similar, are equally sharp, and have essentially the same motion-recording sensitivity.

Night shot from firmware web page: Can see a lot of detail, even some across the street (top left)

Night vision: The IPC-B850W-D’s night vision is a major improvement over the IPC-D350’s, which was the only thing I didn’t like about that camera. I can see a lot further with this camera, and there is great detail rather than just cloudy grey beyond roughly 60 ft. I think it might have the clearest night vision of the 3 cameras I have now, but it’s a close call between the IPC-B850W-D and my SV3C SV-B06POE-4MP-A, which also has great night vision.

This opossum had no idea s/he was being recorded:

Night vision shot: Opossum sniffing for yummy bugs
Night vision shot: Opossum sniffing for yummy bugs (part deux)

Recording support:

Milestone Smart Client: Trying to zoom in the right amount to avoid scaling on a 1600×900 laptop

My recording platform is Milestone XProtect Essential+ 2019R1, a true enterprise solution that runs on Windows 10 or Server 2012R2 and newer. While not on the directly supported list, the IPC-B850W-D was detected using ‘easy add’ in hardware management via ONVIF without issue. The main feed, sub feed, metadata feed, and microphone outputs were all detected and record without issue.

Interestingly, Milestone detects 5 input feeds to the camera, but I’m not sure what they’re used for. I’ve noticed Milestone detects input feeds on other cameras I’ve tested, as well. At some point, I would like to know what they do, if anything. It’s possible they’re just for ONVIF compatibility, I suppose (?).

I imagine the camera should work fine with other recording platforms such as Shinobi, ZoneMinder, Blue Iris, iSpy, MotionEye, etc., but I did not go so far as to test it with any of these because I use Milestone (and am very happy with it). Chances are that if your recording platform supports ONVIF, the camera will work fine.
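If you want to sanity-check ONVIF before pointing a recorder at a camera, a minimal GetSystemDateAndTime request usually gets an answer, since it typically requires no authentication. A sketch of such a probe follows; the IP address and the /onvif/device_service path are assumptions (they’re common defaults), so adjust them for your device:

```shell
# Build a minimal ONVIF SOAP request. GetSystemDateAndTime usually needs
# no credentials, which makes it a handy reachability probe.
cat > /tmp/onvif_probe.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope">
  <s:Body>
    <GetSystemDateAndTime xmlns="http://www.onvif.org/ver10/device/wsdl"/>
  </s:Body>
</s:Envelope>
EOF

# Hypothetical camera address -- substitute your own.
CAMERA_IP=192.168.1.64

# A reachable ONVIF device should respond with its current date and time.
curl -s --connect-timeout 3 -X POST \
     -H 'Content-Type: application/soap+xml' \
     --data @/tmp/onvif_probe.xml \
     "http://$CAMERA_IP/onvif/device_service" || echo "camera not reachable"
```

If you get an XML response back, the recorder’s ONVIF auto-detection will almost certainly find the camera too.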

I do not have a commercial NVR to test the camera with, but it is advertised as being compatible with Hikvision through port 8000. I did test the Anpviz IPC-D350 with a Hikvision Turbo 1.0 DS-7332-HGHI hybrid DVR on firmware version 3.1.18, and it would record the feed, but a lot of features were mismatched because of the DVR’s age and its dated firmware. I do not believe the DVR even supports ONVIF.

If you have an old hybrid DVR, this camera will work, but be prepared for some of the settings not to map over properly. Our DVR only officially supports maybe 2-3 IP camera models, but it will record the feed of almost any Hikvision-compatible IP camera nonetheless. Additionally, viewing IP cameras from the iVMS-4500 phone app on an old hybrid DVR requires setting up a separate device for each IP camera, as they are not available in the main list along with the analog camera feeds. Just something to consider.

I’ve thought of setting up a Milestone record server and importing a TVI DVR such as the DS-7316HUI-K4 for a solution that would provide compatibility with TVI cameras, yet display all TVI and IP cameras on one page through the Milestone app, but that’s a project for another day.

From the configuration page there are recording options for MicroSD, USB (a generic firmware option, probably non-functional), and NFS. I did not test a MicroSD card, so I cannot report on that, but I did try mounting an NFS share over my network. It was a tad quirky in that when I first configured the share it wouldn’t show as mounted until I logged out and back in, but it did seem to work just fine.
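For reference, here’s a sketch of what the server side of that NFS share might look like, assuming a Linux NFS server; the path, subnet, and UID are hypothetical. Cameras often connect from a non-reserved source port and with an arbitrary UID, which is why `insecure` and `all_squash` tend to be needed:

```
# /etc/exports on a hypothetical Linux NFS server
# 'insecure'   -> accept client source ports above 1024 (common for cameras)
# 'all_squash' -> map every client UID to the anonymous user below
/srv/camera-recordings  192.168.1.0/24(rw,sync,insecure,all_squash,anonuid=1000,anongid=1000)
```

After editing, reload the export table with `exportfs -ra` before retrying the mount from the camera’s storage page.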

There is also an option to send clips to an FTP server, but it’s in the Network settings menu instead of under Storage. There’s an email alert option as well, but unlike some other cameras I’ve seen, I don’t think it can email video clips.

Intuitively, I’d think the easiest thing to use for storage would be the P2P option through Danale, a paid cloud storage service, but after having a look at their app’s ratings it seems like more trouble than it’s worth. So if you want push notifications, you should probably set up the camera to send you an email and do some extensive testing of the motion detection sensitivity to avoid false positives.

IFTTT is worth checking out for push notifications, too! I’ve managed to rig it up to push notifications for work, as well as automate all sorts of things for work and home. If you’ve never heard of it, have a look, it’s amazing:

I think Microsoft has something similar called “Flow” but I haven’t tried it yet. Hey, whatever works, right?

Browser support testing:

I noticed the IPC-B850W-D is compatible with a wide range of browsers. I tested it with the top 2 or 3 browsers on 3 common operating systems, and they all performed great. This is a real step up from older cameras that would only work in Internet Explorer, especially given how many people use Mac or Linux (or even BSD or Illumos) for their desktop or laptop instead of Windows.

Live View and all feature/configuration pages were available in the following browsers:

Windows LTSC 1809 build 17763.557:

Chrome 75.0.3770.100 (Official Build) (64-bit)
Firefox 60.7.2esr (64-bit)
Internet Explorer 11.0.130 KB4503259

Ubuntu Linux 18.04.2 kernel 4.15.0-52-generic:

Chrome 75.0.3770.100 (Official Build) (64-bit)
Firefox 67.0.4 (64-bit)

MacOS Mojave 10.14.5 Darwin Kernel Version 18.6.0 root:xnu-4903.261.4~2/RELEASE_X86_64 x86_64:

Chrome Version 75.0.3770.100 (Official Build) (64-bit)
Firefox 60.7.0esr (64-bit)
Safari Version 12.1.1 (14607.

Here are screenshots I took of the camera’s firmware webpage.

To blow up a picture, right-click and select “Open image in new tab”

Using APFS features to migrate from a larger disk to a smaller one

Resizing an APFS container via diskutil terminal command

Disclaimer: Sorry for the phone pics of my screen, I was using an installer disk for this procedure and there was no screenshot feature.

I’m migrating a MacOS install, but the source drive is 256GB and the destination is 250GB. It’s not much of a mismatch, but it still prevents a straight-across restore.

Whatever to do?

Well, if you’re using APFS (like you should be), then it’s pretty trivial.

First, erase your destination disk and set it to APFS.

The GUI Disk Utility is what I find easiest. You might have to tell the sidebar to show all devices if you see a greyed-out partition like ‘disk0s4’ and nothing else.

The nice thing about the Mojave installer is that you can have two applications open at once now, which I think is still impossible in Recovery mode. So go ahead and open a Terminal while you’re at it and keep the two displayed the whole time.

Erase it to an APFS disk, GUID should be the only partition map option (if it’s not, select GUID).

You’re welcome to use encryption or case sensitivity if you want, but I usually opt for whatever is the easiest to troubleshoot, myself. You might want to, as well, unless you have some specific reasons otherwise.

Now for some terminal diskutil commands:

Clear up diskutil list output by only selecting two drives

There are a bunch of RAM disks when in Recovery mode, so I like to list one disk at a time. The two SSDs here are disk0 and disk2, so you can clean up the display by invoking something like:

# diskutil list disk0 && diskutil list disk2

Learn from my mistakes: I actually cloned my EFI partition before I did the container resize, but I found out that you have to do it after the restore because it will be overwritten. So wait until the last step, before you restart the computer.

Then get the exact sizes of your source and destination drives with:

# diskutil info disk0 | grep Size && diskutil info disk2 | grep Size

Note: My laptop had the 250GB destination drive installed internally so it was the first drive that was detected, disk0, but these numbers (and perhaps even the partition, or ‘slice’, number s3) can change, so make sure you identify them properly because you can mess things up big time.

Then you’ll want to get the size of the APFS container itself – in this case it lives on slice 3 (s3) of each disk. The destination container’s size is our limit.

# diskutil info disk0s3 | grep Size && diskutil info disk2s3 | grep Size

Note: You can just run the first part of the command before the && if you don’t feel like comparing, just make sure it’s actually your destination container.

You can also get the container limits with:

# diskutil apfs resizeContainer disk2s3 limits

This could be helpful where the drives differ more in size, or if the source container is very full, to see whether you’re trying to shrink the container below its minimum allowed size.

Snapshots on the source drive can throw a monkey wrench in your plans

If you get a resize error, you might have to delete the snapshots on the source container: they refer to the total size of the container at the time they were taken, so they can no longer be restored if the container is shrunk.

Snapshots on the source drive, which must be mounted, can be listed and then deleted by UUID:

# diskutil apfs listSnapshots /Volumes/nameOfSourceDrive
# diskutil apfs deleteSnapshot /Volumes/nameOfSourceDrive -uuid &lt;snapshot UUID&gt;
Need capital ‘S’ in ‘Size’ and a ‘B’ at the end of the bytes

Then, when you actually resize the container, use the byte count listed by # diskutil info [destination container] | grep Size that you got earlier. I recommend copying and pasting.

# diskutil apfs resizeContainer disk2s3 249849593856B

Note: Don’t forget the ‘B’ at the end!
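Since diskutil buries the exact byte count inside parentheses on its Size line, you can also pull it out and tack on the ‘B’ programmatically. diskutil only exists on macOS, so this sketch demonstrates the extraction on a captured sample line (the byte count is the one from my resize):

```shell
# A 'Disk Size' line as diskutil info prints it (sample captured output)
sample='   Disk Size:               249.8 GB (249849593856 Bytes) (exactly 487987488 512-Byte-Units)'

# Extract the exact byte count and append the required 'B' suffix:
bytes=$(printf '%s\n' "$sample" | sed -E 's/.*\(([0-9]+) Bytes\).*/\1/')
echo "diskutil apfs resizeContainer disk2s3 ${bytes}B"
```

That echoed line is exactly the resize command above, with no chance of fat-fingering a digit.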

Then you can go back to the GUI disk utility and start the restore by selecting the destination drive and clicking restore on the top panel. Choose your source drive and it should get going.

Note: Depending on the size of your drives and the amount of information stored on the source, this can take a long time. During the procedure I was doing while writing this, two ~250GB NVMe SSDs (one connected with USB 3.0) with a source containing ~160GB of data took about 4-5 hours. It’s an exercise in patience.

Clone your EFI partition using dd:

Cloning your source drive’s EFI folder using dd

Now you can clone your EFI folder from source to destination and it won’t get overwritten:

# dd if=/dev/disk2s1 of=/dev/disk0s1 bs=512

Note: Might want to make sure all your necessary bootloader software is there by mounting the EFI on your destination drive and poking around a bit (see commands in above picture). Otherwise you won’t be booting from this drive yet.
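If you want extra assurance that the clone really is byte-identical, you can compare checksums of source and destination afterward. The real /dev/diskXsY nodes only exist on the Mac itself, so here’s the same dd-and-verify pattern demonstrated on scratch files:

```shell
# Make a stand-in 'partition' of 100 512-byte sectors
dd if=/dev/urandom of=/tmp/fake_efi_src bs=512 count=100 2>/dev/null

# Clone it exactly like the EFI slice above
dd if=/tmp/fake_efi_src of=/tmp/fake_efi_dst bs=512 2>/dev/null

# Matching checksums mean the copy is byte-identical
cksum /tmp/fake_efi_src /tmp/fake_efi_dst
```

On the real drives you’d run the checksum against the two EFI slices (with sudo) instead of the scratch files.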

Also – and this is important, there’s one last step – rebuild the kext cache on your destination drive:

# kextcache -i /Volumes/nameOfDestinationDrive

Now you should be able to disconnect all the other drives and boot from your newly cloned drive.


I heart my web host

You know what’s awesome?

Image result for squidix
Gratuitous Google Image Rip

They’re my new web host and they are the shit.

I used to host my site on (don’t laugh). I was with them for nearly 10 years, taken in by their low prices, which promptly increased over the first year. “Buy this” buttons flashed at me constantly while I tried to edit my site, and services and add-ons I knew nothing about kept appearing on my bill until suddenly I was expected to pay for them.

When the service slowed to a crawl about 4-5 years in, I mainly just kept my site for the simplest web presence. Sure, I fix laptops. Sure, I’ll scan your computer for viruses. I’ll recover lost data from a failing hard drive or upgrade you to an SSD. But make a website for you? Not on this package.

The service was so slow that when I tried to make a site showcasing ikebana arrangements, I finally gave up because page reloads were taking so long, and I told the client they should try Wix.

Every time I remember that moment, I hang my head in shame.

Well, no more. No more, I tell you!

Because I finally have *decent* hosting. Good hosting. Stellar hosting.

Squidix is a company based out of Indiana that has that midwestern touch to their service where they actually seem to give AF. And page reloads? They happen. Quickly. I haven’t noticed any page taking longer than I can tolerate easily. They pass the tolerable test.

I could throw stats at you or tell you about package costs, etc. but none of that really matters. I mean, it was all considered carefully when choosing my web host, but there’s tons of reviews out there you can find. This is more just about my *feelings*. And they are good. They are great. They are phenomenal.

I think I’m in love.

And what’s this?

They have cPanel? I guess that’s pretty standard these days, but try going 10 years without it. Try living with a crappy ‘buy-button’ based panel engineered to operate like you’re on – purchasing things left and right that you have no business getting and can’t afford – while navigating to the things you need to do and steadfastly avoiding commercial pitfalls.

Ok, so maybe this is less a charm offensive for Squidix and more of a rebuke of iPage.

Am I going to stop this stream-of-consciousness blog posting nonsense and actually learn how to set up git versioning and SSH access?

Hopefully. For all our sakes.

But seriously, if you’re looking for a web host, give them a careful look.