PowerBash eases some discomfort of adjusting to PowerShell for Linux users

Want to use grep or vim from your PowerShell environment? Now you can!

OK, I just stumbled across the coolest thing.

Occasionally I use PowerShell because it’s the easiest way to get batch processes accomplished on a Windows computer, or sometimes the only way to implement a feature (e.g. making folders case-sensitive to avoid naming conflicts when syncing with *nix computers).

But it’s always a bit of a pain because I have to look up ways to do basic things I can easily do in a POSIX-style environment without needing a reference (e.g. find, grep, sed, df, vim), and sometimes the PowerShell equivalents are awkward or clunky, or just not easily possible.

Enter PowerBash:

Another way of installing – import the script

From the Github repo, “PowerBash Module allows you to run bash commands and other Linux programs from the Windows Subsystem for Linux directly from PowerShell, with support for piping to and from PowerShell commands.”

Here’s the link to the repo: https://github.com/jimmehc/PowerBash

They recommend getting the script from the repo and importing it using Import-Module, but I just installed it from the PowerShell Gallery by running:
install-package powerbash
from a PS prompt.

(Note: I did try installing it from the script using Import-Module, but it points to an older WSL directory structure than the one used in 1809, so it would need some modification for more modern WSL implementations. The PSGallery package works out of the box.)

Using grep -E with find-package command in PowerShell

To install the PS Gallery and get started with the PowerShell method of package management, check this out:
https://copdips.com/2018/05/setting-up-powershell-gallery-and-nuget-gallery-for-powershell.html

It also requires WSL, and Windows build 14316 or higher.

It makes using PowerShell a lot easier when you can pull out the random nix command in a pinch!

Looks like VIM works just fine in PowerShell now…

You can still use PowerShell in msys64 (also in “Git Bash”) which gives you some of the same functionality, but PowerShell’s functionality is somewhat impaired when running inside msys64, so it’s not that great of a solution.

PowerBash starts with PowerShell and adds the Linux compatibility layer to it, rather than the other way around, so it’s inherently more friendly to your PS cmdlets. Also nice that it uses WSL for compatibility instead of msys64’s re-compiled utilities. Definitely a two-fer.

PowerBash looks in locations in the Linux Subsystem’s filesystem for programs, equivalent to the following $PATH:

/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/gcc/x86_64-linux-gnu/4.8:/usr/games

This is currently hardcoded (damn!) but that may be changed in the future (yay!).

Any existing commands available to the current PowerShell session, including EXEs in the PowerShell $env:PATH, cmdlets, functions, and aliases, will not be overridden by Linux programs. If you would like to use a Linux program of the same name instead, remove all functions, cmdlets, and aliases from your session prior to importing the module. For EXEs, you can pass the names of the programs you want overridden to the module via -ArgumentList when importing (comma separated).
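For example, forcing the WSL versions of a couple of programs might look something like this (a sketch assuming you’re importing the PowerBash.psm1 script from the repo; the program names are just illustrations):

```powershell
# Remove a built-in alias first if you want the Linux program of the same name
# (PowerBash won't override existing aliases/functions/cmdlets on its own)
Remove-Item alias:ls -ErrorAction SilentlyContinue

# For EXEs, pass the names to override via -ArgumentList (comma separated)
Import-Module .\PowerBash.psm1 -ArgumentList "grep","vim"
```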

Add podman controller to cockpit on Ubuntu 20.04 LTS

Podman is the bad-ass contender to Docker that uses OCI images

I am heartened to see podman becoming more comfortable on Ubuntu, since although I’m excited about a lot of the software coming from RHEL’s massive portfolio of acquisitions, I still prefer Ubuntu’s support length and update schedule, and find its commands and structure more familiar.

But what about the accessories that are available for podman? Would installing it on Ubuntu make it like a fish out of water?

Thankfully, no! buildah is also available in the official podman repo for Ubuntu, and apparently cockpit-podman can be installed fairly easily, too.

cockpit-podman is one of the areas in which Fedora/RHEL has an edge out of the gate, as it’s not available in Ubuntu repos or PPAs. But I thought it might be fairly easy to build since it’s a web application. Naturally, I couldn’t find any instructions on how to do it anywhere, but I managed to figure it out on a new VPS I’m setting up. So I wanted to share how I got it working, since I’m pretty sure the procedure isn’t described anywhere else yet.

Obviously, first you want to install podman and cockpit. Cockpit’s in the Ubuntu 20.04 multiverse repo, so it’s a quick:
sudo apt update && sudo apt install -y cockpit

Podman’s official documentation covers Ubuntu and only requires a few easy commands – take a trip to their installation instructions and step through them real quick before continuing here: https://podman.io/getting-started/installation.html#ubuntu

Then clone the cockpit-podman repo: https://github.com/cockpit-project/cockpit-podman

Go to your build directory and run this to clone the repo:
git clone https://github.com/cockpit-project/cockpit-podman.git

Install the build requirements (listed in cockpit-podman.spec):
sudo apt update && sudo apt install -y appstream-util
(You need build-essential for this even though it uses node – so add it to the end of that last command string if you haven’t installed it yet.)

(appstream-util should automatically install libappstream-glib8, which satisfies the libappstream-glib mentioned in the .spec file – the other prerequisites mentioned will be installed along with the cockpit meta-package.)

Also, you’ll need to have nodejs installed – but don’t use the package maintainer’s version, it’s several major versions behind. Instead, manage your node installations with nvm-sh from here:
https://github.com/nvm-sh/nvm

The installation’s pretty easy – you have to have curl for nvm to work, so this command is appropriate (install curl if you haven’t already):
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash

Quick side note about choosing a node release with nvm: If you’d rather check to see what the latest available node LTS release is when you perform these operations, it’s:
nvm ls-remote --lts
(I definitely recommend you use an LTS release, as building with a bleeding-edge release of node won’t offer any benefits and is more likely not to work)

Then installing node takes a few simple commands:
source ~/.bashrc (reload your env)
nvm install 12.18.1 (as of writing, the latest node LTS)
nvm alias default 12.18.1 (set it as the default for later)
Verify it’s working by running node --version
If for some reason it’s not working, try:
nvm use 12.18.1

Now that node is installed, navigate to your newly cloned cockpit-podman repo and run make

Then run your sudo make install and it’ll install it to your existing cockpit directory.

Now the next time you log into cockpit you’ll have a super convenient podman controller, just as if you were on a Fedora or RHEL/CentOS machine. (If you’re already logged in, obviously you’ll want to log out and back in again.)

Working example on a VPS running Ubuntu 20.04 LTS

Getting touchpad scroll to work in VMWare Workstation

Shiny new Kubuntu 20.04 dev platform

So, I develop using NodeJS, and I use Windows. Anyone who uses both together and has used Linux as a dev platform has probably realized the Windows version is a little lackluster and has edge configuration issues that make it a pain.

So I was setting up a dev environment on an ESXi host so I could have one place in which I do my development and not have to switch back and forth between Windows and Linux platforms or deal with the painfully slow WSL, Windows ‘git bash’, etc. – just do everything natively.

That was a great idea until I got it all set up and realized my touchpad wouldn’t scroll anymore.

Not just in the new Kubuntu dev VM, but everywhere! My Thinkpad’s touchpad literally would not scroll ANYWHERE, not in Ubuntu, not in Windows, nada.

I had just installed VMWare-tools 10.3.21 which as of June 2020 was still the latest edition at over a year old (surprising). However, there is a KB article in which VMWare recommends using the open-vm-tools package, which I had thought was an open-source knock-off, but through a little digging I found out it is actually an official release from VMWare that appears to have been open-sourced.

Reference: https://github.com/vmware/open-vm-tools

Thinking this might have contributed to my non-scrolling touchpad issue, I promptly uninstalled VMWare-tools and installed the open-vm-tools-desktop package.

To my surprise, my touchpad started scrolling again, both in the VM and in Windows.

So, as a PSA, anyone who’s experiencing this issue (maybe it’s a Synaptics-related thing?) try using the open-vm-tools package instead of VMWare-tools and maybe your touchpad will magically start scrolling again…

“I feel so alive!”

What’s wrong with VMWare Workstation on Ubuntu?!

Ubuntu logo

I’m not really sure why, but every time I install VMWare Workstation on Ubuntu, I can never get it to run right away. There’s another step to the setup after running the installer that might as well be documented as part of the installation process.

It’s compiling kernel modules for vmmon and vmnet – Workstation tries to compile them out of the box, but it never works. It’s missing some essential kernel module sources that are only available from GitHub (not exactly a professional look…)

I had to write a post about it since after doing it for the umpteenth time I realized this might be a problem for a lot of people that isn’t well documented and might be exceedingly confusing for first-time users.

First, you’ll have to download the kernel extensions. Here’s where they’re available:

https://github.com/mkubecek/vmware-host-modules/releases

You’ll see filenames available for download that look like w15.5.5-k5.7 – you want to match the first three numbers to your Workstation version, and the last two to your kernel – this example would be for Workstation 15.5.5 and an OS with a 5.7 kernel.

Thankfully there’s plenty of releases to find one that’ll work for your setup. At the time of writing there are nearly 700 releases to choose from (indicating that this has probably been a major issue for a long, long time…)
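Picking the right release is easy to script. A quick sketch of the matching logic (WS_VER here is an example; the tag format just mirrors the w&lt;workstation&gt;-k&lt;kernel&gt; pattern described above):

```shell
# Build the release tag to look for on the vmware-host-modules releases page
WS_VER=15.5.5                           # your Workstation version (example)
KERNEL_VER=$(uname -r | cut -d. -f1-2)  # first two components of your kernel version
echo "w${WS_VER}-k${KERNEL_VER}"
```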

After downloading the release, unzip it and it’ll create a folder called:

vmware-host-modules-<release version>

cd into this folder and create two .tars:

$ tar -cf vmmon.tar vmmon-only/
$ tar -cf vmnet.tar vmnet-only/

Then copy them to the modules source directory for Workstation (as root):

# cp -v vmmon.tar vmnet.tar /usr/lib/vmware/modules/source/

After copying the .tar files to /usr/lib/vmware/modules/source/ run this command:

# vmware-modconfig --console --install-all

If all goes as planned, you should be able to run Workstation now.

Note: Keep in mind that if you update your kernel you’ll absolutely have to go through this process all over again. Sometimes, but not always, updates to Workstation will require it, too.

You’re sure to get just as familiar with the process as I am before long.

Good luck!

Reference:

https://askubuntu.com/questions/1135343/ubuntu-19-04-vmware-kernel-modules-updater

ESXi 6.7 – Forcefully Removing a Phantom NFS Share


Interesting thing happened to me today. I had to do some work on one of my ESXi 6.7 hosts, so I powered it off (which is something I rarely ever do). I had just done some static configuration of ipv6 on my domain controllers, provisioned DNSSEC, etc. so I was playing with a relatively recently modified network.

Well, it must have been a little too different for my ESXi configuration to handle, because when I powered up my ESXi host again, two out of 5 of my NFS datastores couldn’t reconnect. It was a little perplexing because ESXi doesn’t give you much of a sense of what’s going on – just a message saying:

Can’t mount NFS share – Operation failed, diagnostics report: Unable to get console path for volume

There’s some logs you can check, like hostd.log, vmkernel.log, etc., but they didn’t tell me much, either. If you want to check these, they’re in /var/log, and the most convenient way I’ve found is to use:
# tail -n <number of lines> <name of log file>

But here’s the thing – no matter how many times I tried to reconnect, I was unable. I could, however, reconnect the datastore as long as I gave it a different name.

Finally, after a while it dawned on me that the share still existed, even though it did not show up in:
# esxcli storage nfs list

That was the whole reason I couldn’t re-create it – ESXi thought it was still there although it wasn’t giving me any real indication (only the failed attempt to manually re-connect)

Tl;dr – to Fix the issue,

  1. Remove any references to your phantom NFS datastore – remove any HDDs that reference it, any ISO files (connected or not), and remove VMs from your inventory that are imported from it.
  2. Remove the NFS share link from SSH using the command:
    # esxcli storage nfs remove -v <phantom datastore name>
    (note, the -v will give you a little more feedback)
  3. At this point I decided to reboot, but you can probably just restart the management agents from the DCUI: https://kb.vmware.com/s/article/1003490
  4. Now you should be able to re-connect your NFS datastore, with the same name and everything.

If you have any VMs, ISOs, vHDDs, etc. on the datastore that you need to re-connect, you should now be able to without any problem.

Building NetBSD pkgsrc on Ubuntu 19.10

pkgsrc is a package manager and port tree, kind of like FreeBSD ports and pkg combined.

I was messing around with my Ubuntu 19.10 rpool installation (ZFS is offered by default in the desktop installer now), and I couldn’t help but think to myself, “gee, it sure would be cool if the ARC was in top like it is in FreeBSD and OmniOS.”

So I thought I’d try to compile the NetBSD version of top using pkgsrc, since I figured the BSD versions of posix software might have more resources for ZFS.

Pkgsrc isn’t used on Linux very often. Maybe for some more obscure scientific and development purposes, but not common end-users. Therefore, I haven’t seen a succinct guide on how to install it that’s palatable to most users.

There IS a version of pkgsrc compiled for RHEL7, but the toolchain is super old and it doesn’t appear to be compatible with Ubuntu 19.10. Therefore, you’ll have to compile everything (including the package manager) from source if you want to use it with a modern Linux distribution.

I’m going to include resources for the information so you can dive into it with more detail, because chances are you’ll need it given the complicated process, but the goal here is to create a start-to-finish how-to article that’s easy for most people to follow without so much info that it becomes overwhelming.

Here’s how I managed to build pkgsrc in my environment. Please note, I’m using an unprivileged (read: non-sudo) environment and built everything in my $HOME dir, but you’re welcome to put it in the default /usr directory if you want – just modify accordingly.

Full documentation for pkgsrc lives here: https://www.netbsd.org/docs/pkgsrc/

TIP: Keep track of the export variables so you can add them to your user’s .profile or .bashrc files later.

Download pkgsrc using cvs – install cvs first if you don’t have it (Source: https://www.netbsd.org/docs/pkgsrc/getting.html):

$ cd ~/ && cvs -q -z2 -d anoncvs@anoncvs.NetBSD.org:/cvsroot checkout -r pkgsrc-2020Q1 -P pkgsrc

Navigate to $HOME/pkgsrc/bootstrap and invoke:

$ unset PKG_PATH && export bootstrap_sh=/bin/bash && ./bootstrap --unprivileged

If something goes wrong, delete your work folder or set a new folder with --workdir=path/to/new/tmp/dir

Also, there’s more info here that might be helpful with the bootstrapping process specific to Linux: https://wiki.netbsd.org/pkgsrc/how_to_use_pkgsrc_on_linux/

After pkgsrc is done bootstrapping, navigate to $HOME/pkg – you’ll want to add $HOME/pkg/bin and $HOME/pkg/sbin to your $PATH

$ export PATH=$HOME/pkg/bin:$HOME/pkg/sbin:$PATH

Set the $PKG_PATH and the $MAKECONF:

$ export PKG_PATH=$HOME/pkg
$ export MAKECONF=$PKG_PATH/etc/pkgsrc.mk.conf
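Per the tip earlier, these exports belong in your shell startup file so they survive new sessions – appending something like this works (a sketch; adjust the file name if you use .profile instead):

```shell
# Persist the unprivileged pkgsrc environment for future shell sessions
cat >> "$HOME/.bashrc" <<'EOF'
export PATH=$HOME/pkg/bin:$HOME/pkg/sbin:$PATH
export PKG_PATH=$HOME/pkg
export MAKECONF=$PKG_PATH/etc/pkgsrc.mk.conf
EOF
```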

Then, add this file named pkgsrc.mk.conf to $HOME/pkg/etc/

# pkgsrc.mk.conf
#
# configuration for non-root pkgsrc builds
# set $MAKECONF to this file for BSD makefiles (/usr/share/mk/*)
# to use it
#
# inspired by:
#   From: "Jeremy C. Reed" <reed@reedmedia.net>
#   Sender: tech-pkg-owner@NetBSD.org
#   To: tech-pkg@NetBSD.org
#   Subject: Re: pkgsrc as non-root?
#   Date: Sat, 27 Sep 2003 21:22:04 -0700 (PDT)

GROUP!=		/usr/bin/id -gn

SU_CMD=		sh -c
DISTDIR=	${HOME}/pkg/distfiles/All
PKG_DBDIR=	${HOME}/pkg/pkgdb
LOCALBASE=	${HOME}/pkg
PKG_TOOLS_BIN=	${HOME}/pkg/sbin
#INSTALL_DATA=	install -c -o ${BINOWN} -g ${BINGRP} -m 444
WRKOBJDIR=	${HOME}/tmp		# build here instead of in pkgsrc
OBJHOSTNAME=	yes			# use work.`hostname`
VARBASE=	${HOME}/var

CHOWN=		true
CHGRP=		true

ROOT_USER=	${USER}
ROOT_GROUP=	${GROUP}

BINOWN=		${USER}
BINGRP=		${GROUP}
DOCOWN=		${USER}
DOCGRP=		${GROUP}
INFOOWN=	${USER}
INFOGRP=	${GROUP}
KMODOWN=	${USER}
KMODGRP=	${GROUP}
LIBOWN=		${USER}
LIBGRP=		${GROUP}
LOCALEOWN=	${USER}
LOCALEGRP=	${GROUP}
MANOWN=		${USER}
MANGRP=		${GROUP}
NLSOWN=		${USER}
NLSGRP=		${GROUP}
SHAREOWN=	${USER}
SHAREGRP=	${GROUP}
ALLOW_VULNERABLE_PACKAGES=yes
.-include "${PKG_PATH}/etc/mk.conf"

Obviously you’ll want to edit variables that you configure differently, but this setup appears to be working well in my unprivileged config.

A note about the ALLOW_VULNERABLE_PACKAGES variable: I noticed when building the pkgsrc root there were a LOT of packages that were flagged vulnerable, so I decided to allow them in the repo in case I needed them (e.g. if they were dependencies). I figured I could audit them later if I needed to. Admittedly, it looks kind of scary, but I don’t recommend disallowing them, because having that many packages unavailable might cause all sorts of build issues. You can always audit them later.

OK, so now you can go back to $HOME/pkgsrc and build the Makefile for the repo – you have to use the bmake located in $PKG_PATH/bin to do this with all the pkgsrc Makefiles:

$ bmake configure && bmake install

This will probably take several hours to build depending on how powerful your machine is.

Note: I tried the -j flag – for instance, I tried to assign nproc number of threads to the compilation – but this did not work. I got this error with the bmake compiled by the pkgsrc bootstrap, and also the one installed from the Ubuntu repo:

ERROR: This package has set PKG_FAIL_REASON:
ERROR: [bsd.pkg.mk] pkgsrc does not support parallel make for the infrastructure.

On my Thinkpad T460s using a single core it’s taken all night and morning, so if you’re in a “hurry” this process is not for you. Also, it crashed on me in Gnome terminal, so I recommend going to another TTY (e.g. Ctrl+Alt+F2), starting the process, and leaving the computer alone for a day or two – coming back to check it periodically, of course. Be sure to turn suspend off in your power settings!

I’ll come back and update this with how to use the repo + pkg manager itself once it’s done. It should be pretty much ready to use after all this, and we’ll see if I’ll get my shiny new build of top w/ ARC level display (excited!).

Note: I found this compilation from Joyent specifically for Ubuntu: http://mail-index.netbsd.org/pkgsrc-bulk/2019/10/03/msg017826.html

It might save Ubuntu users from having to compile the framework from source themselves, but it looks like they stopped building them after this one (the latest) back in October 2019, so it might not be a good long-term solution.

Want BTRFS CoW snapshots but need to use Dropbox?

Update: Not sure when this happened, but I was checking Dropbox.com’s file system requirements for Linux, and apparently they DO support BTRFS once again. So the process outlined in this post is no longer necessary.

See announcement: https://hardware.slashdot.org/story/19/07/22/1534200/dropbox-brings-back-support-for-zfs-xfs-btrfs-and-ecryptfs-on-linux

Old article: This is less of a write up and more of a PSA:

Want to put BTRFS on every device you have like me because you can’t get enough of CoW FS? Especially for operating systems? Because snapper? cgroups? timeshift? machinectl? The list goes on…

Use BTRFS to your heart’s content! Just create a separate partition formatted with EXT4 and mount it to your Dropbox-using user’s /home/username/Dropbox in /etc/fstab
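The fstab entry itself is a plain ext4 mount – something like this (the UUID and username are placeholders; find the real UUID with blkid):

```
# /etc/fstab – dedicated EXT4 partition mounted as the Dropbox folder
UUID=0123abcd-0000-0000-0000-000000000000  /home/username/Dropbox  ext4  defaults  0  2
```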

This can be done while initially setting up your OS, which if you’re like me you’ll be doing all sorts of finagling with your partition layout anyway, or shrink one of your partitions after the fact (/home comes to mind, since that’s where Dropbox is usually located).

I’ve done this in several builds so I can have snapshot+rollback capabilities while working around the new(ish) Dropbox file system requirements in OS like OpenSUSE, Debian and Ubuntu. Admittedly, less of the VMs are in service now that Dropbox limits free accounts to 3 devices.

What about snapshots of your Dropbox folder, you ask? Well, Dropbox has its own file versioning system built-in that should keep your files safe from accidental deletion for at least 1 month. So if you delete anything from your EXT4 Dropbox folder, just get on the Dropbox website and click “deleted files” before it’s too late (!)

In another post, I’ll have to explain how I set up a Dropbox+Syncthing VM to share files across platforms and then propagate those files to other machines using Syncthing so as to skirt the 3-device limit.

I only use about 2GB on Dropbox in the form of PDFs and text files for work – I don’t need a $10/mo 1TB of storage, but I still have LOTS of devices I like to use. Thankfully Syncthing doesn’t have any high-falutin’ requirements for underlying file system types like Dropbox does [insert pointed stink-eye at Dropbox developers].

I’ve done this specifically to retain Dropbox notifications (changed/added file, etc.) which are easily connected to IFTTT for push notifications on cell phones. It’s not a functionality I’ve found easy to recreate using other methods.

If you want to delve deeper into the partitioning issue, here’s a thread in the BTRFS subreddit about the original topic – with an interesting alternative to creating a separate partition, a loop device!
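For the curious, the loop-device route sketches out roughly like this (the size and paths are examples; the mkfs/mount steps need root, so they’re shown commented):

```shell
# Create a sparse file to back an ext4 loop device for the Dropbox folder
truncate -s 10G dropbox.img
ls -lh dropbox.img

# Then, as root:
# mkfs.ext4 dropbox.img
# mount -o loop dropbox.img /home/username/Dropbox
```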

Nerd out.

Quick and dirty SSH key for pfSense / OPNsense gateway

PSA: Stop using password login for root on your router! NOW!!!

OK, now that I got your attention, here’s how I did it in Git Bash (MINGW64) on Windows LTSC. I promise this is QUICK and EASY, so there’s NO EXCUSE not to do it.

Open your terminal and run ssh-keygen – you don’t need a passphrase unless you want one (just hit enter).

Navigate to ~/.ssh/ and display your id_rsa.pub file:

avery@winsalad MINGW64 /c/users/avery/.ssh
$ cat id_rsa.pub
ssh-rsa REDACTED ... REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ... = avery@winsalad

Copy the redacted portion above from ssh-rsa to the username@computername portion (get all of it)

Log into your gateway via SSH, drop to shell and run ssh-keygen (if you haven’t already):

avery@winsalad MINGW64 /c/users/avery/.ssh
$ ssh root@gateway
Last login: Mon Dec 16 21:21:17 2019 from 192.168.1.122
----------------------------------------------
|      Hello, this is OPNsense 19.7          |         @@@@@@@@@@@@@@@
|                                            |        @@@@         @@@@
| Website:      https://opnsense.org/        |         @@@\\\   ///@@@
| Handbook:     https://docs.opnsense.org/   |       ))))))))   ((((((((
| Forums:       https://forum.opnsense.org/  |         @@@///   \\\@@@
| Lists:        https://lists.opnsense.org/  |        @@@@         @@@@
| Code:         https://github.com/opnsense  |         @@@@@@@@@@@@@@@
----------------------------------------------
*** gateway.webtool.space: OPNsense 19.7.7 (amd64/OpenSSL) ***
 LAN (vmx0)      -> v4: 192.168.1.1/24
                    v6/t6: REDACTED ... REDACTED .../64
 WAN (em0)       -> v4/DHCP4: REDACTED ... /21
                    v6/DHCP6: REDACTED ... REDACTED .../128
 HTTPS: SHA256 REDACTED ... REDACTED ...REDACTED ... REDACTED ...
               REDACTED ... REDACTED ...REDACTED ... REDACTED ...
 SSH:   SHA256 REDACTED ... REDACTED ...  (ECDSA)
 SSH:   SHA256 REDACTED ... REDACTED ...  (ED25519)
 SSH:   SHA256 REDACTED ... REDACTED ...  (RSA)
  0) Logout                              7) Ping host
  1) Assign interfaces                   8) Shell
  2) Set interface IP address            9) pfTop
  3) Reset the root password            10) Firewall log
  4) Reset to factory defaults          11) Reload all services
  5) Power off system                   12) Update from console
  6) Reboot system                      13) Restore a backup
Enter an option: 8
root@gateway:~ # ssh-keygen

Paste your id_rsa.pub to the end of ~/.ssh/authorized_keys:

root@gateway:~/.ssh # echo 'ssh-rsa  REDACTED ... REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ...REDACTED ... = avery@winsalad' >> authorized_keys
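One gotcha: sshd silently ignores authorized_keys if the permissions are too loose, so it’s worth making sure the modes are right. A local sketch of the append step (using a scratch directory here instead of the gateway’s real ~/.ssh, and a placeholder key):

```shell
# Emulate the remote side of the key install: append a pubkey to
# authorized_keys with the permissions sshd expects (700 dir, 600 file)
SSH_DIR=./demo-ssh                              # stand-in for ~/.ssh on the gateway
PUBKEY='ssh-rsa AAAAexample... avery@winsalad'  # placeholder key text
mkdir -p "$SSH_DIR" && chmod 700 "$SSH_DIR"
echo "$PUBKEY" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
```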

Navigate to your web UI and de-select “Permit password login” under SSH section (or similar – depending on your gateway of preference, I’m using OPNsense):

OPNsense gateway used as example

Note: You should probably create a new user and su to root once you’re connected via SSH, but this is the “quick and dirty” version, so we’ll save that for another day.

Open A NEW TERMINAL instance or tab (don’t log out of SSH yet!) and try logging into your gateway. I’ve found it good practice to always try things that affect SSH login in a new instance, because you never know when your settings alterations will lock you out. It’s not such a big deal in this example, but it’s a good habit to be in.

If you’ve gone through these steps properly, you should be able to log into your gateway without a password now (unless you specified a passphrase using ssh-keygen).

Now you’ll be limited to connecting via SSH only with this one machine. For additional machines, there’s several things you could do:

  • Copy the contents of your ~/.ssh folder to other machines
  • Repeat the ssh-keygen step on the next computer and copy its id_rsa.pub to the gateway’s authorized_keys again
  • My personal favorite – read this page on ssh-copy-id: https://www.ssh.com/ssh/copy-id

OK that’s all for now. Happy hashing!

Tricks for Importing VMware Workstation VMDK to ESXi 6.7

So I’ve been playing with some of the Turnkey Linux VMs, which are a nice, lightweight Debian Linux OS core packaged with purpose-built software.

They run lighttpd out of the box and have web portals, web-accessible shells, webmin for easy administration, Let’s Encrypt SSL certificate management automation, and usually some web administration specific to the purpose for which they are built.

They’re really pretty nice! Nothing too fancy as far as their looks, but they get the job done and seem very stable. They can really save a lot of time.

Turnkey Linux’s VM packages are available as iso, vmdk, qcow2, tar, and Docker images

So I was trying a couple for torrent server and openvpn, and since I run ESXi hosts, I opted for the vmdk images. I was expecting just a disk image (.vmdk), but lo-and-behold, they’re not the image files but full VMs built on VMware Workstation.

This is cool but also presents some compatibility issues, as you can’t just copy a Workstation VM to your datastore and run it like anything else. I tried it personally not knowing what I was in for, and my host couldn’t open the .VMX file when I clicked on “edit settings” (it just gets stuck at “loading”).

So OK, cool, let’s figure out how to get this working. The easiest way if you have Workstation is to download the VM locally and load it up in Workstation. Then connect to your ESXi host and click “upload”:

In Workstation go to VM –> Manage –> Upload

Then specify the host you want to send the VM to and it will automatically convert it for you.

However, it’s not without its problems. You will probably want to upgrade the VM hardware version – mine was woefully outdated, being at version 6 when the host’s capability was version 14. This prevented me from using newer device types and changing the OS to Debian 9.

But even after upgrading the VM, I still couldn’t add vmxnet3 or paravirtual scsi drivers. This I solved by cloning the VM in VCSA. Somehow, the cloned VM was able to add the vmxnet3 and paravirtual scsi drivers. I’m not sure why they weren’t just available from upgrading the VM, but it worked.

What if you don’t have a copy of VMware Workstation? Well, it’s not that hard to get around. This is how I did it the first time I tried importing a Turnkey appliance, since I already had the VM on my host’s datastore.

SSH into the host and invoke the following command:

[Host@VMdir] # vmkfstools -i HostedVirtualDisk ESXVirtualDisk

Where the HostedVirtualDisk is the one supplied by the Turnkey Appliance and ESXVirtualDisk is the output you’re going to use for your new VM.

You can read the KB on this procedure here: https://kb.vmware.com/s/article/1028943

Then just manually create a VM and import the existing disk, using the one you just output with vmkfstools.

After that, you can safely delete the files that were included with the Turnkey appliance, being careful to save your vmkfstools output vmdk file.

Also, I noticed on VMFS v6 I could not relocate the vmdk file. For some reason it had to be left in its original location (folder) where it was created. There may be a way to work around this, but it’s easy enough just to leave the dir. Maybe that’ll be for another post…

Happy virtualizing!

Asrock J4105-ITX 16GB and 32GB memory configuration tests

I saw this on a German web site called speicher.de, and since I thought it was pretty significant, I’m sharing a copy of the translation to English.

The Asrock J4105-ITX is a useful low-power board that has been confirmed to work as an ESXi IGD GPU passthrough computer with 4K@60Hz HDMI 2.0. The motherboard can be had for around $110.

The real sticking point for ESXi, besides the Realtek NIC, is: who wants to run an ESXi host that can only support up to 8GB RAM? And that is the official specification listed on the Asrock web site…

Well this group of hardware testers in Germany has laid that myth to rest by doing their own testing, and they confirm that configurations of up to 32GB work just fine. Have a look:

32GB Memory – ASRock J4105-ITX – overRAMing Test


By Jürgen Hartmann, 1 year ago

For all Gemini Lake processors, the current data sheets show a maximum RAM of only 8GB (2x 4GB). The Gemini Lake series includes the following Intel CPUs:

  • Pentium Silver J5005 (4M cache, up to 2.80 GHz)
  • Pentium Silver N5000 (4M cache, up to 2.70 GHz)
  • Celeron J4105 (4M cache, up to 2.50 GHz)
  • Celeron J4005 (4M cache, up to 2.70 GHz)
  • Celeron N4100 (4M cache, up to 2.40 GHz)

Using the example of the current ASRock motherboard J4105-ITX with the Celeron J4105 processor of the same name, we were able to prove in the overRAMing test that the values described in the specifications for maximum RAM are incorrect. In our long-duration memory test we were able to show that 32GB (2x 16GB) is possible.

Test scenario:

In our overRAMing test, we put the ASRock J4105-ITX Mini-ITX motherboard through extensive continuous testing with different memory configurations.

The following memory configurations have been tested:

  • 12GB Memory – (8GB + 4GB)
  • 16GB Memory – (8GB + 8GB)
  • 24GB Memory – (16GB + 8GB)
  • 32GB Memory – (16GB + 16GB)

The memory tests were performed with MemTest86.


The complete 32GB memory was recognized and could be fully utilized. The first initialization of the 32GB requires some patience: the ASRock J4105 Mini-ITX needs approx. 35 seconds until the system starts.

The two 16GB memory modules were written and read in the ASRock motherboard for more than 6 hours, including the “Hammer Test.” All tests were performed 3 times in a row, and none of the tested memory configurations showed an abnormality, error, or problem in the long-duration memory test.

Why the manufacturers limit the maximum RAM to only 8GB (2x 4GB) in the manuals and documentation is incomprehensible. Our tests were convincing in every configuration of the main memory.

Where are the new processors used?

The new Gemini Lake processors are used e.g. in the following motherboards / mini-PCs:

  • ASRock motherboard J4105 Mini-ITX
  • ASRock motherboard J5005 Mini-ITX
  • Fujitsu motherboard D3543-S1 / -S2 / -S3
  • Fujitsu Mini STX boards D3544-S1 / -S2
  • Intel NUC KIT NUC7CJYH
  • Intel NUC KIT NUC7PJYH
  • MSI Cubi N 8GL (MS-B171)

Based on these successful tests, we have tested and approved the 8GB and 16GB memory modules, and we sell them successfully.

Another note from the manual!

When changing the memory modules, we noticed that the BIOS of the ASRock motherboard J4105-ITX does not always recognize the new memory. In this case, the motherboard does not start anymore, and the screen stays black.

ASRock motherboard J4105-ITX

The cause is not due to the memory sizes that are used. We were also able to reproduce this phenomenon with 2GB and 4GB RAM.

The remedy here is the CLRCMOS1 function, as described in the manual under 1.3 Jumper Settings.

Quote from the ASRock manual:
CLRCMOS1 allows you to clear the data in CMOS. The data in CMOS includes system setup information such as system password, date, time, and system setup parameters … Please turn off your computer and unplug the power cord. Then you can short-circuit the solder points on CLRCMOS1 with a metal part, such as a paperclip, for 3 seconds …

Once the CMOS was cleared, any memory configuration could be installed. This was then always recognized immediately and the system booted on the 1st attempt.