Linux or macOS lover, but have to use Windows?

editing bashrc in nano using a mingw64 (git-bash) terminal window on Windows 1809 LTSC

Make your Windows experience more nerdy!

I tend to notice Windows users fall into a few major categories:

  1. Unaware any OS besides Windows exists
  2. Use Windows because it's what they're most comfortable with
  3. Don't like Windows per se, but can't afford a Mac and don't want to grapple with Linux
  4. Prefer POSIX-compliant OSes, but are trapped by vendor lock-in requiring proprietary software only available on Windows

If you're like me, you're in the 4th category. I love package managers and bash shells, I'm most comfortable with *nix commands and regex, and while I think PowerShell is a step in the right direction, its syntax is unfamiliar enough that I still have to consult references to do most things.

But alas, I'm in an environment with proprietary Windows software. If I want to edit server settings in my Windows domain environment, there really aren't any cross-platform tools to do it with. I could remote in over SSH and run PowerShell scripts, but that's definitely not as easy as using the GUI tools that are readily available on any Windows platform.

Well, thankfully I've been able to make Windows a more comfortable environment for *nix users with a few add-ons that have made life a lot easier!

In a lot of ways, it's the best of both worlds (or having your cake and eating it too – pick your metaphor).

1) Install a package manager (or a couple!)

scoop info command showing description of the mingw package

Here’s some of what I’ve done:

I went with Scoop, which is a package manager that installs normal Windows programs through the command line.

It’s kind of interesting in that it installs programs to your user folder, negating permissions issues and UAC prompts for installing/uninstalling programs. Also, any user-specific settings will be local to each program, so they can be customized and preserved more fully than system-wide installs.

The drawback, obviously, is that if you want to install programs for multiple users, you'll want to install them the old-fashioned way, or use Chocolatey.

Here’s an example of finding and installing QEMU using scoop:

avery@tunasalad MINGW64 /bin
$ scoop search qemu
'main' bucket:
    qemu (4.1.0-rc2)
avery@tunasalad MINGW64 /bin
$ scoop install qemu
Updating Scoop...
Updating 'extras' bucket...
 * 33f73563 oh-my-posh: Update to version 2.0.311                        2 hours ago
 * 35865425 ssh-agent-wsl: Update to version 2.4                         3 hours ago
 * 2fa7048f oh-my-posh: Update to version 2.0.307                        3 hours ago
 * 63f7fcdb gcloud: Update to version 256.0.0                            3 hours ago
 * f92493a8 kibana: Update to version 7.2.1                              4 hours ago
 * 6f9fb8d5 elasticsearch: Update to version 7.2.1                       4 hours ago
Updating 'main' bucket...
 * f8034317 aws: Update to version 1.16.209                              53 minutes ago
Scoop was updated successfully!
Installing 'qemu' (4.1.0-rc2) [64bit]
Downloading (121.3 MB)...
Checking hash of qemu-w64-setup-20190724.exe ... ok.
Extracting dl.7z ... done.
Linking ~\scoop\apps\qemu\current => ~\scoop\apps\qemu\4.1.0-rc2
Creating shim for 'qemu-edid'.
Creating shim for 'qemu-ga'.
Creating shim for 'qemu-img'.
Creating shim for 'qemu-io'.
Creating shim for 'qemu-system-aarch64'.
Creating shim for 'qemu-system-aarch64w'.
Creating shim for 'qemu-system-alpha'.
Creating shim for 'qemu-system-alphaw'.
Creating shim for 'qemu-system-arm'.
Creating shim for 'qemu-system-armw'.
Creating shim for 'qemu-system-cris'.
Creating shim for 'qemu-system-crisw'.
Creating shim for 'qemu-system-hppa'.
Creating shim for 'qemu-system-hppaw'.
Creating shim for 'qemu-system-i386'.
Creating shim for 'qemu-system-i386w'.
Creating shim for 'qemu-system-lm32'.
Creating shim for 'qemu-system-lm32w'.
Creating shim for 'qemu-system-m68k'.
Creating shim for 'qemu-system-m68kw'.
Creating shim for 'qemu-system-microblaze'.
Creating shim for 'qemu-system-microblazeel'.
Creating shim for 'qemu-system-microblazeelw'.
Creating shim for 'qemu-system-microblazew'.
Creating shim for 'qemu-system-mips'.
Creating shim for 'qemu-system-mips64'.
Creating shim for 'qemu-system-mips64el'.
Creating shim for 'qemu-system-mips64elw'.
Creating shim for 'qemu-system-mips64w'.
Creating shim for 'qemu-system-mipsel'.
Creating shim for 'qemu-system-mipselw'.
Creating shim for 'qemu-system-mipsw'.
Creating shim for 'qemu-system-moxie'.
Creating shim for 'qemu-system-moxiew'.
Creating shim for 'qemu-system-nios2'.
Creating shim for 'qemu-system-nios2w'.
Creating shim for 'qemu-system-or1k'.
Creating shim for 'qemu-system-or1kw'.
Creating shim for 'qemu-system-ppc'.
Creating shim for 'qemu-system-ppc64'.
Creating shim for 'qemu-system-ppc64w'.
Creating shim for 'qemu-system-ppcw'.
Creating shim for 'qemu-system-riscv32'.
Creating shim for 'qemu-system-riscv32w'.
Creating shim for 'qemu-system-riscv64'.
Creating shim for 'qemu-system-riscv64w'.
Creating shim for 'qemu-system-s390x'.
Creating shim for 'qemu-system-s390xw'.
Creating shim for 'qemu-system-sh4'.
Creating shim for 'qemu-system-sh4eb'.
Creating shim for 'qemu-system-sh4ebw'.
Creating shim for 'qemu-system-sh4w'.
Creating shim for 'qemu-system-sparc'.
Creating shim for 'qemu-system-sparc64'.
Creating shim for 'qemu-system-sparc64w'.
Creating shim for 'qemu-system-sparcw'.
Creating shim for 'qemu-system-tricore'.
Creating shim for 'qemu-system-tricorew'.
Creating shim for 'qemu-system-unicore32'.
Creating shim for 'qemu-system-unicore32w'.
Creating shim for 'qemu-system-x86_64'.
Creating shim for 'qemu-system-x86_64w'.
Creating shim for 'qemu-system-xtensa'.
Creating shim for 'qemu-system-xtensaeb'.
Creating shim for 'qemu-system-xtensaebw'.
Creating shim for 'qemu-system-xtensaw'.
'qemu' (4.1.0-rc2) was installed successfully!

See? That wasn’t so hard, was it?

2) Install git for Windows (and get an awesome nix-like terminal shell!)

Note: I usually install VS Code before installing git, since that way you can set it as your default editor during git's install process. Now that you have your package manager set up, you can install VS Code by running:

$ scoop install code
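If git is already installed, you can point it at VS Code after the fact too – core.editor is a standard git setting, and the `--wait` flag keeps git blocked until you close the editor tab. The only assumption here is that the `code` shim is on your PATH (Scoop should set that up when it installs VS Code):

```shell
# Tell git to use VS Code for commit messages, interactive rebases, etc.
git config --global core.editor "code --wait"

# Verify what git will launch:
git config --global core.editor
```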

Obviously git is a hugely useful program for versioning code, documents, etc., and it can be configured to work with your favorite hosting site, such as GitHub, Bitbucket, GitLab, et al.

Helpful info about using git:

But the beauty of the git Windows installer is that it also installs git-bash, a mintty-based terminal with a very comprehensive set of POSIX tools that'll make you feel much more at home in a Windows environment if you're used to working on POSIX systems.

Learn more about it by checking out this discussion, “What is git-bash, anyway?”

If you have more questions about the nuts and bolts, the git-bash GitHub FAQ is a good place to start; it covers things like cmd console compatibility and long path or file names.

If you REALLY want to take POSIX compatibility to the next level, you could install MinGW, which includes the pacman package manager (from Arch Linux), makepkg, and a GNU compiler suite (!)

There's also a 64-bit fork called MinGW-w64.

If you’re having trouble keeping track of all this, check out: What is the difference between MinGW, MinGW-w64 and MinGW-builds?

I've tried both of these, and I personally didn't need to compile or install packages from Arch Linux often enough to keep the full suite, so I usually just stick with git-bash since it's a happy medium. But it's pretty cool that any of this is even possible. If you're a developer, the full development suite will probably be right up your alley. Apparently the applications you compile can even be used outside the shell, although I wouldn't count on it.

Some of the tools, like "find" and "du", I've found to be a bit wonky, but if you keep your expectations tempered I think you'll be pleasantly surprised. I love being able to invoke "nano" or "vim" to edit text files right in the shell so much that even if nothing else worked, I'd be fine with it.

Don’t get me started on how happy I am with “ssh” and “openssl”…

Check out a list of the /bin directory of MinGW64 – I've listed the commands I imagine people use most often on POSIX platforms:
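As a taste of what ships in that /bin, here's a completely ordinary POSIX pipeline – nothing git-bash-specific, just the standard tools (sort, uniq, and friends) you'd expect on any *nix box:

```shell
# Count duplicate lines, most frequent first – the kind of one-liner
# that "just works" in git-bash the same as on Linux or macOS.
printf 'foo\nbar\nfoo\nbaz\nfoo\n' | sort | uniq -c | sort -rn
```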


Musings about Home CCTV setup, and a review of Anpviz 5MP@15FPS POE security camera

Happy new camera watching garden grow at my modest home

This review is for the Anpviz IPC-B850W-D 5MP@15FPS security camera w/ mic and Micro-SD card support.

Full disclosure: I purchased this camera and was later reimbursed for writing this review. However, I will do my best to offer a totally objective viewpoint on the pros and cons of this model.

Ever since being hired to do installation and maintenance of a TVI DVR system at a local restaurant, I'd been wanting to set up a couple of cameras around my home.

I'm not so worried about theft or break-ins like the restaurant is, but even in a mellow, low-stress area like ours, recording a few key areas can still be helpful. For instance, we share a fence with a parking lot, so it gets hit by cars occasionally, and we'd like to know who did it if we're not home. Also, you never know when someone might take an Amazon delivery off your doorstep, as that's becoming common even in nice neighborhoods.

Even though I do installation professionally, I didn't get the job because I'm a video security expert. I did networking for the restaurant, and they asked me if I wanted to work on the security system. I had virtually no experience with CCTV setups before starting the job, so I dove in head-first and have been learning as I go. It's been a great experience.

As far as DVR/NVR technology goes, I've found that IP cameras are better for features and quality, but TVI cameras are a lot easier to wire. However, even though wiring RG59+BNC cable is easier than terminating cat6, the time savings is offset by the number of cameras I end up installing, since they tend to be lower resolution.

Compared to the 1080p cameras I use with the restaurant's TVI setup, the 5MP IP cameras I have are much more efficient. I was stunned by how much more detail there is: if it's covering an open area, I just set up one 5MP+ camera and use digital zoom to navigate, whereas at the restaurant I have to use several 1080p or 720p cameras to cover the same area because of their resolution. Even though a single IP camera might be more expensive than a TVI camera, overall the system has been cheaper, easier to set up, and less hassle to use.

While I use almost entirely Hikvision products (and clones) for the TVI CCTV system at work, I've decided to stay away from the major name brands for IP cameras, because the market is moving so quickly that the newer up-and-coming companies compete much harder and produce products with more features and better value.

Of the 5-6 IP cameras I’ve tried so far, I’ve liked Anpviz best and SV3C second-best. I’d like to try Morphxstar at some point, but would be happy sticking with Anpviz going forward as they have stunning picture quality and are built like tanks.

I had been very happy with the Anpviz camera I already owned, an IPC-D350 that was an Amazon best seller, so I was excited to try another one from the same company. I also have an IP camera from SV3C, the SV-B06POE-4MP-A. The main reasons I was excited to try the review camera were:

1) Value – Anpviz makes a great product at a very affordable price. The IPC-D350 was an ‘Amazon Recommended’ camera. The IPC-B850W-D hasn’t gotten as much attention, but it’s a very similar camera picture-wise with the added benefit of having a mic and a Micro-SD card slot, at pretty much exactly the same price. Seems like a good value to me!

2) Picture and recording quality – a camera that is inexpensive but has poor picture or recording quality is a waste of money. My IPC-D350 has great picture quality and has allowed me to see detail from the front of the house to the street. I've only needed the single camera to be able to identify anyone who walks anywhere in the front yard. I'm happy I have something reliable I can use to distinguish between the postal carrier, family members, or Mormons when they come to knock on my door.

3) Build quality – I've had the IPC-B850W-D for more than a month, and the IPC-D350 for about 6 months. Both have been stable in all sorts of weather conditions. The IPC-D350 hasn't had any issues with stalling or rebooting over the 6+ months it's been in use, and the IPC-B850W-D has been similarly resilient. Subjectively, they don't seem cheap like other similarly-priced cameras, using a fair amount of metal in their build instead of being entirely plastic like most others. My SV3C, for instance, is all plastic except the base. It's a hard ABS plastic, but it still doesn't seem as hardy as the Anpviz cameras I've tried.

4) Support – Anpviz support has been great, they’re responsive and kind, and have been able to walk me through any problems I’ve had in the past.

5) Features – Anpviz cameras use a stock firmware I've seen in cameras from other manufacturers in the same price range, but with a few additional settings here and there that make them stand out. Also, the option to trade sensor resolution for framerate doesn't come at a great loss – 5MP@15FPS is a good combination by default, and most people would be happy with that. However, if you want full-framerate recording, downgrading from 2592×1944 (5MP) to 2688×1520 will let you record up to 25FPS without sacrificing much in the way of pixel density.

Comparatively, our Hikvision DS-7332-HGHI 32-channel DVR requires us to downgrade the stream from 1080P@12FPS to 720P@24FPS to get full-frame recording on more than 8 cameras, so I was expecting a much larger sacrifice in quality!

Mounting: Mounting the camera on my shed and adjusting it was easy enough. There were three different points of adjustment – 1) A 360° swivel of the base, the part with screw holes that sits on the wall. 2) an up-down/left-right joint that is limited to 90° but can be swiveled any direction, and 3) the camera itself swivels 360°.

I tend to favor ball-joint mounts for bullet cams when first setting them up because of the ease of adjustment, but you can get the same range of motion with this number of joints; it just requires a little more work. The upside of NOT having a ball joint is that ball joints tend to wear out and are more likely to loosen over time compared to discrete, range-restricted joints.

Day shot from firmware web page: Now I can watch my garden grow from anywhere!

Picture quality: I'm not sure what the difference is between the IPC-D350 and the IPC-B850W-D in terms of sensor technology, materials over the sensor, etc., but subjectively the colors seem slightly more vibrant on the IPC-D350. It's not something most people would notice, though; they look extremely similar. Both are similarly sharp and have essentially the same motion-recording sensitivity.

Night shot from firmware web page: Can see a lot of detail, even some across the street (top left)

Night vision: The IPC-B850W-D night vision is a major improvement over the IPC-D350, which is the only thing I didn't like about that camera. I can see a lot further using this camera, and there is great detail rather than just cloudy grey beyond approximately 60 ft. I think it might be the clearest night vision of the 3 cameras I have now, but it's a close call between the IPC-B850W-D and my SV3C SV-B06POE-4MP-A, which also has great night vision.

This possum had no idea s/he was being taped:

Night vision shot: Opossum sniffing for yummy bugs
Night vision shot: Opossum sniffing for yummy bugs (part deux)

Recording support:

Milestone Smart Client: Trying to zoom in the right amount to avoid scaling on a 1600×900 laptop

My recording platform is Milestone XProtect Essential+ 2019R1, which is a true enterprise solution that runs on Windows 10 or Server 2012R2 and newer. While not on the directly supported list, the IPC-B850W-D camera was detected using 'easy add' in hardware management via ONVIF without issue. Main feed, sub feed, metadata feed, and microphone outputs were all detected and record without issue.

Interestingly, Milestone detects 5 input feeds to the camera, but I’m not sure what they’re used for. I’ve noticed Milestone detects input feeds on other cameras I’ve tested, as well. At some point, I would like to know what they do, if anything. It’s possible they’re just for ONVIF compatibility, I suppose (?).

I imagine the camera should work fine with other recording platforms such as Shinobi, ZoneMinder, Blue Iris, iSpy, MotionEye, etc., but I did not go so far as to test it with any of these because I use Milestone (and am very happy with it). Chances are that if your recording platform supports ONVIF, the camera will work fine.

I do not have a commercial NVR to test the camera with, but it is advertised as being compatible with Hikvision through port 8000. I did test the Anpviz IPC-D350 IP camera with a Hikvision Turbo 1.0 DS-7332-HGHI hybrid DVR with firmware version 3.1.18, and it would record the feed, but a lot of features were mismatched because of the age of the DVR and its dated firmware. I do not believe the DVR even supports ONVIF.

If you have an old hybrid DVR, this camera will work, but be prepared for some of the settings not to overlap properly. I know our DVR only supports maybe 2-3 IP camera models officially, but it will record the feed of most any Hikvision-compatible IP camera nonetheless. Additionally, viewing IP cameras from the IVMS-4500 phone app on an old hybrid DVR requires setting up separate devices for each IP camera, as they are not available in the main list along with the analog camera feeds. Just something to consider.

I’ve thought of setting up a Milestone record server and importing a TVI DVR such as the DS-7316HUI-K4 for a solution that would provide compatibility with TVI cameras, yet display all TVI and IP cameras on one page through the Milestone app, but that’s a project for another day.

From the configuration page there are recording options for MicroSD, USB (a generic firmware option, probably non-functional), and NFS. I did not test a micro-SD card, so I can't report on that. But I did try mounting an NFS share over my network. It was a tad quirky – when I first configured the share, it wouldn't show as mounted until I logged out and back in – but it did seem to work just fine.

There is also an option to send clips to an FTP server, but it’s in the Network settings menu instead of under storage. There’s also an email for alerts, but unlike some other cameras I’ve seen, I don’t think you can email clips of video.

Intuitively, I'd think the easiest storage option would be the P2P service through Danale, a paid cloud storage provider, but after a look at their app's ratings it seems like it might be more trouble than it's worth. So if you want push notifications, you should probably set up the camera to send you an email and do some extensive testing of the motion detection sensitivity to avoid false positives.

IFTTT is worth checking out for push notifications, too! I've managed to rig it up to push notifications for work, as well as automate all sorts of things for work and home. If you've never heard of it, have a look – it's amazing.

I think Microsoft has something similar called Flow, but I haven't tried it yet. Hey, whatever works, right?

Browser support testing:

I noticed the IPC-B850W-D camera is compatible with a wide range of browsers. I tested it with the top 2 or 3 browsers on 3 different common operating systems, and they all seemed to perform great. This is a real step up from older cameras, which would only work in Internet Explorer – a big deal given how many people use Mac or Linux (or even BSD or illumos) for their desktop or laptop instead of Windows.

Live View and all feature/configuration pages were available in the following browsers:

Windows LTSC 1809 build 17763.557:

Chrome 75.0.3770.100 (Official Build) (64-bit)
Firefox 60.7.2esr (64-bit)
Internet Explorer 11.0.130 KB4503259

Ubuntu Linux 18.04.2 kernel 4.15.0-52-generic:

Chrome 75.0.3770.100 (Official Build) (64-bit)
Firefox 67.0.4 (64-bit)

MacOS Mojave 10.14.5 Darwin Kernel Version 18.6.0 root:xnu-4903.261.4~2/RELEASE_X86_64 x86_64:

Chrome Version 75.0.3770.100 (Official Build) (64-bit)
Firefox 60.7.0esr (64-bit)
Safari Version 12.1.1 (14607.

Here are screenshots I took of the camera's firmware webpage.

To blow up a picture, right-click and select “Open image in new tab”

Using APFS features to migrate from a larger disk to a smaller one

Resizing an APFS container via diskutil terminal command

Disclaimer: Sorry for the phone pics of my screen, I was using an installer disk for this procedure and there was no screenshot feature.

I’m migrating a MacOS install but the source drive is 256GB and the destination is 250GB. It’s not too much of a mismatch, but still prevents a straight-across restore.

What ever to do?

Well, if you’re using APFS (like you should be), then it’s pretty trivial.

First, erase your destination disk and set it to APFS.

The GUI Disk Utility is what I find easiest. You might have to tell the sidebar to show all devices if you see a greyed-out partition like 'disk0s4' and nothing else.

The nice thing about the Mojave installer is you can have two applications open at once now, which I think is still impossible in Recovery mode. So go ahead and open a terminal while you’re at it and have the two displayed the whole time.

Erase it as an APFS disk; GUID should be the only partition map option (if it's not, select GUID).

You’re welcome to use encryption or case sensitivity if you want, but I usually opt for whatever is the easiest to troubleshoot, myself. You might want to, as well, unless you have some specific reasons otherwise.

Now for some terminal diskutil commands:

Clear up diskutil list output by only selecting two drives

There are a bunch of RAM disks when in recovery mode, so I like to list one disk at a time. The two SSDs here are disk0 and disk2, so you can clean up the display by invoking something like:

# diskutil list disk0 && diskutil list disk2

Learn from my mistakes: I actually cloned my efi partition before I did the container resize, but I found out that you have to do it after the restore because it’ll be overwritten. So wait until the last step before you restart the computer.

Then get the exact sizes of your source and destination drives with:

# diskutil info disk0 | grep Size && diskutil info disk2 | grep Size

Note: My laptop had the 250GB destination drive installed internally so it was the first drive that was detected, disk0, but these numbers (and perhaps even the partition, or ‘slice’, number s3) can change, so make sure you identify them properly because you can mess things up big time.

Then you'll want to get the size of the APFS container itself – in this case it's slice 3 (disk0s3 on the destination, disk2s3 on the source). This is our limit.

# diskutil info disk0s3 | grep Size && diskutil info disk2s3 | grep Size

Note: You can just run the first part of the command, before the &&, if you don't feel like comparing – just make sure it's actually your destination container.

You can also get the container limits with:

# diskutil apfs resizeContainer disk0s3 limit

This could be helpful in a scenario where the drives differ more in size, or where the source container is very full, to see whether you're trying to make the container smaller than its constraints allow.

Snapshots on the source drive can throw a monkey wrench in your plans

If you get a resize error, you might have to delete the snapshots on the source container: they refer to the total size of the container at the time they were taken, so they can no longer be restored if the container is resized.

Snapshots are deleted from the source drive (which must be mounted) by invoking:

# diskutil apfs deleteSnapshot /Volumes/nameOfSourceDrive
Need capital ‘S’ in ‘Size’ and a ‘B’ at the end of the bytes

Then, when you actually resize the container, use the byte count listed by # diskutil info [destination container] | grep Size that you got earlier. I recommend copying and pasting.

# diskutil apfs resizeContainer disk2s3 249849593856B

Note: Don’t forget the ‘B’ at the end!
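If you'd rather not hand-copy that number, you can scrape it out of the diskutil output with sed. The sample line below only mimics diskutil's "Disk Size" format (real spacing and surrounding fields vary by macOS version), so treat this as a sketch:

```shell
# Extract the exact byte count from a diskutil-style "Disk Size" line.
# On a real system you'd feed in:  diskutil info disk0s3 | grep Size
size_line='   Disk Size:  249.8 GB (249849593856 Bytes) (exactly 488002332 512-Byte-Units)'
bytes=$(printf '%s\n' "$size_line" | sed -E 's/.*\(([0-9]+) Bytes\).*/\1/')
echo "${bytes}B"   # resizeContainer wants that trailing B
```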

Then you can go back to the GUI disk utility and start the restore by selecting the destination drive and clicking restore on the top panel. Choose your source drive and it should get going.

Note: Depending on the size of your drives and the amount of information stored on the source, this can take a long time. During the procedure I was doing while writing this, two ~250GB NVMe SSDs (one connected with USB 3.0) with a source containing ~160GB of data took about 4-5 hours. It’s an exercise in patience.

Clone your EFI partition using dd:

Cloning your source drive’s EFI folder using dd

Now you can clone your EFI folder from source to destination and it won’t get overwritten:

# dd if=/dev/disk2s1 of=/dev/disk0s1 bs=512

Note: You might want to make sure all your necessary bootloader software is there by mounting the EFI partition on your destination drive and poking around a bit (see the commands in the picture above). Otherwise you won't be booting from this drive yet.
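dd itself doesn't care whether it's fed a device node or a regular file, so here's a safe way to see the same block-for-block copy in action without touching /dev (the filenames are just examples):

```shell
# Make an 8 KiB "partition image", clone it with the same bs=512
# invocation used above, then prove the copy is bit-identical.
dd if=/dev/urandom of=src.img bs=512 count=16 2>/dev/null
dd if=src.img of=dst.img bs=512 2>/dev/null
cmp -s src.img dst.img && echo identical
```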

Also – and this is important, there's one last step – rebuild the kext cache on your destination drive:

# kextcache -i /Volumes/nameOfDestinationDrive

Now you should be able to disconnect all the other drives and boot from your newly cloned drive.


I heart my web host

You know what’s awesome?

Gratuitous Google Image Rip

They’re my new web host and they are the shit.

I used to host my site on ipage (don't laugh). I was with them for nearly 10 years, taken in by low prices that promptly increased over the first year. "Buy this" buttons flashed at me constantly while I tried to edit my site, along with services and add-ons that kept appearing on my bill that I had no idea about until suddenly I was expected to pay for them.

When the service slowed to a crawl about 4-5 years in, I mainly just kept my site for the simplest web presence. Sure, I fix laptops. Sure, I’ll scan your computer for viruses. I’ll recover lost data from a failing hard drive or upgrade you to an SSD. But make a website for you? Not on this package.

The service was so slow that when I tried to make a site showcasing ikebana arrangements I finally gave up because page reloads were taking so long and told the client they should try ‘wix’.

Every time I remember that moment, I hang my head in shame.

Well, no more. No more, I tell you!

Because I finally have *decent* hosting. Good hosting. Stellar hosting.

Squidix is a company based out of Indiana with that midwestern touch to their service where they actually seem to give AF. And page reloads? They happen. Quickly. I haven't noticed any page taking longer than I can easily tolerate. They pass the tolerable test.

I could throw stats at you or tell you about package costs, etc. but none of that really matters. I mean, it was all considered carefully when choosing my web host, but there’s tons of reviews out there you can find. This is more just about my *feelings*. And they are good. They are great. They are phenomenal.

I think I’m in love.

And what’s this?

They have cPanel? I guess that's pretty standard these days, but try going 10 years without it. Try living with a crappy 'buy-button' based panel – purchasing things left and right that you have no business getting, nor can afford – while navigating to the things you need to do and steadfastly avoiding commercial pitfalls.

Ok, so maybe this is less a charm-offensive for Squidix and more of a rebuke of ipage.

Am I going to stop this stream-of-consciousness blog posting nonsense and actually learn how to set up git versioning and SSH access?

Hopefully. For all our sakes.

But seriously, if you’re looking for a web host, give them a careful look.

Hello, banana!

Gratuitous Google Image Rip

OK, so I've been fighting it, but I decided learning WordPress is a necessary evil. According to website tech stats, WordPress supposedly powers 25% of all websites. I'm not sure that's entirely relevant if 15% of those sites are as bad as mine, but I figured if even 10% are decent then it's probably worth learning.

I use the W3Techs extension in Chrome to look at what technology websites I visit use and WordPress comes up time after time. Some of them are significant players, like a lot of online newspapers and major companies. So here I am, in the middle of the bell-curve, going along with the crowd, jumping on the bandwagon and flinging myself off a bridge because my friends did it.

Actually, I can see why people like it. It seems pretty nice, and it’s easy to use. To put myself through torture, I avoided the “one-click installer” this time and downloaded a proper copy, un-tarred it, created the MySQL database and user, set permissions, etc. The hardest part was keeping track of all the extremely complex passwords required to keep this monstrosity from getting hacked, because it’s by far the most attacked platform in internet history.
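For reference, the manual install steps I just described look roughly like this in shell. This is a placeholder sketch, not a copy-paste recipe: the web root, database name, user, and password are all made up, and your host's PHP user will differ. (The download URL is WordPress's standard latest-release tarball.)

```shell
# Hypothetical manual WordPress install – adjust every name and path to your host.
curl -LO https://wordpress.org/latest.tar.gz
tar -xzf latest.tar.gz -C /var/www/          # unpacks to /var/www/wordpress

# Create the database and a dedicated user (placeholder credentials!)
mysql -u root -p <<'SQL'
CREATE DATABASE wpdb;
CREATE USER 'wpuser'@'localhost' IDENTIFIED BY 'a-long-random-password';
GRANT ALL PRIVILEGES ON wpdb.* TO 'wpuser'@'localhost';
FLUSH PRIVILEGES;
SQL

# Let the web server own the files so WordPress can write to them
chown -R www-data:www-data /var/www/wordpress
```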

While I managed to get through the install process, don’t ask me how to troubleshoot an existing installation because I don’t know. I finally gave up with the techs on my last site I imported from another web host and said I’d just install a new copy. Were those two how-to guides I had written worth keeping? Probably. Did it seem harder to preserve them than just starting over from scratch? Yes. But that’s OK. I’ll figure out how to troubleshoot this mess some day.

But that is for another day, because WordPress is all about content. Maybe now that it’s set up I’ll actually keep more records of the technical stuff I do. There’s so many laptops I’ve torn apart and bash scripts I’ve modified and rsync OS backups I’ve made that are lost in the ether forever. I do weird crap with Linux and ESXi that I don’t think is documented anywhere yet and I’m sure someone else out there might want to know how to do it someday… maybe…

Or not. But that’s what blogging is all about – the inflated sense of ego. My experience matters, dammit, because here it is, written down, displayed for all to see. Or maybe it’ll actually help someone, which is the true meaning of purpose – giving back to others when we have been able to overcome obstacles they might also encounter.

But not to get too meta, I’m probably just going to write stuff down as I go. No real project continuity I’m foreseeing, as I don’t have much of an attention span. And I’m not really a developer, I just thought the name was cute. I’d like to learn coding but I’m way more involved in doing hands-on tech support and infrastructure projects and they mainly take up all my time. But enough about me, this is really just meant to be a rambling first-post hello thing. I really should stop typing now before I get myself in any more trouble.



Install Cinnamon 3.6.3 in CentOS 7.4 1708 using EPEL-Testing repo

After being a long-time Debian user, for some reason I've been going all-in on CentOS lately. It started after I installed it as a virtualization platform with KVM on an E3 server. I just love how stable and methodically planned it is.

I've probably gone a little too far, as I've also installed it on my ThinkPad X220 laptop. It's definitely not much of a desktop OS – its packages are stable, but they're OLD. But I guess my laptop is old too, and I still love both of them. They work awesome together.

To avoid being too bored by its aged userland, I decided to breathe new life into the desktop experience by installing a new version of Cinnamon.  After poking around the Wiki for a few days, totally by accident, I noticed EPEL has a testing branch for packages that are pre-release.  A little repo package list inspection, and I noticed the pre-release version of Cinnamon they’re working on currently is 3.6.3 (!)

While installing from a testing repo is rather antithetical to the entire notion of running a 10-year service OS like CentOS, I thought for a desktop it might be fun.  I haven’t had any problems with it so far after running it a couple days (besides the little missing app here or there).  Maybe I’ve been too spoiled (ruined?) by running Debian testing for years?  Anyway, I don’t judge if you don’t…

Example for a fresh install:

Install the current version, CentOS 7.4 1708. You can start with a Minimal install or select the Gnome Desktop group during the install process. I'll explain the extra steps for Minimal just in case.

Log in as root, or su to root, then:

# yum update
# yum groupinstall "GNOME Desktop"
# yum install epel-release
# systemctl set-default graphical.target (not necessary if Gnome was installed during the install process)
# yum update
# yum install yum-utils (optional: only if yum-config-manager is not already installed)
# yum-config-manager --enable epel-testing
# yum update
# yum install cinnamon*

# cinnamon --version

If it still reports 2.8, run # yum update again.

Note: Somewhere during the process, after installing the Gnome Desktop, if you are in the GUI, open gnome-tweak-tool and select the global dark theme; this helps Gnome apps match Cinnamon apps if you use a dark theme in Cinnamon.

Select Cinnamon at the login screen, using the gear icon that appears under your name after you enter it.

Now you should have a shockingly modern-looking and acting CentOS Cinnamon desktop system.

The Gnome installed from EPEL-testing is pretty new, too (for CentOS, anyway) at 3.22.3 as of 11/21/2017. Be aware that most of the other packages have not caught up, as the libraries, kernel, etc. still originate circa 2014 (kernel 3.10.0, anybody?). But they're very, very stable and safe relative to other Linux distros, and will be supported until 2024.
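Once the dust settles, you can sanity-check what you actually ended up with. This is just a quick sketch; the cinnamon and rpm commands will only mean anything on the CentOS box itself:

```shell
# kernel: on CentOS 7, expect something in the 3.10.0 line
uname -r

# desktop bits -- these only exist once the install above has been done
cinnamon --version 2>/dev/null || echo "cinnamon not found (not on the CentOS box?)"
rpm -q cinnamon 2>/dev/null || true
```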

I can't confirm whether the older kernel is too old for newer systems (e.g. Z170, Z270, etc. chipsets), but it's worth a shot. I know RHEL backports all sorts of security updates, so they might backport compatibility, too. I have no idea, since I do not have any computers newer than 7 years old. 😛

Hope this helps someone out!


Installing IceFilms (and other) Channel(s) on your Synology DiskStation Plex Media Server

So, have you been looking for the IceFilms channel plug-in for your Synology DiskStation, but only seen Windows and Mac OSX referenced for install methods on the sites you’ve visited?

Well, look no further!

I installed IceFilms on my Synology DiskStation tonight (which, in reality, is a server I built running XPEnology, but on a software/OS-level, it isn’t any different than a DSM x86 unit).

It appears to be working without a hitch!  I know you’ve probably been dying to install IceFilms (as I know I was!), so I thought I should probably share how to do it:


This guide assumes that:

1) You know a little bit about how to navigate using a UNIX-style command-line interface.
2) You have SSH enabled on your NAS.  If you don’t, please visit this site to find out how.
3) You have a method of connecting to your SSH-enabled NAS, such as using PuTTY for Windows, or the Terminal in Mac OS or Linux.  Download PuTTY HERE.
4) You have the Plex Media Server package installed on your Synology DiskStation NAS.  If you need to install it on your NAS, please download the latest version here and install it manually as this is the best way to ensure that the package is up-to-date.  (To do this, you may have to allow plug-ins from any source in your Synology Package Manager).

Quick guide for less technically-inclined readers:   *Confirmed working!*  The easiest way to do this is to download the zip, rename IceFilms.bundle-master to IceFilms.bundle, move it to your Plex Media Server Plug-ins directory, and restart your Plex Server.  Here's a quick walk-through:

Step one:  Navigate to the Git repository for the IceFilms.bundle package:

Step two: Click on “Download ZIP” at the bottom-right corner of the page.  Unzip this and it will give you a folder called IceFilms.bundle-master  

Step three:  Rename the IceFilms.bundle-master folder to IceFilms.bundle and copy it to your Synology DiskStation.  We'll use your home folder (e.g. /volume1/homes/username/) as an example; the plug-in's final destination is inside the Plex user folder, under the path /volume1/Plex/.

Step four:  After copying the IceFilms.bundle package to your home folder, open your terminal and navigate to your server by typing ssh root@(server address):

Quick tip:  Log in to the Synology over SSH using your admin password.

Navigate to your home directory (e.g. /volume1/homes/username/ ) and move the .bundle file to the Plex Library folder:

NA$ mv /volume1/homes/avery/IceFilms.bundle/ "/volume1/Plex/Library/Application Support/Plex Media Server/Plug-ins/"

Notes: You can hit "tab" in most terminals to auto-complete the name of a directory.  A backslash before a space escapes it (e.g. Application\ Support); alternatively, wrap the whole path in double quotes.
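If the escaping trips you up, here's a throwaway demonstration you can run anywhere; the directory names below only mimic the real Plex layout:

```shell
# create a scratch tree with spaces in its path, plus a fake bundle
mkdir -p "demo/Application Support/Plex Media Server/Plug-ins"
mkdir -p IceFilms.bundle

# quoted form: the whole path is one argument
mv IceFilms.bundle "demo/Application Support/Plex Media Server/Plug-ins/"

# backslash-escaped form refers to the same path
ls demo/Application\ Support/Plex\ Media\ Server/Plug-ins/
```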

Step five:  Navigate to your Synology DiskStation manager in your browser and open Package Center.  Navigate to your installed packages and click on Plex Media Server.  Under the Actions drop-down menu, choose “Stop”.  Then click the Actions menu again and choose “Run”.

Now, if all went well, you should be able to go to your Plex Media Center in your browser, or Plex Media Theater, etc. and see “IceFilms” as one of your channels.

It’s as simple as that!

TIP:  Look in ReallyFuzzy’s GitHub page to find other plug-ins, as well!

Now, here’s the more involved method I used (I can personally verify that both of these methods work):

Caveats: This more complicated guide is only useful for those who
1) Have an interest in obtaining and installing bootstrapper and ipkg package management functionality for their Synology DiskStation
2) Have an interest in having Git clone functionality for their Synology DiskStation

If neither of these apply to you, I suggest you just follow the easy guide above!

Install Git on your Synology NAS by using ipkg.  This will enable the ability to get the latest build of the IceFilms.bundle file required for the IceFilms Plex Media Server channel.  

Note:  Installing and using the ipkg package management infrastructure is beyond the scope of this post.  To learn more about using a bootstrapper and ipkg on your system, please visit the Synology wiki article "Overview on modifying the Synology Server, bootstrap, ipkg etc".

I also found this forum post helpful (note the post by “LittleLama”):

I prefer using Nano to VI, which you can install using ipkg prior to doing the steps explained in the link above for editing your configuration files.  Also, I prefer using the command EXPORT for setting the env for the ipkg path, rather than rebooting to acquire the PATH variable (as recommended in the LittleLama post I referenced).

Try it by entering the following and you should be able to run ipkg from anywhere:

NA$ export PATH=/opt/bin:/opt/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/syno/bin:/usr/syno/sbin:/usr/local/bin:/usr/local/sbin

Note: I am using NA$ to signify the command prompt, as the angle bracket is not displayed properly in Blogger; on your NAS, you will see an angle bracket at the end of your server name in the command prompt display.

If you use export, you do *not* need to reboot in order to use ipkg.  Then invoke:

NA$ ipkg update

and then:

NA$ ipkg upgrade

Install Bash for a more familiar command line interface:

NA$ ipkg install bash

then run it by invoking:

NA$ bash

Now you will see the # sign as your command prompt, which signifies that you have root access (note – you already HAD root access, but this is the more common convention seen in *NIX OS).

Add a couple of lines to your /etc/profile to log in with Bash next time you SSH to the server.  Invoke:

# nano /etc/profile

and adding a line that says the following:

[ -e /opt/bin/bash ] && /opt/bin/bash

Also add the line

export PATH=/opt/bin:/opt/sbin:$PATH

above the bash line (so the Bash session inherits it), to avoid having to "export PATH=" next time you log in — make sure you delete any other references to PATH in this file (there was one at the end of mine).
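For reference, the tail end of /etc/profile might end up looking something like this (exact PATH contents to taste); note that the export comes before the bash line, so the new shell inherits it:

```shell
# make ipkg and the other Optware tools available at login
export PATH=/opt/bin:/opt/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/syno/bin:/usr/syno/sbin:/usr/local/bin:/usr/local/sbin

# drop into bash automatically if the Optware bash is installed
[ -e /opt/bin/bash ] && /opt/bin/bash
```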

Quick tip:  You can also have Bash show you the current directory you’re in by invoking:

# nano /root/.bashrc

and adding a line that says

PS1='\w$ '

Bind the /volume1/@optware directory to your /opt folder:

# nano /etc/fstab

Add the following:

/volume1/@optware /opt none defaults,bind 0 0

Exit by pressing CTRL-X and save changes.

And permit a user environment in your /etc/sshd_config

# nano /etc/ssh/sshd_config

Find the line that says #PermitUserEnvironment no by pressing CTRL-W and typing in part of the string.  Uncomment it by removing the # sign and enable it by changing no to yes.  Exit by pressing CTRL-X and save changes.
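If you'd rather script the change than edit it in nano, a sed one-liner can do it. Here it is demonstrated on a scratch copy of the file; on the NAS you would point it at the real /etc/ssh/sshd_config (after backing it up!):

```shell
# scratch copy standing in for /etc/ssh/sshd_config
printf '#PermitUserEnvironment no\n' > sshd_config.demo

# uncomment the directive and flip it to "yes" in one step
sed -i 's/^#\{0,1\}PermitUserEnvironment no/PermitUserEnvironment yes/' sshd_config.demo

cat sshd_config.demo
```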

Restart SSHD

# synoservicectl --reload sshd

And you should be good to go!

Now for the fun stuff!

Log back into your DiskStation using SSH (same as above)

Install Git by using Ipkg to download the package:

# ipkg install git

Navigate to the Plex Plug-ins directory by invoking:

# cd "/volume1/Plex/Library/Application Support/Plex Media Server/Plug-ins/"

Then clone the IceFilms.bundle file by invoking: 

# git clone

Note: You can copy the URL from the bottom-right corner of the github page:

Verify that the IceFilms.bundle file has been copied by invoking:

# ls 

You should see IceFilms.bundle listed amongst the other plug-ins (.bundle files) in your plug-ins directory. 

Now go to your DSM home page and open “Package Center”.  Navigate to your installed packages and click on Plex Media Server.  Under the Actions drop-down menu, choose “Stop”.  Then click the Actions menu again and choose “Run”.  

Navigate to your Plex Media Center and look at your channels.  You should see that IceFilms has been installed.

This is the best way to install plug-ins to Plex on your Synology DiskStation, as now you can simply invoke # git clone [URL] to clone any package you want!  Just don’t forget that you have to restart your Plex Media Server before using the new plug-ins.
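The nice part is that updating later is just a git pull per plug-in. Here's a runnable sketch of that idea; the upstream-demo repository is a local stand-in for a real GitHub URL, so you can try the loop anywhere:

```shell
# stand-in for "git clone <plug-in repo URL>": a tiny local repository
git init -q upstream-demo
git -C upstream-demo -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"
git clone -q upstream-demo IceFilms.bundle

# pull the latest version of every cloned plug-in (anything with a .git dir);
# on the NAS you would run this loop from the real Plug-ins directory
for d in */; do
  [ -d "$d/.git" ] && git -C "$d" pull -q 2>/dev/null || true
done

ls -d *.bundle
```

Remember to stop and restart Plex Media Server afterwards, just as with a fresh clone.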

Any questions?  Want any further explanations?  Please leave a comment!  
Thanks for reading!

Compiling Gnumeric on Mac OS 10.9.5

Note: The guide below has been deprecated; now you can just install MacPorts and run:

$ sudo port install gnumeric +quartz

It still installs some X.org dependencies, but it links them to the Cocoa interface.


Compiling Gnumeric on Mac OS 10.9.5 using Cocoa instead of X11

Note:  No X11 was used in the installation of this Gnumeric port!  While you may be instructed to install XQuartz at some point during your MacPorts experience, it is NOT necessary for this particular package (if you follow the instructions properly).  

Hi everybody,

So, if you know anything about *NIX spreadsheet programs, or you are a hardcore statistics nerd, you are probably familiar with Gnumeric.

I like to run several operating systems, and one of my favorites is Mac OS.  But I like Gnumeric quite a bit, and as far as I know there has never been an installer package for a native Mac OS port of Gnumeric.

There IS a how-to on how to compile Gnumeric using MacPorts, though!  However, it’s a little outdated:

Open Source Packages for the Macintosh 

I was really glad to see this as a starting place, as it has some info about how to use flags when porting software to compile against Quartz instead of X11 (thus avoiding the nasty X11 interface and staying more overall "Mac-looking").

Following the directions on the site did not work for me, however — so here’s an update to help people with more modern Mac OS versions (I did this under Mac OS 10.9.5 Mavericks with MacPorts 2.3.3).

MacPorts Quickstart Guide

Installing and using MacPorts is beyond the scope of this article, but it's not too hard to do – just make sure you have the latest (or most appropriate) version of Xcode, including the command-line tools.  You can check to see if you have the Xcode command-line tools by opening a terminal and invoking the command:

$ gcc

After a brief pause, you should see the following:


clang: error: no input files

So far, so good!  Now install MacPorts using the Quickstart guide link provided above, and you should be in business.

After you've installed MacPorts, you're going to need to install some dependencies.  Go to the terminal prompt again and type:

$ sudo port install -v cairo +quartz+no_x11

You should see MacPorts downloading and installing the cairo package on your behalf.  I like to use the -v flag so it returns a little more information, but it's not necessary.

The +quartz+no_x11 flags tell the packages to build such that they do not need to use XQuartz (which is rather important, since you probably don't have it installed!  X11 is not included by default on Mac OS versions released after Mountain Lion).

If that all worked properly, next we're going to want to install some more dependencies.  Go back to your prompt and invoke the following:

$ sudo port uninstall -f pango

We're going to remove pango, since it was installed with X11 dependencies.  You cannot install pango using the +quartz+no_x11 flags unless this package has been uninstalled, and to make it uninstall without removing its dependents (a hassle), you use the -f flag to force the uninstall.

Now that pango's been removed, let's re-install it using our +quartz+no_x11 flags by invoking:

$ sudo port install -v pango +quartz+no_x11

There, now pango's been installed the way we would like: using Quartz instead of X11.

Now do the same for GTK3.  Remove it by invoking the following:

$ sudo port uninstall gtk3

That will get rid of the X11-dependent gtk3 engine, so we can re-install it with Quartz dependencies by invoking, yep, you guessed it:

$ sudo port install -v gtk3 +quartz+no_x11

Starting to get the idea?  We're making sure this Gnumeric install is going to depend on Quartz, and not X11, so it's got the most Mac-like appearance.

Now let's install two more dependencies before building Gnumeric.  Go to your prompt and invoke:

$ sudo port install -v poppler +quartz+no_x11

to install Poppler, which is used to render PDFs inside Gnumeric, and

$ sudo port install -v gnome-themes-standard +quartz+no_x11

which is necessary for installing adwaita-icon-theme, icon-naming-utils, and other appearance-related dependencies.

Note:  This package installs GTK2 as well, but since we invoked the install, it should be compiled with the same +quartz+no_x11 flags as its parent.

Now that you've got the dependencies installed, it's time to compile Gnumeric!

$ sudo port install -v gnumeric +quartz+no_x11

It doesn't take that long compared to the GTK+ packages you installed earlier.  After it's done compiling, invoke:

$ gnumeric

to give it a spin!

OK, now if it's working, here's a quick-and-dirty way to create a link so that you can open it via an icon.


Go back to your terminal and invoke:

$ ln -s /opt/local/bin/gnumeric /Applications/Gnumeric

Navigate to your Applications folder via the Finder and you should see a blank icon named “Gnumeric”.

Double click on the icon, and you should be prompted with a dialog asking you to choose an application to run the program with.   Select the option to choose the program, then select the Terminal as the program to run it with (under /Applications/Utilities/Terminal).

Now when you double-click on this icon it should run Gnumeric for you without having to enter the terminal.

Alternately, you can create an alias instead of a link, by invoking:

$ osascript -e 'tell application "Finder"' -e 'make new alias to file (posix file "/opt/local/bin/gnumeric-1.12.20") at desktop' -e 'end tell'

This does NOT require the step of having to choose the terminal to run it with.  Now the Gnumeric alias will be on your desktop.  You may rename it and move it wherever you like (e.g. the Applications folder). 

Any questions or comments to improve this post are immensely appreciated!  Please leave a post!  Thank you!

Note: I’ve noticed some of the functions, such as creating check boxes, radio buttons, and lists, do not work and actually crash the program.  I am not sure if these would work in the X11-compiled version, as I have not tried it in a number of years.  Please submit your comments regarding whether or not you have compiled Gnumeric on a Mac using X11, and if these functions work properly in that version.  Also, if you have any ideas of how to fix these functions, I’d be more than happy to hear them! 


How to fix 3d rendering for a Windows guest in VMWare Workstation 11 Ubuntu 14.10 host


Hi all!

Getting a 3D acceleration warning when you start your Windows guest in VMWare Workstation 11 hosted by Ubuntu 14.10?

I was, and I had a little trouble finding a fix, until finally it worked!  It's simply a matter of upgrading to the latest OpenGL (Mesa) drivers in Ubuntu.

To install, open a terminal window and enter:

sudo add-apt-repository ppa:oibaf/graphics-drivers
sudo apt-get update
sudo apt-get upgrade

To remove it later, you can use ppa-purge:

sudo apt-get install ppa-purge
sudo ppa-purge ppa:oibaf/graphics-drivers

Now when you start your Windows guest, it should no longer complain about 3D acceleration!

That’s it!  If you have any questions or comments, feel free to reply!

Switching from LXDE to LXQT in Lubuntu 14.10 Utopic Unicorn



So if you’re like me, you’ve been dying to try out the new LXQT Desktop Environment everyone has been talking so much about!

But so far, an official release of Lubuntu with LXQT has not been developed.   There is an unofficial release, but it does not have an installer package, AFAIK.
What to do?

The current approach is, of course, to manually switch from LXDE to LXQT after installing a base version of Lubuntu 14.10 — Here’s how it goes:

1) Install a fresh copy of Lubuntu 14.10 from the official repository, which is here:

2) After installing the copy, get ready to convert your Desktop Environment to LXQT.

Go to the command prompt by hitting ALT-F2 on the desktop and typing LXTerminal.  Then, once the terminal window is open, type the following:

$ sudo apt-get update && sudo apt-get upgrade

Type in your password, and you’re good to go!

Then, to do the actual switch, invoke this command at the prompt:

$ sudo add-apt-repository ppa:lubuntu-dev/lubuntu-daily


$ sudo apt-get update


$ sudo aptitude install lxqt-metapackage lxqt-session lxsession

Note the use of aptitude in the above step; apt-get tends to complain about doing this … (try it, you'll see!)

However, you should be warned that this approach causes a few issues with your setup.  In the terminal you'll see this:

The following actions will resolve these dependencies:

      Remove the following packages:                                                             
1)      gnome-system-tools                                                                       
2)      gnome-time-admin                                                                         
3)      lubuntu-desktop                                                                          
4)      network-manager-gnome                                                                    
5)      ubuntu-release-upgrader-gtk                                                              
6)      update-manager                                                                           
7)      update-notifier                                                                          

      Leave the following dependencies unresolved:                                               
8)      apport-gtk recommends update-notifier                                                    
9)      gvfs-daemons recommends policykit-1-gnome                                                
10)     network-manager recommends network-manager-gnome | plasma-widget-networkmanagement 

Well, we're not going to let a little thing like broken dependencies stop us, are we?

As far as I can tell, these are the most pressing issues to resolve:

1) The logout menu is broken:

It removes the lxsession-logout package, which makes it difficult to log out of the DE when you used to be able to just hit “logout” from the panel and have it bring up the menu.

To fix this, at the command prompt, invoke:

$ sudo apt-get install lxsession-logout

2) The network manager is no longer present:

The switch has also removed your network manager by default, so to get it back, invoke:

$ sudo apt-get install network-manager

3) System tools have been uninstalled:

Want the system tools you’re used to seeing back?  It’s easy enough, just invoke:

$ sudo apt-get install gnome-system-tools

4) Wait!  What about the lubuntu-lxpanel-icons package I see being uninstalled??

The panel in LXQT is actually lxqt-panel, which is different, so not to worry: it uses its own set of icons, and you don't need the lubuntu-lxpanel-icons package any more!

Well, that about wraps it up!  Do you have any questions or comments about this process?  See anything I did wrong, or want to mention anything else that this switch will impair/remove/destroy?   Please reply in the comments section!  Thanks!