Navigate to your web UI and de-select “Permit password login” under SSH section (or similar – depending on your gateway of preference, I’m using OPNsense):
Note: You should probably create a new user and su to root once you’re connected via SSH, but this is the “quick and dirty” version, so we’ll save that for another day.
Open A NEW TERMINAL instance or tab (don’t log out of SSH yet!) and try logging into your gateway. I’ve found it good practice to always try anything that affects SSH login in a new instance, because you never know when your settings alterations will lock you out. It’s not such a big deal in this example, but it’s a good habit to be in.
If you’ve gone through these steps properly, you should be able to log into your gateway without a password now (unless you specified a passphrase using ssh-keygen).
Now you’ll be limited to connecting via SSH only from this one machine. For additional machines, there are several things you could do:
Copy the contents of your ~/.ssh folder to other machines
Repeat the ssh-keygen step on the next computer and copy its id_rsa.pub into the gateway’s authorized_keys again
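For reference, that key setup can be sketched like this (the gateway address and key size here are examples, not from the original steps):

```shell
# Generate a key pair on your workstation (add a passphrase if you like)
ssh-keygen -t rsa -b 4096

# Easiest path: ssh-copy-id appends your public key to the gateway's authorized_keys
ssh-copy-id root@192.168.1.1

# Manual equivalent, if ssh-copy-id isn't available on your machine:
cat ~/.ssh/id_rsa.pub | ssh root@192.168.1.1 \
  'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
```

If you did set a passphrase, ssh-agent can cache it so you still get one-touch logins.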
So I’ve been playing with some of the Turnkey Linux VMs, which are a nice, lightweight Debian core packaged with purpose-built software.
They run lighttpd out of the box and have web portals, web-accessible shells, webmin for easy administration, Let’s Encrypt SSL certificate management automation, and usually some web administration specific to the purpose for which they are built.
They’re really pretty nice! Nothing too fancy as far as their looks, but they get the job done and seem very stable. They can really save a lot of time.
So I was trying a couple, for a torrent server and OpenVPN, and since I run ESXi hosts I opted for the vmdk images. I was expecting just a disk image (.vmdk), but lo and behold, they’re not bare image files but full VMs built on VMware Workstation.
This is cool but also presents some compatibility issues, as you can’t just copy a Workstation VM to your datastore and run it like anything else. I tried it personally not knowing what I was in for, and my host couldn’t open the .VMX file when I clicked on “edit settings” (it just gets stuck at “loading”).
So OK, cool, let’s figure out how to get this working. The easiest way if you have Workstation is to download the VM locally and load it up in Workstation. Then connect to your ESXi host and click “upload”:
Then specify the host you want to send the VM to and it will automatically convert it for you.
However, it’s not without its problems. You will probably want to upgrade the VM hardware version: mine was woefully outdated at version 6 when the host’s capability was version 14, which prevented me from using newer device types and changing the OS type to Debian 9.
But even after upgrading the VM, I still couldn’t add vmxnet3 or paravirtual scsi drivers. This I solved by cloning the VM in VCSA. Somehow, the cloned VM was able to add the vmxnet3 and paravirtual scsi drivers. I’m not sure why they weren’t just available from upgrading the VM, but it worked.
What if you don’t have a copy of VMware Workstation? Well, it’s not that hard to get around. This is how I did it the first time I tried importing a Turnkey appliance, since I already had the VM on my host’s datastore.
SSH into the host and invoke the following command:
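(The command in question is vmkfstools’ clone/convert operation; the paths and names below are examples, so adjust them for your datastore layout.)

```shell
# Clone the Workstation-format disk into an ESXi-native, thin-provisioned vmdk
vmkfstools -i /vmfs/volumes/datastore1/turnkey/turnkey.vmdk \
           -d thin /vmfs/volumes/datastore1/turnkey/turnkey-esxi.vmdk
```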
Then just manually create a VM and import the existing disk, using the one you just output with vmkfstools.
After that, you can safely delete the files that came with the Turnkey appliance, being careful to keep the vmdk file you created with vmkfstools.
Also, I noticed that on VMFS v6 I could not relocate the vmdk file; for some reason it had to be left in the original folder where it was created. There may be a way to work around this, but it’s easy enough just to leave the directory alone. Maybe that’ll be for another post…
For all Gemini Lake processors, the current data sheets show a maximum RAM of only 8GB (2x 4GB). The Gemini Lake series includes the following Intel CPUs:
Pentium Silver J5005 (4M cache, up to 2.80 GHz)
Pentium Silver N5000 (4M cache, up to 2.70 GHz)
Celeron J4105 (4M cache, up to 2.50 GHz)
Celeron J4005 (4M cache, up to 2.70 GHz)
Celeron N4100 (4M cache, up to 2.40 GHz)
Using the current ASRock J4105-ITX motherboard with the Celeron J4105 processor of the same name as an example, we were able to show in our overRAMing test that the maximum-RAM values given in the specifications are incorrect: the memory endurance test proved that 32GB (2x 16GB) is possible.
In our overRAMing test, we put the ASRock J4105-ITX Mini-ITX motherboard through extensive continuous testing with different memory configurations.
The following memory configurations have been tested:
12GB memory (8GB + 4GB)
16GB memory (8GB + 8GB)
24GB memory (16GB + 8GB)
32GB memory (16GB + 16GB)
The memory tests were performed with MemTest86.
The complete 32GB of memory was recognized and could be fully utilized. The first initialization of the 32GB requires some patience: the ASRock J4105-ITX needs approx. 35 seconds until the system starts.
The two 16GB memory modules were written and read in the ASRock motherboard for more than 6 hours, including the “Hammer Test.” All tests were performed 3 times in a row, and none of the tested memory configurations showed any abnormality, error, or problem in the endurance test.
Why the manufacturers limit the maximum RAM to only 8GB (2x 4GB) in their manuals and documentation is incomprehensible; our tests were convincing with every memory configuration.
Where are the new processors used?
The new Gemini Lake processors are used, e.g., in the following motherboards / mini-PCs:
Based on these successful tests, we have approved the 8GB and 16GB memory modules and now sell them successfully.
Another note from the manual!
When changing memory modules, we noticed that the BIOS of the ASRock J4105-ITX motherboard does not always recognize the new memory. In that case the motherboard no longer starts and the screen stays black.
The cause is not due to the memory sizes that are used. We were also able to reproduce this phenomenon with 2GB and 4GB RAM.
The remedy here is the CLRCMOS1 jumper, as described in the manual under 1.3 Jumper Settings.
Quote from the ASRock manual: CLRCMOS1 allows you to clear the data in CMOS. The data in CMOS includes system setup information such as system password, date, time, and system setup parameters … Please turn off your computer and unplug the power cord. Then you can short-circuit the solder points on CLRCMOS1 with a metal part, such as a paperclip, for 3 seconds …
Once the CMOS was cleared, any memory configuration could be installed; it was then always recognized immediately and the system booted on the first attempt.
I tend to notice Windows users fall into a few major categories:
Unaware any OS besides Windows exists
Uses Windows because that’s what they’re most comfortable with
Doesn’t like Windows per se, but can’t afford a Mac, and doesn’t want to grapple with Linux
Likes posix-compliant OS better, but trapped by vendor lock-in requiring use of proprietary software only available on Windows
If you’re like me, you’re in the 4th category. I love package managers and bash shells, I’m most comfortable with nix commands and regex, and while I think PowerShell is a step in the right direction, the syntax is unfamiliar enough that I still have to consult references to do most things.
But alas, I’m in an environment with proprietary Windows software. If I want to edit server settings in my Windows Domain environment, there really aren’t any cross-platform tools to do it with. I could remote in using SSH and run PowerShell scripts, but that’s definitely not as easy as the GUI tools that are readily available to install on any Windows platform.
Well, thankfully I’ve been able to make Windows a more comfortable environment for nix users with a few add-ons that have really made life a lot easier!
In a lot of ways, it’s the best of both worlds (or, attempting to have cake and eat it – pick your metaphor).
1) Install a package manager (or a couple!)
Here’s some of what I’ve done:
I went with Scoop, which is a package manager that installs normal Windows programs through the command line.
It’s kind of interesting in that it installs programs to your user folder, negating permissions issues and UAC prompts for installing/uninstalling programs. Also, any user-specific settings will be local to each program, so they can be customized and preserved more fully than system-wide installs.
The drawback, obviously, is that if you want to install programs for multiple users, you’ll want to install them the old-fashioned way, or use Chocolatey.
Here’s an example of finding and installing QEMU using scoop:
avery@tunasalad MINGW64 /bin
$ scoop search qemu
avery@tunasalad MINGW64 /bin
$ scoop install qemu
Updating 'extras' bucket...
* 33f73563 oh-my-posh: Update to version 2.0.311 2 hours ago
* 35865425 ssh-agent-wsl: Update to version 2.4 3 hours ago
* 2fa7048f oh-my-posh: Update to version 2.0.307 3 hours ago
* 63f7fcdb gcloud: Update to version 256.0.0 3 hours ago
* f92493a8 kibana: Update to version 7.2.1 4 hours ago
* 6f9fb8d5 elasticsearch: Update to version 7.2.1 4 hours ago
Updating 'main' bucket...
* f8034317 aws: Update to version 1.16.209 53 minutes ago
Scoop was updated successfully!
Installing 'qemu' (4.1.0-rc2) [64bit]
Downloading https://qemu.weilnetz.de/w64/qemu-w64-setup-20190724.exe#/dl.7z (121.3 MB)...
Checking hash of qemu-w64-setup-20190724.exe ... ok.
Extracting dl.7z ... done.
Linking ~\scoop\apps\qemu\current => ~\scoop\apps\qemu\4.1.0-rc2
Creating shim for 'qemu-edid'.
Creating shim for 'qemu-ga'.
Creating shim for 'qemu-img'.
Creating shim for 'qemu-io'.
Creating shim for 'qemu-system-aarch64'.
Creating shim for 'qemu-system-aarch64w'.
Creating shim for 'qemu-system-alpha'.
Creating shim for 'qemu-system-alphaw'.
Creating shim for 'qemu-system-arm'.
Creating shim for 'qemu-system-armw'.
Creating shim for 'qemu-system-cris'.
Creating shim for 'qemu-system-crisw'.
Creating shim for 'qemu-system-hppa'.
Creating shim for 'qemu-system-hppaw'.
Creating shim for 'qemu-system-i386'.
Creating shim for 'qemu-system-i386w'.
Creating shim for 'qemu-system-lm32'.
Creating shim for 'qemu-system-lm32w'.
Creating shim for 'qemu-system-m68k'.
Creating shim for 'qemu-system-m68kw'.
Creating shim for 'qemu-system-microblaze'.
Creating shim for 'qemu-system-microblazeel'.
Creating shim for 'qemu-system-microblazeelw'.
Creating shim for 'qemu-system-microblazew'.
Creating shim for 'qemu-system-mips'.
Creating shim for 'qemu-system-mips64'.
Creating shim for 'qemu-system-mips64el'.
Creating shim for 'qemu-system-mips64elw'.
Creating shim for 'qemu-system-mips64w'.
Creating shim for 'qemu-system-mipsel'.
Creating shim for 'qemu-system-mipselw'.
Creating shim for 'qemu-system-mipsw'.
Creating shim for 'qemu-system-moxie'.
Creating shim for 'qemu-system-moxiew'.
Creating shim for 'qemu-system-nios2'.
Creating shim for 'qemu-system-nios2w'.
Creating shim for 'qemu-system-or1k'.
Creating shim for 'qemu-system-or1kw'.
Creating shim for 'qemu-system-ppc'.
Creating shim for 'qemu-system-ppc64'.
Creating shim for 'qemu-system-ppc64w'.
Creating shim for 'qemu-system-ppcw'.
Creating shim for 'qemu-system-riscv32'.
Creating shim for 'qemu-system-riscv32w'.
Creating shim for 'qemu-system-riscv64'.
Creating shim for 'qemu-system-riscv64w'.
Creating shim for 'qemu-system-s390x'.
Creating shim for 'qemu-system-s390xw'.
Creating shim for 'qemu-system-sh4'.
Creating shim for 'qemu-system-sh4eb'.
Creating shim for 'qemu-system-sh4ebw'.
Creating shim for 'qemu-system-sh4w'.
Creating shim for 'qemu-system-sparc'.
Creating shim for 'qemu-system-sparc64'.
Creating shim for 'qemu-system-sparc64w'.
Creating shim for 'qemu-system-sparcw'.
Creating shim for 'qemu-system-tricore'.
Creating shim for 'qemu-system-tricorew'.
Creating shim for 'qemu-system-unicore32'.
Creating shim for 'qemu-system-unicore32w'.
Creating shim for 'qemu-system-x86_64'.
Creating shim for 'qemu-system-x86_64w'.
Creating shim for 'qemu-system-xtensa'.
Creating shim for 'qemu-system-xtensaeb'.
Creating shim for 'qemu-system-xtensaebw'.
Creating shim for 'qemu-system-xtensaw'.
'qemu' (4.1.0-rc2) was installed successfully!
See? That wasn’t so hard, was it?
2) Install git for Windows (and get an awesome nix-like terminal shell!)
Note: I usually install VS Code before installing git, since that way you can set it as your default editor during the install process. Now that you have your package manager set up, you can install it by running:
$ scoop install code
Obviously git is a hugely useful program for versioning code, documents, etc., and it can be configured to work with your favorite hosting site, such as GitHub, Bitbucket, GitLab, et al.
But the beauty of the git windows installer is that it also installs git-bash, a mintty-based terminal emulator with a very comprehensive set of posix tools that’ll make you feel way more at home in a Windows environment if you’re used to working on posix systems.
Learn more about it by checking out this discussion, “What is git-bash, anyway?”
If you have more questions about the nuts and bolts, the git-bash GitHub FAQ is a good place to start; it covers things like cmd console compatibility and long path or file names.
I’ve tried both of these, and I personally didn’t need to compile or install packages from arch linux often enough to keep the full suite, so I usually just stick with git-bash since it’s a happy medium. But it’s pretty cool that any of that is even possible. If you’re a developer, the full development suite will probably be right up your alley. Apparently the applications you compile can even be used outside the shell (supposedly – although I wouldn’t count on it).
Some of the tools I’ve found to be a bit wonky, like “find” and “du”, but if you keep your expectations tempered I think you’ll be pleasantly surprised. I love being able to invoke “nano” or “vim” to edit text files right in the shell so much that even if nothing else worked, I’d be fine with it.
Don’t get me started on how happy I am with “ssh” and “openssl”…
Check out a list of the /bin directory of MingW64 – I’ve listed the commands I imagine people use most often on posix platforms:
Full disclosure: I purchased this camera and was later reimbursed for writing this review. However, I will do my best to offer a totally objective viewpoint on the pros and cons of this model.
Since I was hired to do installation and maintenance of a TVI DVR system at a local restaurant, I’d been wanting to set up a couple cameras around my home.
I’m not so worried about theft or break-ins like the restaurant is, but even in a mellow, low-stress area like ours, recording a few key areas can still be helpful. For instance, we share a fence with a parking lot, so it gets hit by cars occasionally and we’d like to know who did it if we’re not home. Also, you never know when someone might take an Amazon delivery off your doorstep, as that is becoming common even in nice neighborhoods.
Even though I do installation professionally, I did not get the job because I’m a video security expert. I did networking for the restaurant and they asked me if I wanted to work on the security system. I had virtually no experience with CCTV setups before starting the job, so I dove in head-first and have been learning as I go. It’s been a great experience.
As far as DVR/NVR technology goes, I’ve found that IP cameras are better for features and quality, but TVI cameras are a lot easier to wire. However, even though wiring RG59+BNC is easier than terminating cat6, the time savings are offset by the number of cameras I end up installing, since their lower resolution means I need more of them.
Compared to the 1080p cameras in the restaurant’s TVI setup, the 5MP IP cameras I have are much more efficient. I was stunned by how much more detail there is: if it’s recording an open area, I just set up one 5MP+ camera and use digital zoom to navigate, whereas at the restaurant I have to use several 1080p or 720p cameras to cover the same area because of their resolution. Even though a single IP camera might be more expensive than a TVI camera, overall the system has been cheaper, easier to set up, and less hassle to use.
While I use almost entirely Hikvision products (and clones) for the TVI CCTV system at work, I’ve decided to stay away from the major name brands for IP cameras, because the market is moving so quickly that the newer up-and-coming companies compete much harder and produce products with more features and better value.
Of the 5-6 IP cameras I’ve tried so far, I’ve liked Anpviz best and SV3C second-best. I’d like to try Morphxstar at some point, but would be happy sticking with Anpviz going forward as they have stunning picture quality and are built like tanks.
I had been very happy with the Anpviz camera I already owned, an IPC-D350 that was an Amazon best seller, so I was excited to try another one from the same company. I have an IP camera from SV3C, too, the SV-B06POE-4MP-A. The main reasons I was excited to try the review camera were:
1) Value – Anpviz makes a great product at a very affordable price. The IPC-D350 was an ‘Amazon Recommended’ camera. The IPC-B850W-D hasn’t gotten as much attention, but it’s a very similar camera picture-wise with the added benefit of having a mic and a Micro-SD card slot, at pretty much exactly the same price. Seems like a good value to me!
2) Picture and recording quality – a camera that is inexpensive but has poor picture or recording quality is a waste of money. My IPC-D350 has great picture quality and has allowed me to see detail from the front of the house to the street. I’ve only needed the single camera to be able to identify anyone who walks anywhere in the front yard. I’m happy I have something reliable I can use to distinguish between the post carrier, family members, or Mormons when they come to knock on my door.
3) Build quality – I’ve had the IPC-B850W-D for over a month, and the IPC-D350 for about 6 months. Both have been stable in all sorts of weather conditions. I haven’t had any issues with the IPC-D350 stalling or rebooting over the 6+ months it’s been in use, and the IPC-B850W-D has been similarly resilient. Subjectively, they don’t feel cheap like other similarly-priced cameras, using a fair amount of metal in their build instead of being entirely plastic like most others. My SV3C, for instance, is all plastic except the base. It’s a hard ABS plastic, but still just doesn’t seem as hardy as the Anpviz cameras I’ve tried.
4) Support – Anpviz support has been great, they’re responsive and kind, and have been able to walk me through any problems I’ve had in the past.
5) Features – Anpviz cameras use a stock firmware I’ve seen in cameras from other manufacturers in the same price range, but with a few additional settings here and there that make them stand out. Also, the option to lower the sensor resolution and increase the framerate does not come at a great loss: 5MP@15FPS is a good default combination that most people would be happy with, but if you want full 24FPS recording, dropping from 2592×1944 (5MP) to 2688×1520 will let you record at up to 25FPS without sacrificing much in the way of pixel density.
Comparatively, our Hikvision DS-7332-HGHI 32-channel DVR requires us to downgrade the stream from 1080P@12FPS to 720P@24FPS to get full-frame recording on any more than 8 cameras, so I was expecting a much larger sacrifice in quality!
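To put rough numbers on that pixel-density claim, here’s the arithmetic on the two modes quoted above:

```shell
# Pixels per frame in the two modes
full=$((2592 * 1944))   # native 5MP mode
fast=$((2688 * 1520))   # reduced mode that unlocks 25FPS
echo "5MP mode:  $full pixels/frame"
echo "fast mode: $fast pixels/frame"
# The reduced mode keeps roughly 81% of the pixels per frame
echo "kept: $((fast * 100 / full))%"
```

So the high-framerate mode still delivers about 81% of the pixels per frame while going from 15FPS to 25FPS, which matches the “not much sacrifice” impression.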
Mounting: Mounting the camera on my shed and adjusting it was easy enough. There were three different points of adjustment – 1) A 360° swivel of the base, the part with screw holes that sits on the wall. 2) an up-down/left-right joint that is limited to 90° but can be swiveled any direction, and 3) the camera itself swivels 360°.
I tend to be a fan of ball-joint mounts for bullet cams when I’m first setting them up because of the ease of adjustment, but you can get the same range of motion with this number of joints, it just requires a little more work. The positive of NOT having a ball joint is they tend to wear out after a while and are more likely to become loose over time compared to discrete, range-restricted joints.
Picture quality: I’m not sure what the difference is between the IPC-D350 and the IPC-B850W-D in terms of sensor technology, materials over the sensor, etc., but subjectively the colors seem slightly more vibrant on the IPC-D350. It’s not something most people would notice, though; they look extremely similar. Both are similarly sharp and have essentially the same motion-recording sensitivity.
Night vision: The IPC-B850W-D night vision is a major improvement over the IPC-D350, which is the only thing I didn’t like about that camera. I can see a lot further using this camera, and there is great detail rather than just cloudy grey beyond approx. 60 ft. I think it might be the clearest night vision of the 3 cameras I have now, but it’s a close call between the IPC-B850W-D and my SV3C SV-B06POE-4MP-A, which also has great night vision.
This possum had no idea s/he was being taped:
My recording platform is Milestone Xprotect Essential+ 2019R1, which is a true enterprise solution that runs on Windows 10 or Server 2012R2 and newer. While not in the directly supported list, the IPC-B850W-D camera was detected using ‘easy add’ in hardware management via ONVIF without issue. Main feed, sub feed, metadata feed, microphone outputs all detected and record without issue.
Interestingly, Milestone detects 5 input feeds to the camera, but I’m not sure what they’re used for. I’ve noticed Milestone detects input feeds on other cameras I’ve tested, as well. At some point, I would like to know what they do, if anything. It’s possible they’re just for ONVIF compatibility, I suppose (?).
I imagine the camera should work fine with other recording platforms such as Shinobi, ZoneMinder, Blue Iris, iSpy, MotionEye, etc., but I did not go so far as to test the camera with any of these because I use Milestone (and am very happy with it). Chances are that if your recording platform is compatible with ONVIF, the camera should work fine.
I do not have a commercial NVR to test the camera with, but it is advertised as being compatible with Hikvision through port 8000. I did test the Anpviz IPC-D350 with a Hikvision Turbo 1.0 DS-7332-HGHI hybrid DVR on firmware version 3.1.18; it would record the feed, but a lot of features were mismatched because of the age of the DVR and its dated firmware. I do not believe the DVR even supports ONVIF.
If you have an old hybrid DVR, this camera will work, but be prepared for some of the settings not to overlap properly. I know our DVR only supports maybe 2-3 IP camera models officially, but it will record the feed of most any Hikvision-compatible IP camera nonetheless. Additionally, viewing IP cameras from the IVMS-4500 phone app on an old hybrid DVR requires setting up separate devices for each IP camera, as they are not available in the main list along with the analog camera feeds. Just something to consider.
I’ve thought of setting up a Milestone record server and importing a TVI DVR such as the DS-7316HUI-K4 for a solution that would provide compatibility with TVI cameras, yet display all TVI and IP cameras on one page through the Milestone app, but that’s a project for another day.
From the configuration page there’s a recording option for MicroSD, USB (generic firmware option, probably non-functional), and NFS. I did not test a micro-SD card, so I cannot report on that. But I did try mounting an NFS share through my network. It was a tad quirky in that when I first configured the share it wouldn’t show that it was mounted until I logged out and back in, but it did seem to work just fine.
There is also an option to send clips to an FTP server, but it’s in the Network settings menu instead of under storage. There’s also an email for alerts, but unlike some other cameras I’ve seen, I don’t think you can email clips of video.
Intuitively, I’d think the easiest thing to use for storage would be the P2P option through Danale, a paid cloud storage service, but after having a look at their app’s ratings it looks like it might be more trouble than it’s worth. Therefore, if you want push notifications, you probably should set up the camera to send you an email and do some extensive testing of the motion detection sensitivity in order to avoid false positives.
IFTTT is worth checking out for push notifications, too! I’ve managed to rig it up to push notifications for work, as well as automate all sorts of things for work and home. If you’ve never heard of it, have a look, it’s amazing: https://ifttt.com
I think Microsoft has something similar called “flows” but I haven’t tried it yet. Hey, whatever works, right?
Browser support testing:
I noticed the IPC-B850W-D camera was compatible with a wide range of browsers. I tested it with the top 2 or 3 browsers on 3 different common operating systems, and they all seemed to perform great. This is a real step up from older cameras that would only work in Internet Explorer, given how many people use Mac or Linux (or even BSD or Illumos) for their desktop or laptop instead of Windows.
Live View and all feature/configuration pages were available in the following browsers:
Windows LTSC 1809 build 17763.557:
Chrome 75.0.3770.100 (Official Build) (64-bit)
Firefox 60.7.2esr (64-bit)
Internet Explorer 11.0.130 KB4503259
Disclaimer: Sorry for the phone pics of my screen, I was using an installer disk for this procedure and there was no screenshot feature.
I’m migrating a macOS install, but the source drive is 256GB and the destination is 250GB. It’s not much of a mismatch, but it still prevents a straight-across restore.
What ever to do?
Well, if you’re using APFS (like you should be), then it’s pretty trivial.
First, erase your destination disk and set it to APFS.
The GUI disk utility is what I find easiest. You might have to tell the sidebar to show all devices if you have a greyed-out partition like ‘disk0s4’ and nothing else.
The nice thing about the Mojave installer is you can have two applications open at once now, which I think is still impossible in Recovery mode. So go ahead and open a terminal while you’re at it and have the two displayed the whole time.
Erase it to an APFS disk; GUID should be the only partition map option (if it’s not, select GUID).
You’re welcome to use encryption or case sensitivity if you want, but I usually opt for whatever is the easiest to troubleshoot, myself. You might want to, as well, unless you have some specific reasons otherwise.
Now for some terminal diskutil commands:
There are a bunch of RAM disks present in recovery mode, so I like to list one disk at a time. The two SSDs here are disk0 and disk2, so you can clean up the display by invoking something like:
# diskutil list disk0 && diskutil list disk2
Learn from my mistakes: I actually cloned my efi partition before I did the container resize, but I found out that you have to do it after the restore because it’ll be overwritten. So wait until the last step before you restart the computer.
Then get the exact sizes of your source and destination drives with:
# diskutil info disk0 | grep Size && diskutil info disk2 | grep Size
Note: My laptop had the 250GB destination drive installed internally so it was the first drive that was detected, disk0, but these numbers (and perhaps even the partition, or ‘slice’, number s3) can change, so make sure you identify them properly because you can mess things up big time.
Then you’ll want to get the size of the APFS container itself, which in this case is on slice 3 (disk0s3). This is our limit.
# diskutil info disk0s3 | grep Size && diskutil info disk2s3 | grep Size
Note: You can just run the first part of the command before the && if you don’t feel like comparing, just make sure it’s actually your destination container.
You can also get the container limits with:
# diskutil apfs resizeContainer disk0s3 limit
This could be helpful in a scenario where the drives have a greater difference in size, or if the source container is very full, to see if you might be trying to make the container smaller than it’s constrained.
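Once you’ve confirmed the target size fits within those limits, the resize itself is a one-liner (the device and size here are examples; in my layout disk2 was the larger source drive):

```shell
# Shrink the source container so the restore fits on the 250GB destination
diskutil apfs resizeContainer disk2s3 240g
```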
If you get a resize error, you might have to delete the snapshots from the source container: they refer to the total size of the container at the time they were taken, so they can no longer be restored if the container is resized.
Snapshots are deleted from the source drive, which must be mounted, by invoking:
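(The commands I mean are along these lines; the volume path is an example and the UUID is a placeholder you get from the list output.)

```shell
# List APFS snapshots on the mounted source volume
diskutil apfs listSnapshots /Volumes/SourceDrive

# Delete one by its UUID; repeat for each snapshot listed
diskutil apfs deleteSnapshot /Volumes/SourceDrive -uuid XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
```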
Then you can go back to the GUI disk utility and start the restore by selecting the destination drive and clicking restore on the top panel. Choose your source drive and it should get going.
Note: Depending on the size of your drives and the amount of information stored on the source, this can take a long time. During the procedure I was doing while writing this, two ~250GB NVMe SSDs (one connected with USB 3.0) with a source containing ~160GB of data took about 4-5 hours. It’s an exercise in patience.
Clone your EFI partition using dd:
Now you can clone your EFI folder from source to destination and it won’t get overwritten:
# dd if=/dev/disk2s1 of=/dev/disk0s1 bs=512
Note: You might want to make sure all your necessary bootloader software is there by mounting the EFI partition on your destination drive and poking around a bit (see commands in the picture above). Otherwise you won’t be booting from this drive yet.
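Mounting the EFI partition from the terminal looks something like this (the device identifier matches my layout, where disk0s1 was the destination’s EFI slice; verify yours first):

```shell
# Mount the destination drive's EFI partition and inspect the bootloader files
diskutil mount disk0s1
ls /Volumes/EFI/EFI
```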
Also – this is important, there’s one last step – rebuild your kextcache on your destination drive:
# kextcache -i /Volumes/nameOfDestinationDrive
Now you should be able to disconnect all the other drives and boot from your newly cloned drive.
I used to host my site on ipage.com (don’t laugh). I was with them for nearly 10 years, taken in by their low prices that promptly increased over the first year. “Buy this” buttons flashed at me constantly while trying to edit my site, along with services and add-ons that kept appearing on my bill I had no idea about until suddenly I was expected to pay for them.
When the service slowed to a crawl about 4-5 years in, I mainly just kept my site for the simplest web presence. Sure, I fix laptops. Sure, I’ll scan your computer for viruses. I’ll recover lost data from a failing hard drive or upgrade you to an SSD. But make a website for you? Not on this package.
The service was so slow that when I tried to make a site showcasing ikebana arrangements I finally gave up because page reloads were taking so long and told the client they should try ‘wix’.
Every time I remember that moment, I hang my head in shame.
Well, no more. No more, I tell you!
Because I finally have *decent* hosting. Good hosting. Stellar hosting.
Squidix is a company based out of Indiana that has that midwestern touch to their service where they actually seem to give AF. And page reloads? They happen. Quickly. I haven’t noticed any page taking longer than I can tolerate easily. They pass the tolerable test.
I could throw stats at you or tell you about package costs, etc. but none of that really matters. I mean, it was all considered carefully when choosing my web host, but there’s tons of reviews out there you can find. This is more just about my *feelings*. And they are good. They are great. They are phenomenal.
I think I’m in love.
And what’s this?
They have cPanel? I guess that’s pretty standard these days, but try going 10 years without it. Try living with a crappy ‘buy-button’ based panel engineered to operate like you’re on Amazon.com – purchasing things left and right you have no business getting, nor can you afford – and navigating to the things you need to do while steadfastly avoiding commercial pitfalls.
Ok, so maybe this is less a charm-offensive for Squidix and more of a rebuke of ipage.
Am I going to stop this stream-of-consciousness blog posting nonsense and actually learn how to set up git versioning and SSH access?
Hopefully. For all our sakes.
But seriously, if you’re looking for a web host, give them a careful look.
OK, so I’ve been fighting it, but I decided learning WordPress is a necessary evil. According to website tech stats, WordPress supposedly powers 25% of all websites. I’m not sure that’s entirely relevant if 15% of those sites are as bad as mine, but I figured if even 10% are decent, then it’s probably worth learning.
I use the W3Techs extension in Chrome to look at what technology websites I visit use and WordPress comes up time after time. Some of them are significant players, like a lot of online newspapers and major companies. So here I am, in the middle of the bell-curve, going along with the crowd, jumping on the bandwagon and flinging myself off a bridge because my friends did it.
Actually, I can see why people like it. It seems pretty nice, and it’s easy to use. To put myself through torture, I avoided the “one-click installer” this time and downloaded a proper copy, un-tarred it, created the MySQL database and user, set permissions, etc. The hardest part was keeping track of all the extremely complex passwords required to keep this monstrosity from getting hacked, because it’s by far the most attacked platform in internet history.
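For the curious, the manual route I’m describing boils down to something like the following. This is a sketch, not a recipe: the web root, the www-data owner, and the database name, user, and password are all placeholders you’d swap for your own.

```shell
# Fetch and unpack WordPress into the web root (path is an example).
cd /var/www
wget https://wordpress.org/latest.tar.gz
tar -xzf latest.tar.gz

# Create the database and a dedicated user (names/password are placeholders).
mysql -u root -p <<'SQL'
CREATE DATABASE wordpress;
CREATE USER 'wpuser'@'localhost' IDENTIFIED BY 'a-long-random-password';
GRANT ALL PRIVILEGES ON wordpress.* TO 'wpuser'@'localhost';
FLUSH PRIVILEGES;
SQL

# Hand the files to the web-server user and set sane permissions.
chown -R www-data:www-data wordpress
find wordpress -type d -exec chmod 755 {} \;
find wordpress -type f -exec chmod 644 {} \;
```

From there, browsing to the site runs the famous five-minute installer, which writes wp-config.php for you.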
While I managed to get through the install process, don’t ask me how to troubleshoot an existing installation, because I don’t know. I finally gave up working with the techs on the last site I imported from another web host and said I’d just install a fresh copy. Were the two how-to guides I had written worth keeping? Probably. Did it seem harder to preserve them than to start over from scratch? Yes. But that’s OK. I’ll figure out how to troubleshoot this mess some day.
But that is for another day, because WordPress is all about content. Maybe now that it’s set up I’ll actually keep more records of the technical stuff I do. There’s so many laptops I’ve torn apart and bash scripts I’ve modified and rsync OS backups I’ve made that are lost in the ether forever. I do weird crap with Linux and ESXi that I don’t think is documented anywhere yet and I’m sure someone else out there might want to know how to do it someday… maybe…
Or not. But that’s what blogging is all about – the inflated sense of ego. My experience matters, dammit, because here it is, written down, displayed for all to see. Or maybe it’ll actually help someone, which is the true meaning of purpose – giving back to others when we have been able to overcome obstacles they might also encounter.
But not to get too meta, I’m probably just going to write stuff down as I go. No real project continuity I’m foreseeing, as I don’t have much of an attention span. And I’m not really a developer, I just thought the name was cute. I’d like to learn coding but I’m way more involved in doing hands-on tech support and infrastructure projects and they mainly take up all my time. But enough about me, this is really just meant to be a rambling first-post hello thing. I really should stop typing now before I get myself in any more trouble.
After being a long-time Debian user, for some reason I’ve been going all-in for CentOS lately. It started after I installed it as a virtualization platform with KVM on an E3 server. I just love how stable and methodically planned it is.
I’ve probably gone a little too far, as I’ve also installed it on my Thinkpad x220 Laptop. It’s definitely not much of a desktop OS – its packages are stable but they’re OLD. But, I guess my laptop is old, too, and I still love both of them. They work awesome together.
To avoid being too bored by its aged userland, I decided to breathe new life into the desktop experience by installing a newer version of Cinnamon. After poking around the wiki for a few days, I noticed, totally by accident, that EPEL has a testing branch for pre-release packages. A little repo package-list inspection later, and I saw that the pre-release version of Cinnamon they’re currently working on is 3.6.3 (!)
While installing from a testing repo is rather antithetical to the entire notion of running a 10-year service OS like CentOS, I thought for a desktop it might be fun. I haven’t had any problems with it so far after running it a couple days (besides the little missing app here or there). Maybe I’ve been too spoiled (ruined?) by running Debian testing for years? Anyway, I don’t judge if you don’t…
Example for a fresh install:
Install the current version – CentOS 7.4 (1708). You can start with a Minimal install or select the GNOME Desktop group during the install process; I’ll explain the extra steps for Minimal just in case.
Log in as root (or su to root), then:

# yum update
# yum groupinstall "GNOME Desktop"
# yum install epel-release
# systemctl set-default graphical.target   (not necessary if you installed GNOME during the install process)
# yum update
# yum install yum-utils   (optional: provides yum-config-manager if it isn’t already installed)
# yum-config-manager --enable epel-testing
# yum update
# yum install cinnamon*

Check with cinnamon --version: if it still reports 2.8, run yum update again, then reboot.
Note: Somewhere during the process, after installing Gnome Desktop, if you are in the GUI and open gnome-tweak-tool, selecting global dark theme helps Gnome apps match cinnamon apps if you use a dark theme in Cinnamon.
At the login screen, click your name, then use the gear icon that appears to select Cinnamon as your session.
Now you should have a shockingly modern-looking and acting CentOS Cinnamon desktop system.
The GNOME installed from EPEL-testing is pretty new, too (for CentOS, anyway) at 3.22.3 as of 11/21/2017. Be aware that most of the other packages have not caught up: the libraries, kernel, and so on still originate circa 2014 (kernel 3.10.0, anybody?). But they’re very, very stable and safe relative to other Linux distros, and will be supported until 2024.
I can’t confirm whether the older kernel is too old for newer systems (e.g. Z170/Z270 chipsets), but it’s worth a shot. I know RHEL backports all sorts of security updates, so they might backport hardware compatibility, too. I have no idea, since I don’t have any computers newer than 7 years old. 😛
Quick guide for less technically-inclined readers: *Confirmed working!* The easiest way to do this is to download the zip, rename IceFilms.bundle-master to IceFilms.bundle, move it to your Plex Media Center Plug-ins directory, and restart your Plex Server. Here’s a quick walk-through:
Step one: Navigate to the Git repository for the IceFilms.bundle package:
Step two: Click on “Download ZIP” at the bottom-right corner of the page. Unzip this and it will give you a folder called IceFilms.bundle-master
Step three: Rename the IceFilms.bundle-master folder to IceFilms.bundle and copy it to your Synology DiskStation. We’ll use /volume1/home/ as an example; the Plex user folder lives under the path /volume1/Plex/.
Step four: After copying the IceFilms.bundle package to your home folder, open your terminal and navigate to your server by typing ssh root@(server address):
Quick tip: Log in to the Synology over SSH as root, using your admin password.
Navigate to your home directory (e.g. /volume1/homes/username/ ) and move the .bundle file to the Plex Library folder:
NA$ mv /volume1/homes/avery/IceFilms.bundle/ /volume1/Plex/Library/Application\ Support/Plex\ Media\ Server/Plug-ins/

Notes: You can hit “tab” in most terminals to auto-complete the name of a directory. The backslash escapes the space in the name of the path or file.
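If the escaped spaces look odd, here’s a small self-contained demonstration you can run in any scratch directory; the folder names are stand-ins for the real Plex paths:

```shell
# Make a destination whose name contains a space, like Plex's
# "Application Support" folder, plus a dummy bundle folder to move.
mkdir -p 'Application Support/Plug-ins'
mkdir -p IceFilms.bundle

# Backslash-escaping each space works...
mv IceFilms.bundle Application\ Support/Plug-ins/

# ...and quoting the whole path is equivalent.
ls 'Application Support/Plug-ins'   # prints: IceFilms.bundle
```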
Step five: Navigate to your Synology DiskStation manager in your browser and open Package Center. Navigate to your installed packages and click on Plex Media Server. Under the Actions drop-down menu, choose “Stop”. Then click the Actions menu again and choose “Run”.
Now, if all went well, you should be able to go to your Plex Media Center in your browser, or Plex Media Theater, etc. and see “IceFilms” as one of your channels.
It’s as simple as that!
TIP: Look in ReallyFuzzy’s GitHub page to find other plug-ins, as well!
Now, here’s the more involved method I used (I can personally verify that both of these methods work):
Caveats: This more complicated guide is only useful for those who 1) Have an interest in obtaining and installing bootstrapper and ipkg package management functionality for their Synology DiskStation 2) Have an interest in having Git clone functionality for their Synology DiskStation
If neither of these apply to you, I suggest you just follow the easy guide above!
Install Git on your Synology NAS by using ipkg. This will enable the ability to get the latest build of the IceFilms.bundle file required for the IceFilms Plex Media Server channel.
Note: installing and using the ipkg package management infrastructure is beyond the scope of this post. To learn more about using a bootstrapper and ipkg on your system, please visit:
I prefer Nano to vi; you can install it with ipkg before doing the configuration-file editing steps explained in the link above. Also, I prefer using the export command to set the ipkg path in the environment, rather than rebooting to acquire the PATH variable (as recommended in the LittleLama post I referenced).
Try it by entering the following and you should be able to run ipkg from anywhere:
NA$ export PATH=/opt/bin:/opt/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/syno/bin:/usr/syno/sbin:/usr/local/bin:/usr/local/sbin

Note: I am using NA$ to signify the command prompt, as the “less-than” sign is not displayed in Blogger; on your NAS you will see a “less than” sign at the end of your server name in the prompt. If you use export, you do *not* need to reboot in order to use ipkg.

Then invoke:

NA$ ipkg update

and

NA$ ipkg upgrade

Install Bash for a more familiar command-line interface:

NA$ ipkg install bash

then run it by invoking:

NA$ bash

Now you will see the # sign as your command prompt, which signifies that you have root access (note: you already HAD root access, but this is the more common convention in *NIX OSes). To log in with Bash automatically the next time you SSH to the server, invoke:

# nano /etc/profile

and add a line that says:

[ -e /opt/bin/bash ] && /opt/bin/bash

Also add the line:

PATH=$PATH:/opt/bin:/opt/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/syno/bin:/usr/syno/sbin:/usr/local/bin:/usr/local/sbin

to avoid having to “export PATH=” the next time you log in. Make sure you delete any other references to PATH in this file (there was one at the end of mine).

Quick tip: You can also have Bash show the current directory in your prompt by invoking:

# nano /root/.bashrc

and adding a line that says:

PS1='\w$ '
Bind the /volume1/@optware directory to your /opt folder:
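The bind command itself isn’t shown above; assuming the bootstrapper put its files in /volume1/@optware (its usual location), the standard Linux bind mount is what’s needed, run as root on the DiskStation:

```shell
# Make the optware tree visible at /opt via a bind mount.
mkdir -p /opt
mount -o bind /volume1/@optware /opt
```

A bind mount does not survive a reboot on its own; DSM users typically re-run the same command from a boot script (e.g. /etc/rc.local, if your DSM version has one).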
And permit a user environment in your /etc/ssh/sshd_config:
# nano /etc/ssh/sshd_config
Find the line that says #PermitUserEnvironment no by pressing CTRL-W and typing in part of the string. Uncomment it by removing the # sign and enable it by changing no to yes. Exit by pressing CTRL-X and save changes.
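If you’d rather not hunt through the file in an editor, the same edit can be scripted with sed. This is a sketch: it’s demonstrated on a sample file here, and you’d run the same substitution against /etc/ssh/sshd_config (as root) on the DiskStation, ideally after backing it up:

```shell
# Demonstrate the substitution on a sample file first.
printf '%s\n' '#PermitUserEnvironment no' > sshd_config.sample

# Uncomment the directive (if commented) and flip it to "yes" in one pass.
sed -i 's/^#\{0,1\}PermitUserEnvironment .*/PermitUserEnvironment yes/' sshd_config.sample

cat sshd_config.sample   # now reads: PermitUserEnvironment yes
```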
# synoservicectl --reload sshd
And you should be good to go!
Now for the fun stuff!
Log back into your DiskStation using SSH (same as above)
Install Git by using ipkg to download the package:
# ipkg install git
Navigate to the Plex Plug-ins directory by invoking:

# cd /volume1/Plex/Library/Application\ Support/Plex\ Media\ Server/Plug-ins/

Then clone the IceFilms.bundle repository by invoking:

# git clone https://github.com/ReallyFuzzy/IceFilms.bundle.git

Note: You can copy the URL from the bottom-right corner of the GitHub page:
Verify that the IceFilms.bundle folder has been copied by invoking:

# ls
You should see IceFilms.bundle listed amongst the other plug-ins (.bundle files) in your plug-ins directory.
Now go to your DSM home page and open “Package Center”. Navigate to your installed packages and click on Plex Media Server. Under the Actions drop-down menu, choose “Stop”. Then click the Actions menu again and choose “Run”.
Navigate to your Plex Media Center and look at your channels. You should see that IceFilms has been installed. This is the best way to install plug-ins to Plex on your Synology DiskStation, as now you can simply invoke # git clone [URL] to clone any package you want! Just don’t forget that you have to restart your Plex Media Server before using the new plug-ins.
Any questions? Want any further explanations? Please leave a comment!