I’ve been working on moving to Windows Server Core for my DCs, and getting adjusted to PowerShell has been slightly daunting, as the only CLI I’m really familiar with is Unix-based – and while there are similarities, to be sure, the two are also very different in a lot of respects. There’s a plethora of remote GUI programs available for configuring Core servers that are useful as “training wheels”, like Windows Admin Center (aka Webmin for Windows) and the ever-ubiquitous RSAT tools, which are desktop applications that can manage remote machines over WinRM – programs like Server Manager, ADUC, PSRemoting (aka MS-SSH – hint: just use SSH), and time-tested (old) MMC snap-ins.
But what if you just want to get something essential done, especially if you don’t have remote access available? Well, there are actually still a lot of options. First, there’s the Sconfig TUI program, which is always included, but it’s extremely limited in scope, and I’ve realized through difficult experiences that it can be problematic (more later).
You actually can use MMC locally on a Server Core machine, but it’s more limited in snap-ins, and often these don’t cover what you need.
MMC on Server Core 2022 doesn’t even have Network Shares or Computer Management – boo.
I’ve learned through installing RocketDock for giggles (honorable mention) that there actually is an instance of the Control Panel on Windows Server Core (I had read otherwise), but it’s pretty :sad trombone:.
All your .cpl are belong to us (all 6 of them, woah)
If none of those fit your necessities, you’re looking at using netsh, wmic, or PowerShell. Thankfully, they’re all fairly comprehensive and easy to learn, even if their syntax is completely different from whatever CLI commands you’ve relied on over the past few decades.
OK, enough rant and external links. Here’s something I discovered pawing through the MS wiki and random blog posts about configuring my DCs’ IP addresses.
I’ve had to re-configure the IPs on my DCs now a couple times since I’ve been migrating them between hosts to deal with hardware upgrades. Every time I clone one and start it on a new machine, it either resets the NIC to DHCP or a link-local address, so I have to re-set the static IP manually and (sometimes) re-authorize the connection to the domain (on the domain controller … irony?).
HAPPYMUFFIN has decided not to recognize he’s a DC anymore, probably because he’s barfing out this Link-Local address
Setting your IP address is obviously trivial on a Server instance with Desktop Experience. Observe:
Like any other Windows Desktop, via Control Panel -> Network Connections
Server Core should be even easier since the SConfig menu pops up whenever you log in, but Sconfig routinely fails to accept a static configuration. Observe:
boo.
Note: if you hate that SConfig pops up when you log in, you can disable it with the Set-SConfig cmdlet, like so:
Set-SConfig -AutoLaunch $false
TL;DR
That was an incredibly long wind-up to say: here’s how to set the NetIPAddress and DnsClientServerAddress in Windows Server Core
Synopsis:
If you’ve got a pre-existing static IP, be sure to delete it first (Remove-NetIPAddress)
If you’ve got DHCP configured, disable it first (Set-NetIPInterface)
Use New-NetIPAddress to create your new address (not Set-NetIPInterface)
Use Set-DnsClientServerAddress to point to your DNS forwarders – which, in my case, are these very DCs I’m configuring
Basics – gathering necessary info:
Get-NetIPInterface will give you a list of network adapters inside your machine currently (including loopback):
Get-NetIPConfiguration will give you more info about a given interface (you can identify it with -InterfaceIndex (e.g. 4) or -InterfaceAlias (e.g. Ethernet0))
Remove-NetIPAddress can help you get rid of that old, pesky address that might be stopping you from employing a new one (if they’re on the same subnet, they’ll probably co-exist peacefully, but different subnets = food for gremlins)
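Put together, the gathering steps look something like this (the index, alias, and address are placeholders – substitute your own):
Get-NetIPInterface # list all interfaces, including loopback
Get-NetIPConfiguration -InterfaceAlias Ethernet0 # or -InterfaceIndex 4
Remove-NetIPAddress -InterfaceAlias Ethernet0 -IPAddress 192.168.1.10 # clear the old static address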
You can pipe these commands to some extent with Get- and Set-NetIPInterface – here’s an example (albeit not a very good one, since it requires more typing):
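A sketch of the idea (the alias is a placeholder – and yes, it’s more typing than just calling Set-NetIPInterface directly):
Get-NetIPInterface -InterfaceAlias Ethernet0 -AddressFamily IPv4 | Set-NetIPInterface -Dhcp Disabled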
The one you’re probably really after isn’t Set-NetIPInterface, but New-NetIPAddress. That kind of threw me off at first, but it’s reflected in how the workflow expects you to delete prior addresses (before creating a new one, right?)
Now that your IP and Gateway are set, the only thing missing is your DNS settings, right? Since this is a DC, I’m going to point it towards myself and the other DC.
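So the whole dance, end to end, looks something like this – every address below is an example, so substitute your own network:
Set-NetIPInterface -InterfaceAlias Ethernet0 -Dhcp Disabled # only if DHCP was on
New-NetIPAddress -InterfaceAlias Ethernet0 -IPAddress 10.0.1.5 -PrefixLength 24 -DefaultGateway 10.0.1.1
Set-DnsClientServerAddress -InterfaceAlias Ethernet0 -ServerAddresses 10.0.1.5, 10.0.1.6 # this DC and its partner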
OK, this post ended up being way longer than I anticipated, but that’s how you get the basics of your network adapter working again if SConfig takes the ultimate dump when trying to use it to configure your Server Core network interfaces. Enjoy!
Concurrent local uploads – not possible when using local file (also no ominous “don’t refresh your window” warning)
I haven’t gotten very far with Harvester yet, having taken down the first cluster I built to re-purpose resources, but I thought I’d explore it again for running some VMS (security video recording) packages at a local business so we can avoid ESXi license costs and potentially scale-out (add servers) in the future without having to get even more licenses for vCenter and use up even more resources.
Harvester’s concept is super cool – set up Kubernetes infrastructure and use it for “legacy workloads” (aka VMs). Contrary to what a lot of other bloggers have written, Harvester does not run containers, so don’t get it twisted. That’s what Rancher is for. (side note: This misinformation got me running down a rabbit hole months ago when I stood up my first Harvester cluster thinking I was going to run a container for my TV recorder. I couldn’t figure out why there was no way to access the container layer, until finally I wrote the developers and they informed me there was no way to access it because it doesn’t exist.)
“Legacy workloads” are completely what we’d planned to run at this business anyway, so that’s just fine. Most decent VMS (video management systems) run in Windows, but we’d like to have some of the features only available via hypervisors, like cloning and testing new systems, or performing updates, without having to take the VM currently in production down or buy another physical machine. Harvester brings it to another level with the easy scale-out (adding more servers), and having cluster-awareness and high-availability embedded by design.
Of course, none of that matters if you can’t get your damn VMs to run, which is the first problem I ran into when I started the thing up. How do I get these clones of existing machines into the storage layer so I can create some new VMs around them?
Harvester only offers two options for creating an image:
Provide a URL that responds to an HTTP GET request for a file that’s a consumable disk-image format (currently qcow2, raw/img or iso)
Upload one of the aforementioned images via a web browser
I was pretty bummed these were the only options when I started out, as I had already connected a disk with my images to the host machine, thinking I could just copy the files from the disk to a particular location on the host filesystem. I probably should have RTFM, as this option (the most intuitive one to me, but alas) totally does not exist.
So I turned off the host and dug the NVMe with my images out, and popped it in a USB enclosure, thinking I’d use the upload option from my laptop. I watched the progress bar for long enough to know when to walk away – the file was 23GB, so I knew it should take a while – but the whole thing left me feeling uneasy that the process wouldn’t work. Sure enough, when I returned to the upload page, there was a “context cancelled” error. I tried it two more times, but Harvester kept thinking I had cancelled the upload at 99% finished. I am fairly certain I encountered a bug, but no time to file an issue, I’ve gotta see if this thing will even function for our workloads, and this wasn’t instilling me with the utmost confidence.
It seems like the “URL” option for providing disk images is the more mature of the two, so I thought I’d look into running a local web server that would respond to a GET request with my image files. This turned out to be super easy, as basically every computer has or can get a copy of python without any trouble. Python has a built-in web server that serves up whatever files are in the directory in which it’s run by responding to GET requests.
If you’ve got python installed, try it out. On my “server” machine, which in my case was my laptop, I had to do a few things to get it ready: namely, make sure port 80 was open in my firewall, and make sure I was using the correct zone on my Wi-Fi connection (for the port I had just opened). I’m on Fedora 37, so if you’re using another OS, you should probably read this instead:
# check current connection:
nmcli con show
NAME UUID TYPE DEVICE
rabbit_hole e130190c-0c3b-4e22-8dc3-ebedc31a7d75 wifi wlp58s0
EastsideBigTom a51d9139-89bf-4a4f-9cad-25aaf3a8a2b0 wifi --
lan on the run 83afc4e0-a0bb-4b62-9437-59478a523da9 wifi --
Wired connection 1 eadbe5cc-ad09-4e43-8ad1-b24412a8610e ethernet --
# check to see if current connection is configured for a zone:
nmcli con show rabbit_hole | grep zone
connection.zone: --
# assign a zone to the current connection since it's not configured:
sudo nmcli con mod rabbit_hole connection.zone home
# make sure it worked:
nmcli con show rabbit_hole | grep zone
connection.zone: home
# open the port in the firewall - you can use any number up to 65535, but I chose 80 so I wouldn't have to use the port notation:
sudo firewall-cmd --zone=home --add-port=80/tcp --permanent
sudo firewall-cmd --reload
If you are OK with this hole being open in your firewall indefinitely, use the --permanent flag for firewall-cmd, otherwise leave it out, and once your firewall is restarted it should be closed.
Now I set up a very rudimentary test to make sure the web server is responding to GET requests, since the machine will respond to a ping, but that’s not very helpful, considering ICMP is a completely different protocol than HTTP:
### on "server" machine (aka laptop, etc.) ###
# navigate to the folder with the files you want to send to Harvester:
cd /path/to/disk/images/you/are/going/to/transfer
# put some gibberish in a file to be served up on a GET request:
echo 'server working' > index.html
# start the actual http server (needs root privileges to attach to socket):
sudo python -m http.server 80
### in Harvester node console ###
# make GET request to "server" for the test gibberish file you created:
curl http://10.0.0.207/index.html
You should see the words “server working” in your Harvester console. If it didn’t work, make sure you spelled everything right, etc. If you don’t know the IP address of your “server”, you can run ip a in your console and it’ll let you know. If you’re not sure which network you’re on, you should probably get a new hobby.
Anyway, now go back to the image menu in Harvester’s web UI, and provide the URL of each file on your “server” (e.g. http://10.0.0.207/your-image.qcow2) to create images on your node. Make sure you spell everything properly, including using the eXaCt SaMe CaSe. Unlike Google, Harvester won’t figure out what you meant if there’s tyops.
This method has proven far more reliable (well, in my case, actually worked) than trying to upload a local file, which is kind of hilarious given it is still uploading the files from the same machine (go figure). Providing files from a local URL also lets you queue a bunch of them to send at once, a definite time-saver. And while the upload function has an ominous warning not to leave the page or refresh your browser, with the URL method you can navigate away from the page without fear of trashing (and painfully re-initiating) your arduously instigated multi-GB transfer process.
Neofetch doesn’t find much when you have no DE/WM/DM or icons, but tries its best…
Some people are charging for Fedora in the Windows store. I guess it’s good work if you can get it (?) Charging for someone else’s free OS seems kinda lame to me, but I don’t mind getting my hands dirty.
I could see this process being intimidating for some people; for them, $5.99 might be a worthwhile investment to avoid doing what I explain herein. But if that’s the case, it’s probably better you just don’t use Linux to begin with. Those are the people for whom paying like 2000% more for a Macintosh is like, totally worth it. You know who you are.
Note: you do need a working installation of WSL (like the Ubuntu base image), or a Linux machine to do this on, to start with. Or you could try a util like imdisk in Windows, but I couldn’t figure it out, so I used WSL, since I already knew how this process could be achieved in Linux (fairly easily).
Open up a command prompt in Windows and find a suitable dir for downloading a large file, such as:
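For example (the mirror path below is a guess based on Fedora’s usual layout – double-check it against the Fedora download page):
cd %USERPROFILE%\Downloads
curl -LO https://download.fedoraproject.org/pub/fedora/linux/releases/36/Cloud/x86_64/images/Fedora-Cloud-Base-36-1.5.x86_64.raw.xz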
I chose Fedora Cloud, which is pretty cool, mainly just because it was available in a .raw file, while Fedora Server was only available in .iso. I don’t know much about it, but I noticed it uses systemd-networkd instead of NetworkManager, which makes sense for a cloud-centric OS. That’s a plus in my book.
Choose a distro to invoke, enter wsl from cmd, and you should be in the same dir as you were previously:
wsl -l -v
NAME STATE VERSION
* Ubuntu Running 2
wsl -d ubuntu # you can just invoke wsl unless you need to specify a distro
Once you’re in wsl, you can use bash to save an environment variable for your username and copy these commands straight across (except /dev/loop – make sure you’re using the right loop number and partitions; those are unique to your system and unlikely to be the same, especially if you use a different distro or release).
export USERNAME=<your username on the PC>
Decompress the .raw from the archive, e.g.:
xz -d Fedora-Cloud-Base-36-1.5.x86_64.raw.xz
xz deletes the original, so no need to worry about that. Now create a loop device for all the partitions, and list those mounts:
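Assuming the same filename as above (-f grabs the next free loop device, -P scans the partition table, --show prints which device you got):
losetup -fP --show Fedora-Cloud-Base-36-1.5.x86_64.raw
losetup -l # list loop devices to double-check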
At e.g. /dev/loop0 you should have your .raw file mounted as a device. Now you actually have to mount it as one image (as if it were booted). Create a working subdirectory:
mkdir -p /media/raw
list the partitions on your loop device:
fdisk -l /dev/loop0
Disk /dev/loop0: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: BF6BA5C6-0688-4AB6-B09C-942136A8C104
Device Start End Sectors Size Type
/dev/loop0p1 2048 4095 2048 1M BIOS boot
/dev/loop0p2 4096 2052095 2048000 1000M Linux filesystem
/dev/loop0p3 2052096 2256895 204800 100M EFI System
/dev/loop0p4 2256896 2265087 8192 4M PowerPC PReP boot
/dev/loop0p5 2265088 10483711 8218624 3.9G Linux filesystem
The root partition is going to go first, then boot, and finally efi – in that order.
mount /dev/loop0p5 /media/raw # largest partition generally indicates root
Look at what you’re working with, sometimes there’s a root subdir, but usually there isn’t:
ls /media/raw
home  root
This time there was a subdir named root, so base the subsequent mounts off of your new information:
mount /dev/loop0p2 /media/raw/root/boot
mount /dev/loop0p3 /media/raw/root/boot/efi
Now you can go to the root folder of the filesystem and archive the image in one go – the trick to making the syntax simple is being inside the directory but saving the .tar elsewhere (in this example, we’ll use the parent folder):
cd /media/raw/root
tar -cvf ../fedora-cloud-36.tar . # notice the trailing period, it indicates $pwd
And in the parent directory will be your tar. Now move it out to C: and import it as a new distro.
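Something like this works (from inside WSL, C: lives at /mnt/c; the distro name and install folder are examples – pick your own):
cp ../fedora-cloud-36.tar /mnt/c/Users/$USERNAME/
exit # back in cmd, import the tar as a new distro and start it:
wsl --import fedora-cloud C:\wsl\fedora-cloud C:\Users\%USERNAME%\fedora-cloud-36.tar
wsl -d fedora-cloud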
BTW, once you’re inside the new distro, you can safely remove the kernel, since WSL always uses the one installed through Windows:
dnf remove -y kernel-core
Enjoy Fedora …
PS: for homework, you could figure out what else you don’t need that you could uninstall or replace, depending on what distro you choose. Systemd comes to mind, or whatever init system you’re using, since you can’t use that in WSL, either… fun fact: you use task scheduler in Windows to schedule stuff to run in WSL instead of cron…
Ripped everything out of one of these cases to make a router
I’ve noticed this blog is really less about the info I’m trying to share with people, and more of a collection of me rambling on about the stuff I’ve fixed or put together.
I’m not sure what’s wrong with me. I just spent an hour going off on another blog about this case I own that I made a firewall out of. Since I put so much time and energy into it, I thought I’d bring it here and add it to the collection: http://disq.us/p/2p7i6zw
I have one of those Norco RPC-230 cases; I’ve had it since 2011. It’s crossed 2 states during 4 moves and been through at least 6 different configurations, so it’s seen a thing or two.
I suffered through using it longest as a TV recorder PC case, running HDHomeRun for at least a year or two, with 4x HGST 7K2000s in it – power-hungry, fast-spinning, hot, old 7200RPM drives. The thermal design of this case is garbage: the fans only hit 2 out of 4 of the drives, so the two coolest HDDs ran 50+ deg C 24/7 that entire time, the other two 55-60+ deg. It was a major testament to HGST’s enterprise line, as they were refurbished, so undoubtedly they were tormented mercilessly before I even got them. The case I bought new, but it was chipped and scraped after the first of the builds it endured. The front USB connector broke in the middle within a year of normal use. It still works, but is difficult to plug into. As far as quality: amazing drives, crap case; both still work, only one deserves praise.
I also used this case for a desktop PC for a while, with an i5-3570K 77W TDP processor. With the stock CPU cooler it would idle at 65 deg C within 5 minutes, which was unacceptable. A Thermaltake Gravity cooler got it under 50 deg C, but the clearance was only a couple mm, so it probably could have been more efficient had there been more room, or perhaps a shorter cooler. The Gravity is 3-pin non-PWM, though, which in this case is probably better, since the case ran hot under every circumstance, so there’s no reason to give a PWM fan any time to decide to rev up.
I thought about drilling some holes in the side near where the CPU would usually sit, since it doesn’t have any airflow from either the side or the back to speak of, but have you ever tried to drill through sheet metal? It’s not as easy as it sounds, even with a specialized drill bit. Even the aluminum foil-like character of the RPC-230’s build quality would present a painstaking chore. I’m glad I decided not to, because I ended up putting a board in it where the CPU isn’t even in the back, it’s in the front, so it wouldn’t have made any difference for my current config, other than to look hideously abused.
Anything over 40 deg C consistently makes me nervous, personally, so I was never at ease running any equipment in the Norco RPC-230 case that wasn’t meant to be low-powered, cool, designed for minimal energy consumption, etc. So I finally took a server board, an X9SPU-F, meant for proprietary cases that are near impossible to find, and put it in the RPC with a very low-TDP CPU. I chose the RPC-230 mainly because the motherboard wouldn’t fit in anything else I owned at the time, but also because I finally had a very low-TDP processor I intended to use with it. I’d been watching prices of the E3-1220Lv2 for a couple years, and they plummeted from around $150 to around $35 at the time (and now, virtually worthless). Finally, I had the type of components that wouldn’t make you think you could fry an egg on the top of this thing.
To fit the X9SPU-F in the RPC-230, I had to rip the drive tray out of the front. There’s no use putting drives in the RPC-230 anyway, unless you want to make fried hard drive souffle. Then, I equipped this odd-shaped server board with an E3-1220Lv2 which is 17w TDP, butchered a heatsink out of a CPU cooler from a Dell SFF Optiplex 7020 and tightly zip-tied a Noctua NF-A8 to it (works great). Then, I velcroed a couple mSATA SSDs to the bottom of the case in a single 2-port mSATA to SATA adapter board for a RAID1-ish config designed to keep the system from being taken down by an IO shit-nado of network log files.
Now it’s an edge router + firewall! Has dual Intel NICs and IPMI, too, so it’s fancy. Pulls less than 40w at the wall, does up to 2.4Gbps as tested with an 82599 (possibly faster if not using FreeBSD/pfSense, which has terrible Intel NIC driver and kernel regressions, and less-than-ideal sysctl defaults for 10GbE).
For a network device SSD application, I’ve tended to think size and speed are secondary to being able to take a very certain and constant beating. I’ve been using these 24GB Intel 313 SLC mSATA SSDs on dual-SATA daughterboards and putting them in some kind of SDS version of RAID1 (either with mdadm or ZFS). The key is they’re SLC, which is hard to find and wasn’t made very long, for cost reasons. However, you’d be hard-pressed to find another SSD that’ll take a beating like these. They were designed to be cache devices for an old Windows 7 + RST config that never really caught on, in I want to say like 2014. The plus side is you can find a bunch of them around for cheap, and they’re pretty unlikely to die. They only get about 120MB/s read, 60MB/s write, but it doesn’t matter, and if you use an mSATA to SATA converter board like mine that has one port per drive, you can reap some benefit in throughput from running your SDS RAID1-ish mirror-type config (in this device’s case, it’s running ZFS).
And such is the tale of the long and winding life of my poor, beat-up, ill-equipped for most tasks I envisioned, but still utilized, and now humbly revered for being able to fit my bizarrely-shaped server board, Norco RPC-230.
One of the many logos from the many articles discussing Fedora 33’s stunning move to finally ditch EXT4 for btrfs, after their unfortunate habit of talking much trash about btrfs for a number of years
Hey
I ran into an issue where I had an unpopulated grub menu on a Fedora 36 Workstation installation. It ended up only booting to the empty menu, but at least I could drop to a prompt to try and figure out how to get through it.
The usual routine – booting with a live USB, mounting the affected drive to /mnt and the other partitions and bind mounts respectively, and rebuilding grub.cfg from chroot – didn’t work, due to an error in the chroot environment that made it unable to see /dev or use any of the usual tools to build the file.
TL;DR –
All you really need to read in this article to boot from the grub prompt is in the 2nd codeblock below (some people are just here for the lols)
So booting from the grub prompt was necessary in order to be in an environment where /dev would be recognized and grub.cfg could be rebuilt – since grub.cfg isn’t anything that can be edited manually, unfortunately (it looks like it could be, the way it’s named, but if you know anything about how grub works, you know it’s definitely not. Not even a little bit…)
The example I gave in the last article was done on Ubuntu. It looks like pretty much any other grub rescue operation using EXT4, XFS, etc., but it’s actually a homebrew ZFS initrd – I used to make my own ZFS built-in kernels regularly, and threw together a script for automating the process if you’re interested: https://github.com/averyfreeman/zfs-kernel-builder
But lately I’ve been dabbling with Fedora so this article’s related to Fedora. It’s upstream for some great software, what can I say. I had an NVMe from a Thinkpad I’ve had for a couple years I tried to swap into a newer Thinkpad I was upgrading to, because Thinkpads. For some reason grub.cfg got hosed in the process and I couldn’t boot from it.
Not wanting to take too much time troubleshooting, I fresh-installed a new copy of Fedora 36 to another NVMe in the new laptop and copied as much as I could from /home and a package list from repoquery -a --installed run using chroot from the old NVMe in an external enclosure (gee willikers, I sure love my external NVMe with Thunderbolt 3, by golly).
Since I still had the old installation all set up, I thought I’d try and rescue it eventually, so when I got a moment I booted it from the external NVMe enclosure on another laptop with Thunderbolt using VMware Workstation and the enclosure as a physical disk .vmdk (hypervisors are awesome).
After failing with the tried-and-true live USB chroot rescue method, at least I could get into the grub menu without issue. That poor, empty menu, so lonely feeling with its 0 boot loader entries. I knew the OS was still there, but how to get into it without the entries? What’s a nerd to do? Start messing around with the prompt to explore how to crack open the damn thing!
Since this was btrfs, of course, it was a little different, and since each distro uses a different subvolume layout, none of the info about any of them translates to the others very well. At the time of writing, I didn’t see any definitive info on how to boot from the grub prompt on stock btrfs Fedora Workstation, which is kind of surprising, since btrfs has been the default since version 33, a couple of years ago now.
There’s a great guide about Ubuntu using btrfs on “nixventure”, which I admit I’d never heard of before, but it appears very thorough, and I saw one from Debian (I believe) that I’m not going to reference because it was less remarkable. For Fedora, though, there were just a bunch of forums with people flailing about trying to figure out the same thing I was, ending up unsuccessful and presumably giving up, judging from the look of their abruptly truncated threads. So that was concerning, to say the least.
I even tried “rescatux” automated rescue tool, if you can believe that, because I was being lazy and kind of running out of ideas. I can’t say I give it a glowing review, but it tries. That statement probably indicates how well it worked for my needs. I had to suck it up and adapt what I knew about using the grub prompt to the new partition layout and filesystem. Thankfully, it all turned out well in the end.
Here’s some notes I took while I was working through the process:
# Fedora's variant of grub has some helpful btrfs-specific commands:
grub> help
. . . commands . . .
btrfs-get-default-subvol (hd0,gpt3)
btrfs-info
btrfs-list-subvols (hd0,gpt3)
btrfs-mount-subvol
. . . more commands . . .
# so you get information like this:
grub> btrfs-list-subvols (hd0,gpt3)
ID 256 path home
ID 257 path root
ID 258 path var
# and this:
grub> btrfs-info (hd0,gpt3)
Label: 'fedora_treygouty' uuid: 7caff388-2bb3-434a-a927-096dac2dc892
Total devices 1 FS bytes used 298520481792
# Intuitively, I thought this would work, but I was mistaken:
grub> linux /vmlinuz-5.17.9-300 <tab-tab works> root=UUID=7caff388-2bb3-434a-a927-096dac2dc892 ro rootflags=subvol=root
# You'd probably have to write all that out by hand, so be thankful I'm telling you ahead of time it didn't work for me
TL;DR #2 – you’re getting closer …
Anyway, I’ll cut to the chase and show how I did it. Note, again, for the record, this is the bare-minimum default partition structure on an Anaconda installed Fedora 36 Workstation setup. No LUKS or other encryption, no LVM.
Note: For the record, this is the first Anaconda (Red Hat’s installer) release where LVM hasn’t been enabled in the default partition configuration. LVM is marginally useful with btrfs – I’ve tried the combination before on openSUSE – but comparatively it’s much less of a crutch than it is for EXT4 or XFS filesystems. So if RH’s installer says don’t bother with LVM, don’t bother… It’s RH, you know how they LOVE their LVM; if they’re not recommending it, it really must not be necessary. /rant
# Quick grub prompt recap --
# First, list your storage devices:
grub> ls
(proc) (hd0) (hd0,gpt3) (hd0,gpt2) (hd0,gpt1) (cd0) (cd0,msdos2)
# If you're lucky like me and you only have one drive connected, /boot will be easy to find. /boot is almost always gpt2, and there's only one hard drive, so...:
grub> set root=(hd0,gpt2)
grub> ls /
config-5.17.9-300.fc36.x86_64
efi
extlinux
grub2
initramfs-0-rescue-334f7d6e8a0942d388bafdab5fa2a0a1.img
initramfs-5.17.9-300.fc36.x86_64.img
loader
lost+found
symvers-5.17.9-300.fc36.x86_64.gz
System.map-5.17.9-300.fc36.x86_64
vmlinuz-0-rescue-334f7d6e8a0942d388bafdab5fa2a0a1
vmlinuz-5.17.9-300.fc36.x86_64
# Set your vmlinuz and root device + partition. Here (hd0,gpt3) corresponds with /dev/sda3:
grub> linux /vmlinuz-5.17.9-300.fc36.x86_64 root=/dev/sda3 ro rootflags=subvol=root
# Then boot 'er up:
grub> boot
That’s really all there is to it. It’s not too bad once you know what you’re doing. Since the process is so short, you might want to poke around some more to try and get more info, or to try and give your life meaning. There’s the (hd0,gpt2)/grub2/grub.cfg file, or the /loader/entries folder – you can cat anything in either of those. Type help or something. Go wild.
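Oh, and once you’re booted, don’t forget to do what you came for and rebuild grub.cfg from the running system so the menu gets repopulated – on stock Fedora 36 that should be something along the lines of:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg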
Coincidentally, all the loaders were in the loader directory the entire time, but none of them would load all of the things. Such a bummer when one cannot load all of the things. So, frustrating, but glad it all worked out in the end. Hope this helps someone else, too.
Picture depicts Houdinis as books disappearing from a library, while our problem depicts missing libraries themselves. Kinda makes you think, doesn’t it?
Hey,
I just came across an issue many people have been experiencing after upgrading VMware Workstation to 16.2.1 in Windows. It looks like the installer has a bug that deletes two necessary files by accident, and their absence prevents the programs from running, beginning with vmware-tray.exe, which is the precursor to all things Workstation.
The lawnmowered files are two dynamic link libraries (“DLLs”) responsible for encryption. Their names are libssl-1_1.dll and libcrypto-1_1.dll. Thankfully, I’ve managed to gather them from a reliable source and have re-packaged them for your convenience. Below is an archive containing both missing files for you to download, in hopes that you may use them to remedy this rank insipidity:
The organization responsible for storing and distributing DLL files such as these is dll-files.com, located at https://www.dll-files.com and run by Tilf AB in lovely Sweden. Their website offers a surprisingly painless, hassle-free download (full disclosure: no affiliation).
I actually visited a few other sites looking for these DLL files before I found dll-files.com, but all the other sites offered were dubious “utilities” containing code that made my antivirus software blush a deep shade of “seriously?”.
Speaking of malware, everything I am providing today has been scanned with Malwarebytes, the best antivirus software in the biz (full disclosure: no affiliation), which reports these DLLs, and obligatory license + attribution document, are as squeaky-clean as a newborn antelope (devouring placenta does have advantages). I installed them on my laptop, the one I’m using now, and it’s been smoother sailing than vacationing on Velveeta.
Installing random files with these kinds of file names actually can be a little scary, considering all the ransomware thieves who were wreaking havoc across the US a couple years ago, but legitimate programs use cryptographic libraries just as often as cyber-criminals do, perhaps even more. So never you be a’ feared of them scary ol’ names, now, their bark a’ far worse than their soo-eey.
Come to think of it, in contrast to the wrath of cryptographic criminals a couple years ago, it seems that recently the media is more likely to associate “crypto” with blockchain currency. At least wealth is a more pleasant association than robbery… <dentist office music>
To install the files above after you download them, unzip the .7z file using 7zip archive software, and move the two .dll files contained therein to the folder located at C:\Program Files (x86)\VMware\VMware Workstation\<files go here>
I used an admin command prompt to move mine, but you could just as easily open your file explorer to drag-and-drop them, I’m sure. Once Meta gets off their behinds and releases the first telekinesis controller, we’ll all be able to will our files across our hard drives with sheer focus and determination, but until then it’s a flail across our mice and keyboards. Sorry to remind you we’re still savages with disappearing DLLs.
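If you go the admin command prompt route, it’s just a couple of moves (assuming the .dlls landed in your Downloads folder):
move "%USERPROFILE%\Downloads\libssl-1_1.dll" "C:\Program Files (x86)\VMware\VMware Workstation\"
move "%USERPROFILE%\Downloads\libcrypto-1_1.dll" "C:\Program Files (x86)\VMware\VMware Workstation\"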
My system runs ZFS and lately has been dropping to the initramfs / busybox prompt on boot. I had a hard time finding a fleshed-out guide on how to mount ZFS in a live environment for performing disaster recovery tasks like chroot and grub repair, so I thought I’d write something up.
My system was dropping to the busybox prompt after the GRUB menu. I started experiencing the issue after a routine apt upgrade: I rebooted and wasn’t able to get any of my initramfs to boot. It seems a little strange, because usually the inability to boot will be limited to a new initramfs – e.g. an older version of the kernel will still have the ZFS drivers, or other necessary components to boot, while the newer versions (the ones just installed) will be lacking these necessary components for whatever reason.
First of all, burn yourself a copy of a live USB, and boot into it. Luckily, the newest version of Ubuntu (22.04 – Jammy Jellyfish) has the ZFS drivers and executables installed by default, unlike prior versions where you had to add the multiverse repo manually, download the packages, and enable the ZFS drivers using modprobe.
A peek at lsmod shows the ZFS drivers are indeed loaded, and lo-and-behold, there’s the zpool and zfs executables:
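(The check itself is just these two commands – output omitted, but you’re looking for zfs modules in lsmod, and both executables found in PATH:)
# lsmod | grep zfs
# which zpool zfs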
The drive I am diagnosing is the internal NVMe, so there’s no need to attach it. One question I had was how to mount the two pools, and in what order. By default, Ubuntu creates an rpool for the root partition, and a bpool for the boot partition.
Generally, on an EFI system, one would mount the root partition in a clean directory like /mnt first, and subsequently mount boot at /mnt/boot once it is provided by the previously mounted root partition, and then mount efi at /mnt/boot/efi once that’s provided by the boot partition. As you can see, the order of mounting these partitions is therefore of paramount importance, but as there are only 3 options, it’s not too complicated.
You’ll need to be root for basically all these commands. Using sudo su without a password will typically get you to a root prompt (#) in a live environment.
TL;DR – probably way more than you ever wanted to know about an lsblk device list:
First, we should identify the storage devices using lsblk -f (the -f flag includes the filesystem information, which is important for our purposes):
OK, there’s a lot there, so what are we looking at? Well, the first 9 devices that say loop are snaps, since we’re on Ubuntu. Those are responsible for storing some of the programs being run by the OS. Each one gets their own virtual storage device, sometimes referred to as an “overlay”. They create a fair amount of clutter in our device list, but that’s about all. You can ignore them.
Then, /dev/sda is our copy of the Ubuntu ISO we booted from – you can see how it says cdrom there, and iso9660 (the cdrom spec). It’s read-only, so we couldn’t do anything with it if we wanted to, and we don’t, so let’s move on…
There’s a device for log and crash log, so that’s kind of interesting. I imagine the live ISO makes those since you can’t write to the USB drive, seeing as the ISO is a virtual CD-ROM, and CD-ROMs are read-only. Then there’s a bunch of what are called “zvols” (the zd0, zd16, etc. devices – see those?). Those are devices created with ZFS that are isolated from the rest of the filesystem. zvols are virtual block devices you can use just like any other block device, but in this context they’re typically either formatted with a different filesystem, or mounted via iSCSI for block-level filesharing (filesystem-sharing?). You can see these ones say btrfs; they were actually created for use with container runtimes, namely podman and systemd-container, both of which support btrfs very well and ZFS either nominally or not at all.
Now we get to nvme1n1 – this is the first NVMe drive listed. Generally nvme0 would be listed first, but for some reason it’s listed second. The number after nvme identifies the drive itself (this being the second NVMe drive in the laptop), n1 is its first namespace, and after that the partitions are listed as p1, p2, p3, and so on. Here’s the drive in isolation:
The canonical address for this drive is /dev/nvme1n1p{1,2,3,4}. The /dev (device) folder, while not listed in this output, is important to reference, as the full path is required for mounting a partition. Typically one would only mount a single partition at a time, but you could conceivably chain them in a single command by using curly braces, as shown. This is not common, as you will probably need to mount different partitions in different locations (e.g. /mnt, /mnt/boot), and usually either in descending order, or with no pattern at all.
If you remember back at the start, I mentioned the rpool and bpool. These are seen on /dev/nvme1n1p4 and /dev/nvme1n1p3 respectively. If the disk were formatted in a block filesystem such as EXT4 (Ubuntu’s default filesystem), the root partition could be mounted by attaching /dev/nvme1n1p4 to an empty folder. The command would therefore be:
# mount /dev/nvme1n1p4 /mnt
And then you’d be able to ls /mnt and see the files contained on your newly mounted root partition. E.g.:
# ls /mnt
Qogir boot dev home lib32 libx32 mnt proc run snap sys usr
bin cdrom etc lib lib64 media opt root sbin srv tmp var
But this NVMe is formatted using ZFS. So what to do? That’s the process I was having difficulty finding that inspired this blog post.
End TL;DR – here’s the ZFS-specific stuff again:
First, confirm that you have your ZFS modules loaded by referencing your list of loaded kernel modules, and that your ZFS executables are available in PATH (here’s the syntax again so you don’t have to scroll back):
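# lsmod | grep zfs
# which zpool zfs
(same deal as before – zfs modules loaded, executables in PATH)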
Here’s where it’s different than your typical mount. You use zpool to import rpool, but you need to mount it using an alternate root (at /mnt) – otherwise it’ll try to mount itself over your live environment! Then confirm that the import worked.
# zpool import -f rpool -R /mnt
# ls /mnt
Qogir boot dev home lib32 libx32 mnt proc run snap sys usr
bin cdrom etc lib lib64 media opt root sbin srv tmp var
OK, that went well. You can see that now we have a /mnt/boot folder, which is boot inside rpool – that’s where the initramfs files live, but they’re stored in the bpool. We needed that folder to be available to mount our bpool into. So, let’s import bpool with an alternate root so it lands at /mnt/boot (if we didn’t, it’d try and overwrite our currently mounted /boot partition):
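# zpool import -f bpool -R /mnt
(bpool’s datasets already have /boot in their mountpoint, so with the /mnt altroot they land at /mnt/boot)
# ls /mnt/boot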
That looks like a bunch of initramfs files to me! Good, so that means those kickstarter runtimes that load from grub are available.
If you look in that list, you’ll also see both efi and grub folders. Both of those are empty and waiting for storage to be attached. The efi partition lives in the first partition of the same NVMe drive, and is formatted with FAT, and grub is a bind-mount (you can see it in /etc/fstab):
# mount -t msdos /dev/nvme1n1p1 /mnt/boot/efi
You can also use the UUID from lsblk if you prefer (just use one or the other, not both):
# mount -t msdos UUID=B045-5C3B /mnt/boot/efi
# ls /mnt/boot/efi
efi grub system~1 (confirm it's mounted)
# grep grub /mnt/etc/fstab
/boot/efi/grub /boot/grub none defaults,bind 0 0
(we'll bind-mount this in next step)
Then you’ll want to mount a few system folders inside your drive’s filesystem so you can access them inside the chroot (required for things to work OK):
# for i in proc dev sys dev/pts; do mount -v --bind /$i /mnt/$i; done
mount: /proc bound on /mnt/proc.
mount: /dev bound on /mnt/dev.
mount: /sys bound on /mnt/sys.
mount: /dev/pts bound on /mnt/dev/pts.
# mount -v --bind /mnt/boot/efi/grub /mnt/boot/grub
mount: /mnt/boot/efi/grub bound on /mnt/boot/grub.
“chrooting”: Now that all 3 partitions are mounted together in a cohesive filesystem tree, and you’ve got all your necessary bind mounts, one of the most effective ways to diagnose issues as if you’re running the affected disk, is to chroot into the filesystem. Run # chroot /mnt and now you’ll see /mnt as / (root), and you can run your programs as if you booted the computer using that drive (from the terminal, anyway):
# chroot /mnt
# apt update (failed)
# cd /etc
# ls -la resolv.conf
lrwxrwxrwx 1 root root 39 Feb 17 12:09 resolv.conf -> ../run/systemd/resolve/stub-resolv.conf
If your network connection fails inside the chroot like mine did, go to /etc and delete resolv.conf if it’s a symlink to systemd-resolved (as shown above). Then point /etc/resolv.conf to a known good dns forwarder (e.g. 1.1.1.1, 8.8.8.8, etc.)
# echo 'nameserver 8.8.8.8' > resolv.conf
# apt update (works)
# apt list --installed | grep dkms
dkms/jammy,now 2.8.7-2ubuntu2 all [installed,automatic]
zfs-dkms/jammy-proposed,now 2.1.4-0ubuntu0.1 all [installed]
I was really hoping zfs-dkms got uninstalled somehow, because I thought that might have been why my initramfs files didn’t have zfs modules. So unfortunately I still have to keep looking to figure out what’s wrong…
Note, you’ll probably see this error a lot, but it’s safe to ignore:
ERROR couldn't connect to zsys daemon: connection error: desc = "transport: Error while dialing dial unix /run/zsysd.sock: connect: connection refused"
Let’s try upgrading the packages and see what shakes out:
# apt upgrade
The following packages were automatically installed and are no longer required:
linux-headers-5.15.32-xanmod1 linux-headers-5.15.34-xanmod1
linux-headers-5.15.36-xanmod1 linux-headers-5.17.0-xanmod1
linux-headers-5.17.1-xanmod1 linux-headers-5.17.3-xanmod1
linux-headers-5.17.5-xanmod1 linux-image-5.15.32-xanmod1
linux-image-5.15.34-xanmod1 linux-image-5.15.36-xanmod1
linux-image-5.17.0-xanmod1 linux-image-5.17.1-xanmod1
linux-image-5.17.3-xanmod1 linux-image-5.17.5-xanmod1
Use 'sudo apt autoremove' to remove them.
That was … interesting … and then the issue presented itself while I ran apt autoremove:
Setting up linux-image-5.17.9-xanmod1 (5.17.9-xanmod1-0~git20220518.d88d798) ...
* dkms: running auto installation service for kernel 5.17.9-xanmod1 [ OK ]
update-initramfs: Generating /boot/initrd.img-5.17.9-xanmod1
zstd: error 25 : Write error : No space left on device (cannot write compressed
block)
bpool has no space left. That’s almost certainly the problem. I’m going to remove a couple kernels and rebuild all my initramfs, that ought to do it. I’m also noticing my bpool is full of snapshots. List current snapshots with this first command, and then destroy them with the second one:
// This lists the snapshots:
# zfs list -H -o name -t snapshot | grep bpool
...auto-snapshots look like pool/BOOT/ubuntu_pd3ehl@autozsys_xxxx,
snapshots have @ symbol - no @ symbol, not a snapshot, don't delete it!
// This destroys the snapshots:
# zfs list -H -o name -t snapshot | grep bpool | xargs -n1 zfs destroy -r
What this does:
(list only snapshots by full name) | (list only bpool) | (delete by ea line)
It's the same as what's above, but with the delete command, destroy.
Make sure you understand what's going on with this command, as you can delete stuff you don't want to really easily. Please be careful.
Install a generic kernel to make sure you have one available, and check that zfs-initramfs is installed if all you’re going to use is the generic kernel (or zfs-dkms if using xanmod or another 3rd-party kernel). E.g., I got rid of my xanmod kernels just so I wouldn’t have to deal with building custom dkms modules:
# apt list --installed | grep xanmod
linux-headers-5.15.40-xanmod1-tt/unknown,now 5.15.40-xanmod1-tt-0~git20220515.867e3cb amd64 [installed,automatic]
linux-image-5.15.40-xanmod1-tt/unknown,now 5.15.40-xanmod1-tt-0~git20220515.867e3cb amd64 [installed,automatic]
linux-xanmod-tt/unknown,now 5.15.40-xanmod1-tt-0 amd64 [installed]
xanmod-repository/unknown,now 1.0.5 all [installed]
# apt remove linux-headers-5.15.40-xanmod1-tt linux-image-5.15.40-xanmod1-tt xanmod-repository linux-xanmod-tt zfs-dkms
. . .
The following packages will be REMOVED:
linux-headers-5.15.40-xanmod1-tt linux-image-5.15.40-xanmod1-tt
linux-xanmod-tt xanmod-repository zfs-dkms
Do you want to continue? [Y/n]
. . .
# apt autoremove -y
... install a couple kernels...
# apt install -y linux-{image,headers}-5.15.0-28-generic linux-{image,headers}-5.15.0-33-generic
. . . using versions that are most current & 2nd most current now . . .
Then update all the initramfs one last time, just in case. I’ll probably re-install grub, too, just bc, but one thing at a time…
# update-initramfs -uvk all
. . . lots of output . . . that's how you know it's working . . .
Let’s re-install grub and run update-grub
# grub-install --bootloader-id=ubuntu --recheck --target=x86_64-efi --efi-directory=/boot/efi --no-floppy
Installing for x86_64-efi platform.
grub-install: warning: EFI variables cannot be set on this system.
grub-install: warning: You will have to complete the GRUB setup manually.
Installation finished. No error reported.
When you get this warning, it just means you can’t set the UEFI boot order while you’re in a chroot. I also like to run update-grub for good measure (this is grub2-mkconfig -o /boot/grub/grub.cfg on most other systems, if that sounds more familiar to you). update-grub rebuilds the entries in your grub menu, along with their parameters detailed in /etc/default/grub.
Speaking of which, you can always take a peek at /etc/default/grub before you run this command – just in case.
# which update-grub
/usr/sbin/update-grub
# cat /usr/sbin/update-grub
// update-grub:
#!/bin/sh
set -e
exec grub-mkconfig -o /boot/grub/grub.cfg "$@"
# update-grub
Sourcing file `/etc/default/grub'
Sourcing file `/etc/default/grub.d/init-select.cfg'
Generating grub configuration file ...
Found linux image: vmlinuz-5.15.0-33-generic in rpool/ROOT/ubuntu_pd3ehl
Found initrd image: initrd.img-5.15.0-33-generic in rpool/ROOT/ubuntu_pd3ehl
Found linux image: vmlinuz-5.15.0-28-generic in rpool/ROOT/ubuntu_pd3ehl
Found initrd image: initrd.img-5.15.0-28-generic in rpool/ROOT/ubuntu_pd3ehl
Found linux image: vmlinuz-5.15.0-33-generic in rpool/ROOT/ubuntu_pd3ehl@autozsys_yg50xc
. . . snapshot menu entries . . .
Now leave the chroot, remove the system folder redirects and bind mounts, and reboot, like so:
# exit
# for i in proc dev/pts dev sys boot/grub; do umount -v /mnt/$i; done
umount: /mnt/proc unmounted
umount: /mnt/dev/pts unmounted
umount: /mnt/dev unmounted
umount: /mnt/sys unmounted
umount: /mnt/boot/grub unmounted
# umount -v /dev/nvme1n1p1
umount: /mnt/boot/efi (/dev/nvme1n1p1) unmounted
# zpool export bpool
# zpool export rpool
One last quick thing you can do before rebooting is check out efibootmgr and see which order your system will start up in. This is a little easier and more predictable, as you can make sure you boot from the right efi file, rather than mashing the startup menu button to make sure it loads the correct disk / efi.
Some stuff I was messing with, trying to cover all the bases. efibootmgr reference: https://wiki.archlinux.org/title/GRUB/EFI_examples#Asus
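The gist of it (the entry numbers below are examples – use whatever -v reports for your system):
# efibootmgr -v
(lists your boot entries and the current BootOrder)
# efibootmgr -o 0003,0001
(sets the boot order)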
A troubleshooting tip: If you have issues using the pool names with zpool for some reason, the UUIDs are listed in lsblk. While technically interchangeable, the UUID can coax some commands into working correctly when the name can’t.
If it doesn’t boot from the ZFS drive again, boot it into the live ISO and go through everything all over … 😉 Good luck!!
I meant to post this over a month ago, but got sidetracked, so I’m coming in a little late. Unfortunately, it looks like even though Fedora 36 has officially been released, the dpdk 22.03 rpms still aren’t available.
Back when I compiled these, I realized there were no official dpdk 22.03 rpms available in the yum repos, despite being required for openvswitch. So I compiled them so I could install openvswitch for use with virt-manager.
Dpdk on Fedora has an official maintainer; they probably just got sidetracked themselves – I can relate. But I had these packages already and wanted to help other people install openvswitch, so I posted the resulting packages in a github repo in case other people want to download them.
So if you (like me) want to run openvswitch on your fancy new officially-released non-beta Fedora 36 workstation (or server, or silverblue, kinoite or iot using rpm-ostree), and you don’t want to wait or downgrade, you’d either have to compile dpdk 22.03 yourself, or now you can download the rpms from my repo.
I’ve got all the instructions to go through the build process if you’d like to get your hands dirty with compilation (it’s pretty straightforward): https://github.com/averyfreeman/dpdk-2203-for-fedora36
The .rpm files are also there if you don’t care for all the fuss of compilation. You can use them as openvswitch dependencies; just install them before you try to install openvswitch using dnf.
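Something like this, assuming you’ve downloaded the .rpms into your current directory:
sudo dnf install ./dpdk*.rpm
sudo dnf install openvswitch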
I should have made this post sooner so people knew the rpms were available, but at least I put a note or two in on reddit in a couple key places (now that I think about it, probably just /r/fedora). Nothing beats the officialism of posting a fancy notice on your wordpress blog, though, amirite?
Speaking of which, I think I will be looking at a way of integrating this blog with my repo – I’m hoping I can figure out a way to produce and update posts on wordpress automatically by creating a gist or a new repo. Then the two would be more tightly integrated, and could avoid this whole getting sidetracked issue…
The less people get sidetracked, the sooner they use software…
TL;DR: drag the icon from the shell:AppsFolder over to the shell:startup folder!
I am building a dedicated TV viewing VM for HDHomeRun View using Windows IoT 2021 LTSC, so I can watch TV while using a computer running Linux, with the smallest KVM-QEMU VM I can possibly put together. That’s because, unfortunately, the app I have to use is only available in Windows, and only through the Windows Store on top of that, which in the past has created an additional layer of potential configuration difficulties due to app sandboxing, obscured location paths, etc.
Well, thankfully, starting UWP apps (also called “Windows Store” apps, but herein referred to as UWP apps) has gotten a lot easier in newer versions of Windows. I’m not actually sure of this, because I haven’t looked into any changelogs detailing new Windows shell or explorer features, but I’m assuming so because I’ve gone through all sorts of trouble to get them to start up automatically in the past (this HDHomeRun one in particular), and now it seems ridiculously easy compared to what I had previously gone through.
In Windows File Explorer (the manila folder icon), type shell:AppsFolder in the location bar and hit enter. This will bring up a list of icons for “regular” Windows applications in addition to UWP (store) apps.
Navigate through the icons until you find the UWP app you’re looking for.
In a separate Explorer window, type shell:startup and hit enter.
Drag the UWP app you want to start during startup into the shell:startup folder. It might ask you if you want to create a shortcut in that folder (hit yes).
Log out and back in to make sure it worked.
This worked for me on the first try. The hardest thing I did was remove the “ - Shortcut” string that was auto-appended to the icon’s name. It really doesn’t get much easier than that.
In the past, I’d gone through all sorts of trouble: figuring out which folder holds the app (they’re all in a hidden folder, C:\Program Files\WindowsApps, with folders named things like EF712BA7.HDHomeRunDVR_1.1.345.0_x64__23nna27hyxhag, only accessible from an administrative shell – isn’t that fun?), getting the application name and path from the AppManifest.xml file, and then creating a batch file as the startup script so the UWP app could be started with C:\Windows\System32\cmd.exe /C, etc.
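For posterity, the old startup .bat boiled down to launching the app by its AppsFolder moniker – the package family name below is reconstructed from that folder name, so treat it as an example and verify yours (e.g. with Get-AppxPackage in PowerShell):
@echo off
rem example package family name - verify yours before using
explorer.exe shell:AppsFolder\EF712BA7.HDHomeRunDVR_23nna27hyxhag!App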
As I said, I’m not sure if using UWP apps is just getting easier on newer versions of Windows, but this is pretty darn convenient. If I was just doing something unnecessary in the past, it sure was a lot of trouble.
If you’re interested in running the Windows Store for the apps you can’t find anywhere else on LTSC, check out this github repo – it says last updated 3 years ago, but the script has worked just fine for me on newer versions of LTSC: https://github.com/kkkgo/LTSC-Add-MicrosoftStore