The Proxmox wiki literally says this for the 7-to-8 upgrade:
Ensure that there are no remaining Debian Bullseye specific repositories left; if unsure, you can use the # symbol at the start of the respective line to comment these repositories out. Check /etc/apt/sources.list and all files in /etc/apt/sources.list.d/ (such as pve-enterprise.list), and see Package Repositories for the correct Proxmox VE 8 / Debian Bookworm repositories.
And I managed to forget the one in /etc/apt/sources.list.d/pve-enterprise.list. I should not have had it there in the first place; at the time I didn't know Proxmox at all.
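Before (or after) the upgrade, a quick grep catches any leftover Bullseye entries. This is a minimal sketch; APT_ROOT is a hypothetical variable I added so the command can be dry-run against a scratch copy, and it defaults to the real /etc/apt:

```shell
# APT_ROOT is a stand-in so this can be dry-run on a scratch copy;
# on a real node it defaults to the live APT configuration.
APT_ROOT=${APT_ROOT:-/etc/apt}

# Print every source line that still references bullseye (file:line:content).
# grep exits nonzero when nothing matches, which here just means "all clean".
grep -rn 'bullseye' "$APT_ROOT/sources.list" "$APT_ROOT/sources.list.d/" 2>/dev/null \
  || echo "no bullseye entries found"
```

Every hit should then either be commented out with a leading # or switched to bookworm, as the wiki says.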
So after following the upgrade steps, with no errors from apt dist-upgrade, the first sign of my failure was that no web interface was available.
journalctl -u pveproxy showed:
Nov 13 14:24:08 pve62 pveproxy[314931]: ipcc_send_rec[3] failed: Connection refused
systemctl status pve-cluster showed:
○ pve-cluster.service
Loaded: masked (Reason: Unit pve-cluster.service is masked.)
Active: inactive (dead)
pveversion:
-bash: pveversion: command not found
Running apt update showed me that I still had bullseye somewhere in the sources.
I did the following, which recreates the (now empty) apt hook that Proxmox's apt configuration still references, and that seems to have solved the problem.
mkdir -p /usr/share/proxmox-ve
touch /usr/share/proxmox-ve/pve-apt-hook
chmod +x /usr/share/proxmox-ve/pve-apt-hook
After that:
apt update && apt full-upgrade
apt dist-upgrade
And finally:
apt install proxmox-ve
Web interface is back, no errors so far. All containers are running as they should.
This post is just to confirm that dual-booting Ubuntu 18.04.3 server and ESXi 6.7 U3 on an Intel NUC Hades Canyon is possible, and to describe some of the pitfalls and obstacles I had to work through.
My configuration for the Intel NUC Hades Canyon is Kingston dual-rank 2x16GB RAM plus 2x Samsung EVO 670 500GB SSDs.
I first thought I'd set up RAID in the BIOS, but both ESXi and Ubuntu Server simply ignored this option anyway, and second, I read that it is actually a good idea to avoid BIOS RAID so the disks stay readable and usable after a CPU or board failure.
#1 Installing Ubuntu 18.04.3 server on Intel NUC Hades Canyon
I should say I was a long-time user of DigitalOcean, Host Europe, and so on: a preinstalled Ubuntu server was all I knew. Somehow this time I bumped into the "live" version of Ubuntu Server and started the install, with the outcome that the installer could not see any disks to install Ubuntu to: "Unfortunately probing for devices to install to failed.":
This "problem" was the reason for part #2 of this post, installing ESXi. I only got to the article above, and understood that I had picked the wrong Ubuntu version, after I had ESXi successfully installed as the boot option for disk 1.
Here is the image for the Ubuntu 18.04.3 "not live" server, just for reference:
Other than that, ESXi was a breeze to install and use. By the way, I used the "live" version of Ubuntu 18.04.3 server as the Ubuntu ISO image under ESXi and it worked just fine; I really liked the option to import SSH keys from GitHub during installation.
This concludes my notes on dual-booting Ubuntu 18.04.3 and ESXi on the Intel NUC Hades Canyon. Machine performance is outstanding: no heat, no noise. From what I read, it pays for itself within 45 days compared to an AWS large instance.
Next steps are to study MAAS, LXD, and Juju, with a Kubernetes deployment as the target.
I am also looking forward to the next-generation Intel NUC, the Ghost Canyon!
At some point I started to see dismal Swift autocomplete delays in Xcode 10 and slower compile times. Having googled this and that and optimized what I could, I also ran into these types of warnings:
My first guess was that the longer tuples were causing the problem. I changed that part to use structs: no joy.
One more part of the code gave me another hint:
It looked to me as if simply adding up literals via "+" was tricky for the Swift compiler's type checking.
Changing everything above to String(format:) resolved the type-checking speed problem.
If you're wondering about string interpolation with \(): it didn't solve the problem. Plus, I don't like how messy these strings get when they grow longer.
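For anyone chasing the same issue: the Swift compiler has frontend flags that report which function bodies and expressions are slow to type-check, which is how you can pinpoint the guilty "+" chains instead of guessing from autocomplete lag. The flags themselves are real; the 100 ms threshold is just an example value I picked. In Xcode they go into Other Swift Flags; this is a config fragment, not a runnable script:

```text
# Other Swift Flags (Xcode build settings), or appended to a swiftc invocation:

# Warn for any function body that takes longer than 100 ms to type-check
-Xfrontend -warn-long-function-bodies=100

# Warn for any single expression that takes longer than 100 ms to type-check
-Xfrontend -warn-long-expression-type-checking=100
```

With these on, the build log points at the offending expressions directly.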
I have no kind words here for the people who work on the Swift language and its design and tooling. I don't think this kind of optimization should ever have to bother us regular devs.
Bad, bad, bad... I wish I could have loved Swift more. I do love the language, even if I don't think it is that much superior to Objective-C, and I moved to it from Objective-C for all the new and updated features, but this is simply bad.
P.S. You can find a really good digest of Swift compile time optimizations here: