Saturday, November 16, 2024

Elastic 7.1 -> 8.16. Journalbeat -> Filebeat. Migration notes.

Journalbeat was removed in 7.16.

https://github.com/elastic/observability-docs/issues/1173

Use Filebeat's new journald input type:

https://www.elastic.co/guide/en/beats/filebeat/master/filebeat-input-journald.html

Add input:

filebeat.inputs:
...
- type: journald
  id: everything

To avoid indices like:

filebeat-%{[fields][source]}-2024.11.16

Set fields.source in the journald input, e.g.:

filebeat.inputs:
...
- type: journald
  id: everything
  fields:
    source: journal

That leads to index names like:

filebeat-journal-2024.11.16

I first tried the index name configuration as described in the docs:

https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-journald.html#_index_13

But I must have been doing something wrong, as I was only getting indices like filebeat-%{[fields][source]}-2024.11.16.
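
For completeness, here is my reading of the documented output-level way to control the index name - a sketch I have not verified in this setup, and the localhost host is an assumption. Note that overriding output.elasticsearch.index also requires setup.template.name and setup.template.pattern, which may be exactly what my attempt with the input-level index option was missing:

```yaml
filebeat.inputs:
- type: journald
  id: everything

# A custom index name requires a matching template name/pattern:
setup.template.name: "filebeat-journal"
setup.template.pattern: "filebeat-journal-*"
output.elasticsearch:
  hosts: ["localhost:9200"]   # assumption: local single-node setup
  index: "filebeat-journal-%{+yyyy.MM.dd}"
```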

Wednesday, November 13, 2024

Failed Proxmox 7 -> 8 upgrade. I forgot to substitute bullseye with bookworm in one of the repository lists!

The Proxmox wiki for the 7 to 8 upgrade literally says:


Ensure that there are no remaining Debian Bullseye specific repositories left, if unsure, you can use the # symbol at the start of the respective line to comment these repositories out. Check all files in the /etc/apt/sources.list.d/pve-enterprise.list and /etc/apt/sources.list and see Package_Repositories for the correct Proxmox VE 8 / Debian Bookworm repositories.

And I managed to forget the one in /etc/apt/sources.list.d/pve-enterprise.list. I should not have had it there in the first place; back then I didn't know Proxmox at all.

So after following the upgrade steps, with no errors from apt dist-upgrade, the first sign of my failure was that the Web interface was not available.


journalctl -u pveproxy showed:

Nov 13 14:24:08 pve62 pveproxy[314931]: ipcc_send_rec[3] failed: Connection refused

systemctl status pve-cluster showed:

○ pve-cluster.service
     Loaded: masked (Reason: Unit pve-cluster.service is masked.)
     Active: inactive (dead)

pveversion:

-bash: pveversion: command not found

Running apt update showed me that bullseye was still somewhere in the sources.


Where?


List the repositories:


grep "^[^#]" /etc/apt/sources.list /etc/apt/sources.list.d/*
/etc/apt/sources.list:deb http://ftp.cz.debian.org/debian bookworm main contrib
/etc/apt/sources.list:deb http://ftp.cz.debian.org/debian bookworm-updates main contrib
/etc/apt/sources.list:deb http://security.debian.org bookworm-security main contrib
/etc/apt/sources.list.d/pve-enterprise.list:deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription

Fix:


vim /etc/apt/sources.list.d/pve-enterprise.list
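
Instead of editing by hand, the same fix can be done with sed - on the real host that would be `sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list.d/pve-enterprise.list`. A quick demo of the substitution on a sample repo line:

```shell
# The same s/bullseye/bookworm/ substitution, shown on a sample line:
echo 'deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription' \
  | sed 's/bullseye/bookworm/'
# prints: deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription
```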

What now? Let's see how to put the broken Proxmox installation back on top of Debian:


https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm


Running:


wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg 

Now:


apt install proxmox-ve

Fails with error:


E: Failure running script /usr/share/proxmox-ve/pve-apt-hook

Here:


https://forum.proxmox.com/threads/upgrade-failed-pve-apt-hook-not-found.107950/post-464053


Quoting:


I did the following and that seems to have solved the problem.

mkdir -p /usr/share/proxmox-ve
touch /usr/share/proxmox-ve/pve-apt-hook
chmod +x /usr/share/proxmox-ve/pve-apt-hook

After that:


apt update && apt full-upgrade

apt dist-upgrade



And the last one:


apt install proxmox-ve

Web interface is back, no errors so far. All containers are running as they should.
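
A few sanity checks I'd run at this point (assuming a standard PVE 8 install):

```shell
# Confirm the host is actually back on Proxmox VE 8:
pveversion                        # should now report pve-manager 8.x
systemctl is-active pve-cluster   # should print "active"
pct list                          # containers should show status "running"
```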

Saturday, September 21, 2024

ML310e Gen 8 v2 - Front IO pinout. Using ML310e case as a MicroATX housing.

I'm now recycling my very old ML310e and want to reuse its case for the X570D4U from AsRock.

Here is the hole I had to cut in the ML310e case, lol:


And as I was not able to find any pinout diagram for the front-IO panel:


Here is mine (don't trust me though and always re-check by yourself!):


Sending my worldwide regards to all server recyclers :)!






Friday, October 22, 2021

TIL: Proxmox "pct create" -rootfs parameter and disk size.

Got confused by the -rootfs parameter of Proxmox when creating a new container with "pct create", so documenting it here.

-rootfs local:2

translates into (when creating a container with id 1062, see /etc/pve/lxc/1062.conf):

rootfs: local:1062/vm-1062-disk-0.raw,size=2G

so local:2 stands for the "local" storage and 2 GB as the disk size.

Using the size=2G sub-parameter led to a size of 1062G, interestingly. Not sure if this is a bug in Proxmox (v7.0) or what.
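
For reference, a full invocation might look like this - the template name is just an example, substitute your own:

```shell
# Create container 1062 on storage "local" with a 2 GB root disk.
# -rootfs <storage>:<size in GB>; the template path is hypothetical.
pct create 1062 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  -rootfs local:2 \
  -hostname ct1062
```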

Saturday, December 7, 2019

Ubuntu 18.04 server and ESXi on Intel NUC Hades Canyon.

This post is just to confirm that dual booting Ubuntu 18.04.3 server and ESXi 6.7 U3 on the Intel NUC Hades Canyon is possible, and to describe some pitfalls/obstacles I had to jump through.

My configuration for the Intel NUC Hades Canyon is Kingston dual ranked 2x16GB + 2xSamsung EVO 670 500GB.

I first thought I'd do it with BIOS RAID, but both ESXi and Ubuntu server just ignored that option anyway; besides, I read that it is actually a good idea to avoid BIOS RAID so that the disks remain readable/usable after a CPU/board failure.

#1 Installing Ubuntu 18.04.3 server on Intel NUC Hades Canyon

I should say I was a long-time user of DigitalOcean, Host Europe and so on - preinstalled Ubuntu server was all I knew. This time I somehow grabbed the "live" version of Ubuntu server and started installing it, only for the installer not to see any disks to install Ubuntu to: "Unfortunately Probing for devices to install to failed.":


This "problem" was the reason for part #2 of this post - installing ESXi. I only found the article above, and understood that I had picked the wrong Ubuntu version, after I had ESXi successfully installed as a boot option on disk 1.

Here is the image for the Ubuntu 18.04.3 "not live" server, just for reference:


I also ran into the Ubuntu 18.04 server installation getting stuck at 66% while running 'update-grub':


After killing the offending process, to exit busybox I used:

```
exit
```

and then probably Ctrl+F1

#2 Installing VMware ESXi Hypervisor

My main problem with the ESXi installation was creating a bootable ESXi image on my Mac. After a bit of research I was lucky enough to find UNetbootin:


Other than that, ESXi was a breeze to install and use. Btw, I used the "live" version of Ubuntu 18.04.3 server as the Ubuntu ISO image under ESXi and it worked just fine; I really liked the "import SSH key from GitHub" option during installation.
--------

This concludes my notes on dual booting Ubuntu 18.04.3 and ESXi on the Intel NUC Hades Canyon. Machine performance is outstanding, no heat, no noise. From what I read, it delivers the performance of an AWS large instance and pays for itself within 45 days.

Next steps are to study MAAS, LXD and Juju, with a Kubernetes deployment as the target.

Also looking forward to the next generation of Intel NUC, the Ghost Canyon one!

Wednesday, February 20, 2019

Xcode 10 Swift 4.2 slow autocomplete and compile times. BAD.

At some point I started to see dismal Swift autocomplete delays in Xcode 10 and slower compile times. Having googled this and that and optimized what I could, I also ran into these types of warnings:



My first guess was that the longer tuples were causing the problem. I changed this part to use structs - no joy.

One more part in the code gave me another hint:



It looked as if simply concatenating literals via "+" was tricky for the Swift compiler's type checking.

Changing everything above to String(format:) resolved the type-checking slowdown.

If you're wondering about string interpolation with \() - it didn't solve the problem. Plus I don't like how messy those strings get when they grow longer.
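
To illustrate the kind of change I mean - this is a sketch, not my original code; the literal chain below just stands in for the expressions that triggered the warnings:

```swift
import Foundation

let x = 1, y = 2

// Slow to type-check in Swift 4.2: a long "+" chain makes the compiler
// search through many operator overloads.
let slow = "x: " + String(x) + ", y: " + String(y) + ", sum: " + String(x + y)

// Faster: String(format:) pins the argument types up front.
let fast = String(format: "x: %d, y: %d, sum: %d", x, y, x + y)
```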

I have no good words for the folks who work on the Swift programming language and its design/tools. I don't think this kind of optimization should ever bother us regular devs.

Bad, bad, bad... I wish I could love Swift more. I do like the language, even if I don't think it is that superior to Objective-C, and I'm moving to it from Objective-C for all the new and updated stuff - but this is simply bad.

P.S. You can find a really good digest of Swift compile time optimizations here:

https://github.com/fastred/Optimizing-Swift-Build-Times