  • Thank you very much. I spent another two hours yesterday reading up on that and creating more VMs and Templates, but I was not yet able to attach the boot disk to a SCSI controller and make it boot. I would have really liked to see whether this change brings it on par with Proxmox (I wonder now what the defaults for Proxmox are), but even then it would still be much slower than with Hyper-V or XCP-ng. If I find time, I will look into this again.
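
    For comparison, on a Proxmox host you can check what a VM got with ‘qm config <vmid>’. If I remember correctly, recent Proxmox versions attach the boot disk to a VirtIO SCSI controller by default, roughly like this (VM ID and storage name are just examples):

      # on the Proxmox host
      qm config 100
      # typical output (excerpt):
      #   scsihw: virtio-scsi-single
      #   scsi0: local-lvm:vm-100-disk-0,iothread=1,size=32G
      # i.e. the boot disk hangs off a VirtIO SCSI controller, not SATA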


  • I am not working professionally in that field either. To answer your question: of course I would use whatever gives me the best performance. Why it is set up like this is beyond my knowledge. What you basically do in Apache CloudStack when you do not have a Template yet is: you upload an ISO and in this process you have to tell ACS what it is (Windows Server 2022, Ubuntu 24, etc.). From my understanding, the pre-defined OS types you can select and “attach” to an ISO seem to include the specifics for when you create a new Instance (VM) in ACS, and they seem to set the controller to SATA. Why? I do not know. I tried to pick another OS type (I think it was called Windows SCSI), but the result was still a VM with the disks bound to the SATA controller, despite the VM having an additional SCSI controller that nothing was attached to.

    This can probably be fixed on the command line, but I was not able to figure it out yesterday when I had a bit of spare time to tinker with it again. I would like to see whether this makes a big difference in that specific workload.
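
    In case it helps: the guest OS types ACS knows about can be listed through the API, for example with CloudMonkey. I have not checked which of them actually map to a SCSI/virtio root disk controller on KVM, so this is only a way to see what is available:

      # list the pre-defined guest OS types (CloudMonkey)
      cmk list ostypes keyword=Windows
      cmk list ostypes keyword=SCSI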





  • That’s a very good question. The test system is running Apache CloudStack with KVM at the moment, and I have yet to figure out how to see which disk / controller mode the VM is using. I will dig a bit to see if I can find out. If it is not SCSI, it would be interesting to re-run the tests.

    Edit: I did a ‘virsh dumpxml <vmname>’ and the Disk Part looks like this:

      <devices>
        <emulator>/usr/bin/qemu-system-x86_64</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='none'/>
          <source file='/mnt/0b89f7ac-67a7-3790-9f49-ad66af4319c5/8d68ee83-940d-4b68-8b28-3cc952b45cb6' index='2'/>
          <backingStore/>
          <target dev='sda' bus='sata'/>
          <serial>8d68ee83940d4b688b28</serial>
          <alias name='sata0-0-0'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
    

    It is SATA… now I need to figure out how to change that configuration ;-)
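
    In plain libvirt the change itself would look roughly like this (I have not checked whether ACS keeps it, since it regenerates the domain XML when it starts an Instance, so treat this as a sketch):

      # stop the VM, then edit its definition
      virsh shutdown <vmname>
      virsh edit <vmname>

      # change the disk target from
      #   <target dev='sda' bus='sata'/>
      # to
      #   <target dev='sda' bus='scsi'/>
      # and make sure a virtio-scsi controller is defined:
      #   <controller type='scsi' index='0' model='virtio-scsi'/>
      # (a Windows guest needs the virtio-scsi driver installed before it will boot from it)

      virsh start <vmname>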







  • I spent half a day trying to get acme-dns + Cert Warden up and running and failed miserably, and I think I will give up on it. That does not usually happen, but during my debugging sessions I noticed that the acme-dns project has not been actively maintained for quite a while. The current maintainer simply does not have enough time, but is trying to prepare the project for a move to a new GitHub organization so more people can help with it. Until then, Issues and PRs keep accumulating, so I am no longer sure whether I should stick with acme-dns or do it differently.

    Why did I pick this scenario? Because I use Let’s Encrypt certificates and my DNS provider does not allow fine-grained API keys for DNS management. This means that, currently, every process in my network that requests certificates needs the full API key for the Let’s Encrypt DNS challenge.

    One way around that is the alternate validation method (I think it is called DNS alias mode), where you request certificates for your main domain but put the TXT records for the DNS challenge on another domain. The simplest variant is to just use a second domain for that, if you have one.
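
    With acme.sh, for example, that alias mode looks roughly like this (domains and the DNS plugin are just placeholders, I have not used this exact combination myself):

      # DNS: point the challenge record of the main domain at the second domain
      #   _acme-challenge.example.com  CNAME  _acme-challenge.validation-domain.net

      # issue the cert for the main domain, but let the TXT record be written
      # on the second domain (only that domain's API key is needed)
      acme.sh --issue -d example.com \
        --dns dns_cf \
        --challenge-alias validation-domain.net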

    I tried to do it with a subdomain of my main domain that I delegate to acme-dns. The whole acme-dns and domain delegation part works fine, but I am not able to hook it up to Cert Warden properly. I end up with error messages that make no sense to me, and since I cannot find any further information in the logs, as I said, I just gave up yesterday evening… for now ;-)
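
    For anyone wondering what that delegation looks like: roughly this (names are placeholders, the details are in the acme-dns README):

      # 1. DNS at the provider: delegate a subdomain to the acme-dns host
      #    auth.example.com   A    <public IP of the acme-dns server>
      #    auth.example.com   NS   auth.example.com.
      # 2. register an account on the acme-dns instance
      curl -s -X POST https://auth.example.com/register
      #    -> returns JSON with username, password, subdomain and fulldomain
      # 3. point the challenge record of the main domain at that fulldomain
      #    _acme-challenge.example.com   CNAME   <fulldomain from the register response>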

    Another thing I sometimes struggle with is my Pi-hole + Unbound setup, where Unbound returns an NXDOMAIN for some queries for no apparent reason, and I cannot figure out why, under which circumstances, or when it happens. It seems to be random, and a restart, clearing the cache, etc. does not fix it.
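
    One thing worth checking when it happens again is whether Pi-hole or Unbound itself produces the NXDOMAIN, e.g. like this (5335 is the port from the common Pi-hole + Unbound guide, the hostname is just an example, and unbound-control needs remote control enabled):

      # ask Pi-hole (53) and Unbound (5335) the same question
      dig somehost.example.com @127.0.0.1 -p 53
      dig somehost.example.com @127.0.0.1 -p 5335

      # if Unbound itself answers NXDOMAIN, inspect / flush its cache for that name
      unbound-control lookup somehost.example.com
      unbound-control flush somehost.example.com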


  • PostgreSQL major version upgrades AFAIK require a manual backup / restore of the database, but better look that up. I think the last one I did went like this:

    1. Stop the Application Containers (here the Immich ones, so only PostgreSQL runs)
    2. Backup the Database
    3. Stop the PostgreSQL Container
    4. Change to the new PostgreSQL Version
    5. Start the PostgreSQL Container
    6. Restore the Database
    7. Start the Application Containers

    As I said, better look it up first; this is just how I remember the process (I do not remember the exact backup / restore commands).
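
    Very roughly, and with made-up service and user names (check the Immich documentation for the real commands), the steps above could look like this with Docker Compose:

      # 1./2. stop the app containers, dump the database
      docker compose stop immich-server immich-machine-learning
      docker compose exec -T database pg_dumpall -U postgres > immich_db_backup.sql

      # 3./4./5. stop the DB, switch docker-compose.yml to the new PostgreSQL
      # major version (with a fresh data volume), start it again
      docker compose stop database
      docker compose up -d database

      # 6./7. restore the dump, then start the application containers
      docker compose exec -T database psql -U postgres < immich_db_backup.sql
      docker compose up -d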