this is to wrap my brain around future home server expansion. the end goal is to have a clustered proxmox setup with ceph shared storage and a bare-metal nas machine.
first off, the proxmox nodes will be three elitedesk 800 g4's with an i5-8500, 64gb of ram, a 1tb ssd boot drive, and a 2tb nvme drive. the reasoning behind this is that i want identical systems, so a vm behaves the same no matter which node it lands on and there are no slowdowns after a migration.
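for reference, getting the three nodes into one cluster is only a couple of commands. a rough sketch, with the cluster name and ip as placeholders:

```
# on the first node: create the cluster
pvecm create homelab

# on each of the other two nodes: join the cluster
# (asks for the first node's root password)
pvecm add 192.168.1.10

# verify all three nodes show up and quorum is established
pvecm status
```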
i think 8th gen intel is the lowest i would go for a home server, since it is the minimum cpu generation supported by windows 11 and is still fairly modern. the 64gb is overkill, but if my workload grows i will have sufficient memory. the 1tb boot drive is also overkill, but hey, whatever. the 2tb nvme is primarily for ceph storage, where all of the virtual machines and lxc containers will live.
that nvme pool acts as the shared storage across all three nodes, and i wanted to make sure it is fast and redundant. i may consider getting either 2x2tb or 2x4tb nvmes, but cost is the real limiting factor. since i am not using the sata ports, i can also add two ssds per node for another ceph pool.
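roughly what that looks like on proxmox. a sketch only, with the network range, device paths, and pool names as assumptions:

```
# once per cluster: initialize ceph on its own network,
# then create a monitor per node
pveceph init --network 10.15.15.0/24
pveceph mon create

# on each node: turn the 2tb nvme into an osd (device path will vary)
pveceph osd create /dev/nvme1n1

# pool for vm and lxc disks, replicated across the three nodes
pveceph pool create vm-pool

# later, with sata ssds added: ceph tags each osd with a device
# class, so a crush rule can pin a second pool to just the ssds
# (if the nvmes also register as class 'ssd', the osds can be
# relabelled with ceph osd crush set-device-class)
ceph osd crush rule create-replicated ssd-rule default host ssd
pveceph pool create ssd-pool --crush_rule ssd-rule
```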
then the nas will be a bare-metal install of truenas scale with an instance of proxmox backup server running as a virtual machine on it. this way access to the drives is direct instead of going through a hypervisor layer. that does mean migrating my current truenas setup from a proxmox vm to bare metal.
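once that vm is up, the pbs side is mostly one command. a sketch, with the datastore name and path made up:

```
# inside the pbs vm: register a datastore on whatever disk or
# dataset truenas hands to the vm
proxmox-backup-manager datastore create homelab-backups /mnt/backups
```

the cluster nodes then add it under datacenter -> storage as a 'proxmox backup server' entry, using the datastore name and the fingerprint from the pbs web ui.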
along with this clustering, i will have to get a dual-port network card for each node, dedicated to the cluster and ceph traffic. i plan to get 10gbe cards as they are relatively cheap now, and i will be using the full mesh setup documented by proxmox. i do not think i will have the funds for a 10gbe switch, so hard-wiring the nodes directly to each other will cut costs greatly.
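the routed variant of the proxmox full-mesh guide boils down to an /etc/network/interfaces stanza per node. a sketch for node 1, with the interface names and the 10.15.15.0/24 range as assumptions (nodes 2 and 3 mirror it with their own addresses):

```
# node 1 (10.15.15.50): port 1 cabled to node 2, port 2 to node 3
auto enp1s0f0
iface enp1s0f0 inet static
        address 10.15.15.50/24
        up   ip route add 10.15.15.51/32 dev enp1s0f0
        down ip route del 10.15.15.51/32

auto enp1s0f1
iface enp1s0f1 inet static
        address 10.15.15.50/24
        up   ip route add 10.15.15.52/32 dev enp1s0f1
        down ip route del 10.15.15.52/32
```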
i will also include a 2.5gbe nic in the nas, since i want it to have a fast connection to the nodes. that means expanding the network with a 2.5gbe switch. my main rig has a 2.5gbe nic built in but currently sits on a 1gbe switch, so the new switch also lets me spread out the use of the network ports.
this will involve more costs, but i think it will be a slow integration as i gather the parts. i first want to have the base system specs nailed down so that i can configure everything within a weekend. i will have to source another three elitedesk 800 g4s, since i will need four in total. after that i will source the parts to match the spec: 64gb of ram, a 1tb boot drive, and a 2tb nvme each. the main nas is already set up, but i have to migrate it to bare metal.
my original server will most likely be wiped and sold. i think i will keep the 1tb ssd, though, since that is what holds all of my main services.
the migration process will have to go as follows: move the 1tb boot drive into one of the elitedesks, since it holds the essential services, then configure the next two nodes. once they are configured, i will migrate the nas to its bare-metal machine.
i am not sure how to do that part yet, since i am currently running truenas inside a vm, and if i install truenas onto the existing ssd, i am not sure whether the current hard drives will carry their data over.
after looking at some forums, it seems the answer varies depending on the setup. i think i will be fine since i passed the hba card through entirely, so my current instance can already see both drives directly. that means the move is really just exporting the truenas config file, wiping the boot drive, installing truenas, restoring the config file, and rebooting. i am going to back up my data regardless, since i do not want to lose anything on my shares.
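the reason this should work is that the zfs pool lives on the data drives themselves, independent of the boot device, so a fresh install can pick it up again. a sketch, with the pool name as a placeholder:

```
# from the fresh bare-metal install (a shell, or the gui's
# import pool flow): list pools found on attached disks, then import
zpool import
zpool import tank
```

restoring the saved config file through the truenas gui should then bring back users, shares, and services, followed by a reboot.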