When considering a 2RU case, consider future use: it will only take half-height, half-length cards, which could cause issues down the track if you want to add graphics or HBA cards.
If you don't think you'll need them, everything should be fine.
TacticalRMM, which builds on MeshCentral to handle the remote access.
Virtualising means you can make better use of the resources on one system, rather than having two systems and dedicating one to a specific task.
On the other hand, you can bork the hypervisor and then be without internet, and possibly become the family's public enemy #1 :)
But it's generally pretty stable. I don't use OPNsense, but I do have a virtualised router running Sophos XG. One NIC from the VM is tied to vmbr0, which is the main virtual bridge connecting my virtual machines to the rest of the network. Its IP is my default gateway.
The second NIC is done as PCIe passthrough, and this connects directly to my cable modem.
I could have bound this NIC to another vmbr and it would have worked just as well. However, there was some discussion in r/Proxmox about performance impacts if you have a very fast internet connection (something to do with SR-IOV, IIRC).
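For reference, that kind of setup boils down to just two lines in the VM's config. A rough sketch, assuming a router VM with ID 100 and a WAN NIC at PCI address 01:00.0 (both made up; find yours with `lspci`):

```
# Hypothetical excerpt from /etc/pve/qemu-server/100.conf
# net0: LAN-side virtual NIC bridged to vmbr0 (MAC is an example)
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
# hostpci0: the physical WAN NIC passed straight through to the VM
# (pcie=1 needs the q35 machine type)
hostpci0: 0000:01:00.0,pcie=1
```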
As the drive is a single unit, you can use LTFS, which lets you drag and drop files to tape just like it was another hard disk.
https://www.youtube.com/watch?v=3HjjNOcqGt4&t=8s&pp=ygUIbHRvIHRhcGU%3D
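The LTFS tooling is pretty simple once it's installed. A minimal sketch, assuming the drive shows up as /dev/sg3 (device name and mount point are examples; the exact commands vary slightly between the IBM and HPE LTFS builds):

```
mkltfs -d /dev/sg3                   # format the tape with the LTFS filesystem
mkdir -p /mnt/ltfs
ltfs -o devname=/dev/sg3 /mnt/ltfs   # mount the tape like a disk
cp -r ~/photos /mnt/ltfs/            # then just copy files across
```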
Tape drives are more enterprise territory, so the software is priced accordingly.
Veeam has a Community Edition which supports tape drives, but you'd probably need a bit more setup.
You just need a straight HBA (i.e. no RAID support), like the IT-mode controllers used for TrueNAS etc. - an LSI 92xxe, for example.
It needs to be an "e" model for the external connector.
Then you just need a cable to go between them; Google should be able to confirm the type needed.
Search the forum - there are many discussions on running Minecraft.
Width is standardized at 19"; you just need to make sure the rack is deep enough.
Most of the Dells are around 27" deep, and enclosed racks tend to be 25" or 35" deep, so you'd need a 35" one.
Open-frame racks can be adjusted for depth.
Then decide on the height - 12-15RU should be fine for your usage.
They're harder to find second-hand, so new may be the best option.
42RU racks can be found quite cheap (even free), but they're a pain to move about, hence the low price. They become available en masse when a data centre or similar closes, and it's easier to sell them off cheap than to dispose of them or send them for scrap.
You'll need some rails, which can be expensive, or a shelf (I use a StarTech one with my 4RU server).
What do you define as light gaming? Is it something that will run without a GPU?
You could go with a couple of approaches. Dedicated thin clients can be found on eBay for not too many dollars; they usually come with RDP clients, which will let you connect to the VMs via Remote Desktop (but the VMs will need to be running Windows Pro or higher, or a Linux distro with xrdp installed).
Option b) is to roll your own. Again, you could use the thin clients (it just might be a bit of extra work) or get a mini PC (say a Dell USFF) with an i3 or i5 - you won't need that much oomph at the thin-client end. You could even do it with a Raspberry Pi or other SBC, but there may not be any cost savings on the hardware, and you'd be working cross-platform.
Not sure if your kids are at the dual-monitor stage, but if so, a thin client or USFF PC will let you connect multiple monitors. Mileage may vary on SBCs.
The idea is to use iPXE/PXE to remote-boot the thin clients. That gives you a build/update once, deploy many approach.
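If your router is already handing out DHCP, a proxy-DHCP setup is the usual way to add PXE without touching it. A minimal sketch using dnsmasq (subnet, boot filename and paths are all examples, not a drop-in config):

```
# /etc/dnsmasq.d/pxe.conf - answer PXE requests without replacing
# the existing DHCP server on the subnet
dhcp-range=192.168.1.0,proxy
pxe-service=x86PC,"Boot from network",ltsp/undionly.kpxe
enable-tftp
tftp-root=/srv/tftp
```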
There's LTSP (ltsp.org), which I use. You build up a system with the apps you want (in my case the Proxmox VDI client, Remmina for RDP, Parsec and Moonlight for accelerated graphics support for when I play games, NoMachine, AnyDesk and Firefox), though you probably don't need all of those. To keep the size down, build from a basic Linux distro (Ubuntu Server, Debian netinstall) and use a lightweight desktop manager (XFCE is ideal). Build it, boot, and off you go.
Or you could easily build one with Alpine Linux (apalard.net has a tutorial - it's geared at the Proxmox VDI client, but the general approach would work in your case).
The only problem with Alpine is you pretty much have to reinstall to update everything. With LTSP, you update the Linux distro, rebuild the LTSP image and reboot (so two or three commands).
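To give you an idea, the LTSP rebuild-and-reboot cycle looks roughly like this (assuming LTSP 19+ and that you're imaging the server's own root filesystem; adjust to your setup):

```
apt update && apt upgrade -y   # update the distro/apps on the template system
ltsp image /                   # re-squash the root filesystem into the boot image
ltsp ipxe && ltsp nfs          # regenerate the iPXE menu and NFS exports if needed
# then just reboot the thin clients to pick up the new image
```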
If unRAID uses KVM (which I think it does), you can use SPICE/virt-viewer for remote access. One advantage of SPICE is the ability to pass through USB devices, which you don't get with RDP.
The reason I asked about the gaming is that you'll then need to dive into GPU passthrough, and use Parsec or Moonlight to take advantage of it.
It's probably DNS-related.
You need to check which DNS server the system is set to use when you bring up the VPN.
For example, when I bring up my StrongVPN connection it's set to use an external DNS, so if I want to access a system on my network I need to use the IP address.
I can still access some SMB shares, but that's because I have them mapped at login via Group Policy; if I try to connect to my Samba file server afterwards by its system name, yeah, it's gonna say the server isn't found.
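A quick way to see what's going on is to compare the answers with the VPN down and up (commands vary by OS, and the hostname below is just an example):

```
resolvectl status      # Linux with systemd-resolved: shows DNS per interface
nslookup fileserver    # does the local name resolve, and which server answered?
ipconfig /all          # Windows: lists the DNS servers per adapter
```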
Yes, an off-the-shelf unit is great for family members who aren't that tech-literate - it can just sit on the shelf and do its thing - but you will still need to monitor and update it.
The CPU in the custom NAS could also be more powerful than what you get in an off-the-shelf unit, giving you more flexibility.
It could start off as a NAS today, and tomorrow you could turn it into a virtualisation server with the NAS software running as a virtual machine.
Or you could find that you're shifting very large files around and need 10GbE and high performance, which could start to choke a Synology or QNAP NAS with an ARM processor.
Bit baffled by u/ajnozari's comment on share security issues - how to set up SMB via Samba is well documented…
Not sure they're referring to a security issue so much as the fact that Samba can be a bit cantankerous with permissions and security when trying to access a share, or that some users have problems configuring it.
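For what it's worth, a basic share really is only a few lines. A minimal illustrative smb.conf entry (share name, path and user are made up; the Unix permissions on the path have to line up too, which is where most people get bitten):

```
[media]
   path = /srv/media
   valid users = alice
   read only = no
```

Then `smbpasswd -a alice`, restart smbd, and the share should be reachable.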
It's unlikely you'll find something to fit that rack.
They're designed for networking equipment, not PC hardware, and outside of some 1RU units there's very little that's going to fit into 17".
Someone posted looking for a 16"-deep case yesterday (factoring in power cables etc., that's about as deep as the case could be). There might have been a suitable response in there.
I've seen the same sort of behaviour with netboot (though that's more iPXE), although in my case it was with an E1000 under Proxmox (and with other virtual NICs).
It seems to be a common problem with iPXE, where it hangs at initialising devices, but there doesn't seem to be much in the way of solutions.
Umm, read the forum?
Try Google? (Which Reddit isn't.)
i5.
It gives you a few more cores to play with down the track if you need them, plus better performance from the get-go.
Is the QNAP disk shelf known to work with non-QNAP hardware?
It might use a standard cable and have an HBA inside, but that doesn't mean they haven't taken steps to make it proprietary.
Before you convert, install the VirtIO drivers, and in the VM configuration make sure the disk image file is attached as sata0.
VirtIO SCSI is the better driver, but Windows won't know the device at first boot after the migration - it needs to boot, find the device and configure the drivers first.
And you need to ensure the image contains all the partitions from the original drive.
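Roughly, the flow looks like this (VM ID, storage name and filenames are examples only):

```
# convert the raw/physical disk image to qcow2
qemu-img convert -p -O qcow2 windows.img vm-100-disk-0.qcow2
# attach it as SATA for the first boot so Windows can start
qm set 100 --sata0 local:100/vm-100-disk-0.qcow2
# once the VirtIO drivers are installed inside Windows, reattach the
# disk as SCSI on the VirtIO controller for better performance:
# qm set 100 --scsihw virtio-scsi-pci --scsi0 local:100/vm-100-disk-0.qcow2
```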
No, Proxmox isn't harder to get into, but it is different from TrueNAS.
TrueNAS is a storage system that can host apps.
Proxmox is a hypervisor, which is first and foremost about compute - i.e. the ability to run virtual machines and containers (LXC). It can utilise ZFS (which is the basis of TrueNAS) for VM storage, but it's not intended as network storage.
A container is a lighter-weight way of doing a virtual machine; it shares the kernel with Proxmox (hence only being able to run Linux distros in LXC).
That said, you can run TrueNAS as a VM under Proxmox, or you can spin up a file server in a container and utilise the ZFS system for storage.
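As a rough sketch of the container route (VMID, template, storage and dataset names are all made up for the example):

```
# create a small Debian container to act as the file server
pct create 200 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
    --hostname fileserver --memory 1024 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp
# bind-mount a ZFS dataset from the Proxmox host into the container
pct set 200 --mp0 /tank/share,mp=/srv/share
pct start 200    # then install Samba/NFS inside and export /srv/share
```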
With Proxmox and the 1080, you could go one of a couple of ways.
Put Plex in an LXC and it will be able to access the GPU through the kernel space of Proxmox (the drivers are installed at the hypervisor level). The advantage is that you still have the GPU available for the console.
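For the LXC route, it's mostly a matter of exposing the NVIDIA device nodes to the container. A rough illustration for /etc/pve/lxc/<vmid>.conf, assuming the NVIDIA driver is already installed on the host (device majors can differ, so check `ls -l /dev/nvidia*`):

```
lxc.cgroup2.devices.allow: c 195:* rwm
# nvidia-uvm usually has a different major number - add an allow
# line for it too, using whatever ls -l /dev/nvidia-uvm reports
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
```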
Option 2 is to pass the GPU through to a virtual machine. This has to be done on a 1:1 basis, i.e. the 1080 can only be used by one VM at a time. If you're interested, you can go down the path of vGPU, which would allow the GPU to be shared between a number of VMs.
However, as part of the configuration process you blacklist the drivers and the card, which means it's not available for the console (normally that won't matter, but if you have any problems it can make troubleshooting a bit harder).
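The blacklisting part looks roughly like this on the host (the vendor:device IDs below are examples for a GTX 1080 and its HDMI audio function; confirm yours with `lspci -nn`):

```
# stop the host drivers claiming the card
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidia"  >> /etc/modprobe.d/blacklist.conf
# hand the GPU (and its HDMI audio device) to vfio-pci instead
echo "options vfio-pci ids=10de:1b80,10de:10f0" > /etc/modprobe.d/vfio.conf
update-initramfs -u && reboot
```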