I have 2x20TB, 7x8TB, and 1x6TB spinners and 3x500GB SSDs, so a typical RAID setup isn’t really possible. I’m not buying any more hard drives for a while.

I switched to unRaid from Windows/DrivePool/SnapRAID because... well, I don’t really know why; I wanted to try something new. I wish I had started with the trial instead of paying for the license, but hindsight is 20/20, as they say. Thankfully it wasn’t too expensive.

My big issue is write speeds. I run SABnzbd/Sonarr/Radarr/Plex, and I’ve got everything configured properly, but SABnzbd can barely manage 20 MB/s even though my gigabit cable ISP can pull 100 MB/s.

I actually pulled the parity drive out of my array just to get better speeds, because with parity you’re automatically halving your disk write I/O. With shucked 5400 RPM WDs I should be able to hit over 100 MB/s writes, but with parity plus SABnzbd repairing I’m lucky to get 40 MB/s. It’s just abysmal.

I never had any slowdowns using Windows and DrivePool, even during repair + download operations. Obviously SnapRAID parity is computed on demand, so it doesn’t factor into write speed.

I have tried the cache drive, but then it just fills up and you’re in an even worse place: you have to wait on mover to shift the data from the SSD cache to the array, and if you’re still downloading, you’re now downloading to the array and repairing on the array at the same time. That’s even slower than not having the cache set up at all.

I guess if I were downloading a tiny amount every day it wouldn’t be such a big deal, but I’m trying to catch up on the time Sonarr/Radarr weren’t running while I moved my existing data to unRaid, plus some new keywords for grabbing x265 and upgrading some 720p content to 1080p. So picture a long, multi-TB queue in SABnzbd.

So now I’m thinking maybe I should go back to Windows. I’ve read about mergerfs and SnapRAID on Linux, but it seems like a fairly steep learning curve when Windows worked fine for years. I wish I had never switched.

Am I missing something? I have reconstruct write turned on and took out the parity drive, and it does alright downloading, but a repair during a download still brings it to a crawl. Is there something better out there I’m not thinking of?

Any tips would be appreciated.

  • Johnpyp@alien.topB · 11 months ago

    ZFS is going to be by far the best performer on hard drives, and despite the common advice, you don’t need copious RAM for it to perform well.
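
    If RAM is a worry anyway, you can simply cap the ARC. A minimal sketch for Linux OpenZFS; the 4 GiB value is just an example:

        # cap the ZFS ARC at 4 GiB (value is in bytes) at runtime
        echo 4294967296 | sudo tee /sys/module/zfs/parameters/zfs_arc_max

        # make the cap persist across reboots
        echo "options zfs zfs_arc_max=4294967296" | sudo tee /etc/modprobe.d/zfs.conf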

    It won’t be trivial to migrate to, and there are some tradeoffs, though they have been written about extensively.

    • ZataH@alien.topB · 11 months ago

      Only downside to ZFS is you have to buy all the drives at once, and you can’t upgrade single drives. Unless you run mirror (RAID1) vdevs.

  • kwarner04@alien.topB · 11 months ago

    I’ve been on mergerfs + SnapRAID for years and haven’t had any issues. I even tried downloading to an NVMe, but then you just move the bottleneck from downloading to moving off the cache drive. And if you download more than your cache drive holds, you’re no better off.

    Set up mergerfs and make sure to specify the “most free space” create policy. Then each file actually gets written to a different drive, because mergerfs evaluates free space and rotates the writes. Then just run a snapraid sync nightly to keep parity current. (This does mean you could lose something between download and sync, but if you just downloaded it, you can probably grab it again.) A sketch of that setup is below.
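
    Roughly like this; the mount points, disk names, and schedule are hypothetical, so adapt them to your layout:

        # /etc/fstab -- pool the data disks; create=mfs sends each new file
        # to whichever branch currently has the most free space
        /mnt/disk* /mnt/storage fuse.mergerfs allow_other,category.create=mfs,fsname=mergerfs 0 0

        # /etc/snapraid.conf -- one parity disk, content files on the data disks
        parity /mnt/parity1/snapraid.parity
        content /mnt/disk1/snapraid.content
        content /mnt/disk2/snapraid.content
        data d1 /mnt/disk1
        data d2 /mnt/disk2

        # root crontab -- nightly parity sync at 03:00
        0 3 * * * /usr/bin/snapraid sync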

    Now, the big assumption here is that most of your files are large media files. If you’re moving thousands of small files, you’ll probably notice a performance hit.

    I can easily saturate my 2.5 Gbps fiber connection with no issues. And as others have said, it’s just standard files, so you’re not hosed if a single drive dies.

  • thebaldmaniac@alien.topB · 11 months ago

    It’s about carefully tuning shares: what actually needs to be on the cache vs. what can be written directly to the disks. For example, the downloaders can download to cache-backed shares, but when the *arrs move the files, have them move to shares that are disk-only. That way you get fast downloads and repairs, and a slower move to disk that shouldn’t impact performance much.

    I agree it’s too much tuning that’s needed, but once you do it, it works fine. Roughly, the share split looks like the sketch below.
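
    In unRaid these settings are made per-share in the GUI; under the hood they land in files like /boot/config/shares/<share>.cfg. A sketch with hypothetical share names:

        # /boot/config/shares/downloads.cfg -- cache-backed share the downloader writes to
        shareUseCache="yes"

        # /boot/config/shares/media.cfg -- array-only share the *arrs import into
        shareUseCache="no"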

    • fundementalpumpkin@alien.topOPB · 11 months ago

      Everybody says to use the TRaSH guide for setup, so I set up Cache → Array with exactly the same folder structure as the TRaSH guide.

      What confuses me is: won’t a completed movie that Radarr imports just hardlink to the media/movies folder on the cache? What is telling it to move to the array? If I have to wait for mover, then mover is wasting time moving my usenet/incomplete folder to the array too, but the TRaSH guide hammers home how important hardlinking/atomic moves are.

      It seems like the better setup is to have only usenet go to the cache, or at least only incomplete. Leave the media share as array only, no cache, and then let SABnzbd move things to array/completed or let Sonarr/Radarr move them to array/media. But then that isn’t an atomic move? Are atomic moves not really that big of a deal?
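
      (As far as I understand it, hardlinks and atomic renames only ever work within a single filesystem, so any cache-to-array import is a full copy no matter what. A quick shell demo, with made-up paths:)

          # same filesystem: instant, atomic, no extra space used
          ln /mnt/cache/usenet/complete/file.mkv /mnt/cache/media/movies/file.mkv

          # across filesystems (cache -> array disk): the kernel refuses with EXDEV,
          # so the *arrs silently fall back to copy + delete instead
          ln /mnt/cache/usenet/complete/file.mkv /mnt/disk1/media/movies/file.mkv
          # ln: failed to create hard link ...: Invalid cross-device link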

      In normal operation I’m not filling up my cache drive, so it’s kind of moot, but what about the rare occasion where I queue up 10 seasons of a show or something? Then I’d want stuff moving off the cache ASAP instead of waiting on mover.

      • Cygnusaurus@alien.topB · 11 months ago

        You are correct, you have to keep your temporary files off the array, cache only. I have two pools plus the array: cache, buffer, array.

        SABnzbd downloads to the buffer (an unprotected 2TB NVMe), then unpacks to the 1TB cache (a pool of two 1TB $45 Inland NVMes). Files are unpacked into the appropriate folder (/linuxisos/ubuntu, for example) created on the cache, then moved overnight to the array by mover.

        I bet some of your slow speed is the downloader unpacking a file and writing it back to the same drive it’s reading from. You get much better speeds downloading to one drive or pool and writing to another.
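
        In SABnzbd that split is just the two paths under Config → Folders; something like this (paths hypothetical):

            Temporary Download Folder:  /mnt/buffer/usenet/incomplete   <- raw article writes
            Completed Download Folder:  /mnt/cache/usenet/complete      <- unpack lands on a different pool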

        Look at the YouTube videos by Spaceinvader One on how to set up cache pools using ZFS, or convert an existing cache to it; I think he has some on optimizing downloads as well.

  • Freaky_Freddy@alien.topB · 11 months ago

    I actually pulled the parity drive out of my array just to get better speeds, because with parity you’re automatically halving your disk write I/O. With shucked 5400 RPM WDs I should be able to hit over 100 MB/s writes, but with parity plus SABnzbd repairing I’m lucky to get 40 MB/s. It’s just abysmal.

    Yeah, you’ll only hit max speed on any disk with a pure sequential workload; if it starts doing multiple things at once and writing to different spots, your write speed will take a hit.

    I have 2x20TB, 7x8TB, and 1x6TB spinners and 3x500GB SSDs, so a typical RAID setup isn’t really possible.

    You could still try something like TrueNAS.

    You could create a 60TB pool by doing a mirror vdev (2x20TB) + a raidz2 vdev (7x8TB)

    In terms of parity you would be on par with your unRaid setup, and in terms of speed you would be combining the speed of the slower 20TB disk + something like the aggregate speed of 5 disks on the raidz2.

    Also, you’d be using ZFS, so your files would be checksummed to protect against bitrot.

    The SSDs you could either use as a metadata + small-files special vdev for that 60TB pool, or put in a second, all-SSD pool.
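
    A rough sketch of that layout; the pool name and device paths are placeholders, so double-check everything before running a command this destructive:

        # 2x20TB mirror vdev + 7x8TB raidz2 vdev in one pool (~60TB usable)
        zpool create tank \
          mirror /dev/sda /dev/sdb \
          raidz2 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi

        # optional: SSD special vdev for metadata + small files
        # (mirror it -- losing a special vdev loses the whole pool)
        zpool add tank special mirror /dev/sdj /dev/sdk
        zfs set special_small_blocks=64K tank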

    The 6TB disk I can’t really fit anywhere, unfortunately; maybe use it as an offline backup for some of your more important files?

    I don’t know anything about Windows storage, unfortunately… Good luck

    • fundementalpumpkin@alien.topOPB · 11 months ago

      Thanks for the input. It looks like TrueNAS has all the same apps/plugins that I use as well, so that might be an option. Appreciate it.

      • alsdhjf1@alien.topB · 11 months ago

        As someone who went the other way (Proxmox and TrueNAS -> Unraid), I would caution you to think twice before going down that path. You’ve admitted having issues with Unraid configuration, and TrueNAS is significantly more complex (especially if you layer in Proxmox). You’ll just be throwing away all the knowledge you have about Unraid and jumping into another setup where configuration is important. Except, in this case, if you screw up the ZFS setup you risk losing ALL YOUR DATA.

        Something doesn’t quite add up. How are you downloading 3TB a day and expecting it to fit into your existing drive setup? I would really recommend just biting the bullet and getting 2x4TB cache drives that you can fill up, then letting the slow writes to the array take over. If you’re telling me you do 3TB every day, then what’s your plan for 30 days from now when you’re out of array space?

        Again, I’m not doubting you, but I think your frustration is pushing you to make a change, even if that change might not be the best for you in the long run.

        All that is to say, please don’t jump to TrueNAS with prod data for your first experience.

        • Freaky_Freddy@alien.topB · 11 months ago

          You’ll just be throwing away all the knowledge you have about Unraid and jumping into another setup where configuration is important. Except, in this case, if you screw up the ZFS setup you risk losing ALL YOUR DATA.

          I recently set up my TrueNAS and it was pretty easy after following a YouTube video,

          but now I’m curious how I can lose ALL MY DATA with ZFS.

          Can you provide any examples of what would cause this?

          • alsdhjf1@alien.topB · 11 months ago

            Sure, if you’re using ZFS raidz1 and lose 2 drives, it all goes. Or raidz2 and lose 3 drives, it all goes. Because the data is striped across many drives, there is no fundamental “one file, one drive” fallback if SHTF.

            Whereas with Unraid, it’s one file on one drive, with a parity disk (or two) if you need to rebuild.

            • Freaky_Freddy@alien.topB · 11 months ago

              if you’re using ZFS raidz1 and lose 2 drives, it all goes. Or raidz2 and lose 3 drives, it all goes. Because the data is striped across many drives, there is no fundamental “one file, one drive” fallback

              That has nothing to do with how you set up ZFS, though…

              That is just the reality of how redundant arrays work, whether it’s ZFS or hardware RAID.

              Losing 2 or 3 disks at once should be a very low-probability event, and either way it shouldn’t be a big issue if you have backups (RAID and unRaid aren’t backups).

              • alsdhjf1@alien.topB · 11 months ago

                Yes, agreed, all of that is true. Regardless of the reason, Unraid takes a different approach and has different tradeoffs, which enable partial recovery in the SHTF scenario. I prefer the Unraid approach; plus it’s more intuitive and feels simpler to me.

        • fundementalpumpkin@alien.topOPB · 11 months ago

          No, the 3TB is just my initial download. It’s a bunch of shows from the time when I was moving from Windows to unRaid, plus I added a couple of Radarr keywords for x265, changed the profile on a lot of my old 720p media to 1080p, and did a search-all on my entire Sonarr list. Once this initial transfer finishes it should only be a few shows and maybe a movie a day.

  • katbyte@alien.topB · 11 months ago

    unRaid is slow; it’s not something you can expect to be faster than a single disk, and it will most likely be slower because of its design

    you likely want something like TrueNAS SCALE, which will properly use multiple disks for speed/IOPS, but it will be a much steeper learning curve

  • gammajayy@alien.topB · 11 months ago

    Download to cache, then have the mover put it on the array. I max out at 110 MB/s in SABnzbd on a 1G/1G connection. Definitely user error here.

  • JanBurianKaczan@alien.topB · 11 months ago

    Mate, why are you downloading straight to HDDs? Put the downloads on an SSD, then let the *arrs do the import to the HDDs.

    • lightning228@alien.topB · 11 months ago

      Yeah, I temporarily switched over to writing to an HDD and it made a huge difference; I switched back the next day. Unraid is not the issue.

  • dr100@alien.topB · 11 months ago

    There’s nothing to it, except to be patient. Unless you start messing with the cache (which can suck if you’re transferring a lot and running out of space), you aren’t making it faster. I also somehow suspect you might have some SMR drives among the smaller ones…

    unRaid is really great for its arrangement of keeping the drives separate (unique to unRaid, and I don’t know why, as it’s the one arrangement that makes sense for a lot of users), so you can never lose more data than the drives you’ve actually lost. But it comes with countless quirks of its own. Not the speed itself, which is a consequence of that arrangement plus doing a lot of the work in FUSE (user space), but all the arbitrary choices: a weird Slackware base with bugs that don’t get fixed for years even when they’ve been fixed everywhere else, a DRMed USB-only boot, and nearly everything delivered as third-party containers and plugins installed at your own risk, as opposed to regular software from a repository maintained by a major Linux distribution, and so on.

    Overall I’d say be a little patient and stick with it; leave the parity on, you’ll be happy you have it. Also, you can’t just enable/disable parity freely. I’m not even sure of the exact process, but I’m sure that if you touch it in any way it will have to do a whole re-sync, which would really kill your performance. Maybe that was actually a contributor to your lack of speed: a parity check/sync that was reading all the drives while checking or writing the parity?

  • jamori@alien.topB · 11 months ago

    While other commenters have a point about downloading to cache first for much better performance, writing directly to the array still isn’t fundamentally ‘wrong’ and shouldn’t be THAT slow on a relatively modern system.

    My guess is that there’s something about the way your HDDs are hooked up which becomes a bottleneck when the system needs to read all the drives in parallel to compute parity.

    Sometimes it’s as simple as ‘my PCIe drive controller is in the wrong slot on the motherboard, so it’s limited to PCIe 2.0 x1 speeds’, or certain SATA headers on the motherboard going through an expansion chip (what we used to call the “south bridge”) rather than interfacing directly with the CPU/“north bridge”.
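
    If you can get to a shell, one quick sanity check is comparing the negotiated PCIe link on the SATA/HBA controller against what the card supports (the 01:00.0 address below is just an example):

        # find the PCI address of the SATA/HBA controller
        lspci | grep -i -e sata -e raid

        # LnkCap = what the card can do, LnkSta = what it actually negotiated
        sudo lspci -s 01:00.0 -vv | grep -e 'LnkCap:' -e 'LnkSta:'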

    There’s an unraid disk speed testing plugin that can help narrow things down. I’d suggest you install and run this to at least rule out that problem: https://forums.unraid.net/topic/70636-diskspeed-hddssd-benchmarking-unraid-6-version-2107/

  • dmn002@alien.topB · 11 months ago

    You need to download and extract on the cache drive, preferably an NVMe SSD, and move to the array afterwards; otherwise there will be speed issues. If you’re downloading 3TB at a time, a cache of at least 4TB is recommended.

  • simonmcnair@alien.topB · 11 months ago

    Hard drive performance is never as good as an SSD’s, or even your ISP’s line speed. Give it a break; it’s mechanical, not solid state.

    RAID parity calculations take time too; don’t underestimate the performance cost of keeping your data safe.

    SnapRAID is a good solution if you can schedule the syncs, as it’s like RAID, but not continuous.

    My suggestion mirrors much of the above: download to SSD, then move to HDD afterwards.

  • Wassindabox@alien.topB · 11 months ago

    Yo! Peep trash-guides (google it) and look at his directions for setup. When I first started, I ran into similar issues but that guide got me to automation heaven and I rarely touch it now.