• NutzPup@alien.topB · 11 months ago

    If these were in a RAID array, what can happen is that when one drive goes bad, the other drives get pushed into overdrive during recovery, which can then tip those over the edge.

    • reercalium2@alien.topB · 11 months ago

      There is no “going into overdrive”; there’s just more work, which means a higher chance that some of that work fails.
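      A rough illustration of that point (a back-of-the-envelope sketch; the URE rate, drive size, and drive count are made-up assumptions, not figures from the thread): during a rebuild every surviving drive is read end to end, so the total work, and with it the chance of hitting an unrecoverable read error or another fault, grows with the array.

```python
# Back-of-the-envelope: chance of hitting at least one unrecoverable
# read error (URE) while rebuilding a RAID array.
# All numbers are illustrative assumptions, not measurements.

URE_RATE = 1e-14          # assumed spec: 1 unrecoverable error per 1e14 bits read
DRIVE_TB = 8              # assumed drive size in terabytes
SURVIVING_DRIVES = 5      # drives that must be read in full during the rebuild

bits_read = SURVIVING_DRIVES * DRIVE_TB * 1e12 * 8   # total bits read during rebuild

# Treat each bit read as an independent trial that fails with probability URE_RATE.
p_no_error = (1 - URE_RATE) ** bits_read
p_at_least_one = 1 - p_no_error

print(f"Bits read during rebuild: {bits_read:.2e}")
print(f"P(at least one URE):      {p_at_least_one:.1%}")
```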

  • YousureWannaknow@alien.topB · 11 months ago

    There are too many factors and things we have no idea about… We can’t really say anything about it: how they were used, what happened to them, and so on. 3 in a week? I wouldn’t be surprised if they had been used for mining or there was an electrical failure in the setup, but until we learn the causes, there’s no way to know…

    I personally have a Seagate as my main storage… Am I concerned about its state? Well, it’s only storage, and it ain’t even plugged in most of the time, so I’m more scared of corrosion than of actual failure… I have a decade-old Apollo drive filled with data. Unless something mechanical damages it, I doubt it’ll develop anything worse than filesystem errors.

  • Niklasw99@alien.topB · 11 months ago

    Well, it’s Seagate for ya, but 3 in a single week is insane. What setup was it?

  • alaurence@alien.topB · 11 months ago

    I feel like there should be more information available on drive models failing before 5 years of use (or before a certain number of power-on hours), rather than blanket databases of which drives fail.
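    The raw material for that does exist in Backblaze’s published daily drive-stats CSVs; below is a rough sketch of the idea, assuming the usual columns (`model`, `failure`, and `smart_9_raw` for power-on hours) and a placeholder file name. A fair comparison would also need the drive-day denominator per model, which the same data provides.

```python
import pandas as pd

# Placeholder path to a Backblaze daily drive-stats CSV dump.
# The public files include `model`, `failure` (1 on the day a drive dies),
# and `smart_9_raw` (power-on hours), among many other columns.
df = pd.read_csv("drive_stats_2023.csv",
                 usecols=["model", "failure", "smart_9_raw"])

FIVE_YEARS_HOURS = 5 * 365 * 24   # ~43,800 power-on hours

# Keep only failures that happened before five years of power-on time.
early = df[(df["failure"] == 1) & (df["smart_9_raw"] < FIVE_YEARS_HOURS)]

# Count early failures per model (absolute counts, not rates).
print(early.groupby("model").size().sort_values(ascending=False).head(20))
```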

  • burncap@alien.topB · 11 months ago

    I have had WD HDDs twice (Black ones, though), and both times they just died for no particular reason.

  • moh8disaster@alien.topB · 11 months ago

    Funny you should mention Backblaze.

    They post annual reports on how many of their drives failed during the year.

    Seagate was always at the top in failure percentage, by a large margin, ahead of WD, HGST (which is now WD), and Toshiba.

    Seagate gives the most TB per $, but meh…
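    For context, the percentage in those reports is an annualized failure rate: failures divided by drive-days of service, scaled up to a year. A minimal sketch of that arithmetic, with invented example counts rather than actual report figures:

```python
# Annualized failure rate (AFR): failures per drive-day of service,
# scaled up to a full year. Counts below are invented examples.

def afr(failures: int, drive_days: int) -> float:
    """Annualized failure rate as a fraction (0.015 == 1.5%)."""
    return failures / drive_days * 365

example_models = {
    # model: (failures, drive-days in service) -- made-up illustrative data
    "MODEL_A": (120, 2_500_000),
    "MODEL_B": (35, 2_500_000),
}

for model, (failures, days) in example_models.items():
    print(f"{model}: AFR = {afr(failures, days):.2%}")
```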

  • BackToPlebbit69@alien.topB · 11 months ago

    And some of you guys told me not to get WD Red drives on Prime Day.

    Mine are still working, so there’s that.

  • rukawaxz@alien.topB · 10 months ago

    Where did you buy them, by the way? I wonder if it was from Newegg, since they have a suspiciously low price there.

  • leexgx@alien.topB · 11 months ago

    There’s me being crazy with ZFS RAID-Z3 redundancy (plus 2 backups with less redundancy, RAID-Z2 or RAID 5/SHR-1).

    Multiple drive failures within a short time probably means no extended SMART scans are being performed at all for possible pre-failure detection (recommended every 1 to 3 months), along with a monthly data scrub.
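    A minimal sketch of that kind of scheduled maintenance, assuming smartmontools and ZFS are installed; the device names and pool name are placeholders, and most setups would drive this from smartd/cron or the NAS’s own scheduler rather than a script:

```python
import subprocess

# Placeholder device and pool names -- adjust to your own system.
DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]
POOL = "tank"

# Start an extended (long) SMART self-test on each drive; the drive runs
# its internal surface scan in the background.
for drive in DRIVES:
    subprocess.run(["smartctl", "-t", "long", drive], check=True)

# Scrub the ZFS pool so checksum errors are found and repaired from
# redundancy while that redundancy still exists.
subprocess.run(["zpool", "scrub", POOL], check=True)
```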

  • zepsutyKalafiorek@alien.topB · 11 months ago

    First, sorry for your loss. Whatever the cause, if they really aren’t working it’s still potentially a big data loss and a hit to the wallet.

    It is weird for it to happen at the same time. Have you checked them on another machine? On a different SATA/SAS controller? If it is, for example, a power supply issue, it may happen to your other drives soon too.

    Maybe they took a hard physical hit from something?

    Just trying to help with finding the culprit, since it is so unlikely that 3 have died in such a similar time frame.
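    If you do re-test them on another machine, the quickest first look is the SMART data. A small sketch using smartmontools’ JSON output (smartctl 7.0 or newer), with a placeholder device path; run it with root privileges:

```python
import json
import subprocess

DEVICE = "/dev/sdb"   # placeholder device path

# `smartctl -a -j` dumps the full SMART report as JSON (smartmontools 7.0+).
out = subprocess.run(["smartctl", "-a", "-j", DEVICE],
                     capture_output=True, text=True)
report = json.loads(out.stdout)

print("Overall SMART assessment passed:",
      report.get("smart_status", {}).get("passed"))

# Attributes commonly watched as pre-failure signs on spinning disks.
watch = {"Reallocated_Sector_Ct", "Current_Pending_Sector",
         "Offline_Uncorrectable", "UDMA_CRC_Error_Count"}

for attr in report.get("ata_smart_attributes", {}).get("table", []):
    if attr["name"] in watch:
        print(f'{attr["name"]}: raw={attr["raw"]["value"]}')
```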

  • cgimusic@alien.topB · 11 months ago

    Isn’t it a bit pointless to blank out the drives’ serial numbers but not the barcodes that encode those same serial numbers?