Like the Phoenix...

Ello again,

After a partial RAID failure, and issues from that which led to the boot drive being wiped, the site is alive once more, risen once again from the ashes...

On the 13th of December one of the RAID drives, from that batch of 1.5 TB drives that even Seagate admitted to me had known issues, up and died completely: the motor wouldn't spin, and on boot the BIOS reported a SMART failure.

This led to me borrowing/stealing my father's PC, piecing the RAID back together (thank goodness for software RAID), and buying a new set of 3TB drives for the server.
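
For anyone in the same boat: I don't have my exact commands any more, but reassembling a software RAID on a borrowed machine goes roughly like this with Linux's mdadm (assuming an md array; the device names are just examples):

    # Scan the borrowed PC's drives for RAID superblocks
    sudo mdadm --examine --scan

    # Assemble the array from whatever members survived;
    # --run starts it even if it's degraded (missing a member)
    sudo mdadm --assemble --run /dev/md0 /dev/sdb1 /dev/sdc1

    # Check the array's state and watch for "degraded"
    cat /proc/mdstat
    sudo mdadm --detail /dev/md0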


After getting the server back online, Dad's system was running the old RAID with a spare. That didn't last long: one of the two remaining drives failed, so the array ran degraded while we pulled the data off. We worked through the night to get the data across, but over SSH this was going to take DAYS... until I did some research.


The reason it was going to take so long was the CPU overhead of the default cipher SSH2 uses, which is nice and secure but something I didn't need, since we weren't going over the net, only a local LAN. I found online how to tell SSH to use a cheaper cipher, fed that into rsync, and it took about 4-8 hours (from what I recall) to transfer around 2TB of data, a lot better than the original estimate of 72-150 hours.
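
The trick was something along these lines (arcfour was the usual cheap cipher back then; newer OpenSSH has dropped it, and aes128-ctr is a fast choice instead; paths and hostname here are made up):

    # Tell rsync to run its transport over ssh with a cheaper cipher
    rsync -avP -e "ssh -c arcfour" /mnt/raid/ user@server:/mnt/raid/

    # On newer OpenSSH (arcfour removed), a fast AES mode works instead
    rsync -avP -e "ssh -c aes128-ctr" /mnt/raid/ user@server:/mnt/raid/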


Nonetheless I got it running, up until Jan 6th when another of the new 3TB drives SMART-failed and downed the server. A quick swap and an overnight rebuild later, and the server has run fine since.
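
Again assuming an mdadm array (device names are examples), the swap-and-rebuild goes roughly:

    # Mark the dying drive failed and pull it from the array
    sudo mdadm /dev/md0 --fail /dev/sdc1
    sudo mdadm /dev/md0 --remove /dev/sdc1

    # (physically swap the drive, partition the new one to match)

    # Add the replacement; the array rebuilds onto it automatically
    sudo mdadm /dev/md0 --add /dev/sdc1

    # Watch the overnight rebuild tick along
    cat /proc/mdstat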


The question I'm left with is: is my computer killing the drives, or have Seagate drives taken a turn for the worse, hence the shorter warranties?
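
One way I might be able to tell: smartmontools' smartctl can dump each drive's SMART attributes, and a few of them point at the host rather than the disk (assuming smartmontools is installed; /dev/sda is an example device):

    # Overall pass/fail verdict
    sudo smartctl -H /dev/sda

    # Full attribute table; a few worth watching:
    #   Reallocated_Sector_Ct / Current_Pending_Sector - the disk itself is failing
    #   UDMA_CRC_Error_Count - errors on the cable/controller side, i.e. the computer
    sudo smartctl -A /dev/sda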