Budget 41-50k: Building a NAS (Unraid)

Going to watch this thread, especially for case suggestions. I have a similar requirement of 8-10 drives, although my mobo is mini-ITX and I have an SFX PSU.
I don't mean to hijack the thread, but a few suitable case suggestions would help. I was looking for something compact, and the closest thing I could find in stock was the Cooler Master MasterBox NR600 placed horizontally.
 
It might be just me, but I would not want to put all my eggs in one basket. Rather than one PC holding all my storage drives, I would split them across two systems, one backing up the other, for redundancy.

OT: You can also look at the used market. I recently bought an HP Z240 SFF workstation for 20k with an Intel Xeon E3-1225 v5 and 16GB DDR4 RAM; it can fit 4 HDDs. I also bought used Seagate 4TB Enterprise 7200rpm drives for 4.5k each. Going to do Windows Server 2019 > Storage Spaces.

Redundancy only makes sense if a need for HA arises. Otherwise, for home-lab equivalents, nope.

Storage Spaces is a complete and utter failure. Unless it has been radically redesigned, it is very slow.
If I do intend to use Plex, what are my options?

Plex HW transcoding means NVIDIA NVENC or Intel Quick Sync for now. AMD is not officially supported. @cyberwarfare is following the scene more closely than me.

If using Jellyfin: I tried this on an RPi 4 and it worked fine.
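As an aside, both Plex and Jellyfin lean on ffmpeg for transcoding, so you can check which hardware encoders your ffmpeg build exposes by reading `ffmpeg -encoders`. The helper below is a hypothetical sketch (the function name and the trimmed sample output are mine, not from any Plex or Jellyfin tooling):

```python
# Hypothetical helper: scan `ffmpeg -encoders` output for the common
# hardware H.264 encoder names used by Plex/Jellyfin transcoding.
def available_hw_encoders(encoders_output: str) -> set[str]:
    """Return the known hardware H.264 encoder names present in the
    text produced by `ffmpeg -encoders`."""
    known = {"h264_nvenc", "h264_qsv", "h264_vaapi", "h264_amf"}
    found = set()
    for line in encoders_output.splitlines():
        for name in known:
            # encoder names appear as whole tokens in the listing
            if name in line.split():
                found.add(name)
    return found

# Trimmed sample of real-world ffmpeg output for illustration:
sample = """\
 V....D h264_nvenc           NVIDIA NVENC H.264 encoder (codec h264)
 V..... h264_qsv             H.264 / AVC (Intel Quick Sync Video) (codec h264)
"""
print(sorted(available_hw_encoders(sample)))  # ['h264_nvenc', 'h264_qsv']
```

In practice you would feed it `subprocess.run(["ffmpeg", "-encoders"], capture_output=True, text=True).stdout`.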
Having a split system defeats the purpose of a NAS, and it's not great economically.

BTW, for the server you bought, do they sell spares like used HBAs and NICs?

Get the HBAs and NICs from used server dealers. Unless you need that much bandwidth, a single card should do the job. As for HBAs, try to pick up the 9211s.
I recommend you go with a mobo that supports ECC memory and opt for it; it is extremely useful for NAS/backup purposes. I think the Ryzen PRO 4350G is perfect for your build, as it is a "PRO" CPU and has official ECC support.

Here we go again with someone pushing for ECC. If you can afford it, great. Otherwise, for homelabs, I would say no. My earlier systems have been without ECC, and my issues have all been with HDD failures.
If I use Unraid, do I still need RAID controllers? Unraid seems to be a non-standard RAID setup, but that setup seems OK to me.

You basically need to configure the drives as non-RAID and let Unraid do the job. This gives more flexibility than standard RAID, where you are limited to identical hardware.
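For anyone new to how Unraid's parity differs from standard RAID: one parity drive protects any mix of sizes, as long as parity is at least as large as the biggest data drive, and usable space is the sum of the data drives. A quick sketch of that capacity math (illustrative code, not Unraid's own):

```python
# Unraid-style capacity math (illustrative, not Unraid code): the
# largest drive(s) serve as parity, everything else is data.
def unraid_usable_tb(drives_tb: list[float], parity_count: int = 1) -> float:
    """Usable TB when the largest drive(s) are assigned to parity."""
    if parity_count >= len(drives_tb):
        raise ValueError("need at least one data drive")
    ordered = sorted(drives_tb, reverse=True)
    data = ordered[parity_count:]  # everything except the parity drive(s)
    return sum(data)

# A mix of 4 x 8 TB and 4 x 4 TB with single parity:
mix = [8, 8, 8, 8, 4, 4, 4, 4]
print(unraid_usable_tb(mix))  # 40 (one 8 TB drive becomes parity)
```

Note how mixed sizes are no problem, which is exactly the flexibility mentioned above.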
I don't mean to hijack the thread, but a few suitable case suggestions would help. I was looking for something compact, and the closest thing I could find in stock was the Cooler Master MasterBox NR600 placed horizontally.

I would not recommend the Fractals; I nearly cooked my HDDs in one.

Why don't you make your own? I am planning that with acrylic; going to start work on it next month (finally).
 
I would not recommend the Fractals; I nearly cooked my HDDs in one.

Why don't you make your own? I am planning that with acrylic; going to start work on it next month (finally).
Unfortunately, building a case is beyond my level of DIY. I would be really interested in how your NAS case turns out though; please do share, and say if you would be willing to build a few more for fellow TE members. I am sure there would be some interest.
 
Unfortunately, building a case is beyond my level of DIY. I would be really interested in how your NAS case turns out though; please do share, and say if you would be willing to build a few more for fellow TE members. I am sure there would be some interest.

It's even beyond my level; that's why I tried roping in @truegenius to help if possible.
 
Redundancy only makes sense if a need for HA arises. Otherwise, for home-lab equivalents, nope.

Storage Spaces is a complete and utter failure. Unless it has been radically redesigned, it is very slow.

Here we go again with someone pushing for ECC. If you can afford it, great. Otherwise, for homelabs, I would say no. My earlier systems have been without ECC, and my issues have all been with HDD failures.

You basically need to configure the drives as non-RAID and let Unraid do the job. This gives more flexibility than standard RAID, where you are limited to identical hardware.
Not really. RAID is not a backup, so you need some sort of backup for your RAID array; hence the recommendation for two separate systems instead of one, since he already has 4 drives of each size.
I would recommend that OP use the 4 x 8TB drives as his primary RAID array and the 4 x 4TB drives as a backup for the primary, to back up at least the important data.
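To put numbers on that split, here is a minimal sketch of single-parity capacity math (drive counts are from the suggestion above; the helper name is mine):

```python
# Rough fit check for the suggested split: primary = 4 x 8 TB in a
# single-parity array, backup = 4 x 4 TB. Illustrative numbers only.
def raid5_usable_tb(count: int, size_tb: float) -> float:
    """Single parity over equal-sized drives: (n - 1) * size."""
    return (count - 1) * size_tb

primary = raid5_usable_tb(4, 8)  # 24 TB usable
backup = raid5_usable_tb(4, 4)   # 12 TB usable with parity
print(primary, backup)           # 24 12
```

The backup pool is half the primary, which is why the suggestion is to back up only the important data, not the whole array.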

Also, for the OS I would not suggest Unraid at all. If he wants to go the Linux route, FreeNAS/TrueNAS Core is the only recommendation, mainly because Unraid doesn't use ZFS; it uses Btrfs/XFS, which have their own downsides. Then again, given ZFS's RAM requirement, he'd need at least 48GB of ECC memory (FreeBSD recommends 1GB of ECC memory per TB of storage).
So if OP is comfortable with Windows and not comfortable with using the CLI for everything, Storage Spaces is the way to go, though there is an added cost of licensing. The main draws are familiarity with Windows and ease of setup for things like LACP/link aggregation. Storage Spaces also has better flexibility for removing/replacing drives, and it is far better at detecting and reacting to drive failures, where FreeNAS would just panic. I understand why people avoid Storage Spaces, with ReFS being proprietary and a big black box, but it just works, and if you still don't want to use ReFS you can always use NTFS. If the network is entirely Windows, there is no point in adding the complexity of Linux into the mix.
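The 1GB-per-TB ZFS rule of thumb mentioned above works out like this for the drives in question (a sketch of the arithmetic, nothing more):

```python
# The FreeBSD/FreeNAS rule of thumb: roughly 1 GB of RAM per TB of
# raw storage, with 8 GB as the commonly cited minimum.
def zfs_ram_gb(raw_tb: float, gb_per_tb: float = 1.0, floor_gb: float = 8.0) -> float:
    """Recommended RAM in GB for a ZFS pool of raw_tb terabytes."""
    return max(floor_gb, raw_tb * gb_per_tb)

raw = 4 * 8 + 4 * 4     # 48 TB raw across the 8 drives discussed
print(zfs_ram_gb(raw))  # 48.0, matching the 48 GB figure above
```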

I am testing both Storage Spaces and FreeNAS ZFS this weekend and will be happy to post benchmark results. I will be using a 240GB SSD for caching on both. I've heard parity speeds are poor in Storage Spaces but mirror is much faster, and adding an SSD speeds the system up by quite a lot.
I have no issues with FreeNAS or Unraid personally; I can set up and use both without any problems. The problem arises when we have to troubleshoot. Having been familiar with Windows for the past 15 years, I can troubleshoot basically anything wrong with it much more quickly and easily than in Linux, where I have to post questions online and wait a day or more for a resolution, which I might not be able to afford.
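For what it's worth, a crude sequential-write timing of the kind such benchmarks start from can be sketched in a few lines. Real testing should use fio, CrystalDiskMark, or similar; this only times writing a buffer to a temp file:

```python
# Minimal, filesystem-agnostic sequential-write timing. This measures
# whatever filesystem hosts the temp directory, so point TMPDIR at the
# pool under test.
import os
import tempfile
import time

def seq_write_mb_s(total_mb: int = 64, block_kb: int = 1024) -> float:
    """Write total_mb of zeros in block_kb chunks; return MB/s."""
    block = b"\0" * (block_kb * 1024)
    blocks = total_mb * 1024 // block_kb
    with tempfile.NamedTemporaryFile(delete=False) as f:
        start = time.perf_counter()
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force data to disk, not just page cache
        elapsed = time.perf_counter() - start
        path = f.name
    os.unlink(path)
    return total_mb / elapsed

print(f"{seq_write_mb_s():.1f} MB/s")
```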

And yes, ECC is a must for any data storage. If the OP is at all concerned about losing his data, here is a great source on the reasons to use ECC:
Regardless of what file system you're using, if you care about data integrity, you want to be using ECC RAM. It is estimated that computers experience 1 random bit flip per 4GB of RAM per day. If you have 16GB of RAM on your file server, that means you're likely to experience random bits flipping 4 times every day. This can cause problems at all levels of your system and can negate many of the benefits of ZFS and ReFS. The Oracle blog has a fantastic post on this topic. Using ECC RAM has been shown to almost entirely eliminate these errors.

Now then, on to the file systems.
Source: ZFS on Linux vs Windows Storage Spaces with ReFS | by Brian Smith | @brismuth’s blog
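The quoted bit-flip estimate is straightforward arithmetic; here it is spelled out (these are the blog's numbers, not independent measurements):

```python
# The quoted estimate: 1 random bit flip per 4 GB of RAM per day.
def expected_flips_per_day(ram_gb: float, gb_per_flip: float = 4.0) -> float:
    """Expected daily bit flips under the blog's rule of thumb."""
    return ram_gb / gb_per_flip

print(expected_flips_per_day(16))  # 4.0 flips/day, as the quote says
print(expected_flips_per_day(48))  # 12.0 for a 48 GB ZFS box
```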
 
Not really. RAID is not a backup, so you need some sort of backup for your RAID array; hence the recommendation for two separate systems instead of one, since he already has 4 drives of each size.
I would recommend that OP use the 4 x 8TB drives as his primary RAID array and the 4 x 4TB drives as a backup for the primary, to back up at least the important data.

Also, for the OS I would not suggest Unraid at all. If he wants to go the Linux route, FreeNAS/TrueNAS Core is the only recommendation, mainly because Unraid doesn't use ZFS; it uses Btrfs/XFS, which have their own downsides. Then again, given ZFS's RAM requirement, he'd need at least 48GB of ECC memory (FreeBSD recommends 1GB of ECC memory per TB of storage).
So if OP is comfortable with Windows and not comfortable with using the CLI for everything, Storage Spaces is the way to go, though there is an added cost of licensing. The main draws are familiarity with Windows and ease of setup for things like LACP/link aggregation. Storage Spaces also has better flexibility for removing/replacing drives, and it is far better at detecting and reacting to drive failures, where FreeNAS would just panic. I understand why people avoid Storage Spaces, with ReFS being proprietary and a big black box, but it just works, and if you still don't want to use ReFS you can always use NTFS. If the network is entirely Windows, there is no point in adding the complexity of Linux into the mix.

I am testing both Storage Spaces and FreeNAS ZFS this weekend and will be happy to post benchmark results. I will be using a 240GB SSD for caching on both. I've heard parity speeds are poor in Storage Spaces but mirror is much faster, and adding an SSD speeds the system up by quite a lot.
I have no issues with FreeNAS or Unraid personally; I can set up and use both without any problems. The problem arises when we have to troubleshoot. Having been familiar with Windows for the past 15 years, I can troubleshoot basically anything wrong with it much more quickly and easily than in Linux, where I have to post questions online and wait a day or more for a resolution, which I might not be able to afford.

And yes, ECC is a must for any data storage. If the OP is at all concerned about losing his data, here is a great source on the reasons to use ECC.

Source: ZFS on Linux vs Windows Storage Spaces with ReFS | by Brian Smith | @brismuth’s blog
Let me share my experience. I ran a FreeNAS server with 4x2TB drives, of which one drive was parity. I had 16GB of non-ECC memory in the system. I used the server to back up data from my computer and to watch movies over the network. RAM usage never went above 6GB. Although the recommendation is 1GB per terabyte of storage, for home use you don't have to follow it strictly; 16GB should be enough for OP's use case. Also, ZFS and FreeNAS are more geared towards enterprise, while Unraid might be more user-friendly. I am not sure, but data recovery if the RAID array goes belly-up is much easier in Unraid compared to FreeNAS. Let us not forget, OP is on a budget: RAID cards, 48GB of ECC memory, a board that supports ECC, and all that is not in his budget.
 
Let me share my experience. I ran a FreeNAS server with 4x2TB drives, of which one drive was parity. I had 16GB of non-ECC memory in the system. I used the server to back up data from my computer and to watch movies over the network. RAM usage never went above 6GB. Although the recommendation is 1GB per terabyte of storage, for home use you don't have to follow it strictly; 16GB should be enough for OP's use case. Also, ZFS and FreeNAS are more geared towards enterprise, while Unraid might be more user-friendly. I am not sure, but data recovery if the RAID array goes belly-up is much easier in Unraid compared to FreeNAS. Let us not forget, OP is on a budget: RAID cards, 48GB of ECC memory, a board that supports ECC, and all that is not in his budget.
ASRock/ASUS provide support for unbuffered ECC, and almost all Ryzen chipsets support unbuffered ECC, right from A320 to X570. And when some of us mentioned HBAs and Intel NICs, we all meant used, not new.

That's why there was a rant from Linus regarding Intel not providing ECC support on desktop-grade processors.
 
@pguy My used server/desktop guy has told me that he has an HP Z420 with a Xeon E5-2690 8-core, NVIDIA Quadro 2000, 32GB ECC memory, and 8 x 3.5-inch hard-drive caddies for Rs. 45,000. This system can take 8 hard drives and will suit your needs quite well; I can get him to reduce it by 1-2k more at most.
 
@pguy My used server/desktop guy has told me that he has an HP Z420 with a Xeon E5-2690 8-core, NVIDIA Quadro 2000, 32GB ECC memory, and 8 x 3.5-inch hard-drive caddies for Rs. 45,000. This system can take 8 hard drives and will suit your needs quite well; I can get him to reduce it by 1-2k more at most.
Does the guy do any old HP MicroServers?
 
Not really. RAID is not a backup, so you need some sort of backup for your RAID array; hence the recommendation for two separate systems instead of one, since he already has 4 drives of each size.
I would recommend that OP use the 4 x 8TB drives as his primary RAID array and the 4 x 4TB drives as a backup for the primary, to back up at least the important data.

Also, for the OS I would not suggest Unraid at all. If he wants to go the Linux route, FreeNAS/TrueNAS Core is the only recommendation, mainly because Unraid doesn't use ZFS; it uses Btrfs/XFS, which have their own downsides. Then again, given ZFS's RAM requirement, he'd need at least 48GB of ECC memory (FreeBSD recommends 1GB of ECC memory per TB of storage).
So if OP is comfortable with Windows and not comfortable with using the CLI for everything, Storage Spaces is the way to go, though there is an added cost of licensing. The main draws are familiarity with Windows and ease of setup for things like LACP/link aggregation. Storage Spaces also has better flexibility for removing/replacing drives, and it is far better at detecting and reacting to drive failures, where FreeNAS would just panic. I understand why people avoid Storage Spaces, with ReFS being proprietary and a big black box, but it just works, and if you still don't want to use ReFS you can always use NTFS. If the network is entirely Windows, there is no point in adding the complexity of Linux into the mix.

I am testing both Storage Spaces and FreeNAS ZFS this weekend and will be happy to post benchmark results. I will be using a 240GB SSD for caching on both. I've heard parity speeds are poor in Storage Spaces but mirror is much faster, and adding an SSD speeds the system up by quite a lot.
I have no issues with FreeNAS or Unraid personally; I can set up and use both without any problems. The problem arises when we have to troubleshoot. Having been familiar with Windows for the past 15 years, I can troubleshoot basically anything wrong with it much more quickly and easily than in Linux, where I have to post questions online and wait a day or more for a resolution, which I might not be able to afford.

And yes, ECC is a must for any data storage. If the OP is at all concerned about losing his data, here is a great source on the reasons to use ECC.

Source: ZFS on Linux vs Windows Storage Spaces with ReFS | by Brian Smith | @brismuth’s blog

I agree, but having two systems is mainly HA rather than a backup. You will keep them in sync, so the chances of the data getting wiped on the secondary as well are higher. Better to just have a cloud backup like B2.

Dude, Storage Spaces sucks big time. I used it with enterprise drives on a C2750U. I get better speeds from the RPi 4 with a USB HDD attached!

Coming to the ZFS thing: yes, ZFS is the ultimate. It's the equivalent of Quadro vs GeForce. But it is not the be-all and end-all. Unraid is far more geared towards consumers.

The ECC myth is again a 99th-percentile thing; it's the equivalent of a freak issue if you don't subscribe to the viewpoint. If the OP needs complete 100% protection, then he should do 3-2-1 backups, with HA and whatnot.
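A 3-2-1 plan (at least 3 copies, on 2 different media types, with 1 offsite) is easy to sanity-check mechanically. This is a hypothetical checker; the data model is made up for illustration:

```python
# Hypothetical 3-2-1 checker. Each copy is a dict with a "media" type
# and an "offsite" flag; the schema is invented for this sketch.
def satisfies_3_2_1(copies: list[dict]) -> bool:
    """True if the plan has >= 3 copies, >= 2 media types, 1 offsite."""
    media = {c["media"] for c in copies}
    offsite = any(c["offsite"] for c in copies)
    return len(copies) >= 3 and len(media) >= 2 and offsite

plan = [
    {"media": "hdd", "offsite": False},    # primary NAS
    {"media": "hdd", "offsite": False},    # second backup box
    {"media": "cloud", "offsite": True},   # e.g. Backblaze B2
]
print(satisfies_3_2_1(plan))  # True
```

Drop the cloud copy and the same plan fails, which is the point being made above about two synced local systems.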

ASRock/ASUS provide support for unbuffered ECC, and almost all Ryzen chipsets support unbuffered ECC, right from A320 to X570. And when some of us mentioned HBAs and Intel NICs, we all meant used, not new.

That's why there was a rant from Linus regarding Intel not providing ECC support on desktop-grade processors.

Ryzen, yes. APUs: only the PRO models, I think.

Don't bother with new HBAs or NICs.
 
I wish, man; I would buy those in a heartbeat. One guy in Andheri had them, but they broke into it trying to get in and bent the chassis up; they have no clue what these MicroServers are. I think that one was an N40L or N54L, I don't remember.
 
@pguy My used server/desktop guy has told me that he has an HP Z420 with a Xeon E5-2690 8-core, NVIDIA Quadro 2000, 32GB ECC memory, and 8 x 3.5-inch hard-drive caddies for Rs. 45,000. This system can take 8 hard drives and will suit your needs quite well; I can get him to reduce it by 1-2k more at most.

And power draw? This won't be cheap to run, and it will mostly be a v1 or v2 chip.

Rather, I would suggest picking up a mobo with ECC and Xeon support, starting with an i3 and unbuffered RAM, then moving to ECC or a Xeon and so forth.

If doing Ryzen, same.
 
It's even beyond my level; that's why I tried roping in @truegenius to help if possible.
I think that to get 10-12 HDDs of storage, it will be easier to modify a current open-air case like the Core P3/P5 than to build a new case.
(Alternatively, I can provide a modular attachment for my case, Open Air X, as I had that in mind if enough users needed it :cool:. Though you won't be able to use its side-mounted 360mm horizontal radiator installation, you can still use a 240mm top-mounted one.)
 
Just wondering... have any of you tried Ceph? It's a b**ch to set up, but if you want to run it as a Kubernetes cluster, rook-ceph is a pretty decent option. It is linearly scalable, provides S3/Samba and block storage on the same file system, you don't RAID anything, and you can hot-plug drives and such.
You can create a nice storage system out of RPis and keep adding more if you need more storage. If you set up replicas properly, backups become irrelevant.
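One caveat on capacity: with Ceph's replicated pools, usable space is roughly raw capacity divided by the replica count (size=3 is the common default). A sketch of the math; erasure-coded pools have different overheads:

```python
# Replicated-pool capacity math (illustrative only; erasure coding
# and Ceph's own overheads change the real numbers).
def ceph_usable_tb(raw_tb: float, replicas: int = 3) -> float:
    """Approximate usable TB for a replicated pool."""
    if replicas < 1:
        raise ValueError("replica count must be >= 1")
    return raw_tb / replicas

# e.g. 12 RPi nodes with one 4 TB drive each:
print(ceph_usable_tb(12 * 4))  # 16.0 TB usable at 3 replicas
```

That 3x overhead is also why "replicas make backups irrelevant" is really a redundancy claim, not a backup one; replication won't save you from an accidental delete.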
 
Found a guy on facebook selling these for 2184/-
 
Redundancy only makes sense if a need for HA arises. Otherwise, for home-lab equivalents, nope.

Storage Spaces is a complete and utter failure. Unless it has been radically redesigned, it is very slow.

Plex HW transcoding means NVIDIA NVENC or Intel Quick Sync for now. AMD is not officially supported. @cyberwarfare is following the scene more closely than me.

If using Jellyfin: I tried this on an RPi 4 and it worked fine.

Get the HBAs and NICs from used server dealers. But unless you need that much bandwidth, a single card should do the job. As for HBAs, try to pick up the 9211s.

Here we go again with someone pushing for ECC. If you can afford it, great. Otherwise, for homelabs, I would say no. My earlier systems have been without ECC, and my issues have all been with HDD failures.

You basically need to configure the drives as non-RAID and let Unraid do the job. This gives more flexibility than standard RAID, where you are limited to identical hardware.

I would not recommend the Fractals; I nearly cooked my HDDs in one.

Why don't you make your own? I am planning that with acrylic; going to start work on it next month (finally).
Plex HW transcoding with NVENC only works if you have a Plex Pass or Premium; dunno which one of those. If you have an Intel iGPU then it'll use that for free; if not, it falls back to the CPU.
Learned it the hard way when my 3300X almost choked trying to transcode when I streamed to my projector :'(
 