Budget 90k+ Dedicated Server and everything around it - budget 4L

Basically, I need to run a dedicated server locally, which will host a number of real-time applications. I need everything from the UPS to the LAN card, monitor, etc. The budget is around 4L, and I would be running either Ubuntu or Windows Server.

  1. What is your budget?
    • 400K
  2. What is your existing hardware configuration (component name - component brand and model)
    • None
  3. Which hardware will you be keeping (component name - component brand and model)
    • None
  4. Which hardware component are you looking to buy (component name)? If you have already decided on a configuration, please mention the component brand and model as well; this will help us fine-tune your requirement.
    • None
  5. Is this going to be your final configuration, or would you be adding/upgrading a component in the near future? If yes, please mention when and which component.
    • Nothing
  6. Where will you buy this hardware? (Online/City/TE Dealer)
    • Kolkata
    • Open to online purchase
  7. Would you consider buying second-hand hardware from the TE market?
    • No
  8. What is your intended use for this PC/hardware
    • Server
  9. Do you have any brand preference or dislike? Please name them and the reason for your preference/dislike.
    • None
  10. If you will be playing games, which type of games will you be playing?
    • No, but I might need video conversion power
  11. What is your preferred monitor resolution for gaming and normal usage?
    • No gaming
  12. Are you looking to overclock?
    • No
  13. Which operating system do you intend to use with this configuration?
    • Windows Server or Ubuntu Server
 
Have you explored a cloud-based server? It might be much cheaper, the only requirement being a fast, low-latency internet connection.
Otherwise, I guess you would need a Xeon or equivalent processor (perhaps on a dual-socket motherboard), plus loads of RAM, storage, SSDs, etc.
 
Yeah, I would suggest you go the cloud route: a VPS or something like that. Maintaining the physical hardware yourself will be a major headache. What if some part fails? How will you diagnose it, not to mention the downtime?
 
Actually, it needs to be on the LAN, and the apps are only supposed to work within that network. So I do need a dedicated local server.
 
Whitebox or Branded? Do you have a preference?

Next: what do you need to run? No need to name the apps, just the services (HTTP, SQL, etc.), or better still, specifics (IIS/nginx/Apache, MS-SQL/MySQL, etc.). This matters because you need to optimise your server towards CPU, memory or storage, or try to balance all three.

Next, the UPS: how much backup capacity do you need, in hours? This depends directly on the above.
 
It would basically serve as the backend to different web and mobile apps.
Different apps and backends mean multiple VMs.

You need to get a better idea of the workload and what's currently running it; this is highly subjective. You could be bottlenecked on the network side if you put too much on one machine, so you could split it across two machines depending on the workload.

I would suggest you try an AMD Threadripper or a 24-core EPYC. Something like this -> https://www.newegg.com/Product/Product.aspx?Item=N82E16819113466&cm_re=epyc-_-19-113-466-_-Product
Assess the workload and pick the motherboard, RAM and network components accordingly. You can start low at 32 or 64 GB of RAM and work upwards as the memory load grows. Ideally a server should max out at 60% or lower on average to accommodate spikes, and the CPU should average 20-30% across cores so intensive scripts don't slow everything down.
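If it helps, a minimal sketch of checking those averages (Python, assuming the third-party psutil package is installed; the one-minute window is just an example):

```python
# Sample CPU and RAM utilisation once a second for a minute, then print
# the averages to compare against the ~20-30% CPU / <60% RAM targets.
import psutil

SAMPLES = 60
cpu, ram = [], []
for _ in range(SAMPLES):
    cpu.append(psutil.cpu_percent(interval=1))   # % across all cores
    ram.append(psutil.virtual_memory().percent)  # % of total RAM in use

print(f"avg CPU: {sum(cpu) / len(cpu):.1f}%")
print(f"avg RAM: {sum(ram) / len(ram):.1f}%")
```

Run it during peak hours so the baseline actually means something.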

Storage and hard disks again depend entirely on the workload. Are you doing significant hot storage? Is there a separate storage server? Caching? You would need NVMe drives or SSDs, and each comes in consumer and enterprise variants.

If you have an enterprise internet connection, say 1 Gbps, you could also consider a mixed setup where you use Amazon Glacier for storage (a bit complicated), but then you get rid of the whole data-storage headache. There are others like Backblaze that also provide it, and they have a nifty calculator too: https://www.backblaze.com/b2/cloud-storage-pricing.html
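For illustration, a rough sketch of pushing an archive into a Glacier vault with boto3; the vault name and file are placeholders, and AWS credentials are assumed to be configured already:

```python
# Upload a (pre-encrypted) backup archive to an Amazon Glacier vault.
# Requires boto3 (pip install boto3) and existing AWS credentials
# (~/.aws/credentials or environment variables).
import boto3

glacier = boto3.client("glacier")
with open("backup.tar.gz.enc", "rb") as f:       # hypothetical archive
    resp = glacier.upload_archive(
        vaultName="server-backups",              # hypothetical vault
        archiveDescription="weekly server backup",
        body=f,
    )
print("archive id:", resp["archiveId"])          # needed later to retrieve it
```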

Buying servers online can be very costly, and it often makes sense to buy parts that are available locally, which reduces the chance of extended downtime significantly. But stuff like storage is a big PITA. Even if you go with completely local storage, I suggest you use one of the above services to back it up.
 
I prefer to build it myself, so whitebox.

Mostly we would be running apps in Docker containers, across quite a mix of techs and stacks. We won't need a lot of storage; it's basically a compute- and RAM-heavy type of server, and we would need to support quite a large number of socket connections across apps.

For the UPS: power cuts are minimal here on corporate lines, but I think something like an hour of backup would be safe, enough for us to take measures if needed.

Thanks a tonne for your inputs!
 
Yeah, quite right: mostly Docker containers.

The idea is to keep things on a single machine, and we plan on taking routine HDD backups to external disks. There won't be much data (they are not image/video-heavy apps).

AMD Threadripper looks cool; does it support all the virtualization stuff? Are these available locally?

I would imagine caching and Redis/in-memory database usage across the apps.

Amazon Glacier and the others look cool for backing up things like images/videos. We would want to keep the rest of the data with us only and not in the cloud, so I am guessing we can write custom scripts to achieve that.

Thanks a tonne for your inputs!
 
Yes, Threadripper supports virtualisation.

If your apps are compute-heavy and will scale across cores, I would highly recommend the higher-end EPYC server processors instead of Threadripper, and spending the rest on RAM. Looking at the prices, the budget would increase a bit. These processors support up to 2 TB of RAM, AFAIK.

You can do the backup yourself, but it's almost imperative to have an offsite backup (hot and cold), hence the recommendation. For the offsite copy, you can have a script that encrypts the data to your chosen strength before it leaves the building.
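As a minimal sketch of that encrypt-before-upload idea (Python, assuming the third-party cryptography package; the data directory is a placeholder):

```python
# Tar up a data directory and encrypt the archive with a symmetric key;
# the resulting .enc file is what you would ship offsite. Reading the
# whole archive into memory is fine for small backups like these.
import tarfile
from cryptography.fernet import Fernet

key = Fernet.generate_key()              # store this key safely, offline
print("encryption key:", key.decode())

with tarfile.open("backup.tar.gz", "w:gz") as tar:
    tar.add("/srv/app-data")             # hypothetical data directory

with open("backup.tar.gz", "rb") as f:
    encrypted = Fernet(key).encrypt(f.read())

with open("backup.tar.gz.enc", "wb") as f:
    f.write(encrypted)
```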
 
Great.

Since you need compute, you should opt for dual socket at the very least.

I can give you costs for whiteboxes, since we have them and are going to buy spares. These are tax-exclusive, LR prices (my supplier is AIS - Arihant Infosystems, Mumbai). Ensure you buy from enterprise dealers; pricing will be lower.

CPU/mobo/RAM - the dual-socket v3/v4 board (Intel S2600 CW/2R) is around 24K. A CPU (we use the E5-2620 v4, 8C/16T; dual-socketed gives 16C/32T) comes in around the 20-25K mark. RAM is the most expensive part here and will cost a bomb, somewhere around 1K per GB for 8 GB RDIMM sticks.

For remote management, you will need to add the Intel RMM module, around 4K, which gives IPMI capability for remote, data-centre-style access. Do take this.

A tower chassis is around 15K and a 2U around 25K, with an additional amount for dual RPS (redundant power supplies). Chenbro chassis with FSP/Delta PSUs.

Storage will be the next most expensive thing, and I would suggest an all-flash platform with minimal or no HDDs. I would also suggest picking up a decent RAID controller card if you need to write to disk frequently, as this improves IOPS. Initially I would go with Samsung EVO/Pro drives and replace them after 2-3 years. Keep an eye on the wear: take the server offline and check the drives every quarter (a rough sketch of such a check follows).
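Something along these lines, say (Python wrapping smartmontools' smartctl; the device list is a placeholder, and the attribute names vary by vendor, so treat those as assumptions to adapt per drive):

```python
# Print SSD wear-related SMART attributes for a list of drives.
# Assumes smartmontools is installed and the script runs as root.
import subprocess

for disk in ("/dev/sda", "/dev/sdb"):                # hypothetical devices
    out = subprocess.run(["smartctl", "-A", disk],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        # Samsung consumer drives expose these; other vendors differ.
        if "Wear_Leveling_Count" in line or "Total_LBAs_Written" in line:
            print(disk, line.strip())
```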

Network - most enterprise mobos come with dual NICs, but I would suggest throwing in an additional pair; this allows for network segregation if needed, and means no VLAN fiddling on the servers.

Apart from these, there are some advantages to going with Intel boards: they give you advance replacement, which lets you take the system offline at a time of your choosing if the issue is not critical. We have availed of this, and our downtime was, in theory, less than 2 hours; we only had to rebuild our hypervisor and reimport all the machines (we exported them first). Speak to your vendor. If you need this with Supermicro, you will need to pay extra and/or keep one in spare, which is not everyone's cup of tea.

If you can, I would suggest building a NAS/SAN-type system as well. It is useful for throwing backups at, and you get an independent system with no ties to your main one. It can also run VMs, so you can keep several management VMs independent of the main system; our network/management VMs are separate from the main VMs and from storage. It can push the data to the cloud too; ensure you get peering support from your ISP if needed.

Finally, the UPS: spend 30-40K on an entry-level UPS which can run your server for 30/60/90 minutes. This lets you shut the server down gracefully, or ride out a short outage on UPS power if needed (a rough runtime estimate follows).
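Back-of-envelope, runtime is roughly usable battery energy divided by load; every figure below is an assumption to replace with your own numbers:

```python
# Rough UPS runtime estimate for sizing against the ~1 hour target.
battery_wh   = 2 * 12 * 9    # e.g. two 12 V 9 Ah batteries = 216 Wh
inverter_eff = 0.85          # typical line-interactive efficiency
usable_depth = 0.8           # avoid draining lead-acid below ~20%
load_w       = 250           # measured server + switch draw

runtime_min = battery_wh * inverter_eff * usable_depth / load_w * 60
print(f"estimated runtime: {runtime_min:.0f} minutes")   # ~35 min here
```

For a full hour at that load, you would be looking at roughly double the battery capacity.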

On AMD: our next server might be an AMD, but until we see more adoption and better service availability, we are not going for it just yet.

Software - this is something most people ignore. Speak to your team and get this done correctly, once.
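Putting rough numbers on the quote above against the 4L budget (figures in INR thousands; the prices are from this post, while the RAM size, storage and RPS adder are my own placeholder assumptions):

```python
# Rough tally of the whitebox build; figures in INR thousands.
parts = {
    "board (Intel S2600 CW/2R)": 24,
    "2x Xeon E5-2620 v4":        2 * 25,
    "128 GB RDIMM (~1K/GB)":     128,   # assumed capacity
    "Intel RMM":                 4,
    "2U chassis + dual RPS":     35,    # assumed RPS adder
    "SSDs":                      60,    # placeholder
    "UPS":                       40,
}
total = sum(parts.values())
print(f"total: ~{total}K, leaving ~{400 - total}K of the 4L budget")
```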
 
Please only go for enterprise drives. If you're feeling particularly adventurous, check out the write endurance of these drives and then decide if you want to replace them every year.
 
Thanks, guys! This is very useful information. I will spend some time digging into these and will post if there is anything else.
 
There are a few things to consider here. Let me dump my thoughts and confuse everyone. :D
  1. Is this server supposed to run 24/7? How much impact is there if you have, say, 1 hour of downtime every month?
  2. Networking? Does it need something like a 10G public link for hosting things like pron, or just an internal network which is totally isolated?
  3. Did you ever do power and TCO (total cost of ownership) calculations? e.g., http://www.oracle.com/us/products/s...lc/oracle-server-x7-2-power-calc-3811391.html (a rough sketch follows this list)
  4. The BTUs from the above calculator will define how much AC you need.
  5. Do you need remote management, like Dell DRAC/Intel RMM, will you use the remote management capability built into Intel's platform, or will you just make a minion run to the office?
  6. Storage? NVMe/SSD, direct-attached, or remotely attached like NAS/SAN?
  7. A rack server, or a cool-looking chassis?
  8. Do you have a switch? Cisco/Juniper/Arista?
  9. Are you getting an admin for babysitting this server, or will you be the babysitter?
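On point 3, the sketch: a crude 3-year ownership-vs-cloud comparison where every figure is an assumption to replace with real quotes:

```python
# Crude 3-year TCO comparison: owning the box vs renting equivalent VMs.
capex_inr   = 400_000      # the 4L hardware budget
power_w     = 350          # assumed average wall draw incl. UPS losses
tariff_inr  = 8            # assumed commercial rate per kWh
years       = 3

power_cost  = power_w / 1000 * 24 * 365 * years * tariff_inr
own_total   = capex_inr + power_cost

cloud_month = 25_000       # hypothetical rent for comparable capacity
cloud_total = cloud_month * 12 * years

print(f"own  : {own_total:,.0f} INR over {years} years")
print(f"cloud: {cloud_total:,.0f} INR over {years} years")
```

Note this ignores cooling (the BTU point above), admin time and replacement parts, all of which push the owned number up.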
Coming to desktop vs enterprise components: the simple reason enterprise components exist is that they are designed to run 24/7. Don't expect a Threadripper to run 24/7 like an EPYC and not fail; this applies to SSDs as well.

Now, coming to virtualization...
  1. Just use vanilla Linux for the OS.
  2. KVM is highly recommended as the hypervisor; it's part of the Linux mainline too (a quick host sanity check follows this list).
  3. Use SR-IOV for creating and assigning virtual NICs to the VMs. Intel, Broadcom, Cavium and Mellanox all support SR-IOV these days.
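As a minimal sketch (Linux-only, my own illustration rather than anything from the thread), checking that a host can actually run KVM:

```python
# KVM needs hardware virtualisation (vmx on Intel, svm on AMD) exposed
# by the CPU, and /dev/kvm present once the kvm modules are loaded.
import os

flags = open("/proc/cpuinfo").read()
has_hw_virt = "vmx" in flags or "svm" in flags
print("CPU virtualisation extensions:", "yes" if has_hw_virt else "no")
print("/dev/kvm present:", os.path.exists("/dev/kvm"))
```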
Finally, don't buy the best out there; always keep the TCO in mind while choosing the server, then compare it to cloud offerings :p It's easy to suggest a dual 64-core 7 nm EPYC server, but money doesn't grow on trees. :)
 
Extremely true about vanilla Linux plus KVM. I was in the same conundrum, but I took the safe way out with Hyper-V, because you really do not get good chaps with Linux knowledge, especially outside urban areas. And when your physical server goes down, support should be nearby, else you become the focal point for all that anger.

Since he has mentioned Docker, I have a feeling it is going to be either that or Windows Server. Let's see.

SR-IOV: do you see much use for this on 1 Gbps switching? This is something I need to check with 10G, but those switches are out of my reach, and not really needed anyway.

The moment you post an Oracle link, I'm like: he is surely going to confuse us all :p
 
On SR-IOV over 1 Gbps: did it with Intel's e1000 Linux driver, pretty stable, but that was a long time back and we have since switched to others. It is usually very manufacturer-dependent, but Intel's drivers are very stable. Basically, SR-IOV lets every VM get 1G speed on a 1G link; all the work is offloaded to the card, so the CPU stays cool since there are no translations, etc. Some of the advanced cards even implement L2 switches, virtual LANs and a bunch of other acronyms.
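Mechanically, on Linux the virtual functions are carved out through sysfs; a sketch, assuming an SR-IOV-capable NIC and root (the interface name is a placeholder):

```python
# Create SR-IOV virtual functions for a NIC via its sysfs node; each VF
# can then be passed through to a VM as its own PCI device.
from pathlib import Path

iface = "eth0"                                   # hypothetical interface
dev = Path(f"/sys/class/net/{iface}/device")

total = int((dev / "sriov_totalvfs").read_text())
print(f"{iface} supports up to {total} virtual functions")

(dev / "sriov_numvfs").write_text("4")           # carve out 4 VFs
```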
 
Well, I've hosted a lot of Docker containers, so I might as well suggest a build. Your budget is great for the requirement, but it depends on how many instances you plan to run and the compute requirements of each. Our environment consists of Dell R710s and R720s; the host OS is VMware ESXi, on which we install Windows Server as a VM, and on top of that sit the Docker containers. This is not the ideal scenario, but we are not running these in production, only for testing and building apps, so the ease of cleaning up the VMs after a Docker project is done is great.

Now for the server: if you are going to build it yourself, I'd suggest AMD Threadripper, or Threadripper 2 if you can wait. On the blue side I'd suggest the Xeon E5 series; your best bet would be to source these from outside India, motherboard included, as the costs/margins on these products are too high here, making them not worth purchasing locally at all. Go dual socket for Intel; with Threadripper you'd be stuck with a single socket, unless you go with an EPYC CPU, which again you'll have to wait for.

You can also look at pre-owned servers, since you are not using anything for production, and this way you can run more servers for less money. A Dell R710 fully spec'd with dual six-core Xeons and 96 GB of memory is available to us for less than 100K (of course, this is not in India). I am not sure how much shipping and customs to India would cost, but I can find out from my source if you are interested. Of course, there are warranties on this hardware as well.

Also, go with SSDs rather than 10,000 RPM drives; not necessarily enterprise SSDs, but good-quality ones like the Samsung 840s and 850s should serve the purpose well. Don't forget to add a NAS and a UPS for backup, possibly offsite backup as well.
 