71-90K Home Virtualization lab

Status
Not open for further replies.
@ryan: Yes, the primary objective is messaging using Hyper-V. Once that's done, I might want to look into VMware as well. I might bug you later about the virtualization part.
 
Hi, you're going to buy the rig for learning/training, right? Then why spend more for the highest-end CPU like the FX-8350? You'll need a minimum of 3 to 4 host machines to learn many features like DRS/FT properly, not an overly powerful single host. So get 3 to 4 mid-range Bulldozer quad cores with 16 GB RAM each, low- to mid-range cabinets, a mid-range PSU, and one or two low-end machines with RAID to be used as datastores (iSCSI), plus a minimum of 3 to 4 gigabit NICs on all machines.
 
Well @dOm1naTOr, that's exactly what I was trying to say: why even get multiple machines? He can have 2 VMs running on one desktop, and for the rest of the messaging activity he could borrow laptops from friends or use low-end machines. Per the messaging requirement, he needs horsepower only on the Hyper-V host; the rest can be basic systems, because all he'll be doing on those is configuring Exchange accounts and using Lync to communicate from one machine to another.
 
1) Well guys, first the OP needs to specify what kind of VMware hypervisor he'll be using. Will it be a type 1 hypervisor OS (ESX/ESXi), or a type 2 setup where he runs a host OS and uses VMware Workstation inside it?
2) Purpose: if you're eyeing the VCP certification, you need to work on a type 1 hypervisor; type 2 hypervisor apps can be installed on any basic desktop with VT enabled.
3) If you're aiming to simulate vMotion, you'd need a good managed switch and external storage with iSCSI capability (to learn the virtual networking and virtual storage elements).
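On the ESXi side, the vMotion piece of point 3 boils down to a VMkernel interface tagged for vMotion. A hedged sketch of the esxcli commands; the interface name, portgroup name, and IP here are made-up examples, not values from the thread:

```shell
# Hedged sketch: create a VMkernel port and tag it for vMotion on ESXi.
# "vmk1", the "vMotion" portgroup, and the IP are illustrative only.
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion
esxcli network ip interface ipv4 set -i vmk1 -I 10.0.50.11 -N 255.255.255.0 -t static
esxcli network ip interface tag add -i vmk1 -t VMotion
```

The portgroup would normally sit on a vSwitch whose uplink goes to the managed switch (or a dedicated vMotion VLAN).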


I have a home lab where I'm working on my MCITP (70-693) and will be moving to VCP after my exam on the 6th, that is, if I clear it, so I'll be switching from Hyper-V to a VMware-specific environment.

You don't need a managed switch for vMotion. For learning purposes, you can even simulate vMotion between two or three hosts inside a single workstation, and even the datastore can live in the same workstation environment using OpenFiler/FreeNAS.

But for proper learning, go for multiple mid-range machines, be it VMware or Hyper-V. Also, investing in an SSD for learning purposes is just a waste of money.
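If you go the single-workstation route, the datastore can also be a plain Linux VM instead of OpenFiler/FreeNAS. A minimal sketch using targetcli (the Linux LIO target) to publish a file-backed iSCSI LUN; all the names, paths, and IQNs below are illustrative, not from this thread:

```shell
# Hedged sketch: file-backed iSCSI target via targetcli (LIO).
# Names, paths, and IQNs are made-up examples.
targetcli /backstores/fileio create name=ds1 file_or_dev=/srv/iscsi/ds1.img size=50G
targetcli /iscsi create iqn.2014-01.lab.local:ds1
targetcli /iscsi/iqn.2014-01.lab.local:ds1/tpg1/luns create /backstores/fileio/ds1
# Allow the hypervisor's initiator in (substitute your host's initiator IQN)
targetcli /iscsi/iqn.2014-01.lab.local:ds1/tpg1/acls create iqn.1998-01.com.vmware:esxi1
targetcli saveconfig
```

Point the ESXi or Hyper-V software iSCSI initiator at this box and the LUN shows up as shared storage for vMotion/Live Migration practice.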
 
OK guys, here's the scenario. I am planning to run all these machines:

1/2 DCs
6 Exchange servers (high availability)
8 Lync servers (high availability)
SharePoint server (maybe)
2 Win 7 or 8 VMs

Versions included:
Exchange 2010, 2013
Lync 2010, 2013
Win 2008 R2 / Win 2012

Currently I use my office workstation. It's a Dell T310 with 32 GB RAM and can pull up to 10 VMs, so I'm looking at a similar setup but less expensive.
Apart from general learning, it MIGHT also be used for some workload testing.
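As a sanity check on whether a 32 GB box like the T310 covers that list, here's a back-of-envelope RAM total; the per-VM sizes are my assumptions, not figures from the thread:

```shell
# Hedged RAM estimate for the VM list above (per-VM GB figures are assumed:
# 2 GB for DCs and Win 7/8 clients, 4 GB for Exchange/Lync/SharePoint servers).
dc=2; exch=6; lync=8; sp=1; win=2
total=$(( dc*2 + exch*4 + lync*4 + sp*4 + win*2 ))
echo "${total} GB"   # prints "68 GB"
```

Even if the server VMs are squeezed below 4 GB each, running the full HA set concurrently overshoots a single 32 GB host, which supports the multiple-mid-range-hosts advice above.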
 

I wanted a managed switch for 1) NIC teaming (I have 2 × 10 Mbps connections in the same LAG) and 2) the CCNA exam, which I've postponed again till May (I haven't paid for it :p; the switch was lying idle at a friend's place, he's done with his CCIE R&S, so it was just sitting there).
I haven't bought SSDs for this specific purpose; they've been in the desktop (server) since the time I was using it as a desktop for gaming and basic usage.

Post#666 number of the beast
 
Here are my 2 cents.

2 PCs with all the CPU horsepower to run the VMs.
A 3rd PC with a normal configuration but a lot of HDD space and at least 4 × 1 Gb NIC ports.

Install a hypervisor (free Linux OVM) on the 2 PCs and the iscsi-scst driver on the 3rd, then create a balance-alb bond with all the NICs (4 × 1 Gb = 4 Gbps) and install the OVM suite to manage the hypervisor pool from this box. Set up a home iSCSI SAN with this configuration and you won't need more than 60 GB SSDs on the hypervisors.

Create a separate LAN for iSCSI and put 2-port NICs on each of the hypervisors. You can manage the hypervisors either by creating bonds or individually in OVM.

My point is to separate the storage from the processing power so they can be managed independently. At the same time you can provision storage to individual VMs over the iSCSI SAN. If the iSCSI SAN doesn't work out, you can switch to direct-attached storage at any time.
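The balance-alb bond on the storage box can be built with plain iproute2; a hedged sketch, where the interface names and address are examples rather than anything from this thread:

```shell
# Hedged sketch: 4 x 1 Gb balance-alb bond on the iSCSI storage box.
# eth1..eth4 and the address are illustrative names/values.
ip link add bond0 type bond mode balance-alb miimon 100
for nic in eth1 eth2 eth3 eth4; do
    ip link set "$nic" down          # slaves must be down before enslaving
    ip link set "$nic" master bond0
done
ip addr add 10.0.60.10/24 dev bond0  # made-up storage-LAN address
ip link set bond0 up
```

One reason balance-alb fits here: it balances load without any switch-side LACP configuration, so it works with an unmanaged switch, matching the cheap-storage-box idea above.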
 