Quick Recap
My current ESX 4.1 server is aging out (five years old and still running). The server is limited by its 64 GB of RAM, which makes it difficult to run Exchange 2013 simulations and testing. Storage is also a factor: the 8 or so usable TB is split between backups, photo storage, ISOs, snapshots and production/testing VMs. The need for a new server has finally come to a head.
*** Disclaimer *** This is my approach and it may not be exactly what you would want to replicate. You need to evaluate your own needs, budget, etc. before proceeding.
The New Lab Setup
Although I would like to follow Jeff Guillet and build a server like his, I’ve decided to take a different approach. Jeff’s server was a fast, low-memory Hyper-V 2012 R2 box designed around newer hardware, SSDs and data deduplication in Windows 2012 R2. The problem for me is scale: test labs with multiple Exchange 2013 environments need more than 32 GB of RAM, and each 32 GB server block runs roughly $1,000. Scaling to the 128 GB I would ideally want means four of those blocks, or about $4,000 total.
Cost aside, the appeal of that design is that each server becomes a building block: every Hyper-V host has the same RAM, SSDs and dedupe, and there is no need to purchase all of the servers at once. The hosts could be clustered, or VMs could be isolated from each other. All it takes to make this scale is a managed gigabit switch (e.g. a Linksys SRW2024) and a quad-port gigabit NIC in each Hyper-V server. An ideal choice for the NIC is the Intel Pro/1000 VT: it is stable and well supported, handles LACP and jumbo frames, and has optimizations for virtualization.
For my lab I have implemented a building block approach of sorts as well. One block is the 20 TB iSCSI SAN (13 x 2 TB in RAID 6 plus a hot spare) that all of the ESXi servers use for VM storage, and the other block(s) are the ESXi servers themselves that run the VMs. Here is how I want to have my lab set up:

Building Block One – ESXi 5.1
For the first ESXi 5.x server I chose an HP DL580 G5, which is a few years old. The advantages: it takes SAS drives (good for fast boot and faster VMs) and has redundant power supplies, a fast RAID controller, a quad-port Intel Pro/1000 NIC and 128 GB of RAM, all for cheap. I was able to pick up one such server with 2 x 146 GB SAS drives for $575. The downsides are power cost (it has four power supplies) and noise: it is a lot louder than my ESX 4.1 white box server. The server is also not supported for Windows 2012 R2, so rather than Hyper-V I installed ESXi 5.1, which now supports Windows 2012 R2 VMs as guests.
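To put the RAM economics in perspective, here is the back-of-the-envelope comparison that drove the decision (a rough sketch using only the figures quoted in this post; it ignores power, noise and warranty differences):

```python
# Rough cost-per-GB-of-RAM comparison between the two approaches.
# Figures are the estimates quoted in this post, nothing more.
hyperv_block_cost = 1000      # ~$1,000 per 32 GB Hyper-V building block
hyperv_block_ram = 32         # GB of RAM per block
dl580_cost = 575              # used HP DL580 G5 with 128 GB of RAM
dl580_ram = 128               # GB

blocks_needed = dl580_ram // hyperv_block_ram         # 4 blocks to reach 128 GB
hyperv_total = blocks_needed * hyperv_block_cost      # ~$4,000

print(f"Hyper-V blocks: ${hyperv_total} for {dl580_ram} GB "
      f"(~${hyperv_block_cost / hyperv_block_ram:.2f}/GB)")
print(f"Used DL580 G5:  ${dl580_cost} for {dl580_ram} GB "
      f"(~${dl580_cost / dl580_ram:.2f}/GB)")
```

Roughly $31 per GB of RAM versus under $5 per GB is what tipped me toward the older, louder server.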


Building Block Two – 20TB SAN
For my lab environments I like to build and maintain multiple lab scenarios, and even multiple version levels of the same lab. For example, I like to have a lab for each release of Exchange Server 2013 – RTM, CU1, CU2, CU3, SP1, CU5 and so on – for feature comparison, client troubleshooting, answering forum questions and the like. This requires disk space for the VMs themselves as well as for backups and/or snapshots, and with my current setup I simply cannot maintain that many environments. To do so I need a SAN with multiple terabytes of storage. After doing a bit of reading on what other people use for their home labs, I ran into a post about a 32 TB SAN for ~$1,600; the complete write-up can be found here. Perfect! In the end I went a bit lower on storage (I did not like the idea of Velcro in my case or drilling holes for extra drives) and spent a bit more on faster RAM and some external components (a managed switch). Here is the parts list I ended up purchasing:

- Case – Rosewill RSV-L4500
- SAN Hard Drives – 13 x 2 TB Hitachi Ultrastar from eBay
- CPU – AMD FX-6300
- OS and cache hard drives – Transcend SSD340 – 128 GB
- RAM – DDR3 1333/1600 4x8GB Non-ECC
- Power Supply – Corsair HX750 (750 Watt)
- Motherboard – ASRock 970 Extreme4
- Video Card – Cheap PCIe x16 256MB
- Network card – Intel Pro/1000 PT Quad Port
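Before moving on to the software, a quick sanity check on the 13 x 2 TB RAID 6 + hot spare layout mentioned earlier (a sketch that assumes the hot spare sits outside the array and uses the standard RAID 6 cost of two drives' worth of parity):

```python
# Usable capacity of the SAN block: 13 x 2 TB drives, RAID 6 plus one hot spare.
total_drives = 13
drive_size_tb = 2
hot_spares = 1        # assumed to sit outside the array
parity_drives = 2     # RAID 6 uses double parity

array_drives = total_drives - hot_spares
usable_tb = (array_drives - parity_drives) * drive_size_tb
print(f"{array_drives} drives in the array, ~{usable_tb} TB usable")   # 12 drives, ~20 TB
```

Twelve drives in the array works out to the 20 TB quoted below, which leaves room to keep several complete Exchange lab environments, plus their backups and snapshots, online at the same time.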
My plan is to install Windows 2008 R2 for driver compatibility with the various hardware components. On top of that I will run the StarWind SAN software (free edition) to publish iSCSI targets for the ESXi server and VMs as needed, and then install PrimoCache to boost the server’s I/O by dedicating 28 GB of RAM to caching.
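For context, here is what that caching plan leaves for the operating system itself (a trivial sketch based on the 32 GB in the parts list; StarWind and Windows 2008 R2 have to live in whatever remains):

```python
# RAM budget on the SAN box: 32 GB total per the parts list, 28 GB handed to PrimoCache.
total_ram_gb = 32
primocache_gb = 28

remaining_gb = total_ram_gb - primocache_gb
print(f"Left for Windows 2008 R2 + StarWind: {remaining_gb} GB")   # 4 GB
```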
Additional Components
On top of all of this I also purchased a Linksys SRW2024 switch to allow for NIC teaming using 802.3ad (LACP) at both the switch and server level. Each server will have four gigabit NICs configured as a team for greater aggregate bandwidth. I’ve left room on the switch and in the servers for additional NICs in case I want to combine eight ports per server later.
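For what the teaming actually buys, here is the rough math (a sketch with the usual 802.3ad caveat: LACP hashes each flow onto a single link, so any one stream still tops out at 1 Gbps; the aggregate only helps when several VMs or iSCSI sessions are talking at once):

```python
# Theoretical aggregate bandwidth of a 4-port (and possible future 8-port) LACP team.
# Per-flow throughput is still capped at a single link's speed.
link_gbps = 1.0

for ports in (4, 8):
    aggregate_gbps = ports * link_gbps
    aggregate_mb_s = aggregate_gbps * 1000 / 8      # rough MB/s, ignoring protocol overhead
    print(f"{ports} x 1 GbE: ~{aggregate_gbps:.0f} Gbps aggregate (~{aggregate_mb_s:.0f} MB/s)")
```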
Expenditures to Date
$575 – HP Server – 16 cores, 128 GB RAM, 2 x 146 GB SAS, 6 gigabit NIC ports
$80 – Linksys SRW2024 – 24-port gigabit managed switch
$1700 – SAN Server – 6 cores, 32 GB RAM, 20 TB RAID 6, SSDs for OS and cache, 5 gigabit NIC ports
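For the running total (a trivial sketch that just sums the figures above; power and the future UPS purchases are not included):

```python
# Lab spend so far, straight from the list above.
costs = {"HP DL580 G5": 575, "Linksys SRW2024": 80, "SAN server": 1700}
print(f"Total so far: ${sum(costs.values()):,}")   # Total so far: $2,355
```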
Future Needs
$130 each – APC Back-UPS 1500 x 2
$150 each – APC Back-UPS extra battery x 2
Next in the series
In future parts of this series I will dig into the nuances of running a home lab:
- Setup of the lab (hardware, software, etc.)
- Performance – real world and simulated
- Power management – UPS loads (once purchased)
- Noise – how to quiet that home lab (already working on a real-world solution)
- Temperatures – keeping those servers cool at home
Look for the next article in a week or so.