Home Lab v2

I’ve been interested in creating this lab for quite some time.  My legacy lab has outlived its usefulness and is in dire need of a refresh.  Plus, it gives me an excuse to have vSAN, NSX, and the vRealize Suite in the house.  With my new job as an HCI vArchitect, I could only read so much dry material. I needed to roll up my sleeves and dig into the products I’m putting together for customers.  This also gives me an excuse to breathe more life into this blog, as I’ll be documenting the setup process along with any roadblocks I come across.  I’ll also be writing about feature sets and updates as I’m able to work on them.  The next couple of paragraphs are a brief summary of how I came about this journey, as it’s been a couple of years in the making.  Sadly, my dreams of accumulating millions of dollars without doing any work haven’t happened yet.  So until then, this will serve as my distraction.

First step, storage:

My FreeNAS box was becoming increasingly unreliable every time I patched it, and given the drama around the fork a couple of years ago, I wasn’t too confident in its longevity.  Enter Synology.  It made sense from a feature-set perspective, and I was quite pleased with how easy it was to set up and configure. After I picked up the DS1817+ and populated all the bays with 4TB WD Red drives, I had all the storage I could need.  I’m holding off on the SSD cache drive for now, since I’m barely scratching the utilization even with media playback.

Next step, network:

I’ve never been fond of configuring physical switches and routers.  They just seemed more complex than they needed to be, and my lack of technical knowledge beyond L2 doesn’t help. A friend of mine introduced me to Ubiquiti.  He showed off how painless their products were to manipulate, with a near-enterprise feature set at a fraction of the cost.  It didn’t take much for me to be sold after that.  Ubiquiti’s routers, APs, and switches have only gotten better, and I couldn’t be happier with them.  I’ll take their GUI over dealing with HP’s console any day!

All of this took the span of roughly a year or so to come to fruition.  Needless to say, I’m quite pleased with this setup.  They (Synology & Ubiquiti gear) have been running for a while now, even through multiple updates without any hiccups.  Of course now that I say that, all of this will inevitably come tumbling down, knowing my luck.  Either way, I’ll tempt fate having said that and will continue to move on…

Finally, compute:

Moving to Shuttle-based PCs for my server infrastructure seemed like a no-brainer to me.  Since this isn’t a corporate infrastructure, the HCL for a hypervisor or any software-defined solution didn’t matter a whole lot to me.  The main point of this environment is uptime and learning, and making sure the knowledge I’ve gained over the years doesn’t go to waste.  I was looking for a compute infrastructure that didn’t consume too much power, ran quietly, and had a small footprint.

The hardware I chose for the Shuttle environment:

I’ve taken the liberty of detailing the gear I’ve mentioned, as well as the hardware within the Shuttle PCs in my lab, for this post.  I haven’t confirmed it, but I can’t imagine much of the gear listed below is on VMware’s HCL.  But since I’m not planning to call VMware support, it’s not much of a concern to me:

With all of this out of the way, it was time to get to the hypervisor level:

vSphere & vCenter:

I installed vSphere 6.7 on the flash drives for each of the hosts.  I then created an NFS datastore on my Synology and added it to one of the hosts in order to build the 6.7 vCenter.  Once that was completed, I went ahead and added all of the hosts to it.  One of the benefits of the DQ170 is the two onboard 1Gb NICs.  I dedicated the first NIC to a local vSwitch for Management & vMotion. The second NIC I left for the data distributed vSwitch.  My future plan is to bond both links into a single distributed switch and have separate port groups for vMotion, Management, vSAN, and server data.
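The NFS and vSwitch steps above boil down to a handful of esxcli commands run on each host.  A rough sketch — the NAS hostname, export path, and datastore/port-group names here are my own placeholders, not what I actually used:

```shell
# Mount the Synology NFS export as a datastore
# (hostname, share path, and datastore name are placeholders)
esxcli storage nfs add --host=synology.lan --share=/volume1/vmware --volume-name=nfs-lab

# Attach the first onboard NIC to the Management/vMotion standard vSwitch
esxcli network vswitch standard uplink add --uplink-name=vmnic0 --vswitch-name=vSwitch0

# Add a dedicated vMotion port group on that vSwitch
esxcli network vswitch standard portgroup add --portgroup-name=vMotion --vswitch-name=vSwitch0
```

The distributed vSwitch for the second NIC has to be built from vCenter (the Web Client or PowerCLI) rather than esxcli, which is why I only sketch the standard-switch side here.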

Going forward, I won’t be going into detail on simple tasks such as this.  I’m writing this post with the assumption that you have basic VMware knowledge. If you’d like a tutorial, comment below and I’ll make a YouTube video for it.


  • vSAN (v6.5) has a maximum component size of 255GB.  That being said, vSAN will function normally with the drives I’m using, but it does alert that the drives don’t have enough space for the maximum component size.
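To make the 255GB figure concrete: vSAN splits any object larger than 255GB into multiple components, and mirroring for failures-to-tolerate multiplies them again.  A minimal back-of-the-envelope sketch — this is my own simplification, not VMware’s actual placement logic, and it ignores witness components and stripe width:

```python
import math

MAX_COMPONENT_GB = 255  # vSAN's maximum component size

def min_components(vmdk_gb, ftt=1):
    """Rough minimum number of data components for a VMDK.

    Each replica is split into ceil(size / 255GB) components, and
    FTT=1 mirroring keeps two replicas. Witness components and
    stripe-width policy are deliberately ignored in this sketch.
    """
    per_replica = math.ceil(vmdk_gb / MAX_COMPONENT_GB)
    return per_replica * (ftt + 1)

print(min_components(100))  # 100GB VMDK: 1 component per replica, 2 total
print(min_components(600))  # 600GB VMDK: 3 components per replica, 6 total
```

So on my small drives nothing actually breaks; the health check is just warning that a single full-size 255GB component could never fit.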
