Welcome from Day 2 here at VMworld 2017 in Las Vegas.


This is just a quick-and-dirty rundown of my opinions on the topic of vSphere on AWS.


A couple of links from VMware to be aware of, which I used in putting this post together:


Pricing Guide:  https://cloud.vmware.com/vmc-aws/pricing

VMware White Paper: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/cloud/VMW-TO-Cloud-on-AWS-USLET-101-MED-RES.pdf



What is it?

vSphere on AWS is exactly that.  It is a set of VMware SKUs that runs the VMware Cloud Foundation stack (vSAN/NSX/vSphere) on AWS hardware in the AWS region of the customer’s choosing.  Right now, it is available only in the US West (Oregon) region.  I haven't seen a timetable for rolling this out worldwide, but we are still early in this product's lifecycle.

This is now an active product.  Welcome to week 1 of vSphere on AWS being generally available!  Remember that it is still only available in the US West (Oregon) region.


What’s the value prop?

-        This offering allows a customer to QUICKLY spin up a datacenter utilizing AWS hardware and VMware tools that IT already knows how to use.

-        This allows customers to take advantage of the AWS global footprint for geo expansion.  Obviously, from a hosted vSphere standpoint this is a future state; however, access from vSphere on AWS to other AWS resources can be carried to any region via the AWS network.

-        The service can be completely on-demand, allowing the customer to pay for only what they need.  I'll cover cost in more detail further down, but the Reserved Instance model is available as well.



-        Customers MUST start with a minimum of 4 hosts.  Once they have 4 hosts in a region, they can add a single host at a time from there, but 4 is the minimum.

-        Bandwidth between the customer datacenter and the AWS region is still an issue that needs to be dealt with; the cloud adds latency to transactions.  IPsec VPN and Direct Connect services are available here, but they are additional monthly cost considerations.

-        NSX on the customer premises isn’t required, but it greatly enhances the capabilities in the realms of DR and automating failover.  Stretching universal logical switches and routers across sites allows the same IP space to be used in the same VXLAN at both ends.



-        On-demand is ~$32 per hour.  This is 4 hosts (the minimum) up and running.  The customer can reduce that by about half by signing a three-year reserved pricing agreement, or by about a third with a one-year agreement.  Running the bare minimum as a permanently-up, AWS-based datacenter, the customer is looking at roughly $280K per year.  This pricing doesn’t include any data transfer, routable IPs, etc.  This is purely from a compute perspective, and there are other AWS costs to be mindful of.
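As a quick sanity check on that annual figure, here is a back-of-the-envelope sketch assuming the ~$32/hour on-demand rate holds around the clock:

```shell
# Rough annual cost of the 4-host minimum at ~$32/hour, running 24x365
HOURLY=32
ANNUAL=$((HOURLY * 24 * 365))
echo "~\$${ANNUAL} per year"   # roughly $280K, before data transfer and IPs
```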


-        There is also a “Hybrid Licensing” credit available.  Details on how it reduces costs haven’t been released yet, and taking advantage of it will probably require some involvement from VMware's licensing folks.


Where does this fit?

-        Customers that are looking to get out of the hardware business, but still want to manage their legacy infrastructure workloads with the VMware tools that they know and love.  This will be a great fit for them.


-        Customers looking to add a DR site that they can quickly spin up and manage with vCenter.  Data replication and failover orchestration still need to be addressed; customers spending this money are probably looking at Zerto / Veeam with SRM, or Rubrik / Cohesity for that.


-        Customers that have a workload that fits well into the standard amounts of CPU/RAM/storage being offered as part of this SKU will benefit the most from this offering.


I survived.  The VMUG 2017 welcome party was an incredible way to start the week here at VMworld.  The band was fantastic, and I got to meet a bunch of new folks.  What continues to astound me is how welcoming this community is to newcomers, and how approachable the VMUG staff is.  Who would have thought that after a few minutes, I'd be on a first-name basis with Brad, the president of VMUG?  Who would have thought I'd be having a beer literally 4 feet from Michael Dell?  (Picture on my Twitter feed @indylinux)

Well, VMworld 2017 in Las Vegas, Nevada.  I think this is my 12th VMworld overall.  I am still looking for that incredible feeling from the first time I saw vMotion, or Fault Tolerance at work.  It's been a somewhat quiet evolution of products for VMware in my mind, each one bringing the Software Defined Data Center vision closer and closer to reality.  The latest of those is full-on network virtualization with VMware NSX.  I have to say, NSX has me the most curious of all of the recent work put out by the folks at VMware.

I'm sitting here at breakfast, looking over my schedule for the conference.  I think I see a theme.  Do you?

NSX Features Deep Dive:
Today, 11:30 AM – 12:30 PM
Lagoon H, Level 2

The NSX Practical Path:
Today, 2:30 PM – 3:30 PM
Mandalay Bay Ballroom H, Level 2 

How to Describe NSX to your Grandmother:
Today, 4:00 PM – 4:30 PM
VMvillage - VMTN Community Theater

Introduction to VMware NSX for Security:
Today, 5:00 PM – 6:00 PM
Mandalay Bay Ballroom F, Level 2 

Customer Panel on VMware NSX for Automation:
Tomorrow, 11:30 AM – 12:30 PM
Mandalay Bay Ballroom D, Level 2 

Kubernetes Networking Using NSX:
Tomorrow, 12:45 PM – 1:00 PM
VMvillage - VMTN Community Theater

Deploying NSX on a Cisco Infrastructure:
Tomorrow, 1:00 PM – 2:00 PM
Lagoon H, Level 2

NSX Performance Deep Dive:
Tomorrow, 4:00 PM – 5:00 PM
Mandalay Bay Ballroom F, Level 2 

NSX-T Advanced Architecture Concepts:
Aug 30, 10:00 AM – 11:00 AM
Breakers E, Level 2

NSX Design—Reference Design for SDDC with NSX and vSphere: Part 1:
Aug 30, 11:30 AM – 12:30 PM
Oceanside B, Level 2 

NSX Design—Reference Design for SDDC with NSX and vSphere: Part 2:
Aug 30, 1:00 PM – 2:00 PM
Oceanside B, Level 2 

NSX Service Insertion: Platform for Advanced Networking and Security Services:
Aug 30, 2:30 PM – 3:30 PM
Mandalay Bay Ballroom B, Level 2 

NSX Logical Routing:
Aug 30, 4:00 PM – 5:00 PM
Mandalay Bay Ballroom I, Level 2

NSX and VMware Cloud on AWS: Deep Dive:
Aug 31, 10:30 AM – 11:30 AM
Breakers E, Level 2

Advanced VMware NSX: Demystifying the VTEP, MAC, and ARP Tables:
Aug 31, 12:00 PM – 1:00 PM
Lagoon L, Level 2

NSX DMZ Anywhere: Modernizing the DMZ:
Aug 31, 1:30 PM – 2:30 PM
Mandalay Bay Ballroom I, Level 2

If you haven't heard of Let's Encrypt, check it out here.  In a nutshell, Let's Encrypt allows you to get an SSL certificate for your personal site from a trusted CA, FOR FREE!  <applause here>

The process to install a Let's Encrypt certificate and keep it renewed couldn't be simpler.  I've outlined it below.

This guide assumes:

  1. You are running at least Amazon Linux AMI 2017.03
  2. You are using Apache as your web server.  You could use NGINX, but the file locations would be slightly different.
  3. Apache is currently using self-signed certificates for SSL


Installing the SSL Certificates

Install Certbot and get the Certificates
  1. Log into your EC2 instance as ec2-user via SSH
  2. Download the certbot application, and make it executable
    1. wget https://dl.eff.org/certbot-auto
    2. sudo chmod a+x certbot-auto
  3. Run the certbot application to get your certificates.  This will execute a yum install for any necessary packages, including pip and Python
    1. sudo ./certbot-auto --debug -v --server https://acme-v01.api.letsencrypt.org/directory certonly -d Your_FQDN
  4. From here, Certbot will ask you several questions including where to validate with certbot (webroot, typically /var/www/html on Amazon Linux), and an administrative email.  This will place the certificate, private key, and chainfile onto your system.
    • Certificate File : /etc/letsencrypt/live/FQDN/cert.pem
    • Private Key : /etc/letsencrypt/live/FQDN/privkey.pem
    • Full Chain File : /etc/letsencrypt/live/FQDN/fullchain.pem
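To double-check what certbot issued before wiring it into Apache, you can inspect the certificate with openssl (the path in the usage comment uses example.com as a placeholder FQDN):

```shell
# Print a certificate's subject and validity window
inspect_cert() {
  openssl x509 -in "$1" -noout -subject -dates
}

# e.g.:
#   sudo inspect_cert /etc/letsencrypt/live/example.com/cert.pem
```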
Configure Apache to use the new SSL Certificates
  1. Edit your SSL.conf file
    • sudo vi /etc/httpd/conf.d/ssl.conf
      1. Configure SSLCertificateFile to point to  /etc/letsencrypt/live/FQDN/cert.pem
      2. Configure SSLCertificateKeyFile to point to /etc/letsencrypt/live/FQDN/privkey.pem
      3. Configure SSLCertificateChainFile to point to /etc/letsencrypt/live/FQDN/fullchain.pem
  2. Restart your HTTPD process
    • sudo service httpd restart
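Putting those three directives together, the relevant portion of ssl.conf ends up looking something like this (example.com stands in for your FQDN; the surrounding VirtualHost details will vary with your setup):

```apache
<VirtualHost _default_:443>
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/example.com/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/example.com/fullchain.pem
</VirtualHost>
```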


That's all there is to it.  Now you have a certificate from a fully trusted CA protecting your website's SSL connections.  No more untrusted-certificate browser errors for your site.  The next thing to do would be to automate the certificate renewal process.  You can do that by adding the following line to your root user's crontab:

  • 0 6 * * * /home/ec2-user/certbot-auto renew

This will have certbot attempt a renewal every day at 6 AM; certbot only actually replaces certificates that are close to expiring.

Ran across a really cool product today that I have personally been wishing into existence for years.  As a person who spent time in the trenches managing security policies across hundreds of Cisco devices, I always thought it was so much more complex than it had to be.  It turns out that I was right.  Cisco has a new product out called Cisco Defense Orchestrator.

Cisco Defense Orchestrator is a cloud based security policy management tool that can manage all of your Cisco devices across the globe.  Incredible.

Here's a brief listing of the benefits of the product:

  • Single Pane of Glass Management - All rolled into a single web-based SaaS application
  • Consistent Security Policies - Create Security Templates, roll them out with a few clicks
  • Simple Provisioning - Easy Deployments via Template driven rollouts
  • Cloud Based - Incredibly fast deployment and Time to Value
  • More Time - Doing an impact analysis is a breeze and no longer a painstakingly involved process.

Quick link : http://www.cisco.com/go/cdo


Quick tech tip, mostly for me to remember.


When running Docker containers on OS X, things can act a little weird.


Because Docker is actually running inside a VM, when you do a port binding via:


docker run -p localhostport:containerport

the container port is actually bound to the IP of the Docker VM, and not necessarily to localhost.

Be sure to run a docker-machine ip default to verify what the address is for the VM.

So if docker-machine ip default returns, say, 192.168.99.100, then accessing the local port for a mapped container port would be http://192.168.99.100:localhostport rather than http://localhost:localhostport.


Hopefully this saves someone else from pulling their hair out.
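A minimal helper to make the distinction concrete (container_url is a hypothetical function of my own, and 192.168.99.100 is just the typical IP of the default docker-machine VM):

```shell
# On OS X the Docker daemon lives in a VM, so the host side of a -p
# binding is reachable at the VM's IP, not at localhost.
container_url() {
  local vm_ip="$1" host_port="$2"
  echo "http://${vm_ip}:${host_port}"
}

# Typical usage:
#   docker run -d -p 8080:80 nginx
#   curl "$(container_url "$(docker-machine ip default)" 8080)"
```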

I'm a big fan of LastPass.  After the thorough methodology review by Steve Gibson at GRC, LastPass is a great option for password management.  They recently published a blog post with an interesting infographic.


Here it is: