Friday, 26 August 2016

VMworld 2016 wishlist

So, with another VMworld approaching, I thought I'd jot down a few things I would like to see announced. Not saying any of this is going to happen, but it's what I'd like to see...

Bundle NSX with vSphere

I doubt this will actually happen while VMware is making a lot of money on standalone NSX, but at some point, it needs to bundle the product into vSphere (or one of the vSphere/vCloud/vRealize suites). Of course, I'm still smarting at the loss of vCNS from the vCloud Suite, but even the "cut price" NSX for vSphere was not low enough to make it affordable to most organisations.

If NSX is going to be pervasive, it needs to get everywhere, so VMware are going to have to push it out at a lower entry price point to gain mass adoption.

Stable release of vSphere 6.x

Okay, so that may be unfair, but I still read news reports detailing some pretty major bugs. It seems to me that vSphere 6 hasn't been the highest quality release that VMware has ever done. As a result, we have held off going to vSphere 6, and judging by the show of hands at the last VMUG I attended, we're not the only ones.

VMware's answer to Azure Stack

VMware's biggest competition in the private cloud space is only just getting started. If Microsoft are able to make a version of Azure that can run internally and is simple to install and manage (a big "if"), then this will be a big threat to VMware. Microsoft will have the same stack for both private and public clouds and the ability to move between both. A true hybrid cloud.

VMware has been very weak in public cloud (is vCloud Air still a thing?) but has a credible private cloud story. With a concerted effort, it could provide an equivalent stack solution, perhaps leveraging Cloud Foundry for the PaaS platform. But it's going to have to get moving on this before Microsoft deliver something that is actually pretty compelling for many organisations.

vRealize Automation IaaS appliance

vRealize Automation has a reputation for being a bit of a pain to install. Things have apparently improved massively with version 7, but I'm waiting for VMware to remove the Windows IaaS server requirement. I'm hoping that there will be a release that introduces a Linux-based IaaS appliance that we can easily deploy. I'm guessing this is in progress and will be released at some point. When it is, deploying vRA will be a lot simpler.

Let's wait and see what the keynotes reveal...

Friday, 1 January 2016

Things to learn... 2016 edition

As 2015 fades away in the rear view mirror, the prospect of a new year looms, and with it, a chance for me to reflect on the new technologies that are going to become increasingly important in 2016.

For a number of years, I've kept a "Things to learn" note, but I've never blogged about it before. This year, I decided to put this online and will (hopefully) be able to track how I'm doing.

So, here are the things I'm aiming to learn in 2016...


Puppet

I've started to dabble with Puppet and really like what can be done with it. I'm planning on investing a significant amount of time and effort into becoming fluent in it, so that desired-state configuration becomes my default approach to system administration.

Powershell / PowerCLI

I like to think I'm decent at Unix shell scripting, but my Powershell skills are very much at a beginner's level. I've now purchased a couple of ebooks (from Packt Publishing; recommended) on the subject and want to become as comfortable in Powershell as I am in BASH.

I mention PowerCLI alongside Powershell because, although it's an extension to base Powershell, PowerCLI provides all the functionality for managing a VMware environment through Powershell. So it's a pretty big deal in its own right.

VMware vRealize Automation / Orchestrator

This is the year that we aim to phase out our vCloud Director private cloud and replace it with vRealize Automation. It seems that for any customisation, vRealize Orchestrator is the obvious tool, which means learning some (more) JavaScript as well.

Citrix XenApp 7.6

On a purely practical level, I have some older XenApp servers (test and dev) that need to be replaced with the latest XenApp release (currently 7.6). This brings a significant change to XenApp as 7.x is built off the XenDesktop FMA architecture.

It also means a migration from the old Secure Gateway and Web Interface to NetScaler VPX and StoreFront.

I still maintain that XenApp is a great "access-layer" platform for endpoints to connect through. The new version is going to be different and will take some time to understand and then fully exploit.

Microsoft Azure

As Microsoft transition from being the Windows/Office company to a major provider of public cloud services, we're seeing an uptake in Azure usage, primarily through developers using it for "quick and dirty" deployments.

These developments are now transitioning to a production state, which means they need to be looked after, tended, watered and weeded by IT. So learning what can and can't be done with Azure is going to be an increasingly important requirement.

Other Stuff...

This isn't an exhaustive list. There are other things that will need research, development and deployment: Solaris 11, RHEL7, Red Hat IDM (aka FreeIPA), Cisco ASA upgrades, Zabbix monitoring and alerting, and probably other things I've forgotten.

In terms of certifications, I didn't have a great experience in 2015 with my Cisco CCDA and as a result, my CCNA and CCNA Security certifications have expired. I'm not yet sure which (if any) I'll look to retake, especially given the required investment in time.

Having now been doing IT for over 15 years, it's pleasing to see that there is still a whole lot of stuff I don't know and need to learn. It keeps the job interesting. May your 2016 be equally stimulating!

Thursday, 6 August 2015

Not passing the Cisco CCDA exam

"Do not underestimate this exam" - comment seen on the Cisco Learning Network.

Over the years, I've published a few posts with the title "Passing the [insert_cert] exam". Sadly, this post is "Not passing the Cisco CCDA exam". I've taken the exam twice, and failed both times.

While I can't talk about the exam (for NDA reasons), I can talk about the studying...

The CCDA is an odd certification. It sits alongside the more familiar CCNA certifications (R&S, Security, Voice etc.) and, as an "Associate" certification, is classed as entry level. Despite that, reading the Cisco Learning Network discussions reveals that a lot of people only tackle it once they have most, or all, of their CCNP, or have at least taken all the CCNA concentrations. A common theme in the discussion forums I've read is that this is not an easy exam.

My CCNA R&S and Security certifications were coming up for renewal, so I thought I'd give the CCDA a go. I'd bought the (now outdated) 640-863 "Designing for Cisco Internetwork Solutions" book years ago and, while I'd found parts of it interesting, I'd never put the effort into actually studying it.

So I equipped myself with the following:

  • Designing for Cisco Internetwork Solutions (third edition)
  • The Cisco CCDA Official Cert Guide
  • CCDA Simplified

To go alongside that, I also bought the ARCH book, "Designing Cisco Network Service Architectures" which is actually part of the CCDP and goes into a lot more detail. I also read a bunch of the Cisco SAFE reference guides.

All in all, a lot of reading!

I then spent nearly two months studying, going through the above and learning the material. I've seen some people comment that the CCDA is a Cisco sales/marketing certification, and I can sort of see where they're coming from, because it does use a lot of Cisco jargon relating to Cisco products.

However, that's not to say it's easy or that it isn't technically demanding. There are a lot of details to understand and the main challenge is that while it's very broad and in some ways theoretical, the material expects you to have an understanding of some pretty technical details, such as:

  • In OSPF, what does LSA type 7 do?
  • Syslog level 5 is what level of severity?
  • Which H.323 protocol is responsible for call setup and signaling?
  • Which IPv6 routing protocol uses FF02::9?

Quite a lot to understand and know.
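For what it's worth, here's an answer key for those sample questions, sketched as a shell lookup. This is a hypothetical helper of my own; the facts come from the standard protocol references, not from the exam:

```shell
#!/bin/sh
# Hypothetical quick-reference for the sample questions above.
# Answers taken from standard protocol references (RFCs / Cisco docs).
ccda_answer() {
  case "$1" in
    lsa7)    echo "NSSA external LSA (carries redistributed routes through a not-so-stubby area)" ;;
    syslog5) echo "Notification" ;;
    h323)    echo "H.225 (call setup and signalling)" ;;
    ff02::9) echo "RIPng (all-RIP-routers multicast address)" ;;
    *)       echo "unknown" ;;
  esac
}
```

For example, `ccda_answer syslog5` prints "Notification".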

So, in order to cover all the bases, I dived into:
  • Network architectures (three layer, modular enterprise, borderless, collaboration, data centre)
  • Campus LAN and Data centre design
  • Branch office and WAN design
  • IP addressing (both IPv4 and IPv6)
  • Routing protocols: RIPv2, EIGRP, OSPF, BGP
  • Security
  • Wireless
  • Voice
  • Network management protocols

My first attempt, a couple of weeks ago, was a bit shaky. Going into the exam, I felt it was going to be a close thing, and I failed with a score of 752 out of 1000 (the passing score was 790). However, I was able to see the areas I was not strong in and focus on those. So, with nearly two weeks of additional revision and study, I took it again, feeling more confident...

This time I got 777, much closer than before and potentially only a couple of questions away from a pass. Without wanting to sound like a sore loser, I've actually flagged a couple of the questions with Cisco as the wording was very poor and ambiguous. I'm not honestly expecting much to change, but we'll see.

Sadly, this is the end of the road for my Cisco certifications for the time being. My current certs will expire in a few days, so I'll have to take them all again if I want to get back to this point. Disappointing, but that's how it goes sometimes.

So is it worth doing the CCDA? I think so. Once you get past the marketing stuff, there is a lot of good content that helps focus the architect in identifying what's important in designing a network solution. It's not a hands-on exam, but you do learn a lot that can be applied to actual network implementations. The current syllabus is getting pretty old and refers to products that have now gone end-of-life, but the concepts are sound and I assume an update will fix that.

After all these weeks of spending spare time studying, I might take a few days to sit in the sun (weekend's coming!), spend time with my neglected family and play some Elite:Dangerous. I think I deserve it.

Tuesday, 17 February 2015

Thoughts on migrating from vCloud Director

A couple of years ago, VMware provided a "free" upgrade from vSphere Enterprise Plus to the "vCloud Suite" standard edition. This gave enterprises access to the vCloud Networking and Security (vCNS) and vCloud Director (vCD) products, enabling vApp firewalling and routing, self service provisioning and multi-tenancy support. Third party companies such as Veeam and VMTurbo started adding vCloud Director support into their products and the future seemed bright. We had the tools to build private clouds.

Then VMware bought DynamicOps and decided to refocus enterprise customers on what was now called vCloud Automation Center (vCAC). vCloud Director would continue as a Service Provider tool only. As mild compensation, a cut down version of vCAC was added to the vCloud Suite for standard edition users.

With the release of vCloud Suite 6.0, vCD and vCNS appear to have been dropped. While VMware are continuing support for these products through to 2017, it is obvious that they are not the future if you are in the enterprise space.

So what should vCD and vCNS users do?

The answer VMware gave back in 2013, when this happened, was to look to vCAC (now vRealize Automation) to replace the portal aspects of vCD. The accompanying blog post suggested that some vCloud Director functionality would move "up" to vCAC and other functionality would move "down" to vCenter Server.

VMware has been pretty much silent on the subject ever since.

For vCAC/vRealize Automation to successfully replace vCD, it needs to:
  • Support multiple organisations/tenants
  • Enable delegation of organisation VMs to non-IT end users
  • Provide IT with tools to easily assign compute, memory and storage resources to specific organisations
  • Allow for the creation of standard images through a service catalogue
  • Allow for the creation and dynamic implementation of networks and complex vApps
  • Allow for firewall/routing/VPN between vApp networks
  • Provide integration points for third party backup and monitoring tools

At this stage, I'm not sure if vCAC can do this or not. My limited exposure to the product (thanks to a presentation at the South West VMUG) left me with a feeling that to do anything with vCAC required a fair amount of development work and integration with vCenter Orchestrator.

So with vCD's migration path unclear, what about vCNS?

In the knowledge base article, End of Availability (EOA) of vCloud Networking and Security (vCNS) in vCloud Suite 6.0 (2107201), VMware recommends that customers migrate to NSX at a "discounted price". Hmm, so if customers don't pay more, what do they lose? Edge and App firewalls? VPN into vApps? Load balancing? So how will more complex vApps with private networks utilising network pools work in this situation? Will any of this even be possible without NSX?

Again, more questions than answers.

In the past, some customers were burnt when VMware deprecated Lab Manager in preference to vCloud Director, and they've done it again now with vCloud Director to vRealize Automation and vCNS to NSX. This creates a lot of work for customers, for little apparent gain, and does nothing to instil a sense of confidence that the "new" solution is going to be around in five years.

To VMware: you need to improve communication in this area. Customers need to make plans, and the silence regarding on-premises private cloud leaves them uncertain. At the moment, there seems to be no like-for-like migration path that doesn't cost the customer more, both in terms of effort required and additional SKUs.

And the "discounted price" for NSX is frankly insulting. Don't sell enterprises the dream of private cloud, provide the tools to build it, then pull the rug from under us because you have a new product to sell. Providing a discount that expires in a year is useless to organisations that have already submitted their budget requests.

For me, I guess I need to schedule some time in to see what vRealize Automation is actually capable of. But I'll also be watching closely to see what others in our position are doing and if there are any alternative options.

[Update - 4th March 2015:  The VMware knowledge base article referenced above has gone offline. Perhaps VMware are re-evaluating???]

Monday, 16 February 2015

Passing the VCP550D exam

Last year VMware announced that the VMware Certified Professional (VCP) certification would only be valid for two years, ostensibly to ensure that candidates didn't become out of date. Now, I have no problems with recertifying when the certification isn't version specific (e.g., CCNA), but because the VCP is tied to a release of software (VCP4, VCP5 etc.), forcing a recertification does seem a bit like a cash-grab by VMware Education.

With my VCP scheduled to expire next month, I spent a couple of weeks revising and took the exam today. Fortunately I passed with a score of 340 (the passing score is 300). To be honest, I'm a bit disappointed that I didn't score higher, but a pass is a pass and it got the job done.

The exam I took was the VCP550D "delta", which focuses on the differences between vSphere 5.0/5.1 and 5.5. However, it would be worth revising the standard VCP material too as there are a lot of generic questions. The exam blueprint for the 550D is the same as for the 550, which didn't help much.

For revision, I did the following:

  • Took the free VMware vSphere What's New Fundamentals [v5.5] course
  • Took the free VMware VSAN 101 course, which has subsequently been replaced by the VSAN 6.0 course
  • Signed up for the Pluralsight 10 day trial subscription and took the VMware vSphere 5.5 New Features course
  • Built a nested home lab environment to test a bunch of new features. William Lam's OVF template for creating nested ESXi VSAN clusters was very helpful in getting an environment up and running quickly (as was using the vCenter Server Appliance).

There are a number of features that I specifically focussed on when revising because I don't use them day-to-day, including: vSphere Data Protection (we use Veeam), vSphere Replication (we use Veeam), VSAN (we have a SAN/NAS) and VCOPS. Getting hands-on with these features in the lab was extremely helpful, although make sure you're not too rusty on "basic" VCP questions covering networking, storage, DRS/HA, Update Manager etc.

The exam itself is online and open book, but this doesn't make passing it a foregone conclusion. You still need to know your stuff! I found it helpful to have my home lab powered up and logged in, along with the VCOPS dashboard in case I needed to quickly cross-reference something. I made sure I had access to the VMware PDFs (but didn't actually use them). Having access to Google was very useful too(!).

With 65 questions in 75 minutes, there was plenty of time to go through the exam and then have time to review "marked" questions. I did use all my time and didn't finish the review, but, obviously did enough to pass.

If you are a VCP5 holder, you only have until the 10th March 2015 to recertify. Doing the VCP550D is the quickest and easiest way to stay certified.

Wednesday, 17 December 2014

Solaris Live Upgrade, ZFS and Zones

I've been working on this problem for a few days and have only just solved it, so thought it might be worth sharing...

Solaris is a very powerful operating system with some great features. Zones brought Docker-like containers to Solaris back in 2005, ZFS is one of the most advanced filesystems currently available, and the Live Upgrade capability is highly underrated: it's a great way to patch a server while ensuring you have a back-out plan.

All good stuff, but when you put Live Upgrade into a mix of Zones and ZFS, things get a bit flaky.

The issue I was having was that when I ran the "lupc -S" (Live Upgrade Preflight Check) script on my zone, I'd get the following message:

# lupc -S
This system has Patch level/IDR  of 121430-92.
Please check MOS (My Oracle Support) to verify that the latest Live Upgrade patch is installed -- lupc does not verify patch versions.

Zonepath of zone is the mountpoint of top level dataset.
This configuration is unsupported

Oracle has a document on My Oracle Support, "List of currently unsupported Live Upgrade (LU) configurations (Doc ID 1396382.1)", which lists a lot of ways in which Live Upgrade won't work(!). Checking this document for the top-level dataset issue turns up the following text:

If ZFS root pool resides on one pool (say rpool) with zone residing on toplevel dataset of a different pool (say newpool) mounted on /newpool i.e. zonepath=/newpool, the lucreate would fail.

Okay, except that's not what I've got. My zone, s1, has a zonepath set to /zones/s1. The zpool is called "zones" and "s1" is a separate ZFS filesystem within that pool.

What the system is actually complaining about is that the zpool is called "zones" and is mounted as "/zones". The workaround is to set the ZFS mountpoint to be something different from the pool name.

For example, I created a new ZFS filesystem under zones called "zoneroot":

# zfs create zones/zoneroot

Then (and this is the important bit), I set the mountpoint to something else:

# zfs set mountpoint=/zoneroot zones/zoneroot

Running zfs list for this dataset shows:

NAME                            USED  AVAIL  REFER  MOUNTPOINT
zones/zoneroot                  1.80G   122G    32K  /zoneroot

Now, I can create a zone, let's call it "s2":

# zonecfg -z s2
s2: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:s2> create
zonecfg:s2> set zonepath=/zoneroot/s2
zonecfg:s2> verify
zonecfg:s2> commit
zonecfg:s2> exit

On installing this zone, a new ZFS file system is created, /zoneroot/s2.

Now, when running the "lupc -S" command, Live Upgrade doesn't complain about unsupported configurations!
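The rule lupc was enforcing can be sketched as a simple check. This is my simplified reading of the unsupported configuration, in a hypothetical helper function, not the actual preflight logic:

```shell
#!/bin/sh
# Simplified sketch of the rule: a zonepath must not sit directly under a
# pool's top-level mountpoint (e.g. pool "zones" mounted at /zones).
# Hypothetical helper, not part of the real lupc script.
zonepath_unsupported() {
  zonepath="$1"   # e.g. /zones/s1
  pool="$2"       # e.g. zones
  mnt="$3"        # mountpoint of the dataset containing the zone
  if [ "$mnt" = "/$pool" ]; then
    case "$zonepath" in
      "$mnt"/*) return 0 ;;   # zone lives on the pool's top-level dataset
    esac
  fi
  return 1                    # e.g. mountpoint=/zoneroot avoids the issue
}
```

With the original layout, `zonepath_unsupported /zones/s1 zones /zones` succeeds (i.e. the configuration is flagged); with the zoneroot workaround, it doesn't.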

Saturday, 15 February 2014

N36L and N54L in the same vSphere cluster

In the home lab I have a two-node cluster built around the HP Microserver N36L. Recently I took delivery of an N54L which I wanted to add to the cluster.

Although it was straightforward to add the new host, I was unable to vMotion VMs between the two types of CPU as there was a CPU incompatibility.

Okay, so the CPUs are different between the two servers, but they are both AMD, so I figured I could make it work by enabling Enhanced vMotion Compatibility (EVC). Unfortunately, this complained that the CPUs were either missing features, or that the VMs were using these CPU features.

As it turned out, the problem was due to the virtual machines in the cluster.

In order to work around the error, I powered off each VM in the cluster, moved them to a host outside of the cluster, edited the settings of each VM, and selected Options > CPUID Mask > Advanced > Reset all to Default.

This cleared a bunch of flags that had been set at some point (not by me; it must have been an automatic change). Once that was done, I was able to configure the cluster to use EVC for AMD "Generation 2" processors.

I was then able to cold migrate the VMs back to the new cluster and power them on. One problem though: how to move the vCenter Server and its SQL Server database VMs into the cluster. I tried to vMotion them while they were running, but got the same error as above.

The answer to this, courtesy of VMware knowledge base article 1013111, is to open a vSphere Client connection directly to the host running the vCenter and SQL Server VMs, power off the VMs, right-click them and select Remove from Inventory. Then open another vSphere Client connection directly to a host in the EVC-enabled cluster and browse the datastore containing these VMs. Once located, right-click each VMX file and select Add to Inventory. This registers the VMs with the host so that you can power them on (be sure to answer that you moved the VM when prompted). Once booted (which can take some time on low-end servers), open a new vSphere Client session to the vCenter Server and you should see the VMs in the correct cluster.