Change is the only constant

Let me start by wishing you guys all the best in 2017! 2016 has been the biggest rollercoaster of my career: vExpert 2016, VMware EUC Champion, VMworld Las Vegas and Barcelona and, last but not least, VCDX-DTM. All things I am really, really proud of, and which I couldn’t have achieved without the help of my employer, ITQ.


All this made me think. Last year I worked as a team lead for the End-User Computing business unit at ITQ, and we achieved great things: awesome projects, enormous growth (in projects, consultants and publicity) and great awareness at an international level (our CEO was interviewed on stage at VMworld Barcelona about our unique partnership). But I am a guy with a passion for technology, and managing a team as my primary objective didn’t suit me that well at this point in my career. Hmm.. but what does?

Francisco Perez van der Oord at VMworld

For the last three years I have also been working as an evangelist for everything that has to do with VMware’s products, and End-User Computing in particular. Blogging, presenting (at events and at customers) and technical enablement are the things that gave, and still give, me a lot of energy. I started talking to people around me, and now the rollercoaster continues in another direction!

Starting this week I will be working as a Technical Marketing Architect at ITQ. I will be focusing on creating solutions based on the VMware product portfolio and all the content around it: blog posts, white papers and, of course, presentations at events like the NLVMUG. My focus areas will be our customers, our consultants and the VMware community, with a lot of great initiatives that I will be working on. But more about that later..

I am very excited to continue working at an employer with a great focus on personal development, and to take my career and the services at ITQ to the next level!

User Experience Troubleshooting Deep Dive, Part 2: How to monitor?

In the previous post I explained some things about metrics and which ones you can monitor if you want to be aware of what is happening with the User Experience (UX) of your end users. In this post I would like to show you how these metrics can be monitored. vRealize Operations 6.4 is quite complete, with all kinds of out-of-the-box dashboards. But in case you would like to create something that perfectly satisfies your needs (like I did), I will also show you how to create the custom dashboard that I always use.

So where to start? In this case I will be using vRealize Operations (vROps) 6.4. vROps 6.4 has a lot of new features, including the ability to monitor Blast as a protocol and to include metrics from App Volumes and the EUC Access Point. And the very best feature (imho) is the metric selector tool for creating custom XMLs for dashboards. That is what we are going to use in this post.

In the following steps I will guide you through the process of creating a custom dashboard that you can use to monitor the metrics described in the previous post.

First, go to the Content tab in vROps. Open Manage Metric Config and click on ReskndMetric.

Add metric XML to vROps

Add a new XML file by clicking on the + icon and call it custom.xml.

Add a custom XML to vROps

In the XML editor, you can paste XML code or select metrics with the brand new XML metric picker. In this case, you can paste the following XML into the XML editor:
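A metric config XML for vROps follows a simple schema: adapter kinds contain resource kinds, which list the metric keys to show. The snippet below is only an illustrative sketch — the attrkey values are examples for the vCenter adapter's virtual machine object, and the easiest way to get the exact keys for your environment is the metric picker itself:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<AdapterKinds>
  <AdapterKind adapterKindKey="VMWARE">
    <ResourceKind resourceKindKey="VirtualMachine">
      <!-- Example virtual desktop metrics; replace with keys from the metric picker -->
      <Metric attrkey="cpu|usage_average" label="CPU Usage" unit="%" />
      <Metric attrkey="cpu|readyPct" label="CPU Ready" unit="%" />
      <Metric attrkey="mem|usage_average" label="Memory Usage" unit="%" />
      <Metric attrkey="virtualDisk|totalReadLatency_average" label="Disk Read Latency" unit="ms" />
      <Metric attrkey="virtualDisk|totalWriteLatency_average" label="Disk Write Latency" unit="ms" />
    </ResourceKind>
  </AdapterKind>
</AdapterKinds>
```

The same file can contain additional AdapterKind blocks for the Horizon adapter, so session metrics (logon time, PCoIP/Blast) and desktop metrics can live in one config.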


The next step would be to add a custom dashboard with widgets that will be filled with the above metrics.

If you are handy enough, you can add widgets and have interaction between them. I will help you a bit and share the dashboard. You can download it here: Custom UX

The dashboard looks like this (click on it to enlarge) when focusing on a session:

vROps Custom EUC dashboard

or like this when focusing on a virtual desktop:

vROps Custom EUC dashboard

Let’s run through the widgets:

Object Picker
This widget lets you search for an object. This could be an Active Directory user, a session or a virtual desktop. After searching for a certain string, you can select the object and the next widget will be activated.

Object Relationship
This widget shows the object that you selected, including its relationships to other objects. For instance, when you search for a user and select it, a possible relationship to a virtual desktop and a session will be displayed. When selecting a session or a virtual desktop, the next widget will be activated.

Metric Scoreboard (last hour)
When a session is selected, the metrics that are part of the session object are loaded (things like logon times and PCoIP/Blast metrics). If a virtual desktop is selected, the metrics that are part of the virtual desktop object are loaded (things like CPU/RAM usage and disk activity). The metrics in the scoreboard contain a little graph and are clickable. In case you click a metric, the next widget is activated.

Detailed Metrics
The little graphs in the Metric Scoreboard are nice, but if you need more detail you might need another type of graph. All metrics that are clicked in the Metric Scoreboard are added to the Detailed Metrics widget. In this widget you can create a detailed timeline that is filled (and automatically updated) with the different metrics that you would like to use for UX troubleshooting or a root cause analysis. Metrics from different objects can be added, so you can mix and match virtual desktop and session metrics.

Top Alerts
The last widget in the dashboard is Top Alerts. When clicking on an object in the Object Relationship widget, the top alerts for the selected object are displayed, which enables you to find anomalies quite fast in case of issues.

This custom dashboard is mostly used at UX support departments that offer direct support to end users.

I hope this dashboard gives you an idea of how to get in-depth information on UX. In the next posts I will dive deeper into the different metrics that we are now able to check from the dashboard, and I will continue to use the dashboard to show examples of what to look for in case of issues.

User Experience Troubleshooting Deep Dive, Part 1: How to start?

Many customers that I have visited are struggling with their User Experience. Quite often they have the tools to actually monitor it, but they have challenges in interpreting the right information. There are many solutions on the market that can be used to monitor the User Experience, but configuring them to show the right information can be hard. And where should you start? What information is useful? On a regular basis I get questions about which KPIs and metrics should be monitored. And how should you interpret these metrics? This blog series is dedicated to helping you monitor some useful KPIs and possibly improve your User Experience.

As vRealize Operations for View is the most common tool that I see at customers, I will use it as the monitoring solution in one of the next posts in the series. This first post gives you some overall information on metrics.

So how should you get started?

Well, let’s first explain what User Experience (UX) actually is. Traditionally, users worked on their local desktop and ran applications that were installed on the endpoint. If they had a powerful PC, that usually meant their applications ran smoothly; their UX in that case was positive. In case they had a negative UX, 9 out of 10 times we expanded or replaced the endpoint hardware and the problem was solved.

When taking applications and desktops to the data center, you can create the best UX ever, but a lot more dependencies exist that can have a negative impact on the UX. And all of these dependencies (or in our case KPIs/metrics) need to be monitored, so you are aware of the dependencies that you can actually control (like data center compute hardware and storage) as well as the external dependencies that you can’t (like the end user’s internet connection).

The most challenging thing in monitoring the UX isn’t selecting a solution or creating a dashboard for it. The challenge that a lot of customers are facing is knowing what to look for in case of UX issues. So let’s talk about the dependencies (and call them KPIs and metrics) first.

In case of a negative UX, end users first complain that their “desktop is slow and they need more CPUs and RAM!”. The obvious reaction would be to give the user a new virtual desktop with better specs, but most of the time that isn’t the solution.

Smashed computer
You want to avoid situations like these. So proactive monitoring on User Experience is essential!

KPIs and Metrics

To get a better understanding of how to solve UX issues, let’s focus on KPIs and metrics.

UX depends on a great variety of parameters. The following metrics are the ones you could start with as they are the ones I use most.

CPU – Usage in %
The total CPU usage of the VM, as a percentage. Useful when you want to know what the overall usage of a desktop is.

CPU – Usage in MHz
The CPU usage, based on the actual clock speed. Could be useful if certain applications aren’t able to use multiple threads; in that case you could see a single thread using a complete core.

CPU – Ready times
If you have CPU contention, the ready times can be very high. CPU contention means that the virtual CPUs need to wait in line before a physical CPU can handle their calculations.
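vCenter reports ready time as a summation in milliseconds over the sample interval (20 seconds for real-time stats), which isn’t very intuitive. A small Python sketch of the usual conversion into a percentage per vCPU (the function name is mine, and the conversion is the standard one, not something vROps-specific):

```python
def cpu_ready_percent(ready_ms: float, interval_s: float = 20.0, vcpus: int = 1) -> float:
    """Convert a CPU ready summation (milliseconds over the sample
    interval, as reported by vCenter) into a percentage per vCPU."""
    return ready_ms / (interval_s * 1000.0 * vcpus) * 100.0

# A 2-vCPU desktop reporting 4000 ms of ready time in a 20 s sample
# spends 10% of its time per vCPU waiting for a physical CPU:
print(cpu_ready_percent(4000, vcpus=2))  # 10.0
```

A common rule of thumb is to start investigating above roughly 5% ready per vCPU, but treat that as a starting point rather than a hard threshold.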

RAM – Usage
Does what it says: how much RAM a virtual desktop is using.

RAM – Ballooning
Something you need to avoid at all times. It means that an ESXi host is running out of memory and will try to reclaim RAM from the virtual desktops.

RAM – Swap out
Again, something you want to avoid. If a virtual desktop is swapping out, the ESXi host is so short on RAM that it is writing the desktop’s memory to disk.

Protocol – PCoIP/Blast – Roundtrip latency
The total latency between the end user’s endpoint (such as a tablet or a laptop) and the virtual desktop, as a round trip: it is measured from the endpoint to the virtual desktop and back.

Protocol – PCoIP/Blast – Frame rate
The number of frames per second transferred from the virtual desktop to the endpoint. The more frames, the more data is transferred to the endpoint.

Disk – Latency
The latency between the virtual desktop and the datastore. The lower the latency, the quicker an IO request can be handled. Needs to be measured for both reads and writes.

Disk – Read IOPS
The number of IO read requests that can be handled by the storage device. Rule of thumb: the higher this number, the better the performance.

Disk – Write IOPS
The number of IO write requests that can be handled by the storage device. Rule of thumb: the higher this number, the better the performance.

Disk – Free capacity
Also does what it says. Should be measured per disk.

OS – Logon time
The total amount of time a user needs to get through the complete logon process (including profile load, logon scripts, shell load, etc.). In case this number is too high, more in-depth metrics can be loaded.

OS – Uptime
Also does what it says: the total amount of time the desktop OS has been running since the last reboot.

Network – throughput
The total amount of data that is transferred by the virtual desktop’s network card.

Network – transmitted data
The data transmitted (sent) by the virtual desktop’s network card.

Network – received data
The data received by the virtual desktop’s network card.

This is just a brief overview. In one of the next few posts, I will dive deeper into the different metrics, including some best practices around thresholds.
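If you’d rather pull these metrics programmatically instead of (or next to) a dashboard, vROps also exposes them over REST through its Suite API. A minimal Python sketch that only builds the request URL for the latest value of one metric (the hostname, resource ID and metric key are placeholders, and authentication against the Suite API is left out):

```python
from urllib.parse import quote

def latest_stat_url(base_url: str, resource_id: str, stat_key: str) -> str:
    """Build the Suite API URL for the latest value of a single metric
    (GET /suite-api/api/resources/{id}/stats/latest in vROps 6.x).
    Metric keys like cpu|readyPct contain a pipe, so they must be
    percent-encoded in the query string."""
    return (f"{base_url}/suite-api/api/resources/{resource_id}"
            f"/stats/latest?statKey={quote(stat_key, safe='')}")

url = latest_stat_url("https://vrops.example.local", "12345678-abcd", "cpu|readyPct")
print(url)
```

This is handy for feeding UX metrics into your own reporting or ticketing tooling, on top of what the dashboards already show.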

So now you have an idea of the types of metrics that can be used. In the next post I will explain in more detail how these metrics can be monitored with vRealize Operations, including some information on dashboards.

Continue to part 2: Monitoring with vROps