Posted by: markachtemichuk | April 25, 2013

Performancezilla Ready to Crash VMworld 2013

A couple of weeks ago I sent out a tweet asking about interest in advanced technical content for VMworld.


I had a huge response, and the theme was clear: you want more technical content and less marketing.

So I took it upon myself to round up many of my performance-minded and engineering friends and create the following “Extreme Performance” and “Big Data” sessions to meet your advanced technical appetites.  The session speakers are folks who work deep in this technology every day and don’t often get the opportunity to share their data, practices and passion on stage.  These are the people I learn from, so I’m hopeful to get them in front of you.

The following sessions were created to cover the various vSphere performance dimensions of vCenter, CPU, Memory, Network and Storage in significant depth, rather than cramming them all into a single session.

  • Session# 5234: Extreme Performance Series: vCenter of the Universe (with Ravi and Deep)
  • Session# 4811: Extreme Performance Series: Stretch Your Virtual Footprint with Monster VMs (with Seongbeom and me)

“But Mark, where are the Network and Storage sessions?” you ask.  They need a little extra approval ;-) but show your interest in the sessions above, as it will reflect your desire to see the others.  Think of these as an informal mini-track of Performance goodness.  Vote for these to get all of them.

For those interested in Big Data, the following sessions dig deep into huge and in-memory databases:

  • Session# 5190: Big Data: Virtualized Greenplum DB Performance, Scalability and Practices (with Vincent)
  • Session# 5591: Big Data: Virtualized SAP HANA Performance, Scalability and Practices (with Todd and Bob)
  • Session# 5622: Big Data: Virtualizing NoSQL Applications and Practices (with Priti and Rean)

If you want to take a look behind the scenes at how the Technical Marketing team deployed, load-tested and operated the cloud hosting the labs, you won’t want to miss this session:

  • Session# 4684: Hands-on-Labs Cloud: A Performance Deep Dive (with Joey and Josh)

Speaking of Hands-on-Labs, there is a great team pulling together this year’s HOL-SDC-1304 Optimize vSphere Performance lab.  Be sure to leave some time in your VMworld schedule to take this lab.  I promise it will contain information you won’t want to miss.

Lastly, a selfish plug for a panel with my VCDX peers where we’ll focus on pushing the limits of our technology supporting the biggest and baddest workloads:

  • Session# 4679: Software Defined Datacenter Design Panel for Monster VMs: Taking the Technology to the Limits for High Utilization, High Performance Workloads (Michael, Andrew, Mostafa and me)

The goal of all the content above is to highlight technical speakers, go deep into specific technologies and make sure the audience learns something new.  That said, there are so many great sessions, speakers and so much content that it will be hard to vote, and sadly some may be turned down.  Remember, this isn’t your only venue to get in front of the virtualization community; be sure to get involved in your local VMUGs.

Vote NOW eh!

vmMarkA

Posted by: markachtemichuk | September 27, 2012

Poll: Which storage protocol do you use and why in vSphere?

I’ve recently been having some great conversations around different storage protocols and their performance differences/capabilities.  In that light, I wanted to take this opportunity to poll the larger community as well.

Please let me know which is your favorite protocol and why (in the comments).

 

An excellent reference/comparison was done by Cormac Hogan and can be downloaded here.

Posted by: markachtemichuk | September 17, 2012

Performance Best Practices for VMware vSphere 5.1

Once again, VMware has pulled together an incredible team of Performance resources to revise and then release a new Performance Best Practices guide for vSphere 5.1.  Kudos to the team!

Available here: Performance Best Practices for VMware vSphere 5.1

There is a lot of great info in here so it’s hard to pick highlights:

Availability of the C1E halt state typically provides a reduction in power consumption with little or no impact on performance.  When “Turbo Boost” is enabled, the availability of C1E can sometimes even increase the performance of certain single-threaded workloads.  We therefore recommend that you enable C1E in BIOS.  However, for a very few workloads that are highly sensitive to I/O latency, especially those with low CPU utilization, C1E can reduce performance.  In these cases, you might obtain better performance by disabling C1E in BIOS. [page 15, Power Management BIOS Settings]

This really acknowledges the fact that performance tuning is an ‘art’ because it involves understanding your workloads, what they’re doing, and changing parameters that have trade-off effects.  In this case, as long as applications hosted on this physical asset are not sensitive to latency, leaving C1E enabled could give you a small performance advantage.  However, if you don’t know what a particular workload requires, you may be inadvertently impacting it.  Mark’s rule: “Unless you know what you’re doing, defaults are best!”  Believe it or not, a lot of time is spent deciding what default values should be, based on experience, averages, workload safety, etc.
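C1E itself lives in the BIOS, so there’s nothing to script there, but you can at least read back what power management policy each host reports through the vSphere API.  A minimal sketch, assuming pyVmomi; the vCenter address and credentials are placeholders:

```python
# Minimal sketch: report each host's CPU power management policy (pyVmomi).
# The vCenter name and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view

for host in hosts:
    pm = host.hardware.cpuPowerManagementInfo
    # currentPolicy is a string such as "Balanced" or "High Performance"
    print(host.name, "-", pm.currentPolicy if pm else "unknown")

Disconnect(si)
```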

ESXi 5.1 introduces virtual hardware version 9.  By creating virtual machines using this hardware version, or upgrading existing virtual machines to this version, a number of additional capabilities become available.  Some of these, such as support for virtual machines with up to 64 vCPUs and support for virtual GPU hardware acceleration, can improve performance for some workloads.  This hardware version is not compatible with versions of ESXi prior to 5.1, however, and thus if a cluster of ESXi hosts will contain some hosts running pre-5.1 versions of ESXi, the virtual machines running on hardware version 9 will be constrained to run only on the ESXi 5.1 hosts.  This could limit vMotion choices for Distributed Resource Scheduling (DRS) or Distributed Power Management (DPM). [page 17, ESXi General Considerations]

I know many people are excited to start upgrading their clusters.  Keep things like this in mind when planning cluster-wide migrations.  When using mixed versions of software and virtual hardware there will be limitations.  Be sure those are managed in a timely manner.
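If you want to see where a cluster stands before (or during) a migration, the hardware version is exposed through the vSphere API.  Here’s a quick sketch, again assuming pyVmomi with placeholder connection details:

```python
# Sketch: list which VMs are already on virtual hardware version 9 (pyVmomi).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
vms = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view

for vm in vms:
    # config.version is a string such as "vmx-08" or "vmx-09"
    if vm.config and vm.config.version == "vmx-09":
        # These VMs can only run (and vMotion) on ESXi 5.1 hosts
        print(vm.name, vm.config.version)

Disconnect(si)
```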

Be careful when using CPU affinity on systems with hyper-threading.  Because the two logical processors share most of the processor resources, pinning vCPUs, whether from different virtual machines or from a single SMP virtual machine, to both logical processors on one core (CPUs 0 and 1, for example) could cause poor performance. [page 21, Hyper-Threading]

I’ll go even a step further and suggest that pinning is a bad operational practice and will cause you issues.  Leave pinning to the benchmark experts and teams; in day-to-day operational life, the value you’d receive is often negated by the level of management required to ensure that workload is not impacting other services within your infrastructure.
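If you’ve inherited an environment, it’s worth auditing for VMs that already have affinity configured before you worry about anything else.  A hedged sketch along the same lines (pyVmomi, placeholder credentials):

```python
# Sketch: flag VMs with vCPUs pinned to specific logical processors (pyVmomi).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
vms = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view

for vm in vms:
    affinity = vm.config.cpuAffinity if vm.config else None
    if affinity and affinity.affinitySet:
        # affinitySet lists the logical CPU numbers this VM is pinned to
        print(vm.name, "pinned to", list(affinity.affinitySet))

Disconnect(si)
```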

For the best iSCSI performance, enable jumbo frames when possible.  In addition to supporting jumbo frames for software iSCSI, ESXi 5.1 now also supports jumbo frames for hardware iSCSI.  Using jumbo frames with iSCSI can reduce packet-processing overhead, thus improving the CPU efficiency of storage I/O. [page 32, iSCSI and NFS Recommendations]

Yes, enabling jumbo frames can aggravate the network admin, but it is valuable for NFS and iSCSI, especially with 10Gb links.  Take your network admin out for lunch or beverages one day – breaking down the virtualization/network silo is the next operational challenge (see software defined datacenter).
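Jumbo frames only help when the MTU is consistent end to end, so check what your hosts are actually configured with before blaming the network team.  One more pyVmomi sketch (placeholder credentials as before):

```python
# Sketch: report MTU on every standard vSwitch and vmkernel NIC (pyVmomi).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view

for host in hosts:
    net = host.config.network
    for vswitch in net.vswitch:
        print(host.name, vswitch.name, "MTU", vswitch.mtu)
    for vnic in net.vnic:  # vmkernel interfaces carrying iSCSI/NFS traffic
        print(host.name, vnic.device, "MTU", vnic.spec.mtu)

Disconnect(si)
```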

So download this guide and get to know it intimately; it’s one of your core performance resources.

Posted by: markachtemichuk | September 5, 2012

My New Lab Infrastructure

Many people have asked what type of lab gear I use.  I’m especially lucky because I have access to a wide range within my role at VMware, but I wanted to let you know what I’ve settled on for my home lab recently.  Seems people are always wondering what the “performance guy” bought.

I had two overriding goals:

  • It had to be economical as it’s my own equipment
  • It had to be “quiet” since I run it in my office

Servers

After some research and peer recommendations I settled on some white boxes.  Besides being fun to build on your own, it gave me control over the noise factor – these things are SILENT!  I must admit much of the research/effort was previously done by Phillip Jaenke as part of his Baby Dragon architecture.  Another bonus is the IPMI port so I can run them without KVM.

Storage

I owe Jason Nash (@TheJasonNash) a shout out for his advocacy of the Synology platform.  I’ve used an Iomega IX4 in the past but wasn’t loving the performance of that platform.  So it was a debate between a new Iomega (PX6) or the Synology platform.  In speaking with Synology I was excited to hear that VAAI support was coming and that tipped the scale.

Network

In the pursuit of a silent lab it was an easy decision to pick this Cisco fanless model.  It will also allow me to make use of VLANs and place static routes between them if necessary.

Other Notes

To enable both Gb ports on the motherboard you’ll need to follow this thread.

Yes, the memory can be expensive, but it has already dropped since I purchased it.  Shop around.

The motherboard needs recent firmware for the v2 Ivy Bridge processors.  If not, you cannot flash it yourself (you need a working CPU to run the updater), so you’ll have to RMA it.  That said, my boards were only one firmware rev behind so this didn’t affect me.

Yes I am running vSphere 5.1 GA builds on them.

[update] I’ve updated the post to include links to suppliers I’ve used to give a reasonably accurate cost of the system.  All in for me was about $3.5k (as I love to bargain).

[update Sept 17, 2012] The Synology DS151x+ models with DSM4.1 currently won’t work with vSphere 5.1 via NFS or iSCSI :-( Synology is working on the issue; no ETA for resolution.  Guess I need to break out that Iomega IX2 unit I just received.

[update Sept 25, 2012] Synology has created an NFS patch (my preferred connectivity) that is working for me.  If you’re having an issue, open a ticket with them to gain access to it.  There are some misc. reports that a few issues may still remain (sorry Kendrick), so keep an eye out.

As you have probably heard me suggest, vCenter Operations (aka vCOps) is a key technology to use in managing performance and capacity of your virtual environments.  It will be hard for some of us to stop using deep inspection tools like esxtop daily, but I find that, more often than not, vCOps is providing the same information in an easier-to-consume format.  Don’t forget, vCOps is a ‘learning’ tool as well, which means it’s not just monitoring static thresholds, but using custom analytics to monitor workload behavior over time and letting you know when it changes in a negative way.  vCOps can take some time to become fluent in, so I’m excited about this announcement.

VMware has finally launched their new course called: VMware vCenter Operations Manager: Analyze and Predict [V5.0]

For anyone who wants to learn more about using vCOps, I strongly urge you to check out this course.  I’m always interested in feedback as well, so please let me know what your experience was after taking it.

Posted by: markachtemichuk | May 23, 2012

He’s Back!

As of May 16, 2012 I’m proud to announce that I’ve joined the VMware Technical Marketing team as a Sr. Technical Marketing Architect, Performance Specialist.  A long title, I know, but just remember that ‘vmMarkA = Performance’.

I’m very excited to be getting back to my performance roots, where I have a strong passion for ensuring everything can be virtualized successfully, sharing those tips and tricks and evangelizing the capabilities of the virtual platform.  VMware has an extremely strong performance team that I’m very lucky to be working with and drawing upon.  There is honestly very little today that can’t be considered a candidate for virtualization with the right design.

Where have you been?

Since Oct 2011 I’ve been on a sabbatical in the clouds as a Cloud Solution Specialist SE for VMware’s Team Canada.  This was an amazing opportunity to work with VMware’s leading edge cloud portfolio helping to sell and design both private and public cloud offerings.  Cloud is changing the way IT delivers service.  While the cloud is very cool, I personally like to make the clouds go faster.

What’s coming up?

More of the same great info I’ve shared in the past on a regular basis.  As well, if you have any questions, please reach out to me – I love a good challenge.

vmMarkA

Posted by: markachtemichuk | July 12, 2011

VMware vSphere 5.0 – Performance Unleashed

It’s been quiet recently…very quiet…because something BIG was being created…

Finally I’m able to share some of my excitement with the virtualization community around VMware’s next major release – vSphere 5.  You all know I have a passion for performance, so today we start a new journey of education on just what is possible on this cloud infrastructure.  Here’s the start:

What’s New in VMware vSphere 5.0: Performance Whitepaper

Some very cool technical highlights of the ‘Monster’ VM: up to 32 vCPUs and 1TB of RAM per virtual machine.

I used to say one could confidently virtualize 99% of their workloads, but after today’s release, I’m now suggesting one can virtualize 99.9% of workloads.  Over the coming weeks you’ll start to see a stream of performance data demonstrating this enormous capability.

“And one more thing” Eric just posted:

vSphere 5 can virtualize itself and 64-bit guests – Sweet!

Posted by: markachtemichuk | February 17, 2011

Troubleshooting vSphere 4.1 Performance Issues – Updated Whitepaper

I’m excited to highlight that VMware, specifically Chethan Kumar, has released an updated Performance Troubleshooting guide for vSphere.  The document was originally penned by Hal Rosenberg and has been referred to as the ‘Performance Troubleshooting Bible.’  Both of these fine gentlemen work as performance engineers within VMware and have striven to identify common issues and potential resolutions in an easy-to-use framework.

This new edition has been updated specifically for vSphere 4.1 and should be part of every virtual administrator’s core collection.  Whatever you call it, all virtual administrators should not only review this guide for the cool performance information contained within, but should also use it as Step #1 for troubleshooting any performance-related issue.  Using it will help you diagnose all the most common issues seen in the field.

Abstract:

This document provides a step-by-step approach for troubleshooting the most common performance problems in vSphere-based virtual environments.  The steps discussed in the document use performance data and charts readily available in the vSphere Client and esxtop to aid the troubleshooting flows.  Each performance troubleshooting flow has two parts:

  1. How to identify the problem using specific performance counters.
  2. Possible causes of the problem and how to resolve it.
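To give a taste of the counter-driven approach the guide describes, here’s a small sketch that pulls recent CPU ready time for a single VM through the vSphere API.  It assumes pyVmomi; the VM name, connection details and threshold comment are illustrative assumptions, not something from the guide:

```python
# Sketch: sample recent CPU ready time (%RDY-style) for one VM via pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
perf = content.perfManager

# Map "group.counter.rollup" names to counter IDs
counter_ids = {f"{c.groupInfo.key}.{c.nameInfo.key}.{c.rollupType}": c.key
               for c in perf.perfCounter}

vms = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view
vm = next(v for v in vms if v.name == "tier1-db-01")  # placeholder VM name

spec = vim.PerformanceManager.QuerySpec(
    entity=vm, intervalId=20, maxSample=15,   # real-time stats: 20 s samples
    metricId=[vim.PerformanceManager.MetricId(
        counterId=counter_ids["cpu.ready.summation"], instance="")])

for series in perf.QueryPerf(querySpec=[spec])[0].value:
    # ready is reported in ms per 20 s sample; 2000 ms is roughly 10% ready
    print([round(v / 20000 * 100, 1) for v in series.value])

Disconnect(si)
```

As a rough rule of thumb, sustained ready time above about 10% per vCPU is where the CPU flows will usually send you digging.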

I reference this guide at the end of all my presentations and will be updating my links to the new edition.  Performance is often seen as a dark art, but in reality it is simply misunderstood.  I hope resources like this guide will help take everyone’s knowledge to the next level through awareness and practical application.

The document is available here: Troubleshooting Performance Related Problems in vSphere 4.1 Environments

If you have any feedback, feel free to drop me a line and I’ll pass it along.

Posted by: markachtemichuk | January 18, 2011

vSphere Performance Presentation at PEX

I’m very excited to be presenting at VMware’s Partner Exchange 2011 in Orlando.  In this session I’ll be covering why I’m confident you can virtualize your Tier 1 apps on the vSphere platform, practices & configurations to consider and some common troubleshooting steps.

Session Details:

  • Location: Coronado DE
  • Time/Date: Thursday Feb 10 at 2:00pm

So for all you VMware partners out there, I hope you can attend and reach out to me while you’re down there so we can chat performance.

For more information on PEX 2011 and registration, check out this link.

Remember:  Performance is not a barrier.

Posted by: markachtemichuk | December 23, 2010

Do Consolidation Ratios Matter Anymore?

I was on a road trip recently and had a number of discussions about consolidation ratios as a measure of success.  This worried me, especially considering that these were large adopters of virtualization and I was talking with these clients about moving their Tier 1 applications to their virtual platforms.  It made me realize that a number of organizations are still using that ‘old’ ratio as a key performance indicator (KPI) for measuring how successful they are at virtualization.

Yes – back in the day, consolidation ratios were a common measure used for capacity management, generating ROI models and general bragging rights.  This was fine when the only services being migrated were “low hanging fruit” or extremely underutilized servers.  One could expect 10:1, 25:1, even 50+:1 as a ratio.

But can we keep up those high ratios as we migrate Tier 1 services to virtual platforms?

Easy answer: No!

For the obvious reason: these larger workloads are actually using the hardware resources underneath them (be they physical or virtual), so when we start to stack them side by side there are fewer resources to go around.  But this is not a bad thing.  A lower ratio does not mean you’ve failed.  It seems an obvious message, but one I wanted to reiterate as some seem to miss it.
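Here’s some back-of-the-napkin arithmetic to illustrate the point (all numbers are invented for the example):

```python
# Toy example: the same host, two very different workload profiles.
host_ram_gb = 128

light_vm_gb = 2.5   # "low hanging fruit" legacy server, mostly idle
tier1_vm_gb = 32.0  # Tier 1 database actually using its allocation

print(f"light workloads:  {host_ram_gb / light_vm_gb:.0f}:1")  # ~51:1
print(f"Tier 1 workloads: {host_ram_gb / tier1_vm_gb:.0f}:1")  # 4:1
```

Same host, same hypervisor, wildly different ratios; and the 4:1 host may well be delivering far more business value.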

Let’s make sure our vision and focus for virtualization goes deeper than consolidation.  Don’t forget about the many other cool things virtualization brings, like DR enablement, unlocking multicore platforms, simpler management and capacity in the cloud.

Merry Christmas Everyone!
