Input/output (I/O) and performance bottlenecks are well-known server virtualization drawbacks and barriers to wider deployment of high-volume, high-performance applications on virtual machines (VMs).
In this interview, Lewis and Mueting describe the I/O and performance improvements possible with the new Next-Generation AMD Opteron processors with virtualization features called AMD-V and upcoming advances in these chips. Mueting is virtualization solutions manager for AMD and Lewis is the director of commercial solutions for the Austin, Texas-based chipmaker.
What technology limitations are creating barriers to wider virtualization adoption today?
Tim Mueting: Performance is probably the biggest issue. Companies have been taking a pragmatic approach to virtualization, recognizing the limitations from a performance perspective. They've been fairly consistent with the types of applications that they're virtualizing: not the high-volume, high-memory, high-I/O applications that require high performance.
Sounds like it's pretty clear that some applications should not be run on virtual machines (VMs). Is that the case?
Margaret Lewis: This isn't a clear-cut thing. You could say that, depending on the applications used, there's a limit to how many virtual machines you could actually run on a server before the performance becomes unacceptable.
Then again, performance could mean how many virtual machines you can run on one server before performance degrades, how many users you can get into a fixed number of virtual machines, or how many transactions those virtual machines can handle.
Some people in IT are willing to accept virtual machines that run transactions more slowly in exchange for consolidating more of them on one server. Not everybody is facing the same issues.
How does the chipset figure into removing some of these varying and variable performance limitations?
Tim Mueting: We're trying to do things to improve performance from the hardware perspective and take some of the load off of software solutions. Today, I/O is mostly handled within the software, so there certainly can be bottlenecks as you start to add more and more applications on a single server.
Margaret Lewis: Our platform has been doing a better job of handling even those existing solutions because of our design. Our next-generation Opteron processor continues, at an architectural level, to help remove some of the limitations that might keep you from virtualizing certain aspects of your business.
What functionality in the new Opteron chips with AMD-V helps IT managers minimize performance bottlenecks?
Margaret Lewis: Virtualization is very memory intensive. Our AMD Opteron Direct Connect Architecture with integrated memory controller provides a very efficient way to increase memory, [address space] and eliminate bottlenecks commonly found in traditional front-side bus architectures. With Direct Connect Architecture, memory is connected directly to the CPU, and that decreases memory latency. Having memory directly attached to the CPU improves SMP performance, too, because CPUs do not have to share memory bandwidth with each other.
Tim Mueting: In addition, AMD-V provides hardware extensions to improve the performance of translating virtual addresses to physical addresses, as well as performance improvements and isolation when switching between virtual machines. That may seem like a simple process, but it can take a lot of CPU cycles when it's done with software.
Margaret Lewis: Handling memory directly allows us to use hardware boundaries to set up these isolated worlds, these virtual machines, which means better isolation, which translates into better security.
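The address-translation cost Mueting describes can be made concrete with a toy cost model. The sketch below is purely illustrative and is not AMD's implementation: with nested (two-dimensional) paging, each pointer in the guest's page walk must itself be translated through the host's page tables, so a 4-level guest walk over a 4-level host table can touch many more memory locations than a native walk. The function names and level counts here are assumptions chosen for illustration.

```python
# Toy cost model of nested (two-dimensional) page-table walks.
# Illustrative only; not AMD's actual hardware design.

GUEST_LEVELS = 4   # guest page-table depth (e.g., x86-64 4-level paging)
HOST_LEVELS = 4    # host (nested) page-table depth

def flat_walk_refs(levels: int) -> int:
    """Memory references for an ordinary, non-virtualized page walk."""
    return levels

def nested_walk_refs(guest: int, host: int) -> int:
    """Worst-case memory references for a nested walk: each of the
    guest's table pointers needs a full host walk, plus a host walk
    for the final guest-physical address and the guest walk itself."""
    return guest * host + guest + host

native = flat_walk_refs(GUEST_LEVELS)                  # 4 references
nested = nested_walk_refs(GUEST_LEVELS, HOST_LEVELS)   # 24 references
print(f"native walk: {native} refs, nested walk: {nested} refs")
```

The gap between 4 and 24 references is the kind of overhead that hardware page-walk assistance is meant to hide; doing the same bookkeeping in software costs CPU cycles on every translation miss.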
Can this architecture also address the problem of I/O bottlenecks in virtualized environments?
Margaret Lewis: Well, it does help. Our HyperTransport technology, our interconnect for high bandwidth and low latency, will help. I/O is directly connected to the CPU, giving more balanced throughput. But, there are still issues to be resolved in I/O.
Tim Mueting: We'll be addressing those issues in the future. AMD's I/O Virtualization Technology (IOMMU) specification, which we're working on with partners, will help with the performance characteristics of translating virtual addresses to physical addresses and will offer solutions for isolation. You can truly protect virtual machines at the hardware level.
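The isolation idea Mueting raises can be sketched with a toy model of IOMMU-style DMA remapping. This is a hypothetical illustration, not the AMD IOMMU specification: each device gets its own translation table, and a DMA access to a page that is not mapped for that device is blocked, so one VM's device traffic cannot scribble over another VM's memory. The class and method names are invented for this sketch.

```python
# Toy model of IOMMU-style DMA remapping and isolation.
# Illustrative sketch only; not the actual AMD IOMMU specification.

class ToyIOMMU:
    def __init__(self):
        # Per-device translation table: I/O-virtual page -> host-physical page.
        self.tables: dict[str, dict[int, int]] = {}

    def map_page(self, device: str, io_page: int, phys_page: int) -> None:
        """Grant a device DMA access to one host-physical page."""
        self.tables.setdefault(device, {})[io_page] = phys_page

    def translate(self, device: str, io_page: int) -> int:
        """Translate a device's DMA address; block unmapped accesses."""
        table = self.tables.get(device, {})
        if io_page not in table:
            raise PermissionError(f"{device}: DMA to unmapped page {io_page:#x}")
        return table[io_page]

iommu = ToyIOMMU()
iommu.map_page("nic0", io_page=0x10, phys_page=0x8000)  # VM A's buffer
print(hex(iommu.translate("nic0", 0x10)))                # allowed
try:
    iommu.translate("nic0", 0x20)                        # not mapped: blocked
except PermissionError as err:
    print("blocked:", err)
```

The point of doing this remapping in hardware rather than in the hypervisor is the same as with CPU address translation: the check happens on every DMA without burning CPU cycles, and the boundary cannot be bypassed by a misbehaving driver.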
Will developments in hardware virtualization make paravirtualization unnecessary?
Margaret Lewis: I don't necessarily know that paravirtualization is going to cease to exist. I think there are going to be many different ways to achieve virtualization, just like there are different operating systems (OSes). Paravirtualization has an advantage: it frees virtualization software from having to handle some of [the compatibility issues] with some hardware device drivers.
The bad thing about paravirtualization is you have to be able to recompile your OS. So, you'll never be able to run Windows in a paravirtualized Linux world, but you could use older versions and newer versions of Red Hat on the same machine.
It's going to be interesting to see how the software vendors are going to work the underlying extensions that we and Intel have provided into their software. It's an evolutionary path.
What's coming in the next phase of hardware virtualization?
Tim Mueting: There will be more improvements in I/O virtualization and graphics virtualization. As client-side virtualization becomes more mainstream, graphics will become more and more important.
Analysts have said that hardware virtualization technologies like AMD-V are immature. Would there be advantages for IT shops in waiting until the technologies take a few more evolutionary steps?
Tim Mueting: There are benefits to be gained today. For instance, customers say virtualization has reduced the time to provision an application from weeks or months to minutes. It eliminates the need to [buy new servers for each application], do all the configuration and installation of the application from scratch and then configure that application.
Margaret Lewis: We are seeing so many companies achieve higher server utilization rates, increased labor efficiencies and other benefits today. I would hate to think that anyone passed up today's benefits to wait for new technologies.
This article originally appeared on SearchServerVirtualization.com.