Timeline of virtualization development


Timelines

Note: This timeline is missing data for important historical systems, including: Atlas C (Manchester), GE 645, Burroughs B5000

  • 1964
  • 1965
  • 1966
    • IBM ships the S/360-67 computer in June 1966
    • IBM begins work on CP-67, a reimplementation of CP-40 for the S/360-67.
  • 1967
    • CP-40 (January) and CP-67 (April) go into production time-sharing use.
  • 1968
  • 1970
    • IBM System/370 announced (June) – without virtual memory.
    • Work begins on CP-370, a complete reimplementation of CP-67, for use on the System/370 series.
  • 1971
  • 1972
    • Announcement of virtual memory added to System/370 series.
    • VM/370 announced – and running on announcement date. VM/370 includes the ability to run VM under VM (previously implemented both at IBM and at user sites under CP/CMS, but not made part of standard releases).
  • 1973
    • First shipment of announced virtual memory S/370 models (April: -158, May: -168).
  • 1974-1998
    • [ongoing history of VM family and VP/CSS.]
  • 1977
    • Initial commercial release of VAX/VMS (Virtual Memory System), later renamed OpenVMS.
  • 1985
  • 1987
  • 1988
    • SoftPC 1.0 for Sun workstations is introduced by Insignia Solutions. [1]
    • SoftPC appears in its first version for the Apple Macintosh. These versions (Sun and Macintosh) support only DOS.
  • 1994
    • Kevin Lawton leaves MIT Lincoln Laboratory and starts the Bochs project. Bochs is a portable emulator of x86 PC hardware: it emulates the BIOS, the processor and other x86-compatible devices entirely in software, isolated from the rest of the environment. Because every instruction is interpreted, Bochs can run x86 guest software on hosts with other processor architectures (Itanium, x86-64, ARM, MIPS, PowerPC, etc.), and the application itself is multi-platform (BSD, Linux, Windows, Mac, Solaris).[1]
  • 1997
  • 1998
    • June 15, 1998, Simics/sun4m is presented at USENIX '98, demonstrating full system simulation by booting Linux 2.0.30 and Solaris 2.6 unmodified from disk images copied with dd. [2]
    • October 26, 1998, VMware files for a patent on its techniques, which is later granted as U.S. Patent 6,397,242. [3]
  • 1999
    • February 8, 1999, VMware introduced VMware Virtual Platform for the Intel IA-32 architecture.
  • 2000
  • 2001
    • January 31, 2001, AMD and Virtutech release Simics/x86-64 ("Virtuhammer") to support the new 64-bit extension of the x86 architecture. [4] Virtuhammer is used to port Linux distributions and the Windows kernel to x86-64 well before the first x86-64 processor (the Opteron) became available in April 2003.
    • June, Connectix launches its first version of Virtual PC for Windows.[5]
    • July, VMware launches the first x86 server virtualization product.[6]
    • Egenera, Inc. launches its Processor Area Network (PAN Manager) software and BladeFrame chassis, which provide hardware virtualization of each processing blade's (pBlade) internal disk, network interface cards, and serial console.[7]
  • 2003
    • First release of Xen, the first open-source x86 hypervisor. [8]
    • February 18, 2003, Microsoft acquired virtualization technologies (Virtual PC and an unreleased product called "Virtual Server") from Connectix Corporation. [9]
    • Late 2003, EMC acquired VMware for $635 million.
    • Late 2003, VERITAS acquired Ejascent for $59 million.
    • November 10, 2003, Microsoft releases Microsoft Virtual PC, a machine-level virtualization product, to ease the transition to Windows XP.
  • 2005
  • 2006
  • 2007
    • The open-source KVM (Kernel-based Virtual Machine) is merged into the mainline Linux kernel (2.6.20). It provides virtualization only on Linux hosts and requires hardware virtualization support (Intel VT-x or AMD-V).

Year 1960


In the mid-1960s, IBM's Cambridge Scientific Center developed CP-40, the first version of CP/CMS. It went into production use in January 1967. From its inception, CP-40 was intended to implement full virtualization. Doing so required hardware and microcode customization on a S/360-40, to provide the necessary address translation and other virtualization features. Experience on the CP-40 project provided input to the development of the IBM System/360-67, announced in 1965 (along with its ill-starred operating system, TSS/360). CP-40 was reimplemented for the S/360-67 as CP-67, and by April 1967, both versions were in daily production use. CP/CMS was made generally available to IBM customers in source code form, as part of the unsupported IBM Type-III Library, in 1968.

Year 1970


IBM announced the System/370 in 1970. To the disappointment of CP/CMS users – as with the System/360 announcement – the series would not include virtual memory. In 1972, IBM changed direction, announcing that the option would be made available on all S/370 models, and also announcing several virtual storage operating systems, including VM/370. By the mid-1970s, CP/CMS, VM, and the maverick VP/CSS were running on numerous large IBM mainframes. By the late 1980s, there were reported to be more VM licenses than MVS licenses.

Year 1999


On February 8, 1999, VMware introduced the first x86 virtualization product, VMware Virtual Platform, based on earlier research by its founders at Stanford University.

Year 2005

Free desktop virtualization

VMware's Workstation product previously required a substantial licensing fee. In 2005 VMware decided to provide high-quality virtualization technology to everyone for free, releasing VMware Player: it omitted the ability to create virtual machines and did not include the acceleration tools that ship with VMware Workstation. This early corporate play to encourage consumer applications of virtualization went largely unnoticed.

Year 2006

This year, virtualization reached a new playing field with application virtualization and application streaming.

Year 2008

VMware releases VMware Workstation 6.5 beta, the first program for Windows and Linux to enable DirectX 9 accelerated graphics on Windows XP guests [11].

Overview

As an overview, there are three levels of virtualization:

  • At the hardware level, virtual machines can run multiple guest OSes. This is best suited to testing and training that require networking interoperability between several OSes: not only can the guest OSes differ from the host OS, there can be as many guest OSes as there are VMs, as long as enough CPU, RAM and disk space is available. IBM introduced this around 1990 under the name logical partitioning (LPAR), at first only in the mainframe field.
  • At the operating system level, only one OS can be virtualized: the guest OS is the host OS. This is similar to having many terminal server sessions without locking down the desktop. It offers the best of both worlds: the speed of a terminal server session with the benefit of full access to the desktop as a virtual machine, where the user can still control quotas for CPU, RAM and disk. As at the hardware level, this is still considered server virtualization, because each guest OS instance has its own IP address and can therefore be used for networking applications such as web hosting (a minimal sketch of this approach using Linux namespaces follows this list).
  • At the application level, the application runs on the host OS directly, without any guest OS, possibly in a locked-down desktop or even a terminal server session. This is called application virtualization or desktop virtualization, which virtualizes the front end, whereas server virtualization virtualizes the back end. Application streaming, by contrast, refers to delivering applications directly onto the desktop and running them locally; traditionally in terminal server computing, the applications run on the server, not locally, and only screen updates are streamed to the desktop.
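
One way to illustrate the operating-system-level approach on Linux is with kernel namespaces. The following minimal Python sketch is not drawn from any product named in this article; it simply gives a child process its own hostname while sharing the host kernel, and assumes a Linux host with glibc and root privileges (CAP_SYS_ADMIN).

  # Minimal illustration of OS-level isolation: the "guest" process shares the
  # host kernel but receives its own UTS (hostname) namespace.
  # Assumes Linux with glibc and root privileges; the constant is the standard Linux value.
  import ctypes
  import os
  import socket

  CLONE_NEWUTS = 0x04000000  # flag requesting a new UTS namespace (hostname/domainname)
  libc = ctypes.CDLL("libc.so.6", use_errno=True)

  pid = os.fork()
  if pid == 0:
      # Child: detach into a private UTS namespace and rename "its" machine.
      if libc.unshare(CLONE_NEWUTS) != 0:
          raise OSError(ctypes.get_errno(), "unshare failed (root required)")
      name = b"guest-os"
      if libc.sethostname(name, len(name)) != 0:
          raise OSError(ctypes.get_errno(), "sethostname failed")
      print("inside the namespace:", socket.gethostname())  # prints guest-os
      os._exit(0)
  os.waitpid(pid, 0)
  print("on the host:", socket.gethostname())  # the host's hostname is untouched

Full OS-level virtualization combines several such namespaces (PID, network, mount) with resource quotas, which is how each guest instance can be given its own IP address as described above.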

Application virtualization

Application virtualization solutions such as VMware ThinApp, Softricity, and Trigence attempt to separate application-specific files and settings from the host operating system, thus allowing them to run in more-or-less isolated sandboxes without installation and without the memory and disk overhead of full machine virtualization. Application virtualization is tightly tied to the host OS and thus does not translate to other operating systems or hardware: VMware ThinApp and Softricity are Windows-centric (on Intel hardware), while Trigence supports Linux and Solaris. Unlike machine virtualization, application virtualization does not use code emulation or translation, so CPU-related benchmarks run with no changes, though filesystem benchmarks may experience some performance degradation. On Windows, VMware ThinApp and Softricity essentially work by intercepting filesystem and registry requests made by an application and redirecting those requests to a preinstalled, isolated sandbox, thus allowing the application to run without installation or changes to the local PC (a toy sketch of this interception idea appears at the end of this section). Though VMware ThinApp and Softricity both began independent development around 1998, behind the scenes they are implemented using different techniques:

  • VMware ThinApp works by packaging an application into a single "packaged" EXE which includes the runtime plus the application's data files and registry. VMware ThinApp's runtime is loaded by Windows as a normal Windows application; from there the runtime replaces the Windows loader, filesystem, and registry for the target application and presents a merged image of the host PC as if the application had been previously installed. VMware ThinApp wraps all related API functions for the hosted application: for example, a ReadFile call issued by the application must pass through VMware ThinApp before it reaches the operating system. If the application is reading a virtual file, VMware ThinApp handles the request itself; otherwise the request is passed on to the operating system. Because VMware ThinApp is implemented in user mode without device drivers and has no preinstalled client, applications can run directly from USB flash drives or network shares without needing elevated security privileges.
  • Softricity (acquired by Microsoft) operates on a similar principle but uses device drivers to intercept file requests in ring 0, at a level closer to the operating system. Softricity installs a client in administrator mode, which can then be accessed by restricted users on the machine. An advantage of virtualizing at the kernel level is that the Windows loader (responsible for loading EXE and DLL files) does not need to be reimplemented, so greater application compatibility can be achieved with less work (Softricity claims to support most major applications). A disadvantage of a ring-0 implementation is that it requires elevated security privileges to install, and crashes or security defects can affect the whole system rather than being isolated to a specific application.

Because application virtualization runs all application code natively, it can only offer security guarantees as strong as those of the host OS. Unlike full machine virtualization, application virtualization solutions currently do not work with device drivers and other code that runs in ring 0, such as virus scanners. These special applications must be installed normally on the host PC in order to function.
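
Neither VMware ThinApp nor Softricity publishes its internals, and both hook Windows file and registry APIs rather than anything shown here; the toy Python sketch below, with an invented sandbox path and wrapper name, merely illustrates the intercept-and-redirect principle described in this section.

  # Toy illustration of the application virtualization trick: intercept file
  # requests and redirect them into a per-application sandbox, so the program
  # sees its "installed" files without the real system locations being touched.
  import builtins
  import os

  SANDBOX = "/tmp/app_sandbox"     # hypothetical per-application sandbox root
  _real_open = builtins.open       # keep a reference to the genuine call

  def virtualized_open(path, mode="r", *args, **kwargs):
      shadow = os.path.join(SANDBOX, os.path.abspath(path).lstrip(os.sep))
      if any(flag in mode for flag in "wax+"):
          os.makedirs(os.path.dirname(shadow), exist_ok=True)
          return _real_open(shadow, mode, *args, **kwargs)   # writes land in the sandbox
      if os.path.exists(shadow):
          return _real_open(shadow, mode, *args, **kwargs)   # a virtual file shadows the real one
      return _real_open(path, mode, *args, **kwargs)         # otherwise fall through to the host

  builtins.open = virtualized_open  # "intercept" open() for this process only

A real product applies the same pattern to every relevant call (file, registry, loader), which is what allows an application to run without installation, as described above.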

Managed runtimes

Another technique sometimes referred to as virtualization is portable byte code execution using a standard portable native runtime (aka managed runtimes). The two most popular solutions today are Java and .NET. These solutions both use a process called JIT (just-in-time) compilation to translate code from a virtual portable machine language into the local processor's native code. This allows applications to be compiled for a single architecture and then run on many different machines. Beyond machine-portable applications, an additional advantage of this technique is strong security guarantees: because all native application code is generated by the controlling environment, it can be checked for correctness (possible security exploits) prior to execution. Programs must be originally designed for the environment in question or manually rewritten and recompiled to work in these environments. For example, one cannot automatically convert or run a Windows/Linux native app on .NET or Java. Because portable runtimes try to present a common API for applications across a wide variety of hardware, applications are less able to take advantage of OS-specific features. Portable application environments also have higher memory and CPU overheads than optimized native applications, but these overheads are much smaller than those of full machine virtualization. Portable byte code environments such as Java have become very popular on the server, where a wide variety of hardware exists and the set of OS-specific APIs required is standard across most Unix and Windows flavors. Another popular feature among managed runtimes is garbage collection, which automatically detects unused data in memory and reclaims it without the developer having to explicitly free it.
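
Java bytecode is awkward to show inline, so the sketch below uses Python's own portable bytecode as an analogy (Python interprets its bytecode rather than JIT-compiling it); it is illustrative only and is not the Java or .NET toolchain. A JVM or CLR consumes comparable architecture-neutral instructions and translates them to the local processor's native code at run time.

  # A function is compiled once into architecture-neutral "virtual machine language";
  # the same bytecode is produced whether the host is x86, ARM or POWER, and a managed
  # runtime (JVM, CLR) JIT-compiles such bytecode into native instructions when it runs.
  import dis

  def add_interest(balance, rate):
      return balance + balance * rate

  dis.dis(add_interest)  # prints portable bytecode such as LOAD_FAST / RETURN_VALUE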

Neutral view of application virtualization

Given the industry bias in the past, and to take a more neutral view, there are also two other ways to look at the application level:

  • The first type is application packagers (VMware ThinApp, Softricity), whereas the other is application compilers (Java and .NET). Because the former is a packager, it can be used to stream applications without modifying the source code, whereas the latter can only be used to compile the source code.
  • Another way to look at it is from the hypervisor point of view: the first is a "hypervisor" in user mode, whereas the other is a "hypervisor" in runtime mode. Hypervisor is in quotation marks because both behave similarly, intercepting system calls, but in different modes: user mode and runtime mode. The user-mode type intercepts the system calls coming from the runtime mode before they reach kernel mode, while a real hypervisor only needs to intercept system calls using hypercalls in kernel mode. Hopefully, once Windows has a hypervisor (virtual machine monitor), there may even be no need for the JRE and CLR. Moreover, in the case of Linux, the JRE could perhaps be modified to run on top of the hypervisor as a loadable kernel module running in kernel mode, instead of as a slow legacy runtime in user mode. If it were running on top of the Linux hypervisor directly, it should be called a Java OS, not just another runtime-mode JIT.
  • Mendel Rosenblum[3] called the runtime mode a high-level language virtual machine in August 2004. At that time, however, the first type, intercepting system calls in user mode, was not yet widely considered, so he did not mention it in his article; application streaming was still mysterious in 2004.[4] Once the JVM, no longer a high-level language virtual machine, becomes a Java OS running on a Linux hypervisor, Java applications will enjoy the new level playing field that Windows applications already have with Softricity.
  • In summary, the first approach virtualizes the binary code so that an application can be installed once and run anywhere, whereas the other virtualizes the source code using byte code or managed code so that it can be written once and run anywhere. Both are partial solutions to the twin portability problems of application portability and source-code portability. Perhaps it is time to combine the two problems into one complete solution at the hypervisor level, in kernel mode.

Further development

Microsoft bought Softricity on July 17, 2006 and popularized application streaming, giving traditional Windows applications a level playing field with Web and Java applications with respect to ease of distribution (i.e. no more setup required, just click and run). Soon every JRE and CLR may run virtually in user mode, without kernel-mode drivers being installed, so that multiple versions of the JRE and CLR can even run concurrently in RAM.

The integration of a hypervisor into the Linux kernel, and of one into the Windows kernel, may make rootkit techniques such as the filter driver[5] obsolete. This may take a while: the Linux hypervisor is still waiting for the Xen hypervisor and the VMware hypervisor to become fully compatible with each other, while Oracle impatiently pounds at the door for a hypervisor to be admitted into the Linux kernel so that it can go full steam ahead with its grid-computing plans. Meanwhile, Microsoft has decided to be fully compatible with the Xen hypervisor. [12] IBM, of course, is not sitting idle: it is working with VMware on x86 servers, and possibly helping Xen move from x86 to the Power Architecture using the open-source rHype. Finally, to make the hypervisor party a full house, Intel VT-x and AMD-V aim to ease and speed up virtualization so that a guest OS can run unmodified, without paravirtualization.

References

  1. Lua error in package.lua at line 80: module 'strict' not found.
  2. P. S. Magnusson et al., "Simics/sun4m: A Virtual Workstation", Proceedings of the USENIX Annual Technical Conference, 1998.
  3. Mendel Rosenblum, "The Reincarnation of Virtual Machines", ACM Queue, vol. 2, no. 5, July/August 2004.
  4. Brien M. Posey, "Application streaming anyone?", ZDNet Asia, April 14, 2004.
  5. File System Filter Driver
