`
(19) United States
(12) Patent Application Publication        HUTCHINS et al.
(10) Pub. No.: US 2013/0024940 A1
(43) Pub. Date: Jan. 24, 2013
`
(54) OFFLOADING OPERATIONS TO A
     REPLICATE VIRTUAL MACHINE
`
`(71) Applicant: VMWARE, INC., Palo Alto, CA (US)
`
`(72)
`
`Inventors: Gregory HUTCHINS, San Francisco,
`CA (US); Christian CZEZATKE, San
`Francisco, CA (US); Satyam B.
`VAGHANI, San Jose, CA (US); Mallik
`MAHALINGAM, Cupertino, CA (US);
`Shaw CHUANG, Mountain View, CA
`(US); Bich Cau LE, San Jose, CA (US)
`
`(73) Assignee: VMWARE, INC., Palo Alto, CA (US)
`
`(21) Appl. No.: 13/623,411
`
(22) Filed: Sep. 20, 2012
`
Related U.S. Application Data

(63) Continuation of application No. 11/545,662, filed on
     Oct. 10, 2006, now Pat. No. 8,296,759.

(60) Provisional application No. 60/788,032, filed on
     Mar. 31, 2006.
`
`Publication Classification
`
(51) Int. Cl.
     G06F 21/00                    (2006.01)
(52) U.S. Cl. ................................................ 726/24
`
`(57)
`
`ABSTRACT
`
`A method for detecting malicious code within a first virtual
machine comprises creating a snapshot of the first virtual
`machine and transferring the snapshot to a second machine. A
`scan operation is run on the snapshot using resources of the
`second machine. In response to detecting malicious code
`during the scan operation, action is taken at the first virtual
`machine to address the detection of the malicious code. Thus,
`the action in response to detecting the malicious code may
`include placing the first virtual machine in quarantine.
`
[Representative drawing: virtualized computer system 700; the same figure appears as FIG. 4.]
`
`
`
`
[FIG. 1 (Sheet 1 of 7): Host computer system 101 running original virtual machine VM1 291, virtual machine VM2 292, and replicated virtual machine VM1* 293, connected to non-volatile memory 102.]
`
`
`
[FIG. 2 (Sheet 2 of 7): Host computer system 103 running VM1 204, VM2 205, and VM3 206; host computer system 104 running VM4 207 and replicated virtual machine VM2* 208; both hosts connected to shared non-volatile memory 105.]
`
`
`
[FIG. 3 (Sheet 3 of 7), flowchart: START; create virtual machine (301); new application? (302); application designated for offloading? (303); if NO, run application on virtual machine (304) and return to 302; if YES, create a replicated virtual machine (305); transfer replicated virtual machine to a different host computer (306); run the application on the replicated virtual machine (307); when the application completes, delete the replicated virtual machine (308).]
`
`
`
[FIG. 4 (Sheet 4 of 7): Non-hosted virtualized computer system 700. VM 200 comprises applications 260, guest system software 202 (guest OS 220 and drivers 224), and virtual system hardware 201 (VCPU0 210, VCPU1 211 through VCPUm 21m, virtual devices 270, virtual memory 230, virtual disk 240). VMM 300 contains device emulators 330; additional VMs 200-n run on VMMs 300-n. Kernel 600 and console OS 420 (with applications 430) run on system hardware 100 (CPU(s) 110, memory 130, disk 140, devices 170).]
`
`
`
[FIG. 5 (Sheet 5 of 7): Hosted virtualized computer system 700. VM 200 comprises applications 260, guest system software 202 (guest OS 220 and drivers 224), and virtual system hardware 201 (VCPU(s) 210, virtual devices 270, virtual memory 230, virtual disk 240). VMM 300 contains device emulators 370, a direct execution engine 310, a binary translator 320 with translation cache 325, and interrupt/exception handling; host OS 420 with drivers 424 and applications 460 runs alongside the VMM on system hardware 100.]
`
`
`
[FIG. 6 (Sheet 6 of 7), flowchart: START; drain the I/O queue (601); take a synchronous snapshot of the VM device state (602); create a redo log (603); create a new virtual machine (604); copy the device state and configuration file for the original virtual machine to a new, second snapshot image (605); create a new redo log (606); clone main memory (607); original virtual machine continues its execution (608); replicated virtual machine reverts or resumes execution from the duplicate snapshot image (609); END.]
`
`
`
[FIG. 7 (Sheet 7 of 7): I/O paths from original virtual machine 271 and replicated virtual machine 272 through VM kernel 273 to redo log 1 (171), redo log 2 (172), redo log 3 (173), redo log 4 (174), and disk file 175.]
`
`
`
`
`OFFLOADING OPERATIONS TO A
`REPLICATE VIRTUAL MACHINE
`
`CROSS-REFERENCE TO RELATED
`APPLICATION(S)
`
`[0001] This application is a continuation and claims the
`benefit of U.S. patent application Ser. No. 11/545,662, filed
Oct. 10, 2006, which claimed benefit under 35 U.S.C. §119(e)
of U.S. Provisional Application No. 60/788,032, filed 31 Mar.
`2006.
`
`DESCRIPTION OF THE RELATED ART
`
`[0002] Typically, computers are dedicated to individuals or
`to specific applications. For example, an individual owns or is
`assigned his or her own personal computer (PC). Each time a
`business hires an employee whose job entails access to a
`computer, a new PC must be purchased and installed for that
`new hire. In other cases, a PC or server may be used to
`perform a specific task. For example, a corporation could
`have a server for hosting the company's web site, another
`server for handling emails, and yet another server for han-
`dling financial transactions. This one-to-one paradigm is
`simple, straightforward, flexible, and readily upgradeable.
`However, one drawback to this set-up is that it is inefficient
`from a computer resource perspective.
[0003] The inefficiency stems from the fact that most software
applications do not fully utilize the processing potential of the
computer upon which that software is installed. The processing
power of a computer is largely defined by its interconnected
hardware components. However, when creating software,
programmers do not know the specific hardware capabilities
of the computers upon which their software will ultimately be
installed. Conse-
`quently, programmers tend to be conservative when creating
`software in order to ensure that software can run on the vast
`majority of conventional, contemporary PCs or servers. As a
`result, software applications do not push the envelope set by
`hardware constraints. Furthermore, some applications may
`consume a great deal of processing power, while other com-
`puter applications are inherently less computing intensive.
`When the PC or server is running less computationally inten-
sive applications, much of its hardware capacity is
underutilized. Moreover, given hundreds or thousands of
`computers networked in an enterprise, the cumulative effect
`of the amount of wasted computing resources adds up.
`[0004]
`In an effort to take advantage of all the underutilized
`computing resources, there have been efforts to design "vir-
`tual" machines. Basically, a virtual machine entails loading a
`piece of software onto a physical "host" computer so that
`more than one user can utilize the resources of that host
`computer. In other words, the virtual software package is
`loaded onto one or more physical host computers so that the
`processing resources of the host computers can be shared
`amongst many different users. By sharing computing
`resources, virtual machines make more efficient use of exist-
`ing computers. Moreover, each user accesses the host com-
`puter through his own virtual machine. From the viewpoint of
`the user, it appears as if he were using his own computer.
Users can continue to interact with computers in the manner
they have grown accustomed to. Thus, rather
`than buying, installing, and maintaining new computers,
`companies can simply load virtual machine software to get
`more leverage off their existing computers. Furthermore, vir-
`
tual machines do not entail any special training because they
run transparently to the user. In addition, virtual machines have
the ability to run multiple instances of operating systems—
even different operating systems—concurrently on the same
`host or a group of hosts. As one of many examples of the
`benefits of this, a single user may then run applications on one
`trusted operating system while safely testing software written
`to run on a different operating system.
`[0005] Unfortunately, there is one drawback manifest in
exploiting virtual machines to their full potential: because
virtual machines are designed to make the most efficient
use of computing resources, there are typically few
spare computing resources left over. Any spare
`computing resources are often used to host another virtual
`machine for another user. It is this very economic efficiency
`which poses serious issues with certain types of applications.
`Some applications are run infrequently, but when they do
`execute, these applications are extremely computing inten-
`sive. For example, backup applications are often used to
back up a company's data. The data is backed up periodically
`and stored in backup files so that if there happens to be a
`computer crash or failure, important data is not irretrievably
`lost. Backing up files is an important function, but it needs to
`be run only periodically; however, when it does run, it can
`consume a great deal of computing resources in terms of
`input/output (I/O) and processing bandwidth. Similarly, data
`mining and virus scanning applications also fall into the cat-
`egory of applications which are run periodically and which
`consume an inordinate amount of computer resources when
`they do execute.
`[0006]
`In the past, when dedicated computers had a thick
`cushion of unused computing resources, these periodic com-
`puter resource-intensive applications could execute in the
`background without disrupting or affecting the user's normal
`operation. However, with virtual machines, there are typi-
`cally no spare computing resources to fall back on. Thus, IT
`administrators are faced with a dilemma: They could run the
`periodic applications and have the virtual machines suffer a
`performance hit. However, this is problematic when dealing
`with mission-critical applications. For example, one would
`not want to impact the server handling a company's sales
`orders, even though backing up the sales information is vital.
`Alternatively, IT administrators could choose to use dedi-
`cated computers, but this is wasteful of computing resources.
`
`SUMMARY OF THE INVENTION
`
`[0007] This invention comprises a method and system for
`offloading a software application intended to be run on a first
`virtual machine onto a second machine. A periodic and/or
computing resource-intensive application is intended to be
`run on the original virtual machine. However, doing so may
`detrimentally impact the operation of the original virtual
`machine by consuming valuable computing resources. The
`periodic and/or computing intensive application is run on the
`second machine instead of the original virtual machine.
`Meanwhile, applications on the original virtual machine can
`continue to operate as normal.
`[0008]
`In one embodiment, both the original virtual
`machine and the second machine are run on the same host
`computer system. Resource constraints can be imposed to
`control the amount of computing resources allocated to the
`original and replicated virtual machines. In another embodi-
`ment, the original virtual machine is run on one host computer
`system, whereas the second machine is a second virtual
`
`
`
`US 2013/0024940 Al
`
`Jan. 24, 2013
`
`2
`
`machine run on a different host system. Because the applica-
`tion on the replicated virtual machine is run on a different set
`of hardware components, this leaves the original virtual
`machine free to continue its normal operations unaffected.
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
[0009] FIG. 1 illustrates a computer system for hosting an
original virtual machine and a replicated virtual machine.
`[0010] FIG. 2 illustrates two host computer systems
`whereby one computer system hosts the original virtual
`machine and a different computer system hosts the replicated
`virtual machine.
`[0011] FIG. 3 is a flowchart depicting the process flow for
`offloading an application onto a replicated virtual machine.
`[0012] FIG. 4 illustrates an exemplary non-hosted virtual-
`ized computer system.
`[0013] FIG. 5 illustrates an exemplary hosted virtualized
`computer system.
`[0014] FIG. 6 is a flowchart depicting the process flow for
`creating a replicated virtual machine.
`[0015] FIG. 7 illustrates a block diagram of the I/O opera-
`tions between an original virtual machine and its counterpart,
`a replicated virtual machine.
`
`DETAILED DESCRIPTION
`
`[0016] The present invention pertains to offloading particu-
`lar operations onto a replicated virtual machine in order to
`minimize the effects of those operations on the original vir-
`tual machine. Initially, software for creating, running, and
`maintaining virtual machines is installed upon one or more
`physical computer systems. These physical computer sys-
`tems act as hosts to virtual machines, which share the com-
`puting resources of the physical computer systems. Certain
`software applications which tend to be periodically run and
`which are computing resource-intensive are identified. When
`one of these applications is being called upon for execution on
`a particular virtual machine, a replicate of that virtual
`machine is created. In other words, a "cloned" virtual
`machine, which is a copy of the original virtual machine, is
`created. In the past, the computing resource-intensive appli-
`cation would have to be run on the original virtual machine
due to its uniqueness. For example, a backup application could
be scheduled to routinely back up the information correspond-
`ing to the original virtual machine.
`[0017]
`In another example, an anti-virus scan could peri-
`odically run on the original virtual machine to detect and fix
`any potential viruses. Data mining applications can be initi-
`ated to track or analyze data being processed, accessed or
`otherwise associated with that original virtual machine. In the
`past, running these computing resource-intensive applica-
tions on the original virtual machine could detrimentally impact
`its other operations. But in the present invention, the software
`application is run, not on the original virtual machine, but
`instead on the replicated virtual machine. By running the
`computing resource-intensive application on the replicated
`virtual machine, the original virtual machine can continue to
`perform its other operations with little to no detrimental
`impact. Meanwhile, the computing resource-intensive appli-
`cation, having been successfully offloaded onto the replicated
`virtual machine, is run separately. The present invention con-
`fers the best of both worlds: highly efficient management of
`periodic spikes of computing resource demands with minimal
`impact on the normal operations of virtual machines.
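
By way of illustration only, the offload pattern just described can be sketched in a few lines of Python; the disclosure defines no programming interface, so every name below (VM, clone_vm, run_app, offload) is a hypothetical stand-in for whatever a given virtualization platform provides.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VM:
    """Toy stand-in for a virtual machine; a real VM also carries
    device state, a memory image, and virtual disks (see FIGS. 4-6)."""
    name: str
    apps: List[str] = field(default_factory=list)

def clone_vm(original: VM) -> VM:
    # Hypothetical: a real implementation replicates via snapshots
    # and redo logs (FIG. 6) rather than copying a Python object.
    return VM(name=original.name + "*")

def run_app(vm: VM, app: str) -> None:
    print(f"running {app} on {vm.name}")  # placeholder for real execution

def offload(original: VM, app: str) -> None:
    """Run a periodic, resource-intensive app on a replica so the
    original virtual machine's workloads continue undisturbed."""
    replica = clone_vm(original)
    try:
        run_app(replica, app)  # e.g., backup, virus scan, data mining
    finally:
        del replica            # the replica is simply discarded afterward

offload(VM("VM1"), "nightly-backup")
```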
`
[0018] In one embodiment, the replicated virtual machine
is run on the same host computer system as that of the original
`virtual machine. Referring now to FIG. 1, a host computer
`system 101 is shown. Host computer system 101 can be any
type of computer, such as a PC, a server, a workstation, a main-
`frame computer, etc. Host computer system 101 contains
`basic hardware components (e.g., one or more microproces-
`sors, volatile and/or non-volatile memory, busses, one or
more I/O interfaces, etc.).
`[0019]
`In one embodiment, an operating system is installed
`on host computer system 101. The operating system functions
`as an interface between software applications and the corre-
`sponding hardware components. In another embodiment,
specially developed software such as VMware ESX Server™
`runs directly on the bare metal hardware and performs the
`functions of an operating system for virtual machines running
`on it. One or more virtual machines 291 and 292 are created
`by known virtualization software.
`[0020] Users of the virtual machines 291 and 292 can pro-
`cess and store data contained in a non-volatile memory 102.
`Non-volatile memory stores digital data and can be a storage
`area network device, hard disk array, tape drive, etc. In this
`embodiment, whenever a potentially computing resource-
`intensive application is to be run on one of the virtual
`machines, a replicate of that particular virtual machine is
`created. For example, if the computing resource-intensive
`application is to be run on virtual machine VM1 291, the
`original VM1 291 is replicated. The replicated virtual
`machine VM1* 293 is created and instantiated on host com-
`puter system 101. The computing resource-intensive applica-
`tion is then run on the replicated virtual machine VM1* 293.
`By offloading the computing resource-intensive application
onto the replicated virtual machine, an administrator can
control the amount of computing resources to be allocated
`between the original virtual machine (e.g., VM1 291), the
`replicated virtual machine (e.g., VM1* 293), and/or other
`virtual machines (e.g., VM2 292) running on the host com-
`puter system 101. The administrator now has the ability to
`limit the amount of computing resources allocated to the
`replicated virtual machine such that it will have little to no
`effect on the normal operations of the original virtual machine
`and/or other virtual machines. Once the computing resource-
`intensive application completes its execution, the replicated
`virtual machine can be discarded.
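
The resource-allocation control described above might look as follows in a hedged sketch; the constrain helper and the share values are invented for illustration and correspond to no specific product API.

```python
from dataclasses import dataclass

@dataclass
class ResourceLimits:
    cpu_shares: int   # relative CPU weight on the host scheduler
    memory_mb: int    # cap on memory granted to the VM

def constrain(vm_name: str, limits: ResourceLimits) -> None:
    # Stand-in for a scheduler call; real platforms expose shares,
    # reservations, and limits on a per-VM basis.
    print(f"{vm_name}: cpu_shares={limits.cpu_shares}, memory={limits.memory_mb} MB")

# The original VM1 291 keeps priority; the replica VM1* 293 runs in the slack.
constrain("VM1",  ResourceLimits(cpu_shares=2000, memory_mb=4096))
constrain("VM1*", ResourceLimits(cpu_shares=200,  memory_mb=1024))
```
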
`[0021]
`In another embodiment, the replicated virtual
`machine is transferred or otherwise installed on a different
`host computer system. In other words, the original virtual
`machine is run on one host computer system while the repli-
`cated virtual machine is run on a different, separate host
`computer system. FIG. 2 shows the embodiment of offloading
`the operation of an application onto a replicated virtual
`machine running on a host computer system different from
`that of the original virtual machine. Host computer system
`103 is a physical computer which may include one or more
`processors, memory chips, non-volatile storage, I/O inter-
`face, and a bus to interconnect the various hardware compo-
`nents. Host computer system 103 has an operating system and
`virtualization software installed. The virtual machine soft-
`ware is used to create multiple virtual machines (e.g., VM1
`204, VM2 205, and VM3 206). A separate physical computer
`104 has its own dedicated hardware components, such as one
`or more processors, memory chips, non-volatile storage, I/O
`interface, and a bus to interconnect the various hardware
`components. Also running on host computer system 104 is an
`
`
`
`US 2013/0024940 Al
`
`Jan. 24, 2013
`
`3
`
`operating system and virtualization software. The virtualiza-
`tion software can be used to create a number of virtual
`machines (e.g., VM4 207). The hardware components and/or
`operating system of host computer system 103 can be differ-
`ent from that of host computer system 104. The host computer
`systems 103 and 104 can be networked and/or share a com-
`mon non-volatile memory 105.
`[0022]
In this embodiment, when an application is to be run
on one of the virtual machines which may tend to put a strain on
`the computing resources of the corresponding host system,
`the application can be run on a replicated virtual machine.
`This is accomplished by replicating the original virtual
`machine. In other words, a copy, clone, or duplicate virtual
`machine is created on the computer system hosting the origi-
`nal virtual machine. The replicated virtual machine is then
`moved, installed, or otherwise transferred to a different host
computer system. For example, the virtual machine VM2 205
`running on host computer system 103 can have a replicated
`virtual machine VM2* 208 running on host computer system
`104. This implementation enables the original virtual
`machine to continue its normal operation on its host computer
`system without being affected by the operations being per-
`formed by the replicated virtual machine. The original virtual
`machine is not affected because the computing resource-
intensive application is being run on a different host system.
`Again, once the computing resource-intensive application
`completes its execution, the replicated virtual machine can be
`discarded.
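
A minimal sketch of this cross-host variant (FIG. 2), assuming both hosts can reach the shared non-volatile memory 105 so that only the replica's registration needs to move; all function and host names are hypothetical.

```python
def offload_to_other_host(source: str, target: str, vm: str, app: str) -> None:
    """Replicate a VM on its own host, hand the replica to a second
    host, run the heavy application there, then discard the replica."""
    replica = f"{vm}*"
    print(f"{source}: replicate {vm} as {replica}")              # clone on the source host
    print(f"{target}: register {replica} from shared storage")   # transfer via memory 105
    print(f"{target}: run {app} on {replica}")                   # heavy work runs off-host
    print(f"{target}: delete {replica} once {app} completes")

offload_to_other_host("host-103", "host-104", "VM2", "data-mining")
```
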
`[0023] FIG. 3 is a flowchart describing the processes for
`offloading operations onto a replicated virtual machine. Ini-
`tially, in step 301, a virtual machine is created. Step 302
`determines when an application is to be run on the virtual
`machine. Basically, one or more applications are intended to
`be run on the virtual machine. However, certain ones of these
`applications intended to be run on the virtual machine created
`in step 301 can be expeditiously offloaded onto and run by a
`virtual machine which is a replica of the virtual machine
`originally created in step 301. It should be noted that any
`application can be offloaded onto the replicated virtual
`machine. Some good candidates for offloading are those
`applications which are run either periodically, occasionally,
`or infrequently. Other good candidates are those applications
`which require a great amount of processing power, I/O band-
`width, and/or memory throughput. In one embodiment, an IT
administrator determines that certain types of applications, or
specific applications, are to be offloaded. In another embodiment, an
`application is offloaded if a network administrator detects that
`a host computer system is being overloaded. Software can be
`designed to detect that a host computer system is being over-
`loaded and automatically offload one or more applications
`onto a replicated virtual machine. As described above, the
`replicated virtual machine can reside on a different host com-
`puter system. If an application is not designated for offload-
`ing, as determined in step 303, that application is run on the
`virtual machine, step 304. In this case, the process begins
`anew at step 302. Steps 302-304 enable several applications
`to be running simultaneously on the virtual machine created
`in step 301.
`[0024] However, if step 303 determines that an application
`intended for the virtual machine created in step 301 is to be
`offloaded, a new virtual machine is created in step 305. This
`new virtual machine is a replicate, clone, or copy of the virtual
`machine originally created in step 301. In step 306, the rep-
`licated virtual machine can be transferred to a different host
`
`computer system. Step 306 is optional. The offloaded appli-
`cation is run on the replicated virtual machine according to
`step 307. In step 308, when the offloaded application com-
`pletes, the replicated virtual machine can be deleted. It should
`be noted that one or more applications can be running on the
`virtual machine created in step 301 in conjunction with one or
`more offloaded applications running on the replicated virtual
`machine created in step 305.
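
Restated as pseudocode, the decision loop of FIG. 3 (steps 301-308) might read as follows; the offload_set policy input is a hypothetical placeholder for however an IT administrator (or an overload detector) designates applications.

```python
from typing import Iterable, Set

def fig3_loop(vm: str, new_apps: Iterable[str], offload_set: Set[str]) -> None:
    """Steps 302-308 of FIG. 3, for a VM already created in step 301."""
    for app in new_apps:                                   # step 302: new application?
        if app not in offload_set:                         # step 303: designated for offloading?
            print(f"{vm}: run {app} locally")              # step 304
            continue
        replica = f"{vm}*"                                 # step 305: create replicated VM
        print(f"transfer {replica} to another host")       # step 306 (optional)
        print(f"{replica}: run {app}")                     # step 307
        print(f"delete {replica} when {app} completes")    # step 308

fig3_loop("VM1", ["web-server", "nightly-backup"], {"nightly-backup"})
```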
`
`[0025] Detailed descriptions of how a virtual machine is
`created, replicated, and transferred to a different host com-
puter system are now disclosed. FIGS. 4 and 5 show a virtual
`machine and its functions. As is well known in the field of
`computer science, a virtual machine (VM) is a software
`abstraction—a "virtualization"—of an actual physical com-
`puter system. FIG. 4 shows one possible arrangement of a
`computer system 700 that implements virtualization. A vir-
`tual machine (VM) 200, which in this system is a "guest," is
`installed on a "host platform," or simply "host," which will
`include a system hardware 100, that is, a hardware platform,
`and one or more layers or co-resident components comprising
`system-level software, such as an operating system (OS) or
`similar kernel, a virtual machine monitor, or some combina-
`tion of these.
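
The replication sequence itself, outlined in FIG. 6 (steps 601-609), can be summarized in an illustrative sketch; every helper below is a hypothetical stub that merely records its step, since the disclosure defines no programming interface.

```python
def _step(msg: str) -> None:
    print(msg)  # stub: record the step instead of performing it

def replicate(vm: str) -> str:
    """Illustrative walk through FIG. 6, steps 601-609."""
    replica = vm + "*"
    _step(f"601: drain the I/O queue of {vm}")
    _step(f"602: take a synchronous snapshot of {vm}'s device state")
    _step(f"603: create a redo log to capture {vm}'s subsequent writes")
    _step(f"604: create the new virtual machine {replica}")
    _step(f"605: copy {vm}'s device state and configuration file to a second snapshot image")
    _step(f"606: create a new redo log for {replica}")
    _step(f"607: clone main memory")
    _step(f"608: {vm} continues its execution")
    _step(f"609: {replica} resumes execution from the duplicate snapshot image")
    return replica

replicate("VM1")
```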
`
`[0026] As software, the code defining the VM will ulti-
`mately execute on the actual system hardware 100. As in
`almost all computers, this hardware will typically include one
`or more CPUs 110, some form of memory 130 (volatile
`and/or non-volatile), one or more storage devices such as one
`or more disks 140, and one or more devices 170, which may
`be integral or separate and removable.
`
[0027] In many existing virtualized systems, the hardware
processor(s) 110 are the same as in a non-virtualized com-
puter with the same platform, for example, the Intel x86
`platform. Because of the advantages of virtualization, how-
`ever, some hardware vendors have proposed, and are presum-
`ably developing, hardware processors that include specific
`hardware support for virtualization.
`
`[0028] Each VM 200 will typically mimic the general struc-
`ture of a physical computer and as such will usually have both
`virtual system hardware 201 and guest system software 202.
`The virtual system hardware typically includes at least one
`virtual CPU 210, virtual memory 230, at least one virtual disk
`or storage device 240, and one or more virtual devices 270.
`Note that a storage disk—virtual 240 or physical 140—is also
`a "device," but is usually considered separately because of the
`important role it plays. All of the virtual hardware compo-
`nents of the VM may be implemented in software to emulate
`corresponding physical components. The guest system soft-
`ware typically includes a guest operating system (OS) 220
`and drivers 224 as needed, for example, for the various virtual
`devices 270.
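
The structure recited in this paragraph maps naturally onto a small data model; the following sketch is illustrative only, and its field names are not drawn from the disclosure.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualSystemHardware:                 # element 201
    vcpus: List[str]                         # VCPU0 210 through VCPUm 21m
    vmem_mb: int                             # virtual memory 230
    vdisks: List[str]                        # virtual disk(s) 240
    vdevices: List[str] = field(default_factory=list)   # virtual devices 270

@dataclass
class GuestSystemSoftware:                   # element 202
    guest_os: str                            # guest OS 220
    drivers: List[str] = field(default_factory=list)    # drivers 224

@dataclass
class VirtualMachine:                        # element 200
    hardware: VirtualSystemHardware
    software: GuestSystemSoftware
    apps: List[str] = field(default_factory=list)       # applications 260

vm = VirtualMachine(
    hardware=VirtualSystemHardware(vcpus=["VCPU0"], vmem_mb=2048, vdisks=["vdisk0"]),
    software=GuestSystemSoftware(guest_os="guest-os", drivers=["vnic", "vdisk"]),
)
```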
`
[0029] If the VM 200 is properly designed, applications
260 running on the VM will function as they would if run on
a "real" computer, even though the applications are running at
least partially indirectly, that is, via the guest OS 220 and
`virtual processor(s). Executable files will be accessed by the
`guest OS from the virtual disk 240 or virtual memory 230,
`which will be portions of the actual physical disk 140 or
`memory 130 allocated to that VM. Applications may be
`installed within the VM in a conventional manner, using the
`guest OS. Then, the guest OS retrieves files required for the
`execution of such installed applications from the virtual disk
`
`
`
`US 2013/0024940 Al
`
`Jan. 24, 2013
`
`4
`
`in a conventional manner. The design and operation of virtual
`machines in general are known in the field of computer sci-
`ence.
`[0030] Some interface is usually required between a VM
`200 and the underlying host platform (in particular, the hard-
`ware CPU(s) 110 and any intermediate system-level software
`layers), which is responsible for actually submitting and
`executing VM-issued instructions and for handling I/O opera-
`tions, including transferring data to and from the hardware
`memory 130 and storage devices 140. A common term for this
`interface or virtualization layer is a "virtual machine moni-
`tor" (VMM), shown as component 300. A VMM is usually a
`software component that virtualizes at least some of the
`resources of the physical host machine, or at least some
`hardware resource, so as to export a hardware interface to the
`VM corresponding to the hardware the VM "thinks" it is
`running on. As FIG. 4 illustrates, a virtualized computer
`system may (and usually will) have more than one VM, each
`of which may be running on its own VMM.
`[0031] The various virtualized hardware components in the
`VM, such as the virtual CPU(s) 210, etc., the virtual memory
`230, the virtual disk 240, and the virtual device(s) 270, are
`shown as being part of the VM 200 for the sake of conceptual
`simplicity. In actuality, these "components" are often imple-
`mented as software emulations included in the VMM. One
`advantage of such an arrangement is that the VMM may (but
`need not) be set up to expose "generic" devices, which facili-
tate, for example, migration of a VM from one hardware plat-
`form to another.
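
One way to picture the "generic" device emulation described above is an emulator interface inside the VMM that hides the physical device behind a stable virtual one; the sketch below is generic and does not depict any particular vendor's implementation.

```python
from abc import ABC, abstractmethod

class EmulatedDevice(ABC):
    """A 'generic' device the VMM exposes to the guest. The guest sees
    only this stable interface, which is what makes migration to a host
    with different physical hardware practical."""
    @abstractmethod
    def read(self, offset: int, size: int) -> bytes: ...
    @abstractmethod
    def write(self, offset: int, data: bytes) -> None: ...

class GenericDisk(EmulatedDevice):
    """Software emulation of a disk (cf. virtual disk 240 over disk 140)."""
    def __init__(self, size: int) -> None:
        self.backing = bytearray(size)   # stands in for physical storage
    def read(self, offset: int, size: int) -> bytes:
        return bytes(self.backing[offset:offset + size])
    def write(self, offset: int, data: bytes) -> None:
        self.backing[offset:offset + len(data)] = data

disk = GenericDisk(1024)                 # toy-sized virtual disk
disk.write(0, b"hello")
assert disk.read(0, 5) == b"hello"
```
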
`[0032]
`In contrast, another concept, which has yet to
`achieve a universally accepted definition, is that of "para-
`virtualization." As the name implies, a "para-virtualized" sys-
`tem is not "fully" virtualized, but rather the guest is config-
`ured in some way to provide certain features that facilitate
virtualization. For example, the guest in some para-virtual-
`ized systems is designed to avoid hard-to-virtualize opera-
`tions and configurations, such as by avoiding certain privi-
`leged instructions, certain memory address ranges, etc. As
`another example, many para-virtualized systems include an
`interface within the guest that enables explicit calls to other
`components of the virtualization software. For some, para-
`virtualization implies that the guest OS (in particular, its
`kernel) is specifically designed to support such an interface.
`According to this view, having, for example, an off-the-shelf
`version of Microsoft Windows XP as the guest OS would not
`be consistent with the notion of para-virtualization. Others
`define para-virtualization more broadly to include any guest
`OS with any code that is specifically intended to provide
`information directly to the other virtualization software.
`According to this view, loading a module such as a driver
`designed to communicate with other virtualization compo-
`nents renders the system para-virtualized, even if the guest
`OS as such is an off-the-shelf, commercially available OS not
`specifically designed to support a virtualized computer sys-
`tem.
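
The "interface within the guest that enables explicit calls" can be pictured as a hypercall table; the sketch below is a generic caricature of such an interface, not any particular product's mechanism.

```python
from typing import Callable, Dict

# Hypothetical hypercall table: a para-virtualized guest calls the
# virtualization layer explicitly instead of being trapped and emulated.
HYPERCALLS: Dict[str, Callable[..., str]] = {
    "yield_cpu": lambda: "scheduler hint: guest has no work",
    "flush_tlb": lambda: "virtualization layer flushes shadow mappings",
    "log":       lambda msg: f"host log: {msg}",
}

def hypercall(name: str, *args: str) -> str:
    """What a para-virtualization-aware driver or kernel would invoke."""
    return HYPERCALLS[name](*args)

print(hypercall("log", "guest booted"))
print(hypercall("yield_cpu"))
```
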
`[0033] Unless otherwise indicated or apparent, this inven-
`tion is not restricted to use in systems with any particular
`"degree" of virtualization and is not to be limited to any
`particular notion of full or partial ("para-") virtualization.
`[0034]
`In addition to the distinction between full and partial
`(para-) virtualization, two arrangements of intermediate sys-
`tem-level software layer(s) are in general use—a "hosted"
`configuration (illustrated in FIG. 5) and a non-hosted con-
figuration (illustrated in FIG. 4). In a hosted virtualized com-
puter system, an existing, general-purpose operating system
`forms a "host" OS that is used to perform certain input/output
`(I/O) operations, alongside and sometimes at the request and
`direction of the VMM 300. The host OS 420, which usually
`includes drivers 424 and supports applications 460 of its own,
`and the VMM are both able to directly access at least some of
`the same hardware resources, with conflicts being avoided by
`a context-switching mechanism. The Workstation product of
`VMware, Inc., of Palo Alto, Calif., is an example of a hosted,
`virtualized computer system, which is also explained in U.S.
`Pat. No. 6,496,847 (Bugnion, et al., "System and Method for
`Virtualizing Computer Systems," 17 Dec. 2002).
`[0035]
`In addition to device emulators 370, FIG. 5 also
`illustrates some of the other components that are also often
`included in the VMM of a hosted virtualization system; many
`of these components are found in the VMM of a non-hosted
`system as well. For example, exception handlers 330 may be
`included to help context-switching (see again U.S. Pat. No.
`6,496,847), and a direct execution engine 310 and a binary
`translator 320 with associated translation cache 325 may be
`included to provide execution speed while still preventing the
`VM from directly executing certain privileged instructions
`(see U.S. Pat. No. 6,397,242, Devine, et al., "Virtualization
`System Including a Virtual Machine Monitor for a Computer
`with a Segmented Architecture," 28 May 2002).
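
The interplay of the direct execution engine 310, binary translator 320, and translation cache 325 can be caricatured as follows; real translators rewrite machine code, whereas this sketch uses made-up opcode names purely to show the dispatch-and-cache structure.

```python
from typing import Dict, Tuple

PRIVILEGED = {"cli", "hlt", "out"}               # illustrative opcode names
translation_cache: Dict[Tuple[str, ...], Tuple[str, ...]] = {}   # element 325

def execute_block(block: Tuple[str, ...]) -> str:
    """Run safe guest code directly; translate privileged code once,
    then reuse the cached translation (cf. elements 310, 320, 325)."""
    if not any(op in PRIVILEGED for op in block):
        return f"direct execution of {block}"    # direct execution engine 310
    if block not in translation_cache:           # binary translator 320
        translation_cache[block] = tuple(
            f"emulate({op})" if op in PRIVILEGED else op for op in block
        )
    return f"run translated {translation_cache[block]}"

print(execute_block(("mov", "add")))   # unprivileged: runs directly
print(execute_block(("mov", "cli")))   # privileged: translated and cached
print(execute_block(("mov", "cli")))   # second time: translation cache hit
```
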
`[0036]
`In many cases, it may be beneficial to deploy VMMs
`on top of a software layer—a kernel 600—constructed spe-
`cifically to provide efficient support for the VMs. This con-
`figuration is frequently referred to as being "non-hosted."
`Compared with a system in which VMMs run directly on the
`hardware platform (such as shown in FIG. 5), use of a kernel
`offers greater modularity and facilitates provision of services
`(for example, resource management) that extend across mul-
`tiple virtual machines. Compared with a hosted deployment,
`a kernel may offer greater performance because it can be
`co-developed with the VMM and be optimized for the char-
`acteristics of a workload consisting primarily of VMs/
`VMMs. The kernel 600 also handles any other applications
`running on it that can be separately scheduled, as well as any
temporary "console" operating system