US007093086B1

(12) United States Patent
     van Rietschote

(10) Patent No.: US 7,093,086 B1
(45) Date of Patent: *Aug. 15, 2006

(54) DISASTER RECOVERY AND BACKUP USING VIRTUAL MACHINES

(75) Inventor: Hans F. van Rietschote, Sunnyvale, CA (US)

(73) Assignee: VERITAS Operating Corporation, Mountain View, CA (US)

( * ) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 234 days.

     This patent is subject to a terminal disclaimer.

(21) Appl. No.: 10/109,186

(22) Filed: Mar. 28, 2002

(51) Int. Cl.
     G06F 12/00 (2006.01)

(52) U.S. Cl. ................ 711/161; 714/6; 714/13; 714/15; 714/16; 714/20

(58) Field of Classification Search ................ 709/104, 709/106; 711/6, 156, 161, 165; 714/6, 13, 15, 16, 20
     See application file for complete search history.

(56) References Cited
U.S. PATENT DOCUMENTS

4,912,628 A *    3/1990  Briggs ................... 718/100
4,969,092 A     11/1990  Shorter
5,257,386 A     10/1993  Saito
5,408,617 A      4/1995  Yoshida
5,621,912 A      4/1997  Borruso et al.
5,852,724 A     12/1998  Glenn, II et al.
5,872,931 A      2/1999  Chivaluri
5,944,782 A      8/1999  Noble et al.
6,003,065 A     12/1999  Yan et al.
6,029,166 A      2/2000  Mutalik et al.
6,075,938 A      6/2000  Bugnion et al.
6,151,618 A     11/2000  Wahbe et al.
6,230,246 B1     5/2001  Lee et al.
6,298,390 B1    10/2001  Matena et al.
6,298,428 B1    10/2001  Munroe et al.
6,324,627 B1    11/2001  Kricheff et al.
6,341,329 B1     1/2002  LeCrone et al.
6,363,462 B1     3/2002  Bergsten
6,370,646 B1     4/2002  Goodman et al.
6,397,242 B1     5/2002  Devine et al.
6,421,739 B1*    7/2002  Holiday .................. 719/330
6,438,642 B1     8/2002  Shaath
6,493,811 B1    12/2002  Blades et al.
6,496,847 B1    12/2002  Bugnion et al.
6,542,909 B1*    4/2003  Tamer et al. ............. 707/205

(Continued)
OTHER PUBLICATIONS

Veritas, "Executive Overview," Technical Overview, pp. 1-9.

(Continued)

Primary Examiner—Christian P. Chace
(74) Attorney, Agent, or Firm—Lawrence J. Merkel; Meyertons, Hood, Kivlin, Kowert & Goetzel, P.C.

(57) ABSTRACT
One or more computer systems, a carrier medium, and a method are provided for backing up virtual machines. The backup may occur, e.g., to a backup medium or to a disaster recovery site, in various embodiments. In one embodiment, an apparatus includes a computer system configured to execute at least a first virtual machine, wherein the computer system is configured to: (i) capture a state of the first virtual machine, the state corresponding to a point in time in the execution of the first virtual machine; and (ii) copy at least a portion of the state to a destination separate from a storage device to which the first virtual machine is suspendable. A carrier medium may include instructions which, when executed, cause the above operation on the computer system. The method may comprise the above highlighted operations.

30 Claims, 8 Drawing Sheets
`
`
`
`
[Representative drawing: FIG. 2 — flowchart of the backup program (reproduced as Sheet 2 of 8).]

VEEAM 1001
IPR of U.S. Patent No. 7,093,086
U.S. PATENT DOCUMENTS

6,694,346 B1*      2/2004  Aman et al. .............. 718/104
6,718,538 B1*      4/2004  Mathiske ................. 717/129
6,757,778 B1       6/2004  van Rietschote
6,763,440 B1       7/2004  Traversat et al.
6,789,103 B1       9/2004  Kim et al.
6,802,062 B1*     10/2004  Oyamada et al.
2001/0016879 A1    8/2001  Sekiguchi et al.
2002/0049869 A1*   4/2002  Ohmura et al. ............ 710/5
2002/0099753 A1*   7/2002  Hardin et al.
2002/0129078 A1*   9/2002  Plaxton et al.
2003/0028861 A1*   2/2003  Wallman et al.
2003/0033431 A1    2/2003  Shinomiya
2004/0010787 A1*   1/2004  Traut et al.

OTHER PUBLICATIONS

Kinshuk Govil, et al., "Cellular Disco: Resource Management Using Virtual Clusters on Shared-Memory Multiprocessors," 17th ACM Symposium on Operating Systems Principles (SOSP'99), published as Operating Systems Review 34(5):154-169, Dec. 1999, pp. 154-169.
Edouard Bugnion, et al., "Disco: Running Commodity Operating Systems on Scalable Multiprocessors," Computer Systems Laboratory, Stanford, CA, 33 pages.
"White Paper, GSX Server," VMware, Inc., Dec. 2000, pp. 1-9.
"VMware GSX Server, The Server Consolidation Solution," VMware, Inc., 2001, 2 pages.
"Manage Multiple Worlds, From Any Desktop," VMware, Inc., 2000, 2 pages.
"VMware ESX Server, The Server Consolidation Solution for High-Performance Environments," VMware, Inc., 2001, 2 pages.
Melinda Varian, "VM and the VM Community: Past, Present, and Future," Operating Systems, Computing and Information Technology, Princeton Univ., Aug. 1997, pp. 1-67.
Veritas, "Comparison: Microsoft Logical Disk Manager and VERITAS Volume Manager for Windows," May 2001, 4 pages.
Veritas, "How VERITAS Volume Manager Complements Hardware RAID in Microsoft Server Environments," May 2001, pp. 1-7.
Veritas, "VERITAS Volume Manager for Windows, Best Practices," May 2001, pp. 1-7.
Barrie Sosinky, Ph.D., "The Business Value of Virtual Volume Management, In Microsoft Windows NT and Windows 2000 Networks," VERITAS, a white paper for administrators and planners, Oct. 2001, pp. 1-12.
"BladeFrame(TM) System Overview," Egenera, Inc., 2001, 2 pages.
White Paper, "The Egenera(TM) Processing Area Network (PAN) Architecture," Egenera, Inc., 2002, 20 pages.
White Paper, "Emerging Server Architectures," Egenera, Inc., 2001, 12 pages.
White Paper, "Improving Data Center Performance," Egenera, Inc., 2001, 19 pages.
White Paper, "Guidelines for Effective E-Business Infrastructure Management," Egenera, Inc., 2001, 11 pages.
White Paper, "The Pros and Cons of Server Clustering in the ASP Environment," Egenera, Inc., 2001, 10 pages.
Position Paper, "Taking Control of The Data Center," Egenera, Inc., 2001, 4 pages.
Position Paper, "The Linux Operating System: How Open Source Software Makes Better Hardware," Egenera, Inc., 2001, 2 pages.
"Solution Overview," TrueSAN Networks, Inc., 2002, 7 pages.
"Simics: A Full System Simulation Platform," reprinted with permission from Computer, Feb. 2002, © The Institute of Electrical and Electronics Engineers, Inc., pp. 50-58.
"Introduction to Simics Full-System Simulator without Equal," Virtutech, Jul. 8, 2002, 29 pages.
"The Technology of Virtual Machines," A Connectix White Paper, Connectix Corp., 2001, 13 pages.
"The Technology of Virtual PC," A Connectix White Paper, Connectix Corp., 2000, 12 pages.
"About LindowsOS," Lindows.com, http://www.lindows.com/lindows_products_lindowsos.php, 2002, 2 pages.
"Savannah: This is a Savannah Admin Documentation," Savannah, Free Software Foundation, Inc., © 2000-2002, 31 pages.
"Virtuozzo Basics," Virtuozzo, http://www.sw-soft.com/en/products/virtuozzo/basics/, © 1994-2002 SWsoft, printed from web on Dec. 13, 2002, 2 pages.
"What is Virtual Environment (VE)?," SWsoft, http://www.sw-soft.com/en/products/virtuozzo/ve/, © 1994-2002 SWsoft, printed from web on Dec. 13, 2002, 2 pages.
"Products," Netraverse, Inc., 2002, http://www.netraverse.com/products/index.php, printed from web on Dec. 13, 2002, 1 page.
"NeTraverse Win4Lin 4.0 Workstation Edition," Netraverse, Inc., 2002, http://www.netraverse.com/products/win4lin40/, printed from web on Dec. 13, 2002, 1 page.
"Win4Lin Desktop 4.0," Netraverse, Inc., 2002, http://www.netraverse.com/products/win4lin40/benefits.php, printed from web on Dec. 13, 2002, 1 page.
"Win4Lin Desktop 4.0," Netraverse, Inc., 2002, http://www.netraverse.com/products/win4lin40/features.php, printed from web on Dec. 13, 2002, 2 pages.
"Win4Lin Desktop 4.0," Netraverse, Inc., 2002, http://www.netraverse.com/products/win4lin40/requirements.php, printed from web on Dec. 13, 2002, 2 pages.
"Win4Lin Terminal Server 2.0," Netraverse, Inc., 2002, http://www.netraverse.com/products/wts, printed from web on Dec. 13, 2002, 1 page.
"Win4Lin Terminal Server 2.0," Netraverse, Inc., 2002, http://www.netraverse.com/products/wts/benefits.php, printed from web on Dec. 13, 2002, 1 page.
Win4Lin Terminal Server 2.0, Netraverse, Inc., 2002, http://www.netraverse.com/products/wts/features.php, printed from web on Dec. 13, 2002, 2 pages.
Win4Lin Terminal Server 2.0, Netraverse, Inc., 2002, http://www.netraverse.com/products/wts/requirements.php, printed from web on Dec. 13, 2002, 2 pages.
Win4Lin Terminal Server 2.0, Netraverse, Inc., 2002, http://www.netraverse.com/products/wts/technology.php, printed from web on Dec. 13, 2002, 1 page.
Win4Lin, Netraverse, Inc., 2002, http://www.netraverse.com/support/docs/Win4Lin-whitepapers.php, printed from web on Dec. 13, 2002, 5 pages.
"Virtual PC for Windows," Connectix, Version 5.0, 2002, 2 pages.
Dave Gardner, et al., "WINE FAQ," © David Gardner 1995-1998, printed from www.winehq.org, 13 pages.
"Winelib User's Guide," Winelib, www.winehq.org, 26 pages.
John R. Sheets, et al., "Wine User Guide," www.winehq.org, pp. 1-53.
"Wine Developer's Guide," www.winehq.org, pp. 1-104.
VERITAS, "Veritas Volume Manager for Windows NT," Version 2.7, 2001, 4 pages.
VMware, Inc., "VMware Control Center," 2003, 3 pages.
InfoWorld, Robert McMillan, "VMware Launches VMware Control Center," 2003, 2 pages.
VMware, Inc., "VMware Control Center: Enterprise-class Software to Manage and Control Your Virtual Machines," 2003, 2 pages.
John Abbott, Enterprise Software, "VMware Heads Toward Utility Computing With New Dynamic Management Tools," Jul. 1, 2003, 4 pages.
Dejan S. Milogicic, et al., "Process Migration," Aug. 10, 1999, 49 pages.
Xian-He Sun, et al., "A Coordinated Approach for Process Migration in Heterogeneous Environments," 1999, 12 pages.
Kasidit Chanchio, et al., "Data Collection and Restoration for Heterogeneous Process Migration," 1997, 6 pages.
Kasidit Chanchio, et al., "A Protocol Design of Communication State Transfer for Distributed Computing," publication date unknown, 4 pages.
SourceForge(TM), "Project: openMosix: Document Manager: Display Document," 14 pages.
OpenMosix, "The openMosix HOWTO: Live free() or die ()," May 7, 2003, 3 pages.
OpenMosix, "openMosix Documentation Wiki - don't," May 7, 2003, 2 pages.
Sapuntzakis, et al., "Optimizing the Migration of Virtual Computers," Proceedings of the Fifth Symposium on Operating Systems Design and Implementation, Dec. 2002, 14 pages.
Helfrich, et al., "Internet Suspend/Resume," ISR Project Home Page, 2003, 4 pages.
Kozuch, et al., "Internet Suspend/Resume," IRP-TR-02-01, Apr. 2002, accepted to the Fourth IEEE Workshop on Mobile Computing Systems and Applications, Callicoon, NY, Jun. 2002, Intel Research, 9 pages.
Kozuch, et al., "Efficient State Transfer for Internet Suspend/Resume," IRP-TR-02-03, May 2002, Intel Research, 13 pages.
Tolia, et al., "Using Content Addressing to Transfer Virtual Machine State," IRP-TR-02-11, Summer 2002, Intel Research, 11 pages.
Flinn, et al., "Data Staging on Untrusted Surrogates," IRP-TR-03-03, Mar. 2003, Intel Research, to appear in the Proceedings of the 2nd USENIX Conference on File and Storage Technologies, San Francisco, 16 pages.
Tolia, et al., "Opportunistic Use of Content Addressable Storage for Distributed File Systems," IRP-TR-03-02, Jun. 2003, Intel Research, to appear in the Proceedings of the 2003 USENIX Annual Technical Conference, San Antonio, TX, 16 pages.
The Office Action mailed on Sep. 30, 2004 for U.S. Appl. No. 10/109,406.
"Office Communication" for U.S. Appl. No. 10/108,882 mailed Jun. 24, 2005 (11 pages).
Office Action from U.S. Appl. No. 10/108,882, mailed Oct. 19, 2005.
Office Action from U.S. Appl. No. 10/109,406, mailed Aug. 19, 2005.
Office Action from U.S. Appl. No. 10/616,437, mailed Oct. 18, 2005.
U.S. Appl. No. 10/791,472.
Office Action from U.S. Appl. No. 10/108,882, mailed Dec. 23, 2005.

* cited by examiner
[Sheet 1 of 8 — FIG. 1: Block diagram of one embodiment of a computer system.]
[Sheet 2 of 8 — FIG. 2: Flowchart of one embodiment of the backup program. Start: Backup Program → Suspend Virtual Machine → Non-Persistent Virtual Disks? (Yes: Optionally Commit Changes in COW files) → Copy Virtual Machine Image to Backup Medium → Resume Virtual Machine → All Virtual Machines Backed Up? (No: Select Next Virtual Machine and repeat; Yes: End: Backup Program).]
[Sheet 3 of 8 — FIG. 3: Block diagram of one embodiment of a pair of computer systems, one serving as a disaster recovery site for the other.]
[Sheet 4 of 8 — FIG. 4: Flowchart of one embodiment of the checkpoint program. Blocks include: Start: Checkpoint Program; Checkpoint Interval Expired?; Suspend Virtual Machine; Non-Persistent Virtual Disks?; Optionally Commit Changes in COW file and Create New COW file; Copy Virtual Machine Image to DR System; Resume Virtual Machine.]
[Sheet 5 of 8 — FIG. 5: Flowchart of one embodiment of the recovery program. Start: Recovery Program → Select Desired Checkpoint → Commit Any COW File Changes for Selected Checkpoint → Resume Virtual Machine → End: Recovery Program.]
[Sheet 6 of 8 — FIG. 6: Block diagram of a second embodiment of a computer system.]
[Sheet 7 of 8 — FIG. 7: Flowchart of a second embodiment of the backup program. Start: Backup Program → Request Image Data from VM Kernel → Image Data Available? → Non-Persistent Virtual Disks? (Yes: Optionally Commit Changes in COW files) → Copy Virtual Machine Image to Backup Medium → Signal Complete to VM Kernel → All Virtual Machines Backed Up? (No: Select Next Virtual Machine and repeat; Yes: End: Backup Program).]
[Sheet 8 of 8 — FIG. 8: Flowchart of a portion of one embodiment of the VM kernel. Blocks include: Start: VM Kernel; Image Data Request?; Create Memory COW; Copy Memory to Memory COW File; Create New COW Files for Each Virtual Disk; Indicate Image Data Available; Complete?; Commit Memory COW to Memory and Delete Memory COW; Commit Writes to Persistent Disks from New COW Files; End: VM Kernel.]
DISASTER RECOVERY AND BACKUP USING VIRTUAL MACHINES
BACKGROUND OF THE INVENTION

1. Field of the Invention

This invention is related to the field of computer systems and, more particularly, to backup and disaster recovery mechanisms in computer systems.

2. Description of the Related Art

Computer systems, and their components, are subject to various failures which may result in the loss of data. For example, a storage device used in or by the computer system may experience a failure (e.g. mechanical, electrical, magnetic, etc.) which may make any data stored on that storage device unreadable. Erroneous software or hardware operation may corrupt the data stored on a storage device, destroying the data stored on an otherwise properly functioning storage device. Any component in the storage chain between (and including) the storage device and the computer system may experience failure (e.g. the storage device, connectors (e.g. cables) between the storage device and other circuitry, the network between the storage device and the accessing computer system (in some cases), etc.).

To mitigate the risk of losing data, computer system users typically make backup copies of data stored on various storage devices. Typically, backup software is installed on a computer system and the backup may be scheduled to occur periodically and automatically. In many cases, an application or applications may be in use when the backup is to occur. The application may have one or more files open, preventing access by the backup software to such files. Some backup software may include custom code for each application (referred to as a "backup agent"). The backup agent may attempt to communicate with the application or otherwise cause the application to commit its data to files so that the files can be backed up. Often, such backup agents make use of various undocumented features of the applications to successfully back up files. As the corresponding applications change (e.g. new versions are released), the backup agents may also require change. Additionally, some files (such as the Windows registry) are always open and thus difficult to back up.

Disaster recovery configurations are used in some cases to provide additional protection against loss of data due to failures, not only in the computer systems themselves but in the surrounding environment (e.g. loss of electrical power, acts of nature, fire, etc.). In disaster recovery configurations, the state of data may periodically be checkpointed from a first computer system to a second computer system. In some cases, the second computer system may be physically located distant from the first computer system. If a problem occurs that causes the first computer system to go down, the data is safely stored on the second computer system. In some cases, applications previously running on the first computer system may be restarted on the second computer system to allow continued access to the preserved data. The disaster recovery software may experience similar issues as the backup software with regard to applications which are running when a checkpoint is attempted and the files that the applications may have open at the time of the checkpoint. Additionally, replicating all the state needed to restart the application on the second computer system (e.g. the operating system and its configuration settings, the application and its configuration settings, etc.) is complicated.
SUMMARY OF THE INVENTION

One or more computer systems, a carrier medium, and a method are provided for backing up virtual machines. The backup may occur, e.g., to a backup medium or to a disaster recovery site, in various embodiments. In one embodiment, an apparatus includes a computer system configured to execute at least a first virtual machine, wherein the computer system is configured to: (i) capture a state of the first virtual machine, the state corresponding to a point in time in the execution of the first virtual machine; and (ii) copy at least a portion of the state to a destination separate from a storage device to which the first virtual machine is suspendable. A carrier medium may include instructions which, when executed, cause the above operation on the computer system. The method may comprise the above highlighted operations.
BRIEF DESCRIPTION OF THE DRAWINGS

The following detailed description makes reference to the accompanying drawings, which are now briefly described.

FIG. 1 is a block diagram of one embodiment of a computer system.

FIG. 2 is a flowchart illustrating operation of one embodiment of a backup program shown in FIG. 1.

FIG. 3 is a block diagram of one embodiment of a pair of computer systems, wherein one of the computer systems is a disaster recovery site for the other computer system.

FIG. 4 is a flowchart illustrating operation of one embodiment of a checkpoint program shown in FIG. 3.

FIG. 5 is a flowchart illustrating operation of one embodiment of a recovery program shown in FIG. 3.

FIG. 6 is a block diagram of a second embodiment of a computer system.

FIG. 7 is a flowchart illustrating operation of a second embodiment of a backup program shown in FIG. 6.

FIG. 8 is a flowchart illustrating operation of a portion of one embodiment of a VM kernel.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims.
DETAILED DESCRIPTION OF EMBODIMENTS
A computer system executes one or more virtual machines, each of which may include one or more applications. To create a backup, the computer system may capture a state of each virtual machine and back up the state. In one embodiment, the computer system may capture the state in cooperation with a virtual machine kernel which controls execution of the virtual machines, while the virtual machines continue to execute. The state may include the information in a virtual machine image created in response to a suspension of the virtual machine. In another embodiment, the computer system may capture the state by suspending each virtual machine to an image and backing up the image of the virtual machine. In this manner, the files used by the application are backed up, even if the application has the files open while the virtual machine is active in the computer system. Furthermore, updates to the files which are not yet committed (e.g. they are still in memory in the virtual machine) may be backed up as well. In some cases, only a portion of the state or image need be backed up at a given time (e.g. non-persistent virtual disks may be backed up by copying the COW files corresponding to those disks, if an initial copy of the disk file has been made).
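By way of illustration only, the Python loop below sketches the suspend-and-copy flow just described (and shown in FIG. 2). The vm_kernel object and its suspend, has_nonpersistent_disks, commit_cow_changes, and resume calls are hypothetical stand-ins for whatever interface a particular VM kernel exposes; they are not an API defined by this patent.

# Illustrative sketch only: a backup loop in the spirit of the flow described
# above. The VM-kernel interface and file layout are hypothetical.
import shutil
from pathlib import Path


def backup_virtual_machines(vm_kernel, vm_names, backup_dir: Path) -> None:
    """Suspend each virtual machine, copy its image, then resume it."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    for name in vm_names:
        image_files = vm_kernel.suspend(name)      # capture state to an image
        if vm_kernel.has_nonpersistent_disks(name):
            # Optionally fold copy-on-write (COW) files into the base disks.
            vm_kernel.commit_cow_changes(name)
        dest = backup_dir / name
        dest.mkdir(exist_ok=True)
        for f in image_files:                      # memory state + virtual disk files
            shutil.copy2(f, dest)
        vm_kernel.resume(name)                     # let execution continue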
Similarly, for disaster recovery configurations, the computer system may periodically capture the state of the virtual machines as a checkpoint. The checkpoints may be copied to a second computer system, which may retain one or more checkpoints for each virtual machine. In the event of a "disaster" at the original computer system, the virtual machines may be resumed from one of the checkpoints on the second computer system. The loss of data may be limited to the data created between the selected checkpoint and the point at which the disaster occurred. The checkpoints may be created by capturing state while the virtual machines continue to execute, or by suspending the virtual machines and copying the suspended image. As mentioned above, in some cases, only a portion of the state or image may be copied. Since the virtual machine state includes all of the state used by the application (operating system and its configuration settings, the application and its configuration settings, etc.), restarting the application on the second computer system may occur correctly.
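As a rough illustration of such a checkpointing arrangement, the Python sketch below suspends each virtual machine at a fixed interval, ships the resulting image files to a disaster recovery site, and prunes old checkpoints. The vm_kernel and dr_site objects, their method names, and the interval and retention values are assumptions made for the example, not details taken from the patent.

# Illustrative sketch only: a periodic checkpoint loop for a disaster
# recovery configuration. All object interfaces here are hypothetical.
import time


def checkpoint_loop(vm_kernel, vm_names, dr_site, interval_s=300, keep=3):
    """Every interval, capture each VM's state and ship it to the DR site."""
    while True:
        time.sleep(interval_s)                     # wait for the checkpoint interval
        for name in vm_names:
            image_files = vm_kernel.suspend(name)  # or capture state while running
            dr_site.store_checkpoint(name, image_files)
            vm_kernel.resume(name)
            # Retain only the most recent checkpoints for this virtual machine.
            dr_site.prune_checkpoints(name, keep_latest=keep)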
Turning now to FIG. 1, a block diagram is shown illustrating one embodiment of a computer system 10 for performing a backup. Other embodiments are possible and contemplated. The computer system 10 includes one or more virtual machines (e.g. virtual machines 16A-16C as illustrated in FIG. 1). The virtual machines are controlled by a virtual machine (VM) kernel 18. The virtual machines 16A-16C and the VM kernel 18 may comprise software and/or data structures. The software may be executed on the underlying hardware in the computer system 10 (e.g. the hardware 20). The hardware may include any desired circuitry. For example, the hardware may include one or more processors, or central processing units (CPUs), storage, and input/output (I/O) circuitry. In the embodiment of FIG. 1, the computer system 10 includes a storage device 22 and a backup medium 24.
As shown in FIG. 1, each application executing on the computer system 10 executes within a virtual machine 16A-16C. Generally, a virtual machine comprises any combination of software, one or more data structures in memory, and/or one or more files stored on a storage device (such as the storage device 22). The virtual machine mimics the hardware used during execution of a given application. For example, in the virtual machine 16A, an application 28 is shown. The application 28 is designed to execute within the operating system (O/S) 30. Both the application 28 and the O/S 30 are coded with instructions executed by the virtual CPU 32. Additionally, the application 28 and/or the O/S 30 may make use of various virtual storage devices 34 and virtual I/O devices 36. The virtual storage may include any type of storage, such as memory, disk storage, tape storage, etc. The disk storage may be any type of disk (e.g. fixed disk, removable disk, compact disc read-only memory (CD-ROM), rewriteable or read/write CD, digital versatile disk (DVD) ROM, etc.). Each disk storage in the virtual machine may be mapped to a file on a storage device such as the storage device 22A. Alternatively, each disk storage may be mapped directly to a storage device, or a combination of direct mappings and file mappings may be used. The virtual I/O devices may include any type of I/O devices, including modems, audio devices, video devices, network interface cards (NICs), universal serial bus (USB) ports, firewire (IEEE 1394) ports, serial ports, parallel ports, etc. Generally, each virtual I/O device may be mapped to a corresponding I/O device in the underlying hardware or may be emulated in software if no corresponding I/O device is included in the underlying hardware.
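The mapping just described (virtual disks backed by files or by real devices, and virtual I/O devices backed by host hardware or emulated in software) can be pictured with a small data structure. The following Python sketch is purely illustrative; the class and field names are invented for the example and are not taken from the patent.

# Illustrative sketch only: hypothetical records describing how virtual
# storage and virtual I/O devices map onto host resources.
from dataclasses import dataclass
from typing import Optional


@dataclass
class VirtualDisk:
    name: str
    host_file: Optional[str] = None    # disk backed by a file on a storage device
    host_device: Optional[str] = None  # or mapped directly to a storage device


@dataclass
class VirtualIODevice:
    name: str                          # e.g. "nic0", "serial0"
    host_device: Optional[str] = None  # mapped to a corresponding host I/O device
    emulated: bool = False             # otherwise emulated in software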
The virtual machine in which an application is executing encompasses the entire system state associated with an application. Generally, when a virtual machine is active (i.e. the application within the virtual machine is executing), the virtual machine may be stored in the memory of the computer system on which the virtual machine is executing (although the VM kernel may support a paging system in which various pages of the memory storing the virtual machine may be paged out to local storage in the computer system) and in the files which are mapped to the virtual storage devices in the virtual machine. The VM kernel may support a command to suspend the virtual machine. In response to the command, the VM kernel may write an image of the virtual machine to the storage device 22 (e.g. the image 40 shown in FIG. 1), thus capturing the current state of the virtual machine and thus implicitly capturing the current state of the executing application. The image may include one or more files written in response to the suspend command, capturing the state of the virtual machine that was in memory in the computer system, as well as the files representing the virtual storage in the virtual machine. The state may include not only files written by the application, but uncommitted changes to files which may still be in the memory within the virtual machine, the state of the hardware (including the processor 32, the memory in the virtual machine, etc.) within the virtual machine, etc. Thus, the image may be a snapshot of the state of the executing application.

A suspended virtual machine may be resumed using a resume command supported by the VM kernel. In response to the resume command, the VM kernel may read the image of the suspended virtual machine from the storage device and may activate the virtual machine in the computer system.
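Treating a suspended virtual machine as a set of files, as described above, might be modeled as follows. This is a minimal Python sketch under assumed names; the SuspendedImage layout and the vm_kernel suspend/resume calls are hypothetical, not the patent's or any product's actual format.

# Illustrative sketch only: a suspended image as a memory/device-state file
# plus the files backing the virtual disks. Paths and calls are hypothetical.
from dataclasses import dataclass, field
from pathlib import Path
from typing import List


@dataclass
class SuspendedImage:
    memory_state: Path                 # state written in response to the suspend command
    disk_files: List[Path] = field(default_factory=list)

    def all_files(self) -> List[Path]:
        return [self.memory_state, *self.disk_files]


def suspend_then_resume(vm_kernel, vm_name: str) -> SuspendedImage:
    image = vm_kernel.suspend(vm_name)  # returns a SuspendedImage in this sketch
    # ... the image files could be copied to a backup destination here ...
    vm_kernel.resume(vm_name)           # reads the image back and reactivates the VM
    return image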
The computer system 10 may be configured to back up the virtual machines executing thereon. For example, in the illustrated embodiment, a backup program 42 may execute in the virtual machine 16C (and may also be stored on the storage device 22). The virtual machine 16C may be a console virtual machine as illustrated in FIG. 1 (a virtual machine which also has direct access to the hardware 20 in the computer system 10). Alternatively, the backup program 42 may execute on a non-console virtual machine or outside of a virtual machine.

The backup program 42 may suspend the virtual machines executing on the computer system 10 (e.g. the virtual machines 16A-16B as shown in FIG. 1) and back up the image of each virtual machine (e.g. the image 40 of the virtual machine 16A) onto the backup medium 24 (or send the image files to a backup server, if the backup server is serving as the backup medium 24). Once the backup has been made, the backup program 42 may resume the virtual machines to allow their execution to continue.
Since a given virtual machine is suspended during the backup operation for that virtual machine, the files used by the application(s) within the virtual machine may be backed up even if the files are in use by the application(s) at the time the virtual machine is suspended. Each virtual machine may be suspended and backed up in the same fashion. Thus, the backup program 42 may not include any specialized backup agents for different applications that may be included in the various virtual machines.
In the embodiment of FIG. 1, the backup medium 24 may be used to store the images of the virtual machines. Generally, the backup medium 24 may be any medium capable of storing data. For example, the backup medium 24 may be a storage device similar to the storage device 22. The backup medium 24 may be a removable storage device, to allow the backup medium to be separated from the computer system 10 after the backup is complete. Storing the backup medium physically separated from the computer system that is backed up thereon may increase the reliability of the backup, since an event which causes problems on the computer system may not affect the backup medium. For example, the backup medium 24 may comprise a removable disk or disk drive, a tape backup, writeable compact disk storage, etc. Alternatively, the backup medium 24 may comprise another computer system (e.g. a backup server) coupled to receive the backup data from the computer system 10 (e.g. via a network coupling the two computer systems), a storage device attached to a network to which the computer system is attached (e.g. NAS or SAN technologies), etc.
The virtual hardware in the virtual machine 16A (and other virtual machines such as the virtual machines 16B-16C) may be similar to the hardware 20 included in the computer system 10. For example, the virtual CPU 32 may implement the same instruction set architecture as the processor(s) in the hardware 20. In such cases, the virtual CPU 32 may be one or more data structures storing the processor state for the virtual machine 16A. The application and O/S software instructions may execute on the CPU(s) in the hardware 20 when the virtual machine 16A is scheduled for execution by the VM kernel 18. When the VM kernel 18 schedules another virtual machine for execution (e.g. the virtual machine 16B), the VM kernel 18 may write the state of the processor into the virtual CPU 32 data structure. Alternatively, the virtual CPU 32 may be different from the CPU(s) in the hardware 20. For example, the virtual CPU 32 may comprise software coded using instructions from the instruction set supported by the underlying CPU to emulate instruction execution according to the instruction set architecture of the virtual CPU 32. Alternatively, the VM kernel 18 may emulate the operation of the hardware in the virtual machine. Similarly, any virtual hardware in a virtual machine may be emulated in software if there is no matching hardware in the hardware 20.
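The save-and-restore of processor state into a virtual CPU data structure on a scheduling switch, as described above, can be sketched as follows. The register fields and the vm_kernel hooks are hypothetical placeholders invented for this example, not an interface defined by the patent.

# Illustrative sketch only: saving host processor state into one virtual CPU
# structure and loading another when the VM kernel switches virtual machines.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class VirtualCPU:
    registers: Dict[str, int] = field(default_factory=dict)
    program_counter: int = 0


def switch_virtual_machines(vm_kernel, current_vcpu: VirtualCPU, next_vcpu: VirtualCPU) -> None:
    # Capture the physical CPU state of the outgoing virtual machine ...
    current_vcpu.registers = vm_kernel.read_host_registers()
    current_vcpu.program_counter = vm_kernel.read_host_pc()
    # ... and load the previously saved state of the incoming virtual machine.
    vm_kernel.load_host_registers(next_vcpu.registers)
    vm_kernel.set_host_pc(next_vcpu.program_counter)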
Different virtual machines which execute on the same computer system 10 may differ. For example, the O/S 30 included in each virtual machine may differ. Different virtual machines may employ different versions of the same O/S (e.g. Microsoft Windows NT with different service packs installed), different versions of the same O/S family (e.g. Microsoft Windows NT and Microsoft Windows 2000), or different O/Ss (e.g. Microsoft Windows NT, Linux, Sun Solaris, etc.).
Generally, the VM kernel may be responsible for managing the virtual machines on a given computer system. The VM kernel may schedule virtual machines for execution on the underlying hardware, using any scheduling scheme. For example, a time division multiplexed scheme may be used to assign time slots to each virtual machine. Additionally, the VM kernel may handle the suspending and resuming of virtual machines responsive to suspend and resume commands. The commands may be received from a virtual machine
