`
`
`
`UNITED STATES PATENT AND TRADEMARK OFFICE
`_______________
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`_____________
`VMWARE, INC.,
`Petitioner,
`v.
`CIRBA IP, INC.,
`Patent Owner.
`_______________
`Case: PGR2021-00098
`Patent: 10,951,459
`____________________________________________________________
`DECLARATION OF DR. EREZ ZADOK, PH.D.
`
`
`
`
`
`VMware, Inc. Exhibit 1006 Page 1
`
`
`
`
`
TABLE OF CONTENTS
I.    Qualifications ........................................................................................... 2
II.   Materials Considered ............................................................................. 12
III.  Understanding of Relevant Legal Principles ........................................ 12
      A.  Claim Construction Standard ...................................................... 12
      B.  Written Description ..................................................................... 13
IV.   Background ............................................................................................ 14
      A.  Technical Background ................................................................. 14
      B.  Overview of the ’459 Patent ........................................................ 18
          1.  System Parameters ............................................................ 20
          2.  Compatibility Analyses ..................................................... 21
          3.  Consolidation Analyses ..................................................... 23
          4.  Visualization ...................................................................... 26
          5.  Summary of the ’459 Patent Prosecution History ............ 26
V.    Level of Skill in the Art ......................................................................... 28
VI.   Claim Construction ................................................................................ 29
VII.  The ’459 Patent Specification Does Not Describe Claims 1-63 Such That a
      POSITA Would Understand the ’459 Patent Inventors Were in Possession of
      the Claimed Systems, Methods, or Computer Readable Medium ............. 30
      A.  “Already Placed” ......................................................................... 30
          1.  Analysis Types ................................................................... 32
              a.  1-to-1 Compatibility Analysis ................................. 32
              b.  N-to-1 Compatibility Analysis ................................ 33
              c.  N-by-N Compatibility Analysis .............................. 33
              d.  Consolidation Analysis ........................................... 34
          2.  “Source System” ................................................................ 35
      B.  “Issue Instructions” ..................................................................... 37
      C.  Incorporated Applications .......................................................... 41
`
`
`
`
`
1. My name is Erez Zadok. I have been retained by Petitioner VMware,
`
`Inc. (“Petitioner” or “VMware”) to assist regarding U.S. Patent No. 10,951,459
`
`(Ex. 1001, “the ’459 patent”). Specifically, I have been asked to consider the
`
`patentability of claims 1-63 of the ’459 patent (“the Challenged Claims”) in view of
`
`prior art and the understanding of a person of ordinary skill in the art (“POSITA”)
`
`as it relates to the ’459 patent. I understand that my opinions in this declaration are
`
`being submitted as part of the petition in PGR2021-00098. Additionally, I
`
understand that two of my other declarations addressing the same claims of the ’459
`
`patent will be submitted in connection with two IPR petitions (IPR2021-01210 and
`
`IPR2021-01211).
`
`2.
`
`I have personal knowledge of the facts and opinions set forth in this
`
`declaration and believe them to be true. If called upon to do so, I would testify
`
`competently thereto. I have been warned that willful false statements and the like
`
`are punishable by fine or imprisonment, or both.
`
3. My consulting company, Zadoks Consulting, LLC, is being compensated for my time at my standard consulting rate. I am also being reimbursed
`
`for expenses that I incur during the course of this work. My compensation is not
`
`contingent upon the results of my study and analysis, the substance of my opinions,
`
`or the outcome of any proceeding involving the Challenged Claims. I have no
`
`
`
` 1
`
`VMware, Inc. Exhibit 1006 Page 4
`
`
`
`
`
`financial interest in the outcome of this matter or in any litigation involving the ’459
`
`patent.
`
I. Qualifications
`
`4.
`
`I am a Professor in the Computer Science Department at Stony Brook
`
`University (part of the State University of New York (“SUNY”) system). I direct
`
`the File-systems and Storage Lab (FSL) at Stony Brook’s Computer Science
`
`Department. My research interests include file systems and storage systems,
`
`operating systems, information technology and system administration, security and
`
`information assurance, networking, energy efficiency, performance and
`
`benchmarking, virtualization, compilers, applied machine learning, and software
`
`engineering.
`
`5.
`
`I studied at a professional high school in Israel, focusing on electrical
`
`engineering (“EE”), and graduated in 1982. I spent one more year at the high
`
`school’s college division, receiving a special Certified Technician’s degree in EE. I
`
`then went on to serve in the Israeli Defense Forces for three years (1983-1986). I
`
`received my Bachelor of Science degree in computer science (“CS”) in 1991, my
`
`Master’s degree in CS in 1994, and my Ph.D. in CS in 2001—all from Columbia
`
`University in New York.
`
`6. When I began my undergraduate studies at Columbia University, I also
`
`started working as a student assistant in the various campus-wide computer labs,
`
`
`
` 2
`
`VMware, Inc. Exhibit 1006 Page 5
`
`
`
`
`
`eventually becoming an assistant to the head labs manager, who was managing all
`
`public computer labs on campus. During that time, I also became more involved
`
`with research within the CS Department at Columbia University, conducting
`
`research on operating systems, file and storage systems, distributed and networked
`
`systems, security, and other topics. I also assisted the CS department’s computer
`
`administrators in managing the department’s computers, which included storage, IT,
`
`networking, and cyber-security related duties.
`
`7.
`
`In 1991, I joined Columbia University’s CS department as a full-time
`
`systems administrator, studying towards my MS degree part-time. My MS thesis
`
`topic related to file system reliability, fault tolerance, replication, and failover in
`
`mobile networked storage systems using file virtualization. My main duties as a
`
`systems administrator involved installing, configuring, and managing many
`
`networked servers, proxies, and desktops running several operating systems, as well
`
as network device setup; this included many hardware upgrades, device upgrades,
`
`and BIOS firmware/chipset updates/upgrades. My duties also included ensuring
`
`reliable, secure, authenticated access to networked systems/storage and licensed
`
software, as well as software updates, security patches, and bug fixes. Examples of servers
`
`and their protocols included email (SMTP), file transfer (FTP), domain names
`
`(DNS), network file systems (NFS), network news systems (NNTP), and Web
`
`(HTTP).
`
`
`
` 3
`
`VMware, Inc. Exhibit 1006 Page 6
`
`
`
`
`
`8.
`
`In 1994, I left my systems administrator position to pursue my doctoral
`
`studies at Columbia University. My Ph.D. thesis topic was on versatile file system
`
`development using stackable (virtualized) file systems, with examples in the fields
`
`of security and encryption, efficiency, reliability, and failover. I continued to work
`
`part-time as a systems administrator at the CS department, and eventually I was
`
`asked to serve as manager to the entire information technology (“IT”) staff. From
`
`1991 to 2001, I was a member of the faculty-level Facilities Committee that oversaw
`
`all IT operations at the CS department.
`
`9.
`
`As part of my Ph.D. studies at Columbia, I collaborated on projects to
`
`develop advanced AI-like techniques to detect previously unknown viruses (a.k.a.
`
`“zero-day malware”), using data mining and rule-based detection. This work led to
`
`several highly cited papers (over 1,300 citations for one of the papers alone), and
`
two patents. I also became a Teaching Assistant (TA) for the first-ever Computer
`
`Security course given at Columbia University’s CS department with Dr. Matt Blaze
`
`as instructor.
`
`10. From 1990 to 1998, I consulted for SOS Corporation and HydraWEB
`
`Technologies, as a systems administrator and programmer, managing data storage
`
`use and backup/restore duties, as well as information assurance and cyber-security
`
`(e.g., malware protection, software licensing). From 1994 to 2000, I led projects at
`
HydraWEB Technologies, and then became the Director of Software Development,

overseeing the development of several products and appliances such as stateful
`
`firewalls and HTTP load-balancers, utilizing network-virtualization and high-
`
`availability techniques. Since 2009, I have consulted for Packet General Networks,
`
`a startup specializing in secure, virtualized, network storage and applications’ data
`
`security in the cloud.
`
`11.
`
`In 2001, I joined the faculty of Stony Brook University, a position I
`
`have held since that time. In 2002, I joined the Operations Committee, which
`
`oversees the IT operations of the CS department at Stony Brook University. From
`
`2006 to 2010, I was the Director of IT Operations of the CS department; my day-to-
`
`day duties included setting policies regarding computing, hiring and training new
`
`staff, assisting any staff with topics of my specialty, defining requirements for new
`
software/hardware, and purchasing. From 2010 to 2015, I served as the Co-Chair

of the Operations Committee. From 2016 to 2019, I oversaw the IT Operations
`
`as the Chair of the Operations Committee. A significant component of these duties
`
`included defining and helping implement policies for data management, so as to
`
`ensure the security of users and their data, and data reliability and availability, while
`
`minimizing the inconvenience and performance impact to users. I personally helped
`
set up and maintain an initial virtual host infrastructure in the department. Since late
`
`2019, I’ve been a member of the department’s Executive Committee that also
`
`oversees all IT operations.
`
`
`
` 5
`
`VMware, Inc. Exhibit 1006 Page 8
`
`
`
`
`
`12.
`
`In 2017, I became the department’s Graduate Academic Adviser,
`
`advising all Masters students (over 400 annually on average) and many other
`
`graduate students on an assortment of academic matters.
`
13. Since 2001, I have personally configured and managed my own research
`
`lab’s network. This includes setting up and configuring multiple storage systems
`
`(e.g., NFS, CIFS/SMB, NAS), virtual and physical environments, applications such
`
`as database and mail servers, user access control (e.g., NIS, LDAP), backups and
`
`restores, snapshot policies, and more. I’ve personally installed, configured, changed,
`
`replaced parts, and upgraded components in numerous devices from laptops to
`
`servers, both physical and virtual.
`
`14. Since 1995, I have taught courses on operating systems, storage and file
`
`systems, advanced systems programming in Unix/C, systems administration, data
`
`structures, data/software security, and more. My courses often use storage, file
`
`systems, distributed systems, and system/network security as key teaching principles
`
`and practical examples for assignments and projects. I have taught these concepts
`
`and techniques to my students, both to my direct advisees as well as in my courses.
`
`For example, in my graduate Operating Systems course, I often cover Linux’s kernel
`
`mechanisms to protect users, applications, and data files, as well as distributed
`
`storage systems (e.g., NFS). And in the System Administration undergraduate
`
`
`
` 6
`
`VMware, Inc. Exhibit 1006 Page 9
`
`
`
`
`
`course, I covered many topics such as networking, storage, backups, and configuring
`
`complex applications such as mail, web, and database servers.
`
`15. My research often investigates computer systems from many angles:
`
`security, efficiency, energy use, scalability, reliability, portability, survivability,
`
`usability, ease-of-use, versatility, flexibility, and more. My research gives special
`
`attention to balancing five often-conflicting aspects of computer systems:
`
`performance, reliability, energy use, security, and ease-of-use. Since joining Stony
`
`Brook University in 2001, my group in the File-systems and Storage Lab (FSL) has
`
`developed many file systems and operating system extensions; examples include a
`
`highly-secure cryptographic file system, a portable copy-on-write (COW)
`
`versioning file system, a tracing file system useful to detect intrusions, a replaying
`
`file system useful for forensics, a snapshotting and sandboxing file system, a
`
`namespace unification file system (that uses stackable, file-based COW), an anti-
`
`virus file system, an integrity-checking file system, a load balancing and
`
`replication/mirroring file system, network file system extensions for security and
`
`performance, distributed secure cloud-based storage systems, transactional key-
`
`value stores and file systems, OS level embedded databases, a compiler to convert
`
`user-level C code to in-kernel efficient yet safe code, GCC plugins, stackable file
`
`system templates, and a Web-based backup system. Many of these projects used
`
`
`
` 7
`
`VMware, Inc. Exhibit 1006 Page 10
`
`
`
`
`
`one form of virtualization or another (storage, network, host, etc.). I continue to
`
`maintain and release newer versions of some of these file systems and software.
`
`16.
`
`I have published over 120 refereed publications (in ACM, IEEE,
`
`USENIX, and more). To date, my publications have been cited more than 8,700
`
`times (as per Google Scholar as of June 21, 2021). My papers cover a wide range
`
`of related technologies such as file systems, storage systems, transactional systems,
`
`security, performance benchmarking and optimization, energy efficiency, system
`
`administration, and more. I also published a book titled “Linux NFS and
`
`Automounter Administration” (Sybex, 2001), covering systems administration
`
`topics related to network storage and data security.
`
`17. Some of my research has led to public software releases that have been
`
`used worldwide. I have publicly maintained the Amd Berkeley Automounter in a
`
`package called “am-utils” since 1992; this software helps administrators manage the
`
`multitude of file system mounts on dozens of different Unix systems, especially
`
`helping to automate access to multiple NFS/NAS storage volumes. Since 1997, I
`
`have maintained and released several stackable (virtualized) file system software
`
`projects for Linux, FreeBSD, and/or Solaris, in a package called FiST. One of my
`
`stackable file system encryption projects, called Cryptfs, became the basis for IBM’s
`
`public release of eCryptfs, now part of Linux. Packet General Networks, for whom
`
`I have provided consulting services since 2009, licensed another encryption file
`
`
`
` 8
`
`VMware, Inc. Exhibit 1006 Page 11
`
`
`
`
`
`system called Ncryptfs. Another popular file system released in 2003, called
`
`Unionfs, offers virtual namespace unification, transparent shadow copying (a.k.a.
`
`copy-on-write or COW), file system snapshotting (e.g., useful for forensics and
`
`disaster recovery), and the ability to save disk space by sharing a read-only copy of
`
`data among several computers, among other features.
`
`18. My research and teaching make extensive use of data security features.
`
`For example, each time I taught the graduate operating system course, the first
`
`homework assignment includes the creation of a new system call that performs new
`
`or added functionality, often for encrypting a file or verifying its integrity; many of
`
`my other assignments cover topics of user/process access control, anti-virus filtering,
`
`and more. Since 2001, over 1,000 graduate students were exposed to these simple
`
`principles directly through my teaching and research at Stony Brook University.
`
`19. Moreover, in an undergraduate course titled “Advanced Systems
`
`Programming in Unix/C,” I cover many topics of system security and vulnerabilities,
`
`such as the structure of UNIX processes, and memory segments such as the heap and
`
`stack. Often, the first assignment for this course is to develop a tool to
`
`encrypt/decrypt files using advanced ciphers, use digital signatures to certify the
`
`cipher keys used, and reliably recover files in case of failures. Since 2001, several
`
`hundred undergraduate students were exposed to these principles directly through
`
`my teaching and research at Stony Brook University.
`
`
`
` 9
`
`VMware, Inc. Exhibit 1006 Page 12
`
`
`
`
`
`20.
`
`In another undergraduate course, System Administration, I taught
`
`network configuration, security, and storage configuration and reliability. In a
`
`special topics course on Storage Systems, I covered many topics such as data
`
`deduplication, RAID, transactional storage, storage hardware including modern
`
`Flash based ones, virtual storage, backup/restore, snapshots and continuous data
`
`protection (CDP), NAS and SAN, and NFS.
`
21. Overall, in addition to the aforementioned experience, my technical
`
`experience relevant to this patent at the time of the alleged invention included the
`
`following: configuring and running hypervisors on Linux, Windows, and Mac OS
`
`X; experimenting with and/or using products from VMware (ESX, GSX, Fusion,
`
`Workstation), Sun Microsystems (VirtualBox), Linux (open-source Xen and KVM),
`
and others (e.g., Parallels); installing and running numerous virtual machines (VMs) on

these hypervisors; setting up storage backends; configuring and executing VM migration;

manually load-balancing and consolidating VMs and hosts; evaluating hardware

capabilities of hosts and VMs; and optimizing my VM clusters. I’ve also studied,
`
`researched, taught and published on topics of operating system and storage system
`
`optimizations using complex algorithms.
`
`22. My research has been supported by many federal and state grants as
`
`well as industry awards, including an NSF CAREER award, two IBM Faculty
`
`awards, two NetApp Faculty awards, a Western Digital award, a Facebook award,
`
`
`
` 10
`
`VMware, Inc. Exhibit 1006 Page 13
`
`
`
`
`
`several Dell-EMC awards, and several equipment gifts. I was the winner of the 2004
`
`Computer Science Department bi-annual Graduate Teaching Award, the winner of
`
`the 2006 Computer Science Department bi-annual Research Excellence Award, and
`
`a recipient of the 2008 SUNY Chancellor’s Excellence in Teaching award (an award
`
`that can be given only once a lifetime).
`
`23. My service record to the community includes being the co-chair for the
`
`USENIX Annual Technical Conference in 2020 (ATC’20); being the co-chair for
`
`USENIX File and Storage Technologies (FAST) in 2015 and being on the FAST
`
`Steering Committee since 2015; joining the ACM HotStorage Steering Committee
`
`in 2021; and being the co-chair in 2012 and on the Steering Committee of the ACM
`
`SYSTOR conference. I have also been an Associate Editor to the ACM Transactions
`
`on Storage (TOS) journal since 2009.
`
`24.
`
`I am a named inventor on four patents, two titled “Systems and Methods
`
`for Detection of New Malicious Executables” (U.S. Patent No. 7,487,544, issued
`
`February 3, 2009; and U.S. Patent No. 7,979,907, issued July 12, 2011); and two
`
`more titled “Multi-Tier Caching,” (U.S. Patent No. 9,355,109, issued May 31, 2016;
`
`and U.S. Patent 9,959,279, issued May 1, 2018).
`
`25.
`
`I have been disclosed as a testifying expert in 13 cases (including inter
`
`partes review (IPR) proceedings) in the past four years. I have been deposed 11
`
`times and testified in trial twice. A complete copy of my curriculum vitae, which
`
`
`
` 11
`
`VMware, Inc. Exhibit 1006 Page 14
`
`
`
`
`
`includes a list of my publications and contains further details on my education,
`
`experience, publications, patents, and other qualifications to render an expert
`
`opinion, is attached as Exhibit 1007.
`
`II. Materials Considered
`
`26.
`
`In performing my analysis and forming the opinions below, I
`
`considered the ’459 patent, its prosecution history, the applications that it
`
incorporates by reference, and the materials listed as exhibits to the petition in this

proceeding, PGR2021-00098.
`
`III. Understanding of Relevant Legal Principles
`
`27.
`
`I am not a lawyer, and I will not provide any legal opinions. Rather, I
`
`have been asked to provide my technical opinions based on how a person of ordinary
`
`skill in the art would have understood the claims of the ’459 patent, in light of its
`
`disclosure and prosecution history, as of the patent’s priority date. Although I am
`
`not a lawyer, I have been advised that certain legal standards are to be applied by
`
`technical experts in forming opinions regarding the meaning and validity of patent
`
`claims.
`
A. Claim Construction Standard
`
`28.
`
`I understand that claim terms are given their ordinary and customary
`
`meaning, as would be understood by a person of ordinary skill in the relevant art in
`
`the context of the patent’s entire disclosure and prosecution history. A claim term,
`
`
`
` 12
`
`VMware, Inc. Exhibit 1006 Page 15
`
`
`
`
`
`however, will not receive its ordinary meaning if the patentee acted as his own
`
`lexicographer and clearly set forth a definition of the claim term in the specification.
`
`In that case, the claim term will receive the definition set forth in the patent.
`
`29. The face of the ’459 patent claims priority to a series of applications,
`
`the earliest of which was filed April 21, 2006. As I explain below, none of the
`
`applications in this priority chain provide written description support to the
`
`challenged claims. I understand that this means the ’459 patent’s priority date is its
`
`November 19, 2019 filing date. Even if, however, an earlier priority date—up to
`
`and including the earliest April 21, 2006 date listed on the face of the ’459 patent—
`
`were considered for purposes of determining the level of ordinary skill in the art and
`
`interpreting the ’459 patent, my opinions below would not change.
`
B. Written Description
`
`30.
`
`I have been informed that 35 U.S.C. § 112 requires patents to contain
`
`an adequate written description of the claimed invention. I understand that the
`
`purpose of the written description requirement is to demonstrate that the inventor
`
`was in possession of the invention at the time the patent application was filed, even
`
`though subsequently the claims may have been changed or new claims may have
`
`been added. I understand that this requirement is met if, at the time of filing the
`
`patent application, a person of ordinary skill in the art reading that application would
`
`have recognized that it described the invention as claimed. I understand that the
`
`
`
` 13
`
`VMware, Inc. Exhibit 1006 Page 16
`
`
`
`
`
`application does not need to specifically disclose a claim limitation as long as a
`
`person of ordinary skill in the art would understand that the missing requirement is
`
`necessarily implied in the application as originally filed. I understand that the
`
`written description inquiry must take into account the entirety of what is disclosed
`
`within the specification. I understand that it is insufficient that undisclosed subject
`
`matter would have been obvious to a person of ordinary skill in the art.
`
`IV. Background
`
A. Technical Background
`
`31. To carry out their day-to-day operations, businesses and other
`
`organizations typically require an information technology (“IT”) infrastructure to
`
`provide computing and data processing services. IT infrastructures frequently
`
`include one or more “data centers,” where computing and networking equipment is
`
`concentrated for the purpose of collecting, storing, processing, or permitting access
`
`to data, software, or other computing resources. (Ex. 1013 (Wood) at p. 229 (“Data
`
`centers—server farms that run networked applications—have become popular in a
`
`variety of domains such as web hosting, enterprise systems, and e-commerce
`
`sites.”).)
`
`32.
`
`In the early days of computing, such a data center might have contained
`
`a single, powerful “mainframe” computer with enough resources to store large
`
`amounts of data and provide services to the organization’s users. But as computing
`
`
`
` 14
`
`VMware, Inc. Exhibit 1006 Page 17
`
`
`
`
`
`equipment got smaller and cheaper, organizations began to replace these large
`
`monolithic mainframe computers with compute “clusters” made up of many
`
`(hundreds or even thousands of) smaller, cheaper computer servers, networked
`
`together. (Ex. 1014 (Baker) at p. 1 (“there has been a dramatic shift from mainframe
`
`or ‘host-centric’ computing to a distributed ‘client-server’ approach” due to “the
`
`realisation that clusters of high−performance workstations can be realistically used
`
for a variety of applications either to replace mainframes, vector supercomputers and parallel computers or to better manage already installed collections of workstations.").)
`
33. As the monolithic mainframe had, the cluster would provide users

with the perception of a single computing resource. (Ex. 1020 (PlateSpin) at p. xxiv
`
`(“Clustering allows several physical machines to collectively host one or more
`
`virtual servers.”).) Users could connect to that resource over a network to access
`
`data and computing services as before. (Ex. 1020 (PlateSpin) at p. xxiv (“With
`
`clustering, clients won’t connect to a physical computer but instead connect to a
`
`logical virtual server running on top of one or more physical computers.”).)
`
34. As data centers grew and changed over time, one problem that arose was

“server sprawl.” (Ex. 1018 (Khanna) at Abstract, p. 373 (“As businesses have
`
`grown, so has the need to deploy I/T applications rapidly to support the expanding
`
`business processes. Often, this growth was achieved in an unplanned way,” which
`
`
`
` 15
`
`VMware, Inc. Exhibit 1006 Page 18
`
`
`
`
`
`“has led to what is often referred to as ‘server and storage sprawl’, i.e., many
`
`underutilized servers, with heterogeneous storage elements.”).)
`
`35. Over time, as data centers grew, they frequently came to include “an
`
`increasingly complex mixture of server platforms, hardware, operating systems, and
`
`applications.” (Ex. 1020 (PlateSpin) at p. 505.) But as new servers and software
`
`were added, it quickly became difficult to track them all and to ensure that they
`
`remained sufficiently utilized. (Ex. 1020 (PlateSpin) at p. 513 (“Today’s data center
`
`is riddled with production servers that are underutilized and that therefore represents
`
`a large amount of potential savings”).) For example, legacy technologies persisted
`
`long after they were no longer needed, and underutilized servers imposed significant
`
`unnecessary cost in terms of power, temperature conditioning, floor space, licensing,
`
`and other costs of maintenance and ownership. (Ex. 1020 (PlateSpin) at p. 505
`
`(“Data centers have accumulated and assimilated a large variety of new technologies
`
`that over time have become ‘legacy’ technologies that never go away” and they
`
`“collectively increase data center costs because of power consumption, temperature
`
`conditioning, and floor space.”).)
`
`36.
`
`In light of the above issues, before the ’459 patent’s earliest priority
`
`date, the advantages of server consolidation were well-known. For example, a patent
`
`assigned to IBM and filed in June 2005 recognized the importance of consolidation
`
`analysis and the cost benefits of consolidation:
`
`
`
` 16
`
`VMware, Inc. Exhibit 1006 Page 19
`
`
`
`
`
`It was also known that in some cases where the customer
`is currently using two or more of its own servers, these
`can be consolidated into one dedicated server in an “on
`demand” or “utility” model. For example, if the customer
`was only using a small percentage of the capacity of its
`current servers, these may be consolidated into one vendor
`server of similar or equal power in an “on demand” or
`“utility” model. The cost savings result from a reduction
`in (a) number of operating systems, (b) virtual memory,
`(c) real memory, (d) swap disk space, (e) system manage-
`ment software licenses, (f) customer application software
`licenses, (g) systems administration and support, (h) floor
`space, and/or (i) electricity and cooling costs.
`
`
`(Ex. 1015 (Taylor) at 2:1-13 (emphases added); see also 11:43-15:5 referring to Figs.
`
`4(A)-(E) describing a program that “determines whether two customer servers can
`
`be consolidated into one vendor server.”) Similarly, there were known techniques
`
`for “costing and planning the consolidation of multiple source server computers or
`
`other source computer hardware devices to fewer target server computers or other
`
`target hardware devices.” (Ex. 1016 (Power) at 3:22-26.) And others were
`
`describing how to identify “different possible configurations [of communications
`
`systems] that consolidate some or all of two or more of servers in the system.” (Ex.
`
`1017 (Van Hoose) at Abstract).)
`
`37.
`
`It also was well-known that one way to combat server sprawl is by using
`
`virtualization to perform “server consolidation.” (Ex. 1019 (Menascé) at p. 1
`
`(“Virtualization may be used for server consolidation”); Ex. 1020 (PlateSpin) at p.
`
`505 (“server virtualization alleviates the complexity of the data center to some extent
`
`
`
` 17
`
`VMware, Inc. Exhibit 1006 Page 20
`
`
`
`
`
`through consolidation”).) Server consolidation refers to the process of converting
`
`some physical machines into VMs and consolidating those VMs onto remaining
`
`physical machines, which act as hosts. (See Ex. 1018 (Khanna) at pp. 373-74
`
`(illustrating five steps of “a simple algorithm for server consolidation” using
`
`virtualization).)
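Server consolidation of this kind is commonly framed as a bin-packing problem. The sketch below is purely illustrative and is my own simplification, not the algorithm from any exhibit or from the ’459 patent: it packs single-dimension VM loads onto equal-capacity hosts using a first-fit-decreasing heuristic, and the function name, single-resource model, and example loads are all assumptions of mine.

```python
# Illustrative sketch only (my own simplification): server consolidation
# treated as one-dimensional bin packing with a first-fit-decreasing heuristic.

def consolidate(vm_loads, host_capacity):
    """Place VM loads onto as few equal-capacity hosts as possible."""
    hosts = []  # each host is the list of VM loads assigned to it
    for load in sorted(vm_loads, reverse=True):  # largest VMs first
        for host in hosts:
            if sum(host) + load <= host_capacity:  # fits on an existing host
                host.append(load)
                break
        else:
            hosts.append([load])  # no existing host fits; open a new one
    return hosts

# Six underutilized servers' loads consolidated onto two hosts of capacity 100.
placement = consolidate([30, 20, 50, 10, 40, 25], host_capacity=100)
# placement → [[50, 40, 10], [30, 25, 20]]
```

First-fit decreasing does not guarantee the minimum number of hosts, but it captures the basic idea of consolidating many underutilized machines onto fewer, better-utilized ones.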
`
B. Overview of the ’459 Patent
`
`38.
`
`I have reviewed the overview of the ’459 patent from the corresponding
`
`petition and incorporate it below.
`
`39. The ’459 patent presents techniques for calculating “consolidation
`
`solutions” that can allegedly improve the efficiency of a computing environment
`
`with multiple computer systems. (’459 patent at 2:5-13, 2:66-3:10.) Specifically,
`
`these consolidation solutions are recommendations on where to move existing
`
`applications and data from certain systems (called “source systems”) to other
`
`systems (called “target systems”) in order to reduce the number of systems in the
`
`overall environment. (’459 patent at 5:16-24, 5:55-64, 6:35-51, 8:17-25.)
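The source-to-target recommendation idea just described can be illustrated with a deliberately simplified sketch. The two toy checks used here (matching operating system and sufficient spare target capacity) and every name in the code are hypothetical assumptions of mine for illustration only; the ’459 patent's actual compatibility and consolidation analyses are more involved.

```python
# Hypothetical illustration (my own simplification, not the ’459 patent's
# method): recommend moves from "source" systems to "target" systems when
# a pair passes two toy compatibility checks.

def compatible_moves(sources, targets):
    """Return (source, target) name pairs that pass both toy checks."""
    moves = []
    for s in sources:
        for t in targets:
            same_os = s["os"] == t["os"]            # toy compatibility check
            fits = s["load"] <= t["free_capacity"]  # toy capacity check
            if same_os and fits:
                moves.append((s["name"], t["name"]))
    return moves

sources = [{"name": "src1", "os": "linux", "load": 40}]
targets = [
    {"name": "tgt1", "os": "linux", "free_capacity": 60},
    {"name": "tgt2", "os": "windows", "free_capacity": 90},
]
moves = compatible_moves(sources, targets)
# moves → [("src1", "tgt1")]: src1's applications/data could move to tgt1
```

The output is a list of candidate moves, echoing the patent's notion of recommendations identifying where applications and data could be relocated.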
`
`40. The ’459 patent explains that a “system” can be a “physical system,” a
`
`“virtual system,” or a “hypothetical system” (i.e., a system that does not currently
`
`exist in the environment but is useful for analyzing hypothetical situations). (’459
`
`patent at 5:65-6:10.) According to the ’459 patent, managing an environment with
`
`many of these systems presents challenges in “optimizing efficiency” and
`
`
`
` 18
`
`VMware, Inc. Exhibit 1006 Page 21
`
`
`
`
`
`“avoid[ing] redundancies and/or under-utilized hardware.” (’459 patent at 1:47-52.)
`
`For example, a sub-optimal environment may include “additional hardware” with
`
`“separate maintenance considerations” that require costly “incidental attention[.]”
`
`(’459 patent at 1:63-67.) Too many systems means that “[h]eat production and
`
`power consumption can also be a concern.” (’459 patent at 1:67-2:1.)
`
`41. The ’459 patent discloses analytical techniques that purportedly address
`
`these problems by calculating “roadmaps” for efficiently reducing the number of
`
`systems. (’459 patent at 5:16-24.) The roadmaps identify “source” systems, “from
`
`which applications and/or data are to be moved,” and “target” systems, “to which
`
`applications and/or data are to be moved.” (’459 patent at 5:57-60, 6:46-51.) Thus,
`
`for example, “an underutilized environment having two systems [] can be
`
`consolidated to a target system (one of the syst