METHOD AND SYSTEM FOR DETERMINING COMPATIBILITY OF COMPUTER SYSTEMS
[0001] This application claims priority from U.S. provisional patent application no. 60/745,322 filed April 21, 2006.
FIELD OF THE INVENTION
[0002] The present invention relates to information technology infrastructures and has particular utility in determining compatibility of computer systems in such infrastructures.
DESCRIPTION OF THE PRIOR ART
[0003] As organizations have become more reliant on computers for performing day to day activities, so too has the reliance on networks and information technology (IT) infrastructures increased. It is well known that large organizations having offices and other facilities in different geographical locations utilize centralized computing systems connected locally over local area networks (LAN) and across the geographical areas through wide-area networks (WAN).
[0004] As these organizations grow, the amount of data to be processed and handled by the centralized computing centers also grows. As a result, the IT infrastructures used by many organizations have moved away from a reliance on centralized computing power and towards more robust and efficient distributed systems. Distributed systems are decentralized computing systems that use more than one computer operating in parallel to handle large amounts of data. Concepts surrounding distributed systems are well known in the art and a complete discussion can be found in "Distributed Systems: Principles and Paradigms"; Tanenbaum, Andrew S.; Prentice Hall; Amsterdam, Netherlands; 2002.
[0005] While the benefits of a distributed approach are numerous and well understood, there have arisen significant practical challenges in managing such systems for optimizing efficiency and avoiding redundancies and/or under-utilized hardware. In particular, one challenge occurs due to the sprawl that can occur over time as applications and servers proliferate. Decentralized control and decision making around capacity, the provisioning of new applications and hardware, and the perception that the cost of adding server hardware is generally inexpensive, have created environments with far more processing capacity than is required by the organization.
21562813.1
- 1 -
VMware, Inc. Exhibit 1010 Page 1
[0006] When cost is considered on a server-by-server basis, the additional cost of having underutilized servers is often not deemed to be troubling. However, when multiple servers in a large computing environment are underutilized, having too many servers can become a burden. Moreover, the additional hardware requires separate maintenance considerations and separate upgrades, and requires incidental attention that should instead be optimized to be more cost effective for the organization. Even considering only the cost of having redundant licenses, removing even a modest number of servers from a large computing environment can save a significant amount of cost on a yearly basis.
[0007] As a result, organizations have become increasingly concerned with such redundancies and how they can best achieve consolidation of capacity to reduce operating costs. The heterogeneous nature of distributed systems makes consolidation ever more difficult to achieve.
[0008] It is therefore an object of the following to obviate or mitigate the above-described disadvantages.
SUMMARY OF THE INVENTION
[0009] In one aspect, a method for determining compatibilities for a plurality of computer systems is provided comprising generating a configuration compatibility score for each pair of the plurality of systems based on configuration data obtained for each of the plurality of systems; generating a workload compatibility score for each pair of the plurality of systems based on workload data obtained for each of the plurality of systems; and generating a co-habitation score for each pair of the plurality of systems using the respective configuration compatibility score and workload compatibility score, the co-habitation score indicating an overall compatibility for each system with respect to the others of the plurality of systems.
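The relationship among the three scores can be sketched as follows. This aspect does not fix the combining function, so the product of normalized scores used here is only an assumed placeholder for illustration; the actual computation is described later in the detailed description.

```python
# Sketch of the claimed per-pair scoring. The combination used here
# (product of the two 0-100 scores, rescaled back to 0-100) is an
# assumption for illustration only.

def cohabitation_scores(config_scores, workload_scores):
    """Combine per-pair configuration and workload compatibility scores
    (each 0-100) into a per-pair co-habitation score."""
    chi = {}
    for pair, sci in config_scores.items():
        wci = workload_scores[pair]
        chi[pair] = round(sci * wci / 100)  # assumed combining rule
    return chi

config_scores = {("server1", "server2"): 90, ("server2", "server1"): 75}
workload_scores = {("server1", "server2"): 80, ("server2", "server1"): 60}
print(cohabitation_scores(config_scores, workload_scores))
# {('server1', 'server2'): 72, ('server2', 'server1'): 45}
```

A multiplicative combination has the property noted later in the description: a pair with high configuration compatibility but low workload compatibility still ends up with a low overall score.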
[0010] In another aspect, a computer program is provided for determining compatibilities for a plurality of computer systems. The program comprises an audit engine for obtaining information pertaining to the compatibility of the plurality of computer systems; an analysis engine for generating a compatibility score for each pair of the plurality of systems based on the information that is specific to respective pairs; and a client for displaying the compatibility score on an interface.
[0011] In yet another aspect, a method for determining configuration compatibilities for a plurality of computer systems is provided comprising obtaining configuration data for each of the plurality of computer systems; assigning a weight to one or more parameter in the configuration data indicating the importance of the parameter to the compatibility of the plurality of systems; generating a rule set comprising one or more of the parameters; and computing a configuration score for each pair of the plurality of systems according to the weights in the rule set.
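A weighted rule set of this kind can be sketched as follows. The penalty-subtraction scheme (each mismatched parameter reduces a perfect score of 100 by its weight) and the example parameters are assumptions for illustration; the patent's actual rule evaluation is described in the detailed description.

```python
# Illustrative sketch of scoring a system pair against a weighted rule set.
# The subtract-weight-on-mismatch rule is an assumed placeholder.

def configuration_score(source, target, rule_set):
    """rule_set maps a parameter name to the penalty weight applied when
    the two systems' values for that parameter differ."""
    score = 100
    for param, weight in rule_set.items():
        if source.get(param) != target.get(param):
            score -= weight
    return max(score, 0)

# Hypothetical parameters and weights for two servers.
rule_set = {"os_version": 40, "patch_level": 20, "total_memory": 10}
s1 = {"os_version": "5.8", "patch_level": "p3", "total_memory": "4GB"}
s2 = {"os_version": "5.8", "patch_level": "p2", "total_memory": "4GB"}
print(configuration_score(s1, s2, rule_set))  # 80
```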
[0012] In yet another aspect, a method for determining workload compatibilities for a plurality of computer systems is provided comprising obtaining workload data for each of the plurality of systems; computing a stacked workload value for each pair of the plurality of systems at one or more time instance according to the workload data; and computing a workload score for each pair of the plurality of systems using the stacked workload values.
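The stacked workload value is the combined load of the pair at each sampled time instance. The score mapping below (remaining headroom against an assumed usage limit) is an illustrative placeholder; the actual workload scoring algorithm is described later.

```python
# Minimal sketch of workload stacking. The headroom-based score and the
# default usage limit are assumptions for illustration.

def workload_score(source_samples, target_samples, usage_limit=100.0):
    """Samples are utilization values (e.g. %CPU) taken at the same time
    instances on the source and target systems."""
    stacked = [s + t for s, t in zip(source_samples, target_samples)]
    worst = max(stacked)
    if worst >= usage_limit:
        return 0  # stacked workload would exceed the target's limit
    # Assumed mapping: more remaining headroom -> higher score.
    return round(100 * (usage_limit - worst) / usage_limit)

print(workload_score([20, 35, 30], [25, 40, 30]))  # 25
```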
[0013] In yet another aspect, a graphical interface for displaying compatibility scores for a plurality of computer systems is provided comprising a matrix of cells, each cell corresponding to a pair of the plurality of computer systems, each row of the matrix indicating one of the plurality of computer systems and each column of the matrix indicating one of the plurality of computer systems, each cell displaying a compatibility score indicating the compatibility of the respective pair of the plurality of systems indicated in the corresponding row and column, and computed according to predefined criteria.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] An embodiment of the invention will now be described by way of example only with reference to the appended drawings wherein:
[0015] Figure 1a is a schematic representation of a system for evaluating computer systems.
[0016] Figure 1b is a schematic representation of a network of systems analyzed by a compatibility analysis program.
[0017] Figure 2 is a schematic block diagram of an underlying architecture for implementing the analysis program of Figure 1b.
[0018] Figure 3 is a flow diagram illustrating a system consolidation analysis.
[0019] Figure 4 is a flow diagram illustrating strategies for a server consolidation analysis.
[0020] Figure 5 is a table illustrating data enablement for system consolidation and virtualization.
[0021] Figure 6 is a system compatibility index (SCI) matrix.
[0022] Figure 7 is a workload compatibility index (WCI) matrix.
[0023] Figure 8 is a co-habitation index (CHI) matrix for the SCI and WCI matrices of Figures 6 and 7.
[0024] Figure 9 is a graphical display showing server consolidation information.
[0025] Figure 10 is an audit request template.
[0026] Figure 11 is a detailed configuration report containing audited configuration data.
[0027] Figure 12 is a table containing workload data.
[0028] Figure 13 is a table containing a rule set used in generating an SCI matrix.
[0029] Figure 14 is a screenshot of a program for generating compatibility reports.
[0030] Figure 15 is an SCI matrix for an example environment having four server systems.
[0031] Figure 16 is a table containing a summary of differences between a pair of systems in the environment.
[0032] Figure 17 is a table containing details of the differences listed in Figure 16.
[0033] Figure 18 is a WCI matrix for the environment.
[0034] Figure 19 is a workload compatibility report for a pair of systems being analyzed.
[0035] Figure 20 is a CHI matrix for the environment.
[0036] Figure 21 is a flowchart illustrating a system compatibility analysis procedure.
[0037] Figure 22 is a flowchart illustrating a configuration data extraction procedure.
[0038] Figure 23 is a flowchart illustrating a configuration compatibility analysis procedure.
[0039] Figure 24 is a flowchart illustrating a rule set application procedure.
[0040] Figure 25 is a flowchart illustrating a workload data extraction procedure.
[0041] Figure 26 is a flowchart illustrating a workload compatibility analysis procedure.
[0042] Figure 27 is a flowchart illustrating a workload multi-stacking procedure.
[0043] Figure 28 is a flowchart illustrating another workload multi-stacking procedure using a hypothetical target system.
[0044] Figure 29 is a hierarchical block diagram illustrating metadata, rules and rule sets.
[0045] Figure 30 is a schematic flow diagram showing the application of a rule set in analyzing a pair of computer systems.
[0046] Figure 31 illustrates a general rule definition.
[0047] Figure 32 shows an example rule set.
DETAILED DESCRIPTION OF THE INVENTION
[0048] Referring therefore to Figure 1a, an analysis program 10 is in communication with a set of computer systems 28 (3 are shown in Figure 1a as an example). The analysis program 10, using a computer station 16, evaluates the computer systems 28 and provides a report showing how the systems differ. The computer systems 28 may be physical systems as well as virtual systems or models. A distinct data set is preferably obtained for each system 28.
[0049] Each data set comprises one or more parameter that relates to characteristics or features of the respective system 28. The parameters can be evaluated by scrutinizing program definitions, properties, objects, instances and any other representation or manifestation of a component, feature or characteristic of the system 28. In general, a parameter is anything related to the system 28 that can be evaluated, quantified, measured, compared etc.
Exemplary Environment
[0050] Referring to Figures 1b and 2, a compatibility analysis program, generally referred to by numeral 10 for clarity, is deployed to gather data from the exemplary architecture shown for a computing environment 12 (shown in Figure 1b). The analysis program 10 analyzes the environment 12 to determine whether or not compatibilities exist within the environment 12 for consolidating systems such as servers, desktop computers, routers, storage devices etc. The analysis program 10 is preferably part of a client-server application that is accessible via a web browser client 34 running on, e.g. a computer station 16. The analysis program 10 operates in the environment 12 to collect, analyze and report on audited data for not only consolidation but other functions such as inventory analysis, change and compliance analysis etc. In the following examples, the systems are exemplified as servers.
[0051] As shown in Figure 1b, the example environment 12 generally comprises a master server 14 that controls the operations of a series of slave servers 28 arranged in a distributed system. In the example shown, the master server 14 audits a local network 18 having a series of servers 28, some having local agents and others being agentless. The master server also audits a pair of remote networks 20, 22 having firewalls 24. The remote network 20 includes a proxy for avoiding the need to open a port range. The remote network 22 comprises a collector 30 for concentrating traffic through a single point allowing an audit to be performed through the firewall 24, and also comprises a proxy 32. The proxy 32 is used to convert between Windows™ protocols and UNIX™/Linux™ servers, and can also concentrate traffic. The proxy 32 may be required for auditing an agentless Windows™-based server if the master server 14 is running another operating system such as UNIX™ or Linux™.
[0052] The master server 14 is capable of connecting to the slave servers 28 for performing audits of configuration settings, workload etc. and thus can communicate over several applicable protocols, e.g. simple network management protocol (SNMP). As shown, a computer station 16 running a web browser and connected to a web server (not shown) on the master server 14, e.g. over HTTP, can be used to operate the analysis program 10 in the environment 12. The analysis program 10 may reside on the master server 14 or may run on a remote server (not shown). The analysis program 10 can gather data as it is available or retrieve a block of data from the master server 14 either via electronic means or other physical means. As such, the analysis program 10 can operate in the environment 12 or independently (and remote thereto) so long as it can obtain audited data from the environment 12. The computer station 16 enables the analysis program 10 to display reports and gather user input for executing an audit or analysis.
Analysis Program
[0053] An example block diagram of the analysis program 10 is shown in Figure 2. The flow of data through the program 10 begins as an audit engine 46 pulls audit data from audited environments 50. The data works its way up to the web client 34, which displays an output on a user interface, e.g. on computer system 16. The program 10 is preferably a client-server application that is accessed via the web client or interface.
[0054] An audit engine 46 communicates over one or more connections, referred to generally by numeral 48, with audited environments 50, which are the actual systems 28, e.g. server machines, that are being analysed. The audit engine 46 typically uses data acquisition (DAQ) adapters to communicate with the end points (e.g. servers 28) or software systems that manage the end points (e.g. management frameworks 52 and/or agent instrumentation 54). The program 10 can utilize management framework adapters 52 in the audited environments 50 for communicating with ESM frameworks and agent instrumentation and for communicating with other agents such as a third party or agents belonging to the program 10. The audit engine 46 can also communicate directly with candidate and/or target systems 28 using agentless adapters (central arrow in Figure 2) to gather the necessary audit information.
[0055] An auditing data repository 42 is used to store audit information and previous reports. The audit engine 46, using a set of audit templates 45, controls the acquisition of data that is used by the other software modules to eventually generate a set of reports to display on the interface 34. A context engine 40 utilizes metadata 39 stored by the program 10, which indicates the nature of the data, to filter out extraneous information.
[0056] An analysis engine 41 evaluates data compared in a differential engine 38 based on a set of rules 43. The analysis engine 41 performs the compatibility and, in this example, the consolidation analysis to determine if the environment 12 can operate with fewer systems 28.
[0057] The program 10 has a report generation tool 36 that utilizes a set of report templates 35 for generating custom reports for a particular environment 12. The report generation tool 36 utilizes the information generated by the analysis engine 41. Typically, the program 10 includes a web client 34 for communicating with a web interface (e.g. on computer system 16). The web interface allows a user to enter settings, initiate an audit or analysis, display reports etc.
System Compatibility Analysis Visualization
[0058] In the following examples, a source system refers to a system from which applications and/or data are to be moved, and a target server or system is a system to which such applications and/or data are to be moved. For example, an underutilized environment having two systems 28 can be consolidated to a target system (one of the systems) by moving applications and/or data from the source system (the other of the systems) to the target system.
[0059] As best seen in Figure 3, the systems 28 are audited in stage A to generate configuration reports and workload patterns, which are in turn used to create statistical scorecards in stages B, C and D.

[0060] The first stage, stage A, involves data collection, which includes the collection of detailed configuration and workload information from the environment 12. Stage A also includes data extraction to obtain the relevant configuration and workload data from the per-system data gathered in an audit according to compatibility rule sets and workload data types to obtain filtered per-system configuration and workload data sets. In addition, per-system benchmark data and usage limits are considered in Stage A for performing the workload analysis (Stage C).
[0061] Stages B and C are then performed using the data that has been collected. Stage B involves performing the system compatibility analysis to generate a system compatibility index (SCI) matrix 60 and Stage C involves performing the workload compatibility analysis to generate a workload compatibility index (WCI) matrix 70. The results from stages B and C are then used in stage D to perform an overall compatibility analysis which involves the generation of a co-habitation index (CHI) matrix 80 and its visual mapping of overall system compatibility. The analysis results may then be used to identify the best server consolidation candidates. It will be appreciated that the principles described herein support many strategies and consolidation is only one example.
Objectives
[0062] From an analysis perspective, as shown in Figure 4, a cost-savings objective can be evaluated on the basis of certain strategies such as, but not limited to, database consolidation strategies, application stacking strategies, operating system (OS) level stacking strategies and virtualization strategies, which can also be visualized graphically using a comprehensive consolidation roadmap 90 incorporating the SCI 60, WCI 70 and CHI 80 matrices, as will be explained in greater detail below.
System Configuration Compatibility Visualization
[0063] A configuration analysis in Stage B of N systems 18 computes NxN system compatibility scores by individually considering each system 18 as a consolidation source and as a target. Preferably, the scores range from 0 to 100, with higher scores indicating greater system compatibility. The analysis will thus also consider the trivial cases where systems are consolidated with themselves and would be given a maximum score, 100. For display and reporting purposes, the scores are preferably arranged in an NxN matrix form.
[0064] An example of an SCI matrix 60 is shown in Figure 6. The SCI matrix 60 provides an organized graphical mapping of system compatibility for each source/target system pair on the basis of configuration data. The SCI matrix 60 shown in Figure 6 is structured having each server 28 in the environment 12 listed both down the leftmost column 64 and along the uppermost row 62. Each row represents a consolidation source system, and each column represents the possible consolidation target. Each cell contains the score corresponding to the case where the row system is consolidated onto the column (target) system.
[0065] The preferred output shown in Figure 6 arranges the servers 28 in the matrix such that 100% compatibility exists along the diagonal 63, where each server is naturally 100% compatible with itself. The SCI matrix 60 is preferably displayed such that each cell 66 includes a numerical score and a shade of a certain colour. As noted above, the higher the score (from zero (0) to one hundred (100)), the higher the compatibility. The scores are pre-classified into predefined ranges that indicate the level of compatibility between two systems 18. Each range maps to a corresponding colour or shade for display in the matrix 60. For example, the following ranges and colour codes can be used: score = 100, 100% compatible, dark green; score = 75-99, highly compatible, green; score = 50-74, somewhat compatible, yellow; score = 25-49, low compatibility, orange; and score = 0-24, incompatible, red.
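The example ranges above amount to a small lookup from score to label and colour. The table below mirrors the ranges given in the text; only the function wrapper is an illustrative addition.

```python
# The score ranges and colour codes from the example above, expressed as
# a lookup table of (low, high, label, colour) entries.

RANGES = [
    (100, 100, "100% compatible", "dark green"),
    (75, 99, "highly compatible", "green"),
    (50, 74, "somewhat compatible", "yellow"),
    (25, 49, "low compatibility", "orange"),
    (0, 24, "incompatible", "red"),
]

def classify(score):
    for low, high, label, colour in RANGES:
        if low <= score <= high:
            return label, colour
    raise ValueError("score must be between 0 and 100")

print(classify(82))  # ('highly compatible', 'green')
```

Adjusting the slider described in the next paragraph would amount to shifting the boundaries in such a table.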
[0066] The above ranges are only one example. Preferably, the ranges can be adjusted to reflect more conservative and less conservative views on the compatibility results. The ranges can be adjusted using a graphical tool similar to a contrast slider used in graphics programs.
Adjustment of the slider would correspondingly adjust the ranges and in turn the colours. This allows the results to be tailored to a specific situation.
[0067] It is therefore seen that the graphical output of the SCI matrix 60 provides an intuitive mapping between the source/target pairs in the environment 12 to assist in visualizing where compatibilities exist and do not exist. In Figure 6, it can be seen that the server pair identified with an asterisk (*) and by the encircled cell indicates complete compatibility between the two servers for the particular strategy being observed, e.g. based on a chosen rule set. It can also be seen that the server pair identified with an X and the encircled cell at the corresponding row/column crossing comprises a particularly poor score and thus, for the strategy being observed, the servers 28 in that pair are not very compatible.
[0068] The scores are calculated based on configuration data that is acquired through a configuration audit performed by the analysis program 10. The data is acquired using tools such as the table 100 shown in Figure 5 that illustrates the various types of configuration settings that are of interest and from which sources they can be obtained. Figure 5 also provides a mapping to where the sample workload data can be obtained. In Figure 5, a number of strategies 104 and sub-strategies 105 map to various configuration and workload sources, collectively referred to by numeral 102. As discussed with reference to Figure 4, the strategies 104 may relate to database consolidation, OS-level stacking, application server stacking and virtualization. Each strategy 104 includes a set of sub-strategies 105, which in turn map to specific rule sets 43. The rule sets, which will be explained in greater detail below, determine whether or not a particular setting or system criterion/criteria have been met and thus how different one server 28 is from the next.
[0069] The table 100 lists the supported consolidation strategies and the relevant data sources that should be audited to perform the corresponding consolidation analysis. In general, collecting more basis data improves the analysis results. The table 100 enables the analysis program 10 to locate the settings and information of interest based on the strategy 104 or sub-strategy 105 (and in turn the rule set) that is to be used to evaluate the systems 28 in the environment 12. The results can be used to determine source/target candidates for analysing the environment for the purpose of, e.g. consolidation, compliance measures etc.
System Workload Compatibility Visualization
[0070] An example WCI matrix 70 is shown in Figure 7. The WCI matrix 70 is the analog of the SCI matrix 60 for workload analysis. The WCI matrix 70 includes a similar graphical display that indicates a score and a colour or shading for each cell to provide an intuitive mapping between candidate source/target server pairs. The workload data is obtained using tools such as the table 100 shown in Figure 5 and corresponds to a particular workload factor, e.g. CPU utilization, network I/O, disk I/O, etc. A high workload score indicates that the candidate server pair being considered has a high compatibility for accommodating the workload on the target system. The specific algorithms used in determining the score are discussed in greater detail below. The servers are listed in the upper row 72 and leftmost column 74 and each cell 76 represents the compatibility of its corresponding server pair in the matrix. The encircled cell identified by the asterisk (*) in Figure 7 indicates a high workload compatibility for consolidating to the target server, and the one marked by the X indicates an unlikely candidate pair for workload consolidation, compliance etc.
System Co-Habitation Compatibility Visualization
[0071] An example CHI matrix 80 is shown in Figure 8. The CHI matrix 80 comprises a similar arrangement as the SCI and WCI matrices 60, 70, which lists the servers in the uppermost row 82 and leftmost column 84 to provide 100% compatibility along the diagonal. Preferably the same scoring and shading convention is used as shown in Figure 8. The CHI matrix 80 provides a visual display of scoring for candidate system pairs that considers both the configuration compatibility from the SCI matrix 60 and the workload compatibility from the WCI matrix 70.
[0072] The score provided in each cell 86 indicates the co-habitation compatibility for consolidating servers. It should be noted that in some cases two servers 28 can have a high configuration compatibility but a low workload compatibility and thus end up with a reduced or relatively low co-habitation score. It is therefore seen that the CHI 80 provides a comprehensive score that considers not only the compatibility of systems 28 at the setting level but also in their utilization. By displaying the SCI matrix 60, WCI matrix 70 and CHI matrix 80 in the consolidation roadmap 90, a complete picture of the entire system can be ascertained in an organized manner. The matrices 60, 70 and 80 provide a visual representation of the compatibilities and provide an intuitive way to evaluate the likelihood that systems can be consolidated, and have associated tools (as explained below) that can be used to analyse compliance and remediation measures to modify systems 28 so that they can become more compatible with other systems 28 in the environment 12. It can therefore be seen that a significant amount of quantitative data can be analysed in a convenient manner using the graphical matrices 60, 70, 80 and associated reports and graphs (described below).
[0073] For example, for a server pair that is not compatible only for the reason that certain critical software upgrades have not been implemented, the information can be uncovered through analysis tools used with the SCI matrix 60, and then investigated, so that upgrades can be implemented, referred to herein as remediation. Remediation can be determined by modeling the cost of implementing upgrades, fixes etc. that are needed in the rule sets. If remediation is then implemented, a subsequent analysis may then show the same server pair to be highly compatible and thus suitable candidates for consolidation.
Sorting Examples
[0074] The matrices 60, 70 and 80 can be sorted in various ways to convey different information. For example, sorting algorithms such as a simple row sort, a simple column sort and a sorting by group can be used.

[0075] A simple row sort involves computing the total scores for each source system (by row), and subsequently sorting the rows by ascending total scores. In this arrangement, the highest total scores are indicative of source systems that are the best candidates to consolidate onto other systems.
[0076] A simple column sort involves computing the total scores for each target system (by column) and subsequently sorting the columns by ascending total score. In this arrangement, the highest total scores are indicative of the best consolidation target systems.
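The two simple sorts can be sketched as follows, assuming the matrix is held as a dictionary of score rows keyed by system name (an illustrative representation, not the program's actual data structure):

```python
# Sketch of the simple row and column sorts described above: total the
# scores by row (source) or by column (target) and sort ascending.

def row_sort(matrix):
    """Sources in ascending order of total row score; systems at the end
    of the list are the best candidates to consolidate onto others."""
    return sorted(matrix, key=lambda src: sum(matrix[src].values()))

def column_sort(matrix):
    """Targets in ascending order of total column score; systems at the
    end of the list are the best consolidation targets."""
    systems = list(matrix)
    return sorted(systems, key=lambda tgt: sum(matrix[src][tgt] for src in systems))

# Hypothetical 3x3 SCI matrix.
sci = {
    "s1": {"s1": 100, "s2": 40, "s3": 90},
    "s2": {"s1": 50, "s2": 100, "s3": 95},
    "s3": {"s1": 20, "s2": 30, "s3": 100},
}
print(row_sort(sci))     # ['s3', 's1', 's2']
print(column_sort(sci))  # ['s1', 's2', 's3']
```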
[0077] Sorting by group involves computing the difference between each system pair, and arranging the systems to minimize the total difference between each pair of adjacent systems in the matrix. The difference between a system pair can be computed by taking the square root of the sum of the squares of the differences of the pair's individual compatibility scores against each other system in the analysis. In general, the smaller the total difference between two systems, the more similar the two systems with respect to their compatibility with the other systems. The group sort promotes the visualization of the logical breakdown of an environment by producing clusters of compatible systems 18 around the matrix diagonal. These clusters are indicative of compatible regions in the environment 12.
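The pairwise difference described above is the Euclidean distance between the two systems' vectors of compatibility scores. A sketch of just that metric follows (the matrix representation is assumed for illustration, and the subsequent arrangement step that minimizes total adjacent difference is not shown):

```python
import math

# Difference between a system pair, per the group sort described above:
# the square root of the sum of squared differences between the two
# systems' compatibility scores against every other system.

def pair_difference(matrix, a, b):
    return math.sqrt(sum((matrix[a][k] - matrix[b][k]) ** 2 for k in matrix))

# Hypothetical SCI matrix: s1 and s2 have similar score profiles.
sci = {
    "s1": {"s1": 100, "s2": 80, "s3": 20},
    "s2": {"s1": 85, "s2": 100, "s3": 25},
    "s3": {"s1": 10, "s2": 15, "s3": 100},
}
# The small s1-s2 difference means a group sort would place them adjacent,
# forming a compatible cluster around the matrix diagonal.
print(pair_difference(sci, "s1", "s2") < pair_difference(sci, "s1", "s3"))  # True
```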
Consolidation Roadmap
[0078] The consolidation roadmap 90 shown in Figure 9 illustrates how the matrices 60, 70 and 80 can be used to provide a complete visualization. As shown in Figure 9, configuration deviation examples 92 can be generated based on the SCI matrix 60 and displayed in a chart or table to show where compatibilities are lacking. Similarly, workload stacking examples 94 can be generated based on the WCI 70 to show how a candidate server pair would operate when the respective workloads are stacked. Ultimately, a current server utilization versus capacity comparison 96 can also be illustrated to show combined server capacities and other information pulled from the CHI 80.
System Compatibility Analysis Overview
[0079] The following illustrates an example for generating the matrices 60, 70 and 80 discussed above. The analysis program 10 generally executes four primary stages as shown in Figures 3 and 21.
System Configuration Compatibility
[0080] In stage A, a configuration data extraction st