Franklin

(10) Patent No.:        US 7,822,841 B2
(45) Date of Patent:    Oct. 26, 2010
`
`(54) METHOD AND SYSTEM FOR HOSTING
`MULTIPLE, CUSTOMIZED COMPUTING
`CLUSTERS
`
`(75)
`
`Inventor: Jeffrey B. Franklin, Louisville, CO
`(US)
`
`(73) Assignee: Modern Grids, Inc., Louisville, CO
`(US)
`
( * ) Notice:    Subject to any disclaimer, the term of this
                 patent is extended or adjusted under 35
                 U.S.C. 154(b) by 364 days.
`
`(21) Appl. No.: 11/927,921
`
(22) Filed:      Oct. 30, 2007
`
`(65)
`
`Prior Publication Data
`
`US 2009/0113051 Al
`
`Apr. 30, 2009
`
(51) Int. Cl.
     G06F 15/16       (2006.01)
     G06F 15/173      (2006.01)
     G06F 15/177      (2006.01)
(52) U.S. Cl. ......................... 709/223; 709/224; 714/4; 705/6
(58) Field of Classification Search ........... 709/217-228; 714/4; 705/6
     See application file for complete search history.
`
`(56)
`
`References Cited
`
`U.S. PATENT DOCUMENTS
`
4,731,860 A      3/1988   Wahl
4,837,831 A      6/1989   Gillick et al.
5,079,765 A      1/1992   Nakamura
5,185,860 A      2/1993   Wu
5,224,205 A      6/1993   Dinkin et al.
5,371,852 A     12/1994   Attansio et al.
5,649,141 A      7/1997   Yamazaki
5,694,615 A     12/1997   Thapar et al.
5,774,650 A      6/1998   Chapman et al.
5,822,531 A     10/1998   Gorczyca et al.
5,890,007 A      3/1999   Zinguuzi
5,946,463 A      8/1999   Carr et al.
6,088,727 A      7/2000   Hosokawa et al.
6,363,495 B1     3/2002   MacKenzie et al.
6,427,209 B1     7/2002   Brezak, Jr. et al.
6,438,705 B1     8/2002   Chao et al.
6,748,429 B1     6/2004   Talluri et al.
`
`(Continued)
`
`OTHER PUBLICATIONS
`
Lee, DongWoo, et al., "visPerf: Monitoring Tool for Grid
Computing," In Proceedings of the International Conference on Com-
putational Science 2003 (LNCS vol. 2659/2003), 2003.
`
`(Continued)
`
Primary Examiner: Haresh N Patel
(74) Attorney, Agent, or Firm: Marsh Fischmann &
Breyfogle LLP; Kent A. Lembke
`
`(57)
`
`ABSTRACT
`
`A computer system for hosting computing clusters for clients.
`The system includes clusters each including a set of comput-
`ing resources and each implemented in custom or differing
`configurations. Each of the configurations provides a custom-
`ized computing environment for performing particular client
`tasks. The configurations may differ due to configuration of
`the processing nodes, the data storage, or the private cluster
`network or its connections. The system includes a monitoring
`system that monitors the clusters for operational problems on
`a cluster level and also on a per-node basis such as with
`monitors provided for each node. The system controls client
access to the clusters via a public communications network by only
`allowing clients to access their assigned cluster or the cluster
`configured per their specifications and performing their com-
`puting task. Gateway mechanisms isolate each cluster such
`that communications within a cluster or on a private cluster
`communications network are maintained separate.
`
`9 Claims, 7 Drawing Sheets
`
[Representative drawing (FIG. 2): hosted cluster system 200 with client systems 208, public network (Internet) 204, firewall/authentication system 210, monitoring system 220, private company network 230, gateways 240, 241, 242, private cluster networks, and clusters 250, 251, 252.]
`
`
`
`
`U.S. PATENT DOCUMENTS
`
    6,779,039 B1        8/2004   Bommareddy et al.
    6,823,452 B1       11/2004   Doyle et al.
    6,826,568 B2       11/2004   Bernstein et al.
    6,854,069 B2        2/2005   Kampe et al.
    6,990,602 B1        1/2006   Skinner et al.
    6,996,502 B2 *      2/2006   De La Cruz et al. ......... 702/188
    7,035,858 B2        4/2006   Dinker et al.
    7,185,076 B1        2/2007   Novaes et al.
    7,188,171 B2        3/2007   Srinivasan et al.
    7,203,864 B2        4/2007   Goin et al.
    7,243,368 B2        7/2007   Ford
    7,246,256 B2 *      7/2007   De La Cruz et al. ......... 714/4
    7,269,762 B2        9/2007   Heckmann et al.
    7,634,683 B2 *     12/2009   De La Cruz et al. ......... 714/4
 2005/0060391 A1        3/2005   Kaminsky et al.
 2005/0159927 A1 *      7/2005   Cruz et al. ............... 702/188
 2005/0172161 A1 *      8/2005   Cruz et al. ............... 714/4
 2006/0080323 A1        4/2006   Wong et al.
 2006/0190602 A1 *      8/2006   Canali et al. ............. 709/226
 2006/0212332 A1 *      9/2006   Jackson ................... 705/8
 2006/0212334 A1 *      9/2006   Jackson ................... 705/8
 2006/0230149 A1 *     10/2006   Jackson ................... 709/226
 2006/0248371 A1       11/2006   Chen et al.
 2007/0156677 A1 *      7/2007   Szabo ..................... 707/5
 2007/0156813 A1 *      7/2007   Galvez et al. ............. 709/204
 2007/0220152 A1 *      9/2007   Jackson ................... 709/226
 2007/0245167 A1 *     10/2007   De La Cruz et al. ......... 714/4
 2008/0216081 A1 *      9/2008   Jackson ................... 718/104
 2010/0023949 A1 *      1/2010   Jackson ................... 718/104
`
`OTHER PUBLICATIONS
`
`Peng, Liang, et al., "Performance Evaluation in Computational Grid
`Environments" Proceedings of the Seventh International Conference
`on High Performance Computing and Grid in Asia Pacific Region
`(HPCAsia '04) 2003 (LNCS vol. 2659/2003).
`International Search Report May 25, 2009, PCT/US2008/080876.
`
`* cited by examiner
`
`
`
`
[FIG. 1 (Sheet 1 of 7): Prior art multi-cluster system with primary cluster nodes 100, secondary cluster nodes 101, primary storage systems 110, and secondary storage systems 111.]
`
`
`
[FIG. 2 (Sheet 2 of 7): Hosted cluster system 200. Client systems 208 connect through the public network (Internet) 204 and firewall/authentication system 210 to the private company network 230, which carries the monitoring system 220 and gateways 240, 241, 242 leading to private cluster networks 300, 301, 302 and clusters 250, 251, 252.]
`
`
`
[FIG. 3A (Sheet 3 of 7): Customized cluster 350 linked to the private company network 230 through gateway 240, with shared storage 320 and nodes 310, 311, 312 on a private network 300 for the cluster.]
`
`
`
[FIG. 3B (Sheet 4 of 7): Customized cluster 351 linked to the private company network 230 through gateway 241, with nodes 310, 311, 312 on a private cluster communication network 301 and storage 320 on a separate private cluster storage network 340.]
`
`
`
[FIG. 3C (Sheet 5 of 7): Customized cluster 352 linked to the private company network 230 through gateway 242, with a central monitoring system 220 and a dedicated monitoring system 330, nodes 310, 311, 312 on a private cluster communication network 302, and storage 320 on a private cluster storage network 340.]
`
`
`
[FIG. 4 (Sheet 6 of 7): Hosted cluster system 400 in which dedicated firewall/authentication systems 410, 411, 412 sit between the public network (Internet) and clusters 250, 251, 252 on private cluster networks 300, 301, 302, while gateways 240, 241, 242 connect the clusters to the private company network 230 and its monitoring system 220.]
`
`
`
[FIG. 5 (Sheet 7 of 7): Monitoring process 500. A per-node monitoring system checks each node for hardware, software, or other problems (502), and a main monitoring system checks for problems with the cluster and cluster nodes, such as network connectivity and verification of the per-node monitoring (505). If a problem is found (510, 515), the central monitoring site is notified (520); the central monitoring system acknowledges receipt of the problem and the per-node and main monitoring systems resume monitoring (530), and the central monitoring system notifies staff, who fix and clear the problem (540).]
`
`
`
`1
`METHOD AND SYSTEM FOR HOSTING
`MULTIPLE, CUSTOMIZED COMPUTING
`CLUSTERS
`
`BACKGROUND OF THE INVENTION
`
`1. Field of the Invention
`The present invention relates, in general, to distributed
`computing and clustered computing environments, and, more
`particularly, to computer software, hardware, and computer-
`based methods for hosting a set of computer clusters that are
`uniquely configured or customized to suit a number of remote
`customers or clients.
`2. Relevant Background
`A growing trend in the field of distributed computing is to
`use two or more computing resources to perform computing
`tasks. These grouped resources are often labeled clustered
`computing environments or computing clusters or simply
`"clusters." A cluster may include a computer or processors,
`network or communication links for transferring data among
`the grouped resources, data storage, and other devices to
`perform one or more assigned computing processes or tasks.
`The clusters may be configured for high availability, for
`higher performance, or to suit other functional parameters. In
`a typical arrangement, a portion of a company's data center
`may be arranged and configured to operate as a cluster to
`perform one task or support the needs of a division or portion
of the company. While a company may benefit from use of a
cluster periodically or on an ongoing basis, there are a number of
`reasons why it is often undesirable for a company to own and
`maintain a cluster.
`As one example, High Performance Computing (HPC)
clusters are difficult to set up, configure, and manage. An HPC
`cluster also requires numerous resources for ongoing main-
`tenance that increases the cost and manpower associated with
`cluster ownership. Despite these issues, a company may
`require or at least demand HPC clusters (or other cluster
`types) to solve large problems that would take an inordinate
`amount of time to solve with a single computer. The need for
`HPC and other cluster types is in part due to the fact that
`processor speeds have stagnated over the past few years. As a
`result, many companies and other organizations now turn to
`HPC clusters because their problems cannot be solved more
`rapidly by simply purchasing a faster processor. These com-
`puter users are placed in the difficult position of weighing the
`benefits of HPC clusters against the resources consumed by
`owning such clusters. Decision makers often solve this
`dilemma by not purchasing clusters, and clusters have
`remained out of reach of some clients as the resource issues
`appear insurmountable.
`When utilized, HPC systems allow a set of computers to
`work together to solve a single problem. The large problem is
`broken down into smaller independent tasks that are assigned
`to individual computers in the cluster allowing the large prob-
`lem to be solved faster. Assigning the independent tasks to the
computers is often the responsibility of a single node in the
`cluster designated the master node. The responsibilities of the
`master node include assigning tasks to nodes, keeping track
`of which nodes are working on which tasks, and consolidat-
`ing the results from the individual nodes. The master node is
`also responsible for determining if a node fails and assigning
`the task of the failed node to another node to ensure that node
`failures are handled transparently. Communication between
`nodes is accomplished through a message passing mecha-
`nism implemented by every member of the cluster. Message
`passing allows the individual computers to share information
`about their status on solving their piece of the problem and
`
`return results to the master node. Currently, those who deter-
`mine a cluster is worth the drain on resources purchase a
`cluster, host the cluster, and manage it on their premises or on
`site.
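To make the master-node responsibilities above concrete, the following is a minimal sketch in Python, assuming hypothetical MasterNode and WorkerNode classes (real HPC clusters typically use MPI or a batch scheduler rather than this toy loop). It shows a master breaking a problem into independent tasks, assigning them to worker nodes, reassigning work from a failed node, and consolidating the results.

```python
# Minimal sketch of the master-node responsibilities described above (illustrative
# only; real HPC clusters typically use MPI or a batch scheduler for this).
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class WorkerNode:
    name: str
    alive: bool = True

    def run(self, task):
        # In a real cluster this executes remotely and reports back via message passing.
        return ("result", task)


@dataclass
class MasterNode:
    workers: List[WorkerNode]
    pending: List = field(default_factory=list)                 # tasks not yet assigned
    assigned: Dict[str, object] = field(default_factory=dict)   # worker name -> task
    results: List = field(default_factory=list)

    def solve(self, tasks):
        """Break the large problem into tasks, track assignments, and consolidate results."""
        self.pending = list(tasks)
        while self.pending or self.assigned:
            # Assign pending tasks to idle, healthy workers.
            for worker in self.workers:
                if worker.alive and worker.name not in self.assigned and self.pending:
                    self.assigned[worker.name] = self.pending.pop(0)
            # Collect results; reassign a failed node's task so failures are transparent.
            for worker in self.workers:
                task = self.assigned.pop(worker.name, None)
                if task is None:
                    continue
                if worker.alive:
                    self.results.append(worker.run(task))
                else:
                    self.pending.append(task)
        return self.results


if __name__ == "__main__":
    master = MasterNode(workers=[WorkerNode("node1"), WorkerNode("node2")])
    print(master.solve(tasks=["t1", "t2", "t3", "t4"]))
```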
`Unfortunately, while the number of tasks and computing
`situations that would benefit from HPC clusters continues to
`rapidly grow, HPC clusters are not being widely adopted. In
`part, this is because HPC clusters require the most computers
`of any cluster type and, thus, cause the most problems with
maintenance and management. Other types of clusters that
`have been more widely adopted include the "load balancing
`cluster" and the "high availability cluster," but resources are
`also an issue with these clusters. A load balancing cluster is a
`configuration in which a server sends small individual tasks to
a cluster of additional servers when it is overloaded. The high
`availability cluster is a configuration in which a first server
`watches a second server and if the second server fails, then the
`first server takes over the function of the second server.
`The multi-cluster subsumes all other classes of clusters
because it incorporates multiple clusters to perform tasks.
`The difficulties for managing clusters are amplified when
`considering multiple clusters because of their complexity. For
`example, if one HPC cluster consumes a set of resources, then
`multiple HPC clusters will, of course, consume a much larger
set of resources and be even more expensive to maintain. One
`method proposed for managing multiple high availability
`clusters is described in U.S. Pat. No. 6,438,705, but this
`method is specific only to the managing of high availability
`clusters. Further, the described method requires each cluster
to have a uniform design. Because it is limited to high avail-
`ability clusters, the owner would not have an option to incor-
`porate multiple cluster types, such as HPC or load-balancing
`clusters, within the managed multi-cluster. Additionally, the
`suggested method does not solve one of the fundamental
difficulties associated with cluster usage because it requires
`the cluster to be owned and operated by the user and to remain
`on the client's property or site. Other discussions of cluster
`management, such as those found in U.S. Pat. Nos. 6,748,429,
`5,371,852, and 5,946,463 generally describe a single cluster
configuration and do not relate to operating multi-clusters. In
`all of these cases, the burden of managing, monitoring, and
hosting the cluster remains with the user who owns the cluster
and must maintain it on their premises.
`Hence, there remains a need for systems and methods for
`providing clusters to users or "clients" such as companies and
`other organizations that provide the computational assets or
`power that the clients demand while not presenting an unac-
`ceptable burden on the clients' resources. Preferably, these
systems and methods would be effective in providing a cluster
`that is adapted to suit a particular need or computing task
`rather than forcing a one-size-fits-all solution upon a cluster
`user.
`
`45
`
`55
`
`SUMMARY OF THE INVENTION
`
`To address the above and other problems, the present
`invention provides methods and systems for hosting a plural-
`ity of clusters that are each configured for a particular task or
computing application presented by a user or client. In par-
`ticular, the present invention provides for configuration,
`access control, and monitoring of multiple customized clus-
`ters that are hosted for one or more remote clients. For
`example, system or cluster configuration data may be gener-
ated for a cluster based on input from a client or user regarding
`their computing needs and planned tasks and this configura-
`tion data may be used to configure a cluster particularly for
`
`
`
`
`that client. The customized cluster is then hosted at a central
`hosting facility and is made accessible to that client, such as
`via a public network such as the Internet.
`More particularly, a computer system or network is pro-
`vided for hosting computing clusters for clients or customers
`(such as businesses and organizations that desire a cluster but
`do not want to own, operate, and maintain one on their pre-
`mises). The system includes a first cluster including a set of
`computing resources such as processing nodes, data storage,
`and a private communications network that is arranged or
`implemented in a first configuration. The system also
`includes a second cluster having a set of computing resources
`in a second configuration, which differs from the first con-
`figuration (e.g., both may be HPC clusters but be configured
`to handle a different client-assigned or defined task). The first
`configuration provides a first computing environment for per-
`forming a first client task while the second configuration
`provides a second computing environment for performing a
`second client task (which typically will differ from the first
`client task). The first and second configurations may differ
`due to configuration of the processing nodes in the clusters,
`based on configuration of the data storage, based on the
`private communications network or its connections, or based
`on software modules provided on the nodes, or based on other
`hardware or software components and/or configurations.
`The system may further include a monitoring system that
`monitors the clusters for connectivity and availability or other
`operational problems on a cluster level and, typically, on a
`per-node basis (such as with monitors provided for each
`node) and issues alerts to operations and/or maintenance per-
`sonnel based on identified issues. The system also provides
`clients or client systems access to the clusters via a public
`communications network that is linked, such as via a firewall,
`to a private company network to which the clusters are linked,
`such as via a gateway mechanism. The system is adapted to
`control access of the clients to the clusters such that a client
`can only access particular ones of the clusters (e.g., the cluster
`that has been configured according to their specifications or
`computing parameters or to perform their computing tasks).
`For example, the firewall mechanism may act to determine
`which cluster a client is attempting to access and then to
`determine whether the requesting client has permission or
`authorization to access that cluster. The gateway mechanisms
`operate, in part, to isolate each cluster such that communica-
`tions within a cluster such as on the private cluster commu-
`nications network are separated (e.g., do not have to share
`bandwidth of a single system network).
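The two-level monitoring described above (per-node monitors plus a cluster-level monitor that issues alerts to operations or maintenance personnel) could be sketched roughly as follows. The class names, health checks, and alert callback are illustrative assumptions, not the patent's implementation.

```python
# Sketch of the two-level monitoring described above: per-node monitors plus a
# cluster-level monitor that raises alerts (class names and callbacks are assumptions).
from typing import Callable, List


class NodeMonitor:
    """Per-node monitor: checks hardware/software health of a single node."""

    def __init__(self, node_name: str, checks: List[Callable[[], bool]]):
        self.node_name = node_name
        self.checks = checks

    def problems(self) -> List[str]:
        return [f"{self.node_name}: check {i} failed"
                for i, check in enumerate(self.checks) if not check()]


class ClusterMonitor:
    """Cluster-level monitor: checks connectivity/availability and the per-node monitors."""

    def __init__(self, cluster_name: str, node_monitors: List[NodeMonitor],
                 alert: Callable[[str], None]):
        self.cluster_name = cluster_name
        self.node_monitors = node_monitors
        self.alert = alert  # e.g., notify operations/maintenance personnel

    def cluster_reachable(self) -> bool:
        # Placeholder for a real connectivity check (ping, service probe, etc.).
        return True

    def run_once(self) -> None:
        if not self.cluster_reachable():
            self.alert(f"[{self.cluster_name}] cluster-level problem: unreachable")
        for monitor in self.node_monitors:
            for problem in monitor.problems():
                self.alert(f"[{self.cluster_name}] node problem: {problem}")


if __name__ == "__main__":
    monitors = [NodeMonitor("node1", checks=[lambda: True]),
                NodeMonitor("node2", checks=[lambda: False])]
    ClusterMonitor("cluster1", monitors, alert=print).run_once()
```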
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
`FIG. 1 illustrates a multi-cluster system available prior to
`the invention;
`FIG. 2 is a functional block diagram illustrating a hosted
`cluster system of one embodiment of the invention;
`FIGS. 3A-3C illustrate three representative embodiments
`of clusters that are configured to provide customization to suit
`a particular task or computing application (e.g., to meet the
`particular needs of a requesting customer);
`FIG. 4 illustrates another embodiment of a hosted cluster
`system of the invention in which dedicated firewall and
`authentication mechanisms or systems are provided for each
`cluster; and
`FIG. 5 is a flow diagram representing a monitoring process
`implemented in a hosted cluster system in one embodiment of
`the invention for monitoring operation of multiple, custom-
`ized clusters.
`
`4
`DETAILED DESCRIPTION OF THE PREFERRED
`EMBODIMENTS
`
`5
`
`30
`
`The present invention is directed to methods and systems
`for hosting multiple clusters or clustered computing environ-
`ments such that each of the clusters is configured to match or
`address a particular client or user computing task or problem
`(e.g., in response to a request for a hosted cluster from a client
`that identifies their computing and associated requirements).
The cluster systems of the invention differ from prior clusters
`in part because they are physically provided at one or more
`locations that are remote from the processing user or client's
`facilities (i.e., the computing resources are not owned and
`operated by the user). The client may establish processing
parameters that are used to configure a cluster in the system in
`a manner that suits their needs and, then, access their hosted
`cluster from a remote location via a communications network
`such as the Internet or other network.
`The hosted cluster systems and hosting methods of the
invention are described herein in relation to three issues asso-
`ciated with hosting multiple customized clusters that were
`identified by the inventor. Particularly, the systems and meth-
`ods of the invention address issues associated with arranging
`the clusters in a consistent and useful manner and of control-
ling client access to the clusters. Additionally, the systems and
`methods address issues involved with monitoring the indi-
`vidual cluster components. Examples of solutions to each of
`these problems are described in the embodiments shown in
`FIGS. 2-5.
`It will be clear from the following description that the
`managed and hosted clusters of the various embodiments can
`be used to give a client control over the design and configu-
ration of a cluster while removing the impediments imposed
by traditional clusters, which consume the client's real estate
and require nearly constant maintenance. Additionally, the
`hosting options presented with the hosting methods and
`hosted cluster systems relieve the client of many burdens and
open up future potential avenues for cluster usage. Further-
`more, the hosted multi-clusters have the following additional
advantages. Providing a hosted cluster to a client does not
`lock the client into using any one vendor for cluster comput-
`ing parts because the cluster components can be from any
`vendor and can be modified and replaced as appropriate to
`support client needs. Hosted clusters allow for easily expand-
able clusters since each cluster is isolated or is maintained as
`a standalone unit in communication with a network for com-
`munications with a corresponding client and monitoring
`equipment and/or software modules. It provides for constant
`monitoring of the cluster because each cluster is hosted and
managed.
`Before the invention, the use of multiple cluster systems
`was known, but these multi-cluster systems were typically
`limited in ways that hindered their use and adoption. For
`example, prior multi-cluster computing systems were limited
to systems owned and operated by a single user (e.g., to being
`located upon the owner's facilities), limited to a single con-
`figuration such as all clusters being a particular configuration
`to support a similar processing task, limited to a particular
`type such as all being high availability, or otherwise limited in
their function and/or configuration. For example, one prior
`multi-cluster system having high availability clusters is
`described in U.S. Pat. No. 6,438,705 and is illustrated in FIG.
`1. In this diagram, several clusters are shown that each consist
`of a primary node 100 and a secondary cluster node 101
connected to a primary storage system 110 and secondary
`storage system 111. As discussed above, this cluster system
`design requires each cluster to have a uniform design with like
`
`
`
`
`hardware and software. The described cluster system limits or
`even prevents the ability to have multiple cluster types (such
as an HPC cluster and a load balancing or high availability
`cluster) within a single managed multi-cluster. In the patent
`description, the cluster system is also restricted to high avail-
`ability clusters and not applicable to other cluster types such
`as HPC or load balancing. Significantly, this system also does
`not solve the fundamental difficulties associated with prior
`cluster systems, i.e., the clients are required to host and man-
`age the clusters that are located on their site or in their facili-
`ties.
`In FIG. 2, one preferred embodiment of a hosted cluster
`system 200 is illustrated such as it may be provided at a
`hosting facility typically remote from users or clients (i.e.,
`from their accessing nodes or systems 208). The system 200
`has, or is connected to, a public network 204 (e.g., a wired
`and/or wireless digital communications network including
`the Internet, a LAN, a WAN, or the like), which in turn is
`connected to a firewall and authentication system 210. The
`authentication system 210 connects to the company network
`230, which has a monitoring system 220 for all the custom-
`ized clusters 250, 251, 252. The company network 230 also
`has gateways 240, 241, 242, such as routers, to each unique
`cluster 250, 251, 252. On the other side of each gateway 240,
`241, 242 is a private network 300, 301, 302 for the individual
`clusters 250, 251, 252.
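For readers who prefer a structural view, the topology of system 200 can be modeled with a few simple records. This is only an illustrative sketch using the reference numerals from the text; the data-class names themselves are assumptions, not part of the patent.

```python
# The FIG. 2 topology expressed as simple records (reference numerals from the text;
# the dataclass names themselves are illustrative assumptions).
from dataclasses import dataclass
from typing import List


@dataclass
class Cluster:
    reference: str            # e.g., "250"
    private_network: str      # e.g., "300"


@dataclass
class Gateway:
    reference: str            # e.g., "240"
    cluster: Cluster          # each gateway fronts exactly one cluster


@dataclass
class HostedClusterSystem:
    public_network: str       # public network (Internet) 204
    firewall_auth: str        # firewall and authentication system 210
    company_network: str      # private company network 230
    monitoring_system: str    # monitoring system 220
    gateways: List[Gateway]


system_200 = HostedClusterSystem(
    public_network="204",
    firewall_auth="210",
    company_network="230",
    monitoring_system="220",
    gateways=[Gateway("240", Cluster("250", "300")),
              Gateway("241", Cluster("251", "301")),
              Gateway("242", Cluster("252", "302"))],
)
```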
`The embodiment shown with system 200 provides efficient
`separation of the individual cluster network traffic to prevent
`one cluster from interfering with other clusters. The traffic
`separation is achieved through the gateway 240, 241, and/or
`242 located between each cluster 250, 251, and 252 and the
`company network 230. Each gateway 240, 241, 242 is con-
`figured with software and hardware to apply a standard set of
`rules to only permit traffic destined for its corresponding
`cluster to pass through from the company network 230 while
`keeping all cluster traffic internal to the cluster. With this
`cluster separation, the internal cluster configuration is
`abstracted from the primary company network 230 allowing
`the configuration of each cluster to be selected and main-
`tained independently from the other clusters on the network
`230. By keeping all clusters 250, 251, 252 connected to a
`common network 230 through the gateways 240, 241, 242, it
`is significantly easier to administer the many individual clus-
ters and it also gives the clusters 250, 251, 252 a common
`destination for any monitoring information (e.g., to monitor-
`ing system 220 via common network 230).
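The per-cluster filtering rule applied by each gateway could be modeled as in the following sketch. It is a simplified illustration, not actual router configuration: inbound traffic from the company network 230 is forwarded only if it is destined for the gateway's own cluster, and traffic internal to the cluster stays on the private cluster network (in practice, monitoring traffic toward the monitoring system 220 would also be explicitly permitted).

```python
# Simplified model of the per-cluster gateway rule (not actual router configuration):
# forward only company-network traffic addressed to this gateway's cluster, and keep
# cluster-internal traffic on the private cluster network.
from dataclasses import dataclass


@dataclass
class Packet:
    src_network: str    # e.g., "company-230" or "cluster-300"
    dst_network: str
    dst_cluster: str    # which cluster the packet is addressed to


class ClusterGateway:
    def __init__(self, cluster_id: str, cluster_network: str,
                 company_network: str = "company-230"):
        self.cluster_id = cluster_id
        self.cluster_network = cluster_network
        self.company_network = company_network

    def permit(self, packet: Packet) -> bool:
        # Inbound from the company network: only traffic destined for this cluster.
        if packet.src_network == self.company_network:
            return packet.dst_cluster == self.cluster_id
        # Cluster-internal traffic stays on the private cluster network.
        if packet.src_network == self.cluster_network:
            return packet.dst_network == self.cluster_network
        return False


gateway_240 = ClusterGateway(cluster_id="250", cluster_network="cluster-300")
assert gateway_240.permit(Packet("company-230", "cluster-300", dst_cluster="250"))
assert not gateway_240.permit(Packet("company-230", "cluster-301", dst_cluster="251"))
assert gateway_240.permit(Packet("cluster-300", "cluster-300", dst_cluster="250"))
```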
`Access control to the individual clusters 250, 251, 252 is
`governed by the firewall and authentication mechanism 210.
`This mechanism 210 may be implemented with several con-
`figurations to achieve the goal of ensuring that clients have
`access to their cluster, and only to their cluster. Each of these
`configurations performs two primary steps: (1) ensuring that
`an incoming connection goes to the correct cluster and (2)
`ensuring that the incoming user has access to that cluster (e.g.,
`that a client or customer operating a client node or system 208
`attempting a communication or connection with their cluster
`is directed to the proper one of the clusters 250, 251, or 252
`and that the system 208 or, more typically, the user of the
`system 208 has access to that particular cluster 250, 251, or
`252).
`One useful configuration of the system 200 and mechanism
`210 is to give each cluster 250, 251, 252 its own public
`address. This enables the firewall portion of mechanism 210
`to know that all incoming connections to that specific public
`address are sent to a node (not shown in FIG. 2) on a particular
`cluster 250, 251, 252. Once the client system 208 is connected
`to a node on a cluster 250, 251, or 252, that node is then
`
`6
`responsible for user authentication to grant access (e.g., a
`node is provided within each cluster 250, 251, 252 that has the
`proper software and/or hardware to authenticate accessing
`users). Another configuration of the system 200 and mecha-
nism 210 is to have each client 208 connect to a different
`service on the firewall 210, such as a TCP/IP port. The firewall
`210 will then know which services are for which clusters out
`of the many clusters 250, 251, 252 on the network 230. It is
`then able to route the connection to a node on the desired
cluster 250, 251, or 252 to perform user authentication.
`Another configuration for system 200 and mechanism 210 is
`for client system 208 to connect to a common service on the
`firewall 210, and have the firewall 210 authenticate the user.
`This configuration requires the firewall 210 to setup a special
user environment on the firewall 210 that will only allow the
`user of the system 208 to communicate with their cluster 250,
`251, or 252 and no other clusters. This is accomplished
`through common virtual machine technology. All of these
`possible configurations can co-exist together and are not
mutually exclusive. Many other configurations exist that pro-
`vide per-cluster and per-user authentication, and the above-
`described configurations for the system 200 and mechanism
`210 are merely provided as examples.
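The three access-control configurations described above could be modeled roughly as follows. The mapping tables, documentation-range IP addresses, ports, and function names are illustrative assumptions; the sketch only shows how an incoming connection is steered to the correct cluster and how the authentication responsibility shifts between a cluster node and the firewall.

```python
# Rough model of the three access-control configurations described above. The mapping
# tables, documentation-range IP addresses, ports, and function names are illustrative
# assumptions, not the patent's implementation.

PUBLIC_ADDRESS_TO_CLUSTER = {"203.0.113.10": "cluster-250",
                             "203.0.113.11": "cluster-251"}
SERVICE_PORT_TO_CLUSTER = {2250: "cluster-250", 2251: "cluster-251"}
USER_TO_CLUSTER = {"client-a": "cluster-250", "client-b": "cluster-251"}


def route_by_public_address(dst_address: str) -> str:
    """Config 1: each cluster has its own public address; a node on that cluster
    then performs user authentication."""
    return PUBLIC_ADDRESS_TO_CLUSTER[dst_address]


def route_by_service_port(dst_port: int) -> str:
    """Config 2: each cluster is reached through a distinct service (TCP/IP port)
    on the firewall, which routes the connection to a node on that cluster."""
    return SERVICE_PORT_TO_CLUSTER[dst_port]


def route_common_service(user: str, credentials_ok: bool) -> str:
    """Config 3: the firewall itself authenticates the user, then confines the user
    to their cluster (e.g., in a restricted or virtualized environment)."""
    if not credentials_ok:
        raise PermissionError("authentication failed")
    return USER_TO_CLUSTER[user]


# Example: a connection arriving at 203.0.113.11 is sent to a node on cluster 251,
# which then authenticates the user.
assert route_by_public_address("203.0.113.11") == "cluster-251"
assert route_by_service_port(2250) == "cluster-250"
assert route_common_service("client-b", credentials_ok=True) == "cluster-251"
```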
`Significantly, each individual cluster 250, 251, 252 can
have any configuration requested by the client of that cluster.
`For example, companies or organizations may face differing
`computing challenges and have different needs for a cluster,
`and the system 200 is intended to represent generally a hosted
`cluster system 200 in which a plurality of clusters 250, 251,
252 are provided for access by client systems 208 via public
`network 204 (or another network). Hence, the clusters 250,
`251, 252 are located remotely from the customer or user's
`facilities or sites (e.g., the system 200 excluding the client
`remote systems 208 and all or portions of the network 204
may be located at a hosting facility or facilities) and are not
`typically owned by the customer or user but instead are pro-
`vided on an as-needed basis from an operator of the system
`200 (such as by leasing use of a cluster 250, 251, or 252). As
`a result the customer or user is not required to operate and
maintain a data center filled with clusters. Further, in contrast
`to prior practice, each of the clusters 250, 251, 252 is inde-
`pendent and can be configured to suit the needs of the user or
`customer. For example, each of the cluster users or clients
`may need a cluster to perform a particular and differing task.
Previously, a data center would be provided with clusters of a
`particular configuration, and the task would be performed by
`that configured cluster.
`In contrast, the system 200 is adapted such that each of the
`clusters 250, 251, 252 may have a differing configuration
with such configuration being dynamically established in
`response to a user or customer's request so as to be better
`suited to perform their task. For example, the task may be
`handled better with a cluster configuration designed to pro-
`vide enhanced processing or enhanced data storage. In other
cases, the task may best be served with a cluster configured
`for very low latency or a cluster with increased bandwidth for
`communications between nodes and/or accessing storage.
`The task parameters and needs of a user are determined as part
of a personal interview of the customer and/or via data gathered
through a data collection screen/interface (not shown) with
`the system 200. This user input defines the task characteristics
`or computing parameters, and these are processed manually
`or with configuration software to select a cluster configura-
`tion that matches or suits the customer's needs. The selected
cluster configuration (or configuration data) is then used to
`customize one or more of the clusters 250, 251, 252 to have a
`configuration for performing tasks assigned by the customer
`
`
`
`
`such as by use of node or system 208. The customer accesses
`their assigned cluster(s) 250, 251, 252 via the public network
`204 and authentication and firewall mechanism 210 through
`use of a client system 208 as discussed above (or, in some
`cases, by providing computing requests to an operator of the
`system 200 in physical form for entry via the monitoring
`system 220 or the like or by digital communications with such
`an operator).
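The step of turning a client's task parameters into a cluster configuration could be sketched as below. The parameter names, thresholds, and selection rules are hypothetical; they merely illustrate mapping client-supplied task characteristics to configuration data that is then used to customize a cluster.

```python
# Hypothetical sketch of turning client task parameters into cluster configuration
# data (parameter names, thresholds, and selection rules are illustrative only).
from dataclasses import dataclass


@dataclass
class TaskParameters:
    node_count: int
    needs_low_latency: bool
    storage_tb: int
    internode_bandwidth_gbps: int


@dataclass
class ClusterConfiguration:
    nodes: int
    interconnect: str
    storage: str


def select_configuration(params: TaskParameters) -> ClusterConfiguration:
    """Map client-supplied task characteristics to a customized cluster configuration."""
    if params.needs_low_latency or params.internode_bandwidth_gbps >= 40:
        interconnect = "low-latency, high-bandwidth cluster interconnect"
    else:
        interconnect = "standard Ethernet cluster network"
    if params.storage_tb > 100:
        storage = "dedicated private cluster storage network (cf. network 340)"
    else:
        storage = "shared storage node on the private cluster network (cf. storage 320)"
    return ClusterConfiguration(nodes=params.node_count,
                                interconnect=interconnect,
                                storage=storage)


config = select_configuration(TaskParameters(node_count=64, needs_low_latency=True,
                                             storage_tb=200, internode_bandwidth_gbps=40))
print(config)
```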
`One possible common configuration for the clusters 250,
`251, 252 of system 200 such as cluster 250 is shown in FIG.
`3A with customized cluster 350, which is linked to the private
`company network 230 via gateway 240. The cluster 350 is
`shown as having a plurality of nodes 310, 311, 312 that are all
`connected to a single private communication network 300 for
`the cluster 350. The cluster 350 also has a dedicated storage
node 320 linked to this private network 300, and this storage node
`is used for common storage or data that is shared between the
`nodes 310, 311, 312 of the cluster 350. Another useful con-
`figuration for clusters is shown with customized cluster 351 in
`FIG. 3B, which modifies the cluster structure of cluster 350 of
`FIG. 3A. The customized cluster 351 may be used for cluster
`251 of system 200 to service one of the client systems 2