US 20060005181 A1

(19) United States
(12) Patent Application Publication
Fellenstein et al.
(10) Pub. No.: US 2006/0005181 A1
(43) Pub. Date: Jan. 5, 2006

(54) SYSTEM AND METHOD FOR DYNAMICALLY BUILDING APPLICATION ENVIRONMENTS IN A COMPUTATIONAL GRID

(75) Inventors: Craig William Fellenstein, Brookfield, CT (US); Rick Allen Hamilton II, Charlottesville, VA (US); Joshy Joseph, Poughkeepsie, NY (US); James Wesley Seaman, Falls Church, VA (US)

Correspondence Address: Robert H. Frantz, P.O. Box 23324, Oklahoma City, OK 73123 (US)

(73) Assignee: International Business Machines Corporation, Armonk, NY

(21) Appl. No.: 10/870,522

(22) Filed: Jun. 17, 2004

Related U.S. Application Data

(63) Continuation-in-part of application No. 10/824,808, filed on Apr. 15, 2004.

Publication Classification

(51) Int. Cl.: G06F 9/445 (2006.01)
(52) U.S. Cl.: 717/174

(57) ABSTRACT
Computing environments within a grid computing system are dynamically built in response to specific job resource requirements from a grid resource allocator, including activating needed hardware, provisioning operating systems, application programs, and software drivers. Optimally, prior to building a computing environment for a particular job, cost/revenue analysis is performed, and if operational objectives would not be met by building the environment and executing the job, a job sell-off process is initiated.
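The pre-build check described in this abstract can be pictured as a small decision routine. The following Python sketch is illustrative only and is not taken from the specification; the function name, the cost and revenue inputs, and the threshold are hypothetical assumptions.

    # Illustrative sketch (not from the specification): the abstract's pre-build
    # check, expressed as a simple cost/revenue comparison with made-up names.
    def decide_build_or_sell_off(projected_cost, projected_revenue):
        """Return 'build' if operational objectives would be met, else 'sell-off'."""
        if projected_revenue >= projected_cost:   # assumed objective: non-negative margin
            return "build"       # activate hardware, provision OS, apps, and drivers
        return "sell-off"        # initiate the job sell-off process instead

    # Example with made-up figures:
    print(decide_build_or_sell_off(projected_cost=120.0, projected_revenue=90.0))   # sell-off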
[Representative drawing on the front page: the flow diagram of FIG. 2, showing "Receive Environment Data" (22), "Environment Requirements" (21), the Grid Catalog/Storage (15), "Evaluate Possible Environment Build," "Request and Install OS, if Required," "Request and Install Application(s), Drivers, or combined image (e.g. OS+apps+drivers)," and "Environment Built" (201).]
[FIG. 1, Sheet 1 of 7: block diagram of the intra-grid relationships, showing the Grid Resource Allocation function, the Grid Catalog and Storage Subsystem (15), the Grid Dynamic Build Subsystem, the Grid Virtual Node Grouper, and the Grid Manager (reference numerals 11-13, 15).]
[FIG. 2, Sheet 2 of 7: flow diagram of the logical process of the invention: environment requirements (21) and environment data (22) are received and a possible environment build is evaluated against the Grid Catalog/Storage (15); the Grid Allocator is notified (24) and the job rejected if the build is not possible, otherwise an OS is requested and installed if required, followed by application(s), drivers, or a combined image (e.g. OS+apps+drivers), ending with "Environment Built" (200, 201).]

`'"""'
`'"""' >
`'"""' 00
`Ul
`0
`0
`0
`~
`0
`0
`N
`'JJ.
`d
`
`~
`
`-..J
`0 ....,
`~ ....
`'JJ. =-~
`
`C'I
`0
`0
`N
`~Ul
`?
`~
`~
`
`I
`
`I
`
`/
`
`Figure 3
`
`Accounting
`
`304
`
`/
`
`,""
`
`+--_,,,.
`
`Manager
`Results
`
`Job
`
`Application
`
`Client
`
`.... 0 =
`~ ....
`O' -....
`~
`.... 0 =
`~ "Cl -....
`~ .... ~ = ....
`
`""C
`
`(')
`
`~ ....
`
`(')
`
`37
`
`)30
`
`SLA
`
`305
`
`\
`\
`\
`
`' \
`' ' ' '
`
`35
`
`32
`
`34
`
`32
`
`33
`
`Job Queue
`
`Netflix, Inc. - Ex. 1010, Page 000004
`
`

`

[FIG. 4, Sheet 4 of 7: resource-selection diagram showing grid servers 1, 2, ... N (38, 39, 300) reporting "% Idle" values, a compilation of server capabilities and characteristics, job completion stats, the Job Results Manager (302), and the Job/Grid Scheduler.]
[FIG. 5, Sheet 5 of 7: high-level view of grid computing in general, showing a client, a communications network, and a group of grid servers.]
[FIG. 6, Sheet 6 of 7: generalized computing platform (700), showing a CPU with cache, RAM, ROM and Flash memory; storage drives; communications interface(s); internal and external expansion slots; a keyboard/keypad (716), mouse/pointer, touch screen, handwriting and voice recognition, microphone, and camera; and software and firmware.]
[FIG. 7, Sheet 7 of 7: generalized software and firmware organization, showing the operating system, BIOS and hardware device drivers, embedded firmware, OS-native application programs, portable programs with an OS-native platform-specific interpreter, and a browser with plug-ins.]
SYSTEM AND METHOD FOR DYNAMICALLY BUILDING APPLICATION ENVIRONMENTS IN A COMPUTATIONAL GRID

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This patent application is a continuation-in-part of U.S. patent application Ser. No. 10/824,808, docket number AUS920040042US1, filed on Apr. 15, 2004, which has a common inventor, Rick Allen Hamilton, II, and is commonly assigned.

INCORPORATION BY REFERENCE

[0002] The related U.S. patent application Ser. No. 10/824,808, docket number AUS920040042US1, filed on Apr. 15, 2004, is incorporated herein by reference in its entirety, including figures.

[0003] This patent application is a continuation-in-part of U.S. patent application Ser. No. 10/824,808, docket number AUS920040042US1, filed on Apr. 15, 2004.

FEDERALLY SPONSORED RESEARCH AND DEVELOPMENT STATEMENT

[0004] This invention was not developed in conjunction with any Federally-sponsored contract.

MICROFICHE APPENDIX

[0005] Not applicable.
BACKGROUND OF THE INVENTION

[0006] 1. Field of the Invention

[0007] This invention relates to the arts of on demand grid-based computing, and management and allocation of resources within a grid computing environment.

[0008] 2. Description of the Related Art
[0009] In the 1990s, the standardization of communications between a wide range of systems propelled the explosion of the Internet. Based upon the concept of resource sharing, the latest evolutionary technology is grid computing.

[0010] Grid computing is an emerging technology that utilizes a collection of systems and resources to deliver qualities of service. It is distributed computing at its best: a virtual, self-managing computer whose processing is handled by a collection of interconnected heterogeneous systems sharing different combinations of resources. In simple terms, grid computing is about getting computers to work together and allowing businesses, or grid participants, to optimize available resources.
[0011] The framework of grid computing is large-scale resource sharing, which exists within multiple management domains, typically involving highly parallelized applications connected together through a communications medium and organized to perform one or more requested jobs simultaneously. Each grid resource's characteristics can include, but are not limited to, processing speed, storage capability, licensing rights, and types of applications available.

[0012] Grid computing's architecture is defined in the Open Grid Services Architecture ("OGSA"), which includes a basic specification, the Open Grid Services Infrastructure ("OGSI").
[0013] Using grid computing to handle computing jobs of all sizes, and especially larger jobs such as enterprise processes, has several advantages. First, it exploits underutilized resources on the grid. For example, if a financial services company suddenly encounters a 50% increase in stock trade transactions during a 30-minute time period, then under a traditional systems process the company would face an increase in network traffic, latent response and completion time, bottlenecks in processing, and even overload of its resources due to its limited or fixed computational and communications resources.

[0014] In a similar situation, however, grid computing can adjust dynamically to meet the changing business needs, and respond instantly to the stock transaction increase using its network of unused resources. For example, a grid computing system could run an existing stock trading application on four underutilized machines to process transactions, and deliver results four times faster than the traditional computing architecture. Thus, grid computing provides a better balance in resource utilization and enables the potential for massive parallel CPU capacity.
[0015] Second, because of its standards, grid computing enables and simplifies collaboration among many resources and organizations from a variety of vendors and operators. For instance, genome research companies can use grid computing to process, cleanse, cross-tabulate and compare massive amounts of data, with the jobs being handled by a variety of computer types, operating systems, and programming languages. By allowing files or databases to span many systems, data transfer rates can be improved using striping techniques that lead to faster processing, giving the companies a competitive edge in the marketplace.

[0016] Third, grid computing provides sharing capabilities that extend to additional equipment, software, services, licenses and others. These virtual resources provide uniform interoperability among heterogeneous grid participants. Each grid resource may have certain features, functionalities and limitations. For example, a particular data mining job may be able to run on a DB2 server, but may not be compatible with processing on an Oracle server. So, the grid computing architecture selects a resource which is capable of handling each specific job.
[0017] International Business Machines ("IBM") has pioneered the definition and implementation of grid computing systems. According to the IBM architecture, Service Level Agreements ("SLAs") are contracts which specify a set of client-driven criteria directing acceptable execution parameters for computational jobs handled by the grid. SLA parameters may consist of metrics such as execution and response time, results accuracy, job cost, and storage and network requirements. Typically, after job completion, an asynchronous process, which is frequently manual, is performed to compare actual completion results against the SLA parameters. In other words, companies use SLAs to ensure that all accounting specifics, such as costs incurred and credits obtained, conform to the brokered agreements. The relationship between a submitting client and grid service provider is that of a buyer (client) and a seller (grid vendor).
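As a purely illustrative aside (not IBM's implementation and not taken from the specification), the after-the-fact SLA comparison described above might look like the following Python sketch; all metric names and values are assumptions.

    # Hypothetical post-completion SLA conformance check; field names are assumed.
    def sla_violations(sla, actual):
        """Return the SLA metrics that were violated; an empty dict means conformance."""
        violations = {}
        if actual["execution_time_s"] > sla["max_execution_time_s"]:
            violations["execution_time_s"] = actual["execution_time_s"]
        if actual["cost"] > sla["max_cost"]:
            violations["cost"] = actual["cost"]
        if actual["accuracy"] < sla["min_accuracy"]:
            violations["accuracy"] = actual["accuracy"]
        return violations

    # Example with made-up values:
    sla = {"max_execution_time_s": 3600, "max_cost": 50.0, "min_accuracy": 0.99}
    actual = {"execution_time_s": 4200, "cost": 42.0, "accuracy": 0.995}
    print(sla_violations(sla, actual))   # {'execution_time_s': 4200}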
[0018] In order for grid and on-demand computing to be successful, maximum automation of grid-related processes needs to occur. Because grid computing is a relatively new and emerging art, many processes have yet to be considered for automation, and as such, require inefficient manual interaction.

[0019] IBM's grid computing architecture provides an automated and efficient mechanism to allocate and enable the specific hardware and software environment required for job execution in a grid or on-demand computing system, responding dynamically to the receipt of new jobs. However, at certain times, depending on job load and job requirements within the grid, adequate resources to handle a newly submitted job may not be available. Unavailability may result from the fact that hardware and software which are capable of handling the job are already allocated to other jobs, or that no hardware and software are currently configured in the grid in a fashion which could handle the job, or combinations of both reasons.
[0020] Therefore, there is a need in the art for a mechanism which, when the current active and available grid hardware does not contain the software environment(s) required by inbound grid jobs, builds the required software environment in an automated manner. The software involved may include the base operating system, specific device drivers, application software, and other components. Building the appropriate software environment may include a complete build of a new software environment on new hardware, a build of a supplemental set of nodes to integrate with other existing nodes in order to complete a required environment, or simply a build of required applications on existing active nodes, according to the need in the art.
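To make the kind of automation called for here concrete, the following Python sketch (a hypothetical outline, not the patented logic) provisions whatever a job's required environment is missing from a grid catalog, covering the operating system, device drivers, and applications; the data layout and component names are assumptions.

    # Hypothetical automated environment build: install the OS, drivers, and
    # applications that the job requires but the node currently lacks.
    def build_environment(node, requirements, catalog):
        """Provision missing components on a node; return True if the build succeeds."""
        if requirements.get("os") and node.get("os") != requirements["os"]:
            if requirements["os"] not in catalog["os_images"]:
                return False                      # cannot build this environment
            node["os"] = requirements["os"]       # request and install OS, if required
        for drv in requirements.get("drivers", []):
            if drv not in node.setdefault("drivers", []):
                node["drivers"].append(drv)       # request and install device drivers
        for app in requirements.get("applications", []):
            if app not in node.setdefault("applications", []):
                node["applications"].append(app)  # request and install application programs
        return True                               # environment built

    # Example with made-up component names:
    catalog = {"os_images": {"linux-2.6"}}
    node = {"os": None}
    reqs = {"os": "linux-2.6", "drivers": ["fc-hba"], "applications": ["db2"]}
    print(build_environment(node, reqs, catalog), node)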
DESCRIPTION OF THE DRAWINGS

[0021] The following detailed description, when taken in conjunction with the figures presented herein, presents a complete description of the present invention.

[0022] FIG. 1 provides a high-level illustration of the intra-grid relationships between the present invention and the processes of a grid computing environment.

[0023] FIG. 2 illustrates the logical process of our invention.

[0024] FIG. 3 provides details of how grid computing functions are accomplished in general.

[0025] FIG. 4 illustrates functionality for selecting a grid resource.

[0026] FIG. 5 shows a high-level view of grid computing in general.

[0027] FIG. 6 depicts a generalized computing platform, suitable for implementation of the invention according to one available embodiment.

[0028] FIG. 7 provides more details of the software organization of the platform of FIG. 6.
SUMMARY OF THE INVENTION

[0029] Through use of the present invention, a computing grid can offer more advanced resource load balancing. A relatively idle machine may receive an unexpected peak job, or if the grid is fully utilized, priorities may be assigned to better execute the number of requested jobs. By using our Dynamic Application Environment Builder ("DAEB") in conjunction with a Grid Management System ("GMS") scheduler such as the IBM GMS, a computing grid can provide excellent infrastructure for brokering resources.
[0030] Generally speaking, as jobs flow into our computational grid for execution, an automated and efficient mechanism allocates and enables the specific hardware and software environment required for job execution. Additionally, if the current active grid hardware does not contain the software environment(s) required by inbound grid jobs, the processes of the invention build the required software environment in an automated manner. The software resources required and provided by the build may include a base operating system, one or more specific device drivers, one or more application software programs, and other components.

[0031] Building of the appropriate software environment may include a complete build of a new software environment on new hardware, a build of a supplemental set of nodes to integrate with other existing nodes in order to complete a required environment, or simply a build of required applications on existing active nodes.
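One way to picture the choice among these three build options is the following Python sketch; the selection rule, thresholds, and field names are assumptions made for illustration and are not part of the disclosed method.

    # Hypothetical choice among the three build options named above.
    def choose_build_strategy(active_nodes, requirements):
        """Pick one of: 'install-on-existing', 'supplement-nodes', 'build-new'."""
        matching = [n for n in active_nodes if n["os"] == requirements["os"]]
        capacity = sum(n["free_cpus"] for n in matching)
        if matching and capacity >= requirements["cpus"]:
            return "install-on-existing"   # only applications/drivers need to be added
        if matching:
            return "supplement-nodes"      # integrate additional nodes with existing ones
        return "build-new"                 # complete build on new hardware

    # Example with made-up nodes:
    nodes = [{"os": "linux", "free_cpus": 2}, {"os": "aix", "free_cpus": 8}]
    print(choose_build_strategy(nodes, {"os": "linux", "cpus": 4}))   # supplement-nodes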
DETAILED DESCRIPTION OF THE INVENTION

[0032] The present invention is preferably realized in conjunction with a grid computing architecture, and especially with the grid computing environment offered by International Business Machines. Therefore, it will be useful to first establish some definitions and discuss some generalities of grid computing concepts prior to presenting details of the present invention.
Fundamentals of Grid Computing

[0033] The following definitions will be employed throughout this disclosure; an illustrative data-model sketch follows the list of definitions:
[0034] (a) "Grid" shall mean a collection of computing resources such as servers, processors, storage systems, and communications media, ranging from just a few machines to groups of machines organized as a hierarchy potentially spanning the world;

[0035] (b) "Job" shall mean a desired, requested task that a client initiates to be processed using available and selected resources;

[0036] (c) "Resources" shall mean any system, hardware or software module that is available within a grid for use in completing a job, such as application programs, hardware, software licenses, storage and related components;
[0037] (d) "Computing environment," or "environment" for short, shall mean a set of hardware and software resources, such as a processor, memory, disk space, an operating system and one or more application programs, which are used to process or execute a job; and
[0038] (e) "SLA" shall refer specifically to an IBM Service Level Agreement, and more generically to any documented set of client-driven criteria for job handling on a grid, including but not limited to processing accuracy, results format, processing time for completion, and cost of job processing.
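The illustrative data-model sketch referred to above restates definitions (a) through (e) as Python data classes; it is not part of the specification, and every field name is an assumption.

    # Illustrative data model of the defined terms; all field names are assumed.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Resource:            # (c) a system, hardware, or software module in the grid
        name: str
        kind: str              # e.g. "application", "license", "storage", "server"

    @dataclass
    class Environment:         # (d) hardware and software used to process a job
        processor: str
        memory_mb: int
        disk_gb: int
        operating_system: str
        applications: List[str] = field(default_factory=list)

    @dataclass
    class SLA:                 # (e) client-driven criteria for job handling
        max_completion_s: int
        max_cost: float
        results_format: str
        min_accuracy: float

    @dataclass
    class Job:                 # (b) a requested task initiated by a client
        job_id: str
        sla: SLA
        required_environment: Environment

    @dataclass
    class Grid:                # (a) a collection of computing resources
        resources: List[Resource] = field(default_factory=list)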
[0039] As previously discussed, IBM has pioneered the development of systems, architectures, interfaces, and standards for open grid computing. As grid computing is relatively new in the field of computing, we will first provide an overview of grid computing concepts and logical processes. Additional information regarding grid computing in general is publicly available from IBM, several other grid computing suppliers and developers, a growing number of universities, as well as from appropriate standards organizations such as the Open Grid Computing Environments ("OGCE") consortium. The following description of grid computing uses a generalized model and generalized terminology which can be used equally well to implement the invention not only in an IBM-based grid environment, but in grid environments comprised of systems and components from other vendors as well.
[0040] Turning to FIG. 5, the new computing paradigm of grid computing (50) is illustrated at a high level. A client (53), such as an FBI analyst using a client computer, requests a computational job or task, such as a cross-agency list of suspected terrorists, to be performed by the grid. The job is submitted via a communications network (51) to a Grid Management System ("GMS"), which makes a selection of which grid vendor(s) (54) to use based on client job criteria (e.g. response time, cost, accuracy, etc.) and resource characteristics, such as server capability, resource availability, storage capacity, and cost.
[0041] Once the GMS determines a specific vendor(s) (38, 39, 300) to which the job will be assigned (or among which the job will be divided), requests are sent to the selected grid resources, such as Server 1 (38). Server 1 (38) would then process the job as required, and would return job results back to the requesting client (53) via the communications network (51).
[0042] FIG. 3 provides a more detailed illustration (30) of how grid computing functions at a lower level. When a job (32) is submitted by a client application (31) to the grid, the job (32) is received into a grid inbound job queue (33), where it awaits assignment to one or more grid resources.
[0043] A Job/Grid Scheduler ("JGS") (34) retrieves each pending job from the inbound job queue (33), verifies handling requirements against one or more SLAs (305) to determine processing requirements for the job, and then selects which server or servers (38, 39, 300) to assign to process the job (32). In this illustration, Server 2 (39) has been selected, so the job (32) is transferred to Server 2's job queue (36) to be processed when the server becomes available (immediately, if adequate processing bandwidth is already available). Some servers may handle their job queues in an intelligent manner, allowing jobs to have a priority designation which allows them to be processed more quickly or sooner than earlier-received, lower-priority jobs.
[0044] Eventually, the assigned server completes the job and returns the results (301) to a Job Results Manager ("JRM") (302). The JRM can verify job completion and results delivery (303) to the client application (31), and can generate job completion records (304) as necessary to achieve billing and invoice functions.
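The queue, scheduler, and results-manager flow just described can be sketched in a few lines of Python. This is a hypothetical illustration of the flow of FIG. 3, not the actual JGS or JRM, and all names and structures are assumptions.

    # Hypothetical sketch of the FIG. 3 flow: inbound queue (33) -> Job/Grid
    # Scheduler (34) -> selected server's queue (36) -> Job Results Manager (302).
    from collections import deque

    inbound_queue = deque()                  # grid inbound job queue

    def submit(job):                         # client application submits a job
        inbound_queue.append(job)

    def schedule(servers, select):           # Job/Grid Scheduler
        while inbound_queue:
            job = inbound_queue.popleft()
            server = select(servers, job)    # check SLA requirements, pick a server
            server["queue"].append(job)      # transfer to the selected server's queue

    def run_and_report(server, execute, deliver_results):
        while server["queue"]:
            job = server["queue"].popleft()
            deliver_results(job, execute(job))   # Job Results Manager returns results

    # Example wiring with trivial placeholders:
    servers = [{"name": "server-2", "queue": deque()}]
    submit({"id": "job-1"})
    schedule(servers, select=lambda s, j: s[0])
    run_and_report(servers[0], execute=lambda j: "ok",
                   deliver_results=lambda j, r: print(j["id"], r))   # job-1 ok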
[0045] Turning now to FIG. 4, more details of the resource selection process (40) are shown. Each grid resource (38, 39, 300) may report in real time its availability or "percent idle" (41, 42, and 43) to the Job/Grid Scheduler (34). Additionally, a set of grid resource characteristics and capabilities (44) is compiled, either statically, dynamically, or both, which is also available for the JGS (34) to use. Some server characteristics may be static, such as hardware characteristics (e.g. installed memory, communications protocols, or licenses), while other characteristics may be more dynamic in nature, such as the number of licenses available for a certain application program (e.g. PDF generators, video compressors, etc.). Additionally, the completion statistics (45) from the Job Results Manager (302) are preferably available to the JGS (34) as well.
[0046] Through consideration of these factors regarding the grid resources, and in combination with the SLA client requirements, the JGS can select one or more appropriate grid resources to which to assign each job. For example, for high-priority jobs which require immediate processing, the JGS may select a resource which is immediately available, and which provides the greatest memory and processing bandwidth. For another job which is cost-sensitive but not time critical, the JGS may select a resource which is least expensive, without great concern about the current depth of the queue for handling at that resource.
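A minimal Python sketch of the selection policy just described follows; the fields, the ranking rule, and the tie-breaking are assumptions for illustration, not the JGS's actual algorithm.

    # Hypothetical resource selection: favor idle, high-memory resources for
    # time-critical jobs, and the least expensive resource otherwise.
    def select_resource(resources, job):
        """resources: dicts with 'percent_idle', 'memory_mb', and 'cost_per_hour'."""
        if job.get("time_critical"):
            # favor immediate availability and the greatest memory/bandwidth
            return max(resources, key=lambda r: (r["percent_idle"], r["memory_mb"]))
        # otherwise favor the least expensive resource, ignoring queue depth
        return min(resources, key=lambda r: r["cost_per_hour"])

    # Example with made-up resource data:
    grid = [
        {"name": "server-1", "percent_idle": 80, "memory_mb": 4096, "cost_per_hour": 1.50},
        {"name": "server-2", "percent_idle": 20, "memory_mb": 8192, "cost_per_hour": 0.40},
    ]
    print(select_resource(grid, {"time_critical": True})["name"])    # server-1
    print(select_resource(grid, {"time_critical": False})["name"])   # server-2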
Computing Platform Suitable for Realization of the Invention

[0047] The invention, in one available embodiment, is realized as a feature or addition to software products, such as IBM's grid computing products, for execution by well-known computing platforms such as personal computers, web servers, and web browsers.
[0048] As the computing power, memory and storage, and communications capabilities of even portable and handheld devices such as personal digital assistants ("PDAs"), web-enabled wireless telephones, and other types of personal information management ("PIM") devices steadily increase over time, it is possible that the invention may be realized in software for some of these devices, as well.

[0049] Therefore, it is useful to review a generalized architecture of a computing platform which may span the range of implementation, from a high-end web or enterprise server platform, to a personal computer, to a portable PDA or web-enabled wireless phone.
[0050] Turning to FIG. 6, a generalized architecture is presented including a central processing unit (71) ("CPU"), which is typically comprised of a microprocessor (72) associated with random access memory ("RAM") (74) and read-only memory ("ROM") (75). Often, the CPU (71) is also provided with cache memory (73) and programmable FlashROM (76). The interface (77) between the microprocessor (72) and the various types of CPU memory is often referred to as a "local bus," but it also may be a more generic or industry-standard bus.

[0051] Many computing platforms are also provided with one or more storage drives (79), such as hard disk drives ("HDD"), floppy disk drives, compact disc drives (CD, CD-R, CD-RW, DVD, DVD-R, etc.), and proprietary disk and tape drives (e.g., Iomega Zip™ and Jaz™, Addonics SuperDisk™, etc.). Additionally, some storage drives may be accessible over a computer network.
[0052] Many computing platforms are provided with one or more communication interfaces (710), according to the function intended of the computing platform. For example, a personal computer is often provided with a high-speed serial port (RS-232, RS-422, etc.), an enhanced parallel port ("EPP"), and one or more universal serial bus ("USB") ports. The computing platform may also be provided with a local area network ("LAN") interface, such as an Ethernet card, and other high-speed interfaces such as the High Performance Serial Bus IEEE-1394.
[0053] Computing platforms such as wireless telephones and wireless networked PDAs may also be provided with a radio frequency ("RF") interface with antenna, as well. In some cases, the computing platform may be provided with an infrared data arrangement ("IrDA") interface, too.

[0054] Computing platforms are often equipped with one or more internal expansion slots (711), such as Industry Standard Architecture ("ISA"), Enhanced Industry Standard Architecture ("EISA"), Peripheral Component Interconnect ("PCI"), or proprietary interface slots for the addition of other hardware, such as sound cards, memory boards, and graphics accelerators.

[0055] Additionally, many units, such as laptop computers and PDAs, are provided with one or more external expansion slots (712) allowing the user the ability to easily install and remove hardware expansion devices, such as PCMCIA cards, SmartMedia cards, and various proprietary modules such as removable hard drives, CD drives, and floppy drives.

[0056] Often, the storage drives (79), communication interfaces (710), internal expansion slots (711) and external expansion slots (712) are interconnected with the CPU (71) via a standard or industry open bus architecture (78), such as ISA, EISA, or PCI. In many cases, the bus (78) may be of a proprietary design.
[0057] A computing platform is usually provided with one or more user input devices, such as a keyboard or a keypad (716), a mouse or pointer device (717), and/or a touch-screen display (718). In the case of a personal computer, a full-size keyboard is often provided along with a mouse or pointer device, such as a track ball or TrackPoint™. In the case of a web-enabled wireless telephone, a simple keypad may be provided with one or more function-specific keys. In the case of a PDA, a touch-screen (718) is usually provided, often with handwriting recognition capabilities.

[0058] Additionally, a microphone (719), such as the microphone of a web-enabled wireless telephone or the microphone of a personal computer, is supplied with the computing platform. This microphone may be used for simply reporting audio and voice signals, and it may also be used for entering user choices, such as voice navigation of web sites or auto-dialing telephone numbers, using voice recognition capabilities.

[0059] Many computing platforms are also equipped with a camera device (7100), such as a still digital camera or full-motion video digital camera.

[0060] One or more user output devices, such as a display (713), are also provided with most computing platforms. The display (713) may take many forms, including a Cathode Ray Tube ("CRT"), a Thin Film Transistor ("TFT") array, or a simple set of light-emitting diode ("LED") or liquid crystal display ("LCD") indicators.

[0061] One or more speakers (714) and/or annunciators (715) are often associated with computing platforms, too. The speakers (714) may be used to reproduce audio and music, such as the speaker of a wireless telephone or the speakers of a personal computer. Annunciators (715) may take the form of simple beep emitters or buzzers, commonly found on certain devices such as PDAs and PIMs.

[0062] These user input and output devices may be directly interconnected (78', 78") to the CPU (71) via a proprietary bus structure and/or interfaces, or they may be interconnected through one or more industry open buses such as ISA, EISA, PCI, etc. The computing platform is also provided with one or more software and firmware (7101) programs to implement the desired functionality of the computing platforms.
[0063] Turning now to FIG. 7, more detail is given of a generalized organization of software and firmware (7101) on this range of computing platforms. One or more operating system ("OS") native application programs (823) may be provided on the computing platform, such as word processors, spreadsheets, contact management utilities, address book, calendar, email client, presentation, financial and bookkeeping programs. Additionally, one or more "portable" or device-independent programs (824) may be provided, which must be interpreted by an OS-native platform-specific interpreter (825), such as Java™ scripts and programs.

[0064] Often, computing platforms are also provided with a form of web browser or micro-browser (826), which may also include one or more extensions to the browser such as browser plug-ins (827).
[0065] The computing device is often provided with an operating system (820), such as Microsoft Windows™, UNIX, IBM OS/2™, LINUX, MAC OS™ or other platform-specific operating systems. Smaller devices such as PDAs and wireless telephones may be equipped with other forms of operating systems such as real-time operating systems ("RTOS") or Palm Computing's PalmOS™.

[0066] A set of basic input and output functions ("BIOS") and hardware device drivers (821) are often provided to allow the operating system (820) and programs to interface to and control the specific hardware functions provided with the computing platform.
[0067] Additionally, one or more embedded firmware programs (822) are commonly provided with many computing platforms, which are executed by onboard or "embedded" microprocessors as part of the peripheral device, such as a micro controller or a hard drive, a communication processor, network interface card, or sound or graphics card.

[0068] As such, FIGS. 6 and 7 describe in a general sense the various hardware components, software and firmware programs of a wide variety of computing platforms, including but not limited to personal computers, PDAs, PIMs, web-enabled telephones, and other appliances such as WebTV™ units. It will be readily recognized by those skilled in the art that the methods and processes disclosed herein may be alternatively realized as hardware functions, in part or in whole, without departing from the spirit and scope of the invention.
Other Grid Products from IBM and Tivoli

[0069] There are two previously available products to which the present invention relates, but compared to which the present invention provides substantial performance and functional advantages:

[0070] (1) Tivoli's Think Dynamic (a.k.a. IBM Tivoli Provisioning Manager and IBM Tivoli Intelligent Orchestrator) Provisioning Manager(s), which manages static building, deploying, configuring and reclaiming resources to and from application environments. It enables building of workflows that automate "best practices" to deploy and augment n-tier application environments, and provides out-of-the-box automated provisioning for networks, servers, storage, middleware and application services.
`
`(2) IBM Websphere™ Server Allocation Work(cid:173)
`[0071]
`load Manager, which enables pooling of WebSphere TM
`resources (e.g. software components and resources)
`and have applications respond quickly to the dyna
