`
`
`
A publish/subscribe CORBA Persistent State Service Prototype
`
`C. Liebig, M. Cilia†, M. Betz, A. Buchmann
`
`Database Research Group - Department of Computer Science
`Darmstadt University of Technology - Darmstadt, Germany
{chris,cilia,betz,buchmann}@dvs1.informatik.tu-darmstadt.de

† Also ISISTAN, Faculty of Sciences, UNICEN, Tandil, Argentina.
`
Abstract. An important class of information dissemination applications
requires 1:n communication and access to persistent datastores. CORBA’s new
Persistent State Service, combined with messaging capabilities, offers the
possibility of efficiently realizing information brokers between data sources and
`CORBA clients. In this paper we present a prototype implementation of the PSS
`that exploits the reliable multicast capabilities of an existing middleware
`platform. This publish/subscribe architecture makes it possible to implement an
`efficient update propagation mechanism and snooping caches as a generic
`service for information dissemination applications. The implementation is
`presented in some detail and implications of the design are discussed. We
`illustrate the use of a publish/subscribe PSS by applying it to an auction
`scenario.
`
`1 Introduction
`
`The deployment of large scale information dissemination systems like Intranet and
`Extranet information systems, e-commerce applications, and workflow management
`and groupware systems, is key to the success of companies competing in a global
`marketplace and operating in a networked world. Applications like warehouse
`monitoring, auctions, reservation systems, traffic information systems, flight status
`tracking, logistics systems, etc. consist of a potentially large number of clients spread
`all over the world demanding timely information delivery. Many of these applications
`span organizational boundaries and are centered around a variety of data sources, like
`relational databases or legacy systems that maintain business data. The business logic
`may be spread over separate modules and the entire system is expected to undergo
`continuous extension and adaptation to provide new functionality.
`Common approaches in terms of systems architecture can be classified into traditional
`2-tier client/server, 3-tier TP-heavy using TP monitors and n-tier Object-Web
`systems.
`In 2-tier client/server the client part implements the presentation logic together with
`application logic and data access. This approach depends primarily on RPC-like
`communication and scales well only if client and server are close together in terms of
`
`
`
`
network bandwidth and access latency. However, it does not scale in the face of
wide-area distribution. Moreover, the fat-client approach renders the client software
dependent on the data model and API of the backend.
`In a 3-tier architecture a middle-tier – typically based on a TP monitor - is introduced
`to encapsulate the business logic and to hide the data source specifics. TP monitors
`provide scalability in terms of resource management, i.e. pooling of connections,
`allocating processes/threads to services and load balancing. The communication
`mechanisms used in 3-tier architectures range from peer-to-peer messaging and
`transactional queues to RPC and RMI. TP monitor based approaches assume that the
`middle-tier has a performant connection to the backend data sources, because
`database access protocols for relational systems are request/response and based on
`“query shipping”. In order to reduce access latency and to keep the load of the data
`source reasonably low, the application programmers are urged to implement their own
`caching functionality in the middle-tier. A well known example of such an
`architecture is the SAP system [21].
`In n-tier Object-Web systems the clear distinction between clients and servers gets
`blurred. The monolithic middle-tier is split up into a set of objects. Middleware
`technology, such as CORBA, provides the glue for constructing applications in
`distributed and heterogeneous environments in a component-oriented manner.
`CORBA leverages a set of standard services [22] like Naming Service, Event and
`Notification Service, Security Service, Object Transaction Service, and Concurrency
Control Service. CORBA has not been able to live up to expectations of scalability,
particularly in the information dissemination domain, because of a limiting
(synchronous) 1:1 communication structure and the lack of a proper persistence
service. The new CORBA Messaging standard [23] will provide true asynchronous
communication including time-independent invocations. We argue that the recently
proposed Persistent State Service [14], which replaces the ill-fated Persistent Object
Service, will not only play a key role as an integration mechanism but also provide
the opportunity to introduce efficient data distribution and caching mechanisms.
`A straightforward implementation of the PSS relying on relational database
technology is based on query shipping. The PSS opens a datastore connection to
the server, ships a query that is executed at the server side, and receives the result
set in response. Such a PSS implementation realizes storage objects as stateless
incarnations on the CORBA side that act as proxies to the persistent object instance
`in the datastore. Operations that manipulate the state of objects managed by the PSS
`are described in datastore terms. This approach generates a potential bottleneck at the
`datastore side, because each operation request on an instance will result in a SQL
`query. Furthermore, for information dissemination systems, where the user wants to
`continuously monitor the data of interest, polling must be introduced which results in
`a high load at the backend, wasting resources and possibly delivering low quality of
`data freshness.
`For information dissemination systems an alternate approach based on server-initiated
`communication is more desirable. Techniques ranging from cache consistency
`mechanisms in (OO)DBMSs [33,5] and triggers/active database rules [10] to
`broadcast disks [1] can be used to push data of interest to clients. In the context of the
`PSS a new publish/subscribe session is needed. A publish/subscribe session represents
`the scope of the objects an application is interested in, i.e. subscribes to. For those
`
`
`objects in a publish/subscribe session the cache is loaded and updated automatically.
`Additionally, this session provides notifications about insert, modify and delete events
to the application. While publish/subscribe sessions are currently not part of the PSS
specification, they are not precluded by it and would represent a useful extension to
the spec.
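While such a session is not standardized, the following C++ fragment sketches one possible shape of a publish/subscribe session interface; the names (PubSubSession, ObjectListener) and signatures are our own illustration and are not part of the PSS specification or of the prototype's actual code.

  // Hypothetical sketch of a publish/subscribe session interface; purely
  // illustrative, not part of the PSS specification.
  #include <string>

  class ObjectListener {
  public:
    virtual ~ObjectListener() {}
    // invoked when a subscribed storage object is created, modified or deleted
    virtual void on_insert(const std::string& pid) = 0;
    virtual void on_modify(const std::string& pid) = 0;
    virtual void on_delete(const std::string& pid) = 0;
  };

  class PubSubSession {
  public:
    virtual ~PubSubSession() {}
    // declare interest in a set of storage objects, e.g. by type and label;
    // the cache is loaded and kept up to date for all subscribed objects
    virtual void subscribe(const std::string& storage_type,
                           const std::string& label,
                           ObjectListener* listener) = 0;
    virtual void unsubscribe(const std::string& storage_type,
                             const std::string& label) = 0;
  };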
`In this paper we present an implementation of a PSS prototype that provides an
`intelligent caching mechanism and active functionality in conjunction with message
`oriented middleware (MOM) that is capable of 1:n communication. By removing two
`crucial bottlenecks from the CORBA platform we claim that highly scalable Object-
`Web systems become feasible.
In our PSS prototype (the work on which is partially funded by TIBCO Software Inc.,
Palo Alto) we take advantage of commercial publish/subscribe middleware that
provides the paradigm of subject based addressing and 1-to-many reliable multicast
message delivery. We show how a snoopy cache can be implemented for multi-node
PSS deployment. We make use of a prototype of a database adapter for
object-relational databases (Informix IUS, in particular) that was partially developed
and extended in the scope of this project. The database adapter makes it possible to
use publish/subscribe functionality in the database and to push data to the PSS caches
when update transactions are issued against the database backend or when new data
objects are created.
`This paper concentrates on the basic infrastructure needed to provide scalability with
`respect to dissemination of information from multiple data sources. We explicitly
`exclude from the scope of this paper federated database and schema integration
`issues.
`The remainder of this paper is organized as follows: Section 2 briefly introduces key
`concepts of the PSS specification and the multicast-enabled message oriented
`middleware; Section 3 provides an overview of the architecture of our prototype
`implementation of the PSS and identifies the main advantages of integrating the
`reliable multicast functionality of the TIBCO platform; Section 4 describes the
`implementation; Section 5 introduces auctions as a typical scenario for middleware-
`based Web-applications and Section 6 presents conclusions and identifies areas of
`ongoing research.
`
`2 CORBA PSS and Messaging Middleware
`
`2.1 CORBA Persistent State Service
`
`The need for a persistence service for CORBA was recognized early on. In 1995, the
Persistent Object Service was accepted but failed because of major flaws: the
specification was not precise, persistence was exposed to CORBA clients,
transactional access to persistent data was not covered, and the service lacked
integration with other CORBA services. Recently, the Persistent State Service (PSS)
`was proposed to overcome those flaws. The goals of the PSS specification [14] are to
`
`
`
`make the state of the servant persistent, to be datastore neutral and implementable
`with any datastore, to be CORBA friendly, consistent with other OMG specifications
`(Transactions, POA, Components, etc.) and also with other standards like SQL3 [18]
`and ODMG [7].
`The PSS provides a single interface for storing objects’ state persistently on a variety
of datastores like OO-, OR-, R-DBMS, and simple files. The PSS offers programmers
who develop object implementations a service to save and restore the state of their
objects, and it is totally transparent to the client. Persistence is an implementation
`concern, and a client should not be aware of the persistence mechanisms. Therefore,
`the PSS specification does not deal with the external interface (provided by a CORBA
`server) but with an internal interface between the CORBA-domain and the datastore-
`domain.
Due to numerous problems with IDL valuetypes - used in previous proposals as a
requirement imposed by the RFP - the IDL was extended with new constructs to
`define storage objects and storage home objects. The extended IDL is known as
`Persistent State Definition Language (PSDL). Storage objects are stored in storage
`homes, which are themselves stored in datastores. In order to manipulate a storage
`object, the programmer uses a representative programming-language entity, called
`storage object instance. A storage object instance may be connected to a storage
`object in the datastore, providing direct access to the state of this storage object. Such
`a connected instance is called storage object incarnation. To access a storage object, a
`logical connection between the process and the datastore is needed. Such a connection
`is known as session.
`There is also a distinction between abstract storage type specification and concrete
`storage type implementation. The abstract storage type spec defines everything a
`servant programmer needs to know about a storage object, while an implementation
`construct defines what a code generator needs to know in order to generate code for it.
`A given abstract specification can have more than one implementation and it is
`possible to update an implementation without affecting the storage objects’ clients.
Thus, the implementation of storage types and storage homes is mainly the
responsibility of the PSS. An overview of these concepts is depicted in Figure 1.
`
[Figure 1 depicts a datastore containing storage objects and storage homes, the implementations of abstract storage objects and abstract storage homes, and storage object and storage home incarnations in two processes (A and B) accessing the datastore through sessions.]

Fig. 1. PSS concepts [14].
`
`
A storage object can have both state and behavior, defined by the storage type: its
`state is described by attributes (also called state members) and its behavior is
`described by operations. State members are manipulated through equally named pairs
`of accessor functions. Operations on storage objects are specified in the same manner
`as with IDL. In addition to IDL parameter types, storage types defined in PSDL may
`be used as parameters. In contrast to CORBA objects, operations on storage objects
`are locally implemented and not remotely accessible.
`A storage home does not have its own state, but it can have behavior, which is
`described by operations in the abstract storage home. A storage home can ensure that
`a list of attributes of its storage type forms a unique identifier for the storage objects it
`manages. Such a list is called a key. A storage home can define any number of keys.
`Each key declaration implicitly declares associated finder operations in the language
`mapping. To create or locate a storage object, a CORBA server implementor calls
`create(<parameters>) or find_by_<some key>(<parameters>) operations on
`the storage home of the storage type and in return will receive the according storage
`object instance.
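For illustration, the following C++ fragment sketches the shape of the code one might expect for an Item storage type with a key id; the member names and exact signatures are assumptions made for this example and do not reproduce the normative PSDL-to-C++ mapping.

  // Hypothetical shape of generated code for an 'Item' storage type with key 'id'.
  #include <string>

  class Item {                                  // abstract storage type
  public:
    virtual ~Item() {}
    virtual std::string id() const = 0;         // key state member (accessor)
    virtual std::string description() const = 0;          // state member
    virtual void description(const std::string& v) = 0;   // equally named mutator
  };

  class ItemHome {                              // abstract storage home for Item
  public:
    virtual ~ItemHome() {}
    // finder implicitly declared by the key 'id':
    virtual Item* find_by_id(const std::string& id) = 0;
    // creation operation:
    virtual Item* create(const std::string& id,
                         const std::string& description) = 0;
  };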
`The inheritance rules for storage objects are similar to the rules for interface
`inheritance in IDL. Storage homes also support multiple inheritance. However, it is
not possible to inherit two operations with the same name, nor two keys with the
same name.
`In the PSS spec the mapping of PSDL constructs to several programming languages is
`also specified. A compliant PSS tool must generate a default implementation for
`storage homes and storage types based on the given PSDL definition.
`For the case that the underlying datastore is a database system, the PSS introduces a
`transactional session orchestrated by OTS through the use of the X/Open XA
`interface [34] of the datastore. Access to storage objects within a transactional session
produces executions that comply with the selected isolation level, i.e. read
uncommitted or read committed. Note that stronger isolation levels like repeatable read
`and serializable are not specified.
`
`2.2 Multicast-enabled MOM
`
We use COTS MOM [31] to build the PSS prototype, namely the TIB/Rendezvous and
TIB/ObjectBus products. TIB/Rendezvous is based upon the notion of the
Information Bus [26] (used interchangeably with “message bus” in the
following) and realizes the concept of subject based addressing, which is related to
the idea of a tuple space, first introduced in LINDA [6].
`sender or recipient for a message by its identifier, which in the end comes down to a
`network address, messages are published under a subject name on the message bus.
`The subject name is supposed to characterize the contents - i.e. the type - of a
`message. If a participant, who is connected to the message bus, is interested in some
`specific message types, she will subscribe for the subjects of interest and in turn be
`notified of messages published under the selected subject names. The subject name
`space is hierarchical and subscribers may use subject name patterns to denote a set of
`types to which they want to subscribe.
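To make the matching semantics assumed in the rest of this paper concrete ("*" stands for exactly one name element, ">" for all remaining elements, as in the subject masks of Section 4), the following self-contained C++ function sketches hierarchical subject matching; it is a simplified illustration, not the TIB/Rendezvous implementation.

  // Simplified sketch of hierarchical subject matching: '*' matches one
  // element, '>' matches the remaining elements. Illustration only.
  #include <cstddef>
  #include <sstream>
  #include <string>
  #include <vector>

  static std::vector<std::string> split_subject(const std::string& s) {
    std::vector<std::string> parts;
    std::stringstream ss(s);
    std::string elem;
    while (std::getline(ss, elem, '.')) parts.push_back(elem);
    return parts;
  }

  bool subject_matches(const std::string& pattern, const std::string& subject) {
    std::vector<std::string> p = split_subject(pattern);
    std::vector<std::string> s = split_subject(subject);
    for (std::size_t i = 0; i < p.size(); ++i) {
      if (p[i] == ">") return true;            // wildcard for the rest
      if (i >= s.size()) return false;         // subject has too few elements
      if (p[i] != "*" && p[i] != s[i]) return false;
    }
    return p.size() == s.size();
  }

  // e.g. subject_matches("LOAD.Item.*.auction.com", "LOAD.Item.1234.auction.com")
  // yields true.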
`
`
`Messages are constructed from typed fields and can be recursively nested.
`Furthermore, messages are self-describing: a recipient of a message can inquire about
the structure and type of the message content. The abstraction of a bus inherently
carries the semantics of many-to-many communication, as there can be multiple publishers
`and subscribers for the same subject. The implementation of TIB/Rendezvous uses a
`lightweight multicast communication layer to distribute messages to all potential
subscribers. On each machine, a daemon manages local subscribers, filters out
relevant messages according to subject information and notifies individual
subscribers. The programming style for listening applications is event-driven, i.e.
eventually the program must transfer control to the TIB/Rendezvous library, which
runs an event loop. Following the Reactor pattern [29], the onData() method of an
initially registered callback object will be invoked by the TIB/Rendezvous library
when a message arrives with a subject that the subscriber is listening to.
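As an illustration of this event-driven style, the sketch below shows a callback class in the spirit of the Reactor pattern; Message, Callback and the registration calls are illustrative stand-ins and do not reproduce the actual TIB/Rendezvous C++ API.

  // Reactor-style listening, sketched with hypothetical types.
  #include <iostream>
  #include <string>

  struct Message { std::string subject; std::string payload; };

  class Callback {
  public:
    virtual ~Callback() {}
    virtual void onData(const Message& msg) = 0;  // invoked per matching message
  };

  class LoadReplySnooper : public Callback {
  public:
    void onData(const Message& msg) override {
      // e.g. hand the fragment to the data container of the incarnation
      std::cout << "snooped " << msg.subject << std::endl;
    }
  };

  // Registration and the event loop would then look roughly like:
  //   bus.subscribe("LOADREPLY.Item.1234.*.auction.com.>", new LoadReplySnooper());
  //   bus.run();   // transfer control to the library's event loop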
`Message propagation can be configured to use IP multicast or UDP broadcast. In the
`latter case, a special message routing daemon must be set up in each subnet in order to
`span LAN (broadcast) boundaries. Optionally, TIB/Rendezvous can make use of
`PGM, a reliable multicast transport on top of IP multicast, which has been developed
`by Cisco Systems in cooperation with TIBCO and proposed to the IETF [30].
`Two quality of service levels are supported by TIB/Rendezvous: reliable and
`guaranteed. In both modes, messages are delivered in FIFO order with respect to the
`publisher. There is no total ordering in case of multiple publishers on the same
`subject. Reliable delivery uses receiver-side NACKs and a sender-side in-memory
`ledger that buffers messages for some amount of time in case of retransmission
`requests. With guaranteed delivery, a subscriber may register with the publisher for a
`certified session or the publisher preregisters dedicated subscribers.
`Strict group membership semantics must be realized at the application level if so
required. However, atomic message delivery is not provided. The TIB/Rendezvous
library uses a persistent ledger in order to provide guaranteed delivery. Messages may
`be discarded from the persistent ledger as soon as all subscribers have explicitly
`acknowledged the receipt. In both variants, the retransmission of messages is
`receiver-initiated by sending NACKs.
The diagram in Figure 2 depicts how the multicast messaging middleware is
`introduced to CORBA in ObjectBus, a CORBA 2.0 compliant ORB implementation.
`
[Figure 2 depicts the ObjectBus architecture: applications use stubs and skeletons over the ORB interfaces; GIOP is carried either by TIBIOP over TIB/Rendezvous or by IIOP over TCP/IP; the PSS, ObjectBus services, messaging applications and CORBA 2.0 applications sit on top.]

Fig. 2. ObjectBus Architecture.
`
`
`The General Inter-ORB Protocol (GIOP) is implemented both by a standard Internet
`Inter-ORB Protocol (IIOP) layer and a TIBCO specific layer (TIBIOP). When using
`TIBIOP, the GIOP messages are marshaled into TIB/Rendezvous messages and
published on the message bus under a specific subject. A CORBA (server)
object may be registered with the ORB under an application-specific subject
name. In that case the returned Interoperable Object Reference (IOR) carries the
subject name in the TIBIOP addressing profile. In order to preserve
interoperability, server objects may be registered with both TIBIOP and IIOP profiles
at the same time. Additionally, CORBA applications may access the TIB/Rendezvous
API directly to register listeners and publish messages under some subject. The
`PSS prototype implementation is mainly based on this TIB/Rendezvous messaging
`API.
`
`3 Overview of the Prototype Architecture
`
`In [13], the nodes in a general distributed information system are classified into: i)
`data sources which provide the base data that is to be disseminated, ii) clients which
`are net consumers of information and iii) information brokers (agents, mediators) that
`acquire information from data sources and provide the information to the clients. Data
`delivery mechanisms are distinguished along three main dimensions: push vs. pull,
`periodic vs. aperiodic and 1:1 vs. 1:n.
`An analysis of the large, scalable, distributed applications that we are addressing
`reveals that they are best built using multi-tier architectures. The diagram in Figure 3
`below shows this: clients can interact with an application either directly through an
`ORB or via a Web-server (optionally using an applet). Both periodic and aperiodic
`pull may be used to begin an interaction, while aperiodic notification and polling are
`required to propagate change to the users. At the integration-tier the application logic
`is realized through CORBA objects and services.
`The interaction between the integration-tier and the backend-tier requires both pull
`and push communication to initiate individual requests and to update the caches,
respectively. Further, aperiodic event-driven interaction is required and 1:n
`communication capabilities are essential for effective dissemination of updates and
`for snooping of load reply and creation/deletion events. Under these conditions, the
`PSS provides the means to efficiently realize CORBA objects as information brokers
`between data sources and CORBA clients.
`In our prototype architecture of a publish/subscribe based PSS, we include a PSS
Connector on the side of the integration tier and its counterpart, the DB Connector, at
the datastore. In terms of object-oriented database system architecture, the DB
Connector plays the role of an object server, leveraging extended relational database
technology, and the PSS Connector acts as the object manager.
`
`
[Figure 3 shows clients (applets behind a Web server or CORBA clients) interacting with the integration tier (application logic acting as information broker, with PSS and Notification) via aperiodic and periodic pull, aperiodic push and aperiodic notification; the integration tier communicates with the backend tier (DB Connectors in front of the datastores of domain auction.com) over the Message Bus using aperiodic pull, snooping and 1:n delivery.]

Fig. 3. Multi-tier Architecture for Information Dissemination Systems.
`
`We unbundle object caching and object-relational mappings and benefit from the
`reliable multicast messaging services provided by publish/subscribe MOM:
`1. The PSS Connector at the CORBA side interacts with the data sources at the
`backend in aperiodic pull combined with 1:n delivery. A storage object lookup
`request is initiated by some PSS Connector on application demand. The response is
`published by the DB Connector under an associated subject and all PSS Connector
`instances that have subscribed to that kind of object will snoop the resulting
`messages and possibly refresh or add a new incarnation to their object cache.
`2. Updates to storage object instances result in publishing update notifications under
`an associated subject including the new state of the object, i.e. aperiodic push
`combined with 1:n delivery. Again, the PSS Connector instances may snoop the
`update notifications to update the cached incarnation of the object and notify the
`application of the update.
`3. In addition to update notifications, creation and deletion events can be signaled to
`the application by letting the PSS snoop the respective messages. The application is
`thus relieved from polling and may extend the chain of notification to the client-tier
`in order to facilitate timely information delivery.
4. The implementation of the PSS uses a hierarchy of subject names to address
objects. Instead of addressing by location (i.e. IP address, DB listener socket
address), publish/subscribe interactions use the paradigm of addressing content
(i.e. subject based addressing). Thereby several data sources may be federated in a
single data source domain. Additionally, a labeling mechanism can be introduced
to support subscription to a collection of storage objects and simple subject-based
queries; a sketch of such subject construction follows this list.
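As a concrete illustration of such subject construction, the helper below builds reply subjects of the form used by our FAP (cf. Figure 5 in Section 4 and Appendix A: VERB.rep_id.pid.fragment_no.domain.label); the function itself is an illustrative sketch, not code taken from the prototype.

  // Illustrative builder for FAP reply subjects.
  #include <sstream>
  #include <string>

  std::string reply_subject(const std::string& verb,     // e.g. "LOADREPLY"
                            const std::string& rep_id,   // e.g. "Item"
                            const std::string& pid,      // e.g. "1234"
                            int fragment_no,             // e.g. 0
                            const std::string& domain,   // e.g. "auction.com"
                            const std::string& label) {  // e.g. "computer.hardware.misc"
    std::ostringstream s;
    s << verb << '.' << rep_id << '.' << pid << '.' << fragment_no << '.' << domain;
    if (!label.empty()) s << '.' << label;               // label postfix is optional
    return s.str();
  }

  // reply_subject("LOADREPLY", "Item", "1234", 0, "auction.com",
  //               "computer.hardware.misc") yields
  // "LOADREPLY.Item.1234.0.auction.com.computer.hardware.misc"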
Given the potential distribution of clients and caches, we expect to benefit from
reference locality not only within the scope of a single PSS instance: because load
replies and update notifications are snooped, reference locality is exploited
throughout the datastore domain, across different PSS nodes.
`
`4 Prototype Design & Implementation
`
`The implementation consists of the realization of the PSS Connector and the DB
`Connector including the definition of the corresponding formats and protocols (FAP),
`provision of snoopy caching and active functionality, the mechanisms to adapt the
`database to the TIB/Rendezvous message bus, the mapping between PSDL and the
`(object-) relational data model, and last but not least the transactional semantics and
`correctness criteria that can be enforced.
`
`4.1 Formats and Protocols between Connectors
`
`In defining the FAPs we must specify the basic functionality to create, lookup, load,
`change/save and delete objects. More advanced features are snooping load replies,
`generating and snooping update notifications, and generating and snooping create/
`delete events. Most important for the implementation of the advanced features on top
`of publish/subscribe messaging middleware is the definition of the subject namespace,
`i.e. the categories an application can subscribe to and under which to publish. Subjects
`must be chosen in a way that enables snooping load and update payload data, as well
`as detecting create/update/delete events and signaling them to the application.
`Appendix A presents the subject name space with respect to the FAP. Figure 4 below
`shows the basic functional units of the PSS prototype.
`
[Figure 4 shows the components of the PSS prototype: the DB Connector (IUS persistent state data tables, meta-data repository and a DB-Adapter with SQL callbacks) and the PSS Connector (object manager, generated code, notification and snooping parts, and a message bus adapter with listener and publisher agents on TIB/ObjectBus) interacting over the Message Bus via the FAP.]

Fig. 4. PSS Prototype Components.

The FAP is materialized by a type-specific generated storage object (home)
implementation on top of a general object manager library at the PSS Connector.
`the DB Connector the FAP is implemented using callback handlers (external SQL
`
`
`UDR, see also 4.2). Additionally we must provide a DB Adapter that maps the
`payload data to the constructs of the datastore as reflected in the metadata repository.
`
`4.1.1 Loading a storage object in a publish/subscribe session
`An application gets access to one or more storage object incarnations through its
`respective storage home. Storage homes are managed by a publish/subscribe session,
`which also defines the scope of the object cache. Before actually accessing a storage
object, the application must retrieve a handle to the object incarnation using
find_by_pid() or some other key-based finder operation, or using find_by_label().
In the first case, the application is returned a handle to the storage object incarnation.
In the second case the storage home will return a sequence of handles (see also
ItemHome in Appendix C).
`As the prototype is restricted to C++, the handle is realized by a C++ pointer. The
`actual implementation of state and of the corresponding accessor functions is
`delegated to a “data container” object. Thus the handle represents a smart-pointer [12]
to the actual storage object incarnation. This approach is somewhat similar to
Persistence [28] and other OODB systems.
`Although the handle is returned by the finder operation after the object lookup
`returned successfully, the data container part is not populated by pulling the datastore
`immediately. Instead, the respective delegate data container object subscribes to the
`storage object’s subject name and snoops the message bus for LOADREPLY and
`UPDATENOTIFY messages.
`
[Figure 5 shows the object fault interaction between the PSS Connector and the DB Connector (IUS PSS Datablade in front of the persistent state data tables) over the Message Bus: the DB Connector listens on LOAD.rep_id.pid.domain, e.g. LOAD.Item.*.auction.com; the PSS Connector listens on LOADREPLY.rep_id.pid.fragment_no.domain.>, e.g. LOADREPLY.Item.1234.0.auction.com.>, and publishes a load request on LOAD.Item.1234.auction.com; the DB Connector publishes the reply (message 4) on LOADREPLY.rep_id.pid.fragment_no.domain.label, e.g. LOADREPLY.Item.1234.0.auction.com.computer.hardware.misc.]

Fig. 5. Object load with publish/subscribe.
`
`At the time the application accesses the storage object incarnation - by calling an
`accessor method - we either have snooped the state in the meantime and can save
`pulling the object from the data store, or we run into an object fault and initiate a
`synchronous load request. Figure 5 depicts the object fault situation for a storage
`object of type Item with identifier 1234 in the data store domain auction.com. Other
`nodes running a publish/subscribe session may benefit from snooping message
`number 4 – an example scenario is presented later in Section 5.
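The following fragment illustrates this access pattern from the application programmer's point of view; Item and ItemHome refer to the hypothetical generated classes sketched in Section 2.1, and the exact API is an assumption made for illustration, not the prototype's actual interface.

  // Illustrative application-side access (cf. Figure 5).
  #include <string>

  std::string show_item_description(ItemHome& home) {
    // key-based lookup: only a handle is returned; the data container of the
    // incarnation is not yet populated from the datastore
    Item* item = home.find_by_id("1234");

    // between lookup and access, the data container snoops LOADREPLY and
    // UPDATENOTIFY fragments for Item 1234 in domain auction.com

    // first accessor call: either the snooped state is already valid, or an
    // object fault publishes a synchronous LOAD request on the message bus
    return item ? item->description() : std::string();
  }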
`The proposed mechanism is realized by the object manager in the PSS Connector and
`is transparent to the user. The proposed object faulting technique extends lazy
`swizzling to the accessed root object, compared to lazy swizzling restricted to
`
`240
`
`Zynga Ex. 1013, p. 10
`Zynga v. IGT
`IPR2022-00368
`
`
`
`
`
`
`
`241
`
`contained references [20]. Fetching through collections of objects and navigating
`through an object tree are typical scenarios where lookup and access are separated in
`time and thus benefit most from the lazy swizzling with snooping.
`As mentioned above, the publish/subscribe PSS provides a supplementary finder
`operation find_by_label() which returns a collection of handles to storage
`object incarnations. Storage object instances can be assigned a label, which will
`become a postfix of the subject name in DB Connector reply messages as depicted in
Appendix A. The labeling mechanism exposes the subject-based addressing paradigm
to the server implementor, who can thus explicitly take additional advantage of the
publish/subscribe capabilities of the PSS implementation. By labeling a collection of
related objects, the
`application can issue subject-based queries to find all storage objects with a given
`label. In contrast to traditional object server approaches, the result returned by the DB
`Connector is a set of subject names merely representing the storage object instances.
`The data container part of the incarnations is eventually filled by snooping on the
`message bus. As labels can be hierarchically structured, storage objects can be
hierarchically categorized. The simple subject-based query mechanism is not meant
to replace a full-fledged query service, but comes with our prototype implementation
at no additional cost.
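The fragment below illustrates the label-based finder from the application's point of view, reusing the hypothetical ItemHome of Section 2.1 and assuming it additionally offers find_by_label(); the signature and the returned collection type are assumptions made for this sketch.

  // Illustrative use of the supplementary label-based finder.
  #include <cstddef>
  #include <string>
  #include <vector>

  std::size_t subscribe_to_category(ItemHome& home) {
    // subject-based query for all Item objects carrying the given label
    std::vector<Item*> items = home.find_by_label("computer.hardware.misc");
    // the handles are returned immediately; the data container of each
    // incarnation is filled in as LOADREPLY fragments are snooped later
    return items.size();
  }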
`
`4.1.2 Snooping and state reassembling
`As mentioned before, the data container of a storage object incarnation implements
the snooping algorithm. In order to collect the state of a storage object, the data
container may subscribe to LOADREPLY as well as to UPDATENOTIFY messages.
Depending on the storage type definition, the storage object state may be mapped to
different tables in the data store (see 4.3) and published on the message bus in
separate fragments, one per mapped table. The data container reassembles the
fragments according to a common request_id, which identifies a particular
request/reply interaction and which is enclosed in the message payload data (see
Appendix A).
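The following sketch illustrates this reassembly bookkeeping; the field names (request_id, fragment_no, n_fragments) are assumptions made for the illustration, the actual payload layout is defined in Appendix A.

  // Sketch of per-incarnation fragment reassembly keyed by request_id.
  #include <map>
  #include <string>

  struct Fragment {
    std::string request_id;   // identifies one request/reply interaction
    int fragment_no;          // one fragment per mapped table
    int n_fragments;          // total number of fragments of this state
    std::string table_state;  // marshaled column values of one table
  };

  class StateAssembler {
    std::map<int, std::string> parts_;
    std::string request_id_;
  public:
    // returns true once all fragments of one request_id have been collected
    bool add(const Fragment& f) {
      if (parts_.empty()) request_id_ = f.request_id;
      if (f.request_id != request_id_) return false;  // belongs to another reply
      parts_[f.fragment_no] = f.table_state;
      return static_cast<int>(parts_.size()) == f.n_fragments;
    }
  };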
`Given a specific incarnation, the data container object subscribes to the message bus
`using an appropriate subject mask. For example, to snoop for update notifications on
a storage object of type Item with identifier 1234 in data store domain auction.com, the
subject mask to use is “UPDATENOTIFY.Item.1234.*.auction.com.>”. The subject
mask for snooping load replies for the same storage object instance is
“LOADREPLY.Item.1234.*.auction.com.>”.
`Figure 6 summarizes the swizzling with snooping mechanism implemented by any
`data container in the object manager. Initially the handle to the incarnation is
unswizzled and snooping for loads and updates is initiated. Eventually, either snooping
of the complete collection of fragments finishes and the incarnation is switched to the
valid state, or an accessor is called beforehand. In the former case, the storage object state
`members can be accessed without going back to the data store. In the latter case, a
`blocking load request is published – in turn, replies to this request may be snooped by
`other PSS nodes. Once in a valid state, the storage object incarnation continuously
`tracks updates by snooping UPDATENOTIFY fragment messages.
`The construction of a valid state is