`
`
`DivX, LLC Exhibit 2014
`Page 2014 - 2
`Netflix Inc. et al. v. DivX, LLC, IPR2020-00614
`
`
`
`NAVAL POSTGRADUATE SCHOOL
`Monterey, California 93943-5000
`
`RADM Patrick W. Dunne
`President
`
`R. Elster
`Provost
`
`This material is based upon work supported by the National Science Foundation under Grant No.
`CNS-0430566. Any opinions, findings, and conclusions or recommendations expressed in this
`material are those of the authors and do not necessarily reflect the views of the National Science
`Foundation.
`
`This report was prepared by:
`
`Cynthia E. Irvine
`Professor
`
`Timothy E. Levin
`Research Associate Professor
`
Reviewed by:

Peter J. Denning
Department of Computer Science

Released by:

Leonard Ferrari
Associate Provost and
Dean of Research
`
`
`
`
`
`
`
`
`REPORT DOCUMENTATION PAGE
`
Form Approved

OMB No. 0704-0188
`
Public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources,
gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this
collection of information, including suggestions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215
Jefferson Davis Highway, Suite 1204, Arlington, VA 22202-4302, and to the Office of Management and Budget, Paperwork Reduction Project (0704-0188), Washington, DC
20503.
`1. AGENCY USE ONLY (Leave blank)
`
`2. REPORT DATE
`20 September 2005
`
`3. REPORT TYPE AND DATES COVERED
`Research; April 2005 to September 2005
`
`4. TITLE AND SUBTITLE
`
`Design Principles for Security
`
`5. FUNDING
`
Grant Number CNS-0430566
`
`6. AUTHOR(S)
`Terry V. Benzel, Cynthia E. Irvine, Timothy E. Levin, Ganesha
`Bhaskara, Thuy D. Nguyen, Paul C. Clark
`
7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES)

Naval Postgraduate School, Monterey, California, 93943

8. PERFORMING ORGANIZATION REPORT NUMBER

NPS-CS-05-010

9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES)

National Science Foundation, 4201 Wilson Blvd, Arlington, VA 22230
DARPA, 3701 Fairfax Drive, Arlington, VA 22203

10. SPONSORING/MONITORING AGENCY REPORT NUMBER
`
`11. SUPPLEMENTARY NOTES
`This material is based upon work supported by the National Science Foundation under Grant No. CNS-0430566 and CNS-
`0430598 with support from DARPA ATO. Any opinions, findings, and conclusions or recommendations expressed in this
`material are those of the authors and do not necessarily reflect the views of the National Science Foundation or of DARPA
`ATO.
`
12a. DISTRIBUTION/AVAILABILITY STATEMENT

Approved for public release;
distribution is unlimited.

12b. DISTRIBUTION CODE
`
`13. ABSTRACT (Maximum 200 words.)
`
`As a prelude to the clean-slate design for the SecureCore project, the fundamental security principles from more than four
`decades of research and development in information security technology were reviewed. As a result of advancing technology,
some of the early "principles" required re-examination. For example, previous worked examples of combinations of hardware
and software may have encountered problems of performance and extensibility, which may no longer exist in today's
environment. Moore's law in combination with other advances has yielded better-performing processors, memory and
context switching mechanisms. Secure systems design approaches to networking and communication are beginning to
emerge, and new technologies in hardware-assisted trusted platform development and processor virtualization open up
previously unavailable possibilities.
`
`The results of this analysis have been distilled into a review of the principles that underlie the design and
`implementation of trustworthy systems.
`
14. SUBJECT TERMS

Security, design principles, architecture, trust, trustworthy

15. NUMBER OF PAGES
34

16. PRICE CODE

17. SECURITY CLASSIFICATION OF REPORT
unclassified

18. SECURITY CLASSIFICATION OF THIS PAGE
unclassified

19. SECURITY CLASSIFICATION OF ABSTRACT
unclassified

20. LIMITATION OF ABSTRACT
None

NSN 7540-01-280-5800
`Standard Form 298 (Rev. 2-89)
`Prescribed by ANSI Std 239-18
`
`
`
`
`
`
`
`
`ISI-TR-605
`
`NPS-CS-05-010
`
`
`Trustworthy Commodity Computation and
`Communication
`
SecureCore Technical Report
`
`Design Principles for Security
`Terry V. Benzel, Cynthia E. Irvine, Timothy E. Levin, Ganesha Bhaskara,
`Thuy D. Nguyen, and Paul C. Clark
`
`
`
`
`This material is based upon work supported by the National Science Foundation under Grant No.
`CNS-0430566 and CNS-0430598 with support from DARPA ATO. Any opinions, findings, and
`conclusions or recommendations expressed in this material are those of the authors and do not
`necessarily reflect the views of the National Science Foundation or of DARPA ATO.
`
`Author Affiliations
`
`Naval Postgraduate School:
`
`Cynthia E. Irvine, Timothy E. Levin, Thuy D. Nguyen, and Paul C. Clark
`
`Center for Information Systems Security Studies and Research
`Computer Science Department
`Naval Postgraduate School
`Monterey, California 93943
`
`USC Information Sciences Institute:
`
`Terry V. Benzel and Ganesha Bhaskara
`
`Information Sciences Institute
`University of Southern California
`4676 Admiralty Way, Suite 1001
Marina del Rey, CA 90292
`
`
`
`
`Design Principles for Security
`
`TABLE OF CONTENTS
`
`1 2 3 4 445566 7 7 889 9
`
`10
`10
`10
`II
`
`11
`II
`12
`
`12
`12
`13
`14
`14
`15
`15
`16
`16
`17
`
`17
`17
`18
`18
`18
`
`19
`
`I.
`
`INTRODUCTION
`
`A. Definitions
`
`B.
`
`Security Design Principles Overview
`
`II. STRUCTURE
`
`A.
`
`Economy and Elegance
`Least Common Mechanism
`Clear Abstractions
`Partially Ordered Dependencies
`EtTiciently Mediated Access
`Minimized Sharing
`Reduced Complexity
`
`B.
`
`Secure System Evolution
`
`C.
`
`Trust
`Trusted Components
`Hierarchical Trust for Components
`Inverse Modification Threshold
`Hierarchical Protection
`Minimized Security Elements
`Least Privilege
`Self-reliant Trustworthiness
`
`D. Composition
`Secure Distributed Composition
`Trusted Communication Channels
`
`III. LOGIC AND FUNCTION
`Secure defaults
`Secure Failure
`Self Analysis
`Accountability and Traceability
`Continuous Protection of Information
`Economic Security
`Performance Security
`Ergonomic Security
`Acceptable Security
`
`IV. SYSTEM LIFE CYCLE
`Use Repeatable, Documented Procedures
`Procedural Rigor
`Secure System Modification
`SufTicient User Documentation
`
`V. COMMENTARY AND LESSONS LEARNED
`
`Trustworthy Commodity Computation and Communication
`
`iii
`
`DivX, LLC Exhibit 2014
`Page 2014 - 9
`Netflix Inc. et al. v. DivX, LLC, IPR2020-00614
`
`
`
`Include Security in Design from the Start
`The Philosopher's Stone
`Other Approaches to Secure System Composition
`The Reference Monitor
`Conflicts in Design Principles
`
`REFERENCES AND BIBLIOGRAPHY
`
`19
`19
`20
`20
`20
`
`21
`
`iv
`
`
`
`
`
I. Introduction

Security vulnerabilities are rampant throughout our information infrastructures. The
majority of commodity computing and communication platforms have been designed to
meet performance and functionality requirements with little attention to trustworthiness.
The transformation of traditional stand-alone computers into highly networked,
pervasive, and mobile computing systems profoundly increases the vulnerabilities of
current systems, and exacerbates the need for more trustworthy computing and
communications platforms.
`
While there is a significant history of secure systems design and development focusing
on one or more of the triad of hardware, networking and operating systems, there are few
worked examples [20]. To date, only special purpose systems begin to meet the
requirements to counter either the modern or historical threats. In spite of over thirty
years of research and development, a trustworthy product built at the commodity level
remains elusive.
`
`The SecureCore project is designing a secure integrated core for trustworthy operation of
`mobile computing devices consisting of: a security-aware processor, a small security
`kernel and a small set of essential secure communications protocols. The project is
`employing a clean slate approach to determine a minimal set of architectural features
`required for use in platforms exemplified by secure embedded systems and mobile
`computing devices.
`
`In addition to security, other factors including performance, size, cost and energy
`consumption must all be reasonably accounted for when building a secure system. These
`factors are especially important for viability in the commodity market, where client
`computing devices have constrained resources but high performance requirements. Our
`goal is not security at any price, but appropriate levels of security that permit desirable
`levels of performance, cost, size and battery consumption.
`
`As a prelude to our clean-slate design, we have reviewed the fundamental security
`principles from more than four decades of research and development in information
security technology. As a result of advancing technology, we found that some of the early
"principles" require re-examination. For example, previous worked examples of
combinations of hardware and software may have encountered problems of performance
and extensibility, which may no longer exist in today's environment. Moore's law in
combination with other advances has yielded better-performing processors, memory and
context switching mechanisms. Secure systems design approaches to networking and
communication are beginning to emerge, and new technologies in hardware-assisted
trusted platform development and processor virtualization open up previously
unavailable possibilities.
`
Our analysis of key principles for secure computing started with the landmark work of
Saltzer and Schroeder [25] and surveyed the refinement of these principles as systems
have evolved to the present. This report provides a distillation, synthesis and
organization of key security systems design principles, describes each principle, and
provides examples where needed for clarity. Although others have described various
principles and techniques for the development of secure systems, e.g. [3], [9], [22], [24],
`
`
`
`
`[25], [29], it was felt that a concise articulation of the principles as they are applied to the
`development of the most elemental components of a basic security system would be
`useful. In developing this report we have focused on the principles as they may be most
`applicable to SecureCore. A later report, "SecureCore Architecture and Requirements"
[12], uses these principles to define a high-level architecture for SecureCore and a set
`of requirements, which will then be refined into a design specification. As a separate
`component of this work, a series of analysis reports will compare and contrast
`SecureCore to alternative modern information technology projects such as the TCG TPM
`[30] and various virtual machine based systems.
`
`A common limitation of previous and ongoing efforts to articulate secure software
`development principles is the premise that "security vulnerabilities result from defects
`that are unintentionally introduced into the software during design and development" [6].
`In contrast to those efforts and the software engineering "safety" paradigm upon which
they rely, the articulation of design principles for SecureCore differs in two ways. First,
our perspective not only acknowledges the risk of unintentional flaws, it explicitly
assumes that unspecified functionality may be intentional. An adversary within the
`development process is assumed. Second, our analysis considers both the design of
`components as well as the composition of components to form a coherent security
`architecture that takes into account hardware, software and networking design elements.
`
`The remainder of this section provides the definitions for commonly used terms, and an
`illustration of our overall taxonomy of security principles. Following this we present, in
`separate sections, the principles for: structure, logic and function, and system lifecycle.
`Finally, we end with some "lessons from the past," and identify some potential conflicts
`in the application of the described principles.
`
A. Definitions
`
`Component: any part of a system that, by itself, provides all or a portion of the total
`functionality required of a system. A component is recursively defined to be an individual
`unit, not useful to further subdivide, or a collection of components up to and including
`the entire system. A component may be software, hardware, etc. For this report it is
`assumed that an atomic component - one not consisting of other components - may
`implement one or more different functions, but the degree of trustworthiness of the
`component is homogeneous across its functions.
`
`A system is made up of one or more components, which may be linked (interact through
`the same processor), tightly coupled (e.g., share a bus), distributed (interact over a wire
`protocol), etc.
`
`Failure: a condition in which, given a specifically documented input that conforms to
`specification, a component or system exhibits behavior that deviates from its specified
`behavior.
`
`Module: a unit of computation that encapsulates a database and provides an interface for
`the initialization, modification, and retrieval of information from the database. The
`database may be either implicit, e.g. an algorithm, or explicit.
`
`Process: a program in execution.
`
`2
`
`DivX, LLC Exhibit 2014
`Page 2014 - 12
`Netflix Inc. et al. v. DivX, LLC, IPR2020-00614
`
`
`
`
`Reference Monitor Concept: an access control concept that refers to an abstract machine
`that mediates all accesses to objects by active entities. By definition, the ideal mechanism
`is protected from unauthorized modification and can be analyzed for correctness [2].
`
`Security Mechanisms: system artifacts that are used to enforce system security policies.
`
Security Principles: guidelines or rules that, when followed during system design, will aid
in making the system secure.
`
`Security Policies: Organizational Security Policies are "the set of laws, rules, and
`practices that regulate how an organization manages, protects, and distributes sensitive
information." [28] System Security Policies are rules that the information system
enforces relative to the resources under its control to reflect the organizational security
`policy. In this document, "security policy" will refer to the latter meaning, unless
`otherwise specified.
`
`Service: processing or protection provided by a component to users or other components.
`E.g., communication service (TCP/IP), security service (encryption, firewall).
`
Trustworthy (noun): the degree to which the security behavior of the component is
demonstrably compliant with its stated functionality (e.g., trustworthy component).
`
Trust (verb): the degree to which the user or a component depends on the trustworthiness
`of another component. For example, component A trusts component B, or component B
`is trusted by component A. Trust and trustworthiness are assumed to be measured on the
`same scale.
`
B. Security Design Principles Overview
`Security design principles can be organized into logical groups, which are illustrated in
`Figure 1. The logical groupings for the principles are in shaded boxes whereas the
`principles appear in clear boxes. For example, Least Privilege is a principle and appears
`grouped under Structure/Trust. In the case of "Secure System Evolution," the principle is
`in its own group.
`
`
`
`
`
`
`
separate implementations of the same function; rather, the function is created once.
Examples of the application of this principle include device drivers, libraries, and
operating system resource managers.
`
`Using least common mechanism will help to minimize the complexity of the system by
`avoiding unnecessary duplicate mechanisms. Another benefit is maintainability, since
`modifications to the common function can be performed (only) once, and the impact of
`proposed modifications can be more easily understood in advance. Also, the use of
`common mechanisms will facilitate the construction and analysis of (1) non-by-passable
`system properties and (2) the encapsulation of data (see also "Minimized Sharing").
`
`Consideration should be given to the problem of persistent state as it relates to a common
`mechanism. The common mechanism may need to retain state related to the context of
the calling component. Whenever possible, the system should be organized to avoid this
since: (1) retention of state information can result in significant increases in complexity,
and (2) it can result in state that is shared by multiple components (see "Minimized
Sharing"). Sometimes various forms of linking can permit a common mechanism to
utilize state information specific to the calling component, and, with sufficient low-level
support, the mechanism can even assume the privilege attributes of its calling component
[10].
`
`Clear Abstractions
`
`The principle of clear abstractions states that a system should have simple, well-defined
`interfaces that clearly represent the data and functions provided. The elegance (e.g.,
`clarity, simplicity, necessity, sufficiency) of the system interfaces, combined with a
`precise definition of their behavior promotes thorough analysis, inspection and testing as
`well as correct and secure use of the system. Clarity of abstractions is difficult to
`quantify and a description will not be attempted here. Some of the techniques used to
create simple interfaces are: the avoidance of redundant entry points, the avoidance of
`overloading the semantics of entry points and of the parameters used at entry points, and
`the elimination of unused entry points to components.
`
`Information hiding [23] is a design discipline for ensuring that the internal representation
`of information does not unnecessarily perturb the correct abstract representation of that
`data at an interface (see also Secure System Evolution).
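As an illustrative sketch (the KeyStore interface is hypothetical, not drawn from SecureCore), single-purpose entry points with non-overloaded parameters, and an internal representation hidden behind them, might look like:

```python
class KeyStore:
    """Illustrative component with clear, single-purpose entry points."""
    def __init__(self):
        self._keys = {}  # internal representation, hidden from callers

    def store_key(self, name, key_bytes):
        """One purpose, non-overloaded parameters: bind key material to a name."""
        self._keys[name] = bytes(key_bytes)

    def fetch_key(self, name):
        """One purpose: return the key material bound to a name; no side effects."""
        return self._keys[name]

ks = KeyStore()
ks.store_key("session", b"\x01\x02")
```

The alternative, a single "do-everything" entry point whose semantics depend on a mode flag, overloads both the entry point and its parameters, and is correspondingly harder to analyze, inspect, and test.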
`
`Partially Ordered Dependencies
`
`In applying the principle of least common mechanism, if the shared mechanism also
`makes calls to or otherwise depends on services of the calling mechanisms, creating a
`circular dependency, performance and liveness problems can result. The principle of
partially ordered dependencies says that the calling, synchronization and other
dependencies in the system should be partially ordered.
`
A fundamental tool in system design is that of layering [8], whereby the system is
`organized into functionally related modules or components, and where the layers are
`linearly ordered with respect to inter-layer dependencies. While a partial ordering of all
`functions in a given system may not be possible, if circular dependencies are constrained
`to occur within layers, the inherent problems of circularity can be more easily managed
`
`
`
`
`[26].
`
`Partially ordered dependencies and system layering contribute significantly to the
`simplicity and coherency of the system design (see also "Assurance through Reduced
Complexity").
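The layering discipline can be checked mechanically. A minimal sketch, with a hypothetical module-to-layer assignment: inter-layer dependencies must point strictly downward, while dependencies within a single layer (possibly circular) are tolerated, per the principle:

```python
# Hypothetical layer assignment for the modules of a small system.
LAYER = {"hw": 0, "kernel": 1, "services": 2, "app": 3}

# Dependency edges: (a, b) means "a depends on b".
DEPENDS_ON = [
    ("app", "services"),
    ("services", "kernel"),
    ("services", "services"),  # intra-layer dependency: permitted
    ("kernel", "hw"),
]

def upward_dependencies(deps, layer):
    """Return inter-layer dependencies that violate the linear layer ordering."""
    return [(a, b) for a, b in deps if layer[a] < layer[b]]
```

An empty result means the inter-layer dependencies are linearly ordered; any edge reported points upward and would need to be redesigned or confined within a layer.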
`
`Efficiently Mediated Access
`
`The mediation of access to resources is often the predominant security function of
`security systems, which can result
`in performance bottlenecks if the system is not
`designed correctly. The principle of efficiently mediated access [1] states that the access
`control mechanism for each subset of the policy should be performed by the most
`efficient system mechanism available while respecting layering and still meeting system
`flexibility requirements. A good example of this is the use of hardware memory
`management mechanisms to implement various access control functions, e.g. [10], [27].
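By analogy, the following sketch (all names hypothetical) pays the policy-check cost once, when a mapping is established, much as hardware memory management does when page tables are set up; individual accesses thereafter need no further mediation:

```python
class ReferenceValidator:
    """Toy policy engine: the expensive check happens once, at mapping time."""
    def __init__(self, acl):
        self.acl = acl  # {(subject, resource): set of permitted modes}

    def map_readonly(self, subject, resource, data):
        if "read" not in self.acl.get((subject, resource), set()):
            raise PermissionError(subject)
        # A read-only memoryview stands in for a hardware mapping: once
        # established, each access is cheap and invokes no policy code.
        return memoryview(data)

rv = ReferenceValidator({("proc1", "page0"): {"read"}})
view = rv.map_readonly("proc1", "page0", b"page contents")
```

Revocation and layering constraints are deliberately omitted here; the point is only that the most efficient available mechanism carries the steady-state access load.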
`
`Minimized Sharing
`
`The principle of minimized sharing states that no computer resource should be shared
`between components or subjects (e.g., processes, functions, etc.) unless it is necessary to
`do so. Minimized sharing helps to simplify the design and implementation. It is evident
`that in order to protect user-domain information from active entities, no information
`should be shared unless that sharing has been explicitly requested and granted (see also
`"Secure Defaults"). For internal entities, sharing can be motivated by the principle of
`least common mechanism, as well as to support user-domain sharing. However, internal
`sharing must be carefully designed to avoid performance and covert channel problems
`[17]. There are various mechanisms to avoid sharing and mitigate the problems with
`internal sharing.
`
To minimize the sharing induced by common mechanisms, they can be designed to be
re-entrant or virtualized, so that each component depending on that mechanism will have a
`virtual private data space. Virtualization logically partitions the resource into discrete,
`private subsets for each dependent component. The shared resource is not directly
`accessible by the dependent components. Instead an interface is created that provides
`access to the private resource subsets.
`Practically any resource can be virtualized,
`including the processor, memory and devices. Encapsulation is a design discipline or
`compiler feature for ensuring there are no extraneous execution paths for accessing the
`private subsets (see also "information hiding," under Secure System Evolution). Some
`systems use global data to share information among components. A problem with this
`approach is that it may be difficult to determine how the information is being managed
`[31]. Even though the original designer may have intended that only one component
`perform updates on the information, the lack of encapsulation allows any component to
`do so.
`
`To avoid covert timing channels, in which the processor is one of the shared components,
`a scheduling algorithm can ensure that each depending component is allocated a fixed
`amount of time [11]. A development technique for controlled sharing is to require the
`execution durations of shared mechanisms (or the mechanisms and data structures that
`determine its duration), to be explicitly stated in the design specification, so that the
`effects of sharing can be verified.
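The virtualization approach described above might be sketched as follows; the store and component names are illustrative only. The shared backing resource is never handed out directly, so no component can reach another's data through it:

```python
class VirtualizedStore:
    """A shared resource handed out only as private per-component views."""
    def __init__(self):
        self._backing = {}  # the shared resource; never exposed directly

    def partition(self, component_id):
        backing = self._backing

        class PrivateView:
            """Interface to one component's private subset of the resource."""
            def put(self, key, value):
                backing[(component_id, key)] = value

            def get(self, key):
                return backing[(component_id, key)]

        return PrivateView()

store = VirtualizedStore()
view_a = store.partition("comp_a")
view_b = store.partition("comp_b")
view_a.put("x", 1)
view_b.put("x", 2)  # same key name, but a disjoint private subset
```

The interface object encapsulates the partitioning: there is no execution path by which one view can name another component's subset.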
`
`6
`
`DivX, LLC Exhibit 2014
`Page 2014 - 16
`Netflix Inc. et al. v. DivX, LLC, IPR2020-00614
`
`
`
`
`Reduced Complexity
`
`Given the current state of the art, a conservative assumption must be that every complex
`system will contain vulnerabilities, and it will be impossible to eliminate all of them,
`even in the most highly trustworthy of systems. Application of the principle of reduced
`complexity contributes to the ability to understand the correctness and completeness of
system security functions, and facilitates identification of potential vulnerabilities. The
corollary of reduced complexity states that the simpler a system is, the fewer
vulnerabilities it will have. An example of this is a bank auto teller, which, due to the
`simplicity of its interface (a very limited set of requests), has relatively few functional
`security vulnerabilities compared to many other widely used security mechanisms.
`
`From the perspective of security, the benefit to this simplicity is that it is easier to
`understand whether the intended security policy has been captured in the system design.
`For example, at the security model level, it can be easier to determine whether the initial
`system state is secure and whether subsequent state changes preserve the system security
`properties.
B. Secure System Evolution
`
`The principle of secure system evolution states that a system should be built to facilitate
`the maintenance of its security properties in the face of changes to its interface,
functionality, structure or configuration. These changes may include upgrades to the
system, maintenance activities, reconfiguration, etc. (see also Secure System
Modification, and Secure Failures). The benefits of this principle include reduced
`lifecycle costs for the vendor, reduced cost of ownership for the user, as well as improved
`system security. Just as it is easier to build trustworthiness into a system from the outset
`(and for highly trustworthy systems, impossible to achieve without doing so), it is easier
`to plan for change than to be surprised by it.
`
`Although it is not possible to plan for every possibility, most systems can anticipate
`maintenance, upgrades, and changes to their configurations. For example, a component
`may implement a computationally intensive algorithm.
`If a more efficient approach to
`solving the problem emerges, then if the component is constructed using the precepts of
`modularity and information hiding [14], [15], [23],
`it will be easier to replace the
`algorithm without disrupting the rest of the system.
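A minimal sketch of this use of modularity and information hiding, with hypothetical algorithm names: the checksum algorithm is hidden behind a stable interface, so a more efficient implementation can be swapped in without disturbing any caller:

```python
class IntegrityChecker:
    """Stable interface; the algorithm behind it is a hidden detail."""
    def __init__(self, algorithm):
        self._algorithm = algorithm

    def digest(self, data):
        return self._algorithm(data)

def simple_sum(data):
    """Original, computationally naive algorithm."""
    return sum(data) % 256

def xor_fold(data):
    """Hypothetical more efficient replacement; callers are untouched."""
    acc = 0
    for b in data:
        acc ^= b
    return acc

checker = IntegrityChecker(simple_sum)
upgraded = IntegrityChecker(xor_fold)  # swapped without disrupting the system
```

Every caller depends only on the `digest` interface, so replacing the algorithm is a local change rather than a system-wide one.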
`
`Rather than constructing the system with a fixed set of operating parameters, or requiring
`a recompilation of the system to change its configuration, startup or runtime interfaces
`can provide for reconfiguration. In the latter case, the system designer needs to take into
`account the impact dynamic reconfiguration will have on secure state.
`
`Interoperability can be supported by encapsulation at the macro level: internal details are
`hidden and standard interfaces and protocols are used. For scalability, the system can be
`designed so that it may easily accommodate more network connections, more or faster
`processors, or additional devices. A measure of availability can be planned into the
`system by replication of services and mechanisms to manage an increase in demand, or a
`failure of components.
`
`Constructing a system for evolution is not without limits. To expect that complex systems
`will remain secure in contexts not envisioned during development, whether
`
`
`
`
`environmental or related to usage, is unrealistic. It is possible that a system may be secure
`in some new contexts, but there is no guarantee that its "emergent" behavior will always
`be secure.
`
C. Trust
`
`Trusted Components
`
`The principle of trusted components states that a component must be trustworthy to at
`least a level commensurate with the security dependencies it supports (i.e., how much it
`is trusted to perform its security functions by other components). This principle enables
`the composition of components such that trustworthiness is not inadvertently diminished
`and consequently, where trust is not misplaced.
`
`Ultimately this principle demands some metric by which the trust in a component and the
`trustworthiness of a component can be measured; we assume these measurements are on
the same, abstract, scale. This principle is particularly relevant when considering systems
in which there are complex "chains" of trust dependencies.
`
`A compound component consists of several subcomponents, which may have varying
`levels of trustworthiness. The conservative assumption is that the overall trustworthiness
`of a compound component is that of its least trustworthy subcomponent. It may be
`possible to provide a security engineering rationale that the trustworthiness of a particular
`compound component is greater than the conservative assumption, but a general analysis
`to support such a rationale is outside of the scope of this report.
`
This principle is stated more formally:

Basic types

    component

    t: integer /* level of trust or trustworthiness - this is cast as integer for
    convenience - any linear ordering will do */

System constant functions and their axioms

    subcomponent(a, b: component): boolean /* a is a subcomponent of b */

    depend(a, b: component): boolean /* a depends on b */

    sec_depend(a, b: component): boolean /* a has a security dependency on b */

    axiom 1: ∀ a, b: component(
        sec_depend(a, b) → depend(a, b)) /* but not vice versa */

    trust(a, b: component): t /* the degree of sec_depend */

    axiom 2: ∀ a, b: component(
        sec_depend(a, b) → trust(a, b) > 0)

    trustworthy(a: component): t /* a is trustworthy to the degree of t */

    axiom 3: ∀ a, b: component( /* sub-component trustworthiness */
        subcomponent(a, b) → trustworthy(b) ≤ trustworthy(a))

Principle of trusted components:

    ∀ a: component(
        ∀ b: component(sec_depend(a, b) →
            trust(a, b) ≤ trustworthy(b)))
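The formal statement can also be rendered executably. The components and trust values below are purely illustrative, and trust and trustworthiness share one integer scale, as the text assumes:

```python
# Illustrative trustworthiness levels on a single integer scale.
TRUSTWORTHY = {"kernel": 3, "driver": 2, "app": 1}

# TRUST[(a, b)] > 0 records that a has a security dependency on b (axiom 2),
# which in turn implies an ordinary dependency (axiom 1, not modeled here).
TRUST = {("app", "driver"): 2, ("driver", "kernel"): 3}

def satisfies_principle(trust, trustworthy):
    """Trusted components: whenever a sec_depends on b, trust(a,b) <= trustworthy(b)."""
    return all(t <= trustworthy[b] for (_, b), t in trust.items())
```

A configuration in which some component is trusted beyond the trustworthiness of what it depends on, for example trusting the driver at level 3 when it is only trustworthy to level 2, violates the principle.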
`
`Hierarchical Trust for Components
`
`The corollary of hierarchical trust for components states that the security dependencies in
`a system will form a partial ordering if they preserve the principle of trusted components.
`To be able to analyze a system comprised of heterogeneously trustworthy components for
`its overall trustworthiness, it is essential to eliminate circular dependencies with regard to
`trustworthiness. Clearly, if a more trustworthy component located in a lower layer of the
`system were to depend upon a less trustworthy component in a higher layer, this would,
`in effect, put them in the same equivalence class: less trustworthy.
`
`Trust chains have various manifestations. For example, the root certificate of a certificate
`hierarchy is the most trusted node in the hierarchy, whereas the leaves may be the least
`trustworthy nodes in the hierarchy. Another example occurs in a layered high assurance
`secure system where the security kernel (including the hardware base), which is located
`at the lowest layer of the system, is the most trustworthy component.
`
`This principle does not prohibit the use of overly trustworthy components. For example,
`in a low-trust system the designer may choose to use a highly trustworthy component,
`rather than one that is less trustworthy because of availability or other criteria (e.g., an
`open source based product might be preferred). In this case, the dependency of the highly
`trustworthy component upon a less trustworthy component does not degrade the overall
`trustworthiness of the resulting system.
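The required partial ordering can be verified by checking the sec_depend graph for cycles. A sketch, with illustrative edges; a depth-first search suffices:

```python
def has_circular_trust(edges):
    """Detect a cycle in a sec_depend graph given as (a, b) edges: a depends on b."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
    visiting, done = set(), set()

    def visit(node):
        if node in done:
            return False
        if node in visiting:
            return True  # back edge: a circular trust dependency
        visiting.add(node)
        cyclic = any(visit(nxt) for nxt in graph.get(node, []))
        visiting.discard(node)
        done.add(node)
        return cyclic

    return any(visit(n) for n in list(graph))
```

An acyclic sec_depend graph admits the partial ordering the corollary requires; a cycle would place the components involved in the same equivalence class: less trustworthy.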
`
`Inverse Modification Threshold
`
The corollary of inverse modification threshold states that the degree of protection
provided to a component must be commensurate with its trustworthiness. In other words,
as the criticality of (i.e., trust in) a component increases, the protections against its
unauthorized modification should also increase. This protection can come in the form of
`the component's own self-protection and trustworthiness, or from protections afforded to
`the component from other elements or attributes of the architecture. Unauthorized
`modification could take place through penetration of the component (e.g., an attack that
`
`
`
`
bypasses the intended interfaces), misuse of poorly designed interfaces, or
surreptitiously placed trapdoors.
`
`Techniques to show the absence of trapdoors and penetration vulnerabilities can be
`applied to the construction of highly trustworthy components.
`Examples of the
`application of this principle can be seen in the hardware, microcode, and low level
`software of trustworthy systems: none of these elements is easy to modify.
`
`Hierarchical Protection
`
The principle of hierarchical protection states that a component need not be protected
from more trustworthy components. In the degenerate case of the most trusted
component, it must protect itself from all other components. In another example, a
trusted computer system need not protect itself from an equally trustworthy user,
reflecting use of untrusted systems in "system high" environments where the users are
highly trustworthy.
`
`Minimized Security Elements
`
`The principle of minimized security elements states that the system should not have
`extraneous trusted components. This