BEFORE THE PATENT TRIAL AND APPEAL BOARD

DOCKER INC.,
Petitioner,

v.

INTELLECTUAL VENTURES II LLC,
Patent Owner.

Case No. IPR2025-00840

U.S. Patent No. 8,332,844

Declaration of Erez Zadok in Support of
Petition for Inter Partes Review of U.S. Patent No. 8,332,844

TABLE OF CONTENTS

EXHIBIT LIST
TABLE OF ABBREVIATIONS
I. ASSIGNMENT
II. BACKGROUND AND QUALIFICATIONS
III. MATERIALS CONSIDERED
IV. LEVEL OF ORDINARY SKILL IN THE ART
V. RELEVANT LEGAL STANDARDS
VI. SUMMARY OF OPINIONS
VII. TECHNOLOGY OVERVIEW
   A. General Computer Operations
      1. Networking Overview
   B. Data Storage Devices and File Systems
      1. Files and File Systems
   C. Networked and Distributed Storage Systems
   D. Caching Technologies
      1. Least-Recently Used (LRU)
      2. Least-Frequently Used (LFU)
   E. Copy-on-Write and Snapshots
   F. Indexing
   G. Virtualization
VIII. THE ’844 PATENT
   A. Overview
   B. The ’844 Patent Discloses Indexing at the Block Layer
   C. The ’844 Patent’s Union Block Device (UBD)
   D. Prosecution History
      1. The ’477 Application
      2. The ’622 Application
      3. Priority Date
IX. THE CLAIMS
X. OVERVIEW OF THE PRIOR ART
   A. Menage
   B. Murphy
   C. Birse
   D. Rothman
XI. CLAIM CONSTRUCTION
XII. DETAILED EXPLANATION OF GROUNDS
   A. Ground 1: Menage Renders Claims 1-13 Obvious
      1. Claim 1
      2. Claim 2: The system as recited in claim 1 wherein said cache is configured to store X most recently accessed blocks of said root image, and wherein X represents a cache threshold value.
      3. Claim 3: The system as recited in claim 1 wherein said first storage unit, said second storage units, and said cache are contained within a single storage appliance.
      4. Claim 4: The system as recited in claim 1 further comprising: a plurality of union block devices configured to interface between respective compute nodes and said first storage unit, respective second storage units, and said cache to distribute application environments to the compute nodes, wherein said union block devices are configured to create said application environments by merging the blocks of said root image with the blocks of respective leaf images.
      5. Claim 5: The system as recited in claim 4 wherein said union block devices comprise low-level drivers for interfacing between the file systems of respective compute nodes and said first storage unit, respective second storage units, and said cache.
      6. Claim 6: The system as recited in claim 1 wherein said first storage unit is read-only.
      7. Claim 7
      8. Claim 8: The method as recited in claim 7 further comprising: receiving a read request from at least one of said compute nodes, wherein a first portion of the data requested is currently stored in said cache memory; and providing said first portion of said data to said at least one of said compute nodes from said cache memory.
      9. Claim 9: The method as recited in claim 8 further comprising: updating said cache memory based on said read request.
      10. Claim 10: The method as recited in claim 9 wherein a second portion of the data requested is not currently stored in said cache memory and said updating comprises: caching said second portion in said cache memory; and removing the least recently accessed data from said cache memory if the amount of data in said cache memory is above a threshold value.
      11. Claim 11: The method as recited in claim 7 further comprising: merging the blocks of said root image with the blocks of respective leaf images to create cohesive respective application environments.
      12. Claim 12: The method as recited in claim 11 wherein said merging occurs at an operational level between file systems of the respective compute nodes and said first storage unit, respective second storage units, and said cache memory.
      13. Claim 13: The method as recited in claim 7 wherein said first storage unit is read-only.
   B. Ground 2: Menage in View of Murphy Renders Claims 14-27 Obvious
      1. Motivation to Combine Menage with Murphy
      2. Claim 14
      3. Claim 15: The system as recited in claim 14 wherein said first storage unit and said second storage units are contained within a single storage appliance.
      4. Claim 16: The system as recited in claim 14 further comprising: a plurality of union block devices configured to interface between respective compute nodes and said first storage unit and respective second storage units, said union block devices configured to distribute application environments to the compute nodes, wherein said union block devices are configured to create said application environments by merging the blocks of said root image with the blocks of respective leaf images.
      5. Claim 17: The system as recited in claim 16 wherein said union block devices comprise low-level drivers for interfacing between the file systems of respective compute nodes and said first storage unit, respective second storage units, and said cache.
      6. Claim 18: The system as recited in claim 14 wherein said first storage unit is read-only.
      7. Claim 19
      8. Claim 20: The method as recited in claim 19 further comprising: storing said indexing results on a shared storage unit.
      9. Claim 21: The method as recited in claim 19 wherein said merging occurs at an operational level between respective file systems of the compute nodes and said first storage unit and respective second storage units.
      10. Claim 22: The method as recited in claim 19 wherein said first storage unit is read-only.
      11. Claim 23
      12. Claim 24: The computer-readable storage medium of claim 23, wherein the instructions further cause the at least one computing device to store said results of said indexing on a shared storage unit accessible by said second compute node.
      13. Claim 25: The computer-readable storage medium of claim 23, wherein the instructions further cause the at least one computing device to index said leaf image portion.
      14. Claim 26: The computer-readable storage medium of claim 25, wherein the instructions further cause the at least one computing device to re-index said file system by re-indexing said leaf image portion and merging the results of said re-indexing of said leaf image portion with said results of said indexing of said root image portion.
      15. Claim 27: The computer-readable storage medium of claim 25, wherein the instructions further cause the at least one computing device to re-index said file system by re-indexing said leaf image portion and merging the results of said re-indexing of said leaf image portion with said results of said indexing of said root image portion.
   C. Ground 3: Birse in View of Rothman Renders Claims 1-13 Obvious
      1. Motivation to Combine Birse with Rothman
      2. Claim 1
      3. Claim 2: The system as recited in claim 1 wherein said cache is configured to store X most recently accessed blocks of said root image, and wherein X represents a cache threshold value.
      4. Claim 3: The system as recited in claim 1 wherein said first storage unit, said second storage units, and said cache are contained within a single storage appliance.
      5. Claim 4: The system as recited in claim 1 further comprising: a plurality of union block devices configured to interface between respective compute nodes and said first storage unit, respective second storage units, and said cache to distribute application environments to the compute nodes, wherein said union block devices are configured to create said application environments by merging the blocks of said root image with the blocks of respective leaf images.
      6. Claim 5: The system as recited in claim 4 wherein said union block devices comprise low-level drivers for interfacing between the file systems of respective compute nodes and said first storage unit, respective second storage units, and said cache.
      7. Claim 6: The system as recited in claim 1 wherein said first storage unit is read-only.
      8. Claim 7
      9. Claim 8: The method as recited in claim 7 further comprising: receiving a read request from at least one of said compute nodes, wherein a first portion of the data requested is currently stored in said cache memory; and providing said first portion of said data to said at least one of said compute nodes from said cache memory.
      10. Claim 9: The method as recited in claim 8 further comprising: updating said cache memory based on said read request.
      11. Claim 10: The method as recited in claim 9 wherein a second portion of the data requested is not currently stored in said cache memory and said updating comprises: caching said second portion in said cache memory; and removing the least recently accessed data from said cache memory if the amount of data in said cache memory is above a threshold value.
      12. Claim 11: The method as recited in claim 7 further comprising: merging the blocks of said root image with the blocks of respective leaf images to create cohesive respective application environments.
      13. Claim 12: The method as recited in claim 11 wherein said merging occurs at an operational level between file systems of the respective compute nodes and said first storage unit, respective second storage units, and said cache memory.
      14. Claim 13: The method as recited in claim 7 wherein said first storage unit is read-only.
   D. Ground 4: Birse in View of Murphy Renders Claims 14-27 Obvious
      1. Motivation to Combine Birse with Murphy
      2. Claim 14
      3. Claim 15: The system as recited in claim 14 wherein said first storage unit and said second storage units are contained within a single storage appliance.
      4. Claim 16: The system as recited in claim 14 further comprising: a plurality of union block devices configured to interface between respective compute nodes and said first storage unit and respective second storage units, said union block devices configured to distribute application environments to the compute nodes, wherein said union block devices are configured to create said application environments by merging the blocks of said root image with the blocks of respective leaf images.
      5. Claim 17: The system as recited in claim 16 wherein said union block devices comprise low-level drivers for interfacing between the file systems of respective compute nodes and said first storage unit, respective second storage units, and said cache.
      6. Claim 18: The system as recited in claim 14 wherein said first storage unit is read-only.
      7. Claim 19
      8. Claim 20: The method as recited in claim 19 further comprising: storing said indexing results on a shared storage unit.
      9. Claim 21: The method as recited in claim 19 wherein said merging occurs at an operational level between respective file systems of the compute nodes and said first storage unit and respective second storage units.
      10. Claim 22: The method as recited in claim 19 wherein said first storage unit is read-only.
      11. Claim 23
      12. Claim 24: The computer-readable storage medium of claim 23, wherein the instructions further cause the at least one computing device to store said results of said indexing on a shared storage unit accessible by said second compute node.
      13. Claim 25: The computer-readable storage medium of claim 23, wherein the instructions further cause the at least one computing device to index said leaf image portion.
      14. Claim 26: The computer-readable storage medium of claim 25, wherein the instructions further cause the at least one computing device to re-index said file system by re-indexing said leaf image portion and merging the results of said re-indexing of said leaf image portion with said results of said indexing of said root image portion.
      15. Claim 27: The computer-readable storage medium of claim 25, wherein the instructions further cause the at least one computing device to re-index said file system by re-indexing said leaf image portion and merging the results of said re-indexing of said leaf image portion with said results of said indexing of said root image portion.
XIII. SECONDARY CONSIDERATIONS OF NON-OBVIOUSNESS
XIV. CONCLUSION

EXHIBIT LIST

Exhibit  Description
1001     U.S. Patent No. 8,332,844
1002     File History for U.S. Patent No. 8,332,844
1004     U.S. Patent No. 6,618,736
1005     U.S. Patent No. 7,395,324
1006     U.S. Patent No. 7,089,300
1007     U.S. Patent No. 7,398,382
1008     File History for U.S. Patent Appl. No. 11/395,816
1009     A. Silberschatz & P. B. Galvin, Operating System Concepts (4th ed. 1994)
1010     D. P. Bovet & M. Cesati, Understanding the Linux Kernel (1st ed. 2000)
1011     U.S. Patent No. 5,313,646
1012     File History for U.S. Patent Appl. No. 11/026,622
1013     Excerpts from Microsoft Computer Dictionary (3rd ed. 1997)
1014     Internet Small Computer Systems Interface (iSCSI), RFC 3720, IETF (April 2004), https://www.rfc-editor.org/rfc/pdfrfc/rfc3720.txt.pdf
1015     Dave Hitz et al., Network Appliance Inc., File System Design for an NFS File Server Appliance, USENIX 1994
1016     U.S. Patent No. 7,334,095
1017     Hugo Patterson et al., Network Appliance Inc., File System Based Asynchronous Mirroring for Disaster Recovery, USENIX 2002
1018     U.S. Patent No. 6,668,264
1019     File History for U.S. Patent Appl. No. 11/395,816

TABLE OF ABBREVIATIONS

Abbreviation       Term
’477 application   U.S. Patent Appl. No. 11/709,477
’816 application   U.S. Patent Appl. No. 11/395,816
’622 application   U.S. Patent Appl. No. 11/026,622
’844 patent        U.S. Patent No. 8,332,844 (EX1001)
AIA                America Invents Act
Birse              U.S. Patent No. 7,089,300 (EX1006)
Challenged Claims  Claims 1–27 of U.S. Patent No. 8,332,844
Fair               U.S. Patent No. 7,334,095 (EX1016)
Fig.               Figure
Hitz               Dave Hitz et al., Network Appliance Inc., File System Design for an NFS File Server Appliance, USENIX 1994 (EX1015)
IPR                inter partes review
Menage             U.S. Patent No. 6,618,736 (EX1004)
Murphy             U.S. Patent No. 7,395,324 (EX1005)
Patent Owner       Intellectual Ventures II LLC
Patterson I        Hugo Patterson et al., Network Appliance Inc., File System Based Asynchronous Mirroring for Disaster Recovery, USENIX 2002 (EX1017)
Patterson II       U.S. Patent No. 6,668,264 (EX1018)
Petitioner         Docker Inc.
POSITA             Person[s] of ordinary skill in the art
Rothman            U.S. Patent No. 7,398,382 (EX1007)

I, Erez Zadok, Ph.D., declare that:

I. ASSIGNMENT

1. I have been retained by Docker Inc. (“Petitioner”) as an independent expert consultant in this proceeding before the United States Patent and Trademark Office (“PTO”).

2. My consulting company, Zadoks Consulting Services, is being compensated for my time at my standard consulting rate. I am also being reimbursed for expenses that I may incur during the course of this work.

3. My compensation is in no way contingent on the nature of my findings, the presentation of my findings in testimony, or the outcome of this or any other proceeding. I have no other interest in this proceeding.

4. I have been asked to consider whether certain references disclose or suggest the features recited in the claims of U.S. Patent No. 8,332,844 (“the ’844 patent”) (EX1001).1 My opinions are set forth below.

1 Where appropriate, I refer to the exhibits enumerated in the “Exhibit List” above, which I understand will be attached to the petition for inter partes review of the ’844 patent (the “Petition”). All emphasis in quoted portions of the exhibits has been added, unless otherwise noted.

II. BACKGROUND AND QUALIFICATIONS

5. I am a Professor in the Computer Science Department at Stony Brook University (part of the State University of New York (“SUNY”) system). I direct the File-systems and Storage Lab (FSL) at Stony Brook’s Computer Science Department. My research interests include file systems and storage systems, operating systems, transactional systems including database technologies, information technology and system administration, security/privacy and information assurance, networking, energy efficiency, performance and benchmarking, virtualization, cloud systems, compilers, applied machine learning, and software engineering.

6. I studied at a professional high school in Israel, focusing on electrical engineering (“EE”), and graduated in 1982. I spent one more year at the high school’s college division, receiving a special Certified Technician’s degree in EE. I then went on to serve in the Israeli Defense Forces for three years (1983–1986). I received my Bachelor of Science degree in computer science (“CS”) in 1991, my master’s degree in CS in 1994, and my Ph.D. in CS in 2001—all from Columbia University in New York.

7. When I began my undergraduate studies at Columbia University, I also started working as a student assistant in the various campus-wide computer labs, eventually becoming an assistant to the head labs manager, who managed all public computer labs on campus. During that time, I also became more involved with research within the CS Department at Columbia University, conducting research on operating systems, file and storage systems, distributed and networked systems, security, and other topics. I also assisted the CS department’s computer administrators in managing the department’s computers, which included storage, IT, networking, and cyber-security related duties.

8. In 1991, I joined Columbia University’s CS department as a full-time systems administrator, studying towards my MS degree part-time. My MS thesis topic related to file system reliability, fault tolerance, replication, and failover in mobile networked storage systems using file virtualization. My main duties as a systems administrator involved installing, configuring, and managing many networked servers, proxies, and desktops running several operating systems, as well as setting up network devices; this included many software and hardware upgrades, device upgrades, and BIOS firmware/chipset updates/upgrades. My duties also included ensuring reliable, secure, authenticated access to networked systems/storage and licensed software, as well as software updates, security and bug fixes. Examples of servers and their protocols included email (SMTP), file transfer (FTP), domain names (DNS), network file systems (NFS), network news systems (NNTP), and Web (HTTP).

9. In 1994, I left my systems administrator position to pursue my doctoral studies at Columbia University. My PhD thesis topic was versatile file system development using stackable (virtualized) file systems, with examples in the fields of security and encryption, efficiency, reliability, and failover. I continued to work part-time as a systems administrator at the CS department, and eventually I was asked to serve as manager of the entire information technology (“IT”) staff. From 1991 to 2001, I was also a member of the faculty-level Facilities Committee that oversaw all IT operations at the CS department.

10. As part of my PhD studies at Columbia, I collaborated on projects to develop advanced AI-like techniques to detect previously unknown viruses (a.k.a. “zero-day malware”) using data mining and rule-based detection. This work led to several highly cited papers (over 1,600 citations for one of the papers alone) and two patents. I also became a Teaching Assistant (“TA”) for the first-ever Computer Security course given at Columbia University’s CS department, with Dr. Matt Blaze as instructor.

11. From 1990 to 1998, I consulted for SOS Corporation and HydraWEB Technologies as a systems administrator and programmer, managing data storage use and backup/restore duties, databases, and web servers, as well as information assurance and cyber-security (e.g., malware protection, software licensing). From 1994 to 2000, I led projects at HydraWEB Technologies, and then became the Director of Software Development—overseeing the development of several products and appliances such as stateful firewalls and HTTP load-balancers, utilizing network-virtualization and high-availability techniques. From 2009 to 2019, I consulted for Packet General Networks, a startup specializing in secure, virtualized, network storage and applications’ data security in the cloud.

12. In 2001, I joined the faculty of Stony Brook University, a position I have held since that time. In 2002, I joined the Operations Committee, which oversees the IT operations of the CS department at Stony Brook University. From 2006 to 2010, I was the Director of IT Operations of the CS department. My day-to-day duties included setting policies regarding computing, hiring and training new staff, assisting staff with topics of my specialty, defining requirements for new software/hardware, and purchasing. From 2010 to 2015, I served as Co-Chair of the Operations Committee. From 2016 to 2019, I oversaw IT operations as the Chair of the Operations Committee. A significant component of these duties included defining and helping implement policies for data management, to ensure the security of users and their data, and data reliability and availability, while minimizing the inconvenience and performance impact to users. I personally helped set up and maintain an initial virtual-host infrastructure in the department. Since late 2019, I have been a member of the department’s Executive Committee, which also oversees all IT operations.

13. In 2017, I became the department’s Graduate Academic Adviser, advising all Master’s students (over 400 annually on average) and many other graduate students on an assortment of academic matters. In August 2024, I took over as the department’s Graduate Program Director, overseeing the entire graduate CS program (700-800 students annually on average).

14. Since 2001, I have personally configured and managed my own research lab’s network. This includes setting up and configuring multiple storage systems (e.g., NFS, CIFS/SMB, NAS); virtual and physical environments; applications such as databases (e.g., MySQL, PostgreSQL), Web servers (e.g., Apache), and mail servers; user access control (e.g., NIS, LDAP); backups and restores; snapshot policies; and more. I have personally installed, configured, changed, replaced parts, and upgraded components in numerous devices, including mobile devices, laptops, desktops, and servers, both physical and virtual.

15. Since 1995, I have taught courses on operating systems, storage and file systems, advanced systems programming in Unix/C, systems administration, data structures, data/software security, and more. My courses often use storage, file systems, distributed systems, and system/network security as key teaching principles and practical examples for assignments and projects. I have taught these concepts and techniques to my students, both to my direct advisees and in my courses. For example, in my graduate Operating Systems course, I often cover Linux’s kernel mechanisms to protect users, applications, and data files; virtual file systems; and distributed storage systems (e.g., NFS). And in the System Administration undergraduate course, I covered many topics such as networking, storage, backups, and configuring complex applications such as mail, web, and database servers.

16. My research often investigates computer systems from many angles: security, efficiency, energy use, scalability, reliability, portability, survivability, usability, ease-of-use, versatility, flexibility, and more. My research gives special attention to balancing five often-conflicting aspects of computer systems: performance, reliability, energy use, security, and ease-of-use.

17. Since joining Stony Brook University in 2001, my group in the File-systems and Storage Lab (“FSL”) has developed many file systems and operating system extensions; examples include a highly-secure cryptographic file system, a portable copy-on-write (“COW”) versioning file system, a tracing file system useful to detect intrusions, a replaying file system useful for forensics, a snapshotting and sandboxing file system, a namespace unification file system (that uses stackable, virtualized, file-based COW), an anti-virus file system, an integrity-checking file system, a load balancing and replication/mirroring file system, network file system extensions for security and performance, distributed secure cloud-based storage systems, transactional key-value stores and file systems, OS-level embedded databases, a compiler to convert user-level C code to in-kernel efficient yet safe code, GCC plugins, stackable file system templates, and a Web-based backup system. Many of these projects used one form of virtualization or another (storage, network, host, etc.). I continue to maintain and release newer versions of some of these file systems and software.

18. I have published over 120 refereed pu



