IN THE UNITED STATES PATENT AND TRADEMARK OFFICE
__________

BEFORE THE PATENT TRIAL AND APPEAL BOARD
__________

ZESTY.AI, INC.,
Petitioner

v.

AON RE, INC.,
Patent Owner
__________

Case IPR2025-01359
U.S. Patent No. 11,195,058
__________

DECLARATION OF DR. DAVID A. FORSYTH IN SUPPORT OF
PETITION FOR INTER PARTES REVIEW OF U.S. PATENT NO. 11,195,058

Mail Stop “PATENT BOARD”
Patent Trial and Appeal Board
U.S. Patent and Trademark Office
P.O. Box 1450
Alexandria, VA 22313-1450
`ZESTY.AI EX1003
`U.S. Patent No. 11,195,058
`Case IPR2025-01359
`U.S. Patent No. 11,195,058
` - i -
`TABLE OF CONTENTS
`I. INTRODUCTION ....................................................................................... 1
`II. MY BACKGROUND AND QUALIFICATIONS ....................................... 2
`III. SUMMARY OF OPINIONS ....................................................................... 7
`IV. LIST OF DOCUMENTS CONSIDERED .................................................... 8
`V. LEGAL UNDERSTANDING.................................................................... 10
`A. My Understanding of Claim Construction.........................................10
`B. My Understanding of Obviousness ...................................................10
`C. A Person of Ordinary Skill in the Art ................................................13
`VI. OVERVIEW OF THE ’058 PATENT ....................................................... 14
`A. Summary of the ’058 Patent ..............................................................14
`B. Prosecution History...........................................................................17
`C. Level of Ordinary Skill in the Art .....................................................18
`D. Claim Construction ...........................................................................18
`VII. STATE OF THE ART ............................................................................... 19
`A. Aerial imagery analysis and property assessment techniques
`have been used for decades, well before the ’058 patent. ..................19
`B. Machine learning and computer vision techniques for image
`analysis were well-known before the ’058 patent. .............................21
`
`VIII. OVERVIEW OF GROSS .......................................................................... 22
`IX. GROUND 1: GROSS RENDERS OBVIOUS CLAIMS 1-20 .................... 26
`A. A POSA would have found it obvious Gross’s techniques were
`intended to be combined. ..................................................................27
`
`B. Independent Claim 1 .........................................................................30
`1. [1.P]: A system for automatically assessing features of a
`property location comprising a structure, the system
`comprising: ............................................................................ 30
`
`2. [1.1]: a non-volatile computer readable medium storing ........ 32
`3. [1.2]: a set of property characteristic profiles, each
`property characteristic profile of the set of property
`characteristic profiles being developed through training
`one or more machine learning algorithms using first
`property images to identify one or more characteristics of
`at least one property feature of one or more property
`features, and ........................................................................... 32
4. [1.3]: a set of property condition profiles, each property
`condition profile of the set of property condition profiles
`being developed through training one or more machine
`learning algorithms using second property images to
`identify one or more conditions of at least one property
`characteristic of one or more property characteristics
`corresponding to a given feature or features of the one or
`more property features; and ................................................... 36
`
`5. [1.4]: processing circuitry configured to ................................ 38
`6. [1.5]: receive, from a user at a remote computing device
`via a network, a request comprising identification of a
`property location, ................................................................... 39
`7. [1.6]: access aerial imagery of the property location, ............. 43
`8. [1.7]: classify a condition of one or more features of the
`property location, wherein classifying comprises, for each
`feature of the one or more features, ........................................ 44
`9. [1.8]: identifying the respective feature from at least one
`image obtained from the aerial imagery, ................................ 45
`10. [1.9]: applying one or more property characteristic profiles
`of the set of property characteristic profiles to at least a
`portion of each image of the at least one image to
`determine a plurality of characteristics of the respective
`feature, wherein a first characteristic of the plurality of
`characteristics comprises a type of material of the feature
`or a shape of the feature, and.................................................. 46
`
`11. [1.10]: applying one or more property condition profiles
`of the set of property condition profiles to at least a portion
`of each image of the at least one image to classify a
`condition of the respective property feature, and .................... 47
12. [1.11]: responsive to receiving the request, cause
`presentation of, for review by the user at the remote
`computing device, a graphical user interface comprising
`information regarding the plurality of characteristics and
the condition of each feature of the at least one property
`feature. ................................................................................... 48
`C. Independent Claim 13 .......................................................................50
`1. [13.P]: A method for automatically assessing features of a
`property location comprising a structure, the method
`comprising: ............................................................................ 50
`
`2. [13.1]: receiving, from a user at a remote computing
`device via a network, a request comprising identification
`of a property location; ............................................................ 51
`
`3. [13.2]: accessing, by processing circuitry, aerial imagery
`of the property location; ......................................................... 51
`4. [13.3]: classifying, by the processing circuitry, a condition
`of one or more features of the property location, wherein
`classifying comprises, for each feature of the one or more
`features, identifying the respective feature from at least
`one image obtained from the aerial imagery,.......................... 51
`
`5. [13.4]: applying one or more property characteristic
`profiles of a set of property characteristic profiles to at
`least a portion of each image of the at least one image to
`determine a plurality of characteristics of the respective
`feature, ................................................................................... 51
`
`6. [13.5]: wherein each property characteristic profile of the
`set of property characteristic profiles is developed through
`training one or more machine learning algorithms using
`first property images to identify one or more
`characteristics of at least one property feature of one or
`more property features, and a first characteristic of the
`plurality of characteristics comprises a type of material of
`the feature or a shape of the feature, and ................................ 52
`
`7. [13.6]: applying one or more property condition profiles
of a set of property condition profiles to at least a portion
`of each image of the at least one image to classify a
`condition of the respective property feature, .......................... 52
`8. [13.7]: wherein each property condition profile of the set
`of property condition profiles is developed through
`training one or more machine learning algorithms using
`second property images to identify one or more conditions
`of at least one property characteristic of one or more
`property characteristics corresponding to a given feature
`or features of the one or more property features; and ............. 52
`
`9. [13.8]: responsive to receiving the request, causing
`presentation of, for review by the user at the remote
`computing device, a graphical user interface comprising
information regarding the plurality of characteristics and
`the condition of each feature of the at least one property
`feature. ................................................................................... 52
`
`D. Claim 2 .............................................................................................53
`E. Claim 3 .............................................................................................56
`F. Claim 4 .............................................................................................56
`G. Claim 5 .............................................................................................57
`H. Claim 6 .............................................................................................59
`I. Claim 7 .............................................................................................60
`J. Claim 8 .............................................................................................63
`K. Claim 9 .............................................................................................63
`L. Claim 10 ...........................................................................................65
`M. Claim 11 ...........................................................................................66
`N. Claim 12 ...........................................................................................67
`O. Claim 14 ...........................................................................................68
`P. Claim 15 ...........................................................................................68
`Q. Claim 16 ...........................................................................................69
`R. Claim 17 ...........................................................................................70
`S. Claim 18 ...........................................................................................72
`T. Claim 19 ...........................................................................................73
`U. Claim 20 ...........................................................................................74
`X. CONCLUSION ......................................................................................... 76
`I, David A. Forsyth, hereby declare as follows.
`I. INTRODUCTION
`1. I have been retained as an expert witness on behalf of Petitioner
`Zesty.ai, Inc. (“Petitioner”) for the above-captioned inter partes review (IPR). I am
`being compensated for my time in connection with this IPR at my standard
`consulting rate, which is $500 per hour. My compensation is not contingent on
`either my expert opinion or the outcome of this litigation. I have no other interest
`in this proceeding.
`2. I understand that this declaration accompanies a Petition for IPR
`involving U.S. Patent No. 11,195,058 (“the ’058 patent”) (EX1001), which
`resulted from U.S. Patent Application No. 16/868,113 (“the ’113 application”),
`filed on May 6, 2020. I understand that the ’058 patent alleges a priority date of
`September 23, 2016. I refer to this date throughout this declaration.
`3. In preparing this declaration, I have reviewed the ’058 patent and each
`of the documents cited herein, in light of general knowledge in the art before
`September 23, 2016. In formulating my opinions, I have relied upon my
`experience, education, and knowledge in the relevant art. In formulating my
`opinions, I have also considered the viewpoint of a person of ordinary skill in the
`art (“POSA”) (i.e., a person of ordinary skill in the field of image processing and
`machine learning) prior to September 23, 2016.
`II. MY BACKGROUND AND QUALIFICATIONS
`4. My qualifications and credentials are fully set forth in my curriculum
`vitae, which is attached as Exhibit 1004. A summary of my qualifications is
`included below.
`5. I am currently the Fulton-Watson-Copp Chair in Computer Science at
`the University of Illinois at Urbana-Champaign. I believe that my background and
`expertise qualify me as an expert in the technical issues in this matter.
`6. My education began in Cape Town, South Africa. I was an
`undergraduate at the University of the Witwatersrand, Johannesburg, and I hold a
`Bachelor of Science (1984) and Master of Science in Electrical Engineering from
`that University. I was awarded the Diocesan College Rhodes Scholarship in 1984
`to attend Oxford University. I hold a Master of Arts by special election from
`Oxford University and a Doctorate of Philosophy in Engineering Science from
`Balliol College, Oxford. I was appointed Fellow by Examination of Magdalen
`College, Oxford in 1989. I was then appointed Assistant Professor of Computer
`Science at the University of Iowa in 1991; in 1994, I was promoted to Associate
`Professor of Computer Science, and went on leave of absence to take up an
`appointment as Assistant Professor of Computer Science at U.C. Berkeley. I was
`promoted to Associate Professor of Computer Science at U.C. Berkeley in 1996,
`and to Professor of Computer Science in 2002. I went on leave of absence from
`U.C. Berkeley to take up an appointment as Professor of Computer Science at the
University of Illinois at Urbana-Champaign in 2004. I am currently the
Fulton-Watson-Copp Chair in Computer Science at the University of Illinois at
Urbana-Champaign.
`7. I have studied Computer Vision since my final year as an
`undergraduate in 1984, when I engaged in an undergraduate project on the topic.
`Computer vision is a field of computer science that uses computing methods to
`interpret images and video, as well as image-like data, in ways that are useful. My
`Ph.D. thesis treats a traditional problem in computer vision, known as “color
`constancy,” and the paper that resulted from this work is still quite regularly cited.
`As a Fellow by Examination, I studied illumination and shading effects in vision,
`and geometric methods to recognize objects. I have published papers on a wide
`range of topics in computer vision, computer graphics, machine learning and
`human-computer interfaces. I have published about 220 refereed papers and 10
`book chapters. I have published one textbook on computer vision in two editions
`and four languages, one research monograph, and two recent textbooks on machine
`learning. I have edited six volumes of collected papers. A reasonably complete list
`of my publications is contained in my curriculum vitae in Exhibit 1004.
`8. In addition, I have occupied various leadership posts within my
`discipline. I was program co-chair for the Institute of Electrical and Electronics
`Engineers (“IEEE”) Conference on Computer Vision and Pattern Recognition
`(“CVPR”) in 2000, 2011, 2018, and 2021, and for the European Conference on
`Computer Vision (“ECCV”) in 2008. I was general co-chair for CVPR in 2006 and
`again held the position for CVPR 2015. I was general co-chair for the IEEE
`International Conference on Computer Vision in 2019. I am an Associate Editor of
`the International Journal of Computer Vision; of the Association for Computing
`Machinery (“ACM”) Transactions on Graphics; and of the Journal of the ACM. I
`was the Editor in Chief of the IEEE Transactions on Pattern Analysis and Machine
`Intelligence (“TPAMI”) for four years. TPAMI is a journal which receives
`approximately 1000 submissions per year, and which is usually seen as the leading
`journal in the discipline. I also served for three years on a committee of the
`National Research Council, convened to study methods to protect children from
`pornography and other inappropriate material on the internet.
`9. Throughout my career I have received a variety of awards. In 1993,
`the Marr prize for Best Paper at the International Conference on Computer Vision
`was awarded to “Extracting Projective Structure from Single Perspective Views of
`3D Point Sets,” a paper I wrote with Charles A. Rothwell, Andrew Zisserman, and
`Joseph L. Mundy. In 2002, the award for Best Paper in Cognitive Computer Vision
`at the European Conference in Computer Vision was awarded to “Object
`recognition as machine translation: Learning a lexicon for a fixed image
`vocabulary,” a paper I wrote with P. Duygulu, K. Barnard, and J.F.G. de Freitas. I
`was awarded an IEEE Technical Achievement Award in 2005 for my work on
`Computer Vision. I became an IEEE Fellow in 2009, and an ACM Fellow in 2014.
`In 2024, I received the Mark Everingham Prize at the European Conference in
`Computer Vision.
10. My work is extensively cited in the computer vision literature, with
44,249 citations recorded by Google Scholar as of July 2025.
`11. My appended CV lists all publications authored in the past 10 years,
`with perhaps a few exceptions that I have forgotten.
`12. I have made extensive contributions to computer vision education. I
am lead co-author of a widely used textbook in the discipline, “Computer Vision:
A Modern Approach,” which has appeared in two editions and four languages. I
`have written two other textbooks, and am the lead co-author of the computer vision
`chapter in the leading Artificial Intelligence textbook. I have trained 35 Ph.D.
`students. I have trained people who are very well known academics in computer
vision. Tamara Berg was associate professor of computer science at the University
`of North Carolina at Chapel Hill and is now an entrepreneur. Ali Farhadi is
`professor of computer science at the University of Washington and CEO of the
`Allen Institute for Artificial Intelligence. Deva Ramanan is a tenured professor of
`computer science at Carnegie Mellon University. Gang Wang was assistant
`professor of computer science at Nanyang Technological University in Singapore,
`and moved to become the senior lead in AI research at Alibaba. Zicheng Liao is a
`professor of computer science at Zhejiang University. Kobus Barnard is professor
`of computer science at University of Arizona. Each is seen as a young leader in the
`field.
`13. Based on my above-described 35 years of experience in the field of
`computer vision and the acceptance of my publications and professional
`recognition by societies in my field, I believe that I am considered to be an expert
`in the field of computer vision (including image analysis). For example, I have
published numerous papers on interpreting images, including: a highly cited paper
`on identifying object attributes from images (A. Farhadi, I. Endres, D. Hoiem, and
`D.A. Forsyth, “Describing Objects by their Attributes,” CVPR 2009); the first
paper demonstrating a method that can write sentences that describe an image
(A. Farhadi, M. Hejrati, M.A. Sadeghi, P. Young, C. Rashtchian, J. Hockenmaier,
D.A. Forsyth, “Every Picture Tells a Story: Generating Sentences from Images,” Proc
`ECCV 2010); the first paper demonstrating frame-rate object detection (M.A.
`Sadeghi and D.A. Forsyth, “30Hz Object Detection with DPM V5,” Proc ECCV
`2014); methods to recover surface material descriptions from images (D.A. Forsyth
`and Jason Rock, “Intrinsic Image Decomposition using Paradigms,” IEEE TPAMI,
`November, 2022); and papers on interpreting the layout of rooms (V. Hedau, D.
`Hoiem and D.A. Forsyth, “Understanding the layout of cluttered rooms,” Int. Conf.
`Computer Vision (ICCV) 2009). I have published papers on transforming images,
including: methods to relight images (Anand Bhattad, D.A. Forsyth, “StyLitGAN:
`Prompting StyleGAN to Produce New Illumination Conditions,” CVPR 2024);
`methods to restore face images (Min Jin Chong, Dejia Xu, Yi Zhang, Zhangyang
`Wang, David Forsyth, Gurunandan Krishnan, Yicheng Wu, Jian Wang, “Copy or
`not? Reference-based Face Image Restoration with Focus on Fine Details,” Winter
`Conference on Applications of Computer Vision, 2025); and methods to identify
`synthetic images (Ayush Sarkar, Hanlin Mai, Amitabh Mahapatra, Svetlana
Lazebnik, D.A. Forsyth, Anand Bhattad, “Shadows Don’t Lie and Lines Can’t
`Bend! Generative Models don’t know Projective Geometry…for now,” CVPR
`2024).
`III. SUMMARY OF OPINIONS
`14. In forming my opinions about the ’058 patent, I have considered the
following Ground of unpatentability. Based on my review of the prior-art reference
that forms the basis of this Ground, it is my opinion that claims 1-20 of the ’058
`patent would have been obvious to a POSA prior to September 23, 2016.
Ground   Basis   Claims   References
1        § 103   1-20     Gross
`15. I have been asked to consider how a POSA would have understood
`the challenged claims in light of the disclosures of the ’058 patent. I have also been
`asked to consider how a POSA would have understood the prior art reference that
`forms the basis of the above-listed Ground.
`16. Further, I have been asked to consider and provide my technical
`review, analysis, insights, and opinions regarding whether a POSA would have
`understood that the prior art reference listed in the table above renders obvious
`claims 1-20 of the ’058 patent.
`IV. LIST OF DOCUMENTS CONSIDERED
`17. In formulating my opinions, I have relied upon my training,
`knowledge, and experience that are relevant to the ’058 patent. I have also
`reviewed and am familiar with the following documents and materials in addition
`to any other documents cited in this declaration:
`Exhibit No. Description
1001 U.S. Patent No. 11,195,058 to Okazaki (“’058 patent”)
`1002 Prosecution History of U.S. Patent No. 11,195,058 (“Prosecution
`History”)
`1004 Curriculum Vitae of Dr. David A. Forsyth
`1005 U.S. Publication No. 2015/0186953 to Gross et al. (“Gross”)
`1008
`Jafri, et al., “A Survey of Face Recognition Techniques,” Journal
`of Information Processing Systems, Vol. 5, No. 2, June 2009, pp.
`41-68 (“Jafri”)
`1009 U.S. Patent No. 8,078,436 to Pershing et al. (“Pershing”)
`1010 U.S. Patent No. 6,885,771 to Takahashi (“Takahashi”)
`1011 U.S. Publication No. 2009/0265193 to Collins et al. (“Collins”)
`1012 U.S. Publication No. 2015/0302529 to Jagannathan
`(“Jagannathan”)
`1013 U.S. Patent No. 4,941,122 to Weideman (“Weideman”)
`1014 U.S. Patent No. 8,995,757 to Ciarcia et al. (“Ciarcia”)
`1015
`Redistricting Data Hub, “TIGER Boundary Files,” accessible at
`https://redistrictingdatahub.org/data/about-our-data/tiger-
`boundary-files/ (last accessed July 10, 2025)
`1016
`Norman, J., “Origins of Google Earth,” History of Information,
`accessible at
`https://www.historyofinformation.com/detail.php?entryid=3145
`(last accessed July 13, 2025)
`1017
`Krizhevsky, et al., “ImageNet Classification with Deep
`Convolutional Neural Networks,” Communications of the ACM,
`Vol. 60, No. 6, May 2017
`1018 Lin, et al., “Network In Network,” 2013
`1019
`Bradski, “The OpenCV Library,” accessible at
`https://www.drdobbs.com/open-source/the-opencv-
`library/184404319 (last accessed July 10, 2025)
`1020 van der Walt, et al., “scikit-image: image processing in Python,”
`PeerJ, June 2014
`1023
`Novak, et al., “Anatomy of a Color Histogram,” Proceedings of
`the 1992 IEEE Computer Society Conference on Computer Vision
`and Pattern Recognition, June 1992, pp. 599-605
`1024 U.S. Patent No. 8,401,222 to Thornberry et al. (“Thornberry”)
`1025 U.S. Patent No. 9,158,869 to Labrie et al. (“Labrie”)
`18. To the best of my knowledge, the above-mentioned documents and
`materials are true and accurate copies of what they purport to be. An expert in the
`field would reasonably rely on them to formulate opinions such as those set forth
`in this declaration.
`V. LEGAL UNDERSTANDING
`19. I have also relied upon various legal principles (as explained to me by
`Zesty.ai’s counsel) in formulating my opinions. My understanding of these
`principles is summarized below.
`A. My Understanding of Claim Construction
`20. I understand that during an inter partes review proceeding, claims are
`to be construed in light of the specification as would be read by a person of
`ordinary skill in the relevant art at the time the application was filed. I understand
`that claim terms are given their ordinary and customary meaning as would be
`understood by a person of ordinary skill in the relevant art in the context of the
`entire disclosure. A claim term, however, will not receive its ordinary meaning if
`the patentee acted as his own lexicographer and clearly set forth a definition of the
`claim term in the specification. In this case, the claim term will receive the
`definition set forth in the patent.
`B. My Understanding of Obviousness
`21. I understand that a patent claim is invalid if the claimed invention
`would have been obvious to a POSA at the time the application was filed. This
`means that even if all of the requirements of the claim cannot be found in a single
`prior art reference that would anticipate the claim, the claim can still be invalid.
`22. To obtain a patent, a claimed invention must have, as of the priority
`date, been nonobvious in view of the prior art in the field. I understand that an
`invention is obvious when the differences between the subject matter sought to be
`patented and the prior art are such that the subject matter as a whole would have
`been obvious to a POSA at the time the invention was made.
`23. I understand that to prove that prior art or a combination of prior art
`renders a patent obvious, it is necessary to:
`(1) identify the particular references that, singly or in combination, render
`the patent obvious;
`(2) specifically identify which elements of the patent claim appear in each of
`the asserted references; and
`(3) explain how the prior art references could have been combined in order
`to create the inventions claimed in the asserted claim.
`24. I also understand that prior art references can be combined under
`several different circumstances. For example, it is my understanding that one such
`circumstance is when a proposed combination of prior art references results in a
`system that represents a predictable variation, which is achieved using prior art
`elements according to their established functions.
`25. I also understand that when considering the obviousness of a patent
`claim, one should consider whether a teaching, suggestion, or motivation to
`combine the references exists so as to avoid impermissibly applying hindsight
`when considering the prior art. I understand this test should not be rigidly applied,
`but that the test can be important to avoid such hindsight.
`26. I understand that certain objective indicia can be important evidence
`as to whether a patent is obvious or nonobvious. Such indicia include: (1)
`commercial success of products covered by the patent claims; (2) a long -felt need
`for the invention; (3) failed attempts by others to make the invention; (4) copying
`of the invention by others in the field; (5) unexpected results achieved by the
`invention as compared to the closest prior art; (6) praise of the invention by the
`infringer or others in the field; (7) the taking of licenses under the patent by others;
`(8) expressions of surprise by experts and those skilled in the art at the making of
`the invention; and (9) the patentee proceeded contrary to the accepted wisdom of
`the prior art.
27. At this point, I am not aware of any secondary indicia of
non-obviousness. But I reserve the right to review and opine on any evidence of
objective indicia of nonobviousness that may be presented during this proceeding.
`28. I also understand that “obviousness” is a legal conclusion based on the
`underlying factual issues of the scope and content of the prior art, the differences
`between the claimed invention and the prior art, the level of ordinary skill in the
`prior art, and any objective indicia of non-obviousness.
`29. For that reason, I am not rendering a legal opinion on the ultimate
`legal question of obviousness. Rather, my testimony addresses the underlying facts
and factual analysis that would support a legal conclusion of obviousness or
non-obviousness, and when I use the term obvious, I am referring to the perspective of
`one of ordinary skill at the time of invention.
`C. A Person of Ordinary Skill in the Art
`30. I understand that a person of ordinary skill in the relevant art
(“POSA”) is presumed to be aware of all pertinent art, thinks along conventional
`wisdom in the art, and is a person of ordinary creativity—not an automaton.
`31. I have been asked to consider the level of ordinary skill in the field
`that someone would have had at the time the claimed invention was made. In
`deciding the level of ordinary skill, I considered the following:
`• the levels of education and experience of persons working in the
`field;
`• the types of problems encountered in the field; and
`• the sophistication of the technology.
`32. My opinion below explains how a POSA would have understood the
`technology described in the references I have identified herein around the
`September 23, 2016 timeframe, which I have been advised is the earliest possible
`effective filing date for the ’058 patent. However, my opinions herein would not be
`affected if the ’058 patent is found to only be entitled to a later effective filing date.
`33. Regardless of whether I use “I” or a “POSA” during my technical
`analysis below, all of my statements and opinions are always to be understood to
`be based on how a POSA would have understood or read a document at the time of
`the alleged invention.
`VI. OVERVIEW OF THE ’058 PATENT
`A. Summary of the ’058 Patent
`34. The ’058 patent discloses techniques for analyzing aerial imagery to
`evaluate property characteristics and condition classifications. EX1001, Abstract.
`For example, this might include analyzing images to determine the maintenance
`levels of certain property features, such as the roof. EX1001, 1:41-46, 2:34-49.
`This information can then be used, e.g., to estimate the risk of damage in the event
`of “disaster conditions, such as severe storms,” to calculate repair costs, or to
`confirm a property has been repaired. EX1001, 2:42-49, 12:28-32 (“The types of
`disasters can include, in some examples, earthquake, hurricane, tornado, storm
`surge, fire, straight line winds, or explosion. The types and estimated severity of
`disasters, in some embodiments, may depend upon the particular property
`location.”).
`35. The ’058 patent discusses applying “machine learning analysis” to
`“analyze aerial imagery and automatically extract characteristics of individual
`properties, providing fast and efficient automated classification of building styles
and repair conditions,” in order to perform risk evaluations. EX1001, 2:16-25,
`2:34-42 (emphasis added). Figure 3 (annotated below) illustrates an exemplary
`embodiment of the ’058 patent’s alleged invention.
`
`EX1001, FIG. 3 (annotated).
`36. The ’058 patent’s system operates in a series of steps, corresponding
`to numeric annotations in Figure 3 above. First, the system receives a request by “a
`user of a particular client computing system 306” to classify a property. EX1001,
`13:65-14:1. The request “may include at least one property identifier 340” (e.g.,
`342” (e.g.



