UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS
P.O. Box 1450
Alexandria, Virginia 22313-1450

APPLICATION NO.: 17/053,224
FILING DATE: 11/05/2020
FIRST NAMED INVENTOR: Brian A. Ellenberger
ATTORNEY DOCKET NO.: 82015US003
CONFIRMATION NO.: 4226

Solventum Intellectual Properties Company
2510 Conway Ave E
3M Center, 275-6E-21
St Paul, MN 5514

EXAMINER: SHIN, SEONG-AH A
ART UNIT: 2659
NOTIFICATION DATE: 05/07/2024
DELIVERY MODE: ELECTRONIC
Please find below and/or attached an Office communication concerning this application or proceeding.

The time period for reply, if any, is set in the attached communication.

Notice of the Office communication was sent electronically on above-indicated "Notification Date" to the following e-mail address(es):

IPDocketing@Solventum.com

PTOL-90A (Rev. 04/07)

Office Action Summary

Application No.: 17/053,224
Applicant(s): Ellenberger et al.
Examiner: SEONG-AH A. SHIN
Art Unit: 2659
AIA (FITF) Status: Yes

-- The MAILING DATE of this communication appears on the cover sheet with the correspondence address --

Period for Reply

A SHORTENED STATUTORY PERIOD FOR REPLY IS SET TO EXPIRE 3 MONTHS FROM THE MAILING DATE OF THIS COMMUNICATION.
- Extensions of time may be available under the provisions of 37 CFR 1.136(a). In no event, however, may a reply be timely filed after SIX (6) MONTHS from the mailing date of this communication.
- If NO period for reply is specified above, the maximum statutory period will apply and will expire SIX (6) MONTHS from the mailing date of this communication.
- Failure to reply within the set or extended period for reply will, by statute, cause the application to become ABANDONED (35 U.S.C. § 133).
- Any reply received by the Office later than three months after the mailing date of this communication, even if timely filed, may reduce any earned patent term adjustment. See 37 CFR 1.704(b).

Status

1) (X) Responsive to communication(s) filed on 2/14/2024.
   ( ) A declaration(s)/affidavit(s) under 37 CFR 1.130(b) was/were filed on ___.
2a) ( ) This action is FINAL.    2b) (X) This action is non-final.
3) ( ) An election was made by the applicant in response to a restriction requirement set forth during the interview on ___; the restriction requirement and election have been incorporated into this action.
4) ( ) Since this application is in condition for allowance except for formal matters, prosecution as to the merits is closed in accordance with the practice under Ex parte Quayle, 1935 C.D. 11, 453 O.G. 213.

Disposition of Claims*

5) (X) Claim(s) 1-20 is/are pending in the application.
   5a) Of the above claim(s) ___ is/are withdrawn from consideration.
6) ( ) Claim(s) ___ is/are allowed.
7) (X) Claim(s) 1-20 is/are rejected.
8) ( ) Claim(s) ___ is/are objected to.
9) ( ) Claim(s) ___ are subject to restriction and/or election requirement.

* If any claims have been determined allowable, you may be eligible to benefit from the Patent Prosecution Highway program at a participating intellectual property office for the corresponding application. For more information, please see http://www.uspto.gov/patents/init_events/pph/index.jsp or send an inquiry to PPHfeedback@uspto.gov.

Application Papers

10) ( ) The specification is objected to by the Examiner.
11) ( ) The drawing(s) filed on ___ is/are: a) ( ) accepted or b) ( ) objected to by the Examiner.
    Applicant may not request that any objection to the drawing(s) be held in abeyance. See 37 CFR 1.85(a).
    Replacement drawing sheet(s) including the correction is required if the drawing(s) is objected to. See 37 CFR 1.121(d).

Priority under 35 U.S.C. § 119

12) ( ) Acknowledgment is made of a claim for foreign priority under 35 U.S.C. § 119(a)-(d) or (f).
    Certified copies:
    a) ( ) All    b) ( ) Some**    c) (X) None of the:
    1. ( ) Certified copies of the priority documents have been received.
    2. ( ) Certified copies of the priority documents have been received in Application No. ___.
    3. ( ) Copies of the certified copies of the priority documents have been received in this National Stage application from the International Bureau (PCT Rule 17.2(a)).
    ** See the attached detailed Office action for a list of the certified copies not received.

Attachment(s)

1) (X) Notice of References Cited (PTO-892)
2) ( ) Information Disclosure Statement(s) (PTO/SB/08a and/or PTO/SB/08b), Paper No(s)/Mail Date ___
3) ( ) Interview Summary (PTO-413), Paper No(s)/Mail Date ___
4) ( ) Other: ___

U.S. Patent and Trademark Office
PTOL-326 (Rev. 11-13)    Office Action Summary    Part of Paper No./Mail Date 20240501

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims

1. Claims 1-20 are pending in this application.

Continued Examination Under 37 CFR 1.114

2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/14/2024 has been entered.

Response to Arguments

Regarding Rejection under 35 U.S.C. 103

3. Applicant argues that the rejection under 35 USC 103 is improper because: 1) Jeong does not process data asynchronously as claimed; 2) Jeong does not perform batch processing as claimed; and 3) Jeong does not disclose the newly amended limitation, "narrative and non-narrative data", as recited in claim 1 (REMARKS, pages 8-10).

However, Examiner disagrees. The rejection under 35 USC 103 is still proper because:

1) In response to applicant's argument that the references fail to show certain features of applicant's invention, it is noted that the features upon which applicant relies (i.e., processing data asynchronously) are not recited in the rejected claims, and claim 1 recites the limitation "receiving ... data asynchronously from ... data source". Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Moreover, Jeong in view of Hayward does disclose the feature. See the rejection below.

2) In response to applicant's argument that the references fail to show certain features of applicant's invention, it is noted that the features upon which applicant relies (i.e., performing batch processing) are not recited in the rejected claims, and claim 1 recites the limitation "batch ... data" or "a batch ... module". Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. Moreover, Jeong in view of Hayward does disclose the feature. See the rejection below.

3) Jeong in view of Hayward further discloses claims including the newly amended limitation, "narrative and non-narrative data", as recited in claim 1. See the rejection below. Therefore, Examiner maintains the rejection.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

    A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

4. Claims 1, 4, 7-10, 11, 14, and 17-20 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Jeong (US Pub. 2017/0256260) in view of Hayward et al. (US Pat. 11,783,422, provisional application filed on Apr. 3, 2018), and further in view of Baughman et al. (US Pub. 2018/0336459).
Regarding claim 1, Jeong discloses a method performed by at least one computer processor executing computer program instructions stored on at least one non-transitory computer readable medium to execute a method, the method comprising:

receiving [batch] data asynchronously from a [plurality of narrative and non-narrative data sources, wherein data in at least one of the data sources in the plurality is in a different data format than data in the other of the data sources in the plurality]; performing NLP on the [normalized batch] data to produce batch NLP data ([0108]-[0117] "receive voice commands uttered by a plurality of users from a plurality of display devices corresponding to the respective users ... the control unit 170 may generate a training selection list ... and transmit the generated training selection list to the NLP server 500");

at a batch NLP module, generating a summarized NLP data model based on the batch NLP data in a first amount of time ([0108]-[0117] "receive, from the NLP server 500, a training result obtained by performing the natural language processing on the training selection list, and store the received training result in the NLP DB 177");

at a live NLP processor, after performing NLP on the batch data:

receiving first live data from a live data source ([0077]-[0084] Figs. 3 and 5, S111, S117, receiving text from a corresponding utterance from a user of the display device);

combining at least a first part of the summarized NLP data model with the first live data to produce first combined data ([0077]-[0084] Figs. 3 and 5, "check whether the text pattern has matched with a prestored voice recognition pattern so as to perform a function of the display device 100, corresponding to the text pattern. In an embodiment, the NLP DB 177 may store a corresponding relationship between functions of the display device 100 and voice recognition patterns corresponding thereto"); and

performing live NLP on the first combined data to produce first live NLP output in a second amount of time, [wherein the second amount of time is shorter than the first amount of time] ([0077]-[0084] Figs. 3 and 5, "if the text pattern matches with the prestored voice recognition pattern, the control unit 170 performs a function of the display device 100, corresponding to the matched voice recognition pattern (S115)").
Jeong does not explicitly teach, however Hayward does explicitly teach, the bracketed limitations:

receiving [batch] data asynchronously from a [plurality of narrative and non-narrative data sources, wherein data in at least one of the data sources in the plurality is in a different data format than data in the other of the data sources in the plurality]; normalizing the batch data to produce normalized batch data; performing NLP on the [normalized batch] data to produce batch NLP data (Col. 18, lines 29-37, data may be analyzed using the ANN via a batch process; Fig. 1 and Col. 8, lines 3-42, historical data 108 may include a plurality (e.g., thousands or millions) of electronic documents, parameters, and/or other information; Col. 12, lines 6-22, looking up additional customer information from customer data 160 and life and health database 162; Col. 19, line 31 - Col. 21, line 18, "artificial neural network (ANN) 300 which may be trained by ANN unit 150 of FIG. 1 or ANN training application 264 of FIG. 2 ... Multiple different types of ANNs may be employed, including without limitation, recurrent neural networks, convolutional neural networks, and deep learning neural networks ... wherein input data may be normalized by mean centering, to determine loss and quantify the accuracy of outputs").

Therefore, it would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to incorporate the method of performing a function of the display device corresponding to the voice command as taught by Jeong with the method of normalizing data as taught by Hayward to improve upon the user input handling experience by processing claims using machine learning techniques (Hayward, Col. 2, lines 13-15).
Jeong in view of Hayward does not explicitly teach, however Baughman does explicitly teach, the bracketed limitation:

performing live NLP on the first combined data to produce first live NLP output in a second amount of time, [wherein the second amount of time is shorter than the first amount of time] ([0035]-[0046] "providing structured data for storage in live event data area 2122 and comment data area 2124. Manager system 110, running preparation and maintenance process 111 can periodically replicate data of live event data area 2122 into historical event data area 2121. Manager system 110 can structure records of live event data area 2122 and can structure data records of historical event data area 2121. As part of structuring data for use by historical event data area 2121, manager system 110 can vectorize records ... training a neural network 300 can include use of historical event data and can perform training using data of live event data area 2122").

Therefore, it would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to incorporate the method of performing a function of the display device corresponding to the voice command as taught by Jeong in view of Hayward with the method of training data in real time with live event data as taught by Baughman, to avoid a model that is defined reactively only after a problem is experienced and that is tailored to address the last experienced problem instead of future problems that may arise with the data records (Baughman, [0003]).
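
For orientation only, the two-stage arrangement recited in claim 1 (asynchronous batch intake and normalization, a slow batch NLP stage that yields a summarized model, then a fast live NLP stage that combines part of that model with incoming live data) can be sketched in Python. This is a minimal illustrative sketch under assumed, hypothetical names (receive_batch, normalize, batch_nlp, live_nlp) and toy data; it is not the applicant's implementation and not code from Jeong, Hayward, or Baughman.

    import asyncio
    import time

    async def receive_batch(sources):
        # Receive records asynchronously from a plurality of data sources.
        results = await asyncio.gather(*(source() for source in sources))
        return [record for records in results for record in records]

    def normalize(records):
        # Coerce narrative (free-text) and non-narrative (structured)
        # records, arriving in different formats, into one common format.
        normalized = []
        for record in records:
            if isinstance(record, dict):   # non-narrative, structured record
                normalized.append(" ".join(f"{k}={v}" for k, v in record.items()))
            else:                          # narrative free-text record
                normalized.append(str(record).strip().lower())
        return normalized

    def batch_nlp(texts):
        # Slow batch stage: summarize the normalized batch data into a model
        # (toy token counts stand in for real batch NLP output).
        model = {}
        for text in texts:
            for token in text.split():
                model[token] = model.get(token, 0) + 1
        return model  # plays the role of the "summarized NLP data model"

    def live_nlp(model, utterance):
        # Fast live stage: combine part of the summarized model with the
        # live data, then produce live NLP output from the combined data.
        combined = {t: model.get(t, 0) for t in utterance.lower().split()}
        return {"text": utterance, "score": sum(combined.values())}

    async def main():
        async def narrative_source():      # hypothetical narrative source
            return ["patient reports mild headache"]

        async def structured_source():     # hypothetical non-narrative source
            return [{"temp": 98.6, "bp": "120/80"}]

        start = time.perf_counter()
        model = batch_nlp(normalize(await receive_batch(
            [narrative_source, structured_source])))
        first_amount = time.perf_counter() - start    # "first amount of time"

        start = time.perf_counter()
        output = live_nlp(model, "headache severity")
        second_amount = time.perf_counter() - start   # shorter "second amount"
        print(output, first_amount, second_amount)

    asyncio.run(main())

In this toy arrangement the live stage does far less work than the batch stage, which is the sense in which the claimed "second amount of time" is shorter than the "first amount of time".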

Regarding claim 4, Jeong in view of Hayward and further in view of Baughman discloses the method of claim 1, and Baughman further discloses:

wherein the combining comprises identifying portions of the summarized NLP data model that are relevant to the live data, and combining the identified portions of the summarized NLP data model with the live data ([0035]-[0046] "Training a neural network 300 can include use of historical event data and can perform training using data of live event data area 2122").
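
Similarly, purely as an illustration of claim 4's "identifying portions of the summarized NLP data model that are relevant to the live data", the selection step might look like the following sketch; select_relevant and its toy inputs are assumptions for illustration, not material from the record:

    def select_relevant(model, live_text):
        # Identify the portions of the summarized model whose entries
        # also appear in the live data.
        live_tokens = set(live_text.lower().split())
        return {key: value for key, value in model.items() if key in live_tokens}

    # Combine only the identified portions with the live data.
    portions = select_relevant({"headache": 3, "fever": 1}, "headache today")
    combined = {"model_portions": portions, "live": "headache today"}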

Regarding claim 7, Jeong in view of Hayward and further in view of Baughman discloses the method of claim 1, and Baughman further discloses:

wherein the live data includes data representing human speech, and wherein the live NLP output comprises text representing the human speech ([0057]-[0059] receiving speech and transforming it to text by a speech-to-text server (STT server) 300 and a natural language server (NLP server) 500).
Regarding claim 8, Jeong in view of Hayward and further in view of Baughman discloses the method of claim 1, and Jeong further discloses:

wherein the live data includes data representing human speech, and wherein the live NLP output comprises structured data including text representing the human speech and data representing concepts corresponding to the human speech ([0088]-[0091] the NLP server may analyze the intention of the user with respect to the text pattern through morpheme analysis, syntax analysis, speech act analysis, and dialog processing analysis).
Regarding claim 9, Jeong in view of Hayward and further in view of Baughman discloses the method of claim 1, and Jeong further discloses:

at the live NLP processor, after performing NLP on the batch data: receiving second live data from the live data source ([0035]-[0046] "providing structured data for storage in live event data area 2122 and comment data area 2124. Manager system 110, running preparation and maintenance process 111 can periodically replicate data of live event data area 2122 into historical event data area 2121. Manager system 110 can structure records of live event data area 2122 and can structure data records of historical event data area 2121. As part of structuring data for use by historical event data area 2121, manager system 110 can vectorize records"; [0100] S125, the voice recognition unit 171 of the display device 100 checks whether the voice command received in step S101 has been received again); and

combining at least a second part of the summarized NLP data model with the second live data to produce second combined data ([0100] performing a function of the display device 100, corresponding to the matched voice recognition pattern).

Jeong does not explicitly teach, however Baughman does explicitly teach:

performing live NLP on the second combined data to produce second live NLP output in a third amount of time, wherein the third amount of time is shorter than the first amount of time ([0035]-[0046] "Manager system 110, running preparation and maintenance process 111 can periodically replicate data of live event data area 2122 into historical event data area 2121 ... Training a neural network 300 can include use of historical event data and can perform training using data of live event data area 2122").
Regarding claim 10, Jeong in view of Hayward and further in view of Baughman discloses the method of claim 1, and Jeong further discloses:

wherein performing NLP on the first combined data comprises performing live NLP on a first portion of the first live data in the first combined data at a first time, and performing live NLP again on the first portion of the first live data in the first combined data at a second time in response to determining that the first live data have changed since the first time ([0035]-[0046] "Manager system 110, running preparation and maintenance process 111 can periodically replicate data of live event data area 2122 into historical event data area 2121 ... Training a neural network 300 can include use of historical event data and can perform training using data of live event data area 2122").

Regarding claims 11, 14, and 17-20: Claims 11, 14, and 17-20 are the corresponding system claims to method claims 1, 4, and 7-10. Therefore, claims 11, 14, and 17-20 are rejected using the same rationale as applied to claims 1, 4, and 7-10 above.
5. Claims 2, 3, 5, 6, 12, 13, 15, and 16 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Jeong (US Pub. 2017/0256260) in view of Hayward et al. (US Pat. 11,783,422, provisional application filed on Apr. 3, 2018), further in view of Baughman et al. (US Pub. 2018/0336459), and further in view of Relangi et al. (US Pub. 2020/0372219).
Regarding claim 2, Jeong in view of Hayward and further in view of Baughman discloses the method of claim 1.

Jeong in view of Hayward and further in view of Baughman does not explicitly teach, however Relangi does explicitly teach:

wherein the second amount of time is at least ten times shorter than the first amount of time ([0098] "generate tasks/recommendations based on the machine learning models, which may be updated at regular intervals (e.g., monthly) and based on real-time events and conditions").

Therefore, it would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to incorporate the method of performing a function of the display device corresponding to the voice command as taught by Jeong in view of Hayward and further in view of Baughman with the method of training systems for labeling natural languages quickly, to improve systems for training a chatbot, or other natural language system, to respond to a broad variety of queries and adapt to new queries not yet labeled in training data (Relangi, [0006], [0087]).
Regarding claim 3, Jeong in view of Hayward and further in view of Baughman discloses the method of claim 1.

Jeong in view of Hayward and further in view of Baughman does not explicitly teach, however Relangi does explicitly teach:

wherein the second amount of time is at least one hundred times shorter than the first amount of time ([0098] "generate tasks/recommendations based on the machine learning models, which may be updated at regular intervals (e.g., monthly) and based on real-time events and conditions").
Regarding claim 5, Jeong in view of Hayward and further in view of Baughman discloses the method of claim 1.

Jeong in view of Hayward and further in view of Baughman does not explicitly teach, however Relangi does explicitly teach:

wherein performing live NLP on the first combined data is performed in less than 5 seconds ([0051] performing the process and generating a result within 5 seconds by the NLP device).
Regarding claim 6, Jeong in view of Hayward and further in view of Baughman discloses the method of claim 1.

Jeong in view of Baughman does not explicitly teach, however Relangi does explicitly teach:

wherein performing live NLP on the first combined data is performed in less than 1 second ([0051] performing the process and generating a result within a predetermined time by the NLP device).
Regarding claims 12, 13, 15, and 16: Claims 12, 13, 15, and 16 are the corresponding system claims to method claims 2, 3, 5, and 6. Therefore, claims 12, 13, 15, and 16 are rejected using the same rationale as applied to claims 2, 3, 5, and 6 above.
Conclusion

6. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Please see attached form PTO-892.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEONG-AH A. SHIN whose telephone number is (571) 272-5933. The examiner can normally be reached 9 AM - 3 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pierre-Louis Desir, can be reached at 571-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Seong-ah A. Shin
Primary Examiner
Art Unit 2659

/SEONG-AH A SHIN/
Primary Examiner, Art Unit 2659