UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS
P.O. Box 1450
Alexandria, Virginia 22313-1450
www.uspto.gov

APPLICATION NO.: 18/000,903
FILING DATE: 12/06/2022
FIRST NAMED INVENTOR: SHINICHI KAWANO
ATTORNEY DOCKET NO.: SYP334732US01
CONFIRMATION NO.: 4617

CHIP LAW GROUP
505 N. LAKE SHORE DRIVE
SUITE 250
CHICAGO, IL 60611

EXAMINER: WOO, STELLA L
ART UNIT: 2693
NOTIFICATION DATE: 11/08/2024
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding.

The time period for reply, if any, is set in the attached communication.

Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es):

docketing@chiplawgroup.com
eofficeaction@appcoll.com
sonydocket@evalueserve.com

PTOL-90A (Rev. 04/07)

Office Action Summary

Application No.: 18/000,903        Applicant(s): KAWANO et al.
Examiner: STELLA L WOO             Art Unit: 2693        AIA (FITF) Status: Yes

-- The MAILING DATE of this communication appears on the cover sheet with the correspondence address --

Period for Reply

A SHORTENED STATUTORY PERIOD FOR REPLY IS SET TO EXPIRE 3 MONTHS FROM THE MAILING DATE OF THIS COMMUNICATION.
- Extensions of time may be available under the provisions of 37 CFR 1.136(a). In no event, however, may a reply be timely filed after SIX (6) MONTHS from the mailing date of this communication.
- If NO period for reply is specified above, the maximum statutory period will apply and will expire SIX (6) MONTHS from the mailing date of this communication.
- Failure to reply within the set or extended period for reply will, by statute, cause the application to become ABANDONED (35 U.S.C. § 133).
- Any reply received by the Office later than three months after the mailing date of this communication, even if timely filed, may reduce any earned patent term adjustment. See 37 CFR 1.704(b).

Status

1) [ ] Responsive to communication(s) filed on ___.
   [ ] A declaration(s)/affidavit(s) under 37 CFR 1.130(b) was/were filed on ___.
2a) [ ] This action is FINAL.    2b) [X] This action is non-final.
3) [ ] An election was made by the applicant in response to a restriction requirement set forth during the interview on ___; the restriction requirement and election have been incorporated into this action.
4) [ ] Since this application is in condition for allowance except for formal matters, prosecution as to the merits is closed in accordance with the practice under Ex parte Quayle, 1935 C.D. 11, 453 O.G. 213.

Disposition of Claims*

5) [X] Claim(s) 1-20 is/are pending in the application.
   5a) Of the above claim(s) ___ is/are withdrawn from consideration.
6) [ ] Claim(s) ___ is/are allowed.
7) [X] Claim(s) 1-20 is/are rejected.
8) [ ] Claim(s) ___ is/are objected to.
9) [ ] Claim(s) ___ are subject to restriction and/or election requirement.

* If any claims have been determined allowable, you may be eligible to benefit from the Patent Prosecution Highway program at a participating intellectual property office for the corresponding application. For more information, please see http://www.uspto.gov/patents/init_events/pph/index.jsp or send an inquiry to PPHfeedback@uspto.gov.

Application Papers

10) [ ] The specification is objected to by the Examiner.
11) [X] The drawing(s) filed on 06 December 2022 is/are: a) [X] accepted or b) [ ] objected to by the Examiner.
    Applicant may not request that any objection to the drawing(s) be held in abeyance. See 37 CFR 1.85(a).
    Replacement drawing sheet(s) including the correction is required if the drawing(s) is objected to. See 37 CFR 1.121(d).

Priority under 35 U.S.C. § 119

12) [X] Acknowledgment is made of a claim for foreign priority under 35 U.S.C. § 119(a)-(d) or (f).
    Certified copies:
    a) [ ] All    b) [ ] Some**    c) [X] None of the:
    1. [X] Certified copies of the priority documents have been received.
    2. [ ] Certified copies of the priority documents have been received in Application No. ___.
    3. [ ] Copies of the certified copies of the priority documents have been received in this National Stage application from the International Bureau (PCT Rule 17.2(a)).
** See the attached detailed Office action for a list of the certified copies not received.

Attachment(s)

1) Notice of References Cited (PTO-892)
2) Information Disclosure Statement(s) (PTO/SB/08a and/or PTO/SB/08b), Paper No(s)/Mail Date ___
3) [ ] Interview Summary (PTO-413), Paper No(s)/Mail Date ___
4) [ ] Other: ___

U.S. Patent and Trademark Office    PTOL-326 (Rev. 11-13)    Office Action Summary    Part of Paper No./Mail Date 20241031
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101

2. 35 U.S.C. 101 reads as follows:

    Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

3. Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because it is directed to a computer program, which does not fall within at least one of the four categories of patent eligible subject matter recited in 35 U.S.C. 101 (i.e., process, machine, manufacture, or composition of matter).
Claim Rejections - 35 USC § 112

4. The following is a quotation of 35 U.S.C. 112(b):

    (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

    The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

5. Claim 8 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

In claim 8, it is not clear what is considered to be "careful" speech generation. What is meant by "careful," and how is "careful" speech generation determined?
Claim Rejections - 35 USC § 102

6. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

7. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

    A person shall be entitled to a patent unless —

    (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

    (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
8. Claim(s) 1, 10-16, 19-20 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Taki et al. (US 2020/0075015 A1, "Taki").

The applied reference has a common applicant and joint inventor with the instant application. Based upon the earlier effectively filed date of the reference, it constitutes prior art under 35 U.S.C. 102(a)(2). This rejection under 35 U.S.C. 102(a)(2) might be overcome by: (1) a showing under 37 CFR 1.130(a) that the subject matter disclosed in the reference was obtained directly or indirectly from the inventor or a joint inventor of this application and is thus not prior art in accordance with 35 U.S.C. 102(b)(2)(A); (2) a showing under 37 CFR 1.130(b) of a prior public disclosure under 35 U.S.C. 102(b)(2)(B) if the same invention is not being claimed; or (3) a statement pursuant to 35 U.S.C. 102(b)(2)(C) establishing that, not later than the effective filing date of the claimed invention, the subject matter disclosed in the reference and the claimed invention were either owned by the same person or subject to an obligation of assignment to the same person or subject to a joint research agreement.
As to claims 1, 19, 20, Taki discloses an information processing apparatus comprising a control unit (conversation assistance device 10 includes information processing unit 30, para. 0051) configured to:

determine speech generated by a first user on the basis of sensing information of at least one sensor apparatus sensing at least one of the first user and a second user communicating with the first user on the basis of the speech generation of the first user (speech recognition unit 31 converts speech of user A sensed by sound collection unit 21, para. 0052, 0100, 0152); and

control information output to the first user on the basis of a result of the determination of the speech generation of the first user (feedback control unit 40 displays feedback for instructing the speaking user A to slow down the speaking speed, para. 0165-0166).
As to claim 10, Taki discloses: wherein the sensing information includes a first voice signal of the first user, wherein the control unit causes a display apparatus to display text acquired by performing voice recognition of the voice signal of the first user, the information processing apparatus further comprising a communication unit configured to transmit the text to a terminal apparatus of the second user, wherein the control unit acquires information relating to an understanding status of the second user for the text from the terminal apparatus and controls information output to the first user in accordance with the understanding status of the second user (it is determined that user B has finished reading the displayed speech text, para. 0041-0042; detection result is displayed to the speaker user A, para. 0053-0054).
As to claim 11, Taki discloses: wherein the information relating to the understanding status includes information relating to whether or not the second user has completed reading the text, information relating to a text portion of the text of which reading by the second user has been completed, information relating to a text portion of the text that is currently being read by the second user, or information relating to a text portion of the text that has not been read by the second user (feedback includes whether or not the user B has already read the speech text, para. 0066).
As to claim 12, Taki discloses: wherein the control unit acquires the information relating to whether or not the text has been completed to be read on the basis of a direction of a visual line of the second user (user B's gaze has moved outside of the screen after gazing at the screen, para. 0054).

As to claim 13, Taki discloses: wherein the control unit acquires the information relating to whether or not the text has been completed to be read by the second user on the basis of a position of the visual line of the second user in a depth direction (para. 0055).
As to claim 14, Taki discloses: wherein the control unit acquires the information relating to the text portion on the basis of a speed at which the second user reads characters (feedback includes reading speed of the user B, para. 0066).

As to claim 15, Taki discloses: wherein the control unit causes the display apparatus to display information for identifying the text portion (speech text is displayed, para. 0066).

As to claim 16, Taki discloses: wherein the control unit, as the information for identifying the text portion, changes a color of the text portion, changes a size of characters of the text portion, changes a background of the text portion, decorates the text portion, moves the text portion, vibrates the text portion, vibrates a display area of the text portion, or transforms the display area of the text portion (size of the speech text may be changed, para. 0054).
9. Claim(s) 1, 7, 19-20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Teshima (US 2017/0243520 A1).

As to claims 1, 19, 20, Teshima discloses an information processing apparatus (wearable device 10, Fig. 1) comprising a control unit (controller 30, para. 0064, 0066) configured to:

determine speech generated by a first user on the basis of sensing information of at least one sensor apparatus sensing at least one of the first user and a second user communicating with the first user on the basis of the speech generation of the first user (controller 30 analyzes audio signals picked up by microphones 22 and converts the speech into text, para. 0065-0066); and

control information output to the first user on the basis of a result of the determination of the speech generation of the first user (controller 30 controls the output section 28 so as to display information indicating the direction of emitted sound, an icon indicating the type of sound, or the speech content, para. 0070; speech-to-text caption of a speaker, Fig. 6, para. 0091-0100).
As to claim 7, Teshima discloses: wherein the sensing information includes a voice signal of the first user, and wherein the control unit is configured to: cause a display apparatus to display text acquired by voice recognition of the voice signal of the first user; and cause the display apparatus to display information for identifying a text portion for which the determination of the speech generation in the text displayed in the display apparatus is a predetermined determination result (controller 30 controls the output section 28 so as to display information indicating the direction of emitted sound, an icon indicating the type of sound, or the speech content, para. 0070; speech-to-text caption of a speaker, Fig. 6, para. 0091-0100).
Claim Rejections - 35 USC § 103

10. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

11. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

    A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
12. Claim(s) 2, 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Teshima in view of Srivastava et al. (US 2019/0318742 A1, "Srivastava").

Teshima differs from claim 2 in that it does not disclose:

wherein the sensing information includes a first voice signal of the first user sensed using the sensor apparatus of a first user side and a second voice signal of the first user sensed using the sensor apparatus of a second user side, and

wherein the control unit determines the speech generation on the basis of comparison between first text acquired by performing voice recognition of the first voice signal and second text acquired by performing voice recognition of the second voice signal.

Srivastava teaches performing automatic speech recognition of speech detected during a meeting, at a plurality of client devices (102-1, 102-2, etc., Figs. 1, 4; para. 0036-0039), and determining a final transcript on the basis of comparison between ASR results from each client device (Figs. 5, 6A, 6B, 6C; para. 0042-0046, 0048-0052). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Teshima with the above teaching of Srivastava in order to provide an accurate recognition of speech.
As to claim 17, Teshima in view of Srivastava discloses: wherein the sensing information includes a voice signal of the first user, wherein the control unit is configured to cause a display apparatus to display text acquired by voice recognition of the voice signal of the first user, the information processing apparatus further comprising a communication unit configured to transmit the text to a terminal apparatus of the second user, wherein the communication unit receives a text portion of the text that is designated by the second user, and wherein the control unit causes the display apparatus to display information for identifying the text portion received by the communication unit (Srivastava: each client device can perform automatic speech recognition and sends the recognized speech to a master device, para. 0016, and/or other client devices, para. 0040-0041).
13. Claim(s) 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over Teshima in view of Daredia et al. (US 2020/0403816 A1, "Daredia").

Teshima differs from claim 3 in that it does not disclose: wherein the sensing information includes a first voice signal of the first user sensed using the sensor apparatus of a first user side and a second voice signal of the first user sensed using the sensor apparatus of a second user side, and wherein the control unit determines the speech generation on the basis of comparison between a signal level of the first voice signal and a signal level of the second voice signal.

Daredia teaches capturing and converting speech signals during a meeting using a plurality of client devices (Fig. 3), and attributing speakers based on highest volume level (para. 0008, 0079). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Teshima with the above teaching of Daredia in order to provide an accurate meeting transcript.
14. Claim(s) 4-6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Teshima in view of Sendai et al. (US 2017/0255447 A1, "Sendai").

Teshima differs from claim 4 in that it does not disclose: wherein the sensing information includes distance information between the first user and the second user, and wherein the control unit determines the speech generation on the basis of the distance information.

Sendai teaches a head mounted display device which detects the distance to a person or object present in the imaging direction of the camera from the user, and displays content and outputs sound based on the detected distance (Abstract, para. 0084-0090, 0128). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Teshima with the above teaching of Sendai in order to provide a user with an accurate sense of distance with viewed targets.

As to claim 5, Teshima in view of Sendai discloses: wherein the sensing information includes an image of at least a part of a body of the first user or the second user, and wherein the control unit determines the speech generation on the basis of a size of the image of the part of the body included in the image (Sendai: the distance detection unit 173 acquires the distance to the target based on the size of the target image detected by the target detection unit 171 in the captured image of the camera 61, para. 0086).

As to claim 6, Teshima in view of Sendai discloses: wherein the sensing information includes an image of at least a part of a body of the first user, and wherein the control unit determines the speech generation in accordance with a length of a time in which a predetermined part of the body of the first user is included in the image (Sendai: voice is output according to the state of the target, such as the mouth of a person being detected as open, para. 0127).
15. Claim(s) 8-9, 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Teshima in view of Taki et al. (US 2019/0147870 A1, "Taki '870").

Teshima differs from claim 8 in that it does not disclose: wherein the determination of the speech generation is determination of whether the speech generation of the first user is careful speech generation for the second user, and wherein the predetermined determination result is a determination result representing that the speech generation of the first user is not careful speech generation for the second user.

Taki '870 teaches determining a factor that may cause an error in speech recognition, such as the utterance being too fast, too slow, not being clear, or too much background noise (Abstract, Fig. 11, para. 0154-0167, 0211-0231). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Teshima with the above teaching of Taki '870 in order to improve communication between users.
As to claim 9, Teshima in view of Taki '870 discloses: wherein, as the information for identifying the text portion, a color of the text portion is changed, a size of characters of the text portion is changed, a background of the text portion is changed, the text portion is decorated, the text portion is moved, the text portion is vibrated, a display area of the text portion is vibrated, or a display area of the text portion is transformed by the control unit (Taki '870: portion with possible error is highlighted by color, para. 0157, Fig. 4).

As to claim 18, Teshima in view of Taki '870 discloses: a paralanguage information acquiring unit configured to acquire paralanguage information of the first user on the basis of the sensing information acquired by sensing the first user; a text decorating unit configured to decorate text acquired by performing voice recognition of a voice signal of the first user on the basis of the paralanguage information; and a communication unit configured to transmit the decorated text to a terminal apparatus of the second user (Taki '870: Figs. 4-8).
Conclusion

16. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Reece et al. (US 2021/0264921 A1) teach determining an emotional label based on a speaker's tone, pitch, timing, voice quality, etc.

Pogorelik (US 2017/0287355 A1) teaches transcribing speech to text, determining a readability score, and presenting a speech clarity indicator to gauge participants' understanding.

Didik (US 2017/0186431 A1) teaches a wearable device which displays text converted from speech for a hearing-impaired person.

Lindberg (US 9560316 B1) teaches providing displayed speech-to-text output as feedback, which may prompt the user to speak louder or move closer to a microphone.
17. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Stella L Woo, whose telephone number is (571) 272-7512. The examiner can normally be reached Monday - Friday, 8 a.m. to 5 p.m.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ahmad Matar, can be reached on 571-272-7488. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Stella L. Woo/
Primary Examiner, Art Unit 2693