IN THE UNITED STATES PATENT AND TRADEMARK OFFICE

In re Patent of: Howard G. Sachs
U.S. Patent No.: 5,463,750
Attorney Docket No.: 39521-0009IP1
Issue Date: Oct. 31, 1995
Appl. Serial No.: 08/146,818
Filing Date: November 2, 1993
Title: METHOD AND APPARATUS FOR TRANSLATING VIRTUAL ADDRESSES IN A DATA PROCESSING SYSTEM HAVING MULTIPLE INSTRUCTION PIPELINES AND SEPARATE TLB'S FOR EACH PIPELINE

Mail Stop Patent Board
Patent Trial and Appeal Board
U.S. Patent and Trademark Office
P.O. Box 1450
Alexandria, VA 22313-1450

PETITION FOR INTER PARTES REVIEW OF UNITED STATES PATENT NO. 5,463,750 PURSUANT TO 35 U.S.C. §§ 311-319, 37 C.F.R. § 42

TABLE OF CONTENTS

I. MANDATORY NOTICES UNDER 37 C.F.R. § 42.8(a)(1)
II. PAYMENT OF FEES - 37 C.F.R. § 42.103
III. REQUIREMENTS FOR IPR UNDER 37 C.F.R. § 42.104
   A. Grounds for Standing under § 42.104(a)
   B. Challenge Under § 42.104(b) and Relief Requested
   C. Claim Construction under 37 C.F.R. § 42.104(b)(3)
      1. "master translation memory" (claims 8-11)
      2. "a direct address translation unit" (claims 8 and 11)
IV. SUMMARY OF THE '750 PATENT
   A. Brief Description of the '750 Patent
   B. Summary of the Prosecution History of the '750 Patent
V. MANNER OF APPLYING CITED PRIOR ART TO EVERY CLAIM FOR WHICH IPR IS REQUESTED, THUS ESTABLISHING A REASONABLE LIKELIHOOD THAT AT LEAST ONE CLAIM OF THE '750 PATENT IS UNPATENTABLE
   A. [GROUND 1] Titan I in view of Titan II Renders Claim 8 Obvious
      1. Introduction
      2. Memory Structure and Translation Tables
      3. TLB Configuration and the Handling of TLB Misses
      4. Summary
   B. [GROUND 2] Titan I in view of Titan II and Hattersley Renders Claims 9-12 Obvious
      1. Claim 9
      2. Claim 10
      3. Claim 11
      4. Claim 12
   C. [GROUND 3] Titan I in view of Titan II and Denning Renders Claim 8 Obvious
   D. [GROUND 4] Titan I in view of Titan II and further in view of Hattersley and Denning Renders Claims 9-12 Obvious
VI. REDUNDANCY
VII. CONCLUSION

EXHIBITS

APL-1001  U.S. Patent Number 5,463,750 to Howard G. Sachs ("the '750 patent")
APL-1002  Excerpts from the Prosecution History of the '750 Patent ("the Prosecution History")
APL-1003  Summons in a Civil Action for Vantage Point Technology Inc. v. Apple Inc., Civil Action No. 2:13-cv-989 (E.D. Tex.)
APL-1004  Declaration of Dr. Donald Alpert ("Alpert Declaration")
APL-1005  Curriculum Vitae of Dr. Donald Alpert
APL-1006  Patent Owner's Preliminary Response to Petition for Inter Partes Review of U.S. Patent No. 5,463,750, Case IPR2014-00467 (USPTO PTAB)
APL-1007  Daniel P. Siewiorek et al., THE ARCHITECTURE OF SUPERCOMPUTERS: TITAN, A CASE STUDY (1991) ("Titan I")
APL-1008  Tom Diede et al., The Titan Graphics Supercomputer Architecture, IEEE Computer 21(9), 13-30, September 1988 ("Titan II")
APL-1009  F. H. McMahon, The Livermore FORTRAN Kernels: A Computer Test of the Numerical Performance Range, Lawrence Livermore National Laboratory, Report UCRL-53745, December 1986 ("McMahon")
APL-1010  U.S. Patent Number 5,341,485 to Hattersley et al. ("Hattersley")
APL-1011  J. J. Dongarra, LINPACK WORKING NOTE #3: Fortran BLAS Timing, Argonne National Laboratory, Argonne, Illinois, Report ANL-80-24, February 1980 ("Dongarra")
APL-1012  Decision Denying Institution of Inter Partes Review of U.S. Patent No. 5,463,750, Case IPR2014-00467 (USPTO PTAB)
APL-1013  C. R. Moore, The PowerPC 601 Microprocessor, Compcon Spring '93, Digest of Papers, pp. 109-116, Feb. 22-26, 1993 ("PowerPC-1")
APL-1014  M. S. Allen et al., Multiprocessing Aspects of the PowerPC 601, Compcon Spring '93, Digest of Papers, pp. 117-126, Feb. 22-26, 1993 ("PowerPC-2")
APL-1015  Peter J. Denning, Virtual Memory, Computing Surveys, Vol. 2, No. 3, September 1970 ("Denning")
APL-1016  Petition for Inter Partes Review of U.S. Patent No. 5,463,750, Case IPR2014-01105 (USPTO PTAB)
APL-1017  Patent Owner's Preliminary Response to Petition for Inter Partes Review of U.S. Patent No. 5,463,750, Case IPR2014-01105 (USPTO PTAB)
APL-1018  United States Copyright Office Public Catalog record of Compcon Spring '93, Digest of Papers, 22-26 February, 1993
APL-1019  WEBSTER'S Ninth New Collegiate Dictionary, cover page, table of contents and pp. 1342-43 ("Webster's Dictionary")
APL-1020  Petition for Inter Partes Review of U.S. Patent No. 5,463,750, Case IPR2015-00175 (USPTO PTAB)
APL-1021  U.S. Patent Number 4,933,835 to Howard G. Sachs ("the '835 patent")
APL-1022  United States Copyright Office Public Catalog record for Daniel P. Siewiorek et al., THE ARCHITECTURE OF SUPERCOMPUTERS: TITAN, A CASE STUDY (1991)
APL-1023  U.S. Patent Number 4,812,981 to Carl Chan et al., citing Peter J. Denning, Virtual Memory, Computing Surveys, Vol. 2, No. 3, September 1970
APL-1024  The CONCISE OXFORD DICTIONARY of Current English, Eighth Edition, cover page, contents and pp. 1396-97 ("The Oxford Dictionary")

Apple Inc. ("Petitioner" or "Apple") petitions for Inter Partes Review ("IPR") under 35 U.S.C. §§ 311-319 and 37 C.F.R. § 42 of claims 8-12 ("the Challenged Claims") of U.S. Patent No. 5,463,750 ("the '750 patent").

I. MANDATORY NOTICES UNDER 37 C.F.R. § 42.8(a)(1)

Petitioner, Apple Inc., the real party-in-interest, designates W. Karl Renner, Reg. No. 41,265, as Lead Counsel and Roberto Devoto, Reg. No. 55,108, as Backup Counsel, both available at 3200 RBC Plaza, 60 South Sixth Street, Minneapolis, MN 55402, or electronically by email at IPR39521-0009IP1@fr.com. Apple is not aware of any disclaimers or reexamination certificates for the '750 patent. The patent owner has asserted the '750 patent in the following cases in the Eastern District of Texas: 2:13-cv-00908; 2:13-cv-00909; 2:13-cv-00910; 2:13-cv-00911; 2:13-cv-00912; 2:13-cv-00913; 2:13-cv-00914; 2:13-cv-00915; 2:13-cv-00916; 2:13-cv-00917; 2:13-cv-00918; 2:13-cv-00920; 2:13-cv-00921; 2:13-cv-00923; 2:13-cv-00924; 2:13-cv-00925; 2:13-cv-00926; 2:13-cv-00927; 2:13-cv-00928; 2:13-cv-00929; 2:13-cv-00930; 2:13-cv-00931; 2:13-cv-00989; 2:13-cv-00990; 2:13-cv-00991; 2:13-cv-00992; 2:13-cv-00993. Apple was first served on November 26, 2013. See APL-1003. The '750 patent is also the subject of IPR2014-00467, IPR2014-01105 and IPR2015-00175. Also, Apple filed IPR2015-00192 today on different, non-redundant grounds, as explained in Section VI.

II. PAYMENT OF FEES - 37 C.F.R. § 42.103

Apple authorizes the Patent and Trademark Office to charge Deposit Account No. 06-1050 for the fee set in 37 C.F.R. § 42.15(a) for this Petition and further authorizes any additional fees to be charged to this Deposit Account.

III. REQUIREMENTS FOR IPR UNDER 37 C.F.R. § 42.104

A. Grounds for Standing under § 42.104(a)

Apple certifies that the '750 patent is available for IPR, and that Apple is not barred or estopped from requesting this review on the below-identified grounds. The present petition is being filed within one year of service of a complaint against Apple Inc. in the 2013 Apple litigation.

B. Challenge Under § 42.104(b) and Relief Requested

Apple requests IPR of the Challenged Claims on the grounds set forth in the table below, and requests that each of the Challenged Claims be found unpatentable. An explanation of unpatentability under the statutory grounds identified below is provided in the form of the detailed description that follows, indicating where each element can be found in the cited prior art, and the relevance of that prior art. Additional explanation and support for each ground of rejection is set forth in APL-1004, Declaration of Dr. Donald Alpert ("Alpert Declaration").

Ground     '750 Patent Claims   Basis for Rejection
Ground 1   8                    § 103(a): Titan I and Titan II
Ground 2   9-12                 § 103(a): Titan I, Titan II and Hattersley
Ground 3   8                    § 103(a): Titan I, Titan II and Denning
Ground 4   9-12                 § 103(a): Titan I, Titan II, Hattersley and Denning

The '750 patent issued from U.S. patent application number 08/146,818, which was filed on November 2, 1993, and which claims no priority. Accordingly, the earliest effective filing date for the Challenged Claims is November 2, 1993. Titan I (APL-1007), Titan II (APL-1008) and Denning (APL-1015) each qualify as prior art under 35 U.S.C. § 102(b). Titan I was published on August 6, 1991, see APL-1022, and it includes a reference to Titan II, see APL-1007 at p. 195. Denning was available to the public no later than March 14, 1989, see APL-1023. Hattersley (APL-1010) qualifies as prior art under 35 U.S.C. § 102(e), as Hattersley was filed on May 7, 1991 and issued as a patent on August 23, 1994.

C. Claim Construction under 37 C.F.R. § 42.104(b)(3)

The subject patent is expired, and the Board's review of the claims of an expired patent is similar to that of a district court's review. In re Rambus, Inc., 694 F.3d 42, 46 (Fed. Cir. 2012). The principle set forth by the court in Phillips v. AWH Corp., 415 F.3d 1303, 1312, 1327 (Fed. Cir. 2005) (words of a claim are

generally given their ordinary and customary meaning as understood by a person of ordinary skill in the art in question at the time of the invention, construing to preserve validity in case of ambiguity) should be applied, since the expired claims are not subject to amendment. Other than the claim terms addressed immediately below, for which information concerning constructions appropriate for this Petition is set forth, the remaining terms in the claims are not believed to require additional clarification for purposes of the present IPR.

1. "master translation memory" (claims 8-11)

The term "master translation memory" appears in the claims of the '750 patent, including claim 8. The plain language of claim 8 provides context that informs the meaning of this term. Specifically, according to claim 8, the master translation memory maintains translation data that is stored into the first/second translation buffers when that data is determined to be missing from the first/second translation buffers, i.e., when a TLB miss occurs. Claim 8 recites "a master translation memory for storing translation data," and it further requires "storing the translation data for the [first/second] virtual address from the master translation memory into the [first/second] translation buffer when the translation data for the [first/second] virtual address is not stored in the [first/second] translation buffer."
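The miss-and-refill relationship that claim 8 recites between a translation buffer and the master translation memory follows the familiar TLB pattern. Purely for illustration (this sketch is not part of the record, and the `TLB` class and all names in it are hypothetical, not the patent's implementation), that behavior can be modeled as:

```python
# Illustrative model only: a small translation buffer (TLB) backed by a
# "master translation memory" holding the complete set of translation data.
# On a miss, the translation is read from the master memory and stored into
# the buffer so that future lookups hit.

MASTER_TRANSLATION_MEMORY = {0x1: 0xA, 0x2: 0xB}   # virtual page -> real page

class TLB:
    def __init__(self):
        self.entries = {}     # subset of the master translation memory
        self.misses = 0

    def translate(self, virtual_page):
        if virtual_page not in self.entries:                  # TLB miss
            self.misses += 1
            # fetch from the master translation memory and refill the buffer
            self.entries[virtual_page] = MASTER_TRANSLATION_MEMORY[virtual_page]
        return self.entries[virtual_page]                     # real page

tlb = TLB()
first = tlb.translate(0x1)     # miss: refilled from the master memory
second = tlb.translate(0x1)    # hit: served from the buffer
```

In this model the master memory stores translation data for every page, while the buffer holds only the subset most recently needed, mirroring the subset relationship recited in claim 8.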

In the context of the '750 patent specification, a virtual memory address is translated by first determining whether translation data is stored in buffers known as Translation Lookaside Buffers (TLBs), see APL-1001 at 2:64-65 and Abstract, and the absence of data in such translation buffers is known (commonly) as a "TLB miss," which inspires a main memory access to translation data thereafter stored in the translation buffer used to support future lookups. APL-1001 at 8:7-8, 8:22-27, 8:39-44. Consistent with this context, claim 8 recites the use of the master translation memory for storing the translation data that is provided to the first/second translation buffers as a result of a TLB miss, that is, when translation data is determined to be missing from the translation buffers (TLBs). See also APL-1004 at ¶¶ 47-48.

Accordingly, without resolving whether the term "master translation memory" implicates additional limitations, for purposes of this Petition, Apple resolves that "master translation memory" is structure in memory that maintains translation data, including translation data that is determined to be missing from the first/second translation buffers, i.e., when a TLB miss occurs. This construction is not inconsistent with the construction advanced in the related case IPR2014-01105, see APL-1016 at p. 5, which was not disputed by the Patent Owner in its Preliminary Response filed on October 15, 2014, see APL-1017 at pp. 1-2.

2. "a direct address translation unit" (claims 8 and 11)

The term "a direct address translation unit for translating virtual addresses into real addresses" appears in the claims of the '750 patent, including claim 8. The plain language of claim 8 provides context that informs the meaning of this term, and, read in the context of the specification, reveals that the claimed direct address translation unit requires a single direct address translation unit that is activated to translate virtual addresses from multiple pipelines into real addresses.

In its decision of September 5, 2014, in response to the related petition IPR2014-00467, the Board recognized that the United States Court of Appeals for the Federal Circuit has "repeatedly emphasized that an indefinite article 'a' or 'an' in patent parlance carries the meaning of 'one or more' in open-ended claims containing the transitional phrase 'comprising.'" See APL-1012 at p. 14 (citing Baldwin Graphic Sys., Inc. v. Siebert, Inc., 512 F.3d 1338, 1342 (Fed. Cir. 2008) (internal citations and quotation marks omitted)). The Board further recognized that "[a]n exception to the general rule that 'a' or 'an' means more than one only arises when the language of the claims themselves, the specification, or the prosecution history necessitate a departure from the rule." Id. (citing Baldwin Graphic Sys., Inc., at 1342-43). Citing Abtox, Inc. v. Exitron Corp., 122 F.3d 1019, 1024 (Fed. Cir. 1997) and Insituform Techs., Inc. v. Cat Contracting, Inc., 99 F.3d 1098, 1105-06 (Fed. Cir. 1996), the Board decided that both the claim language and the written description of the '750 patent overcome any rule (or rebut any presumption) that "a direct address translation unit" should be construed to mean one or more direct address translation units. Id. at pp. 14-15.

In particular, the Board relied upon the following limitations of claim 1 and similar limitations recited in claim 8 as supporting the conclusion that claims 1 and 8 require a single direct address translation unit: "the direct address translation unit including a master translation memory for storing translation data"; "the direct address translation unit for translating a virtual address into a corresponding real address"; "first direct address translating means, coupled to the direct address translation unit"; and "second direct address translating means, for activating the direct address translation unit to translate the second virtual address." Id. at p. 15 (emphasis in original). The Board further cited to various portions of the '750 patent, including Figure 5 and its associated text, as providing additional support that claims 1 and 8 should be construed as directed to a single direct address translation unit that is activated to translate virtual addresses from multiple pipelines. Id. at pp. 15-16.

Consistent with the Board's decision of September 5, 2014 in the related petition IPR2014-00467, and without resolving whether the term "a direct address translation unit" implicates additional limitations, Apple resolves that "a direct address translation unit," as recited in claim 8, requires a single direct address translation unit that is activated to translate virtual addresses from multiple pipelines into real addresses. This construction is not inconsistent with the construction advanced in the related petition IPR2014-01105, see APL-1016 at pp. 6-9, which was not disputed by the Patent Owner in its Preliminary Response filed on October 15, 2014, see APL-1017 at pp. 1-2. See also APL-1004 at ¶¶ 49-54.

IV. SUMMARY OF THE '750 PATENT

A. Brief Description of the '750 Patent

The '750 patent describes a computing system having multiple pipelines wherein a separate TLB is provided for each pipeline requiring address translation services. APL-1001, 4:12-15. In its background section, the '750 patent discloses that a typical computing system 10 "which employs virtual addressing of data," id. at 1:13-14, includes "[an] address register 154 [that] receives an input virtual address which references data used by an instruction issued to one of instruction pipelines 14A-H," a translation memory (e.g., a translation lookaside buffer (TLB)) 158 and comparator 170 for initially determining whether data requested by the input virtual address resides in main memory 34, and a dynamic translation unit (DTU) 162 for accessing page tables in main memory 34, id. at 2:61-3:1. See also Figs. 1 and 4. In this context, the '750 patent describes the physical memory of the computing system as "real memory" and as "main memory": "the physical (real) memory available in main memory 34," id. at 1:56-57 (emphasis added), and 4

page tables suffice for a machine with 16 megabytes of physical main memory, id. at 2:19-20 (emphasis added). See also APL-1004 at ¶¶ 15-30 for further discussion of background technology that is relevant to the '750 patent.

In such a computing system, if an input virtual address does not match a virtual address tag in the TLB, then a miss signal is provided to DTU 162. Id. at 3:33-35. The miss signal indicates that the requested data is not currently stored in main memory 34, or else the data is in fact present in main memory 34 but the corresponding entry in TLB 158 has been deleted. When the miss signal is generated, "DTU 162 accesses the page tables in main memory 34 to determine whether ... the requested data is currently stored in main memory 34. If not, then DTU 162 instructs data transfer unit 42 through a communication path 194 to fetch the page containing the requested data from mass storage device 30 ... TLB 158 is updated ... and instruction issuing resumes." Id. at 3:40-47.

The '750 patent notes that "[w]hile [the above] mode of operation is ordinarily desirable, it may have disadvantages" when a single memory array is used to service address translation requests from multiple pipelines. Id. at 3:60-63. Accordingly, the purported invention of the '750 patent is directed to a method and apparatus for translating virtual addresses in a computing system having multiple pipelines wherein a separate TLB is provided for each pipeline requiring address translation services. Id. at 4:11-15. Fig. 5 of the '750 patent, see, e.g., APL-1004

at p. 22, shows a particular embodiment of an apparatus 200 according to the present invention for translating virtual addresses in a computing system such as computing system 10 shown in FIG. 1. "Apparatus 200 includes, for example, a load instruction pipeline 210A, a load instruction pipeline 210B, and a store instruction pipeline 210C. Pipelines 210A-C communicate virtual addresses to address registers 214A-C. ... Relevant portions of the virtual addresses stored in address registers 218A-C are communicated to TLB's 222A-C." Id. at 4:56-67.

Describing the operation of the apparatus 200 of Fig. 5, the '750 patent discloses that "update control circuit 240 controls the operation of DTU 162 and updates TLB's 222A-C whenever there is a miss signal generated on one or more of communication paths 238A-C." That is, update control circuit 240 "activates DTU 162 whenever a miss signal is received over communication path 238A and stores the desired translation information in TLB 222A ... activates DTU 162 whenever a miss signal is received over communication path 238B and stores the desired translation information in TLB 222B ... activates DTU 162 whenever a miss signal is received over communication path 238C and stores the desired translation information in TLB 222C." Id. at 5:9-25. Independent claim 8 encompasses an implementation of the above-described operation using a computing system with two TLBs, with the DTU 162 being activated to fetch translation information from main memory 34 when a miss signal is received for each of the two

TLBs, and storing the desired translation information in the respective TLB. The '750 patent further describes that every time DTU 162 is activated for translating a virtual address supplied by pipeline 210A, then "update control circuit stores the translation data in each of TLB's 222A-C," id. at 5:44-47, which is disclosure related to features recited in the dependent claims 9 and 10.

B. Summary of the Prosecution History of the '750 Patent

The '750 patent issued on October 31, 1995, from U.S. Application No. 08/146,818 ("the '818 application"), which was filed on November 2, 1993, with 14 claims. Claims 1 and 8 were independent, with claims 2-7 and 9-14 depending from claims 1 and 8, respectively. See APL-1002 at pp. 67-86. The as-filed claims of the application were twice rejected, and the patent issued after one amendment to each of claims 1 and 8. The first rejection was on July 28, 1994, see id. at pp. 100-106, to which the Applicant filed a response on October 3, 1994, see id. at pp. 107-112. Following the second rejection of November 3, 1994, see id. at pp. 113-117, the Applicant amended the claims in the response filed on March 6, 1995, see id. at pp. 118-138. The Office issued a Notice of Allowance on May 8, 1995. No reason for allowance was given. See id. at pp. 139-142. The Applicant paid the issue fee on August 4, 1995, and the '750 patent issued on October 31, 1995. See id. at pp. 149-150.
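The apparatus-200 operation summarized in Section IV.A (one TLB per pipeline, a single shared DTU activated on any miss, and the cross-updating variant related to dependent claims 9 and 10) can be sketched, purely for illustration (all class and variable names here are hypothetical and are not the patent's implementation), as:

```python
# Illustrative model only: separate TLBs for multiple instruction pipelines,
# all serviced by one shared translation unit (the "DTU") that consults the
# page tables held in main memory whenever any pipeline's TLB misses.

PAGE_TABLES = {0x10: 0xA0, 0x20: 0xB0}    # main-memory page tables

class DTU:
    """Single shared translation unit; counts how often it is activated."""
    def __init__(self):
        self.activations = 0

    def fetch(self, virtual_page):
        self.activations += 1
        return PAGE_TABLES[virtual_page]

class Apparatus:
    def __init__(self, n_pipelines, cross_update=False):
        self.dtu = DTU()
        self.tlbs = [dict() for _ in range(n_pipelines)]  # one TLB per pipeline
        self.cross_update = cross_update

    def translate(self, pipeline, virtual_page):
        tlb = self.tlbs[pipeline]
        if virtual_page not in tlb:                  # miss on this pipeline's TLB
            real_page = self.dtu.fetch(virtual_page)  # activate the shared DTU
            # store the translation in this TLB only, or in every TLB when
            # cross-updating is enabled
            for target in (self.tlbs if self.cross_update else [tlb]):
                target[virtual_page] = real_page
        return tlb[virtual_page]

plain = Apparatus(3)
plain.translate(0, 0x10)           # miss on pipeline 0: DTU activated
plain.translate(1, 0x10)           # pipeline 1 misses again: DTU activated
cross = Apparatus(3, cross_update=True)
cross.translate(0, 0x10)           # miss: translation stored in all three TLBs
hit = cross.translate(1, 0x10)     # hit: no further DTU activation needed
```

With cross-updating enabled, a single DTU activation populates every pipeline's buffer, so a later lookup of the same page from another pipeline hits without re-activating the translation unit.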

V. MANNER OF APPLYING CITED PRIOR ART TO EVERY CLAIM FOR WHICH IPR IS REQUESTED, THUS ESTABLISHING A REASONABLE LIKELIHOOD THAT AT LEAST ONE CLAIM OF THE '750 PATENT IS UNPATENTABLE

The references cited in this petition demonstrate that, at the time the '750 patent was filed, it was notoriously well known to use different TLBs associated with different instruction pipelines, and to use a single address translation unit to service misses in each of those TLBs by accessing translation or page tables in the main memory. Moreover, the prior art demonstrates that it was also well known to update entries across multiple TLBs whenever any one TLB was updated. Indeed, the present references address each feature alleged missing from the prior art with regard to the asserted claims during original prosecution, and also during the recent post-grant proceedings.

Specifically, in original prosecution, with respect to the Challenged Claims, novelty was said to have been based on recitation of "a first" and "a second instruction pipeline," "a first translation buffer, associated with the first instruction pipeline, for storing a first subset of translation data from the master translation memory," and "a second translation buffer, associated with the second instruction pipeline, for storing a second subset of translation data from the master translation memory." APL-1002 at p. 129. As detailed in the following sections, the combination of Titan I and Titan II, which were not of record during prosecution of the '750 patent, yields this very feature.

More recently, when addressing prior art raised in IPR2014-00467, Patent Owner argued that the claims distinguish then-cited prior art by reciting a single direct address translation unit that serves the first translation buffer, associated with the first instruction pipeline, and the second translation buffer, associated with the second instruction pipeline. See APL-1006, pp. 4-5, 7, 12-13. Yet, as described in this petition, these features also are present in the combination of Titan I and Titan II, neither of which was before the Board in IPR2014-00467. See, e.g., APL-1007 at p. 156.

Finally, and again referencing the original prosecution, novelty was said to have existed in recitation of cross-updating of the TLBs. APL-1002 at p. 109. However, as detailed in the following sections, not only does the combination of Titan I and Titan II yield this feature, but also, this feature was explicitly disclosed by Hattersley, which is combined with Titan I and Titan II and which was also not of record during prosecution of the '750 patent or before the Board in related proceedings. See, e.g., APL-1010 at Abstract and APL-1004 at ¶¶ 117-137.

Indeed, the cited references yield four grounds that render the Challenged Claims obvious under 35 U.S.C. § 103(a). Because the Office was not aware of these references, it was unaware of the well-known nature of the claimed features, and, as a result, improvidently granted the patent.

In the following sections, Apple proposes each of these four grounds and explains the justification for inter partes review. Apple presents a narrative that compares the claim language, as construed under the above-ascribed claim interpretations, with the disclosure of the prior art as understood by one of ordinary skill in the art.

A. [GROUND 1] Titan I in view of Titan II Renders Claim 8 Obvious

The two references Titan I (APL-1007) and Titan II (APL-1008) describe different aspects of the same computing system: the Titan graphics supercomputer. See, e.g., APL-1007 at pp. ix-x, xii and 10; APL-1008 at Title. Titan I also cites Titan II as one of its references in describing the Titan graphics supercomputer. See, e.g., APL-1007 at p. 195. It would, therefore, have been obvious to a person of ordinary skill in the art to consider the two references together when attempting to implement the Titan graphics supercomputer architecture, as each reference provides details useful to enable such an implementation. See also APL-1004 at ¶ 97.

Overview of the Titan Graphics Supercomputer Architecture

1. Introduction

Titan I describes the Titan graphics supercomputer architecture, which is a computing architecture designed to provide a single-user supercomputer with the ability to visualize the results of complex computations. APL-1007 at p. 3. As illustrated below, "[t]he Titan architecture consists of a shared system bus, a memory subsystem, between one and four processors (each with separate integer and vector floating-point execution hardware), a graphics display subsystem, and an I/O subsystem." Id. at p. 4. See also id. at Fig. 1.1 (annotated below). In this context, see APL-1004 at ¶¶ 15-30 for further discussion of relevant background technology.

[Annotated Fig. 1.1: Each CPU includes an IPU and a VPU connected by system bus to main memory]

Each processor in Titan contains an Integer Processing Unit (IPU) and a Vector Processing Unit (VPU). Id. at p. 8. "The IPU is built around the MIPS R2000 reduced instruction set computer (RISC). The IPU is used for integer scalar processing and for issuing instructions to the VPU." Id. "The VPU is used for all floating-point operations. The VPU can process both scalar and vector quantities[.]" Id. at p. 9.

Titan I discloses that the MIPS R2000, which forms the IPU, is "a heavily pipelined Reduced Instruction Set Computer (RISC) processor" that includes a five-stage instruction execution pipeline. Id. at p. 113 (emphasis added). According to Titan I, the VPU includes "a pipelined floating-point 64-bit ALU" and "a pipelined floating point 64-bit multiplier." Id. at p. 118 (emphasis added). Titan I explains that the VPU has "a load/store architecture," id. at p. 124, and uses a "store pipe" (i.e., store pipeline) and two "load pipes" (i.e., load pipelines), id. at p. 125 (emphasis added); see also id. at pp. 107, 127-128, 148-149, 162, 176. This feature of Titan is further described in Titan II, which discloses that Titan has "[t]hree independent memory pipes: two load pipes, each capable of running at eight million accesses per second, and one store pipe capable of running at 16 million accesses per second." APL-1008 at p. 20 (emphasis added); see also id. at pp. 21-22. See also APL-1004 at ¶ 100.

2. Memory Structure and Translation Tables

Titan I discloses that Titan uses "a full memory hierarchy consisting of registers, cache memories, physical memory, and virtual memory." APL-1007 at p. 107. In terms of physical memory, Titan uses from one to four memory boards and can provide a system memory of 8 MB to 128 MB. Id. at p. 108. The system memory (physical memory) of the Titan architecture includes RAM: [e]ach memory board contains between one and four banks of 256K X 4 bit dynamic

RAM (DRAM) chips. Id. (emphasis added); see also Fig. 5.2. The VPU accesses the system memory using two buses: "an address from either the S-BUS or the R-BUS." Id. at pp. 108-109. Indeed, the R-BUS and S-BUS "can simultaneously access different portions of memory." Id. at p. 108. Titan teaches that this physical system memory is the main memory by disclosing that "vector-address generators for addressing vectors from main memory" use the R-BUS and the S-BUS. Id. at p. 125 (emphasis added). Accordingly, as noted above, and also described in greater detail below, main memory addresses are placed on the R-BUS and the S-BUS to access the system memory. See also APL-1004 at ¶¶ 102-103.

With respect to virtual memory, Titan I discloses that data for vector operations are transferred in blocks that are known as "page[s]" in a virtual memory system that are "moved from input/output device into main memory[.]" Id. at p. 61 (emphasis added). Titan I further discloses that virtual memory paging, i.e., the transferring of blocks/virtual pages from a mass storage device into the physical system memory (main memory), occurs through use of "disk drives attached to the SCSI (Small Computer System Interface standard) port of the I/O processor." Id. at p. 111.

Titan I teaches that a translation table associated with physical memory is used for mapping virtual memory addresses to physical memory addresses: [v]irtual memory is a mapping technique that transforms a program's logical

memory address into an actual address in physical memory. A translation table translates the program's (virtual) addresses into memory (physical) addresses [and] is used to retain information about this mapping. The translation table can also indicate that a particular memory block is not resident in physical memory; only the working set of the program resides in physical memory[.] Id. at p. 37 (emphasis added).

3. TLB Configuration and the Handling of TLB Misses

Not surprisingly, given the need for improving the speed of virtual-to-physical address translations, Titan I also discloses that the Titan architecture uses multiple TLBs. As stated in Titan I, "since the translation table is accessed on each memory reference, a portion of it is often kept in a special associative memory called a Translation Lookaside Buffer (TLB). This is a high-speed cache memory that holds mapping information. Titan uses a TLB in the VPU (called the External TLB or ETLB) as well as a TLB in the IPU." APL-1007 at p. 38 (emphases added). "All memory accesses performed by the IPU and VPU are to virtual memory addresses." Id. at p. 111 (emphasis added).

In greater detail, and as illustrated in the following annotated version of Fig. 5.4, Titan I describes that [t]he TLB used by the IPU is built into the MIPS R2000 architecture. It contains 64 entries, with each entry corresponding to a 4 KB page

of physical memory. Id. at p. 111 (emphasis added); see also id. at pp. 113-114 and Fig. 5.4 (annotated herein).

[Annotated Fig. 5.4: IPU uses an internal TLB in the MIPS R2000 processor for the instruction execution pipeline]

With respect to this, Titan I discloses that the IPU implements a five-stage instruction execution pipeline, using the MIPS R2000 processor, which leverages the TLB in the IPU (i.e., the TLB in the MIPS R2000) to handle virtual-to-physical address translations, where the TLB provides support for the instruction cache. APL-1007 at p. 113 (emphasis added).

Referring to the VPU, and referencing annotated Fig. 5.9 below, the ETLB of the VPU is actually composed of two separate ETLBs, each of which services a different bus: "the ETLB is actually split into two lookup tables - one for the R-BUS and one for the S-BUS." Id. at p. 125 (emphasis added). In describing each of these bus-specific ETLBs, Titan I discloses that [t]he TLB used by the vector unit (the ETLB) has 8096 entries, each corresponding to a 4 KB page of physical

memory. Id. at p. 111. See also id. at Fig. 5.9 (annotated below) and APL-1004 at ¶¶ 107-108.

[Annotated Fig. 5.9: "A addr." and "B addr." are the two load pipe addresses for the R-BUS; "D addr." is the store pipe address for the S-BUS; R-BUS ETLB associated with load pipes ("load ETLB"); S-BUS ETLB associated with the store pipe ("store ETLB")]

Two load pipes (A addr and B addr) "share the RBUS," id., which "is used exclusively for vector read traffic," and, therefore, are serviced by the ETLB that receives data from the R-BUS (hereinafter "R-BUS ETLB"). Id. at p. 181. One store pipe (D addr) "[stores] data via the S-BUS," id. at p. 125, since "only the S-BUS may be used for writes," and, therefore, is serviced by the ETLB that receives data from the S-BUS (hereinafter "S-BUS ETLB"), id. at p. 181. Virtual addresses are provided to the S-BUS ETLB one address every clock cycle for the store

pipe, and to the R-BUS ETLB one address every other clock cycle for the load pipes (which share the RBUS and therefore take turns issuing addresses). Id. at p. 125 (emphasis added). See also id. at Fig. 5.9 (annotated above).

The association of separate ETLBs with the load pipes (i.e., "load ETLB" or "R-BUS ETLB") and the store pipe (i.e., "store ETLB" or "S-BUS ETLB") in the Titan architecture is further emphasized by Titan II, which discloses that Titan has a "virtual-machine architecture. The address sources for the memory pipes produce reference streams in virtual space. To translate these streams into physical addresses for requests to memory, a pair of 8K-entry, direct-mapped external TLBs, each mapping a four-kilobyte page, are provided. The ETLBs are each associated with a specific bus, allowing simultaneous accesses for both load and store pipes." APL-1008 at p. 22 (emphases added); see also id. at Figs. 3 and 4 and APL-1004 at ¶¶ 109-110.

Importantly, in the overall Titan architecture, the IPU handles the TLB misses of the TLB in the IPU as well as the TLB misses of the S-BUS and R-BUS ETLBs in the VPU. With respect to the IPU TLB, Titan I notes that "[a] TLB miss on the IPU takes approximately 800 nanoseconds to process in the likely case that the handling routine is resident in the IPU instruction cache." Id. at p. 156 (emphasis added). Accordingly, the IPU takes approximately 800 nanoseconds to handle a miss of the TLB of the IPU.

With respect to the S-BUS and R-BUS ETLBs of the VPU, Titan I discloses that "[w]henever an ETLB miss is detected, the IPU is interrupted and performs the necessary service to load the appropriate entry into the ETLB. Servicing an ETLB miss takes from 25 microseconds to 300 microseconds, depending on the number of instruction cache misses encountered by the IPU when processing the ETLB miss." Id. (emphasis added); see also id. at p. 118, Fig. 1.2 (partly reproduced and annotated below). Thus, the IPU takes from 25 microseconds to 300 microseconds to handle a miss of either the S-BUS ETLB or the R-BUS ETLB. See also APL-1004 at ¶¶ 111-113.

[Annotated Fig. 1.2: IPU (MIPS R2000), associated with a single Direct Address Translation unit, handles page walks for the IPU TLB, R-BUS ETLB and S-BUS ETLB, and hence, multiple TLBs; MIPS R2000 TLB associated with IPU instruction pipeline ("IPU TLB"); R-BUS ETLB ("load ETLB") associated with load pipes ("load instruction pipeline"); S-BUS ETLB ("store ETLB") associated with store pipe ("store instruction pipeline")]
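Purely as an illustrative annotation of the mechanism just described — no source code appears in Titan I, Titan II, or the '750 patent, and every name and structure below is hypothetical — the arrangement of multiple per-pipeline TLBs whose misses are all serviced by a single translation unit consulting a translation table in main memory can be sketched in C:

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGE_BITS   12            /* 4 KB pages, as in Titan's TLBs      */
#define TLB_ENTRIES 64            /* e.g., the 64-entry IPU TLB          */
#define TABLE_PAGES 4096          /* hypothetical translation-table size */

typedef struct {
    uint32_t vpn;                 /* virtual page number (tag)           */
    uint32_t pfn;                 /* physical frame number               */
    bool     valid;
} tlb_entry_t;

typedef struct { tlb_entry_t e[TLB_ENTRIES]; } tlb_t;

/* Master translation table kept in main memory: VPN -> PFN. */
static uint32_t translation_table[TABLE_PAGES];

/* Single translation-service routine: on a miss in ANY TLB, it walks
 * the table in memory and loads the needed entry into that TLB.       */
static uint32_t service_miss(tlb_t *tlb, uint32_t vpn)
{
    uint32_t pfn = translation_table[vpn % TABLE_PAGES];
    tlb_entry_t *slot = &tlb->e[vpn % TLB_ENTRIES];   /* direct-mapped */
    slot->vpn = vpn;
    slot->pfn = pfn;
    slot->valid = true;
    return pfn;
}

/* Translate a virtual address through one pipeline's TLB. */
uint32_t translate(tlb_t *tlb, uint32_t vaddr)
{
    uint32_t vpn = vaddr >> PAGE_BITS;
    tlb_entry_t *slot = &tlb->e[vpn % TLB_ENTRIES];
    if (!(slot->valid && slot->vpn == vpn))   /* miss indication        */
        service_miss(tlb, vpn);               /* one shared miss path   */
    return (slot->pfn << PAGE_BITS) | (vaddr & ((1u << PAGE_BITS) - 1));
}
```

In this sketch, a miss in any TLB instance invokes the same service_miss routine, loosely mirroring the single IPU that handles misses for the IPU TLB, the R-BUS ETLB, and the S-BUS ETLB.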

4. Summary

As shown in the above annotated version of Fig. 1.2 of Titan I, the Titan architecture, therefore, is a computing system having multiple TLBs, including an R-BUS ETLB in a vector processing unit that is associated with two vector load pipes (i.e., load pipelines), an S-BUS ETLB in the vector processing unit that is associated with a vector store pipe (i.e., store pipeline), and a TLB in an integer processing unit that is associated with a five-stage instruction execution pipeline. Each of these TLBs is used to speed up virtual-to-physical address translations, and, importantly, TLB misses for each of these TLBs are handled by the same integer processing unit, which accesses a translation table structure in memory to map addresses for virtual memory pages to physical memory addresses. See also APL-1004 at ¶¶ 114-116. Therefore, as described in more detail below, claim 8 of the '750 patent is obvious over Titan I in view of Titan II, rendering this claim unpatentable under 35 U.S.C. § 103(a).

Claim 8(a): Titan I in view of Titan II yields [a] method for translating virtual addresses in a computing system having at least a first and a second instruction pipeline and a direct address translation unit for translating virtual addresses into real addresses, the direct address translation unit including a master translation memory for storing translation data, the direct address translation unit for

translating a virtual address into a corresponding real address, as recited in claim 8.

In particular, Titan I describes a graphics supercomputing architecture ("computing system"), APL-1007 at p. 3, in which a processor includes a vector processing unit (VPU), id. at pp. 8-9. As noted in § V(A), the VPU includes a load pipe ("first instruction pipeline") and a store pipe ("second instruction pipeline"). See, e.g., APL-1007 at pp. 107, 125, 127-128, 148-149, 162, 176. Titan I discloses that memory accesses by the VPU, i.e., by the load and store pipes of the VPU, use virtual memory addresses, with "[a]ll accesses to memory from the VPU" translated by the ETLB, APL-1007 at p. 111, which "is actually split into two lookup tables - one for the R-BUS and one for the S-BUS," id. at p. 125. See also id. at Fig. 5.9 (annotated above) and APL-1004 at ¶¶ 98-100, 105-110.

According to Titan I, the integer processing unit (IPU) ("direct address translation unit") handles ETLB misses. APL-1007 at pp. 118, 156. That is, when an ETLB miss occurs, "the IPU is interrupted" and "performs the necessary [translation] service to load the appropriate entry into the ETLB," id. at p. 156, where the entry "corresponds to a 4 KB page of physical memory," id. at p. 111 (emphasis added) ("translating a virtual address into a corresponding real address"). See also APL-1004 at ¶¶ 111-112.

The Titan IPU uses a translation table for its translation of "the program's (virtual) addresses into memory (physical) addresses" and to "retain information about this mapping." APL-1007 at p. 37. Accordingly, "[b]ecause the translation table is accessed on each memory reference, a portion of it is often kept in a special associative memory called a Translation Lookaside Buffer (TLB). This is a high-speed cache memory that holds mapping information." Id. at p. 38 (emphasis added). As such, and recalling that this applies to translations from each of the VPU load and store pipes, a skilled artisan would have understood that Titan I discloses that the translation table is the structure in memory that maintains translation data (i.e., mapping of virtual memory addresses into physical memory addresses), including translation data that is determined to be missing from the first/second translation buffers, i.e., when a TLB miss occurs ("a master translation memory for storing translation data"). See also APL-1004 at ¶¶ 104, 108, 113.

Claim 8(b): Titan I in view of Titan II yields storing a first subset of translation data from the master translation memory into a first translation buffer associated with the first instruction pipeline, as recited in claim 8.

In particular, and as noted in § V(A) supra, Titan I describes that the VPU includes a load pipe ("first instruction pipeline") that "shares the RBUS" with another load pipe. APL-1007 at p. 125; see also id. at pp. 148-149, 163, 181 ("The R (or Read) BUS is used exclusively for vector read traffic."), APL-1008 at

pp. 20-21. The R-BUS ETLB ("first translation buffer associated with the first instruction pipeline") is associated with the load pipe. APL-1007 at p. 125. Titan II confirms this association between the respective pipelines and their associated ETLBs. See, e.g., APL-1008 at p. 22 ("The address sources for the memory pipes produce reference streams in virtual space. To translate these streams to physical addresses for requests to memory, a pair of external TLBs are provided. The ETLBs are each associated with a specific bus, allowing simultaneous accesses for both load and store pipes" (emphasis added)). Titan I teaches that the ETLB holds "mapping information," i.e., a portion of the translation table that translates "(virtual) addresses into memory (physical) addresses," APL-1007 at pp. 37-38 (emphasis added) ("storing a first subset of translation data from the master translation memory"). See also APL-1004 at ¶¶ 107-110, 114-116.

Claim 8(c): Titan I in view of Titan II yields translating a first virtual address received from the first instruction pipeline into a corresponding first real address, wherein the first virtual address translating step comprises the steps of: accessing the first translation buffer, as recited in claim 8.

In particular, as noted in Claim 8(b) supra, Titan I discloses that the VPU includes a load pipe ("first instruction pipeline") that has an associated R-BUS ETLB ("first translation buffer"). The VPU "contains three vector-address generators for addressing vectors from main memory (two for loading data from the R-BUS and one for storing data via the S-BUS)." APL-1007 at p. 125 (emphasis added). The vector-address generators "[access] vectors in main memory" and provide virtual addresses to the External Translation Lookaside Buffer (ETLB) every other clock cycle for the load pipes ("accessing the first translation buffer"). Id. (emphasis added). Titan I teaches that the R-BUS ETLB holds a portion of the translation table "mapping information," which is accessed on each load instruction memory reference, id. at p. 38, and the translation table "translates the program's (virtual) addresses into memory (physical) addresses," id. at p. 37 ("translating a first virtual address received from the first instruction pipeline into a corresponding first real address"). This teaching of Titan I is reinforced by Titan II, which discloses that "[t]he address sources for the memory pipes produce reference streams in virtual space. To translate these streams to physical addresses for requests to memory, a pair of 8K-entry, direct-mapped external TLBs, each mapping a four-kilobyte page, are provided. The ETLBs are each associated with a specific bus, allowing simultaneous accesses for both load and store pipes." APL-1008 at p. 22. See also APL-1004 at ¶¶ 107-110, 116.

Claim 8(d): Titan I in view of Titan II yields indicating whether translation data for the first virtual address is stored in the first translation buffer, as recited in claim 8.

In particular, as noted in Claim 8(c) supra, Titan I teaches that a vector address generator provides a virtual address ("first virtual address") for a load pipe to the associated R-BUS ETLB ("first translation buffer"), which holds a portion of the translation table that translates the program's "(virtual) address[] into memory (physical) address[]," APL-1007 at p. 37. Titan I discloses that the ETLB may have a miss. Id. at p. 156. As disclosed in the background section of the '750 patent, see APL-1001 at 3:16-24 and 3:33-40, a hit in a TLB indicates that the virtual address successfully maps to a translation entry in the TLB identifying a corresponding physical address, and a miss in a TLB indicates that the virtual address does not successfully map to a translation entry in the TLB identifying a corresponding physical address. Titan I further discloses that when "an ETLB miss is detected, the IPU is interrupted," APL-1007 at p. 156 (emphasis added), to "perform[] the necessary service to load the appropriate entry into the ETLB," id. (emphasis added). See also id. at p. 118. Titan I, therefore, teaches triggering an operation, i.e., interrupting the IPU to perform translation, in the event of a TLB (ETLB) miss and, therefore, contemplates producing an indication of a miss or hit in order to selectively enable the consequent interrupting of the IPU in response thereto ("indicating whether translation data for the first virtual address is stored in the first translation buffer"). See also APL-1004 at ¶¶ 111-112.
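Purely for illustration of the hit/miss "indication" discussed above — again, no code appears in the cited references, and all names below are hypothetical — the indication can be understood as a comparison of the supplied virtual page tag against the stored entry, whose failure selectively enables the miss interrupt:

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t vpn;    /* stored virtual page number (tag) */
    uint32_t pfn;    /* physical frame number            */
    bool     valid;
} etlb_entry_t;

/* Produces the hit/miss indication for one direct-mapped TLB slot:
 * a hit only when the slot is valid AND its tag matches the lookup VPN. */
bool etlb_hit(const etlb_entry_t *slot, uint32_t vpn)
{
    return slot->valid && slot->vpn == vpn;
}

/* The indication is what selectively enables the interrupt that
 * activates the miss-service path.                                     */
bool needs_miss_interrupt(const etlb_entry_t *slot, uint32_t vpn)
{
    return !etlb_hit(slot, vpn);
}
```

The point of the sketch is only that some hit/miss signal must be produced before the interrupt can be selectively raised, which is the inference drawn in the paragraph above.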

Claim 8(e): Titan I in view of Titan II yields activating the direct address translation unit to translate the first virtual address when the translation data for the first virtual address is not stored in the first translation buffer, as recited in claim 8.

In particular, as described in Claim 8(d) supra, Titan I in view of Titan II teaches that the R-BUS ETLB, which is associated with the vector load pipes, translates the pipeline's virtual address into a memory (physical) address, and may generate a miss in the process ("the translation data for the first virtual address is not stored in the first translation buffer"). Titan I further discloses that when an ETLB miss is detected, "the IPU is interrupted," APL-1007 at p. 156 (emphasis added) ("activating the direct address translation unit when the translation data for the first virtual address is not stored in the first translation buffer"), to "perform[] the necessary service to load the appropriate entry into the ETLB," id. (emphasis added) ("translate the first virtual address"). See also id. at p. 118. The necessary service performed by the IPU (the single dynamic translation unit of the "direct address translation unit") is to translate the virtual address into a physical address using mapping information ("translation data for the first virtual address") from the translation table (master translation memory included in the direct address translation unit) ("activating the direct address translation unit"). See id. at p. 37 ("Virtual memory is a mapping technique that transforms a program's logical memory address into an actual address in physical memory. A translation table that translates the program's (virtual) addresses into memory (physical) addresses is used to retain information about this mapping.") (emphasis added). See also APL-1004 at ¶¶ 111-113, 115-116.

Claim 8(f): Titan I in view of Titan II yields storing the translation data for the first virtual address from the master translation memory into the first translation buffer, as recited in claim 8.

In particular, as described in Claim 8(e) supra, Titan I teaches that the IPU is activated when an ETLB miss is detected. The IPU performs the necessary service, i.e., translates the virtual address into a corresponding physical address using mapping information from the translation table in memory. APL-1007 at pp. 37 and 156. As discussed in § V(A)(2) and Claim 8(a) supra, Titan I discloses that the TLBs store a portion of the translation table. See APL-1007 at p. 38. A skilled artisan at the time of the '750 patent, therefore, would have readily appreciated that Titan I discloses that the translation table is the structure in memory that maintains translation data, including translation data that is determined to be missing from the first/second translation buffers, i.e., when a TLB miss occurs ("translation data for the first virtual address from the master translation memory"). Titan I further discloses that the IPU "load[s] the appropriate entry into the ETLB," APL-1007 at p. 156 (emphasis added), which corresponds to a 4 KB page of physical memory,

id. at p. 111 (emphasis added) ("storing the translation data into the first translation buffer"). See also APL-1004 at ¶¶ 112, 116.

Claim 8(g): Titan I in view of Titan II yields storing a second subset of translation data from the master translation memory into a second translation buffer associated with the second instruction pipeline, as recited in claim 8.

In particular, Titan I describes that the VPU further includes a store pipe ("second instruction pipeline") that uses the S-BUS to store its results into memory. APL-1007 at p. 162; see also id. at pp. 125 ("storing data via the S-BUS"), 181 ("only the S-BUS may be used for writes"), APL-1008 at pp. 20-22. As noted previously in § V(A), the S-BUS ETLB ("second translation buffer associated with the second instruction pipeline") is associated with the store pipe. APL-1007 at p. 125. See also Titan II, APL-1008 at p. 22 ("The address sources for the memory pipes produce reference streams in virtual space. To translate these streams to physical addresses for requests to memory, a pair of external TLBs are provided. The ETLBs are each associated with a specific bus, allowing simultaneous accesses for both load and store pipes.") (emphasis added). Titan I teaches that the ETLB holds "mapping information," i.e., a portion of the translation table that translates "(virtual) addresses into memory (physical) addresses," APL-1007 at pp. 37-38 (emphasis added) ("storing a second subset of translation data from the master translation memory"). As discussed in § V(A)(2) and Claim

8(a) supra, Titan I discloses that the TLBs store a portion of the translation table. See APL-1007 at p. 38. A skilled artisan at the time of the '750 patent, therefore, would have readily appreciated that Titan I discloses that the translation table is the structure in memory that maintains translation data, including translation data that is determined to be missing from the first/second translation buffers, i.e., when a TLB miss occurs ("translation data from the master translation memory"). See also APL-1004 at ¶¶ 107-110, 114-116.

Claim 8(h): Titan I in view of Titan II yields translating a second virtual address received from the second instruction pipeline into a corresponding second real address, wherein the second virtual address translating step comprises the steps of: accessing the second translation buffer, as recited in claim 8.

In particular, as described in Claim 8(g) supra, Titan I discloses that the VPU includes a store pipe ("second instruction pipeline") that has an associated S-BUS ETLB ("the second translation buffer"). Titan I discloses that the VPU "contains three vector-address generators for addressing vectors from main memory (two for loading data from the R-BUS and one for storing data via the S-BUS)." APL-1007 at p. 125. The vector-address generators provide virtual addresses to the External Translation Lookaside Buffer (ETLB) "one address every clock cycle for the store pipe" ("accessing the second translation buffer"). Id. (emphasis added). Titan I discloses that the S-BUS ETLB holds a portion of the

translation table "mapping information" and is accessed for each store instruction memory reference, id. at p. 38, and the translation table "translates the program's (virtual) addresses into memory (physical) addresses," id. at p. 37 ("translating a second virtual address received from the second instruction pipeline into a corresponding second real address"). This teaching of Titan I is reinforced by Titan II, which teaches that "[t]he address sources for the memory pipes produce reference streams in virtual space. To translate these streams to physical addresses for requests to memory, a pair of 8K-entry, direct-mapped external TLBs, each mapping a four-kilobyte page, are provided. The ETLBs are each associated with a specific bus, allowing simultaneous accesses for both load and store pipes." APL-1008 at p. 22 (emphasis added). See also APL-1004 at ¶¶ 107-110, 116.

Claim 8(i): Titan I in view of Titan II yields indicating whether translation data for the second virtual address is stored in the second translation buffer, as recited in claim 8.

In particular, as described in Claim 8(h) supra, Titan I teaches that a vector address generator provides a virtual address ("second virtual address") of the store pipe to the S-BUS ETLB ("second translation buffer"), which holds a portion of the translation table that translates the program's "(virtual) address[] into memory (physical) address[]," APL-1007 at p. 37. Titan I discloses that the ETLB may have a miss, id. at p. 156, and, as described in Claim 8(d), that such a miss triggers interruption of the IPU to provide a translation service. Titan I, therefore, contemplates producing an indication of a miss or hit in order to selectively enable the consequent interrupting of the IPU in response thereto ("indicating whether translation data for the second virtual address is stored in the second translation buffer"). See also APL-1004 at ¶¶ 111-112.

Claim 8(j): Titan I in view of Titan II yields activating the direct address translation unit to translate the second virtual address when the translation data for the second virtual address is not stored in the second translation buffer, as recited in claim 8.

In particular, as described in Claim 8(i) supra, Titan I teaches that the S-BUS ETLB translates the virtual addresses received from the store pipe into memory (physical) addresses, and may generate a miss in the process ("the translation data for the second virtual address is not stored in the second translation buffer"). Titan I further discloses that when an ETLB miss is detected, "the IPU is interrupted," APL-1007 at p. 156 (emphasis added) ("activating the direct address translation unit when the translation data for the second virtual address is not stored in the second translation buffer"), to "perform[] the necessary service to load the appropriate entry into the ETLB" ("translate the second virtual address"), id. (emphasis added). See also id. at p. 118. As described in Claim 8(e) supra, the necessary service performed by the IPU is to translate the virtual address into a

physical address using mapping information ("translation data for the second virtual address") from the translation table in main memory. See Claim 8(e).

Claim 8(k): Titan I in view of Titan II yields storing the translation data for the second virtual address from the master translation memory into the second translation buffer, as recited in claim 8.

In particular, as described in Claim 8(j) supra, Titan I teaches that the IPU is activated when an ETLB miss is detected. The IPU performs the necessary service, i.e., translates the virtual address into a corresponding physical address using mapping information from the translation table in main memory. APL-1007 at p. 156. As discussed in § V(A)(2) and Claim 8(a) supra, Titan I discloses that the TLBs store a portion of the translation table. See APL-1007 at p. 38. A skilled artisan at the time of the '750 patent, therefore, would have readily appreciated that Titan I teaches that the translation table is the structure in memory that maintains translation data, including translation data that is determined to be missing from the first/second translation buffers, i.e., when a TLB miss occurs ("translation data for the second virtual address from the master translation memory"). The IPU "load[s] the appropriate entry into the ETLB," APL-1007 at p. 156 (emphasis added), which "corresponds to a 4 KB page of physical memory," id. at p. 111 (emphasis added) ("storing the translation data into the second translation buffer"). See also APL-1004 at ¶¶ 112, 116.

B. [GROUND 2] Titan I in view of Titan II and Hattersley Renders Claims 9-12 Obvious As described in more detail below, claims 9-12 of the 750 patent are obvious based on the teachings of Titan I and Titan II in view of Hattersley, rendering these claims unpatentable under 35 U.S.C. 103(a). 1. Claim 9 Titan I in view of Titan II and Hattersley yields storing the translation data for the first virtual address from the master translation memory into the second translation buffer whenever translation data for the first virtual address from the master translating memory is stored into the first translation buffer, as recited in claim 9. As set forth in detail below, a skilled artisan would have found it obvious based on the teachings of Titan I, Titan II and Hattersley to store translation data for a virtual address of a vector load instruction obtained by the IPU from main memory (the translation data for the first virtual address from the master translation memory) into both the R-BUS ETLB (first translation buffer) and the S-BUS ETLB (second translation buffer) in order to optimize performance of the Titan architecture. In particular, the Titan architecture is designed to perform computations that often involve a load from a memory location followed by a store to the same memory location. Storing the translation data for a virtual address of a vector load instruction into both the R-BUS ETLB and the S-BUS ETLB, therefore, would eliminate the need to perform the 36

same translation multiple times, thereby improving performance. Titan I teaches that typical computations performed by the Titan architecture include a computation that loads vector data from a memory location, performs a mathematical operation on the vector data, and then stores the resulting data back into the same memory location. In particular, Titan I discloses the use of the LINPACK numerical benchmark as representative of the anticipated users of [the] machine. APL-1007 at p. 15. See also id. at p. 168 ( Because a large portion of code run on supercomputers has the same general characteristics as LINPACK, it is widely considered to be a fair and accurate predictor of supercomputer performance on many scientific code applications (emphasis added)). Titan I discloses an example computation performed using the LINPACK benchmark: Double Precision A Times X Plus Y, or DAXPY, which multiplies the vector X times a scalar A, then performs a vector addition of that result to the vector Y. Id. (emphasis added). Dongarra provides an example of the LINPACK DAXPY operation and explicitly notes that the vector addition result is stored in the memory locations associated with the Y vector. See APL-1011 at p. 4 ( This is the loop for a DAXPY which adds a scalar times a vector to a vector with results going to the second vector (emphasis added)). Notably, Dongarra was publicly available no later than May 19, 1980. Id. at Title page.
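The DAXPY loop discussed above can be sketched in a few lines. This is a minimal illustration of the load/store pattern the petition relies on — each element of Y is loaded from, and the result stored back to, the same memory location — not the actual LINPACK code of Dongarra (APL-1011); the function name and signature are ours.

```python
# Minimal sketch of the DAXPY operation (Y = a*X + Y) described above:
# the result is stored into the same memory locations the Y vector was
# loaded from, so each element requires a load and a store to the same
# address.
def daxpy(a, x, y):
    for i in range(len(y)):
        y[i] = a * x[i] + y[i]  # load y[i], compute, store back to y[i]
    return y


result = daxpy(2.0, [1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
```

Because the load from y[i] and the store to y[i] target the same page, a single address translation suffices for both accesses — which is the performance rationale the petition draws from this benchmark.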

As such, Titan I teaches that typical computations performed by the Titan architecture include a computation that loads vector data (i.e., the Y vector) from memory locations (i.e., the memory locations of the Y vector), performs an operation on that vector data (i.e., adds a vector X times a scalar A to the retrieved Y vector), and then stores the result into the same memory locations (i.e., the memory locations of the Y vector). Each vector spans multiple memory locations corresponding to the elements of the vector, and therefore the recited operations take place on multiple elements from the respective vectors. To perform this computation, the Titan architecture executes a load instruction directed to a memory location, performs the required calculation, followed by a store instruction directed to that same memory location. Titan I also provides other examples of vector operations where the load and store are performed on the same vector elements, such as multiplication of a vector by a scalar, which thereby require successive load/store accesses to the same memory locations. See, e.g., EXAMPLE 2.3. Loop execution speedup. APL-1007 at p. 50 and Fig. 2.15. When executing a load instruction from a memory location, an R-BUS ETLB miss may occur if there is no TLB entry in the R-BUS ETLB (first translation buffer) corresponding to the virtual address of the memory location. The TLB miss results in the IPU (single dynamic translation unit included in the direct address translation unit) accessing a translation table in main memory (master translation memory included in the direct address translation unit) to translate the virtual address into a physical, i.e., real, address and then loading the corresponding translation entry into the R-BUS ETLB (first translation buffer). See, e.g., APL-1007 at pp. 125 and 156. As noted above, given that common operations performed on the Titan architecture often follow this load instruction with a store instruction to the same memory location, a skilled artisan would have found it obvious and advantageous to update both the R-BUS ETLB (first translation buffer) and the S-BUS ETLB (second translation buffer) with the translation data retrieved by the IPU as a result of the R-BUS ETLB miss. See also APL-1004 at 117-120. In particular, updating both the R-BUS ETLB (first translation buffer) and the S-BUS ETLB (second translation buffer) as a result of an R-BUS ETLB miss would be consistent with Titan I's stated goal of tuning performance using benchmarks such as LINPACK, see, e.g., APL-1007 at p. 15, given that Titan I indicates that TLB misses were a concern for optimizing performance, see, e.g., APL-1007 at p. 117 ( The limits that bar achieving the ideal of one clock cycle per instruction are data dependencies in the program, the response time of the memory hierarchy, and TLB misses (emphasis added)). Titan I discloses that an ETLB miss may take 25 µs. With a 16 MHz bus cycle time, id. at p. 5, an

ETLB miss would consequently take as much time as 400 memory accesses or 400 dual pipeline floating point operations performed with the 8 MHz VPU pipeline, id. at p. 118, and 2 operations per cycle. See APL-1004 at 121. Given this, it is not surprising that reducing ETLB misses is very important to optimizing performance of the Titan architecture. Id. While Titan I suggests (and a skilled artisan would have found it obvious and advantageous) that it would be desirable to update the S-BUS ETLB (second translation buffer) when the R-BUS ETLB (first translation buffer) is updated with an address translation entry as demonstrated above, this feature is explicitly disclosed in Hattersley. For example, Hattersley discloses using a plurality of directory look aside tables (DLATs) to provide multiple address translation, whereby a generated address for one DLAT may be written to all the DLATs [t]o avoid the problem of generating the same address multiple times for each of the DLATs. APL-1010, Abstract (emphasis added). Hattersley clarifies that a directory look aside table (DLAT) [is] sometimes referred to as a translation lookaside buffer (TLB), which stores recent virtual address translations. Id. at 1:34-37. Hattersley teaches that its disclosure is applicable to scientific or vector computing, id. at 1:55. See also id. at 1:47-53 and APL-1007 at pp. 63, 125. Hattersley is therefore applicable to the vector operations performed using the VPU in the Titan architecture. See, e.g., APL-1004 at 40

122. Therefore, a skilled artisan would have been motivated to combine the teachings of Hattersley with Titan I's teachings of the vector computations performed in the Titan architecture. Explaining a motivation for its invention, Hattersley discloses that there has developed a need to generate and translate more than a single address per cycle. Specifically, the processor requires more than one memory request every cycle to be fully utilized. The requests may be, for example, three separate instructions so that three addresses must be generated every cycle to make the memory requests. APL-1010 at 1:65-2:3 (emphasis added). This is consistent with the Titan implementation, for which Titan I discloses that the Titan VPU performs two virtual address translations using the ETLBs in a cycle - one R-BUS ETLB address translation for the load pipe and one S-BUS ETLB address translation for a store pipe: vector-address generators provide virtual addresses to the External Translation Lookaside Buffer (ETLB) at up to one address every clock cycle for the store pipe and one address every other clock cycle for the load pipes (which share the RBUS and therefore take turns issuing addresses), APL-1007 at p. 125 (emphasis added). See also APL-1004 at 123. Hattersley discloses that often the translation will have been made for one DLAT and the same translation will be needed for the others. APL-1010 at 5:13-15 (emphasis added). As discussed above with respect to execution of

the LINPACK by the VPU in the Titan architecture, this is precisely the reason for updating the S-BUS ETLB (second translation buffer) when the R-BUS ETLB (first translation buffer) is updated with an address translation entry. See APL-1004 at 117-120. Hattersley confirms this by providing a similar solution, disclosing that a plurality of DLATs are used to provide multiple address translation. The DLATs are accessed in parallel by separate virtual address generators. To avoid the problem of generating the same address multiple times for each of the DLATs, a generated address for one DLAT may be written to all the DLATs. APL-1010 at 2:14-20 (emphasis added). A solution to making the same translation three times is to write to all three DLATs when a translation is made, as illustrated in FIG. 5. Thus, as indicated in FIG. 6, the translation need only be made once by the operating system and will, thereafter, be available in all the DLATs in only N cycles[.] Id. at 5:25-30 (emphasis added). See also id. at Figs. 5 (annotated herein). Accordingly, it would have been obvious to one of ordinary skill in the art that Titan I in view of Titan II and Hattersley yields the features recited in claim 9, including storing the translation data for the first virtual address from the master translation memory into the second translation buffer (IPU stores the virtual memory to physical memory mapping information for a load pipe address in the S-BUS ETLB (store pipe ETLB), as taught by Titan I, Titan II 42

and Hattersley) whenever translation data for the first virtual address from the master translation memory is stored into the first translation buffer (whenever the IPU loads the virtual memory to physical memory mapping information for the load pipe address from the translation table in main memory into the R-BUS ETLB (load pipe TLB), as taught by Titan I, Titan II and Hattersley). See also APL-1004 at 125. [Annotated figure from Hattersley, with annotations: Map to load pipe (R-BUS) vector address generators of Titan ; Analogous to load pipe (R-BUS) ETLB of Titan .] 2. Claim 10 Titan I in view of Titan II and Hattersley yields storing the translation data for the second virtual address from the master translation memory into the first translation buffer whenever translation data for the second virtual address from the master translation memory is stored into the second translation buffer, as recited in claim 10. Claim 10, which depends from claim 9, is the mirror image of claim 9 in that it requires that, whenever the second translation buffer

is updated with translation data, the first translation buffer is also updated with the same translation data. As demonstrated in V(C)(1) supra, optimizing the performance of the Titan architecture can be achieved by decreasing TLB misses when performing computations of the type often performed on the Titan architecture. As evidenced by the LINPACK benchmark, computations often performed on the Titan architecture include computations that involve loading vector data from a memory location followed by storing vector data to that same memory location. Given this and for the reasons noted in V(C)(1) supra, a skilled artisan would have found performing the process recited in claim 9 obvious based on the teachings of Titan I, Titan II and Hattersley. That is, a skilled artisan would have found it obvious to store translation data for a virtual address of a vector load instruction obtained by the IPU from main memory (the translation data for the first virtual address from the master translation memory) into both the R-BUS ETLB (first translation buffer) and the S-BUS ETLB (second translation buffer) in order to optimize performance of the Titan architecture. Importantly, Titan I discloses that the computations often performed by the Titan architecture may further include computations that store vector data into a memory location and then subsequently load that same vector data from that same memory location. Indeed, it is common for scientific computing applications to write a vector result for one phase of a computation and then to read that same stored result in a subsequent phase of the computation. See APL-1004 at 126. Evidence that this type of computation is performed on the Titan architecture is found in Titan I's disclosure of the use of the Lawrence Livermore Loops benchmark, which is described as a tool for measuring and, therefore, optimizing the performance of the Titan architecture. APL-1007 at p. 33. The Livermore Loops benchmark uses the Livermore Fortran Kernel Loops, id. at p. xii, see also id. at p. 196, which contain specific examples known as recurrence equations where the value computed and stored for one vector element depends on the value of a previously computed result for that same vector element. See APL-1004 at 126. Such computations, therefore, require execution of a store instruction to store the previously computed result for a vector element followed by execution of a load instruction to access the previously computed result in order to generate a new value for the vector element. See APL-1004 at 127. When executing a store instruction to a memory location, an S-BUS ETLB miss may occur if there is no TLB entry in the S-BUS ETLB (second translation buffer) corresponding to the virtual address of the memory location. The TLB miss results in the IPU (single dynamic translation unit of the direct address translation unit) accessing a translation table in main memory (master

translation memory included in the direct address translation unit) to translate the virtual address into a physical address and then loading the corresponding translation entry into the S-BUS ETLB (second translation buffer). See, e.g., APL-1007 at pp. 125 and 156. As noted above, given that common vector operations performed on the Titan architecture, such as the use of the Livermore FORTRAN Kernel Loops, often follow this store instruction with a load instruction to the same memory location, a skilled artisan would have found it obvious based on the teachings of Titan I, Titan II and Hattersley to update both the R-BUS ETLB (first translation buffer) and the S-BUS ETLB (second translation buffer) with the translation data retrieved by the IPU as a result of the S-BUS ETLB miss in order to improve the performance of the Titan architecture for the same reasons stated in V(C)(1) supra. See also APL-1004 at 126-128. In sum, it would have been obvious to one of ordinary skill in the art that Titan I in view of Titan II and Hattersley yields the features recited in claim 10, including storing the translation data for the second virtual address from the master translation memory into the first translation buffer (IPU stores the virtual memory to physical memory mapping information for a store pipe address in the R-BUS ETLB (load pipe ETLB), as taught by Titan I, Titan II and Hattersley) whenever translation data for the second virtual address from the master translation memory is stored into the second translation buffer (whenever the IPU loads the virtual memory to physical memory mapping information for the store pipe address from the translation table in main memory into the S-BUS ETLB (store pipe TLB), as taught by Titan I, Titan II and Hattersley). 3. Claim 11 Claim 11(a): Titan I in view of Titan II and Hattersley yields [t]he method according to claim 10 further comprising the steps of: storing a third subset of translation data from the master translation memory into a third translation buffer associated with the third instruction pipeline, as recited in claim 11. In particular, as demonstrated in V(C)(2) supra, Titan I in view of Titan II and Hattersley yields the features of claim 10. Titan I discloses that the Titan computing architecture includes first and second instruction pipelines (i.e., load and store pipes) with respective associated translation buffers (i.e., R-BUS and S-BUS ETLBs) for translating virtual memory addresses into physical memory addresses. Titan I in view of Titan II and Hattersley also yields a dynamic address translation unit (i.e., the IPU and main memory) that is used when a virtual address (e.g., a load pipe address or a store pipe address) is not stored in the respective ETLB (i.e., on an ETLB miss). The IPU services the ETLB miss, i.e., maps the virtual memory address into a physical memory address using a translation table in physical memory, and the appropriate mapped entry is loaded into the corresponding (load) R-BUS or (store) S-BUS ETLB. See, e.g., V(B) supra. Titan I in view of Titan II and Hattersley also yields that when translation data for a first virtual address (e.g., a load pipe address) is loaded from the master translation memory (e.g., using the translation table in main memory) into the first translation buffer (e.g., the R-BUS ETLB, which is associated with the load pipe), the translation data is also stored into the second translation buffer (e.g., the S-BUS ETLB, which is associated with the store pipe), as recited in claim 9. The references further disclose that when translation data for a second virtual address (e.g., a store pipe address) is loaded from the master translation memory (i.e., using translation tables in main memory) into the second translation buffer (i.e., the S-BUS ETLB, which is associated with the store pipe), the translation data is also stored into the first translation buffer (i.e., the R-BUS ETLB, which is associated with the load pipe), as recited in claim 10. See, e.g., V(C)(1)-(2) supra. Additionally, Titan I discloses that the Titan architecture uses the MIPS R2000 processor in the IPU, which implements a five-stage instruction execution pipeline, APL-1007 at p. 113 (emphasis added) ( third instruction pipeline ). See also id. at pp. 114 and 117 ( MIPS R2000, like other RISC processors, aggressively pipelines to strive for an execution rate of one instruction

per clock cycle. This includes overlapped instruction fetching, decoding, and execution, as well as separate buses for data fetching and instruction fetching. ). Titan I teaches that the IPU performs operations using the instruction execution pipeline in parallel with operations performed by the load pipes and the store pipe in the VPU: [t]he IPU can be executing integer instructions or setting up a sequence of floating-point operations for the VPU while the VPU is performing a computation. Thus both integer and floating-point operations can be concurrently executed. Id. at p. 61 (emphasis added). See also APL-1004 at 129. Titan I discloses that Titan uses a TLB in the VPU (called the External TLB or ETLB) as well as a TLB in the IPU, APL-1007 at p. 38 (emphasis added) ( a third translation buffer associated with the third instruction pipeline ). Because the translation table is accessed on each memory reference, a portion of it is kept in [the IPU TLB]. Id. (emphasis added). The TLB used by the IPU is built into the MIPS R2000 architecture. It contains 64 entries, with each entry corresponding to a 4 KB page of physical memory. Id. at p. 111 (emphasis added) ( storing a third subset of translation data from the master translation memory into a third translation buffer ). See also id. at pp. 111, 113-114 and Fig. 5.4 (annotated in V(A)(3)). Claim 11(b): Titan I in view of Titan II and Hattersley yields translating a third virtual address received from the third instruction pipeline into a corresponding third real address, wherein the third virtual address translating step comprises the steps of: accessing the third translation buffer, as recited in claim 11. In particular, Titan I teaches that [a]ll memory accesses performed by the IPU and VPU are to virtual memory addresses. Both the IPU and VPU contain Translation-Lookaside Buffers (TLBs) to map virtual addresses into physical addresses, APL-1007 at p. 111 (emphasis added) ( third virtual address translating step comprises the steps of: accessing the third translation buffer ). See also Claim 8(c), which recites similar features. Claim 11(c): Titan I in view of Titan II and Hattersley yields indicating whether translation data for the third virtual address is stored in the third translation buffer, as recited in claim 11. In particular, as described in Claim 8(d) supra, Titan I describes that the TLBs of the Titan architecture, including the TLB of the IPU, may experience a TLB miss[], APL-1007 at p. 117, i.e., an indication whether the virtual address successfully maps to an entry in the TLB corresponding to a physical address ( indicating whether translation data for the third virtual address is stored in the third translation buffer ). See also APL-1004 at 111, 130. Claim 11(d): Titan I in view of Titan II and Hattersley yields activating the direct address translation unit to translate the third virtual address when the

translation data for the third virtual address is not stored in the third translation buffer, as recited in claim 11. Titan I discloses that the IPU (the single dynamic translation unit of the direct address translation unit ) processes a TLB miss of the IPU TLB: [a] TLB miss on the IPU, APL-1007 at p. 56 (emphasis added) ( third virtual address is not stored in the third translation buffer ) takes approximately 800 nanoseconds to process in the likely case that the handling routine is resident in the IPU instruction cache. Id. (emphasis added). The IPU uses the translation table stored in main memory (the master translation memory included in the direct address translation unit ) for translating the (virtual) address[] associated with the TLB miss into [a] memory (physical) address[]. Id. at p. 37. See also APL-1004 at 111, 113, 116 and Claim 8(e) supra. Claim 11(e): Titan I in view of Titan II and Hattersley yields storing the translation data for the third virtual address from the master translation memory into the third translation buffer, as recited in claim 11. In particular, Titan I teaches that the IPU handles the IPU TLB miss by performing the necessary service to load the appropriate entry into the [TLB], APL-1007 at p. 156 (emphasis added) ( storing the translation data for the third virtual address into the third translation buffer ), which corresponds to a 4 KB page of physical memory, id. at p. 111 (emphasis added) ( from the 51

master translation memory ). See also APL-1004 at 111, 114-116 and Claim 8(f) supra. 4. Claim 12 Titan I in view of Titan II and Hattersley yields [t]he method according to claim 11, wherein the step of storing the translation data of the third virtual address comprises the step of storing the translation data for only the third virtual address in the third translation buffer, as recited in claim 12. Importantly, the feature introduced in claim 12 further limits the step of storing the translation data of the third virtual address recited in claim 11 by requiring that this particular storing step store translation data for only the third virtual address in the third translation buffer (emphasis added). Notably, this feature does not preclude other storing steps from occurring that store the first or second virtual addresses in the third translation buffer, but rather simply requires that other translation data besides the translation data for the third virtual address NOT be stored during the third virtual address translation data storing step. Stated differently, the above-noted feature requires that the translation data for only the third virtual address (emphasis added) be stored in the third translation buffer during the step that stores the third virtual address translation data. Of course, other storing steps distinct from the third virtual address translation data storing step may store other translation data, such as, for

example, translation data for the first or second virtual addresses, in the third translation buffer. See also APL-1004 at 131-132. As demonstrated in V(C)(3) supra, Titan I in view of Titan II and Hattersley yields the features of claim 11. Titan I discloses the Titan computing architecture that includes a third instruction pipeline (i.e., the instruction execution pipeline in the IPU) with an associated translation buffer (i.e., the IPU TLB) for translating virtual memory addresses into physical memory addresses. The dynamic address translation unit (i.e., the IPU and the main memory) is used when a virtual address (e.g., a virtual memory address for instruction execution pipeline) is not stored in the IPU TLB (i.e., on a TLB miss). The IPU services the TLB miss, i.e., maps the virtual memory address into a physical memory address using the translation table in main memory, and the appropriate mapped entry is loaded into the IPU TLB. Notably, only the appropriate mapped entry (the translation data for only the third virtual address) is loaded into the IPU TLB (third translation buffer) during that particular loading step. That is, Titan I does not describe or suggest that other mapped entries (other translation data) are loaded into the IPU TLB during the particular loading step triggered as a result of the IPU TLB miss. See also APL-1004 at 131-132. As such, Titan I in view of Titan II and Hattersley yields the features recited in claim 12. 53
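The distinction the petition draws for claim 12 — an IPU TLB miss loads only the single faulting entry into the IPU TLB, whereas (per the claim 9 and 10 discussion) an ETLB miss updates both vector ETLBs — can be illustrated schematically. This sketch is ours, not taken from Titan I or Hattersley; all names are hypothetical.

```python
# Illustrative contrast (hypothetical names): the IPU TLB miss handler
# loads only the entry for the faulting (third) virtual address, while
# an ETLB fill writes the one translation made by the IPU into both the
# load-pipe (R-BUS) and store-pipe (S-BUS) ETLBs.

def fill_ipu_tlb(vpage, ipu_tlb, table):
    ipu_tlb[vpage] = table[vpage]  # only the third virtual address's entry


def fill_etlbs(vpage, r_bus_etlb, s_bus_etlb, table):
    entry = table[vpage]           # one translation performed by the IPU...
    r_bus_etlb[vpage] = entry      # ...stored into the load-pipe ETLB
    s_bus_etlb[vpage] = entry      # ...and into the store-pipe ETLB


# usage
table = {5: 9}                     # virtual page -> physical page
ipu_tlb = {}
fill_ipu_tlb(5, ipu_tlb, table)    # IPU TLB receives exactly one entry
r_etlb, s_etlb = {}, {}
fill_etlbs(5, r_etlb, s_etlb, table)  # both vector ETLBs receive the entry
```

The point of the contrast is that nothing besides the faulting entry reaches the IPU TLB during its fill step, consistent with the "only the third virtual address" limitation of claim 12.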

Moreover, as noted in the paragraphs below, given the substantial differences between the IPU TLB and the VPU ETLBs and between the instructions executed by the IPU and those executed by the VPU, it would have been obvious to a skilled artisan that translation data obtained by the IPU as a result of TLB misses of the R-BUS ETLB (first translation buffer) and TLB misses of the S-BUS ETLB (second translation buffer) would not have been stored as entries in the IPU TLB. See APL-1004 at 133-136. In particular, Titan I distinguishes the instructions executed by the IPU from those performed by the VPU by disclosing that the IPU instructions include normal control and integer scalar operations that are distinct from the floating point instructions executed by the load and store pipes of the VPU. APL-1007 at p. 118. The IPU often does not do much real work in the sense of producing actual computational results, id. at p. 111, while the VPU provides the computational power that does the real work for both the computation and display phases of a program's execution on Titan, id. at p. 118, by operating on integer and floating-point vectors, with scalars treated as one-element vectors, id. See also id. at pp. 114-117. The VPU, therefore, processes instructions for vector data. In contrast, the IPU processes instructions for control and scalar data. Titan I further emphasizes the distinction between the TLB in the IPU and

the ETLBs in the VPU, disclosing that they have different sizes and characteristics to account for the different typical usages of memory by the IPU and VPU and to allow concurrent operation of the IPU and VPU. Id. at p. 38 (emphasis added). Accordingly, it would have been obvious to one of ordinary skill in the art that Titan I in view of Titan II and Hattersley yields that only the translation data generated as a result of an IPU TLB miss (translation data for the third virtual address) would be stored in the IPU TLB (the third translation buffer). Translation data generated as a result of an R-BUS ETLB or an S-BUS ETLB (translation data for the first virtual address and translation data for the second virtual address) miss, in contrast, would not be stored in the IPU TLB because of the substantial differences between the IPU TLB and the VPU ETLBs and between the instructions executed by the IPU and those executed by the VPU. For example, as noted by Dr. Alpert, a skilled artisan would have readily recognized that loading the translation for an instruction page generated by the IPU in response to an IPU TLB miss into the R-BUS (load) ETLB or S-BUS (store) ETLB of the VPU would hurt performance of the Titan computing system. In particular, because the VPU would not make use of the translation for the instruction page given the substantial differences between the IPU instructions and the VPU instructions (i.e., the VPU performs computations for large 55

data sets and does not access instruction pages), the time spent loading the instruction page translation into the ETLBs would needlessly slow down the computing system and may even result in a potentially useful address translation entry for a vector operand in the ETLBs being replaced by a useless IPU instruction page translation entry. See APL-1004 at 135-136. C. [GROUND 3] Titan I in view of Titan II and Denning Renders Claim 8 Obvious As explained in V(A) supra, Titan I in view of Titan II yields all of the limitations of claim 8, based on construing the term master translation memory as structure in memory that maintains translation data, including translation data that is determined to be missing from the first/second translation buffers, i.e., when a TLB miss occurs. See III(C)(I) and V(A). To the extent the Board construes the term master translation memory to be main memory, which is the construction advanced in the related case IPR2014-01105, see APL-1016 at p. 5, this distinction does not render claim 8 patentable. In particular, Denning (APL-1015) discloses this feature, and Titan I and Titan II in view of Denning renders claim 8 obvious. As noted in V(A)(2), Titan I discloses that the system or physical memory of Titan is the main memory of the Titan computing architecture, which is shown, e.g., in Fig. 1.1 (annotated above), as memory that is accessible by the processors using the system bus. See also APL-1004 at 102. Denning discloses that 56

page/translation tables are stored in the main memory of a computer, which is a relatively fast memory as compared to mass storage devices like disk drives, because of the need to quickly access this data during execution of a program. See, e.g., APL-1015 at pp. 162-164 (disclosing that "the segment table can be stored in main memory"; a "page table can be stored in memory"; and "the segment and page tables may be stored in memory") (emphasis added). See also id. at p. 153 (distinguishing between main memory and auxiliary memory based on speed of access); p. 157 (describing system hardware with main memory and auxiliary memory where processors have direct access to main memory, but not to auxiliary memory; therefore "information may be processed only when in main memory, and information not being processed may reside in auxiliary memory. From now on, the term 'memory' specifically means main memory.") (emphasis added); and pp. 169-171 (describing drums and disks as examples of auxiliary memory). A skilled artisan, therefore, would have found it obvious, based on the disclosure of Titan I and Titan II in view of the teachings of Denning, that the translation table of the Titan architecture would be stored in the system memory, i.e., the main memory, of the Titan architecture in order to provide fast translations of virtual addresses to physical addresses. See also APL-1004 at 137-138. It therefore would have been obvious to one of ordinary skill in the art to apply the teaching of Denning to the Titan architecture as disclosed by Titan I and Titan II, and such a
combination would meet all of the elements of claim 8, as otherwise noted above in V(A).

D. [GROUND 4] Titan I in view of Titan II and further in view of Hattersley and Denning Renders Claims 9-12 Obvious

As explained in V(B) supra, Titan I in view of Titan II and Hattersley yields all of the limitations of claims 9-12, based on construing the term "master translation memory" as "structure in memory that maintains translation data, including translation data that is determined to be missing from the first/second translation buffers," i.e., when a TLB miss occurs. See III(C)(1) and V(A). As discussed in V(C) supra, to the extent the Board construes the term "master translation memory" to be "main memory," Denning (APL-1015) discloses this feature, such that Titan I and Titan II in view of Hattersley and Denning renders claims 9-12 obvious. See V(B) and V(C).

VI. REDUNDANCY

This petition is being filed concurrently with one other petition regarding the same '750 Patent, namely case number IPR2015-00192 ("counterpart petition"). Between these two petitions, Apple has presented only a limited number of grounds, yet in doing so, has demonstrated how various teachings address the claims divergently. Indeed, the counterpart petition sets forth grounds based on PowerPC-1 (APL-1013) and PowerPC-2 (APL-1014), which are the only grounds that address claim 1. By contrast, the present petition sets forth
grounds based on Titan I and Titan II, which are the only grounds that address claims 10-12. The Board should be aware that a petition filed in proceeding number IPR2015-00175 advances grounds of rejection for claims 10-12 that are based on assertions made by Vantage Point in its infringement contentions in related litigation with respect to the meaning of the claim term "whenever." See, e.g., APL-1020 at pp. 23-25, pp. 34-38, pp. 41-42, and pp. 56-59. To the extent that the Board considers grounds set forth in that petition in its redundancy analysis, Apple submits that the grounds of the present petition are not redundant to those advanced in the IPR2015-00175 petition at least because the grounds for rejection of claims 10-12 of the present petition do not rely on the infringement positions advocated by Vantage Point in the related litigation. In particular, the present petition demonstrates how Titan I and Titan II, and alternatively Titan I and Titan II in view of Denning, yields all the features of claims 10-12 without relying on infringement positions advocated by Vantage Point in the related litigation with respect to the meaning of the claim term "whenever." Accordingly, Apple respectfully requests that the Board institute rejections on all grounds presented in these two petitions to avoid prejudicing Apple. However, to the extent the Board institutes fewer than the limited number of presented grounds, Apple requests that the Board institute at least the Titan I
and Titan II-based grounds of rejection.

VII. CONCLUSION

The prior art references identified in this Petition provide new, non-cumulative technological teachings which indicate a reasonable likelihood of success as to Apple's assertion that the Challenged Claims of the '750 patent are not patentable pursuant to the grounds presented. Accordingly, Apple respectfully requests institution of IPR for the Challenged Claims of the '750 patent for each of the grounds presented herein.