AIFA Silicon Photonics

Super Computing Center

Project Planning Summary and Financial Forecast of AIFA Silicon Photonics Super Computing Center

  • ~1.2 EFLOPS Phase I Compute
  • ≥108M AI Tokens/Day
  • $240M Target Annual Revenue
  • 1.08 – 1.10 Target PUE
  • 100% Company-owned Land

1. Overview of Core Project Metrics

Metric                                   Value
Land Area                                26,667 square meters
Total Planned Gross Floor Area           287,039 square feet
Total Investment (Phase I + Phase II)    US$360M – US$420M
Phase I Investment                       US$180M – US$200M
Phase I Compute Capacity                 Approximately 1.2 EFLOPS (FP8)
Daily AI Token Output at Full Capacity   >108 million
Target Annual Revenue                    Approximately US$240 million
Target PUE                               1.08 – 1.10
Land Ownership                           100% Company-owned
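To make the PUE target concrete: PUE is the ratio of total facility power to IT equipment power, so a target of 1.08 – 1.10 means cooling, power distribution, and other overhead add only 8 – 10% on top of the IT load. The sketch below illustrates the calculation; the load figures are hypothetical examples, not values from this plan.

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# The load figures below are illustrative assumptions, not values from the plan.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Ratio of total facility power draw to IT equipment power draw."""
    return total_facility_kw / it_load_kw

it_load = 10_000.0   # hypothetical 10 MW IT load
overhead = 900.0     # hypothetical 0.9 MW cooling + power-distribution overhead

print(f"PUE = {pue(it_load + overhead, it_load):.2f}")  # prints "PUE = 1.09"
```

At a 10 MW IT load, staying inside the 1.08 – 1.10 band means holding total non-IT overhead to roughly 0.8 – 1.0 MW.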

2. Phased Construction and Financing Plan

Phase I Project
Q4 2026 – Q3 2027

Construction Scale

High-standard compute infrastructure; photonics compute clusters; liquid cooling systems; data center power infrastructure

Investment Scale

US$180M – US$220M

Funding Sources

US$150M in convertible notes, plus mezzanine debt and/or operating cash flow partnerships

Core Objective

Achieve commercial operation by Q3 2027 with capability to deliver scalable high-performance compute services

Phase II Project
Q4 2027 – Q3 2028

Construction Scale

Compute capacity expansion; R&D facilities; silicon photonics print laboratory; cross-border data center platform

Investment Scale

US$180M – US$200M

Funding Sources

US$150M in convertible notes + operating cash flow from Phase I

Core Objective

Expand total GPU deployment to 12,000 units and establish a fully integrated international compute service ecosystem

3. Capacity and Financial Planning

3.1 Compute Capacity and AI Token Output
Metric                   Phase I (Q3 2027)       Full Capacity (Q3 2028)
GPU Deployment           4,000 – 5,000 units     10,000 – 12,000 units
Peak FP8 Compute Power   1,000 – 1,200 PFLOPS    2,500 – 2,800 PFLOPS
Daily AI Token Output    40 – 50 million         >108 million
Annual AI Token Output   15 – 18 billion         Approximately 39.4 billion
[Chart: GPU units and peak PFLOPS, Phase I (Q3 2027) vs. full capacity (Q3 2028)]
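The token-output figures above can be cross-checked by simple annualization arithmetic. The per-token revenue rate at the end is derived here from the stated US$240M target and full-capacity output; it is not a price quoted in this plan.

```python
# Back-of-envelope check of the Section 3.1 token figures.
# Daily outputs are from the document; the revenue rate is derived, not stated.

DAYS_PER_YEAR = 365

def annual_tokens(daily_tokens: float) -> float:
    """Annualize a daily token-output figure."""
    return daily_tokens * DAYS_PER_YEAR

# Phase I: 40-50 million tokens/day -> ~14.6-18.25 billion/year ("15-18 billion")
phase1_low = annual_tokens(40e6)
phase1_high = annual_tokens(50e6)

# Full capacity: 108 million tokens/day -> ~39.4 billion/year
full_capacity = annual_tokens(108e6)

# Implied blended rate at the $240M revenue target (derived, ~$0.0061/token)
implied_rate = 240e6 / full_capacity

print(f"Phase I annual: {phase1_low / 1e9:.1f}-{phase1_high / 1e9:.1f}B tokens")
print(f"Full-capacity annual: {full_capacity / 1e9:.1f}B tokens")
print(f"Implied revenue: ${implied_rate * 1000:.2f} per 1,000 tokens")
```

The annualized figures match the table: 108M tokens/day works out to about 39.4 billion tokens/year, and the implied blended rate at the revenue target is roughly $6 per thousand tokens.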

4. Technology System Advantages

4.1 Technology Selection Rationale

Selected architecture: Silicon Photonics CPO + Vera Tensor CPU + Liquid Cooling

  • Frontier-scale interconnect solutions face significant power and heat challenges
  • Silicon photonics recognized by NVIDIA and TSMC as the next-generation standard
  • Immersion liquid cooling handles high-density compute with industry-leading PUE
  • First-mover advantage in the Asian market
4.2 Software Layer Details

Proprietary Silicon Photonics-Optimized AI OS

  • Built on deep, full-stack silicon photonics optimization
  • Supports heterogeneous computing with trillion-parameter MoE models
  • Reduces GPU memory usage by ~80% and increases throughput by 3×
Key Technology Components

Advanced Silicon Photonics Cluster

Intelligent scheduling engine for optimized resource allocation

Cross-Border Data Framework

Integrated framework for cross-border data exchange and intelligent traffic routing

Green Energy Infrastructure

Solar power + efficient energy storage systems for sustainable operations
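As an illustration of the core allocation step an intelligent scheduling engine might start from, here is a minimal best-fit GPU placement sketch. It is entirely hypothetical: the `Node` fields and the best-fit rule are assumptions for illustration, not details of the actual engine.

```python
# Hypothetical sketch of best-fit GPU placement: each job is assigned to the
# node whose free capacity leaves the least leftover. Not the actual engine.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_gpus: int

def schedule(jobs: list[int], nodes: list[Node]) -> dict[int, str]:
    """Place each job (a GPU count) on the node with the tightest fit."""
    placement: dict[int, str] = {}
    for i, gpus in enumerate(jobs):
        candidates = [n for n in nodes if n.free_gpus >= gpus]
        if not candidates:
            continue  # no node fits; the job waits in the queue
        best = min(candidates, key=lambda n: n.free_gpus - gpus)  # best fit
        best.free_gpus -= gpus
        placement[i] = best.name
    return placement
```

Best-fit packing keeps large contiguous blocks free for future large jobs; a production engine would also weigh interconnect topology, data locality, and preemption.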

5. Academic, Industry, and Supply Chain Collaboration Progress

Tsinghua University

The Company has completed a full round of non-binding academic exchanges and technical discussions with relevant departments at Tsinghua University. Discussions focused on silicon photonics interconnect technology, liquid cooling system design, and heterogeneous computing scheduling. The parties have designated preliminary liaison teams and plan to enter into a formal cooperation framework agreement in the third quarter of 2026.

Leading Chip and Compute Companies

The Company has engaged in preliminary technical discussions with leading chip and compute companies, including NVIDIA, AMD, and Cambricon. These discussions have focused on silicon photonics hardware, liquid cooling infrastructure, and potential supply chain partnerships, with the aim of achieving long-term strategic alignment with the project's construction timeline.

Note: All collaborations are currently in preliminary discussion or planning stages and no legally binding agreements have been executed. The Company will make timely disclosures as developments occur.

6. Policy Environment and Strategic Location Advantages

Hainan Free Trade Port Policy Advantages
  • Offshore industry tax incentives
  • Duty-free importation of high-end equipment
  • Pilot programs for compliant cross-border data flows
  • Dedicated international communication channels
  • Targeted support for the digital economy

These policies are expected to reduce both capital expenditures and long-term operating costs

Cross-Border Network Advantages

Leveraging inherent cross-border optical network advantages to remove barriers to:

  • Cross-border compute scheduling
  • Data exchange and business collaboration
  • High-end offshore supercomputing services

Designed to address the market gap in high-end offshore supercomputing services across Asia

7. Strategic Significance of the Project

Project Vision

Upon full completion, the AIFA Silicon Photonics AI Supercomputing Center is expected to become a landmark supercomputing infrastructure asset within the Hainan Free Trade Port and serve as a core strategic platform for AGAE's expansion in Asia and its broader positioning within the global AI industry.

Comprehensive Service Suite

The center is intended to provide a comprehensive suite of international services to institutions across Asia, global technology companies, research organizations, and financial institutions:

  • Customized high-end compute leasing
  • Dedicated large-model training
  • Compliant cross-border data processing
  • Offshore intelligent computing solutions
Long-Term Value Creation

Through these capabilities, the project is designed to unlock the long-term growth potential of Asia's digital economy and support the creation of sustained, stable, and high-quality value for the Company's global shareholders.