Foxconn Triples AI Server Capacity in Taiwan Push
Hon Hai Precision Industry, which trades as Foxconn, is moving fast to triple its AI server production capacity by the end of 2026, anchoring Taiwan's bet that the island can stay at the centre of the AI hardware chain even as customers diversify into Mexico, the United States, and Vietnam. Chairman Young Liu told investors this month that AI server lines have already eclipsed smartphones as the largest single revenue contributor, and that the company sees no slowdown in Nvidia GB200 and GB300 demand through 2027.
The shift is reshaping Hon Hai's factory footprint, supplier network, and cash conversion cycle in ways that go well beyond a quarterly beat. Foxconn now ranks as the largest single assembler of Nvidia rack-scale AI systems, and its order book has stretched well into next year.
Why The Push Is Bigger Than Nvidia
The headline customers are familiar: Microsoft, Google, Meta, AWS, and Oracle each placed substantial Nvidia GB300 NVL72 rack orders in the first quarter, and Foxconn quietly added two new lines at its Tucheng campus to absorb the volume. But the deeper story is the supplier ecosystem the company is dragging along. Component partners in Hsinchu and Tainan, including liquid cooling specialist Asia Vital Components and PCB maker Unimicron, have pulled forward capex of their own, pushing Taiwan's AI hardware stack deeper into vertical integration.
Foreign buyers used to bias their orders toward Quanta and Wistron when Foxconn's output mix was dominated by iPhones. That preference has eroded as Hon Hai has shown it can ship a fully tested GB300 NVL72 rack within roughly 12 weeks of an order, half the lead time of 18 months ago.
Mexico And Wisconsin Are Real, But Taiwan Stays Central
Foxconn's Wisconsin facility and a new Guadalajara campus in Mexico are under construction with combined capex of roughly USD 1.4 billion, and both will produce server racks for North American customers. That is real, and it answers Washington's pressure for nearshoring. But the engineering, the design wins, the iterative tuning with Nvidia and AMD, and the most complex GB300 builds will stay in Taiwan for the rest of this hardware generation.
The reason is partly skills, partly clustering. Taiwan's AI server cluster pulls together advanced packaging at TSMC, HBM stacked memory shipped in from Korea, liquid cooling, busbar power delivery, and final assembly within a four-hour drive radius. North America cannot match that density before 2028.
What Investors Are Watching Now
The market questions for the rest of 2026 are not about whether AI server demand exists. They are about margin, mix, and execution risk:
- Margin: Whether Hon Hai can hold operating margin above 3% as system content grows but commodity cooling and chassis components compress pricing.
- Mix: Whether the share of higher-value rack-scale shipments versus bare server boards continues to rise toward the 60% target Liu set out.
- Power: Whether Taiwan's grid can support the megawatt-scale burn-in testing facilities that complex GB300 racks now require for hyperscaler acceptance.
- Talent: Whether the company can keep poaching ASIC and platform engineers from TSMC, MediaTek, and Quanta without driving wage inflation that hurts the next bid.
- Diversification: Whether second-source customers such as xAI and the Saudi-anchored HUMAIN bring enough volume to meaningfully reduce reliance on the top three US hyperscalers.
The Numbers Behind The Bet
| Indicator | 2024 | 2025 | 2026 (target) |
|---|---|---|---|
| AI server share of revenue | 9% | 22% | 34% |
| Cloud and networking growth | +38% | +52% | +65% (Q1 actual) |
| GB-class racks shipped (units) | ~3,200 | ~12,500 | ~38,000 target |
| North America capacity share | 5% | 9% | 14% |
| Operating margin | 2.7% | 3.1% | 3.2-3.4% guidance |
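A quick back-of-envelope check shows how the table's unit figures line up with the headline "triple capacity by end of 2026" claim. The sketch below uses only the approximate numbers quoted in this article; none of it is audited company data.

```python
# Rough year-over-year multiples implied by the rack shipment figures above.
# All inputs are the article's approximate numbers, not company disclosures.

racks = {2024: 3_200, 2025: 12_500, 2026: 38_000}  # GB-class racks; 2026 is a target

# Year-over-year growth multiples on rack shipments
for year in (2025, 2026):
    multiple = racks[year] / racks[year - 1]
    print(f"{year}: {multiple:.1f}x prior-year rack volume")
```

The 2026 target works out to roughly 3x the 2025 volume, which is consistent with the stated plan to triple AI server capacity; the 2024-to-2025 jump was closer to 4x.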
The Q1 numbers, reported in mid-April, broadly tracked the company's own guidance and reassured analysts who had worried that the Trump administration's expanded export controls would clip Hon Hai's China-tied revenue. So far, the AI server segment has more than absorbed the slowdown in legacy assembly volumes, and Liu pointedly told analysts that the company is not seeing a softening of orders into the second half.
"AI server demand is still very strong. We do not see a slowdown into the second half. The bottleneck is on our side, not on the customer side," Liu said.
What This Means For The Rest Of Asia
Foxconn's expansion is good news for Taiwan, but it puts pressure on regional rivals. Quanta and Wistron will fight harder for the second tier of Nvidia rack orders. South Korean memory suppliers like SK Hynix and Samsung Semiconductor will see HBM demand stay tight, but they also lose pricing leverage as Foxconn negotiates as a single buyer for hundreds of thousands of stacks. ASEAN's data centre play, which has lately pulled in Sea Limited's Singapore AI centre and Vietnamese hyperscaler buildouts, is increasingly downstream of Taiwan's hardware decisions.
For readers tracking Asia's AI compute race, the takeaway is that hardware concentration is widening, not narrowing. The Mexico and Wisconsin moves have political weight, but the engineering centre of gravity remains New Taipei.
Frequently Asked Questions
How does Foxconn's AI server business compare to Quanta's?
Foxconn now ships roughly twice Quanta's rack-scale GB-class volume and runs ahead of Wistron, but Quanta retains an edge in bare-board ODM relationships with select hyperscalers. The two will trade share through 2027 as Nvidia diversifies its supplier roster.
Will US tariffs hurt Foxconn's AI servers?
Most AI server racks ship into the US with preferential duty treatment because final assembly happens at Foxconn's Wisconsin and Mexico campuses. The bigger risk is component-level tariffs on Chinese cooling and PCB inputs, which Foxconn is mitigating by qualifying Taiwanese and Vietnamese substitutes.
What does this mean for Taiwan's grid?
The new burn-in test halls demand sustained megawatt-scale power. Taipower has approved priority connections at Tucheng and Linkou, but the Bureau of Energy is now reviewing whether AI testing should be classed as critical industrial load.
Is Hon Hai exposed if Nvidia demand falls?
Yes, the customer concentration risk is real. Hon Hai is trying to broaden into AMD MI400 systems and ASIC racks for hyperscalers' custom silicon, but Nvidia still accounts for well above 70% of the AI server pipeline.