
Hyperaccel
Made Fast, Efficient, and Affordable.
Date | Investors | Amount | Round |
---|---|---|---|
* | * | KRW55.0b | Series A |
Total Funding | | 000k | |
Hyperaccel, a South Korean startup founded in January 2023, is focused on the design and production of specialized semiconductors for artificial intelligence. The company is developing what it calls a Large Language Model (LLM) Processing Unit, or LPU, an AI processor specifically engineered for LLM inference. This positions Hyperaccel in the application-specific semiconductor market, creating hardware solutions for emerging AI applications.
The company was founded by Professor Joo-Young Kim of KAIST, who brings experience with server hardware accelerators from his time at Microsoft. He also serves as director of the KAIST AI Semiconductor Systems Research Center, and Hyperaccel draws on this academic and research background to advance its goals. The business model spans licensing its dedicated processor IP and supplying servers for generative AI workloads. Its clientele consists of generative AI service companies seeking to reduce operational costs and improve service quality.
Hyperaccel's core product is the LPU, a semiconductor that optimizes memory bandwidth usage, a critical factor in large-scale generative AI computation. The LPU combines this memory bandwidth optimization with AI-specific logic to accelerate the entire LLM inference process. The company claims its LPU delivers ten times the cost-efficiency of high-performance GPUs on LLM inference, positioning it as a more affordable and energy-efficient alternative. Hyperaccel launched an accelerator server named 'Orion' in October 2023, which runs LLMs on its FPGA-based semiconductor. The company is also developing a 4nm generative AI processor IP called 'Bertha', with mass production anticipated in the first quarter of 2026. Beyond server chips, Hyperaccel is expanding into the on-device AI market, targeting applications in robots and home appliances.
Since its inception, the company has secured significant funding. After a seed round of $4.54 million in August 2023, it raised approximately $41.2 million in a Series A round in late 2024. The round was led by Korea Investment Partners and included a mix of Korean and international investors, among them Vickers Venture Partners, Company K Partners, and LB Investment. The capital is earmarked for mass production of its LLM-specific chips.
Keywords: AI semiconductors, Large Language Model, LLM Processing Unit, LPU, application specific semiconductors, server hardware accelerators, generative AI, IP licensor, AI inference, memory bandwidth optimization, on-device AI, FPGA semiconductor, KAIST, Joo-Young Kim, Orion server, Bertha processor