
Semron
AI chips with the highest intelligence density.
- dt and ls
- horizon europe
- core ai
- ai applications
- slush24
- semiconductor designers and manufacturers
- nif defense security and resilience
- start2 group
- german accelerator
- chips and processors
- future of computing
- nif protection of critical infrastructure
- ai chips and processors
- green computing
- edge computing
- wf green computing
- nif ai
Date | Investors | Amount | Round |
---|---|---|---|
- | investor investor investor investor | €0.0 | round |
- | investor | €0.0 | round |
- | investor investor investor investor investor investor investor | €0.0 | round |
- | - | €2.5m | Grant |

Total Funding: 000k
Based in Dresden, Germany, Semron is a semiconductor company developing high-efficiency chips for artificial intelligence applications. The company was founded in 2019 by Aron Kirschen (CEO) and Kai-Uwe Demasius (CTO), whose collaboration began during their studies at the Technical University of Dresden, where they set out to conceive an ideal device for deep learning. That work, started in 2015, led to the company's core technology, CapRAM™.
Semron's primary business is the design and development of AI inference chips intended to disrupt the edge computing market. The company targets manufacturers of smart devices such as smartphones, earbuds, VR/AR headsets, and other IoT devices. Its business model revolves around providing these manufacturers with chips that can run large AI models, including generative AI, locally on the device, thereby reducing latency and dependence on data centers. This is achieved through a proprietary in-memory computing architecture.
The company's core product is a 3D-scaled AI chip based on its CapRAM™ technology. Unlike traditional chips that use transistors and electrical currents for computation, Semron's technology utilizes memcapacitors—variable capacitors that compute using electric fields. This approach significantly reduces electron movement, leading to lower power consumption and heat generation. A key feature is the ability to stack hundreds of computing layers on a single chip, creating a high-density architecture. This results in chips that are purportedly up to 20 times more energy-efficient and can handle AI models 500 to 1000 times larger than conventional mobile chips, all while keeping costs down by leveraging existing semiconductor manufacturing processes.
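The compute-in-memory principle described above is easiest to picture as a charge-domain multiply-accumulate: weights are stored as programmable capacitances, inputs arrive as voltages, and the charge accumulating on each column wire approximates a dot product. The sketch below is a generic, idealized Python illustration of that principle, not a model of Semron's proprietary CapRAM™ device; the capacitance scale, differential-pair weight encoding, and voltage ranges are illustrative assumptions.

```python
# Conceptual sketch (not Semron's actual design): how a capacitive
# compute-in-memory array can perform a matrix-vector multiplication.
# Each weight is stored as a programmable capacitance; applying input
# voltages to the rows lets the charge collected on each column wire
# approximate a dot product (Q = C * V, summed along the column), so the
# multiply-accumulate happens inside the memory array instead of
# shuttling weights to a separate digital compute unit.
import numpy as np

rng = np.random.default_rng(0)

# A small neural-network weight matrix (outputs x inputs), values in [-1, 1].
weights = rng.uniform(-1.0, 1.0, size=(4, 8))

# Map signed weights onto a differential pair of capacitances (C_plus, C_minus),
# a common way to represent negative values in analog arrays (assumed here).
c_unit = 1e-15  # 1 fF per unit weight (illustrative scale)
c_plus = np.clip(weights, 0, None) * c_unit
c_minus = np.clip(-weights, 0, None) * c_unit

# Input activations encoded as row voltages (illustrative range).
v_in = rng.uniform(0.0, 0.8, size=8)  # volts

# Charge collected on each column: Q_j = sum_i C_ij * V_i (per polarity),
# then take the differential to recover the signed result.
q_plus = c_plus @ v_in
q_minus = c_minus @ v_in
analog_out = (q_plus - q_minus) / c_unit  # normalize back to weight units

# Reference: the same matrix-vector product computed digitally.
digital_out = weights @ v_in

print(np.allclose(analog_out, digital_out))  # True in this ideal, noiseless model
```

In a physical array, non-idealities such as parasitic capacitance, noise, and limited programming precision would perturb the analog result; the point of the sketch is only that the multiply-accumulate happens where the weights are stored, so weight data never has to move.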
Keywords: AI chips, edge computing, semiconductors, in-memory computing, memcapacitor, CapRAM, 3D scaling, AI inference, generative AI, low-power AI, mobile devices, deep learning hardware, semiconductor design, Dresden, Aron Kirschen, Kai-Uwe Demasius, smart devices, wearables, IoT, energy-efficient computing, neural processing unit, hardware development, artificial intelligence hardware, compute-in-memory, large language models on-device