Semidynamics, an advanced computing company developing memory-centric AI infrastructure, announced a strategic investment from SK hynix to accelerate the development of next-generation AI inference chips. The investment underscores a growing industry shift toward memory architecture as a critical factor in determining the cost and performance of large-scale AI systems.
As large language models and agentic AI workloads continue to expand, inference performance is increasingly limited by memory capacity and data movement rather than raw compute power. Semidynamics is addressing this constraint by designing processors that offer substantially more memory capacity than traditional high-bandwidth memory (HBM) systems. This enables support for larger models, expanded KV caches, and longer context windows, ultimately allowing more users per rack and reducing cost per token.
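The pressure that KV caches and long context windows put on memory capacity can be made concrete with a back-of-envelope calculation. The sketch below uses illustrative Llama-70B-like model dimensions, not figures from Semidynamics or SK hynix:

```python
# Back-of-envelope KV-cache size for a transformer decoder.
# All model dimensions are illustrative assumptions, not vendor figures.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """Bytes needed to cache keys and values for one sequence."""
    # Two tensors (K and V) per layer, each of shape
    # [n_kv_heads, seq_len, head_dim], stored at bytes_per_elem precision.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

# 80 layers, 8 grouped-query KV heads of dim 128, a 128K-token context, fp16:
per_seq = kv_cache_bytes(n_layers=80, n_kv_heads=8, head_dim=128,
                         seq_len=128 * 1024)
print(f"{per_seq / 2**30:.0f} GiB per sequence")  # prints "40 GiB per sequence"
```

At roughly 40 GiB of cache per long-context user, capacity, not compute, determines how many concurrent users a rack can serve, which is the constraint the article describes.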
Headquartered in Barcelona, Semidynamics has built its proprietary processor architecture on the open RISC-V standard, specifically engineered to overcome the “memory wall.” Its Gazzillion memory subsystem technology focuses on latency tolerance, ensuring systems remain productive despite long memory access times that typically stall conventional AI accelerators.
The company recently achieved a key milestone with a 3nm silicon tape-out in partnership with TSMC, positioning it among a small group of European semiconductor firms reaching this advanced node. This progress supports its broader roadmap to deliver high-performance inference processors and fully integrated AI infrastructure systems.
Through the collaboration, Semidynamics and SK hynix plan to co-optimize processor architecture with next-generation memory technologies. The goal is to better support increasingly complex AI workloads, particularly those involving multi-step reasoning, long-context processing, and continuous stateful interactions. These workloads are fundamentally constrained by data movement, making memory efficiency central to scaling AI economically.
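The claim that these workloads are fundamentally constrained by data movement follows from a standard bandwidth-bound ceiling on decode throughput: generating each token requires streaming the model's weights from memory, so memory bandwidth caps tokens per second regardless of available compute. A minimal sketch with illustrative numbers (not vendor figures):

```python
# Bandwidth-bound ceiling on single-stream decode throughput:
# each generated token must read all model weights from memory,
# so tokens/s <= memory bandwidth / weight bytes.
# All numbers are illustrative assumptions, not vendor figures.

def max_tokens_per_sec(param_count, bytes_per_param, mem_bandwidth_gbs):
    """Upper bound on decode tokens/s for one sequence."""
    weight_bytes = param_count * bytes_per_param
    return mem_bandwidth_gbs * 1e9 / weight_bytes

# 70B parameters in fp16 (140 GB of weights), 3300 GB/s of HBM bandwidth:
print(f"{max_tokens_per_sec(70e9, 2, 3300):.1f} tokens/s")  # ~23.6 tokens/s
```

No amount of extra compute raises this ceiling; only more bandwidth, smaller weights, or better reuse across batched users does, which is why co-optimizing the processor with the memory technology matters for multi-step, long-context workloads.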
The investment will also support future chip tape-outs and the development of rack-level AI infrastructure systems. To date, Semidynamics has secured €45 million in non-dilutive funding from European and Spanish innovation programs, further strengthening its position in the AI semiconductor ecosystem.
Semidynamics is building a full-stack AI infrastructure platform that includes chips, boards, and rack-level systems designed for data center-scale inference deployments.
KEY QUOTES:
“SK hynix’s investment is a direct reflection of where AI infrastructure is heading: systems where memory architecture is as strategically important as compute. We built Semidynamics around that thesis, and this partnership strengthens our position as we bring our inference platform to market at a moment when the industry has recognized that token economics are a memory problem as much as a compute problem.”
Roger Espasa, Founder and CEO, Semidynamics
“AI workloads are fundamentally memory-bound problems, and the industry has been underinvesting in architecture-level solutions. Semidynamics is one of the few companies that has built from first principles around this constraint.”
Heejin Chung, SVP and Head of Venture Investment, SK hynix America

