Samsung may have unintentionally revealed that it is developing a RISC-V CPU, which a presentation slide suggested could end up in an AI chip.
The company plans to release an AI accelerator with heavy in-memory processing, but it has been quiet about its upcoming chip.
A slide at the ISC conference mentioned a “RISC-V CPU/AI accelerator from Samsung.” It isn’t clear if the RISC-V CPU is related to that specific chip or whether Samsung is developing a separate RISC-V processor.
The slide was presented during a session about the UXL Foundation (Unified Acceleration Foundation), which aims to build an open AI software stack that supports AI accelerators other than Nvidia's GPUs and competes with Nvidia's CUDA.
A slide at the ISC 2024 conference during a UXL Foundation session mentions a Samsung “RISC-V CPU/accelerator” (Source: ISC 2024 Video)
Samsung has extensively discussed in-memory processing. Computing near memory or inside memory alleviates bandwidth issues for scientific and AI applications. Samsung is reportedly preparing an AI accelerator called Mach-1, which Naver (a South Korean internet conglomerate) has already ordered to the tune of $752 million.
Bongjun Kim, a staff researcher at Samsung Advanced Institute of Technology, said during the session that LLMs typically require a lot of memory, and that GPUs can sit underutilized as a result.
“You need to use some processing in-memory to alleviate the memory bandwidth problems,” Bongjun said during the ISC session titled “Unlocking the next 35 years of software for HPC and AI,” which is available on the web.
He didn’t talk specifically about the chip, and there are no additional details on Samsung’s RISC-V CPU.
The CPU could be a modest RISC-V core embedded in Samsung's memory-centric chip, running specific tasks exposed through software-kit functions.
RISC-V isn’t yet at a stage where it can reliably function as a high-performance CPU. It doesn’t offer the performance of a CPU based on Intel’s x86 architecture or ARM, and it has poor software support.
RISC-V CPUs are built on an open-standard instruction set that is free to license. Nvidia, Apple, and many other chip providers already embed RISC-V microcontrollers in chips otherwise powered by ARM CPUs, and Intel's Nios V soft processor for FPGAs is also based on RISC-V.
Europe, China, and Russia are also building sovereign chips based on RISC-V CPUs, preferring not to be locked into proprietary designs from Intel, AMD, or ARM. Semiconductors have become a geopolitical lever used to dictate trade terms between countries.
Samsung uses the open-source toolkit promoted by the UXL Foundation so that AI applications can run on its in-memory AI accelerator. The stack also supports OpenMP and OpenACC, which provide a hook into Nvidia's GPUs, as illustrated in the sketch below.
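Samsung has not published its software stack, so the following is only a generic illustration of what an OpenMP hook to an accelerator looks like, using standard OpenMP target-offload directives rather than anything Samsung-specific. The same source can be pointed at an Nvidia GPU or another vendor's device by the compiler and runtime.

```cpp
#include <vector>
#include <cstdio>

int main() {
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);
    float *pa = a.data(), *pb = b.data(), *pc = c.data();

    // Offload the loop to whatever accelerator the OpenMP runtime targets
    // (an Nvidia GPU today, potentially another vendor's device tomorrow).
    #pragma omp target teams distribute parallel for \
        map(to: pa[0:n], pb[0:n]) map(from: pc[0:n])
    for (int i = 0; i < n; ++i)
        pc[i] = pa[i] + pb[i];

    std::printf("c[0] = %f\n", pc[0]);  // expected: 3.0
    return 0;
}
```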
Andrew Richards, CEO of Codeplay, which is owned by Intel, said during the ISC session that the UXL Foundation software kit provides a base for hardware companies to innovate new hardware.
Richards said SYCL, the programming model at the heart of the UXL Foundation's software toolkit, provides a ready-made framework into which new hardware can be plugged.
Developers can use SYCL to move AI applications from Nvidia's GPUs to other AI processors.
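As a rough illustration of that portability, the minimal sketch below uses only standard SYCL 2020; nothing in it is Samsung- or Nvidia-specific. The same kernel source is retargeted by changing the device the queue binds to (or the backend the runtime is built with), which is the mechanism that lets code written for one vendor's GPU run on another accelerator.

```cpp
#include <sycl/sycl.hpp>
#include <cstdio>

int main() {
    // The queue binds the program to a device; swapping the selector or the
    // runtime backend retargets the same kernel without source changes.
    sycl::queue q{sycl::default_selector_v};

    const size_t n = 1024;
    float *data = sycl::malloc_shared<float>(n, q);
    for (size_t i = 0; i < n; ++i) data[i] = float(i);

    // A simple kernel: scale every element. The same code can run on an
    // Nvidia GPU (via a CUDA backend) or on another vendor's accelerator.
    q.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
        data[i] *= 2.0f;
    }).wait();

    std::printf("data[1] = %f\n", data[1]);  // expected: 2.0
    sycl::free(data, q);
    return 0;
}
```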
Samsung’s Bongjun said bringing SYCL to near- and in-memory processing use cases was challenging.
Samsung abstracted its underlying low-level SDK behind the oneAPI-based software stack. oneAPI is Intel's parallel programming framework for AI and HPC; it includes SYCLomatic, a tool that migrates CUDA code to SYCL.
“It was quite easy because we decided to design the processing in-memory, near-memory API as aligned with the SYCL group functions. I can say that it was a little bit complex regarding the implementation,” Bongjun said.
Richards responded: “When we designed SYCL, we weren’t thinking about processing in memory. You’ve extended SYCL with some of the special features of your processor in memory.”
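For context, SYCL group functions are work-group collectives in the SYCL 2020 standard, such as reduce_over_group. The sketch below shows only the standard API; how Samsung lowers such collectives onto its in-memory hardware, and what its extended features look like, has not been disclosed.

```cpp
#include <sycl/sycl.hpp>
#include <cstdio>

int main() {
    sycl::queue q;
    const size_t n = 256;
    float *in  = sycl::malloc_shared<float>(n, q);
    float *out = sycl::malloc_shared<float>(1, q);
    for (size_t i = 0; i < n; ++i) in[i] = 1.0f;
    *out = 0.0f;

    // reduce_over_group is a SYCL 2020 group function: every work-item in the
    // work-group contributes a value and receives the combined result. A
    // processing-in-memory backend could, in principle, map such collectives
    // onto in-memory reduction units rather than GPU-style shared memory.
    q.parallel_for(sycl::nd_range<1>{n, n}, [=](sycl::nd_item<1> it) {
        float partial = in[it.get_global_id(0)];
        float sum = sycl::reduce_over_group(it.get_group(), partial,
                                            sycl::plus<float>());
        if (it.get_local_id(0) == 0) *out = sum;
    }).wait();

    std::printf("sum = %f\n", *out);  // expected: 256.0
    sycl::free(in, q);
    sycl::free(out, q);
    return 0;
}
```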