Show HN: Kernel-level LLM inference via /dev/llm0

I saw an April Fools' joke and decided to implement it.

This is a rough port of llm.c to a Linux kernel module. A number of hacks were needed to make it work, so plenty of performance is left on the table. Nevertheless, it is a minimally functional GPT-2 inference loop running in the kernel.
Comments URL: https://news.ycombinator.com/item?id=43558042

Points: 2

# Comments: 0