Llama2.C64 by Elysium
Llama2.c64 - ported to the C64 by Maciej 'YTM/Elysium' Witkowiak using oscar64.

Llama2.c64 is a port of llama2.c to the Commodore C64 equipped with at least a 2MB REU. It runs the 260K tinystories model, bringing Llama2's capabilities to the unique C64 hardware environment.

This is not a chat model. Rather, imagine prompting a 3-year-old child with the beginning of a story — they will continue it to the best of their vocabulary and abilities.

Pros:
- Low power consumption
- On-premise inference
- Safe: your data is completely under your control; it's not used to train new models
- Doesn't require an expensive GPU
- Waiting for the next token on a C64 is just as exciting as waiting for one coming from DeepSeek running on your laptop

Cons:
- None really, this is fantastic
- A RAM Expansion Unit (REU) with at least 2MB is necessary
- Feels a bit slow, not for the impatient
- Won't handle models larger than about 8MB, because the REU is limited to 16MB

More info: https://github.com/ytmytm/llama2.c64/blob/main/README.md
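Since the REU requirement comes from storing the model in expansion memory, the C64 side presumably pulls weight data into main RAM on demand through the REU's DMA controller (the REC registers at $DF00). The sketch below is only an illustration of that mechanism, not the actual llama2.c64 source: the reu_fetch helper, the register macro names, and the row-streaming example are assumptions; only the REC register layout and the $91 "fetch" command are standard 17xx REU hardware facts.

/* Minimal sketch: copy a block of bytes from the REU into C64 RAM
   via the RAM Expansion Controller's DMA registers at $DF00.
   (Illustrative only; not taken from llama2.c64.) */

#include <stdint.h>

/* REC registers, base $DF00 */
#define REU_STATUS   (*(volatile uint8_t *)0xDF00)
#define REU_COMMAND  (*(volatile uint8_t *)0xDF01)
#define REU_C64_LO   (*(volatile uint8_t *)0xDF02)
#define REU_C64_HI   (*(volatile uint8_t *)0xDF03)
#define REU_REU_LO   (*(volatile uint8_t *)0xDF04)
#define REU_REU_HI   (*(volatile uint8_t *)0xDF05)
#define REU_REU_BANK (*(volatile uint8_t *)0xDF06)
#define REU_LEN_LO   (*(volatile uint8_t *)0xDF07)
#define REU_LEN_HI   (*(volatile uint8_t *)0xDF08)

/* Copy `len` bytes from a 24-bit REU address into C64 RAM at `dst`.
   Command $91 = execute + disable $FF00 trigger + transfer type
   "REU -> C64" (fetch). */
static void reu_fetch(void *dst, uint32_t reu_addr, uint16_t len)
{
    uint16_t c64 = (uint16_t)(uintptr_t)dst;

    REU_C64_LO   = (uint8_t)(c64 & 0xFF);
    REU_C64_HI   = (uint8_t)(c64 >> 8);
    REU_REU_LO   = (uint8_t)(reu_addr);
    REU_REU_HI   = (uint8_t)(reu_addr >> 8);
    REU_REU_BANK = (uint8_t)(reu_addr >> 16);
    REU_LEN_LO   = (uint8_t)(len & 0xFF);
    REU_LEN_HI   = (uint8_t)(len >> 8);
    REU_COMMAND  = 0x91;   /* start DMA, fetch into C64 RAM */
}

/* Hypothetical usage: stream one 64-float row of a weight matrix
   into a C64-side buffer before a matmul step. */
float row_buf[64];
/* reu_fetch(row_buf, weights_base + row * 64UL * sizeof(float), sizeof(row_buf)); */

Because the DMA transfer runs at memory speed, fetching rows like this keeps only a small working buffer in the C64's 64KB while the bulk of the model stays in the (up to 16MB) REU, which is consistent with the ~8MB model size ceiling mentioned above.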