• ylai@lemmy.ml
    1 year ago

    Given local inference engines like llama.cpp, I wish the modder had instead spent his energy on models that run locally, possibly even fine-tuned on the in-game world. Instead, this mod requires a metered API, with billing and an always-on network connection, just to serve a generic language model with little in-game knowledge.
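
    For context, here is a minimal sketch of what the local alternative looks like. llama.cpp's bundled server exposes an OpenAI-compatible chat endpoint, so a mod could query it with no billing and no internet dependency. This assumes a llama-server instance already running on localhost with some GGUF model loaded; the port, model, and NPC prompts are placeholders, not anything from the actual mod.

    ```python
    import requests

    # Sketch: query a llama.cpp server running locally instead of a metered
    # cloud API. Assumes the server was started beforehand, e.g.:
    #   llama-server -m model.gguf --port 8080
    # The endpoint is llama.cpp's OpenAI-compatible chat completions API.
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={
            "messages": [
                # Hypothetical in-game framing; a fine-tuned model could
                # bake this world knowledge in instead of prompting for it.
                {"role": "system", "content": "You are a merchant NPC in a fantasy town."},
                {"role": "user", "content": "What wares do you have today?"},
            ],
            "max_tokens": 128,
        },
        timeout=60,
    )
    print(resp.json()["choices"][0]["message"]["content"])
    ```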