Search results

11 packages found

Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.

published version 3.9.0, 4 days ago, 31 dependents, MIT license, 33,871 weekly downloads

A GGUF parser that works on remotely hosted files.

published version 0.1.17, a month ago, 7 dependents, MIT license, 10,043 weekly downloads

llama.cpp GGUF file parser for JavaScript.

published version 0.2.2, a year ago, 1 dependent, MIT license, 1,214 weekly downloads

Various utilities for maintaining Ollama compatibility with models on the Hugging Face Hub.

published version 0.0.11, 21 days ago, 0 dependents, MIT license, 748 weekly downloads

Lightweight JavaScript package for running GGUF language models.

published version 0.1.0, 3 months ago, 0 dependents, MIT license, 617 weekly downloads

Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.

published version 1.4.0, 4 days ago, 1 dependent, MIT license, 589 weekly downloads

Chat UI and local API for the Llama models.

published version 3.2.2, a year ago, 0 dependents, MIT license, 85 weekly downloads

Run AI models locally on your machine with Node.js bindings for llama.cpp. Force a JSON schema on the model output at the generation level.

published version 0.1.0, 10 months ago, 0 dependents, MIT license, 13 weekly downloads

A browser-friendly library for running LLM inference using Wllama, with preset and dynamic model loading, caching, and download capabilities.

published version 0.1.3, 2 months ago, 0 dependents, MIT license, 14 weekly downloads

A GGUF parser that works on remotely hosted files.

published version 0.1.12-dev, 3 months ago, 0 dependents, MIT license, 9 weekly downloads

Native Node.js plugin to run LLaMA inference directly on your machine with no other dependencies.

published version 0.3.0, 5 months ago, 0 dependents, MIT license, 6 weekly downloads
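Several of the entries above are GGUF parsers. The GGUF container opens with a small fixed header (the magic bytes "GGUF", then a little-endian uint32 format version, a uint64 tensor count, and a uint64 metadata key/value count), which is why a "remote" parser can work on hosted files: it only needs to fetch the first bytes, e.g. via an HTTP Range request, rather than download the whole multi-gigabyte model. A minimal sketch of that header parse in plain Node.js, independent of any of the listed packages (the function name and demo buffer are illustrative, not any package's API):

```javascript
// Parse the 24-byte fixed GGUF header per the public GGUF spec:
// bytes 0-3  : magic "GGUF"
// bytes 4-7  : uint32 version (little-endian)
// bytes 8-15 : uint64 tensor count
// bytes 16-23: uint64 metadata key/value count
function parseGgufHeader(buf) {
  const view = new DataView(buf.buffer, buf.byteOffset, buf.byteLength);
  const magic = buf.toString("ascii", 0, 4);
  if (magic !== "GGUF") throw new Error(`not a GGUF file (magic: ${magic})`);
  return {
    version: view.getUint32(4, true),           // true = little-endian
    tensorCount: view.getBigUint64(8, true),
    metadataKvCount: view.getBigUint64(16, true),
  };
}

// Demo with a hand-built header (version 3, 2 tensors, 5 metadata pairs):
const demo = Buffer.alloc(24);
demo.write("GGUF", 0, "ascii");
demo.writeUInt32LE(3, 4);
demo.writeBigUInt64LE(2n, 8);
demo.writeBigUInt64LE(5n, 16);

console.log(parseGgufHeader(demo));
// → { version: 3, tensorCount: 2n, metadataKvCount: 5n }
```

A remote variant would obtain the same bytes with something like `fetch(url, { headers: { Range: "bytes=0-23" } })` and parse the response body the same way; the metadata key/value section that follows the header (model architecture, tokenizer, etc.) is length-prefixed and can likewise be read incrementally.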