Search results
11 packages found
Sort by: Default
Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level
- llama
- llama-cpp
- llama.cpp
- bindings
- ai
- cmake
- cmake-js
- prebuilt-binaries
- llm
- gguf
- metal
- cuda
- vulkan
- grammar
A GGUF parser that works on remotely hosted files
llama.cpp GGUF file parser for JavaScript
Various utilities for maintaining Ollama compatibility with models on Hugging Face hub
Lightweight JavaScript package for running GGUF language models
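Several of the packages above center on reading GGUF files. As a rough illustration of what such a parser does, here is a minimal sketch in plain Node.js based on the public GGUF format (4-byte magic, then little-endian version, tensor count, and metadata key-value count); the function and field names here are illustrative, not the API of any listed package.

```javascript
// Minimal GGUF header parse, per the GGUF spec (v3):
// bytes 0-3:  ASCII magic "GGUF"
// bytes 4-7:  uint32 version (little-endian)
// bytes 8-15: uint64 tensor_count
// bytes 16-23: uint64 metadata_kv_count
function parseGgufHeader(buf) {
  if (buf.length < 24) throw new Error("buffer too small for a GGUF header");
  const magic = buf.toString("ascii", 0, 4);
  if (magic !== "GGUF") throw new Error(`not a GGUF file (magic: ${magic})`);
  return {
    version: buf.readUInt32LE(4),
    tensorCount: buf.readBigUInt64LE(8),
    metadataKvCount: buf.readBigUInt64LE(16),
  };
}

// Synthetic header for demonstration: version 3, 2 tensors, 5 metadata pairs.
const demo = Buffer.alloc(24);
demo.write("GGUF", 0, "ascii");
demo.writeUInt32LE(3, 4);
demo.writeBigUInt64LE(2n, 8);
demo.writeBigUInt64LE(5n, 16);

console.log(parseGgufHeader(demo));
// → { version: 3, tensorCount: 2n, metadataKvCount: 5n }
```

A remote-capable parser (like the one described above) applies the same logic to bytes fetched with an HTTP Range request instead of a local read, so only the header and metadata are downloaded, not the full model.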
Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level
- llama
- llama-cpp
- llama.cpp
- bindings
- ai
- cmake
- cmake-js
- prebuilt-binaries
- llm
- gguf
- metal
- cuda
- vulkan
- grammar
Chat UI and Local API for the Llama models
Run AI models locally on your machine with Node.js bindings for llama.cpp. Force a JSON schema on the model output at the generation level
- llama
- llama-cpp
- llama.cpp
- bindings
- ai
- cmake
- cmake-js
- prebuilt-binaries
- llm
- gguf
- metal
- cuda
- grammar
- json-grammar
A browser-friendly library for running LLM inference using Wllama with preset and dynamic model loading, caching, and download capabilities.
A GGUF parser that works on remotely hosted files
Native Node.js plugin to run LLaMA inference directly on your machine with no other dependencies.