
llama.cpp 4143.0

LLM inference in C/C++.

Description

The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide range of hardware, both locally and in the cloud.
