Package Overview

localLLM-package, localLLM
R Interface to llama.cpp with Runtime Library Loading

Quick Start

Simple functions to get started quickly

quick_llama()
Quick LLaMA Inference
quick_llama_reset()
Reset quick_llama state
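A minimal sketch of the one-call workflow these two functions suggest. The prompt text and the assumption that quick_llama() returns the generated string are illustrative, not the documented signature:

```r
library(localLLM)

# quick_llama() bundles model setup and inference into one call
# (assumed behavior based on the function's title above).
answer <- quick_llama("Summarize the plot of Hamlet in one sentence.")
print(answer)

# Clear quick_llama's cached state, e.g. before switching models.
quick_llama_reset()
```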

Core Functions

Main text generation and annotation functions

generate()
Generate Text Using Language Model Context
generate_parallel()
Generate Text in Parallel for Multiple Prompts
explore()
Compare multiple LLMs over a shared set of prompts
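A hedged sketch of how the core generation functions might fit together, assuming generate() consumes a context produced by the Model Management functions below; argument names are guesses, not the package's documented interface:

```r
library(localLLM)

# Assumed pipeline: load a model, build an inference context, generate.
model <- model_load("path/to/model.gguf")   # hypothetical local path
ctx   <- context_create(model)
text  <- generate(ctx, "Classify the sentiment of: 'Great product!'")

# generate_parallel() is documented for multiple prompts at once.
prompts <- c("Label: 'I love it'", "Label: 'Terrible service'")
results <- generate_parallel(ctx, prompts)
```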

Model Management

Functions for loading and managing models

model_load()
Load Language Model with Automatic Download Support
context_create()
Create Inference Context for Text Generation
download_model()
Download a model manually
list_cached_models()
List cached models on disk
list_ollama_models()
List GGUF models managed by Ollama
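The model-management helpers might be combined as below; the URL placeholder and argument names are assumptions for illustration only:

```r
library(localLLM)

# Inspect models already available locally.
list_cached_models()   # models in the package's cache on disk
list_ollama_models()   # GGUF files managed by a local Ollama install

# Manually fetch a model, then load it (hypothetical URL placeholder).
path  <- download_model("<your-model-url>")
model <- model_load(path)   # model_load() also supports automatic download
ctx   <- context_create(model)
```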

Backend Setup

Installation and backend management

install_localLLM()
Install localLLM Backend Library
backend_init()
Initialize localLLM backend
backend_free()
Free localLLM backend
lib_is_installed()
Check if Backend Library is Installed
get_lib_path()
Get Backend Library Path
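A plausible one-time setup sequence implied by these function names; the exact lifecycle (when init/free are required) is an assumption:

```r
library(localLLM)

# Install the llama.cpp backend library once, if missing.
if (!lib_is_installed()) {
  install_localLLM()
}
get_lib_path()   # filesystem location of the installed backend

backend_init()   # initialize the backend before inference
# ... load models and generate text ...
backend_free()   # release backend resources when finished
```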

Tokenization

Token manipulation functions

tokenize()
Convert Text to Token IDs
detokenize()
Convert Token IDs Back to Text
tokenize_test()
Test tokenize function (debugging)
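A round-trip sketch of the tokenization pair; whether these functions take a loaded model as their first argument is an assumption:

```r
library(localLLM)

model <- model_load("path/to/model.gguf")  # hypothetical local path

# Text -> integer token IDs, then back again.
ids <- tokenize(model, "Hello, world!")
txt <- detokenize(model, ids)   # should recover the original text
```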

Chat Templates

Functions for formatting chat conversations

apply_chat_template()
Apply Chat Template to Format Conversations
smart_chat_template()
Smart Chat Template Application
apply_gemma_chat_template()
Apply Gemma-Compatible Chat Template
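The message structure below is an assumption modeled on common chat APIs, not the package's documented format:

```r
library(localLLM)

# A conversation as a list of role/content messages (assumed shape).
messages <- list(
  list(role = "system", content = "You are a careful annotator."),
  list(role = "user",   content = "Label the topic of this headline.")
)

# Format the conversation into a single prompt string.
prompt <- apply_chat_template(model, messages)

# smart_chat_template() is documented to choose a template automatically;
# apply_gemma_chat_template() targets Gemma-style models specifically.
```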

Reliability & Validation

Annotation reliability and validation tools

intercoder_reliability()
Intercoder reliability for LLM annotations
compute_confusion_matrices()
Compute confusion matrices from multi-model annotations
validate()
Validate model predictions against gold labels and peer agreement
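A hypothetical shape for multi-model annotation data feeding these functions; the wide one-column-per-model layout and the `gold` argument are illustrative guesses:

```r
library(localLLM)

# Labels from two models on the same three items (toy data).
ann <- data.frame(
  id      = 1:3,
  model_a = c("pos", "neg", "pos"),
  model_b = c("pos", "neg", "neg")
)

intercoder_reliability(ann)       # agreement across model annotators
compute_confusion_matrices(ann)   # pairwise confusion matrices

# validate() additionally compares against gold labels (assumed usage):
# validate(ann, gold = c("pos", "neg", "pos"))
```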

Utilities

Helper functions and utilities

annotation_sink_csv()
Create a CSV sink for streaming annotation chunks
hardware_profile()
Inspect detected hardware resources
get_model_cache_dir()
Get the model cache directory
set_hf_token()
Configure Hugging Face access token
document_start()
Start automatic run documentation
document_end()
Finish automatic run documentation
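How the environment and bookkeeping utilities might be used around an annotation run; the token value is a placeholder and all usage is a hedged sketch:

```r
library(localLLM)

hardware_profile()      # inspect detected CPU/GPU resources
get_model_cache_dir()   # where downloaded models are stored
set_hf_token("<your-hf-token>")  # placeholder Hugging Face token

document_start()        # begin automatic run documentation
# ... annotation work, e.g. with annotation_sink_csv() streaming
#     chunks to disk ...
document_end()          # finalize the run record
```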

Data

Example datasets

ag_news_sample
AG News classification sample
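Loading the bundled example dataset follows standard R conventions:

```r
library(localLLM)

data(ag_news_sample)   # AG News classification sample
head(ag_news_sample)   # inspect the first rows
```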