Package index
- localLLM-package - R Interface to llama.cpp with Runtime Library Loading
- quick_llama() - Quick LLaMA Inference
- quick_llama_reset() - Reset quick_llama state
- generate() - Generate Text Using Language Model Context
- generate_parallel() - Generate Text in Parallel for Multiple Prompts
- explore() - Compare multiple LLMs over a shared set of prompts
- model_load() - Load Language Model with Automatic Download Support
- context_create() - Create Inference Context for Text Generation
- download_model() - Download a model manually
- list_cached_models() - List cached models on disk
- list_ollama_models() - List GGUF models managed by Ollama
- install_localLLM() - Install localLLM Backend Library
- backend_init() - Initialize localLLM backend
- backend_free() - Free localLLM backend
- lib_is_installed() - Check if Backend Library is Installed
- get_lib_path() - Get Backend Library Path
- tokenize() - Convert Text to Token IDs
- detokenize() - Convert Token IDs Back to Text
- tokenize_test() - Test tokenize function (debugging)
- apply_chat_template() - Apply Chat Template to Format Conversations
- smart_chat_template() - Smart Chat Template Application
- apply_gemma_chat_template() - Apply Gemma-Compatible Chat Template
- intercoder_reliability() - Intercoder reliability for LLM annotations
- compute_confusion_matrices() - Compute confusion matrices from multi-model annotations
- validate() - Validate model predictions against gold labels and peer agreement
- annotation_sink_csv() - Create a CSV sink for streaming annotation chunks
- hardware_profile() - Inspect detected hardware resources
- get_model_cache_dir() - Get the model cache directory
- set_hf_token() - Configure Hugging Face access token
- document_start() - Start automatic run documentation
- document_end() - Finish automatic run documentation
- ag_news_sample - AG News classification sample data
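Read together, the index suggests a session that moves from backend installation to model loading to generation. The sketch below strings the indexed functions into one plausible workflow; the argument names and the model URL are illustrative assumptions, not documented signatures, so consult each function's help page (e.g. `?model_load`) before use.

```r
# Minimal end-to-end sketch. Argument names and the model URL are
# assumptions for illustration -- check each help page for real signatures.
library(localLLM)

install_localLLM()   # one-time: install the llama.cpp backend library
backend_init()       # load the runtime library into this session

# model_load() advertises automatic download support, so a GGUF URL or a
# cached local path is presumably accepted (hypothetical URL below).
model <- model_load("https://example.com/model.gguf")
ctx   <- context_create(model)

out <- generate(ctx, "Summarize llama.cpp in one sentence.")
print(out)

backend_free()       # release the backend when finished
```

For quick experiments, `quick_llama()` appears to wrap these steps into a single call, with `quick_llama_reset()` clearing its cached state between runs.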