Local Language Model Inference



Documentation for package ‘edgemodelr’ version 0.1.0

Help Pages

build_chat_prompt        Build chat prompt from conversation history
edge_chat_stream         Interactive chat session with streaming responses
edge_clean_cache         Clean up cache directory and manage storage
edge_completion          Generate text completion using loaded model
edge_download_model      Download a GGUF model from Hugging Face
edge_free_model          Free model context and release memory
edge_list_models         List popular pre-configured models
edge_load_model          Load a local GGUF model for inference
edge_quick_setup         Quick setup for a popular model
edge_set_verbose         Control llama.cpp logging verbosity
edge_stream_completion   Stream text completion with real-time token generation
is_valid_model           Check if model context is valid
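
Taken together, the functions above suggest a load/generate/free lifecycle. The sketch below shows how such a workflow might be composed; only the function names come from this index, so the argument names, the model identifier, and the returned fields are assumptions, not documented API.

```r
## Hypothetical workflow sketch for edgemodelr.
## Only the function names are from the package index; argument names,
## the model id, and return-value fields are illustrative guesses.
library(edgemodelr)

edge_set_verbose(FALSE)        # quiet llama.cpp logging
edge_list_models()             # browse pre-configured models

## Quick path: download a popular model and get a ready context
setup <- edge_quick_setup("llama3.2-1b")   # model id is illustrative
ctx <- setup$context

## Manual path: load a GGUF file you already have
## ctx <- edge_load_model("path/to/model.gguf")

if (is_valid_model(ctx)) {
  ## One-shot completion
  out <- edge_completion(ctx, prompt = "Explain GGUF in one sentence.")
  cat(out)

  ## Streaming completion, emitting tokens as they are generated
  edge_stream_completion(ctx, "Tell me a short story.",
                         callback = function(token) cat(token))
}

## Release the model context and its memory when done
edge_free_model(ctx)
```

Consult each function's individual help page (e.g. `?edge_load_model`) for the actual signatures before relying on this sketch.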