Generate Text in Parallel for Multiple Prompts

Usage

generate_parallel(
  context,
  prompts,
  max_tokens = 100L,
  top_k = 40L,
  top_p = 1,
  temperature = 0,
  repeat_last_n = 0L,
  penalty_repeat = 1,
  seed = 1234L,
  progress = FALSE,
  clean = FALSE,
  hash = TRUE
)

Arguments

context

A context object created with context_create.

prompts

Character vector of input text prompts.

max_tokens

Maximum number of tokens to generate (default: 100).

top_k

Top-k sampling parameter (default: 40). Limits the candidate vocabulary to the k most likely tokens.

top_p

Top-p (nucleus) sampling parameter (default: 1.0). Cumulative probability threshold for token selection.

temperature

Sampling temperature (default: 0.0). Set to 0 for greedy (deterministic) decoding; higher values increase randomness in the output.

repeat_last_n

Number of recent tokens to consider for the repetition penalty (default: 0). Set to 0 to disable.

penalty_repeat

Repetition penalty strength (default: 1.0). Values greater than 1 discourage repetition; set to 1.0 to disable.

seed

Random seed for reproducible generation (default: 1234). Use a positive integer for deterministic output.

progress

If TRUE, displays a console progress bar indicating batch completion status while generations are running (default: FALSE).

clean

If TRUE, remove common chat-template control tokens from each generated text (default: FALSE).

hash

When TRUE (default), computes SHA-256 hashes of the supplied prompts and the generated outputs. The hashes are attached to the result as the "hashes" attribute for later inspection.

Value

A character vector of generated texts, one element per prompt.
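
Examples

A minimal usage sketch. The model file path is illustrative, and it is assumed here that context_create accepts a path to a local model file; adapt both to your setup.

```r
# Hypothetical model path -- replace with a real model file on your machine.
ctx <- context_create("models/my-model.gguf")

prompts <- c(
  "Summarise the plot of Hamlet in one sentence.",
  "List three uses for a paperclip."
)

# Greedy, reproducible generation for both prompts in one batched call.
out <- generate_parallel(
  context    = ctx,
  prompts    = prompts,
  max_tokens = 64L,
  clean      = TRUE,   # strip chat-template control tokens
  progress   = TRUE    # show a batch progress bar
)

out[1]               # generated text for the first prompt
attr(out, "hashes")  # SHA-256 hashes recorded because hash = TRUE by default
```

With temperature = 0 and a fixed seed (both defaults), repeated calls on the same prompts should return identical outputs, which the "hashes" attribute makes easy to verify.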