CodBi

    Provides the AI_LLAMA_CHAT functionality.

    Initial Author: Callari, Salvatore (Callari@WaXCode.net)
    Maintainer: Callari, Salvatore (Callari@WaXCode.net)

    Methods

    • This functionality turns a set of HTML elements into a chat interface for the LLAMA model served by the CodBi. It enables interactive, multi-turn conversations about uploaded images and PDF documents, gives the model internet query access via the Brave Search API, and can supply the client's location via the Geolocation API. Voice input is supported via the Media.Input.Speech.Whisper-Functionality.

      If no model is specified, QWEN3-VL 2B is downloaded and used.

      Required Elements (found by CSS class within the nearest common ancestor):

      CSS Class | Element | Purpose
      (the class tagged with this functionality) | textarea | Chat display (read-only conversation history)
      AI_LLAMA_CHAT_Input | input type="text" or textarea | Text input where the user types messages
      AI_LLAMA_CHAT_Send | button | Send button (triggers inference)
      AI_LLAMA_CHAT_Stop | button | Stop button (aborts running inference)
      AI_LLAMA_CHAT_Upload (Optional) | input type="file" | File upload for images/PDFs to chat about
      AI_LLAMA_CHAT_Thinking (Optional) | input type="checkbox" | Toggles thinking mode (chain-of-thought) on/off
      AI_LLAMA_CHAT_Internet (Optional) | input type="checkbox" | Toggles internet search availability on/off
      AI_LLAMA_CHAT_Location (Optional) | input type="checkbox" | Toggles geolocation (get_current_location) on/off
      AI_LLAMA_CHAT_MailForward (Optional) | input type="checkbox" | Toggles auto-forward of every AI response via email
      AI_LLAMA_CHAT_MailAddress (Optional) | input type="text" or input type="email" | Email address for auto-forwarding (shown when the checkbox is checked)
      AI_LLAMA_CHAT_AlertOnFinish (Optional) | input type="checkbox" | Toggles an alert when inference finishes
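
      The element layout above can be sketched as follows. Assumption: the display textarea is tagged with a class named AI_LLAMA_CHAT, inferred from the AI_LLAMA_CHAT_* prefixes; the actual trigger class may differ.

```typescript
// Minimal markup sketch of the required and optional elements listed above,
// grouped under one common ancestor so the functionality can find them by class.
// The AI_LLAMA_CHAT class on the textarea is an assumption (see lead-in).
const chatMarkup = `
<div>
  <textarea class="AI_LLAMA_CHAT"></textarea>
  <input class="AI_LLAMA_CHAT_Input" type="text">
  <button class="AI_LLAMA_CHAT_Send">Send</button>
  <button class="AI_LLAMA_CHAT_Stop">Stop</button>
  <input class="AI_LLAMA_CHAT_Upload" type="file" multiple>
  <label><input class="AI_LLAMA_CHAT_Thinking" type="checkbox"> Thinking</label>
  <label><input class="AI_LLAMA_CHAT_Internet" type="checkbox"> Internet</label>
</div>`;
```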

      Generated CSS Classes (injected at runtime):

      CSS Class | Element | Purpose
      LLAMA_Chat_Container | div | Scrollable chat wrapper replacing the hidden textarea
      LLAMA_Chat_Row | div | Flex row holding a single bubble
      LLAMA_Chat_Row--user | div | Row modifier: right-aligned (user message)
      LLAMA_Chat_Row--llama | div | Row modifier: left-aligned (Llama response)
      LLAMA_Chat_Row--system | div | Row modifier: centered (system/info messages)
      LLAMA_Chat_Bubble | div | Base speech-bubble styling (padding, border-radius, shadow)
      LLAMA_Chat_Bubble--user | div | User bubble colors (background via --user-bubble-bg)
      LLAMA_Chat_Bubble--llama | div | Llama bubble colors (background via --llama-bubble-bg)
      LLAMA_Chat_Bubble--system | div | System bubble: transparent, italic, muted
      LLAMA_Chat_Bubble--thinking | div | Temporary "thinking" indicator (dimmed, italic)
      LLAMA_Chat_Bubble--error | div | Error bubble: red-tinted background
      LLAMA_Chat_AiHint | span | Small "AI-Generated" label inside an AI bubble
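
      Since the user and Llama bubble backgrounds are read from the custom properties named above, they can be themed with a plain stylesheet. A sketch (the color values are the documented defaults; the selectors are assumed to be sufficient scope for the variables):

```typescript
// Theming sketch: override the bubble background custom properties from the
// table above. This string would be injected via a <style> element.
const bubbleTheme = `
.LLAMA_Chat_Bubble--user  { --user-bubble-bg:  #0b93f6; }
.LLAMA_Chat_Bubble--llama { --llama-bubble-bg: #e5e5ea; }
`;
```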

      Behavior:

      • The display textarea is made read-only and shows the full conversation history.
      • When files are selected via the upload input, they are attached for subsequent messages.
      • When the user clicks Send (or presses CTRL+Enter in the input), the message and any attached files are sent to the standard backend for processing by the AI model. The response is displayed in the chat.
      • PDF files are automatically detected and processed (rendered to images or extracted).
      • Multiple files can be attached; each is processed independently by the model.
      • The send button and input are disabled during inference to prevent duplicate requests.
      Settings:

      • MaxPages: Maximum number of PDF pages to process (default: 5).
      • Rotation: Image rotation in degrees (90, 180, or 270). If the image is known to be rotated, set this so that neither Tesseract OSD (if available) nor the AI has to detect the rotation, which speeds up inference. Leaving it unset or setting it to 0 means the rotation is unknown.
      • MaxPixelSize: Maximum total pixel budget (width×height). Images exceeding this are downscaled client-side while preserving the aspect ratio. Default: 3211264 (≈ 1792×1792). Set to 0 to disable client-side downscaling.
      • LLAMABubble: Background color for Llama (AI) bubbles (default: #e5e5ea).
      • UserBubble: Background color for user bubbles (default: #0b93f6).
      • WelcomeText: Text shown after the model name(s) in the ready message (default: "Chat ready. Attach file(s) and type your question.").
      • VoiceHotkey: Keyboard shortcut to toggle voice input, e.g. "Alt+A" (default: "Alt+A"). Format: modifier(s) + key separated by +. Recognized modifiers: Alt, Ctrl, Shift, Meta. The key part is case-insensitive.
      • VoicePlaceholder: Placeholder text shown in the chat input when voice input is available. Default: "Alt+A = 🎙 on/off | Alt+Q = 🎙 off + send" (reflects the configured hotkeys).
      • VoiceSendHotkey: Keyboard shortcut to stop recording and send, e.g. "Alt+Q" (default: "Alt+Q"). Same modifier format as VoiceHotkey.
      • Language: Language code for Whisper speech-to-text (e.g. "de", "en"). Empty or unset means auto-detect.
      • WaitingText: Text shown while waiting for the AI server to become available (default: "Waiting for AI server…").
      • LowConfidenceText: Warning text shown when the AI response has low confidence (default: "Low Confidence").
      • RethinkButtonText: Button label offering to re-answer with the thinking model (default: "Rethink").
      • UncertainText: Tooltip text shown when hovering over uncertain (low-confidence) tokens (default: "Low confidence").
      • ShowUncertainTokens: Whether to visually highlight uncertain tokens in AI responses. Set to "false" to disable highlighting (default: "true").
      • ResponseLanguage: Two-letter ISO 639-1 code (e.g. "de", "fr"). When set, the AI is forced to respond in this language — no auto-detection is performed. Overrides the AI_LLAMA_STD_Language plugin property for this instance. The chat interface reflects this language for labels where available.
      • Specialist: Name of a specialist model registered via AI_LLAMA_STD_SPECIALIST_XXX plugin property. When set, requests are routed to that specialist's dedicated server instance (case-insensitive match).
      • QueueBadge: If set to "true", shows a badge with the current queue position while waiting for inference. Overrides the AI_QueueBadge plugin property for this instance. Default: determined by plugin property.
      • QueueText: Text appended after the queue position number in the badge (e.g. "in queue" → badge shows "3 in queue"). Default: empty.
      • FilterResults: If set to "true", enables PII filtering on Brave Search queries for this instance, overriding the global AI_BraveSearch_FilterResults plugin property. Default: determined by plugin property.
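
      Two of the settings above can be sketched in code: the VoiceHotkey "modifier(s) + key" format and the MaxPixelSize aspect-ratio-preserving downscale. The function names are illustrative, not the library's API:

```typescript
// Parse a hotkey spec such as "Alt+A" into its modifiers and a
// case-insensitive key, as the VoiceHotkey format describes.
function parseHotkey(spec: string): {
  alt: boolean; ctrl: boolean; shift: boolean; meta: boolean; key: string;
} {
  const parts = spec.split("+").map((p) => p.trim().toLowerCase());
  const key = parts[parts.length - 1];          // last part is the key
  const mods = new Set(parts.slice(0, -1));     // everything before it
  return {
    alt: mods.has("alt"),
    ctrl: mods.has("ctrl"),
    shift: mods.has("shift"),
    meta: mods.has("meta"),
    key,
  };
}

// Shrink dimensions so width*height stays within the MaxPixelSize budget,
// preserving aspect ratio; a budget of 0 disables downscaling.
function fitToPixelBudget(
  width: number, height: number, maxPixels: number
): { width: number; height: number } {
  if (maxPixels <= 0 || width * height <= maxPixels) return { width, height };
  const scale = Math.sqrt(maxPixels / (width * height));
  return { width: Math.floor(width * scale), height: Math.floor(height * scale) };
}
```

      With the default budget of 3211264 pixels, a 3584×3584 image would be scaled by 0.5 to 1792×1792.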

      Parameters

      • toLoad: { [key: string]: unknown }

        Provided by the CodBi.

      • toProcess: Element

        Provided by the CodBi.

      Returns void
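
      Taken together, the parameters above suggest an entry point of roughly this shape. The function name is illustrative; CodBi supplies both arguments itself when it applies the functionality to a tagged element:

```typescript
// Illustrative signature sketch only: CodBi provides `toLoad` (options)
// and `toProcess` (the tagged DOM element) when invoking the functionality.
function aiLlamaChat(
  toLoad: { [key: string]: unknown },
  toProcess: Element
): void {
  // ...wire up the chat UI around `toProcess` using the options in `toLoad`
}
```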