LLMs
This document outlines the process of integrating Large Language Models (LLMs) into your Frontman application using the provided interface.
Steps to Set Up an LLM Connection
1. Select Bot
Select the name of a bot you created earlier; the LLM configuration you define on this screen will be applied to that bot.
2. Choose a Large Language Model (LLM) Provider
OpenAI (Default): The pre-selected provider. OpenAI is a leading AI research company offering a range of LLMs through its API.
Anthropic (Coming Soon): Anthropic is another research company developing LLMs, but their access is not yet available in this interface.
Google (Coming Soon): Google AI offers various LLM models, but their access is not yet available in this interface.
Meta (Coming Soon): Meta AI also offers LLM models, but their access is not yet available in this interface.
Mistral (Coming Soon): Mistral is an LLM provider, but their access is not yet available in this interface.
Perplexity (Coming Soon): Perplexity is an LLM provider, but their access is not yet available in this interface.
Together AI (Coming Soon): Together AI is an LLM provider, but their access is not yet available in this interface.
Groq (Coming Soon): Groq is an LLM provider, but their access is not yet available in this interface.
MosaicML (Coming Soon): MosaicML offers LLM access, but their integration is not yet available in this interface.
Replicate (Coming Soon): Replicate is a platform for deploying machine learning models, including LLMs, but their integration is not yet available in this interface.
HuggingFace (Coming Soon): HuggingFace is a popular hub for sharing pre-trained models, including LLMs, but their integration is not yet available in this interface.
Note: Currently, OpenAI is the only supported provider. Other providers will be available in the future.
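The selection above boils down to a small piece of connection configuration. The sketch below is illustrative only: the field names (`bot`, `provider`) and the `make_llm_config` helper are assumptions for the example, not Frontman's actual schema.

```python
# Illustrative sketch of the connection settings this screen produces.
# Field names are assumptions, not Frontman's real schema.
SUPPORTED_PROVIDERS = {"OpenAI"}  # others are still "Coming Soon"

def make_llm_config(bot_name: str, provider: str = "OpenAI") -> dict:
    """Build a connection config, rejecting providers not yet available."""
    if provider not in SUPPORTED_PROVIDERS:
        raise ValueError(f"{provider} is not yet available; only OpenAI is supported")
    return {"bot": bot_name, "provider": provider}

config = make_llm_config("support-bot")
```

Selecting any "Coming Soon" provider would simply be rejected until its integration ships.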
3. Select your LLM Model
This section allows you to choose the specific LLM model you want to use from the selected provider. Here's an explanation of the currently available OpenAI models:
gpt-4o-mini: A smaller, faster version of gpt-4o, trading some capability for significantly lower cost.
gpt-3.5-turbo-1106: A dated snapshot of gpt-3.5-turbo (November 2023), useful when you need consistent behavior across model updates.
gpt-3.5-turbo: The general-purpose gpt-3.5 model, a fast and inexpensive option for most conversational tasks.
gpt-4o: OpenAI's flagship model, offering the most advanced capabilities of the four at a higher cost.
Choosing the right model depends on your specific needs. Consider factors like the complexity of your tasks, budget constraints, and desired level of accuracy.
How to Configure LLM Parameters
Temperature (Top-P):
Definition: Controls the randomness of the generated text.
Range: 0.0 - 1.0
Default: 0.70
Behavior: Higher values lead to more creative and diverse outputs, while lower values produce more focused and deterministic text.
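Although the UI groups this setting with Top-P, the mechanism behind the behavior described above is temperature scaling: logits are divided by the temperature before the softmax, so low values sharpen the token distribution and high values flatten it. A minimal sketch:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by temperature, then softmax.
    Low temperature -> sharper (more deterministic) distribution;
    high temperature -> flatter (more diverse) distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
sharp = softmax_with_temperature(logits, 0.2)  # top token dominates
flat = softmax_with_temperature(logits, 1.5)   # probability mass spreads out
```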
Top-K:
Definition: Determines the number of most probable tokens considered at each step.
Range: Positive integer
Default: 10
Behavior: Higher values increase diversity, while lower values focus on more likely tokens.
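Top-K sampling can be sketched directly: keep only the k most probable tokens and renormalize before sampling. The token probabilities below are invented for the example.

```python
def top_k_filter(token_probs: dict, k: int) -> dict:
    """Keep the k most probable tokens and renormalize their probabilities."""
    top = sorted(token_probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(p for _, p in top)
    return {tok: p / total for tok, p in top}

probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "zebra": 0.05}
filtered = top_k_filter(probs, 2)  # only "the" and "a" survive
```

With k=2, unlikely tokens like "zebra" can never be sampled; raising k lets them back in, increasing diversity.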
Confidence Score:
Definition: A numerical representation of the model's certainty about the correctness of its generated output.
Range: 0.0 - 1.0
Default: 0.40
Behavior: Higher values require the model to be more certain before an answer is accepted; responses scoring below the threshold can be rejected or routed to a fallback.
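In practice this works as a threshold check. The sketch below is an assumption about how such a gate behaves, with a hypothetical fallback of returning `None` (e.g. to trigger a human handoff):

```python
def answer_if_confident(answer: str, score: float, threshold: float = 0.40):
    """Return the answer only when the model's confidence clears the threshold.
    The None fallback is illustrative; a real bot might escalate to a human."""
    return answer if score >= threshold else None

answer_if_confident("Our plans start at $10/month.", score=0.85)  # accepted
answer_if_confident("Our plans start at $10/month.", score=0.20)  # rejected
```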
Choose Memory:
Definition: Stores previous conversation context.
Conversation Buffer Window Memory: Keeps only the most recent exchanges, up to the window size you set; older turns are dropped.
Conversation Buffer Memory: Stores the entire conversation, up to the total buffer size.
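The windowed variant can be sketched with a fixed-size queue: once the window is full, adding a new exchange evicts the oldest one. This is a minimal illustration, not Frontman's implementation.

```python
from collections import deque

class BufferWindowMemory:
    """Keeps only the last `window` exchanges; older turns are dropped."""
    def __init__(self, window: int):
        self.turns = deque(maxlen=window)  # deque evicts oldest automatically
    def add(self, user: str, bot: str):
        self.turns.append((user, bot))
    def context(self):
        return list(self.turns)

mem = BufferWindowMemory(window=2)
mem.add("Hi", "Hello!")
mem.add("What are your prices?", "See our plans page.")
mem.add("Thanks", "Anytime.")  # evicts the oldest turn ("Hi")
```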
Context Window:
Definition: Maximum input length the model can process.
Default: 4096 tokens (adjust based on model capabilities)
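The practical consequence of the context window is a token budget: prompt plus conversation history must leave room for the model's reply. The check below is a rough sketch; the 512-token output reserve is an assumption for the example.

```python
def fits_context_window(prompt_tokens: int, history_tokens: int,
                        context_window: int = 4096,
                        reserve_for_output: int = 512) -> bool:
    """Rough budget check: input + history must leave room for the reply.
    The output reserve is an illustrative assumption."""
    return prompt_tokens + history_tokens + reserve_for_output <= context_window

fits_context_window(1000, 2000)  # 3512 tokens total: fits in 4096
fits_context_window(1000, 3000)  # 4512 tokens total: does not fit
```

When the budget is exceeded, older history must be trimmed or the input split, which is exactly what the text-splitting options below are for.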
Text Splitting
Recursive Text Splitter:
Definition: Splits text recursively based on sentence boundaries or other criteria.
Chunk length: Maximum length of a chunk.
Chunk overlap: Number of characters shared between adjacent chunks, which preserves context across chunk boundaries.
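A simplified recursive splitter can be sketched as follows: try to break at the coarsest separator that fits inside the chunk length, fall back to finer ones, and hard-cut only as a last resort, carrying an overlap into the next chunk. This is a rough illustration of the idea, not Frontman's implementation.

```python
def recursive_split(text, chunk_len, overlap,
                    separators=("\n\n", "\n", ". ", " ")):
    """Simplified recursive splitter: prefer natural boundaries, then
    hard-cut; each new chunk starts `overlap` characters early."""
    if len(text) <= chunk_len:
        return [text]
    for sep in separators:
        idx = text.rfind(sep, 0, chunk_len)
        if idx > 0:
            cut = idx + len(sep)
            tail = text[max(cut - overlap, 1):]  # re-include overlap chars
            return [text[:cut]] + recursive_split(tail, chunk_len, overlap, separators)
    # no separator fits inside chunk_len: hard character cut
    return [text[:chunk_len]] + recursive_split(
        text[chunk_len - overlap:], chunk_len, overlap, separators)

text = "Sentence one. Sentence two. Sentence three."
chunks = recursive_split(text, chunk_len=20, overlap=5)
```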
Character Text Splitter:
Definition: Splits text into fixed-size character chunks.
Chunk length: Length of each character chunk.
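The character splitter is simpler: fixed-size slices with no awareness of sentence boundaries. A one-line sketch:

```python
def character_split(text: str, chunk_len: int):
    """Split text into fixed-size character chunks; the last may be shorter."""
    return [text[i:i + chunk_len] for i in range(0, len(text), chunk_len)]

character_split("abcdefghij", 4)  # -> ["abcd", "efgh", "ij"]
```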
Max Tokens:
Definition: Maximum number of tokens allowed in an input.
Options: User defined, model defined (with default value)
Guardrails: Overview of safety measures implemented to prevent harmful or biased outputs.
Stream Data: Explanation of streaming capabilities (real-time generation).
Citations: How the model handles citations and references.
Domain Restricted: Options for restricting the model's knowledge to a specific domain.
Streaming Speed: Controls the pace of text generation; choose slow, medium, or fast to match how your users read.
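A streamed reply is just the response emitted in small chunks, with the speed setting controlling the pause between them. In the sketch below, the delay values and word-by-word chunking are assumptions for illustration, not Frontman's actual behavior.

```python
# Hypothetical mapping from the UI's speed options to a per-chunk delay (seconds).
STREAM_DELAYS = {"slow": 0.12, "medium": 0.06, "fast": 0.02}

def stream_reply(text: str, speed: str = "medium"):
    """Yield the reply word by word; a caller would sleep
    STREAM_DELAYS[speed] between chunks to pace the output."""
    if speed not in STREAM_DELAYS:
        raise ValueError(f"speed must be one of {sorted(STREAM_DELAYS)}")
    for word in text.split():
        yield word

chunks = list(stream_reply("Hello, how can I help you today?", speed="fast"))
```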
Reset Settings: Restores the default settings or a previously saved custom configuration.