The AI Engine (LLM) settings in Invicta AI’s platform allow you to control the underlying parameters of the GPT-3.5 and GPT-4 models. These settings include:

1. Temperature

The temperature parameter controls the randomness of the AI-generated responses. A higher temperature value (e.g., 0.8) produces more diverse and creative responses, while a lower value (e.g., 0.2) generates more focused and conservative responses.

Example: Suppose you have an AI agent for a creative writing assistant. Setting a higher temperature value can help generate imaginative and varied story ideas, while a lower temperature value can provide more structured and refined suggestions.
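Invicta AI exposes these settings directly in the agent configuration UI, so no code is required. Purely to illustrate what the parameter does at the API level, here is a minimal sketch using the OpenAI Python SDK (not Invicta AI's own interface; the `story_ideas` helper and the prompt are hypothetical):

```python
# Minimal illustrative sketch using the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def story_ideas(prompt: str, temperature: float) -> str:
    """Hypothetical helper: same prompt, different temperature."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # 0.0-2.0; higher values sample more randomly
    )
    return response.choices[0].message.content

prompt = "Suggest three story ideas about a lighthouse keeper."
imaginative = story_ideas(prompt, temperature=0.8)  # diverse, creative ideas
structured = story_ideas(prompt, temperature=0.2)   # focused, conservative ideas
```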

2. Top p

The top p parameter, also known as nucleus sampling, controls which tokens are eligible for selection during text generation. The model samples only from the smallest set of most probable tokens whose cumulative probability exceeds the top p value; everything outside that set is discarded.

Example: In customer support scenarios, using a lower top p value (e.g., 0.5) restricts sampling to the highest-probability tokens, keeping responses focused and predictable and reducing the risk of potentially incorrect or nonsensical answers.
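A minimal sketch of the same setting at the API level, again using the OpenAI Python SDK for illustration (the support prompt is hypothetical):

```python
from openai import OpenAI

client = OpenAI()

# Nucleus sampling: only tokens in the smallest set whose cumulative
# probability exceeds top_p are eligible to be sampled.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "How do I reset my account password?"}],
    top_p=0.5,  # 0.0-1.0; lower values keep only high-probability tokens
)
print(response.choices[0].message.content)
```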

3. Frequency Penalty

The frequency penalty parameter discourages repetition in the generated text. It penalizes each token in proportion to how many times it has already appeared in the response, making the model less likely to repeat the same words and phrases verbatim and more likely to produce fresh, varied output.

Example: In content writing tasks, applying a higher frequency penalty (e.g., 1.2) can help avoid excessive repetition and improve the overall quality of the generated content.
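Illustrated the same way with the OpenAI Python SDK (the content-writing prompt is hypothetical):

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Write a 100-word product description for a steel water bottle."}],
    frequency_penalty=1.2,  # -2.0 to 2.0; penalty grows with each repeat of a token
)
print(response.choices[0].message.content)
```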

4. Presence Penalty

The presence penalty parameter encourages the AI model to introduce new topics. It applies a flat, one-time penalty to any token that has already appeared in the response, regardless of how many times, nudging the model away from dwelling on subjects it has already mentioned. (By contrast, the frequency penalty grows with each repetition.)

Example: When using the AI model for summarization tasks, applying a moderate presence penalty (e.g., 0.8) discourages the model from restating the same point, nudging the summary to cover each distinct topic in the input document once rather than circling back.
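And the corresponding OpenAI Python SDK sketch (the report text is a placeholder):

```python
from openai import OpenAI

client = OpenAI()

report_text = "..."  # placeholder for the document to summarize

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": f"Summarize the key points of this report:\n\n{report_text}"}],
    presence_penalty=0.8,  # -2.0 to 2.0; flat one-time penalty per token already seen
)
print(response.choices[0].message.content)
```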

Together, these AI Engine (LLM) settings let you fine-tune the behavior and output of the models to match your specific requirements and use cases, giving you finer control over the content your agents generate.

Note: It is recommended to experiment with different combinations of these settings and evaluate their impact on the generated results, as sketched below, to find the best fit for your application.
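One simple way to run such an experiment, sketched with the OpenAI Python SDK (the prompt and the value grid are arbitrary examples; OpenAI's documentation also generally recommends tuning temperature or top p, but not both at once):

```python
from itertools import product

from openai import OpenAI

client = OpenAI()

PROMPT = "Draft a short welcome email for new users."  # arbitrary test prompt

# Sweep a few setting combinations and compare the outputs side by side.
for temperature, presence_penalty in product([0.2, 0.8], [0.0, 0.8]):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT}],
        temperature=temperature,
        presence_penalty=presence_penalty,
    )
    print(f"temperature={temperature}, presence_penalty={presence_penalty}")
    print(response.choices[0].message.content)
    print("-" * 40)
```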