Our model uses a Transformer architecture, the foundation of all currently available open and closed LLMs. This architecture enables the core function of an LLM: predicting the next output token from a user's input and, in doing so, producing natural-language output that humans can understand.
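To make the next-token mechanism concrete, the following is a minimal sketch of greedy autoregressive generation in Python. The checkpoint name "monai-base" is a hypothetical placeholder rather than Monai's actual release name; the sketch simply assumes a Hugging Face-compatible causal language model that returns logits over the vocabulary at each position.

# Minimal sketch of next-token prediction with greedy decoding.
# "monai-base" is a hypothetical checkpoint name used only for illustration;
# any Hugging Face-compatible causal language model behaves the same way.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("monai-base")   # hypothetical checkpoint
model = AutoModelForCausalLM.from_pretrained("monai-base")
model.eval()

prompt = "The Transformer architecture allows a language model to"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                                    # generate 20 tokens, one at a time
        logits = model(input_ids).logits                   # shape: [batch, seq_len, vocab_size]
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)   # most likely next token
        input_ids = torch.cat([input_ids, next_id], dim=-1)       # append and predict again

print(tokenizer.decode(input_ids[0], skip_special_tokens=True))

In practice, sampling strategies such as temperature or nucleus sampling replace the argmax step, but the loop above captures the essential idea: each new token is predicted from all the tokens generated so far.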
Monai comes with several checkpoints tailored to different uses, from general-purpose language modeling to specialized chat and instruction-following capabilities. Its versatility makes it suitable for a variety of applications, including chatbots, content generation, and complex problem-solving tasks.
Our core belief is that an LLM can only reach its full potential when it is unrestricted by censorship, as the observable changes (and decline) in the performance of GPT-3.5 and GPT-4 tend to attest, as suggested in a recent paper by researchers from Stanford University and UC Berkeley. At least one explanation for this phenomenon is the increase in guardrails and hard-coded biases constraining the LLM's capabilities.

Training steps
The first step was data collection and preparation.