AI Model Training & Testing
This page provides an overview of AI model training and testing.
AI Model Training
Which large language model/s does TrusstGPT use?
Trusst AI uses task-specific, fine-tuned proprietary models. Trusst AI continually evaluates the latest models and techniques to ensure maximum value for each use case and to optimise infrastructure costs for our customers.
Does Trusst AI have a policy on the design and use of AI/ML solutions?
Yes. The policy document is available on request.
What data sources are used for model training?
A Trusst AI proprietary dataset.
Is there a process of quality assurance (QA) in the learning process?
Yes, the process is as follows:
Human review and verification of test-set results for generated text fields, such as the long summary field.
Human labelling of extraction and classification performance for categorical fields.
Quantitative evaluation of categorical field accuracy.
This was done both on licensed datasets in a range of domains, such as telecommunications and manufacturing, and on a test set held out from the proprietary data source.
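For illustration only, the quantitative evaluation of categorical fields could be implemented along the lines of the sketch below; the field names and records are hypothetical examples, not Trusst AI's actual schema.

```python
# Illustrative sketch: per-field accuracy of model predictions against
# human-labelled records. Field names are hypothetical.
def categorical_accuracy(predictions: list[dict], labels: list[dict],
                         fields: list[str]) -> dict[str, float]:
    """Score each categorical field by exact agreement with human labels."""
    return {
        field: sum(p.get(field) == l.get(field)
                   for p, l in zip(predictions, labels)) / len(labels)
        for field in fields
    }

# Hypothetical categorical fields from a contact-centre interaction.
preds = [{"intent": "billing", "sentiment": "negative"},
         {"intent": "cancellation", "sentiment": "neutral"}]
golds = [{"intent": "billing", "sentiment": "negative"},
         {"intent": "billing", "sentiment": "neutral"}]
print(categorical_accuracy(preds, golds, ["intent", "sentiment"]))
# {'intent': 0.5, 'sentiment': 1.0}
```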
Has bias been considered in the training of the model?
Yes, bias was considered. The steps taken were explicit selection and filtering of the training data to reduce bias, together with human evaluation and labelling for safety testing of model outputs.
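A minimal sketch of what such data filtering can look like, assuming a reviewer-supplied acceptability check; the actual criteria Trusst AI applies are not described in this document.

```python
# Illustrative sketch of training-data selection and filtering to reduce bias.
# `is_acceptable` stands in for whatever check human reviewers define; the
# placeholder term list below is hypothetical.
def filter_training_data(records: list[dict], is_acceptable) -> tuple[list[dict], int]:
    """Keep records that pass the bias/safety check; report how many were dropped."""
    kept = [r for r in records if is_acceptable(r["text"])]
    return kept, len(records) - len(kept)

FLAGGED_TERMS = {"example_flagged_term"}  # placeholder, not a real blocklist
records = [{"text": "a clean training example"},
           {"text": "this one contains example_flagged_term"}]
kept, dropped = filter_training_data(records, lambda t: not any(w in t for w in FLAGGED_TERMS))
print(len(kept), dropped)  # 1 1
```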
What process is used to test AI models?
As described in the QA process above, testing consists of:
Human labelling of the test dataset
Qualitative human evaluation of performance on long text fields in the test dataset
Quantitative evaluation of categorical fields in relation to human labels
Evaluation by a separate LLM over the entire training set (see the sketch after this list).
Red-teaming is used to identify vulnerabilities and emergent risks.
A feedback mechanism within the TrusstGPT product itself lets users rate model output with a thumbs up or thumbs down, capturing further emergent issues.
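As an illustration of the LLM-based evaluation step, the sketch below has a second model grade each generated summary against its source; `call_llm` and the judge prompt are hypothetical stand-ins, not TrusstGPT's actual judge.

```python
# Illustrative sketch of "evaluation by a separate LLM": a second model grades
# each generated summary against its source transcript. `call_llm` is a
# hypothetical stand-in for whichever inference endpoint is used in practice.
JUDGE_PROMPT = (
    "You are grading a call summary. Answer PASS if the summary is faithful "
    "to the transcript, otherwise FAIL.\n\n"
    "Transcript:\n{transcript}\n\nSummary:\n{summary}\n\nVerdict:"
)

def judge_outputs(examples: list[dict], call_llm) -> float:
    """Return the fraction of generated summaries the judge model marks PASS."""
    passes = 0
    for ex in examples:
        verdict = call_llm(JUDGE_PROMPT.format(**ex))
        passes += verdict.strip().upper().startswith("PASS")
    return passes / len(examples)

# Example with a stub judge that always passes (stands in for a real endpoint):
examples = [{"transcript": "Customer asks about a refund.", "summary": "Refund request."}]
print(judge_outputs(examples, lambda prompt: "PASS"))  # 1.0
```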
Are models continuously fine-tuned as new data becomes available?
No. Models are fine-tuned on a periodic basis and on customer request using pipelines provided by Trusst AI.
Model Testing
What process is used to test the models TrusstGPT uses?
As described in the testing process above, evaluation consists of:
Human labelling of the test dataset
Qualitative human evaluation of performance on long text fields in the test dataset
Quantitative evaluation of categorical fields in relation to human labels
Evaluation by a separate LLM over the entire training set.
Red-teaming is used to identify vulnerabilities and emergent risks.
What is the review process for TrusstGPT AI models?
Human review and labelling of the results on test datasets.
What safeguards are in place to detect and address model hallucinations?
Several safeguards are in place:
A subset of model outputs is reviewed by humans, and performance is verified for each training run.
For zero-shot fine-tuned model outputs, a low temperature and a top_p of 0.1 are chosen, which limits hallucinations.
Fine-tuning itself is also used to reduce hallucinations: the further-aligned model is explicitly tuned to stay on task and to base responses only on its input.
Prompt engineering is used to instruct models specifically to base answers on the input.
Finally, for categorical extraction, additional clustering-based filtering narrows the possible output space, which eliminates the possibility of hallucinated categories.
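To make the sampling and output-space safeguards concrete, here is a minimal sketch assuming a generic inference API; the temperature value, parameter names, and label set are illustrative assumptions rather than TrusstGPT's actual configuration.

```python
# Illustrative only: conservative sampling settings plus a closed output space
# for categorical extraction. Parameter names follow common inference APIs;
# the temperature value and the label set below are assumptions, not
# TrusstGPT's actual configuration.
GENERATION_CONFIG = {
    "temperature": 0.1,  # assumed "low" value; near-greedy decoding
    "top_p": 0.1,        # sample only from the smallest high-probability nucleus
}

ALLOWED_CATEGORIES = {"billing", "cancellation", "technical_support"}  # hypothetical

def constrain_category(raw_output: str, allowed: set[str], fallback: str = "unknown") -> str:
    """Map free-text model output onto the closed label set, so a hallucinated
    category can never be emitted."""
    normalised = raw_output.strip().lower()
    return normalised if normalised in allowed else fallback

print(constrain_category("Billing", ALLOWED_CATEGORIES))        # -> billing
print(constrain_category("made-up label", ALLOWED_CATEGORIES))  # -> unknown
```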
How is the model protected against adversarial attacks?
The main protections against adversarial attacks are as follows:
No PII is used in the model fine-tuning.
Trusst AI models have been fine-tuned with internal proprietary data, not with the use of public or customer data.
Models are deployed individually for each customer in their own Virtual Private Cloud (VPC). No customer data leaves the customer's account, even in the form of model weights.
For customers requesting fine-tuning on their data, PII is explicitly redacted, and models remain in the customer's account and are not shared with other customers or third parties.
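As a rough illustration of the redaction step (the actual tooling is not specified here), a minimal regex-based sketch; production systems would typically use a dedicated PII-detection service.

```python
# Minimal sketch of PII redaction before fine-tuning. The patterns below are a
# simplified illustration, not Trusst AI's actual redaction pipeline.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"(?<!\w)\+?\d[\d\s()-]{7,}\d\b"),
}

def redact_pii(text: str) -> str:
    """Replace detected PII spans with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact_pii("Call jane.doe@example.com or +61 2 9999 9999 about the refund."))
# Call [EMAIL] or [PHONE] about the refund.
```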