Changelog & Access

Changelog

A log of current and historically deployed models. Historical models are no longer used in production and can be accessed only on request, for research purposes.

RoBERTa v2.3.0 [CURRENT] [MODERATION]: Iteration 22, 98% Accuracy. Tweaks and adjustments. Deployed 9/12/2022

RoBERTa v2.2.0 [MODERATION]: Iteration 17, 98% Accuracy. Feedback from community. Deployed 6/15/2022

RoBERTa v2.1.1 [MODERATION]: Iteration 16, 98% Accuracy. Adjusted for niche contextual terms. Deployed 6/8/2022

RoBERTa v2.1.0 [MODERATION]: Iteration 15, 98% Accuracy. Tweaking weights, normalizing characters. Deployed 5/18/2022

RoBERTa v2.0.0 [MODERATION]: Iteration 14, 98% Accuracy. Introduced a new API targeted at content moderation. Deployed 5/18/2022

RoBERTa v1.4.2 [CURRENT] [TOXICITY]: Iteration 13, 98% Accuracy. Addressing case discrepancies. Deployed 5/13/2022

RoBERTa v1.4.1 [DEPRECATED]: Iteration 12, 98% Accuracy. Patch. Deployed 5/06/2022

RoBERTa v1.4.0 [DEPRECATED]: Iteration 11, 98% Accuracy. Addressing generalized false positives. Deployed 4/30/2022

RoBERTa v1.3.0 [DEPRECATED]: Iteration 10, 98% Accuracy. Reduced false positives on NSFW content. Deployed 3/10/2022

RoBERTa v1.2.0 [DEPRECATED]: Iteration 9, 98% Accuracy. Addressing Cultural Slang. Deployed 2/27/2022

RoBERTa v1.1.1 [BETA]: Iteration 8, 98% Accuracy. Re-validated Dataset v2. Deployed 2/21/2022

RoBERTa v1.1.0 [BETA]: Iteration 7, 98% Accuracy. Re-validated Dataset. Deployed 2/18/2022

RoBERTa v1.0.0 [BETA]: Iteration 6, 97% Accuracy. Bias Mitigation Dataset v2. Deployed 2/13/2022

ALBERT v1.1.0 [BETA]: Iteration 5, 97% Accuracy. Bias Mitigation Dataset v2. Deployed 2/12/2022

ALBERT v1.0.0 [BETA]: Iteration 4, 97% Accuracy. Bias Mitigation Dataset. Deployed 2/12/2022

ALBERT v0.0.3 [PRE-BETA]: Iteration 3, 97% Accuracy. Expanded Dataset. Deployed 2/10/2022

ALBERT v0.0.2 [PRE-BETA]: Iteration 2, 97% Accuracy. Initial Dataset. Deployed 2/09/2022

ALBERT v0.0.1 [PRE-BETA]: Iteration 1, 96% Accuracy. Initial Dataset. Deployed 2/08/2022

Access

For on-premise deployment, further research, or edge prediction, we can provide compact, quantized model weights as well as access to our full dataset. Please email support@moderatehatespeech.com with your use case for more information.
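The quantized weights themselves are available only by email, so as a rough illustration of what "quantized for edge prediction" means in practice, here is a minimal PyTorch sketch that applies dynamic int8 quantization to a toy classifier head. The layer sizes and the model structure are stand-ins, not the actual RoBERTa architecture or weights.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a classifier head; the production model is a
# RoBERTa encoder, so the shapes here are purely illustrative.
model = nn.Sequential(
    nn.Linear(768, 256),
    nn.ReLU(),
    nn.Linear(256, 2),  # two classes, e.g. toxic / non-toxic
)
model.eval()

# Dynamic quantization converts the Linear weights to int8, shrinking the
# model for CPU/edge inference while keeping activations in float.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    logits = quantized(torch.randn(1, 768))
```

Dynamic quantization is the lightest-weight option since it needs no calibration data; for larger accuracy-sensitive deployments, static or quantization-aware approaches may be preferable.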