Every day, people use our platform to send and receive messages, but sifting through incoming messages can be tough and time-consuming. In response, we have developed a model that labels incoming messages based on their content, helping you sort your inbox and surface the most important replies.
To do this, we have trained a language model that classifies messages as they arrive and assigns likelihood scores. Currently, the model rates the likelihood that a given response is an opt-out attempt, even if the respondent doesn’t use an opt-out keyword. Each message is scored on a scale from 0 to 1, with 1 being the highest confidence.
You can find this in your inbox. Instead of exporting your incoming messages and searching for keywords in a spreadsheet, you can use our scoring system to filter incoming messages right in Switchboard!
Click the “Edit Filters” button and scroll down the menu until you find the Opt-Out scores:
The inbox will then show only messages whose scores fall within the range you indicate. The exact score range you have selected appears above the inbox, for example:
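If you do still work with an export outside Switchboard, the same range filter is simple to reproduce yourself. This is only a minimal sketch, not Switchboard’s API: the message records and the `opt_out_score` field are hypothetical stand-ins for whatever columns your export contains.

```python
# Filter exported messages to a score range, mirroring the inbox filter.
# The message structure and "opt_out_score" field are hypothetical examples.
messages = [
    {"text": "Please remove me from this list", "opt_out_score": 0.94},
    {"text": "Yes, I'll be there Saturday!", "opt_out_score": 0.03},
    {"text": "Not interested, thanks", "opt_out_score": 0.71},
]

def in_score_range(message, low=0.6, high=1.0):
    """Keep messages whose opt-out score falls within [low, high]."""
    return low <= message["opt_out_score"] <= high

likely_opt_outs = [m for m in messages if in_score_range(m)]
print([m["text"] for m in likely_opt_outs])
# -> ['Please remove me from this list', 'Not interested, thanks']
```

Adjusting `low` and `high` corresponds to moving the slider range shown above the inbox.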
<aside> ⚠️ Like any model, this classifier will not be perfect. Even though it is accurate on average, it will make mistakes. Please feel free to ask questions about scores, and we will monitor the model’s performance over time.
In addition to acute modeling problems, some general limitations apply to language models like this one:
During model training, we validate the model’s performance. Validation scores give a general sense of a model’s quality but are not conclusive, because they summarize the model’s behavior overall.
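As a rough illustration of that validation step (not our actual pipeline), an error rate is simply the fraction of held-out messages the model labels incorrectly. The labels and predictions below are made-up examples.

```python
# Hypothetical held-out labels (1 = opt-out, 0 = not) and model predictions.
true_labels = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
predicted   = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]

# Error rate: the share of validation messages the model got wrong.
errors = sum(t != p for t, p in zip(true_labels, predicted))
error_rate = errors / len(true_labels)
print(error_rate)  # -> 0.2 (2 mistakes out of 10 messages)
```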
Those scores, and brief explanations, can be found here:
- 0.2, which indicates the model's error rate. A lower value suggests a better model.
- 0.8, where 1 is the best possible score.
- 0.9 on the AUC ROC. This is a measure of the model's ability to distinguish between classes, with 1 indicating perfect classification.
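For intuition, AUC ROC has a simple pairwise reading: it is the probability that a randomly chosen opt-out message receives a higher score than a randomly chosen non-opt-out message (ties count as half). A small self-contained sketch, using made-up labels and scores rather than our validation data:

```python
def auc_roc(labels, scores):
    """AUC ROC via pairwise comparison: the chance that a randomly chosen
    positive outranks a randomly chosen negative (ties count as half)."""
    pos = [s for label, s in zip(labels, scores) if label == 1]
    neg = [s for label, s in zip(labels, scores) if label == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical validation labels (1 = opt-out) and model scores.
labels = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.6, 0.2]
print(auc_roc(labels, scores))
```

A value of 0.5 would mean the scores rank opt-outs no better than chance, and 1.0 would mean every opt-out outranks every non-opt-out.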