A couple of months ago, another Dreamforce ended. I was fortunate enough to present two Breakout sessions at Dreamforce, and my session on the ABCs of Einstein Prediction Builder drew a nearly sold-out house. I’d like to highlight the key concepts of the session in this blog post. By the end of it, you’ll have the skills necessary to build your first Einstein Prediction.
Building a prediction can be done in a four-step process: Discovery, Requirements, Develop, and Automate. As we walk through these steps, let’s imagine we are a SaaS company called Cloudy’s Computing Co. that sells licenses on a subscription basis.
The first step in building a prediction is going through the Discovery process to determine what you’re going to predict. Think of this like a Prediction Storybook. Many of us created a Mad Libs book in our youth, filling in the blanks to create our own story, and the same idea applies perfectly to Prediction Builder: we can fill in the blanks to build our prediction.
The key components of the Storybook are a persona encountering a clear problem that causes the business significant pain. It only makes sense to build a prediction if there is a clear problem and pain point that a prediction can help with.
Next, we identify what we’re going to predict with Prediction Builder, and the benefit that the prediction will bring. Lastly, we state an action we’ll be able to take with the knowledge the prediction will bring. Completing the Storybook will help us with the next steps of Requirements, Develop and Automate. Download a blank copy of the Storybook here. Here’s how this can look in our SaaS example.
The next step is to complete the Requirements checklist to ensure you’ve got everything needed to build the prediction. With Prediction Builder, there are five requirements that must be met, listed below.
- All the data must be in Salesforce.
- All fields must come from a single object.
- The object the predicted field is on must have at least 400 records.
- The predicted field must be a checkbox, numeric, or formula field.
- If the predicted field is a checkbox, it must have at least 100 true and 100 false records.
Complete the Requirements checklist to make sure you can create the prediction. If all the requirements are met, you’re good to go. The completed table below shows how this could look in our SaaS use case.
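To make the record-count rules concrete, here is a small sketch in plain Python (not anything Salesforce-specific — Prediction Builder runs these checks for you during Setup). The counts for Cloudy’s Computing Co. are hypothetical.

```python
# Illustrative sketch of the Prediction Builder eligibility rules.
# All record counts below are made up for the example.

def meets_requirements(total_records, field_type, true_count=0, false_count=0):
    """Return True if the object/field combination is eligible for a prediction."""
    if total_records < 400:                    # object needs at least 400 records
        return False
    if field_type not in ("checkbox", "numeric", "formula"):
        return False                           # predicted field type must be one of these
    if field_type == "checkbox":               # checkbox needs 100 true and 100 false examples
        return true_count >= 100 and false_count >= 100
    return True

# Hypothetical churn checkbox on Account: 1,200 records, 220 churned
print(meets_requirements(1200, "checkbox", true_count=220, false_count=980))  # True
print(meets_requirements(350, "checkbox", true_count=220, false_count=130))   # False (too few records)
```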
After completing the Requirements checklist, you’re ready to start the development process. If you need additional details on the steps, you can download Salesforce’s detailed documentation on Prediction Builder. This post covers two key concepts: the Setup and the Scorecard.
In the Setup process, the first thing you need to do is determine your Dataset, Segment, Example Set, and Prediction Set. The Dataset is the object your predicted field is on; in our SaaS scenario, it would be Accounts. Optionally, you can set a Segment, which is a subset of the Dataset. For our churn use case, we want to focus only on customer Accounts and exclude Accounts like Partners, since they don’t use our software. The Example Set is the set of records Einstein learns from to build the prediction. For our churn use case, let’s set the Example Set to Accounts that were recently up for renewal, as these essentially made a recent decision on whether to stay a customer. Lastly, the Prediction Set is the set of records that Einstein scores: all the records within the Segment that aren’t in the Example Set. In our use case, it would be all customer Accounts that weren’t up for renewal recently. Now we know how the Setup for our prediction will look.
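As a sketch of how these Setup choices carve up the Account records, here is some plain Python; the Account names and field names (`type`, `recent_renewal`) are made up for illustration.

```python
# Hypothetical Account records for Cloudy's Computing Co.
accounts = [
    {"name": "Acme",    "type": "Customer", "recent_renewal": True,  "churned": True},
    {"name": "Globex",  "type": "Customer", "recent_renewal": True,  "churned": False},
    {"name": "Initech", "type": "Customer", "recent_renewal": False, "churned": None},
    {"name": "Hooli",   "type": "Partner",  "recent_renewal": False, "churned": None},
]

# Segment: customer Accounts only (Partners excluded)
segment = [a for a in accounts if a["type"] == "Customer"]

# Example Set: segment records recently up for renewal -- Einstein learns from these
example_set = [a for a in segment if a["recent_renewal"]]

# Prediction Set: everything else in the segment -- these records get scored
prediction_set = [a for a in segment if not a["recent_renewal"]]

print([a["name"] for a in example_set])     # ['Acme', 'Globex']
print([a["name"] for a in prediction_set])  # ['Initech']
```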
The next step of the Setup process is to select the fields you want to include in the prediction. Keep in mind from the Requirements step that all fields need to come from a single object. By default, Einstein selects all fields to include in the prediction, but we recommend using a subset for better prediction quality. The screenshot below shows some tips and tricks on fields to include and exclude, as well as examples of each for our SaaS use case.
After you’ve completed the Setup, you’ll be able to view the Prediction Scorecard, which you can use to check prediction quality and big-picture metrics. There is a lot of detail within the Scorecard, so I will focus on two key components.
The Prediction Quality, found on the Overview tab, indicates how accurate the prediction is likely to be. It scores the prediction from 1 to 100, and it is recommended to enable the prediction only if the score is at least 60. For our churn use case, the score is 79. That’s great! This means the quality is high enough to enable the prediction.
Another key component in the Scorecard is the Details tab, which shows values for the Impact, Correlation and Importance of all fields and values used in the Prediction setup. We can use these scores to help us in the next step of Automation. Let’s review the terminology for these fields.
- Impact is a number between 0 and 1 that represents the scaled weight or importance of a predictor.
- Correlation is the relationship, positive or negative, between a predictor and the field being predicted.
- Importance and Weight indicate the significance of a predictor. Depending on the model type used to build the prediction, either importance or weight is displayed, but not both.
Using the screenshot below as an example, we can see that being in a low license-utilization bucket and having received no training both have high Impact, Correlation, and Importance. This means Accounts with low utilization and no training are more likely to churn. Before moving to the Automate step, this arms you with knowledge of the typical characteristics that lead to churn.
The last step in the Develop phase is to enable the prediction. Enabling the prediction creates a custom field that stores the predicted score and starts scoring your records. In our case, it would create a custom field called Churn Likelihood, and we’d start to see the churn likelihood of Accounts scored from 1 to 100.
In the Automate step, we work out how to improve the metric we are measuring. In our SaaS use case, we learned that low utilization and a lack of training lead to a higher likelihood to churn. Based on this, let’s offer free in-person training to Accounts that have a high likelihood to churn and that haven’t received any training.
Since enabling the prediction automatically created a custom field, we can reference that field in our automation. There are two ways to create this automation. First, we can use Process Builder to set up automation around the score. Second, we can use Einstein Next Best Action, where recommendations guide users on the next steps to take. Next Best Action is the better option if we want to leave the ultimate decision on which action to take with the user; for example, we may determine that Accounts with a high likelihood to churn should either be given free training or a higher support level, but leave it to the user to decide which recommendation to accept.
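Expressed as plain logic outside Salesforce, the recommendation rule might look like the sketch below. The threshold of 70 and the field names are hypothetical choices for the example, not Prediction Builder defaults; in practice this logic would live in Process Builder criteria or a Next Best Action strategy.

```python
# Sketch of the churn-intervention rule; threshold and field names are made up.

def recommend(account):
    """Return a recommended action for an account, or None if no action is needed."""
    if account["churn_likelihood"] >= 70:      # high likelihood to churn
        if not account["received_training"]:
            return "Offer free in-person training"
        return "Offer higher support level"    # alternative recommendation
    return None

print(recommend({"churn_likelihood": 85, "received_training": False}))
# Offer free in-person training
```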
We hope this helps in building your first Einstein Prediction. The four-step process of Discovery, Requirements, Develop, and Automate, along with the SaaS churn use case, should help you learn how to set up a prediction and automate on it using either Process Builder or Next Best Action.
Be sure to check out the SpringML blog How to Build Your First Einstein Next Best Action Recommendation.
If you’d like additional Salesforce resources, check out these related SpringML resources:
- Learn How Salesforce Einstein Bots Improve Customer Experience
- Predictive Churn Accelerator Primer
- SpringML’s Predictive Churn Accelerator
- Propensity to Buy Using Salesforce Einstein
- The First Step in Your Analytics Journey