What is Predictive Marketing?

Do You Even Need Predictive Marketing?

It comes down to insights versus actions.

Predictive Marketing is the umbrella term encompassing processes that rely on AI and machine learning to drive actions with accurate predictions. This is in contrast to traditional marketing, which has been built on descriptive and diagnostic processes.

Traditional Analysis
Business questions are formulated and translated into requirements that include data collection, tooling, and governance. KPIs are created, and these leading and lagging indicators are put into reports and visualizations for statistical analysis and “data storytelling.” The results are insights—impressions of “what’s going on.”

There are two main limitations of this approach. First, statistical methods rely on guesses of what factors to consider. It is difficult or impossible to analyze every variable with respect to every other. Second, reports and visuals that demonstrate trends do not identify individual records that exhibit them. More steps need to take place to actually apply the insights.

Predictive Analysis
Business questions are answered by the data itself. Machine learning is used to understand relationships within sets of known outcomes so that the patterns can be applied to never-before-seen data. Typical questions are “yes-no?” (binary), “which of three or more?” (multinomial), or “how much?” (regression). No presumption of the causal factors is required. Simply by identifying which features (columns) in the known-outcome data represent the answers, machine learning can fill in the blanks for the new data. The results of AI-driven predictive analytics are data sets with record-by-record scores of which individuals are likely to exhibit the behaviors.

With predictive AI, knowledge of statistics, data science, and programming is unnecessary. Most importantly, marketers can instantly switch from insights to actions. For use cases such as lead scoring, conversion optimization, attribution, content targeting, cross-sell uplift, lifetime value forecasting, and churn reduction, actions always beat insights. Think of the value of simply examining data within your existing martech stack to produce ranked to-do lists for your most important marketing programs.

Sound like magic?
Predictive marketing is built upon sound AI science, with proven reliability and accuracy. In fact, the explanations of why machine learning algorithms made their predictions are tremendously valuable. By listing which variables are most predictive, AI reveals the most important elements for focus in marketing outreach—even ones you never suspected might be operative.

A famous movie director once opined that if film began as color, no one would have thought of inventing black-and-white. Try predictive marketing with Squark and see if your vision of the future is suddenly more colorful.

How to Pick AutoML Use Cases

Hint: Focus on key performance indicators.

Automated Machine Learning (AutoML) can help you make better and more timely decisions by detecting signals in data that would be impossible to see with conventional analysis. To make the most of this power to see the future, attack your most important performance indicators. Remember that AutoML delivers specific, record-by-record predictions—not just generalized insights. That means you can take actions that change outcomes. Apply supervised AutoML when you want these answers:

  • Will this happen or not?
    Predictions based on two possible outcomes are called binary classifications. For example, will the sales opportunity be won or lost; will the customer stay or leave for a competitor; will the package arrive on time or not?
  • Which of several things will happen?
    Predicting outcomes when there are three or more possibilities is termed multinomial classification. Which promotional offer will increase sales most; which advertisements on what media work best; on what days and times are problems most likely to occur?
  • How much will happen?
    Predicting an outcome where the result could be any real number is called regression. What is the revenue forecast; what will this customer’s lifetime value be; what is the maximum temperature the product will reach during transit?
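To make these three question types concrete, here is a minimal sketch using scikit-learn as a stand-in for an AutoML engine. The file names, column names, and algorithm choices are hypothetical illustrations, not Squark’s implementation, and data cleanup/encoding is omitted for brevity.

```python
# Minimal sketch of the three supervised prediction types (scikit-learn as a
# stand-in for AutoML). Files and column names are hypothetical; assumes
# numeric features for brevity.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

history = pd.read_csv("known_outcomes.csv")   # rows where outcomes are already known
new_data = pd.read_csv("new_records.csv")     # same feature columns, outcomes unknown
targets = ("won_lost", "best_offer", "revenue")
features = [c for c in history.columns if c not in targets]

# Binary: will this happen or not? (e.g., opportunity won or lost)
binary_model = RandomForestClassifier().fit(history[features], history["won_lost"])
new_data["p_win"] = binary_model.predict_proba(new_data[features])[:, 1]

# Multinomial: which of several things will happen? (e.g., best promotional offer)
multi_model = RandomForestClassifier().fit(history[features], history["best_offer"])
new_data["predicted_offer"] = multi_model.predict(new_data[features])

# Regression: how much will happen? (e.g., revenue forecast)
reg_model = RandomForestRegressor().fit(history[features], history["revenue"])
new_data["predicted_revenue"] = reg_model.predict(new_data[features])
```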

As a bonus, Squark AutoML explains why by highlighting variables that are most predictive. This means you learn which knobs to turn to improve performance and which to leave alone—the essence of turning predictions into profits.

The takeaway: AutoML produces answers quickly, before the questions change. Pick your application and go for it.

AutoML vs. Data Scientists

A little respect is due—in both directions.

Can Automated Machine Learning (AutoML) beat serious data scientists in producing accurate predictions? People who understand data science and programming deeply, when armed with all the tools and computer resources and time they need, are nearly always able to produce better models than generalized, self-programming AutoML systems. There are two questions to ask before you try solving problems with specialists:

How much time and money do you have?
Hand-crafting machine learning models means projects will take from days to months to complete. Expert data scientists are costly. The value of their predictions must exceed the investment in their work by the expected rate of return to make projects viable. Custom work is best suited to very-high-value outcomes where generalized tools do not produce the required accuracy, and where there is time to build and test the solutions before the questions change.

How good must your models be?
Not all predictions have life-and-death consequences. Marketing decisions, for instance, can benefit from improving accuracy from “coin-flip” to 70%. Waiting to achieve 73% with fewer false positives and negatives rarely pays compared to acting on good-enough data. When detecting serious diseases, those same three points of accuracy and the lower rate of false results could determine who survives. Surgical precision is essential for surgery, but you wouldn’t bake an apple pie that way.

The takeaway: We love data scientists and employ them. Their knowledge of mathematics, statistics, algorithms, and code is irreplaceable—for the appropriate tasks. Use their rare talents for your most critical work. For most ordinary business decisions, AutoML results are remarkably accurate and available in minutes at very low cost.

How “Auto” Is That AutoML?

Some AutoML systems are more automatic than others.

AutoML stands for Automated Machine Learning, which means streamlining the end-to-end process of solving problems with machine learning. Steps that need to be automated include:

  • Data Preparation (type and dependency)
  • Feature Engineering
  • Model Algorithm Selection
  • Training
  • Hyperparameter Tuning
  • Model Evaluation
  • Production Processing and Deployment
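For contrast, here is a compressed sketch of what several of the steps above look like when done by hand with scikit-learn. The file and column names are hypothetical; the point is how much wiring a genuinely automatic system removes.

```python
# Hand-built version of the pipeline an AutoML system generates for you.
# Hypothetical file "training.csv" with a binary "outcome" column.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("training.csv")
X, y = df.drop(columns=["outcome"]), df["outcome"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Data preparation and feature engineering
numeric = X.select_dtypes("number").columns
categorical = X.select_dtypes(exclude="number").columns
prep = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer()), ("scale", StandardScaler())]), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])

# Algorithm selection, training, and hyperparameter tuning
pipe = Pipeline([("prep", prep), ("model", GradientBoostingClassifier())])
search = GridSearchCV(pipe, {"model__n_estimators": [100, 300],
                             "model__learning_rate": [0.05, 0.1]}, cv=5)
search.fit(X_train, y_train)

# Model evaluation before deployment
print("AUC:", roc_auc_score(y_test, search.predict_proba(X_test)[:, 1]))
```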

In practice, there is a spectrum of “auto-ness” starting at “barely better than coding from scratch.” Here are some clues that a particular instance of AutoML may not be so automatic:

  • Mentions Jupyter notebooks
  • Requires knowledge of Python
  • Refers you to GitHub
  • Asks for hyperparameter ranges
  • Makes you select an algorithm
  • Omits performance metrics in output

The takeaway: Don’t assume that AutoML in the product’s description means that you can use it without programming or manual inputs.

How to Prove AI Value to Skeptics

Make a prediction that comes true.

AI has not taken its seat at the decision-making table for many organizations that would reap big benefits from it. That’s easy to understand, since the terms—Artificial Intelligence, machine learning, robotic assistants, and the like—are conflated in stories and ads to the point of being meaningless. Nearly every system claims to be AI-driven, without explaining what that means.

So what do you do when you are an executive who understands the concept and the promise of AI, but can’t get past the barriers? Building a data science team is an expensive leap of faith before you quantify potential gains. Automated machine learning is the best way to show AI’s value swiftly—by making accurate, actionable predictions that deliver better results than your current processes.

Pick a practical problem. Think of areas where classification or regression predictions could add revenue or save cost.

  • Will this happen or not?
  • Which of several things will happen?
  • How much will happen?

Making predictions with AutoML is as simple as pointing to training data, selecting which values you want to predict, and setting it to work on your data. What comes back is a record-by-record likelihood of future outcomes. Those likelihoods are easy to monetize through programs that amplify high-probability actions and avoid low-probability ones. The results can be startling. One of Squark’s customers applied AutoML to a marketing cross-sell problem and had this to say:

“Double-digit uplift in sales overnight, with no coding. Remarkable.”

The takeaway: AutoML proves the value of AI quickly. Pick a prediction use case and demonstrate how a view into the future transforms decision making with instant ROI.

How AutoML Beats Scoring Formulas

AutoML’s ability to detect patterns and predict can out-perform algebraic formulas and Boolean logic in common tasks.

Anyone who has written a formula in Excel or adjusted parameters in an online application knows how even the smallest change can produce dramatically different results. The power of mass calculation is exactly what puts algebraic and Boolean logic at the core of nearly every productivity tool you use. But there are costs and risks.

“I just blew up my whole workbook.” “That one little setting changed the whole forecast?” “You should have told me that was an important criterion when we wrote the code.” Sound familiar? Programmed logic is limited by our ability to understand the problem and represent it in reliable formulas. Precision and timeliness are critical, so hard-coded logic requires careful maintenance too.

AutoML works by letting data speak for itself. By finding patterns and applying them to new data, machine learning can obviate the need for coded logic and go straight to the answers. Here is just one example, lead scoring:

Formulas
CRM and marketing automation systems have features to rank leads by creating a “score.” Sales teams use lead score to prioritize activities. But scores are generated by applying weights that users set manually. How many points for a whitepaper download? How do page visits and time-on-page bump the number? Do three clicks on email calls to action mean I have a hot prospect? In practice, it is nearly impossible to fine-tune scoring models well and quickly enough for them to be useful. (More on this scoring example in this blog post.)
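As a rough illustration (every weight and field below is invented), a hand-tuned score is just arithmetic over guesses:

```python
# Toy version of a manually weighted lead score. Each weight is a human guess
# that must be revisited whenever buyer behavior shifts.
def lead_score(lead):
    score = 0
    score += 10 * lead.get("whitepaper_downloads", 0)   # why 10 points? someone decided
    score += 2 * lead.get("page_visits", 0)
    score += 5 * lead.get("email_cta_clicks", 0)
    if lead.get("title", "").lower() in ("vp", "director", "cmo"):
        score += 15                                      # arbitrary title bonus
    return score

print(lead_score({"whitepaper_downloads": 1, "page_visits": 7, "email_cta_clicks": 3}))
```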

AutoML
By learning from patterns of known outcomes in existing data, AutoML can make predictions on new data. Lead tables contain tens or hundreds of columns of data, and AutoML discovers which are most predictive automatically. That means a prioritized list of leads can be generated in minutes, based on the very latest trends, with no coding. Squark AutoML even reports which variables were most important, so you learn the buyer persona as a nice side benefit.
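Here is a minimal sketch of the learned alternative, using scikit-learn as a stand-in for AutoML. The file and column names are hypothetical; an AutoML system performs the equivalent work without code.

```python
# Learn conversion patterns from historical leads, then score open leads.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

leads = pd.read_csv("historical_leads.csv")    # includes a known "converted" column
new_leads = pd.read_csv("open_leads.csv")      # same columns, minus the outcome
X = pd.get_dummies(leads.drop(columns=["converted"]))
model = GradientBoostingClassifier().fit(X, leads["converted"])

# Record-by-record conversion probability = the prioritized call list
X_new = pd.get_dummies(new_leads).reindex(columns=X.columns, fill_value=0)
scored = new_leads.assign(p_convert=model.predict_proba(X_new)[:, 1])
print(scored.sort_values("p_convert", ascending=False).head())

# Which variables mattered most (a rough proxy for the buyer persona)
print(pd.Series(model.feature_importances_, index=X.columns).nlargest(10))
```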

The takeaway: Find opportunities where AutoML can free you from the shackles of programming logic.

What Are Machine Learning Hyperparameters and How Are They Used?

Parameters are functions of training data. Hyperparameters are settings used to tune model algorithm performance.

In Automated Machine Learning (AutoML), data sets containing known outcomes are used to train models to make predictions. The actual values in training data sets never directly become parts of models. Instead, AutoML algorithms learn patterns in the features (columns) and instances (rows) of training data and express them as parameters that are the basis for the model’s predictions on new data. Parameters are always a function of the data itself, and are never set externally.

Hyperparameters are variables external to and not directly related to the training data. They are configuration variables that are used to optimize model performance. Think of them as instructions to the ML algorithms on how to approach model building. Each modeling algorithm can be set with hyperparameters appropriate to the particular classification or regression prediction problem.
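A small illustration of the distinction, using scikit-learn’s logistic regression as an example:

```python
# Hyperparameters vs. parameters in one short example.
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Hyperparameters: configuration set *before* training, external to the data
model = LogisticRegression(C=0.5, max_iter=200)

# Parameters: values learned *from* the data during training
model.fit(X, y)
print("learned coefficients (parameters):", model.coef_)
print("learned intercept (parameter):", model.intercept_)
```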

Hyperparameter tuning in Squark is automatic. Squark makes multiple training passes, keeps track of the results of each trial run, and makes hyperparameter adjustments for subsequent runs. This progressive improvement in configuration values means the best model accuracy is reached faster.
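The toy loop below illustrates the general idea of successive tuning trials. It uses simple random choices rather than Squark’s actual adjustment strategy, with scikit-learn and synthetic data as stand-ins.

```python
# Simplified illustration of tuning trials: try settings, keep score, retain the best.
# Real AutoML systems use smarter strategies that learn from earlier trials.
import random
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, random_state=0)
best = {"score": 0.0, "settings": None}

for trial in range(10):
    settings = {"n_estimators": random.choice([50, 100, 200]),
                "learning_rate": random.choice([0.03, 0.1, 0.3]),
                "max_depth": random.choice([2, 3, 4])}
    score = cross_val_score(GradientBoostingClassifier(**settings), X, y, cv=3).mean()
    if score > best["score"]:
        best = {"score": score, "settings": settings}

print(best)
```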

The takeaway: Squark uses hyperparameters to learn how to learn better as it works through each model—inventing shortcuts and best practices—the way people do when attacking problems.

What is Bias in Machine Learning?

Bias occurs when ML does not separate the true signal from the noise in training data.

Biases in AI systems make headlines for results such as favoring gender in hiring, recommending loans based on ethnicity, or recognizing faces differently based on race. Some of these cases were due to biases baked into the algorithms written by (human) data scientists, but the majority merely learned from data that was itself biased.

How do you know if your business predictions are biased? Testing against broader sets of known outcomes is the best way. Since you don’t necessarily know which factors may be introducing bias, examining the predictive importance placed on data features can help reveal them. Squark shows lists of Variable Importance for the models it generates. Click on the model name link in the Squark Leaderboard to see them. Different algorithms can produce different ranks for variable importance, which may lend insight.

Bias in Training Data
Selecting training data wisely is the best way to reduce bias. For instance, if the training data set you select is dominated by outcomes that you expect, it should be no surprise that the model will include confirmation bias.
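One quick check worth running before training is to look at how the known outcomes are distributed. A minimal sketch, assuming a pandas data frame with a hypothetical outcome column:

```python
# How balanced are the outcomes in the training set? (Column name is hypothetical.)
import pandas as pd

train = pd.read_csv("training.csv")
print(train["outcome"].value_counts(normalize=True))
# If the outcomes you expected dominate (say 98% vs. 2%), the model can look
# accurate while mostly confirming what the training data already assumed.
```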

Bias in Algorithms
Algorithmic bias occurs when model building takes too few training variables into account. In data sets with large numbers of features (columns), algorithms that can handle only fixed or limited numbers of training variables show high bias and result in underfitting. Certain algorithms such as Linear Regression, Linear Discriminant Analysis, and Logistic Regression are prone to high bias.
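A small demonstration of high bias, using scikit-learn and synthetic data: a linear model underfits a curved relationship that a more flexible algorithm captures easily. (Scores are computed on the training data purely for brevity.)

```python
# High bias in action: a straight line cannot follow a curved signal.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) * 3 + rng.normal(scale=0.3, size=500)   # nonlinear ground truth

print("linear R^2:", LinearRegression().fit(X, y).score(X, y))        # low: underfit
print("forest R^2:", RandomForestRegressor().fit(X, y).score(X, y))   # much higher
```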

The takeaway: If you think your predictions may show bias, experiment. Go back to the variable selection and select/deselect suspicious columns. Iterate as many times as you need to understand your data. At that point you may decide to revise the training and production files to reflect reality with less of a “thumb on the scale.”

Monte Carlo Simulation vs. Machine Learning

Simulation uses models constructed by experts to predict probabilities. Machine Learning builds its own models to predict future outcomes.

Monte Carlo (the place) is the iconic capital of gambling—an endeavor that relies exclusively on chance probabilities to determine winners and losers. Monte Carlo (the method) employs random inputs to models to make predictions on how a system will behave.

When subject matter experts create good Simulation models, they can be valuable in revealing probabilities in complex systems with large numbers of variables—such as predicting human behaviors in markets. “What if?” scenarios can be tested because individual data points or sets of data points can be manipulated to show their effects on the entirety.
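For illustration, here is a minimal Monte Carlo sketch in Python: an expert-built model (a toy profit formula with assumed input distributions, all numbers invented) is run many times with random inputs to estimate an outcome distribution. Changing any assumption and re-running is the “what if?” experiment.

```python
# Toy Monte Carlo simulation: random inputs fed into an expert-defined model.
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000

demand = rng.normal(10_000, 2_000, n_trials)   # assumed demand distribution
price = rng.uniform(9.0, 11.0, n_trials)       # assumed price range
unit_cost = 6.5                                # expert-supplied constant
profit = demand * (price - unit_cost)          # the "model" built by a human

print("mean profit:", profit.mean())
print("probability profit < 25,000:", (profit < 25_000).mean())
```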

Machine Learning builds its own models based on data sets of known outcomes. Predictions are done automatically by applying these models to new sets of data. This methodology is perfect for business analyses such as identifying customers who will churn or predicting customer lifetime value. No human input or modelling skill is required. “The cards call themselves,” as you might say for hands at the Baccarat table.

The takeaway: Simulation excels where domain expertise can be captured to build accurate models to enable experimentation—even creating data inputs to see what happens. Machine Learning is best for fast, automatic predictions on new data based on observations of known outcomes. They are not mutually exclusive. In fact, Machine Learning can be handy to test and refine Simulation models.

Data Mining vs. Machine Learning

Data Mining describes patterns, correlations, and anomalies in data.

Mines are not the best analogies for the processes referred to as Data Mining. Never mind that we call data storage places bases, warehouses, and lakes. Extraction of raw data material is not the goal of data mining, but rather identifying characteristics within data sets that can be used to make decisions and predictions.

Think of Data Mining as applying statistics to make it easier for humans to understand past events recorded in data. By making assumptions and testing them, insights may be generated to help make decisions or predict general behavior in the future. Since all of its variables are known and static, data mining itself cannot predict specific outcomes for new records.

Data Mining Processes
Here are some of the commonly used terms for tasks in data mining:

  • Anomaly Detection – identifying records that are different enough from others to be checked as errors or outliers.
  • Dependency Modelling – Identifying relationships among variables, such as market basket analysis for items frequently bought together.
  • Clustering – Identifying characteristics of groups of records that are more similar to each other than to other groups.
  • Classification – Calculating the probability that a record matches one or more sets of variables.
  • Regression – Estimating the relationship between a dependent variable and one or more independent variables.
  • Summarization – Creating a shortened example set of data, including reports and graphical representations.
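As a brief, hypothetical illustration, two of the tasks above expressed with scikit-learn (the file and columns are invented):

```python
# Clustering and anomaly detection on a hypothetical customer table.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

cols = ["recency", "frequency", "monetary"]
customers = pd.read_csv("customers.csv")[cols]

# Clustering: group customers that are more similar to each other than to others
customers["segment"] = KMeans(n_clusters=4).fit_predict(customers[cols])

# Anomaly detection: flag records that look unlike the rest (-1 = anomaly, 1 = normal)
customers["outlier"] = IsolationForest().fit_predict(customers[cols])
print(customers.head())
```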

Data Mining is good for preparing data and understanding variables that may be useful for predictions. The constraints of time and human analytical capacity to query, join, parse, and process large data sets make Data Mining ill-suited to production predictive analysis.

Machine Learning to the Rescue
Automated Machine Learning (AutoML) automatically makes assumptions and iterates the models until it understands patterns—without the need for human intervention. This means that programming to account for every possible data relationship is unnecessary. The speed of results—even for large data sets—is remarkable. Best of all, the AI models can be applied to fresh data automatically, which is the essence of prediction.

The takeaway: Data mining is useful to gain insights and to prepare data for predictive analytics, including AutoML. Machine Learning uses data patterns to predict future outcomes for new records.