15 search results for:

1

Press Release: Squark Launches Codeless AI that Transforms How Business Analysts Deliver Accurate Predictions

With AI-powered analytics now as easy as a spreadsheet, Squark Seer equips any organization to make decisions based on probabilities instead of guesses Burlington, MA – Feb. 28, 2019 – Squark, a software as a service (SaaS) predictive analytics provider, today announced Squark™ Seer, a tool that enables non-programmers to take advantage of the power […]

2

Leaderboard

Squark Seer produces a Leaderboard that lists the best-performing models trained on your specific data from Squark’s set of powerful codeless AI algorithms. Although Squark Seer may have built thousands of models while you waited for results, the Leaderboard contains only the most accurate model for each algorithm used. […]

3

Confusion Matrix

A Confusion Matrix, if calculated, is a table depicting a prediction model’s performance in terms of false positives, false negatives, true positives, and true negatives. It is so named because it shows how often the model confuses one label with another. The matrix is generated by cross-validation, comparing predictions against a benchmark hold-out of data.
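As a rough illustration only (not Squark’s internal implementation), the sketch below tallies a binary confusion matrix with numpy; the function name and array arguments are assumptions for the example:

```python
import numpy as np

def binary_confusion_matrix(actual, predicted):
    """Tally true/false positives and negatives for binary labels (0/1)."""
    actual = np.asarray(actual)
    predicted = np.asarray(predicted)
    tp = np.sum((actual == 1) & (predicted == 1))   # true positives
    tn = np.sum((actual == 0) & (predicted == 0))   # true negatives
    fp = np.sum((actual == 0) & (predicted == 1))   # false positives
    fn = np.sum((actual == 1) & (predicted == 0))   # false negatives
    # Rows are actual classes, columns are predicted classes.
    return np.array([[tn, fp],
                     [fn, tp]])
```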

4

Root Mean Square Logarithmic Error or RMSLE

RMSLE, or the Root Mean Square Logarithmic Error, measures the difference between the actual values in your data and the model’s predicted values on a logarithmic scale, which amounts to comparing the log of the ratio between predicted and actual values. Use RMSLE instead of RMSE when an under-prediction is worse than an over-prediction, since RMSLE penalizes underestimates more heavily than overestimates. For example, is it worse to forecast too much sales […]
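A minimal sketch of the standard RMSLE formula, assuming non-negative numeric arrays and numpy (not drawn from Squark’s code):

```python
import numpy as np

def rmsle(actual, predicted):
    """Root Mean Square Logarithmic Error for non-negative values."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    # log1p keeps zero values valid; squaring differences of logs
    # penalizes under-prediction more heavily than over-prediction.
    log_diff = np.log1p(predicted) - np.log1p(actual)
    return np.sqrt(np.mean(log_diff ** 2))
```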

5

Root Mean Square Error or RMSE

RMSE is the Root Mean Square Error. The RMSE is always greater than or equal to the MAE. The RMSE metric evaluates how well a model can predict a continuous value. The RMSE units are the same units as your data’s dependent variable/target (so if that’s dollars, this is in dollars), which is useful for […]
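For reference, the conventional RMSE calculation looks like the sketch below (names are illustrative, not Squark’s API):

```python
import numpy as np

def rmse(actual, predicted):
    """Root Mean Square Error, expressed in the same units as the target."""
    errors = np.asarray(actual, dtype=float) - np.asarray(predicted, dtype=float)
    return np.sqrt(np.mean(errors ** 2))
```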

6

Mean Square Error or MSE

MSE is the Mean Square Error, a model quality metric; closer to zero is better. The MSE metric measures the average of the squares of the errors, or deviations. MSE takes the distances from the points to the regression line (these distances are the “errors”) and then squares them to remove any negative […]
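A minimal sketch of the textbook MSE calculation (illustrative only, not Squark’s code):

```python
import numpy as np

def mse(actual, predicted):
    """Mean Square Error: average of squared errors; closer to zero is better."""
    errors = np.asarray(actual, dtype=float) - np.asarray(predicted, dtype=float)
    return np.mean(errors ** 2)   # squaring removes negative signs
```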

7

Mean Absolute Error or MAE

MAE, or the Mean Absolute Error, is the average of the absolute errors. The smaller the MAE, the better the model’s performance. The MAE units are the same units as your data’s dependent variable/target (so if that’s dollars, this is in dollars), which is useful for understanding whether the size of the error is meaningful […]
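The standard MAE calculation, sketched with numpy (the function name is an assumption for the example):

```python
import numpy as np

def mae(actual, predicted):
    """Mean Absolute Error, expressed in the same units as the target."""
    errors = np.asarray(actual, dtype=float) - np.asarray(predicted, dtype=float)
    return np.mean(np.abs(errors))
```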

8

Logloss

Logloss (or Logarithmic Loss) measures classification performance; specifically, uncertainty. This metric evaluates how close a model’s predicted probabilities are to the actual target values. For example, does a model tend to assign a high predicted value like 0.90 for the positive class, or does it show a poor ability to identify the positive class and […]
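A minimal sketch of binary logloss, assuming 0/1 labels and predicted probabilities for the positive class (not Squark’s implementation):

```python
import numpy as np

def logloss(actual, predicted_prob, eps=1e-15):
    """Binary logarithmic loss; lower is better."""
    y = np.asarray(actual, dtype=float)
    # Clip probabilities away from 0 and 1 so the logarithm stays finite.
    p = np.clip(np.asarray(predicted_prob, dtype=float), eps, 1 - eps)
    # A confident correct prediction (e.g. 0.90 for a positive) adds little loss;
    # a confident wrong prediction is penalized heavily.
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
```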

9

Residual Deviance

Residual Deviance (in Regression only) is short for Mean Residual Deviance and measures the goodness of the model’s fit. In a perfect world this metric would be zero. Deviance is equal to MSE for Gaussian distributions. If Deviance doesn’t equal MSE, then it gives a more useful estimate of error, which is why Squark uses […]
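To make the Gaussian case concrete, here is an illustrative sketch: when the distribution is Gaussian the unit deviance is the squared error, so the mean residual deviance reduces to the MSE. For other distributions (e.g. Poisson or Gamma) the unit deviance differs, which is when this metric becomes more informative than MSE.

```python
import numpy as np

def mean_residual_deviance_gaussian(actual, predicted):
    """Mean residual deviance under a Gaussian assumption, which equals the MSE."""
    errors = np.asarray(actual, dtype=float) - np.asarray(predicted, dtype=float)
    return np.mean(errors ** 2)
```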

10

Mean Per Class Error

Mean Per Class Error (in Multi-class Classification only) is the average of the errors of each class in your multi-class data set. This metric speaks to how often the data are misclassified across the classes. The lower this metric, the better.
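A rough sketch of how a mean per-class error can be computed (illustrative names, not Squark’s code): the misclassification rate is measured within each class and then averaged, so every class counts equally regardless of its size.

```python
import numpy as np

def mean_per_class_error(actual, predicted):
    """Average of each class's misclassification rate; lower is better."""
    actual = np.asarray(actual)
    predicted = np.asarray(predicted)
    per_class_errors = [
        np.mean(predicted[actual == cls] != cls)   # error rate within one class
        for cls in np.unique(actual)
    ]
    return float(np.mean(per_class_errors))
```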