A process that uses rewards and punishments to teach machines how to do new tasks, improving with practice and feedback.
- Area Under the Curve or AUC
- Artificial Intelligence (AI)
- Artificial Neural Networks (ANNs)
- Big Data
- Black Box Algorithms
- Computer Vision
- Confusion Matrix
- Data Science
- Decision tree
- Deep Learning
- Embodied AI
- Ethical AI
- Explainable AI (XAI)
- Few-shot Learning
- Generative Adversarial Networks (GANs)
- Linear Algebra
- Machine learning
- Max F1
- Mean Absolute Error or MAE
- Mean Per Class Error
- Mean Square Error or MSE
- Natural Language Processing
- Pragmatic AI
- Predictive Analytics
- Reinforcement Learning
- Residual Deviance
- Root Mean Square Error or RMSE
- Root Mean Square Logarithmic Error or RMSLE
- Supervised Learning
- Transfer Learning
- Unsupervised Learning
- Variable Importance
- Weak AI
Residual Deviance (in Regression Only) is short for Mean Residual Deviance and measures the goodness of the model’s fit. In a perfect world this metric would be zero. For Gaussian distributions, deviance is equal to MSE. When deviance and MSE differ, deviance gives a more useful estimate of error, which is why Squark uses it as the default metric for ranking regression models.
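For the Gaussian case, the equivalence is easy to see in a short sketch (plain Python written for this glossary; the data values are illustrative):

```python
# Illustrative sketch: for a Gaussian (least-squares) model, mean residual
# deviance reduces to the mean squared error of the predictions.
def mean_squared_error(actual, predicted):
    # Average of squared residuals (actual minus predicted).
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

actual = [3.0, 5.0, 2.5, 7.0]      # observed target values
predicted = [2.5, 5.0, 3.0, 8.0]   # model predictions
print(mean_squared_error(actual, predicted))  # 0.375 (0.0 would be a perfect fit)
```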
RMSE is the Root Mean Square Error. The RMSE will always be larger than or equal to the MAE. The RMSE metric evaluates how well a model can predict a continuous value. The RMSE units are the same units as your data’s dependent variable/target (so if that’s dollars, this is in dollars), which is useful for understanding whether the size of the error is meaningful or not. The smaller the RMSE, the better the model’s performance. RMSE is sensitive to outliers: if your data has outliers, also examine the Mean Absolute Error (MAE), which is not as sensitive to them.
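The outlier sensitivity is easy to demonstrate with a minimal sketch (plain Python written for this glossary, not a Squark API):

```python
import math

def rmse(actual, predicted):
    # Square root of the mean squared error; squaring amplifies large misses.
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    # Mean absolute error; every miss counts in proportion to its size.
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

actual     = [10, 12, 11, 13]
no_outlier = [11, 11, 12, 12]   # errors: 1, 1, 1, 1
outlier    = [11, 11, 12, 17]   # errors: 1, 1, 1, 4 (one large miss)

print(rmse(actual, no_outlier), mae(actual, no_outlier))  # 1.0 1.0 (uniform errors)
print(rmse(actual, outlier), mae(actual, outlier))        # ~2.18 vs 1.75
```

With uniform errors the two metrics agree exactly; a single large miss inflates RMSE well above MAE.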
RMSLE, or the Root Mean Square Logarithmic Error, measures error using the logarithm of the ratio between the actual values in your data and the values the model predicts. Use RMSLE instead of RMSE when an under-prediction is worse than an over-prediction, because RMSLE penalizes underestimates more heavily. For example, is it worse to forecast too much sales revenue or too little? RMSLE is also useful when your data has large numbers to predict and you don’t want to penalize large absolute differences between the actual and predicted values (because both of the values are large numbers).
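Because the logarithm compresses large values, an underestimate produces a larger log-ratio than an overestimate of the same absolute size. A minimal sketch (plain Python written for this glossary; `log1p` is used so zero values are handled):

```python
import math

def rmsle(actual, predicted):
    # Root mean square of differences between log(1 + value) terms,
    # i.e. the log of the ratio between the (shifted) predicted and actual values.
    return math.sqrt(sum((math.log1p(p) - math.log1p(a)) ** 2
                         for a, p in zip(actual, predicted)) / len(actual))

actual = [100.0]
print(rmsle(actual, [150.0]))  # over-prediction by 50: ~0.40
print(rmsle(actual, [50.0]))   # under-prediction by 50: ~0.68, penalized more
```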
SMOTE stands for Synthetic Minority Over-sampling Technique. Oversampling is a technique used to manage class imbalance in data sets. Data set imbalance occurs when the category you are targeting is very rare in the population, or where the data might simply be difficult to collect. SMOTE is helpful when the class you want to analyze is under-represented.
SMOTE works by generating new instances from existing minority cases that you supply as input. SMOTE does not change the number of majority cases.
New instances are not just copies of existing minority class instances. SMOTE synthesizes new minority instances between existing (real) minority instances. The algorithm takes samples of the feature space for each target class and its nearest neighbors, and generates new examples combining the features of the target case with features of its neighbors. This approach increases the features available to each class and makes the samples more general.
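The core interpolation step can be sketched in a few lines (a toy illustration written for this glossary; production libraries such as imbalanced-learn add distance metrics, sampling strategies, and edge handling):

```python
import random

def smote_sample(minority, k=2, n_new=3, seed=0):
    # Toy SMOTE: for each synthetic point, pick a real minority point,
    # find its k nearest minority neighbors, pick one of them, and
    # interpolate a new point on the line segment between the two.
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbors = sorted((p for p in minority if p is not base),
                           key=lambda p: sum((a - b) ** 2 for a, b in zip(base, p)))[:k]
        neighbor = rng.choice(neighbors)
        gap = rng.random()  # random position along the segment
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(base, neighbor)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1)]  # under-represented class
print(smote_sample(minority))  # three new points between the real ones
```

Note that only minority points are generated; the majority class is left untouched, exactly as described above.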
1.) The company that produces Squark Seer, the most powerful AI predictive tool available, distinguished by its use of automated machine learning (AutoML) to achieve completely codeless operation. See www.squarkai.com.
2.) In particle physics, the hypothetical supersymmetric boson counterpart of a quark, with spin 0.
In supervised learning, machine-learning algorithms are taught to solve a specific task using training data that contains a label, the “correct answer,” for each example.
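A minimal sketch of the supervised idea (plain Python written for this glossary; the data and model are illustrative): labeled pairs are used to fit a model, which can then predict for unseen inputs.

```python
def fit_slope(xs, ys):
    # Least-squares fit of the one-parameter model y ≈ w * x,
    # learned from labeled (x, y) training pairs.
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

xs = [1.0, 2.0, 3.0]   # training inputs
ys = [2.1, 3.9, 6.0]   # labels: the "correct answers"
w = fit_slope(xs, ys)
print(round(w * 4.0, 2))  # prediction for the unseen input 4.0 → 7.97
```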
Transfer learning takes what a system has learned for one task and reuses it for a new set of tasks, without having to retrain the system from scratch.
In unsupervised learning, AI algorithms are given unlabeled data and must make sense of it without any instruction. Such machines “teach themselves” what result to produce. The algorithm looks for structure in the training data, for example by finding which examples are similar to each other and grouping them into clusters.
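Clustering is the classic example. A minimal one-dimensional k-means sketch (plain Python written for this glossary; the data and function name are illustrative):

```python
def kmeans_1d(points, k=2, iters=10):
    # Unlabeled points are grouped purely by similarity (distance);
    # no "correct answer" is ever supplied.
    centroids = sorted(points)[::max(1, len(points) // k)][:k]  # spread-out start
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its cluster (keep it if empty).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

print(kmeans_1d([1.0, 1.2, 0.9, 8.0, 8.3, 7.9]))  # two groups of similar values
```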
Variable importance is a metric that indicates how much an independent variable contributes to predictions in a model. The higher the value shown for a variable in its ranking, the more important it is to the model generated.
Understanding the significance of predictors provides insights for interpreting results, and also may be useful for improving model quality. For instance, editing data sets to rationalize incorrect or incomplete columns — or removing irrelevant ones — can make models faster and more accurate.
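One common way to estimate such importance is permutation importance: scramble a single input column and measure how much the model’s error grows. A toy sketch (plain Python written for this glossary; an illustration of the general idea, not necessarily how Squark computes its rankings):

```python
def permutation_importance(model, X, y, col):
    # Scramble one column (here: rotate it by one position, so every value
    # moves) and report the increase in mean squared error. Real
    # implementations shuffle randomly and average over repeats.
    def mse(preds):
        return sum((p - t) ** 2 for p, t in zip(preds, y)) / len(y)

    baseline = mse([model(row) for row in X])
    column = [row[col] for row in X]
    rotated = column[1:] + column[:1]
    X_perm = [row[:col] + (v,) + row[col + 1:] for row, v in zip(X, rotated)]
    return mse([model(row) for row in X_perm]) - baseline

# Hypothetical model: depends only on feature 0, ignores feature 1.
model = lambda row: 3.0 * row[0]
X = [(1.0, 5.0), (2.0, 1.0), (3.0, 9.0), (4.0, 2.0)]
y = [3.0, 6.0, 9.0, 12.0]
print(permutation_importance(model, X, y, col=0))  # 27.0: feature 0 matters
print(permutation_importance(model, X, y, col=1))  # 0.0: feature 1 is irrelevant
```

A variable whose scrambling barely changes the error, like feature 1 here, is a candidate for removal, which is exactly the data-set editing described above.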