Parameters are functions of the training data. Hyperparameters are external settings used to tune an algorithm's performance.
In Automated Machine Learning (AutoML), data sets containing known outcomes are used to train models to make predictions. The actual values in a training data set never directly become part of a model. Instead, AutoML algorithms learn patterns in the features (columns) and instances (rows) of the training data and express them as parameters, which are the basis for the model's predictions on new data. Parameters are always a function of the data itself and are never set externally.
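To make this concrete, here is a minimal sketch (generic, not Squark's internals) of fitting a straight line by least squares. The slope and intercept are the model's parameters: they are computed entirely from the training data, and the raw rows themselves never appear in the finished model.

```python
def fit_line(xs, ys):
    """Least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Parameters are derived from the data -- never set by hand.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept  # the learned parameters

# Only the learned parameters survive; the training rows do not.
slope, intercept = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(slope, intercept)  # 2.0 1.0
```

The same principle holds for far more complex models: a trained model is its parameters, not a copy of its training data.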
Hyperparameters, by contrast, are configuration variables external to the training data. They are used to optimize model performance. Think of them as instructions to the ML algorithm on how to approach model building. Each modeling algorithm can be set with hyperparameters appropriate to the particular classification or regression problem.
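A hedged illustration of the distinction, using a toy k-nearest-neighbors classifier (again generic, not Squark-specific): the value of k is a hyperparameter. It is chosen before training, is not derived from the data, and changing it can change the model's predictions.

```python
def knn_predict(train, query, k):
    """Classify `query` by majority vote among its k nearest training points.

    `k` is a hyperparameter: an external knob, not learned from the data.
    """
    by_distance = sorted(train, key=lambda point: abs(point[0] - query))
    labels = [label for _, label in by_distance[:k]]
    return max(set(labels), key=labels.count)

train = [(1.0, "a"), (1.5, "a"), (4.0, "b"), (4.5, "b"), (5.0, "b")]
print(knn_predict(train, 2.0, k=3))  # "a" -- two of the 3 nearest are "a"
print(knn_predict(train, 2.0, k=5))  # "b" -- a different k, a different answer
```

The same query point gets a different label depending on k, which is exactly why choosing good hyperparameter values matters.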
Hyperparameter tuning in Squark is automatic. Squark makes multiple training passes, tracks the results of each trial run, and adjusts hyperparameters for subsequent runs. This progressive improvement in configuration values means Squark converges faster on the most accurate model.
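The trial-and-adjust loop can be sketched in a few lines. This is only a generic illustration under simplifying assumptions; Squark's actual search strategy is internal and automatic, and the `train_fn` and `score_fn` names here are hypothetical stand-ins.

```python
def tune(train_fn, score_fn, candidates):
    """Try each candidate hyperparameter setting; keep the best-scoring one."""
    best_setting, best_score = None, float("-inf")
    for setting in candidates:
        model = train_fn(setting)      # one trial run with this setting
        score = score_fn(model)        # evaluate the resulting model
        if score > best_score:         # remember the best result so far
            best_setting, best_score = setting, score
    return best_setting, best_score

# Toy stand-ins: the "model" is just the depth value, and the
# score is an invented function that peaks at depth 4.
best, score = tune(
    train_fn=lambda depth: depth,
    score_fn=lambda depth: -(depth - 4) ** 2,
    candidates=[1, 2, 3, 4, 5, 6],
)
print(best, score)  # 4 0
```

Real AutoML systems use smarter strategies than exhaustive search (for example, adjusting later trials based on earlier results), but the core loop of trial, score, and keep-the-best is the same idea.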
The takeaway: Squark uses hyperparameters to learn how to learn better as it works through each model, developing shortcuts and best practices much as people do when attacking new problems.