Medial Code Documentation
Functions
---------
A couple of utilities for our experiments:

log_loss(preds, labels)
    Logarithmic loss with non-necessarily-binary labels.
experiment(objective, label_type, data)
    Measure performance of an objective.
Variables
---------
int N = 1000

Simulate some binary data with a single categorical and a single continuous predictor:

X
list CATEGORICAL_EFFECTS = [-1, -1, -2, -2, 2]
LINEAR_TERM
TRUE_PROB = expit(LINEAR_TERM)
Y = np.random.binomial(1, TRUE_PROB, size=N)
dict DATA
int K = 10
list A
list B
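Taken together, these variables suggest a simulation along the following lines. This is a sketch, not the script itself: the exact design of `X` and the coefficients in `LINEAR_TERM` are assumptions, and `expit` is implemented inline rather than imported from `scipy.special`.

```python
import numpy as np

def expit(z):
    # Logistic sigmoid, as in scipy.special.expit.
    return 1.0 / (1.0 + np.exp(-z))

np.random.seed(0)
N = 1000

# One continuous and one categorical predictor (5 levels).
# The exact design is an assumption; only the names come from this page.
continuous = np.linspace(-2, 2, N)
categorical = np.random.randint(0, 5, size=N)

CATEGORICAL_EFFECTS = [-1, -1, -2, -2, 2]
# Hypothetical coefficient on the continuous term.
LINEAR_TERM = 0.5 * continuous + np.array(CATEGORICAL_EFFECTS)[categorical]
TRUE_PROB = expit(LINEAR_TERM)

# Binary labels drawn from the true probabilities.
Y = np.random.binomial(1, TRUE_PROB, size=N)
```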
Comparison of `binary` and `xentropy` objectives. BLUF: the `xentropy` objective performs logistic regression and generalizes to the case where labels are probabilistic (i.e., numbers between 0 and 1). Details: both `binary` and `xentropy` minimize the log loss and use `boost_from_average = True` by default. Possibly the only difference between them with default settings is that `binary` may achieve a slight speed improvement by assuming that the labels are binary instead of probabilistic.
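The generalization is visible in the loss itself: the cross-entropy -y*log(p) - (1-y)*log(1-p) accepts any label y in [0, 1] and reduces to the ordinary binary log loss when y is exactly 0 or 1. A minimal numeric check with NumPy (this is not LightGBM's internal code):

```python
import numpy as np

def xentropy(preds, labels):
    # Cross-entropy loss; labels may be any values in [0, 1].
    return float(np.mean(-labels * np.log(preds) - (1 - labels) * np.log(1 - preds)))

preds = np.array([0.2, 0.7, 0.9])

# With hard 0/1 labels, cross-entropy equals the usual binary log loss.
hard = np.array([0.0, 1.0, 1.0])
binary_log_loss = -np.mean(np.where(hard == 1, np.log(preds), np.log(1 - preds)))
assert np.isclose(xentropy(preds, hard), binary_log_loss)

# Probabilistic labels are equally valid inputs.
soft = np.array([0.1, 0.8, 0.95])
loss_soft = xentropy(preds, soft)
```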
logistic_regression.experiment(objective, label_type, data)
Measure performance of an objective.
Parameters
----------
objective : {'binary', 'xentropy'}
    Objective function.
label_type : {'binary', 'probability'}
    Type of the label.
data : dict
    Data for training.

Returns
-------
result : dict
    Experiment summary stats.
logistic_regression.log_loss(preds, labels)
Logarithmic loss with non-necessarily-binary labels.
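As a sketch of what `log_loss` might look like (the script's actual body is not shown on this page, so the clipping constant and vectorized form below are assumptions):

```python
import numpy as np

def log_loss(preds, labels):
    """Logarithmic loss with non-necessarily-binary labels."""
    # Clip predictions away from 0 and 1 so the logs stay finite;
    # the epsilon is an implementation choice, not from the source.
    preds = np.clip(preds, 1e-15, 1 - 1e-15)
    return float(np.mean(-labels * np.log(preds) - (1 - labels) * np.log(1 - preds)))

# Accepts hard 0/1 labels and probabilistic labels alike.
loss_hard = log_loss(np.array([0.8, 0.3]), np.array([1.0, 0.0]))
loss_soft = log_loss(np.array([0.8, 0.3]), np.array([0.9, 0.2]))
```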
list logistic_regression.A
list logistic_regression.B
dict logistic_regression.DATA
logistic_regression.LINEAR_TERM
logistic_regression.X