lightgbm.engine Namespace Reference

Data Structures

class  _CVBooster
 

Functions

 train (params, train_set, num_boost_round=100, valid_sets=None, valid_names=None, fobj=None, feval=None, init_model=None, feature_name='auto', categorical_feature='auto', early_stopping_rounds=None, evals_result=None, verbose_eval=True, learning_rates=None, keep_training_booster=False, callbacks=None)
 
 _make_n_folds (full_data, folds, nfold, params, seed, fpreproc=None, stratified=True, shuffle=True)
 
 _agg_cv_result (raw_results)
 
 cv (params, train_set, num_boost_round=100, folds=None, nfold=5, stratified=True, shuffle=True, metrics=None, fobj=None, feval=None, init_model=None, feature_name='auto', categorical_feature='auto', early_stopping_rounds=None, fpreproc=None, verbose_eval=None, show_stdv=True, seed=0, callbacks=None)
 

Detailed Description

Library with training routines of LightGBM.

Function Documentation

◆ _agg_cv_result()

lightgbm.engine._agg_cv_result(raw_results)   [protected]

Aggregate cross-validation results.
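
Example
-------
A minimal sketch of what this aggregation could look like, assuming each
fold's raw result is a list of eval tuples of the form
(dataset_name, metric_name, value, is_higher_better); this shape is an
illustrative assumption, not the verified internals.

import numpy as np
from collections import OrderedDict

def agg_cv_result(raw_results):
    # Collect every fold's value for each metric, remembering its direction.
    values = OrderedDict()
    higher_better = {}
    for fold_result in raw_results:
        for _, metric_name, value, is_higher_better in fold_result:
            values.setdefault(metric_name, []).append(value)
            higher_better[metric_name] = is_higher_better
    # Reduce to mean and standard deviation across folds, per metric.
    return [('cv_agg', name, float(np.mean(vals)), higher_better[name],
             float(np.std(vals)))
            for name, vals in values.items()]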

◆ _make_n_folds()

lightgbm.engine._make_n_folds(full_data, folds, nfold, params, seed, fpreproc=None, stratified=True, shuffle=True)   [protected]

Make an n-fold list of Boosters from random indices.
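
Example
-------
An illustrative sketch (an assumption, not the verified internals) of how the
(train_idx, test_idx) pairs behind the n folds could be produced with
scikit-learn splitters, honoring the ``stratified`` and ``shuffle`` flags.

import numpy as np
from sklearn.model_selection import KFold, StratifiedKFold

def make_fold_indices(labels, nfold, seed, stratified=True, shuffle=True):
    # scikit-learn requires random_state to be None when shuffle is False.
    random_state = seed if shuffle else None
    if stratified:
        splitter = StratifiedKFold(n_splits=nfold, shuffle=shuffle,
                                   random_state=random_state)
        return splitter.split(np.zeros(len(labels)), labels)
    splitter = KFold(n_splits=nfold, shuffle=shuffle, random_state=random_state)
    return splitter.split(np.zeros(len(labels)))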

◆ cv()

lightgbm.engine.cv(params, train_set, num_boost_round=100, folds=None, nfold=5, stratified=True, shuffle=True, metrics=None, fobj=None, feval=None, init_model=None, feature_name='auto', categorical_feature='auto', early_stopping_rounds=None, fpreproc=None, verbose_eval=None, show_stdv=True, seed=0, callbacks=None)

Perform the cross-validation with given parameters.

Parameters
----------
params : dict
    Parameters for Booster.
train_set : Dataset
    Data to be trained on.
num_boost_round : int, optional (default=100)
    Number of boosting iterations.
folds : generator or iterator of (train_idx, test_idx) tuples, scikit-learn splitter object or None, optional (default=None)
    If generator or iterator, it should yield the train and test indices for each fold.
    If object, it should be one of the scikit-learn splitter classes
    (http://scikit-learn.org/stable/modules/classes.html#splitter-classes)
    and have a ``split`` method.
    This argument takes precedence over the other data-splitting arguments.
nfold : int, optional (default=5)
    Number of folds in CV.
stratified : bool, optional (default=True)
    Whether to perform stratified sampling.
shuffle : bool, optional (default=True)
    Whether to shuffle before splitting data.
metrics : string, list of strings or None, optional (default=None)
    Evaluation metrics to be monitored during CV.
    If not None, the metric in ``params`` will be overridden.
fobj : callable or None, optional (default=None)
    Customized objective function.
feval : callable or None, optional (default=None)
    Customized evaluation function.
    Should accept two parameters: preds, train_data,
    and return (eval_name, eval_result, is_higher_better) or a list of such tuples.
    For multi-class tasks, preds are grouped by class_id first, then by row_id:
    to get the prediction for the i-th row in the j-th class, use preds[j * num_data + i].
    To ignore the default metric corresponding to the used objective,
    set ``metrics`` to the string ``"None"``.
    A usage sketch with a custom ``feval`` appears after the Returns section below.
init_model : string, Booster or None, optional (default=None)
    Filename of a LightGBM model, or a Booster instance, used to continue training.
feature_name : list of strings or 'auto', optional (default="auto")
    Feature names.
    If 'auto' and data is a pandas DataFrame, data column names are used.
categorical_feature : list of strings or int, or 'auto', optional (default="auto")
    Categorical features.
    If list of int, interpreted as indices.
    If list of strings, interpreted as feature names (need to specify ``feature_name`` as well).
    If 'auto' and data is a pandas DataFrame, pandas categorical columns are used.
    All values in categorical features should be less than the int32 max value (2147483647).
    Large values can be memory-consuming; consider using consecutive integers starting from zero.
    All negative values in categorical features will be treated as missing values.
early_stopping_rounds : int or None, optional (default=None)
    Activates early stopping.
    The CV score needs to improve at least every ``early_stopping_rounds`` round(s)
    to continue.
    Requires at least one metric. If there is more than one, all of them will be checked.
    The last entry in the evaluation history is the one from the best iteration.
fpreproc : callable or None, optional (default=None)
    Preprocessing function that takes (dtrain, dtest, params)
    and returns transformed versions of those.
verbose_eval : bool, int, or None, optional (default=None)
    Whether to display the progress.
    If None, progress will be displayed when np.ndarray is returned.
    If True, progress will be displayed at every boosting stage.
    If int, progress will be displayed every ``verbose_eval`` boosting stages.
show_stdv : bool, optional (default=True)
    Whether to display the standard deviation in progress.
    The returned results are not affected by this parameter and always contain the standard deviation.
seed : int, optional (default=0)
    Seed used to generate the folds (passed to numpy.random.seed).
callbacks : list of callables or None, optional (default=None)
    List of callback functions that are applied at each iteration.
    See Callbacks in Python API for more information.

Returns
-------
eval_hist : dict
    Evaluation history.
    The dictionary has the following format:
    {'metric1-mean': [values], 'metric1-stdv': [values],
    'metric2-mean': [values], 'metric2-stdv': [values],
    ...}.
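
Example
-------
A minimal usage sketch of ``cv`` against the signature documented above; the
synthetic data, parameter values and the custom ``feval`` (here a
misclassification rate) are illustrative assumptions, not part of the library.

import numpy as np
import lightgbm as lgb

rng = np.random.RandomState(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

train_set = lgb.Dataset(X, label=y)
params = {'objective': 'binary', 'metric': 'binary_logloss', 'verbosity': -1}

def error_rate(preds, train_data):
    # With the built-in binary objective, preds are positive-class probabilities.
    labels = train_data.get_label()
    return 'error', float(np.mean((preds > 0.5) != labels)), False

eval_hist = lgb.cv(params, train_set, num_boost_round=50, nfold=5,
                   stratified=True, feval=error_rate, seed=0)

# Keys follow the '<metric>-mean' / '<metric>-stdv' format described above.
print(eval_hist['binary_logloss-mean'][-1], eval_hist['binary_logloss-stdv'][-1])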

◆ train()

lightgbm.engine.train(params, train_set, num_boost_round=100, valid_sets=None, valid_names=None, fobj=None, feval=None, init_model=None, feature_name='auto', categorical_feature='auto', early_stopping_rounds=None, evals_result=None, verbose_eval=True, learning_rates=None, keep_training_booster=False, callbacks=None)
Perform the training with given parameters.

Parameters
----------
params : dict
    Parameters for training.
train_set : Dataset
    Data to be trained on.
num_boost_round : int, optional (default=100)
    Number of boosting iterations.
valid_sets : list of Datasets or None, optional (default=None)
    List of data to be evaluated on during training.
valid_names : list of strings or None, optional (default=None)
    Names of ``valid_sets``.
fobj : callable or None, optional (default=None)
    Customized objective function.
feval : callable or None, optional (default=None)
    Customized evaluation function.
    Should accept two parameters: preds, train_data,
    and return (eval_name, eval_result, is_higher_better) or a list of such tuples.
    For multi-class tasks, preds are grouped by class_id first, then by row_id:
    to get the prediction for the i-th row in the j-th class, use preds[j * num_data + i].
    To ignore the default metric corresponding to the used objective,
    set the ``metric`` parameter to the string ``"None"`` in ``params``.
init_model : string, Booster or None, optional (default=None)
    Filename of a LightGBM model, or a Booster instance, used to continue training.
feature_name : list of strings or 'auto', optional (default="auto")
    Feature names.
    If 'auto' and data is a pandas DataFrame, data column names are used.
categorical_feature : list of strings or int, or 'auto', optional (default="auto")
    Categorical features.
    If list of int, interpreted as indices.
    If list of strings, interpreted as feature names (need to specify ``feature_name`` as well).
    If 'auto' and data is a pandas DataFrame, pandas categorical columns are used.
    All values in categorical features should be less than the int32 max value (2147483647).
    Large values can be memory-consuming; consider using consecutive integers starting from zero.
    All negative values in categorical features will be treated as missing values.
early_stopping_rounds : int or None, optional (default=None)
    Activates early stopping. The model will train until the validation score stops improving.
    The validation score needs to improve at least every ``early_stopping_rounds`` round(s)
    to continue training.
    Requires at least one validation dataset and one metric.
    If there is more than one, all of them will be checked; the training data is ignored either way.
    The index of the best-performing iteration is saved in the Booster's ``best_iteration`` field
    when early stopping is enabled via ``early_stopping_rounds``.
evals_result : dict or None, optional (default=None)
    Dictionary used to store all evaluation results of all the items in ``valid_sets``.

    Example
    -------
    With ``valid_sets`` = [valid_set, train_set],
    ``valid_names`` = ['eval', 'train']
    and ``params`` = {'metric': 'logloss'},
    ``evals_result`` will contain
    {'train': {'logloss': ['0.48253', '0.35953', ...]},
    'eval': {'logloss': ['0.480385', '0.357756', ...]}}.

verbose_eval : bool or int, optional (default=True)
    Requires at least one validation dataset.
    If True, the eval metric on the valid set is printed at each boosting stage.
    If int, the eval metric on the valid set is printed every ``verbose_eval`` boosting stages.
    The last boosting stage, or the stage found by using ``early_stopping_rounds``, is also printed.

    Example
    -------
    With ``verbose_eval`` = 4 and at least one item in ``valid_sets``,
    an evaluation metric is printed every 4 (instead of 1) boosting stages.

learning_rates : list, callable or None, optional (default=None)
    List of learning rates for each boosting round,
    or a customized function that calculates the ``learning_rate``
    from the current round number (e.g., to produce learning-rate decay);
    see the usage sketch at the end of this section.
keep_training_booster : bool, optional (default=False)
    Whether the returned Booster will be used to keep training.
    If False, the returned value will be converted into an _InnerPredictor before returning.
    You can still use the _InnerPredictor as ``init_model`` to continue training later.
callbacks : list of callables or None, optional (default=None)
    List of callback functions that are applied at each iteration.
    See Callbacks in Python API for more information.

Returns
-------
booster : Booster
    The trained Booster model.
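
Example
-------
A minimal usage sketch of ``train`` against the signature documented above,
exercising ``valid_sets``, ``early_stopping_rounds``, ``evals_result`` and a
callable ``learning_rates`` schedule; the synthetic data and parameter values
are illustrative assumptions.

import numpy as np
import lightgbm as lgb

rng = np.random.RandomState(0)
X = rng.normal(size=(1000, 10))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=1000)

train_data = lgb.Dataset(X[:800], label=y[:800])
valid_data = lgb.Dataset(X[800:], label=y[800:], reference=train_data)

params = {'objective': 'regression', 'metric': 'l2', 'verbosity': -1}
evals_result = {}

booster = lgb.train(
    params,
    train_data,
    num_boost_round=200,
    valid_sets=[valid_data, train_data],
    valid_names=['eval', 'train'],
    early_stopping_rounds=10,
    evals_result=evals_result,
    verbose_eval=20,
    # Exponential per-round decay of the learning rate.
    learning_rates=lambda round_num: 0.1 * (0.99 ** round_num),
)

print('best iteration:', booster.best_iteration)
print('final eval l2:', evals_result['eval']['l2'][-1])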