Medial Code Documentation
Public Member Functions
| None | __init__ (self, *, SklObjective objective="binary:logistic", **Any kwargs) |
| "XGBClassifier" | fit (self, ArrayLike X, ArrayLike y, *, Optional[ArrayLike] sample_weight=None, Optional[ArrayLike] base_margin=None, Optional[Sequence[Tuple[ArrayLike, ArrayLike]]] eval_set=None, Optional[Union[str, Sequence[str], Metric]] eval_metric=None, Optional[int] early_stopping_rounds=None, Optional[Union[bool, int]] verbose=True, Optional[Union[Booster, str, XGBModel]] xgb_model=None, Optional[Sequence[ArrayLike]] sample_weight_eval_set=None, Optional[Sequence[ArrayLike]] base_margin_eval_set=None, Optional[ArrayLike] feature_weights=None, Optional[Sequence[TrainingCallback]] callbacks=None) |
| ArrayLike | predict (self, ArrayLike X, bool output_margin=False, bool validate_features=True, Optional[ArrayLike] base_margin=None, Optional[Tuple[int, int]] iteration_range=None) |
| np.ndarray | predict_proba (self, ArrayLike X, bool validate_features=True, Optional[ArrayLike] base_margin=None, Optional[Tuple[int, int]] iteration_range=None) |
| np.ndarray | classes_ (self) |
Public Member Functions inherited from xgboost.sklearn.XGBModel
| bool | __sklearn_is_fitted__ (self) |
| Booster | get_booster (self) |
| "XGBModel" | set_params (self, **Any params) |
| Dict[str, Any] | get_params (self, bool deep=True) |
| Dict[str, Any] | get_xgb_params (self) |
| int | get_num_boosting_rounds (self) |
| None | save_model (self, Union[str, os.PathLike] fname) |
| None | load_model (self, ModelIn fname) |
| np.ndarray | apply (self, ArrayLike X, Optional[Tuple[int, int]] iteration_range=None) |
| Dict[str, Dict[str, List[float]]] | evals_result (self) |
| int | n_features_in_ (self) |
| np.ndarray | feature_names_in_ (self) |
| float | best_score (self) |
| int | best_iteration (self) |
| np.ndarray | feature_importances_ (self) |
| np.ndarray | coef_ (self) |
| np.ndarray | intercept_ (self) |
Data Fields
| n_classes_ | |
| objective | |
Data Fields inherited from xgboost.sklearn.XGBModel
| n_estimators | |
| objective | |
| max_depth | |
| max_leaves | |
| max_bin | |
| grow_policy | |
| learning_rate | |
| verbosity | |
| booster | |
| tree_method | |
| gamma | |
| min_child_weight | |
| max_delta_step | |
| subsample | |
| sampling_method | |
| colsample_bytree | |
| colsample_bylevel | |
| colsample_bynode | |
| reg_alpha | |
| reg_lambda | |
| scale_pos_weight | |
| base_score | |
| missing | |
| num_parallel_tree | |
| random_state | |
| n_jobs | |
| monotone_constraints | |
| interaction_constraints | |
| importance_type | |
| device | |
| validate_parameters | |
| enable_categorical | |
| feature_types | |
| max_cat_to_onehot | |
| max_cat_threshold | |
| multi_strategy | |
| eval_metric | |
| early_stopping_rounds | |
| callbacks | |
| kwargs | |
| n_classes_ | |
| evals_result_ | |
Static Public Attributes
| str | extra_parameters |
Protected Attributes
| _Booster | |
Protected Attributes inherited from xgboost.sklearn.XGBModel
| _Booster | |
Additional Inherited Members
Protected Member Functions inherited from xgboost.sklearn.XGBModel
| Dict[str, bool] | _more_tags (self) |
| str | _get_type (self) |
| None | _load_model_attributes (self, dict config) |
| Tuple[Optional[Union[Booster, str, "XGBModel"]], Optional[Metric], Dict[str, Any], Optional[int], Optional[Sequence[TrainingCallback]]] | _configure_fit (self, Optional[Union[Booster, "XGBModel", str]] booster, Optional[Union[Callable, str, Sequence[str]]] eval_metric, Dict[str, Any] params, Optional[int] early_stopping_rounds, Optional[Sequence[TrainingCallback]] callbacks) |
| DMatrix | _create_dmatrix (self, Optional[DMatrix] ref, **Any kwargs) |
| None | _set_evaluation_result (self, TrainingCallback.EvalsLog evals_result) |
| bool | _can_use_inplace_predict (self) |
| Tuple[int, int] | _get_iteration_range (self, Optional[Tuple[int, int]] iteration_range) |
None xgboost.sklearn.XGBClassifier.__init__(
    self,
    *,
    SklObjective objective = "binary:logistic",
    **Any kwargs
)
Reimplemented from xgboost.sklearn.XGBModel.
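As a quick illustration of the constructor: ``objective`` is keyword-only, and any other :py:class:`XGBModel` hyperparameter is forwarded through ``**kwargs``. A minimal sketch (the hyperparameter values below are illustrative assumptions, not recommendations from this documentation):

.. code-block:: python

    from xgboost import XGBClassifier

    # objective defaults to "binary:logistic"; the remaining keyword
    # arguments pass through **kwargs to XGBModel. Values are arbitrary.
    clf = XGBClassifier(
        objective="binary:logistic",
        n_estimators=100,
        max_depth=4,
        learning_rate=0.1,
    )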
| "XGBClassifier" xgboost.sklearn.XGBClassifier.fit | ( | self, | |
| ArrayLike | X, | ||
| ArrayLike | y, | ||
| *Optional[ArrayLike] | sample_weight = None, |
||
| Optional[ArrayLike] | base_margin = None, |
||
| Optional[Sequence[Tuple[ArrayLike, ArrayLike]]] | eval_set = None, |
||
| Optional[Union[str, Sequence[str], Metric]] | eval_metric = None, |
||
| Optional[int] | early_stopping_rounds = None, |
||
| Optional[Union[bool, int]] | verbose = True, |
||
| Optional[Union[Booster, str, XGBModel]] | xgb_model = None, |
||
| Optional[Sequence[ArrayLike]] | sample_weight_eval_set = None, |
||
| Optional[Sequence[ArrayLike]] | base_margin_eval_set = None, |
||
| Optional[ArrayLike] | feature_weights = None, |
||
| Optional[Sequence[TrainingCallback]] | callbacks = None |
||
| ) |
Fit gradient boosting model.
Note that calling ``fit()`` multiple times will cause the model object to be
re-fit from scratch. To resume training from a previous checkpoint, explicitly
pass the ``xgb_model`` argument.
Parameters
----------
X :
Feature matrix. See :ref:`py-data` for a list of supported types.
When the ``tree_method`` is set to ``hist``, internally, the
:py:class:`QuantileDMatrix` will be used instead of the :py:class:`DMatrix`
for conserving memory. However, this has performance implications when the
device of the input data does not match the device used for training. For
instance, if the input is a numpy array on CPU but ``cuda`` is used for
training, the data is first processed on CPU and then transferred to GPU.
y :
Labels
sample_weight :
Instance weights.
base_margin :
Global bias for each instance.
eval_set :
A list of (X, y) tuple pairs to use as validation sets, for which
metrics will be computed. Validation metrics help track the
performance of the model.
eval_metric : str, list of str, or callable, optional
.. deprecated:: 1.6.0
Use `eval_metric` in :py:meth:`__init__` or :py:meth:`set_params` instead.
early_stopping_rounds : int
.. deprecated:: 1.6.0
Use `early_stopping_rounds` in :py:meth:`__init__` or :py:meth:`set_params`
instead.
verbose :
If `verbose` is True and an evaluation set is used, the evaluation metric
measured on the validation set is printed to stdout at each boosting stage.
If `verbose` is an integer, the evaluation metric is printed at each
`verbose` boosting stage. The last boosting stage / the boosting stage found
by using `early_stopping_rounds` is also printed.
xgb_model :
File name of a stored XGBoost model or a ``Booster`` instance to be
loaded before training (allows training continuation).
sample_weight_eval_set :
A list of the form [L_1, L_2, ..., L_n], where each L_i is an array-like
object storing instance weights for the i-th validation set.
base_margin_eval_set :
A list of the form [M_1, M_2, ..., M_n], where each M_i is an array-like
object storing base margin for the i-th validation set.
feature_weights :
Weight for each feature, defining the probability of each feature being
selected when colsample is in use. All values must be greater than 0;
otherwise a `ValueError` is raised.
callbacks :
.. deprecated:: 1.6.0
Use `callbacks` in :py:meth:`__init__` or :py:meth:`set_params` instead.
Reimplemented from xgboost.sklearn.XGBModel.
Reimplemented in xgboost.sklearn.XGBRFClassifier.
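A hedged sketch of a typical training call, placing ``eval_metric`` and ``early_stopping_rounds`` in :py:meth:`__init__` per the deprecation notes above, and continuing training from a previous booster via ``xgb_model``. The synthetic data and all hyperparameter values are illustrative assumptions:

.. code-block:: python

    import numpy as np
    from xgboost import XGBClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] > 0).astype(int)

    # eval_metric / early_stopping_rounds belong in __init__, not fit().
    clf = XGBClassifier(
        n_estimators=50, eval_metric="logloss", early_stopping_rounds=5
    )
    clf.fit(X[:150], y[:150], eval_set=[(X[150:], y[150:])], verbose=False)

    # Calling fit() again retrains from scratch; pass xgb_model to
    # continue boosting from the previously trained booster instead.
    clf2 = XGBClassifier(n_estimators=20)
    clf2.fit(X[:150], y[:150], xgb_model=clf.get_booster())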
ArrayLike xgboost.sklearn.XGBClassifier.predict(
    self,
    ArrayLike X,
    bool output_margin = False,
    bool validate_features = True,
    Optional[ArrayLike] base_margin = None,
    Optional[Tuple[int, int]] iteration_range = None
)
Predict with `X`. If the model is trained with early stopping, then
:py:attr:`best_iteration` is used automatically. The estimator uses
`inplace_predict` by default and falls back to using :py:class:`DMatrix` if
devices between the data and the estimator don't match.
.. note:: This function is only thread safe for `gbtree` and `dart`.
Parameters
----------
X :
Data to predict with.
output_margin :
Whether to output the raw untransformed margin value.
validate_features :
When this is True, validate that the Booster's and data's feature_names are
identical. Otherwise, it is assumed that the feature_names are the same.
base_margin :
Margin added to prediction.
iteration_range :
Specifies which layer of trees is used in prediction. For example, if a
random forest is trained with 100 rounds, specifying
``iteration_range=(10, 20)`` uses only the forests built during rounds
[10, 20) (half-open interval) for this prediction.
.. versionadded:: 1.4.0
Returns
-------
prediction
Reimplemented from xgboost.sklearn.XGBModel.
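Continuing the ``fit()`` sketch above (``clf`` and ``X`` are assumed from there), a brief illustration of ``output_margin`` and ``iteration_range``:

.. code-block:: python

    # Class labels using all trees (or best_iteration under early stopping).
    labels = clf.predict(X[150:])

    # Use only the trees built during rounds [0, 10).
    labels_early = clf.predict(X[150:], iteration_range=(0, 10))

    # Raw untransformed margin scores instead of class labels.
    margins = clf.predict(X[150:], output_margin=True)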
np.ndarray xgboost.sklearn.XGBClassifier.predict_proba(
    self,
    ArrayLike X,
    bool validate_features = True,
    Optional[ArrayLike] base_margin = None,
    Optional[Tuple[int, int]] iteration_range = None
)
Predict the probability of each `X` example being of a given class. If the
model is trained with early stopping, then :py:attr:`best_iteration` is used
automatically. The estimator uses `inplace_predict` by default and falls back to
using :py:class:`DMatrix` if devices between the data and the estimator don't
match.
.. note:: This function is only thread safe for `gbtree` and `dart`.
Parameters
----------
X :
Feature matrix. See :ref:`py-data` for a list of supported types.
validate_features :
When this is True, validate that the Booster's and data's feature_names are
identical. Otherwise, it is assumed that the feature_names are the same.
base_margin :
Margin added to prediction.
iteration_range :
Specifies which layer of trees is used in prediction. For example, if a
random forest is trained with 100 rounds, specifying
`iteration_range=(10, 20)` uses only the forests built during rounds
[10, 20) (half-open interval) for this prediction.
Returns
-------
prediction :
A numpy array of shape (n_samples, n_classes) with the probability of
each data example being of a given class.
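Again reusing ``clf`` and ``X`` from the ``fit()`` sketch above, a short illustration of the returned shape:

.. code-block:: python

    proba = clf.predict_proba(X[150:])      # shape (n_samples, n_classes)
    assert proba.shape[1] == clf.n_classes_

    # For binary:logistic, column 1 holds P(y == 1) and rows sum to 1.
    labels = proba.argmax(axis=1)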