Medial Code Documentation
Namespaces | |
namespace | dask |
namespace | data |
namespace | data_iter |
namespace | metrics |
namespace | params |
namespace | ranking |
namespace | shared |
namespace | updater |
Data Structures | |
class | DirectoryExcursion |
class | IteratorForTest |
class | TestDataset |
Functions | |
bool | has_ipv6 () |
PytestSkip | no_mod (str name) |
PytestSkip | no_ipv6 () |
PytestSkip | no_ubjson () |
PytestSkip | no_sklearn () |
PytestSkip | no_dask () |
PytestSkip | no_dask_ml () |
PytestSkip | no_spark () |
PytestSkip | no_pandas () |
PytestSkip | no_arrow () |
PytestSkip | no_modin () |
PytestSkip | no_dt () |
PytestSkip | no_matplotlib () |
PytestSkip | no_dask_cuda () |
PytestSkip | no_cudf () |
PytestSkip | no_cupy () |
PytestSkip | no_dask_cudf () |
PytestSkip | no_json_schema () |
PytestSkip | no_graphviz () |
PytestSkip | no_rmm () |
PytestSkip | no_multiple (*Any args) |
PytestSkip | skip_s390x () |
Tuple[List[np.ndarray], List[np.ndarray], List[np.ndarray]] | make_batches (int n_samples_per_batch, int n_features, int n_batches, bool use_cupy=False, *bool vary_size=False) |
Tuple[ArrayLike, ArrayLike, ArrayLike] | make_regression (int n_samples, int n_features, bool use_cupy) |
Tuple[List[sparse.csr_matrix], List[np.ndarray], List[np.ndarray]] | make_batches_sparse (int n_samples_per_batch, int n_features, int n_batches, float sparsity) |
Tuple[ArrayLike, np.ndarray] | make_categorical (int n_samples, int n_features, int n_categories, bool onehot, float sparsity=0.0, float cat_ratio=1.0, bool shuffle=False) |
Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray] | make_ltr (int n_samples, int n_features, int n_query_groups, int max_rel) |
strategies.SearchStrategy | _cat_sampled_from () |
Tuple[Union[sparse.csr_matrix], np.ndarray] | make_sparse_regression (int n_samples, int n_features, float sparsity, bool as_dense) |
Callable | make_datasets_with_margin (strategies.SearchStrategy unweighted_strategy) |
Callable | make_dataset_strategy () |
bool | non_increasing (Sequence[float] L, float tolerance=1e-4) |
bool | predictor_equal (xgb.DMatrix lhs, xgb.DMatrix rhs) |
Tuple[str, np.float64] | eval_error_metric (np.ndarray predt, xgb.DMatrix dtrain) |
np.float64 | eval_error_metric_skl (np.ndarray y_true, np.ndarray y_score) |
float | root_mean_square (np.ndarray y_true, np.ndarray y_score) |
np.ndarray | softmax (np.ndarray x) |
SklObjective | softprob_obj (int classes) |
Generator[Tuple[StringIO, StringIO], None, None] | captured_output () |
Any | timeout (int sec, *Any args, bool enable=True, **Any kwargs) |
None | setup_rmm_pool (Any _, pytest.Config pytestconfig) |
List[str] | get_client_workers (Any client) |
str | demo_dir (str path) |
str | normpath (str path) |
str | data_dir (str path) |
Tuple[xgb.DMatrix, xgb.DMatrix] | load_agaricus (str path) |
str | project_root (str path) |
Variables | |
hypothesis = pytest.importorskip("hypothesis") | |
datasets = pytest.importorskip("sklearn.datasets") | |
PytestSkip = TypedDict("PytestSkip", {"condition": bool, "reason": str}) | |
strategies | categorical_dataset_strategy = _cat_sampled_from() |
sparse_datasets_strategy | |
_unweighted_multi_datasets_strategy | |
Callable | multi_dataset_strategy |
M = TypeVar("M", xgb.Booster, xgb.XGBModel) | |
Utilities for defining Python tests. The module is private and subject to frequent change without notice.
Generator[Tuple[StringIO, StringIO], None, None] xgboost.testing.captured_output ( )
Temporarily reassign stdout and stderr in order to test printed output. Taken from: https://stackoverflow.com/questions/4219717/how-to-assert-output-with-nosetest-unittest-in-python. Also works under pytest.
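A minimal usage sketch; the test function and the printed string are illustrative, not taken from the test suite:

    import xgboost.testing as tm

    def test_output_is_captured() -> None:
        # captured_output yields the temporary stdout/stderr buffers as StringIO objects.
        with tm.captured_output() as (out, err):
            print("hello")
        assert out.getvalue().strip() == "hello"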
str xgboost.testing.demo_dir ( str path )
Look for the demo directory based on the test file name.
Tuple[str, np.float64] xgboost.testing.eval_error_metric ( np.ndarray predt, xgb.DMatrix dtrain )
Evaluation metric for use with xgb.train.
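A hedged sketch of plugging the metric into xgb.train via the custom_metric argument; the toy dataset below is purely illustrative:

    import numpy as np
    import xgboost as xgb
    import xgboost.testing as tm

    rng = np.random.default_rng(0)
    X = rng.normal(size=(128, 4))
    y = (X[:, 0] > 0).astype(np.float64)
    dtrain = xgb.DMatrix(X, label=y)

    # The metric has the (predt, dtrain) -> (name, value) shape expected by custom_metric.
    xgb.train(
        {"objective": "binary:logistic"},
        dtrain,
        num_boost_round=4,
        evals=[(dtrain, "train")],
        custom_metric=tm.eval_error_metric,
    )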
np.float64 xgboost.testing.eval_error_metric_skl ( np.ndarray y_true, np.ndarray y_score )
Evaluation metric that looks like metrics provided by sklearn.
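Because it follows the sklearn (y_true, y_score) convention, it can be passed as eval_metric to the sklearn estimator wrappers; a small illustrative sketch:

    import numpy as np
    import xgboost as xgb
    import xgboost.testing as tm

    rng = np.random.default_rng(0)
    X = rng.normal(size=(128, 4))
    y = (X[:, 0] > 0).astype(np.int64)

    clf = xgb.XGBClassifier(n_estimators=4, eval_metric=tm.eval_error_metric_skl)
    clf.fit(X, y, eval_set=[(X, y)])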
bool xgboost.testing.has_ipv6 ( )
Check whether IPv6 is enabled on this host.
Tuple[ArrayLike, np.ndarray] xgboost.testing.make_categorical ( int n_samples, int n_features, int n_categories, bool onehot, float sparsity = 0.0, float cat_ratio = 1.0, bool shuffle = False )
Generate categorical features for tests.

Parameters
----------
n_categories:
    Number of categories for categorical features.
onehot:
    Should we apply one-hot encoding to the data?
sparsity:
    The ratio of the amount of missing values over the number of all entries.
cat_ratio:
    The ratio of features that are categorical.
shuffle:
    Whether we should shuffle the columns.

Returns
-------
X, y
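An illustrative sketch: generate a small categorical dataset and feed it to a DMatrix with categorical support enabled (the parameter values are chosen arbitrarily):

    import xgboost as xgb
    import xgboost.testing as tm

    # onehot=False keeps the categorical columns as category-typed data.
    X, y = tm.make_categorical(n_samples=256, n_features=4, n_categories=8, onehot=False)
    dtrain = xgb.DMatrix(X, label=y, enable_categorical=True)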
Callable xgboost.testing.make_datasets_with_margin ( strategies.SearchStrategy unweighted_strategy )
Factory function for creating strategies that generate datasets with weight and base margin.
Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray] xgboost.testing.make_ltr ( int n_samples, int n_features, int n_query_groups, int max_rel )
Make a dataset for testing learning-to-rank (LTR).
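A sketch of typical use with the sklearn ranking wrapper; the unpacking order of the four returned arrays is an assumption made for illustration:

    import xgboost as xgb
    import xgboost.testing as tm

    # Assumed unpacking order: features, labels, query ids, per-sample weights.
    X, y, qid, w = tm.make_ltr(n_samples=512, n_features=8, n_query_groups=4, max_rel=3)

    ranker = xgb.XGBRanker(n_estimators=4, objective="rank:ndcg")
    ranker.fit(X, y, qid=qid)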
Tuple[ArrayLike, ArrayLike, ArrayLike] xgboost.testing.make_regression ( int n_samples, int n_features, bool use_cupy )
Make a simple regression dataset.
Tuple[Union[sparse.csr_matrix], np.ndarray] xgboost.testing.make_sparse_regression ( int n_samples, int n_features, float sparsity, bool as_dense )
Make a sparse matrix for regression tests.

Parameters
----------
as_dense:
    Return the matrix as np.ndarray with missing values filled by NaN.
PytestSkip xgboost.testing.no_ipv6 ( )
PyTest skip mark for IPv6.
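Since PytestSkip is a dict with "condition" and "reason" keys, the return value can be unpacked straight into pytest.mark.skipif; a minimal sketch with a hypothetical test:

    import pytest
    import xgboost.testing as tm

    @pytest.mark.skipif(**tm.no_ipv6())
    def test_needs_ipv6() -> None:
        ...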
bool xgboost.testing.predictor_equal ( xgb.DMatrix lhs, xgb.DMatrix rhs )
Check whether two DMatrices contain the same predictor (feature) values.
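An illustrative comparison of a DMatrix built from dense data against one built from the same data in CSR form; the data here is made up for the example:

    import numpy as np
    from scipy import sparse
    import xgboost as xgb
    import xgboost.testing as tm

    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 8))

    lhs = xgb.DMatrix(X)
    rhs = xgb.DMatrix(sparse.csr_matrix(X))
    assert tm.predictor_equal(lhs, rhs)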
Any xgboost.testing.timeout ( int sec, *Any args, bool enable = True, **Any kwargs )
Make a pytest mark for the `pytest-timeout` package.

Parameters
----------
sec:
    Timeout seconds.
enable:
    Control whether timeout should be applied, used for debugging.

Returns
-------
pytest.mark.timeout
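A sketch of applying the returned mark as a decorator; the test names are hypothetical, and the enable=False variant follows the parameter description above:

    import xgboost.testing as tm

    @tm.timeout(30)
    def test_slow_path() -> None:
        ...

    # Switch the timeout off while stepping through the test in a debugger.
    @tm.timeout(30, enable=False)
    def test_slow_path_debug() -> None:
        ...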
Callable xgboost.testing.multi_dataset_strategy |