Development Kit

Overview

The Development Kit validates newly developed models before integration with AlgoMarker. It verifies that your model includes the essential components (cleaners, imputers, bootstrap results, etc.) and passes a comprehensive test suite on the same dataset used for training. For external validation, see the External Silent Run kit.

Goals

  • Ensure model quality and completeness before deployment.
  • Automate validation of key model components.
  • Provide reproducible, standardized testing.

How to Use

Please refer to Creating a New TestKit for Your Model. To run all tests, execute the following from the created TestKit folder:

./run.sh
Alternatively, use run.specific.sh to execute a specific test.

Review results in your configured output directory.
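For example, a session might look like the following. The folder name and the argument passed to run.specific.sh are hypothetical; check the scripts in your TestKit folder for the exact names and arguments they accept:

```shell
cd my_model_testkit           # hypothetical TestKit folder name
./run.sh                      # run the full test suite
./run.specific.sh bootstrap   # hypothetical argument: run a single test
```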

Configuration

Set required parameters in env.sh. If a parameter is missing for a test, that test will be skipped.

  • REPOSITORY_PATH: Path to your data repository.
  • MODEL_PATH: Path to your trained model.
  • WORK_DIR: Output directory for results.
  • CALIBRATED_MODEL, EXPLAINABLE_MODEL: Optional, for calibration and explainability tests.
  • BT_JSON, BT_COHORT: Bootstrap configuration files.
  • NOISER_JSON, TIME_NOISES, VAL_NOISES, DROP_NOISES: For noise sensitivity analysis.
  • BASELINE_MODEL_PATH, BASELINE_COMPARE_TOP: For baseline comparison.
  • See the full parameter list above for details.
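The skip rule above can be sketched as follows. This is a minimal illustration, not the kit's actual code: run_if_configured, the test names, and the parameter values are all hypothetical, showing only that a test runs when every parameter it needs is set and is skipped otherwise.

```shell
#!/bin/sh
# Illustrative env.sh-style values: MODEL_PATH is set,
# CALIBRATED_MODEL is deliberately left empty.
MODEL_PATH="/models/my_model.bin"
CALIBRATED_MODEL=""

# Run a named test only if every listed parameter is non-empty.
run_if_configured() {
  test_name=$1; shift
  for var in "$@"; do
    eval "val=\${$var}"              # indirect lookup of the parameter
    if [ -z "$val" ]; then
      echo "SKIP $test_name (missing $var)"
      return 0
    fi
  done
  echo "RUN $test_name"
}

run_if_configured basic_validation MODEL_PATH
run_if_configured calibration MODEL_PATH CALIBRATED_MODEL
```

Here the first call prints `RUN basic_validation`, while the second is skipped because CALIBRATED_MODEL is empty, mirroring how optional tests (calibration, explainability, noise sensitivity, baseline comparison) drop out when their parameters are absent.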

Additional Files

  • coverage_groups.py: Defines high-risk groups for coverage tests.
  • feat_resolution.tsv: Controls feature resolution for matrix feature tests.

Test Descriptions

Each test in this kit is documented separately: