Testing with DllAPITester
The DllAPITester is a comprehensive testing tool for AlgoMarkers. It provides several useful features:
- Testing results on a sample set, both via the infrastructure and via the AlgoMarker
- Testing via the AlgoMarker library in the infrastructure or via the .so compiled for it
- Generating JSON examples from data
- Testing on JSON examples
In the following explanations we will assume one has an AlgoMarker with a model inside, a repository to test on, and a samples file to work with.
DllAPITester Help Command
Options:
- `--rep`: Repository file name
- `--samples`: MedSamples file to use
- `--model`: Model file to use
- `--amconfig`: AlgoMarker configuration file
- `--amlib`: The actual .so to use when making AM_API calls (if not given, uses the current code compiled with DllAPITester)
- `--direct_csv`: (optional) The full feature matrix generated by running through the infrastructure
- `--am_csv`: (optional) The full feature matrix generated by running through the AlgoMarker
- `--out_jsons`: (optional) If given, generates a file with a list of JSONs (in one long array) that contain all the data needed to produce a prediction. These can be used for direct tests in the AlgoAnalyzer.
- `--in_jsons`: (optional) If given, takes input data from the given JSONs instead of the repository and samples.
- `--jreq`: Input JSON request (may also contain the data of `in_jsons`)
- `--jresp`: Store the output as a JSON response in the given file
- `--create_jreq`: Generate a request for a given samples file (also requires `jreq_defs` and, optionally, the `--add_data_to_jreq` flag) and write it to the output file given in `--jreq_out`
Application Code and Build
The app is located in ../MR/Libs/Internal/AlgoMarker/Linux/Release/DllAPITester. To build it, compile the AlgoMarker directory against the Boost library (Internal/AlgoMarker/full_build.sh).
Testing a Batch of Examples and Comparing the Infrastructure to the AlgoMarker Implementation
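A batch comparison run can be sketched as follows. This is a hypothetical invocation: the flags are taken from the help text above, but all file paths are placeholders, and the exact file formats depend on your setup.

```shell
# Hypothetical batch run: score the samples through both the infrastructure
# and the AlgoMarker, writing each feature matrix to its own CSV.
# All paths below are placeholders.
CMD="DllAPITester \
  --rep /path/to/repository.cfg \
  --samples /path/to/samples.medsamples \
  --model /path/to/model.bin \
  --amconfig /path/to/algomarker.amconfig \
  --amlib /path/to/libAlgoMarker.so \
  --direct_csv infra_matrix.csv \
  --am_csv am_matrix.csv"
echo "$CMD"
```

The two CSVs can then be diffed to verify that the AlgoMarker reproduces the infrastructure's feature matrix and scores.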
> [!NOTE]
> This only tests scores, not other outputs such as explainability or outlier warnings.
A full usage example of creating JSON inputs and comparing the results with using the infrastructure directly can be seen in this script: MR_Tools/test_algomarker/scripts_template/run_minimal_score_compare.sh
Testing a Single Example Directly
Another option is to run a direct test using self-explanatory test data formats.
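A direct test can take its input from JSON examples rather than a repository. The sketch below is hypothetical: it uses the `--in_jsons` flag from the help text above, with placeholder paths, and the exact combination of flags required may differ.

```shell
# Hypothetical direct test: input data comes from a JSON examples file
# (e.g. one produced earlier with --out_jsons) instead of a repository
# and samples. All paths are placeholders.
CMD="DllAPITester \
  --amconfig /path/to/algomarker.amconfig \
  --amlib /path/to/libAlgoMarker.so \
  --model /path/to/model.bin \
  --in_jsons examples.json \
  --am_csv am_matrix.csv"
echo "$CMD"
```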
Generating json requests
- To create a JSON request from the repository and samples files, use:
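  A request-generation run might look like the following. This is a sketch based on the help text above; the exact spelling of the `jreq_defs` flag and whether `--create_jreq` takes a value are assumptions, and all paths are placeholders.

  ```shell
  # Hypothetical request generation: build a JSON request from a repository
  # and samples file. --add_data_to_jreq embeds the data in the request
  # itself; --jreq_out names the output file. Paths are placeholders.
  CMD="DllAPITester \
    --rep /path/to/repository.cfg \
    --samples /path/to/samples.medsamples \
    --create_jreq \
    --jreq_defs /path/to/jreq_defs \
    --add_data_to_jreq \
    --jreq_out request.json"
  echo "$CMD"
  ```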
Here is a `jreq_defs` example:
- Create JSON data for an AddDataByType request (if `--add_data_to_jreq` was not included above, so the data is provided in a separate API call):
- You can use the `--rename_signal` flag to provide a file that maps repository signal names to official AlgoMarker names. This tab-delimited file allows for standardized naming; for example, converting GENDER to SEX.
- You can use the `--signal_categ_regex` flag to keep only categorical values that match a specific regex, to avoid unnecessary categorical values that aren't supported by the AlgoMarker.
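  The `--rename_signal` mapping file is tab-delimited, one repository-name/AlgoMarker-name pair per line. A minimal sketch, using the GENDER-to-SEX example above (the exact file format beyond "tab-delimited" is an assumption):

  ```
  GENDER	SEX
  ```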
- Get a full response from the AlgoMarker (without a repository or samples):
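  Feeding a prepared request and capturing the response can be sketched as follows, using the `--jreq` and `--jresp` flags from the help text above. Paths are placeholders and the exact required flag set is an assumption.

  ```shell
  # Hypothetical call: feed a JSON request (which can embed its own data)
  # to the AlgoMarker and store the JSON response. Paths are placeholders.
  CMD="DllAPITester \
    --amconfig /path/to/algomarker.amconfig \
    --amlib /path/to/libAlgoMarker.so \
    --jreq request.json \
    --jresp response.json"
  echo "$CMD"
  ```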
You can also provide a path to get the discovery API JSON with `--discovery`.