Inspector

See the rendered example for more details.

Debug Tool

There is a debug tool that helps to analyze result folders: python evaluate.py -h

usage: evaluate.py [-h] [-r RESULT_FOLDER] [-e EXPERIMENT] [-q QUERY] [-c CONNECTION] [-n NUM_RUN] [-d] [-rt] {resultsets,errors,warnings,query}

A debug tool for DBMSBenchmarker. It helps to analyze a result folder. It depends on the evaluation cube, so that cube must have been created before.

positional arguments:
  {resultsets,errors,warnings,query}
                        show debug infos about which part of the outcome

optional arguments:
  -h, --help            show this help message and exit
  -r RESULT_FOLDER, --result-folder RESULT_FOLDER
                        folder for storing benchmark result files, default is given by timestamp
  -e EXPERIMENT, --experiment EXPERIMENT
                        code of experiment
  -q QUERY, --query QUERY
                        number of query to inspect
  -c CONNECTION, --connection CONNECTION
                        name of DBMS to inspect
  -n NUM_RUN, --num-run NUM_RUN
                        number of run to inspect
  -d, --diff            show differences in result sets
  -rt, --remove-titles  remove titles when comparing result sets

The tool depends on the evaluation cube. If the cube is missing, it can be generated with dbmsbenchmarker -e yes -r 1647993954 read, for example for experiment 1647993954.
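
The result folder can also be passed explicitly with -r if it is not in the default location. A minimal sketch, assuming the results are stored in a folder named benchmarks (a placeholder path):

python evaluate.py -r benchmarks -e 1647993954 errors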

Show Queries

We can take a look at the actual queries that have been sent: python evaluate.py -e 1647993954 -q 1 -n 0 query

This shows the query string that has been sent for query number 1, run number 0.
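
To inspect a different run of the same query, change the run number with -n. For example, for the second run (run number 1):

python evaluate.py -e 1647993954 -q 1 -n 1 query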

Show Result Sets

We can take a look at the result sets that have been received: python evaluate.py -e 1647993954 -q 2 resultsets

This shows the result sets for query number 2.
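
The result sets can also be compared across DBMS. As listed in the options above, -d shows differences in the result sets and -rt removes the titles when comparing. For example:

python evaluate.py -e 1647993954 -q 2 -d resultsets
python evaluate.py -e 1647993954 -q 2 -d -rt resultsets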

Show Errors

We can take a look at the errors that have occurred: python evaluate.py -e 1647993954 errors
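
The errors can be narrowed to a single query with -q or to a single DBMS with -c. For example, assuming a connection named MySQL is part of the experiment (the name is a placeholder):

python evaluate.py -e 1647993954 -q 2 errors
python evaluate.py -e 1647993954 -c MySQL errors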

Show Warnings

We can take a look at the warnings that have occurred: python evaluate.py -e 1647993954 warnings
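
Warnings can be filtered in the same way, for example to a single query:

python evaluate.py -e 1647993954 -q 2 warnings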