# Getting started

## Minimal example
Create a benchmark suite, attach it to a function as a decorator, then call the function as normal:
```python
from microbench import MicroBench

bench = MicroBench()

@bench
def my_function(x):
    return x ** 2

my_function(42)
```
By default results are captured into an in-memory buffer. Read them back as a pandas DataFrame:
Every record contains these fields automatically:
| Field | Description |
|---|---|
| `mb_run_id` | UUID generated once when microbench is imported. Identical across all bench suites in the same process; use `groupby('mb_run_id')` to correlate records from independent suites. |
| `mb_version` | Version of the microbench package that produced the record. |
| `start_time` | ISO-8601 timestamp when the function was called (UTC by default). |
| `finish_time` | ISO-8601 timestamp when the function returned. |
| `run_durations` | List of per-iteration durations in seconds. |
| `function_name` | Name of the decorated function. |
| `timestamp_tz` | Timezone used for `start_time`/`finish_time`. |
| `duration_counter` | Name of the timer function used for `run_durations`. |
## Extended example
Subclass MicroBench when you want to add mixins
or set reusable configuration. Pass keyword arguments to the constructor to
attach experiment metadata to every record:
```python
from microbench import MicroBench, MBFunctionCall, MBPythonVersion, MBHostInfo
import numpy
import pandas
import time

class MyBench(MicroBench, MBFunctionCall, MBPythonVersion, MBHostInfo):
    outfile = '/home/user/my-benchmarks.jsonl'
    capture_versions = (numpy, pandas)
    env_vars = ('SLURM_ARRAY_TASK_ID',)

benchmark = MyBench(experiment='run-1', iterations=3,
                    duration_counter=time.monotonic)
```
- `outfile` saves results to a file (one JSON object per line).
- `capture_versions` records the versions of the specified packages.
- `env_vars` captures environment variables as `env_<NAME>` fields; see Environment variables for a SLURM example.
- `iterations=3` runs the function three times, recording all three durations.
- `duration_counter` overrides the timer (see Configuration).
- `experiment='run-1'` adds a custom `experiment` field to every record.
### Class attributes vs constructor arguments
Class attributes configure microbench's own behaviour: `outfile`,
`capture_versions`, `env_vars`, and mixin-specific settings like
`nvidia_attributes`. They are shared across all instances of the class.

Constructor keyword arguments attach experiment metadata to every
record; use them for labels like `experiment=`, `trial=`, `node=`.
They are stored verbatim in each JSON record.
If you don't need mixins, skip the class entirely:
## Saving results to a file
Pass outfile as a constructor argument or set it as a class attribute:
Results are appended in JSONL format (one JSON object per line). Read them back with pandas:
Or via get_results(), which works regardless of the output sink:
## Analysing results
Load results into a pandas DataFrame and use its full range of aggregation and filtering capabilities:
```python
import pandas

results = pandas.read_json('/home/user/my-benchmarks.jsonl', lines=True)

# Total elapsed time per call
results['elapsed'] = results['finish_time'] - results['start_time']

# Average runtime by Python version (requires the MBPythonVersion mixin)
results.groupby('python_version')['elapsed'].mean()

# Correlate all records from the same process run
results.groupby('mb_run_id')['elapsed'].describe()
```
See the pandas documentation for more.