
Python API configuration


This page covers Python API configuration. For CLI flags such as --outfile, --iterations, --warmup, --mixin, and output options, see the CLI reference.

Constructor parameters

| Parameter | Default | Description |
| --- | --- | --- |
| outfile | None | File path or file-like object to write results to. |
| outputs | None | List of Output instances for multi-sink output. Mutually exclusive with outfile. |
| json_encoder | JSONEncoder | Custom JSON encoder class. |
| tz | timezone.utc | Timezone for start_time / finish_time. |
| iterations | 1 | Number of times to run the decorated function. |
| warmup | 0 | Number of unrecorded calls before timing begins; useful for priming caches or JIT compilation. |
| duration_counter | time.perf_counter | Callable used to measure call.durations. |

Any additional keyword arguments are stored as extra fields in every record:

bench = MicroBench(experiment='run-42', node='gpu-node-03')
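For illustration, here is a sketch of how those extra keyword arguments would surface in the output, assuming the JSON-lines record layout used throughout this page (the function name and timestamp values below are hypothetical placeholders):

```python
import json

# Extra constructor kwargs are merged into every result record
# as top-level fields (layout shown here is illustrative).
extra_fields = {'experiment': 'run-42', 'node': 'gpu-node-03'}

record = {
    'function': 'my_function',
    'start_time': '2024-01-01T00:00:00+00:00',
    **extra_fields,
}

line = json.dumps(record)  # one JSON object per benchmarked call
print(line)
```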

Environment variables

Set env_vars as a class attribute to capture specific environment variables into every record. Each variable is stored as env.<NAME>; if the variable is unset it is recorded as null:

from microbench import MicroBench

class MyBench(MicroBench):
    env_vars = ('MY_VAR', 'ANOTHER_VAR')
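The capture behaviour described above can be sketched with the standard library; `capture_env` is a hypothetical helper, not part of the microbench API:

```python
import os

def capture_env(env_vars):
    # Unset variables are recorded as None, which serializes
    # to null in the JSON output.
    return {f'env.{name}': os.environ.get(name) for name in env_vars}

os.environ['MY_VAR'] = 'hello'
os.environ.pop('ANOTHER_VAR', None)  # ensure it is unset

fields = capture_env(('MY_VAR', 'ANOTHER_VAR'))
print(fields)  # {'env.MY_VAR': 'hello', 'env.ANOTHER_VAR': None}
```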

HPC / SLURM

In SLURM environments, capture job and task identifiers automatically:

class SlurmBench(MicroBench):
    env_vars = (
        'SLURM_JOB_ID',
        'SLURM_ARRAY_TASK_ID',
        'SLURM_NODELIST',
        'SLURM_CPUS_PER_TASK',
    )

Fields are stored as env.SLURM_JOB_ID, env.SLURM_ARRAY_TASK_ID, etc. Combined with mb.run_id, this lets you group and compare results across all tasks in a job array:

results.groupby(['mb.run_id', 'env.SLURM_ARRAY_TASK_ID'])['call.durations'].mean()

Run env | grep SLURM inside a job to see which variables are available in your cluster's environment.
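If pandas is not available, the same grouping can be done with the standard library. The records below are hypothetical, in the flattened field layout this page describes:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical flattened records from a 2-task job array.
records = [
    {'mb.run_id': 'abc', 'env.SLURM_ARRAY_TASK_ID': '1', 'call.durations': 0.52},
    {'mb.run_id': 'abc', 'env.SLURM_ARRAY_TASK_ID': '1', 'call.durations': 0.48},
    {'mb.run_id': 'abc', 'env.SLURM_ARRAY_TASK_ID': '2', 'call.durations': 0.61},
]

# Group durations by (run id, array task id) and average each group.
groups = defaultdict(list)
for rec in records:
    key = (rec['mb.run_id'], rec['env.SLURM_ARRAY_TASK_ID'])
    groups[key].append(rec['call.durations'])

means = {key: mean(values) for key, values in groups.items()}
print(means)
```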

Duration timings

call.durations is measured with time.perf_counter by default, a high-resolution clock that returns elapsed time in fractional seconds. Override with any callable that returns a numeric value:

import time
from microbench import MicroBench

# Integer nanosecond resolution (avoids float rounding on long runs)
bench = MicroBench(duration_counter=time.perf_counter_ns)

# System monotonic clock (coarser resolution on some platforms)
bench = MicroBench(duration_counter=time.monotonic)

The name of the counter function is recorded in the mb.duration_counter field, so results remain interpretable even if the code later changes.

When iterations > 1, call.durations is a list with one entry per iteration. The function's return value is always taken from the final iteration.
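The warmup/iterations semantics above can be sketched as a plain timing loop; `run_benchmark` is a hypothetical stand-in for what the decorator does internally:

```python
import time

def run_benchmark(func, iterations=3, warmup=1, duration_counter=time.perf_counter):
    # Warmup calls are executed but not recorded.
    for _ in range(warmup):
        func()
    durations, result = [], None
    for _ in range(iterations):
        start = duration_counter()
        result = func()  # return value kept from the final iteration
        durations.append(duration_counter() - start)
    return durations, result

durations, result = run_benchmark(lambda: sum(range(1000)))
print(len(durations), result)  # 3 499500
```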

Timezones

start_time and finish_time are ISO-8601 timestamps in UTC by default. Override with any datetime.tzinfo instance:

import datetime
from microbench import MicroBench

# Local machine timezone
bench = MicroBench(tz=datetime.datetime.now().astimezone().tzinfo)

UTC is recommended when comparing results across machines in different locations. The timezone is also recorded in the mb.timezone field.

Runtime impact

Capturing environment variables, package versions, and timings has negligible overhead. The following have measurable cost:

  • MBNvidiaSmi — spawns a subprocess per invocation; typically < 1 s.
  • MBInstalledPackages / MBCondaPackages — enumerates all installed packages; can take several seconds on large environments. Consider running once and storing the output separately rather than capturing on every call.
  • Periodic monitoring — background thread with configurable interval. Keep monitor_interval at 60 s or more to avoid meaningful overhead.
  • MBLineProfiler — instruments every line; expect 2–5× slowdown on typical Python code. Only use for profiling runs, not production timing.
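The one-off capture suggested for package lists can be done with the standard library; this sketch records installed distributions to a JSON file once, instead of on every benchmarked call (the filename is arbitrary):

```python
import json
import importlib.metadata

# One-off capture of installed package versions, as an alternative
# to per-call capture with MBInstalledPackages.
packages = {
    dist.metadata['Name']: dist.version
    for dist in importlib.metadata.distributions()
    if dist.metadata['Name']
}

with open('environment.json', 'w') as f:
    json.dump(packages, f, indent=2, sort_keys=True)

print(len(packages), 'packages recorded')
```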