API reference
Core
microbench.MicroBenchBase
Source code in microbench/core/bench.py
arecord(name=None)
Return an async context manager that times a block and writes one record.
Use with async with inside an async function or coroutine.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Value for the `call.name` field in the output record. | `None` |
Note
Elapsed wall time includes event-loop interleaving from other concurrent tasks. Results are comparable across runs only when the event loop is not saturated by other tasks.
Example::

    async with bench.arecord('data_load'):
        await load_data()
Source code in microbench/core/bench.py
get_results(format='dict', flat=False)
Return results from the first output sink that supports it.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `format` | `str` | Output format: `'dict'` (list of dicts) or `'dataframe'` (pandas DataFrame). | `'dict'` |
| `flat` | `bool` | If True, flatten nested dict fields into dot-notation keys (e.g. `call.name`). | `False` |
Returns:

| Type | Description |
|---|---|
| `list[dict]` or `pandas.DataFrame` | Stored results in the requested format. |
Raises:

| Type | Description |
|---|---|
| `RuntimeError` | If no configured sink supports reading results. |
| `ImportError` | If format is `'dataframe'` and pandas is not installed. |
| `ValueError` | If format is not `'dict'` or `'dataframe'`. |
Source code in microbench/core/bench.py
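The `flat=True` behaviour can be illustrated with a small standard-library sketch. The `flatten` helper below is hypothetical (not part of microbench); it only reproduces the documented dot-notation key shape:

```python
def flatten(record, prefix=''):
    """Flatten nested dicts into dot-notation keys,
    e.g. {'call': {'name': 'x'}} -> {'call.name': 'x'}."""
    flat = {}
    for key, value in record.items():
        dotted = f'{prefix}{key}'
        if isinstance(value, dict):
            # Recurse with the accumulated prefix for nested dicts
            flat.update(flatten(value, prefix=f'{dotted}.'))
        else:
            flat[dotted] = value
    return flat

record = {'call': {'name': 'training', 'duration': 1.5}, 'hostname': 'node1'}
print(flatten(record))
# {'call.name': 'training', 'call.duration': 1.5, 'hostname': 'node1'}
```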
output_result(bm_data)
Fan out the JSON-encoded result to all configured output sinks.
record(name=None)
Return a context manager that times a block and writes one record.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Value for the `call.name` field in the output record. | `None` |
Example::

    with bench.record('training'):
        model.fit(X, y)
Source code in microbench/core/bench.py
record_on_exit(name=None, handle_sigterm=True)
Register a process-exit handler that writes one benchmark record.
Call once near the start of a script. When the process exits normally
(or via SIGTERM when handle_sigterm is True), a record is written
containing the wall-clock duration from this call to exit, plus all
mixin fields captured at exit time.
Calling this method a second time on the same instance replaces the previous registration and resets the start time.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Value for the `call.name` field in the output record. | `None` |
| `handle_sigterm` | `bool` | Install a SIGTERM handler that writes the record before re-delivering the signal. Only effective when called from the main thread. Defaults to `True`. | `True` |
Fields added beyond the standard timing fields:

- `exit_signal`: `'SIGTERM'` when the handler was triggered by SIGTERM; absent otherwise.
- `exception`: `{"type": ..., "message": ...}` when the process is exiting due to an unhandled exception; absent otherwise.
Note
SIGKILL and os._exit() cannot be caught; no record will be written in those cases. Use capture_optional = True on the benchmark class so that slow or unavailable capture methods do not delay the exit handler.
Example::

    bench = MyBench(outfile='/scratch/results.jsonl')
    bench.record_on_exit('simulation')
    run_simulation()
Source code in microbench/core/bench.py
summary()
Print summary statistics for call.durations across all results.
Requires no dependencies beyond the Python standard library.
Reads results via :meth:get_results.
time(name)
Return a context manager recording a named sub-timing within a benchmark.
Sub-timings are stored in call.timings as a list of
{"name": ..., "duration": ...} dicts in call order.
Compatible with bench.record(), bench.arecord(),
@bench (sync and async), and bench.record_on_exit().
Calling outside an active benchmark is a silent no-op.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Label for this timing section. | required |
Source code in microbench/core/bench.py
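The documented `call.timings` shape (a list of `{"name": ..., "duration": ...}` dicts in call order) can be sketched with a standard-library context manager. `sub_timing` and `timings` below are hypothetical stand-ins, not microbench internals:

```python
import time
from contextlib import contextmanager

timings = []  # stands in for the active benchmark's call.timings list

@contextmanager
def sub_timing(name):
    """Append one named sub-timing dict per timed block, in call order."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings.append({'name': name, 'duration': time.perf_counter() - start})

with sub_timing('load'):
    sum(range(1000))
with sub_timing('train'):
    sum(range(1000))

print([t['name'] for t in timings])  # ['load', 'train'] (in call order)
```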
microbench.MicroBench
Bases: MBPythonInfo, MicroBenchBase
Benchmark suite with :class:MBPythonInfo included by default.
Subclass this for typical usage. If you need a completely bare benchmark
class with no default mixins, subclass :class:MicroBenchBase instead.
Source code in microbench/core/bench.py
microbench.summary(results)
Print summary statistics for call.durations across a list of results.
Requires no dependencies beyond the Python standard library.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `results` | `list[dict]` | Result dicts as returned by :meth:`get_results`. | required |
Example::

    bench = MicroBench()

    @bench
    def my_function():
        ...

    my_function()
    summary(bench.get_results())
    # n=1 min=0.000042 mean=0.000042 median=0.000042 max=0.000042 stdev=nan
Source code in microbench/core/bench.py
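The statistics in the sample output can be reproduced with the standard library alone. The `summarize` helper below is a hypothetical stand-in for the library function, included to show where each number comes from (note `stdev` is `nan` when n=1, matching the example above):

```python
import statistics

def summarize(durations):
    """Compute n/min/mean/median/max/stdev over a list of durations."""
    n = len(durations)
    # stdev requires at least two samples; report nan for n=1
    stdev = statistics.stdev(durations) if n > 1 else float('nan')
    return {
        'n': n,
        'min': min(durations),
        'mean': statistics.mean(durations),
        'median': statistics.median(durations),
        'max': max(durations),
        'stdev': stdev,
    }

print(summarize([0.1, 0.2, 0.3]))
```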
Output sinks
microbench.Output
Abstract base class for benchmark output sinks.
Subclass this to implement custom output destinations.
Must implement :meth:write. May optionally implement
:meth:get_results to allow reading back stored results.
Example::

    class MyOutput(Output):
        def write(self, bm_json_str):
            send_somewhere(bm_json_str)
Source code in microbench/outputs/base.py
get_results(format='dict', flat=False)
Return all stored results.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `format` | `str` | Output format: `'dict'` (list of dicts) or `'dataframe'` (pandas DataFrame). | `'dict'` |
| `flat` | `bool` | If True, flatten nested dict fields into dot-notation keys (e.g. `call.name`). | `False` |
Raises:

| Type | Description |
|---|---|
| `NotImplementedError` | If this sink does not support reading results. |
| `ImportError` | If format is `'dataframe'` and pandas is not installed. |
| `ValueError` | If format is not `'dict'` or `'dataframe'`. |
Source code in microbench/outputs/base.py
write(bm_json_str)
Write a single JSON-encoded benchmark result.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `bm_json_str` | `str` | JSON string (without trailing newline). | required |
microbench.FileOutput
Bases: Output
Write benchmark results to a file path or file-like object (JSONL format).
Each result is written as a single JSON line. When outfile is a path
string, each write opens the file in append mode (POSIX O_APPEND),
which is safe for concurrent writers on the same filesystem. When
outfile is a file-like object it is written to directly.
When no outfile is given an :class:io.StringIO buffer is used,
which allows results to be read back via :meth:get_results.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `outfile` | `str` or file-like | Destination file path or file-like object. Defaults to a fresh :class:`io.StringIO` buffer. | `None` |
Source code in microbench/outputs/file.py
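The default in-memory behaviour (one JSON line per record, readable back afterwards) can be sketched with the standard library. `StringIOSink` below is a hypothetical simplification of `FileOutput`, not its actual implementation:

```python
import io
import json

class StringIOSink:
    """Write one JSON line per record to an in-memory buffer (JSONL),
    and read all records back, mirroring FileOutput's documented default."""

    def __init__(self):
        self.outfile = io.StringIO()

    def write(self, bm_json_str):
        # Each record is a single line; the newline is the record separator
        self.outfile.write(bm_json_str + '\n')

    def get_results(self):
        return [json.loads(line) for line in self.outfile.getvalue().splitlines()]

sink = StringIOSink()
sink.write(json.dumps({'call': {'name': 'demo', 'duration': 0.01}}))
print(sink.get_results())
# [{'call': {'name': 'demo', 'duration': 0.01}}]
```

Opening a path in append mode on every write (as the real class does for path strings) relies on POSIX O_APPEND atomicity, which is why concurrent writers on the same filesystem do not interleave partial lines.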
microbench.HttpOutput
Bases: Output
POST each benchmark result to an HTTP/HTTPS endpoint.
Designed for webhooks and real-time notifications (e.g. Slack, Teams,
custom event endpoints). Not intended for bulk storage — there is no
:meth:get_results support.
Uses only the Python standard library (urllib). Raises on non-2xx
responses or network failures — no silent dropping, no automatic retry.
By default the record dict is JSON-encoded and sent with
Content-Type: application/json. Override :meth:format_payload in a
subclass to produce any body shape required by the target provider (e.g.
a Slack {"text": ...} envelope).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `url` | `str` | Endpoint URL. Must use the `http` or `https` scheme. | required |
| `headers` | `dict` | Extra HTTP headers merged with the defaults. Caller-supplied keys win on collision (case-sensitive). Use this for authentication (e.g. an `Authorization` header). | `None` |
| `timeout` | `float` | Request timeout in seconds passed to :func:`urllib.request.urlopen`. | `30.0` |
| `method` | `str` | HTTP method. Defaults to `'POST'`. | `'POST'` |
Raises:

| Type | Description |
|---|---|
| `HTTPError` | If the server returns a non-2xx status code. |
| `URLError` | If a network-level error occurs (DNS failure, connection refused, etc.). |
Example — basic usage::

    from microbench import MicroBench, HttpOutput

    bench = MicroBench(outputs=[HttpOutput('https://example.com/events')])

Example — bearer token authentication::

    from microbench import MicroBench, HttpOutput

    bench = MicroBench(outputs=[HttpOutput(
        'https://api.example.com/benchmarks',
        headers={'Authorization': 'Bearer my-secret-token'},
    )])

Example — Slack webhook via subclass::

    import json
    from microbench import MicroBench, HttpOutput

    class SlackOutput(HttpOutput):
        def format_payload(self, record):
            name = record.get('call', {}).get('name', '?')
            return json.dumps({'text': f'Benchmark `{name}` finished.'}).encode()

    bench = MicroBench(outputs=[SlackOutput('https://hooks.slack.com/services/...')])
Source code in microbench/outputs/http.py
format_payload(record)
Encode record as the HTTP request body.
The default implementation JSON-encodes the record dict and returns UTF-8 bytes. Subclasses may override this to produce any body shape required by the target provider.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `record` | `dict` | Decoded benchmark result dict. | required |

Returns:

| Type | Description |
|---|---|
| `bytes` | Request body. |
Source code in microbench/outputs/http.py
write(bm_json_str)
POST bm_json_str to the configured URL.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `bm_json_str` | `str` | JSON-encoded benchmark record, as produced by :meth:`output_result`. | required |

Raises:

| Type | Description |
|---|---|
| `HTTPError` | On a non-2xx HTTP response. |
| `URLError` | On a network-level error. |
Source code in microbench/outputs/http.py
microbench.RedisOutput
Bases: Output
Write benchmark results to a Redis list (one JSON string per record).
Results are appended using RPUSH and can be read back via
:meth:get_results using LRANGE.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `redis_key` | `str` | Redis key for the result list. | required |
| `**redis_connection` | | Keyword arguments forwarded to :class:`redis.Redis`. | `{}` |
Example::

    from microbench import MicroBench, RedisOutput

    bench = MicroBench(outputs=[RedisOutput('microbench:mykey',
                                            host='localhost', port=6379)])
Source code in microbench/outputs/redis.py
Mixins
microbench.MBFunctionCall
Capture the benchmarked function's arguments and keyword arguments.
Source code in microbench/mixins/call.py
microbench.MBReturnValue
Capture the benchmarked function's return value.
microbench.MBPythonInfo
Capture the Python interpreter version, prefix, and executable path.
Records a python dict with three keys:
- `version`: the Python version string (e.g. `"3.12.4"`).
- `prefix`: `sys.prefix` — the environment root.
- `executable`: `sys.executable` — the absolute interpreter path.
This mixin is included in :class:MicroBench by default (Python API)
and in the CLI default mixin set. It supersedes the former MBPythonVersion.
Note
CLI compatible.
Source code in microbench/mixins/python.py
microbench.MBHostInfo
Capture hostname, operating system, and (optionally) CPU and RAM info.
Always records host.hostname and host.os using only the standard
library. When psutil <https://pypi.org/project/psutil/>_ is installed,
also records host.cpu_cores_logical, host.cpu_cores_physical, and
host.ram_total (bytes). The psutil fields are silently omitted when
psutil is not available — no error or warning is raised.
This mixin supersedes the former MBHostCpuCores and MBHostRamTotal
mixins, which have been removed.
Note
CLI compatible.
Source code in microbench/mixins/system.py
microbench.MBPeakMemory
Capture peak Python memory allocation during the benchmarked function.
Uses :mod:tracemalloc from the Python standard library (no extra
dependencies). Records the peak memory allocated in bytes across all
iterations as call.peak_memory_bytes.
Note
tracemalloc tracks memory that goes through Python's allocator,
which covers Python objects and most C-extension allocations. Memory
allocated directly via malloc in C extensions (e.g. some large
NumPy arrays) is not tracked.
CLI compatible.
Source code in microbench/mixins/profiling.py
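The underlying `tracemalloc` mechanism can be sketched in a few lines. The `peak_bytes` helper below is hypothetical (the mixin's actual implementation also aggregates across iterations), but it shows where the peak-bytes number comes from:

```python
import tracemalloc

def peak_bytes(fn):
    """Return peak memory allocated through Python's allocator while fn() runs."""
    tracemalloc.start()
    try:
        fn()
        _, peak = tracemalloc.get_traced_memory()  # (current, peak) in bytes
    finally:
        tracemalloc.stop()
    return peak

# Allocating a million-element list produces a clearly measurable peak
peak = peak_bytes(lambda: [0] * 1_000_000)
print(peak, 'bytes at peak')
```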
microbench.MBSlurmInfo
Capture all SLURM_* environment variables.
Results are stored in the slurm field as a dict, with keys
lowercased and the SLURM_ prefix stripped. If no SLURM environment
variables are set (e.g. running locally), slurm is an empty dict.
Example output::
{
"slurm": {
"job_id": "12345",
"array_task_id": "3",
"nodelist": "gpu-node-[01-04]",
"cpus_per_task": "4"
}
}
Note
CLI compatible.
Source code in microbench/mixins/system.py
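The documented key transformation (lowercase, `SLURM_` prefix stripped) amounts to a dict comprehension over the environment. A minimal sketch, using a hypothetical `slurm_fields` helper over an explicit environ dict so it works outside SLURM:

```python
def slurm_fields(environ):
    """Collect SLURM_* variables, lowercased with the prefix stripped."""
    return {
        key[len('SLURM_'):].lower(): value
        for key, value in environ.items()
        if key.startswith('SLURM_')
    }

env = {'SLURM_JOB_ID': '12345', 'SLURM_CPUS_PER_TASK': '4', 'HOME': '/home/user'}
print(slurm_fields(env))
# {'job_id': '12345', 'cpus_per_task': '4'}
```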
microbench.MBLoadedModules
Capture loaded Lmod / Environment Modules.
Reads the LOADEDMODULES environment variable set by both Lmod and
Environment Modules and records the loaded modules as a dict mapping
module name to version string. If no modules are loaded, or the
benchmark is not running in a module-enabled environment,
loaded_modules is an empty dict.
Example output::
{
"loaded_modules": {
"gcc": "12.2.0",
"openmpi": "4.1.5",
"python": "3.10.4"
}
}
Module entries without a version (e.g. null) are stored with an
empty string as the version.
Note
CLI compatible.
Source code in microbench/mixins/system.py
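`LOADEDMODULES` is a colon-separated list of `name/version` entries. The parsing described above can be sketched as follows (`parse_loaded_modules` is a hypothetical helper; the mixin's real parsing may differ in detail):

```python
def parse_loaded_modules(value):
    """Parse a LOADEDMODULES-style string into a name -> version dict.
    Entries without a version get an empty string, per the docs above."""
    modules = {}
    for entry in value.split(':'):
        if not entry:
            continue
        # partition returns ('name', '', '') when there is no '/' separator
        name, _, version = entry.partition('/')
        modules[name] = version
    return modules

print(parse_loaded_modules('gcc/12.2.0:openmpi/4.1.5:slurm'))
# {'gcc': '12.2.0', 'openmpi': '4.1.5', 'slurm': ''}
```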
microbench.MBWorkingDir
Capture the working directory at benchmark time.
Records the current working directory as call.working_dir. This is
per-call data since the working directory can change between calls.
Note
CLI compatible.
Source code in microbench/mixins/system.py
microbench.MBCgroupLimits
Capture CPU quota and memory limit from Linux cgroups.
Works for SLURM jobs and Kubernetes pods (cgroup v1 and v2). Results
are stored in the cgroups field as a dict containing:
- `cpu_cores_limit`: effective CPU parallelism as a float (quota ÷ period), or `None` if unlimited or unavailable.
- `memory_bytes_limit`: memory limit in bytes as an int, or `None` if unlimited or unavailable.
- `version`: `1` or `2`.
On non-Linux systems or when the cgroup filesystem is unavailable,
cgroups is an empty dict.
Note
cpu_cores_limit is derived from the cgroup CPU quota and period,
so it represents effective CPU parallelism, not a physical core count.
A SLURM job launched with --cpus-per-task=4 will typically report
cpu_cores_limit: 4.0.
Example output::
{
"cgroups": {
"cpu_cores_limit": 4.0,
"memory_bytes_limit": 17179869184,
"version": 2
}
}
Note
CLI compatible.
Source code in microbench/mixins/system.py
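The quota ÷ period computation for cgroup v2 reads `/sys/fs/cgroup/.../cpu.max`, whose content is either `"<quota> <period>"` or `"max <period>"`. A sketch of that step, using a hypothetical `parse_cpu_max` helper on the file's text rather than the live filesystem:

```python
def parse_cpu_max(text):
    """Parse a cgroup v2 cpu.max file into an effective CPU limit.
    Returns a float (quota / period) or None when unlimited."""
    quota, period = text.split()
    if quota == 'max':
        return None  # no CPU quota configured
    return int(quota) / int(period)

print(parse_cpu_max('400000 100000'))  # 4.0, matching --cpus-per-task=4
print(parse_cpu_max('max 100000'))     # None, no CPU limit
```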
microbench.MBGitInfo
Capture git repository information.
Requires git ≥ 2.11 to be available on PATH. Records the
current repo directory, commit hash, branch name, and whether the
working tree has uncommitted changes. Results are stored in the
git field.
By default inspects the repository containing the running script
(sys.argv[0]), falling back to the shell's working directory
when the script path is unavailable (e.g. interactive Python). Set
git_repo explicitly to target a specific directory, which is
useful when the script and the repository root are in different
locations.
CLI usage: the default is the current
working directory rather than the script directory, since
sys.argv[0] points to the microbench package itself. Use
--git-repo DIR to override.
Attributes:

| Name | Type | Description |
|---|---|---|
| `git_repo` | `str` | Directory to inspect. Defaults to the directory of the running script, or the shell's working directory if unavailable. |
Example output::
{
"git": {
"repo": "/home/user/project",
"commit": "a1b2c3d4e5f6...",
"branch": "main",
"dirty": false
}
}
Note
CLI compatible.
Source code in microbench/mixins/vcs.py
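One way to derive the `dirty` flag is from `git status --porcelain` output, where any non-blank line indicates an uncommitted change. A sketch of that interpretation step on captured output (the mixin's actual git invocation may differ):

```python
def is_dirty(porcelain_output):
    """Interpret `git status --porcelain` output: any non-blank line
    means the working tree has uncommitted changes."""
    return any(line.strip() for line in porcelain_output.splitlines())

print(is_dirty(''))                   # False: clean tree
print(is_dirty(' M src/bench.py\n'))  # True: modified file
```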
microbench.MBFileHash
Capture cryptographic hashes of specified files.
Useful for recording the exact state of scripts or configuration files alongside benchmark results, so results can be tied to a specific version of the code even without version control.
By default hashes the running script (sys.argv[0]). Set
hash_files to an iterable of paths to hash specific files
instead. Files are read in 64 KB chunks, so large files are handled
without loading them fully into memory.
CLI usage: the default list of files to hash is the
benchmarked command executable (cmd[0]) plus any arguments
that resolve to existing files on disk (cmd[1:]). This
transparently captures input files without requiring
--hash-file. Use --hash-file FILE [FILE ...] to override the
default entirely, and --hash-algorithm to change the algorithm.
Attributes:

| Name | Type | Description |
|---|---|---|
| `hash_files` | iterable of `str` | File paths to hash. Defaults to `[sys.argv[0]]` (the running script). |
| `hash_algorithm` | `str` | Hash algorithm name accepted by :func:`hashlib.new`. |
Example output::
{
"file_hashes": {
"run_experiment.py": "e3b0c44298fc1c14...",
"input.csv": "2cf24dba5fb0a30e..."
}
}
Note
The hashing algorithm name is stored under mb.file_hash_algorithm.
CLI compatible.
Source code in microbench/mixins/vcs.py
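The chunked-hashing approach described above (64 KB reads, so large files are never fully in memory) produces the same digest as hashing the whole payload at once. A sketch with a hypothetical `hash_file_chunks` helper operating on an iterable of chunks:

```python
import hashlib

def hash_file_chunks(chunks, algorithm='sha256'):
    """Incrementally hash an iterable of byte chunks (e.g. 64 KB file reads)."""
    h = hashlib.new(algorithm)
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()

# Chunked hashing matches hashing the whole payload in one call
payload = b'x' * (200 * 1024)
chunks = [payload[i:i + 65536] for i in range(0, len(payload), 65536)]
print(hash_file_chunks(chunks) == hashlib.sha256(payload).hexdigest())  # True
```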
microbench.MBGlobalPackages
Capture Python packages imported in the global environment.
Results are stored in python.loaded_packages as a dict mapping
package name to version string.
Source code in microbench/mixins/python.py
microbench.MBInstalledPackages
Capture installed Python packages using importlib.
Records the name and version of every distribution available in the
current Python environment via importlib.metadata.
Results are stored in python.installed_packages as a dict mapping
package name to version string. When capture_paths=True,
installation paths are stored in python.installed_package_paths.
Attributes:

| Name | Type | Description |
|---|---|---|
| `capture_paths` | `bool` | Also record the installation path of each package under `python.installed_package_paths`. |
Note
CLI compatible.
Source code in microbench/mixins/python.py
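The name-to-version mapping described above can be sketched directly with `importlib.metadata`. The `installed_packages` helper is hypothetical; the mixin's real implementation may handle edge cases differently:

```python
from importlib.metadata import distributions

def installed_packages():
    """Map each installed distribution name to its version string."""
    return {
        dist.metadata['Name']: dist.version
        for dist in distributions()
        if dist.metadata['Name']  # skip distributions with broken metadata
    }

packages = installed_packages()
print(len(packages), 'packages found')
```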
microbench.MBCondaPackages
Capture conda packages and active environment metadata.
Runs conda list --prefix PREFIX where PREFIX is taken from the
CONDA_PREFIX environment variable (the active conda environment).
Falls back to sys.prefix when CONDA_PREFIX is not set (e.g.
when running inside the base environment without explicit activation).
If conda is not on PATH, the CONDA_EXE environment variable
is tried as a fallback before raising an error.
Records a single conda dict with three keys:
- `name` (from `CONDA_DEFAULT_ENV`) — may be `None` if unset.
- `path` (from `CONDA_PREFIX`) — may be `None` if unset.
- `packages` — dict mapping package name to version string.
Attributes:

| Name | Type | Description |
|---|---|---|
| `include_builds` | `bool` | Include the build string in the reported version. |
| `include_channels` | `bool` | Include the channel name in the reported version. |
Note
CLI compatible.
Source code in microbench/mixins/python.py
microbench.MBNvidiaSmi
Capture attributes on installed NVIDIA GPUs using nvidia-smi.
Requires the nvidia-smi utility to be available on PATH
(bundled with NVIDIA drivers).
Results are stored as nvidia, a list of per-GPU dicts. Each dict
contains uuid plus one key per queried attribute. Run
nvidia-smi --help-query-gpu for all available attribute names.
Run nvidia-smi -L to list GPU UUIDs.
Example output::
{
"nvidia": [
{
"uuid": "GPU-abc123",
"gpu_name": "Tesla T4",
"memory.total": "16160 MiB"
}
]
}
Attributes:

| Name | Type | Description |
|---|---|---|
| `nvidia_attributes` | `tuple[str]` | Attributes to query; run `nvidia-smi --help-query-gpu` for all valid names. |
| `nvidia_gpus` | `tuple` | GPU IDs to poll — zero-based indexes, UUIDs, or PCI bus IDs. GPU UUIDs are recommended (indexes can change after a reboot). Omit to poll all installed GPUs. |
Note
CLI compatible.
Source code in microbench/mixins/gpu.py
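One way to obtain the per-GPU dicts shown above is to query `nvidia-smi --query-gpu=uuid,... --format=csv,noheader` and parse each comma-separated output line. A sketch of the parsing step on a captured line (`parse_query_gpu_line` is a hypothetical helper; the mixin's actual invocation and parsing may differ):

```python
def parse_query_gpu_line(line, attributes):
    """Parse one CSV line of nvidia-smi query-gpu output into a per-GPU dict:
    the first value is the UUID, the rest pair up with the queried attributes."""
    values = [v.strip() for v in line.split(', ')]
    gpu = {'uuid': values[0]}
    gpu.update(zip(attributes, values[1:]))
    return gpu

line = 'GPU-abc123, Tesla T4, 16160 MiB'
print(parse_query_gpu_line(line, ['gpu_name', 'memory.total']))
# {'uuid': 'GPU-abc123', 'gpu_name': 'Tesla T4', 'memory.total': '16160 MiB'}
```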
microbench.MBLineProfiler
Run the line profiler on the selected function
Requires the line_profiler package. This will generate a benchmark which times the execution of each line of Python code in your function. This will slightly slow down the execution of your function, so it's not recommended in production.
Results are stored in call.line_profiler as a base64-encoded pickled
LineStats object.
Source code in microbench/mixins/profiling.py
decode_line_profile(line_profile_pickled)
staticmethod
Decode a base64-encoded pickled line profiler result.
Security note: This uses pickle.loads, which can execute arbitrary code. Only call this on data from a trusted source (e.g. your own benchmark output files). Do not decode line profile data received over a network or from an untrusted file.
Source code in microbench/mixins/profiling.py
CLI
microbench.CLIArg
Declares a CLI argument that sets a mixin attribute.
Attach a list of CLIArg instances to a mixin class as cli_args
to expose configurable attributes through python -m microbench.
Arguments are added to the parser automatically; no changes to the CLI
code are needed when adding new configurable mixins.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `flags` | | Flag strings for the argument, e.g. `'--git-repo'`. | required |
| `dest` | | Mixin attribute name to set, e.g. `git_repo`. | required |
| `help` | | Help text shown in `--help` output. | required |
| `metavar` | | Display name for the value in help text. | `None` |
| `type` | | Callable to convert the raw string. Defaults to `str`. | `str` |
| `nargs` | | Number of arguments (e.g. `'+'`), as in :mod:`argparse`. | `None` |
| `cli_default` | | Default when the flag is not given on the CLI. If callable, called with the command list (`cmd`) to compute the default. | `_UNSET` |
Source code in microbench/mixins/base.py
JSON encoding
microbench.JSONEncoder
Bases: JSONEncoder