feature: expand benchmarking #1989

@wpbonelli

Description

Is your feature request related to a problem? Please describe.

FloPy currently has a very small set of benchmarks using pytest-benchmark, including:

It might be worthwhile to a) benchmark a broader set of models/utils, and b) minimize ad hoc code needed to achieve this.

Describe the solution you'd like

Maybe benchmark load/write for all test models provided by a models API as proposed in #1872, as well as any widely used pre/post-processing utilities. We could also try ASV (airspeed velocity), which has been adopted by other projects like numpy, shapely, and pywatershed.
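For illustration, a minimal sketch of what an ASV benchmark module could look like; the file name `benchmarks/bench_io.py` and the model workspace path are hypothetical placeholders, not part of the current FloPy repository:

```python
# Hypothetical ASV benchmark module (e.g. benchmarks/bench_io.py).
# ASV times any method whose name starts with time_; setup() runs before each benchmark.
import flopy

# Placeholder workspace; substitute a model provided by the proposed models API.
MODEL_WS = "path/to/example_model"


class Mf6IOSuite:
    def setup(self):
        # load once so the write benchmark does not include load time
        self.sim = flopy.mf6.MFSimulation.load(sim_ws=MODEL_WS)

    def time_load(self):
        flopy.mf6.MFSimulation.load(sim_ws=MODEL_WS)

    def time_write(self):
        self.sim.write_simulation()
```

One advantage of ASV over plain pytest-benchmark is that it records results per commit and can publish a timing history, which would make regressions (or improvements like the pandas IO work) easy to spot.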

Describe alternatives you've considered

We could just stick with pytest-benchmark and a bit of scripting instead of moving to ASV.
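If we stay with pytest-benchmark, parametrizing over a list of model workspaces would already cut down the ad hoc code. A rough sketch, where `MODEL_DIRS` is a hypothetical stand-in for whatever the models API in #1872 would provide:

```python
# Hypothetical pytest-benchmark test; MODEL_DIRS stands in for workspaces
# that a models API (see #1872) would make available locally.
import flopy
import pytest

MODEL_DIRS = ["path/to/model_a", "path/to/model_b"]  # placeholder workspaces


@pytest.mark.parametrize("model_ws", MODEL_DIRS)
def test_load_benchmark(benchmark, model_ws):
    # the benchmark fixture calls the target repeatedly and records timings
    benchmark(flopy.mf6.MFSimulation.load, sim_ws=model_ws)
```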

Additional context

This would help quantify performance improvements from the ongoing effort to use pandas for file I/O.
