Add built-in function for local sensitivity analysis #1908
base: main
Conversation
This commit introduces a new function, `local_sensitivity_analysis`, to the Mesa modeling framework, enhancing its capabilities for conducting sensitivity analysis on agent-based models. This function improves the flexibility and precision of parameter variation studies.

Key features:
- The function accepts both relative and absolute ranges for parameter variations. Users can specify a common range of multipliers (for relative changes) or specific values (for absolute changes) that are applied uniformly to all numeric parameters, allowing a more nuanced exploration of parameter impacts.
- An optional `specific_ranges` argument lets users define custom ranges or multipliers for individual model parameters, providing even finer control over the sensitivity analysis process.
- An `ignore_parameters` argument allows users to exclude certain parameters from the analysis. This is particularly useful for non-numeric parameters or those that should remain constant.
- Integer parameters are handled automatically by rounding the varied values to the nearest integer and removing duplicates, ensuring a meaningful and efficient analysis.
- A default run using the model's baseline parameters is included, serving as a reference point for comparing the effects of parameter variations.
- Additional keyword arguments (`**kwargs`) are passed through to Mesa's `batch_run` function, maintaining compatibility and flexibility with Mesa's batch running capabilities.
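To make the described behavior concrete, here is a minimal, self-contained sketch of the one-at-a-time variation logic outlined in the commit message. The function name `build_variations`, its default multipliers, and its exact semantics are hypothetical illustrations, not the actual Mesa API.

```python
def build_variations(params, multipliers=(0.8, 0.9, 1.1, 1.2),
                     specific_ranges=None, ignore_parameters=()):
    """Return parameter dicts: one baseline run, plus one run per varied
    value of each numeric parameter, with all other parameters held at
    their baseline values (one-at-a-time variation)."""
    specific_ranges = specific_ranges or {}
    runs = [dict(params)]  # baseline reference run comes first
    for name, base in params.items():
        # Skip excluded, boolean, and non-numeric parameters.
        if (name in ignore_parameters or isinstance(base, bool)
                or not isinstance(base, (int, float))):
            continue
        # Per-parameter multipliers override the common ones.
        factors = specific_ranges.get(name, multipliers)
        values = [base * f for f in factors]
        if isinstance(base, int):
            # Round integer parameters and drop duplicates, keeping order.
            values = list(dict.fromkeys(round(v) for v in values))
        runs += [{**params, name: v} for v in values if v != base]
    return runs

variations = build_variations({"density": 0.8, "n_agents": 100, "label": "x"})
```

In a real implementation, each resulting dict would then be forwarded to Mesa's `batch_run` along with any extra keyword arguments.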
Codecov Report

Additional details and impacted files:

    @@            Coverage Diff             @@
    ##             main    #1908      +/-   ##
    ==========================================
    - Coverage   77.35%   75.58%    -1.78%
    ==========================================
      Files          15       15
      Lines        1007     1032      +25
      Branches      220      211       -9
    ==========================================
    + Hits          779      780       +1
    - Misses        197      223      +26
    + Partials       31       29       -2

View full report in Codecov by Sentry.
@jackiekazil @tpike3 @Corvince @quaquel curious what you think!
Concept SGTM. Having more canned helper functions like this would encourage fast exploration from a Jupyter notebook. This reminds me of the toolkit in the fastai library.
You know my opinion on One At a Time Sensitivity Analysis: thou shalt not use it. The broader question is what is within scope for MESA. Having some simple helper functions for model exploration seems sensible. However, with libraries like SALib and the EMA workbench available, I would point people to those for more sophisticated analyses.
Just curious: assume you have a compute budget for about a thousand model runs. Your model is stochastically quite noisy, and you need about 10 replications to get your metrics within a satisfyingly small confidence interval. That means you can test about 100 configurations. You have around 10 uncertainties (and maybe 3 policy levers with 3 magnitudes each, if relevant). How would you approach sensitivity / extreme value analysis? (This is an actual scenario I encountered in a project: runtime 5 to 7 minutes, 12 simultaneous runs on my PC, so ~100 to ~150 runs per hour. One night of simulation was about a thousand runs.)
Use common random numbers to reduce the noise. Use system knowledge and the question at hand (i.e., the purpose of the model) to carefully select the scenarios to run. The point with one-at-a-time sensitivity analysis is that it is bound to produce misleading results because of interaction effects. Another, more sophisticated direction is to use adaptive sparse grids (see this thesis), which is what was used to do Sobol analysis on CovidSim.
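The common-random-numbers idea can be shown with a toy example: run both configurations with the same seeds so the stochastic noise cancels in the paired differences. The `noisy_model` function below is a deliberately simple stand-in, not a Mesa model.

```python
import random
import statistics

def noisy_model(param, seed):
    """Toy stochastic 'model': true effect of +1 in param is +2 on the output."""
    rng = random.Random(seed)
    return param * 2 + rng.gauss(0, 5)

seeds = range(50)

# Common random numbers: paired runs share seeds, so in this toy the
# noise cancels exactly and every paired difference equals 2.0.
paired_diffs = [noisy_model(11, s) - noisy_model(10, s) for s in seeds]

# Independent runs: each run draws fresh noise, so the estimated effect
# is much more variable for the same number of runs.
indep_diffs = [noisy_model(11, 2 * s) - noisy_model(10, 2 * s + 1)
               for s in seeds]

print(statistics.stdev(paired_diffs), statistics.stdev(indep_diffs))
```

In a real agent-based model the cancellation is only partial, but the variance reduction for comparing configurations is often substantial.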
Looks very interesting. Do we have this in the workbench? Do we want it?
I'm new to sensitivity analysis libraries in the ABM context. The sensitivity analysis in my past work, on economic models, was mainly guided by intuition. I am eager to learn about systematic exploration of model behavior under uncertainty. In the context of Mesa, it would be great if there were a representative example model that showcases this.
I can't quite unpack this sentence. If you look at variation in only one parameter, with the other parameters held ceteris paribus, shouldn't they leave the interpretation of that parameter's effect unaffected?
@jackiekazil @tpike3 @Corvince @wang-boyu, from your modelling and teaching experience, what do you think of a local (single-variable) sensitivity analysis method in Mesa?
Yes, but it is not trivial algorithmically.
Ceteris paribus is exactly the problem here. Take the Ishigami function, a classic sensitivity analysis test function, as an example. If you vary one parameter while holding the others fixed at a single baseline, its apparent effect depends entirely on where the other parameters sit, so a one-at-a-time sweep can miss or misstate its real influence.
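The interaction problem is easy to demonstrate numerically with the Ishigami function (here with the standard constants a = 7, b = 0.1): varied one-at-a-time around the origin, x3 looks completely inert, even though it has a large effect elsewhere in the input space through its interaction with x1.

```python
import math

def ishigami(x1, x2, x3, a=7.0, b=0.1):
    """Ishigami test function: sin(x1) + a*sin(x2)^2 + b*x3^4*sin(x1)."""
    return math.sin(x1) + a * math.sin(x2) ** 2 + b * x3**4 * math.sin(x1)

# One-at-a-time sweep of x3 around the baseline (0, 0, 0):
# the x3 term is multiplied by sin(x1) = 0, so x3 appears to do nothing.
oat_x3 = [ishigami(0.0, 0.0, x3) for x3 in (-math.pi, 0.0, math.pi)]
print(oat_x3)  # [0.0, 0.0, 0.0]

# But with x1 = pi/2, the same change in x3 shifts the output by
# b * pi^4 ≈ 9.74, so x3 is in fact highly influential.
effect = ishigami(math.pi / 2, 0.0, math.pi) - ishigami(math.pi / 2, 0.0, 0.0)
print(effect)
```

This is exactly the failure mode: local, one-at-a-time analysis at an unlucky baseline reports zero sensitivity for a parameter that global methods (e.g., Sobol indices) rank as important.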
That makes sense. It's, in a way, similar to surveying the relevant local extrema of a function (and additionally identifying the global optimum). It combines elements of both SciPy's global optimization and its local minimum finding.
What I had in mind was actually to efficiently map the Schelling model
It's a somewhat tangential point, but a simulation being slow in Python sometimes discourages people from doing sensitivity analysis.
This is interesting to me, but part of me wonders: is this "core" functionality, or is this the start of something bigger, an analyst toolkit?
I think given that there are already existing SA tools (SALib and the EMA workbench), it would make more sense to document their usage in the examples instead of reinventing the wheel.
I agree with @rht that showing how other tools can be used in conjunction with MESA is the way to go. However, if you try to use `scipy.stats.qmc` or SALib with MESA, you do run into the issue that the current BatchRunner does not really support this. So, as a next step, why not make an example with `scipy.stats.qmc` or SALib and see what would be needed in the BatchRunner to keep the code nice and concise? The EMA workbench already has a MESA example: https://emaworkbench.readthedocs.io/en/latest/examples/example_mesa.html so it would be easy to point to that rather than repeat it.
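For readers unfamiliar with these samplers: `scipy.stats.qmc` provides space-filling designs such as `LatinHypercube` and `Sobol`. As a dependency-free illustration of the idea, here is a minimal Latin hypercube sketch (one draw per stratum in each dimension, shuffled across runs) whose output could be fed to a batch runner as parameter dicts; the function and parameter names are assumptions for illustration.

```python
import random

def latin_hypercube(bounds, n_samples, seed=0):
    """bounds: {name: (low, high)}; returns n_samples parameter dicts."""
    rng = random.Random(seed)
    samples = [{} for _ in range(n_samples)]
    for name, (low, high) in bounds.items():
        # One uniform draw inside each of n_samples equal strata of [0, 1),
        # then shuffle so strata are paired randomly across dimensions.
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(strata)
        for sample, u in zip(samples, strata):
            sample[name] = low + u * (high - low)
    return samples

samples = latin_hypercube(
    {"virus_spread_chance": (0.1, 1.0), "recovery_chance": (0.1, 1.0)}, 8
)
```

Unlike a rigid grid, the number of runs here is chosen freely and every run explores all dimensions at once, which is why such designs pair well with tight compute budgets.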
From my reading of the example, the snippet

    # Define model parameters and their ranges to be sampled
    model.uncertainties = [
        IntegerParameter("num_nodes", 10, 100),
        IntegerParameter("avg_node_degree", 2, 8),
        RealParameter("virus_spread_chance", 0.1, 1),
        RealParameter("virus_check_frequency", 0.1, 1),
        RealParameter("recovery_chance", 0.1, 1),
        RealParameter("gain_resistance_chance", 0.1, 1),
    ]

provides a range of the parameters to be sampled. Is a rigid grid scan (
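For scale on the grid-scan question: a rigid grid over the six parameters above explodes combinatorially even at coarse resolution. The resolution of 4 levels per parameter below is an assumed, illustrative choice.

```python
import itertools

levels = 4     # assumed coarse resolution per parameter
n_params = 6   # the six uncertainties listed above

# Full factorial grid: every combination of levels across all parameters.
grid = list(itertools.product(range(levels), repeat=n_params))
print(len(grid))       # 4**6 = 4096 configurations
print(len(grid) * 10)  # with 10 replications each: 40960 runs
```

At the ~1000-run budget discussed earlier in the thread, even this coarse grid is far out of reach, which is the usual argument for quasi-random sampling over grid scans.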
when calling
Add a function that automatically configures the batchrunner to perform local sensitivity analysis. The only required input is a model.
Extensive docstring is included, see that for details.
Feedback, both conceptually and implementation wise, is very welcome!
Usage example: