6 changes: 3 additions & 3 deletions aviary/docs/getting_started/onboarding_level1.ipynb
@@ -1020,7 +1020,7 @@
"source": [
"### SQLite database file\n",
"\n",
"There is a `.db` file after run. By default, it is {glue:md}`record_filename_default` in the current directory. This is an SQLite database file. In level 2 and level 3, we will be able to choose a different name. Our run is recorded into this file. You generally shouldn't need to parse through this file on your own, but it is available if you're seeking additional problem information."
"There is a `.db` file created after run called {glue:md}'problem_history.db' in the report directory. This is an SQLite database file. Our run is recorded into this file. You generally shouldn't need to parse through this file on your own, but it is available if you're seeking additional problem information."
]
},
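For readers who do want to look inside the recording, here is a minimal sketch of opening it with OpenMDAO's `CaseReader`. The path is illustrative; point it at wherever your run placed `problem_history.db`.

```python
import openmdao.api as om

# Illustrative path: adjust to your run's output directory.
db_path = 'your_run_out/problem_history.db'

cr = om.CaseReader(db_path)
cases = cr.get_cases('problem')  # Aviary records the final problem case under the 'problem' source
case = cases[0]
case.list_outputs()  # browse everything that was recorded
```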
{
@@ -1152,7 +1152,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "aviary",
"language": "python",
"name": "python3"
},
@@ -1166,7 +1166,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
"version": "3.12.9"
}
},
"nbformat": 4,
17 changes: 6 additions & 11 deletions aviary/docs/getting_started/onboarding_level2.ipynb
@@ -116,13 +116,11 @@
"- {glue:md}`phase_info`: not provided (and will be loaded from {glue:md}`phase_info_path`)\n",
"- {glue:md}`optimizer`: {glue:md}`optimizer_default`\n",
"- {glue:md}`objective_type`: {glue:md}`objective_type_default`\n",
"- {glue:md}`record_filename`: {glue:md}`record_filename_default`\n",
"- {glue:md}`restart_filename`: {glue:md}`restart_filename_default`)\n",
"- {glue:md}`restart_filename`: {glue:md}`restart_filename_default`\n",
"- {glue:md}`max_iter`: {glue:md}`max_iter_default`\n",
"- {glue:md}`run_driver`: {glue:md}`run_driver_default`\n",
"- {glue:md}`make_plots`: {glue:md}`make_plots_default`\n",
"- {glue:md}`phase_info_parameterization`: {glue:md}`phase_info_parameterization_default`\n",
"- {glue:md}`optimization_history_filename`: {glue:md}`optimization_history_filename_default`\n",
"- {glue:md}`verbosity`: {glue:md}`verbosity_default`\n",
"\n",
"All the above arguments are straightforward except {glue:md}`objective_type`. Even though {glue:md}`objective_type` is `None`, it is not treated as `None`. In this scenario, the objective is set based on {glue:md}`problem_type` when using the {glue:md}`2DOF` mission method. There are three options for {glue:md}`problem_type` which is set to {glue:md}`SIZING` as default when aircraft is created. Aviary has the following mapping when user does not set {glue:md}`objective_type` but set `mission_method` to {glue:md}`2DOF` (in .csv file):\n",
@@ -228,7 +226,6 @@
"aircraft_data = 'models/aircraft/test_aircraft/aircraft_for_bench_GwGm.csv'\n",
"optimizer = 'IPOPT'\n",
"objective_type = None\n",
"record_filename = 'aviary_history.db'\n",
"restart_filename = None\n",
"max_iter = 0\n",
"phase_info = deepcopy(av.default_2DOF_phase_info)\n",
@@ -263,7 +260,7 @@
"prob.setup()\n",
"\n",
"# run the problem we just set up\n",
"prob.run_aviary_problem(record_filename, restart_filename=restart_filename)"
"prob.run_aviary_problem(restart_filename=restart_filename)"
]
},
{
@@ -313,7 +310,6 @@
"aircraft_data = 'models/aircraft/test_aircraft/aircraft_for_bench_GwGm.csv'\n",
"optimizer = 'IPOPT'\n",
"objective_type = None\n",
"record_filename = 'aviary_history.db'\n",
"restart_filename = None\n",
"max_iter = 1\n",
"\n",
@@ -822,7 +818,7 @@
"id": "107f7407",
"metadata": {},
"source": [
"This is a simple wrapper of Dymos' [run_problem()](https://openmdao.github.io/dymos/api/run_problem_function.html) function. It allows the users to provide `record_filename`, `restart_filename`, `suppress_solver_print`, and `run_driver`. In our case, `record_filename` is changed to `aviary_history.db` and `restart_filename` is set to `None`. The rest of the arguments take default values. If a restart file name is provided, aviary (or dymos) will load the states, controls, and parameters as given in the provided case as the initial guess for the next run. We have discussed the `.db` file in [level 1 onboarding doc](onboarding_level1) and will discuss how to use it to generate useful output in [level 3 onboarding doc](onboarding_level3).\n",
"This is a simple wrapper of Dymos' [run_problem()](https://openmdao.github.io/dymos/api/run_problem_function.html) function. It allows the users to provide, `restart_filename`, `suppress_solver_print`, and `run_driver`. In our case, `restart_filename` is set to `None`. The rest of the arguments take default values. If a restart file name is provided, aviary (or dymos) will load the states, controls, and parameters as given in the provided case as the initial guess for the next run. We have discussed the `.db` file in [level 1 onboarding doc](onboarding_level1) and will discuss how to use it to generate useful output in [level 3 onboarding doc](onboarding_level3).\n",
"\n",
"Finally, we can add a few print statements for the variables that we are interested:\n"
]
@@ -951,7 +947,6 @@
"mass_method = 'FLOPS'\n",
"optimizer = 'SLSQP'\n",
"objective_type = None\n",
"record_filename = 'history.db'\n",
"restart_filename = None\n",
"\n",
"# Build problem\n",
@@ -975,7 +970,7 @@
"\n",
"prob.setup()\n",
"\n",
"prob.run_aviary_problem(record_filename)\n",
"prob.run_aviary_problem()\n",
"\n",
"print('done')"
]
@@ -1018,7 +1013,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "aviary",
"language": "python",
"name": "python3"
},
@@ -1032,7 +1027,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
"version": "3.12.9"
}
},
"nbformat": 4,
4 changes: 1 addition & 3 deletions aviary/docs/user_guide/aviary_commands.ipynb
@@ -787,8 +787,6 @@
"source": [
"To use this utility, either a problem has been run or a run script is provided.\n",
"\n",
"{glue:md}`--problem_recorder` is an input. Default is {glue:md}`problem_recorder_default`.\n",
"{glue:md}`--driver_recorder` is an optional input.\n",
"{glue:md}`--port` is the dashboard server port ID. The default is {glue:md}`port_default` meaning any free port.\n",
"{glue:md}`-b` or {glue:md}`--background` indicates to run in background. Default is `False`.\n",
"{glue:md}`-d` or {glue:md}`--debug` indicates to show debugging output. Default is `False`.\n",
@@ -845,7 +843,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "base",
"display_name": "aviary",
"language": "python",
"name": "python3"
},
@@ -174,9 +174,7 @@
"\n",
"prob.setup()\n",
"\n",
"prob.run_aviary_problem(\n",
" record_filename='level2_example.db', suppress_solver_print=True, make_plots=False\n",
")"
"prob.run_aviary_problem(suppress_solver_print=True, make_plots=False)"
]
},
{
74 changes: 27 additions & 47 deletions aviary/docs/user_guide/postprocessing_and_visualizing_results.ipynb
@@ -51,32 +51,31 @@
"\n",
"| **Section** | **File** | **Location** |\n",
"|--------------|-----------------------------------------------|--------------------------------------------------------------------------------|\n",
"| Model | Input Checks | ./reports/*name_of_run_script*/input_checkss.md |\n",
"| Model | Inputs | ./reports/*name_of_run_script*/inputs.html |\n",
"| Model | Debug Input List | ./input_list.txt |\n",
"| Model | Debug Input List | ./output_list.txt |\n",
"| Model | N2 | ./reports/*name_of_run_script*/n2.html |\n",
"| Model | Trajectory Linkage Report | ./reports/*name_of_run_script*/traj_linkage_report.html |\n",
"| Optimization | Driver Scaling Report | ./reports/*name_of_run_script*/driver_scaling_report.html |\n",
"| Optimization | Total Coloring Report | ./reports/*name_of_run_script*/total_coloring.html |\n",
"| Optimization | Optimization Report | ./reports/*name_of_run_script*/opt_report.html |\n",
"| Optimization | SNOPT Output (similarly for other optimizers) | ./reports/*name_of_run_script*/SNOPT_print.out |\n",
"| Optimization | Driver recording | Case Recorder file specified by `driver_recorder` command option |\n",
"| Results | Trajectory Results Report | ./reports/*name_of_run_script*/traj_results_report.html |\n",
"| Results | Subsystem Results | ./reports/subsystems/*name_of_subsystem.md (or .html)* |\n",
"| Results | Mission Results | ./reports/subsystems/mission_summary.md |\n",
"| Results | Problem final case recording | Case Recorder file specified by `problem_recorder` command option, default is {glue:md}`problem_recorder_default` |\n",
"\n",
"As an example of the workflow for the dashboard, assume that the user has run an Aviary script, {glue:md}`run_level2_example.py`, which records both the `Problem` final case and also all the cases of the optimization done by the [`Driver`](https://openmdao.org/newdocs/versions/latest/features/building_blocks/drivers/). The sample code can be found in {glue:md}`aviary/examples` folder. (To record both the Problem final case and also the Driver optimization iterations, the user must make use of the {glue:md}`optimization_history_filename` option in the call to {glue:md}`run_aviary_problem()`.)\n",
"| Model | Input Checks | ./*name_of_run_script*_out/reports/input_checks.md |\n",
"| Model | Inputs | ./*name_of_run_script*_out/reports/inputs.html |\n",
"| Model | Debug Input List | ./*name_of_run_script*_out/reports/input_list.txt |\n",
"| Model | Debug Input List | ./*name_of_run_script*_out/reports/output_list.txt |\n",
"| Model | N2 | ./*name_of_run_script*_out/reports/n2.html |\n",
"| Model | Trajectory Linkage Report | ./*name_of_run_script*_out/reports/traj_linkage_report.html |\n",
"| Optimization | Driver Scaling Report | ./*name_of_run_script*_out/reports/driver_scaling_report.html |\n",
"| Optimization | Total Coloring Report | ./*name_of_run_script*_out/reports/total_coloring.html |\n",
"| Optimization | Optimization Report | ./*name_of_run_script*_out/reports/opt_report.html |\n",
"| Optimization | SNOPT Output (similarly for other optimizers) | ./*name_of_run_script*_out/reports/SNOPT_print.out |\n",
"| Results | Trajectory Results Report | ./*name_of_run_script*/reports/traj_results_report.html |\n",
"| Results | Subsystem Results | ./*name_of_run_script*_out/reports/subsystems/*name_of_subsystem.md (or .html)*|\n",
Member (review comment): Results: Trajectory Results Report and Optimization: Driver recording are missing from the new list. Maybe was intentional?

Contributor Author: Good catch, that shouldn't have gotten removed.
"| Results | Mission Results | ./*name_of_run_script*_out/reports/subsystems/mission_summary.md |\n",
"| Results | Problem final case recording | ./*name_of_run_script*_out/problem_history.db |\n",
"\n",
"As an example of the workflow for the dashboard, assume that the user has run an Aviary script, {glue:md}`run_level2_example.py`, which records both the `Problem` final case and also all the cases of the optimization done by the [`Driver`](https://openmdao.org/newdocs/versions/latest/features/building_blocks/drivers/). The sample code can be found in {glue:md}`aviary/examples` folder. (To record both the Problem final case and also the Driver optimization iterations, the user must make use of the `verbosity` flag in the call to {glue:md}`run_aviary_problem()`.)\n",
"\n",
"```bash\n",
"python level2_example.py\n",
"```\n",
"\n",
"In this example, the case recorder files are named `problem_final_case.db` and `driver_cases.db`, respectively. So after the run is completed, the user could run the dashboard using:\n",
"After the run is completed, the user can run the dashboard using:\n",
"\n",
"```bash\n",
"aviary dashboard level2_example --problem_recorder=problem_final_case.db --driver_recorder=driver_cases.db\n",
"aviary dashboard level2_example\n",
"```\n",
"\n",
"```{note}\n",
@@ -122,9 +121,7 @@
"file_name = 'run_level2_example'\n",
"commands = [\n",
" 'python ' + file_name + '.py',\n",
" 'aviary dashboard '\n",
" + file_name\n",
" + ' --problem_recorder=problem_final_case.db --driver_recorder=driver_cases.db --background',\n",
" 'aviary dashboard ' + file_name + '--background',\n",
"]\n",
"with tempfile.TemporaryDirectory() as tempdir:\n",
" os.chdir(tempdir)\n",
@@ -150,22 +147,20 @@
"The Problem recorder file is required for the Aircraft 3d model tab to be displayed in the dashboard.\n",
"```\n",
"\n",
"The {glue:md}`--problem_recorder` and {glue:md}`--driver_recorder` options to the dashboard command are used to indicate the file names for those recorder files, if they are not the standard values of {glue:md}`problem_recorder_default` and {glue:md}`driver_recorder_default`, respectively. If {glue:md}`--driver_recorder` is set to the string `\"None\"`, then the driver case recorder file is ignored. This is useful if the user is not interested in seeing dashboard tabs related to driver history. If that file is large, it could unnecessarily be read and slow down the generation of the dashboard significantly.\n",
"\n",
"### Saving and Sharing Dashboards\n",
"\n",
"The user can also save a dashboard and share it with other users to view. The dashboard is saved as a zip file. To save a dashboard to a file, use the {glue:md}`--save` option. For example, \n",
"\n",
"```bash\n",
"aviary dashboard --save --problem_recorder=problem_final_case.db --driver_recorder=driver_cases.db\n",
"aviary dashboard --save\n",
"```\n",
"\n",
"By default, the zip file is named based on the name of the problem. So in this example, the saved zip file will be named {glue:md}`run_level2_example.zip`.\n",
"\n",
"If the user wants to save to a different file, they can provide that file name as an argument to the {glue:md}`--save` option as in this example:\n",
"\n",
"```bash\n",
"aviary dashboard --save saved_dashboard.zip level2_example --problem_recorder=problem_final_case.db --driver_recorder=driver_cases.db\n",
"aviary dashboard --save saved_dashboard.zip level2_example\n",
"```\n",
"\n",
"In this case, the zip file will be named `saved_dashboard.zip`. \n",
@@ -225,24 +220,7 @@
"\n",
"### Database Output Files\n",
"\n",
"There is an SQLite database output. By default, it is {glue:md}`problem_recorder_default`. It can be used to rerun your case though we do not detail that here. Users can write separate Python script to create user customized outputs and graphs. We will show how to use the this database to create user's customized graph in [the onboarding docs](../getting_started/onboarding)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"remove-cell"
]
},
"outputs": [],
"source": [
"# Testing Cell\n",
"from aviary.interface.methods_for_level2 import AviaryProblem\n",
"from aviary.utils.doctape import check_args\n",
"\n",
"check_args(AviaryProblem.run_aviary_problem, {'record_filename': 'problem_history.db'}, exact=False)"
"Aviary creates an SQLite database output called `problem_history.db`. It can be used to rerun your case, though we do not detail that here. Users can write a separate Python script to create user customized outputs and graphs. We will show how to use the this database to create a customized graph in [the onboarding docs](../getting_started/onboarding)."
]
},
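As a sketch of such a script, the snippet below reads the recording and plots one quantity against another. The variable paths are hypothetical; use `case.list_outputs()` to see what your run actually recorded, and adjust the database path to your output directory.

```python
import matplotlib.pyplot as plt
import openmdao.api as om

# Illustrative path; adjust to your own run's output directory.
cr = om.CaseReader('your_run_out/problem_history.db')
case = cr.get_cases('problem')[0]

# Hypothetical timeseries variable names; check case.list_outputs() for the real ones.
time = case.get_val('traj.cruise.timeseries.time', units='s')
altitude = case.get_val('traj.cruise.timeseries.altitude', units='ft')

plt.plot(time, altitude)
plt.xlabel('time (s)')
plt.ylabel('altitude (ft)')
plt.savefig('cruise_altitude.png')
```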
{
@@ -306,7 +284,9 @@
" verbosity=verbosity,\n",
" )\n",
" sys.stdout = old_stdout\n",
" folder_contents = [f.name for f in os.scandir(tempdir)]\n",
" folder_contents = [\n",
" f.name for f in os.scandir(tempdir + '/aircraft_for_bench_FwFm_out/reports/')\n",
" ]\n",
" all_files = []\n",
" for p, d, f in os.walk(tempdir):\n",
" all_files += f\n",
@@ -409,7 +389,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "aviary",
"language": "python",
"name": "python3"
},
@@ -423,7 +403,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
"version": "3.12.9"
}
},
"nbformat": 4,
6 changes: 3 additions & 3 deletions aviary/examples/run_level2_with_detailed_landing.py
@@ -167,13 +167,13 @@

prob.setup()

prob.run_aviary_problem(record_filename='detailed_landing.db')
prob.run_aviary_problem()

try:
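# get_outputs_dir() points at this run's output folder, where problem_history.db is written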
loc = prob.get_outputs_dir()
cr = om.CaseReader(f'{loc}/detailed_landing.db')
cr = om.CaseReader(f'{loc}/problem_history.db')
except:
cr = om.CaseReader('detailed_landing.db')
cr = om.CaseReader('problem_history.db')

cases = cr.get_cases('problem')
case = cases[0]
6 changes: 3 additions & 3 deletions aviary/examples/run_level2_with_detailed_takeoff.py
@@ -327,13 +327,13 @@

prob.setup()

prob.run_aviary_problem(record_filename='detailed_takeoff.db', suppress_solver_print=True)
prob.run_aviary_problem(suppress_solver_print=True)

try:
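# get_outputs_dir() points at this run's output folder, where problem_history.db is written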
loc = prob.get_outputs_dir()
cr = om.CaseReader(f'{loc}/detailed_takeoff.db')
cr = om.CaseReader(f'{loc}/problem_history.db')
except:
cr = om.CaseReader('detailed_takeoff.db')
cr = om.CaseReader('problem_history.db')

cases = cr.get_cases('problem')
case = cases[0]