sphinx/source/install_usage/install.rst (+25 −16 lines)
@@ -2,9 +2,9 @@
Installation
************

-For x86 systems we provide pre-built docker images users can quickly start with their own TAU instrumented applications (See `Chimbuko docker <https://codarcode.github.io/Chimbuko/installation/docker.html>`_). Otherwise, we recommend that Chimbuko be installed via the `Spack package manager <https://spack.io/>`_. Below we provide instructions for installing Chimbuko on a typical Ubuntu desktop and also on the Summit computer. Some details on installing Chimbuko in absence of Spack can be found in the :ref:`Appendix <manual_installation_of_chimbuko>`.
+For x86 systems we provide pre-built docker images with which users can quickly start their own TAU-instrumented applications (see `Chimbuko docker <https://codarcode.github.io/Chimbuko/installation/docker.html>`_). Otherwise, we recommend that Chimbuko be installed via the `Spack package manager <https://spack.io/>`_. Below we provide instructions for installing Chimbuko on a typical Ubuntu desktop and also on the Summit and Crusher computers. Some details on installing Chimbuko in the absence of Spack can be found in the :ref:`Appendix <manual_installation_of_chimbuko>`.

-In all cases, the first step is to download and install Spack following the instructions `here <https://github.com/spack/spack>`_ . Note that installing Spack requires Python.
+The first step is to download and install Spack following the instructions `here <https://github.com/spack/spack>`_. Note that installing Spack requires Python.

We require Spack repositories for Chimbuko and for the Mochi stack:
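As a hedged sketch of this step (the repository-setup commands are not shown in this hunk; the clone URLs and the in-repo package path are assumptions based on the project names):

.. code:: bash

   # Download the two package repositories (URLs assumed).
   git clone https://github.com/CODARcode/PerformanceAnalysis.git
   git clone https://github.com/mochi-hpc/mochi-spack-packages.git

   # Register them with Spack; the chimbuko package-repo path within
   # PerformanceAnalysis is a guess and may differ.
   spack repo add PerformanceAnalysis/spack/repo/chimbuko
   spack repo add mochi-spack-packages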
@@ -24,9 +24,10 @@ A basic installation of Chimbuko can be achieved very easily:
.. code:: bash

-   spack install chimbuko^py-setuptools-scm+toml
+   spack install chimbuko

-Note that the dependency on :code:`py-setuptools-scm+toml` resolves a dependency conflict likely resulting from a bug in Spack's current dependency resolution.
+..
+   ^py-setuptools-scm+toml Note that the dependency on :code:`py-setuptools-scm+toml` resolves a dependency conflict likely resulting from a bug in Spack's current dependency resolution.

A Dockerfile (instructions for building a Docker image) that installs Chimbuko on top of a basic Ubuntu 18.04 image following the above steps can be found `here <https://github.com/CODARcode/PerformanceAnalysis/blob/master/docker/ubuntu18.04/openmpi4.0.4/Dockerfile.chimbuko.spack>`_.
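As a hedged usage sketch (the image tag is arbitrary; the build context and file name follow from the linked path):

.. code:: bash

   # Build the image from the PerformanceAnalysis checkout (tag is arbitrary).
   cd PerformanceAnalysis/docker/ubuntu18.04/openmpi4.0.4
   docker build -f Dockerfile.chimbuko.spack -t chimbuko-spack:latest .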
@@ -71,7 +72,10 @@ Chimbuko can be built without MPI by disabling the **mpi** Spack variant as follows:
When used in this mode the user is responsible for manually assigning a "rank" index to each instance of the online AD module, and also for ensuring that an instance of this module is created alongside each instance or rank of the target application (e.g. using a wrapper script that is launched via mpirun). We discuss how this can be achieved :ref:`here <non_mpi_run>`.
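As a hedged illustration only (not the project's documented method; the AD executable name, its :code:`-rank` flag, and the launcher environment variables are assumptions), such a wrapper might look like:

.. code:: bash

   #!/bin/bash
   # Hypothetical wrapper, launched as e.g.:  mpirun -n N ./ad_wrapper.sh ./my_app
   # Derive a per-process "rank" index from the MPI launcher environment
   # (OpenMPI variable shown; Slurm's srun provides SLURM_PROCID instead).
   RANK=${OMPI_COMM_WORLD_RANK:-${SLURM_PROCID:-0}}

   # Start one online AD instance alongside this rank of the application
   # (executable name and flag are placeholders).
   ${AD_EXE} -rank ${RANK} &
   AD_PID=$!

   "$@"            # run this rank of the target application
   wait ${AD_PID}  # collect the AD instance on exit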
@@ -117,10 +121,10 @@ Once installed, simply
after loading the modules above.

-Spock
+Crusher
~~~~~~~

-In the PerformanceAnalysis source we also provide a Spack environment yaml for use on Spock, :code:`spack/environments/spock.yaml`. This environment is designed for the AMD compiler suite with Rocm 4.3.0. Installation instructions follow:
+In the PerformanceAnalysis source we also provide a Spack environment yaml for use on Crusher, :code:`spack/environments/crusher_rocm5.2_PrgEnv-amd.yaml`. This environment is designed for the AMD programming environment with ROCm 5.2.0. Installation instructions follow:
First download the Chimbuko and Mochi repositories:
@@ -129,7 +133,7 @@ First download the Chimbuko and Mochi repositories:
-Copy the file :code:`spack/environments/spock.yaml` from the PerformanceAnalysis git repository to a convenient location and edit the paths in the :code:`repos` section to point to the paths at which you downloaded the repositories:
+Copy the file :code:`spack/environments/crusher_rocm5.2_PrgEnv-amd.yaml` from the PerformanceAnalysis git repository to a convenient location and edit the paths in the :code:`repos` section to point to the paths at which you downloaded the repositories:

.. code:: yaml
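   # Hedged sketch of the edited "repos" section; the yaml body is elided in
   # this hunk, so the surrounding keys and paths shown here are assumptions.
   spack:
     repos:
     - /path/to/PerformanceAnalysis/spack/repo/chimbuko   # assumed package-repo path
     - /path/to/mochi-spack-packages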
@@ -141,10 +145,18 @@ This environment uses the following modules, which must be loaded prior to installation
The output of Chimbuko is stored in the provenance database. The database is sharded over multiple files of the form **provdb.${SHARD}.unqlite** that are by default output into the :file:`chimbuko/provdb` directory in the run path. We provide several tools for analyzing the contents of the provenance database:

1. **provdb_query**, a command-line tool for filtering and querying the database

2. A **Python module** for connecting to the database, filtering and querying, for use in custom analysis tools

3. **provdb-python**, a Python-based command-line tool for analyzing the database

Once installed, the tool can be used as a regular command-line program, executed from the directory containing the provenance database UnQLite files:

.. code:: bash

   cd chimbuko/provdb
   provdb-python

Several components are available, with further documentation forthcoming.

Using **provdb_query**
~~~~~~~~~~~~~~~~~~~~~~

The provenance database is stored in sharded files, **provdb.${SHARD}.unqlite**, in the job's run directory. From this directory the user can interact with the provenance database via the visualization module. A more general command-line interface to the database is also provided via the **provdb_query** tool, which allows the user to execute arbitrary jx9 queries on the database.

The **provdb_query** tool has two modes of operation: **filter** and **execute**.

Filter mode
-----------

**filter** mode allows the user to provide a jx9 filter function that is applied to filter the entries of a particular collection. The result is displayed in JSON format and can be piped to disk. It can be used as follows:

.. code:: bash

   provdb_query filter ${COLLECTION} ${QUERY}

Where the variables are as follows:

- **COLLECTION** : one of the three collections in the database, **anomalies**, **normalexecs**, **metadata** (cf. :ref:`introduction/provdb:Provenance Database`).
- **QUERY** : the query, in the format described below.

The **QUERY** argument should be a jx9 function returning a bool, enclosed in quotation marks. It should be of the following format:
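The format sketch itself is elided in this hunk; judging from the description that follows (which compares a field :code:`some_field` against **${SOME_VALUE}**), it presumably takes a form like:

.. code:: bash

   # A hedged sketch, not the documented template: a quoted jx9 lambda whose
   # \$entry variable is escaped to survive shell expansion.
   "function(\$entry){ return \$entry['some_field'] == ${SOME_VALUE}; }"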
Alternatively the query can be set to "DUMP", which will output all entries.

The function is applied sequentially to each element of the collection. Inside the function the entry is described by the variable **$entry**. Note that the backslash-dollar (\$) is necessary to prevent the shell from trying to expand the variable. Fields of **$entry** can be queried using the square-bracket notation with the field name inside. In the sketch above the field "some_field" is compared to a value **${SOME_VALUE}** (here representing a numerical value or a value expanded by the shell, *not* a jx9 variable!).

Some examples:

- Find every anomaly whose function contains the substring "Kokkos":
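The example command itself is elided in this hunk; a hedged sketch, assuming the anomaly entries carry a :code:`func` field and that jx9 provides the PHP-style :code:`substr_count` builtin:

.. code:: bash

   provdb_query filter anomalies "function(\$entry){ return substr_count(\$entry['func'], 'Kokkos') > 0; }"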
If the pserver is connected to the provenance database, at the end of the run the aggregated function profile data and global averages of counters will be stored in a "global" database, "provdb.global.unqlite". This database can be queried using the **filter-global** mode, which like the above allows the user to provide a jx9 filter function that is applied to filter the entries of a particular collection. The result is displayed in JSON format and can be piped to disk. It can be used as follows:

.. code:: bash

   provdb_query filter-global ${COLLECTION} ${QUERY}

Where the variables are as follows:

- **COLLECTION** : one of the two collections in the database, **func_stats**, **counter_stats**.
- **QUERY** : the query.

The formatting of the **QUERY** argument is described above.

Execute mode
------------

**execute** mode allows running a complete jx9 script on the database as a whole, allowing for more complex queries that collect different outputs and span collections.
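The invocation itself is elided in this hunk; based on the **CODE** and **VARIABLES** arguments discussed below, it presumably takes a form like:

.. code:: bash

   # Hedged sketch of the execute-mode invocation (argument order assumed).
   provdb_query execute ${CODE} ${VARIABLES}

Where the variables are as follows:

- **CODE** : the jx9 script (see below).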
- **VARIABLES** : a comma-separated list (without spaces) of the variables assigned by the script

The **CODE** argument is a complete jx9 script. As above, backslashes ('\') must be placed before internal '$' and '"' characters to prevent shell expansion.

If the option **-from_file** is specified, the **${CODE}** variable above will be treated as a filename from which to obtain the script. Note that in this case the backslashes before the special characters are not necessary.
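A hedged usage sketch of this option (its placement on the command line is an assumption):

.. code:: bash

   # The script file takes the place of ${CODE}; no shell escaping needed inside it.
   provdb_query execute -from_file my_script.jx9 ${VARIABLES}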
sphinx/source/install_usage/run_chimbuko.rst (+3 −84 lines)
@@ -172,10 +172,10 @@ which can be used as follows:
<LAUNCH N RANKS OF APP ON BODY NODES> = jsrun -U main.urs

-Running on Spock
-^^^^^^^^^^^^^^^^
+Running on Slurm-based systems
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-In this section we provide specifics on launching on the Spock machine.
+In this section we provide specifics on launching on the Spock machine, but the procedure also applies to other machines using the Slurm task scheduler.

Spock uses the *slurm* job management system. To control the explicit placement of the ranks we will use the :code:`--nodelist` (:code:`-w`) slurm option to specify the nodes associated with a resource set, the :code:`--nodes` (:code:`-N`) option to specify the number of nodes and the :code:`--overlap` option to allow the AD and application resource sets to coexist on the same node. These options are documented `here <https://slurm.schedmd.com/srun.html>`_.
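As a hedged sketch of combining these options (node names, rank counts, and executable names are illustrative placeholders, not the documented launch line):

.. code:: bash

   # Application resource set: 2 nodes, 8 ranks per node (values illustrative).
   srun -N 2 --nodelist=node01,node02 --ntasks-per-node=8 --overlap ./my_app &

   # Overlapping AD resource set on the same nodes, one instance per application rank.
   srun -N 2 --nodelist=node01,node02 --ntasks-per-node=8 --overlap ${AD_EXE} &

   wait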
@@ -350,84 +350,3 @@ To run the image the user must have access to a system with an installation of t
nvidia-docker run -p 5002:5002 --cap-add=SYS_PTRACE --security-opt seccomp=unconfined chimbuko/run_mocu:latest
And connect to this visualization server at **localhost:5002**.
-
-Interacting with the Provenance Database
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The provenance database is stored in a single file, **provdb.${SHARD}.unqlite** in the job's run directory. From this directory the user can interact with the provenance database via the visualization module. A more general command line interface to the database is also provided via the **provdb_query** tool that allows the user to execute arbitrary jx9 queries on the database.
-
-The **provdb_query** tool has two modes of operation: **filter** and **execute**.
-
-Filter mode
------------
-
-**filter** mode allows the user to provide a jx9 filter function that is applied to filter out entries in a particular collection. The result is displayed in JSON format and can be piped to disk. It can be used as follows:
-
-.. code:: bash
-
-   provdb_query filter ${COLLECTION} ${QUERY}
-
-Where the variables are as follows:
-
-- **COLLECTION** : One of the three collections in the database, **anomalies**, **normalexecs**, **metadata** (cf :ref:`introduction/provdb:Provenance Database`).
-- **QUERY**: The query, format described below.
-
-The **QUERY** argument should be a jx9 function returning a bool and enclosed in quotation marks. It should be of the format
-
-Alternatively the query can be set to "DUMP", which will output all entries.
-
-The function is applied sequentially to each element of the collection. Inside the function the entry is described by the variable **$entry**. Note that the backslash-dollar (\$) is necessary to prevent the shell from trying to expand the variable. Fields of **$entry** can be queried using the square-bracket notation with the field name inside. In the sketch above the field "some_field" is compared to a value **${SOME_VALUE}** (here representing a numerical value or a value expanded by the shell, *not* a jx9 variable!).
-
-Some examples:
-
-- Find every anomaly whose function contains the substring "Kokkos":
-
-If the pserver is connected to the provenance database, at the end of the run the aggregated function profile data and global averages of counters will be stored in a "global" database "provdb.global.unqlite". This database can be queried using the **filter-global** mode, which like the above allows the user to provide a jx9 filter function that is applied to filter out entries in a particular collection. The result is displayed in JSON format and can be piped to disk. It can be used as follows:
-
-.. code:: bash
-
-   provdb_query filter-global ${COLLECTION} ${QUERY}
-
-Where the variables are as follows:
-
-- **COLLECTION** : One of the two collections in the database, **func_stats**, **counter_stats**.
-- **QUERY**: The query, format described below.
-
-The formatting of the **QUERY** argument is described above.
-
-Execute mode
-------------
-
-**execute** mode allows running a complete jx9 script on the database as a whole, allowing for more complex queries that collect different outputs and span collections.
-
-- **VARIABLES** : a comma-separated list (without spaces) of the variables assigned by the script
-
-The **CODE** argument is a complete jx9 script. As above, backslashes ('\') must be placed before internal '$' and '"' characters to prevent shell expansion.
-
-If the option **-from_file** is specified the **${CODE}** variable above will be treated as a filename from which to obtain the script. Note that in this case the backslashes before the special characters are not necessary.