KaliVeda
Toolkit for heavy-ion collision (HIC) analysis
If you want to use or try out KaliVeda, several options exist:
Continuously updated & tagged Docker containers for the latest version of KaliVeda are available from gitlab-registry.in2p3.fr/kaliveda-dev/dockers
(see here for a list of all available containers). See the Docker documentation if you are new to Docker containers.
To download and use the latest version of kaliveda (this corresponds to the tip of the dev
branch: see https://gitlab.in2p3.fr/kaliveda-dev/kaliveda for details) you can do:
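A sketch of the download step, assuming Docker is installed; [image]:[tag] is a placeholder for one of the containers listed in the registry linked above:
$ docker pull gitlab-registry.in2p3.fr/kaliveda-dev/dockers/[image]:[tag]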
You can then run the kaliveda command-line interface with
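for example (a sketch, assuming the container provides the kaliveda executable mentioned below):
$ docker run -it gitlab-registry.in2p3.fr/kaliveda-dev/dockers/[image]:[tag] kaliveda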
To have access to your HOME
directory from within the container (and start kaliveda with this directory as your working directory):
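a sketch using standard Docker volume and working-directory options:
$ docker run -it -v $HOME:$HOME -w $HOME gitlab-registry.in2p3.fr/kaliveda-dev/dockers/[image]:[tag] kaliveda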
Finally, in order to use the graphical interfaces provided by either ROOT or KaliVeda, you can do
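a sketch using standard X11-forwarding Docker options (the exact options recommended for the KaliVeda containers may differ, and you may need to allow local connections to your X server, e.g. with xhost):
$ docker run -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v $HOME:$HOME -w $HOME gitlab-registry.in2p3.fr/kaliveda-dev/dockers/[image]:[tag] kaliveda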
Note that the graphical display only works if the host machine is running either an X11/Xorg or XWayland display server. (Wayland has been the default display server for Ubuntu since 22.04, and by default XWayland should be installed and running; on earlier versions Xorg was the default. You can also use one of the legacy Xorg Ubuntu desktops.)
By making an alias to one of the above commands (see the sketch below), you can then use the container as if the kaliveda executable had been compiled and installed on your PC:
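A sketch, assuming a bash-like shell; the image name, tag and options are placeholders to be replaced by those of your chosen container:
$ alias kaliveda='docker run -it -v $HOME:$HOME -w $HOME -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix gitlab-registry.in2p3.fr/kaliveda-dev/dockers/[image]:[tag] kaliveda'
$ kaliveda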
The source code can be obtained from the git repository https://gitlab.in2p3.fr/kaliveda-dev/kaliveda :
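for example (a sketch, using the depth=1 option explained below):
$ git clone --depth=1 https://gitlab.in2p3.fr/kaliveda-dev/kaliveda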
(the depth=1
argument avoids downloading the full history of the git repository, which makes downloading much faster and lighter).
If you want to change the name of the resulting working directory, give the name you want as an extra argument:
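for example (the directory name my_kaliveda is purely illustrative):
$ git clone --depth=1 https://gitlab.in2p3.fr/kaliveda-dev/kaliveda my_kaliveda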
For more information, see the git documentation or type:
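for example:
$ git help clone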
KaliVeda is an object-oriented toolkit written in (mostly) C++ and based on ROOT, so the prerequisites for building and installing it are basically the same as those for ROOT: if you can build and install a functioning version of ROOT on your machine, you should be able to build and install KaliVeda.
Linux (Ubuntu, Scientific Linux, CentOS, Debian, ...) and MacOS X operating systems are supported. No support for Windows.
For the other required software see the prerequisites for ROOT for your OS.
Only ROOT versions >= 6.22 are supported. The following ROOT libraries must be installed (most of them are installed by default):
The following ROOT libraries are recommended but not compulsory:
Other optional software packages which may add more functionality to KaliVeda if installed before building include:
GEMINI++
If this nuclear statistical decay code is installed, we provide a simple interface to it, as well as the possibility to use it as an afterburner before filtering simulated data for detector efficiency. Instructions for downloading and installing as needed can be found here: Using GEMINI++ with KaliVeda
Google Protocol buffers
Raw FAZIA data is written using Google protocol buffers. Installation of the software for reading the buffers is mandatory if you want to read raw FAZIA data.
MultiFrame Metaformat (MFM) data
The GANIL/SPIRAL2 acquisition data format can be read by KaliVeda if you first build and install the package which can be found at https://gitlab.in2p3.fr/jdfcode/mfmlib. This is mandatory to be able to read raw data from INDRA-FAZIA experiments.
Mesytec data software
Since the 2022 upgrade of INDRA electronics, reading raw data requires the Mesytec data parsing software which is available from https://gitlab.in2p3.fr/mesytec-ganil/mesytec_data.
Only out-of-source builds are supported, i.e. cmake
must be used outside of the source directory. Let us suppose that the sources were cloned from the git repository to a local directory S. We then suppose the existence of a "build directory" B (where the sources are to be compiled) and an "install directory" I (where the compiled libraries and executables etc. are to be installed).
For example, imagine the following layout:
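A purely illustrative layout (the directory names are arbitrary):
$HOME/kaliveda/source/    <- S, the cloned sources
$HOME/kaliveda/build/     <- B, the build directory
$HOME/kaliveda/install/   <- I, the installation directory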
In the build directory, B, configure the build using the options detailed in Build options below. A minimum working example, with S and I replaced by the respective paths to the source and installation directories, would be:
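(a sketch; the bracketed names are placeholders for your actual directories)
$ cd [B]
$ cmake [S] -DCMAKE_INSTALL_PREFIX=[I]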
Then compile and install the sources as sketched below, where [N] is the number of processors/cores on your PC (parallel build).
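A sketch, assuming the default Makefile generator (with other CMake generators, cmake --build . --target install can be used instead):
$ make -j[N] install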
The options which can be used to configure the build are listed below, with their default values. Example of use:
$ cmake [path to sources] -DCMAKE_INSTALL_PREFIX=[installation path] -D[option]=[yes|no] ...
Option | Default | Meaning |
---|---|---|
WITH_INDRA_DATASETS | OFF | download & install datasets for INDRA experiments; automatically enables USE_INDRA |
WITH_FAZIA_DATASETS | OFF | download & install datasets for FAZIA experiments; automatically enables USE_FAZIA |
WITH_INDRAFAZIA_DATASETS | OFF | download & install datasets for INDRA-FAZIA experiments; automatically enables USE_INDRAFAZIA |
USE_INDRA | OFF | build libraries for INDRA data analysis & simulation |
USE_FAZIA | OFF | build libraries for FAZIA data analysis & simulation |
USE_INDRAFAZIA | OFF | build libraries for INDRA-FAZIA data analysis & simulation; automatically enables USE_INDRA and USE_FAZIA |
USE_MICROSTAT | ON | build libraries for generation of events with different statistical weights; see the MicroStat Package |
USE_FITLTG | ON | build Tassan-Got package for fitting identification grids (see KVTGID) |
USE_BUILTIN_GRU | ON | build own library for reading legacy GANIL acquisition data (not MFM format) |
USE_MFM | ON | use library for reading GANIL acquisition data in MFM format; requires the mfmlib package from https://gitlab.in2p3.fr/jdfcode/mfmlib.git |
USE_PROTOBUF | ON | use Google protocol buffers, e.g. for reading raw FAZIA data |
USE_MESYTEC | ON | enable support for reading (INDRA) data from Mesytec DAQ electronics; requires the software from https://gitlab.in2p3.fr/mesytec-ganil/mesytec_data |
USE_GEMINI | ON | build the KVGemini interface if the Gemini++ library is installed on the system and can be found; see Using GEMINI++ with KaliVeda for more details |
USE_SQLITE | ON | enable the SQLite database interface KVSQLite (if ROOT was compiled with SQLite support) |
gnuinstall | OFF | install libraries etc. following a standard GNU directory layout (see Installation directories below) |
CCIN2P3_BUILD | OFF | configure build for IN2P3 Computing Centre environment |
Note that dataset files will only be installed for those datasets whose WITH_*_DATASETS=ON build option (see above) is given. For full details on the build system and available options, see Appendix: KaliVeda build system.
Depending on the option gnuinstall given to cmake, two installation architectures are possible. The default layout (i.e. with gnuinstall=off|no) is:
bin/
etc/
examples/
include/
KVFiles/
    data/
    [dataset1]/
    [dataset2]/
    classTemplate1.{h,cpp}
    classTemplate2.{h,cpp}
lib/
man/
tools/
Note that this layout must not be used if you are installing in a read-only directory, i.e. if the value of CMAKE_INSTALL_PREFIX is a system directory, because dataset files and experimental database files which may be generated during use of KaliVeda will be written inside the installation directory.
When the option gnuinstall=on|yes is given, the installation follows near-standard GNU conventions:
bin/
include/
    kaliveda/
lib/
    kaliveda/
share/
    kaliveda/
        [dataset1]/
        [dataset2]/
        examples/
        data/
        etc/
        templates/
man/
This type of installation can be used with system directories, e.g. /usr/local, as any files generated during use of KaliVeda are written in a user-specific "working directory" which by default is created in $HOME/.kaliveda.
After building and installing the toolkit, a number of environment variables need to be set in order to use KaliVeda. To do this, execute one of the following shell scripts, which can be found in the bin directory of your installation:
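A sketch, assuming the scripts follow the thiskaliveda naming mentioned below; replace [installation path] by the value of CMAKE_INSTALL_PREFIX:
$ source [installation path]/bin/thiskaliveda.sh                        # bash-like shells
$ source [installation path]/bin/thiskaliveda.csh [installation path]   # (t)csh shells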
To set up the environment every time you login/open a terminal, add the following commands to the appropriate start-up script depending on your shell:
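For example, for a bash shell one might add the following line to ~/.bashrc (an illustration; adapt the start-up file and script to your own shell):
source [installation path]/bin/thiskaliveda.sh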
Note that for (t)csh shells the installation path must be given as an argument to the thiskaliveda.csh script.

In order to update the source code to the latest version in the gitlab repository, go to the source directory S (see Building and installing above) and do the following:
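for example (with S the source directory):
$ cd [S]
$ git pull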
If one or more of the dataset submodules which you have configured in your source tree have been updated, when you run the git pull
command you will see something like:
which indicates that datasets/INDRA
has updates. If you run git status
at this point (after having configured it to show information on submodule changes, e.g. via git's status.submoduleSummary option) then you will see something like:
which can be very confusing: you just did a git pull
which went well (apparently), and yet there are now uncommitted changes in your working copy? No, not at all: unlike for ordinary files (e.g. release_notes.md
in the above example), submodule changes are not automatically applied when git pull
is run. Instead you have to manually update the submodule(s).
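For example (a minimal sketch using the standard git submodule command):
$ git submodule update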
and then everything is fine:
In most cases it is then sufficient to return to the build directory B and recompile the sources (see the sketch below, where [N] is the number of cores/CPUs on your machine):
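A sketch, again assuming the default Makefile generator:
$ cd [B]
$ make -j[N] install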
In case of problems with the build, delete & recreate the build directory and start again from the cmake step of Building and installing:
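A sketch of this clean rebuild, with B, S and I as above and [N] the number of cores:
$ rm -rf [B]
$ mkdir [B]
$ cd [B]
$ cmake [S] -DCMAKE_INSTALL_PREFIX=[I]
$ make -j[N] install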