KaliVeda: Toolkit for HIC analysis
If you want to use or try out KaliVeda, several options exist:
The source code can be obtained from the git repository https://gitlab.in2p3.fr/kaliveda-dev/kaliveda :
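For example, using git clone (the directory name kaliveda is the repository default):

```
$ git clone https://gitlab.in2p3.fr/kaliveda-dev/kaliveda
```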
This will download the sources into a new directory called kaliveda in your current working directory. If you want to download to a different directory, which may or may not already exist, just give the path name as a second argument after the repository URL:
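For example, using the illustrative directory name from the next sentence:

```
$ git clone https://gitlab.in2p3.fr/kaliveda-dev/kaliveda my_kaliveda_sources
```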
This will create my_kaliveda_sources in the current working directory and download the sources there.
As development is always ongoing, several branches are available. When you clone the repository, you get the current default branch, which may not be the one you want/need:
| Branch name | Version | Notes |
|---|---|---|
| main | 1.15 | current default: includes new datasets E881 & E884 |
| 1.14 | 1.14 | can be used with data up to E818 |
| 1.12 | 1.12 | old version used for E789 |
| 1.10 | 1.10 | legacy support for INDRA-VAMOS data |
If you don't want the default branch, just change your current working directory to the directory where you downloaded the sources
and then execute the following command with the name of the branch you want, e.g. to change to the 1.14 branch:
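```
# switch the working copy to the 1.14 branch
$ git checkout 1.14
```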
KaliVeda is an object-oriented toolkit written in (mostly) C++ and based on ROOT, so the prerequisites for building and installing it are basically the same as those for ROOT: if you can build and install a functioning version of ROOT on your machine, you should be able to build and install KaliVeda.
Linux (Ubuntu, Scientific Linux, CentOS, Debian, ...) and macOS operating systems are supported; Windows is not supported.
For the other required software see the prerequisites for ROOT for your OS.
Only ROOT versions >= 6.22 are supported. Note that for ROOT versions >= 6.32, PROOF and PROOFLite are no longer enabled by default (or may simply be removed). We recommend using a ROOT version which has PROOF/PROOFLite support built-in.
The following ROOT libraries must be installed (most of them are installed by default):
The following ROOT libraries are recommended but not compulsory:
We recommend using ROOT with built-in SQLite support; if this is not possible, KaliVeda can provide built-in sqlite functionality as long as the required packages are present on the system (see below). Note that if you use a pre-compiled version of ROOT with SQLite support, you should ensure that the necessary SQLite packages are installed on your system (see below).
Other optional software packages which may add more functionality to KaliVeda if installed before building include:
sqlite
We highly recommend using a version of ROOT with built-in SQLite support (see above); if this is not possible, we provide a drop-in replacement for it, as long as sqlite (more precisely, the necessary development package, e.g. libsqlite3-dev on Ubuntu) is installed on the system. If you use a pre-compiled version of ROOT with SQLite support, you should also ensure that these packages are present on your system.
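For example, on Ubuntu/Debian the development package mentioned above can be installed with (package names differ on other distributions):

```
$ sudo apt-get install libsqlite3-dev
```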
Google Protocol buffers
Raw FAZIA data are written using Google protocol buffers, so installation of this software is mandatory if you want to read raw FAZIA data.
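As a sketch, on Ubuntu/Debian the protocol buffer library and compiler can typically be installed with the following packages (the exact packages required by KaliVeda are an assumption here):

```
$ sudo apt-get install libprotobuf-dev protobuf-compiler
```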
The list of options which can be used to configure the build is given here, with their default values in brackets. Example of use:
$ cmake [..see next subsection..] -D[option1]=[YES|NO] -D[option2]=[ON|OFF] [...]
| Option | Default | Meaning |
|---|---|---|
| WITH_INDRA_DATASETS | OFF | download & install datasets for INDRA experiments |
| WITH_FAZIA_DATASETS | OFF | download & install datasets for FAZIA experiments |
| WITH_INDRA_FAZIA_DATASETS | OFF | download & install datasets for INDRA-FAZIA experiments |
| WITH_ALL_DATASETS | OFF | download & install all datasets listed above |
| USE_GEMINI | OFF | build KVGemini interface to built-in Gemini++ statistical decay code |
| USE_MFM | OFF | use library for reading GANIL acquisition data in MFM format |
| USE_PROTOBUF | OFF | use Google protocol buffers e.g. for reading raw FAZIA data |
| USE_MESYTEC | OFF | enable support for reading (INDRA) data from Mesytec DAQ electronics |
| USE_FITLTG | OFF | build Tassan-Got package for fitting identification grids (see KVTGID) |
| USE_MICROSTAT | OFF | build libraries for generation of events with different statistical weights (see MicroStat package) |
| USE_BUILTIN_GRU | OFF | build own library for reading legacy GANIL acquisition data (not MFM format) |
| ENABLE_ALL_OPTIONS | OFF | enable all USE_* options listed above |
| USE_SQLITE | ON | enable SQLite database interface KVSQLite, if SQLite3 available |
| CCIN2P3_BUILD | OFF | configure build for IN2P3 Computing Centre environment |
| ONLINE_TOOLS | OFF | build & install tools used during experiment data taking |
Note that only the datasets for which the corresponding WITH_*_DATASETS=ON build option (see above) is given will be installed. For full details on the build system and available options, see Appendix: KaliVeda build system.
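For instance, to download the INDRA datasets and enable the Gemini++ interface you might add the corresponding options to the configure command (a sketch; the full configure command is given in the next section):

```
$ cmake [..see next subsection..] -DWITH_INDRA_DATASETS=ON -DUSE_GEMINI=ON
```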
Only out-of-source builds are supported: this means that the "build directory" must be in a path which is completely outside the path to the source directory. As good practice, the "installation directory" should also be completely outside both source and build directory paths. Once compiled and installed, you should be able to delete both the source directory and build directory without affecting the installed executables and libraries.
Let us suppose that the sources were cloned from the git repository to a local directory S. With a "build directory" B (where the sources are to be compiled) and an "install directory" I (where the compiled libraries, executables etc. are to be installed), configure the build in a working directory outside the source directory, using the options detailed in Build options above, with [B], [S] and [I] replaced by the respective (relative or absolute) paths to the build, source and installation directories.
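A minimal sketch of the configuration command, assuming a recent CMake (>= 3.13) which accepts the -B/-S options:

```
$ cmake -B [B] -S [S] -DCMAKE_INSTALL_PREFIX=[I] [-D<option>=ON/OFF ...]
```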
For example, if you cloned the git repository into S=/home/user/kaliveda/sources (see Build from source for how to choose the name of the directory you download the sources to with git clone) and want to install in I=/home/user/kaliveda/install, you could run the configuration command either from the directory /home/user or from /home/user/kaliveda, adjusting the relative paths accordingly.
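With these paths, the two equivalent invocations would look something like this (a sketch; the choice of build directory name is illustrative):

```
# from /home/user:
$ cmake -B kaliveda/build -S kaliveda/sources -DCMAKE_INSTALL_PREFIX=/home/user/kaliveda/install

# or from /home/user/kaliveda:
$ cmake -B build -S sources -DCMAKE_INSTALL_PREFIX=/home/user/kaliveda/install
```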
After successful configuration of the build, compile and install the code by doing
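the following, for example (a sketch assuming CMake >= 3.12, whose --build mode accepts the -j flag for parallel jobs):

```
$ cmake --build [B] --target install -j [N]
```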
where [N] is the number of processors/cores you want to use for parallel compilation of the code, or, assuming you have a standard Linux environment which uses GNU Makefiles by default,
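do the following from inside the build directory (a sketch):

```
$ cd [B]
$ make -j[N] install
```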
The default layout for the installation is:

```
bin/
include/
    kaliveda/
lib/
    kaliveda/
        cmake/
share/
    doc/
        kaliveda/
            examples/
                core/
                ...
    kaliveda/
        data/
        etc/
        templates/
        [dataset1]/
        [dataset2]/
        ...
    man/
        man1/
```
This makes it possible to install KaliVeda inside system directories, e.g. /usr/local, as any files generated during use of KaliVeda are written in a user-specific "working directory" which by default is created in $HOME/.kaliveda.
After building and installing the toolkit, quite a large number of environment variables need to be updated in order to be able to use KaliVeda. To do this execute one of the following shell scripts which can be found in the bin directory of your installation:
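A sketch (assuming the scripts are named thiskaliveda.sh and thiskaliveda.csh, as suggested by the note further below, and that [I] is your installation directory):

```
$ source [I]/bin/thiskaliveda.sh          # bash, zsh, ...
$ source [I]/bin/thiskaliveda.csh [I]     # (t)csh: installation path given as argument
```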
To set up the environment every time you login/open a terminal, add the following commands to the appropriate start-up script depending on your shell:
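For example, for a bash login shell you might add a line like the following to ~/.bashrc (a sketch; adapt the file and path to your shell):

```
source [I]/bin/thiskaliveda.sh
```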
For (t)csh shells, the installation path must be given as an argument to the thiskaliveda.csh script. The environment variable KVROOT will be set with the path to the root of your installation directory (equivalent to ROOTSYS for ROOT).
In order to update the source code to the latest version in the gitlab repository, go to the source directory S (see Building and installing above) and do the following:
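```
# fetch and merge the latest changes from the gitlab repository
$ git pull
```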
If one or more of the dataset submodules which you have configured in your source tree have been updated, when you run the git pull command you will see something like:
which indicates that datasets/INDRA has updates. If you run git status at this point (after having configured it to show information on submodule changes, by doing:
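```
# a sketch: enable git's submodule summary in "git status" output
$ git config status.submoduleSummary true
```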
) then you will see something like:
which can be very confusing: you just did a git pull which went well (apparently), and yet there are now uncommitted changes in your working copy? No, not at all: unlike for ordinary files (e.g. release_notes.md in the above example), submodule changes are not automatically applied when git pull is run. Instead you have to manually update the submodule(s) with
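```
# bring the submodule(s) in line with the commit recorded by the main repository
# (--init may be needed, e.g. for newly added dataset submodules)
$ git submodule update --init
```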
and then everything is fine:
In most cases it is sufficient to re-run the previous Compile and install command to recompile the sources & update the installation, i.e. either the cmake --build form or the make form shown above; in both cases [N] is the number of cores/CPUs to use for parallel compilation.
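As a sketch, re-using the commands from Compile and install above:

```
# re-run the CMake build driver...
$ cmake --build [B] --target install -j [N]
# ...or, equivalently, GNU make from inside the build directory [B]
$ make -j[N] install
```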
In case of problems with the build, delete the build directory and start again from the Configuring the build step, e.g.:
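```
# a sketch: remove the build directory, then reconfigure (double-check the path before using rm -rf!)
$ rm -rf [B]
$ cmake -B [B] -S [S] -DCMAKE_INSTALL_PREFIX=[I]
```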
Continuously updated & tagged Docker containers for the latest version of KaliVeda are available from gitlab-registry.in2p3.fr/kaliveda-dev/dockers (see the registry for the list of all available containers). See the Docker documentation if you are new to Docker containers.
To download and use the latest version of kaliveda (this corresponds to the tip of the dev branch: see https://gitlab.in2p3.fr/kaliveda-dev/kaliveda for details) you can do:
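A sketch (the :latest tag is an assumption; check the registry for the available tags):

```
$ docker pull gitlab-registry.in2p3.fr/kaliveda-dev/dockers:latest
```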
You can then run the kaliveda command-line interface with
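```
# a sketch, assuming the image pulled above and that its default entrypoint starts kaliveda
$ docker run -it --rm gitlab-registry.in2p3.fr/kaliveda-dev/dockers:latest
```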
To have access to your HOME directory from within the container (and start kaliveda with this directory as your working directory):
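```
# a sketch: bind-mount $HOME inside the container and start in that directory
$ docker run -it --rm -v $HOME:$HOME -w $HOME gitlab-registry.in2p3.fr/kaliveda-dev/dockers:latest
```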
Finally, in order to use the graphical interfaces provided by either ROOT or KaliVeda, you can do
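```
# a sketch: additionally forward the host X11/XWayland display into the container
# (you may also need to allow local connections to the X server, e.g. with xhost)
$ docker run -it --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix \
      -v $HOME:$HOME -w $HOME gitlab-registry.in2p3.fr/kaliveda-dev/dockers:latest
```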
Note that the graphical display only works if the host machine is running either an X11/Xorg or XWayland display server (Wayland is the default display for Ubuntu since 22.04, and by default XWayland should be installed and running; on earlier versions Xorg was the default. You can also use one of the legacy Xorg Ubuntu desktops).
By making an alias to one of the above commands, e.g.
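```
# a sketch, based on the graphical example above
$ alias kaliveda='docker run -it --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v $HOME:$HOME -w $HOME gitlab-registry.in2p3.fr/kaliveda-dev/dockers:latest'
```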
you can then use the container as if the kaliveda executable had been compiled and installed on your PC:
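```
# starts the containerised kaliveda command-line interface
$ kaliveda
```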