KaliVeda
Toolkit for heavy-ion collision (HIC) analysis
Download/Install

If you want to use or try out KaliVeda, several options exist:

  • use in a container (Docker)
  • build & install from source

Docker containers

Continuously updated & tagged Docker containers for the latest version of KaliVeda are available from gitlab-registry.in2p3.fr/kaliveda-dev/dockers, where you will also find the list of all available containers. See the Docker documentation if you are new to Docker containers.

To download and use the latest version of KaliVeda (this corresponds to the tip of the dev branch: see https://gitlab.in2p3.fr/kaliveda-dev/kaliveda for details), do:

$ docker pull gitlab-registry.in2p3.fr/kaliveda-dev/dockers/kaliveda
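
Tagged versions are also available: to pull a specific release rather than the tip of the dev branch, append the corresponding tag to the image name (replace [tag] with one of the tags listed in the registry):

$ docker pull gitlab-registry.in2p3.fr/kaliveda-dev/dockers/kaliveda:[tag]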

You can then run the kaliveda command-line interface with

$ docker run --rm -it \
--user $(id -u):$(id -g) \
gitlab-registry.in2p3.fr/kaliveda-dev/dockers/kaliveda

To have access to your HOME directory from within the container (and start kaliveda with this directory as your working directory):

$ docker run --rm -it \
--user $(id -u):$(id -g) \
-v $HOME:/home/$USER -w /home/$USER \
gitlab-registry.in2p3.fr/kaliveda-dev/dockers/kaliveda

Finally, in order to use the graphical interfaces provided by either ROOT or KaliVeda, you can do

$ docker run --rm -it \
--user $(id -u):$(id -g) \
-v $HOME:/home/$USER -w /home/$USER \
-e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix \
gitlab-registry.in2p3.fr/kaliveda-dev/dockers/kaliveda

Note that the graphical display only works if the host machine is running either an X11/Xorg or XWayland display server. Wayland has been the default display server for Ubuntu since 22.04, with XWayland normally installed and running by default; on earlier Ubuntu versions Xorg was the default. You can also use one of the legacy Xorg Ubuntu desktop sessions.
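
Depending on how access control is configured for your X server, you may also need to allow local clients (such as the container) to connect to the display before launching it, for example with xhost (adjust the access rule to your own security requirements):

$ xhost +local: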

By defining an alias for one of the above commands, such as

$ alias kaliveda='docker run --rm -it \
--user $(id -u):$(id -g) \
-v $HOME:/home/$USER -w /home/$USER \
gitlab-registry.in2p3.fr/kaliveda-dev/dockers/kaliveda'

you can then use the container as if the kaliveda executable had been compiled and installed on your PC:

$ kaliveda
$ kaliveda some_file.root
$ kaliveda my_script.C+
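
To make the alias available in every new terminal, you could also add its definition to your shell start-up file, for example as follows (assuming a bash shell; adapt for other shells):

$ cat >> ~/.bashrc <<'EOF'
alias kaliveda='docker run --rm -it --user $(id -u):$(id -g) -v $HOME:/home/$USER -w /home/$USER gitlab-registry.in2p3.fr/kaliveda-dev/dockers/kaliveda'
EOF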

Build from source

The source code can be obtained from the git repository https://gitlab.in2p3.fr/kaliveda-dev/kaliveda :

$ git clone --depth=1 https://gitlab.in2p3.fr/kaliveda-dev/kaliveda.git

(the --depth=1 option avoids downloading the full history of the git repository, which makes the clone much faster and lighter).

If you want to change the name of the resulting working directory, give the name you want as an extra argument:

$ git clone --depth=1 https://gitlab.in2p3.fr/kaliveda-dev/kaliveda.git kaliveda_sources

For more information, see the git documentation or type:

$ git help clone

Prerequisites

KaliVeda is an object-oriented toolkit written in (mostly) C++ and based on ROOT, so the prerequisites for building and installing it are basically the same as those for ROOT: if you can build and install a functioning version of ROOT on your machine, you should be able to build and install KaliVeda.

Operating systems

Linux (Ubuntu, Scientific Linux, CentOS, Debian, ...) and macOS operating systems are supported. Windows is not supported.

Basic software requirements

  • cmake (version >= 3.12)
  • ROOT (version >= 6.22)
  • C/C++/Fortran compilers (the same as used to build ROOT)

For the other required software see the prerequisites for ROOT for your OS.
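
For example, on a Debian/Ubuntu system the basic tools listed above could be installed with something like the following (package names are indicative; consult the ROOT prerequisites for the complete list for your distribution):

$ sudo apt-get install cmake g++ gfortran git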

ROOT requirements

Only ROOT versions >= 6.22 are supported. The following ROOT libraries must be installed (most of them are installed by default):

  • libRint.so
  • libGraf.so
  • libGeom.so
  • libSpectrum.so
  • libProof.so
  • libProofPlayer.so
  • libPhysics.so
  • libHist.so
  • libRIO.so
  • libTree.so
  • libMathCore.so
  • libMatrix.so
  • libGpad.so
  • libGui.so
  • libNet.so
  • libGraf3d.so
  • libThread.so

The following ROOT libraries are recommended but not compulsory:

  • libRGL.so - OpenGL support
  • libMinuit.so - data-fitting library
  • libRooFit.so & libRooFitCore.so - advanced data-fitting library
  • libRSQLite.so or libSQLite.so - SQLite database interface
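
You can quickly check the version of your ROOT installation and whether a given library is present using root-config, for example (assuming ROOT is already set up in your environment):

$ root-config --version
$ ls $(root-config --libdir)/libSpectrum.so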

Optional software

Other optional software packages which may add more functionality to KaliVeda if installed before building include:

  • GEMINI++

    If this nuclear statistical decay code is installed, we provide a simple interface to it, as well as the possibility to use it as an afterburner before filtering simulated data for detector efficiency. Instructions for downloading and installing as needed can be found here: Using GEMINI++ with KaliVeda

  • Google Protocol buffers

    Raw FAZIA data is written using Google protocol buffers, so installing the protocol buffer software is mandatory if you want to read raw FAZIA data (see the installation example after this list).

  • MultiFrame Metaformat (MFM) data

    The GANIL/SPIRAL2 acquisition data format can be read by KaliVeda if you first build and install the package which can be found at https://gitlab.in2p3.fr/jdfcode/mfmlib. This is mandatory to be able to read raw data from INDRA-FAZIA experiments.

  • Mesytec data software

    Since the 2022 upgrade of INDRA electronics, reading raw data requires the Mesytec data parsing software which is available from https://gitlab.in2p3.fr/mesytec-ganil/mesytec_data.
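
As an illustration, on a Debian/Ubuntu system the protocol buffer development packages mentioned above can usually be installed from the distribution repositories (package names are indicative and may differ on other systems):

$ sudo apt-get install libprotobuf-dev protobuf-compiler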

Building and installing

Only out-of-source builds are supported, i.e. cmake must be run outside of the source directory. Let us suppose that the sources were cloned from the git repository into a local directory S. We then suppose the existence of a "build directory" B (where the sources are to be compiled) and an "install directory" I (where the compiled libraries, executables, etc. are to be installed).

For example, imagine the following layout:

${HOME}/software/sources/kaliveda => 'S', result of git clone command
${HOME}/software/build/kaliveda => 'B', created by you (mkdir -p B)
${HOME}/software/install/kaliveda => 'I', also created by you (mkdir -p I)

In the build directory, B, configure the build using the options detailed in Build options below. A minimum working example would be

$ cmake S -DCMAKE_INSTALL_PREFIX=I

with S and I replaced by the respective paths to the source and installation directories.
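
With the example layout above, the configuration step would therefore look like this (paths purely illustrative):

$ cd ${HOME}/software/build/kaliveda
$ cmake ${HOME}/software/sources/kaliveda -DCMAKE_INSTALL_PREFIX=${HOME}/software/install/kaliveda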

Then compile and install the sources with

$ make -j[N] install

where [N] is the number of processors/cores on your PC (parallel build).
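
On Linux you can let nproc supply this number automatically, for example:

$ make -j$(nproc) install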

Build options

The options which can be used to configure the build are listed below, along with their default values. Example of use:

$ cmake [path to sources] -DCMAKE_INSTALL_PREFIX=[installation path] -D[option]=[yes|no] ...

Option                     Default  Meaning
WITH_INDRA_DATASETS        OFF      download & install datasets for INDRA experiments
                                    (automatically enables USE_INDRA)
WITH_FAZIA_DATASETS        OFF      download & install datasets for FAZIA experiments
                                    (automatically enables USE_FAZIA)
WITH_INDRAFAZIA_DATASETS   OFF      download & install datasets for INDRA-FAZIA experiments
                                    (automatically enables USE_INDRAFAZIA)
USE_INDRA                  OFF      build libraries for INDRA data analysis & simulation
USE_FAZIA                  OFF      build libraries for FAZIA data analysis & simulation
USE_INDRAFAZIA             OFF      build libraries for INDRA-FAZIA data analysis & simulation
                                    (automatically enables USE_INDRA and USE_FAZIA)
USE_MICROSTAT              ON       build libraries for generation of events with different statistical weights
                                    (see MicroStat Package)
USE_FITLTG                 ON       build Tassan-Got package for fitting identification grids (see KVTGID)
USE_BUILTIN_GRU            ON       build own library for reading legacy GANIL acquisition data (not MFM format)
USE_MFM                    ON       use library for reading GANIL acquisition data in MFM format
                                    (requires the mfmlib package from https://gitlab.in2p3.fr/jdfcode/mfmlib.git)
USE_PROTOBUF               ON       use Google protocol buffers, e.g. for reading raw FAZIA data
USE_MESYTEC                ON       enable support for reading (INDRA) data from Mesytec DAQ electronics
                                    (requires software from https://gitlab.in2p3.fr/mesytec-ganil/mesytec_data)
USE_GEMINI                 ON       build the KVGemini interface if the Gemini++ library is installed on the system and can be found
                                    (see Using GEMINI++ with KaliVeda for more details)
USE_SQLITE                 ON       enable SQLite database interface KVSQLite, if ROOT was compiled with SQLite support
gnuinstall                 OFF      install libraries etc. following a standard GNU directory layout
                                    (see Installation directories below)
CCIN2P3_BUILD              OFF      configure build for IN2P3 Computing Centre environment

Warning
Contrary to previous versions (<1.14) of KaliVeda, the files required to handle the different datasets from INDRA, FAZIA or INDRA-FAZIA experiments (including those needed just to build the multidetector array(s) used in each case) are not installed by default. Only the datasets for which the corresponding WITH_*_DATASETS=ON build option (see above) is given will be installed.
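
For example, to build the INDRA and FAZIA libraries together with their datasets, the configuration step might look like the following (choice of options purely illustrative; S and I as defined above):

$ cmake S -DCMAKE_INSTALL_PREFIX=I \
    -DWITH_INDRA_DATASETS=ON \
    -DWITH_FAZIA_DATASETS=ON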

For full details on the build system and available options, see Appendix: KaliVeda build system.

Installation directories

Depending on the value of the gnuinstall option given to cmake, two installation layouts are possible. The default layout (i.e. with gnuinstall=off|no) is:

bin/
etc/
examples/
include/
KVFiles/
  data/
  [dataset1]/
  [dataset2]/
  classTemplate1.{h,cpp}
  classTemplate2.{h,cpp}
lib/
man/
tools/

Note that this layout must not be used if you are installing in a read-only location, i.e. if the value of CMAKE_INSTALL_PREFIX is a system directory, because dataset files and experimental database files which may be generated during use of KaliVeda will be written inside this installation directory.

When the option gnuinstall=on|yes is given, the installation follows near-standard GNU conventions:

bin/
include/
  kaliveda/
lib/
  kaliveda/
share/
  kaliveda/
    [dataset1]/
    [dataset2]/
    examples/
    data/
    etc/
    templates/
    man/

This type of installation can be used with system directories, e.g. /usr/local, as any files generated during use of KaliVeda are written in a user-specific "working directory" which by default is created in $HOME/.kaliveda.
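
For example, a system-wide installation under /usr/local might be configured and installed as follows (a sketch, with S the source directory defined above; root privileges are needed for the install step):

$ cmake S -DCMAKE_INSTALL_PREFIX=/usr/local -Dgnuinstall=ON
$ make -j[N]
$ sudo make install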

Setting up the environment

After building and installing the toolkit, quite a large number of environment variables need to be updated in order to be able to use KaliVeda. To do this, source one of the following shell scripts, which can be found in the bin directory of your installation:

### sh/bash shells - i.e. 'echo $SHELL' prints '/bin/sh', '/bin/bash', or equivalent
$ source [path to install]/bin/thiskaliveda.sh
### csh/tcsh shells - i.e. 'echo $SHELL' prints '/bin/csh', '/bin/tcsh', or equivalent
$ source [path to install]/bin/thiskaliveda.csh

To set up the environment every time you login/open a terminal, add the following commands to the appropriate start-up script depending on your shell:

### sh/bash shells - add to '.bashrc', '.profile', or equivalent
$ source [path to install]/bin/thiskaliveda.sh
### csh/tcsh shells - add to '.cshrc' or equivalent
$ source [path to install]/bin/thiskaliveda.csh [path to install]
Note
For (t)csh shells the installation path must be given as an argument to the thiskaliveda.csh script
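
A quick way to check that the environment has been set up correctly is to verify that the kaliveda executable is now found in your PATH:

$ which kaliveda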

Keeping up to date

In order to update the source code to the latest version in the gitlab repository, go to the source directory S (see Building and installing above) and do the following:

$ git pull
$ git submodule update
Warning
The second command is mandatory in order to update any datasets you may have configured in your sources (see Build options). Without it, the dataset directories will not receive the latest updates and your sources may be left in an inconsistent state. Read the next subsection for more details.
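
If you prefer, recent versions of git can also be told to update the submodules as part of the pull itself (an optional convenience; the two-step procedure above remains valid):

$ git pull --recurse-submodules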

Updating the submodules/datasets

If one or more of the dataset submodules which you have configured in your source tree have been updated, when you run the git pull command you will see something like:

$ git pull
remote: Enumerating objects: 10, done.
remote: Counting objects: 100% (10/10), done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 7 (delta 5), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (7/7), 1.22 KiB | 416.00 KiB/s, done.
From https://gitlab.in2p3.fr/kaliveda-dev/kaliveda
63f1900..c9cee5b dev -> origin/dev
Updating 63f1900..c9cee5b
Fast-forward
datasets/INDRA | 2 +-
release_notes.md | 24 +++++++++++++++++++++++-
2 files changed, 24 insertions(+), 2 deletions(-)

which indicates that datasets/INDRA has updates. If you run git status at this point (after having configured it to show information on submodule changes, by doing:

$ git config --global status.submoduleSummary true

) then you will see something like:

On branch dev
Your branch is up-to-date with 'origin/dev'.
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: datasets/INDRA (new commits)
Submodules changed but not updated:
* datasets/INDRA cf08b50...b010e5e (1):
< [CAMP2]: complete CsI calibrations for INDRA rings 2-9
no changes added to commit (use "git add" and/or "git commit -a")

which can be very confusing: you just did a git pull which went well (apparently), and yet there are now uncommitted changes in your working copy? No, not at all: unlike for ordinary files (e.g. release_notes.md in the above example), submodule changes are not automatically applied when git pull is run. Instead you have to manually update the submodule(s) with

$ git submodule update
Submodule path 'datasets/INDRA': checked out 'cf08b502aa7791e0c262e583dff5678bfe5f5d54'

and then everything is fine:

$ git status
On branch dev
Your branch is up-to-date with 'origin/dev'.

Recompiling after updates

In most cases it is then sufficient to return to the build directory B and recompile the sources:

$ make -j[N] install

with [N] the number of cores/CPUs on your machine.

In case of problems with the build, delete & recreate the build directory and start again from the cmake step of Building and installing:

$ rm -rf B && mkdir B && cd B
$ cmake S -DCMAKE_INSTALL_PREFIX=I [your build options here]
$ make -j[N] install