KaliVeda
Toolkit for HIC analysis
The aim of the KaliVeda toolkit is to provide tools for the analysis and simulation of heavy-ion collisions from Coulomb barrier energies up to a few hundred MeV per nucleon, for use with charged-particle detectors: the toolkit handles neither electromagnetic radiation nor neutron detection, nor any charged particles which are not atomic nuclei (ions).
Nuclear particles and their (relativistic) kinematics are described by the KVNucleus class, which also supplies a wealth of information on nuclear binding energies, abundances, charge radii, etc. Nuclei can be grouped together into 'events' using the container classes derived from KVEvent, which provides facilities for iterating over the nuclei and, for example, manipulating the (relativistic) reference frames used to describe their kinematics (KVFrameTransform). Simulated data events are handled by KVSimEvent (containing KVSimNucleus particles), while reconstructed (experimental or simulated) data are handled by KVReconstructedEvent (containing KVReconstructedNucleus particles).
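As an illustration, here is a minimal sketch of this part of the API (method names such as GetBindingEnergy(), SetZandA() and SetFrame() follow the KaliVeda reference documentation, but the exact signatures and units, e.g. velocities in cm/ns, should be checked there):

```cpp
#include "KVSimEvent.h"
#include "KVNucleus.h"
#include "TVector3.h"
#include <cstdio>

void kinematics_sketch()
{
   // nuclear data directly from KVNucleus: here, a 12C nucleus
   KVNucleus carbon(6, 12);
   printf("12C: B=%.2f MeV, mass excess=%.3f MeV\n",
          carbon.GetBindingEnergy(), carbon.GetMassExcess());

   // group nuclei into a simulated event
   KVSimEvent event;
   auto* alpha = event.AddParticle();
   alpha->SetZandA(2, 4);
   alpha->SetKE(50.); // kinetic energy in MeV

   // define a new kinematical frame (KVFrameTransform is used internally):
   // a frame moving along the beam (z) axis with velocity 3 cm/ns
   // (KaliVeda's velocity unit, roughly 0.1c)
   event.SetFrame("moving_frame", TVector3(0., 0., 3.));
   for (auto& nuc : event) {
      printf("Z=%d : KE=%.2f MeV in moving frame\n",
             nuc.GetZ(), nuc.GetFrame("moving_frame")->GetKE());
   }
}
```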
The energy losses and ranges of charged nuclear particles in matter, specifically in detector or target materials (either solid or gaseous), can be calculated analytically (and therefore rapidly) using objects of the KVMaterial, KVDetector and KVTarget classes. All necessary inverse calculations (e.g. deducing incident energy from measured energy loss) are also implemented. The default energy-loss calculator, KVedaLoss, is based on the range tables of F. Hubert, R. Bimbot and H. Gauvin, "Range and Stopping-Power Tables for 2.5-500 MeV/nucleon Heavy Ions in Solids", Atomic Data and Nuclear Data Tables 46 (1990) 1.
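For example, a hedged sketch of an energy-loss calculation and its inverse (GetDeltaE(), GetERes() and GetIncidentEnergyFromERes() are methods documented for KVMaterial; check the signatures against the current API):

```cpp
#include "KVMaterial.h"
#include "KVUnits.h"
#include <cstdio>

void eloss_sketch()
{
   // a 300-micron silicon absorber
   KVMaterial silicon("Si", 300. * KVUnits::um);

   // energy loss and residual energy of a 95 MeV 12C ion (Z=6, A=12)
   Double_t de   = silicon.GetDeltaE(6, 12, 95.);
   Double_t eres = silicon.GetERes(6, 12, 95.);

   // inverse calculation: recover the incident energy from the
   // measured residual energy
   Double_t einc = silicon.GetIncidentEnergyFromERes(6, 12, eres);
   printf("dE=%.2f MeV, Eres=%.2f MeV, Einc=%.2f MeV\n", de, eres, einc);
}
```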
Each detector can be associated with one or more raw signals coming from the DAQ system using KVDetectorSignal objects, while calibrations and the associated additional output signals for each detector are implemented using KVCalibrator and KVCalibratedSignal objects. Different calibrations can be defined using information given in simple formatted text files. Specific classes are provided for the calibration of CsI scintillator detectors (KVLightEnergyCsI, KVLightEnergyCsIFull).
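As a rough, hypothetical sketch of how such signals might be queried (GetDetectorSignalValue() is taken from the KVDetector documentation; the signal names "QH1" and "Energy" are purely illustrative):

```cpp
#include "KVDetector.h"
#include <cstdio>

// 'det' is assumed to belong to a fully-configured detector array
void signal_sketch(KVDetector* det)
{
   // value of a raw DAQ signal (KVDetectorSignal) associated with the
   // detector; "QH1" is an illustrative signal name
   Double_t raw = det->GetDetectorSignalValue("QH1");

   // value of a calibrated output signal (KVCalibratedSignal),
   // available once the corresponding KVCalibrator is defined
   Double_t energy = det->GetDetectorSignalValue("Energy");

   printf("raw=%.1f -> E=%.2f MeV\n", raw, energy);
}
```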
Detectors which are used for particle identification can be grouped together into 'identification telescope' objects represented by the KVIDTelescope class, which can then be used to identify particles reconstructed from data using graphical identification grids handled by KVIDGraph and its child classes. A full set of feature-rich GUIs is provided to aid with defining identification grids from data (KVIDGridManagerGUI, KVIDGridEditor), including an implementation of the "New semi-automatic method for reaction product charge and mass identification in heavy-ion collisions at Fermi energies", D. Gruyer et al., Nucl. Instrum. Methods Phys. Res. A 847 (2017) 142.
An array formed of several (or many) detectors with their associated spatial geometry can be described by an object of the KVMultiDetArray class. Such objects are used to deduce complex information from the geometry of the array, such as all possible trajectories which particles can take through the detectors of the array (KVGeoDNTrajectory), and consequently all possible ways in which particles can be identified using \(\Delta E-E\) telescopes or other methods (KVIDTelescope). It also subdivides the array into 'groups' (KVGroup) by trajectory clustering: each group is a set of detectors which share one or more trajectories with each other, but not with any detector of any other group. The detectors of each KVGroup can therefore be treated independently of all others.
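A brief sketch of building such an array (MakeMultiDetector() and the gMultiDetArray global are documented KaliVeda features; the dataset name is illustrative):

```cpp
#include "KVMultiDetArray.h"

void array_sketch()
{
   // build the full geometry of an array from its description for a
   // given dataset ("INDRA.E613" is an illustrative dataset name)
   KVMultiDetArray::MakeMultiDetector("INDRA.E613");

   // the deduced structures are then available through the global
   // pointer: e.g. list all identification telescopes defined by the
   // particle trajectories through the array
   gMultiDetArray->GetListOfIDTelescopes()->ls();
}
```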
One or more independent detection arrays used to perform coupled experiments can be combined in an object of the KVExpSetUp class.
The management of data for a given experiment or experimental campaign is handled by a KVDataSet object, which brings together the description of the experimental set-up used (KVMultiDetArray or KVExpSetUp), the database of runs with their associated data files, the calibration and identification parameters, and the analysis tasks which can be performed on the data.
Many different experiments or experimental campaigns can be grouped together and handled by a KVDataSetManager instance: such an object can be used to consult which datasets are available and change from one to another.
Accessing the data files for different experiments is handled by an object of KVDataRepository type: data may be accessed either locally on the host PC or remotely using e.g. IRODS or xrootd backends. Each KVDataRepository has an associated KVDataSetManager, which describes the list of experiments (KVDataSet objects) whose data is available from that source.
Several different KVDataRepository data sources may be handled by an instance of the KVDataRepositoryManager class.
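A hedged sketch of navigating these managers (the global pointers gDataRepositoryManager and gDataSetManager are documented KaliVeda features; the dataset name and the exact initialisation sequence are assumptions to be checked against the user guide):

```cpp
#include "KVDataRepositoryManager.h"
#include "KVDataSetManager.h"
#include "KVDataSet.h"

void dataset_sketch()
{
   // initialise access to all configured data repositories
   new KVDataRepositoryManager;
   gDataRepositoryManager->Init();

   // consult the experiments (datasets) known to the active repository,
   // then make one of them the current dataset
   // ("INDRA.E613" is an illustrative dataset name)
   gDataSetManager->Print();
   gDataSetManager->GetDataSet("INDRA.E613")->cd();
}
```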
A KVMultiDetArray can be used to reconstruct detected nuclei from experimental data, by associating the fired detectors along each possible trajectory through the array (KVReconNucTrajectory). Using the identification telescopes (KVIDTelescope) constituted by the successive detectors on these trajectories, the reconstructed nuclei (KVReconstructedNucleus) can then be identified in \(Z\) and/or \(A\). The full set of identification grids for a given experiment with the array is handled by a KVIDGridManager object.
Event reconstruction is managed by the KVEventReconstructor class, which uses a KVGroupReconstructor object for each 'group' of the array in order to handle the reconstruction of particles within that group. The resulting KVReconstructedEvent objects are stored in a TTree in a ROOT file which is automatically added back into the KVDataRepository for future analysis.
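The reconstructed events can then be read back for analysis; a minimal hedged sketch of looping over the 'ok' nuclei of an event (the "ok" selection of correctly identified and calibrated nuclei, and GetIdentifyingTelescope(), follow the KVReconstructedEvent/KVReconstructedNucleus documentation):

```cpp
#include "KVReconstructedEvent.h"
#include "KVReconstructedNucleus.h"
#include <cstdio>

void recon_sketch(KVReconstructedEvent* event)
{
   // loop over the nuclei of the event which were correctly
   // identified and calibrated (the "ok" selection)
   event->ResetGetNextParticle();
   KVReconstructedNucleus* nuc;
   while ((nuc = (KVReconstructedNucleus*)event->GetNextParticle("ok"))) {
      printf("Z=%d A=%d KE=%.2f MeV, identified by %s\n",
             nuc->GetZ(), nuc->GetA(), nuc->GetKE(),
             nuc->GetIdentifyingTelescope()->GetName());
   }
}
```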
Different data analysis tasks can be treated in a unified framework. Each defined task (KVDataAnalysisTask) is associated with a KVDataAnalyser which pilots the analysis.
In addition, the KVDataAnalyser can interface with batch computing systems (KVBatchSystem) or (when analysing ROOT files) with ROOT's PROOF-Lite facility for parallel processing (KVPROOFLiteBatch).
For the analysis of reconstructed or simulated data, all user analysis classes derive from KVEventSelector, a general-purpose class for the analysis of a TTree containing KVTemplateEvent-derived event objects.
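A hedged skeleton of such a user class ("ExampleAnalysis" is an illustrative name; the InitAnalysis()/Analysis() structure is that of KVEventSelector, while helpers like AddHisto() and FillHisto() follow the analysis templates generated by the framework and should be checked against the current API):

```cpp
#include "KVEventSelector.h"
#include "KVReconstructedNucleus.h"
#include "TH1F.h"

class ExampleAnalysis : public KVEventSelector {
public:
   void InitAnalysis()
   {
      // called once before the analysis: declare histograms, trees, ...
      AddHisto(new TH1F("zdist", "Charge distribution", 100, -.5, 99.5));
   }
   void InitRun() {} // called at the start of each run

   Bool_t Analysis()
   {
      // called for each event: loop over correctly identified and
      // calibrated ("ok") nuclei
      GetEvent()->ResetGetNextParticle();
      KVReconstructedNucleus* nuc;
      while ((nuc = (KVReconstructedNucleus*)GetEvent()->GetNextParticle("ok")))
         FillHisto("zdist", nuc->GetZ());
      return kTRUE; // kFALSE would end the analysis
   }

   void EndRun() {}      // called at the end of each run
   void EndAnalysis() {} // called once after all events are processed
};
```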
It should be noted that event reconstruction from experimental data (KVRawDataReconstructor) as well as filtering of simulated data (KVEventFiltering) are implemented as data analysis tasks in the same framework (no user analysis class is required).