ROOT Tips & Tricks
<gisthub gist="6173976" file="TH1_Fill.cxx"/> | <gisthub gist="6173976" file="TH1_Fill_1.cxx"/> |
<gisthub gist="6173976" file="TH1_AddBinContent.cxx"/> | <gisthub gist="6173976" file="TH1_AddBinContent_1.cxx"/> |
<gisthub gist="6173976" file="TH1_SetBinContent.cxx"/> | <gisthub gist="6173976" file="TH1_SetBinError.cxx"/> |
<gisthub gist="6173976" file="TH1_GetBinContent.cxx"/> | <gisthub gist="6173976" file="TH1_GetBinError.cxx"/> |
<gisthub gist="6173976" file="TH1_Sumw2.cxx"/> | <gisthub gist="6173976" file="TH1_GetEffectiveEntries.cxx"/> |
<gisthub gist="6173976" file="TH1_GetStats.cxx"/> | <gisthub gist="6173976" file="TH1_ResetStats.cxx"/> |
<gisthub gist="6173976" file="TH1_PutStats.cxx"/> |
float/double
float_double.C |
<gisthub gist="18bcef1394f59e46f611" file="float_double.C"/> |
- linux, gcc 4.9, glibc 2.20
root [1] float_double()
float  min = 1.17549e-38    float  max = 3.40282e+38
double min = 2.22507e-308   double max = 1.79769e+308
float  epsilon = 0.000000119209289550781250000000000000000000000000000000000000 => 1.19209e-07
double epsilon = 0.000000000000000222044604925031308084726333618164062500000000 => 2.22045e-16

number 1234567890123456789012345.1234567890123456789012345
as float  1234567946798590058299392.000000 => 1.23457e+24
as double 1234567890123456824475648.000000 => 1.23457e+24

number 1234567890123456789012345.0
as float  1234567946798590058299392.000000 => 1.23457e+24
as double 1234567890123456824475648.000000 => 1.23457e+24

number 0.1234567890123456789012345
as float  0.123456791043281555175781250000000000000000000000000000000000 => 0.123457
as double 0.123456789012345677369886232099815970286726951599121093750000 => 0.123457

number 1234567890123456789012345 (no decimal)
as float  1096246353119412224.000000 => 1.09625e+18
as double 1096246371337559936.000000 => 1.09625e+18

number 9.123e+22
as float  91230003119595695636480.000000 => 9.123e+22
as double 91230000000000004718592.000000 => 9.123e+22

number 9.123e+40
as float  inf => inf
as double 91230000000000006312383662990803305758720.000000 => 9.123e+40

number 9.123e-10
as float  0.000000000912300013311551083461381494998931884765625000000000 => 9.123e-10
as double 0.000000000912300000000000044597902336461579114734732343094947 => 9.123e-10

number 9.123e-50
as float  0.000000000000000000000000000000000000000000000000000000000000 => 0
as double 0.000000000000000000000000000000000000000000000000091230000000 => 9.123e-50
other
second bit status word
UShort_t fBits2;
TClonesArray
loop {
   TClonesArray &hits = *fTDCHits;
   new(hits[fNTDCHits++]) TTDCHit(ch, tld);
}
Uses a TClonesArray which reduces the number of new/delete calls.
In the loop the normal constructor is still called every time (but without a delete).
loop {
   // TTDCHit *const hit = static_cast<TTDCHit *>(fTDCHits->ConstructedAt(fNTDCHits++));
   TTDCHit *hit = (TTDCHit *)fTDCHits->ConstructedAt(fNTDCHits++);
   hit->Set(ch, tld);            // or hit->SetChannelTime(ch, tld);
}
To reduce the number of calls to the constructor (especially useful if the user class requires memory allocation), the object can be added (and constructed when needed) using ConstructedAt, which calls the constructor only once per slot.
ConstructedAt always returns an already constructed object. If the slot is used for the first time, it calls the default constructor; otherwise it returns the object as is (unless a string is passed as the second argument, in which case it also calls Clear(second_argument) on the object). This is simpler and more efficient even when the TTDCHit/TTrack class allocates memory.
In the loop only default constructors are called (at most fNTDCHits of them in total). The Set() function therefore has to 'reset' the member variables (left over from the previous hit), or an extra hit->Clear() has to be used/called, or ConstructedAt(Int_t idx, Option_t *clear_options) has to be called directly. Since a TClonesArray can only hold objects derived from TObject, it may also be necessary to 'reset' the TObject data members (fUniqueID, fBits); as of today, TObject::Clear() is an empty function.
Particle *const p = static_cast<Particle *>(particles_.ConstructedAt(nActive_++));
Event *const newEvent = static_cast<Event *>(events.ConstructedAt(j));
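A minimal, self-contained sketch of both filling patterns, meant to be run as a ROOT macro; the class MyHit, the function FillHits() and its Set()/Clear() methods are hypothetical names used only for illustration:

#include "TClonesArray.h"
#include "TObject.h"

// Hypothetical hit class; in a real detector class this would carry the TDC data.
class MyHit : public TObject {
public:
   MyHit() : fCh(0), fTime(0) {}
   void Set(Int_t ch, Int_t time) { fCh = ch; fTime = time; }
   void Clear(Option_t * = "") override { fCh = 0; fTime = 0; }  // reset a reused slot
private:
   Int_t fCh, fTime;
};

void FillHits()
{
   TClonesArray hits("MyHit", 1000);
   Int_t n = 0;
   for (Int_t ch = 0; ch < 10; ch++) {
      // pattern 1: placement new, constructor called on every fill
      //   new(hits[n++]) MyHit();
      // pattern 2: ConstructedAt, default constructor called only once per slot
      MyHit *hit = static_cast<MyHit *>(hits.ConstructedAt(n++));
      hit->Clear();             // wipe leftovers from the previous event
      hit->Set(ch, 100 + ch);
   }
   hits.Clear("C");             // option "C": call MyHit::Clear() on each used slot
}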
Clone or Copy
What is the actual difference between the results of a TObject->Clone() and TObject->Copy() ?
The difference is the interface (one creates a new object, the other updates an existing object)
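A minimal sketch on a histogram (the names h1, h2, h3 are illustrative):

// Clone() allocates and returns a new object; Copy() fills an object that already exists.
TH1F h1("h1", "source", 100, 0., 1.);
h1.FillRandom("gaus", 1000);

TH1F *h2 = static_cast<TH1F *>(h1.Clone("h2"));   // new heap object, caller owns it
TH1F h3("h3", "target", 100, 0., 1.);
h1.Copy(h3);                                      // existing object h3 is overwritten in place

delete h2;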
Double_32 dictionary generation
gROOT->ProcessLine("struct D32Holder { Double32_t var; //[0, 4096, 13] };");
myTree->Branch("var", TClass::GetClass("D32Holder"), &var);
myTree->Branch("var", "D32Holder", (void*)&var);

gInterpreter->Declare("struct D32Holder { Double32_t var; //[0, 4096, 13]\n};");
void *p = (void*)&var;
myTree->Branch("var", "D32Holder", (void*)&p);
Tree and SetAutoSave
- When large Trees are produced, it is safe to activate the AutoSave procedure.
- Each AutoSave generates a new key on the file. Once the key with the tree header has been written, the previous cycle (if any) is deleted.
- calling TTree::SetAutoSave with a small value (or calling TTree::AutoSave too frequently) is an expensive operation. You should run tests for your own application to find a compromise between speed and the amount of information you may lose in case of a job crash.
- => if guaranteed recovery is not needed (e.g. against losing data when the program terminates unexpectedly), use a large value: the tree header is then saved only at the very end, i.e. only after the large number of entries/bytes is reached
- => tree;1 => there is always exactly 1 cycle when SetAutoSave > entries; otherwise the previous cycle is also stored (to delete the previous cycle and keep only the newest one, use fTree->Write("", kWriteDelete))
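A minimal sketch of the pattern described in the notes above, assuming an existing tree and output file (the file name, tree name and autosave value are illustrative):

// Sketch: trade crash-recovery granularity for filling speed via SetAutoSave,
// and keep only the newest tree cycle when writing the final header.
TFile *file = TFile::Open("out.root", "RECREATE");
TTree *tree = new TTree("pp", "events");
// ... Branch() calls ...
tree->SetAutoSave(50000);                  // autosave the tree header every 50000 entries
// ... Fill() loop ...
tree->Write("", TObject::kWriteDelete);    // delete the previous cycle, keep only the newest
file->Close();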
test file: tree pp, events = 43714
- fTree->SetAutoSave(100); // autosave when 100 entries (!!! filling about 10.5 sec. !!!) !!! expensive operation !!!
- pp;438 => 43714 entries
- pp;437 => 43700 entries (437*100 = 43700)
- fTree->SetAutoSave(1000); // autosave when 1000 entries (filling about 2.0 sec.)
- pp;44 => 43714 entries
- pp;43 => 43000 entries (43*1000 = 43000)
- fTree->SetAutoSave(50000); // autosave when 50000 entries (filling about 1.5 sec.)
- pp;1 => 43714 entries
- fTree->SetAutoSave(-300000000); // DEFAULT, autosave when 300000000 bytes (unzipped) written (filling about 1.5 sec.)
- pp;1 => 43714 entries
misc
rootcint -f TStrawDict.cxx -c TStrawCham.h TStrawTrack.h StrawChamLinkDef.h
g++ -O -fPIC -I$ROOTSYS/include -c TStrawDict.cxx TStrawCham.cxx TStrawTrack.cxx
g++ -g -shared TStrawDict.o TStrawCham.o TStrawTrack.o -o libStraw.so
TImage *img = TImage::Create();
img->FromPad(c1, 520, 1100, 630, 600);
img->WriteImage("bla.png");
ProcInfo_t info;
gSystem->GetProcInfo(&info);
Printf("MemResident = %8ld kB, MemVirtual = %8ld kB", info.fMemResident, info.fMemVirtual);
Makefile
ROOT_MAJOR := $(shell $(RC) --version | cut -f1 -d '.')
ROOT_MINOR := $(shell $(RC) --version | cut -f2 -d '.' | cut -f1 -d '/')
ROOT_PATCH := $(shell $(RC) --version | cut -f2 -d '/')
ROOT_VER_INT := $(ROOT_MAJOR)$(ROOT_MINOR)$(ROOT_PATCH)
ROOT_VER_OK := $(shell [ $(ROOT_VER_INT) -ge 60305 ] && echo ok)
do:
echo $(ROOT_VER_INT)
echo $(ROOT_VER_OK)
ifeq ($(ROOT_VER_OK),ok)
echo "OK version"
else
echo "BAD version"
endif
Tree compression
data file: /home/musinsky/TQDC_sample/vmeRUN114_2015-02-20_02-24-02.dat
size = 1672234636 bytes (418058659 words), spills = 99, events = 2065816
vmeRUN114_2015-02-20_02-24-02.dat => 1595M (SSD disk, PC strela04)

TDC + TQDC, SetCompressionSettings(ROOT::CompressionSettings(ROOT::kUseGlobalSetting, 1)), DEFAULT
root [] .x d.C
settings 1, algorithm 0, level 1, factor 2.79228
Info in <TTDCEvent::~TTDCEvent>: Destructor
Info in <TTQDCEvent::~TTQDCEvent>: Destructor
Real time 0:00:35.814060, CP time 33.160
Real time 0:00:35.260297, CP time 32.980
root [] gStrela->AnalyzeEntries()
Real time 0:00:32.734012, CP time 32.710
Real time 0:00:32.887350, CP time 32.840

TDC + TQDC, SetCompressionSettings(ROOT::CompressionSettings(ROOT::kUseGlobalSetting, 0)), NO COMPRESSION
root [] .x d.C
settings 0, algorithm 0, level 0, factor 1.00001 => vme_data.root => 1187M
Info in <TTDCEvent::~TTDCEvent>: Destructor
Info in <TTQDCEvent::~TTQDCEvent>: Destructor
Real time 0:00:20.595933, CP time 15.830   !!!!!!!!!!!!!!!!!!!!
Real time 0:00:20.444206, CP time 16.220
root [] gStrela->AnalyzeEntries()
Real time 0:00:28.762151, CP time 28.740
Real time 0:00:29.007629, CP time 28.990

TDC + TQDC, SetCompressionSettings(ROOT::CompressionSettings(ROOT::kUseGlobalSetting, 2))
root [] .x d.C
settings 2, algorithm 0, level 2, factor 2.71566
Info in <TTDCEvent::~TTDCEvent>: Destructor
Info in <TTQDCEvent::~TTQDCEvent>: Destructor
Real time 0:00:36.719002, CP time 35.660
Real time 0:00:35.929452, CP time 35.590
root [] gStrela->AnalyzeEntries()
Real time 0:00:32.512738, CP time 32.500
Real time 0:00:32.428839, CP time 32.410

TDC + TQDC, SetCompressionSettings(ROOT::CompressionSettings(ROOT::kUseGlobalSetting, 5))
root [] .x d.C
settings 5, algorithm 0, level 5, factor 2.72555
Info in <TTDCEvent::~TTDCEvent>: Destructor
Info in <TTQDCEvent::~TTQDCEvent>: Destructor
Real time 0:00:56.073635, CP time 54.390
Real time 0:00:56.018901, CP time 54.940
root [] gStrela->AnalyzeEntries()
Real time 0:00:32.388135, CP time 32.360
Real time 0:00:32.398775, CP time 32.380

TDC + TQDC, SetCompressionSettings(ROOT::CompressionSettings(ROOT::kUseGlobalSetting, 9))
root [] .x d.C
settings 9, algorithm 0, level 9, factor 2.75533
Info in <TTDCEvent::~TTDCEvent>: Destructor
Info in <TTQDCEvent::~TTQDCEvent>: Destructor
Real time 0:03:01.095543, CP time 180.740
Real time 0:02:59.151411, CP time 178.600
root [] gStrela->AnalyzeEntries()
Real time 0:00:32.280681, CP time 32.270
Real time 0:00:32.255684, CP time 32.230
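A minimal sketch of how the compression level can be set on the output file before filling (the file and tree names are illustrative; the timings above come from the actual DAQ data):

// Sketch: change only the compression level, keep the global algorithm.
// Level 0 disables compression, 1 is the fast default, 9 is the slowest/strongest.
TFile *f = TFile::Open("vme_data.root", "RECREATE");
f->SetCompressionSettings(ROOT::CompressionSettings(ROOT::kUseGlobalSetting, 1));
TTree *tree = new TTree("pp", "events");
// ... Branch() and Fill() ...
tree->Write();
Printf("compression factor = %g", f->GetCompressionFactor());
f->Close();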
C/C++ stack/heap
// function
int getaddrinfo(p1, p2, p3, struct addrinfo **res);
// direct call
struct addrinfo *result;
getaddrinfo(p1, p2, p3, &result);
// retrieving a pointer to pointer (the addrinfo list head)
void GetAddrInfo(struct addrinfo **update)
{
   getaddrinfo(p1, p2, p3, update);
}
struct addrinfo *myresult;
GetAddrInfo(&myresult);

// or
void GetAddrInfo(struct addrinfo **update)
{
   struct addrinfo *res;
   getaddrinfo(p1, p2, p3, &res);
   *update = res;   // save the pointer to the struct
}
struct addrinfo *myresult;
GetAddrInfo(&myresult);
C/C++ difference
C style pass-by-reference (C or C++)
int x = 0;
fun(&x);   // now x = 1

void fun(int *xx) { *xx = 1; }
C++ style pass-by-reference (only C++)
int x = 0;
fun(x);    // now x = 1

void fun(int &xx) { xx = 1; }
- exit vs. return: exit = All open stdio(3) streams are flushed and closed. Files created by tmpfile(3) are removed ...
sizeof
- sizeof is a compile-time operator: in C89 it is always evaluated at compile time. Since C99 and variable length arrays, it is computed at run time when a variable length array is part of the expression in the sizeof operand.
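A tiny check of the compile-time part of this claim (written in C++, since standard C++ has no variable length arrays; the run-time case applies to C99 compilers only):

// sizeof of an ordinary array is a compile-time constant, so it can feed static_assert;
// only a C99 variable length array (not part of standard C++) forces run-time evaluation.
#include <cstdio>

int main()
{
   int fixed[10];
   static_assert(sizeof(fixed) == 10 * sizeof(int), "evaluated at compile time");
   std::printf("%zu\n", sizeof(fixed));   // prints 40 on a typical 4-byte-int platform
   return 0;
}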
mc
# ROOT
shell/i/.root
    View=%view{ascii} rootls -1lt %f 2>/dev/null
ROOT Histo
In its simplest form, a (1-dimensional) histogram is defined as the collection of the N pairs (1, n1), (2, n2), ..., (N, nN) and it is usually implemented as an array in which the first value of the pair (i, ni) is the index of the element containing the value ni.
- Fixed or variable bin size
All histogram types support either fixed or variable bin sizes.
- Convention for numbering bins
For all histogram types: nbins, xlow, xup
bin = 0;       underflow bin
bin = 1;       first bin with low-edge xlow INCLUDED
bin = nbins;   last bin with upper-edge xup EXCLUDED
bin = nbins+1; overflow bin
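A minimal sketch of the convention (histogram name and limits are arbitrary):

// With nbins = 10, xlow = 0, xup = 1: bin 0 is underflow, bin 11 is overflow.
TH1F h("h", "bins", 10, 0., 1.);
h.Fill(-5.);    // below xlow          => underflow bin 0
h.Fill(0.);     // xlow is INCLUDED    => first bin (bin 1)
h.Fill(1.);     // xup is EXCLUDED     => overflow bin (bin 11)
Printf("underflow = %g, first = %g, overflow = %g",
       h.GetBinContent(0), h.GetBinContent(1), h.GetBinContent(11));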
- Histograms with automatic bins
When a histogram is created with an axis lower limit greater or equal to its upper limit, the SetBuffer function is automatically called with an argument fBufferSize equal to fgBufferSize (default value = 1000). fgBufferSize may be reset via the static function TH1::SetDefaultBufferSize. The axis limits will be automatically computed when the buffer is full or when the function BufferEmpty is called.
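A small sketch of the automatic-binning mode (names and values are illustrative):

// xlow == xup switches the histogram to buffered, automatic axis limits;
// the limits are computed once the buffer fills up (or when BufferEmpty is called).
TH1F h("hauto", "automatic bins", 100, 0., 0.);
for (Int_t i = 0; i < 5000; i++) h.Fill(gRandom->Gaus(10., 2.));
Printf("xmin = %g, xmax = %g", h.GetXaxis()->GetXmin(), h.GetXaxis()->GetXmax());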
- Filling histograms
A histogram is typically filled with statements like h1->Fill(x), or with a weight h1->Fill(x, w). The Fill functions compute the bin number corresponding to the given x argument and increment this bin by the given weight. The Fill functions return the bin number for 1-D histograms.
If Sumw2 has been called before filling, the sum of squares of weights is also stored. One can also increment a bin directly via TH1::AddBinContent or replace the existing content via TH1::SetBinContent. By default, the bin number is computed using the current axis ranges. If the automatic binning option has been set via h->SetBit(TH1::kCanRebin), then the Fill function will automatically extend the axis range to accommodate the new value specified in the Fill argument. The method used is to double the bin size until the new value fits in the range, merging bins two by two. This automatic binning option is extensively used by the TTree::Draw function when histogramming Tree variables with an unknown range. During filling, some statistics parameters are incremented to compute the mean value and Root Mean Square with the maximum precision.
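A small sketch of the basic filling calls (names are arbitrary):

// Fill with unit weight, fill with a weight, then manipulate a bin directly.
TH1F h("hfill", "filling", 100, -4., 4.);
Int_t bin = h.Fill(0.5);       // returns the bin number for a 1-D histogram
h.Fill(0.5, 2.0);              // same bin, weight 2
h.AddBinContent(bin, 3.0);     // increment the bin content directly
h.SetBinContent(bin, 10.0);    // or replace the existing content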
- Re-binning
At any time, a histogram can be re-binned via the TH1::Rebin method. It returns a new histogram with the re-binned contents. If bin errors were stored, they are recomputed during the re-binning.
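A minimal sketch (names are arbitrary):

// Group 4 bins into 1; with a new name the re-binned histogram is returned as
// a new object and the original keeps its 100 bins.
TH1F h("hr", "rebin", 100, -4., 4.);
h.FillRandom("gaus", 10000);
TH1 *h25 = h.Rebin(4, "hr4");   // 100 bins -> 25 bins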
- Associated errors
By default, for each bin, the sum of weights is computed at fill time. One can also call TH1::Sumw2 to force the storage and computation of the sum of the squares of weights per bin. If Sumw2 has been called, the error per bin is computed as the sqrt(sum of squares of weights); otherwise the error is set equal to the sqrt(bin content). To return the error for a given bin number, do h->GetBinError(bin).
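A small sketch contrasting the two error definitions (names and values are arbitrary):

// With Sumw2() the bin error is sqrt(sum of squared weights) instead of sqrt(bin content).
TH1F h("herr", "errors", 10, 0., 1.);
h.Sumw2();                       // store the per-bin sum of squares of weights
h.Fill(0.15, 0.5);
h.Fill(0.15, 0.5);
Printf("content = %g, error = %g", h.GetBinContent(2), h.GetBinError(2));
// content = 1, error = sqrt(0.25 + 0.25) ~ 0.71 (it would be sqrt(1) = 1 without Sumw2)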
- Histogram errors
ROOT assumes by default that the histogram represents counting of values from stochastically independent measurements (e.g. the outcomes of repeated die throwing), so it computes the bin fluctuation according to the Poisson distribution: the best estimate of the standard deviation on the observed value ni of entries in bin i is the square root of ni. Note that this is not always correct; frequent examples are
- rate histograms: the i-th bin contains the number of events passing cut i and all previous selection criteria. In this case the correct probability distribution is the binomial distribution and the correct standard deviation should be computed by the user (it can also be saved into the histogram itself, for future use). When ni is not too small, the binomial distribution is well approximated by the Poisson distribution, so the ROOT default is acceptable in most cases (but not for all bins!). The TH1::Divide method has the option "B" for binomial division, but the best error estimation in this case is provided by the TGraphAsymmErrors::BayesDivide method (which is not as easy to understand ...)
- histograms computed using other histograms, for example by adding or dividing them bin by bin: ROOT assumes that the input histograms have the correct standard deviations and correctly computes the final ones. The user should make sure that the input histograms have correct errors
- histograms that do not represent value counting from stochastically independent random processes: for each specific case, the user should take care of computing and saving, via TH1::SetBinError, the correct standard deviations.
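A small sketch of the two typical remedies mentioned above; the histogram names and the error formula are purely illustrative:

// Efficiency-like ratios: ask Divide for binomial errors, or store user-computed
// standard deviations directly via SetBinError.
TH1F hall("hall", "all events", 20, 0., 10.);
TH1F hsel("hsel", "selected events", 20, 0., 10.);
// ... Fill() both histograms ...
TH1F *heff = static_cast<TH1F *>(hsel.Clone("heff"));
heff->Divide(&hsel, &hall, 1., 1., "B");             // option "B": binomial errors

for (Int_t i = 1; i <= hall.GetNbinsX(); i++)
   hall.SetBinError(i, 0.1 * hall.GetBinContent(i)); // e.g. an assumed 10% uncertainty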
- Operations on histograms
Many types of operations are supported on histograms or between histograms
- Addition of a histogram to the current histogram.
- Additions of two histograms with coefficients and storage into the current histogram.
- Multiplications and Divisions are supported in the same way as additions.
- The Add, Divide and Multiply functions also exist to add, divide or multiply a histogram by a function.
If a histogram has associated error bars (TH1::Sumw2 has been called), the resulting error bars are also computed assuming independent histograms. In case of divisions, Binomial errors are also supported. One can mark a histogram to be an "average" histogram by setting its bit TH1::kIsAverage via myhist.SetBit(TH1::kIsAverage). When adding (see TH1::Add) average histograms, the histograms are averaged and not summed.
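A small sketch of the basic operations (names and coefficients are arbitrary):

// h3 = 1*h1 + 2*h2, then bin-by-bin division and multiplication.
TH1F h1("h1", "first",  50, 0., 1.);
TH1F h2("h2", "second", 50, 0., 1.);
TH1F h3("h3", "sum",    50, 0., 1.);
h1.Sumw2(); h2.Sumw2(); h3.Sumw2();   // so that the resulting errors are propagated
// ... Fill() h1 and h2 ...
h3.Add(&h1, &h2, 1., 2.);
h1.Divide(&h2);                       // errors assume independent histograms
h1.Multiply(&h2);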
- Normalizing histograms
One can scale a histogram such that the bin integral is equal to the normalization parameter via myhist.Scale(Double_t norm), where norm is the desired normalization divided by the integral of the histogram: myhist.Scale(norm/myhist.Integral())
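A minimal sketch (names are arbitrary; divide by the bin width as well if a true density is needed):

// Scale so that the integral of the bin contents equals 1.
TH1F h("hn", "normalized", 100, -4., 4.);
h.FillRandom("gaus", 10000);
h.Scale(1.0 / h.Integral());
Printf("integral after scaling = %g", h.Integral());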