CALICE MAPS Meeting, Phone, 01/10/09
====================================

Birmingham: Owen Miller, Tony Price, Nigel Watson, John Wilson
Imperial: Jamie Ballin, Paul Dauncey
Minutes: Paul

Minutes of previous meeting:

John has not yet put the run quality file onto the Confluence website. Also, the configuration bad pixel counts need to be merged into the file.

Paul has not yet checked with Jamie Crooks about whether rapid multiple hits are able to refire the monostable. [Note added after the meeting: Jamie responded with the following: "The monostable could feasibly fire ~immediately after it has gone low, so in the ns. A double hit could indeed be two hits. The pulse length of a particular monostable should be pretty constant, so I guess you could build up statistics on the likelihood of a pixel to generate a double-hit - and therefore use this to deduce whether the particular double-hit in question is more or less likely to be a genuine second hit - depends how often you see double hits, whether this would yield much benefit. In terms of deducing efficiencies, we could consider doing some global characterisation of the monostable pulse lengths, on one of the sensors you took to the beam test - I could imagine a laser pulse applied to many pixels (perhaps by defocusing, or one by one) - where the timing is adjusted to scan the timing of the pulse w.r.t. the sequencing of the pixel - pixels would sometimes report double hits or no hits - the distribution of which could give you a figure for the (in)efficiency of an array of TPAC pixels."]

Monostable lengths:

Tony showed some slides related to the number of sequential hits seen; see the usual web page. The runs he lists as RunNumber 0, 1 and 2 are 447495, 447480 and 447481, respectively. Note, the run version number (-v) does not uniquely define the threshold settings used. In particular, -v 0 to -v 6 were used to start sequences of runs where the threshold was scanned automatically; hence, for these runs, the threshold depends on where in the sequence the run was. The run quality file has one line per run, and the first two sets of values are the sensor ids and the thresholds, given in layer order 0-5. For the runs used by Tony, these are:

Run    | Sensor ids        | Thresholds
447480 | 43 29 63 26 41 48 | 230 230 999 230 230 230
447481 | 43 29 63 26 41 48 | 150 130 999 130 150 150
447495 | 43 29 63 26 41 48 | 150 130 999 130 150 150

Here, the sensor in layer 2 with apparent id 63 was actually sensor 32, but it did not read out and its data are missing. Paul pointed out that a DAQ bug meant that, for runs 447473 to 447495 (which include 447480 and 447481, used by Tony), both this missing sensor in layer 2 and also sensor 26 in layer 3 were not read out for bunch trains, even though only the former was genuinely dead. This explains why Tony only sees data for four layers. The bug is mentioned in the test beam elog https://heplnm061.pp.rl.ac.uk/display/spider/TPAC+Logbook+CERN+180809 under the entry at 20.17. Hence, Tony should use runs outside this range, and John should flag these two layers as bad for all runs in this range in the run quality file.

Paul suggested trying to count how many pixels never give more than one sequential hit, although the best way to plot the data so as to extract this value is not clear.
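As a possible starting point for such a count (not discussed in detail at the meeting), a minimal sketch in Python is given below. The hit record format (sensor, x, y, bunch crossing) and the function names are illustrative assumptions, not the actual TPAC analysis code.

    from collections import defaultdict

    def longest_sequential_run(bx_list):
        """Longest run of hits in strictly consecutive bunch crossings."""
        longest = current = 1
        prev = None
        for bx in sorted(bx_list):
            current = current + 1 if (prev is not None and bx == prev + 1) else 1
            longest = max(longest, current)
            prev = bx
        return longest

    def count_never_multiple(hits):
        """hits: iterable of (sensor, x, y, bx) tuples (assumed format).
        Returns (pixels never giving >1 sequential hit, total hit pixels)."""
        by_pixel = defaultdict(list)
        for sensor, x, y, bx in hits:
            by_pixel[(sensor, x, y)].append(bx)
        n_never = sum(1 for bxs in by_pixel.values()
                      if longest_sequential_run(bxs) == 1)
        return n_never, len(by_pixel)

A histogram of longest_sequential_run per pixel, rather than just the count of pixels at one, might also expose the per-pixel double-hit statistics Jamie mentions above.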
Counting efficiency:

Owen reported on progress in his efficiency study; see the slides on the usual web page. The runs he uses have entries in the run quality file of:

Run    | Sensor ids        | Thresholds
447659 | 43 29 32 26 41 48 | 130 130 130 130 130 130
447660 | 43 29 32 26 41 48 | 140 140 140 140 140 140
447661 | 43 29 32 26 41 48 | 150 150 150 150 150 150
447662 | 43 29 32 26 41 48 | 160 160 160 160 160 160
447663 | 43 29 32 26 41 48 | 170 170 170 170 170 170
447664 | 43 29 32 26 41 48 | 180 180 180 180 180 180
447665 | 43 29 32 26 41 48 | 190 190 190 190 190 190
447666 | 43 29 32 26 41 48 | 200 200 200 200 200 200

and

Run    | Sensor ids        | Thresholds
447790 | 43 29 39 21 41 48 | 150 150 130 130 150 150
447791 | 43 29 39 21 41 48 | 150 150 200 200 150 150
447792 | 43 29 39 21 41 48 | 150 150 190 190 150 150
447793 | 43 29 39 21 41 48 | 150 150 180 180 150 150
447794 | 43 29 39 21 41 48 | 150 150 170 170 150 150
447795 | 43 29 39 21 41 48 | 150 150 160 160 150 150
447796 | 43 29 39 21 41 48 | 150 150 200 200 150 150
447797 | 43 29 39 21 41 48 | 150 150 190 190 150 150

The absolute chi-squared values depend directly on the errors assumed for the points used as input to the track fit. There can also be a bias from selecting the track with the best chi-squared. Both of these would need to be understood in detail to get a truly flat probability distribution. The error for an arbitrary cluster of neighbouring pixel hits (potentially also with bad pixels causing missing hits) is non-trivial to determine and will need a substantial study. To help with the track selection, tracks which are within some angular cut of the z axis, or the tracks with the smallest nominal projection error onto the sensor under study, could be chosen.

The efficiency is calculated by looking for at least one hit within a search region around the track and then dividing the number of tracks with at least one hit by the total number of tracks considered. The search region is currently a square box of +/-20 pixels (+/-1 mm) around the track impact point in both x and y, which corresponds to 1600 pixels in total. Owen corrects for both full-memory and bad-configuration pixels (the former with his own implementation, integrating over the whole bunch train, and the latter using the pxl files). His correction simply involves using only those tracks for which the whole search region contains no bad pixels. However, this should be extremely inefficient: the typical rate of bad configuration pixels alone is ~5%, so the average number expected in a region of 1600 pixels is ~80, and the probability of getting zero is roughly 0.95^1600 ~ e^-80, i.e. essentially no tracks should survive.
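For concreteness, a sketch of this counting logic is given below. The data model (tracks as predicted (x, y) pixel-index impact points; hit and bad pixels as sets of (x, y) indices) is an illustrative assumption, not Owen's actual code.

    HALF_BOX = 20  # +/-20 pixels, i.e. ~+/-1 mm, around the impact point

    def in_box(p, centre, half=HALF_BOX):
        """True if pixel p lies in the square search region around centre."""
        return abs(p[0] - centre[0]) <= half and abs(p[1] - centre[1]) <= half

    def counting_efficiency(tracks, hit_pixels, bad_pixels):
        """tracks: predicted (x, y) impact points on the sensor under study;
        hit_pixels, bad_pixels: sets of (x, y) pixel indices."""
        n_used = n_with_hit = 0
        for track in tracks:
            # Bad-pixel correction as described: drop any track whose search
            # region contains a full-memory or bad-configuration pixel.
            if any(in_box(b, track) for b in bad_pixels):
                continue
            n_used += 1
            if any(in_box(h, track) for h in hit_pixels):
                n_with_hit += 1
        return (n_with_hit / n_used if n_used else 0.0), n_used

As noted above, with a ~5% bad-pixel rate the veto at the top of the loop would reject almost every track.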
The offsets seen in the last slide for sensors 39 and 21 are from run 447866. Paul will check in case the alignment is wrong for this particular run. Note, Owen has implemented his own misalignment handling, although he uses the values from the aln files.

2D efficiency:

Paul reported on progress in his efficiency study; see the slides on the usual web page. The fit has a feature which can result in an efficiency value larger than the peak of the distribution (shown at the bottom of slide 11); hence the efficiency will be correlated with the fitted track error parameter. In addition, the fits often deal with low statistics, for which the standard binomial error on the bin efficiency, sqrt(eff(1-eff)/N), is not very accurate (in particular, it vanishes for bins with no hits or all hits). Both of these mean the efficiency values may not yet be reliable.

Paul was sloppy in the use of "efficiency" here; e.g. slide 4 actually shows the fraction of pixels which are not labelled as bad. We should be more careful when showing plots outside the group.

There is a large discrepancy between Owen's and Paul's efficiency values, so this is clearly an important issue to understand. Owen will check run 447825 to compare directly with Paul's values on his slide 12.

Other items:

Paul reported that the simulation has a bug which means no hits are produced. He needs to fix this before generating any samples.

The temperature variation could be studied by comparing runs with the same nominal thresholds but at different temperatures. This needs the efficiency to be reliably determined for each run, which is not yet the case.

Next meeting:

By phone on Tue 13 Oct, starting at 14.00. This may be moved if the next general SPIDER meeting occurs on that day; it is one of the suggested dates.