CALICE MAPS Meeting, Phone, 11/09/09
====================================

Present: Benedict Allbroke, Jamie Ballin, Paul Dauncey, Owen Miller, Tony Price, Marcel Stanitzki, Nigel Watson, John Wilson

Minutes: Paul

Analysis tasks:

Paul had distributed a list of tasks to the email list the previous day and had prepared some slides covering these in more detail; see the usual web page.

For run quality, it was thought that the PMTs are reliable enough that run information on the spill structure will not be needed. The flag for good/bad configuration will only be available when the bad pixel and configuration studies are completed.

For bad pixel handling, given that Owen now has code to flag a full memory over the whole bunch train, this could be used as a first approximation, with the more general flagging (only after the BX in which the memory goes full) implemented later. The latter would make better use of the available statistics and so may be important given the small number of particles recorded. Merging the bad pixels for the run with those flagged bad for a bunch train will require some thought, as rereading the run-level bad flags from disk every bunch train might be slow.

For clustering, there is a typo on slide 4; the cluster mean is actually given by (sum_i r_i)/N, where N is the number of pixels in the cluster; the N is missing in the slide. Marcel thought the error study is too complex to do in the first round (~1 month), so to start with, a common error for all clusters which gives a reasonable chi-sq for the track fit should be used. Note, this may need to be threshold dependent.

For efficiency extraction, the suggestion given by Paul in the slides takes the spatial dependence into account and ignores the bad pixels without introducing a bias. Owen and Marcel suggested a simpler method, namely checking for any hit within some distance (~100mu) of the track projection point. This could be biased, e.g.
if the track hit a bad pixel (and hence gave no hits) but the projection appears to indicate a good pixel. This may be a small effect but it needs study. One approach might be to not use any tracks where one or more bad pixels lie within the track projection window. However, if the window corresponds to a 5x5 pixel array, then the ~10% of masked pixels would on average give 2-3 bad pixels for every track, so such a cut may not be feasible. Both methods can be tried and compared; they really measure different properties of the sensor. Also, although a general comparison of tracks in all layers would be best, a first approximation may be to use the inner/outer layer divide as originally assumed, but it would be good not to hardcode this too much.

For monostables, Benedict reported that he had now checked ~10 runs and all gave very few double hits in time. This implies the monostable length might be a big effect and so will need to be studied. It is also not known if the monostables are sensitive to temperature. Jamie Crooks may have ideas on how to measure the pulse length. Paul also reported that some time ago, Matt Noy had rigged up a timed pulse from the USB_DAQ which could fire the pixels at a well-defined time; this could be used but would require us to reassemble the readout system. This may be hard as Marcel is not at RAL for some time within the next month. Similarly, measuring the pedestal dependence on temperature will be hard. The shower studies will need the efficiency results and so cannot realistically be done in the first round of analysis.

For simulation, Paul reported that the simulation code for producing the same format data was in poor shape and would need some work before it could be released. He will therefore do the production in the short term. GEANT4-level generation takes only a few minutes for 1k events. However, the digitisation takes a lot longer, due to the large noise count; roughly 10 hours for 1k events.
Even this is with the assumption that the noise level on all pixels is the same, which is not the case for the real data. Some studies could be done with Mokka samples but these are limited due to the different format. Materials for the EUDET telescope and CERN beam line would require digging into documentation from the two sources. Marcel agreed that it would be good for Gary to start looking at the hi-res charge spread with the Sentaurus sensor simulation.

The following is the list of tasks which people said they would cover:

o Run quality - John
o Bad pixels, configuration and threshold - Paul
o Bad pixels, full memory - Owen
o Cluster algorithms - Marcel and Jan Strube
o Efficiency, 2D method - Paul
o Efficiency, hit counting method - Owen
o Simulation updates and production - Paul

In the short term (~1 month), there seems to be too little effort available to start on the cluster comparison with MC, shower studies, temperature and monostable effects, and beamline material. These will be reconsidered after the first phase of the analysis.

Next meeting:

This will be at Birmingham on Mon 21 Sep, starting at 13.30. A phone connection can be made if needed.
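As an illustrative footnote to the efficiency discussion above, the hit-counting method and the proposed bad-pixel window veto can be sketched as follows. This is a minimal sketch, not code from the analysis: the ~100mu matching radius, the 5x5 pixel window and the ~10% masked fraction come from the discussion above, while the function names and coordinate conventions are made up for illustration.

```python
# Sketch of the simpler hit-counting efficiency method and the bad-pixel veto
# discussed under efficiency extraction. Positions are in microns; pixel
# indices are integer (column, row) pairs. Illustrative only.
import math

MATCH_RADIUS_UM = 100.0  # "any hit within some distance (~100mu)" of the track


def is_efficient(track_xy, hits_xy):
    """Count the track as efficient if any hit lies within the matching radius
    of the track projection point."""
    tx, ty = track_xy
    return any(math.hypot(hx - tx, hy - ty) <= MATCH_RADIUS_UM
               for hx, hy in hits_xy)


def window_has_bad_pixel(track_pixel, bad_pixels, half_width=2):
    """Veto check: True if any masked pixel falls inside the
    (2*half_width + 1)^2 window (5x5 for half_width=2) around the pixel
    the track projects onto."""
    px, py = track_pixel
    return any(abs(bx - px) <= half_width and abs(by - py) <= half_width
               for bx, by in bad_pixels)


# The arithmetic behind the concern raised in the meeting: with ~10% of
# pixels masked, a 5x5 window contains on average 25 * 0.10 = 2.5 bad
# pixels, so vetoing any track with a bad pixel in its window would
# reject almost every track.
mean_bad_per_window = 25 * 0.10
```

As the minutes note, the veto and the plain hit count probe different properties of the sensor, so in practice both checks would be run and compared rather than combined.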