CALICE MAPS Meeting, RAL, 16/04/07
==================================

Present: Jamie Ballin, Jamie Crooks, Paul Dauncey, Anne-Marie Magnan, Yoshi Mikami, Vladimir Rajovic, Marcel Stanitzki, Konstantin Stefanov, Renato Turchetta, Giulio Villani

Minutes: Paul

Minutes of the previous meeting:
No corrections. Paul asked JamieC to check the noise values given for the pre-shaper case, but otherwise there were no comments.

Physics simulation:
Paul discussed the preparations for the LCWS meeting at the end of May. We should expect to have one or two MAPS talks in the parallel calorimetry sessions, although the schedule is apparently already busy. Paul had circulated a list before the meeting; see the web page.

Sensor simulation:
Giulio reported that he would prefer to use the GDS file directly from JamieC, as this (obviously) has the full detail of the design. However, this level of detail may require an extremely fine simulation mesh, which could make it unfeasible to simulate even the 3x3 pixel array done previously, let alone a larger volume as requested. Giulio thought in any case it was unlikely that simulating a volume greater than 3x3 pixels would be useful, as the charge spread into the outer ~16 pixels is small enough to be negligible in each. In addition, he is concerned that the standard "triangle" of 21 simulation points would not be enough given that the pixel is no longer square-symmetric. He will strip out unneeded features such as the metal layers anyway, but it would be a lot of work to produce a simplified, symmetric version which would be easier to model. However, if this is needed, then there seems to be little alternative. Giulio will investigate what is technically feasible and report back in around a week. Any output must be available within four weeks from now to allow results for LCWS to be finalised, and also because Giulio then goes on vacation. There is also the issue of whether to use the same 21 points as before.
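The "triangle" of 21 points is consistent with a simple picture (an assumption, not spelled out in the minutes): deposits placed on a 6x6 grid across one quarter-pixel, i.e. from the pixel centre to the edge (25mu) in 5mu steps, with the diagonal mirror symmetry of a square pixel halving the grid to a triangle. A quick sketch:

```python
# Sketch: counting the charge-diffusion simulation points, ASSUMING they form
# a 6x6 grid over a quarter-pixel (0..25mu from the centre in 5mu steps) and
# that a square-symmetric pixel lets the x<->y mirror halve the grid.
step_um = 5          # deposit spacing
half_pixel_um = 25   # centre-to-edge distance

grid = range(0, half_pixel_um + step_um, step_um)    # 6 positions: 0..25mu
full = [(x, y) for x in grid for y in grid]          # full quarter-pixel grid
triangle = [(x, y) for (x, y) in full if x <= y]     # unique under the mirror

print(len(triangle), len(full))   # 21 points with the symmetry, 36 without
```

If the pixel loses the diagonal symmetry, all 36 points (or a finer grid) would need simulating, which would be the concern raised here.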
GEANT4/Mokka seems not to handle the small 5mu regions well, so it might be better to enlarge them. This would clearly degrade the charge-diffusion modelling, but it was not known at what level a significant bias would be incurred. The region size has to divide evenly into the 25mu half-pixel distance from the centre to the edge; the 5mu size is 25mu/5. The options are then 25/4 = 6.25mu, 25/3 = 8.33mu and 25/2 = 12.5mu. Of these, the first is likely to suffer from the same simulation problem as the 5mu size, while the last would have the largest error in the charge diffusion. Anne-Marie will check the simulation with these three sizes to see how they look in terms of the deposited-energy distribution.

Resolution studies:
We will need a significant number of simulated events for all the planned studies. These should be produced with a single Mokka version to ensure they are all consistent. Doing this on the Grid would be quickest, and the production could then all be done by one person. Yoshi and Anne-Marie will ensure they have the latest Mokka version, and Anne-Marie will continue looking into using the Grid. JamieB has just registered himself in the CALICE VO and so will forward the instructions on how to do this. Using photons for the resolution studies avoids the B-field bending and material bremsstrahlung effects seen with electrons. The photon energies should be generated on a logarithmic scale over a wide range; specifically, the ten values 0.5, 1, 2, 5, 10, 20, 50, 100, 200 and 500GeV were chosen, and 5k events at each should be sufficient. The disk space needed will be large, although it should not be prohibitive. An estimate given was that 5k events of 20GeV electrons produce an output file of ~1GByte. Assuming photons are very similar and that the disk space scales linearly with energy, this estimate corresponds to 10kByte/GeV per event.
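Under that linear-scaling assumption, the sample sizes follow from quick arithmetic (a sketch; the only inputs are the 1GByte/5k-event benchmark at 20GeV and the chosen energy list):

```python
# Disk-space estimate for the photon samples, assuming file size scales
# linearly with energy from the 20GeV benchmark (5k events ~ 1GByte).
benchmark_gb, benchmark_gev, events = 1.0, 20.0, 5000
kb_per_gev_per_event = benchmark_gb * 1e6 / events / benchmark_gev  # ~10

energies_gev = [0.5, 1, 2, 5, 10, 20, 50, 100, 200, 500]
sizes_gb = [e * kb_per_gev_per_event * events / 1e6 for e in energies_gev]

print(f"{kb_per_gev_per_event:.0f} kByte/GeV per event")
print(f"500GeV sample: {sizes_gb[-1]:.0f} GBytes")     # ~25 GBytes
print(f"all ten samples: {sum(sizes_gb):.0f} GBytes")  # ~44, i.e. ~50 GBytes
```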
Hence, 5k events at 500GeV will take ~25GBytes and so may need to be generated in ~25 files of 200 events each. The total disk space will be ~50GBytes. Both RAL and Imperial have ~1TByte of disk space, so this should be acceptable.

PFA studies:
The PFA studies need full physics events rather than single photons. The standard events used are e+e- -> qqbar at various centre-of-mass energies. It was decided that 5k events at each Ecm of 100, 200 and 500GeV would be needed. In addition, for calibration, samples of single K0L and neutrons are required; 1k events of each at both 5 and 20GeV are needed. The simulation to be used should be LDCSc, i.e. the AHCAL (not DHCAL) version, as Marcel already has an HCAL calibration for this. While we should try to adapt the code to run within the SiD simulation (SLIC), this was thought unfeasible on the timescale of LCWS and so will only be attempted afterwards. We will need to choose consistent values for the noise, threshold, etc. How to pick the optimal threshold value is not clear, as it could be chosen to optimise the EM resolution or increased to reduce the noise rate for the DAQ. For these studies, the DAQ rate is probably not a constraint we should be held to; the uncertainties in the DAQ performance are too large. However, generating the noise values will add significantly to the MC production time and slow the analysis. The two options are to do the whole analysis for LCWS without noise and then add it in at the end to show it is negligible, or to produce the reconstructed files including noise hits so that the generation is only done once (although the larger file sizes would make analysis programs run more slowly). The former will clearly be quicker, although we could get a nasty surprise very close to LCWS if the noise has a larger effect than expected. For the latter, with ~10^6 noise hits per event and each hit needing one 4-byte word, the noise will add ~4MBytes per event to the reconstructed files.
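The extra volume can be estimated the same way (a sketch; the event counts come from the samples listed above, and which samples to include is an assumption):

```python
# Extra reconstructed-file volume if noise hits are stored, using the quoted
# ~10^6 noise hits per event at one 4-byte word each.
noise_hits_per_event = 1e6
bytes_per_hit = 4
mb_per_event = noise_hits_per_event * bytes_per_hit / 1e6   # ~4 MBytes/event

# Event counts (assumed): ten photon energies x 5k, three qqbar samples x 5k,
# and 2x2 calibration samples (K0L and neutrons at 5 and 20GeV) x 1k.
n_events = 10 * 5000 + 3 * 5000 + 4 * 1000
extra_gb = n_events * mb_per_event / 1000

print(f"{mb_per_event:.0f} MBytes/event, ~{extra_gb:.0f} GBytes in total")
```

Depending on which samples are counted, this comes out at a few hundred GBytes, dominating the data volume.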
This would increase the disk space required by ~250GBytes, i.e. it would dominate the data volume. An alternative would be to choose a threshold high enough that the noise is definitely negligible; the problem is knowing what noise rate is genuinely negligible without generating the noise anyway. Marcel considered that the "no harm" PFA comparison with the diode pads, where the energy is estimated in areas equivalent to the diode pads by simply counting pixel hits, would be quite easy to implement in PandoraPFA, and he will do this. However, rewriting the PFA algorithm to make better use of the MAPS granularity will be a significant task; there are many parameters to estimate, such as the optimal calibration of pixel hits into energies and how photon/hadron separation will perform, so this will be postponed until after LCWS. PandoraPFA unfortunately depends heavily on truth matching of hits to particles, i.e. LCRelations between SimCalorimeterHits and CalorimeterHits. These are not well-defined for, and not implemented in, the existing digitisation code. Implementing them sensibly will be a large amount of work, and it is not clear this can be done in the time available.

FDR, Part 2:
JamieC reported that this had gone smoothly. The design was close to completion, and so almost all the parts remaining from Part 1 had been covered. One recommendation of the FDR was to submit a dry run to the foundry as soon as possible. JamieC sent this in on the day of the review (Fri 30 Mar) but, despite a reminder to the foundry, has had no feedback yet.

Sensor design:
JamieC showed results of an Eldo noise simulation of the capacitor orientations; see the Excel spreadsheet on the web page. The implementation of the capacitors as n-wells means their capacitance depends on their polarity, and he has simulated all possible polarity combinations to find the best ones. These Eldo results had not been available during the FDR, where only Spectre results were shown.
The simulation shown was only for 3.6mu diode sizes, but it agrees reasonably well with the expectations given the Spectre results, and the polarity combinations to be used are unchanged. The actual S/N values for these combinations are around 16 for the 3.6mu diodes, which implies an S/N of 24 for 1.8mu diodes; this seems high. The sensor is now effectively complete except for the test structures. JamieC is developing two parallel designs: one without the test structures, to be sure he can submit something on time, and a second with the test structures, which can be sent if it is completed before the deadline. He aims to have the non-test-structure version finished tomorrow; he is currently doing the "antenna rules" checks and adding diodes where necessary. He intends to submit on Thursday this week. He is not sure he will be able to get the full set of test structures done by then, but thinks at least the pre-sampler ones will be ready. There are no significant outstanding FDR issues which he thinks will be tricky to implement, so he is confident the main part of the sensor will be complete on time. The foundry has introduced a 0.13mu process since we organised the production of our sensors; even if it had been available at the time, it would have been too expensive for our use.

Sensor simulation:
Nothing to report beyond the discussion above.

DAQ status:
Vladimir needs the bond pad geometry to be able to lay out the sensor-holder PCB. The issue of a logic analyser connector was raised again; this had previously been ruled out as too bulky for the PCB, but Marcel thought a smaller connector could be used instead. Vladimir will consider this if one is suggested. There will be a total of 86 LVDS signals going across the connector from the USB_DAQ board to the sensor-holder PCB, requiring 172 pins. Paul had not been able to meet with Matt before he left for a meeting this week in Japan.
However, Anne-Marie reported that two USB_DAQ boards had been assembled; one seems to work, although the other has what is assumed to be an assembly fault.

SiD issues:
Marcel gave a talk today on his personal view of the SiD meeting at FNAL, at which he had given a talk last week; see the web page for today's presentation. Marcel was asked several questions during the meeting. One was on the wafer thickness: the SiD detector design has a 5T solenoidal magnet, the solenoid cost is very significant, and an incremental figure of $2M for every cm in radius is often quoted. The issue is that a swap-in MAPS design to replace the diode-pad silicon does not allow for thinning the MAPS wafers and so saving space. Wafer thinning is done by grinding down the wafers from 700mu after the circuitry is fabricated. The standard thickness is ~250-300mu, and this will be the thickness of the sensors being fabricated now. If the cost of further thinning is small, then they could in principle be thinned down to around 50mu, although handling and mounting may then become issues. Renato has used wafers down to 100mu in other projects with no major problems. However, for a 30-layer calorimeter, thinning from 300mu to 100mu would only save a total of 6mm in radius, corresponding to a saving of $1.2M on the magnet, which is small compared with the total cost of the wafers and could easily be negated by an extra cost for the thinning itself. There is a projected cost for the diode pads from Hamamatsu of $2/cm2, compared with the ~$6/cm2 paid by CMS. The problem with this is that there would likely be a sole supplier for such detectors. He was also asked about power, the ultimate maximum sensor size, the construction of the mechanical stave, and whether a version could be made with more than one threshold (effectively a non-linear ADC with 3-4 bits). These are all issues for the future, although we should begin to consider answers, as these questions will be raised again.
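The thinning argument above is simple arithmetic (a sketch; the $2M/cm solenoid figure and the 30-layer count are the values quoted in the discussion):

```python
# Radius (and solenoid cost) saved by thinning the wafers, using the quoted
# $2M per cm of solenoid radius and a 30-layer calorimeter.
layers = 30
thick_now_um, thick_thin_um = 300, 100
cost_per_cm_musd = 2.0   # $2M per cm of radius

saved_mm = layers * (thick_now_um - thick_thin_um) / 1000
saved_musd = (saved_mm / 10) * cost_per_cm_musd

print(f"radius saved: {saved_mm:.0f}mm -> ${saved_musd:.1f}M on the magnet")
```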
Marcel also mentioned a talk from the Yale and Oregon groups in SiD on "Chronopixels", which are a similar concept to our MAPS but for a vertex application. Although they recognise the importance of the deep p-well, they do not yet have access to such a process. The link to this talk is on the web page.

Conferences:
During the recent CALICE-UK Steering Board meeting at Imperial on Mon 26 Mar, there had been a discussion of upcoming conferences and abstract deadlines. In a discussion following this, Marcel had been volunteered to identify relevant conferences for the MAPS project and to organise abstract submission and speakers. He will draft each abstract with help from the relevant people in the group and, if we are granted a talk, will bring this up at the MAPS meeting so that a speaker can be identified from those able to go. A list of relevant conferences for MAPS has been linked on the web page. For the first, the EPS conference in Manchester in Jul, Marcel has already submitted an abstract. IEEE/NSS in Hawaii in Oct is another obvious candidate. Vertex 2007 in NY state in Sep does not call for abstracts but is invitation-only; people should suggest a MAPS talk to the organisers if they know them. The ICATPP conference in Como in Oct is less relevant; we may have difficulty finding speakers, so this would be a lower priority. Finally, Pixel07 will be 4-8 Sep at FNAL, but there is no web site for this yet. People should consider their availability for these meetings.

Next meeting:
13.00 on Thu 17 May, in a PPD meeting room to be announced later.