Poster Abstracts and Summaries

Eight-channel PCBs for wire tracking detectors.

E.Atkin, Yu.Volkov, I.Ilyushchenko, S.Kondratenko, Yu.Mishin, P.Khlopkov
Moscow Engineering Physics Institute (MEPhI), Department of Electronics
volkov@eldep.mephi.ru

V.Chernikov, V.Subbotin, S.Tsvetkov
State Research Institute of Pulse Technique

Abstract

The structures of several PCB varieties for preliminary processing of wire tracker signals are described. Each channel of such a structure contains a series connection of a preamplifier, shaper and comparator, implemented with both semicustom and full-custom analog chips. The PCBs are designed as four-layer boards, assembled with SMD technology, and all have the same dimensions of 110 × 30 mm.

The results of their experimental testing and a comparison of their speed, dynamic range and power consumption are presented.

The PCBs are intended for use with various wire detectors having a standard range of detector capacitances of 5...20 pF and signal currents of 1...200 µA.

Summary

Experimental testing has been carried out on three PCBs, implemented respectively with the three chips listed in the table below (ASIC, SA and ASD8B).

Measurement results of the main amplifier-shaper and comparator parameters are given in the table.

Main parameters                                        ASIC      SA        ASD8B

Transimpedance, kOhm                                   20        15        30
Input resistance, Ohm                                  160       180       250
Shaper output rise-time, ns                            10        7         8
Shaper output pulse width at base level, ns            40        40        55
Shaper output signal swing, V                          1.4       1.5       1.2
Input noise level, el.                                 1500      1500      1400
Crosstalks of neighboring channels, %                  ≤1        ≤1        ≤1
Amplifier-shaper power consumption per channel, mW     15        40        14
Comparator threshold, mV                               10-150    10-150    10-150
Comparator rise-time, ns                               5         6         6
Comparator fall-time, ns                               7         8         8
Comparator propagation delay, ns                       8         8         8
Comparator power consumption per channel, mW           18        20        18
Comparator output logic                                GTL       TTL       GTL

The PCBs are designed as four-layer boards, assembled with SMD technology, and all have the same dimensions of 110 × 30 mm.

The PCBs are intended for use with various wire detectors having a standard range of detector capacitances of 5...20 pF and signal currents of 1...200 µA.

The comparison of the PCB parameters leads to the following conclusions. The ASIC and ASD8B parameters are rather close; the latter has the advantage of greater sensitivity (transimpedance) and integration scale (eight channels per package), while the ASIC offers greater speed (a smaller output pulse width at base level), a larger dynamic range and an option of signal-shape adjustment by external elements, at approximately equal power consumption per channel. The PCB implemented with the SA is inferior to the other two in power consumption per channel. The design of the PCBs involved studying different arrangements of the supply circuits. In all PCBs the reduction of couplings through the negative supply was achieved by decoupling capacitors alone. The positive-supply couplings were reduced by decoupling RC circuits in the ASIC PCB and by composite emitter followers in the ASD8B PCB; as a consequence, the voltage fed to that PCB had to be raised to 5 V (instead of 3 V for the other two PCBs), so its total power consumption increased by 40%.

The nearest planned application of the PCBs is as front-end read-out electronics for experiments with multiwire drift chambers within the framework of the HADES project. After this first trial, it is planned to propose the PCBs for subdetectors with even higher channel counts at the LHC.


An application specific semicustom array for the implementation of multichannel front-end ICs

A. Arkhangelsky, E. Atkin, S. Kondratenko, Yu. Mishin, A. Pleshko, Yu. Volkov
Department of Electronics, Moscow Engineering Physics Institute
Kashirskoe shosse 31, 115409 Moscow Russia
E-mail: atkin@eldep.mephi.ru

Abstract

The structure of an application specific semicustom array (ASSA), designed to accommodate several identical front-end channels of the «preamp - shaper - comparator - output stage» type, is described. The circuit solutions of the individual channel units are not fixed and can be chosen to match the requirements of a specific physics experiment.

A version of the ASSA implemented in the bipolar process of the research institute «Pulsar» (Moscow), with an npn-transistor unity-gain frequency of about 7 GHz, is considered. The nearest planned application of the ASSA is the manufacture of front-end ICs for use in the LHC experiments with wire trackers and time-of-flight systems of the plastic-wall type.

Summary

The large and continuously growing number of front-end ICs produced by various manufacturers reflects the trends in the development of major contemporary physics experiments such as ATLAS, CMS, HERA-B (DESY), HADES (GSI) and others. In contrast to full-custom ICs, designing various front-end ICs on the basis of one and the same semicustom array provides greater continuity when moving from one task to another, thanks to the preservation of a common basis: a chip with disconnected components, a library of standard layout fragments and a set of SPICE models. Only a limited number of final processing stages (namely, the arrangement of the necessary connections by means of one or several plating layers, and encapsulation) is needed to implement each new front-end IC. Compared with full-custom ICs, the integration scale (number of channels per chip) is smaller here. However, this restriction is partially overcome by the application specificity of the chip, which is displayed in the following peculiarities of its layout:

For the initial placement on the chip, a preamp-shaper and a comparator were chosen, designed for use in a wire-chamber experiment (the characteristic impedance of the detector is ~330 Ohm) and having the following design characteristics:

- conversion factor from detector current to shaper output differential voltage: not less than 10 kOhm;

- shaper output pulse duration at the 10%-of-maximum level: not more than 30 ns;

- noise charge: not more than 2000 el.;

- inherent propagation delay of the discriminator (at an overdrive of 100 mV): not more than 2 ns;

- power consumption per channel: not more than 20 mW.

Channel simulation employed special software tools, which allowed the component stray capacitances to the substrate and the so-called «underdive» resistances to be taken into account when choosing the final circuit placement on the chip.

The ASSA thus gives designers an additional option for implementing front-end ICs, besides using (or designing) application-specific ICs or using general-purpose semicustom arrays.

The work is supported by the International Science and Technology Center (ISTC).


A 16-channel digital TDC chip

P. Bailly, J. Chauveau, J.F. Genat, J.F. Huppert, H. Lebbolo, L. Roos, Zhang Bo
LPNHE, Universite de Paris 6 et 7, 75252 Paris Cedex, France

Abstract

A 16-channel digital TDC chip has been designed in the context of the DIRC detector for the BaBar experiment at the SLAC B-factory (Stanford, USA). Binning is 0.5 ns and the full scale is 32 microseconds. A data-driven scheme is implemented: the chip integrates channel buffering and selective readout of data falling within a programmable time window. Use has been made of both design styles: analog full-custom for the TDC section and the FIFO memories, and digital design synthesized from high-level hardware descriptions for the data-processing functions. Tests have shown a linearity performance better than 170 ps rms, including binning noise. The chip is now in production.

Summary

This digital TDC chip is intended to equip the front-end electronics of the Detector of Internally Reflected Cherenkov light of the BaBar experiment. Time measurements are used there to reject background noise; the required resolution is of the order of one nanosecond. A binning of 500 ps has been chosen, with a full scale of 32 microseconds, in order to cope with a first trigger latency of 11 microseconds. Double-pulse resolution is 32 ns. Internal buffering of the digitized times allows random inputs to be handled at a maximum average rate of 100 kHz with a dead time of less than 0.1%. Readout of data within a programmable time window is implemented; this feature reduces by a factor of ten the amount of information to be read out for the DIRC detector at BaBar. The chip is housed in a 68-pin package, the die size is 36 mm², and the power consumption is less than 100 mW at a 100 kHz input rate.

The TDC section uses sixteen fully independent digital delay lines to digitize time with 500 ps binning. A separate calibration channel allows the chip delays to be tuned to the 60 MHz reference clock, in order to cope with supply, temperature and process variations. Coarse time is measured with a fast counter common to all channels, providing the 11 most significant bits. Double-pulse resolution is 32 ns.

Buffering is implemented using sixteen 4-deep dual-port channel FIFOs, a single 32-deep intermediate FIFO and a 32-deep output FIFO. Data stay in the channel FIFOs or the intermediate FIFO for one trigger latency and are transferred afterwards to the output FIFO, where they stay for the duration of the trigger window. The trigger latency and window can be programmed up to 16 microseconds and 2 microseconds respectively, in steps of 64 ns. On a readout request, the chip builds a data packet that outputs the times selected within the programmed trigger window and flags any channel-FIFO overflow.
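As a software illustration of the data-driven selection just described, the sketch below keeps only those buffered hit times that fall inside a programmable window placed one trigger latency before the trigger. The window placement convention and the example numbers are assumptions for illustration, not the chip's exact specification.

```python
# Software analogue of the TDC's selective readout: hits whose timestamps fall
# inside a programmable window opened one trigger latency before the trigger
# are packed into a readout packet. Window placement and example numbers are
# assumptions; the real chip programs latency and window in 64 ns steps.

LSB_NS = 0.5  # TDC binning, 0.5 ns per count

def select_hits(hit_times_ns, trigger_time_ns, latency_ns=11_000.0, window_ns=2_000.0):
    """Return the buffered hit times (ns) accepted for readout for one trigger."""
    window_start = trigger_time_ns - latency_ns
    window_end = window_start + window_ns
    return [t for t in hit_times_ns if window_start <= t < window_end]

# Example: three buffered hits, trigger issued 11 us after the interesting ones.
hit_counts = [10_400, 13_000, 19_500]                 # raw TDC counts
hits = [c * LSB_NS for c in hit_counts]               # -> 5200, 6500, 9750 ns
print(select_hits(hits, trigger_time_ns=16_000.0))    # -> [5200.0, 6500.0]
```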

The TDC section has been designed using the Cadence CAD package. Spice simulations have been used to model the delays, the calibration mechanism and the channel FIFOs, which are implemented as a full-custom design for reasons of timing accuracy and layout compactness. A stick layout tool has been used. The digital hardware has been synthesized from Verilog behavioural and structural descriptions using a logic synthesizer. The TDC section has also been modeled in Verilog for the simulation of the full chip.

Tests of the first prototype chip have shown that the process uniformity is good enough for integrating 16 channels on a single die, with both differential and integral linearities far below the least-significant-bit value. The calibration process has been found stable and transparent: no differences have been found in the time measurements with or without the phase-locked loop running. Crosstalk between adjacent channels is less than 125 ps. As a result of the tests, the differential linearity is found to be better than 170 ps rms on all channels, including the unavoidable binning noise.

The chip is presently under production.


A CMOS Cluster Finder to read sparse data

C.Caligiore, D.Lo Presti, G.V.Russo
Physics Department, University of Catania and Sezione INFN, Catania (ITALY)

Abstract

A Cluster Finder for the ALICE ITS SDD readout is proposed. It detects analog data clusters in the Analog Memory of the readout system. It consists of the Time Peak Detector (TPD) and the Centre Peak Detector (CPD). The TPD looks for the peak of the signal for each anode of the detector; a threshold remote control is provided to avoid the detection of ghost tracks due to noise. The CPD, whose chip has already been made in CMOS 1.2 um technology, is essentially a Winner-Take-All circuit. It compares the signals of three adjacent anodes when the central one is at its maximum.

Summary

A circuit which is part of the readout system for the Silicon Drift Detectors used in the ALICE experiment is proposed. Applying fuzzy logic, it becomes possible to compute with good precision the track position and the charge released by the particle in the device.

The Event Memory (EM), which is part of the electronics we propose, stores all the data coming from the detector.

A circuit is needed that processes the data entering the EM to detect the contiguous entries due to the charge released by a particle (a cluster). The signals entering the EM are continuously processed by the Cluster Finder (CF), the system foreseen for the search of clusters.

In the EM there are two memories: the Analog (AM) and the Mirror Memory (MM). The AM-cells store the analog data resulting from the sampling of the amplifier (TA) signals, the MM-cells the binary ones.

To find a cluster, we compare the contents of the five cells that form a cross: if the central one contains the highest value, a cluster has been found. That is, when the signal of the central anode reaches its maximum value, we compare it with those of the adjacent anodes at the same time. The MM is then marked with a bit in the corresponding cell.

The CF consists of two circuits, the Time Peak Detector (TPD) and the Centre Peak Detector (CPD). This circuit uses the signals coming from three consecutive channels of the TA. The TPD is the part which looks for the peak of the signal on the central anode. When the peak is found, the TPD enables the CPD to compare this signal with those of the two adjacent channels at the same time. If the latter have a lower level, a cluster has been found.
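A minimal software sketch of this selection rule is given below: a sample is flagged as a cluster centre when it is a peak in time (the TPD's role) and not lower than the samples of the two adjacent anodes at the same instant (the CPD's role). The data array and threshold are invented; this illustrates the rule, not the CMOS implementation.

```python
# Toy illustration of the cluster-finding rule: for anode a and time sample t,
# flag a cluster centre when the central sample is a peak in time (TPD role)
# and is not lower than the two adjacent anodes at the same time (CPD role).
# The data array and the threshold are invented for the example.

def find_clusters(samples, threshold=5):
    """samples[a][t] holds the sampled amplitude of anode a at time bin t."""
    clusters = []
    for a in range(1, len(samples) - 1):
        row = samples[a]
        for t in range(1, len(row) - 1):
            v = row[t]
            if v < threshold:
                continue                     # suppress noise / ghost tracks
            peak_in_time = row[t - 1] < v >= row[t + 1]
            highest_anode = samples[a - 1][t] <= v and samples[a + 1][t] <= v
            if peak_in_time and highest_anode:
                clusters.append((a, t, v))   # mark the corresponding MM cell
    return clusters

data = [
    [0, 1, 2, 1, 0],
    [0, 3, 9, 4, 1],   # anode 1 peaks at t = 2 and dominates its neighbours
    [0, 2, 5, 3, 0],
]
print(find_clusters(data))  # -> [(1, 2, 9)]
```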

The TPD is based on a zero-crossing discriminator: it switches when its input crosses zero voltage on the trailing edge. The input of the TPD is first differentiated by a CR filter; then the signal is amplified and shaped into a square pulse. When the TPD finds a peak, a square pulse 8 ns wide is produced. A threshold remote control is provided to avoid the detection of ghost tracks due to noise.

The output of the TPD enables the CPD: this circuit is essentially a Winner-Take-All. It has n analog input signals and one binary output signal, and it is able to detect which input signal is the highest. The CPD works only when a comparison must be made. A 3-input and a 5-input CPD were made in CMOS AMS 1.2 um technology. The CPD was designed to be very fast with a low power dissipation (only 6 mW at the highest frequency). In the worst case this circuit gives its output in less than 10 ns, up to 80 MHz.

Our purpose is to make a complete version of the CF chip in CMOS AMS 0.8 um technology.


RAM radiation functional upsets

A.I.Chumakov, A.V.Yanenko, O.A.Kalashnikov

Abstract

RAM single event upsets and total dose failures can result from the LHC radiation environment. Approaches to the evaluation and assurance of RAM radiation hardness for both error types are presented. Design principles and examples of radiation-tolerant RAM units are discussed.

Random Access Memory (RAM) is an important part of physics-experiment electronics. It is used in dynamic data buffers and for the storage of experimental results and, usually, it is placed near the primary radiation beam. RAM upsets and failures therefore result from the LHC radiation environment. Generally, upsets (single event upsets) are due to local ionisation from high-energy nuclear particles and can be described by the average upset frequency during irradiation. Failures are connected with the degradation of RAM characteristics under irradiation as a total dose response.

The difference between these failure types leads to different approaches to RAM radiation hardness evaluation and assurance.

To design RAM units tolerant to nuclear-particle irradiation one should:

- evaluate the radiation environment, including the secondary irradiation (neutrons, protons);

- evaluate the average and maximum frequency of single event effects;

- avoid RAMs with the possibility of catastrophic errors (latch-up, burnout);

- provide algorithmic methods for upset correction.

As a rule, total dose failures of RAM are due to a significant threshold voltage shift in the cell MOS transistors (including parasitic ones) or to gain degradation of bipolar transistors. During hardness evaluation the effective dose rate and the possible switching-off of RAM modules have to be taken into account. Therefore the standard techniques of RAM hardness assurance are not applicable to LHC conditions. RAM experimental research techniques should therefore observe both the dependence of functional failures on total dose and the "annealing" of functional failures after irradiation. Such experimental data, obtained at a relatively high dose rate, allow RAM behaviour at a relatively low dose rate to be estimated. The most efficient way to obtain these data is to use the X-tester technique.

An additional advantage of the X-tester technique is the possibility of irradiating a local chip fragment. These data make it possible to evaluate the radiation sensitivity of different parts of the chip and to extract the radiation-sensitivity parameters that can be used for radiation prediction.

One effective method of predicting the RAM functional upset level is based on fuzzy logic. The simulation uses the criterion membership functions (CMF) method of fuzzy set theory. The effect of irradiation on an element is simulated by the introduction of an additional CMF input signal.

In this work, algorithms for evaluating RAM unit hardness under typical radiation conditions are considered, and examples of such evaluations and of techniques for providing RAM module hardness are given. For example, different error-correction techniques for RAM in the presence of SEUs are considered. The authors propose a RAM unit based on Hamming error correction and periodic information refreshing (scrubbing).
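To make that last idea concrete, here is a minimal Python sketch of Hamming(7,4) single-error correction combined with a periodic scrubbing pass; it is only an illustration of the principle with made-up data, not the authors' RAM unit design.

```python
# Schematic illustration of SEU mitigation by Hamming single-error correction
# plus periodic scrubbing (re-writing corrected words). Hamming(7,4) is used
# for brevity; a real RAM unit would protect wider words.

def hamming74_encode(d):                 # d: list of 4 data bits
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]  # bit positions 1..7

def hamming74_correct(c):                # c: 7-bit codeword, possibly 1 bit flipped
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3      # 0 means no detected error
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1             # flip the erroneous bit back
    return c, [c[2], c[4], c[5], c[6]]

def scrub(memory):
    """Periodic pass: correct every stored codeword and write it back."""
    for addr, word in enumerate(memory):
        memory[addr], _ = hamming74_correct(word)

mem = [hamming74_encode([1, 0, 1, 1])]
mem[0][4] ^= 1                           # inject a single event upset
scrub(mem)
_, data = hamming74_correct(mem[0])
print(data)                              # -> [1, 0, 1, 1]
```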


Simplified Technique for Predicting Single Event Upsets in RAM

A.I.Chumakov, A.V.Yanenko, A.Y.Shevchenko

Abstract

The simplified technique for predicting single event upsets from neutrons and protons in RAM is presented. It is based on Bendel approximation and Cf-252 tests.

An improvement of this approach is developed, including tests with different angles of particle-beam incidence on the sensitive volume and with attenuation of the particle energy. Additional alpha-particle isotopic tests at different supply voltages are also proposed.

The comparison between the prediction and experimental results is presented.

A single event upset (SEU) is a bit flip in a digital element of an integrated circuit (IC) caused by a single nuclear particle. Such effects can lead to failures and upsets of electronic equipment in a radiation environment. The most SEU-sensitive ICs are VLSI devices, especially buffer RAMs placed near nuclear detectors, because of the secondary neutron and proton irradiation with relatively high particle energies produced at the LHC.

The usual procedure of SEU rate prediction consists of three stages: estimation of the radiation environment; determination of the IC's SEU sensitivity parameters; and SEU rate prediction. The most difficult problem, associated with the second stage, is determining the dependence of the proton- (neutron-) induced SEU cross section on particle energy. Usually it is obtained from proton and/or ion accelerator tests at different energies or linear energy transfers.

This approach is rather complex and expensive, and it would be useful to develop simpler alternative methods. Cf-252 and picosecond focused-laser-beam tests are widely used for SEU research. There are problems in applying focused-laser-beam simulators to SEU prediction because of optical effects and the IC's topological effects. Therefore only the Cf-252 tests can be used.

The main goal of this work is to determine the upset cross section vs. proton (neutron) energy using Cf-252 tests. For high-energy particles (>30 MeV) the one- or two-parameter Bendel approach is straightforward. The threshold energy can be determined using the following assumptions:

- the sensitive volume of the device is a right rectangular parallelepiped whose area is proportional to the SEU heavy-ion saturation cross-section;

- there is a relation between the specific SEU threshold energy and the asymptotic cross section.

Some improvement of this approach can be obtained by changing the angle of particle-beam incidence on the sensitive volume or by using special layers for energy attenuation.

In our opinion the best results can be obtained from SEU isotopic tests at different values of the RAM power supply voltage. If the power supply voltage becomes lower, the SEU sensitivity increases, and vice versa. The threshold can therefore be obtained from the RAM supply voltage value at which SEUs disappear. As a rule, it is impossible to obtain this value from Cf-252 tests because the required supply voltage is very large. The use of an isotopic alpha source helps to solve this problem, because for the majority of ICs there are no SEUs under normal conditions. So it is necessary to decrease the supply voltage and to determine the voltage value at which SEUs appear. In spite of some problems, the SEU threshold energy and the asymptotic cross section can be determined from these tests, and both parameters of the Bendel model can then be used.
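For reference, the sketch below evaluates a Bendel-type cross-section curve from the two fitted parameters discussed above (threshold energy and asymptotic cross section). The functional form is the commonly quoted Bendel expression from the general literature, and the numerical values are invented for illustration; neither is taken from this paper.

```python
# Bendel-type proton (neutron) SEU cross-section curve built from the two
# parameters the text proposes to extract: the threshold energy A (MeV) and the
# asymptotic (saturation) cross section. Shape factor as commonly quoted:
# sigma(E) = sigma_sat * (1 - exp(-0.18*sqrt(Y)))**4, Y = sqrt(18/A)*(E - A).
# The parameter values below are invented for illustration only.
import math

def bendel_sigma(E_MeV, A_MeV, sigma_sat_cm2):
    if E_MeV <= A_MeV:
        return 0.0                       # below the SEU threshold energy
    Y = math.sqrt(18.0 / A_MeV) * (E_MeV - A_MeV)
    return sigma_sat_cm2 * (1.0 - math.exp(-0.18 * math.sqrt(Y))) ** 4

for E in (10.0, 30.0, 100.0, 500.0):     # MeV
    print(E, bendel_sigma(E, A_MeV=5.0, sigma_sat_cm2=1e-13))
```

In the approach described above, A would come from the supply-voltage/alpha-source tests and the saturation cross section from the Cf-252 measurement.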

Unfortunately, this model gives a large error for relatively low particle energies (< 30 MeV), especially for neutrons. For this energy range the cross-section energy dependence can be obtained using the simplest scattering model, taking into account only the total energy loss of the heaviest secondary particle in the sensitive volume. The comparison between the prediction and the experimental results demonstrates that this approach can be used.


Performance of the HELIX128S-2 Readout Chip for the HERA-B Silicon Vertex and Inner Tracking Detectors

W. Fallot-Burghardt, W. Hofmann, K.T. Knoepfle, E. Sexauer, U. Trunk
Max-Planck-Institut fuer Kernphysik

M. Cuje, M. Feuerstack-Raible, F. Eisele, B. Glass, U. Straumann
Universitaet Heidelberg

Abstract

HELIX128S-2 is the second version of a 128 channel readout chip designed for the silicon vertex and the inner tracking microstrip gas chamber detectors of HERA-B. It has been manufactured in the AMS 0.8um CMOS process.

A modified version (manufactured in the DMILL process) will meet the specifications for LHC-B vertex detector readout.

Measurements of the HELIX128S-2's performance are presented, as well as test results from new features.

Special emphasis will be given to the results of irradiation tests of the HELIX128-1 chip using a 137-Cs source.

Finally the radiation tolerance of the AMS CMOS processes in general will be discussed.

Summary

HELIX128S-2 is the second version of a 128 channel readout chip designed for the silicon vertex and the inner tracking microstrip gaseous chamber detectors of the HERA-B experiment; it has been manufactured in the AMS 0.8um CMOS process.

The chip's design features in a brief summary:

A modified version (manufactured in the DMILL process) will meet the specifications for LHC-B vertex detector readout.

Since the analog signal path has been completely revised, HELIX128S-2 is expected to show an enhanced performance compared to its predecessor HELIX128-1.

Due to a revised preamplifier-shaper front end, a lower noise (in the 340 e- + 40 e-/pF regime), an improved pulse shape and a better linearity (about -10 to +10 MIPs) are expected.
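As a quick worked example of this noise parameterization (the load capacitance value is an assumed figure, not one from the paper):

```python
# Equivalent noise charge from the series parameterization ENC = 340 e + 40 e/pF.
# The 15 pF load is an assumed example value, not a figure from the paper.
def enc_electrons(c_det_pF, enc0=340.0, slope=40.0):
    return enc0 + slope * c_det_pF

print(enc_electrons(15.0))  # -> 940.0 electrons for an assumed 15 pF load
```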

The changes to

- the comparator circuit (changed to an AC-coupled differential amplifier),

- the pipeline readout amplifier,

- the multiplexer (changed to a two-stage design running at 40 MHz),

- the output buffer (changed to a low-power current output type)

are also intended to lead to an improved performance of the chip.

The bias generator circuits (formerly implemented on an extra chip) have been integrated on the HELIX128S-2 chip. All settings can be programmed via a serial interface using three lines.

HELIX128S-2 can now handle up to 8 pending triggers. Therefore the depth of the derandomizer buffer has been increased from 4 to 8 stages.

A token scheme and an internal switch allow the daisy-chained readout of two or more HELIX128S-2 without external components.

Synchronicity deviations of several chips running in parallel are detected by means of a scalable error detection circuit.

Measurements of the HELIX128S-2's performance are presented, as well as test results from new features (e.g. daisy-chained readout, higher readout frequency).

Special emphasis will be given to the results of irradiation tests of the HELIX128-1 chip using a 137-Cs source. Strategies to compensate performance degradation introduced by irradiation damage will be addressed.

Further issues will be the impact of irradiation on noise performance, channel offsets and pipeline cell capacitances.

Finally the radiation tolerance of the AMS CMOS processes in general will be discussed.


Microelectronic Approach to Smart Sensor Quality, Reliability and Radiation Hardness Regulation and Assurance

V.A.Telets, A.Y.Nikiforov and D.V.Gromov

Abstract

An approach to the regulation and assurance of smart sensor reliability, testability and radiation hardness on the basis of microelectronics standards is presented. Smart sensor components are identified not only as measuring instruments but also as an original class of integrated circuits (IC) with a corresponding system of general-purpose parameters for basic applications. It is suggested that the conventional IC general specifications, radiation hardness requirements and qualification technique standards be adopted and adjusted to this new class in particular supplements. The microelectronic and metrological terms and parameter systems within sensor component integrated circuits (SCIC) are analysed in detail. Examples of the positive experience of implementing this outlook in the design and testing of thermal, pressure, radiation and optical sensors are presented and analysed.

Summary

Microelectronics, with its great functional and process potential, has invaded traditional metrological areas in sensor and particularly smart sensor design. It is demonstrated that most sensor elements are designed and manufactured on the basis of the same (or similar) microelectronic principles and process operations, and are also packaged in standard or adapted microelectronic packages. Therefore smart sensors, like ICs but in contrast with measuring instruments, are mass products with a high level of multi-purpose usage, run-to-run reproducibility of parameters and all the other advantages and achievements inherent to microelectronics.

This state-of-the-art reality is in poor conformity with traditional metrological rules and standards. It has resulted in the spontaneous development of the sensor nomenclature on the basis of particular customer requirements, and in general it has negatively influenced the regulation and assurance of smart sensor quality, reliability and radiation hardness.

We consider that the best way to overcome most of these contradictions is to introduce the original IC class of "sensor components". Smart sensors can thus be considered as the functional and technological peak of Sensor Component Integrated Circuit (SCIC) development. With the advent of SCIC it is now possible to achieve a technical compromise between the metrological and microelectronic approaches in device design and in the regulation and assurance of quality, reliability and radiation hardness. SCIC can be defined as functional devices which are individual products from the point of view of technical requirements, qualification and operation, and which are intended to sense and transform the input external environment parameters (physical values) into output electrical signals. SCIC themselves are not measuring instruments and thus do not need any metrological calibration procedure. But if SCIC are intended for measuring applications they should be calibrated by the customer either at incoming inspection or within the particular measuring system. SCIC therefore provide general-purpose basic cells with which to build various measuring instruments (with their inherent calibration procedure) as well as any data acquisition, control, processing and similar devices. It is demonstrated that the clear technical status of SCIC makes it possible to construct a general-purpose system of terms, parameters, requirements and qualification methods by a synthesis of the appropriate basic ideas, concepts and parameters from both metrology and microelectronics. This, in turn, permits us to develop and introduce general documents intended to serve as guides for the technical personnel of end users and suppliers.

A standard of the "General specifications" type for SCIC could be developed on the basis of the corresponding IC standard, but with additional particular supplements specified according to the input external environment parameters: temperature, pressure, radiation, etc. It should be noted that the external environment parameters (physical quantities) are considered at the same time as the input data and as destabilising factors. The approach is quite efficient for developing SCIC quality, reliability and radiation hardness regulation and assurance systems, which could be based on the appropriate IC standards, taking into account the above-mentioned particularities and with additional SCIC supplements for the various input physical quantities.

In general the approach is based on wide experience of positive technical solutions to the same problems in microelectronics. Examples of the positive experience of implementing this outlook in the design and testing of thermal, pressure, radiation and optical sensors are presented and analysed as well.


Fuzzy Simulation of Total Dose Functional Failures of Digital Units

O.A.Kalashnikov

Abstract

An approach to the numerical simulation of total-dose functional failures of digital units is presented. It is based on the criterion membership functions method of fuzzy set theory. This approach makes it possible to study the failures of digital LHC electronic units in various operation modes.

1. INTRODUCTION

Total-dose degradation of IC parameters and the resulting errors are very typical for LHC applications. For rather complex digital ICs and units the functional failures are of most interest. Studying such failures under real operating conditions requires significant effort. Various methods of numerical simulation are widely used as an alternative.

Modern CAD systems for simulating radiation failures in digital units and ICs use either electrical models of the analysed unit, which are limited in complexity, or the worst-case approach, which has been shown to give unacceptable results in many cases. Therefore a specialized system intended for the simulation of radiation functional failures in digital units and ICs has been developed.

2. PRINCIPLES OF THE SIMULATION

The simulation is based on the criterion membership functions (CMF) method of fuzzy set theory. Application of the CMF method to the simulation of radiation functional failures in digital units and ICs is based on the following principles.

1. The continuous character of the processes in IC elements under irradiation leads to the parametric character of their failures. Analysing such processes requires transforming the Boolean logic model with the signal set {0,1} into a logic model whose signals belong to the continuous interval [0,1].

2. The influence of irradiation on an element is simulated by the introduction of an additional CMF input signal, which also lies within the range [0,1]. The value of this signal depends on the irradiation time (total dose). The technique for CMF determination is presented.

3. The Boolean logic functions are transformed into continuous functions. Thus, the system of Boolean logic equations describing the unit's operation is transformed into a system of fuzzy logic equations.

An example of rather simple radiation model construction is discussed.
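To make principle 3 concrete, the sketch below replaces Boolean gates by the standard min/max/complement fuzzy operators and adds a dose-dependent input in [0,1] that degrades a gate's output. This is a generic fuzzy-logic illustration with an invented degradation law; it is not the authors' CMF model.

```python
# Generic fuzzy-logic gates over [0,1] (min/max/complement operators) with an
# extra radiation input r in [0,1] that pulls a degraded gate's output towards
# the undefined level 0.5. The degradation law is invented for illustration;
# in the paper's CMF method it is determined from electrical calculations.

def f_and(a, b):        # fuzzy AND
    return min(a, b)

def f_or(a, b):         # fuzzy OR
    return max(a, b)

def f_not(a):           # fuzzy NOT
    return 1.0 - a

def degraded(gate_value, r):
    """Blend the ideal gate output with the 'uncertain' level 0.5 as dose grows."""
    return (1.0 - r) * gate_value + r * 0.5

def nand_unit(a, b, r):
    """A NAND whose output membership degrades with the radiation signal r."""
    return degraded(f_not(f_and(a, b)), r)

for dose_signal in (0.0, 0.3, 0.8):
    print(dose_signal, nand_unit(1.0, 1.0, dose_signal))
# r = 0.0 gives a clean logic '0'; larger r drifts the output towards 0.5.
```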

According to the above-mentioned principles, a system for functional-logic numerical simulation has been developed. In general the system consists of:

- calculation kernel;

- elements library;

- subsystem for radiation model parameters determination;

- subsystem for calculation management.

3. MODEL PARAMETERS DETERMINATION

Determining the parameters of an element's functional-logic model means determining the corresponding CMF dependences on irradiation time. Thus, it is necessary to solve the following problems:

- determination of the element's radiation failure types and the number of corresponding CMFs;

- determination of the dependences of the element's characteristics on total dose;

- determination of the criterion of signal membership to logic "0" or "1";

- the CMF calculation.

Ways to solve these problems are presented and discussed. The fuzzy simulation method does not need information about the device technology and design. It deals with the functional level of operation and failures, and requires only electrical calculations for the determination of the model parameters.

4. SIMULATION OF DIGITAL UNITS RADIATION FUNCTIONAL FAILURES

The approach proposed for the simulation of long-term radiation functional failures, and the system based on it, allow us to study digital unit failures under various operation modes. Some examples of the simulation of ICs and simple units, and a comparison of simulation and test results, are presented.

It is demonstrated that the developed fuzzy logic approach is able to simulate long-term failures under any of the LHC radiation types, such as neutrons, electrons and other particles.


A discriminator PCB for precise timing signal generation.

E. Atkin, Yu. Volkov, P. Khlopkov
Department of Electronics, Moscow Engineering Physics Institute
Kashirskoe shosse 31, 115409 Moscow Russia
E-mail: khlopkov@eldep.mephi.ru

Abstract.

On the basis of a previously developed ASIC, a discriminator PCB for precise timing signal generation has been designed and manufactured. Particular attention is paid to the reproducibility of the characteristics, since the discriminators are intended for use in multichannel physics research equipment (hundreds and thousands of channels).

The expected timing accuracy is ±60 ps (FWHM) for input amplitudes of 30 mV to 2 V with a rise-time of 3 ns (in the CFD mode). This value is confirmed by test results from functional bread-board prototypes. The input signals can be either positive or negative. The dimensions of the multilayer SMD PCB containing one discriminator channel are 40 × 100 mm.

Introduction.

As is generally known, the LHC experiments set tight requirements on the dimensions of the front-end blocks, which are often placed in the experimental area and carry out the necessary preliminary processing of the detector signals.

The PCB presented here is intended both for use near the signal sources (for instance PMTs) and at a distance from them, in the body of any standard module incorporated in the equipment rack.

Contents and design.

The basic unit of the PCB is the previously designed full-custom IC [1]. The discriminator can operate both in the mode of an ordinary constant fraction discriminator (CFD) and in a mode that compensates the input rise-time variation. To improve the reproducibility of the discriminator characteristics, circuits were used that compensate the standard technological spread of the IC parameters. The circuit also includes a network that allows the double-pulse time resolution to be changed, and hence prevents spurious switching in the case of poor input matching and non-monotonic trailing edges of input signals. To reduce the influence of interference when the discriminator is placed at a distance, the supply circuits are fitted with stabilizers.
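For readers unfamiliar with the CFD mode mentioned above, the following sketch implements the textbook constant-fraction timing principle (delayed pulse minus attenuated pulse, zero-crossing pick-off) on sampled data. It is a generic illustration with invented sampling step, delay and fraction values; it does not model the IC of [1].

```python
# Textbook constant-fraction discriminator on a sampled pulse: form
# v(t) = s(t - delay) - fraction * s(t) and take its zero crossing, which is
# (ideally) independent of the pulse amplitude. Sampling step, delay and
# fraction below are illustrative values, not parameters of the IC.

def cfd_crossing(samples, dt_ns, delay_ns=2.0, fraction=0.3):
    d = int(round(delay_ns / dt_ns))
    shaped = [samples[i - d] - fraction * samples[i] if i >= d else 0.0
              for i in range(len(samples))]
    for i in range(1, len(shaped)):
        a, b = shaped[i - 1], shaped[i]
        if a < 0.0 <= b:                       # interpolate the zero crossing
            return dt_ns * (i - 1 + (-a) / (b - a))
    return None

def pulse(amplitude, t_ns, rise_ns=3.0):
    """Simple linear-ramp leading edge with the quoted 3 ns rise-time."""
    return amplitude * min(max(t_ns / rise_ns, 0.0), 1.0)

dt = 0.1
for amp in (0.03, 0.5, 2.0):                   # 30 mV to 2 V input range
    wave = [pulse(amp, i * dt) for i in range(200)]
    print(amp, cfd_crossing(wave, dt))
# The crossing time comes out the same for all amplitudes, which is the point
# of constant-fraction timing.
```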

Performance characteristics and application.

The main parameters of a single discriminator channel are the following:

The first trial of the discriminator is expected to take place within the HADES project at GSI (Darmstadt, Germany) and at INFN (Catania, Italy). By their parameters, the designed discriminators are suitable for a wide range of other experimental equipment, for instance the first-level trigger of ALICE.

Development

In the coming months the following development of the described discriminator is planned:

References

1. E. Atkin et al., "A bipolar IC for the equipment of precise time reference", Proceedings of the Second Workshop on Electronics for LHC Experiments, Balatonfuered, Hungary, September 23-27, 1996, CERN/LHCC/96-39, 21 October 1996.


Electrical and Signal-integrity Design of a 1.06 Gb/s Fiber-optic Media Interface

Tivadar Kiss, László Sáfrány, István Novák, Bertalan Eged
Department of Microwave Telecommunications, TU of Budapest

Abstract

As part of the ALICE DDL project, experimental 1.06 Gb/s media interface cards have been designed and tested. Two versions of the circuit have been realized with different PCB stack-ups and power-supply filtering methods. In addition to the functional tests, thorough signal-integrity measurements have been carried out. After evaluation of the measurements, the PCB design of the integrated DDL SIU and DIU cards can be optimized.

Summary

Concerning signal quality and link reliability, the high-speed serial interconnections seem to be the most critical part of the SIU and DIU interface cards of the ALICE Detector Data Links. The component selection, the PCB stack-up and layout design, and the applied power-supply filtering method are all critical issues here. Therefore, experimental cards have been designed to test the newest components and the PCB design. The block diagram of the circuits is shown in Figure 1.


Using LOTOS in Specifying and Verifying the ALICE-DDL protocol

S. Szilágyi, Z. Meggyesi, J. Harangoz
Technical University of Budapest (BME), Hungary

G.Rubin,
Research Institute for Particle and Nuclear Physics (RMKI), Budapest, Hungary

Abstract

The Data Acquisition systems in the LHC experiments will require several thousand high-speed, reliable data links. Special, not yet standardized solutions in communication protocols such as the ALICE-DDL necessitate modeling and simulation in the development phase. The use of a formal description technique leads to early detection of design errors and reduces the development cost. LOTOS is a formal description language which can provide formal support to a large part of the design cycle. The model can serve as a means with which we can study the structure of the protocol, and its completeness and logical consistency. The validation model defines the interactions of the processes in the system, without resolving implementation details.

Summary

In the physical decomposition step the FEE, RORC and DDL components will be specified as the highest-level processes. This means defining the events and event structures on these ports and defining data types for the interfaces. Refinement of the components of the DDL link is not necessary unless a lower-level communication protocol over the physical medium is modeled.

The next design step is a functional decomposition, where the components of the black-box models and their interfaces will be identified for a more complex model of the system in a stepwise refinement process. New patterns of behaviour will be added in a hierarchical manner. The highest-level procedures will be: data transfer with flow control from FEE to RORC, and data transfer with flow control from RORC to FEE. These protocols will use lower-level processes, which are: front-end command transmission, front-end status read-out, interface command transmission and interface status read-out.

Validation and verification of a LOTOS model can be performed by using test processes written directly in LOTOS. The technique consists of describing the property in the style of a test process and executing that test in parallel with the system to be tested. The test process includes a special action which indicates that the test has been passed and the two processes synchronise on all other events. If the system can perform the same actions as the test process then the test is passed; if not, the test fails. Some possible test processes: link initialisation, link reset, data transmission, data read-out. The validation model defines the interactions of processes in the system, without resolving implementation details.
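The testing idea can be mimicked outside LOTOS as well: the toy Python sketch below composes a system and a test process, both given as labelled transition systems, lets them synchronise on shared actions, and reports the test as passed when the distinguished success action becomes reachable. The transition tables are invented stand-ins for a DDL-like link; this is an illustration of the principle, not a LOTOS specification.

```python
# Toy illustration of LOTOS-style conformance testing: the system and the test
# process synchronise on all shared actions; the test passes if the special
# "success" action becomes reachable in the parallel composition. The
# transition tables are invented stand-ins for a DDL-like link model.
from collections import deque

SYSTEM = {                       # state -> {action: next_state}
    "idle":  {"init": "ready"},
    "ready": {"send": "busy", "reset": "idle"},
    "busy":  {"ack": "ready"},
}
TEST = {                         # a test: initialise, send, expect an ack
    "t0": {"init": "t1"},
    "t1": {"send": "t2"},
    "t2": {"ack": "t3"},
    "t3": {"success": "t3"},     # 'success' is local to the test process
}

def test_passes(system, test, start=("idle", "t0")):
    shared = {a for trans in system.values() for a in trans}
    seen, queue = {start}, deque([start])
    while queue:
        s, t = queue.popleft()
        for action, t_next in test.get(t, {}).items():
            if action == "success":
                return True                     # success action reachable
            if action in shared:
                if action in system.get(s, {}):
                    nxt = (system[s][action], t_next)   # synchronised move
                else:
                    continue                    # test blocked here: dead branch
            else:
                nxt = (s, t_next)               # test-local action
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(test_passes(SYSTEM, TEST))   # -> True: init, send, ack, then success
```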

Logical consistency is one of the most important criteria for the proper operation of a system. For this reason the protocol should be free of deadlocks, livelocks and improper termination. The successful completion of a test run is indicated by reaching termination.

Using LOTOS to specify the user's requirements in a formalised way and to verify system properties has been very stimulating. However, two main limitations have emerged. The first is caused by the absence of a time-dependency specification mechanism, which could be essential for the safety of the system behaviour. The second is that the verification tools, which evaluate equivalences to prove that the implementation satisfies the specification, ignore the data part of the specification. Since data can affect the flow of control within a process and its behaviour, ignoring the data constructs of a specification may have serious implications for any correctness result. Extended LOTOS (E-LOTOS), which is under development, seems able to solve some of the above-mentioned problems.


CMOS IC Latch-up Screening Test Technique

A.Y.Nikiforov, A.I.Chumakov, V.S.Figurov, P.K.Skorobogatov and V.A.Telets

Abstract

Bulk CMOS ICs operating in the LHC environment are sensitive to the latch-up effect due to electromagnetic and radiation influences. In this work a latch-up inspection and screening technique is developed and tested. The approach is based on triggering latch-up by laser pulses; in this case the latch-up parameters are in good agreement with those in the real LHC environment.

The latch-up 2D-numerical simulation together with experimental tests was performed on commercial and radiation hard bulk CMOS IC as well as on specialized test structures.

Latch-up general sensitivity and margins are obtained from CMOS IC laser tests within the power supply voltage and temperature range. The developed latch-up screening technique gave the possibility to detect and investigate latch-up windows effect in the high temperature range.

Summary

CMOS ICs are widely used in LHC electronics. However, most bulk CMOS ICs operating in a harsh environment are vulnerable to the latch-up effect, which causes a catastrophic increase of the power supply current and loss of operability under radiation or electromagnetic influences due to parasitic structure triggering. These internal parasitic structures are inherent to bulk CMOS ICs, and the latch-up sensitivity parameters vary considerably not only from one CMOS IC type to another but also from one chip to another within the same type. It should be noted that latch-up may take place under various radiation (ionising radiation, heavy charged particles) as well as non-radiation (voltage overstress, dU/dt, heating) influences. Therefore a general CMOS IC latch-up assurance, inspection and screening technique should be developed, which is the main issue of this paper.

2D numerical simulation of latch-up induced by various mechanisms, together with experimental tests, was performed on commercial and radiation-hard bulk CMOS ICs as well as on specialized test structures.

It was found that the latch-up steady-state parameters (holding current and voltage) of any parasitic structure are practically independent of the particular influence type. At the same time, the equivalence of latch-up triggering conditions under various influences can be established on the basis of an equivalent triggering current. The particular mix of parasitic structures, and thus the IC's general latch-up sensitivity, does depend on the influence type.

As a result of this research, a portable solid-state pulsed laser simulator was chosen as the best CMOS IC latch-up triggering source. The technical requirements for the laser source were worked out and the appropriate experimental set-up was developed. It was demonstrated that laser-induced latch-up parameters correspond well with those under the real LHC influences.

A technique for determining the general latch-up sensitivity and margins was also developed, based on CMOS IC laser tests over the power supply voltage (2 to 15 V) and temperature (10 to 100 °C) ranges. It was found that the latch-up margins shrink as the power supply voltage and the operating temperature increase.

As a rule, the screening procedure is based on estimating the latch-up threshold and comparing it with the maximum specified influence level. But in some CMOS ICs the required level may fall within local latch-up-free ranges inside the permanent latch-up range, which are referred to as latch-up windows. The developed latch-up screening technique made it possible to detect and investigate the latch-up window effect, which is even more probable in the high-temperature range. In our tests some CMOS ICs were found to be latch-up free at room temperature, while either a permanent latch-up range or latch-up windows were detected above 40 °C.

The latch-up window effect corresponds to latch-up triggering in a local structure and its switching off due to the power supply voltage drop. At room temperature the holding voltage may be above the power supply voltage, and there is no latch-up. At higher temperature the holding voltage falls below the power supply value and latch-up may take place. But if the holding voltage is only slightly below the power supply voltage, then at some radiation level the increased supply voltage drop due to the total current flow is sufficient to switch off the latch-up. A single latch-up window corresponds to one SCR, while multiple windows are due to several independent parasitic structures with different latch-up thresholds within the same IC. The experimental determination of the latch-up 3D map required hundreds of laser pulses per sample.
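The window mechanism can be made concrete with a small numerical sketch: latch-up is sustained only while the voltage actually reaching the chip (the supply voltage minus the drop caused by the total radiation-induced current across a series resistance) stays above the holding voltage. All numbers below are invented to reproduce the qualitative behaviour; they are not measured values from this work.

```python
# Qualitative illustration of a latch-up window: the parasitic SCR triggers
# above some influence level and stays on only while
# V_supply - I_total(dose_rate) * R_series exceeds the holding voltage.
# Holding voltage, series resistance and the current law are invented values.

V_SUPPLY = 5.0          # V
R_SERIES = 50.0         # Ohm, assumed supply wiring / distribution resistance

def photo_current(dose_rate):
    """Assumed linear radiation-induced supply current, A per (rad/s)."""
    return 1e-10 * dose_rate

def latch_up_sustained(dose_rate, trigger=5e5, v_hold=4.8):
    """True if the parasitic SCR both triggers and can be sustained."""
    if dose_rate < trigger:
        return False                                   # below triggering level
    v_chip = V_SUPPLY - photo_current(dose_rate) * R_SERIES
    return v_chip > v_hold                             # sustained only if V_chip > V_hold

for rate in (1e5, 1e6, 1e7, 5e7):
    print(f"{rate:.0e} rad(Si)/s -> latch-up sustained: {latch_up_sustained(rate)}")
# Output pattern: off, on, on, off. A second independent structure with a higher
# trigger level would bring latch-up back at still higher levels, turning the
# switched-off region into a latch-up 'window'.
```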


Comparative Transient Simulation and Radiation Tests of Multichip Diode Bridge Circuits

A.Y.Nikiforov, P.K.Skorobogatov, A.V.Artamonov, V.A.Telets, V.S.Figurov, S.A.Polevich

Abstract

Transient radiation effects in a multichip diode bridge assembly are investigated. Good agreement is obtained between the comparative laser and two-dimensional software simulation results for a single diode, in spite of substantial metal shadowing. The PSPICE simulation of the diode bridge assembly transient response is found to be in good correspondence with flash radiation test results.

Summary

Transient radiation effects in LHC front-end and power-supply electronics components can be estimated and predicted using software simulation (1), laser simulation (2) or radiation tests (3).

The basic particularity of software simulation is that it operates only with a virtual device, so it can be used at the early stage of the design process, when no real device under test exists yet. The main problem of software simulation is whether the virtual device is adequate to the real one, which requires both model adequacy and identification of the model parameters. The laser simulation method is based on the ability of the laser beam to ionise semiconductor areas of the device under test. Laser simulation tests may be considered the basic facility for transient-radiation-effect research and development for LHC applications, especially for complex and large-scale ICs. But the following basic limitations of laser tests should be noted: (1) the need for a transparent package; (2) single-chip tests only; and (3) surface metal shadowing.

Therefore it is the combination of theoretical and experimental simulation that gives the best and most realistic results. This work demonstrates the technique and results of the transient-response investigation of a multichip diode bridge assembly. The research procedure included: (1) two-dimensional software simulation and (2) laser simulation tests of a single diode; (3) PSPICE simulation and (4) final flash radiation tests of the diode bridge assembly.

To perform the transient analysis of the semiconductor structure, an original solver of the two-dimensional fundamental system of equations was used, taking into account ionisation, optical shadowing and absorption, and the dependences of carrier lifetime and mobility on the excess-carrier and doping-impurity concentrations. The calculated dependence of the transient photocurrent amplitude on dose rate is practically linear, and the calculated dose-rate sensitivity factor is 1.5E-11 A*s/rad(Si). It was demonstrated that the transient photocurrent amplitude is relatively insensitive to the diode reverse-bias voltage and to gold doping.

The laser simulation results were found to be in excellent agreement with the theoretical results: the measured dependence of the diode transient current on dose rate is also linear, with the same dose-rate sensitivity factor. This proves the adequacy of the laser simulation in spite of the fact that the p-n junction is completely covered with metal.

The multichip diode bridge assembly investigation was also performed in two steps: (1) PSPICE software simulation and (2) flash radiation tests.

It should be noted that it is quite difficult to measure the transient current of a single diode within the bridge circuit. Thus the correspondence between the transient current of a single diode and the total transient current of the diode bridge assembly was simulated with PSPICE. The comparative simulation results demonstrated that the total transient current amplitude of the bridge circuit is twice the single-diode transient current amplitude. According to the simplified analysis, the forward-biased diodes do not influence the assembly response. Thus the final equivalent circuit in all cases consists of two equal diode current generators connected in parallel, which form the total transient current of the diode assembly. The simulation results were found to be in good correspondence with the flash radiation test results.
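A minimal numerical sketch of the reported result: the single-diode photocurrent is taken as linear in dose rate with the quoted sensitivity factor, and the bridge assembly response is modelled as two equal current generators in parallel, i.e. twice the single-diode current. The dose-rate value used in the example is an assumption.

```python
# Single-diode transient photocurrent modelled as I = K * dose_rate with the
# dose-rate sensitivity factor quoted in the text; the bridge assembly response
# is two equal generators in parallel, i.e. twice the single-diode current.
# The example dose rate is an assumed value, not one from the paper.

K_DIODE = 1.5e-11          # A*s/rad(Si), calculated/measured sensitivity factor

def single_diode_current(dose_rate_rad_per_s):
    return K_DIODE * dose_rate_rad_per_s

def bridge_current(dose_rate_rad_per_s):
    return 2.0 * single_diode_current(dose_rate_rad_per_s)   # two parallel generators

rate = 1.0e9               # rad(Si)/s, assumed flash level for illustration
print(single_diode_current(rate))   # -> 0.015 A per diode
print(bridge_current(rate))         # -> 0.030 A for the assembly
```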


Radiation Hardness Assurance of Small-scale Production ASIC Based on Simulation Inspection and Screening Tests

A.Y.Nikiforov, V.A.Telets, A.I.Chumakov, P.K.Skorobogatov, A.I.Sheremetyev

Abstract

In this paper the radiation hardness of small-scale-production ASICs for LHC applications is analysed, and an assurance concept based on the use of simulation inspection and screening tests is developed.

A radiation hardness assurance procedure is developed, including determination of the dominant radiation effects, process and structure inspection, qualification of the particular ASIC design project, and chip hardness screening.

Specialized test units are designed in Bulk CMOS and CMOS/SOS processes and their simulation test results are obtained and analysed.

Summary

The development of specialized installations for nuclear physics applications, including the LHC experiments, as a rule requires a large number of different IC types in small lots. Custom and semicustom small-scale-production application-specific ICs (ASICs) are therefore widely used here. Since these ICs are intended for the LHC radiation environment, their radiation hardness should be taken into account.

When specialized radiation-hard parts are used, one may trust the manufacturer's hardness assurance system. However, (1) radiation-hard parts are much more expensive than commercial ones, while their functional level and performance characteristics may not meet the requirements, and (2) their hardness parameters are specified and qualified mainly against military and space radiation environment requirements, which are not the same as in nuclear physics.

The total project price would be decreased substantially if radiation-tolerant or even commercial parts were used instead of radiation-hard ones. But in this case the customer has to implement an individual IC radiation hardness assurance system.

In this paper an ASIC radiation hardness assurance concept based on simulation inspection and screening tests is developed and introduced. In previous work [1] the necessary and sufficient set of simulators, together with the data acquisition system, was developed. Here we have adapted the simulation test system for small-scale-production ASIC radiation hardness inspection and screening.

The radiation hardness assurance procedure is developed as follows. Its first step deals with determining the dominant radiation effects for the particular IC in the specified nuclear-experiment environment. It was found that CMOS ICs within LHC electronics are sensitive to total dose and latch-up effects, while bipolar ICs are sensitive to total dose and structural damage effects. Most parts are also usually rather sensitive to voltage overstress and single event effects. According to the determined dominant radiation effects, the appropriate simulator sources (X-ray, laser, isotopic, etc.) are chosen.

At the second step the ASIC's process and structure are examined, and their potential radiation-sensitivity features are evaluated against the chosen dominant radiation effects. The best results are obtained using a specially designed test unit, which is manufactured within the same lot as the ASIC. The technical requirements for the test unit's structure and parameters are worked out and discussed. Descriptions of test units for bulk CMOS and CMOS/SOS processes, together with their simulation test results, are presented and analysed in detail.

The third step of the ASIC radiation hardness assurance procedure deals with inspection and qualification of the particular ASIC design project. The best device under test for checking the design project is the ASIC itself. Since the particular configuration of an ASIC does influence the radiation hardness parameters, it is necessary to examine each of the originally designed ASICs.

The last step of the ASIC assurance procedure deals with chip hardness screening. It should be noted that 100% inspection is not used, because the radiation-induced damage may negatively influence ASIC reliability even after relaxation or annealing of the radiation effects. Therefore only a limited number of ASIC samples are usually taken and tested from each wafer or even from the whole lot. The sample size is determined from the ASIC radiation hardness margin and the parameter variation.

Literature:

A.I.Chumakov, D.V.Gromov, O.A.Kalashnikov, A.Y.Nikiforov, A.I. Sheremetyev, P.K.Skorobogatov, V.A.Telets and A.V.Yanenko "Specialized Simulation Test System for Microelectronic Devices Radiation Hardness Investigation and Failure Prediction", Proceedings of the Second Workshop on Electronics for LHC Experiments, Sept. 23-27, 1996, Balatonfured, Hungary, pp.428-432


Technique and Results of ADC/DAC Radiation Hardness Simulation Tests

A.S.Artamonov, A.A.Demidov, O.A.Kalashnikov, A.Y.Nikiforov, S.A.Polevich, V.A.Telets

Abstract

The approach, technique and equipment aimed at ADC and DAC IC radiation hardness inspection and screening tests are presented. They are based mainly on the use of simulator sources and universal testing tools, and are adapted to the nuclear physics environment.

X-ray simulation tests of typical bulk CMOS as well as CMOS/SOS converters have been carried out to demonstrate the efficiency of the developed technique and instruments. The measured results on conversion-parameter degradation are presented and analysed.

The developed technique and instruments can be recommended for radiation tests of ADC and DAC ICs within LHC electronics design.

Summary

Analog-to-digital and digital-to-analog converter (ADC/DAC) integrated circuits (ICs) are widely used in nuclear physics data acquisition systems, so the radiation hardness and behaviour of such ICs in the LHC environment should be estimated. Compared to digital ICs, even a minor radiation shift of ADC/DAC internal element parameters (comparator threshold voltages, internal reference voltage, etc.) results in substantial degradation of the ADC/DAC transfer function. Besides, the concept of a "functional test" for ADC/DAC is also specific: their operability can be estimated only by accurate measurement of the conversion parameters under radiation influence. Therefore ADC/DAC radiation testing requires both a specific technique and specific instruments, which are the main issues of this paper.
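As an example of the kind of conversion-parameter measurement meant here, the sketch below computes differential and integral non-linearity from measured ADC code-transition levels; comparing such figures before and after irradiation quantifies the transfer-function degradation. The transition voltages in the example are invented.

```python
# DNL and INL of an ADC computed from measured code-transition levels: an ideal
# step is one LSB, DNL[k] is the deviation of each measured step from 1 LSB and
# INL[k] is its running sum. Comparing these before/after irradiation quantifies
# the transfer-function degradation. The transition voltages below are invented.

def dnl_inl(transitions, v_lsb):
    """transitions[k] = input voltage at which the output code changes k -> k+1."""
    dnl = [(transitions[k + 1] - transitions[k]) / v_lsb - 1.0
           for k in range(len(transitions) - 1)]
    inl, running = [], 0.0
    for step in dnl:
        running += step
        inl.append(running)
    return dnl, inl

# 3-bit example, 1 V full scale -> LSB = 0.125 V; one widened and one narrowed code.
measured = [0.125, 0.250, 0.390, 0.500, 0.625, 0.735, 0.875]
dnl, inl = dnl_inl(measured, v_lsb=0.125)
print([round(x, 2) for x in dnl])   # -> [0.0, 0.12, -0.12, 0.0, -0.12, 0.12]
print([round(x, 2) for x in inl])   # -> [0.0, 0.12, 0.0, 0.0, -0.12, 0.0]
```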

Rather long test durations and remote measurements are inherent to tests at conventional radiation installations (nuclear reactors, accelerators, etc.) in general. But in the specific case of ADC/DAC tests there are particular difficulties, both with the limited amount of testing (due to the dependence of the radiation response on the mode of operation) and with remote IC control and measurement (because of substantial signal distortion).

In our opinion the best way to achieve the necessary quality and quantity of ADC/DAC test results is to use mainly simulation sources and techniques. The simulation test set-up is easily adaptable to various ADC/DAC test configurations, which gives us the possibility to investigate the IC radiation behaviour in detail and to identify the most radiation-sensitive parameters and operation modes. These simulation test data give a lot of useful information and make it possible to minimise the amount of radiation testing with the same or even improved test quality.

According to the analysis carried out, the ADC/DAC radiation response includes: (1) permanent degradation of the conversion parameters due to total dose effects and (2) output code upsets (ADC) or output voltage spikes (DAC), together with latch-up, due to transient effects. The set of radiation-sensitive parameters together with their measurement procedures is considered.
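
The text only states that the measurement procedures are considered; as a generic illustration (not the authors' procedure), the sketch below computes two standard conversion parameters, DNL and INL, from a slow-ramp code histogram and compares their worst-case values before and after irradiation. The histogram-based method and all names are assumptions.

```python
# Generic sketch (assumption, not the authors' procedure): differential and
# integral non-linearity of an ADC from a slow-ramp code histogram, the kind
# of conversion parameter whose pre/post-irradiation change is tracked.
import numpy as np

def dnl_inl(code_histogram):
    """code_histogram[i] = number of ramp samples that produced output code i."""
    h = np.asarray(code_histogram, dtype=float)[1:-1]   # drop saturated end codes
    ideal = h.sum() / len(h)                            # counts per code for an ideal ADC
    dnl = h / ideal - 1.0                               # DNL in LSB
    inl = np.cumsum(dnl)                                # INL in LSB
    return dnl, inl

def worst_case_shift(hist_pre, hist_post):
    """Degradation = change of worst-case |DNL| and |INL| between two doses."""
    dnl0, inl0 = dnl_inl(hist_pre)
    dnl1, inl1 = dnl_inl(hist_post)
    return (np.abs(dnl1).max() - np.abs(dnl0).max(),
            np.abs(inl1).max() - np.abs(inl0).max())
```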

The ADC/DAC test system has been developed on the basis of the previously presented Specialized Simulation Test System [1]. The experimental set-up includes a set of simulators, specialized as well as universal test units, a device-under-test board and a PC. In spite of the variety of ICs and experiments, the basic concepts of the construction, hardware and software of these units proved to be similar. The system requirements are discussed and the specific test units are developed.

The test system is easily adapted to perform convenient and safe tests of various converter ICs within a wide range of operation modes and controlled parameters, and to obtain detailed test results over wide total dose and dose rate ranges.

Typical bulk CMOS and CMOS/SOS converters were chosen for total dose tests, and a large number of X-ray simulation tests were carried out to demonstrate the efficiency of the developed technique and instruments. The measured degradation of the conversion parameters is presented and analysed. The simulation test results were checked and calibrated against radiation test results obtained with a Co-60 source; satisfactory agreement between simulation and radiation test results was obtained.

It is concluded that ADC/DAC simulation tests are a real alternative to radiation installation tests for ADC/DAC radiation inspection and screening during the design of electronic systems for LHC applications. The developed simulation test technique and instruments can be recommended as a rather universal and convenient practical tool.

Literature:

A.I.Chumakov, D.V.Gromov, O.A.Kalashnikov, A.Y.Nikiforov, A.I. Sheremetyev, P.K.Skorobogatov, V.A.Telets and A.V.Yanenko "Specialized Simulation Test System for Microelectronic Devices Radiation Hardness Investigation and Failure Prediction", Proceedings of the Second Workshop on Electronics for LHC Experiments, Sept. 23-27, 1996, Balatonfured, Hungary, pp.428-432


Use of Superimposed Codes for Coding Analog Signals

I.N. Alexandrov, N.M. Nikityuk
Joint Institute for Nuclear Research
E-mail: nikityuk@uct167.jinr.dubna.su

Abstract

A short review of the known coding schemes where superimposed codes are used is given. A peculiarity of these codes is that they can be used both for coding light signals and in hodoscopes (MPC, MSG, RPC and semiconductor detectors) where weak electrical signals are registered. It is important that light mixers (PMs) and electronic amplifier-mixers can be used for parallel compression of data. Special characteristics of codes useful for hodoscopes with light coding, and methods for designing superimposed codes having an optimal compression coefficient for multiplicity t = 1 with cluster, t = 2 and t > 2, are considered. The use of compressed data, which can be decoded by means of PROMs, leads to an essential decrease of the number of data channels and, accordingly, of active elements. We show in what way the superimposed codes can be used for fast event selection. Several coding matrices having optimal compression coefficients for multiplicity t > 2 are suggested. Iterated superimposed codes for a large number of registration channels n > 1000 and multiplicity t > 2 are investigated and suggested. The suggested method of data compression can drastically change the approach to fast identification of muon tracks in the ATLAS installation. ALTERA technology is used for simulation of the schemes, having 28 inputs and 8 outputs, and the results of the simulation are given.

The increase of the information flux from multichannel detectors of nuclear particles has generated a need to study the problem of optimal data coding and processing methods. Optical fibers and scintillating optical fibers are widely used for the detection, transfer and encoding of light signals produced by charged particle interactions with the detector material. In several physics experiments, when the upper limit on the number of signals recorded by the detector is known for certain, the number of data channels and, accordingly, of active elements (PMs, for example) can be decreased considerably by using compressed data which can be decoded by means of PROMs. The information can be encoded if the signals are divided by means of light pipes and connected to the PM inputs according to a particular rule. It is known from coding theory that, in order to create parallel encoders, modulo-2 adders or NAND elements are mainly used, and logic signals must be supplied to the inputs of the adders for their correct operation. The question therefore arises: can an encoder be constructed so that it operates when a small electrical or light signal is supplied to its inputs? Some authors use optimal superimposed Hamming or Gray codes for multiplicity t = 1; the syndrome of such codes carries the information on the position of only one fired source. We suggest a new type of code which can be used for optimal coding of analog signals for t > 1 by means of the theory of superimposed codes. A binary superimposed code consists of a set of code words whose digit-by-digit Boolean sums (1 + 1 = 1) enjoy a prescribed level of distinguishability. We seek a large number n of code words such that, for a given small positive integer m, every sum of up to m different code words is distinct from every sum of m or fewer other code words. We consider three cases, which are rigorously proved:
1. development of an optimum coding matrix for registration of one particle (multiplicity t = 1) with a cluster of some size b;
2. multiplicity t = 2 with cluster;
3. multiplicity t > 2 with cluster.

In accordance with the syndrome coding method, before the determination of the coordinate x and the multiplicity t, the information from a hodoscope plane with n registration channels is compressed in parallel to M channels with the aid of a coding matrix, where each column corresponds to a registration channel and each row corresponds to a mixer. As an example, we consider the following coding matrix, having few ones, with n = C(8,2) + 8 = 28 + 8 = 36 columns and M = 8 rows.

[Coding matrix with M = 8 rows (mixer outputs) and n = 36 columns (registration-channel inputs): 28 columns containing all C(8,2) pairs of ones plus the 8 columns of the unit matrix I8, so that each column holds one or two ones.]

In general there are n = C(M,2) + M different columns and M different rows, where the last M columns form the unit matrix IM. This means that the compression coefficient kc = n/M grows with n. Each column contains only one or two ones (the signal fan-out equals 2). The positions of the ones in each row determine which and how many signals are connected to the corresponding mixer (the number of inputs of the mixer). Using such a coding matrix, one coordinate of a fired source with a cluster of size 1 ≤ b ≤ 7 can be selected. Let us consider some examples. A single particle is registered in the plane at the first position and there is no cluster; in this case the syndrome equals 10000001 (the first column of the coding matrix). If a double cluster is generated, the syndrome equals 11000001; for a triple cluster we get the syndrome 11100001, etc. For a large number of inputs (n > 1000) the use of iterated superimposed codes is suggested. A block coding method is suggested for a considerable decrease of the compression coefficient: for this purpose it is necessary to repeat the coding matrix several times and to use one extra bit for coding each block.
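
A minimal sketch of the syndrome coding and PROM decoding idea is given below, assuming a column ordering (all C(8,2) pair columns followed by the unit-matrix columns) that need not coincide with the matrix used in the paper; it is meant only to show how a syndrome is formed as a Boolean sum of columns and decoded by table look-up for multiplicity t = 1 with a cluster.

```python
# Sketch with an assumed column ordering: M = 8 mixers, 28 pair columns
# (all 2-element subsets of rows) followed by the 8 unit-matrix columns.
from itertools import combinations

M = 8
columns = [frozenset(p) for p in combinations(range(M), 2)] + \
          [frozenset([r]) for r in range(M)]          # n = 36 columns

def syndrome(fired_channels):
    """Boolean (wired-OR) sum of the columns of the fired channels."""
    rows = set().union(*(columns[ch] for ch in fired_channels))
    return ''.join('1' if r in rows else '0' for r in range(M))

# PROM-like decoder for multiplicity t = 1 with a cluster of size 1 <= b <= 7:
# precompute the syndrome of every contiguous group of up to 7 channels.
prom = {}
for start in range(len(columns)):
    for b in range(1, 8):
        if start + b <= len(columns):
            prom.setdefault(syndrome(range(start, start + b)), (start, b))

s = syndrome([0, 1, 2])              # a triple cluster starting at channel 0
print(s, '->', prom[s])              # decoded (first channel, cluster size)
```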

The rules for constructing coding matrices for t = 2 and t > 2 with clusters are given. The application of the proposed coding method could provide a new way of constructing muon tracks at the first level of the ATLAS trigger system. Thus, there is no special need to use coincidence matrices and priority encoders; quite the reverse, it is enough to use a coding scheme consisting of a small number of amplifier-mixers and PROMs. Results of modelling a coding matrix with 28 inputs using ALTERA technology are given.


SPATIAL AND CHARGE RESOLUTIONS IN FUZZY PROCESSING OF ALICE SDDs' SIGNALS.

C.PETTA - Dipartimento di Fisica dell'Universita' e Sezione INFN, Catania, Italy
G.V.RUSSO - Dipartimento di Fisica dell'Universita' e Sezione INFN, Catania, Italy
M.RUSSO - Istituto di Ingegneria El. e Tel. e Sezione INFN, Catania, Italy

ABSTRACT

Two layers of the Inner Tracking System (ITS) in ALICE will be equipped with large-area Silicon Drift Detectors (SDDs). They allow single-track spatial measurements with precisions up to 15-20 microns in both coordinates and good energy resolution. An innovative readout system performing on-line pre-processing of the SDD data with Fuzzy Logic (FL) techniques has been proposed for ALICE. Here we show the preliminary results of the simulated response of the overall detector/readout chain to realistic SHAKER events. The required spatial precisions are maintained by the system, while the total amount of transferred information is reduced in comparison with traditional readout and analysis.

SUMMARY

The proposed fuzzy SDD readout foresees an on-line reconstruction of the crucial information generated by a particle in the detector. The immediate advantage is a reduction by more than a factor of 4 of the amount of transferred data in comparison with the more traditional zero suppression. This goal is reached by suppressing redundancies without any loss of the significant basic information. To demonstrate this last assertion, a large amount of simulation work has been carried out by the authors, and preliminary results on the expected resolutions with fuzzy data processing have been established.

In the ALICE ITS the SDDs will be arranged in two cylindrical layers of radius 15.1 cm (SDD-1 layer) and 24.0 cm (SDD-2 layer). The layers consist of 14 ladders of 5 detectors (SDD-1) and 22 ladders of 8 detectors (SDD-2). In our simulation we took into account only the SDD-1 layer because of its higher number of particle hits.

One simulated ALICE event is a central Pb-Pb collision generated by the SHAKER code with a charged-particle rapidity density of dN/dy = 8000 in the pseudorapidity region -1.3 < η < 1.3 (30° < θ < 150°). The generated information was used for a GEANT-based simulation of the SDDs. The total data output of each event is the collection of all charge clouds, i.e. the distributions among the anodes of the charge produced by every particle, sampled in 1 ns time steps. At present the simulations include neither noise nor the electronics chain.

A simple clustering algorithm was implemented to recognize as a 'single cluster' a group of contiguous charge values exceeding some threshold and generated by a single charged particle. We obtain about 12000 single clusters out of ca. 15000 total clusters per event.
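
A toy version of such a clustering step might look as follows, assuming the charge map is stored as a 2-D array (anodes x time samples) and using 4-connectivity; the threshold and array sizes are arbitrary.

```python
# Toy clustering sketch (assumptions: 2-D charge map, 4-connectivity).
import numpy as np

def find_clusters(charge, threshold):
    """Group contiguous above-threshold samples into clusters (flood fill)."""
    above = charge > threshold
    seen = np.zeros_like(above, dtype=bool)
    clusters = []
    for a, t in zip(*np.nonzero(above)):
        if seen[a, t]:
            continue
        stack, members = [(a, t)], []
        seen[a, t] = True
        while stack:
            i, j = stack.pop()
            members.append((i, j))
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if (0 <= ni < above.shape[0] and 0 <= nj < above.shape[1]
                        and above[ni, nj] and not seen[ni, nj]):
                    seen[ni, nj] = True
                    stack.append((ni, nj))
        clusters.append(members)
    return clusters

# Example on a small fake charge map (arbitrary units).
rng = np.random.default_rng(0)
charge = rng.random((8, 20))
print(len(find_clusters(charge, 0.9)), "clusters")
```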

We used data from the central detector to build the learning/testing examples for the fuzzy rule generator (GEFREX, Genetic Fuzzy Rules Extractor). Three different learning runs have been performed so far: the reconstruction of the beam-direction (Z) coordinate, of the drift-direction (X) coordinate and of the deposited charge (Q).

The input data for the Z-reconstruction are 5 samplings around the anode corresponding to the maximum of the cluster, all taken at the same time, within 25 ns of the maximum of the cloud. For the X-reconstruction the input data are 5 charge values sampled every 25 ns around the maximum of the cloud, taken on the anode with the maximum signal. For the Q-reconstruction, 9 charge values forming a cross in the anode and time directions, sampled every 25 ns around the maximum, are used to build the learning and testing examples. In all three cases we added one more input to the fuzzy processor: a gross information related to the spread of the charge distribution.
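
The sketch below illustrates how such input vectors could be assembled from a cluster stored as a 2-D charge array (anodes x 25 ns time bins). The exact definition of the 'spread' input is not given in the text, so the standard deviation used here is an assumption, as are the function and variable names.

```python
# Sketch of the fuzzy-processor input vectors (assumed layout and names).
import numpy as np

def fuzzy_inputs(cluster):
    """cluster: 2-D array of charges, shape (n_anodes, n_time_bins)."""
    a_max, t_max = np.unravel_index(np.argmax(cluster), cluster.shape)
    pad = np.pad(cluster, 2)                      # guard band for clusters near the edge
    a, t = a_max + 2, t_max + 2
    z_in = pad[a - 2:a + 3, t]                    # 5 anode samples at the peak time (Z)
    x_in = pad[a, t - 2:t + 3]                    # 5 time samples at the peak anode (X)
    q_in = np.delete(np.concatenate([z_in, x_in]), 2)   # 9-sample cross, peak counted once
    spread = float(cluster.std())                 # gross 'spread' input (assumed definition)
    return (np.append(z_in, spread),              # 6 inputs for Z-reconstruction
            np.append(x_in, spread),              # 6 inputs for X-reconstruction
            np.append(q_in, spread))              # 10 inputs for Q-reconstruction
```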

The GEFREX results are encouraging: the obtained rule sets guarantee narrow and uniform error distributions over the whole X and Q ranges. In the Z-reconstruction we have obtained a precision of 12 µm in the absence of noise with 15 rules. A similar result was obtained in the X-reconstruction: a precision of 13 µm in the absence of noise with 12 rules. Finally, the Q-reconstruction has been obtained with 15 rules and allows a precision of about 10%.


TRANSIMPEDANCE AMPLIFIER FOR SDD IN ALICE ITS

N.RANDAZZO and G.V.RUSSO
Dipartimento di Fisica - Universita' di Catania and
INFN sezione di Catania

ABSTRACT:

We present a first prototype of a 16-channel transimpedance amplifier. The amplifier was designed taking into account the constraints coming from physics considerations for the two planes of Silicon Drift Detectors (SDD) of the Inner Tracking System (ITS) of the ALICE detector, to be used in a future experiment at the LHC. The amplifier was realized in 1.2 µm CMOS technology. Experimental measurements, in accordance with theoretical predictions and simulations, show an ENC of 340 e- with a transresistance of 150 kOhm and a bandwidth above 20 MHz at a power consumption of 0.88 mW/channel.

The ITS of ALICE consists of five planes of position-sensitive detectors; the 3rd and 4th planes are Silicon Drift Detectors. The dimensions of the active area are 44 mm in the direction perpendicular to the beam (X) and 35 mm in the beam direction (Z).

The charge is collected on the anodes implanted at the edge of the detector and shows a gaussian shape due to the diffusion process. Since the centre-of-mass method is used to compute the impact point of the particle in the beam direction, it is very important that the shape at the output of the preamplifier reproduces the charge shape from the detector as closely as possible. In this case transimpedance amplifiers are an attractive solution.
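
For illustration, a centre-of-mass estimate along the anode direction can be written as below; the anode pitch value is an assumed parameter, not a number quoted in the text.

```python
# Centre-of-mass estimate of the impact point along the anode (beam) direction
# from the charges collected on neighbouring anodes; anode_pitch_mm is assumed.
import numpy as np

def centroid_mm(anode_charges, anode_pitch_mm=0.294):
    q = np.asarray(anode_charges, dtype=float)
    positions = np.arange(len(q)) * anode_pitch_mm
    return float((q * positions).sum() / q.sum())

# Example: a gaussian-like cloud spread over a few anodes (arbitrary units).
print(centroid_mm([0.1, 0.8, 2.0, 0.9, 0.1]))
```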

In our SDD the typical drift velocity is 6 µm/ns. A gaussian charge shape with 5 ns < sigma < 30 ns then corresponds to particle impact points from Z = 1 mm to Z = 35 mm in the beam direction.

To process this kind of signal we need a transimpedance amplifier with about 20 MHz bandwidth. Moreover, the precision of the impact-point reconstruction depends on the noise: the requirement for the ALICE detector is ENC < 500 e- to obtain a spatial resolution < 20 µm. Another key point is the power dissipation, since the drift time, and hence the resolution of the SDD, strongly depends on the temperature. In agreement with the requirements of the cooling system we can dissipate no more than 1 mW/channel in the front-end.

We have designed a 16-channel prototype in 1.2 µm CMOS technology. The high-gain, high-frequency stage was realized as a folded cascode with a PMOS input transistor to minimize 1/f noise. The feedback was implemented with an active (PMOS) device to allow a very compact layout (100 µm x 100 µm), avoiding the use of passive devices such as capacitors and resistors, which are area-consuming.

The main characteristics of the chip are:

Parameter            Theoretical    Measured
Power dissipation    0.88 mW/ch     0.88 mW/ch
ENC                  300 e-         340 e-
Dynamic range        1-40 MIP       1-40 MIP

CONCLUSION:

Measurements and simulations are in agreement. The main characteristics of the preamplifier match well the requirements of the SDDs for the ALICE detector.


Water Cooled Electronics

G.Dumont, B.Righini

Abstract

Direct cooling of VMEbus electronics modules by the addition of individual coolers has been tested. Water cooling of the power supplies is in progress. Tests of these units in magnetic fields, as expected for the ATLAS experiment, are being prepared.


Soft Computing Techniques in Developing Smart Hw in HEPE

M.Russo and G.V.Russo
Istituto di Informatica e Telecomunicazioni
Facolta' d'Ingegneria
Universita' di Catania (ITALY)

ABSTRACT:

Fuzzy Logic (FL) is applied in several fields, and recently FL has been applied in High Energy Physics Experiments (HEPEs). The first HEPE studies of FL concerned the design and implementation of Fuzzy Processors (FPs). The main drawback of these studies was the complete lack of knowledge about the requirements of a Fuzzy System (FS) in HEPE, so the first FPs were very fast but too general-purpose. In this paper we describe GEFREX, the latest version of a Rule Generator (RG) able to extract FSs in a supervised manner. With GEFREX we are characterizing FSs in HEPE and refining previous FP architectures to obtain the speed required in HEPEs. Furthermore, we are defining the necessary prerequisites of a learning accelerator card able to perform on-line calibration.

EXTENDED ABSTRACT:

Fuzzy Logic (FL) has become a popular component of consumer products because it is able to solve difficult non-linear control problems, exhibits robust behaviour and provides a linguistic representation. Recently several applications have appeared in HEPEs [ALICE95]. The first HEPE studies concerned the feasibility of dedicated FPs [Book95, Wilf97a]; their main result was the extreme achievable speed of FPs: 20 ns per rule. Unfortunately, although the literature offers many FSs, the number of necessary rules explodes when the number of inputs increases. Consequently, there are currently several studies on automatic methods to extract very compact FSs.

In a first attempt the authors adopted FuGeNeSys [Wilf97b]. This tool made it possible to show that FL can work for our purpose [LHC96]. This first version gave good results, but the error distributions were not satisfactory. After FuGeNeSys the authors used GEFREX [Russo97]. The main characteristics of GEFREX are its generality and its capability of extracting very compact FSs. It works well for both approximation and classification tasks, it can identify significant features, and it generally needs fewer rules than other automatic methods found in the literature; the number of rules is always very low, independently of the number of inputs. GEFREX has a genetic nature: several possible coded solutions (individuals) evolve in parallel. This parallelism generally allows better error minima to be found than with similar approaches based on Neural Networks (NNs).

In this paper we will also show that, by using recent results obtained in the field of FPs, it is possible to design a fuzzy multiprocessor card which will significantly accelerate the learning process. It is easy to show that, by adopting any learning technique based exclusively on knowledge of the function to be optimized, such as the Downhill Simplex Method [Recipes], GAs [Fog95] or the Weight Perturbation Method (WPM) [Jab92], it is possible to design a system that can easily be extended, thus guaranteeing excellent versatility. In particular we wish to use this card with GEFREX. The main idea is the real-time adaptation of the FSs. First of all we will study the learning patterns and identify a set of patterns that allows good FSs to be obtained off-line, i.e. that allows good errors to be reached in all cases. We will then apply a clustering technique [Bis96] to extract only a small number of input patterns, and to each of these patterns we will associate a weight equal to the number of learning patterns closest to it. This will permit GEFREX, which can learn input patterns with weights, to learn in a very short time. Of course the clustering process introduces a quantization error, but some simulations have shown that a good compression of the input patterns is possible without a meaningful degradation of the error. We will place on the detector some charge injectors that will periodically test the performance of the detector itself. Owing to environmental changes, for example an increase of the temperature, the response of the detector will sometimes change, and we wish to re-calibrate our FSs to take these variations into account; in this way the use of a very sophisticated thermal control system can be avoided. A supervising system, located outside and holding an exact copy of the internal FSs, can examine the results provided by the detector in response to the injected charges. When it detects that the internal FSs are starting to produce inadequate results, it starts a new learning phase. This phase will be very short, since we will use the accelerator card and the P compressed patterns with their prefixed weights; furthermore, the change in the environment can be thought of as a slight displacement of the system from the acceptable error minimum, so it is very easy to find the new error minimum again.
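
A sketch of the pattern-compression step, assuming plain k-means as the clustering technique (the text does not specify which one) and leaving GEFREX itself out, could look as follows: each cluster centre becomes one weighted learning pattern, its weight being the number of original patterns assigned to it.

```python
# Pattern compression for weighted learning (assumed: plain k-means/Lloyd).
import numpy as np

def compress_patterns(patterns, n_centres=50, n_iter=20, seed=0):
    X = np.asarray(patterns, dtype=float)
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), n_centres, replace=False)]
    for _ in range(n_iter):                                   # Lloyd iterations
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_centres):
            if np.any(labels == k):
                centres[k] = X[labels == k].mean(axis=0)
    weights = np.bincount(labels, minlength=n_centres)        # patterns per centre
    return centres, weights                                    # fed to the weighted learner
```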

REFERENCES:

[ALICE95] N.Ahmad et al., "A Large Ion Collider Experiment", CERN, December 1995.
[Book95] M.Masetti, E.Gandolfi, A.Gabrielli, M.Cecchi, M.Russo, and I.D'Antone, "Design and Realisation of a 50 MFIPS Fuzzy Processor in 1.0 micron CMOS VLSI Technology", volume III of "Advances in Fuzzy Theory and Technology", pp. 327-340, Duke Univ., Durham, North Carolina 27708--0291, USA, 1995.
[Wilf97a] A.Gabrielli, E.Gandolfi, M.Masetti, G.V.Russo, D.Lo Presti, S.Panebianco, C.Petta, N.Randazzo, S.Reito, M.Russo, and C.Caligiore, "Design of a Family of VLSI High Speed Fuzzy Processors for Trigger Applications in HEPE", WILF, Bari, Italy, March 1997, invited talk.
[Wilf97b] M.Russo, "FuGeNeSys: Comparisons with Previous Works", WILF, Bari, Italy, March 1997.
[LHC96] G.V.Russo, D.Lo Presti, S.Panebianco, C.Petta, N.Randazzo, S.Reito, M.Russo, M.Masetti, E.Gandolfi, and A.Gabrielli, "Silicon Drift Detectors and Time Projection Chambers Readouts using Fast VLSI Dedicated Fuzzy Processors", II Workshop on Electronics for LHC Experiments, Sept. 1996.
[Recipes] W.H.Press and S.A.Teukolsky and W.T.Vetterling and B.P.Flannery, "Numerical Recipes in C, The Art of Scientific Computing", University Press, Cambridge, 1992.
[Jab92] M.Jabri and B.Flower, "Weight Perturbation: An Optimal Architecture and Learning Technique for Analog VLSI Feedforward and Recurrent Multilayer Networks", IEEE Trans. on Neural Networks, vol.3, no.1, pp.154-157, Jan 1992.
[Fog95] D.B.Fogel, "An Introduction to Simulated Evolutionary Optimization", IEEE Trans. on Neural Networks, vol.5, no.1, pp.3-14, Jan 1995.
[Bis96] C.M.Bishop, "Neural Networks for Pattern Recognition", Clarendon Press, Oxford, 1996
[Russo97] M.Russo, "GEFREX: A GEnetic Fuzzy Rule Extractor", subm. to IEEE Trans. on Industrial Electronics.


CMOS/SOS IC transient radiation response

P.K.Skorobogatov, A.Y.Nikiforov, I.V.Poljakov

Abstract

Two-dimensional numerical simulation of CMOS/SOS IC elements is performed and appropriate simplified analytical models are developed, taking into account transient radiation effects. The simulation results demonstrate the dependence of the transistor ionising current and recovery time on the floating body potential. The numerical calculations are in good agreement with experimental data obtained from optical simulation tests.

Summary

The primary and secondary radiation particles cause radiation-induced damage of LHC electronic components and silicon-based detector systems [1,2]. CMOS/SOS as well as CMOS/SOI ICs are widely used here due to their latch-up-free radiation response. In contrast to bulk CMOS ICs, the transient radiation behaviour of CMOS/SOS ICs can be described on the basis of their transistor models.

The usual CMOS/SOS radiation models predict a direct proportionality of the transistor transient drain current to the dose rate [3,4]. However, experimental results on test structures demonstrate a substantial non-linearity of this dependence in the low dose rate range.

In order to investigate this effect, numerical modelling was carried out. The device under simulation was an SOS test structure with a channel length of 4 µm and a silicon layer thickness of 0.6 µm. The dose rate behaviour was simulated using the specialized two-dimensional simulator "DIODE-2D" [5].

As a result, a region of sharply superlinear dependence was found in the dose rate range from 1E5 to 1E6 rad(Si)/s: while the dose rate increased by a factor of ten, the amplitude of the transient current increased by a factor of fifty. This was not a result of parasitic bipolar transistor action, because of the very low common-base current gain (about 0.005) of the tested SOS structure.
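
As a quick consistency check of the quoted superlinearity, the local power-law exponent implied by a fiftyfold current increase over a tenfold dose-rate increase is:

```python
# If I ~ (dose rate)^n locally, a factor 50 in current over a factor 10 in
# dose rate gives n = log(50)/log(10) ~ 1.7, i.e. well above the linear n = 1.
import math
n = math.log(50) / math.log(10)
print(f"local exponent n = {n:.2f}")
```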

The analysis demonstrated that the shift of the SOS structure floating body potential under dose rate is the main reason for the non-linearity. In the absence of radiation the floating body potential is slightly negative. Under low dose rate radiation the floating body potential rises, tending to open the source junction, which in turn causes the superlinear increase of the structure ionising current.

The floating body potential also influences the SOS structure recovery time after a dose rate pulse: the residual floating body charge after the source junction closes has a recovery time constant of about hundreds of microseconds.

The change of the floating body potential under background ionising radiation influences the structure's transient current. According to the calculation results, low dose rate background radiation can modify the transient pulse shape: under background radiation the increase of the floating body potential tends to open the source junction and the rise of the ionising current becomes faster.

An experimental procedure was developed in order to investigate the test structure. A light emitting diode was used as the source of both the main ionisation and the background one. An equivalent dose rate of up to 2.75E5 rad(Si)/s was achieved.

The test structure experimental results confirm the influence of background radiation on the ionising behaviour of CMOS/SOS structure elements. A (5 - 10) times increase of the ionising current was measured under background radiation, and it was found that the ionising current rise becomes sharper when background radiation is applied. This means an increased sensitivity of the SOS structure to short ionising pulses and charged particles in the presence of background radiation.

References

1. G.R.Stevenson, A.Fasso, A.Ferrari, P.R.Sala, The Radiation Field in and around Hadron Collider Detectors. - IEEE Trans. Nucl. Sc., 1992, NS-39, No 6, p.1712-1719.
2. An Experiment to Study CP Violation in the B System Using an Internal Target at the HERA Proton Ring. - Proposal, DESY-PRC, May 1994.
3. A.Y.Nikiforov, V.A.Telets, A.I.Chumakov "Radiation Effects in CMOS IC's". Moscow: Radio i svjaz, 1994, 164 p.
4. Phillips D.H. Silicon-Sapphire Device Photoconduction Prediction//IEEE Trans., 1974. NS-21, No.6, p.217-220.
5. DIODE-2D Simulator. Operator Manual. Moscow: SPELS. - 1996.


Electrical overstress hardness of electronic components

P.K.Skorobogatov, A.Y.Nikiforov, V.M.Barbashov

Abstract

A test method for IC electrical overstress (EOS) hardness estimation is introduced. It is based on the unification of the test conditions. The advantage of this method is the possibility of comparing different ICs from the point of view of their hardness to electrical interference during LHC experiments.

A specialized test system has been developed for the estimation of the hardness of various ICs to EOS, including transient as well as permanent effects. Experimental data for a CMOS RAM and a bipolar linear preamplifier are presented.

Summary

The electrical overstress (EOS) influence on ICs should be taken into account to provide their reliable operation in the LHC environment [1]. The most significant overstress may be induced in the connecting cables and power supply lines, causing the failure of input preamplifiers and the upset of memory ICs.

The analysis of physical effects in ICs under EOS influence demonstrated that transient and catastrophic device damage is related to the following parameters of the electrical transient [2]:

- voltage amplitude;

- current amplitude;

- energy value;

- voltage rise rate, and combinations thereof.

The output signal parameters of the EOS test generator must be unified so as to cause most of the transient and catastrophic events in modern ICs. This provides a way to compare the EOS hardness of various ICs.

A specialized test system was developed for IC EOS hardness estimation. The system includes a specialized pulse voltage generator (EMI-601), a buffer unit, a storage oscilloscope and units for IC electrical and functional tests. The EMI-601 has a double-exponential output pulse waveform with a 20..30 ns rise time and a decay regulated in the range from 0.1 to 10 µs. The pulse amplitude is regulated from 5 to 800 V and the output resistance equals 50 Ohm [3].
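
For illustration, a double-exponential waveform with parameters of this order can be sketched as below; the exact waveform definition of the EMI-601 and the conversion between time constants and the quoted rise time are assumptions.

```python
# Sketch of a double-exponential test waveform with EMI-601-like parameters
# (waveform definition and time-constant choices are assumptions).
import numpy as np

def double_exponential(t_ns, amplitude_v=400.0, tau_rise_ns=12.0, tau_decay_ns=1000.0):
    """v(t) = A*(exp(-t/tau_decay) - exp(-t/tau_rise)), normalised to peak A."""
    t = np.asarray(t_ns, dtype=float)
    shape = np.exp(-t / tau_decay_ns) - np.exp(-t / tau_rise_ns)
    return amplitude_v * shape / shape.max()

t = np.linspace(0.0, 5000.0, 5001)          # 0-5 us in 1 ns steps
v = double_exponential(t)                    # ~25 ns 10-90% rise, ~1 us decay
```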

The IC under test is connected through the buffer unit to the storage oscilloscope and to the electrical and functional control modules. EOS hardness levels may be defined separately for the inputs, outputs and supply pins of an IC. The EOS pulses were applied between ground and the appropriate pin.

For LHC applications it is of interest to define the EOS hardness of bipolar preamplifier inputs and of CMOS RAM supply pins.

The bipolar preamplifier is a charge-sensitive amplifier with current feedback. It was found that the hardness level of the bipolar preamplifier inputs is more than 800 V at a pulse width of 0.1 µs and in the range from 100 V to 120 V at a pulse width of 10 µs.

The tested CMOS RAM is a conventional bulk CMOS static RAM of 2Kx8 organization. The behaviour of the RAM under supply-pin EOS is rather complex. In the range from 32 to 42 V only stored-information upsets occurred. In the range from 42 to 800 V for a pulse width of 0.1 µs (42 to 240 V for 1 µs and 42 to 100 V for 10 µs) non-destructive latch-up took place. A further increase of the pulse amplitude leads to catastrophic failure (malfunction) of the CMOS IC.

References

1. An Experiment to Study CP Violation in the B System Using an Internal Target at the HERA Proton Ring. - Proposal, DESY-PRC, May 1994.
2. L.W.Ricketts, J.E.Bridges, J.Miletta "EMP Radiation and Protective techniques", John Wiley&Sons, 1976. -P.328.
3. EMI-601 Electrical Simulator. Technical Manual. SPELS, 1995.


The LeCroy MHV100, a New CMOS Power Supply Controller for Distributed High Voltage Systems.

Richard Sumner and George Blanar, LeCroy Corporation

ABSTRACT

The features and performance of a CMOS integrated circuit controller for high voltage systems are described. This IC provides all digital and low voltage analog functions including amplifiers, DACs and an ADC, for a complete switching power supply. As many as 256 devices can be connected to a host computer system with a simple serial cable which provides digital communication and low voltage power. This new CMOS controller allows high voltage to be generated where it is needed (in a tube base, for example), without sacrificing the features of a conventional rack-mounted system.

This new controller chip joins other available LeCroy integrated circuits in providing low cost, high performance, commercially supported components for use in research instruments designed and built by laboratories and university groups as well as in commercial products.

SUMMARY

This new CMOS integrated circuit was designed for an experiment requiring very low power, very reliable and very lightweight high voltage channels for several hundred photomultiplier tubes. The prototypes consume less than 30 mW and are now being tested. Production devices will be available within a few months.

All of the low voltage components needed in a full featured high voltage channel are present in the chip. This includes a 14 bit DAC for the high voltage, two 6 bit DACs for voltage and current limits, and an auxiliary 8 bit DAC which provides an analog output for use off chip.

The ADC is adjustable (software selected) up to 16 bits resolution. The ADC input is via a multiplexer which allows measurements of the DACs, the actual high voltage and current, a chip temperature sensor and an auxiliary analog input.

Logic outputs drive the external high voltage switching transistors, with a choice of pulse-width-modulated control or linear regulator and push-pull switching (or a combination). There is an internal bandgap reference, trimmed to better than 0.5%, and two uncommitted op-amps (in addition to the main error amplifier).

The communication and control is entirely digital, with a simple serial protocol. Registers in the chip allow setting the DACs and configuration of the chip options. An 8 bit chip address (pin selected) allows a serial bus configuration to simplify cabling.

The polarity, high voltage and current capability will of course depend on the external high voltage components. This IC is designed to be used in a variety of high voltage generating topologies and will satisfy the requirements of nearly all high voltage applications in high energy physics experiments. By generating the high voltage exactly where it is needed, bulky high voltage cables can be eliminated. Since each channel is independent, overall system reliability is improved. These characteristics are obviously relevant to the design of the very large LHC detectors.

This IC will be available in LeCroy products, and as a separate component, for use in user designed and built systems. This new high voltage controller will substantially reduce the effort required to build a high voltage system.


A discriminator and shaper integrated circuit

J. Ardelean, R. L. Chase, A. Hrisoho, S. Sen, K. Truong, G. Wormser
Laboratoire de l'Accelerateur Lineaire, LAL Orsay, 91405 Orsay, France

Abstract

In the context of the DIRC detector for the BABAR experiment at the SLAC B-factory (Stanford, USA), an analog chip has been designed to receive from a PMT, through a 50 Ohm coaxial cable, signals rising in 3 ns and falling in 8 ns, and to provide a digital output for timing purposes and a multiplexed analog output, proportional to the input, for spectral measurements. There are 8 channels per chip, with a common gain adjustment for the preamplifier and individual controls for the offset and threshold settings. The discriminator sensitivity is 2 mV and the dynamic range extends from 2 mV to 100 mV. The equivalent noise at the input is 100 uV. The RMS time dispersion is less than 550 ps for a 3 mV threshold. The shaping amplifier produces a bipolar shape with 80 ns peaking time and 2.5 V mean output voltage. The crosstalk between channels is less than 2%.

Summary

This BABAR DIRC analog chip is intended to equip the front-end electronics of the Detector of Internally Reflected Cerenkov light of the BABAR experiment. It is the first stage of this chain, where the PMT signals are discriminated and amplified. A timing resolution better than 1 ns, independent of the amplitude (in the range 3-60 mV), is required, which has led to a zero-crossing design for the discriminator. The amplitude information will be used for monitoring purposes, and thus a multiplexed scheme has been chosen. There are 8 channels on a CMOS 1.2 micron technology ASIC with a 68-pin package. The die size is 4 mm2 and the power consumption is less than ?? mW at 100 kHz trigger rate.

Each channel is composed of an amplifier with a gain adjustment set by a digital control common to the 8 channels. The amplifier differential output is followed by two buffers driving the discriminator circuit and the shaping circuit respectively. The discriminator is driven differentially to minimize crosstalk to the neighbouring channels and to compensate for temperature-dependent threshold drifts. The amplifier is a differential-pair cascode. The discriminator is of the zero-crossing type: a short RC differentiation at 10 ns is applied to obtain a bipolar shape and, after three stages of amplification, a differential comparator provides the regeneration and hysteresis. The minimum threshold is 2 mV, with an equivalent input noise of 100 uV. The time walk is 1 ns close to the threshold and much smaller above it, leading to a time dispersion, averaged over a PMT spectrum, of less than 500 ps. The shaping amplifier is designed to have a peaking time of 80 ns; it makes use of current convertors (ICON circuit) to increase the effective value of the resistors defining the shaping differentiation and integration time constants. The dead time per channel is 50 ns.

Crosstalk between channels has been reduced to negligible values by careful separation of the analog and digital parts in the layout and by extensive use of ground layers and multiple ground connections from the chip to the package. Good chip-to-chip uniformity has been measured on a sample of around 100 chips. The chip is presently in production.
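
A behavioural sketch of the zero-crossing timing principle (not the actual circuit) is given below: a unipolar PMT-like pulse is CR-differentiated with the quoted 10 ns time constant into a bipolar shape whose zero-crossing time is independent of the pulse amplitude. Pulse shape, arming threshold and time step are assumptions.

```python
# Behavioural sketch of zero-crossing timing (assumed pulse shape and thresholds).
import numpy as np

DT_NS = 0.1                                   # simulation time step

def cr_differentiate(v, tau_ns=10.0):
    """Single-pole CR high-pass (the 'short RC differentiation' at 10 ns)."""
    alpha = tau_ns / (tau_ns + DT_NS)
    out = np.zeros_like(v)
    for i in range(1, len(v)):
        out[i] = alpha * (out[i - 1] + v[i] - v[i - 1])
    return out

def zero_crossing_ns(bipolar, arm_threshold):
    """Time where the armed bipolar signal crosses back through zero."""
    armed = False
    for i, y in enumerate(bipolar):
        if y > arm_threshold:
            armed = True
        elif armed and y <= 0.0:
            return i * DT_NS
    return None

t = np.arange(0.0, 100.0, DT_NS)

def pmt_pulse(amp_mv):
    return amp_mv * (1 - np.exp(-t / 3.0)) * np.exp(-t / 8.0)   # ~3 ns rise, ~8 ns fall

for amp in (3.0, 30.0, 60.0):                 # amplitudes spanning the quoted range (mV)
    print(amp, zero_crossing_ns(cr_differentiate(pmt_pulse(amp)), arm_threshold=0.3))
# The three crossing times coincide: the timing is independent of the amplitude.
```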