by Ricardo Goncalo (Royal Holloway)
The ATLAS detector at the Large Hadron Collider (LHC) will be exposed to proton-proton collisions at a rate of 40 MHz. Using fast reconstruction algorithms, the ATLAS trigger must efficiently reject an enormous rate of background events while retaining the most interesting physics. After a first processing level using custom electronics, the trigger selection is performed by software running on two processor farms containing a total of around two thousand multi-core machines. This system is known as the High Level Trigger (HLT). To reduce the network data traffic and the processing time to manageable levels, the HLT uses seeded, step-wise reconstruction, aiming at the earliest possible rejection of background events. The recent LHC startup and short single-beam run provided a "stress test" of the system. Following this period, ATLAS continued to collect cosmic-ray events for detector alignment and calibration purposes. Both running periods provided stringent tests of the HLT reconstruction and selection algorithms, as well as of its configuration and monitoring systems. This allowed the commissioning of several tracking, muon-finding, and calorimetry algorithms under different running conditions. Frequent changes of the selection menu were required to cope with the parallel commissioning of the ATLAS subdetectors. After giving an overview of the trigger design and its innovative features, this talk will focus on the valuable experience gained in running the trigger in the fast-changing environment of the detector commissioning. It will emphasize the commissioning of the HLT algorithms, monitoring, and configuration, and will outline plans for future development.
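The seeded, step-wise selection with early rejection can be sketched as follows. This is a minimal illustration only: the step names, thresholds, and data structures are hypothetical and do not reflect the actual ATLAS trigger software.

```python
# Hypothetical sketch of seeded, step-wise trigger selection with
# early rejection. All names and thresholds are illustrative only.

def step_calo(roi):
    # Fast calorimeter reconstruction restricted to the seed region.
    return roi["et"] > 20.0  # GeV threshold (illustrative)

def step_tracking(roi):
    # Track finding, again only inside the region of interest.
    return roi["n_tracks"] >= 1

def step_matching(roi):
    # Track-cluster matching as a final, more expensive refinement.
    return roi["match_quality"] > 0.8

# Steps ordered from cheapest to most expensive, so failing seeds
# are dropped with as little processing as possible.
STEPS = [step_calo, step_tracking, step_matching]

def hlt_decision(rois):
    """Accept the event if any seed survives every step.

    Each step runs only on data from the seed's region of interest,
    and a seed is abandoned as soon as one step fails, so most
    background events are rejected after the earliest steps.
    """
    for roi in rois:
        if all(step(roi) for step in STEPS):
            return True
    return False

# Example: the first seed fails tracking; the second passes all steps.
event = [
    {"et": 25.0, "n_tracks": 0, "match_quality": 0.0},
    {"et": 30.0, "n_tracks": 2, "match_quality": 0.9},
]
print(hlt_decision(event))  # True
```

The early-rejection structure means that cheap, coarse quantities gate the more expensive reconstruction, which is what keeps the average per-event processing time manageable.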