(a) Setting up the CMS environment at Imperial
. /vols/grid/glite/ui/3.2.5-0/external/etc/profile.d/grid-env.sh
voms-proxy-init --voms cms
This code is 32-bit software, which is no longer the default:
export SCRAM_ARCH=slc5_ia32_gcc434
source /vols/cms/grid/setup.sh
. /vols/cms/grid/CRAB/current/crab.sh
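For convenience these setup steps can go in one small script to source at the start of each session. A sketch using only the commands above (the file name setup_cms_ic.sh is just a suggestion):

# setup_cms_ic.sh -- source this once per session (name is just a suggestion)
. /vols/grid/glite/ui/3.2.5-0/external/etc/profile.d/grid-env.sh  # glite grid UI
voms-proxy-init --voms cms                    # asks for the grid certificate passphrase
export SCRAM_ARCH=slc5_ia32_gcc434            # 32-bit architecture (not the default)
source /vols/cms/grid/setup.sh                # CMS software environment
. /vols/cms/grid/CRAB/current/crab.sh         # CRAB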

(b) Compiling the example code (from Gordon)
scram project CMSSW CMSSW_3_8_7
This makes a directory called CMSSW_3_8_7. I guess there's no choice of name here.
lx06:skimming1 :~] ls CMSSW_3_8_7/
bin cfipython config doc external include lib logs python share src test tmp
cd CMSSW_3_8_7/src
unpack Gordon's tarball:
lx06:src :~] ls
DataFormats RecoTauTag Skimming TauAnalysis
lx06:src :~] cmsenv (though best done in CMSSW_3_8_7 dir)
scram build -j 2
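Putting section (b) together as one sequence (a sketch: the tarball path /path/to/skimming.tgz is a placeholder for wherever Gordon's tarball actually lives):

scram project CMSSW CMSSW_3_8_7       # creates the CMSSW_3_8_7 work area
cd CMSSW_3_8_7/src
tar xzf /path/to/skimming.tgz         # placeholder: unpack Gordon's tarball here
cmsenv                                # set up the runtime environment for this release
scram build -j 2                      # compile with 2 parallel jobs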

(c) Running locally
cd CMSSW_3_8_7/src/Skimming/ElectronMuonTauJet/test
cmsRun electronMuonTauJetTree_mc.py
ctrl-C to stop (will exit gracefully)
When running again, skip the compiling step, but do cmsenv in 'test' before running the executable.
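So a typical re-run looks like this (sketch; the log file name is arbitrary):

cd CMSSW_3_8_7/src/Skimming/ElectronMuonTauJet/test
cmsenv                                           # needed in every new shell
cmsRun electronMuonTauJetTree_mc.py >& run.log   # ctrl-C still exits gracefully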

(d) Submitting a crab job
First I need a crab config file: crab_mc.cfg.
An example is in $CRABPATH/crab.cfg and $CRABPATH/full_crab.cfg.
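Something like the following should do as a starting point. This is only a sketch from memory of the usual CRAB2 keys, so the names want checking against the $CRABPATH examples; the dataset path is a placeholder:

cat > crab_mc.cfg <<'EOF'
[CRAB]
jobtype                = cmssw
scheduler              = glite

[CMSSW]
datasetpath            = /SomeSample/SomeProcessing/AODSIM
pset                   = electronMuonTauJetTree_mc.py
total_number_of_events = -1
events_per_job         = 50000

[USER]
return_data            = 1
EOF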
crab -cfg crab_mc.cfg -create -submit
crab -status provides a link to the dashboard (but apparently only for the last set of crab jobs submitted).
Or I can go and find my jobs on the dashboard directly.

Data Access
Jobs submitted to our site via crab automatically use dCache's gsidcap protocol, i.e. the file is not copied to the worker node but is read directly from the storage element.

Relevant CMS documentation:
CRAB
Crab prerequisites (at CERN, but still helps)
Crab tutorial
CMS DBS
find dataset where site=T2_UK_London_IC and datatype = data and dataset.status like VALID*