The research will be divided into two distinct phases, corresponding to the two-year duration of the grant. The first year will be devoted to the study of simulations and to the measurement of the charge ambiguity on cosmic rays collected during 2008 and 2009 with the detector in its operational configuration (nominal magnetic field and all sub-detectors in operation). The grant recipient will develop the interface between the new physics event generators, in the context of the models already described, and the detailed simulation of CMS. He or she will then produce several signal samples for different mass hypotheses, on which the preselection filters to be applied also to the backgrounds from standard processes (centrally produced within CMS) will be based. Since the LHC will already be operational, it will be possible to tailor the simulations to the data and obtain realistic estimates of efficiency, resolution, and background levels. One will then be able to set the selection cuts so as to optimize the discovery sensitivity, given the performance of the collider (in terms of integrated luminosity delivered) and of the detector. In the meantime, through the study of cosmic-ray data, the experimental ambiguity of track charge reconstruction will be measured by comparing the two halves of the same track reconstructed independently in the upper and lower hemispheres of CMS. At the end of the first year we therefore expect to have structured the search and to have acquired a realistic estimate of one of the main instrumental backgrounds. The contents of the search will be documented in a collaboration note (CMS note) and subjected to the scrutiny of internal referees.
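The comparison of the two halves of a cosmic track can be turned into an estimate of the per-track charge mis-identification probability. The sketch below illustrates the idea under an assumed input format (a list of top/bottom charge pairs, which is an invention for illustration, not the actual CMS data model): a charge mismatch between the two legs occurs when exactly one leg is mis-reconstructed, so the mismatch fraction f satisfies f = 2p(1 - p) for a per-leg mis-ID probability p.

```python
def charge_misid_rate(track_pairs):
    """Estimate the per-track charge mis-ID probability p.

    track_pairs: iterable of (q_top, q_bottom) charges for the two
    halves of the same cosmic track, reconstructed independently
    (illustrative input format, assumed here).

    A mismatch means exactly one leg was mis-reconstructed, so the
    mismatch fraction f = 2*p*(1 - p); we return the small root of
    2p^2 - 2p + f = 0.
    """
    pairs = list(track_pairs)
    mismatched = sum(1 for q_top, q_bottom in pairs if q_top != q_bottom)
    f = mismatched / len(pairs)
    return 0.5 * (1.0 - (1.0 - 2.0 * f) ** 0.5)
```

For example, 2 mismatched pairs out of 100 (f = 0.02) correspond to a per-leg mis-ID probability of about 1%.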
The second year will be devoted to the measurement, which will proceed according to the following steps:
- Validation of the lepton identification algorithms, by comparing the single-particle spectra observed in data with those produced by the simulation.
- Validation of the background-level predictions through the development of representative control samples with a favourable signal/noise ratio. To that aim we will be able to use samples of events with two opposite-charge leptons (much more frequent for backgrounds), samples with low jet multiplicity and two b-tags, samples with higher jet multiplicity but lacking b-tagged jets, and samples satisfying all analysis requirements but lacking significant missing transverse energy. The samples will be defined such that each of them is enriched in a specific source of background. The backgrounds from false muons (punch-through and in-flight decays) will be measured with samples of Ks, phi, and Lambda particles (the feasibility of this approach has already been demonstrated in simulations by the work of a PhD student participating in the project). The simulation predictions in the signal region for each source will then be rescaled according to the results obtained from the comparison of data and simulation in the control samples.
- Calculation of trigger and selection efficiencies: the efficiency of the dilepton trigger, used for several measurements, will be determined centrally by a group explicitly formed for that task. As already stressed, for a preliminary observation (or a limit) it is sufficient to measure the lepton identification efficiency with a precision close to 5%. For this purpose the samples of Z0 and Upsilon collected during the first two years of running are sufficient. To measure the efficiency, the resonances will be reconstructed by requiring a positive identification of a single lepton; the other track will then be used to measure the identification efficiency ("tag and probe" method).
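The tag-and-probe step above can be sketched as follows. This is a minimal illustration, assuming a simplified event record (invariant mass plus pass/fail flags for the two legs) and an illustrative mass window around the Z0 peak; both are assumptions for the sketch, not the actual analysis selection.

```python
Z_MASS_WINDOW = (81.0, 101.0)  # GeV; assumed window around the Z0 peak

def tag_and_probe_efficiency(candidates):
    """Measure the lepton identification efficiency with tag and probe.

    candidates: iterable of (mass, tag_passes_id, probe_passes_id)
    for resonance candidates (illustrative format, assumed here).

    Keep candidates in the mass window whose tag leg is positively
    identified; the probe leg then gives an unbiased estimate of the
    identification efficiency.
    """
    passing = total = 0
    for mass, tag_ok, probe_ok in candidates:
        if Z_MASS_WINDOW[0] <= mass <= Z_MASS_WINDOW[1] and tag_ok:
            total += 1
            passing += probe_ok
    return passing / total if total else None
```

In practice the passing and failing probe counts would be extracted from fits to the resonance peak to subtract combinatorial background; the counting version shown here only conveys the logic of the method.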
We will tailor to our analysis the calibration procedures and the data-simulation matching for the other variables (b-tagging, particle flow) developed centrally by CMS. If the operational conditions prove favourable, and we find good agreement between data and simulation in all control samples, we may decide to perform a multi-dimensional analysis, rather than a search based on sequential cuts, so as to maximize the significance of the result. At that point we will proceed to the measurement, computing the significance of a possible observation, or an exclusion limit, as a function of the candidate mass.
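The final scan of the significance as a function of the candidate mass can be sketched with the standard Asimov approximation for a Poisson counting experiment. The expected yields used below are invented placeholder numbers, and the simple counting formula stands in for the full statistical treatment the analysis would actually use.

```python
import math

def asimov_significance(s, b):
    """Median discovery significance for s expected signal events over
    b expected background events (Poisson counting, Asimov formula).
    Reduces to the familiar s/sqrt(b) when s << b."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# Placeholder expected yields (signal, background) per mass hypothesis,
# in GeV; purely illustrative numbers.
expected = {200: (12.0, 4.0), 300: (6.0, 3.0), 400: (2.5, 2.0)}
significance_vs_mass = {m: asimov_significance(s, b)
                        for m, (s, b) in expected.items()}
```

The same scan, with the significance replaced by a CLs-style upper limit, would produce the exclusion curve as a function of the candidate mass.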