
fADC dynamic range of FCAL



Hi All,

I come back to the issue of the dynamic range the FCAL has to cover.
By this I mean the range in FCAL signal amplitudes the fADC has to handle.
I am not talking about how many bits the ADC has or should be using for
digitization.

Let me assume that the largest photon I want to see in the FCAL without
saturating the fADC has an energy of 9 GeV, and let's say its signal
amplitude will be -1.8 V.
Now I know that cosmic rays passing perpendicularly through a calorimeter
block generate about as many photons as a 30 MeV photon hitting the block
in the center from the front. The ratio between 9 GeV and 30 MeV is 300,
so the signal amplitude expected from such a 30 MeV photon would be 6 mV.
Such a small signal will probably drown in the noise no matter how many
bits the ADC has.
The problem is the dynamic range of 300 we need to cover if we want no
saturation at 9 GeV and still want to see a very low energy photon like
30 MeV.
So the question is what is more important: low-energy photons or no
saturation at high energy. Here are my thoughts on this issue.
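
Just to put the numbers in one place, here is a quick back-of-the-envelope
check (a minimal Python sketch; the -1.8 V amplitude, the 9 GeV and 30 MeV
energies and the 40 mV cosmics figure are from the text, the strictly
linear response is the assumption spelled out under A) below, and the
function name is mine):

def amplitude_mV(e_mev, full_scale_gev=9.0, full_scale_v=1.8):
    # amplitude if full_scale_gev maps onto full_scale_v, linear response assumed
    return 1000.0 * full_scale_v * e_mev / (1000.0 * full_scale_gev)

print(9000.0 / 30.0)       # dynamic range factor: 300
print(amplitude_mV(30.0))  # a 30 MeV photon: ~6 mV, likely lost in the noise
print(0.040 * 300.0)       # option A) below: cosmics tuned to 40 mV -> 9 GeV gives 12 V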

A) We want to see photons with as low an energy as possible.
    -> We have to set the HV such that cosmic-ray signals are clearly
       separated from the pedestal. This means signal amplitudes on the
       order of 20-40 mV, which would allow us to see photons of the order
       of 30 MeV.
       (From MC simulation we know that cosmics passing perpendicularly
       through a calorimeter block generate about the same number of
       photo-electrons as a 30 MeV photon hitting the center of the block's
       front face.)
    -> However, with this setting a 9 GeV photon would generate a signal of
       about 12 V in the central block and the fADC will clearly saturate.
       (I assume here that the base can handle the current needed for such
       a large signal.) The response of the calorimeter block is linear:
       the number of photo-electrons is directly proportional to the
       deposited energy, which in turn is directly proportional to the
       incident photon energy.

    => There might be two possible solutions to this problem, assuming the
       base can handle these large-amplitude signals (a small sketch of
       both ideas follows after this list):
          1) About 30% of the energy is deposited in the neighbouring
             blocks, so the blocks surrounding the one hit by the 9 GeV
             photon will see signal amplitudes on the order of 1 V, which
             will not saturate the fADC, and these signals could be used to
             estimate the signal amplitude in the central block.
             This is of course rather crude: most of the time the photon
             does not hit a block in its center, more than one block may
             saturate, and an algorithm to reconstruct the real photon
             energy might be very difficult to find and implement.

          2) We split the signal before the fADC input in a ratio of 1:7,
             delay the large-amplitude part by, say, 150 ns, and recombine
             the two signals before the fADC input. For each event we would
             then have two pulses in the fADC: the "low-gain" signal at
             latency X and the "high-gain" signal at latency X+150 ns.
             This would not work, I guess, if the rate in a single channel
             is high enough to cause significant pile-up, or if the PMT
             base cannot handle these high-amplitude signals.
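
To make these two ideas a bit more concrete, here is a rough offline sketch
(Python). It is only an illustration: the ~30% leakage, the 1:7 split and
the 150 ns delay are the numbers from above; the 1.8 V clipping level, the
reading of "1:7" as 1/8 vs 7/8 of the original amplitude, and all function
names are my own assumptions.

SPLIT_LOW = 1.0 / 8.0    # fraction of the amplitude in the small ("low-gain") pulse
SPLIT_HIGH = 7.0 / 8.0   # fraction in the delayed large ("high-gain") pulse, 150 ns later
SATURATION_V = 1.8       # amplitude at which the fADC clips (assumed)
RING_FRACTION = 0.30     # fraction of the shower energy in the neighbouring blocks

def amplitude_from_dual_gain(low_gain_v, high_gain_v):
    """Idea 2): reconstruct the original amplitude from the two recombined pulses.

    low_gain_v  -- amplitude of the attenuated pulse at latency X
    high_gain_v -- amplitude of the delayed large pulse at latency X + 150 ns
    """
    if high_gain_v < SATURATION_V:
        return high_gain_v / SPLIT_HIGH   # large pulse not clipped: best resolution
    return low_gain_v / SPLIT_LOW         # large pulse clipped: fall back on the small one

def central_energy_from_ring(ring_energy_gev):
    """Idea 1), very crude: estimate the full shower energy from the ~30%
    that leaks into the (non-saturated) neighbouring blocks."""
    return ring_energy_gev / RING_FRACTION

# e.g. a 9 GeV shower leaving ~2.7 GeV in the ring: central_energy_from_ring(2.7) ~ 9 GeV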
         
B) We set the low-energy photon threshold, and hence the gain via the PMT
    HV, depending on the physics topic of the run. Let's say the physics we
    are after gives rise to a maximum photon energy of 3 GeV. Then we could
    set the gain of the PMTs such that 3 GeV is full range, and a 30 MeV
    photon would then generate a signal of about 18 mV.
    Or let's say we have a physics topic that looks only for high photon
    energies in the FCAL. Then we set the HV such that the high energies
    will not saturate.
    In this scenario we would be forced to select the FCAL gain according
    to the physics program, and we might lose potential data we did not
    even know we could have had, as our understanding and insights grow
    over the course of the experiment.
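
The 18 mV figure follows from the same linear scaling; a short check
(Python again, with the 1.8 V full-scale amplitude assumed as before):

def low_energy_signal_mV(full_scale_gev, e_mev=30.0, full_scale_v=1.8):
    # amplitude of a low-energy photon when full_scale_gev is mapped onto full_scale_v
    return 1000.0 * full_scale_v * e_mev / (1000.0 * full_scale_gev)

print(low_energy_signal_mV(3.0))   # full range at 3 GeV: a 30 MeV photon gives ~18 mV
print(low_energy_signal_mV(9.0))   # full range at 9 GeV: the same photon gives only ~6 mV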


My view of the problem is:
1) From a physics point of view we want the FCAL threshold as low as
    possible, so that we can measure photons of as low an energy as
    possible.

2) If this low threshold means seeing 30 MeV photons, we are talking about
    a dynamic range factor of 300 up to the highest-energy photons, if we
    do not want saturation.

3) Even if the threshold is higher, say 150 MeV, and we do not want
    saturation at 9 GeV, we still have to cover a dynamic range of 60.
    Even this is already tricky.
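
The corresponding quick check for point 3 (same assumed 1.8 V full scale):

print(9000.0 / 150.0)                  # dynamic range factor: 60
print(1000.0 * 1.8 * 150.0 / 9000.0)   # a 150 MeV photon would then sit at ~30 mV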

I am looking forward to your comments and thoughts. If I am wrong about
this and it is not a problem, I would be really happy ;-)

cheers,
Beni