Hi Rob,

in each scenario we're acquiring 500k samples at 5 GSps. We're also splitting the same trace into two 250k-sample buffers like so:

Buffer 1: samples 0, 2, 4, 6, ...
Buffer 2: samples 1, 3, 5, 7, ...

Our assumption is that Buffers 1 and 2 then each hold the samples converted by one of the two active ADC cores. We then histogram the original buffer and both split buffers to get mean and standard deviation values for each (see the sketch at the end of this post).

Setup: We're feeding a DC voltage through the LMH3401 FDA into the ADC (by means of the above-mentioned offset DAC).

Scenario 1: 0 V
We let the ADC perform an offset calibration, then take a 500k-sample trace (top left).
Combined buffer (top left): mean 2050, std dev ~2.5 (top middle)
Buffer 1 and Buffer 2 (both traces overlaid red/blue, bottom left): the same as the combined buffer (bottom middle, bottom right)

Scenario 2: 200 mV (results in a converted ADC value of ~100)
We're not re-running any calibrations, then take a 500k-sample trace.
Combined buffer: mean 100, std dev >3
Buffer 1: mean 98, std dev ~2.5
Buffer 2: mean 102, std dev ~2.5

We can also observe this behaviour near the other end of the input range, just with the orientation of Buffer 1 and Buffer 2 flipped.

I hope this makes sense. I understand that there is no way to adjust the gain per ADC core; that would have been an explanation for what we're seeing here.

Thanks,
Thorsten
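
P.S. For reference, here's a minimal Python sketch of the even/odd split and per-buffer statistics described above. The function and array names are illustrative only; the actual acquisition is device-specific and not shown.

import numpy as np

def split_and_stats(trace):
    """Split an interleaved trace into even/odd sample buffers and
    print mean / standard deviation for the combined trace and for
    each split buffer."""
    trace = np.asarray(trace)
    buffer1 = trace[0::2]   # samples 0, 2, 4, ... (assumed ADC core A)
    buffer2 = trace[1::2]   # samples 1, 3, 5, ... (assumed ADC core B)
    for name, buf in (("combined", trace), ("buffer 1", buffer1), ("buffer 2", buffer2)):
        print(f"{name}: mean = {buf.mean():.2f}, std dev = {buf.std():.2f}")
    return buffer1, buffer2

# Usage with one 500k-sample acquisition (acquire_trace is a placeholder):
# trace = acquire_trace(num_samples=500_000)
# buf1, buf2 = split_and_stats(trace)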