VADC - Startup Calibration

Hi all,

Summary: Poor DNL and INL for the VADC module. Oversampling improves things, but the ENOB is still very low for a 12-bit configuration on the XMC4400. It is possible that startup calibration is not happening: the "CAL" bit (which indicates a calibration in progress) doesn't seem to get set, even when I poll it in a "while" loop (see the ADC_AI.H004 errata notes). The purpose of this question is to hopefully narrow down the root causes of this behaviour.

I am currently working with the ADC module in an attempt to better understand its properties. As of now, I am getting very inaccurate results, i.e. very large DNL and INL and quite low ENOB. One of the few non-obvious things I did was to connect GND to VAGND, which almost halved the DNL, but it is still very high. I am using the XMC4400 kit, sampling at 400 kHz with 4x oversampling and averaging. At this point it is not clear to me whether the problem is in the software or in something I am doing wrong in the hardware setup. Could anyone please suggest a possible explanation for why the accuracy might be so poor? Thank you!
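
As a sanity check on my expectations (a rough sketch, assuming quantization noise behaves like white noise, in which case averaging over an oversampling ratio OSR should buy about 0.5 * log2(OSR) extra bits):

  #include <math.h>
  #include <stdio.h>

  /* Rule of thumb under a white-noise assumption: 4x oversampling with
     averaging should improve the ENOB by roughly one bit. */
  int main(void)
  {
    const double osr = 4.0;                   /* my oversampling ratio */
    const double enob_gain = 0.5 * log2(osr); /* expected gain in bits */
    printf("Expected ENOB gain at %.0fx oversampling: %.2f bits\n", osr, enob_gain);
    return 0;
  }

So with 4x oversampling I only expect roughly one extra bit, which is consistent with oversampling helping but not fixing the underlying accuracy problem.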

As an additional question related to the topic: do I need to connect a supply to the VAREF pin to provide a voltage reference if my package has distinct VAREF and VDDA pins?

A small update: I've tried waiting for the ARBCFG.CAL bit to be set to '1' - as suggested in the errata notes - in order to ensure proper calibration. I am only using group #0, hence I inserted the following code into the xmc_vadc.c/XMC_VADC_GLOBAL_StartupCalibration() API:

  XMC_ASSERT("XMC_VADC_GLOBAL_StartupCalibration:Wrong Module Pointer", (global_ptr == VADC))

global_ptr->GLOBCFG |= (uint32_t)VADC_GLOBCFG_SUCAL_Msk;

/* Inserted the code below to comply with errata recommendations */
while(!(VADC_G0->ARBCFG & VADC_G_ARBCFG_CAL_Msk)){
__asm(" nop"); // Wait for the bit to be set to 1
}
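
In case it clarifies the intent: what I ultimately want is to block until the calibration has both started and finished. A minimal sketch of that, assuming ARBCFG.CAL reads '1' only while the group's startup calibration is actually running and returns to '0' on completion (and that the '1' phase is long enough to catch by polling):

  /* Request startup calibration, then block until group 0 reports that the
     calibration phase has started and completed again. Assumes the converter
     clock and group 0 are already configured. */
  VADC->GLOBCFG |= (uint32_t)VADC_GLOBCFG_SUCAL_Msk;

  /* CAL should read 1 while the startup calibration is running... */
  while ((VADC_G0->ARBCFG & VADC_G_ARBCFG_CAL_Msk) == 0U)
  {
    __asm(" nop");
  }

  /* ...and drop back to 0 once it has completed. */
  while ((VADC_G0->ARBCFG & VADC_G_ARBCFG_CAL_Msk) != 0U)
  {
    __asm(" nop");
  }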


What I think it should be doing is watching the CAL bit for the particular group I am working with and waiting until it becomes '1', meaning that the calibration process has started. However, the program gets stuck in that loop, and in debug mode manually setting the GLOBCFG.SUCAL bit doesn't trigger the startup calibration either. Could you please - if possible, of course 🙂 - provide me with some insight as to why this code is not working as expected?

And one more thing I would like to address: for the oversampling I used both the SW and HW options (i.e. sampling very fast in software, or letting the hardware accumulate results by enabling that function). In particular, I used queue oversampling with FIFO result storage. My expectation was that if I put the same channel in the queue 8 times and configure the first conversion to be triggered by the CCU status bit signal, then - with the "refill" option enabled - all 8 conversions would occur sequentially after the trigger and then be placed back in the queue, awaiting the next round (see the sketch below). The timer was configured to 200 kHz so that the sampling rate would also be 200 kHz, and I've checked the conversion times against the sampling period: there is plenty of time to do all 8 conversions. However, when I sampled a sawtooth waveform, I found that I had to adjust the input frequency because the sampling actually happened at 66.7 kHz! Could it be that after the refill the conversion requests don't wait for the trigger and just start right away?
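
For reference, this is roughly how I fill the queue (a minimal sketch using the XMC_VADC queue API; the channel number and the function name are placeholders for my actual configuration):

  #include "xmc_vadc.h"

  #define OVERSAMPLE_COUNT (8U)

  /* Insert the same channel 8 times: every entry re-enters the queue after
     conversion (refill), and only the first entry waits for the external
     (CCU) trigger. */
  static void queue_fill(void)
  {
    uint32_t i;
    for (i = 0U; i < OVERSAMPLE_COUNT; ++i)
    {
      XMC_VADC_QUEUE_ENTRY_t entry =
      {
        .channel_num        = 0U,        /* placeholder: my measurement channel */
        .refill_needed      = 1U,        /* put the entry back after conversion */
        .generate_interrupt = 0U,
        .external_trigger   = (i == 0U)  /* only the first conversion is gated */
      };
      XMC_VADC_GROUP_QueueInsertChannel(VADC_G0, entry);
    }
  }

My understanding was that refill re-inserts each entry with its original flags, so the first entry should keep its external_trigger bit and each round should again wait for the trigger; the 66.7 kHz result is what makes me doubt that understanding.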

All the best,
Andrey