it is fairly easy, once you have either HSE or LSE up and running.
1. in general, both HSI and LSI on those chips are fairly accurate: usually within 1%, often around 0.1%. that's good enough to run a UART at reasonably high speed.
2. getting HSE up and running is fairly easy - you usually don't need caps. LSE usually needs caps.
3. the basic concept is the same. all you need is a mechanism to read off the MCU ticks: DWT, SysTick, or even a timer, ideally 32-bit (a minimal sketch of such a tick reader follows these steps).
a) set up the RTC to run off LSI.
b) block execution while the RTC runs for a few seconds.
c) measure the ticks that elapsed during b).
d) compare the measured ticks against the expected count.
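for the tick reader, here is a minimal sketch using the DWT cycle counter via CMSIS. this is an assumed setup for illustration, not the exact code from my boards: the DWT counter exists on Cortex-M3 parts like the F100/F103, but not on the M0/M0+ parts (F030/G030), where a free-running 32-bit timer would serve instead.

Code: Select all
#include "stm32f1xx.h"   // CMSIS device header; F103 assumed here

// run once at startup: enable the DWT cycle counter (Cortex-M3/M4 only)
void ticksInit(void) {
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;   // enable the trace block
    DWT->CYCCNT      = 0;                             // reset the counter
    DWT->CTRL       |= DWT_CTRL_CYCCNTENA_Msk;        // start counting core cycles
}

// free-running 32-bit core-cycle count
uint32_t ticks(void) {
    return DWT->CYCCNT;
}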
For example, if you run the RTC for 2 seconds on a chip whose tick counter runs at 4MHz, you would expect 8M ticks over that 2-second period. if your reading comes in below 8M (say 7.9M), your LSI runs slightly fast; if you get more than 8M, it runs slow.
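to put a number on "slightly fast or slow", you can fold the comparison into a signed error. a sketch, assuming a 4MHz tick source and the rtcTks() routine shown below (TICK_HZ and lsiErrPpm() are my names for illustration, not from the original code):

Code: Select all
#define TICK_HZ 4000000UL   // assumed tick-source frequency: 4MHz

// signed LSI error in parts-per-million over 'sec' RTC seconds;
// negative -> LSI runs fast, positive -> LSI runs slow
int32_t lsiErrPpm(uint32_t sec) {
    uint32_t measured = rtcTks(sec);             // ticks counted over 'sec' RTC seconds
    int64_t  expected = (int64_t)sec * TICK_HZ;  // e.g. 8M for 2s at 4MHz
    return (int32_t)(((int64_t)measured - expected) * 1000000 / expected);
}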
here is what I wrote to implement the process above. I was trying to calibrate the various clocks (using LSE to calibrate HSI, or HSE to calibrate LSI).
Code: Select all
uint32_t rtcTks(uint32_t sec) {
    uint32_t tks;
    uint32_t tmp;
    // wait for the next RTC second boundary so we start on a clean edge
    tmp = RTC2time(NULL); while (RTC2time(NULL) == tmp) continue;
    tks = ticks();                      // timestamp the starting edge
    // let the RTC advance through 'sec' more second boundaries
    while (sec--) {
        tmp = RTC2time(NULL); while (RTC2time(NULL) == tmp) continue;
    }
    return ticks() - tks;               // ticks elapsed over 'sec' RTC seconds
}
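a usage sketch going the other way - assuming the RTC is clocked from an accurate 32768Hz LSE and ticks() counts HSI-derived core cycles, the same routine reveals the real core clock:

Code: Select all
uint32_t tks     = rtcTks(4);   // core cycles over 4 LSE-timed seconds
uint32_t core_hz = tks / 4;     // measured HSI-derived core clock, in Hz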
I even wrote a binary search algorithm to automate the trial-and-error:
https://dannyelectronics.wordpress.com/ ... stm32f103/
that particular implementation was on an STM32F103 using a 4-digit 7-segment LED display. but it is fairly portable: I did the same thing on a G030F, an F030F, and an F100 too.
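the gist of that binary search is roughly this - a from-scratch sketch under assumptions, not the blog's code: setHsiTrim() is a hypothetical helper that would write the 5-bit HSITRIM field (RCC->CR bits [7:3] on the F1), and I'm assuming a higher trim value means a faster HSI.

Code: Select all
// hypothetical helper, part-specific: writes the 5-bit HSI trim field
void setHsiTrim(uint8_t trim);

// binary-search the trim so the tick count over 1 RTC second approaches
// 'expected' (ticks clocked from HSI, RTC clocked from an accurate LSE)
uint8_t calHsiTrim(uint32_t expected) {
    uint8_t lo = 0, hi = 31;            // full 5-bit trim range
    while (lo < hi) {
        uint8_t mid = (lo + hi) / 2;
        setHsiTrim(mid);
        if (rtcTks(1) < expected) lo = mid + 1;   // too few ticks: HSI slow, trim up
        else                      hi = mid;       // on target or fast: trim down
    }
    return lo;
}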