author		Waiman Long <longman@redhat.com>	2020-02-07 14:39:29 -0500
committer	Thomas Gleixner <tglx@linutronix.de>	2020-03-04 10:18:11 +0100
commit		d441dceb5dce71150f28add80d36d91bbfccba99 (patch)
tree		921ce6611cb4e44ed4dc8f08ea56e70f81b7dab0 /kernel/time
parent		38f7b0b1316d435f38ec3f2bb078897b7a1cfdea (diff)
tick/common: Make tick_periodic() check for missing ticks
The tick_periodic() function is used early in the boot process for timekeeping while the other clock sources are being initialized.

The current code assumes that all timer interrupts are handled in a timely manner with no missing ticks. That is not actually true. Some ticks are missed and there are discrepancies between the tick time (jiffies) and the timestamp reported in the kernel log. Some systems, however, are more prone to missing ticks than others. In the extreme case, the discrepancy can cause a soft lockup message to be printed by the watchdog kthread. For example, on a Cavium ThunderX2 Sabre arm64 system:

[ 25.496379] watchdog: BUG: soft lockup - CPU#14 stuck for 22s!

On that system, the missing ticks are especially prevalent during the smp_init() phase of the boot process. With an instrumented kernel, it was found that about 24s of timestamp time elapsed while the tick count accumulated only 4s.

Investigation and bisection done by others pointed to commit 73f381660959 ("arm64: Advertise mitigation of Spectre-v2, or lack thereof") as the likely culprit. It could also be a firmware issue, as new firmware that would fix the issue was promised.

To properly address this problem, stop assuming that there will be no missing ticks in tick_periodic(). Modify it to follow the example of tick_do_update_jiffies64() by using another reference clock to check for missing ticks. Since the watchdog timer uses running_clock(), it is used here as the reference.

With this applied, the soft lockup problem on the affected arm64 system is gone and the tick time tracks the timestamp time much more closely.

Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20200207193929.27308-1-longman@redhat.com
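To make the accounting added by the patch below concrete, here is a minimal user-space C sketch of the same idea. It is an illustration only, not the kernel code: a 4 ms tick (HZ=250) is assumed purely for the example, and periodic_tick(), the plain nanosecond counters and jiffies_sim are hypothetical stand-ins for tick_periodic(), running_clock()/ktime_t and the jiffies counter; no locking is modeled.

#include <stdio.h>
#include <stdint.h>

#define TICK_PERIOD_NS 4000000LL		/* 4 ms tick, i.e. HZ=250 (assumed for illustration) */

static int64_t last_update;			/* reference time of the last accounted tick */
static uint64_t jiffies_sim;			/* stand-in for the jiffies counter */

/* Called once per (possibly late) timer interrupt; @now is the reference clock in ns. */
static void periodic_tick(int64_t now)
{
	int64_t ticks = 1;

	if (last_update) {
		int64_t delta = now - last_update;

		/* Two or more periods elapsed: account for all of them, not just one. */
		if (delta >= 2 * TICK_PERIOD_NS)
			ticks = delta / TICK_PERIOD_NS;

		last_update += ticks * TICK_PERIOD_NS;
	} else {
		/* First tick: just latch the reference clock. */
		last_update = now;
	}

	jiffies_sim += ticks;			/* the do_timer(ticks) equivalent */
}

int main(void)
{
	/* Simulated interrupt times in ns: the fourth interrupt arrives ~12 ms late. */
	int64_t irq_ns[] = { 4000000, 8000000, 12000000, 28000000, 32000000 };

	for (size_t i = 0; i < sizeof(irq_ns) / sizeof(irq_ns[0]); i++) {
		periodic_tick(irq_ns[i]);
		printf("t=%lld ns  jiffies=%llu\n",
		       (long long)irq_ns[i], (unsigned long long)jiffies_sim);
	}
	return 0;
}

When run, the interrupt that arrives at t = 28 ms (about 12 ms late) is accounted as four ticks, so the simulated jiffies counter ends at 8 rather than falling three ticks behind the reference clock.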
Diffstat (limited to 'kernel/time')
-rw-r--r--	kernel/time/tick-common.c	36
1 file changed, 33 insertions, 3 deletions
diff --git a/kernel/time/tick-common.c b/kernel/time/tick-common.c
index 7e5d3524e924..cce4ed1515c7 100644
--- a/kernel/time/tick-common.c
+++ b/kernel/time/tick-common.c
@@ -16,6 +16,7 @@
 #include <linux/profile.h>
 #include <linux/sched.h>
 #include <linux/module.h>
+#include <linux/sched/clock.h>
 #include <trace/events/power.h>
 
 #include <asm/irq_regs.h>
@@ -84,12 +85,41 @@ int tick_is_oneshot_available(void)
 static void tick_periodic(int cpu)
 {
 	if (tick_do_timer_cpu == cpu) {
+		/*
+		 * Use running_clock() as reference to check for missing ticks.
+		 */
+		static ktime_t last_update;
+		ktime_t now;
+		int ticks = 1;
+
+		now = ns_to_ktime(running_clock());
 		write_seqlock(&jiffies_lock);
 
-		/* Keep track of the next tick event */
-		tick_next_period = ktime_add(tick_next_period, tick_period);
+		if (last_update) {
+			u64 delta = ktime_sub(now, last_update);
 
-		do_timer(1);
+			/*
+			 * Check for eventually missed ticks
+			 *
+			 * There is likely a persistent delta between
+			 * last_update and tick_next_period. So they are
+			 * updated separately.
+			 */
+			if (delta >= 2 * tick_period) {
+				s64 period = ktime_to_ns(tick_period);
+
+				ticks = ktime_divns(delta, period);
+			}
+			last_update = ktime_add(last_update,
+						ticks * tick_period);
+		} else {
+			last_update = now;
+		}
+
+		/* Keep track of the next tick event */
+		tick_next_period = ktime_add(tick_next_period,
+					     ticks * tick_period);
+		do_timer(ticks);
 		write_sequnlock(&jiffies_lock);
 		update_wall_time();
 	}
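For scale (assuming HZ=250, i.e. a 4 ms tick_period, purely for illustration): if running_clock() shows that 24 ms have elapsed since last_update, delta is at least 2 * tick_period, ktime_divns(delta, period) yields ticks = 6, and do_timer(6) advances jiffies by six ticks while last_update and tick_next_period each move forward by 6 * tick_period, keeping jiffies in step with the reference clock.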