
I have recently built a new PC, to be used as a server. For months now, I have been getting unexplained crashes, sometimes after a few minutes, sometimes after a few days, where the PC just reboots without any trace in the logs. Just normal occasional status logs, and then, a few seconds later, the log of a normal boot process.

This is slowly driving me crazy because I just can't make out the issue. I have tried multiple different Linux installs, swapped out the SSD and PSU, and ran a RAM test, but the behaviour still persists.

Today something was different. Instead of rebooting, it showed me this blue screen, this time finally with a log. But I still can't make out the issue. Some quick internet searches turn up only vague answers: everything from software to hardware, and PSU to CPU.

Can any Linux wizard help me fix my problem? Link to the log

Update: I have now run into an even weirder issue. I booted up, installed cpupower as a comment suggested, installed man to look up its documentation, and then the screen froze and I was forced to reboot the PC by holding the power button for 3 s. When I booted back up, my bash history was reset to a state from a few days back (~/.bash_history mod time from 2 days ago), even though I have rebooted several times since then and have not had any persistence errors like this. man was also no longer installed. Even weirder, cpupower was still installed. So it seems like some data was saved while other files were discarded. I will now use a second SSD and try to replicate this. I now suspect some kind of storage issue, even though the two SSDs in question have never caused issues in my laptop. This seems scary; I have never witnessed such a weirdly corrupted Linux install, ever.

31 comments
  • Looking at the call trace:

    [ 1641.073507] RIP: 0010:rb_erase+0x199/0x3b0
    ...
    [ 1641.073601] Call Trace:
    [ 1641.073608]  <TASK>
    [ 1641.073615]  timerqueue_del+0x2e/0x50
    [ 1641.073632]  tmigr_update_events+0x1b5/0x340
    [ 1641.073650]  tmigr_inactive_up+0x84/0x120
    [ 1641.073663]  tmigr_cpu_deactivate+0xc2/0x190
    [ 1641.073680]  __get_next_timer_interrupt+0x1c2/0x2e0
    [ 1641.073698]  tick_nohz_stop_tick+0x5f/0x230
    [ 1641.073714]  tick_nohz_idle_stop_tick+0x70/0xd0
    [ 1641.073728]  do_idle+0x19f/0x210
    [ 1641.073745]  cpu_startup_entry+0x29/0x30
    [ 1641.073757]  start_secondary+0x11e/0x140
    [ 1641.073768]  common_startup_64+0x13e/0x141
    [ 1641.073794]  </TASK>

    What's happening here leading up to the panic is start_secondary followed by cpu_startup_entry, eventually ending up in CPU idle time management (tmigr), giving a context of "waking up/sleeping an idle CPU". I've had a few systems in my life where somewhat aggressive power-saving settings in the BIOS were not cleanly communicated to Linux, so to speak, causing such issues.

    This area is notorious for being subtly borked, but you can test this hypothesis easily: either disable a setting akin to "Global C States" in your BIOS, which effectively disables power-saving for your CPUs, or try the equivalent kernel arguments processor.max_cstate=1 intel_idle.max_cstate=0, or even cpuidle.off=1.
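    For the kernel-argument route, a sketch of making the parameters persistent. The file layout and the update-grub command are Debian-style assumptions; other distros and bootloaders differ:

    ```shell
    # add the parameters to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, e.g.:
    #   GRUB_CMDLINE_LINUX_DEFAULT="quiet processor.max_cstate=1 intel_idle.max_cstate=0"
    sudoedit /etc/default/grub
    sudo update-grub   # Fedora/openSUSE: grub2-mkconfig -o /boot/grub2/grub.cfg

    # after the reboot, verify the arguments actually took effect
    cat /proc/cmdline
    ```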

    This obviously costs you the power-saving capability of the CPUs, but if your system runs stable that way, you're likely in the right ballpark and can find a specific solution for the issue, possibly in a BIOS/firmware update. Here's a not-too-shabby gist roughly explaining what C-states are. Don't read too many of the comments; they're more confusing than enlightening.

    The kernel docs I linked to above are comprehensive, and utterly indecipherable for a layperson. Instead of fumbling about in sysfs, try the cpupower tool/package to inspect the CPU idle settings, and try increasing the enabled idle states until your system crashes again, to find out whether a specific (deep) sleep state triggers your issue; disable exactly that one if you cannot find a bugfix/BIOS update.
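    A cpupower sketch, assuming the cpupower package matching your kernel is installed; state indices are machine-specific, so check idle-info first:

    ```shell
    # list the C-states your kernel exposes, with latencies and enabled/disabled status
    cpupower idle-info

    # disable a single idle state by index (here: state 3) on all CPUs
    sudo cpupower idle-set -d 3

    # re-enable all idle states again
    sudo cpupower idle-set -E
    ```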

    If this is your problem, to reproduce the panic, try leaving your system as idle as possible after bootup. If a panic happens regularly that way, try starting processes exercising all your CPUs - if the hypothesis holds, this should not panic at any time, as no CPU is ever idle.

    • Thanks, please check my updated post. I have disabled the relevant setting in my BIOS, installed cpupower and increased the idle state to the maximum value of 2. I have also tried states 0 & 1. Do I need to run the machine for longer or should it have crashed right away according to your hypothesis? I also can't tell you if the BIOS setting already fixed my issue since I still can't reproduce it.

      About your last paragraph: the system has had these issues mostly while idle, but that's probably because my system is idle most of the time anyway. I have also had the issue during low to medium loads, like transcoding audio via Jellyfin. But I haven't methodically run a process on all CPUs. How would I go about running a load that uses all cores? I don't particularly want to run a stress test for hours (because loud), but at this point I'm really open to trying anything.

      Some time ago I also enabled an option in my BIOS that generates a dummy load, because a forum post had suggested a PSU issue as the cause of unexplained reboots. I have a 500 W PSU that is way overkill for my components, and some users suggested that some PSUs can turn off when the load is too low. The option did not fix my problem. I have since connected a weaker 220 W PSU, which also didn't help.

      • Just my two cents as someone who does this a lot myself: only change one thing at a time when testing troubleshooting suggestions. I know the reply suggested a few things in succession, but those were progressive steps to confirm and identify the underlying cause. Doing them all at once fails to correctly identify the root cause at best, and at worst may introduce new problems.

        I say this again, as someone who notoriously does this all the time. It's a time-saver reflex, but one that will bite you in the ass eventually.

      • Do I need to run the machine for longer or should it have crashed right away according to your hypothesis?

        Sorry for muddying the waters with my verbosity. It should not crash anymore. I believe your kernel panic was caused when an idle CPU 6 was sent to sleep. Disabling C-states, or limiting them to C0 or C1, prevents your CPUs from going into (deep) sleep. Thus, with C-states disabled or limited, the kernel panic should not happen anymore.

        I haven't found a way to explicitly put a core into a specific C-state of your choosing, so the best I can recommend for now is to keep your C-states disabled or limited to C1, and just use your computer normally. If this kernel panic shows up again, and you're sure your C-state setting was effective, then I would consider my C-state hypothesis falsified.
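        One thing worth mentioning: the kernel's cpuidle sysfs interface does allow disabling one specific state per CPU. A sketch, assuming state index 3 is the suspect deep state on CPU 6 (check the name file first; indices vary between machines):

        ```shell
        # which C-state hides behind index 3 on CPU 6?
        cat /sys/devices/system/cpu/cpu6/cpuidle/state3/name

        # disable exactly that state on that CPU (0 to re-enable)
        echo 1 | sudo tee /sys/devices/system/cpu/cpu6/cpuidle/state3/disable
        ```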

        If, however, your system runs normally for a few days, or "long enough for you to feel good about it", with disabled C-states, that would be a strong indication of some kind of issue when entering deeper sleep modes. You may then try increasing the C-state limit again until your system becomes unstable. Then you at least know a workaround, at the cost of some power savings, and you can try to find specific issues with your CPU or mainboard concerning the faulty sleep mode on Linux.

        Best of luck!

  • Screen freezes should also leave traces in your syslog, if they're caused by any panic or GPU driver issue. You might want to check if your system is still accessible via SSH, if only the screen froze, and try killing X from there, if switching to text VTs doesn't work. SysRq might become helpful, too.
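    If the console is truly wedged, a SysRq sketch (assumes the kernel was built with magic SysRq support, which most distro kernels are):

    ```shell
    # check / enable all SysRq functions (1 = everything allowed)
    cat /proc/sys/kernel/sysrq
    echo 1 | sudo tee /proc/sys/kernel/sysrq

    # on a frozen console: Alt+SysRq+s (sync), Alt+SysRq+u (remount read-only),
    # Alt+SysRq+b (reboot) is gentler on the filesystem than holding the power button
    ```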

  • screen froze, and I was forced to reboot the PC by pressing the power button for 3s

    seems like some data was saved, while other files were discarded

    I would not worry too much about a somehow "forgetful" file system immediately after a hard power cycle. This is exactly what happens when data could not be flushed to disk: thanks to journaling, your FS does not get corrupted, but data lingering in caches is still lost and discarded on fsck, to retain a consistent FS. I would recommend repeating the installations you did before the crash, and maybe shoving a manual sync behind them, to make sure you don't encounter totally weird "bugs" with man later, when you no longer remember this as a cause.

    Your bash history is saved to file on clean shell exit only, and is generally a bit non-intuitive, especially with multiple interactive shells in parallel, so I would personally disregard the old .bash_history file as "not a fault, only confusing" and let that rest, too.
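    A sketch of both suggestions, assuming bash and a Debian-style package name (man-db):

    ```shell
    # repeat the installation that was lost in the hard reset, then force
    # dirty caches to disk so another crash cannot eat it again:
    #   sudo apt install --reinstall man-db
    sync

    # make bash append each command to ~/.bash_history immediately,
    # instead of only on clean shell exit
    echo 'PROMPT_COMMAND="history -a"' >> "$HOME/.bashrc"
    ```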

    Starting a long SMART self-test and keeping a keen eye on the drive's error log (smartctl -l error <drive>), or better yet, all available SMART info (-x), to see if anything seems fishy with your drive is a good idea anyway. Keep in mind that your mainboard / drive controller or its connection may just as well be (intermittently) faulty. In ye olden times, a defective disk cable or socket messed up my system once or twice. You will see such faults in your syslog, though; this is not invisible. You don't get a bare kernel panic without some sprinkling of I/O errors as well. If your drive is SMART-OK but you clearly get disk I/O errors, it's time to inspect and clean the SSD socket and contacts and re-seat once more. If you never saw any disk I/O errors, and your disk's logs are clean, I'd consider the SSD not an issue.
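    The smartctl invocations, sketched for a hypothetical /dev/nvme0n1; substitute your actual device, and expect the long self-test to take a while:

    ```shell
    sudo smartctl -t long /dev/nvme0n1    # kick off a long self-test in the background
    sudo smartctl -l error /dev/nvme0n1   # the drive's own error log
    sudo smartctl -x /dev/nvme0n1         # everything SMART has to say

    # scan the kernel log for disk I/O errors in parallel
    journalctl -k | grep -iE 'i/o error|ata[0-9]|nvme'
    ```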

    If you encounter random kernel panics, random as in "in different and unrelated call stacks that do not make sense in any other way", I agree that RAM is a likely culprit, or an electrical fault somewhere on the mainboard. It's rare, but it happens. If you can, replace (only) the mainboard, or better yet, take a working PC with compatible parts and swap its working mainboard for your suspected broken one, to see if the previously working machine now faults. "Carrying the fault with you" is easier/quicker than proving an intermittent fault gone.

    Unless you get different kernel panics, my money's still on your C-states handling. I'd prefer the lowest level you can find to inhibit your CPUs from going to sleep, i.e. BIOS > kernel boot args > sysfs > cpupower, to keep the stack thin. If that is finicky somehow, you could alternatively boot with a single CPU and leave the rest disabled (boot arg nosmp). The point is just to find out where to focus your attention, not to keep this as a long-term workaround.
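    If rebooting with nosmp is inconvenient, CPUs can also be taken offline at runtime via sysfs (cpu0 usually cannot be offlined):

    ```shell
    # take CPU 1 offline, and back online again
    echo 0 | sudo tee /sys/devices/system/cpu/cpu1/online
    echo 1 | sudo tee /sys/devices/system/cpu/cpu1/online

    # see which CPUs are currently online
    cat /sys/devices/system/cpu/online
    ```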

    To keep N CPUs running, I usually just background N infinite loops in bash:

    $ cpus=4; for i in $(seq 1 $cpus); do { while true; do true; done; } & done 
    [1] 7185
    [2] 7186
    [3] 7187
    [4] 7188

    In your case you might change that to:

    cpus=4; for i in $(seq 0 $((cpus - 1))); do { taskset -c $i bash -c 'while true; do sleep 1; done'; } & done

    That just kicks each CPU every second; it does not have to be stressed. The taskset binds each loop to one CPU, preventing the system from cleverly distributing the tiny load across cores. This could also become a terrible, terrible workaround to keep running if all else fails. :)
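    To verify the pinning took effect, and to tear the loops down again (PSR is the processor the task last ran on):

    ```shell
    # the [w] grep trick avoids matching the grep process itself
    ps -eo pid,psr,args | grep '[w]hile true' || echo "no loops running"

    # stop all background jobs of the current shell
    jobs -p | xargs -r kill
    ```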
