Kicked to initramfs after rebooting for no reason
https://askubuntu.com/questions/1259341/kicked-to-initramfs-after-rebooting-for-no-reason
I'm running Ubuntu 16.04 on an HP T620 Thin Client. For the past few days I've been struggling with a new issue. I started up my PC normally and everything looked just fine, except that the Psensor indicator displayed only CPU usage (which caught my attention because I had configured GPU monitoring as well). I thought "OK, fine, a restart might help", but after rebooting I was dropped to initramfs (with no apparent reason) with this output:
[ 2.759289] Couldn't get size: 0x800000000000000e
/dev/sda2 contains a file system with errors, check forced.
Inodes that were part of a corrupted orphan linked list found.
/dev/sda2: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
(i.e., without -a or -p options)
fsck exited with status code 4
The root filesystem on /dev/sda2 requires a manual fsck
BusyBox v1.22.1 (Ubuntu 1:1.22.0-15ubuntu1.4) built-in shell (ash)
Enter 'help' for a list of built-in commands.
(initramfs) shutdown_
Ubuntu suggested running fsck, so I did. After that Ubuntu restarted properly, but concerned about my PC's security (something had corrupted important system files, causing this whole "initramfs situation"), I ran dmesg to view the logs and look for suspicious entries.
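For reference, the repair I ran from the (initramfs) prompt was roughly the following (device name taken from the error message above; -y auto-answers "yes" to every repair prompt, so use it only when you accept whatever e2fsck decides):

```shell
# At the (initramfs) prompt: force a full check of the root
# filesystem named in the error message and repair it.
fsck -f -y /dev/sda2
# Leave the initramfs shell; the boot then continues normally.
exit
```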
I found only this:
Couldn't get size: 0x800000000000000e
MODSIGN: Couldn't get UEFI MokListRT
ACPI Error: Method parse/execution failed \HWMC, AE_AML_UNINITIALIZED_ELEMENT (20170831/psparse-550)
ACPI Error: Method parse/execution failed \_SB.WMID.WMAA, AE_AML_UNINITIALIZED_ELEMENT (20170831/psparse-550)
The disk's SMART parameters are good (I checked them and even ran some diagnostics to be sure), and I've changed nothing in the BIOS (especially not in the power management settings).
What exactly happened? Should I be concerned about system security, i.e. could this have been caused by a virus or something similar? I searched the forum for a similar thread, but didn't find anyone with the same problem.
Today I decided to take a closer look at the SMART logs, because the problem is still present and I want to find out what is going on with my system and what caused the data corruption I mentioned earlier. I ran the short and extended DST (drive self-test) from UEFI. The short test passed; the extended test failed. That was not confirmed by tests run from within the OS (all of those passed) until today: now the disk reports that it fails every self-test and has some uncorrectable errors. The system itself does not hang and shows no symptoms that anything is wrong.
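For completeness, the in-OS SMART checks I ran were along these lines (smartctl comes from the smartmontools package; exact attribute names vary by drive model):

```shell
# Overall health verdict (PASSED / FAILED)
sudo smartctl -H /dev/sda
# Self-test log: short/extended test results and, on failure,
# the LBA of the first bad sector
sudo smartctl -l selftest /dev/sda
# Attributes most relevant to bad sectors
sudo smartctl -A /dev/sda | grep -Ei 'realloc|pending|uncorrect'
# Start a new extended self-test from the OS
sudo smartctl -t long /dev/sda
```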
So is the problem I'm facing actually bad sectors / bad blocks? And is this disk failure correlated with the problem described earlier?


Output of the hdparm command:
ubuntu@hp-t620:~$ sudo hdparm --read-sector 12032336 /dev/sda
/dev/sda:
reading sector 12032336: SG_IO: bad/missing sense data, sb[]: 70 00 03 00 00 00 00 0a 40 51 e0 01 11 04 00 00 00 50 00 00 00 00 00 00 00 00 00 00 00 00 00 00
succeeded
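Decoding that sense buffer as standard fixed-format SCSI sense data (sense key in the low nibble of byte 2, ASC/ASCQ in bytes 12 and 13) seems to confirm a media problem; sense key 3 is MEDIUM ERROR, and ASC/ASCQ 11/04 means "unrecovered read error - auto reallocate failed":

```shell
# Fixed-format SCSI sense bytes copied from the hdparm output above
sb="70 00 03 00 00 00 00 0a 40 51 e0 01 11 04 00 00 00 50"
set -- $sb
# Sense key = low nibble of byte 2 (here 0x03 = MEDIUM ERROR)
key=$(( 0x$3 & 0x0F ))
# ASC/ASCQ = bytes 12 and 13 (here 11/04 =
# "unrecovered read error - auto reallocate failed")
echo "sense key: $key  ASC/ASCQ: ${13}/${14}"
```

So the drive could not read sector 12032336 and was also unable to reallocate it automatically, which matches the failing extended self-test.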
Before asking the question here, I tried modifying the GRUB attributes (as suggested in the question pointed out by @user535733), but this did not bring the desired results.