When running heavy workloads on UEK kernels on systems with a lot of memory, you might see errors like the following in /var/log/messages:
INFO: task bonnie++:31785 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
bonnie++      D ffff810009004420     0 31785  11051 11096 (NOTLB)
 ffff81021c771aa8 0000000000000082 ffff81103e62ccc0 ffffffff88031cb3
 ffff810ac94cd6c0 0000000000000007 ffff810220347820 ffffffff80310b60
 00016803dfd77991 00000000001312ee ffff810220347a08 0000000000000001
Call Trace:
 [<ffffffff88031cb3>] :jbd:do_get_write_access+0x4f9/0x530
 [<ffffffff800ce675>] zone_statistics+0x3e/0x6d
 [<ffffffff88032002>] :jbd:start_this_handle+0x2e5/0x36c
 [<ffffffff800a28b4>] autoremove_wake_function+0x0/0x2e
 [<ffffffff88032152>] :jbd:journal_start+0xc9/0x100
 [<ffffffff88050362>] :ext3:ext3_write_begin+0x9a/0x1cc
 [<ffffffff8000fda3>] generic_file_buffered_write+0x14b/0x675
 [<ffffffff80016679>] __generic_file_aio_write_nolock+0x369/0x3b6
 [<ffffffff80021850>] generic_file_aio_write+0x65/0xc1
 [<ffffffff8804c1b6>] :ext3:ext3_file_write+0x16/0x91
 [<ffffffff800182df>] do_sync_write+0xc7/0x104
 [<ffffffff800a28b4>] autoremove_wake_function+0x0/0x2e
 [<ffffffff80062ff0>] thread_return+0x62/0xfe
 [<ffffffff80016a81>] vfs_write+0xce/0x174
 [<ffffffff80017339>] sys_write+0x45/0x6e
 [<ffffffff8005d28d>] tracesys+0xd5/0xe0
This is a known issue. By default, Linux uses up to 40% of the available memory for file system caching. Once this mark is reached, the file system flushes all outstanding data to disk, causing all subsequent I/O to become synchronous. By default there is a time limit of 120 seconds for flushing this data to disk. In the case shown here, the I/O subsystem was not fast enough to flush the data within 120 seconds. This especially happens on systems with a lot of memory.
The problem is solved in later kernels and there is no “fix” from Oracle. I fixed it by lowering the threshold for flushing the cache from 40% to 10% by setting “vm.dirty_ratio=10” in /etc/sysctl.conf. This setting does not influence overall database performance, since you hopefully use Direct I/O and bypass the file system cache completely.
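For reference, applying the change looks like this (a minimal sketch; 10 is the value I used above, adjust to what your I/O subsystem can handle):

sysctl vm.dirty_ratio                           # show the current threshold
echo "vm.dirty_ratio=10" >> /etc/sysctl.conf    # persist the lower threshold
sysctl -p                                       # activate it without a reboot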
Thank you very much. It really helped me.
Doing “echo 0 > /proc/sys/kernel/hung_task_timeout_secs” only disables the message, doesn’t it?
It might even make things worse, I’d argue.
What do you suggest instead?
Yes, it does, but the message is there for a reason, so disabling it is not a good idea.
Pingback: Reason and Workaround for “task updatdb.mlocat blocked for more than 120s” | Georgs Blog
Pingback: INFO: task blocked for more than 120 seconds. Troubleshooting « P哥运维
Thank you, this solved my problem on a busy Enterprise Manager Cloud Control 12c2 + Oracle Database 11.2.0.3 system, you are a life saver!
Pingback: domU hung | Jasey Wang
Thank you very much!
Pingback: Task blocked for more than 120 seconds errors and crashes - Page 2
Thanks! This works :-)
Pingback: Supermicro H8DCL-iF crashes randomly with strange kernel messages
Can you please mention the exact kernel version in which this bug is fixed?
Hi Kati,
this is not really a bug. It basically says that the kernel is blocked writing data to disk.
The solution implemented in newer kernels is to lower the “dirty_ratio” parameter from 40% to a more realistic value like 5 or so.
You can implement that yourself by setting in your /etc/sysctl.conf the following value:
vm.dirty_ratio=5
# i.e. 5 percent of total memory
and activate it with “sysctl -p”.
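Afterwards you can check that the new value is active:

sysctl vm.dirty_ratio
# should now report: vm.dirty_ratio = 5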
Hi Ronny,
I am facing this issue on the 2.6.39-300.17.3.el6uek.x86_64 kernel. The default dirty_ratio is
vm.dirty_ratio = 20
Would it help if I reduced it to 10 or 5?
Hi,
yes, very likely. You also want to use Direct I/O wherever possible to bypass the file system cache.
When you said “The problem is solved in later kernels” back in 2011, how much later did you mean? We are on Ubuntu 12.04 with kernel 3.8 and are having this issue.
Then you either want to use Direct I/O or lower the value of vm.dirty_ratio (e.g. to 1) to have less data to flush.
What do you mean in detail by Direct I/O? Mounting the file systems with it? I am having the same problems on Debian + VMware during backups of the databases (once per month or less). I thought Oracle was smart enough to access its own files with direct I/O?! Do you know an article or something where I can read about it?
Thanks anyway, I’ll try reducing dirty_ratio, maybe that’s enough.
You have to activate Direct I/O in the database with a database parameter. On Linux there is no “forcedirectio” mount option as there is on Solaris.
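For Oracle Database the parameter in question is typically FILESYSTEMIO_OPTIONS; a minimal sketch (SETALL enables both direct and asynchronous I/O, and the change only takes effect after an instance restart):

sqlplus / as sysdba <<'EOF'
-- SETALL = direct I/O plus asynchronous I/O for datafile access
ALTER SYSTEM SET filesystemio_options=SETALL SCOPE=SPFILE;
EOF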
Hi,
We are using RHEL 6.2 32-bit (physical Dell server) and the kernel is 2.6.32-220.el6.i686. We face the same kind of issues on this server. I have checked /etc/sysctl.conf; it does not contain any “vm.dirty_ratio” line. Please let me know what the workaround is.
Hi,
just add the line and activate it. That’s all.
I’ve seen this as recently as 3.16 kernels. It has to do with heavy write loads, not databases in particular. In my case I triggered it by dd’ing in 256 megabyte chunks.
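Something like the following reproduces it on a disk that cannot keep up (the target path and count here are made up):

dd if=/dev/zero of=/tmp/bigfile bs=256M count=64
# 64 chunks of 256 MiB = 16 GiB, dirtying the page cache faster than the disk drains it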
# grep . /etc/issue
Debian GNU/Linux 8 \n \l
# uname -a
Linux 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt9-3~deb8u1 (2015-04-24) x86_64 GNU/Linux
It occurs on heavy write load. Last time I saw this, I fixed it by changing the I/O scheduling from cfq to deadline.
# for f in /sys/devices/*/*/*/*/*/*/block/sda/queue/scheduler; do echo deadline > $f; done
And I added a script to that effect to my rc.local
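A minimal version of such a script might look like this (assuming sda and a classic /etc/rc.local):

#!/bin/sh
# switch sda to the deadline elevator at boot
echo deadline > /sys/block/sda/queue/scheduler
exit 0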
Tony
After I edited /etc/sysctl.conf and rebooted, it says: Can’t connect to default. Skipping. Waiting for Shutdown of managed resources…