And OOM kills on some processes are bloating the log files, hence causing a lot of I/O on the OS disk, and the system becomes unstable and hard to interact with.
NO!
Your system ran out of memory, so badly that the OOM killer was triggered. That is what makes your system unstable.
You're mistaking cause and effect there.
IMHO, if the out-of-memory killer needs to run frequently, that is the actual issue you need to address.
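To see how frequently it is actually being triggered, you can search the kernel log. A minimal sketch, assuming a systemd-based system with journalctl (the dmesg variant works elsewhere):

    # Kernel messages logged each time the OOM killer fires
    journalctl -k --grep 'Out of memory'

    # Or, without journald, search the kernel ring buffer
    dmesg -T | grep -i 'out of memory'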
Unless your system is really large, or ridiculously undersized, generating and storing the task dump produced by the OOM killer should not be a real issue.
But on large systems, you can set the kernel tunable vm.oom_dump_tasks to 0 to disable the task dump (see the example below the quoted documentation).
See https://www.kernel.org/doc/Documentation/sysctl/vm.txt
oom_dump_tasks
Enables a system-wide task dump (excluding kernel threads) to be
produced when the kernel performs an OOM-killing and includes such
information as pid, uid, tgid, vm size, rss, pgtables_bytes, swapents,
oom_score_adj score, and name. This is helpful to determine why the
OOM killer was invoked, to identify the rogue task that caused it, and
to determine why the OOM killer chose the task it did to kill.
If this is set to zero, this information is suppressed. On very large
systems with thousands of tasks it may not be feasible to dump the
memory state information for each one. Such systems should not be
forced to incur a performance penalty in OOM conditions when the
information may not be desired.
If this is set to non-zero, this information is shown whenever the OOM
killer actually kills a memory-hogging task.
The default value is 1 (enabled).
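If you do decide to disable the dump on such a system, a minimal sketch, assuming root privileges and a distribution that reads drop-in files from /etc/sysctl.d/ (the file name below is an arbitrary example):

    # Check the current setting
    sysctl vm.oom_dump_tasks

    # Disable the task dump for the running kernel, effective immediately
    sysctl -w vm.oom_dump_tasks=0

    # Persist the setting across reboots
    echo 'vm.oom_dump_tasks = 0' > /etc/sysctl.d/99-oom-dump.conf

Keep in mind that with the dump disabled you lose exactly the per-task information that would tell you which process is exhausting memory, so weigh that against the I/O savings.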