mm: migrate high-order folios in swap cache correctly
The following commit fixes an issue that caused tasks to hang while attempting to take mm->mmap_lock.
commit fc346d0a70a13d52fe1c4bc49516d83a42cd7c4c
Author: Charan Teja Kalla <quic_charante@quicinc.com>
Date: Thu Dec 14 04:58:41 2023 +0000
mm: migrate high-order folios in swap cache correctly
Large folios occupy N consecutive entries in the swap cache instead of
using multi-index entries like the page cache. However, if a large folio
is re-added to the LRU list, it can be migrated. The migration code was
not aware of the difference between the swap cache and the page cache and
assumed that a single xas_store() would be sufficient.
This leaves potentially many stale pointers to the now-migrated folio in
the swap cache, which can lead to almost arbitrary data corruption in the
future. This can also manifest as infinite loops with the RCU read lock
held.
JIRA: https://issues.redhat.com/browse/RHEL-23654
Signed-off-by: Nico Pache <npache@redhat.com>