mm/filemap: Add folio_wait_bit()
Rename wait_on_page_bit() to folio_wait_bit() and wait_on_page_bit_killable()
to folio_wait_bit_killable(). We must always wait on the folio; otherwise we
won't be woken up, because the tail page hashes to a different bucket from
the head page.
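For context, a minimal sketch of the hashed wait-table lookup this relies on,
modelled on page_waitqueue() in mm/filemap.c (table size and helper body are
illustrative, not lifted from this patch): waiter and waker must hash the same
pointer to meet in the same bucket, so a waiter hashing a tail page while the
waker hashes the head page will never be woken.

#include <linux/hash.h>
#include <linux/wait.h>

#define PAGE_WAIT_TABLE_BITS 8
#define PAGE_WAIT_TABLE_SIZE (1 << PAGE_WAIT_TABLE_BITS)

static wait_queue_head_t page_wait_table[PAGE_WAIT_TABLE_SIZE];

/*
 * Waiter and waker must hash the same pointer to land in the same
 * bucket.  A tail page and its head page are different pointers, so
 * they select different wait queues.
 */
static wait_queue_head_t *page_waitqueue(struct page *page)
{
	return &page_wait_table[hash_ptr(page, PAGE_WAIT_TABLE_BITS)];
}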
This commit shrinks the kernel by 770 bytes, mostly due to moving
the page waitqueue lookup into folio_wait_bit_common().
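A sketch of what the new entry points reduce to once the waitqueue lookup
lives inside folio_wait_bit_common() (the SHARED behaviour flag name follows
mm/filemap.c and is illustrative here):

/* The waitqueue lookup happens once, inside folio_wait_bit_common(). */
void folio_wait_bit(struct folio *folio, int bit_nr)
{
	folio_wait_bit_common(folio, bit_nr, TASK_UNINTERRUPTIBLE, SHARED);
}

int folio_wait_bit_killable(struct folio *folio, int bit_nr)
{
	return folio_wait_bit_common(folio, bit_nr, TASK_KILLABLE, SHARED);
}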
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jeff Layton <jlayton@kernel.org>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Reviewed-by: David Howells <dhowells@redhat.com>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 0b0b7cd..1d8f2ee 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2889,7 +2889,7 @@ void folio_wait_writeback(struct folio *folio)
 {
 	while (folio_test_writeback(folio)) {
 		trace_wait_on_page_writeback(&folio->page, folio_mapping(folio));
-		wait_on_page_bit(&folio->page, PG_writeback);
+		folio_wait_bit(folio, PG_writeback);
 	}
 }
 EXPORT_SYMBOL_GPL(folio_wait_writeback);
@@ -2911,7 +2911,7 @@ int folio_wait_writeback_killable(struct folio *folio)
 {
 	while (folio_test_writeback(folio)) {
 		trace_wait_on_page_writeback(&folio->page, folio_mapping(folio));
-		if (wait_on_page_bit_killable(&folio->page, PG_writeback))
+		if (folio_wait_bit_killable(folio, PG_writeback))
 			return -EINTR;
 	}