NVMe: Correct sg list setup in nvme_map_user_pages
Our SG list was constructed to always fill the entire first page, even
if that was more than the length of the I/O. This is probably harmless,
but some IOMMUs might do something bad.
Correcting the first call to sg_set_page() made it look a lot like the
one in the loop, so fold it into the loop and zero the offset after the
first iteration.
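The per-segment arithmetic of the folded loop can be sketched in
userspace; sg_segment_lengths() is a hypothetical helper written for
illustration, not kernel code:

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096

/* Compute the byte length of each scatterlist segment for an I/O of
 * `length` bytes starting at `offset` within the first page, mirroring
 * min_t(int, length, PAGE_SIZE - offset) with offset zeroed after the
 * first iteration. Returns the number of segments written to lens[]. */
static int sg_segment_lengths(size_t length, size_t offset,
			      size_t *lens, int max)
{
	int i;

	for (i = 0; i < max && length > 0; i++) {
		size_t chunk = PAGE_SIZE - offset;

		if (chunk > length)
			chunk = length;	/* last, partial segment */
		lens[i] = chunk;
		length -= chunk;
		offset = 0;		/* only the first page is offset */
	}
	return i;
}
```

Unlike the old code, the first segment's length is also clamped to the
remaining I/O length, so a short I/O no longer maps the whole first page.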
Reported-by: Nisheeth Bhat <nisheeth.bhat@intel.com>
Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
diff --git a/drivers/block/nvme.c b/drivers/block/nvme.c
index 0956e12..5843409 100644
--- a/drivers/block/nvme.c
+++ b/drivers/block/nvme.c
@@ -996,11 +996,11 @@
sg = kcalloc(count, sizeof(*sg), GFP_KERNEL);
sg_init_table(sg, count);
- sg_set_page(&sg[0], pages[0], PAGE_SIZE - offset, offset);
- length -= (PAGE_SIZE - offset);
- for (i = 1; i < count; i++) {
- sg_set_page(&sg[i], pages[i], min_t(int, length, PAGE_SIZE), 0);
- length -= PAGE_SIZE;
+ for (i = 0; i < count; i++) {
+ sg_set_page(&sg[i], pages[i],
+ min_t(int, length, PAGE_SIZE - offset), offset);
+ length -= (PAGE_SIZE - offset);
+ offset = 0;
}
err = -ENOMEM;