Revert "udmabuf: fix vmap_udmabuf error page set"

This reverts commit 18d7de823b.

We cannot use vmap_pfn() in vmap_udmabuf() as it would fail the pfn_valid()
check in vmap_pfn_apply(). This is because vmap_pfn() is intended to be
used for mapping non-struct-page memory such as PCIe BARs. Since udmabuf
mostly works with pages/folios backed by shmem/hugetlbfs/THP, vmap_pfn()
is not the right API for implementing vmap.

Signed-off-by: Huan Yang <link@vivo.com>
Suggested-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
Reported-by: Bingbu Cao <bingbu.cao@linux.intel.com>
Closes: https://lore.kernel.org/dri-devel/eb7e0137-3508-4287-98c4-816c5fd98e10@vivo.com/T/#mbda4f64a3532b32e061f4e8763bc8e307bea3ca8
Acked-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
Link: https://lore.kernel.org/r/20250428073831.19942-2-link@vivo.com
Author: Huan Yang <link@vivo.com>, 2025-04-28 15:38:29 +08:00, committed by Vivek Kasireddy
parent 549810e918
commit ceb7b62eaa
2 changed files with 7 additions and 16 deletions

drivers/dma-buf/Kconfig

@@ -36,7 +36,6 @@ config UDMABUF
 	depends on DMA_SHARED_BUFFER
 	depends on MEMFD_CREATE || COMPILE_TEST
 	depends on MMU
-	select VMAP_PFN
 	help
 	  A driver to let userspace turn memfd regions into dma-bufs.
 	  Qemu can use this to create host dmabufs for guest framebuffers.

drivers/dma-buf/udmabuf.c

@@ -109,29 +109,21 @@ static int mmap_udmabuf(struct dma_buf *buf, struct vm_area_struct *vma)
 static int vmap_udmabuf(struct dma_buf *buf, struct iosys_map *map)
 {
 	struct udmabuf *ubuf = buf->priv;
-	unsigned long *pfns;
+	struct page **pages;
 	void *vaddr;
 	pgoff_t pg;
 
 	dma_resv_assert_held(buf->resv);
 
-	/**
-	 * HVO may free tail pages, so just use pfn to map each folio
-	 * into vmalloc area.
-	 */
-	pfns = kvmalloc_array(ubuf->pagecount, sizeof(*pfns), GFP_KERNEL);
-	if (!pfns)
+	pages = kvmalloc_array(ubuf->pagecount, sizeof(*pages), GFP_KERNEL);
+	if (!pages)
 		return -ENOMEM;
 
-	for (pg = 0; pg < ubuf->pagecount; pg++) {
-		unsigned long pfn = folio_pfn(ubuf->folios[pg]);
-
-		pfn += ubuf->offsets[pg] >> PAGE_SHIFT;
-		pfns[pg] = pfn;
-	}
-
-	vaddr = vmap_pfn(pfns, ubuf->pagecount, PAGE_KERNEL);
-	kvfree(pfns);
+	for (pg = 0; pg < ubuf->pagecount; pg++)
+		pages[pg] = &ubuf->folios[pg]->page;
+
+	vaddr = vm_map_ram(pages, ubuf->pagecount, -1);
+	kvfree(pages);
 	if (!vaddr)
 		return -EINVAL;