From eddb1c228f7951d399240a0cc57455dccc7f8777 Mon Sep 17 00:00:00 2001 From: John Hubbard Date: Thu, 30 Jan 2020 22:12:54 -0800 Subject: mm/gup: introduce pin_user_pages*() and FOLL_PIN MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Introduce pin_user_pages*() variations of get_user_pages*() calls, and also pin_longterm_pages*() variations. For now, these are placeholder calls, until the various call sites are converted to use the correct get_user_pages*() or pin_user_pages*() API. These variants will eventually all set FOLL_PIN, which is also introduced, and thoroughly documented. pin_user_pages() pin_user_pages_remote() pin_user_pages_fast() All pages that are pinned via the above calls, must be unpinned via put_user_page(). The underlying rules are: * FOLL_PIN is a gup-internal flag, so the call sites should not directly set it. That behavior is enforced with assertions. * Call sites that want to indicate that they are going to do DirectIO ("DIO") or something with similar characteristics, should call a get_user_pages()-like wrapper call that sets FOLL_PIN. These wrappers will: * Start with "pin_user_pages" instead of "get_user_pages". That makes it easy to find and audit the call sites. * Set FOLL_PIN * For pages that are received via FOLL_PIN, those pages must be returned via put_user_page(). Thanks to Jan Kara and Vlastimil Babka for explaining the 4 cases in this documentation. (I've reworded it and expanded upon it.) Link: http://lkml.kernel.org/r/20200107224558.2362728-12-jhubbard@nvidia.com Signed-off-by: John Hubbard Reviewed-by: Jan Kara Reviewed-by: Mike Rapoport [Documentation] Reviewed-by: Jérôme Glisse Cc: Jonathan Corbet Cc: Ira Weiny Cc: Alex Williamson Cc: Aneesh Kumar K.V Cc: Björn Töpel Cc: Christoph Hellwig Cc: Daniel Vetter Cc: Dan Williams Cc: Hans Verkuil Cc: Jason Gunthorpe Cc: Jason Gunthorpe Cc: Jens Axboe Cc: Kirill A. Shutemov Cc: Leon Romanovsky Cc: Mauro Carvalho Chehab Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- Documentation/core-api/index.rst | 1 + Documentation/core-api/pin_user_pages.rst | 232 ++++++++++++++++++++++++++++++ 2 files changed, 233 insertions(+) create mode 100644 Documentation/core-api/pin_user_pages.rst (limited to 'Documentation') diff --git a/Documentation/core-api/index.rst b/Documentation/core-api/index.rst index bc0c727d7fd8..a501dc1c90d0 100644 --- a/Documentation/core-api/index.rst +++ b/Documentation/core-api/index.rst @@ -31,6 +31,7 @@ Core utilities generic-radix-tree memory-allocation mm-api + pin_user_pages gfp_mask-from-fs-io timekeeping boot-time-mm diff --git a/Documentation/core-api/pin_user_pages.rst b/Documentation/core-api/pin_user_pages.rst new file mode 100644 index 000000000000..71849830cd48 --- /dev/null +++ b/Documentation/core-api/pin_user_pages.rst @@ -0,0 +1,232 @@ +.. SPDX-License-Identifier: GPL-2.0 + +==================================================== +pin_user_pages() and related calls +==================================================== + +.. contents:: :local: + +Overview +======== + +This document describes the following functions:: + + pin_user_pages() + pin_user_pages_fast() + pin_user_pages_remote() + +Basic description of FOLL_PIN +============================= + +FOLL_PIN and FOLL_LONGTERM are flags that can be passed to the get_user_pages*() +("gup") family of functions. FOLL_PIN has significant interactions and +interdependencies with FOLL_LONGTERM, so both are covered here. 
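Before going into the rules, here is a minimal sketch (illustrative only; the
variable names and the simplified error handling are placeholders, not taken
from any real call site) of what a FOLL_PIN-based caller looks like. Pages
acquired via pin_user_pages*() must all be returned via put_user_page()::

    /*
     * Sketch of a DIO-style call site. "user_addr", "nr_pages" and "pages"
     * are hypothetical names; real callers handle partial pins and error
     * paths more carefully.
     */
    struct page **pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
    int i, pinned;

    if (!pages)
            return -ENOMEM;

    pinned = pin_user_pages_fast(user_addr, nr_pages, FOLL_WRITE, pages);
    if (pinned > 0) {
            /* ... the pages are used as DIO/DMA buffers here ... */
            for (i = 0; i < pinned; i++)
                    put_user_page(pages[i]);
    }
    kfree(pages);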
+ +FOLL_PIN is internal to gup, meaning that it should not appear at the gup call +sites. This allows the associated wrapper functions (pin_user_pages*() and +others) to set the correct combination of these flags, and to check for problems +as well. + +FOLL_LONGTERM, on the other hand, *is* allowed to be set at the gup call sites. +This is in order to avoid creating a large number of wrapper functions to cover +all combinations of get*(), pin*(), FOLL_LONGTERM, and more. Also, the +pin_user_pages*() APIs are clearly distinct from the get_user_pages*() APIs, so +that's a natural dividing line, and a good point to make separate wrapper calls. +In other words, use pin_user_pages*() for DMA-pinned pages, and +get_user_pages*() for other cases. There are four cases described later on in +this document, to further clarify that concept. + +FOLL_PIN and FOLL_GET are mutually exclusive for a given gup call. However, +multiple threads and call sites are free to pin the same struct pages, via both +FOLL_PIN and FOLL_GET. It's just the call site that needs to choose one or the +other, not the struct page(s). + +The FOLL_PIN implementation is nearly the same as FOLL_GET, except that FOLL_PIN +uses a different reference counting technique. + +FOLL_PIN is a prerequisite to FOLL_LONGTERM. Another way of saying that is, +FOLL_LONGTERM is a specific case, more restrictive case of FOLL_PIN. + +Which flags are set by each wrapper +=================================== + +For these pin_user_pages*() functions, FOLL_PIN is OR'd in with whatever gup +flags the caller provides. The caller is required to pass in a non-null struct +pages* array, and the function then pin pages by incrementing each by a special +value. For now, that value is +1, just like get_user_pages*().:: + + Function + -------- + pin_user_pages FOLL_PIN is always set internally by this function. + pin_user_pages_fast FOLL_PIN is always set internally by this function. + pin_user_pages_remote FOLL_PIN is always set internally by this function. + +For these get_user_pages*() functions, FOLL_GET might not even be specified. +Behavior is a little more complex than above. If FOLL_GET was *not* specified, +but the caller passed in a non-null struct pages* array, then the function +sets FOLL_GET for you, and proceeds to pin pages by incrementing the refcount +of each page by +1.:: + + Function + -------- + get_user_pages FOLL_GET is sometimes set internally by this function. + get_user_pages_fast FOLL_GET is sometimes set internally by this function. + get_user_pages_remote FOLL_GET is sometimes set internally by this function. + +Tracking dma-pinned pages +========================= + +Some of the key design constraints, and solutions, for tracking dma-pinned +pages: + +* An actual reference count, per struct page, is required. This is because + multiple processes may pin and unpin a page. + +* False positives (reporting that a page is dma-pinned, when in fact it is not) + are acceptable, but false negatives are not. + +* struct page may not be increased in size for this, and all fields are already + used. + +* Given the above, we can overload the page->_refcount field by using, sort of, + the upper bits in that field for a dma-pinned count. "Sort of", means that, + rather than dividing page->_refcount into bit fields, we simple add a medium- + large value (GUP_PIN_COUNTING_BIAS, initially chosen to be 1024: 10 bits) to + page->_refcount. 
This provides fuzzy behavior: if a page has get_page() called + on it 1024 times, then it will appear to have a single dma-pinned count. + And again, that's acceptable. + +This also leads to limitations: there are only 31-10==21 bits available for a +counter that increments 10 bits at a time. + +TODO: for 1GB and larger huge pages, this is cutting it close. That's because +when pin_user_pages() follows such pages, it increments the head page by "1" +(where "1" used to mean "+1" for get_user_pages(), but now means "+1024" for +pin_user_pages()) for each tail page. So if you have a 1GB huge page: + +* There are 256K (18 bits) worth of 4 KB tail pages. +* There are 21 bits available to count up via GUP_PIN_COUNTING_BIAS (that is, + 10 bits at a time) +* There are 21 - 18 == 3 bits available to count. Except that there aren't, + because you need to allow for a few normal get_page() calls on the head page, + as well. Fortunately, the approach of using addition, rather than "hard" + bitfields, within page->_refcount, allows for sharing these bits gracefully. + But we're still looking at about 8 references. + +This, however, is a missing feature more than anything else, because it's easily +solved by addressing an obvious inefficiency in the original get_user_pages() +approach of retrieving pages: stop treating all the pages as if they were +PAGE_SIZE. Retrieve huge pages as huge pages. The callers need to be aware of +this, so some work is required. Once that's in place, this limitation mostly +disappears from view, because there will be ample refcounting range available. + +* Callers must specifically request "dma-pinned tracking of pages". In other + words, just calling get_user_pages() will not suffice; a new set of functions, + pin_user_page() and related, must be used. + +FOLL_PIN, FOLL_GET, FOLL_LONGTERM: when to use which flags +========================================================== + +Thanks to Jan Kara, Vlastimil Babka and several other -mm people, for describing +these categories: + +CASE 1: Direct IO (DIO) +----------------------- +There are GUP references to pages that are serving +as DIO buffers. These buffers are needed for a relatively short time (so they +are not "long term"). No special synchronization with page_mkclean() or +munmap() is provided. Therefore, flags to set at the call site are: :: + + FOLL_PIN + +...but rather than setting FOLL_PIN directly, call sites should use one of +the pin_user_pages*() routines that set FOLL_PIN. + +CASE 2: RDMA +------------ +There are GUP references to pages that are serving as DMA +buffers. These buffers are needed for a long time ("long term"). No special +synchronization with page_mkclean() or munmap() is provided. Therefore, flags +to set at the call site are: :: + + FOLL_PIN | FOLL_LONGTERM + +NOTE: Some pages, such as DAX pages, cannot be pinned with longterm pins. That's +because DAX pages do not have a separate page cache, and so "pinning" implies +locking down file system blocks, which is not (yet) supported in that way. + +CASE 3: Hardware with page faulting support +------------------------------------------- +Here, a well-written driver doesn't normally need to pin pages at all. However, +if the driver does choose to do so, it can register MMU notifiers for the range, +and will be called back upon invalidation. Either way (avoiding page pinning, or +using MMU notifiers to unpin upon request), there is proper synchronization with +both filesystem and mm (page_mkclean(), munmap(), etc). 
+ +Therefore, neither flag needs to be set. + +In this case, ideally, neither get_user_pages() nor pin_user_pages() should be +called. Instead, the software should be written so that it does not pin pages. +This allows mm and filesystems to operate more efficiently and reliably. + +CASE 4: Pinning for struct page manipulation only +------------------------------------------------- +Here, normal GUP calls are sufficient, so neither flag needs to be set. + +page_dma_pinned(): the whole point of pinning +============================================= + +The whole point of marking pages as "DMA-pinned" or "gup-pinned" is to be able +to query, "is this page DMA-pinned?" That allows code such as page_mkclean() +(and file system writeback code in general) to make informed decisions about +what to do when a page cannot be unmapped due to such pins. + +What to do in those cases is the subject of a years-long series of discussions +and debates (see the References at the end of this document). It's a TODO item +here: fill in the details once that's worked out. Meanwhile, it's safe to say +that having this available: :: + + static inline bool page_dma_pinned(struct page *page) + +...is a prerequisite to solving the long-running gup+DMA problem. + +Another way of thinking about FOLL_GET, FOLL_PIN, and FOLL_LONGTERM +=================================================================== + +Another way of thinking about these flags is as a progression of restrictions: +FOLL_GET is for struct page manipulation, without affecting the data that the +struct page refers to. FOLL_PIN is a *replacement* for FOLL_GET, and is for +short term pins on pages whose data *will* get accessed. As such, FOLL_PIN is +a "more severe" form of pinning. And finally, FOLL_LONGTERM is an even more +restrictive case that has FOLL_PIN as a prerequisite: this is for pages that +will be pinned longterm, and whose data will be accessed. + +Unit testing +============ +This file:: + + tools/testing/selftests/vm/gup_benchmark.c + +has the following new calls to exercise the new pin*() wrapper functions: + +* PIN_FAST_BENCHMARK (./gup_benchmark -a) +* PIN_BENCHMARK (./gup_benchmark -b) + +You can monitor how many total dma-pinned pages have been acquired and released +since the system was booted, via two new /proc/vmstat entries: :: + + /proc/vmstat/nr_foll_pin_requested + /proc/vmstat/nr_foll_pin_requested + +Those are both going to show zero, unless CONFIG_DEBUG_VM is set. This is +because there is a noticeable performance drop in put_user_page(), when they +are activated. + +References +========== + +* `Some slow progress on get_user_pages() (Apr 2, 2019) `_ +* `DMA and get_user_pages() (LPC: Dec 12, 2018) `_ +* `The trouble with get_user_pages() (Apr 30, 2018) `_ + +John Hubbard, October, 2019 -- cgit v1.2.3 From f1f6a7dd9b53aafd81b696b9017036e7b08e57ea Mon Sep 17 00:00:00 2001 From: John Hubbard Date: Thu, 30 Jan 2020 22:13:35 -0800 Subject: mm, tree-wide: rename put_user_page*() to unpin_user_page*() MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit In order to provide a clearer, more symmetric API for pinning and unpinning DMA pages. This way, pin_user_pages*() calls match up with unpin_user_pages*() calls, and the API is a lot closer to being self-explanatory. 
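For illustration (the call site below is hypothetical; only the naming pattern
comes from this patch), acquire and release now read as a matched pair:

    pinned = pin_user_pages_fast(start, nr_pages, FOLL_WRITE, pages);
    ...
    unpin_user_pages_dirty_lock(pages, pinned, true);

rather than pairing pin_user_pages*() with the differently named
put_user_page*() calls.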
Link: http://lkml.kernel.org/r/20200107224558.2362728-23-jhubbard@nvidia.com Signed-off-by: John Hubbard Reviewed-by: Jan Kara Cc: Alex Williamson Cc: Aneesh Kumar K.V Cc: Björn Töpel Cc: Christoph Hellwig Cc: Daniel Vetter Cc: Dan Williams Cc: Hans Verkuil Cc: Ira Weiny Cc: Jason Gunthorpe Cc: Jason Gunthorpe Cc: Jens Axboe Cc: Jerome Glisse Cc: Jonathan Corbet Cc: Kirill A. Shutemov Cc: Leon Romanovsky Cc: Mauro Carvalho Chehab Cc: Mike Rapoport Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- Documentation/core-api/pin_user_pages.rst | 2 +- arch/powerpc/mm/book3s64/iommu_api.c | 4 ++-- drivers/gpu/drm/via/via_dmablit.c | 4 ++-- drivers/infiniband/core/umem.c | 2 +- drivers/infiniband/hw/hfi1/user_pages.c | 2 +- drivers/infiniband/hw/mthca/mthca_memfree.c | 6 +++--- drivers/infiniband/hw/qib/qib_user_pages.c | 2 +- drivers/infiniband/hw/qib/qib_user_sdma.c | 6 +++--- drivers/infiniband/hw/usnic/usnic_uiom.c | 2 +- drivers/infiniband/sw/siw/siw_mem.c | 2 +- drivers/media/v4l2-core/videobuf-dma-sg.c | 4 ++-- drivers/platform/goldfish/goldfish_pipe.c | 4 ++-- drivers/vfio/vfio_iommu_type1.c | 2 +- fs/io_uring.c | 4 ++-- include/linux/mm.h | 26 +++++++++++------------ mm/gup.c | 32 ++++++++++++++--------------- mm/process_vm_access.c | 4 ++-- net/xdp/xdp_umem.c | 2 +- 18 files changed, 55 insertions(+), 55 deletions(-) (limited to 'Documentation') diff --git a/Documentation/core-api/pin_user_pages.rst b/Documentation/core-api/pin_user_pages.rst index 71849830cd48..1d490155ecd7 100644 --- a/Documentation/core-api/pin_user_pages.rst +++ b/Documentation/core-api/pin_user_pages.rst @@ -219,7 +219,7 @@ since the system was booted, via two new /proc/vmstat entries: :: /proc/vmstat/nr_foll_pin_requested Those are both going to show zero, unless CONFIG_DEBUG_VM is set. This is -because there is a noticeable performance drop in put_user_page(), when they +because there is a noticeable performance drop in unpin_user_page(), when they are activated. 
References diff --git a/arch/powerpc/mm/book3s64/iommu_api.c b/arch/powerpc/mm/book3s64/iommu_api.c index a86547822034..eba73ebd8ae5 100644 --- a/arch/powerpc/mm/book3s64/iommu_api.c +++ b/arch/powerpc/mm/book3s64/iommu_api.c @@ -168,7 +168,7 @@ good_exit: free_exit: /* free the references taken */ - put_user_pages(mem->hpages, pinned); + unpin_user_pages(mem->hpages, pinned); vfree(mem->hpas); kfree(mem); @@ -214,7 +214,7 @@ static void mm_iommu_unpin(struct mm_iommu_table_group_mem_t *mem) if (mem->hpas[i] & MM_IOMMU_TABLE_GROUP_PAGE_DIRTY) SetPageDirty(page); - put_user_page(page); + unpin_user_page(page); mem->hpas[i] = 0; } diff --git a/drivers/gpu/drm/via/via_dmablit.c b/drivers/gpu/drm/via/via_dmablit.c index 37c5e572993a..719d036c9384 100644 --- a/drivers/gpu/drm/via/via_dmablit.c +++ b/drivers/gpu/drm/via/via_dmablit.c @@ -188,8 +188,8 @@ via_free_sg_info(struct pci_dev *pdev, drm_via_sg_info_t *vsg) kfree(vsg->desc_pages); /* fall through */ case dr_via_pages_locked: - put_user_pages_dirty_lock(vsg->pages, vsg->num_pages, - (vsg->direction == DMA_FROM_DEVICE)); + unpin_user_pages_dirty_lock(vsg->pages, vsg->num_pages, + (vsg->direction == DMA_FROM_DEVICE)); /* fall through */ case dr_via_pages_alloc: vfree(vsg->pages); diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c index aae5bfed7f3b..c3769a5f096d 100644 --- a/drivers/infiniband/core/umem.c +++ b/drivers/infiniband/core/umem.c @@ -54,7 +54,7 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d for_each_sg_page(umem->sg_head.sgl, &sg_iter, umem->sg_nents, 0) { page = sg_page_iter_page(&sg_iter); - put_user_pages_dirty_lock(&page, 1, umem->writable && dirty); + unpin_user_pages_dirty_lock(&page, 1, umem->writable && dirty); } sg_free_table(&umem->sg_head); diff --git a/drivers/infiniband/hw/hfi1/user_pages.c b/drivers/infiniband/hw/hfi1/user_pages.c index 9a94761765c0..3b505006c0a6 100644 --- a/drivers/infiniband/hw/hfi1/user_pages.c +++ b/drivers/infiniband/hw/hfi1/user_pages.c @@ -118,7 +118,7 @@ int hfi1_acquire_user_pages(struct mm_struct *mm, unsigned long vaddr, size_t np void hfi1_release_user_pages(struct mm_struct *mm, struct page **p, size_t npages, bool dirty) { - put_user_pages_dirty_lock(p, npages, dirty); + unpin_user_pages_dirty_lock(p, npages, dirty); if (mm) { /* during close after signal, mm can be NULL */ atomic64_sub(npages, &mm->pinned_vm); diff --git a/drivers/infiniband/hw/mthca/mthca_memfree.c b/drivers/infiniband/hw/mthca/mthca_memfree.c index 8269ab040c21..78a48aea3faf 100644 --- a/drivers/infiniband/hw/mthca/mthca_memfree.c +++ b/drivers/infiniband/hw/mthca/mthca_memfree.c @@ -482,7 +482,7 @@ int mthca_map_user_db(struct mthca_dev *dev, struct mthca_uar *uar, ret = pci_map_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE); if (ret < 0) { - put_user_page(pages[0]); + unpin_user_page(pages[0]); goto out; } @@ -490,7 +490,7 @@ int mthca_map_user_db(struct mthca_dev *dev, struct mthca_uar *uar, mthca_uarc_virt(dev, uar, i)); if (ret) { pci_unmap_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE); - put_user_page(sg_page(&db_tab->page[i].mem)); + unpin_user_page(sg_page(&db_tab->page[i].mem)); goto out; } @@ -556,7 +556,7 @@ void mthca_cleanup_user_db_tab(struct mthca_dev *dev, struct mthca_uar *uar, if (db_tab->page[i].uvirt) { mthca_UNMAP_ICM(dev, mthca_uarc_virt(dev, uar, i), 1); pci_unmap_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE); - put_user_page(sg_page(&db_tab->page[i].mem)); + 
unpin_user_page(sg_page(&db_tab->page[i].mem)); } } diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c index 7fc4b5f81fcd..342e3172ca40 100644 --- a/drivers/infiniband/hw/qib/qib_user_pages.c +++ b/drivers/infiniband/hw/qib/qib_user_pages.c @@ -40,7 +40,7 @@ static void __qib_release_user_pages(struct page **p, size_t num_pages, int dirty) { - put_user_pages_dirty_lock(p, num_pages, dirty); + unpin_user_pages_dirty_lock(p, num_pages, dirty); } /** diff --git a/drivers/infiniband/hw/qib/qib_user_sdma.c b/drivers/infiniband/hw/qib/qib_user_sdma.c index 1a3cc2957e3a..a67599b5a550 100644 --- a/drivers/infiniband/hw/qib/qib_user_sdma.c +++ b/drivers/infiniband/hw/qib/qib_user_sdma.c @@ -317,7 +317,7 @@ static int qib_user_sdma_page_to_frags(const struct qib_devdata *dd, * the caller can ignore this page. */ if (put) { - put_user_page(page); + unpin_user_page(page); } else { /* coalesce case */ kunmap(page); @@ -631,7 +631,7 @@ static void qib_user_sdma_free_pkt_frag(struct device *dev, kunmap(pkt->addr[i].page); if (pkt->addr[i].put_page) - put_user_page(pkt->addr[i].page); + unpin_user_page(pkt->addr[i].page); else __free_page(pkt->addr[i].page); } else if (pkt->addr[i].kvaddr) { @@ -706,7 +706,7 @@ static int qib_user_sdma_pin_pages(const struct qib_devdata *dd, /* if error, return all pages not managed by pkt */ free_pages: while (i < j) - put_user_page(pages[i++]); + unpin_user_page(pages[i++]); done: return ret; diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c b/drivers/infiniband/hw/usnic/usnic_uiom.c index 600896727d34..bd9f944b68fc 100644 --- a/drivers/infiniband/hw/usnic/usnic_uiom.c +++ b/drivers/infiniband/hw/usnic/usnic_uiom.c @@ -75,7 +75,7 @@ static void usnic_uiom_put_pages(struct list_head *chunk_list, int dirty) for_each_sg(chunk->page_list, sg, chunk->nents, i) { page = sg_page(sg); pa = sg_phys(sg); - put_user_pages_dirty_lock(&page, 1, dirty); + unpin_user_pages_dirty_lock(&page, 1, dirty); usnic_dbg("pa: %pa\n", &pa); } kfree(chunk); diff --git a/drivers/infiniband/sw/siw/siw_mem.c b/drivers/infiniband/sw/siw/siw_mem.c index e53b07dcfed5..e2061dc0b043 100644 --- a/drivers/infiniband/sw/siw/siw_mem.c +++ b/drivers/infiniband/sw/siw/siw_mem.c @@ -63,7 +63,7 @@ struct siw_mem *siw_mem_id2obj(struct siw_device *sdev, int stag_index) static void siw_free_plist(struct siw_page_chunk *chunk, int num_pages, bool dirty) { - put_user_pages_dirty_lock(chunk->plist, num_pages, dirty); + unpin_user_pages_dirty_lock(chunk->plist, num_pages, dirty); } void siw_umem_release(struct siw_umem *umem, bool dirty) diff --git a/drivers/media/v4l2-core/videobuf-dma-sg.c b/drivers/media/v4l2-core/videobuf-dma-sg.c index 162a2633b1e3..13b65ed9e74c 100644 --- a/drivers/media/v4l2-core/videobuf-dma-sg.c +++ b/drivers/media/v4l2-core/videobuf-dma-sg.c @@ -349,8 +349,8 @@ int videobuf_dma_free(struct videobuf_dmabuf *dma) BUG_ON(dma->sglen); if (dma->pages) { - put_user_pages_dirty_lock(dma->pages, dma->nr_pages, - dma->direction == DMA_FROM_DEVICE); + unpin_user_pages_dirty_lock(dma->pages, dma->nr_pages, + dma->direction == DMA_FROM_DEVICE); kfree(dma->pages); dma->pages = NULL; } diff --git a/drivers/platform/goldfish/goldfish_pipe.c b/drivers/platform/goldfish/goldfish_pipe.c index 2a5901efecde..1ab207ec9c94 100644 --- a/drivers/platform/goldfish/goldfish_pipe.c +++ b/drivers/platform/goldfish/goldfish_pipe.c @@ -360,8 +360,8 @@ static int transfer_max_buffers(struct goldfish_pipe *pipe, *consumed_size = 
pipe->command_buffer->rw_params.consumed_size; - put_user_pages_dirty_lock(pipe->pages, pages_count, - !is_write && *consumed_size > 0); + unpin_user_pages_dirty_lock(pipe->pages, pages_count, + !is_write && *consumed_size > 0); mutex_unlock(&pipe->lock); return 0; diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c index 18bfc2fc8e6d..a177bf2c6683 100644 --- a/drivers/vfio/vfio_iommu_type1.c +++ b/drivers/vfio/vfio_iommu_type1.c @@ -310,7 +310,7 @@ static int put_pfn(unsigned long pfn, int prot) if (!is_invalid_reserved_pfn(pfn)) { struct page *page = pfn_to_page(pfn); - put_user_pages_dirty_lock(&page, 1, prot & IOMMU_WRITE); + unpin_user_pages_dirty_lock(&page, 1, prot & IOMMU_WRITE); return 1; } return 0; diff --git a/fs/io_uring.c b/fs/io_uring.c index 54f664e8b9b8..1806afddfea5 100644 --- a/fs/io_uring.c +++ b/fs/io_uring.c @@ -6005,7 +6005,7 @@ static int io_sqe_buffer_unregister(struct io_ring_ctx *ctx) struct io_mapped_ubuf *imu = &ctx->user_bufs[i]; for (j = 0; j < imu->nr_bvecs; j++) - put_user_page(imu->bvec[j].bv_page); + unpin_user_page(imu->bvec[j].bv_page); if (ctx->account_mem) io_unaccount_mem(ctx->user, imu->nr_bvecs); @@ -6150,7 +6150,7 @@ static int io_sqe_buffer_register(struct io_ring_ctx *ctx, void __user *arg, * release any pages we did get */ if (pret > 0) - put_user_pages(pages, pret); + unpin_user_pages(pages, pret); if (ctx->account_mem) io_unaccount_mem(ctx->user, nr_pages); kvfree(imu->bvec); diff --git a/include/linux/mm.h b/include/linux/mm.h index 79ca557349c6..fc543eb45de1 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1039,27 +1039,27 @@ static inline void put_page(struct page *page) } /** - * put_user_page() - release a gup-pinned page + * unpin_user_page() - release a gup-pinned page * @page: pointer to page to be released * * Pages that were pinned via pin_user_pages*() must be released via either - * put_user_page(), or one of the put_user_pages*() routines. This is so that - * eventually such pages can be separately tracked and uniquely handled. In + * unpin_user_page(), or one of the unpin_user_pages*() routines. This is so + * that eventually such pages can be separately tracked and uniquely handled. In * particular, interactions with RDMA and filesystems need special handling. * - * put_user_page() and put_page() are not interchangeable, despite this early - * implementation that makes them look the same. put_user_page() calls must + * unpin_user_page() and put_page() are not interchangeable, despite this early + * implementation that makes them look the same. unpin_user_page() calls must * be perfectly matched up with pin*() calls. 
*/ -static inline void put_user_page(struct page *page) +static inline void unpin_user_page(struct page *page) { put_page(page); } -void put_user_pages_dirty_lock(struct page **pages, unsigned long npages, - bool make_dirty); +void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages, + bool make_dirty); -void put_user_pages(struct page **pages, unsigned long npages); +void unpin_user_pages(struct page **pages, unsigned long npages); #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP) #define SECTION_IN_PAGE_FLAGS @@ -2590,7 +2590,7 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address, #define FOLL_ANON 0x8000 /* don't do file mappings */ #define FOLL_LONGTERM 0x10000 /* mapping lifetime is indefinite: see below */ #define FOLL_SPLIT_PMD 0x20000 /* split huge pmd before returning */ -#define FOLL_PIN 0x40000 /* pages must be released via put_user_page() */ +#define FOLL_PIN 0x40000 /* pages must be released via unpin_user_page */ /* * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each @@ -2625,7 +2625,7 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address, * Direct IO). This lets the filesystem know that some non-file-system entity is * potentially changing the pages' data. In contrast to FOLL_GET (whose pages * are released via put_page()), FOLL_PIN pages must be released, ultimately, by - * a call to put_user_page(). + * a call to unpin_user_page(). * * FOLL_PIN is similar to FOLL_GET: both of these pin pages. They use different * and separate refcounting mechanisms, however, and that means that each has @@ -2633,7 +2633,7 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address, * * FOLL_GET: get_user_pages*() to acquire, and put_page() to release. * - * FOLL_PIN: pin_user_pages*() to acquire, and put_user_pages to release. + * FOLL_PIN: pin_user_pages*() to acquire, and unpin_user_pages to release. * * FOLL_PIN and FOLL_GET are mutually exclusive for a given function call. * (The underlying pages may experience both FOLL_GET-based and FOLL_PIN-based @@ -2643,7 +2643,7 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address, * FOLL_PIN should be set internally by the pin_user_pages*() APIs, never * directly by the caller. That's in order to help avoid mismatches when * releasing pages: get_user_pages*() pages must be released via put_page(), - * while pin_user_pages*() pages must be released via put_user_page(). + * while pin_user_pages*() pages must be released via unpin_user_page(). * * Please see Documentation/vm/pin_user_pages.rst for more information. */ diff --git a/mm/gup.c b/mm/gup.c index cc7e78f4a960..e13f4d211475 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -45,7 +45,7 @@ static inline struct page *try_get_compound_head(struct page *page, int refs) } /** - * put_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages + * unpin_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages * @pages: array of pages to be maybe marked dirty, and definitely released. * @npages: number of pages in the @pages array. * @make_dirty: whether to mark the pages dirty @@ -55,19 +55,19 @@ static inline struct page *try_get_compound_head(struct page *page, int refs) * * For each page in the @pages array, make that page (or its head page, if a * compound page) dirty, if @make_dirty is true, and if the page was previously - * listed as clean. 
In any case, releases all pages using put_user_page(), - * possibly via put_user_pages(), for the non-dirty case. + * listed as clean. In any case, releases all pages using unpin_user_page(), + * possibly via unpin_user_pages(), for the non-dirty case. * - * Please see the put_user_page() documentation for details. + * Please see the unpin_user_page() documentation for details. * * set_page_dirty_lock() is used internally. If instead, set_page_dirty() is * required, then the caller should a) verify that this is really correct, * because _lock() is usually required, and b) hand code it: - * set_page_dirty_lock(), put_user_page(). + * set_page_dirty_lock(), unpin_user_page(). * */ -void put_user_pages_dirty_lock(struct page **pages, unsigned long npages, - bool make_dirty) +void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages, + bool make_dirty) { unsigned long index; @@ -78,7 +78,7 @@ void put_user_pages_dirty_lock(struct page **pages, unsigned long npages, */ if (!make_dirty) { - put_user_pages(pages, npages); + unpin_user_pages(pages, npages); return; } @@ -106,21 +106,21 @@ void put_user_pages_dirty_lock(struct page **pages, unsigned long npages, */ if (!PageDirty(page)) set_page_dirty_lock(page); - put_user_page(page); + unpin_user_page(page); } } -EXPORT_SYMBOL(put_user_pages_dirty_lock); +EXPORT_SYMBOL(unpin_user_pages_dirty_lock); /** - * put_user_pages() - release an array of gup-pinned pages. + * unpin_user_pages() - release an array of gup-pinned pages. * @pages: array of pages to be marked dirty and released. * @npages: number of pages in the @pages array. * - * For each page in the @pages array, release the page using put_user_page(). + * For each page in the @pages array, release the page using unpin_user_page(). * - * Please see the put_user_page() documentation for details. + * Please see the unpin_user_page() documentation for details. */ -void put_user_pages(struct page **pages, unsigned long npages) +void unpin_user_pages(struct page **pages, unsigned long npages) { unsigned long index; @@ -130,9 +130,9 @@ void put_user_pages(struct page **pages, unsigned long npages) * single operation to the head page should suffice. 
*/ for (index = 0; index < npages; index++) - put_user_page(pages[index]); + unpin_user_page(pages[index]); } -EXPORT_SYMBOL(put_user_pages); +EXPORT_SYMBOL(unpin_user_pages); #ifdef CONFIG_MMU static struct page *no_page_table(struct vm_area_struct *vma, diff --git a/mm/process_vm_access.c b/mm/process_vm_access.c index fd20ab675b85..de41e830cdac 100644 --- a/mm/process_vm_access.c +++ b/mm/process_vm_access.c @@ -126,8 +126,8 @@ static int process_vm_rw_single_vec(unsigned long addr, pa += pinned_pages * PAGE_SIZE; /* If vm_write is set, the pages need to be made dirty: */ - put_user_pages_dirty_lock(process_pages, pinned_pages, - vm_write); + unpin_user_pages_dirty_lock(process_pages, pinned_pages, + vm_write); } return rc; diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c index b17ce9a5534d..fa7bb5e060d0 100644 --- a/net/xdp/xdp_umem.c +++ b/net/xdp/xdp_umem.c @@ -212,7 +212,7 @@ static int xdp_umem_map_pages(struct xdp_umem *umem) static void xdp_umem_unpin_pages(struct xdp_umem *umem) { - put_user_pages_dirty_lock(umem->pgs, umem->npgs, true); + unpin_user_pages_dirty_lock(umem->pgs, umem->npgs, true); kfree(umem->pgs); umem->pgs = NULL; -- cgit v1.2.3 From 45190f01dd402112d3d22c0ddc4152994f9e1e55 Mon Sep 17 00:00:00 2001 From: Vitaly Wool Date: Thu, 30 Jan 2020 22:15:04 -0800 Subject: mm/zswap.c: add allocation hysteresis if pool limit is hit zswap will always try to shrink pool when zswap is full. If there is a high pressure on zswap it will result in flipping pages in and out zswap pool without any real benefit, and the overall system performance will drop. The previous discussion on this subject [1] ended up with a suggestion to implement a sort of hysteresis to refuse taking pages into zswap pool until it has sufficient space if the limit has been hit. This is my take on this. Hysteresis is controlled with a sysfs-configurable parameter (namely, /sys/kernel/debug/zswap/accept_threhsold_percent). It specifies the threshold at which zswap would start accepting pages again after it became full. Setting this parameter to 100 disables the hysteresis and sets the zswap behavior to pre-hysteresis state. [1] https://lkml.org/lkml/2019/11/8/949 Link: http://lkml.kernel.org/r/20200108200118.15563-1-vitaly.wool@konsulko.com Signed-off-by: Vitaly Wool Cc: Dan Streetman Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- Documentation/vm/zswap.rst | 13 +++++++ mm/zswap.c | 85 +++++++++++++++++++++++++++++----------------- 2 files changed, 67 insertions(+), 31 deletions(-) (limited to 'Documentation') diff --git a/Documentation/vm/zswap.rst b/Documentation/vm/zswap.rst index 1444ecd40911..61f6185188cd 100644 --- a/Documentation/vm/zswap.rst +++ b/Documentation/vm/zswap.rst @@ -130,6 +130,19 @@ checking for the same-value filled pages during store operation. However, the existing pages which are marked as same-value filled pages remain stored unchanged in zswap until they are either loaded or invalidated. +To prevent zswap from shrinking pool when zswap is full and there's a high +pressure on swap (this will result in flipping pages in and out zswap pool +without any real benefit but with a performance drop for the system), a +special parameter has been introduced to implement a sort of hysteresis to +refuse taking pages into zswap pool until it has sufficient space if the limit +has been hit. To set the threshold at which zswap would start accepting pages +again after it became full, use the sysfs ``accept_threhsold_percent`` +attribute, e. 
g.:: + + echo 80 > /sys/module/zswap/parameters/accept_threhsold_percent + +Setting this parameter to 100 will disable the hysteresis. + A debugfs interface is provided for various statistic about pool size, number of pages stored, same-value filled pages and various counters for the reasons pages are rejected. diff --git a/mm/zswap.c b/mm/zswap.c index 46a322316e52..7ec8bd912d13 100644 --- a/mm/zswap.c +++ b/mm/zswap.c @@ -32,6 +32,7 @@ #include #include #include +#include /********************************* * statistics @@ -65,6 +66,11 @@ static u64 zswap_reject_kmemcache_fail; /* Duplicate store was encountered (rare) */ static u64 zswap_duplicate_entry; +/* Shrinker work queue */ +static struct workqueue_struct *shrink_wq; +/* Pool limit was hit, we need to calm down */ +static bool zswap_pool_reached_full; + /********************************* * tunables **********************************/ @@ -109,6 +115,11 @@ module_param_cb(zpool, &zswap_zpool_param_ops, &zswap_zpool_type, 0644); static unsigned int zswap_max_pool_percent = 20; module_param_named(max_pool_percent, zswap_max_pool_percent, uint, 0644); +/* The threshold for accepting new pages after the max_pool_percent was hit */ +static unsigned int zswap_accept_thr_percent = 90; /* of max pool size */ +module_param_named(accept_threshold_percent, zswap_accept_thr_percent, + uint, 0644); + /* Enable/disable handling same-value filled pages (enabled by default) */ static bool zswap_same_filled_pages_enabled = true; module_param_named(same_filled_pages_enabled, zswap_same_filled_pages_enabled, @@ -123,7 +134,8 @@ struct zswap_pool { struct crypto_comp * __percpu *tfm; struct kref kref; struct list_head list; - struct work_struct work; + struct work_struct release_work; + struct work_struct shrink_work; struct hlist_node node; char tfm_name[CRYPTO_MAX_ALG_NAME]; }; @@ -214,6 +226,13 @@ static bool zswap_is_full(void) DIV_ROUND_UP(zswap_pool_total_size, PAGE_SIZE); } +static bool zswap_can_accept(void) +{ + return totalram_pages() * zswap_accept_thr_percent / 100 * + zswap_max_pool_percent / 100 > + DIV_ROUND_UP(zswap_pool_total_size, PAGE_SIZE); +} + static void zswap_update_total_size(void) { struct zswap_pool *pool; @@ -501,6 +520,16 @@ static struct zswap_pool *zswap_pool_find_get(char *type, char *compressor) return NULL; } +static void shrink_worker(struct work_struct *w) +{ + struct zswap_pool *pool = container_of(w, typeof(*pool), + shrink_work); + + if (zpool_shrink(pool->zpool, 1, NULL)) + zswap_reject_reclaim_fail++; + zswap_pool_put(pool); +} + static struct zswap_pool *zswap_pool_create(char *type, char *compressor) { struct zswap_pool *pool; @@ -551,6 +580,7 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor) */ kref_init(&pool->kref); INIT_LIST_HEAD(&pool->list); + INIT_WORK(&pool->shrink_work, shrink_worker); zswap_pool_debug("created", pool); @@ -624,7 +654,8 @@ static int __must_check zswap_pool_get(struct zswap_pool *pool) static void __zswap_pool_release(struct work_struct *work) { - struct zswap_pool *pool = container_of(work, typeof(*pool), work); + struct zswap_pool *pool = container_of(work, typeof(*pool), + release_work); synchronize_rcu(); @@ -647,8 +678,8 @@ static void __zswap_pool_empty(struct kref *kref) list_del_rcu(&pool->list); - INIT_WORK(&pool->work, __zswap_pool_release); - schedule_work(&pool->work); + INIT_WORK(&pool->release_work, __zswap_pool_release); + schedule_work(&pool->release_work); spin_unlock(&zswap_pools_lock); } @@ -942,22 +973,6 @@ end: return ret; } -static int 
zswap_shrink(void) -{ - struct zswap_pool *pool; - int ret; - - pool = zswap_pool_last_get(); - if (!pool) - return -ENOENT; - - ret = zpool_shrink(pool->zpool, 1, NULL); - - zswap_pool_put(pool); - - return ret; -} - static int zswap_is_page_same_filled(void *ptr, unsigned long *value) { unsigned int pos; @@ -1011,21 +1026,23 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset, /* reclaim space if needed */ if (zswap_is_full()) { + struct zswap_pool *pool; + zswap_pool_limit_hit++; - if (zswap_shrink()) { - zswap_reject_reclaim_fail++; - ret = -ENOMEM; - goto reject; - } + zswap_pool_reached_full = true; + pool = zswap_pool_last_get(); + if (pool) + queue_work(shrink_wq, &pool->shrink_work); + ret = -ENOMEM; + goto reject; + } - /* A second zswap_is_full() check after - * zswap_shrink() to make sure it's now - * under the max_pool_percent - */ - if (zswap_is_full()) { + if (zswap_pool_reached_full) { + if (!zswap_can_accept()) { ret = -ENOMEM; goto reject; - } + } else + zswap_pool_reached_full = false; } /* allocate entry */ @@ -1332,11 +1349,17 @@ static int __init init_zswap(void) zswap_enabled = false; } + shrink_wq = create_workqueue("zswap-shrink"); + if (!shrink_wq) + goto fallback_fail; + frontswap_register_ops(&zswap_frontswap_ops); if (zswap_debugfs_init()) pr_warn("debugfs initialization failed\n"); return 0; +fallback_fail: + zswap_pool_destroy(pool); hp_fail: cpuhp_remove_state(CPUHP_MM_ZSWP_MEM_PREPARE); dstmem_fail: -- cgit v1.2.3 From c65e6815db1c2e28d5554bd99d3a6e522ab599d1 Mon Sep 17 00:00:00 2001 From: Mikhail Zaslonko Date: Thu, 30 Jan 2020 22:16:27 -0800 Subject: s390/boot: add dfltcc= kernel command line parameter Add the new kernel command line parameter 'dfltcc=' to configure s390 zlib hardware support. Format: { on | off | def_only | inf_only | always } on: s390 zlib hardware support for compression on level 1 and decompression (default) off: No s390 zlib hardware support def_only: s390 zlib hardware support for deflate only (compression on level 1) inf_only: s390 zlib hardware support for inflate only (decompression) always: Same as 'on' but ignores the selected compression level always using hardware support (used for debugging) Link: http://lkml.kernel.org/r/20200103223334.20669-5-zaslonko@linux.ibm.com Signed-off-by: Mikhail Zaslonko Cc: Chris Mason Cc: Christian Borntraeger Cc: David Sterba Cc: Eduard Shishkin Cc: Heiko Carstens Cc: Ilya Leoshkevich Cc: Josef Bacik Cc: Richard Purdie Cc: Vasily Gorbik Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- Documentation/admin-guide/kernel-parameters.txt | 12 ++++++++++++ arch/s390/boot/ipl_parm.c | 14 ++++++++++++++ arch/s390/include/asm/setup.h | 7 +++++++ arch/s390/kernel/setup.c | 2 ++ lib/zlib_dfltcc/dfltcc.c | 5 ++++- lib/zlib_dfltcc/dfltcc.h | 1 + lib/zlib_dfltcc/dfltcc_deflate.c | 6 ++++++ lib/zlib_dfltcc/dfltcc_inflate.c | 6 ++++++ lib/zlib_dfltcc/dfltcc_util.h | 4 +++- 9 files changed, 55 insertions(+), 2 deletions(-) (limited to 'Documentation') diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index ec92120a7952..ddc5ccdd4cd1 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -834,6 +834,18 @@ dump out devices still on the deferred probe list after retrying. 
+ dfltcc= [HW,S390] + Format: { on | off | def_only | inf_only | always } + on: s390 zlib hardware support for compression on + level 1 and decompression (default) + off: No s390 zlib hardware support + def_only: s390 zlib hardware support for deflate + only (compression on level 1) + inf_only: s390 zlib hardware support for inflate + only (decompression) + always: Same as 'on' but ignores the selected compression + level always using hardware support (used for debugging) + dhash_entries= [KNL] Set number of hash buckets for dentry cache. diff --git a/arch/s390/boot/ipl_parm.c b/arch/s390/boot/ipl_parm.c index 24ef67eb1cef..357adad991d2 100644 --- a/arch/s390/boot/ipl_parm.c +++ b/arch/s390/boot/ipl_parm.c @@ -14,6 +14,7 @@ char __bootdata(early_command_line)[COMMAND_LINE_SIZE]; struct ipl_parameter_block __bootdata_preserved(ipl_block); int __bootdata_preserved(ipl_block_valid); +unsigned int __bootdata_preserved(zlib_dfltcc_support) = ZLIB_DFLTCC_FULL; unsigned long __bootdata(vmalloc_size) = VMALLOC_DEFAULT_SIZE; unsigned long __bootdata(memory_end); @@ -229,6 +230,19 @@ void parse_boot_command_line(void) if (!strcmp(param, "vmalloc") && val) vmalloc_size = round_up(memparse(val, NULL), PAGE_SIZE); + if (!strcmp(param, "dfltcc")) { + if (!strcmp(val, "off")) + zlib_dfltcc_support = ZLIB_DFLTCC_DISABLED; + else if (!strcmp(val, "on")) + zlib_dfltcc_support = ZLIB_DFLTCC_FULL; + else if (!strcmp(val, "def_only")) + zlib_dfltcc_support = ZLIB_DFLTCC_DEFLATE_ONLY; + else if (!strcmp(val, "inf_only")) + zlib_dfltcc_support = ZLIB_DFLTCC_INFLATE_ONLY; + else if (!strcmp(val, "always")) + zlib_dfltcc_support = ZLIB_DFLTCC_FULL_DEBUG; + } + if (!strcmp(param, "noexec")) { rc = kstrtobool(val, &enabled); if (!rc && !enabled) diff --git a/arch/s390/include/asm/setup.h b/arch/s390/include/asm/setup.h index 69289e99cabd..b241ddb67caf 100644 --- a/arch/s390/include/asm/setup.h +++ b/arch/s390/include/asm/setup.h @@ -79,6 +79,13 @@ struct parmarea { char command_line[ARCH_COMMAND_LINE_SIZE]; /* 0x10480 */ }; +extern unsigned int zlib_dfltcc_support; +#define ZLIB_DFLTCC_DISABLED 0 +#define ZLIB_DFLTCC_FULL 1 +#define ZLIB_DFLTCC_DEFLATE_ONLY 2 +#define ZLIB_DFLTCC_INFLATE_ONLY 3 +#define ZLIB_DFLTCC_FULL_DEBUG 4 + extern int noexec_disabled; extern int memory_end_set; extern unsigned long memory_end; diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c index 7d113e208f65..b2c2f75860e8 100644 --- a/arch/s390/kernel/setup.c +++ b/arch/s390/kernel/setup.c @@ -111,6 +111,8 @@ unsigned long __bootdata_preserved(__etext_dma); unsigned long __bootdata_preserved(__sdma); unsigned long __bootdata_preserved(__edma); unsigned long __bootdata_preserved(__kaslr_offset); +unsigned int __bootdata_preserved(zlib_dfltcc_support); +EXPORT_SYMBOL(zlib_dfltcc_support); unsigned long VMALLOC_START; EXPORT_SYMBOL(VMALLOC_START); diff --git a/lib/zlib_dfltcc/dfltcc.c b/lib/zlib_dfltcc/dfltcc.c index 7f77e5bb01c6..c30de430b30c 100644 --- a/lib/zlib_dfltcc/dfltcc.c +++ b/lib/zlib_dfltcc/dfltcc.c @@ -44,7 +44,10 @@ void dfltcc_reset( dfltcc_state->param.nt = 1; /* Initialize tuning parameters */ - dfltcc_state->level_mask = DFLTCC_LEVEL_MASK; + if (zlib_dfltcc_support == ZLIB_DFLTCC_FULL_DEBUG) + dfltcc_state->level_mask = DFLTCC_LEVEL_MASK_DEBUG; + else + dfltcc_state->level_mask = DFLTCC_LEVEL_MASK; dfltcc_state->block_size = DFLTCC_BLOCK_SIZE; dfltcc_state->block_threshold = DFLTCC_FIRST_FHT_BLOCK_SIZE; dfltcc_state->dht_threshold = DFLTCC_DHT_MIN_SAMPLE_SIZE; diff --git a/lib/zlib_dfltcc/dfltcc.h 
b/lib/zlib_dfltcc/dfltcc.h index 4782c92bb2ff..be70c807b62f 100644 --- a/lib/zlib_dfltcc/dfltcc.h +++ b/lib/zlib_dfltcc/dfltcc.h @@ -8,6 +8,7 @@ * Tuning parameters. */ #define DFLTCC_LEVEL_MASK 0x2 /* DFLTCC compression for level 1 only */ +#define DFLTCC_LEVEL_MASK_DEBUG 0x3fe /* DFLTCC compression for all levels */ #define DFLTCC_BLOCK_SIZE 1048576 #define DFLTCC_FIRST_FHT_BLOCK_SIZE 4096 #define DFLTCC_DHT_MIN_SAMPLE_SIZE 4096 diff --git a/lib/zlib_dfltcc/dfltcc_deflate.c b/lib/zlib_dfltcc/dfltcc_deflate.c index 9f0f1c90c4e0..00c185101c6d 100644 --- a/lib/zlib_dfltcc/dfltcc_deflate.c +++ b/lib/zlib_dfltcc/dfltcc_deflate.c @@ -3,6 +3,7 @@ #include "../zlib_deflate/defutil.h" #include "dfltcc_util.h" #include "dfltcc.h" +#include #include /* @@ -15,6 +16,11 @@ int dfltcc_can_deflate( deflate_state *state = (deflate_state *)strm->state; struct dfltcc_state *dfltcc_state = GET_DFLTCC_STATE(state); + /* Check for kernel dfltcc command line parameter */ + if (zlib_dfltcc_support == ZLIB_DFLTCC_DISABLED || + zlib_dfltcc_support == ZLIB_DFLTCC_INFLATE_ONLY) + return 0; + /* Unsupported compression settings */ if (!dfltcc_are_params_ok(state->level, state->w_bits, state->strategy, dfltcc_state->level_mask)) diff --git a/lib/zlib_dfltcc/dfltcc_inflate.c b/lib/zlib_dfltcc/dfltcc_inflate.c index 12a93a06bd61..aa9ef23474df 100644 --- a/lib/zlib_dfltcc/dfltcc_inflate.c +++ b/lib/zlib_dfltcc/dfltcc_inflate.c @@ -3,6 +3,7 @@ #include "../zlib_inflate/inflate.h" #include "dfltcc_util.h" #include "dfltcc.h" +#include #include /* @@ -15,6 +16,11 @@ int dfltcc_can_inflate( struct inflate_state *state = (struct inflate_state *)strm->state; struct dfltcc_state *dfltcc_state = GET_DFLTCC_STATE(state); + /* Check for kernel dfltcc command line parameter */ + if (zlib_dfltcc_support == ZLIB_DFLTCC_DISABLED || + zlib_dfltcc_support == ZLIB_DFLTCC_DEFLATE_ONLY) + return 0; + /* Unsupported compression settings */ if (state->wbits != HB_BITS) return 0; diff --git a/lib/zlib_dfltcc/dfltcc_util.h b/lib/zlib_dfltcc/dfltcc_util.h index 82cd1950c416..7c0d3bdc50a9 100644 --- a/lib/zlib_dfltcc/dfltcc_util.h +++ b/lib/zlib_dfltcc/dfltcc_util.h @@ -4,6 +4,7 @@ #include #include +#include /* * C wrapper for the DEFLATE CONVERSION CALL instruction. @@ -102,7 +103,8 @@ static inline int dfltcc_are_params_ok( static inline int is_dfltcc_enabled(void) { - return test_facility(DFLTCC_FACILITY); + return (zlib_dfltcc_support != ZLIB_DFLTCC_DISABLED && + test_facility(DFLTCC_FACILITY)); } char *oesc_msg(char *buf, int oesc); -- cgit v1.2.3
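Usage note (illustrative; the surrounding command line is a made-up example,
only the dfltcc= values come from the patch above): the new option is appended
to the kernel command line like any other boot parameter, e.g.:

    root=/dev/sda1 ... dfltcc=def_only

which limits s390 zlib hardware support to deflate only (compression on
level 1), while "dfltcc=off" disables the hardware support entirely.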