- Jul 03, 2024
Christoph Hellwig authored
Now that the integrity payload is always freed in bio_uninit, don't bother freeing it a little earlier in bio_integrity_unmap_free_user. With that, the separate bio_integrity_unmap_free_user can go away by just passing the bio to bio_integrity_unmap_user.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20240702151047.1746127-7-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig authored
blk_rq_unmap_user always unmaps user space pass-through requests. If such a request has integrity data attached, it must come from a user mapping as well. Call bio_integrity_unmap_free_user from blk_rq_unmap_user and remove the nvme_unmap_bio wrapper in the nvme driver.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
Reviewed-by: Anuj Gupta <anuj20.g@samsung.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20240702151047.1746127-5-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
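A minimal sketch of where the consolidated call lands, assuming the surrounding unmap loop is otherwise unchanged (an illustration of the description above, not the verbatim diff):

    /* inside blk_rq_unmap_user(): pass-through integrity data can
     * only have come from a user mapping, so unmap it here instead
     * of in a driver-level wrapper such as nvme_unmap_bio() */
    if (bio_integrity(bio))
            bio_integrity_unmap_free_user(bio);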
- Jun 26, 2024
Christoph Hellwig authored
dma_pad_mask is a queue limit by any way of looking at it, so move it into queue_limits and set it through the atomic queue limits APIs. Add a little helper that takes both the alignment and the pad into account to simplify the code that is touched a bit. Note that there never was any need for the > check in blk_queue_update_dma_pad; this was probably just copied and pasted from dma_update_dma_alignment.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Link: https://lore.kernel.org/r/20240626142637.300624-9-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
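One plausible shape for the helper mentioned above, as a sketch only (the name and exact field layout are assumptions based on the commit text):

    /* both masks now live in struct queue_limits; OR them together
     * so callers check a single combined alignment value */
    static inline unsigned int
    blk_lim_dma_alignment_and_pad(struct queue_limits *lim)
    {
            return lim->dma_alignment | lim->dma_pad_mask;
    }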
- Jan 23, 2024
Christian A. Ehrhardt authored
Syzkaller reports a warning in _copy_from_iter because an iov_iter is supposedly used in the wrong direction. The reason is that syzkaller managed to generate a request with a transfer direction of SG_DXFER_TO_FROM_DEV. This instructs the kernel to copy user buffers into the kernel, read into the copied buffers, and then copy the data back to user space. Thus the iovec is used in both directions. Detect this situation in the block layer and construct a new iterator with the correct direction for the copy-in.
Reported-by: <syzbot+a532b03fdfee2c137666@syzkaller.appspotmail.com>
Closes: https://lore.kernel.org/lkml/0000000000009b92c10604d7a5e9@google.com/t/
Reported-by: <syzbot+63dec323ac56c28e644f@syzkaller.appspotmail.com>
Closes: https://lore.kernel.org/lkml/0000000000003faaa105f6e7c658@google.com/T/
Signed-off-by: Christian A. Ehrhardt <lk@c--e.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20240121202634.275068-1-lk@c--e.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
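A hedged sketch of the described fix, assuming the copy-in simply rebuilds a source iterator over the same iovec (variable names are illustrative):

    /* the caller's iov_iter is set up as ITER_DEST for the read-back;
     * SG_DXFER_TO_FROM_DEV also needs the same iovec walked as a data
     * source for the initial copy-in, so build a second iterator
     * rather than using one iterator in both directions */
    struct iov_iter from;

    iov_iter_init(&from, ITER_SOURCE, iov, nr_segs, nr_bytes);
    /* use "from" for the user-to-kernel copy; keep the original
     * iterator for copying the result back to user space */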
- Sep 06, 2023
Christoph Hellwig authored
There is no need to unpin the added page when adding it to the bio fails, as that is done by the loop below. Instead we want to unpin it when adding a single page to the bio more than once, as bio_release_pages will only unpin it once.
Fixes: d1916c86 ("block: move same page handling from __bio_add_pc_page to the callers")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Link: https://lore.kernel.org/r/20230905124731.328255-1-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
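A sketch of the corrected flow (helper usage is illustrative; the pages are assumed to already be pinned at this point):

    /* a failed add is handled by the unwind loop below, which unpins
     * every extracted page; but a page merged into an existing bvec
     * ("same_page") would otherwise hold one pin too many, since
     * bio_release_pages() unpins each bvec entry only once */
    if (bio_add_hw_page(rq->q, bio, page, n, offs,
                        max_sectors, &same_page) != n) {
            ret = -EINVAL;
            break;          /* unwind loop below releases the pin */
    }
    if (same_page)
            unpin_user_page(page);  /* drop the duplicate pin now */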
- May 24, 2023
David Howells authored
This will pin pages or leave them unaltered rather than getting a ref on them as appropriate to the iterator. The pages need to be pinned for DIO rather than having refs taken on them to prevent VM copy-on-write from malfunctioning during a concurrent fork() (the result of the I/O could otherwise end up being visible to/affected by the child process).
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
cc: Al Viro <viro@zeniv.linux.org.uk>
cc: Jens Axboe <axboe@kernel.dk>
cc: Jan Kara <jack@suse.cz>
cc: Matthew Wilcox <willy@infradead.org>
cc: Logan Gunthorpe <logang@deltatee.com>
cc: linux-block@vger.kernel.org
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20230522205744.2825689-7-dhowells@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
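A rough sketch of the extraction step this converts to, with the flag handling simplified (illustrative, not the exact diff):

    /* pin the iterator's pages (FOLL_PIN) instead of taking refs;
     * pinning is what keeps a concurrent fork()'s copy-on-write from
     * exposing in-flight DIO data to the child */
    struct page **pages = NULL;     /* let the helper allocate the array */
    size_t offset;
    ssize_t bytes;

    bytes = iov_iter_extract_pages(iter, &pages, LONG_MAX,
                                   nr_vecs, 0, &offset);
    if (bytes > 0)
            bio_set_flag(bio, BIO_PAGE_PINNED); /* unpin on release */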
Christoph Hellwig authored
Replace BIO_NO_PAGE_REF with a BIO_PAGE_REFFED flag that has the inverted meaning and is only set when a page reference has been acquired that needs to be released by bio_release_pages().
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
cc: Al Viro <viro@zeniv.linux.org.uk>
cc: Jens Axboe <axboe@kernel.dk>
cc: Jan Kara <jack@suse.cz>
cc: Matthew Wilcox <willy@infradead.org>
cc: Logan Gunthorpe <logang@deltatee.com>
cc: linux-block@vger.kernel.org
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20230522205744.2825689-4-dhowells@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
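A sketch of the inverted gating in bio_release_pages() after this change (simplified; the companion BIO_PAGE_PINNED handling arrives later in the same series):

    /* release only when the bio actually owns page references,
     * instead of skipping when a "no ref" flag happened to be set */
    static inline void bio_release_pages(struct bio *bio, bool mark_dirty)
    {
            if (bio_flagged(bio, BIO_PAGE_REFFED))
                    __bio_release_pages(bio, mark_dirty);
    }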
- May 23, 2023
Anuj Gupta authored
Commit 8af870aa ("block: enable bio caching use for passthru IO") introduced the bio cache for passthru IO. When nr_vecs is greater than BIO_INLINE_VECS, the bio and bvecs are allocated from the mempool (instead of the percpu cache) and REQ_ALLOC_CACHE is cleared. This causes the side effect of not freeing the bio/bvecs back into the mempool on completion. This patch lets passthru IO fall back to allocation using bio_kmalloc when nr_vecs is greater than BIO_INLINE_VECS. The corresponding bio is freed during the call to blk_mq_map_bio_put during completion.
Cc: stable@vger.kernel.org # 6.1
Fixes: 8af870aa ("block: enable bio caching use for passthru IO")
Signed-off-by: Anuj Gupta <anuj20.g@samsung.com>
Signed-off-by: Kanchan Joshi <joshi.k@samsung.com>
Link: https://lore.kernel.org/r/20230523111709.145676-1-anuj20.g@samsung.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
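A sketch of the described allocation split, assumed to live in a blk_rq_map_bio_alloc()-style helper (details paraphrased from the commit text, not copied from the tree):

    /* past BIO_INLINE_VECS the percpu bio cache cannot be used, so
     * skip REQ_ALLOC_CACHE entirely and kmalloc the bio with its
     * inline bvecs; blk_mq_map_bio_put() then frees it symmetrically */
    if (nr_vecs > BIO_INLINE_VECS) {
            bio = bio_kmalloc(nr_vecs, gfp_mask);
            if (!bio)
                    return NULL;
            bio_init(bio, NULL, bio->bi_inline_vecs, nr_vecs, req_op(rq));
    } else {
            bio = bio_alloc_bioset(NULL, nr_vecs, rq->cmd_flags, gfp_mask,
                                   &fs_bio_set);
    }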
- Mar 30, 2023
Jens Axboe authored
This returns a pointer to the current iovec entry in the iterator. Only useful with ITER_IOVEC right now, but it prepares us to treat ITER_UBUF and ITER_IOVEC identically for the first segment. Rename struct iov_iter->iov to iov_iter->__iov to find any potentially troublesome spots, and also to prevent anyone from adding new code that accesses iter->iov directly.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
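The accessor itself is tiny; roughly as described (the ITER_UBUF unification is left to later patches):

    /* callers use iter_iov() instead of reaching into the renamed
     * __iov field directly */
    static inline const struct iovec *iter_iov(const struct iov_iter *iter)
    {
            return iter->__iov;
    }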
- Mar 29, 2023
Jens Axboe authored
This helper blindly copies the iovec, even if we don't have one. Make this case a bit smarter by only doing so if we have an iovec array to copy.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
- Feb 20, 2023
David Howells authored
Define flags to qualify page extraction to pass into iov_iter_*_pages*() rather than passing in FOLL_* flags. For now only a flag to allow peer-to-peer DMA is supported.
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jens Axboe <axboe@kernel.dk>
cc: Al Viro <viro@zeniv.linux.org.uk>
cc: Logan Gunthorpe <logang@deltatee.com>
cc: linux-fsdevel@vger.kernel.org
cc: linux-block@vger.kernel.org
Signed-off-by: Steve French <stfrench@microsoft.com>
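A sketch of how a block-layer caller might set the new qualifier, under the assumption that the queue advertises P2PDMA support via its queue flag:

    /* dedicated extraction-flag type instead of raw FOLL_* bits;
     * peer-to-peer DMA is the only qualifier defined so far */
    iov_iter_extraction_t extraction_flags = 0;

    if (blk_queue_pci_p2pdma(rq->q))
            extraction_flags |= ITER_ALLOW_P2PDMA;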
- Jan 29, 2023
Anuj Gupta authored
This patch modifies the existing check so that the bio cache is not limited to iopoll.
Signed-off-by: Anuj Gupta <anuj20.g@samsung.com>
Signed-off-by: Kanchan Joshi <joshi.k@samsung.com>
Link: https://lore.kernel.org/r/20230117120638.72254-3-anuj20.g@samsung.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
- Jan 11, 2023
Keith Busch authored
This is more efficient than iter_iov.
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
[axboe: fold in iovec assumption fix]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
- Nov 09, 2022
Logan Gunthorpe authored
When a bio's queue supports PCI P2PDMA, set FOLL_PCI_P2PDMA for iov_iter_get_pages_flags(). This allows PCI P2PDMA pages to be passed from userspace and enables the NVMe passthru requests to use P2PDMA pages.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20221021174116.7200-8-logang@deltatee.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
- Oct 25, 2022
Bart Van Assche authored
Document which functions do not modify the queue limits.
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Keith Busch <kbusch@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Link: https://lore.kernel.org/r/20221025191755.1711437-3-bvanassche@acm.org
Reviewed-by: Keith Busch <kbusch@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
- Sep 30, 2022
Kanchan Joshi authored
Extend blk_rq_map_user_iov so that it can handle a bvec iterator, using the new blk_rq_map_user_bvec function. It maps the pages from the bvec iterator into a bio and places the bio into the request. This helper will be used by nvme for the uring-passthrough path when IO is done using pre-mapped buffers.
Signed-off-by: Kanchan Joshi <joshi.k@samsung.com>
Signed-off-by: Anuj Gupta <anuj20.g@samsung.com>
Suggested-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20220930062749.152261-11-anuj20.g@samsung.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
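A minimal sketch of the dispatch this adds to blk_rq_map_user_iov (simplified; the real function also weighs alignment and copy fallbacks):

    /* a bvec iterator means the pages were already mapped, e.g.
     * io_uring fixed buffers, so reuse them rather than walking
     * user memory again */
    if (iov_iter_is_bvec(iter))
            return blk_rq_map_user_bvec(rq, iter);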
Kanchan Joshi authored
Move the bio allocation logic from bio_map_user_iov to a new helper, blk_rq_map_bio_alloc. It is named so because its functionality is the opposite of what is done inside blk_mq_map_bio_put. This is a prep patch.
Signed-off-by: Kanchan Joshi <joshi.k@samsung.com>
Link: https://lore.kernel.org/r/20220930062749.152261-10-anuj20.g@samsung.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Anuj Gupta authored
This patch renames the existing bio_map_put function to blk_mq_map_bio_put.
Signed-off-by: Anuj Gupta <anuj20.g@samsung.com>
Suggested-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20220930062749.152261-9-anuj20.g@samsung.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Anuj Gupta authored
Create a helper blk_rq_map_user_io for mapping of vectored as well as non-vectored requests. This will help in saving duplication of code at a few places in scsi and nvme.
Signed-off-by: Anuj Gupta <anuj20.g@samsung.com>
Suggested-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20220930062749.152261-4-anuj20.g@samsung.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
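The consolidated entry point, roughly as it appears in the series (parameter names are best-effort, not verified against the tree):

    int blk_rq_map_user_io(struct request *req, struct rq_map_data *map_data,
                           void __user *ubuf, unsigned long buf_len,
                           gfp_t gfp_mask, bool vec, int iov_count,
                           bool check_iter_count, int rw);

A non-vectored caller passes vec = false; a vectored caller hands the user iovec array through ubuf with vec = true, letting the helper do the import that scsi and nvme previously open-coded.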
- Sep 05, 2022
Jiapeng Chong authored
The variable 'added' is not effectively used in the function, so delete it.

block/blk-map.c:273:16: warning: variable 'added' set but not used.

Link: https://bugzilla.openanolis.cn/show_bug.cgi?id=2049
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Link: https://lore.kernel.org/r/20220905063253.120082-1-jiapeng.chong@linux.alibaba.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
- Aug 22, 2022
Jens Axboe authored
Avoid a kmalloc+kfree for each page array, if we only have a few pages that are mapped. An alloc+free for each IO is quite expensive, and it's pretty pointless if we're only dealing with 1 or a few vecs. Use UIO_FASTIOV like we do in other spots to set a sane limit for how big of an IO we want to avoid allocations for.
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
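A sketch of the stack-first pattern described above:

    /* small mappings keep the page array on the stack; only large
     * ones pay for kmalloc_array()/kfree() */
    struct page *stack_pages[UIO_FASTIOV];
    struct page **pages = stack_pages;

    if (nr_vecs > ARRAY_SIZE(stack_pages)) {
            pages = kmalloc_array(nr_vecs, sizeof(*pages), gfp_mask);
            if (!pages)
                    return -ENOMEM;
    }
    /* ... extract and add pages ... */
    if (pages != stack_pages)
            kfree(pages);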
Jens Axboe authored
bdev based polled O_DIRECT is currently quite a bit faster than passthru on the same device, and one of the reasons is that we're not able to use the bio caching for passthru IO. If REQ_POLLED is set on the request, use the fs bio set for grabbing a bio from the caches, if available. This saves 5-6% of CPU overhead for polled passthru IO.
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe authored
We don't need full ints for several of these members. Change the page_order and nr_entries to unsigned shorts, and the true/false from_user and null_mapped to booleans. This shrinks the struct from 32 to 24 bytes on 64-bit archs.
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
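The resulting layout, as a sketch (field order is an assumption; the size arithmetic is the point):

    struct rq_map_data {
            struct page **pages;            /* 8 bytes on 64-bit */
            unsigned long offset;           /* 8 bytes */
            unsigned short page_order;      /* was int */
            unsigned short nr_entries;      /* was int */
            bool null_mapped;               /* was int */
            bool from_user;                 /* was int */
    };                                      /* 32 -> 24 bytes */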
- Aug 09, 2022
Al Viro authored
... doing revert if we end up not using some pages.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
- Apr 23, 2022
Michal Orzel authored
Get rid of a redundant assignment to the variable ret in bio_map_user_iov, as it is being assigned a value that is never read: it is re-assigned in the first instruction after the while loop. Reported by clang-tidy [deadcode.DeadStores].
Signed-off-by: Michal Orzel <michalorzel.eng@gmail.com>
Link: https://lore.kernel.org/r/20220423113811.13335-2-michalorzel.eng@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
- Apr 18, 2022
Christoph Hellwig authored
Remove the magic autofree semantics and require the callers to explicitly call bio_init to initialize the bio. This allows bio_free to catch accidental bio_put calls on bio_init()ed bios as well.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Coly Li <colyli@suse.de>
Acked-by: Mike Snitzer <snitzer@kernel.org>
Link: https://lore.kernel.org/r/20220406061228.410163-5-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
- Feb 17, 2022
Haimin Zhang authored
Add the __GFP_ZERO flag for alloc_page in function bio_copy_kern to initialize the buffer of a bio.
Signed-off-by: Haimin Zhang <tcs.kernel@gmail.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20220216084038.15635-1-tcs.kernel@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
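The change is a one-liner in spirit; a hedged sketch (the surrounding gfp bits are elided):

    /* zero the bounce buffer so short copies can't leak stale
     * kernel memory back to the caller */
    page = alloc_page(gfp_mask | __GFP_ZERO);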
- Sep 03, 2021
Christoph Hellwig authored
flush_kernel_dcache_page is a rather confusing interface that implements a subset of flush_dcache_page by not being able to properly handle page cache mapped pages. The only callers left are in the exec code, as all other previous callers were incorrect since they could have dealt with page cache pages. Replace the calls to flush_kernel_dcache_page with calls to flush_dcache_page, which for all architectures either does exactly the same thing, or contains one or more of the following:
1) an optimization to defer the cache flush for page cache pages not mapped into userspace
2) additional flushing for mapped page cache pages if cache aliases are possible
Link: https://lkml.kernel.org/r/20210712060928.4161649-7-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Cc: Alex Shi <alexs@kernel.org>
Cc: Geoff Levand <geoff@infradead.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Guo Ren <guoren@kernel.org>
Cc: Helge Deller <deller@gmx.de>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Paul Cercueil <paul@crapouillou.net>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Vincent Chen <deanbo422@gmail.com>
Cc: Yoshinori Sato <ysato@users.osdn.me>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
- Aug 02, 2021
Christoph Hellwig authored
Use memcpy_from_bvec instead of open coding the logic.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20210727055646.118787-13-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
- Apr 12, 2021
Christoph Hellwig authored
blk_rq_append_bio is also used for the copy case, not just the map case, so this debug check is not correct.
Fixes: 393bb12e ("block: stop calling blk_queue_bounce for passthrough requests")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Link: https://lore.kernel.org/r/20210409150447.1977410-1-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
- Apr 06, 2021
Christoph Hellwig authored
Instead of overloading the passthrough fast path with the deprecated block layer bounce buffering, let the users that combine an old under-maintained driver with a highmem system pay the price by always falling back to copies in that case.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Link: https://lore.kernel.org/r/20210331073001.46776-9-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig authored
Remove the BLK_BOUNCE_ISA support now that all users are gone.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Link: https://lore.kernel.org/r/20210331073001.46776-7-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
- Mar 11, 2021
Christoph Hellwig authored
Ever since the addition of multipage bio_vecs, BIO_MAX_PAGES has been horribly confusingly misnamed. Rename it to BIO_MAX_VECS to stop confusing users of the bio API.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20210311110137.1132391-2-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
- Feb 26, 2021
Matthew Wilcox (Oracle) authored
It's often inconvenient to use BIO_MAX_PAGES due to min() requiring the sign to be the same. Introduce bio_max_segs() and change BIO_MAX_PAGES to be unsigned to make it easier for the users.
Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
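The helper is essentially a typed min(); a sketch matching the description:

    static inline unsigned int bio_max_segs(unsigned int nr_segs)
    {
            /* works because BIO_MAX_PAGES is now unsigned too */
            return min(nr_segs, BIO_MAX_PAGES);
    }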
- Sep 23, 2020
Christoph Hellwig authored
bmd is allocated using kmalloc in bio_alloc_map_data, so make sure is_null_mapped is properly initialized to false for the !null_mapped case.
Fixes: f3256075 ("block: remove the BIO_NULL_MAPPED flag")
Reported-by: Marc Hartmayer <mhartmay@linux.ibm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
- Sep 01, 2020
Christoph Hellwig authored
Just check if there is private data, in which case the bio must have originated from bio_copy_user_iov.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig authored
Just duplicate a small amount of code in the low-level map-into-the-bio and copy-to-the-bio routines, leading to much easier to follow and maintain code, and better shared error handling.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig authored
Open code __blk_rq_unmap_user in the two callers. Both never pass a NULL bio, and one of them can use an existing local variable instead of the bio flag.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig authored
We can simply use a boolean flag in the bio_map_data data structure instead.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
- May 14, 2020
Satya Tangirala authored
We must have some way of letting a storage device driver know what encryption context it should use for en/decrypting a request. However, it's the upper layers (like the filesystem/fscrypt) that know about and manage encryption contexts. As such, when the upper layer submits a bio to the block layer, and this bio eventually reaches a device driver with support for inline encryption, the device driver will need to have been told the encryption context for that bio. We want to communicate the encryption context from the upper layer to the storage device along with the bio, when the bio is submitted to the block layer.

To do this, we add a struct bio_crypt_ctx to struct bio, which can represent an encryption context (note that we can't use the bi_private field in struct bio to do this because that field does not function to pass information across layers in the storage stack). We also introduce various functions to manipulate the bio_crypt_ctx and make the bio/request merging logic aware of the bio_crypt_ctx.

We also make changes to blk-mq to make it handle bios with encryption contexts. blk-mq can merge many bios into the same request. These bios need to have contiguous data unit numbers (the necessary changes to blk-merge are also made to ensure this); as such, it suffices to keep the data unit number of just the first bio, since that's all a storage driver needs to infer the data unit number to use for each data block in each bio in a request. blk-mq keeps track of the encryption context to be used for all the bios in a request with the request's rq_crypt_ctx. When the first bio is added to an empty request, blk-mq will program the encryption context of that bio into the request_queue's keyslot manager, and store the returned keyslot in the request's rq_crypt_ctx.

All the functions to operate on encryption contexts are in blk-crypto.c. Upper layers only need to call bio_crypt_set_ctx with the encryption key, algorithm and data_unit_num; they don't have to worry about getting a keyslot for each encryption context, as blk-mq/blk-crypto handles that. Blk-crypto also makes it possible for request-based layered devices like dm-rq to make use of inline encryption hardware by cloning the rq_crypt_ctx and programming a keyslot in the new request_queue when necessary.

Note that any user of the block layer can submit bios with an encryption context, such as filesystems, device-mapper targets, etc.

Signed-off-by: Satya Tangirala <satyat@google.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
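From the submitter's side the API is deliberately small; a hedged usage sketch (key setup elided, and "crypto_key"/"first_dun" are illustrative names):

    /* attach the encryption context before submission; blk-mq and
     * blk-crypto handle keyslot programming when the bio reaches
     * an inline-encryption-capable queue */
    u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE] = { first_dun };

    bio_crypt_set_ctx(bio, &crypto_key, dun, GFP_NOIO);
    submit_bio(bio);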