Age    Commit message    Author    Files    Lines
2015-07-06ufs_inode_getblock(): pass indirect block number and full indexAl Viro1-46/+16
... instead of messing with buffer_head. We can bloody well do sb_bread() in there. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06ufs_inode_getblock(): pass index instead of 'fragment'Al Viro1-19/+13
The value passed to ufs_inode_getblock() as the 3rd argument had its lower bits ignored; the upper bits were shifted down and used, and they actually make sense - those are the _lower_ bits of the index in the indirect block (i.e. they form the index within a fragment within an indirect block). Pass those as the argument. The upper bits of the index (i.e. the number of the fragment within the indirect block) will join them shortly. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
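A minimal userspace sketch of the upper-bits/lower-bits split described above; the shift/mask constants are made up and only stand in for the real ufs_sb_private_info geometry:

    #include <stdint.h>

    /* Hypothetical geometry: 8 fragments per filesystem block. */
    #define FRAGS_PER_BLOCK_SHIFT 3
    #define FRAGS_PER_BLOCK_MASK  ((1u << FRAGS_PER_BLOCK_SHIFT) - 1)

    /*
     * Split a fragment offset within an indirect branch into the index of
     * the pointer inside the indirect block (upper bits) and the fragment
     * within the pointed-to block (lower bits).
     */
    static void split_fragment(uint64_t frag, unsigned int *index,
                               unsigned int *frag_in_block)
    {
        *index = frag >> FRAGS_PER_BLOCK_SHIFT;
        *frag_in_block = frag & FRAGS_PER_BLOCK_MASK;
    }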
2015-07-06ufs_inode_get{frag,block}(): leave sb_getblk() to callerAl Viro1-33/+55
just return the damn block number Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06ufs_getfrag_block(): get rid of macro junglesAl Viro1-29/+22
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06ufs_inode_get{frag,block}(): consolidate success exitsAl Viro1-28/+22
These calling conventions are rudiments of pre-2.3 times; they really need to be sanitized. This is the first step; next will be _always_ returning a block number, instead of this "return a pointer to buffer_head, except when we get to the actual data" crap. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06ufs: use the branch depth in ufs_getfrag_block()Al Viro1-6/+4
we'd already calculated it... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06ufs: move calculation of offsets into ufs_getfrag_block()Al Viro1-8/+9
... and massage ufs_frag_map() to take those instead of fragment number. As it is, we duplicate the damn thing on the write side, open-coded and bloody hard to follow. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06ufs_inode_get{frag,block}(): get rid of retriesAl Viro1-35/+8
We are holding ->truncate_mutex, so nobody else can alter our block pointers. Rechecks/retries were needed back when we only held BKL there, and had to cope with write_begin/writepage and writepage/truncate races. Can't happen anymore... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06__ufs_truncate_blocks(): avoid excessive dirtying of indirect blocksAl Viro1-3/+1
There's a case when an indirect block gets dirtied for no good reason - when there's a hole starting in the middle of area covered by it and spanning past its end, and truncate() is done precisely to the beginning of the hole. The block is obviously not modified at all - all removals happen beyond it. However, existing code ends up dirtying it just in case. It's trivial to fix and while it's not a real bug by any stretch of the imagination, it makes the damn thing harder to follow. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06free_full_branch(): don't bother modifying the block we are going to freeAl Viro1-12/+2
Note that it's already made unreachable from the inode, so we don't have to worry about ufs_frag_map() walking into something already freed. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06move marking inode dirty to the end of __ufs_truncate_blocks()Al Viro1-6/+1
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06free_full_branch(): saner calling conventionsAl Viro1-49/+51
Have caller fetch the block number *and* remove it from wherever it was. Pass the block number instead. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06ufs_trunc_branch(): kill recursionAl Viro1-26/+26
turn recursion into a pair of loops Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06ufs_trunc_branch(): massage towards killing recursionAl Viro1-5/+5
We always have 0 < depth2 <= depth in there, so
    if (--depth) {
        if (--depth2)
            A
        B
    } else {
        C    // not using depth2
    }
    D    // not using depth2
is equivalent to
    if (--depth2)
        A with s/depth/depth - 1/
    if (--depth)
        B
    else
        C
    D
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06split ufs_truncate_branch() into full- and partial-branch variantsAl Viro1-16/+58
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06ufs: unify the logics for collecting adjacent data blocks to freeAl Viro1-34/+22
open-coded in several places... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
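The pattern being unified looks roughly like this - a generic sketch, not the actual UFS helper, with free_blocks() standing in for the filesystem's "free a contiguous run" primitive:

    #include <stddef.h>
    #include <stdint.h>

    void free_blocks(uint64_t start, unsigned int count);   /* stand-in */

    /* Free the blocks referenced by an array of pointers, coalescing adjacent runs. */
    static void free_collected(const uint64_t *blocks, size_t n)
    {
        size_t i = 0;

        while (i < n) {
            uint64_t start;
            unsigned int count;

            while (i < n && blocks[i] == 0)     /* skip holes */
                i++;
            if (i == n)
                break;

            start = blocks[i];
            count = 1;
            while (i + count < n && blocks[i + count] == start + count)
                count++;

            free_blocks(start, count);          /* one call per contiguous run */
            i += count;
        }
    }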
2015-07-06ufs_trunc_branch(): separate the calls with non-NULL offsetsAl Viro1-4/+7
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06ufs_trunc_branch(): never call with offsets != NULL && depth2 == 0Al Viro1-3/+6
For calls in __ufs_truncate_blocks() it's just a matter of not incrementing offsets[0] and not making that call - the immediately following loop will be executed one extra time and we'll be just fine. For the recursive call in ufs_trunc_branch() itself, just assign NULL to offsets if we would be about to make such a call. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06__ufs_trunc_blocks(): turn the part after switch into a loopAl Viro1-25/+10
... and turn the switch into if (), since all cases with depth != 1 have just become identical. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06__ufs_truncate_blocks(): unify freeing the full branchesAl Viro1-15/+14
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06unify ufs_trunc_..indirect()Al Viro1-138/+60
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06ufs_trunc_..indirect(): more massage towards unifyingAl Viro1-17/+26
Instead of manually checking that the array contains only zeroes, find the position of the last non-zero (in __ufs_truncate(), where we can conveniently do that) and use that to tell if there's any non-zero in the array tail passed to ufs_trunc_...indirect(). The goal of all that clumsiness is to get these functions folded together. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
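A sketch of the "last non-zero position" trick (not the kernel code): compute the position once, and any tail that starts before it is known to contain a non-zero entry:

    #include <stddef.h>

    /* Return the index just past the last non-zero entry, or 0 if all are zero. */
    static size_t last_nonzero_end(const unsigned int *offsets, size_t n)
    {
        size_t i, end = 0;

        for (i = 0; i < n; i++)
            if (offsets[i])
                end = i + 1;
        return end;
    }

    /* The tail offsets[from..n) contains a non-zero entry iff from < end. */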
2015-07-06ufs_trunc_...indirect(): pass the array of indices instead of offsetsAl Viro1-28/+22
rather than bitslicing the offset just formed as sum of shifted indices, pass the array of those indices itself. NULL is used as equivalent of "all zeroes" (== free the entire branch). Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06__ufs_truncate(): find cutoff distances into branches by offsets[] arrayAl Viro1-2/+6
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06ufs_trunc_dindirect(): pass the number of blocks to keepAl Viro1-31/+26
same as the previous two. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06ufs_trunc_indirect(): pass the index of the first pointer to freeAl Viro1-33/+23
... instead of file offset. Same cleanups as in the tindirect conversion in previous commit. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06ufs_trunc_tindirect(): pass the number of blocks to keepAl Viro1-17/+11
IOW, the distance of the cutoff from the beginning of the branch (in blocks). That (and the fact that the block just prior to the cutoff is guaranteed to be present) allows us to tell whether to free the triple indirect block just by looking at the offset. While we are at it, using u64 for an index in the block is wrong - those should be unsigned int. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06ufs: beginning of __ufs_truncate_block() massageAl Viro1-4/+12
Use ufs_block_to_path() to find the cutoff path in the block pointers' tree. For now just use the information about the depth (to bypass the fully preserved subtrees); subsequent commits will use the information about actual path. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
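For readers without the source handy, the classic block-to-path decomposition (shared in spirit by ext2 and UFS) looks roughly like this; the constants are illustrative, not the real UFS geometry:

    #define DIRECT_BLOCKS   12      /* illustrative */
    #define PTRS_PER_BLOCK  256     /* illustrative */

    /* Decompose a block number into per-level indices; return the depth. */
    static int block_to_path(unsigned long block, unsigned int offsets[4])
    {
        int n = 0;

        if (block < DIRECT_BLOCKS) {
            offsets[n++] = block;
        } else if ((block -= DIRECT_BLOCKS) < PTRS_PER_BLOCK) {
            offsets[n++] = DIRECT_BLOCKS;           /* indirect pointer slot */
            offsets[n++] = block;
        } else if ((block -= PTRS_PER_BLOCK) <
                   (unsigned long)PTRS_PER_BLOCK * PTRS_PER_BLOCK) {
            offsets[n++] = DIRECT_BLOCKS + 1;       /* double indirect slot */
            offsets[n++] = block / PTRS_PER_BLOCK;
            offsets[n++] = block % PTRS_PER_BLOCK;
        } else {
            block -= (unsigned long)PTRS_PER_BLOCK * PTRS_PER_BLOCK;
            offsets[n++] = DIRECT_BLOCKS + 2;       /* triple indirect slot */
            offsets[n++] = block / ((unsigned long)PTRS_PER_BLOCK * PTRS_PER_BLOCK);
            offsets[n++] = (block / PTRS_PER_BLOCK) % PTRS_PER_BLOCK;
            offsets[n++] = block % PTRS_PER_BLOCK;
        }
        return n;
    }

The returned depth is the piece of information this commit starts using to bypass fully preserved subtrees.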
2015-07-06ufs: the offsets ufs_block_to_path() puts into array are not sector_tAl Viro1-3/+3
The type makes no sense - those are indices in block number arrays, not block numbers. And no, UFS is not likely to grow indirect blocks with 4G pointers in them... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06ufs: move truncate code into inode.cAl Viro4-533/+470
It is closely tied to block pointers handling there, can benefit from existing helpers, etc. - no point keeping them apart. Trimmed the trailing whitespaces in inode.c at the same time. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06ufs: no retries are needed on truncateAl Viro1-40/+17
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06ufs: ufs_trunc_...() has exclusion with everything that might cause allocationsAl Viro1-12/+0
Currently - on lock_ufs(), eventually - on a per-inode mutex. lock_ufs() used to be a mere BKL, which is much weaker, so it needed those rechecks. The BKL doesn't provide any exclusion once we lose the CPU; its blind replacement, OTOH, _does_. Making that per-filesystem was an atrocity, but at least we can simplify life here. And yes, we certainly need to make that sucker per-inode - these days the uses in inode.c and truncate.c are needed only to protect the block pointers. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06ufs: ufs_trunc_direct() always returns 0Al Viro1-6/+3
make it return void Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06ufs: kill lock_ufs()Al Viro2-37/+2
There were 3 remaining users; in two of them we took ->s_lock immediately after lock_ufs() and held it until just before unlock_ufs(); the third one (statfs) could not be called from itself or from the other two (remount and sync_fs). Just use ->s_lock in statfs and don't bother with lock_ufs at all. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06ufs: don't use lock_ufs() for block pointers tree protectionAl Viro5-47/+121
* stores to block pointers are under per-inode seqlock (meta_lock) and mutex (truncate_mutex)
* fetches of block pointers are either under truncate_mutex, or wrapped into a seqretry loop on meta_lock
* all changes of ->i_size are under truncate_mutex and i_mutex
* all changes of ->i_lastfrag are under truncate_mutex

It's similar to what ext2 is doing; the main difference is that unlike ext2 we can't rely upon the atomicity of stores into block pointers - on UFS2 they are 64bit. So we can't cut the corner when switching a pointer from NULL to non-NULL as we could in ext2_splice_branch() and need to use meta_lock on all modifications. We use a seqlock where ext2 uses an rwlock; ext2 could probably also benefit from such a change... Another non-trivial difference is that with UFS we *cannot* have the reader grab truncate_mutex in case of a race - it has to keep retrying. That might be possible to change, but not until we lift tail unpacking several levels up in the call chain. After that commit we do *NOT* hold fs-wide serialization on accesses to block pointers anymore. Moreover, lock_ufs() can become a normal mutex now - it's only used in statfs, remount and sync_fs and none of those uses are recursive. As a matter of fact, *now* it can be collapsed with ->s_lock, and be eventually replaced with saner per-cylinder-group spinlocks, but that's a separate story.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
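The general shape of that scheme, as a sketch (struct and function names here are illustrative, not the actual fs/ufs code): writers take the mutex plus the seqlock, lockless readers spin on read_seqbegin()/read_seqretry():

    #include <linux/types.h>
    #include <linux/seqlock.h>
    #include <linux/mutex.h>

    struct demo_inode_info {
        seqlock_t meta_lock;            /* guards stores to block pointers */
        struct mutex truncate_mutex;    /* serializes truncate/allocation  */
        u64 block[12];                  /* the pointers being protected    */
    };

    /* Writer side: every store is under both the mutex and the seqlock. */
    static void demo_set_block(struct demo_inode_info *info, int i, u64 val)
    {
        mutex_lock(&info->truncate_mutex);
        write_seqlock(&info->meta_lock);
        info->block[i] = val;
        write_sequnlock(&info->meta_lock);
        mutex_unlock(&info->truncate_mutex);
    }

    /* Reader side: no mutex - retry until a consistent snapshot is seen. */
    static u64 demo_get_block(struct demo_inode_info *info, int i)
    {
        unsigned int seq;
        u64 val;

        do {
            seq = read_seqbegin(&info->meta_lock);
            val = info->block[i];
        } while (read_seqretry(&info->meta_lock, seq));

        return val;
    }

As the message notes, the UFS reader cannot fall back to taking truncate_mutex on contention - it simply keeps retrying.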
2015-07-06ufs: bforget() indirect blocks before freeing themAl Viro1-3/+3
right now it doesn't matter (lock_ufs() serializes everything), but when we switch to per-inode locking, it will be needed. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06ufs: move lock_ufs() down into __ufs_truncate_blocks()Al Viro1-7/+2
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06ufs: move truncate_setsize() down into ufs_truncate()Al Viro1-16/+11
just prior to __ufs_truncate_blocks(), with matching change of calling conventions Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06ufs: free excessive blocks upon ->write_begin() failure/short copyAl Viro1-2/+16
Broken in "[PATCH] ufs: truncate should allocate block for last byte"; all way back in 2006. ufs_setattr() hadn't been the only user of vmtruncate() and eliminating ->truncate() method required corrections in a bunch of places. Eventually those places had migrated into ->write_begin() failure exit and ->write_end() after short copy... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06ufs: switch ufs_evict_inode() to trimmed-down variant of ufs_truncate()Al Viro3-27/+44
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-06ufs: kill more lock_ufs() callsAl Viro2-13/+4
a) move it inside ufs_truncate()
b) ufs_free_inode() doesn't need it - it's serialized on ->s_lock
c) ufs_write_inode() doesn't need it either (and can be called without it anyway).
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-05Linux 4.2-rc1v4.2-rc1Linus Torvalds1-2/+2
2015-07-05Merge tag 'platform-drivers-x86-v4.2-2' of ↵Linus Torvalds7-45/+1004
git://git.infradead.org/users/dvhart/linux-platform-drivers-x86

Pull late x86 platform driver updates from Darren Hart:
 "The following came in a bit later and I wanted them to bake in next a few more days before submitting, thus the second pull. A new intel_pmc_ipc driver, a symmetrical allocation and free fix in dell-laptop, a couple minor fixes, and some updated documentation in the dell-laptop comments.

  intel_pmc_ipc:
   - Add Intel Apollo Lake PMC IPC driver

  tc1100-wmi:
   - Delete an unnecessary check before the function call "kfree"

  dell-laptop:
   - Fix allocating & freeing SMI buffer page
   - Show info about WiGig and UWB in debugfs
   - Update information about wireless control"

* tag 'platform-drivers-x86-v4.2-2' of git://git.infradead.org/users/dvhart/linux-platform-drivers-x86:
  intel_pmc_ipc: Add Intel Apollo Lake PMC IPC driver
  tc1100-wmi: Delete an unnecessary check before the function call "kfree"
  dell-laptop: Fix allocating & freeing SMI buffer page
  dell-laptop: Show info about WiGig and UWB in debugfs
  dell-laptop: Update information about wireless control
2015-07-04Merge branch 'for-linus' of ↵Linus Torvalds99-553/+784
git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs

Pull more vfs updates from Al Viro:
 "Assorted VFS fixes and related cleanups (IMO the most interesting in that part are f_path-related things and Eric's descriptor-related stuff). UFS regression fixes (it got broken last cycle). 9P fixes. fs-cache series, DAX patches, Jan's file_remove_suid() work"

[ I'd say this is much more than "fixes and related cleanups". The file_table locking rule change by Eric Dumazet is a rather big and fundamental update even if the patch isn't huge. - Linus ]

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (49 commits)
  9p: cope with bogus responses from server in p9_client_{read,write}
  p9_client_write(): avoid double p9_free_req()
  9p: forgetting to cancel request on interrupted zero-copy RPC
  dax: bdev_direct_access() may sleep
  block: Add support for DAX reads/writes to block devices
  dax: Use copy_from_iter_nocache
  dax: Add block size note to documentation
  fs/file.c: __fget() and dup2() atomicity rules
  fs/file.c: don't acquire files->file_lock in fd_install()
  fs:super:get_anon_bdev: fix race condition could cause dev exceed its upper limitation
  vfs: avoid creation of inode number 0 in get_next_ino
  namei: make set_root_rcu() return void
  make simple_positive() public
  ufs: use dir_pages instead of ufs_dir_pages()
  pagemap.h: move dir_pages() over there
  remove the pointless include of lglock.h
  fs: cleanup slight list_entry abuse
  xfs: Correctly lock inode when removing suid and file capabilities
  fs: Call security_ops->inode_killpriv on truncate
  fs: Provide function telling whether file_remove_privs() will do anything
  ...
2015-07-04bluetooth: fix list handlingLinus Torvalds2-2/+3
Commit 835a6a2f8603 ("Bluetooth: Stop sabotaging list poisoning") thought that the code was sabotaging the list poisoning when NULL'ing out the list pointers and removed it. But what was going on was that the bluetooth code was using NULL pointers for the list as a way to mark it empty, and that commit just broke it (and replaced the test with NULL with a "list_empty()" test on an uninitialized list instead, breaking things even further). So fix it all up to use the regular and real list_empty() handling (which does not use NULL, but a pointer to itself), also making sure to initialize the list properly (the previous NULL case was initialized implicitly by the session being allocated with kzalloc()). This is a combination of patches by Marcel Holtmann and Tedd Ho-Jeong An. [ I would normally expect to get this through the bt tree, but I'm going to release -rc1, so I'm just committing this directly - Linus ] Reported-and-tested-by: Jörg Otte <jrg.otte@gmail.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Original-by: Tedd Ho-Jeong An <tedd.an@intel.com> Original-by: Marcel Holtmann <marcel@holtmann.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
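As a generic reminder of the convention involved (not the bluetooth code itself): an empty list_head points to itself, so emptiness is tested with list_empty() after INIT_LIST_HEAD(), never by comparing against NULL:

    #include <linux/list.h>
    #include <linux/slab.h>

    struct demo_session {
        struct list_head conns;     /* must be initialized before use */
    };

    static struct demo_session *demo_session_new(gfp_t gfp)
    {
        struct demo_session *s = kzalloc(sizeof(*s), gfp);

        if (!s)
            return NULL;
        /* kzalloc() left this NULL - that is *not* a valid empty list */
        INIT_LIST_HEAD(&s->conns);
        return s;
    }

    static bool demo_session_idle(const struct demo_session *s)
    {
        return list_empty(&s->conns);   /* head pointing to itself == empty */
    }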
2015-07-04Merge branch 'for-next' of ↵Linus Torvalds68-6312/+3163
git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending

Pull SCSI target updates from Nicholas Bellinger:
 "It's been a busy development cycle for target-core in a number of different areas. The fabric API usage for se_node_acl allocation is now within target-core code, dropping the external API callers for all fabric drivers tree-wide.

  There is a new conversion to RCU hlists for se_node_acl and se_portal_group LUN mappings, that turns fast-path LUN lookup into a completely lockless code-path. It also removes the original hard-coded limitation of 256 LUNs per fabric endpoint.

  The configfs attributes for backends can now be shared between core and driver code, allowing existing drivers to use common code while still allowing flexibility for new backend provided attributes.

  The highlights include:
   - Merge sbc_verify_dif_* into common code (sagi)
   - Remove iscsi-target support for obsolete IFMarker/OFMarker (Christophe Vu-Brugier)
   - Add bidi support in target/user backend (ilias + vangelis + agover)
   - Move se_node_acl allocation into target-core code (hch)
   - Add crc_t10dif_update common helper (akinobu + mkp)
   - Handle target-core odd SGL mapping for data transfer memory (akinobu)
   - Move transport ID handling into target-core (hch)
   - Move task tag into struct se_cmd + support 64-bit tags (bart)
   - Convert se_node_acl->device_list[] to RCU hlist (nab + hch + paulmck)
   - Convert se_portal_group->tpg_lun_list[] to RCU hlist (nab + hch + paulmck)
   - Simplify target backend driver registration (hch)
   - Consolidate + simplify target backend attribute implementations (hch + nab)
   - Subsume se_port + t10_alua_tg_pt_gp_member into se_lun (hch)
   - Drop lun_sep_lock for se_lun->lun_se_dev RCU usage (hch + nab)
   - Drop unnecessary core_tpg_register TFO parameter (nab)
   - Use 64-bit LUNs tree-wide (hannes)
   - Drop left-over TARGET_MAX_LUNS_PER_TRANSPORT limit (hannes)"

* 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending: (76 commits)
  target: Bump core version to v5.0
  target: remove target_core_configfs.h
  target: remove unused TARGET_CORE_CONFIG_ROOT define
  target: consolidate version defines
  target: implement WRITE_SAME with UNMAP bit using ->execute_unmap
  target: simplify UNMAP handling
  target: replace se_cmd->execute_rw with a protocol_data field
  target/user: Fix inconsistent kmap_atomic/kunmap_atomic
  target: Send UA when changing LUN inventory
  target: Send UA upon LUN RESET tmr completion
  target: Send UA on ALUA target port group change
  target: Convert se_lun->lun_deve_lock to normal spinlock
  target: use 'se_dev_entry' when allocating UAs
  target: Remove 'ua_nacl' pointer from se_ua structure
  target_core_alua: Correct UA handling when switching states
  xen-scsiback: Fix compile warning for 64-bit LUN
  target: Remove TARGET_MAX_LUNS_PER_TRANSPORT
  target: use 64-bit LUNs
  target: Drop duplicate + unused se_dev_check_wce
  target: Drop unnecessary core_tpg_register TFO parameter
  ...
2015-07-04Merge tag 'ntb-4.2' of git://github.com/jonmason/ntbLinus Torvalds23-2816/+5545
Pull NTB updates from Jon Mason:
 "This includes a pretty significant reworking of the NTB core code, but has already produced some significant performance improvements.

  An abstraction layer was added to allow the hardware and clients to be easily added. This required rewriting the NTB transport layer for this abstraction layer. This modification will allow future "high performance" NTB clients.

  In addition to this change, a number of performance modifications were added. These changes include NUMA enablement, using CPU memcpy instead of asyncdma, and modification of NTB layer MTU size"

* tag 'ntb-4.2' of git://github.com/jonmason/ntb: (22 commits)
  NTB: Add split BAR output for debugfs stats
  NTB: Change WARN_ON_ONCE to pr_warn_once on unsafe
  NTB: Print driver name and version in module init
  NTB: Increase transport MTU to 64k from 16k
  NTB: Rename Intel code names to platform names
  NTB: Default to CPU memcpy for performance
  NTB: Improve performance with write combining
  NTB: Use NUMA memory in Intel driver
  NTB: Use NUMA memory and DMA chan in transport
  NTB: Rate limit ntb_qp_link_work
  NTB: Add tool test client
  NTB: Add ping pong test client
  NTB: Add parameters for Intel SNB B2B addresses
  NTB: Reset transport QP link stats on down
  NTB: Do not advance transport RX on link down
  NTB: Differentiate transport link down messages
  NTB: Check the device ID to set errata flags
  NTB: Enable link for Intel root port mode in probe
  NTB: Read peer info from local SPAD in transport
  NTB: Split ntb_hw_intel and ntb_transport drivers
  ...
2015-07-049p: cope with bogus responses from server in p9_client_{read,write}Al Viro1-0/+8
If the server claims to have written/read more than we'd told it to, warn and cap the claimed byte count to avoid advancing more than we are ready to.
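The defensive pattern amounts to clamping the server-reported count to what was actually requested before advancing the iterator - a sketch with illustrative names, not the exact net/9p diff:

    #include <linux/kernel.h>

    /* Clamp a server-claimed transfer size to what we actually asked for. */
    static u32 clamp_claimed_count(u32 claimed, u32 requested, const char *op)
    {
        if (claimed > requested) {
            pr_warn("bogus %s count (%u > %u)\n", op, claimed, requested);
            claimed = requested;
        }
        return claimed;
    }

    /* The caller then advances its iov_iter by the clamped value only. */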
2015-07-04p9_client_write(): avoid double p9_free_req()Al Viro1-0/+1
Braino in "9p: switch p9_client_write() to passing it struct iov_iter *"; if the response is impossible to parse and we discard the request, get out of the loop right there. Cc: stable@vger.kernel.org Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-07-049p: forgetting to cancel request on interrupted zero-copy RPCAl Viro1-1/+2
If we'd already sent a request and decide to abort it, we *must* issue TFLUSH properly and not just blindly reuse the tag, or we'll get seriously screwed when the response eventually arrives and we confuse it for the response to a later request that had reused the same tag. Cc: stable@vger.kernel.org # v3.2 and later Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>