- Jan 12, 2024
-
-
David Howells authored
rxrpc normally has the Don't Fragment flag set on the UDP packets it transmits, except when it has decided that DATA packets aren't getting through - in which case it turns it off just for the DATA transmissions. This can be a problem, however, for RESPONSE packets that convey authentication and crypto data from the client to the server, as the ticket may be larger than will fit in the MTU. In such a case, rxrpc gets itself into an infinite loop as sendmsg() returns an error (EMSGSIZE), which causes rxkad_send_response() to return -EAGAIN - and the CHALLENGE packet is put back on the Rx queue to retry, leading to the I/O thread endlessly attempting to perform the transmission. Fix this by disabling DF on RESPONSE packets for now. The use of DF and best data MTU determination needs reconsidering at some point in the future. Fixes: 17926a79 ("[AF_RXRPC]: Provide secure RxRPC sockets for use by userspace and kernel both") Reported-by:
Marc Dionne <marc.dionne@auristor.com> Signed-off-by:
David Howells <dhowells@redhat.com> cc: linux-afs@lists.infradead.org Acked-by:
Paolo Abeni <pabeni@redhat.com> Link: https://lore.kernel.org/r/1581852.1704813048@warthog.procyon.org.uk Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
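A minimal userspace C sketch of the per-socket Don't Fragment toggle the commit above is describing, expressed with the standard IP_MTU_DISCOVER socket option rather than the in-kernel rxrpc code; the helper name is illustrative.

```c
#include <netinet/in.h>
#include <sys/socket.h>

/* Toggle Don't Fragment behaviour on a UDP socket.  With IP_PMTUDISC_DO
 * the kernel sets DF and an oversized datagram makes sendmsg() fail with
 * EMSGSIZE; with IP_PMTUDISC_DONT the packet is fragmented instead.
 */
static int set_df(int fd, int want_df)
{
	int val = want_df ? IP_PMTUDISC_DO : IP_PMTUDISC_DONT;

	return setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER, &val, sizeof(val));
}
```

The EMSGSIZE failure with DF set is exactly the condition that trapped rxkad_send_response() in the retry loop described above; clearing DF for RESPONSE packets lets the stack fragment the oversized ticket instead.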
-
- Jan 07, 2024
-
-
David Howells authored
Fix rxrpc_cleanup_ring() to use rxrpc_purge_queue() rather than skb_queue_purge() so that the count of outstanding skbuffs is correctly updated when a failed call is cleaned up. Without this, rmmod may hang waiting for rxrpc_n_rx_skbs to become zero. Fixes: 5d7edbc9 ("rxrpc: Get rid of the Rx ring") Reported-by:
Marc Dionne <marc.dionne@auristor.com> Signed-off-by:
David Howells <dhowells@redhat.com> cc: "David S. Miller" <davem@davemloft.net> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org Signed-off-by:
David S. Miller <davem@davemloft.net>
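A hedged sketch of the difference the fix relies on: skb_queue_purge() frees the queued skbs but knows nothing about rxrpc's outstanding-skb accounting, whereas a counted purge in the spirit of rxrpc_purge_queue() keeps the counter in step. The helper body and the counter's type are assumptions; only the counter name rxrpc_n_rx_skbs comes from the commit text.

```c
#include <linux/atomic.h>
#include <linux/skbuff.h>

static atomic_t rxrpc_n_rx_skbs;	/* real counter name, assumed type */

/* Drain a queue while keeping the outstanding-skb count accurate -
 * the accounting step that a bare skb_queue_purge() would skip.
 */
static void counted_purge(struct sk_buff_head *queue)
{
	struct sk_buff *skb;

	while ((skb = skb_dequeue(queue)) != NULL) {
		atomic_dec(&rxrpc_n_rx_skbs);
		kfree_skb(skb);
	}
}
```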
-
- Dec 24, 2023
-
-
David Howells authored
Create /proc/net/rxrpc/bundles to display outstanding rxrpc client connection bundles. Signed-off-by:
David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
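For context, a hedged sketch of how a read-only /proc/net file like this is typically wired up with proc_create_net() and a set of seq_operations. Everything here is illustrative boilerplate: the real file lives under the rxrpc subdirectory and its show routine walks the bundle list rather than printing a stub header.

```c
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <linux/seq_file_net.h>
#include <net/net_namespace.h>

static void *bundles_seq_start(struct seq_file *m, loff_t *pos)
{
	/* Real code would position itself in the bundle list here. */
	return *pos ? NULL : SEQ_START_TOKEN;
}

static void *bundles_seq_next(struct seq_file *m, void *v, loff_t *pos)
{
	(*pos)++;
	return NULL;
}

static void bundles_seq_stop(struct seq_file *m, void *v)
{
}

static int bundles_seq_show(struct seq_file *m, void *v)
{
	if (v == SEQ_START_TOKEN)
		seq_puts(m, "Proto Local Remote ...\n");	/* placeholder header */
	return 0;
}

static const struct seq_operations example_bundle_seq_ops = {
	.start	= bundles_seq_start,
	.next	= bundles_seq_next,
	.stop	= bundles_seq_stop,
	.show	= bundles_seq_show,
};

static int __net_init example_proc_init(struct net *net)
{
	if (!proc_create_net("bundles", 0444, net->proc_net,
			     &example_bundle_seq_ops,
			     sizeof(struct seq_net_private)))
		return -ENOMEM;
	return 0;
}
```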
-
David Howells authored
Change rxrpc's API such that: (1) A new function, rxrpc_kernel_lookup_peer(), is provided to look up an rxrpc_peer record for a remote address and a corresponding function, rxrpc_kernel_put_peer(), is provided to dispose of it again. (2) When setting up a call, the rxrpc_peer object used during a call is now passed in rather than being set up by rxrpc_connect_call(). For afs, this meant passing it to rxrpc_kernel_begin_call() rather than the full address (the service ID then has to be passed in as a separate parameter). (3) A new function, rxrpc_kernel_remote_addr(), is added so that afs can get a pointer to the transport address for display purposes, and another, rxrpc_kernel_remote_srx(), to gain a pointer to the full rxrpc address. (4) The function to retrieve the RTT from a call, rxrpc_kernel_get_srtt(), is then altered to take a peer. This now returns the RTT or -1 if there are insufficient samples. (5) Rename rxrpc_kernel_get_peer() to rxrpc_kernel_call_get_peer(). (6) Provide a new function, rxrpc_kernel_get_peer(), to get a ref on a peer the caller already has. This allows the afs filesystem to pin the rxrpc_peer records that it is using, allowing faster lookups and pointer comparisons rather than comparing sockaddr_rxrpc contents. It also makes it easier to get hold of the RTT. The following changes are made to afs: (1) The addr_list struct's addrs[] elements now hold a peer struct pointer and a service ID rather than a sockaddr_rxrpc. (2) When displaying the transport address, rxrpc_kernel_remote_addr() is used. (3) The port arg is removed from afs_alloc_addrlist() since it's always overridden. (4) afs_merge_fs_addr4() and afs_merge_fs_addr6() do peer lookup and may now return an error that must be handled. (5) afs_find_server() now takes a peer pointer to specify the address. (6) afs_find_server(), afs_compare_fs_alists() and afs_merge_fs_addr[46]{} now do peer pointer comparison rather than address comparison. Signed-off-by:
David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
-
Oleg Nesterov authored
rxrpc_find_service_conn_rcu() should make the "seq" counter odd on the second pass, otherwise read_seqbegin_or_lock() never takes the lock. Signed-off-by:
Oleg Nesterov <oleg@redhat.com> Signed-off-by:
David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org Link: https://lore.kernel.org/r/20231117164846.GA10410@redhat.com/
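A hedged sketch of the read_seqbegin_or_lock() retry pattern the fix restores: the first pass runs locklessly with an even sequence value, and if a retry is needed the sequence must be made odd so that the second pass takes the lock for real. The lock parameter and the lookup stub are hypothetical stand-ins, not the actual rxrpc service-connection walk.

```c
#include <linux/seqlock.h>

/* Stand-in for the connection tree walk done under the seqlock. */
static void *do_lookup(void)
{
	return NULL;
}

static void *lookup_stable(seqlock_t *lock)
{
	void *found;
	int seq = 0;

	do {
		/* Even seq: lockless pass.  Odd seq: takes the lock. */
		read_seqbegin_or_lock(lock, &seq);
		found = do_lookup();
		if (!need_seqretry(lock, seq))
			break;
		seq = 1;	/* the point of the fix: the retry must lock */
	} while (1);
	done_seqretry(lock, seq);

	return found;
}
```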
-
- Nov 17, 2023
-
-
David Howells authored
Defer the generation of a PING RESPONSE ACK in response to a PING ACK until we've parsed the PING ACK so that we pick up any changes to the packet queue and can update ackinfo. This is also applied to an ACK generated in response to an ACK with the REQUEST_ACK flag set. Note that whilst the problem was added in commit 248f219c, it didn't really matter at that point because the ACK was proposed in softirq mode and generated asynchronously later in process context, taking the latest values at the time. But this fix is only needed since the move to parse incoming packets in an I/O thread rather than in softirq and generate the ACK at point of proposal (b0346843). Fixes: 248f219c ("rxrpc: Rewrite the data and ack handling code") Signed-off-by:
David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: "David S. Miller" <davem@davemloft.net> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org Signed-off-by:
David S. Miller <davem@davemloft.net>
-
David Howells authored
Fix RTT determination to be able to use any type of ACK as the response from which RTT can be calculated provided its ack.serial is non-zero and matches the serial number of an outgoing DATA or ACK packet. This shouldn't be limited to REQUESTED-type ACKs as these can have other types substituted for them for things like duplicate or out-of-order packets. Fixes: 4700c4d8 ("rxrpc: Fix loss of RTT samples due to interposed ACK") Signed-off-by:
David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: "David S. Miller" <davem@davemloft.net> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org Signed-off-by:
David S. Miller <davem@davemloft.net>
-
David Howells authored
Fix some superficial issues with the tracing of rxrpc_bundle structs, including: (1) Set the debug_id when the bundle is allocated rather than when it is set up so that the "NEW" trace line displays the correct bundle ID. (2) Show the refcount when emitting the "FREE" traceline. Signed-off-by:
David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: "David S. Miller" <davem@davemloft.net> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org Signed-off-by:
David S. Miller <davem@davemloft.net>
-
- Nov 02, 2023
-
-
David Howells authored
Fix two connection reaping bugs: (1) rxrpc_connection_expiry is in units of seconds, so rxrpc_disconnect_call() needs to multiply it by HZ when adding it to jiffies. (2) rxrpc_client_conn_reap_timeout() should set RXRPC_CLIENT_REAP_TIMER if local->kill_all_client_conns is clear, not if it is set (in which case we don't need the timer). Without this, old client connections don't get cleaned up until the local endpoint is cleaned up. Fixes: 5040011d ("rxrpc: Make the local endpoint hold a ref on a connected call") Fixes: 0d6bf319 ("rxrpc: Move the client conn cache management to the I/O thread") Signed-off-by:
David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org Link: https://lore.kernel.org/r/783911.1698364174@warthog.procyon.org.uk Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
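A tiny hedged sketch of fix (1): the expiry value is a count of seconds, so it must be scaled by HZ before being added to jiffies. The variable and function names are illustrative.

```c
#include <linux/jiffies.h>
#include <linux/timer.h>

/* Expiry period in seconds, standing in for rxrpc_connection_expiry. */
static const unsigned long example_conn_expiry = 60;

static void example_set_reap_timer(struct timer_list *timer)
{
	/* Correct: scale the seconds value by HZ before adding to jiffies. */
	mod_timer(timer, jiffies + example_conn_expiry * HZ);

	/* The buggy form added the raw value, jiffies + example_conn_expiry,
	 * i.e. 60 jiffies rather than 60 seconds.
	 */
}
```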
-
- Jun 24, 2023
-
-
David Howells authored
Remove ->sendpage() and ->sendpage_locked(). sendmsg() with MSG_SPLICE_PAGES should be used instead. This allows multiple pages and multipage folios to be passed through. Signed-off-by:
David Howells <dhowells@redhat.com> Acked-by: Marc Kleine-Budde <mkl@pengutronix.de> # for net/can cc: Jens Axboe <axboe@kernel.dk> cc: Matthew Wilcox <willy@infradead.org> cc: linux-afs@lists.infradead.org cc: mptcp@lists.linux.dev cc: rds-devel@oss.oracle.com cc: tipc-discussion@lists.sourceforge.net cc: virtualization@lists.linux-foundation.org Link: https://lore.kernel.org/r/20230623225513.2732256-16-dhowells@redhat.com Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
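A hedged sketch of the sendmsg()+MSG_SPLICE_PAGES idiom that replaces ->sendpage(): wrap the page in a bio_vec, point the message iterator at it and set MSG_SPLICE_PAGES so the protocol may splice the page rather than copy it. This is not the call-site code of any particular converted protocol.

```c
#include <linux/bvec.h>
#include <linux/net.h>
#include <linux/socket.h>
#include <linux/uio.h>

/* Splice a single page into a socket without copying, where the
 * protocol supports MSG_SPLICE_PAGES.
 */
static int splice_one_page(struct socket *sock, struct page *page,
			   unsigned int offset, unsigned int len)
{
	struct bio_vec bv;
	struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES };

	bvec_set_page(&bv, page, len, offset);
	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bv, 1, len);
	return sock_sendmsg(sock, &msg);
}
```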
-
- May 30, 2023
-
-
UTS_RELEASE has a maximum length of 64 which can cause rxrpc_version to exceed the 65 byte message limit. Per the rx spec[1]: "If a server receives a packet with a type value of 13, and the client-initiated flag set, it should respond with a 65-byte payload containing a string that identifies the version of AFS software it is running." The current implementation causes a compile error when WERROR is turned on and/or UTS_RELEASE exceeds a length of 49 (making the version string more than 64 characters). Fix this by generating the string during module initialisation and limiting the UTS_RELEASE segment of the string so that it does not exceed 49 characters. We need to make sure that the 64 bytes include "linux-" at the front and " AF_RXRPC" at the back as this may be used in pattern matching. Fixes: 44ba0698 ("RxRPC: Handle VERSION Rx protocol packets") Reported-by:
Kenny Ho <Kenny.Ho@amd.com> Link: https://lore.kernel.org/r/20230523223944.691076-1-Kenny.Ho@amd.com/ Signed-off-by:
David Howells <dhowells@redhat.com> Acked-by:
Kenny Ho <Kenny.Ho@amd.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: Andrew Lunn <andrew@lunn.ch> cc: David Laight <David.Laight@ACULAB.COM> cc: "David S. Miller" <davem@davemloft.net> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org Link: https://web.mit.edu/kolya/afs/rx/rx-spec [1] Reviewed-by:
Simon Horman <simon.horman@corigine.com> Reviewed-by:
Jeffrey Altman <jaltman@auristor.com> Link: https://lore.kernel.org/r/654974.1685100894@warthog.procyon.org.uk Signed-off-by:
Paolo Abeni <pabeni@redhat.com>
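A hedged sketch of the approach: build the string once at module init with the UTS_RELEASE portion clamped so that "linux-" plus the release plus " AF_RXRPC" never exceeds 64 characters. The buffer name and the exact width handling are illustrative, not the exact rxrpc code.

```c
#include <linux/kernel.h>
#include <generated/utsrelease.h>

/* 64 visible characters plus the trailing NUL. */
static char example_version_string[65];

static void example_build_version(void)
{
	/* "linux-" (6) + at most 49 release chars + " AF_RXRPC" (9) = 64. */
	snprintf(example_version_string, sizeof(example_version_string),
		 "linux-%.49s AF_RXRPC", UTS_RELEASE);
}
```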
-
- May 25, 2023
-
-
Tejun Heo authored
BACKGROUND
==========
When multiple work items are queued to a workqueue, their execution order doesn't match the queueing order. They may get executed in any order and simultaneously. When fully serialized execution - one by one in the queueing order - is needed, an ordered workqueue should be used which can be created with alloc_ordered_workqueue().

However, alloc_ordered_workqueue() was a later addition. Before it, an ordered workqueue could be obtained by creating an UNBOUND workqueue with @max_active==1. This originally was an implementation side-effect which was broken by 4c16bd32 ("workqueue: implement NUMA affinity for unbound workqueues"). Because there were users that depended on the ordered execution, 5c0338c6 ("workqueue: restore WQ_UNBOUND/max_active==1 to be ordered") made the workqueue allocation path implicitly promote UNBOUND workqueues w/ @max_active==1 to ordered workqueues.

While this has worked okay, overloading the UNBOUND allocation interface this way creates other issues. It's difficult to tell whether a given workqueue actually needs to be ordered and users that legitimately want a min concurrency level wq unexpectedly get an ordered one instead. With planned UNBOUND workqueue updates to improve execution locality and more prevalence of chiplet designs which can benefit from such improvements, this isn't a state we wanna be in forever.

This patch series audits all callsites that create an UNBOUND workqueue w/ @max_active==1 and converts them to alloc_ordered_workqueue() as necessary.

WHAT TO LOOK FOR
================
The conversions are from alloc_workqueue(WQ_UNBOUND | flags, 1, args..) to alloc_ordered_workqueue(flags, args...) which don't cause any functional changes. If you know that fully ordered execution is not necessary, please let me know. I'll drop the conversion and instead add a comment noting the fact to reduce confusion while conversion is in progress.

If you aren't fully sure, it's completely fine to let the conversion through. The behavior will stay exactly the same and we can always reconsider later. As there are follow-up workqueue core changes, I'd really appreciate if the patch can be routed through the workqueue tree w/ your acks. Thanks. Signed-off-by:
Tejun Heo <tj@kernel.org> Cc: David Howells <dhowells@redhat.com> Cc: Marc Dionne <marc.dionne@auristor.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Eric Dumazet <edumazet@google.com> Cc: Jakub Kicinski <kuba@kernel.org> Cc: Paolo Abeni <pabeni@redhat.com> Cc: linux-afs@lists.infradead.org Cc: netdev@vger.kernel.org
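A hedged before/after sketch of the conversion the series performs; the workqueue name and flags are placeholders.

```c
#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *example_wq;

static int example_init(void)
{
	/* Before: relied on the implicit "UNBOUND + max_active==1" promotion
	 * to ordered behaviour:
	 *   example_wq = alloc_workqueue("example", WQ_UNBOUND | WQ_MEM_RECLAIM, 1);
	 */

	/* After: ask for ordered execution explicitly. */
	example_wq = alloc_ordered_workqueue("example", WQ_MEM_RECLAIM);
	return example_wq ? 0 : -ENOMEM;
}
```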
-
- May 01, 2023
-
-
David Howells authored
afs_make_call() calls rxrpc_kernel_begin_call() to begin a call (which may get stalled in the background waiting for a connection to become available); it then calls rxrpc_kernel_set_max_life() to set the timeouts - but that starts the call timer so the call timer might then expire before we get a connection assigned - leading to the following oops if the call stalled:

    BUG: kernel NULL pointer dereference, address: 0000000000000000
    ...
    CPU: 1 PID: 5111 Comm: krxrpcio/0 Not tainted 6.3.0-rc7-build3+ #701
    RIP: 0010:rxrpc_alloc_txbuf+0xc0/0x157
    ...
    Call Trace:
     <TASK>
     rxrpc_send_ACK+0x50/0x13b
     rxrpc_input_call_event+0x16a/0x67d
     rxrpc_io_thread+0x1b6/0x45f
     ? _raw_spin_unlock_irqrestore+0x1f/0x35
     ? rxrpc_input_packet+0x519/0x519
     kthread+0xe7/0xef
     ? kthread_complete_and_exit+0x1b/0x1b
     ret_from_fork+0x22/0x30

Fix this by noting the timeouts in struct rxrpc_call when the call is created. The timer will be started when the first packet is transmitted. It shouldn't be possible to trigger this directly from userspace through AF_RXRPC as sendmsg() will return EBUSY if the call is in the waiting-for-conn state if it dropped out of the wait due to a signal. Fixes: 9d35d880 ("rxrpc: Move client call connection to the I/O thread") Reported-by:
Marc Dionne <marc.dionne@auristor.com> Signed-off-by:
David Howells <dhowells@redhat.com> cc: "David S. Miller" <davem@davemloft.net> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org cc: linux-kernel@vger.kernel.org Signed-off-by:
David S. Miller <davem@davemloft.net>
-
David Howells authored
When sendmsg() creates an rxrpc call, it queues it to wait for a connection and channel to be assigned and then waits before it can start shovelling data as the encrypted DATA packet content includes a summary of the connection parameters. However, sendmsg() may get interrupted before a connection gets assigned and further sendmsg() calls will fail with EBUSY until an assignment is made. Fix this so that the call can at least be aborted without failing on EBUSY. We have to be careful here as sendmsg() mustn't be allowed to start the call timer if the call doesn't yet have a connection assigned as an oops may follow shortly thereafter. Fixes: 540b1c48 ("rxrpc: Fix deadlock between call creation and sendmsg/recvmsg") Reported-by:
Marc Dionne <marc.dionne@auristor.com> Signed-off-by:
David Howells <dhowells@redhat.com> cc: "David S. Miller" <davem@davemloft.net> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org cc: linux-kernel@vger.kernel.org Signed-off-by:
David S. Miller <davem@davemloft.net>
-
David Howells authored
The hard call timeout is specified in the RXRPC_SET_CALL_TIMEOUT cmsg in seconds, so fix the point at which sendmsg() applies it to the call so that it converts to jiffies from seconds, not milliseconds. Fixes: a158bdd3 ("rxrpc: Fix timeout of a call that hasn't yet been granted a channel") Signed-off-by:
David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: "David S. Miller" <davem@davemloft.net> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org cc: linux-kernel@vger.kernel.org Signed-off-by:
David S. Miller <davem@davemloft.net>
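A hedged sketch of the unit fix: the cmsg value is a number of seconds, so it should be scaled by HZ rather than run through a milliseconds conversion. Function and variable names are illustrative.

```c
#include <linux/jiffies.h>

static unsigned long hard_timeout_to_jiffies(unsigned int timeout_secs)
{
	/* The buggy form treated the value as milliseconds,
	 * msecs_to_jiffies(timeout_secs), making the hard timeout fire
	 * roughly a thousand times too early.
	 */
	return timeout_secs * HZ;	/* correct: the cmsg carries seconds */
}
```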
-
- Apr 27, 2023
-
-
Inside the loop in rxrpc_wait_to_be_connected() it checks call->error to see if it should exit the loop without first checking the call state. This is probably safe as if call->error is set, the call is dead anyway, but we should probably wait for the call state to have been set to completion first, lest it cause surprise on the way out. Fix this by only accessing call->error if the call is complete. We don't actually need to access the error inside the loop as we'll do that after. This caused the following report:

    BUG: KCSAN: data-race in rxrpc_send_data / rxrpc_set_call_completion

    write to 0xffff888159cf3c50 of 4 bytes by task 25673 on cpu 1:
     rxrpc_set_call_completion+0x71/0x1c0 net/rxrpc/call_state.c:22
     rxrpc_send_data_packet+0xba9/0x1650 net/rxrpc/output.c:479
     rxrpc_transmit_one+0x1e/0x130 net/rxrpc/output.c:714
     rxrpc_decant_prepared_tx net/rxrpc/call_event.c:326 [inline]
     rxrpc_transmit_some_data+0x496/0x600 net/rxrpc/call_event.c:350
     rxrpc_input_call_event+0x564/0x1220 net/rxrpc/call_event.c:464
     rxrpc_io_thread+0x307/0x1d80 net/rxrpc/io_thread.c:461
     kthread+0x1ac/0x1e0 kernel/kthread.c:376
     ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308

    read to 0xffff888159cf3c50 of 4 bytes by task 25672 on cpu 0:
     rxrpc_send_data+0x29e/0x1950 net/rxrpc/sendmsg.c:296
     rxrpc_do_sendmsg+0xb7a/0xc20 net/rxrpc/sendmsg.c:726
     rxrpc_sendmsg+0x413/0x520 net/rxrpc/af_rxrpc.c:565
     sock_sendmsg_nosec net/socket.c:724 [inline]
     sock_sendmsg net/socket.c:747 [inline]
     ____sys_sendmsg+0x375/0x4c0 net/socket.c:2501
     ___sys_sendmsg net/socket.c:2555 [inline]
     __sys_sendmmsg+0x263/0x500 net/socket.c:2641
     __do_sys_sendmmsg net/socket.c:2670 [inline]
     __se_sys_sendmmsg net/socket.c:2667 [inline]
     __x64_sys_sendmmsg+0x57/0x60 net/socket.c:2667
     do_syscall_x64 arch/x86/entry/common.c:50 [inline]
     do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
     entry_SYSCALL_64_after_hwframe+0x63/0xcd

    value changed: 0x00000000 -> 0xffffffea

Fixes: 9d35d880 ("rxrpc: Move client call connection to the I/O thread") Reported-by:
<syzbot+ebc945fdb4acd72cba78@syzkaller.appspotmail.com> Link: https://lore.kernel.org/r/000000000000e7c6d205fa10a3cd@google.com/ Signed-off-by:
David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: Dmitry Vyukov <dvyukov@google.com> cc: "David S. Miller" <davem@davemloft.net> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: linux-afs@lists.infradead.org cc: linux-fsdevel@vger.kernel.org cc: netdev@vger.kernel.org Link: https://lore.kernel.org/r/508133.1682427395@warthog.procyon.org.uk Signed-off-by:
Paolo Abeni <pabeni@redhat.com>
-
- Apr 23, 2023
-
-
Marc Dionne authored
When converting from ASSERTCMP to WARN_ON, the tested condition must be inverted, which was missed for this case. This would cause an EIO error when trying to read an rxrpc token, for instance when trying to display tokens with AuriStor's "tokens" command. Fixes: 84924aac ("rxrpc: Fix checker warning") Signed-off-by:
Marc Dionne <marc.dionne@auristor.com> Signed-off-by:
David Howells <dhowells@redhat.com> cc: "David S. Miller" <davem@davemloft.net> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org Signed-off-by:
David S. Miller <davem@davemloft.net>
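A hedged sketch of the inversion being restored: ASSERTCMP(a, ==, b) complains when the values differ, so the WARN_ON() replacement has to test the negated condition. The parameter names and the -EIO return are illustrative, not the exact code in rxrpc's key-reading path.

```c
#include <linux/bug.h>
#include <linux/errno.h>

static int check_token_size(size_t calculated, size_t expected)
{
	/* Wrong conversion (the bug): WARN_ON(calculated == expected)
	 * fired on every correctly-sized token and returned -EIO.
	 */

	/* Correct conversion: only warn when the sizes differ. */
	if (WARN_ON(calculated != expected))
		return -EIO;
	return 0;
}
```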
-
Gustavo A. R. Silva authored
Zero-length arrays as fake flexible arrays are deprecated and we are moving towards adopting C99 flexible-array members instead. Transform zero-length array into flexible-array member in struct rxrpc_ackpacket. Address the following warnings found with GCC-13 and -fstrict-flex-arrays=3 enabled: net/rxrpc/call_event.c:149:38: warning: array subscript i is outside array bounds of ‘uint8_t[0]’ {aka ‘unsigned char[]’} [-Warray-bounds=] This helps with the ongoing efforts to tighten the FORTIFY_SOURCE routines on memcpy() and help us make progress towards globally enabling -fstrict-flex-arrays=3 [1]. Link: https://github.com/KSPP/linux/issues/21 Link: https://github.com/KSPP/linux/issues/263 Link: https://gcc.gnu.org/pipermail/gcc-patches/2022-October/602902.html [1] Signed-off-by:
Gustavo A. R. Silva <gustavoars@kernel.org> Signed-off-by:
David Howells <dhowells@redhat.com> Reviewed-by:
Simon Horman <simon.horman@corigine.com> Reviewed-by:
Kees Cook <keescook@chromium.org> Reviewed-by:
Jeffrey Altman <jaltman@auristor.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: "David S. Miller" <davem@davemloft.net> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org cc: linux-hardening@vger.kernel.org Link: https://lore.kernel.org/r/ZAZT11n4q5bBttW0@work/ Signed-off-by:
David S. Miller <davem@davemloft.net>
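A hedged before/after sketch of the transformation on a trailing array member; the leading fields of the real struct rxrpc_ackpacket are elided and the struct names here are placeholders.

```c
#include <linux/types.h>

struct example_ackpacket_old {
	u8	reason;
	u8	nAcks;
	u8	acks[0];	/* deprecated zero-length "fake" flexible array */
};

struct example_ackpacket_new {
	u8	reason;
	u8	nAcks;
	u8	acks[];		/* C99 flexible-array member */
};
```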
-
- Apr 22, 2023
-
-
David Howells authored
If the rxrpc call set up by afs_make_call() receives an error whilst it is transmitting the request, there's the possibility that it may get to the point the rxrpc call is ended (after the error_kill_call label) just as the call is queued for async processing. This could manifest itself as call->rxcall being seen as NULL in afs_deliver_to_call() when it tries to lock the call. Fix this by splitting rxrpc_kernel_end_call() into a function to shut down an rxrpc call and a function to release the caller's reference and calling the latter only when we get to afs_put_call(). Reported-by:
Jeffrey Altman <jaltman@auristor.com> Signed-off-by:
David Howells <dhowells@redhat.com> Tested-by:
<kafs-testing+fedora36_64checkkafs-build-306@auristor.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: "David S. Miller" <davem@davemloft.net> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org Signed-off-by:
David S. Miller <davem@davemloft.net>
-
- Apr 18, 2023
-
-
Peter Zijlstra authored
Fundamentally semaphores are a counted primitive, but DEFINE_SEMAPHORE() does not expose this and explicitly creates a binary semaphore. Change DEFINE_SEMAPHORE() to take a number argument and use that in the few places that open-coded it using __SEMAPHORE_INITIALIZER(). Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> [mcgrof: add some tribal knowledge about why some folks prefer binary semaphores over mutexes] Reviewed-by:
Sergey Senozhatsky <senozhatsky@chromium.org> Reviewed-by:
Davidlohr Bueso <dave@stgolabs.net> Signed-off-by:
Luis Chamberlain <mcgrof@kernel.org>
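A hedged sketch of the API change as it appears at a definition site; the semaphore name is a placeholder.

```c
#include <linux/semaphore.h>

/* Before: DEFINE_SEMAPHORE(example_sem); implicitly created a binary
 * semaphore.
 */

/* After: the count is explicit; 1 keeps the old binary behaviour. */
static DEFINE_SEMAPHORE(example_sem, 1);
```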
-
- Feb 20, 2023
-
-
Fix three cases of overproduction of wakeups: (1) rxrpc_input_split_jumbo() conditionally notifies the app that there's data for recvmsg() to collect if it queues some data - and then its only caller, rxrpc_input_data(), goes and wakes up recvmsg() anyway. Fix rxrpc_input_data() to only do the wakeup in failure cases. (2) If a DATA packet is received for a call by the I/O thread whilst recvmsg() is busy draining the call's rx queue in the app thread, the call will be left on the recvmsg() queue for recvmsg() to pick up, even though there isn't any data on it. This can cause an unexpected recvmsg() with a 0 return and no MSG_EOR set after the reply has been posted to a service call. Fix this by discarding pending calls from the recvmsg() queue that don't need servicing yet. (3) Not-yet-completed calls get requeued after having data read from them, even if they have no data to read. Fix this by only requeuing them if they have data waiting on them; if they don't, the I/O thread will requeue them when data arrives or they fail. Signed-off-by:
David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org Link: https://lore.kernel.org/r/3386149.1676497685@warthog.procyon.org.uk Signed-off-by:
Paolo Abeni <pabeni@redhat.com>
-
- Feb 07, 2023
-
-
David Howells authored
rxrpc_recvmsg_data() schedules an ACK to be transmitted every time at least two packets have been consumed and any time it runs out of data and would return -EAGAIN to the caller. Both events may occur within a single loop, however, and if the I/O thread is quick enough it may send duplicate ACKs. The ACKs are sent to inform the peer that more space has been made in the local Rx window, but the I/O thread is going to send an ACK every couple of DATA packets anyway, so we end up sending a lot more ACKs than we really need to. So reduce the rate at which recvmsg() schedules ACKs, such that if the I/O thread sends ACKs at its normal faster rate, recvmsg() won't actually schedule ACKs until the Rx flow stops (call->rx_consumed is cleared any time we transmit an ACK for that call, resetting the counter used by recvmsg). Signed-off-by:
David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
-
David Howells authored
Log ack.rwind in the rxrpc_tx_ack tracepoint. This value is useful to see as it represents flow-control information to the peer. Signed-off-by:
David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
-
David Howells authored
If an rxrpc call is given a poke, it will get woken up unconditionally, even if there's already a poke pending (for which there will have been a wake) or if the call refcount has gone to 0. Fix this by only waking the call if it is still referenced and if it doesn't already have a poke pending. Fixes: 15f661dc ("rxrpc: Implement a mechanism to send an event notification to a call") Signed-off-by:
David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
-
David Howells authored
Use consume_skb() rather than kfree_skb_reason(). Reported-by:
Paolo Abeni <pabeni@redhat.com> Signed-off-by:
David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
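A hedged sketch of the distinction: consume_skb() frees a packet that was handled successfully without registering a drop, while kfree_skb_reason() records a drop reason that tools such as dropwatch report. The helper and the particular drop reason are illustrative.

```c
#include <linux/skbuff.h>

static void finish_with_skb(struct sk_buff *skb, bool processed_ok)
{
	if (processed_ok)
		consume_skb(skb);	/* freed, but not counted as a drop */
	else
		kfree_skb_reason(skb, SKB_DROP_REASON_NOT_SPECIFIED);
}
```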
-
- Feb 03, 2023
-
-
Christoph Hellwig authored
Use the bvec_set_page helper to initialize a bvec. Signed-off-by:
Christoph Hellwig <hch@lst.de> Acked-by:
David Howells <dhowells@redhat.com> Link: https://lore.kernel.org/r/20230203150634.3199647-21-hch@lst.de Signed-off-by:
Jens Axboe <axboe@kernel.dk>
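A hedged before/after sketch of the helper conversion; the wrapper function is illustrative.

```c
#include <linux/bvec.h>

static void fill_bvec(struct bio_vec *bv, struct page *page,
		      unsigned int len, unsigned int offset)
{
	/* Before: open-coded field assignment.
	 *   bv->bv_page   = page;
	 *   bv->bv_offset = offset;
	 *   bv->bv_len    = len;
	 */

	/* After: one helper call. */
	bvec_set_page(bv, page, len, offset);
}
```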
-
- Jan 31, 2023
-
-
David Howells authored
Now that the bundle->channel_lock has been eliminated, we don't need the dummy service bundle anymore. Its purpose was purely to provide the channel_lock for service connections. Signed-off-by:
David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
-
David Howells authored
Set a reason when freeing a packet that has been consumed such that dropwatch doesn't complain that it has been dropped. Signed-off-by:
David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
-
David Howells authored
We no longer need local->defrag_sem as all DATA packet transmission is now done from one thread, so remove it. Signed-off-by:
David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
-
David Howells authored
call->tx_buffer is now only accessed within the I/O thread (->tx_sendmsg is the way sendmsg passes packets to the I/O thread) so there's no need to lock around it. Signed-off-by:
David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
-
David Howells authored
Now that general ACK transmission is done from the same thread as incoming DATA packet wrangling, there's no possibility that the SACK table will be being updated by the latter whilst the former is trying to copy it to an ACK. This means that we can safely rotate the SACK table whilst updating it without having to take a lock, rather than keeping all the bits inside it in fixed place and copying and then rotating it in the transmitter. Therefore, simplify SACK handling by keeping track of the starting point in the ring and rotating slots down as we consume them. Signed-off-by:
David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
-
David Howells authored
call->ackr_window doesn't need to be atomic as ACK generation and ACK transmission are now done in the same thread, so drop the atomic64 handling and split it into two separate members. Similarly, call->ackr_nr_unacked doesn't need to be atomic now either. Signed-off-by:
David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
-
David Howells authored
When doing a call that has a single transmitted data packet and a massive amount of received data packets, we only ping for one RTT sample, which means we don't get a good reading on it. Fix this by converting occasional IDLE ACKs into PING ACKs to elicit a response. Signed-off-by:
David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
-
David Howells authored
If CONFIG_AF_RXRPC_DEBUG_RX_DELAY=y, then a delay is injected between packets and errors being received and them being made available to the processing code, thereby allowing the RTT to be artificially increased. Signed-off-by:
David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
-
David Howells authored
Convert call->recvmsg_lock to a spinlock as it's only ever write-locked. Signed-off-by:
David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
-
- Jan 30, 2023
-
-
David Howells authored
Fix a trace string to indicate that it's discarding the local endpoint for a preallocated peer, not a preallocated connection. Signed-off-by:
David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
-
- Jan 13, 2023
-
-
David Howells authored
Fix rxrpc_connect_call() to return -ENOMEM rather than 0 if it fails to look up a peer. This generated a smatch warning: net/rxrpc/call_object.c:303 rxrpc_connect_call() warn: missing error code 'ret' I think this also fixes a syzbot-found bug: rxrpc: Assertion failed - 1(0x1) == 11(0xb) is false ------------[ cut here ]------------ kernel BUG at net/rxrpc/call_object.c:645! where the call being put is in the wrong state - as would be the case if we failed to clean up correctly after the error in rxrpc_connect_call(). Fixes: 9d35d880 ("rxrpc: Move client call connection to the I/O thread") Reported-by:
kernel test robot <lkp@intel.com> Reported-by:
Dan Carpenter <error27@gmail.com> Reported-and-tested-by:
<syzbot+4bb6356bb29d6299360e@syzkaller.appspotmail.com> Signed-off-by:
David Howells <dhowells@redhat.com> Link: https://lore.kernel.org/r/202301111153.9eZRYLf1-lkp@intel.com/ Reviewed-by:
Alexander Duyck <alexanderduyck@fb.com> Link: https://lore.kernel.org/r/2438405.1673460435@warthog.procyon.org.uk Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
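A hedged sketch of the "missing error code" pattern that smatch flagged: the failure branch jumped to the exit label with ret still 0, so the caller saw success. Function and variable names are illustrative.

```c
#include <linux/errno.h>

static int example_connect(void *peer)
{
	int ret = 0;

	if (!peer) {
		ret = -ENOMEM;	/* the fix: actually set the error code */
		goto error;
	}
	/* ... set up the connection ... */
	return 0;

error:
	return ret;
}
```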
-
- Jan 07, 2023
-
-
David Howells authored
An incoming call can race with rxrpc socket destruction, leading to a leaked call. This may result in an oops when the call timer eventually expires:

    BUG: kernel NULL pointer dereference, address: 0000000000000874
    RIP: 0010:_raw_spin_lock_irqsave+0x2a/0x50
    Call Trace:
     <IRQ>
     try_to_wake_up+0x59/0x550
     ? __local_bh_enable_ip+0x37/0x80
     ? rxrpc_poke_call+0x52/0x110 [rxrpc]
     ? rxrpc_poke_call+0x110/0x110 [rxrpc]
     ? rxrpc_poke_call+0x110/0x110 [rxrpc]
     call_timer_fn+0x24/0x120

with a warning in the kernel log looking something like:

    rxrpc: Call 00000000ba5e571a still in use (1,SvAwtACK,1061d,0)!

incurred during rmmod of rxrpc. The 1061d is the call flags: RECVMSG_READ_ALL, RX_HEARD, BEGAN_RX_TIMER, RX_LAST, EXPOSED, IS_SERVICE, RELEASED but no DISCONNECTED flag (0x800), so it's an incoming (service) call and it's still connected. The race appears to be that: (1) rxrpc_new_incoming_call() consults the service struct, checks sk_state and allocates a call - then pauses, possibly for an interrupt. (2) rxrpc_release_sock() sets RXRPC_CLOSE, nulls the service pointer, discards the prealloc and releases all calls attached to the socket. (3) rxrpc_new_incoming_call() resumes, launching the new call, including its timer and attaching it to the socket. Fix this by read-locking local->services_lock to access the AF_RXRPC socket providing the service rather than RCU in rxrpc_new_incoming_call(). There's no real need to use RCU here as local->services_lock is only write-locked by the socket side in two places: when binding and when shutting down. Fixes: 5e6ef4f1 ("rxrpc: Make the I/O thread take over the call and local processor work") Reported-by:
Marc Dionne <marc.dionne@auristor.com> Signed-off-by:
David Howells <dhowells@redhat.com> cc: linux-afs@lists.infradead.org
-
- Jan 06, 2023
-
-
David Howells authored
Move the connection setup of client calls to the I/O thread so that a whole load of locking and barrierage can be eliminated. This necessitates the app thread waiting for connection to complete before it can begin encrypting data. This also completes the fix for a race that exists between call connection and call disconnection whereby the data transmission code adds the call to the peer error distribution list after the call has been disconnected (say by the rxrpc socket getting closed). The fix is to complete the process of moving call connection, data transmission and call disconnection into the I/O thread and thus forcibly serialising them. Note that the issue may predate the overhaul to an I/O thread model that was included in the merge window for v6.2, but the timing is very much changed by the change given below. Fixes: cf37b598 ("rxrpc: Move DATA transmission into call processor work item") Reported-by:
<syzbot+c22650d2844392afdcfd@syzkaller.appspotmail.com> Signed-off-by:
David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
-
David Howells authored
Move the management of the client connection cache to the I/O thread rather than managing it from the namespace as an aggregate across all the local endpoints within the namespace. This will allow a load of locking to be got rid of in a future patch as only the I/O thread will be looking at this. The downside is that the total number of cached connections on the system can get higher because the limit is now per-local rather than per-netns. We can, however, keep the number of client conns in use across the entire netns and use that to reduce the expiration time of idle connections. Signed-off-by:
David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
-