path: root/kernel
Age | Commit message | Author | Files | Lines
2022-11-24 | timers: Rename del_timer_sync() to timer_delete_sync() | Thomas Gleixner | 1 | -9/+9
The timer related functions do not have a strict timer_ prefixed namespace which is really annoying. Rename del_timer_sync() to timer_delete_sync() and provide del_timer_sync() as a wrapper. Document that del_timer_sync() is not for new code. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Guenter Roeck <linux@roeck-us.net> Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Reviewed-by: Anna-Maria Behnsen <anna-maria@linutronix.de> Link: https://lore.kernel.org/r/20221123201624.954785441@linutronix.de
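The wrapper mentioned above is presumably a one-line inline; a minimal sketch of such a compatibility shim (the exact definition in include/linux/timer.h may differ):

  /* Legacy name kept as a thin wrapper; not for new code. */
  static inline int del_timer_sync(struct timer_list *timer)
  {
          return timer_delete_sync(timer);
  }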
2022-11-24 | timers: Use del_timer_sync() even on UP | Thomas Gleixner | 1 | -2/+0
del_timer_sync() is assumed to be pointless on uniprocessor systems and can be mapped to del_timer() because in theory del_timer() can never be invoked while the timer callback function is executed. This is not entirely true because del_timer() can be invoked from interrupt context and therefore hit in the middle of a running timer callback. Contrary to that del_timer_sync() is not allowed to be invoked from interrupt context unless the affected timer is marked with TIMER_IRQSAFE. del_timer_sync() has proper checks in place to detect such a situation. Give up on the UP optimization and make del_timer_sync() unconditionally available. Co-developed-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Guenter Roeck <linux@roeck-us.net> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Reviewed-by: Anna-Maria Behnsen <anna-maria@linutronix.de> Link: https://lore.kernel.org/all/20220407161745.7d6754b3@gandalf.local.home Link: https://lore.kernel.org/all/20221110064101.429013735@goodmis.org Link: https://lore.kernel.org/r/20221123201624.888306160@linutronix.de
2022-11-24 | timers: Update kernel-doc for various functions | Thomas Gleixner | 1 | -58/+90
The kernel-doc of timer related functions is partially incomprehensible word salad. Rewrite it to make it useful. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Guenter Roeck <linux@roeck-us.net> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Reviewed-by: Anna-Maria Behnsen <anna-maria@linutronix.de> Link: https://lore.kernel.org/r/20221123201624.828703870@linutronix.de
2022-11-24 | timers: Replace BUG_ON()s | Thomas Gleixner | 1 | -5/+6
The timer code still has a few BUG_ON()s left which crash the kernel in situations where it can still recover or simply refuse to take an action. Remove the one in the hotplug callback which checks for the CPU being offline. If that happens then the whole hotplug machinery will explode in colourful ways. Replace the rest with WARN_ON_ONCE() and conditional returns where appropriate. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Guenter Roeck <linux@roeck-us.net> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Reviewed-by: Anna-Maria Behnsen <anna-maria@linutronix.de> Link: https://lore.kernel.org/r/20221123201624.769128888@linutronix.de
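The replacement follows the usual warn-and-bail idiom; a hedged sketch with an illustrative condition, not the exact hunk:

  /* Before: BUG_ON(condition) took the whole machine down. */
  if (WARN_ON_ONCE(base->running_timer == timer))
          return;  /* warn once, refuse the action, keep running */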
2022-11-24 | timers: Get rid of del_singleshot_timer_sync() | Thomas Gleixner | 1 | -1/+1
del_singleshot_timer_sync() used to be an optimization for deleting timers which are not rearmed from the timer callback function. This optimization turned out to be broken and got mapped to del_timer_sync() about 17 years ago. Get rid of the undocumented indirection and use del_timer_sync() directly. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Guenter Roeck <linux@roeck-us.net> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Reviewed-by: Anna-Maria Behnsen <anna-maria@linutronix.de> Link: https://lore.kernel.org/r/20221123201624.706987932@linutronix.de
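The indirection being removed has, for roughly those 17 years, been nothing more than an alias, approximately:

  /* del_singleshot_timer_sync() was just mapped to del_timer_sync() */
  #define del_singleshot_timer_sync(t) del_timer_sync(t)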
2022-11-24 | perf/core: Don't allow grouping events from different hw pmus | Ravi Bangoria | 1 | -9/+15
An event group spanning different hw PMUs does not make sense, and thus perf has never allowed it. However, with the recent rewrite that restriction was inadvertently removed. Fix it. Fixes: bd2756811766 ("perf: Rewrite core context handling") Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20221122080326.228-1-ravi.bangoria@amd.com
2022-11-24 | perf: Fix function pointer case | Peter Zijlstra | 1 | -2/+6
With the advent of CFI it is no longer acceptable to cast function pointers. The robot complains thusly: kernel/events/core.c: warning: cast from 'int (*)(struct perf_cpu_pmu_context *)' to 'remote_function_f' (aka 'int (*)(void *)') converts to incompatible function type Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
2022-11-24 | perf: Consider OS filter fail | Peter Zijlstra | 1 | -2/+22
Some PMUs (notably the traditional hardware kind) have boundary issues with the OS filter. Specifically, it is possible for perf_event_attr::exclude_kernel=1 events to trigger in-kernel due to SKID or errata. This can upset the sigtrap logic some and trigger the WARN. However, if this invalid sample is the first we must not lose the SIGTRAP, OTOH if it is the second, it must not override the pending_addr with a (possibly) invalid one. Fixes: ca6c21327c6a ("perf: Fix missing SIGTRAPs") Reported-by: Pengfei Xu <pengfei.xu@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Marco Elver <elver@google.com> Tested-by: Pengfei Xu <pengfei.xu@intel.com> Link: https://lkml.kernel.org/r/Y3hDYiXwRnJr8RYG@xpf.sh.intel.com
2022-11-24 | perf: Fixup SIGTRAP and sample_flags interaction | Peter Zijlstra | 1 | -1/+4
The perf_event_attr::sigtrap functionality relies on data->addr being set. However commit 7b0846301531 ("perf: Use sample_flags for addr") changed this to only initialize data->addr when not 0. Fixes: 7b0846301531 ("perf: Use sample_flags for addr") Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/Y3426b4OimE%2FI5po%40hirez.programming.kicks-ass.net
2022-11-23 | bpf: Add bpf_task_from_pid() kfunc | David Vernet | 1 | -1/+20
Callers can currently store tasks as kptrs using bpf_task_acquire(), bpf_task_kptr_get(), and bpf_task_release(). These are useful if a caller already has a struct task_struct *, but there may be some callers who only have a pid, and want to look up the associated struct task_struct * from that to e.g. find task->comm. This patch therefore adds a new bpf_task_from_pid() kfunc which allows BPF programs to get a struct task_struct * kptr from a pid. Signed-off-by: David Vernet <void@manifault.com> Link: https://lore.kernel.org/r/20221122145300.251210-2-void@manifault.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
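A hedged sketch of how a BPF program might use the new kfunc (the SEC name and program shape are illustrative; the acquired kptr must be released):

  /* assumes vmlinux.h, bpf_helpers.h and bpf_tracing.h are included */
  struct task_struct *bpf_task_from_pid(s32 pid) __ksym;
  void bpf_task_release(struct task_struct *p) __ksym;

  SEC("tp_btf/task_newtask")
  int BPF_PROG(print_comm, struct task_struct *task, u64 clone_flags)
  {
          struct task_struct *t = bpf_task_from_pid(task->pid);

          if (!t)
                  return 0;        /* no task with that pid */
          bpf_printk("comm: %s", t->comm);
          bpf_task_release(t);     /* drop the acquired reference */
          return 0;
  }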
2022-11-24 | bpf: Unify and simplify btf_func_proto_check error handling | Stanislav Fomichev | 1 | -9/+5
Replace 'err = x; break;' with 'return x;'. Suggested-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20221124002838.2700179-1-sdf@google.com
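Schematically, the cleanup is the standard early-return rewrite:

  /* before */
  err = -EINVAL;
  break;          /* falls through to 'return err;' after the loop */

  /* after */
  return -EINVAL;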
2022-11-23 | ftrace: Avoid needless updates of the ftrace function call | Steven Rostedt (Google) | 1 | -2/+15
Song Shuai reported: The list func (ftrace_ops_list_func) will be patched first before the transition between old and new calls is set, which fixed the race described in commit `59338f75`. While ftrace_trace_function changes from the list func to a ftrace_ops func, like unregistering the klp_ops to leave only the global_ops in ftrace_ops_list, the ftrace_[regs]_call will be replaced with the list func although it already exists. So there should be a condition to avoid this. He also suggested using another variable to keep track of what the ftrace function is set to. But this could be simplified by using a helper function that does the same with a static variable. Link: https://lore.kernel.org/lkml/20221026132039.2236233-1-suagrfillet@gmail.com/ Link: https://lore.kernel.org/linux-trace-kernel/20221122180905.737b6f52@gandalf.local.home Reported-by: Song Shuai <suagrfillet@gmail.com> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2022-11-23 | tracing: Optimize event type allocation with IDA | Zheng Yejian | 1 | -50/+16
After commit 060fa5c83e67 ("tracing/events: reuse trace event ids after overflow"), trace events with dynamic type are linked up in list 'ftrace_event_list' through field 'trace_event.list'. Then when the max event type number is used up, it's possible to reuse the type number of some freed one by traversing 'ftrace_event_list'. Instead, using an IDA to manage the available type numbers makes the code simpler, and the field 'trace_event.list' can then be dropped. Since 'struct trace_event' is used in static tracepoints, dropping 'trace_event.list' makes vmlinux smaller. Local test with about 2000 tracepoints, vmlinux reduced about 64KB: before:-rwxrwxr-x 1 root root 76669448 Nov 8 17:14 vmlinux after: -rwxrwxr-x 1 root root 76604176 Nov 8 17:15 vmlinux Link: https://lkml.kernel.org/r/20221110020319.1259291-1-zhengyejian1@huawei.com Signed-off-by: Zheng Yejian <zhengyejian1@huawei.com> Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
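A minimal sketch of the IDA pattern that replaces the free-list walk (identifier names here are illustrative):

  #include <linux/idr.h>

  static DEFINE_IDA(event_type_ida);       /* hypothetical name */

  static int alloc_event_type(void)
  {
          /* lowest free type number in the dynamic range */
          return ida_alloc_range(&event_type_ida, 1, 65535, GFP_KERNEL);
  }

  static void free_event_type(int type)
  {
          ida_free(&event_type_ida, type); /* number becomes reusable */
  }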
2022-11-23 | tracing: Make tracepoint_print_iter static | Xiu Jianfeng | 2 | -3/+1
After change in commit 4239174570da ("tracing: Make tracepoint_printk a static_key"), this symbol is not used outside of the file, so mark it static. Link: https://lkml.kernel.org/r/20221122091456.72055-1-xiujianfeng@huawei.com Signed-off-by: Xiu Jianfeng <xiujianfeng@huawei.com> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2022-11-23 | tracing/perf: Use strndup_user instead of kzalloc/strncpy_from_user | Chuang Wang | 1 | -10/+6
This patch uses strndup_user instead of kzalloc + strncpy_from_user, which makes the code more concise. Link: https://lkml.kernel.org/r/20221121080831.707409-1-nashuiliang@gmail.com Signed-off-by: Chuang Wang <nashuiliang@gmail.com> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2022-11-23 | tracing/osnoise: Add OSNOISE_WORKLOAD option | Daniel Bristot de Oliveira | 1 | -6/+22
The osnoise tracer is not only a tracer and a set of tracepoints, but also a workload dispatcher. In preparation for having other workloads, e.g., in user-space, add an option to avoid dispatching the workload. By not dispatching the workload, the osnoise: tracepoints become generic events to measure the execution time of *any* task on Linux. For example: # cd /sys/kernel/tracing/ # cat osnoise/options DEFAULTS OSNOISE_WORKLOAD # echo NO_OSNOISE_WORKLOAD > osnoise/options # cat osnoise/options NO_DEFAULTS NO_OSNOISE_WORKLOAD # echo osnoise > set_event # echo osnoise > current_tracer # tail -8 trace make-94722 [002] d..3. 1371.794507: thread_noise: make:94722 start 1371.794302286 duration 200897 ns sh-121042 [020] d..3. 1371.794534: thread_noise: sh:121042 start 1371.781610976 duration 8943683 ns make-121097 [005] d..3. 1371.794542: thread_noise: make:121097 start 1371.794481522 duration 60444 ns <...>-40 [005] d..3. 1371.794550: thread_noise: migration/5:40 start 1371.794542256 duration 7154 ns <idle>-0 [018] dNh2. 1371.794554: irq_noise: reschedule:253 start 1371.794553547 duration 40 ns <idle>-0 [018] dNh2. 1371.794561: irq_noise: local_timer:236 start 1371.794556222 duration 4890 ns <idle>-0 [018] .Ns2. 1371.794563: softirq_noise: SCHED:7 start 1371.794561803 duration 992 ns <idle>-0 [018] d..3. 1371.794566: thread_noise: swapper/18:0 start 1371.781368110 duration 13191798 ns In preparation for the rtla exec_time tracer/tool and rtla osnoise --user option. Link: https://lkml.kernel.org/r/f5cfbd37aefd419eefe9243b4d2fc38ed5753fe4.1668692096.git.bristot@kernel.org Cc: Daniel Bristot de Oliveira <bristot@kernel.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Jonathan Corbet <corbet@lwn.net> Signed-off-by: Daniel Bristot de Oliveira <bristot@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2022-11-23 | tracing/osnoise: Add osnoise/options file | Daniel Bristot de Oliveira | 1 | -0/+170
Add the tracing/osnoise/options file to control osnoise/timerlat tracer features. It is a single file to contain multiple features, similar to the sched/features file. Reading the file displays a list of options. Writing the OPTION_NAME enables it, writing NO_OPTION_NAME disables it. DEFAULTS is a special option that resets the options to the default ones. It uses a bitmask to keep track of the status of the options. When needed, we can add a list of static keys, but for now it does not justify the memory increase. Link: https://lkml.kernel.org/r/f8d34aefdb225d2603fcb4c02a120832a0cd3339.1668692096.git.bristot@kernel.org Cc: Daniel Bristot de Oliveira <bristot@kernel.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Jonathan Corbet <corbet@lwn.net> Signed-off-by: Daniel Bristot de Oliveira <bristot@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
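A bitmask-backed options file reduces to a handful of flag tests; a hedged sketch of the idea (bit names are illustrative):

  enum {
          OSN_BIT_DEFAULTS = 0,
          OSN_BIT_WORKLOAD = 1,            /* hypothetical bit layout */
  };

  static unsigned long osnoise_options =
          BIT(OSN_BIT_DEFAULTS) | BIT(OSN_BIT_WORKLOAD);

  static bool osnoise_has_workload(void)
  {
          return test_bit(OSN_BIT_WORKLOAD, &osnoise_options);
  }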
2022-11-23 | ring_buffer: Remove unused "event" parameter | Song Chen | 3 | -9/+7
After commit a389d86f7fd0 ("ring-buffer: Have nested events still record running time stamp"), the "event" parameter is no longer used in either ring_buffer_unlock_commit() or rb_commit(). Best to remove it. Link: https://lkml.kernel.org/r/1666274811-24138-1-git-send-email-chensong_2000@189.cn Signed-off-by: Song Chen <chensong_2000@189.cn> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2022-11-23 | tracing: Add trace_trigger kernel command line option | Steven Rostedt (Google) | 1 | -2/+70
Allow triggers to be enabled at kernel boot up. For example: trace_trigger="sched_switch.stacktrace if prev_state == 2" The above will enable the stacktrace trigger on top of the sched_switch event and only trigger if its prev_state is 2 (TASK_UNINTERRUPTIBLE). Then at boot up, a stacktrace will trigger and be recorded in the tracing ring buffer every time the sched_switch happens where the previous state is TASK_UNINTERRUPTIBLE. Another useful trigger would be "traceoff" which can stop tracing on an event if a field of the event matches a certain value defined by the filter ("if" statement). Link: https://lore.kernel.org/linux-trace-kernel/20221020210056.0d8d0a5b@gandalf.local.home Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2022-11-23 | ftrace: Clean comments related to FTRACE_OPS_FL_PER_CPU | Zheng Yejian | 1 | -7/+3
Commit b3a88803ac5b ("ftrace: Kill FTRACE_OPS_FL_PER_CPU") didn't completely remove the comments related to FTRACE_OPS_FL_PER_CPU. Link: https://lkml.kernel.org/r/20221025153923.1995973-1-zhengyejian1@huawei.com Fixes: b3a88803ac5b ("ftrace: Kill FTRACE_OPS_FL_PER_CPU") Signed-off-by: Zheng Yejian <zhengyejian1@huawei.com> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2022-11-23 | tracing: Free buffers when a used dynamic event is removed | Steven Rostedt (Google) | 2 | -1/+12
After 65536 dynamic events have been added and removed, the "type" field of the event then uses the first type number that is available (not currently used by other events). A type number is the identifier of the binary blobs in the tracing ring buffer (known as events) to map them to logic that can parse the binary blob. The issue is that if a dynamic event (like a kprobe event) is traced and is in the ring buffer, and then that event is removed (because it is dynamic, which means it can be created and destroyed), if another dynamic event is created that has the same number, that new event's logic for parsing the binary blob will be used. To show how this can be an issue, the following can crash the kernel: # cd /sys/kernel/tracing # for i in `seq 65536`; do echo 'p:kprobes/foo do_sys_openat2 $arg1:u32' > kprobe_events # done For every iteration of the above, the writing to the kprobe_events will remove the old event and create a new one (with the same format) and increase the type number to the next available one until the type number reaches over 65535, which is the max number for the 16 bit type. After it reaches that number, the logic to allocate a new number simply looks for the next available number. When a dynamic event is removed, that number is then available to be reused by the next dynamic event created. That is, once the above reaches the max number, the number assigned to the event in that loop will remain the same. Now that means deleting one dynamic event and creating another will reuse the previous event's type number. This is where bad things can happen. After the above loop finishes, the kprobes/foo event reads the do_sys_openat2 function call's first parameter as an integer. # echo 1 > kprobes/foo/enable # cat /etc/passwd > /dev/null # cat trace cat-2211 [005] .... 2007.849603: foo: (do_sys_openat2+0x0/0x130) arg1=4294967196 cat-2211 [005] .... 2007.849620: foo: (do_sys_openat2+0x0/0x130) arg1=4294967196 cat-2211 [005] .... 2007.849838: foo: (do_sys_openat2+0x0/0x130) arg1=4294967196 cat-2211 [005] .... 2007.849880: foo: (do_sys_openat2+0x0/0x130) arg1=4294967196 # echo 0 > kprobes/foo/enable Now if we delete the kprobe and create a new one that reads a string: # echo 'p:kprobes/foo do_sys_openat2 +0($arg2):string' > kprobe_events And now we can see the trace: # cat trace sendmail-1942 [002] ..... 530.136320: foo: (do_sys_openat2+0x0/0x240) arg1= cat-2046 [004] ..... 530.930817: foo: (do_sys_openat2+0x0/0x240) arg1="������������������������������������������������������������������������������������������������" cat-2046 [004] ..... 530.930961: foo: (do_sys_openat2+0x0/0x240) arg1="������������������������������������������������������������������������������������������������" cat-2046 [004] ..... 530.934278: foo: (do_sys_openat2+0x0/0x240) arg1="������������������������������������������������������������������������������������������������" cat-2046 [004] ..... 530.934563: foo: (do_sys_openat2+0x0/0x240) arg1="������������������������������������������������������������������������������������������������" bash-1515 [007] ..... 
534.299093: foo: (do_sys_openat2+0x0/0x240) arg1="kkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk���������@��4Z����;Y�����U And dmesg has: ================================================================== BUG: KASAN: use-after-free in string+0xd4/0x1c0 Read of size 1 at addr ffff88805fdbbfa0 by task cat/2049 CPU: 0 PID: 2049 Comm: cat Not tainted 6.1.0-rc6-test+ #641 Hardware name: Hewlett-Packard HP Compaq Pro 6300 SFF/339A, BIOS K01 v03.03 07/14/2016 Call Trace: <TASK> dump_stack_lvl+0x5b/0x77 print_report+0x17f/0x47b kasan_report+0xad/0x130 string+0xd4/0x1c0 vsnprintf+0x500/0x840 seq_buf_vprintf+0x62/0xc0 trace_seq_printf+0x10e/0x1e0 print_type_string+0x90/0xa0 print_kprobe_event+0x16b/0x290 print_trace_line+0x451/0x8e0 s_show+0x72/0x1f0 seq_read_iter+0x58e/0x750 seq_read+0x115/0x160 vfs_read+0x11d/0x460 ksys_read+0xa9/0x130 do_syscall_64+0x3a/0x90 entry_SYSCALL_64_after_hwframe+0x63/0xcd RIP: 0033:0x7fc2e972ade2 Code: c0 e9 b2 fe ff ff 50 48 8d 3d b2 3f 0a 00 e8 05 f0 01 00 0f 1f 44 00 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 0f 05 <48> 3d 00 f0 ff ff 77 56 c3 0f 1f 44 00 00 48 83 ec 28 48 89 54 24 RSP: 002b:00007ffc64e687c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000000 RAX: ffffffffffffffda RBX: 0000000000020000 RCX: 00007fc2e972ade2 RDX: 0000000000020000 RSI: 00007fc2e980d000 RDI: 0000000000000003 RBP: 00007fc2e980d000 R08: 00007fc2e980c010 R09: 0000000000000000 R10: 0000000000000022 R11: 0000000000000246 R12: 0000000000020f00 R13: 0000000000000003 R14: 0000000000020000 R15: 0000000000020000 </TASK> The buggy address belongs to the physical page: page:ffffea00017f6ec0 refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x5fdbb flags: 0xfffffc0000000(node=0|zone=1|lastcpupid=0x1fffff) raw: 000fffffc0000000 0000000000000000 ffffea00017f6ec8 0000000000000000 raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000 page dumped because: kasan: bad access detected Memory state around the buggy address: ffff88805fdbbe80: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ffff88805fdbbf00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff >ffff88805fdbbf80: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ^ ffff88805fdbc000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ffff88805fdbc080: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ================================================================== This was found when Zheng Yejian sent a patch to convert the event type number assignment to use IDA, which gives the next available number, and this bug showed up in the fuzz testing by Yujie Liu and the kernel test robot. But after further analysis, I found that this behavior is the same as when the event type numbers go past the 16bit max (and the above shows that). As modules have a similar issue, but is dealt with by setting a "WAS_ENABLED" flag when a module event is enabled, and when the module is freed, if any of its events were enabled, the ring buffer that holds that event is also cleared, to prevent reading stale events. The same can be done for dynamic events. If any dynamic event that is being removed was enabled, then make sure the buffers they were enabled in are now cleared. 
Link: https://lkml.kernel.org/r/20221123171434.545706e3@gandalf.local.home Link: https://lore.kernel.org/all/20221110020319.1259291-1-zhengyejian1@huawei.com/ Cc: stable@vger.kernel.org Cc: Andrew Morton <akpm@linux-foundation.org> Depends-on: e18eb8783ec49 ("tracing: Add tracing_reset_all_online_cpus_unlocked() function") Depends-on: 5448d44c38557 ("tracing: Add unified dynamic event framework") Depends-on: 6212dd29683ee ("tracing/kprobes: Use dyn_event framework for kprobe events") Depends-on: 065e63f951432 ("tracing: Only have rmmod clear buffers that its events were active in") Depends-on: 575380da8b469 ("tracing: Only clear trace buffer on module unload if event was traced") Fixes: 77b44d1b7c283 ("tracing/kprobes: Rename Kprobe-tracer to kprobe-event") Reported-by: Zheng Yejian <zhengyejian1@huawei.com> Reported-by: Yujie Liu <yujie.liu@intel.com> Reported-by: kernel test robot <yujie.liu@intel.com> Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2022-11-23 | tracing: Add tracing_reset_all_online_cpus_unlocked() function | Steven Rostedt (Google) | 4 | -4/+12
Currently tracing_reset_all_online_cpus() requires the trace_types_lock to be held. But only one caller of this function actually has that lock held before calling it, and the other just takes the lock so that it can call it. More users of this function are needed where the lock is not held. Add a tracing_reset_all_online_cpus_unlocked() function for the one use case that calls it while already holding the lock, and also add a lockdep_assert to make sure the lock is held when it is called. Then have tracing_reset_all_online_cpus() take the lock internally, such that callers do not need to worry about taking it. Link: https://lkml.kernel.org/r/20221123192741.658273220@goodmis.org Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Zheng Yejian <zhengyejian1@huawei.com> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
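The resulting split follows the common locked/unlocked kernel pattern; roughly:

  void tracing_reset_all_online_cpus_unlocked(void)
  {
          lockdep_assert_held(&trace_types_lock);
          /* ... reset the per-CPU ring buffers ... */
  }

  void tracing_reset_all_online_cpus(void)
  {
          mutex_lock(&trace_types_lock);
          tracing_reset_all_online_cpus_unlocked();
          mutex_unlock(&trace_types_lock);
  }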
2022-11-23 | tracing: Fix race where histograms can be called before the event | Steven Rostedt (Google) | 1 | -0/+3
commit 94eedf3dded5 ("tracing: Fix race where eprobes can be called before the event") fixed an issue where if an event is soft disabled, and the trigger is being added, there's a small window where the event sees that there's a trigger but does not see that it requires reading the event yet, and then calls the trigger with the record == NULL. This could be solved with adding memory barriers in the hot path, or to make sure that all the triggers requiring a record check for NULL. The latter was chosen. Commit 94eedf3dded5 set the eprobe trigger handle to check for NULL, but the same needs to be done with histograms. Link: https://lore.kernel.org/linux-trace-kernel/20221118211809.701d40c0f8a757b0df3c025a@kernel.org/ Link: https://lore.kernel.org/linux-trace-kernel/20221123164323.03450c3a@gandalf.local.home Cc: Tom Zanussi <zanussi@kernel.org> Cc: stable@vger.kernel.org Fixes: 7491e2c442781 ("tracing: Add a probe that attaches to trace events") Reported-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
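The guard added to the histogram trigger is the same one eprobes received; schematically, at the top of the trigger handler:

  if (!record)    /* event not fully set up yet; ignore this hit */
          return;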
2022-11-24 | bpf: Prevent decl_tag from being referenced in func_proto arg | Stanislav Fomichev | 1 | -0/+5
Syzkaller managed to hit another decl_tag issue: btf_func_proto_check kernel/bpf/btf.c:4506 [inline] btf_check_all_types kernel/bpf/btf.c:4734 [inline] btf_parse_type_sec+0x1175/0x1980 kernel/bpf/btf.c:4763 btf_parse kernel/bpf/btf.c:5042 [inline] btf_new_fd+0x65a/0xb00 kernel/bpf/btf.c:6709 bpf_btf_load+0x6f/0x90 kernel/bpf/syscall.c:4342 __sys_bpf+0x50a/0x6c0 kernel/bpf/syscall.c:5034 __do_sys_bpf kernel/bpf/syscall.c:5093 [inline] __se_sys_bpf kernel/bpf/syscall.c:5091 [inline] __x64_sys_bpf+0x7c/0x90 kernel/bpf/syscall.c:5091 do_syscall_64+0x54/0x70 arch/x86/entry/common.c:48 This seems similar to commit ea68376c8bed ("bpf: prevent decl_tag from being referenced in func_proto") but for the argument. Reported-by: syzbot+8dd0551dda6020944c5d@syzkaller.appspotmail.com Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20221123035422.872531-2-sdf@google.com
2022-11-23 | bpf: Don't use idx variable when registering kfunc dtors | David Vernet | 1 | -5/+5
In commit fda01efc6160 ("bpf: Enable cgroups to be used as kptrs"), I added an 'int idx' variable to kfunc_init() which was meant to dynamically set the index of the btf id entries of the 'generic_dtor_ids' array. This was done to make the code slightly less brittle as the struct cgroup * kptr kfuncs such as bpf_cgroup_acquire() are compiled out if CONFIG_CGROUPS is not defined. This, however, causes an lkp build warning: >> kernel/bpf/helpers.c:2005:40: warning: multiple unsequenced modifications to 'idx' [-Wunsequenced] .btf_id = generic_dtor_ids[idx++], Fix the warning by just hard-coding the indices. Fixes: fda01efc6160 ("bpf: Enable cgroups to be used as kptrs") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: David Vernet <void@manifault.com> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/r/20221123135253.637525-1-void@manifault.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-11-23 | cgroup: remove rcu_read_lock()/rcu_read_unlock() in critical section of spin_lock_irq() | Ran Tian | 1 | -2/+0
spin_lock_irq() already disables preemption, so remove rcu_read_lock(). Signed-off-by: Ran Tian <tianran_trtr@163.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2022-11-24 | tracing: Fix race where eprobes can be called before the event | Steven Rostedt (Google) | 1 | -0/+3
The flag that tells the event to call its triggers after reading the event is set for eprobes after the eprobe is enabled. This leads to a race where the eprobe may be triggered at the beginning of the event where the record information is NULL. The eprobe then dereferences the NULL record causing a NULL kernel pointer bug. Test for a NULL record to keep this from happening. Link: https://lore.kernel.org/linux-trace-kernel/20221116192552.1066630-1-rafaelmendsr@gmail.com/ Link: https://lore.kernel.org/all/20221117214249.2addbe10@gandalf.local.home/ Cc: stable@vger.kernel.org Fixes: 7491e2c442781 ("tracing: Add a probe that attaches to trace events") Reported-by: Rafael Mendonca <rafaelmendsr@gmail.com> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org> Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
2022-11-22 | gcov: clang: fix the buffer overflow issue | Mukesh Ojha | 1 | -0/+2
Currently, in the clang version of the gcov code, when a module is being removed gcov_info_add() incorrectly adds the sfn_ptr->counter to all the dst->functions, and it results in the kernel panic shown in the crash report below. Fix this by properly handling it. [ 8.899094][ T599] Unable to handle kernel write to read-only memory at virtual address ffffff80461cc000 [ 8.899100][ T599] Mem abort info: [ 8.899102][ T599] ESR = 0x9600004f [ 8.899103][ T599] EC = 0x25: DABT (current EL), IL = 32 bits [ 8.899105][ T599] SET = 0, FnV = 0 [ 8.899107][ T599] EA = 0, S1PTW = 0 [ 8.899108][ T599] FSC = 0x0f: level 3 permission fault [ 8.899110][ T599] Data abort info: [ 8.899111][ T599] ISV = 0, ISS = 0x0000004f [ 8.899113][ T599] CM = 0, WnR = 1 [ 8.899114][ T599] swapper pgtable: 4k pages, 39-bit VAs, pgdp=00000000ab8de000 [ 8.899116][ T599] [ffffff80461cc000] pgd=18000009ffcde003, p4d=18000009ffcde003, pud=18000009ffcde003, pmd=18000009ffcad003, pte=00600000c61cc787 [ 8.899124][ T599] Internal error: Oops: 9600004f [#1] PREEMPT SMP [ 8.899265][ T599] Skip md ftrace buffer dump for: 0x1609e0 .... .., [ 8.899544][ T599] CPU: 7 PID: 599 Comm: modprobe Tainted: G S OE 5.15.41-android13-8-g38e9b1af6bce #1 [ 8.899547][ T599] Hardware name: XXX (DT) [ 8.899549][ T599] pstate: 82400005 (Nzcv daif +PAN -UAO +TCO -DIT -SSBS BTYPE=--) [ 8.899551][ T599] pc : gcov_info_add+0x9c/0xb8 [ 8.899557][ T599] lr : gcov_event+0x28c/0x6b8 [ 8.899559][ T599] sp : ffffffc00e733b00 [ 8.899560][ T599] x29: ffffffc00e733b00 x28: ffffffc00e733d30 x27: ffffffe8dc297470 [ 8.899563][ T599] x26: ffffffe8dc297000 x25: ffffffe8dc297000 x24: ffffffe8dc297000 [ 8.899566][ T599] x23: ffffffe8dc0a6200 x22: ffffff880f68bf20 x21: 0000000000000000 [ 8.899569][ T599] x20: ffffff880f68bf00 x19: ffffff8801babc00 x18: ffffffc00d7f9058 [ 8.899572][ T599] x17: 0000000000088793 x16: ffffff80461cbe00 x15: 9100052952800785 [ 8.899575][ T599] x14: 0000000000000200 x13: 0000000000000041 x12: 9100052952800785 [ 8.899577][ T599] x11: ffffffe8dc297000 x10: ffffffe8dc297000 x9 : ffffff80461cbc80 [ 8.899580][ T599] x8 : ffffff8801babe80 x7 : ffffffe8dc2ec000 x6 : ffffffe8dc2ed000 [ 8.899583][ T599] x5 : 000000008020001f x4 : fffffffe2006eae0 x3 : 000000008020001f [ 8.899586][ T599] x2 : ffffff8027c49200 x1 : ffffff8801babc20 x0 : ffffff80461cb3a0 [ 8.899589][ T599] Call trace: [ 8.899590][ T599] gcov_info_add+0x9c/0xb8 [ 8.899592][ T599] gcov_module_notifier+0xbc/0x120 [ 8.899595][ T599] blocking_notifier_call_chain+0xa0/0x11c [ 8.899598][ T599] do_init_module+0x2a8/0x33c [ 8.899600][ T599] load_module+0x23cc/0x261c [ 8.899602][ T599] __arm64_sys_finit_module+0x158/0x194 [ 8.899604][ T599] invoke_syscall+0x94/0x2bc [ 8.899607][ T599] el0_svc_common+0x1d8/0x34c [ 8.899609][ T599] do_el0_svc+0x40/0x54 [ 8.899611][ T599] el0_svc+0x94/0x2f0 [ 8.899613][ T599] el0t_64_sync_handler+0x88/0xec [ 8.899615][ T599] el0t_64_sync+0x1b4/0x1b8 [ 8.899618][ T599] Code: f905f56c f86e69ec f86e6a0f 8b0c01ec (f82e6a0c) [ 8.899620][ T599] ---[ end trace ed5218e9e5b6e2e6 ]--- Link: https://lkml.kernel.org/r/1668020497-13142-1-git-send-email-quic_mojha@quicinc.com Fixes: e178a5beb369 ("gcov: clang support") Signed-off-by: Mukesh Ojha <quic_mojha@quicinc.com> Reviewed-by: Peter Oberparleiter <oberpar@linux.ibm.com> Tested-by: Peter Oberparleiter <oberpar@linux.ibm.com> Cc: Nathan Chancellor <nathan@kernel.org> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Tom Rix <trix@redhat.com> Cc: <stable@vger.kernel.org> [5.2+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-11-22 | tracing/osnoise: Fix duration type | Daniel Bristot de Oliveira | 1 | -3/+3
The duration type is a 64-bit value, not an int. This was causing some long noise durations to be reported with wrong values. Change the duration to a 64-bit value. Link: https://lkml.kernel.org/r/a93d8a8378c7973e9c609de05826533c9e977939.1668692096.git.bristot@kernel.org Cc: stable@vger.kernel.org Cc: Daniel Bristot de Oliveira <bristot@kernel.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Jonathan Corbet <corbet@lwn.net> Fixes: bce29ac9ce0b ("trace: Add osnoise tracer") Signed-off-by: Daniel Bristot de Oliveira <bristot@kernel.org> Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2022-11-22 | tracing/user_events: Fix memory leak in user_event_create() | Xiu Jianfeng | 1 | -1/+3
Before calling current_user_event_group(), memory has been allocated and saved in @name; it should be freed before returning an error. Link: https://lkml.kernel.org/r/20221115014445.158419-1-xiujianfeng@huawei.com Fixes: e5d271812e7a ("tracing/user_events: Move pages/locks into groups to prepare for namespaces") Signed-off-by: Xiu Jianfeng <xiujianfeng@huawei.com> Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Acked-by: Beau Belgrave <beaub@linux.microsoft.com> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
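The fix amounts to freeing @name on the early-error path; a hedged sketch with the surrounding code simplified:

  name = kstrdup(event_name, GFP_KERNEL);  /* allocated earlier */
  if (!name)
          return -ENOMEM;

  group = current_user_event_group();
  if (!group) {
          kfree(name);             /* was leaked before this fix */
          return -ENOENT;
  }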
2022-11-22 | bpf: Add bpf_cgroup_ancestor() kfunc | David Vernet | 1 | -0/+20
struct cgroup * objects have a variably sized struct cgroup *ancestors[] field which stores pointers to their ancestor cgroups. If using a cgroup as a kptr, it can be useful to access these ancestors, but doing so requires variable offset accesses for PTR_TO_BTF_ID, which is currently unsupported. This is a very useful field to access for cgroup kptrs, as programs may wish to walk their ancestor cgroups when determining e.g. their proportional cpu.weight. So as to enable this functionality with cgroup kptrs before var_off is supported for PTR_TO_BTF_ID, this patch adds a bpf_cgroup_ancestor() kfunc which accesses the cgroup node on behalf of the caller, and acquires a reference on it. Once var_off is supported for PTR_TO_BTF_ID, and fields inside a struct can be marked as trusted so they retain the PTR_TRUSTED modifier when walked, this can be removed. Signed-off-by: David Vernet <void@manifault.com> Link: https://lore.kernel.org/r/20221122055458.173143-4-void@manifault.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
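A hedged sketch of calling the new kfunc from a BPF program (the declarations, helper name, and level value are illustrative):

  struct cgroup *bpf_cgroup_ancestor(struct cgroup *cgrp, int level) __ksym;
  void bpf_cgroup_release(struct cgroup *cgrp) __ksym;

  static int inspect_parent(struct cgroup *cgrp)
  {
          struct cgroup *anc = bpf_cgroup_ancestor(cgrp, 1);

          if (!anc)
                  return 0;
          /* ... e.g. read anc->kn->id for the ancestor's id ... */
          bpf_cgroup_release(anc);  /* drop the acquired reference */
          return 0;
  }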
2022-11-22 | bpf: Enable cgroups to be used as kptrs | David Vernet | 1 | -3/+80
Now that tasks can be used as kptrs, and the PTR_TRUSTED flag is available for us to easily add basic acquire / get / release kfuncs, we can do the same for cgroups. This patch set adds the following kfuncs which enable using cgroups as kptrs: struct cgroup *bpf_cgroup_acquire(struct cgroup *cgrp); struct cgroup *bpf_cgroup_kptr_get(struct cgroup **cgrpp); void bpf_cgroup_release(struct cgroup *cgrp); A follow-on patch will add a selftest suite which validates these kfuncs. Signed-off-by: David Vernet <void@manifault.com> Link: https://lore.kernel.org/r/20221122055458.173143-2-void@manifault.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-11-22 | tracing/hist: add in missing * in comment blocks | Colin Ian King | 1 | -2/+2
There are a couple of missing * in comment blocks. Fix these. Cleans up two clang warnings: kernel/trace/trace_events_hist.c:986: warning: bad line: kernel/trace/trace_events_hist.c:3229: warning: bad line: Link: https://lkml.kernel.org/r/20221020133019.1547587-1-colin.i.king@gmail.com Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2022-11-22 | cgroup/cpuset: Improve cpuset_css_alloc() description | Kamalesh Babulal | 1 | -4/+8
Change the function argument in the description of cpuset_css_alloc() from 'struct cgroup' -> 'struct cgroup_subsys_state'. The change to the argument type was introduced by commit eb95419b023a ("cgroup: pass around cgroup_subsys_state instead of cgroup in subsystem methods"). Also, add more information to its description. Signed-off-by: Kamalesh Babulal <kamalesh.babulal@oracle.com> Acked-by: Waiman Long <longman@redhat.com> Acked-by: Joel Savitz <jsavitz@redhat.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2022-11-22Revert "cpufreq: schedutil: Move max CPU capacity to sugov_policy"Sam Wu1-15/+15
This reverts commit 6d5afdc97ea71958287364a1f1d07e59ef151b11. On a Pixel 6 device, it is observed that this commit increases latency by approximately 50ms, or 20%, in migrating a task that requires full CPU utilization from a LITTLE CPU to Fmax on a big CPU. Reverting this change restores the latency back to its original baseline value. Fixes: 6d5afdc97ea7 ("cpufreq: schedutil: Move max CPU capacity to sugov_policy") Signed-off-by: Sam Wu <wusamuel@google.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2022-11-22 | kobject: kset_uevent_ops: make filter() callback take a const * | Greg Kroah-Hartman | 1 | -1/+1
The filter() callback in struct kset_uevent_ops does not modify the kobject passed into it, so make the pointer const to enforce this restriction. When doing so, fix up all existing filter() callbacks to have the correct signature to preserve the build. Cc: Sumit Semwal <sumit.semwal@linaro.org> Cc: linux-media@vger.kernel.org Cc: dri-devel@lists.freedesktop.org Cc: linaro-mm-sig@lists.linaro.org Acked-by: Rafael J. Wysocki <rafael@kernel.org> Acked-by: Christian König <christian.koenig@amd.com> for the changes to Link: https://lore.kernel.org/r/20221121094649.1556002-3-gregkh@linuxfoundation.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-11-22 | printk: fix a typo of comment | Wang Honghui | 1 | -1/+1
Fix a typo in a comment. Signed-off-by: Wang Honghui <honghui.wang@ucas.com.cn> Reviewed-by: John Ogness <john.ogness@linutronix.de> Signed-off-by: Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/77522C532189E547+Y3yG91g6XALbtdJr@TP-P15V.lan Link: https://lore.kernel.org/r/0C7C980DB815FAE1+Y3yNXJCqZ3Nzxa5V@TP-P15V.lan
2022-11-21 | Merge tag 'v6.1-rc6' into x86/core, to resolve conflicts | Ingo Molnar | 32 | -286/+570
Resolve conflicts between these commits in arch/x86/kernel/asm-offsets.c: # upstream: debc5a1ec0d1 ("KVM: x86: use a separate asm-offsets.c file") # retbleed work in x86/core: 5d8213864ade ("x86/retbleed: Add SKL return thunk") ... and these commits in include/linux/bpf.h: # upstream: 18acb7fac22f ("bpf: Revert ("Fix dispatcher patchable function entry to 5 bytes nop")") # x86/core commits: 931ab63664f0 ("x86/ibt: Implement FineIBT") bea75b33895f ("x86/Kconfig: Introduce function padding") The latter two modify BPF_DISPATCHER_ATTRIBUTES(), which was removed upstream. Conflicts: arch/x86/kernel/asm-offsets.c include/linux/bpf.h Signed-off-by: Ingo Molnar <mingo@kernel.org>
2022-11-21 | bpf: Do not copy spin lock field from user in bpf_selem_alloc | Xu Kuohai | 1 | -1/+1
bpf_selem_alloc function is used by inode_storage, sk_storage and task_storage maps to set map value, for these map types, there may be a spin lock in the map value, so if we use memcpy to copy the whole map value from user, the spin lock field may be initialized incorrectly. Since the spin lock field is zeroed by kzalloc, call copy_map_value instead of memcpy to skip copying the spin lock field to fix it. Fixes: 6ac99e8f23d4 ("bpf: Introduce bpf sk local storage") Signed-off-by: Xu Kuohai <xukuohai@huawei.com> Link: https://lore.kernel.org/r/20221114134720.1057939-2-xukuohai@huawei.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
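The fix is essentially a one-line substitution; roughly:

  -       memcpy(SDATA(selem)->data, value, smap->map.value_size);
  +       copy_map_value(&smap->map, SDATA(selem)->data, value);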
2022-11-21 | bpf: Pin the start cgroup in cgroup_iter_seq_init() | Hou Tao | 1 | -0/+14
bpf_iter_attach_cgroup() has already acquired an extra reference for the start cgroup, but the reference may be released if the iterator link fd is closed after the creation of the iterator fd, and it may lead to a use-after-free problem when reading the iterator fd. An alternative fix is pinning the iterator link when opening the iterator, but that would leave the iterator link visible after the iterator link fd is closed, and the behavior would differ from other link types, so just fix it by acquiring another reference for the start cgroup. Fixes: d4ccaf58a847 ("bpf: Introduce cgroup iter") Signed-off-by: Hou Tao <houtao1@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20221121073440.1828292-2-houtao@huaweicloud.com
2022-11-21 | bpf/verifier: Use kmalloc_size_roundup() to match ksize() usage | Kees Cook | 1 | -4/+8
Most allocation sites in the kernel want an explicitly sized allocation (and not "more"), and dynamic runtime analysis tools (e.g. KASAN, UBSAN_BOUNDS, FORTIFY_SOURCE, etc) are looking for precise bounds checking (i.e. not something that is rounded up). A tiny handful of allocations were doing an implicit alloc/realloc loop that actually depended on ksize(), and didn't actually always call realloc. This has created a long series of bugs and problems over many years related to the runtime bounds checking, so these callers are finally being adjusted to _not_ depend on the ksize() side-effect, by doing one of several things: - tracking the allocation size precisely and just never calling ksize() at all [1]. - always calling realloc and not using ksize() at all. (This solution actually ends up being a subset of the next solution.) - using kmalloc_size_roundup() to explicitly round up the desired allocation size immediately [2]. The bpf/verifier case is another instance of this latter case, and is the last outstanding one to be fixed in the kernel. Because some of the dynamic bounds checking depends on the size being an _argument_ to an allocator function (i.e. see the __alloc_size attribute), the ksize() users are rare, and it could waste local variables, it was deemed better to explicitly separate the rounding up from the allocation itself [3]. Round up allocations with kmalloc_size_roundup() so that the verifier's use of ksize() is always accurate. [1] e.g.: https://git.kernel.org/linus/712f210a457d https://git.kernel.org/linus/72c08d9f4c72 [2] e.g.: https://git.kernel.org/netdev/net-next/c/12d6c1d3a2ad https://git.kernel.org/netdev/net-next/c/ab3f7828c979 https://git.kernel.org/netdev/net-next/c/d6dd508080a3 [3] https://lore.kernel.org/lkml/0ea1fc165a6c6117f982f4f135093e69cb884930.camel@redhat.com/ Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/bpf/20221118183409.give.387-kees@kernel.org
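The pattern is to round the request up before allocating, so the size the allocator actually grants is the size that was asked for; a minimal sketch:

  /* round up first so ksize(p) matches the size passed to krealloc() */
  size_t alloc_bytes = kmalloc_size_roundup(bytes);
  void *p = krealloc(arr, alloc_bytes, GFP_KERNEL);

  if (!p)
          return -ENOMEM;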
2022-11-21 | Merge 6.1-rc6 into driver-core-next | Greg Kroah-Hartman | 32 | -286/+570
We need the kernfs changes in here as well. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-11-21 | dma-mapping: reject __GFP_COMP in dma_alloc_attrs | Christoph Hellwig | 1 | -0/+8
DMA allocations can never be turned back into a page pointer, so requesting compound pages doesn't make sense and it can't even be supported at all by various backends. Reject __GFP_COMP with a warning in dma_alloc_attrs, and stop clearing the flag in the arm dma ops and dma-iommu. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
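The rejection is a warn-and-fail check at the top of the allocator; roughly:

  if (WARN_ON_ONCE(flag & __GFP_COMP))
          return NULL;    /* compound pages make no sense for DMA */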
2022-11-20 | sched: Move numa_balancing sysctls to its own file | Kefeng Wang | 3 | -23/+27
The sysctl_numa_balancing_promote_rate_limit and sysctl_numa_balancing are part of sched, move them to its own file. Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
2022-11-20 | bpf: Add a kfunc for generic type cast | Yonghong Song | 2 | -2/+24
Implement bpf_rdonly_cast() which tries to cast the object to a specified type. This tries to support use cases like the following: #define skb_shinfo(SKB) ((struct skb_shared_info *)(skb_end_pointer(SKB))) where skb_end_pointer(SKB) is an 'unsigned char *' and needs to be cast to 'struct skb_shared_info *'. The signature of bpf_rdonly_cast() looks like void *bpf_rdonly_cast(void *obj, __u32 btf_id) The function returns the same 'obj' but with PTR_TO_BTF_ID with btf_id. The verifier will ensure that btf_id is a struct type. Since the supported type cast may not reflect what the 'obj' represents, the returned btf_id is marked as PTR_UNTRUSTED, so the return value and subsequent pointer chasing cannot be used as helper/kfunc arguments. Signed-off-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/r/20221120195437.3114585-1-yhs@fb.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-11-20 | bpf: Add a kfunc to type cast from bpf uapi ctx to kernel ctx | Yonghong Song | 3 | -0/+48
Implement bpf_cast_to_kern_ctx() kfunc which does a type cast of a uapi ctx object to the corresponding kernel ctx. Previously if users want to access some data available in kctx but not in uapi ctx, bpf_probe_read_kernel() helper is needed. The introduction of bpf_cast_to_kern_ctx() allows direct memory access which makes code simpler and easier to understand. Signed-off-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/r/20221120195432.3113982-1-yhs@fb.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
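A hedged sketch of the intended use from a networking BPF program (the declaration and field access are illustrative):

  /* assumes vmlinux.h and bpf_helpers.h are included */
  void *bpf_cast_to_kern_ctx(void *obj) __ksym;

  SEC("tc")
  int cast_ctx(struct __sk_buff *ctx)
  {
          /* get the kernel skb behind the uapi ctx, no probe-read */
          struct sk_buff *kskb = bpf_cast_to_kern_ctx(ctx);

          bpf_printk("len: %u", kskb->len);  /* direct, verifier-checked */
          return 0;
  }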
2022-11-20 | bpf: Add support for kfunc set with common btf_ids | Yonghong Song | 2 | -1/+19
Later on, we will introduce kfuncs bpf_cast_to_kern_ctx() and bpf_rdonly_cast() which apply to all program types. Currently a kfunc set only supports individual prog types. This patch adds support for kfuncs that apply to all program types. Signed-off-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/r/20221120195426.3113828-1-yhs@fb.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-11-20 | bpf: Disallow bpf_obj_new_impl call when bpf_mem_alloc_init fails | Kumar Kartikeya Dwivedi | 2 | -2/+3
In the unlikely event that bpf_global_ma is not correctly initialized, instead of checking the boolean every time bpf_obj_new_impl is called, simply check it while loading the program and return an error if bpf_global_ma_set is false. Suggested-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20221120212610.2361700-1-memxor@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-11-20 | Merge tag 'trace-probes-v6.1' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace | Linus Torvalds | 4 | -20/+45
Pull tracing/probes fixes from Steven Rostedt: - Fix possible NULL pointer dereference on trace_event_file in kprobe_event_gen_test_exit() - Fix NULL pointer dereference for trace_array in kprobe_event_gen_test_exit() - Fix memory leak of filter string for eprobes - Fix a possible memory leak in rethook_alloc() - Skip clearing aggrprobe's post_handler in kprobe-on-ftrace case which can cause a possible use-after-free - Fix warning in eprobe filter creation - Fix eprobe filter creation as it picked the wrong event for the fields * tag 'trace-probes-v6.1' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace: tracing/eprobe: Fix eprobe filter to make a filter correctly tracing/eprobe: Fix warning in filter creation kprobes: Skip clearing aggrprobe's post_handler in kprobe-on-ftrace case rethook: fix a potential memleak in rethook_alloc() tracing/eprobe: Fix memory leak of filter string tracing: kprobe: Fix potential null-ptr-deref on trace_array in kprobe_event_gen_test_exit() tracing: kprobe: Fix potential null-ptr-deref on trace_event_file in kprobe_event_gen_test_exit()
2022-11-20 | Merge tag 'trace-v6.1-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace | Linus Torvalds | 7 | -43/+71
Pull tracing fixes from Steven Rostedt: - Fix polling to block on watermark like the reads do, as user space applications get confused when the select says read is available, and then the read blocks - Fix accounting of ring buffer dropped pages as it is what is used to determine if the buffer is empty or not - Fix memory leak in tracing_read_pipe() - Fix struct trace_array warning about being declared in parameters - Fix accounting of ftrace pages used in output at start up. - Fix allocation of dyn_ftrace pages by subtracting one from order instead of diving it by 2 - Static analyzer found a case were a pointer being used outside of a NULL check (rb_head_page_deactivate()) - Fix possible NULL pointer dereference if kstrdup() fails in ftrace_add_mod() - Fix memory leak in test_gen_synth_cmd() and test_empty_synth_event() - Fix bad pointer dereference in register_synth_event() on error path - Remove unused __bad_type_size() method - Fix possible NULL pointer dereference of entry in list 'tr->err_log' - Fix NULL pointer deference race if eprobe is called before the event setup * tag 'trace-v6.1-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace: tracing: Fix race where eprobes can be called before the event tracing: Fix potential null-pointer-access of entry in list 'tr->err_log' tracing: Remove unused __bad_type_size() method tracing: Fix wild-memory-access in register_synth_event() tracing: Fix memory leak in test_gen_synth_cmd() and test_empty_synth_event() ftrace: Fix null pointer dereference in ftrace_add_mod() ring_buffer: Do not deactivate non-existant pages ftrace: Optimize the allocation for mcount entries ftrace: Fix the possible incorrect kernel message tracing: Fix warning on variable 'struct trace_array' tracing: Fix memory leak in tracing_read_pipe() ring-buffer: Include dropped pages in counting dirty patches tracing/ring-buffer: Have polling block on watermark