Merge tag 'trace-v6.19' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace
Pull tracing updates from Steven Rostedt:
- Extend tracing option mask to 64 bits
The trace options were defined by a 32 bit variable. This limits the
tracing instances to have a total of 32 different options. As that
limit has been hit, and more options are being added, increase the
option mask to a 64 bit number, doubling the number of options
available.
As this is required for the kprobe topic branches as well as the
tracing topic branch, a separate branch was created and merged into
both.
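For illustration, option checks are converted from fixed TRACE_ITER_FOO
masks to a TRACE_ITER() helper that builds a 64-bit mask from the option's
bit number. A minimal sketch of the new form (names as used by this series):

    #define TRACE_ITER(flag) \
        (TRACE_ITER_##flag##_BIT < 0 ? 0 : 1ULL << (TRACE_ITER_##flag##_BIT))

    /* Checking an option on a trace instance: */
    bool verbose = tr->trace_flags & TRACE_ITER(VERBOSE);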
- Make trace_user_fault_read() available for the rest of tracing
The function trace_user_fault_read() is used by the trace_marker file
read code to read user space quickly, without locking or allocations.
Make it available so that the system call trace events can use it
too.
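The interface added by this series looks roughly like this (a sketch of
the declarations, not the full API):

    struct trace_user_buf_info;

    typedef int (*trace_user_buf_copy)(char *dst, const char __user *src,
                                       size_t size, void *data);

    int   trace_user_fault_init(struct trace_user_buf_info *tinfo, size_t size);
    int   trace_user_fault_get(struct trace_user_buf_info *tinfo);
    int   trace_user_fault_put(struct trace_user_buf_info *tinfo);
    void  trace_user_fault_destroy(struct trace_user_buf_info *tinfo);
    char *trace_user_fault_read(struct trace_user_buf_info *tinfo,
                                const char __user *ptr, size_t size,
                                trace_user_buf_copy copy_func, void *data);

trace_user_fault_read() hands back a per-CPU buffer holding the copied
data (an assumption here: NULL when the copy fails).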
- Have system call trace events read user space values
Now that the system call trace events callbacks are called in a
faultable context, take advantage of this and read the user space
buffers for various system calls. For example, show the path name of
the openat system call instead of just showing the pointer to that
path name in user space. Also show the contents of the buffer of the
write system call. Several system call trace events are updated to
make tracing into a lightweight strace tool for all applications in
the system.
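A purely illustrative example of the difference (hypothetical values and
formatting):

    before:  sys_openat(dfd: ffffff9c, filename: 7ffe83a1b000, flags: 0, mode: 0)
    after:   sys_openat(dfd: ffffff9c, filename: "/etc/ld.so.cache", flags: 0, mode: 0)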
- Update perf system call tracing to do the same
- Add a config option and a syscall_user_buf_size file to control the
size of the buffer
Limit the amount of data that can be read from user space. The
default size is 63 bytes but that can be expanded to 165 bytes.
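For example, writing a new limit with something like
'echo 128 > /sys/kernel/tracing/syscall_user_buf_size' (path assumed here)
raises the per-event copy size for the top-level instance; each tracing
instance directory gets its own copy of the file.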
- Allow the persistent ring buffer to print system calls normally
The persistent ring buffer prints trace events by their type and
ignores the print_fmt. This is because the print_fmt may change from
kernel to kernel. As the system call output is fixed by the system
call ABI itself, there's no reason to limit that. This makes reading
the system call events in the persistent ring buffer much nicer and
easier to understand.
- Add options to show text offset to function profiler
The function profiler that counts the number of times a function is
hit currently lists all functions by their name and offset. But this
becomes ambiguous when there are several functions with the same
name.
Add a tracing option that changes the output to be that of
'_text+offset' instead. Now a user space tool can use this
information to map the '_text+offset' to the unique function it is
counting.
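With the option set, an entry that used to show a possibly ambiguous
function name would instead show something like (hypothetical values):

    _text+0x1a2b30                   12040

which a tool can resolve against vmlinux without any name collisions.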
- Report bad dynamic event command
If a bad command is passed to the dynamic_events file, report it
properly in the error log.
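For example, writing a bogus command now leaves a message such as "No
matching dynamic event type" (message text taken from this series) in the
tracefs error_log file, in addition to returning -EINVAL to the writer.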
- Clean up tracer options
Clean up the tracer option code a bit, by removing some useless code
and also using switch statements instead of a series of if
statements.
- Have tracing options be instance specific
Tracers can have their own options (function tracer, irqsoff tracer,
function graph tracer, etc). But now that the same tracer can be
enabled in multiple trace instances, their options are still global.
The API is per instance, yet changing an option in one instance
affects the others. This isn't even consistent, as the options take
effect differently depending on when a tracer was started in an
instance. Make the options only affect the instance they are changed
under.
- Optimize pid_list lock contention
Whenever the pid_list is read, it takes a spin lock, and this happens
at every sched switch. The lock can be avoided on the read side by
using a seqcount instead.
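The lockless read side then becomes a standard seqcount retry loop; this is
taken from the changes in this series (lightly trimmed):

    do {
        seq = read_seqcount_begin(&pid_list->seqcount);
        ret = false;
        upper_chunk = pid_list->upper[upper1];
        if (upper_chunk) {
            lower_chunk = upper_chunk->data[upper2];
            if (lower_chunk)
                ret = test_bit(lower, lower_chunk->data);
        }
    } while (read_seqcount_retry(&pid_list->seqcount, seq));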
- Clean up the trace trigger structures
The trigger code uses two different structures to implement a single
trigger. This was due to trying to reuse code for the two different
types of triggers (always-on triggers and count-limited triggers). But
by adding a single field to one structure, the other structure could
be absorbed into the first, making the code easier to understand.
- Create a bulk garbage collector for trace triggers
If user space has triggers for several hundreds of events and then
removes them, it can take several seconds to complete. This is
because each removal calls tracepoint_synchronize_unregister(), which
can take hundreds of milliseconds to complete.
Instead, create a helper thread that will do the clean up. When a
trigger is removed, it will create the kthread if it isn't already
created, and then add the trigger to a llist. The kthread will take
the items off the llist, call tracepoint_synchronize_unregister(),
and then remove the items it took off. It will then check if there's
more items to free before sleeping.
This lets user space remove all of these triggers in less than a
second.
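A rough sketch of the pattern (the helper kthread plumbing and the plain
kfree() here are hypothetical, not the exact kernel code):

    static LLIST_HEAD(trigger_free_list);
    static struct task_struct *trigger_gc_kthread;

    static void queue_trigger_free(struct event_trigger_data *data)
    {
        llist_add(&data->llist, &trigger_free_list);
        wake_up_process(trigger_gc_kthread);
    }

    static int trigger_gc_fn(void *unused)
    {
        while (!kthread_should_stop()) {
            struct llist_node *list = llist_del_all(&trigger_free_list);
            struct event_trigger_data *data, *tmp;

            /* One synchronization covers every trigger queued so far */
            tracepoint_synchronize_unregister();

            llist_for_each_entry_safe(data, tmp, list, llist)
                kfree(data);

            if (llist_empty(&trigger_free_list))
                schedule_timeout_interruptible(MAX_SCHEDULE_TIMEOUT);
        }
        return 0;
    }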
- Allow function tracing of some of the tracing infrastructure code
Because the tracing code can cause recursion issues if it is traced
by the function tracer, the entire tracing directory disables function
tracing. But not all of the tracing code causes issues when traced.
Namely, the event tracing code. Add a config that enables some of the
tracing code to be traced to help in debugging it. Note, when this is
enabled, it does add noise to general function tracing, especially if
events are enabled as well (which is a common case).
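The new knob is CONFIG_FUNCTION_SELF_TRACING (default N); when set, the
build keeps the ftrace compiler flags for a subset of the tracing objects,
such as the trace event, trigger and histogram code, so those paths show up
in the function tracer.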
- Add boot-time backup instance for persistent buffer
The persistent ring buffer is used mostly for kernel crash analysis
in the field. One issue is that if there's a crash, the data in the
persistent ring buffer must be read before tracing can begin using
it. This slows down the boot process. Once tracing starts in the
persistent ring buffer, the old data must be freed, the addresses no
longer match, and old events can't live in the buffer alongside new
events.
Add a way to create a backup buffer that copies the persistent ring
buffer at boot up. Then, after a crash, the always-on tracer and the
normal boot process can begin immediately while the crash analysis
tooling uses the backup buffer. After the backup buffer has been
read, it can be removed.
- Enable function graph args and return address options at the same
time
Currently, when the reading of arguments in the function graph tracer
is enabled, the option to record the parent function in the entry
event cannot be enabled. Update the code so that it can.
- Add new struct_offset() helper macro
Add a new macro that takes a pointer to a structure and a name of one
of its members and it will return the offset of that member. This
allows the ring buffer code to simplify the following:
From: size = struct_size(entry, buf, cnt - sizeof(entry->id));
To: size = struct_offset(entry, id) + cnt;
There should be other simplifications that this macro can help out
with as well.
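The macro itself is a thin wrapper around offsetof(), added to the overflow
helpers by this series. A minimal usage sketch (the entry struct below is
hypothetical):

    #define struct_offset(p, member) (offsetof(typeof(*(p)), member))

    struct demo_entry {
        struct trace_entry ent;    /* common event header */
        unsigned int       id;
        char               buf[];  /* cnt - sizeof(id) bytes of data */
    };

    /* Record size: everything up to 'id' plus 'cnt' bytes (id + data) */
    static size_t demo_entry_size(struct demo_entry *entry, size_t cnt)
    {
        return struct_offset(entry, id) + cnt;
    }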
* tag 'trace-v6.19' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace: (42 commits)
overflow: Introduce struct_offset() to get offset of member
function_graph: Enable funcgraph-args and funcgraph-retaddr to work simultaneously
tracing: Add boot-time backup of persistent ring buffer
ftrace: Allow tracing of some of the tracing code
tracing: Use strim() in trigger_process_regex() instead of skip_spaces()
tracing: Add bulk garbage collection of freeing event_trigger_data
tracing: Remove unneeded event_mutex lock in event_trigger_regex_release()
tracing: Merge struct event_trigger_ops into struct event_command
tracing: Remove get_trigger_ops() and add count_func() from trigger ops
tracing: Show the tracer options in boot-time created instance
ftrace: Avoid redundant initialization in register_ftrace_direct
tracing: Remove unused variable in tracing_trace_options_show()
fgraph: Make fgraph_no_sleep_time signed
tracing: Convert function graph set_flags() to use a switch() statement
tracing: Have function graph tracer option sleep-time be per instance
tracing: Move graph-time out of function graph options
tracing: Have function graph tracer option funcgraph-irqs be per instance
trace/pid_list: optimize pid_list->lock contention
tracing: Have function graph tracer define options per instance
tracing: Have function tracer define options per instance
...
commit 69c5079b49
@ -366,6 +366,14 @@ of ftrace. Here is a list of some of the key files:
|
|||
for each function. The displayed address is the patch-site address
|
||||
and can differ from /proc/kallsyms address.
|
||||
|
||||
syscall_user_buf_size:
|
||||
|
||||
Some system call trace events will record the data from a user
|
||||
space address that one of the parameters point to. The amount of
|
||||
data per event is limited. This file holds the max number of bytes
|
||||
that will be recorded into the ring buffer to hold this data.
|
||||
The max value is currently 165.
|
||||
|
||||
dyn_ftrace_total_info:
|
||||
|
||||
This file is for debugging purposes. The number of functions that
|
||||
|
|
|
|||
|
|
@ -1167,17 +1167,14 @@ static inline void ftrace_init(void) { }
|
|||
*/
|
||||
struct ftrace_graph_ent {
|
||||
unsigned long func; /* Current function */
|
||||
int depth;
|
||||
unsigned long depth;
|
||||
} __packed;
|
||||
|
||||
/*
|
||||
* Structure that defines an entry function trace with retaddr.
|
||||
* It's already packed but the attribute "packed" is needed
|
||||
* to remove extra padding at the end.
|
||||
*/
|
||||
struct fgraph_retaddr_ent {
|
||||
unsigned long func; /* Current function */
|
||||
int depth;
|
||||
struct ftrace_graph_ent ent;
|
||||
unsigned long retaddr; /* Return address */
|
||||
} __packed;
|
||||
|
||||
|
|
|
|||
|
|
@ -458,6 +458,18 @@ static inline size_t __must_check size_sub(size_t minuend, size_t subtrahend)
|
|||
#define struct_size_t(type, member, count) \
|
||||
struct_size((type *)NULL, member, count)
|
||||
|
||||
/**
|
||||
* struct_offset() - Calculate the offset of a member within a struct
|
||||
* @p: Pointer to the struct
|
||||
* @member: Name of the member to get the offset of
|
||||
*
|
||||
* Calculates the offset of a particular @member of the structure pointed
|
||||
* to by @p.
|
||||
*
|
||||
* Return: number of bytes to the location of @member.
|
||||
*/
|
||||
#define struct_offset(p, member) (offsetof(typeof(*(p)), member))
|
||||
|
||||
/**
|
||||
* __DEFINE_FLEX() - helper macro for DEFINE_FLEX() family.
|
||||
* Enables caller macro to pass arbitrary trailing expressions
|
||||
|
|
|
|||
|
|
@ -149,6 +149,23 @@ static inline void seq_buf_commit(struct seq_buf *s, int num)
|
|||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* seq_buf_pop - pop off the last written character
|
||||
* @s: the seq_buf handle
|
||||
*
|
||||
* Removes the last written character to the seq_buf @s.
|
||||
*
|
||||
* Returns the last character or -1 if it is empty.
|
||||
*/
|
||||
static inline int seq_buf_pop(struct seq_buf *s)
|
||||
{
|
||||
if (!s->len)
|
||||
return -1;
|
||||
|
||||
s->len--;
|
||||
return (unsigned int)s->buffer[s->len];
|
||||
}
|
||||
|
||||
extern __printf(2, 3)
|
||||
int seq_buf_printf(struct seq_buf *s, const char *fmt, ...);
|
||||
extern __printf(2, 0)
|
||||
|
|
|
|||
|
|
@ -80,6 +80,19 @@ static inline bool trace_seq_has_overflowed(struct trace_seq *s)
|
|||
return s->full || seq_buf_has_overflowed(&s->seq);
|
||||
}
|
||||
|
||||
/**
|
||||
* trace_seq_pop - pop off the last written character
|
||||
* @s: trace sequence descriptor
|
||||
*
|
||||
* Removes the last written character to the trace_seq @s.
|
||||
*
|
||||
* Returns the last character or -1 if it is empty.
|
||||
*/
|
||||
static inline int trace_seq_pop(struct trace_seq *s)
|
||||
{
|
||||
return seq_buf_pop(&s->seq);
|
||||
}
|
||||
|
||||
/*
|
||||
* Currently only defined when tracing is enabled.
|
||||
*/
|
||||
|
|
|
|||
|
|
@ -16,6 +16,9 @@
|
|||
* @name: name of the syscall
|
||||
* @syscall_nr: number of the syscall
|
||||
* @nb_args: number of parameters it takes
|
||||
* @user_arg_is_str: set if the arg for @user_arg_size is a string
|
||||
* @user_arg_size: holds @arg that has size of the user space to read
|
||||
* @user_mask: mask of @args that will read user space
|
||||
* @types: list of types as strings
|
||||
* @args: list of args as strings (args[i] matches types[i])
|
||||
* @enter_fields: list of fields for syscall_enter trace event
|
||||
|
|
@ -25,7 +28,10 @@
|
|||
struct syscall_metadata {
|
||||
const char *name;
|
||||
int syscall_nr;
|
||||
int nb_args;
|
||||
u8 nb_args:7;
|
||||
u8 user_arg_is_str:1;
|
||||
s8 user_arg_size;
|
||||
short user_mask;
|
||||
const char **types;
|
||||
const char **args;
|
||||
struct list_head enter_fields;
|
||||
|
|
|
|||
|
|
@ -342,6 +342,20 @@ config DYNAMIC_FTRACE_WITH_JMP
|
|||
depends on DYNAMIC_FTRACE_WITH_DIRECT_CALLS
|
||||
depends on HAVE_DYNAMIC_FTRACE_WITH_JMP
|
||||
|
||||
config FUNCTION_SELF_TRACING
|
||||
bool "Function trace tracing code"
|
||||
depends on FUNCTION_TRACER
|
||||
help
|
||||
Normally all the tracing code is set to notrace, where the function
|
||||
tracer will ignore all the tracing functions. Sometimes it is useful
|
||||
for debugging to trace some of the tracing infratructure itself.
|
||||
Enable this to allow some of the tracing infrastructure to be traced
|
||||
by the function tracer. Note, this will likely add noise to function
|
||||
tracing if events and other tracing features are enabled along with
|
||||
function tracing.
|
||||
|
||||
If unsure, say N.
|
||||
|
||||
config FPROBE
|
||||
bool "Kernel Function Probe (fprobe)"
|
||||
depends on HAVE_FUNCTION_GRAPH_FREGS && HAVE_FTRACE_GRAPH_FUNC
|
||||
|
|
@ -587,6 +601,20 @@ config FTRACE_SYSCALLS
|
|||
help
|
||||
Basic tracer to catch the syscall entry and exit events.
|
||||
|
||||
config TRACE_SYSCALL_BUF_SIZE_DEFAULT
|
||||
int "System call user read max size"
|
||||
range 0 165
|
||||
default 63
|
||||
depends on FTRACE_SYSCALLS
|
||||
help
|
||||
Some system call trace events will record the data from a user
|
||||
space address that one of the parameters point to. The amount of
|
||||
data per event is limited. That limit is set by this config and
|
||||
this config also affects how much user space data perf can read.
|
||||
|
||||
For a tracing instance, this size may be changed by writing into
|
||||
its syscall_user_buf_size file.
|
||||
|
||||
config TRACER_SNAPSHOT
|
||||
bool "Create a snapshot trace buffer"
|
||||
select TRACER_MAX_TRACE
|
||||
|
|
|
|||
|
|
@ -16,6 +16,23 @@ obj-y += trace_selftest_dynamic.o
|
|||
endif
|
||||
endif
|
||||
|
||||
# Allow some files to be function traced
|
||||
ifdef CONFIG_FUNCTION_SELF_TRACING
|
||||
CFLAGS_trace_output.o = $(CC_FLAGS_FTRACE)
|
||||
CFLAGS_trace_seq.o = $(CC_FLAGS_FTRACE)
|
||||
CFLAGS_trace_stat.o = $(CC_FLAGS_FTRACE)
|
||||
CFLAGS_tracing_map.o = $(CC_FLAGS_FTRACE)
|
||||
CFLAGS_synth_event_gen_test.o = $(CC_FLAGS_FTRACE)
|
||||
CFLAGS_trace_events.o = $(CC_FLAGS_FTRACE)
|
||||
CFLAGS_trace_syscalls.o = $(CC_FLAGS_FTRACE)
|
||||
CFLAGS_trace_events_filter.o = $(CC_FLAGS_FTRACE)
|
||||
CFLAGS_trace_events_trigger.o = $(CC_FLAGS_FTRACE)
|
||||
CFLAGS_trace_events_synth.o = $(CC_FLAGS_FTRACE)
|
||||
CFLAGS_trace_events_hist.o = $(CC_FLAGS_FTRACE)
|
||||
CFLAGS_trace_events_user.o = $(CC_FLAGS_FTRACE)
|
||||
CFLAGS_trace_dynevent.o = $(CC_FLAGS_FTRACE)
|
||||
endif
|
||||
|
||||
ifdef CONFIG_FTRACE_STARTUP_TEST
|
||||
CFLAGS_trace_kprobe_selftest.o = $(CC_FLAGS_FTRACE)
|
||||
obj-$(CONFIG_KPROBE_EVENTS) += trace_kprobe_selftest.o
|
||||
|
|
|
|||
|
|
@ -1738,7 +1738,7 @@ static enum print_line_t print_one_line(struct trace_iterator *iter,
|
|||
|
||||
t = te_blk_io_trace(iter->ent);
|
||||
what = (t->action & ((1 << BLK_TC_SHIFT) - 1)) & ~__BLK_TA_CGROUP;
|
||||
long_act = !!(tr->trace_flags & TRACE_ITER_VERBOSE);
|
||||
long_act = !!(tr->trace_flags & TRACE_ITER(VERBOSE));
|
||||
log_action = classic ? &blk_log_action_classic : &blk_log_action;
|
||||
has_cg = t->action & __BLK_TA_CGROUP;
|
||||
|
||||
|
|
@ -1803,9 +1803,9 @@ blk_tracer_set_flag(struct trace_array *tr, u32 old_flags, u32 bit, int set)
|
|||
/* don't output context-info for blk_classic output */
|
||||
if (bit == TRACE_BLK_OPT_CLASSIC) {
|
||||
if (set)
|
||||
tr->trace_flags &= ~TRACE_ITER_CONTEXT_INFO;
|
||||
tr->trace_flags &= ~TRACE_ITER(CONTEXT_INFO);
|
||||
else
|
||||
tr->trace_flags |= TRACE_ITER_CONTEXT_INFO;
|
||||
tr->trace_flags |= TRACE_ITER(CONTEXT_INFO);
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
|
|
|||
|
|
@ -498,9 +498,6 @@ void *fgraph_retrieve_parent_data(int idx, int *size_bytes, int depth)
|
|||
return get_data_type_data(current, offset);
|
||||
}
|
||||
|
||||
/* Both enabled by default (can be cleared by function_graph tracer flags */
|
||||
bool fgraph_sleep_time = true;
|
||||
|
||||
#ifdef CONFIG_DYNAMIC_FTRACE
|
||||
/*
|
||||
* archs can override this function if they must do something
|
||||
|
|
@ -1023,11 +1020,6 @@ void fgraph_init_ops(struct ftrace_ops *dst_ops,
|
|||
#endif
|
||||
}
|
||||
|
||||
void ftrace_graph_sleep_time_control(bool enable)
|
||||
{
|
||||
fgraph_sleep_time = enable;
|
||||
}
|
||||
|
||||
/*
|
||||
* Simply points to ftrace_stub, but with the proper protocol.
|
||||
* Defined by the linker script in linux/vmlinux.lds.h
|
||||
|
|
@ -1098,7 +1090,7 @@ ftrace_graph_probe_sched_switch(void *ignore, bool preempt,
|
|||
* Does the user want to count the time a function was asleep.
|
||||
* If so, do not update the time stamps.
|
||||
*/
|
||||
if (fgraph_sleep_time)
|
||||
if (!fgraph_no_sleep_time)
|
||||
return;
|
||||
|
||||
timestamp = trace_clock_local();
|
||||
|
|
|
|||
|
|
@ -534,7 +534,9 @@ static int function_stat_headers(struct seq_file *m)
|
|||
|
||||
static int function_stat_show(struct seq_file *m, void *v)
|
||||
{
|
||||
struct trace_array *tr = trace_get_global_array();
|
||||
struct ftrace_profile *rec = v;
|
||||
const char *refsymbol = NULL;
|
||||
char str[KSYM_SYMBOL_LEN];
|
||||
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
|
||||
static struct trace_seq s;
|
||||
|
|
@ -554,7 +556,29 @@ static int function_stat_show(struct seq_file *m, void *v)
|
|||
return 0;
|
||||
#endif
|
||||
|
||||
kallsyms_lookup(rec->ip, NULL, NULL, NULL, str);
|
||||
if (tr->trace_flags & TRACE_ITER(PROF_TEXT_OFFSET)) {
|
||||
unsigned long offset;
|
||||
|
||||
if (core_kernel_text(rec->ip)) {
|
||||
refsymbol = "_text";
|
||||
offset = rec->ip - (unsigned long)_text;
|
||||
} else {
|
||||
struct module *mod;
|
||||
|
||||
guard(rcu)();
|
||||
mod = __module_text_address(rec->ip);
|
||||
if (mod) {
|
||||
refsymbol = mod->name;
|
||||
/* Calculate offset from module's text entry address. */
|
||||
offset = rec->ip - (unsigned long)mod->mem[MOD_TEXT].base;
|
||||
}
|
||||
}
|
||||
if (refsymbol)
|
||||
snprintf(str, sizeof(str), " %s+%#lx", refsymbol, offset);
|
||||
}
|
||||
if (!refsymbol)
|
||||
kallsyms_lookup(rec->ip, NULL, NULL, NULL, str);
|
||||
|
||||
seq_printf(m, " %-30.30s %10lu", str, rec->counter);
|
||||
|
||||
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
|
||||
|
|
@ -838,6 +862,8 @@ static int profile_graph_entry(struct ftrace_graph_ent *trace,
|
|||
return 1;
|
||||
}
|
||||
|
||||
bool fprofile_no_sleep_time;
|
||||
|
||||
static void profile_graph_return(struct ftrace_graph_ret *trace,
|
||||
struct fgraph_ops *gops,
|
||||
struct ftrace_regs *fregs)
|
||||
|
|
@ -863,7 +889,7 @@ static void profile_graph_return(struct ftrace_graph_ret *trace,
|
|||
|
||||
calltime = rettime - profile_data->calltime;
|
||||
|
||||
if (!fgraph_sleep_time) {
|
||||
if (fprofile_no_sleep_time) {
|
||||
if (current->ftrace_sleeptime)
|
||||
calltime -= current->ftrace_sleeptime - profile_data->sleeptime;
|
||||
}
|
||||
|
|
@ -6075,7 +6101,7 @@ int register_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
|
|||
new_hash = NULL;
|
||||
|
||||
ops->func = call_direct_funcs;
|
||||
ops->flags = MULTI_FLAGS;
|
||||
ops->flags |= MULTI_FLAGS;
|
||||
ops->trampoline = FTRACE_REGS_ADDR;
|
||||
ops->direct_call = addr;
|
||||
|
||||
|
|
|
|||
|
|
@ -3,6 +3,7 @@
|
|||
* Copyright (C) 2021 VMware Inc, Steven Rostedt <rostedt@goodmis.org>
|
||||
*/
|
||||
#include <linux/spinlock.h>
|
||||
#include <linux/seqlock.h>
|
||||
#include <linux/irq_work.h>
|
||||
#include <linux/slab.h>
|
||||
#include "trace.h"
|
||||
|
|
@ -126,7 +127,7 @@ bool trace_pid_list_is_set(struct trace_pid_list *pid_list, unsigned int pid)
|
|||
{
|
||||
union upper_chunk *upper_chunk;
|
||||
union lower_chunk *lower_chunk;
|
||||
unsigned long flags;
|
||||
unsigned int seq;
|
||||
unsigned int upper1;
|
||||
unsigned int upper2;
|
||||
unsigned int lower;
|
||||
|
|
@ -138,14 +139,16 @@ bool trace_pid_list_is_set(struct trace_pid_list *pid_list, unsigned int pid)
|
|||
if (pid_split(pid, &upper1, &upper2, &lower) < 0)
|
||||
return false;
|
||||
|
||||
raw_spin_lock_irqsave(&pid_list->lock, flags);
|
||||
upper_chunk = pid_list->upper[upper1];
|
||||
if (upper_chunk) {
|
||||
lower_chunk = upper_chunk->data[upper2];
|
||||
if (lower_chunk)
|
||||
ret = test_bit(lower, lower_chunk->data);
|
||||
}
|
||||
raw_spin_unlock_irqrestore(&pid_list->lock, flags);
|
||||
do {
|
||||
seq = read_seqcount_begin(&pid_list->seqcount);
|
||||
ret = false;
|
||||
upper_chunk = pid_list->upper[upper1];
|
||||
if (upper_chunk) {
|
||||
lower_chunk = upper_chunk->data[upper2];
|
||||
if (lower_chunk)
|
||||
ret = test_bit(lower, lower_chunk->data);
|
||||
}
|
||||
} while (read_seqcount_retry(&pid_list->seqcount, seq));
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
|
@ -178,6 +181,7 @@ int trace_pid_list_set(struct trace_pid_list *pid_list, unsigned int pid)
|
|||
return -EINVAL;
|
||||
|
||||
raw_spin_lock_irqsave(&pid_list->lock, flags);
|
||||
write_seqcount_begin(&pid_list->seqcount);
|
||||
upper_chunk = pid_list->upper[upper1];
|
||||
if (!upper_chunk) {
|
||||
upper_chunk = get_upper_chunk(pid_list);
|
||||
|
|
@ -199,6 +203,7 @@ int trace_pid_list_set(struct trace_pid_list *pid_list, unsigned int pid)
|
|||
set_bit(lower, lower_chunk->data);
|
||||
ret = 0;
|
||||
out:
|
||||
write_seqcount_end(&pid_list->seqcount);
|
||||
raw_spin_unlock_irqrestore(&pid_list->lock, flags);
|
||||
return ret;
|
||||
}
|
||||
|
|
@ -230,6 +235,7 @@ int trace_pid_list_clear(struct trace_pid_list *pid_list, unsigned int pid)
|
|||
return -EINVAL;
|
||||
|
||||
raw_spin_lock_irqsave(&pid_list->lock, flags);
|
||||
write_seqcount_begin(&pid_list->seqcount);
|
||||
upper_chunk = pid_list->upper[upper1];
|
||||
if (!upper_chunk)
|
||||
goto out;
|
||||
|
|
@ -250,6 +256,7 @@ int trace_pid_list_clear(struct trace_pid_list *pid_list, unsigned int pid)
|
|||
}
|
||||
}
|
||||
out:
|
||||
write_seqcount_end(&pid_list->seqcount);
|
||||
raw_spin_unlock_irqrestore(&pid_list->lock, flags);
|
||||
return 0;
|
||||
}
|
||||
|
|
@ -340,8 +347,10 @@ static void pid_list_refill_irq(struct irq_work *iwork)
|
|||
|
||||
again:
|
||||
raw_spin_lock(&pid_list->lock);
|
||||
write_seqcount_begin(&pid_list->seqcount);
|
||||
upper_count = CHUNK_ALLOC - pid_list->free_upper_chunks;
|
||||
lower_count = CHUNK_ALLOC - pid_list->free_lower_chunks;
|
||||
write_seqcount_end(&pid_list->seqcount);
|
||||
raw_spin_unlock(&pid_list->lock);
|
||||
|
||||
if (upper_count <= 0 && lower_count <= 0)
|
||||
|
|
@ -370,6 +379,7 @@ static void pid_list_refill_irq(struct irq_work *iwork)
|
|||
}
|
||||
|
||||
raw_spin_lock(&pid_list->lock);
|
||||
write_seqcount_begin(&pid_list->seqcount);
|
||||
if (upper) {
|
||||
*upper_next = pid_list->upper_list;
|
||||
pid_list->upper_list = upper;
|
||||
|
|
@ -380,6 +390,7 @@ static void pid_list_refill_irq(struct irq_work *iwork)
|
|||
pid_list->lower_list = lower;
|
||||
pid_list->free_lower_chunks += lcnt;
|
||||
}
|
||||
write_seqcount_end(&pid_list->seqcount);
|
||||
raw_spin_unlock(&pid_list->lock);
|
||||
|
||||
/*
|
||||
|
|
@ -419,6 +430,7 @@ struct trace_pid_list *trace_pid_list_alloc(void)
|
|||
init_irq_work(&pid_list->refill_irqwork, pid_list_refill_irq);
|
||||
|
||||
raw_spin_lock_init(&pid_list->lock);
|
||||
seqcount_raw_spinlock_init(&pid_list->seqcount, &pid_list->lock);
|
||||
|
||||
for (i = 0; i < CHUNK_ALLOC; i++) {
|
||||
union upper_chunk *chunk;
|
||||
|
|
|
|||
|
|
@ -76,6 +76,7 @@ union upper_chunk {
|
|||
};
|
||||
|
||||
struct trace_pid_list {
|
||||
seqcount_raw_spinlock_t seqcount;
|
||||
raw_spinlock_t lock;
|
||||
struct irq_work refill_irqwork;
|
||||
union upper_chunk *upper[UPPER1_SIZE]; // 1 or 2K in size
|
||||
|
|
|
|||
File diff suppressed because it is too large
Load Diff
|
|
@ -22,6 +22,7 @@
|
|||
#include <linux/ctype.h>
|
||||
#include <linux/once_lite.h>
|
||||
#include <linux/ftrace_regs.h>
|
||||
#include <linux/llist.h>
|
||||
|
||||
#include "pid_list.h"
|
||||
|
||||
|
|
@ -131,6 +132,8 @@ enum trace_type {
|
|||
#define HIST_STACKTRACE_SIZE (HIST_STACKTRACE_DEPTH * sizeof(unsigned long))
|
||||
#define HIST_STACKTRACE_SKIP 5
|
||||
|
||||
#define SYSCALL_FAULT_USER_MAX 165
|
||||
|
||||
/*
|
||||
* syscalls are special, and need special handling, this is why
|
||||
* they are not included in trace_entries.h
|
||||
|
|
@ -216,7 +219,7 @@ struct array_buffer {
|
|||
int cpu;
|
||||
};
|
||||
|
||||
#define TRACE_FLAGS_MAX_SIZE 32
|
||||
#define TRACE_FLAGS_MAX_SIZE 64
|
||||
|
||||
struct trace_options {
|
||||
struct tracer *tracer;
|
||||
|
|
@ -390,7 +393,8 @@ struct trace_array {
|
|||
int buffer_percent;
|
||||
unsigned int n_err_log_entries;
|
||||
struct tracer *current_trace;
|
||||
unsigned int trace_flags;
|
||||
struct tracer_flags *current_trace_flags;
|
||||
u64 trace_flags;
|
||||
unsigned char trace_flags_index[TRACE_FLAGS_MAX_SIZE];
|
||||
unsigned int flags;
|
||||
raw_spinlock_t start_lock;
|
||||
|
|
@ -404,6 +408,7 @@ struct trace_array {
|
|||
struct list_head systems;
|
||||
struct list_head events;
|
||||
struct list_head marker_list;
|
||||
struct list_head tracers;
|
||||
struct trace_event_file *trace_marker_file;
|
||||
cpumask_var_t tracing_cpumask; /* only trace on set CPUs */
|
||||
/* one per_cpu trace_pipe can be opened by only one user */
|
||||
|
|
@ -430,6 +435,7 @@ struct trace_array {
|
|||
int function_enabled;
|
||||
#endif
|
||||
int no_filter_buffering_ref;
|
||||
unsigned int syscall_buf_sz;
|
||||
struct list_head hist_vars;
|
||||
#ifdef CONFIG_TRACER_SNAPSHOT
|
||||
struct cond_snapshot *cond_snapshot;
|
||||
|
|
@ -448,6 +454,7 @@ enum {
|
|||
TRACE_ARRAY_FL_LAST_BOOT = BIT(2),
|
||||
TRACE_ARRAY_FL_MOD_INIT = BIT(3),
|
||||
TRACE_ARRAY_FL_MEMMAP = BIT(4),
|
||||
TRACE_ARRAY_FL_VMALLOC = BIT(5),
|
||||
};
|
||||
|
||||
#ifdef CONFIG_MODULES
|
||||
|
|
@ -631,9 +638,10 @@ struct tracer {
|
|||
u32 old_flags, u32 bit, int set);
|
||||
/* Return 0 if OK with change, else return non-zero */
|
||||
int (*flag_changed)(struct trace_array *tr,
|
||||
u32 mask, int set);
|
||||
u64 mask, int set);
|
||||
struct tracer *next;
|
||||
struct tracer_flags *flags;
|
||||
struct tracer_flags *default_flags;
|
||||
int enabled;
|
||||
bool print_max;
|
||||
bool allow_instances;
|
||||
|
|
@ -937,8 +945,6 @@ static __always_inline bool ftrace_hash_empty(struct ftrace_hash *hash)
|
|||
#define TRACE_GRAPH_PRINT_FILL_SHIFT 28
|
||||
#define TRACE_GRAPH_PRINT_FILL_MASK (0x3 << TRACE_GRAPH_PRINT_FILL_SHIFT)
|
||||
|
||||
extern void ftrace_graph_sleep_time_control(bool enable);
|
||||
|
||||
#ifdef CONFIG_FUNCTION_PROFILER
|
||||
extern void ftrace_graph_graph_time_control(bool enable);
|
||||
#else
|
||||
|
|
@ -958,7 +964,8 @@ extern int __trace_graph_entry(struct trace_array *tr,
|
|||
extern int __trace_graph_retaddr_entry(struct trace_array *tr,
|
||||
struct ftrace_graph_ent *trace,
|
||||
unsigned int trace_ctx,
|
||||
unsigned long retaddr);
|
||||
unsigned long retaddr,
|
||||
struct ftrace_regs *fregs);
|
||||
extern void __trace_graph_return(struct trace_array *tr,
|
||||
struct ftrace_graph_ret *trace,
|
||||
unsigned int trace_ctx,
|
||||
|
|
@ -1109,7 +1116,8 @@ static inline void ftrace_graph_addr_finish(struct fgraph_ops *gops, struct ftra
|
|||
#endif /* CONFIG_DYNAMIC_FTRACE */
|
||||
|
||||
extern unsigned int fgraph_max_depth;
|
||||
extern bool fgraph_sleep_time;
|
||||
extern int fgraph_no_sleep_time;
|
||||
extern bool fprofile_no_sleep_time;
|
||||
|
||||
static inline bool
|
||||
ftrace_graph_ignore_func(struct fgraph_ops *gops, struct ftrace_graph_ent *trace)
|
||||
|
|
@ -1345,11 +1353,11 @@ extern int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
|
|||
# define FUNCTION_FLAGS \
|
||||
C(FUNCTION, "function-trace"), \
|
||||
C(FUNC_FORK, "function-fork"),
|
||||
# define FUNCTION_DEFAULT_FLAGS TRACE_ITER_FUNCTION
|
||||
# define FUNCTION_DEFAULT_FLAGS TRACE_ITER(FUNCTION)
|
||||
#else
|
||||
# define FUNCTION_FLAGS
|
||||
# define FUNCTION_DEFAULT_FLAGS 0UL
|
||||
# define TRACE_ITER_FUNC_FORK 0UL
|
||||
# define TRACE_ITER_FUNC_FORK_BIT -1
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_STACKTRACE
|
||||
|
|
@ -1359,6 +1367,24 @@ extern int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
|
|||
# define STACK_FLAGS
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_FUNCTION_PROFILER
|
||||
# define PROFILER_FLAGS \
|
||||
C(PROF_TEXT_OFFSET, "prof-text-offset"),
|
||||
# ifdef CONFIG_FUNCTION_GRAPH_TRACER
|
||||
# define FPROFILE_FLAGS \
|
||||
C(GRAPH_TIME, "graph-time"),
|
||||
# define FPROFILE_DEFAULT_FLAGS TRACE_ITER(GRAPH_TIME)
|
||||
# else
|
||||
# define FPROFILE_FLAGS
|
||||
# define FPROFILE_DEFAULT_FLAGS 0UL
|
||||
# endif
|
||||
#else
|
||||
# define PROFILER_FLAGS
|
||||
# define FPROFILE_FLAGS
|
||||
# define FPROFILE_DEFAULT_FLAGS 0UL
|
||||
# define TRACE_ITER_PROF_TEXT_OFFSET_BIT -1
|
||||
#endif
|
||||
|
||||
/*
|
||||
* trace_iterator_flags is an enumeration that defines bit
|
||||
* positions into trace_flags that controls the output.
|
||||
|
|
@ -1391,13 +1417,15 @@ extern int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
|
|||
C(MARKERS, "markers"), \
|
||||
C(EVENT_FORK, "event-fork"), \
|
||||
C(TRACE_PRINTK, "trace_printk_dest"), \
|
||||
C(COPY_MARKER, "copy_trace_marker"),\
|
||||
C(COPY_MARKER, "copy_trace_marker"), \
|
||||
C(PAUSE_ON_TRACE, "pause-on-trace"), \
|
||||
C(HASH_PTR, "hash-ptr"), /* Print hashed pointer */ \
|
||||
FUNCTION_FLAGS \
|
||||
FGRAPH_FLAGS \
|
||||
STACK_FLAGS \
|
||||
BRANCH_FLAGS
|
||||
BRANCH_FLAGS \
|
||||
PROFILER_FLAGS \
|
||||
FPROFILE_FLAGS
|
||||
|
||||
/*
|
||||
* By defining C, we can make TRACE_FLAGS a list of bit names
|
||||
|
|
@ -1413,20 +1441,17 @@ enum trace_iterator_bits {
|
|||
};
|
||||
|
||||
/*
|
||||
* By redefining C, we can make TRACE_FLAGS a list of masks that
|
||||
* use the bits as defined above.
|
||||
* And use TRACE_ITER(flag) to define the bit masks.
|
||||
*/
|
||||
#undef C
|
||||
#define C(a, b) TRACE_ITER_##a = (1 << TRACE_ITER_##a##_BIT)
|
||||
|
||||
enum trace_iterator_flags { TRACE_FLAGS };
|
||||
#define TRACE_ITER(flag) \
|
||||
(TRACE_ITER_##flag##_BIT < 0 ? 0 : 1ULL << (TRACE_ITER_##flag##_BIT))
|
||||
|
||||
/*
|
||||
* TRACE_ITER_SYM_MASK masks the options in trace_flags that
|
||||
* control the output of kernel symbols.
|
||||
*/
|
||||
#define TRACE_ITER_SYM_MASK \
|
||||
(TRACE_ITER_PRINT_PARENT|TRACE_ITER_SYM_OFFSET|TRACE_ITER_SYM_ADDR)
|
||||
(TRACE_ITER(PRINT_PARENT)|TRACE_ITER(SYM_OFFSET)|TRACE_ITER(SYM_ADDR))
|
||||
|
||||
extern struct tracer nop_trace;
|
||||
|
||||
|
|
@ -1435,7 +1460,7 @@ extern int enable_branch_tracing(struct trace_array *tr);
|
|||
extern void disable_branch_tracing(void);
|
||||
static inline int trace_branch_enable(struct trace_array *tr)
|
||||
{
|
||||
if (tr->trace_flags & TRACE_ITER_BRANCH)
|
||||
if (tr->trace_flags & TRACE_ITER(BRANCH))
|
||||
return enable_branch_tracing(tr);
|
||||
return 0;
|
||||
}
|
||||
|
|
@ -1531,6 +1556,23 @@ void trace_buffered_event_enable(void);
|
|||
|
||||
void early_enable_events(struct trace_array *tr, char *buf, bool disable_first);
|
||||
|
||||
struct trace_user_buf;
|
||||
struct trace_user_buf_info {
|
||||
struct trace_user_buf __percpu *tbuf;
|
||||
size_t size;
|
||||
int ref;
|
||||
};
|
||||
|
||||
typedef int (*trace_user_buf_copy)(char *dst, const char __user *src,
|
||||
size_t size, void *data);
|
||||
int trace_user_fault_init(struct trace_user_buf_info *tinfo, size_t size);
|
||||
int trace_user_fault_get(struct trace_user_buf_info *tinfo);
|
||||
int trace_user_fault_put(struct trace_user_buf_info *tinfo);
|
||||
void trace_user_fault_destroy(struct trace_user_buf_info *tinfo);
|
||||
char *trace_user_fault_read(struct trace_user_buf_info *tinfo,
|
||||
const char __user *ptr, size_t size,
|
||||
trace_user_buf_copy copy_func, void *data);
|
||||
|
||||
static inline void
|
||||
__trace_event_discard_commit(struct trace_buffer *buffer,
|
||||
struct ring_buffer_event *event)
|
||||
|
|
@ -1752,13 +1794,13 @@ extern void clear_event_triggers(struct trace_array *tr);
|
|||
|
||||
enum {
|
||||
EVENT_TRIGGER_FL_PROBE = BIT(0),
|
||||
EVENT_TRIGGER_FL_COUNT = BIT(1),
|
||||
};
|
||||
|
||||
struct event_trigger_data {
|
||||
unsigned long count;
|
||||
int ref;
|
||||
int flags;
|
||||
const struct event_trigger_ops *ops;
|
||||
struct event_command *cmd_ops;
|
||||
struct event_filter __rcu *filter;
|
||||
char *filter_str;
|
||||
|
|
@ -1769,6 +1811,7 @@ struct event_trigger_data {
|
|||
char *name;
|
||||
struct list_head named_list;
|
||||
struct event_trigger_data *named_data;
|
||||
struct llist_node llist;
|
||||
};
|
||||
|
||||
/* Avoid typos */
|
||||
|
|
@ -1783,6 +1826,10 @@ struct enable_trigger_data {
|
|||
bool hist;
|
||||
};
|
||||
|
||||
bool event_trigger_count(struct event_trigger_data *data,
|
||||
struct trace_buffer *buffer, void *rec,
|
||||
struct ring_buffer_event *event);
|
||||
|
||||
extern int event_enable_trigger_print(struct seq_file *m,
|
||||
struct event_trigger_data *data);
|
||||
extern void event_enable_trigger_free(struct event_trigger_data *data);
|
||||
|
|
@ -1845,64 +1892,6 @@ extern void event_trigger_unregister(struct event_command *cmd_ops,
|
|||
extern void event_file_get(struct trace_event_file *file);
|
||||
extern void event_file_put(struct trace_event_file *file);
|
||||
|
||||
/**
|
||||
* struct event_trigger_ops - callbacks for trace event triggers
|
||||
*
|
||||
* The methods in this structure provide per-event trigger hooks for
|
||||
* various trigger operations.
|
||||
*
|
||||
* The @init and @free methods are used during trigger setup and
|
||||
* teardown, typically called from an event_command's @parse()
|
||||
* function implementation.
|
||||
*
|
||||
* The @print method is used to print the trigger spec.
|
||||
*
|
||||
* The @trigger method is the function that actually implements the
|
||||
* trigger and is called in the context of the triggering event
|
||||
* whenever that event occurs.
|
||||
*
|
||||
* All the methods below, except for @init() and @free(), must be
|
||||
* implemented.
|
||||
*
|
||||
* @trigger: The trigger 'probe' function called when the triggering
|
||||
* event occurs. The data passed into this callback is the data
|
||||
* that was supplied to the event_command @reg() function that
|
||||
* registered the trigger (see struct event_command) along with
|
||||
* the trace record, rec.
|
||||
*
|
||||
* @init: An optional initialization function called for the trigger
|
||||
* when the trigger is registered (via the event_command reg()
|
||||
* function). This can be used to perform per-trigger
|
||||
* initialization such as incrementing a per-trigger reference
|
||||
* count, for instance. This is usually implemented by the
|
||||
* generic utility function @event_trigger_init() (see
|
||||
* trace_event_triggers.c).
|
||||
*
|
||||
* @free: An optional de-initialization function called for the
|
||||
* trigger when the trigger is unregistered (via the
|
||||
* event_command @reg() function). This can be used to perform
|
||||
* per-trigger de-initialization such as decrementing a
|
||||
* per-trigger reference count and freeing corresponding trigger
|
||||
* data, for instance. This is usually implemented by the
|
||||
* generic utility function @event_trigger_free() (see
|
||||
* trace_event_triggers.c).
|
||||
*
|
||||
* @print: The callback function invoked to have the trigger print
|
||||
* itself. This is usually implemented by a wrapper function
|
||||
* that calls the generic utility function @event_trigger_print()
|
||||
* (see trace_event_triggers.c).
|
||||
*/
|
||||
struct event_trigger_ops {
|
||||
void (*trigger)(struct event_trigger_data *data,
|
||||
struct trace_buffer *buffer,
|
||||
void *rec,
|
||||
struct ring_buffer_event *rbe);
|
||||
int (*init)(struct event_trigger_data *data);
|
||||
void (*free)(struct event_trigger_data *data);
|
||||
int (*print)(struct seq_file *m,
|
||||
struct event_trigger_data *data);
|
||||
};
|
||||
|
||||
/**
|
||||
* struct event_command - callbacks and data members for event commands
|
||||
*
|
||||
|
|
@ -1952,7 +1941,7 @@ struct event_trigger_ops {
|
|||
*
|
||||
* @reg: Adds the trigger to the list of triggers associated with the
|
||||
* event, and enables the event trigger itself, after
|
||||
* initializing it (via the event_trigger_ops @init() function).
|
||||
* initializing it (via the event_command @init() function).
|
||||
* This is also where commands can use the @trigger_type value to
|
||||
* make the decision as to whether or not multiple instances of
|
||||
* the trigger should be allowed. This is usually implemented by
|
||||
|
|
@ -1961,7 +1950,7 @@ struct event_trigger_ops {
|
|||
*
|
||||
* @unreg: Removes the trigger from the list of triggers associated
|
||||
* with the event, and disables the event trigger itself, after
|
||||
* initializing it (via the event_trigger_ops @free() function).
|
||||
* initializing it (via the event_command @free() function).
|
||||
* This is usually implemented by the generic utility function
|
||||
* @unregister_trigger() (see trace_event_triggers.c).
|
||||
*
|
||||
|
|
@ -1975,12 +1964,41 @@ struct event_trigger_ops {
|
|||
* ignored. This is usually implemented by the generic utility
|
||||
* function @set_trigger_filter() (see trace_event_triggers.c).
|
||||
*
|
||||
* @get_trigger_ops: The callback function invoked to retrieve the
|
||||
* event_trigger_ops implementation associated with the command.
|
||||
* This callback function allows a single event_command to
|
||||
* support multiple trigger implementations via different sets of
|
||||
* event_trigger_ops, depending on the value of the @param
|
||||
* string.
|
||||
* All the methods below, except for @init() and @free(), must be
|
||||
* implemented.
|
||||
*
|
||||
* @trigger: The trigger 'probe' function called when the triggering
|
||||
* event occurs. The data passed into this callback is the data
|
||||
* that was supplied to the event_command @reg() function that
|
||||
* registered the trigger (see struct event_command) along with
|
||||
* the trace record, rec.
|
||||
*
|
||||
* @count_func: If defined and a numeric parameter is passed to the
|
||||
* trigger, then this function will be called before @trigger
|
||||
* is called. If this function returns false, then @trigger is not
|
||||
* executed.
|
||||
*
|
||||
* @init: An optional initialization function called for the trigger
|
||||
* when the trigger is registered (via the event_command reg()
|
||||
* function). This can be used to perform per-trigger
|
||||
* initialization such as incrementing a per-trigger reference
|
||||
* count, for instance. This is usually implemented by the
|
||||
* generic utility function @event_trigger_init() (see
|
||||
* trace_event_triggers.c).
|
||||
*
|
||||
* @free: An optional de-initialization function called for the
|
||||
* trigger when the trigger is unregistered (via the
|
||||
* event_command @reg() function). This can be used to perform
|
||||
* per-trigger de-initialization such as decrementing a
|
||||
* per-trigger reference count and freeing corresponding trigger
|
||||
* data, for instance. This is usually implemented by the
|
||||
* generic utility function @event_trigger_free() (see
|
||||
* trace_event_triggers.c).
|
||||
*
|
||||
* @print: The callback function invoked to have the trigger print
|
||||
* itself. This is usually implemented by a wrapper function
|
||||
* that calls the generic utility function @event_trigger_print()
|
||||
* (see trace_event_triggers.c).
|
||||
*/
|
||||
struct event_command {
|
||||
struct list_head list;
|
||||
|
|
@ -2001,7 +2019,18 @@ struct event_command {
|
|||
int (*set_filter)(char *filter_str,
|
||||
struct event_trigger_data *data,
|
||||
struct trace_event_file *file);
|
||||
const struct event_trigger_ops *(*get_trigger_ops)(char *cmd, char *param);
|
||||
void (*trigger)(struct event_trigger_data *data,
|
||||
struct trace_buffer *buffer,
|
||||
void *rec,
|
||||
struct ring_buffer_event *rbe);
|
||||
bool (*count_func)(struct event_trigger_data *data,
|
||||
struct trace_buffer *buffer,
|
||||
void *rec,
|
||||
struct ring_buffer_event *rbe);
|
||||
int (*init)(struct event_trigger_data *data);
|
||||
void (*free)(struct event_trigger_data *data);
|
||||
int (*print)(struct seq_file *m,
|
||||
struct event_trigger_data *data);
|
||||
};
|
||||
|
||||
/**
|
||||
|
|
@ -2022,7 +2051,7 @@ struct event_command {
|
|||
* either committed or discarded. At that point, if any commands
|
||||
* have deferred their triggers, those commands are finally
|
||||
* invoked following the close of the current event. In other
|
||||
* words, if the event_trigger_ops @func() probe implementation
|
||||
* words, if the event_command @func() probe implementation
|
||||
* itself logs to the trace buffer, this flag should be set,
|
||||
* otherwise it can be left unspecified.
|
||||
*
|
||||
|
|
@ -2064,8 +2093,8 @@ extern const char *__stop___tracepoint_str[];
|
|||
|
||||
void trace_printk_control(bool enabled);
|
||||
void trace_printk_start_comm(void);
|
||||
int trace_keep_overwrite(struct tracer *tracer, u32 mask, int set);
|
||||
int set_tracer_flag(struct trace_array *tr, unsigned int mask, int enabled);
|
||||
int trace_keep_overwrite(struct tracer *tracer, u64 mask, int set);
|
||||
int set_tracer_flag(struct trace_array *tr, u64 mask, int enabled);
|
||||
|
||||
/* Used from boot time tracer */
|
||||
extern int trace_set_options(struct trace_array *tr, char *option);
|
||||
|
|
@ -2248,4 +2277,25 @@ static inline int rv_init_interface(void)
|
|||
*/
|
||||
#define FTRACE_TRAMPOLINE_MARKER ((unsigned long) INT_MAX)
|
||||
|
||||
/*
|
||||
* This is used to get the address of the args array based on
|
||||
* the type of the entry.
|
||||
*/
|
||||
#define FGRAPH_ENTRY_ARGS(e) \
|
||||
({ \
|
||||
unsigned long *_args; \
|
||||
struct ftrace_graph_ent_entry *_e = e; \
|
||||
\
|
||||
if (IS_ENABLED(CONFIG_FUNCTION_GRAPH_RETADDR) && \
|
||||
e->ent.type == TRACE_GRAPH_RETADDR_ENT) { \
|
||||
struct fgraph_retaddr_ent_entry *_re; \
|
||||
\
|
||||
_re = (typeof(_re))_e; \
|
||||
_args = _re->args; \
|
||||
} else { \
|
||||
_args = _e->args; \
|
||||
} \
|
||||
_args; \
|
||||
})
|
||||
|
||||
#endif /* _LINUX_KERNEL_TRACE_H */
|
||||
|
|
|
|||
|
|
@ -144,9 +144,16 @@ static int create_dyn_event(const char *raw_command)
|
|||
if (!ret || ret != -ECANCELED)
|
||||
break;
|
||||
}
|
||||
mutex_unlock(&dyn_event_ops_mutex);
|
||||
if (ret == -ECANCELED)
|
||||
if (ret == -ECANCELED) {
|
||||
static const char *err_msg[] = {"No matching dynamic event type"};
|
||||
|
||||
/* Wrong dynamic event. Leave an error message. */
|
||||
tracing_log_err(NULL, "dynevent", raw_command, err_msg,
|
||||
0, 0);
|
||||
ret = -EINVAL;
|
||||
}
|
||||
|
||||
mutex_unlock(&dyn_event_ops_mutex);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
|
|
|||
|
|
@ -80,11 +80,11 @@ FTRACE_ENTRY(funcgraph_entry, ftrace_graph_ent_entry,
|
|||
F_STRUCT(
|
||||
__field_struct( struct ftrace_graph_ent, graph_ent )
|
||||
__field_packed( unsigned long, graph_ent, func )
|
||||
__field_packed( unsigned int, graph_ent, depth )
|
||||
__field_packed( unsigned long, graph_ent, depth )
|
||||
__dynamic_array(unsigned long, args )
|
||||
),
|
||||
|
||||
F_printk("--> %ps (%u)", (void *)__entry->func, __entry->depth)
|
||||
F_printk("--> %ps (%lu)", (void *)__entry->func, __entry->depth)
|
||||
);
|
||||
|
||||
#ifdef CONFIG_FUNCTION_GRAPH_RETADDR
|
||||
|
|
@ -95,13 +95,14 @@ FTRACE_ENTRY_PACKED(fgraph_retaddr_entry, fgraph_retaddr_ent_entry,
|
|||
TRACE_GRAPH_RETADDR_ENT,
|
||||
|
||||
F_STRUCT(
|
||||
__field_struct( struct fgraph_retaddr_ent, graph_ent )
|
||||
__field_packed( unsigned long, graph_ent, func )
|
||||
__field_packed( unsigned int, graph_ent, depth )
|
||||
__field_packed( unsigned long, graph_ent, retaddr )
|
||||
__field_struct( struct fgraph_retaddr_ent, graph_rent )
|
||||
__field_packed( unsigned long, graph_rent.ent, func )
|
||||
__field_packed( unsigned long, graph_rent.ent, depth )
|
||||
__field_packed( unsigned long, graph_rent, retaddr )
|
||||
__dynamic_array(unsigned long, args )
|
||||
),
|
||||
|
||||
F_printk("--> %ps (%u) <- %ps", (void *)__entry->func, __entry->depth,
|
||||
F_printk("--> %ps (%lu) <- %ps", (void *)__entry->func, __entry->depth,
|
||||
(void *)__entry->retaddr)
|
||||
);
|
||||
|
||||
|
|
|
|||
|
|
@ -484,13 +484,6 @@ static void eprobe_trigger_func(struct event_trigger_data *data,
|
|||
__eprobe_trace_func(edata, rec);
|
||||
}
|
||||
|
||||
static const struct event_trigger_ops eprobe_trigger_ops = {
|
||||
.trigger = eprobe_trigger_func,
|
||||
.print = eprobe_trigger_print,
|
||||
.init = eprobe_trigger_init,
|
||||
.free = eprobe_trigger_free,
|
||||
};
|
||||
|
||||
static int eprobe_trigger_cmd_parse(struct event_command *cmd_ops,
|
||||
struct trace_event_file *file,
|
||||
char *glob, char *cmd,
|
||||
|
|
@ -513,12 +506,6 @@ static void eprobe_trigger_unreg_func(char *glob,
|
|||
|
||||
}
|
||||
|
||||
static const struct event_trigger_ops *eprobe_trigger_get_ops(char *cmd,
|
||||
char *param)
|
||||
{
|
||||
return &eprobe_trigger_ops;
|
||||
}
|
||||
|
||||
static struct event_command event_trigger_cmd = {
|
||||
.name = "eprobe",
|
||||
.trigger_type = ETT_EVENT_EPROBE,
|
||||
|
|
@ -527,8 +514,11 @@ static struct event_command event_trigger_cmd = {
|
|||
.reg = eprobe_trigger_reg_func,
|
||||
.unreg = eprobe_trigger_unreg_func,
|
||||
.unreg_all = NULL,
|
||||
.get_trigger_ops = eprobe_trigger_get_ops,
|
||||
.set_filter = NULL,
|
||||
.trigger = eprobe_trigger_func,
|
||||
.print = eprobe_trigger_print,
|
||||
.init = eprobe_trigger_init,
|
||||
.free = eprobe_trigger_free,
|
||||
};
|
||||
|
||||
static struct event_trigger_data *
|
||||
|
|
@ -548,7 +538,6 @@ new_eprobe_trigger(struct trace_eprobe *ep, struct trace_event_file *file)
|
|||
|
||||
trigger->flags = EVENT_TRIGGER_FL_PROBE;
|
||||
trigger->count = -1;
|
||||
trigger->ops = &eprobe_trigger_ops;
|
||||
|
||||
/*
|
||||
* EVENT PROBE triggers are not registered as commands with
|
||||
|
|
|
|||
|
|
@ -845,13 +845,13 @@ static int __ftrace_event_enable_disable(struct trace_event_file *file,
|
|||
if (soft_disable)
|
||||
set_bit(EVENT_FILE_FL_SOFT_DISABLED_BIT, &file->flags);
|
||||
|
||||
if (tr->trace_flags & TRACE_ITER_RECORD_CMD) {
|
||||
if (tr->trace_flags & TRACE_ITER(RECORD_CMD)) {
|
||||
cmd = true;
|
||||
tracing_start_cmdline_record();
|
||||
set_bit(EVENT_FILE_FL_RECORDED_CMD_BIT, &file->flags);
|
||||
}
|
||||
|
||||
if (tr->trace_flags & TRACE_ITER_RECORD_TGID) {
|
||||
if (tr->trace_flags & TRACE_ITER(RECORD_TGID)) {
|
||||
tgid = true;
|
||||
tracing_start_tgid_record();
|
||||
set_bit(EVENT_FILE_FL_RECORDED_TGID_BIT, &file->flags);
|
||||
|
|
|
|||
|
|
@ -5696,7 +5696,7 @@ static void hist_trigger_show(struct seq_file *m,
|
|||
seq_puts(m, "\n\n");
|
||||
|
||||
seq_puts(m, "# event histogram\n#\n# trigger info: ");
|
||||
data->ops->print(m, data);
|
||||
data->cmd_ops->print(m, data);
|
||||
seq_puts(m, "#\n\n");
|
||||
|
||||
hist_data = data->private_data;
|
||||
|
|
@ -6018,7 +6018,7 @@ static void hist_trigger_debug_show(struct seq_file *m,
|
|||
seq_puts(m, "\n\n");
|
||||
|
||||
seq_puts(m, "# event histogram\n#\n# trigger info: ");
|
||||
data->ops->print(m, data);
|
||||
data->cmd_ops->print(m, data);
|
||||
seq_puts(m, "#\n\n");
|
||||
|
||||
hist_data = data->private_data;
|
||||
|
|
@ -6328,20 +6328,21 @@ static void event_hist_trigger_free(struct event_trigger_data *data)
|
|||
free_hist_pad();
|
||||
}
|
||||
|
||||
static const struct event_trigger_ops event_hist_trigger_ops = {
|
||||
.trigger = event_hist_trigger,
|
||||
.print = event_hist_trigger_print,
|
||||
.init = event_hist_trigger_init,
|
||||
.free = event_hist_trigger_free,
|
||||
};
|
||||
|
||||
static int event_hist_trigger_named_init(struct event_trigger_data *data)
|
||||
{
|
||||
int ret;
|
||||
|
||||
data->ref++;
|
||||
|
||||
save_named_trigger(data->named_data->name, data);
|
||||
|
||||
return event_hist_trigger_init(data->named_data);
|
||||
ret = event_hist_trigger_init(data->named_data);
|
||||
if (ret < 0) {
|
||||
kfree(data->cmd_ops);
|
||||
data->cmd_ops = &trigger_hist_cmd;
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void event_hist_trigger_named_free(struct event_trigger_data *data)
|
||||
|
|
@@ -6353,24 +6354,14 @@ static void event_hist_trigger_named_free(struct event_trigger_data *data)

 	data->ref--;
 	if (!data->ref) {
+		struct event_command *cmd_ops = data->cmd_ops;
+
 		del_named_trigger(data);
 		trigger_data_free(data);
+		kfree(cmd_ops);
 	}
 }

-static const struct event_trigger_ops event_hist_trigger_named_ops = {
-	.trigger = event_hist_trigger,
-	.print = event_hist_trigger_print,
-	.init = event_hist_trigger_named_init,
-	.free = event_hist_trigger_named_free,
-};
-
-static const struct event_trigger_ops *event_hist_get_trigger_ops(char *cmd,
-								   char *param)
-{
-	return &event_hist_trigger_ops;
-}
-
 static void hist_clear(struct event_trigger_data *data)
 {
 	struct hist_trigger_data *hist_data = data->private_data;
@@ -6564,13 +6555,24 @@ static int hist_register_trigger(char *glob,
 		data->paused = true;

 	if (named_data) {
+		struct event_command *cmd_ops;
+
 		data->private_data = named_data->private_data;
 		set_named_trigger_data(data, named_data);
-		data->ops = &event_hist_trigger_named_ops;
+		/* Copy the command ops and update some of the functions */
+		cmd_ops = kmalloc(sizeof(*cmd_ops), GFP_KERNEL);
+		if (!cmd_ops) {
+			ret = -ENOMEM;
+			goto out;
+		}
+		*cmd_ops = *data->cmd_ops;
+		cmd_ops->init = event_hist_trigger_named_init;
+		cmd_ops->free = event_hist_trigger_named_free;
+		data->cmd_ops = cmd_ops;
 	}

-	if (data->ops->init) {
-		ret = data->ops->init(data);
+	if (data->cmd_ops->init) {
+		ret = data->cmd_ops->init(data);
 		if (ret < 0)
 			goto out;
 	}
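For named triggers, the code above clones the shared event_command and overrides only the init/free callbacks on the copy. A minimal standalone illustration of that copy-then-patch idiom, using hypothetical types rather than the kernel structures:

#include <stdio.h>
#include <stdlib.h>

struct ops {
	void (*init)(void);
	void (*shutdown)(void);
};

static void generic_init(void)     { puts("generic init"); }
static void generic_shutdown(void) { puts("generic shutdown"); }
static void named_init(void)       { puts("named init"); }

static const struct ops generic_ops = {
	.init     = generic_init,
	.shutdown = generic_shutdown,
};

int main(void)
{
	/* Copy the shared table, then override just the hooks that differ. */
	struct ops *named_ops = malloc(sizeof(*named_ops));

	if (!named_ops)
		return 1;
	*named_ops = generic_ops;
	named_ops->init = named_init;

	named_ops->init();	/* named init */
	named_ops->shutdown();	/* still the generic shutdown */
	free(named_ops);
	return 0;
}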
@@ -6684,8 +6686,8 @@ static void hist_unregister_trigger(char *glob,
 		}
 	}

-	if (test && test->ops->free)
-		test->ops->free(test);
+	if (test && test->cmd_ops->free)
+		test->cmd_ops->free(test);

 	if (hist_data->enable_timestamps) {
 		if (!hist_data->remove || test)
@@ -6737,8 +6739,8 @@ static void hist_unreg_all(struct trace_event_file *file)
 			update_cond_flag(file);
 			if (hist_data->enable_timestamps)
 				tracing_set_filter_buffering(file->tr, false);
-			if (test->ops->free)
-				test->ops->free(test);
+			if (test->cmd_ops->free)
+				test->cmd_ops->free(test);
 		}
 	}
 }
@@ -6914,8 +6916,11 @@ static struct event_command trigger_hist_cmd = {
 	.reg = hist_register_trigger,
 	.unreg = hist_unregister_trigger,
 	.unreg_all = hist_unreg_all,
-	.get_trigger_ops = event_hist_get_trigger_ops,
 	.set_filter = set_trigger_filter,
+	.trigger = event_hist_trigger,
+	.print = event_hist_trigger_print,
+	.init = event_hist_trigger_init,
+	.free = event_hist_trigger_free,
 };

 __init int register_trigger_hist_cmd(void)
@ -6947,66 +6952,6 @@ hist_enable_trigger(struct event_trigger_data *data,
|
|||
}
|
||||
}
|
||||
|
||||
static void
|
||||
hist_enable_count_trigger(struct event_trigger_data *data,
|
||||
struct trace_buffer *buffer, void *rec,
|
||||
struct ring_buffer_event *event)
|
||||
{
|
||||
if (!data->count)
|
||||
return;
|
||||
|
||||
if (data->count != -1)
|
||||
(data->count)--;
|
||||
|
||||
hist_enable_trigger(data, buffer, rec, event);
|
||||
}
|
||||
|
||||
static const struct event_trigger_ops hist_enable_trigger_ops = {
|
||||
.trigger = hist_enable_trigger,
|
||||
.print = event_enable_trigger_print,
|
||||
.init = event_trigger_init,
|
||||
.free = event_enable_trigger_free,
|
||||
};
|
||||
|
||||
static const struct event_trigger_ops hist_enable_count_trigger_ops = {
|
||||
.trigger = hist_enable_count_trigger,
|
||||
.print = event_enable_trigger_print,
|
||||
.init = event_trigger_init,
|
||||
.free = event_enable_trigger_free,
|
||||
};
|
||||
|
||||
static const struct event_trigger_ops hist_disable_trigger_ops = {
|
||||
.trigger = hist_enable_trigger,
|
||||
.print = event_enable_trigger_print,
|
||||
.init = event_trigger_init,
|
||||
.free = event_enable_trigger_free,
|
||||
};
|
||||
|
||||
static const struct event_trigger_ops hist_disable_count_trigger_ops = {
|
||||
.trigger = hist_enable_count_trigger,
|
||||
.print = event_enable_trigger_print,
|
||||
.init = event_trigger_init,
|
||||
.free = event_enable_trigger_free,
|
||||
};
|
||||
|
||||
static const struct event_trigger_ops *
|
||||
hist_enable_get_trigger_ops(char *cmd, char *param)
|
||||
{
|
||||
const struct event_trigger_ops *ops;
|
||||
bool enable;
|
||||
|
||||
enable = (strcmp(cmd, ENABLE_HIST_STR) == 0);
|
||||
|
||||
if (enable)
|
||||
ops = param ? &hist_enable_count_trigger_ops :
|
||||
&hist_enable_trigger_ops;
|
||||
else
|
||||
ops = param ? &hist_disable_count_trigger_ops :
|
||||
&hist_disable_trigger_ops;
|
||||
|
||||
return ops;
|
||||
}
|
||||
|
||||
static void hist_enable_unreg_all(struct trace_event_file *file)
|
||||
{
|
||||
struct event_trigger_data *test, *n;
|
||||
|
|
@ -7016,8 +6961,8 @@ static void hist_enable_unreg_all(struct trace_event_file *file)
|
|||
list_del_rcu(&test->list);
|
||||
update_cond_flag(file);
|
||||
trace_event_trigger_enable_disable(file, 0);
|
||||
if (test->ops->free)
|
||||
test->ops->free(test);
|
||||
if (test->cmd_ops->free)
|
||||
test->cmd_ops->free(test);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
@ -7029,8 +6974,12 @@ static struct event_command trigger_hist_enable_cmd = {
|
|||
.reg = event_enable_register_trigger,
|
||||
.unreg = event_enable_unregister_trigger,
|
||||
.unreg_all = hist_enable_unreg_all,
|
||||
.get_trigger_ops = hist_enable_get_trigger_ops,
|
||||
.set_filter = set_trigger_filter,
|
||||
.trigger = hist_enable_trigger,
|
||||
.count_func = event_trigger_count,
|
||||
.print = event_enable_trigger_print,
|
||||
.init = event_trigger_init,
|
||||
.free = event_enable_trigger_free,
|
||||
};
|
||||
|
||||
static struct event_command trigger_hist_disable_cmd = {
|
||||
|
|
@ -7040,8 +6989,12 @@ static struct event_command trigger_hist_disable_cmd = {
|
|||
.reg = event_enable_register_trigger,
|
||||
.unreg = event_enable_unregister_trigger,
|
||||
.unreg_all = hist_enable_unreg_all,
|
||||
.get_trigger_ops = hist_enable_get_trigger_ops,
|
||||
.set_filter = set_trigger_filter,
|
||||
.trigger = hist_enable_trigger,
|
||||
.count_func = event_trigger_count,
|
||||
.print = event_enable_trigger_print,
|
||||
.init = event_trigger_init,
|
||||
.free = event_enable_trigger_free,
|
||||
};
|
||||
|
||||
static __init void unregister_trigger_hist_enable_disable_cmds(void)
|
||||
|
|
|
|||
|
|
@ -359,7 +359,7 @@ static enum print_line_t print_synth_event(struct trace_iterator *iter,
|
|||
fmt = synth_field_fmt(se->fields[i]->type);
|
||||
|
||||
/* parameter types */
|
||||
if (tr && tr->trace_flags & TRACE_ITER_VERBOSE)
|
||||
if (tr && tr->trace_flags & TRACE_ITER(VERBOSE))
|
||||
trace_seq_printf(s, "%s ", fmt);
|
||||
|
||||
snprintf(print_fmt, sizeof(print_fmt), "%%s=%s%%s", fmt);
|
||||
|
|
|
|||
|
|
@ -6,6 +6,7 @@
|
|||
*/
|
||||
|
||||
#include <linux/security.h>
|
||||
#include <linux/kthread.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/ctype.h>
|
||||
#include <linux/mutex.h>
|
||||
|
|
@@ -17,15 +18,77 @@
 static LIST_HEAD(trigger_commands);
 static DEFINE_MUTEX(trigger_cmd_mutex);

+static struct task_struct *trigger_kthread;
+static struct llist_head trigger_data_free_list;
+static DEFINE_MUTEX(trigger_data_kthread_mutex);
+
+/* Bulk garbage collection of event_trigger_data elements */
+static int trigger_kthread_fn(void *ignore)
+{
+	struct event_trigger_data *data, *tmp;
+	struct llist_node *llnodes;
+
+	/* Once this task starts, it lives forever */
+	for (;;) {
+		set_current_state(TASK_INTERRUPTIBLE);
+		if (llist_empty(&trigger_data_free_list))
+			schedule();
+
+		__set_current_state(TASK_RUNNING);
+
+		llnodes = llist_del_all(&trigger_data_free_list);
+
+		/* make sure current triggers exit before free */
+		tracepoint_synchronize_unregister();
+
+		llist_for_each_entry_safe(data, tmp, llnodes, llist)
+			kfree(data);
+	}
+
+	return 0;
+}
+
 void trigger_data_free(struct event_trigger_data *data)
 {
 	if (data->cmd_ops->set_filter)
 		data->cmd_ops->set_filter(NULL, data, NULL);

-	/* make sure current triggers exit before free */
-	tracepoint_synchronize_unregister();
-
-	kfree(data);
+	if (unlikely(!trigger_kthread)) {
+		guard(mutex)(&trigger_data_kthread_mutex);
+		/* Check again after taking mutex */
+		if (!trigger_kthread) {
+			struct task_struct *kthread;
+
+			kthread = kthread_create(trigger_kthread_fn, NULL,
+						 "trigger_data_free");
+			if (!IS_ERR(kthread))
+				WRITE_ONCE(trigger_kthread, kthread);
+		}
+	}
+
+	if (!trigger_kthread) {
+		/* Do it the slow way */
+		tracepoint_synchronize_unregister();
+		kfree(data);
+		return;
+	}
+
+	llist_add(&data->llist, &trigger_data_free_list);
+	wake_up_process(trigger_kthread);
 }

+static inline void data_ops_trigger(struct event_trigger_data *data,
+				    struct trace_buffer *buffer, void *rec,
+				    struct ring_buffer_event *event)
+{
+	const struct event_command *cmd_ops = data->cmd_ops;
+
+	if (data->flags & EVENT_TRIGGER_FL_COUNT) {
+		if (!cmd_ops->count_func(data, buffer, rec, event))
+			return;
+	}
+
+	cmd_ops->trigger(data, buffer, rec, event);
+}
+
 /**
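The new trigger_data_free() path above defers the actual kfree() to a kernel thread: producers push retired event_trigger_data objects onto a lock-free llist and the thread frees them in batches after a single tracepoint_synchronize_unregister() call. A rough userspace analogue of that producer/reaper pattern, for context only (the node type, pthread plumbing and printf stand in for the kernel's event_trigger_data, kthread and synchronization):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct node {
	struct node *next;
	int payload;
};

static _Atomic(struct node *) free_list;
static pthread_mutex_t work_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t work = PTHREAD_COND_INITIALIZER;

static void defer_free(struct node *n)
{
	struct node *head = atomic_load(&free_list);

	/* llist_add() analogue: lock-free prepend onto the retired list */
	do {
		n->next = head;
	} while (!atomic_compare_exchange_weak(&free_list, &head, n));

	pthread_mutex_lock(&work_lock);
	pthread_cond_signal(&work);	/* wake_up_process() analogue */
	pthread_mutex_unlock(&work_lock);
}

static void *reaper_fn(void *ignore)
{
	(void)ignore;
	for (;;) {
		struct node *batch, *next;

		pthread_mutex_lock(&work_lock);
		while (!atomic_load(&free_list))
			pthread_cond_wait(&work, &work_lock);
		pthread_mutex_unlock(&work_lock);

		/* llist_del_all() analogue: grab the whole batch at once */
		batch = atomic_exchange(&free_list, NULL);

		/* a real implementation would synchronize with readers here */
		for (; batch; batch = next) {
			next = batch->next;
			printf("freeing %d\n", batch->payload);
			free(batch);
		}
	}
	return NULL;
}

int main(void)
{
	pthread_t reaper;

	pthread_create(&reaper, NULL, reaper_fn, NULL);
	for (int i = 0; i < 8; i++) {
		struct node *n = malloc(sizeof(*n));

		if (!n)
			break;
		n->payload = i;
		defer_free(n);
	}
	sleep(1);	/* let the reaper drain before the process exits */
	return 0;
}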
@@ -70,7 +133,7 @@ event_triggers_call(struct trace_event_file *file,
 		if (data->paused)
 			continue;
 		if (!rec) {
-			data->ops->trigger(data, buffer, rec, event);
+			data_ops_trigger(data, buffer, rec, event);
 			continue;
 		}
 		filter = rcu_dereference_sched(data->filter);
@@ -80,7 +143,7 @@ event_triggers_call(struct trace_event_file *file,
 			tt |= data->cmd_ops->trigger_type;
 			continue;
 		}
-		data->ops->trigger(data, buffer, rec, event);
+		data_ops_trigger(data, buffer, rec, event);
 	}
 	return tt;
 }
@@ -122,7 +185,7 @@ event_triggers_post_call(struct trace_event_file *file,
 		if (data->paused)
 			continue;
 		if (data->cmd_ops->trigger_type & tt)
-			data->ops->trigger(data, NULL, NULL, NULL);
+			data_ops_trigger(data, NULL, NULL, NULL);
 	}
 }
 EXPORT_SYMBOL_GPL(event_triggers_post_call);
@@ -191,7 +254,7 @@ static int trigger_show(struct seq_file *m, void *v)
 	}

 	data = list_entry(v, struct event_trigger_data, list);
-	data->ops->print(m, data);
+	data->cmd_ops->print(m, data);

 	return 0;
 }
@ -245,7 +308,8 @@ int trigger_process_regex(struct trace_event_file *file, char *buff)
|
|||
char *command, *next;
|
||||
struct event_command *p;
|
||||
|
||||
next = buff = skip_spaces(buff);
|
||||
next = buff = strim(buff);
|
||||
|
||||
command = strsep(&next, ": \t");
|
||||
if (next) {
|
||||
next = skip_spaces(next);
|
||||
|
|
@ -282,8 +346,6 @@ static ssize_t event_trigger_regex_write(struct file *file,
|
|||
if (IS_ERR(buf))
|
||||
return PTR_ERR(buf);
|
||||
|
||||
strim(buf);
|
||||
|
||||
guard(mutex)(&event_mutex);
|
||||
|
||||
event_file = event_file_file(file);
|
||||
|
|
@ -300,13 +362,9 @@ static ssize_t event_trigger_regex_write(struct file *file,
|
|||
|
||||
static int event_trigger_regex_release(struct inode *inode, struct file *file)
|
||||
{
|
||||
mutex_lock(&event_mutex);
|
||||
|
||||
if (file->f_mode & FMODE_READ)
|
||||
seq_release(inode, file);
|
||||
|
||||
mutex_unlock(&event_mutex);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
|
@@ -378,7 +436,37 @@ __init int unregister_event_command(struct event_command *cmd)
 }

 /**
- * event_trigger_print - Generic event_trigger_ops @print implementation
+ * event_trigger_count - Optional count function for event triggers
+ * @data: Trigger-specific data
+ * @buffer: The ring buffer that the event is being written to
+ * @rec: The trace entry for the event, NULL for unconditional invocation
+ * @event: The event meta data in the ring buffer
+ *
+ * For triggers that can take a count parameter that doesn't do anything
+ * special, they can use this function to assign to their .count_func
+ * field.
+ *
+ * This simply does a count down of the @data->count field.
+ *
+ * If the @data->count is greater than zero, it will decrement it.
+ *
+ * Returns false if @data->count is zero, otherwise true.
+ */
+bool event_trigger_count(struct event_trigger_data *data,
+			 struct trace_buffer *buffer, void *rec,
+			 struct ring_buffer_event *event)
+{
+	if (!data->count)
+		return false;
+
+	if (data->count != -1)
+		(data->count)--;
+
+	return true;
+}
+
+/**
+ * event_trigger_print - Generic event_command @print implementation
  * @name: The name of the event trigger
  * @m: The seq_file being printed to
  * @data: Trigger-specific data
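The count semantics documented above (-1 means no limit, 0 means the budget is spent) can be seen in isolation with a small standalone sketch; the struct and function names here are made up for the example:

#include <stdbool.h>
#include <stdio.h>

struct trigger {
	long count;	/* -1: unlimited, 0: used up, >0: remaining shots */
};

/* Mirrors the count-down gate: returns false once the budget is spent. */
static bool count_gate(struct trigger *t)
{
	if (!t->count)
		return false;
	if (t->count != -1)
		t->count--;
	return true;
}

int main(void)
{
	struct trigger limited = { .count = 2 };
	struct trigger unlimited = { .count = -1 };

	for (int i = 0; i < 4; i++)
		printf("limited shot %d: %s\n", i, count_gate(&limited) ? "fire" : "skip");
	for (int i = 0; i < 4; i++)
		printf("unlimited shot %d: %s\n", i, count_gate(&unlimited) ? "fire" : "skip");
	return 0;
}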
@@ -413,7 +501,7 @@ event_trigger_print(const char *name, struct seq_file *m,
 }

 /**
- * event_trigger_init - Generic event_trigger_ops @init implementation
+ * event_trigger_init - Generic event_command @init implementation
  * @data: Trigger-specific data
  *
  * Common implementation of event trigger initialization.
@@ -430,7 +518,7 @@ int event_trigger_init(struct event_trigger_data *data)
 }

 /**
- * event_trigger_free - Generic event_trigger_ops @free implementation
+ * event_trigger_free - Generic event_command @free implementation
  * @data: Trigger-specific data
  *
  * Common implementation of event trigger de-initialization.
@@ -492,8 +580,8 @@ clear_event_triggers(struct trace_array *tr)
 		list_for_each_entry_safe(data, n, &file->triggers, list) {
 			trace_event_trigger_enable_disable(file, 0);
 			list_del_rcu(&data->list);
-			if (data->ops->free)
-				data->ops->free(data);
+			if (data->cmd_ops->free)
+				data->cmd_ops->free(data);
 		}
 	}
 }
@@ -556,8 +644,8 @@ static int register_trigger(char *glob,
 			return -EEXIST;
 	}

-	if (data->ops->init) {
-		ret = data->ops->init(data);
+	if (data->cmd_ops->init) {
+		ret = data->cmd_ops->init(data);
 		if (ret < 0)
 			return ret;
 	}
@@ -595,8 +683,8 @@ static bool try_unregister_trigger(char *glob,
 	}

 	if (data) {
-		if (data->ops->free)
-			data->ops->free(data);
+		if (data->cmd_ops->free)
+			data->cmd_ops->free(data);

 		return true;
 	}
@ -807,9 +895,13 @@ int event_trigger_separate_filter(char *param_and_filter, char **param,
|
|||
* @private_data: User data to associate with the event trigger
|
||||
*
|
||||
* Allocate an event_trigger_data instance and initialize it. The
|
||||
* @cmd_ops are used along with the @cmd and @param to get the
|
||||
* trigger_ops to assign to the event_trigger_data. @private_data can
|
||||
* also be passed in and associated with the event_trigger_data.
|
||||
* @cmd_ops defines how the trigger will operate. If @param is set,
|
||||
* and @cmd_ops->trigger_ops->count_func is non NULL, then the
|
||||
* data->count is set to @param and before the trigger is executed, the
|
||||
* @cmd_ops->trigger_ops->count_func() is called. If that function returns
|
||||
* false, the @cmd_ops->trigger_ops->trigger() function will not be called.
|
||||
* @private_data can also be passed in and associated with the
|
||||
* event_trigger_data.
|
||||
*
|
||||
* Use trigger_data_free() to free an event_trigger_data object.
|
||||
*
|
||||
|
|
@@ -821,18 +913,16 @@ struct event_trigger_data *trigger_data_alloc(struct event_command *cmd_ops,
 					      void *private_data)
 {
 	struct event_trigger_data *trigger_data;
-	const struct event_trigger_ops *trigger_ops;
-
-	trigger_ops = cmd_ops->get_trigger_ops(cmd, param);

 	trigger_data = kzalloc(sizeof(*trigger_data), GFP_KERNEL);
 	if (!trigger_data)
 		return NULL;

 	trigger_data->count = -1;
-	trigger_data->ops = trigger_ops;
 	trigger_data->cmd_ops = cmd_ops;
 	trigger_data->private_data = private_data;
+	if (param && cmd_ops->count_func)
+		trigger_data->flags |= EVENT_TRIGGER_FL_COUNT;

 	INIT_LIST_HEAD(&trigger_data->list);
 	INIT_LIST_HEAD(&trigger_data->named_list);
@ -1271,31 +1361,28 @@ traceon_trigger(struct event_trigger_data *data,
|
|||
tracing_on();
|
||||
}
|
||||
|
||||
static void
|
||||
traceon_count_trigger(struct event_trigger_data *data,
|
||||
struct trace_buffer *buffer, void *rec,
|
||||
struct ring_buffer_event *event)
|
||||
static bool
|
||||
traceon_count_func(struct event_trigger_data *data,
|
||||
struct trace_buffer *buffer, void *rec,
|
||||
struct ring_buffer_event *event)
|
||||
{
|
||||
struct trace_event_file *file = data->private_data;
|
||||
|
||||
if (file) {
|
||||
if (tracer_tracing_is_on(file->tr))
|
||||
return;
|
||||
return false;
|
||||
} else {
|
||||
if (tracing_is_on())
|
||||
return;
|
||||
return false;
|
||||
}
|
||||
|
||||
if (!data->count)
|
||||
return;
|
||||
return false;
|
||||
|
||||
if (data->count != -1)
|
||||
(data->count)--;
|
||||
|
||||
if (file)
|
||||
tracer_tracing_on(file->tr);
|
||||
else
|
||||
tracing_on();
|
||||
return true;
|
||||
}
|
||||
|
||||
static void
|
||||
|
|
@ -1319,31 +1406,28 @@ traceoff_trigger(struct event_trigger_data *data,
|
|||
tracing_off();
|
||||
}
|
||||
|
||||
static void
|
||||
traceoff_count_trigger(struct event_trigger_data *data,
|
||||
struct trace_buffer *buffer, void *rec,
|
||||
struct ring_buffer_event *event)
|
||||
static bool
|
||||
traceoff_count_func(struct event_trigger_data *data,
|
||||
struct trace_buffer *buffer, void *rec,
|
||||
struct ring_buffer_event *event)
|
||||
{
|
||||
struct trace_event_file *file = data->private_data;
|
||||
|
||||
if (file) {
|
||||
if (!tracer_tracing_is_on(file->tr))
|
||||
return;
|
||||
return false;
|
||||
} else {
|
||||
if (!tracing_is_on())
|
||||
return;
|
||||
return false;
|
||||
}
|
||||
|
||||
if (!data->count)
|
||||
return;
|
||||
return false;
|
||||
|
||||
if (data->count != -1)
|
||||
(data->count)--;
|
||||
|
||||
if (file)
|
||||
tracer_tracing_off(file->tr);
|
||||
else
|
||||
tracing_off();
|
||||
return true;
|
||||
}
|
||||
|
||||
static int
|
||||
|
|
@ -1360,58 +1444,18 @@ traceoff_trigger_print(struct seq_file *m, struct event_trigger_data *data)
|
|||
data->filter_str);
|
||||
}
|
||||
|
||||
static const struct event_trigger_ops traceon_trigger_ops = {
|
||||
.trigger = traceon_trigger,
|
||||
.print = traceon_trigger_print,
|
||||
.init = event_trigger_init,
|
||||
.free = event_trigger_free,
|
||||
};
|
||||
|
||||
static const struct event_trigger_ops traceon_count_trigger_ops = {
|
||||
.trigger = traceon_count_trigger,
|
||||
.print = traceon_trigger_print,
|
||||
.init = event_trigger_init,
|
||||
.free = event_trigger_free,
|
||||
};
|
||||
|
||||
static const struct event_trigger_ops traceoff_trigger_ops = {
|
||||
.trigger = traceoff_trigger,
|
||||
.print = traceoff_trigger_print,
|
||||
.init = event_trigger_init,
|
||||
.free = event_trigger_free,
|
||||
};
|
||||
|
||||
static const struct event_trigger_ops traceoff_count_trigger_ops = {
|
||||
.trigger = traceoff_count_trigger,
|
||||
.print = traceoff_trigger_print,
|
||||
.init = event_trigger_init,
|
||||
.free = event_trigger_free,
|
||||
};
|
||||
|
||||
static const struct event_trigger_ops *
|
||||
onoff_get_trigger_ops(char *cmd, char *param)
|
||||
{
|
||||
const struct event_trigger_ops *ops;
|
||||
|
||||
/* we register both traceon and traceoff to this callback */
|
||||
if (strcmp(cmd, "traceon") == 0)
|
||||
ops = param ? &traceon_count_trigger_ops :
|
||||
&traceon_trigger_ops;
|
||||
else
|
||||
ops = param ? &traceoff_count_trigger_ops :
|
||||
&traceoff_trigger_ops;
|
||||
|
||||
return ops;
|
||||
}
|
||||
|
||||
static struct event_command trigger_traceon_cmd = {
|
||||
.name = "traceon",
|
||||
.trigger_type = ETT_TRACE_ONOFF,
|
||||
.parse = event_trigger_parse,
|
||||
.reg = register_trigger,
|
||||
.unreg = unregister_trigger,
|
||||
.get_trigger_ops = onoff_get_trigger_ops,
|
||||
.set_filter = set_trigger_filter,
|
||||
.trigger = traceon_trigger,
|
||||
.count_func = traceon_count_func,
|
||||
.print = traceon_trigger_print,
|
||||
.init = event_trigger_init,
|
||||
.free = event_trigger_free,
|
||||
};
|
||||
|
||||
static struct event_command trigger_traceoff_cmd = {
|
||||
|
|
@ -1421,8 +1465,12 @@ static struct event_command trigger_traceoff_cmd = {
|
|||
.parse = event_trigger_parse,
|
||||
.reg = register_trigger,
|
||||
.unreg = unregister_trigger,
|
||||
.get_trigger_ops = onoff_get_trigger_ops,
|
||||
.set_filter = set_trigger_filter,
|
||||
.trigger = traceoff_trigger,
|
||||
.count_func = traceoff_count_func,
|
||||
.print = traceoff_trigger_print,
|
||||
.init = event_trigger_init,
|
||||
.free = event_trigger_free,
|
||||
};
|
||||
|
||||
#ifdef CONFIG_TRACER_SNAPSHOT
|
||||
|
|
@ -1439,20 +1487,6 @@ snapshot_trigger(struct event_trigger_data *data,
|
|||
tracing_snapshot();
|
||||
}
|
||||
|
||||
static void
|
||||
snapshot_count_trigger(struct event_trigger_data *data,
|
||||
struct trace_buffer *buffer, void *rec,
|
||||
struct ring_buffer_event *event)
|
||||
{
|
||||
if (!data->count)
|
||||
return;
|
||||
|
||||
if (data->count != -1)
|
||||
(data->count)--;
|
||||
|
||||
snapshot_trigger(data, buffer, rec, event);
|
||||
}
|
||||
|
||||
static int
|
||||
register_snapshot_trigger(char *glob,
|
||||
struct event_trigger_data *data,
|
||||
|
|
@ -1484,34 +1518,18 @@ snapshot_trigger_print(struct seq_file *m, struct event_trigger_data *data)
|
|||
data->filter_str);
|
||||
}
|
||||
|
||||
static const struct event_trigger_ops snapshot_trigger_ops = {
|
||||
.trigger = snapshot_trigger,
|
||||
.print = snapshot_trigger_print,
|
||||
.init = event_trigger_init,
|
||||
.free = event_trigger_free,
|
||||
};
|
||||
|
||||
static const struct event_trigger_ops snapshot_count_trigger_ops = {
|
||||
.trigger = snapshot_count_trigger,
|
||||
.print = snapshot_trigger_print,
|
||||
.init = event_trigger_init,
|
||||
.free = event_trigger_free,
|
||||
};
|
||||
|
||||
static const struct event_trigger_ops *
|
||||
snapshot_get_trigger_ops(char *cmd, char *param)
|
||||
{
|
||||
return param ? &snapshot_count_trigger_ops : &snapshot_trigger_ops;
|
||||
}
|
||||
|
||||
static struct event_command trigger_snapshot_cmd = {
|
||||
.name = "snapshot",
|
||||
.trigger_type = ETT_SNAPSHOT,
|
||||
.parse = event_trigger_parse,
|
||||
.reg = register_snapshot_trigger,
|
||||
.unreg = unregister_snapshot_trigger,
|
||||
.get_trigger_ops = snapshot_get_trigger_ops,
|
||||
.set_filter = set_trigger_filter,
|
||||
.trigger = snapshot_trigger,
|
||||
.count_func = event_trigger_count,
|
||||
.print = snapshot_trigger_print,
|
||||
.init = event_trigger_init,
|
||||
.free = event_trigger_free,
|
||||
};
|
||||
|
||||
static __init int register_trigger_snapshot_cmd(void)
|
||||
|
|
@ -1558,20 +1576,6 @@ stacktrace_trigger(struct event_trigger_data *data,
|
|||
trace_dump_stack(STACK_SKIP);
|
||||
}
|
||||
|
||||
static void
|
||||
stacktrace_count_trigger(struct event_trigger_data *data,
|
||||
struct trace_buffer *buffer, void *rec,
|
||||
struct ring_buffer_event *event)
|
||||
{
|
||||
if (!data->count)
|
||||
return;
|
||||
|
||||
if (data->count != -1)
|
||||
(data->count)--;
|
||||
|
||||
stacktrace_trigger(data, buffer, rec, event);
|
||||
}
|
||||
|
||||
static int
|
||||
stacktrace_trigger_print(struct seq_file *m, struct event_trigger_data *data)
|
||||
{
|
||||
|
|
@ -1579,26 +1583,6 @@ stacktrace_trigger_print(struct seq_file *m, struct event_trigger_data *data)
|
|||
data->filter_str);
|
||||
}
|
||||
|
||||
static const struct event_trigger_ops stacktrace_trigger_ops = {
|
||||
.trigger = stacktrace_trigger,
|
||||
.print = stacktrace_trigger_print,
|
||||
.init = event_trigger_init,
|
||||
.free = event_trigger_free,
|
||||
};
|
||||
|
||||
static const struct event_trigger_ops stacktrace_count_trigger_ops = {
|
||||
.trigger = stacktrace_count_trigger,
|
||||
.print = stacktrace_trigger_print,
|
||||
.init = event_trigger_init,
|
||||
.free = event_trigger_free,
|
||||
};
|
||||
|
||||
static const struct event_trigger_ops *
|
||||
stacktrace_get_trigger_ops(char *cmd, char *param)
|
||||
{
|
||||
return param ? &stacktrace_count_trigger_ops : &stacktrace_trigger_ops;
|
||||
}
|
||||
|
||||
static struct event_command trigger_stacktrace_cmd = {
|
||||
.name = "stacktrace",
|
||||
.trigger_type = ETT_STACKTRACE,
|
||||
|
|
@ -1606,8 +1590,12 @@ static struct event_command trigger_stacktrace_cmd = {
|
|||
.parse = event_trigger_parse,
|
||||
.reg = register_trigger,
|
||||
.unreg = unregister_trigger,
|
||||
.get_trigger_ops = stacktrace_get_trigger_ops,
|
||||
.set_filter = set_trigger_filter,
|
||||
.trigger = stacktrace_trigger,
|
||||
.count_func = event_trigger_count,
|
||||
.print = stacktrace_trigger_print,
|
||||
.init = event_trigger_init,
|
||||
.free = event_trigger_free,
|
||||
};
|
||||
|
||||
static __init int register_trigger_stacktrace_cmd(void)
|
||||
|
|
@ -1642,24 +1630,24 @@ event_enable_trigger(struct event_trigger_data *data,
|
|||
set_bit(EVENT_FILE_FL_SOFT_DISABLED_BIT, &enable_data->file->flags);
|
||||
}
|
||||
|
||||
static void
|
||||
event_enable_count_trigger(struct event_trigger_data *data,
|
||||
struct trace_buffer *buffer, void *rec,
|
||||
struct ring_buffer_event *event)
|
||||
static bool
|
||||
event_enable_count_func(struct event_trigger_data *data,
|
||||
struct trace_buffer *buffer, void *rec,
|
||||
struct ring_buffer_event *event)
|
||||
{
|
||||
struct enable_trigger_data *enable_data = data->private_data;
|
||||
|
||||
if (!data->count)
|
||||
return;
|
||||
return false;
|
||||
|
||||
/* Skip if the event is in a state we want to switch to */
|
||||
if (enable_data->enable == !(enable_data->file->flags & EVENT_FILE_FL_SOFT_DISABLED))
|
||||
return;
|
||||
return false;
|
||||
|
||||
if (data->count != -1)
|
||||
(data->count)--;
|
||||
|
||||
event_enable_trigger(data, buffer, rec, event);
|
||||
return true;
|
||||
}
|
||||
|
||||
int event_enable_trigger_print(struct seq_file *m,
|
||||
|
|
@ -1704,34 +1692,6 @@ void event_enable_trigger_free(struct event_trigger_data *data)
|
|||
}
|
||||
}
|
||||
|
||||
static const struct event_trigger_ops event_enable_trigger_ops = {
|
||||
.trigger = event_enable_trigger,
|
||||
.print = event_enable_trigger_print,
|
||||
.init = event_trigger_init,
|
||||
.free = event_enable_trigger_free,
|
||||
};
|
||||
|
||||
static const struct event_trigger_ops event_enable_count_trigger_ops = {
|
||||
.trigger = event_enable_count_trigger,
|
||||
.print = event_enable_trigger_print,
|
||||
.init = event_trigger_init,
|
||||
.free = event_enable_trigger_free,
|
||||
};
|
||||
|
||||
static const struct event_trigger_ops event_disable_trigger_ops = {
|
||||
.trigger = event_enable_trigger,
|
||||
.print = event_enable_trigger_print,
|
||||
.init = event_trigger_init,
|
||||
.free = event_enable_trigger_free,
|
||||
};
|
||||
|
||||
static const struct event_trigger_ops event_disable_count_trigger_ops = {
|
||||
.trigger = event_enable_count_trigger,
|
||||
.print = event_enable_trigger_print,
|
||||
.init = event_trigger_init,
|
||||
.free = event_enable_trigger_free,
|
||||
};
|
||||
|
||||
int event_enable_trigger_parse(struct event_command *cmd_ops,
|
||||
struct trace_event_file *file,
|
||||
char *glob, char *cmd, char *param_and_filter)
|
||||
|
|
@ -1861,8 +1821,8 @@ int event_enable_register_trigger(char *glob,
|
|||
}
|
||||
}
|
||||
|
||||
if (data->ops->init) {
|
||||
ret = data->ops->init(data);
|
||||
if (data->cmd_ops->init) {
|
||||
ret = data->cmd_ops->init(data);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
}
|
||||
|
|
@ -1902,30 +1862,8 @@ void event_enable_unregister_trigger(char *glob,
|
|||
}
|
||||
}
|
||||
|
||||
if (data && data->ops->free)
|
||||
data->ops->free(data);
|
||||
}
|
||||
|
||||
static const struct event_trigger_ops *
|
||||
event_enable_get_trigger_ops(char *cmd, char *param)
|
||||
{
|
||||
const struct event_trigger_ops *ops;
|
||||
bool enable;
|
||||
|
||||
#ifdef CONFIG_HIST_TRIGGERS
|
||||
enable = ((strcmp(cmd, ENABLE_EVENT_STR) == 0) ||
|
||||
(strcmp(cmd, ENABLE_HIST_STR) == 0));
|
||||
#else
|
||||
enable = strcmp(cmd, ENABLE_EVENT_STR) == 0;
|
||||
#endif
|
||||
if (enable)
|
||||
ops = param ? &event_enable_count_trigger_ops :
|
||||
&event_enable_trigger_ops;
|
||||
else
|
||||
ops = param ? &event_disable_count_trigger_ops :
|
||||
&event_disable_trigger_ops;
|
||||
|
||||
return ops;
|
||||
if (data && data->cmd_ops->free)
|
||||
data->cmd_ops->free(data);
|
||||
}
|
||||
|
||||
static struct event_command trigger_enable_cmd = {
|
||||
|
|
@ -1934,8 +1872,12 @@ static struct event_command trigger_enable_cmd = {
|
|||
.parse = event_enable_trigger_parse,
|
||||
.reg = event_enable_register_trigger,
|
||||
.unreg = event_enable_unregister_trigger,
|
||||
.get_trigger_ops = event_enable_get_trigger_ops,
|
||||
.set_filter = set_trigger_filter,
|
||||
.trigger = event_enable_trigger,
|
||||
.count_func = event_enable_count_func,
|
||||
.print = event_enable_trigger_print,
|
||||
.init = event_trigger_init,
|
||||
.free = event_enable_trigger_free,
|
||||
};
|
||||
|
||||
static struct event_command trigger_disable_cmd = {
|
||||
|
|
@ -1944,8 +1886,12 @@ static struct event_command trigger_disable_cmd = {
|
|||
.parse = event_enable_trigger_parse,
|
||||
.reg = event_enable_register_trigger,
|
||||
.unreg = event_enable_unregister_trigger,
|
||||
.get_trigger_ops = event_enable_get_trigger_ops,
|
||||
.set_filter = set_trigger_filter,
|
||||
.trigger = event_enable_trigger,
|
||||
.count_func = event_enable_count_func,
|
||||
.print = event_enable_trigger_print,
|
||||
.init = event_trigger_init,
|
||||
.free = event_enable_trigger_free,
|
||||
};
|
||||
|
||||
static __init void unregister_trigger_enable_disable_cmds(void)
|
||||
|
|
|
|||
|
|
@ -632,7 +632,7 @@ print_fentry_event(struct trace_iterator *iter, int flags,
|
|||
|
||||
trace_seq_printf(s, "%s: (", trace_probe_name(tp));
|
||||
|
||||
if (!seq_print_ip_sym(s, field->ip, flags | TRACE_ITER_SYM_OFFSET))
|
||||
if (!seq_print_ip_sym_offset(s, field->ip, flags))
|
||||
goto out;
|
||||
|
||||
trace_seq_putc(s, ')');
|
||||
|
|
@ -662,12 +662,12 @@ print_fexit_event(struct trace_iterator *iter, int flags,
|
|||
|
||||
trace_seq_printf(s, "%s: (", trace_probe_name(tp));
|
||||
|
||||
if (!seq_print_ip_sym(s, field->ret_ip, flags | TRACE_ITER_SYM_OFFSET))
|
||||
if (!seq_print_ip_sym_offset(s, field->ret_ip, flags))
|
||||
goto out;
|
||||
|
||||
trace_seq_puts(s, " <- ");
|
||||
|
||||
if (!seq_print_ip_sym(s, field->func, flags & ~TRACE_ITER_SYM_OFFSET))
|
||||
if (!seq_print_ip_sym_no_offset(s, field->func, flags))
|
||||
goto out;
|
||||
|
||||
trace_seq_putc(s, ')');
|
||||
|
|
|
|||
|
|
@ -154,11 +154,11 @@ static int function_trace_init(struct trace_array *tr)
|
|||
if (!tr->ops)
|
||||
return -ENOMEM;
|
||||
|
||||
func = select_trace_function(func_flags.val);
|
||||
func = select_trace_function(tr->current_trace_flags->val);
|
||||
if (!func)
|
||||
return -EINVAL;
|
||||
|
||||
if (!handle_func_repeats(tr, func_flags.val))
|
||||
if (!handle_func_repeats(tr, tr->current_trace_flags->val))
|
||||
return -ENOMEM;
|
||||
|
||||
ftrace_init_array_ops(tr, func);
|
||||
|
|
@ -459,14 +459,14 @@ func_set_flag(struct trace_array *tr, u32 old_flags, u32 bit, int set)
|
|||
u32 new_flags;
|
||||
|
||||
/* Do nothing if already set. */
|
||||
if (!!set == !!(func_flags.val & bit))
|
||||
if (!!set == !!(tr->current_trace_flags->val & bit))
|
||||
return 0;
|
||||
|
||||
/* We can change this flag only when not running. */
|
||||
if (tr->current_trace != &function_trace)
|
||||
return 0;
|
||||
|
||||
new_flags = (func_flags.val & ~bit) | (set ? bit : 0);
|
||||
new_flags = (tr->current_trace_flags->val & ~bit) | (set ? bit : 0);
|
||||
func = select_trace_function(new_flags);
|
||||
if (!func)
|
||||
return -EINVAL;
|
||||
|
|
@ -491,7 +491,7 @@ static struct tracer function_trace __tracer_data =
|
|||
.init = function_trace_init,
|
||||
.reset = function_trace_reset,
|
||||
.start = function_trace_start,
|
||||
.flags = &func_flags,
|
||||
.default_flags = &func_flags,
|
||||
.set_flag = func_set_flag,
|
||||
.allow_instances = true,
|
||||
#ifdef CONFIG_FTRACE_SELFTEST
|
||||
|
|
|
|||
|
|
@ -16,9 +16,12 @@
|
|||
#include "trace.h"
|
||||
#include "trace_output.h"
|
||||
|
||||
/* When set, irq functions will be ignored */
|
||||
/* When set, irq functions might be ignored */
|
||||
static int ftrace_graph_skip_irqs;
|
||||
|
||||
/* Do not record function time when task is sleeping */
|
||||
int fgraph_no_sleep_time;
|
||||
|
||||
struct fgraph_cpu_data {
|
||||
pid_t last_pid;
|
||||
int depth;
|
||||
|
|
@ -33,14 +36,19 @@ struct fgraph_ent_args {
|
|||
unsigned long args[FTRACE_REGS_MAX_ARGS];
|
||||
};
|
||||
|
||||
struct fgraph_retaddr_ent_args {
|
||||
struct fgraph_retaddr_ent_entry ent;
|
||||
/* Force the sizeof of args[] to have FTRACE_REGS_MAX_ARGS entries */
|
||||
unsigned long args[FTRACE_REGS_MAX_ARGS];
|
||||
};
|
||||
|
||||
struct fgraph_data {
|
||||
struct fgraph_cpu_data __percpu *cpu_data;
|
||||
|
||||
/* Place to preserve last processed entry. */
|
||||
union {
|
||||
struct fgraph_ent_args ent;
|
||||
/* TODO allow retaddr to have args */
|
||||
struct fgraph_retaddr_ent_entry rent;
|
||||
struct fgraph_retaddr_ent_args rent;
|
||||
};
|
||||
struct ftrace_graph_ret_entry ret;
|
||||
int failed;
|
||||
|
|
@ -85,11 +93,6 @@ static struct tracer_opt trace_opts[] = {
|
|||
/* Include sleep time (scheduled out) between entry and return */
|
||||
{ TRACER_OPT(sleep-time, TRACE_GRAPH_SLEEP_TIME) },
|
||||
|
||||
#ifdef CONFIG_FUNCTION_PROFILER
|
||||
/* Include time within nested functions */
|
||||
{ TRACER_OPT(graph-time, TRACE_GRAPH_GRAPH_TIME) },
|
||||
#endif
|
||||
|
||||
{ } /* Empty entry */
|
||||
};
|
||||
|
||||
|
|
@@ -97,13 +100,13 @@ static struct tracer_flags tracer_flags = {
 	/* Don't display overruns, proc, or tail by default */
 	.val = TRACE_GRAPH_PRINT_CPU | TRACE_GRAPH_PRINT_OVERHEAD |
 	       TRACE_GRAPH_PRINT_DURATION | TRACE_GRAPH_PRINT_IRQS |
-	       TRACE_GRAPH_SLEEP_TIME | TRACE_GRAPH_GRAPH_TIME,
+	       TRACE_GRAPH_SLEEP_TIME,
 	.opts = trace_opts
 };

-static bool tracer_flags_is_set(u32 flags)
+static bool tracer_flags_is_set(struct trace_array *tr, u32 flags)
 {
-	return (tracer_flags.val & flags) == flags;
+	return (tr->current_trace_flags->val & flags) == flags;
 }

 /*
|
|
@@ -162,20 +165,32 @@ int __trace_graph_entry(struct trace_array *tr,
 int __trace_graph_retaddr_entry(struct trace_array *tr,
 				struct ftrace_graph_ent *trace,
 				unsigned int trace_ctx,
-				unsigned long retaddr)
+				unsigned long retaddr,
+				struct ftrace_regs *fregs)
 {
 	struct ring_buffer_event *event;
 	struct trace_buffer *buffer = tr->array_buffer.buffer;
 	struct fgraph_retaddr_ent_entry *entry;
+	int size;
+
+	/* If fregs is defined, add FTRACE_REGS_MAX_ARGS long size words */
+	size = sizeof(*entry) + (FTRACE_REGS_MAX_ARGS * !!fregs * sizeof(long));

 	event = trace_buffer_lock_reserve(buffer, TRACE_GRAPH_RETADDR_ENT,
-					  sizeof(*entry), trace_ctx);
+					  size, trace_ctx);
 	if (!event)
 		return 0;
 	entry = ring_buffer_event_data(event);
-	entry->graph_ent.func = trace->func;
-	entry->graph_ent.depth = trace->depth;
-	entry->graph_ent.retaddr = retaddr;
+	entry->graph_rent.ent = *trace;
+	entry->graph_rent.retaddr = retaddr;
+
+#ifdef CONFIG_HAVE_FUNCTION_ARG_ACCESS_API
+	if (fregs) {
+		for (int i = 0; i < FTRACE_REGS_MAX_ARGS; i++)
+			entry->args[i] = ftrace_regs_get_argument(fregs, i);
+	}
+#endif

 	trace_buffer_unlock_commit_nostack(buffer, event);

 	return 1;
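The reservation above sizes the ring-buffer event conditionally: the trailing argument words are only included when fregs is non-NULL, via the "!!fregs" multiplier. A tiny standalone demonstration of that sizing trick (made-up sizes, not the kernel structures):

#include <stddef.h>
#include <stdio.h>

#define MAX_ARGS 6

struct ent {
	unsigned long func;
	unsigned long retaddr;
	/* optionally followed by MAX_ARGS unsigned longs of argument data */
};

static size_t event_size(const void *fregs)
{
	/* "!!fregs" is 1 when register state was captured, 0 otherwise */
	return sizeof(struct ent) + MAX_ARGS * !!fregs * sizeof(unsigned long);
}

int main(void)
{
	int regs;	/* stand-in for a real register snapshot */

	printf("without regs: %zu bytes\n", event_size(NULL));
	printf("with regs:    %zu bytes\n", event_size(&regs));
	return 0;
}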
|
|
@ -184,17 +199,21 @@ int __trace_graph_retaddr_entry(struct trace_array *tr,
|
|||
int __trace_graph_retaddr_entry(struct trace_array *tr,
|
||||
struct ftrace_graph_ent *trace,
|
||||
unsigned int trace_ctx,
|
||||
unsigned long retaddr)
|
||||
unsigned long retaddr,
|
||||
struct ftrace_regs *fregs)
|
||||
{
|
||||
return 1;
|
||||
}
|
||||
#endif
|
||||
|
||||
static inline int ftrace_graph_ignore_irqs(void)
|
||||
static inline int ftrace_graph_ignore_irqs(struct trace_array *tr)
|
||||
{
|
||||
if (!ftrace_graph_skip_irqs || trace_recursion_test(TRACE_IRQ_BIT))
|
||||
return 0;
|
||||
|
||||
if (tracer_flags_is_set(tr, TRACE_GRAPH_PRINT_IRQS))
|
||||
return 0;
|
||||
|
||||
return in_hardirq();
|
||||
}
|
||||
|
||||
|
|
@@ -238,16 +257,17 @@ static int graph_entry(struct ftrace_graph_ent *trace,
 	if (ftrace_graph_ignore_func(gops, trace))
 		return 0;

-	if (ftrace_graph_ignore_irqs())
+	if (ftrace_graph_ignore_irqs(tr))
 		return 0;

-	if (fgraph_sleep_time) {
-		/* Only need to record the calltime */
-		ftimes = fgraph_reserve_data(gops->idx, sizeof(ftimes->calltime));
-	} else {
+	if (fgraph_no_sleep_time &&
+	    !tracer_flags_is_set(tr, TRACE_GRAPH_SLEEP_TIME)) {
 		ftimes = fgraph_reserve_data(gops->idx, sizeof(*ftimes));
 		if (ftimes)
 			ftimes->sleeptime = current->ftrace_sleeptime;
+	} else {
+		/* Only need to record the calltime */
+		ftimes = fgraph_reserve_data(gops->idx, sizeof(ftimes->calltime));
 	}
 	if (!ftimes)
 		return 0;
|
|
@ -263,9 +283,10 @@ static int graph_entry(struct ftrace_graph_ent *trace,
|
|||
|
||||
trace_ctx = tracing_gen_ctx();
|
||||
if (IS_ENABLED(CONFIG_FUNCTION_GRAPH_RETADDR) &&
|
||||
tracer_flags_is_set(TRACE_GRAPH_PRINT_RETADDR)) {
|
||||
tracer_flags_is_set(tr, TRACE_GRAPH_PRINT_RETADDR)) {
|
||||
unsigned long retaddr = ftrace_graph_top_ret_addr(current);
|
||||
ret = __trace_graph_retaddr_entry(tr, trace, trace_ctx, retaddr);
|
||||
ret = __trace_graph_retaddr_entry(tr, trace, trace_ctx,
|
||||
retaddr, fregs);
|
||||
} else {
|
||||
ret = __graph_entry(tr, trace, trace_ctx, fregs);
|
||||
}
|
||||
|
|
@ -333,11 +354,15 @@ void __trace_graph_return(struct trace_array *tr,
|
|||
trace_buffer_unlock_commit_nostack(buffer, event);
|
||||
}
|
||||
|
||||
static void handle_nosleeptime(struct ftrace_graph_ret *trace,
|
||||
static void handle_nosleeptime(struct trace_array *tr,
|
||||
struct ftrace_graph_ret *trace,
|
||||
struct fgraph_times *ftimes,
|
||||
int size)
|
||||
{
|
||||
if (fgraph_sleep_time || size < sizeof(*ftimes))
|
||||
if (size < sizeof(*ftimes))
|
||||
return;
|
||||
|
||||
if (!fgraph_no_sleep_time || tracer_flags_is_set(tr, TRACE_GRAPH_SLEEP_TIME))
|
||||
return;
|
||||
|
||||
ftimes->calltime += current->ftrace_sleeptime - ftimes->sleeptime;
|
||||
|
|
@ -366,7 +391,7 @@ void trace_graph_return(struct ftrace_graph_ret *trace,
|
|||
if (!ftimes)
|
||||
return;
|
||||
|
||||
handle_nosleeptime(trace, ftimes, size);
|
||||
handle_nosleeptime(tr, trace, ftimes, size);
|
||||
|
||||
calltime = ftimes->calltime;
|
||||
|
||||
|
|
@ -379,6 +404,7 @@ static void trace_graph_thresh_return(struct ftrace_graph_ret *trace,
|
|||
struct ftrace_regs *fregs)
|
||||
{
|
||||
struct fgraph_times *ftimes;
|
||||
struct trace_array *tr;
|
||||
int size;
|
||||
|
||||
ftrace_graph_addr_finish(gops, trace);
|
||||
|
|
@ -392,7 +418,8 @@ static void trace_graph_thresh_return(struct ftrace_graph_ret *trace,
|
|||
if (!ftimes)
|
||||
return;
|
||||
|
||||
handle_nosleeptime(trace, ftimes, size);
|
||||
tr = gops->private;
|
||||
handle_nosleeptime(tr, trace, ftimes, size);
|
||||
|
||||
if (tracing_thresh &&
|
||||
(trace_clock_local() - ftimes->calltime < tracing_thresh))
|
||||
|
|
@ -441,7 +468,7 @@ static int graph_trace_init(struct trace_array *tr)
|
|||
{
|
||||
int ret;
|
||||
|
||||
if (tracer_flags_is_set(TRACE_GRAPH_ARGS))
|
||||
if (tracer_flags_is_set(tr, TRACE_GRAPH_ARGS))
|
||||
tr->gops->entryfunc = trace_graph_entry_args;
|
||||
else
|
||||
tr->gops->entryfunc = trace_graph_entry;
|
||||
|
|
@@ -451,6 +478,12 @@ static int graph_trace_init(struct trace_array *tr)
 	else
 		tr->gops->retfunc = trace_graph_return;

+	if (!tracer_flags_is_set(tr, TRACE_GRAPH_PRINT_IRQS))
+		ftrace_graph_skip_irqs++;
+
+	if (!tracer_flags_is_set(tr, TRACE_GRAPH_SLEEP_TIME))
+		fgraph_no_sleep_time++;
+
 	/* Make gops functions visible before we start tracing */
 	smp_mb();
|
|
@ -468,10 +501,6 @@ static int ftrace_graph_trace_args(struct trace_array *tr, int set)
|
|||
{
|
||||
trace_func_graph_ent_t entry;
|
||||
|
||||
/* Do nothing if the current tracer is not this tracer */
|
||||
if (tr->current_trace != &graph_trace)
|
||||
return 0;
|
||||
|
||||
if (set)
|
||||
entry = trace_graph_entry_args;
|
||||
else
|
||||
|
|
@@ -492,6 +521,16 @@ static int ftrace_graph_trace_args(struct trace_array *tr, int set)

 static void graph_trace_reset(struct trace_array *tr)
 {
+	if (!tracer_flags_is_set(tr, TRACE_GRAPH_PRINT_IRQS))
+		ftrace_graph_skip_irqs--;
+	if (WARN_ON_ONCE(ftrace_graph_skip_irqs < 0))
+		ftrace_graph_skip_irqs = 0;
+
+	if (!tracer_flags_is_set(tr, TRACE_GRAPH_SLEEP_TIME))
+		fgraph_no_sleep_time--;
+	if (WARN_ON_ONCE(fgraph_no_sleep_time < 0))
+		fgraph_no_sleep_time = 0;
+
 	tracing_stop_cmdline_record();
 	unregister_ftrace_graph(tr->gops);
 }
|
|
@ -634,13 +673,9 @@ get_return_for_leaf(struct trace_iterator *iter,
|
|||
* Save current and next entries for later reference
|
||||
* if the output fails.
|
||||
*/
|
||||
if (unlikely(curr->ent.type == TRACE_GRAPH_RETADDR_ENT)) {
|
||||
data->rent = *(struct fgraph_retaddr_ent_entry *)curr;
|
||||
} else {
|
||||
int size = min((int)sizeof(data->ent), (int)iter->ent_size);
|
||||
int size = min_t(int, sizeof(data->rent), iter->ent_size);
|
||||
|
||||
memcpy(&data->ent, curr, size);
|
||||
}
|
||||
memcpy(&data->rent, curr, size);
|
||||
/*
|
||||
* If the next event is not a return type, then
|
||||
* we only care about what type it is. Otherwise we can
|
||||
|
|
@ -703,7 +738,7 @@ print_graph_irq(struct trace_iterator *iter, unsigned long addr,
|
|||
addr >= (unsigned long)__irqentry_text_end)
|
||||
return;
|
||||
|
||||
if (tr->trace_flags & TRACE_ITER_CONTEXT_INFO) {
|
||||
if (tr->trace_flags & TRACE_ITER(CONTEXT_INFO)) {
|
||||
/* Absolute time */
|
||||
if (flags & TRACE_GRAPH_PRINT_ABS_TIME)
|
||||
print_graph_abs_time(iter->ts, s);
|
||||
|
|
@ -723,7 +758,7 @@ print_graph_irq(struct trace_iterator *iter, unsigned long addr,
|
|||
}
|
||||
|
||||
/* Latency format */
|
||||
if (tr->trace_flags & TRACE_ITER_LATENCY_FMT)
|
||||
if (tr->trace_flags & TRACE_ITER(LATENCY_FMT))
|
||||
print_graph_lat_fmt(s, ent);
|
||||
}
|
||||
|
||||
|
|
@ -777,7 +812,7 @@ print_graph_duration(struct trace_array *tr, unsigned long long duration,
|
|||
struct trace_seq *s, u32 flags)
|
||||
{
|
||||
if (!(flags & TRACE_GRAPH_PRINT_DURATION) ||
|
||||
!(tr->trace_flags & TRACE_ITER_CONTEXT_INFO))
|
||||
!(tr->trace_flags & TRACE_ITER(CONTEXT_INFO)))
|
||||
return;
|
||||
|
||||
/* No real adata, just filling the column with spaces */
|
||||
|
|
@ -818,7 +853,7 @@ static void print_graph_retaddr(struct trace_seq *s, struct fgraph_retaddr_ent_e
|
|||
trace_seq_puts(s, " /*");
|
||||
|
||||
trace_seq_puts(s, " <-");
|
||||
seq_print_ip_sym(s, entry->graph_ent.retaddr, trace_flags | TRACE_ITER_SYM_OFFSET);
|
||||
seq_print_ip_sym_offset(s, entry->graph_rent.retaddr, trace_flags);
|
||||
|
||||
if (comment)
|
||||
trace_seq_puts(s, " */");
|
||||
|
|
@ -964,7 +999,7 @@ print_graph_entry_leaf(struct trace_iterator *iter,
|
|||
trace_seq_printf(s, "%ps", (void *)ret_func);
|
||||
|
||||
if (args_size >= FTRACE_REGS_MAX_ARGS * sizeof(long)) {
|
||||
print_function_args(s, entry->args, ret_func);
|
||||
print_function_args(s, FGRAPH_ENTRY_ARGS(entry), ret_func);
|
||||
trace_seq_putc(s, ';');
|
||||
} else
|
||||
trace_seq_puts(s, "();");
|
||||
|
|
@ -1016,7 +1051,7 @@ print_graph_entry_nested(struct trace_iterator *iter,
|
|||
args_size = iter->ent_size - offsetof(struct ftrace_graph_ent_entry, args);
|
||||
|
||||
if (args_size >= FTRACE_REGS_MAX_ARGS * sizeof(long))
|
||||
print_function_args(s, entry->args, func);
|
||||
print_function_args(s, FGRAPH_ENTRY_ARGS(entry), func);
|
||||
else
|
||||
trace_seq_puts(s, "()");
|
||||
|
||||
|
|
@ -1054,7 +1089,7 @@ print_graph_prologue(struct trace_iterator *iter, struct trace_seq *s,
|
|||
/* Interrupt */
|
||||
print_graph_irq(iter, addr, type, cpu, ent->pid, flags);
|
||||
|
||||
if (!(tr->trace_flags & TRACE_ITER_CONTEXT_INFO))
|
||||
if (!(tr->trace_flags & TRACE_ITER(CONTEXT_INFO)))
|
||||
return;
|
||||
|
||||
/* Absolute time */
|
||||
|
|
@ -1076,7 +1111,7 @@ print_graph_prologue(struct trace_iterator *iter, struct trace_seq *s,
|
|||
}
|
||||
|
||||
/* Latency format */
|
||||
if (tr->trace_flags & TRACE_ITER_LATENCY_FMT)
|
||||
if (tr->trace_flags & TRACE_ITER(LATENCY_FMT))
|
||||
print_graph_lat_fmt(s, ent);
|
||||
|
||||
return;
|
||||
|
|
@ -1198,11 +1233,14 @@ print_graph_entry(struct ftrace_graph_ent_entry *field, struct trace_seq *s,
|
|||
/*
|
||||
* print_graph_entry() may consume the current event,
|
||||
* thus @field may become invalid, so we need to save it.
|
||||
* sizeof(struct ftrace_graph_ent_entry) is very small,
|
||||
* it can be safely saved at the stack.
|
||||
* This function is shared by ftrace_graph_ent_entry and
|
||||
* fgraph_retaddr_ent_entry, the size of the latter one
|
||||
* is larger, but it is very small and can be safely saved
|
||||
* at the stack.
|
||||
*/
|
||||
struct ftrace_graph_ent_entry *entry;
|
||||
u8 save_buf[sizeof(*entry) + FTRACE_REGS_MAX_ARGS * sizeof(long)];
|
||||
struct fgraph_retaddr_ent_entry *rentry;
|
||||
u8 save_buf[sizeof(*rentry) + FTRACE_REGS_MAX_ARGS * sizeof(long)];
|
||||
|
||||
/* The ent_size is expected to be as big as the entry */
|
||||
if (iter->ent_size > sizeof(save_buf))
|
||||
|
|
@ -1431,12 +1469,17 @@ print_graph_function_flags(struct trace_iterator *iter, u32 flags)
|
|||
}
|
||||
#ifdef CONFIG_FUNCTION_GRAPH_RETADDR
|
||||
case TRACE_GRAPH_RETADDR_ENT: {
|
||||
struct fgraph_retaddr_ent_entry saved;
|
||||
/*
|
||||
* ftrace_graph_ent_entry and fgraph_retaddr_ent_entry have
|
||||
* similar functions and memory layouts. The only difference
|
||||
* is that the latter one has an extra retaddr member, so
|
||||
* they can share most of the logic.
|
||||
*/
|
||||
struct fgraph_retaddr_ent_entry *rfield;
|
||||
|
||||
trace_assign_type(rfield, entry);
|
||||
saved = *rfield;
|
||||
return print_graph_entry((struct ftrace_graph_ent_entry *)&saved, s, iter, flags);
|
||||
return print_graph_entry((struct ftrace_graph_ent_entry *)rfield,
|
||||
s, iter, flags);
|
||||
}
|
||||
#endif
|
||||
case TRACE_GRAPH_RET: {
|
||||
|
|
@ -1459,7 +1502,8 @@ print_graph_function_flags(struct trace_iterator *iter, u32 flags)
|
|||
static enum print_line_t
|
||||
print_graph_function(struct trace_iterator *iter)
|
||||
{
|
||||
return print_graph_function_flags(iter, tracer_flags.val);
|
||||
struct trace_array *tr = iter->tr;
|
||||
return print_graph_function_flags(iter, tr->current_trace_flags->val);
|
||||
}
|
||||
|
||||
static enum print_line_t
|
||||
|
|
@ -1495,7 +1539,7 @@ static void print_lat_header(struct seq_file *s, u32 flags)
|
|||
static void __print_graph_headers_flags(struct trace_array *tr,
|
||||
struct seq_file *s, u32 flags)
|
||||
{
|
||||
int lat = tr->trace_flags & TRACE_ITER_LATENCY_FMT;
|
||||
int lat = tr->trace_flags & TRACE_ITER(LATENCY_FMT);
|
||||
|
||||
if (lat)
|
||||
print_lat_header(s, flags);
|
||||
|
|
@ -1535,7 +1579,10 @@ static void __print_graph_headers_flags(struct trace_array *tr,
|
|||
|
||||
static void print_graph_headers(struct seq_file *s)
|
||||
{
|
||||
print_graph_headers_flags(s, tracer_flags.val);
|
||||
struct trace_iterator *iter = s->private;
|
||||
struct trace_array *tr = iter->tr;
|
||||
|
||||
print_graph_headers_flags(s, tr->current_trace_flags->val);
|
||||
}
|
||||
|
||||
void print_graph_headers_flags(struct seq_file *s, u32 flags)
|
||||
|
|
@ -1543,10 +1590,10 @@ void print_graph_headers_flags(struct seq_file *s, u32 flags)
|
|||
struct trace_iterator *iter = s->private;
|
||||
struct trace_array *tr = iter->tr;
|
||||
|
||||
if (!(tr->trace_flags & TRACE_ITER_CONTEXT_INFO))
|
||||
if (!(tr->trace_flags & TRACE_ITER(CONTEXT_INFO)))
|
||||
return;
|
||||
|
||||
if (tr->trace_flags & TRACE_ITER_LATENCY_FMT) {
|
||||
if (tr->trace_flags & TRACE_ITER(LATENCY_FMT)) {
|
||||
/* print nothing if the buffers are empty */
|
||||
if (trace_empty(iter))
|
||||
return;
|
||||
|
|
@@ -1613,17 +1660,56 @@ void graph_trace_close(struct trace_iterator *iter)
 static int
 func_graph_set_flag(struct trace_array *tr, u32 old_flags, u32 bit, int set)
 {
-	if (bit == TRACE_GRAPH_PRINT_IRQS)
-		ftrace_graph_skip_irqs = !set;
+	/*
+	 * The function profiler gets updated even if function graph
+	 * isn't the current tracer. Handle it separately.
+	 */
+#ifdef CONFIG_FUNCTION_PROFILER
+	if (bit == TRACE_GRAPH_SLEEP_TIME && (tr->flags & TRACE_ARRAY_FL_GLOBAL) &&
+	    !!set == fprofile_no_sleep_time) {
+		if (set) {
+			fgraph_no_sleep_time--;
+			if (WARN_ON_ONCE(fgraph_no_sleep_time < 0))
+				fgraph_no_sleep_time = 0;
+			fprofile_no_sleep_time = false;
+		} else {
+			fgraph_no_sleep_time++;
+			fprofile_no_sleep_time = true;
+		}
+	}
+#endif

-	if (bit == TRACE_GRAPH_SLEEP_TIME)
-		ftrace_graph_sleep_time_control(set);
+	/* Do nothing if the current tracer is not this tracer */
+	if (tr->current_trace != &graph_trace)
+		return 0;

-	if (bit == TRACE_GRAPH_GRAPH_TIME)
-		ftrace_graph_graph_time_control(set);
+	/* Do nothing if already set. */
+	if (!!set == !!(tr->current_trace_flags->val & bit))
+		return 0;

-	if (bit == TRACE_GRAPH_ARGS)
+	switch (bit) {
+	case TRACE_GRAPH_SLEEP_TIME:
+		if (set) {
+			fgraph_no_sleep_time--;
+			if (WARN_ON_ONCE(fgraph_no_sleep_time < 0))
+				fgraph_no_sleep_time = 0;
+		} else {
+			fgraph_no_sleep_time++;
+		}
+		break;
+
+	case TRACE_GRAPH_PRINT_IRQS:
+		if (set)
+			ftrace_graph_skip_irqs--;
+		else
+			ftrace_graph_skip_irqs++;
+		if (WARN_ON_ONCE(ftrace_graph_skip_irqs < 0))
+			ftrace_graph_skip_irqs = 0;
+		break;
+
+	case TRACE_GRAPH_ARGS:
 		return ftrace_graph_trace_args(tr, set);
+	}

 	return 0;
 }
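Note how the per-instance options above turn the old global booleans into counters: each tracer instance increments the counter when it wants the behaviour and decrements it when it stops, and the behaviour stays active while the count is non-zero. A small sketch of that pattern (illustrative only, not the kernel code):

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Number of tracer instances that asked for the behaviour. */
static int skip_irqs_count;

static void instance_set_skip_irqs(bool enable)
{
	if (enable) {
		skip_irqs_count++;
	} else {
		skip_irqs_count--;
		/* Clamp like the WARN_ON_ONCE() + reset in the kernel code. */
		if (skip_irqs_count < 0) {
			fprintf(stderr, "unbalanced disable\n");
			skip_irqs_count = 0;
		}
	}
}

static bool should_skip_irqs(void)
{
	/* The behaviour stays on as long as any instance wants it. */
	return skip_irqs_count > 0;
}

int main(void)
{
	instance_set_skip_irqs(true);	/* instance A */
	instance_set_skip_irqs(true);	/* instance B */
	instance_set_skip_irqs(false);	/* instance A turns it off */
	assert(should_skip_irqs());	/* instance B still wants it */
	printf("skip irqs: %d\n", should_skip_irqs());
	return 0;
}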
|
|
@ -1660,7 +1746,7 @@ static struct tracer graph_trace __tracer_data = {
|
|||
.reset = graph_trace_reset,
|
||||
.print_line = print_graph_function,
|
||||
.print_header = print_graph_headers,
|
||||
.flags = &tracer_flags,
|
||||
.default_flags = &tracer_flags,
|
||||
.set_flag = func_graph_set_flag,
|
||||
.allow_instances = true,
|
||||
#ifdef CONFIG_FTRACE_SELFTEST
|
||||
|
|
|
|||
|
|
@ -63,7 +63,7 @@ irq_trace(void)
|
|||
|
||||
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
|
||||
static int irqsoff_display_graph(struct trace_array *tr, int set);
|
||||
# define is_graph(tr) ((tr)->trace_flags & TRACE_ITER_DISPLAY_GRAPH)
|
||||
# define is_graph(tr) ((tr)->trace_flags & TRACE_ITER(DISPLAY_GRAPH))
|
||||
#else
|
||||
static inline int irqsoff_display_graph(struct trace_array *tr, int set)
|
||||
{
|
||||
|
|
@ -485,8 +485,8 @@ static int register_irqsoff_function(struct trace_array *tr, int graph, int set)
|
|||
{
|
||||
int ret;
|
||||
|
||||
/* 'set' is set if TRACE_ITER_FUNCTION is about to be set */
|
||||
if (function_enabled || (!set && !(tr->trace_flags & TRACE_ITER_FUNCTION)))
|
||||
/* 'set' is set if TRACE_ITER(FUNCTION) is about to be set */
|
||||
if (function_enabled || (!set && !(tr->trace_flags & TRACE_ITER(FUNCTION))))
|
||||
return 0;
|
||||
|
||||
if (graph)
|
||||
|
|
@ -515,7 +515,7 @@ static void unregister_irqsoff_function(struct trace_array *tr, int graph)
|
|||
|
||||
static int irqsoff_function_set(struct trace_array *tr, u32 mask, int set)
|
||||
{
|
||||
if (!(mask & TRACE_ITER_FUNCTION))
|
||||
if (!(mask & TRACE_ITER(FUNCTION)))
|
||||
return 0;
|
||||
|
||||
if (set)
|
||||
|
|
@ -536,7 +536,7 @@ static inline int irqsoff_function_set(struct trace_array *tr, u32 mask, int set
|
|||
}
|
||||
#endif /* CONFIG_FUNCTION_TRACER */
|
||||
|
||||
static int irqsoff_flag_changed(struct trace_array *tr, u32 mask, int set)
|
||||
static int irqsoff_flag_changed(struct trace_array *tr, u64 mask, int set)
|
||||
{
|
||||
struct tracer *tracer = tr->current_trace;
|
||||
|
||||
|
|
@ -544,7 +544,7 @@ static int irqsoff_flag_changed(struct trace_array *tr, u32 mask, int set)
|
|||
return 0;
|
||||
|
||||
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
|
||||
if (mask & TRACE_ITER_DISPLAY_GRAPH)
|
||||
if (mask & TRACE_ITER(DISPLAY_GRAPH))
|
||||
return irqsoff_display_graph(tr, set);
|
||||
#endif
|
||||
|
||||
|
|
@ -582,10 +582,10 @@ static int __irqsoff_tracer_init(struct trace_array *tr)
|
|||
save_flags = tr->trace_flags;
|
||||
|
||||
/* non overwrite screws up the latency tracers */
|
||||
set_tracer_flag(tr, TRACE_ITER_OVERWRITE, 1);
|
||||
set_tracer_flag(tr, TRACE_ITER_LATENCY_FMT, 1);
|
||||
set_tracer_flag(tr, TRACE_ITER(OVERWRITE), 1);
|
||||
set_tracer_flag(tr, TRACE_ITER(LATENCY_FMT), 1);
|
||||
/* without pause, we will produce garbage if another latency occurs */
|
||||
set_tracer_flag(tr, TRACE_ITER_PAUSE_ON_TRACE, 1);
|
||||
set_tracer_flag(tr, TRACE_ITER(PAUSE_ON_TRACE), 1);
|
||||
|
||||
tr->max_latency = 0;
|
||||
irqsoff_trace = tr;
|
||||
|
|
@ -605,15 +605,15 @@ static int __irqsoff_tracer_init(struct trace_array *tr)
|
|||
|
||||
static void __irqsoff_tracer_reset(struct trace_array *tr)
|
||||
{
|
||||
int lat_flag = save_flags & TRACE_ITER_LATENCY_FMT;
|
||||
int overwrite_flag = save_flags & TRACE_ITER_OVERWRITE;
|
||||
int pause_flag = save_flags & TRACE_ITER_PAUSE_ON_TRACE;
|
||||
int lat_flag = save_flags & TRACE_ITER(LATENCY_FMT);
|
||||
int overwrite_flag = save_flags & TRACE_ITER(OVERWRITE);
|
||||
int pause_flag = save_flags & TRACE_ITER(PAUSE_ON_TRACE);
|
||||
|
||||
stop_irqsoff_tracer(tr, is_graph(tr));
|
||||
|
||||
set_tracer_flag(tr, TRACE_ITER_LATENCY_FMT, lat_flag);
|
||||
set_tracer_flag(tr, TRACE_ITER_OVERWRITE, overwrite_flag);
|
||||
set_tracer_flag(tr, TRACE_ITER_PAUSE_ON_TRACE, pause_flag);
|
||||
set_tracer_flag(tr, TRACE_ITER(LATENCY_FMT), lat_flag);
|
||||
set_tracer_flag(tr, TRACE_ITER(OVERWRITE), overwrite_flag);
|
||||
set_tracer_flag(tr, TRACE_ITER(PAUSE_ON_TRACE), pause_flag);
|
||||
ftrace_reset_array_ops(tr);
|
||||
|
||||
irqsoff_busy = false;
|
||||
|
|
|
|||
|
|
@ -31,7 +31,7 @@ static void ftrace_dump_buf(int skip_entries, long cpu_file)
|
|||
old_userobj = tr->trace_flags;
|
||||
|
||||
/* don't look at user memory in panic mode */
|
||||
tr->trace_flags &= ~TRACE_ITER_SYM_USEROBJ;
|
||||
tr->trace_flags &= ~TRACE_ITER(SYM_USEROBJ);
|
||||
|
||||
kdb_printf("Dumping ftrace buffer:\n");
|
||||
if (skip_entries)
|
||||
|
|
|
|||
|
|
@ -1584,7 +1584,7 @@ print_kprobe_event(struct trace_iterator *iter, int flags,
|
|||
|
||||
trace_seq_printf(s, "%s: (", trace_probe_name(tp));
|
||||
|
||||
if (!seq_print_ip_sym(s, field->ip, flags | TRACE_ITER_SYM_OFFSET))
|
||||
if (!seq_print_ip_sym_offset(s, field->ip, flags))
|
||||
goto out;
|
||||
|
||||
trace_seq_putc(s, ')');
|
||||
|
|
@ -1614,12 +1614,12 @@ print_kretprobe_event(struct trace_iterator *iter, int flags,
|
|||
|
||||
trace_seq_printf(s, "%s: (", trace_probe_name(tp));
|
||||
|
||||
if (!seq_print_ip_sym(s, field->ret_ip, flags | TRACE_ITER_SYM_OFFSET))
|
||||
if (!seq_print_ip_sym_offset(s, field->ret_ip, flags))
|
||||
goto out;
|
||||
|
||||
trace_seq_puts(s, " <- ");
|
||||
|
||||
if (!seq_print_ip_sym(s, field->func, flags & ~TRACE_ITER_SYM_OFFSET))
|
||||
if (!seq_print_ip_sym_no_offset(s, field->func, flags))
|
||||
goto out;
|
||||
|
||||
trace_seq_putc(s, ')');
|
||||
|
|
|
|||
|
|
@ -420,7 +420,7 @@ static int seq_print_user_ip(struct trace_seq *s, struct mm_struct *mm,
|
|||
}
|
||||
mmap_read_unlock(mm);
|
||||
}
|
||||
if (ret && ((sym_flags & TRACE_ITER_SYM_ADDR) || !file))
|
||||
if (ret && ((sym_flags & TRACE_ITER(SYM_ADDR)) || !file))
|
||||
trace_seq_printf(s, " <" IP_FMT ">", ip);
|
||||
return !trace_seq_has_overflowed(s);
|
||||
}
|
||||
|
|
@@ -433,9 +433,9 @@ seq_print_ip_sym(struct trace_seq *s, unsigned long ip, unsigned long sym_flags)
 		goto out;
 	}
 
-	trace_seq_print_sym(s, ip, sym_flags & TRACE_ITER_SYM_OFFSET);
+	trace_seq_print_sym(s, ip, sym_flags & TRACE_ITER(SYM_OFFSET));
 
-	if (sym_flags & TRACE_ITER_SYM_ADDR)
+	if (sym_flags & TRACE_ITER(SYM_ADDR))
 		trace_seq_printf(s, " <" IP_FMT ">", ip);
 
 out:
@@ -569,7 +569,7 @@ static int
 lat_print_timestamp(struct trace_iterator *iter, u64 next_ts)
 {
 	struct trace_array *tr = iter->tr;
-	unsigned long verbose = tr->trace_flags & TRACE_ITER_VERBOSE;
+	unsigned long verbose = tr->trace_flags & TRACE_ITER(VERBOSE);
 	unsigned long in_ns = iter->iter_flags & TRACE_FILE_TIME_IN_NS;
 	unsigned long long abs_ts = iter->ts - iter->array_buffer->time_start;
 	unsigned long long rel_ts = next_ts - iter->ts;
@@ -636,7 +636,7 @@ int trace_print_context(struct trace_iterator *iter)
 
 	trace_seq_printf(s, "%16s-%-7d ", comm, entry->pid);
 
-	if (tr->trace_flags & TRACE_ITER_RECORD_TGID) {
+	if (tr->trace_flags & TRACE_ITER(RECORD_TGID)) {
 		unsigned int tgid = trace_find_tgid(entry->pid);
 
 		if (!tgid)
@@ -647,7 +647,7 @@ int trace_print_context(struct trace_iterator *iter)
 
 	trace_seq_printf(s, "[%03d] ", iter->cpu);
 
-	if (tr->trace_flags & TRACE_ITER_IRQ_INFO)
+	if (tr->trace_flags & TRACE_ITER(IRQ_INFO))
 		trace_print_lat_fmt(s, entry);
 
 	trace_print_time(s, iter, iter->ts);
@@ -661,7 +661,7 @@ int trace_print_lat_context(struct trace_iterator *iter)
 	struct trace_entry *entry, *next_entry;
 	struct trace_array *tr = iter->tr;
 	struct trace_seq *s = &iter->seq;
-	unsigned long verbose = (tr->trace_flags & TRACE_ITER_VERBOSE);
+	unsigned long verbose = (tr->trace_flags & TRACE_ITER(VERBOSE));
 	u64 next_ts;
 
 	next_entry = trace_find_next_entry(iter, NULL, &next_ts);
@@ -950,7 +950,9 @@ static void print_fields(struct trace_iterator *iter, struct trace_event_call *c
 	int offset;
 	int len;
 	int ret;
+	int i;
 	void *pos;
+	char *str;
 
 	list_for_each_entry_reverse(field, head, link) {
 		trace_seq_printf(&iter->seq, " %s=", field->name);
@@ -977,8 +979,29 @@ static void print_fields(struct trace_iterator *iter, struct trace_event_call *c
 				trace_seq_puts(&iter->seq, "<OVERFLOW>");
 				break;
 			}
-			pos = (void *)iter->ent + offset;
-			trace_seq_printf(&iter->seq, "%.*s", len, (char *)pos);
+			str = (char *)iter->ent + offset;
+			/* Check if there's any non printable strings */
+			for (i = 0; i < len; i++) {
+				if (str[i] && !(isascii(str[i]) && isprint(str[i])))
+					break;
+			}
+			if (i < len) {
+				for (i = 0; i < len; i++) {
+					if (isascii(str[i]) && isprint(str[i]))
+						trace_seq_putc(&iter->seq, str[i]);
+					else
+						trace_seq_putc(&iter->seq, '.');
+				}
+				trace_seq_puts(&iter->seq, " (");
+				for (i = 0; i < len; i++) {
+					if (i)
+						trace_seq_putc(&iter->seq, ':');
+					trace_seq_printf(&iter->seq, "%02x", str[i]);
+				}
+				trace_seq_putc(&iter->seq, ')');
+			} else {
+				trace_seq_printf(&iter->seq, "%.*s", len, str);
+			}
 			break;
 		case FILTER_PTR_STRING:
 			if (!iter->fmt_size)
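The rewritten string case above keeps printable field contents as-is, but when a field contains non-printable bytes it now prints the printable characters, substitutes '.' for the rest, and appends a colon-separated hex dump of the raw bytes. A stand-alone sketch of the same escaping, as a user-space illustration only (print_field_str() is a made-up helper, not a kernel function):

	#include <ctype.h>
	#include <stdio.h>

	static void print_field_str(const char *str, int len)
	{
		int i;

		/* any non-printable (and non-NUL) byte forces the escaped form */
		for (i = 0; i < len; i++) {
			if (str[i] && !(isascii(str[i]) && isprint(str[i])))
				break;
		}
		if (i == len) {
			printf("%.*s", len, str);	/* all printable */
			return;
		}
		for (i = 0; i < len; i++)		/* '.' for unprintable bytes */
			putchar(isascii(str[i]) && isprint(str[i]) ? str[i] : '.');
		printf(" (");
		for (i = 0; i < len; i++)		/* raw bytes as aa:bb:cc */
			printf("%s%02x", i ? ":" : "", (unsigned char)str[i]);
		printf(")");
	}

For example, a 4-byte field holding { 'a', 0x01, 'b', 0x00 } would be rendered as a.b. (61:01:62:00).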
@@ -1127,7 +1150,7 @@ static void print_fn_trace(struct trace_seq *s, unsigned long ip,
 	if (args)
 		print_function_args(s, args, ip);
 
-	if ((flags & TRACE_ITER_PRINT_PARENT) && parent_ip) {
+	if ((flags & TRACE_ITER(PRINT_PARENT)) && parent_ip) {
 		trace_seq_puts(s, " <-");
 		seq_print_ip_sym(s, parent_ip, flags);
 	}
@@ -1417,7 +1440,7 @@ static enum print_line_t trace_user_stack_print(struct trace_iterator *iter,
 
 	trace_seq_puts(s, "<user stack trace>\n");
 
-	if (tr->trace_flags & TRACE_ITER_SYM_USEROBJ) {
+	if (tr->trace_flags & TRACE_ITER(SYM_USEROBJ)) {
 		struct task_struct *task;
 		/*
 		 * we do the lookup on the thread group leader,
@@ -16,6 +16,17 @@ extern int
 seq_print_ip_sym(struct trace_seq *s, unsigned long ip,
 		 unsigned long sym_flags);
 
+static inline int seq_print_ip_sym_offset(struct trace_seq *s, unsigned long ip,
+					  unsigned long sym_flags)
+{
+	return seq_print_ip_sym(s, ip, sym_flags | TRACE_ITER(SYM_OFFSET));
+}
+static inline int seq_print_ip_sym_no_offset(struct trace_seq *s, unsigned long ip,
+					     unsigned long sym_flags)
+{
+	return seq_print_ip_sym(s, ip, sym_flags & ~TRACE_ITER(SYM_OFFSET));
+}
+
 extern void trace_seq_print_sym(struct trace_seq *s, unsigned long address, bool offset);
 extern int trace_print_context(struct trace_iterator *iter);
 extern int trace_print_lat_context(struct trace_iterator *iter);
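The two helpers above wrap the existing seq_print_ip_sym() so that callers no longer open-code the SYM_OFFSET flag manipulation. The effect at a call site, using the kprobe hunk earlier as the example:

	/* new form */
	seq_print_ip_sym_offset(s, field->ip, flags);
	/* behaves like the old open-coded form */
	seq_print_ip_sym(s, field->ip, flags | TRACE_ITER(SYM_OFFSET));

and seq_print_ip_sym_no_offset() likewise replaces masking the flag off with & ~TRACE_ITER(SYM_OFFSET).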
@@ -41,7 +41,7 @@ static void stop_func_tracer(struct trace_array *tr, int graph);
 static int save_flags;
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
-# define is_graph(tr) ((tr)->trace_flags & TRACE_ITER_DISPLAY_GRAPH)
+# define is_graph(tr) ((tr)->trace_flags & TRACE_ITER(DISPLAY_GRAPH))
 #else
 # define is_graph(tr) false
 #endif
@@ -247,8 +247,8 @@ static int register_wakeup_function(struct trace_array *tr, int graph, int set)
 {
 	int ret;
 
-	/* 'set' is set if TRACE_ITER_FUNCTION is about to be set */
-	if (function_enabled || (!set && !(tr->trace_flags & TRACE_ITER_FUNCTION)))
+	/* 'set' is set if TRACE_ITER(FUNCTION) is about to be set */
+	if (function_enabled || (!set && !(tr->trace_flags & TRACE_ITER(FUNCTION))))
 		return 0;
 
 	if (graph)
@@ -277,7 +277,7 @@ static void unregister_wakeup_function(struct trace_array *tr, int graph)
 
 static int wakeup_function_set(struct trace_array *tr, u32 mask, int set)
 {
-	if (!(mask & TRACE_ITER_FUNCTION))
+	if (!(mask & TRACE_ITER(FUNCTION)))
 		return 0;
 
 	if (set)
@@ -324,7 +324,7 @@ __trace_function(struct trace_array *tr,
 	trace_function(tr, ip, parent_ip, trace_ctx, NULL);
 }
 
-static int wakeup_flag_changed(struct trace_array *tr, u32 mask, int set)
+static int wakeup_flag_changed(struct trace_array *tr, u64 mask, int set)
 {
 	struct tracer *tracer = tr->current_trace;
 
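The mask parameter of wakeup_flag_changed() is widened from u32 to u64 to match the larger option mask. Presumably the flag_changed callback in struct tracer is widened the same way; an assumed sketch of the relevant member, for illustration only:

	struct tracer {
		/* ... */
		int	(*flag_changed)(struct trace_array *tr,
					u64 mask, int set);
		/* ... */
	};

Keeping the callback prototype in sync with the 64-bit trace_flags avoids truncating option bits above bit 31.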
@@ -332,7 +332,7 @@ static int wakeup_flag_changed(struct trace_array *tr, u32 mask, int set)
 		return 0;
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
-	if (mask & TRACE_ITER_DISPLAY_GRAPH)
+	if (mask & TRACE_ITER(DISPLAY_GRAPH))
 		return wakeup_display_graph(tr, set);
 #endif
 
@@ -681,8 +681,8 @@ static int __wakeup_tracer_init(struct trace_array *tr)
 	save_flags = tr->trace_flags;
 
 	/* non overwrite screws up the latency tracers */
-	set_tracer_flag(tr, TRACE_ITER_OVERWRITE, 1);
-	set_tracer_flag(tr, TRACE_ITER_LATENCY_FMT, 1);
+	set_tracer_flag(tr, TRACE_ITER(OVERWRITE), 1);
+	set_tracer_flag(tr, TRACE_ITER(LATENCY_FMT), 1);
 
 	tr->max_latency = 0;
 	wakeup_trace = tr;
@@ -725,15 +725,15 @@ static int wakeup_dl_tracer_init(struct trace_array *tr)
 
 static void wakeup_tracer_reset(struct trace_array *tr)
 {
-	int lat_flag = save_flags & TRACE_ITER_LATENCY_FMT;
-	int overwrite_flag = save_flags & TRACE_ITER_OVERWRITE;
+	int lat_flag = save_flags & TRACE_ITER(LATENCY_FMT);
+	int overwrite_flag = save_flags & TRACE_ITER(OVERWRITE);
 
 	stop_wakeup_tracer(tr);
 	/* make sure we put back any tasks we are tracing */
 	wakeup_reset(tr);
 
-	set_tracer_flag(tr, TRACE_ITER_LATENCY_FMT, lat_flag);
-	set_tracer_flag(tr, TRACE_ITER_OVERWRITE, overwrite_flag);
+	set_tracer_flag(tr, TRACE_ITER(LATENCY_FMT), lat_flag);
+	set_tracer_flag(tr, TRACE_ITER(OVERWRITE), overwrite_flag);
 	ftrace_reset_array_ops(tr);
 	wakeup_busy = false;
 }
File diff suppressed because it is too large