// SPDX-License-Identifier: GPL-2.0
#include "math.h"
#include "parse-events.h"
#include "pmu.h"
#include "pmus.h"
#include "tests.h"
#include <errno.h>
#include <stdio.h>
#include <linux/kernel.h>
#include <linux/zalloc.h>
#include "debug.h"
#include "../pmu-events/pmu-events.h"
#include <perf/evlist.h>
#include "util/evlist.h"
#include "util/expr.h"
#include "util/hashmap.h"
#include "util/parse-events.h"
#include "metricgroup.h"
#include "stat.h"

struct perf_pmu_test_event {
        /* used for matching against events from generated pmu-events.c */
        struct pmu_event event;

        /* used for matching against event aliases */
        /* extra events for aliases */
        const char *alias_str;

        /*
         * Note: For when PublicDescription does not exist in the JSON, we
         * will have no long_desc in pmu_event.long_desc, but long_desc may
         * be set in the alias.
         */
        const char *alias_long_desc;

        /* PMU which we should match against */
        const char *matching_pmu;
};

struct perf_pmu_test_pmu {
        struct perf_pmu pmu;
        struct perf_pmu_test_event const *aliases[10];
};

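/*
 * Expected events and their parsed aliases, matched against the generated
 * test tables ("testarch"/"testcpu" core table and the test SoC sys table).
 */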
static const struct perf_pmu_test_event bp_l1_btb_correct = {
        .event = {
                .name = "bp_l1_btb_correct",
                .event = "event=0x8a",
                .desc = "L1 BTB Correction",
                .topic = "branch",
        },
        .alias_str = "event=0x8a",
        .alias_long_desc = "L1 BTB Correction",
};

static const struct perf_pmu_test_event bp_l2_btb_correct = {
        .event = {
                .name = "bp_l2_btb_correct",
                .event = "event=0x8b",
                .desc = "L2 BTB Correction",
                .topic = "branch",
        },
        .alias_str = "event=0x8b",
        .alias_long_desc = "L2 BTB Correction",
};

static const struct perf_pmu_test_event segment_reg_loads_any = {
        .event = {
                .name = "segment_reg_loads.any",
                .event = "event=0x6,period=200000,umask=0x80",
                .desc = "Number of segment register loads",
                .topic = "other",
        },
        .alias_str = "event=0x6,period=0x30d40,umask=0x80",
        .alias_long_desc = "Number of segment register loads",
};

static const struct perf_pmu_test_event dispatch_blocked_any = {
        .event = {
                .name = "dispatch_blocked.any",
                .event = "event=0x9,period=200000,umask=0x20",
                .desc = "Memory cluster signals to block micro-op dispatch for any reason",
                .topic = "other",
        },
        .alias_str = "event=0x9,period=0x30d40,umask=0x20",
        .alias_long_desc = "Memory cluster signals to block micro-op dispatch for any reason",
};

static const struct perf_pmu_test_event eist_trans = {
        .event = {
                .name = "eist_trans",
                .event = "event=0x3a,period=200000,umask=0x0",
                .desc = "Number of Enhanced Intel SpeedStep(R) Technology (EIST) transitions",
                .topic = "other",
        },
        .alias_str = "event=0x3a,period=0x30d40,umask=0",
        .alias_long_desc = "Number of Enhanced Intel SpeedStep(R) Technology (EIST) transitions",
};

static const struct perf_pmu_test_event l3_cache_rd = {
        .event = {
                .name = "l3_cache_rd",
                .event = "event=0x40",
                .desc = "L3 cache access, read",
                .long_desc = "Attributable Level 3 cache access, read",
                .topic = "cache",
        },
        .alias_str = "event=0x40",
        .alias_long_desc = "Attributable Level 3 cache access, read",
};

static const struct perf_pmu_test_event *core_events[] = {
        &bp_l1_btb_correct,
        &bp_l2_btb_correct,
        &segment_reg_loads_any,
        &dispatch_blocked_any,
        &eist_trans,
        &l3_cache_rd,
        NULL
};

static const struct perf_pmu_test_event uncore_hisi_ddrc_flux_wcmd = {
        .event = {
                .name = "uncore_hisi_ddrc.flux_wcmd",
                .event = "event=0x2",
                .desc = "DDRC write commands. Unit: hisi_sccl,ddrc",
                .topic = "uncore",
                .long_desc = "DDRC write commands",
                .pmu = "hisi_sccl,ddrc",
        },
        .alias_str = "event=0x2",
        .alias_long_desc = "DDRC write commands",
        .matching_pmu = "hisi_sccl1_ddrc2",
};

static const struct perf_pmu_test_event unc_cbo_xsnp_response_miss_eviction = {
        .event = {
                .name = "unc_cbo_xsnp_response.miss_eviction",
                .event = "event=0x22,umask=0x81",
                .desc = "A cross-core snoop resulted from L3 Eviction which misses in some processor core. Unit: uncore_cbox",
                .topic = "uncore",
                .long_desc = "A cross-core snoop resulted from L3 Eviction which misses in some processor core",
                .pmu = "uncore_cbox",
        },
        .alias_str = "event=0x22,umask=0x81",
        .alias_long_desc = "A cross-core snoop resulted from L3 Eviction which misses in some processor core",
        .matching_pmu = "uncore_cbox_0",
};

static const struct perf_pmu_test_event uncore_hyphen = {
        .event = {
                .name = "event-hyphen",
                .event = "event=0xe0,umask=0x00",
                .desc = "UNC_CBO_HYPHEN. Unit: uncore_cbox",
                .topic = "uncore",
                .long_desc = "UNC_CBO_HYPHEN",
                .pmu = "uncore_cbox",
        },
        .alias_str = "event=0xe0,umask=0",
        .alias_long_desc = "UNC_CBO_HYPHEN",
        .matching_pmu = "uncore_cbox_0",
};

static const struct perf_pmu_test_event uncore_two_hyph = {
        .event = {
                .name = "event-two-hyph",
                .event = "event=0xc0,umask=0x00",
                .desc = "UNC_CBO_TWO_HYPH. Unit: uncore_cbox",
                .topic = "uncore",
                .long_desc = "UNC_CBO_TWO_HYPH",
                .pmu = "uncore_cbox",
        },
        .alias_str = "event=0xc0,umask=0",
        .alias_long_desc = "UNC_CBO_TWO_HYPH",
        .matching_pmu = "uncore_cbox_0",
};

static const struct perf_pmu_test_event uncore_hisi_l3c_rd_hit_cpipe = {
        .event = {
                .name = "uncore_hisi_l3c.rd_hit_cpipe",
                .event = "event=0x7",
                .desc = "Total read hits. Unit: hisi_sccl,l3c",
                .topic = "uncore",
                .long_desc = "Total read hits",
                .pmu = "hisi_sccl,l3c",
        },
        .alias_str = "event=0x7",
        .alias_long_desc = "Total read hits",
        .matching_pmu = "hisi_sccl3_l3c7",
};

static const struct perf_pmu_test_event uncore_imc_free_running_cache_miss = {
        .event = {
                .name = "uncore_imc_free_running.cache_miss",
                .event = "event=0x12",
                .desc = "Total cache misses. Unit: uncore_imc_free_running",
                .topic = "uncore",
                .long_desc = "Total cache misses",
                .pmu = "uncore_imc_free_running",
        },
        .alias_str = "event=0x12",
        .alias_long_desc = "Total cache misses",
        .matching_pmu = "uncore_imc_free_running_0",
};

static const struct perf_pmu_test_event uncore_imc_cache_hits = {
        .event = {
                .name = "uncore_imc.cache_hits",
                .event = "event=0x34",
                .desc = "Total cache hits. Unit: uncore_imc",
                .topic = "uncore",
                .long_desc = "Total cache hits",
                .pmu = "uncore_imc",
        },
        .alias_str = "event=0x34",
        .alias_long_desc = "Total cache hits",
        .matching_pmu = "uncore_imc_0",
};

static const struct perf_pmu_test_event *uncore_events[] = {
        &uncore_hisi_ddrc_flux_wcmd,
        &unc_cbo_xsnp_response_miss_eviction,
        &uncore_hyphen,
        &uncore_two_hyph,
        &uncore_hisi_l3c_rd_hit_cpipe,
        &uncore_imc_free_running_cache_miss,
        &uncore_imc_cache_hits,
        NULL
};

static const struct perf_pmu_test_event sys_ddr_pmu_write_cycles = {
        .event = {
                .name = "sys_ddr_pmu.write_cycles",
                .event = "event=0x2b",
                .desc = "ddr write-cycles event. Unit: uncore_sys_ddr_pmu",
                .topic = "uncore",
                .pmu = "uncore_sys_ddr_pmu",
                .compat = "v8",
        },
        .alias_str = "event=0x2b",
        .alias_long_desc = "ddr write-cycles event. Unit: uncore_sys_ddr_pmu",
        .matching_pmu = "uncore_sys_ddr_pmu",
};

static const struct perf_pmu_test_event sys_ccn_pmu_read_cycles = {
        .event = {
                .name = "sys_ccn_pmu.read_cycles",
                .event = "config=0x2c",
                .desc = "ccn read-cycles event. Unit: uncore_sys_ccn_pmu",
                .topic = "uncore",
                .pmu = "uncore_sys_ccn_pmu",
                .compat = "0x01",
        },
        .alias_str = "config=0x2c",
        .alias_long_desc = "ccn read-cycles event. Unit: uncore_sys_ccn_pmu",
        .matching_pmu = "uncore_sys_ccn_pmu",
};

static const struct perf_pmu_test_event *sys_events[] = {
        &sys_ddr_pmu_write_cycles,
        &sys_ccn_pmu_read_cycles,
        NULL
};

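/* Compare two strings, treating a pair of NULL pointers as equal. */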
static bool is_same(const char *reference, const char *test)
{
        if (!reference && !test)
                return true;
        if (reference && !test)
                return false;
        if (!reference && test)
                return false;
        return !strcmp(reference, test);
}

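/*
 * Compare a generated pmu_event against the expected test event, field by
 * field. Returns 0 on a full match, -1 (with a pr_debug2 message) on the
 * first mismatch.
 */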
static int compare_pmu_events(const struct pmu_event *e1, const struct pmu_event *e2)
{
        if (!is_same(e1->name, e2->name)) {
                pr_debug2("testing event e1 %s: mismatched name string, %s vs %s\n",
                          e1->name, e1->name, e2->name);
                return -1;
        }

        if (!is_same(e1->compat, e2->compat)) {
                pr_debug2("testing event e1 %s: mismatched compat string, %s vs %s\n",
                          e1->name, e1->compat, e2->compat);
                return -1;
        }

        if (!is_same(e1->event, e2->event)) {
                pr_debug2("testing event e1 %s: mismatched event, %s vs %s\n",
                          e1->name, e1->event, e2->event);
                return -1;
        }

        if (!is_same(e1->desc, e2->desc)) {
                pr_debug2("testing event e1 %s: mismatched desc, %s vs %s\n",
                          e1->name, e1->desc, e2->desc);
                return -1;
        }

        if (!is_same(e1->topic, e2->topic)) {
                pr_debug2("testing event e1 %s: mismatched topic, %s vs %s\n",
                          e1->name, e1->topic, e2->topic);
                return -1;
        }

        if (!is_same(e1->long_desc, e2->long_desc)) {
                pr_debug2("testing event e1 %s: mismatched long_desc, %s vs %s\n",
                          e1->name, e1->long_desc, e2->long_desc);
                return -1;
        }

        if (!is_same(e1->pmu, e2->pmu)) {
                pr_debug2("testing event e1 %s: mismatched pmu string, %s vs %s\n",
                          e1->name, e1->pmu, e2->pmu);
                return -1;
        }

        if (!is_same(e1->unit, e2->unit)) {
                pr_debug2("testing event e1 %s: mismatched unit, %s vs %s\n",
                          e1->name, e1->unit, e2->unit);
                return -1;
        }

        if (e1->perpkg != e2->perpkg) {
                pr_debug2("testing event e1 %s: mismatched perpkg, %d vs %d\n",
                          e1->name, e1->perpkg, e2->perpkg);
                return -1;
        }

        if (e1->deprecated != e2->deprecated) {
                pr_debug2("testing event e1 %s: mismatched deprecated, %d vs %d\n",
                          e1->name, e1->deprecated, e2->deprecated);
                return -1;
        }

        return 0;
}

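/*
 * Check that an alias generated from the event tables matches the expected
 * name, descriptions, topic, event string and PMU of the test event.
 */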
static int compare_alias_to_test_event(struct perf_pmu_alias *alias,
                                       struct perf_pmu_test_event const *test_event,
                                       char const *pmu_name)
{
        struct pmu_event const *event = &test_event->event;

        /* An alias was found, ensure everything is in order */
        if (!is_same(alias->name, event->name)) {
                pr_debug("testing aliases PMU %s: mismatched name, %s vs %s\n",
                         pmu_name, alias->name, event->name);
                return -1;
        }

        if (!is_same(alias->desc, event->desc)) {
                pr_debug("testing aliases PMU %s: mismatched desc, %s vs %s\n",
                         pmu_name, alias->desc, event->desc);
                return -1;
        }

        if (!is_same(alias->long_desc, test_event->alias_long_desc)) {
                pr_debug("testing aliases PMU %s: mismatched long_desc, %s vs %s\n",
                         pmu_name, alias->long_desc,
                         test_event->alias_long_desc);
                return -1;
        }

        if (!is_same(alias->topic, event->topic)) {
                pr_debug("testing aliases PMU %s: mismatched topic, %s vs %s\n",
                         pmu_name, alias->topic, event->topic);
                return -1;
        }

        if (!is_same(alias->str, test_event->alias_str)) {
                pr_debug("testing aliases PMU %s: mismatched str, %s vs %s\n",
                         pmu_name, alias->str, test_event->alias_str);
                return -1;
        }

        if (!is_same(alias->long_desc, test_event->alias_long_desc)) {
                pr_debug("testing aliases PMU %s: mismatched long desc, %s vs %s\n",
                         pmu_name, alias->long_desc, test_event->alias_long_desc);
                return -1;
        }

        if (!is_same(alias->pmu_name, test_event->event.pmu)) {
                pr_debug("testing aliases PMU %s: mismatched pmu_name, %s vs %s\n",
                         pmu_name, alias->pmu_name, test_event->event.pmu);
                return -1;
        }

        return 0;
}

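/*
 * Callback for pmu_events_table_for_each_event(): look the generated event
 * up in core_events[]/uncore_events[] and check that every field matches.
 */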
static int test__pmu_event_table_core_callback(const struct pmu_event *pe,
                                               const struct pmu_events_table *table __maybe_unused,
                                               void *data)
{
        int *map_events = data;
        struct perf_pmu_test_event const **test_event_table;
        bool found = false;

        if (pe->pmu)
                test_event_table = &uncore_events[0];
        else
                test_event_table = &core_events[0];

        for (; *test_event_table; test_event_table++) {
                struct perf_pmu_test_event const *test_event = *test_event_table;
                struct pmu_event const *event = &test_event->event;

                if (strcmp(pe->name, event->name))
                        continue;
                found = true;
                (*map_events)++;

                if (compare_pmu_events(pe, event))
                        return -1;

                pr_debug("testing event table %s: pass\n", pe->name);
        }

        if (!found) {
                pr_err("testing event table: could not find event %s\n", pe->name);
                return -1;
        }

        return 0;
}

static int test__pmu_event_table_sys_callback(const struct pmu_event *pe,
                                              const struct pmu_events_table *table __maybe_unused,
                                              void *data)
{
        int *map_events = data;
        struct perf_pmu_test_event const **test_event_table;
        bool found = false;

        test_event_table = &sys_events[0];

        for (; *test_event_table; test_event_table++) {
                struct perf_pmu_test_event const *test_event = *test_event_table;
                struct pmu_event const *event = &test_event->event;

                if (strcmp(pe->name, event->name))
                        continue;
                found = true;
                (*map_events)++;

                if (compare_pmu_events(pe, event))
                        return TEST_FAIL;

                pr_debug("testing sys event table %s: pass\n", pe->name);
        }

        if (!found) {
                pr_debug("testing sys event table: could not find event %s\n", pe->name);
                return TEST_FAIL;
        }

        return TEST_OK;
}

/* Verify generated events from pmu-events.c are as expected */
static int test__pmu_event_table(struct test_suite *test __maybe_unused,
                                 int subtest __maybe_unused)
{
        const struct pmu_events_table *sys_event_table =
                find_sys_events_table("pmu_events__test_soc_sys");
        const struct pmu_events_table *table = find_core_events_table("testarch", "testcpu");
        int map_events = 0, expected_events, err;

        /* ignore 3x sentinels */
        expected_events = ARRAY_SIZE(core_events) +
                          ARRAY_SIZE(uncore_events) +
                          ARRAY_SIZE(sys_events) - 3;

        if (!table || !sys_event_table)
                return -1;

        err = pmu_events_table_for_each_event(table, test__pmu_event_table_core_callback,
                                              &map_events);
        if (err)
                return err;

        err = pmu_events_table_for_each_event(sys_event_table, test__pmu_event_table_sys_callback,
                                              &map_events);
        if (err)
                return err;

        if (map_events != expected_events) {
                pr_err("testing event table: found %d, but expected %d\n",
                       map_events, expected_events);
                return TEST_FAIL;
        }

        return 0;
}

static struct perf_pmu_alias *find_alias(const char *test_event, struct list_head *aliases)
{
        struct perf_pmu_alias *alias;

        list_for_each_entry(alias, aliases, list)
                if (!strcmp(test_event, alias->name))
                        return alias;

        return NULL;
}

/* Verify aliases are as expected */
static int __test_core_pmu_event_aliases(char *pmu_name, int *count)
{
        struct perf_pmu_test_event const **test_event_table;
        struct perf_pmu *pmu;
        LIST_HEAD(aliases);
        int res = 0;
        const struct pmu_events_table *table = find_core_events_table("testarch", "testcpu");
        struct perf_pmu_alias *a, *tmp;

        if (!table)
                return -1;

        test_event_table = &core_events[0];

        pmu = zalloc(sizeof(*pmu));
        if (!pmu)
                return -1;

        pmu->name = pmu_name;

        pmu_add_cpu_aliases_table(&aliases, pmu, table);

        for (; *test_event_table; test_event_table++) {
                struct perf_pmu_test_event const *test_event = *test_event_table;
                struct pmu_event const *event = &test_event->event;
                struct perf_pmu_alias *alias = find_alias(event->name, &aliases);

                if (!alias) {
                        pr_debug("testing aliases core PMU %s: no alias, alias_table->name=%s\n",
                                 pmu_name, event->name);
                        res = -1;
                        break;
                }

                if (compare_alias_to_test_event(alias, test_event, pmu_name)) {
                        res = -1;
                        break;
                }
                (*count)++;
                pr_debug2("testing aliases core PMU %s: matched event %s\n",
                          pmu_name, alias->name);
        }

        list_for_each_entry_safe(a, tmp, &aliases, list) {
                list_del(&a->list);
                perf_pmu_free_alias(a);
        }
        free(pmu);
        return res;
}

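/*
 * For a fake uncore PMU, generate aliases from the test tables and check
 * that they match exactly the set listed in the perf_pmu_test_pmu.
 */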
static int __test_uncore_pmu_event_aliases(struct perf_pmu_test_pmu *test_pmu)
{
        int alias_count = 0, to_match_count = 0, matched_count = 0;
        struct perf_pmu_test_event const **table;
        struct perf_pmu *pmu = &test_pmu->pmu;
        const char *pmu_name = pmu->name;
        struct perf_pmu_alias *a, *tmp, *alias;
        const struct pmu_events_table *events_table;
        LIST_HEAD(aliases);
        int res = 0;

        events_table = find_core_events_table("testarch", "testcpu");
        if (!events_table)
                return -1;

        pmu_add_cpu_aliases_table(&aliases, pmu, events_table);
        pmu_add_sys_aliases(&aliases, pmu);

        /* Count how many aliases we generated */
        list_for_each_entry(alias, &aliases, list)
                alias_count++;

        /* Count how many aliases we expect from the known table */
        for (table = &test_pmu->aliases[0]; *table; table++)
                to_match_count++;

        if (alias_count != to_match_count) {
                pr_debug("testing aliases uncore PMU %s: mismatch expected aliases (%d) vs found (%d)\n",
                         pmu_name, to_match_count, alias_count);
                res = -1;
                goto out;
        }

        list_for_each_entry(alias, &aliases, list) {
                bool matched = false;

                for (table = &test_pmu->aliases[0]; *table; table++) {
                        struct perf_pmu_test_event const *test_event = *table;
                        struct pmu_event const *event = &test_event->event;

                        if (!strcmp(event->name, alias->name)) {
                                if (compare_alias_to_test_event(alias,
                                                                test_event,
                                                                pmu_name)) {
                                        continue;
                                }
                                matched = true;
                                matched_count++;
                        }
                }

                if (matched == false) {
                        pr_debug("testing aliases uncore PMU %s: could not match alias %s\n",
                                 pmu_name, alias->name);
                        res = -1;
                        goto out;
                }
        }

        if (alias_count != matched_count) {
                pr_debug("testing aliases uncore PMU %s: mismatch found aliases (%d) vs matched (%d)\n",
                         pmu_name, matched_count, alias_count);
                res = -1;
        }

out:
        list_for_each_entry_safe(a, tmp, &aliases, list) {
                list_del(&a->list);
                perf_pmu_free_alias(a);
        }
        return res;
}

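/* Fake uncore PMUs and the aliases each one is expected to generate. */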
static struct perf_pmu_test_pmu test_pmus[] = {
        {
                .pmu = {
                        .name = (char *)"hisi_sccl1_ddrc2",
                        .is_uncore = 1,
                },
                .aliases = {
                        &uncore_hisi_ddrc_flux_wcmd,
                },
        },
        {
                .pmu = {
                        .name = (char *)"uncore_cbox_0",
                        .is_uncore = 1,
                },
                .aliases = {
                        &unc_cbo_xsnp_response_miss_eviction,
                        &uncore_hyphen,
                        &uncore_two_hyph,
                },
        },
        {
                .pmu = {
                        .name = (char *)"hisi_sccl3_l3c7",
                        .is_uncore = 1,
                },
                .aliases = {
                        &uncore_hisi_l3c_rd_hit_cpipe,
                },
        },
        {
                .pmu = {
                        .name = (char *)"uncore_imc_free_running_0",
                        .is_uncore = 1,
                },
                .aliases = {
                        &uncore_imc_free_running_cache_miss,
                },
        },
        {
                .pmu = {
                        .name = (char *)"uncore_imc_0",
                        .is_uncore = 1,
                },
                .aliases = {
                        &uncore_imc_cache_hits,
                },
        },
        {
                .pmu = {
                        .name = (char *)"uncore_sys_ddr_pmu0",
                        .is_uncore = 1,
                        .id = (char *)"v8",
                },
                .aliases = {
                        &sys_ddr_pmu_write_cycles,
                },
        },
        {
                .pmu = {
                        .name = (char *)"uncore_sys_ccn_pmu4",
                        .is_uncore = 1,
                        .id = (char *)"0x01",
                },
                .aliases = {
                        &sys_ccn_pmu_read_cycles,
                },
        },
};

/* Test that aliases generated are as expected */
static int test__aliases(struct test_suite *test __maybe_unused,
                         int subtest __maybe_unused)
{
        struct perf_pmu *pmu = NULL;
        unsigned long i;

        while ((pmu = perf_pmus__scan_core(pmu)) != NULL) {
                int count = 0;

                if (list_empty(&pmu->format)) {
                        pr_debug2("skipping testing core PMU %s\n", pmu->name);
                        continue;
                }

                if (__test_core_pmu_event_aliases(pmu->name, &count)) {
                        pr_debug("testing core PMU %s aliases: failed\n", pmu->name);
                        return -1;
                }

                if (count == 0) {
                        pr_debug("testing core PMU %s aliases: no events to match\n",
                                 pmu->name);
                        return -1;
                }
                pr_debug("testing core PMU %s aliases: pass\n", pmu->name);
        }

        for (i = 0; i < ARRAY_SIZE(test_pmus); i++) {
                int res = __test_uncore_pmu_event_aliases(&test_pmus[i]);

                if (res)
                        return res;
        }

        return 0;
}

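/* Check whether strtod() can parse the start of the string as a number. */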
static bool is_number(const char *str)
{
        char *end_ptr;
        double v;

        errno = 0;
        v = strtod(str, &end_ptr);
        (void)v; // We're not interested in this value, only if it is valid
        return errno == 0 && end_ptr != str;
}

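/*
 * Check that an id from a metric expression parses as an event. Plain
 * numbers are accepted as-is; everything else goes through parse_events.
 */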
static int check_parse_id(const char *id, struct parse_events_error *error,
                          struct perf_pmu *fake_pmu)
{
        struct evlist *evlist;
        int ret;
        char *dup, *cur;

        /* Numbers are always valid. */
        if (is_number(id))
                return 0;

        evlist = evlist__new();
        if (!evlist)
                return -ENOMEM;

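        /*
         * Metric ids use '@' where an event string would use '/'; copy the
         * id and convert it back before handing it to the event parser.
         */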
        dup = strdup(id);
        if (!dup)
                return -ENOMEM;

        for (cur = strchr(dup, '@'); cur; cur = strchr(++cur, '@'))
                *cur = '/';

        ret = __parse_events(evlist, dup, /*pmu_filter=*/NULL, error, fake_pmu,
                             /*warn_if_reordered=*/true);
        free(dup);

        evlist__delete(evlist);
        return ret;
}

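/* Parse an id with the fake PMU, so the result doesn't depend on host PMUs. */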
static int check_parse_fake(const char *id)
{
        struct parse_events_error error;
        int ret;

        parse_events_error__init(&error);
        ret = check_parse_id(id, &error, &perf_pmu__fake);
        parse_events_error__exit(&error);

        return ret;
}

struct metric {
        struct list_head list;
        struct metric_ref metric_ref;
};

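/*
 * For each test metric: parse it into an evlist, give every event a dummy
 * count and check that the metric expression computes to a value.
 */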
static int test__parsing_callback(const struct pmu_metric *pm,
                                  const struct pmu_metrics_table *table,
                                  void *data)
{
        int *failures = data;
        int k;
        struct evlist *evlist;
        struct perf_cpu_map *cpus;
        struct evsel *evsel;
        struct rblist metric_events = {
                .nr_entries = 0,
        };
        int err = 0;
perf test: Improve pmu event metric testing
Break pmu-events test into 2 and add a test to verify that all pmu
metric expressions simply parse. Try to parse all metric ids/events,
skip/warn if metrics for the current architecture fail to parse. To
support warning for a skip, and an ability for a subtest to describe why
it skips.
Tested on power9, skylakex, haswell, broadwell, westmere, sandybridge and
ivybridge.
May skip/warn on other architectures if metrics are invalid. In
particular s390 is untested, but its expressions are trivial. The
untested architectures with expressions are power8, cascadelakex,
tremontx, skylake, jaketown, ivytown and variants of haswell and
broadwell.
v3. addresses review comments from John Garry <john.garry@huawei.com>,
Jiri Olsa <jolsa@redhat.com> and Arnaldo Carvalho de Melo
<acme@kernel.org>.
v2. changes the commit message as event parsing errors no longer cause
the test to fail.
Committer notes:
Check the return value of strtod() to fix the build in systems where
that function is declared with attribute warn_unused_result.
Signed-off-by: Ian Rogers <irogers@google.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: John Garry <john.garry@huawei.com>
Cc: Kajol Jain <kjain@linux.ibm.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Leo Yan <leo.yan@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Clarke <pc@us.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20200513212933.41273-1-irogers@google.com
[ split from a larger patch ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-13 14:29:33 -07:00
2023-01-26 15:36:34 -08:00
if ( ! pm - > metric_expr )
2022-08-12 16:09:42 -07:00
return 0 ;
2023-01-26 15:36:34 -08:00
pr_debug ( " Found metric '%s' \n " , pm - > metric_name ) ;
2022-08-12 16:09:43 -07:00
( * failures ) + + ;
2022-08-12 16:09:42 -07:00
2022-08-12 16:09:43 -07:00
/*
* We need to prepare evlist for stat mode running on CPU 0
* because that ' s where all the stats are going to be created .
*/
evlist = evlist__new ( ) ;
if ( ! evlist )
return - ENOMEM ;
cpus = perf_cpu_map__new ( " 0 " ) ;
if ( ! cpus ) {
evlist__delete ( evlist ) ;
return - ENOMEM ;
2021-09-23 00:46:04 -07:00
}
2022-08-12 16:09:41 -07:00
2022-08-12 16:09:43 -07:00
perf_evlist__set_maps ( & evlist - > core , cpus , NULL ) ;
2023-02-19 01:28:35 -08:00
err = metricgroup__parse_groups_test ( evlist , table , pm - > metric_name , & metric_events ) ;
2022-08-12 16:09:43 -07:00
if ( err ) {
2023-01-26 15:36:34 -08:00
if ( ! strcmp ( pm - > metric_name , " M1 " ) | | ! strcmp ( pm - > metric_name , " M2 " ) | |
! strcmp ( pm - > metric_name , " M3 " ) ) {
2022-08-12 16:09:43 -07:00
( * failures ) - - ;
2023-01-26 15:36:34 -08:00
pr_debug ( " Expected broken metric %s skipping \n " , pm - > metric_name ) ;
2022-08-12 16:09:43 -07:00
err = 0 ;
}
goto out_err ;
2022-08-12 16:09:42 -07:00
}
perf test: Improve pmu event metric testing
Break pmu-events test into 2 and add a test to verify that all pmu
metric expressions simply parse. Try to parse all metric ids/events,
skip/warn if metrics for the current architecture fail to parse. To
support warning for a skip, and an ability for a subtest to describe why
it skips.
Tested on power9, skylakex, haswell, broadwell, westmere, sandybridge and
ivybridge.
May skip/warn on other architectures if metrics are invalid. In
particular s390 is untested, but its expressions are trivial. The
untested architectures with expressions are power8, cascadelakex,
tremontx, skylake, jaketown, ivytown and variants of haswell and
broadwell.
v3. addresses review comments from John Garry <john.garry@huawei.com>,
Jiri Olsa <jolsa@redhat.com> and Arnaldo Carvalho de Melo
<acme@kernel.org>.
v2. changes the commit message as event parsing errors no longer cause
the test to fail.
Committer notes:
Check the return value of strtod() to fix the build in systems where
that function is declared with attribute warn_unused_result.
Signed-off-by: Ian Rogers <irogers@google.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: John Garry <john.garry@huawei.com>
Cc: Kajol Jain <kjain@linux.ibm.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Leo Yan <leo.yan@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Clarke <pc@us.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20200513212933.41273-1-irogers@google.com
[ split from a larger patch ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-13 14:29:33 -07:00
2022-10-17 19:02:15 -07:00
err = evlist__alloc_stats ( /*config=*/ NULL , evlist , /*alloc_raw=*/ false ) ;
2022-08-12 16:09:43 -07:00
if ( err )
goto out_err ;
2022-08-12 16:09:42 -07:00
/*
* Add all ids with a made up value . The value may trigger divide by
* zero when subtracted and so try to make them unique .
*/
k = 1 ;
2023-02-19 01:28:46 -08:00
evlist__alloc_aggr_stats ( evlist , 1 ) ;
2022-08-12 16:09:43 -07:00
evlist__for_each_entry ( evlist , evsel ) {
2023-02-19 01:28:46 -08:00
evsel - > stats - > aggr - > counts . val = k ;
2023-04-20 15:54:11 -03:00
if ( evsel__name_is ( evsel , " duration_time " ) )
2022-08-12 16:09:43 -07:00
update_stats ( & walltime_nsecs_stats , k ) ;
k + + ;
2022-08-12 16:09:42 -07:00
}
2022-08-12 16:09:43 -07:00
evlist__for_each_entry ( evlist , evsel ) {
struct metric_event * me = metricgroup__lookup ( & metric_events , evsel , false ) ;
2022-08-12 16:09:43 -07:00
if (me != NULL) {
struct metric_expr *mexp;
2021-04-07 18:32:46 +08:00
2022-08-12 16:09:43 -07:00
list_for_each_entry(mexp, &me->head, nd) {
2023-01-26 15:36:34 -08:00
if (strcmp(mexp->metric_name, pm->metric_name))
2022-08-12 16:09:43 -07:00
continue;
2023-02-19 01:28:44 -08:00
pr_debug("Result %f\n", test_generic_metric(mexp, 0));
2022-08-12 16:09:43 -07:00
err = 0;
(*failures)--;
goto out_err;
}
}
}
2023-01-26 15:36:34 -08:00
pr_debug("Didn't find parsed metric %s", pm->metric_name);
2022-08-12 16:09:43 -07:00
err = 1;
out_err:
if (err)
2023-01-26 15:36:34 -08:00
pr_debug("Broken metric %s\n", pm->metric_name);
2022-08-12 16:09:43 -07:00
/* ... cleanup. */
metricgroup__rblist_exit(&metric_events);
evlist__free_stats(evlist);
perf_cpu_map__put(cpus);
evlist__delete(evlist);
return err;
2022-08-12 16:09:42 -07:00
}
static int test__parsing(struct test_suite *test __maybe_unused,
	int subtest __maybe_unused)
{
2022-08-12 16:09:43 -07:00
int failures = 0;
2022-08-12 16:09:42 -07:00
2023-01-26 15:36:34 -08:00
pmu_for_each_core_metric(test__parsing_callback, &failures);
pmu_for_each_sys_metric(test__parsing_callback, &failures);
2022-08-12 16:09:42 -07:00
2022-08-12 16:09:43 -07:00
return failures == 0 ? TEST_OK : TEST_FAIL;
}
2020-06-03 12:51:15 -03:00
struct test_metric {
const char *str;
};
static struct test_metric metrics[] = {
{ "(unc_p_power_state_occupancy.cores_c0 / unc_p_clockticks) * 100." },
{ "imx8_ddr0@read\\-cycles@ * 4 * 4", },
{ "imx8_ddr0@axid\\-read\\,axi_mask\\=0xffff\\,axi_id\\=0x0000@ * 4", },
{ "(cstate_pkg@c2\\-residency@ / msr@tsc@) * 100", },
{ "(imx8_ddr0@read\\-cycles@ + imx8_ddr0@write\\-cycles@)", },
};
2022-12-14 22:47:24 -08:00
static int metric_parse_fake(const char *metric_name, const char *str)
2020-06-03 12:51:15 -03:00
{
2021-09-23 00:46:04 -07:00
struct expr_parse_ctx *ctx;
2020-06-03 12:51:15 -03:00
struct hashmap_entry *cur;
double result;
int ret = -1;
size_t bkt;
int i;
2022-12-14 22:47:24 -08:00
pr_debug("parsing '%s': '%s'\n", metric_name, str);
2020-06-03 12:51:15 -03:00
2021-09-23 00:46:04 -07:00
ctx = expr__ctx_new();
if (!ctx) {
pr_debug("expr__ctx_new failed");
return TEST_FAIL;
}
2023-01-26 15:36:42 -08:00
ctx->sctx.is_test = true;
2021-10-15 10:21:16 -07:00
if (expr__find_ids(str, NULL, ctx) < 0) {
2021-09-23 00:46:10 -07:00
pr_err("expr__find_ids failed\n");
2020-06-03 12:51:15 -03:00
return -1;
}
/*
 * Add all ids with a made up value. The value may
 * trigger divide by zero when subtracted and so try to
 * make them unique.
 */
i = 1;
2021-09-23 00:46:04 -07:00
hashmap__for_each_entry(ctx->ids, cur, bkt)
libbpf: Hashmap interface update to allow both long and void* keys/values
An update for libbpf's hashmap interface from void* -> void* to a
polymorphic one, allowing both long and void* keys and values.
This simplifies many use cases in libbpf as hashmaps there are mostly
integer to integer.
Perf copies hashmap implementation from libbpf and has to be
updated as well.
Changes to libbpf, selftests/bpf and perf are packed as a single
commit to avoid compilation issues with any future bisect.
Polymorphic interface is achieved by hiding hashmap interface
functions behind auxiliary macros that take care of necessary
type casts, for example:
#define hashmap_cast_ptr(p) \
({ \
_Static_assert((p) == NULL || sizeof(*(p)) == sizeof(long),\
#p " pointee should be a long-sized integer or a pointer"); \
(long *)(p); \
})
bool hashmap_find(const struct hashmap *map, long key, long *value);
#define hashmap__find(map, key, value) \
hashmap_find((map), (long)(key), hashmap_cast_ptr(value))
- hashmap__find macro casts key and value parameters to long
and long* respectively
- hashmap_cast_ptr ensures that value pointer points to a memory
of appropriate size.
This hack was suggested by Andrii Nakryiko in [1].
This is a follow up for [2].
[1] https://lore.kernel.org/bpf/CAEf4BzZ8KFneEJxFAaNCCFPGqp20hSpS2aCj76uRk3-qZUH5xg@mail.gmail.com/
[2] https://lore.kernel.org/bpf/af1facf9-7bc8-8a3d-0db4-7b3f333589a2@meta.com/T/#m65b28f1d6d969fcd318b556db6a3ad499a42607d
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20221109142611.879983-2-eddyz87@gmail.com
2022-11-09 16:26:09 +02:00
expr__add_id_val(ctx, strdup(cur->pkey), i++);
2020-06-03 12:51:15 -03:00
2021-09-23 00:46:04 -07:00
hashmap__for_each_entry(ctx->ids, cur, bkt) {
if (check_parse_fake(cur->pkey)) {
2020-06-03 12:51:15 -03:00
pr_err("check_parse_fake failed\n");
goto out;
}
}
2021-12-23 10:56:22 -08:00
ret = 0;
if (expr__parse(&result, ctx, str)) {
/*
 * Parsing failed, make numbers go from large to small which can
 * resolve divide by zero issues.
 */
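/*
 * Illustrative example (made-up ids): if the first pass seeded a = 1
 * and b = 2, a denominator such as 2 * a - b was zero; re-seeding
 * downward from 1024 gives a = 1024 and b = 1023, so 2 * a - b = 1025
 * and the evaluation can proceed.
 */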
i = 1024;
hashmap__for_each_entry(ctx->ids, cur, bkt)
expr__add_id_val(ctx, strdup(cur->pkey), i--);
2021-12-23 10:56:22 -08:00
if (expr__parse(&result, ctx, str)) {
2022-12-14 22:47:24 -08:00
pr_err("expr__parse failed for %s\n", metric_name);
/* The following have hard-to-avoid divide by zero. */
if (!strcmp(metric_name, "tma_clears_resteers") ||
    !strcmp(metric_name, "tma_mispredicts_resteers"))
ret = 0;
else
ret = -1;
2021-12-23 10:56:22 -08:00
}
}
2020-06-03 12:51:15 -03:00
out:
2021-09-23 00:46:04 -07:00
expr__ctx_free(ctx);
2020-06-03 12:51:15 -03:00
return ret;
}
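The libbpf note above describes the macro layer that lets the loops in metric_parse_fake() pass string keys and plain integer values through a single hashmap interface. The following is a rough, self-contained sketch of that pattern, not part of this test: it assumes perf's copied hashmap header and tools headers, and the example_* helper names are made up for illustration.

#include <string.h>
#include <linux/compiler.h>
#include <linux/err.h>
#include "util/debug.h"
#include "util/hashmap.h"

/* Hypothetical helpers for the example: hash and compare C-string keys. */
static size_t example_hash(long key, void *ctx __maybe_unused)
{
	const char *s = (const char *)key;
	size_t h = 0;

	while (*s)
		h = h * 31 + *s++;
	return h;
}

static bool example_equal(long key1, long key2, void *ctx __maybe_unused)
{
	return !strcmp((const char *)key1, (const char *)key2);
}

static void example_usage(void)
{
	struct hashmap *map = hashmap__new(example_hash, example_equal, /*ctx=*/NULL);
	long val;

	if (IS_ERR(map))
		return;
	/* The macros cast both the string key and the integer value to long. */
	hashmap__add(map, "instructions", 42);
	if (hashmap__find(map, "instructions", &val))
		pr_debug("instructions -> %ld\n", val);
	hashmap__free(map);
}

The same casting is what allows expr__add_id_val() above to stash small integers as hashmap values while the keys remain metric id strings.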
2023-01-26 15:36:34 -08:00
static int test__parsing_fake_callback(const struct pmu_metric *pm,
2023-01-26 15:36:39 -08:00
	const struct pmu_metrics_table *table __maybe_unused,
2022-08-12 16:09:42 -07:00
	void *data __maybe_unused)
{
2023-01-26 15:36:34 -08:00
return metric_parse_fake(pm->metric_name, pm->metric_expr);
2022-08-12 16:09:42 -07:00
}
2020-06-03 12:51:15 -03:00
/*
 * Parse all the metrics for the current architecture, or for all
 * defined CPUs via the 'fake_pmu' in parse_events.
 */
2021-11-03 23:41:56 -07:00
static int test__parsing_fake(struct test_suite *test __maybe_unused,
	int subtest __maybe_unused)
2020-06-03 12:51:15 -03:00
{
int err = 0;
2022-08-12 16:09:42 -07:00
for (size_t i = 0; i < ARRAY_SIZE(metrics); i++) {
2022-12-14 22:47:24 -08:00
err = metric_parse_fake("", metrics[i].str);
2020-06-03 12:51:15 -03:00
if (err)
return err;
}
2023-01-26 15:36:34 -08:00
err = pmu_for_each_core_metric(test__parsing_fake_callback, NULL);
2022-08-12 16:09:42 -07:00
if (err)
return err;
2022-08-12 16:09:41 -07:00
2023-01-26 15:36:34 -08:00
return pmu_for_each_sys_metric(test__parsing_fake_callback, NULL);
2020-06-03 12:51:15 -03:00
}
2023-02-19 01:28:31 -08:00
static int test__parsing_threshold_callback(const struct pmu_metric *pm,
	const struct pmu_metrics_table *table __maybe_unused,
	void *data __maybe_unused)
{
if (!pm->metric_threshold)
return 0;
return metric_parse_fake(pm->metric_name, pm->metric_threshold);
}
static int test__parsing_threshold(struct test_suite *test __maybe_unused,
	int subtest __maybe_unused)
{
int err = 0;
err = pmu_for_each_core_metric(test__parsing_threshold_callback, NULL);
if (err)
return err;
return pmu_for_each_sys_metric(test__parsing_threshold_callback, NULL);
}
2021-11-03 23:41:56 -07:00
static struct test_case pmu_events_tests[] = {
TEST_CASE("PMU event table sanity", pmu_event_table),
TEST_CASE("PMU event map aliases", aliases),
TEST_CASE_REASON("Parsing of PMU event table metrics", parsing,
	"some metrics failed"),
TEST_CASE("Parsing of PMU event table metrics with fake PMUs", parsing_fake),
2023-02-19 01:28:31 -08:00
TEST_CASE("Parsing of metric thresholds with fake PMUs", parsing_threshold),
2021-11-03 23:41:56 -07:00
{ .name = NULL, }
};
2021-11-03 23:41:51 -07:00
struct test_suite suite__pmu_events = {
2021-11-03 23:41:50 -07:00
.desc = "PMU events",
2021-11-03 23:41:56 -07:00
.test_cases = pmu_events_tests,
2021-11-03 23:41:50 -07:00
};