SOURCES/oprofile-silvermont.patch

commit 4b1497d8befcc4c8b26dc4e4866c3422ae8787c3
Author: Andi Kleen <ak@linux.intel.com>
Date:   Thu Oct 10 13:12:28 2013 -0500
    Add support for Intel Silvermont processor
    
    Just add the event list for Intel Silvermont based systems
    (Avoton, BayTrail) and the usual changes for a new CPU.
    No new code otherwise.
    
    The model number list is incomplete at this point, more will
    be added in the future.
    
    I also finally removed the top level event list descriptions.
    All the events are only described in the unit masks now
    (Intel doesn't really have a top level event, and I had
    to invent descriptions, which was error prone and
    often wrong)
    
    I also removed some outdated document number references.
    
    Signed-off-by: Andi Kleen <ak@linux.intel.com>
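
A note on the format of the event list that follows: each non-comment line in the new events file uses the same field layout. The example line is copied verbatim from the hunk below; the per-field glosses are a reader's interpretation of oprofile's event-file conventions, not text from the patch.

    event:0xc2 counters:0,1 um:uops_retired minimum:2000003 name:uops_retired :
    # event:    hardware event select code (hex)
    # counters: hardware counters the event may be programmed on
    # um:       unit mask set, defined by name in the matching unit_masks file
    # minimum:  minimum sample count for the event (see the "minimum counts" note in the file header)
    # name:     symbolic event name as reported by ophelp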
diff --git a/events/Makefile.am b/events/Makefile.am
index d91d44b..3028c2f 100644
--- a/events/Makefile.am
+++ b/events/Makefile.am
@@ -21,6 +21,7 @@ event_files = \
 	i386/sandybridge/events i386/sandybridge/unit_masks \
 	i386/ivybridge/events i386/ivybridge/unit_masks \
 	i386/haswell/events i386/haswell/unit_masks \
+	i386/silvermont/events i386/silvermont/unit_masks \
 	ia64/ia64/events ia64/ia64/unit_masks \
 	ia64/itanium2/events ia64/itanium2/unit_masks \
 	ia64/itanium/events ia64/itanium/unit_masks \
diff --git a/events/i386/silvermont/events b/events/i386/silvermont/events
new file mode 100644
index 0000000..077cc0a
--- /dev/null
+++ b/events/i386/silvermont/events
@@ -0,0 +1,26 @@
+#
+# Intel "Silvermont" microarchitecture core events.
+#
+# See http://ark.intel.com/ for help in identifying Silvermont based CPUs
+#
+# Note the minimum counts are not discovered experimentally and could be likely
+# lowered in many cases without ill effect.
+#
+include:i386/arch_perfmon
+event:0x32 counters:0,1 um:l2_prefetcher_throttle minimum:200003 name:l2_prefetcher_throttle :
+event:0x3e counters:0,1 um:one minimum:200003 name:l2_prefetcher_pref_stream_alloc :
+event:0x50 counters:0,1 um:zero minimum:200003 name:l2_prefetch_pend_streams_pref_stream_pend_set :
+event:0x86 counters:0,1 um:nip_stall minimum:200003 name:nip_stall :
+event:0x87 counters:0,1 um:decode_stall minimum:200003 name:decode_stall :
+event:0x96 counters:0,1 um:uip_match minimum:200003 name:uip_match :
+event:0xc2 counters:0,1 um:uops_retired minimum:2000003 name:uops_retired :
+event:0xc3 counters:0,1 um:x10 minimum:200003 name:machine_clears_live_lock_breaker :
+event:0xc4 counters:0,1 um:br_inst_retired minimum:2000003 name:br_inst_retired :
+event:0xc5 counters:0,1 um:br_misp_retired minimum:200003 name:br_misp_retired :
+event:0xca counters:0,1 um:no_alloc_cycles minimum:200003 name:no_alloc_cycles :
+event:0xcb counters:0,1 um:rs_full_stall minimum:200003 name:rs_full_stall :
+event:0xcc counters:0,1 um:rs_dispatch_stall minimum:200003 name:rs_dispatch_stall :
+event:0xe6 counters:0,1 um:baclears minimum:2000003 name:baclears :
+event:0xe7 counters:0,1 um:x02 minimum:200003 name:ms_decoded_early_exit :
+event:0xe8 counters:0,1 um:one minimum:200003 name:btclears_all :
+event:0xe9 counters:0,1 um:decode_restriction minimum:200003 name:decode_restriction :
diff --git a/events/i386/silvermont/unit_masks b/events/i386/silvermont/unit_masks
new file mode 100644
index 0000000..6309282
--- /dev/null
+++ b/events/i386/silvermont/unit_masks
@@ -0,0 +1,71 @@
+#
+# Unit masks for the Intel "Silvermont" micro architecture
+#
+# See http://ark.intel.com/ for help in identifying Silvermont based CPUs
+#
+include:i386/arch_perfmon
+name:x02 type:mandatory default:0x2
+	0x2 No unit mask
+name:x10 type:mandatory default:0x10
+	0x10 No unit mask
+name:l2_prefetcher_throttle type:exclusive default:0x2
+	0x2 extra:edge conservative Counts the number of cycles the L2 prefetcher spends in throttling mode
+	0x1 extra:edge aggressive Counts the number of cycles the L2 prefetcher spends in throttling mode
+name:nip_stall type:exclusive default:0x3f
+	0x3f extra: all Counts the number of cycles the NIP stalls.
+	0x1 extra: pfb_full Counts the number of cycles the NIP stalls and the PFBs are full.   This DOES NOT inlude PFB throttler cases.
+	0x2 extra: itlb_miss Counts the number of cycles the NIP stalls and there is an outstanding ITLB miss. This is a cummulative count of cycles the NIP stalled for all ITLB misses.
+	0x8 extra: pfb_throttler Counts the number of cycles the NIP stalls, the throttler is engaged, and the PFBs appear full.
+	0x10 extra: do_snoop Counts the number of cycles the NIP stalls because of a SMC compliance snoop to the MEC is required.
+	0x20 extra: misc_other Counts the number of cycles the NIP stalls due to NUKE, Stop Front End, Inserted flows.
+	0x1e extra: pfb_ready Counts the number of cycles the NIP stalls when the PFBs are not full and the decoders are able to process bytes.  Does not count PFB_FULL nor MISC_OTHER stall cycles.
+name:decode_stall type:exclusive default:0x1
+	0x1 extra: pfb_empty Counts the number of cycles decoder is stalled because the PFB is empty, this count is useful to see if the decoder is receiving the bytes from the front end. This event together with the DECODE_STALL.IQ_FULL may be used to narrow down on the bottleneck.
+	0x2 extra: iq_full Counts the number of cycles decoder is stalled because the IQ is full, this count is useful to see if the decoder is delivering the decoded uops. This event together with the DECODE_STALL.PFB_EMPTY may be used to narrow down on the bottleneck.
+name:uip_match type:exclusive default:0x1
+	0x1 extra: first_uip This event is used for counting the number of times a specific micro IP address was decoded
+	0x2 extra: second_uip This event is used for counting the number of times a specific micro IP address was decoded
+name:uops_retired type:exclusive default:0x2
+	0x2 extra: x87 This event counts the number of micro-ops retired that used X87 hardware.
+	0x4 extra: mul This event counts the number of micro-ops retired that used MUL hardware.
+	0x8 extra: div This event counts the number of micro-ops retired that used DIV hardware.
+	0x1 extra: ms_cyles Counts the number of uops that are from the complex flows issued by the micro-sequencer (MS).  This includes uops from flows due to faults, assists, and inserted flows.
+name:br_inst_retired type:exclusive default:0x1
+	0x1 extra: remove_jcc REMOVE_JCC counts the number of branch instructions retired but removes taken and not taken conditional branches (JCC).  Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
+	0x2 extra: remove_rel_call REMOVE_REL_CALL counts the number of branch instructions retired but removes near relative CALL.  Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
+	0x4 extra: remove_ind_call REMOVE_IND_CALL counts the number of branch instructions retired but removes near indirect CALL. Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
+	0x8 extra: remove_ret REMOVE_RET counts the number of branch instructions retired but removes near RET.  Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
+	0x10 extra: remove_ind_jmp REMOVE_IND_JMP counts the number of branch instructions retired but removes near indirect JMP.  Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
+	0x20 extra: remove_rel_jmp REMOVE_REL_JMP counts the number of branch instructions retired but removes near relative JMP.  Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
+	0x40 extra: remove_far REMOVE_FAR counts the number of branch instructions retired but removes all far branches.  Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
+	0x80 extra: remove_not_taken_jcc REMOVE_NOT_TAKEN_JCC counts the number of branch instructions retired but removes taken conditional branches (JCC).  Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
+name:br_misp_retired type:exclusive default:0x1
+	0x1 extra: remove_jcc REMOVE_JCC counts the number of mispredicted branch instructions retired but removes taken and not taken conditional branches (JCC).  This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa.  When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
+	0x4 extra: remove_ind_call REMOVE_IND_CALL Counts the number of mispredicted branch instructions retired but removes near indirect CALL.  This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa.  When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
+	0x8 extra: remove_ret REMOVE_RET Counts the number of mispredicted branch instructions retired but removes near RET.  This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa.  When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
+	0x10 extra: remove_ind_jmp REMOVE_IND_JMP counts the number of mispredicted branch instructions retired but removes near indirect JMP.  This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa.  When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
+	0x80 extra: remove_not_taken_jcc REMOVE_NOT_TAKEN_JCC counts the number of mispredicted branch instructions retired but removes taken conditional branches (JCC).  This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa.  When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
+name:no_alloc_cycles type:exclusive default:0x3f
+	0x3f extra:inv all Counts the number of cycles that uops are allocated (inverse of NO_ALLOC_CYCLES.ALL)
+	0x2 extra: sd_buffer_full Counts the number of cycles when no uops are allocated and the store data buffer is full.
+	0x4 extra: mispredicts Counts the number of cycles when no uops are allocated and the alloc pipe is stalled waiting for a mispredicted jump to retire.  After the misprediction is detected, the front end will start immediately but the allocate pipe stalls until the mispredicted
+	0x8 extra: scoreboard Counts the number of cycles when no uops are allocated and a microcode IQ-based scoreboard stall is active. This includes stalls due to both the retirement scoreboard (at-ret) and micro-Jcc execution scoreboard (at-jeu).  Does not count cycles when the MS
+	0x10 extra: iq_empty Counts the number of cycles when no uops are allocated and the IQ is empty.  Will assert immediately after a mispredict and partially overlap with MISPREDICTS sub event.
+name:rs_full_stall type:exclusive default:0x2
+	0x2 extra: iec_port0 Counts the number of cycles the Alloc pipeline is stalled because IEC RS for port 0 is full.
+	0x4 extra: iec_port1 Counts the number of cycles the Alloc pipeline is stalled because IEC RS for port 1 is full.
+	0x8 extra: fpc_port0 Counts the number of cycles the Alloc pipeline is stalled because FPC RS for port 0 is full.
+	0x10 extra: fpc_port1 Counts the number of cycles the Alloc pipeline is stalled because FPC RS for port 1 is full.
+name:rs_dispatch_stall type:exclusive default:0x1
+	0x1 extra: iec0_rs *COUNTER BROKEN - NO FIX* Counts cycles when no uops were disptached from port 0 of IEC RS while the RS had valid ops left to dispatch
+	0x2 extra: iec1_rs *COUNTER BROKEN - NO FIX* Counts cycles when no uops were disptached from port 1 of IEC RS while the RS had valid ops left to dispatch
+	0x4 extra: fpc0_rs Counts cycles when no uops were disptached from port 0 of FPC RS while the RS had valid ops left to dispatch
+	0x8 extra: fpc1_rs Counts cycles when no uops were disptached from port 1 of FPC RS while the RS had valid ops left to dispatch
+	0x10 extra: mec_rs Counts cycles when no uops were dispatched from the MEC RS or rehab queue while valid ops were left to dispatch
+name:baclears type:exclusive default:0x2
+	0x2 extra: indirect Counts the number indirect branch baclears
+	0x4 extra: uncond Counts the number unconditional branch baclears
+	0x1e extra: no_corner_case sum of submasks [4:1].  Does not count special case baclears due to things like parity errors, bogus branches, and pd$ issues.
+name:decode_restriction type:exclusive default:0x1
+	0x1 extra: pdcache_wrong Counts the number of times a decode restriction reduced the decode throughput due to wrong instruction length prediction
+	0x2 extra: all_3cycle_resteers Counts the number of times a decode restriction reduced the decode throughput because of all 3 cycle resteer conditions.  Mainly PDCACHE_WRONG and MS_ENTRY cases.
diff --git a/libop/op_cpu_type.c b/libop/op_cpu_type.c
index badb7ba..4bb34b7 100644
--- a/libop/op_cpu_type.c
+++ b/libop/op_cpu_type.c
@@ -127,6 +127,7 @@ static struct cpu_descr const cpu_descrs[MAX_CPU_TYPE] = {
 	{ "AMD64 generic", "x86-64/generic", CPU_AMD64_GENERIC, 4 },
 	{ "IBM Power Architected Events V1", "ppc64/architected_events_v1", CPU_PPC64_ARCH_V1, 6 },
 	{ "ppc64 POWER8", "ppc64/power8", CPU_PPC64_POWER8, 6 },
+	{ "Intel Silvermont microarchitecture", "i386/silvermont", CPU_SILVERMONT, 2 },
 };
  
 static size_t const nr_cpu_descrs = sizeof(cpu_descrs) / sizeof(struct cpu_descr);
@@ -644,6 +645,7 @@ op_cpu op_cpu_base_type(op_cpu cpu_type)
 	case CPU_ATOM:
 	case CPU_NEHALEM:
 	case CPU_HASWELL:
+	case CPU_SILVERMONT:
 	case CPU_WESTMERE:
 	case CPU_SANDYBRIDGE:
 	case CPU_IVYBRIDGE:
diff --git a/libop/op_cpu_type.h b/libop/op_cpu_type.h
index 934fe9e..4703fa9 100644
--- a/libop/op_cpu_type.h
+++ b/libop/op_cpu_type.h
@@ -107,6 +107,7 @@ typedef enum {
 	CPU_AMD64_GENERIC, /**< AMD64 Generic */
 	CPU_PPC64_ARCH_V1, /** < IBM Power architected events version 1 */
 	CPU_PPC64_POWER8, /**< ppc64 POWER8 family */
+	CPU_SILVERMONT, /** < Intel Silvermont microarchitecture */
 	MAX_CPU_TYPE
 } op_cpu;
 
diff --git a/libop/op_events.c b/libop/op_events.c
index 9d2aa5e..39c710d 100644
--- a/libop/op_events.c
+++ b/libop/op_events.c
@@ -1201,6 +1201,7 @@ void op_default_event(op_cpu cpu_type, struct op_default_event_descr * descr)
  		case CPU_CORE_I7:
 		case CPU_NEHALEM:
 		case CPU_HASWELL:
+		case CPU_SILVERMONT:
 		case CPU_WESTMERE:
 		case CPU_SANDYBRIDGE:
 		case CPU_IVYBRIDGE:
diff --git a/libop/op_hw_specific.h b/libop/op_hw_specific.h
index 6ae19bc..e86dcae 100644
--- a/libop/op_hw_specific.h
+++ b/libop/op_hw_specific.h
@@ -150,6 +150,9 @@ static inline op_cpu op_cpu_specific_type(op_cpu cpu_type)
 		case 0x46:
 		case 0x47:
 			return CPU_HASWELL;
+		case 0x37:
+		case 0x4d:
+			return CPU_SILVERMONT;
 		}
 	}
 	return cpu_type;
diff --git a/utils/ophelp.c b/utils/ophelp.c
index 3b2896a..7543c6f 100644
--- a/utils/ophelp.c
+++ b/utils/ophelp.c
@@ -551,19 +551,20 @@ int main(int argc, char const * argv[])
 	case CPU_CORE_I7:
 	case CPU_NEHALEM:
 	case CPU_HASWELL:
+	case CPU_SILVERMONT:
 	case CPU_WESTMERE:
 	case CPU_SANDYBRIDGE:
 	case CPU_IVYBRIDGE:
 	case CPU_ATOM:
 		event_doc =
 			"See Intel Architecture Developer's Manual Volume 3B, Appendix A and\n"
-			"Intel Architecture Optimization Reference Manual (730795-001)\n\n";
+			"Intel Architecture Optimization Reference Manual\n\n";
 		break;
 
 	case CPU_ARCH_PERFMON:
 		event_doc =
 			"See Intel 64 and IA-32 Architectures Software Developer's Manual\n"
-			"Volume 3B (Document 253669) Chapter 18 for architectural perfmon events\n"
+			"Volume 3B Chapter 18 for architectural perfmon events\n"
 			"This is a limited set of fallback events because oprofile doesn't know your CPU\n";
 		break;
 	
commit 88779857662560604f85db608cf90f8609e1da6f
Author: Andi Kleen <ak@linux.intel.com>
Date:   Thu Sep 11 09:00:52 2014 -0500
    Update the Silvermont event files
    
    On further review the silvermont event files had a lot of problems.
    I regenerated them completely. This fixes the PEBS events, and
    fixes a range of others.
    
    The test suite passes without problems.
    
    Signed-off-by: Andi Kleen <ak@linux.intel.com>
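
For context on "fixes the PEBS events": in the regenerated unit_masks below, each unit mask that supports precise sampling appears twice, once as a plain entry and once tagged extra:pebs with a _pebs suffix on its name. The two lines here are copied from the mem_uops_retired hunk that follows; the pairing pattern is a reader's observation, not text added to the patch.

    0x2 extra: l2_hit_loads This event counts the number of load ops retired that hit in the L2
    0x2 extra:pebs l2_hit_loads_pebs This event counts the number of load ops retired that hit in the L2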
diff --git a/events/i386/silvermont/events b/events/i386/silvermont/events
index 077cc0a..434538f 100644
--- a/events/i386/silvermont/events
+++ b/events/i386/silvermont/events
@@ -7,20 +7,18 @@
 # lowered in many cases without ill effect.
 #
 include:i386/arch_perfmon
-event:0x32 counters:0,1 um:l2_prefetcher_throttle minimum:200003 name:l2_prefetcher_throttle :
-event:0x3e counters:0,1 um:one minimum:200003 name:l2_prefetcher_pref_stream_alloc :
-event:0x50 counters:0,1 um:zero minimum:200003 name:l2_prefetch_pend_streams_pref_stream_pend_set :
-event:0x86 counters:0,1 um:nip_stall minimum:200003 name:nip_stall :
-event:0x87 counters:0,1 um:decode_stall minimum:200003 name:decode_stall :
-event:0x96 counters:0,1 um:uip_match minimum:200003 name:uip_match :
+event:0x03 counters:0,1 um:rehabq minimum:200003 name:rehabq :
+event:0x04 counters:0,1 um:mem_uops_retired minimum:200003 name:mem_uops_retired :
+event:0x05 counters:0,1 um:page_walks minimum:200003 name:page_walks :
+event:0x30 counters:0,1 um:zero minimum:200003 name:l2_reject_xq_all :
+event:0x31 counters:0,1 um:zero minimum:200003 name:core_reject_l2q_all :
+event:0x80 counters:0,1 um:icache minimum:200003 name:icache :
 event:0xc2 counters:0,1 um:uops_retired minimum:2000003 name:uops_retired :
-event:0xc3 counters:0,1 um:x10 minimum:200003 name:machine_clears_live_lock_breaker :
-event:0xc4 counters:0,1 um:br_inst_retired minimum:2000003 name:br_inst_retired :
+event:0xc3 counters:0,1 um:machine_clears minimum:200003 name:machine_clears :
+event:0xc4 counters:0,1 um:br_inst_retired minimum:200003 name:br_inst_retired :
 event:0xc5 counters:0,1 um:br_misp_retired minimum:200003 name:br_misp_retired :
 event:0xca counters:0,1 um:no_alloc_cycles minimum:200003 name:no_alloc_cycles :
 event:0xcb counters:0,1 um:rs_full_stall minimum:200003 name:rs_full_stall :
-event:0xcc counters:0,1 um:rs_dispatch_stall minimum:200003 name:rs_dispatch_stall :
-event:0xe6 counters:0,1 um:baclears minimum:2000003 name:baclears :
-event:0xe7 counters:0,1 um:x02 minimum:200003 name:ms_decoded_early_exit :
-event:0xe8 counters:0,1 um:one minimum:200003 name:btclears_all :
-event:0xe9 counters:0,1 um:decode_restriction minimum:200003 name:decode_restriction :
+event:0xcd counters:0,1 um:one minimum:2000003 name:cycles_div_busy_all :
+event:0xe6 counters:0,1 um:baclears minimum:200003 name:baclears :
+event:0xe7 counters:0,1 um:one minimum:200003 name:ms_decoded_ms_entry :
diff --git a/events/i386/silvermont/unit_masks b/events/i386/silvermont/unit_masks
index 6309282..c0dac26 100644
--- a/events/i386/silvermont/unit_masks
+++ b/events/i386/silvermont/unit_masks
@@ -4,68 +4,86 @@
 # See http://ark.intel.com/ for help in identifying Silvermont based CPUs
 #
 include:i386/arch_perfmon
-name:x02 type:mandatory default:0x2
-	0x2 No unit mask
-name:x10 type:mandatory default:0x10
-	0x10 No unit mask
-name:l2_prefetcher_throttle type:exclusive default:0x2
-	0x2 extra:edge conservative Counts the number of cycles the L2 prefetcher spends in throttling mode
-	0x1 extra:edge aggressive Counts the number of cycles the L2 prefetcher spends in throttling mode
-name:nip_stall type:exclusive default:0x3f
-	0x3f extra: all Counts the number of cycles the NIP stalls.
-	0x1 extra: pfb_full Counts the number of cycles the NIP stalls and the PFBs are full.   This DOES NOT inlude PFB throttler cases.
-	0x2 extra: itlb_miss Counts the number of cycles the NIP stalls and there is an outstanding ITLB miss. This is a cummulative count of cycles the NIP stalled for all ITLB misses.
-	0x8 extra: pfb_throttler Counts the number of cycles the NIP stalls, the throttler is engaged, and the PFBs appear full.
-	0x10 extra: do_snoop Counts the number of cycles the NIP stalls because of a SMC compliance snoop to the MEC is required.
-	0x20 extra: misc_other Counts the number of cycles the NIP stalls due to NUKE, Stop Front End, Inserted flows.
-	0x1e extra: pfb_ready Counts the number of cycles the NIP stalls when the PFBs are not full and the decoders are able to process bytes.  Does not count PFB_FULL nor MISC_OTHER stall cycles.
-name:decode_stall type:exclusive default:0x1
-	0x1 extra: pfb_empty Counts the number of cycles decoder is stalled because the PFB is empty, this count is useful to see if the decoder is receiving the bytes from the front end. This event together with the DECODE_STALL.IQ_FULL may be used to narrow down on the bottleneck.
-	0x2 extra: iq_full Counts the number of cycles decoder is stalled because the IQ is full, this count is useful to see if the decoder is delivering the decoded uops. This event together with the DECODE_STALL.PFB_EMPTY may be used to narrow down on the bottleneck.
-name:uip_match type:exclusive default:0x1
-	0x1 extra: first_uip This event is used for counting the number of times a specific micro IP address was decoded
-	0x2 extra: second_uip This event is used for counting the number of times a specific micro IP address was decoded
-name:uops_retired type:exclusive default:0x2
-	0x2 extra: x87 This event counts the number of micro-ops retired that used X87 hardware.
-	0x4 extra: mul This event counts the number of micro-ops retired that used MUL hardware.
-	0x8 extra: div This event counts the number of micro-ops retired that used DIV hardware.
-	0x1 extra: ms_cyles Counts the number of uops that are from the complex flows issued by the micro-sequencer (MS).  This includes uops from flows due to faults, assists, and inserted flows.
-name:br_inst_retired type:exclusive default:0x1
-	0x1 extra: remove_jcc REMOVE_JCC counts the number of branch instructions retired but removes taken and not taken conditional branches (JCC).  Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
-	0x2 extra: remove_rel_call REMOVE_REL_CALL counts the number of branch instructions retired but removes near relative CALL.  Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
-	0x4 extra: remove_ind_call REMOVE_IND_CALL counts the number of branch instructions retired but removes near indirect CALL. Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
-	0x8 extra: remove_ret REMOVE_RET counts the number of branch instructions retired but removes near RET.  Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
-	0x10 extra: remove_ind_jmp REMOVE_IND_JMP counts the number of branch instructions retired but removes near indirect JMP.  Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
-	0x20 extra: remove_rel_jmp REMOVE_REL_JMP counts the number of branch instructions retired but removes near relative JMP.  Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
-	0x40 extra: remove_far REMOVE_FAR counts the number of branch instructions retired but removes all far branches.  Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
-	0x80 extra: remove_not_taken_jcc REMOVE_NOT_TAKEN_JCC counts the number of branch instructions retired but removes taken conditional branches (JCC).  Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
-name:br_misp_retired type:exclusive default:0x1
-	0x1 extra: remove_jcc REMOVE_JCC counts the number of mispredicted branch instructions retired but removes taken and not taken conditional branches (JCC).  This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa.  When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
-	0x4 extra: remove_ind_call REMOVE_IND_CALL Counts the number of mispredicted branch instructions retired but removes near indirect CALL.  This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa.  When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
-	0x8 extra: remove_ret REMOVE_RET Counts the number of mispredicted branch instructions retired but removes near RET.  This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa.  When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
-	0x10 extra: remove_ind_jmp REMOVE_IND_JMP counts the number of mispredicted branch instructions retired but removes near indirect JMP.  This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa.  When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
-	0x80 extra: remove_not_taken_jcc REMOVE_NOT_TAKEN_JCC counts the number of mispredicted branch instructions retired but removes taken conditional branches (JCC).  This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa.  When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
+name:rehabq type:exclusive default:0x1
+	0x1 extra: ld_block_st_forward This event counts the number of retired loads that were prohibited from receiving forwarded data from the store because of address mismatch.
+	0x1 extra:pebs ld_block_st_forward_pebs This event counts the number of retired loads that were prohibited from receiving forwarded data from the store because of address mismatch.
+	0x2 extra: ld_block_std_notready This event counts the cases where a forward was technically possible, but did not occur because the store data was not available at the right time
+	0x4 extra: st_splits This event counts the number of retire stores that experienced cache line boundary splits
+	0x8 extra: ld_splits This event counts the number of retire loads that experienced cache line boundary splits
+	0x8 extra:pebs ld_splits_pebs This event counts the number of retire loads that experienced cache line boundary splits
+	0x10 extra: lock This event counts the number of retired memory operations with lock semantics. These are either implicit locked instructions such as the XCHG instruction or instructions with an explicit LOCK prefix (0xF0).
+	0x20 extra: sta_full This event counts the number of retired stores that are delayed because there is not a store address buffer available.
+	0x40 extra: any_ld This event counts the number of load uops reissued from Rehabq
+	0x80 extra: any_st This event counts the number of store uops reissued from Rehabq
+name:mem_uops_retired type:exclusive default:0x1
+	0x1 extra: l1_miss_loads This event counts the number of load ops retired that miss in L1 Data cache. Note that prefetch misses will not be counted.
+	0x2 extra: l2_hit_loads This event counts the number of load ops retired that hit in the L2
+	0x2 extra:pebs l2_hit_loads_pebs This event counts the number of load ops retired that hit in the L2
+	0x4 extra: l2_miss_loads This event counts the number of load ops retired that miss in the L2
+	0x4 extra:pebs l2_miss_loads_pebs This event counts the number of load ops retired that miss in the L2
+	0x8 extra: dtlb_miss_loads This event counts the number of load ops retired that had DTLB miss.
+	0x8 extra:pebs dtlb_miss_loads_pebs This event counts the number of load ops retired that had DTLB miss.
+	0x10 extra: utlb_miss This event counts the number of load ops retired that had UTLB miss.
+	0x20 extra: hitm This event counts the number of load ops retired that got data from the other core or from the other module.
+	0x20 extra:pebs hitm_pebs This event counts the number of load ops retired that got data from the other core or from the other module.
+	0x40 extra: all_loads This event counts the number of load ops retired
+	0x80 extra: all_stores This event counts the number of store ops retired
+name:page_walks type:exclusive default:0x1
+	0x1 extra:edge d_side_walks This event counts when a data (D) page walk is completed or started.  Since a page walk implies a TLB miss, the number of TLB misses can be counted by counting the number of pagewalks.
+	0x1 extra: d_side_cycles This event counts every cycle when a D-side (walks due to a load) page walk is in progress. Page walk duration divided by number of page walks is the average duration of page-walks.
+	0x2 extra:edge i_side_walks This event counts when an instruction (I) page walk is completed or started.  Since a page walk implies a TLB miss, the number of TLB misses can be counted by counting the number of pagewalks.
+	0x2 extra: i_side_cycles This event counts every cycle when a I-side (walks due to an instruction fetch) page walk is in progress. Page walk duration divided by number of page walks is the average duration of page-walks.
+	0x3 extra:edge walks This event counts when a data (D) page walk or an instruction (I) page walk is completed or started.  Since a page walk implies a TLB miss, the number of TLB misses can be counted by counting the number of pagewalks.
+	0x3 extra: cycles This event counts every cycle when a data (D) page walk or instruction (I) page walk is in progress.  Since a pagewalk implies a TLB miss, the approximate cost of a TLB miss can be determined from this event.
+name:icache type:exclusive default:0x3
+	0x3 extra: accesses This event counts all instruction fetches, including uncacheable fetches.
+	0x1 extra: hit This event counts all instruction fetches from the instruction cache.
+	0x2 extra: misses This event counts all instruction fetches that miss the Instruction cache or produce memory requests. This includes uncacheable fetches. An instruction fetch miss is counted only once and not once for every cycle it is outstanding.
+name:uops_retired type:exclusive default:0x10
+	0x10 extra: all This event counts the number of micro-ops retired. The processor decodes complex macro instructions into a sequence of simpler micro-ops. Most instructions are composed of one or two micro-ops. Some instructions are decoded into longer sequences such as repeat instructions, floating point transcendental instructions, and assists. In some cases micro-op sequences are fused or whole instructions are fused into one micro-op. See other UOPS_RETIRED events for differentiating retired fused and non-fused micro-ops.
+	0x1 extra: ms This event counts the number of micro-ops retired that were supplied from MSROM.
+name:machine_clears type:exclusive default:0x8
+	0x8 extra: all Machine clears happen when something happens in the machine that causes the hardware to need to take special care to get the right answer. When such a condition is signaled on an instruction, the front end of the machine is notified that it must restart, so no more instructions will be decoded from the current path.  All instructions "older" than this one will be allowed to finish.  This instruction and all "younger" instructions must be cleared, since they must not be allowed to complete.  Essentially, the hardware waits until the problematic instruction is the oldest instruction in the machine.  This means all older instructions are retired, and all pending stores (from older instructions) are completed.  Then the new path of instructions from the front end are allowed to start into the machine.  There are many conditions that might cause a machine clear (including the receipt of an interrupt, or a trap or a fault).  All those conditions (including but not limited to MACHINE_CLEARS.MEMORY_ORDERING, MACHINE_CLEARS.SMC, and MACHINE_CLEARS.FP_ASSIST) are captured in the ANY event. In addition, some conditions can be specifically counted (i.e. SMC, MEMORY_ORDERING, FP_ASSIST).  However, the sum of SMC, MEMORY_ORDERING, and FP_ASSIST machine clears will not necessarily equal the number of ANY.
+	0x1 extra: smc This event counts the number of times that a program writes to a code section. Self-modifying code causes a severe penalty in all Intel? architecture processors.
+	0x2 extra: memory_ordering This event counts the number of times that pipeline was cleared due to memory ordering issues.
+	0x4 extra: fp_assist This event counts the number of times that pipeline stalled due to FP operations needing assists.
+name:br_inst_retired type:exclusive default:0x7e
+	0x7e extra: jcc JCC counts the number of conditional branch (JCC) instructions retired. Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
+	0x7e extra:pebs jcc_pebs JCC counts the number of conditional branch (JCC) instructions retired. Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
+	0xfe extra: taken_jcc TAKEN_JCC counts the number of taken conditional branch (JCC) instructions retired. Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
+	0xfe extra:pebs taken_jcc_pebs TAKEN_JCC counts the number of taken conditional branch (JCC) instructions retired. Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
+	0xf9 extra: call CALL counts the number of near CALL branch instructions retired.  Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
+	0xf9 extra:pebs call_pebs CALL counts the number of near CALL branch instructions retired.  Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
+	0xfd extra: rel_call REL_CALL counts the number of near relative CALL branch instructions retired.  Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
+	0xfd extra:pebs rel_call_pebs REL_CALL counts the number of near relative CALL branch instructions retired.  Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
+	0xfb extra: ind_call IND_CALL counts the number of near indirect CALL branch instructions retired.  Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
+	0xfb extra:pebs ind_call_pebs IND_CALL counts the number of near indirect CALL branch instructions retired.  Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
+	0xf7 extra: return RETURN counts the number of near RET branch instructions retired.  Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
+	0xf7 extra:pebs return_pebs RETURN counts the number of near RET branch instructions retired.  Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
+	0xeb extra: non_return_ind NON_RETURN_IND counts the number of near indirect JMP and near indirect CALL branch instructions retired.  Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
+	0xeb extra:pebs non_return_ind_pebs NON_RETURN_IND counts the number of near indirect JMP and near indirect CALL branch instructions retired.  Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
+	0xbf extra: far_branch FAR counts the number of far branch instructions retired.  Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
+	0xbf extra:pebs far_branch_pebs FAR counts the number of far branch instructions retired.  Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
+name:br_misp_retired type:exclusive default:0x7e
+	0x7e extra: jcc JCC counts the number of mispredicted conditional branches (JCC) instructions retired.  This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa.  When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
+	0x7e extra:pebs jcc_pebs JCC counts the number of mispredicted conditional branches (JCC) instructions retired.  This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa.  When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
+	0xfe extra: taken_jcc TAKEN_JCC counts the number of mispredicted taken conditional branch (JCC) instructions retired.  This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa.  When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
+	0xfe extra:pebs taken_jcc_pebs TAKEN_JCC counts the number of mispredicted taken conditional branch (JCC) instructions retired.  This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa.  When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
+	0xfb extra: ind_call IND_CALL counts the number of mispredicted near indirect CALL branch instructions retired.  This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa.  When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
d5df0a
+	0xfb extra:pebs ind_call_pebs IND_CALL counts the number of mispredicted near indirect CALL branch instructions retired.  This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa.  When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
d5df0a
+	0xf7 extra: return RETURN counts the number of mispredicted near RET branch instructions retired.  This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa.  When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
d5df0a
+	0xf7 extra:pebs return_pebs RETURN counts the number of mispredicted near RET branch instructions retired.  This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa.  When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
d5df0a
+	0xeb extra: non_return_ind NON_RETURN_IND counts the number of mispredicted near indirect JMP and near indirect CALL branch instructions retired.  This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa.  When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
d5df0a
+	0xeb extra:pebs non_return_ind_pebs NON_RETURN_IND counts the number of mispredicted near indirect JMP and near indirect CALL branch instructions retired.  This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa.  When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
d5df0a
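Each unit-mask entry above follows the same plain-text layout: a hex mask value, an extra[:modifier] field (extra:pebs marks the PEBS-capable variant), a short name, and a free-form description. The minimal Python sketch below only illustrates that assumed field order; parse_um_entry is a hypothetical helper written for this note and is not part of oprofile.

# Hypothetical helper: split one unit-mask entry into its fields.
# Layout assumed from the lines above: "<value> extra[:mod] <name> <description>".
def parse_um_entry(line):
    value, flags, name, description = line.strip().split(None, 3)
    return {
        "value": int(value, 16),          # e.g. 0x7e
        "pebs": flags == "extra:pebs",    # True for the *_pebs variants
        "name": name,                     # e.g. "jcc" or "jcc_pebs"
        "description": description,
    }

entry = parse_um_entry("0x7e extra:pebs jcc_pebs JCC counts the number of mispredicted conditional branch (JCC) instructions retired.")
print(hex(entry["value"]), entry["name"], entry["pebs"])   # 0x7e jcc_pebs True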
 name:no_alloc_cycles type:exclusive default:0x3f
d5df0a
-	0x3f extra:inv all Counts the number of cycles that uops are allocated (inverse of NO_ALLOC_CYCLES.ALL)
d5df0a
-	0x2 extra: sd_buffer_full Counts the number of cycles when no uops are allocated and the store data buffer is full.
d5df0a
-	0x4 extra: mispredicts Counts the number of cycles when no uops are allocated and the alloc pipe is stalled waiting for a mispredicted jump to retire.  After the misprediction is detected, the front end will start immediately but the allocate pipe stalls until the mispredicted
d5df0a
-	0x8 extra: scoreboard Counts the number of cycles when no uops are allocated and a microcode IQ-based scoreboard stall is active. This includes stalls due to both the retirement scoreboard (at-ret) and micro-Jcc execution scoreboard (at-jeu).  Does not count cycles when the MS
d5df0a
-	0x10 extra: iq_empty Counts the number of cycles when no uops are allocated and the IQ is empty.  Will assert immediately after a mispredict and partially overlap with MISPREDICTS sub event.
d5df0a
-name:rs_full_stall type:exclusive default:0x2
d5df0a
-	0x2 extra: iec_port0 Counts the number of cycles the Alloc pipeline is stalled because IEC RS for port 0 is full.
d5df0a
-	0x4 extra: iec_port1 Counts the number of cycles the Alloc pipeline is stalled because IEC RS for port 1 is full.
d5df0a
-	0x8 extra: fpc_port0 Counts the number of cycles the Alloc pipeline is stalled because FPC RS for port 0 is full.
d5df0a
-	0x10 extra: fpc_port1 Counts the number of cycles the Alloc pipeline is stalled because FPC RS for port 1 is full.
d5df0a
-name:rs_dispatch_stall type:exclusive default:0x1
d5df0a
-	0x1 extra: iec0_rs *COUNTER BROKEN - NO FIX* Counts cycles when no uops were disptached from port 0 of IEC RS while the RS had valid ops left to dispatch
d5df0a
-	0x2 extra: iec1_rs *COUNTER BROKEN - NO FIX* Counts cycles when no uops were disptached from port 1 of IEC RS while the RS had valid ops left to dispatch
d5df0a
-	0x4 extra: fpc0_rs Counts cycles when no uops were disptached from port 0 of FPC RS while the RS had valid ops left to dispatch
d5df0a
-	0x8 extra: fpc1_rs Counts cycles when no uops were disptached from port 1 of FPC RS while the RS had valid ops left to dispatch
d5df0a
-	0x10 extra: mec_rs Counts cycles when no uops were dispatched from the MEC RS or rehab queue while valid ops were left to dispatch
d5df0a
-name:baclears type:exclusive default:0x2
d5df0a
-	0x2 extra: indirect Counts the number indirect branch baclears
d5df0a
-	0x4 extra: uncond Counts the number unconditional branch baclears
d5df0a
-	0x1e extra: no_corner_case sum of submasks [4:1].  Does not count special case baclears due to things like parity errors, bogus branches, and pd$ issues.
d5df0a
-name:decode_restriction type:exclusive default:0x1
d5df0a
-	0x1 extra: pdcache_wrong Counts the number of times a decode restriction reduced the decode throughput due to wrong instruction length prediction
d5df0a
-	0x2 extra: all_3cycle_resteers Counts the number of times a decode restriction reduced the decode throughput because of all 3 cycle resteer conditions.  Mainly PDCACHE_WRONG and MS_ENTRY cases.
d5df0a
+	0x3f extra: all The NO_ALLOC_CYCLES.ALL event counts the number of cycles when the front-end does not provide any instructions to be allocated for any reason. This event indicates the cycles where an allocation stall occurs and no uops are allocated in that cycle.
d5df0a
+	0x1 extra: rob_full Counts the number of cycles when no uops are allocated and the ROB is full (less than 2 entries available)
d5df0a
+	0x20 extra: rat_stall Counts the number of cycles when no uops are allocated and a RAT stall is asserted.
d5df0a
+	0x50 extra: not_delivered The NO_ALLOC_CYCLES.NOT_DELIVERED event is used to measure front-end inefficiencies, i.e. when the front-end of the machine is not delivering micro-ops to the back-end and the back-end is not stalled. This event can be used to identify if the machine is truly front-end bound.  When this event occurs, it is an indication that the front-end of the machine is operating at less than its theoretical peak performance.  Background: We can think of the processor pipeline as being divided into 2 broader parts: the front-end and the back-end. The front-end is responsible for fetching the instruction, decoding it into micro-ops (uops) in a machine-understandable format and putting them into a micro-op queue to be consumed by the back-end. The back-end then takes these micro-ops and allocates the required resources.  When all resources are ready, the micro-ops are executed. If the back-end is not ready to accept micro-ops from the front-end, then we do not want to count these as front-end bottlenecks.  However, whenever we have bottlenecks in the back-end, we will have allocation unit stalls, eventually forcing the front-end to wait until the back-end is ready to receive more uops. This event counts the cycles only when the back-end is requesting more uops and the front-end is not able to provide them. Some examples of conditions that cause front-end inefficiencies are: Icache misses, ITLB misses, and decoder restrictions that limit the front-end bandwidth.
d5df0a
+name:rs_full_stall type:exclusive default:0x1f
d5df0a
+	0x1f extra: all Counts the number of cycles the Alloc pipeline is stalled when any one of the RSs (IEC, FPC and MEC) is full. This event is a superset of all the individual RS stall event counts.
d5df0a
+	0x1 extra: mec Counts the number of cycles the allocation pipeline is stalled and is waiting for a free MEC reservation station entry.  The cycles should be appropriately counted in case of cracked ops, e.g. in case of a cracked load-op, the load portion is sent to M
d5df0a
+name:baclears type:exclusive default:0x1
d5df0a
+	0x1 extra: all The BACLEARS event counts the number of times the front end is resteered, mainly when the Branch Prediction Unit cannot provide a correct prediction and this is corrected by the Branch Address Calculator at the front end.  The BACLEARS.ANY event counts the number of baclears for any type of branch.
d5df0a
+	0x8 extra: return The BACLEARS event counts the number of times the front end is resteered, mainly when the Branch Prediction Unit cannot provide a correct prediction and this is corrected by the Branch Address Calculator at the front end.  The BACLEARS.RETURN event counts the number of RETURN baclears.
d5df0a
+	0x10 extra: cond The BACLEARS event counts the number of times the front end is resteered, mainly when the Branch Prediction Unit cannot provide a correct prediction and this is corrected by the Branch Address Calculator at the front end.  The BACLEARS.COND event counts the number of JCC (Jump on Conditional Code) baclears.
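The composite unit masks introduced above (0x3f for NO_ALLOC_CYCLES.ALL, and 0x1f for RS_FULL_STALL.ALL, which the patch describes as a superset of the individual RS stall counts) cover a range of single-bit sub-masks. The short Python sketch below simply enumerates the bits each composite covers and labels the ones this patch names; the NAMED_BITS table and decompose helper are illustrative assumptions for this note, not oprofile code.

# Enumerate the single-bit values inside the composite "all" unit masks shown above.
NAMED_BITS = {
    "no_alloc_cycles": {0x1: "rob_full", 0x20: "rat_stall"},  # composite all = 0x3f
    "rs_full_stall":   {0x1: "mec"},                          # composite all = 0x1f
}

def decompose(mask):
    # Return each set bit of the composite mask as its own single-bit value.
    return [1 << bit for bit in range(mask.bit_length()) if mask & (1 << bit)]

for event, composite in (("no_alloc_cycles", 0x3f), ("rs_full_stall", 0x1f)):
    for bit_value in decompose(composite):
        label = NAMED_BITS[event].get(bit_value, "unnamed sub-mask")
        print(f"{event}: {hex(bit_value)} -> {label}")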