Remarks are structured, human- and machine-readable notes emitted by the compiler to explain:

- what was optimized,
- what was missed and why,
- what failed, and
- what the analyses found.

The RemarkEngine collects finalized remarks during compilation and sends them to a pluggable streamer. By default, MLIR integrates with LLVM's llvm::remarks, allowing you to:

- enable remark emission on the MLIRContext,
- categorize remarks as Passed, Missed, Failure, or Analysis,
- attach messages and key–value metrics with <<, and
- optionally mirror remarks as diagnostics (like MLIR diagnostics).

Two main components:
- RemarkEngine (owned by the MLIRContext): receives finalized InFlightRemarks, optionally mirrors them to the DiagnosticEngine, and dispatches them to the installed streamer.
- MLIRRemarkStreamerBase (abstract): backend interface with a single hook:

```c++
virtual void streamOptimizationRemark(const Remark &remark) = 0;
```
Default backend – MLIRLLVMRemarkStreamer

Adapts mlir::Remark to LLVM's remark format and writes YAML or bitstream output via llvm::remarks::RemarkStreamer.

Ownership flow: MLIRContext → RemarkEngine → MLIRRemarkStreamerBase
MLIR provides four built-in remark categories (extendable if needed):

Passed – the optimization/transformation succeeded.

```
[Passed] RemarkName | Category:Vectorizer:myPass1 | Function=foo | Remark="vectorized loop", tripCount=128
```

Missed – the optimization/transformation didn't apply, ideally reported with actionable feedback.

```
[Missed] | Category:Unroll | Function=foo | Reason="tripCount=4 < threshold=256", Suggestion="increase unroll to 128"
```

Failure – the optimization/transformation was attempted but failed. This is slightly different from the Missed category. For example, the user specifies -use-max-register=100 when invoking the compiler, but the attempt fails for some reason:

```
$ your-compiler -use-max-register=100 mycode.xyz
[Failed] Category:RegisterAllocator | Reason="Limiting to use-max-register=100 failed; it now uses 104 registers for better performance"
```

Analysis – neutral analysis results.

```
[Analysis] Category:Register | Remark="Kernel uses 168 registers"
[Analysis] Category:Register | Remark="Kernel uses 10kB local memory"
```
The remark::* helpers return an in-flight remark. You append strings or key–value metrics using <<.

When constructing a remark, you typically provide four StringRef fields: the remark name, category, sub-category, and function name:
#include "mlir/IR/Remarks.h" LogicalResult MyPass::runOnOperation() { Location loc = getOperation()->getLoc(); remark::RemarkOpts opts = remark::RemarkOpts::name(MyRemarkName1) .category(categoryVectorizer) .function(fName) .subCategory(myPassname1); // PASSED remark::passed(loc, opts) << "vectorized loop" << remark::metric("tripCount", 128); // ANALYSIS remark::analysis(loc, opts) << "Kernel uses 168 registers"; // MISSED (with reason + suggestion) int tripBad = 4, threshold = 256, target = 128; remark::missed(loc, opts) << remark::reason("tripCount={0} < threshold={1}", tripBad, threshold) << remark::suggest("increase unroll to {0}", target); // FAILURE remark::failed(loc, opts) << remark::reason("failed due to unsupported pattern"); return success(); }
Helper functions accept LLVM format-style strings ({0}-style placeholders). Formatting happens lazily, so remarks are zero-cost when remark emission is disabled.

- remark::add(fmt, ...) – shortcut for metric("Remark", ...).
- remark::reason(fmt, ...) – shortcut for metric("Reason", ...); used to explain why a remark was missed or failed.
- remark::suggest(fmt, ...) – shortcut for metric("Suggestion", ...); used to provide actionable feedback.
- remark::metric(key, value) – adds a structured key–value metric. Example: tracking TripCount. When exported to YAML, it appears under args for machine readability:

```c++
remark::metric("TripCount", value)
```
Passing a plain string (e.g. << "vectorized loop") is equivalent to:

```c++
metric("Remark", "vectorized loop")
```
The default MLIRLLVMRemarkStreamer persists remarks to a file in the chosen format:
```c++
mlir::remark::RemarkCategories cats{/*passed=*/categoryLoopunroll,
                                    /*missed=*/std::nullopt,
                                    /*analysis=*/std::nullopt,
                                    /*failed=*/categoryLoopunroll};

mlir::remark::enableOptimizationRemarksWithLLVMStreamer(
    context, yamlFile, llvm::remarks::Format::YAML, cats);
```
YAML format – human-readable, easy to diff:
```yaml
--- !Passed
pass:     Category:SubCategory
name:     MyRemarkName1
function: myFunc
loc:      myfile.mlir:12:3
args:
  - Remark:    vectorized loop
  - tripCount: 128
```
Bitstream format – compact binary for large runs.
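To write the compact bitstream format instead, a minimal sketch assuming the same enabling call as the YAML example, with the format switched to llvm::remarks::Format::Bitstream and a different output path (bitstreamFile is a placeholder name):

```c++
// Same setup as above, but emit LLVM's binary bitstream remark format.
mlir::remark::enableOptimizationRemarksWithLLVMStreamer(
    context, bitstreamFile, llvm::remarks::Format::Bitstream, cats);
```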
mlir::emitRemarks (no streamer)

If no streamer is installed, remarks are mirrored to the DiagnosticEngine using mlir::emitRemarks:
```c++
mlir::remark::RemarkCategories cats{/*passed=*/categoryLoopunroll,
                                    /*missed=*/std::nullopt,
                                    /*analysis=*/std::nullopt,
                                    /*failed=*/categoryLoopunroll};

remark::enableOptimizationRemarks(
    /*streamer=*/nullptr, cats, /*printAsEmitRemarks=*/true);
```
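Because the remarks surface as ordinary diagnostics in this mode, they can be observed with a standard diagnostic handler. A minimal sketch (the handler body and its filtering are illustrative, not part of the remark API):

```c++
// Observe mirrored remarks via the regular MLIR diagnostic machinery.
context.getDiagEngine().registerHandler([](mlir::Diagnostic &diag) {
  if (diag.getSeverity() == mlir::DiagnosticSeverity::Remark)
    llvm::errs() << "remark: " << diag.str() << "\n";
});
```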
You can implement a custom streamer by inheriting MLIRRemarkStreamerBase
to consume remarks in any format.
```c++
class MyStreamer : public MLIRRemarkStreamerBase {
public:
  void streamOptimizationRemark(const Remark &remark) override {
    // Convert and write the remark to your custom format here.
  }
};

auto myStreamer = std::make_unique<MyStreamer>();
remark::enableOptimizationRemarks(
    /*streamer=*/std::move(myStreamer), cats, /*printAsEmitRemarks=*/true);
```