{
  "commit": "02d6aad5cc940f17904c1288dfabc3fd2d439279",
  "tree": "ce4c3851312bb2a90b425a758590786e6e8e9614",
  "parents": [
    "4a9da96dc68d878893399210888a03117b39b802"
  ],
  "author": {
    "name": "Teresa Johnson",
    "email": "tejohnson@google.com",
    "time": "Tue Sep 24 16:18:48 2024 -0700"
  },
  "committer": {
    "name": "GitHub",
    "email": "noreply@github.com",
    "time": "Tue Sep 24 16:18:48 2024 -0700"
  },
  "message": "[MemProf] Reduce unnecessary context id computation (NFC) (#109857)\n\nOne of the memory reduction techniques was to compute node context ids\r\non the fly. This reduced memory at the expense of some compile time\r\nincrease.\r\n\r\nFor a large binary we were spending a lot of time invoking getContextIds\r\non the node during assignStackNodesPostOrder, because we were iterating\r\nthrough the stack ids for a call from leaf to root (first to last node\r\nin the parlance used in that code). However, all calls for a given entry\r\nin the StackIdToMatchingCalls map share the same last node, so we can\r\nborrow the approach used by similar code in updateStackNodes and compute\r\nthe context ids on the last node once, then iterate each call\u0027s stack\r\nids in reverse order while reusing the last node\u0027s context ids.\r\n\r\nThis reduced the thin link time by 43% for a large target. It isn\u0027t\r\nclear why there wasn\u0027t a similar increase measured when introducing the\r\nnode context id recomputation, but the compile time was longer to start\r\nwith then.",
  "tree_diff": [
    {
      "type": "modify",
      "old_id": "6927fe538e367b49d9e7791af8103e16f6dbb6f6",
      "old_mode": 33188,
      "old_path": "llvm/lib/Transforms/IPO/MemProfContextDisambiguation.cpp",
      "new_id": "576a31f8b86ae02779d4c9d890950456bc57e5fe",
      "new_mode": 33188,
      "new_path": "llvm/lib/Transforms/IPO/MemProfContextDisambiguation.cpp"
    }
  ]
}
