[NFC] Trim trailing whitespace in *.rst
diff --git a/llvm/docs/AliasAnalysis.rst b/llvm/docs/AliasAnalysis.rst
index 4e6d0c3..b9a8a3a 100644
--- a/llvm/docs/AliasAnalysis.rst
+++ b/llvm/docs/AliasAnalysis.rst
@@ -31,7 +31,7 @@
 
 This document contains information necessary to successfully implement this
 interface, use it, and to test both sides.  It also explains some of the finer
-points about what exactly results mean.  
+points about what exactly results mean.
 
 ``AliasAnalysis`` Class Overview
 ================================
@@ -70,7 +70,7 @@
 
   int i;
   char C[2];
-  char A[10]; 
+  char A[10];
   /* ... */
   for (i = 0; i != 10; ++i) {
     C[0] = A[i];          /* One byte store */
@@ -87,7 +87,7 @@
 
   int i;
   char C[2];
-  char A[10]; 
+  char A[10];
   /* ... */
   for (i = 0; i != 10; ++i) {
     ((short*)C)[0] = A[i];  /* Two byte store! */
@@ -103,7 +103,7 @@
 
 The ``alias`` method
 --------------------
-  
+
 The ``alias`` method is the primary interface used to determine whether or not
 two memory objects alias each other.  It takes two memory objects as input and
 returns MustAlias, PartialAlias, MayAlias, or NoAlias as appropriate.
diff --git a/llvm/docs/BigEndianNEON.rst b/llvm/docs/BigEndianNEON.rst
index aa564c1..196e591 100644
--- a/llvm/docs/BigEndianNEON.rst
+++ b/llvm/docs/BigEndianNEON.rst
@@ -54,7 +54,7 @@
 
 .. figure:: ARM-BE-ldr.png
     :align: right
-    
+
     Big endian vector load using ``LDR``.
 
 
@@ -82,7 +82,7 @@
 .. container:: clearer
 
     Note that throughout this section we only mention loads. Stores have exactly the same problems as their associated loads, so have been skipped for brevity.
- 
+
 
 Considerations
 ==============
@@ -156,7 +156,7 @@
 
 There are 3 parts to the implementation:
 
-    1. Predicate ``LDR`` and ``STR`` instructions so that they are never allowed to be selected to generate vector loads and stores. The exception is one-lane vectors [1]_ - these by definition cannot have lane ordering problems so are fine to use ``LDR``/``STR``. 
+    1. Predicate ``LDR`` and ``STR`` instructions so that they are never allowed to be selected to generate vector loads and stores. The exception is one-lane vectors [1]_ - these by definition cannot have lane ordering problems so are fine to use ``LDR``/``STR``.
 
     2. Create code generation patterns for bitconverts that create ``REV`` instructions.
 
@@ -191,7 +191,7 @@
 
     LD1   v0.4s, [x]
 
-    REV64 v0.4s, v0.4s                  // There is no REV128 instruction, so it must be synthesizedcd 
+    REV64 v0.4s, v0.4s                  // There is no REV128 instruction, so it must be synthesizedcd
     EXT   v0.16b, v0.16b, v0.16b, #8    // with a REV64 then an EXT to swap the two 64-bit elements.
 
     REV64 v0.2d, v0.2d
@@ -202,4 +202,3 @@
 It turns out that these ``REV`` pairs can, in almost all cases, be squashed together into a single ``REV``. For the example above, a ``REV128 4s`` + ``REV128 2d`` is actually a ``REV64 4s``, as shown in the figure on the right.
 
 .. [1] One lane vectors may seem useless as a concept but they serve to distinguish between values held in general purpose registers and values held in NEON/VFP registers. For example, an ``i64`` would live in an ``x`` register, but ``<1 x i64>`` would live in a ``d`` register.
-
diff --git a/llvm/docs/BitCodeFormat.rst b/llvm/docs/BitCodeFormat.rst
index 32f87fe..462e1e5 100644
--- a/llvm/docs/BitCodeFormat.rst
+++ b/llvm/docs/BitCodeFormat.rst
@@ -840,7 +840,7 @@
   plus 1.
 
 * *preemptionspecifier*: If present, an encoding of the :ref:`runtime preemption specifier<bcpreemptionspecifier>`  of this function.
- 
+
 MODULE_CODE_ALIAS Record
 ^^^^^^^^^^^^^^^^^^^^^^^^
 
diff --git a/llvm/docs/BuildingADistribution.rst b/llvm/docs/BuildingADistribution.rst
index 559c478..ee7abe1 100644
--- a/llvm/docs/BuildingADistribution.rst
+++ b/llvm/docs/BuildingADistribution.rst
@@ -230,7 +230,7 @@
   components. LLVM library components are either library names with the LLVM
   prefix removed (i.e. Support, Demangle...), LLVM target names, or special
   purpose component names. The special purpose component names are:
-  
+
   #. ``all`` - All LLVM available component libraries
   #. ``Native`` - The LLVM target for the Native system
   #. ``AllTargetsAsmParsers`` - All the included target ASM parsers libraries
diff --git a/llvm/docs/CMake.rst b/llvm/docs/CMake.rst
index 72d303e..05dcae6 100644
--- a/llvm/docs/CMake.rst
+++ b/llvm/docs/CMake.rst
@@ -536,8 +536,8 @@
   Defaults to ON.
 
 **LLVM_EXPERIMENTAL_TARGETS_TO_BUILD**:STRING
-  Semicolon-separated list of experimental targets to build and linked into 
-  llvm. This will build the experimental target without needing it to add to the 
+  Semicolon-separated list of experimental targets to build and linked into
+  llvm. This will build the experimental target without needing it to add to the
   list of all the targets available in the LLVM's main CMakeLists.txt.
 
 **LLVM_EXTERNAL_{CLANG,LLD,POLLY}_SOURCE_DIR**:PATH
@@ -615,7 +615,7 @@
 
     $ D:\git> git clone https://github.com/mjansson/rpmalloc
     $ D:\llvm-project> cmake ... -DLLVM_INTEGRATED_CRT_ALLOC=D:\git\rpmalloc
-  
+
   This flag needs to be used along with the static CRT, ie. if building the
   Release target, add -DLLVM_USE_CRT_RELEASE=MT.
 
diff --git a/llvm/docs/CodingStandards.rst b/llvm/docs/CodingStandards.rst
index d0c737f..55a8bc1 100644
--- a/llvm/docs/CodingStandards.rst
+++ b/llvm/docs/CodingStandards.rst
@@ -178,10 +178,10 @@
 """"""""""""
 
 The header file's guard should be the all-caps path that a user of this header
-would #include, using '_' instead of path separator and extension marker. 
+would #include, using '_' instead of path separator and extension marker.
 For example, the header file
-``llvm/include/llvm/Analysis/Utils/Local.h`` would be ``#include``-ed as 
-``#include "llvm/Analysis/Utils/Local.h"``, so its guard is 
+``llvm/include/llvm/Analysis/Utils/Local.h`` would be ``#include``-ed as
+``#include "llvm/Analysis/Utils/Local.h"``, so its guard is
 ``LLVM_ANALYSIS_UTILS_LOCAL_H``.
 
 Class overviews
diff --git a/llvm/docs/CommandGuide/llvm-ar.rst b/llvm/docs/CommandGuide/llvm-ar.rst
index eda4cf8..f1385ad 100644
--- a/llvm/docs/CommandGuide/llvm-ar.rst
+++ b/llvm/docs/CommandGuide/llvm-ar.rst
@@ -25,32 +25,32 @@
 (quick update) operations, the archive will be reconstructed in the format
 defined by :option:`--format`.
 
-Here's where :program:`llvm-ar` departs from previous :program:`ar` 
+Here's where :program:`llvm-ar` departs from previous :program:`ar`
 implementations:
 
 *The following option is not supported*
- 
+
  [f] - truncate inserted filenames
- 
+
 *The following options are ignored for compatibility*
 
  --plugin=<string> - load a plugin which adds support for other file formats
- 
- [l] - ignored in :program:`ar` 
+
+ [l] - ignored in :program:`ar`
 
 *Symbol Table*
 
  Since :program:`llvm-ar` supports bitcode files, the symbol table it creates
  includes both native and bitcode symbols.
- 
+
 *Deterministic Archives*
 
  By default, :program:`llvm-ar` always uses zero for timestamps and UIDs/GIDs
- to write archives in a deterministic mode. This is equivalent to the 
+ to write archives in a deterministic mode. This is equivalent to the
  :option:`D` modifier being enabled by default. If you wish to maintain
  compatibility with other :program:`ar` implementations, you can pass the
  :option:`U` modifier to write actual timestamps and UIDs/GIDs.
- 
+
 *Windows Paths*
 
  When on Windows :program:`llvm-ar` treats the names of archived *files* in the same
@@ -62,7 +62,7 @@
 
 :program:`llvm-ar` operations are compatible with other :program:`ar`
 implementations. However, there are a few modifiers (:option:`L`) that are not
-found in other :program:`ar` implementations. The options for 
+found in other :program:`ar` implementations. The options for
 :program:`llvm-ar` specify a single basic Operation to perform on the archive,
 a variety of Modifiers for that Operation, the name of the archive file, and an
 optional list of file names. If the *files* option is not specified, it
@@ -127,7 +127,7 @@
  they do not exist. The :option:`a`, :option:`b`, :option:`T` and :option:`u`
  modifiers apply to this operation. If no *files* are specified, the archive
  is not modified.
- 
+
 t[v]
 .. option:: t [vO]
 
@@ -139,10 +139,10 @@
  size, and the date. With the :option:`O` modifier, display member offsets. If
  any *files* are specified, the listing is only for those files. If no *files*
  are specified, the table of contents for the whole archive is printed.
- 
+
 .. option:: V
 
- A synonym for the :option:`--version` option. 
+ A synonym for the :option:`--version` option.
 
 .. option:: x [oP]
 
@@ -174,7 +174,7 @@
 
 .. option:: i
 
- A synonym for the :option:`b` option. 
+ A synonym for the :option:`b` option.
 
 .. option:: L
 
@@ -188,13 +188,13 @@
  selects the instance of the given name, with "1" indicating the first
  instance. If :option:`N` is not specified the first member of that name will
  be selected. If *count* is not supplied, the operation fails.*count* cannot be
- 
+
 .. option:: o
 
  When extracting files, use the modification times of any *files* as they
  appear in the ``archive``. By default *files* extracted from the archive
  use the time of extraction.
- 
+
 .. option:: O
 
  Display member offsets inside the archive.
@@ -248,12 +248,12 @@
  This modifier is the opposite of the :option:`s` modifier. It instructs
  :program:`llvm-ar` to not build the symbol table. If both :option:`s` and
  :option:`S` are used, the last modifier to occur in the options will prevail.
- 
+
 .. option:: u
 
  Only update ``archive`` members with *files* that have more recent
  timestamps.
- 
+
 .. option:: U
 
  Use actual timestamps and UIDs/GIDs.
@@ -277,7 +277,7 @@
  stream. No other options are compatible with this option.
 
 .. option:: --rsp-quoting=<type>
- This option selects the quoting style ``<type>`` for response files, either 
+ This option selects the quoting style ``<type>`` for response files, either
  ``posix`` or ``windows``. The default when on Windows is ``windows``, otherwise the
  default is ``posix``.
 
@@ -296,11 +296,11 @@
 supported by archivers following in the ar tradition. An MRI script contains a
 sequence of commands to be executed by the archiver. The :option:`-M` option
 allows for an MRI script to be passed to :program:`llvm-ar` through the
-standard input stream. 
- 
+standard input stream.
+
 Note that :program:`llvm-ar` has known limitations regarding the use of MRI
 scripts:
- 
+
 * Each script can only create one archive.
 * Existing archives can not be modified.
 
diff --git a/llvm/docs/CommandGuide/llvm-mca.rst b/llvm/docs/CommandGuide/llvm-mca.rst
index 4895edc..d226936 100644
--- a/llvm/docs/CommandGuide/llvm-mca.rst
+++ b/llvm/docs/CommandGuide/llvm-mca.rst
@@ -254,7 +254,7 @@
 
   # LLVM-MCA-BEGIN A simple example
     add %eax, %eax
-  # LLVM-MCA-END 
+  # LLVM-MCA-END
 
 The code from the example above defines a region named "A simple example" with a
 single instruction in it. Note how the region name doesn't have to be repeated
@@ -627,26 +627,26 @@
 
 
   Cycles with backend pressure increase [ 48.07% ]
-  Throughput Bottlenecks: 
+  Throughput Bottlenecks:
     Resource Pressure       [ 47.77% ]
     - JFPA  [ 47.77% ]
     - JFPU0  [ 47.77% ]
     Data Dependencies:      [ 0.30% ]
     - Register Dependencies [ 0.30% ]
     - Memory Dependencies   [ 0.00% ]
-  
+
   Critical sequence based on the simulation:
-  
+
                 Instruction                         Dependency Information
    +----< 2.    vhaddps %xmm3, %xmm3, %xmm4
    |
-   |    < loop carried > 
+   |    < loop carried >
    |
    |      0.    vmulps  %xmm0, %xmm1, %xmm2
    +----> 1.    vhaddps %xmm2, %xmm2, %xmm3         ## RESOURCE interference:  JFPA [ probability: 74% ]
    +----> 2.    vhaddps %xmm3, %xmm3, %xmm4         ## REGISTER dependency:  %xmm3
    |
-   |    < loop carried > 
+   |    < loop carried >
    |
    +----> 1.    vhaddps %xmm2, %xmm2, %xmm3         ## RESOURCE interference:  JFPA [ probability: 74% ]
 
diff --git a/llvm/docs/CommandGuide/llvm-objcopy.rst b/llvm/docs/CommandGuide/llvm-objcopy.rst
index 79c181f..5f3aa88 100644
--- a/llvm/docs/CommandGuide/llvm-objcopy.rst
+++ b/llvm/docs/CommandGuide/llvm-objcopy.rst
@@ -383,7 +383,7 @@
  represents a single symbol, with leading and trailing whitespace ignored, as is
  anything following a '#'. Can be specified multiple times to read names from
  multiple files.
- 
+
 .. option:: --new-symbol-visibility <visibility>
 
  Specify the visibility of the symbols automatically created when using binary
diff --git a/llvm/docs/CommandGuide/llvm-objdump.rst b/llvm/docs/CommandGuide/llvm-objdump.rst
index 52bf3bd..88bade71 100644
--- a/llvm/docs/CommandGuide/llvm-objdump.rst
+++ b/llvm/docs/CommandGuide/llvm-objdump.rst
@@ -32,7 +32,7 @@
 .. option:: -D, --disassemble-all
 
   Disassemble all sections found in the input files.
-  
+
 .. option:: --disassemble-symbols=<symbol1[,symbol2,...]>
 
   Disassemble only the specified symbols. Takes demangled symbol names when
@@ -92,7 +92,7 @@
 .. option:: -u, --unwind-info
 
   Display the unwind info of the input(s).
-  
+
   This operation is only currently supported for COFF and Mach-O object files.
 
 .. option:: -v, --version
diff --git a/llvm/docs/CommandGuide/llvm-profdata.rst b/llvm/docs/CommandGuide/llvm-profdata.rst
index 6472320..7c99e14 100644
--- a/llvm/docs/CommandGuide/llvm-profdata.rst
+++ b/llvm/docs/CommandGuide/llvm-profdata.rst
@@ -94,13 +94,13 @@
 .. option:: -sample
 
  Specify that the input profile is a sample-based profile.
- 
+
  The format of the generated file can be generated in one of three ways:
 
  .. option:: -binary (default)
 
  Emit the profile using a binary encoding. For instrumentation-based profile
- the output format is the indexed binary format. 
+ the output format is the indexed binary format.
 
  .. option:: -extbinary
 
diff --git a/llvm/docs/CommandGuide/llvm-readelf.rst b/llvm/docs/CommandGuide/llvm-readelf.rst
index 8ba1a0e..d83d566 100644
--- a/llvm/docs/CommandGuide/llvm-readelf.rst
+++ b/llvm/docs/CommandGuide/llvm-readelf.rst
@@ -41,7 +41,7 @@
 .. option:: --demangle, -C
 
  Display demangled symbol names in the output.
- 
+
 .. option:: --dependent-libraries
 
  Display the dependent libraries section.
@@ -118,7 +118,7 @@
 .. option:: --needed-libs
 
  Display the needed libraries.
-  
+
 .. option:: --no-demangle
 
  Do not display demangled symbol names in the output. On by default.
@@ -196,11 +196,11 @@
 .. option:: --version-info, -V
 
  Display version sections.
- 
+
 .. option:: --wide, -W
 
  Ignored for GNU readelf compatibility. The output is already similar to when using -W with GNU readelf.
- 
+
 .. option:: @<FILE>
 
  Read command-line options from response file `<FILE>`.
diff --git a/llvm/docs/CommandGuide/llvm-readobj.rst b/llvm/docs/CommandGuide/llvm-readobj.rst
index 068808e..e7e6c73 100644
--- a/llvm/docs/CommandGuide/llvm-readobj.rst
+++ b/llvm/docs/CommandGuide/llvm-readobj.rst
@@ -116,7 +116,7 @@
  section index or section name.
 
 .. option:: --string-table
- 
+
  Display contents of the string table.
 
 .. option:: --symbols, --syms, -s
diff --git a/llvm/docs/CommandGuide/llvm-symbolizer.rst b/llvm/docs/CommandGuide/llvm-symbolizer.rst
index 9c15c7e..9518736 100644
--- a/llvm/docs/CommandGuide/llvm-symbolizer.rst
+++ b/llvm/docs/CommandGuide/llvm-symbolizer.rst
@@ -182,7 +182,7 @@
 
   Print just the file's name without any directories, instead of the
   absolute path.
-  
+
 .. _llvm-symbolizer-opt-C:
 
 .. option:: --demangle, -C
@@ -241,7 +241,7 @@
   Specify the preferred output style. Defaults to ``LLVM``. When the output
   style is set to ``GNU``, the tool follows the style of GNU's **addr2line**.
   The differences from the ``LLVM`` style are:
-  
+
   * Does not print the column of a source code location.
 
   * Does not add an empty line after the report for an address.
diff --git a/llvm/docs/Coroutines.rst b/llvm/docs/Coroutines.rst
index 5485a48..8ea4056 100644
--- a/llvm/docs/Coroutines.rst
+++ b/llvm/docs/Coroutines.rst
@@ -7,7 +7,7 @@
    :depth: 3
 
 .. warning::
-  This is a work in progress. Compatibility across LLVM releases is not 
+  This is a work in progress. Compatibility across LLVM releases is not
   guaranteed.
 
 Introduction
@@ -15,13 +15,13 @@
 
 .. _coroutine handle:
 
-LLVM coroutines are functions that have one or more `suspend points`_. 
+LLVM coroutines are functions that have one or more `suspend points`_.
 When a suspend point is reached, the execution of a coroutine is suspended and
-control is returned back to its caller. A suspended coroutine can be resumed 
-to continue execution from the last suspend point or it can be destroyed. 
+control is returned back to its caller. A suspended coroutine can be resumed
+to continue execution from the last suspend point or it can be destroyed.
 
-In the following example, we call function `f` (which may or may not be a 
-coroutine itself) that returns a handle to a suspended coroutine 
+In the following example, we call function `f` (which may or may not be a
+coroutine itself) that returns a handle to a suspended coroutine
 (**coroutine handle**) that is used by `main` to resume the coroutine twice and
 then destroy it:
 
@@ -38,8 +38,8 @@
 
 .. _coroutine frame:
 
-In addition to the function stack frame which exists when a coroutine is 
-executing, there is an additional region of storage that contains objects that 
+In addition to the function stack frame which exists when a coroutine is
+executing, there is an additional region of storage that contains objects that
 keep the coroutine state when a coroutine is suspended. This region of storage
 is called the **coroutine frame**. It is created when a coroutine is called
 and destroyed when a coroutine either runs to completion or is destroyed
@@ -273,12 +273,12 @@
      for(;;) {
        print(n++);
        <suspend> // returns a coroutine handle on first suspend
-     }     
-  } 
+     }
+  }
 
 This coroutine calls some function `print` with value `n` as an argument and
-suspends execution. Every time this coroutine resumes, it calls `print` again with an argument one bigger than the last time. This coroutine never completes by itself and must be destroyed explicitly. If we use this coroutine with 
-a `main` shown in the previous section. It will call `print` with values 4, 5 
+suspends execution. Every time this coroutine resumes, it calls `print` again with an argument one bigger than the last time. This coroutine never completes by itself and must be destroyed explicitly. If we use this coroutine with
+a `main` shown in the previous section. It will call `print` with values 4, 5
 and 6 after which the coroutine will be destroyed.
 
 The LLVM IR for this coroutine looks like this:
@@ -309,28 +309,28 @@
   }
 
 The `entry` block establishes the coroutine frame. The `coro.size`_ intrinsic is
-lowered to a constant representing the size required for the coroutine frame. 
-The `coro.begin`_ intrinsic initializes the coroutine frame and returns the 
-coroutine handle. The second parameter of `coro.begin` is given a block of memory 
+lowered to a constant representing the size required for the coroutine frame.
+The `coro.begin`_ intrinsic initializes the coroutine frame and returns the
+coroutine handle. The second parameter of `coro.begin` is given a block of memory
 to be used if the coroutine frame needs to be allocated dynamically.
 The `coro.id`_ intrinsic serves as coroutine identity useful in cases when the
-`coro.begin`_ intrinsic get duplicated by optimization passes such as 
+`coro.begin`_ intrinsic get duplicated by optimization passes such as
 jump-threading.
 
-The `cleanup` block destroys the coroutine frame. The `coro.free`_ intrinsic, 
+The `cleanup` block destroys the coroutine frame. The `coro.free`_ intrinsic,
 given the coroutine handle, returns a pointer of the memory block to be freed or
-`null` if the coroutine frame was not allocated dynamically. The `cleanup` 
+`null` if the coroutine frame was not allocated dynamically. The `cleanup`
 block is entered when coroutine runs to completion by itself or destroyed via
 call to the `coro.destroy`_ intrinsic.
 
-The `suspend` block contains code to be executed when coroutine runs to 
-completion or suspended. The `coro.end`_ intrinsic marks the point where 
-a coroutine needs to return control back to the caller if it is not an initial 
-invocation of the coroutine. 
+The `suspend` block contains code to be executed when coroutine runs to
+completion or suspended. The `coro.end`_ intrinsic marks the point where
+a coroutine needs to return control back to the caller if it is not an initial
+invocation of the coroutine.
 
-The `loop` blocks represents the body of the coroutine. The `coro.suspend`_ 
-intrinsic in combination with the following switch indicates what happens to 
-control flow when a coroutine is suspended (default case), resumed (case 0) or 
+The `loop` blocks represents the body of the coroutine. The `coro.suspend`_
+intrinsic in combination with the following switch indicates what happens to
+control flow when a coroutine is suspended (default case), resumed (case 0) or
 destroyed (case 1).
 
 Coroutine Transformation
@@ -338,24 +338,24 @@
 
 One of the steps of coroutine lowering is building the coroutine frame. The
 def-use chains are analyzed to determine which objects need be kept alive across
-suspend points. In the coroutine shown in the previous section, use of virtual register 
-`%inc` is separated from the definition by a suspend point, therefore, it 
-cannot reside on the stack frame since the latter goes away once the coroutine 
-is suspended and control is returned back to the caller. An i32 slot is 
+suspend points. In the coroutine shown in the previous section, use of virtual register
+`%inc` is separated from the definition by a suspend point, therefore, it
+cannot reside on the stack frame since the latter goes away once the coroutine
+is suspended and control is returned back to the caller. An i32 slot is
 allocated in the coroutine frame and `%inc` is spilled and reloaded from that
 slot as needed.
 
-We also store addresses of the resume and destroy functions so that the 
+We also store addresses of the resume and destroy functions so that the
 `coro.resume` and `coro.destroy` intrinsics can resume and destroy the coroutine
-when its identity cannot be determined statically at compile time. For our 
+when its identity cannot be determined statically at compile time. For our
 example, the coroutine frame will be:
 
 .. code-block:: llvm
 
   %f.frame = type { void (%f.frame*)*, void (%f.frame*)*, i32 }
 
-After resume and destroy parts are outlined, function `f` will contain only the 
-code responsible for creation and initialization of the coroutine frame and 
+After resume and destroy parts are outlined, function `f` will contain only the
+code responsible for creation and initialization of the coroutine frame and
 execution of the coroutine until a suspend point is reached:
 
 .. code-block:: llvm
@@ -370,12 +370,12 @@
     store void (%f.frame*)* @f.resume, void (%f.frame*)** %1
     %2 = getelementptr %f.frame, %f.frame* %frame, i32 0, i32 1
     store void (%f.frame*)* @f.destroy, void (%f.frame*)** %2
-   
+
     %inc = add nsw i32 %n, 1
     %inc.spill.addr = getelementptr inbounds %f.Frame, %f.Frame* %FramePtr, i32 0, i32 2
     store i32 %inc, i32* %inc.spill.addr
     call void @print(i32 %n)
-   
+
     ret i8* %frame
   }
 
@@ -406,16 +406,16 @@
 
 Avoiding Heap Allocations
 -------------------------
- 
-A particular coroutine usage pattern, which is illustrated by the `main` 
-function in the overview section, where a coroutine is created, manipulated and 
+
+A particular coroutine usage pattern, which is illustrated by the `main`
+function in the overview section, where a coroutine is created, manipulated and
 destroyed by the same calling function, is common for coroutines implementing
-RAII idiom and is suitable for allocation elision optimization which avoid 
-dynamic allocation by storing the coroutine frame as a static `alloca` in its 
+RAII idiom and is suitable for allocation elision optimization which avoid
+dynamic allocation by storing the coroutine frame as a static `alloca` in its
 caller.
 
 In the entry block, we will call `coro.alloc`_ intrinsic that will return `true`
-when dynamic allocation is required, and `false` if dynamic allocation is 
+when dynamic allocation is required, and `false` if dynamic allocation is
 elided.
 
 .. code-block:: llvm
@@ -496,9 +496,9 @@
     switch i8 %3, label %suspend [i8 0, label %loop
                                   i8 1, label %cleanup]
 
-In this case, the coroutine frame would include a suspend index that will 
-indicate at which suspend point the coroutine needs to resume. The resume 
-function will use an index to jump to an appropriate basic block and will look 
+In this case, the coroutine frame would include a suspend index that will
+indicate at which suspend point the coroutine needs to resume. The resume
+function will use an index to jump to an appropriate basic block and will look
 as follows:
 
 .. code-block:: llvm
@@ -528,25 +528,25 @@
     ret void
   }
 
-If different cleanup code needs to get executed for different suspend points, 
+If different cleanup code needs to get executed for different suspend points,
 a similar switch will be in the `f.destroy` function.
 
 .. note ::
 
   Using suspend index in a coroutine state and having a switch in `f.resume` and
-  `f.destroy` is one of the possible implementation strategies. We explored 
+  `f.destroy` is one of the possible implementation strategies. We explored
   another option where a distinct `f.resume1`, `f.resume2`, etc. are created for
-  every suspend point, and instead of storing an index, the resume and destroy 
+  every suspend point, and instead of storing an index, the resume and destroy
   function pointers are updated at every suspend. Early testing showed that the
-  current approach is easier on the optimizer than the latter so it is a 
+  current approach is easier on the optimizer than the latter so it is a
   lowering strategy implemented at the moment.
 
 Distinct Save and Suspend
 -------------------------
 
-In the previous example, setting a resume index (or some other state change that 
+In the previous example, setting a resume index (or some other state change that
 needs to happen to prepare a coroutine for resumption) happens at the same time as
-a suspension of a coroutine. However, in certain cases, it is necessary to control 
+a suspension of a coroutine. However, in certain cases, it is necessary to control
 when coroutine is prepared for resumption and when it is suspended.
 
 In the following example, a coroutine represents some activity that is driven
@@ -571,10 +571,10 @@
      }
   }
 
-In this case, coroutine should be ready for resumption prior to a call to 
+In this case, coroutine should be ready for resumption prior to a call to
 `async_op1` and `async_op2`. The `coro.save`_ intrinsic is used to indicate a
 point when coroutine should be ready for resumption (namely, when a resume index
-should be stored in the coroutine frame, so that it can be resumed at the 
+should be stored in the coroutine frame, so that it can be resumed at the
 correct resume point):
 
 .. code-block:: llvm
@@ -599,7 +599,7 @@
 
 A coroutine author or a frontend may designate a distinguished `alloca` that can
 be used to communicate with the coroutine. This distinguished alloca is called
-**coroutine promise** and is provided as the second parameter to the 
+**coroutine promise** and is provided as the second parameter to the
 `coro.id`_ intrinsic.
 
 The following coroutine designates a 32 bit integer `promise` and uses it to
@@ -685,17 +685,17 @@
 * it is possible to check whether a suspended coroutine is at the final suspend
   point via `coro.done`_ intrinsic;
 
-* a resumption of a coroutine stopped at the final suspend point leads to 
+* a resumption of a coroutine stopped at the final suspend point leads to
   undefined behavior. The only possible action for a coroutine at a final
   suspend point is destroying it via `coro.destroy`_ intrinsic.
 
-From the user perspective, the final suspend point represents an idea of a 
+From the user perspective, the final suspend point represents an idea of a
 coroutine reaching the end. From the compiler perspective, it is an optimization
 opportunity for reducing number of resume points (and therefore switch cases) in
 the resume function.
 
 The following is an example of a function that keeps resuming the coroutine
-until the final suspend point is reached after which point the coroutine is 
+until the final suspend point is reached after which point the coroutine is
 destroyed:
 
 .. code-block:: llvm
@@ -729,7 +729,7 @@
 .. code-block:: c
 
   void* coroutine(int n) {
-    int current_value; 
+    int current_value;
     <designate current_value to be coroutine promise>
     <SUSPEND> // injected suspend point, so that the coroutine starts suspended
     for (int i = 0; i < n; ++i) {
@@ -785,8 +785,8 @@
 Semantics:
 """"""""""
 
-When possible, the `coro.destroy` intrinsic is replaced with a direct call to 
-the coroutine destroy function. Otherwise it is replaced with an indirect call 
+When possible, the `coro.destroy` intrinsic is replaced with a direct call to
+the coroutine destroy function. Otherwise it is replaced with an indirect call
 based on the function pointer for the destroy function stored in the coroutine
 frame. Destroying a coroutine that is not suspended leads to undefined behavior.
 
@@ -813,8 +813,8 @@
 """"""""""
 
 When possible, the `coro.resume` intrinsic is replaced with a direct call to the
-coroutine resume function. Otherwise it is replaced with an indirect call based 
-on the function pointer for the resume function stored in the coroutine frame. 
+coroutine resume function. Otherwise it is replaced with an indirect call based
+on the function pointer for the resume function stored in the coroutine frame.
 Resuming a coroutine that is not suspended leads to undefined behavior.
 
 .. _coro.done:
@@ -840,7 +840,7 @@
 Semantics:
 """"""""""
 
-Using this intrinsic on a coroutine that does not have a `final suspend`_ point 
+Using this intrinsic on a coroutine that does not have a `final suspend`_ point
 or on a coroutine that is not suspended leads to undefined behavior.
 
 .. _coro.promise:
@@ -855,25 +855,25 @@
 Overview:
 """""""""
 
-The '``llvm.coro.promise``' intrinsic obtains a pointer to a 
+The '``llvm.coro.promise``' intrinsic obtains a pointer to a
 `coroutine promise`_ given a switched-resume coroutine handle and vice versa.
 
 Arguments:
 """"""""""
 
-The first argument is a handle to a coroutine if `from` is false. Otherwise, 
+The first argument is a handle to a coroutine if `from` is false. Otherwise,
 it is a pointer to a coroutine promise.
 
-The second argument is an alignment requirements of the promise. 
-If a frontend designated `%promise = alloca i32` as a promise, the alignment 
-argument to `coro.promise` should be the alignment of `i32` on the target 
-platform. If a frontend designated `%promise = alloca i32, align 16` as a 
+The second argument is an alignment requirements of the promise.
+If a frontend designated `%promise = alloca i32` as a promise, the alignment
+argument to `coro.promise` should be the alignment of `i32` on the target
+platform. If a frontend designated `%promise = alloca i32, align 16` as a
 promise, the alignment argument should be 16.
 This argument only accepts constants.
 
 The third argument is a boolean indicating a direction of the transformation.
-If `from` is true, the intrinsic returns a coroutine handle given a pointer 
-to a promise. If `from` is false, the intrinsics return a pointer to a promise 
+If `from` is true, the intrinsic returns a coroutine handle given a pointer
+to a promise. If `from` is false, the intrinsics return a pointer to a promise
 from a coroutine handle. This argument only accepts constants.
 
 Semantics:
@@ -907,7 +907,7 @@
   entry:
     %hdl = call i8* @f(i32 4) ; starts the coroutine and returns its handle
     %promise.addr.raw = call i8* @llvm.coro.promise(i8* %hdl, i32 4, i1 false)
-    %promise.addr = bitcast i8* %promise.addr.raw to i32*    
+    %promise.addr = bitcast i8* %promise.addr.raw to i32*
     %val = load i32, i32* %promise.addr ; load a value from the promise
     call void @print(i32 %val)
     call void @llvm.coro.destroy(i8* %hdl)
@@ -946,7 +946,7 @@
 """"""""""
 
 The `coro.size` intrinsic is lowered to a constant representing the size of
-the coroutine frame. 
+the coroutine frame.
 
 .. _coro.begin:
 
@@ -964,7 +964,7 @@
 Arguments:
 """"""""""
 
-The first argument is a token returned by a call to '``llvm.coro.id``' 
+The first argument is a token returned by a call to '``llvm.coro.id``'
 identifying the coroutine.
 
 The second argument is a pointer to a block of memory where coroutine frame
@@ -975,9 +975,9 @@
 """"""""""
 
 Depending on the alignment requirements of the objects in the coroutine frame
-and/or on the codegen compactness reasons the pointer returned from `coro.begin` 
-may be at offset to the `%mem` argument. (This could be beneficial if 
-instructions that express relative access to data can be more compactly encoded 
+and/or on the codegen compactness reasons the pointer returned from `coro.begin`
+may be at offset to the `%mem` argument. (This could be beneficial if
+instructions that express relative access to data can be more compactly encoded
 with small positive and negative offsets).
 
 A frontend should emit exactly one `coro.begin` intrinsic per coroutine.
@@ -993,7 +993,7 @@
 Overview:
 """""""""
 
-The '``llvm.coro.free``' intrinsic returns a pointer to a block of memory where 
+The '``llvm.coro.free``' intrinsic returns a pointer to a block of memory where
 coroutine frame is stored or `null` if this instance of a coroutine did not use
 dynamically allocated memory for its coroutine frame.  This intrinsic is not
 supported for returned-continuation coroutines.
@@ -1001,7 +1001,7 @@
 Arguments:
 """"""""""
 
-The first argument is a token returned by a call to '``llvm.coro.id``' 
+The first argument is a token returned by a call to '``llvm.coro.id``'
 identifying the coroutine.
 
 The second argument is a pointer to the coroutine frame. This should be the same
@@ -1050,7 +1050,7 @@
 Arguments:
 """"""""""
 
-The first argument is a token returned by a call to '``llvm.coro.id``' 
+The first argument is a token returned by a call to '``llvm.coro.id``'
 identifying the coroutine.
 
 Semantics:
@@ -1137,7 +1137,7 @@
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 ::
 
-  declare token @llvm.coro.id(i32 <align>, i8* <promise>, i8* <coroaddr>, 
+  declare token @llvm.coro.id(i32 <align>, i8* <promise>, i8* <coroaddr>,
                                                           i8* <fnaddrs>)
 
 Overview:
@@ -1149,8 +1149,8 @@
 Arguments:
 """"""""""
 
-The first argument provides information on the alignment of the memory returned 
-by the allocation function and given to `coro.begin` by the first argument. If 
+The first argument provides information on the alignment of the memory returned
+by the allocation function and given to `coro.begin` by the first argument. If
 this argument is 0, the memory is assumed to be aligned to 2 * sizeof(i8*).
 This argument only accepts constants.
 
@@ -1158,10 +1158,10 @@
 to be a `coroutine promise`_.
 
 The third argument is `null` coming out of the frontend. The CoroEarly pass sets
-this argument to point to the function this coro.id belongs to. 
+this argument to point to the function this coro.id belongs to.
 
-The fourth argument is `null` before coroutine is split, and later is replaced 
-to point to a private global constant array containing function pointers to 
+The fourth argument is `null` before coroutine is split, and later is replaced
+to point to a private global constant array containing function pointers to
 outlined resume and destroy parts of the coroutine.
 
 
@@ -1298,7 +1298,7 @@
 Overview:
 """""""""
 
-The '``llvm.coro.end``' marks the point where execution of the resume part of 
+The '``llvm.coro.end``' marks the point where execution of the resume part of
 the coroutine should end and control should return to the caller.
 
 
@@ -1307,18 +1307,18 @@
 
 The first argument should refer to the coroutine handle of the enclosing
 coroutine. A frontend is allowed to supply null as the first parameter, in this
-case `coro-early` pass will replace the null with an appropriate coroutine 
+case `coro-early` pass will replace the null with an appropriate coroutine
 handle value.
 
-The second argument should be `true` if this coro.end is in the block that is 
-part of the unwind sequence leaving the coroutine body due to an exception and 
+The second argument should be `true` if this coro.end is in the block that is
+part of the unwind sequence leaving the coroutine body due to an exception and
 `false` otherwise.
 
 Semantics:
 """"""""""
 The purpose of this intrinsic is to allow frontends to mark the cleanup and
 other code that is only relevant during the initial invocation of the coroutine
-and should not be present in resume and destroy parts. 
+and should not be present in resume and destroy parts.
 
 In returned-continuation lowering, ``llvm.coro.end`` fully destroys the
 coroutine frame.  If the second argument is `false`, it also returns from
@@ -1335,11 +1335,11 @@
 the start, resume and destroy parts. In the start part, it is a no-op,
 in resume and destroy parts, it is replaced with `ret void` instruction and
 the rest of the block containing `coro.end` instruction is discarded.
-In landing pads it is replaced with an appropriate instruction to unwind to 
-caller. The handling of coro.end differs depending on whether the target is 
+In landing pads it is replaced with an appropriate instruction to unwind to
+caller. The handling of coro.end differs depending on whether the target is
 using landingpad or WinEH exception model.
 
-For landingpad based exception model, it is expected that frontend uses the 
+For landingpad based exception model, it is expected that frontend uses the
 `coro.end`_ intrinsic as follows:
 
 .. code-block:: llvm
@@ -1368,12 +1368,12 @@
 
 .. code-block:: llvm
 
-    ehcleanup: 
+    ehcleanup:
       %tok = cleanuppad within none []
       %unused = call i1 @llvm.coro.end(i8* null, i1 true) [ "funclet"(token %tok) ]
       cleanupret from %tok unwind label %RestOfTheCleanup
 
-The `CoroSplit` pass, if the funclet bundle is present, will insert 
+The `CoroSplit` pass, if the funclet bundle is present, will insert
 ``cleanupret from %tok unwind to caller`` before
 the `coro.end`_ intrinsic and will remove the rest of the block.
 
@@ -1452,7 +1452,7 @@
 Arguments:
 """"""""""
 
-The first argument refers to a token of `coro.save` intrinsic that marks the 
+The first argument refers to a token of `coro.save` intrinsic that marks the
 point when coroutine state is prepared for suspension. If `none` token is passed,
 the intrinsic behaves as if there were a `coro.save` immediately preceding
 the `coro.suspend` intrinsic.
@@ -1480,7 +1480,7 @@
     %s.final = call i8 @llvm.coro.suspend(token none, i1 true)
     switch i8 %s.final, label %suspend [i8 0, label %trap
                                         i8 1, label %cleanup]
-  trap: 
+  trap:
     call void @llvm.trap()
     unreachable
 
@@ -1490,7 +1490,7 @@
 If a coroutine that was suspended at the suspend point marked by this intrinsic
 is resumed via `coro.resume`_ the control will transfer to the basic block
 of the 0-case. If it is resumed via `coro.destroy`_, it will proceed to the
-basic block indicated by the 1-case. To suspend, coroutine proceed to the 
+basic block indicated by the 1-case. To suspend, coroutine proceed to the
 default label.
 
 If suspend intrinsic is marked as final, it can consider the `true` branch
@@ -1507,9 +1507,9 @@
 Overview:
 """""""""
 
-The '``llvm.coro.save``' marks the point where a coroutine need to update its 
-state to prepare for resumption to be considered suspended (and thus eligible 
-for resumption). 
+The '``llvm.coro.save``' marks the point where a coroutine need to update its
+state to prepare for resumption to be considered suspended (and thus eligible
+for resumption).
 
 Arguments:
 """"""""""
@@ -1520,17 +1520,17 @@
 """"""""""
 
 Whatever coroutine state changes are required to enable resumption of
-the coroutine from the corresponding suspend point should be done at the point 
+the coroutine from the corresponding suspend point should be done at the point
 of `coro.save` intrinsic.
 
 Example:
 """"""""
 
-Separate save and suspend points are necessary when a coroutine is used to 
+Separate save and suspend points are necessary when a coroutine is used to
 represent an asynchronous control flow driven by callbacks representing
 completions of asynchronous operations.
 
-In such a case, a coroutine should be ready for resumption prior to a call to 
+In such a case, a coroutine should be ready for resumption prior to a call to
 `async_op` function that may trigger resumption of a coroutine from the same or
 a different thread possibly prior to `async_op` call returning control back
 to the coroutine:
@@ -1664,8 +1664,8 @@
 Arguments:
 """"""""""
 
-The first argument points to an `alloca` storing the value of a parameter to a 
-coroutine. 
+The first argument points to an `alloca` storing the value of a parameter to a
+coroutine.
 
 The second argument points to an `alloca` storing the value of the copy of that
 parameter.
@@ -1675,12 +1675,12 @@
 
 The optimizer is free to always replace this intrinsic with `i1 true`.
 
-The optimizer is also allowed to replace it with `i1 false` provided that the 
+The optimizer is also allowed to replace it with `i1 false` provided that the
 parameter copy is only used prior to control flow reaching any of the suspend
-points. The code that would be DCE'd if the `coro.param` is replaced with 
+points. The code that would be DCE'd if the `coro.param` is replaced with
 `i1 false` is not considered to be a use of the parameter copy.
 
-The frontend can emit this intrinsic if its language rules allow for this 
+The frontend can emit this intrinsic if its language rules allow for this
 optimization.
 
 Example:
@@ -1702,7 +1702,7 @@
   }
 
 Note that, uses of `b` is used after a suspend point and thus must be copied
-into a coroutine frame, whereas `a` does not have to, since it never used 
+into a coroutine frame, whereas `a` does not have to, since it never used
 after suspend.
 
 A frontend can create parameter copies for `a` and `b` as follows:
@@ -1733,24 +1733,24 @@
 ---------
 The pass CoroEarly lowers coroutine intrinsics that hide the details of the
 structure of the coroutine frame, but, otherwise not needed to be preserved to
-help later coroutine passes. This pass lowers `coro.frame`_, `coro.done`_, 
+help later coroutine passes. This pass lowers `coro.frame`_, `coro.done`_,
 and `coro.promise`_ intrinsics.
 
 .. _CoroSplit:
 
 CoroSplit
 ---------
-The pass CoroSplit buides coroutine frame and outlines resume and destroy parts 
+The pass CoroSplit buides coroutine frame and outlines resume and destroy parts
 into separate functions.
 
 CoroElide
 ---------
-The pass CoroElide examines if the inlined coroutine is eligible for heap 
-allocation elision optimization. If so, it replaces 
+The pass CoroElide examines if the inlined coroutine is eligible for heap
+allocation elision optimization. If so, it replaces
 `coro.begin` intrinsic with an address of a coroutine frame placed on its caller
 and replaces `coro.alloc` and `coro.free` intrinsics with `false` and `null`
-respectively to remove the deallocation code. 
-This pass also replaces `coro.resume` and `coro.destroy` intrinsics with direct 
+respectively to remove the deallocation code.
+This pass also replaces `coro.resume` and `coro.destroy` intrinsics with direct
 calls to resume and destroy functions for a particular coroutine where possible.
 
 CoroCleanup
@@ -1773,7 +1773,7 @@
    allocas.
 
 #. The CoroElide optimization pass relies on coroutine ramp function to be
-   inlined. It would be beneficial to split the ramp function further to 
+   inlined. It would be beneficial to split the ramp function further to
    increase the chance that it will get inlined into its caller.
 
 #. Design a convention that would make it possible to apply coroutine heap
diff --git a/llvm/docs/DebuggingJITedCode.rst b/llvm/docs/DebuggingJITedCode.rst
index e158364..8e8d1ff 100644
--- a/llvm/docs/DebuggingJITedCode.rst
+++ b/llvm/docs/DebuggingJITedCode.rst
@@ -132,7 +132,7 @@
       7            f *= n;
       8        return f;
    -> 9    }
-      10  
+      10
       11   int main(int argc, char** argv)
       12   {
    (lldb) p f
@@ -156,7 +156,7 @@
       14           return -1;
       15       char firstletter = argv[1][0];
    -> 16       int result = compute_factorial(firstletter - '0');
-      17  
+      17
       18       // Returned result is clipped at 255...
       19       return result;
    (lldb) p result
@@ -166,7 +166,7 @@
    * thread #1, name = 'lli', stop reason = step over
       frame #0: 0x00007ffff7fd0098 JIT(0x45c2cb0)`main(argc=2, argv=0x00000000046122f0) at showdebug.c:19:12
       16       int result = compute_factorial(firstletter - '0');
-      17  
+      17
       18       // Returned result is clipped at 255...
    -> 19       return result;
       20   }
diff --git a/llvm/docs/DependenceGraphs/index.rst b/llvm/docs/DependenceGraphs/index.rst
index 8b7421d..7e8c73c 100644
--- a/llvm/docs/DependenceGraphs/index.rst
+++ b/llvm/docs/DependenceGraphs/index.rst
@@ -27,12 +27,12 @@
 instructions.
 
 As described in [1]_ the DDG uses graph abstraction to group nodes
-that are part of a strongly connected component of the graph 
+that are part of a strongly connected component of the graph
 into special nodes called pi-blocks. pi-blocks represent cycles of data
 dependency that prevent reordering transformations. Since any strongly
 connected component of the graph is a maximal subgraph of all the nodes
 that form a cycle, pi-blocks are at most one level deep. In other words,
-no pi-blocks are nested inside another pi-block, resulting in a 
+no pi-blocks are nested inside another pi-block, resulting in a
 hierarchical representation that is at most one level deep.
 
 
@@ -130,7 +130,7 @@
 graph described in [1]_ in the following ways:
 
   1. The graph nodes in the paper represent three main program components, namely *assignment statements*, *for loop headers* and *while loop headers*. In this implementation, DDG nodes naturally represent LLVM IR instructions. An assignment statement in this implementation typically involves a node representing the ``store`` instruction along with a number of individual nodes computing the right-hand-side of the assignment that connect to the ``store`` node via a def-use edge.  The loop header instructions are not represented as special nodes in this implementation because they have limited uses and can be easily identified, for example, through ``LoopAnalysis``.
-  2. The paper describes five types of dependency edges between nodes namely *loop dependency*, *flow-*, *anti-*, *output-*, and *input-* dependencies. In this implementation *memory* edges represent the *flow-*, *anti-*, *output-*, and *input-* dependencies. However, *loop dependencies* are not made explicit, because they mainly represent association between a loop structure and the program elements inside the loop and this association is fairly obvious in LLVM IR itself. 
+  2. The paper describes five types of dependency edges between nodes namely *loop dependency*, *flow-*, *anti-*, *output-*, and *input-* dependencies. In this implementation *memory* edges represent the *flow-*, *anti-*, *output-*, and *input-* dependencies. However, *loop dependencies* are not made explicit, because they mainly represent association between a loop structure and the program elements inside the loop and this association is fairly obvious in LLVM IR itself.
   3. The paper describes two types of pi-blocks; *recurrences* whose bodies are SCCs and *IN* nodes whose bodies are not part of any SCC. In this implementation, pi-blocks are only created for *recurrences*. *IN* nodes remain as simple DDG nodes in the graph.
 
 
diff --git a/llvm/docs/DeveloperPolicy.rst b/llvm/docs/DeveloperPolicy.rst
index 26444f9..758f218 100644
--- a/llvm/docs/DeveloperPolicy.rst
+++ b/llvm/docs/DeveloperPolicy.rst
@@ -357,7 +357,7 @@
 * It is customary to respond to the original commit email mentioning the
   revert.  This serves as both a notice to the original author that their
   patch was reverted, and helps others following llvm-commits track context.
-* Ideally, you should have a publicly reproducible test case ready to share.  
+* Ideally, you should have a publicly reproducible test case ready to share.
   Where possible, we encourage sharing of test cases in commit threads, or
   in PRs.  We encourage the reverter to minimize the test case and to prune
   dependencies where practical.  This even applies when reverting your own
@@ -648,17 +648,17 @@
 Working with the CI system
 --------------------------
 
-The main continuous integration (CI) tool for the LLVM project is the 
-`LLVM Buildbot <https://lab.llvm.org/buildbot/>`_. It uses different *builders* 
-to cover a wide variety of sub-projects and configurations. The builds are 
-executed on different *workers*. Builders and workers are configured and 
+The main continuous integration (CI) tool for the LLVM project is the
+`LLVM Buildbot <https://lab.llvm.org/buildbot/>`_. It uses different *builders*
+to cover a wide variety of sub-projects and configurations. The builds are
+executed on different *workers*. Builders and workers are configured and
 provided by community members.
 
-The Buildbot tracks the commits on the main branch and the release branches. 
+The Buildbot tracks the commits on the main branch and the release branches.
 This means that patches are built and tested after they are merged to the these
 branches (aka post-merge testing). This also means it's okay to break the build
 occasionally, as it's unreasonable to expect contributors to build and test
-their patch with every possible configuration. 
+their patch with every possible configuration.
 
 *If your commit broke the build:*
 
@@ -669,7 +669,7 @@
 
 *If someone else broke the build and this blocks your work*
 
-* Comment on the code review in `Phabricator <https://reviews.llvm.org/>`_ 
+* Comment on the code review in `Phabricator <https://reviews.llvm.org/>`_
   (if available) or email the author, explain the problem and how this impacts
   you. Add a link to the broken build and the error message so folks can
   understand the problem.
@@ -678,14 +678,14 @@
 *If a build/worker is permanently broken*
 
 * 1st step: contact the owner of the worker. You can find the name and contact
-  information for the *Admin* of worker on the page of the build in the 
+  information for the *Admin* of worker on the page of the build in the
   *Worker* tab:
 
   .. image:: buildbot_worker_contact.png
 
-* 2nd step: If the owner does not respond or fix the worker, please escalate 
+* 2nd step: If the owner does not respond or fix the worker, please escalate
   to Galina Kostanova, the maintainer of the BuildBot master.
-* 3rd step: If Galina could not help you, please escalate to the 
+* 3rd step: If Galina could not help you, please escalate to the
   `Infrastructure Working Group <mailto:iwg@llvm.org>`_.
 
 .. _new-llvm-components:
diff --git a/llvm/docs/FaultMaps.rst b/llvm/docs/FaultMaps.rst
index d63ff5a..a089a38 100644
--- a/llvm/docs/FaultMaps.rst
+++ b/llvm/docs/FaultMaps.rst
@@ -71,15 +71,15 @@
     %ptr = call i32* @get_ptr()
     %ptr_is_null = icmp i32* %ptr, null
     br i1 %ptr_is_null, label %is_null, label %not_null, !make.implicit !0
-  
+
   not_null:
     %t = load i32, i32* %ptr
     br label %do_something_with_t
-    
+
   is_null:
     call void @HFC()
     unreachable
-  
+
   !0 = !{}
 
 to control flow implicit in the instruction loading or storing through
@@ -90,7 +90,7 @@
     %ptr = call i32* @get_ptr()
     %t = load i32, i32* %ptr  ;; handler-pc = label %is_null
     br label %do_something_with_t
-    
+
   is_null:
     call void @HFC()
     unreachable
diff --git a/llvm/docs/GarbageCollection.rst b/llvm/docs/GarbageCollection.rst
index 800182d..9701aff 100644
--- a/llvm/docs/GarbageCollection.rst
+++ b/llvm/docs/GarbageCollection.rst
@@ -9,20 +9,20 @@
 ========
 
 This document covers how to integrate LLVM into a compiler for a language which
-supports garbage collection.  **Note that LLVM itself does not provide a 
-garbage collector.**  You must provide your own.  
+supports garbage collection.  **Note that LLVM itself does not provide a
+garbage collector.**  You must provide your own.
 
 Quick Start
 ============
 
-First, you should pick a collector strategy.  LLVM includes a number of built 
+First, you should pick a collector strategy.  LLVM includes a number of built
 in ones, but you can also implement a loadable plugin with a custom definition.
-Note that the collector strategy is a description of how LLVM should generate 
+Note that the collector strategy is a description of how LLVM should generate
 code such that it interacts with your collector and runtime, not a description
 of the collector itself.
 
-Next, mark your generated functions as using your chosen collector strategy.  
-From c++, you can call: 
+Next, mark your generated functions as using your chosen collector strategy.
+From c++, you can call:
 
 .. code-block:: c++
 
@@ -38,37 +38,37 @@
 
 When generating LLVM IR for your functions, you will need to:
 
-* Use ``@llvm.gcread`` and/or ``@llvm.gcwrite`` in place of standard load and 
-  store instructions.  These intrinsics are used to represent load and store 
-  barriers.  If you collector does not require such barriers, you can skip 
-  this step.  
+* Use ``@llvm.gcread`` and/or ``@llvm.gcwrite`` in place of standard load and
+  store instructions.  These intrinsics are used to represent load and store
+  barriers.  If you collector does not require such barriers, you can skip
+  this step.
 
-* Use the memory allocation routines provided by your garbage collector's 
+* Use the memory allocation routines provided by your garbage collector's
   runtime library.
 
-* If your collector requires them, generate type maps according to your 
-  runtime's binary interface.  LLVM is not involved in the process.  In 
-  particular, the LLVM type system is not suitable for conveying such 
+* If your collector requires them, generate type maps according to your
+  runtime's binary interface.  LLVM is not involved in the process.  In
+  particular, the LLVM type system is not suitable for conveying such
   information though the compiler.
 
-* Insert any coordination code required for interacting with your collector.  
+* Insert any coordination code required for interacting with your collector.
   Many collectors require running application code to periodically check a
-  flag and conditionally call a runtime function.  This is often referred to 
-  as a safepoint poll.  
+  flag and conditionally call a runtime function.  This is often referred to
+  as a safepoint poll.
 
-You will need to identify roots (i.e. references to heap objects your collector 
-needs to know about) in your generated IR, so that LLVM can encode them into 
-your final stack maps.  Depending on the collector strategy chosen, this is 
-accomplished by using either the ``@llvm.gcroot`` intrinsics or an 
-``gc.statepoint`` relocation sequence. 
+You will need to identify roots (i.e. references to heap objects your collector
+needs to know about) in your generated IR, so that LLVM can encode them into
+your final stack maps.  Depending on the collector strategy chosen, this is
+accomplished by using either the ``@llvm.gcroot`` intrinsics or an
+``gc.statepoint`` relocation sequence.
 
 Don't forget to create a root for each intermediate value that is generated when
-evaluating an expression.  In ``h(f(), g())``, the result of ``f()`` could 
+evaluating an expression.  In ``h(f(), g())``, the result of ``f()`` could
 easily be collected if evaluating ``g()`` triggers a collection.
 
-Finally, you need to link your runtime library with the generated program 
-executable (for a static compiler) or ensure the appropriate symbols are 
-available for the runtime linker (for a JIT compiler).  
+Finally, you need to link your runtime library with the generated program
+executable (for a static compiler) or ensure the appropriate symbols are
+available for the runtime linker (for a JIT compiler).
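
As a rough end-to-end illustration of the quick-start steps above (a hedged sketch, not part of this patch; the strategy name and helper function are assumptions), selecting a built in strategy for every generated function might look like:

.. code-block:: c++

  #include "llvm/IR/Function.h"
  #include "llvm/IR/Module.h"

  // Tag every defined function so LLVM emits the chosen strategy's stack maps.
  void adoptStatepointExampleGC(llvm::Module &M) {
    for (llvm::Function &F : M)
      if (!F.isDeclaration())
        F.setGC("statepoint-example");
  }

The same call works with any other registered strategy name, including one provided by a loadable plugin.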
 
 
 Introduction
@@ -136,15 +136,15 @@
 
 * reference counting
 
-We hope that the support built into the LLVM IR is sufficient to support a 
-broad class of garbage collected languages including Scheme, ML, Java, C#, 
+We hope that the support built into the LLVM IR is sufficient to support a
+broad class of garbage collected languages including Scheme, ML, Java, C#,
 Perl, Python, Lua, Ruby, other scripting languages, and more.
 
 Note that LLVM **does not itself provide a garbage collector** --- this should
 be part of your language's runtime library.  LLVM provides a framework for
 describing the garbage collector's requirements to the compiler.  In particular,
-LLVM provides support for generating stack maps at call sites, polling for a 
-safepoint, and emitting load and store barriers.  You can also extend LLVM - 
+LLVM provides support for generating stack maps at call sites, polling for a
+safepoint, and emitting load and store barriers.  You can also extend LLVM -
 possibly through a loadable :ref:`code generation plugin <plugin>` - to
 generate code and data structures which conform to the *binary interface*
 specified by the *runtime library*.  This is similar to the relationship between
@@ -183,12 +183,12 @@
 In general, LLVM's support for GC does not include features which can be
 adequately addressed with other features of the IR and does not specify a
 particular binary interface.  On the plus side, this means that you should be
-able to integrate LLVM with an existing runtime.  On the other hand, it can 
-have the effect of leaving a lot of work for the developer of a novel 
-language.  We try to mitigate this by providing built in collector strategy 
-descriptions that can work with many common collector designs and easy 
-extension points.  If you don't already have a specific binary interface 
-you need to support, we recommend trying to use one of these built in collector 
+able to integrate LLVM with an existing runtime.  On the other hand, it can
+have the effect of leaving a lot of work for the developer of a novel
+language.  We try to mitigate this by providing built in collector strategy
+descriptions that can work with many common collector designs and easy
+extension points.  If you don't already have a specific binary interface
+you need to support, we recommend trying to use one of these built in collector
 strategies.
 
 .. _gc_intrinsics:
@@ -198,8 +198,8 @@
 
 This section describes the garbage collection facilities provided by the
 :doc:`LLVM intermediate representation <LangRef>`.  The exact behavior of these
-IR features is specified by the selected :ref:`GC strategy description 
-<plugin>`. 
+IR features is specified by the selected :ref:`GC strategy description
+<plugin>`.
 
 Specifying GC code generation: ``gc "..."``
 -------------------------------------------
@@ -212,9 +212,9 @@
 compiler.  Its programmatic equivalent is the ``setGC`` method of ``Function``.
 
 Setting ``gc "name"`` on a function triggers a search for a matching subclass
-of GCStrategy.  Some collector strategies are built in.  You can add others 
+of GCStrategy.  Some collector strategies are built in.  You can add others
 using either the loadable plugin mechanism, or by patching your copy of LLVM.
-It is the selected GC strategy which defines the exact nature of the code 
+It is the selected GC strategy which defines the exact nature of the code
 generated to support GC.  If none is found, the compiler will raise an error.
 
 Specifying the GC style on a per-function basis allows LLVM to link together
@@ -226,17 +226,17 @@
 ----------------------------------
 
 LLVM currently supports two different mechanisms for describing references in
-compiled code at safepoints.  ``llvm.gcroot`` is the older mechanism; 
-``gc.statepoint`` has been added more recently.  At the moment, you can choose 
-either implementation (on a per :ref:`GC strategy <plugin>` basis).  Longer 
-term, we will probably either migrate away from ``llvm.gcroot`` entirely, or 
-substantially merge their implementations. Note that most new development 
-work is focused on ``gc.statepoint``.  
+compiled code at safepoints.  ``llvm.gcroot`` is the older mechanism;
+``gc.statepoint`` has been added more recently.  At the moment, you can choose
+either implementation (on a per :ref:`GC strategy <plugin>` basis).  Longer
+term, we will probably either migrate away from ``llvm.gcroot`` entirely, or
+substantially merge their implementations. Note that most new development
+work is focused on ``gc.statepoint``.
 
 Using ``gc.statepoint``
 ^^^^^^^^^^^^^^^^^^^^^^^^
-:doc:`This page <Statepoints>` contains detailed documentation for 
-``gc.statepoint``. 
+:doc:`This page <Statepoints>` contains detailed documentation for
+``gc.statepoint``.
 
 Using ``llvm.gcwrite``
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -247,8 +247,8 @@
 
 The ``llvm.gcroot`` intrinsic is used to inform LLVM that a stack variable
 references an object on the heap and is to be tracked for garbage collection.
-The exact impact on generated code is specified by the Function's selected 
-:ref:`GC strategy <plugin>`.  All calls to ``llvm.gcroot`` **must** reside 
+The exact impact on generated code is specified by the Function's selected
+:ref:`GC strategy <plugin>`.  All calls to ``llvm.gcroot`` **must** reside
 inside the first basic block.
 
 The first argument **must** be a value referring to an alloca instruction or a
@@ -256,12 +256,12 @@
 associated with the pointer, and **must** be a constant or global value
 address.  If your target collector uses tags, use a null pointer for metadata.
 
-A compiler which performs manual SSA construction **must** ensure that SSA 
+A compiler which performs manual SSA construction **must** ensure that SSA
 values representing GC references are stored into the alloca passed to the
-respective ``gcroot`` before every call site and reloaded after every call.  
-A compiler which uses mem2reg to raise imperative code using ``alloca`` into 
-SSA form need only add a call to ``@llvm.gcroot`` for those variables which 
-are pointers into the GC heap.  
+respective ``gcroot`` before every call site and reloaded after every call.
+A compiler which uses mem2reg to raise imperative code using ``alloca`` into
+SSA form need only add a call to ``@llvm.gcroot`` for those variables which
+are pointers into the GC heap.
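
As a hedged sketch of the manual-SSA pattern just described (the builder, module, callee, and slot type follow the historical ``i8*`` convention and are assumptions, not part of this patch):

.. code-block:: c++

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/Intrinsics.h"
  #include "llvm/IR/Module.h"

  // Root a GC reference in a stack slot and spill/reload it around a call.
  // The alloca and the @llvm.gcroot call are assumed to be emitted while the
  // builder is positioned in the function's first basic block, and ObjPtr is
  // assumed to already be an i8*-typed GC reference.
  llvm::Value *emitRootedCall(llvm::IRBuilder<> &B, llvm::Module &M,
                              llvm::Value *ObjPtr, llvm::FunctionCallee Callee) {
    llvm::PointerType *I8Ptr = llvm::Type::getInt8PtrTy(M.getContext());
    llvm::AllocaInst *Slot = B.CreateAlloca(I8Ptr, nullptr, "gc.slot");
    llvm::Function *GCRoot =
        llvm::Intrinsic::getDeclaration(&M, llvm::Intrinsic::gcroot);
    // Null metadata: assume the target collector does not use tags.
    B.CreateCall(GCRoot, {Slot, llvm::ConstantPointerNull::get(I8Ptr)});

    // Spill before the call and reload after it, so the collector is free to
    // relocate the object at the safepoint.
    B.CreateStore(ObjPtr, Slot);
    B.CreateCall(Callee);
    return B.CreateLoad(I8Ptr, Slot, "obj.relocated");
  }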
 
 It is also important to mark intermediate values with ``llvm.gcroot``.  For
 example, consider ``h(f(), g())``.  Beware leaking the result of ``f()`` in the
@@ -343,13 +343,13 @@
 (although a particular :ref:`collector strategy <plugin>` might).  However, it
 would be an unusual collector that violated it.
 
-The use of these intrinsics is naturally optional if the target GC does not 
-require the corresponding barrier.  The GC strategy used with such a collector 
-should replace the intrinsic calls with the corresponding ``load`` or 
+The use of these intrinsics is naturally optional if the target GC does not
+require the corresponding barrier.  The GC strategy used with such a collector
+should replace the intrinsic calls with the corresponding ``load`` or
 ``store`` instruction if they are used.
 
-One known deficiency with the current design is that the barrier intrinsics do 
-not include the size or alignment of the underlying operation performed.  It is 
+One known deficiency with the current design is that the barrier intrinsics do
+not include the size or alignment of the underlying operation performed.  It is
 currently assumed that the operation is of pointer size and the alignment is
 assumed to be the target machine's default alignment.
 
@@ -391,7 +391,7 @@
 Built In GC Strategies
 ======================
 
-LLVM includes built in support for several varieties of garbage collectors.  
+LLVM includes built in support for several varieties of garbage collectors.
 
 The Shadow Stack GC
 ----------------------
@@ -484,15 +484,15 @@
 The 'Erlang' and 'Ocaml' GCs
 -----------------------------
 
-LLVM ships with two example collectors which leverage the ``gcroot`` 
-mechanisms.  To our knowledge, these are not actually used by any language 
-runtime, but they do provide a reasonable starting point for someone interested 
-in writing an ``gcroot`` compatible GC plugin.  In particular, these are the 
-only in tree examples of how to produce a custom binary stack map format using 
+LLVM ships with two example collectors which leverage the ``gcroot``
+mechanisms.  To our knowledge, these are not actually used by any language
+runtime, but they do provide a reasonable starting point for someone interested
+in writing a ``gcroot``-compatible GC plugin.  In particular, these are the
+only in-tree examples of how to produce a custom binary stack map format using
 a ``gcroot`` strategy.
 
-As there names imply, the binary format produced is intended to model that 
-used by the Erlang and OCaml compilers respectively.  
+As their names imply, the binary format produced is intended to model that
+used by the Erlang and OCaml compilers respectively.
 
 .. _statepoint_example_gc:
 
@@ -503,19 +503,19 @@
 
   F.setGC("statepoint-example");
 
-This GC provides an example of how one might use the infrastructure provided 
-by ``gc.statepoint``. This example GC is compatible with the 
-:ref:`PlaceSafepoints` and :ref:`RewriteStatepointsForGC` utility passes 
-which simplify ``gc.statepoint`` sequence insertion. If you need to build a 
+This GC provides an example of how one might use the infrastructure provided
+by ``gc.statepoint``. This example GC is compatible with the
+:ref:`PlaceSafepoints` and :ref:`RewriteStatepointsForGC` utility passes
+which simplify ``gc.statepoint`` sequence insertion. If you need to build a
 custom GC strategy around the ``gc.statepoints`` mechanisms, it is recommended
 that you use this one as a starting point.
 
-This GC strategy does not support read or write barriers.  As a result, these 
+This GC strategy does not support read or write barriers.  As a result, these
 intrinsics are lowered to normal loads and stores.
 
-The stack map format generated by this GC strategy can be found in the 
-:ref:`stackmap-section` using a format documented :ref:`here 
-<statepoint-stackmap-format>`. This format is intended to be the standard 
+The stack map format generated by this GC strategy can be found in the
+:ref:`stackmap-section` using a format documented :ref:`here
+<statepoint-stackmap-format>`. This format is intended to be the standard
 format supported by LLVM going forward.
 
 The CoreCLR GC
@@ -525,15 +525,15 @@
 
   F.setGC("coreclr");
 
-This GC leverages the ``gc.statepoint`` mechanism to support the 
+This GC leverages the ``gc.statepoint`` mechanism to support the
 `CoreCLR <https://github.com/dotnet/coreclr>`__ runtime.
 
-Support for this GC strategy is a work in progress. This strategy will 
-differ from 
-:ref:`statepoint-example GC<statepoint_example_gc>` strategy in 
+Support for this GC strategy is a work in progress. This strategy will
+differ from
+the :ref:`statepoint-example GC<statepoint_example_gc>` strategy in
 certain aspects, such as:
 
-* Base-pointers of interior pointers are not explicitly 
+* Base-pointers of interior pointers are not explicitly
   tracked and reported.
 
 * A different format is used for encoding stack maps.
@@ -545,24 +545,24 @@
 ====================
 
 If none of the built in GC strategy descriptions above meets your needs, you will
-need to define a custom GCStrategy and possibly, a custom LLVM pass to perform 
-lowering.  Your best example of where to start defining a custom GCStrategy 
+need to define a custom GCStrategy and, possibly, a custom LLVM pass to perform
+lowering.  Your best example of where to start defining a custom GCStrategy
 would be to look at one of the built in strategies.
 
 You may be able to structure this additional code as a loadable plugin library.
-Loadable plugins are sufficient if all you need is to enable a different 
-combination of built in functionality, but if you need to provide a custom 
-lowering pass, you will need to build a patched version of LLVM.  If you think 
-you need a patched build, please ask for advice on llvm-dev.  There may be an 
-easy way we can extend the support to make it work for your use case without 
-requiring a custom build.  
+Loadable plugins are sufficient if all you need is to enable a different
+combination of built in functionality, but if you need to provide a custom
+lowering pass, you will need to build a patched version of LLVM.  If you think
+you need a patched build, please ask for advice on llvm-dev.  There may be an
+easy way we can extend the support to make it work for your use case without
+requiring a custom build.
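
For orientation (a hedged sketch mirroring the shape of the built in strategies; the class name, header path, and strategy string are assumptions, not part of this patch), a strategy is a ``GCStrategy`` subclass registered with ``GCRegistry``:

.. code-block:: c++

  // The header has moved over time; recent trees keep it in llvm/IR/GCStrategy.h.
  #include "llvm/IR/GCStrategy.h"

  using namespace llvm;

  namespace {
  // A do-nothing strategy: gcroot-style rooting, no custom lowering.
  class MyGC : public GCStrategy {
  public:
    MyGC() = default;
  };
  } // end anonymous namespace

  // Lets `gc "mygc"` on a function resolve to this strategy, whether the code
  // is linked in directly or loaded as a plugin.
  static GCRegistry::Add<MyGC> X("mygc", "My bespoke garbage collector.");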
 
 Collector Requirements
 ----------------------
 
 You should be able to leverage any existing collector library that includes the following elements:
 
-#. A memory allocator which exposes an allocation function your compiled 
+#. A memory allocator which exposes an allocation function your compiled
    code can call.
 
 #. A binary format for the stack map.  A stack map describes the location
@@ -571,14 +571,14 @@
    which conservatively scan the stack don't require such a structure.
 
 #. A stack crawler to discover functions on the call stack, and enumerate the
-   references listed in the stack map for each call site.  
+   references listed in the stack map for each call site.
 
-#. A mechanism for identifying references in global locations (e.g. global 
+#. A mechanism for identifying references in global locations (e.g. global
    variables).
 
 #. If your collector requires them, an LLVM IR implementation of your collector's
-   load and store barriers.  Note that since many collectors don't require 
-   barriers at all, LLVM defaults to lowering such barriers to normal loads 
+   load and store barriers.  Note that since many collectors don't require
+   barriers at all, LLVM defaults to lowering such barriers to normal loads
    and stores unless you arrange otherwise.
 
 
@@ -852,12 +852,12 @@
 For GCs which use barriers or unusual treatment of stack roots, the
 implementor is responsible for providing a custom pass to lower the
 intrinsics with the desired semantics.  If you have opted in to custom
-lowering of a particular intrinsic your pass **must** eliminate all 
+lowering of a particular intrinsic your pass **must** eliminate all
 instances of the corresponding intrinsic in functions which opt in to
-your GC.  The best example of such a pass is the ShadowStackGC and it's 
-ShadowStackGCLowering pass.  
+your GC.  The best example of such a pass is the ShadowStackGC and its
+ShadowStackGCLowering pass.
 
-There is currently no way to register such a custom lowering pass 
+There is currently no way to register such a custom lowering pass
 without building a custom copy of LLVM.
 
 .. _safe-points:
diff --git a/llvm/docs/GettingInvolved.rst b/llvm/docs/GettingInvolved.rst
index 3f38b6b..eb3f7ed 100644
--- a/llvm/docs/GettingInvolved.rst
+++ b/llvm/docs/GettingInvolved.rst
@@ -102,8 +102,8 @@
 -------------
 
 If you can't find what you need in these docs, try consulting the mailing
-lists. In addition to the traditional mailing lists there is also a 
-`Discourse server <https://llvm.discourse.group>`_ available. 
+lists. In addition to the traditional mailing lists there is also a
+`Discourse server <https://llvm.discourse.group>`_ available.
 
 `Developer's List (llvm-dev)`__
   This list is for people who want to be included in technical discussions of
@@ -161,7 +161,7 @@
     - Every 2 weeks on Thursday
     - `ics <https://calendar.google.com/calendar/ical/lowrisc.org_0n5pkesfjcnp0bh5hps1p0bd80%40group.calendar.google.com/public/basic.ics>`__
       `gcal <https://calendar.google.com/calendar/b/1?cid=bG93cmlzYy5vcmdfMG41cGtlc2ZqY25wMGJoNWhwczFwMGJkODBAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ>`__
-     - 
+     -
   * - Scalable Vectors and Arm SVE
     - Monthly, every 3rd Tuesday
     - `ics <https://calendar.google.com/calendar/ical/bjms39pe6k6bo5egtsp7don414%40group.calendar.google.com/public/basic.ics>`__
@@ -178,27 +178,27 @@
     - `Minutes/docs <https://docs.google.com/document/d/1GLCE8cl7goCaLSiM9j1eIq5IqeXt6_YTY2UEcC4jmsg/edit?usp=sharing>`__
   * - `CIRCT <https://github.com/llvm/circt>`__
     - Weekly, on Wednesday
-     - 
+     -
     - `Minutes/docs <https://docs.google.com/document/d/1fOSRdyZR2w75D87yU2Ma9h2-_lEPL4NxvhJGJd-s5pk/edit#heading=h.mulvhjtr8dk9>`__
   * - `MLIR <https://mlir.llvm.org>`__ design meetings
     - Weekly, on Thursdays
-     - 
+     -
     - `Minutes/docs <https://docs.google.com/document/d/1y_9f1AbfgcoVdJh4_aM6-BaSHvrHl8zuA5G4jv_94K8/edit#heading=h.cite1kolful9>`__
   * - flang
     - Multiple meeting series, `documented here <https://github.com/llvm/llvm-project/blob/main/flang/docs/GettingInvolved.md#calls>`__
-     - 
-     - 
+     -
+     -
   * - OpenMP
     - Multiple meeting series, `documented here <https://openmp.llvm.org/docs/SupportAndFAQ.html>`__
-     - 
-     - 
+     -
+     -
   * - LLVM Alias Analysis
     - Every 4 weeks on Tuesdays
     - `ics <http://lists.llvm.org/pipermail/llvm-dev/attachments/20201103/a3499a67/attachment-0001.ics>`__
     - `Minutes/docs <https://docs.google.com/document/d/17U-WvX8qyKc3S36YUKr3xfF-GHunWyYowXbxEdpHscw>`__
   * - Windows/COFF related developments
     - Every 2 months on Thursday
-     - 
+     -
     - `Minutes/docs <https://docs.google.com/document/d/1A-W0Sas_oHWTEl_x_djZYoRtzAdTONMW_6l1BH9G6Bo/edit?usp=sharing>`__
   * - Vector Predication
     - Every 2 weeks on Tuesdays, 3pm UTC
@@ -233,11 +233,11 @@
 * clang-bot - A `geordi <http://www.eelis.net/geordi/>`_ instance running
   near-trunk clang instead of gcc.
 
-In addition to the traditional IRC there is a 
-`Discord <https://discord.com/channels/636084430946959380/636725486533345280>`_ 
-chat server available. To sign up, please use this 
+In addition to the traditional IRC there is a
+`Discord <https://discord.com/channels/636084430946959380/636725486533345280>`_
+chat server available. To sign up, please use this
 `invitation link <https://discord.com/invite/xS7Z362>`_.
-  
+
 
 .. _meetups-social-events:
 
diff --git a/llvm/docs/GettingStartedVS.rst b/llvm/docs/GettingStartedVS.rst
index b9e294f..24d813c 100644
--- a/llvm/docs/GettingStartedVS.rst
+++ b/llvm/docs/GettingStartedVS.rst
@@ -112,7 +112,7 @@
 
      pip install psutil
      git clone https://github.com/llvm/llvm-project.git llvm
- 
+
  Instead of ``git clone`` you may download a compressed source distribution
  from the `releases page <https://github.com/llvm/llvm-project/releases>`_.
  Select the last link: ``Source code (zip)`` and unpack the downloaded file using
@@ -170,7 +170,7 @@
    You can run LLVM tests by merely building the project "check-all". The test
    results will be shown in the VS output window. Once the build succeeds, you
    have verified a working LLVM development environment!
-   
+
    You should not see any unexpected failures, but will see many unsupported
    tests and expected failures:
 
@@ -195,10 +195,10 @@
    choco install -y git cmake python3
    pip3 install psutil
 
-There is also a Windows 
-`Dockerfile <https://github.com/llvm/llvm-zorg/blob/main/buildbot/google/docker/windows-base-vscode2019/Dockerfile>`_ 
+There is also a Windows
+`Dockerfile <https://github.com/llvm/llvm-zorg/blob/main/buildbot/google/docker/windows-base-vscode2019/Dockerfile>`_
 with the entire build tool chain. This can be used to test the build with a
-tool chain different from your host installation or to create build servers. 
+tool chain different from your host installation or to create build servers.
 
 Next steps
 ==========
diff --git a/llvm/docs/GitBisecting.rst b/llvm/docs/GitBisecting.rst
index 81876c7..4d12191 100644
--- a/llvm/docs/GitBisecting.rst
+++ b/llvm/docs/GitBisecting.rst
@@ -63,7 +63,7 @@
 
 To make sure your run script works, it's a good idea to run ``./run.sh`` by
 hand and tweak the script until it works, then run ``git bisect good`` or
-``git bisect bad`` manually once based on the result of the script 
+``git bisect bad`` manually once based on the result of the script
 (check ``echo $?`` after your script ran), and only then run ``git bisect run
 ./run.sh``. Don't forget to mark your run script as executable -- ``git bisect
 run`` doesn't check for that, it just assumes the run script failed each time.
@@ -85,7 +85,7 @@
      A-o-o-......-o-D-o-o-HEAD
                    /
        B-o-...-o-C-
- 
+
 ``A`` is the first commit in LLVM ever, ``97724f18c79c``.
 
 ``B`` is the first commit in MLIR, ``aed0d21a62db``.
diff --git a/llvm/docs/GlobalISel/GenericOpcode.rst b/llvm/docs/GlobalISel/GenericOpcode.rst
index 32dd0ae..4b35066 100644
--- a/llvm/docs/GlobalISel/GenericOpcode.rst
+++ b/llvm/docs/GlobalISel/GenericOpcode.rst
@@ -811,9 +811,9 @@
 G_JUMP_TABLE
 ^^^^^^^^^^^^
 
-Generates a pointer to the address of the jump table specified by the source 
+Generates a pointer to the address of the jump table specified by the source
 operand. The source operand is a jump table index.
-G_JUMP_TABLE can be used in conjunction with G_BRJT to support jump table 
+G_JUMP_TABLE can be used in conjunction with G_BRJT to support jump table
 codegen with GlobalISel.
 
 .. code-block:: none
diff --git a/llvm/docs/GlobalISel/Legalizer.rst b/llvm/docs/GlobalISel/Legalizer.rst
index fdde42f..1ff7b30 100644
--- a/llvm/docs/GlobalISel/Legalizer.rst
+++ b/llvm/docs/GlobalISel/Legalizer.rst
@@ -235,14 +235,14 @@
 
 * ``widenScalarToNextPow2()`` is like ``widenScalarIf()`` but is satisfied iff the type
   size in bits is not a power of 2 and selects a target type that is the next
-  largest power of 2. 
+  largest power of 2.
 
 .. _clampscalar:
 
 * ``minScalar()`` is like ``widenScalarIf()`` but is satisfied iff the type
   size in bits is smaller than the given minimum and selects the minimum as the
   target type. Similarly, there is also a ``maxScalar()`` for the maximum and a
-  ``clampScalar()`` to do both at once. 
+  ``clampScalar()`` to do both at once.
 
 * ``minScalarSameAs()`` is like ``minScalar()`` but the minimum is taken from another
   type index.
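
As a hedged sketch of how these rules are combined (class name and opcode choice are assumptions, not part of this patch; depending on the LLVM version a final call to ``computeTables()`` may also be required), a target's ``LegalizerInfo`` constructor might contain:

.. code-block:: c++

  #include "llvm/CodeGen/GlobalISel/LegalizerInfo.h"
  #include "llvm/CodeGen/TargetOpcodes.h"

  using namespace llvm;

  struct MyTargetLegalizerInfo : public LegalizerInfo {
    MyTargetLegalizerInfo() {
      const LLT s32 = LLT::scalar(32);
      const LLT s64 = LLT::scalar(64);
      getActionDefinitionsBuilder(TargetOpcode::G_ADD)
          .legalFor({s32, s64})
          .widenScalarToNextPow2(0)  // round odd widths up to a power of 2
          .clampScalar(0, s32, s64); // then clamp type index 0 into [s32, s64]
    }
  };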
diff --git a/llvm/docs/HowToAddABuilder.rst b/llvm/docs/HowToAddABuilder.rst
index 95dcf03..974bb5f 100644
--- a/llvm/docs/HowToAddABuilder.rst
+++ b/llvm/docs/HowToAddABuilder.rst
@@ -112,7 +112,7 @@
    to see if it works.
 
 #. Send a patch which adds your build worker and your builder to
-   `zorg <https://github.com/llvm/llvm-zorg>`_. Use the typical LLVM 
+   `zorg <https://github.com/llvm/llvm-zorg>`_. Use the typical LLVM
    `workflow <https://llvm.org/docs/Contributing.html#how-to-submit-a-patch>`_.
 
    * workers are added to ``buildbot/osuosl/master/config/workers.py``
diff --git a/llvm/docs/HowToBuildWindowsItaniumPrograms.rst b/llvm/docs/HowToBuildWindowsItaniumPrograms.rst
index 9823641..883c186 100644
--- a/llvm/docs/HowToBuildWindowsItaniumPrograms.rst
+++ b/llvm/docs/HowToBuildWindowsItaniumPrograms.rst
@@ -48,7 +48,7 @@
 In the Itanium C++ ABI the first member of an object is a pointer to the vtable
 for its class. The vtable is often emitted into the object file with the key function
 and must be imported for classes marked dllimport. The pointers must be globally
-unique. Unfortunately, the COFF/PE file format does not provide a mechanism to 
+unique. Unfortunately, the COFF/PE file format does not provide a mechanism to
 store a runtime address from another DLL into this pointer (although runtime
 addresses are patched into the IAT). Therefore, the compiler must emit some code,
 that runs after IAT patching but before anything that might use the vtable pointers,
@@ -58,7 +58,7 @@
 programs to link we currently rely on the -auto-import switch in LLD to auto-import
 references to __cxxabiv1::__class_type_info pointers (see: https://reviews.llvm.org/D43184
 for a related discussion). This allows for linking; but, code that actually uses
-such fields will not work as they these will not be fixed up at runtime. See 
+such fields will not work as these will not be fixed up at runtime. See
 _pei386_runtime_relocator which handles the runtime component of the autoimporting
 scheme used for mingw and comments in https://reviews.llvm.org/D43184 and
 https://reviews.llvm.org/D89518 for more.
diff --git a/llvm/docs/HowToReleaseLLVM.rst b/llvm/docs/HowToReleaseLLVM.rst
index 2fce477..bd56570 100644
--- a/llvm/docs/HowToReleaseLLVM.rst
+++ b/llvm/docs/HowToReleaseLLVM.rst
@@ -152,7 +152,7 @@
 That process will perform both Release+Asserts and Release builds but only
 pack the Release build for upload. You should use the Release+Asserts sysroot,
 normally under ``final/Phase3/Release+Asserts/llvmCore-3.8.1-RCn.install/``,
-for test-suite and run-time benchmarks, to make sure nothing serious has 
+for test-suite and run-time benchmarks, to make sure nothing serious has
 passed through the net. For compile-time benchmarks, use the Release version.
 
 The minimum required version of the tools you'll need are :doc:`here <GettingStarted>`
@@ -375,4 +375,3 @@
 
 Send an email to the list announcing the release, pointing people to all the
 relevant documentation, download pages and bugs fixed.
-
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index 7540a77..aaec2a4 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -446,7 +446,7 @@
     - On iOS platforms, we use AAPCS-VFP calling convention.
 "``swifttailcc``"
     This calling convention is like ``swiftcc`` in most respects, but also the
-    callee pops the argument area of the stack so that mandatory tail calls are 
+    callee pops the argument area of the stack so that mandatory tail calls are
     possible as in ``tailcc``.
 "``cfguard_checkcc``" - Windows Control Flow Guard (Check mechanism)
     This calling convention is used for the Control Flow Guard check function,
@@ -623,7 +623,7 @@
 appropriate fencing is inserted.  Since the appropriate fencing is
 implementation defined, the optimizer can't do the latter.  The former is
 challenging as many commonly expected properties, such as
-``ptrtoint(v)-ptrtoint(v) == 0``, don't hold for non-integral types.  
+``ptrtoint(v)-ptrtoint(v) == 0``, don't hold for non-integral types.
 
 .. _globalvars:
 
@@ -12230,7 +12230,7 @@
 
       declare token
         @llvm.experimental.gc.statepoint(i64 <id>, i32 <num patch bytes>,
-                       func_type <target>, 
+                       func_type <target>,
                        i64 <#call args>, i64 <flags>,
                        ... (call parameters),
                        i64 0, i64 0)
@@ -12340,7 +12340,7 @@
 
 The first and only argument is the ``gc.statepoint`` which starts
 the safepoint sequence of which this ``gc.result`` is a part.
-Despite the typing of this as a generic token, *only* the value defined 
+Despite the typing of this as a generic token, *only* the value defined
 by a ``gc.statepoint`` is legal here.
 
 Semantics:
@@ -12364,8 +12364,8 @@
 ::
 
       declare <pointer type>
-        @llvm.experimental.gc.relocate(token %statepoint_token, 
-                                       i32 %base_offset, 
+        @llvm.experimental.gc.relocate(token %statepoint_token,
+                                       i32 %base_offset,
                                        i32 %pointer_offset)
 
 Overview:
@@ -12379,7 +12379,7 @@
 
 The first argument is the ``gc.statepoint`` which starts the
 safepoint sequence of which this ``gc.relocation`` is a part.
-Despite the typing of this as a generic token, *only* the value defined 
+Despite the typing of this as a generic token, *only* the value defined
 by a ``gc.statepoint`` is legal here.
 
 The second and third arguments are both indices into operands of the
diff --git a/llvm/docs/MCJITDesignAndImplementation.rst b/llvm/docs/MCJITDesignAndImplementation.rst
index 63a9e40..ca38cba 100644
--- a/llvm/docs/MCJITDesignAndImplementation.rst
+++ b/llvm/docs/MCJITDesignAndImplementation.rst
@@ -30,7 +30,7 @@
 the Module that was used to create the EngineBuilder.
 
 .. image:: MCJIT-engine-builder.png
- 
+
 EngineBuilder::create will call the static MCJIT::createJIT function,
 passing in its pointers to the module, memory manager and target machine
 objects, all of which will subsequently be owned by the MCJIT object.
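
As a hedged sketch of the client side of that flow (the helper name and error handling are assumptions, not part of this patch):

.. code-block:: c++

  #include "llvm/ExecutionEngine/ExecutionEngine.h"
  #include "llvm/ExecutionEngine/MCJIT.h" // links in MCJIT so create() can return one
  #include "llvm/IR/Module.h"
  #include <memory>
  #include <string>

  llvm::ExecutionEngine *createMCJIT(std::unique_ptr<llvm::Module> M) {
    std::string Err;
    llvm::ExecutionEngine *EE = llvm::EngineBuilder(std::move(M))
                                    .setErrorStr(&Err)
                                    .setEngineKind(llvm::EngineKind::JIT)
                                    .create();
    // On failure, create() returns null and Err describes what went wrong.
    return EE;
  }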
@@ -41,7 +41,7 @@
 gets created when an object is loaded.
 
 .. image:: MCJIT-creation.png
- 
+
 Upon creation, MCJIT holds a pointer to the Module object that it received
 from EngineBuilder but it does not immediately generate code for this
 module.  Code generation is deferred until either the
@@ -61,7 +61,7 @@
 on the Module with which it was created.
 
 .. image:: MCJIT-load.png
- 
+
 The PassManager::run call causes the MC code generation mechanisms to emit
 a complete relocatable binary object image (either in either ELF or MachO
 format, depending on the target) into the ObjectBufferStream object, which
@@ -85,7 +85,7 @@
 actual loading.
 
 .. image:: MCJIT-dyld-load.png
- 
+
 RuntimeDyldImpl::loadObject begins by creating an ObjectImage instance
 from the ObjectBuffer it received.  ObjectImage, which wraps the
 ObjectFile class, is a helper class which parses the binary object image
@@ -106,7 +106,7 @@
 an external symbol relocation map.
 
 .. image:: MCJIT-load-object.png
- 
+
 When RuntimeDyldImpl::loadObject returns, all of the code and data
 sections for the object will have been loaded into memory allocated by the
 memory manager and relocation information will have been prepared, but the
@@ -166,7 +166,7 @@
 likely located in a different section.
 
 .. image:: MCJIT-resolve-relocations.png
- 
+
 Once relocations have been applied as described above, MCJIT calls
 RuntimeDyld::getEHFrameSection, and if a non-zero result is returned
 passes the section data to the memory manager's registerEHFrames method.
@@ -177,4 +177,3 @@
 method, the memory manager will invalidate the target code cache, if
 necessary, and apply final permissions to the memory pages it has
 allocated for code and data memory.
-
diff --git a/llvm/docs/NVPTXUsage.rst b/llvm/docs/NVPTXUsage.rst
index 38222af..e4b5ace 100644
--- a/llvm/docs/NVPTXUsage.rst
+++ b/llvm/docs/NVPTXUsage.rst
@@ -16,8 +16,8 @@
 end, including a description of the conventions used and the set of accepted
 LLVM IR.
 
-.. note:: 
-   
+.. note::
+
    This document assumes a basic familiarity with CUDA and the PTX
    assembly language. Information about the CUDA Driver API and the PTX assembly
    language can be found in the `CUDA documentation
@@ -519,7 +519,7 @@
 Dissecting the Kernel
 ---------------------
 
-Now let us dissect the LLVM IR that makes up this kernel. 
+Now let us dissect the LLVM IR that makes up this kernel.
 
 Data Layout
 ^^^^^^^^^^^
@@ -969,4 +969,3 @@
     st.global.f32   [%rl1], %f110;
     ret;
   }
-
diff --git a/llvm/docs/NewPassManager.rst b/llvm/docs/NewPassManager.rst
index cedd2c7e..9074603 100644
--- a/llvm/docs/NewPassManager.rst
+++ b/llvm/docs/NewPassManager.rst
@@ -287,7 +287,7 @@
   PreservedAnalyses PA;
   PA.preserveSet<CFGAnalyses>();
   return PA;
-  
+
 The pass manager will call the analysis manager's ``invalidate()`` method
 with the pass's returned ``PreservedAnalyses``. This can also be done
 manually within the pass:
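
The code block following that sentence falls outside this hunk; as a hedged sketch (the pass name and the choice of what to preserve are assumptions, not part of this patch), manual invalidation inside a function pass might look like:

.. code-block:: c++

  #include "llvm/IR/Function.h"
  #include "llvm/IR/PassManager.h"

  using namespace llvm;

  struct MyPass : PassInfoMixin<MyPass> {
    PreservedAnalyses run(Function &F, FunctionAnalysisManager &FAM) {
      bool Changed = false; // ... transform F and record whether it changed ...

      if (!Changed)
        return PreservedAnalyses::all();

      PreservedAnalyses PA;
      PA.preserveSet<CFGAnalyses>(); // assume the transform left the CFG alone

      // Eagerly drop stale results instead of waiting for the pass manager.
      FAM.invalidate(F, PA);
      return PA;
    }
  };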
diff --git a/llvm/docs/PDB/CodeViewTypes.rst b/llvm/docs/PDB/CodeViewTypes.rst
index 99c05e9..64fe745 100644
--- a/llvm/docs/PDB/CodeViewTypes.rst
+++ b/llvm/docs/PDB/CodeViewTypes.rst
@@ -33,7 +33,7 @@
 Padding is implemented by inserting a decreasing sequence of `<_padding_records>`
 that terminates with ``LF_PAD0``.
 
-The final category of record is a ``member record``.  One particular leaf type -- 
+The final category of record is a ``member record``.  One particular leaf type --
 ``LF_FIELDLIST`` -- contains a series of embedded records.  While the outer
 ``LF_FIELDLIST`` describes its length (like any other leaf record), the embedded
 records -- called ``member records`` -- do not.
diff --git a/llvm/docs/PDB/DbiStream.rst b/llvm/docs/PDB/DbiStream.rst
index bdb5b56..9e939a9 100644
--- a/llvm/docs/PDB/DbiStream.rst
+++ b/llvm/docs/PDB/DbiStream.rst
@@ -51,7 +51,7 @@
     uint16_t Machine;
     uint32_t Padding;
   };
-  
+
 - **VersionSignature** - Unknown meaning.  Appears to always be ``-1``.
 
 - **VersionHeader** - A value from the following enum.
@@ -71,11 +71,11 @@
 
 - **Age** - The number of times the PDB has been written.  Equal to the same
   field from the :ref:`PDB Stream header <pdb_stream_header>`.
-  
+
 - **GlobalStreamIndex** - The index of the :doc:`Global Symbol Stream <GlobalStream>`,
   which contains CodeView symbol records for all global symbols.  Actual records
   are stored in the symbol record stream, and are referenced from this stream.
-  
+
 - **BuildNumber** - A bitfield containing values representing the major and minor
   version number of the toolchain (e.g. 12.0 for MSVC 2013) used to build the
   program, with the following layout:
@@ -90,19 +90,19 @@
 If it is ``false``, the layout above does not apply and the reader should consult
 the `Microsoft Source Code <https://github.com/Microsoft/microsoft-pdb>`__ for
 further guidance.
-  
+
 - **PublicStreamIndex** - The index of the :doc:`Public Symbol Stream <PublicStream>`,
   which contains CodeView symbol records for all public symbols.  Actual records
   are stored in the symbol record stream, and are referenced from this stream.
-  
+
 - **PdbDllVersion** - The version number of ``mspdbXXXX.dll`` used to produce this
   PDB.  Note this obviously does not apply for LLVM as LLVM does not use ``mspdb.dll``.
-  
+
 - **SymRecordStream** - The stream containing all CodeView symbol records used
   by the program.  This is used for deduplication, so that many different
   compilands can refer to the same symbols without having to include the full record
   content inside of each module stream.
-  
+
 - **PdbDllRbld** - Unknown
 
 - **MFCTypeServerIndex** - The index of the MFC type server in the
@@ -110,7 +110,7 @@
 
 - **Flags** - A bitfield with the following layout, containing various
   information about how the program was built:
-  
+
 .. code-block:: c++
 
   uint16_t WasIncrementallyLinked : 1;
@@ -135,7 +135,7 @@
 of each of the following ``7`` fields.
 
 - **ModInfoSize** - The length of the :ref:`dbi_mod_info_substream`.
-  
+
 - **SectionContributionSize** - The length of the :ref:`dbi_sec_contr_substream`.
 
 - **SectionMapSize** - The length of the :ref:`dbi_section_map_substream`.
@@ -162,7 +162,7 @@
 module info substream is an array of variable-length records, each one
 describing a single module (e.g. object file) linked into the program.  Each
 record in the array has the format:
-  
+
 .. code-block:: c++
 
   struct ModInfo {
@@ -191,17 +191,17 @@
     char ModuleName[];
     char ObjFileName[];
   };
-  
+
 - **SectionContr** - Describes the properties of the section in the final binary
   which contain the code and data from this module.
 
   ``SectionContr.Characteristics`` corresponds to the ``Characteristics`` field
   of the `IMAGE_SECTION_HEADER <https://msdn.microsoft.com/en-us/library/windows/desktop/ms680341(v=vs.85).aspx>`__
   structure.
-  
+
 
 - **Flags** - A bitfield with the following format:
-  
+
 .. code-block:: c++
 
   // ``true`` if this ModInfo has been written since reading the PDB.  This is
@@ -217,7 +217,7 @@
   // but as LLVM treats /Zi as /Z7, this field will always be invalid for LLVM
   // generated PDBs.
   uint16_t TSM : 8;
-  
+
 
 - **ModuleSymStream** - The index of the stream that contains symbol information
   for this module.  This includes CodeView symbol information as well as source
@@ -263,31 +263,31 @@
 Begins at offset ``0`` immediately after the :ref:`dbi_mod_info_substream` ends,
 and consumes ``Header->SectionContributionSize`` bytes.  This substream begins
 with a single ``uint32_t`` which will be one of the following values:
-  
+
 .. code-block:: c++
 
   enum class SectionContrSubstreamVersion : uint32_t {
     Ver60 = 0xeffe0000 + 19970605,
     V2 = 0xeffe0000 + 20140516
   };
-  
+
 ``Ver60`` is the only value which has been observed in a PDB so far.  Following
 this is an array of fixed-length structures.  If the version is ``Ver60``,
 it is an array of ``SectionContribEntry`` structures (this is the nested structure
 from the ``ModInfo`` type).  If the version is ``V2``, it is an array of
 ``SectionContribEntry2`` structures, defined as follows:
-  
+
 .. code-block:: c++
 
   struct SectionContribEntry2 {
     SectionContribEntry SC;
     uint32_t ISectCoff;
   };
-  
+
 The purpose of the second field is not well understood.  The name implies that
 it is the index of the COFF section, but this also describes the existing field
 ``SectionContribEntry::Section``.
-  
+
 
 .. _dbi_section_map_substream:
 
@@ -297,14 +297,14 @@
 and consumes ``Header->SectionMapSize`` bytes.  This substream begins with a ``4``
 byte header followed by an array of fixed-length records.  The header and records
 have the following layout:
-  
+
 .. code-block:: c++
 
   struct SectionMapHeader {
     uint16_t Count;    // Number of segment descriptors
     uint16_t LogCount; // Number of logical segment descriptors
   };
-  
+
   struct SectionMapEntry {
     uint16_t Flags;         // See the SectionMapEntryFlags enum below.
     uint16_t Ovl;           // Logical overlay number
@@ -315,7 +315,7 @@
     uint32_t Offset;        // Byte offset of the logical segment within physical segment.  If group is set in flags, this is the offset of the group.
     uint32_t SectionLength; // Byte count of the segment or group.
   };
-  
+
   enum class SectionMapEntryFlags : uint16_t {
     Read = 1 << 0,              // Segment is readable.
     Write = 1 << 1,             // Segment is writable.
@@ -325,7 +325,7 @@
     IsAbsoluteAddress = 1 << 9, // Frame represents an absolute address.
     IsGroup = 1 << 10           // If set, descriptor represents a group.
   };
-  
+
 Many of these fields are not well understood, so will not be discussed further.
 
 .. _dbi_file_info_substream:
@@ -339,13 +339,13 @@
 uses a string table to store each unique file name only once, and then has each
 module use offsets into the string table rather than embedding the string's value
 directly.  The format of this substream is as follows:
-  
+
 .. code-block:: c++
 
   struct FileInfoSubstream {
     uint16_t NumModules;
     uint16_t NumSourceFiles;
-    
+
     uint16_t ModIndices[NumModules];
     uint16_t ModFileCounts[NumModules];
     uint32_t FileNameOffsets[NumSourceFiles];
@@ -430,18 +430,18 @@
 debug data directory of type ``IMAGE_DEBUG_TYPE_FIXUP``.
 
 **Omap To Src Data** - ``DbgStreamArray[3]``.  The data in the referenced stream
-is a debug data directory of type ``IMAGE_DEBUG_TYPE_OMAP_TO_SRC``.  This 
+is a debug data directory of type ``IMAGE_DEBUG_TYPE_OMAP_TO_SRC``.  This
 is used for mapping addresses between instrumented and uninstrumented code.
 
 **Omap From Src Data** - ``DbgStreamArray[4]``.  The data in the referenced stream
-is a debug data directory of type ``IMAGE_DEBUG_TYPE_OMAP_FROM_SRC``.  This 
+is a debug data directory of type ``IMAGE_DEBUG_TYPE_OMAP_FROM_SRC``.  This
 is used for mapping addresses between instrumented and uninstrumented code.
 
 **Section Header Data** - ``DbgStreamArray[5]``.  A dump of all section headers from
 the original executable.
 
 **Token / RID Map** - ``DbgStreamArray[6]``.  The layout of this stream is not
-understood, but it is assumed to be a mapping from ``CLR Token`` to 
+understood, but it is assumed to be a mapping from ``CLR Token`` to
 ``CLR Record ID``.  Refer to `ECMA 335 <http://www.ecma-international.org/publications/standards/Ecma-335.htm>`__
 for more information.
 
@@ -459,7 +459,7 @@
 Thus, it is possible for both to appear in the same PDB if both MASM object files
 and cl object files are linked into the same program.
 
-**Original Section Header Data** - ``DbgStreamArray[10]``.  Similar to 
+**Original Section Header Data** - ``DbgStreamArray[10]``.  Similar to
 ``DbgStreamArray[5]``, but contains the section headers before any binary translation
 has been performed.  This can be used in conjunction with ``DebugStreamArray[3]``
 and ``DbgStreamArray[4]`` to map instrumented and uninstrumented addresses.
diff --git a/llvm/docs/PDB/PdbStream.rst b/llvm/docs/PDB/PdbStream.rst
index 40de9b7..417b421 100644
--- a/llvm/docs/PDB/PdbStream.rst
+++ b/llvm/docs/PDB/PdbStream.rst
@@ -49,15 +49,15 @@
   problems of using a timestamp with 1-second granularity, this field does not
   really serve its intended purpose, and as such is typically ignored in favor
   of the ``Guid`` field, described below.
-  
+
 - **Age** - The number of times the PDB file has been written.  This can be used
   along with ``Guid`` to match the PDB to its corresponding executable.
-  
+
 - **Guid** - A 128-bit identifier guaranteed to be unique across space and time.
-  In general, this can be thought of as the result of calling the Win32 API 
+  In general, this can be thought of as the result of calling the Win32 API
   `UuidCreate <https://msdn.microsoft.com/en-us/library/windows/desktop/aa379205(v=vs.85).aspx>`__,
   although LLVM cannot rely on that, as it must work on non-Windows platforms.
-  
+
 .. _pdb_named_stream_map:
 
 Named Stream Map
@@ -66,7 +66,7 @@
 Following the header is a serialized hash table whose key type is a string, and
 whose value type is an integer.  The existence of a mapping ``X -> Y`` means
 that the stream with the name ``X`` has stream index ``Y`` in the underlying MSF
-file.  Note that not all streams are named (for example, the 
+file.  Note that not all streams are named (for example, the
 :doc:`TPI Stream <TpiStream>` has a fixed index and as such there is no need to
 look up its index by name).  In practice, there are usually only a small number
 of named streams and these are enumerated in the table of streams in :doc:`index`.
@@ -86,7 +86,7 @@
 a buffer of string data prefixed by a 32-bit length.  The second is a serialized
 hash table whose key and value types are both ``uint32_t``.  The key is the offset
 of a null-terminated string in the string data buffer specifying the name of the
-stream, and the value is the MSF stream index of the stream with said name. 
+stream, and the value is the MSF stream index of the stream with said name.
 Note that although the key is an integer, the hash function used to find the right
 bucket hashes the string at the corresponding offset in the string data buffer.
 
@@ -95,7 +95,7 @@
 Note that the entire Named Stream Map is not length-prefixed, so the only way to
 get to the data following it is to de-serialize it in its entirety.
 
-  
+
 .. _pdb_stream_features:
 
 PDB Feature Codes
@@ -111,7 +111,7 @@
     NoTypeMerge = 0x4D544F4E,
     MinimalDebugInfo = 0x494E494D,
   };
-  
+
 The meaning of these values is summarized by the following table:
 
 +------------------+-------------------------------------------------+
@@ -131,7 +131,7 @@
 |                  | - There is no TPI / IPI stream, all type info   |
 |                  |   is contained in the original object files.    |
 +------------------+-------------------------------------------------+
-  
+
 Matching a PDB to its executable
 ================================
 The linker is responsible for writing both the PDB and the final executable, and
diff --git a/llvm/docs/Phabricator.rst b/llvm/docs/Phabricator.rst
index daa2f25..21964d8 100644
--- a/llvm/docs/Phabricator.rst
+++ b/llvm/docs/Phabricator.rst
@@ -174,12 +174,12 @@
 Pre-merge testing
 -----------------
 
-The pre-merge tests are a continuous integration (CI) workflow. The workflow 
-checks the patches uploaded to Phabricator before a user merges them to the main 
-branch - thus the term *pre-merge testing*. 
+The pre-merge tests are a continuous integration (CI) workflow. The workflow
+checks the patches uploaded to Phabricator before a user merges them to the main
+branch - thus the term *pre-merge testing*.
 
 When a user uploads a patch to Phabricator, Phabricator triggers the checks and
-then displays the results. This way bugs in a patch are contained during the 
+then displays the results. This way bugs in a patch are contained during the
 code review stage and do not pollute the main branch.
 
 Our goal with pre-merge testing is to report most true problems while strongly
@@ -187,8 +187,8 @@
 reported are always actionable.  If you notice a false positive, please report
 it so that we can identify the cause.
 
-If you notice issues or have an idea on how to improve pre-merge checks, please 
-`create a new issue <https://github.com/google/llvm-premerge-checks/issues/new>`_ 
+If you notice issues or have an idea on how to improve pre-merge checks, please
+`create a new issue <https://github.com/google/llvm-premerge-checks/issues/new>`_
 or give a ❤️ to an existing one.
 
 Requirements
@@ -198,8 +198,8 @@
 patch to the checked out git repository. Please make sure that either:
 
 * You set a git hash as ``sourceControlBaseRevision`` in Phabricator which is
-  available on the GitHub repository, 
-* **or** you define the dependencies of your patch in Phabricator, 
+  available on the GitHub repository,
+* **or** you define the dependencies of your patch in Phabricator,
 * **or** your patch can be applied to the main branch.
 
 Only then can the build server apply the patch locally and run the builds and
@@ -208,7 +208,7 @@
 Accessing build results
 ^^^^^^^^^^^^^^^^^^^^^^^
 Phabricator will automatically trigger a build for every new patch you upload or
-modify. Phabricator shows the build results at the top of the entry. Clicking on 
+modify. Phabricator shows the build results at the top of the entry. Clicking on
 the links (in the red box) will show more details:
 
   .. image:: Phabricator_premerge_results.png
diff --git a/llvm/docs/ProgrammersManual.rst b/llvm/docs/ProgrammersManual.rst
index b3004af..f26ae7b 100644
--- a/llvm/docs/ProgrammersManual.rst
+++ b/llvm/docs/ProgrammersManual.rst
@@ -285,7 +285,7 @@
 strings, especially for platform-specific types like ``size_t`` or pointer types.
 Unlike both ``printf`` and Python, it additionally fails to compile if LLVM does
 not know how to format the type.  These two properties ensure that the function
-is both safer and simpler to use than traditional formatting methods such as 
+is both safer and simpler to use than traditional formatting methods such as
 the ``printf`` family of functions.
 
 Simple formatting
@@ -303,7 +303,7 @@
 the value into, and the alignment of the value within the field.  It is specified as
 an optional **alignment style** followed by a positive integral **field width**.  The
 alignment style can be one of the characters ``-`` (left align), ``=`` (center align),
-or ``+`` (right align).  The default is right aligned.  
+or ``+`` (right align).  The default is right aligned.
 
 ``style`` is an optional string consisting of a type-specific style that controls the
 formatting of the value.  For example, to format a floating point value as a percentage,
@@ -318,7 +318,7 @@
    type ``T`` with the appropriate static format method.
 
   .. code-block:: c++
-  
+
     namespace llvm {
       template<>
       struct format_provider<MyFooBar> {
@@ -331,16 +331,16 @@
         std::string S = formatv("{0}", X);
       }
     }
-    
+
   This is a useful extensibility mechanism for adding support for formatting your own
   custom types with your own custom Style options.  But it does not help when you want
   to extend the mechanism for formatting a type that the library already knows how to
   format.  For that, we need something else.
-    
+
 2. Provide a **format adapter** inheriting from ``llvm::FormatAdapter<T>``.
 
   .. code-block:: c++
-  
+
     namespace anything {
       struct format_int_custom : public llvm::FormatAdapter<int> {
         explicit format_int_custom(int N) : llvm::FormatAdapter<int>(N) {}
@@ -354,7 +354,7 @@
         std::string S = formatv("{0}", anything::format_int_custom(42));
       }
     }
-    
+
   If the type is detected to be derived from ``FormatAdapter<T>``, ``formatv``
   will call the
   ``format`` method on the argument passing in the specified style.  This allows
@@ -369,28 +369,28 @@
 
 
 .. code-block:: c++
-  
+
   std::string S;
   // Simple formatting of basic types and implicit string conversion.
   S = formatv("{0} ({1:P})", 7, 0.35);  // S == "7 (35.00%)"
-  
+
   // Out-of-order referencing and multi-referencing
   outs() << formatv("{0} {2} {1} {0}", 1, "test", 3); // prints "1 3 test 1"
-  
+
   // Left, right, and center alignment
   S = formatv("{0,7}",  'a');  // S == "      a";
   S = formatv("{0,-7}", 'a');  // S == "a      ";
   S = formatv("{0,=7}", 'a');  // S == "   a   ";
   S = formatv("{0,+7}", 'a');  // S == "      a";
-  
+
   // Custom styles
   S = formatv("{0:N} - {0:x} - {1:E}", 12345, 123908342); // S == "12,345 - 0x3039 - 1.24E8"
-  
+
   // Adapters
   S = formatv("{0}", fmt_align(42, AlignStyle::Center, 7));  // S == "  42   "
   S = formatv("{0}", fmt_repeat("hi", 3)); // S == "hihihi"
   S = formatv("{0}", fmt_pad("hi", 2, 6)); // S == "  hi      "
-  
+
   // Ranges
   std::vector<int> V = {8, 9, 10};
   S = formatv("{0}", make_range(V.begin(), V.end())); // S == "8, 9, 10"
@@ -4095,5 +4095,3 @@
 This subclass of Value defines the interface for incoming formal arguments to a
 function.  A Function maintains a list of its formal arguments.  An argument has
 a pointer to the parent Function.
-
-
diff --git a/llvm/docs/Projects.rst b/llvm/docs/Projects.rst
index 4695664..e62a6d8 100644
--- a/llvm/docs/Projects.rst
+++ b/llvm/docs/Projects.rst
@@ -94,7 +94,7 @@
   benchmarks and programs that are known to compile with the Clang front
   end. You can use these programs to test your code, gather statistical
   information, and compare it to the current LLVM performance statistics.
-  
+
   Currently, there is no way to hook your tests directly into the ``llvm/test``
   testing harness. You will simply need to find a way to use the source
   provided within that directory on your own.
diff --git a/llvm/docs/Proposals/GitHubMove.rst b/llvm/docs/Proposals/GitHubMove.rst
index 86aa8d8..dbb38ee 100644
--- a/llvm/docs/Proposals/GitHubMove.rst
+++ b/llvm/docs/Proposals/GitHubMove.rst
@@ -813,7 +813,7 @@
 ``submodule-map.txt`` is a list of pairs, one per line.  The first
 pair item describes the path to a submodule in the umbrella
 repository.  The second pair item describes the path where trees for
-that submodule should be written in the zipped history.  
+that submodule should be written in the zipped history.
 
 Let's say your umbrella repository is actually the llvm repository and
 it has submodules in the "nested sources" layout (clang in
diff --git a/llvm/docs/SourceLevelDebugging.rst b/llvm/docs/SourceLevelDebugging.rst
index dbadaed..b3647ef 100644
--- a/llvm/docs/SourceLevelDebugging.rst
+++ b/llvm/docs/SourceLevelDebugging.rst
@@ -427,7 +427,7 @@
 these potentially stale variable values from the developer diminishes the
 amount of available debug information, but increases the reliability of the
 remaining information.
- 
+
 To illustrate some potential issues, consider the following example:
 
 .. code-block:: llvm
@@ -797,7 +797,7 @@
   entry:
     br i1 %cond, label %truebr, label %falsebr
 
-  bb1: 
+  bb1:
     %value = phi i32 [ %value1, %truebr ], [ %value2, %falsebr ]
     br label %exit, !dbg !26
 
@@ -813,7 +813,7 @@
     %value = add i32 %input, 2
     br label %bb1
 
-  exit: 
+  exit:
     ret i32 %value, !dbg !30
   }
 
@@ -1068,7 +1068,7 @@
 
 .. code-block:: text
 
-  DW_TAG_subprogram [3]  
+  DW_TAG_subprogram [3]
      DW_AT_low_pc [DW_FORM_addr]     (0x0000000000000010 ".text")
      DW_AT_high_pc [DW_FORM_data4]   (0x00000001)
      ...
diff --git a/llvm/docs/SphinxQuickstartTemplate.rst b/llvm/docs/SphinxQuickstartTemplate.rst
index f15970d..db9bd26 100644
--- a/llvm/docs/SphinxQuickstartTemplate.rst
+++ b/llvm/docs/SphinxQuickstartTemplate.rst
@@ -169,9 +169,9 @@
 ============================
 
 You can generate the HTML documentation from the sources locally if you want to
-see what they would look like. In addition to the normal 
+see what they would look like. In addition to the normal
 `build tools <docs/GettingStarted.html>`_
-you need to install `Sphinx`_ and the 
+you need to install `Sphinx`_ and the
 `recommonmark <https://recommonmark.readthedocs.io/en/latest/>`_ extension.
 
 On Debian you can install these with:
@@ -195,7 +195,7 @@
    cmake -DLLVM_ENABLE_SPHINX=On ../llvm
    cmake --build . --target docs-llvm-html
 
-In case you already have the Cmake build set up and want to reuse that, 
+In case you already have the CMake build set up and want to reuse that,
 just set the CMake variable ``LLVM_ENABLE_SPHINX=On``.
 
 After that you find the generated documentation in ``build/docs/html``
diff --git a/llvm/docs/StackMaps.rst b/llvm/docs/StackMaps.rst
index 1501ddd..8f7f41a 100644
--- a/llvm/docs/StackMaps.rst
+++ b/llvm/docs/StackMaps.rst
@@ -511,7 +511,7 @@
 Supported Architectures
 =======================
 
-Support for StackMap generation and the related intrinsics requires 
-some code for each backend.  Today, only a subset of LLVM's backends 
-are supported.  The currently supported architectures are X86_64, 
+Support for StackMap generation and the related intrinsics requires
+some code for each backend.  Today, only a subset of LLVM's backends
+are supported.  The currently supported architectures are X86_64,
 PowerPC, AArch64 and SystemZ.
diff --git a/llvm/docs/Statepoints.rst b/llvm/docs/Statepoints.rst
index e13f5d9..ff8cdd6 100644
--- a/llvm/docs/Statepoints.rst
+++ b/llvm/docs/Statepoints.rst
@@ -10,21 +10,21 @@
 =======
 
 This document describes a set of extensions to LLVM to support garbage
-collection.  By now, these mechanisms are well proven with commercial java 
-implementation with a fully relocating collector having shipped using them.  
+collection.  By now, these mechanisms are well proven: a commercial Java
+implementation with a fully relocating collector has shipped using them.
 There are a couple places where bugs might still linger; these are called out
 below.
 
 They are still listed as "experimental" to indicate that no forward or backward
-compatibility guarantees are offered across versions.  If your use case is such 
-that you need some form of forward compatibility guarantee, please raise the 
-issue on the llvm-dev mailing list.  
+compatibility guarantees are offered across versions.  If your use case is such
+that you need some form of forward compatibility guarantee, please raise the
+issue on the llvm-dev mailing list.
 
-LLVM still supports an alternate mechanism for conservative garbage collection 
+LLVM still supports an alternate mechanism for conservative garbage collection
 support using the ``gcroot`` intrinsic.  The ``gcroot`` mechanism is mostly of
 historical interest at this point with one exception - its implementation of
 shadow stacks has been used successfully by a number of language frontends and
-is still supported.  
+is still supported.
 
 Overview & Core Concepts
 ========================
@@ -98,12 +98,12 @@
 Abstract Machine Model
 ^^^^^^^^^^^^^^^^^^^^^^^
 
-At a high level, LLVM has been extended to support compiling to an abstract 
-machine which extends the actual target with a non-integral pointer type 
-suitable for representing a garbage collected reference to an object.  In 
-particular, such non-integral pointer type have no defined mapping to an 
-integer representation.  This semantic quirk allows the runtime to pick a 
-integer mapping for each point in the program allowing relocations of objects 
+At a high level, LLVM has been extended to support compiling to an abstract
+machine which extends the actual target with a non-integral pointer type
+suitable for representing a garbage collected reference to an object.  In
+particular, such non-integral pointer types have no defined mapping to an
+integer representation.  This semantic quirk allows the runtime to pick an
+integer mapping for each point in the program, allowing relocation of objects
 without visible effects.
 
 This high level abstract machine model is used for most of the optimizer.  As
@@ -115,25 +115,25 @@
 Note that most of the value of the abstract machine model comes for collectors
 which need to model potentially relocatable objects.  For a compiler which
 supports only a non-relocating collector, you may wish to consider starting
-with the fully explicit form.  
+with the fully explicit form.
 
-Warning: There is one currently known semantic hole in the definition of 
+Warning: There is one currently known semantic hole in the definition of
 non-integral pointers which has not been addressed upstream.  To work around
-this, you need to disable speculation of loads unless the memory type 
-(non-integral pointer vs anything else) is known to unchanged.  That is, it is 
-not safe to speculate a load if doing causes a non-integral pointer value to 
-be loaded as any other type or vice versa.  In practice, this restriction is 
+this, you need to disable speculation of loads unless the memory type
+(non-integral pointer vs anything else) is known to be unchanged.  That is, it is
+not safe to speculate a load if doing so causes a non-integral pointer value to
+be loaded as any other type or vice versa.  In practice, this restriction is
 well isolated to isSafeToSpeculativelyExecute in ValueTracking.cpp.
 
 Explicit Representation
 ^^^^^^^^^^^^^^^^^^^^^^^
 
-A frontend could directly generate this low level explicit form, but 
+A frontend could directly generate this low level explicit form, but
 doing so may inhibit optimization.  Instead, it is recommended that
 compilers with relocating collectors target the abstract machine model just
-described.  
+described.
 
-The heart of the explicit approach is to construct (or rewrite) the IR in a 
+The heart of the explicit approach is to construct (or rewrite) the IR in a
 manner where the possible updates performed by the garbage collector are
 explicitly visible in the IR.  Doing so requires that we:
 
@@ -157,8 +157,8 @@
   collected values, transforming the IR to expose a pointer giving the
   base object for every such live pointer, and inserting all the
   intrinsics correctly is explicitly out of scope for this document.
-  The recommended approach is to use the :ref:`utility passes 
-  <statepoint-utilities>` described below. 
+  The recommended approach is to use the :ref:`utility passes
+  <statepoint-utilities>` described below.
 
 This abstract function call is concretely represented by a sequence of
 intrinsic calls known collectively as a "statepoint relocation sequence".
@@ -167,26 +167,26 @@
 
 .. code-block:: llvm
 
-  define i8 addrspace(1)* @test1(i8 addrspace(1)* %obj) 
+  define i8 addrspace(1)* @test1(i8 addrspace(1)* %obj)
          gc "statepoint-example" {
     call void ()* @foo()
     ret i8 addrspace(1)* %obj
   }
 
-Depending on our language we may need to allow a safepoint during the execution 
-of ``foo``. If so, we need to let the collector update local values in the 
-current frame.  If we don't, we'll be accessing a potential invalid reference 
+Depending on our language we may need to allow a safepoint during the execution
+of ``foo``. If so, we need to let the collector update local values in the
+current frame.  If we don't, we'll be accessing a potentially invalid reference
 once we eventually return from the call.
 
-In this example, we need to relocate the SSA value ``%obj``.  Since we can't 
-actually change the value in the SSA value ``%obj``, we need to introduce a new 
+In this example, we need to relocate the SSA value ``%obj``.  Since we can't
+actually change the value in the SSA value ``%obj``, we need to introduce a new
 SSA value ``%obj.relocated`` which represents the potentially changed value of
-``%obj`` after the safepoint and update any following uses appropriately.  The 
+``%obj`` after the safepoint and update any following uses appropriately.  The
 resulting relocation sequence is:
 
 .. code-block:: llvm
 
-  define i8 addrspace(1)* @test1(i8 addrspace(1)* %obj) 
+  define i8 addrspace(1)* @test1(i8 addrspace(1)* %obj)
          gc "statepoint-example" {
     %0 = call token (i64, i32, void ()*, i32, i32, ...)* @llvm.experimental.gc.statepoint.p0f_isVoidf(i64 0, i32 0, void ()* @foo, i32 0, i32 0, i32 0, i32 0, i8 addrspace(1)* %obj)
     %obj.relocated = call coldcc i8 addrspace(1)* @llvm.experimental.gc.relocate.p1i8(token %0, i32 7, i32 7)
@@ -205,13 +205,13 @@
 of the call, we use the ``gc.result`` intrinsic.  To get the relocation
 of each pointer in turn, we use the ``gc.relocate`` intrinsic with the
 appropriate index.  Note that both the ``gc.relocate`` and ``gc.result`` are
-tied to the statepoint.  The combination forms a "statepoint relocation 
+tied to the statepoint.  The combination forms a "statepoint relocation
 sequence" and represents the entirety of a parseable call or 'statepoint'.
 
 When lowered, this example would generate the following x86 assembly:
 
 .. code-block:: gas
-  
+
 	  .globl	test1
 	  .align	16, 0x90
 	  pushq	%rax
@@ -230,7 +230,7 @@
 The relevant parts of the StackMap section for our example are:
 
 .. code-block:: gas
-  
+
   # This describes the call site
   # Stack Maps: callsite 2882400000
 	  .quad	2882400000
@@ -238,7 +238,7 @@
 	  .short	0
   # .. 8 entries skipped ..
   # This entry describes the spill slot which is directly addressable
-  # off RSP with offset 0.  Given the value was spilled with a pushq, 
+  # off RSP with offset 0.  Given the value was spilled with a pushq,
   # that makes sense.
   # Stack Maps:   Loc 8: Direct RSP     [encoding: .byte 2, .byte 8, .short 7, .int 0]
 	  .byte	2
@@ -262,14 +262,14 @@
 information about which locations contain live references, it doesn't need to
 represent explicit relocations.  As such, the previously described explicit
 lowering can be simplified to remove all of the ``gc.relocate`` intrinsic
-calls and leave uses in terms of the original reference value.  
+calls and leave uses in terms of the original reference value.
 
 Here's the explicit lowering for the previous example for a non-relocating
 collector:
 
 .. code-block:: llvm
 
-  define i8 addrspace(1)* @test1(i8 addrspace(1)* %obj) 
+  define i8 addrspace(1)* @test1(i8 addrspace(1)* %obj)
          gc "statepoint-example" {
     call token (i64, i32, void ()*, i32, i32, ...)* @llvm.experimental.gc.statepoint.p0f_isVoidf(i64 0, i32 0, void ()* @foo, i32 0, i32 0, i32 0, i32 0, i8 addrspace(1)* %obj)
     ret i8 addrspace(1)* %obj
@@ -303,41 +303,41 @@
 recommended to use this with caution and expect to have to fix a few bugs.
 In particular, the RewriteStatepointsForGC utility pass does not do
 anything for allocas today.
-  
+
 Base & Derived Pointers
 ^^^^^^^^^^^^^^^^^^^^^^^
 
 A "base pointer" is one which points to the starting address of an allocation
 (object).  A "derived pointer" is one which is offset from a base pointer by
-some amount.  When relocating objects, a garbage collector needs to be able 
-to relocate each derived pointer associated with an allocation to the same 
+some amount.  When relocating objects, a garbage collector needs to be able
+to relocate each derived pointer associated with an allocation to the same
 offset from the new address.
 
-"Interior derived pointers" remain within the bounds of the allocation 
-they're associated with.  As a result, the base object can be found at 
+"Interior derived pointers" remain within the bounds of the allocation
+they're associated with.  As a result, the base object can be found at
 runtime provided the bounds of allocations are known to the runtime system.
 
 "Exterior derived pointers" are outside the bounds of the associated object;
 they may even fall within *another* allocation's address range.  As a result,
-there is no way for a garbage collector to determine which allocation they 
+there is no way for a garbage collector to determine which allocation they
 are associated with at runtime and compiler support is needed.
 
 The ``gc.relocate`` intrinsic supports an explicit operand for describing the
-allocation associated with a derived pointer.  This operand is frequently 
+allocation associated with a derived pointer.  This operand is frequently
 referred to as the base operand, but it does not, strictly speaking, have to be
 a base pointer; it does need to lie within the bounds of the associated
 allocation.  Some collectors may require that the operand be an actual base
-pointer rather than merely an internal derived pointer. Note that during 
-lowering both the base and derived pointer operands are required to be live 
-over the associated call safepoint even if the base is otherwise unused 
+pointer rather than merely an internal derived pointer. Note that during
+lowering both the base and derived pointer operands are required to be live
+over the associated call safepoint even if the base is otherwise unused
 afterwards.
 
-If we extend our previous example to include a pointless derived pointer, 
+If we extend our previous example to include a pointless derived pointer,
 we get:
 
 .. code-block:: llvm
 
-  define i8 addrspace(1)* @test1(i8 addrspace(1)* %obj) 
+  define i8 addrspace(1)* @test1(i8 addrspace(1)* %obj)
          gc "statepoint-example" {
     %gep = getelementptr i8, i8 addrspace(1)* %obj, i64 20000
     %token = call token (i64, i32, void ()*, i32, i32, ...)* @llvm.experimental.gc.statepoint.p0f_isVoidf(i64 0, i32 0, void ()* @foo, i32 0, i32 0, i32 0, i32 0, i8 addrspace(1)* %obj, i8 addrspace(1)* %gep)
@@ -468,29 +468,29 @@
   "deopt" operand bundle.  At the moment, only deopt parameters with a bitwidth
   of 64 bits or less are supported.  Values of a type larger than 64 bits can be
   specified and reported only if a) the value is constant at the call site, and
-  b) the constant can be represented with less than 64 bits (assuming zero 
+  b) the constant can be represented with less than 64 bits (assuming zero
   extension to the original bitwidth).
-* Variable number of relocation records, each of which consists of 
+* Variable number of relocation records, each of which consists of
   exactly two Locations.  Relocation records are described in detail
   below.
 
-Each relocation record provides sufficient information for a collector to 
-relocate one or more derived pointers.  Each record consists of a pair of 
-Locations.  The second element in the record represents the pointer (or 
-pointers) which need updated.  The first element in the record provides a 
+Each relocation record provides sufficient information for a collector to
+relocate one or more derived pointers.  Each record consists of a pair of
+Locations.  The second element in the record represents the pointer (or
+pointers) which need to be updated.  The first element in the record provides a
 pointer to the base of the object with which the pointer(s) being relocated is
-associated.  This information is required for handling generalized derived 
+associated.  This information is required for handling generalized derived
 pointers since a pointer may be outside the bounds of the original allocation,
 but still needs to be relocated with the allocation.  Additionally:
 
-* It is guaranteed that the base pointer must also appear explicitly as a 
-  relocation pair if used after the statepoint. 
+* It is guaranteed that the base pointer must also appear explicitly as a
+  relocation pair if used after the statepoint.
 * There may be fewer relocation records than gc parameters in the IR
   statepoint. Each *unique* pair will occur at least once; duplicates
-  are possible.  
-* The Locations within each record may either be of pointer size or a 
-  multiple of pointer size.  In the later case, the record must be 
-  interpreted as describing a sequence of pointers and their corresponding 
+  are possible.
+* The Locations within each record may either be of pointer size or a
+  multiple of pointer size.  In the latter case, the record must be
+  interpreted as describing a sequence of pointers and their corresponding
   base pointers. If the Location is of size N x sizeof(pointer), then
   there will be N records of one pointer each contained within the Location.
   Both Locations in a pair can be assumed to be of the same size.
@@ -551,20 +551,20 @@
 ^^^^^^^^^^^^^^^^^^^^^^^^
 
 The pass RewriteStatepointsForGC transforms a function's IR to lower from the
-abstract machine model described above to the explicit statepoint model of 
+abstract machine model described above to the explicit statepoint model of
 relocations.  To do this, it replaces all calls or invokes of functions which
 might contain a safepoint poll with a ``gc.statepoint`` and associated full
-relocation sequence, including all required ``gc.relocates``.  
+relocation sequence, including all required ``gc.relocates``.
 
-Note that by default, this pass only runs for the "statepoint-example" or 
-"core-clr" gc strategies.  You will need to add your custom strategy to this 
-list or use one of the predefined ones. 
+Note that by default, this pass only runs for the "statepoint-example" or
+"core-clr" gc strategies.  You will need to add your custom strategy to this
+list or use one of the predefined ones.
 
 As an example, given this code:
 
 .. code-block:: llvm
 
-  define i8 addrspace(1)* @test1(i8 addrspace(1)* %obj) 
+  define i8 addrspace(1)* @test1(i8 addrspace(1)* %obj)
          gc "statepoint-example" {
     call void @foo()
     ret i8 addrspace(1)* %obj
@@ -574,7 +574,7 @@
 
 .. code-block:: llvm
 
-  define i8 addrspace(1)* @test1(i8 addrspace(1)* %obj) 
+  define i8 addrspace(1)* @test1(i8 addrspace(1)* %obj)
          gc "statepoint-example" {
     %0 = call token (i64, i32, void ()*, i32, i32, ...)* @llvm.experimental.gc.statepoint.p0f_isVoidf(i64 2882400000, i32 0, void ()* @foo, i32 0, i32 0, i32 0, i32 5, i32 0, i32 -1, i32 0, i32 0, i32 0, i8 addrspace(1)* %obj)
     %obj.relocated = call coldcc i8 addrspace(1)* @llvm.experimental.gc.relocate.p1i8(token %0, i32 12, i32 12)
@@ -586,20 +586,20 @@
 non references.  The pass assumes that all addrspace(1) pointers are non-integral
 pointer types.  Address space 1 is not globally reserved for this purpose.
 
-This pass can be used an utility function by a language frontend that doesn't 
-want to manually reason about liveness, base pointers, or relocation when 
-constructing IR.  As currently implemented, RewriteStatepointsForGC must be 
+This pass can be used as a utility function by a language frontend that doesn't
+want to manually reason about liveness, base pointers, or relocation when
+constructing IR.  As currently implemented, RewriteStatepointsForGC must be
 run after SSA construction (i.e. mem2reg).
 
 RewriteStatepointsForGC will ensure that appropriate base pointers are listed
 for every relocation created.  It will do so by duplicating code as needed to
 propagate the base pointer associated with each pointer being relocated to
-the appropriate safepoints.  The implementation assumes that the following 
-IR constructs produce base pointers: loads from the heap, addresses of global 
+the appropriate safepoints.  The implementation assumes that the following
+IR constructs produce base pointers: loads from the heap, addresses of global
 variables, function arguments, function return values. Constant pointers (such
 as null) are also assumed to be base pointers.  In practice, this constraint
-can be relaxed to producing interior derived pointers provided the target 
-collector can find the associated allocation from an arbitrary interior 
+can be relaxed to producing interior derived pointers provided the target
+collector can find the associated allocation from an arbitrary interior
 derived pointer.
 
 By default RewriteStatepointsForGC passes in ``0xABCDEF00`` as the statepoint
@@ -617,8 +617,8 @@
 are not propagated to the ``gc.statepoint`` call or invoke if they
 could be successfully parsed.
 
-In practice, RewriteStatepointsForGC should be run much later in the pass 
-pipeline, after most optimization is already done.  This helps to improve 
+In practice, RewriteStatepointsForGC should be run much later in the pass
+pipeline, after most optimization is already done.  This helps to improve
 the quality of the generated code when compiled with garbage collection support.
 
 .. _RewriteStatepointsForGC_intrinsic_lowering:
@@ -674,10 +674,10 @@
 PlaceSafepoints
 ^^^^^^^^^^^^^^^^
 
-The pass PlaceSafepoints inserts safepoint polls sufficient to ensure running 
-code checks for a safepoint request on a timely manner. This pass is expected 
-to be run before RewriteStatepointsForGC and thus does not produce full 
-relocation sequences.  
+The pass PlaceSafepoints inserts safepoint polls sufficient to ensure running
+code checks for a safepoint request in a timely manner. This pass is expected
+to be run before RewriteStatepointsForGC and thus does not produce full
+relocation sequences.
 
 As an example, given input IR of the following:
 
@@ -705,25 +705,25 @@
     ret void
   }
 
-In this case, we've added an (unconditional) entry safepoint poll.  Note that 
-despite appearances, the entry poll is not necessarily redundant.  We'd have to 
-know that ``foo`` and ``test`` were not mutually recursive for the poll to be 
-redundant.  In practice, you'd probably want to your poll definition to contain 
+In this case, we've added an (unconditional) entry safepoint poll.  Note that
+despite appearances, the entry poll is not necessarily redundant.  We'd have to
+know that ``foo`` and ``test`` were not mutually recursive for the poll to be
+redundant.  In practice, you'd probably want your poll definition to contain
 a conditional branch of some form.
 
-At the moment, PlaceSafepoints can insert safepoint polls at method entry and 
-loop backedges locations.  Extending this to work with return polls would be 
+At the moment, PlaceSafepoints can insert safepoint polls at method entry and
+loop backedge locations.  Extending this to work with return polls would be
 straightforward if desired.
 
-PlaceSafepoints includes a number of optimizations to avoid placing safepoint 
-polls at particular sites unless needed to ensure timely execution of a poll 
-under normal conditions.  PlaceSafepoints does not attempt to ensure timely 
+PlaceSafepoints includes a number of optimizations to avoid placing safepoint
+polls at particular sites unless needed to ensure timely execution of a poll
+under normal conditions.  PlaceSafepoints does not attempt to ensure timely
 execution of a poll under worst case conditions such as heavy system paging.
 
-The implementation of a safepoint poll action is specified by looking up a 
+The implementation of a safepoint poll action is specified by looking up a
 function of the name ``gc.safepoint_poll`` in the containing Module.  The body
 of this function is inserted at each poll site desired.  While calls or invokes
-inside this method are transformed to a ``gc.statepoints``, recursive poll 
+inside this method are transformed to ``gc.statepoint`` calls, recursive poll
 insertion is not performed.
 
 This pass is useful for any language frontend which only has to support
@@ -732,7 +732,7 @@
 you can insert safepoint polls in the frontend.  If you have the latter case,
 please ask on llvm-dev for suggestions.  There's been a good amount of work
 done on making such a scheme work well in practice which is not yet documented
-here.  
+here.
 
 
 Supported Architectures
@@ -769,7 +769,7 @@
 The missing pieces are a) integration with rewriting (RS4GC) from the
 abstract machine model and b) support for optionally decomposing on-stack
 objects so as not to require heap maps for them.  The latter is required
-for ease of integration with some collectors.  
+for ease of integration with some collectors.
 
 Lowering Quality and Representation Overhead
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -805,7 +805,7 @@
 post processing of each individual object file.  While not implemented
 today for statepoints, there is precedent for a GCStrategy to be able to
 select a custom GCMetadataPrinter for this purpose.  Patches to enable
-this functionality upstream are welcome.   
+this functionality upstream are welcome.
 
 Bugs and Enhancements
 =====================
@@ -819,4 +819,3 @@
 <http://lists.llvm.org/mailman/listinfo/llvm-dev>`_, and patches
 should be sent to `llvm-commits
 <http://lists.llvm.org/mailman/listinfo/llvm-commits>`_ for review.
-
diff --git a/llvm/docs/SupportLibrary.rst b/llvm/docs/SupportLibrary.rst
index 9e5e091..6fc407d 100644
--- a/llvm/docs/SupportLibrary.rst
+++ b/llvm/docs/SupportLibrary.rst
@@ -58,7 +58,7 @@
 ---------------------------
 
 The Support Library must shield LLVM from **all** system headers. To obtain
-system level functionality, LLVM source must 
+system level functionality, LLVM source must
 ``#include "llvm/Support/Thing.h"`` and nothing else. This means that
 ``Thing.h`` cannot expose any system header files. This protects LLVM from
 accidentally using system specific functionality and only allows it via
@@ -226,7 +226,7 @@
   #endif
 
 The implementation in ``lib/Support/Unix/Path.inc`` should handle all Unix
-variants. The implementation in ``lib/Support/Windows/Path.inc`` should handle 
+variants. The implementation in ``lib/Support/Windows/Path.inc`` should handle
 all Windows variants.  What this does is quickly select the basic class
 of operating system that will provide the implementation. The specific details
 for a given platform must still be determined through the use of ``#ifdef``.
diff --git a/llvm/docs/TableGen/BackEnds.rst b/llvm/docs/TableGen/BackEnds.rst
index 1e1a4e7..1c1137e 100644
--- a/llvm/docs/TableGen/BackEnds.rst
+++ b/llvm/docs/TableGen/BackEnds.rst
@@ -324,7 +324,7 @@
 ClangAttrVisitor
 -------------------
 
-**Purpose**: Creates AttrVisitor.inc, which is used when implementing 
+**Purpose**: Creates AttrVisitor.inc, which is used when implementing
 recursive AST visitors.
 
 ClangAttrTemplateInstantiate
@@ -789,7 +789,7 @@
           return false;
         return false;
       });
-  
+
     if (Idx == Table.end() ||
         Key.Val1 != Idx->Val1 ||
         Key.Val2 != Idx->Val2)
@@ -967,4 +967,3 @@
       return nullptr;
     return &CTable[Idx->_index];
   }
-
diff --git a/llvm/docs/TableGen/BackGuide.rst b/llvm/docs/TableGen/BackGuide.rst
index ca6821f..7da39bf 100644
--- a/llvm/docs/TableGen/BackGuide.rst
+++ b/llvm/docs/TableGen/BackGuide.rst
@@ -191,7 +191,7 @@
 are described in the following subsections.
 
 *All* of the classes derived from ``RecTy`` provide the ``get()`` function.
-It returns an instance of ``Recty`` corresponding to the derived class. 
+It returns an instance of ``RecTy`` corresponding to the derived class.
 Some of the ``get()`` functions require an argument to
 specify which particular variant of the type is desired. These arguments are
 described in the following subsections.
@@ -334,7 +334,7 @@
 ~~~~~~~~~~~
 
 The ``DagInit`` class is a subclass of ``TypedInit``. Its instances
-represent the possible direct acyclic graphs (``dag``). 
+represent the possible directed acyclic graphs (``dag``).
 
 The class includes a pointer to an ``Init`` for the DAG operator and a
 pointer to a ``StringInit`` for the operator name. It includes the count of
@@ -426,7 +426,7 @@
 .. code-block:: text
 
   using const_iterator = Init *const *;
- 
+
 
 ``StringInit``
 ~~~~~~~~~~~~~~
@@ -463,7 +463,7 @@
      function. It should invoke the "main function" of your backend, which
      in this case, according to convention, is named ``EmitAddressModes``.
 
-5. Add a declaration of your "main function" to the corresponding 
+5. Add a declaration of your "main function" to the corresponding
    ``TableGenBackends.h`` header file.
 
 #. Add your backend C++ file to the appropriate ``CMakeLists.txt`` file so
@@ -616,7 +616,7 @@
 
 The field is assumed to have another record as its value. That record is returned
 as a pointer to a ``Record``. If the field does not exist or is unset, the
-functions returns null.  
+function returns null.
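
Purely as a hedged illustration (the accessor name ``getValueAsOptionalDef()``,
the field name ``"AddrMode"``, and the helper function are assumptions made for
this sketch, not statements from the text above), a backend might consume such
an optional record-valued field like this:

.. code-block:: c++

  #include "llvm/Support/raw_ostream.h"
  #include "llvm/TableGen/Record.h"
  using namespace llvm;

  // Hypothetical helper: report the record held by an optional field.
  static void emitAddrModeInfo(const Record &Inst, raw_ostream &OS) {
    // A null result means the field is missing or unset, so the backend can
    // treat the field as optional instead of emitting a fatal error.
    if (const Record *Mode = Inst.getValueAsOptionalDef("AddrMode"))
      OS << Inst.getName() << " uses addressing mode " << Mode->getName() << "\n";
    else
      OS << Inst.getName() << " specifies no addressing mode\n";
  }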
 
 Getting Record Superclasses
 ===========================
@@ -692,12 +692,12 @@
 
 * ``PrintFatalNote`` prints a note and then terminates.
 
-Each of these five functions is overloaded four times. 
+Each of these five functions is overloaded four times.
 
 * ``PrintError(const Twine &Msg)``:
   Prints the message with no source file location.
 
-* ``PrintError(ArrayRef<SMLoc> ErrorLoc, const Twine &Msg)``: 
+* ``PrintError(ArrayRef<SMLoc> ErrorLoc, const Twine &Msg)``:
   Prints the message followed by the specified source line,
   along with a pointer to the item in error. The array of
   source file locations is typically taken from a ``Record`` instance.
@@ -769,14 +769,14 @@
 .. code-block:: text
 
   DETAILED RECORDS for file llvm-project\llvm\lib\target\arc\arc.td
-  
+
   -------------------- Global Variables (5) --------------------
-  
+
   AMDGPUBufferIntrinsics = [int_amdgcn_buffer_load_format, ...
   AMDGPUImageDimAtomicIntrinsics = [int_amdgcn_image_atomic_swap_1d, ...
   ...
   -------------------- Classes (758) --------------------
-  
+
   AMDGPUBufferLoad  |IntrinsicsAMDGPU.td:879|
     Template args:
       LLVMType AMDGPUBufferLoad:data_ty = llvm_any_ty  |IntrinsicsAMDGPU.td:879|
@@ -786,7 +786,7 @@
       string LLVMName = ""  |Intrinsics.td:343|
   ...
   -------------------- Records (12303) --------------------
-  
+
   AMDGPUSample_lz_o  |IntrinsicsAMDGPU.td:560|
     Defm sequence: |IntrinsicsAMDGPU.td:584| |IntrinsicsAMDGPU.td:566|
     Superclasses: AMDGPUSampleVariant
@@ -799,7 +799,7 @@
   their values.
 
 * The classes are shown with their source location, template arguments,
-  superclasses, and fields. 
+  superclasses, and fields.
 
 * The records are shown with their source location, ``defm`` sequence,
   superclasses, and fields.
@@ -828,7 +828,7 @@
                                TableGen Phase Timing
   ===-------------------------------------------------------------------------===
     Total Execution Time: 101.0106 seconds (102.4819 wall clock)
-  
+
      ---User Time---   --System Time--   --User+System--   ---Wall Time---  --- Name ---
     85.5197 ( 84.9%)   0.1560 ( 50.0%)  85.6757 ( 84.8%)  85.7009 ( 83.6%)  Backend overall
     15.1789 ( 15.1%)   0.0000 (  0.0%)  15.1789 ( 15.0%)  15.1829 ( 14.8%)  Parse, build records
@@ -847,7 +847,7 @@
                                TableGen Phase Timing
   ===-------------------------------------------------------------------------===
     Total Execution Time: 746.3868 seconds (747.1447 wall clock)
-  
+
      ---User Time---   --System Time--   --User+System--   ---Wall Time---  --- Name ---
     657.7938 ( 88.1%)   0.1404 ( 90.0%)  657.9342 ( 88.1%)  658.6497 ( 88.2%)  Emit matcher table
     70.2317 (  9.4%)   0.0000 (  0.0%)  70.2317 (  9.4%)  70.2700 (  9.4%)  Convert to matchers
diff --git a/llvm/docs/TableGen/ProgRef.rst b/llvm/docs/TableGen/ProgRef.rst
index 248af5b..1dd849f 100644
--- a/llvm/docs/TableGen/ProgRef.rst
+++ b/llvm/docs/TableGen/ProgRef.rst
@@ -217,7 +217,7 @@
 
 .. productionlist::
    BangOperator: one of
-               : !add        !and         !cast        !con         !dag 
+               : !add        !and         !cast        !con         !dag
                : !empty      !eq          !filter      !find        !foldl
                : !foreach    !ge          !getdagop    !gt          !head
                : !if         !interleave  !isa         !le          !listconcat
@@ -550,14 +550,14 @@
    Statement: `Assert` | `Class` | `Def` | `Defm` | `Defset` | `Defvar`
             :| `Foreach` | `If` | `Let` | `MultiClass`
 
-The following sections describe each of these top-level statements. 
+The following sections describe each of these top-level statements.
 
 
 ``class`` --- define an abstract record class
 ---------------------------------------------
 
 A ``class`` statement defines an abstract record class from which other
-classes and records can inherit. 
+classes and records can inherit.
 
 .. productionlist::
    Class: "class" `ClassID` [`TemplateArgList`] `RecordBody`
@@ -924,7 +924,7 @@
 
 Once multiclasses have been defined, you use the ``defm`` statement to
 "invoke" them and process the multiple record definitions in those
-multiclasses. Those record definitions are specified by ``def`` 
+multiclasses. Those record definitions are specified by ``def``
 statements in the multiclasses, and indirectly by ``defm`` statements.
 
 .. productionlist::
@@ -1324,7 +1324,7 @@
 ``dag`` datatype. A DAG node consists of an operator and zero or more
 arguments (or operands). Each argument can be of any desired type. By using
 another DAG node as an argument, an arbitrary graph of DAG nodes can be
-built. 
+built.
 
 The syntax of a ``dag`` instance is:
 
@@ -1332,7 +1332,7 @@
 
 The operator must be present and must be a record. There can be zero or more
 arguments, separated by commas. The operator and arguments can have three
-formats. 
+formats.
 
 ====================== =============================================
 Format                 Meaning
@@ -1625,7 +1625,7 @@
 
 ``!eq(`` *a*\ `,` *b*\ ``)``
     This operator produces 1 if *a* is equal to *b*; 0 otherwise.
-    The arguments must be ``bit``, ``bits``, ``int``, ``string``, or 
+    The arguments must be ``bit``, ``bits``, ``int``, ``string``, or
     record values. Use ``!cast<string>`` to compare other types of objects.
 
 ``!filter(``\ *var*\ ``,`` *list*\ ``,`` *predicate*\ ``)``
diff --git a/llvm/docs/TableGen/index.rst b/llvm/docs/TableGen/index.rst
index b2eddcd..0056927 100644
--- a/llvm/docs/TableGen/index.rst
+++ b/llvm/docs/TableGen/index.rst
@@ -73,7 +73,7 @@
   XMM0, XMM1, XMM10, XMM11, XMM12, XMM13, XMM14, XMM15, XMM2, XMM3, XMM4, XMM5,
   XMM6, XMM7, XMM8, XMM9,
 
-  $ llvm-tblgen X86.td -print-enums -class=Instruction 
+  $ llvm-tblgen X86.td -print-enums -class=Instruction
   ABS_F, ABS_Fp32, ABS_Fp64, ABS_Fp80, ADC32mi, ADC32mi8, ADC32mr, ADC32ri,
   ADC32ri8, ADC32rm, ADC32rr, ADC64mi32, ADC64mi8, ADC64mr, ADC64ri32, ADC64ri8,
   ADC64rm, ADC64rr, ADD16mi, ADD16mi8, ADD16mr, ADD16ri, ADD16ri8, ADD16rm,
@@ -266,7 +266,7 @@
 TableGen files have no real meaning without a backend. The default operation
 when running ``*-tblgen`` is to print the information in a textual format, but
 that's only useful for debugging the TableGen files themselves. The power
-in TableGen is, however, to interpret the source files into an internal 
+in TableGen is, however, to interpret the source files into an internal
 representation that can be generated into anything you want.
 
 Current usage of TableGen is to create huge include files with tables that you
diff --git a/llvm/docs/Vectorizers.rst b/llvm/docs/Vectorizers.rst
index dd2cce1..32cc2ff 100644
--- a/llvm/docs/Vectorizers.rst
+++ b/llvm/docs/Vectorizers.rst
@@ -262,7 +262,7 @@
 Scatter / Gather
 ^^^^^^^^^^^^^^^^
 
-The Loop Vectorizer can vectorize code that becomes a sequence of scalar instructions 
+The Loop Vectorizer can vectorize code that becomes a sequence of scalar instructions
 that scatter/gather memory.
 
 .. code-block:: c++
@@ -328,9 +328,9 @@
 |     |     | fmuladd |
 +-----+-----+---------+
 
-Note that the optimizer may not be able to vectorize math library functions 
-that correspond to these intrinsics if the library calls access external state 
-such as "errno". To allow better optimization of C/C++ math library functions, 
+Note that the optimizer may not be able to vectorize math library functions
+that correspond to these intrinsics if the library calls access external state
+such as "errno". To allow better optimization of C/C++ math library functions,
 use "-fno-math-errno".
 
 The loop vectorizer knows about special instructions on the target and will
@@ -349,8 +349,8 @@
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 Modern processors feature multiple execution units, and only programs that contain a
-high degree of parallelism can fully utilize the entire width of the machine. 
-The Loop Vectorizer increases the instruction level parallelism (ILP) by 
+high degree of parallelism can fully utilize the entire width of the machine.
+The Loop Vectorizer increases the instruction level parallelism (ILP) by
 performing partial-unrolling of loops.
 
 In the example below the entire array is accumulated into the variable 'sum'.
@@ -368,7 +368,7 @@
   }
 
 The Loop Vectorizer uses a cost model to decide when it is profitable to unroll loops.
-The decision to unroll the loop depends on the register pressure and the generated code size. 
+The decision to unroll the loop depends on the register pressure and the generated code size.
 
 Epilogue Vectorization
 ^^^^^^^^^^^^^^^^^^^^^^
diff --git a/llvm/docs/WritingAnLLVMBackend.rst b/llvm/docs/WritingAnLLVMBackend.rst
index c153114..fbf16cb 100644
--- a/llvm/docs/WritingAnLLVMBackend.rst
+++ b/llvm/docs/WritingAnLLVMBackend.rst
@@ -943,7 +943,7 @@
 
 XXXInstrInfo.cpp:
 
-.. code-block:: c++ 
+.. code-block:: c++
 
   #define GET_INSTRINFO_NAMED_OPS // For getNamedOperandIdx() function
   #include "XXXGenInstrInfo.inc"
@@ -1047,7 +1047,7 @@
 
 .. code-block:: shell
 
-  $ VERBOSE=1 make ... 
+  $ VERBOSE=1 make ...
 
 and search for ``llvm-tblgen`` commands in the output.
 
@@ -1976,4 +1976,3 @@
 values, incoming arguments, and frame and return address.  The callback
 function needs low-level access to the registers or stack, so it is typically
 implemented with assembler.
-
diff --git a/llvm/docs/WritingAnLLVMPass.rst b/llvm/docs/WritingAnLLVMPass.rst
index 133775c..9065a7d 100644
--- a/llvm/docs/WritingAnLLVMPass.rst
+++ b/llvm/docs/WritingAnLLVMPass.rst
@@ -66,7 +66,7 @@
 
   add_llvm_library( LLVMHello MODULE
     Hello.cpp
-  
+
     PLUGIN_TOOL
     opt
     )
@@ -214,7 +214,7 @@
   struct Hello : public FunctionPass {
     static char ID;
     Hello() : FunctionPass(ID) {}
-  
+
     bool runOnFunction(Function &F) override {
       errs() << "Hello: ";
       errs().write_escaped(F.getName()) << '\n';
@@ -307,7 +307,7 @@
                         ... Pass execution timing report ...
   ===-------------------------------------------------------------------------===
     Total Execution Time: 0.0007 seconds (0.0005 wall clock)
-  
+
      ---User Time---   --User+System--   ---Wall Time---  --- Name ---
      0.0004 ( 55.3%)   0.0004 ( 55.3%)   0.0004 ( 75.7%)  Bitcode Writer
      0.0003 ( 44.7%)   0.0003 ( 44.7%)   0.0001 ( 13.6%)  Hello World Pass
@@ -1440,4 +1440,3 @@
 places (for global resources).  Although this is a simple extension, we simply
 haven't had time (or multiprocessor machines, thus a reason) to implement this.
 Despite that, we have kept the LLVM passes SMP ready, and you should too.
-
diff --git a/llvm/docs/XRayExample.rst b/llvm/docs/XRayExample.rst
index 1ff66dc..2260120 100644
--- a/llvm/docs/XRayExample.rst
+++ b/llvm/docs/XRayExample.rst
@@ -38,7 +38,7 @@
 
   $ objdump -h -j xray_instr_map ./bin/llc
   ./bin/llc:     file format elf64-x86-64
-  
+
   Sections:
   Idx Name          Size      VMA               LMA               File off  Algn
    14 xray_instr_map 00002fc0  00000000041516c6  00000000041516c6  03d516c6  2**0
@@ -105,13 +105,13 @@
 
   $ llvm-xray convert -f yaml --symbolize --instr_map=./bin/llc xray-log.llc.m35qPB
   ---
-  header:          
+  header:
     version:         1
     type:            0
     constant-tsc:    true
     nonstop-tsc:     true
     cycle-frequency: 2601000000
-  records:         
+  records:
     - { type: 0, func-id: 110, function: __cxx_global_var_init.8, cpu: 37, thread: 69819, kind: function-enter, tsc: 5434426023268520 }
     - { type: 0, func-id: 110, function: __cxx_global_var_init.8, cpu: 37, thread: 69819, kind: function-exit, tsc: 5434426023523052 }
     - { type: 0, func-id: 164, function: __cxx_global_var_init, cpu: 37, thread: 69819, kind: function-enter, tsc: 5434426029925386 }
@@ -153,7 +153,7 @@
 
   $ llvm-xray account xray-log.llc.5rqxkU --top=10 --sort=sum --sortorder=dsc --instr_map=./bin/llc
   Functions with latencies: 36652
-   funcid      count [      min,       med,       90p,       99p,       max]       sum  function    
+   funcid      count [      min,       med,       90p,       99p,       max]       sum  function
        75          1 [ 0.672368,  0.672368,  0.672368,  0.672368,  0.672368]  0.672368  llc.cpp:271:0: main
        78          1 [ 0.626455,  0.626455,  0.626455,  0.626455,  0.626455]  0.626455  llc.cpp:381:0: compileModule(char**, llvm::LLVMContext&)
    139617          1 [ 0.472618,  0.472618,  0.472618,  0.472618,  0.472618]  0.472618  LegacyPassManager.cpp:1723:0: llvm::legacy::PassManager::run(llvm::Module&)
@@ -345,5 +345,3 @@
     XRay traces.
   - Collecting function call stacks and how often they're encountered in the
     XRay trace.
-
-
diff --git a/llvm/docs/YamlIO.rst b/llvm/docs/YamlIO.rst
index a42650d..c696aff 100644
--- a/llvm/docs/YamlIO.rst
+++ b/llvm/docs/YamlIO.rst
@@ -8,8 +8,8 @@
 Introduction to YAML
 ====================
 
-YAML is a human readable data serialization language.  The full YAML language 
-spec can be read at `yaml.org 
+YAML is a human readable data serialization language.  The full YAML language
+spec can be read at `yaml.org
 <http://www.yaml.org/spec/1.2/spec.html#Introduction>`_.  The simplest form of
 yaml is just "scalars", "mappings", and "sequences".  A scalar is any number
 or string.  The pound/hash symbol (#) begins a comment line.   A mapping is
@@ -20,8 +20,8 @@
      # a mapping
      name:      Tom
      hat-size:  7
-     
-A sequence is a list of items where each item starts with a leading dash ('-'). 
+
+A sequence is a list of items where each item starts with a leading dash ('-').
 For example:
 
 .. code-block:: yaml
@@ -51,7 +51,7 @@
 
 Sometimes sequences are known to be short and one entry per line is too
 verbose, so YAML offers an alternate syntax for sequences called a "Flow
-Sequence" in which you put comma separated sequence elements into square 
+Sequence" in which you put comma separated sequence elements into square
 brackets.  The above example could then be simplified to:
 
 
@@ -71,27 +71,27 @@
 
 The use of indenting makes the YAML easy for a human to read and understand,
 but having a program read and write YAML involves a lot of tedious details.
-The YAML I/O library structures and simplifies reading and writing YAML 
+The YAML I/O library structures and simplifies reading and writing YAML
 documents.
 
 YAML I/O assumes you have some "native" data structures which you want to be
-able to dump as YAML and recreate from YAML.  The first step is to try 
-writing example YAML for your data structures. You may find after looking at 
+able to dump as YAML and recreate from YAML.  The first step is to try
+writing example YAML for your data structures. You may find after looking at
 possible YAML representations that a direct mapping of your data structures
 to YAML is not very readable.  Often the fields are not in the order that
 a human would find readable.  Or the same information is replicated in multiple
-locations, making it hard for a human to write such YAML correctly.  
+locations, making it hard for a human to write such YAML correctly.
 
-In relational database theory there is a design step called normalization in 
-which you reorganize fields and tables.  The same considerations need to 
+In relational database theory there is a design step called normalization in
+which you reorganize fields and tables.  The same considerations need to
 go into the design of your YAML encoding.  But, you may not want to change
 your existing native data structures.  Therefore, when writing out YAML
 there may be a normalization step, and when reading YAML there would be a
-corresponding denormalization step.  
+corresponding denormalization step.
 
-YAML I/O uses a non-invasive, traits based design.  YAML I/O defines some 
+YAML I/O uses a non-invasive, traits based design.  YAML I/O defines some
 abstract base templates.  You specialize those templates on your data types.
-For instance, if you have an enumerated type FooBar you could specialize 
+For instance, if you have an enumerated type FooBar you could specialize
 ScalarEnumerationTraits on that type and define the enumeration() method:
 
 .. code-block:: c++
@@ -107,21 +107,21 @@
     };
 
 
-As with all YAML I/O template specializations, the ScalarEnumerationTraits is used for 
+As with all YAML I/O template specializations, ScalarEnumerationTraits is used for
 both reading and writing YAML. That is, the mapping between in-memory enum
 values and the YAML string representation is only in one place.
 This assures that the code for writing and parsing of YAML stays in sync.
 
-To specify a YAML mappings, you define a specialization on 
+To specify a YAML mapping, you define a specialization on
 llvm::yaml::MappingTraits.
 If your native data structure happens to be a struct that is already normalized,
 then the specialization is simple.  For example:
 
 .. code-block:: c++
-   
+
     using llvm::yaml::MappingTraits;
     using llvm::yaml::IO;
-    
+
     template <>
     struct MappingTraits<Person> {
       static void mapping(IO &io, Person &info) {
@@ -135,11 +135,11 @@
 iterators and a push_back() method.  Therefore any of the STL containers
 (such as std::vector<>) will automatically translate to YAML sequences.
 
-Once you have defined specializations for your data types, you can 
+Once you have defined specializations for your data types, you can
 programmatically use YAML I/O to write a YAML document:
 
 .. code-block:: c++
-   
+
     using llvm::yaml::Output;
 
     Person tom;
@@ -151,10 +151,10 @@
     std::vector<Person> persons;
     persons.push_back(tom);
     persons.push_back(dan);
-    
+
     Output yout(llvm::outs());
     yout << persons;
-   
+
 This would write the following:
 
 .. code-block:: yaml
@@ -172,21 +172,21 @@
 
     typedef std::vector<Person> PersonList;
     std::vector<PersonList> docs;
-    
+
     Input yin(document.getBuffer());
     yin >> docs;
-    
+
     if ( yin.error() )
       return;
-    
+
     // Process read document
     for ( PersonList &pl : docs ) {
       for ( Person &person : pl ) {
         cout << "name=" << person.name;
       }
     }
-  
-One other feature of YAML is the ability to define multiple documents in a 
+
+One other feature of YAML is the ability to define multiple documents in a
 single file.  That is why reading YAML produces a vector of your document type.
 
 
@@ -194,9 +194,9 @@
 Error Handling
 ==============
 
-When parsing a YAML document, if the input does not match your schema (as 
-expressed in your XxxTraits<> specializations).  YAML I/O 
-will print out an error message and your Input object's error() method will 
+When parsing a YAML document, if the input does not match your schema (as
+expressed in your XxxTraits<> specializations), YAML I/O
+will print out an error message and your Input object's error() method will
 return true. For instance the following document:
 
 .. code-block:: yaml
@@ -206,7 +206,7 @@
      - name:      Dan
        hat-size:  7
 
-Has a key (shoe-size) that is not defined in the schema.  YAML I/O will 
+Has a key (shoe-size) that is not defined in the schema.  YAML I/O will
 automatically generate this error:
 
 .. code-block:: yaml
@@ -265,7 +265,7 @@
     LLVM_YAML_STRONG_TYPEDEF(uint32_t, MyBarFlags)
 
 This generates two classes MyFooFlags and MyBarFlags which you can use in your
-native data structures instead of uint32_t. They are implicitly 
+native data structures instead of uint32_t. They are implicitly
 converted to and from uint32_t.  The point of creating these unique types
 is that you can now specify traits on them to get different YAML conversions.
 
@@ -289,7 +289,7 @@
 YAML I/O supports translating between in-memory enumerations and a set of string
 values in YAML documents. This is done by specializing ScalarEnumerationTraits<>
 on your enumeration type and defining an enumeration() method.
-For instance, suppose you had an enumeration of CPUs and a struct with it as 
+For instance, suppose you had an enumeration of CPUs and a struct with it as
 a field:
 
 .. code-block:: c++
@@ -299,15 +299,15 @@
       cpu_x86     = 7,
       cpu_PowerPC = 8
     };
-    
+
     struct Info {
       CPUs      cpu;
       uint32_t  flags;
     };
-    
-To support reading and writing of this enumeration, you can define a 
-ScalarEnumerationTraits specialization on CPUs, which can then be used 
-as a field type: 
+
+To support reading and writing of this enumeration, you can define a
+ScalarEnumerationTraits specialization on CPUs, which can then be used
+as a field type:
 
 .. code-block:: c++
 
@@ -323,7 +323,7 @@
         io.enumCase(value, "PowerPC", cpu_PowerPC);
       }
     };
- 
+
     template <>
     struct MappingTraits<Info> {
       static void mapping(IO &io, Info &info) {
@@ -336,13 +336,13 @@
 specified by enumCase() methods, an error is automatically generated.
 When writing YAML, if the value being written does not match any of the values
 specified by the enumCase() methods, a runtime assertion is triggered.
-  
+
 
 BitValue
 --------
 Another common data structure in C++ is a field where each bit has a unique
 meaning.  This is often used in a "flags" field.  YAML I/O has support for
-converting such fields to a flow sequence.   For instance suppose you 
+converting such fields to a flow sequence.  For instance, suppose you
 had the following bit flags defined:
 
 .. code-block:: c++
@@ -355,9 +355,9 @@
     };
 
     LLVM_YAML_STRONG_TYPEDEF(uint32_t, MyFlags)
-    
+
 To support reading and writing of MyFlags, you specialize ScalarBitSetTraits<>
-on MyFlags and provide the bit values and their names.   
+on MyFlags and provide the bit values and their names.
 
 .. code-block:: c++
 
@@ -374,12 +374,12 @@
         io.bitSetCase(value, "pointy",  flagPointy);
       }
     };
-    
+
     struct Info {
       StringRef   name;
       MyFlags     flags;
     };
-    
+
     template <>
     struct MappingTraits<Info> {
       static void mapping(IO &io, Info& info) {
@@ -388,7 +388,7 @@
        }
     };
 
-With the above, YAML I/O (when writing) will test mask each value in the 
+With the above, YAML I/O (when writing) will test each mask value in the
 bitset trait against the flags field, and each that matches will
 cause the corresponding string to be added to the flow sequence.  The opposite
 is done when reading and any unknown string values will result in an error. With
@@ -440,8 +440,8 @@
 -------------
 Sometimes for readability a scalar needs to be formatted in a custom way. For
 instance your internal data structure may use an integer for time (seconds since
-some epoch), but in YAML it would be much nicer to express that integer in 
-some time format (e.g. 4-May-2012 10:30pm).  YAML I/O has a way to support  
+some epoch), but in YAML it would be much nicer to express that integer in
+some time format (e.g. 4-May-2012 10:30pm).  YAML I/O has a way to support
 custom formatting and parsing of scalar types by specializing ScalarTraits<> on
 your data type.  When writing, YAML I/O will provide the native type and
 your specialization must create a temporary llvm::StringRef.  When reading,
@@ -518,21 +518,21 @@
       }
     };
 
-    
+
 
 Mappings
 ========
 
-To be translated to or from a YAML mapping for your type T you must specialize  
-llvm::yaml::MappingTraits on T and implement the "void mapping(IO &io, T&)" 
+To be translated to or from a YAML mapping for your type T you must specialize
+llvm::yaml::MappingTraits on T and implement the "void mapping(IO &io, T&)"
 method. If your native data structures use pointers to a class everywhere,
 you can specialize on the class pointer.  Examples:
 
 .. code-block:: c++
-   
+
     using llvm::yaml::MappingTraits;
     using llvm::yaml::IO;
-    
+
     // Example of struct Foo which is used by value
     template <>
     struct MappingTraits<Foo> {
@@ -555,16 +555,16 @@
 No Normalization
 ----------------
 
-The ``mapping()`` method is responsible, if needed, for normalizing and 
-denormalizing. In a simple case where the native data structure requires no 
-normalization, the mapping method just uses mapOptional() or mapRequired() to 
+The ``mapping()`` method is responsible, if needed, for normalizing and
+denormalizing. In a simple case where the native data structure requires no
+normalization, the mapping method just uses mapOptional() or mapRequired() to
 bind the struct's fields to YAML key names.  For example:
 
 .. code-block:: c++
-   
+
     using llvm::yaml::MappingTraits;
     using llvm::yaml::IO;
-    
+
     template <>
     struct MappingTraits<Person> {
       static void mapping(IO &io, Person &info) {
@@ -583,17 +583,17 @@
 do the normalization and denormalization.  The template is used to create
 a local variable in your mapping() method which contains the normalized keys.
 
-Suppose you have native data type 
+Suppose you have a native data type
 Polar which specifies a position in polar coordinates (distance, angle):
 
 .. code-block:: c++
-   
+
     struct Polar {
       float distance;
       float angle;
     };
 
-but you've decided the normalized YAML for should be in x,y coordinates. That 
+but you've decided the normalized YAML form should be in x,y coordinates. That
 is, you want the yaml to look like:
 
 .. code-block:: yaml
@@ -602,50 +602,50 @@
     y:   -4.7
 
 You can support this by defining a MappingTraits that normalizes the polar
-coordinates to x,y coordinates when writing YAML and denormalizes x,y 
-coordinates into polar when reading YAML.  
+coordinates to x,y coordinates when writing YAML and denormalizes x,y
+coordinates into polar when reading YAML.
 
 .. code-block:: c++
-   
+
     using llvm::yaml::MappingTraits;
     using llvm::yaml::IO;
-        
+
     template <>
     struct MappingTraits<Polar> {
-      
+
       class NormalizedPolar {
       public:
         NormalizedPolar(IO &io)
           : x(0.0), y(0.0) {
         }
         NormalizedPolar(IO &, Polar &polar)
-          : x(polar.distance * cos(polar.angle)), 
+          : x(polar.distance * cos(polar.angle)),
             y(polar.distance * sin(polar.angle)) {
         }
         Polar denormalize(IO &) {
           return Polar(sqrt(x*x+y*y), atan2(y, x));
         }
-         
+
         float        x;
         float        y;
       };
 
       static void mapping(IO &io, Polar &polar) {
         MappingNormalization<NormalizedPolar, Polar> keys(io, polar);
-        
+
         io.mapRequired("x",    keys->x);
         io.mapRequired("y",    keys->y);
       }
     };
 
-When writing YAML, the local variable "keys" will be a stack allocated 
+When writing YAML, the local variable "keys" will be a stack allocated
 instance of NormalizedPolar, constructed from the supplied polar object which
 initializes its x and y fields.  The mapRequired() methods then write out the x
-and y values as key/value pairs.  
+and y values as key/value pairs.
 
 When reading YAML, the local variable "keys" will be a stack allocated instance
-of NormalizedPolar, constructed by the empty constructor.  The mapRequired 
-methods will find the matching key in the YAML document and fill in the x and y 
+of NormalizedPolar, constructed by the empty constructor.  The mapRequired
+methods will find the matching key in the YAML document and fill in the x and y
 fields of the NormalizedPolar object keys. At the end of the mapping() method
 when the local keys variable goes out of scope, the denormalize() method will
 automatically be called to convert the read values back to polar coordinates,
@@ -654,7 +654,7 @@
 In some cases, the normalized class may be a subclass of the native type and
 could be returned by the denormalize() method, except that the temporary
 normalized instance is stack allocated.  In these cases, the utility template
-MappingNormalizationHeap<> can be used instead.  It just like 
+MappingNormalizationHeap<> can be used instead.  It is just like
 MappingNormalization<> except that it heap allocates the normalized object
 when reading YAML.  It never destroys the normalized object.  The denormalize()
 method can then return "this".
@@ -662,23 +662,23 @@
 
 Default values
 --------------
-Within a mapping() method, calls to io.mapRequired() mean that that key is 
-required to exist when parsing YAML documents, otherwise YAML I/O will issue an 
+Within a mapping() method, calls to io.mapRequired() mean that that key is
+required to exist when parsing YAML documents, otherwise YAML I/O will issue an
 error.
 
-On the other hand, keys registered with io.mapOptional() are allowed to not 
-exist in the YAML document being read.  So what value is put in the field 
-for those optional keys? 
-There are two steps to how those optional fields are filled in. First, the  
+On the other hand, keys registered with io.mapOptional() are allowed to not
+exist in the YAML document being read.  So what value is put in the field
+for those optional keys?
+There are two steps to how those optional fields are filled in. First, the
 second parameter to the mapping() method is a reference to a native class.  That
 native class must have a default constructor.  Whatever value the default
 constructor initially sets for an optional field will be that field's value.
 Second, the mapOptional() method has an optional third parameter.  If provided
-it is the value that mapOptional() should set that field to if the YAML document  
-does not have that key.  
+it is the value that mapOptional() should set that field to if the YAML document
+does not have that key.
 
 There is one important difference between those two ways (default constructor
-and third parameter to mapOptional). When YAML I/O generates a YAML document, 
+and third parameter to mapOptional). When YAML I/O generates a YAML document,
 if the mapOptional() third parameter is used and the actual value being written
 is the same as (using ==) the default value, then that key/value is not written.
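
As a purely illustrative sketch (the ``Widget`` type, its fields, and the
default of 10 are invented for this example and do not appear in the text
above), the two mechanisms might look like:

.. code-block:: c++

    #include "llvm/Support/YAMLTraits.h"
    #include <cstdint>
    #include <string>

    struct Widget {
      std::string name;
      uint32_t weight = 0;   // default-constructor value used by mapOptional()
      uint32_t count = 0;
    };

    namespace llvm {
    namespace yaml {
    template <>
    struct MappingTraits<Widget> {
      static void mapping(IO &io, Widget &w) {
        io.mapRequired("name", w.name);
        io.mapOptional("weight", w.weight);    // missing key: stays 0
        // Third parameter: a missing "count" key reads as 10, and the key is
        // omitted on output whenever w.count == 10.
        io.mapOptional("count", w.count, 10u);
      }
    };
    } // namespace yaml
    } // namespace llvm

Reading a document that contains only ``name`` would then leave ``weight`` at 0
and set ``count`` to 10.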
 
@@ -692,19 +692,19 @@
 the YAML document would find natural.  This may be different from the order
 of the fields in the native class.
 
-When reading in a YAML document, the keys in the document can be in any order, 
-but they are processed in the order that the calls to mapRequired()/mapOptional() 
-are made in the mapping() method.  That enables some interesting 
+When reading in a YAML document, the keys in the document can be in any order,
+but they are processed in the order that the calls to mapRequired()/mapOptional()
+are made in the mapping() method.  That enables some interesting
 functionality.  For instance, if the first field bound is the cpu and the second
 field bound is flags, and the flags are cpu specific, you can programmatically
-switch how the flags are converted to and from YAML based on the cpu.  
+switch how the flags are converted to and from YAML based on the cpu.
 This works for both reading and writing. For example:
 
 .. code-block:: c++
 
     using llvm::yaml::MappingTraits;
     using llvm::yaml::IO;
-    
+
     struct Info {
       CPUs        cpu;
       uint32_t    flags;
@@ -729,9 +729,9 @@
 The YAML syntax supports tags as a way to specify the type of a node before
 it is parsed. This allows dynamic types of nodes.  But the YAML I/O model uses
 static typing, so there are limits to how you can use tags with the YAML I/O
-model. Recently, we added support to YAML I/O for checking/setting the optional 
-tag on a map. Using this functionality it is even possible to support different 
-mappings, as long as they are convertible.  
+model. Recently, we added support to YAML I/O for checking/setting the optional
+tag on a map. Using this functionality it is even possible to support different
+mappings, as long as they are convertible.
 
 To check a tag, inside your mapping() method you can use io.mapTag() to specify
 what the tag should be.  This will also add that tag when writing yaml.
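
A minimal sketch of checking and emitting a tag from within ``mapping()``
(the ``ThingUnderTag`` type and the ``!Thing`` tag string are invented for this
illustration):

.. code-block:: c++

    #include "llvm/Support/YAMLTraits.h"
    #include <string>

    struct ThingUnderTag {
      std::string name;
    };

    namespace llvm {
    namespace yaml {
    template <>
    struct MappingTraits<ThingUnderTag> {
      static void mapping(IO &io, ThingUnderTag &t) {
        // On output this emits the "!Thing" tag on the mapping; on input it
        // reports whether the node carries that tag.
        io.mapTag("!Thing", /*Default=*/true);
        io.mapRequired("name", t.name);
      }
    };
    } // namespace yaml
    } // namespace llvm
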
@@ -742,13 +742,13 @@
 Sometimes in a YAML map, each key/value pair is valid, but the combination is
 not.  This is similar to something having no syntax errors, but still having
 semantic errors.  To support semantic level checking, YAML I/O allows
-an optional ``validate()`` method in a MappingTraits template specialization.  
+an optional ``validate()`` method in a MappingTraits template specialization.
 
-When parsing YAML, the ``validate()`` method is call *after* all key/values in 
-the map have been processed. Any error message returned by the ``validate()`` 
+When parsing YAML, the ``validate()`` method is called *after* all key/values in
+the map have been processed. Any error message returned by the ``validate()``
 method during input will be printed just like a syntax error would be printed.
-When writing YAML, the ``validate()`` method is called *before* the YAML 
-key/values  are written.  Any error during output will trigger an ``assert()`` 
+When writing YAML, the ``validate()`` method is called *before* the YAML
+key/values are written.  Any error during output will trigger an ``assert()``
 because it is a programming error to have invalid struct values.
 
 
@@ -756,7 +756,7 @@
 
     using llvm::yaml::MappingTraits;
     using llvm::yaml::IO;
-    
+
     struct Stuff {
       ...
     };
@@ -819,16 +819,16 @@
   };
 
 The size() method returns how many elements are currently in your sequence.
-The element() method returns a reference to the i'th element in the sequence. 
+The element() method returns a reference to the i'th element in the sequence.
 When parsing YAML, the element() method may be called with an index one bigger
 than the current size.  Your element() method should allocate space for one
 more element (using the default constructor if the element is a C++ object) and return
-a reference to that new allocated space.  
+a reference to that newly allocated space.
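+
+For concreteness, a sketch of those two methods for a hypothetical
+``NameList`` type backed by a std::vector could look like this:
+
+.. code-block:: c++
+
+  using llvm::yaml::SequenceTraits;
+  using llvm::yaml::IO;
+
+  struct NameList {
+    std::vector<std::string> names;
+  };
+
+  template <>
+  struct SequenceTraits<NameList> {
+    static size_t size(IO &io, NameList &list) {
+      return list.names.size();
+    }
+    static std::string &element(IO &io, NameList &list, size_t index) {
+      // While parsing, index may be one past the current size, so grow
+      // the vector on demand before handing back a reference.
+      if (index >= list.names.size())
+        list.names.resize(index + 1);
+      return list.names[index];
+    }
+  };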
 
 
 Flow Sequence
 -------------
-A YAML "flow sequence" is a sequence that when written to YAML it uses the 
+A YAML "flow sequence" is a sequence that when written to YAML it uses the
 inline notation (e.g [ foo, bar ] ).  To specify that a sequence type should
 be written in YAML as a flow sequence, your SequenceTraits specialization should
 add "static const bool flow = true;".  For instance:
@@ -839,13 +839,13 @@
   struct SequenceTraits<MyList> {
     static size_t size(IO &io, MyList &list) { ... }
     static MyListEl &element(IO &io, MyList &list, size_t index) { ... }
-    
+
     // The existence of this member causes YAML I/O to use a flow sequence
     static const bool flow = true;
   };
 
-With the above, if you used MyList as the data type in your native data 
-structures, then when converted to YAML, a flow sequence of integers 
+With the above, if you used MyList as the data type in your native data
+structures, then when converted to YAML, a flow sequence of integers
 will be used (e.g. [ 10, -3, 4 ]).
 
 Flow sequences are subject to line wrapping according to the Output object
@@ -855,7 +855,7 @@
 --------------
 Since a common source of sequences is std::vector<>, YAML I/O provides macros:
 LLVM_YAML_IS_SEQUENCE_VECTOR() and LLVM_YAML_IS_FLOW_SEQUENCE_VECTOR() which
-can be used to easily specify SequenceTraits<> on a std::vector type.  YAML 
+can be used to easily specify SequenceTraits<> on a std::vector type.  YAML
 I/O does not partially specialize SequenceTraits on std::vector<> because that
 would force all vectors to be sequences.  An example use of the macros:
 
@@ -871,13 +871,13 @@
 Document List
 =============
 
-YAML allows you to define multiple "documents" in a single YAML file.  Each 
+YAML allows you to define multiple "documents" in a single YAML file.  Each
 new document starts with a left aligned "---" token.  The end of all documents
 is denoted with a left aligned "..." token.  Many users of YAML will never
 have need for multiple documents.  The top level node in their YAML schema
 will be a mapping or sequence. For those cases, the following is not needed.
 But for cases where you do want multiple documents, you can specify a
-trait for you document list type.  The trait has the same methods as 
+trait for your document list type.  The trait has the same methods as
 SequenceTraits but is named DocumentListTraits.  For example:
 
 .. code-block:: c++
@@ -891,18 +891,18 @@
 
 User Context Data
 =================
-When an llvm::yaml::Input or llvm::yaml::Output object is created their 
-constructors take an optional "context" parameter.  This is a pointer to 
-whatever state information you might need.  
+When an llvm::yaml::Input or llvm::yaml::Output object is created, its
+constructor takes an optional "context" parameter.  This is a pointer to
+whatever state information you might need.
 
-For instance, in a previous example we showed how the conversion type for a 
-flags field could be determined at runtime based on the value of another field 
+For instance, in a previous example we showed how the conversion type for a
+flags field could be determined at runtime based on the value of another field
 in the mapping. But what if an inner mapping needs to know some field value
 of an outer mapping?  That is where the "context" parameter comes in. You
 can set values in the context in the outer map's mapping() method and
 retrieve those values in the inner map's mapping() method.
 
-The context value is just a void*.  All your traits which use the context 
+The context value is just a void*.  All your traits which use the context
 and operate on your native data types need to agree on what the context value
 actually is.  It could be a pointer to an object or struct which your various
 traits use to share context-sensitive information.
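+
+As a sketch of that pattern (every type and field name below is invented for
+this example), the outer mapping records a value in a context struct that the
+inner mapping later consults:
+
+.. code-block:: c++
+
+  using llvm::yaml::MappingTraits;
+  using llvm::yaml::IO;
+
+  // Hypothetical state shared between the outer and inner mappings.
+  struct ConverterContext {
+    uint32_t version = 0;
+  };
+
+  struct Inner {
+    uint32_t flags;
+  };
+
+  struct Outer {
+    uint32_t version;
+    Inner    inner;
+  };
+
+  template <>
+  struct MappingTraits<Inner> {
+    static void mapping(IO &io, Inner &in) {
+      // Retrieve what the outer mapping stored.
+      auto *ctxt = static_cast<ConverterContext *>(io.getContext());
+      if (ctxt && ctxt->version >= 2)
+        io.mapRequired("flags", in.flags);
+      else
+        io.mapOptional("flags", in.flags, uint32_t(0));
+    }
+  };
+
+  template <>
+  struct MappingTraits<Outer> {
+    static void mapping(IO &io, Outer &out) {
+      io.mapRequired("version", out.version);
+      // Stash the version so the inner mapping can see it.
+      auto *ctxt = static_cast<ConverterContext *>(io.getContext());
+      if (ctxt)
+        ctxt->version = out.version;
+      io.mapRequired("inner", out.inner);
+    }
+  };
+
+A pointer to a ConverterContext instance would then be passed as the context
+argument when constructing the Input or Output object.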
@@ -911,9 +911,9 @@
 Output
 ======
 
-The llvm::yaml::Output class is used to generate a YAML document from your 
-in-memory data structures, using traits defined on your data types.  
-To instantiate an Output object you need an llvm::raw_ostream, an optional 
+The llvm::yaml::Output class is used to generate a YAML document from your
+in-memory data structures, using traits defined on your data types.
+To instantiate an Output object you need an llvm::raw_ostream, an optional
 context pointer and an optional wrapping column:
 
 .. code-block:: c++
@@ -921,20 +921,20 @@
       class Output : public IO {
       public:
         Output(llvm::raw_ostream &, void *context = NULL, int WrapColumn = 70);
-    
+
 Once you have an Output object, you can use the C++ stream operator on it
 to write your native data as YAML. One thing to recall is that a YAML file
 can contain multiple "documents".  If the top level data structure you are
 streaming as YAML is a mapping, scalar, or sequence, then Output assumes you
-are generating one document and wraps the mapping output 
-with  "``---``" and trailing "``...``".  
+are generating one document and wraps the output
+with a leading "``---``" and a trailing "``...``".
 
 The WrapColumn parameter will cause the flow mappings and sequences to
 line-wrap when they go over the supplied column. Pass 0 to completely
 suppress the wrapping.
 
 .. code-block:: c++
-   
+
     using llvm::yaml::Output;
 
     void dumpMyMapDoc(const MyMapType &info) {
@@ -957,7 +957,7 @@
 and ends with a "...".
 
 .. code-block:: c++
-   
+
     using llvm::yaml::Output;
 
     void dumpMyMapDoc(const MyDocListType &docList) {
@@ -982,7 +982,7 @@
 
 The llvm::yaml::Input class is used to parse YAML document(s) into your native
 data structures. To instantiate an Input
-object you need a StringRef to the entire YAML file, and optionally a context 
+object you need a StringRef to the entire YAML file, and optionally a context
 pointer:
 
 .. code-block:: c++
@@ -990,22 +990,22 @@
       class Input : public IO {
       public:
         Input(StringRef inputContent, void *context=NULL);
-    
+
 Once you have an Input object, you can use the C++ stream operator to read
 the document(s).  If you expect there might be multiple YAML documents in
 one file, you'll need to specialize DocumentListTraits on a list of your
 document type and stream in that document list type.  Otherwise you can
-just stream in the document type.  Also, you can check if there was 
+just stream in the document type.  Also, you can check if there were
 any syntax errors in the YAML by calling the error() method on the Input
 object.  For example:
 
 .. code-block:: c++
-   
+
      // Reading a single document
      using llvm::yaml::Input;
 
      Input yin(mb.getBuffer());
-     
+
      // Parse the YAML file
      MyDocType theDoc;
      yin >> theDoc;
@@ -1013,17 +1013,17 @@
      // Check for error
      if ( yin.error() )
        return;
-  
-      
+
+
 .. code-block:: c++
-   
+
      // Reading multiple documents in one file
      using llvm::yaml::Input;
 
      LLVM_YAML_IS_DOCUMENT_LIST_VECTOR(MyDocType)
-     
+
      Input yin(mb.getBuffer());
-     
+
      // Parse the YAML file
      std::vector<MyDocType> theDocList;
      yin >> theDocList;
@@ -1031,5 +1031,3 @@
      // Check for error
      if ( yin.error() )
        return;
-
-
diff --git a/llvm/docs/tutorial/MyFirstLanguageFrontend/LangImpl08.rst b/llvm/docs/tutorial/MyFirstLanguageFrontend/LangImpl08.rst
index 16b4532..31232e4 100644
--- a/llvm/docs/tutorial/MyFirstLanguageFrontend/LangImpl08.rst
+++ b/llvm/docs/tutorial/MyFirstLanguageFrontend/LangImpl08.rst
@@ -130,8 +130,8 @@
 .. code-block:: c++
 
   TheModule->setDataLayout(TargetMachine->createDataLayout());
-  TheModule->setTargetTriple(TargetTriple);   
-  
+  TheModule->setTargetTriple(TargetTriple);
+
 Emit Object Code
 ================
 
@@ -179,7 +179,7 @@
 when you're done.
 
 ::
-   
+
     $ ./toy
     ready> def average(x y) (x + y) * 0.5;
     ^D
diff --git a/llvm/docs/tutorial/MyFirstLanguageFrontend/index.rst b/llvm/docs/tutorial/MyFirstLanguageFrontend/index.rst
index e1e477d..669df43 100644
--- a/llvm/docs/tutorial/MyFirstLanguageFrontend/index.rst
+++ b/llvm/docs/tutorial/MyFirstLanguageFrontend/index.rst
@@ -78,7 +78,7 @@
 -  `Chapter #8: Compiling to Object Files <LangImpl08.html>`_ - This
    chapter explains how to take LLVM IR and compile it down to object
    files, like a static compiler does.
--  `Chapter #9: Debug Information <LangImpl09.html>`_ - A real language 
+-  `Chapter #9: Debug Information <LangImpl09.html>`_ - A real language
    needs to support debuggers, so we
    add debug information that allows setting breakpoints in Kaleidoscope
    functions, printing out argument variables, and calling functions!