.. hmm:

=====================================
Heterogeneous Memory Management (HMM)
=====================================

Provide infrastructure and helpers to integrate non-conventional memory (device
memory like GPU on board memory) into regular kernel paths, with the cornerstone
of this being specialized struct page for such memory (see sections 5 to 7 of
this document).

HMM also provides optional helpers for SVM (Shared Virtual Memory), i.e.,
allowing a device to transparently access program address space coherently with
the CPU, meaning that any valid pointer on the CPU is also a valid pointer
for the device. This is becoming mandatory to simplify the use of advanced
heterogeneous computing where GPU, DSP, or FPGA are used to perform various
computations on behalf of a process.

This document is divided as follows: in the first section I expose the problems
related to using device specific memory allocators. In the second section, I
expose the hardware limitations that are inherent to many platforms. The third
section gives an overview of the HMM design. The fourth section explains how
CPU page-table mirroring works and the purpose of HMM in this context. The
fifth section deals with how device memory is represented inside the kernel.
Finally, the last section presents a new migration helper that allows
leveraging the device DMA engine.

.. contents:: :local:

Problems of using a device specific memory allocator
====================================================

Devices with a large amount of on board memory (several gigabytes) like GPUs
have historically managed their memory through dedicated driver specific APIs.
This creates a disconnect between memory allocated and managed by a device
driver and regular application memory (private anonymous, shared memory, or
regular file backed memory). From here on I will refer to this aspect as split
address space. I use shared address space to refer to the opposite situation:
i.e., one in which any application memory region can be used by a device
transparently.

Split address space happens because the device can only access memory allocated
through a device specific API. This implies that all memory objects in a program
are not equal from the device point of view, which complicates large programs
that rely on a wide set of libraries.

Concretely, this means that code that wants to leverage devices like GPUs needs
to copy objects between generically allocated memory (malloc, mmap private, mmap
shared) and memory allocated through the device driver API (this still ends up
with an mmap but of the device file).

For flat data sets (array, grid, image, ...) this isn't too hard to achieve but
complex data sets (list, tree, ...) are hard to get right. Duplicating a
complex data set needs to re-map all the pointer relations between each of its
elements. This is error prone and programs get harder to debug because of the
duplicate data sets and addresses.

Split address space also means that libraries cannot transparently use data
they are getting from the core program or another library and thus each library
might have to duplicate its input data set using the device specific memory
allocator. Large projects suffer from this and waste resources because of the
various memory copies.

Duplicating each library API to accept as input or output memory allocated by
each device specific allocator is not a viable option. It would lead to a
combinatorial explosion in the library entry points.

Finally, with the advance of high level language constructs (in C++ but in
other languages too) it is now possible for the compiler to leverage GPUs and
other devices without programmer knowledge. Some compiler identified patterns
are only doable with a shared address space. It is also more reasonable to use
a shared address space for all other patterns.


I/O bus, device memory characteristics
======================================

I/O buses cripple shared address spaces due to a few limitations. Most I/O
buses only allow basic memory access from device to main memory; even cache
coherency is often optional. Access to device memory from CPU is even more
limited. More often than not, it is not cache coherent.

If we only consider the PCIE bus, then a device can access main memory (often
through an IOMMU) and be cache coherent with the CPUs. However, it only allows
a limited set of atomic operations from the device on main memory. This is worse
in the other direction: the CPU can only access a limited range of the device
memory and cannot perform atomic operations on it. Thus device memory cannot
be considered the same as regular memory from the kernel point of view.

Another crippling factor is the limited bandwidth (~32GBytes/s with PCIE 4.0
and 16 lanes). This is 33 times less than the fastest GPU memory (1 TBytes/s).
The final limitation is latency. Access to main memory from the device has an
order of magnitude higher latency than when the device accesses its own memory.

Some platforms are developing new I/O buses or additions/modifications to PCIE
to address some of these limitations (OpenCAPI, CCIX). They mainly allow
two-way cache coherency between CPU and device and allow all atomic operations
the architecture supports. Sadly, not all platforms are following this trend
and some major architectures are left without hardware solutions to these
problems.

So for shared address space to make sense, not only must we allow devices to
access any memory but we must also permit any memory to be migrated to device
memory while the device is using it (blocking CPU access while it happens).


Shared address space and migration
==================================

HMM intends to provide two main features. The first one is to share the address
space by duplicating the CPU page table in the device page table so the same
address points to the same physical memory for any valid main memory address in
the process address space.

To achieve this, HMM offers a set of helpers to populate the device page table
while keeping track of CPU page table updates. Device page table updates are
not as easy as CPU page table updates. To update the device page table, you must
allocate a buffer (or use a pool of pre-allocated buffers) and write GPU
specific commands in it to perform the update (unmap, cache invalidations, and
flush, ...). This cannot be done through common code for all devices. This is
why HMM provides helpers to factor out everything that can be while leaving the
hardware specific details to the device driver.

The second mechanism HMM provides is a new kind of ZONE_DEVICE memory that
allows allocating a struct page for each page of the device memory. Those pages
are special because the CPU cannot map them. However, they allow migrating
main memory to device memory using existing migration mechanisms and everything
looks like a page is swapped out to disk from the CPU point of view. Using a
struct page gives the easiest and cleanest integration with existing mm
mechanisms. Here again, HMM only provides helpers, first to hotplug new
ZONE_DEVICE memory for the device memory and second to perform migration.
Policy decisions of what and when to migrate are left to the device driver.

Note that any CPU access to a device page triggers a page fault and a migration
back to main memory. For example, when a page backing a given CPU address A is
migrated from a main memory page to a device page, then any CPU access to
address A triggers a page fault and initiates a migration back to main memory.

With these two features, HMM not only allows a device to mirror process address
space, keeping both CPU and device page tables synchronized, but also leverages
device memory by migrating the part of the data set that is actively being
used by the device.


Address space mirroring implementation and API
==============================================

Address space mirroring's main objective is to allow duplication of a range of
CPU page table into a device page table; HMM helps keep both synchronized. A
device driver that wants to mirror a process address space must start with the
registration of an hmm_mirror struct::

 int hmm_mirror_register(struct hmm_mirror *mirror,
                         struct mm_struct *mm);
 int hmm_mirror_register_locked(struct hmm_mirror *mirror,
                                struct mm_struct *mm);
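
As a hedged illustration only (the per-process structure drv_process and the
drv_mirror_ops table are hypothetical driver names; only hmm_mirror_register()
itself is the HMM API), a driver typically embeds the hmm_mirror in a driver
private structure and registers it against the mm it wants to mirror, for
example when the process opens the device file::

 static int drv_mirror_process(struct drv_process *dproc)
 {
     /* drv_mirror_ops is an hmm_mirror_ops table like the one shown below */
     dproc->mirror.ops = &drv_mirror_ops;

     /* mirror the address space of the calling process */
     return hmm_mirror_register(&dproc->mirror, current->mm);
 }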


The locked variant is to be used when the driver is already holding mmap_sem
of the mm in write mode. The mirror struct has a set of callbacks that are used
to propagate CPU page tables::

 struct hmm_mirror_ops {
     /* update() - synchronize device page tables with the CPU
      *
      * @mirror: pointer to struct hmm_mirror
      * @action: type of update that occurred to the CPU page table
      * @start: virtual start address of the range to update
      * @end: virtual end address of the range to update
      *
      * This callback ultimately originates from mmu_notifiers when the CPU
      * page table is updated. The device driver must update its page table
      * in response to this callback. The action argument tells which update
      * to perform.
      *
      * The device driver must not return from this callback until the device
      * page tables are completely updated (TLBs flushed, etc); this is a
      * synchronous call.
      */
      void (*update)(struct hmm_mirror *mirror,
                     enum hmm_update action,
                     unsigned long start,
                     unsigned long end);
 };

The device driver must perform the update action to the range (mark range
read only, or fully unmap, ...). The device must be done with the update before
the driver callback returns.
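
For illustration only and not taken from any real driver, an update() callback
might look roughly like the following; struct drv_process,
drv_invalidate_range() and drv_flush_and_wait() are hypothetical driver
helpers, and the driver private lock is the same one used in the populate
pattern shown further below::

 static void drv_update(struct hmm_mirror *mirror,
                        enum hmm_update action,
                        unsigned long start,
                        unsigned long end)
 {
     struct drv_process *dproc;

     dproc = container_of(mirror, struct drv_process, mirror);

     take_lock(dproc->update);
     /*
      * Unmap or write protect [start, end) in the device page table
      * according to action, then wait for the device TLB flush. Only
      * return once the device can no longer use the old entries.
      */
     drv_invalidate_range(dproc, action, start, end);
     drv_flush_and_wait(dproc, start, end);
     release_lock(dproc->update);
 }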

When the device driver wants to populate a range of virtual addresses, it can
use either::

  long hmm_range_snapshot(struct hmm_range *range);
  long hmm_range_fault(struct hmm_range *range, bool block);

The first one (hmm_range_snapshot()) will only fetch present CPU page table
entries and will not trigger a page fault on missing or non-present entries.
The second one does trigger a page fault on missing or read-only entries if
write access is requested (see the default_flags discussion below). Page
faults use the generic mm page fault code path just like a CPU page fault.

Both functions copy CPU page table entries into their pfns array argument. Each
entry in that array corresponds to an address in the virtual range. HMM
provides a set of flags to help the driver identify special CPU page table
entries.
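
As a sketch only, a driver might walk the pfns array as follows after a
successful call; the flag bits and the drv_map_one() helper are made up for
illustration and follow the driver chosen encoding used in the default_flags
example later in this document; they are not defined by HMM::

 #define DRV_PFN_VALID (1ULL << 63)   /* driver chosen bit, not part of HMM */
 #define DRV_PFN_WRITE (1ULL << 62)   /* driver chosen bit, not part of HMM */

 static void drv_update_device_ptes(struct drv_process *dproc,
                                    struct hmm_range *range)
 {
     unsigned long npages = (range->end - range->start) >> PAGE_SHIFT;
     unsigned long i;

     for (i = 0; i < npages; i++) {
         uint64_t entry = range->pfns[i];
         struct page *page;

         /* hole, swapped out or otherwise non-present entry: nothing to map */
         if (!(entry & DRV_PFN_VALID))
             continue;

         /* the pfn is stored shifted by the driver chosen pfn_shift */
         page = pfn_to_page((entry & ~(DRV_PFN_VALID | DRV_PFN_WRITE)) >>
                            range->pfn_shift);

         /* map the page read-only unless the CPU page table allowed write */
         drv_map_one(dproc, range->start + (i << PAGE_SHIFT), page,
                     entry & DRV_PFN_WRITE);
     }
 }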

Locking with the update() callback is the most important aspect the driver must
respect in order to keep things properly synchronized. The usage pattern is::

 int driver_populate_range(...)
 {
      struct hmm_range range;
      ...

      range.start = ...;
      range.end = ...;
      range.pfns = ...;
      range.flags = ...;
      range.values = ...;
      range.pfn_shift = ...;
      hmm_range_register(&range);

      /*
       * Just wait for range to be valid, safe to ignore return value as we
       * will use the return value of hmm_range_snapshot() below under the
       * mmap_sem to ascertain the validity of the range.
       */
      hmm_range_wait_until_valid(&range, TIMEOUT_IN_MSEC);

 again:
      down_read(&mm->mmap_sem);
      ret = hmm_range_snapshot(&range);
      if (ret) {
          up_read(&mm->mmap_sem);
          if (ret == -EAGAIN) {
            /*
             * No need to check hmm_range_wait_until_valid() return value
             * on retry we will get proper error with hmm_range_snapshot()
             */
            hmm_range_wait_until_valid(&range, TIMEOUT_IN_MSEC);
            goto again;
          }
          hmm_range_unregister(&range);
          return ret;
      }
      take_lock(driver->update);
      if (!range.valid) {
          release_lock(driver->update);
          up_read(&mm->mmap_sem);
          goto again;
      }

      // Use pfns array content to update device page table

      hmm_range_unregister(&range);
      release_lock(driver->update);
      up_read(&mm->mmap_sem);
      return 0;
 }

The driver->update lock is the same lock that the driver takes inside its
update() callback. That lock must be held before checking the range.valid
field to avoid any race with a concurrent CPU page table update.

HMM implements all of this on top of the mmu_notifier API because we wanted a
simpler API and also to be able to perform optimizations later on, like doing
concurrent device updates in multi-device scenarios.

HMM also bridges the impedance mismatch between how CPU page table updates
are done (by the CPU writing to the page table and flushing TLBs) and how
devices update their own page table. Device updates are a multi-step process.
First, appropriate commands are written to a buffer, then this buffer is
scheduled for execution on the device. It is only once the device has executed
the commands in the buffer that the update is done. Creating and scheduling
the update command buffer can happen concurrently for multiple devices.
Waiting for each device to report commands as executed is serialized (there
is no point in doing this concurrently).


Leverage default_flags and pfn_flags_mask
=========================================

The hmm_range struct has 2 fields, default_flags and pfn_flags_mask, that
specify fault or snapshot policy for the whole range instead of having to set
them for each entry in the range.

For instance, if the device flags for device entries are::

    VALID (1 << 63)
    WRITE (1 << 62)

and the device driver wants to fault in a range with at least read permission,
it sets::

    range->default_flags = (1 << 63);
    range->pfn_flags_mask = 0;

and calls hmm_range_fault() as described above. This will fault in all pages
in the range with at least read permission.

Now let's say the driver wants to do the same except for one page in the range,
for which it wants write permission. Now the driver sets::

    range->default_flags = (1 << 63);
    range->pfn_flags_mask = (1 << 62);
    range->pfns[index_of_write] = (1 << 62);

With this, HMM will fault in all pages with at least read permission (i.e.,
valid) and for the address == range->start + (index_of_write << PAGE_SHIFT) it
will fault with write permission, i.e., if the CPU pte does not have write
permission set then HMM will call handle_mm_fault().

Note that HMM will populate the pfns array with write permission for any entry
that has write permission within the CPU pte, no matter what values are set
in default_flags or pfn_flags_mask.


Represent and manage device memory from core kernel point of view
=================================================================

Several different designs were tried to support device memory. The first one
used a device specific data structure to keep information about migrated memory
and HMM hooked itself in various places of mm code to handle any access to
addresses that were backed by device memory. It turns out that this ended up
replicating most of the fields of struct page and also needed many kernel code
paths to be updated to understand this new kind of memory.

Most kernel code paths never try to access the memory behind a page
but only care about struct page contents. Because of this, HMM switched to
directly using struct page for device memory which left most kernel code paths
unaware of the difference. We only need to make sure that no one ever tries to
map those pages from the CPU side.

HMM provides a set of helpers to register and hotplug device memory as a new
region needing a struct page. This is offered through a very simple API::

 struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
                                   struct device *device,
                                   unsigned long size);
 void hmm_devmem_remove(struct hmm_devmem *devmem);
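
As an illustrative sketch only (drv_devmem_ops, pdev and device_vram_size are
assumptions, and the error handling assumes the helper returns an ERR_PTR()
encoded error on failure), registration typically happens once at device
initialization time::

 struct hmm_devmem *devmem;

 devmem = hmm_devmem_add(&drv_devmem_ops, &pdev->dev, device_vram_size);
 if (IS_ERR(devmem))
     return PTR_ERR(devmem);
 /* the device memory is now backed by struct pages that can be handed to
  * the migration helpers described below */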

The hmm_devmem_ops structure is where most of the important things are::

 struct hmm_devmem_ops {
     void (*free)(struct hmm_devmem *devmem, struct page *page);
     int (*fault)(struct hmm_devmem *devmem,
                  struct vm_area_struct *vma,
                  unsigned long addr,
                  struct page *page,
                  unsigned flags,
                  pmd_t *pmdp);
 };

The first callback (free()) happens when the last reference on a device page is
dropped. This means the device page is now free and no longer used by anyone.
The second callback happens whenever the CPU tries to access a device page
which it cannot do. This second callback must trigger a migration back to
system memory.
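
For illustration, a minimal sketch of such an ops table follows; every drv_*
helper is hypothetical and the error convention assumed for fault() (0 on
success, VM_FAULT_SIGBUS on error) is an assumption, not something this
document guarantees::

 static void drv_devmem_free(struct hmm_devmem *devmem, struct page *page)
 {
     /* last reference dropped: hand the device page back to the driver's
      * device memory allocator so it can be reused */
     drv_free_device_page(drv_from_devmem(devmem), page);
 }

 static int drv_devmem_fault(struct hmm_devmem *devmem,
                             struct vm_area_struct *vma,
                             unsigned long addr,
                             struct page *page,
                             unsigned flags,
                             pmd_t *pmdp)
 {
     /*
      * The CPU cannot access page (it lives in device memory), so migrate
      * it back to system memory, typically with migrate_vma() (see the
      * next section) using the device DMA engine for the copy.
      */
     if (drv_migrate_to_ram(drv_from_devmem(devmem), vma, addr))
         return VM_FAULT_SIGBUS;
     return 0;
 }

 static const struct hmm_devmem_ops drv_devmem_ops = {
     .free  = drv_devmem_free,
     .fault = drv_devmem_fault,
 };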


Migration to and from device memory
===================================

Because the CPU cannot access device memory, migration must use the device DMA
engine to perform copies from and to device memory. For this we need a new
migration helper::

 int migrate_vma(const struct migrate_vma_ops *ops,
                 struct vm_area_struct *vma,
                 unsigned long mentries,
                 unsigned long start,
                 unsigned long end,
                 unsigned long *src,
                 unsigned long *dst,
                 void *private);

Unlike other migration functions, it works on a range of virtual addresses.
There are two reasons for that. First, device DMA copy has a high setup
overhead cost and thus batching multiple pages is needed, as otherwise the
migration overhead makes the whole exercise pointless. The second reason is
that the migration might be for a range of addresses the device is actively
accessing.

The migrate_vma_ops struct defines two callbacks. The first one
(alloc_and_copy()) controls destination memory allocation and the copy
operation. The second one is there to allow the device driver to perform
cleanup operations after migration::

 struct migrate_vma_ops {
     void (*alloc_and_copy)(struct vm_area_struct *vma,
                            const unsigned long *src,
                            unsigned long *dst,
                            unsigned long start,
                            unsigned long end,
                            void *private);
     void (*finalize_and_map)(struct vm_area_struct *vma,
                              const unsigned long *src,
                              const unsigned long *dst,
                              unsigned long start,
                              unsigned long end,
                              void *private);
 };

It is important to stress that these migration helpers allow for holes in the
virtual address range. Some pages in the range might not be migrated for all
the usual reasons (page is pinned, page is locked, ...). This helper does not
fail but just skips over those pages.

The alloc_and_copy() might decide not to migrate all pages in the range (for
reasons under the callback's control). For those, the callback just has to
leave the corresponding dst entry empty.
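
As a hedged sketch of what this looks like when migrating into device memory
(all drv_* helpers and struct drv_migrate are hypothetical; the MIGRATE_PFN_*
flags and migrate_pfn() encoding are assumed to be the ones used by the
migrate_vma() API of the same kernel generation), the callback might be::

 static void drv_alloc_and_copy(struct vm_area_struct *vma,
                                const unsigned long *src,
                                unsigned long *dst,
                                unsigned long start,
                                unsigned long end,
                                void *private)
 {
     struct drv_migrate *mig = private;
     unsigned long addr;
     unsigned long i;

     for (addr = start, i = 0; addr < end; addr += PAGE_SIZE, i++) {
         struct page *spage;
         struct page *dpage;

         /* core mm decided this page cannot be migrated: skip it */
         if (!(src[i] & MIGRATE_PFN_MIGRATE))
             continue;

         dpage = drv_alloc_device_page(mig->drvdev);
         if (!dpage)
             continue;    /* leaving dst[i] empty simply skips this page */

         /* queue a DMA copy of the source page into device memory; an
          * unbacked source (no struct page) is filled with zeroes */
         spage = migrate_pfn_to_page(src[i]);
         if (spage)
             drv_dma_copy_to_device(mig->drvdev, spage, dpage);
         else
             drv_clear_device_page(mig->drvdev, dpage);

         /* assumption: the new page is handed over locked */
         lock_page(dpage);
         dst[i] = migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_LOCKED;
     }

     /* wait for all queued DMA copies to complete before returning */
     drv_dma_wait(mig->drvdev);
 }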

Finally, the migration of the struct page might fail (for file backed pages)
for various reasons (failure to freeze the reference, or update the page
cache, ...). If that happens, then finalize_and_map() can catch any pages that
were not migrated. Note those pages were still copied to a new page and thus we
wasted bandwidth, but this is considered a rare event and a price that we are
willing to pay to keep all the code simpler.


Memory cgroup (memcg) and rss accounting
========================================

For now, device memory is accounted as any regular page in rss counters (either
anonymous if the device page is used for anonymous memory, file if the device
page is used for a file backed page, or shmem if the device page is used for
shared memory). This is a deliberate choice to keep existing applications, that
might start using device memory without knowing about it, running unimpacted.

A drawback is that the OOM killer might kill an application using a lot of
device memory and not a lot of regular system memory and thus not freeing much
system memory. We want to gather more real world experience on how applications
and systems react under memory pressure in the presence of device memory before
deciding to account device memory differently.


The same decision was made for the memory cgroup. Device memory pages are
accounted against the same memory cgroup that a regular page would be accounted
to. This does simplify migration to and from device memory. This also means
that migration back from device memory to regular memory cannot fail because it
would go above the memory cgroup limit. We might revisit this choice later on
once we get more experience in how device memory is used and its impact on
memory resource control.


Note that device memory can never be pinned by a device driver nor through GUP
and thus such memory is always freed upon process exit, or, in the case of
shared memory or file backed memory, when the last reference is dropped.