=====================
DRM Memory Management
=====================

Modern Linux systems require large amounts of graphics memory to store
frame buffers, textures, vertices and other graphics-related data. Given
the very dynamic nature of much of that data, managing graphics memory
efficiently is thus crucial for the graphics stack and plays a central
role in the DRM infrastructure.

The DRM core includes two memory managers, namely Translation Table Maps
(TTM) and Graphics Execution Manager (GEM). TTM was the first DRM memory
manager to be developed and tried to be a one-size-fits-them-all
solution. It provides a single userspace API to accommodate the needs of
all hardware, supporting both Unified Memory Architecture (UMA) devices
and devices with dedicated video RAM (i.e. most discrete video cards).
This resulted in a large, complex piece of code that turned out to be
hard to use for driver development.

GEM started as an Intel-sponsored project in reaction to TTM's
complexity. Its design philosophy is completely different: instead of
providing a solution to every graphics memory-related problem, GEM
identified common code between drivers and created a support library to
share it. GEM has simpler initialization and execution requirements than
TTM, but has no video RAM management capabilities and is thus limited to
UMA devices.

The Translation Table Manager (TTM)
===================================

TTM design background and information belongs here.

TTM initialization
------------------

   **Warning**
   This section is outdated.

Drivers wishing to support TTM must pass a filled :c:type:`ttm_bo_driver
<ttm_bo_driver>` structure to ttm_device_init(), together with an
initialized global reference to the memory manager. The ttm_bo_driver
structure contains several fields with function pointers for
initializing the TTM, allocating and freeing memory, waiting for command
completion and fence synchronization, and memory migration.

The :c:type:`struct drm_global_reference <drm_global_reference>` is made
up of several fields:

.. code-block:: c

    struct drm_global_reference {
            enum ttm_global_types global_type;
            size_t size;
            void *object;
            int (*init) (struct drm_global_reference *);
            void (*release) (struct drm_global_reference *);
    };

There should be one global reference structure for your memory manager
as a whole, and there will be others for each object created by the
memory manager at runtime. Your global TTM should have a type of
TTM_GLOBAL_TTM_MEM. The size field for the global object should be
sizeof(struct ttm_mem_global), and the init and release hooks should
point at your driver-specific init and release routines, which probably
eventually call ttm_mem_global_init and ttm_mem_global_release,
respectively.
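
As an illustration, here is a minimal sketch of such a setup, following
the legacy API described in this (outdated) section and modelled on the
radeon driver; the my_* names are hypothetical:

.. code-block:: c

    static int my_ttm_mem_global_init(struct drm_global_reference *ref)
    {
            /* Forward to the core TTM memory accounting initializer. */
            return ttm_mem_global_init(ref->object);
    }

    static void my_ttm_mem_global_release(struct drm_global_reference *ref)
    {
            ttm_mem_global_release(ref->object);
    }

    static struct drm_global_reference my_mem_global_ref = {
            .global_type = TTM_GLOBAL_TTM_MEM,
            .size = sizeof(struct ttm_mem_global),
            .init = &my_ttm_mem_global_init,
            .release = &my_ttm_mem_global_release,
    };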

Once your global TTM accounting structure is set up and initialized by
calling ttm_global_item_ref() on it, you need to create a buffer
object TTM to provide a pool for buffer object allocation by clients and
the kernel itself. The type of this object should be
TTM_GLOBAL_TTM_BO, and its size should be sizeof(struct
ttm_bo_global). Again, driver-specific init and release functions may
be provided, likely eventually calling ttm_bo_global_ref_init() and
ttm_bo_global_ref_release(), respectively. Also, like the previous
object, ttm_global_item_ref() is used to create an initial reference
count for the TTM, which will call your initialization function.

See the radeon_ttm.c file for an example of usage.

The Graphics Execution Manager (GEM)
====================================

The GEM design approach has resulted in a memory manager that doesn't
provide full coverage of all (or even all common) use cases in its
userspace or kernel API. GEM exposes a set of standard memory-related
operations to userspace and a set of helper functions to drivers, and
lets drivers implement hardware-specific operations with their own
private API.

The GEM userspace API is described in the `GEM - the Graphics Execution
Manager <http://lwn.net/Articles/283798/>`__ article on LWN. While
slightly outdated, the document provides a good overview of the GEM API
principles. Buffer allocation and read and write operations, described
as part of the common GEM API, are currently implemented using
driver-specific ioctls.

GEM is data-agnostic. It manages abstract buffer objects without knowing
what individual buffers contain. APIs that require knowledge of buffer
contents or purpose, such as buffer allocation or synchronization
primitives, are thus outside of the scope of GEM and must be implemented
using driver-specific ioctls.

On a fundamental level, GEM involves several operations:

- Memory allocation and freeing
- Command execution
- Aperture management at command execution time

Buffer object allocation is relatively straightforward and largely
provided by Linux's shmem layer, which provides memory to back each
object.

Device-specific operations, such as command execution, pinning, buffer
read & write, mapping, and domain ownership transfers are left to
driver-specific ioctls.

GEM Initialization
------------------

Drivers that use GEM must set the DRIVER_GEM bit in the
:c:type:`struct drm_driver <drm_driver>` driver_features
field. The DRM core will then automatically initialize the GEM core
before calling the load operation. Behind the scenes, this will create a
DRM Memory Manager object which provides an address space pool for
object allocation.

In a KMS configuration, drivers need to allocate and initialize a
command ring buffer following core GEM initialization if required by the
hardware. UMA devices usually have what is called a "stolen" memory
region, which provides space for the initial framebuffer and large,
contiguous memory regions required by the device. This space is
typically not managed by GEM, and must be initialized separately into
its own DRM MM object.

GEM Objects Creation
--------------------

GEM splits creation of GEM objects and allocation of the memory that
backs them into two distinct operations.

GEM objects are represented by an instance of
:c:type:`struct drm_gem_object <drm_gem_object>`. Drivers usually need to
extend GEM objects with private information and thus create a
driver-specific GEM object structure type that embeds an instance of
:c:type:`struct drm_gem_object <drm_gem_object>`.

To create a GEM object, a driver allocates memory for an instance of its
specific GEM object type and initializes the embedded
:c:type:`struct drm_gem_object <drm_gem_object>` with a call
to drm_gem_object_init(). The function takes a pointer
to the DRM device, a pointer to the GEM object and the buffer object
size in bytes.
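
For illustration, a driver-specific GEM object might be created as in
the following sketch; the my_* names are hypothetical:

.. code-block:: c

    struct my_gem_object {
            struct drm_gem_object base;
            /* driver-private bookkeeping would go here */
    };

    static struct my_gem_object *my_gem_create(struct drm_device *dev,
                                               size_t size)
    {
            struct my_gem_object *obj;
            int ret;

            obj = kzalloc(sizeof(*obj), GFP_KERNEL);
            if (!obj)
                    return ERR_PTR(-ENOMEM);

            /* Creates the shmfs backing file and initializes the object. */
            ret = drm_gem_object_init(dev, &obj->base, size);
            if (ret) {
                    kfree(obj);
                    return ERR_PTR(ret);
            }

            return obj;
    }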

GEM uses shmem to allocate anonymous pageable memory.
drm_gem_object_init() will create a shmfs file of the
requested size and store it into the
:c:type:`struct drm_gem_object <drm_gem_object>` filp field. The memory is
used as either main storage for the object when the graphics hardware
uses system memory directly or as a backing store otherwise.

Drivers are responsible for allocating the actual physical pages by
calling shmem_read_mapping_page_gfp() for each page.
Note that they can decide to allocate pages when initializing the GEM
object, or to delay allocation until the memory is needed (for instance
when a page fault occurs as a result of a userspace memory access or
when the driver needs to start a DMA transfer involving the memory).

Anonymous pageable memory allocation is not always desired, for instance
when the hardware requires physically contiguous system memory, as is
often the case in embedded devices. Drivers can create GEM objects with
no shmfs backing (called private GEM objects) by initializing them with a call
to drm_gem_private_object_init() instead of drm_gem_object_init(). Storage for
private GEM objects must be managed by drivers.
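
A hypothetical driver managing its own storage would then initialize the
object as sketched below, with obj being the driver structure from the
previous sketch:

.. code-block:: c

    /* No shmfs backing file is created; the driver provides storage. */
    drm_gem_private_object_init(dev, &obj->base, size);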

GEM Objects Lifetime
--------------------

All GEM objects are reference-counted by the GEM core. References can be
acquired and released by calling drm_gem_object_get() and drm_gem_object_put()
respectively.

When the last reference to a GEM object is released the GEM core calls
the :c:type:`struct drm_gem_object_funcs <gem_object_funcs>` free
operation. That operation is mandatory for GEM-enabled drivers and must
free the GEM object and all associated resources.

.. code-block:: c

    void (*free) (struct drm_gem_object *obj);

Drivers are responsible for freeing all GEM object resources. This
includes the resources created by the GEM core, which need to be
released with drm_gem_object_release().
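
A minimal sketch of such a free operation, reusing the hypothetical
my_gem_object type from the creation example:

.. code-block:: c

    static void my_gem_free(struct drm_gem_object *gem_obj)
    {
            struct my_gem_object *obj =
                    container_of(gem_obj, struct my_gem_object, base);

            /* Release the resources the GEM core created for the object
             * (shmfs backing file, mmap offset, ...). */
            drm_gem_object_release(gem_obj);
            kfree(obj);
    }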

GEM Objects Naming
------------------

Communication between userspace and the kernel refers to GEM objects
using local handles, global names or, more recently, file descriptors.
All of those are 32-bit integer values; the usual Linux kernel limits
apply to the file descriptors.

GEM handles are local to a DRM file. Applications get a handle to a GEM
object through a driver-specific ioctl, and can use that handle to refer
to the GEM object in other standard or driver-specific ioctls. Closing a
DRM file handle frees all its GEM handles and dereferences the
associated GEM objects.

To create a handle for a GEM object, drivers call drm_gem_handle_create(). The
function takes a pointer to the DRM file and the GEM object and returns a
locally unique handle. When the handle is no longer needed drivers delete it
with a call to drm_gem_handle_delete(). Finally, the GEM object associated with a
handle can be retrieved by a call to drm_gem_object_lookup().
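
For instance, an ioctl handler resolving a handle could look like the
following sketch (args and file_priv are hypothetical ioctl parameters);
note that the lookup returns a new reference that the caller must drop:

.. code-block:: c

    struct drm_gem_object *obj;

    obj = drm_gem_object_lookup(file_priv, args->handle);
    if (!obj)
            return -ENOENT;

    /* ... operate on the object ... */

    /* Drop the reference taken by drm_gem_object_lookup(). */
    drm_gem_object_put(obj);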

Handles don't take ownership of GEM objects, they only take a reference
to the object that will be dropped when the handle is destroyed. To
avoid leaking GEM objects, drivers must make sure they drop the
reference(s) they own (such as the initial reference taken at object
creation time) as appropriate, without any special consideration for the
handle. For example, in the particular case of combined GEM object and
handle creation in the implementation of the dumb_create operation,
drivers must drop the initial reference to the GEM object before
returning the handle.
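
For example, a dumb_create implementation based on the hypothetical
my_gem_create() helper from earlier would drop that initial reference as
follows:

.. code-block:: c

    struct my_gem_object *obj;
    int ret;

    obj = my_gem_create(dev, args->size);
    if (IS_ERR(obj))
            return PTR_ERR(obj);

    ret = drm_gem_handle_create(file_priv, &obj->base, &args->handle);
    /* Drop the initial reference; the handle holds its own. */
    drm_gem_object_put(&obj->base);
    return ret;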

GEM names are similar in purpose to handles but are not local to DRM
files. They can be passed between processes to reference a GEM object
globally. Names can't be used directly to refer to objects in the DRM
API; applications must convert handles to names and names to handles
using the DRM_IOCTL_GEM_FLINK and DRM_IOCTL_GEM_OPEN ioctls
respectively. The conversion is handled by the DRM core without any
driver-specific support.
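
In userspace the conversion is a plain ioctl on the DRM file descriptor;
a hedged sketch using libdrm's drmIoctl():

.. code-block:: c

    struct drm_gem_flink flink = { .handle = handle };

    /* Convert a local handle into a global name. */
    if (drmIoctl(fd, DRM_IOCTL_GEM_FLINK, &flink))
            return -errno;
    /* flink.name can be passed to another process, which converts it
     * back into a local handle with DRM_IOCTL_GEM_OPEN. */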

GEM also supports buffer sharing with dma-buf file descriptors through
PRIME. GEM-based drivers must use the provided helper functions to
implement the exporting and importing correctly. See
:ref:`prime_buffer_sharing`. Since sharing file descriptors is
inherently more secure than the easily guessable and global GEM names it
is the preferred buffer sharing mechanism. Sharing buffers through GEM
names is only supported for legacy userspace. Furthermore, PRIME also
allows cross-device buffer sharing since it is based on dma-bufs.

GEM Objects Mapping
-------------------

Because mapping operations are fairly heavyweight, GEM favours
read/write-like access to buffers, implemented through driver-specific
ioctls, over mapping buffers to userspace. However, when random access
to the buffer is needed (to perform software rendering for instance),
direct access to the object can be more efficient.

The mmap system call can't be used directly to map GEM objects, as they
don't have their own file handle. Two alternative methods currently
co-exist to map GEM objects to userspace. The first method uses a
driver-specific ioctl to perform the mapping operation, calling
do_mmap() under the hood. This is often considered
dubious, seems to be discouraged for new GEM-enabled drivers, and will
thus not be described here.

The second method uses the mmap system call on the DRM file handle:

.. code-block:: c

    void *mmap(void *addr, size_t length, int prot, int flags, int fd,
               off_t offset);

DRM identifies the GEM object to be mapped by a fake offset
passed through the mmap offset argument. Prior to being mapped, a GEM
object must thus be associated with a fake offset. To do so, drivers
must call drm_gem_create_mmap_offset() on the object.

Once allocated, the fake offset value must be passed to the application
in a driver-specific way and can then be used as the mmap offset
argument.
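
For illustration, a sketch of the kernel side in a hypothetical
driver-specific ioctl (args->offset is a hypothetical reply field),
followed by the matching userspace call:

.. code-block:: c

    /* Kernel side: allocate the fake offset and report it. */
    ret = drm_gem_create_mmap_offset(obj);
    if (ret)
            return ret;
    args->offset = drm_vma_node_offset_addr(&obj->vma_node);

.. code-block:: c

    /* Userspace side: use the reported value as the mmap offset. */
    void *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
                     drm_fd, offset);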

The GEM core provides a helper method drm_gem_mmap() to
handle object mapping. The method can be set directly as the mmap file
operation handler. It will look up the GEM object based on the offset
value and set the VMA operations to the :c:type:`struct drm_driver
<drm_driver>` gem_vm_ops field. Note that drm_gem_mmap() doesn't map memory to
userspace, but relies on the driver-provided fault handler to map pages
individually.

To use drm_gem_mmap(), drivers must fill the :c:type:`struct drm_driver
<drm_driver>` gem_vm_ops field with a pointer to VM operations.

The VM operations structure is a
:c:type:`struct vm_operations_struct <vm_operations_struct>`
made up of several fields, the more interesting ones being:

.. code-block:: c

    struct vm_operations_struct {
            void (*open)(struct vm_area_struct *area);
            void (*close)(struct vm_area_struct *area);
            vm_fault_t (*fault)(struct vm_fault *vmf);
    };

The open and close operations must update the GEM object reference
count. Drivers can use the drm_gem_vm_open() and drm_gem_vm_close() helper
functions directly as open and close handlers.

The fault operation handler is responsible for mapping individual pages
to userspace when a page fault occurs. Depending on the memory
allocation scheme, drivers can allocate pages at fault time, or can
decide to allocate memory for the GEM object at the time the object is
created.

Drivers that want to map the GEM object upfront instead of handling page
faults can implement their own mmap file operation handler.
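
A sketch of such a setup, reusing the GEM helpers as open and close
handlers; my_gem_fault and the pages array are hypothetical, assuming
pages were allocated at object creation time:

.. code-block:: c

    static vm_fault_t my_gem_fault(struct vm_fault *vmf)
    {
            struct vm_area_struct *vma = vmf->vma;
            /* drm_gem_mmap() stored the GEM object here. */
            struct drm_gem_object *gem_obj = vma->vm_private_data;
            struct my_gem_object *obj =
                    container_of(gem_obj, struct my_gem_object, base);
            pgoff_t page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;

            /* Insert the backing page for the faulting address. */
            return vmf_insert_page(vma, vmf->address,
                                   obj->pages[page_offset]);
    }

    static const struct vm_operations_struct my_gem_vm_ops = {
            .open = drm_gem_vm_open,
            .close = drm_gem_vm_close,
            .fault = my_gem_fault,
    };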

For platforms without MMU the GEM core provides a helper method
drm_gem_cma_get_unmapped_area(). The mmap() routines will call this to get a
proposed address for the mapping.

To use drm_gem_cma_get_unmapped_area(), drivers must fill the
:c:type:`struct file_operations <file_operations>` get_unmapped_area field with
a pointer to drm_gem_cma_get_unmapped_area().

More detailed information about get_unmapped_area can be found in
Documentation/admin-guide/mm/nommu-mmap.rst.
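
For instance (a sketch; my_driver_fops is hypothetical):

.. code-block:: c

    static const struct file_operations my_driver_fops = {
            .owner = THIS_MODULE,
            .open = drm_open,
            .release = drm_release,
            .unlocked_ioctl = drm_ioctl,
            .mmap = drm_gem_mmap,
    #ifndef CONFIG_MMU
            .get_unmapped_area = drm_gem_cma_get_unmapped_area,
    #endif
    };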

Memory Coherency
----------------

When mapped to the device or used in a command buffer, backing pages for
an object are flushed to memory and marked write combined so as to be
coherent with the GPU. Likewise, if the CPU accesses an object after the
GPU has finished rendering to the object, then the object must be made
coherent with the CPU's view of memory, usually involving GPU cache
flushing of various kinds. This core CPU<->GPU coherency management is
provided by a device-specific ioctl, which evaluates an object's current
domain and performs any necessary flushing or synchronization to put the
object into the desired coherency domain (note that the object may be
busy, i.e. an active render target; in that case, setting the domain
blocks the client and waits for rendering to complete before performing
any necessary flushing operations).
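
As a concrete example, the i915 driver exposes this through its
set_domain ioctl; a userspace sketch moving an object into the CPU
domain:

.. code-block:: c

    struct drm_i915_gem_set_domain sd = {
            .handle = handle,
            .read_domains = I915_GEM_DOMAIN_CPU,
            .write_domain = I915_GEM_DOMAIN_CPU,
    };

    /* Blocks while the object is busy, then performs the cache
     * flushing needed to make it coherent with the CPU. */
    if (drmIoctl(fd, DRM_IOCTL_I915_GEM_SET_DOMAIN, &sd))
            return -errno;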

Command Execution
-----------------

Perhaps the most important GEM function for GPU devices is providing a
command execution interface to clients. Client programs construct
command buffers containing references to previously allocated memory
objects, and then submit them to GEM. At that point, GEM takes care to
bind all the objects into the GTT, execute the buffer, and provide
necessary synchronization between clients accessing the same buffers.
This often involves evicting some objects from the GTT and re-binding
others (a fairly expensive operation), and providing relocation support
which hides fixed GTT offsets from clients. Clients must take care not
to submit command buffers that reference more objects than can fit in
the GTT; otherwise, GEM will reject them and no rendering will occur.
Similarly, if several objects in the buffer require fence registers to
be allocated for correct rendering (e.g. 2D blits on pre-965 chips),
care must be taken not to require more fence registers than are
available to the client. Such resource management should be abstracted
from the client in libdrm.

GEM Function Reference
----------------------

.. kernel-doc:: include/drm/drm_gem.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem.c
   :export:

GEM CMA Helper Functions Reference
----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c
   :doc: cma helpers

.. kernel-doc:: include/drm/drm_gem_cma_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c
   :export:

GEM SHMEM Helper Function Reference
-----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
   :doc: overview

.. kernel-doc:: include/drm/drm_gem_shmem_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
   :export:

GEM VRAM Helper Functions Reference
-----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_vram_helper.c
   :doc: overview

.. kernel-doc:: include/drm/drm_gem_vram_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_vram_helper.c
   :export:

GEM TTM Helper Functions Reference
----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_ttm_helper.c
   :doc: overview

.. kernel-doc:: drivers/gpu/drm/drm_gem_ttm_helper.c
   :export:

VMA Offset Manager
==================

.. kernel-doc:: drivers/gpu/drm/drm_vma_manager.c
   :doc: vma offset manager

.. kernel-doc:: include/drm/drm_vma_manager.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_vma_manager.c
   :export:

.. _prime_buffer_sharing:

PRIME Buffer Sharing
====================

PRIME is the cross-device buffer sharing framework in DRM, originally
created for the OPTIMUS range of multi-GPU platforms. To userspace,
PRIME buffers are dma-buf based file descriptors.

Overview and Lifetime Rules
---------------------------

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :doc: overview and lifetime rules

PRIME Helper Functions
----------------------

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :doc: PRIME Helpers

PRIME Function References
-------------------------

.. kernel-doc:: include/drm/drm_prime.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :export:

DRM MM Range Allocator
======================

Overview
--------

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :doc: Overview

LRU Scan/Eviction Support
-------------------------

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :doc: lru scan roster

DRM MM Range Allocator Function References
------------------------------------------

.. kernel-doc:: include/drm/drm_mm.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :export:

DRM Cache Handling and Fast WC memcpy()
=======================================

.. kernel-doc:: drivers/gpu/drm/drm_cache.c
   :export:

DRM Sync Objects
================

.. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
   :doc: Overview

.. kernel-doc:: include/drm/drm_syncobj.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
   :export:

GPU Scheduler
=============

Overview
--------

.. kernel-doc:: drivers/gpu/drm/scheduler/sched_main.c
   :doc: Overview

Scheduler Function References
-----------------------------

.. kernel-doc:: include/drm/gpu_scheduler.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/scheduler/sched_main.c
   :export: