=====================
DRM Memory Management
=====================

Modern Linux systems require large amounts of graphics memory to store
frame buffers, textures, vertices and other graphics-related data. Given
the very dynamic nature of much of that data, managing graphics memory
efficiently is crucial for the graphics stack and plays a central role
in the DRM infrastructure.

The DRM core includes two memory managers, namely Translation Table Maps
(TTM) and Graphics Execution Manager (GEM). TTM was the first DRM memory
manager to be developed and tried to be a one-size-fits-them-all
solution. It provides a single userspace API to accommodate the needs of
all hardware, supporting both Unified Memory Architecture (UMA) devices
and devices with dedicated video RAM (i.e. most discrete video cards).
This resulted in a large, complex piece of code that turned out to be
hard to use for driver development.

GEM started as an Intel-sponsored project in reaction to TTM's
complexity. Its design philosophy is completely different: instead of
providing a solution to every graphics memory-related problem, GEM
identified common code between drivers and created a support library to
share it. GEM has simpler initialization and execution requirements than
TTM, but has no video RAM management capabilities and is thus limited to
UMA devices.

The Translation Table Manager (TTM)
===================================

TTM design background and information belongs here.

TTM initialization
------------------

**Warning**
This section is outdated.

Drivers wishing to support TTM must pass a filled :c:type:`ttm_bo_driver
<ttm_bo_driver>` structure to ttm_bo_device_init, together with an
initialized global reference to the memory manager. The ttm_bo_driver
structure contains several fields with function pointers for
initializing the TTM, allocating and freeing memory, waiting for command
completion and fence synchronization, and memory migration.

The :c:type:`struct drm_global_reference <drm_global_reference>` is made
up of several fields:

.. code-block:: c

   struct drm_global_reference {
           enum ttm_global_types global_type;
           size_t size;
           void *object;
           int (*init) (struct drm_global_reference *);
           void (*release) (struct drm_global_reference *);
   };

There should be one global reference structure for your memory manager
as a whole, and there will be others for each object created by the
memory manager at runtime. Your global TTM should have a type of
TTM_GLOBAL_TTM_MEM. The size field for the global object should be
sizeof(struct ttm_mem_global), and the init and release hooks should
point at your driver-specific init and release routines, which probably
eventually call ttm_mem_global_init and ttm_mem_global_release,
respectively.

Once your global TTM accounting structure is set up and initialized by
calling ttm_global_item_ref() on it, you need to create a buffer object
TTM to provide a pool for buffer object allocation by clients and the
kernel itself. The type of this object should be TTM_GLOBAL_TTM_BO, and
its size should be sizeof(struct ttm_bo_global). Again, driver-specific
init and release functions may be provided, likely eventually calling
ttm_bo_global_ref_init() and ttm_bo_global_ref_release(), respectively.
Also, like the previous object, ttm_global_item_ref() is used to create
an initial reference count for the TTM, which will call your
initialization function.
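The reference pattern above can be sketched in plain, self-contained C. The struct and the two `ttm_global_item_ref()`/`ttm_global_item_unref()` models below are simplified stand-ins for the real TTM code, kept only to show how the init hook runs on the first reference and the release hook on the last:

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for the kernel structure; the real one lives in the DRM headers. */
struct drm_global_reference {
        size_t size;
        void *object;
        int (*init)(struct drm_global_reference *);
        void (*release)(struct drm_global_reference *);
        int refcount;           /* maintained by the stubbed ref/unref below */
};

static int init_calls;          /* counts how often the init hook ran */

/* Driver-specific init hook; a real driver would eventually call
 * ttm_mem_global_init() from here. */
static int my_init(struct drm_global_reference *ref)
{
        ref->object = calloc(1, ref->size);
        init_calls++;
        return ref->object ? 0 : -1;
}

/* Driver-specific release hook; the real counterpart would call
 * ttm_mem_global_release(). */
static void my_release(struct drm_global_reference *ref)
{
        free(ref->object);
        ref->object = NULL;
}

/* Model of ttm_global_item_ref(): the init hook runs only for the first
 * reference, later users just bump the count. */
static int ttm_global_item_ref(struct drm_global_reference *ref)
{
        return ref->refcount++ == 0 ? ref->init(ref) : 0;
}

/* Model of the matching unref: the release hook runs on the last drop. */
static void ttm_global_item_unref(struct drm_global_reference *ref)
{
        if (--ref->refcount == 0)
                ref->release(ref);
}
```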

See the radeon_ttm.c file for an example of usage.

The Graphics Execution Manager (GEM)
====================================

The GEM design approach has resulted in a memory manager that doesn't
provide full coverage of all (or even all common) use cases in its
userspace or kernel API. GEM exposes a set of standard memory-related
operations to userspace and a set of helper functions to drivers, and
lets drivers implement hardware-specific operations with their own
private API.

The GEM userspace API is described in the `GEM - the Graphics Execution
Manager <http://lwn.net/Articles/283798/>`__ article on LWN. While
slightly outdated, the document provides a good overview of the GEM API
principles. Buffer allocation and read and write operations, described
as part of the common GEM API, are currently implemented using
driver-specific ioctls.

GEM is data-agnostic. It manages abstract buffer objects without knowing
what individual buffers contain. APIs that require knowledge of buffer
contents or purpose, such as buffer allocation or synchronization
primitives, are thus outside of the scope of GEM and must be implemented
using driver-specific ioctls.

On a fundamental level, GEM involves several operations:

- Memory allocation and freeing
- Command execution
- Aperture management at command execution time

Buffer object allocation is relatively straightforward and largely
provided by Linux's shmem layer, which provides memory to back each
object.

Device-specific operations, such as command execution, pinning, buffer
read & write, mapping, and domain ownership transfers are left to
driver-specific ioctls.

GEM Initialization
------------------

Drivers that use GEM must set the DRIVER_GEM bit in the
:c:type:`struct drm_driver <drm_driver>` driver_features field. The DRM
core will then automatically initialize the GEM core before calling the
load operation. Behind the scenes, this will create a DRM Memory Manager
object which provides an address space pool for object allocation.
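In driver code this amounts to setting a flag in the driver structure. The definitions below are simplified stand-ins (the real DRIVER_GEM constant and struct drm_driver come from the DRM headers); they sketch only the feature-bit pattern:

```c
#include <assert.h>

/* Simplified stand-ins for the kernel definitions. */
#define DRIVER_GEM     (1u << 0)
#define DRIVER_MODESET (1u << 1)

struct drm_driver {
        unsigned int driver_features;
};

/* A GEM-enabled KMS driver advertises both capabilities up front; the
 * DRM core checks the GEM bit before initializing the GEM core. */
static const struct drm_driver foo_driver = {
        .driver_features = DRIVER_GEM | DRIVER_MODESET,
};
```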

In a KMS configuration, drivers need to allocate and initialize a
command ring buffer following core GEM initialization if required by the
hardware. UMA devices usually have what is called a "stolen" memory
region, which provides space for the initial framebuffer and large,
contiguous memory regions required by the device. This space is
typically not managed by GEM, and must be initialized separately into
its own DRM MM object.

GEM Objects Creation
--------------------

GEM splits creation of GEM objects and allocation of the memory that
backs them into two distinct operations.

GEM objects are represented by an instance of
:c:type:`struct drm_gem_object <drm_gem_object>`. Drivers usually need
to extend GEM objects with private information and thus create a
driver-specific GEM object structure type that embeds an instance of
:c:type:`struct drm_gem_object <drm_gem_object>`.

To create a GEM object, a driver allocates memory for an instance of its
specific GEM object type and initializes the embedded
:c:type:`struct drm_gem_object <drm_gem_object>` with a call to
:c:func:`drm_gem_object_init()`. The function takes a pointer to the DRM
device, a pointer to the GEM object and the buffer object size in bytes.
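The embed-and-initialize pattern can be illustrated with self-contained stubs. The drm_gem_object and drm_gem_object_init() definitions below are simplified stand-ins for the real ones; the point is the embedded base object and the container_of-style recovery of the driver structure:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel types. */
struct drm_device { int id; };

struct drm_gem_object {
        struct drm_device *dev;
        size_t size;
};

/* Stand-in for drm_gem_object_init(); the real helper also creates the
 * shmfs backing file. */
static int drm_gem_object_init(struct drm_device *dev,
                               struct drm_gem_object *obj, size_t size)
{
        obj->dev = dev;
        obj->size = size;
        return 0;
}

/* Driver-specific GEM object embedding the base object. */
struct foo_gem_object {
        struct drm_gem_object base;
        void *vaddr;            /* driver-private state */
};

/* Recover the driver object from a pointer to the embedded base, as the
 * kernel's container_of() does. */
static struct foo_gem_object *to_foo_obj(struct drm_gem_object *obj)
{
        return (struct foo_gem_object *)
                ((char *)obj - offsetof(struct foo_gem_object, base));
}
```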

GEM uses shmem to allocate anonymous pageable memory.
:c:func:`drm_gem_object_init()` will create an shmfs file of the
requested size and store it in the
:c:type:`struct drm_gem_object <drm_gem_object>` filp field. The memory
is used as either main storage for the object when the graphics hardware
uses system memory directly or as a backing store otherwise.

Drivers are responsible for the actual allocation of physical pages, by
calling :c:func:`shmem_read_mapping_page_gfp()` for each page. Note that
they can decide to allocate pages when initializing the GEM object, or
to delay allocation until the memory is needed (for instance when a page
fault occurs as a result of a userspace memory access or when the driver
needs to start a DMA transfer involving the memory).

Anonymous pageable memory allocation is not always desired, for instance
when the hardware requires physically contiguous system memory as is
often the case in embedded devices. Drivers can create GEM objects with
no shmfs backing (called private GEM objects) by initializing them with
a call to :c:func:`drm_gem_private_object_init()` instead of
:c:func:`drm_gem_object_init()`. Storage for private GEM objects must be
managed by drivers.

GEM Objects Lifetime
--------------------

All GEM objects are reference-counted by the GEM core. References can be
acquired and released by calling :c:func:`drm_gem_object_get()` and
:c:func:`drm_gem_object_put()` respectively. The caller must hold the
:c:type:`struct drm_device <drm_device>` struct_mutex lock when calling
:c:func:`drm_gem_object_get()`. As a convenience, GEM provides a
:c:func:`drm_gem_object_put_unlocked()` function that can be called
without holding the lock.
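The get/put discipline can be modelled in a few lines of plain C. This is a sketch, not the kernel implementation (which uses a struct kref inside struct drm_gem_object); it only shows that the free operation fires when the last reference is dropped:

```c
#include <assert.h>

/* Stand-in GEM object carrying an explicit reference count. */
struct gem_object {
        int refcount;
        int freed;              /* set by the free operation below */
};

/* Models the driver's mandatory free operation. */
static void foo_free_object(struct gem_object *obj)
{
        obj->freed = 1;         /* release all associated resources here */
}

/* Models drm_gem_object_get(): take an additional reference. */
static void gem_object_get(struct gem_object *obj)
{
        obj->refcount++;
}

/* Models drm_gem_object_put(): the free operation runs only when the
 * last reference goes away. */
static void gem_object_put(struct gem_object *obj)
{
        if (--obj->refcount == 0)
                foo_free_object(obj);
}
```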

When the last reference to a GEM object is released the GEM core calls
the :c:type:`struct drm_driver <drm_driver>` gem_free_object_unlocked
operation. That operation is mandatory for GEM-enabled drivers and must
free the GEM object and all associated resources.

Drivers are responsible for freeing all GEM object resources. This
includes the resources created by the GEM core, which need to be
released with :c:func:`drm_gem_object_release()`.

GEM Objects Naming
------------------

Communication between userspace and the kernel refers to GEM objects
using local handles, global names or, more recently, file descriptors.
All of those are 32-bit integer values; the usual Linux kernel limits
apply to the file descriptors.

GEM handles are local to a DRM file. Applications get a handle to a GEM
object through a driver-specific ioctl, and can use that handle to refer
to the GEM object in other standard or driver-specific ioctls. Closing a
DRM file handle frees all its GEM handles and dereferences the
associated GEM objects.

To create a handle for a GEM object drivers call
:c:func:`drm_gem_handle_create()`. The function takes a pointer to the
DRM file and the GEM object and returns a locally unique handle. When
the handle is no longer needed drivers delete it with a call to
:c:func:`drm_gem_handle_delete()`. Finally the GEM object associated
with a handle can be retrieved by a call to
:c:func:`drm_gem_object_lookup()`.
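A handle table is essentially a per-file map from small integers to objects. The sketch below models drm_gem_handle_create(), drm_gem_object_lookup() and drm_gem_handle_delete() with a fixed-size array; the real kernel code uses an idr instead, and each handle also holds a reference on its object:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_HANDLES 16

/* Per-DRM-file handle table; handle 0 is reserved as invalid, matching
 * the real API. */
struct drm_file {
        void *objects[MAX_HANDLES];     /* index == handle */
};

/* Models drm_gem_handle_create(): returns a locally unique handle, or 0
 * when the table is full. The real code also takes a reference. */
static unsigned int gem_handle_create(struct drm_file *file, void *obj)
{
        for (unsigned int h = 1; h < MAX_HANDLES; h++) {
                if (!file->objects[h]) {
                        file->objects[h] = obj;
                        return h;
                }
        }
        return 0;
}

/* Models drm_gem_object_lookup(). */
static void *gem_object_lookup(struct drm_file *file, unsigned int handle)
{
        return handle < MAX_HANDLES ? file->objects[handle] : NULL;
}

/* Models drm_gem_handle_delete(); the real code also drops the handle's
 * reference on the object. */
static void gem_handle_delete(struct drm_file *file, unsigned int handle)
{
        if (handle < MAX_HANDLES)
                file->objects[handle] = NULL;
}
```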

Handles don't take ownership of GEM objects; they only take a reference
to the object that will be dropped when the handle is destroyed. To
avoid leaking GEM objects, drivers must make sure they drop the
reference(s) they own (such as the initial reference taken at object
creation time) as appropriate, without any special consideration for the
handle. For example, in the particular case of combined GEM object and
handle creation in the implementation of the dumb_create operation,
drivers must drop the initial reference to the GEM object before
returning the handle.
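The dumb_create case is easy to get wrong, so here it is as arithmetic on a stand-in reference count (a sketch, not kernel code): creation leaves one reference, handle creation adds its own, and the driver must drop the initial one before returning, so the handle ends up as the sole owner:

```c
#include <assert.h>

/* Stand-in object with an explicit reference count. */
struct gem_object {
        int refcount;
        int freed;
};

static void gem_object_put(struct gem_object *obj)
{
        if (--obj->refcount == 0)
                obj->freed = 1;         /* free operation would run here */
}

/* Models the combined object-and-handle creation path of dumb_create. */
static void dumb_create(struct gem_object *obj)
{
        obj->refcount = 1;      /* initial reference from object creation */
        obj->refcount++;        /* handle creation takes its own reference */
        gem_object_put(obj);    /* drop the initial reference: the handle
                                 * is now the only owner */
}
```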

GEM names are similar in purpose to handles but are not local to DRM
files. They can be passed between processes to reference a GEM object
globally. Names can't be used directly to refer to objects in the DRM
API; applications must convert handles to names and names to handles
using the DRM_IOCTL_GEM_FLINK and DRM_IOCTL_GEM_OPEN ioctls
respectively. The conversion is handled by the DRM core without any
driver-specific support.

GEM also supports buffer sharing with dma-buf file descriptors through
PRIME. GEM-based drivers must use the provided helper functions to
implement the exporting and importing correctly. See
:ref:`prime_buffer_sharing`. Since sharing file descriptors is
inherently more secure than the easily guessable and global GEM names it
is the preferred buffer sharing mechanism. Sharing buffers through GEM
names is only supported for legacy userspace. Furthermore PRIME also
allows cross-device buffer sharing since it is based on dma-bufs.

GEM Objects Mapping
-------------------

Because mapping operations are fairly heavyweight GEM favours
read/write-like access to buffers, implemented through driver-specific
ioctls, over mapping buffers to userspace. However, when random access
to the buffer is needed (to perform software rendering for instance),
direct access to the object can be more efficient.

The mmap system call can't be used directly to map GEM objects, as they
don't have their own file handle. Two alternative methods currently
co-exist to map GEM objects to userspace. The first method uses a
driver-specific ioctl to perform the mapping operation, calling
:c:func:`do_mmap()` under the hood. This is often considered dubious, is
discouraged for new GEM-enabled drivers, and will thus not be described
here.

The second method uses the mmap system call on the DRM file handle.

.. code-block:: c

   void *mmap(void *addr, size_t length, int prot, int flags, int fd,
              off_t offset);

DRM identifies the GEM object to be mapped by a fake offset passed
through the mmap offset argument. Prior to being mapped, a GEM object
must thus be associated with a fake offset. To do so, drivers must call
:c:func:`drm_gem_create_mmap_offset()` on the object.

Once allocated, the fake offset value must be passed to the application
in a driver-specific way and can then be used as the mmap offset
argument.
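The fake offset is not a position inside the buffer; it is a key the DRM core uses to look up the object. The self-contained demo below is not DRM code at all: it uses an ordinary temporary file to show the underlying mmap() offset mechanics, i.e. how a page-aligned offset selects which region of a file descriptor gets mapped:

```c
#include <assert.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Maps one page of a scratch file at offset == one page and returns the
 * first byte seen through the mapping. */
static char first_byte_at_offset(void)
{
        long page = sysconf(_SC_PAGESIZE);
        FILE *f = tmpfile();
        char byte = 0;
        char *map;

        /* Page 0 starts with 'A', page 1 with 'B'. */
        fputc('A', f);
        fseek(f, page, SEEK_SET);
        fputc('B', f);
        fseek(f, 2 * page - 1, SEEK_SET);
        fputc(0, f);            /* extend the file to two full pages */
        fflush(f);

        /* The offset argument picks the second page, just as a GEM fake
         * offset picks the object on the DRM file handle. */
        map = mmap(NULL, (size_t)page, PROT_READ, MAP_SHARED,
                   fileno(f), page);
        assert(map != MAP_FAILED);
        byte = map[0];
        munmap(map, (size_t)page);
        fclose(f);
        return byte;
}
```

On a real driver the file descriptor is the opened DRM device node and the offset is the fake value the driver obtained from :c:func:`drm_gem_create_mmap_offset()`.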

The GEM core provides a helper method :c:func:`drm_gem_mmap()` to handle
object mapping. The method can be set directly as the mmap file
operation handler. It will look up the GEM object based on the offset
value and set the VMA operations to the
:c:type:`struct drm_driver <drm_driver>` gem_vm_ops field. Note that
:c:func:`drm_gem_mmap()` doesn't map memory to userspace, but relies on
the driver-provided fault handler to map pages individually.

To use :c:func:`drm_gem_mmap()`, drivers must fill the
:c:type:`struct drm_driver <drm_driver>` gem_vm_ops field with a pointer
to VM operations.

The VM operations are described by a
:c:type:`struct vm_operations_struct <vm_operations_struct>` made up of
several fields, the more interesting ones being:

.. code-block:: c

   struct vm_operations_struct {
           void (*open)(struct vm_area_struct * area);
           void (*close)(struct vm_area_struct * area);
           vm_fault_t (*fault)(struct vm_fault *vmf);
   };


The open and close operations must update the GEM object reference
count. Drivers can use the :c:func:`drm_gem_vm_open()` and
:c:func:`drm_gem_vm_close()` helper functions directly as open and close
handlers.

The fault operation handler is responsible for mapping individual pages
to userspace when a page fault occurs. Depending on the memory
allocation scheme, drivers can allocate pages at fault time, or can
decide to allocate memory for the GEM object at the time the object is
created.

Drivers that want to map the GEM object upfront instead of handling page
faults can implement their own mmap file operation handler.

For platforms without MMU the GEM core provides a helper method
:c:func:`drm_gem_cma_get_unmapped_area`. The mmap() routines will call
this to get a proposed address for the mapping.

To use :c:func:`drm_gem_cma_get_unmapped_area`, drivers must fill the
:c:type:`struct file_operations <file_operations>` get_unmapped_area
field with a pointer to :c:func:`drm_gem_cma_get_unmapped_area`.

More detailed information about get_unmapped_area can be found in
Documentation/nommu-mmap.txt.

Memory Coherency
----------------

When mapped to the device or used in a command buffer, backing pages for
an object are flushed to memory and marked write combined so as to be
coherent with the GPU. Likewise, if the CPU accesses an object after the
GPU has finished rendering to the object, then the object must be made
coherent with the CPU's view of memory, usually involving GPU cache
flushing of various kinds. This core CPU<->GPU coherency management is
provided by a device-specific ioctl, which evaluates an object's current
domain and performs any necessary flushing or synchronization to put the
object into the desired coherency domain (note that the object may be
busy, i.e. an active render target; in that case, setting the domain
blocks the client and waits for rendering to complete before performing
any necessary flushing operations).

Command Execution
-----------------

Perhaps the most important GEM function for GPU devices is providing a
command execution interface to clients. Client programs construct
command buffers containing references to previously allocated memory
objects, and then submit them to GEM. At that point, GEM takes care to
bind all the objects into the GTT, execute the buffer, and provide
necessary synchronization between clients accessing the same buffers.
This often involves evicting some objects from the GTT and re-binding
others (a fairly expensive operation), and providing relocation support
which hides fixed GTT offsets from clients. Clients must take care not
to submit command buffers that reference more objects than can fit in
the GTT; otherwise, GEM will reject them and no rendering will occur.
Similarly, if several objects in the buffer require fence registers to
be allocated for correct rendering (e.g. 2D blits on pre-965 chips),
care must be taken not to require more fence registers than are
available to the client. Such resource management should be abstracted
from the client in libdrm.

GEM Function Reference
----------------------

.. kernel-doc:: include/drm/drm_gem.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem.c
   :export:

GEM CMA Helper Functions Reference
----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c
   :doc: cma helpers

.. kernel-doc:: include/drm/drm_gem_cma_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c
   :export:

VMA Offset Manager
==================

.. kernel-doc:: drivers/gpu/drm/drm_vma_manager.c
   :doc: vma offset manager

.. kernel-doc:: include/drm/drm_vma_manager.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_vma_manager.c
   :export:

.. _prime_buffer_sharing:

PRIME Buffer Sharing
====================

PRIME is the cross-device buffer sharing framework in drm, originally
created for the OPTIMUS range of multi-gpu platforms. To userspace,
PRIME buffers are dma-buf based file descriptors.

Overview and Driver Interface
-----------------------------

Similar to GEM global names, PRIME file descriptors are also used to
share buffer objects across processes. They offer additional security:
as file descriptors must be explicitly sent over UNIX domain sockets to
be shared between applications, they can't be guessed like the globally
unique GEM names.

Drivers that support the PRIME API must set the DRIVER_PRIME bit in the
:c:type:`struct drm_driver <drm_driver>` driver_features field, and
implement the prime_handle_to_fd and prime_fd_to_handle operations.

.. code-block:: c

   int (*prime_handle_to_fd)(struct drm_device *dev,
                             struct drm_file *file_priv, uint32_t handle,
                             uint32_t flags, int *prime_fd);
   int (*prime_fd_to_handle)(struct drm_device *dev,
                             struct drm_file *file_priv, int prime_fd,
                             uint32_t *handle);

Those two operations convert a handle to a PRIME file descriptor and
vice versa. Drivers must use the kernel dma-buf buffer sharing framework
to manage the PRIME file descriptors. Similar to the mode setting API,
PRIME is agnostic to the underlying buffer object manager, as long as
handles are 32bit unsigned integers.

While non-GEM drivers must implement the operations themselves, GEM
drivers must use the :c:func:`drm_gem_prime_handle_to_fd()` and
:c:func:`drm_gem_prime_fd_to_handle()` helper functions. Those helpers
rely on the driver gem_prime_export and gem_prime_import operations to
create a dma-buf instance from a GEM object (dma-buf exporter role) and
to create a GEM object from a dma-buf instance (dma-buf importer role).

.. code-block:: c

   struct dma_buf * (*gem_prime_export)(struct drm_device *dev,
                                        struct drm_gem_object *obj,
                                        int flags);
   struct drm_gem_object * (*gem_prime_import)(struct drm_device *dev,
                                               struct dma_buf *dma_buf);

These two operations are mandatory for GEM drivers that support PRIME.

PRIME Helper Functions
----------------------

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :doc: PRIME Helpers

PRIME Function References
-------------------------

.. kernel-doc:: include/drm/drm_prime.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :export:

DRM MM Range Allocator
======================

Overview
--------

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :doc: Overview

LRU Scan/Eviction Support
-------------------------

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :doc: lru scan roster

DRM MM Range Allocator Function References
------------------------------------------

.. kernel-doc:: include/drm/drm_mm.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :export:

DRM Cache Handling
==================

.. kernel-doc:: drivers/gpu/drm/drm_cache.c
   :export:

DRM Sync Objects
================

.. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
   :doc: Overview

.. kernel-doc:: include/drm/drm_syncobj.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
   :export:

GPU Scheduler
=============

Overview
--------

.. kernel-doc:: drivers/gpu/drm/scheduler/sched_main.c
   :doc: Overview

Scheduler Function References
-----------------------------

.. kernel-doc:: include/drm/gpu_scheduler.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/scheduler/sched_main.c
   :export: