=====================
DRM Memory Management
=====================

Modern Linux systems require large amounts of graphics memory to store
frame buffers, textures, vertices and other graphics-related data. Given
the very dynamic nature of much of that data, managing graphics memory
efficiently is crucial for the graphics stack and plays a central
role in the DRM infrastructure.

The DRM core includes two memory managers, namely Translation Table Maps
(TTM) and Graphics Execution Manager (GEM). TTM was the first DRM memory
manager to be developed and tried to be a one-size-fits-them-all
solution. It provides a single userspace API to accommodate the needs of
all hardware, supporting both Unified Memory Architecture (UMA) devices
and devices with dedicated video RAM (i.e. most discrete video cards).
This resulted in a large, complex piece of code that turned out to be
hard to use for driver development.

GEM started as an Intel-sponsored project in reaction to TTM's
complexity. Its design philosophy is completely different: instead of
providing a solution to every graphics memory-related problem, GEM
identified common code between drivers and created a support library to
share it. GEM has simpler initialization and execution requirements than
TTM, but has no video RAM management capabilities and is thus limited to
UMA devices.

The Translation Table Manager (TTM)
===================================

TTM design background and information belongs here.

TTM initialization
------------------

**Warning**
This section is outdated.

Drivers wishing to support TTM must pass a filled :c:type:`ttm_bo_driver
<ttm_bo_driver>` structure to ttm_bo_device_init, together with an
initialized global reference to the memory manager. The ttm_bo_driver
structure contains several fields with function pointers for
initializing the TTM, allocating and freeing memory, waiting for command
completion and fence synchronization, and memory migration.

The :c:type:`struct drm_global_reference <drm_global_reference>` is made
up of several fields:

.. code-block:: c

    struct drm_global_reference {
        enum ttm_global_types global_type;
        size_t size;
        void *object;
        int (*init) (struct drm_global_reference *);
        void (*release) (struct drm_global_reference *);
    };

There should be one global reference structure for your memory manager
as a whole, and there will be others for each object created by the
memory manager at runtime. Your global TTM should have a type of
TTM_GLOBAL_TTM_MEM. The size field for the global object should be
sizeof(struct ttm_mem_global), and the init and release hooks should
point at your driver-specific init and release routines, which probably
eventually call ttm_mem_global_init and ttm_mem_global_release,
respectively.

Once your global TTM accounting structure is set up and initialized by
calling ttm_global_item_ref() on it, you need to create a buffer
object TTM to provide a pool for buffer object allocation by clients and
the kernel itself. The type of this object should be
TTM_GLOBAL_TTM_BO, and its size should be sizeof(struct
ttm_bo_global). Again, driver-specific init and release functions may
be provided, likely eventually calling ttm_bo_global_ref_init() and
ttm_bo_global_ref_release(), respectively. Also, like the previous
object, ttm_global_item_ref() is used to create an initial reference
count for the TTM, which will call your initialization function.

See the radeon_ttm.c file for an example of usage.

The Graphics Execution Manager (GEM)
====================================

The GEM design approach has resulted in a memory manager that doesn't
provide full coverage of all (or even all common) use cases in its
userspace or kernel API. GEM exposes a set of standard memory-related
operations to userspace and a set of helper functions to drivers, and
lets drivers implement hardware-specific operations with their own
private API.

The GEM userspace API is described in the `GEM - the Graphics Execution
Manager <http://lwn.net/Articles/283798/>`__ article on LWN. While
slightly outdated, the document provides a good overview of the GEM API
principles. Buffer allocation and read and write operations, described
as part of the common GEM API, are currently implemented using
driver-specific ioctls.

GEM is data-agnostic. It manages abstract buffer objects without knowing
what individual buffers contain. APIs that require knowledge of buffer
contents or purpose, such as buffer allocation or synchronization
primitives, are thus outside of the scope of GEM and must be implemented
using driver-specific ioctls.

On a fundamental level, GEM involves several operations:

- Memory allocation and freeing
- Command execution
- Aperture management at command execution time

Buffer object allocation is relatively straightforward and largely
provided by Linux's shmem layer, which provides memory to back each
object.

Device-specific operations, such as command execution, pinning, buffer
read & write, mapping, and domain ownership transfers are left to
driver-specific ioctls.

GEM Initialization
------------------

Drivers that use GEM must set the DRIVER_GEM bit in the
:c:type:`struct drm_driver <drm_driver>` driver_features
field. The DRM core will then automatically initialize the GEM core
before calling the load operation. Behind the scenes, this will create a
DRM Memory Manager object which provides an address space pool for
object allocation.

In a KMS configuration, drivers need to allocate and initialize a
command ring buffer following core GEM initialization if required by the
hardware. UMA devices usually have what is called a "stolen" memory
region, which provides space for the initial framebuffer and large,
contiguous memory regions required by the device. This space is
typically not managed by GEM, and must be initialized separately into
its own DRM MM object.

GEM Objects Creation
--------------------

GEM splits creation of GEM objects and allocation of the memory that
backs them into two distinct operations.

GEM objects are represented by an instance of :c:type:`struct
drm_gem_object <drm_gem_object>`. Drivers usually need to
extend GEM objects with private information and thus create a
driver-specific GEM object structure type that embeds an instance of
:c:type:`struct drm_gem_object <drm_gem_object>`.

To create a GEM object, a driver allocates memory for an instance of its
specific GEM object type and initializes the embedded
:c:type:`struct drm_gem_object <drm_gem_object>` with a call
to drm_gem_object_init(). The function takes a pointer
to the DRM device, a pointer to the GEM object and the buffer object
size in bytes.

GEM uses shmem to allocate anonymous pageable memory.
drm_gem_object_init() will create a shmfs file of the
requested size and store it into the :c:type:`struct
drm_gem_object <drm_gem_object>` filp field. The memory is
used either as main storage for the object when the graphics hardware
uses system memory directly, or as a backing store otherwise.

Drivers are responsible for the actual allocation of the physical pages,
by calling shmem_read_mapping_page_gfp() for each page.
Note that they can decide to allocate pages when initializing the GEM
object, or to delay allocation until the memory is needed (for instance
when a page fault occurs as a result of a userspace memory access or
when the driver needs to start a DMA transfer involving the memory).

Anonymous pageable memory allocation is not always desired, for instance
when the hardware requires physically contiguous system memory as is
often the case in embedded devices. Drivers can create GEM objects with
no shmfs backing (called private GEM objects) by initializing them with a call
to drm_gem_private_object_init() instead of drm_gem_object_init(). Storage for
private GEM objects must be managed by drivers.

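Putting the above together, object creation in a driver might be
sketched as follows. This is a hedged illustration rather than a
reference implementation; the ``foo_`` names are hypothetical:

.. code-block:: c

    struct foo_gem_object {
        struct drm_gem_object base;
        /* driver-private data, e.g. page array, GPU mapping */
    };

    static struct foo_gem_object *foo_gem_create(struct drm_device *dev,
                                                 size_t size)
    {
        struct foo_gem_object *obj;
        int ret;

        obj = kzalloc(sizeof(*obj), GFP_KERNEL);
        if (!obj)
            return ERR_PTR(-ENOMEM);

        /* Creates the shmfs backing store. A driver that manages
         * storage itself would call drm_gem_private_object_init()
         * here instead. */
        ret = drm_gem_object_init(dev, &obj->base, PAGE_ALIGN(size));
        if (ret) {
            kfree(obj);
            return ERR_PTR(ret);
        }

        return obj;
    }
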
GEM Objects Lifetime
--------------------

All GEM objects are reference-counted by the GEM core. References can be
acquired and released by calling drm_gem_object_get() and
drm_gem_object_put() respectively.

When the last reference to a GEM object is released the GEM core calls
the :c:type:`struct drm_driver <drm_driver>` gem_free_object_unlocked
operation. That operation is mandatory for GEM-enabled drivers and must
free the GEM object and all associated resources.

.. code-block:: c

    void (*gem_free_object) (struct drm_gem_object *obj);

Drivers are responsible for freeing all GEM object resources. This
includes the resources created by the GEM core, which need to be
released with drm_gem_object_release().

GEM Objects Naming
------------------

Communication between userspace and the kernel refers to GEM objects
using local handles, global names or, more recently, file descriptors.
All of those are 32-bit integer values; the usual Linux kernel limits
apply to the file descriptors.

GEM handles are local to a DRM file. Applications get a handle to a GEM
object through a driver-specific ioctl, and can use that handle to refer
to the GEM object in other standard or driver-specific ioctls. Closing a
DRM file handle frees all its GEM handles and dereferences the
associated GEM objects.

To create a handle for a GEM object drivers call drm_gem_handle_create(). The
function takes a pointer to the DRM file and the GEM object and returns a
locally unique handle. When the handle is no longer needed drivers delete it
with a call to drm_gem_handle_delete(). Finally the GEM object associated with a
handle can be retrieved by a call to drm_gem_object_lookup().

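A driver's object-creation ioctl typically combines allocation with
handle creation, along the lines of the following sketch.
``foo_gem_create()`` and ``struct foo_create_args`` are hypothetical
driver-specific names standing in for whatever allocator and ioctl
argument structure the driver defines:

.. code-block:: c

    static int foo_gem_create_ioctl(struct drm_device *dev, void *data,
                                    struct drm_file *file)
    {
        struct foo_create_args *args = data;  /* hypothetical ioctl args */
        struct foo_gem_object *obj;
        int ret;

        obj = foo_gem_create(dev, args->size);
        if (IS_ERR(obj))
            return PTR_ERR(obj);

        ret = drm_gem_handle_create(file, &obj->base, &args->handle);
        /* Drop the initial reference; the handle now holds its own. */
        drm_gem_object_put(&obj->base);

        return ret;
    }
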
Handles don't take ownership of GEM objects, they only take a reference
to the object that will be dropped when the handle is destroyed. To
avoid leaking GEM objects, drivers must make sure they drop the
reference(s) they own (such as the initial reference taken at object
creation time) as appropriate, without any special consideration for the
handle. For example, in the particular case of combined GEM object and
handle creation in the implementation of the dumb_create operation,
drivers must drop the initial reference to the GEM object before
returning the handle.

GEM names are similar in purpose to handles but are not local to DRM
files. They can be passed between processes to reference a GEM object
globally. Names can't be used directly to refer to objects in the DRM
API, applications must convert handles to names and names to handles
using the DRM_IOCTL_GEM_FLINK and DRM_IOCTL_GEM_OPEN ioctls
respectively. The conversion is handled by the DRM core without any
driver-specific support.

GEM also supports buffer sharing with dma-buf file descriptors through
PRIME. GEM-based drivers must use the provided helper functions to
implement the exporting and importing correctly. See
:ref:`prime_buffer_sharing`. Since sharing file descriptors is
inherently more secure than the easily guessable and global GEM names,
it is the preferred buffer sharing mechanism. Sharing buffers through
GEM names is only supported for legacy userspace. Furthermore, PRIME
also allows cross-device buffer sharing since it is based on dma-bufs.

GEM Objects Mapping
-------------------

Because mapping operations are fairly heavyweight GEM favours
read/write-like access to buffers, implemented through driver-specific
ioctls, over mapping buffers to userspace. However, when random access
to the buffer is needed (to perform software rendering for instance),
direct access to the object can be more efficient.

The mmap system call can't be used directly to map GEM objects, as they
don't have their own file handle. Two alternative methods currently
co-exist to map GEM objects to userspace. The first method uses a
driver-specific ioctl to perform the mapping operation, calling
do_mmap() under the hood. This is often considered
dubious, seems to be discouraged for new GEM-enabled drivers, and will
thus not be described here.

The second method uses the mmap system call on the DRM file handle.

.. code-block:: c

    void *mmap(void *addr, size_t length, int prot, int flags, int fd,
               off_t offset);

DRM identifies the GEM object to be mapped by a fake offset
passed through the mmap offset argument. Prior to being mapped, a GEM
object must thus be associated with a fake offset. To do so, drivers
must call drm_gem_create_mmap_offset() on the object.

Once allocated, the fake offset value must be passed to the application
in a driver-specific way and can then be used as the mmap offset
argument.

The GEM core provides a helper method drm_gem_mmap() to
handle object mapping. The method can be set directly as the mmap file
operation handler. It will look up the GEM object based on the offset
value and set the VMA operations to the :c:type:`struct drm_driver
<drm_driver>` gem_vm_ops field. Note that drm_gem_mmap() doesn't map memory to
userspace, but relies on the driver-provided fault handler to map pages
individually.

To use drm_gem_mmap(), drivers must fill the :c:type:`struct drm_driver
<drm_driver>` gem_vm_ops field with a pointer to VM operations.

The VM operations structure is a :c:type:`struct vm_operations_struct <vm_operations_struct>`
made up of several fields, the more interesting ones being:

.. code-block:: c

    struct vm_operations_struct {
        void (*open)(struct vm_area_struct * area);
        void (*close)(struct vm_area_struct * area);
        vm_fault_t (*fault)(struct vm_fault *vmf);
    };

The open and close operations must update the GEM object reference
count. Drivers can use the drm_gem_vm_open() and drm_gem_vm_close() helper
functions directly as open and close handlers.

The fault operation handler is responsible for mapping individual pages
to userspace when a page fault occurs. Depending on the memory
allocation scheme, drivers can allocate pages at fault time, or can
decide to allocate memory for the GEM object at the time the object is
created.

Drivers that want to map the GEM object upfront instead of handling page
faults can implement their own mmap file operation handler.

For platforms without an MMU the GEM core provides a helper method
drm_gem_cma_get_unmapped_area(). The mmap() routines will call this to get a
proposed address for the mapping.

To use drm_gem_cma_get_unmapped_area(), drivers must fill the
:c:type:`struct file_operations <file_operations>` get_unmapped_area field with
a pointer to drm_gem_cma_get_unmapped_area().

More detailed information about get_unmapped_area can be found in
Documentation/admin-guide/mm/nommu-mmap.rst

Memory Coherency
----------------

When mapped to the device or used in a command buffer, backing pages for
an object are flushed to memory and marked write combined so as to be
coherent with the GPU. Likewise, if the CPU accesses an object after the
GPU has finished rendering to the object, then the object must be made
coherent with the CPU's view of memory, usually involving GPU cache
flushing of various kinds. This core CPU<->GPU coherency management is
provided by a device-specific ioctl, which evaluates an object's current
domain and performs any necessary flushing or synchronization to put the
object into the desired coherency domain (note that the object may be
busy, i.e. an active render target; in that case, setting the domain
blocks the client and waits for rendering to complete before performing
any necessary flushing operations).

Command Execution
-----------------

334 | |
335 | Perhaps the most important GEM function for GPU devices is providing a | |
336 | command execution interface to clients. Client programs construct | |
337 | command buffers containing references to previously allocated memory | |
338 | objects, and then submit them to GEM. At that point, GEM takes care to | |
339 | bind all the objects into the GTT, execute the buffer, and provide | |
340 | necessary synchronization between clients accessing the same buffers. | |
341 | This often involves evicting some objects from the GTT and re-binding | |
342 | others (a fairly expensive operation), and providing relocation support | |
343 | which hides fixed GTT offsets from clients. Clients must take care not | |
344 | to submit command buffers that reference more objects than can fit in | |
345 | the GTT; otherwise, GEM will reject them and no rendering will occur. | |
346 | Similarly, if several objects in the buffer require fence registers to | |
347 | be allocated for correct rendering (e.g. 2D blits on pre-965 chips), | |
348 | care must be taken not to require more fence registers than are | |
349 | available to the client. Such resource management should be abstracted | |
350 | from the client in libdrm. | |
351 | ||
GEM Function Reference
----------------------

.. kernel-doc:: include/drm/drm_gem.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem.c
   :export:

GEM CMA Helper Functions Reference
----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c
   :doc: cma helpers

.. kernel-doc:: include/drm/drm_gem_cma_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c
   :export:

GEM SHMEM Helper Function Reference
-----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
   :doc: overview

.. kernel-doc:: include/drm/drm_gem_shmem_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
   :export:

GEM VRAM Helper Functions Reference
-----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_vram_helper.c
   :doc: overview

.. kernel-doc:: include/drm/drm_gem_vram_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_vram_helper.c
   :export:

GEM TTM Helper Functions Reference
----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_ttm_helper.c
   :doc: overview

.. kernel-doc:: drivers/gpu/drm/drm_gem_ttm_helper.c
   :export:

VMA Offset Manager
==================

.. kernel-doc:: drivers/gpu/drm/drm_vma_manager.c
   :doc: vma offset manager

.. kernel-doc:: include/drm/drm_vma_manager.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_vma_manager.c
   :export:

.. _prime_buffer_sharing:

PRIME Buffer Sharing
====================

PRIME is the cross-device buffer sharing framework in drm, originally
created for the OPTIMUS range of multi-gpu platforms. To userspace PRIME
buffers are dma-buf based file descriptors.

Overview and Lifetime Rules
---------------------------

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :doc: overview and lifetime rules

PRIME Helper Functions
----------------------

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :doc: PRIME Helpers

PRIME Function References
-------------------------

.. kernel-doc:: include/drm/drm_prime.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :export:

DRM MM Range Allocator
======================

Overview
--------

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :doc: Overview

LRU Scan/Eviction Support
-------------------------

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :doc: lru scan roster

DRM MM Range Allocator Function References
------------------------------------------

.. kernel-doc:: include/drm/drm_mm.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :export:

DRM Cache Handling
==================

.. kernel-doc:: drivers/gpu/drm/drm_cache.c
   :export:

DRM Sync Objects
================

.. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
   :doc: Overview

.. kernel-doc:: include/drm/drm_syncobj.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
   :export:

GPU Scheduler
=============

Overview
--------

.. kernel-doc:: drivers/gpu/drm/scheduler/sched_main.c
   :doc: Overview

Scheduler Function References
-----------------------------

.. kernel-doc:: include/drm/gpu_scheduler.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/scheduler/sched_main.c
   :export: