Merge drm/drm-next into drm-misc-next
author Maxime Ripard <maxime@cerno.tech>
Tue, 16 Mar 2021 08:06:23 +0000 (09:06 +0100)
committer Maxime Ripard <maxime@cerno.tech>
Tue, 16 Mar 2021 08:06:23 +0000 (09:06 +0100)
Noralf needs some patches in 5.12-rc3, and we've been delaying the 5.12
merge due to the swap issue so it looks like a good time.

Signed-off-by: Maxime Ripard <maxime@cerno.tech>
20 files changed:
Documentation/devicetree/bindings/display/panel/panel-simple.yaml
Documentation/driver-api/dma-buf.rst
Documentation/gpu/todo.rst
drivers/gpu/drm/ingenic/ingenic-drm-drv.c
drivers/gpu/drm/mcde/mcde_dsi.c
drivers/gpu/drm/panel/panel-novatek-nt35510.c
drivers/gpu/drm/panel/panel-samsung-s6d16d0.c
drivers/gpu/drm/panel/panel-samsung-s6e63m0-dsi.c
drivers/gpu/drm/panel/panel-simple.c
drivers/gpu/drm/panel/panel-sony-acx424akp.c
drivers/gpu/drm/scheduler/sched_entity.c
drivers/gpu/drm/stm/dw_mipi_dsi-stm.c
drivers/gpu/drm/stm/ltdc.c
drivers/gpu/drm/vboxvideo/vbox_ttm.c
drivers/gpu/drm/virtio/virtgpu_ioctl.c
drivers/gpu/drm/virtio/virtgpu_object.c
drivers/video/fbdev/core/fb_defio.c
drivers/video/fbdev/core/fbmem.c
include/linux/fb.h
include/uapi/drm/drm.h

index 62b0d54d87b7f0e2c6c32c3cd7320da1640a9bad..b3797ba2698b1b30c6cc7f41dff3f27fbdfd5ee8 100644 (file)
@@ -161,6 +161,8 @@ properties:
         # Innolux Corporation 12.1" G121X1-L03 XGA (1024x768) TFT LCD panel
       - innolux,g121x1-l03
         # Innolux Corporation 11.6" WXGA (1366x768) TFT LCD panel
+      - innolux,n116bca-ea1
+        # Innolux Corporation 11.6" WXGA (1366x768) TFT LCD panel
       - innolux,n116bge
         # InnoLux 13.3" FHD (1920x1080) eDP TFT LCD panel
       - innolux,n125hce-gn1
index a2133d69872c658a8aa2bf216cc1a8fcbc3e68a4..7f37ec30d9fd721404b5db65d249a98a6ef6aaf9 100644 (file)
@@ -257,3 +257,79 @@ fences in the kernel. This means:
   userspace is allowed to use userspace fencing or long running compute
   workloads. This also means no implicit fencing for shared buffers in these
   cases.
+
+Recoverable Hardware Page Faults Implications
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Modern hardware supports recoverable page faults, which has a lot of
+implications for DMA fences.
+
+First, a pending page fault obviously holds up the work that's running on the
+accelerator and a memory allocation is usually required to resolve the fault.
+But memory allocations are not allowed to gate completion of DMA fences, which
+means any workload using recoverable page faults cannot use DMA fences for
+synchronization. Synchronization fences controlled by userspace must be used
+instead.
+
+On GPUs this poses a problem, because current desktop compositor protocols on
+Linux rely on DMA fences, which means without an entirely new userspace stack
+built on top of userspace fences, they cannot benefit from recoverable page
+faults. Specifically, this means implicit synchronization will not be possible.
+The exception is when page faults are only used as migration hints and never to
+fill a memory request on demand. For now this means recoverable page faults on
+GPUs are limited to pure compute workloads.
+
+Furthermore, GPUs usually share resources between the 3D rendering and compute
+sides, like compute units or command submission engines. If both a 3D
+job with a DMA fence and a compute workload using recoverable page faults are
+pending they could deadlock:
+
+- The 3D workload might need to wait for the compute job to finish and release
+  hardware resources first.
+
+- The compute workload might be stuck in a page fault, because the memory
+  allocation is waiting for the DMA fence of the 3D workload to complete.
+
+There are a few options to prevent this problem; drivers need to ensure at
+least one of them:
+
+- Compute workloads can always be preempted, even when a page fault is pending
+  and not yet repaired. Not all hardware supports this.
+
+- DMA fence workloads and workloads which need page fault handling have
+  independent hardware resources to guarantee forward progress. This could be
+  achieved e.g. through dedicated engines and minimal compute unit reservations
+  for DMA fence workloads.
+
+- The reservation approach could be further refined by only reserving the
+  hardware resources for DMA fence workloads when they are in-flight. This must
+  cover the time from when the DMA fence is visible to other threads up to the
+  moment when the fence is completed through dma_fence_signal().
+
+- As a last resort, if the hardware provides no useful reservation mechanism,
+  all workloads must be flushed from the GPU when switching between jobs
+  requiring DMA fences or jobs requiring page fault handling: This means all DMA
+  fences must complete before a compute job with page fault handling can be
+  inserted into the scheduler queue. And vice versa, before a DMA fence can be
+  made visible anywhere in the system, all compute workloads must be preempted
+  to guarantee all pending GPU page faults are flushed.
+
+- A fairly theoretical option would be to untangle these dependencies when
+  allocating memory to repair hardware page faults, either through separate
+  memory blocks or runtime tracking of the full dependency graph of all DMA
+  fences. This would have a very wide impact on the kernel, since resolving the
+  page fault on the CPU side can itself involve a page fault. It is much more
+  feasible and robust to limit the impact of handling hardware page faults to
+  the specific driver.
+
+Note that workloads that run on independent hardware like copy engines or other
+GPUs do not have any impact. This allows us to keep using DMA fences internally
+in the kernel even for resolving hardware page faults, e.g. by using copy
+engines to clear or copy memory needed to resolve the page fault.
+
+In some ways this page fault problem is a special case of the `Indefinite DMA
+Fences` discussions: indefinite fences from compute workloads are allowed to
+depend on DMA fences, but not the other way around. Not even the page fault
+problem is new, because some other CPU thread in userspace might hit a page
+fault which holds up a userspace fence - supporting page faults on GPUs doesn't
+add anything fundamentally new.
index 1b4b64b71c7efd278b4be0dcd77fcbb24cd7f0d2..7ff9fac10d8b3d09144563252641a78231951efa 100644 (file)
@@ -677,7 +677,7 @@ Outside DRM
 Convert fbdev drivers to DRM
 ----------------------------
 
-There are plenty of fbdev drivers for older hardware. Some hwardware has
+There are plenty of fbdev drivers for older hardware. Some hardware has
 become obsolete, but some still provides good(-enough) framebuffers. The
 drivers that are still useful should be converted to DRM and afterwards
 removed from fbdev.
index 54ee2cb61f3c944006c73ea70c29d52fc68c99ef..d60e1eefc9d1fc4553876217498c0d59e8afe7f5 100644 (file)
@@ -561,7 +561,7 @@ static void ingenic_drm_plane_atomic_update(struct drm_plane *plane,
                height = newstate->src_h >> 16;
                cpp = newstate->fb->format->cpp[0];
 
-               if (priv->soc_info->has_osd && plane->type == DRM_PLANE_TYPE_OVERLAY)
+               if (!priv->soc_info->has_osd || plane->type == DRM_PLANE_TYPE_OVERLAY)
                        hwdesc = &priv->dma_hwdescs->hwdesc_f0;
                else
                        hwdesc = &priv->dma_hwdescs->hwdesc_f1;
@@ -833,6 +833,7 @@ static int ingenic_drm_bind(struct device *dev, bool has_components)
        const struct jz_soc_info *soc_info;
        struct ingenic_drm *priv;
        struct clk *parent_clk;
+       struct drm_plane *primary;
        struct drm_bridge *bridge;
        struct drm_panel *panel;
        struct drm_encoder *encoder;
@@ -947,9 +948,11 @@ static int ingenic_drm_bind(struct device *dev, bool has_components)
        if (soc_info->has_osd)
                priv->ipu_plane = drm_plane_from_index(drm, 0);
 
-       drm_plane_helper_add(&priv->f1, &ingenic_drm_plane_helper_funcs);
+       primary = priv->soc_info->has_osd ? &priv->f1 : &priv->f0;
 
-       ret = drm_universal_plane_init(drm, &priv->f1, 1,
+       drm_plane_helper_add(primary, &ingenic_drm_plane_helper_funcs);
+
+       ret = drm_universal_plane_init(drm, primary, 1,
                                       &ingenic_drm_primary_plane_funcs,
                                       priv->soc_info->formats_f1,
                                       priv->soc_info->num_formats_f1,
@@ -961,7 +964,7 @@ static int ingenic_drm_bind(struct device *dev, bool has_components)
 
        drm_crtc_helper_add(&priv->crtc, &ingenic_drm_crtc_helper_funcs);
 
-       ret = drm_crtc_init_with_planes(drm, &priv->crtc, &priv->f1,
+       ret = drm_crtc_init_with_planes(drm, &priv->crtc, primary,
                                        NULL, &ingenic_drm_crtc_funcs, NULL);
        if (ret) {
                dev_err(dev, "Failed to init CRTC: %i\n", ret);
index 2314c81229920dc96f385272571b2857a8ef2cb1..b3fd3501c41279979dea06abf2de9eac392a5ea7 100644 (file)
@@ -760,7 +760,7 @@ static void mcde_dsi_start(struct mcde_dsi *d)
                DSI_MCTL_MAIN_DATA_CTL_BTA_EN |
                DSI_MCTL_MAIN_DATA_CTL_READ_EN |
                DSI_MCTL_MAIN_DATA_CTL_REG_TE_EN;
-       if (d->mdsi->mode_flags & MIPI_DSI_MODE_EOT_PACKET)
+       if (!(d->mdsi->mode_flags & MIPI_DSI_MODE_EOT_PACKET))
                val |= DSI_MCTL_MAIN_DATA_CTL_HOST_EOT_GEN;
        writel(val, d->regs + DSI_MCTL_MAIN_DATA_CTL);
 
index b9a0e56f33e24ab58d03c76e25a61423fbacdf1a..ef70140c5b09da351eba4be180771b210e7bebde 100644 (file)
@@ -898,8 +898,7 @@ static int nt35510_probe(struct mipi_dsi_device *dsi)
         */
        dsi->hs_rate = 349440000;
        dsi->lp_rate = 9600000;
-       dsi->mode_flags = MIPI_DSI_CLOCK_NON_CONTINUOUS |
-               MIPI_DSI_MODE_EOT_PACKET;
+       dsi->mode_flags = MIPI_DSI_CLOCK_NON_CONTINUOUS;
 
        /*
         * Every new incarnation of this display must have a unique
index 4aac0d1573dd096bfd6af013780131d8f61c4286..70560cac53a99e1a9d7709df7203feedba73a37b 100644 (file)
@@ -184,9 +184,7 @@ static int s6d16d0_probe(struct mipi_dsi_device *dsi)
         * As we only send commands we do not need to be continuously
         * clocked.
         */
-       dsi->mode_flags =
-               MIPI_DSI_CLOCK_NON_CONTINUOUS |
-               MIPI_DSI_MODE_EOT_PACKET;
+       dsi->mode_flags = MIPI_DSI_CLOCK_NON_CONTINUOUS;
 
        s6->supply = devm_regulator_get(dev, "vdd1");
        if (IS_ERR(s6->supply))
index eec74c10dddaf35236a1502e612843b1126e5e91..9c3563c61e8cc77c688e44931c369fb621204d53 100644 (file)
@@ -97,7 +97,6 @@ static int s6e63m0_dsi_probe(struct mipi_dsi_device *dsi)
        dsi->hs_rate = 349440000;
        dsi->lp_rate = 9600000;
        dsi->mode_flags = MIPI_DSI_MODE_VIDEO |
-               MIPI_DSI_MODE_EOT_PACKET |
                MIPI_DSI_MODE_VIDEO_BURST;
 
        ret = s6e63m0_probe(dev, s6e63m0_dsi_dcs_read, s6e63m0_dsi_dcs_write,
index 9858079f9e14a029dcbc2a7a0c54343c1ac267ff..be312b5c04dd9d3d989918740d73ee0a16555c77 100644 (file)
@@ -376,12 +376,13 @@ static int panel_simple_get_hpd_gpio(struct device *dev,
        return 0;
 }
 
-static int panel_simple_prepare(struct drm_panel *panel)
+static int panel_simple_prepare_once(struct drm_panel *panel)
 {
        struct panel_simple *p = to_panel_simple(panel);
        unsigned int delay;
        int err;
        int hpd_asserted;
+       unsigned long hpd_wait_us;
 
        if (p->prepared_time != 0)
                return 0;
@@ -406,25 +407,63 @@ static int panel_simple_prepare(struct drm_panel *panel)
                if (IS_ERR(p->hpd_gpio)) {
                        err = panel_simple_get_hpd_gpio(panel->dev, p, false);
                        if (err)
-                               return err;
+                               goto error;
                }
 
+               if (p->desc->delay.hpd_absent_delay)
+                       hpd_wait_us = p->desc->delay.hpd_absent_delay * 1000UL;
+               else
+                       hpd_wait_us = 2000000;
+
                err = readx_poll_timeout(gpiod_get_value_cansleep, p->hpd_gpio,
                                         hpd_asserted, hpd_asserted,
-                                        1000, 2000000);
+                                        1000, hpd_wait_us);
                if (hpd_asserted < 0)
                        err = hpd_asserted;
 
                if (err) {
-                       dev_err(panel->dev,
-                               "error waiting for hpd GPIO: %d\n", err);
-                       return err;
+                       if (err != -ETIMEDOUT)
+                               dev_err(panel->dev,
+                                       "error waiting for hpd GPIO: %d\n", err);
+                       goto error;
                }
        }
 
        p->prepared_time = ktime_get();
 
        return 0;
+
+error:
+       gpiod_set_value_cansleep(p->enable_gpio, 0);
+       regulator_disable(p->supply);
+       p->unprepared_time = ktime_get();
+
+       return err;
+}
+
+/*
+ * Some panels simply don't always come up and need to be power cycled to
+ * work properly.  We'll allow for a handful of retries.
+ */
+#define MAX_PANEL_PREPARE_TRIES                5
+
+static int panel_simple_prepare(struct drm_panel *panel)
+{
+       int ret;
+       int try;
+
+       for (try = 0; try < MAX_PANEL_PREPARE_TRIES; try++) {
+               ret = panel_simple_prepare_once(panel);
+               if (ret != -ETIMEDOUT)
+                       break;
+       }
+
+       if (ret == -ETIMEDOUT)
+               dev_err(panel->dev, "Prepare timeout after %d tries\n", try);
+       else if (try)
+               dev_warn(panel->dev, "Prepare needed %d retries\n", try);
+
+       return ret;
 }
 
 static int panel_simple_enable(struct drm_panel *panel)
@@ -1445,6 +1484,7 @@ static const struct panel_desc boe_nv110wtm_n61 = {
        .delay = {
                .hpd_absent_delay = 200,
                .prepare_to_enable = 80,
+               .enable = 50,
                .unprepare = 500,
        },
        .bus_format = MEDIA_BUS_FMT_RGB888_1X24,
@@ -2368,6 +2408,36 @@ static const struct panel_desc innolux_g121x1_l03 = {
        },
 };
 
+static const struct drm_display_mode innolux_n116bca_ea1_mode = {
+       .clock = 76420,
+       .hdisplay = 1366,
+       .hsync_start = 1366 + 136,
+       .hsync_end = 1366 + 136 + 30,
+       .htotal = 1366 + 136 + 30 + 60,
+       .vdisplay = 768,
+       .vsync_start = 768 + 8,
+       .vsync_end = 768 + 8 + 12,
+       .vtotal = 768 + 8 + 12 + 12,
+       .flags = DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC,
+};
+
+static const struct panel_desc innolux_n116bca_ea1 = {
+       .modes = &innolux_n116bca_ea1_mode,
+       .num_modes = 1,
+       .bpc = 6,
+       .size = {
+               .width = 256,
+               .height = 144,
+       },
+       .delay = {
+               .hpd_absent_delay = 200,
+               .prepare_to_enable = 80,
+               .unprepare = 500,
+       },
+       .bus_format = MEDIA_BUS_FMT_RGB666_1X18,
+       .connector_type = DRM_MODE_CONNECTOR_eDP,
+};
+
 /*
  * Datasheet specifies that at 60 Hz refresh rate:
  * - total horizontal time: { 1506, 1592, 1716 }
@@ -4283,6 +4353,9 @@ static const struct of_device_id platform_of_match[] = {
        }, {
                .compatible = "innolux,g121x1-l03",
                .data = &innolux_g121x1_l03,
+       }, {
+               .compatible = "innolux,n116bca-ea1",
+               .data = &innolux_n116bca_ea1,
        }, {
                .compatible = "innolux,n116bge",
                .data = &innolux_n116bge,
index 065efae213f5b04584fdd6024f330c98cef5958b..95659a4d15e979d0a05c6c1d4709ff8580aa380f 100644 (file)
@@ -449,8 +449,7 @@ static int acx424akp_probe(struct mipi_dsi_device *dsi)
                        MIPI_DSI_MODE_VIDEO_BURST;
        else
                dsi->mode_flags =
-                       MIPI_DSI_CLOCK_NON_CONTINUOUS |
-                       MIPI_DSI_MODE_EOT_PACKET;
+                       MIPI_DSI_CLOCK_NON_CONTINUOUS;
 
        acx->supply = devm_regulator_get(dev, "vddi");
        if (IS_ERR(acx->supply))
index 92d965b629c661c521ceed6c0ed19dbe9f16f313..f0790e9471d1aab283821826ddd168c6ba145c96 100644 (file)
@@ -453,7 +453,7 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
        struct drm_gpu_scheduler *sched;
        struct drm_sched_rq *rq;
 
-       if (spsc_queue_count(&entity->job_queue) || entity->num_sched_list <= 1)
+       if (spsc_queue_count(&entity->job_queue) || !entity->sched_list)
                return;
 
        fence = READ_ONCE(entity->last_scheduled);
@@ -467,8 +467,10 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
                drm_sched_rq_remove_entity(entity->rq, entity);
                entity->rq = rq;
        }
-
        spin_unlock(&entity->rq_lock);
+
+       if (entity->num_sched_list == 1)
+               entity->sched_list = NULL;
 }
 
 /**
index 2e1f2664495d00371f78ddefd75f8aa002f8f81a..8399d337589d5e411f8f3ff69b4f399ce7accef7 100644 (file)
@@ -363,8 +363,7 @@ static int dw_mipi_dsi_stm_probe(struct platform_device *pdev)
        dsi->vdd_supply = devm_regulator_get(dev, "phy-dsi");
        if (IS_ERR(dsi->vdd_supply)) {
                ret = PTR_ERR(dsi->vdd_supply);
-               if (ret != -EPROBE_DEFER)
-                       DRM_ERROR("Failed to request regulator: %d\n", ret);
+               dev_err_probe(dev, ret, "Failed to request regulator\n");
                return ret;
        }
 
@@ -377,9 +376,7 @@ static int dw_mipi_dsi_stm_probe(struct platform_device *pdev)
        dsi->pllref_clk = devm_clk_get(dev, "ref");
        if (IS_ERR(dsi->pllref_clk)) {
                ret = PTR_ERR(dsi->pllref_clk);
-               if (ret != -EPROBE_DEFER)
-                       DRM_ERROR("Unable to get pll reference clock: %d\n",
-                                 ret);
+               dev_err_probe(dev, ret, "Unable to get pll reference clock\n");
                goto err_clk_get;
        }
 
@@ -419,7 +416,7 @@ static int dw_mipi_dsi_stm_probe(struct platform_device *pdev)
        dsi->dsi = dw_mipi_dsi_probe(pdev, &dw_mipi_dsi_stm_plat_data);
        if (IS_ERR(dsi->dsi)) {
                ret = PTR_ERR(dsi->dsi);
-               DRM_ERROR("Failed to initialize mipi dsi host: %d\n", ret);
+               dev_err_probe(dev, ret, "Failed to initialize mipi dsi host\n");
                goto err_dsi_probe;
        }
 
index b5117fccf355e9e78629683f5346e15ce3cacba4..65c3c79ad1d5425e2a542083201f5cf81a64418c 100644 (file)
@@ -31,6 +31,7 @@
 #include <drm/drm_of.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_probe_helper.h>
+#include <drm/drm_simple_kms_helper.h>
 #include <drm/drm_vblank.h>
 
 #include <video/videomode.h>
@@ -1054,14 +1055,6 @@ cleanup:
        return ret;
 }
 
-/*
- * DRM_ENCODER
- */
-
-static const struct drm_encoder_funcs ltdc_encoder_funcs = {
-       .destroy = drm_encoder_cleanup,
-};
-
 static void ltdc_encoder_disable(struct drm_encoder *encoder)
 {
        struct drm_device *ddev = encoder->dev;
@@ -1122,8 +1115,7 @@ static int ltdc_encoder_init(struct drm_device *ddev, struct drm_bridge *bridge)
        encoder->possible_crtcs = CRTC_MASK;
        encoder->possible_clones = 0;   /* No cloning support */
 
-       drm_encoder_init(ddev, encoder, &ltdc_encoder_funcs,
-                        DRM_MODE_ENCODER_DPI, NULL);
+       drm_simple_encoder_init(ddev, encoder, DRM_MODE_ENCODER_DPI);
 
        drm_encoder_helper_add(encoder, &ltdc_encoder_helper_funcs);
 
index 0066a3c1dfc96fdd720cbcfc4923ad37775890f3..fd8a53a4d8d651a57486c0153b83be965bb9d0c8 100644 (file)
 
 int vbox_mm_init(struct vbox_private *vbox)
 {
-       struct drm_vram_mm *vmm;
        int ret;
        struct drm_device *dev = &vbox->ddev;
        struct pci_dev *pdev = to_pci_dev(dev->dev);
 
-       vmm = drm_vram_helper_alloc_mm(dev, pci_resource_start(pdev, 0),
+       ret = drmm_vram_helper_init(dev, pci_resource_start(pdev, 0),
                                       vbox->available_vram_size);
-       if (IS_ERR(vmm)) {
-               ret = PTR_ERR(vmm);
+       if (ret) {
                DRM_ERROR("Error initializing VRAM MM; %d\n", ret);
                return ret;
        }
@@ -33,5 +31,4 @@ int vbox_mm_init(struct vbox_private *vbox)
 void vbox_mm_fini(struct vbox_private *vbox)
 {
        arch_phys_wc_del(vbox->fb_mtrr);
-       drm_vram_helper_release_mm(&vbox->ddev);
 }
index 23eb6d772e405cb7a6e5ab169bfb5a69abf110b5..669f2ee3951548de2fd5481d2a601d8d29ed7706 100644 (file)
@@ -174,7 +174,7 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
                if (!sync_file) {
                        dma_fence_put(&out_fence->f);
                        ret = -ENOMEM;
-                       goto out_memdup;
+                       goto out_unresv;
                }
 
                exbuf->fence_fd = out_fence_fd;
index d69a5b6da55320c7e10ab88d6baa49f50c4c6e7d..4ff1ec28e630d4c045e54c0e5f4f5a9b47bd4ba2 100644 (file)
@@ -248,6 +248,7 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 
        ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
        if (ret != 0) {
+               virtio_gpu_array_put_free(objs);
                virtio_gpu_free_object(&shmem_obj->base);
                return ret;
        }
index a591d291b231af8531174bd7dd8057d767ab13f1..b292887a248150d1669cfe85d857f71a5be3795e 100644 (file)
@@ -52,13 +52,6 @@ static vm_fault_t fb_deferred_io_fault(struct vm_fault *vmf)
                return VM_FAULT_SIGBUS;
 
        get_page(page);
-
-       if (vmf->vma->vm_file)
-               page->mapping = vmf->vma->vm_file->f_mapping;
-       else
-               printk(KERN_ERR "no mapping available\n");
-
-       BUG_ON(!page->mapping);
        page->index = vmf->pgoff;
 
        vmf->page = page;
@@ -151,17 +144,6 @@ static const struct vm_operations_struct fb_deferred_io_vm_ops = {
        .page_mkwrite   = fb_deferred_io_mkwrite,
 };
 
-static int fb_deferred_io_set_page_dirty(struct page *page)
-{
-       if (!PageDirty(page))
-               SetPageDirty(page);
-       return 0;
-}
-
-static const struct address_space_operations fb_deferred_io_aops = {
-       .set_page_dirty = fb_deferred_io_set_page_dirty,
-};
-
 int fb_deferred_io_mmap(struct fb_info *info, struct vm_area_struct *vma)
 {
        vma->vm_ops = &fb_deferred_io_vm_ops;
@@ -212,29 +194,12 @@ void fb_deferred_io_init(struct fb_info *info)
 }
 EXPORT_SYMBOL_GPL(fb_deferred_io_init);
 
-void fb_deferred_io_open(struct fb_info *info,
-                        struct inode *inode,
-                        struct file *file)
-{
-       file->f_mapping->a_ops = &fb_deferred_io_aops;
-}
-EXPORT_SYMBOL_GPL(fb_deferred_io_open);
-
 void fb_deferred_io_cleanup(struct fb_info *info)
 {
        struct fb_deferred_io *fbdefio = info->fbdefio;
-       struct page *page;
-       int i;
 
        BUG_ON(!fbdefio);
        cancel_delayed_work_sync(&info->deferred_work);
-
-       /* clear out the mapping that we setup */
-       for (i = 0 ; i < info->fix.smem_len; i += PAGE_SIZE) {
-               page = fb_deferred_io_page(info, i);
-               page->mapping = NULL;
-       }
-
        mutex_destroy(&fbdefio->lock);
 }
 EXPORT_SYMBOL_GPL(fb_deferred_io_cleanup);
index 06f5805de2dee1d25258beed9e952f2bb4d2404a..372b52a2befab2bb8682275eb8a0cbcea8ad16fa 100644 (file)
@@ -1415,10 +1415,6 @@ __releases(&info->lock)
                if (res)
                        module_put(info->fbops->owner);
        }
-#ifdef CONFIG_FB_DEFERRED_IO
-       if (info->fbdefio)
-               fb_deferred_io_open(info, inode, file);
-#endif
 out:
        unlock_fb_info(info);
        if (res)
index ecfbcc0553a5904d28cd6f3fb4a1952c43069439..a8dccd23c2499b532537a8bd4a239aab0b54bb8f 100644 (file)
@@ -659,9 +659,6 @@ static inline void __fb_pad_aligned_buffer(u8 *dst, u32 d_pitch,
 /* drivers/video/fb_defio.c */
 int fb_deferred_io_mmap(struct fb_info *info, struct vm_area_struct *vma);
 extern void fb_deferred_io_init(struct fb_info *info);
-extern void fb_deferred_io_open(struct fb_info *info,
-                               struct inode *inode,
-                               struct file *file);
 extern void fb_deferred_io_cleanup(struct fb_info *info);
 extern int fb_deferred_io_fsync(struct file *file, loff_t start,
                                loff_t end, int datasync);
index 0827037c54847898c6771146c5ac285d57849ede..67b94bc3c88522fc29d2a3adb25cba34521d64cf 100644 (file)
@@ -625,30 +625,147 @@ struct drm_gem_open {
        __u64 size;
 };
 
+/**
+ * DRM_CAP_DUMB_BUFFER
+ *
+ * If set to 1, the driver supports creating dumb buffers via the
+ * &DRM_IOCTL_MODE_CREATE_DUMB ioctl.
+ */
 #define DRM_CAP_DUMB_BUFFER            0x1
+/**
+ * DRM_CAP_VBLANK_HIGH_CRTC
+ *
+ * If set to 1, the kernel supports specifying a CRTC index in the high bits of
+ * &drm_wait_vblank_request.type.
+ *
+ * Starting kernel version 2.6.39, this capability is always set to 1.
+ */
 #define DRM_CAP_VBLANK_HIGH_CRTC       0x2
+/**
+ * DRM_CAP_DUMB_PREFERRED_DEPTH
+ *
+ * The preferred bit depth for dumb buffers.
+ *
+ * The bit depth is the number of bits used to indicate the color of a single
+ * pixel excluding any padding. This is different from the number of bits per
+ * pixel. For instance, XRGB8888 has a bit depth of 24 but has 32 bits per
+ * pixel.
+ *
+ * Note that this preference only applies to dumb buffers, it's irrelevant for
+ * other types of buffers.
+ */
 #define DRM_CAP_DUMB_PREFERRED_DEPTH   0x3
+/**
+ * DRM_CAP_DUMB_PREFER_SHADOW
+ *
+ * If set to 1, the driver prefers userspace to render to a shadow buffer
+ * instead of directly rendering to a dumb buffer. For best speed, userspace
+ * should do streaming ordered memory copies into the dumb buffer and never
+ * read from it.
+ *
+ * Note that this preference only applies to dumb buffers, it's irrelevant for
+ * other types of buffers.
+ */
 #define DRM_CAP_DUMB_PREFER_SHADOW     0x4
+/**
+ * DRM_CAP_PRIME
+ *
+ * Bitfield of supported PRIME sharing capabilities. See &DRM_PRIME_CAP_IMPORT
+ * and &DRM_PRIME_CAP_EXPORT.
+ *
+ * PRIME buffers are exposed as dma-buf file descriptors. See
+ * Documentation/gpu/drm-mm.rst, section "PRIME Buffer Sharing".
+ */
 #define DRM_CAP_PRIME                  0x5
+/**
+ * DRM_PRIME_CAP_IMPORT
+ *
+ * If this bit is set in &DRM_CAP_PRIME, the driver supports importing PRIME
+ * buffers via the &DRM_IOCTL_PRIME_FD_TO_HANDLE ioctl.
+ */
 #define  DRM_PRIME_CAP_IMPORT          0x1
+/**
+ * DRM_PRIME_CAP_EXPORT
+ *
+ * If this bit is set in &DRM_CAP_PRIME, the driver supports exporting PRIME
+ * buffers via the &DRM_IOCTL_PRIME_HANDLE_TO_FD ioctl.
+ */
 #define  DRM_PRIME_CAP_EXPORT          0x2
+/**
+ * DRM_CAP_TIMESTAMP_MONOTONIC
+ *
+ * If set to 0, the kernel will report timestamps with ``CLOCK_REALTIME`` in
+ * struct drm_event_vblank. If set to 1, the kernel will report timestamps with
+ * ``CLOCK_MONOTONIC``. See ``clock_gettime(2)`` for the definition of these
+ * clocks.
+ *
+ * Starting from kernel version 2.6.39, the default value for this capability
+ * is 1. Starting kernel version 4.15, this capability is always set to 1.
+ */
 #define DRM_CAP_TIMESTAMP_MONOTONIC    0x6
+/**
+ * DRM_CAP_ASYNC_PAGE_FLIP
+ *
+ * If set to 1, the driver supports &DRM_MODE_PAGE_FLIP_ASYNC.
+ */
 #define DRM_CAP_ASYNC_PAGE_FLIP                0x7
-/*
- * The CURSOR_WIDTH and CURSOR_HEIGHT capabilities return a valid widthxheight
- * combination for the hardware cursor. The intention is that a hardware
- * agnostic userspace can query a cursor plane size to use.
+/**
+ * DRM_CAP_CURSOR_WIDTH
+ *
+ * The ``CURSOR_WIDTH`` and ``CURSOR_HEIGHT`` capabilities return a valid
+ * width x height combination for the hardware cursor. The intention is that a
+ * hardware agnostic userspace can query a cursor plane size to use.
  *
  * Note that the cross-driver contract is to merely return a valid size;
  * drivers are free to attach another meaning on top, eg. i915 returns the
  * maximum plane size.
  */
 #define DRM_CAP_CURSOR_WIDTH           0x8
+/**
+ * DRM_CAP_CURSOR_HEIGHT
+ *
+ * See &DRM_CAP_CURSOR_WIDTH.
+ */
 #define DRM_CAP_CURSOR_HEIGHT          0x9
+/**
+ * DRM_CAP_ADDFB2_MODIFIERS
+ *
+ * If set to 1, the driver supports supplying modifiers in the
+ * &DRM_IOCTL_MODE_ADDFB2 ioctl.
+ */
 #define DRM_CAP_ADDFB2_MODIFIERS       0x10
+/**
+ * DRM_CAP_PAGE_FLIP_TARGET
+ *
+ * If set to 1, the driver supports the &DRM_MODE_PAGE_FLIP_TARGET_ABSOLUTE and
+ * &DRM_MODE_PAGE_FLIP_TARGET_RELATIVE flags in
+ * &drm_mode_crtc_page_flip_target.flags for the &DRM_IOCTL_MODE_PAGE_FLIP
+ * ioctl.
+ */
 #define DRM_CAP_PAGE_FLIP_TARGET       0x11
+/**
+ * DRM_CAP_CRTC_IN_VBLANK_EVENT
+ *
+ * If set to 1, the kernel supports reporting the CRTC ID in
+ * &drm_event_vblank.crtc_id for the &DRM_EVENT_VBLANK and
+ * &DRM_EVENT_FLIP_COMPLETE events.
+ *
+ * Starting kernel version 4.12, this capability is always set to 1.
+ */
 #define DRM_CAP_CRTC_IN_VBLANK_EVENT   0x12
+/**
+ * DRM_CAP_SYNCOBJ
+ *
+ * If set to 1, the driver supports sync objects. See
+ * Documentation/gpu/drm-mm.rst, section "DRM Sync Objects".
+ */
 #define DRM_CAP_SYNCOBJ                0x13
+/**
+ * DRM_CAP_SYNCOBJ_TIMELINE
+ *
+ * If set to 1, the driver supports timeline operations on sync objects. See
+ * Documentation/gpu/drm-mm.rst, section "DRM Sync Objects".
+ */
 #define DRM_CAP_SYNCOBJ_TIMELINE       0x14
 
 /* DRM_IOCTL_GET_CAP ioctl argument type */