F: drivers/media/dvb-frontends/a8293*
AACRAID SCSI RAID DRIVER
-M: Adaptec OEM Raid Solutions <aacraid@adaptec.com>
+M: Adaptec OEM Raid Solutions <aacraid@microsemi.com>
L: linux-scsi@vger.kernel.org
W: http://www.adaptec.com/
S: Supported
F: include/acpi/
F: Documentation/acpi/
F: Documentation/ABI/testing/sysfs-bus-acpi
+F: Documentation/ABI/testing/configfs-acpi
F: drivers/pci/*acpi*
F: drivers/pci/*/*acpi*
F: drivers/pci/*/*/*acpi*
S: Maintained
F: drivers/gpio/gpio-altera.c
+ALTERA SYSTEM RESOURCE DRIVER FOR ARRIA10 DEVKIT
+M: Thor Thayer <tthayer@opensource.altera.com>
+S: Maintained
+F: drivers/gpio/gpio-altera-a10sr.c
+F: drivers/mfd/altera-a10sr.c
+F: include/linux/mfd/altera-a10sr.h
+
ALTERA TRIPLE SPEED ETHERNET DRIVER
M: Vince Bridgers <vbridger@opensource.altera.com>
L: netdev@vger.kernel.org
M: Keyur Chudgar <kchudgar@apm.com>
S: Supported
F: drivers/net/ethernet/apm/xgene/
+F: drivers/net/phy/mdio-xgene.c
F: Documentation/devicetree/bindings/net/apm-xgene-enet.txt
+F: Documentation/devicetree/bindings/net/apm-xgene-mdio.txt
APTINA CAMERA SENSOR PLL
M: Laurent Pinchart <Laurent.pinchart@ideasonboard.com>
ARM HDLCD DRM DRIVER
M: Liviu Dudau <liviu.dudau@arm.com>
S: Supported
-F: drivers/gpu/drm/arm/
+F: drivers/gpu/drm/arm/hdlcd_*
F: Documentation/devicetree/bindings/display/arm,hdlcd.txt
+ARM MALI-DP DRM DRIVER
+M: Liviu Dudau <liviu.dudau@arm.com>
+M: Brian Starkey <brian.starkey@arm.com>
+M: Mali DP Maintainers <malidp@foss.arm.com>
+S: Supported
+F: drivers/gpu/drm/arm/
+F: Documentation/devicetree/bindings/display/arm,malidp.txt
+
ARM MFM AND FLOPPY DRIVERS
M: Ian Molton <spyro@f2s.com>
S: Maintained
L: linux-arm-msm@vger.kernel.org
L: linux-soc@vger.kernel.org
S: Maintained
+F: Documentation/devicetree/bindings/soc/qcom/
F: arch/arm/boot/dts/qcom-*.dts
F: arch/arm/boot/dts/qcom-*.dtsi
F: arch/arm/mach-qcom/
F: arch/arm/mach-s3c64xx/
F: arch/arm/mach-s5p*/
F: arch/arm/mach-exynos*/
-F: drivers/*/*s3c2410*
-F: drivers/*/*/*s3c2410*
+F: drivers/*/*s3c24*
+F: drivers/*/*/*s3c24*
+F: drivers/*/*s3c64xx*
+F: drivers/*/*s5pv210*
F: drivers/memory/samsung/*
F: drivers/soc/samsung/*
F: drivers/spi/spi-s3c*
-F: sound/soc/samsung/*
F: Documentation/arm/Samsung/
F: Documentation/devicetree/bindings/arm/samsung/
F: Documentation/devicetree/bindings/sram/samsung-sram.txt
S: Maintained
F: drivers/media/platform/s5p-tv/
+ARM/SAMSUNG S5P SERIES HDMI CEC SUBSYSTEM SUPPORT
+M: Kyungmin Park <kyungmin.park@samsung.com>
+L: linux-arm-kernel@lists.infradead.org
+L: linux-media@vger.kernel.org
+S: Maintained
+F: drivers/staging/media/platform/s5p-cec/
+
ARM/SAMSUNG S5P SERIES JPEG CODEC SUPPORT
M: Andrzej Pietrasiewicz <andrzej.p@samsung.com>
M: Jacek Anaszewski <j.anaszewski@samsung.com>
F: arch/arm/configs/shmobile_defconfig
F: arch/arm/include/debug/renesas-scif.S
F: arch/arm/mach-shmobile/
-F: drivers/sh/
F: drivers/soc/renesas/
F: include/linux/soc/renesas/
M: Marc Gonzalez <marc_gonzalez@sigmadesigns.com>
L: linux-arm-kernel@lists.infradead.org
S: Maintained
-F: arch/arm/mach-tango/
-F: arch/arm/boot/dts/tango*
+N: tango
ARM/TECHNOLOGIC SYSTEMS TS7250 MACHINE SUPPORT
M: Lennert Buytenhek <kernel@wantstofly.org>
T: git git://git.linaro.org/people/ulfh/clk.git
S: Maintained
F: drivers/clk/ux500/
-F: include/linux/platform_data/clk-ux500.h
ARM/VERSATILE EXPRESS PLATFORM
M: Liviu Dudau <liviu.dudau@arm.com>
F: Documentation/ABI/testing/sysfs-class-net-batman-adv
F: Documentation/ABI/testing/sysfs-class-net-mesh
F: Documentation/networking/batman-adv.txt
+F: include/uapi/linux/batman_adv.h
F: net/batman-adv/
BAYCOM/HDLCDRV DRIVERS FOR AX.25
S: Supported
F: drivers/net/ethernet/broadcom/b44.*
+BROADCOM B53 ETHERNET SWITCH DRIVER
+M: Florian Fainelli <f.fainelli@gmail.com>
+L: netdev@vger.kernel.org
+L: openwrt-devel@lists.openwrt.org (subscribers-only)
+S: Supported
+F: drivers/net/dsa/b53/*
+F: include/linux/platform_data/b53.h
+
BROADCOM GENET ETHERNET DRIVER
M: Florian Fainelli <f.fainelli@gmail.com>
L: netdev@vger.kernel.org
M: Florian Fainelli <f.fainelli@gmail.com>
M: Ray Jui <rjui@broadcom.com>
M: Scott Branden <sbranden@broadcom.com>
-L: bcm-kernel-feedback-list@broadcom.com
+M: bcm-kernel-feedback-list@broadcom.com
T: git git://github.com/broadcom/mach-bcm
S: Maintained
+N: bcm281*
+N: bcm113*
+N: bcm216*
+N: kona
F: arch/arm/mach-bcm/
-F: arch/arm/boot/dts/bcm113*
-F: arch/arm/boot/dts/bcm216*
-F: arch/arm/boot/dts/bcm281*
-F: arch/arm64/boot/dts/broadcom/
-F: arch/arm/configs/bcm_defconfig
-F: drivers/mmc/host/sdhci-bcm-kona.c
-F: drivers/clocksource/bcm_kona_timer.c
BROADCOM BCM2835 ARM ARCHITECTURE
M: Stephen Warren <swarren@wwwdotorg.org>
BROADCOM BCM5301X ARM ARCHITECTURE
M: Hauke Mehrtens <hauke@hauke-m.de>
+M: Rafał Miłecki <zajec5@gmail.com>
+M: bcm-kernel-feedback-list@broadcom.com
L: linux-arm-kernel@lists.infradead.org
S: Maintained
F: arch/arm/mach-bcm/bcm_5301x.c
-F: arch/arm/boot/dts/bcm5301x.dtsi
+F: arch/arm/boot/dts/bcm5301x*.dtsi
F: arch/arm/boot/dts/bcm470*
BROADCOM BCM63XX ARM ARCHITECTURE
M: Florian Fainelli <f.fainelli@gmail.com>
+M: bcm-kernel-feedback-list@broadcom.com
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
-L: bcm-kernel-feedback-list@broadcom.com
T: git git://github.com/broadcom/stblinux.git
S: Maintained
-F: arch/arm/mach-bcm/bcm63xx.c
-F: arch/arm/include/debug/bcm63xx.S
+N: bcm63xx
BROADCOM BCM63XX/BCM33XX UDC DRIVER
M: Kevin Cernekee <cernekee@gmail.com>
M: Brian Norris <computersforpeace@gmail.com>
M: Gregory Fong <gregory.0xf0@gmail.com>
M: Florian Fainelli <f.fainelli@gmail.com>
+M: bcm-kernel-feedback-list@broadcom.com
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
-L: bcm-kernel-feedback-list@broadcom.com
T: git git://github.com/broadcom/stblinux.git
S: Maintained
F: arch/arm/mach-bcm/*brcmstb*
F: drivers/net/ethernet/broadcom/tg3.*
BROADCOM BRCM80211 IEEE802.11n WIRELESS DRIVER
-M: Brett Rudley <brudley@broadcom.com>
-M: Arend van Spriel <arend@broadcom.com>
-M: Franky (Zhenhui) Lin <frankyl@broadcom.com>
-M: Hante Meuleman <meuleman@broadcom.com>
+M: Arend van Spriel <arend.vanspriel@broadcom.com>
+M: Franky Lin <franky.lin@broadcom.com>
+M: Hante Meuleman <hante.meuleman@broadcom.com>
L: linux-wireless@vger.kernel.org
-L: brcm80211-dev-list@broadcom.com
+L: brcm80211-dev-list.pdl@broadcom.com
S: Supported
F: drivers/net/wireless/broadcom/brcm80211/
M: Ray Jui <rjui@broadcom.com>
M: Scott Branden <sbranden@broadcom.com>
M: Jon Mason <jonmason@broadcom.com>
+M: bcm-kernel-feedback-list@broadcom.com
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
-L: bcm-kernel-feedback-list@broadcom.com
T: git git://github.com/broadcom/cygnus-linux.git
S: Maintained
N: iproc
N: cygnus
-N: nsp
+N: bcm[-_]nsp
N: bcm9113*
N: bcm9583*
N: bcm9585*
N: bcm585*
N: bcm586*
N: bcm88312
+F: arch/arm64/boot/dts/broadcom/ns2*
+F: drivers/clk/bcm/clk-ns*
+F: drivers/pinctrl/bcm/pinctrl-ns*
BROADCOM BRCMSTB GPIO DRIVER
M: Gregory Fong <gregory.0xf0@gmail.com>
BROADCOM VULCAN ARM64 SOC
M: Jayachandran C. <jchandra@broadcom.com>
+M: bcm-kernel-feedback-list@broadcom.com
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
-L: bcm-kernel-feedback-list@broadcom.com
S: Maintained
F: arch/arm64/boot/dts/broadcom/vulcan*
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next.git
S: Maintained
+F: Documentation/devicetree/bindings/net/can/
F: drivers/net/can/
F: include/linux/can/dev.h
F: include/linux/can/platform/
F: include/uapi/linux/can/netlink.h
CAPABILITIES
-M: Serge Hallyn <serge.hallyn@canonical.com>
+M: Serge Hallyn <serge@hallyn.com>
L: linux-security-module@vger.kernel.org
S: Supported
F: include/linux/capability.h
F: include/linux/spi/cc2520.h
F: Documentation/devicetree/bindings/net/ieee802154/cc2520.txt
+CEC DRIVER
+M: Hans Verkuil <hans.verkuil@cisco.com>
+L: linux-media@vger.kernel.org
+T: git git://linuxtv.org/media_tree.git
+W: http://linuxtv.org
+S: Supported
+F: Documentation/cec.txt
+F: Documentation/DocBook/media/v4l/cec*
+F: drivers/staging/media/cec/
+F: drivers/media/cec-edid.c
+F: drivers/media/rc/keymaps/rc-cec.c
+F: include/media/cec.h
+F: include/media/cec-edid.h
+F: include/linux/cec.h
+F: include/linux/cec-funcs.h
+
CELL BROADBAND ENGINE ARCHITECTURE
M: Arnd Bergmann <arnd@arndb.de>
L: linuxppc-dev@lists.ozlabs.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6.git
S: Maintained
F: Documentation/crypto/
+F: Documentation/devicetree/bindings/crypto/
F: Documentation/DocBook/crypto-API.tmpl
F: arch/*/crypto/
F: crypto/
W: http://www.dialog-semiconductor.com/products
S: Supported
F: Documentation/hwmon/da90??
+F: Documentation/devicetree/bindings/mfd/da90*.txt
+F: Documentation/devicetree/bindings/regulator/da92*.txt
F: Documentation/devicetree/bindings/sound/da[79]*.txt
F: drivers/gpio/gpio-da90??.c
F: drivers/hwmon/da90??-hwmon.c
F: include/linux/mfd/da903x.h
F: include/linux/mfd/da9052/
F: include/linux/mfd/da9055/
+F: include/linux/mfd/da9062/
F: include/linux/mfd/da9063/
F: include/linux/mfd/da9150/
+F: include/linux/regulator/da9211.h
F: include/sound/da[79]*.h
F: sound/soc/codecs/da[79]*.[ch]
F: Documentation/dma-buf-sharing.txt
T: git git://git.linaro.org/people/sumitsemwal/linux-dma-buf.git
+SYNC FILE FRAMEWORK
+M: Sumit Semwal <sumit.semwal@linaro.org>
+R: Gustavo Padovan <gustavo@padovan.org>
+S: Maintained
+L: linux-media@vger.kernel.org
+L: dri-devel@lists.freedesktop.org
+F: drivers/dma-buf/sync_file.c
+F: include/linux/sync_file.h
+F: Documentation/sync_file.txt
+T: git git://git.linaro.org/people/sumitsemwal/linux-dma-buf.git
+
DMA GENERIC OFFLOAD ENGINE SUBSYSTEM
M: Vinod Koul <vinod.koul@intel.com>
L: dmaengine@vger.kernel.org
S: Maintained
F: drivers/gpu/drm/
F: drivers/gpu/vga/
-F: Documentation/DocBook/gpu.*
+F: Documentation/devicetree/bindings/display/
+F: Documentation/devicetree/bindings/gpu/
+F: Documentation/devicetree/bindings/video/
+F: Documentation/gpu/
F: include/drm/
F: include/uapi/drm/
F: drivers/gpu/drm/i915/
F: include/drm/i915*
F: include/uapi/drm/i915_drm.h
+F: Documentation/gpu/i915.rst
DRM DRIVERS FOR ATMEL HLCDC
M: Boris Brezillon <boris.brezillon@free-electrons.com>
F: include/uapi/drm/vc4_drm.h
F: Documentation/devicetree/bindings/display/brcm,bcm-vc4.txt
+DRM DRIVERS FOR TI OMAP
+M: Tomi Valkeinen <tomi.valkeinen@ti.com>
+L: dri-devel@lists.freedesktop.org
+S: Maintained
+F: drivers/gpu/drm/omapdrm/
+F: Documentation/devicetree/bindings/display/ti/
+
+DRM DRIVERS FOR TI LCDC
+M: Jyri Sarha <jsarha@ti.com>
+R: Tomi Valkeinen <tomi.valkeinen@ti.com>
+L: dri-devel@lists.freedesktop.org
+S: Maintained
+F: drivers/gpu/drm/tilcdc/
+F: Documentation/devicetree/bindings/display/tilcdc/
+
DSBR100 USB FM RADIO DRIVER
M: Alexey Klimov <klimov.linux@gmail.com>
L: linux-media@vger.kernel.org
F: drivers/staging/fbtft/
FCOE SUBSYSTEM (libfc, libfcoe, fcoe)
-M: Vasu Dev <vasu.dev@intel.com>
+M: Johannes Thumshirn <jth@kernel.org>
L: fcoe-devel@open-fcoe.org
W: www.Open-FCoE.org
S: Supported
X: drivers/net/ethernet/freescale/gianfar_ptp.c
F: Documentation/devicetree/bindings/net/fsl-tsec-phy.txt
+FREESCALE QUICC ENGINE UCC HDLC DRIVER
+M: Zhao Qiang <qiang.zhao@nxp.com>
+L: netdev@vger.kernel.org
+L: linuxppc-dev@lists.ozlabs.org
+S: Maintained
+F: drivers/net/wan/fsl_ucc_hdlc*
+
FREESCALE QUICC ENGINE UCC UART DRIVER
M: Timur Tabi <timur@tabi.org>
L: linuxppc-dev@lists.ozlabs.org
F: fs/fscache/
F: include/linux/fscache*.h
+FS-CRYPTO: FILE SYSTEM LEVEL ENCRYPTION SUPPORT
+M: Theodore Y. Ts'o <tytso@mit.edu>
+M: Jaegeuk Kim <jaegeuk@kernel.org>
+S: Supported
+F: fs/crypto/
+F: include/linux/fscrypto.h
+
F2FS FILE SYSTEM
M: Jaegeuk Kim <jaegeuk@kernel.org>
M: Changman Lee <cm224.lee@samsung.com>
F: drivers/media/usb/gspca/m5602/
GSPCA PAC207 SONIXB SUBDRIVER
-M: Hans de Goede <hdegoede@redhat.com>
+M: Hans Verkuil <hverkuil@xs4all.nl>
L: linux-media@vger.kernel.org
T: git git://linuxtv.org/media_tree.git
-S: Maintained
+S: Odd Fixes
F: drivers/media/usb/gspca/pac207.c
GSPCA SN9C20X SUBDRIVER
F: drivers/media/usb/gspca/t613.c
GSPCA USB WEBCAM DRIVER
-M: Hans de Goede <hdegoede@redhat.com>
+M: Hans Verkuil <hverkuil@xs4all.nl>
L: linux-media@vger.kernel.org
T: git git://linuxtv.org/media_tree.git
-S: Maintained
+S: Odd Fixes
F: drivers/media/usb/gspca/
GUID PARTITION TABLE (GPT)
M: Herbert Xu <herbert@gondor.apana.org.au>
L: linux-crypto@vger.kernel.org
S: Odd fixes
+F: Documentation/devicetree/bindings/rng/
F: Documentation/hw_random.txt
F: drivers/char/hw_random/
F: include/linux/hw_random.h
L: linux-remoteproc@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/ohad/hwspinlock.git
+F: Documentation/devicetree/bindings/hwlock/
F: Documentation/hwspinlock.txt
-F: drivers/hwspinlock/hwspinlock_*
+F: drivers/hwspinlock/
F: include/linux/hwspinlock.h
HARMONY SOUND DRIVER
S: Maintained
F: drivers/media/dvb-frontends/hd29l2*
+HEWLETT PACKARD ENTERPRISE ILO NMI WATCHDOG DRIVER
+M: Brian Boylston <brian.boylston@hpe.com>
+S: Supported
+F: Documentation/watchdog/hpwdt.txt
+F: drivers/watchdog/hpwdt.c
+
HEWLETT-PACKARD SMART ARRAY RAID DRIVER (hpsa)
M: Don Brace <don.brace@microsemi.com>
L: iss_storagedev@hp.com
F: net/802/hippi.c
F: drivers/net/hippi/
+HISILICON NETWORK SUBSYSTEM DRIVER
+M: Yisen Zhuang <yisen.zhuang@huawei.com>
+M: Salil Mehta <salil.mehta@huawei.com>
+L: netdev@vger.kernel.org
+W: http://www.hisilicon.com
+S: Maintained
+F: drivers/net/ethernet/hisilicon/
+F: Documentation/devicetree/bindings/net/hisilicon*.txt
+
HISILICON SAS Controller
M: John Garry <john.garry@huawei.com>
W: http://www.hisilicon.com
R: Lars-Peter Clausen <lars@metafoo.de>
R: Peter Meerwald-Stadler <pmeerw@pmeerw.net>
L: linux-iio@vger.kernel.org
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/jic23/iio.git
S: Maintained
+F: Documentation/devicetree/bindings/iio/
F: drivers/iio/
F: drivers/staging/iio/
F: include/linux/iio/
S: Maintained
F: drivers/platform/x86/intel-hid.c
+INTEL VIRTUAL BUTTON DRIVER
+M: AceLan Kao <acelan.kao@canonical.com>
+L: platform-driver-x86@vger.kernel.org
+S: Maintained
+F: drivers/platform/x86/intel-vbtn.c
+
INTEL IDLE DRIVER
M: Len Brown <lenb@kernel.org>
L: linux-pm@vger.kernel.org
S: Supported
F: drivers/infiniband/hw/i40iw/
+INTEL MERRIFIELD GPIO DRIVER
+M: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+L: linux-gpio@vger.kernel.org
+S: Maintained
+F: drivers/gpio/gpio-merrifield.c
+
INTEL-MID GPIO DRIVER
M: David Cohen <david.a.cohen@linux.intel.com>
L: linux-gpio@vger.kernel.org
L: iommu@lists.linux-foundation.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
S: Maintained
+F: Documentation/devicetree/bindings/iommu/
F: drivers/iommu/
IP MASQUERADING
F: drivers/irqchip/
IRQ DOMAINS (IRQ NUMBER MAPPING LIBRARY)
-M: Jiang Liu <jiang.liu@linux.intel.com>
M: Marc Zyngier <marc.zyngier@arm.com>
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git irq/core
L: linux-leds@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/j.anaszewski/linux-leds.git
S: Maintained
+F: Documentation/devicetree/bindings/leds/
F: drivers/leds/
F: include/linux/leds.h
F: drivers/ata/
F: include/linux/ata.h
F: include/linux/libata.h
+F: Documentation/devicetree/bindings/ata/
LIBATA PATA ARASAN COMPACT FLASH CONTROLLER
M: Viresh Kumar <vireshk@kernel.org>
F: drivers/crypto/vmx/
F: drivers/net/ethernet/ibm/ibmveth.*
F: drivers/net/ethernet/ibm/ibmvnic.*
+F: drivers/pci/hotplug/pnv_php.c
F: drivers/pci/hotplug/rpa*
F: drivers/scsi/ibmvscsi/
N: opal
LINUX KERNEL DUMP TEST MODULE (LKDTM)
M: Kees Cook <keescook@chromium.org>
S: Maintained
-F: drivers/misc/lkdtm.c
+F: drivers/misc/lkdtm*
LLC (802.2)
M: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
S: Maintained
F: drivers/media/usb/dvb-usb-v2/lmedm04*
-LOCKDEP AND LOCKSTAT
+LOCKING PRIMITIVES
M: Peter Zijlstra <peterz@infradead.org>
M: Ingo Molnar <mingo@redhat.com>
L: linux-kernel@vger.kernel.org
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git core/locking
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git locking/core
S: Maintained
-F: Documentation/locking/lockdep*.txt
-F: Documentation/locking/lockstat.txt
+F: Documentation/locking/
F: include/linux/lockdep.h
+F: include/linux/spinlock*.h
+F: arch/*/include/asm/spinlock*.h
+F: include/linux/rwlock*.h
+F: include/linux/mutex*.h
+F: arch/*/include/asm/mutex*.h
+F: include/linux/rwsem*.h
+F: arch/*/include/asm/rwsem.h
+F: include/linux/seqlock.h
+F: lib/locking*.[ch]
F: kernel/locking/
LOGICAL DISK MANAGER SUPPORT (LDM, Windows 2000/XP/Vista Dynamic Disks)
L: linux-man@vger.kernel.org
S: Maintained
+MARVELL 88E6XXX ETHERNET SWITCH FABRIC DRIVER
+M: Andrew Lunn <andrew@lunn.ch>
+M: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
+S: Maintained
+F: drivers/net/dsa/mv88e6xxx/
+
MARVELL ARMADA DRM SUPPORT
M: Russell King <rmk+kernel@armlinux.org.uk>
S: Maintained
F: include/uapi/drm/armada_drm.h
F: Documentation/devicetree/bindings/display/armada/
-MARVELL 88E6352 DSA support
-M: Guenter Roeck <linux@roeck-us.net>
-S: Maintained
-F: drivers/net/dsa/mv88e6352.c
-
MARVELL CRYPTO DRIVER
M: Boris Brezillon <boris.brezillon@free-electrons.com>
M: Arnaud Ebalard <arno@natisbad.org>
F: drivers/hwmon/max6697.c
F: include/linux/platform_data/max6697.h
+MAX9860 MONO AUDIO VOICE CODEC DRIVER
+M: Peter Rosin <peda@axentia.se>
+L: alsa-devel@alsa-project.org (moderated for non-subscribers)
+S: Maintained
+F: Documentation/devicetree/bindings/sound/max9860.txt
+F: sound/soc/codecs/max9860.*
+
MAXIM MUIC CHARGER DRIVERS FOR EXYNOS BASED BOARDS
M: Krzysztof Kozlowski <k.kozlowski@samsung.com>
L: linux-pm@vger.kernel.org
S: Maintained
F: drivers/iio/potentiometer/mcp4531.c
+MEDIA DRIVERS FOR RENESAS - FCP
+M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+L: linux-media@vger.kernel.org
+L: linux-renesas-soc@vger.kernel.org
+T: git git://linuxtv.org/media_tree.git
+S: Supported
+F: Documentation/devicetree/bindings/media/renesas,fcp.txt
+F: drivers/media/platform/rcar-fcp.c
+F: include/media/rcar-fcp.h
+
MEDIA DRIVERS FOR RENESAS - VSP1
M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
L: linux-media@vger.kernel.org
F: Documentation/devicetree/bindings/media/renesas,vsp1.txt
F: drivers/media/platform/vsp1/
+MEDIA DRIVERS FOR HELENE
+M: Abylay Ospan <aospan@netup.ru>
+L: linux-media@vger.kernel.org
+W: https://linuxtv.org
+W: http://netup.tv/
+T: git git://linuxtv.org/media_tree.git
+S: Supported
+F: drivers/media/dvb-frontends/helene*
+
MEDIA DRIVERS FOR ASCOT2E
M: Sergey Kozlov <serjk@netup.ru>
+M: Abylay Ospan <aospan@netup.ru>
L: linux-media@vger.kernel.org
W: https://linuxtv.org
W: http://netup.tv/
MEDIA DRIVERS FOR CXD2841ER
M: Sergey Kozlov <serjk@netup.ru>
+M: Abylay Ospan <aospan@netup.ru>
L: linux-media@vger.kernel.org
W: https://linuxtv.org
W: http://netup.tv/
MEDIA DRIVERS FOR HORUS3A
M: Sergey Kozlov <serjk@netup.ru>
+M: Abylay Ospan <aospan@netup.ru>
L: linux-media@vger.kernel.org
W: https://linuxtv.org
W: http://netup.tv/
MEDIA DRIVERS FOR LNBH25
M: Sergey Kozlov <serjk@netup.ru>
+M: Abylay Ospan <aospan@netup.ru>
L: linux-media@vger.kernel.org
W: https://linuxtv.org
W: http://netup.tv/
MEDIA DRIVERS FOR NETUP PCI UNIVERSAL DVB devices
M: Sergey Kozlov <serjk@netup.ru>
+M: Abylay Ospan <aospan@netup.ru>
L: linux-media@vger.kernel.org
W: https://linuxtv.org
W: http://netup.tv/
W: https://linuxtv.org
W: http://palosaari.fi/linux/
Q: http://patchwork.linuxtv.org/project/linux-media/list/
-T: git git://linuxtv.org/anttip/media_tree.git
S: Maintained
-F: drivers/staging/media/mn88472/
-F: drivers/media/dvb-frontends/mn88472.h
+F: drivers/media/dvb-frontends/mn88472*
MN88473 MEDIA DRIVER
M: Antti Palosaari <crope@iki.fi>
L: linux-mmc@vger.kernel.org
T: git git://git.linaro.org/people/ulf.hansson/mmc.git
S: Maintained
+F: Documentation/devicetree/bindings/mmc/
F: drivers/mmc/
F: include/linux/mmc/
F: include/uapi/linux/mmc/
F: drivers/nvme/host/
F: include/linux/nvme.h
+NVM EXPRESS TARGET DRIVER
+M: Christoph Hellwig <hch@lst.de>
+M: Sagi Grimberg <sagi@grimberg.me>
+L: linux-nvme@lists.infradead.org
+S: Supported
+F: drivers/nvme/target/
+
NVMEM FRAMEWORK
M: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
M: Maxime Ripard <maxime.ripard@free-electrons.com>
L: linux-pm@vger.kernel.org
T: git git://git.infradead.org/battery-2.6.git
S: Maintained
+F: Documentation/devicetree/bindings/power/
+F: Documentation/devicetree/bindings/power_supply/
F: include/linux/power_supply.h
F: drivers/power/
X: drivers/power/avs/
F: include/linux/psci.h
F: include/uapi/linux/psci.h
+POWERNV OPERATOR PANEL LCD DISPLAY DRIVER
+M: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
+L: linuxppc-dev@lists.ozlabs.org
+S: Maintained
+F: drivers/char/powernv-op-panel.c
+
PNP SUPPORT
M: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
S: Maintained
F: include/uapi/linux/ptrace.h
F: kernel/ptrace.c
+PULSE8-CEC DRIVER
+M: Hans Verkuil <hverkuil@xs4all.nl>
+L: linux-media@vger.kernel.org
+T: git git://linuxtv.org/media_tree.git
+S: Maintained
+F: drivers/staging/media/pulse8-cec
+
PVRUSB2 VIDEO4LINUX DRIVER
M: Mike Isely <isely@pobox.com>
L: pvrusb2@isely.net (subscribers-only)
F: drivers/media/usb/pvrusb2/
PWC WEBCAM DRIVER
-M: Hans de Goede <hdegoede@redhat.com>
+M: Hans Verkuil <hverkuil@xs4all.nl>
L: linux-media@vger.kernel.org
T: git git://linuxtv.org/media_tree.git
-S: Maintained
+S: Odd Fixes
F: drivers/media/usb/pwc/*
PWM FAN DRIVER
S: Maintained
QAT DRIVER
-M: Tadeusz Struk <tadeusz.struk@intel.com>
+M: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
+M: Salvatore Benedetto <salvatore.benedetto@intel.com>
L: qat-linux@intel.com
S: Supported
F: drivers/crypto/qat/
F: include/uapi/linux/radeonfb.h
RADIOSHARK RADIO DRIVER
-M: Hans de Goede <hdegoede@redhat.com>
+M: Hans Verkuil <hverkuil@xs4all.nl>
L: linux-media@vger.kernel.org
T: git git://linuxtv.org/media_tree.git
S: Maintained
F: drivers/media/radio/radio-shark.c
RADIOSHARK2 RADIO DRIVER
-M: Hans de Goede <hdegoede@redhat.com>
+M: Hans Verkuil <hverkuil@xs4all.nl>
L: linux-media@vger.kernel.org
T: git git://linuxtv.org/media_tree.git
S: Maintained
S: Maintained
RDC R6040 FAST ETHERNET DRIVER
-M: Florian Fainelli <florian@openwrt.org>
+M: Florian Fainelli <f.fainelli@gmail.com>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/ethernet/rdc/r6040.c
L: linux-kernel@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap.git
S: Supported
+F: Documentation/devicetree/bindings/regmap/
F: drivers/base/regmap/
F: include/linux/regmap.h
L: linux-remoteproc@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/ohad/remoteproc.git
S: Maintained
-F: drivers/remoteproc/
+F: Documentation/devicetree/bindings/remoteproc/
F: Documentation/remoteproc.txt
+F: drivers/remoteproc/
F: include/linux/remoteproc.h
REMOTE PROCESSOR MESSAGING (RPMSG) SUBSYSTEM
ROCKER DRIVER
M: Jiri Pirko <jiri@resnulli.us>
-M: Scott Feldman <sfeldma@gmail.com>
L: netdev@vger.kernel.org
S: Supported
F: drivers/net/ethernet/rocker/
F: drivers/platform/x86/samsung-laptop.c
SAMSUNG AUDIO (ASoC) DRIVERS
+M: Krzysztof Kozlowski <k.kozlowski@samsung.com>
M: Sangbeom Kim <sbkim73@samsung.com>
+M: Sylwester Nawrocki <s.nawrocki@samsung.com>
L: alsa-devel@alsa-project.org (moderated for non-subscribers)
S: Supported
F: sound/soc/samsung/
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
L: linux-serial@vger.kernel.org
S: Maintained
+F: Documentation/devicetree/bindings/serial/
F: drivers/tty/serial/
SYNOPSYS DESIGNWARE DMAC DRIVER
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mkp/scsi.git
L: linux-scsi@vger.kernel.org
S: Maintained
+F: Documentation/devicetree/bindings/scsi/
F: drivers/scsi/
F: include/scsi/
K: \bsecure_computing
K: \bTIF_SECCOMP\b
+SECURE DIGITAL HOST CONTROLLER INTERFACE (SDHCI) Broadcom BRCMSTB DRIVER
+M: Al Cooper <alcooperx@gmail.com>
+L: linux-mmc@vger.kernel.org
+L: bcm-kernel-feedback-list@broadcom.com
+S: Maintained
+F: drivers/mmc/host/sdhci-brcmstb*
+
SECURE DIGITAL HOST CONTROLLER INTERFACE (SDHCI) SAMSUNG DRIVER
M: Ben Dooks <ben-linux@fluff.org>
M: Jaehoon Chung <jh80.chung@samsung.com>
S: Supported
F: drivers/scsi/be2iscsi/
-Emulex 10Gbps NIC BE2, BE3-R, Lancer, Skyhawk-R DRIVER
+Emulex 10Gbps NIC BE2, BE3-R, Lancer, Skyhawk-R DRIVER (be2net)
M: Sathya Perla <sathya.perla@broadcom.com>
M: Ajit Khaparde <ajit.khaparde@broadcom.com>
-M: Padmanabh Ratnakar <padmanabh.ratnakar@broadcom.com>
M: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
M: Somnath Kotur <somnath.kotur@broadcom.com>
L: netdev@vger.kernel.org
M: Casey Schaufler <casey@schaufler-ca.com>
L: linux-security-module@vger.kernel.org
W: http://schaufler-ca.com
-T: git git://git.gitorious.org/smack-next/kernel.git
+T: git git://github.com/cschaufler/smack-next
S: Maintained
F: Documentation/security/Smack.txt
F: security/smack/
L: alsa-devel@alsa-project.org (moderated for non-subscribers)
W: http://alsa-project.org/main/index.php/ASoC
S: Supported
+F: Documentation/devicetree/bindings/sound/
F: Documentation/sound/alsa/soc/
F: sound/soc/
F: include/sound/soc*
T: git git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi.git
Q: http://patchwork.kernel.org/project/spi-devel-general/list/
S: Maintained
+F: Documentation/devicetree/bindings/spi/
F: Documentation/spi/
F: drivers/spi/
F: include/linux/spi/
M: Jonathan Cameron <jic23@kernel.org>
L: linux-iio@vger.kernel.org
S: Odd Fixes
+F: Documentation/devicetree/bindings/staging/iio/
F: drivers/staging/iio/
STAGING - LIRC (LINUX INFRARED REMOTE CONTROL) DRIVERS
F: drivers/thermal/cpu_cooling.c
F: include/linux/cpu_cooling.h
-THINGM BLINK(1) USB RGB LED DRIVER
-M: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
-S: Maintained
-F: drivers/hid/hid-thingm.c
-
THINKPAD ACPI EXTRAS DRIVER
M: Henrique de Moraes Holschuh <ibm-acpi@hmh.eng.br>
L: ibm-acpi-devel@lists.sourceforge.net
F: Documentation/scsi/ufs.txt
F: drivers/scsi/ufs/
+UNIVERSAL FLASH STORAGE HOST CONTROLLER DRIVER DWC HOOKS
+M: Joao Pinto <Joao.Pinto@synopsys.com>
+L: linux-scsi@vger.kernel.org
+S: Supported
+F: drivers/scsi/ufs/*dwc*
+
UNSORTED BLOCK IMAGES (UBI)
M: Artem Bityutskiy <dedekind1@gmail.com>
M: Richard Weinberger <richard@nod.at>
F: drivers/net/wireless/ath/ar5523/
USB ATTACHED SCSI
-M: Hans de Goede <hdegoede@redhat.com>
-M: Gerd Hoffmann <kraxel@redhat.com>
+M: Oliver Neukum <oneukum@suse.com>
L: linux-usb@vger.kernel.org
L: linux-scsi@vger.kernel.org
S: Maintained
F: drivers/net/vmxnet3/
VMware PVSCSI driver
-M: Arvind Kumar <arvindkumar@vmware.com>
+M: Jim Gill <jgill@vmware.com>
M: VMware PV-Drivers <pv-drivers@vmware.com>
L: linux-scsi@vger.kernel.org
S: Maintained
static bool cpu_write_needs_clflush(struct drm_i915_gem_object *obj)
{
+ if (obj->base.write_domain == I915_GEM_DOMAIN_CPU)
+ return false;
+
if (!cpu_cache_is_coherent(obj->base.dev, obj->cache_level))
return true;
return obj->pin_display;
}
+ static int
+ insert_mappable_node(struct drm_i915_private *i915,
+ struct drm_mm_node *node, u32 size)
+ {
+ memset(node, 0, sizeof(*node));
+ return drm_mm_insert_node_in_range_generic(&i915->ggtt.base.mm, node,
+ size, 0, 0, 0,
+ i915->ggtt.mappable_end,
+ DRM_MM_SEARCH_DEFAULT,
+ DRM_MM_CREATE_DEFAULT);
+ }
+
+ static void
+ remove_mappable_node(struct drm_mm_node *node)
+ {
+ drm_mm_remove_node(node);
+ }
+
/* some bookkeeping */
static void i915_gem_info_add_obj(struct drm_i915_private *dev_priv,
size_t size)
int i915_mutex_lock_interruptible(struct drm_device *dev)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = to_i915(dev);
int ret;
ret = i915_gem_wait_for_error(&dev_priv->gpu_error);
static int
i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj)
{
- struct address_space *mapping = file_inode(obj->base.filp)->i_mapping;
+ struct address_space *mapping = obj->base.filp->f_mapping;
char *vaddr = obj->phys_handle->vaddr;
struct sg_table *st;
struct scatterlist *sg;
vaddr += PAGE_SIZE;
}
- i915_gem_chipset_flush(obj->base.dev);
+ i915_gem_chipset_flush(to_i915(obj->base.dev));
st = kmalloc(sizeof(*st), GFP_KERNEL);
if (st == NULL)
obj->dirty = 0;
if (obj->dirty) {
- struct address_space *mapping = file_inode(obj->base.filp)->i_mapping;
+ struct address_space *mapping = obj->base.filp->f_mapping;
char *vaddr = obj->phys_handle->vaddr;
int i;
}
drm_clflush_virt_range(vaddr, args->size);
- i915_gem_chipset_flush(dev);
+ i915_gem_chipset_flush(to_i915(dev));
out:
intel_fb_obj_flush(obj, false, ORIGIN_CPU);
void *i915_gem_object_alloc(struct drm_device *dev)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = to_i915(dev);
return kmem_cache_zalloc(dev_priv->objects, GFP_KERNEL);
}
void i915_gem_object_free(struct drm_i915_gem_object *obj)
{
- struct drm_i915_private *dev_priv = obj->base.dev->dev_private;
+ struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
kmem_cache_free(dev_priv->objects, obj);
}
return -EINVAL;
/* Allocate the new object */
- obj = i915_gem_alloc_object(dev, size);
- if (obj == NULL)
- return -ENOMEM;
+ obj = i915_gem_object_create(dev, size);
+ if (IS_ERR(obj))
+ return PTR_ERR(obj);
ret = drm_gem_handle_create(file, &obj->base, &handle);
/* drop reference from allocate - handle holds it now */
/**
* Creates a new mm object and returns a handle to it.
+ * @dev: drm device pointer
+ * @data: ioctl data blob
+ * @file: drm file pointer
*/
int
i915_gem_create_ioctl(struct drm_device *dev, void *data,
*needs_clflush = 0;
- if (WARN_ON((obj->ops->flags & I915_GEM_OBJECT_HAS_STRUCT_PAGE) == 0))
+ if (WARN_ON(!i915_gem_object_has_struct_page(obj)))
return -EINVAL;
if (!(obj->base.read_domains & I915_GEM_DOMAIN_CPU)) {
return ret ? - EFAULT : 0;
}
+ static inline unsigned long
+ slow_user_access(struct io_mapping *mapping,
+ uint64_t page_base, int page_offset,
+ char __user *user_data,
+ unsigned long length, bool pwrite)
+ {
+ void __iomem *ioaddr;
+ void *vaddr;
+ uint64_t unwritten;
+
+ ioaddr = io_mapping_map_wc(mapping, page_base, PAGE_SIZE);
+ /* We can use the cpu mem copy function because this is X86. */
+ vaddr = (void __force *)ioaddr + page_offset;
+ if (pwrite)
+ unwritten = __copy_from_user(vaddr, user_data, length);
+ else
+ unwritten = __copy_to_user(user_data, vaddr, length);
+
+ io_mapping_unmap(ioaddr);
+ return unwritten;
+ }
+
+ static int
+ i915_gem_gtt_pread(struct drm_device *dev,
+ struct drm_i915_gem_object *obj, uint64_t size,
+ uint64_t data_offset, uint64_t data_ptr)
+ {
+ struct drm_i915_private *dev_priv = to_i915(dev);
+ struct i915_ggtt *ggtt = &dev_priv->ggtt;
+ struct drm_mm_node node;
+ char __user *user_data;
+ uint64_t remain;
+ uint64_t offset;
+ int ret;
+
+ ret = i915_gem_obj_ggtt_pin(obj, 0, PIN_MAPPABLE);
+ if (ret) {
+ ret = insert_mappable_node(dev_priv, &node, PAGE_SIZE);
+ if (ret)
+ goto out;
+
+ ret = i915_gem_object_get_pages(obj);
+ if (ret) {
+ remove_mappable_node(&node);
+ goto out;
+ }
+
+ i915_gem_object_pin_pages(obj);
+ } else {
+ node.start = i915_gem_obj_ggtt_offset(obj);
+ node.allocated = false;
+ ret = i915_gem_object_put_fence(obj);
+ if (ret)
+ goto out_unpin;
+ }
+
+ ret = i915_gem_object_set_to_gtt_domain(obj, false);
+ if (ret)
+ goto out_unpin;
+
+ user_data = u64_to_user_ptr(data_ptr);
+ remain = size;
+ offset = data_offset;
+
+ mutex_unlock(&dev->struct_mutex);
+ if (likely(!i915.prefault_disable)) {
+ ret = fault_in_multipages_writeable(user_data, remain);
+ if (ret) {
+ mutex_lock(&dev->struct_mutex);
+ goto out_unpin;
+ }
+ }
+
+ while (remain > 0) {
+ /* Operation in this page
+ *
+ * page_base = page offset within aperture
+ * page_offset = offset within page
+ * page_length = bytes to copy for this page
+ */
+ u32 page_base = node.start;
+ unsigned page_offset = offset_in_page(offset);
+ unsigned page_length = PAGE_SIZE - page_offset;
+ page_length = remain < page_length ? remain : page_length;
+ if (node.allocated) {
+ wmb();
+ ggtt->base.insert_page(&ggtt->base,
+ i915_gem_object_get_dma_address(obj, offset >> PAGE_SHIFT),
+ node.start,
+ I915_CACHE_NONE, 0);
+ wmb();
+ } else {
+ page_base += offset & PAGE_MASK;
+ }
+ /* This is a slow read/write as it tries to read from
+ * and write to user memory which may result into page
+ * faults, and so we cannot perform this under struct_mutex.
+ */
+ if (slow_user_access(ggtt->mappable, page_base,
+ page_offset, user_data,
+ page_length, false)) {
+ ret = -EFAULT;
+ break;
+ }
+
+ remain -= page_length;
+ user_data += page_length;
+ offset += page_length;
+ }
+
+ mutex_lock(&dev->struct_mutex);
+ if (ret == 0 && (obj->base.read_domains & I915_GEM_DOMAIN_GTT) == 0) {
+ /* The user has modified the object whilst we tried
+ * reading from it, and we now have no idea what domain
+ * the pages should be in. As we have just been touching
+ * them directly, flush everything back to the GTT
+ * domain.
+ */
+ ret = i915_gem_object_set_to_gtt_domain(obj, false);
+ }
+
+ out_unpin:
+ if (node.allocated) {
+ wmb();
+ ggtt->base.clear_range(&ggtt->base,
+ node.start, node.size,
+ true);
+ i915_gem_object_unpin_pages(obj);
+ remove_mappable_node(&node);
+ } else {
+ i915_gem_object_ggtt_unpin(obj);
+ }
+ out:
+ return ret;
+ }
+
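
For reference, the new i915_gem_gtt_pread() fallback above is reached through the long-standing pread ioctl. A minimal userspace sketch, assuming fd is an open i915 DRM device node and handle names an existing GEM object (the helper name and the elided error handling are illustrative, not part of this patch):

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

/* Illustrative only: read size bytes at offset from a GEM object into
 * dst. With this patch, objects without shmem backing take the new
 * GTT pread path instead of returning -EINVAL.
 */
static int gem_pread(int fd, uint32_t handle, uint64_t offset,
                     void *dst, uint64_t size)
{
        struct drm_i915_gem_pread arg;

        memset(&arg, 0, sizeof(arg));
        arg.handle = handle;
        arg.offset = offset;
        arg.size = size;
        arg.data_ptr = (uintptr_t)dst;

        return ioctl(fd, DRM_IOCTL_I915_GEM_PREAD, &arg);
}
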
static int
i915_gem_shmem_pread(struct drm_device *dev,
struct drm_i915_gem_object *obj,
int needs_clflush = 0;
struct sg_page_iter sg_iter;
+ if (!i915_gem_object_has_struct_page(obj))
+ return -ENODEV;
+
user_data = u64_to_user_ptr(args->data_ptr);
remain = args->size;
/**
* Reads data from the object referenced by handle.
+ * @dev: drm device pointer
+ * @data: ioctl data blob
+ * @file: drm file pointer
*
* On error, the contents of *data are undefined.
*/
goto out;
}
- /* prime objects have no backing filp to GEM pread/pwrite
- * pages from.
- */
- if (!obj->base.filp) {
- ret = -EINVAL;
- goto out;
- }
-
trace_i915_gem_object_pread(obj, args->offset, args->size);
ret = i915_gem_shmem_pread(dev, obj, args, file);
+ /* pread for non shmem backed objects */
+ if (ret == -EFAULT || ret == -ENODEV)
+ ret = i915_gem_gtt_pread(dev, obj, args->size,
+ args->offset, args->data_ptr);
+
out:
drm_gem_object_unreference(&obj->base);
unlock:
/**
* This is the fast pwrite path, where we copy the data directly from the
* user into the GTT, uncached.
+ * @dev: drm device pointer
+ * @obj: i915 gem object
+ * @args: pwrite arguments structure
+ * @file: drm file pointer
*/
static int
- i915_gem_gtt_pwrite_fast(struct drm_device *dev,
+ i915_gem_gtt_pwrite_fast(struct drm_i915_private *i915,
struct drm_i915_gem_object *obj,
struct drm_i915_gem_pwrite *args,
struct drm_file *file)
{
- struct drm_i915_private *dev_priv = to_i915(dev);
- struct i915_ggtt *ggtt = &dev_priv->ggtt;
- ssize_t remain;
- loff_t offset, page_base;
+ struct i915_ggtt *ggtt = &i915->ggtt;
+ struct drm_device *dev = obj->base.dev;
+ struct drm_mm_node node;
+ uint64_t remain, offset;
char __user *user_data;
- int page_offset, page_length, ret;
+ int ret;
+ bool hit_slow_path = false;
+
+ if (obj->tiling_mode != I915_TILING_NONE)
+ return -EFAULT;
ret = i915_gem_obj_ggtt_pin(obj, 0, PIN_MAPPABLE | PIN_NONBLOCK);
- if (ret)
- goto out;
+ if (ret) {
+ ret = insert_mappable_node(i915, &node, PAGE_SIZE);
+ if (ret)
+ goto out;
+
+ ret = i915_gem_object_get_pages(obj);
+ if (ret) {
+ remove_mappable_node(&node);
+ goto out;
+ }
+
+ i915_gem_object_pin_pages(obj);
+ } else {
+ node.start = i915_gem_obj_ggtt_offset(obj);
+ node.allocated = false;
+ ret = i915_gem_object_put_fence(obj);
+ if (ret)
+ goto out_unpin;
+ }
ret = i915_gem_object_set_to_gtt_domain(obj, true);
if (ret)
goto out_unpin;
- ret = i915_gem_object_put_fence(obj);
- if (ret)
- goto out_unpin;
+ intel_fb_obj_invalidate(obj, ORIGIN_GTT);
+ obj->dirty = true;
user_data = u64_to_user_ptr(args->data_ptr);
+ offset = args->offset;
remain = args->size;
-
- offset = i915_gem_obj_ggtt_offset(obj) + args->offset;
-
- intel_fb_obj_invalidate(obj, ORIGIN_GTT);
-
- while (remain > 0) {
+ while (remain) {
/* Operation in this page
*
* page_base = page offset within aperture
* page_offset = offset within page
* page_length = bytes to copy for this page
*/
- page_base = offset & PAGE_MASK;
- page_offset = offset_in_page(offset);
- page_length = remain;
- if ((page_offset + remain) > PAGE_SIZE)
- page_length = PAGE_SIZE - page_offset;
-
+ u32 page_base = node.start;
+ unsigned page_offset = offset_in_page(offset);
+ unsigned page_length = PAGE_SIZE - page_offset;
+ page_length = remain < page_length ? remain : page_length;
+ if (node.allocated) {
+ wmb(); /* flush the write before we modify the GGTT */
+ ggtt->base.insert_page(&ggtt->base,
+ i915_gem_object_get_dma_address(obj, offset >> PAGE_SHIFT),
+ node.start, I915_CACHE_NONE, 0);
+ wmb(); /* flush modifications to the GGTT (insert_page) */
+ } else {
+ page_base += offset & PAGE_MASK;
+ }
/* If we get a fault while copying data, then (presumably) our
* source page isn't available. Return the error and we'll
* retry in the slow path.
+ * If the object is non-shmem backed, we retry with the
+ * path that handles page faults.
*/
if (fast_user_write(ggtt->mappable, page_base,
page_offset, user_data, page_length)) {
- ret = -EFAULT;
- goto out_flush;
+ hit_slow_path = true;
+ mutex_unlock(&dev->struct_mutex);
+ if (slow_user_access(ggtt->mappable,
+ page_base,
+ page_offset, user_data,
+ page_length, true)) {
+ ret = -EFAULT;
+ mutex_lock(&dev->struct_mutex);
+ goto out_flush;
+ }
+
+ mutex_lock(&dev->struct_mutex);
}
remain -= page_length;
}
out_flush:
+ if (hit_slow_path) {
+ if (ret == 0 &&
+ (obj->base.read_domains & I915_GEM_DOMAIN_GTT) == 0) {
+ /* The user has modified the object whilst we tried
+ * reading from it, and we now have no idea what domain
+ * the pages should be in. As we have just been touching
+ * them directly, flush everything back to the GTT
+ * domain.
+ */
+ ret = i915_gem_object_set_to_gtt_domain(obj, false);
+ }
+ }
+
intel_fb_obj_flush(obj, false, ORIGIN_GTT);
out_unpin:
- i915_gem_object_ggtt_unpin(obj);
+ if (node.allocated) {
+ wmb();
+ ggtt->base.clear_range(&ggtt->base,
+ node.start, node.size,
+ true);
+ i915_gem_object_unpin_pages(obj);
+ remove_mappable_node(&node);
+ } else {
+ i915_gem_object_ggtt_unpin(obj);
+ }
out:
return ret;
}
}
if (needs_clflush_after)
- i915_gem_chipset_flush(dev);
+ i915_gem_chipset_flush(to_i915(dev));
else
obj->cache_dirty = true;
/**
* Writes data to the object referenced by handle.
+ * @dev: drm device
+ * @data: ioctl data blob
+ * @file: drm file
*
* On error, the contents of the buffer that were to be modified are undefined.
*/
i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
struct drm_file *file)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = to_i915(dev);
struct drm_i915_gem_pwrite *args = data;
struct drm_i915_gem_object *obj;
int ret;
goto out;
}
- /* prime objects have no backing filp to GEM pread/pwrite
- * pages from.
- */
- if (!obj->base.filp) {
- ret = -EINVAL;
- goto out;
- }
-
trace_i915_gem_object_pwrite(obj, args->offset, args->size);
ret = -EFAULT;
* pread/pwrite currently are reading and writing from the CPU
* perspective, requiring manual detiling by the client.
*/
- if (obj->tiling_mode == I915_TILING_NONE &&
- obj->base.write_domain != I915_GEM_DOMAIN_CPU &&
+ if (!i915_gem_object_has_struct_page(obj) ||
cpu_write_needs_clflush(obj)) {
- ret = i915_gem_gtt_pwrite_fast(dev, obj, args, file);
+ ret = i915_gem_gtt_pwrite_fast(dev_priv, obj, args, file);
/* Note that the gtt paths might fail with non-page-backed user
* pointers (e.g. gtt mappings when moving data between
* textures). Fallback to the shmem path in that case. */
}
- if (ret == -EFAULT || ret == -ENOSPC) {
+ if (ret == -EFAULT) {
if (obj->phys_handle)
ret = i915_gem_phys_pwrite(obj, args, file);
- else
+ else if (i915_gem_object_has_struct_page(obj))
ret = i915_gem_shmem_pwrite(dev, obj, args, file);
+ else
+ ret = -ENODEV;
}
out:
return 0;
}
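
The pwrite side mirrors this: the GTT fast path is tried first for objects that either lack struct pages or need a clflush, and on -EFAULT the code falls back to phys_pwrite, then shmem_pwrite, and finally -ENODEV. A matching userspace sketch, under the same assumptions and headers as the pread example above:

static int gem_pwrite(int fd, uint32_t handle, uint64_t offset,
                      const void *src, uint64_t size)
{
        struct drm_i915_gem_pwrite arg;

        memset(&arg, 0, sizeof(arg));
        arg.handle = handle;
        arg.offset = offset;
        arg.size = size;
        arg.data_ptr = (uintptr_t)src;

        return ioctl(fd, DRM_IOCTL_I915_GEM_PWRITE, &arg);
}
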
- static void fake_irq(unsigned long data)
- {
- wake_up_process((struct task_struct *)data);
- }
-
- static bool missed_irq(struct drm_i915_private *dev_priv,
- struct intel_engine_cs *engine)
- {
- return test_bit(engine->id, &dev_priv->gpu_error.missed_irq_rings);
- }
-
static unsigned long local_clock_us(unsigned *cpu)
{
unsigned long t;
return this_cpu != cpu;
}
- static int __i915_spin_request(struct drm_i915_gem_request *req, int state)
+ bool __i915_spin_request(const struct drm_i915_gem_request *req,
+ int state, unsigned long timeout_us)
{
- unsigned long timeout;
unsigned cpu;
/* When waiting for high frequency requests, e.g. during synchronous
* takes to sleep on a request, on the order of a microsecond.
*/
- if (req->engine->irq_refcount)
- return -EBUSY;
-
- /* Only spin if we know the GPU is processing this request */
- if (!i915_gem_request_started(req, true))
- return -EAGAIN;
-
- timeout = local_clock_us(&cpu) + 5;
- while (!need_resched()) {
- if (i915_gem_request_completed(req, true))
- return 0;
+ timeout_us += local_clock_us(&cpu);
+ do {
+ if (i915_gem_request_completed(req))
+ return true;
if (signal_pending_state(state, current))
break;
- if (busywait_stop(timeout, cpu))
+ if (busywait_stop(timeout_us, cpu))
break;
cpu_relax_lowlatency();
- }
+ } while (!need_resched());
- if (i915_gem_request_completed(req, false))
- return 0;
-
- return -EAGAIN;
+ return false;
}
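
The reworked spinner above bounds its busy-wait with a per-cpu clock and backs off as soon as rescheduling is needed. The shape of the pattern, as a self-contained userspace sketch (names are illustrative):

#include <stdbool.h>
#include <stdint.h>
#include <time.h>

static uint64_t clock_us(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000000ull + ts.tv_nsec / 1000;
}

/* Spin on *done for at most timeout_us microseconds; return true if
 * the condition was met while spinning, false if the caller should
 * fall back to sleeping (the kernel version also bails out early on
 * pending signals and need_resched()).
 */
static bool spin_wait_us(volatile int *done, unsigned long timeout_us)
{
        uint64_t deadline = clock_us() + timeout_us;

        do {
                if (*done)
                        return true;
        } while (clock_us() < deadline);

        return false;
}
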
/**
* @req: request to wait on
* @interruptible: do an interruptible wait (normally yes)
* @timeout: in - how long to wait (NULL forever); out - how much time remaining
+ * @rps: RPS client
*
* Note: It is of utmost importance that the passed in seqno and reset_counter
* values have been read by the caller in an smp safe manner. Where read-side
s64 *timeout,
struct intel_rps_client *rps)
{
- struct intel_engine_cs *engine = i915_gem_request_get_engine(req);
- struct drm_device *dev = engine->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- const bool irq_test_in_progress =
- ACCESS_ONCE(dev_priv->gpu_error.test_irq_rings) & intel_engine_flag(engine);
int state = interruptible ? TASK_INTERRUPTIBLE : TASK_UNINTERRUPTIBLE;
- DEFINE_WAIT(wait);
- unsigned long timeout_expire;
+ DEFINE_WAIT(reset);
+ struct intel_wait wait;
+ unsigned long timeout_remain;
s64 before = 0; /* Only to silence a compiler warning. */
- int ret;
+ int ret = 0;
- WARN(!intel_irqs_enabled(dev_priv), "IRQs disabled");
+ might_sleep();
if (list_empty(&req->list))
return 0;
- if (i915_gem_request_completed(req, true))
+ if (i915_gem_request_completed(req))
return 0;
- timeout_expire = 0;
+ timeout_remain = MAX_SCHEDULE_TIMEOUT;
if (timeout) {
if (WARN_ON(*timeout < 0))
return -EINVAL;
if (*timeout == 0)
return -ETIME;
- timeout_expire = jiffies + nsecs_to_jiffies_timeout(*timeout);
+ timeout_remain = nsecs_to_jiffies_timeout(*timeout);
/*
* Record current time in case interrupted by signal, or wedged.
before = ktime_get_raw_ns();
}
- if (INTEL_INFO(dev_priv)->gen >= 6)
- gen6_rps_boost(dev_priv, rps, req->emitted_jiffies);
-
trace_i915_gem_request_wait_begin(req);
- /* Optimistic spin for the next jiffie before touching IRQs */
- ret = __i915_spin_request(req, state);
- if (ret == 0)
- goto out;
-
- if (!irq_test_in_progress && WARN_ON(!engine->irq_get(engine))) {
- ret = -ENODEV;
- goto out;
- }
+ /* This client is about to stall waiting for the GPU. In many cases
+ * this is undesirable and limits the throughput of the system, as
+ * many clients cannot continue processing user input/output whilst
+ * blocked. RPS autotuning may take tens of milliseconds to respond
+ * to the GPU load and thus incurs additional latency for the client.
+ * We can circumvent that by promoting the GPU frequency to maximum
+ * before we wait. This makes the GPU throttle up much more quickly
+ * (good for benchmarks and user experience, e.g. window animations),
+ * but at a cost of spending more power processing the workload
+ * (bad for battery). Not all clients even want their results
+ * immediately and for them we should just let the GPU select its own
+ * frequency to maximise efficiency. To prevent a single client from
+ * forcing the clocks too high for the whole system, we only allow
+ * each client to waitboost once in a busy period.
+ */
+ if (INTEL_INFO(req->i915)->gen >= 6)
+ gen6_rps_boost(req->i915, rps, req->emitted_jiffies);
- for (;;) {
- struct timer_list timer;
+ /* Optimistic spin for the next ~jiffie before touching IRQs */
+ if (i915_spin_request(req, state, 5))
+ goto complete;
- prepare_to_wait(&engine->irq_queue, &wait, state);
+ set_current_state(state);
+ add_wait_queue(&req->i915->gpu_error.wait_queue, &reset);
- /* We need to check whether any gpu reset happened in between
- * the request being submitted and now. If a reset has occurred,
- * the request is effectively complete (we either are in the
- * process of or have discarded the rendering and completely
- * reset the GPU. The results of the request are lost and we
- * are free to continue on with the original operation.
+ intel_wait_init(&wait, req->seqno);
+ if (intel_engine_add_wait(req->engine, &wait))
+ /* In order to check that we haven't missed the interrupt
+ * as we enabled it, we need to kick ourselves to do a
+ * coherent check on the seqno before we sleep.
*/
- if (req->reset_counter != i915_reset_counter(&dev_priv->gpu_error)) {
- ret = 0;
- break;
- }
-
- if (i915_gem_request_completed(req, false)) {
- ret = 0;
- break;
- }
+ goto wakeup;
+ for (;;) {
if (signal_pending_state(state, current)) {
ret = -ERESTARTSYS;
break;
}
- if (timeout && time_after_eq(jiffies, timeout_expire)) {
+ timeout_remain = io_schedule_timeout(timeout_remain);
+ if (timeout_remain == 0) {
ret = -ETIME;
break;
}
- timer.function = NULL;
- if (timeout || missed_irq(dev_priv, engine)) {
- unsigned long expire;
+ if (intel_wait_complete(&wait))
+ break;
- setup_timer_on_stack(&timer, fake_irq, (unsigned long)current);
- expire = missed_irq(dev_priv, engine) ? jiffies + 1 : timeout_expire;
- mod_timer(&timer, expire);
- }
+ set_current_state(state);
- io_schedule();
+ wakeup:
+ /* Carefully check if the request is complete, giving time
+ * for the seqno to be visible following the interrupt.
+ * We also have to check in case we are kicked by the GPU
+ * reset in order to drop the struct_mutex.
+ */
+ if (__i915_request_irq_complete(req))
+ break;
- if (timer.function) {
- del_singleshot_timer_sync(&timer);
- destroy_timer_on_stack(&timer);
- }
+ /* Only spin if we know the GPU is processing this request */
+ if (i915_spin_request(req, state, 2))
+ break;
}
- if (!irq_test_in_progress)
- engine->irq_put(engine);
+ remove_wait_queue(&req->i915->gpu_error.wait_queue, &reset);
- finish_wait(&engine->irq_queue, &wait);
-
- out:
+ intel_engine_remove_wait(req->engine, &wait);
+ __set_current_state(TASK_RUNNING);
+ complete:
trace_i915_gem_request_wait_end(req);
if (timeout) {
*timeout = 0;
}
+ if (rps && req->seqno == req->engine->last_submitted_seqno) {
+ /* The GPU is now idle and this client has stalled.
+ * Since no other client has submitted a request in the
+ * meantime, assume that this client is the only one
+ * supplying work to the GPU but is unable to keep that
+ * work supplied because it is waiting. Since the GPU is
+ * then never kept fully busy, RPS autoclocking will
+ * keep the clocks relatively low, causing further delays.
+ * Compensate by giving the synchronous client credit for
+ * a waitboost next time.
+ */
+ spin_lock(&req->i915->rps.client_lock);
+ list_del_init(&rps->link);
+ spin_unlock(&req->i915->rps.client_lock);
+ }
+
return ret;
}
list_del_init(&request->list);
i915_gem_request_remove_from_client(request);
+ if (request->previous_context) {
+ if (i915.enable_execlists)
+ intel_lr_context_unpin(request->previous_context,
+ request->engine);
+ }
+
+ i915_gem_context_unreference(request->ctx);
i915_gem_request_unreference(request);
}
struct intel_engine_cs *engine = req->engine;
struct drm_i915_gem_request *tmp;
- lockdep_assert_held(&engine->dev->struct_mutex);
+ lockdep_assert_held(&engine->i915->drm.struct_mutex);
if (list_empty(&req->list))
return;
/**
* Waits for a request to be signaled, and cleans up the
* request and object lists appropriately for that event.
+ * @req: request to wait on
*/
int
i915_wait_request(struct drm_i915_gem_request *req)
interruptible = dev_priv->mm.interruptible;
- BUG_ON(!mutex_is_locked(&dev_priv->dev->struct_mutex));
+ BUG_ON(!mutex_is_locked(&dev_priv->drm.struct_mutex));
ret = __i915_wait_request(req, interruptible, NULL, NULL);
if (ret)
return ret;
/* If the GPU hung, we want to keep the requests to find the guilty. */
- if (req->reset_counter == i915_reset_counter(&dev_priv->gpu_error))
+ if (!i915_reset_in_progress(&dev_priv->gpu_error))
__i915_gem_request_retire__upto(req);
return 0;
/**
* Ensures that all rendering to the object has completed and the object is
* safe to unbind from the GTT or access from the CPU.
+ * @obj: i915 gem object
+ * @readonly: waiting for read access or write
*/
int
i915_gem_object_wait_rendering(struct drm_i915_gem_object *obj,
else if (obj->last_write_req == req)
i915_gem_object_retire__write(obj);
- if (req->reset_counter == i915_reset_counter(&req->i915->gpu_error))
+ if (!i915_reset_in_progress(&req->i915->gpu_error))
__i915_gem_request_retire__upto(req);
}
bool readonly)
{
struct drm_device *dev = obj->base.dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = to_i915(dev);
struct drm_i915_gem_request *requests[I915_NUM_ENGINES];
int ret, i, n = 0;
return &fpriv->rps;
}
+ static enum fb_op_origin
+ write_origin(struct drm_i915_gem_object *obj, unsigned domain)
+ {
+ return domain == I915_GEM_DOMAIN_GTT && !obj->has_wc_mmap ?
+ ORIGIN_GTT : ORIGIN_CPU;
+ }
+
/**
* Called when user space prepares to use an object with the CPU, either
* through the mmap ioctl's mapping or a GTT mapping.
+ * @dev: drm device
+ * @data: ioctl data blob
+ * @file: drm file
*/
int
i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
ret = i915_gem_object_set_to_cpu_domain(obj, write_domain != 0);
if (write_domain != 0)
- intel_fb_obj_invalidate(obj,
- write_domain == I915_GEM_DOMAIN_GTT ?
- ORIGIN_GTT : ORIGIN_CPU);
+ intel_fb_obj_invalidate(obj, write_origin(obj, write_domain));
unref:
drm_gem_object_unreference(&obj->base);
/**
* Called when user space has done writes to this buffer
+ * @dev: drm device
+ * @data: ioctl data blob
+ * @file: drm file
*/
int
i915_gem_sw_finish_ioctl(struct drm_device *dev, void *data,
}
/**
- * Maps the contents of an object, returning the address it is mapped
- * into.
+ * i915_gem_mmap_ioctl - Maps the contents of an object, returning the address
+ * it is mapped to.
+ * @dev: drm device
+ * @data: ioctl data blob
+ * @file: drm file
*
* While the mapping holds a reference on the contents of the object, it doesn't
* imply a ref on the object itself.
else
addr = -ENOMEM;
up_write(&mm->mmap_sem);
+
+ /* This may race, but that's ok, it only gets set */
+ WRITE_ONCE(to_intel_bo(obj)->has_wc_mmap, true);
}
drm_gem_object_unreference_unlocked(obj);
if (IS_ERR((void *)addr))
return size;
/* Previous chips need a power-of-two fence region when tiling */
- if (INTEL_INFO(dev)->gen == 3)
+ if (IS_GEN3(dev))
gtt_size = 1024*1024;
else
gtt_size = 512*1024;
/**
* i915_gem_get_gtt_alignment - return required GTT alignment for an object
- * @obj: object to check
+ * @dev: drm device
+ * @size: object size
+ * @tiling_mode: tiling mode
* @fenced: is fenced alignment required or not
*
* Return the required GTT alignment for an object, taking into account
* potential fence register mapping.
static int i915_gem_object_create_mmap_offset(struct drm_i915_gem_object *obj)
{
- struct drm_i915_private *dev_priv = obj->base.dev->dev_private;
+ struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
int ret;
dev_priv->mm.shrinker_no_lock_stealing = true;
if (obj->base.filp == NULL)
return;
- mapping = file_inode(obj->base.filp)->i_mapping,
+ mapping = obj->base.filp->f_mapping,
invalidate_mapping_pages(mapping, 0, (loff_t)-1);
}
static void
i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj)
{
- struct sg_page_iter sg_iter;
+ struct sgt_iter sgt_iter;
+ struct page *page;
int ret;
BUG_ON(obj->madv == __I915_MADV_PURGED);
if (obj->madv == I915_MADV_DONTNEED)
obj->dirty = 0;
- for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, 0) {
- struct page *page = sg_page_iter_page(&sg_iter);
-
+ for_each_sgt_page(page, sgt_iter, obj->pages) {
if (obj->dirty)
set_page_dirty(page);
static int
i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
{
- struct drm_i915_private *dev_priv = obj->base.dev->dev_private;
+ struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
int page_count, i;
struct address_space *mapping;
struct sg_table *st;
struct scatterlist *sg;
- struct sg_page_iter sg_iter;
+ struct sgt_iter sgt_iter;
struct page *page;
unsigned long last_pfn = 0; /* suppress gcc warning */
int ret;
*
* Fail silently without starting the shrinker
*/
- mapping = file_inode(obj->base.filp)->i_mapping;
+ mapping = obj->base.filp->f_mapping;
gfp = mapping_gfp_constraint(mapping, ~(__GFP_IO | __GFP_RECLAIM));
gfp |= __GFP_NORETRY | __GFP_NOWARN;
sg = st->sgl;
err_pages:
sg_mark_end(sg);
- for_each_sg_page(st->sgl, &sg_iter, st->nents, 0)
- put_page(sg_page_iter_page(&sg_iter));
+ for_each_sgt_page(page, sgt_iter, st)
+ put_page(page);
sg_free_table(st);
kfree(st);
int
i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
{
- struct drm_i915_private *dev_priv = obj->base.dev->dev_private;
+ struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
const struct drm_i915_gem_object_ops *ops = obj->ops;
int ret;
return 0;
}
+ /* The 'mapping' part of i915_gem_object_pin_map() below */
+ static void *i915_gem_object_map(const struct drm_i915_gem_object *obj)
+ {
+ unsigned long n_pages = obj->base.size >> PAGE_SHIFT;
+ struct sg_table *sgt = obj->pages;
+ struct sgt_iter sgt_iter;
+ struct page *page;
+ struct page *stack_pages[32];
+ struct page **pages = stack_pages;
+ unsigned long i = 0;
+ void *addr;
+
+ /* A single page can always be kmapped */
+ if (n_pages == 1)
+ return kmap(sg_page(sgt->sgl));
+
+ if (n_pages > ARRAY_SIZE(stack_pages)) {
+ /* Too big for stack -- allocate temporary array instead */
+ pages = drm_malloc_gfp(n_pages, sizeof(*pages), GFP_TEMPORARY);
+ if (!pages)
+ return NULL;
+ }
+
+ for_each_sgt_page(page, sgt_iter, sgt)
+ pages[i++] = page;
+
+ /* Check that we have the expected number of pages */
+ GEM_BUG_ON(i != n_pages);
+
+ addr = vmap(pages, n_pages, 0, PAGE_KERNEL);
+
+ if (pages != stack_pages)
+ drm_free_large(pages);
+
+ return addr;
+ }
+
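+ /* i915_gem_object_map() above uses a common small-buffer trick: a
+ * fixed on-stack array covers objects up to 32 pages, and only larger
+ * objects pay for a temporary heap allocation. A generic sketch of
+ * the same idea (outside the patch; names are illustrative):
+ *
+ * #define STACK_SLOTS 32
+ *
+ * static int process_items(const int *items, size_t n)
+ * {
+ *         int stack_buf[STACK_SLOTS];
+ *         int *buf = stack_buf;
+ *
+ *         if (n > STACK_SLOTS) {
+ *                 buf = malloc(n * sizeof(*buf));
+ *                 if (!buf)
+ *                         return -1;
+ *         }
+ *
+ *         memcpy(buf, items, n * sizeof(*buf));
+ *         ... operate on buf ...
+ *
+ *         if (buf != stack_buf)
+ *                 free(buf);
+ *         return 0;
+ * }
+ */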
+ /* get, pin, and map the pages of the object into kernel space */
void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj)
{
int ret;
i915_gem_object_pin_pages(obj);
- if (obj->mapping == NULL) {
- struct page **pages;
-
- pages = NULL;
- if (obj->base.size == PAGE_SIZE)
- obj->mapping = kmap(sg_page(obj->pages->sgl));
- else
- pages = drm_malloc_gfp(obj->base.size >> PAGE_SHIFT,
- sizeof(*pages),
- GFP_TEMPORARY);
- if (pages != NULL) {
- struct sg_page_iter sg_iter;
- int n;
-
- n = 0;
- for_each_sg_page(obj->pages->sgl, &sg_iter,
- obj->pages->nents, 0)
- pages[n++] = sg_page_iter_page(&sg_iter);
-
- obj->mapping = vmap(pages, n, 0, PAGE_KERNEL);
- drm_free_large(pages);
- }
- if (obj->mapping == NULL) {
+ if (!obj->mapping) {
+ obj->mapping = i915_gem_object_map(obj);
+ if (!obj->mapping) {
i915_gem_object_unpin_pages(obj);
return ERR_PTR(-ENOMEM);
}
}
static int
- i915_gem_init_seqno(struct drm_device *dev, u32 seqno)
+ i915_gem_init_seqno(struct drm_i915_private *dev_priv, u32 seqno)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_engine_cs *engine;
int ret;
if (ret)
return ret;
}
- i915_gem_retire_requests(dev);
+ i915_gem_retire_requests(dev_priv);
+
+ /* If the seqno wraps around, we need to clear the breadcrumb rbtree */
+ if (!i915_seqno_passed(seqno, dev_priv->next_seqno)) {
+ while (intel_kick_waiters(dev_priv) ||
+ intel_kick_signalers(dev_priv))
+ yield();
+ }
/* Finally reset hw state */
for_each_engine(engine, dev_priv)
int i915_gem_set_seqno(struct drm_device *dev, u32 seqno)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = to_i915(dev);
int ret;
if (seqno == 0)
/* HWS page needs to be set less than what we
* will inject to ring
*/
- ret = i915_gem_init_seqno(dev, seqno - 1);
+ ret = i915_gem_init_seqno(dev_priv, seqno - 1);
if (ret)
return ret;
}
int
- i915_gem_get_seqno(struct drm_device *dev, u32 *seqno)
+ i915_gem_get_seqno(struct drm_i915_private *dev_priv, u32 *seqno)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
-
/* reserve 0 for non-seqno */
if (dev_priv->next_seqno == 0) {
- int ret = i915_gem_init_seqno(dev, 0);
+ int ret = i915_gem_init_seqno(dev_priv, 0);
if (ret)
return ret;
return 0;
}
+ static void i915_gem_mark_busy(const struct intel_engine_cs *engine)
+ {
+ struct drm_i915_private *dev_priv = engine->i915;
+
+ dev_priv->gt.active_engines |= intel_engine_flag(engine);
+ if (dev_priv->gt.awake)
+ return;
+
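+	/* Take a wakeref that is held until the GT becomes idle again */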
+ intel_runtime_pm_get_noresume(dev_priv);
+ dev_priv->gt.awake = true;
+
+ i915_update_gfx_val(dev_priv);
+ if (INTEL_GEN(dev_priv) >= 6)
+ gen6_rps_busy(dev_priv);
+
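+	/* Start the periodic retire worker now that the GT is busy */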
+ queue_delayed_work(dev_priv->wq,
+ &dev_priv->gt.retire_work,
+ round_jiffies_up_relative(HZ));
+ }
+
/*
* NB: This function is not allowed to fail. Doing so would mean that the
* request is not being tracked for completion but the work itself is
bool flush_caches)
{
struct intel_engine_cs *engine;
- struct drm_i915_private *dev_priv;
struct intel_ringbuffer *ringbuf;
u32 request_start;
+ u32 reserved_tail;
int ret;
if (WARN_ON(request == NULL))
return;
engine = request->engine;
- dev_priv = request->i915;
ringbuf = request->ringbuf;
/*
* should already have been reserved in the ring buffer. Let the ring
* know that it is time to use that space up.
*/
- intel_ring_reserved_space_use(ringbuf);
-
request_start = intel_ring_get_tail(ringbuf);
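+	/* Consume the reservation, remembering its size for the check below */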
+ reserved_tail = request->reserved_space;
+ request->reserved_space = 0;
+
/*
* Emit any outstanding flushes - execbuf can fail to emit the flush
* after having emitted the batchbuffer command. Hence we need to fix
}
/* Not allowed to fail! */
WARN(ret, "emit|add_request failed: %d!\n", ret);
-
- i915_queue_hangcheck(engine->dev);
-
- queue_delayed_work(dev_priv->wq,
- &dev_priv->mm.retire_work,
- round_jiffies_up_relative(HZ));
- intel_mark_busy(dev_priv->dev);
-
/* Sanity check that the reserved size was large enough. */
- intel_ring_reserved_space_end(ringbuf);
+ ret = intel_ring_get_tail(ringbuf) - request_start;
+ if (ret < 0)
+ ret += ringbuf->size;
+ WARN_ONCE(ret > reserved_tail,
+ "Not enough space reserved (%d bytes) "
+ "for adding the request (%d bytes)\n",
+ reserved_tail, ret);
+
+ i915_gem_mark_busy(engine);
}
- static bool i915_context_is_banned(struct drm_i915_private *dev_priv,
- const struct intel_context *ctx)
+ static bool i915_context_is_banned(const struct i915_gem_context *ctx)
{
unsigned long elapsed;
- elapsed = get_seconds() - ctx->hang_stats.guilty_ts;
-
if (ctx->hang_stats.banned)
return true;
+ elapsed = get_seconds() - ctx->hang_stats.guilty_ts;
if (ctx->hang_stats.ban_period_seconds &&
elapsed <= ctx->hang_stats.ban_period_seconds) {
- if (!i915_gem_context_is_default(ctx)) {
- DRM_DEBUG("context hanging too fast, banning!\n");
- return true;
- } else if (i915_stop_ring_allow_ban(dev_priv)) {
- if (i915_stop_ring_allow_warn(dev_priv))
- DRM_ERROR("gpu hanging too fast, banning!\n");
- return true;
- }
+ DRM_DEBUG("context hanging too fast, banning!\n");
+ return true;
}
return false;
}
- static void i915_set_reset_status(struct drm_i915_private *dev_priv,
- struct intel_context *ctx,
+ static void i915_set_reset_status(struct i915_gem_context *ctx,
const bool guilty)
{
- struct i915_ctx_hang_stats *hs;
-
- if (WARN_ON(!ctx))
- return;
-
- hs = &ctx->hang_stats;
+ struct i915_ctx_hang_stats *hs = &ctx->hang_stats;
if (guilty) {
- hs->banned = i915_context_is_banned(dev_priv, ctx);
+ hs->banned = i915_context_is_banned(ctx);
hs->batch_active++;
hs->guilty_ts = get_seconds();
} else {
{
struct drm_i915_gem_request *req = container_of(req_ref,
typeof(*req), ref);
- struct intel_context *ctx = req->ctx;
-
- if (req->file_priv)
- i915_gem_request_remove_from_client(req);
-
- if (ctx) {
- if (i915.enable_execlists && ctx != req->i915->kernel_context)
- intel_lr_context_unpin(ctx, req->engine);
-
- i915_gem_context_unreference(ctx);
- }
-
kmem_cache_free(req->i915->requests, req);
}
static inline int
__i915_gem_request_alloc(struct intel_engine_cs *engine,
- struct intel_context *ctx,
+ struct i915_gem_context *ctx,
struct drm_i915_gem_request **req_out)
{
- struct drm_i915_private *dev_priv = to_i915(engine->dev);
+ struct drm_i915_private *dev_priv = engine->i915;
unsigned reset_counter = i915_reset_counter(&dev_priv->gpu_error);
struct drm_i915_gem_request *req;
int ret;
if (req == NULL)
return -ENOMEM;
- ret = i915_gem_get_seqno(engine->dev, &req->seqno);
+ ret = i915_gem_get_seqno(engine->i915, &req->seqno);
if (ret)
goto err;
kref_init(&req->ref);
req->i915 = dev_priv;
req->engine = engine;
- req->reset_counter = reset_counter;
req->ctx = ctx;
i915_gem_context_reference(req->ctx);
- if (i915.enable_execlists)
- ret = intel_logical_ring_alloc_request_extras(req);
- else
- ret = intel_ring_alloc_request_extras(req);
- if (ret) {
- i915_gem_context_unreference(req->ctx);
- goto err;
- }
-
/*
* Reserve space in the ring buffer for all the commands required to
* eventually emit this request. This is to guarantee that the
* to be redone if the request is not actually submitted straight
* away, e.g. because a GPU scheduler has deferred it.
*/
+ req->reserved_space = MIN_SPACE_FOR_ADD_REQUEST;
+
if (i915.enable_execlists)
- ret = intel_logical_ring_reserve_space(req);
+ ret = intel_logical_ring_alloc_request_extras(req);
else
- ret = intel_ring_reserve_space(req);
- if (ret) {
- /*
- * At this point, the request is fully allocated even if not
- * fully prepared. Thus it can be cleaned up using the proper
- * free code.
- */
- intel_ring_reserved_space_cancel(req->ringbuf);
- i915_gem_request_unreference(req);
- return ret;
- }
+ ret = intel_ring_alloc_request_extras(req);
+ if (ret)
+ goto err_ctx;
*req_out = req;
return 0;
+ err_ctx:
+ i915_gem_context_unreference(ctx);
err:
kmem_cache_free(dev_priv->requests, req);
return ret;
*/
struct drm_i915_gem_request *
i915_gem_request_alloc(struct intel_engine_cs *engine,
- struct intel_context *ctx)
+ struct i915_gem_context *ctx)
{
struct drm_i915_gem_request *req;
int err;
if (ctx == NULL)
- ctx = to_i915(engine->dev)->kernel_context;
+ ctx = engine->i915->kernel_context;
err = __i915_gem_request_alloc(engine, ctx, &req);
return err ? ERR_PTR(err) : req;
}
{
struct drm_i915_gem_request *request;
+ /* We are called by the error capture and reset at a random
+ * point in time. In particular, note that neither is crucially
+ * ordered with an interrupt. After a hang, the GPU is dead and we
+ * assume that no more writes can happen (we waited long enough for
+ * all writes that were in transaction to be flushed) - adding an
+ * extra delay for a recent interrupt is pointless. Hence, we do
+ * not need an engine->irq_seqno_barrier() before the seqno reads.
+ */
list_for_each_entry(request, &engine->request_list, list) {
- if (i915_gem_request_completed(request, false))
+ if (i915_gem_request_completed(request))
continue;
return request;
return NULL;
}
- static void i915_gem_reset_engine_status(struct drm_i915_private *dev_priv,
- struct intel_engine_cs *engine)
+ static void i915_gem_reset_engine_status(struct intel_engine_cs *engine)
{
struct drm_i915_gem_request *request;
bool ring_hung;
request = i915_gem_find_active_request(engine);
-
if (request == NULL)
return;
ring_hung = engine->hangcheck.score >= HANGCHECK_SCORE_RING_HUNG;
- i915_set_reset_status(dev_priv, request->ctx, ring_hung);
-
+ i915_set_reset_status(request->ctx, ring_hung);
list_for_each_entry_continue(request, &engine->request_list, list)
- i915_set_reset_status(dev_priv, request->ctx, false);
+ i915_set_reset_status(request->ctx, false);
}
- static void i915_gem_reset_engine_cleanup(struct drm_i915_private *dev_priv,
- struct intel_engine_cs *engine)
+ static void i915_gem_reset_engine_cleanup(struct intel_engine_cs *engine)
{
struct intel_ringbuffer *buffer;
/* Ensure irq handler finishes or is cancelled. */
tasklet_kill(&engine->irq_tasklet);
- spin_lock_bh(&engine->execlist_lock);
- /* list_splice_tail_init checks for empty lists */
- list_splice_tail_init(&engine->execlist_queue,
- &engine->execlist_retired_req_list);
- spin_unlock_bh(&engine->execlist_lock);
-
- intel_execlists_retire_requests(engine);
+ intel_execlists_cancel_requests(engine);
}
/*
void i915_gem_reset(struct drm_device *dev)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = to_i915(dev);
struct intel_engine_cs *engine;
/*
* their reference to the objects, the inspection must be done first.
*/
for_each_engine(engine, dev_priv)
- i915_gem_reset_engine_status(dev_priv, engine);
+ i915_gem_reset_engine_status(engine);
for_each_engine(engine, dev_priv)
- i915_gem_reset_engine_cleanup(dev_priv, engine);
+ i915_gem_reset_engine_cleanup(engine);
i915_gem_context_reset(dev);
/**
* This function clears the request list as sequence numbers are passed.
+ * @engine: engine to retire requests on
*/
void
i915_gem_retire_requests_ring(struct intel_engine_cs *engine)
struct drm_i915_gem_request,
list);
- if (!i915_gem_request_completed(request, true))
+ if (!i915_gem_request_completed(request))
break;
i915_gem_request_retire(request);
i915_gem_object_retire__read(obj, engine->id);
}
- if (unlikely(engine->trace_irq_req &&
- i915_gem_request_completed(engine->trace_irq_req, true))) {
- engine->irq_put(engine);
- i915_gem_request_assign(&engine->trace_irq_req, NULL);
- }
-
WARN_ON(i915_verify_lists(engine->dev));
}
- bool
- i915_gem_retire_requests(struct drm_device *dev)
+ void i915_gem_retire_requests(struct drm_i915_private *dev_priv)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_engine_cs *engine;
- bool idle = true;
+
+ lockdep_assert_held(&dev_priv->drm.struct_mutex);
+
+ if (dev_priv->gt.active_engines == 0)
+ return;
+
+ GEM_BUG_ON(!dev_priv->gt.awake);
for_each_engine(engine, dev_priv) {
i915_gem_retire_requests_ring(engine);
- idle &= list_empty(&engine->request_list);
- if (i915.enable_execlists) {
- spin_lock_bh(&engine->execlist_lock);
- idle &= list_empty(&engine->execlist_queue);
- spin_unlock_bh(&engine->execlist_lock);
-
- intel_execlists_retire_requests(engine);
- }
+ if (list_empty(&engine->request_list))
+ dev_priv->gt.active_engines &= ~intel_engine_flag(engine);
}
- if (idle)
- mod_delayed_work(dev_priv->wq,
- &dev_priv->mm.idle_work,
+ if (dev_priv->gt.active_engines == 0)
+ queue_delayed_work(dev_priv->wq,
+ &dev_priv->gt.idle_work,
msecs_to_jiffies(100));
-
- return idle;
}
static void
i915_gem_retire_work_handler(struct work_struct *work)
{
struct drm_i915_private *dev_priv =
- container_of(work, typeof(*dev_priv), mm.retire_work.work);
- struct drm_device *dev = dev_priv->dev;
- bool idle;
+ container_of(work, typeof(*dev_priv), gt.retire_work.work);
+ struct drm_device *dev = &dev_priv->drm;
/* Come back later if the device is busy... */
- idle = false;
if (mutex_trylock(&dev->struct_mutex)) {
- idle = i915_gem_retire_requests(dev);
+ i915_gem_retire_requests(dev_priv);
mutex_unlock(&dev->struct_mutex);
}
- if (!idle)
- queue_delayed_work(dev_priv->wq, &dev_priv->mm.retire_work,
+
+ /* Keep the retire handler running until we are finally idle.
+ * We do not need to do this test under locking as in the worst-case
+ * we queue the retire worker once too often.
+ */
+ if (READ_ONCE(dev_priv->gt.awake))
+ queue_delayed_work(dev_priv->wq,
+ &dev_priv->gt.retire_work,
round_jiffies_up_relative(HZ));
}
i915_gem_idle_work_handler(struct work_struct *work)
{
struct drm_i915_private *dev_priv =
- container_of(work, typeof(*dev_priv), mm.idle_work.work);
- struct drm_device *dev = dev_priv->dev;
+ container_of(work, typeof(*dev_priv), gt.idle_work.work);
+ struct drm_device *dev = &dev_priv->drm;
struct intel_engine_cs *engine;
+ unsigned int stuck_engines;
+ bool rearm_hangcheck;
+
+ if (!READ_ONCE(dev_priv->gt.awake))
+ return;
+
+ if (READ_ONCE(dev_priv->gt.active_engines))
+ return;
+
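+	/* Stop hangcheck while we idle; it is re-armed below if we bail out */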
+ rearm_hangcheck =
+ cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work);
+
+ if (!mutex_trylock(&dev->struct_mutex)) {
+ /* Currently busy, come back later */
+ mod_delayed_work(dev_priv->wq,
+ &dev_priv->gt.idle_work,
+ msecs_to_jiffies(50));
+ goto out_rearm;
+ }
+
+ if (dev_priv->gt.active_engines)
+ goto out_unlock;
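+
+	/* Now confirmed idle under the lock: release per-engine batch pools */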
for_each_engine(engine, dev_priv)
- if (!list_empty(&engine->request_list))
- return;
+ i915_gem_batch_pool_fini(&engine->batch_pool);
- /* we probably should sync with hangcheck here, using cancel_work_sync.
- * Also locking seems to be fubar here, engine->request_list is protected
- * by dev->struct_mutex. */
+ GEM_BUG_ON(!dev_priv->gt.awake);
+ dev_priv->gt.awake = false;
+ rearm_hangcheck = false;
- intel_mark_idle(dev);
+ stuck_engines = intel_kick_waiters(dev_priv);
+ if (unlikely(stuck_engines)) {
+ DRM_DEBUG_DRIVER("kicked stuck waiters...missed irq\n");
+ dev_priv->gpu_error.missed_irq_rings |= stuck_engines;
+ }
- if (mutex_trylock(&dev->struct_mutex)) {
- for_each_engine(engine, dev_priv)
- i915_gem_batch_pool_fini(&engine->batch_pool);
+ if (INTEL_GEN(dev_priv) >= 6)
+ gen6_rps_idle(dev_priv);
+ intel_runtime_pm_put(dev_priv);
+ out_unlock:
+ mutex_unlock(&dev->struct_mutex);
- mutex_unlock(&dev->struct_mutex);
+ out_rearm:
+ if (rearm_hangcheck) {
+ GEM_BUG_ON(!dev_priv->gt.awake);
+ i915_queue_hangcheck(dev_priv);
}
}
* Ensures that an object will eventually get non-busy by flushing any required
* write domains, emitting any outstanding lazy request and retiring any
* completed requests.
+ * @obj: object to flush
*/
static int
i915_gem_object_flush_active(struct drm_i915_gem_object *obj)
if (req == NULL)
continue;
- if (list_empty(&req->list))
- goto retire;
-
- if (i915_gem_request_completed(req, true)) {
- __i915_gem_request_retire__upto(req);
- retire:
+ if (i915_gem_request_completed(req))
i915_gem_object_retire__read(obj, i);
- }
}
return 0;
/**
* i915_gem_wait_ioctl - implements DRM_IOCTL_I915_GEM_WAIT
- * @DRM_IOCTL_ARGS: standard ioctl arguments
+ * @dev: drm device pointer
+ * @data: ioctl data blob
+ * @file: drm file pointer
*
* Returns 0 if successful, else an error is returned with the remaining time in
* the timeout parameter.
ret = __i915_wait_request(req[i], true,
args->timeout_ns > 0 ? &args->timeout_ns : NULL,
to_rps_client(file));
- i915_gem_request_unreference__unlocked(req[i]);
+ i915_gem_request_unreference(req[i]);
}
return ret;
if (to == from)
return 0;
- if (i915_gem_request_completed(from_req, true))
+ if (i915_gem_request_completed(from_req))
return 0;
- if (!i915_semaphore_is_enabled(obj->base.dev)) {
+ if (!i915_semaphore_is_enabled(to_i915(obj->base.dev))) {
struct drm_i915_private *i915 = to_i915(obj->base.dev);
ret = __i915_wait_request(from_req,
i915->mm.interruptible,
old_write_domain);
}
+ static void __i915_vma_iounmap(struct i915_vma *vma)
+ {
+ GEM_BUG_ON(vma->pin_count);
+
+ if (vma->iomap == NULL)
+ return;
+
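+	/* Drop the kernel mapping of the aperture range backing this vma */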
+ io_mapping_unmap(vma->iomap);
+ vma->iomap = NULL;
+ }
+
static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
{
struct drm_i915_gem_object *obj = vma->obj;
- struct drm_i915_private *dev_priv = obj->base.dev->dev_private;
+ struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
int ret;
if (list_empty(&vma->obj_link))
ret = i915_gem_object_put_fence(obj);
if (ret)
return ret;
+
+ __i915_vma_iounmap(vma);
}
trace_i915_vma_unbind(vma);
return __i915_vma_unbind(vma, false);
}
- int i915_gpu_idle(struct drm_device *dev)
+ int i915_gem_wait_for_idle(struct drm_i915_private *dev_priv)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_engine_cs *engine;
int ret;
- /* Flush everything onto the inactive list. */
- for_each_engine(engine, dev_priv) {
- if (!i915.enable_execlists) {
- struct drm_i915_gem_request *req;
+ lockdep_assert_held(&dev_priv->drm.struct_mutex);
- req = i915_gem_request_alloc(engine, NULL);
- if (IS_ERR(req))
- return PTR_ERR(req);
-
- ret = i915_switch_context(req);
- i915_add_request_no_flush(req);
- if (ret)
- return ret;
- }
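+		/* An engine that never received a context has no work outstanding */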
+ for_each_engine(engine, dev_priv) {
+ if (engine->last_context == NULL)
+ continue;
ret = intel_engine_idle(engine);
if (ret)
/**
* Finds free space in the GTT aperture and binds the object or a view of it
* there.
+ * @obj: object to bind
+ * @vm: address space to bind into
+ * @ggtt_view: global gtt view if applicable
+ * @alignment: requested alignment
+ * @flags: mask of PIN_* flags to use
*/
static struct i915_vma *
i915_gem_object_bind_to_vm(struct drm_i915_gem_object *obj,
return;
if (i915_gem_clflush_object(obj, obj->pin_display))
- i915_gem_chipset_flush(obj->base.dev);
+ i915_gem_chipset_flush(to_i915(obj->base.dev));
old_write_domain = obj->base.write_domain;
obj->base.write_domain = 0;
/**
* Moves a single object to the GTT read, and possibly write domain.
+ * @obj: object to act on
+ * @write: ask for write access or read only
*
* This function returns when the move is complete, including waiting on
* flushes to occur.
/**
* Changes the cache-level of an object across all VMA.
+ * @obj: object to act on
+ * @cache_level: new cache level to set for the object
*
* After this function returns, the object will be in the new cache-level
* across all GTT and the contents of the backing storage will be coherent,
* object is now coherent at its new cache level (with respect
* to the access domain).
*/
- if (obj->cache_dirty &&
- obj->base.write_domain != I915_GEM_DOMAIN_CPU &&
- cpu_write_needs_clflush(obj)) {
+ if (obj->cache_dirty && cpu_write_needs_clflush(obj)) {
if (i915_gem_clflush_object(obj, true))
- i915_gem_chipset_flush(obj->base.dev);
+ i915_gem_chipset_flush(to_i915(obj->base.dev));
}
return 0;
int i915_gem_set_caching_ioctl(struct drm_device *dev, void *data,
struct drm_file *file)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = to_i915(dev);
struct drm_i915_gem_caching *args = data;
struct drm_i915_gem_object *obj;
enum i915_cache_level level;
/**
* Moves a single object to the CPU read, and possibly write domain.
+ * @obj: object to act on
+ * @write: requesting write or read-only access
*
* This function returns when the move is complete, including waiting on
* flushes to occur.
static int
i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = to_i915(dev);
struct drm_i915_file_private *file_priv = file->driver_priv;
unsigned long recent_enough = jiffies - DRM_I915_THROTTLE_JIFFIES;
struct drm_i915_gem_request *request, *target = NULL;
return 0;
ret = __i915_wait_request(target, true, NULL, NULL);
- if (ret == 0)
- queue_delayed_work(dev_priv->wq, &dev_priv->mm.retire_work, 0);
-
- i915_gem_request_unreference__unlocked(target);
+ i915_gem_request_unreference(target);
return ret;
}
uint32_t alignment,
uint64_t flags)
{
- struct drm_i915_private *dev_priv = obj->base.dev->dev_private;
+ struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
struct i915_vma *vma;
unsigned bound;
int ret;
i915_gem_madvise_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = to_i915(dev);
struct drm_i915_gem_madvise *args = data;
struct drm_i915_gem_object *obj;
int ret;
obj->fence_reg = I915_FENCE_REG_NONE;
obj->madv = I915_MADV_WILLNEED;
- i915_gem_info_add_obj(obj->base.dev->dev_private, obj->base.size);
+ i915_gem_info_add_obj(to_i915(obj->base.dev), obj->base.size);
}
static const struct drm_i915_gem_object_ops i915_gem_object_ops = {
.put_pages = i915_gem_object_put_pages_gtt,
};
- struct drm_i915_gem_object *i915_gem_alloc_object(struct drm_device *dev,
+ struct drm_i915_gem_object *i915_gem_object_create(struct drm_device *dev,
size_t size)
{
struct drm_i915_gem_object *obj;
struct address_space *mapping;
gfp_t mask;
+ int ret;
obj = i915_gem_object_alloc(dev);
if (obj == NULL)
- return NULL;
+ return ERR_PTR(-ENOMEM);
- if (drm_gem_object_init(dev, &obj->base, size) != 0) {
- i915_gem_object_free(obj);
- return NULL;
- }
+ ret = drm_gem_object_init(dev, &obj->base, size);
+ if (ret)
+ goto fail;
mask = GFP_HIGHUSER | __GFP_RECLAIMABLE;
if (IS_CRESTLINE(dev) || IS_BROADWATER(dev)) {
mask |= __GFP_DMA32;
}
- mapping = file_inode(obj->base.filp)->i_mapping;
+ mapping = obj->base.filp->f_mapping;
mapping_set_gfp_mask(mapping, mask);
i915_gem_object_init(obj, &i915_gem_object_ops);
trace_i915_gem_object_create(obj);
return obj;
+
+ fail:
+ i915_gem_object_free(obj);
+
+ return ERR_PTR(ret);
}
static bool discard_backing_storage(struct drm_i915_gem_object *obj)
{
struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
struct drm_device *dev = obj->base.dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = to_i915(dev);
struct i915_vma *vma, *next;
intel_runtime_pm_get(dev_priv);
struct i915_vma *i915_gem_obj_to_ggtt_view(struct drm_i915_gem_object *obj,
const struct i915_ggtt_view *view)
{
- struct drm_device *dev = obj->base.dev;
- struct drm_i915_private *dev_priv = to_i915(dev);
- struct i915_ggtt *ggtt = &dev_priv->ggtt;
struct i915_vma *vma;
- BUG_ON(!view);
+ GEM_BUG_ON(!view);
list_for_each_entry(vma, &obj->vma_list, obj_link)
- if (vma->vm == &ggtt->base &&
- i915_ggtt_view_equal(&vma->ggtt_view, view))
+ if (vma->is_ggtt && i915_ggtt_view_equal(&vma->ggtt_view, view))
return vma;
return NULL;
}
static void
i915_gem_stop_engines(struct drm_device *dev)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = to_i915(dev);
struct intel_engine_cs *engine;
for_each_engine(engine, dev_priv)
int
i915_gem_suspend(struct drm_device *dev)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = to_i915(dev);
int ret = 0;
mutex_lock(&dev->struct_mutex);
- ret = i915_gpu_idle(dev);
+ ret = i915_gem_wait_for_idle(dev_priv);
if (ret)
goto err;
- i915_gem_retire_requests(dev);
+ i915_gem_retire_requests(dev_priv);
i915_gem_stop_engines(dev);
+ i915_gem_context_lost(dev_priv);
mutex_unlock(&dev->struct_mutex);
cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work);
- cancel_delayed_work_sync(&dev_priv->mm.retire_work);
- flush_delayed_work(&dev_priv->mm.idle_work);
+ cancel_delayed_work_sync(&dev_priv->gt.retire_work);
+ flush_delayed_work(&dev_priv->gt.idle_work);
/* Assert that we successfully flushed all the work and
* reset the GPU back to its idle, low power state.
*/
- WARN_ON(dev_priv->mm.busy);
+ WARN_ON(dev_priv->gt.awake);
return 0;
return ret;
}
- int i915_gem_l3_remap(struct drm_i915_gem_request *req, int slice)
- {
- struct intel_engine_cs *engine = req->engine;
- struct drm_device *dev = engine->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- u32 *remap_info = dev_priv->l3_parity.remap_info[slice];
- int i, ret;
-
- if (!HAS_L3_DPF(dev) || !remap_info)
- return 0;
-
- ret = intel_ring_begin(req, GEN7_L3LOG_SIZE / 4 * 3);
- if (ret)
- return ret;
-
- /*
- * Note: We do not worry about the concurrent register cacheline hang
- * here because no other code should access these registers other than
- * at initialization time.
- */
- for (i = 0; i < GEN7_L3LOG_SIZE / 4; i++) {
- intel_ring_emit(engine, MI_LOAD_REGISTER_IMM(1));
- intel_ring_emit_reg(engine, GEN7_L3LOG(slice, i));
- intel_ring_emit(engine, remap_info[i]);
- }
-
- intel_ring_advance(engine);
-
- return ret;
- }
-
void i915_gem_init_swizzling(struct drm_device *dev)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = to_i915(dev);
if (INTEL_INFO(dev)->gen < 5 ||
dev_priv->mm.bit_6_swizzle_x == I915_BIT_6_SWIZZLE_NONE)
static void init_unused_ring(struct drm_device *dev, u32 base)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = to_i915(dev);
I915_WRITE(RING_CTL(base), 0);
I915_WRITE(RING_HEAD(base), 0);
int i915_gem_init_engines(struct drm_device *dev)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = to_i915(dev);
int ret;
ret = intel_init_render_ring_buffer(dev);
int
i915_gem_init_hw(struct drm_device *dev)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = to_i915(dev);
struct intel_engine_cs *engine;
- int ret, j;
+ int ret;
/* Double layer security blanket, see i915_gem_init() */
intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
intel_mocs_init_l3cc_table(dev);
/* We can't enable contexts until all firmware is loaded */
- if (HAS_GUC_UCODE(dev)) {
- ret = intel_guc_ucode_load(dev);
- if (ret) {
- DRM_ERROR("Failed to initialize GuC, error %d\n", ret);
- ret = -EIO;
- goto out;
- }
- }
-
- /*
- * Increment the next seqno by 0x100 so we have a visible break
- * on re-initialisation
- */
- ret = i915_gem_set_seqno(dev, dev_priv->next_seqno+0x100);
+ ret = intel_guc_setup(dev);
if (ret)
goto out;
- /* Now it is safe to go back round and do everything else: */
- for_each_engine(engine, dev_priv) {
- struct drm_i915_gem_request *req;
-
- req = i915_gem_request_alloc(engine, NULL);
- if (IS_ERR(req)) {
- ret = PTR_ERR(req);
- break;
- }
-
- if (engine->id == RCS) {
- for (j = 0; j < NUM_L3_SLICES(dev); j++) {
- ret = i915_gem_l3_remap(req, j);
- if (ret)
- goto err_request;
- }
- }
-
- ret = i915_ppgtt_init_ring(req);
- if (ret)
- goto err_request;
-
- ret = i915_gem_context_enable(req);
- if (ret)
- goto err_request;
-
- err_request:
- i915_add_request_no_flush(req);
- if (ret) {
- DRM_ERROR("Failed to enable %s, error=%d\n",
- engine->name, ret);
- i915_gem_cleanup_engines(dev);
- break;
- }
- }
-
out:
intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
return ret;
int i915_gem_init(struct drm_device *dev)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = to_i915(dev);
int ret;
- i915.enable_execlists = intel_sanitize_enable_execlists(dev,
- i915.enable_execlists);
-
mutex_lock(&dev->struct_mutex);
if (!i915.enable_execlists) {
*/
intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
- ret = i915_gem_init_userptr(dev);
- if (ret)
- goto out_unlock;
-
+ i915_gem_init_userptr(dev_priv);
i915_gem_init_ggtt(dev);
ret = i915_gem_context_init(dev);
void
i915_gem_cleanup_engines(struct drm_device *dev)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = to_i915(dev);
struct intel_engine_cs *engine;
for_each_engine(engine, dev_priv)
dev_priv->gt.cleanup_engine(engine);
-
- if (i915.enable_execlists)
- /*
- * Neither the BIOS, ourselves or any other kernel
- * expects the system to be in execlists mode on startup,
- * so we need to reset the GPU back to legacy mode.
- */
- intel_gpu_reset(dev, ALL_ENGINES);
}
static void
void
i915_gem_load_init_fences(struct drm_i915_private *dev_priv)
{
- struct drm_device *dev = dev_priv->dev;
+ struct drm_device *dev = &dev_priv->drm;
if (INTEL_INFO(dev_priv)->gen >= 7 && !IS_VALLEYVIEW(dev_priv) &&
!IS_CHERRYVIEW(dev_priv))
else
dev_priv->num_fence_regs = 8;
- if (intel_vgpu_active(dev))
+ if (intel_vgpu_active(dev_priv))
dev_priv->num_fence_regs =
I915_READ(vgtif_reg(avail_rs.fence_num));
void
i915_gem_load_init(struct drm_device *dev)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = to_i915(dev);
int i;
dev_priv->objects =
init_engine_lists(&dev_priv->engine[i]);
for (i = 0; i < I915_MAX_NUM_FENCES; i++)
INIT_LIST_HEAD(&dev_priv->fence_regs[i].lru_list);
- INIT_DELAYED_WORK(&dev_priv->mm.retire_work,
+ INIT_DELAYED_WORK(&dev_priv->gt.retire_work,
i915_gem_retire_work_handler);
- INIT_DELAYED_WORK(&dev_priv->mm.idle_work,
+ INIT_DELAYED_WORK(&dev_priv->gt.idle_work,
i915_gem_idle_work_handler);
+ init_waitqueue_head(&dev_priv->gpu_error.wait_queue);
init_waitqueue_head(&dev_priv->gpu_error.reset_queue);
dev_priv->relative_constants_mode = I915_EXEC_CONSTANTS_REL_GENERAL;
- /*
- * Set initial sequence number for requests.
- * Using this number allows the wraparound to happen early,
- * catching any obvious problems.
- */
- dev_priv->next_seqno = ((u32)~0 - 0x1100);
- dev_priv->last_seqno = ((u32)~0 - 0x1101);
-
INIT_LIST_HEAD(&dev_priv->mm.fence_list);
init_waitqueue_head(&dev_priv->pending_flip_queue);
kmem_cache_destroy(dev_priv->objects);
}
+ int i915_gem_freeze_late(struct drm_i915_private *dev_priv)
+ {
+ struct drm_i915_gem_object *obj;
+
+ /* Called just before we write the hibernation image.
+ *
+ * We need to update the domain tracking to reflect that the CPU
+ * will be accessing all the pages to create and restore from the
+ * hibernation, and so upon restoration those pages will be in the
+ * CPU domain.
+ *
+ * To make sure the hibernation image contains the latest state,
+ * we update that state just before writing out the image.
+ */
+
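+	/* Both bound and unbound objects receive the same domain update */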
+ list_for_each_entry(obj, &dev_priv->mm.unbound_list, global_list) {
+ obj->base.read_domains = I915_GEM_DOMAIN_CPU;
+ obj->base.write_domain = I915_GEM_DOMAIN_CPU;
+ }
+
+ list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
+ obj->base.read_domains = I915_GEM_DOMAIN_CPU;
+ obj->base.write_domain = I915_GEM_DOMAIN_CPU;
+ }
+
+ return 0;
+ }
+
void i915_gem_release(struct drm_device *dev, struct drm_file *file)
{
struct drm_i915_file_private *file_priv = file->driver_priv;
return -ENOMEM;
file->driver_priv = file_priv;
- file_priv->dev_priv = dev->dev_private;
+ file_priv->dev_priv = to_i915(dev);
file_priv->file = file;
INIT_LIST_HEAD(&file_priv->rps.link);
u64 i915_gem_obj_offset(struct drm_i915_gem_object *o,
struct i915_address_space *vm)
{
- struct drm_i915_private *dev_priv = o->base.dev->dev_private;
+ struct drm_i915_private *dev_priv = to_i915(o->base.dev);
struct i915_vma *vma;
WARN_ON(vm == &dev_priv->mm.aliasing_ppgtt->base);
u64 i915_gem_obj_ggtt_offset_view(struct drm_i915_gem_object *o,
const struct i915_ggtt_view *view)
{
- struct drm_i915_private *dev_priv = to_i915(o->base.dev);
- struct i915_ggtt *ggtt = &dev_priv->ggtt;
struct i915_vma *vma;
list_for_each_entry(vma, &o->vma_list, obj_link)
- if (vma->vm == &ggtt->base &&
- i915_ggtt_view_equal(&vma->ggtt_view, view))
+ if (vma->is_ggtt && i915_ggtt_view_equal(&vma->ggtt_view, view))
return vma->node.start;
WARN(1, "global vma for this object not found. (view=%u)\n", view->type);
bool i915_gem_obj_ggtt_bound_view(struct drm_i915_gem_object *o,
const struct i915_ggtt_view *view)
{
- struct drm_i915_private *dev_priv = to_i915(o->base.dev);
- struct i915_ggtt *ggtt = &dev_priv->ggtt;
struct i915_vma *vma;
list_for_each_entry(vma, &o->vma_list, obj_link)
- if (vma->vm == &ggtt->base &&
+ if (vma->is_ggtt &&
i915_ggtt_view_equal(&vma->ggtt_view, view) &&
drm_mm_node_allocated(&vma->node))
return true;
return false;
}
- unsigned long i915_gem_obj_size(struct drm_i915_gem_object *o,
- struct i915_address_space *vm)
+ unsigned long i915_gem_obj_ggtt_size(struct drm_i915_gem_object *o)
{
- struct drm_i915_private *dev_priv = o->base.dev->dev_private;
struct i915_vma *vma;
- WARN_ON(vm == &dev_priv->mm.aliasing_ppgtt->base);
-
- BUG_ON(list_empty(&o->vma_list));
+ GEM_BUG_ON(list_empty(&o->vma_list));
list_for_each_entry(vma, &o->vma_list, obj_link) {
if (vma->is_ggtt &&
- vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL)
- continue;
- if (vma->vm == vm)
+ vma->ggtt_view.type == I915_GGTT_VIEW_NORMAL)
return vma->node.size;
}
+
return 0;
}
struct page *page;
/* Only default objects have per-page dirty tracking */
- if (WARN_ON((obj->ops->flags & I915_GEM_OBJECT_HAS_STRUCT_PAGE) == 0))
+ if (WARN_ON(!i915_gem_object_has_struct_page(obj)))
return NULL;
page = i915_gem_object_get_page(obj, n);
size_t bytes;
int ret;
- obj = i915_gem_alloc_object(dev, round_up(size, PAGE_SIZE));
- if (IS_ERR_OR_NULL(obj))
+ obj = i915_gem_object_create(dev, round_up(size, PAGE_SIZE));
+ if (IS_ERR(obj))
return obj;
ret = i915_gem_object_set_to_cpu_domain(obj, true);