This is similar to /sys/bus/pci/drivers_autoprobe, but
affects only the VFs associated with a specific PF.
+
+What: /sys/bus/pci/devices/.../p2pmem/size
+Date: November 2017
+Contact: Logan Gunthorpe <logang@deltatee.com>
+Description:
+ If the device has any Peer-to-Peer memory registered, this
+ file contains the total amount of memory that the device
+ provides (in decimal).
+
+What: /sys/bus/pci/devices/.../p2pmem/available
+Date: November 2017
+Contact: Logan Gunthorpe <logang@deltatee.com>
+Description:
+ If the device has any Peer-to-Peer memory registered, this
+ file contains the amount of memory that has not been
+ allocated (in decimal).
+
+What: /sys/bus/pci/devices/.../p2pmem/published
+Date: November 2017
+Contact: Logan Gunthorpe <logang@deltatee.com>
+Description:
+ If the device has any Peer-to-Peer memory registered, this
+ file contains a '1' if the memory has been published for
+ use outside the driver that owns the device.
2.2 Using Endpoint Test function Device
pcitest.sh added in tools/pci/ can be used to run all the default PCI endpoint
-tests. Before pcitest.sh can be used pcitest.c should be compiled using the
-following commands.
+tests. To compile this tool, the following commands should be used:
- cd <kernel-dir>
- make headers_install ARCH=arm
- arm-linux-gnueabihf-gcc -Iusr/include tools/pci/pcitest.c -o pcitest
- cp pcitest <rootfs>/usr/sbin/
- cp tools/pci/pcitest.sh <rootfs>
+ # cd <kernel-dir>
+ # make -C tools/pci
+
+or if you desire to compile and install in your system:
+
+ # cd <kernel-dir>
+ # make -C tools/pci install
+
+The tool and script will be located in <rootfs>/usr/bin/
2.2.1 pcitest.sh Output
- # ./pcitest.sh
+ # pcitest.sh
BAR tests
BAR0: OKAY
event will be platform-dependent, but will follow the general
sequence described below.
-STEP 0: Error Event: ERR_NONFATAL
+STEP 0: Error Event
-------------------
A PCI bus error is detected by the PCI hardware. On powerpc, the slot
is isolated, in that all I/O is blocked: all reads return 0xffffffff,
If any driver returned PCI_ERS_RESULT_NEED_RESET, then the platform
proceeds to STEP 4 (Slot Reset)
-STEP 3: Slot Reset
+STEP 3: Link Reset
+------------------
+The platform resets the link. This is a PCI-Express specific step
+and is done whenever a fatal error has been detected that can be
+"solved" by resetting the link.
+
+STEP 4: Slot Reset
------------------
In response to a return value of PCI_ERS_RESULT_NEED_RESET, the
>>> However, it probably should.
-STEP 4: Resume Operations
+STEP 5: Resume Operations
-------------------------
The platform will call the resume() callback on all affected device
drivers if all drivers on the segment have returned
At this point, if a new error happens, the platform will restart
a new error recovery sequence.
-STEP 5: Permanent Failure
+STEP 6: Permanent Failure
-------------------------
A "permanent failure" has occurred, and the platform cannot recover
the device. The platform will call error_detected() with a
for additional detail on real-life experience of the causes of
software errors.
-STEP 0: Error Event: ERR_FATAL
--------------------
-PCI bus error is detected by the PCI hardware. On powerpc, the slot is
-isolated, in that all I/O is blocked: all reads return 0xffffffff, all
-writes are ignored.
-
-STEP 1: Remove devices
---------------------
-Platform removes the devices depending on the error agent, it could be
-this port for all subordinates or upstream component (likely downstream
-port)
-
-STEP 2: Reset link
---------------------
-The platform resets the link. This is a PCI-Express specific step and is
-done whenever a fatal error has been detected that can be "solved" by
-resetting the link.
-
-STEP 3: Re-enumerate the devices
---------------------
-Initiates the re-enumeration.
Conclusion; General Remarks
---------------------------
- reset-names: Must contain the following entries:
- "pciephy"
- "apps"
+ - "turnoff"
Example:
interrupt-cells: should be set to 1
interrupts: GIC interrupt lines connected to PCI MSI interrupt lines
+ti,syscon-pcie-id : phandle to the device control module required to set device
+ id and vendor id.
+
Example:
pcie_msi_intc: msi-interrupt-controller {
interrupt-controller;
Required properties:
- compatible: "renesas,pci-r8a7743" for the R8A7743 SoC;
+ "renesas,pci-r8a7744" for the R8A7744 SoC;
"renesas,pci-r8a7745" for the R8A7745 SoC;
"renesas,pci-r8a7790" for the R8A7790 SoC;
"renesas,pci-r8a7791" for the R8A7791 SoC;
Required properties:
compatible: "renesas,pcie-r8a7743" for the R8A7743 SoC;
+ "renesas,pcie-r8a7744" for the R8A7744 SoC;
"renesas,pcie-r8a7779" for the R8A7779 SoC;
"renesas,pcie-r8a7790" for the R8A7790 SoC;
"renesas,pcie-r8a7791" for the R8A7791 SoC;
"renesas,pcie-r8a7795" for the R8A7795 SoC;
"renesas,pcie-r8a7796" for the R8A7796 SoC;
"renesas,pcie-r8a77980" for the R8A77980 SoC;
+ "renesas,pcie-r8a77990" for the R8A77990 SoC;
"renesas,pcie-rcar-gen2" for a generic R-Car Gen2 or
RZ/G1 compatible device.
"renesas,pcie-rcar-gen3" for a generic R-Car Gen3 compatible device.
ranges,
interrupt-map-mask,
interrupt-map : as specified in ../designware-pcie.txt
+ - ti,syscon-unaligned-access: phandle to the syscon DT node. The 1st argument
+ should contain the register offset within syscon
+ and the 2nd argument should contain the bit field
+ for setting the bit to enable unaligned
+ access.
DEVICE MODE
===========
input
usb/index
firewire
- pci
+ pci/index
spi
i2c
hsi
+++ /dev/null
-PCI Support Library
--------------------
-
-.. kernel-doc:: drivers/pci/pci.c
- :export:
-
-.. kernel-doc:: drivers/pci/pci-driver.c
- :export:
-
-.. kernel-doc:: drivers/pci/remove.c
- :export:
-
-.. kernel-doc:: drivers/pci/search.c
- :export:
-
-.. kernel-doc:: drivers/pci/msi.c
- :export:
-
-.. kernel-doc:: drivers/pci/bus.c
- :export:
-
-.. kernel-doc:: drivers/pci/access.c
- :export:
-
-.. kernel-doc:: drivers/pci/irq.c
- :export:
-
-.. kernel-doc:: drivers/pci/probe.c
- :export:
-
-.. kernel-doc:: drivers/pci/slot.c
- :export:
-
-.. kernel-doc:: drivers/pci/rom.c
- :export:
-
-.. kernel-doc:: drivers/pci/iov.c
- :export:
-
-.. kernel-doc:: drivers/pci/pci-sysfs.c
- :internal:
-
-PCI Hotplug Support Library
----------------------------
-
-.. kernel-doc:: drivers/pci/hotplug/pci_hotplug_core.c
- :export:
--- /dev/null
+.. SPDX-License-Identifier: GPL-2.0
+
+============================================
+The Linux PCI driver implementer's API guide
+============================================
+
+.. class:: toc-title
+
+ Table of contents
+
+.. toctree::
+ :maxdepth: 2
+
+ pci
+ p2pdma
+
+.. only:: subproject and html
+
+ Indices
+ =======
+
+ * :ref:`genindex`
--- /dev/null
+.. SPDX-License-Identifier: GPL-2.0
+
+============================
+PCI Peer-to-Peer DMA Support
+============================
+
+The PCI bus has pretty decent support for performing DMA transfers
+between two devices on the bus. This type of transaction is henceforth
+called Peer-to-Peer (or P2P). However, there are a number of issues that
+make P2P transactions tricky to do in a perfectly safe way.
+
+One of the biggest issues is that PCI doesn't require forwarding
+transactions between hierarchy domains, and in PCIe, each Root Port
+defines a separate hierarchy domain. To make things worse, there is no
+simple way to determine if a given Root Complex supports this or not.
+(See PCIe r4.0, sec 1.3.1). Therefore, as of this writing, the kernel
+only supports doing P2P when the endpoints involved are all behind the
+same PCI bridge, as such devices are all in the same PCI hierarchy
+domain, and the spec guarantees that all transactions within the
+hierarchy will be routable, but it does not require routing
+between hierarchies.
+
+The second issue is that to make use of existing interfaces in Linux,
+memory that is used for P2P transactions needs to be backed by struct
+pages. However, PCI BARs are not typically cache coherent, so there are
+a few corner-case gotchas with these pages and developers need to
+be careful about what they do with them.
+
+
+Driver Writer's Guide
+=====================
+
+In a given P2P implementation there may be three or more different
+types of kernel drivers in play:
+
+* Provider - A driver which provides or publishes P2P resources like
+ memory or doorbell registers to other drivers.
+* Client - A driver which makes use of a resource by setting up a
+ DMA transaction to or from it.
+* Orchestrator - A driver which orchestrates the flow of data between
+ clients and providers.
+
+In many cases there could be overlap between these three types (i.e.,
+it may be typical for a driver to be both a provider and a client).
+
+For example, in the NVMe Target Copy Offload implementation:
+
+* The NVMe PCI driver is at once a client, provider and orchestrator
+ in that it exposes any CMB (Controller Memory Buffer) as a P2P memory
+ resource (provider), it accepts P2P memory pages as buffers in requests
+ to be used directly (client) and it can also make use of the CMB as
+ submission queue entries (orchestrator).
+* The RDMA driver is a client in this arrangement so that an RNIC
+ can DMA directly to the memory exposed by the NVMe device.
+* The NVMe Target driver (nvmet) can orchestrate the data from the RNIC
+ to the P2P memory (CMB) and then to the NVMe device (and vice versa).
+
+This is currently the only arrangement supported by the kernel but
+one could imagine slight tweaks to this that would allow for the same
+functionality. For example, if a specific RNIC added a BAR with some
+memory behind it, its driver could add support as a P2P provider and
+then the NVMe Target could use the RNIC's memory instead of the CMB
+in cases where the NVMe cards in use do not have CMB support.
+
+
+Provider Drivers
+----------------
+
+A provider simply needs to register a BAR (or a portion of a BAR)
+as a P2P DMA resource using :c:func:`pci_p2pdma_add_resource()`.
+This will register struct pages for all the specified memory.
+
+After that it may optionally publish all of its resources as
+P2P memory using :c:func:`pci_p2pmem_publish()`. This will allow
+any orchestrator drivers to find and use the memory. When marked in
+this way, the resource must be regular memory with no side effects.
+
+For the time being this is fairly rudimentary in that all resources
+are typically going to be P2P memory. Future work will likely expand
+this to include other types of resources like doorbells.
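+
+For example, a provider's probe function might register part of a BAR
+and then publish it. A minimal sketch, assuming a hypothetical device
+and an illustrative choice of 64KB at the start of BAR 4::
+
+	static int example_probe(struct pci_dev *pdev,
+				 const struct pci_device_id *id)
+	{
+		int rc;
+
+		/* Register 64KB at offset 0 of BAR 4 as P2P DMA memory */
+		rc = pci_p2pdma_add_resource(pdev, 4, SZ_64K, 0);
+		if (rc)
+			return rc;
+
+		/* Publish the memory so orchestrators can find and use it */
+		pci_p2pmem_publish(pdev, true);
+
+		return 0;
+	}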
+
+
+Client Drivers
+--------------
+
+A client driver typically only has to conditionally change its DMA map
+routine to use the mapping function :c:func:`pci_p2pdma_map_sg()` instead
+of the usual :c:func:`dma_map_sg()` function. Memory mapped in this
+way does not need to be unmapped.
+
+The client may also, optionally, make use of
+:c:func:`is_pci_p2pdma_page()` to determine when to use the P2P mapping
+functions and when to use the regular mapping functions. In some
+situations, it may be more appropriate to use a flag to indicate a
+given request is P2P memory and map appropriately. It is important to
+ensure that struct pages that back P2P memory stay out of code that
+does not have support for them as other code may treat the pages as
+regular memory which may not be appropriate.
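+
+A minimal sketch of such a conditional mapping, assuming a scatterlist
+``sg`` with ``sg_cnt`` entries that may or may not contain P2P pages::
+
+	if (is_pci_p2pdma_page(sg_page(sg)))
+		nents = pci_p2pdma_map_sg(dev, sg, sg_cnt, dir);
+	else
+		nents = dma_map_sg(dev, sg, sg_cnt, dir);
+	if (!nents)
+		return -ENOMEM;
+
+	/* note: memory mapped with pci_p2pdma_map_sg() need not be unmapped */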
+
+
+Orchestrator Drivers
+--------------------
+
+The first thing an orchestrator driver must do is compile a list of
+all client devices that will be involved in a given transaction. For
+example, the NVMe Target driver creates a list including the namespace
+block device and the RNIC in use. If the orchestrator has access to
+a specific P2P provider to use, it may check compatibility using
+:c:func:`pci_p2pdma_distance()`; otherwise, it may find a memory provider
+that's compatible with all clients using :c:func:`pci_p2pmem_find()`.
+If more than one provider is supported, the one nearest to all the clients will
+be chosen first. If more than one provider is an equal distance away, the
+one returned will be chosen at random (the choice is not arbitrary but
+truly random). This function returns the PCI device to use for the provider
+with a reference taken, so when it is no longer needed it should be
+released with pci_dev_put().
+
+Once a provider is selected, the orchestrator can then use
+:c:func:`pci_alloc_p2pmem()` and :c:func:`pci_free_p2pmem()` to
+allocate P2P memory from the provider. :c:func:`pci_p2pmem_alloc_sgl()`
+and :c:func:`pci_p2pmem_free_sgl()` are convenience functions for
+allocating scatter-gather lists with P2P memory.
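+
+A minimal sketch of provider selection and allocation, assuming two
+illustrative client struct devices and a hypothetical transfer length
+(the multi-client variant :c:func:`pci_p2pmem_find_many()` is used here)::
+
+	struct device *clients[] = { client_a, client_b };
+	struct pci_dev *provider;
+	struct scatterlist *sgl;
+	unsigned int nents;
+
+	provider = pci_p2pmem_find_many(clients, ARRAY_SIZE(clients));
+	if (!provider)
+		return -ENODEV;	/* no compatible P2P memory available */
+
+	sgl = pci_p2pmem_alloc_sgl(provider, &nents, transfer_len);
+	if (!sgl) {
+		pci_dev_put(provider);
+		return -ENOMEM;
+	}
+
+	/* ... set up DMA to or from the P2P memory ... */
+
+	pci_p2pmem_free_sgl(provider, sgl);
+	pci_dev_put(provider);	/* drop the reference taken by the find */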
+
+Struct Page Caveats
+-------------------
+
+Driver writers should be very careful about not passing these special
+struct pages to code that isn't prepared for them. At this time, the kernel
+interfaces do not have any checks for ensuring this. This obviously
+precludes passing these pages to userspace.
+
+P2P memory is also technically IO memory but should never have any side
+effects behind it. Thus, the order of loads and stores should not be important
+and ioreadX(), iowriteX() and friends should not be necessary.
+However, as the memory is not cache coherent, if access ever needs to
+be protected by a spinlock then :c:func:`mmiowb()` must be used before
+unlocking the lock. (See ACQUIRES VS I/O ACCESSES in
+Documentation/memory-barriers.txt)
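+
+For example, if P2P memory is filled under a spinlock (a minimal
+sketch; the lock and buffer names are illustrative)::
+
+	spin_lock(&p2p_lock);
+	memcpy(p2p_buf, data, len);	/* plain stores to the P2P BAR */
+	mmiowb();			/* order the stores before unlock */
+	spin_unlock(&p2p_lock);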
+
+
+P2P DMA Support Library
+=======================
+
+.. kernel-doc:: drivers/pci/p2pdma.c
+ :export:
--- /dev/null
+PCI Support Library
+-------------------
+
+.. kernel-doc:: drivers/pci/pci.c
+ :export:
+
+.. kernel-doc:: drivers/pci/pci-driver.c
+ :export:
+
+.. kernel-doc:: drivers/pci/remove.c
+ :export:
+
+.. kernel-doc:: drivers/pci/search.c
+ :export:
+
+.. kernel-doc:: drivers/pci/msi.c
+ :export:
+
+.. kernel-doc:: drivers/pci/bus.c
+ :export:
+
+.. kernel-doc:: drivers/pci/access.c
+ :export:
+
+.. kernel-doc:: drivers/pci/irq.c
+ :export:
+
+.. kernel-doc:: drivers/pci/probe.c
+ :export:
+
+.. kernel-doc:: drivers/pci/slot.c
+ :export:
+
+.. kernel-doc:: drivers/pci/rom.c
+ :export:
+
+.. kernel-doc:: drivers/pci/iov.c
+ :export:
+
+.. kernel-doc:: drivers/pci/pci-sysfs.c
+ :internal:
+
+PCI Hotplug Support Library
+---------------------------
+
+.. kernel-doc:: drivers/pci/hotplug/pci_hotplug_core.c
+ :export:
through the Memory-mapped Remote Procedure Call (MRPC) interface.
Commands are submitted to the interface with a 4-byte command
identifier and up to 1KB of command-specific data. The firmware will
-respond with a 4 bytes return code and up to 1KB of command specific
+respond with a 4-byte return code and up to 1KB of command-specific
data. The interface only processes a single command at a time.
The char device has the following semantics:
* A write must consist of at least 4 bytes and no more than 1028 bytes.
- The first four bytes will be interpreted as the command to run and
- the remainder will be used as the input data. A write will send the
+ The first 4 bytes will be interpreted as the Command ID and the
+ remainder will be used as the input data. A write will send the
command to the firmware to begin processing.
* Each write must be followed by exactly one read. Any double write will
produce an error.
* A read will block until the firmware completes the command and return
- the four bytes of status plus up to 1024 bytes of output data. (The
- length will be specified by the size parameter of the read call --
- reading less than 4 bytes will produce an error.
+ the 4-byte Command Return Value plus up to 1024 bytes of output
+ data. (The length will be specified by the size parameter of the read
+ call -- reading less than 4 bytes will produce an error.)
* The poll call will also be supported for userspace applications that
need to do other things while waiting for the command to complete.
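
For example, a user-space program might use the interface as follows (a
minimal sketch; the device path, the Command ID 0x42 and the payload
sizes are illustrative assumptions):

	uint8_t cmd[4 + 8] = { 0x42 };	/* hypothetical Command ID + input */
	uint8_t rsp[4 + 1024];		/* return value + output data */
	int fd, n;

	fd = open("/dev/switchtec0", O_RDWR);
	if (fd < 0)
		return -1;

	/* the first 4 bytes are the Command ID, the rest is input data */
	if (write(fd, cmd, sizeof(cmd)) != sizeof(cmd))
		return -1;

	/* blocks until the firmware completes; the first 4 bytes are
	   the Command Return Value, the remainder is output data */
	n = read(fd, rsp, sizeof(rsp));
	if (n < 4)
		return -1;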
Non-Transparent Bridge (NTB) Driver
===================================
-An NTB driver is provided for the switchtec hardware in switchtec_ntb.
-Currently, it only supports switches configured with exactly 2
-partitions. It also requires the following configuration settings:
+An NTB hardware driver is provided for the Switchtec hardware in
+ntb_hw_switchtec. Currently, it only supports switches configured with
+exactly 2 NT partitions and zero or more non-NT partitions. It also requires
+the following configuration settings:
-* Both partitions must be able to access each other's GAS spaces.
+* Both NT partitions must be able to access each other's GAS spaces.
Thus, the bits in the GAS Access Vector under Management Settings
must be set to support this.
+* Kernel configuration MUST include support for NTB (CONFIG_NTB needs
+ to be set)
+
+NT EP BAR 2 will be dynamically configured as a Direct Window, and
+the configuration file does not need to configure it explicitly.
+
+Please refer to Documentation/ntb.txt in the Linux source tree for an overall
+understanding of the Linux NTB stack. ntb_hw_switchtec works as an NTB
+Hardware Driver in this stack.
L: linux-pci@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
-F: drivers/pci/controller/dwc/*keystone*
+F: drivers/pci/controller/dwc/pci-keystone.c
PCI ENDPOINT SUBSYSTEM
M: Kishon Vijay Abraham I <kishon@ti.com>
fsl,max-link-speed = <2>;
power-domains = <&pgc_pcie_phy>;
resets = <&src IMX7_RESET_PCIEPHY>,
- <&src IMX7_RESET_PCIE_CTRL_APPS_EN>;
- reset-names = "pciephy", "apps";
+ <&src IMX7_RESET_PCIE_CTRL_APPS_EN>,
+ <&src IMX7_RESET_PCIE_CTRL_APPS_TURNOFF>;
+ reset-names = "pciephy", "apps", "turnoff";
status = "disabled";
};
};
/* Interface called from ACPI code to setup PCI host controller */
struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
{
- int node = acpi_get_node(root->device->handle);
struct acpi_pci_generic_root_info *ri;
struct pci_bus *bus, *child;
struct acpi_pci_root_ops *root_ops;
- ri = kzalloc_node(sizeof(*ri), GFP_KERNEL, node);
+ ri = kzalloc(sizeof(*ri), GFP_KERNEL);
if (!ri)
return NULL;
- root_ops = kzalloc_node(sizeof(*root_ops), GFP_KERNEL, node);
+ root_ops = kzalloc(sizeof(*root_ops), GFP_KERNEL);
if (!root_ops) {
kfree(ri);
return NULL;
struct pnv_php_slot {
struct hotplug_slot slot;
- struct hotplug_slot_info slot_info;
uint64_t id;
char *name;
int slot_no;
struct pci_dev *pdev;
struct pci_bus *bus;
bool power_state_check;
+ u8 attention_state;
void *fdt;
void *dt;
struct of_changeset ocs;
} else {
struct pci_root_info *info;
- info = kzalloc_node(sizeof(*info), GFP_KERNEL, node);
+ info = kzalloc(sizeof(*info), GFP_KERNEL);
if (!info)
dev_err(&root->device->dev,
"pci_bus %04x:%02x: ignored (out of memory)\n",
static void quirk_no_aersid(struct pci_dev *pdev)
{
/* VMD Domain */
- if (is_vmd(pdev->bus))
+ if (is_vmd(pdev->bus) && pci_is_root_bus(pdev->bus))
pdev->bus->bus_flags |= PCI_BUS_FLAGS_NO_AERSID;
}
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2030, quirk_no_aersid);
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2031, quirk_no_aersid);
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2032, quirk_no_aersid);
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2033, quirk_no_aersid);
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x334a, quirk_no_aersid);
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x334b, quirk_no_aersid);
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x334c, quirk_no_aersid);
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x334d, quirk_no_aersid);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID,
+ PCI_CLASS_BRIDGE_PCI, 8, quirk_no_aersid);
#ifdef CONFIG_PHYS_ADDR_T_64BIT
}
EXPORT_SYMBOL(acpi_pci_osc_control_set);
-static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm)
+static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm,
+ bool is_pcie)
{
u32 support, control, requested;
acpi_status status;
decode_osc_support(root, "OS supports", support);
status = acpi_pci_osc_support(root, support);
if (ACPI_FAILURE(status)) {
- dev_info(&device->dev, "_OSC failed (%s); disabling ASPM\n",
- acpi_format_exception(status));
*no_aspm = 1;
+
+ /* _OSC is optional for PCI host bridges */
+ if ((status == AE_NOT_FOUND) && !is_pcie)
+ return;
+
+ dev_info(&device->dev, "_OSC failed (%s)%s\n",
+ acpi_format_exception(status),
+ pcie_aspm_support_enabled() ? "; disabling ASPM" : "");
return;
}
acpi_handle handle = device->handle;
int no_aspm = 0;
bool hotadd = system_state == SYSTEM_RUNNING;
+ bool is_pcie;
root = kzalloc(sizeof(struct acpi_pci_root), GFP_KERNEL);
if (!root)
root->mcfg_addr = acpi_pci_root_get_mcfg_addr(handle);
- negotiate_os_control(root, &no_aspm);
+ is_pcie = strcmp(acpi_device_hid(device), "PNP0A08") == 0;
+ negotiate_os_control(root, &no_aspm, is_pcie);
/*
* TBD: Need PCI interface for enumeration/configuration of roots.
acpi_object_type type,
const union acpi_object **obj);
-/* ACPI _DSD device properties GUID: daffd814-6eba-4d8c-8a91-bc9bbf4aa301 */
-static const guid_t prp_guid =
+static const guid_t prp_guids[] = {
+ /* ACPI _DSD device properties GUID: daffd814-6eba-4d8c-8a91-bc9bbf4aa301 */
GUID_INIT(0xdaffd814, 0x6eba, 0x4d8c,
- 0x8a, 0x91, 0xbc, 0x9b, 0xbf, 0x4a, 0xa3, 0x01);
-/* ACPI _DSD data subnodes GUID: dbb8e3e6-5886-4ba6-8795-1319f52a966b */
+ 0x8a, 0x91, 0xbc, 0x9b, 0xbf, 0x4a, 0xa3, 0x01),
+ /* Hotplug in D3 GUID: 6211e2c0-58a3-4af3-90e1-927a4e0c55a4 */
+ GUID_INIT(0x6211e2c0, 0x58a3, 0x4af3,
+ 0x90, 0xe1, 0x92, 0x7a, 0x4e, 0x0c, 0x55, 0xa4),
+};
+
static const guid_t ads_guid =
GUID_INIT(0xdbb8e3e6, 0x5886, 0x4ba6,
0x87, 0x95, 0x13, 0x19, 0xf5, 0x2a, 0x96, 0x6b);
dn->name = link->package.elements[0].string.pointer;
dn->fwnode.ops = &acpi_data_fwnode_ops;
dn->parent = parent;
+ INIT_LIST_HEAD(&dn->data.properties);
INIT_LIST_HEAD(&dn->data.subnodes);
result = acpi_extract_properties(desc, &dn->data);
adev->flags.of_compatible_ok = 1;
}
+static bool acpi_is_property_guid(const guid_t *guid)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(prp_guids); i++) {
+ if (guid_equal(guid, &prp_guids[i]))
+ return true;
+ }
+
+ return false;
+}
+
+struct acpi_device_properties *
+acpi_data_add_props(struct acpi_device_data *data, const guid_t *guid,
+ const union acpi_object *properties)
+{
+ struct acpi_device_properties *props;
+
+ props = kzalloc(sizeof(*props), GFP_KERNEL);
+ if (props) {
+ INIT_LIST_HEAD(&props->list);
+ props->guid = guid;
+ props->properties = properties;
+ list_add_tail(&props->list, &data->properties);
+ }
+
+ return props;
+}
+
static bool acpi_extract_properties(const union acpi_object *desc,
struct acpi_device_data *data)
{
properties->type != ACPI_TYPE_PACKAGE)
break;
- if (!guid_equal((guid_t *)guid->buffer.pointer, &prp_guid))
+ if (!acpi_is_property_guid((guid_t *)guid->buffer.pointer))
continue;
/*
* package immediately following it.
*/
if (!acpi_properties_format_valid(properties))
- break;
+ continue;
- data->properties = properties;
- return true;
+ acpi_data_add_props(data, (const guid_t *)guid->buffer.pointer,
+ properties);
}
- return false;
+ return !list_empty(&data->properties);
}
void acpi_init_properties(struct acpi_device *adev)
acpi_status status;
bool acpi_of = false;
+ INIT_LIST_HEAD(&adev->data.properties);
INIT_LIST_HEAD(&adev->data.subnodes);
if (!adev->handle)
void acpi_free_properties(struct acpi_device *adev)
{
+ struct acpi_device_properties *props, *tmp;
+
acpi_destroy_nondev_subnodes(&adev->data.subnodes);
ACPI_FREE((void *)adev->data.pointer);
adev->data.of_compatible = NULL;
adev->data.pointer = NULL;
- adev->data.properties = NULL;
+ list_for_each_entry_safe(props, tmp, &adev->data.properties, list) {
+ list_del(&props->list);
+ kfree(props);
+ }
}
/**
const char *name, acpi_object_type type,
const union acpi_object **obj)
{
- const union acpi_object *properties;
- int i;
+ const struct acpi_device_properties *props;
if (!data || !name)
return -EINVAL;
- if (!data->pointer || !data->properties)
+ if (!data->pointer || list_empty(&data->properties))
return -EINVAL;
- properties = data->properties;
- for (i = 0; i < properties->package.count; i++) {
- const union acpi_object *propname, *propvalue;
- const union acpi_object *property;
+ list_for_each_entry(props, &data->properties, list) {
+ const union acpi_object *properties;
+ unsigned int i;
- property = &properties->package.elements[i];
+ properties = props->properties;
+ for (i = 0; i < properties->package.count; i++) {
+ const union acpi_object *propname, *propvalue;
+ const union acpi_object *property;
- propname = &property->package.elements[0];
- propvalue = &property->package.elements[1];
+ property = &properties->package.elements[i];
- if (!strcmp(name, propname->string.pointer)) {
- if (type != ACPI_TYPE_ANY && propvalue->type != type)
- return -EPROTO;
- if (obj)
- *obj = propvalue;
+ propname = &property->package.elements[0];
+ propvalue = &property->package.elements[1];
- return 0;
+ if (!strcmp(name, propname->string.pointer)) {
+ if (type != ACPI_TYPE_ANY &&
+ propvalue->type != type)
+ return -EPROTO;
+ if (obj)
+ *obj = propvalue;
+
+ return 0;
+ }
}
}
return -EINVAL;
}
WARN_ON(free_space != (void *)newprops + newsize);
- adev->data.properties = newprops;
adev->data.pointer = newprops;
+ acpi_data_add_props(&adev->data, &apple_prp_guid, newprops);
out_free:
ACPI_FREE(props);
* like others but it will lock up the whole machine HARD if
* 65536 byte PRD entry is fed. Reduce maximum segment size.
*/
- rc = pci_set_dma_max_seg_size(pdev, 65536 - 512);
+ rc = dma_set_max_seg_size(&pdev->dev, 65536 - 512);
if (rc) {
dev_err(&pdev->dev, "failed to set the maximum segment size\n");
return rc;
goto failed_enable;
pci_set_master(dev);
- pci_set_dma_max_seg_size(dev, RSXX_HW_BLK_SIZE);
+ dma_set_max_seg_size(&dev->dev, RSXX_HW_BLK_SIZE);
st = dma_set_mask(&dev->dev, DMA_BIT_MASK(64));
if (st) {
pr_err("QAT: Can't find acceleration device\n");
return PCI_ERS_RESULT_DISCONNECT;
}
- pci_cleanup_aer_uncorrect_error_status(pdev);
if (adf_dev_aer_schedule_reset(accel_dev, ADF_DEV_RESET_SYNC))
return PCI_ERS_RESULT_DISCONNECT;
static pci_ers_result_t ioat_pcie_error_slot_reset(struct pci_dev *pdev)
{
pci_ers_result_t result = PCI_ERS_RESULT_RECOVERED;
- int err;
dev_dbg(&pdev->dev, "%s post reset handling\n", DRV_NAME);
pci_wake_from_d3(pdev, false);
}
- err = pci_cleanup_aer_uncorrect_error_status(pdev);
- if (err) {
- dev_err(&pdev->dev,
- "AER uncorrect error status clear failed: %#x\n", err);
- }
-
return result;
}
bool acpi_can_fallback_to_crs(struct acpi_device *adev, const char *con_id)
{
/* Never allow fallback if the device has properties */
- if (adev->data.properties || adev->driver_gpios)
+ if (acpi_dev_has_props(adev) || adev->driver_gpios)
return false;
return con_id == NULL;
*/
#include <linux/moduleparam.h>
#include <linux/slab.h>
+#include <linux/pci-p2pdma.h>
#include <rdma/mr_pool.h>
#include <rdma/rw.h>
struct ib_device *dev = qp->pd->device;
int ret;
- ret = ib_dma_map_sg(dev, sg, sg_cnt, dir);
+ if (is_pci_p2pdma_page(sg_page(sg)))
+ ret = pci_p2pdma_map_sg(dev->dma_device, sg, sg_cnt, dir);
+ else
+ ret = ib_dma_map_sg(dev, sg, sg_cnt, dir);
+
if (!ret)
return -ENOMEM;
sg_cnt = ret;
break;
}
- ib_dma_unmap_sg(qp->pd->device, sg, sg_cnt, dir);
+ /* P2PDMA contexts do not need to be unmapped */
+ if (!is_pci_p2pdma_page(sg_page(sg)))
+ ib_dma_unmap_sg(qp->pd->device, sg, sg_cnt, dir);
}
EXPORT_SYMBOL(rdma_rw_ctx_destroy);
static void dealloc_host_sq(struct c4iw_rdev *rdev, struct t4_sq *sq)
{
dma_free_coherent(&(rdev->lldi.pdev->dev), sq->memsize, sq->queue,
- pci_unmap_addr(sq, mapping));
+ dma_unmap_addr(sq, mapping));
}
static void dealloc_sq(struct c4iw_rdev *rdev, struct t4_sq *sq)
if (!sq->queue)
return -ENOMEM;
sq->phys_addr = virt_to_phys(sq->queue);
- pci_unmap_addr_set(sq, mapping, sq->dma_addr);
+ dma_unmap_addr_set(sq, mapping, sq->dma_addr);
return 0;
}
dma_free_coherent(&rdev->lldi.pdev->dev,
wq->memsize, wq->queue,
- pci_unmap_addr(wq, mapping));
+ dma_unmap_addr(wq, mapping));
c4iw_rqtpool_free(rdev, wq->rqt_hwaddr, wq->rqt_size);
kfree(wq->sw_rq);
c4iw_put_qpid(rdev, wq->qid, uctx);
goto err_free_rqtpool;
memset(wq->queue, 0, wq->memsize);
- pci_unmap_addr_set(wq, mapping, wq->dma_addr);
+ dma_unmap_addr_set(wq, mapping, wq->dma_addr);
wq->bar2_va = c4iw_bar2_addrs(rdev, wq->qid, T4_BAR2_QTYPE_EGRESS,
&wq->bar2_qid,
err_free_queue:
dma_free_coherent(&rdev->lldi.pdev->dev,
wq->memsize, wq->queue,
- pci_unmap_addr(wq, mapping));
+ dma_unmap_addr(wq, mapping));
err_free_rqtpool:
c4iw_rqtpool_free(rdev, wq->rqt_hwaddr, wq->rqt_size);
err_free_pending_wrs:
struct t4_srq {
union t4_recv_wr *queue;
dma_addr_t dma_addr;
- DECLARE_PCI_UNMAP_ADDR(mapping);
+ DEFINE_DMA_UNMAP_ADDR(mapping);
struct t4_swrqe *sw_rq;
void __iomem *bar2_va;
u64 bar2_pa;
struct hfi1_devdata *dd = pci_get_drvdata(pdev);
dd_dev_info(dd, "HFI1 resume function called\n");
- pci_cleanup_aer_uncorrect_error_status(pdev);
/*
* Running jobs will fail, since it's asynchronous
* unlike sysfs-requested reset. Better than
struct qib_devdata *dd = pci_get_drvdata(pdev);
qib_devinfo(pdev, "QIB resume function called\n");
- pci_cleanup_aer_uncorrect_error_status(pdev);
/*
* Running jobs will fail, since it's asynchronous
* unlike sysfs-requested reset. Better than
if (!alx_reset_mac(hw))
rc = PCI_ERS_RESULT_RECOVERED;
out:
- pci_cleanup_aer_uncorrect_error_status(pdev);
-
rtnl_unlock();
return rc;
if (!(bp->flags & BNX2_FLAG_AER_ENABLED))
return result;
- err = pci_cleanup_aer_uncorrect_error_status(pdev);
- if (err) {
- dev_err(&pdev->dev,
- "pci_cleanup_aer_uncorrect_error_status failed 0x%0x\n",
- err); /* non-fatal, continue */
- }
-
return result;
}
rtnl_unlock();
- /* If AER, perform cleanup of the PCIe registers */
- if (bp->flags & AER_ENABLED) {
- if (pci_cleanup_aer_uncorrect_error_status(pdev))
- BNX2X_ERR("pci_cleanup_aer_uncorrect_error_status failed\n");
- else
- DP(NETIF_MSG_HW, "pci_cleanup_aer_uncorrect_error_status succeeded\n");
- }
-
return PCI_ERS_RESULT_RECOVERED;
}
rtnl_unlock();
- err = pci_cleanup_aer_uncorrect_error_status(pdev);
- if (err) {
- dev_err(&pdev->dev,
- "pci_cleanup_aer_uncorrect_error_status failed 0x%0x\n",
- err); /* non-fatal, continue */
- }
-
return PCI_ERS_RESULT_RECOVERED;
}
pci_set_master(pdev);
pci_restore_state(pdev);
pci_save_state(pdev);
- pci_cleanup_aer_uncorrect_error_status(pdev);
if (t4_wait_dev_ready(adap->regs) < 0)
return PCI_ERS_RESULT_DISCONNECT;
if (status)
return PCI_ERS_RESULT_DISCONNECT;
- pci_cleanup_aer_uncorrect_error_status(pdev);
be_clear_error(adapter, BE_CLEAR_ALL);
return PCI_ERS_RESULT_RECOVERED;
}
result = PCI_ERS_RESULT_RECOVERED;
}
- pci_cleanup_aer_uncorrect_error_status(pdev);
-
return result;
}
result = PCI_ERS_RESULT_RECOVERED;
}
- pci_cleanup_aer_uncorrect_error_status(pdev);
-
return result;
}
{
struct i40e_pf *pf = pci_get_drvdata(pdev);
pci_ers_result_t result;
- int err;
u32 reg;
dev_dbg(&pdev->dev, "%s\n", __func__);
result = PCI_ERS_RESULT_DISCONNECT;
}
- err = pci_cleanup_aer_uncorrect_error_status(pdev);
- if (err) {
- dev_info(&pdev->dev,
- "pci_cleanup_aer_uncorrect_error_status failed 0x%0x\n",
- err);
- /* non-fatal, continue */
- }
-
return result;
}
struct igb_adapter *adapter = netdev_priv(netdev);
struct e1000_hw *hw = &adapter->hw;
pci_ers_result_t result;
- int err;
if (pci_enable_device_mem(pdev)) {
dev_err(&pdev->dev,
result = PCI_ERS_RESULT_RECOVERED;
}
- err = pci_cleanup_aer_uncorrect_error_status(pdev);
- if (err) {
- dev_err(&pdev->dev,
- "pci_cleanup_aer_uncorrect_error_status failed 0x%0x\n",
- err);
- /* non-fatal, continue */
- }
-
return result;
}
/* Free device reference count */
pci_dev_put(vfdev);
}
-
- pci_cleanup_aer_uncorrect_error_status(pdev);
}
/*
{
struct ixgbe_adapter *adapter = pci_get_drvdata(pdev);
pci_ers_result_t result;
- int err;
if (pci_enable_device_mem(pdev)) {
e_err(probe, "Cannot re-enable PCI device after reset.\n");
result = PCI_ERS_RESULT_RECOVERED;
}
- err = pci_cleanup_aer_uncorrect_error_status(pdev);
- if (err) {
- e_dev_err("pci_cleanup_aer_uncorrect_error_status "
- "failed 0x%0x\n", err);
- /* non-fatal, continue */
- }
-
return result;
}
return err ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED;
}
-static void netxen_io_resume(struct pci_dev *pdev)
-{
- pci_cleanup_aer_uncorrect_error_status(pdev);
-}
-
static void netxen_nic_shutdown(struct pci_dev *pdev)
{
struct netxen_adapter *adapter = pci_get_drvdata(pdev);
static const struct pci_error_handlers netxen_err_handler = {
.error_detected = netxen_io_error_detected,
.slot_reset = netxen_io_slot_reset,
- .resume = netxen_io_resume,
};
static struct pci_driver netxen_driver = {
{
struct qlcnic_adapter *adapter = pci_get_drvdata(pdev);
- pci_cleanup_aer_uncorrect_error_status(pdev);
if (test_and_clear_bit(__QLCNIC_AER, &adapter->state))
qlcnic_83xx_aer_start_poll_work(adapter);
}
u32 state;
struct qlcnic_adapter *adapter = pci_get_drvdata(pdev);
- pci_cleanup_aer_uncorrect_error_status(pdev);
state = QLC_SHARED_REG_RD32(adapter, QLCNIC_CRB_DEV_STATE);
if (state == QLCNIC_DEV_READY && test_and_clear_bit(__QLCNIC_AER,
&adapter->state))
{
struct efx_nic *efx = pci_get_drvdata(pdev);
pci_ers_result_t status = PCI_ERS_RESULT_RECOVERED;
- int rc;
if (pci_enable_device(pdev)) {
netif_err(efx, hw, efx->net_dev,
status = PCI_ERS_RESULT_DISCONNECT;
}
- rc = pci_cleanup_aer_uncorrect_error_status(pdev);
- if (rc) {
- netif_err(efx, hw, efx->net_dev,
- "pci_cleanup_aer_uncorrect_error_status failed (%d)\n", rc);
- /* Non-fatal error. Continue. */
- }
-
return status;
}
{
struct ef4_nic *efx = pci_get_drvdata(pdev);
pci_ers_result_t status = PCI_ERS_RESULT_RECOVERED;
- int rc;
if (pci_enable_device(pdev)) {
netif_err(efx, hw, efx->net_dev,
status = PCI_ERS_RESULT_DISCONNECT;
}
- rc = pci_cleanup_aer_uncorrect_error_status(pdev);
- if (rc) {
- netif_err(efx, hw, efx->net_dev,
- "pci_cleanup_aer_uncorrect_error_status failed (%d)\n", rc);
- /* Non-fatal error. Continue. */
- }
-
return status;
}
ns->queue = blk_mq_init_queue(ctrl->tagset);
if (IS_ERR(ns->queue))
goto out_free_ns;
+
blk_queue_flag_set(QUEUE_FLAG_NONROT, ns->queue);
+ if (ctrl->ops->flags & NVME_F_PCI_P2PDMA)
+ blk_queue_flag_set(QUEUE_FLAG_PCI_P2PDMA, ns->queue);
+
ns->queue->queuedata = ns;
ns->ctrl = ctrl;
unsigned int flags;
#define NVME_F_FABRICS (1 << 0)
#define NVME_F_METADATA_SUPPORTED (1 << 1)
+#define NVME_F_PCI_P2PDMA (1 << 2)
int (*reg_read32)(struct nvme_ctrl *ctrl, u32 off, u32 *val);
int (*reg_write32)(struct nvme_ctrl *ctrl, u32 off, u32 val);
int (*reg_read64)(struct nvme_ctrl *ctrl, u32 off, u64 *val);
#include <linux/types.h>
#include <linux/io-64-nonatomic-lo-hi.h>
#include <linux/sed-opal.h>
+#include <linux/pci-p2pdma.h>
#include "nvme.h"
struct work_struct remove_work;
struct mutex shutdown_lock;
bool subsystem;
- void __iomem *cmb;
- pci_bus_addr_t cmb_bus_addr;
u64 cmb_size;
+ bool cmb_use_sqes;
u32 cmbsz;
u32 cmbloc;
struct nvme_ctrl ctrl;
struct nvme_dev *dev;
spinlock_t sq_lock;
struct nvme_command *sq_cmds;
- struct nvme_command __iomem *sq_cmds_io;
+ bool sq_cmds_is_io;
spinlock_t cq_lock ____cacheline_aligned_in_smp;
volatile struct nvme_completion *cqes;
struct blk_mq_tags **tags;
static void nvme_submit_cmd(struct nvme_queue *nvmeq, struct nvme_command *cmd)
{
spin_lock(&nvmeq->sq_lock);
- if (nvmeq->sq_cmds_io)
- memcpy_toio(&nvmeq->sq_cmds_io[nvmeq->sq_tail], cmd,
- sizeof(*cmd));
- else
- memcpy(&nvmeq->sq_cmds[nvmeq->sq_tail], cmd, sizeof(*cmd));
+
+ memcpy(&nvmeq->sq_cmds[nvmeq->sq_tail], cmd, sizeof(*cmd));
if (++nvmeq->sq_tail == nvmeq->q_depth)
nvmeq->sq_tail = 0;
goto out;
ret = BLK_STS_RESOURCE;
- nr_mapped = dma_map_sg_attrs(dev->dev, iod->sg, iod->nents, dma_dir,
- DMA_ATTR_NO_WARN);
+
+ if (is_pci_p2pdma_page(sg_page(iod->sg)))
+ nr_mapped = pci_p2pdma_map_sg(dev->dev, iod->sg, iod->nents,
+ dma_dir);
+ else
+ nr_mapped = dma_map_sg_attrs(dev->dev, iod->sg, iod->nents,
+ dma_dir, DMA_ATTR_NO_WARN);
if (!nr_mapped)
goto out;
DMA_TO_DEVICE : DMA_FROM_DEVICE;
if (iod->nents) {
- dma_unmap_sg(dev->dev, iod->sg, iod->nents, dma_dir);
+ /* P2PDMA requests do not need to be unmapped */
+ if (!is_pci_p2pdma_page(sg_page(iod->sg)))
+ dma_unmap_sg(dev->dev, iod->sg, iod->nents, dma_dir);
+
if (blk_integrity_rq(req))
dma_unmap_sg(dev->dev, &iod->meta_sg, 1, dma_dir);
}
{
dma_free_coherent(nvmeq->q_dmadev, CQ_SIZE(nvmeq->q_depth),
(void *)nvmeq->cqes, nvmeq->cq_dma_addr);
- if (nvmeq->sq_cmds)
- dma_free_coherent(nvmeq->q_dmadev, SQ_SIZE(nvmeq->q_depth),
- nvmeq->sq_cmds, nvmeq->sq_dma_addr);
+
+ if (nvmeq->sq_cmds) {
+ if (nvmeq->sq_cmds_is_io)
+ pci_free_p2pmem(to_pci_dev(nvmeq->q_dmadev),
+ nvmeq->sq_cmds,
+ SQ_SIZE(nvmeq->q_depth));
+ else
+ dma_free_coherent(nvmeq->q_dmadev,
+ SQ_SIZE(nvmeq->q_depth),
+ nvmeq->sq_cmds,
+ nvmeq->sq_dma_addr);
+ }
}
static void nvme_free_queues(struct nvme_dev *dev, int lowest)
static int nvme_alloc_sq_cmds(struct nvme_dev *dev, struct nvme_queue *nvmeq,
int qid, int depth)
{
- /* CMB SQEs will be mapped before creation */
- if (qid && dev->cmb && use_cmb_sqes && (dev->cmbsz & NVME_CMBSZ_SQS))
- return 0;
+ struct pci_dev *pdev = to_pci_dev(dev->dev);
+
+ if (qid && dev->cmb_use_sqes && (dev->cmbsz & NVME_CMBSZ_SQS)) {
+ nvmeq->sq_cmds = pci_alloc_p2pmem(pdev, SQ_SIZE(depth));
+ nvmeq->sq_dma_addr = pci_p2pmem_virt_to_bus(pdev,
+ nvmeq->sq_cmds);
+ nvmeq->sq_cmds_is_io = true;
+ }
+
+ if (!nvmeq->sq_cmds) {
+ nvmeq->sq_cmds = dma_alloc_coherent(dev->dev, SQ_SIZE(depth),
+ &nvmeq->sq_dma_addr, GFP_KERNEL);
+ nvmeq->sq_cmds_is_io = false;
+ }
- nvmeq->sq_cmds = dma_alloc_coherent(dev->dev, SQ_SIZE(depth),
- &nvmeq->sq_dma_addr, GFP_KERNEL);
if (!nvmeq->sq_cmds)
return -ENOMEM;
return 0;
int result;
s16 vector;
- if (dev->cmb && use_cmb_sqes && (dev->cmbsz & NVME_CMBSZ_SQS)) {
- unsigned offset = (qid - 1) * roundup(SQ_SIZE(nvmeq->q_depth),
- dev->ctrl.page_size);
- nvmeq->sq_dma_addr = dev->cmb_bus_addr + offset;
- nvmeq->sq_cmds_io = dev->cmb + offset;
- }
-
/*
* A queue's vector matches the queue identifier unless the controller
* has only one vector available.
return;
dev->cmbloc = readl(dev->bar + NVME_REG_CMBLOC);
- if (!use_cmb_sqes)
- return;
-
size = nvme_cmb_size_unit(dev) * nvme_cmb_size(dev);
offset = nvme_cmb_size_unit(dev) * NVME_CMB_OFST(dev->cmbloc);
bar = NVME_CMB_BIR(dev->cmbloc);
if (size > bar_size - offset)
size = bar_size - offset;
- dev->cmb = ioremap_wc(pci_resource_start(pdev, bar) + offset, size);
- if (!dev->cmb)
+ if (pci_p2pdma_add_resource(pdev, bar, size, offset)) {
+ dev_warn(dev->ctrl.device,
+ "failed to register the CMB\n");
return;
- dev->cmb_bus_addr = pci_bus_address(pdev, bar) + offset;
+ }
+
dev->cmb_size = size;
+ dev->cmb_use_sqes = use_cmb_sqes && (dev->cmbsz & NVME_CMBSZ_SQS);
+
+ if ((dev->cmbsz & (NVME_CMBSZ_WDS | NVME_CMBSZ_RDS)) ==
+ (NVME_CMBSZ_WDS | NVME_CMBSZ_RDS))
+ pci_p2pmem_publish(pdev, true);
if (sysfs_add_file_to_group(&dev->ctrl.device->kobj,
&dev_attr_cmb.attr, NULL))
static inline void nvme_release_cmb(struct nvme_dev *dev)
{
- if (dev->cmb) {
- iounmap(dev->cmb);
- dev->cmb = NULL;
+ if (dev->cmb_size) {
sysfs_remove_file_from_group(&dev->ctrl.device->kobj,
&dev_attr_cmb.attr, NULL);
- dev->cmbsz = 0;
+ dev->cmb_size = 0;
}
}
if (nr_io_queues == 0)
return 0;
- if (dev->cmb && (dev->cmbsz & NVME_CMBSZ_SQS)) {
+ if (dev->cmb_use_sqes) {
result = nvme_cmb_qdepth(dev, nr_io_queues,
sizeof(struct nvme_command));
if (result > 0)
dev->q_depth = result;
else
- nvme_release_cmb(dev);
+ dev->cmb_use_sqes = false;
}
do {
static const struct nvme_ctrl_ops nvme_pci_ctrl_ops = {
.name = "pcie",
.module = THIS_MODULE,
- .flags = NVME_F_METADATA_SUPPORTED,
+ .flags = NVME_F_METADATA_SUPPORTED |
+ NVME_F_PCI_P2PDMA,
.reg_read32 = nvme_pci_reg_read32,
.reg_write32 = nvme_pci_reg_write32,
.reg_read64 = nvme_pci_reg_read64,
struct nvme_dev *dev = pci_get_drvdata(pdev);
flush_work(&dev->ctrl.reset_work);
- pci_cleanup_aer_uncorrect_error_status(pdev);
}
static const struct pci_error_handlers nvme_err_handler = {
#include <linux/slab.h>
#include <linux/stat.h>
#include <linux/ctype.h>
+#include <linux/pci.h>
+#include <linux/pci-p2pdma.h>
#include "nvmet.h"
CONFIGFS_ATTR(nvmet_ns_, device_path);
+#ifdef CONFIG_PCI_P2PDMA
+static ssize_t nvmet_ns_p2pmem_show(struct config_item *item, char *page)
+{
+ struct nvmet_ns *ns = to_nvmet_ns(item);
+
+ return pci_p2pdma_enable_show(page, ns->p2p_dev, ns->use_p2pmem);
+}
+
+static ssize_t nvmet_ns_p2pmem_store(struct config_item *item,
+ const char *page, size_t count)
+{
+ struct nvmet_ns *ns = to_nvmet_ns(item);
+ struct pci_dev *p2p_dev = NULL;
+ bool use_p2pmem;
+ int ret = count;
+ int error;
+
+ mutex_lock(&ns->subsys->lock);
+ if (ns->enabled) {
+ ret = -EBUSY;
+ goto out_unlock;
+ }
+
+ error = pci_p2pdma_enable_store(page, &p2p_dev, &use_p2pmem);
+ if (error) {
+ ret = error;
+ goto out_unlock;
+ }
+
+ ns->use_p2pmem = use_p2pmem;
+ pci_dev_put(ns->p2p_dev);
+ ns->p2p_dev = p2p_dev;
+
+out_unlock:
+ mutex_unlock(&ns->subsys->lock);
+
+ return ret;
+}
+
+CONFIGFS_ATTR(nvmet_ns_, p2pmem);
+#endif /* CONFIG_PCI_P2PDMA */
+
static ssize_t nvmet_ns_device_uuid_show(struct config_item *item, char *page)
{
return sprintf(page, "%pUb\n", &to_nvmet_ns(item)->uuid);
&nvmet_ns_attr_ana_grpid,
&nvmet_ns_attr_enable,
&nvmet_ns_attr_buffered_io,
+#ifdef CONFIG_PCI_P2PDMA
+ &nvmet_ns_attr_p2pmem,
+#endif
NULL,
};
#include <linux/module.h>
#include <linux/random.h>
#include <linux/rculist.h>
+#include <linux/pci-p2pdma.h>
#include "nvmet.h"
nvmet_file_ns_disable(ns);
}
+static int nvmet_p2pmem_ns_enable(struct nvmet_ns *ns)
+{
+ int ret;
+ struct pci_dev *p2p_dev;
+
+ if (!ns->use_p2pmem)
+ return 0;
+
+ if (!ns->bdev) {
+ pr_err("peer-to-peer DMA is not supported by non-block device namespaces\n");
+ return -EINVAL;
+ }
+
+ if (!blk_queue_pci_p2pdma(ns->bdev->bd_queue)) {
+ pr_err("peer-to-peer DMA is not supported by the driver of %s\n",
+ ns->device_path);
+ return -EINVAL;
+ }
+
+ if (ns->p2p_dev) {
+ ret = pci_p2pdma_distance(ns->p2p_dev, nvmet_ns_dev(ns), true);
+ if (ret < 0)
+ return -EINVAL;
+ } else {
+ /*
+ * Right now we just check that there is p2pmem available so
+ * we can report an error to the user right away if there
+ * is not. We'll find the actual device to use once we
+ * set up the controller when the port's device is available.
+ */
+
+ p2p_dev = pci_p2pmem_find(nvmet_ns_dev(ns));
+ if (!p2p_dev) {
+ pr_err("no peer-to-peer memory is available for %s\n",
+ ns->device_path);
+ return -EINVAL;
+ }
+
+ pci_dev_put(p2p_dev);
+ }
+
+ return 0;
+}
+
+/*
+ * Note: ctrl->subsys->lock should be held when calling this function
+ */
+static void nvmet_p2pmem_ns_add_p2p(struct nvmet_ctrl *ctrl,
+ struct nvmet_ns *ns)
+{
+ struct device *clients[2];
+ struct pci_dev *p2p_dev;
+ int ret;
+
+ if (!ctrl->p2p_client)
+ return;
+
+ if (ns->p2p_dev) {
+ ret = pci_p2pdma_distance(ns->p2p_dev, ctrl->p2p_client, true);
+ if (ret < 0)
+ return;
+
+ p2p_dev = pci_dev_get(ns->p2p_dev);
+ } else {
+ clients[0] = ctrl->p2p_client;
+ clients[1] = nvmet_ns_dev(ns);
+
+ p2p_dev = pci_p2pmem_find_many(clients, ARRAY_SIZE(clients));
+ if (!p2p_dev) {
+ pr_err("no peer-to-peer memory is available that's supported by %s and %s\n",
+ dev_name(ctrl->p2p_client), ns->device_path);
+ return;
+ }
+ }
+
+ ret = radix_tree_insert(&ctrl->p2p_ns_map, ns->nsid, p2p_dev);
+ if (ret < 0)
+ pci_dev_put(p2p_dev);
+
+ pr_info("using p2pmem on %s for nsid %d\n", pci_name(p2p_dev),
+ ns->nsid);
+}
+
int nvmet_ns_enable(struct nvmet_ns *ns)
{
struct nvmet_subsys *subsys = ns->subsys;
+ struct nvmet_ctrl *ctrl;
int ret;
mutex_lock(&subsys->lock);
if (ret)
goto out_unlock;
+ ret = nvmet_p2pmem_ns_enable(ns);
+ if (ret)
+ goto out_unlock;
+
+ list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry)
+ nvmet_p2pmem_ns_add_p2p(ctrl, ns);
+
ret = percpu_ref_init(&ns->ref, nvmet_destroy_namespace,
0, GFP_KERNEL);
if (ret)
mutex_unlock(&subsys->lock);
return ret;
out_dev_put:
+ list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry)
+ pci_dev_put(radix_tree_delete(&ctrl->p2p_ns_map, ns->nsid));
+
nvmet_ns_dev_disable(ns);
goto out_unlock;
}
void nvmet_ns_disable(struct nvmet_ns *ns)
{
struct nvmet_subsys *subsys = ns->subsys;
+ struct nvmet_ctrl *ctrl;
mutex_lock(&subsys->lock);
if (!ns->enabled)
list_del_rcu(&ns->dev_link);
if (ns->nsid == subsys->max_nsid)
subsys->max_nsid = nvmet_max_nsid(subsys);
+
+ list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry)
+ pci_dev_put(radix_tree_delete(&ctrl->p2p_ns_map, ns->nsid));
+
mutex_unlock(&subsys->lock);
/*
percpu_ref_exit(&ns->ref);
mutex_lock(&subsys->lock);
+
subsys->nr_namespaces--;
nvmet_ns_changed(subsys, ns->nsid);
nvmet_ns_dev_disable(ns);
}
EXPORT_SYMBOL_GPL(nvmet_req_execute);
+int nvmet_req_alloc_sgl(struct nvmet_req *req)
+{
+ struct pci_dev *p2p_dev = NULL;
+
+ if (IS_ENABLED(CONFIG_PCI_P2PDMA)) {
+ if (req->sq->ctrl && req->ns)
+ p2p_dev = radix_tree_lookup(&req->sq->ctrl->p2p_ns_map,
+ req->ns->nsid);
+
+ req->p2p_dev = NULL;
+ if (req->sq->qid && p2p_dev) {
+ req->sg = pci_p2pmem_alloc_sgl(p2p_dev, &req->sg_cnt,
+ req->transfer_len);
+ if (req->sg) {
+ req->p2p_dev = p2p_dev;
+ return 0;
+ }
+ }
+
+ /*
+ * If no P2P memory was available, we fall back to using
+ * regular memory.
+ */
+ }
+
+ req->sg = sgl_alloc(req->transfer_len, GFP_KERNEL, &req->sg_cnt);
+ if (!req->sg)
+ return -ENOMEM;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(nvmet_req_alloc_sgl);
+
+void nvmet_req_free_sgl(struct nvmet_req *req)
+{
+ if (req->p2p_dev)
+ pci_p2pmem_free_sgl(req->p2p_dev, req->sg);
+ else
+ sgl_free(req->sg);
+
+ req->sg = NULL;
+ req->sg_cnt = 0;
+}
+EXPORT_SYMBOL_GPL(nvmet_req_free_sgl);
+
static inline bool nvmet_cc_en(u32 cc)
{
return (cc >> NVME_CC_EN_SHIFT) & 0x1;
return __nvmet_host_allowed(subsys, hostnqn);
}
+/*
+ * Note: ctrl->subsys->lock should be held when calling this function
+ */
+static void nvmet_setup_p2p_ns_map(struct nvmet_ctrl *ctrl,
+ struct nvmet_req *req)
+{
+ struct nvmet_ns *ns;
+
+ if (!req->p2p_client)
+ return;
+
+ ctrl->p2p_client = get_device(req->p2p_client);
+
+ list_for_each_entry_rcu(ns, &ctrl->subsys->namespaces, dev_link)
+ nvmet_p2pmem_ns_add_p2p(ctrl, ns);
+}
+
+/*
+ * Note: ctrl->subsys->lock should be held when calling this function
+ */
+static void nvmet_release_p2p_ns_map(struct nvmet_ctrl *ctrl)
+{
+ struct radix_tree_iter iter;
+ void __rcu **slot;
+
+ radix_tree_for_each_slot(slot, &ctrl->p2p_ns_map, &iter, 0)
+ pci_dev_put(radix_tree_deref_slot(slot));
+
+ put_device(ctrl->p2p_client);
+}
+
u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
struct nvmet_req *req, u32 kato, struct nvmet_ctrl **ctrlp)
{
INIT_WORK(&ctrl->async_event_work, nvmet_async_event_work);
INIT_LIST_HEAD(&ctrl->async_events);
+ INIT_RADIX_TREE(&ctrl->p2p_ns_map, GFP_KERNEL);
memcpy(ctrl->subsysnqn, subsysnqn, NVMF_NQN_SIZE);
memcpy(ctrl->hostnqn, hostnqn, NVMF_NQN_SIZE);
mutex_lock(&subsys->lock);
list_add_tail(&ctrl->subsys_entry, &subsys->ctrls);
+ nvmet_setup_p2p_ns_map(ctrl, req);
mutex_unlock(&subsys->lock);
*ctrlp = ctrl;
struct nvmet_subsys *subsys = ctrl->subsys;
mutex_lock(&subsys->lock);
+ nvmet_release_p2p_ns_map(ctrl);
list_del(&ctrl->subsys_entry);
mutex_unlock(&subsys->lock);
op = REQ_OP_READ;
}
+ if (is_pci_p2pdma_page(sg_page(req->sg)))
+ op_flags |= REQ_NOMERGE;
+
sector = le64_to_cpu(req->cmd->rw.slba);
sector <<= (req->ns->blksize_shift - 9);
#include <linux/configfs.h>
#include <linux/rcupdate.h>
#include <linux/blkdev.h>
+#include <linux/radix-tree.h>
#define NVMET_ASYNC_EVENTS 4
#define NVMET_ERROR_LOG_SLOTS 128
struct completion disable_done;
mempool_t *bvec_pool;
struct kmem_cache *bvec_cache;
+
+ int use_p2pmem;
+ struct pci_dev *p2p_dev;
};
static inline struct nvmet_ns *to_nvmet_ns(struct config_item *item)
return container_of(to_config_group(item), struct nvmet_ns, group);
}
+static inline struct device *nvmet_ns_dev(struct nvmet_ns *ns)
+{
+ return ns->bdev ? disk_to_dev(ns->bdev->bd_disk) : NULL;
+}
+
struct nvmet_cq {
u16 qid;
u16 size;
char subsysnqn[NVMF_NQN_FIELD_LEN];
char hostnqn[NVMF_NQN_FIELD_LEN];
+
+ struct device *p2p_client;
+ struct radix_tree_root p2p_ns_map;
};
struct nvmet_subsys {
void (*execute)(struct nvmet_req *req);
const struct nvmet_fabrics_ops *ops;
+
+ struct pci_dev *p2p_dev;
+ struct device *p2p_client;
};
extern struct workqueue_struct *buffered_io_wq;
void nvmet_req_uninit(struct nvmet_req *req);
void nvmet_req_execute(struct nvmet_req *req);
void nvmet_req_complete(struct nvmet_req *req, u16 status);
+int nvmet_req_alloc_sgl(struct nvmet_req *req);
+void nvmet_req_free_sgl(struct nvmet_req *req);
void nvmet_cq_setup(struct nvmet_ctrl *ctrl, struct nvmet_cq *cq, u16 qid,
u16 size);
}
if (rsp->req.sg != rsp->cmd->inline_sg)
- sgl_free(rsp->req.sg);
+ nvmet_req_free_sgl(&rsp->req);
if (unlikely(!list_empty_careful(&queue->rsp_wr_wait_list)))
nvmet_rdma_process_wr_wait_list(queue);
{
struct rdma_cm_id *cm_id = rsp->queue->cm_id;
u64 addr = le64_to_cpu(sgl->addr);
- u32 len = get_unaligned_le24(sgl->length);
u32 key = get_unaligned_le32(sgl->key);
int ret;
+ rsp->req.transfer_len = get_unaligned_le24(sgl->length);
+
/* no data command? */
- if (!len)
+ if (!rsp->req.transfer_len)
return 0;
- rsp->req.sg = sgl_alloc(len, GFP_KERNEL, &rsp->req.sg_cnt);
- if (!rsp->req.sg)
- return NVME_SC_INTERNAL;
+ ret = nvmet_req_alloc_sgl(&rsp->req);
+ if (ret < 0)
+ goto error_out;
ret = rdma_rw_ctx_init(&rsp->rw, cm_id->qp, cm_id->port_num,
rsp->req.sg, rsp->req.sg_cnt, 0, addr, key,
nvmet_data_dir(&rsp->req));
if (ret < 0)
- return NVME_SC_INTERNAL;
- rsp->req.transfer_len += len;
+ goto error_out;
rsp->n_rdma += ret;
if (invalidate) {
}
return 0;
+
+error_out:
+ rsp->req.transfer_len = 0;
+ return NVME_SC_INTERNAL;
}
static u16 nvmet_rdma_map_sgl(struct nvmet_rdma_rsp *rsp)
cmd->send_sge.addr, cmd->send_sge.length,
DMA_TO_DEVICE);
+ cmd->req.p2p_client = &queue->dev->device->dev;
+
if (!nvmet_req_init(&cmd->req, &queue->nvme_cq,
&queue->nvme_sq, &nvmet_rdma_ops))
return;
config PCI_LOCKLESS_CONFIG
bool
+config PCI_BRIDGE_EMUL
+ bool
+
config PCI_IOV
bool "PCI IOV support"
depends on PCI
If unsure, say N.
+config PCI_P2PDMA
+ bool "PCI peer-to-peer transfer support"
+ depends on PCI && ZONE_DEVICE
+ select GENERIC_ALLOCATOR
+ help
+ Enables drivers to do PCI peer-to-peer transactions to and from
+ BARs that are exposed in other devices that are part of
+ the hierarchy where peer-to-peer DMA is guaranteed by the PCI
+ specification to work (i.e. anything below a single PCI bridge).
+
+ Many PCIe root complexes do not support P2P transactions and
+ it's hard to tell which ones support it at all, so at this time,
+ P2P DMA transactions must be between devices behind the same root
+ port.
+
+ If unsure, say N.
+
config PCI_LABEL
def_bool y if (DMI || ACPI)
depends on PCI
obj-$(CONFIG_PCI_MSI) += msi.o
obj-$(CONFIG_PCI_ATS) += ats.o
obj-$(CONFIG_PCI_IOV) += iov.o
+obj-$(CONFIG_PCI_BRIDGE_EMUL) += pci-bridge-emul.o
obj-$(CONFIG_ACPI) += pci-acpi.o
obj-$(CONFIG_PCI_LABEL) += pci-label.o
obj-$(CONFIG_X86_INTEL_MID) += pci-mid.o
obj-$(CONFIG_PCI_STUB) += pci-stub.o
obj-$(CONFIG_PCI_PF_STUB) += pci-pf-stub.o
obj-$(CONFIG_PCI_ECAM) += ecam.o
+obj-$(CONFIG_PCI_P2PDMA) += p2pdma.o
obj-$(CONFIG_XEN_PCIDEV_FRONTEND) += xen-pcifront.o
# Endpoint library must be initialized before its users
#endif
#define PCI_OP_READ(size, type, len) \
-int pci_bus_read_config_##size \
+int noinline pci_bus_read_config_##size \
(struct pci_bus *bus, unsigned int devfn, int pos, type *value) \
{ \
int res; \
}
#define PCI_OP_WRITE(size, type, len) \
-int pci_bus_write_config_##size \
+int noinline pci_bus_write_config_##size \
(struct pci_bus *bus, unsigned int devfn, int pos, type value) \
{ \
int res; \
depends on MVEBU_MBUS
depends on ARM
depends on OF
+ select PCI_BRIDGE_EMUL
config PCI_AARDVARK
bool "Aardvark PCIe controller"
depends on (ARCH_MVEBU && ARM64) || COMPILE_TEST
depends on OF
depends on PCI_MSI_IRQ_DOMAIN
+ select PCI_BRIDGE_EMUL
help
Add support for Aardvark 64bit PCIe Host Controller. This
controller is part of the South Bridge of the Marvel Armada
available to support GEN2 with 4 slots.
config PCIE_MEDIATEK
- bool "MediaTek PCIe controller"
+ tristate "MediaTek PCIe controller"
depends on ARCH_MEDIATEK || COMPILE_TEST
depends on OF
depends on PCI_MSI_IRQ_DOMAIN
obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o
obj-$(CONFIG_PCI_IMX6) += pci-imx6.o
obj-$(CONFIG_PCIE_SPEAR13XX) += pcie-spear13xx.o
-obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone-dw.o pci-keystone.o
+obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone.o
obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o
obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o
obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o
};
/*
- * dra7xx_pcie_ep_unaligned_memaccess: workaround for AM572x/AM571x Errata i870
+ * dra7xx_pcie_unaligned_memaccess: workaround for AM572x/AM571x Errata i870
* @dra7xx: the dra7xx device where the workaround should be applied
*
* Accesses to the PCIe slave port that are not 32-bit aligned will result
*
* To avoid this issue set PCIE_SS1_AXI2OCP_LEGACY_MODE_ENABLE to 1.
*/
-static int dra7xx_pcie_ep_unaligned_memaccess(struct device *dev)
+static int dra7xx_pcie_unaligned_memaccess(struct device *dev)
{
int ret;
struct device_node *np = dev->of_node;
dra7xx_pcie_writel(dra7xx, PCIECTRL_TI_CONF_DEVICE_TYPE,
DEVICE_TYPE_RC);
+
+ ret = dra7xx_pcie_unaligned_memaccess(dev);
+ if (ret)
+ dev_err(dev, "WA for Errata i870 not applied\n");
+
ret = dra7xx_add_pcie_port(dra7xx, pdev);
if (ret < 0)
goto err_gpio;
dra7xx_pcie_writel(dra7xx, PCIECTRL_TI_CONF_DEVICE_TYPE,
DEVICE_TYPE_EP);
- ret = dra7xx_pcie_ep_unaligned_memaccess(dev);
+ ret = dra7xx_pcie_unaligned_memaccess(dev);
if (ret)
goto err_gpio;
struct regmap *iomuxc_gpr;
struct reset_control *pciephy_reset;
struct reset_control *apps_reset;
+ struct reset_control *turnoff_reset;
enum imx6_pcie_variants variant;
u32 tx_deemph_gen1;
u32 tx_deemph_gen2_3p5db;
#define PORT_LOGIC_SPEED_CHANGE (0x1 << 17)
/* PHY registers (not memory-mapped) */
+#define PCIE_PHY_ATEOVRD 0x10
+#define PCIE_PHY_ATEOVRD_EN (0x1 << 2)
+#define PCIE_PHY_ATEOVRD_REF_CLKDIV_SHIFT 0
+#define PCIE_PHY_ATEOVRD_REF_CLKDIV_MASK 0x1
+
+#define PCIE_PHY_MPLL_OVRD_IN_LO 0x11
+#define PCIE_PHY_MPLL_MULTIPLIER_SHIFT 2
+#define PCIE_PHY_MPLL_MULTIPLIER_MASK 0x7f
+#define PCIE_PHY_MPLL_MULTIPLIER_OVRD (0x1 << 9)
+
#define PCIE_PHY_RX_ASIC_OUT 0x100D
#define PCIE_PHY_RX_ASIC_OUT_VALID (1 << 0)
IMX6Q_GPR12_DEVICE_TYPE, PCI_EXP_TYPE_ROOT_PORT << 12);
}
+static int imx6_setup_phy_mpll(struct imx6_pcie *imx6_pcie)
+{
+ unsigned long phy_rate = clk_get_rate(imx6_pcie->pcie_phy);
+ int mult, div;
+ u32 val;
+
+ switch (phy_rate) {
+ case 125000000:
+ /*
+ * The default settings of the MPLL are for a 125MHz input
+ * clock, so no need to reconfigure anything in that case.
+ */
+ return 0;
+ case 100000000:
+ mult = 25;
+ div = 0;
+ break;
+ case 200000000:
+ mult = 25;
+ div = 1;
+ break;
+ default:
+ dev_err(imx6_pcie->pci->dev,
+ "Unsupported PHY reference clock rate %lu\n", phy_rate);
+ return -EINVAL;
+ }
+
+ pcie_phy_read(imx6_pcie, PCIE_PHY_MPLL_OVRD_IN_LO, &val);
+ val &= ~(PCIE_PHY_MPLL_MULTIPLIER_MASK <<
+ PCIE_PHY_MPLL_MULTIPLIER_SHIFT);
+ val |= mult << PCIE_PHY_MPLL_MULTIPLIER_SHIFT;
+ val |= PCIE_PHY_MPLL_MULTIPLIER_OVRD;
+ pcie_phy_write(imx6_pcie, PCIE_PHY_MPLL_OVRD_IN_LO, val);
+
+ pcie_phy_read(imx6_pcie, PCIE_PHY_ATEOVRD, &val);
+ val &= ~(PCIE_PHY_ATEOVRD_REF_CLKDIV_MASK <<
+ PCIE_PHY_ATEOVRD_REF_CLKDIV_SHIFT);
+ val |= div << PCIE_PHY_ATEOVRD_REF_CLKDIV_SHIFT;
+ val |= PCIE_PHY_ATEOVRD_EN;
+ pcie_phy_write(imx6_pcie, PCIE_PHY_ATEOVRD, val);
+
+ return 0;
+}
+
static int imx6_pcie_wait_for_link(struct imx6_pcie *imx6_pcie)
{
struct dw_pcie *pci = imx6_pcie->pci;
return -EINVAL;
}
+static void imx6_pcie_ltssm_enable(struct device *dev)
+{
+ struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev);
+
+ switch (imx6_pcie->variant) {
+ case IMX6Q:
+ case IMX6SX:
+ case IMX6QP:
+ regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
+ IMX6Q_GPR12_PCIE_CTL_2,
+ IMX6Q_GPR12_PCIE_CTL_2);
+ break;
+ case IMX7D:
+ reset_control_deassert(imx6_pcie->apps_reset);
+ break;
+ }
+}
+
static int imx6_pcie_establish_link(struct imx6_pcie *imx6_pcie)
{
struct dw_pcie *pci = imx6_pcie->pci;
dw_pcie_writel_dbi(pci, PCIE_RC_LCR, tmp);
/* Start LTSSM. */
- if (imx6_pcie->variant == IMX7D)
- reset_control_deassert(imx6_pcie->apps_reset);
- else
- regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
- IMX6Q_GPR12_PCIE_CTL_2, 1 << 10);
+ imx6_pcie_ltssm_enable(dev);
ret = imx6_pcie_wait_for_link(imx6_pcie);
if (ret)
imx6_pcie_assert_core_reset(imx6_pcie);
imx6_pcie_init_phy(imx6_pcie);
imx6_pcie_deassert_core_reset(imx6_pcie);
+ imx6_setup_phy_mpll(imx6_pcie);
dw_pcie_setup_rc(pp);
imx6_pcie_establish_link(imx6_pcie);
.link_up = imx6_pcie_link_up,
};
+#ifdef CONFIG_PM_SLEEP
+static void imx6_pcie_ltssm_disable(struct device *dev)
+{
+ struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev);
+
+ switch (imx6_pcie->variant) {
+ case IMX6SX:
+ case IMX6QP:
+ regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
+ IMX6Q_GPR12_PCIE_CTL_2, 0);
+ break;
+ case IMX7D:
+ reset_control_assert(imx6_pcie->apps_reset);
+ break;
+ default:
+ dev_err(dev, "ltssm_disable not supported\n");
+ }
+}
+
+static void imx6_pcie_pm_turnoff(struct imx6_pcie *imx6_pcie)
+{
+ reset_control_assert(imx6_pcie->turnoff_reset);
+ reset_control_deassert(imx6_pcie->turnoff_reset);
+
+ /*
+ * Components with an upstream port must respond to
+ * PME_Turn_Off with PME_TO_Ack but we can't check.
+ *
+ * The standard recommends a 1-10ms timeout after which to
+ * proceed anyway as if acks were received.
+ */
+ usleep_range(1000, 10000);
+}
+
+static void imx6_pcie_clk_disable(struct imx6_pcie *imx6_pcie)
+{
+ clk_disable_unprepare(imx6_pcie->pcie);
+ clk_disable_unprepare(imx6_pcie->pcie_phy);
+ clk_disable_unprepare(imx6_pcie->pcie_bus);
+
+ if (imx6_pcie->variant == IMX7D) {
+ regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
+ IMX7D_GPR12_PCIE_PHY_REFCLK_SEL,
+ IMX7D_GPR12_PCIE_PHY_REFCLK_SEL);
+ }
+}
+
+static int imx6_pcie_suspend_noirq(struct device *dev)
+{
+ struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev);
+
+ if (imx6_pcie->variant != IMX7D)
+ return 0;
+
+ imx6_pcie_pm_turnoff(imx6_pcie);
+ imx6_pcie_clk_disable(imx6_pcie);
+ imx6_pcie_ltssm_disable(dev);
+
+ return 0;
+}
+
+static int imx6_pcie_resume_noirq(struct device *dev)
+{
+ int ret;
+ struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev);
+ struct pcie_port *pp = &imx6_pcie->pci->pp;
+
+ if (imx6_pcie->variant != IMX7D)
+ return 0;
+
+ imx6_pcie_assert_core_reset(imx6_pcie);
+ imx6_pcie_init_phy(imx6_pcie);
+ imx6_pcie_deassert_core_reset(imx6_pcie);
+ dw_pcie_setup_rc(pp);
+
+ ret = imx6_pcie_establish_link(imx6_pcie);
+ if (ret < 0)
+ dev_info(dev, "pcie link is down after resume.\n");
+
+ return 0;
+}
+#endif
+
+static const struct dev_pm_ops imx6_pcie_pm_ops = {
+ SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(imx6_pcie_suspend_noirq,
+ imx6_pcie_resume_noirq)
+};
+
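SET_NOIRQ_SYSTEM_SLEEP_PM_OPS() hooks the two callbacks into every noirq
sleep transition. A rough open-coded equivalent when CONFIG_PM_SLEEP is
enabled (a sketch of the macro's effect, not a suggested replacement):

	static const struct dev_pm_ops imx6_pcie_pm_ops_open_coded = {
		.suspend_noirq	= imx6_pcie_suspend_noirq,
		.freeze_noirq	= imx6_pcie_suspend_noirq,
		.poweroff_noirq	= imx6_pcie_suspend_noirq,
		.resume_noirq	= imx6_pcie_resume_noirq,
		.thaw_noirq	= imx6_pcie_resume_noirq,
		.restore_noirq	= imx6_pcie_resume_noirq,
	};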
static int imx6_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
break;
}
+ /* Grab turnoff reset */
+ imx6_pcie->turnoff_reset = devm_reset_control_get_optional_exclusive(dev, "turnoff");
+ if (IS_ERR(imx6_pcie->turnoff_reset)) {
+ dev_err(dev, "Failed to get TURNOFF reset control\n");
+ return PTR_ERR(imx6_pcie->turnoff_reset);
+ }
+
/* Grab GPR config register range */
imx6_pcie->iomuxc_gpr =
syscon_regmap_lookup_by_compatible("fsl,imx6q-iomuxc-gpr");
.name = "imx6q-pcie",
.of_match_table = imx6_pcie_of_match,
.suppress_bind_attrs = true,
+ .pm = &imx6_pcie_pm_ops,
},
.probe = imx6_pcie_probe,
.shutdown = imx6_pcie_shutdown,
+++ /dev/null
-// SPDX-License-Identifier: GPL-2.0
-/*
- * DesignWare application register space functions for Keystone PCI controller
- *
- * Copyright (C) 2013-2014 Texas Instruments., Ltd.
- * http://www.ti.com
- *
- * Author: Murali Karicheri <m-karicheri2@ti.com>
- */
-
-#include <linux/irq.h>
-#include <linux/irqdomain.h>
-#include <linux/irqreturn.h>
-#include <linux/module.h>
-#include <linux/of.h>
-#include <linux/of_pci.h>
-#include <linux/pci.h>
-#include <linux/platform_device.h>
-
-#include "pcie-designware.h"
-#include "pci-keystone.h"
-
-/* Application register defines */
-#define LTSSM_EN_VAL 1
-#define LTSSM_STATE_MASK 0x1f
-#define LTSSM_STATE_L0 0x11
-#define DBI_CS2_EN_VAL 0x20
-#define OB_XLAT_EN_VAL 2
-
-/* Application registers */
-#define CMD_STATUS 0x004
-#define CFG_SETUP 0x008
-#define OB_SIZE 0x030
-#define CFG_PCIM_WIN_SZ_IDX 3
-#define CFG_PCIM_WIN_CNT 32
-#define SPACE0_REMOTE_CFG_OFFSET 0x1000
-#define OB_OFFSET_INDEX(n) (0x200 + (8 * n))
-#define OB_OFFSET_HI(n) (0x204 + (8 * n))
-
-/* IRQ register defines */
-#define IRQ_EOI 0x050
-#define IRQ_STATUS 0x184
-#define IRQ_ENABLE_SET 0x188
-#define IRQ_ENABLE_CLR 0x18c
-
-#define MSI_IRQ 0x054
-#define MSI0_IRQ_STATUS 0x104
-#define MSI0_IRQ_ENABLE_SET 0x108
-#define MSI0_IRQ_ENABLE_CLR 0x10c
-#define IRQ_STATUS 0x184
-#define MSI_IRQ_OFFSET 4
-
-/* Error IRQ bits */
-#define ERR_AER BIT(5) /* ECRC error */
-#define ERR_AXI BIT(4) /* AXI tag lookup fatal error */
-#define ERR_CORR BIT(3) /* Correctable error */
-#define ERR_NONFATAL BIT(2) /* Non-fatal error */
-#define ERR_FATAL BIT(1) /* Fatal error */
-#define ERR_SYS BIT(0) /* System (fatal, non-fatal, or correctable) */
-#define ERR_IRQ_ALL (ERR_AER | ERR_AXI | ERR_CORR | \
- ERR_NONFATAL | ERR_FATAL | ERR_SYS)
-#define ERR_FATAL_IRQ (ERR_FATAL | ERR_AXI)
-#define ERR_IRQ_STATUS_RAW 0x1c0
-#define ERR_IRQ_STATUS 0x1c4
-#define ERR_IRQ_ENABLE_SET 0x1c8
-#define ERR_IRQ_ENABLE_CLR 0x1cc
-
-/* Config space registers */
-#define DEBUG0 0x728
-
-#define to_keystone_pcie(x) dev_get_drvdata((x)->dev)
-
-static inline void update_reg_offset_bit_pos(u32 offset, u32 *reg_offset,
- u32 *bit_pos)
-{
- *reg_offset = offset % 8;
- *bit_pos = offset >> 3;
-}
-
-phys_addr_t ks_dw_pcie_get_msi_addr(struct pcie_port *pp)
-{
- struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
- struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
-
- return ks_pcie->app.start + MSI_IRQ;
-}
-
-static u32 ks_dw_app_readl(struct keystone_pcie *ks_pcie, u32 offset)
-{
- return readl(ks_pcie->va_app_base + offset);
-}
-
-static void ks_dw_app_writel(struct keystone_pcie *ks_pcie, u32 offset, u32 val)
-{
- writel(val, ks_pcie->va_app_base + offset);
-}
-
-void ks_dw_pcie_handle_msi_irq(struct keystone_pcie *ks_pcie, int offset)
-{
- struct dw_pcie *pci = ks_pcie->pci;
- struct pcie_port *pp = &pci->pp;
- struct device *dev = pci->dev;
- u32 pending, vector;
- int src, virq;
-
- pending = ks_dw_app_readl(ks_pcie, MSI0_IRQ_STATUS + (offset << 4));
-
- /*
- * MSI0 status bit 0-3 shows vectors 0, 8, 16, 24, MSI1 status bit
- * shows 1, 9, 17, 25 and so forth
- */
- for (src = 0; src < 4; src++) {
- if (BIT(src) & pending) {
- vector = offset + (src << 3);
- virq = irq_linear_revmap(pp->irq_domain, vector);
- dev_dbg(dev, "irq: bit %d, vector %d, virq %d\n",
- src, vector, virq);
- generic_handle_irq(virq);
- }
- }
-}
-
-void ks_dw_pcie_msi_irq_ack(int irq, struct pcie_port *pp)
-{
- u32 reg_offset, bit_pos;
- struct keystone_pcie *ks_pcie;
- struct dw_pcie *pci;
-
- pci = to_dw_pcie_from_pp(pp);
- ks_pcie = to_keystone_pcie(pci);
- update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos);
-
- ks_dw_app_writel(ks_pcie, MSI0_IRQ_STATUS + (reg_offset << 4),
- BIT(bit_pos));
- ks_dw_app_writel(ks_pcie, IRQ_EOI, reg_offset + MSI_IRQ_OFFSET);
-}
-
-void ks_dw_pcie_msi_set_irq(struct pcie_port *pp, int irq)
-{
- u32 reg_offset, bit_pos;
- struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
- struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
-
- update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos);
- ks_dw_app_writel(ks_pcie, MSI0_IRQ_ENABLE_SET + (reg_offset << 4),
- BIT(bit_pos));
-}
-
-void ks_dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq)
-{
- u32 reg_offset, bit_pos;
- struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
- struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
-
- update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos);
- ks_dw_app_writel(ks_pcie, MSI0_IRQ_ENABLE_CLR + (reg_offset << 4),
- BIT(bit_pos));
-}
-
-int ks_dw_pcie_msi_host_init(struct pcie_port *pp)
-{
- return dw_pcie_allocate_domains(pp);
-}
-
-void ks_dw_pcie_enable_legacy_irqs(struct keystone_pcie *ks_pcie)
-{
- int i;
-
- for (i = 0; i < PCI_NUM_INTX; i++)
- ks_dw_app_writel(ks_pcie, IRQ_ENABLE_SET + (i << 4), 0x1);
-}
-
-void ks_dw_pcie_handle_legacy_irq(struct keystone_pcie *ks_pcie, int offset)
-{
- struct dw_pcie *pci = ks_pcie->pci;
- struct device *dev = pci->dev;
- u32 pending;
- int virq;
-
- pending = ks_dw_app_readl(ks_pcie, IRQ_STATUS + (offset << 4));
-
- if (BIT(0) & pending) {
- virq = irq_linear_revmap(ks_pcie->legacy_irq_domain, offset);
- dev_dbg(dev, ": irq: irq_offset %d, virq %d\n", offset, virq);
- generic_handle_irq(virq);
- }
-
- /* EOI the INTx interrupt */
- ks_dw_app_writel(ks_pcie, IRQ_EOI, offset);
-}
-
-void ks_dw_pcie_enable_error_irq(struct keystone_pcie *ks_pcie)
-{
- ks_dw_app_writel(ks_pcie, ERR_IRQ_ENABLE_SET, ERR_IRQ_ALL);
-}
-
-irqreturn_t ks_dw_pcie_handle_error_irq(struct keystone_pcie *ks_pcie)
-{
- u32 status;
-
- status = ks_dw_app_readl(ks_pcie, ERR_IRQ_STATUS_RAW) & ERR_IRQ_ALL;
- if (!status)
- return IRQ_NONE;
-
- if (status & ERR_FATAL_IRQ)
- dev_err(ks_pcie->pci->dev, "fatal error (status %#010x)\n",
- status);
-
- /* Ack the IRQ; status bits are RW1C */
- ks_dw_app_writel(ks_pcie, ERR_IRQ_STATUS, status);
- return IRQ_HANDLED;
-}
-
-static void ks_dw_pcie_ack_legacy_irq(struct irq_data *d)
-{
-}
-
-static void ks_dw_pcie_mask_legacy_irq(struct irq_data *d)
-{
-}
-
-static void ks_dw_pcie_unmask_legacy_irq(struct irq_data *d)
-{
-}
-
-static struct irq_chip ks_dw_pcie_legacy_irq_chip = {
- .name = "Keystone-PCI-Legacy-IRQ",
- .irq_ack = ks_dw_pcie_ack_legacy_irq,
- .irq_mask = ks_dw_pcie_mask_legacy_irq,
- .irq_unmask = ks_dw_pcie_unmask_legacy_irq,
-};
-
-static int ks_dw_pcie_init_legacy_irq_map(struct irq_domain *d,
- unsigned int irq, irq_hw_number_t hw_irq)
-{
- irq_set_chip_and_handler(irq, &ks_dw_pcie_legacy_irq_chip,
- handle_level_irq);
- irq_set_chip_data(irq, d->host_data);
-
- return 0;
-}
-
-static const struct irq_domain_ops ks_dw_pcie_legacy_irq_domain_ops = {
- .map = ks_dw_pcie_init_legacy_irq_map,
- .xlate = irq_domain_xlate_onetwocell,
-};
-
-/**
- * ks_dw_pcie_set_dbi_mode() - Set DBI mode to access overlaid BAR mask
- * registers
- *
- * Since modification of dbi_cs2 involves different clock domain, read the
- * status back to ensure the transition is complete.
- */
-static void ks_dw_pcie_set_dbi_mode(struct keystone_pcie *ks_pcie)
-{
- u32 val;
-
- val = ks_dw_app_readl(ks_pcie, CMD_STATUS);
- ks_dw_app_writel(ks_pcie, CMD_STATUS, DBI_CS2_EN_VAL | val);
-
- do {
- val = ks_dw_app_readl(ks_pcie, CMD_STATUS);
- } while (!(val & DBI_CS2_EN_VAL));
-}
-
-/**
- * ks_dw_pcie_clear_dbi_mode() - Disable DBI mode
- *
- * Since modification of dbi_cs2 involves different clock domain, read the
- * status back to ensure the transition is complete.
- */
-static void ks_dw_pcie_clear_dbi_mode(struct keystone_pcie *ks_pcie)
-{
- u32 val;
-
- val = ks_dw_app_readl(ks_pcie, CMD_STATUS);
- ks_dw_app_writel(ks_pcie, CMD_STATUS, ~DBI_CS2_EN_VAL & val);
-
- do {
- val = ks_dw_app_readl(ks_pcie, CMD_STATUS);
- } while (val & DBI_CS2_EN_VAL);
-}
-
-void ks_dw_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie)
-{
- struct dw_pcie *pci = ks_pcie->pci;
- struct pcie_port *pp = &pci->pp;
- u32 start = pp->mem->start, end = pp->mem->end;
- int i, tr_size;
- u32 val;
-
- /* Disable BARs for inbound access */
- ks_dw_pcie_set_dbi_mode(ks_pcie);
- dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0);
- dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0);
- ks_dw_pcie_clear_dbi_mode(ks_pcie);
-
- /* Set outbound translation size per window division */
- ks_dw_app_writel(ks_pcie, OB_SIZE, CFG_PCIM_WIN_SZ_IDX & 0x7);
-
- tr_size = (1 << (CFG_PCIM_WIN_SZ_IDX & 0x7)) * SZ_1M;
-
- /* Using Direct 1:1 mapping of RC <-> PCI memory space */
- for (i = 0; (i < CFG_PCIM_WIN_CNT) && (start < end); i++) {
- ks_dw_app_writel(ks_pcie, OB_OFFSET_INDEX(i), start | 1);
- ks_dw_app_writel(ks_pcie, OB_OFFSET_HI(i), 0);
- start += tr_size;
- }
-
- /* Enable OB translation */
- val = ks_dw_app_readl(ks_pcie, CMD_STATUS);
- ks_dw_app_writel(ks_pcie, CMD_STATUS, OB_XLAT_EN_VAL | val);
-}
-
-/**
- * ks_pcie_cfg_setup() - Set up configuration space address for a device
- *
- * @ks_pcie: ptr to keystone_pcie structure
- * @bus: Bus number the device is residing on
- * @devfn: device, function number info
- *
- * Forms and returns the address of configuration space mapped in PCIESS
- * address space 0. Also configures CFG_SETUP for remote configuration space
- * access.
- *
- * The address space has two regions to access configuration - local and remote.
- * We access local region for bus 0 (as RC is attached on bus 0) and remote
- * region for others with TYPE 1 access when bus > 1. As for device on bus = 1,
- * we will do TYPE 0 access as it will be on our secondary bus (logical).
- * CFG_SETUP is needed only for remote configuration access.
- */
-static void __iomem *ks_pcie_cfg_setup(struct keystone_pcie *ks_pcie, u8 bus,
- unsigned int devfn)
-{
- u8 device = PCI_SLOT(devfn), function = PCI_FUNC(devfn);
- struct dw_pcie *pci = ks_pcie->pci;
- struct pcie_port *pp = &pci->pp;
- u32 regval;
-
- if (bus == 0)
- return pci->dbi_base;
-
- regval = (bus << 16) | (device << 8) | function;
-
- /*
- * Since Bus#1 will be a virtual bus, we need to have TYPE0
- * access only.
- * TYPE 1
- */
- if (bus != 1)
- regval |= BIT(24);
-
- ks_dw_app_writel(ks_pcie, CFG_SETUP, regval);
- return pp->va_cfg0_base;
-}
-
-int ks_dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus,
- unsigned int devfn, int where, int size, u32 *val)
-{
- struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
- struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
- u8 bus_num = bus->number;
- void __iomem *addr;
-
- addr = ks_pcie_cfg_setup(ks_pcie, bus_num, devfn);
-
- return dw_pcie_read(addr + where, size, val);
-}
-
-int ks_dw_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus,
- unsigned int devfn, int where, int size, u32 val)
-{
- struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
- struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
- u8 bus_num = bus->number;
- void __iomem *addr;
-
- addr = ks_pcie_cfg_setup(ks_pcie, bus_num, devfn);
-
- return dw_pcie_write(addr + where, size, val);
-}
-
-/**
- * ks_dw_pcie_v3_65_scan_bus() - keystone scan_bus post initialization
- *
- * This sets BAR0 to enable inbound access for MSI_IRQ register
- */
-void ks_dw_pcie_v3_65_scan_bus(struct pcie_port *pp)
-{
- struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
- struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
-
- /* Configure and set up BAR0 */
- ks_dw_pcie_set_dbi_mode(ks_pcie);
-
- /* Enable BAR0 */
- dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 1);
- dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, SZ_4K - 1);
-
- ks_dw_pcie_clear_dbi_mode(ks_pcie);
-
- /*
- * For BAR0, just setting bus address for inbound writes (MSI) should
- * be sufficient. Use physical address to avoid any conflicts.
- */
- dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, ks_pcie->app.start);
-}
-
-/**
- * ks_dw_pcie_link_up() - Check if link up
- */
-int ks_dw_pcie_link_up(struct dw_pcie *pci)
-{
- u32 val;
-
- val = dw_pcie_readl_dbi(pci, DEBUG0);
- return (val & LTSSM_STATE_MASK) == LTSSM_STATE_L0;
-}
-
-void ks_dw_pcie_initiate_link_train(struct keystone_pcie *ks_pcie)
-{
- u32 val;
-
- /* Disable Link training */
- val = ks_dw_app_readl(ks_pcie, CMD_STATUS);
- val &= ~LTSSM_EN_VAL;
- ks_dw_app_writel(ks_pcie, CMD_STATUS, LTSSM_EN_VAL | val);
-
- /* Initiate Link Training */
- val = ks_dw_app_readl(ks_pcie, CMD_STATUS);
- ks_dw_app_writel(ks_pcie, CMD_STATUS, LTSSM_EN_VAL | val);
-}
-
-/**
- * ks_dw_pcie_host_init() - initialize host for v3_65 dw hardware
- *
- * Ioremap the register resources, initialize legacy irq domain
- * and call dw_pcie_v3_65_host_init() API to initialize the Keystone
- * PCI host controller.
- */
-int __init ks_dw_pcie_host_init(struct keystone_pcie *ks_pcie,
- struct device_node *msi_intc_np)
-{
- struct dw_pcie *pci = ks_pcie->pci;
- struct pcie_port *pp = &pci->pp;
- struct device *dev = pci->dev;
- struct platform_device *pdev = to_platform_device(dev);
- struct resource *res;
-
- /* Index 0 is the config reg. space address */
- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- pci->dbi_base = devm_pci_remap_cfg_resource(dev, res);
- if (IS_ERR(pci->dbi_base))
- return PTR_ERR(pci->dbi_base);
-
- /*
- * We set these same and is used in pcie rd/wr_other_conf
- * functions
- */
- pp->va_cfg0_base = pci->dbi_base + SPACE0_REMOTE_CFG_OFFSET;
- pp->va_cfg1_base = pp->va_cfg0_base;
-
- /* Index 1 is the application reg. space address */
- res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
- ks_pcie->va_app_base = devm_ioremap_resource(dev, res);
- if (IS_ERR(ks_pcie->va_app_base))
- return PTR_ERR(ks_pcie->va_app_base);
-
- ks_pcie->app = *res;
-
- /* Create legacy IRQ domain */
- ks_pcie->legacy_irq_domain =
- irq_domain_add_linear(ks_pcie->legacy_intc_np,
- PCI_NUM_INTX,
- &ks_dw_pcie_legacy_irq_domain_ops,
- NULL);
- if (!ks_pcie->legacy_irq_domain) {
- dev_err(dev, "Failed to add irq domain for legacy irqs\n");
- return -EINVAL;
- }
-
- return dw_pcie_host_init(pp);
-}
* Implementation based on pci-exynos.c and pcie-designware.c
*/
-#include <linux/irqchip/chained_irq.h>
#include <linux/clk.h>
#include <linux/delay.h>
+#include <linux/init.h>
#include <linux/interrupt.h>
+#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
-#include <linux/init.h>
+#include <linux/mfd/syscon.h>
#include <linux/msi.h>
-#include <linux/of_irq.h>
#include <linux/of.h>
+#include <linux/of_irq.h>
#include <linux/of_pci.h>
-#include <linux/platform_device.h>
#include <linux/phy/phy.h>
+#include <linux/platform_device.h>
+#include <linux/regmap.h>
#include <linux/resource.h>
#include <linux/signal.h>
#include "pcie-designware.h"
-#include "pci-keystone.h"
-#define DRIVER_NAME "keystone-pcie"
+#define PCIE_VENDORID_MASK 0xffff
+#define PCIE_DEVICEID_SHIFT 16
+
+/* Application registers */
+#define CMD_STATUS 0x004
+#define LTSSM_EN_VAL BIT(0)
+#define OB_XLAT_EN_VAL BIT(1)
+#define DBI_CS2 BIT(5)
+
+#define CFG_SETUP 0x008
+#define CFG_BUS(x) (((x) & 0xff) << 16)
+#define CFG_DEVICE(x) (((x) & 0x1f) << 8)
+#define CFG_FUNC(x) ((x) & 0x7)
+#define CFG_TYPE1 BIT(24)
+
+#define OB_SIZE 0x030
+#define SPACE0_REMOTE_CFG_OFFSET 0x1000
+#define OB_OFFSET_INDEX(n) (0x200 + (8 * (n)))
+#define OB_OFFSET_HI(n) (0x204 + (8 * (n)))
+#define OB_ENABLEN BIT(0)
+#define OB_WIN_SIZE 8 /* 8MB */
+
+/* IRQ register defines */
+#define IRQ_EOI 0x050
+#define IRQ_STATUS 0x184
+#define IRQ_ENABLE_SET 0x188
+#define IRQ_ENABLE_CLR 0x18c
+
+#define MSI_IRQ 0x054
+#define MSI0_IRQ_STATUS 0x104
+#define MSI0_IRQ_ENABLE_SET 0x108
+#define MSI0_IRQ_ENABLE_CLR 0x10c
+#define IRQ_STATUS 0x184
+#define MSI_IRQ_OFFSET 4
+
+#define ERR_IRQ_STATUS 0x1c4
+#define ERR_IRQ_ENABLE_SET 0x1c8
+#define ERR_AER BIT(5) /* ECRC error */
+#define ERR_AXI BIT(4) /* AXI tag lookup fatal error */
+#define ERR_CORR BIT(3) /* Correctable error */
+#define ERR_NONFATAL BIT(2) /* Non-fatal error */
+#define ERR_FATAL BIT(1) /* Fatal error */
+#define ERR_SYS BIT(0) /* System error */
+#define ERR_IRQ_ALL (ERR_AER | ERR_AXI | ERR_CORR | \
+ ERR_NONFATAL | ERR_FATAL | ERR_SYS)
+
+#define MAX_MSI_HOST_IRQS 8
+/* PCIE controller device IDs */
+#define PCIE_RC_K2HK 0xb008
+#define PCIE_RC_K2E 0xb009
+#define PCIE_RC_K2L 0xb00a
+#define PCIE_RC_K2G 0xb00b
+
+#define to_keystone_pcie(x) dev_get_drvdata((x)->dev)
+
+struct keystone_pcie {
+ struct dw_pcie *pci;
+ /* PCI Device ID */
+ u32 device_id;
+ int num_legacy_host_irqs;
+ int legacy_host_irqs[PCI_NUM_INTX];
+ struct device_node *legacy_intc_np;
+
+ int num_msi_host_irqs;
+ int msi_host_irqs[MAX_MSI_HOST_IRQS];
+ int num_lanes;
+ u32 num_viewport;
+ struct phy **phy;
+ struct device_link **link;
+ struct device_node *msi_intc_np;
+ struct irq_domain *legacy_irq_domain;
+ struct device_node *np;
+
+ int error_irq;
+
+ /* Application register space */
+ void __iomem *va_app_base; /* DT 1st resource */
+ struct resource app;
+};
-/* DEV_STAT_CTRL */
-#define PCIE_CAP_BASE 0x70
+static inline void update_reg_offset_bit_pos(u32 offset, u32 *reg_offset,
+ u32 *bit_pos)
+{
+ *reg_offset = offset % 8;
+ *bit_pos = offset >> 3;
+}
-/* PCIE controller device IDs */
-#define PCIE_RC_K2HK 0xb008
-#define PCIE_RC_K2E 0xb009
-#define PCIE_RC_K2L 0xb00a
+static phys_addr_t ks_pcie_get_msi_addr(struct pcie_port *pp)
+{
+ struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+ struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
+
+ return ks_pcie->app.start + MSI_IRQ;
+}
+
+static u32 ks_pcie_app_readl(struct keystone_pcie *ks_pcie, u32 offset)
+{
+ return readl(ks_pcie->va_app_base + offset);
+}
+
+static void ks_pcie_app_writel(struct keystone_pcie *ks_pcie, u32 offset,
+ u32 val)
+{
+ writel(val, ks_pcie->va_app_base + offset);
+}
+
+static void ks_pcie_handle_msi_irq(struct keystone_pcie *ks_pcie, int offset)
+{
+ struct dw_pcie *pci = ks_pcie->pci;
+ struct pcie_port *pp = &pci->pp;
+ struct device *dev = pci->dev;
+ u32 pending, vector;
+ int src, virq;
+
+ pending = ks_pcie_app_readl(ks_pcie, MSI0_IRQ_STATUS + (offset << 4));
+
+ /*
+ * MSI0 status bit 0-3 shows vectors 0, 8, 16, 24, MSI1 status bit
+ * shows 1, 9, 17, 25 and so forth
+ */
+ for (src = 0; src < 4; src++) {
+ if (BIT(src) & pending) {
+ vector = offset + (src << 3);
+ virq = irq_linear_revmap(pp->irq_domain, vector);
+ dev_dbg(dev, "irq: bit %d, vector %d, virq %d\n",
+ src, vector, virq);
+ generic_handle_irq(virq);
+ }
+ }
+}
+
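The modulo/shift pair in update_reg_offset_bit_pos() is the inverse of the
vector reconstruction in the loop above. A worked example of the mapping:

	/*
	 * MSI vector 17:
	 *   reg_offset = 17 % 8  = 1  ->  MSI1 bank of registers
	 *   bit_pos    = 17 >> 3 = 2  ->  bit 2 of that bank's status
	 * and back again: vector = offset + (src << 3) = 1 + (2 << 3) = 17,
	 * matching the comment above (MSI1 covers vectors 1, 9, 17, 25).
	 */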
+static void ks_pcie_msi_irq_ack(int irq, struct pcie_port *pp)
+{
+ u32 reg_offset, bit_pos;
+ struct keystone_pcie *ks_pcie;
+ struct dw_pcie *pci;
+
+ pci = to_dw_pcie_from_pp(pp);
+ ks_pcie = to_keystone_pcie(pci);
+ update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos);
+
+ ks_pcie_app_writel(ks_pcie, MSI0_IRQ_STATUS + (reg_offset << 4),
+ BIT(bit_pos));
+ ks_pcie_app_writel(ks_pcie, IRQ_EOI, reg_offset + MSI_IRQ_OFFSET);
+}
+
+static void ks_pcie_msi_set_irq(struct pcie_port *pp, int irq)
+{
+ u32 reg_offset, bit_pos;
+ struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+ struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
+
+ update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos);
+ ks_pcie_app_writel(ks_pcie, MSI0_IRQ_ENABLE_SET + (reg_offset << 4),
+ BIT(bit_pos));
+}
+
+static void ks_pcie_msi_clear_irq(struct pcie_port *pp, int irq)
+{
+ u32 reg_offset, bit_pos;
+ struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+ struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
+
+ update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos);
+ ks_pcie_app_writel(ks_pcie, MSI0_IRQ_ENABLE_CLR + (reg_offset << 4),
+ BIT(bit_pos));
+}
+
+static int ks_pcie_msi_host_init(struct pcie_port *pp)
+{
+ return dw_pcie_allocate_domains(pp);
+}
+
+static void ks_pcie_enable_legacy_irqs(struct keystone_pcie *ks_pcie)
+{
+ int i;
+
+ for (i = 0; i < PCI_NUM_INTX; i++)
+ ks_pcie_app_writel(ks_pcie, IRQ_ENABLE_SET + (i << 4), 0x1);
+}
+
+static void ks_pcie_handle_legacy_irq(struct keystone_pcie *ks_pcie,
+ int offset)
+{
+ struct dw_pcie *pci = ks_pcie->pci;
+ struct device *dev = pci->dev;
+ u32 pending;
+ int virq;
+
+ pending = ks_pcie_app_readl(ks_pcie, IRQ_STATUS + (offset << 4));
+
+ if (BIT(0) & pending) {
+ virq = irq_linear_revmap(ks_pcie->legacy_irq_domain, offset);
+ dev_dbg(dev, ": irq: irq_offset %d, virq %d\n", offset, virq);
+ generic_handle_irq(virq);
+ }
+
+ /* EOI the INTx interrupt */
+ ks_pcie_app_writel(ks_pcie, IRQ_EOI, offset);
+}
+
+static void ks_pcie_enable_error_irq(struct keystone_pcie *ks_pcie)
+{
+ ks_pcie_app_writel(ks_pcie, ERR_IRQ_ENABLE_SET, ERR_IRQ_ALL);
+}
+
+static irqreturn_t ks_pcie_handle_error_irq(struct keystone_pcie *ks_pcie)
+{
+ u32 reg;
+ struct device *dev = ks_pcie->pci->dev;
+
+ reg = ks_pcie_app_readl(ks_pcie, ERR_IRQ_STATUS);
+ if (!reg)
+ return IRQ_NONE;
+
+ if (reg & ERR_SYS)
+ dev_err(dev, "System Error\n");
+
+ if (reg & ERR_FATAL)
+ dev_err(dev, "Fatal Error\n");
+
+ if (reg & ERR_NONFATAL)
+ dev_dbg(dev, "Non Fatal Error\n");
+
+ if (reg & ERR_CORR)
+ dev_dbg(dev, "Correctable Error\n");
+
+ if (reg & ERR_AXI)
+ dev_err(dev, "AXI tag lookup fatal Error\n");
+
+ if (reg & ERR_AER)
+ dev_err(dev, "ECRC Error\n");
+
+ ks_pcie_app_writel(ks_pcie, ERR_IRQ_STATUS, reg);
+
+ return IRQ_HANDLED;
+}
+
+static void ks_pcie_ack_legacy_irq(struct irq_data *d)
+{
+}
+
+static void ks_pcie_mask_legacy_irq(struct irq_data *d)
+{
+}
+
+static void ks_pcie_unmask_legacy_irq(struct irq_data *d)
+{
+}
+
+static struct irq_chip ks_pcie_legacy_irq_chip = {
+ .name = "Keystone-PCI-Legacy-IRQ",
+ .irq_ack = ks_pcie_ack_legacy_irq,
+ .irq_mask = ks_pcie_mask_legacy_irq,
+ .irq_unmask = ks_pcie_unmask_legacy_irq,
+};
+
+static int ks_pcie_init_legacy_irq_map(struct irq_domain *d,
+ unsigned int irq,
+ irq_hw_number_t hw_irq)
+{
+ irq_set_chip_and_handler(irq, &ks_pcie_legacy_irq_chip,
+ handle_level_irq);
+ irq_set_chip_data(irq, d->host_data);
+
+ return 0;
+}
+
+static const struct irq_domain_ops ks_pcie_legacy_irq_domain_ops = {
+ .map = ks_pcie_init_legacy_irq_map,
+ .xlate = irq_domain_xlate_onetwocell,
+};
+
+/**
+ * ks_pcie_set_dbi_mode() - Set DBI mode to access overlaid BAR mask
+ * registers
+ *
+ * Since modification of dbi_cs2 involves a different clock domain, read the
+ * status back to ensure the transition is complete.
+ */
+static void ks_pcie_set_dbi_mode(struct keystone_pcie *ks_pcie)
+{
+ u32 val;
+
+ val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
+ val |= DBI_CS2;
+ ks_pcie_app_writel(ks_pcie, CMD_STATUS, val);
+
+ do {
+ val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
+ } while (!(val & DBI_CS2));
+}
+
+/**
+ * ks_pcie_clear_dbi_mode() - Disable DBI mode
+ *
+ * Since modification of dbi_cs2 involves a different clock domain, read the
+ * status back to ensure the transition is complete.
+ */
+static void ks_pcie_clear_dbi_mode(struct keystone_pcie *ks_pcie)
+{
+ u32 val;
+
+ val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
+ val &= ~DBI_CS2;
+ ks_pcie_app_writel(ks_pcie, CMD_STATUS, val);
+
+ do {
+ val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
+ } while (val & DBI_CS2);
+}
+
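The read-back loops above spin unbounded until the mode switch propagates
across the clock domains. An alternative sketch using the generic helper
from <linux/iopoll.h>, with a hypothetical 1 ms bound (not what the driver
does):

	#include <linux/iopoll.h>

	static int ks_pcie_wait_dbi_mode(struct keystone_pcie *ks_pcie, bool set)
	{
		u32 val;

		/* poll CMD_STATUS until DBI_CS2 reaches the requested state */
		return readl_poll_timeout(ks_pcie->va_app_base + CMD_STATUS, val,
					  set == !!(val & DBI_CS2), 0, 1000);
	}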
+static void ks_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie)
+{
+ u32 val;
+ u32 num_viewport = ks_pcie->num_viewport;
+ struct dw_pcie *pci = ks_pcie->pci;
+ struct pcie_port *pp = &pci->pp;
+ u64 start = pp->mem->start;
+ u64 end = pp->mem->end;
+ int i;
+
+ /* Disable BARs for inbound access */
+ ks_pcie_set_dbi_mode(ks_pcie);
+ dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0);
+ dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0);
+ ks_pcie_clear_dbi_mode(ks_pcie);
+
+ val = ilog2(OB_WIN_SIZE);
+ ks_pcie_app_writel(ks_pcie, OB_SIZE, val);
+
+ /* Using Direct 1:1 mapping of RC <-> PCI memory space */
+ for (i = 0; i < num_viewport && (start < end); i++) {
+ ks_pcie_app_writel(ks_pcie, OB_OFFSET_INDEX(i),
+ lower_32_bits(start) | OB_ENABLEN);
+ ks_pcie_app_writel(ks_pcie, OB_OFFSET_HI(i),
+ upper_32_bits(start));
+ start += OB_WIN_SIZE * SZ_1M;
+ }
+
+ val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
+ val |= OB_XLAT_EN_VAL;
+ ks_pcie_app_writel(ks_pcie, CMD_STATUS, val);
+}
+
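Each viewport maps one 8 MB slice of a direct 1:1 RC <-> PCI window, and
OB_SIZE is programmed with the log2 of the window size in MB (ilog2(8) = 3,
the same index the old CFG_PCIM_WIN_SZ_IDX encoded). A worked example with
hypothetical addresses:

	/*
	 * pp->mem->start = 0x60000000, num_viewport = 4:
	 *   OB_OFFSET_INDEX(0) = 0x60000000 | OB_ENABLEN
	 *   OB_OFFSET_INDEX(1) = 0x60800000 | OB_ENABLEN
	 *   OB_OFFSET_INDEX(2) = 0x61000000 | OB_ENABLEN
	 *   OB_OFFSET_INDEX(3) = 0x61800000 | OB_ENABLEN
	 * with OB_OFFSET_HI(n) = 0 throughout: four contiguous 8 MB windows.
	 */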
+static int ks_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus,
+ unsigned int devfn, int where, int size,
+ u32 *val)
+{
+ struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+ struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
+ u32 reg;
+
+ reg = CFG_BUS(bus->number) | CFG_DEVICE(PCI_SLOT(devfn)) |
+ CFG_FUNC(PCI_FUNC(devfn));
+ if (bus->parent->number != pp->root_bus_nr)
+ reg |= CFG_TYPE1;
+ ks_pcie_app_writel(ks_pcie, CFG_SETUP, reg);
+
+ return dw_pcie_read(pp->va_cfg0_base + where, size, val);
+}
+
+static int ks_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus,
+ unsigned int devfn, int where, int size,
+ u32 val)
+{
+ struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+ struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
+ u32 reg;
+
+ reg = CFG_BUS(bus->number) | CFG_DEVICE(PCI_SLOT(devfn)) |
+ CFG_FUNC(PCI_FUNC(devfn));
+ if (bus->parent->number != pp->root_bus_nr)
+ reg |= CFG_TYPE1;
+ ks_pcie_app_writel(ks_pcie, CFG_SETUP, reg);
+
+ return dw_pcie_write(pp->va_cfg0_base + where, size, val);
+}
+
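CFG_SETUP packs the target bus/device/function into one word, with
CFG_TYPE1 selecting a Type 1 configuration cycle for buses that are not
directly behind the root port. A worked example:

	/*
	 * Config access to bus 2, device 0, function 0 behind a switch
	 * (bus->parent->number != root bus number):
	 *   reg = CFG_BUS(2) | CFG_DEVICE(0) | CFG_FUNC(0) | CFG_TYPE1
	 *       = (2 << 16) | BIT(24) = 0x01020000
	 * A device on the root port's secondary bus gets a Type 0 cycle
	 * instead, with CFG_TYPE1 left clear.
	 */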
+/**
+ * ks_pcie_v3_65_scan_bus() - keystone scan_bus post initialization
+ *
+ * This sets BAR0 to enable inbound access for MSI_IRQ register
+ */
+static void ks_pcie_v3_65_scan_bus(struct pcie_port *pp)
+{
+ struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+ struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
+
+ /* Configure and set up BAR0 */
+ ks_pcie_set_dbi_mode(ks_pcie);
+
+ /* Enable BAR0 */
+ dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 1);
+ dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, SZ_4K - 1);
+
+ ks_pcie_clear_dbi_mode(ks_pcie);
+
+ /*
+ * For BAR0, just setting bus address for inbound writes (MSI) should
+ * be sufficient. Use physical address to avoid any conflicts.
+ */
+ dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, ks_pcie->app.start);
+}
+
+/**
+ * ks_pcie_link_up() - Check if link up
+ */
+static int ks_pcie_link_up(struct dw_pcie *pci)
+{
+ u32 val;
+
+ val = dw_pcie_readl_dbi(pci, PCIE_PORT_DEBUG0);
+ val &= PORT_LOGIC_LTSSM_STATE_MASK;
+ return (val == PORT_LOGIC_LTSSM_STATE_L0);
+}
+
+static void ks_pcie_initiate_link_train(struct keystone_pcie *ks_pcie)
+{
+ u32 val;
-#define to_keystone_pcie(x) dev_get_drvdata((x)->dev)
+ /* Disable Link training */
+ val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
+ val &= ~LTSSM_EN_VAL;
+ ks_pcie_app_writel(ks_pcie, CMD_STATUS, LTSSM_EN_VAL | val);
-static void quirk_limit_mrrs(struct pci_dev *dev)
+ /* Initiate Link Training */
+ val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
+ ks_pcie_app_writel(ks_pcie, CMD_STATUS, LTSSM_EN_VAL | val);
+}
+
+/**
+ * ks_pcie_dw_host_init() - initialize host for v3_65 dw hardware
+ *
+ * Ioremap the register resources, initialize the legacy irq domain
+ * and call dw_pcie_host_init() to initialize the Keystone
+ * PCI host controller.
+ */
+static int __init ks_pcie_dw_host_init(struct keystone_pcie *ks_pcie)
+{
+ struct dw_pcie *pci = ks_pcie->pci;
+ struct pcie_port *pp = &pci->pp;
+ struct device *dev = pci->dev;
+ struct platform_device *pdev = to_platform_device(dev);
+ struct resource *res;
+
+ /* Index 0 is the config reg. space address */
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ pci->dbi_base = devm_pci_remap_cfg_resource(dev, res);
+ if (IS_ERR(pci->dbi_base))
+ return PTR_ERR(pci->dbi_base);
+
+ /*
+ * These are set to the same address and are used in the
+ * pcie rd/wr_other_conf functions.
+ */
+ pp->va_cfg0_base = pci->dbi_base + SPACE0_REMOTE_CFG_OFFSET;
+ pp->va_cfg1_base = pp->va_cfg0_base;
+
+ /* Index 1 is the application reg. space address */
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+ ks_pcie->va_app_base = devm_ioremap_resource(dev, res);
+ if (IS_ERR(ks_pcie->va_app_base))
+ return PTR_ERR(ks_pcie->va_app_base);
+
+ ks_pcie->app = *res;
+
+ /* Create legacy IRQ domain */
+ ks_pcie->legacy_irq_domain =
+ irq_domain_add_linear(ks_pcie->legacy_intc_np,
+ PCI_NUM_INTX,
+ &ks_pcie_legacy_irq_domain_ops,
+ NULL);
+ if (!ks_pcie->legacy_irq_domain) {
+ dev_err(dev, "Failed to add irq domain for legacy irqs\n");
+ return -EINVAL;
+ }
+
+ return dw_pcie_host_init(pp);
+}
+
+static void ks_pcie_quirk(struct pci_dev *dev)
{
struct pci_bus *bus = dev->bus;
- struct pci_dev *bridge = bus->self;
+ struct pci_dev *bridge;
static const struct pci_device_id rc_pci_devids[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2HK),
.class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, },
.class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, },
{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2L),
.class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, },
+ { PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2G),
+ .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, },
{ 0, },
};
if (pci_is_root_bus(bus))
- return;
+ bridge = dev;
/* look for the host bridge */
while (!pci_is_root_bus(bus)) {
bus = bus->parent;
}
- if (bridge) {
- /*
- * Keystone PCI controller has a h/w limitation of
- * 256 bytes maximum read request size. It can't handle
- * anything higher than this. So force this limit on
- * all downstream devices.
- */
- if (pci_match_id(rc_pci_devids, bridge)) {
- if (pcie_get_readrq(dev) > 256) {
- dev_info(&dev->dev, "limiting MRRS to 256\n");
- pcie_set_readrq(dev, 256);
- }
+ if (!bridge)
+ return;
+
+ /*
+ * Keystone PCI controller has a h/w limitation of
+ * 256 bytes maximum read request size. It can't handle
+ * anything higher than this. So force this limit on
+ * all downstream devices.
+ */
+ if (pci_match_id(rc_pci_devids, bridge)) {
+ if (pcie_get_readrq(dev) > 256) {
+ dev_info(&dev->dev, "limiting MRRS to 256\n");
+ pcie_set_readrq(dev, 256);
}
}
}
-DECLARE_PCI_FIXUP_ENABLE(PCI_ANY_ID, PCI_ANY_ID, quirk_limit_mrrs);
+DECLARE_PCI_FIXUP_ENABLE(PCI_ANY_ID, PCI_ANY_ID, ks_pcie_quirk);
static int ks_pcie_establish_link(struct keystone_pcie *ks_pcie)
{
struct dw_pcie *pci = ks_pcie->pci;
- struct pcie_port *pp = &pci->pp;
struct device *dev = pci->dev;
- unsigned int retries;
-
- dw_pcie_setup_rc(pp);
if (dw_pcie_link_up(pci)) {
dev_info(dev, "Link already up\n");
return 0;
}
+ ks_pcie_initiate_link_train(ks_pcie);
+
/* check if the link is up or not */
- for (retries = 0; retries < 5; retries++) {
- ks_dw_pcie_initiate_link_train(ks_pcie);
- if (!dw_pcie_wait_for_link(pci))
- return 0;
- }
+ if (!dw_pcie_wait_for_link(pci))
+ return 0;
dev_err(dev, "phy link never came up\n");
return -ETIMEDOUT;
* ack operation.
*/
chained_irq_enter(chip, desc);
- ks_dw_pcie_handle_msi_irq(ks_pcie, offset);
+ ks_pcie_handle_msi_irq(ks_pcie, offset);
chained_irq_exit(chip, desc);
}
* ack operation.
*/
chained_irq_enter(chip, desc);
- ks_dw_pcie_handle_legacy_irq(ks_pcie, irq_offset);
+ ks_pcie_handle_legacy_irq(ks_pcie, irq_offset);
chained_irq_exit(chip, desc);
}
ks_pcie_legacy_irq_handler,
ks_pcie);
}
- ks_dw_pcie_enable_legacy_irqs(ks_pcie);
+ ks_pcie_enable_legacy_irqs(ks_pcie);
/* MSI IRQ */
if (IS_ENABLED(CONFIG_PCI_MSI)) {
}
if (ks_pcie->error_irq > 0)
- ks_dw_pcie_enable_error_irq(ks_pcie);
+ ks_pcie_enable_error_irq(ks_pcie);
}
/*
* bus error instead of returning 0xffffffff. This handler always returns 0
* for this kind of faults.
*/
-static int keystone_pcie_fault(unsigned long addr, unsigned int fsr,
- struct pt_regs *regs)
+static int ks_pcie_fault(unsigned long addr, unsigned int fsr,
+ struct pt_regs *regs)
{
unsigned long instr = *(unsigned long *) instruction_pointer(regs);
return 0;
}
+static int __init ks_pcie_init_id(struct keystone_pcie *ks_pcie)
+{
+ int ret;
+ unsigned int id;
+ struct regmap *devctrl_regs;
+ struct dw_pcie *pci = ks_pcie->pci;
+ struct device *dev = pci->dev;
+ struct device_node *np = dev->of_node;
+
+ devctrl_regs = syscon_regmap_lookup_by_phandle(np, "ti,syscon-pcie-id");
+ if (IS_ERR(devctrl_regs))
+ return PTR_ERR(devctrl_regs);
+
+ ret = regmap_read(devctrl_regs, 0, &id);
+ if (ret)
+ return ret;
+
+ dw_pcie_writew_dbi(pci, PCI_VENDOR_ID, id & PCIE_VENDORID_MASK);
+ dw_pcie_writew_dbi(pci, PCI_DEVICE_ID, id >> PCIE_DEVICEID_SHIFT);
+
+ return 0;
+}
+
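The syscon word read here packs both identifiers; a worked decode with a
hypothetical raw value for a K2G part:

	/*
	 * id = 0xb00b104c (hypothetical readout):
	 *   id & PCIE_VENDORID_MASK   = 0x104c -> PCI_VENDOR_ID_TI
	 *   id >> PCIE_DEVICEID_SHIFT = 0xb00b -> PCIE_RC_K2G
	 */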
static int __init ks_pcie_host_init(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
- u32 val;
+ int ret;
+
+ dw_pcie_setup_rc(pp);
ks_pcie_establish_link(ks_pcie);
- ks_dw_pcie_setup_rc_app_regs(ks_pcie);
+ ks_pcie_setup_rc_app_regs(ks_pcie);
ks_pcie_setup_interrupts(ks_pcie);
writew(PCI_IO_RANGE_TYPE_32 | (PCI_IO_RANGE_TYPE_32 << 8),
pci->dbi_base + PCI_IO_BASE);
- /* update the Vendor ID */
- writew(ks_pcie->device_id, pci->dbi_base + PCI_DEVICE_ID);
-
- /* update the DEV_STAT_CTRL to publish right mrrs */
- val = readl(pci->dbi_base + PCIE_CAP_BASE + PCI_EXP_DEVCTL);
- val &= ~PCI_EXP_DEVCTL_READRQ;
- /* set the mrrs to 256 bytes */
- val |= BIT(12);
- writel(val, pci->dbi_base + PCIE_CAP_BASE + PCI_EXP_DEVCTL);
+ ret = ks_pcie_init_id(ks_pcie);
+ if (ret < 0)
+ return ret;
/*
* PCIe access errors that result into OCP errors are caught by ARM as
* "External aborts"
*/
- hook_fault_code(17, keystone_pcie_fault, SIGBUS, 0,
+ hook_fault_code(17, ks_pcie_fault, SIGBUS, 0,
"Asynchronous external abort");
return 0;
}
-static const struct dw_pcie_host_ops keystone_pcie_host_ops = {
- .rd_other_conf = ks_dw_pcie_rd_other_conf,
- .wr_other_conf = ks_dw_pcie_wr_other_conf,
+static const struct dw_pcie_host_ops ks_pcie_host_ops = {
+ .rd_other_conf = ks_pcie_rd_other_conf,
+ .wr_other_conf = ks_pcie_wr_other_conf,
.host_init = ks_pcie_host_init,
- .msi_set_irq = ks_dw_pcie_msi_set_irq,
- .msi_clear_irq = ks_dw_pcie_msi_clear_irq,
- .get_msi_addr = ks_dw_pcie_get_msi_addr,
- .msi_host_init = ks_dw_pcie_msi_host_init,
- .msi_irq_ack = ks_dw_pcie_msi_irq_ack,
- .scan_bus = ks_dw_pcie_v3_65_scan_bus,
+ .msi_set_irq = ks_pcie_msi_set_irq,
+ .msi_clear_irq = ks_pcie_msi_clear_irq,
+ .get_msi_addr = ks_pcie_get_msi_addr,
+ .msi_host_init = ks_pcie_msi_host_init,
+ .msi_irq_ack = ks_pcie_msi_irq_ack,
+ .scan_bus = ks_pcie_v3_65_scan_bus,
};
-static irqreturn_t pcie_err_irq_handler(int irq, void *priv)
+static irqreturn_t ks_pcie_err_irq_handler(int irq, void *priv)
{
struct keystone_pcie *ks_pcie = priv;
- return ks_dw_pcie_handle_error_irq(ks_pcie);
+ return ks_pcie_handle_error_irq(ks_pcie);
}
-static int __init ks_add_pcie_port(struct keystone_pcie *ks_pcie,
- struct platform_device *pdev)
+static int __init ks_pcie_add_pcie_port(struct keystone_pcie *ks_pcie,
+ struct platform_device *pdev)
{
struct dw_pcie *pci = ks_pcie->pci;
struct pcie_port *pp = &pci->pp;
if (ks_pcie->error_irq <= 0)
dev_info(dev, "no error IRQ defined\n");
else {
- ret = request_irq(ks_pcie->error_irq, pcie_err_irq_handler,
+ ret = request_irq(ks_pcie->error_irq, ks_pcie_err_irq_handler,
IRQF_SHARED, "pcie-error-irq", ks_pcie);
if (ret < 0) {
dev_err(dev, "failed to request error IRQ %d\n",
}
}
- pp->ops = &keystone_pcie_host_ops;
- ret = ks_dw_pcie_host_init(ks_pcie, ks_pcie->msi_intc_np);
+ pp->ops = &ks_pcie_host_ops;
+ ret = ks_pcie_dw_host_init(ks_pcie);
if (ret) {
dev_err(dev, "failed to initialize host\n");
return ret;
{ },
};
-static const struct dw_pcie_ops dw_pcie_ops = {
- .link_up = ks_dw_pcie_link_up,
+static const struct dw_pcie_ops ks_pcie_dw_pcie_ops = {
+ .link_up = ks_pcie_link_up,
};
-static int __exit ks_pcie_remove(struct platform_device *pdev)
+static void ks_pcie_disable_phy(struct keystone_pcie *ks_pcie)
{
- struct keystone_pcie *ks_pcie = platform_get_drvdata(pdev);
+ int num_lanes = ks_pcie->num_lanes;
- clk_disable_unprepare(ks_pcie->clk);
+ while (num_lanes--) {
+ phy_power_off(ks_pcie->phy[num_lanes]);
+ phy_exit(ks_pcie->phy[num_lanes]);
+ }
+}
+
+static int ks_pcie_enable_phy(struct keystone_pcie *ks_pcie)
+{
+ int i;
+ int ret;
+ int num_lanes = ks_pcie->num_lanes;
+
+ for (i = 0; i < num_lanes; i++) {
+ ret = phy_init(ks_pcie->phy[i]);
+ if (ret < 0)
+ goto err_phy;
+
+ ret = phy_power_on(ks_pcie->phy[i]);
+ if (ret < 0) {
+ phy_exit(ks_pcie->phy[i]);
+ goto err_phy;
+ }
+ }
return 0;
+
+err_phy:
+ while (--i >= 0) {
+ phy_power_off(ks_pcie->phy[i]);
+ phy_exit(ks_pcie->phy[i]);
+ }
+
+ return ret;
}
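The error path unwinds only the lanes that were fully brought up: a lane
that fails in phy_power_on() has its own phy_exit() run inline, and the
err_phy loop then tears down the earlier lanes in reverse order. For
example, a failure at lane 2 produces:

	/*
	 *   phy_exit(phy[2]);                       inline, before the goto
	 *   phy_power_off(phy[1]); phy_exit(phy[1]);
	 *   phy_power_off(phy[0]); phy_exit(phy[0]);
	 */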
static int __init ks_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
+ struct device_node *np = dev->of_node;
struct dw_pcie *pci;
struct keystone_pcie *ks_pcie;
- struct resource *res;
- void __iomem *reg_p;
- struct phy *phy;
+ struct device_link **link;
+ u32 num_viewport;
+ struct phy **phy;
+ u32 num_lanes;
+ char name[10];
int ret;
+ int i;
ks_pcie = devm_kzalloc(dev, sizeof(*ks_pcie), GFP_KERNEL);
if (!ks_pcie)
return -ENOMEM;
pci->dev = dev;
- pci->ops = &dw_pcie_ops;
+ pci->ops = &ks_pcie_dw_pcie_ops;
- ks_pcie->pci = pci;
+ ret = of_property_read_u32(np, "num-viewport", &num_viewport);
+ if (ret < 0) {
+ dev_err(dev, "unable to read *num-viewport* property\n");
+ return ret;
+ }
- /* initialize SerDes Phy if present */
- phy = devm_phy_get(dev, "pcie-phy");
- if (PTR_ERR_OR_ZERO(phy) == -EPROBE_DEFER)
- return PTR_ERR(phy);
+ ret = of_property_read_u32(np, "num-lanes", &num_lanes);
+ if (ret)
+ num_lanes = 1;
- if (!IS_ERR_OR_NULL(phy)) {
- ret = phy_init(phy);
- if (ret < 0)
- return ret;
+ phy = devm_kzalloc(dev, sizeof(*phy) * num_lanes, GFP_KERNEL);
+ if (!phy)
+ return -ENOMEM;
+
+ link = devm_kzalloc(dev, sizeof(*link) * num_lanes, GFP_KERNEL);
+ if (!link)
+ return -ENOMEM;
+
+ for (i = 0; i < num_lanes; i++) {
+ snprintf(name, sizeof(name), "pcie-phy%d", i);
+ phy[i] = devm_phy_optional_get(dev, name);
+ if (IS_ERR(phy[i])) {
+ ret = PTR_ERR(phy[i]);
+ goto err_link;
+ }
+
+ if (!phy[i])
+ continue;
+
+ link[i] = device_link_add(dev, &phy[i]->dev, DL_FLAG_STATELESS);
+ if (!link[i]) {
+ ret = -EINVAL;
+ goto err_link;
+ }
}
- /* index 2 is to read PCI DEVICE_ID */
- res = platform_get_resource(pdev, IORESOURCE_MEM, 2);
- reg_p = devm_ioremap_resource(dev, res);
- if (IS_ERR(reg_p))
- return PTR_ERR(reg_p);
- ks_pcie->device_id = readl(reg_p) >> 16;
- devm_iounmap(dev, reg_p);
- devm_release_mem_region(dev, res->start, resource_size(res));
+ ks_pcie->np = np;
+ ks_pcie->pci = pci;
+ ks_pcie->link = link;
+ ks_pcie->num_lanes = num_lanes;
+ ks_pcie->num_viewport = num_viewport;
+ ks_pcie->phy = phy;
- ks_pcie->np = dev->of_node;
- platform_set_drvdata(pdev, ks_pcie);
- ks_pcie->clk = devm_clk_get(dev, "pcie");
- if (IS_ERR(ks_pcie->clk)) {
- dev_err(dev, "Failed to get pcie rc clock\n");
- return PTR_ERR(ks_pcie->clk);
+ ret = ks_pcie_enable_phy(ks_pcie);
+ if (ret) {
+ dev_err(dev, "failed to enable phy\n");
+ goto err_link;
}
- ret = clk_prepare_enable(ks_pcie->clk);
- if (ret)
- return ret;
platform_set_drvdata(pdev, ks_pcie);
+ pm_runtime_enable(dev);
+ ret = pm_runtime_get_sync(dev);
+ if (ret < 0) {
+ dev_err(dev, "pm_runtime_get_sync failed\n");
+ goto err_get_sync;
+ }
- ret = ks_add_pcie_port(ks_pcie, pdev);
+ ret = ks_pcie_add_pcie_port(ks_pcie, pdev);
if (ret < 0)
- goto fail_clk;
+ goto err_get_sync;
return 0;
-fail_clk:
- clk_disable_unprepare(ks_pcie->clk);
+
+err_get_sync:
+ pm_runtime_put(dev);
+ pm_runtime_disable(dev);
+ ks_pcie_disable_phy(ks_pcie);
+
+err_link:
+ while (--i >= 0 && link[i])
+ device_link_del(link[i]);
return ret;
}
+static int __exit ks_pcie_remove(struct platform_device *pdev)
+{
+ struct keystone_pcie *ks_pcie = platform_get_drvdata(pdev);
+ struct device_link **link = ks_pcie->link;
+ int num_lanes = ks_pcie->num_lanes;
+ struct device *dev = &pdev->dev;
+
+ pm_runtime_put(dev);
+ pm_runtime_disable(dev);
+ ks_pcie_disable_phy(ks_pcie);
+ while (num_lanes--)
+ device_link_del(link[num_lanes]);
+
+ return 0;
+}
+
static struct platform_driver ks_pcie_driver __refdata = {
.probe = ks_pcie_probe,
.remove = __exit_p(ks_pcie_remove),
+++ /dev/null
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Keystone PCI Controller's common includes
- *
- * Copyright (C) 2013-2014 Texas Instruments., Ltd.
- * http://www.ti.com
- *
- * Author: Murali Karicheri <m-karicheri2@ti.com>
- */
-
-#define MAX_MSI_HOST_IRQS 8
-
-struct keystone_pcie {
- struct dw_pcie *pci;
- struct clk *clk;
- /* PCI Device ID */
- u32 device_id;
- int num_legacy_host_irqs;
- int legacy_host_irqs[PCI_NUM_INTX];
- struct device_node *legacy_intc_np;
-
- int num_msi_host_irqs;
- int msi_host_irqs[MAX_MSI_HOST_IRQS];
- struct device_node *msi_intc_np;
- struct irq_domain *legacy_irq_domain;
- struct device_node *np;
-
- int error_irq;
-
- /* Application register space */
- void __iomem *va_app_base; /* DT 1st resource */
- struct resource app;
-};
-
-/* Keystone DW specific MSI controller APIs/definitions */
-void ks_dw_pcie_handle_msi_irq(struct keystone_pcie *ks_pcie, int offset);
-phys_addr_t ks_dw_pcie_get_msi_addr(struct pcie_port *pp);
-
-/* Keystone specific PCI controller APIs */
-void ks_dw_pcie_enable_legacy_irqs(struct keystone_pcie *ks_pcie);
-void ks_dw_pcie_handle_legacy_irq(struct keystone_pcie *ks_pcie, int offset);
-void ks_dw_pcie_enable_error_irq(struct keystone_pcie *ks_pcie);
-irqreturn_t ks_dw_pcie_handle_error_irq(struct keystone_pcie *ks_pcie);
-int ks_dw_pcie_host_init(struct keystone_pcie *ks_pcie,
- struct device_node *msi_intc_np);
-int ks_dw_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus,
- unsigned int devfn, int where, int size, u32 val);
-int ks_dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus,
- unsigned int devfn, int where, int size, u32 *val);
-void ks_dw_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie);
-void ks_dw_pcie_initiate_link_train(struct keystone_pcie *ks_pcie);
-void ks_dw_pcie_msi_irq_ack(int i, struct pcie_port *pp);
-void ks_dw_pcie_msi_set_irq(struct pcie_port *pp, int irq);
-void ks_dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq);
-void ks_dw_pcie_v3_65_scan_bus(struct pcie_port *pp);
-int ks_dw_pcie_msi_host_init(struct pcie_port *pp);
-int ks_dw_pcie_link_up(struct dw_pcie *pci);
#define PORT_LINK_MODE_4_LANES (0x7 << 16)
#define PORT_LINK_MODE_8_LANES (0xf << 16)
+#define PCIE_PORT_DEBUG0 0x728
+#define PORT_LOGIC_LTSSM_STATE_MASK 0x1f
+#define PORT_LOGIC_LTSSM_STATE_L0 0x11
+
#define PCIE_LINK_WIDTH_SPEED_CONTROL 0x80C
#define PORT_LOGIC_SPEED_CHANGE (0x1 << 17)
#define PORT_LOGIC_LINK_WIDTH_MASK (0x1f << 8)
return 0;
}
-static int __init kirin_add_pcie_port(struct dw_pcie *pci,
- struct platform_device *pdev)
+static int kirin_add_pcie_port(struct dw_pcie *pci,
+ struct platform_device *pdev)
{
int ret;
struct qcom_pcie *pcie = to_qcom_pcie(pci);
int ret;
- pm_runtime_get_sync(pci->dev);
qcom_ep_reset_assert(pcie);
ret = pcie->ops->init(pcie);
phy_power_off(pcie->phy);
err_deinit:
pcie->ops->deinit(pcie);
- pm_runtime_put(pci->dev);
return ret;
}
return -ENOMEM;
pm_runtime_enable(dev);
+ ret = pm_runtime_get_sync(dev);
+ if (ret < 0) {
+ pm_runtime_disable(dev);
+ return ret;
+ }
+
pci->dev = dev;
pci->ops = &dw_pcie_ops;
pp = &pci->pp;
pcie->ops = of_device_get_match_data(dev);
pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_LOW);
- if (IS_ERR(pcie->reset))
- return PTR_ERR(pcie->reset);
+ if (IS_ERR(pcie->reset)) {
+ ret = PTR_ERR(pcie->reset);
+ goto err_pm_runtime_put;
+ }
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "parf");
pcie->parf = devm_ioremap_resource(dev, res);
- if (IS_ERR(pcie->parf))
- return PTR_ERR(pcie->parf);
+ if (IS_ERR(pcie->parf)) {
+ ret = PTR_ERR(pcie->parf);
+ goto err_pm_runtime_put;
+ }
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
pci->dbi_base = devm_pci_remap_cfg_resource(dev, res);
- if (IS_ERR(pci->dbi_base))
- return PTR_ERR(pci->dbi_base);
+ if (IS_ERR(pci->dbi_base)) {
+ ret = PTR_ERR(pci->dbi_base);
+ goto err_pm_runtime_put;
+ }
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "elbi");
pcie->elbi = devm_ioremap_resource(dev, res);
- if (IS_ERR(pcie->elbi))
- return PTR_ERR(pcie->elbi);
+ if (IS_ERR(pcie->elbi)) {
+ ret = PTR_ERR(pcie->elbi);
+ goto err_pm_runtime_put;
+ }
pcie->phy = devm_phy_optional_get(dev, "pciephy");
- if (IS_ERR(pcie->phy))
- return PTR_ERR(pcie->phy);
+ if (IS_ERR(pcie->phy)) {
+ ret = PTR_ERR(pcie->phy);
+ goto err_pm_runtime_put;
+ }
ret = pcie->ops->get_resources(pcie);
if (ret)
- return ret;
+ goto err_pm_runtime_put;
pp->ops = &qcom_pcie_dw_ops;
if (IS_ENABLED(CONFIG_PCI_MSI)) {
pp->msi_irq = platform_get_irq_byname(pdev, "msi");
- if (pp->msi_irq < 0)
- return pp->msi_irq;
+ if (pp->msi_irq < 0) {
+ ret = pp->msi_irq;
+ goto err_pm_runtime_put;
+ }
}
ret = phy_init(pcie->phy);
if (ret) {
pm_runtime_disable(&pdev->dev);
- return ret;
+ goto err_pm_runtime_put;
}
platform_set_drvdata(pdev, pcie);
if (ret) {
dev_err(dev, "cannot initialize host\n");
pm_runtime_disable(&pdev->dev);
- return ret;
+ goto err_pm_runtime_put;
}
return 0;
+
+err_pm_runtime_put:
+ pm_runtime_put(dev);
+ pm_runtime_disable(dev);
+
+ return ret;
}
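All of the new error gotos funnel through a single label so the runtime PM
state stays balanced no matter where probe fails. Schematically:

	/*
	 * probe entry:  pm_runtime_enable(dev);
	 *               pm_runtime_get_sync(dev);   usage count -> 1
	 * any failure:  goto err_pm_runtime_put;
	 * err label:    pm_runtime_put(dev);        usage count -> 0
	 *               pm_runtime_disable(dev);    undo the enable
	 */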
static const struct of_device_id qcom_pcie_match[] = {
#include <linux/of_pci.h>
#include "../pci.h"
+#include "../pci-bridge-emul.h"
/* PCIe core registers */
+#define PCIE_CORE_DEV_ID_REG 0x0
#define PCIE_CORE_CMD_STATUS_REG 0x4
#define PCIE_CORE_CMD_IO_ACCESS_EN BIT(0)
#define PCIE_CORE_CMD_MEM_ACCESS_EN BIT(1)
#define PCIE_CORE_CMD_MEM_IO_REQ_EN BIT(2)
+#define PCIE_CORE_DEV_REV_REG 0x8
+#define PCIE_CORE_PCIEXP_CAP 0xc0
#define PCIE_CORE_DEV_CTRL_STATS_REG 0xc8
#define PCIE_CORE_DEV_CTRL_STATS_RELAX_ORDER_DISABLE (0 << 4)
#define PCIE_CORE_DEV_CTRL_STATS_MAX_PAYLOAD_SZ_SHIFT 5
#define PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX_EN BIT(6)
#define PCIE_CORE_ERR_CAPCTL_ECRC_CHCK BIT(7)
#define PCIE_CORE_ERR_CAPCTL_ECRC_CHCK_RCV BIT(8)
-
+#define PCIE_CORE_INT_A_ASSERT_ENABLE 1
+#define PCIE_CORE_INT_B_ASSERT_ENABLE 2
+#define PCIE_CORE_INT_C_ASSERT_ENABLE 3
+#define PCIE_CORE_INT_D_ASSERT_ENABLE 4
/* PIO registers base address and register offsets */
#define PIO_BASE_ADDR 0x4000
#define PIO_CTRL (PIO_BASE_ADDR + 0x0)
#define PCIE_CORE_CTRL2_STRICT_ORDER_ENABLE BIT(5)
#define PCIE_CORE_CTRL2_OB_WIN_ENABLE BIT(6)
#define PCIE_CORE_CTRL2_MSI_ENABLE BIT(10)
+#define PCIE_MSG_LOG_REG (CONTROL_BASE_ADDR + 0x30)
#define PCIE_ISR0_REG (CONTROL_BASE_ADDR + 0x40)
+#define PCIE_MSG_PM_PME_MASK BIT(7)
#define PCIE_ISR0_MASK_REG (CONTROL_BASE_ADDR + 0x44)
#define PCIE_ISR0_MSI_INT_PENDING BIT(24)
#define PCIE_ISR0_INTX_ASSERT(val) BIT(16 + (val))
struct mutex msi_used_lock;
u16 msi_msg;
int root_bus_nr;
+ struct pci_bridge_emul bridge;
};
static inline void advk_writel(struct advk_pcie *pcie, u32 val, u64 reg)
return -ETIMEDOUT;
}
+
+static pci_bridge_emul_read_status_t
+advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
+ int reg, u32 *value)
+{
+ struct advk_pcie *pcie = bridge->data;
+
+ switch (reg) {
+ case PCI_EXP_SLTCTL:
+ *value = PCI_EXP_SLTSTA_PDS << 16;
+ return PCI_BRIDGE_EMUL_HANDLED;
+
+ case PCI_EXP_RTCTL: {
+ u32 val = advk_readl(pcie, PCIE_ISR0_MASK_REG);
+ *value = (val & PCIE_MSG_PM_PME_MASK) ? PCI_EXP_RTCTL_PMEIE : 0;
+ return PCI_BRIDGE_EMUL_HANDLED;
+ }
+
+ case PCI_EXP_RTSTA: {
+ u32 isr0 = advk_readl(pcie, PCIE_ISR0_REG);
+ u32 msglog = advk_readl(pcie, PCIE_MSG_LOG_REG);
+ *value = (isr0 & PCIE_MSG_PM_PME_MASK) << 16 | (msglog >> 16);
+ return PCI_BRIDGE_EMUL_HANDLED;
+ }
+
+ case PCI_CAP_LIST_ID:
+ case PCI_EXP_DEVCAP:
+ case PCI_EXP_DEVCTL:
+ case PCI_EXP_LNKCAP:
+ case PCI_EXP_LNKCTL:
+ *value = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg);
+ return PCI_BRIDGE_EMUL_HANDLED;
+ default:
+ return PCI_BRIDGE_EMUL_NOT_HANDLED;
+ }
+}
+
+static void
+advk_pci_bridge_emul_pcie_conf_write(struct pci_bridge_emul *bridge,
+ int reg, u32 old, u32 new, u32 mask)
+{
+ struct advk_pcie *pcie = bridge->data;
+
+ switch (reg) {
+ case PCI_EXP_DEVCTL:
+ case PCI_EXP_LNKCTL:
+ advk_writel(pcie, new, PCIE_CORE_PCIEXP_CAP + reg);
+ break;
+
+ case PCI_EXP_RTCTL:
+ new = (new & PCI_EXP_RTCTL_PMEIE) << 3;
+ advk_writel(pcie, new, PCIE_ISR0_MASK_REG);
+ break;
+
+ case PCI_EXP_RTSTA:
+ new = (new & PCI_EXP_RTSTA_PME) >> 9;
+ advk_writel(pcie, new, PCIE_ISR0_REG);
+ break;
+
+ default:
+ break;
+ }
+}
+
+static struct pci_bridge_emul_ops advk_pci_bridge_emul_ops = {
+ .read_pcie = advk_pci_bridge_emul_pcie_conf_read,
+ .write_pcie = advk_pci_bridge_emul_pcie_conf_write,
+};
+
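The shift in the emulated PCI_EXP_RTSTA write path lines the standard
register layout up with the controller's ISR0 bit. A quick check of the
arithmetic:

	/*
	 * PCI_EXP_RTSTA_PME = BIT(16), and BIT(16) >> 9 = BIT(7), which is
	 * exactly PCIE_MSG_PM_PME_MASK, so writing 1 to RTSTA.PME clears
	 * the PM_PME bit in PCIE_ISR0_REG.
	 */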
+/*
+ * Initialize the configuration space of the PCI-to-PCI bridge
+ * associated with the given PCIe interface.
+ */
+static void advk_sw_pci_bridge_init(struct advk_pcie *pcie)
+{
+ struct pci_bridge_emul *bridge = &pcie->bridge;
+
+ bridge->conf.vendor = advk_readl(pcie, PCIE_CORE_DEV_ID_REG) & 0xffff;
+ bridge->conf.device = advk_readl(pcie, PCIE_CORE_DEV_ID_REG) >> 16;
+ bridge->conf.class_revision =
+ advk_readl(pcie, PCIE_CORE_DEV_REV_REG) & 0xff;
+
+ /* Support 32 bits I/O addressing */
+ bridge->conf.iobase = PCI_IO_RANGE_TYPE_32;
+ bridge->conf.iolimit = PCI_IO_RANGE_TYPE_32;
+
+ /* Support 64 bits memory pref */
+ bridge->conf.pref_mem_base = PCI_PREF_RANGE_TYPE_64;
+ bridge->conf.pref_mem_limit = PCI_PREF_RANGE_TYPE_64;
+
+ /* Support interrupt A for MSI feature */
+ bridge->conf.intpin = PCIE_CORE_INT_A_ASSERT_ENABLE;
+
+ bridge->has_pcie = true;
+ bridge->data = pcie;
+ bridge->ops = &advk_pci_bridge_emul_ops;
+
+ pci_bridge_emul_init(bridge);
+}
+
static bool advk_pcie_valid_device(struct advk_pcie *pcie, struct pci_bus *bus,
int devfn)
{
return PCIBIOS_DEVICE_NOT_FOUND;
}
+ if (bus->number == pcie->root_bus_nr)
+ return pci_bridge_emul_conf_read(&pcie->bridge, where,
+ size, val);
+
/* Start PIO */
advk_writel(pcie, 0, PIO_START);
advk_writel(pcie, 1, PIO_ISR);
/* Program the control register */
reg = advk_readl(pcie, PIO_CTRL);
reg &= ~PIO_CTRL_TYPE_MASK;
- if (bus->number == pcie->root_bus_nr)
+ if (bus->primary == pcie->root_bus_nr)
reg |= PCIE_CONFIG_RD_TYPE0;
else
reg |= PCIE_CONFIG_RD_TYPE1;
if (!advk_pcie_valid_device(pcie, bus, devfn))
return PCIBIOS_DEVICE_NOT_FOUND;
+ if (bus->number == pcie->root_bus_nr)
+ return pci_bridge_emul_conf_write(&pcie->bridge, where,
+ size, val);
+
if (where % size)
return PCIBIOS_SET_FAILED;
/* Program the control register */
reg = advk_readl(pcie, PIO_CTRL);
reg &= ~PIO_CTRL_TYPE_MASK;
- if (bus->number == pcie->root_bus_nr)
+ if (bus->primary == pcie->root_bus_nr)
reg |= PCIE_CONFIG_WR_TYPE0;
else
reg |= PCIE_CONFIG_WR_TYPE1;
advk_pcie_setup_hw(pcie);
+ advk_sw_pci_bridge_init(pcie);
+
ret = advk_pcie_init_irq_domain(pcie);
if (ret) {
dev_err(dev, "Failed to initialize irq\n");
int pci_host_common_probe(struct platform_device *pdev,
struct pci_ecam_ops *ops)
{
- const char *type;
struct device *dev = &pdev->dev;
- struct device_node *np = dev->of_node;
struct pci_host_bridge *bridge;
struct pci_config_window *cfg;
struct list_head resources;
if (!bridge)
return -ENOMEM;
- type = of_get_property(np, "device_type", NULL);
- if (!type || strcmp(type, "pci")) {
- dev_err(dev, "invalid \"device_type\" %s\n", type);
- return -EINVAL;
- }
-
of_pci_check_probe_only();
/* Parse and map our Configuration Space windows */
#include <linux/of_platform.h>
#include "../pci.h"
+#include "../pci-bridge-emul.h"
/*
* PCIe unit register offsets.
#define PCIE_DEBUG_CTRL 0x1a60
#define PCIE_DEBUG_SOFT_RESET BIT(20)
-enum {
- PCISWCAP = PCI_BRIDGE_CONTROL + 2,
- PCISWCAP_EXP_LIST_ID = PCISWCAP + PCI_CAP_LIST_ID,
- PCISWCAP_EXP_DEVCAP = PCISWCAP + PCI_EXP_DEVCAP,
- PCISWCAP_EXP_DEVCTL = PCISWCAP + PCI_EXP_DEVCTL,
- PCISWCAP_EXP_LNKCAP = PCISWCAP + PCI_EXP_LNKCAP,
- PCISWCAP_EXP_LNKCTL = PCISWCAP + PCI_EXP_LNKCTL,
- PCISWCAP_EXP_SLTCAP = PCISWCAP + PCI_EXP_SLTCAP,
- PCISWCAP_EXP_SLTCTL = PCISWCAP + PCI_EXP_SLTCTL,
- PCISWCAP_EXP_RTCTL = PCISWCAP + PCI_EXP_RTCTL,
- PCISWCAP_EXP_RTSTA = PCISWCAP + PCI_EXP_RTSTA,
- PCISWCAP_EXP_DEVCAP2 = PCISWCAP + PCI_EXP_DEVCAP2,
- PCISWCAP_EXP_DEVCTL2 = PCISWCAP + PCI_EXP_DEVCTL2,
- PCISWCAP_EXP_LNKCAP2 = PCISWCAP + PCI_EXP_LNKCAP2,
- PCISWCAP_EXP_LNKCTL2 = PCISWCAP + PCI_EXP_LNKCTL2,
- PCISWCAP_EXP_SLTCAP2 = PCISWCAP + PCI_EXP_SLTCAP2,
- PCISWCAP_EXP_SLTCTL2 = PCISWCAP + PCI_EXP_SLTCTL2,
-};
-
-/* PCI configuration space of a PCI-to-PCI bridge */
-struct mvebu_sw_pci_bridge {
- u16 vendor;
- u16 device;
- u16 command;
- u16 status;
- u16 class;
- u8 interface;
- u8 revision;
- u8 bist;
- u8 header_type;
- u8 latency_timer;
- u8 cache_line_size;
- u32 bar[2];
- u8 primary_bus;
- u8 secondary_bus;
- u8 subordinate_bus;
- u8 secondary_latency_timer;
- u8 iobase;
- u8 iolimit;
- u16 secondary_status;
- u16 membase;
- u16 memlimit;
- u16 iobaseupper;
- u16 iolimitupper;
- u32 romaddr;
- u8 intline;
- u8 intpin;
- u16 bridgectrl;
-
- /* PCI express capability */
- u32 pcie_sltcap;
- u16 pcie_devctl;
- u16 pcie_rtctl;
-};
-
struct mvebu_pcie_port;
/* Structure representing all PCIe interfaces */
struct clk *clk;
struct gpio_desc *reset_gpio;
char *reset_name;
- struct mvebu_sw_pci_bridge bridge;
+ struct pci_bridge_emul bridge;
struct device_node *dn;
struct mvebu_pcie *pcie;
struct mvebu_pcie_window memwin;
static void mvebu_pcie_handle_iobase_change(struct mvebu_pcie_port *port)
{
struct mvebu_pcie_window desired = {};
+ struct pci_bridge_emul_conf *conf = &port->bridge.conf;
/* Are the new iobase/iolimit values invalid? */
- if (port->bridge.iolimit < port->bridge.iobase ||
- port->bridge.iolimitupper < port->bridge.iobaseupper ||
- !(port->bridge.command & PCI_COMMAND_IO)) {
+ if (conf->iolimit < conf->iobase ||
+ conf->iolimitupper < conf->iobaseupper ||
+ !(conf->command & PCI_COMMAND_IO)) {
mvebu_pcie_set_window(port, port->io_target, port->io_attr,
&desired, &port->iowin);
return;
* specifications. iobase is the bus address, port->iowin_base
* is the CPU address.
*/
- desired.remap = ((port->bridge.iobase & 0xF0) << 8) |
- (port->bridge.iobaseupper << 16);
+ desired.remap = ((conf->iobase & 0xF0) << 8) |
+ (conf->iobaseupper << 16);
desired.base = port->pcie->io.start + desired.remap;
- desired.size = ((0xFFF | ((port->bridge.iolimit & 0xF0) << 8) |
- (port->bridge.iolimitupper << 16)) -
+ desired.size = ((0xFFF | ((conf->iolimit & 0xF0) << 8) |
+ (conf->iolimitupper << 16)) -
desired.remap) +
1;
static void mvebu_pcie_handle_membase_change(struct mvebu_pcie_port *port)
{
struct mvebu_pcie_window desired = {.remap = MVEBU_MBUS_NO_REMAP};
+ struct pci_bridge_emul_conf *conf = &port->bridge.conf;
/* Are the new membase/memlimit values invalid? */
- if (port->bridge.memlimit < port->bridge.membase ||
- !(port->bridge.command & PCI_COMMAND_MEMORY)) {
+ if (conf->memlimit < conf->membase ||
+ !(conf->command & PCI_COMMAND_MEMORY)) {
mvebu_pcie_set_window(port, port->mem_target, port->mem_attr,
&desired, &port->memwin);
return;
* window to setup, according to the PCI-to-PCI bridge
* specifications.
*/
- desired.base = ((port->bridge.membase & 0xFFF0) << 16);
- desired.size = (((port->bridge.memlimit & 0xFFF0) << 16) | 0xFFFFF) -
+ desired.base = ((conf->membase & 0xFFF0) << 16);
+ desired.size = (((conf->memlimit & 0xFFF0) << 16) | 0xFFFFF) -
desired.base + 1;
mvebu_pcie_set_window(port, port->mem_target, port->mem_attr, &desired,
&port->memwin);
}
-/*
- * Initialize the configuration space of the PCI-to-PCI bridge
- * associated with the given PCIe interface.
- */
-static void mvebu_sw_pci_bridge_init(struct mvebu_pcie_port *port)
-{
- struct mvebu_sw_pci_bridge *bridge = &port->bridge;
-
- memset(bridge, 0, sizeof(struct mvebu_sw_pci_bridge));
-
- bridge->class = PCI_CLASS_BRIDGE_PCI;
- bridge->vendor = PCI_VENDOR_ID_MARVELL;
- bridge->device = mvebu_readl(port, PCIE_DEV_ID_OFF) >> 16;
- bridge->revision = mvebu_readl(port, PCIE_DEV_REV_OFF) & 0xff;
- bridge->header_type = PCI_HEADER_TYPE_BRIDGE;
- bridge->cache_line_size = 0x10;
-
- /* We support 32 bits I/O addressing */
- bridge->iobase = PCI_IO_RANGE_TYPE_32;
- bridge->iolimit = PCI_IO_RANGE_TYPE_32;
-
- /* Add capabilities */
- bridge->status = PCI_STATUS_CAP_LIST;
-}
-
-/*
- * Read the configuration space of the PCI-to-PCI bridge associated to
- * the given PCIe interface.
- */
-static int mvebu_sw_pci_bridge_read(struct mvebu_pcie_port *port,
- unsigned int where, int size, u32 *value)
+static pci_bridge_emul_read_status_t
+mvebu_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
+ int reg, u32 *value)
{
- struct mvebu_sw_pci_bridge *bridge = &port->bridge;
-
- switch (where & ~3) {
- case PCI_VENDOR_ID:
- *value = bridge->device << 16 | bridge->vendor;
- break;
-
- case PCI_COMMAND:
- *value = bridge->command | bridge->status << 16;
- break;
-
- case PCI_CLASS_REVISION:
- *value = bridge->class << 16 | bridge->interface << 8 |
- bridge->revision;
- break;
-
- case PCI_CACHE_LINE_SIZE:
- *value = bridge->bist << 24 | bridge->header_type << 16 |
- bridge->latency_timer << 8 | bridge->cache_line_size;
- break;
-
- case PCI_BASE_ADDRESS_0 ... PCI_BASE_ADDRESS_1:
- *value = bridge->bar[((where & ~3) - PCI_BASE_ADDRESS_0) / 4];
- break;
-
- case PCI_PRIMARY_BUS:
- *value = (bridge->secondary_latency_timer << 24 |
- bridge->subordinate_bus << 16 |
- bridge->secondary_bus << 8 |
- bridge->primary_bus);
- break;
-
- case PCI_IO_BASE:
- if (!mvebu_has_ioport(port))
- *value = bridge->secondary_status << 16;
- else
- *value = (bridge->secondary_status << 16 |
- bridge->iolimit << 8 |
- bridge->iobase);
- break;
-
- case PCI_MEMORY_BASE:
- *value = (bridge->memlimit << 16 | bridge->membase);
- break;
-
- case PCI_PREF_MEMORY_BASE:
- *value = 0;
- break;
-
- case PCI_IO_BASE_UPPER16:
- *value = (bridge->iolimitupper << 16 | bridge->iobaseupper);
- break;
-
- case PCI_CAPABILITY_LIST:
- *value = PCISWCAP;
- break;
-
- case PCI_ROM_ADDRESS1:
- *value = 0;
- break;
+ struct mvebu_pcie_port *port = bridge->data;
- case PCI_INTERRUPT_LINE:
- /* LINE PIN MIN_GNT MAX_LAT */
- *value = 0;
- break;
-
- case PCISWCAP_EXP_LIST_ID:
- /* Set PCIe v2, root port, slot support */
- *value = (PCI_EXP_TYPE_ROOT_PORT << 4 | 2 |
- PCI_EXP_FLAGS_SLOT) << 16 | PCI_CAP_ID_EXP;
- break;
-
- case PCISWCAP_EXP_DEVCAP:
+ switch (reg) {
+ case PCI_EXP_DEVCAP:
*value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_DEVCAP);
break;
- case PCISWCAP_EXP_DEVCTL:
+ case PCI_EXP_DEVCTL:
*value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_DEVCTL) &
~(PCI_EXP_DEVCTL_URRE | PCI_EXP_DEVCTL_FERE |
PCI_EXP_DEVCTL_NFERE | PCI_EXP_DEVCTL_CERE);
- *value |= bridge->pcie_devctl;
break;
- case PCISWCAP_EXP_LNKCAP:
+ case PCI_EXP_LNKCAP:
/*
* PCIe requires the clock power management capability to be
* hard-wired to zero for downstream ports
~PCI_EXP_LNKCAP_CLKPM;
break;
- case PCISWCAP_EXP_LNKCTL:
+ case PCI_EXP_LNKCTL:
*value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_LNKCTL);
break;
- case PCISWCAP_EXP_SLTCAP:
- *value = bridge->pcie_sltcap;
- break;
-
- case PCISWCAP_EXP_SLTCTL:
+ case PCI_EXP_SLTCTL:
*value = PCI_EXP_SLTSTA_PDS << 16;
break;
- case PCISWCAP_EXP_RTCTL:
- *value = bridge->pcie_rtctl;
- break;
-
- case PCISWCAP_EXP_RTSTA:
+ case PCI_EXP_RTSTA:
*value = mvebu_readl(port, PCIE_RC_RTSTA);
break;
- /* PCIe requires the v2 fields to be hard-wired to zero */
- case PCISWCAP_EXP_DEVCAP2:
- case PCISWCAP_EXP_DEVCTL2:
- case PCISWCAP_EXP_LNKCAP2:
- case PCISWCAP_EXP_LNKCTL2:
- case PCISWCAP_EXP_SLTCAP2:
- case PCISWCAP_EXP_SLTCTL2:
default:
- /*
- * PCI defines configuration read accesses to reserved or
- * unimplemented registers to read as zero and complete
- * normally.
- */
- *value = 0;
- return PCIBIOS_SUCCESSFUL;
+ return PCI_BRIDGE_EMUL_NOT_HANDLED;
}
- if (size == 2)
- *value = (*value >> (8 * (where & 3))) & 0xffff;
- else if (size == 1)
- *value = (*value >> (8 * (where & 3))) & 0xff;
-
- return PCIBIOS_SUCCESSFUL;
+ return PCI_BRIDGE_EMUL_HANDLED;
}
-/* Write to the PCI-to-PCI bridge configuration space */
-static int mvebu_sw_pci_bridge_write(struct mvebu_pcie_port *port,
- unsigned int where, int size, u32 value)
+static void
+mvebu_pci_bridge_emul_base_conf_write(struct pci_bridge_emul *bridge,
+ int reg, u32 old, u32 new, u32 mask)
{
- struct mvebu_sw_pci_bridge *bridge = &port->bridge;
- u32 mask, reg;
- int err;
-
- if (size == 4)
- mask = 0x0;
- else if (size == 2)
- mask = ~(0xffff << ((where & 3) * 8));
- else if (size == 1)
- mask = ~(0xff << ((where & 3) * 8));
- else
- return PCIBIOS_BAD_REGISTER_NUMBER;
-
- err = mvebu_sw_pci_bridge_read(port, where & ~3, 4, &reg);
- if (err)
- return err;
+ struct mvebu_pcie_port *port = bridge->data;
+ struct pci_bridge_emul_conf *conf = &bridge->conf;
- value = (reg & mask) | value << ((where & 3) * 8);
-
- switch (where & ~3) {
+ switch (reg) {
case PCI_COMMAND:
{
- u32 old = bridge->command;
-
if (!mvebu_has_ioport(port))
- value &= ~PCI_COMMAND_IO;
+ conf->command &= ~PCI_COMMAND_IO;
- bridge->command = value & 0xffff;
- if ((old ^ bridge->command) & PCI_COMMAND_IO)
+ if ((old ^ new) & PCI_COMMAND_IO)
mvebu_pcie_handle_iobase_change(port);
- if ((old ^ bridge->command) & PCI_COMMAND_MEMORY)
+ if ((old ^ new) & PCI_COMMAND_MEMORY)
mvebu_pcie_handle_membase_change(port);
- break;
- }
- case PCI_BASE_ADDRESS_0 ... PCI_BASE_ADDRESS_1:
- bridge->bar[((where & ~3) - PCI_BASE_ADDRESS_0) / 4] = value;
break;
+ }
case PCI_IO_BASE:
/*
- * We also keep bit 1 set, it is a read-only bit that
+ * We keep bit 1 set, as it is a read-only bit that
* indicates we support 32 bits addressing for the
* I/O
*/
- bridge->iobase = (value & 0xff) | PCI_IO_RANGE_TYPE_32;
- bridge->iolimit = ((value >> 8) & 0xff) | PCI_IO_RANGE_TYPE_32;
+ conf->iobase |= PCI_IO_RANGE_TYPE_32;
+ conf->iolimit |= PCI_IO_RANGE_TYPE_32;
mvebu_pcie_handle_iobase_change(port);
break;
case PCI_MEMORY_BASE:
- bridge->membase = value & 0xffff;
- bridge->memlimit = value >> 16;
mvebu_pcie_handle_membase_change(port);
break;
case PCI_IO_BASE_UPPER16:
- bridge->iobaseupper = value & 0xffff;
- bridge->iolimitupper = value >> 16;
mvebu_pcie_handle_iobase_change(port);
break;
case PCI_PRIMARY_BUS:
- bridge->primary_bus = value & 0xff;
- bridge->secondary_bus = (value >> 8) & 0xff;
- bridge->subordinate_bus = (value >> 16) & 0xff;
- bridge->secondary_latency_timer = (value >> 24) & 0xff;
- mvebu_pcie_set_local_bus_nr(port, bridge->secondary_bus);
+ mvebu_pcie_set_local_bus_nr(port, conf->secondary_bus);
+ break;
+
+ default:
break;
+ }
+}
+
+static void
+mvebu_pci_bridge_emul_pcie_conf_write(struct pci_bridge_emul *bridge,
+ int reg, u32 old, u32 new, u32 mask)
+{
+ struct mvebu_pcie_port *port = bridge->data;
- case PCISWCAP_EXP_DEVCTL:
+ switch (reg) {
+ case PCI_EXP_DEVCTL:
/*
* Armada370 data says these bits must always
* be zero when in root complex mode.
*/
- value &= ~(PCI_EXP_DEVCTL_URRE | PCI_EXP_DEVCTL_FERE |
- PCI_EXP_DEVCTL_NFERE | PCI_EXP_DEVCTL_CERE);
-
- /*
- * If the mask is 0xffff0000, then we only want to write
- * the device control register, rather than clearing the
- * RW1C bits in the device status register. Mask out the
- * status register bits.
- */
- if (mask == 0xffff0000)
- value &= 0xffff;
+ new &= ~(PCI_EXP_DEVCTL_URRE | PCI_EXP_DEVCTL_FERE |
+ PCI_EXP_DEVCTL_NFERE | PCI_EXP_DEVCTL_CERE);
- mvebu_writel(port, value, PCIE_CAP_PCIEXP + PCI_EXP_DEVCTL);
+ mvebu_writel(port, new, PCIE_CAP_PCIEXP + PCI_EXP_DEVCTL);
break;
- case PCISWCAP_EXP_LNKCTL:
+ case PCI_EXP_LNKCTL:
/*
* If we don't support CLKREQ, we must ensure that the
* CLKREQ enable bit always reads zero. Since we haven't
* had this capability, and it's dependent on board wiring,
* disable it for the time being.
*/
- value &= ~PCI_EXP_LNKCTL_CLKREQ_EN;
+ new &= ~PCI_EXP_LNKCTL_CLKREQ_EN;
- /*
- * If the mask is 0xffff0000, then we only want to write
- * the link control register, rather than clearing the
- * RW1C bits in the link status register. Mask out the
- * RW1C status register bits.
- */
- if (mask == 0xffff0000)
- value &= ~((PCI_EXP_LNKSTA_LABS |
- PCI_EXP_LNKSTA_LBMS) << 16);
-
- mvebu_writel(port, value, PCIE_CAP_PCIEXP + PCI_EXP_LNKCTL);
+ mvebu_writel(port, new, PCIE_CAP_PCIEXP + PCI_EXP_LNKCTL);
break;
- case PCISWCAP_EXP_RTSTA:
- mvebu_writel(port, value, PCIE_RC_RTSTA);
+ case PCI_EXP_RTSTA:
+ mvebu_writel(port, new, PCIE_RC_RTSTA);
break;
+ }
+}
- default:
- break;
+struct pci_bridge_emul_ops mvebu_pci_bridge_emul_ops = {
+ .write_base = mvebu_pci_bridge_emul_base_conf_write,
+ .read_pcie = mvebu_pci_bridge_emul_pcie_conf_read,
+ .write_pcie = mvebu_pci_bridge_emul_pcie_conf_write,
+};
+
+/*
+ * Initialize the configuration space of the PCI-to-PCI bridge
+ * associated with the given PCIe interface.
+ */
+static void mvebu_pci_bridge_emul_init(struct mvebu_pcie_port *port)
+{
+ struct pci_bridge_emul *bridge = &port->bridge;
+
+ bridge->conf.vendor = PCI_VENDOR_ID_MARVELL;
+ bridge->conf.device = mvebu_readl(port, PCIE_DEV_ID_OFF) >> 16;
+ bridge->conf.class_revision =
+ mvebu_readl(port, PCIE_DEV_REV_OFF) & 0xff;
+
+ if (mvebu_has_ioport(port)) {
+ /* We support 32 bits I/O addressing */
+ bridge->conf.iobase = PCI_IO_RANGE_TYPE_32;
+ bridge->conf.iolimit = PCI_IO_RANGE_TYPE_32;
}
- return PCIBIOS_SUCCESSFUL;
+ bridge->has_pcie = true;
+ bridge->data = port;
+ bridge->ops = &mvebu_pci_bridge_emul_ops;
+
+ pci_bridge_emul_init(bridge);
}
static inline struct mvebu_pcie *sys_to_pcie(struct pci_sys_data *sys)
if (bus->number == 0 && port->devfn == devfn)
return port;
if (bus->number != 0 &&
- bus->number >= port->bridge.secondary_bus &&
- bus->number <= port->bridge.subordinate_bus)
+ bus->number >= port->bridge.conf.secondary_bus &&
+ bus->number <= port->bridge.conf.subordinate_bus)
return port;
}
/* Access the emulated PCI-to-PCI bridge */
if (bus->number == 0)
- return mvebu_sw_pci_bridge_write(port, where, size, val);
+ return pci_bridge_emul_conf_write(&port->bridge, where,
+ size, val);
if (!mvebu_pcie_link_up(port))
return PCIBIOS_DEVICE_NOT_FOUND;
/* Access the emulated PCI-to-PCI bridge */
if (bus->number == 0)
- return mvebu_sw_pci_bridge_read(port, where, size, val);
+ return pci_bridge_emul_conf_read(&port->bridge, where,
+ size, val);
if (!mvebu_pcie_link_up(port)) {
*val = 0xffffffff;
mvebu_pcie_setup_hw(port);
mvebu_pcie_set_local_dev_nr(port, 1);
- mvebu_sw_pci_bridge_init(port);
+ mvebu_pci_bridge_emul_init(port);
}
pcie->nports = i;
u8 intx, bool is_asserted)
{
struct cdns_pcie *pcie = &ep->pcie;
- u32 r = ep->max_regions - 1;
u32 offset;
u16 status;
u8 msg_code;
/* Set the outbound region if needed. */
if (unlikely(ep->irq_pci_addr != CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY ||
ep->irq_pci_fn != fn)) {
- /* Last region was reserved for IRQ writes. */
- cdns_pcie_set_outbound_region_for_normal_msg(pcie, fn, r,
+ /* First region was reserved for IRQ writes. */
+ cdns_pcie_set_outbound_region_for_normal_msg(pcie, fn, 0,
ep->irq_phys_addr);
ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY;
ep->irq_pci_fn = fn;
/* Set the outbound region if needed. */
if (unlikely(ep->irq_pci_addr != (pci_addr & ~pci_addr_mask) ||
ep->irq_pci_fn != fn)) {
- /* Last region was reserved for IRQ writes. */
- cdns_pcie_set_outbound_region(pcie, fn, ep->max_regions - 1,
+ /* First region was reserved for IRQ writes. */
+ cdns_pcie_set_outbound_region(pcie, fn, 0,
false,
ep->irq_phys_addr,
pci_addr & ~pci_addr_mask,
ep->irq_pci_addr = (pci_addr & ~pci_addr_mask);
ep->irq_pci_fn = fn;
}
- writew(data, ep->irq_cpu_addr + (pci_addr & pci_addr_mask));
+ writel(data, ep->irq_cpu_addr + (pci_addr & pci_addr_mask));
return 0;
}
goto free_epc_mem;
}
ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE;
+ /* Reserve region 0 for IRQs */
+ set_bit(0, &ep->ob_region_map);
return 0;
static int cdns_pcie_host_probe(struct platform_device *pdev)
{
- const char *type;
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
struct pci_host_bridge *bridge;
rc->device_id = 0xffff;
of_property_read_u16(np, "device-id", &rc->device_id);
- type = of_get_property(np, "device_type", NULL);
- if (!type || strcmp(type, "pci")) {
- dev_err(dev, "invalid \"device_type\" %s\n", type);
- return -EINVAL;
- }
-
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "reg");
pcie->reg_base = devm_ioremap_resource(dev, res);
if (IS_ERR(pcie->reg_base)) {
for (i = 0; i < phy_count; i++) {
of_property_read_string_index(np, "phy-names", i, &name);
- phy[i] = devm_phy_optional_get(dev, name);
- if (IS_ERR(phy))
- return PTR_ERR(phy);
-
+ phy[i] = devm_phy_get(dev, name);
+ if (IS_ERR(phy[i])) {
+ ret = PTR_ERR(phy[i]);
+ goto err_phy;
+ }
link[i] = device_link_add(dev, &phy[i]->dev, DL_FLAG_STATELESS);
if (!link[i]) {
+ devm_phy_put(dev, phy[i]);
ret = -EINVAL;
- goto err_link;
+ goto err_phy;
}
}
ret = cdns_pcie_enable_phy(pcie);
if (ret)
- goto err_link;
+ goto err_phy;
return 0;
-err_link:
- while (--i >= 0)
+err_phy:
+ while (--i >= 0) {
device_link_del(link[i]);
+ devm_phy_put(dev, phy[i]);
+ }
return ret;
}
return (pcie->base + offset);
}
- /*
- * PAXC is connected to an internally emulated EP within the SoC. It
- * allows only one device.
- */
- if (pcie->ep_is_internal)
- if (slot > 0)
- return NULL;
-
return iproc_pcie_map_ep_cfg_reg(pcie, busno, slot, fn, where);
}
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/msi.h>
+#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/of_pci.h>
#include <linux/of_platform.h>
* @phy: pointer to PHY control block
* @lane: lane count
* @slot: port slot
+ * @irq: GIC irq
* @irq_domain: legacy INTx IRQ domain
* @inner_domain: inner IRQ domain
* @msi_domain: MSI IRQ domain
struct phy *phy;
u32 lane;
u32 slot;
+ int irq;
struct irq_domain *irq_domain;
struct irq_domain *inner_domain;
struct irq_domain *msi_domain;
clk_disable_unprepare(pcie->free_ck);
- if (dev->pm_domain) {
- pm_runtime_put_sync(dev);
- pm_runtime_disable(dev);
- }
+ pm_runtime_put_sync(dev);
+ pm_runtime_disable(dev);
}
static void mtk_pcie_port_free(struct mtk_pcie_port *port)
{
struct mtk_pcie *pcie = bus->sysdata;
struct mtk_pcie_port *port;
+ struct pci_dev *dev = NULL;
+
+ /*
+ * Walk the bus hierarchy to get the devfn value
+ * of the port in the root bus.
+ */
+ while (bus && bus->number) {
+ dev = bus->self;
+ bus = dev->bus;
+ devfn = dev->devfn;
+ }
list_for_each_entry(port, &pcie->ports, list)
if (port->slot == PCI_SLOT(devfn))
.write = mtk_pcie_config_write,
};
-static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port)
-{
- struct mtk_pcie *pcie = port->pcie;
- struct resource *mem = &pcie->mem;
- const struct mtk_pcie_soc *soc = port->pcie->soc;
- u32 val;
- size_t size;
- int err;
-
- /* MT7622 platforms need to enable LTSSM and ASPM from PCIe subsys */
- if (pcie->base) {
- val = readl(pcie->base + PCIE_SYS_CFG_V2);
- val |= PCIE_CSR_LTSSM_EN(port->slot) |
- PCIE_CSR_ASPM_L1_EN(port->slot);
- writel(val, pcie->base + PCIE_SYS_CFG_V2);
- }
-
- /* Assert all reset signals */
- writel(0, port->base + PCIE_RST_CTRL);
-
- /*
- * Enable PCIe link down reset, if link status changed from link up to
- * link down, this will reset MAC control registers and configuration
- * space.
- */
- writel(PCIE_LINKDOWN_RST_EN, port->base + PCIE_RST_CTRL);
-
- /* De-assert PHY, PE, PIPE, MAC and configuration reset */
- val = readl(port->base + PCIE_RST_CTRL);
- val |= PCIE_PHY_RSTB | PCIE_PERSTB | PCIE_PIPE_SRSTB |
- PCIE_MAC_SRSTB | PCIE_CRSTB;
- writel(val, port->base + PCIE_RST_CTRL);
-
- /* Set up vendor ID and class code */
- if (soc->need_fix_class_id) {
- val = PCI_VENDOR_ID_MEDIATEK;
- writew(val, port->base + PCIE_CONF_VEND_ID);
-
- val = PCI_CLASS_BRIDGE_HOST;
- writew(val, port->base + PCIE_CONF_CLASS_ID);
- }
-
- /* 100ms timeout value should be enough for Gen1/2 training */
- err = readl_poll_timeout(port->base + PCIE_LINK_STATUS_V2, val,
- !!(val & PCIE_PORT_LINKUP_V2), 20,
- 100 * USEC_PER_MSEC);
- if (err)
- return -ETIMEDOUT;
-
- /* Set INTx mask */
- val = readl(port->base + PCIE_INT_MASK);
- val &= ~INTX_MASK;
- writel(val, port->base + PCIE_INT_MASK);
-
- /* Set AHB to PCIe translation windows */
- size = mem->end - mem->start;
- val = lower_32_bits(mem->start) | AHB2PCIE_SIZE(fls(size));
- writel(val, port->base + PCIE_AHB_TRANS_BASE0_L);
-
- val = upper_32_bits(mem->start);
- writel(val, port->base + PCIE_AHB_TRANS_BASE0_H);
-
- /* Set PCIe to AXI translation memory space.*/
- val = fls(0xffffffff) | WIN_ENABLE;
- writel(val, port->base + PCIE_AXI_WINDOW0);
-
- return 0;
-}
-
static void mtk_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
struct mtk_pcie_port *port = irq_data_get_irq_chip_data(data);
writel(val, port->base + PCIE_INT_MASK);
}
+static void mtk_pcie_irq_teardown(struct mtk_pcie *pcie)
+{
+ struct mtk_pcie_port *port, *tmp;
+
+ list_for_each_entry_safe(port, tmp, &pcie->ports, list) {
+ irq_set_chained_handler_and_data(port->irq, NULL, NULL);
+
+ if (port->irq_domain)
+ irq_domain_remove(port->irq_domain);
+
+ if (IS_ENABLED(CONFIG_PCI_MSI)) {
+ if (port->msi_domain)
+ irq_domain_remove(port->msi_domain);
+ if (port->inner_domain)
+ irq_domain_remove(port->inner_domain);
+ }
+
+ irq_dispose_mapping(port->irq);
+ }
+}
+
static int mtk_pcie_intx_map(struct irq_domain *domain, unsigned int irq,
irq_hw_number_t hwirq)
{
ret = mtk_pcie_allocate_msi_domains(port);
if (ret)
return ret;
-
- mtk_pcie_enable_msi(port);
}
return 0;
struct mtk_pcie *pcie = port->pcie;
struct device *dev = pcie->dev;
struct platform_device *pdev = to_platform_device(dev);
- int err, irq;
+ int err;
err = mtk_pcie_init_irq_domain(port, node);
if (err) {
return err;
}
- irq = platform_get_irq(pdev, port->slot);
- irq_set_chained_handler_and_data(irq, mtk_pcie_intr_handler, port);
+ port->irq = platform_get_irq(pdev, port->slot);
+ irq_set_chained_handler_and_data(port->irq,
+ mtk_pcie_intr_handler, port);
+
+ return 0;
+}
+
+static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port)
+{
+ struct mtk_pcie *pcie = port->pcie;
+ struct resource *mem = &pcie->mem;
+ const struct mtk_pcie_soc *soc = port->pcie->soc;
+ u32 val;
+ size_t size;
+ int err;
+
+ /* MT7622 platforms need to enable LTSSM and ASPM from PCIe subsys */
+ if (pcie->base) {
+ val = readl(pcie->base + PCIE_SYS_CFG_V2);
+ val |= PCIE_CSR_LTSSM_EN(port->slot) |
+ PCIE_CSR_ASPM_L1_EN(port->slot);
+ writel(val, pcie->base + PCIE_SYS_CFG_V2);
+ }
+
+ /* Assert all reset signals */
+ writel(0, port->base + PCIE_RST_CTRL);
+
+ /*
+ * Enable PCIe link down reset: if the link status changes from link
+ * up to link down, this will reset the MAC control registers and
+ * configuration space.
+ */
+ writel(PCIE_LINKDOWN_RST_EN, port->base + PCIE_RST_CTRL);
+
+ /* De-assert PHY, PE, PIPE, MAC and configuration reset */
+ val = readl(port->base + PCIE_RST_CTRL);
+ val |= PCIE_PHY_RSTB | PCIE_PERSTB | PCIE_PIPE_SRSTB |
+ PCIE_MAC_SRSTB | PCIE_CRSTB;
+ writel(val, port->base + PCIE_RST_CTRL);
+
+ /* Set up vendor ID and class code */
+ if (soc->need_fix_class_id) {
+ val = PCI_VENDOR_ID_MEDIATEK;
+ writew(val, port->base + PCIE_CONF_VEND_ID);
+
+ val = PCI_CLASS_BRIDGE_PCI;
+ writew(val, port->base + PCIE_CONF_CLASS_ID);
+ }
+
+ /* 100ms timeout value should be enough for Gen1/2 training */
+ err = readl_poll_timeout(port->base + PCIE_LINK_STATUS_V2, val,
+ !!(val & PCIE_PORT_LINKUP_V2), 20,
+ 100 * USEC_PER_MSEC);
+ if (err)
+ return -ETIMEDOUT;
+
+ /* Set INTx mask */
+ val = readl(port->base + PCIE_INT_MASK);
+ val &= ~INTX_MASK;
+ writel(val, port->base + PCIE_INT_MASK);
+
+ if (IS_ENABLED(CONFIG_PCI_MSI))
+ mtk_pcie_enable_msi(port);
+
+ /* Set AHB to PCIe translation windows */
+ size = mem->end - mem->start;
+ val = lower_32_bits(mem->start) | AHB2PCIE_SIZE(fls(size));
+ writel(val, port->base + PCIE_AHB_TRANS_BASE0_L);
+
+ val = upper_32_bits(mem->start);
+ writel(val, port->base + PCIE_AHB_TRANS_BASE0_H);
+
+ /* Set PCIe to AXI translation memory space. */
+ val = fls(0xffffffff) | WIN_ENABLE;
+ writel(val, port->base + PCIE_AXI_WINDOW0);
return 0;
}
pcie->free_ck = NULL;
}
- if (dev->pm_domain) {
- pm_runtime_enable(dev);
- pm_runtime_get_sync(dev);
- }
+ pm_runtime_enable(dev);
+ pm_runtime_get_sync(dev);
/* enable top level clock */
err = clk_prepare_enable(pcie->free_ck);
return 0;
err_free_ck:
- if (dev->pm_domain) {
- pm_runtime_put_sync(dev);
- pm_runtime_disable(dev);
- }
+ pm_runtime_put_sync(dev);
+ pm_runtime_disable(dev);
return err;
}
if (err < 0)
return err;
- devm_pci_remap_iospace(dev, &pcie->pio, pcie->io.start);
-
- return 0;
-}
-
-static int mtk_pcie_register_host(struct pci_host_bridge *host)
-{
- struct mtk_pcie *pcie = pci_host_bridge_priv(host);
- struct pci_bus *child;
- int err;
-
- host->busnr = pcie->busn.start;
- host->dev.parent = pcie->dev;
- host->ops = pcie->soc->ops;
- host->map_irq = of_irq_parse_and_map_pci;
- host->swizzle_irq = pci_common_swizzle;
- host->sysdata = pcie;
-
- err = pci_scan_root_bus_bridge(host);
- if (err < 0)
+ err = devm_pci_remap_iospace(dev, &pcie->pio, pcie->io.start);
+ if (err)
return err;
- pci_bus_size_bridges(host->bus);
- pci_bus_assign_resources(host->bus);
-
- list_for_each_entry(child, &host->bus->children, node)
- pcie_bus_configure_settings(child);
-
- pci_bus_add_devices(host->bus);
-
return 0;
}
if (err)
goto put_resources;
- err = mtk_pcie_register_host(host);
+ host->busnr = pcie->busn.start;
+ host->dev.parent = pcie->dev;
+ host->ops = pcie->soc->ops;
+ host->map_irq = of_irq_parse_and_map_pci;
+ host->swizzle_irq = pci_common_swizzle;
+ host->sysdata = pcie;
+
+ err = pci_host_probe(host);
if (err)
goto put_resources;
return err;
}
+
+static void mtk_pcie_free_resources(struct mtk_pcie *pcie)
+{
+ struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
+ struct list_head *windows = &host->windows;
+
+ pci_free_resource_list(windows);
+}
+
+static int mtk_pcie_remove(struct platform_device *pdev)
+{
+ struct mtk_pcie *pcie = platform_get_drvdata(pdev);
+ struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
+
+ pci_stop_root_bus(host->bus);
+ pci_remove_root_bus(host->bus);
+ mtk_pcie_free_resources(pcie);
+
+ mtk_pcie_irq_teardown(pcie);
+
+ mtk_pcie_put_resources(pcie);
+
+ return 0;
+}
+
+static int __maybe_unused mtk_pcie_suspend_noirq(struct device *dev)
+{
+ struct mtk_pcie *pcie = dev_get_drvdata(dev);
+ struct mtk_pcie_port *port;
+
+ if (list_empty(&pcie->ports))
+ return 0;
+
+ list_for_each_entry(port, &pcie->ports, list) {
+ clk_disable_unprepare(port->pipe_ck);
+ clk_disable_unprepare(port->obff_ck);
+ clk_disable_unprepare(port->axi_ck);
+ clk_disable_unprepare(port->aux_ck);
+ clk_disable_unprepare(port->ahb_ck);
+ clk_disable_unprepare(port->sys_ck);
+ phy_power_off(port->phy);
+ phy_exit(port->phy);
+ }
+
+ clk_disable_unprepare(pcie->free_ck);
+
+ return 0;
+}
+
+static int __maybe_unused mtk_pcie_resume_noirq(struct device *dev)
+{
+ struct mtk_pcie *pcie = dev_get_drvdata(dev);
+ struct mtk_pcie_port *port, *tmp;
+
+ if (list_empty(&pcie->ports))
+ return 0;
+
+ clk_prepare_enable(pcie->free_ck);
+
+ list_for_each_entry_safe(port, tmp, &pcie->ports, list)
+ mtk_pcie_enable_port(port);
+
+ /* In case the EP was removed while the system was suspended. */
+ if (list_empty(&pcie->ports))
+ clk_disable_unprepare(pcie->free_ck);
+
+ return 0;
+}
+
+static const struct dev_pm_ops mtk_pcie_pm_ops = {
+ SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(mtk_pcie_suspend_noirq,
+ mtk_pcie_resume_noirq)
+};
+
static const struct mtk_pcie_soc mtk_pcie_soc_v1 = {
.ops = &mtk_pcie_ops,
.startup = mtk_pcie_startup_port,
static struct platform_driver mtk_pcie_driver = {
.probe = mtk_pcie_probe,
+ .remove = mtk_pcie_remove,
.driver = {
.name = "mtk-pcie",
.of_match_table = mtk_pcie_ids,
.suppress_bind_attrs = true,
+ .pm = &mtk_pcie_pm_ops,
},
};
-builtin_platform_driver(mtk_pcie_driver);
+module_platform_driver(mtk_pcie_driver);
+MODULE_LICENSE("GPL v2");
struct platform_device *pdev = pcie->pdev;
struct device_node *node = dev->of_node;
struct resource *res;
- const char *type;
-
- type = of_get_property(node, "device_type", NULL);
- if (!type || strcmp(type, "pci")) {
- dev_err(dev, "invalid \"device_type\" %s\n", type);
- return -EINVAL;
- }
/* map config resource */
res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
struct platform_device *pdev)
{
struct device *dev = pcie->dev;
- struct device_node *node = dev->of_node;
struct resource *res;
- const char *type;
-
- /* Check for device type */
- type = of_get_property(node, "device_type", NULL);
- if (!type || strcmp(type, "pci")) {
- dev_err(dev, "invalid \"device_type\" %s\n", type);
- return -EINVAL;
- }
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "breg");
pcie->breg_base = devm_ioremap_resource(dev, res);
struct device *dev = port->dev;
struct device_node *node = dev->of_node;
struct resource regs;
- const char *type;
int err;
- type = of_get_property(node, "device_type", NULL);
- if (!type || strcmp(type, "pci")) {
- dev_err(dev, "invalid \"device_type\" %s\n", type);
- return -EINVAL;
- }
-
err = of_address_to_resource(node, 0, &regs);
if (err) {
dev_err(dev, "missing \"reg\" property\n");
{
struct vmd_dev *vmd = pci_get_drvdata(dev);
- vmd_detach_resources(vmd);
sysfs_remove_link(&vmd->dev->dev.kobj, "domain");
pci_stop_root_bus(vmd->bus);
pci_remove_root_bus(vmd->bus);
vmd_cleanup_srcu(vmd);
vmd_teardown_dma_ops(vmd);
+ vmd_detach_resources(vmd);
irq_domain_remove(vmd->irq_domain);
}
--- /dev/null
+Contributions are solicited in particular to remedy the following issues:
+
+cpcihp:
+
+* There are no implementations of the ->hardware_test, ->get_power and
+ ->set_power callbacks in struct cpci_hp_controller_ops. Why were they
+ introduced? Can they be removed from the struct?
+
+cpqphp:
+
+* The driver spawns a kthread cpqhp_event_thread() which is woken by the
+ hardirq handler cpqhp_ctrl_intr(). Convert this to threaded IRQ handling,
+ as sketched below. The kthread is also woken from the timer
+ pushbutton_helper_thread(); convert it to call irq_wake_thread(). Use
+ pciehp as a template.
+
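As a rough illustration (not a drop-in patch), the conversion could look like
the sketch below. cpqhp_interrupt_pending() and cpqhp_ack_interrupt() are
hypothetical helpers standing in for the register checks the current
cpqhp_ctrl_intr() already performs; ctrl->interrupt, MY_NAME and
interrupt_event_handler() exist in cpqphp today.

  #include <linux/interrupt.h>

  /* Hardirq half: check whether this controller raised the interrupt,
   * acknowledge it, then kick the IRQ thread. */
  static irqreturn_t cpqhp_ctrl_intr(int irq, void *data)
  {
          struct controller *ctrl = data;

          if (!cpqhp_interrupt_pending(ctrl))     /* hypothetical helper */
                  return IRQ_NONE;
          cpqhp_ack_interrupt(ctrl);              /* hypothetical helper */
          return IRQ_WAKE_THREAD;
  }

  /* Thread half: does the work the cpqhp_event_thread() kthread used to do. */
  static irqreturn_t cpqhp_event_thread_fn(int irq, void *data)
  {
          struct controller *ctrl = data;

          interrupt_event_handler(ctrl);
          return IRQ_HANDLED;
  }

  static int cpqhp_setup_irq(struct controller *ctrl)
  {
          /* Replaces the request_irq() + kthread_run() pair. */
          return request_threaded_irq(ctrl->interrupt, cpqhp_ctrl_intr,
                                      cpqhp_event_thread_fn, IRQF_SHARED,
                                      MY_NAME, ctrl);
  }

  /* pushbutton_helper_thread() would then call
   * irq_wake_thread(ctrl->interrupt, ctrl) instead of wake_up_process(). */
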
+* A large portion of cpqphp_ctrl.c and cpqphp_pci.c concerns resource
+ management. Doesn't this duplicate functionality in the core?
+
+ibmphp:
+
+* Implementations of hotplug_slot_ops callbacks such as get_adapter_present()
+ in ibmphp_core.c create a copy of the struct slot on the stack, then perform
+ the actual operation on that copy. Determine if this overhead is necessary,
+ delete it if not (see the sketch below). The functions also perform a NULL
+ pointer check on the struct hotplug_slot; this seems superfluous.
+
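For comparison, here is a sketch of the copy-free pattern the rest of this
series converges on, using the to_slot() container_of() helper the series
adds to ibmphp.h. It is an illustration rather than a drop-in replacement:
whether dropping the stack copy preserves the intended semantics (the copy
keeps a failed read from clobbering cached state) is exactly what the item
asks to determine.

  static int get_adapter_present(struct hotplug_slot *hotplug_slot, u8 *value)
  {
          struct slot *pslot = to_slot(hotplug_slot);
          u8 status;
          int rc;

          ibmphp_lock_operations();
          /* Read into a local u8 instead of copying the whole struct slot. */
          rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS, &status);
          if (!rc)
                  *value = SLOT_PRESENT(status) == HPC_SLOT_EMPTY ? 0 : 1;
          ibmphp_unlock_operations();

          return rc;
  }
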
+* Several functions access the pci_slot member in struct hotplug_slot even
+ though pci_hotplug.h declares it private. See get_max_bus_speed() for an
+ example. Either the pci_slot member should no longer be declared private
+ or ibmphp should store a pointer to its bus in struct slot. Probably the
+ former.
+
+* The functions get_max_adapter_speed() and get_bus_name() are commented out.
+ Can they be deleted? There are also forward declarations at the top of
+ ibmphp_core.c as well as pointers in ibmphp_hotplug_slot_ops, likewise
+ commented out.
+
+* ibmphp_init_devno() takes a struct slot **, it could instead take a
+ struct slot *.
+
+* The return value of pci_hp_register() is not checked (see the sketch below).
+
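A minimal sketch of the missing check, mirroring the handling in the other
hotplug drivers; the unwind label is hypothetical and pr_err() is used for
reporting:

  rc = pci_hp_register(&tmp_slot->hotplug_slot,
                       pci_find_bus(0, tmp_slot->bus), tmp_slot->device, name);
  if (rc) {
          pr_err("pci_hp_register failed with error %d\n", rc);
          goto error;     /* hypothetical unwind label */
  }
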
+* iounmap(io_mem) is called in the error path of ebda_rsrc_controller()
+ and once more in the error path of its caller ibmphp_access_ebda().
+
+* The various slot data structures are difficult to follow and need to be
+ simplified. A lot of functions are too large and too complex; they need
+ to be broken up into smaller, manageable pieces. Negative examples are
+ ebda_rsrc_controller() and configure_bridge().
+
+* A large portion of ibmphp_res.c and ibmphp_pci.c concerns resource
+ management. Doesn't this duplicate functionality in the core?
+
+sgi_hotplug:
+
+* Several functions access the pci_slot member in struct hotplug_slot even
+ though pci_hotplug.h declares it private. See sn_hp_destroy() for an
+ example. Either the pci_slot member should no longer be declared private
+ or sgi_hotplug should store a pointer to it in struct slot. Probably the
+ former.
+
+shpchp:
+
+* There is only a single implementation of struct hpc_ops. Can the struct be
+ removed and its functions invoked directly? This has already been done in
+ pciehp with commit 82a9e79ef132 ("PCI: pciehp: remove hpc_ops"). Clarify
+ if there was a specific reason not to apply the same change to shpchp.
+
+* The ->get_mode1_ECC_cap callback in shpchp_hpc_ops is never invoked.
+ Why was it introduced? Can it be removed?
+
+* The hardirq handler shpc_isr() queues events on a workqueue. It can be
+ simplified by converting it to threaded IRQ handling (the cpqphp sketch
+ above applies here as well). Use pciehp as a template.
* struct slot - slot information for each *physical* slot
*/
struct slot {
- struct hotplug_slot *hotplug_slot;
+ struct hotplug_slot hotplug_slot;
struct acpiphp_slot *acpi_slot;
- struct hotplug_slot_info info;
unsigned int sun; /* ACPI _SUN (Slot User Number) value */
};
static inline const char *slot_name(struct slot *slot)
{
- return hotplug_slot_name(slot->hotplug_slot);
+ return hotplug_slot_name(&slot->hotplug_slot);
+}
+
+static inline struct slot *to_slot(struct hotplug_slot *hotplug_slot)
+{
+ return container_of(hotplug_slot, struct slot, hotplug_slot);
}
/*
static int get_latch_status(struct hotplug_slot *slot, u8 *value);
static int get_adapter_status(struct hotplug_slot *slot, u8 *value);
-static struct hotplug_slot_ops acpi_hotplug_slot_ops = {
+static const struct hotplug_slot_ops acpi_hotplug_slot_ops = {
.enable_slot = enable_slot,
.disable_slot = disable_slot,
.set_attention_status = set_attention_status,
*/
static int enable_slot(struct hotplug_slot *hotplug_slot)
{
- struct slot *slot = hotplug_slot->private;
+ struct slot *slot = to_slot(hotplug_slot);
pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot));
*/
static int disable_slot(struct hotplug_slot *hotplug_slot)
{
- struct slot *slot = hotplug_slot->private;
+ struct slot *slot = to_slot(hotplug_slot);
pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot));
*/
static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
- struct slot *slot = hotplug_slot->private;
+ struct slot *slot = to_slot(hotplug_slot);
pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot));
*/
static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
- struct slot *slot = hotplug_slot->private;
+ struct slot *slot = to_slot(hotplug_slot);
pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot));
*/
static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
- struct slot *slot = hotplug_slot->private;
+ struct slot *slot = to_slot(hotplug_slot);
pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot));
if (!slot)
goto error;
- slot->hotplug_slot = kzalloc(sizeof(*slot->hotplug_slot), GFP_KERNEL);
- if (!slot->hotplug_slot)
- goto error_slot;
-
- slot->hotplug_slot->info = &slot->info;
-
- slot->hotplug_slot->private = slot;
- slot->hotplug_slot->ops = &acpi_hotplug_slot_ops;
+ slot->hotplug_slot.ops = &acpi_hotplug_slot_ops;
slot->acpi_slot = acpiphp_slot;
- slot->hotplug_slot->info->power_status = acpiphp_get_power_status(slot->acpi_slot);
- slot->hotplug_slot->info->attention_status = 0;
- slot->hotplug_slot->info->latch_status = acpiphp_get_latch_status(slot->acpi_slot);
- slot->hotplug_slot->info->adapter_status = acpiphp_get_adapter_status(slot->acpi_slot);
acpiphp_slot->slot = slot;
slot->sun = sun;
snprintf(name, SLOT_NAME_SIZE, "%u", sun);
- retval = pci_hp_register(slot->hotplug_slot, acpiphp_slot->bus,
+ retval = pci_hp_register(&slot->hotplug_slot, acpiphp_slot->bus,
acpiphp_slot->device, name);
if (retval == -EBUSY)
- goto error_hpslot;
+ goto error_slot;
if (retval) {
pr_err("pci_hp_register failed with error %d\n", retval);
- goto error_hpslot;
+ goto error_slot;
}
pr_info("Slot [%s] registered\n", slot_name(slot));
return 0;
-error_hpslot:
- kfree(slot->hotplug_slot);
error_slot:
kfree(slot);
error:
pr_info("Slot [%s] unregistered\n", slot_name(slot));
- pci_hp_deregister(slot->hotplug_slot);
- kfree(slot->hotplug_slot);
+ pci_hp_deregister(&slot->hotplug_slot);
kfree(slot);
}
#define IBM_HARDWARE_ID1 "IBM37D0"
#define IBM_HARDWARE_ID2 "IBM37D4"
-#define hpslot_to_sun(A) (((struct slot *)((A)->private))->sun)
+#define hpslot_to_sun(A) (to_slot(A)->sun)
/* union apci_descriptor - allows access to the
* various device descriptors that are embedded in the
unsigned int devfn;
struct pci_bus *bus;
struct pci_dev *dev;
+ unsigned int latch_status:1;
+ unsigned int adapter_status:1;
unsigned int extracting;
- struct hotplug_slot *hotplug_slot;
+ struct hotplug_slot hotplug_slot;
struct list_head slot_list;
};
static inline const char *slot_name(struct slot *slot)
{
- return hotplug_slot_name(slot->hotplug_slot);
+ return hotplug_slot_name(&slot->hotplug_slot);
+}
+
+static inline struct slot *to_slot(struct hotplug_slot *hotplug_slot)
+{
+ return container_of(hotplug_slot, struct slot, hotplug_slot);
}
int cpci_hp_register_controller(struct cpci_hp_controller *controller);
static int get_adapter_status(struct hotplug_slot *slot, u8 *value);
static int get_latch_status(struct hotplug_slot *slot, u8 *value);
-static struct hotplug_slot_ops cpci_hotplug_slot_ops = {
+static const struct hotplug_slot_ops cpci_hotplug_slot_ops = {
.enable_slot = enable_slot,
.disable_slot = disable_slot,
.set_attention_status = set_attention_status,
.get_latch_status = get_latch_status,
};
-static int
-update_latch_status(struct hotplug_slot *hotplug_slot, u8 value)
-{
- struct hotplug_slot_info info;
-
- memcpy(&info, hotplug_slot->info, sizeof(struct hotplug_slot_info));
- info.latch_status = value;
- return pci_hp_change_slot_info(hotplug_slot, &info);
-}
-
-static int
-update_adapter_status(struct hotplug_slot *hotplug_slot, u8 value)
-{
- struct hotplug_slot_info info;
-
- memcpy(&info, hotplug_slot->info, sizeof(struct hotplug_slot_info));
- info.adapter_status = value;
- return pci_hp_change_slot_info(hotplug_slot, &info);
-}
-
static int
enable_slot(struct hotplug_slot *hotplug_slot)
{
- struct slot *slot = hotplug_slot->private;
+ struct slot *slot = to_slot(hotplug_slot);
int retval = 0;
dbg("%s - physical_slot = %s", __func__, slot_name(slot));
static int
disable_slot(struct hotplug_slot *hotplug_slot)
{
- struct slot *slot = hotplug_slot->private;
+ struct slot *slot = to_slot(hotplug_slot);
int retval = 0;
dbg("%s - physical_slot = %s", __func__, slot_name(slot));
goto disable_error;
}
- if (update_adapter_status(slot->hotplug_slot, 0))
- warn("failure to update adapter file");
+ slot->adapter_status = 0;
if (slot->extracting) {
slot->extracting = 0;
static int
get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
- struct slot *slot = hotplug_slot->private;
+ struct slot *slot = to_slot(hotplug_slot);
*value = cpci_get_power_status(slot);
return 0;
static int
get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
- struct slot *slot = hotplug_slot->private;
+ struct slot *slot = to_slot(hotplug_slot);
*value = cpci_get_attention_status(slot);
return 0;
static int
set_attention_status(struct hotplug_slot *hotplug_slot, u8 status)
{
- return cpci_set_attention_status(hotplug_slot->private, status);
+ return cpci_set_attention_status(to_slot(hotplug_slot), status);
}
static int
get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
- *value = hotplug_slot->info->adapter_status;
+ struct slot *slot = to_slot(hotplug_slot);
+
+ *value = slot->adapter_status;
return 0;
}
static int
get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
- *value = hotplug_slot->info->latch_status;
+ struct slot *slot = to_slot(hotplug_slot);
+
+ *value = slot->latch_status;
return 0;
}
static void release_slot(struct slot *slot)
{
- kfree(slot->hotplug_slot->info);
- kfree(slot->hotplug_slot);
pci_dev_put(slot->dev);
kfree(slot);
}
cpci_hp_register_bus(struct pci_bus *bus, u8 first, u8 last)
{
struct slot *slot;
- struct hotplug_slot *hotplug_slot;
- struct hotplug_slot_info *info;
char name[SLOT_NAME_SIZE];
int status;
int i;
goto error;
}
- hotplug_slot =
- kzalloc(sizeof(struct hotplug_slot), GFP_KERNEL);
- if (!hotplug_slot) {
- status = -ENOMEM;
- goto error_slot;
- }
- slot->hotplug_slot = hotplug_slot;
-
- info = kzalloc(sizeof(struct hotplug_slot_info), GFP_KERNEL);
- if (!info) {
- status = -ENOMEM;
- goto error_hpslot;
- }
- hotplug_slot->info = info;
-
slot->bus = bus;
slot->number = i;
slot->devfn = PCI_DEVFN(i, 0);
snprintf(name, SLOT_NAME_SIZE, "%02x:%02x", bus->number, i);
- hotplug_slot->private = slot;
- hotplug_slot->ops = &cpci_hotplug_slot_ops;
-
- /*
- * Initialize the slot info structure with some known
- * good values.
- */
- dbg("initializing slot %s", name);
- info->power_status = cpci_get_power_status(slot);
- info->attention_status = cpci_get_attention_status(slot);
+ slot->hotplug_slot.ops = &cpci_hotplug_slot_ops;
dbg("registering slot %s", name);
- status = pci_hp_register(slot->hotplug_slot, bus, i, name);
+ status = pci_hp_register(&slot->hotplug_slot, bus, i, name);
if (status) {
err("pci_hp_register failed with error %d", status);
- goto error_info;
+ goto error_slot;
}
dbg("slot registered with name: %s", slot_name(slot));
up_write(&list_rwsem);
}
return 0;
-error_info:
- kfree(info);
-error_hpslot:
- kfree(hotplug_slot);
error_slot:
kfree(slot);
error:
slots--;
dbg("deregistering slot %s", slot_name(slot));
- pci_hp_deregister(slot->hotplug_slot);
+ pci_hp_deregister(&slot->hotplug_slot);
release_slot(slot);
}
}
__func__, slot_name(slot));
dev = pci_get_slot(slot->bus, PCI_DEVFN(slot->number, 0));
if (dev) {
- if (update_adapter_status(slot->hotplug_slot, 1))
- warn("failure to update adapter file");
- if (update_latch_status(slot->hotplug_slot, 1))
- warn("failure to update latch file");
+ slot->adapter_status = 1;
+ slot->latch_status = 1;
slot->dev = dev;
}
}
dbg("%s - slot %s HS_CSR (2) = %04x",
__func__, slot_name(slot), hs_csr);
- if (update_latch_status(slot->hotplug_slot, 1))
- warn("failure to update latch file");
-
- if (update_adapter_status(slot->hotplug_slot, 1))
- warn("failure to update adapter file");
+ slot->latch_status = 1;
+ slot->adapter_status = 1;
cpci_led_off(slot);
__func__, slot_name(slot), hs_csr);
if (!slot->extracting) {
- if (update_latch_status(slot->hotplug_slot, 0))
- warn("failure to update latch file");
-
+ slot->latch_status = 0;
slot->extracting = 1;
atomic_inc(&extracting);
}
*/
err("card in slot %s was improperly removed",
slot_name(slot));
- if (update_adapter_status(slot->hotplug_slot, 0))
- warn("failure to update adapter file");
+ slot->adapter_status = 0;
slot->extracting = 0;
atomic_dec(&extracting);
}
goto cleanup_null;
list_for_each_entry_safe(slot, tmp, &slot_list, slot_list) {
list_del(&slot->slot_list);
- pci_hp_deregister(slot->hotplug_slot);
+ pci_hp_deregister(&slot->hotplug_slot);
release_slot(slot);
}
cleanup_null:
slot->devfn,
hs_cap + 2,
hs_csr)) {
- err("Could not set LOO for slot %s",
- hotplug_slot_name(slot->hotplug_slot));
+ err("Could not set LOO for slot %s", slot_name(slot));
return -ENODEV;
}
}
slot->devfn,
hs_cap + 2,
hs_csr)) {
- err("Could not clear LOO for slot %s",
- hotplug_slot_name(slot->hotplug_slot));
+ err("Could not clear LOO for slot %s", slot_name(slot));
return -ENODEV;
}
}
u8 hp_slot;
struct controller *ctrl;
void __iomem *p_sm_slot;
- struct hotplug_slot *hotplug_slot;
+ struct hotplug_slot hotplug_slot;
};
struct pci_resource {
static inline const char *slot_name(struct slot *slot)
{
- return hotplug_slot_name(slot->hotplug_slot);
+ return hotplug_slot_name(&slot->hotplug_slot);
+}
+
+static inline struct slot *to_slot(struct hotplug_slot *hotplug_slot)
+{
+ return container_of(hotplug_slot, struct slot, hotplug_slot);
}
/*
{
u32 tempdword;
u32 number_of_slots;
- u8 physical_slot;
if (!ctrl)
return 1;
number_of_slots = readb(ctrl->hpc_reg + SLOT_MASK) & 0x0F;
/* Loop through slots */
while (number_of_slots) {
- physical_slot = tempdword;
writeb(0, ctrl->hpc_reg + SLOT_SERR);
tempdword++;
number_of_slots--;
while (old_slot) {
next_slot = old_slot->next;
- pci_hp_deregister(old_slot->hotplug_slot);
- kfree(old_slot->hotplug_slot->info);
- kfree(old_slot->hotplug_slot);
+ pci_hp_deregister(&old_slot->hotplug_slot);
kfree(old_slot);
old_slot = next_slot;
}
static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status)
{
struct pci_func *slot_func;
- struct slot *slot = hotplug_slot->private;
+ struct slot *slot = to_slot(hotplug_slot);
struct controller *ctrl = slot->ctrl;
u8 bus;
u8 devfn;
static int process_SI(struct hotplug_slot *hotplug_slot)
{
struct pci_func *slot_func;
- struct slot *slot = hotplug_slot->private;
+ struct slot *slot = to_slot(hotplug_slot);
struct controller *ctrl = slot->ctrl;
u8 bus;
u8 devfn;
static int process_SS(struct hotplug_slot *hotplug_slot)
{
struct pci_func *slot_func;
- struct slot *slot = hotplug_slot->private;
+ struct slot *slot = to_slot(hotplug_slot);
struct controller *ctrl = slot->ctrl;
u8 bus;
u8 devfn;
static int hardware_test(struct hotplug_slot *hotplug_slot, u32 value)
{
- struct slot *slot = hotplug_slot->private;
+ struct slot *slot = to_slot(hotplug_slot);
struct controller *ctrl = slot->ctrl;
dbg("%s - physical_slot = %s\n", __func__, slot_name(slot));
static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
- struct slot *slot = hotplug_slot->private;
+ struct slot *slot = to_slot(hotplug_slot);
struct controller *ctrl = slot->ctrl;
dbg("%s - physical_slot = %s\n", __func__, slot_name(slot));
static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
- struct slot *slot = hotplug_slot->private;
+ struct slot *slot = to_slot(hotplug_slot);
struct controller *ctrl = slot->ctrl;
dbg("%s - physical_slot = %s\n", __func__, slot_name(slot));
static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
- struct slot *slot = hotplug_slot->private;
+ struct slot *slot = to_slot(hotplug_slot);
struct controller *ctrl = slot->ctrl;
dbg("%s - physical_slot = %s\n", __func__, slot_name(slot));
static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
- struct slot *slot = hotplug_slot->private;
+ struct slot *slot = to_slot(hotplug_slot);
struct controller *ctrl = slot->ctrl;
dbg("%s - physical_slot = %s\n", __func__, slot_name(slot));
return 0;
}
-static struct hotplug_slot_ops cpqphp_hotplug_slot_ops = {
+static const struct hotplug_slot_ops cpqphp_hotplug_slot_ops = {
.set_attention_status = set_attention_status,
.enable_slot = process_SI,
.disable_slot = process_SS,
void __iomem *smbios_table)
{
struct slot *slot;
- struct hotplug_slot *hotplug_slot;
- struct hotplug_slot_info *hotplug_slot_info;
struct pci_bus *bus = ctrl->pci_bus;
u8 number_of_slots;
u8 slot_device;
goto error;
}
- slot->hotplug_slot = kzalloc(sizeof(*(slot->hotplug_slot)),
- GFP_KERNEL);
- if (!slot->hotplug_slot) {
- result = -ENOMEM;
- goto error_slot;
- }
- hotplug_slot = slot->hotplug_slot;
-
- hotplug_slot->info = kzalloc(sizeof(*(hotplug_slot->info)),
- GFP_KERNEL);
- if (!hotplug_slot->info) {
- result = -ENOMEM;
- goto error_hpslot;
- }
- hotplug_slot_info = hotplug_slot->info;
-
slot->ctrl = ctrl;
slot->bus = ctrl->bus;
slot->device = slot_device;
((read_slot_enable(ctrl) << 2) >> ctrl_slot) & 0x04;
/* register this slot with the hotplug pci core */
- hotplug_slot->private = slot;
snprintf(name, SLOT_NAME_SIZE, "%u", slot->number);
- hotplug_slot->ops = &cpqphp_hotplug_slot_ops;
-
- hotplug_slot_info->power_status = get_slot_enabled(ctrl, slot);
- hotplug_slot_info->attention_status =
- cpq_get_attention_status(ctrl, slot);
- hotplug_slot_info->latch_status =
- cpq_get_latch_status(ctrl, slot);
- hotplug_slot_info->adapter_status =
- get_presence_status(ctrl, slot);
+ slot->hotplug_slot.ops = &cpqphp_hotplug_slot_ops;
dbg("registering bus %d, dev %d, number %d, ctrl->slot_device_offset %d, slot %d\n",
slot->bus, slot->device,
slot->number, ctrl->slot_device_offset,
slot_number);
- result = pci_hp_register(hotplug_slot,
+ result = pci_hp_register(&slot->hotplug_slot,
ctrl->pci_dev->bus,
slot->device,
name);
if (result) {
err("pci_hp_register failed with error %d\n", result);
- goto error_info;
+ goto error_slot;
}
slot->next = ctrl->slot;
}
return 0;
-error_info:
- kfree(hotplug_slot_info);
-error_hpslot:
- kfree(hotplug_slot);
error_slot:
kfree(slot);
error:
for (slot = ctrl->slot; slot; slot = slot->next) {
if (slot->device == (hp_slot + ctrl->slot_device_offset))
continue;
- if (!slot->hotplug_slot || !slot->hotplug_slot->info)
- continue;
- if (slot->hotplug_slot->info->adapter_status == 0)
+ if (get_presence_status(ctrl, slot) == 0)
continue;
/* If another adapter is running on the same segment but at a
* lower speed/mode, we allow the new adapter to function at
}
-static int update_slot_info(struct controller *ctrl, struct slot *slot)
-{
- struct hotplug_slot_info *info;
- int result;
-
- info = kmalloc(sizeof(*info), GFP_KERNEL);
- if (!info)
- return -ENOMEM;
-
- info->power_status = get_slot_enabled(ctrl, slot);
- info->attention_status = cpq_get_attention_status(ctrl, slot);
- info->latch_status = cpq_get_latch_status(ctrl, slot);
- info->adapter_status = get_presence_status(ctrl, slot);
- result = pci_hp_change_slot_info(slot->hotplug_slot, info);
- kfree(info);
- return result;
-}
-
static void interrupt_event_handler(struct controller *ctrl)
{
int loop = 0;
/***********POWER FAULT */
else if (ctrl->event_queue[loop].event_type == INT_POWER_FAULT) {
dbg("power fault\n");
- } else {
- /* refresh notification */
- update_slot_info(ctrl, p_slot);
}
ctrl->event_queue[loop].event_type = 0;
if (rc)
dbg("%s: rc = %d\n", __func__, rc);
- if (p_slot)
- update_slot_info(ctrl, p_slot);
-
return rc;
}
rc = 1;
}
- if (p_slot)
- update_slot_info(ctrl, p_slot);
-
return rc;
}
u8 supported_bus_mode;
u8 flag; /* this is for disable slot and polling */
u8 ctlr_index;
- struct hotplug_slot *hotplug_slot;
+ struct hotplug_slot hotplug_slot;
struct controller *ctrl;
struct pci_func *func;
u8 irq[4];
int ibmphp_update_slot_info(struct slot *); /* This function is called from HPC, so we need it to not be static */
int ibmphp_configure_card(struct pci_func *, u8);
int ibmphp_unconfigure_card(struct slot **, int);
-extern struct hotplug_slot_ops ibmphp_hotplug_slot_ops;
+extern const struct hotplug_slot_ops ibmphp_hotplug_slot_ops;
+
+static inline struct slot *to_slot(struct hotplug_slot *hotplug_slot)
+{
+ return container_of(hotplug_slot, struct slot, hotplug_slot);
+}
#endif //__IBMPHP_H
break;
}
if (rc == 0) {
- pslot = hotplug_slot->private;
- if (pslot)
- rc = ibmphp_hpc_writeslot(pslot, cmd);
- else
- rc = -ENODEV;
+ pslot = to_slot(hotplug_slot);
+ rc = ibmphp_hpc_writeslot(pslot, cmd);
}
} else
rc = -ENODEV;
ibmphp_lock_operations();
if (hotplug_slot) {
- pslot = hotplug_slot->private;
- if (pslot) {
- memcpy(&myslot, pslot, sizeof(struct slot));
- rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
- &(myslot.status));
- if (!rc)
- rc = ibmphp_hpc_readslot(pslot,
- READ_EXTSLOTSTATUS,
- &(myslot.ext_status));
- if (!rc)
- *value = SLOT_ATTN(myslot.status,
- myslot.ext_status);
- }
+ pslot = to_slot(hotplug_slot);
+ memcpy(&myslot, pslot, sizeof(struct slot));
+ rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
+ &myslot.status);
+ if (!rc)
+ rc = ibmphp_hpc_readslot(pslot, READ_EXTSLOTSTATUS,
+ &myslot.ext_status);
+ if (!rc)
+ *value = SLOT_ATTN(myslot.status, myslot.ext_status);
}
ibmphp_unlock_operations();
(ulong) hotplug_slot, (ulong) value);
ibmphp_lock_operations();
if (hotplug_slot) {
- pslot = hotplug_slot->private;
- if (pslot) {
- memcpy(&myslot, pslot, sizeof(struct slot));
- rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
- &(myslot.status));
- if (!rc)
- *value = SLOT_LATCH(myslot.status);
- }
+ pslot = to_slot(hotplug_slot);
+ memcpy(&myslot, pslot, sizeof(struct slot));
+ rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
+ &myslot.status);
+ if (!rc)
+ *value = SLOT_LATCH(myslot.status);
}
ibmphp_unlock_operations();
(ulong) hotplug_slot, (ulong) value);
ibmphp_lock_operations();
if (hotplug_slot) {
- pslot = hotplug_slot->private;
- if (pslot) {
- memcpy(&myslot, pslot, sizeof(struct slot));
- rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
- &(myslot.status));
- if (!rc)
- *value = SLOT_PWRGD(myslot.status);
- }
+ pslot = to_slot(hotplug_slot);
+ memcpy(&myslot, pslot, sizeof(struct slot));
+ rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
+ &myslot.status);
+ if (!rc)
+ *value = SLOT_PWRGD(myslot.status);
}
ibmphp_unlock_operations();
(ulong) hotplug_slot, (ulong) value);
ibmphp_lock_operations();
if (hotplug_slot) {
- pslot = hotplug_slot->private;
- if (pslot) {
- memcpy(&myslot, pslot, sizeof(struct slot));
- rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
- &(myslot.status));
- if (!rc) {
- present = SLOT_PRESENT(myslot.status);
- if (present == HPC_SLOT_EMPTY)
- *value = 0;
- else
- *value = 1;
- }
+ pslot = to_slot(hotplug_slot);
+ memcpy(&myslot, pslot, sizeof(struct slot));
+ rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
+ &myslot.status);
+ if (!rc) {
+ present = SLOT_PRESENT(myslot.status);
+ if (present == HPC_SLOT_EMPTY)
+ *value = 0;
+ else
+ *value = 1;
}
}
int rc = 0;
u8 mode = 0;
enum pci_bus_speed speed;
- struct pci_bus *bus = slot->hotplug_slot->pci_slot->bus;
+ struct pci_bus *bus = slot->hotplug_slot.pci_slot->bus;
debug("%s - Entry slot[%p]\n", __func__, slot);
****************************************************************************/
int ibmphp_update_slot_info(struct slot *slot_cur)
{
- struct hotplug_slot_info *info;
- struct pci_bus *bus = slot_cur->hotplug_slot->pci_slot->bus;
- int rc;
+ struct pci_bus *bus = slot_cur->hotplug_slot.pci_slot->bus;
u8 bus_speed;
u8 mode;
- info = kmalloc(sizeof(struct hotplug_slot_info), GFP_KERNEL);
- if (!info)
- return -ENOMEM;
-
- info->power_status = SLOT_PWRGD(slot_cur->status);
- info->attention_status = SLOT_ATTN(slot_cur->status,
- slot_cur->ext_status);
- info->latch_status = SLOT_LATCH(slot_cur->status);
- if (!SLOT_PRESENT(slot_cur->status)) {
- info->adapter_status = 0;
-/* info->max_adapter_speed_status = MAX_ADAPTER_NONE; */
- } else {
- info->adapter_status = 1;
-/* get_max_adapter_speed_1(slot_cur->hotplug_slot,
- &info->max_adapter_speed_status, 0); */
- }
-
bus_speed = slot_cur->bus_on->current_speed;
mode = slot_cur->bus_on->current_bus_mode;
bus->cur_bus_speed = bus_speed;
// To do: bus_names
- rc = pci_hp_change_slot_info(slot_cur->hotplug_slot, info);
- kfree(info);
- return rc;
+ return 0;
}
list_for_each_entry_safe(slot_cur, next, &ibmphp_slot_head,
ibm_slot_list) {
- pci_hp_del(slot_cur->hotplug_slot);
+ pci_hp_del(&slot_cur->hotplug_slot);
slot_cur->ctrl = NULL;
slot_cur->bus_on = NULL;
*/
ibmphp_unconfigure_card(&slot_cur, -1);
- pci_hp_destroy(slot_cur->hotplug_slot);
- kfree(slot_cur->hotplug_slot->info);
- kfree(slot_cur->hotplug_slot);
+ pci_hp_destroy(&slot_cur->hotplug_slot);
kfree(slot_cur);
}
debug("%s -- exit\n", __func__);
ibmphp_lock_operations();
debug("ENABLING SLOT........\n");
- slot_cur = hs->private;
+ slot_cur = to_slot(hs);
rc = validate(slot_cur, ENABLE);
if (rc) {
slot_cur->func = kzalloc(sizeof(struct pci_func), GFP_KERNEL);
if (!slot_cur->func) {
- /* We cannot do update_slot_info here, since no memory for
- * kmalloc n.e.ways, and update_slot_info allocates some */
+ /* do update_slot_info here? */
rc = -ENOMEM;
goto error_power;
}
**************************************************************/
static int ibmphp_disable_slot(struct hotplug_slot *hotplug_slot)
{
- struct slot *slot = hotplug_slot->private;
+ struct slot *slot = to_slot(hotplug_slot);
int rc;
ibmphp_lock_operations();
goto exit;
}
-struct hotplug_slot_ops ibmphp_hotplug_slot_ops = {
+const struct hotplug_slot_ops ibmphp_hotplug_slot_ops = {
.set_attention_status = set_attention_status,
.enable_slot = enable_slot,
.disable_slot = ibmphp_disable_slot,
struct slot *slot;
int rc = 0;
- if (!hotplug_slot || !hotplug_slot->private)
- return -EINVAL;
-
- slot = hotplug_slot->private;
+ slot = to_slot(hotplug_slot);
rc = ibmphp_hpc_readslot(slot, READ_ALLSTAT, NULL);
- if (rc)
- return rc;
-
- // power - enabled:1 not:0
- hotplug_slot->info->power_status = SLOT_POWER(slot->status);
-
- // attention - off:0, on:1, blinking:2
- hotplug_slot->info->attention_status = SLOT_ATTN(slot->status, slot->ext_status);
-
- // latch - open:1 closed:0
- hotplug_slot->info->latch_status = SLOT_LATCH(slot->status);
-
- // pci board - present:1 not:0
- if (SLOT_PRESENT(slot->status))
- hotplug_slot->info->adapter_status = 1;
- else
- hotplug_slot->info->adapter_status = 0;
-/*
- if (slot->bus_on->supported_bus_mode
- && (slot->bus_on->supported_speed == BUS_SPEED_66))
- hotplug_slot->info->max_bus_speed_status = BUS_SPEED_66PCIX;
- else
- hotplug_slot->info->max_bus_speed_status = slot->bus_on->supported_speed;
-*/
-
return rc;
}
u8 ctlr_id, temp, bus_index;
u16 ctlr, slot, bus;
u16 slot_num, bus_num, index;
- struct hotplug_slot *hp_slot_ptr;
struct controller *hpc_ptr;
struct ebda_hpc_bus *bus_ptr;
struct ebda_hpc_slot *slot_ptr;
bus_info_ptr1 = kzalloc(sizeof(struct bus_info), GFP_KERNEL);
if (!bus_info_ptr1) {
rc = -ENOMEM;
- goto error_no_hp_slot;
+ goto error_no_slot;
}
bus_info_ptr1->slot_min = slot_ptr->slot_num;
bus_info_ptr1->slot_max = slot_ptr->slot_num;
(hpc_ptr->u.isa_ctlr.io_end - hpc_ptr->u.isa_ctlr.io_start + 1),
"ibmphp")) {
rc = -ENODEV;
- goto error_no_hp_slot;
+ goto error_no_slot;
}
hpc_ptr->irq = readb(io_mem + addr + 4);
addr += 5;
break;
default:
rc = -ENODEV;
- goto error_no_hp_slot;
+ goto error_no_slot;
}
//reorganize chassis' linked list
// register slots with hpc core as well as create linked list of ibm slot
for (index = 0; index < hpc_ptr->slot_count; index++) {
-
- hp_slot_ptr = kzalloc(sizeof(*hp_slot_ptr), GFP_KERNEL);
- if (!hp_slot_ptr) {
- rc = -ENOMEM;
- goto error_no_hp_slot;
- }
-
- hp_slot_ptr->info = kzalloc(sizeof(struct hotplug_slot_info), GFP_KERNEL);
- if (!hp_slot_ptr->info) {
- rc = -ENOMEM;
- goto error_no_hp_info;
- }
-
tmp_slot = kzalloc(sizeof(*tmp_slot), GFP_KERNEL);
if (!tmp_slot) {
rc = -ENOMEM;
bus_info_ptr1 = ibmphp_find_same_bus_num(hpc_ptr->slots[index].slot_bus_num);
if (!bus_info_ptr1) {
- kfree(tmp_slot);
rc = -ENODEV;
goto error;
}
tmp_slot->ctlr_index = hpc_ptr->slots[index].ctl_index;
tmp_slot->number = hpc_ptr->slots[index].slot_num;
- tmp_slot->hotplug_slot = hp_slot_ptr;
-
- hp_slot_ptr->private = tmp_slot;
- rc = fillslotinfo(hp_slot_ptr);
+ rc = fillslotinfo(&tmp_slot->hotplug_slot);
if (rc)
goto error;
- rc = ibmphp_init_devno((struct slot **) &hp_slot_ptr->private);
+ rc = ibmphp_init_devno(&tmp_slot);
if (rc)
goto error;
- hp_slot_ptr->ops = &ibmphp_hotplug_slot_ops;
+ tmp_slot->hotplug_slot.ops = &ibmphp_hotplug_slot_ops;
// end of registering ibm slot with hotplug core
- list_add(&((struct slot *)(hp_slot_ptr->private))->ibm_slot_list, &ibmphp_slot_head);
+ list_add(&tmp_slot->ibm_slot_list, &ibmphp_slot_head);
}
print_bus_info();
list_for_each_entry(tmp_slot, &ibmphp_slot_head, ibm_slot_list) {
snprintf(name, SLOT_NAME_SIZE, "%s", create_file_name(tmp_slot));
- pci_hp_register(tmp_slot->hotplug_slot,
+ pci_hp_register(&tmp_slot->hotplug_slot,
pci_find_bus(0, tmp_slot->bus), tmp_slot->device, name);
}
return 0;
error:
- kfree(hp_slot_ptr->private);
+ kfree(tmp_slot);
error_no_slot:
- kfree(hp_slot_ptr->info);
-error_no_hp_info:
- kfree(hp_slot_ptr);
-error_no_hp_slot:
free_ebda_hpc(hpc_ptr);
error_no_hpc:
iounmap(io_mem);
#define GET_STATUS(name, type) \
static int get_##name(struct hotplug_slot *slot, type *value) \
{ \
- struct hotplug_slot_ops *ops = slot->ops; \
+ const struct hotplug_slot_ops *ops = slot->ops; \
int retval = 0; \
- if (!try_module_get(ops->owner)) \
+ if (!try_module_get(slot->owner)) \
return -ENODEV; \
if (ops->get_##name) \
retval = ops->get_##name(slot, value); \
- else \
- *value = slot->info->name; \
- module_put(ops->owner); \
+ module_put(slot->owner); \
return retval; \
}
power = (u8)(lpower & 0xff);
dbg("power = %d\n", power);
- if (!try_module_get(slot->ops->owner)) {
+ if (!try_module_get(slot->owner)) {
retval = -ENODEV;
goto exit;
}
err("Illegal value specified for power\n");
retval = -EINVAL;
}
- module_put(slot->ops->owner);
+ module_put(slot->owner);
exit:
if (retval)
static ssize_t attention_write_file(struct pci_slot *pci_slot, const char *buf,
size_t count)
{
- struct hotplug_slot_ops *ops = pci_slot->hotplug->ops;
+ struct hotplug_slot *slot = pci_slot->hotplug;
+ const struct hotplug_slot_ops *ops = slot->ops;
unsigned long lattention;
u8 attention;
int retval = 0;
attention = (u8)(lattention & 0xff);
dbg(" - attention = %d\n", attention);
- if (!try_module_get(ops->owner)) {
+ if (!try_module_get(slot->owner)) {
retval = -ENODEV;
goto exit;
}
if (ops->set_attention_status)
- retval = ops->set_attention_status(pci_slot->hotplug, attention);
- module_put(ops->owner);
+ retval = ops->set_attention_status(slot, attention);
+ module_put(slot->owner);
exit:
if (retval)
test = (u32)(ltest & 0xffffffff);
dbg("test = %d\n", test);
- if (!try_module_get(slot->ops->owner)) {
+ if (!try_module_get(slot->owner)) {
retval = -ENODEV;
goto exit;
}
if (slot->ops->hardware_test)
retval = slot->ops->hardware_test(slot, test);
- module_put(slot->ops->owner);
+ module_put(slot->owner);
exit:
if (retval)
if (slot == NULL)
return -ENODEV;
- if ((slot->info == NULL) || (slot->ops == NULL))
+ if (slot->ops == NULL)
return -EINVAL;
- slot->ops->owner = owner;
- slot->ops->mod_name = mod_name;
+ slot->owner = owner;
+ slot->mod_name = mod_name;
/*
* No problems if we call this interface from both ACPI_PCI_SLOT
}
EXPORT_SYMBOL_GPL(pci_hp_destroy);
-/**
- * pci_hp_change_slot_info - changes the slot's information structure in the core
- * @slot: pointer to the slot whose info has changed
- * @info: pointer to the info copy into the slot's info structure
- *
- * @slot must have been registered with the pci
- * hotplug subsystem previously with a call to pci_hp_register().
- *
- * Returns 0 if successful, anything else for an error.
- */
-int pci_hp_change_slot_info(struct hotplug_slot *slot,
- struct hotplug_slot_info *info)
-{
- if (!slot || !info)
- return -ENODEV;
-
- memcpy(slot->info, info, sizeof(struct hotplug_slot_info));
-
- return 0;
-}
-EXPORT_SYMBOL_GPL(pci_hp_change_slot_info);
-
static int __init pci_hotplug_init(void)
{
int result;
#include <linux/pci.h>
#include <linux/pci_hotplug.h>
#include <linux/delay.h>
-#include <linux/sched/signal.h> /* signal_pending() */
#include <linux/mutex.h>
#include <linux/rwsem.h>
#include <linux/workqueue.h>
#define SLOT_NAME_SIZE 10
-/**
- * struct slot - PCIe hotplug slot
- * @state: current state machine position
- * @ctrl: pointer to the slot's controller structure
- * @hotplug_slot: pointer to the structure registered with the PCI hotplug core
- * @work: work item to turn the slot on or off after 5 seconds in response to
- * an Attention Button press
- * @lock: protects reads and writes of @state;
- * protects scheduling, execution and cancellation of @work
- */
-struct slot {
- u8 state;
- struct controller *ctrl;
- struct hotplug_slot *hotplug_slot;
- struct delayed_work work;
- struct mutex lock;
-};
-
/**
* struct controller - PCIe hotplug controller
- * @ctrl_lock: serializes writes to the Slot Control register
* @pcie: pointer to the controller's PCIe port service device
- * @reset_lock: prevents access to the Data Link Layer Link Active bit in the
- * Link Status register and to the Presence Detect State bit in the Slot
- * Status register during a slot reset which may cause them to flap
- * @slot: pointer to the controller's slot structure
- * @queue: wait queue to wake up on reception of a Command Completed event,
- * used for synchronous writes to the Slot Control register
* @slot_cap: cached copy of the Slot Capabilities register
* @slot_ctrl: cached copy of the Slot Control register
- * @poll_thread: thread to poll for slot events if no IRQ is available,
- * enabled with pciehp_poll_mode module parameter
+ * @ctrl_lock: serializes writes to the Slot Control register
* @cmd_started: jiffies when the Slot Control register was last written;
* the next write is allowed 1 second later, absent a Command Completed
* interrupt (PCIe r4.0, sec 6.7.3.2)
* @cmd_busy: flag set on Slot Control register write, cleared by IRQ handler
* on reception of a Command Completed event
- * @link_active_reporting: cached copy of Data Link Layer Link Active Reporting
- * Capable bit in Link Capabilities register; if this bit is zero, the
- * Data Link Layer Link Active bit in the Link Status register will never
- * be set and the driver is thus confined to wait 1 second before assuming
- * the link to a hotplugged device is up and accessing it
+ * @queue: wait queue to wake up on reception of a Command Completed event,
+ * used for synchronous writes to the Slot Control register
+ * @pending_events: used by the IRQ handler to save events retrieved from the
+ * Slot Status register for later consumption by the IRQ thread
* @notification_enabled: whether the IRQ was requested successfully
* @power_fault_detected: whether a power fault was detected by the hardware
* that has not yet been cleared by the user
- * @pending_events: used by the IRQ handler to save events retrieved from the
- * Slot Status register for later consumption by the IRQ thread
+ * @poll_thread: thread to poll for slot events if no IRQ is available,
+ * enabled with pciehp_poll_mode module parameter
+ * @state: current state machine position
+ * @state_lock: protects reads and writes of @state;
+ * protects scheduling, execution and cancellation of @button_work
+ * @button_work: work item to turn the slot on or off after 5 seconds
+ * in response to an Attention Button press
+ * @hotplug_slot: structure registered with the PCI hotplug core
+ * @reset_lock: prevents access to the Data Link Layer Link Active bit in the
+ * Link Status register and to the Presence Detect State bit in the Slot
+ * Status register during a slot reset which may cause them to flap
* @request_result: result of last user request submitted to the IRQ thread
* @requester: wait queue to wake up on completion of user request,
* used for synchronous slot enable/disable request via sysfs
+ *
+ * PCIe hotplug has a 1:1 relationship between controller and slot; hence,
+ * unlike other drivers, the two aren't represented by separate structures.
*/
struct controller {
- struct mutex ctrl_lock;
struct pcie_device *pcie;
- struct rw_semaphore reset_lock;
- struct slot *slot;
- wait_queue_head_t queue;
- u32 slot_cap;
- u16 slot_ctrl;
- struct task_struct *poll_thread;
- unsigned long cmd_started; /* jiffies */
+
+ u32 slot_cap; /* capabilities and quirks */
+
+ u16 slot_ctrl; /* control register access */
+ struct mutex ctrl_lock;
+ unsigned long cmd_started;
unsigned int cmd_busy:1;
- unsigned int link_active_reporting:1;
+ wait_queue_head_t queue;
+
+ atomic_t pending_events; /* event handling */
unsigned int notification_enabled:1;
unsigned int power_fault_detected;
- atomic_t pending_events;
+ struct task_struct *poll_thread;
+
+ u8 state; /* state machine */
+ struct mutex state_lock;
+ struct delayed_work button_work;
+
+ struct hotplug_slot hotplug_slot; /* hotplug core interface */
+ struct rw_semaphore reset_lock;
int request_result;
wait_queue_head_t requester;
};
#define NO_CMD_CMPL(ctrl) ((ctrl)->slot_cap & PCI_EXP_SLTCAP_NCCS)
#define PSN(ctrl) (((ctrl)->slot_cap & PCI_EXP_SLTCAP_PSN) >> 19)
-int pciehp_sysfs_enable_slot(struct slot *slot);
-int pciehp_sysfs_disable_slot(struct slot *slot);
void pciehp_request(struct controller *ctrl, int action);
-void pciehp_handle_button_press(struct slot *slot);
-void pciehp_handle_disable_request(struct slot *slot);
-void pciehp_handle_presence_or_link_change(struct slot *slot, u32 events);
-int pciehp_configure_device(struct slot *p_slot);
-void pciehp_unconfigure_device(struct slot *p_slot);
+void pciehp_handle_button_press(struct controller *ctrl);
+void pciehp_handle_disable_request(struct controller *ctrl);
+void pciehp_handle_presence_or_link_change(struct controller *ctrl, u32 events);
+int pciehp_configure_device(struct controller *ctrl);
+void pciehp_unconfigure_device(struct controller *ctrl, bool presence);
void pciehp_queue_pushbutton_work(struct work_struct *work);
struct controller *pcie_init(struct pcie_device *dev);
int pcie_init_notification(struct controller *ctrl);
void pcie_shutdown_notification(struct controller *ctrl);
void pcie_clear_hotplug_events(struct controller *ctrl);
-int pciehp_power_on_slot(struct slot *slot);
-void pciehp_power_off_slot(struct slot *slot);
-void pciehp_get_power_status(struct slot *slot, u8 *status);
-void pciehp_get_attention_status(struct slot *slot, u8 *status);
-
-void pciehp_set_attention_status(struct slot *slot, u8 status);
-void pciehp_get_latch_status(struct slot *slot, u8 *status);
-void pciehp_get_adapter_status(struct slot *slot, u8 *status);
-int pciehp_query_power_fault(struct slot *slot);
-void pciehp_green_led_on(struct slot *slot);
-void pciehp_green_led_off(struct slot *slot);
-void pciehp_green_led_blink(struct slot *slot);
+void pcie_enable_interrupt(struct controller *ctrl);
+void pcie_disable_interrupt(struct controller *ctrl);
+int pciehp_power_on_slot(struct controller *ctrl);
+void pciehp_power_off_slot(struct controller *ctrl);
+void pciehp_get_power_status(struct controller *ctrl, u8 *status);
+
+void pciehp_set_attention_status(struct controller *ctrl, u8 status);
+void pciehp_get_latch_status(struct controller *ctrl, u8 *status);
+int pciehp_query_power_fault(struct controller *ctrl);
+void pciehp_green_led_on(struct controller *ctrl);
+void pciehp_green_led_off(struct controller *ctrl);
+void pciehp_green_led_blink(struct controller *ctrl);
+bool pciehp_card_present(struct controller *ctrl);
+bool pciehp_card_present_or_link_active(struct controller *ctrl);
int pciehp_check_link_status(struct controller *ctrl);
bool pciehp_check_link_active(struct controller *ctrl);
void pciehp_release_ctrl(struct controller *ctrl);
-int pciehp_reset_slot(struct slot *slot, int probe);
+int pciehp_sysfs_enable_slot(struct hotplug_slot *hotplug_slot);
+int pciehp_sysfs_disable_slot(struct hotplug_slot *hotplug_slot);
+int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, int probe);
+int pciehp_get_attention_status(struct hotplug_slot *hotplug_slot, u8 *status);
int pciehp_set_raw_indicator_status(struct hotplug_slot *h_slot, u8 status);
int pciehp_get_raw_indicator_status(struct hotplug_slot *h_slot, u8 *status);
-static inline const char *slot_name(struct slot *slot)
+static inline const char *slot_name(struct controller *ctrl)
+{
+ return hotplug_slot_name(&ctrl->hotplug_slot);
+}
+
+static inline struct controller *to_ctrl(struct hotplug_slot *hotplug_slot)
{
- return hotplug_slot_name(slot->hotplug_slot);
+ return container_of(hotplug_slot, struct controller, hotplug_slot);
}
#endif /* _PCIEHP_H */
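
The to_ctrl() helper above is the classic container_of() idiom that replaces the old hotplug_slot->private back-pointer. A minimal, self-contained user-space sketch of the same idiom (container_of defined by hand; all struct contents hypothetical):

	#include <stddef.h>
	#include <stdio.h>

	#define container_of(ptr, type, member) \
		((type *)((char *)(ptr) - offsetof(type, member)))

	struct hotplug_slot { const void *ops; };

	struct controller {
		unsigned int slot_cap;
		struct hotplug_slot hotplug_slot;	/* embedded, not pointed to */
	};

	int main(void)
	{
		struct controller ctrl = { .slot_cap = 0x42 };
		struct hotplug_slot *hs = &ctrl.hotplug_slot;

		/* recover the containing controller without a ->private pointer */
		struct controller *c = container_of(hs, struct controller, hotplug_slot);

		printf("slot_cap = %#x\n", c->slot_cap);
		return 0;
	}

Because the slot is embedded, its lifetime is tied to the controller's: one allocation, one kfree(), and no dangling ->private pointer.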
#include <linux/types.h>
#include <linux/pci.h>
#include "pciehp.h"
-#include <linux/interrupt.h>
-#include <linux/time.h>
#include "../pci.h"
#define PCIE_MODULE_NAME "pciehp"
static int set_attention_status(struct hotplug_slot *slot, u8 value);
-static int enable_slot(struct hotplug_slot *slot);
-static int disable_slot(struct hotplug_slot *slot);
static int get_power_status(struct hotplug_slot *slot, u8 *value);
-static int get_attention_status(struct hotplug_slot *slot, u8 *value);
static int get_latch_status(struct hotplug_slot *slot, u8 *value);
static int get_adapter_status(struct hotplug_slot *slot, u8 *value);
-static int reset_slot(struct hotplug_slot *slot, int probe);
static int init_slot(struct controller *ctrl)
{
- struct slot *slot = ctrl->slot;
- struct hotplug_slot *hotplug = NULL;
- struct hotplug_slot_info *info = NULL;
- struct hotplug_slot_ops *ops = NULL;
+ struct hotplug_slot_ops *ops;
char name[SLOT_NAME_SIZE];
- int retval = -ENOMEM;
-
- hotplug = kzalloc(sizeof(*hotplug), GFP_KERNEL);
- if (!hotplug)
- goto out;
-
- info = kzalloc(sizeof(*info), GFP_KERNEL);
- if (!info)
- goto out;
+ int retval;
/* Setup hotplug slot ops */
ops = kzalloc(sizeof(*ops), GFP_KERNEL);
if (!ops)
- goto out;
+ return -ENOMEM;
- ops->enable_slot = enable_slot;
- ops->disable_slot = disable_slot;
+ ops->enable_slot = pciehp_sysfs_enable_slot;
+ ops->disable_slot = pciehp_sysfs_disable_slot;
ops->get_power_status = get_power_status;
ops->get_adapter_status = get_adapter_status;
- ops->reset_slot = reset_slot;
+ ops->reset_slot = pciehp_reset_slot;
if (MRL_SENS(ctrl))
ops->get_latch_status = get_latch_status;
if (ATTN_LED(ctrl)) {
- ops->get_attention_status = get_attention_status;
+ ops->get_attention_status = pciehp_get_attention_status;
ops->set_attention_status = set_attention_status;
} else if (ctrl->pcie->port->hotplug_user_indicators) {
ops->get_attention_status = pciehp_get_raw_indicator_status;
}
/* register this slot with the hotplug pci core */
- hotplug->info = info;
- hotplug->private = slot;
- hotplug->ops = ops;
- slot->hotplug_slot = hotplug;
+ ctrl->hotplug_slot.ops = ops;
snprintf(name, SLOT_NAME_SIZE, "%u", PSN(ctrl));
- retval = pci_hp_initialize(hotplug,
+ retval = pci_hp_initialize(&ctrl->hotplug_slot,
ctrl->pcie->port->subordinate, 0, name);
- if (retval)
- ctrl_err(ctrl, "pci_hp_initialize failed: error %d\n", retval);
-out:
if (retval) {
+ ctrl_err(ctrl, "pci_hp_initialize failed: error %d\n", retval);
kfree(ops);
- kfree(info);
- kfree(hotplug);
}
return retval;
}
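
The conversion pattern, repeated for each driver in this series, is: embed struct hotplug_slot in the driver's private structure, point ->ops at an ops table, and register. A hedged sketch of a converted init path (driver name and fields hypothetical; pci_hp_register() per the signature used elsewhere in this series):

	static const struct hotplug_slot_ops my_hotplug_slot_ops = {
		/* enable_slot, disable_slot, ... */
	};

	struct my_slot {
		struct hotplug_slot hotplug_slot;	/* embedded */
		/* driver-private state */
	};

	static int my_init_slot(struct pci_bus *bus, int devnr, const char *name)
	{
		struct my_slot *slot = kzalloc(sizeof(*slot), GFP_KERNEL);
		int ret;

		if (!slot)
			return -ENOMEM;
		slot->hotplug_slot.ops = &my_hotplug_slot_ops;

		ret = pci_hp_register(&slot->hotplug_slot, bus, devnr, name);
		if (ret)
			kfree(slot);	/* single allocation, single error path */
		return ret;
	}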
static void cleanup_slot(struct controller *ctrl)
{
- struct hotplug_slot *hotplug_slot = ctrl->slot->hotplug_slot;
+ struct hotplug_slot *hotplug_slot = &ctrl->hotplug_slot;
pci_hp_destroy(hotplug_slot);
kfree(hotplug_slot->ops);
- kfree(hotplug_slot->info);
- kfree(hotplug_slot);
}
/*
*/
static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status)
{
- struct slot *slot = hotplug_slot->private;
- struct pci_dev *pdev = slot->ctrl->pcie->port;
+ struct controller *ctrl = to_ctrl(hotplug_slot);
+ struct pci_dev *pdev = ctrl->pcie->port;
pci_config_pm_runtime_get(pdev);
- pciehp_set_attention_status(slot, status);
+ pciehp_set_attention_status(ctrl, status);
pci_config_pm_runtime_put(pdev);
return 0;
}
-
-static int enable_slot(struct hotplug_slot *hotplug_slot)
-{
- struct slot *slot = hotplug_slot->private;
-
- return pciehp_sysfs_enable_slot(slot);
-}
-
-
-static int disable_slot(struct hotplug_slot *hotplug_slot)
-{
- struct slot *slot = hotplug_slot->private;
-
- return pciehp_sysfs_disable_slot(slot);
-}
-
static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
- struct slot *slot = hotplug_slot->private;
- struct pci_dev *pdev = slot->ctrl->pcie->port;
+ struct controller *ctrl = to_ctrl(hotplug_slot);
+ struct pci_dev *pdev = ctrl->pcie->port;
pci_config_pm_runtime_get(pdev);
- pciehp_get_power_status(slot, value);
+ pciehp_get_power_status(ctrl, value);
pci_config_pm_runtime_put(pdev);
return 0;
}
-static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value)
-{
- struct slot *slot = hotplug_slot->private;
-
- pciehp_get_attention_status(slot, value);
- return 0;
-}
-
static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
- struct slot *slot = hotplug_slot->private;
- struct pci_dev *pdev = slot->ctrl->pcie->port;
+ struct controller *ctrl = to_ctrl(hotplug_slot);
+ struct pci_dev *pdev = ctrl->pcie->port;
pci_config_pm_runtime_get(pdev);
- pciehp_get_latch_status(slot, value);
+ pciehp_get_latch_status(ctrl, value);
pci_config_pm_runtime_put(pdev);
return 0;
}
static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
- struct slot *slot = hotplug_slot->private;
- struct pci_dev *pdev = slot->ctrl->pcie->port;
+ struct controller *ctrl = to_ctrl(hotplug_slot);
+ struct pci_dev *pdev = ctrl->pcie->port;
pci_config_pm_runtime_get(pdev);
- pciehp_get_adapter_status(slot, value);
+ *value = pciehp_card_present_or_link_active(ctrl);
pci_config_pm_runtime_put(pdev);
return 0;
}
-static int reset_slot(struct hotplug_slot *hotplug_slot, int probe)
-{
- struct slot *slot = hotplug_slot->private;
-
- return pciehp_reset_slot(slot, probe);
-}
-
/**
* pciehp_check_presence() - synthesize event if presence has changed
*
*/
static void pciehp_check_presence(struct controller *ctrl)
{
- struct slot *slot = ctrl->slot;
- u8 occupied;
+ bool occupied;
down_read(&ctrl->reset_lock);
- mutex_lock(&slot->lock);
+ mutex_lock(&ctrl->state_lock);
- pciehp_get_adapter_status(slot, &occupied);
- if ((occupied && (slot->state == OFF_STATE ||
- slot->state == BLINKINGON_STATE)) ||
- (!occupied && (slot->state == ON_STATE ||
- slot->state == BLINKINGOFF_STATE)))
+ occupied = pciehp_card_present_or_link_active(ctrl);
+ if ((occupied && (ctrl->state == OFF_STATE ||
+ ctrl->state == BLINKINGON_STATE)) ||
+ (!occupied && (ctrl->state == ON_STATE ||
+ ctrl->state == BLINKINGOFF_STATE)))
pciehp_request(ctrl, PCI_EXP_SLTSTA_PDC);
- mutex_unlock(&slot->lock);
+ mutex_unlock(&ctrl->state_lock);
up_read(&ctrl->reset_lock);
}
{
int rc;
struct controller *ctrl;
- struct slot *slot;
/* If this is not a "hotplug" service, we have no business here. */
if (dev->service != PCIE_PORT_SERVICE_HP)
}
/* Publish to user space */
- slot = ctrl->slot;
- rc = pci_hp_add(slot->hotplug_slot);
+ rc = pci_hp_add(&ctrl->hotplug_slot);
if (rc) {
ctrl_err(ctrl, "Publication to user space failed (%d)\n", rc);
goto err_out_shutdown_notification;
{
struct controller *ctrl = get_service_data(dev);
- pci_hp_del(ctrl->slot->hotplug_slot);
+ pci_hp_del(&ctrl->hotplug_slot);
pcie_shutdown_notification(ctrl);
cleanup_slot(ctrl);
pciehp_release_ctrl(ctrl);
}
#ifdef CONFIG_PM
+static bool pme_is_native(struct pcie_device *dev)
+{
+ const struct pci_host_bridge *host;
+
+ host = pci_find_host_bridge(dev->port->bus);
+ return pcie_ports_native || host->native_pme;
+}
+
static int pciehp_suspend(struct pcie_device *dev)
{
+ /*
+ * Disable hotplug interrupt so that it does not trigger
+ * immediately when the downstream link goes down.
+ */
+ if (pme_is_native(dev))
+ pcie_disable_interrupt(get_service_data(dev));
+
return 0;
}
static int pciehp_resume_noirq(struct pcie_device *dev)
{
struct controller *ctrl = get_service_data(dev);
- struct slot *slot = ctrl->slot;
/* pci_restore_state() just wrote to the Slot Control register */
ctrl->cmd_started = jiffies;
ctrl->cmd_busy = true;
/* clear spurious events from rediscovery of inserted card */
- if (slot->state == ON_STATE || slot->state == BLINKINGOFF_STATE)
+ if (ctrl->state == ON_STATE || ctrl->state == BLINKINGOFF_STATE)
pcie_clear_hotplug_events(ctrl);
return 0;
{
struct controller *ctrl = get_service_data(dev);
+ if (pme_is_native(dev))
+ pcie_enable_interrupt(ctrl);
+
pciehp_check_presence(ctrl);
return 0;
}
+
+static int pciehp_runtime_resume(struct pcie_device *dev)
+{
+ struct controller *ctrl = get_service_data(dev);
+
+ /* pci_restore_state() just wrote to the Slot Control register */
+ ctrl->cmd_started = jiffies;
+ ctrl->cmd_busy = true;
+
+ /* clear spurious events from rediscovery of inserted card */
+ if ((ctrl->state == ON_STATE || ctrl->state == BLINKINGOFF_STATE) &&
+ pme_is_native(dev))
+ pcie_clear_hotplug_events(ctrl);
+
+ return pciehp_resume(dev);
+}
#endif /* PM */
static struct pcie_port_service_driver hpdriver_portdrv = {
.suspend = pciehp_suspend,
.resume_noirq = pciehp_resume_noirq,
.resume = pciehp_resume,
+ .runtime_suspend = pciehp_suspend,
+ .runtime_resume = pciehp_runtime_resume,
#endif /* PM */
};
-static int __init pcied_init(void)
+int __init pcie_hp_init(void)
{
int retval = 0;
return retval;
}
-device_initcall(pcied_init);
*
*/
-#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/types.h>
-#include <linux/slab.h>
#include <linux/pm_runtime.h>
#include <linux/pci.h>
-#include "../pci.h"
#include "pciehp.h"
/* The following routines constitute the bulk of the
hotplug controller logic
*/
-static void set_slot_off(struct controller *ctrl, struct slot *pslot)
+#define SAFE_REMOVAL true
+#define SURPRISE_REMOVAL false
+
+static void set_slot_off(struct controller *ctrl)
{
/* turn off slot, turn on Amber LED, turn off Green LED if supported*/
if (POWER_CTRL(ctrl)) {
- pciehp_power_off_slot(pslot);
+ pciehp_power_off_slot(ctrl);
/*
* After turning power off, we must wait for at least 1 second
msleep(1000);
}
- pciehp_green_led_off(pslot);
- pciehp_set_attention_status(pslot, 1);
+ pciehp_green_led_off(ctrl);
+ pciehp_set_attention_status(ctrl, 1);
}
/**
* board_added - Called after a board has been added to the system.
- * @p_slot: &slot where board is added
+ * @ctrl: PCIe hotplug controller where board is added
*
* Turns power on for the board.
* Configures board.
*/
-static int board_added(struct slot *p_slot)
+static int board_added(struct controller *ctrl)
{
int retval = 0;
- struct controller *ctrl = p_slot->ctrl;
struct pci_bus *parent = ctrl->pcie->port->subordinate;
if (POWER_CTRL(ctrl)) {
/* Power on slot */
- retval = pciehp_power_on_slot(p_slot);
+ retval = pciehp_power_on_slot(ctrl);
if (retval)
return retval;
}
- pciehp_green_led_blink(p_slot);
+ pciehp_green_led_blink(ctrl);
/* Check link training status */
retval = pciehp_check_link_status(ctrl);
}
/* Check for a power fault */
- if (ctrl->power_fault_detected || pciehp_query_power_fault(p_slot)) {
- ctrl_err(ctrl, "Slot(%s): Power fault\n", slot_name(p_slot));
+ if (ctrl->power_fault_detected || pciehp_query_power_fault(ctrl)) {
+ ctrl_err(ctrl, "Slot(%s): Power fault\n", slot_name(ctrl));
retval = -EIO;
goto err_exit;
}
- retval = pciehp_configure_device(p_slot);
+ retval = pciehp_configure_device(ctrl);
if (retval) {
if (retval != -EEXIST) {
ctrl_err(ctrl, "Cannot add device at %04x:%02x:00\n",
}
}
- pciehp_green_led_on(p_slot);
- pciehp_set_attention_status(p_slot, 0);
+ pciehp_green_led_on(ctrl);
+ pciehp_set_attention_status(ctrl, 0);
return 0;
err_exit:
- set_slot_off(ctrl, p_slot);
+ set_slot_off(ctrl);
return retval;
}
/**
* remove_board - Turns off slot and LEDs
- * @p_slot: slot where board is being removed
+ * @ctrl: PCIe hotplug controller where board is being removed
+ * @safe_removal: whether the board is safely removed (versus surprise removed)
*/
-static void remove_board(struct slot *p_slot)
+static void remove_board(struct controller *ctrl, bool safe_removal)
{
- struct controller *ctrl = p_slot->ctrl;
-
- pciehp_unconfigure_device(p_slot);
+ pciehp_unconfigure_device(ctrl, safe_removal);
if (POWER_CTRL(ctrl)) {
- pciehp_power_off_slot(p_slot);
+ pciehp_power_off_slot(ctrl);
/*
* After turning power off, we must wait for at least 1 second
}
/* turn off Green LED */
- pciehp_green_led_off(p_slot);
+ pciehp_green_led_off(ctrl);
}
-static int pciehp_enable_slot(struct slot *slot);
-static int pciehp_disable_slot(struct slot *slot);
+static int pciehp_enable_slot(struct controller *ctrl);
+static int pciehp_disable_slot(struct controller *ctrl, bool safe_removal);
void pciehp_request(struct controller *ctrl, int action)
{
void pciehp_queue_pushbutton_work(struct work_struct *work)
{
- struct slot *p_slot = container_of(work, struct slot, work.work);
- struct controller *ctrl = p_slot->ctrl;
+ struct controller *ctrl = container_of(work, struct controller,
+ button_work.work);
- mutex_lock(&p_slot->lock);
- switch (p_slot->state) {
+ mutex_lock(&ctrl->state_lock);
+ switch (ctrl->state) {
case BLINKINGOFF_STATE:
pciehp_request(ctrl, DISABLE_SLOT);
break;
default:
break;
}
- mutex_unlock(&p_slot->lock);
+ mutex_unlock(&ctrl->state_lock);
}
-void pciehp_handle_button_press(struct slot *p_slot)
+void pciehp_handle_button_press(struct controller *ctrl)
{
- struct controller *ctrl = p_slot->ctrl;
-
- mutex_lock(&p_slot->lock);
- switch (p_slot->state) {
+ mutex_lock(&ctrl->state_lock);
+ switch (ctrl->state) {
case OFF_STATE:
case ON_STATE:
- if (p_slot->state == ON_STATE) {
- p_slot->state = BLINKINGOFF_STATE;
+ if (ctrl->state == ON_STATE) {
+ ctrl->state = BLINKINGOFF_STATE;
ctrl_info(ctrl, "Slot(%s): Powering off due to button press\n",
- slot_name(p_slot));
+ slot_name(ctrl));
} else {
- p_slot->state = BLINKINGON_STATE;
+ ctrl->state = BLINKINGON_STATE;
ctrl_info(ctrl, "Slot(%s) Powering on due to button press\n",
- slot_name(p_slot));
+ slot_name(ctrl));
}
/* blink green LED and turn off amber */
- pciehp_green_led_blink(p_slot);
- pciehp_set_attention_status(p_slot, 0);
- schedule_delayed_work(&p_slot->work, 5 * HZ);
+ pciehp_green_led_blink(ctrl);
+ pciehp_set_attention_status(ctrl, 0);
+ schedule_delayed_work(&ctrl->button_work, 5 * HZ);
break;
case BLINKINGOFF_STATE:
case BLINKINGON_STATE:
* press the attention again before the 5 sec. limit
* expires to cancel hot-add or hot-remove
*/
- ctrl_info(ctrl, "Slot(%s): Button cancel\n", slot_name(p_slot));
- cancel_delayed_work(&p_slot->work);
- if (p_slot->state == BLINKINGOFF_STATE) {
- p_slot->state = ON_STATE;
- pciehp_green_led_on(p_slot);
+ ctrl_info(ctrl, "Slot(%s): Button cancel\n", slot_name(ctrl));
+ cancel_delayed_work(&ctrl->button_work);
+ if (ctrl->state == BLINKINGOFF_STATE) {
+ ctrl->state = ON_STATE;
+ pciehp_green_led_on(ctrl);
} else {
- p_slot->state = OFF_STATE;
- pciehp_green_led_off(p_slot);
+ ctrl->state = OFF_STATE;
+ pciehp_green_led_off(ctrl);
}
- pciehp_set_attention_status(p_slot, 0);
+ pciehp_set_attention_status(ctrl, 0);
ctrl_info(ctrl, "Slot(%s): Action canceled due to button press\n",
- slot_name(p_slot));
+ slot_name(ctrl));
break;
default:
ctrl_err(ctrl, "Slot(%s): Ignoring invalid state %#x\n",
- slot_name(p_slot), p_slot->state);
+ slot_name(ctrl), ctrl->state);
break;
}
- mutex_unlock(&p_slot->lock);
+ mutex_unlock(&ctrl->state_lock);
}
-void pciehp_handle_disable_request(struct slot *slot)
+void pciehp_handle_disable_request(struct controller *ctrl)
{
- struct controller *ctrl = slot->ctrl;
-
- mutex_lock(&slot->lock);
- switch (slot->state) {
+ mutex_lock(&ctrl->state_lock);
+ switch (ctrl->state) {
case BLINKINGON_STATE:
case BLINKINGOFF_STATE:
- cancel_delayed_work(&slot->work);
+ cancel_delayed_work(&ctrl->button_work);
break;
}
- slot->state = POWEROFF_STATE;
- mutex_unlock(&slot->lock);
+ ctrl->state = POWEROFF_STATE;
+ mutex_unlock(&ctrl->state_lock);
- ctrl->request_result = pciehp_disable_slot(slot);
+ ctrl->request_result = pciehp_disable_slot(ctrl, SAFE_REMOVAL);
}
-void pciehp_handle_presence_or_link_change(struct slot *slot, u32 events)
+void pciehp_handle_presence_or_link_change(struct controller *ctrl, u32 events)
{
- struct controller *ctrl = slot->ctrl;
- bool link_active;
- u8 present;
+ bool present, link_active;
/*
* If the slot is on and presence or link has changed, turn it off.
* Even if it's occupied again, we cannot assume the card is the same.
*/
- mutex_lock(&slot->lock);
- switch (slot->state) {
+ mutex_lock(&ctrl->state_lock);
+ switch (ctrl->state) {
case BLINKINGOFF_STATE:
- cancel_delayed_work(&slot->work);
+ cancel_delayed_work(&ctrl->button_work);
/* fall through */
case ON_STATE:
- slot->state = POWEROFF_STATE;
- mutex_unlock(&slot->lock);
+ ctrl->state = POWEROFF_STATE;
+ mutex_unlock(&ctrl->state_lock);
if (events & PCI_EXP_SLTSTA_DLLSC)
ctrl_info(ctrl, "Slot(%s): Link Down\n",
- slot_name(slot));
+ slot_name(ctrl));
if (events & PCI_EXP_SLTSTA_PDC)
ctrl_info(ctrl, "Slot(%s): Card not present\n",
- slot_name(slot));
- pciehp_disable_slot(slot);
+ slot_name(ctrl));
+ pciehp_disable_slot(ctrl, SURPRISE_REMOVAL);
break;
default:
- mutex_unlock(&slot->lock);
+ mutex_unlock(&ctrl->state_lock);
break;
}
/* Turn the slot on if it's occupied or link is up */
- mutex_lock(&slot->lock);
- pciehp_get_adapter_status(slot, &present);
+ mutex_lock(&ctrl->state_lock);
+ present = pciehp_card_present(ctrl);
link_active = pciehp_check_link_active(ctrl);
if (!present && !link_active) {
- mutex_unlock(&slot->lock);
+ mutex_unlock(&ctrl->state_lock);
return;
}
- switch (slot->state) {
+ switch (ctrl->state) {
case BLINKINGON_STATE:
- cancel_delayed_work(&slot->work);
+ cancel_delayed_work(&ctrl->button_work);
/* fall through */
case OFF_STATE:
- slot->state = POWERON_STATE;
- mutex_unlock(&slot->lock);
+ ctrl->state = POWERON_STATE;
+ mutex_unlock(&ctrl->state_lock);
if (present)
ctrl_info(ctrl, "Slot(%s): Card present\n",
- slot_name(slot));
+ slot_name(ctrl));
if (link_active)
ctrl_info(ctrl, "Slot(%s): Link Up\n",
- slot_name(slot));
- ctrl->request_result = pciehp_enable_slot(slot);
+ slot_name(ctrl));
+ ctrl->request_result = pciehp_enable_slot(ctrl);
break;
default:
- mutex_unlock(&slot->lock);
+ mutex_unlock(&ctrl->state_lock);
break;
}
}
-static int __pciehp_enable_slot(struct slot *p_slot)
+static int __pciehp_enable_slot(struct controller *ctrl)
{
u8 getstatus = 0;
- struct controller *ctrl = p_slot->ctrl;
- pciehp_get_adapter_status(p_slot, &getstatus);
- if (!getstatus) {
- ctrl_info(ctrl, "Slot(%s): No adapter\n", slot_name(p_slot));
- return -ENODEV;
- }
- if (MRL_SENS(p_slot->ctrl)) {
- pciehp_get_latch_status(p_slot, &getstatus);
+ if (MRL_SENS(ctrl)) {
+ pciehp_get_latch_status(ctrl, &getstatus);
if (getstatus) {
ctrl_info(ctrl, "Slot(%s): Latch open\n",
- slot_name(p_slot));
+ slot_name(ctrl));
return -ENODEV;
}
}
- if (POWER_CTRL(p_slot->ctrl)) {
- pciehp_get_power_status(p_slot, &getstatus);
+ if (POWER_CTRL(ctrl)) {
+ pciehp_get_power_status(ctrl, &getstatus);
if (getstatus) {
ctrl_info(ctrl, "Slot(%s): Already enabled\n",
- slot_name(p_slot));
+ slot_name(ctrl));
return 0;
}
}
- return board_added(p_slot);
+ return board_added(ctrl);
}
-static int pciehp_enable_slot(struct slot *slot)
+static int pciehp_enable_slot(struct controller *ctrl)
{
- struct controller *ctrl = slot->ctrl;
int ret;
pm_runtime_get_sync(&ctrl->pcie->port->dev);
- ret = __pciehp_enable_slot(slot);
+ ret = __pciehp_enable_slot(ctrl);
if (ret && ATTN_BUTTN(ctrl))
- pciehp_green_led_off(slot); /* may be blinking */
+ pciehp_green_led_off(ctrl); /* may be blinking */
pm_runtime_put(&ctrl->pcie->port->dev);
- mutex_lock(&slot->lock);
- slot->state = ret ? OFF_STATE : ON_STATE;
- mutex_unlock(&slot->lock);
+ mutex_lock(&ctrl->state_lock);
+ ctrl->state = ret ? OFF_STATE : ON_STATE;
+ mutex_unlock(&ctrl->state_lock);
return ret;
}
-static int __pciehp_disable_slot(struct slot *p_slot)
+static int __pciehp_disable_slot(struct controller *ctrl, bool safe_removal)
{
u8 getstatus = 0;
- struct controller *ctrl = p_slot->ctrl;
- if (POWER_CTRL(p_slot->ctrl)) {
- pciehp_get_power_status(p_slot, &getstatus);
+ if (POWER_CTRL(ctrl)) {
+ pciehp_get_power_status(ctrl, &getstatus);
if (!getstatus) {
ctrl_info(ctrl, "Slot(%s): Already disabled\n",
- slot_name(p_slot));
+ slot_name(ctrl));
return -EINVAL;
}
}
- remove_board(p_slot);
+ remove_board(ctrl, safe_removal);
return 0;
}
-static int pciehp_disable_slot(struct slot *slot)
+static int pciehp_disable_slot(struct controller *ctrl, bool safe_removal)
{
- struct controller *ctrl = slot->ctrl;
int ret;
pm_runtime_get_sync(&ctrl->pcie->port->dev);
- ret = __pciehp_disable_slot(slot);
+ ret = __pciehp_disable_slot(ctrl, safe_removal);
pm_runtime_put(&ctrl->pcie->port->dev);
- mutex_lock(&slot->lock);
- slot->state = OFF_STATE;
- mutex_unlock(&slot->lock);
+ mutex_lock(&ctrl->state_lock);
+ ctrl->state = OFF_STATE;
+ mutex_unlock(&ctrl->state_lock);
return ret;
}
-int pciehp_sysfs_enable_slot(struct slot *p_slot)
+int pciehp_sysfs_enable_slot(struct hotplug_slot *hotplug_slot)
{
- struct controller *ctrl = p_slot->ctrl;
+ struct controller *ctrl = to_ctrl(hotplug_slot);
- mutex_lock(&p_slot->lock);
- switch (p_slot->state) {
+ mutex_lock(&ctrl->state_lock);
+ switch (ctrl->state) {
case BLINKINGON_STATE:
case OFF_STATE:
- mutex_unlock(&p_slot->lock);
+ mutex_unlock(&ctrl->state_lock);
/*
* The IRQ thread becomes a no-op if the user pulls out the
* card before the thread wakes up, so initialize to -ENODEV.
return ctrl->request_result;
case POWERON_STATE:
ctrl_info(ctrl, "Slot(%s): Already in powering on state\n",
- slot_name(p_slot));
+ slot_name(ctrl));
break;
case BLINKINGOFF_STATE:
case ON_STATE:
case POWEROFF_STATE:
ctrl_info(ctrl, "Slot(%s): Already enabled\n",
- slot_name(p_slot));
+ slot_name(ctrl));
break;
default:
ctrl_err(ctrl, "Slot(%s): Invalid state %#x\n",
- slot_name(p_slot), p_slot->state);
+ slot_name(ctrl), ctrl->state);
break;
}
- mutex_unlock(&p_slot->lock);
+ mutex_unlock(&ctrl->state_lock);
return -ENODEV;
}
-int pciehp_sysfs_disable_slot(struct slot *p_slot)
+int pciehp_sysfs_disable_slot(struct hotplug_slot *hotplug_slot)
{
- struct controller *ctrl = p_slot->ctrl;
+ struct controller *ctrl = to_ctrl(hotplug_slot);
- mutex_lock(&p_slot->lock);
- switch (p_slot->state) {
+ mutex_lock(&ctrl->state_lock);
+ switch (ctrl->state) {
case BLINKINGOFF_STATE:
case ON_STATE:
- mutex_unlock(&p_slot->lock);
+ mutex_unlock(&ctrl->state_lock);
pciehp_request(ctrl, DISABLE_SLOT);
wait_event(ctrl->requester,
!atomic_read(&ctrl->pending_events));
return ctrl->request_result;
case POWEROFF_STATE:
ctrl_info(ctrl, "Slot(%s): Already in powering off state\n",
- slot_name(p_slot));
+ slot_name(ctrl));
break;
case BLINKINGON_STATE:
case OFF_STATE:
case POWERON_STATE:
ctrl_info(ctrl, "Slot(%s): Already disabled\n",
- slot_name(p_slot));
+ slot_name(ctrl));
break;
default:
ctrl_err(ctrl, "Slot(%s): Invalid state %#x\n",
- slot_name(p_slot), p_slot->state);
+ slot_name(ctrl), ctrl->state);
break;
}
- mutex_unlock(&p_slot->lock);
+ mutex_unlock(&ctrl->state_lock);
return -ENODEV;
}
*/
#include <linux/kernel.h>
-#include <linux/module.h>
#include <linux/types.h>
-#include <linux/signal.h>
#include <linux/jiffies.h>
#include <linux/kthread.h>
#include <linux/pci.h>
#include <linux/pm_runtime.h>
#include <linux/interrupt.h>
-#include <linux/time.h>
#include <linux/slab.h>
#include "../pci.h"
if (pciehp_poll_mode) {
ctrl->poll_thread = kthread_run(&pciehp_poll, ctrl,
"pciehp_poll-%s",
- slot_name(ctrl->slot));
+ slot_name(ctrl));
return PTR_ERR_OR_ZERO(ctrl->poll_thread);
}
return ret;
}
-static void pcie_wait_link_active(struct controller *ctrl)
-{
- struct pci_dev *pdev = ctrl_dev(ctrl);
-
- pcie_wait_for_link(pdev, true);
-}
-
static bool pci_bus_check_dev(struct pci_bus *bus, int devfn)
{
u32 l;
bool found;
u16 lnk_status;
- /*
- * Data Link Layer Link Active Reporting must be capable for
- * hot-plug capable downstream port. But old controller might
- * not implement it. In this case, we wait for 1000 ms.
- */
- if (ctrl->link_active_reporting)
- pcie_wait_link_active(ctrl);
- else
- msleep(1000);
+ if (!pcie_wait_for_link(pdev, true))
+ return -1;
- /* wait 100ms before read pci conf, and try in 1s */
- msleep(100);
found = pci_bus_check_dev(ctrl->pcie->port->subordinate,
PCI_DEVFN(0, 0));
int pciehp_get_raw_indicator_status(struct hotplug_slot *hotplug_slot,
u8 *status)
{
- struct slot *slot = hotplug_slot->private;
- struct pci_dev *pdev = ctrl_dev(slot->ctrl);
+ struct controller *ctrl = to_ctrl(hotplug_slot);
+ struct pci_dev *pdev = ctrl_dev(ctrl);
u16 slot_ctrl;
pci_config_pm_runtime_get(pdev);
return 0;
}
-void pciehp_get_attention_status(struct slot *slot, u8 *status)
+int pciehp_get_attention_status(struct hotplug_slot *hotplug_slot, u8 *status)
{
- struct controller *ctrl = slot->ctrl;
+ struct controller *ctrl = to_ctrl(hotplug_slot);
struct pci_dev *pdev = ctrl_dev(ctrl);
u16 slot_ctrl;
*status = 0xFF;
break;
}
+
+ return 0;
}
-void pciehp_get_power_status(struct slot *slot, u8 *status)
+void pciehp_get_power_status(struct controller *ctrl, u8 *status)
{
- struct controller *ctrl = slot->ctrl;
struct pci_dev *pdev = ctrl_dev(ctrl);
u16 slot_ctrl;
}
}
-void pciehp_get_latch_status(struct slot *slot, u8 *status)
+void pciehp_get_latch_status(struct controller *ctrl, u8 *status)
{
- struct pci_dev *pdev = ctrl_dev(slot->ctrl);
+ struct pci_dev *pdev = ctrl_dev(ctrl);
u16 slot_status;
pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status);
*status = !!(slot_status & PCI_EXP_SLTSTA_MRLSS);
}
-void pciehp_get_adapter_status(struct slot *slot, u8 *status)
+bool pciehp_card_present(struct controller *ctrl)
{
- struct pci_dev *pdev = ctrl_dev(slot->ctrl);
+ struct pci_dev *pdev = ctrl_dev(ctrl);
u16 slot_status;
pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status);
- *status = !!(slot_status & PCI_EXP_SLTSTA_PDS);
+ return slot_status & PCI_EXP_SLTSTA_PDS;
}
-int pciehp_query_power_fault(struct slot *slot)
+/**
+ * pciehp_card_present_or_link_active() - whether a given slot is occupied
+ * @ctrl: PCIe hotplug controller
+ *
+ * Unlike pciehp_card_present(), which determines presence solely from the
+ * Presence Detect State bit, this helper also returns true if the Link Active
+ * bit is set. This is a concession to broken hotplug ports which hardwire
+ * Presence Detect State to zero, such as Wilocity's [1ae9:0200].
+ */
+bool pciehp_card_present_or_link_active(struct controller *ctrl)
+{
+ return pciehp_card_present(ctrl) || pciehp_check_link_active(ctrl);
+}
+
+int pciehp_query_power_fault(struct controller *ctrl)
{
- struct pci_dev *pdev = ctrl_dev(slot->ctrl);
+ struct pci_dev *pdev = ctrl_dev(ctrl);
u16 slot_status;
pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status);
int pciehp_set_raw_indicator_status(struct hotplug_slot *hotplug_slot,
u8 status)
{
- struct slot *slot = hotplug_slot->private;
- struct controller *ctrl = slot->ctrl;
+ struct controller *ctrl = to_ctrl(hotplug_slot);
struct pci_dev *pdev = ctrl_dev(ctrl);
pci_config_pm_runtime_get(pdev);
return 0;
}
-void pciehp_set_attention_status(struct slot *slot, u8 value)
+void pciehp_set_attention_status(struct controller *ctrl, u8 value)
{
- struct controller *ctrl = slot->ctrl;
u16 slot_cmd;
if (!ATTN_LED(ctrl))
pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_cmd);
}
-void pciehp_green_led_on(struct slot *slot)
+void pciehp_green_led_on(struct controller *ctrl)
{
- struct controller *ctrl = slot->ctrl;
-
if (!PWR_LED(ctrl))
return;
PCI_EXP_SLTCTL_PWR_IND_ON);
}
-void pciehp_green_led_off(struct slot *slot)
+void pciehp_green_led_off(struct controller *ctrl)
{
- struct controller *ctrl = slot->ctrl;
-
if (!PWR_LED(ctrl))
return;
PCI_EXP_SLTCTL_PWR_IND_OFF);
}
-void pciehp_green_led_blink(struct slot *slot)
+void pciehp_green_led_blink(struct controller *ctrl)
{
- struct controller *ctrl = slot->ctrl;
-
if (!PWR_LED(ctrl))
return;
PCI_EXP_SLTCTL_PWR_IND_BLINK);
}
-int pciehp_power_on_slot(struct slot *slot)
+int pciehp_power_on_slot(struct controller *ctrl)
{
- struct controller *ctrl = slot->ctrl;
struct pci_dev *pdev = ctrl_dev(ctrl);
u16 slot_status;
int retval;
return retval;
}
-void pciehp_power_off_slot(struct slot *slot)
+void pciehp_power_off_slot(struct controller *ctrl)
{
- struct controller *ctrl = slot->ctrl;
-
pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_OFF, PCI_EXP_SLTCTL_PCC);
ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL,
u16 status, events;
/*
- * Interrupts only occur in D3hot or shallower (PCIe r4.0, sec 6.7.3.4).
+ * Interrupts only occur in D3hot or shallower and only if enabled
+ * in the Slot Control register (PCIe r4.0, sec 6.7.3.4).
*/
- if (pdev->current_state == PCI_D3cold)
+ if (pdev->current_state == PCI_D3cold ||
+ (!(ctrl->slot_ctrl & PCI_EXP_SLTCTL_HPIE) && !pciehp_poll_mode))
return IRQ_NONE;
/*
{
struct controller *ctrl = (struct controller *)dev_id;
struct pci_dev *pdev = ctrl_dev(ctrl);
- struct slot *slot = ctrl->slot;
irqreturn_t ret;
u32 events;
/* Check Attention Button Pressed */
if (events & PCI_EXP_SLTSTA_ABP) {
ctrl_info(ctrl, "Slot(%s): Attention button pressed\n",
- slot_name(slot));
- pciehp_handle_button_press(slot);
+ slot_name(ctrl));
+ pciehp_handle_button_press(ctrl);
}
/* Check Power Fault Detected */
if ((events & PCI_EXP_SLTSTA_PFD) && !ctrl->power_fault_detected) {
ctrl->power_fault_detected = 1;
- ctrl_err(ctrl, "Slot(%s): Power fault\n", slot_name(slot));
- pciehp_set_attention_status(slot, 1);
- pciehp_green_led_off(slot);
+ ctrl_err(ctrl, "Slot(%s): Power fault\n", slot_name(ctrl));
+ pciehp_set_attention_status(ctrl, 1);
+ pciehp_green_led_off(ctrl);
}
/*
*/
down_read(&ctrl->reset_lock);
if (events & DISABLE_SLOT)
- pciehp_handle_disable_request(slot);
+ pciehp_handle_disable_request(ctrl);
else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC))
- pciehp_handle_presence_or_link_change(slot, events);
+ pciehp_handle_presence_or_link_change(ctrl, events);
up_read(&ctrl->reset_lock);
pci_config_pm_runtime_put(pdev);
PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC);
}
+void pcie_enable_interrupt(struct controller *ctrl)
+{
+ pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_HPIE, PCI_EXP_SLTCTL_HPIE);
+}
+
+void pcie_disable_interrupt(struct controller *ctrl)
+{
+ pcie_write_cmd(ctrl, 0, PCI_EXP_SLTCTL_HPIE);
+}
+
/*
* pciehp has a 1:1 bus:slot relationship so we ultimately want a secondary
* bus reset of the bridge, but at the same time we want to ensure that it is
* momentarily, if we see that they could interfere. Also, clear any spurious
* events after.
*/
-int pciehp_reset_slot(struct slot *slot, int probe)
+int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, int probe)
{
- struct controller *ctrl = slot->ctrl;
+ struct controller *ctrl = to_ctrl(hotplug_slot);
struct pci_dev *pdev = ctrl_dev(ctrl);
u16 stat_mask = 0, ctrl_mask = 0;
int rc;
}
}
-static int pcie_init_slot(struct controller *ctrl)
-{
- struct pci_bus *subordinate = ctrl_dev(ctrl)->subordinate;
- struct slot *slot;
-
- slot = kzalloc(sizeof(*slot), GFP_KERNEL);
- if (!slot)
- return -ENOMEM;
-
- down_read(&pci_bus_sem);
- slot->state = list_empty(&subordinate->devices) ? OFF_STATE : ON_STATE;
- up_read(&pci_bus_sem);
-
- slot->ctrl = ctrl;
- mutex_init(&slot->lock);
- INIT_DELAYED_WORK(&slot->work, pciehp_queue_pushbutton_work);
- ctrl->slot = slot;
- return 0;
-}
-
-static void pcie_cleanup_slot(struct controller *ctrl)
-{
- struct slot *slot = ctrl->slot;
-
- cancel_delayed_work_sync(&slot->work);
- kfree(slot);
-}
-
static inline void dbg_ctrl(struct controller *ctrl)
{
struct pci_dev *pdev = ctrl->pcie->port;
{
struct controller *ctrl;
u32 slot_cap, link_cap;
- u8 occupied, poweron;
+ u8 poweron;
struct pci_dev *pdev = dev->port;
+ struct pci_bus *subordinate = pdev->subordinate;
ctrl = kzalloc(sizeof(*ctrl), GFP_KERNEL);
if (!ctrl)
- goto abort;
+ return NULL;
ctrl->pcie = dev;
pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP, &slot_cap);
ctrl->slot_cap = slot_cap;
mutex_init(&ctrl->ctrl_lock);
+ mutex_init(&ctrl->state_lock);
init_rwsem(&ctrl->reset_lock);
init_waitqueue_head(&ctrl->requester);
init_waitqueue_head(&ctrl->queue);
+ INIT_DELAYED_WORK(&ctrl->button_work, pciehp_queue_pushbutton_work);
dbg_ctrl(ctrl);
+ down_read(&pci_bus_sem);
+ ctrl->state = list_empty(&subordinate->devices) ? OFF_STATE : ON_STATE;
+ up_read(&pci_bus_sem);
+
/* Check if Data Link Layer Link Active Reporting is implemented */
pcie_capability_read_dword(pdev, PCI_EXP_LNKCAP, &link_cap);
- if (link_cap & PCI_EXP_LNKCAP_DLLLARC)
- ctrl->link_active_reporting = 1;
/* Clear all remaining event bits in Slot Status register. */
pcie_capability_write_word(pdev, PCI_EXP_SLTSTA,
FLAG(link_cap, PCI_EXP_LNKCAP_DLLLARC),
pdev->broken_cmd_compl ? " (with Cmd Compl erratum)" : "");
- if (pcie_init_slot(ctrl))
- goto abort_ctrl;
-
/*
* If empty slot's power status is on, turn power off. The IRQ isn't
* requested yet, so avoid triggering a notification with this command.
*/
if (POWER_CTRL(ctrl)) {
- pciehp_get_adapter_status(ctrl->slot, &occupied);
- pciehp_get_power_status(ctrl->slot, &poweron);
- if (!occupied && poweron) {
+ pciehp_get_power_status(ctrl, &poweron);
+ if (!pciehp_card_present_or_link_active(ctrl) && poweron) {
pcie_disable_notification(ctrl);
- pciehp_power_off_slot(ctrl->slot);
+ pciehp_power_off_slot(ctrl);
}
}
return ctrl;
-
-abort_ctrl:
- kfree(ctrl);
-abort:
- return NULL;
}
void pciehp_release_ctrl(struct controller *ctrl)
{
- pcie_cleanup_slot(ctrl);
+ cancel_delayed_work_sync(&ctrl->button_work);
kfree(ctrl);
}
*
*/
-#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/pci.h>
#include "../pci.h"
#include "pciehp.h"
-int pciehp_configure_device(struct slot *p_slot)
+/**
+ * pciehp_configure_device() - enumerate PCI devices below a hotplug bridge
+ * @ctrl: PCIe hotplug controller
+ *
+ * Enumerate PCI devices below a hotplug bridge and add them to the system.
+ * Return 0 on success, %-EEXIST if the devices are already enumerated or
+ * %-ENODEV if enumeration failed.
+ */
+int pciehp_configure_device(struct controller *ctrl)
{
struct pci_dev *dev;
- struct pci_dev *bridge = p_slot->ctrl->pcie->port;
+ struct pci_dev *bridge = ctrl->pcie->port;
struct pci_bus *parent = bridge->subordinate;
int num, ret = 0;
- struct controller *ctrl = p_slot->ctrl;
pci_lock_rescan_remove();
return ret;
}
-void pciehp_unconfigure_device(struct slot *p_slot)
+/**
+ * pciehp_unconfigure_device() - remove PCI devices below a hotplug bridge
+ * @ctrl: PCIe hotplug controller
+ * @presence: whether the card is still present in the slot;
+ * true for safe removal via sysfs or an Attention Button press,
+ * false for surprise removal
+ *
+ * Unbind PCI devices below a hotplug bridge from their drivers and remove
+ * them from the system. Safely removed devices are quiesced; surprise-
+ * removed devices are marked as such to prevent further accesses.
+ */
+void pciehp_unconfigure_device(struct controller *ctrl, bool presence)
{
- u8 presence = 0;
struct pci_dev *dev, *temp;
- struct pci_bus *parent = p_slot->ctrl->pcie->port->subordinate;
+ struct pci_bus *parent = ctrl->pcie->port->subordinate;
u16 command;
- struct controller *ctrl = p_slot->ctrl;
ctrl_dbg(ctrl, "%s: domain:bus:dev = %04x:%02x:00\n",
__func__, pci_domain_nr(parent), parent->number);
- pciehp_get_adapter_status(p_slot, &presence);
+
+ if (!presence)
+ pci_walk_bus(parent, pci_dev_set_disconnected, NULL);
pci_lock_rescan_remove();
list_for_each_entry_safe_reverse(dev, temp, &parent->devices,
bus_list) {
pci_dev_get(dev);
- if (!presence) {
- pci_dev_set_disconnected(dev, NULL);
- if (pci_has_subordinate(dev))
- pci_walk_bus(dev->subordinate,
- pci_dev_set_disconnected, NULL);
- }
pci_stop_and_remove_bus_device(dev);
/*
* Ensure that no new Requests will be generated from
goto free_fdt1;
}
- fdt = kzalloc(fdt_totalsize(fdt1), GFP_KERNEL);
+ fdt = kmemdup(fdt1, fdt_totalsize(fdt1), GFP_KERNEL);
if (!fdt) {
ret = -ENOMEM;
goto free_fdt1;
}
/* Unflatten device tree blob */
- memcpy(fdt, fdt1, fdt_totalsize(fdt1));
dt = of_fdt_unflatten_tree(fdt, php_slot->dn, NULL);
if (!dt) {
ret = -EINVAL;
return ret;
}
+static inline struct pnv_php_slot *to_pnv_php_slot(struct hotplug_slot *slot)
+{
+ return container_of(slot, struct pnv_php_slot, slot);
+}
+
int pnv_php_set_slot_power_state(struct hotplug_slot *slot,
uint8_t state)
{
- struct pnv_php_slot *php_slot = slot->private;
+ struct pnv_php_slot *php_slot = to_pnv_php_slot(slot);
struct opal_msg msg;
int ret;
static int pnv_php_get_power_state(struct hotplug_slot *slot, u8 *state)
{
- struct pnv_php_slot *php_slot = slot->private;
+ struct pnv_php_slot *php_slot = to_pnv_php_slot(slot);
uint8_t power_state = OPAL_PCI_SLOT_POWER_ON;
int ret;
ret);
} else {
*state = power_state;
- slot->info->power_status = power_state;
}
return 0;
static int pnv_php_get_adapter_state(struct hotplug_slot *slot, u8 *state)
{
- struct pnv_php_slot *php_slot = slot->private;
+ struct pnv_php_slot *php_slot = to_pnv_php_slot(slot);
uint8_t presence = OPAL_PCI_SLOT_EMPTY;
int ret;
ret = pnv_pci_get_presence_state(php_slot->id, &presence);
if (ret >= 0) {
*state = presence;
- slot->info->adapter_status = presence;
ret = 0;
} else {
pci_warn(php_slot->pdev, "Error %d getting presence\n", ret);
return ret;
}
+static int pnv_php_get_attention_state(struct hotplug_slot *slot, u8 *state)
+{
+ struct pnv_php_slot *php_slot = to_pnv_php_slot(slot);
+
+ *state = php_slot->attention_state;
+ return 0;
+}
+
static int pnv_php_set_attention_state(struct hotplug_slot *slot, u8 state)
{
+ struct pnv_php_slot *php_slot = to_pnv_php_slot(slot);
+
/* FIXME: Make it real once firmware supports it */
- slot->info->attention_status = state;
+ php_slot->attention_state = state;
return 0;
}
static int pnv_php_enable_slot(struct hotplug_slot *slot)
{
- struct pnv_php_slot *php_slot = container_of(slot,
- struct pnv_php_slot, slot);
+ struct pnv_php_slot *php_slot = to_pnv_php_slot(slot);
return pnv_php_enable(php_slot, true);
}
static int pnv_php_disable_slot(struct hotplug_slot *slot)
{
- struct pnv_php_slot *php_slot = slot->private;
+ struct pnv_php_slot *php_slot = to_pnv_php_slot(slot);
int ret;
if (php_slot->state != PNV_PHP_STATE_POPULATED)
return ret;
}
-static struct hotplug_slot_ops php_slot_ops = {
+static const struct hotplug_slot_ops php_slot_ops = {
.get_power_status = pnv_php_get_power_state,
.get_adapter_status = pnv_php_get_adapter_state,
+ .get_attention_status = pnv_php_get_attention_state,
.set_attention_status = pnv_php_set_attention_state,
.enable_slot = pnv_php_enable_slot,
.disable_slot = pnv_php_disable_slot,
php_slot->id = id;
php_slot->power_state_check = false;
php_slot->slot.ops = &php_slot_ops;
- php_slot->slot.info = &php_slot->slot_info;
- php_slot->slot.private = php_slot;
INIT_LIST_HEAD(&php_slot->children);
INIT_LIST_HEAD(&php_slot->link);
u32 index;
u32 type;
u32 power_domain;
+ u8 attention_status;
char *name;
struct device_node *dn;
struct pci_bus *bus;
struct list_head *pci_devs;
- struct hotplug_slot *hotplug_slot;
+ struct hotplug_slot hotplug_slot;
};
-extern struct hotplug_slot_ops rpaphp_hotplug_slot_ops;
+extern const struct hotplug_slot_ops rpaphp_hotplug_slot_ops;
extern struct list_head rpaphp_slot_head;
+static inline struct slot *to_slot(struct hotplug_slot *hotplug_slot)
+{
+ return container_of(hotplug_slot, struct slot, hotplug_slot);
+}
+
/* function prototypes */
/* rpaphp_pci.c */
static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 value)
{
int rc;
- struct slot *slot = (struct slot *)hotplug_slot->private;
+ struct slot *slot = to_slot(hotplug_slot);
switch (value) {
case 0:
rc = rtas_set_indicator(DR_INDICATOR, slot->index, value);
if (!rc)
- hotplug_slot->info->attention_status = value;
+ slot->attention_status = value;
return rc;
}
static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
int retval, level;
- struct slot *slot = (struct slot *)hotplug_slot->private;
+ struct slot *slot = to_slot(hotplug_slot);
retval = rtas_get_power_level(slot->power_domain, &level);
if (!retval)
*/
static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
- struct slot *slot = (struct slot *)hotplug_slot->private;
- *value = slot->hotplug_slot->info->attention_status;
+ struct slot *slot = to_slot(hotplug_slot);
+ *value = slot->attention_status;
return 0;
}
static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
- struct slot *slot = (struct slot *)hotplug_slot->private;
+ struct slot *slot = to_slot(hotplug_slot);
int rc, state;
rc = rpaphp_get_sensor_state(slot, &state);
list_for_each_entry_safe(slot, next, &rpaphp_slot_head,
rpaphp_slot_list) {
list_del(&slot->rpaphp_slot_list);
- pci_hp_deregister(slot->hotplug_slot);
+ pci_hp_deregister(&slot->hotplug_slot);
dealloc_slot_struct(slot);
}
return;
static int enable_slot(struct hotplug_slot *hotplug_slot)
{
- struct slot *slot = (struct slot *)hotplug_slot->private;
+ struct slot *slot = to_slot(hotplug_slot);
int state;
int retval;
static int disable_slot(struct hotplug_slot *hotplug_slot)
{
- struct slot *slot = (struct slot *)hotplug_slot->private;
+ struct slot *slot = to_slot(hotplug_slot);
if (slot->state == NOT_CONFIGURED)
return -EINVAL;
return 0;
}
-struct hotplug_slot_ops rpaphp_hotplug_slot_ops = {
+const struct hotplug_slot_ops rpaphp_hotplug_slot_ops = {
.enable_slot = enable_slot,
.disable_slot = disable_slot,
.set_attention_status = set_attention_status,
* rpaphp_enable_slot - record slot state, config pci device
* @slot: target &slot
*
- * Initialize values in the slot, and the hotplug_slot info
- * structures to indicate if there is a pci card plugged into
- * the slot. If the slot is not empty, run the pcibios routine
+ * Initialize values in the slot structure to indicate if there is a pci card
+ * plugged into the slot. If the slot is not empty, run the pcibios routine
* to get pcibios stuff correctly set up.
*/
int rpaphp_enable_slot(struct slot *slot)
{
int rc, level, state;
struct pci_bus *bus;
- struct hotplug_slot_info *info = slot->hotplug_slot->info;
- info->adapter_status = NOT_VALID;
slot->state = EMPTY;
/* Find out if the power is turned on for the slot */
rc = rtas_get_power_level(slot->power_domain, &level);
if (rc)
return rc;
- info->power_status = level;
/* Figure out if there is an adapter in the slot */
rc = rpaphp_get_sensor_state(slot, &state);
return -EINVAL;
}
- info->adapter_status = EMPTY;
slot->bus = bus;
slot->pci_devs = &bus->devices;
/* if there's an adapter in the slot, go add the pci devices */
if (state == PRESENT) {
- info->adapter_status = NOT_CONFIGURED;
slot->state = NOT_CONFIGURED;
/* non-empty slot has to have child */
pci_hp_add_devices(bus);
if (!list_empty(&bus->devices)) {
- info->adapter_status = CONFIGURED;
slot->state = CONFIGURED;
}
/* free up the memory used by a slot */
void dealloc_slot_struct(struct slot *slot)
{
- kfree(slot->hotplug_slot->info);
kfree(slot->name);
- kfree(slot->hotplug_slot);
kfree(slot);
}
slot = kzalloc(sizeof(struct slot), GFP_KERNEL);
if (!slot)
goto error_nomem;
- slot->hotplug_slot = kzalloc(sizeof(struct hotplug_slot), GFP_KERNEL);
- if (!slot->hotplug_slot)
- goto error_slot;
- slot->hotplug_slot->info = kzalloc(sizeof(struct hotplug_slot_info),
- GFP_KERNEL);
- if (!slot->hotplug_slot->info)
- goto error_hpslot;
slot->name = kstrdup(drc_name, GFP_KERNEL);
if (!slot->name)
- goto error_info;
+ goto error_slot;
slot->dn = dn;
slot->index = drc_index;
slot->power_domain = power_domain;
- slot->hotplug_slot->private = slot;
- slot->hotplug_slot->ops = &rpaphp_hotplug_slot_ops;
+ slot->hotplug_slot.ops = &rpaphp_hotplug_slot_ops;
return (slot);
-error_info:
- kfree(slot->hotplug_slot->info);
-error_hpslot:
- kfree(slot->hotplug_slot);
error_slot:
kfree(slot);
error_nomem:
int rpaphp_deregister_slot(struct slot *slot)
{
int retval = 0;
- struct hotplug_slot *php_slot = slot->hotplug_slot;
+ struct hotplug_slot *php_slot = &slot->hotplug_slot;
dbg("%s - Entry: deregistering slot=%s\n",
__func__, slot->name);
int rpaphp_register_slot(struct slot *slot)
{
- struct hotplug_slot *php_slot = slot->hotplug_slot;
+ struct hotplug_slot *php_slot = &slot->hotplug_slot;
struct device_node *child;
u32 my_index;
int retval;
*/
struct slot {
struct list_head slot_list;
- struct hotplug_slot *hotplug_slot;
+ struct hotplug_slot hotplug_slot;
struct zpci_dev *zdev;
};
+static inline struct slot *to_slot(struct hotplug_slot *hotplug_slot)
+{
+ return container_of(hotplug_slot, struct slot, hotplug_slot);
+}
+
static inline int slot_configure(struct slot *slot)
{
int ret = sclp_pci_configure(slot->zdev->fid);
static int enable_slot(struct hotplug_slot *hotplug_slot)
{
- struct slot *slot = hotplug_slot->private;
+ struct slot *slot = to_slot(hotplug_slot);
int rc;
if (slot->zdev->state != ZPCI_FN_STATE_STANDBY)
static int disable_slot(struct hotplug_slot *hotplug_slot)
{
- struct slot *slot = hotplug_slot->private;
+ struct slot *slot = to_slot(hotplug_slot);
struct pci_dev *pdev;
int rc;
static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
- struct slot *slot = hotplug_slot->private;
+ struct slot *slot = to_slot(hotplug_slot);
switch (slot->zdev->state) {
case ZPCI_FN_STATE_STANDBY:
return 0;
}
-static struct hotplug_slot_ops s390_hotplug_slot_ops = {
+static const struct hotplug_slot_ops s390_hotplug_slot_ops = {
.enable_slot = enable_slot,
.disable_slot = disable_slot,
.get_power_status = get_power_status,
int zpci_init_slot(struct zpci_dev *zdev)
{
- struct hotplug_slot *hotplug_slot;
- struct hotplug_slot_info *info;
char name[SLOT_NAME_SIZE];
struct slot *slot;
int rc;
if (!slot)
goto error;
- hotplug_slot = kzalloc(sizeof(*hotplug_slot), GFP_KERNEL);
- if (!hotplug_slot)
- goto error_hp;
- hotplug_slot->private = slot;
-
- slot->hotplug_slot = hotplug_slot;
slot->zdev = zdev;
-
- info = kzalloc(sizeof(*info), GFP_KERNEL);
- if (!info)
- goto error_info;
- hotplug_slot->info = info;
-
- hotplug_slot->ops = &s390_hotplug_slot_ops;
-
- get_power_status(hotplug_slot, &info->power_status);
- get_adapter_status(hotplug_slot, &info->adapter_status);
+ slot->hotplug_slot.ops = &s390_hotplug_slot_ops;
snprintf(name, SLOT_NAME_SIZE, "%08x", zdev->fid);
- rc = pci_hp_register(slot->hotplug_slot, zdev->bus,
+ rc = pci_hp_register(&slot->hotplug_slot, zdev->bus,
ZPCI_DEVFN, name);
if (rc)
goto error_reg;
return 0;
error_reg:
- kfree(info);
-error_info:
- kfree(hotplug_slot);
-error_hp:
kfree(slot);
error:
return -ENOMEM;
if (slot->zdev != zdev)
continue;
list_del(&slot->slot_list);
- pci_hp_deregister(slot->hotplug_slot);
- kfree(slot->hotplug_slot->info);
- kfree(slot->hotplug_slot);
+ pci_hp_deregister(&slot->hotplug_slot);
kfree(slot);
}
}
int device_num;
struct pci_bus *pci_bus;
/* this struct for glue internal only */
- struct hotplug_slot *hotplug_slot;
+ struct hotplug_slot hotplug_slot;
struct list_head hp_list;
char physical_path[SN_SLOT_NAME_SIZE];
};
static int disable_slot(struct hotplug_slot *slot);
static inline int get_power_status(struct hotplug_slot *slot, u8 *value);
-static struct hotplug_slot_ops sn_hotplug_slot_ops = {
+static const struct hotplug_slot_ops sn_hotplug_slot_ops = {
.enable_slot = enable_slot,
.disable_slot = disable_slot,
.get_power_status = get_power_status,
static DEFINE_MUTEX(sn_hotplug_mutex);
+static struct slot *to_slot(struct hotplug_slot *bss_hotplug_slot)
+{
+ return container_of(bss_hotplug_slot, struct slot, hotplug_slot);
+}
+
static ssize_t path_show(struct pci_slot *pci_slot, char *buf)
{
int retval = -ENOENT;
- struct slot *slot = pci_slot->hotplug->private;
+ struct slot *slot = to_slot(pci_slot->hotplug);
if (!slot)
return retval;
return -EIO;
}
-static int sn_hp_slot_private_alloc(struct hotplug_slot *bss_hotplug_slot,
+static int sn_hp_slot_private_alloc(struct hotplug_slot **bss_hotplug_slot,
struct pci_bus *pci_bus, int device,
char *name)
{
slot = kzalloc(sizeof(*slot), GFP_KERNEL);
if (!slot)
return -ENOMEM;
- bss_hotplug_slot->private = slot;
slot->device_num = device;
slot->pci_bus = pci_bus;
sn_generate_path(pci_bus, slot->physical_path);
- slot->hotplug_slot = bss_hotplug_slot;
list_add(&slot->hp_list, &sn_hp_list);
+ *bss_hotplug_slot = &slot->hotplug_slot;
return 0;
}
struct hotplug_slot *bss_hotplug_slot = NULL;
list_for_each_entry(slot, &sn_hp_list, hp_list) {
- bss_hotplug_slot = slot->hotplug_slot;
+ bss_hotplug_slot = &slot->hotplug_slot;
pci_slot = bss_hotplug_slot->pci_slot;
- list_del(&((struct slot *)bss_hotplug_slot->private)->
- hp_list);
+ list_del(&slot->hp_list);
sysfs_remove_file(&pci_slot->kobj,
&sn_slot_path_attr.attr);
break;
static int sn_slot_enable(struct hotplug_slot *bss_hotplug_slot,
int device_num, char **ssdt)
{
- struct slot *slot = bss_hotplug_slot->private;
+ struct slot *slot = to_slot(bss_hotplug_slot);
struct pcibus_info *pcibus_info;
struct pcibr_slot_enable_resp resp;
int rc;
static int sn_slot_disable(struct hotplug_slot *bss_hotplug_slot,
int device_num, int action)
{
- struct slot *slot = bss_hotplug_slot->private;
+ struct slot *slot = to_slot(bss_hotplug_slot);
struct pcibus_info *pcibus_info;
struct pcibr_slot_disable_resp resp;
int rc;
*/
static int enable_slot(struct hotplug_slot *bss_hotplug_slot)
{
- struct slot *slot = bss_hotplug_slot->private;
+ struct slot *slot = to_slot(bss_hotplug_slot);
struct pci_bus *new_bus = NULL;
struct pci_dev *dev;
int num_funcs;
static int disable_slot(struct hotplug_slot *bss_hotplug_slot)
{
- struct slot *slot = bss_hotplug_slot->private;
+ struct slot *slot = to_slot(bss_hotplug_slot);
struct pci_dev *dev, *temp;
int rc;
acpi_handle ssdt_hdl = NULL;
static inline int get_power_status(struct hotplug_slot *bss_hotplug_slot,
u8 *value)
{
- struct slot *slot = bss_hotplug_slot->private;
+ struct slot *slot = to_slot(bss_hotplug_slot);
struct pcibus_info *pcibus_info;
u32 power;
static void sn_release_slot(struct hotplug_slot *bss_hotplug_slot)
{
- kfree(bss_hotplug_slot->info);
- kfree(bss_hotplug_slot->private);
- kfree(bss_hotplug_slot);
+ kfree(to_slot(bss_hotplug_slot));
}
static int sn_hotplug_slot_register(struct pci_bus *pci_bus)
if (sn_pci_slot_valid(pci_bus, device) != 1)
continue;
- bss_hotplug_slot = kzalloc(sizeof(*bss_hotplug_slot),
- GFP_KERNEL);
- if (!bss_hotplug_slot) {
- rc = -ENOMEM;
- goto alloc_err;
- }
-
- bss_hotplug_slot->info =
- kzalloc(sizeof(struct hotplug_slot_info),
- GFP_KERNEL);
- if (!bss_hotplug_slot->info) {
- rc = -ENOMEM;
- goto alloc_err;
- }
-
- if (sn_hp_slot_private_alloc(bss_hotplug_slot,
+ if (sn_hp_slot_private_alloc(&bss_hotplug_slot,
pci_bus, device, name)) {
rc = -ENOMEM;
goto alloc_err;
rc = sysfs_create_file(&pci_slot->kobj,
&sn_slot_path_attr.attr);
if (rc)
- goto register_err;
+ goto alloc_err;
}
pci_dbg(pci_bus->self, "Registered bus with hotplug\n");
return rc;
pci_dbg(pci_bus->self, "bus failed to register with err = %d\n",
rc);
-alloc_err:
- if (rc == -ENOMEM)
- pci_dbg(pci_bus->self, "Memory allocation error\n");
-
/* destroy THIS element */
- if (bss_hotplug_slot)
- sn_release_slot(bss_hotplug_slot);
+ sn_hp_destroy();
+ sn_release_slot(bss_hotplug_slot);
+alloc_err:
/* destroy anything else on the list */
while ((bss_hotplug_slot = sn_hp_destroy())) {
pci_hp_deregister(bss_hotplug_slot);
u32 number;
u8 is_a_board;
u8 state;
+ u8 attention_save;
u8 presence_save;
+ u8 latch_save;
u8 pwr_save;
struct controller *ctrl;
const struct hpc_ops *hpc_ops;
- struct hotplug_slot *hotplug_slot;
+ struct hotplug_slot hotplug_slot;
struct list_head slot_list;
struct delayed_work work; /* work for button event */
struct mutex lock;
static inline const char *slot_name(struct slot *slot)
{
- return hotplug_slot_name(slot->hotplug_slot);
+ return hotplug_slot_name(&slot->hotplug_slot);
}
struct ctrl_reg {
static inline struct slot *get_slot(struct hotplug_slot *hotplug_slot)
{
- return hotplug_slot->private;
+ return container_of(hotplug_slot, struct slot, hotplug_slot);
}
static inline struct slot *shpchp_find_slot(struct controller *ctrl, u8 device)
static int get_latch_status(struct hotplug_slot *slot, u8 *value);
static int get_adapter_status(struct hotplug_slot *slot, u8 *value);
-static struct hotplug_slot_ops shpchp_hotplug_slot_ops = {
+static const struct hotplug_slot_ops shpchp_hotplug_slot_ops = {
.set_attention_status = set_attention_status,
.enable_slot = enable_slot,
.disable_slot = disable_slot,
{
struct slot *slot;
struct hotplug_slot *hotplug_slot;
- struct hotplug_slot_info *info;
char name[SLOT_NAME_SIZE];
int retval;
int i;
goto error;
}
- hotplug_slot = kzalloc(sizeof(*hotplug_slot), GFP_KERNEL);
- if (!hotplug_slot) {
- retval = -ENOMEM;
- goto error_slot;
- }
- slot->hotplug_slot = hotplug_slot;
-
- info = kzalloc(sizeof(*info), GFP_KERNEL);
- if (!info) {
- retval = -ENOMEM;
- goto error_hpslot;
- }
- hotplug_slot->info = info;
+ hotplug_slot = &slot->hotplug_slot;
slot->hp_slot = i;
slot->ctrl = ctrl;
slot->wq = alloc_workqueue("shpchp-%d", 0, 0, slot->number);
if (!slot->wq) {
retval = -ENOMEM;
- goto error_info;
+ goto error_slot;
}
mutex_init(&slot->lock);
INIT_DELAYED_WORK(&slot->work, shpchp_queue_pushbutton_work);
/* register this slot with the hotplug pci core */
- hotplug_slot->private = slot;
snprintf(name, SLOT_NAME_SIZE, "%d", slot->number);
hotplug_slot->ops = &shpchp_hotplug_slot_ops;
pci_domain_nr(ctrl->pci_dev->subordinate),
slot->bus, slot->device, slot->hp_slot, slot->number,
ctrl->slot_device_offset);
- retval = pci_hp_register(slot->hotplug_slot,
+ retval = pci_hp_register(hotplug_slot,
ctrl->pci_dev->subordinate, slot->device, name);
if (retval) {
ctrl_err(ctrl, "pci_hp_register failed with error %d\n",
goto error_slotwq;
}
- get_power_status(hotplug_slot, &info->power_status);
- get_attention_status(hotplug_slot, &info->attention_status);
- get_latch_status(hotplug_slot, &info->latch_status);
- get_adapter_status(hotplug_slot, &info->adapter_status);
+ get_power_status(hotplug_slot, &slot->pwr_save);
+ get_attention_status(hotplug_slot, &slot->attention_save);
+ get_latch_status(hotplug_slot, &slot->latch_save);
+ get_adapter_status(hotplug_slot, &slot->presence_save);
list_add(&slot->slot_list, &ctrl->slot_list);
}
return 0;
error_slotwq:
destroy_workqueue(slot->wq);
-error_info:
- kfree(info);
-error_hpslot:
- kfree(hotplug_slot);
error_slot:
kfree(slot);
error:
list_del(&slot->slot_list);
cancel_delayed_work(&slot->work);
destroy_workqueue(slot->wq);
- pci_hp_deregister(slot->hotplug_slot);
- kfree(slot->hotplug_slot->info);
- kfree(slot->hotplug_slot);
+ pci_hp_deregister(&slot->hotplug_slot);
kfree(slot);
}
}
ctrl_dbg(slot->ctrl, "%s: physical_slot = %s\n",
__func__, slot_name(slot));
- hotplug_slot->info->attention_status = status;
+ slot->attention_save = status;
slot->hpc_ops->set_attention_status(slot, status);
return 0;
retval = slot->hpc_ops->get_power_status(slot, value);
if (retval < 0)
- *value = hotplug_slot->info->power_status;
+ *value = slot->pwr_save;
return 0;
}
retval = slot->hpc_ops->get_attention_status(slot, value);
if (retval < 0)
- *value = hotplug_slot->info->attention_status;
+ *value = slot->attention_save;
return 0;
}
retval = slot->hpc_ops->get_latch_status(slot, value);
if (retval < 0)
- *value = hotplug_slot->info->latch_status;
+ *value = slot->latch_save;
return 0;
}
retval = slot->hpc_ops->get_adapter_status(slot, value);
if (retval < 0)
- *value = hotplug_slot->info->adapter_status;
+ *value = slot->presence_save;
return 0;
}
mutex_unlock(&p_slot->lock);
}
-static int update_slot_info (struct slot *slot)
+static void update_slot_info(struct slot *slot)
{
- struct hotplug_slot_info *info;
- int result;
-
- info = kmalloc(sizeof(*info), GFP_KERNEL);
- if (!info)
- return -ENOMEM;
-
- slot->hpc_ops->get_power_status(slot, &(info->power_status));
- slot->hpc_ops->get_attention_status(slot, &(info->attention_status));
- slot->hpc_ops->get_latch_status(slot, &(info->latch_status));
- slot->hpc_ops->get_adapter_status(slot, &(info->adapter_status));
-
- result = pci_hp_change_slot_info(slot->hotplug_slot, info);
- kfree (info);
- return result;
+ slot->hpc_ops->get_power_status(slot, &slot->pwr_save);
+ slot->hpc_ops->get_attention_status(slot, &slot->attention_save);
+ slot->hpc_ops->get_latch_status(slot, &slot->latch_save);
+ slot->hpc_ops->get_adapter_status(slot, &slot->presence_save);
}
/*
#include <linux/export.h>
#include <linux/string.h>
#include <linux/delay.h>
-#include <linux/pci-ats.h>
#include "pci.h"
#define VIRTFN_ID_LEN 16
&physfn->sriov->subsystem_vendor);
pci_read_config_word(virtfn, PCI_SUBSYSTEM_ID,
&physfn->sriov->subsystem_device);
+
+ physfn->sriov->cfg_size = pci_cfg_space_size(virtfn);
}
int pci_iov_add_virtfn(struct pci_dev *dev, int id)
}
}
}
- WARN_ON(!!dev->msix_enabled);
/* Check whether driver already requested for MSI irq */
if (dev->msi_enabled) {
if (!pci_msi_supported(dev, minvec))
return -EINVAL;
- WARN_ON(!!dev->msi_enabled);
-
/* Check whether driver already requested MSI-X irqs */
if (dev->msix_enabled) {
pci_info(dev, "can't enable MSI (MSI-X already enabled)\n");
if (maxvec < minvec)
return -ERANGE;
+ if (WARN_ON_ONCE(dev->msi_enabled))
+ return -EINVAL;
+
nvec = pci_msi_vec_count(dev);
if (nvec < 0)
return nvec;
if (maxvec < minvec)
return -ERANGE;
+ if (WARN_ON_ONCE(dev->msix_enabled))
+ return -EINVAL;
+
for (;;) {
if (affd) {
nvec = irq_calc_affinity_vectors(minvec, nvec, affd);
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * PCI Peer 2 Peer DMA support.
+ *
+ * Copyright (c) 2016-2018, Logan Gunthorpe
+ * Copyright (c) 2016-2017, Microsemi Corporation
+ * Copyright (c) 2017, Christoph Hellwig
+ * Copyright (c) 2018, Eideticom Inc.
+ */
+
+#define pr_fmt(fmt) "pci-p2pdma: " fmt
+#include <linux/ctype.h>
+#include <linux/pci-p2pdma.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/genalloc.h>
+#include <linux/memremap.h>
+#include <linux/percpu-refcount.h>
+#include <linux/random.h>
+#include <linux/seq_buf.h>
+
+struct pci_p2pdma {
+ struct percpu_ref devmap_ref;
+ struct completion devmap_ref_done;
+ struct gen_pool *pool;
+ bool p2pmem_published;
+};
+
+static ssize_t size_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+ size_t size = 0;
+
+ if (pdev->p2pdma->pool)
+ size = gen_pool_size(pdev->p2pdma->pool);
+
+ return snprintf(buf, PAGE_SIZE, "%zd\n", size);
+}
+static DEVICE_ATTR_RO(size);
+
+static ssize_t available_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+ size_t avail = 0;
+
+ if (pdev->p2pdma->pool)
+ avail = gen_pool_avail(pdev->p2pdma->pool);
+
+ return snprintf(buf, PAGE_SIZE, "%zd\n", avail);
+}
+static DEVICE_ATTR_RO(available);
+
+static ssize_t published_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+
+ return snprintf(buf, PAGE_SIZE, "%d\n",
+ pdev->p2pdma->p2pmem_published);
+}
+static DEVICE_ATTR_RO(published);
+
+static struct attribute *p2pmem_attrs[] = {
+ &dev_attr_size.attr,
+ &dev_attr_available.attr,
+ &dev_attr_published.attr,
+ NULL,
+};
+
+static const struct attribute_group p2pmem_group = {
+ .attrs = p2pmem_attrs,
+ .name = "p2pmem",
+};
+
+static void pci_p2pdma_percpu_release(struct percpu_ref *ref)
+{
+ struct pci_p2pdma *p2p =
+ container_of(ref, struct pci_p2pdma, devmap_ref);
+
+ complete_all(&p2p->devmap_ref_done);
+}
+
+static void pci_p2pdma_percpu_kill(void *data)
+{
+ struct percpu_ref *ref = data;
+
+ /*
+ * pci_p2pdma_add_resource() may be called multiple times
+ * by a driver and may register the percpu_kill devm action multiple
+ * times. We only want the first action to actually kill the
+ * percpu_ref.
+ */
+ if (percpu_ref_is_dying(ref))
+ return;
+
+ percpu_ref_kill(ref);
+}
+
+static void pci_p2pdma_release(void *data)
+{
+ struct pci_dev *pdev = data;
+
+ if (!pdev->p2pdma)
+ return;
+
+ wait_for_completion(&pdev->p2pdma->devmap_ref_done);
+ percpu_ref_exit(&pdev->p2pdma->devmap_ref);
+
+ gen_pool_destroy(pdev->p2pdma->pool);
+ sysfs_remove_group(&pdev->dev.kobj, &p2pmem_group);
+ pdev->p2pdma = NULL;
+}
+
+static int pci_p2pdma_setup(struct pci_dev *pdev)
+{
+ int error = -ENOMEM;
+ struct pci_p2pdma *p2p;
+
+ p2p = devm_kzalloc(&pdev->dev, sizeof(*p2p), GFP_KERNEL);
+ if (!p2p)
+ return -ENOMEM;
+
+ p2p->pool = gen_pool_create(PAGE_SHIFT, dev_to_node(&pdev->dev));
+ if (!p2p->pool)
+ goto out;
+
+ init_completion(&p2p->devmap_ref_done);
+ error = percpu_ref_init(&p2p->devmap_ref,
+ pci_p2pdma_percpu_release, 0, GFP_KERNEL);
+ if (error)
+ goto out_pool_destroy;
+
+ error = devm_add_action_or_reset(&pdev->dev, pci_p2pdma_release, pdev);
+ if (error)
+ goto out_pool_destroy;
+
+ pdev->p2pdma = p2p;
+
+ error = sysfs_create_group(&pdev->dev.kobj, &p2pmem_group);
+ if (error)
+ goto out_pool_destroy;
+
+ return 0;
+
+out_pool_destroy:
+ pdev->p2pdma = NULL;
+ gen_pool_destroy(p2p->pool);
+out:
+ devm_kfree(&pdev->dev, p2p);
+ return error;
+}
+
+/**
+ * pci_p2pdma_add_resource - add memory for use as p2p memory
+ * @pdev: the device to add the memory to
+ * @bar: PCI BAR to add
+ * @size: size of the memory to add, may be zero to use the whole BAR
+ * @offset: offset into the PCI BAR
+ *
+ * The memory will be given ZONE_DEVICE struct pages so that it may
+ * be used with any DMA request.
+ */
+int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size,
+ u64 offset)
+{
+ struct dev_pagemap *pgmap;
+ void *addr;
+ int error;
+
+ if (!(pci_resource_flags(pdev, bar) & IORESOURCE_MEM))
+ return -EINVAL;
+
+ if (offset >= pci_resource_len(pdev, bar))
+ return -EINVAL;
+
+ if (!size)
+ size = pci_resource_len(pdev, bar) - offset;
+
+ if (size + offset > pci_resource_len(pdev, bar))
+ return -EINVAL;
+
+ if (!pdev->p2pdma) {
+ error = pci_p2pdma_setup(pdev);
+ if (error)
+ return error;
+ }
+
+ pgmap = devm_kzalloc(&pdev->dev, sizeof(*pgmap), GFP_KERNEL);
+ if (!pgmap)
+ return -ENOMEM;
+
+ pgmap->res.start = pci_resource_start(pdev, bar) + offset;
+ pgmap->res.end = pgmap->res.start + size - 1;
+ pgmap->res.flags = pci_resource_flags(pdev, bar);
+ pgmap->ref = &pdev->p2pdma->devmap_ref;
+ pgmap->type = MEMORY_DEVICE_PCI_P2PDMA;
+ pgmap->pci_p2pdma_bus_offset = pci_bus_address(pdev, bar) -
+ pci_resource_start(pdev, bar);
+
+ addr = devm_memremap_pages(&pdev->dev, pgmap);
+ if (IS_ERR(addr)) {
+ error = PTR_ERR(addr);
+ goto pgmap_free;
+ }
+
+ error = gen_pool_add_virt(pdev->p2pdma->pool, (unsigned long)addr,
+ pci_bus_address(pdev, bar) + offset,
+ resource_size(&pgmap->res), dev_to_node(&pdev->dev));
+ if (error)
+ goto pgmap_free;
+
+ error = devm_add_action_or_reset(&pdev->dev, pci_p2pdma_percpu_kill,
+ &pdev->p2pdma->devmap_ref);
+ if (error)
+ goto pgmap_free;
+
+ pci_info(pdev, "added peer-to-peer DMA memory %pR\n",
+ &pgmap->res);
+
+ return 0;
+
+pgmap_free:
+ devm_kfree(&pdev->dev, pgmap);
+ return error;
+}
+EXPORT_SYMBOL_GPL(pci_p2pdma_add_resource);
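As a usage sketch (the driver name and BAR number below are assumptions, not part of this patch), a device driver that owns a BAR with usable memory registers it once at probe time and may then publish it:

	/* Hypothetical driver exposing BAR 4 as peer-to-peer memory. */
	static int foo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
	{
		int rc;

		/* size == 0 and offset == 0: donate the whole BAR to the pool */
		rc = pci_p2pdma_add_resource(pdev, 4, 0, 0);
		if (rc)
			return rc;

		/* Let unrelated drivers allocate from this pool as well */
		pci_p2pmem_publish(pdev, true);

		return 0;
	}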
+
+/*
+ * Note this function returns the parent PCI device with a
+ * reference taken. It is the caller's responsibility to drop
+ * the reference.
+ */
+static struct pci_dev *find_parent_pci_dev(struct device *dev)
+{
+ struct device *parent;
+
+ dev = get_device(dev);
+
+ while (dev) {
+ if (dev_is_pci(dev))
+ return to_pci_dev(dev);
+
+ parent = get_device(dev->parent);
+ put_device(dev);
+ dev = parent;
+ }
+
+ return NULL;
+}
+
+/*
+ * Check if a PCI bridge has its ACS redirection bits set to redirect P2P
+ * TLPs upstream via ACS. Returns 1 if the packets will be redirected
+ * upstream, 0 otherwise.
+ */
+static int pci_bridge_has_acs_redir(struct pci_dev *pdev)
+{
+ int pos;
+ u16 ctrl;
+
+ pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ACS);
+ if (!pos)
+ return 0;
+
+ pci_read_config_word(pdev, pos + PCI_ACS_CTRL, &ctrl);
+
+ if (ctrl & (PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_EC))
+ return 1;
+
+ return 0;
+}
+
+static void seq_buf_print_bus_devfn(struct seq_buf *buf, struct pci_dev *pdev)
+{
+ if (!buf)
+ return;
+
+ seq_buf_printf(buf, "%s;", pci_name(pdev));
+}
+
+/*
+ * Find the distance through the nearest common upstream bridge between
+ * two PCI devices.
+ *
+ * If the two devices are the same device then 0 will be returned.
+ *
+ * If there are two virtual functions of the same device behind the same
+ * bridge port then 2 will be returned (one step down to the PCIe switch,
+ * then one step back to the same device).
+ *
+ * In the case where two devices are connected to the same PCIe switch, the
+ * value 4 will be returned. This corresponds to the following PCI tree:
+ *
+ * -+ Root Port
+ * \+ Switch Upstream Port
+ * +-+ Switch Downstream Port
+ * + \- Device A
+ * \-+ Switch Downstream Port
+ * \- Device B
+ *
+ * The distance is 4 because we traverse from Device A through the downstream
+ * port of the switch, to the common upstream port, back up to the second
+ * downstream port and then to Device B.
+ *
+ * Any two devices that don't have a common upstream bridge will return -1.
+ * In this way devices on separate PCIe root ports will be rejected, which
+ * is what we want for peer-to-peer, since each PCIe root port defines a
+ * separate hierarchy domain and there's no way to determine whether the root
+ * complex supports forwarding between them.
+ *
+ * In the case where two devices are connected to different PCIe switches,
+ * this function will still return a positive distance as long as both
+ * switches eventually have a common upstream bridge. Note this covers
+ * the case of using multiple PCIe switches to achieve a desired level of
+ * fan-out from a root port. The exact distance will be a function of the
+ * number of switches between Device A and Device B.
+ *
+ * If a bridge which has any ACS redirection bits set is in the path
+ * then this function will return -2. This is so we reject any
+ * cases where the TLPs are forwarded up into the root complex.
+ * In this case, a list of all offending bridge addresses will be
+ * populated in acs_list (if it is non-NULL) for printk purposes.
+ */
+static int upstream_bridge_distance(struct pci_dev *a,
+ struct pci_dev *b,
+ struct seq_buf *acs_list)
+{
+ int dist_a = 0;
+ int dist_b = 0;
+ struct pci_dev *bb = NULL;
+ int acs_cnt = 0;
+
+ /*
+ * Note, we don't need to take references to devices returned by
+ * pci_upstream_bridge(), since we hold a reference to a child
+ * device which will already hold a reference to the upstream bridge.
+ */
+
+ while (a) {
+ dist_b = 0;
+
+ if (pci_bridge_has_acs_redir(a)) {
+ seq_buf_print_bus_devfn(acs_list, a);
+ acs_cnt++;
+ }
+
+ bb = b;
+
+ while (bb) {
+ if (a == bb)
+ goto check_b_path_acs;
+
+ bb = pci_upstream_bridge(bb);
+ dist_b++;
+ }
+
+ a = pci_upstream_bridge(a);
+ dist_a++;
+ }
+
+ return -1;
+
+check_b_path_acs:
+ bb = b;
+
+ while (bb) {
+ if (a == bb)
+ break;
+
+ if (pci_bridge_has_acs_redir(bb)) {
+ seq_buf_print_bus_devfn(acs_list, bb);
+ acs_cnt++;
+ }
+
+ bb = pci_upstream_bridge(bb);
+ }
+
+ if (acs_cnt)
+ return -2;
+
+ return dist_a + dist_b;
+}
+
+static int upstream_bridge_distance_warn(struct pci_dev *provider,
+ struct pci_dev *client)
+{
+ struct seq_buf acs_list;
+ int ret;
+
+ seq_buf_init(&acs_list, kmalloc(PAGE_SIZE, GFP_KERNEL), PAGE_SIZE);
+ if (!acs_list.buffer)
+ return -ENOMEM;
+
+ ret = upstream_bridge_distance(provider, client, &acs_list);
+ if (ret == -2) {
+ pci_warn(client, "cannot be used for peer-to-peer DMA as ACS redirect is set between the client and provider (%s)\n",
+ pci_name(provider));
+ /* Drop final semicolon */
+ acs_list.buffer[acs_list.len-1] = 0;
+ pci_warn(client, "to disable ACS redirect for this path, add the kernel parameter: pci=disable_acs_redir=%s\n",
+ acs_list.buffer);
+
+ } else if (ret < 0) {
+ pci_warn(client, "cannot be used for peer-to-peer DMA as the client and provider (%s) do not share an upstream bridge\n",
+ pci_name(provider));
+ }
+
+ kfree(acs_list.buffer);
+
+ return ret;
+}
+
+/**
+ * pci_p2pdma_distance_many - Determine the cumulative distance between
+ * a p2pdma provider and the clients in use.
+ * @provider: p2pdma provider to check against the client list
+ * @clients: array of devices to check (NULL-terminated)
+ * @num_clients: number of clients in the array
+ * @verbose: if true, print warnings for devices when we return -1
+ *
+ * Returns -1 if any of the clients are not compatible, otherwise returns
+ * a positive number where a lower number is the preferable choice. (If
+ * one of the clients is the same device as the provider it will return 0,
+ * which is the best choice.)
+ *
+ * For now, "compatible" means the provider and the clients are all behind
+ * the same PCI root port. This cuts out cases that may work but is safest
+ * for the user. Future work can expand this to white-list root complexes
+ * that can safely forward between their ports.
+ */
+int pci_p2pdma_distance_many(struct pci_dev *provider, struct device **clients,
+ int num_clients, bool verbose)
+{
+ bool not_supported = false;
+ struct pci_dev *pci_client;
+ int distance = 0;
+ int i, ret;
+
+ if (num_clients == 0)
+ return -1;
+
+ for (i = 0; i < num_clients; i++) {
+ pci_client = find_parent_pci_dev(clients[i]);
+ if (!pci_client) {
+ if (verbose)
+ dev_warn(clients[i],
+ "cannot be used for peer-to-peer DMA as it is not a PCI device\n");
+ return -1;
+ }
+
+ if (verbose)
+ ret = upstream_bridge_distance_warn(provider,
+ pci_client);
+ else
+ ret = upstream_bridge_distance(provider, pci_client,
+ NULL);
+
+ pci_dev_put(pci_client);
+
+ if (ret < 0)
+ not_supported = true;
+
+ if (not_supported && !verbose)
+ break;
+
+ distance += ret;
+ }
+
+ if (not_supported)
+ return -1;
+
+ return distance;
+}
+EXPORT_SYMBOL_GPL(pci_p2pdma_distance_many);
+
+/**
+ * pci_has_p2pmem - check if a given PCI device has published any p2pmem
+ * @pdev: PCI device to check
+ */
+bool pci_has_p2pmem(struct pci_dev *pdev)
+{
+ return pdev->p2pdma && pdev->p2pdma->p2pmem_published;
+}
+EXPORT_SYMBOL_GPL(pci_has_p2pmem);
+
+/**
+ * pci_p2pmem_find_many - find the peer-to-peer DMA memory device compatible
+ * with the specified list of clients and at the shortest distance (as
+ * determined by pci_p2pdma_distance_many())
+ * @clients: array of devices to check (NULL-terminated)
+ * @num_clients: number of client devices in the list
+ *
+ * If multiple devices are behind the same switch, the one "closest" to the
+ * client devices in use will be chosen first. (So if one of the providers is
+ * the same as one of the clients, that provider will be used ahead of any
+ * other providers that are unrelated.) If multiple providers are an equal
+ * distance away, one will be chosen at random.
+ *
+ * Returns a pointer to the PCI device with a reference taken (use pci_dev_put
+ * to drop the reference) or NULL if no compatible device is found. The
+ * found provider will also be assigned to the client list.
+ */
+struct pci_dev *pci_p2pmem_find_many(struct device **clients, int num_clients)
+{
+ struct pci_dev *pdev = NULL;
+ int distance;
+ int closest_distance = INT_MAX;
+ struct pci_dev **closest_pdevs;
+ int dev_cnt = 0;
+ const int max_devs = PAGE_SIZE / sizeof(*closest_pdevs);
+ int i;
+
+ closest_pdevs = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ if (!closest_pdevs)
+ return NULL;
+
+ while ((pdev = pci_get_device(PCI_ANY_ID, PCI_ANY_ID, pdev))) {
+ if (!pci_has_p2pmem(pdev))
+ continue;
+
+ distance = pci_p2pdma_distance_many(pdev, clients,
+ num_clients, false);
+ if (distance < 0 || distance > closest_distance)
+ continue;
+
+ if (distance == closest_distance && dev_cnt >= max_devs)
+ continue;
+
+ if (distance < closest_distance) {
+ for (i = 0; i < dev_cnt; i++)
+ pci_dev_put(closest_pdevs[i]);
+
+ dev_cnt = 0;
+ closest_distance = distance;
+ }
+
+ closest_pdevs[dev_cnt++] = pci_dev_get(pdev);
+ }
+
+ if (dev_cnt)
+ pdev = pci_dev_get(closest_pdevs[prandom_u32_max(dev_cnt)]);
+
+ for (i = 0; i < dev_cnt; i++)
+ pci_dev_put(closest_pdevs[i]);
+
+ kfree(closest_pdevs);
+ return pdev;
+}
+EXPORT_SYMBOL_GPL(pci_p2pmem_find_many);
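For illustration (the surrounding driver and its two client devices are assumptions), a consumer holding references to the devices that will touch the memory could pick a provider like this:

	/* Hypothetical selection of the closest published p2pmem provider. */
	static struct pci_dev *foo_pick_provider(struct device *dma_dev,
						 struct device *target_dev)
	{
		struct device *clients[] = { dma_dev, target_dev };

		/* A reference on the returned device is held;
		 * pci_dev_put() it when done.
		 */
		return pci_p2pmem_find_many(clients, ARRAY_SIZE(clients));
	}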
+
+/**
+ * pci_alloc_p2pmem - allocate peer-to-peer DMA memory
+ * @pdev: the device to allocate memory from
+ * @size: number of bytes to allocate
+ *
+ * Returns the allocated memory or NULL on error.
+ */
+void *pci_alloc_p2pmem(struct pci_dev *pdev, size_t size)
+{
+ void *ret;
+
+ if (unlikely(!pdev->p2pdma))
+ return NULL;
+
+ if (unlikely(!percpu_ref_tryget_live(&pdev->p2pdma->devmap_ref)))
+ return NULL;
+
+ ret = (void *)gen_pool_alloc(pdev->p2pdma->pool, size);
+
+ if (unlikely(!ret))
+ percpu_ref_put(&pdev->p2pdma->devmap_ref);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(pci_alloc_p2pmem);
+
+/**
+ * pci_free_p2pmem - free peer-to-peer DMA memory
+ * @pdev: the device the memory was allocated from
+ * @addr: address of the memory that was allocated
+ * @size: number of bytes that were allocated
+ */
+void pci_free_p2pmem(struct pci_dev *pdev, void *addr, size_t size)
+{
+ gen_pool_free(pdev->p2pdma->pool, (uintptr_t)addr, size);
+ percpu_ref_put(&pdev->p2pdma->devmap_ref);
+}
+EXPORT_SYMBOL_GPL(pci_free_p2pmem);
+
+/**
+ * pci_p2pmem_virt_to_bus - return the PCI bus address for a given virtual
+ * address obtained with pci_alloc_p2pmem()
+ * @pdev: the device the memory was allocated from
+ * @addr: address of the memory that was allocated
+ */
+pci_bus_addr_t pci_p2pmem_virt_to_bus(struct pci_dev *pdev, void *addr)
+{
+ if (!addr)
+ return 0;
+ if (!pdev->p2pdma)
+ return 0;
+
+ /*
+ * Note: when we added the memory to the pool we used the PCI
+ * bus address as the physical address. So gen_pool_virt_to_phys()
+ * actually returns the bus address despite the misleading name.
+ */
+ return gen_pool_virt_to_phys(pdev->p2pdma->pool, (unsigned long)addr);
+}
+EXPORT_SYMBOL_GPL(pci_p2pmem_virt_to_bus);
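Putting the allocator and the address helper together, a minimal sketch (the buffer size and surrounding function are assumptions) looks like:

	/* Hypothetical: allocate a p2p buffer and program its bus address. */
	static int foo_run_transfer(struct pci_dev *provider)
	{
		pci_bus_addr_t bus_addr;
		void *buf;

		buf = pci_alloc_p2pmem(provider, SZ_64K);
		if (!buf)
			return -ENOMEM;

		bus_addr = pci_p2pmem_virt_to_bus(provider, buf);
		/* ... hand bus_addr to the peer device and run the transfer ... */

		pci_free_p2pmem(provider, buf, SZ_64K);
		return 0;
	}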
+
+/**
+ * pci_p2pmem_alloc_sgl - allocate peer-to-peer DMA memory in a scatterlist
+ * @pdev: the device to allocate memory from
+ * @nents: the number of SG entries in the list
+ * @length: number of bytes to allocate
+ *
+ * Returns the newly-allocated scatterlist or NULL on error.
+ */
+struct scatterlist *pci_p2pmem_alloc_sgl(struct pci_dev *pdev,
+ unsigned int *nents, u32 length)
+{
+ struct scatterlist *sg;
+ void *addr;
+
+ sg = kzalloc(sizeof(*sg), GFP_KERNEL);
+ if (!sg)
+ return NULL;
+
+ sg_init_table(sg, 1);
+
+ addr = pci_alloc_p2pmem(pdev, length);
+ if (!addr)
+ goto out_free_sg;
+
+ sg_set_buf(sg, addr, length);
+ *nents = 1;
+ return sg;
+
+out_free_sg:
+ kfree(sg);
+ return NULL;
+}
+EXPORT_SYMBOL_GPL(pci_p2pmem_alloc_sgl);
+
+/**
+ * pci_p2pmem_free_sgl - free a scatterlist allocated by pci_p2pmem_alloc_sgl()
+ * @pdev: the device the memory was allocated from
+ * @sgl: the allocated scatterlist
+ */
+void pci_p2pmem_free_sgl(struct pci_dev *pdev, struct scatterlist *sgl)
+{
+ struct scatterlist *sg;
+ int count;
+
+ for_each_sg(sgl, sg, INT_MAX, count) {
+ if (!sg)
+ break;
+
+ pci_free_p2pmem(pdev, sg_virt(sg), sg->length);
+ }
+ kfree(sgl);
+}
+EXPORT_SYMBOL_GPL(pci_p2pmem_free_sgl);
+
+/**
+ * pci_p2pmem_publish - publish the peer-to-peer DMA memory for use by
+ * other devices with pci_p2pmem_find_many()
+ * @pdev: the device with peer-to-peer DMA memory to publish
+ * @publish: set to true to publish the memory, false to unpublish it
+ *
+ * Published memory can be used by other PCI device drivers for
+ * peer-to-peer DMA operations. Non-published memory is reserved for
+ * the exclusive use of the device driver that registers the peer-to-peer
+ * memory.
+ */
+void pci_p2pmem_publish(struct pci_dev *pdev, bool publish)
+{
+ if (pdev->p2pdma)
+ pdev->p2pdma->p2pmem_published = publish;
+}
+EXPORT_SYMBOL_GPL(pci_p2pmem_publish);
+
+/**
+ * pci_p2pdma_map_sg - map a PCI peer-to-peer scatterlist for DMA
+ * @dev: device doing the DMA request
+ * @sg: scatter list to map
+ * @nents: elements in the scatterlist
+ * @dir: DMA direction
+ *
+ * Scatterlists mapped with this function should not be unmapped in any way.
+ *
+ * Returns the number of SG entries mapped or 0 on error.
+ */
+int pci_p2pdma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
+ enum dma_data_direction dir)
+{
+ struct dev_pagemap *pgmap;
+ struct scatterlist *s;
+ phys_addr_t paddr;
+ int i;
+
+ /*
+ * p2pdma mappings are not compatible with devices that use
+ * dma_virt_ops. If the upper layers do the right thing
+ * this should never happen because it will be prevented
+ * by the check in pci_p2pdma_add_client()
+ */
+ if (WARN_ON_ONCE(IS_ENABLED(CONFIG_DMA_VIRT_OPS) &&
+ dev->dma_ops == &dma_virt_ops))
+ return 0;
+
+ for_each_sg(sg, s, nents, i) {
+ pgmap = sg_page(s)->pgmap;
+ paddr = sg_phys(s);
+
+ s->dma_address = paddr - pgmap->pci_p2pdma_bus_offset;
+ sg_dma_len(s) = s->length;
+ }
+
+ return nents;
+}
+EXPORT_SYMBOL_GPL(pci_p2pdma_map_sg);
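For scatterlist-based consumers the allocation and mapping halves compose as below; this is a sketch, and the transfer length is an assumption:

	/* Hypothetical: allocate p2p memory as an SGL and map it for DMA. */
	static struct scatterlist *foo_setup_p2p_sgl(struct pci_dev *provider,
						     struct device *dma_dev,
						     unsigned int *nents)
	{
		struct scatterlist *sgl;

		sgl = pci_p2pmem_alloc_sgl(provider, nents, SZ_128K);
		if (!sgl)
			return NULL;

		/* Note: p2pdma mappings are never unmapped after use */
		if (!pci_p2pdma_map_sg(dma_dev, sgl, *nents, DMA_TO_DEVICE)) {
			pci_p2pmem_free_sgl(provider, sgl);
			return NULL;
		}

		return sgl;
	}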
+
+/**
+ * pci_p2pdma_enable_store - parse a configfs/sysfs attribute store
+ * to enable p2pdma
+ * @page: contents of the value to be stored
+ * @p2p_dev: returns the PCI device that was selected to be used
+ * (if one was specified in the stored value)
+ * @use_p2pdma: returns whether to enable p2pdma or not
+ *
+ * Parses an attribute value to decide whether to enable p2pdma.
+ * The value can select a PCI device (using its full BDF device
+ * name) or a boolean (in any format strtobool() accepts). A false
+ * value disables p2pdma; a true value expects the caller to
+ * automatically find a compatible device; and a specific PCI device
+ * name expects the caller to use that device as the provider.
+ *
+ * pci_p2pdma_enable_show() should be used as the show operation for
+ * the attribute.
+ *
+ * Returns 0 on success
+ */
+int pci_p2pdma_enable_store(const char *page, struct pci_dev **p2p_dev,
+ bool *use_p2pdma)
+{
+ struct device *dev;
+
+ dev = bus_find_device_by_name(&pci_bus_type, NULL, page);
+ if (dev) {
+ *use_p2pdma = true;
+ *p2p_dev = to_pci_dev(dev);
+
+ if (!pci_has_p2pmem(*p2p_dev)) {
+ pci_err(*p2p_dev,
+ "PCI device has no peer-to-peer memory: %s\n",
+ page);
+ pci_dev_put(*p2p_dev);
+ return -ENODEV;
+ }
+
+ return 0;
+ } else if ((page[0] == '0' || page[0] == '1') && !iscntrl(page[1])) {
+ /*
+ * If the user enters a PCI device that doesn't exist
+ * like "0000:01:00.1", we don't want strtobool to think
+ * it's a '0' when it's clearly not what the user wanted.
+ * So we require 0's and 1's to be exactly one character.
+ */
+ } else if (!strtobool(page, use_p2pdma)) {
+ return 0;
+ }
+
+ pr_err("No such PCI device: %.*s\n", (int)strcspn(page, "\n"), page);
+ return -ENODEV;
+}
+EXPORT_SYMBOL_GPL(pci_p2pdma_enable_store);
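A subsystem exposing such a writable attribute might wire the store/show pair up roughly as follows; the struct foo container and its configfs plumbing here are assumptions made for the sketch:

	/* Hypothetical configfs attribute backed by the two helpers above. */
	struct foo {
		struct config_item item;
		struct pci_dev *p2p_dev;
		bool use_p2pdma;
	};

	static inline struct foo *to_foo(struct config_item *item)
	{
		return container_of(item, struct foo, item);
	}

	static ssize_t foo_p2pmem_store(struct config_item *item,
					const char *page, size_t count)
	{
		struct foo *foo = to_foo(item);
		struct pci_dev *p2p_dev = NULL;
		bool use_p2pdma;
		int ret;

		ret = pci_p2pdma_enable_store(page, &p2p_dev, &use_p2pdma);
		if (ret)
			return ret;

		pci_dev_put(foo->p2p_dev);	/* drop any previous selection */
		foo->p2p_dev = p2p_dev;
		foo->use_p2pdma = use_p2pdma;
		return count;
	}

	static ssize_t foo_p2pmem_show(struct config_item *item, char *page)
	{
		struct foo *foo = to_foo(item);

		return pci_p2pdma_enable_show(page, foo->p2p_dev, foo->use_p2pdma);
	}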
+
+/**
+ * pci_p2pdma_enable_show - show a configfs/sysfs attribute indicating
+ * whether p2pdma is enabled
+ * @page: contents of the stored value
+ * @p2p_dev: the selected p2p device (NULL if no device is selected)
+ * @use_p2pdma: whether p2pdma has been enabled
+ *
+ * Attributes that use pci_p2pdma_enable_store() should use this function
+ * to show the value of the attribute.
+ *
+ * Returns the number of bytes printed into @page.
+ */
+ssize_t pci_p2pdma_enable_show(char *page, struct pci_dev *p2p_dev,
+ bool use_p2pdma)
+{
+ if (!use_p2pdma)
+ return sprintf(page, "0\n");
+
+ if (!p2p_dev)
+ return sprintf(page, "1\n");
+
+ return sprintf(page, "%s\n", pci_name(p2p_dev));
+}
+EXPORT_SYMBOL_GPL(pci_p2pdma_enable_show);
return PCI_POWER_ERROR;
}
+static struct acpi_device *acpi_pci_find_companion(struct device *dev);
+
+static bool acpi_pci_bridge_d3(struct pci_dev *dev)
+{
+ const struct fwnode_handle *fwnode;
+ struct acpi_device *adev;
+ struct pci_dev *root;
+ u8 val;
+
+ if (!dev->is_hotplug_bridge)
+ return false;
+
+ /*
+ * Look for a special _DSD property for the root port and if it
+ * is set we know the hierarchy behind it supports D3 just fine.
+ */
+ root = pci_find_pcie_root_port(dev);
+ if (!root)
+ return false;
+
+ adev = ACPI_COMPANION(&root->dev);
+ if (root == dev) {
+ /*
+ * It is possible that the ACPI companion is not yet bound
+ * for the root port so look it up manually here.
+ */
+ if (!adev && !pci_dev_is_added(root))
+ adev = acpi_pci_find_companion(&root->dev);
+ }
+
+ if (!adev)
+ return false;
+
+ fwnode = acpi_fwnode_handle(adev);
+ if (fwnode_property_read_u8(fwnode, "HotPlugSupportInD3", &val))
+ return false;
+
+ return val == 1;
+}
+
static bool acpi_pci_power_manageable(struct pci_dev *dev)
{
struct acpi_device *adev = ACPI_COMPANION(&dev->dev);
error = -EBUSY;
break;
}
+ /* Fall through */
case PCI_D0:
case PCI_D1:
case PCI_D2:
}
static const struct pci_platform_pm_ops acpi_pci_platform_pm = {
+ .bridge_d3 = acpi_pci_bridge_d3,
.is_manageable = acpi_pci_power_manageable,
.set_state = acpi_pci_set_power_state,
.get_state = acpi_pci_get_power_state,
{
struct pci_dev *pci_dev = to_pci_dev(dev);
struct acpi_device *adev = ACPI_COMPANION(dev);
+ int node;
if (!adev)
return;
+ node = acpi_get_node(adev->handle);
+ if (node != NUMA_NO_NODE)
+ set_dev_node(dev, node);
+
pci_acpi_optimize_delay(pci_dev, adev->handle);
pci_acpi_add_pm_notifier(adev, pci_dev);
return;
device_set_wakeup_capable(dev, true);
+ /*
+ * For bridges that can do D3 we enable wake automatically (as
+ * we do for the power management itself in that case). The
+ * reason is that the bridge may have additional methods such as
+ * _DSW that need to be called.
+ */
+ if (pci_dev->bridge_d3)
+ device_wakeup_enable(dev);
+
acpi_pci_wakeup(pci_dev, false);
}
static void pci_acpi_cleanup(struct device *dev)
{
struct acpi_device *adev = ACPI_COMPANION(dev);
+ struct pci_dev *pci_dev = to_pci_dev(dev);
if (!adev)
return;
pci_acpi_remove_pm_notifier(adev);
- if (adev->wakeup.flags.valid)
+ if (adev->wakeup.flags.valid) {
+ if (pci_dev->bridge_d3)
+ device_wakeup_disable(dev);
+
device_set_wakeup_capable(dev, false);
+ }
}
static bool pci_acpi_bus_match(struct device *dev)
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2018 Marvell
+ *
+ * Author: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
+ *
+ * This file helps PCI controller drivers implement a fake root port
+ * PCI bridge when the HW doesn't provide such a root port PCI
+ * bridge.
+ *
+ * It emulates a PCI bridge by providing a fake PCI configuration
+ * space (and optionally a PCIe capability configuration space) in
+ * memory. By default the read/write operations simply read and update
+ * this fake configuration space in memory. However, PCI controller
+ * drivers can provide through the 'struct pci_bridge_emul_ops'
+ * structure a set of operations to override or complement this
+ * default behavior.
+ */
+
+#include <linux/pci.h>
+#include "pci-bridge-emul.h"
+
+#define PCI_BRIDGE_CONF_END PCI_STD_HEADER_SIZEOF
+#define PCI_CAP_PCIE_START PCI_BRIDGE_CONF_END
+#define PCI_CAP_PCIE_END (PCI_CAP_PCIE_START + PCI_EXP_SLTSTA2 + 2)
+
+/*
+ * Initialize a pci_bridge_emul structure to represent a fake PCI
+ * bridge configuration space. The caller needs to have initialized
+ * the PCI configuration space with whatever values make sense
+ * (typically at least vendor, device, revision), the ->ops pointer,
+ * and optionally ->data and ->has_pcie.
+ */
+void pci_bridge_emul_init(struct pci_bridge_emul *bridge)
+{
+ bridge->conf.class_revision |= PCI_CLASS_BRIDGE_PCI << 16;
+ bridge->conf.header_type = PCI_HEADER_TYPE_BRIDGE;
+ bridge->conf.cache_line_size = 0x10;
+ bridge->conf.status = PCI_STATUS_CAP_LIST;
+
+ if (bridge->has_pcie) {
+ bridge->conf.capabilities_pointer = PCI_CAP_PCIE_START;
+ bridge->pcie_conf.cap_id = PCI_CAP_ID_EXP;
+ /* Set PCIe v2, root port, slot support */
+ bridge->pcie_conf.cap = PCI_EXP_TYPE_ROOT_PORT << 4 | 2 |
+ PCI_EXP_FLAGS_SLOT;
+ }
+}
+
+struct pci_bridge_reg_behavior {
+ /* Read-only bits */
+ u32 ro;
+
+ /* Read-write bits */
+ u32 rw;
+
+ /* Write-1-to-clear bits */
+ u32 w1c;
+
+ /* Reserved bits (hardwired to 0) */
+ u32 rsvd;
+};
+
+static const struct pci_bridge_reg_behavior pci_regs_behavior[] = {
+ [PCI_VENDOR_ID / 4] = { .ro = ~0 },
+ [PCI_COMMAND / 4] = {
+ .rw = (PCI_COMMAND_IO | PCI_COMMAND_MEMORY |
+ PCI_COMMAND_MASTER | PCI_COMMAND_PARITY |
+ PCI_COMMAND_SERR),
+ .ro = ((PCI_COMMAND_SPECIAL | PCI_COMMAND_INVALIDATE |
+ PCI_COMMAND_VGA_PALETTE | PCI_COMMAND_WAIT |
+ PCI_COMMAND_FAST_BACK) |
+ (PCI_STATUS_CAP_LIST | PCI_STATUS_66MHZ |
+ PCI_STATUS_FAST_BACK | PCI_STATUS_DEVSEL_MASK) << 16),
+ .rsvd = GENMASK(15, 10) | ((BIT(6) | GENMASK(3, 0)) << 16),
+ .w1c = (PCI_STATUS_PARITY |
+ PCI_STATUS_SIG_TARGET_ABORT |
+ PCI_STATUS_REC_TARGET_ABORT |
+ PCI_STATUS_REC_MASTER_ABORT |
+ PCI_STATUS_SIG_SYSTEM_ERROR |
+ PCI_STATUS_DETECTED_PARITY) << 16,
+ },
+ [PCI_CLASS_REVISION / 4] = { .ro = ~0 },
+
+ /*
+ * Cache Line Size register: implemented as read-only, since we
+ * do not pretend to implement "Memory Write and Invalidate"
+ * transactions.
+ *
+ * Latency Timer Register: implemented as read-only, as "A
+ * bridge that is not capable of a burst transfer of more than
+ * two data phases on its primary interface is permitted to
+ * hardwire the Latency Timer to a value of 16 or less"
+ *
+ * Header Type: always read-only
+ *
+ * BIST register: implemented as read-only, as "A bridge that
+ * does not support BIST must implement this register as a
+ * read-only register that returns 0 when read"
+ */
+ [PCI_CACHE_LINE_SIZE / 4] = { .ro = ~0 },
+
+ /*
+ * Base Address registers not used must be implemented as
+ * read-only registers that return 0 when read.
+ */
+ [PCI_BASE_ADDRESS_0 / 4] = { .ro = ~0 },
+ [PCI_BASE_ADDRESS_1 / 4] = { .ro = ~0 },
+
+ [PCI_PRIMARY_BUS / 4] = {
+ /* Primary, secondary and subordinate bus are RW */
+ .rw = GENMASK(23, 0),
+ /* Secondary latency is read-only */
+ .ro = GENMASK(31, 24),
+ },
+
+ [PCI_IO_BASE / 4] = {
+ /* The high four bits of I/O base/limit are RW */
+ .rw = (GENMASK(15, 12) | GENMASK(7, 4)),
+
+ /* The low four bits of I/O base/limit are RO */
+ .ro = (((PCI_STATUS_66MHZ | PCI_STATUS_FAST_BACK |
+ PCI_STATUS_DEVSEL_MASK) << 16) |
+ GENMASK(11, 8) | GENMASK(3, 0)),
+
+ .w1c = (PCI_STATUS_PARITY |
+ PCI_STATUS_SIG_TARGET_ABORT |
+ PCI_STATUS_REC_TARGET_ABORT |
+ PCI_STATUS_REC_MASTER_ABORT |
+ PCI_STATUS_SIG_SYSTEM_ERROR |
+ PCI_STATUS_DETECTED_PARITY) << 16,
+
+ .rsvd = ((BIT(6) | GENMASK(4, 0)) << 16),
+ },
+
+ [PCI_MEMORY_BASE / 4] = {
+ /* The high 12-bits of mem base/limit are RW */
+ .rw = GENMASK(31, 20) | GENMASK(15, 4),
+
+ /* The low four bits of mem base/limit are RO */
+ .ro = GENMASK(19, 16) | GENMASK(3, 0),
+ },
+
+ [PCI_PREF_MEMORY_BASE / 4] = {
+ /* The high 12-bits of pref mem base/limit are RW */
+ .rw = GENMASK(31, 20) | GENMASK(15, 4),
+
+ /* The low four bits of pref mem base/limit are RO */
+ .ro = GENMASK(19, 16) | GENMASK(3, 0),
+ },
+
+ [PCI_PREF_BASE_UPPER32 / 4] = {
+ .rw = ~0,
+ },
+
+ [PCI_PREF_LIMIT_UPPER32 / 4] = {
+ .rw = ~0,
+ },
+
+ [PCI_IO_BASE_UPPER16 / 4] = {
+ .rw = ~0,
+ },
+
+ [PCI_CAPABILITY_LIST / 4] = {
+ .ro = GENMASK(7, 0),
+ .rsvd = GENMASK(31, 8),
+ },
+
+ [PCI_ROM_ADDRESS1 / 4] = {
+ .rw = GENMASK(31, 11) | BIT(0),
+ .rsvd = GENMASK(10, 1),
+ },
+
+ /*
+ * Interrupt line (bits 7:0) are RW, interrupt pin (bits 15:8)
+ * are RO, and bridge control (31:16) are a mix of RW, RO,
+ * reserved and W1C bits
+ */
+ [PCI_INTERRUPT_LINE / 4] = {
+ /* Interrupt line is RW */
+ .rw = (GENMASK(7, 0) |
+ ((PCI_BRIDGE_CTL_PARITY |
+ PCI_BRIDGE_CTL_SERR |
+ PCI_BRIDGE_CTL_ISA |
+ PCI_BRIDGE_CTL_VGA |
+ PCI_BRIDGE_CTL_MASTER_ABORT |
+ PCI_BRIDGE_CTL_BUS_RESET |
+ BIT(8) | BIT(9) | BIT(11)) << 16)),
+
+ /* Interrupt pin is RO */
+ .ro = (GENMASK(15, 8) | ((PCI_BRIDGE_CTL_FAST_BACK) << 16)),
+
+ .w1c = BIT(10) << 16,
+
+ .rsvd = (GENMASK(15, 12) | BIT(4)) << 16,
+ },
+};
+
+static const struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = {
+ [PCI_CAP_LIST_ID / 4] = {
+ /*
+ * Capability ID, Next Capability Pointer and
+ * Capabilities register are all read-only.
+ */
+ .ro = ~0,
+ },
+
+ [PCI_EXP_DEVCAP / 4] = {
+ .ro = ~0,
+ },
+
+ [PCI_EXP_DEVCTL / 4] = {
+ /* Device control register is RW */
+ .rw = GENMASK(15, 0),
+
+ /*
+ * Device status register has 4 bits W1C, then 2 bits
+ * RO, the rest is reserved
+ */
+ .w1c = GENMASK(19, 16),
+ .ro = GENMASK(21, 20),
+ .rsvd = GENMASK(31, 22),
+ },
+
+ [PCI_EXP_LNKCAP / 4] = {
+ /* All bits are RO, except bit 23 which is reserved */
+ .ro = lower_32_bits(~BIT(23)),
+ .rsvd = BIT(23),
+ },
+
+ [PCI_EXP_LNKCTL / 4] = {
+ /*
+ * Link control has bits [1:0] and [11:3] RW, the
+ * other bits are reserved.
+ * Link status has bits [13:0] RO, and bits [14:15]
+ * W1C.
+ */
+ .rw = GENMASK(11, 3) | GENMASK(1, 0),
+ .ro = GENMASK(13, 0) << 16,
+ .w1c = GENMASK(15, 14) << 16,
+ .rsvd = GENMASK(15, 12) | BIT(2),
+ },
+
+ [PCI_EXP_SLTCAP / 4] = {
+ .ro = ~0,
+ },
+
+ [PCI_EXP_SLTCTL / 4] = {
+ /*
+ * Slot control has bits [12:0] RW, the rest is
+ * reserved.
+ *
+ * Slot status has a mix of W1C and RO bits, as well
+ * as reserved bits.
+ */
+ .rw = GENMASK(12, 0),
+ .w1c = (PCI_EXP_SLTSTA_ABP | PCI_EXP_SLTSTA_PFD |
+ PCI_EXP_SLTSTA_MRLSC | PCI_EXP_SLTSTA_PDC |
+ PCI_EXP_SLTSTA_CC | PCI_EXP_SLTSTA_DLLSC) << 16,
+ .ro = (PCI_EXP_SLTSTA_MRLSS | PCI_EXP_SLTSTA_PDS |
+ PCI_EXP_SLTSTA_EIS) << 16,
+ .rsvd = GENMASK(15, 13) | (GENMASK(15, 9) << 16),
+ },
+
+ [PCI_EXP_RTCTL / 4] = {
+ /*
+ * Root control has bits [4:0] RW, the rest is
+ * reserved.
+ *
+ * Root capabilities has bit 0 (CRS Software
+ * Visibility) RO, the rest is reserved.
+ */
+ .rw = (PCI_EXP_RTCTL_SECEE | PCI_EXP_RTCTL_SENFEE |
+ PCI_EXP_RTCTL_SEFEE | PCI_EXP_RTCTL_PMEIE |
+ PCI_EXP_RTCTL_CRSSVE),
+ .ro = PCI_EXP_RTCAP_CRSVIS << 16,
+ .rsvd = GENMASK(15, 5) | (GENMASK(15, 1) << 16),
+ },
+
+ [PCI_EXP_RTSTA / 4] = {
+ .ro = GENMASK(15, 0) | PCI_EXP_RTSTA_PENDING,
+ .w1c = PCI_EXP_RTSTA_PME,
+ .rsvd = GENMASK(31, 18),
+ },
+};
+
+/*
+ * Should be called by the PCI controller driver when reading the PCI
+ * configuration space of the fake bridge. It will call back the
+ * ->ops->read_base or ->ops->read_pcie operations.
+ */
+int pci_bridge_emul_conf_read(struct pci_bridge_emul *bridge, int where,
+ int size, u32 *value)
+{
+ int ret;
+ int reg = where & ~3;
+ pci_bridge_emul_read_status_t (*read_op)(struct pci_bridge_emul *bridge,
+ int reg, u32 *value);
+ u32 *cfgspace;
+ const struct pci_bridge_reg_behavior *behavior;
+
+ if (bridge->has_pcie && reg >= PCI_CAP_PCIE_END) {
+ *value = 0;
+ return PCIBIOS_SUCCESSFUL;
+ }
+
+ if (!bridge->has_pcie && reg >= PCI_BRIDGE_CONF_END) {
+ *value = 0;
+ return PCIBIOS_SUCCESSFUL;
+ }
+
+ if (bridge->has_pcie && reg >= PCI_CAP_PCIE_START) {
+ reg -= PCI_CAP_PCIE_START;
+ read_op = bridge->ops->read_pcie;
+ cfgspace = (u32 *) &bridge->pcie_conf;
+ behavior = pcie_cap_regs_behavior;
+ } else {
+ read_op = bridge->ops->read_base;
+ cfgspace = (u32 *) &bridge->conf;
+ behavior = pci_regs_behavior;
+ }
+
+ if (read_op)
+ ret = read_op(bridge, reg, value);
+ else
+ ret = PCI_BRIDGE_EMUL_NOT_HANDLED;
+
+ if (ret == PCI_BRIDGE_EMUL_NOT_HANDLED)
+ *value = cfgspace[reg / 4];
+
+ /*
+ * Make sure we never return any reserved bit with a value
+ * different from 0.
+ */
+ *value &= ~behavior[reg / 4].rsvd;
+
+ if (size == 1)
+ *value = (*value >> (8 * (where & 3))) & 0xff;
+ else if (size == 2)
+ *value = (*value >> (8 * (where & 3))) & 0xffff;
+ else if (size != 4)
+ return PCIBIOS_BAD_REGISTER_NUMBER;
+
+ return PCIBIOS_SUCCESSFUL;
+}
+
+/*
+ * Should be called by the PCI controller driver when writing the PCI
+ * configuration space of the fake bridge. It will call back the
+ * ->ops->write_base or ->ops->write_pcie operations.
+ */
+int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where,
+ int size, u32 value)
+{
+ int reg = where & ~3;
+ int ret, shift;
+ u32 mask, old, new;
+ void (*write_op)(struct pci_bridge_emul *bridge, int reg,
+ u32 old, u32 new, u32 mask);
+ u32 *cfgspace;
+ const struct pci_bridge_reg_behavior *behavior;
+
+ if (bridge->has_pcie && reg >= PCI_CAP_PCIE_END)
+ return PCIBIOS_SUCCESSFUL;
+
+ if (!bridge->has_pcie && reg >= PCI_BRIDGE_CONF_END)
+ return PCIBIOS_SUCCESSFUL;
+
+ shift = (where & 0x3) * 8;
+
+ if (size == 4)
+ mask = 0xffffffff;
+ else if (size == 2)
+ mask = 0xffff << shift;
+ else if (size == 1)
+ mask = 0xff << shift;
+ else
+ return PCIBIOS_BAD_REGISTER_NUMBER;
+
+ ret = pci_bridge_emul_conf_read(bridge, reg, 4, &old);
+ if (ret != PCIBIOS_SUCCESSFUL)
+ return ret;
+
+ if (bridge->has_pcie && reg >= PCI_CAP_PCIE_START) {
+ reg -= PCI_CAP_PCIE_START;
+ write_op = bridge->ops->write_pcie;
+ cfgspace = (u32 *) &bridge->pcie_conf;
+ behavior = pcie_cap_regs_behavior;
+ } else {
+ write_op = bridge->ops->write_base;
+ cfgspace = (u32 *) &bridge->conf;
+ behavior = pci_regs_behavior;
+ }
+
+ /* Keep all bits, except the RW bits */
+ new = old & (~mask | ~behavior[reg / 4].rw);
+
+ /* Update the value of the RW bits */
+ new |= (value << shift) & (behavior[reg / 4].rw & mask);
+
+ /* Clear the W1C bits */
+ new &= ~((value << shift) & (behavior[reg / 4].w1c & mask));
+
+ cfgspace[reg / 4] = new;
+
+ if (write_op)
+ write_op(bridge, reg, old, new, mask);
+
+ return PCIBIOS_SUCCESSFUL;
+}
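To see how the pieces fit, here is a hedged sketch of a controller driver embedding the emulated bridge and routing bus 0 config accesses through it. The foo_pcie driver, its foo_pcie_hw_rd_conf() helper, and the register values are all assumptions, and foo_pcie_bridge_ops is sketched after the header below:

	/* Hypothetical controller glue around the emulated root port bridge. */
	struct foo_pcie {
		void __iomem *base;
		struct pci_bridge_emul bridge;
	};

	static int foo_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
				    int where, int size, u32 *val)
	{
		struct foo_pcie *pcie = bus->sysdata;

		/* Bus 0 holds only the emulated root port bridge */
		if (bus->number == 0)
			return pci_bridge_emul_conf_read(&pcie->bridge, where,
							 size, val);

		/* Everything else goes to the real hardware (assumed helper) */
		return foo_pcie_hw_rd_conf(pcie, bus, devfn, where, size, val);
	}

	static void foo_pcie_setup_bridge(struct foo_pcie *pcie)
	{
		pcie->bridge.conf.vendor = 0x1234;	/* assumed IDs */
		pcie->bridge.conf.device = 0xabcd;
		pcie->bridge.has_pcie = true;
		pcie->bridge.data = pcie;
		pcie->bridge.ops = &foo_pcie_bridge_ops;

		pci_bridge_emul_init(&pcie->bridge);
	}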
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __PCI_BRIDGE_EMUL_H__
+#define __PCI_BRIDGE_EMUL_H__
+
+#include <linux/kernel.h>
+
+/* PCI configuration space of a PCI-to-PCI bridge. */
+struct pci_bridge_emul_conf {
+ u16 vendor;
+ u16 device;
+ u16 command;
+ u16 status;
+ u32 class_revision;
+ u8 cache_line_size;
+ u8 latency_timer;
+ u8 header_type;
+ u8 bist;
+ u32 bar[2];
+ u8 primary_bus;
+ u8 secondary_bus;
+ u8 subordinate_bus;
+ u8 secondary_latency_timer;
+ u8 iobase;
+ u8 iolimit;
+ u16 secondary_status;
+ u16 membase;
+ u16 memlimit;
+ u16 pref_mem_base;
+ u16 pref_mem_limit;
+ u32 prefbaseupper;
+ u32 preflimitupper;
+ u16 iobaseupper;
+ u16 iolimitupper;
+ u8 capabilities_pointer;
+ u8 reserve[3];
+ u32 romaddr;
+ u8 intline;
+ u8 intpin;
+ u16 bridgectrl;
+};
+
+/* PCI configuration space of the PCIe capabilities */
+struct pci_bridge_emul_pcie_conf {
+ u8 cap_id;
+ u8 next;
+ u16 cap;
+ u32 devcap;
+ u16 devctl;
+ u16 devsta;
+ u32 lnkcap;
+ u16 lnkctl;
+ u16 lnksta;
+ u32 slotcap;
+ u16 slotctl;
+ u16 slotsta;
+ u16 rootctl;
+ u16 rsvd;
+ u32 rootsta;
+ u32 devcap2;
+ u16 devctl2;
+ u16 devsta2;
+ u32 lnkcap2;
+ u16 lnkctl2;
+ u16 lnksta2;
+ u32 slotcap2;
+ u16 slotctl2;
+ u16 slotsta2;
+};
+
+struct pci_bridge_emul;
+
+typedef enum { PCI_BRIDGE_EMUL_HANDLED,
+ PCI_BRIDGE_EMUL_NOT_HANDLED } pci_bridge_emul_read_status_t;
+
+struct pci_bridge_emul_ops {
+ /*
+ * Called when reading from the regular PCI bridge
+ * configuration space. Return PCI_BRIDGE_EMUL_HANDLED when the
+ * operation has handled the read operation and filled in the
+ * *value, or PCI_BRIDGE_EMUL_NOT_HANDLED when the read should
+ * be emulated by the common code by reading from the
+ * in-memory copy of the configuration space.
+ */
+ pci_bridge_emul_read_status_t (*read_base)(struct pci_bridge_emul *bridge,
+ int reg, u32 *value);
+
+ /*
+ * Same as ->read_base(), except it is for reading from the
+ * PCIe capability configuration space.
+ */
+ pci_bridge_emul_read_status_t (*read_pcie)(struct pci_bridge_emul *bridge,
+ int reg, u32 *value);
+ /*
+ * Called when writing to the regular PCI bridge configuration
+ * space. old is the current value, new is the new value being
+ * written, and mask indicates which parts of the value are
+ * being changed.
+ */
+ void (*write_base)(struct pci_bridge_emul *bridge, int reg,
+ u32 old, u32 new, u32 mask);
+
+ /*
+ * Same as ->write_base(), except it is for writing to the
+ * PCIe capability configuration space.
+ */
+ void (*write_pcie)(struct pci_bridge_emul *bridge, int reg,
+ u32 old, u32 new, u32 mask);
+};
+
+struct pci_bridge_emul {
+ struct pci_bridge_emul_conf conf;
+ struct pci_bridge_emul_pcie_conf pcie_conf;
+ struct pci_bridge_emul_ops *ops;
+ void *data;
+ bool has_pcie;
+};
+
+void pci_bridge_emul_init(struct pci_bridge_emul *bridge);
+int pci_bridge_emul_conf_read(struct pci_bridge_emul *bridge, int where,
+ int size, u32 *value);
+int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where,
+ int size, u32 value);
+
+#endif /* __PCI_BRIDGE_EMUL_H__ */
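An ops implementation only needs to cover what real hardware backs; everything else falls through to the in-memory copy. Below is a hypothetical read_pcie that serves the Link Control/Status dword from a controller register; the FOO_PCIE_LNKCTL_OFF offset and the surrounding driver are assumptions:

	/* Hypothetical: back PCI_EXP_LNKCTL/LNKSTA with a hardware register. */
	static pci_bridge_emul_read_status_t
	foo_pcie_bridge_read_pcie(struct pci_bridge_emul *bridge,
				  int reg, u32 *value)
	{
		struct foo_pcie *pcie = bridge->data;

		/* reg is already relative to the start of the capability */
		if (reg == PCI_EXP_LNKCTL) {
			*value = readl(pcie->base + FOO_PCIE_LNKCTL_OFF);
			return PCI_BRIDGE_EMUL_HANDLED;
		}

		/* Fall back to the emulated configuration space */
		return PCI_BRIDGE_EMUL_NOT_HANDLED;
	}

	static struct pci_bridge_emul_ops foo_pcie_bridge_ops = {
		.read_pcie = foo_pcie_bridge_read_pcie,
	};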
#include <linux/aer.h>
#include "pci.h"
+DEFINE_MUTEX(pci_slot_mutex);
+
const char *pci_power_names[] = {
"error", "D0", "D1", "D2", "D3hot", "D3cold", "unknown",
};
/**
* pci_dev_str_match_path - test if a path string matches a device
* @dev: the PCI device to test
- * @p: string to match the device against
+ * @path: string to match the device against
* @endptr: pointer to the string after the match
*
* Test if a string (typically from a kernel parameter) formatted as a
return pci_platform_pm ? pci_platform_pm->need_resume(dev) : false;
}
+static inline bool platform_pci_bridge_d3(struct pci_dev *dev)
+{
+ return pci_platform_pm ? pci_platform_pm->bridge_d3(dev) : false;
+}
+
/**
* pci_raw_set_power_state - Use PCI PM registers to set the power state of
* given PCI device
* because have already delayed for the bridge.
*/
if (dev->runtime_d3cold) {
- if (dev->d3cold_delay)
+ if (dev->d3cold_delay && !dev->imm_ready)
msleep(dev->d3cold_delay);
/*
* When powering on a bridge from D3cold, the
if (i != 0)
return i;
+ pci_save_dpc_state(dev);
return pci_save_vc_state(dev);
}
EXPORT_SYMBOL(pci_save_state);
pci_restore_ats_state(dev);
pci_restore_vc_state(dev);
pci_restore_rebar_state(dev);
+ pci_restore_dpc_state(dev);
pci_cleanup_aer_error_status_regs(dev);
int ret = 0;
/*
- * Bridges can only signal wakeup on behalf of subordinate devices,
- * but that is set up elsewhere, so skip them.
+ * Bridges that are not power-manageable directly only signal
+ * wakeup on behalf of subordinate devices which is set up
+ * elsewhere, so skip them. However, bridges that are
+ * power-manageable may signal wakeup for themselves (for example,
+ * on a hotplug event) and they need to be covered here.
*/
- if (pci_has_subordinate(dev))
+ if (!pci_power_manageable(dev))
return 0;
/* Don't do the same thing twice in a row for one device. */
if (bridge->is_thunderbolt)
return true;
+ /* Platform might know better if the bridge supports D3 */
+ if (platform_pci_bridge_d3(bridge))
+ return true;
+
/*
* Hotplug ports handled natively by the OS were not validated
* by vendors for runtime D3 at least until 2018 because there
void pci_pm_init(struct pci_dev *dev)
{
int pm;
+ u16 status;
u16 pmc;
pm_runtime_forbid(&dev->dev);
/* Disable the PME# generation functionality */
pci_pme_active(dev, false);
}
+
+ pci_read_config_word(dev, PCI_STATUS, &status);
+ if (status & PCI_STATUS_IMM_READY)
+ dev->imm_ready = 1;
}
static unsigned long pci_ea_flags(struct pci_dev *dev, u8 prop)
pcie_capability_set_word(dev, PCI_EXP_DEVCTL, PCI_EXP_DEVCTL_BCR_FLR);
+ if (dev->imm_ready)
+ return 0;
+
/*
* Per PCIe r4.0, sec 6.6.2, a device must complete an FLR within
* 100ms, but may silently discard requests while the FLR is in
pci_write_config_byte(dev, pos + PCI_AF_CTRL, PCI_AF_CTRL_FLR);
+ if (dev->imm_ready)
+ return 0;
+
/*
* Per Advanced Capabilities for Conventional PCI ECN, 13 April 2006,
* updated 27 July 2006; a device must complete an FLR within
bool ret;
u16 lnk_status;
+ /*
+ * Some controllers might not implement link active reporting. In this
+ * case, we wait for 1000 + 100 ms.
+ */
+ if (!pdev->link_active_reporting) {
+ msleep(1100);
+ return true;
+ }
+
+ /*
+ * Per PCIe r4.0 sec 6.6.1, a component must enter LTSSM Detect within
+ * 20ms, after which we should expect the link to become active if the
+ * reset was successful. If so, software must wait a minimum of 100ms
+ * before sending configuration requests to devices downstream of this
+ * port.
+ *
+ * If the link fails to activate, either the device was physically
+ * removed or the link is permanently failed.
+ */
+ if (active)
+ msleep(20);
for (;;) {
pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
ret = !!(lnk_status & PCI_EXP_LNKSTA_DLLLA);
if (ret == active)
- return true;
+ break;
if (timeout <= 0)
break;
msleep(10);
timeout -= 10;
}
-
- pci_info(pdev, "Data Link Layer Link Active not %s in 1000 msec\n",
- active ? "set" : "cleared");
-
- return false;
+ if (active && ret)
+ msleep(100);
+ else if (ret != active)
+ pci_info(pdev, "Data Link Layer Link Active not %s in 1000 msec\n",
+ active ? "set" : "cleared");
+ return ret == active;
}
void pci_reset_secondary_bus(struct pci_dev *dev)
{
int rc = -ENOTTY;
- if (!hotplug || !try_module_get(hotplug->ops->owner))
+ if (!hotplug || !try_module_get(hotplug->owner))
return rc;
if (hotplug->ops->reset_slot)
rc = hotplug->ops->reset_slot(hotplug, probe);
- module_put(hotplug->ops->owner);
+ module_put(hotplug->owner);
return rc;
}
return ret;
}
+/**
+ * pci_bus_error_reset - reset the bridge's subordinate bus
+ * @bridge: The parent device that connects to the bus to reset
+ *
+ * This function will first try to reset the slots on this bus if the method is
+ * available. If slot reset fails or is not available, this will fall back to a
+ * secondary bus reset.
+ */
+int pci_bus_error_reset(struct pci_dev *bridge)
+{
+ struct pci_bus *bus = bridge->subordinate;
+ struct pci_slot *slot;
+
+ if (!bus)
+ return -ENOTTY;
+
+ mutex_lock(&pci_slot_mutex);
+ if (list_empty(&bus->slots))
+ goto bus_reset;
+
+ list_for_each_entry(slot, &bus->slots, list)
+ if (pci_probe_reset_slot(slot))
+ goto bus_reset;
+
+ list_for_each_entry(slot, &bus->slots, list)
+ if (pci_slot_reset(slot, 0))
+ goto bus_reset;
+
+ mutex_unlock(&pci_slot_mutex);
+ return 0;
+bus_reset:
+ mutex_unlock(&pci_slot_mutex);
+ return pci_bus_reset(bridge->subordinate, 0);
+}
+
/**
* pci_probe_reset_bus - probe whether a PCI bus can be reset
* @bus: PCI bus to probe
void pci_add_dma_alias(struct pci_dev *dev, u8 devfn)
{
if (!dev->dma_alias_mask)
- dev->dma_alias_mask = kcalloc(BITS_TO_LONGS(U8_MAX),
- sizeof(long), GFP_KERNEL);
+ dev->dma_alias_mask = bitmap_zalloc(U8_MAX, GFP_KERNEL);
if (!dev->dma_alias_mask) {
pci_warn(dev, "Unable to allocate DMA alias mask\n");
return;
int pci_probe_reset_function(struct pci_dev *dev);
int pci_bridge_secondary_bus_reset(struct pci_dev *dev);
+int pci_bus_error_reset(struct pci_dev *dev);
/**
* struct pci_platform_pm_ops - Firmware PM callbacks
*
+ * @bridge_d3: Does the bridge allow entering into D3
+ *
* @is_manageable: returns 'true' if given device is power manageable by the
* platform firmware
*
* these callbacks are mandatory.
*/
struct pci_platform_pm_ops {
+ bool (*bridge_d3)(struct pci_dev *dev);
bool (*is_manageable)(struct pci_dev *dev);
int (*set_state)(struct pci_dev *dev, pci_power_t state);
pci_power_t (*get_state)(struct pci_dev *dev);
/* Lock for read/write access to pci device and bus lists */
extern struct rw_semaphore pci_bus_sem;
+extern struct mutex pci_slot_mutex;
extern raw_spinlock_t pci_lock;
u16 driver_max_VFs; /* Max num VFs driver supports */
struct pci_dev *dev; /* Lowest numbered PF */
struct pci_dev *self; /* This PF */
+ u32 cfg_size; /* VF config space size */
u32 class; /* VF device */
u8 hdr_type; /* VF header type */
u16 subsystem_vendor; /* VF subsystem vendor */
bool drivers_autoprobe; /* Auto probing of VFs by driver */
};
-/* pci_dev priv_flags */
-#define PCI_DEV_DISCONNECTED 0
-#define PCI_DEV_ADDED 1
+/**
+ * pci_dev_set_io_state - Set the new error state if possible.
+ *
+ * @dev: PCI device on which the new error_state is to be set
+ * @new: the state we want dev to be in
+ *
+ * Must be called with device_lock held.
+ *
+ * Returns true if state has been changed to the requested state.
+ */
+static inline bool pci_dev_set_io_state(struct pci_dev *dev,
+ pci_channel_state_t new)
+{
+ bool changed = false;
+
+ device_lock_assert(&dev->dev);
+ switch (new) {
+ case pci_channel_io_perm_failure:
+ switch (dev->error_state) {
+ case pci_channel_io_frozen:
+ case pci_channel_io_normal:
+ case pci_channel_io_perm_failure:
+ changed = true;
+ break;
+ }
+ break;
+ case pci_channel_io_frozen:
+ switch (dev->error_state) {
+ case pci_channel_io_frozen:
+ case pci_channel_io_normal:
+ changed = true;
+ break;
+ }
+ break;
+ case pci_channel_io_normal:
+ switch (dev->error_state) {
+ case pci_channel_io_frozen:
+ case pci_channel_io_normal:
+ changed = true;
+ break;
+ }
+ break;
+ }
+ if (changed)
+ dev->error_state = new;
+ return changed;
+}
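A minimal sketch of the calling convention, assuming the caller does not
already hold the device lock (pci_dev_set_disconnected() below is the
in-tree example):

	static bool example_mark_frozen(struct pci_dev *dev)
	{
		bool changed;

		device_lock(&dev->dev);
		changed = pci_dev_set_io_state(dev, pci_channel_io_frozen);
		device_unlock(&dev->dev);

		/* false if dev was already permanently failed */
		return changed;
	}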
static inline int pci_dev_set_disconnected(struct pci_dev *dev, void *unused)
{
- set_bit(PCI_DEV_DISCONNECTED, &dev->priv_flags);
+ device_lock(&dev->dev);
+ pci_dev_set_io_state(dev, pci_channel_io_perm_failure);
+ device_unlock(&dev->dev);
+
return 0;
}
static inline bool pci_dev_is_disconnected(const struct pci_dev *dev)
{
- return test_bit(PCI_DEV_DISCONNECTED, &dev->priv_flags);
+ return dev->error_state == pci_channel_io_perm_failure;
}
+/* pci_dev priv_flags */
+#define PCI_DEV_ADDED 0
+
static inline void pci_dev_assign_added(struct pci_dev *dev, bool added)
{
assign_bit(PCI_DEV_ADDED, &dev->priv_flags, added);
void aer_print_error(struct pci_dev *dev, struct aer_err_info *info);
#endif /* CONFIG_PCIEAER */
+#ifdef CONFIG_PCIE_DPC
+void pci_save_dpc_state(struct pci_dev *dev);
+void pci_restore_dpc_state(struct pci_dev *dev);
+#else
+static inline void pci_save_dpc_state(struct pci_dev *dev) {}
+static inline void pci_restore_dpc_state(struct pci_dev *dev) {}
+#endif
+
#ifdef CONFIG_PCI_ATS
void pci_restore_ats_state(struct pci_dev *dev);
#else
#endif
/* PCI error reporting and recovery */
-void pcie_do_fatal_recovery(struct pci_dev *dev, u32 service);
-void pcie_do_nonfatal_recovery(struct pci_dev *dev);
+void pcie_do_recovery(struct pci_dev *dev, enum pci_channel_state state,
+ u32 service);
bool pcie_wait_for_link(struct pci_dev *pdev, bool active);
#ifdef CONFIG_PCIEASPM
config PCIEAER_INJECT
tristate "PCI Express error injection support"
depends on PCIEAER
- default n
help
This enables PCI Express Root Port Advanced Error Reporting
(AER) software error injector.
config PCIEASPM_DEBUG
bool "Debug PCI Express ASPM"
depends on PCIEASPM
- default n
help
This enables PCI Express ASPM debug support. It will add a
per-device interface to control ASPM.
config PCIE_DPC
bool "PCI Express Downstream Port Containment support"
depends on PCIEPORTBUS && PCIEAER
- default n
help
This enables PCI Express Downstream Port Containment (DPC)
driver support. DPC events from Root and Downstream ports
config PCIE_PTM
bool "PCI Express Precision Time Measurement support"
- default n
depends on PCIEPORTBUS
help
This enables PCI Express Precision Time Measurement (PTM)
#include "../pci.h"
#include "portdrv.h"
-#define AER_ERROR_SOURCES_MAX 100
+#define AER_ERROR_SOURCES_MAX 128
#define AER_MAX_TYPEOF_COR_ERRS 16 /* as per PCI_ERR_COR_STATUS */
#define AER_MAX_TYPEOF_UNCOR_ERRS 26 /* as per PCI_ERR_UNCOR_STATUS */
struct aer_rpc {
struct pci_dev *rpd; /* Root Port device */
- struct work_struct dpc_handler;
- struct aer_err_source e_sources[AER_ERROR_SOURCES_MAX];
- struct aer_err_info e_info;
- unsigned short prod_idx; /* Error Producer Index */
- unsigned short cons_idx; /* Error Consumer Index */
- int isr;
- spinlock_t e_lock; /*
- * Lock access to Error Status/ID Regs
- * and error producer/consumer index
- */
- struct mutex rpc_mutex; /*
- * only one thread could do
- * recovery on the same
- * root port hierarchy
- */
+ DECLARE_KFIFO(aer_fifo, struct aer_err_source, AER_ERROR_SOURCES_MAX);
};
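The kfifo replaces the open-coded ring buffer together with its e_lock and
rpc_mutex: with exactly one producer (the hard IRQ handler) and one
consumer (the IRQ thread), a kfifo is safe without locking. Note that
DECLARE_KFIFO() requires a power-of-two element count, hence the bump from
100 to 128. A minimal sketch of the pattern (simplified from the handlers
below):

	DECLARE_KFIFO(fifo, struct aer_err_source, 128);	/* must be 2^n */

	/* hard IRQ context - single producer */
	if (!kfifo_put(&fifo, e_src))
		return IRQ_HANDLED;	/* fifo full, drop the event */

	/* IRQ thread - single consumer */
	while (kfifo_get(&fifo, &e_src))
		aer_isr_one_error(rpc, &e_src);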
/* AER stats for the device */
static int add_error_device(struct aer_err_info *e_info, struct pci_dev *dev)
{
if (e_info->error_dev_num < AER_MAX_MULTI_ERR_DEVICES) {
- e_info->dev[e_info->error_dev_num] = dev;
+ e_info->dev[e_info->error_dev_num] = pci_dev_get(dev);
e_info->error_dev_num++;
return 0;
}
info->status);
pci_aer_clear_device_status(dev);
} else if (info->severity == AER_NONFATAL)
- pcie_do_nonfatal_recovery(dev);
+ pcie_do_recovery(dev, pci_channel_io_normal,
+ PCIE_PORT_SERVICE_AER);
else if (info->severity == AER_FATAL)
- pcie_do_fatal_recovery(dev, PCIE_PORT_SERVICE_AER);
+ pcie_do_recovery(dev, pci_channel_io_frozen,
+ PCIE_PORT_SERVICE_AER);
+ pci_dev_put(dev);
}
#ifdef CONFIG_ACPI_APEI_PCIEAER
}
cper_print_aer(pdev, entry.severity, entry.regs);
if (entry.severity == AER_NONFATAL)
- pcie_do_nonfatal_recovery(pdev);
+ pcie_do_recovery(pdev, pci_channel_io_normal,
+ PCIE_PORT_SERVICE_AER);
else if (entry.severity == AER_FATAL)
- pcie_do_fatal_recovery(pdev, PCIE_PORT_SERVICE_AER);
+ pcie_do_recovery(pdev, pci_channel_io_frozen,
+ PCIE_PORT_SERVICE_AER);
pci_dev_put(pdev);
}
}
void aer_recover_queue(int domain, unsigned int bus, unsigned int devfn,
int severity, struct aer_capability_regs *aer_regs)
{
- unsigned long flags;
struct aer_recover_entry entry = {
.bus = bus,
.devfn = devfn,
.regs = aer_regs,
};
- spin_lock_irqsave(&aer_recover_ring_lock, flags);
- if (kfifo_put(&aer_recover_ring, entry))
+ /* the count passed to kfifo_in_spinlocked() is in elements, not bytes */
+ if (kfifo_in_spinlocked(&aer_recover_ring, &entry, 1,
+ &aer_recover_ring_lock))
schedule_work(&aer_recover_work);
else
pr_err("AER recover: Buffer overflow when recovering AER for %04x:%02x:%02x:%x\n",
domain, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
- spin_unlock_irqrestore(&aer_recover_ring_lock, flags);
}
EXPORT_SYMBOL_GPL(aer_recover_queue);
#endif
&info->mask);
if (!(info->status & ~info->mask))
return 0;
- } else if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE ||
- info->severity == AER_NONFATAL) {
+ } else if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT ||
+ pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM ||
+ info->severity == AER_NONFATAL) {
/* Link is still healthy for IO reads */
pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS,
struct aer_err_source *e_src)
{
struct pci_dev *pdev = rpc->rpd;
- struct aer_err_info *e_info = &rpc->e_info;
+ struct aer_err_info e_info;
pci_rootport_aer_stats_incr(pdev, e_src);
* uncorrectable error being logged. Report correctable error first.
*/
if (e_src->status & PCI_ERR_ROOT_COR_RCV) {
- e_info->id = ERR_COR_ID(e_src->id);
- e_info->severity = AER_CORRECTABLE;
+ e_info.id = ERR_COR_ID(e_src->id);
+ e_info.severity = AER_CORRECTABLE;
if (e_src->status & PCI_ERR_ROOT_MULTI_COR_RCV)
- e_info->multi_error_valid = 1;
+ e_info.multi_error_valid = 1;
else
- e_info->multi_error_valid = 0;
- aer_print_port_info(pdev, e_info);
+ e_info.multi_error_valid = 0;
+ aer_print_port_info(pdev, &e_info);
- if (find_source_device(pdev, e_info))
- aer_process_err_devices(e_info);
+ if (find_source_device(pdev, &e_info))
+ aer_process_err_devices(&e_info);
}
if (e_src->status & PCI_ERR_ROOT_UNCOR_RCV) {
- e_info->id = ERR_UNCOR_ID(e_src->id);
+ e_info.id = ERR_UNCOR_ID(e_src->id);
if (e_src->status & PCI_ERR_ROOT_FATAL_RCV)
- e_info->severity = AER_FATAL;
+ e_info.severity = AER_FATAL;
else
- e_info->severity = AER_NONFATAL;
+ e_info.severity = AER_NONFATAL;
if (e_src->status & PCI_ERR_ROOT_MULTI_UNCOR_RCV)
- e_info->multi_error_valid = 1;
+ e_info.multi_error_valid = 1;
else
- e_info->multi_error_valid = 0;
+ e_info.multi_error_valid = 0;
- aer_print_port_info(pdev, e_info);
+ aer_print_port_info(pdev, &e_info);
- if (find_source_device(pdev, e_info))
- aer_process_err_devices(e_info);
+ if (find_source_device(pdev, &e_info))
+ aer_process_err_devices(&e_info);
}
}
-/**
- * get_e_source - retrieve an error source
- * @rpc: pointer to the root port which holds an error
- * @e_src: pointer to store retrieved error source
- *
- * Return 1 if an error source is retrieved, otherwise 0.
- *
- * Invoked by DPC handler to consume an error.
- */
-static int get_e_source(struct aer_rpc *rpc, struct aer_err_source *e_src)
-{
- unsigned long flags;
-
- /* Lock access to Root error producer/consumer index */
- spin_lock_irqsave(&rpc->e_lock, flags);
- if (rpc->prod_idx == rpc->cons_idx) {
- spin_unlock_irqrestore(&rpc->e_lock, flags);
- return 0;
- }
-
- *e_src = rpc->e_sources[rpc->cons_idx];
- rpc->cons_idx++;
- if (rpc->cons_idx == AER_ERROR_SOURCES_MAX)
- rpc->cons_idx = 0;
- spin_unlock_irqrestore(&rpc->e_lock, flags);
-
- return 1;
-}
-
/**
* aer_isr - consume errors detected by root port
- * @work: definition of this work item
+ * @irq: IRQ assigned to the Root Port
+ * @context: pointer to the Root Port's pcie_device
 *
- * Invoked, as DPC, when root port records new detected error
+ * Invoked in IRQ thread context when the Root Port records a new error
*/
-static void aer_isr(struct work_struct *work)
+static irqreturn_t aer_isr(int irq, void *context)
{
- struct aer_rpc *rpc = container_of(work, struct aer_rpc, dpc_handler);
+ struct pcie_device *dev = (struct pcie_device *)context;
+ struct aer_rpc *rpc = get_service_data(dev);
struct aer_err_source uninitialized_var(e_src);
- mutex_lock(&rpc->rpc_mutex);
- while (get_e_source(rpc, &e_src))
+ if (kfifo_is_empty(&rpc->aer_fifo))
+ return IRQ_NONE;
+
+ while (kfifo_get(&rpc->aer_fifo, &e_src))
aer_isr_one_error(rpc, &e_src);
- mutex_unlock(&rpc->rpc_mutex);
+ return IRQ_HANDLED;
}
/**
*
* Invoked when Root Port detects AER messages.
*/
-irqreturn_t aer_irq(int irq, void *context)
+static irqreturn_t aer_irq(int irq, void *context)
{
- unsigned int status, id;
struct pcie_device *pdev = (struct pcie_device *)context;
struct aer_rpc *rpc = get_service_data(pdev);
- int next_prod_idx;
- unsigned long flags;
- int pos;
-
- pos = pdev->port->aer_cap;
- /*
- * Must lock access to Root Error Status Reg, Root Error ID Reg,
- * and Root error producer/consumer index
- */
- spin_lock_irqsave(&rpc->e_lock, flags);
+ struct pci_dev *rp = rpc->rpd;
+ struct aer_err_source e_src = {};
+ int pos = rp->aer_cap;
- /* Read error status */
- pci_read_config_dword(pdev->port, pos + PCI_ERR_ROOT_STATUS, &status);
- if (!(status & (PCI_ERR_ROOT_UNCOR_RCV|PCI_ERR_ROOT_COR_RCV))) {
- spin_unlock_irqrestore(&rpc->e_lock, flags);
+ pci_read_config_dword(rp, pos + PCI_ERR_ROOT_STATUS, &e_src.status);
+ if (!(e_src.status & (PCI_ERR_ROOT_UNCOR_RCV|PCI_ERR_ROOT_COR_RCV)))
return IRQ_NONE;
- }
- /* Read error source and clear error status */
- pci_read_config_dword(pdev->port, pos + PCI_ERR_ROOT_ERR_SRC, &id);
- pci_write_config_dword(pdev->port, pos + PCI_ERR_ROOT_STATUS, status);
+ pci_read_config_dword(rp, pos + PCI_ERR_ROOT_ERR_SRC, &e_src.id);
+ pci_write_config_dword(rp, pos + PCI_ERR_ROOT_STATUS, e_src.status);
- /* Store error source for later DPC handler */
- next_prod_idx = rpc->prod_idx + 1;
- if (next_prod_idx == AER_ERROR_SOURCES_MAX)
- next_prod_idx = 0;
- if (next_prod_idx == rpc->cons_idx) {
- /*
- * Error Storm Condition - possibly the same error occurred.
- * Drop the error.
- */
- spin_unlock_irqrestore(&rpc->e_lock, flags);
+ if (!kfifo_put(&rpc->aer_fifo, e_src))
return IRQ_HANDLED;
- }
- rpc->e_sources[rpc->prod_idx].status = status;
- rpc->e_sources[rpc->prod_idx].id = id;
- rpc->prod_idx = next_prod_idx;
- spin_unlock_irqrestore(&rpc->e_lock, flags);
-
- /* Invoke DPC handler */
- schedule_work(&rpc->dpc_handler);
- return IRQ_HANDLED;
+ return IRQ_WAKE_THREAD;
}
-EXPORT_SYMBOL_GPL(aer_irq);
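The two halves are tied together by the threaded-IRQ registration in
aer_probe() further below: aer_irq() runs in hard IRQ context, reads and
clears the Root Error Status registers, queues the event, and returns
IRQ_WAKE_THREAD so that aer_isr() drains the kfifo from the IRQ thread:

	status = devm_request_threaded_irq(device, dev->irq,
					   aer_irq,	/* hard IRQ: queue */
					   aer_isr,	/* thread: drain */
					   IRQF_SHARED, "aerdrv", dev);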
static int set_device_error_reporting(struct pci_dev *dev, void *data)
{
pci_write_config_dword(pdev, pos + PCI_ERR_ROOT_STATUS, reg32);
}
-/**
- * aer_alloc_rpc - allocate Root Port data structure
- * @dev: pointer to the pcie_dev data structure
- *
- * Invoked when Root Port's AER service is loaded.
- */
-static struct aer_rpc *aer_alloc_rpc(struct pcie_device *dev)
-{
- struct aer_rpc *rpc;
-
- rpc = kzalloc(sizeof(struct aer_rpc), GFP_KERNEL);
- if (!rpc)
- return NULL;
-
- /* Initialize Root lock access, e_lock, to Root Error Status Reg */
- spin_lock_init(&rpc->e_lock);
-
- rpc->rpd = dev->port;
- INIT_WORK(&rpc->dpc_handler, aer_isr);
- mutex_init(&rpc->rpc_mutex);
-
- /* Use PCIe bus function to store rpc into PCIe device */
- set_service_data(dev, rpc);
-
- return rpc;
-}
-
/**
* aer_remove - clean up resources
* @dev: pointer to the pcie_dev data structure
{
struct aer_rpc *rpc = get_service_data(dev);
- if (rpc) {
- /* If register interrupt service, it must be free. */
- if (rpc->isr)
- free_irq(dev->irq, dev);
-
- flush_work(&rpc->dpc_handler);
- aer_disable_rootport(rpc);
- kfree(rpc);
- set_service_data(dev, NULL);
- }
+ aer_disable_rootport(rpc);
}
/**
{
int status;
struct aer_rpc *rpc;
- struct device *device = &dev->port->dev;
+ struct device *device = &dev->device;
- /* Alloc rpc data structure */
- rpc = aer_alloc_rpc(dev);
+ rpc = devm_kzalloc(device, sizeof(struct aer_rpc), GFP_KERNEL);
if (!rpc) {
dev_printk(KERN_DEBUG, device, "alloc AER rpc failed\n");
- aer_remove(dev);
return -ENOMEM;
}
+ rpc->rpd = dev->port;
+ set_service_data(dev, rpc);
- /* Request IRQ ISR */
- status = request_irq(dev->irq, aer_irq, IRQF_SHARED, "aerdrv", dev);
+ status = devm_request_threaded_irq(device, dev->irq, aer_irq, aer_isr,
+ IRQF_SHARED, "aerdrv", dev);
if (status) {
dev_printk(KERN_DEBUG, device, "request AER IRQ %d failed\n",
dev->irq);
- aer_remove(dev);
return status;
}
- rpc->isr = 1;
-
aer_enable_rootport(rpc);
dev_info(device, "AER enabled with IRQ %d\n", dev->irq);
return 0;
reg32 &= ~ROOT_PORT_INTR_ON_MESG_MASK;
pci_write_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, reg32);
- rc = pci_bridge_secondary_bus_reset(dev);
+ rc = pci_bus_error_reset(dev);
pci_printk(KERN_DEBUG, dev, "Root Port link has been reset\n");
/* Clear Root Error Status */
return rc ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED;
}
-/**
- * aer_error_resume - clean up corresponding error status bits
- * @dev: pointer to Root Port's pci_dev data structure
- *
- * Invoked by Port Bus driver during nonfatal recovery.
- */
-static void aer_error_resume(struct pci_dev *dev)
-{
- pci_aer_clear_device_status(dev);
- pci_cleanup_aer_uncorrect_error_status(dev);
-}
-
static struct pcie_port_service_driver aerdriver = {
.name = "aer",
.port_type = PCI_EXP_TYPE_ROOT_PORT,
.probe = aer_probe,
.remove = aer_remove,
- .error_resume = aer_error_resume,
.reset_link = aer_root_reset,
};
*
* Invoked when AER root service driver is loaded.
*/
-static int __init aer_service_init(void)
+int __init pcie_aer_init(void)
{
if (!pci_aer_available() || aer_acpi_firmware_first())
return -ENXIO;
return pcie_port_service_register(&aerdriver);
}
-device_initcall(aer_service_init);
#include <linux/module.h>
#include <linux/init.h>
+#include <linux/irq.h>
#include <linux/miscdevice.h>
#include <linux/pci.h>
#include <linux/slab.h>
return target;
}
+static int aer_inj_read(struct pci_bus *bus, unsigned int devfn, int where,
+ int size, u32 *val)
+{
+ struct pci_ops *ops, *my_ops;
+ int rv;
+
+ ops = __find_pci_bus_ops(bus);
+ if (!ops)
+ return -1;
+
+ my_ops = bus->ops;
+ bus->ops = ops;
+ rv = ops->read(bus, devfn, where, size, val);
+ bus->ops = my_ops;
+
+ return rv;
+}
+
+static int aer_inj_write(struct pci_bus *bus, unsigned int devfn, int where,
+ int size, u32 val)
+{
+ struct pci_ops *ops, *my_ops;
+ int rv;
+
+ ops = __find_pci_bus_ops(bus);
+ if (!ops)
+ return -1;
+
+ my_ops = bus->ops;
+ bus->ops = ops;
+ rv = ops->write(bus, devfn, where, size, val);
+ bus->ops = my_ops;
+
+ return rv;
+}
+
static int aer_inj_read_config(struct pci_bus *bus, unsigned int devfn,
int where, int size, u32 *val)
{
u32 *sim;
struct aer_error *err;
unsigned long flags;
- struct pci_ops *ops;
- struct pci_ops *my_ops;
int domain;
int rv;
return 0;
}
out:
- ops = __find_pci_bus_ops(bus);
- /*
- * pci_lock must already be held, so we can directly
- * manipulate bus->ops. Many config access functions,
- * including pci_generic_config_read() require the original
- * bus->ops be installed to function, so temporarily put them
- * back.
- */
- my_ops = bus->ops;
- bus->ops = ops;
- rv = ops->read(bus, devfn, where, size, val);
- bus->ops = my_ops;
+ rv = aer_inj_read(bus, devfn, where, size, val);
spin_unlock_irqrestore(&inject_lock, flags);
return rv;
}
struct aer_error *err;
unsigned long flags;
int rw1cs;
- struct pci_ops *ops;
- struct pci_ops *my_ops;
int domain;
int rv;
return 0;
}
out:
- ops = __find_pci_bus_ops(bus);
- /*
- * pci_lock must already be held, so we can directly
- * manipulate bus->ops. Many config access functions,
- * including pci_generic_config_write() require the original
- * bus->ops be installed to function, so temporarily put them
- * back.
- */
- my_ops = bus->ops;
- bus->ops = ops;
- rv = ops->write(bus, devfn, where, size, val);
- bus->ops = my_ops;
+ rv = aer_inj_write(bus, devfn, where, size, val);
spin_unlock_irqrestore(&inject_lock, flags);
return rv;
}
return 0;
}
-static int find_aer_device_iter(struct device *device, void *data)
-{
- struct pcie_device **result = data;
- struct pcie_device *pcie_dev;
-
- if (device->bus == &pcie_port_bus_type) {
- pcie_dev = to_pcie_device(device);
- if (pcie_dev->service & PCIE_PORT_SERVICE_AER) {
- *result = pcie_dev;
- return 1;
- }
- }
- return 0;
-}
-
-static int find_aer_device(struct pci_dev *dev, struct pcie_device **result)
-{
- return device_for_each_child(&dev->dev, result, find_aer_device_iter);
-}
-
static int aer_inject(struct aer_error_inj *einj)
{
struct aer_error *err, *rperr;
struct aer_error *err_alloc = NULL, *rperr_alloc = NULL;
struct pci_dev *dev, *rpdev;
struct pcie_device *edev;
+ struct device *device;
unsigned long flags;
unsigned int devfn = PCI_DEVFN(einj->dev, einj->fn);
int pos_cap_err, rp_pos_cap_err;
if (ret)
goto out_put;
- if (find_aer_device(rpdev, &edev)) {
+ device = pcie_port_find_device(rpdev, PCIE_PORT_SERVICE_AER);
+ if (device) {
+ edev = to_pcie_device(device);
if (!get_service_data(edev)) {
dev_warn(&edev->device,
"aer_inject: AER service is not initialized\n");
dev_info(&edev->device,
"aer_inject: Injecting errors %08x/%08x into device %s\n",
einj->cor_status, einj->uncor_status, pci_name(dev));
- aer_irq(-1, edev);
+ local_irq_disable();
+ generic_handle_irq(edev->irq);
+ local_irq_enable();
} else {
pci_err(rpdev, "aer_inject: AER device not found\n");
ret = -ENODEV;
struct pcie_link_state *link;
int blacklist = !!pcie_aspm_sanity_check(pdev);
- if (!aspm_support_enabled)
+ if (!aspm_support_enabled || aspm_disabled)
return;
if (pdev->link_state)
* All PCIe functions share one slot; removing one function removes the
* whole slot, so just wait until we are the last function left.
*/
- if (!list_is_last(&pdev->bus_list, &parent->subordinate->devices))
+ if (!list_empty(&parent->subordinate->devices))
goto out;
link = parent->link_state;
"Memory Request Completion Timeout", /* Bit Position 18 */
};
+static struct dpc_dev *to_dpc_dev(struct pci_dev *dev)
+{
+ struct device *device;
+
+ device = pcie_port_find_device(dev, PCIE_PORT_SERVICE_DPC);
+ if (!device)
+ return NULL;
+ return get_service_data(to_pcie_device(device));
+}
+
+void pci_save_dpc_state(struct pci_dev *dev)
+{
+ struct dpc_dev *dpc;
+ struct pci_cap_saved_state *save_state;
+ u16 *cap;
+
+ if (!pci_is_pcie(dev))
+ return;
+
+ dpc = to_dpc_dev(dev);
+ if (!dpc)
+ return;
+
+ save_state = pci_find_saved_ext_cap(dev, PCI_EXT_CAP_ID_DPC);
+ if (!save_state)
+ return;
+
+ cap = (u16 *)&save_state->cap.data[0];
+ pci_read_config_word(dev, dpc->cap_pos + PCI_EXP_DPC_CTL, cap);
+}
+
+void pci_restore_dpc_state(struct pci_dev *dev)
+{
+ struct dpc_dev *dpc;
+ struct pci_cap_saved_state *save_state;
+ u16 *cap;
+
+ if (!pci_is_pcie(dev))
+ return;
+
+ dpc = to_dpc_dev(dev);
+ if (!dpc)
+ return;
+
+ save_state = pci_find_saved_ext_cap(dev, PCI_EXT_CAP_ID_DPC);
+ if (!save_state)
+ return;
+
+ cap = (u16 *)&save_state->cap.data[0];
+ pci_write_config_word(dev, dpc->cap_pos + PCI_EXP_DPC_CTL, *cap);
+}
+
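These hooks are intended to be called from the generic pci_save_state()
and pci_restore_state() paths so the DPC control register survives a
reset; the save buffer itself is allocated at probe time via
pci_add_ext_cap_save_buffer() (see the dpc_probe() hunk below). A hedged
sketch of a reset sequence using them:

	pci_save_dpc_state(pdev);	/* snapshot PCI_EXP_DPC_CTL */
	/* ... reset the device ... */
	pci_restore_dpc_state(pdev);	/* rearm DPC from the snapshot */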
static int dpc_wait_rp_inactive(struct dpc_dev *dpc)
{
unsigned long timeout = jiffies + HZ;
static pci_ers_result_t dpc_reset_link(struct pci_dev *pdev)
{
struct dpc_dev *dpc;
- struct pcie_device *pciedev;
- struct device *devdpc;
-
u16 cap;
/*
* DPC disables the Link automatically in hardware, so it has
* already been reset by the time we get here.
*/
- devdpc = pcie_port_find_device(pdev, PCIE_PORT_SERVICE_DPC);
- pciedev = to_pcie_device(devdpc);
- dpc = get_service_data(pciedev);
+ dpc = to_dpc_dev(pdev);
cap = dpc->cap_pos;
/*
pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS,
PCI_EXP_DPC_STATUS_TRIGGER);
+ if (!pcie_wait_for_link(pdev, true))
+ return PCI_ERS_RESULT_DISCONNECT;
+
return PCI_ERS_RESULT_RECOVERED;
}
-
static void dpc_process_rp_pio_error(struct dpc_dev *dpc)
{
struct device *dev = &dpc->dev->device;
reason = (status & PCI_EXP_DPC_STATUS_TRIGGER_RSN) >> 1;
ext_reason = (status & PCI_EXP_DPC_STATUS_TRIGGER_RSN_EXT) >> 5;
- dev_warn(dev, "DPC %s detected, remove downstream devices\n",
+ dev_warn(dev, "DPC %s detected\n",
(reason == 0) ? "unmasked uncorrectable error" :
(reason == 1) ? "ERR_NONFATAL" :
(reason == 2) ? "ERR_FATAL" :
}
/* We configure DPC so it only triggers on ERR_FATAL */
- pcie_do_fatal_recovery(pdev, PCIE_PORT_SERVICE_DPC);
+ pcie_do_recovery(pdev, pci_channel_io_frozen, PCIE_PORT_SERVICE_DPC);
return IRQ_HANDLED;
}
FLAG(cap, PCI_EXP_DPC_CAP_POISONED_TLP),
FLAG(cap, PCI_EXP_DPC_CAP_SW_TRIGGER), dpc->rp_log_size,
FLAG(cap, PCI_EXP_DPC_CAP_DL_ACTIVE));
+
+ pci_add_ext_cap_save_buffer(pdev, PCI_EXT_CAP_ID_DPC, sizeof(u16));
return status;
}
.reset_link = dpc_reset_link,
};
-static int __init dpc_service_init(void)
+int __init pcie_dpc_init(void)
{
return pcie_port_service_register(&dpcdriver);
}
-device_initcall(dpc_service_init);
#include <linux/pci.h>
#include <linux/module.h>
-#include <linux/pci.h>
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/aer.h>
#include "portdrv.h"
#include "../pci.h"
-struct aer_broadcast_data {
- enum pci_channel_state state;
- enum pci_ers_result result;
-};
-
static pci_ers_result_t merge_result(enum pci_ers_result orig,
enum pci_ers_result new)
{
return orig;
}
-static int report_error_detected(struct pci_dev *dev, void *data)
+static int report_error_detected(struct pci_dev *dev,
+ enum pci_channel_state state,
+ enum pci_ers_result *result)
{
pci_ers_result_t vote;
const struct pci_error_handlers *err_handler;
- struct aer_broadcast_data *result_data;
-
- result_data = (struct aer_broadcast_data *) data;
device_lock(&dev->dev);
- dev->error_state = result_data->state;
-
- if (!dev->driver ||
+ if (!pci_dev_set_io_state(dev, state) ||
+ !dev->driver ||
!dev->driver->err_handler ||
!dev->driver->err_handler->error_detected) {
- if (result_data->state == pci_channel_io_frozen &&
- dev->hdr_type != PCI_HEADER_TYPE_BRIDGE) {
- /*
- * In case of fatal recovery, if one of down-
- * stream device has no driver. We might be
- * unable to recover because a later insmod
- * of a driver for this device is unaware of
- * its hw state.
- */
- pci_printk(KERN_DEBUG, dev, "device has %s\n",
- dev->driver ?
- "no AER-aware driver" : "no driver");
- }
-
/*
- * If there's any device in the subtree that does not
- * have an error_detected callback, returning
- * PCI_ERS_RESULT_NO_AER_DRIVER prevents calling of
- * the subsequent mmio_enabled/slot_reset/resume
- * callbacks of "any" device in the subtree. All the
- * devices in the subtree are left in the error state
- * without recovery.
+ * If any device in the subtree does not have an error_detected
+ * callback, PCI_ERS_RESULT_NO_AER_DRIVER prevents subsequent
+ * error callbacks of "any" device in the subtree, and will
+ * exit in the disconnected error state.
*/
-
if (dev->hdr_type != PCI_HEADER_TYPE_BRIDGE)
vote = PCI_ERS_RESULT_NO_AER_DRIVER;
else
vote = PCI_ERS_RESULT_NONE;
} else {
err_handler = dev->driver->err_handler;
- vote = err_handler->error_detected(dev, result_data->state);
- pci_uevent_ers(dev, PCI_ERS_RESULT_NONE);
+ vote = err_handler->error_detected(dev, state);
}
-
- result_data->result = merge_result(result_data->result, vote);
+ pci_uevent_ers(dev, vote);
+ *result = merge_result(*result, vote);
device_unlock(&dev->dev);
return 0;
}
+static int report_frozen_detected(struct pci_dev *dev, void *data)
+{
+ return report_error_detected(dev, pci_channel_io_frozen, data);
+}
+
+static int report_normal_detected(struct pci_dev *dev, void *data)
+{
+ return report_error_detected(dev, pci_channel_io_normal, data);
+}
+
static int report_mmio_enabled(struct pci_dev *dev, void *data)
{
- pci_ers_result_t vote;
+ pci_ers_result_t vote, *result = data;
const struct pci_error_handlers *err_handler;
- struct aer_broadcast_data *result_data;
-
- result_data = (struct aer_broadcast_data *) data;
device_lock(&dev->dev);
if (!dev->driver ||
err_handler = dev->driver->err_handler;
vote = err_handler->mmio_enabled(dev);
- result_data->result = merge_result(result_data->result, vote);
+ *result = merge_result(*result, vote);
out:
device_unlock(&dev->dev);
return 0;
static int report_slot_reset(struct pci_dev *dev, void *data)
{
- pci_ers_result_t vote;
+ pci_ers_result_t vote, *result = data;
const struct pci_error_handlers *err_handler;
- struct aer_broadcast_data *result_data;
-
- result_data = (struct aer_broadcast_data *) data;
device_lock(&dev->dev);
if (!dev->driver ||
err_handler = dev->driver->err_handler;
vote = err_handler->slot_reset(dev);
- result_data->result = merge_result(result_data->result, vote);
+ *result = merge_result(*result, vote);
out:
device_unlock(&dev->dev);
return 0;
const struct pci_error_handlers *err_handler;
device_lock(&dev->dev);
- dev->error_state = pci_channel_io_normal;
-
- if (!dev->driver ||
+ if (!pci_dev_set_io_state(dev, pci_channel_io_normal) ||
+ !dev->driver ||
!dev->driver->err_handler ||
!dev->driver->err_handler->resume)
goto out;
err_handler = dev->driver->err_handler;
err_handler->resume(dev);
- pci_uevent_ers(dev, PCI_ERS_RESULT_RECOVERED);
out:
+ pci_uevent_ers(dev, PCI_ERS_RESULT_RECOVERED);
device_unlock(&dev->dev);
return 0;
}
{
int rc;
- rc = pci_bridge_secondary_bus_reset(dev);
+ rc = pci_bus_error_reset(dev);
pci_printk(KERN_DEBUG, dev, "downstream link has been reset\n");
return rc ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED;
}
static pci_ers_result_t reset_link(struct pci_dev *dev, u32 service)
{
- struct pci_dev *udev;
pci_ers_result_t status;
struct pcie_port_service_driver *driver = NULL;
- if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) {
- /* Reset this port for all subordinates */
- udev = dev;
- } else {
- /* Reset the upstream component (likely downstream port) */
- udev = dev->bus->self;
- }
-
- /* Use the aer driver of the component firstly */
- driver = pcie_port_find_service(udev, service);
-
+ driver = pcie_port_find_service(dev, service);
if (driver && driver->reset_link) {
- status = driver->reset_link(udev);
- } else if (udev->has_secondary_link) {
- status = default_reset_link(udev);
+ status = driver->reset_link(dev);
+ } else if (dev->has_secondary_link) {
+ status = default_reset_link(dev);
} else {
pci_printk(KERN_DEBUG, dev, "no link-reset support at upstream device %s\n",
- pci_name(udev));
+ pci_name(dev));
return PCI_ERS_RESULT_DISCONNECT;
}
if (status != PCI_ERS_RESULT_RECOVERED) {
pci_printk(KERN_DEBUG, dev, "link reset at upstream device %s failed\n",
- pci_name(udev));
+ pci_name(dev));
return PCI_ERS_RESULT_DISCONNECT;
}
return status;
}
-/**
- * broadcast_error_message - handle message broadcast to downstream drivers
- * @dev: pointer to from where in a hierarchy message is broadcasted down
- * @state: error state
- * @error_mesg: message to print
- * @cb: callback to be broadcasted
- *
- * Invoked during error recovery process. Once being invoked, the content
- * of error severity will be broadcasted to all downstream drivers in a
- * hierarchy in question.
- */
-static pci_ers_result_t broadcast_error_message(struct pci_dev *dev,
- enum pci_channel_state state,
- char *error_mesg,
- int (*cb)(struct pci_dev *, void *))
-{
- struct aer_broadcast_data result_data;
-
- pci_printk(KERN_DEBUG, dev, "broadcast %s message\n", error_mesg);
- result_data.state = state;
- if (cb == report_error_detected)
- result_data.result = PCI_ERS_RESULT_CAN_RECOVER;
- else
- result_data.result = PCI_ERS_RESULT_RECOVERED;
-
- if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) {
- /*
- * If the error is reported by a bridge, we think this error
- * is related to the downstream link of the bridge, so we
- * do error recovery on all subordinates of the bridge instead
- * of the bridge and clear the error status of the bridge.
- */
- if (cb == report_error_detected)
- dev->error_state = state;
- pci_walk_bus(dev->subordinate, cb, &result_data);
- if (cb == report_resume) {
- pci_aer_clear_device_status(dev);
- pci_cleanup_aer_uncorrect_error_status(dev);
- dev->error_state = pci_channel_io_normal;
- }
- } else {
- /*
- * If the error is reported by an end point, we think this
- * error is related to the upstream link of the end point.
- * The error is non fatal so the bus is ok; just invoke
- * the callback for the function that logged the error.
- */
- cb(dev, &result_data);
- }
-
- return result_data.result;
-}
-
-/**
- * pcie_do_fatal_recovery - handle fatal error recovery process
- * @dev: pointer to a pci_dev data structure of agent detecting an error
- *
- * Invoked when an error is fatal. Once being invoked, removes the devices
- * beneath this AER agent, followed by reset link e.g. secondary bus reset
- * followed by re-enumeration of devices.
- */
-void pcie_do_fatal_recovery(struct pci_dev *dev, u32 service)
+void pcie_do_recovery(struct pci_dev *dev, enum pci_channel_state state,
+ u32 service)
{
- struct pci_dev *udev;
- struct pci_bus *parent;
- struct pci_dev *pdev, *temp;
- pci_ers_result_t result;
-
- if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE)
- udev = dev;
+ pci_ers_result_t status = PCI_ERS_RESULT_CAN_RECOVER;
+ struct pci_bus *bus;
+
+ /*
+ * Error recovery runs on all subordinates of the first downstream port.
+ * If the downstream port detected the error, its error status is
+ * cleared at the end.
+ */
+ if (!(pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT ||
+ pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM))
+ dev = dev->bus->self;
+ bus = dev->subordinate;
+
+ pci_dbg(dev, "broadcast error_detected message\n");
+ if (state == pci_channel_io_frozen)
+ pci_walk_bus(bus, report_frozen_detected, &status);
else
- udev = dev->bus->self;
-
- parent = udev->subordinate;
- pci_lock_rescan_remove();
- pci_dev_get(dev);
- list_for_each_entry_safe_reverse(pdev, temp, &parent->devices,
- bus_list) {
- pci_dev_get(pdev);
- pci_dev_set_disconnected(pdev, NULL);
- if (pci_has_subordinate(pdev))
- pci_walk_bus(pdev->subordinate,
- pci_dev_set_disconnected, NULL);
- pci_stop_and_remove_bus_device(pdev);
- pci_dev_put(pdev);
- }
-
- result = reset_link(udev, service);
+ pci_walk_bus(bus, report_normal_detected, &status);
- if ((service == PCIE_PORT_SERVICE_AER) &&
- (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE)) {
- /*
- * If the error is reported by a bridge, we think this error
- * is related to the downstream link of the bridge, so we
- * do error recovery on all subordinates of the bridge instead
- * of the bridge and clear the error status of the bridge.
- */
- pci_aer_clear_fatal_status(dev);
- pci_aer_clear_device_status(dev);
- }
+ if (state == pci_channel_io_frozen &&
+ reset_link(dev, service) != PCI_ERS_RESULT_RECOVERED)
+ goto failed;
- if (result == PCI_ERS_RESULT_RECOVERED) {
- if (pcie_wait_for_link(udev, true))
- pci_rescan_bus(udev->bus);
- pci_info(dev, "Device recovery from fatal error successful\n");
- } else {
- pci_uevent_ers(dev, PCI_ERS_RESULT_DISCONNECT);
- pci_info(dev, "Device recovery from fatal error failed\n");
+ if (status == PCI_ERS_RESULT_CAN_RECOVER) {
+ status = PCI_ERS_RESULT_RECOVERED;
+ pci_dbg(dev, "broadcast mmio_enabled message\n");
+ pci_walk_bus(bus, report_mmio_enabled, &status);
}
- pci_dev_put(dev);
- pci_unlock_rescan_remove();
-}
-
-/**
- * pcie_do_nonfatal_recovery - handle nonfatal error recovery process
- * @dev: pointer to a pci_dev data structure of agent detecting an error
- *
- * Invoked when an error is nonfatal/fatal. Once being invoked, broadcast
- * error detected message to all downstream drivers within a hierarchy in
- * question and return the returned code.
- */
-void pcie_do_nonfatal_recovery(struct pci_dev *dev)
-{
- pci_ers_result_t status;
- enum pci_channel_state state;
-
- state = pci_channel_io_normal;
-
- status = broadcast_error_message(dev,
- state,
- "error_detected",
- report_error_detected);
-
- if (status == PCI_ERS_RESULT_CAN_RECOVER)
- status = broadcast_error_message(dev,
- state,
- "mmio_enabled",
- report_mmio_enabled);
-
if (status == PCI_ERS_RESULT_NEED_RESET) {
/*
* TODO: Should call platform-specific
* functions to reset slot before calling
* drivers' slot_reset callbacks?
*/
- status = broadcast_error_message(dev,
- state,
- "slot_reset",
- report_slot_reset);
+ status = PCI_ERS_RESULT_RECOVERED;
+ pci_dbg(dev, "broadcast slot_reset message\n");
+ pci_walk_bus(bus, report_slot_reset, &status);
}
if (status != PCI_ERS_RESULT_RECOVERED)
goto failed;
- broadcast_error_message(dev,
- state,
- "resume",
- report_resume);
+ pci_dbg(dev, "broadcast resume message\n");
+ pci_walk_bus(bus, report_resume, &status);
+ pci_aer_clear_device_status(dev);
+ pci_cleanup_aer_uncorrect_error_status(dev);
pci_info(dev, "AER: Device recovery successful\n");
return;
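From a device driver's point of view, pcie_do_recovery() drives the
driver's pci_error_handlers callbacks in order: error_detected ->
(mmio_enabled or slot_reset) -> resume. A minimal, hypothetical
driver-side sketch:

	static pci_ers_result_t foo_error_detected(struct pci_dev *pdev,
						   enum pci_channel_state state)
	{
		if (state == pci_channel_io_perm_failure)
			return PCI_ERS_RESULT_DISCONNECT;

		return PCI_ERS_RESULT_NEED_RESET;
	}

	static pci_ers_result_t foo_slot_reset(struct pci_dev *pdev)
	{
		pci_restore_state(pdev);
		return PCI_ERS_RESULT_RECOVERED;
	}

	static void foo_resume(struct pci_dev *pdev)
	{
		/* restart I/O */
	}

	static const struct pci_error_handlers foo_err_handler = {
		.error_detected	= foo_error_detected,
		.slot_reset	= foo_slot_reset,
		.resume		= foo_resume,
	};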
kfree(get_service_data(srv));
}
+static int pcie_pme_runtime_suspend(struct pcie_device *srv)
+{
+ struct pcie_pme_service_data *data = get_service_data(srv);
+
+ spin_lock_irq(&data->lock);
+ pcie_pme_interrupt_enable(srv->port, false);
+ pcie_clear_root_pme_status(srv->port);
+ data->noirq = true;
+ spin_unlock_irq(&data->lock);
+
+ return 0;
+}
+
+static int pcie_pme_runtime_resume(struct pcie_device *srv)
+{
+ struct pcie_pme_service_data *data = get_service_data(srv);
+
+ spin_lock_irq(&data->lock);
+ pcie_pme_interrupt_enable(srv->port, true);
+ data->noirq = false;
+ spin_unlock_irq(&data->lock);
+
+ return 0;
+}
+
static struct pcie_port_service_driver pcie_pme_driver = {
.name = "pcie_pme",
.port_type = PCI_EXP_TYPE_ROOT_PORT,
.probe = pcie_pme_probe,
.suspend = pcie_pme_suspend,
+ .runtime_suspend = pcie_pme_runtime_suspend,
+ .runtime_resume = pcie_pme_runtime_resume,
.resume = pcie_pme_resume,
.remove = pcie_pme_remove,
};
/**
* pcie_pme_service_init - Register the PCIe PME service driver.
*/
-static int __init pcie_pme_service_init(void)
+int __init pcie_pme_init(void)
{
return pcie_port_service_register(&pcie_pme_driver);
}
-device_initcall(pcie_pme_service_init);
#define PCIE_PORT_DEVICE_MAXSERVICES 4
+#ifdef CONFIG_PCIEAER
+int pcie_aer_init(void);
+#else
+static inline int pcie_aer_init(void) { return 0; }
+#endif
+
+#ifdef CONFIG_HOTPLUG_PCI_PCIE
+int pcie_hp_init(void);
+#else
+static inline int pcie_hp_init(void) { return 0; }
+#endif
+
+#ifdef CONFIG_PCIE_PME
+int pcie_pme_init(void);
+#else
+static inline int pcie_pme_init(void) { return 0; }
+#endif
+
+#ifdef CONFIG_PCIE_DPC
+int pcie_dpc_init(void);
+#else
+static inline int pcie_dpc_init(void) { return 0; }
+#endif
+
/* Port Type */
#define PCIE_ANY_PORT (~0)
int (*suspend) (struct pcie_device *dev);
int (*resume_noirq) (struct pcie_device *dev);
int (*resume) (struct pcie_device *dev);
+ int (*runtime_suspend) (struct pcie_device *dev);
+ int (*runtime_resume) (struct pcie_device *dev);
/* Device driver may resume normal operations */
void (*error_resume)(struct pci_dev *dev);
int pcie_port_device_suspend(struct device *dev);
int pcie_port_device_resume_noirq(struct device *dev);
int pcie_port_device_resume(struct device *dev);
+int pcie_port_device_runtime_suspend(struct device *dev);
+int pcie_port_device_runtime_resume(struct device *dev);
#endif
void pcie_port_device_remove(struct pci_dev *dev);
int __must_check pcie_port_bus_register(void);
}
#endif
-#ifdef CONFIG_PCIEAER
-irqreturn_t aer_irq(int irq, void *context);
-#endif
-
struct pcie_port_service_driver *pcie_port_find_service(struct pci_dev *dev,
u32 service);
struct device *pcie_port_find_device(struct pci_dev *dev, u32 service);
size_t off = offsetof(struct pcie_port_service_driver, resume);
return device_for_each_child(dev, &off, pm_iter);
}
+
+/**
+ * pcie_port_device_runtime_suspend - runtime suspend port services
+ * @dev: PCI Express port to handle
+ */
+int pcie_port_device_runtime_suspend(struct device *dev)
+{
+ size_t off = offsetof(struct pcie_port_service_driver, runtime_suspend);
+ return device_for_each_child(dev, &off, pm_iter);
+}
+
+/**
+ * pcie_port_device_runtime_resume - runtime resume port services
+ * @dev: PCI Express port to handle
+ */
+int pcie_port_device_runtime_resume(struct device *dev)
+{
+ size_t off = offsetof(struct pcie_port_service_driver, runtime_resume);
+ return device_for_each_child(dev, &off, pm_iter);
+}
#endif /* PM */
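Both helpers rely on the existing pm_iter() dispatcher (not shown in this
hunk), which looks up the service driver's callback at the given structure
offset, roughly:

	static int pm_iter(struct device *dev, void *data)
	{
		struct pcie_port_service_driver *drv;
		size_t offset = *(size_t *)data;
		typedef int (*pcie_pm_callback_t)(struct pcie_device *);
		pcie_pm_callback_t cb;

		if (dev->bus == &pcie_port_bus_type && dev->driver) {
			drv = to_service_driver(dev->driver);
			cb = *(pcie_pm_callback_t *)((void *)drv + offset);
			if (cb)
				return cb(to_pcie_device(dev));
		}
		return 0;
	}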
static int remove_iter(struct device *dev, void *data)
device = pdrvs.dev;
return device;
}
+EXPORT_SYMBOL_GPL(pcie_port_find_device);
/**
* pcie_port_device_remove - unregister PCI Express port service devices
#ifdef CONFIG_PM
static int pcie_port_runtime_suspend(struct device *dev)
{
- return to_pci_dev(dev)->bridge_d3 ? 0 : -EBUSY;
-}
+ if (!to_pci_dev(dev)->bridge_d3)
+ return -EBUSY;
-static int pcie_port_runtime_resume(struct device *dev)
-{
- return 0;
+ return pcie_port_device_runtime_suspend(dev);
}
static int pcie_port_runtime_idle(struct device *dev)
.restore_noirq = pcie_port_device_resume_noirq,
.restore = pcie_port_device_resume,
.runtime_suspend = pcie_port_runtime_suspend,
- .runtime_resume = pcie_port_runtime_resume,
+ .runtime_resume = pcie_port_device_runtime_resume,
.runtime_idle = pcie_port_runtime_idle,
};
pci_save_state(dev);
- dev_pm_set_driver_flags(&dev->dev, DPM_FLAG_SMART_SUSPEND |
- DPM_FLAG_LEAVE_SUSPENDED);
+ dev_pm_set_driver_flags(&dev->dev, DPM_FLAG_NEVER_SKIP |
+ DPM_FLAG_SMART_SUSPEND);
if (pci_bridge_d3_possible(dev)) {
/*
return PCI_ERS_RESULT_CAN_RECOVER;
}
+static pci_ers_result_t pcie_portdrv_slot_reset(struct pci_dev *dev)
+{
+ pci_restore_state(dev);
+ pci_save_state(dev);
+ return PCI_ERS_RESULT_RECOVERED;
+}
+
static pci_ers_result_t pcie_portdrv_mmio_enabled(struct pci_dev *dev)
{
return PCI_ERS_RESULT_RECOVERED;
static const struct pci_error_handlers pcie_portdrv_err_handler = {
.error_detected = pcie_portdrv_error_detected,
+ .slot_reset = pcie_portdrv_slot_reset,
.mmio_enabled = pcie_portdrv_mmio_enabled,
.resume = pcie_portdrv_err_resume,
};
{}
};
+static void __init pcie_init_services(void)
+{
+ pcie_aer_init();
+ pcie_pme_init();
+ pcie_dpc_init();
+ pcie_hp_init();
+}
+
static int __init pcie_portdrv_init(void)
{
if (pcie_ports_disabled)
return -EACCES;
+ pcie_init_services();
dmi_check_system(pcie_portdrv_dmi_table);
return pci_register_driver(&pcie_portdriver);
pcie_capability_read_dword(bridge, PCI_EXP_LNKCAP, &linkcap);
bus->max_bus_speed = pcie_link_speed[linkcap & PCI_EXP_LNKCAP_SLS];
+ bridge->link_active_reporting = !!(linkcap & PCI_EXP_LNKCAP_DLLLARC);
pcie_capability_read_word(bridge, PCI_EXP_LNKSTA, &linksta);
pcie_update_link_speed(bus, linksta);
return PCI_CFG_SPACE_EXP_SIZE;
}
+#ifdef CONFIG_PCI_IOV
+static bool is_vf0(struct pci_dev *dev)
+{
+ if (pci_iov_virtfn_devfn(dev->physfn, 0) == dev->devfn &&
+ pci_iov_virtfn_bus(dev->physfn, 0) == dev->bus->number)
+ return true;
+
+ return false;
+}
+#endif
+
int pci_cfg_space_size(struct pci_dev *dev)
{
int pos;
u32 status;
u16 class;
+#ifdef CONFIG_PCI_IOV
+ /* Read cached value for all VFs except for VF0 */
+ if (dev->is_virtfn && !is_vf0(dev))
+ return dev->physfn->sriov->cfg_size;
+#endif
+
if (dev->bus->bus_flags & PCI_BUS_FLAGS_NO_EXTCFG)
return PCI_CFG_SPACE_SIZE;
pcibios_release_device(pci_dev);
pci_bus_put(pci_dev->bus);
kfree(pci_dev->driver_override);
- kfree(pci_dev->dma_alias_mask);
+ bitmap_free(pci_dev->dma_alias_mask);
kfree(pci_dev);
}
dev->dev.dma_parms = &dev->dma_parms;
dev->dev.coherent_dma_mask = 0xffffffffull;
- pci_set_dma_max_seg_size(dev, 65536);
- pci_set_dma_seg_boundary(dev, 0xffffffff);
+ dma_set_max_seg_size(&dev->dev, 65536);
+ dma_set_seg_boundary(&dev->dev, 0xffffffff);
/* Fix up broken headers */
pci_fixup_device(pci_fixup_header, dev);
pci_iounmap(dev, regs);
}
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0042, disable_igfx_irq);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0046, disable_igfx_irq);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x004a, disable_igfx_irq);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0102, disable_igfx_irq);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0106, disable_igfx_irq);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x010a, disable_igfx_irq);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0152, disable_igfx_irq);
void __iomem *mmio;
struct ntb_info_regs __iomem *mmio_ntb;
struct ntb_ctrl_regs __iomem *mmio_ctrl;
- struct sys_info_regs __iomem *mmio_sys_info;
u64 partition_map;
u8 partition;
int pp;
mmio_ntb = mmio + SWITCHTEC_GAS_NTB_OFFSET;
mmio_ctrl = (void __iomem *) mmio_ntb + SWITCHTEC_NTB_REG_CTRL_OFFSET;
- mmio_sys_info = mmio + SWITCHTEC_GAS_SYS_INFO_OFFSET;
partition = ioread8(&mmio_ntb->partition_id);
pci_iounmap(pdev, mmio);
pci_disable_device(pdev);
}
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8531,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8532,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8533,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8534,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8535,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8536,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8543,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8544,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8545,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8546,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8551,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8552,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8553,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8554,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8555,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8556,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8561,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8562,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8563,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8564,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8565,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8566,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8571,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8572,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8573,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8574,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8575,
- quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8576,
- quirk_switchtec_ntb_dma_alias);
+#define SWITCHTEC_QUIRK(vid) \
+ DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_MICROSEMI, vid, \
+ PCI_CLASS_BRIDGE_OTHER, 8, quirk_switchtec_ntb_dma_alias)
+
+SWITCHTEC_QUIRK(0x8531); /* PFX 24xG3 */
+SWITCHTEC_QUIRK(0x8532); /* PFX 32xG3 */
+SWITCHTEC_QUIRK(0x8533); /* PFX 48xG3 */
+SWITCHTEC_QUIRK(0x8534); /* PFX 64xG3 */
+SWITCHTEC_QUIRK(0x8535); /* PFX 80xG3 */
+SWITCHTEC_QUIRK(0x8536); /* PFX 96xG3 */
+SWITCHTEC_QUIRK(0x8541); /* PSX 24xG3 */
+SWITCHTEC_QUIRK(0x8542); /* PSX 32xG3 */
+SWITCHTEC_QUIRK(0x8543); /* PSX 48xG3 */
+SWITCHTEC_QUIRK(0x8544); /* PSX 64xG3 */
+SWITCHTEC_QUIRK(0x8545); /* PSX 80xG3 */
+SWITCHTEC_QUIRK(0x8546); /* PSX 96xG3 */
+SWITCHTEC_QUIRK(0x8551); /* PAX 24XG3 */
+SWITCHTEC_QUIRK(0x8552); /* PAX 32XG3 */
+SWITCHTEC_QUIRK(0x8553); /* PAX 48XG3 */
+SWITCHTEC_QUIRK(0x8554); /* PAX 64XG3 */
+SWITCHTEC_QUIRK(0x8555); /* PAX 80XG3 */
+SWITCHTEC_QUIRK(0x8556); /* PAX 96XG3 */
+SWITCHTEC_QUIRK(0x8561); /* PFXL 24XG3 */
+SWITCHTEC_QUIRK(0x8562); /* PFXL 32XG3 */
+SWITCHTEC_QUIRK(0x8563); /* PFXL 48XG3 */
+SWITCHTEC_QUIRK(0x8564); /* PFXL 64XG3 */
+SWITCHTEC_QUIRK(0x8565); /* PFXL 80XG3 */
+SWITCHTEC_QUIRK(0x8566); /* PFXL 96XG3 */
+SWITCHTEC_QUIRK(0x8571); /* PFXI 24XG3 */
+SWITCHTEC_QUIRK(0x8572); /* PFXI 32XG3 */
+SWITCHTEC_QUIRK(0x8573); /* PFXI 48XG3 */
+SWITCHTEC_QUIRK(0x8574); /* PFXI 64XG3 */
+SWITCHTEC_QUIRK(0x8575); /* PFXI 80XG3 */
+SWITCHTEC_QUIRK(0x8576); /* PFXI 96XG3 */
pci_dev_assign_added(dev, false);
}
-
- if (dev->bus->self)
- pcie_aspm_exit_link_state(dev);
}
static void pci_destroy_dev(struct pci_dev *dev)
list_del(&dev->bus_list);
up_write(&pci_bus_sem);
+ pcie_aspm_exit_link_state(dev);
pci_bridge_d3_update(dev);
pci_free_resources(dev);
put_device(&dev->dev);
static resource_size_t calculate_iosize(resource_size_t size,
resource_size_t min_size,
resource_size_t size1,
+ resource_size_t add_size,
+ resource_size_t children_add_size,
resource_size_t old_size,
resource_size_t align)
{
#if defined(CONFIG_ISA) || defined(CONFIG_EISA)
size = (size & 0xff) + ((size & ~0xffUL) << 2);
#endif
- size = ALIGN(size + size1, align);
+ size = size + size1;
if (size < old_size)
size = old_size;
+
+ size = ALIGN(max(size, add_size) + children_add_size, align);
return size;
}
static resource_size_t calculate_memsize(resource_size_t size,
resource_size_t min_size,
- resource_size_t size1,
+ resource_size_t add_size,
+ resource_size_t children_add_size,
resource_size_t old_size,
resource_size_t align)
{
old_size = 0;
if (size < old_size)
size = old_size;
- size = ALIGN(size + size1, align);
+
+ size = ALIGN(max(size, add_size) + children_add_size, align);
return size;
}
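The net effect on window sizing, in a worked example (the old callers
folded children_add_size into add_size via max(); the new helpers treat
the bridge's own optional add_size as a floor and stack the children's
optional sizes on top; the I/O variant additionally carries its size1
term):

	old: ALIGN(size + max(add_size, children_add_size), align)
	new: ALIGN(max(size, add_size) + children_add_size, align)

	e.g. size = 1M, add_size = 4M, children_add_size = 4M, align = 1M:
	old: ALIGN(1M + max(4M, 4M), 1M) = 5M
	new: ALIGN(max(1M, 4M) + 4M, 1M) = 8M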
}
}
- size0 = calculate_iosize(size, min_size, size1,
+ size0 = calculate_iosize(size, min_size, size1, 0, 0,
resource_size(b_res), min_align);
- if (children_add_size > add_size)
- add_size = children_add_size;
- size1 = (!realloc_head || (realloc_head && !add_size)) ? size0 :
- calculate_iosize(size, min_size, add_size + size1,
+ size1 = (!realloc_head || (!add_size && !children_add_size)) ? size0 :
+ calculate_iosize(size, min_size, size1, add_size, children_add_size,
resource_size(b_res), min_align);
if (!size0 && !size1) {
if (b_res->start || b_res->end)
min_align = calculate_mem_align(aligns, max_order);
min_align = max(min_align, window_alignment(bus, b_res->flags));
- size0 = calculate_memsize(size, min_size, 0, resource_size(b_res), min_align);
+ size0 = calculate_memsize(size, min_size, 0, 0, resource_size(b_res), min_align);
add_align = max(min_align, add_align);
- if (children_add_size > add_size)
- add_size = children_add_size;
- size1 = (!realloc_head || (realloc_head && !add_size)) ? size0 :
- calculate_memsize(size, min_size, add_size,
+ size1 = (!realloc_head || (!add_size && !children_add_size)) ? size0 :
+ calculate_memsize(size, min_size, add_size, children_add_size,
resource_size(b_res), add_align);
if (!size0 && !size1) {
if (b_res->start || b_res->end)
struct kset *pci_slots_kset;
EXPORT_SYMBOL_GPL(pci_slots_kset);
-static DEFINE_MUTEX(pci_slot_mutex);
static ssize_t pci_slot_attr_show(struct kobject *kobj,
struct attribute *attr, char *buf)
if (!slot || !slot->ops)
return;
- kobj = kset_find_obj(module_kset, slot->ops->mod_name);
+ kobj = kset_find_obj(module_kset, slot->mod_name);
if (!kobj)
return;
ret = sysfs_create_link(&pci_slot->kobj, kobj, "module");
int asus_hwmon_num_fans;
int asus_hwmon_pwm;
- struct hotplug_slot *hotplug_slot;
+ struct hotplug_slot hotplug_slot;
struct mutex hotplug_lock;
struct mutex wmi_lock;
struct workqueue_struct *hotplug_workqueue;
if (asus->wlan.rfkill)
rfkill_set_sw_state(asus->wlan.rfkill, blocked);
- if (asus->hotplug_slot) {
+ if (asus->hotplug_slot.ops) {
bus = pci_find_bus(0, 1);
if (!bus) {
pr_warn("Unable to find PCI bus 1?\n");
static int asus_get_adapter_status(struct hotplug_slot *hotplug_slot,
u8 *value)
{
- struct asus_wmi *asus = hotplug_slot->private;
+ struct asus_wmi *asus = container_of(hotplug_slot,
+ struct asus_wmi, hotplug_slot);
int result = asus_wmi_get_devstate_simple(asus, ASUS_WMI_DEVID_WLAN);
if (result < 0)
return 0;
}
-static struct hotplug_slot_ops asus_hotplug_slot_ops = {
- .owner = THIS_MODULE,
+static const struct hotplug_slot_ops asus_hotplug_slot_ops = {
.get_adapter_status = asus_get_adapter_status,
.get_power_status = asus_get_adapter_status,
};
INIT_WORK(&asus->hotplug_work, asus_hotplug_work);
- asus->hotplug_slot = kzalloc(sizeof(struct hotplug_slot), GFP_KERNEL);
- if (!asus->hotplug_slot)
- goto error_slot;
-
- asus->hotplug_slot->info = kzalloc(sizeof(struct hotplug_slot_info),
- GFP_KERNEL);
- if (!asus->hotplug_slot->info)
- goto error_info;
+ asus->hotplug_slot.ops = &asus_hotplug_slot_ops;
- asus->hotplug_slot->private = asus;
- asus->hotplug_slot->ops = &asus_hotplug_slot_ops;
- asus_get_adapter_status(asus->hotplug_slot,
- &asus->hotplug_slot->info->adapter_status);
-
- ret = pci_hp_register(asus->hotplug_slot, bus, 0, "asus-wifi");
+ ret = pci_hp_register(&asus->hotplug_slot, bus, 0, "asus-wifi");
if (ret) {
pr_err("Unable to register hotplug slot - %d\n", ret);
goto error_register;
return 0;
error_register:
- kfree(asus->hotplug_slot->info);
-error_info:
- kfree(asus->hotplug_slot);
- asus->hotplug_slot = NULL;
-error_slot:
+ asus->hotplug_slot.ops = NULL;
destroy_workqueue(asus->hotplug_workqueue);
error_workqueue:
return ret;
* asus_unregister_rfkill_notifier()
*/
asus_rfkill_hotplug(asus);
- if (asus->hotplug_slot) {
- pci_hp_deregister(asus->hotplug_slot);
- kfree(asus->hotplug_slot->info);
- kfree(asus->hotplug_slot);
- }
+ if (asus->hotplug_slot.ops)
+ pci_hp_deregister(&asus->hotplug_slot);
if (asus->hotplug_workqueue)
destroy_workqueue(asus->hotplug_workqueue);
struct rfkill *wwan3g_rfkill;
struct rfkill *wimax_rfkill;
- struct hotplug_slot *hotplug_slot;
+ struct hotplug_slot hotplug_slot;
struct mutex hotplug_lock;
struct led_classdev tpd_led;
mutex_lock(&eeepc->hotplug_lock);
pci_lock_rescan_remove();
- if (!eeepc->hotplug_slot)
+ if (!eeepc->hotplug_slot.ops)
goto out_unlock;
port = acpi_get_pci_dev(handle);
static int eeepc_get_adapter_status(struct hotplug_slot *hotplug_slot,
u8 *value)
{
- struct eeepc_laptop *eeepc = hotplug_slot->private;
- int val = get_acpi(eeepc, CM_ASL_WLAN);
+ struct eeepc_laptop *eeepc;
+ int val;
+
+ eeepc = container_of(hotplug_slot, struct eeepc_laptop, hotplug_slot);
+ val = get_acpi(eeepc, CM_ASL_WLAN);
if (val == 1 || val == 0)
*value = val;
return 0;
}
-static struct hotplug_slot_ops eeepc_hotplug_slot_ops = {
- .owner = THIS_MODULE,
+static const struct hotplug_slot_ops eeepc_hotplug_slot_ops = {
.get_adapter_status = eeepc_get_adapter_status,
.get_power_status = eeepc_get_adapter_status,
};
return -ENODEV;
}
- eeepc->hotplug_slot = kzalloc(sizeof(struct hotplug_slot), GFP_KERNEL);
- if (!eeepc->hotplug_slot)
- goto error_slot;
+ eeepc->hotplug_slot.ops = &eeepc_hotplug_slot_ops;
- eeepc->hotplug_slot->info = kzalloc(sizeof(struct hotplug_slot_info),
- GFP_KERNEL);
- if (!eeepc->hotplug_slot->info)
- goto error_info;
-
- eeepc->hotplug_slot->private = eeepc;
- eeepc->hotplug_slot->ops = &eeepc_hotplug_slot_ops;
- eeepc_get_adapter_status(eeepc->hotplug_slot,
- &eeepc->hotplug_slot->info->adapter_status);
-
- ret = pci_hp_register(eeepc->hotplug_slot, bus, 0, "eeepc-wifi");
+ ret = pci_hp_register(&eeepc->hotplug_slot, bus, 0, "eeepc-wifi");
if (ret) {
pr_err("Unable to register hotplug slot - %d\n", ret);
goto error_register;
return 0;
error_register:
- kfree(eeepc->hotplug_slot->info);
-error_info:
- kfree(eeepc->hotplug_slot);
- eeepc->hotplug_slot = NULL;
-error_slot:
+ eeepc->hotplug_slot.ops = NULL;
return ret;
}
eeepc->wlan_rfkill = NULL;
}
- if (eeepc->hotplug_slot) {
- pci_hp_deregister(eeepc->hotplug_slot);
- kfree(eeepc->hotplug_slot->info);
- kfree(eeepc->hotplug_slot);
- }
+ if (eeepc->hotplug_slot.ops)
+ pci_hp_deregister(&eeepc->hotplug_slot);
if (eeepc->bluetooth_rfkill) {
rfkill_unregister(eeepc->bluetooth_rfkill);
[IMX7_RESET_PCIEPHY] = { SRC_PCIEPHY_RCR, BIT(2) | BIT(1) },
[IMX7_RESET_PCIEPHY_PERST] = { SRC_PCIEPHY_RCR, BIT(3) },
[IMX7_RESET_PCIE_CTRL_APPS_EN] = { SRC_PCIEPHY_RCR, BIT(6) },
+ [IMX7_RESET_PCIE_CTRL_APPS_TURNOFF] = { SRC_PCIEPHY_RCR, BIT(11) },
[IMX7_RESET_DDRC_PRST] = { SRC_DDRC_RCR, BIT(0) },
[IMX7_RESET_DDRC_CORE_RST] = { SRC_DDRC_RCR, BIT(1) },
};
if (ret)
goto err_unmap;
- pci_set_dma_seg_boundary(pdev, SZ_1M - 1);
- pci_set_dma_max_seg_size(pdev, SZ_1M);
+ dma_set_seg_boundary(&pdev->dev, SZ_1M - 1);
+ dma_set_max_seg_size(&pdev->dev, SZ_1M);
pci_set_master(pdev);
ism->smcd = smcd_alloc_dev(&pdev->dev, dev_name(&pdev->dev), &ism_ops,
shost->max_sectors = (shost->sg_tablesize * 8) + 112;
}
- error = pci_set_dma_max_seg_size(pdev,
+ error = dma_set_max_seg_size(&pdev->dev,
(aac->adapter_info.options & AAC_OPT_NEW_COMM) ?
(shost->max_sectors << 9) : 65536);
if (error)
struct scsi_device *sdev = NULL;
struct aac_dev *aac = (struct aac_dev *)shost_priv(shost);
- pci_cleanup_aer_uncorrect_error_status(pdev);
-
if (aac_adapter_ioremap(aac, aac->base_size)) {
dev_err(&pdev->dev, "aacraid: ioremap failed\n");
return PCI_ERS_RESULT_DISCONNECT;
}
- pci_cleanup_aer_uncorrect_error_status(pdev);
return PCI_ERS_RESULT_RECOVERED;
}
if (pci_set_dma_mask(bfad->pcidev, DMA_BIT_MASK(32)) != 0)
goto out_disable_device;
- pci_cleanup_aer_uncorrect_error_status(pdev);
-
if (restart_bfa(bfad) == -1)
goto out_disable_device;
pci_set_master(pdev);
pci_restore_state(pdev);
pci_save_state(pdev);
- pci_cleanup_aer_uncorrect_error_status(pdev);
/* Bring HW s/m to ready state.
* but don't resume IOs.
/* Bring device online, it will be no-op for non-fatal error resume */
lpfc_online(phba);
-
- /* Clean up Advanced Error Reporting (AER) if needed */
- if (phba->hba_flag & HBA_AER_ENABLED)
- pci_cleanup_aer_uncorrect_error_status(pdev);
}
/**
/* Bring the device back online */
lpfc_online(phba);
}
-
- /* Clean up Advanced Error Reporting (AER) if needed */
- if (phba->hba_flag & HBA_AER_ENABLED)
- pci_cleanup_aer_uncorrect_error_status(pdev);
}
/**
pr_info(MPT3SAS_FMT "PCI error: resume callback!!\n", ioc->name);
- pci_cleanup_aer_uncorrect_error_status(pdev);
mpt3sas_base_start_watchdog(ioc);
scsi_unblock_requests(ioc->shost);
}
"The device failed to resume I/O from slot/link_reset.\n");
}
- pci_cleanup_aer_uncorrect_error_status(pdev);
-
ha->flags.eeh_busy = 0;
}
__func__);
}
- pci_cleanup_aer_uncorrect_error_status(pdev);
clear_bit(AF_EEH_BUSY, &ha->flags);
}
bool put_online:1;
};
+struct acpi_device_properties {
+ const guid_t *guid;
+ const union acpi_object *properties;
+ struct list_head list;
+};
+
/* ACPI Device Specific Data (_DSD) */
struct acpi_device_data {
const union acpi_object *pointer;
- const union acpi_object *properties;
+ struct list_head properties;
const union acpi_object *of_compatible;
struct list_head subnodes;
};
#define IMX7_RESET_DDRC_PRST 23
#define IMX7_RESET_DDRC_CORE_RST 24
-#define IMX7_RESET_NUM 25
+#define IMX7_RESET_PCIE_CTRL_APPS_TURNOFF 25
+
+#define IMX7_RESET_NUM 26
#endif
NR_FWNODE_REFERENCE_ARGS, args);
}
+static inline bool acpi_dev_has_props(const struct acpi_device *adev)
+{
+ return !list_empty(&adev->data.properties);
+}
+
+struct acpi_device_properties *
+acpi_data_add_props(struct acpi_device_data *data, const guid_t *guid,
+ const union acpi_object *properties);
+
int acpi_node_prop_get(const struct fwnode_handle *fwnode, const char *propname,
void **valptr);
int acpi_dev_prop_read_single(struct acpi_device *adev,
#define QUEUE_FLAG_REGISTERED 26 /* queue has been registered to a disk */
#define QUEUE_FLAG_SCSI_PASSTHROUGH 27 /* queue supports SCSI commands */
#define QUEUE_FLAG_QUIESCED 28 /* queue has been quiesced */
+#define QUEUE_FLAG_PCI_P2PDMA 29 /* device supports PCI p2p requests */
#define QUEUE_FLAG_DEFAULT ((1 << QUEUE_FLAG_IO_STAT) | \
(1 << QUEUE_FLAG_SAME_COMP) | \
#define blk_queue_dax(q) test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags)
#define blk_queue_scsi_passthrough(q) \
test_bit(QUEUE_FLAG_SCSI_PASSTHROUGH, &(q)->queue_flags)
+#define blk_queue_pci_p2pdma(q) \
+ test_bit(QUEUE_FLAG_PCI_P2PDMA, &(q)->queue_flags)
#define blk_noretry_request(rq) \
((rq)->cmd_flags & (REQ_FAILFAST_DEV|REQ_FAILFAST_TRANSPORT| \
* wakeup event whenever a page is unpinned and becomes idle. This
* wakeup is used to coordinate physical address space management (ex:
* fs truncate/hole punch) vs pinned pages (ex: device dma).
+ *
+ * MEMORY_DEVICE_PCI_P2PDMA:
+ * Device memory residing in a PCI BAR intended for use with Peer-to-Peer
+ * transactions.
*/
enum memory_type {
MEMORY_DEVICE_PRIVATE = 1,
MEMORY_DEVICE_PUBLIC,
MEMORY_DEVICE_FS_DAX,
+ MEMORY_DEVICE_PCI_P2PDMA,
};
/*
struct device *dev;
void *data;
enum memory_type type;
+ u64 pci_p2pdma_bus_offset;
};
#ifdef CONFIG_ZONE_DEVICE
page->pgmap->type == MEMORY_DEVICE_PUBLIC;
}
+#ifdef CONFIG_PCI_P2PDMA
+static inline bool is_pci_p2pdma_page(const struct page *page)
+{
+ return is_zone_device_page(page) &&
+ page->pgmap->type == MEMORY_DEVICE_PCI_P2PDMA;
+}
+#else /* CONFIG_PCI_P2PDMA */
+static inline bool is_pci_p2pdma_page(const struct page *page)
+{
+ return false;
+}
+#endif /* CONFIG_PCI_P2PDMA */
+
#else /* CONFIG_DEV_PAGEMAP_OPS */
static inline void dev_pagemap_get_ops(void)
{
{
return false;
}
+
+static inline bool is_pci_p2pdma_page(const struct page *page)
+{
+ return false;
+}
#endif /* CONFIG_DEV_PAGEMAP_OPS */
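
The predicate lets generic mapping code route P2P pages to the P2PDMA helper.
A hedged sketch of such a dispatch:

	if (is_pci_p2pdma_page(sg_page(sg)))
		return pci_p2pdma_map_sg(dev, sg, nents, dir);
	return dma_map_sg(dev, sg, nents, dir);
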
static inline void get_page(struct page *page)
{
return dma_set_coherent_mask(&dev->dev, mask);
}
-
-static inline int pci_set_dma_max_seg_size(struct pci_dev *dev,
- unsigned int size)
-{
- return dma_set_max_seg_size(&dev->dev, size);
-}
-
-static inline int pci_set_dma_seg_boundary(struct pci_dev *dev,
- unsigned long mask)
-{
- return dma_set_seg_boundary(&dev->dev, mask);
-}
#else
static inline int pci_set_dma_mask(struct pci_dev *dev, u64 mask)
{ return -EIO; }
static inline int pci_set_consistent_dma_mask(struct pci_dev *dev, u64 mask)
{ return -EIO; }
-static inline int pci_set_dma_max_seg_size(struct pci_dev *dev,
- unsigned int size)
-{ return -EIO; }
-static inline int pci_set_dma_seg_boundary(struct pci_dev *dev,
- unsigned long mask)
-{ return -EIO; }
#endif
#endif
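
Callers of the two removed wrappers switch to the generic DMA API on the
underlying struct device:

	dma_set_max_seg_size(&pdev->dev, size);
	dma_set_seg_boundary(&pdev->dev, mask);
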
+++ /dev/null
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _LINUX_PCI_DMA_H
-#define _LINUX_PCI_DMA_H
-
-#define DECLARE_PCI_UNMAP_ADDR(ADDR_NAME) DEFINE_DMA_UNMAP_ADDR(ADDR_NAME);
-#define DECLARE_PCI_UNMAP_LEN(LEN_NAME) DEFINE_DMA_UNMAP_LEN(LEN_NAME);
-#define pci_unmap_addr dma_unmap_addr
-#define pci_unmap_addr_set dma_unmap_addr_set
-#define pci_unmap_len dma_unmap_len
-#define pci_unmap_len_set dma_unmap_len_set
-
-#endif
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * PCI Peer 2 Peer DMA support.
+ *
+ * Copyright (c) 2016-2018, Logan Gunthorpe
+ * Copyright (c) 2016-2017, Microsemi Corporation
+ * Copyright (c) 2017, Christoph Hellwig
+ * Copyright (c) 2018, Eideticom Inc.
+ */
+
+#ifndef _LINUX_PCI_P2PDMA_H
+#define _LINUX_PCI_P2PDMA_H
+
+#include <linux/pci.h>
+
+struct block_device;
+struct scatterlist;
+
+#ifdef CONFIG_PCI_P2PDMA
+int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size,
+ u64 offset);
+int pci_p2pdma_distance_many(struct pci_dev *provider, struct device **clients,
+ int num_clients, bool verbose);
+bool pci_has_p2pmem(struct pci_dev *pdev);
+struct pci_dev *pci_p2pmem_find_many(struct device **clients, int num_clients);
+void *pci_alloc_p2pmem(struct pci_dev *pdev, size_t size);
+void pci_free_p2pmem(struct pci_dev *pdev, void *addr, size_t size);
+pci_bus_addr_t pci_p2pmem_virt_to_bus(struct pci_dev *pdev, void *addr);
+struct scatterlist *pci_p2pmem_alloc_sgl(struct pci_dev *pdev,
+ unsigned int *nents, u32 length);
+void pci_p2pmem_free_sgl(struct pci_dev *pdev, struct scatterlist *sgl);
+void pci_p2pmem_publish(struct pci_dev *pdev, bool publish);
+int pci_p2pdma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
+ enum dma_data_direction dir);
+int pci_p2pdma_enable_store(const char *page, struct pci_dev **p2p_dev,
+ bool *use_p2pdma);
+ssize_t pci_p2pdma_enable_show(char *page, struct pci_dev *p2p_dev,
+ bool use_p2pdma);
+#else /* CONFIG_PCI_P2PDMA */
+static inline int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar,
+ size_t size, u64 offset)
+{
+ return -EOPNOTSUPP;
+}
+static inline int pci_p2pdma_distance_many(struct pci_dev *provider,
+ struct device **clients, int num_clients, bool verbose)
+{
+ return -1;
+}
+static inline bool pci_has_p2pmem(struct pci_dev *pdev)
+{
+ return false;
+}
+static inline struct pci_dev *pci_p2pmem_find_many(struct device **clients,
+ int num_clients)
+{
+ return NULL;
+}
+static inline void *pci_alloc_p2pmem(struct pci_dev *pdev, size_t size)
+{
+ return NULL;
+}
+static inline void pci_free_p2pmem(struct pci_dev *pdev, void *addr,
+ size_t size)
+{
+}
+static inline pci_bus_addr_t pci_p2pmem_virt_to_bus(struct pci_dev *pdev,
+ void *addr)
+{
+ return 0;
+}
+static inline struct scatterlist *pci_p2pmem_alloc_sgl(struct pci_dev *pdev,
+ unsigned int *nents, u32 length)
+{
+ return NULL;
+}
+static inline void pci_p2pmem_free_sgl(struct pci_dev *pdev,
+ struct scatterlist *sgl)
+{
+}
+static inline void pci_p2pmem_publish(struct pci_dev *pdev, bool publish)
+{
+}
+static inline int pci_p2pdma_map_sg(struct device *dev,
+ struct scatterlist *sg, int nents, enum dma_data_direction dir)
+{
+ return 0;
+}
+static inline int pci_p2pdma_enable_store(const char *page,
+ struct pci_dev **p2p_dev, bool *use_p2pdma)
+{
+ *use_p2pdma = false;
+ return 0;
+}
+static inline ssize_t pci_p2pdma_enable_show(char *page,
+ struct pci_dev *p2p_dev, bool use_p2pdma)
+{
+ return sprintf(page, "none\n");
+}
+#endif /* CONFIG_PCI_P2PDMA */
+
+static inline int pci_p2pdma_distance(struct pci_dev *provider,
+ struct device *client, bool verbose)
+{
+ return pci_p2pdma_distance_many(provider, &client, 1, verbose);
+}
+
+static inline struct pci_dev *pci_p2pmem_find(struct device *client)
+{
+ return pci_p2pmem_find_many(&client, 1);
+}
+
+#endif /* _LINUX_PCI_P2PDMA_H */
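
Taken together, a consumer picks a provider that is P2P-reachable from all of
its DMA clients, allocates from it, and frees when done. A sketch under
assumed names (clients, num_clients, len):

	struct pci_dev *p2p_dev;
	void *buf;

	/* choose the provider closest to every client */
	p2p_dev = pci_p2pmem_find_many(clients, num_clients);
	if (!p2p_dev)
		return -ENODEV;

	buf = pci_alloc_p2pmem(p2p_dev, len);
	if (!buf) {
		pci_dev_put(p2p_dev);	/* find_many took a reference */
		return -ENOMEM;
	}

	/* ... build an sgl, map it with pci_p2pdma_map_sg(), do DMA ... */

	pci_free_p2pmem(p2p_dev, buf, len);
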
struct pci_vpd;
struct pci_sriov;
struct pci_ats;
+struct pci_p2pdma;
/* The pci_dev structure describes PCI devices */
struct pci_dev {
pci_power_t current_state; /* Current operating state. In ACPI,
this is D0-D3, D0 being fully
functional, and D3 being off. */
+ unsigned int imm_ready:1; /* Supports Immediate Readiness */
u8 pm_cap; /* PM capability offset */
unsigned int pme_support:5; /* Bitmask of states from which PME#
can be generated */
unsigned int has_secondary_link:1;
unsigned int non_compliant_bars:1; /* Broken BARs; ignore them */
unsigned int is_probed:1; /* Device probing in progress */
+ unsigned int link_active_reporting:1;/* Device capable of reporting link active */
pci_dev_flags_t dev_flags;
atomic_t enable_cnt; /* pci_enable_device has been called */
#endif
#ifdef CONFIG_PCI_PASID
u16 pasid_features;
+#endif
+#ifdef CONFIG_PCI_P2PDMA
+ struct pci_p2pdma *p2pdma;
#endif
phys_addr_t rom; /* Physical address if not from BAR */
size_t romlen; /* Length if not from BAR */
/* kmem_cache style wrapper around pci_alloc_consistent() */
-#include <linux/pci-dma.h>
#include <linux/dmapool.h>
#define pci_pool dma_pool
/**
* struct hotplug_slot_ops - the callbacks that the hotplug pci core can use
- * @owner: The module owner of this structure
- * @mod_name: The module name (KBUILD_MODNAME) of this structure
* @enable_slot: Called when the user wants to enable a specific pci slot
* @disable_slot: Called when the user wants to disable a specific pci slot
* @set_attention_status: Called to set the specific slot's attention LED to
* @hardware_test: Called to run a specified hardware test on the specified
* slot.
* @get_power_status: Called to get the current power status of a slot.
- * If this field is NULL, the value passed in the struct hotplug_slot_info
- * will be used when this value is requested by a user.
* @get_attention_status: Called to get the current attention status of a slot.
- * If this field is NULL, the value passed in the struct hotplug_slot_info
- * will be used when this value is requested by a user.
* @get_latch_status: Called to get the current latch status of a slot.
- * If this field is NULL, the value passed in the struct hotplug_slot_info
- * will be used when this value is requested by a user.
* @get_adapter_status: Called to see if an adapter is present in the slot or not.
- * If this field is NULL, the value passed in the struct hotplug_slot_info
- * will be used when this value is requested by a user.
* @reset_slot: Optional interface to allow override of a bus reset for the
* slot for cases where a secondary bus reset can result in spurious
* hotplug events or where a slot can be reset independent of the bus.
* set an LED, enable / disable power, etc.)
*/
struct hotplug_slot_ops {
- struct module *owner;
- const char *mod_name;
int (*enable_slot) (struct hotplug_slot *slot);
int (*disable_slot) (struct hotplug_slot *slot);
int (*set_attention_status) (struct hotplug_slot *slot, u8 value);
int (*reset_slot) (struct hotplug_slot *slot, int probe);
};
-/**
- * struct hotplug_slot_info - used to notify the hotplug pci core of the state of the slot
- * @power_status: if power is enabled or not (1/0)
- * @attention_status: if the attention light is enabled or not (1/0)
- * @latch_status: if the latch (if any) is open or closed (1/0)
- * @adapter_status: if there is a pci board present in the slot or not (1/0)
- *
- * Used to notify the hotplug pci core of the status of a specific slot.
- */
-struct hotplug_slot_info {
- u8 power_status;
- u8 attention_status;
- u8 latch_status;
- u8 adapter_status;
-};
-
/**
* struct hotplug_slot - used to register a physical slot with the hotplug pci core
* @ops: pointer to the &struct hotplug_slot_ops to be used for this slot
- * @info: pointer to the &struct hotplug_slot_info for the initial values for
- * this slot.
- * @private: used by the hotplug pci controller driver to store whatever it
- * needs.
+ * @owner: The module owner of this structure
+ * @mod_name: The module name (KBUILD_MODNAME) of this structure
*/
struct hotplug_slot {
- struct hotplug_slot_ops *ops;
- struct hotplug_slot_info *info;
- void *private;
+ const struct hotplug_slot_ops *ops;
/* Variables below this are for use only by the hotplug pci core. */
struct list_head slot_list;
struct pci_slot *pci_slot;
+ struct module *owner;
+ const char *mod_name;
};
static inline const char *hotplug_slot_name(const struct hotplug_slot *slot)
void pci_hp_destroy(struct hotplug_slot *slot);
void pci_hp_deregister(struct hotplug_slot *slot);
-int __must_check pci_hp_change_slot_info(struct hotplug_slot *slot,
- struct hotplug_slot_info *info);
-
/* use a define to avoid include chaining to get THIS_MODULE & friends */
#define pci_hp_register(slot, pbus, devnr, name) \
__pci_hp_register(slot, pbus, devnr, name, THIS_MODULE, KBUILD_MODNAME)
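
With @owner and @mod_name now filled in by the pci_hp_register() wrapper, a
controller driver can declare its ops table const and drop the old info
bookkeeping. Sketch with hypothetical callbacks:

	static const struct hotplug_slot_ops my_hp_ops = {
		.enable_slot      = my_enable_slot,
		.disable_slot     = my_disable_slot,
		.get_power_status = my_get_power_status,
	};

	slot->hotplug_slot.ops = &my_hp_ops;
	pci_hp_register(&slot->hotplug_slot, pdev->subordinate, 0, name);
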
#define PCI_VENDOR_ID_HUAWEI 0x19e5
#define PCI_VENDOR_ID_NETRONOME 0x19ee
-#define PCI_DEVICE_ID_NETRONOME_NFP3200 0x3200
-#define PCI_DEVICE_ID_NETRONOME_NFP3240 0x3240
#define PCI_DEVICE_ID_NETRONOME_NFP4000 0x4000
#define PCI_DEVICE_ID_NETRONOME_NFP5000 0x5000
#define PCI_DEVICE_ID_NETRONOME_NFP6000 0x6000
#define PCI_COMMAND_INTX_DISABLE 0x400 /* INTx Emulation Disable */
#define PCI_STATUS 0x06 /* 16 bits */
+#define PCI_STATUS_IMM_READY 0x01 /* Immediate Readiness */
#define PCI_STATUS_INTERRUPT 0x08 /* Interrupt status */
#define PCI_STATUS_CAP_LIST 0x10 /* Support Capability List */
#define PCI_STATUS_66MHZ 0x20 /* Support 66 MHz PCI 2.1 bus */
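
The new status bit backs the imm_ready flag added to struct pci_dev above;
enumeration can latch it roughly like this (simplified):

	u16 status;

	pci_read_config_word(dev, PCI_STATUS, &status);
	if (status & PCI_STATUS_IMM_READY)
		dev->imm_ready = 1;
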
@echo ' leds - LEDs tools'
@echo ' liblockdep - user-space wrapper for kernel locking-validator'
@echo ' bpf - misc BPF tools'
+ @echo ' pci - PCI tools'
@echo ' perf - Linux performance measurement and analysis tool'
@echo ' selftests - various kernel selftests'
@echo ' spi - spi tools'
cpupower: FORCE
$(call descend,power/$@)
-cgroup firewire hv guest spi usb virtio vm bpf iio gpio objtool leds wmi: FORCE
+cgroup firewire hv guest spi usb virtio vm bpf iio gpio objtool leds wmi pci: FORCE
$(call descend,$@)
liblockdep: FORCE
all: acpi cgroup cpupower gpio hv firewire liblockdep \
perf selftests spi turbostat usb \
virtio vm bpf x86_energy_perf_policy \
- tmon freefall iio objtool kvm_stat wmi
+ tmon freefall iio objtool kvm_stat wmi pci
acpi_install:
$(call descend,power/$(@:_install=),install)
cpupower_install:
$(call descend,power/$(@:_install=),install)
-cgroup_install firewire_install gpio_install hv_install iio_install perf_install spi_install usb_install virtio_install vm_install bpf_install objtool_install wmi_install:
+cgroup_install firewire_install gpio_install hv_install iio_install perf_install spi_install usb_install virtio_install vm_install bpf_install objtool_install wmi_install pci_install:
$(call descend,$(@:_install=),install)
liblockdep_install:
perf_install selftests_install turbostat_install usb_install \
virtio_install vm_install bpf_install x86_energy_perf_policy_install \
tmon_install freefall_install objtool_install kvm_stat_install \
- wmi_install
+ wmi_install pci_install
acpi_clean:
$(call descend,power/acpi,clean)
cpupower_clean:
$(call descend,power/cpupower,clean)
-cgroup_clean hv_clean firewire_clean spi_clean usb_clean virtio_clean vm_clean wmi_clean bpf_clean iio_clean gpio_clean objtool_clean leds_clean:
+cgroup_clean hv_clean firewire_clean spi_clean usb_clean virtio_clean vm_clean wmi_clean bpf_clean iio_clean gpio_clean objtool_clean leds_clean pci_clean:
$(call descend,$(@:_clean=),clean)
liblockdep_clean:
perf_clean selftests_clean turbostat_clean spi_clean usb_clean virtio_clean \
vm_clean bpf_clean iio_clean x86_energy_perf_policy_clean tmon_clean \
freefall_clean build_clean libbpf_clean libsubcmd_clean liblockdep_clean \
- gpio_clean objtool_clean leds_clean wmi_clean
+ gpio_clean objtool_clean leds_clean wmi_clean pci_clean
.PHONY: FORCE
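
With these hooks in place, pcitest can also be driven from the top-level
tools directory:

	# cd <kernel-dir>/tools
	# make pci
	# make pci_install
	# make pci_clean
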
--- /dev/null
+pcitest-y += pcitest.o
--- /dev/null
+# SPDX-License-Identifier: GPL-2.0
+include ../scripts/Makefile.include
+
+bindir ?= /usr/bin
+
+ifeq ($(srctree),)
+srctree := $(patsubst %/,%,$(dir $(CURDIR)))
+srctree := $(patsubst %/,%,$(dir $(srctree)))
+endif
+
+# Do not use make's built-in rules
+# (this improves performance and avoids hard-to-debug behaviour).
+MAKEFLAGS += -r
+
+CFLAGS += -O2 -Wall -g -D_GNU_SOURCE -I$(OUTPUT)include
+
+ALL_TARGETS := pcitest pcitest.sh
+ALL_PROGRAMS := $(patsubst %,$(OUTPUT)%,$(ALL_TARGETS))
+
+all: $(ALL_PROGRAMS)
+
+export srctree OUTPUT CC LD CFLAGS
+include $(srctree)/tools/build/Makefile.include
+
+#
+# We need the following to be outside of the kernel tree
+#
+$(OUTPUT)include/linux/: ../../include/uapi/linux/
+ mkdir -p $(OUTPUT)include/linux/ 2>/dev/null || true
+ ln -sf $(CURDIR)/../../include/uapi/linux/pcitest.h $@
+
+prepare: $(OUTPUT)include/linux/
+
+PCITEST_IN := $(OUTPUT)pcitest-in.o
+$(PCITEST_IN): prepare FORCE
+ $(Q)$(MAKE) $(build)=pcitest
+$(OUTPUT)pcitest: $(PCITEST_IN)
+ $(QUIET_LINK)$(CC) $(CFLAGS) $(LDFLAGS) $< -o $@
+
+clean:
+ rm -f $(ALL_PROGRAMS)
+ rm -rf $(OUTPUT)include/
+ find $(if $(OUTPUT),$(OUTPUT),.) -name '*.o' -delete -o -name '\.*.d' -delete
+
+install: $(ALL_PROGRAMS)
+ install -d -m 755 $(DESTDIR)$(bindir); \
+ for program in $(ALL_PROGRAMS); do \
+ install $$program $(DESTDIR)$(bindir); \
+ done
+
+FORCE:
+
+.PHONY: all install clean FORCE prepare
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
-#include <time.h>
#include <unistd.h>
#include <linux/pcitest.h>
unsigned long size;
};
-static int run_test(struct pci_test *test)
+static void run_test(struct pci_test *test)
{
long ret;
int fd;
- struct timespec start, end;
- double time;
fd = open(test->device, O_RDWR);
if (fd < 0) {
perror("can't open PCI Endpoint Test device");
- return fd;
+ return;
}
if (test->barnum >= 0 && test->barnum <= 5) {