* 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6: (44 commits)
ACPI: remove function tracing macros from drivers/acpi/*.c
ACPI: add support for Smart Battery
ACPI: handle battery notify event on broken BIOS
ACPI: handle AC notify event on broken BIOS
ACPI: asus_acpi: add S1N WLED control
ACPI: asus_acpi: correct M6N/M6R display nodes
ACPI: asus_acpi: rework model detection
ACPI: asus_acpi: support L5D
ACPI: asus_acpi: handle internal Bluetooth / support W5A
ACPI: asus_acpi: support A4G
ACPI: asus_acpi: support W3400N
ACPI: asus_acpi: LED display support
ACPI: asus_acpi: support A3G
ACPI: asus_acpi: misc cleanups
ACPI: video: Remove unneeded acpi_handle from driver.
ACPI: thermal: Remove unneeded acpi_handle from driver.
ACPI: power: Remove unneeded acpi_handle from driver.
ACPI: pci_root: Remove unneeded acpi_handle from driver.
ACPI: pci_link: Remove unneeded acpi_handle from driver.
...
for most of the implementations. These functions can be replaced by the
board driver if necessary. Those functions are called via pointers in the
NAND chip description structure. The board driver can set the functions which
- should be replaced by board dependend functions before calling nand_scan().
+ should be replaced by board dependent functions before calling nand_scan().
If the function pointer is NULL on entry to nand_scan() then the pointer
is set to the default function which is suitable for the detected chip type.
</para></listitem>
[REPLACEABLE]</para><para>
Replaceable members hold hardware related functions which can be
provided by the board driver. The board driver can set the functions which
- should be replaced by board dependend functions before calling nand_scan().
+ should be replaced by board dependent functions before calling nand_scan().
If the function pointer is NULL on entry to nand_scan() then the pointer
is set to the default function which is suitable for the detected chip type.
</para></listitem>
<title>Basic board driver</title>
<para>
For most boards it will be sufficient to provide just the
- basic functions and fill out some really board dependend
+ basic functions and fill out some really board dependent
members in the nand chip description structure.
- See drivers/mtd/nand/skeleton for reference.
</para>
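	<para>
	A rough sketch of this pattern (the register and symbol names below
	are placeholders, not a real board interface): the board driver fills
	in only the hardware specific hooks and leaves everything else NULL,
	so that nand_scan() installs the defaults for the detected chip.
	</para>
	<programlisting>
<![CDATA[
static int board_dev_ready(struct mtd_info *mtd)
{
	/* hypothetical board specific ready/busy register */
	return readb(board_io_base + BOARD_RDY_REG) & 0x01;
}

	...
	struct nand_chip *this = &board_chip;

	this->dev_ready = board_dev_ready;	/* set before nand_scan() */
	/* this->read_byte etc. stay NULL, so nand_scan() uses the defaults */
	if (nand_scan(board_mtd, 1)) {
		err = -ENXIO;
		goto out;
	}
]]>
	</programlisting>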
<sect1>
<title>Basic defines</title>
</para>
!Idrivers/mtd/nand/nand_base.c
!Idrivers/mtd/nand/nand_bbt.c
-!Idrivers/mtd/nand/nand_ecc.c
+<!-- No internal functions for kernel-doc:
+X!Idrivers/mtd/nand/nand_ecc.c
+-->
</chapter>
<chapter id="credits">
<title>Interrupt Handling</title>
<para>
Our example handler is for an ISA bus device. If it was PCI you would be
- able to share the interrupt and would have set SA_SHIRQ to indicate a
+ able to share the interrupt and would have set IRQF_SHARED to indicate a
shared IRQ. We pass the device pointer as the interrupt routine argument. We
don't need to since we only support one card but doing this will make it
easier to upgrade the driver for multiple devices in the future.
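A minimal sketch of the shared (PCI-style) variant; "mydev" and its
members are made up for illustration, and the handler signature is the
one used by kernels of this generation:

	static irqreturn_t mydev_interrupt(int irq, void *dev_id, struct pt_regs *regs)
	{
		struct mydev *dev = dev_id;	/* the pointer passed to request_irq() */
		/* ... check and handle the hardware here ... */
		return IRQ_HANDLED;		/* IRQ_NONE if the interrupt was not ours */
	}

	...
	if (request_irq(dev->irq, mydev_interrupt, IRQF_SHARED, "mydev", dev))
		return -EBUSY;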
Who: Ralf Baechle <ralf@linux-mips.org>
---------------------------
+
+What: Interrupt only SA_* flags
+When: January 2007
+Why: The interrupt related SA_* flags are replaced by IRQF_* to move them
+ out of the signal namespace.
+
+Who: Thomas Gleixner <tglx@linutronix.de>
+
+---------------------------
--- /dev/null
+IRQ-flags state tracing
+
+started by Ingo Molnar <mingo@redhat.com>
+
+the "irq-flags tracing" feature "traces" hardirq and softirq state, in
+that it gives interested subsystems an opportunity to be notified of
+every hardirqs-off/hardirqs-on, softirqs-off/softirqs-on event that
+happens in the kernel.
+
+CONFIG_TRACE_IRQFLAGS_SUPPORT is needed for CONFIG_PROVE_SPIN_LOCKING
+and CONFIG_PROVE_RW_LOCKING to be offered by the generic lock debugging
+code. Otherwise only CONFIG_PROVE_MUTEX_LOCKING and
+CONFIG_PROVE_RWSEM_LOCKING will be offered on an architecture - these
+are locking APIs that are not used in IRQ context. (the one exception
+for rwsems is worked around)
+
+architecture support for this is certainly not in the "trivial"
+category, because lots of lowlevel assembly code deals with irq-flags
+state changes. But an architecture can be irq-flags-tracing enabled in a
+rather straightforward and risk-free manner.
+
+Architectures that want to support this need to do a couple of
+code-organizational changes first:
+
+- move their irq-flags manipulation code from their asm/system.h header
+ to asm/irqflags.h
+
+- rename local_irq_disable()/etc to raw_local_irq_disable()/etc. so that
+ the linux/irqflags.h code can inject callbacks and can construct the
+ real local_irq_disable()/etc APIs.
+
+- add and enable TRACE_IRQFLAGS_SUPPORT in their arch level Kconfig file
+
+and then a couple of functional changes are needed as well to implement
+irq-flags-tracing support:
+
+- in lowlevel entry code add (build-conditional) calls to the
+ trace_hardirqs_off()/trace_hardirqs_on() functions. The lock validator
+ closely guards whether the 'real' irq-flags matches the 'virtual'
+ irq-flags state, and complains loudly (and turns itself off) if the
+ two do not match. Usually, most of the time spent on arch support for
+ irq-flags-tracing goes into this stage: look at the lockdep
+ complaint, try to figure out the assembly code we did not cover yet,
+ fix and repeat. Once the system has booted up and works without a
+ lockdep complaint in the irq-flags-tracing functions, arch support is
+ complete.
+- if the architecture has non-maskable interrupts then those need to be
+ excluded from the irq-tracing [and lock validation] mechanism via
+ lockdep_off()/lockdep_on().
+
+in general there is no risk from having an incomplete irq-flags-tracing
+implementation in an architecture: lockdep will detect that and will
+turn itself off. I.e. the lock validator will still be reliable. There
+should be no crashes due to irq-tracing bugs. (except if the assembly
+changes break other code by modifying conditions or registers that
+shouldn't be)
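+
+as a simplified sketch (not the exact linux/irqflags.h macros), the
+generic header then builds the real APIs out of the raw_* primitives
+plus the tracing callbacks, roughly like:
+
+  #define local_irq_disable() \
+	do { raw_local_irq_disable(); trace_hardirqs_off(); } while (0)
+
+  #define local_irq_enable() \
+	do { trace_hardirqs_on(); raw_local_irq_enable(); } while (0)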
+
debug [KNL] Enable kernel debugging (events log level).
+ debug_locks_verbose=
+ [KNL] verbose self-tests
+ Format=<0|1>
+ Print debugging info while doing the locking API
+ self-tests.
+ We default to 0 (no extra messages); setting it to
+ 1 will print _a lot_ more information - normally
+ only useful to kernel developers.
+
decnet= [HW,NET]
Format: <area>[,<node>]
See also Documentation/networking/decnet.txt.
--- /dev/null
+Runtime locking correctness validator
+=====================================
+
+started by Ingo Molnar <mingo@redhat.com>
+additions by Arjan van de Ven <arjan@linux.intel.com>
+
+Lock-class
+----------
+
+The basic object the validator operates upon is a 'class' of locks.
+
+A class of locks is a group of locks that are logically the same with
+respect to locking rules, even if the locks may have multiple (possibly
+tens of thousands of) instantiations. For example a lock in the inode
+struct is one class, while each inode has its own instantiation of that
+lock class.
+
+The validator tracks the 'state' of lock-classes, and it tracks
+dependencies between different lock-classes. The validator maintains a
+rolling proof that the state and the dependencies are correct.
+
+Unlike a lock instantiation, the lock-class itself never goes away: when
+a lock-class is used for the first time after bootup it gets registered,
+and all subsequent uses of that lock-class will be attached to this
+lock-class.
+
+State
+-----
+
+The validator tracks lock-class usage history into 5 separate state bits:
+
+- 'ever held in hardirq context' [ == hardirq-safe ]
+- 'ever held in softirq context' [ == softirq-safe ]
+- 'ever held with hardirqs enabled' [ == hardirq-unsafe ]
+- 'ever held with softirqs and hardirqs enabled' [ == softirq-unsafe ]
+
+- 'ever used' [ == !unused ]
+
+Single-lock state rules:
+------------------------
+
+A softirq-unsafe lock-class is automatically hardirq-unsafe as well. The
+following states are exclusive, and only one of them is allowed to be
+set for any lock-class:
+
+ <hardirq-safe> and <hardirq-unsafe>
+ <softirq-safe> and <softirq-unsafe>
+
+The validator detects and reports lock usage that violates these
+single-lock state rules.
+
+Multi-lock dependency rules:
+----------------------------
+
+The same lock-class must not be acquired twice, because this could lead
+to lock recursion deadlocks.
+
+Furthermore, two locks may not be taken in different order:
+
+ <L1> -> <L2>
+ <L2> -> <L1>
+
+because this could lead to lock inversion deadlocks. (The validator
+finds such dependencies in arbitrary complexity, i.e. there can be any
+other locking sequence between the acquire-lock operations; the
+validator will still track all dependencies between locks.)
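+
+For illustration (hypothetical locks A and B, not taken from the
+kernel), the classic inversion the validator reports is two code paths
+taking the same two locks in opposite order:
+
+  /* path 1 */
+  spin_lock(&A);
+  spin_lock(&B);
+
+  /* path 2, elsewhere */
+  spin_lock(&B);
+  spin_lock(&A);	/* <A> after <B>: inversion against path 1 */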
+
+Furthermore, the following usage based lock dependencies are not allowed
+between any two lock-classes:
+
+ <hardirq-safe> -> <hardirq-unsafe>
+ <softirq-safe> -> <softirq-unsafe>
+
+The first rule comes from the fact that a hardirq-safe lock could be
+taken by a hardirq context, interrupting a hardirq-unsafe lock - and
+thus could result in a lock inversion deadlock. Likewise, a softirq-safe
+lock could be taken by a softirq context, interrupting a softirq-unsafe
+lock.
+
+The above rules are enforced for any locking sequence that occurs in the
+kernel: when acquiring a new lock, the validator checks whether there is
+any rule violation between the new lock and any of the held locks.
+
+When a lock-class changes its state, the following aspects of the above
+dependency rules are enforced:
+
+- if a new hardirq-safe lock is discovered, we check whether it
+ took any hardirq-unsafe lock in the past.
+
+- if a new softirq-safe lock is discovered, we check whether it took
+ any softirq-unsafe lock in the past.
+
+- if a new hardirq-unsafe lock is discovered, we check whether any
+ hardirq-safe lock took it in the past.
+
+- if a new softirq-unsafe lock is discovered, we check whether any
+ softirq-safe lock took it in the past.
+
+(Again, we do these checks too on the basis that an interrupt context
+could interrupt _any_ of the irq-unsafe or hardirq-unsafe locks, which
+could lead to a lock inversion deadlock - even if that lock scenario did
+not trigger in practice yet.)
+
+Exception: Nested data dependencies leading to nested locking
+-------------------------------------------------------------
+
+There are a few cases where the Linux kernel acquires more than one
+instance of the same lock-class. Such cases typically happen when there
+is some sort of hierarchy within objects of the same type. In these
+cases there is an inherent "natural" ordering between the two objects
+(defined by the properties of the hierarchy), and the kernel grabs the
+locks in this fixed order on each of the objects.
+
+An example of such an object hierarchy that results in "nested locking"
+is that of a "whole disk" block-dev object and a "partition" block-dev
+object; the partition is "part of" the whole device and as long as one
+always takes the whole disk lock as a higher lock than the partition
+lock, the lock ordering is fully correct. The validator does not
+automatically detect this natural ordering, as the locking rule behind
+the ordering is not static.
+
+In order to teach the validator about this correct usage model, new
+versions of the various locking primitives were added that allow you to
+specify a "nesting level". An example call, for the block device mutex,
+looks like this:
+
+enum bdev_bd_mutex_lock_class
+{
+ BD_MUTEX_NORMAL,
+ BD_MUTEX_WHOLE,
+ BD_MUTEX_PARTITION
+};
+
+ mutex_lock_nested(&bdev->bd_contains->bd_mutex, BD_MUTEX_PARTITION);
+
+In this case the locking is done on a bdev object that is known to be a
+partition.
+
+The validator treats a lock that is taken in such a nested fashion as a
+separate (sub)class for the purposes of validation.
+
+Note: When changing code to use the _nested() primitives, be careful and
+check really thoroughly that the hierarchy is correctly mapped; otherwise
+you can get false positives or false negatives.
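+
+A generic illustration (hypothetical parent/child objects, not the
+bdev code): the outer lock of the hierarchy is taken normally and the
+inner one is annotated with its own subclass, so the validator does
+not treat it as a recursive acquisition of the same class:
+
+  mutex_lock(&parent->lock);		/* subclass 0, the default */
+  mutex_lock_nested(&child->lock, 1);	/* subclass 1: one level down */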
+
+Proof of 100% correctness:
+--------------------------
+
+The validator achieves perfect, mathematical 'closure' (proof of locking
+correctness) in the sense that for every simple, standalone single-task
+locking sequence that occurred at least once during the lifetime of the
+kernel, the validator proves with 100% certainty that no
+combination and timing of these locking sequences can cause any class of
+lock related deadlock. [*]
+
+I.e. complex multi-CPU and multi-task locking scenarios do not have to
+occur in practice to prove a deadlock: only the simple 'component'
+locking chains have to occur at least once (anytime, in any
+task/context) for the validator to be able to prove correctness. (For
+example, complex deadlocks that would normally need more than 3 CPUs and
+a very unlikely constellation of tasks, irq-contexts and timings to
+occur, can be detected on a plain, lightly loaded single-CPU system as
+well!)
+
+This radically decreases the complexity of locking related QA of the
+kernel: what has to be done during QA is to trigger as many "simple"
+single-task locking dependencies in the kernel as possible, at least
+once, to prove locking correctness - instead of having to trigger every
+possible combination of locking interaction between CPUs, combined with
+every possible hardirq and softirq nesting scenario (which is impossible
+to do in practice).
+
+[*] assuming that the validator itself is 100% correct, and no other
+ part of the system corrupts the state of the validator in any way.
+ We also assume that all NMI/SMM paths [which could interrupt
+ even hardirq-disabled codepaths] are correct and do not interfere
+ with the validator. We also assume that the 64-bit 'chain hash'
+ value is unique for every lock-chain in the system. Also, lock
+ recursion must not be higher than 20.
+
+Performance:
+------------
+
+The above rules require _massive_ amounts of runtime checking. If we did
+that for every lock taken and for every irqs-enable event, it would
+render the system practically unusably slow. The complexity of checking
+is O(N^2), so even with just a few hundred lock-classes we'd have to do
+tens of thousands of checks for every event.
+
+This problem is solved by checking any given 'locking scenario' (unique
+sequence of locks taken after each other) only once. A simple stack of
+held locks is maintained, and a lightweight 64-bit hash value is
+calculated; this hash is unique for every lock chain. When the chain
+is validated for the first time, the hash value is put into a hash
+table, which can be checked in a lockfree manner. If the
+locking chain occurs again later on, the hash table tells us that we
+don't have to validate the chain again.
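+
+In a rough sketch (hash_mix, lookup_chain_cache, held_locks and depth
+are illustrative names, not necessarily the in-kernel ones), the fast
+path when a lock is acquired is:
+
+  u64 chain_key = 0;
+  int i;
+
+  for (i = 0; i < depth; i++)
+	chain_key = hash_mix(chain_key, held_locks[i].class_id);
+  if (!lookup_chain_cache(chain_key))
+	validate_chain();	/* full O(N^2) checks, then cache the key */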
--- /dev/null
+/proc/sys/net/ipv4/vs/* Variables:
+
+am_droprate - INTEGER
+ default 10
+
+ It sets the always mode drop rate, which is used in mode 3
+ of the drop_packet defense.
+
+amemthresh - INTEGER
+ default 1024
+
+ It sets the available memory threshold (in pages), which is
+ used in the automatic modes of defense. When there is not
+ enough available memory, the respective strategy will be
+ enabled and the variable is automatically set to 2, otherwise
+ the strategy is disabled and the variable is set to 1.
+
+cache_bypass - BOOLEAN
+ 0 - disabled (default)
+ not 0 - enabled
+
+ If it is enabled, packets are forwarded to the original destination
+ directly when no cache server is available and the destination
+ address is not local (iph->daddr is RTN_UNICAST). It is mostly
+ used in a transparent web cache cluster.
+
+debug_level - INTEGER
+ 0 - transmission error messages (default)
+ 1 - non-fatal error messages
+ 2 - configuration
+ 3 - destination trash
+ 4 - drop entry
+ 5 - service lookup
+ 6 - scheduling
+ 7 - connection new/expire, lookup and synchronization
+ 8 - state transition
+ 9 - binding destination, template checks and applications
+ 10 - IPVS packet transmission
+ 11 - IPVS packet handling (ip_vs_in/ip_vs_out)
+ 12 or more - packet traversal
+
+ Only available when IPVS is compiled with CONFIG_IPVS_DEBUG enabled.
+
+ Higher debugging levels include the messages for lower debugging
+ levels, so setting debug level 2 includes level 0, 1 and 2
+ messages. Thus, logging becomes more and more verbose the higher
+ the level.
+
+drop_entry - INTEGER
+ 0 - disabled (default)
+
+ The drop_entry defense is to randomly drop entries in the
+ connection hash table, just in order to collect back some
+ memory for new connections. In the current code, the
+ drop_entry procedure can be activated every second, then it
+ randomly scans 1/32 of the whole table and drops entries that are in
+ the SYN-RECV/SYNACK state, which should be effective against
+ syn-flooding attacks.
+
+ The valid values of drop_entry are from 0 to 3, where 0 means
+ that this strategy is always disabled, 1 and 2 mean automatic
+ modes (when there is not enough available memory, the strategy
+ is enabled and the variable is automatically set to 2,
+ otherwise the strategy is disabled and the variable is set to
+ 1), and 3 means that the strategy is always enabled.
+
+drop_packet - INTEGER
+ 0 - disabled (default)
+
+ The drop_packet defense is designed to drop 1/rate packets
+ before forwarding them to real servers. If the rate is 1, then
+ drop all the incoming packets.
+
+ The value definition is the same as that of drop_entry. In
+ the automatic mode, the rate is determined by the following
+ formula: rate = amemthresh / (amemthresh - available_memory)
+ when available memory is less than the available memory
+ threshold. When mode 3 is set, the always mode drop rate
+ is controlled by the /proc/sys/net/ipv4/vs/am_droprate.
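+
+ For example, with the default amemthresh of 1024 pages and 512
+ pages available, rate = 1024 / (1024 - 512) = 2, so roughly every
+ 2nd packet is dropped; as available memory shrinks further the
+ rate approaches 1 and eventually every packet is dropped.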
+
+expire_nodest_conn - BOOLEAN
+ 0 - disabled (default)
+ not 0 - enabled
+
+ The default value is 0: the load balancer will silently drop
+ packets when their destination server is not available. This may
+ be useful when a user-space monitoring program deletes the
+ destination server (because of server overload or wrong
+ detection) and adds the server back later, so that connections
+ to the server can continue.
+
+ If this feature is enabled, the load balancer will expire the
+ connection immediately when a packet arrives and its
+ destination server is not available, then the client program
+ will be notified that the connection is closed. This is
+ equivalent to the feature some people require: flushing
+ connections when their destination is not available.
+
+expire_quiescent_template - BOOLEAN
+ 0 - disabled (default)
+ not 0 - enabled
+
+ When set to a non-zero value, the load balancer will expire
+ persistent templates when the destination server is quiescent.
+ This may be useful when a user makes a destination server
+ quiescent by setting its weight to 0 and it is desired that
+ subsequent, otherwise persistent, connections are sent to a
+ different destination server. By default, new persistent
+ connections to quiescent destination servers are allowed.
+
+ If this feature is enabled, the load balancer will expire the
+ persistence template if it is to be used to schedule a new
+ connection and the destination server is quiescent.
+
+nat_icmp_send - BOOLEAN
+ 0 - disabled (default)
+ not 0 - enabled
+
+ It controls sending icmp error messages (ICMP_DEST_UNREACH)
+ for VS/NAT when the load balancer receives packets from real
+ servers but the connection entries don't exist.
+
+secure_tcp - INTEGER
+ 0 - disabled (default)
+
+ The secure_tcp defense is to use a more complicated state
+ transition table and some possible short timeouts of each
+ state. In VS/NAT, it delays entering the ESTABLISHED state
+ until the real server starts to send data and ACK packets
+ (after the 3-way handshake).
+
+ The value definition is the same as that of drop_entry or
+ drop_packet.
+
+sync_threshold - INTEGER
+ default 3
+
+ It sets the synchronization threshold, which is the minimum number
+ of incoming packets that a connection needs to receive before
+ the connection will be synchronized. A connection will be
+ synchronized every time the number of its incoming packets
+ modulo 50 equals the threshold. The range of the threshold is
+ from 0 to 49.
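+
+ For example, with the default threshold of 3 a connection is
+ synchronized when its 3rd, 53rd, 103rd, ... packet arrives, i.e.
+ whenever its packet count modulo 50 equals 3.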
Use these for address resources that are not described by "normal" PCI
interfaces (e.g. BAR).
- All interrupt handlers should be registered with SA_SHIRQ and use the devid
+ All interrupt handlers should be registered with IRQF_SHARED and use the devid
to map IRQs to devices (remember that all PCI interrupts are shared).
interrupts = <1d 3>;
interrupt-parent = <40000>;
num-channels = <4>;
- channel-fifo-len = <24>;
+ channel-fifo-len = <18>;
exec-units-mask = <000000fe>;
- descriptor-types-mask = <073f1127>;
+ descriptor-types-mask = <012b0ebf>;
};
+1 Release Date : Sun May 14 22:49:52 PDT 2006 - Sumant Patro <Sumant.Patro@lsil.com>
+2 Current Version : 00.00.03.01
+3 Older Version : 00.00.02.04
+
+i. Added support for ZCR controller.
+
+ New device id 0x413 added.
+
+ii. Bug fix : Disable controller interrupt before firing INIT cmd to FW.
+
+ The interrupt is enabled after the required initialization is over.
+ This is done to ensure that the driver is ready to handle interrupts
+ when they are generated by the controller.
+
+ -Sumant Patro <Sumant.Patro@lsil.com>
+
1 Release Date : Wed Feb 03 14:31:44 PST 2006 - Sumant Patro <Sumant.Patro@lsil.com>
2 Current Version : 00.00.02.04
3 Older Version : 00.00.02.04
If you want to share the IRQ with another device and the driver refuses to
do so, you might succeed with changing the DC390_IRQ type in tmscsim.c to
-SA_SHIRQ | SA_INTERRUPT.
+IRQF_SHARED | IRQF_DISABLED.
3.Features
}
chip->port = pci_resource_start(pci, 0);
if (request_irq(pci->irq, snd_mychip_interrupt,
- SA_INTERRUPT|SA_SHIRQ, "My Chip", chip)) {
+ IRQF_DISABLED|IRQF_SHARED, "My Chip", chip)) {
printk(KERN_ERR "cannot grab irq %d\n", pci->irq);
snd_mychip_free(chip);
return -EBUSY;
<programlisting>
<![CDATA[
if (request_irq(pci->irq, snd_mychip_interrupt,
- SA_INTERRUPT|SA_SHIRQ, "My Chip", chip)) {
+ IRQF_DISABLED|IRQF_SHARED, "My Chip", chip)) {
printk(KERN_ERR "cannot grab irq %d\n", pci->irq);
snd_mychip_free(chip);
return -EBUSY;
<para>
On the PCI bus, the interrupts can be shared. Thus,
- <constant>SA_SHIRQ</constant> is given as the interrupt flag of
+ <constant>IRQF_SHARED</constant> is given as the interrupt flag of
<function>request_irq()</function>.
</para>
- block_dump
- drop-caches
- zone_reclaim_mode
+- min_unmapped_ratio
- panic_on_oom
==============================================================
=============================================================
+min_unmapped_ratio:
+
+This is available only on NUMA kernels.
+
+A percentage of the file backed pages in each zone. Zone reclaim will only
+occur if more than this percentage of pages are file backed and unmapped.
+This is to ensure that a minimal number of local pages is still available for
+file I/O even if the node is overallocated.
+
+The default is 1 percent.
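+
+For example, with the default of 1 percent, zone reclaim is attempted on
+a zone of 1,000,000 pages only while more than 10,000 of its pages are
+unmapped file backed pages.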
+
+=============================================================
+
panic_on_oom
This enables or disables panic on out-of-memory feature. If this is set to 1,
DOCBOOK FOR DOCUMENTATION
P: Martin Waitz
M: tali@admingilde.org
+P: Randy Dunlap
+M: rdunlap@xenotime.net
T: git http://tali.admingilde.org/git/linux-docbook.git
S: Maintained
W: http://www.pnd-pc.demon.co.uk/promise/
S: Maintained
+PVRUSB2 VIDEO4LINUX DRIVER
+P: Mike Isely
+M: isely@pobox.com
+L: pvrusb2@isely.net
+L: video4linux-list@redhat.com
+W: http://www.isely.net/pvrusb2/
+S: Maintained
+
PXA2xx SUPPORT
P: Nicolas Pitre
M: nico@cam.org
CFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
-fno-strict-aliasing -fno-common
+# Force gcc to behave correctly even for buggy distributions
+CFLAGS += $(call cc-option, -fno-stack-protector-all \
+ -fno-stack-protector)
AFLAGS := -D__ASSEMBLY__
# Read KERNELRELEASE from include/config/kernel.release (if it exists)
# prepare2 creates a makefile if using a separate output directory
prepare2: prepare3 outputmakefile
-prepare1: prepare2 include/linux/version.h include/asm \
- include/config/auto.conf
+prepare1: prepare2 include/linux/version.h include/linux/utsrelease.h \
+ include/asm include/config/auto.conf
ifneq ($(KBUILD_MODULES),)
$(Q)mkdir -p $(MODVERDIR)
$(Q)rm -f $(MODVERDIR)/*
# needs to be updated, so this check is forced on all builds
uts_len := 64
+define filechk_utsrelease.h
+ if [ `echo -n "$(KERNELRELEASE)" | wc -c ` -gt $(uts_len) ]; then \
+ echo '"$(KERNELRELEASE)" exceeds $(uts_len) characters' >&2; \
+ exit 1; \
+ fi; \
+ (echo \#define UTS_RELEASE \"$(KERNELRELEASE)\";)
+endef
define filechk_version.h
- if [ `echo -n "$(KERNELRELEASE)" | wc -c ` -gt $(uts_len) ]; then \
- echo '"$(KERNELRELEASE)" exceeds $(uts_len) characters' >&2; \
- exit 1; \
- fi; \
- (echo \#define UTS_RELEASE \"$(KERNELRELEASE)\"; \
- echo \#define LINUX_VERSION_CODE `expr $(VERSION) \\* 65536 + $(PATCHLEVEL) \\* 256 + $(SUBLEVEL)`; \
- echo '#define KERNEL_VERSION(a,b,c) (((a) << 16) + ((b) << 8) + (c))'; \
- )
+ (echo \#define LINUX_VERSION_CODE $(shell \
+ expr $(VERSION) \* 65536 + $(PATCHLEVEL) \* 256 + $(SUBLEVEL)); \
+ echo '#define KERNEL_VERSION(a,b,c) (((a) << 16) + ((b) << 8) + (c))';)
endef
-include/linux/version.h: $(srctree)/Makefile include/config/kernel.release FORCE
+include/linux/version.h: $(srctree)/Makefile FORCE
$(call filechk,version.h)
+include/linux/utsrelease.h: include/config/kernel.release FORCE
+ $(call filechk,utsrelease.h)
+
# ---------------------------------------------------------------------------
PHONY += depend dep
# Directories & files removed with 'make mrproper'
MRPROPER_DIRS += include/config include2
MRPROPER_FILES += .config .config.old include/asm .version .old_version \
- include/linux/autoconf.h include/linux/version.h \
+ include/linux/autoconf.h include/linux/version.h \
+ include/linux/utsrelease.h \
Module.symvers tags TAGS cscope*
# clean - Delete most, but leave enough to build external modules
*/
#include <linux/kernel.h>
#include <linux/string.h>
-#include <linux/version.h>
+#include <linux/utsrelease.h>
#include <linux/mm.h>
#include <asm/system.h>
*/
#include <linux/kernel.h>
#include <linux/string.h>
-#include <linux/version.h>
+#include <linux/utsrelease.h>
#include <linux/mm.h>
#include <asm/system.h>
*/
#include <linux/kernel.h>
#include <linux/string.h>
-#include <linux/version.h>
+#include <linux/utsrelease.h>
#include <linux/mm.h>
#include <asm/system.h>
#endif
seq_printf(p, " %14s", irq_desc[irq].chip->typename);
seq_printf(p, " %c%s",
- (action->flags & SA_INTERRUPT)?'+':' ',
+ (action->flags & IRQF_DISABLED)?'+':' ',
action->name);
for (action=action->next; action; action = action->next) {
seq_printf(p, ", %c%s",
- (action->flags & SA_INTERRUPT)?'+':' ',
+ (action->flags & IRQF_DISABLED)?'+':' ',
action->name);
}
struct irqaction timer_irqaction = {
.handler = timer_interrupt,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.name = "timer",
};
*/
unsigned long
-thread_saved_pc(task_t *t)
+thread_saved_pc(struct task_struct *t)
{
unsigned long base = (unsigned long)task_stack_page(t);
unsigned long fp, sp = task_thread_info(t)->pcb.ksp;
* the IPL from being dropped during handler processing.
*/
if (irq_desc[irq].action)
- irq_desc[irq].action->flags |= SA_INTERRUPT;
+ irq_desc[irq].action->flags |= IRQF_DISABLED;
return 0;
}
* all reported to the kernel as machine checks, so the handler
* is a nop so it can be called to count the individual events.
*/
- request_irq(63+16, titan_intr_nop, SA_INTERRUPT,
+ request_irq(63+16, titan_intr_nop, IRQF_DISABLED,
"CChip Error", NULL);
- request_irq(62+16, titan_intr_nop, SA_INTERRUPT,
+ request_irq(62+16, titan_intr_nop, IRQF_DISABLED,
"PChip 0 H_Error", NULL);
- request_irq(61+16, titan_intr_nop, SA_INTERRUPT,
+ request_irq(61+16, titan_intr_nop, IRQF_DISABLED,
"PChip 1 H_Error", NULL);
- request_irq(60+16, titan_intr_nop, SA_INTERRUPT,
+ request_irq(60+16, titan_intr_nop, IRQF_DISABLED,
"PChip 0 C_Error", NULL);
- request_irq(59+16, titan_intr_nop, SA_INTERRUPT,
+ request_irq(59+16, titan_intr_nop, IRQF_DISABLED,
"PChip 1 C_Error", NULL);
/*
* Hook a couple of extra err interrupts that the
* common titan code won't.
*/
- request_irq(53+16, titan_intr_nop, SA_INTERRUPT,
+ request_irq(53+16, titan_intr_nop, IRQF_DISABLED,
"NMI", NULL);
- request_irq(50+16, titan_intr_nop, SA_INTERRUPT,
+ request_irq(50+16, titan_intr_nop, IRQF_DISABLED,
"Temperature Warning", NULL);
/*
<file:Documentation/mca.txt> (and especially the web page given
there) before attempting to build an MCA bus kernel.
+config GENERIC_HARDIRQS
+ bool
+ default y
+
+config HARDIRQS_SW_RESEND
+ bool
+ default y
+
+config GENERIC_IRQ_PROBE
+ bool
+ default y
+
config RWSEM_GENERIC_SPINLOCK
bool
default y
help
This enables support for ARM Ltd Versatile board.
-config ARCH_AT91RM9200
- bool "Atmel AT91RM9200"
+config ARCH_AT91
+ bool "Atmel AT91"
help
- Say Y here if you intend to run this kernel on an Atmel
- AT91RM9200-based board.
+ This enables support for systems based on the Atmel AT91RM9200
+ and AT91SAM9xxx processors.
config ARCH_CLPS7500
bool "Cirrus CL-PS7500FE"
ARCH_LUBBOCK || MACH_MAINSTONE || ARCH_NETWINDER || \
ARCH_OMAP || ARCH_P720T || ARCH_PXA_IDP || \
ARCH_SA1100 || ARCH_SHARK || ARCH_VERSATILE || \
- ARCH_AT91RM9200
+ ARCH_AT91RM9200 || MACH_TRIZEPS4
help
If you say Y here, the LEDs on your machine will be used
to provide useful information about your current system status.
endmenu
-if (ARCH_SA1100 || ARCH_INTEGRATOR || ARCH_OMAP1)
+if (ARCH_SA1100 || ARCH_INTEGRATOR || ARCH_OMAP)
menu "CPU Frequency scaling"
machine-$(CONFIG_ARCH_H720X) := h720x
machine-$(CONFIG_ARCH_AAEC2000) := aaec2000
machine-$(CONFIG_ARCH_REALVIEW) := realview
- machine-$(CONFIG_ARCH_AT91RM9200) := at91rm9200
+ machine-$(CONFIG_ARCH_AT91) := at91rm9200
machine-$(CONFIG_ARCH_EP93XX) := ep93xx
machine-$(CONFIG_ARCH_PNX4008) := pnx4008
machine-$(CONFIG_ARCH_NETX) := netx
mov r1, #-1
mcr p15, 0, r3, c2, c0, 0 @ load page table pointer
mcr p15, 0, r1, c3, c0, 0 @ load domain access control
- mcr p15, 0, r0, c1, c0, 0 @ load control register
- mov pc, lr
+ b 1f
+ .align 5 @ cache line aligned
+1: mcr p15, 0, r0, c1, c0, 0 @ load control register
+ mrc p15, 0, r0, c1, c0, 0 @ and read it back to
+ sub pc, lr, r0, lsr #32 @ properly flush pipeline
/*
* All code following this line is relocatable. It is relocated by
static void __iomem *gic_dist_base;
static void __iomem *gic_cpu_base;
+static DEFINE_SPINLOCK(irq_controller_lock);
/*
* Routines to acknowledge, disable and enable interrupts
static void gic_ack_irq(unsigned int irq)
{
u32 mask = 1 << (irq % 32);
+
+ spin_lock(&irq_controller_lock);
writel(mask, gic_dist_base + GIC_DIST_ENABLE_CLEAR + (irq / 32) * 4);
writel(irq, gic_cpu_base + GIC_CPU_EOI);
+ spin_unlock(&irq_controller_lock);
}
static void gic_mask_irq(unsigned int irq)
{
u32 mask = 1 << (irq % 32);
+
+ spin_lock(&irq_controller_lock);
writel(mask, gic_dist_base + GIC_DIST_ENABLE_CLEAR + (irq / 32) * 4);
+ spin_unlock(&irq_controller_lock);
}
static void gic_unmask_irq(unsigned int irq)
{
u32 mask = 1 << (irq % 32);
+
+ spin_lock(&irq_controller_lock);
writel(mask, gic_dist_base + GIC_DIST_ENABLE_SET + (irq / 32) * 4);
+ spin_unlock(&irq_controller_lock);
}
#ifdef CONFIG_SMP
-static void gic_set_cpu(struct irqdesc *desc, unsigned int irq, unsigned int cpu)
+static void gic_set_cpu(unsigned int irq, cpumask_t mask_val)
{
void __iomem *reg = gic_dist_base + GIC_DIST_TARGET + (irq & ~3);
unsigned int shift = (irq % 4) * 8;
+ unsigned int cpu = first_cpu(mask_val);
u32 val;
+ spin_lock(&irq_controller_lock);
+ irq_desc[irq].cpu = cpu;
val = readl(reg) & ~(0xff << shift);
val |= 1 << (cpu + shift);
writel(val, reg);
+ spin_unlock(&irq_controller_lock);
}
#endif
.mask = gic_mask_irq,
.unmask = gic_unmask_irq,
#ifdef CONFIG_SMP
- .set_cpu = gic_set_cpu,
+ .set_affinity = gic_set_cpu,
#endif
};
sa1111_irq_handler(unsigned int irq, struct irqdesc *desc, struct pt_regs *regs)
{
unsigned int stat0, stat1, i;
- void __iomem *base = desc->data;
+ void __iomem *base = get_irq_data(irq);
stat0 = sa1111_readl(base + SA1111_INTSTATCLR0);
stat1 = sa1111_readl(base + SA1111_INTSTATCLR1);
for (i = IRQ_SA1111_START; stat0; i++, stat0 >>= 1)
if (stat0 & 1)
- do_edge_IRQ(i, irq_desc + i, regs);
+ handle_edge_irq(i, irq_desc + i, regs);
for (i = IRQ_SA1111_START + 32; stat1; i++, stat1 >>= 1)
if (stat1 & 1)
- do_edge_IRQ(i, irq_desc + i, regs);
+ handle_edge_irq(i, irq_desc + i, regs);
/* For level-based interrupts */
desc->chip->unmask(irq);
#include <linux/timex.h>
#include <linux/init.h>
#include <linux/interrupt.h>
+#include <linux/irq.h>
#include <asm/hardware.h>
#include <asm/io.h>
static struct irqaction ioc_timer_irq = {
.name = "timer",
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.handler = ioc_timer_interrupt
};
# CONFIG_ARCH_IMX is not set
# CONFIG_ARCH_H720X is not set
# CONFIG_ARCH_AAEC2000 is not set
+CONFIG_ARCH_AT91=y
CONFIG_ARCH_AT91RM9200=y
#
# CONFIG_ARCH_IMX is not set
# CONFIG_ARCH_H720X is not set
# CONFIG_ARCH_AAEC2000 is not set
+CONFIG_ARCH_AT91=y
CONFIG_ARCH_AT91RM9200=y
#
# CONFIG_ARCH_IMX is not set
# CONFIG_ARCH_H720X is not set
# CONFIG_ARCH_AAEC2000 is not set
+CONFIG_ARCH_AT91=y
CONFIG_ARCH_AT91RM9200=y
#
# CONFIG_ARCH_VERSATILE is not set
# CONFIG_ARCH_IMX is not set
# CONFIG_ARCH_H720X is not set
+CONFIG_ARCH_AT91=y
CONFIG_ARCH_AT91RM9200=y
#
# CONFIG_ARCH_IMX is not set
# CONFIG_ARCH_H720X is not set
# CONFIG_ARCH_AAEC2000 is not set
+CONFIG_ARCH_AT91=y
CONFIG_ARCH_AT91RM9200=y
#
# CONFIG_ARCH_IMX is not set
# CONFIG_ARCH_H720X is not set
# CONFIG_ARCH_AAEC2000 is not set
+CONFIG_ARCH_AT91=y
CONFIG_ARCH_AT91RM9200=y
#
# CONFIG_ARCH_IMX is not set
# CONFIG_ARCH_H720X is not set
# CONFIG_ARCH_AAEC2000 is not set
+CONFIG_ARCH_AT91=y
CONFIG_ARCH_AT91RM9200=y
#
# CONFIG_ARCH_IMX is not set
# CONFIG_ARCH_H720X is not set
# CONFIG_ARCH_AAEC2000 is not set
+CONFIG_ARCH_AT91=y
CONFIG_ARCH_AT91RM9200=y
#
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.14
-# Wed Nov 9 18:53:40 2005
+# Linux kernel version: 2.6.17
+# Thu Jun 29 15:25:18 2006
#
CONFIG_ARM=y
CONFIG_MMU=y
-CONFIG_UID16=y
CONFIG_RWSEM_GENERIC_SPINLOCK=y
+CONFIG_GENERIC_HWEIGHT=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
+CONFIG_VECTORS_BASE=0xffff0000
+CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
#
# Code maturity level options
#
CONFIG_EXPERIMENTAL=y
-CONFIG_CLEAN_COMPILE=y
CONFIG_BROKEN_ON_SMP=y
CONFIG_LOCK_KERNEL=y
CONFIG_INIT_ENV_ARG_LIMIT=32
# CONFIG_BSD_PROCESS_ACCT is not set
CONFIG_SYSCTL=y
# CONFIG_AUDIT is not set
-# CONFIG_HOTPLUG is not set
-CONFIG_KOBJECT_UEVENT=y
# CONFIG_IKCONFIG is not set
+# CONFIG_RELAY is not set
CONFIG_INITRAMFS_SOURCE=""
+CONFIG_UID16=y
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
# CONFIG_EMBEDDED is not set
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_EXTRA_PASS is not set
+CONFIG_HOTPLUG=y
CONFIG_PRINTK=y
CONFIG_BUG=y
+CONFIG_ELF_CORE=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_EPOLL=y
-CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_SHMEM=y
-CONFIG_CC_ALIGN_FUNCTIONS=0
-CONFIG_CC_ALIGN_LABELS=0
-CONFIG_CC_ALIGN_LOOPS=0
-CONFIG_CC_ALIGN_JUMPS=0
+CONFIG_SLAB=y
# CONFIG_TINY_SHMEM is not set
CONFIG_BASE_SMALL=0
+# CONFIG_SLOB is not set
#
# Loadable module support
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_MODULE_FORCE_UNLOAD is not set
-CONFIG_OBSOLETE_MODPARM=y
# CONFIG_MODVERSIONS is not set
# CONFIG_MODULE_SRCVERSION_ALL is not set
# CONFIG_KMOD is not set
#
# Block layer
#
+# CONFIG_BLK_DEV_IO_TRACE is not set
#
# IO Schedulers
#
# System Type
#
+# CONFIG_ARCH_AAEC2000 is not set
+# CONFIG_ARCH_INTEGRATOR is not set
+# CONFIG_ARCH_REALVIEW is not set
+# CONFIG_ARCH_VERSATILE is not set
+# CONFIG_ARCH_AT91RM9200 is not set
# CONFIG_ARCH_CLPS7500 is not set
# CONFIG_ARCH_CLPS711X is not set
# CONFIG_ARCH_CO285 is not set
# CONFIG_ARCH_EBSA110 is not set
+# CONFIG_ARCH_EP93XX is not set
# CONFIG_ARCH_FOOTBRIDGE is not set
-# CONFIG_ARCH_INTEGRATOR is not set
+# CONFIG_ARCH_NETX is not set
+# CONFIG_ARCH_H720X is not set
+# CONFIG_ARCH_IMX is not set
# CONFIG_ARCH_IOP3XX is not set
# CONFIG_ARCH_IXP4XX is not set
# CONFIG_ARCH_IXP2000 is not set
+# CONFIG_ARCH_IXP23XX is not set
# CONFIG_ARCH_L7200 is not set
+# CONFIG_ARCH_PNX4008 is not set
# CONFIG_ARCH_PXA is not set
# CONFIG_ARCH_RPC is not set
# CONFIG_ARCH_SA1100 is not set
# CONFIG_ARCH_SHARK is not set
# CONFIG_ARCH_LH7A40X is not set
CONFIG_ARCH_OMAP=y
-# CONFIG_ARCH_VERSATILE is not set
-# CONFIG_ARCH_REALVIEW is not set
-# CONFIG_ARCH_IMX is not set
-# CONFIG_ARCH_H720X is not set
-# CONFIG_ARCH_AAEC2000 is not set
#
# TI OMAP Implementations
CONFIG_MACH_OMAP_H2=y
# CONFIG_MACH_OMAP_H3 is not set
# CONFIG_MACH_OMAP_OSK is not set
+# CONFIG_MACH_NOKIA770 is not set
# CONFIG_MACH_OMAP_GENERIC is not set
#
#
# Bus support
#
-CONFIG_ISA_DMA_API=y
#
# PCCARD (PCMCIA/CardBus) support
#
CONFIG_PREEMPT=y
CONFIG_NO_IDLE_HZ=y
+CONFIG_HZ=128
+# CONFIG_AEABI is not set
# CONFIG_ARCH_DISCONTIGMEM_ENABLE is not set
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_FLATMEM_MANUAL=y
# Power management options
#
CONFIG_PM=y
+CONFIG_PM_LEGACY=y
+# CONFIG_PM_DEBUG is not set
# CONFIG_APM is not set
#
#
# Networking options
#
+# CONFIG_NETDEBUG is not set
CONFIG_PACKET=y
# CONFIG_PACKET_MMAP is not set
CONFIG_UNIX=y
+CONFIG_XFRM=y
+# CONFIG_XFRM_USER is not set
# CONFIG_NET_KEY is not set
CONFIG_INET=y
# CONFIG_IP_MULTICAST is not set
# CONFIG_INET_AH is not set
# CONFIG_INET_ESP is not set
# CONFIG_INET_IPCOMP is not set
+# CONFIG_INET_XFRM_TUNNEL is not set
# CONFIG_INET_TUNNEL is not set
+CONFIG_INET_XFRM_MODE_TRANSPORT=y
+CONFIG_INET_XFRM_MODE_TUNNEL=y
CONFIG_INET_DIAG=y
CONFIG_INET_TCP_DIAG=y
# CONFIG_TCP_CONG_ADVANCED is not set
CONFIG_TCP_CONG_BIC=y
# CONFIG_IPV6 is not set
+# CONFIG_INET6_XFRM_TUNNEL is not set
+# CONFIG_INET6_TUNNEL is not set
+# CONFIG_NETWORK_SECMARK is not set
# CONFIG_NETFILTER is not set
#
# SCTP Configuration (EXPERIMENTAL)
#
# CONFIG_IP_SCTP is not set
+
+#
+# TIPC Configuration (EXPERIMENTAL)
+#
+# CONFIG_TIPC is not set
# CONFIG_ATM is not set
# CONFIG_BRIDGE is not set
# CONFIG_VLAN_8021Q is not set
# QoS and/or fair queueing
#
# CONFIG_NET_SCHED is not set
-# CONFIG_NET_CLS_ROUTE is not set
#
# Network testing
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
# CONFIG_FW_LOADER is not set
+# CONFIG_SYS_HYPERVISOR is not set
+
+#
+# Connector - unified userspace <-> kernelspace linker
+#
+# CONFIG_CONNECTOR is not set
#
# Memory Technology Devices (MTD)
CONFIG_VT=y
CONFIG_VT_CONSOLE=y
CONFIG_HW_CONSOLE=y
+# CONFIG_VT_HW_CONSOLE_BINDING is not set
# CONFIG_SERIAL_NONSTANDARD is not set
#
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_8250_NR_UARTS=4
+CONFIG_SERIAL_8250_RUNTIME_UARTS=4
# CONFIG_SERIAL_8250_EXTENDED is not set
#
# Watchdog Device Drivers
#
# CONFIG_SOFT_WATCHDOG is not set
+# CONFIG_HW_RANDOM is not set
# CONFIG_NVRAM is not set
-# CONFIG_RTC is not set
# CONFIG_DTLK is not set
# CONFIG_R3964 is not set
#
# TPM devices
#
+# CONFIG_TCG_TPM is not set
# CONFIG_TELCLOCK is not set
#
#
# CONFIG_I2C is not set
+#
+# SPI support
+#
+# CONFIG_SPI is not set
+# CONFIG_SPI_MASTER is not set
+
+#
+# Dallas's 1-wire bus
+#
+
#
# Hardware Monitoring support
#
CONFIG_HWMON=y
# CONFIG_HWMON_VID is not set
+# CONFIG_SENSORS_ABITUGURU is not set
+# CONFIG_SENSORS_F71805F is not set
# CONFIG_HWMON_DEBUG_CHIP is not set
#
#
#
-# Multimedia Capabilities Port drivers
+# LED devices
+#
+# CONFIG_NEW_LEDS is not set
+
+#
+# LED drivers
+#
+
+#
+# LED Triggers
#
#
# Multimedia devices
#
# CONFIG_VIDEO_DEV is not set
+CONFIG_VIDEO_V4L2=y
#
# Digital Video Broadcasting Devices
#
# Graphics support
#
+CONFIG_FIRMWARE_EDID=y
CONFIG_FB=y
# CONFIG_FB_CFB_FILLRECT is not set
# CONFIG_FB_CFB_COPYAREA is not set
# CONFIG_FB_CFB_IMAGEBLIT is not set
# CONFIG_FB_MACMODES is not set
+# CONFIG_FB_BACKLIGHT is not set
CONFIG_FB_MODE_HELPERS=y
# CONFIG_FB_TILEBLITTING is not set
# CONFIG_FB_S1D13XXX is not set
# CONFIG_FONT_SUN8x16 is not set
# CONFIG_FONT_SUN12x22 is not set
# CONFIG_FONT_10x18 is not set
-# CONFIG_FONT_RL is not set
#
# Logo configuration
# Open Sound System
#
CONFIG_SOUND_PRIME=y
-# CONFIG_OBSOLETE_OSS_DRIVER is not set
# CONFIG_SOUND_MSNDCLAS is not set
# CONFIG_SOUND_MSNDPIN is not set
-# CONFIG_SOUND_OSS is not set
#
# USB support
#
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB_ARCH_HAS_OHCI=y
+# CONFIG_USB_ARCH_HAS_EHCI is not set
# CONFIG_USB is not set
#
# USB Gadget Support
#
# CONFIG_USB_GADGET is not set
-# CONFIG_USB_GADGET_NET2280 is not set
-# CONFIG_USB_GADGET_PXA2XX is not set
-# CONFIG_USB_GADGET_GOKU is not set
-# CONFIG_USB_GADGET_LH7A40X is not set
-# CONFIG_USB_GADGET_OMAP is not set
-# CONFIG_USB_GADGET_DUMMY_HCD is not set
-# CONFIG_USB_ZERO is not set
-# CONFIG_USB_ETH is not set
-# CONFIG_USB_GADGETFS is not set
-# CONFIG_USB_FILE_STORAGE is not set
-# CONFIG_USB_G_SERIAL is not set
#
# MMC/SD Card support
#
# CONFIG_MMC is not set
+#
+# Real Time Clock
+#
+CONFIG_RTC_LIB=y
+# CONFIG_RTC_CLASS is not set
+
#
# File systems
#
# CONFIG_EXT2_FS_XATTR is not set
# CONFIG_EXT2_FS_XIP is not set
# CONFIG_EXT3_FS is not set
-# CONFIG_JBD is not set
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
# CONFIG_FS_POSIX_ACL is not set
# CONFIG_XFS_FS is not set
+# CONFIG_OCFS2_FS is not set
# CONFIG_MINIX_FS is not set
CONFIG_ROMFS_FS=y
CONFIG_INOTIFY=y
+CONFIG_INOTIFY_USER=y
# CONFIG_QUOTA is not set
CONFIG_DNOTIFY=y
# CONFIG_AUTOFS_FS is not set
# CONFIG_TMPFS is not set
# CONFIG_HUGETLB_PAGE is not set
CONFIG_RAMFS=y
-# CONFIG_RELAYFS_FS is not set
+# CONFIG_CONFIGFS_FS is not set
#
# Miscellaneous filesystems
# Kernel hacking
#
# CONFIG_PRINTK_TIME is not set
+# CONFIG_MAGIC_SYSRQ is not set
# CONFIG_DEBUG_KERNEL is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_DEBUG_BUGVERBOSE=y
+# CONFIG_DEBUG_FS is not set
CONFIG_FRAME_POINTER=y
+# CONFIG_UNWIND_INFO is not set
# CONFIG_DEBUG_USER is not set
#
# CONFIG_ARCH_INTEGRATOR is not set
# CONFIG_ARCH_REALVIEW is not set
# CONFIG_ARCH_VERSATILE is not set
+CONFIG_ARCH_AT91=y
CONFIG_ARCH_AT91RM9200=y
# CONFIG_ARCH_CLPS7500 is not set
# CONFIG_ARCH_CLPS711X is not set
--- /dev/null
+#
+# Automatically generated make config: don't edit
+# Linux kernel version: 2.6.17
+# Sat Jun 24 22:45:14 2006
+#
+CONFIG_ARM=y
+CONFIG_MMU=y
+CONFIG_RWSEM_GENERIC_SPINLOCK=y
+CONFIG_GENERIC_HWEIGHT=y
+CONFIG_GENERIC_CALIBRATE_DELAY=y
+CONFIG_ARCH_MTD_XIP=y
+CONFIG_VECTORS_BASE=0xffff0000
+
+#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+CONFIG_BROKEN_ON_SMP=y
+CONFIG_LOCK_KERNEL=y
+CONFIG_INIT_ENV_ARG_LIMIT=32
+
+#
+# General setup
+#
+CONFIG_LOCALVERSION=""
+CONFIG_LOCALVERSION_AUTO=y
+CONFIG_SWAP=y
+CONFIG_SYSVIPC=y
+CONFIG_POSIX_MQUEUE=y
+CONFIG_BSD_PROCESS_ACCT=y
+CONFIG_BSD_PROCESS_ACCT_V3=y
+CONFIG_SYSCTL=y
+CONFIG_AUDIT=y
+CONFIG_IKCONFIG=y
+CONFIG_IKCONFIG_PROC=y
+# CONFIG_RELAY is not set
+CONFIG_INITRAMFS_SOURCE=""
+CONFIG_UID16=y
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
+CONFIG_EMBEDDED=y
+CONFIG_KALLSYMS=y
+CONFIG_KALLSYMS_EXTRA_PASS=y
+CONFIG_HOTPLUG=y
+CONFIG_PRINTK=y
+CONFIG_BUG=y
+CONFIG_ELF_CORE=y
+CONFIG_BASE_FULL=y
+CONFIG_FUTEX=y
+CONFIG_EPOLL=y
+CONFIG_SHMEM=y
+CONFIG_SLAB=y
+# CONFIG_TINY_SHMEM is not set
+CONFIG_BASE_SMALL=0
+# CONFIG_SLOB is not set
+CONFIG_OBSOLETE_INTERMODULE=y
+
+#
+# Loadable module support
+#
+CONFIG_MODULES=y
+CONFIG_MODULE_UNLOAD=y
+CONFIG_MODULE_FORCE_UNLOAD=y
+# CONFIG_MODVERSIONS is not set
+CONFIG_MODULE_SRCVERSION_ALL=y
+CONFIG_KMOD=y
+
+#
+# Block layer
+#
+# CONFIG_BLK_DEV_IO_TRACE is not set
+
+#
+# IO Schedulers
+#
+CONFIG_IOSCHED_NOOP=y
+CONFIG_IOSCHED_AS=y
+CONFIG_IOSCHED_DEADLINE=y
+CONFIG_IOSCHED_CFQ=y
+CONFIG_DEFAULT_AS=y
+# CONFIG_DEFAULT_DEADLINE is not set
+# CONFIG_DEFAULT_CFQ is not set
+# CONFIG_DEFAULT_NOOP is not set
+CONFIG_DEFAULT_IOSCHED="anticipatory"
+
+#
+# System Type
+#
+# CONFIG_ARCH_CLPS7500 is not set
+# CONFIG_ARCH_CLPS711X is not set
+# CONFIG_ARCH_CO285 is not set
+# CONFIG_ARCH_EBSA110 is not set
+# CONFIG_ARCH_EP93XX is not set
+# CONFIG_ARCH_FOOTBRIDGE is not set
+# CONFIG_ARCH_INTEGRATOR is not set
+# CONFIG_ARCH_IOP3XX is not set
+# CONFIG_ARCH_IXP4XX is not set
+# CONFIG_ARCH_IXP2000 is not set
+# CONFIG_ARCH_IXP23XX is not set
+# CONFIG_ARCH_L7200 is not set
+CONFIG_ARCH_PXA=y
+# CONFIG_ARCH_RPC is not set
+# CONFIG_ARCH_SA1100 is not set
+# CONFIG_ARCH_S3C2410 is not set
+# CONFIG_ARCH_SHARK is not set
+# CONFIG_ARCH_LH7A40X is not set
+# CONFIG_ARCH_OMAP is not set
+# CONFIG_ARCH_VERSATILE is not set
+# CONFIG_ARCH_REALVIEW is not set
+# CONFIG_ARCH_IMX is not set
+# CONFIG_ARCH_H720X is not set
+# CONFIG_ARCH_AAEC2000 is not set
+# CONFIG_ARCH_AT91RM9200 is not set
+
+#
+# Intel PXA2xx Implementations
+#
+# CONFIG_ARCH_LUBBOCK is not set
+# CONFIG_MACH_LOGICPD_PXA270 is not set
+# CONFIG_MACH_MAINSTONE is not set
+# CONFIG_ARCH_PXA_IDP is not set
+# CONFIG_PXA_SHARPSL is not set
+CONFIG_MACH_TRIZEPS4=y
+CONFIG_MACH_TRIZEPS4_CONXS=y
+# CONFIG_MACH_TRIZEPS4_ANY is not set
+CONFIG_PXA27x=y
+
+#
+# Processor Type
+#
+CONFIG_CPU_32=y
+CONFIG_CPU_XSCALE=y
+CONFIG_CPU_32v5=y
+CONFIG_CPU_ABRT_EV5T=y
+CONFIG_CPU_CACHE_VIVT=y
+CONFIG_CPU_TLB_V4WBI=y
+
+#
+# Processor Features
+#
+CONFIG_ARM_THUMB=y
+CONFIG_XSCALE_PMU=y
+
+#
+# Bus support
+#
+
+#
+# PCCARD (PCMCIA/CardBus) support
+#
+CONFIG_PCCARD=m
+# CONFIG_PCMCIA_DEBUG is not set
+CONFIG_PCMCIA=m
+CONFIG_PCMCIA_LOAD_CIS=y
+CONFIG_PCMCIA_IOCTL=y
+
+#
+# PC-card bridges
+#
+CONFIG_PCMCIA_PXA2XX=m
+
+#
+# Kernel Features
+#
+CONFIG_PREEMPT=y
+# CONFIG_NO_IDLE_HZ is not set
+CONFIG_HZ=100
+# CONFIG_AEABI is not set
+# CONFIG_ARCH_DISCONTIGMEM_ENABLE is not set
+CONFIG_SELECT_MEMORY_MODEL=y
+CONFIG_FLATMEM_MANUAL=y
+# CONFIG_DISCONTIGMEM_MANUAL is not set
+# CONFIG_SPARSEMEM_MANUAL is not set
+CONFIG_FLATMEM=y
+CONFIG_FLAT_NODE_MEM_MAP=y
+# CONFIG_SPARSEMEM_STATIC is not set
+CONFIG_SPLIT_PTLOCK_CPUS=4096
+CONFIG_LEDS=y
+CONFIG_LEDS_TIMER=y
+CONFIG_LEDS_CPU=y
+CONFIG_ALIGNMENT_TRAP=y
+
+#
+# Boot options
+#
+CONFIG_ZBOOT_ROM_TEXT=0
+CONFIG_ZBOOT_ROM_BSS=0
+CONFIG_CMDLINE="root=/dev/nfs ip=bootp console=ttyS0,115200n8"
+# CONFIG_XIP_KERNEL is not set
+
+#
+# Floating point emulation
+#
+
+#
+# At least one emulation must be selected
+#
+CONFIG_FPE_NWFPE=y
+CONFIG_FPE_NWFPE_XP=y
+# CONFIG_FPE_FASTFPE is not set
+
+#
+# Userspace binary formats
+#
+CONFIG_BINFMT_ELF=y
+# CONFIG_BINFMT_AOUT is not set
+CONFIG_BINFMT_MISC=m
+# CONFIG_ARTHUR is not set
+
+#
+# Power management options
+#
+CONFIG_PM=y
+CONFIG_PM_LEGACY=y
+# CONFIG_PM_DEBUG is not set
+CONFIG_APM=y
+
+#
+# Networking
+#
+CONFIG_NET=y
+
+#
+# Networking options
+#
+# CONFIG_NETDEBUG is not set
+CONFIG_PACKET=y
+CONFIG_PACKET_MMAP=y
+CONFIG_UNIX=y
+CONFIG_XFRM=y
+CONFIG_XFRM_USER=m
+CONFIG_NET_KEY=y
+CONFIG_INET=y
+# CONFIG_IP_MULTICAST is not set
+# CONFIG_IP_ADVANCED_ROUTER is not set
+CONFIG_IP_FIB_HASH=y
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+CONFIG_IP_PNP_BOOTP=y
+# CONFIG_IP_PNP_RARP is not set
+# CONFIG_NET_IPIP is not set
+# CONFIG_NET_IPGRE is not set
+# CONFIG_ARPD is not set
+# CONFIG_SYN_COOKIES is not set
+# CONFIG_INET_AH is not set
+# CONFIG_INET_ESP is not set
+# CONFIG_INET_IPCOMP is not set
+# CONFIG_INET_XFRM_TUNNEL is not set
+# CONFIG_INET_TUNNEL is not set
+CONFIG_INET_DIAG=y
+CONFIG_INET_TCP_DIAG=y
+# CONFIG_TCP_CONG_ADVANCED is not set
+CONFIG_TCP_CONG_BIC=y
+
+#
+# IP: Virtual Server Configuration
+#
+# CONFIG_IP_VS is not set
+CONFIG_IPV6=m
+# CONFIG_IPV6_PRIVACY is not set
+# CONFIG_IPV6_ROUTER_PREF is not set
+# CONFIG_INET6_AH is not set
+# CONFIG_INET6_ESP is not set
+# CONFIG_INET6_IPCOMP is not set
+# CONFIG_INET6_XFRM_TUNNEL is not set
+# CONFIG_INET6_TUNNEL is not set
+# CONFIG_IPV6_TUNNEL is not set
+CONFIG_NETFILTER=y
+# CONFIG_NETFILTER_DEBUG is not set
+
+#
+# Core Netfilter Configuration
+#
+# CONFIG_NETFILTER_NETLINK is not set
+# CONFIG_NETFILTER_XTABLES is not set
+
+#
+# IP: Netfilter Configuration
+#
+CONFIG_IP_NF_CONNTRACK=m
+CONFIG_IP_NF_CT_ACCT=y
+CONFIG_IP_NF_CONNTRACK_MARK=y
+# CONFIG_IP_NF_CONNTRACK_EVENTS is not set
+# CONFIG_IP_NF_CT_PROTO_SCTP is not set
+CONFIG_IP_NF_FTP=m
+CONFIG_IP_NF_IRC=m
+# CONFIG_IP_NF_NETBIOS_NS is not set
+CONFIG_IP_NF_TFTP=m
+CONFIG_IP_NF_AMANDA=m
+# CONFIG_IP_NF_PPTP is not set
+# CONFIG_IP_NF_H323 is not set
+CONFIG_IP_NF_QUEUE=m
+
+#
+# IPv6: Netfilter Configuration (EXPERIMENTAL)
+#
+# CONFIG_IP6_NF_QUEUE is not set
+
+#
+# DCCP Configuration (EXPERIMENTAL)
+#
+# CONFIG_IP_DCCP is not set
+
+#
+# SCTP Configuration (EXPERIMENTAL)
+#
+# CONFIG_IP_SCTP is not set
+
+#
+# TIPC Configuration (EXPERIMENTAL)
+#
+# CONFIG_TIPC is not set
+# CONFIG_ATM is not set
+# CONFIG_BRIDGE is not set
+CONFIG_VLAN_8021Q=m
+# CONFIG_DECNET is not set
+# CONFIG_LLC2 is not set
+# CONFIG_IPX is not set
+# CONFIG_ATALK is not set
+# CONFIG_X25 is not set
+# CONFIG_LAPB is not set
+# CONFIG_NET_DIVERT is not set
+# CONFIG_ECONET is not set
+# CONFIG_WAN_ROUTER is not set
+
+#
+# QoS and/or fair queueing
+#
+# CONFIG_NET_SCHED is not set
+
+#
+# Network testing
+#
+# CONFIG_NET_PKTGEN is not set
+# CONFIG_HAMRADIO is not set
+CONFIG_IRDA=m
+
+#
+# IrDA protocols
+#
+CONFIG_IRLAN=m
+CONFIG_IRNET=m
+CONFIG_IRCOMM=m
+CONFIG_IRDA_ULTRA=y
+
+#
+# IrDA options
+#
+CONFIG_IRDA_CACHE_LAST_LSAP=y
+CONFIG_IRDA_FAST_RR=y
+# CONFIG_IRDA_DEBUG is not set
+
+#
+# Infrared-port device drivers
+#
+
+#
+# SIR device drivers
+#
+CONFIG_IRTTY_SIR=m
+
+#
+# Dongle support
+#
+# CONFIG_DONGLE is not set
+
+#
+# Old SIR device drivers
+#
+# CONFIG_IRPORT_SIR is not set
+
+#
+# Old Serial dongle support
+#
+
+#
+# FIR device drivers
+#
+# CONFIG_USB_IRDA is not set
+# CONFIG_SIGMATEL_FIR is not set
+# CONFIG_PXA_FICP is not set
+CONFIG_BT=m
+CONFIG_BT_L2CAP=m
+CONFIG_BT_SCO=m
+CONFIG_BT_RFCOMM=m
+CONFIG_BT_RFCOMM_TTY=y
+CONFIG_BT_BNEP=m
+CONFIG_BT_BNEP_MC_FILTER=y
+CONFIG_BT_BNEP_PROTO_FILTER=y
+CONFIG_BT_HIDP=m
+
+#
+# Bluetooth device drivers
+#
+# CONFIG_BT_HCIUSB is not set
+# CONFIG_BT_HCIUART is not set
+# CONFIG_BT_HCIBCM203X is not set
+# CONFIG_BT_HCIBPA10X is not set
+# CONFIG_BT_HCIBFUSB is not set
+# CONFIG_BT_HCIDTL1 is not set
+# CONFIG_BT_HCIBT3C is not set
+# CONFIG_BT_HCIBLUECARD is not set
+# CONFIG_BT_HCIBTUART is not set
+# CONFIG_BT_HCIVHCI is not set
+CONFIG_IEEE80211=m
+# CONFIG_IEEE80211_DEBUG is not set
+CONFIG_IEEE80211_CRYPT_WEP=m
+CONFIG_IEEE80211_CRYPT_CCMP=m
+CONFIG_IEEE80211_CRYPT_TKIP=m
+CONFIG_IEEE80211_SOFTMAC=m
+# CONFIG_IEEE80211_SOFTMAC_DEBUG is not set
+CONFIG_WIRELESS_EXT=y
+
+#
+# Device Drivers
+#
+
+#
+# Generic Driver Options
+#
+CONFIG_STANDALONE=y
+CONFIG_PREVENT_FIRMWARE_BUILD=y
+CONFIG_FW_LOADER=y
+
+#
+# Connector - unified userspace <-> kernelspace linker
+#
+CONFIG_CONNECTOR=y
+CONFIG_PROC_EVENTS=y
+
+#
+# Memory Technology Devices (MTD)
+#
+CONFIG_MTD=y
+# CONFIG_MTD_DEBUG is not set
+CONFIG_MTD_CONCAT=y
+CONFIG_MTD_PARTITIONS=y
+CONFIG_MTD_REDBOOT_PARTS=y
+CONFIG_MTD_REDBOOT_DIRECTORY_BLOCK=-1
+CONFIG_MTD_REDBOOT_PARTS_UNALLOCATED=y
+CONFIG_MTD_REDBOOT_PARTS_READONLY=y
+# CONFIG_MTD_CMDLINE_PARTS is not set
+# CONFIG_MTD_AFS_PARTS is not set
+
+#
+# User Modules And Translation Layers
+#
+CONFIG_MTD_CHAR=y
+CONFIG_MTD_BLOCK=y
+# CONFIG_FTL is not set
+CONFIG_NFTL=y
+CONFIG_NFTL_RW=y
+CONFIG_INFTL=y
+# CONFIG_RFD_FTL is not set
+
+#
+# RAM/ROM/Flash chip drivers
+#
+CONFIG_MTD_CFI=y
+CONFIG_MTD_JEDECPROBE=y
+CONFIG_MTD_GEN_PROBE=y
+CONFIG_MTD_CFI_ADV_OPTIONS=y
+# CONFIG_MTD_CFI_NOSWAP is not set
+# CONFIG_MTD_CFI_BE_BYTE_SWAP is not set
+CONFIG_MTD_CFI_LE_BYTE_SWAP=y
+CONFIG_MTD_CFI_GEOMETRY=y
+CONFIG_MTD_MAP_BANK_WIDTH_1=y
+CONFIG_MTD_MAP_BANK_WIDTH_2=y
+CONFIG_MTD_MAP_BANK_WIDTH_4=y
+# CONFIG_MTD_MAP_BANK_WIDTH_8 is not set
+# CONFIG_MTD_MAP_BANK_WIDTH_16 is not set
+# CONFIG_MTD_MAP_BANK_WIDTH_32 is not set
+CONFIG_MTD_CFI_I1=y
+CONFIG_MTD_CFI_I2=y
+# CONFIG_MTD_CFI_I4 is not set
+# CONFIG_MTD_CFI_I8 is not set
+# CONFIG_MTD_OTP is not set
+CONFIG_MTD_CFI_INTELEXT=y
+CONFIG_MTD_CFI_AMDSTD=y
+# CONFIG_MTD_CFI_STAA is not set
+CONFIG_MTD_CFI_UTIL=y
+# CONFIG_MTD_RAM is not set
+# CONFIG_MTD_ROM is not set
+# CONFIG_MTD_ABSENT is not set
+# CONFIG_MTD_OBSOLETE_CHIPS is not set
+# CONFIG_MTD_XIP is not set
+
+#
+# Mapping drivers for chip access
+#
+CONFIG_MTD_COMPLEX_MAPPINGS=y
+CONFIG_MTD_PHYSMAP=y
+CONFIG_MTD_PHYSMAP_START=0x0
+CONFIG_MTD_PHYSMAP_LEN=0x4000000
+CONFIG_MTD_PHYSMAP_BANKWIDTH=2
+# CONFIG_MTD_TRIZEPS4 is not set
+# CONFIG_MTD_ARM_INTEGRATOR is not set
+# CONFIG_MTD_IMPA7 is not set
+# CONFIG_MTD_SHARP_SL is not set
+# CONFIG_MTD_PLATRAM is not set
+
+#
+# Self-contained MTD device drivers
+#
+# CONFIG_MTD_DATAFLASH is not set
+# CONFIG_MTD_M25P80 is not set
+# CONFIG_MTD_SLRAM is not set
+# CONFIG_MTD_PHRAM is not set
+# CONFIG_MTD_MTDRAM is not set
+# CONFIG_MTD_BLOCK2MTD is not set
+
+#
+# Disk-On-Chip Device Drivers
+#
+# CONFIG_MTD_DOC2000 is not set
+# CONFIG_MTD_DOC2001 is not set
+CONFIG_MTD_DOC2001PLUS=y
+CONFIG_MTD_DOCPROBE=y
+CONFIG_MTD_DOCECC=y
+# CONFIG_MTD_DOCPROBE_ADVANCED is not set
+CONFIG_MTD_DOCPROBE_ADDRESS=0
+
+#
+# NAND Flash Device Drivers
+#
+CONFIG_MTD_NAND=y
+# CONFIG_MTD_NAND_VERIFY_WRITE is not set
+# CONFIG_MTD_NAND_H1900 is not set
+CONFIG_MTD_NAND_IDS=y
+CONFIG_MTD_NAND_DISKONCHIP=y
+# CONFIG_MTD_NAND_DISKONCHIP_PROBE_ADVANCED is not set
+CONFIG_MTD_NAND_DISKONCHIP_PROBE_ADDRESS=0
+# CONFIG_MTD_NAND_DISKONCHIP_BBTWRITE is not set
+# CONFIG_MTD_NAND_SHARPSL is not set
+# CONFIG_MTD_NAND_NANDSIM is not set
+
+#
+# OneNAND Flash Device Drivers
+#
+# CONFIG_MTD_ONENAND is not set
+
+#
+# Parallel port support
+#
+# CONFIG_PARPORT is not set
+
+#
+# Plug and Play support
+#
+
+#
+# Block devices
+#
+# CONFIG_BLK_DEV_COW_COMMON is not set
+CONFIG_BLK_DEV_LOOP=y
+CONFIG_BLK_DEV_CRYPTOLOOP=m
+CONFIG_BLK_DEV_NBD=y
+# CONFIG_BLK_DEV_UB is not set
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_COUNT=4
+CONFIG_BLK_DEV_RAM_SIZE=4096
+CONFIG_BLK_DEV_INITRD=y
+# CONFIG_CDROM_PKTCDVD is not set
+# CONFIG_ATA_OVER_ETH is not set
+
+#
+# ATA/ATAPI/MFM/RLL support
+#
+CONFIG_IDE=y
+CONFIG_BLK_DEV_IDE=y
+
+#
+# Please see Documentation/ide.txt for help/info on IDE drives
+#
+# CONFIG_BLK_DEV_IDE_SATA is not set
+CONFIG_BLK_DEV_IDEDISK=y
+CONFIG_IDEDISK_MULTI_MODE=y
+CONFIG_BLK_DEV_IDECS=m
+# CONFIG_BLK_DEV_IDECD is not set
+# CONFIG_BLK_DEV_IDETAPE is not set
+# CONFIG_BLK_DEV_IDEFLOPPY is not set
+# CONFIG_BLK_DEV_IDESCSI is not set
+# CONFIG_IDE_TASK_IOCTL is not set
+
+#
+# IDE chipset support/bugfixes
+#
+CONFIG_IDE_GENERIC=y
+CONFIG_IDE_PXA_CF=y
+CONFIG_IDE_ARM=y
+# CONFIG_BLK_DEV_IDEDMA is not set
+# CONFIG_IDEDMA_AUTO is not set
+# CONFIG_BLK_DEV_HD is not set
+
+#
+# SCSI device support
+#
+# CONFIG_RAID_ATTRS is not set
+CONFIG_SCSI=m
+CONFIG_SCSI_PROC_FS=y
+
+#
+# SCSI support type (disk, tape, CD-ROM)
+#
+CONFIG_BLK_DEV_SD=m
+# CONFIG_CHR_DEV_ST is not set
+# CONFIG_CHR_DEV_OSST is not set
+# CONFIG_BLK_DEV_SR is not set
+CONFIG_CHR_DEV_SG=m
+# CONFIG_CHR_DEV_SCH is not set
+
+#
+# Some SCSI devices (e.g. CD jukebox) support multiple LUNs
+#
+CONFIG_SCSI_MULTI_LUN=y
+# CONFIG_SCSI_CONSTANTS is not set
+# CONFIG_SCSI_LOGGING is not set
+
+#
+# SCSI Transport Attributes
+#
+# CONFIG_SCSI_SPI_ATTRS is not set
+# CONFIG_SCSI_FC_ATTRS is not set
+# CONFIG_SCSI_ISCSI_ATTRS is not set
+# CONFIG_SCSI_SAS_ATTRS is not set
+
+#
+# SCSI low-level drivers
+#
+# CONFIG_ISCSI_TCP is not set
+# CONFIG_SCSI_SATA is not set
+# CONFIG_SCSI_DEBUG is not set
+
+#
+# PCMCIA SCSI adapter support
+#
+# CONFIG_PCMCIA_AHA152X is not set
+# CONFIG_PCMCIA_FDOMAIN is not set
+# CONFIG_PCMCIA_NINJA_SCSI is not set
+# CONFIG_PCMCIA_QLOGIC is not set
+# CONFIG_PCMCIA_SYM53C500 is not set
+
+#
+# Multi-device support (RAID and LVM)
+#
+# CONFIG_MD is not set
+
+#
+# Fusion MPT device support
+#
+# CONFIG_FUSION is not set
+
+#
+# IEEE 1394 (FireWire) support
+#
+
+#
+# I2O device support
+#
+
+#
+# Network device support
+#
+CONFIG_NETDEVICES=y
+# CONFIG_DUMMY is not set
+# CONFIG_BONDING is not set
+# CONFIG_EQUALIZER is not set
+# CONFIG_TUN is not set
+
+#
+# PHY device support
+#
+CONFIG_PHYLIB=y
+
+#
+# MII PHY device drivers
+#
+# CONFIG_MARVELL_PHY is not set
+CONFIG_DAVICOM_PHY=y
+# CONFIG_QSEMI_PHY is not set
+# CONFIG_LXT_PHY is not set
+# CONFIG_CICADA_PHY is not set
+
+#
+# Ethernet (10 or 100Mbit)
+#
+CONFIG_NET_ETHERNET=y
+CONFIG_MII=y
+# CONFIG_SMC91X is not set
+CONFIG_DM9000=y
+
+#
+# Ethernet (1000 Mbit)
+#
+
+#
+# Ethernet (10000 Mbit)
+#
+
+#
+# Token Ring devices
+#
+
+#
+# Wireless LAN (non-hamradio)
+#
+CONFIG_NET_RADIO=y
+# CONFIG_NET_WIRELESS_RTNETLINK is not set
+
+#
+# Obsolete Wireless cards support (pre-802.11)
+#
+# CONFIG_STRIP is not set
+# CONFIG_PCMCIA_WAVELAN is not set
+# CONFIG_PCMCIA_NETWAVE is not set
+
+#
+# Wireless 802.11 Frequency Hopping cards support
+#
+# CONFIG_PCMCIA_RAYCS is not set
+
+#
+# Wireless 802.11b ISA/PCI cards support
+#
+CONFIG_HERMES=m
+# CONFIG_ATMEL is not set
+
+#
+# Wireless 802.11b Pcmcia/Cardbus cards support
+#
+CONFIG_PCMCIA_HERMES=m
+# CONFIG_PCMCIA_SPECTRUM is not set
+CONFIG_AIRO_CS=m
+# CONFIG_PCMCIA_WL3501 is not set
+CONFIG_HOSTAP=m
+CONFIG_HOSTAP_FIRMWARE=y
+CONFIG_HOSTAP_FIRMWARE_NVRAM=y
+CONFIG_HOSTAP_CS=m
+CONFIG_NET_WIRELESS=y
+
+#
+# PCMCIA network device support
+#
+# CONFIG_NET_PCMCIA is not set
+
+#
+# Wan interfaces
+#
+# CONFIG_WAN is not set
+CONFIG_PPP=m
+CONFIG_PPP_MULTILINK=y
+CONFIG_PPP_FILTER=y
+CONFIG_PPP_ASYNC=m
+CONFIG_PPP_SYNC_TTY=m
+CONFIG_PPP_DEFLATE=m
+CONFIG_PPP_BSDCOMP=m
+CONFIG_PPP_MPPE=m
+# CONFIG_PPPOE is not set
+# CONFIG_SLIP is not set
+# CONFIG_SHAPER is not set
+# CONFIG_NETCONSOLE is not set
+# CONFIG_NETPOLL is not set
+# CONFIG_NET_POLL_CONTROLLER is not set
+
+#
+# ISDN subsystem
+#
+# CONFIG_ISDN is not set
+
+#
+# Input device support
+#
+CONFIG_INPUT=y
+
+#
+# Userland interfaces
+#
+CONFIG_INPUT_MOUSEDEV=y
+CONFIG_INPUT_MOUSEDEV_PSAUX=y
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=640
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=480
+# CONFIG_INPUT_JOYDEV is not set
+CONFIG_INPUT_TSDEV=y
+CONFIG_INPUT_TSDEV_SCREEN_X=640
+CONFIG_INPUT_TSDEV_SCREEN_Y=480
+CONFIG_INPUT_EVDEV=y
+# CONFIG_INPUT_EVBUG is not set
+
+#
+# Input Device Drivers
+#
+CONFIG_INPUT_KEYBOARD=y
+CONFIG_KEYBOARD_ATKBD=y
+# CONFIG_KEYBOARD_SUNKBD is not set
+# CONFIG_KEYBOARD_LKKBD is not set
+# CONFIG_KEYBOARD_XTKBD is not set
+# CONFIG_KEYBOARD_NEWTON is not set
+CONFIG_INPUT_MOUSE=y
+# CONFIG_MOUSE_PS2 is not set
+CONFIG_MOUSE_SERIAL=y
+# CONFIG_MOUSE_VSXXXAA is not set
+# CONFIG_INPUT_JOYSTICK is not set
+CONFIG_INPUT_TOUCHSCREEN=y
+# CONFIG_TOUCHSCREEN_ADS7846 is not set
+# CONFIG_TOUCHSCREEN_GUNZE is not set
+# CONFIG_TOUCHSCREEN_ELO is not set
+# CONFIG_TOUCHSCREEN_MTOUCH is not set
+# CONFIG_TOUCHSCREEN_MK712 is not set
+CONFIG_INPUT_MISC=y
+CONFIG_INPUT_UINPUT=m
+
+#
+# Hardware I/O ports
+#
+CONFIG_SERIO=y
+CONFIG_SERIO_SERPORT=y
+CONFIG_SERIO_LIBPS2=y
+# CONFIG_SERIO_RAW is not set
+# CONFIG_GAMEPORT is not set
+
+#
+# Character devices
+#
+CONFIG_VT=y
+CONFIG_VT_CONSOLE=y
+CONFIG_HW_CONSOLE=y
+# CONFIG_SERIAL_NONSTANDARD is not set
+
+#
+# Serial drivers
+#
+# CONFIG_SERIAL_8250 is not set
+
+#
+# Non-8250 serial port support
+#
+CONFIG_SERIAL_PXA=y
+CONFIG_SERIAL_PXA_CONSOLE=y
+CONFIG_SERIAL_CORE=y
+CONFIG_SERIAL_CORE_CONSOLE=y
+CONFIG_UNIX98_PTYS=y
+CONFIG_LEGACY_PTYS=y
+CONFIG_LEGACY_PTY_COUNT=256
+
+#
+# IPMI
+#
+# CONFIG_IPMI_HANDLER is not set
+
+#
+# Watchdog Cards
+#
+CONFIG_WATCHDOG=y
+# CONFIG_WATCHDOG_NOWAYOUT is not set
+
+#
+# Watchdog Device Drivers
+#
+# CONFIG_SOFT_WATCHDOG is not set
+CONFIG_SA1100_WATCHDOG=y
+
+#
+# USB-based Watchdog Cards
+#
+# CONFIG_USBPCWATCHDOG is not set
+# CONFIG_NVRAM is not set
+# CONFIG_DTLK is not set
+# CONFIG_R3964 is not set
+
+#
+# Ftape, the floppy tape device driver
+#
+
+#
+# PCMCIA character devices
+#
+# CONFIG_SYNCLINK_CS is not set
+# CONFIG_CARDMAN_4000 is not set
+# CONFIG_CARDMAN_4040 is not set
+# CONFIG_RAW_DRIVER is not set
+
+#
+# TPM devices
+#
+# CONFIG_TCG_TPM is not set
+# CONFIG_TELCLOCK is not set
+
+#
+# I2C support
+#
+CONFIG_I2C=y
+CONFIG_I2C_CHARDEV=y
+
+#
+# I2C Algorithms
+#
+# CONFIG_I2C_ALGOBIT is not set
+# CONFIG_I2C_ALGOPCF is not set
+# CONFIG_I2C_ALGOPCA is not set
+
+#
+# I2C Hardware Bus support
+#
+CONFIG_I2C_PXA=y
+CONFIG_I2C_PXA_SLAVE=y
+# CONFIG_I2C_PARPORT_LIGHT is not set
+# CONFIG_I2C_STUB is not set
+# CONFIG_I2C_PCA_ISA is not set
+
+#
+# Miscellaneous I2C Chip support
+#
+# CONFIG_SENSORS_DS1337 is not set
+# CONFIG_SENSORS_DS1374 is not set
+CONFIG_SENSORS_EEPROM=m
+# CONFIG_SENSORS_PCF8574 is not set
+# CONFIG_SENSORS_PCA9539 is not set
+# CONFIG_SENSORS_PCF8591 is not set
+# CONFIG_SENSORS_MAX6875 is not set
+# CONFIG_I2C_DEBUG_CORE is not set
+# CONFIG_I2C_DEBUG_ALGO is not set
+# CONFIG_I2C_DEBUG_BUS is not set
+# CONFIG_I2C_DEBUG_CHIP is not set
+
+#
+# SPI support
+#
+CONFIG_SPI=y
+CONFIG_SPI_MASTER=y
+
+#
+# SPI Master Controller Drivers
+#
+# CONFIG_SPI_BITBANG is not set
+CONFIG_SPI_PXA2XX=m
+
+#
+# SPI Protocol Masters
+#
+
+#
+# Dallas's 1-wire bus
+#
+# CONFIG_W1 is not set
+
+#
+# Hardware Monitoring support
+#
+CONFIG_HWMON=y
+# CONFIG_HWMON_VID is not set
+# CONFIG_SENSORS_ADM1021 is not set
+# CONFIG_SENSORS_ADM1025 is not set
+# CONFIG_SENSORS_ADM1026 is not set
+# CONFIG_SENSORS_ADM1031 is not set
+# CONFIG_SENSORS_ADM9240 is not set
+# CONFIG_SENSORS_ASB100 is not set
+# CONFIG_SENSORS_ATXP1 is not set
+# CONFIG_SENSORS_DS1621 is not set
+# CONFIG_SENSORS_F71805F is not set
+# CONFIG_SENSORS_FSCHER is not set
+# CONFIG_SENSORS_FSCPOS is not set
+# CONFIG_SENSORS_GL518SM is not set
+# CONFIG_SENSORS_GL520SM is not set
+# CONFIG_SENSORS_IT87 is not set
+# CONFIG_SENSORS_LM63 is not set
+# CONFIG_SENSORS_LM75 is not set
+# CONFIG_SENSORS_LM77 is not set
+# CONFIG_SENSORS_LM78 is not set
+# CONFIG_SENSORS_LM80 is not set
+# CONFIG_SENSORS_LM83 is not set
+# CONFIG_SENSORS_LM85 is not set
+# CONFIG_SENSORS_LM87 is not set
+# CONFIG_SENSORS_LM90 is not set
+# CONFIG_SENSORS_LM92 is not set
+# CONFIG_SENSORS_MAX1619 is not set
+# CONFIG_SENSORS_PC87360 is not set
+# CONFIG_SENSORS_SMSC47M1 is not set
+# CONFIG_SENSORS_SMSC47B397 is not set
+# CONFIG_SENSORS_W83781D is not set
+# CONFIG_SENSORS_W83792D is not set
+# CONFIG_SENSORS_W83L785TS is not set
+# CONFIG_SENSORS_W83627HF is not set
+# CONFIG_SENSORS_W83627EHF is not set
+# CONFIG_HWMON_DEBUG_CHIP is not set
+
+#
+# Misc devices
+#
+
+#
+# Multimedia Capabilities Port drivers
+#
+CONFIG_UCB1400=y
+CONFIG_UCB1400_TS=y
+
+#
+# LED devices
+#
+CONFIG_NEW_LEDS=y
+CONFIG_LEDS_CLASS=y
+
+#
+# LED drivers
+#
+
+#
+# LED Triggers
+#
+CONFIG_LEDS_TRIGGERS=y
+CONFIG_LEDS_TRIGGER_TIMER=y
+CONFIG_LEDS_TRIGGER_IDE_DISK=y
+
+#
+# Multimedia devices
+#
+# CONFIG_VIDEO_DEV is not set
+CONFIG_VIDEO_V4L2=y
+
+#
+# Digital Video Broadcasting Devices
+#
+# CONFIG_DVB is not set
+# CONFIG_USB_DABUSB is not set
+
+#
+# Graphics support
+#
+CONFIG_FB=y
+CONFIG_FB_CFB_FILLRECT=y
+CONFIG_FB_CFB_COPYAREA=y
+CONFIG_FB_CFB_IMAGEBLIT=y
+# CONFIG_FB_MACMODES is not set
+CONFIG_FB_FIRMWARE_EDID=y
+# CONFIG_FB_MODE_HELPERS is not set
+# CONFIG_FB_TILEBLITTING is not set
+# CONFIG_FB_S1D13XXX is not set
+CONFIG_FB_PXA=y
+# CONFIG_FB_PXA_PARAMETERS is not set
+# CONFIG_FB_VIRTUAL is not set
+
+#
+# Console display driver support
+#
+# CONFIG_VGA_CONSOLE is not set
+CONFIG_DUMMY_CONSOLE=y
+CONFIG_FRAMEBUFFER_CONSOLE=y
+CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y
+CONFIG_FONTS=y
+CONFIG_FONT_8x8=y
+CONFIG_FONT_8x16=y
+# CONFIG_FONT_6x11 is not set
+# CONFIG_FONT_7x14 is not set
+# CONFIG_FONT_PEARL_8x8 is not set
+# CONFIG_FONT_ACORN_8x8 is not set
+# CONFIG_FONT_MINI_4x6 is not set
+# CONFIG_FONT_SUN8x16 is not set
+# CONFIG_FONT_SUN12x22 is not set
+# CONFIG_FONT_10x18 is not set
+
+#
+# Logo configuration
+#
+CONFIG_LOGO=y
+CONFIG_LOGO_LINUX_MONO=y
+CONFIG_LOGO_LINUX_VGA16=y
+CONFIG_LOGO_LINUX_CLUT224=y
+CONFIG_BACKLIGHT_LCD_SUPPORT=y
+CONFIG_BACKLIGHT_CLASS_DEVICE=y
+CONFIG_BACKLIGHT_DEVICE=y
+CONFIG_LCD_CLASS_DEVICE=y
+CONFIG_LCD_DEVICE=y
+
+#
+# Sound
+#
+CONFIG_SOUND=y
+
+#
+# Advanced Linux Sound Architecture
+#
+CONFIG_SND=y
+CONFIG_SND_TIMER=y
+CONFIG_SND_PCM=y
+CONFIG_SND_HWDEP=m
+CONFIG_SND_RAWMIDI=m
+CONFIG_SND_SEQUENCER=m
+# CONFIG_SND_SEQ_DUMMY is not set
+CONFIG_SND_OSSEMUL=y
+CONFIG_SND_MIXER_OSS=y
+CONFIG_SND_PCM_OSS=y
+CONFIG_SND_PCM_OSS_PLUGINS=y
+# CONFIG_SND_SEQUENCER_OSS is not set
+# CONFIG_SND_DYNAMIC_MINORS is not set
+CONFIG_SND_SUPPORT_OLD_API=y
+CONFIG_SND_VERBOSE_PROCFS=y
+CONFIG_SND_VERBOSE_PRINTK=y
+# CONFIG_SND_DEBUG is not set
+
+#
+# Generic devices
+#
+CONFIG_SND_AC97_CODEC=y
+CONFIG_SND_AC97_BUS=y
+# CONFIG_SND_DUMMY is not set
+# CONFIG_SND_VIRMIDI is not set
+# CONFIG_SND_MTPAV is not set
+# CONFIG_SND_SERIAL_U16550 is not set
+# CONFIG_SND_MPU401 is not set
+
+#
+# ALSA ARM devices
+#
+CONFIG_SND_PXA2XX_PCM=y
+CONFIG_SND_PXA2XX_AC97=y
+
+#
+# USB devices
+#
+CONFIG_SND_USB_AUDIO=m
+
+#
+# PCMCIA devices
+#
+# CONFIG_SND_VXPOCKET is not set
+# CONFIG_SND_PDAUDIOCF is not set
+
+#
+# Open Sound System
+#
+# CONFIG_SOUND_PRIME is not set
+
+#
+# USB support
+#
+CONFIG_USB_ARCH_HAS_HCD=y
+CONFIG_USB_ARCH_HAS_OHCI=y
+# CONFIG_USB_ARCH_HAS_EHCI is not set
+CONFIG_USB=y
+# CONFIG_USB_DEBUG is not set
+
+#
+# Miscellaneous USB options
+#
+CONFIG_USB_DEVICEFS=y
+# CONFIG_USB_BANDWIDTH is not set
+# CONFIG_USB_DYNAMIC_MINORS is not set
+# CONFIG_USB_SUSPEND is not set
+# CONFIG_USB_OTG is not set
+
+#
+# USB Host Controller Drivers
+#
+# CONFIG_USB_ISP116X_HCD is not set
+CONFIG_USB_OHCI_HCD=y
+# CONFIG_USB_OHCI_BIG_ENDIAN is not set
+CONFIG_USB_OHCI_LITTLE_ENDIAN=y
+# CONFIG_USB_SL811_HCD is not set
+
+#
+# USB Device Class drivers
+#
+# CONFIG_USB_ACM is not set
+# CONFIG_USB_PRINTER is not set
+
+#
+# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support'
+#
+
+#
+# may also be needed; see USB_STORAGE Help for more information
+#
+CONFIG_USB_STORAGE=m
+# CONFIG_USB_STORAGE_DEBUG is not set
+# CONFIG_USB_STORAGE_DATAFAB is not set
+# CONFIG_USB_STORAGE_FREECOM is not set
+# CONFIG_USB_STORAGE_ISD200 is not set
+# CONFIG_USB_STORAGE_DPCM is not set
+# CONFIG_USB_STORAGE_USBAT is not set
+# CONFIG_USB_STORAGE_SDDR09 is not set
+# CONFIG_USB_STORAGE_SDDR55 is not set
+# CONFIG_USB_STORAGE_JUMPSHOT is not set
+# CONFIG_USB_STORAGE_ALAUDA is not set
+# CONFIG_USB_LIBUSUAL is not set
+
+#
+# USB Input Devices
+#
+CONFIG_USB_HID=m
+CONFIG_USB_HIDINPUT=y
+# CONFIG_USB_HIDINPUT_POWERBOOK is not set
+# CONFIG_HID_FF is not set
+# CONFIG_USB_HIDDEV is not set
+
+#
+# USB HID Boot Protocol drivers
+#
+# CONFIG_USB_KBD is not set
+# CONFIG_USB_MOUSE is not set
+# CONFIG_USB_AIPTEK is not set
+# CONFIG_USB_WACOM is not set
+# CONFIG_USB_ACECAD is not set
+# CONFIG_USB_KBTAB is not set
+# CONFIG_USB_POWERMATE is not set
+CONFIG_USB_TOUCHSCREEN=m
+# CONFIG_USB_TOUCHSCREEN_EGALAX is not set
+# CONFIG_USB_TOUCHSCREEN_PANJIT is not set
+# CONFIG_USB_TOUCHSCREEN_3M is not set
+# CONFIG_USB_TOUCHSCREEN_ITM is not set
+# CONFIG_USB_YEALINK is not set
+# CONFIG_USB_XPAD is not set
+# CONFIG_USB_ATI_REMOTE is not set
+# CONFIG_USB_ATI_REMOTE2 is not set
+# CONFIG_USB_KEYSPAN_REMOTE is not set
+# CONFIG_USB_APPLETOUCH is not set
+
+#
+# USB Imaging devices
+#
+# CONFIG_USB_MDC800 is not set
+# CONFIG_USB_MICROTEK is not set
+
+#
+# USB Network Adapters
+#
+# CONFIG_USB_CATC is not set
+# CONFIG_USB_KAWETH is not set
+# CONFIG_USB_PEGASUS is not set
+# CONFIG_USB_RTL8150 is not set
+# CONFIG_USB_USBNET is not set
+# CONFIG_USB_ZD1201 is not set
+CONFIG_USB_MON=y
+
+#
+# USB port drivers
+#
+
+#
+# USB Serial Converter support
+#
+# CONFIG_USB_SERIAL is not set
+
+#
+# USB Miscellaneous drivers
+#
+# CONFIG_USB_EMI62 is not set
+# CONFIG_USB_EMI26 is not set
+# CONFIG_USB_AUERSWALD is not set
+# CONFIG_USB_RIO500 is not set
+# CONFIG_USB_LEGOTOWER is not set
+# CONFIG_USB_LCD is not set
+# CONFIG_USB_LED is not set
+# CONFIG_USB_CYTHERM is not set
+# CONFIG_USB_PHIDGETKIT is not set
+# CONFIG_USB_PHIDGETSERVO is not set
+# CONFIG_USB_IDMOUSE is not set
+# CONFIG_USB_LD is not set
+# CONFIG_USB_TEST is not set
+
+#
+# USB DSL modem support
+#
+
+#
+# USB Gadget Support
+#
+CONFIG_USB_GADGET=y
+# CONFIG_USB_GADGET_DEBUG_FILES is not set
+CONFIG_USB_GADGET_SELECTED=y
+# CONFIG_USB_GADGET_NET2280 is not set
+# CONFIG_USB_GADGET_PXA2XX is not set
+# CONFIG_USB_GADGET_GOKU is not set
+# CONFIG_USB_GADGET_LH7A40X is not set
+# CONFIG_USB_GADGET_OMAP is not set
+# CONFIG_USB_GADGET_AT91 is not set
+CONFIG_USB_GADGET_DUMMY_HCD=y
+CONFIG_USB_DUMMY_HCD=y
+CONFIG_USB_GADGET_DUALSPEED=y
+# CONFIG_USB_ZERO is not set
+CONFIG_USB_ETH=m
+CONFIG_USB_ETH_RNDIS=y
+CONFIG_USB_GADGETFS=m
+CONFIG_USB_FILE_STORAGE=m
+# CONFIG_USB_FILE_STORAGE_TEST is not set
+CONFIG_USB_G_SERIAL=m
+
+#
+# MMC/SD Card support
+#
+CONFIG_MMC=y
+# CONFIG_MMC_DEBUG is not set
+CONFIG_MMC_BLOCK=y
+CONFIG_MMC_PXA=y
+
+#
+# Real Time Clock
+#
+CONFIG_RTC_LIB=y
+CONFIG_RTC_CLASS=y
+CONFIG_RTC_HCTOSYS=y
+CONFIG_RTC_HCTOSYS_DEVICE="rtc0"
+
+#
+# RTC interfaces
+#
+CONFIG_RTC_INTF_SYSFS=y
+CONFIG_RTC_INTF_PROC=y
+CONFIG_RTC_INTF_DEV=y
+
+#
+# RTC drivers
+#
+# CONFIG_RTC_DRV_X1205 is not set
+# CONFIG_RTC_DRV_DS1672 is not set
+# CONFIG_RTC_DRV_PCF8563 is not set
+# CONFIG_RTC_DRV_RS5C372 is not set
+# CONFIG_RTC_DRV_M48T86 is not set
+CONFIG_RTC_DRV_SA1100=y
+# CONFIG_RTC_DRV_TEST is not set
+
+#
+# File systems
+#
+CONFIG_EXT2_FS=y
+CONFIG_EXT2_FS_XATTR=y
+CONFIG_EXT2_FS_POSIX_ACL=y
+CONFIG_EXT2_FS_SECURITY=y
+# CONFIG_EXT2_FS_XIP is not set
+CONFIG_EXT3_FS=y
+CONFIG_EXT3_FS_XATTR=y
+CONFIG_EXT3_FS_POSIX_ACL=y
+CONFIG_EXT3_FS_SECURITY=y
+CONFIG_JBD=y
+# CONFIG_JBD_DEBUG is not set
+CONFIG_FS_MBCACHE=y
+# CONFIG_REISERFS_FS is not set
+# CONFIG_JFS_FS is not set
+CONFIG_FS_POSIX_ACL=y
+# CONFIG_XFS_FS is not set
+# CONFIG_OCFS2_FS is not set
+# CONFIG_MINIX_FS is not set
+# CONFIG_ROMFS_FS is not set
+CONFIG_INOTIFY=y
+# CONFIG_QUOTA is not set
+CONFIG_DNOTIFY=y
+# CONFIG_AUTOFS_FS is not set
+CONFIG_AUTOFS4_FS=y
+# CONFIG_FUSE_FS is not set
+
+#
+# CD-ROM/DVD Filesystems
+#
+# CONFIG_ISO9660_FS is not set
+# CONFIG_UDF_FS is not set
+
+#
+# DOS/FAT/NT Filesystems
+#
+CONFIG_FAT_FS=m
+CONFIG_MSDOS_FS=m
+CONFIG_VFAT_FS=m
+CONFIG_FAT_DEFAULT_CODEPAGE=437
+CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-15"
+# CONFIG_NTFS_FS is not set
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+CONFIG_SYSFS=y
+CONFIG_TMPFS=y
+# CONFIG_HUGETLB_PAGE is not set
+CONFIG_RAMFS=y
+# CONFIG_CONFIGFS_FS is not set
+
+#
+# Miscellaneous filesystems
+#
+# CONFIG_ADFS_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_HFSPLUS_FS is not set
+# CONFIG_BEFS_FS is not set
+# CONFIG_BFS_FS is not set
+# CONFIG_EFS_FS is not set
+CONFIG_JFFS_FS=y
+CONFIG_JFFS_FS_VERBOSE=0
+CONFIG_JFFS_PROC_FS=y
+CONFIG_JFFS2_FS=y
+CONFIG_JFFS2_FS_DEBUG=0
+CONFIG_JFFS2_FS_WRITEBUFFER=y
+# CONFIG_JFFS2_SUMMARY is not set
+CONFIG_JFFS2_COMPRESSION_OPTIONS=y
+CONFIG_JFFS2_ZLIB=y
+CONFIG_JFFS2_RTIME=y
+# CONFIG_JFFS2_RUBIN is not set
+# CONFIG_JFFS2_CMODE_NONE is not set
+CONFIG_JFFS2_CMODE_PRIORITY=y
+# CONFIG_JFFS2_CMODE_SIZE is not set
+# CONFIG_CRAMFS is not set
+# CONFIG_VXFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_SYSV_FS is not set
+# CONFIG_UFS_FS is not set
+
+#
+# Network File Systems
+#
+CONFIG_NFS_FS=y
+CONFIG_NFS_V3=y
+CONFIG_NFS_V3_ACL=y
+CONFIG_NFS_V4=y
+# CONFIG_NFS_DIRECTIO is not set
+CONFIG_NFSD=y
+CONFIG_NFSD_V2_ACL=y
+CONFIG_NFSD_V3=y
+CONFIG_NFSD_V3_ACL=y
+CONFIG_NFSD_V4=y
+CONFIG_NFSD_TCP=y
+CONFIG_ROOT_NFS=y
+CONFIG_LOCKD=y
+CONFIG_LOCKD_V4=y
+CONFIG_EXPORTFS=y
+CONFIG_NFS_ACL_SUPPORT=y
+CONFIG_NFS_COMMON=y
+CONFIG_SUNRPC=y
+CONFIG_SUNRPC_GSS=y
+CONFIG_RPCSEC_GSS_KRB5=y
+# CONFIG_RPCSEC_GSS_SPKM3 is not set
+CONFIG_SMB_FS=m
+# CONFIG_SMB_NLS_DEFAULT is not set
+CONFIG_CIFS=m
+# CONFIG_CIFS_STATS is not set
+# CONFIG_CIFS_XATTR is not set
+# CONFIG_CIFS_EXPERIMENTAL is not set
+# CONFIG_NCP_FS is not set
+# CONFIG_CODA_FS is not set
+# CONFIG_AFS_FS is not set
+# CONFIG_9P_FS is not set
+
+#
+# Partition Types
+#
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_ACORN_PARTITION is not set
+# CONFIG_OSF_PARTITION is not set
+# CONFIG_AMIGA_PARTITION is not set
+# CONFIG_ATARI_PARTITION is not set
+# CONFIG_MAC_PARTITION is not set
+CONFIG_MSDOS_PARTITION=y
+# CONFIG_BSD_DISKLABEL is not set
+# CONFIG_MINIX_SUBPARTITION is not set
+# CONFIG_SOLARIS_X86_PARTITION is not set
+# CONFIG_UNIXWARE_DISKLABEL is not set
+CONFIG_LDM_PARTITION=y
+# CONFIG_LDM_DEBUG is not set
+# CONFIG_SGI_PARTITION is not set
+# CONFIG_ULTRIX_PARTITION is not set
+# CONFIG_SUN_PARTITION is not set
+# CONFIG_KARMA_PARTITION is not set
+# CONFIG_EFI_PARTITION is not set
+
+#
+# Native Language Support
+#
+CONFIG_NLS=y
+CONFIG_NLS_DEFAULT="iso8859-15"
+CONFIG_NLS_CODEPAGE_437=y
+# CONFIG_NLS_CODEPAGE_737 is not set
+# CONFIG_NLS_CODEPAGE_775 is not set
+CONFIG_NLS_CODEPAGE_850=y
+# CONFIG_NLS_CODEPAGE_852 is not set
+# CONFIG_NLS_CODEPAGE_855 is not set
+# CONFIG_NLS_CODEPAGE_857 is not set
+# CONFIG_NLS_CODEPAGE_860 is not set
+# CONFIG_NLS_CODEPAGE_861 is not set
+# CONFIG_NLS_CODEPAGE_862 is not set
+# CONFIG_NLS_CODEPAGE_863 is not set
+# CONFIG_NLS_CODEPAGE_864 is not set
+# CONFIG_NLS_CODEPAGE_865 is not set
+# CONFIG_NLS_CODEPAGE_866 is not set
+# CONFIG_NLS_CODEPAGE_869 is not set
+# CONFIG_NLS_CODEPAGE_936 is not set
+# CONFIG_NLS_CODEPAGE_950 is not set
+# CONFIG_NLS_CODEPAGE_932 is not set
+# CONFIG_NLS_CODEPAGE_949 is not set
+# CONFIG_NLS_CODEPAGE_874 is not set
+# CONFIG_NLS_ISO8859_8 is not set
+# CONFIG_NLS_CODEPAGE_1250 is not set
+# CONFIG_NLS_CODEPAGE_1251 is not set
+CONFIG_NLS_ASCII=y
+CONFIG_NLS_ISO8859_1=m
+# CONFIG_NLS_ISO8859_2 is not set
+# CONFIG_NLS_ISO8859_3 is not set
+# CONFIG_NLS_ISO8859_4 is not set
+# CONFIG_NLS_ISO8859_5 is not set
+# CONFIG_NLS_ISO8859_6 is not set
+# CONFIG_NLS_ISO8859_7 is not set
+# CONFIG_NLS_ISO8859_9 is not set
+# CONFIG_NLS_ISO8859_13 is not set
+# CONFIG_NLS_ISO8859_14 is not set
+CONFIG_NLS_ISO8859_15=m
+# CONFIG_NLS_KOI8_R is not set
+# CONFIG_NLS_KOI8_U is not set
+CONFIG_NLS_UTF8=m
+
+#
+# Profiling support
+#
+CONFIG_PROFILING=y
+CONFIG_OPROFILE=y
+
+#
+# Kernel hacking
+#
+# CONFIG_PRINTK_TIME is not set
+CONFIG_MAGIC_SYSRQ=y
+# CONFIG_DEBUG_KERNEL is not set
+CONFIG_LOG_BUF_SHIFT=14
+# CONFIG_DEBUG_BUGVERBOSE is not set
+# CONFIG_DEBUG_FS is not set
+CONFIG_FRAME_POINTER=y
+# CONFIG_UNWIND_INFO is not set
+CONFIG_DEBUG_USER=y
+
+#
+# Security options
+#
+CONFIG_KEYS=y
+CONFIG_KEYS_DEBUG_PROC_KEYS=y
+CONFIG_SECURITY=y
+# CONFIG_SECURITY_NETWORK is not set
+CONFIG_SECURITY_CAPABILITIES=y
+# CONFIG_SECURITY_ROOTPLUG is not set
+# CONFIG_SECURITY_SECLVL is not set
+
+#
+# Cryptographic options
+#
+CONFIG_CRYPTO=y
+# CONFIG_CRYPTO_HMAC is not set
+# CONFIG_CRYPTO_NULL is not set
+CONFIG_CRYPTO_MD4=y
+CONFIG_CRYPTO_MD5=y
+CONFIG_CRYPTO_SHA1=m
+CONFIG_CRYPTO_SHA256=m
+CONFIG_CRYPTO_SHA512=m
+# CONFIG_CRYPTO_WP512 is not set
+# CONFIG_CRYPTO_TGR192 is not set
+CONFIG_CRYPTO_DES=y
+# CONFIG_CRYPTO_BLOWFISH is not set
+# CONFIG_CRYPTO_TWOFISH is not set
+# CONFIG_CRYPTO_SERPENT is not set
+CONFIG_CRYPTO_AES=m
+# CONFIG_CRYPTO_CAST5 is not set
+# CONFIG_CRYPTO_CAST6 is not set
+# CONFIG_CRYPTO_TEA is not set
+CONFIG_CRYPTO_ARC4=m
+# CONFIG_CRYPTO_KHAZAD is not set
+# CONFIG_CRYPTO_ANUBIS is not set
+CONFIG_CRYPTO_DEFLATE=m
+CONFIG_CRYPTO_MICHAEL_MIC=m
+CONFIG_CRYPTO_CRC32C=y
+# CONFIG_CRYPTO_TEST is not set
+
+#
+# Hardware crypto devices
+#
+
+#
+# Library routines
+#
+CONFIG_CRC_CCITT=y
+CONFIG_CRC16=y
+CONFIG_CRC32=y
+CONFIG_LIBCRC32C=y
+CONFIG_ZLIB_INFLATE=y
+CONFIG_ZLIB_DEFLATE=y
+CONFIG_REED_SOLOMON=y
+CONFIG_REED_SOLOMON_DEC16=y
obj-$(CONFIG_CRUNCH) += crunch.o crunch-bits.o
AFLAGS_crunch-bits.o := -Wa,-mcpu=ep9312
-obj-$(CONFIG_IWMMXT) += iwmmxt.o
+obj-$(CONFIG_IWMMXT) += iwmmxt.o iwmmxt-notifier.o
AFLAGS_iwmmxt.o := -Wa,-mcpu=iwmmxt
ifneq ($(CONFIG_ARCH_EBSA110),y)
BLANK();
DEFINE(PROC_INFO_SZ, sizeof(struct proc_info_list));
DEFINE(PROCINFO_INITFUNC, offsetof(struct proc_info_list, __cpu_flush));
- DEFINE(PROCINFO_MMUFLAGS, offsetof(struct proc_info_list, __cpu_mmu_flags));
+ DEFINE(PROCINFO_MM_MMUFLAGS, offsetof(struct proc_info_list, __cpu_mm_mmu_flags));
+ DEFINE(PROCINFO_IO_MMUFLAGS, offsetof(struct proc_info_list, __cpu_io_mmu_flags));
return 0;
}
ecard_t *ec = slot_to_ecard(slot);
if (ec->claimed) {
- struct irqdesc *d = irqdesc + ec->irq;
+ struct irq_desc *d = irq_desc + ec->irq;
/*
* this ugly code is so that we can operate a
	 * prioritising system:
int i;
for (i = 0; i < ECARD_NUM_RESOURCES; i++)
- str += sprintf(str, "%08lx %08lx %08lx\n",
+ str += sprintf(str, "%08x %08x %08lx\n",
ec->resource[i].start,
ec->resource[i].end,
ec->resource[i].flags);
#ifdef CONFIG_MMU
mcr p15, 0, r6, c3, c0, 0 @ Set domain register
#endif
-#if defined(CONFIG_IWMMXT)
- bl iwmmxt_task_switch
-#elif defined(CONFIG_CPU_XSCALE)
+#if defined(CONFIG_CPU_XSCALE) && !defined(CONFIG_IWMMXT)
add r4, r2, #TI_CPU_DOMAIN + 40 @ cpu_context_save->extra
ldmib r4, {r4, r5}
mar acc0, r4, r5
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
+#include <linux/interrupt.h>
#include <linux/seq_file.h>
#include <asm/cacheflush.h>
teq r0, r6
bne 1b
- ldr r7, [r10, #PROCINFO_MMUFLAGS] @ mmuflags
+ ldr r7, [r10, #PROCINFO_MM_MMUFLAGS] @ mm_mmuflags
/*
* Create identity mapping for first MB of kernel to
#endif
#ifdef CONFIG_DEBUG_LL
- bic r7, r7, #0x0c @ turn off cacheable
- @ and bufferable bits
+ ldr r7, [r10, #PROCINFO_IO_MMUFLAGS] @ io_mmuflags
/*
* Map in IO space for serial debugging.
* This allows debug messages to be output
#include <linux/signal.h>
#include <linux/ioport.h>
#include <linux/interrupt.h>
+#include <linux/irq.h>
#include <linux/ptrace.h>
#include <linux/slab.h>
#include <linux/random.h>
#include <linux/kallsyms.h>
#include <linux/proc_fs.h>
-#include <asm/irq.h>
#include <asm/system.h>
-#include <asm/mach/irq.h>
#include <asm/mach/time.h>
-/*
- * Maximum IRQ count. Currently, this is arbitary. However, it should
- * not be set too low to prevent false triggering. Conversely, if it
- * is set too high, then you could miss a stuck IRQ.
- *
- * Maybe we ought to set a timer and re-enable the IRQ at a later time?
- */
-#define MAX_IRQ_CNT 100000
-
-static int noirqdebug __read_mostly;
-static volatile unsigned long irq_err_count;
-static DEFINE_SPINLOCK(irq_controller_lock);
-static LIST_HEAD(irq_pending);
-
-struct irqdesc irq_desc[NR_IRQS];
-void (*init_arch_irq)(void) __initdata = NULL;
-
/*
* No architecture-specific irq_finish function defined in arm/arch/irqs.h.
*/
#define irq_finish(irq) do { } while (0)
#endif
-/*
- * Dummy mask/unmask handler
- */
-void dummy_mask_unmask_irq(unsigned int irq)
-{
-}
-
-irqreturn_t no_action(int irq, void *dev_id, struct pt_regs *regs)
-{
- return IRQ_NONE;
-}
-
-void do_bad_IRQ(unsigned int irq, struct irqdesc *desc, struct pt_regs *regs)
-{
- irq_err_count++;
- printk(KERN_ERR "IRQ: spurious interrupt %d\n", irq);
-}
-
-static struct irqchip bad_chip = {
- .ack = dummy_mask_unmask_irq,
- .mask = dummy_mask_unmask_irq,
- .unmask = dummy_mask_unmask_irq,
-};
-
-static struct irqdesc bad_irq_desc = {
- .chip = &bad_chip,
- .handle = do_bad_IRQ,
- .pend = LIST_HEAD_INIT(bad_irq_desc.pend),
- .disable_depth = 1,
-};
-
-#ifdef CONFIG_SMP
-void synchronize_irq(unsigned int irq)
-{
- struct irqdesc *desc = irq_desc + irq;
-
- while (desc->running)
- barrier();
-}
-EXPORT_SYMBOL(synchronize_irq);
-
-#define smp_set_running(desc) do { desc->running = 1; } while (0)
-#define smp_clear_running(desc) do { desc->running = 0; } while (0)
-#else
-#define smp_set_running(desc) do { } while (0)
-#define smp_clear_running(desc) do { } while (0)
-#endif
-
-/**
- * disable_irq_nosync - disable an irq without waiting
- * @irq: Interrupt to disable
- *
- * Disable the selected interrupt line. Enables and disables
- * are nested. We do this lazily.
- *
- * This function may be called from IRQ context.
- */
-void disable_irq_nosync(unsigned int irq)
-{
- struct irqdesc *desc = irq_desc + irq;
- unsigned long flags;
-
- spin_lock_irqsave(&irq_controller_lock, flags);
- desc->disable_depth++;
- list_del_init(&desc->pend);
- spin_unlock_irqrestore(&irq_controller_lock, flags);
-}
-EXPORT_SYMBOL(disable_irq_nosync);
-
-/**
- * disable_irq - disable an irq and wait for completion
- * @irq: Interrupt to disable
- *
- * Disable the selected interrupt line. Enables and disables
- * are nested. This functions waits for any pending IRQ
- * handlers for this interrupt to complete before returning.
- * If you use this function while holding a resource the IRQ
- * handler may need you will deadlock.
- *
- * This function may be called - with care - from IRQ context.
- */
-void disable_irq(unsigned int irq)
-{
- struct irqdesc *desc = irq_desc + irq;
-
- disable_irq_nosync(irq);
- if (desc->action)
- synchronize_irq(irq);
-}
-EXPORT_SYMBOL(disable_irq);
-
-/**
- * enable_irq - enable interrupt handling on an irq
- * @irq: Interrupt to enable
- *
- * Re-enables the processing of interrupts on this IRQ line.
- * Note that this may call the interrupt handler, so you may
- * get unexpected results if you hold IRQs disabled.
- *
- * This function may be called from IRQ context.
- */
-void enable_irq(unsigned int irq)
-{
- struct irqdesc *desc = irq_desc + irq;
- unsigned long flags;
-
- spin_lock_irqsave(&irq_controller_lock, flags);
- if (unlikely(!desc->disable_depth)) {
- printk("enable_irq(%u) unbalanced from %p\n", irq,
- __builtin_return_address(0));
- } else if (!--desc->disable_depth) {
- desc->probing = 0;
- desc->chip->unmask(irq);
-
- /*
- * If the interrupt is waiting to be processed,
- * try to re-run it. We can't directly run it
- * from here since the caller might be in an
- * interrupt-protected region.
- */
- if (desc->pending && list_empty(&desc->pend)) {
- desc->pending = 0;
- if (!desc->chip->retrigger ||
- desc->chip->retrigger(irq))
- list_add(&desc->pend, &irq_pending);
- }
- }
- spin_unlock_irqrestore(&irq_controller_lock, flags);
-}
-EXPORT_SYMBOL(enable_irq);
-
-/*
- * Enable wake on selected irq
- */
-void enable_irq_wake(unsigned int irq)
-{
- struct irqdesc *desc = irq_desc + irq;
- unsigned long flags;
-
- spin_lock_irqsave(&irq_controller_lock, flags);
- if (desc->chip->set_wake)
- desc->chip->set_wake(irq, 1);
- spin_unlock_irqrestore(&irq_controller_lock, flags);
-}
-EXPORT_SYMBOL(enable_irq_wake);
-
-void disable_irq_wake(unsigned int irq)
-{
- struct irqdesc *desc = irq_desc + irq;
- unsigned long flags;
-
- spin_lock_irqsave(&irq_controller_lock, flags);
- if (desc->chip->set_wake)
- desc->chip->set_wake(irq, 0);
- spin_unlock_irqrestore(&irq_controller_lock, flags);
-}
-EXPORT_SYMBOL(disable_irq_wake);
+void (*init_arch_irq)(void) __initdata = NULL;
+unsigned long irq_err_count;
int show_interrupts(struct seq_file *p, void *v)
{
}
if (i < NR_IRQS) {
- spin_lock_irqsave(&irq_controller_lock, flags);
- action = irq_desc[i].action;
+ spin_lock_irqsave(&irq_desc[i].lock, flags);
+ action = irq_desc[i].action;
if (!action)
goto unlock;
seq_putc(p, '\n');
unlock:
- spin_unlock_irqrestore(&irq_controller_lock, flags);
+ spin_unlock_irqrestore(&irq_desc[i].lock, flags);
} else if (i == NR_IRQS) {
#ifdef CONFIG_ARCH_ACORN
show_fiq_list(p, v);
return 0;
}
-/*
- * IRQ lock detection.
- *
- * Hopefully, this should get us out of a few locked situations.
- * However, it may take a while for this to happen, since we need
- * a large number if IRQs to appear in the same jiffie with the
- * same instruction pointer (or within 2 instructions).
- */
-static int check_irq_lock(struct irqdesc *desc, int irq, struct pt_regs *regs)
-{
- unsigned long instr_ptr = instruction_pointer(regs);
-
- if (desc->lck_jif == jiffies &&
- desc->lck_pc >= instr_ptr && desc->lck_pc < instr_ptr + 8) {
- desc->lck_cnt += 1;
-
- if (desc->lck_cnt > MAX_IRQ_CNT) {
- printk(KERN_ERR "IRQ LOCK: IRQ%d is locking the system, disabled\n", irq);
- return 1;
- }
- } else {
- desc->lck_cnt = 0;
- desc->lck_pc = instruction_pointer(regs);
- desc->lck_jif = jiffies;
- }
- return 0;
-}
-
-static void
-report_bad_irq(unsigned int irq, struct pt_regs *regs, struct irqdesc *desc, int ret)
-{
- static int count = 100;
- struct irqaction *action;
-
- if (noirqdebug)
- return;
-
- if (ret != IRQ_HANDLED && ret != IRQ_NONE) {
- if (!count)
- return;
- count--;
- printk("irq%u: bogus retval mask %x\n", irq, ret);
- } else {
- desc->irqs_unhandled++;
- if (desc->irqs_unhandled <= 99900)
- return;
- desc->irqs_unhandled = 0;
- printk("irq%u: nobody cared\n", irq);
- }
- show_regs(regs);
- dump_stack();
- printk(KERN_ERR "handlers:");
- action = desc->action;
- do {
- printk("\n" KERN_ERR "[<%p>]", action->handler);
- print_symbol(" (%s)", (unsigned long)action->handler);
- action = action->next;
- } while (action);
- printk("\n");
-}
-
-static int
-__do_irq(unsigned int irq, struct irqaction *action, struct pt_regs *regs)
-{
- unsigned int status;
- int ret, retval = 0;
-
- spin_unlock(&irq_controller_lock);
-
-#ifdef CONFIG_NO_IDLE_HZ
- if (!(action->flags & SA_TIMER) && system_timer->dyn_tick != NULL) {
- spin_lock(&system_timer->dyn_tick->lock);
- if (system_timer->dyn_tick->state & DYN_TICK_ENABLED)
- system_timer->dyn_tick->handler(irq, 0, regs);
- spin_unlock(&system_timer->dyn_tick->lock);
- }
-#endif
-
- if (!(action->flags & SA_INTERRUPT))
- local_irq_enable();
-
- status = 0;
- do {
- ret = action->handler(irq, action->dev_id, regs);
- if (ret == IRQ_HANDLED)
- status |= action->flags;
- retval |= ret;
- action = action->next;
- } while (action);
-
- if (status & SA_SAMPLE_RANDOM)
- add_interrupt_randomness(irq);
-
- spin_lock_irq(&irq_controller_lock);
-
- return retval;
-}
-
-/*
- * This is for software-decoded IRQs. The caller is expected to
- * handle the ack, clear, mask and unmask issues.
- */
-void
-do_simple_IRQ(unsigned int irq, struct irqdesc *desc, struct pt_regs *regs)
-{
- struct irqaction *action;
- const unsigned int cpu = smp_processor_id();
-
- desc->triggered = 1;
-
- kstat_cpu(cpu).irqs[irq]++;
-
- smp_set_running(desc);
-
- action = desc->action;
- if (action) {
- int ret = __do_irq(irq, action, regs);
- if (ret != IRQ_HANDLED)
- report_bad_irq(irq, regs, desc, ret);
- }
-
- smp_clear_running(desc);
-}
-
-/*
- * Most edge-triggered IRQ implementations seem to take a broken
- * approach to this. Hence the complexity.
- */
-void
-do_edge_IRQ(unsigned int irq, struct irqdesc *desc, struct pt_regs *regs)
-{
- const unsigned int cpu = smp_processor_id();
-
- desc->triggered = 1;
-
- /*
- * If we're currently running this IRQ, or its disabled,
- * we shouldn't process the IRQ. Instead, turn on the
- * hardware masks.
- */
- if (unlikely(desc->running || desc->disable_depth))
- goto running;
-
- /*
- * Acknowledge and clear the IRQ, but don't mask it.
- */
- desc->chip->ack(irq);
-
- /*
- * Mark the IRQ currently in progress.
- */
- desc->running = 1;
-
- kstat_cpu(cpu).irqs[irq]++;
-
- do {
- struct irqaction *action;
-
- action = desc->action;
- if (!action)
- break;
-
- if (desc->pending && !desc->disable_depth) {
- desc->pending = 0;
- desc->chip->unmask(irq);
- }
-
- __do_irq(irq, action, regs);
- } while (desc->pending && !desc->disable_depth);
-
- desc->running = 0;
-
- /*
- * If we were disabled or freed, shut down the handler.
- */
- if (likely(desc->action && !check_irq_lock(desc, irq, regs)))
- return;
-
- running:
- /*
- * We got another IRQ while this one was masked or
- * currently running. Delay it.
- */
- desc->pending = 1;
- desc->chip->mask(irq);
- desc->chip->ack(irq);
-}
-
-/*
- * Level-based IRQ handler. Nice and simple.
- */
-void
-do_level_IRQ(unsigned int irq, struct irqdesc *desc, struct pt_regs *regs)
-{
- struct irqaction *action;
- const unsigned int cpu = smp_processor_id();
-
- desc->triggered = 1;
-
- /*
- * Acknowledge, clear _AND_ disable the interrupt.
- */
- desc->chip->ack(irq);
-
- if (likely(!desc->disable_depth)) {
- kstat_cpu(cpu).irqs[irq]++;
-
- smp_set_running(desc);
-
- /*
- * Return with this interrupt masked if no action
- */
- action = desc->action;
- if (action) {
- int ret = __do_irq(irq, desc->action, regs);
-
- if (ret != IRQ_HANDLED)
- report_bad_irq(irq, regs, desc, ret);
-
- if (likely(!desc->disable_depth &&
- !check_irq_lock(desc, irq, regs)))
- desc->chip->unmask(irq);
- }
-
- smp_clear_running(desc);
- }
-}
-
-static void do_pending_irqs(struct pt_regs *regs)
-{
- struct list_head head, *l, *n;
-
- do {
- struct irqdesc *desc;
-
- /*
- * First, take the pending interrupts off the list.
- * The act of calling the handlers may add some IRQs
- * back onto the list.
- */
- head = irq_pending;
- INIT_LIST_HEAD(&irq_pending);
- head.next->prev = &head;
- head.prev->next = &head;
-
- /*
- * Now run each entry. We must delete it from our
- * list before calling the handler.
- */
- list_for_each_safe(l, n, &head) {
- desc = list_entry(l, struct irqdesc, pend);
- list_del_init(&desc->pend);
- desc_handle_irq(desc - irq_desc, desc, regs);
- }
-
- /*
- * The list must be empty.
- */
- BUG_ON(!list_empty(&head));
- } while (!list_empty(&irq_pending));
-}
+/* Handle bad interrupts */
+static struct irq_desc bad_irq_desc = {
+ .handle_irq = handle_bad_irq,
+ .lock = SPIN_LOCK_UNLOCKED
+};
/*
* do_IRQ handles all hardware IRQ's. Decoded IRQs should not
desc = &bad_irq_desc;
irq_enter();
- spin_lock(&irq_controller_lock);
- desc_handle_irq(irq, desc, regs);
- /*
- * Now re-run any pending interrupts.
- */
- if (!list_empty(&irq_pending))
- do_pending_irqs(regs);
+ desc_handle_irq(irq, desc, regs);
+ /* AT91 specific workaround */
irq_finish(irq);
- spin_unlock(&irq_controller_lock);
irq_exit();
}
-void __set_irq_handler(unsigned int irq, irq_handler_t handle, int is_chained)
-{
- struct irqdesc *desc;
- unsigned long flags;
-
- if (irq >= NR_IRQS) {
- printk(KERN_ERR "Trying to install handler for IRQ%d\n", irq);
- return;
- }
-
- if (handle == NULL)
- handle = do_bad_IRQ;
-
- desc = irq_desc + irq;
-
- if (is_chained && desc->chip == &bad_chip)
- printk(KERN_WARNING "Trying to install chained handler for IRQ%d\n", irq);
-
- spin_lock_irqsave(&irq_controller_lock, flags);
- if (handle == do_bad_IRQ) {
- desc->chip->mask(irq);
- desc->chip->ack(irq);
- desc->disable_depth = 1;
- }
- desc->handle = handle;
- if (handle != do_bad_IRQ && is_chained) {
- desc->valid = 0;
- desc->probe_ok = 0;
- desc->disable_depth = 0;
- desc->chip->unmask(irq);
- }
- spin_unlock_irqrestore(&irq_controller_lock, flags);
-}
-
-void set_irq_chip(unsigned int irq, struct irqchip *chip)
-{
- struct irqdesc *desc;
- unsigned long flags;
-
- if (irq >= NR_IRQS) {
- printk(KERN_ERR "Trying to install chip for IRQ%d\n", irq);
- return;
- }
-
- if (chip == NULL)
- chip = &bad_chip;
-
- desc = irq_desc + irq;
- spin_lock_irqsave(&irq_controller_lock, flags);
- desc->chip = chip;
- spin_unlock_irqrestore(&irq_controller_lock, flags);
-}
-
-int set_irq_type(unsigned int irq, unsigned int type)
-{
- struct irqdesc *desc;
- unsigned long flags;
- int ret = -ENXIO;
-
- if (irq >= NR_IRQS) {
- printk(KERN_ERR "Trying to set irq type for IRQ%d\n", irq);
- return -ENODEV;
- }
-
- desc = irq_desc + irq;
- if (desc->chip->set_type) {
- spin_lock_irqsave(&irq_controller_lock, flags);
- ret = desc->chip->set_type(irq, type);
- spin_unlock_irqrestore(&irq_controller_lock, flags);
- }
-
- return ret;
-}
-EXPORT_SYMBOL(set_irq_type);
-
void set_irq_flags(unsigned int irq, unsigned int iflags)
{
struct irqdesc *desc;
}
desc = irq_desc + irq;
- spin_lock_irqsave(&irq_controller_lock, flags);
- desc->valid = (iflags & IRQF_VALID) != 0;
- desc->probe_ok = (iflags & IRQF_PROBE) != 0;
- desc->noautoenable = (iflags & IRQF_NOAUTOEN) != 0;
- spin_unlock_irqrestore(&irq_controller_lock, flags);
-}
-
-int setup_irq(unsigned int irq, struct irqaction *new)
-{
- int shared = 0;
- struct irqaction *old, **p;
- unsigned long flags;
- struct irqdesc *desc;
-
- /*
- * Some drivers like serial.c use request_irq() heavily,
- * so we have to be careful not to interfere with a
- * running system.
- */
- if (new->flags & SA_SAMPLE_RANDOM) {
- /*
- * This function might sleep, we want to call it first,
- * outside of the atomic block.
- * Yes, this might clear the entropy pool if the wrong
- * driver is attempted to be loaded, without actually
- * installing a new handler, but is this really a problem,
- * only the sysadmin is able to do this.
- */
- rand_initialize_irq(irq);
- }
-
- /*
- * The following block of code has to be executed atomically
- */
- desc = irq_desc + irq;
- spin_lock_irqsave(&irq_controller_lock, flags);
- p = &desc->action;
- if ((old = *p) != NULL) {
- /*
- * Can't share interrupts unless both agree to and are
- * the same type.
- */
- if (!(old->flags & new->flags & SA_SHIRQ) ||
- (~old->flags & new->flags) & SA_TRIGGER_MASK) {
- spin_unlock_irqrestore(&irq_controller_lock, flags);
- return -EBUSY;
- }
-
- /* add new interrupt at end of irq queue */
- do {
- p = &old->next;
- old = *p;
- } while (old);
- shared = 1;
- }
-
- *p = new;
-
- if (!shared) {
- desc->probing = 0;
- desc->running = 0;
- desc->pending = 0;
- desc->disable_depth = 1;
-
- if (new->flags & SA_TRIGGER_MASK &&
- desc->chip->set_type) {
- unsigned int type = new->flags & SA_TRIGGER_MASK;
- desc->chip->set_type(irq, type);
- }
-
- if (!desc->noautoenable) {
- desc->disable_depth = 0;
- desc->chip->unmask(irq);
- }
- }
-
- spin_unlock_irqrestore(&irq_controller_lock, flags);
- return 0;
-}
-
-/**
- * request_irq - allocate an interrupt line
- * @irq: Interrupt line to allocate
- * @handler: Function to be called when the IRQ occurs
- * @irqflags: Interrupt type flags
- * @devname: An ascii name for the claiming device
- * @dev_id: A cookie passed back to the handler function
- *
- * This call allocates interrupt resources and enables the
- * interrupt line and IRQ handling. From the point this
- * call is made your handler function may be invoked. Since
- * your handler function must clear any interrupt the board
- * raises, you must take care both to initialise your hardware
- * and to set up the interrupt handler in the right order.
- *
- * Dev_id must be globally unique. Normally the address of the
- * device data structure is used as the cookie. Since the handler
- * receives this value it makes sense to use it.
- *
- * If your interrupt is shared you must pass a non NULL dev_id
- * as this is required when freeing the interrupt.
- *
- * Flags:
- *
- * SA_SHIRQ Interrupt is shared
- *
- * SA_INTERRUPT Disable local interrupts while processing
- *
- * SA_SAMPLE_RANDOM The interrupt can be used for entropy
- *
- */
-int request_irq(unsigned int irq, irqreturn_t (*handler)(int, void *, struct pt_regs *),
- unsigned long irq_flags, const char * devname, void *dev_id)
-{
- unsigned long retval;
- struct irqaction *action;
-
- if (irq >= NR_IRQS || !irq_desc[irq].valid || !handler ||
- (irq_flags & SA_SHIRQ && !dev_id))
- return -EINVAL;
-
- action = (struct irqaction *)kmalloc(sizeof(struct irqaction), GFP_KERNEL);
- if (!action)
- return -ENOMEM;
-
- action->handler = handler;
- action->flags = irq_flags;
- cpus_clear(action->mask);
- action->name = devname;
- action->next = NULL;
- action->dev_id = dev_id;
-
- retval = setup_irq(irq, action);
-
- if (retval)
- kfree(action);
- return retval;
-}
-
-EXPORT_SYMBOL(request_irq);
-
-/**
- * free_irq - free an interrupt
- * @irq: Interrupt line to free
- * @dev_id: Device identity to free
- *
- * Remove an interrupt handler. The handler is removed and if the
- * interrupt line is no longer in use by any driver it is disabled.
- * On a shared IRQ the caller must ensure the interrupt is disabled
- * on the card it drives before calling this function.
- *
- * This function must not be called from interrupt context.
- */
-void free_irq(unsigned int irq, void *dev_id)
-{
- struct irqaction * action, **p;
- unsigned long flags;
-
- if (irq >= NR_IRQS || !irq_desc[irq].valid) {
- printk(KERN_ERR "Trying to free IRQ%d\n",irq);
- dump_stack();
- return;
- }
-
- spin_lock_irqsave(&irq_controller_lock, flags);
- for (p = &irq_desc[irq].action; (action = *p) != NULL; p = &action->next) {
- if (action->dev_id != dev_id)
- continue;
-
- /* Found it - now free it */
- *p = action->next;
- break;
- }
- spin_unlock_irqrestore(&irq_controller_lock, flags);
-
- if (!action) {
- printk(KERN_ERR "Trying to free free IRQ%d\n",irq);
- dump_stack();
- } else {
- synchronize_irq(irq);
- kfree(action);
- }
-}
-
-EXPORT_SYMBOL(free_irq);
-
-static DECLARE_MUTEX(probe_sem);
-
-/* Start the interrupt probing. Unlike other architectures,
- * we don't return a mask of interrupts from probe_irq_on,
- * but return the number of interrupts enabled for the probe.
- * The interrupts which have been enabled for probing is
- * instead recorded in the irq_desc structure.
- */
-unsigned long probe_irq_on(void)
-{
- unsigned int i, irqs = 0;
- unsigned long delay;
-
- down(&probe_sem);
-
- /*
- * first snaffle up any unassigned but
- * probe-able interrupts
- */
- spin_lock_irq(&irq_controller_lock);
- for (i = 0; i < NR_IRQS; i++) {
- if (!irq_desc[i].probe_ok || irq_desc[i].action)
- continue;
-
- irq_desc[i].probing = 1;
- irq_desc[i].triggered = 0;
- if (irq_desc[i].chip->set_type)
- irq_desc[i].chip->set_type(i, IRQT_PROBE);
- irq_desc[i].chip->unmask(i);
- irqs += 1;
- }
- spin_unlock_irq(&irq_controller_lock);
-
- /*
- * wait for spurious interrupts to mask themselves out again
- */
- for (delay = jiffies + HZ/10; time_before(jiffies, delay); )
- /* min 100ms delay */;
-
- /*
- * now filter out any obviously spurious interrupts
- */
- spin_lock_irq(&irq_controller_lock);
- for (i = 0; i < NR_IRQS; i++) {
- if (irq_desc[i].probing && irq_desc[i].triggered) {
- irq_desc[i].probing = 0;
- irqs -= 1;
- }
- }
- spin_unlock_irq(&irq_controller_lock);
-
- return irqs;
-}
-
-EXPORT_SYMBOL(probe_irq_on);
-
-unsigned int probe_irq_mask(unsigned long irqs)
-{
- unsigned int mask = 0, i;
-
- spin_lock_irq(&irq_controller_lock);
- for (i = 0; i < 16 && i < NR_IRQS; i++)
- if (irq_desc[i].probing && irq_desc[i].triggered)
- mask |= 1 << i;
- spin_unlock_irq(&irq_controller_lock);
-
- up(&probe_sem);
-
- return mask;
-}
-EXPORT_SYMBOL(probe_irq_mask);
-
-/*
- * Possible return values:
- * >= 0 - interrupt number
- * -1 - no interrupt/many interrupts
- */
-int probe_irq_off(unsigned long irqs)
-{
- unsigned int i;
- int irq_found = NO_IRQ;
-
- /*
- * look at the interrupts, and find exactly one
- * that we were probing has been triggered
- */
- spin_lock_irq(&irq_controller_lock);
- for (i = 0; i < NR_IRQS; i++) {
- if (irq_desc[i].probing &&
- irq_desc[i].triggered) {
- if (irq_found != NO_IRQ) {
- irq_found = NO_IRQ;
- goto out;
- }
- irq_found = i;
- }
- }
-
- if (irq_found == -1)
- irq_found = NO_IRQ;
-out:
- spin_unlock_irq(&irq_controller_lock);
-
- up(&probe_sem);
-
- return irq_found;
-}
-
-EXPORT_SYMBOL(probe_irq_off);
-
-#ifdef CONFIG_SMP
-static void route_irq(struct irqdesc *desc, unsigned int irq, unsigned int cpu)
-{
- pr_debug("IRQ%u: moving from cpu%u to cpu%u\n", irq, desc->cpu, cpu);
-
- spin_lock_irq(&irq_controller_lock);
- desc->cpu = cpu;
- desc->chip->set_cpu(desc, irq, cpu);
- spin_unlock_irq(&irq_controller_lock);
-}
-
-#ifdef CONFIG_PROC_FS
-static int
-irq_affinity_read_proc(char *page, char **start, off_t off, int count,
- int *eof, void *data)
-{
- struct irqdesc *desc = irq_desc + ((int)data);
- int len = cpumask_scnprintf(page, count, desc->affinity);
-
- if (count - len < 2)
- return -EINVAL;
- page[len++] = '\n';
- page[len] = '\0';
-
- return len;
-}
-
-static int
-irq_affinity_write_proc(struct file *file, const char __user *buffer,
- unsigned long count, void *data)
-{
- unsigned int irq = (unsigned int)data;
- struct irqdesc *desc = irq_desc + irq;
- cpumask_t affinity, tmp;
- int ret = -EIO;
-
- if (!desc->chip->set_cpu)
- goto out;
-
- ret = cpumask_parse(buffer, count, affinity);
- if (ret)
- goto out;
-
- cpus_and(tmp, affinity, cpu_online_map);
- if (cpus_empty(tmp)) {
- ret = -EINVAL;
- goto out;
- }
-
- desc->affinity = affinity;
- route_irq(desc, irq, first_cpu(tmp));
- ret = count;
-
- out:
- return ret;
-}
-#endif
-#endif
-
-void __init init_irq_proc(void)
-{
-#if defined(CONFIG_SMP) && defined(CONFIG_PROC_FS)
- struct proc_dir_entry *dir;
- int irq;
-
- dir = proc_mkdir("irq", NULL);
- if (!dir)
- return;
-
- for (irq = 0; irq < NR_IRQS; irq++) {
- struct proc_dir_entry *entry;
- struct irqdesc *desc;
- char name[16];
-
- desc = irq_desc + irq;
- memset(name, 0, sizeof(name));
- snprintf(name, sizeof(name) - 1, "%u", irq);
-
- desc->procdir = proc_mkdir(name, dir);
- if (!desc->procdir)
- continue;
-
- entry = create_proc_entry("smp_affinity", 0600, desc->procdir);
- if (entry) {
- entry->nlink = 1;
- entry->data = (void *)irq;
- entry->read_proc = irq_affinity_read_proc;
- entry->write_proc = irq_affinity_write_proc;
- }
- }
-#endif
+ spin_lock_irqsave(&desc->lock, flags);
+ desc->status |= IRQ_NOREQUEST | IRQ_NOPROBE | IRQ_NOAUTOEN;
+ if (iflags & IRQF_VALID)
+ desc->status &= ~IRQ_NOREQUEST;
+ if (iflags & IRQF_PROBE)
+ desc->status &= ~IRQ_NOPROBE;
+ if (!(iflags & IRQF_NOAUTOEN))
+ desc->status &= ~IRQ_NOAUTOEN;
+ spin_unlock_irqrestore(&desc->lock, flags);
}
void __init init_IRQ(void)
{
- struct irqdesc *desc;
int irq;
+ for (irq = 0; irq < NR_IRQS; irq++)
+ irq_desc[irq].status |= IRQ_NOREQUEST | IRQ_DELAYED_DISABLE |
+ IRQ_NOPROBE;
+
#ifdef CONFIG_SMP
bad_irq_desc.affinity = CPU_MASK_ALL;
bad_irq_desc.cpu = smp_processor_id();
#endif
-
- for (irq = 0, desc = irq_desc; irq < NR_IRQS; irq++, desc++) {
- *desc = bad_irq_desc;
- INIT_LIST_HEAD(&desc->pend);
- }
-
init_arch_irq();
}
-static int __init noirqdebug_setup(char *str)
-{
- noirqdebug = 1;
- return 1;
-}
-
-__setup("noirqdebug", noirqdebug_setup);
-
#ifdef CONFIG_HOTPLUG_CPU
/*
* The CPU has been marked offline. Migrate IRQs off this CPU. If
--- /dev/null
+/*
+ * linux/arch/arm/kernel/iwmmxt-notifier.c
+ *
+ * XScale iWMMXt (Concan) context switching and handling
+ *
+ * Initial code:
+ * Copyright (c) 2003, Intel Corporation
+ *
+ * Full lazy switching support, optimizations and more, by Nicolas Pitre
+ * Copyright (c) 2003-2004, MontaVista Software, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/config.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/signal.h>
+#include <linux/sched.h>
+#include <linux/init.h>
+#include <asm/thread_notify.h>
+#include <asm/io.h>
+
+static int iwmmxt_do(struct notifier_block *self, unsigned long cmd, void *t)
+{
+ struct thread_info *thread = t;
+
+ switch (cmd) {
+ case THREAD_NOTIFY_FLUSH:
+ /*
+ * flush_thread() zeroes thread->fpstate, so no need
+ * to do anything here.
+ *
+ * FALLTHROUGH: Ensure we don't try to overwrite our newly
+ * initialised state information on the first fault.
+ */
+
+ case THREAD_NOTIFY_RELEASE:
+ iwmmxt_task_release(thread);
+ break;
+
+ case THREAD_NOTIFY_SWITCH:
+ iwmmxt_task_switch(thread);
+ break;
+ }
+
+ return NOTIFY_DONE;
+}
+
+static struct notifier_block iwmmxt_notifier_block = {
+ .notifier_call = iwmmxt_do,
+};
+
+static int __init iwmmxt_init(void)
+{
+ thread_register_notifier(&iwmmxt_notifier_block);
+
+ return 0;
+}
+
+late_initcall(iwmmxt_init);
/*
* Concan handling on task switch
*
- * r0 = previous task_struct pointer (must be preserved)
- * r1 = previous thread_info pointer
- * r2 = next thread_info pointer (must be preserved)
+ * r0 = next thread_info pointer
*
- * Called only from __switch_to with task preemption disabled.
- * No need to care about preserving r4 and above.
+ * Called only from the iwmmxt notifier with task preemption disabled.
*/
ENTRY(iwmmxt_task_switch)
- mrc p15, 0, r4, c15, c1, 0
- tst r4, #0x3 @ CP0 and CP1 accessible?
+ mrc p15, 0, r1, c15, c1, 0
+ tst r1, #0x3 @ CP0 and CP1 accessible?
bne 1f @ yes: block them for next task
- ldr r5, =concan_owner
- add r6, r2, #TI_IWMMXT_STATE @ get next task Concan save area
- ldr r5, [r5] @ get current Concan owner
- teq r5, r6 @ next task owns it?
+ ldr r2, =concan_owner
+ add r3, r0, #TI_IWMMXT_STATE @ get next task Concan save area
+ ldr r2, [r2] @ get current Concan owner
+ teq r2, r3 @ next task owns it?
movne pc, lr @ no: leave Concan disabled
-1: eor r4, r4, #3 @ flip Concan access
- mcr p15, 0, r4, c15, c1, 0
+1: eor r1, r1, #3 @ flip Concan access
+ mcr p15, 0, r1, c15, c1, 0
- mrc p15, 0, r4, c2, c0, 0
- sub pc, lr, r4, lsr #32 @ cpwait and return
+ mrc p15, 0, r1, c2, c0, 0
+ sub pc, lr, r1, lsr #32 @ cpwait and return
/*
* Remove Concan ownership of given task
memset(&thread->fpstate, 0, sizeof(union fp_state));
thread_notify(THREAD_NOTIFY_FLUSH, thread);
-#if defined(CONFIG_IWMMXT)
- iwmmxt_task_release(thread);
-#endif
}
void release_thread(struct task_struct *dead_task)
struct thread_info *thread = task_thread_info(dead_task);
thread_notify(THREAD_NOTIFY_RELEASE, thread);
-#if defined(CONFIG_IWMMXT)
- iwmmxt_task_release(thread);
-#endif
}
asmlinkage void ret_from_fork(void) __asm__("ret_from_fork");
cpu_cache = *list->cache;
#endif
- printk("CPU: %s [%08x] revision %d (ARMv%s)\n",
+ printk("CPU: %s [%08x] revision %d (ARMv%s), cr=%08lx\n",
cpu_name, processor_id, (int)processor_id & 15,
- proc_arch[cpu_architecture()]);
+ proc_arch[cpu_architecture()], cr_alignment);
sprintf(system_utsname.machine, "%s%c", list->arch_name, ENDIANNESS);
sprintf(elf_platform, "%s%c", list->elf_name, ENDIANNESS);
static struct irqaction aaec2000_timer_irq = {
.name = "AAEC-2000 Timer Tick",
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
.handler = aaec2000_timer_interrupt,
};
-if ARCH_AT91RM9200
+if ARCH_AT91
+
+menu "Atmel AT91 System-on-Chip"
+
+comment "Atmel AT91 Processors"
+
+config ARCH_AT91RM9200
+ bool "AT91RM9200"
-menu "AT91RM9200 Implementations"
+config ARCH_AT91SAM9260
+ bool "AT91SAM9260"
+
+config ARCH_AT91SAM9261
+ bool "AT91SAM9261"
+
+# ----------------------------------------------------------
+
+if ARCH_AT91RM9200
comment "AT91RM9200 Board Type"
bool "Ajeco 1ARM Single Board Computer"
depends on ARCH_AT91RM9200
help
- Select this if you are using Ajeco's 1ARM Single Board Computer
+ Select this if you are using Ajeco's 1ARM Single Board Computer.
+ <http://www.ajeco.fi/products.htm>
config ARCH_AT91RM9200DK
bool "Atmel AT91RM9200-DK Development board"
depends on ARCH_AT91RM9200
help
- Select this if you are using Atmel's AT91RM9200-DK Development board
+ Select this if you are using Atmel's AT91RM9200-DK Development board.
+ (Discontinued)
+
config MACH_AT91RM9200EK
bool "Atmel AT91RM9200-EK Evaluation Kit"
depends on ARCH_AT91RM9200
help
- Select this if you are using Atmel's AT91RM9200-EK Evaluation Kit
+ Select this if you are using Atmel's AT91RM9200-EK Evaluation Kit.
+ <http://www.atmel.com/dyn/products/tools_card.asp?tool_id=3507>
config MACH_CSB337
- bool "Cogent CSB337 board"
+ bool "Cogent CSB337"
depends on ARCH_AT91RM9200
help
- Select this if you are using Cogent's CSB337 board
+ Select this if you are using Cogent's CSB337 board.
+ <http://www.cogcomp.com/csb_csb337.htm>
config MACH_CSB637
- bool "Cogent CSB637 board"
+ bool "Cogent CSB637"
depends on ARCH_AT91RM9200
help
- Select this if you are using Cogent's CSB637 board
+ Select this if you are using Cogent's CSB637 board.
+ <http://www.cogcomp.com/csb_csb637.htm>
config MACH_CARMEVA
- bool "Conitec's ARM&EVA"
+ bool "Conitec ARM&EVA"
depends on ARCH_AT91RM9200
help
- Select this if you are using Conitec's AT91RM9200-MCU-Module
+ Select this if you are using Conitec's AT91RM9200-MCU-Module.
+ <http://www.conitec.net/english/linuxboard.htm>
-config MACH_KB9200
- bool "KwikByte's KB920x"
+config MACH_ATEB9200
+ bool "Embest ATEB9200"
depends on ARCH_AT91RM9200
help
- Select this if you are using KwikByte's KB920x board
+ Select this if you are using Embest's ATEB9200 board.
+ <http://www.embedinfo.com/english/product/ATEB9200.asp>
-config MACH_ATEB9200
- bool "Embest's ATEB9200"
+config MACH_KB9200
+ bool "KwikByte KB920x"
depends on ARCH_AT91RM9200
help
- Select this if you are using Embest's ATEB9200 board
+ Select this if you are using KwikByte's KB920x board.
+ <http://kwikbyte.com/KB9202_description_new.htm>
config MACH_KAFA
bool "Sperry-Sun KAFA board"
depends on ARCH_AT91RM9200
help
- Select this if you are using Sperry-Sun's KAFA board
+ Select this if you are using Sperry-Sun's KAFA board.
+
+endif
+
+# ----------------------------------------------------------
+
+if ARCH_AT91SAM9260
+
+comment "AT91SAM9260 Board Type"
+
+endif
+
+# ----------------------------------------------------------
+
+if ARCH_AT91SAM9261
+
+comment "AT91SAM9261 Board Type"
+
+endif
+
+# ----------------------------------------------------------
-comment "AT91RM9200 Feature Selections"
+comment "AT91 Feature Selections"
config AT91_PROGRAMMABLE_CLOCKS
bool "Programmable Clocks"
# Makefile for the linux kernel.
#
-obj-y := clock.o irq.o time.o gpio.o common.o devices.o
+obj-y := clock.o irq.o gpio.o devices.o
obj-m :=
obj-n :=
obj- :=
obj-$(CONFIG_PM) += pm.o
-# Board-specific support
+# CPU-specific support
+obj-$(CONFIG_ARCH_AT91RM9200) += at91rm9200.o at91rm9200_time.o
+obj-$(CONFIG_ARCH_AT91SAM9260) +=
+obj-$(CONFIG_ARCH_AT91SAM9261) +=
+
+# AT91RM9200 Board-specific support
obj-$(CONFIG_MACH_ONEARM) += board-1arm.o
obj-$(CONFIG_ARCH_AT91RM9200DK) += board-dk.o
obj-$(CONFIG_MACH_AT91RM9200EK) += board-ek.o
obj-$(CONFIG_MACH_ATEB9200) += board-eb9200.o
obj-$(CONFIG_MACH_KAFA) += board-kafa.o
+# AT91SAM9260 board-specific support
+
+# AT91SAM9261 board-specific support
+
# LEDs support
led-$(CONFIG_ARCH_AT91RM9200DK) += leds.o
led-$(CONFIG_MACH_AT91RM9200EK) += leds.o
--- /dev/null
+/*
+ * arch/arm/mach-at91rm9200/at91rm9200.c
+ *
+ * Copyright (C) 2005 SAN People
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ */
+
+#include <linux/module.h>
+
+#include <asm/mach/arch.h>
+#include <asm/mach/map.h>
+
+#include <asm/hardware.h>
+#include "generic.h"
+
+static struct map_desc at91rm9200_io_desc[] __initdata = {
+ {
+ .virtual = AT91_VA_BASE_SYS,
+ .pfn = __phys_to_pfn(AT91_BASE_SYS),
+ .length = SZ_4K,
+ .type = MT_DEVICE,
+ }, {
+ .virtual = AT91_VA_BASE_SPI,
+ .pfn = __phys_to_pfn(AT91_BASE_SPI),
+ .length = SZ_16K,
+ .type = MT_DEVICE,
+ }, {
+ .virtual = AT91_VA_BASE_SSC2,
+ .pfn = __phys_to_pfn(AT91_BASE_SSC2),
+ .length = SZ_16K,
+ .type = MT_DEVICE,
+ }, {
+ .virtual = AT91_VA_BASE_SSC1,
+ .pfn = __phys_to_pfn(AT91_BASE_SSC1),
+ .length = SZ_16K,
+ .type = MT_DEVICE,
+ }, {
+ .virtual = AT91_VA_BASE_SSC0,
+ .pfn = __phys_to_pfn(AT91_BASE_SSC0),
+ .length = SZ_16K,
+ .type = MT_DEVICE,
+ }, {
+ .virtual = AT91_VA_BASE_US3,
+ .pfn = __phys_to_pfn(AT91_BASE_US3),
+ .length = SZ_16K,
+ .type = MT_DEVICE,
+ }, {
+ .virtual = AT91_VA_BASE_US2,
+ .pfn = __phys_to_pfn(AT91_BASE_US2),
+ .length = SZ_16K,
+ .type = MT_DEVICE,
+ }, {
+ .virtual = AT91_VA_BASE_US1,
+ .pfn = __phys_to_pfn(AT91_BASE_US1),
+ .length = SZ_16K,
+ .type = MT_DEVICE,
+ }, {
+ .virtual = AT91_VA_BASE_US0,
+ .pfn = __phys_to_pfn(AT91_BASE_US0),
+ .length = SZ_16K,
+ .type = MT_DEVICE,
+ }, {
+ .virtual = AT91_VA_BASE_EMAC,
+ .pfn = __phys_to_pfn(AT91_BASE_EMAC),
+ .length = SZ_16K,
+ .type = MT_DEVICE,
+ }, {
+ .virtual = AT91_VA_BASE_TWI,
+ .pfn = __phys_to_pfn(AT91_BASE_TWI),
+ .length = SZ_16K,
+ .type = MT_DEVICE,
+ }, {
+ .virtual = AT91_VA_BASE_MCI,
+ .pfn = __phys_to_pfn(AT91_BASE_MCI),
+ .length = SZ_16K,
+ .type = MT_DEVICE,
+ }, {
+ .virtual = AT91_VA_BASE_UDP,
+ .pfn = __phys_to_pfn(AT91_BASE_UDP),
+ .length = SZ_16K,
+ .type = MT_DEVICE,
+ }, {
+ .virtual = AT91_VA_BASE_TCB1,
+ .pfn = __phys_to_pfn(AT91_BASE_TCB1),
+ .length = SZ_16K,
+ .type = MT_DEVICE,
+ }, {
+ .virtual = AT91_VA_BASE_TCB0,
+ .pfn = __phys_to_pfn(AT91_BASE_TCB0),
+ .length = SZ_16K,
+ .type = MT_DEVICE,
+ }, {
+ .virtual = AT91_SRAM_VIRT_BASE,
+ .pfn = __phys_to_pfn(AT91_SRAM_BASE),
+ .length = AT91_SRAM_SIZE,
+ .type = MT_DEVICE,
+ },
+};
+
+void __init at91rm9200_map_io(void)
+{
+ iotable_init(at91rm9200_io_desc, ARRAY_SIZE(at91rm9200_io_desc));
+}
+
--- /dev/null
+/*
+ * linux/arch/arm/mach-at91rm9200/at91rm9200_time.c
+ *
+ * Copyright (C) 2003 SAN People
+ * Copyright (C) 2003 ATMEL
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/time.h>
+
+#include <asm/hardware.h>
+#include <asm/io.h>
+#include <asm/mach/time.h>
+
+static unsigned long last_crtr;
+
+/*
+ * The ST_CRTR is updated asynchronously to the master clock. It is therefore
+ * necessary to read it twice (with the same value) to ensure accuracy.
+ */
+static inline unsigned long read_CRTR(void) {
+ unsigned long x1, x2;
+
+ do {
+ x1 = at91_sys_read(AT91_ST_CRTR);
+ x2 = at91_sys_read(AT91_ST_CRTR);
+ } while (x1 != x2);
+
+ return x1;
+}
+
+/*
+ * Returns number of microseconds since last timer interrupt. Note that interrupts
+ * will have been disabled by do_gettimeofday()
+ * 'LATCH' is hwclock ticks (see CLOCK_TICK_RATE in timex.h) per jiffy.
+ * 'tick' is usecs per jiffy (linux/timex.h).
+ */
+static unsigned long at91rm9200_gettimeoffset(void)
+{
+ unsigned long elapsed;
+
+ elapsed = (read_CRTR() - last_crtr) & AT91_ST_ALMV;
+
+ return (unsigned long)(elapsed * (tick_nsec / 1000)) / LATCH;
+}
+
+/*
+ * IRQ handler for the timer.
+ */
+static irqreturn_t at91rm9200_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ if (at91_sys_read(AT91_ST_SR) & AT91_ST_PITS) { /* This is a shared interrupt */
+ write_seqlock(&xtime_lock);
+
+ while (((read_CRTR() - last_crtr) & AT91_ST_ALMV) >= LATCH) {
+ timer_tick(regs);
+ last_crtr = (last_crtr + LATCH) & AT91_ST_ALMV;
+ }
+
+ write_sequnlock(&xtime_lock);
+
+ return IRQ_HANDLED;
+ }
+ else
+ return IRQ_NONE; /* not handled */
+}
+
+static struct irqaction at91rm9200_timer_irq = {
+ .name = "at91_tick",
+ .flags = IRQF_SHARED | IRQF_DISABLED | IRQF_TIMER,
+ .handler = at91rm9200_timer_interrupt
+};
+
+void at91rm9200_timer_reset(void)
+{
+ last_crtr = 0;
+
+ /* Real time counter incremented every 30.51758 microseconds */
+ at91_sys_write(AT91_ST_RTMR, 1);
+
+ /* Set Period Interval timer */
+ at91_sys_write(AT91_ST_PIMR, LATCH);
+
+ /* Enable Period Interval Timer interrupt */
+ at91_sys_write(AT91_ST_IER, AT91_ST_PITS);
+}
+
+/*
+ * Set up timer interrupt.
+ */
+void __init at91rm9200_timer_init(void)
+{
+ /* Disable all timer interrupts */
+ at91_sys_write(AT91_ST_IDR, AT91_ST_PITS | AT91_ST_WDOVF | AT91_ST_RTTINC | AT91_ST_ALMS);
+ (void) at91_sys_read(AT91_ST_SR); /* Clear any pending interrupts */
+
+ /* Make IRQs happen for the system timer */
+ setup_irq(AT91_ID_SYS, &at91rm9200_timer_irq);
+
+ /* Change the kernel's 'tick' value to 10009 usec. (the default is 10000) */
+ tick_usec = (LATCH * 1000000) / CLOCK_TICK_RATE;
+
+ /* Initialize and enable the timer interrupt */
+ at91rm9200_timer_reset();
+}
+
+#ifdef CONFIG_PM
+static void at91rm9200_timer_suspend(void)
+{
+ /* disable Period Interval Timer interrupt */
+ at91_sys_write(AT91_ST_IDR, AT91_ST_PITS);
+}
+#else
+#define at91rm9200_timer_suspend NULL
+#endif
+
+struct sys_timer at91rm9200_timer = {
+ .init = at91rm9200_timer_init,
+ .offset = at91rm9200_gettimeoffset,
+ .suspend = at91rm9200_timer_suspend,
+ .resume = at91rm9200_timer_reset,
+};
+
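A quick standalone check of the tick arithmetic above, assuming the 32.768 kHz slow clock as CLOCK_TICK_RATE and HZ set to 100 (both assumptions, not taken from this patch): LATCH then comes to 328 slow-clock ticks per jiffy, one jiffy to the 10009 usec value programmed in at91rm9200_timer_init(), and at91rm9200_gettimeoffset() scales elapsed slow-clock ticks by that ratio.

    #include <stdio.h>

    int main(void)
    {
            unsigned long clock_tick_rate = 32768;  /* assumed ST slow clock, Hz */
            unsigned long hz = 100;                 /* assumed HZ */
            unsigned long latch = (clock_tick_rate + hz / 2) / hz;             /* 328 ticks per jiffy */
            unsigned long usec_per_jiffy = latch * 1000000 / clock_tick_rate;  /* 10009 us */
            unsigned long elapsed = 164;            /* example: ticks since the last timer tick */

            /* same scaling as at91rm9200_gettimeoffset(): about half a jiffy, 5004 us */
            printf("latch=%lu jiffy=%lu us offset=%lu us\n",
                   latch, usec_per_jiffy, elapsed * usec_per_jiffy / latch);
            return 0;
    }
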
+++ /dev/null
-/*
- * arch/arm/mach-at91rm9200/common.c
- *
- * Copyright (C) 2005 SAN People
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- */
-
-#include <linux/module.h>
-
-#include <asm/mach/arch.h>
-#include <asm/mach/map.h>
-
-#include <asm/hardware.h>
-#include "generic.h"
-
-static struct map_desc at91rm9200_io_desc[] __initdata = {
- {
- .virtual = AT91_VA_BASE_SYS,
- .pfn = __phys_to_pfn(AT91_BASE_SYS),
- .length = SZ_4K,
- .type = MT_DEVICE,
- }, {
- .virtual = AT91_VA_BASE_SPI,
- .pfn = __phys_to_pfn(AT91_BASE_SPI),
- .length = SZ_16K,
- .type = MT_DEVICE,
- }, {
- .virtual = AT91_VA_BASE_SSC2,
- .pfn = __phys_to_pfn(AT91_BASE_SSC2),
- .length = SZ_16K,
- .type = MT_DEVICE,
- }, {
- .virtual = AT91_VA_BASE_SSC1,
- .pfn = __phys_to_pfn(AT91_BASE_SSC1),
- .length = SZ_16K,
- .type = MT_DEVICE,
- }, {
- .virtual = AT91_VA_BASE_SSC0,
- .pfn = __phys_to_pfn(AT91_BASE_SSC0),
- .length = SZ_16K,
- .type = MT_DEVICE,
- }, {
- .virtual = AT91_VA_BASE_US3,
- .pfn = __phys_to_pfn(AT91_BASE_US3),
- .length = SZ_16K,
- .type = MT_DEVICE,
- }, {
- .virtual = AT91_VA_BASE_US2,
- .pfn = __phys_to_pfn(AT91_BASE_US2),
- .length = SZ_16K,
- .type = MT_DEVICE,
- }, {
- .virtual = AT91_VA_BASE_US1,
- .pfn = __phys_to_pfn(AT91_BASE_US1),
- .length = SZ_16K,
- .type = MT_DEVICE,
- }, {
- .virtual = AT91_VA_BASE_US0,
- .pfn = __phys_to_pfn(AT91_BASE_US0),
- .length = SZ_16K,
- .type = MT_DEVICE,
- }, {
- .virtual = AT91_VA_BASE_EMAC,
- .pfn = __phys_to_pfn(AT91_BASE_EMAC),
- .length = SZ_16K,
- .type = MT_DEVICE,
- }, {
- .virtual = AT91_VA_BASE_TWI,
- .pfn = __phys_to_pfn(AT91_BASE_TWI),
- .length = SZ_16K,
- .type = MT_DEVICE,
- }, {
- .virtual = AT91_VA_BASE_MCI,
- .pfn = __phys_to_pfn(AT91_BASE_MCI),
- .length = SZ_16K,
- .type = MT_DEVICE,
- }, {
- .virtual = AT91_VA_BASE_UDP,
- .pfn = __phys_to_pfn(AT91_BASE_UDP),
- .length = SZ_16K,
- .type = MT_DEVICE,
- }, {
- .virtual = AT91_VA_BASE_TCB1,
- .pfn = __phys_to_pfn(AT91_BASE_TCB1),
- .length = SZ_16K,
- .type = MT_DEVICE,
- }, {
- .virtual = AT91_VA_BASE_TCB0,
- .pfn = __phys_to_pfn(AT91_BASE_TCB0),
- .length = SZ_16K,
- .type = MT_DEVICE,
- }, {
- .virtual = AT91_SRAM_VIRT_BASE,
- .pfn = __phys_to_pfn(AT91_SRAM_BASE),
- .length = AT91_SRAM_SIZE,
- .type = MT_DEVICE,
- },
-};
-
-void __init at91rm9200_map_io(void)
-{
- iotable_init(at91rm9200_io_desc, ARRAY_SIZE(at91rm9200_io_desc));
-}
-
*/
#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/module.h>
#include <asm/io.h>
-#include <asm/mach/irq.h>
#include <asm/hardware.h>
#include <asm/arch/gpio.h>
void __iomem *pio;
u32 isr;
- pio = desc->base;
+ pio = get_irq_chip_data(irq);
/* temporarily mask (level sensitive) parent IRQ */
desc->chip->ack(irq);
if (!isr)
break;
- pin = (unsigned) desc->data;
+ pin = (unsigned) get_irq_data(irq);
gpio = &irq_desc[pin];
while (isr) {
if (isr & 1) {
- if (unlikely(gpio->disable_depth)) {
+ if (unlikely(gpio->depth)) {
/*
* The core ARM interrupt handler lazily disables IRQs so
* another IRQ must be generated before it actually gets
gpio_irq_mask(pin);
}
else
- gpio->handle(pin, gpio, regs);
+ desc_handle_irq(pin, gpio, regs);
}
pin++;
gpio++;
+++ /dev/null
-/*
- * linux/arch/arm/mach-at91rm9200/time.c
- *
- * Copyright (C) 2003 SAN People
- * Copyright (C) 2003 ATMEL
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- */
-
-#include <linux/init.h>
-#include <linux/interrupt.h>
-#include <linux/kernel.h>
-#include <linux/sched.h>
-#include <linux/time.h>
-
-#include <asm/hardware.h>
-#include <asm/io.h>
-#include <asm/irq.h>
-#include <asm/mach/time.h>
-
-static unsigned long last_crtr;
-
-/*
- * The ST_CRTR is updated asynchronously to the master clock. It is therefore
- * necessary to read it twice (with the same value) to ensure accuracy.
- */
-static inline unsigned long read_CRTR(void) {
- unsigned long x1, x2;
-
- do {
- x1 = at91_sys_read(AT91_ST_CRTR);
- x2 = at91_sys_read(AT91_ST_CRTR);
- } while (x1 != x2);
-
- return x1;
-}
-
-/*
- * Returns number of microseconds since last timer interrupt. Note that interrupts
- * will have been disabled by do_gettimeofday()
- * 'LATCH' is hwclock ticks (see CLOCK_TICK_RATE in timex.h) per jiffy.
- * 'tick' is usecs per jiffy (linux/timex.h).
- */
-static unsigned long at91rm9200_gettimeoffset(void)
-{
- unsigned long elapsed;
-
- elapsed = (read_CRTR() - last_crtr) & AT91_ST_ALMV;
-
- return (unsigned long)(elapsed * (tick_nsec / 1000)) / LATCH;
-}
-
-/*
- * IRQ handler for the timer.
- */
-static irqreturn_t at91rm9200_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
-{
- if (at91_sys_read(AT91_ST_SR) & AT91_ST_PITS) { /* This is a shared interrupt */
- write_seqlock(&xtime_lock);
-
- while (((read_CRTR() - last_crtr) & AT91_ST_ALMV) >= LATCH) {
- timer_tick(regs);
- last_crtr = (last_crtr + LATCH) & AT91_ST_ALMV;
- }
-
- write_sequnlock(&xtime_lock);
-
- return IRQ_HANDLED;
- }
- else
- return IRQ_NONE; /* not handled */
-}
-
-static struct irqaction at91rm9200_timer_irq = {
- .name = "at91_tick",
- .flags = SA_SHIRQ | SA_INTERRUPT | SA_TIMER,
- .handler = at91rm9200_timer_interrupt
-};
-
-void at91rm9200_timer_reset(void)
-{
- last_crtr = 0;
-
- /* Real time counter incremented every 30.51758 microseconds */
- at91_sys_write(AT91_ST_RTMR, 1);
-
- /* Set Period Interval timer */
- at91_sys_write(AT91_ST_PIMR, LATCH);
-
- /* Enable Period Interval Timer interrupt */
- at91_sys_write(AT91_ST_IER, AT91_ST_PITS);
-}
-
-/*
- * Set up timer interrupt.
- */
-void __init at91rm9200_timer_init(void)
-{
- /* Disable all timer interrupts */
- at91_sys_write(AT91_ST_IDR, AT91_ST_PITS | AT91_ST_WDOVF | AT91_ST_RTTINC | AT91_ST_ALMS);
- (void) at91_sys_read(AT91_ST_SR); /* Clear any pending interrupts */
-
- /* Make IRQs happen for the system timer */
- setup_irq(AT91_ID_SYS, &at91rm9200_timer_irq);
-
- /* Change the kernel's 'tick' value to 10009 usec. (the default is 10000) */
- tick_usec = (LATCH * 1000000) / CLOCK_TICK_RATE;
-
- /* Initialize and enable the timer interrupt */
- at91rm9200_timer_reset();
-}
-
-#ifdef CONFIG_PM
-static void at91rm9200_timer_suspend(void)
-{
- /* disable Period Interval Timer interrupt */
- at91_sys_write(AT91_ST_IDR, AT91_ST_PITS);
-}
-#else
-#define at91rm9200_timer_suspend NULL
-#endif
-
-struct sys_timer at91rm9200_timer = {
- .init = at91rm9200_timer_init,
- .offset = at91rm9200_gettimeoffset,
- .suspend = at91rm9200_timer_suspend,
- .resume = at91rm9200_timer_reset,
-};
-
#include <linux/timex.h>
#include <linux/init.h>
#include <linux/interrupt.h>
+#include <linux/irq.h>
#include <linux/sched.h>
#include <asm/hardware.h>
static struct irqaction clps711x_timer_irq = {
.name = "CLPS711x Timer Tick",
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
.handler = p720t_timer_interrupt,
};
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/interrupt.h>
+#include <linux/irq.h>
#include <linux/list.h>
#include <linux/sched.h>
#include <linux/init.h>
static struct irqaction clps7500_timer_irq = {
.name = "CLPS7500 Timer Tick",
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
.handler = clps7500_timer_interrupt,
};
static struct irqaction ebsa110_timer_irq = {
.name = "EBSA110 Timer Tick",
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
.handler = ebsa110_timer_interrupt,
};
comment "EP93xx Platforms"
+config MACH_EDB9302
+ bool "Support Cirrus Logic EDB9302"
+ help
+ Say 'Y' here if you want your kernel to support the Cirrus
+ Logic EDB9302 Evaluation Board.
+
config MACH_EDB9315
bool "Support Cirrus Logic EDB9315"
help
Say 'Y' here if you want your kernel to support the Cirrus
Logic EDB9315 Evaluation Board.
+config MACH_EDB9315A
+ bool "Support Cirrus Logic EDB9315A"
+ help
+ Say 'Y' here if you want your kernel to support the Cirrus
+ Logic EDB9315A Evaluation Board.
+
config MACH_GESBC9312
bool "Support Glomation GESBC-9312-sx"
help
obj-n :=
obj- :=
+obj-$(CONFIG_MACH_EDB9302) += edb9302.o
obj-$(CONFIG_MACH_EDB9315) += edb9315.o
+obj-$(CONFIG_MACH_EDB9315A) += edb9315a.o
obj-$(CONFIG_MACH_GESBC9312) += gesbc9312.o
obj-$(CONFIG_MACH_TS72XX) += ts72xx.o
static struct irqaction ep93xx_timer_irq = {
.name = "ep93xx timer",
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
.handler = ep93xx_timer_interrupt,
};
--- /dev/null
+/*
+ * arch/arm/mach-ep93xx/edb9302.c
+ * Cirrus Logic EDB9302 support.
+ *
+ * Copyright (C) 2006 George Kashperko <george@chas.com.ua>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or (at
+ * your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/interrupt.h>
+#include <linux/ioport.h>
+#include <linux/mtd/physmap.h>
+#include <linux/platform_device.h>
+#include <asm/io.h>
+#include <asm/hardware.h>
+#include <asm/mach-types.h>
+#include <asm/mach/arch.h>
+
+static struct physmap_flash_data edb9302_flash_data = {
+ .width = 2,
+};
+
+static struct resource edb9302_flash_resource = {
+ .start = 0x60000000,
+ .end = 0x60ffffff,
+ .flags = IORESOURCE_MEM,
+};
+
+static struct platform_device edb9302_flash = {
+ .name = "physmap-flash",
+ .id = 0,
+ .dev = {
+ .platform_data = &edb9302_flash_data,
+ },
+ .num_resources = 1,
+ .resource = &edb9302_flash_resource,
+};
+
+static void __init edb9302_init_machine(void)
+{
+ ep93xx_init_devices();
+ platform_device_register(&edb9302_flash);
+}
+
+MACHINE_START(EDB9302, "Cirrus Logic EDB9302 Evaluation Board")
+ /* Maintainer: George Kashperko <george@chas.com.ua> */
+ .phys_io = EP93XX_APB_PHYS_BASE,
+ .io_pg_offst = ((EP93XX_APB_VIRT_BASE) >> 18) & 0xfffc,
+ .boot_params = 0x00000100,
+ .map_io = ep93xx_map_io,
+ .init_irq = ep93xx_init_irq,
+ .timer = &ep93xx_timer,
+ .init_machine = edb9302_init_machine,
+MACHINE_END
--- /dev/null
+/*
+ * arch/arm/mach-ep93xx/edb9315a.c
+ * Cirrus Logic EDB9315A support.
+ *
+ * Copyright (C) 2006 Lennert Buytenhek <buytenh@wantstofly.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or (at
+ * your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/interrupt.h>
+#include <linux/ioport.h>
+#include <linux/mtd/physmap.h>
+#include <linux/platform_device.h>
+#include <asm/io.h>
+#include <asm/hardware.h>
+#include <asm/mach-types.h>
+#include <asm/mach/arch.h>
+
+static struct physmap_flash_data edb9315a_flash_data = {
+ .width = 2,
+};
+
+static struct resource edb9315a_flash_resource = {
+ .start = 0x60000000,
+ .end = 0x60ffffff,
+ .flags = IORESOURCE_MEM,
+};
+
+static struct platform_device edb9315a_flash = {
+ .name = "physmap-flash",
+ .id = 0,
+ .dev = {
+ .platform_data = &edb9315a_flash_data,
+ },
+ .num_resources = 1,
+ .resource = &edb9315a_flash_resource,
+};
+
+static void __init edb9315a_init_machine(void)
+{
+ ep93xx_init_devices();
+ platform_device_register(&edb9315a_flash);
+}
+
+MACHINE_START(EDB9315A, "Cirrus Logic EDB9315A Evaluation Board")
+ /* Maintainer: Lennert Buytenhek <buytenh@wantstofly.org> */
+ .phys_io = EP93XX_APB_PHYS_BASE,
+ .io_pg_offst = ((EP93XX_APB_VIRT_BASE) >> 18) & 0xfffc,
+ .boot_params = 0xc0000100,
+ .map_io = ep93xx_map_io,
+ .init_irq = ep93xx_init_irq,
+ .timer = &ep93xx_timer,
+ .init_machine = edb9315a_init_machine,
+MACHINE_END
*/
#include <linux/init.h>
#include <linux/interrupt.h>
+#include <linux/irq.h>
#include <asm/irq.h>
static struct irqaction footbridge_timer_irq = {
.name = "Timer1 timer tick",
.handler = timer1_interrupt,
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
};
/*
/*
* We don't care if these fail.
*/
- request_irq(IRQ_PCI_SERR, dc21285_serr_irq, SA_INTERRUPT,
+ request_irq(IRQ_PCI_SERR, dc21285_serr_irq, IRQF_DISABLED,
"PCI system error", &serr_timer);
- request_irq(IRQ_PCI_PERR, dc21285_parity_irq, SA_INTERRUPT,
+ request_irq(IRQ_PCI_PERR, dc21285_parity_irq, IRQF_DISABLED,
"PCI parity error", &perr_timer);
- request_irq(IRQ_PCI_ABORT, dc21285_abort_irq, SA_INTERRUPT,
+ request_irq(IRQ_PCI_ABORT, dc21285_abort_irq, IRQF_DISABLED,
"PCI abort", NULL);
- request_irq(IRQ_DISCARD_TIMER, dc21285_discard_irq, SA_INTERRUPT,
+ request_irq(IRQ_DISCARD_TIMER, dc21285_discard_irq, IRQF_DISABLED,
"Discard timer", NULL);
- request_irq(IRQ_PCI_DPERR, dc21285_dparity_irq, SA_INTERRUPT,
+ request_irq(IRQ_PCI_DPERR, dc21285_dparity_irq, IRQF_DISABLED,
"PCI data parity", NULL);
if (cfn_mode) {
desc_handle_irq(isa_irq, desc, regs);
}
-static struct irqaction irq_cascade = { .handler = no_action, .name = "cascade", };
-static struct resource pic1_resource = { "pic1", 0x20, 0x3f };
-static struct resource pic2_resource = { "pic2", 0xa0, 0xbf };
+static struct irqaction irq_cascade = {
+ .handler = no_action,
+ .name = "cascade",
+};
+
+static struct resource pic1_resource = {
+ .name = "pic1",
+ .start = 0x20,
+ .end = 0x3f,
+};
+
+static struct resource pic2_resource = {
+ .name = "pic2",
+ .start = 0xa0,
+ .end = 0xbf,
+};
void __init isa_init_irq(unsigned int host_irq)
{
*/
#include <linux/init.h>
#include <linux/interrupt.h>
+#include <linux/irq.h>
#include <asm/io.h>
#include <asm/irq.h>
static struct irqaction isa_timer_irq = {
.name = "ISA timer tick",
.handler = isa_timer_interrupt,
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
};
static void __init isa_timer_init(void)
static struct irqaction h7201_timer_irq = {
.name = "h7201 Timer Tick",
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
.handler = h7201_timer_interrupt,
};
static struct irqaction h7202_timer_irq = {
.name = "h7202 Timer Tick",
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
.handler = h7202_timer_interrupt,
};
#include <linux/sched.h>
#include <linux/init.h>
#include <linux/interrupt.h>
+#include <linux/irq.h>
#include <linux/time.h>
#include <asm/hardware.h>
static struct irqaction imx_timer_irq = {
.name = "i.MX Timer Tick",
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
.handler = imx_timer_interrupt,
};
#include <linux/device.h>
#include <linux/spinlock.h>
#include <linux/interrupt.h>
+#include <linux/irq.h>
#include <linux/sched.h>
#include <linux/smp.h>
#include <linux/termios.h>
static struct irqaction integrator_timer_irq = {
.name = "Integrator Timer Tick",
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
.handler = integrator_timer_interrupt,
};
xtime.tv_sec = __raw_readl(rtc_base + RTC_DR);
- ret = request_irq(dev->irq[0], arm_rtc_interrupt, SA_INTERRUPT,
+ ret = request_irq(dev->irq[0], arm_rtc_interrupt, IRQF_DISABLED,
"rtc-pl030", dev);
if (ret)
goto map_out;
select ARCH_IOP331
help
Say Y here if you want to run your kernel on the Intel IQ80332
- evaluation kit for the IOP332 chipset
+ evaluation kit for the IOP332 chipset.
config ARCH_EP80219
- bool "Enable support for EP80219"
- select ARCH_IOP321
- select ARCH_IQ31244
+ bool "Enable support for EP80219"
+ select ARCH_IOP321
+ select ARCH_IQ31244
+ help
+ Say Y here if you want to run your kernel on the Intel EP80219
+	  evaluation kit for the Intel 80219 chipset (an IOP321 variant).
# Which IOP variant are we running?
config ARCH_IOP321
bool "Chip stepping D of the IOP80331 processor or IOP80333"
depends on (ARCH_IOP331)
help
- Say Y here if you have StepD of the IOP80331 or IOP8033
- based platforms.
+	  Say Y here if you have StepD of the IOP80331 or IOP80333
+ based platforms.
endmenu
endif
static struct irqaction iop321_timer_irq = {
.name = "IOP321 Timer Tick",
.handler = iop321_timer_interrupt,
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
};
static void __init iop321_timer_init(void)
static struct irqaction iop331_timer_irq = {
.name = "IOP331 Timer Tick",
.handler = iop331_timer_interrupt,
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
};
static void __init iop331_timer_init(void)
#include <linux/spinlock.h>
#include <linux/sched.h>
#include <linux/interrupt.h>
+#include <linux/irq.h>
#include <linux/serial.h>
#include <linux/tty.h>
#include <linux/bitops.h>
static struct irqaction ixp2000_timer_irq = {
.name = "IXP2000 Timer Tick",
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
.handler = ixp2000_timer_interrupt,
};
for(i = 31; i >= 0; i--) {
if(status & (1 << i)) {
desc = irq_desc + IRQ_IXP2000_DRAM0_MIN_ERR + i;
- desc->handle(IRQ_IXP2000_DRAM0_MIN_ERR + i, desc, regs);
+ desc_handle_irq(IRQ_IXP2000_DRAM0_MIN_ERR + i, desc, regs);
}
}
}
}
/* Hook into PCI interrupt */
- set_irq_chained_handler(IRQ_IXP2000_PCIB, &ixdp2x00_irq_handler);
+ set_irq_chained_handler(IRQ_IXP2000_PCIB, ixdp2x00_irq_handler);
}
/*************************************************************************
}
/* Hook into PCI interrupts */
- set_irq_chained_handler(IRQ_IXP2000_PCIB, &ixdp2x01_irq_handler);
+ set_irq_chained_handler(IRQ_IXP2000_PCIB, ixdp2x01_irq_handler);
}
}
int_desc = irq_desc + irqno;
- int_desc->handle(irqno, int_desc, regs);
+ desc_handle_irq(irqno, int_desc, regs);
desc->chip->unmask(irq);
}
static struct irqaction ixp23xx_timer_irq = {
.name = "IXP23xx Timer Tick",
.handler = ixp23xx_timer_interrupt,
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
};
void __init ixp23xx_init_timer(void)
#include <linux/spinlock.h>
#include <linux/sched.h>
#include <linux/interrupt.h>
+#include <linux/irq.h>
#include <linux/serial.h>
#include <linux/tty.h>
#include <linux/bitops.h>
#include <asm/memory.h>
#include <asm/hardware.h>
#include <asm/mach-types.h>
-#include <asm/irq.h>
#include <asm/system.h>
#include <asm/tlbflush.h>
#include <asm/pgtable.h>
int cpld_irq =
IXP23XX_MACH_IRQ(IXDP2351_INTA_IRQ_BASE + i);
cpld_desc = irq_desc + cpld_irq;
- cpld_desc->handle(cpld_irq, cpld_desc, regs);
+ desc_handle_irq(cpld_irq, cpld_desc, regs);
}
}
int cpld_irq =
IXP23XX_MACH_IRQ(IXDP2351_INTB_IRQ_BASE + i);
cpld_desc = irq_desc + cpld_irq;
- cpld_desc->handle(cpld_irq, cpld_desc, regs);
+ desc_handle_irq(cpld_irq, cpld_desc, regs);
}
}
}
}
- set_irq_chained_handler(IRQ_IXP23XX_INTA, &ixdp2351_inta_handler);
- set_irq_chained_handler(IRQ_IXP23XX_INTB, &ixdp2351_intb_handler);
+ set_irq_chained_handler(IRQ_IXP23XX_INTA, ixdp2351_inta_handler);
+ set_irq_chained_handler(IRQ_IXP23XX_INTB, ixdp2351_intb_handler);
}
/*
static struct irqaction ixp4xx_timer_irq = {
.name = "IXP4xx Timer Tick",
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
.handler = ixp4xx_timer_interrupt,
};
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/init.h>
+#include <linux/irq.h>
#include <asm/mach-types.h>
#include <asm/hardware.h>
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/init.h>
+#include <linux/irq.h>
#include <linux/delay.h>
#include <asm/mach/pci.h>
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/init.h>
+#include <linux/irq.h>
#include <asm/mach-types.h>
#include <asm/hardware.h>
-#include <asm/irq.h>
#include <asm/mach/pci.h>
#include <linux/pci.h>
#include <linux/init.h>
+#include <linux/irq.h>
#include <asm/mach/pci.h>
#include <asm/mach-types.h>
*
*/
-#include <linux/module.h>
-#include <linux/reboot.h>
#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/module.h>
#include <linux/reboot.h>
#include <asm/mach-types.h>
set_irq_type(NAS100D_RB_IRQ, IRQT_LOW);
if (request_irq(NAS100D_RB_IRQ, &nas100d_reset_handler,
- SA_INTERRUPT, "NAS100D reset button", NULL) < 0) {
+ IRQF_DISABLED, "NAS100D reset button", NULL) < 0) {
printk(KERN_DEBUG "Reset Button IRQ %d not available\n",
NAS100D_RB_IRQ);
set_irq_type(NSLU2_PB_IRQ, IRQT_HIGH);
if (request_irq(NSLU2_RB_IRQ, &nslu2_reset_handler,
- SA_INTERRUPT, "NSLU2 reset button", NULL) < 0) {
+ IRQF_DISABLED, "NSLU2 reset button", NULL) < 0) {
printk(KERN_DEBUG "Reset Button IRQ %d not available\n",
NSLU2_RB_IRQ);
}
if (request_irq(NSLU2_PB_IRQ, &nslu2_power_handler,
- SA_INTERRUPT, "NSLU2 power button", NULL) < 0) {
+ IRQF_DISABLED, "NSLU2 power button", NULL) < 0) {
printk(KERN_DEBUG "Power Button IRQ %d not available\n",
NSLU2_PB_IRQ);
*/
#include <linux/kernel.h>
#include <linux/init.h>
+#include <linux/irq.h>
#include <linux/device.h>
#include <asm/types.h>
#include <linux/init.h>
#include <linux/platform_device.h>
#include <linux/interrupt.h>
+#include <linux/irq.h>
#include <asm/hardware.h>
#include <asm/setup.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/interrupt.h>
+#include <linux/irq.h>
#include <linux/time.h>
#include <asm/hardware.h>
static struct irqaction lh7a40x_timer_irq = {
.name = "LHA740x Timer Tick",
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
.handler = lh7a40x_timer_interrupt,
};
static struct irqaction netx_timer_irq = {
.name = "NetX Timer Tick",
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
.handler = netx_timer_interrupt,
};
Support for TI OMAP 730 Perseus2 board. Say Y here if you have such
a board.
+config MACH_OMAP_FSAMPLE
+ bool "TI F-Sample"
+ depends on ARCH_OMAP1 && ARCH_OMAP730
+ help
+ Support for TI OMAP 850 F-Sample board. Say Y here if you have such
+ a board.
+
config MACH_VOICEBLUE
bool "Voiceblue"
depends on ARCH_OMAP1 && ARCH_OMAP15XX
obj-$(CONFIG_MACH_OMAP_INNOVATOR) += board-innovator.o
obj-$(CONFIG_MACH_OMAP_GENERIC) += board-generic.o
obj-$(CONFIG_MACH_OMAP_PERSEUS2) += board-perseus2.o
+obj-$(CONFIG_MACH_OMAP_FSAMPLE) += board-fsample.o
obj-$(CONFIG_MACH_OMAP_OSK) += board-osk.o
obj-$(CONFIG_MACH_OMAP_H3) += board-h3.o
obj-$(CONFIG_MACH_VOICEBLUE) += board-voiceblue.o
.enabled_uarts = 1,
};
+static struct omap_usb_config ams_delta_usb_config __initdata = {
+ .register_host = 1,
+ .hmc_mode = 16,
+ .pins[0] = 2,
+};
+
static struct omap_board_config_kernel ams_delta_config[] = {
{ OMAP_TAG_UART, &ams_delta_uart_config },
+ { OMAP_TAG_USB, &ams_delta_usb_config },
};
static struct platform_device ams_delta_led_device = {
--- /dev/null
+/*
+ * linux/arch/arm/mach-omap1/board-fsample.c
+ *
+ * Modified from board-perseus2.c
+ *
+ * Original OMAP730 support by Jean Pihet <j-pihet@ti.com>
+ * Updated for 2.6 by Kevin Hilman <kjh@hilman.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/platform_device.h>
+#include <linux/delay.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/nand.h>
+#include <linux/mtd/partitions.h>
+#include <linux/input.h>
+
+#include <asm/hardware.h>
+#include <asm/mach-types.h>
+#include <asm/mach/arch.h>
+#include <asm/mach/flash.h>
+#include <asm/mach/map.h>
+
+#include <asm/arch/tc.h>
+#include <asm/arch/gpio.h>
+#include <asm/arch/mux.h>
+#include <asm/arch/fpga.h>
+#include <asm/arch/keypad.h>
+#include <asm/arch/common.h>
+#include <asm/arch/board.h>
+#include <asm/arch/board-fsample.h>
+
+static int fsample_keymap[] = {
+ KEY(0,0,KEY_UP),
+ KEY(0,1,KEY_RIGHT),
+ KEY(0,2,KEY_LEFT),
+ KEY(0,3,KEY_DOWN),
+ KEY(0,4,KEY_CENTER),
+ KEY(0,5,KEY_0_5),
+ KEY(1,0,KEY_SOFT2),
+ KEY(1,1,KEY_SEND),
+ KEY(1,2,KEY_END),
+ KEY(1,3,KEY_VOLUMEDOWN),
+ KEY(1,4,KEY_VOLUMEUP),
+ KEY(1,5,KEY_RECORD),
+ KEY(2,0,KEY_SOFT1),
+ KEY(2,1,KEY_3),
+ KEY(2,2,KEY_6),
+ KEY(2,3,KEY_9),
+ KEY(2,4,KEY_SHARP),
+ KEY(2,5,KEY_2_5),
+ KEY(3,0,KEY_BACK),
+ KEY(3,1,KEY_2),
+ KEY(3,2,KEY_5),
+ KEY(3,3,KEY_8),
+ KEY(3,4,KEY_0),
+ KEY(3,5,KEY_HEADSETHOOK),
+ KEY(4,0,KEY_HOME),
+ KEY(4,1,KEY_1),
+ KEY(4,2,KEY_4),
+ KEY(4,3,KEY_7),
+ KEY(4,4,KEY_STAR),
+ KEY(4,5,KEY_POWER),
+ 0
+};
+
+static struct resource smc91x_resources[] = {
+ [0] = {
+ .start = H2P2_DBG_FPGA_ETHR_START, /* Physical */
+ .end = H2P2_DBG_FPGA_ETHR_START + 0xf,
+ .flags = IORESOURCE_MEM,
+ },
+ [1] = {
+ .start = INT_730_MPU_EXT_NIRQ,
+ .end = 0,
+ .flags = IORESOURCE_IRQ,
+ },
+};
+
+static struct mtd_partition nor_partitions[] = {
+ /* bootloader (U-Boot, etc) in first sector */
+ {
+ .name = "bootloader",
+ .offset = 0,
+ .size = SZ_128K,
+ .mask_flags = MTD_WRITEABLE, /* force read-only */
+ },
+ /* bootloader params in the next sector */
+ {
+ .name = "params",
+ .offset = MTDPART_OFS_APPEND,
+ .size = SZ_128K,
+ .mask_flags = 0,
+ },
+ /* kernel */
+ {
+ .name = "kernel",
+ .offset = MTDPART_OFS_APPEND,
+ .size = SZ_2M,
+ .mask_flags = 0
+ },
+ /* rest of flash is a file system */
+ {
+ .name = "rootfs",
+ .offset = MTDPART_OFS_APPEND,
+ .size = MTDPART_SIZ_FULL,
+ .mask_flags = 0
+ },
+};
+
+static struct flash_platform_data nor_data = {
+ .map_name = "cfi_probe",
+ .width = 2,
+ .parts = nor_partitions,
+ .nr_parts = ARRAY_SIZE(nor_partitions),
+};
+
+static struct resource nor_resource = {
+ .start = OMAP_CS0_PHYS,
+ .end = OMAP_CS0_PHYS + SZ_32M - 1,
+ .flags = IORESOURCE_MEM,
+};
+
+static struct platform_device nor_device = {
+ .name = "omapflash",
+ .id = 0,
+ .dev = {
+ .platform_data = &nor_data,
+ },
+ .num_resources = 1,
+ .resource = &nor_resource,
+};
+
+static struct nand_platform_data nand_data = {
+ .options = NAND_SAMSUNG_LP_OPTIONS,
+};
+
+static struct resource nand_resource = {
+ .start = OMAP_CS3_PHYS,
+ .end = OMAP_CS3_PHYS + SZ_4K - 1,
+ .flags = IORESOURCE_MEM,
+};
+
+static struct platform_device nand_device = {
+ .name = "omapnand",
+ .id = 0,
+ .dev = {
+ .platform_data = &nand_data,
+ },
+ .num_resources = 1,
+ .resource = &nand_resource,
+};
+
+static struct platform_device smc91x_device = {
+ .name = "smc91x",
+ .id = 0,
+ .num_resources = ARRAY_SIZE(smc91x_resources),
+ .resource = smc91x_resources,
+};
+
+static struct resource kp_resources[] = {
+ [0] = {
+ .start = INT_730_MPUIO_KEYPAD,
+ .end = INT_730_MPUIO_KEYPAD,
+ .flags = IORESOURCE_IRQ,
+ },
+};
+
+static struct omap_kp_platform_data kp_data = {
+ .rows = 8,
+ .cols = 8,
+ .keymap = fsample_keymap,
+};
+
+static struct platform_device kp_device = {
+ .name = "omap-keypad",
+ .id = -1,
+ .dev = {
+ .platform_data = &kp_data,
+ },
+ .num_resources = ARRAY_SIZE(kp_resources),
+ .resource = kp_resources,
+};
+
+static struct platform_device lcd_device = {
+ .name = "lcd_p2",
+ .id = -1,
+};
+
+static struct platform_device *devices[] __initdata = {
+ &nor_device,
+ &nand_device,
+ &smc91x_device,
+ &kp_device,
+ &lcd_device,
+};
+
+#define P2_NAND_RB_GPIO_PIN 62
+
+static int nand_dev_ready(struct nand_platform_data *data)
+{
+ return omap_get_gpio_datain(P2_NAND_RB_GPIO_PIN);
+}
+
+static struct omap_uart_config fsample_uart_config __initdata = {
+ .enabled_uarts = ((1 << 0) | (1 << 1)),
+};
+
+static struct omap_lcd_config fsample_lcd_config __initdata = {
+ .ctrl_name = "internal",
+};
+
+static struct omap_board_config_kernel fsample_config[] = {
+ { OMAP_TAG_UART, &fsample_uart_config },
+ { OMAP_TAG_LCD, &fsample_lcd_config },
+};
+
+static void __init omap_fsample_init(void)
+{
+ if (!(omap_request_gpio(P2_NAND_RB_GPIO_PIN)))
+ nand_data.dev_ready = nand_dev_ready;
+
+ omap_cfg_reg(L3_1610_FLASH_CS2B_OE);
+ omap_cfg_reg(M8_1610_FLASH_CS2B_WE);
+
+ platform_add_devices(devices, ARRAY_SIZE(devices));
+
+ omap_board_config = fsample_config;
+ omap_board_config_size = ARRAY_SIZE(fsample_config);
+ omap_serial_init();
+}
+
+static void __init fsample_init_smc91x(void)
+{
+ fpga_write(1, H2P2_DBG_FPGA_LAN_RESET);
+ mdelay(50);
+ fpga_write(fpga_read(H2P2_DBG_FPGA_LAN_RESET) & ~1,
+ H2P2_DBG_FPGA_LAN_RESET);
+ mdelay(50);
+}
+
+void omap_fsample_init_irq(void)
+{
+ omap1_init_common_hw();
+ omap_init_irq();
+ omap_gpio_init();
+ fsample_init_smc91x();
+}
+
+/* Only FPGA needs to be mapped here. All others are done with ioremap */
+static struct map_desc omap_fsample_io_desc[] __initdata = {
+ {
+ .virtual = H2P2_DBG_FPGA_BASE,
+ .pfn = __phys_to_pfn(H2P2_DBG_FPGA_START),
+ .length = H2P2_DBG_FPGA_SIZE,
+ .type = MT_DEVICE
+ },
+ {
+ .virtual = FSAMPLE_CPLD_BASE,
+ .pfn = __phys_to_pfn(FSAMPLE_CPLD_START),
+ .length = FSAMPLE_CPLD_SIZE,
+ .type = MT_DEVICE
+ }
+};
+
+static void __init omap_fsample_map_io(void)
+{
+ omap1_map_common_io();
+ iotable_init(omap_fsample_io_desc,
+ ARRAY_SIZE(omap_fsample_io_desc));
+
+ /* Early, board-dependent init */
+
+ /*
+ * Hold GSM Reset until needed
+ */
+ omap_writew(omap_readw(OMAP730_DSP_M_CTL) & ~1, OMAP730_DSP_M_CTL);
+
+ /*
+ * UARTs -> done automagically by 8250 driver
+ */
+
+ /*
+ * CSx timings, GPIO Mux ... setup
+ */
+
+ /* Flash: CS0 timings setup */
+ omap_writel(0x0000fff3, OMAP730_FLASH_CFG_0);
+ omap_writel(0x00000088, OMAP730_FLASH_ACFG_0);
+
+ /*
+ * Ethernet support through the debug board
+ * CS1 timings setup
+ */
+ omap_writel(0x0000fff3, OMAP730_FLASH_CFG_1);
+ omap_writel(0x00000000, OMAP730_FLASH_ACFG_1);
+
+ /*
+ * Configure MPU_EXT_NIRQ IO in IO_CONF9 register,
+ * It is used as the Ethernet controller interrupt
+ */
+ omap_writel(omap_readl(OMAP730_IO_CONF_9) & 0x1FFFFFFF, OMAP730_IO_CONF_9);
+}
+
+MACHINE_START(OMAP_FSAMPLE, "OMAP730 F-Sample")
+/* Maintainer: Brian Swetland <swetland@google.com> */
+ .phys_io = 0xfff00000,
+ .io_pg_offst = ((0xfef00000) >> 18) & 0xfffc,
+ .boot_params = 0x10000100,
+ .map_io = omap_fsample_map_io,
+ .init_irq = omap_fsample_init_irq,
+ .init_machine = omap_fsample_init,
+ .timer = &omap_timer,
+MACHINE_END
#include <asm/arch/usb.h>
#include <asm/arch/keypad.h>
#include <asm/arch/common.h>
+#include <asm/arch/mcbsp.h>
+#include <asm/arch/omap-alsa.h>
static int innovator_keymap[] = {
KEY(0, 0, KEY_F1),
.resource = &innovator_flash_resource,
};
+#define DEFAULT_BITPERSAMPLE 16
+
+static struct omap_mcbsp_reg_cfg mcbsp_regs = {
+ .spcr2 = FREE | FRST | GRST | XRST | XINTM(3),
+ .spcr1 = RINTM(3) | RRST,
+ .rcr2 = RPHASE | RFRLEN2(OMAP_MCBSP_WORD_8) |
+ RWDLEN2(OMAP_MCBSP_WORD_16) | RDATDLY(0),
+ .rcr1 = RFRLEN1(OMAP_MCBSP_WORD_8) | RWDLEN1(OMAP_MCBSP_WORD_16),
+ .xcr2 = XPHASE | XFRLEN2(OMAP_MCBSP_WORD_8) |
+ XWDLEN2(OMAP_MCBSP_WORD_16) | XDATDLY(0) | XFIG,
+ .xcr1 = XFRLEN1(OMAP_MCBSP_WORD_8) | XWDLEN1(OMAP_MCBSP_WORD_16),
+ .srgr1 = FWID(DEFAULT_BITPERSAMPLE - 1),
+ .srgr2 = GSYNC | CLKSP | FSGM | FPER(DEFAULT_BITPERSAMPLE * 2 - 1),
+ /*.pcr0 = FSXM | FSRM | CLKXM | CLKRM | CLKXP | CLKRP,*/ /* mcbsp: master */
+ .pcr0 = CLKXP | CLKRP, /* mcbsp: slave */
+};
+
+static struct omap_alsa_codec_config alsa_config = {
+ .name = "OMAP Innovator AIC23",
+ .mcbsp_regs_alsa = &mcbsp_regs,
+ .codec_configure_dev = NULL, // aic23_configure,
+ .codec_set_samplerate = NULL, // aic23_set_samplerate,
+ .codec_clock_setup = NULL, // aic23_clock_setup,
+ .codec_clock_on = NULL, // aic23_clock_on,
+ .codec_clock_off = NULL, // aic23_clock_off,
+ .get_default_samplerate = NULL, // aic23_get_default_samplerate,
+};
+
+static struct platform_device innovator_mcbsp1_device = {
+ .name = "omap_alsa_mcbsp",
+ .id = 1,
+ .dev = {
+ .platform_data = &alsa_config,
+ },
+};
+
static struct resource innovator_kp_resources[] = {
[0] = {
.start = INT_KEYBOARD,
#ifdef CONFIG_ARCH_OMAP15XX
+#include <linux/spi/spi.h>
+#include <linux/spi/ads7846.h>
+
+
/* Only FPGA needs to be mapped here. All others are done with ioremap */
static struct map_desc innovator1510_io_desc[] __initdata = {
{
.id = -1,
};
+static struct platform_device innovator1510_spi_device = {
+ .name = "spi_inn1510",
+ .id = -1,
+};
+
static struct platform_device *innovator1510_devices[] __initdata = {
&innovator_flash_device,
&innovator1510_smc91x_device,
+ &innovator_mcbsp1_device,
&innovator_kp_device,
&innovator1510_lcd_device,
+ &innovator1510_spi_device,
};
+static int innovator_get_pendown_state(void)
+{
+ return !(fpga_read(OMAP1510_FPGA_TOUCHSCREEN) & (1 << 5));
+}
+
+static const struct ads7846_platform_data innovator1510_ts_info = {
+ .model = 7846,
+ .vref_delay_usecs = 100, /* internal, no capacitor */
+ .x_plate_ohms = 419,
+ .y_plate_ohms = 486,
+ .get_pendown_state = innovator_get_pendown_state,
+};
+
+static struct spi_board_info __initdata innovator1510_boardinfo[] = { {
+ /* FPGA (bus "10") CS0 has an ads7846e */
+ .modalias = "ads7846",
+ .platform_data = &innovator1510_ts_info,
+ .irq = OMAP1510_INT_FPGA_TS,
+ .max_speed_hz = 120000 /* max sample rate at 3V */
+ * 26 /* command + data + overhead */,
+ .bus_num = 10,
+ .chip_select = 0,
+} };
+
#endif /* CONFIG_ARCH_OMAP15XX */
#ifdef CONFIG_ARCH_OMAP16XX
#ifdef CONFIG_ARCH_OMAP15XX
if (cpu_is_omap1510()) {
platform_add_devices(innovator1510_devices, ARRAY_SIZE(innovator1510_devices));
+ spi_register_board_info(innovator1510_boardinfo,
+ ARRAY_SIZE(innovator1510_boardinfo));
}
#endif
#ifdef CONFIG_ARCH_OMAP16XX
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/platform_device.h>
-#include <linux/interrupt.h>
+#include <linux/irq.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>
-#include <linux/input.h>
#include <asm/hardware.h>
#include <asm/mach-types.h>
#include <asm/arch/usb.h>
#include <asm/arch/mux.h>
#include <asm/arch/tc.h>
-#include <asm/arch/keypad.h>
#include <asm/arch/common.h>
#include <asm/arch/mcbsp.h>
#include <asm/arch/omap-alsa.h>
-static int osk_keymap[] = {
- KEY(0, 0, KEY_F1),
- KEY(0, 3, KEY_UP),
- KEY(1, 1, KEY_LEFTCTRL),
- KEY(1, 2, KEY_LEFT),
- KEY(2, 0, KEY_SPACE),
- KEY(2, 1, KEY_ESC),
- KEY(2, 2, KEY_DOWN),
- KEY(3, 2, KEY_ENTER),
- KEY(3, 3, KEY_RIGHT),
- 0
-};
-
-
static struct mtd_partition osk_partitions[] = {
/* bootloader (U-Boot, etc) in first sector */
{
static struct platform_device osk5912_mcbsp1_device = {
.name = "omap_alsa_mcbsp",
- .id = 1,
+ .id = 1,
.dev = {
.platform_data = &alsa_config,
},
};
-static struct resource osk5912_kp_resources[] = {
- [0] = {
- .start = INT_KEYBOARD,
- .end = INT_KEYBOARD,
- .flags = IORESOURCE_IRQ,
- },
-};
-
-static struct omap_kp_platform_data osk_kp_data = {
- .rows = 8,
- .cols = 8,
- .keymap = osk_keymap,
-};
-
-static struct platform_device osk5912_kp_device = {
- .name = "omap-keypad",
- .id = -1,
- .dev = {
- .platform_data = &osk_kp_data,
- },
- .num_resources = ARRAY_SIZE(osk5912_kp_resources),
- .resource = osk5912_kp_resources,
-};
-
-static struct platform_device osk5912_lcd_device = {
- .name = "lcd_osk",
- .id = -1,
-};
-
static struct platform_device *osk5912_devices[] __initdata = {
&osk5912_flash_device,
&osk5912_smc91x_device,
&osk5912_cf_device,
&osk5912_mcbsp1_device,
- &osk5912_kp_device,
- &osk5912_lcd_device,
};
static void __init osk_init_smc91x(void)
.enabled_uarts = (1 << 0),
};
+#ifdef CONFIG_OMAP_OSK_MISTRAL
static struct omap_lcd_config osk_lcd_config __initdata = {
.ctrl_name = "internal",
};
+#endif
static struct omap_board_config_kernel osk_config[] = {
{ OMAP_TAG_USB, &osk_usb_config },
{ OMAP_TAG_UART, &osk_uart_config },
+#ifdef CONFIG_OMAP_OSK_MISTRAL
{ OMAP_TAG_LCD, &osk_lcd_config },
+#endif
};
#ifdef CONFIG_OMAP_OSK_MISTRAL
+#include <linux/input.h>
+#include <linux/spi/spi.h>
+#include <linux/spi/ads7846.h>
+
+#include <asm/arch/keypad.h>
+
+static const int osk_keymap[] = {
+ /* KEY(col, row, code) */
+ KEY(0, 0, KEY_F1), /* SW4 */
+ KEY(0, 3, KEY_UP), /* (sw2/up) */
+ KEY(1, 1, KEY_LEFTCTRL), /* SW5 */
+ KEY(1, 2, KEY_LEFT), /* (sw2/left) */
+ KEY(2, 0, KEY_SPACE), /* SW3 */
+ KEY(2, 1, KEY_ESC), /* SW6 */
+ KEY(2, 2, KEY_DOWN), /* (sw2/down) */
+ KEY(3, 2, KEY_ENTER), /* (sw2/select) */
+ KEY(3, 3, KEY_RIGHT), /* (sw2/right) */
+ 0
+};
+
+static struct omap_kp_platform_data osk_kp_data = {
+ .rows = 8,
+ .cols = 8,
+ .keymap = (int *) osk_keymap,
+};
+
+static struct resource osk5912_kp_resources[] = {
+ [0] = {
+ .start = INT_KEYBOARD,
+ .end = INT_KEYBOARD,
+ .flags = IORESOURCE_IRQ,
+ },
+};
+
+static struct platform_device osk5912_kp_device = {
+ .name = "omap-keypad",
+ .id = -1,
+ .dev = {
+ .platform_data = &osk_kp_data,
+ },
+ .num_resources = ARRAY_SIZE(osk5912_kp_resources),
+ .resource = osk5912_kp_resources,
+};
+
+static struct platform_device osk5912_lcd_device = {
+ .name = "lcd_osk",
+ .id = -1,
+};
+
+static struct platform_device *mistral_devices[] __initdata = {
+ &osk5912_kp_device,
+ &osk5912_lcd_device,
+};
+
+static int mistral_get_pendown_state(void)
+{
+ return !omap_get_gpio_datain(4);
+}
+
+static const struct ads7846_platform_data mistral_ts_info = {
+ .model = 7846,
+ .vref_delay_usecs = 100, /* internal, no capacitor */
+ .x_plate_ohms = 419,
+ .y_plate_ohms = 486,
+ .get_pendown_state = mistral_get_pendown_state,
+};
+
+static struct spi_board_info __initdata mistral_boardinfo[] = { {
+ /* MicroWire (bus 2) CS0 has an ads7846e */
+ .modalias = "ads7846",
+ .platform_data = &mistral_ts_info,
+ .irq = OMAP_GPIO_IRQ(4),
+ .max_speed_hz = 120000 /* max sample rate at 3V */
+ * 26 /* command + data + overhead */,
+ .bus_num = 2,
+ .chip_select = 0,
+} };
+
#ifdef CONFIG_PM
static irqreturn_t
osk_mistral_wake_interrupt(int irq, void *ignored, struct pt_regs *regs)
static void __init osk_mistral_init(void)
{
- /* FIXME here's where to feed in framebuffer, touchpad, and
- * keyboard setup ... not in the drivers for those devices!
- *
- * NOTE: we could actually tell if there's a Mistral board
+ /* NOTE: we could actually tell if there's a Mistral board
* attached, e.g. by trying to read something from the ads7846.
- * But this is too early for that...
+ * But this arch_init() code is too early for that, since we
+ * can't talk to the ads or even the i2c eeprom.
*/
+ // omap_cfg_reg(P19_1610_GPIO6); // BUSY
+ omap_cfg_reg(P20_1610_GPIO4); // PENIRQ
+ set_irq_type(OMAP_GPIO_IRQ(4), IRQT_FALLING);
+ spi_register_board_info(mistral_boardinfo,
+ ARRAY_SIZE(mistral_boardinfo));
+
/* the sideways button (SW1) is for use as a "wakeup" button */
omap_cfg_reg(N15_1610_MPUIO2);
if (omap_request_gpio(OMAP_MPUIO(2)) == 0) {
*/
ret = request_irq(OMAP_GPIO_IRQ(OMAP_MPUIO(2)),
&osk_mistral_wake_interrupt,
- SA_SHIRQ, "mistral_wakeup",
+ IRQF_SHARED, "mistral_wakeup",
&osk_mistral_wake_interrupt);
if (ret != 0) {
omap_free_gpio(OMAP_MPUIO(2));
#endif
} else
printk(KERN_ERR "OSK+Mistral: wakeup button is awol\n");
+
+ platform_add_devices(mistral_devices, ARRAY_SIZE(mistral_devices));
}
#else
static void __init osk_mistral_init(void) { }
/*
* linux/arch/arm/mach-omap1/clock.c
*
#include <asm/io.h>
+#include <asm/arch/cpu.h>
#include <asm/arch/usb.h>
#include <asm/arch/clock.h>
#include <asm/arch/sram.h>
/*
* In most cases we should not need to reprogram DPLL.
* Reprogramming the DPLL is tricky, it must be done from SRAM.
+ * (on 730, bit 13 must always be 1)
*/
- omap_sram_reprogram_clock(ptr->dpllctl_val, ptr->ckctl_val);
+ if (cpu_is_omap730())
+ omap_sram_reprogram_clock(ptr->dpllctl_val, ptr->ckctl_val | 0x2000);
+ else
+ omap_sram_reprogram_clock(ptr->dpllctl_val, ptr->ckctl_val);
ck_dpll1.rate = ptr->pll_rate;
propagate_rate(&ck_dpll1);
printk(KERN_ERR "System frequencies not set. Check your config.\n");
/* Guess sane values (60MHz) */
omap_writew(0x2290, DPLL_CTL);
- omap_writew(0x1005, ARM_CKCTL);
+ omap_writew(cpu_is_omap730() ? 0x3005 : 0x1005, ARM_CKCTL);
ck_dpll1.rate = 60000000;
propagate_rate(&ck_dpll1);
}
ck_dpll1.rate / 1000000, (ck_dpll1.rate / 100000) % 10,
arm_ck.rate / 1000000, (arm_ck.rate / 100000) % 10);
-#ifdef CONFIG_MACH_OMAP_PERSEUS2
+#if defined(CONFIG_MACH_OMAP_PERSEUS2) || defined(CONFIG_MACH_OMAP_FSAMPLE)
/* Select slicer output as OMAP input clock */
omap_writew(omap_readw(OMAP730_PCC_UPLD_CTRL) & ~0x1, OMAP730_PCC_UPLD_CTRL);
#endif
/* Turn off DSP and ARM_TIMXO. Make sure ARM_INTHCK is not divided */
- omap_writew(omap_readw(ARM_CKCTL) & 0x0fff, ARM_CKCTL);
+ /* (on 730, bit 13 must not be cleared) */
+ if (cpu_is_omap730())
+ omap_writew(omap_readw(ARM_CKCTL) & 0x2fff, ARM_CKCTL);
+ else
+ omap_writew(omap_readw(ARM_CKCTL) & 0x0fff, ARM_CKCTL);
/* Put DSP/MPUI into reset until needed */
omap_writew(0, ARM_RSTCT1);
* mask_ack routine for all of the FPGA interrupts has been changed from
* fpga_mask_ack_irq() to fpga_ack_irq() so that the specific FPGA interrupt
* being serviced is left unmasked. We can do this because the FPGA cascade
- * interrupt is installed with the SA_INTERRUPT flag, which leaves all
+ * interrupt is installed with the IRQF_DISABLED flag, which leaves all
* interrupts masked at the CPU while an FPGA interrupt handler executes.
*
* Limited testing indicates that this workaround appears to be effective
/*
* linux/arch/arm/mach-omap1/pm.c
*
#include <asm/mach/irq.h>
#include <asm/mach-types.h>
+#include <asm/arch/cpu.h>
#include <asm/arch/irqs.h>
#include <asm/arch/clock.h>
#include <asm/arch/sram.h>
/* stop DSP */
omap_writew(omap_readw(ARM_RSTCT1) & ~(1 << DSP_EN), ARM_RSTCT1);
- /* shut down dsp_ck */
- omap_writew(omap_readw(ARM_CKCTL) & ~(1 << EN_DSPCK), ARM_CKCTL);
+ /* shut down dsp_ck */
+ if (!cpu_is_omap730())
+ omap_writew(omap_readw(ARM_CKCTL) & ~(1 << EN_DSPCK), ARM_CKCTL);
/* temporarily enabling api_ck to access DSP registers */
omap_writew(omap_readw(ARM_IDLECT2) | 1 << EN_APICK, ARM_IDLECT2);
static struct irqaction omap_wakeup_irq = {
.name = "peripheral wakeup",
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.handler = omap_wakeup_interrupt
};
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
+#include <linux/irq.h>
#include <linux/delay.h>
#include <linux/serial.h>
#include <linux/tty.h>
}
omap_set_gpio_direction(gpio_nr, 1);
ret = request_irq(OMAP_GPIO_IRQ(gpio_nr), &omap_serial_wake_interrupt,
- SA_TRIGGER_RISING, "serial wakeup", NULL);
+ IRQF_TRIGGER_RISING, "serial wakeup", NULL);
if (ret) {
omap_free_gpio(gpio_nr);
printk(KERN_ERR "No interrupt for UART wake GPIO: %i\n",
* will break. On P2, the timer count rate is 6.5 MHz after programming PTV
* with 0. This divides the 13MHz input by 2, and is undocumented.
*/
-#ifdef CONFIG_MACH_OMAP_PERSEUS2
+#if defined(CONFIG_MACH_OMAP_PERSEUS2) || defined(CONFIG_MACH_OMAP_FSAMPLE)
/* REVISIT: This ifdef construct should be replaced by a query to clock
* framework to see if timer base frequency is 12.0, 13.0 or 19.2 MHz.
*/
static struct irqaction omap_mpu_timer_irq = {
.name = "mpu timer",
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
.handler = omap_mpu_timer_interrupt,
};
static struct irqaction omap_mpu_timer1_irq = {
.name = "mpu timer1 overflow",
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.handler = omap_mpu_timer1_interrupt,
};
config ARCH_OMAP2420
bool "OMAP2420 support"
depends on ARCH_OMAP24XX
+ select OMAP_DM_TIMER
comment "OMAP Board Type"
depends on ARCH_OMAP2
#
# Common support
-obj-y := irq.o id.o io.o sram-fn.o memory.o prcm.o clock.o mux.o devices.o serial.o
+obj-y := irq.o id.o io.o sram-fn.o memory.o prcm.o clock.o mux.o devices.o \
+ serial.o gpmc.o
obj-$(CONFIG_OMAP_MPU_TIMER) += timer-gp.o
# Power Management
-obj-$(CONFIG_PM) += pm.o sleep.o
+obj-$(CONFIG_PM) += pm.o pm-domain.o sleep.o
# Specific board support
obj-$(CONFIG_MACH_OMAP_GENERIC) += board-generic.o
set_irq_type(OMAP_GPIO_IRQ(SW_ENTER_GPIO16), IRQT_RISING);
if (request_irq(OMAP_GPIO_IRQ(SW_ENTER_GPIO16), &apollon_sw_interrupt,
- SA_SHIRQ, "enter sw",
+ IRQF_SHARED, "enter sw",
&apollon_sw_interrupt))
return;
set_irq_type(OMAP_GPIO_IRQ(SW_UP_GPIO17), IRQT_RISING);
if (request_irq(OMAP_GPIO_IRQ(SW_UP_GPIO17), &apollon_sw_interrupt,
- SA_SHIRQ, "up sw",
+ IRQF_SHARED, "up sw",
&apollon_sw_interrupt))
return;
set_irq_type(OMAP_GPIO_IRQ(SW_DOWN_GPIO58), IRQT_RISING);
if (request_irq(OMAP_GPIO_IRQ(SW_DOWN_GPIO58), &apollon_sw_interrupt,
- SA_SHIRQ, "down sw",
+ IRQF_SHARED, "down sw",
&apollon_sw_interrupt))
return;
}
/* Isolate control register */
div_sel = (SRC_RATE_SEL_MASK & clk->flags);
- div_off = clk->src_offset;
+ div_off = clk->rate_offset;
validrate = omap2_clksel_round_rate(clk, rate, &new_div);
- if(validrate != rate)
+ if (validrate != rate)
return(ret);
field_val = omap2_get_clksel(&div_sel, &field_mask, clk);
if (div_sel == 0)
return ret;
- if(clk->flags & CM_SYSCLKOUT_SEL1){
- switch(new_div){
- case 16: field_val = 4; break;
- case 8: field_val = 3; break;
- case 4: field_val = 2; break;
- case 2: field_val = 1; break;
- case 1: field_val = 0; break;
+ if (clk->flags & CM_SYSCLKOUT_SEL1) {
+ switch (new_div) {
+ case 16:
+ field_val = 4;
+ break;
+ case 8:
+ field_val = 3;
+ break;
+ case 4:
+ field_val = 2;
+ break;
+ case 2:
+ field_val = 1;
+ break;
+ case 1:
+ field_val = 0;
+ break;
}
- }
- else
+ } else
field_val = new_div;
reg = (void __iomem *)div_sel;
val = 0x2;
break;
case CM_WKUP_SEL1:
- src_reg_addr = (u32)&CM_CLKSEL2_CORE;
+ src_reg_addr = (u32)&CM_CLKSEL_WKUP;
mask = 0x3;
if (src_clk == &func_32k_ck)
val = 0x0;
val = 0;
if (src_clk == &sys_ck)
val = 1;
- if (src_clk == &func_54m_ck)
- val = 2;
if (src_clk == &func_96m_ck)
+ val = 2;
+ if (src_clk == &func_54m_ck)
val = 3;
break;
}
.parent = &l4_ck,
.flags = CLOCK_IN_OMAP242X | CLOCK_IN_OMAP243X,
.enable_reg = (void __iomem *)&CM_ICLKEN1_CORE, /* Bit4 */
- .enable_bit = 0,
+ .enable_bit = 4,
.recalc = &omap2_followparent_recalc,
};
static inline void omap_init_sti(void) {}
#endif
+#if defined(CONFIG_SPI_OMAP24XX)
+
+#include <asm/arch/mcspi.h>
+
+#define OMAP2_MCSPI1_BASE 0x48098000
+#define OMAP2_MCSPI2_BASE 0x4809a000
+
+/* FIXME: use resources instead */
+
+static struct omap2_mcspi_platform_config omap2_mcspi1_config = {
+ .base = io_p2v(OMAP2_MCSPI1_BASE),
+ .num_cs = 4,
+};
+
+struct platform_device omap2_mcspi1 = {
+ .name = "omap2_mcspi",
+ .id = 1,
+ .dev = {
+ .platform_data = &omap2_mcspi1_config,
+ },
+};
+
+static struct omap2_mcspi_platform_config omap2_mcspi2_config = {
+ .base = io_p2v(OMAP2_MCSPI2_BASE),
+ .num_cs = 2,
+};
+
+struct platform_device omap2_mcspi2 = {
+ .name = "omap2_mcspi",
+ .id = 2,
+ .dev = {
+ .platform_data = &omap2_mcspi2_config,
+ },
+};
+
+static void omap_init_mcspi(void)
+{
+ platform_device_register(&omap2_mcspi1);
+ platform_device_register(&omap2_mcspi2);
+}
+
+#else
+static inline void omap_init_mcspi(void) {}
+#endif
+
/*-------------------------------------------------------------------------*/
static int __init omap2_init_devices(void)
* in alphabetical order so they're easier to sort through.
*/
omap_init_i2c();
+ omap_init_mcspi();
omap_init_sti();
return 0;
--- /dev/null
+/*
+ * GPMC support functions
+ *
+ * Copyright (C) 2005-2006 Nokia Corporation
+ *
+ * Author: Juha Yrjola
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/err.h>
+#include <linux/clk.h>
+
+#include <asm/io.h>
+#include <asm/arch/gpmc.h>
+
+#undef DEBUG
+
+#define GPMC_BASE 0x6800a000
+#define GPMC_REVISION 0x00
+#define GPMC_SYSCONFIG 0x10
+#define GPMC_SYSSTATUS 0x14
+#define GPMC_IRQSTATUS 0x18
+#define GPMC_IRQENABLE 0x1c
+#define GPMC_TIMEOUT_CONTROL 0x40
+#define GPMC_ERR_ADDRESS 0x44
+#define GPMC_ERR_TYPE 0x48
+#define GPMC_CONFIG 0x50
+#define GPMC_STATUS 0x54
+#define GPMC_PREFETCH_CONFIG1 0x1e0
+#define GPMC_PREFETCH_CONFIG2 0x1e4
+#define GPMC_PREFETCH_CONTROL 0x1e8
+#define GPMC_PREFETCH_STATUS 0x1f0
+#define GPMC_ECC_CONFIG 0x1f4
+#define GPMC_ECC_CONTROL 0x1f8
+#define GPMC_ECC_SIZE_CONFIG 0x1fc
+
+#define GPMC_CS0 0x60
+#define GPMC_CS_SIZE 0x30
+
+static void __iomem *gpmc_base =
+ (void __iomem *) IO_ADDRESS(GPMC_BASE);
+static void __iomem *gpmc_cs_base =
+ (void __iomem *) IO_ADDRESS(GPMC_BASE) + GPMC_CS0;
+
+static struct clk *gpmc_l3_clk;
+
+static void gpmc_write_reg(int idx, u32 val)
+{
+ __raw_writel(val, gpmc_base + idx);
+}
+
+static u32 gpmc_read_reg(int idx)
+{
+ return __raw_readl(gpmc_base + idx);
+}
+
+void gpmc_cs_write_reg(int cs, int idx, u32 val)
+{
+ void __iomem *reg_addr;
+
+ reg_addr = gpmc_cs_base + (cs * GPMC_CS_SIZE) + idx;
+ __raw_writel(val, reg_addr);
+}
+
+u32 gpmc_cs_read_reg(int cs, int idx)
+{
+ return __raw_readl(gpmc_cs_base + (cs * GPMC_CS_SIZE) + idx);
+}
+
+/* TODO: Add support for gpmc_fck to clock framework and use it */
+static unsigned long gpmc_get_fclk_period(void)
+{
+ /* In picoseconds */
+ return 1000000000 / ((clk_get_rate(gpmc_l3_clk)) / 1000);
+}
+
+unsigned int gpmc_ns_to_ticks(unsigned int time_ns)
+{
+ unsigned long tick_ps;
+
+ /* Calculate in picosecs to yield more exact results */
+ tick_ps = gpmc_get_fclk_period();
+
+ return (time_ns * 1000 + tick_ps - 1) / tick_ps;
+}
+
+#ifdef DEBUG
+static int set_gpmc_timing_reg(int cs, int reg, int st_bit, int end_bit,
+ int time, const char *name)
+#else
+static int set_gpmc_timing_reg(int cs, int reg, int st_bit, int end_bit,
+ int time)
+#endif
+{
+ u32 l;
+ int ticks, mask, nr_bits;
+
+ if (time == 0)
+ ticks = 0;
+ else
+ ticks = gpmc_ns_to_ticks(time);
+ nr_bits = end_bit - st_bit + 1;
+ if (ticks >= 1 << nr_bits)
+ return -1;
+
+ mask = (1 << nr_bits) - 1;
+ l = gpmc_cs_read_reg(cs, reg);
+#ifdef DEBUG
+ printk(KERN_INFO "GPMC CS%d: %-10s: %d ticks, %3lu ns (was %i ticks)\n",
+ cs, name, ticks, gpmc_get_fclk_period() * ticks / 1000,
+ (l >> st_bit) & mask);
+#endif
+ l &= ~(mask << st_bit);
+ l |= ticks << st_bit;
+ gpmc_cs_write_reg(cs, reg, l);
+
+ return 0;
+}
+
+#ifdef DEBUG
+#define GPMC_SET_ONE(reg, st, end, field) \
+ if (set_gpmc_timing_reg(cs, (reg), (st), (end), \
+ t->field, #field) < 0) \
+ return -1
+#else
+#define GPMC_SET_ONE(reg, st, end, field) \
+ if (set_gpmc_timing_reg(cs, (reg), (st), (end), t->field) < 0) \
+ return -1
+#endif
+
+int gpmc_cs_calc_divider(int cs, unsigned int sync_clk)
+{
+ int div;
+ u32 l;
+
+ l = sync_clk * 1000 + (gpmc_get_fclk_period() - 1);
+ div = l / gpmc_get_fclk_period();
+ if (div > 4)
+ return -1;
+	if (div <= 0)
+ div = 1;
+
+ return div;
+}
+
+int gpmc_cs_set_timings(int cs, const struct gpmc_timings *t)
+{
+ int div;
+ u32 l;
+
+ div = gpmc_cs_calc_divider(cs, t->sync_clk);
+ if (div < 0)
+ return -1;
+
+ GPMC_SET_ONE(GPMC_CS_CONFIG2, 0, 3, cs_on);
+ GPMC_SET_ONE(GPMC_CS_CONFIG2, 8, 12, cs_rd_off);
+ GPMC_SET_ONE(GPMC_CS_CONFIG2, 16, 20, cs_wr_off);
+
+ GPMC_SET_ONE(GPMC_CS_CONFIG3, 0, 3, adv_on);
+ GPMC_SET_ONE(GPMC_CS_CONFIG3, 8, 12, adv_rd_off);
+ GPMC_SET_ONE(GPMC_CS_CONFIG3, 16, 20, adv_wr_off);
+
+ GPMC_SET_ONE(GPMC_CS_CONFIG4, 0, 3, oe_on);
+ GPMC_SET_ONE(GPMC_CS_CONFIG4, 8, 12, oe_off);
+ GPMC_SET_ONE(GPMC_CS_CONFIG4, 16, 19, we_on);
+ GPMC_SET_ONE(GPMC_CS_CONFIG4, 24, 28, we_off);
+
+ GPMC_SET_ONE(GPMC_CS_CONFIG5, 0, 4, rd_cycle);
+ GPMC_SET_ONE(GPMC_CS_CONFIG5, 8, 12, wr_cycle);
+ GPMC_SET_ONE(GPMC_CS_CONFIG5, 16, 20, access);
+
+ GPMC_SET_ONE(GPMC_CS_CONFIG5, 24, 27, page_burst_access);
+
+#ifdef DEBUG
+ printk(KERN_INFO "GPMC CS%d CLK period is %lu (div %d)\n",
+ cs, gpmc_get_fclk_period(), div);
+#endif
+
+ l = gpmc_cs_read_reg(cs, GPMC_CS_CONFIG1);
+ l &= ~0x03;
+	l |= (div - 1);
+	/* write the GPMC_CLK divider back into CONFIG1 */
+	gpmc_cs_write_reg(cs, GPMC_CS_CONFIG1, l);
+
+	return 0;
+}
+
+unsigned long gpmc_cs_get_base_addr(int cs)
+{
+ return (gpmc_cs_read_reg(cs, GPMC_CS_CONFIG7) & 0x1f) << 24;
+}
+
+void __init gpmc_init(void)
+{
+ u32 l;
+
+ gpmc_l3_clk = clk_get(NULL, "core_l3_ck");
+ BUG_ON(IS_ERR(gpmc_l3_clk));
+
+ l = gpmc_read_reg(GPMC_REVISION);
+ printk(KERN_INFO "GPMC revision %d.%d\n", (l >> 4) & 0x0f, l & 0x0f);
+ /* Set smart idle mode and automatic L3 clock gating */
+ l = gpmc_read_reg(GPMC_SYSCONFIG);
+ l &= 0x03 << 3;
+ l |= (0x02 << 3) | (1 << 0);
+ gpmc_write_reg(GPMC_SYSCONFIG, l);
+}
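The chip-select helpers above are meant to be called from board code. The following is a hypothetical usage sketch (the chip-select number and all timing values are made-up placeholders; the field names are the ones consumed by the GPMC_SET_ONE() calls above): fill a struct gpmc_timings with nanosecond values, let gpmc_cs_set_timings() round them to GPMC fclk ticks, then read back the chip-select base address.

    #include <linux/kernel.h>
    #include <linux/init.h>
    #include <asm/arch/gpmc.h>

    static void __init example_board_gpmc_setup(void)
    {
            /* all numbers are illustrative placeholders, in nanoseconds */
            struct gpmc_timings t = {
                    .sync_clk   = 0,        /* asynchronous access, no GPMC_CLK */
                    .cs_on      = 0,
                    .cs_rd_off  = 120,
                    .cs_wr_off  = 120,
                    .adv_on     = 10,
                    .adv_rd_off = 30,
                    .adv_wr_off = 30,
                    .oe_on      = 40,
                    .oe_off     = 120,
                    .we_on      = 40,
                    .we_off     = 120,
                    .rd_cycle   = 150,
                    .wr_cycle   = 150,
                    .access     = 110,
            };

            if (gpmc_cs_set_timings(1, &t) < 0)     /* chip-select 1, hypothetical */
                    printk(KERN_ERR "example: GPMC CS1 timings rejected\n");
            else
                    printk(KERN_INFO "example: GPMC CS1 base 0x%08lx\n",
                           gpmc_cs_get_base_addr(1));
    }

A real board file would of course derive these numbers from the attached device's datasheet and the actual L3 clock rate rather than hard-code them.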
extern void omap_sram_init(void);
extern int omap2_clk_init(void);
extern void omap2_check_revision(void);
+extern void gpmc_init(void);
/*
* The machine specific code may provide the extra mapping besides the
{
omap2_mux_init();
omap2_clk_init();
+ gpmc_init();
}
/* 24xx clocks */
MUX_CFG_24XX("W14_24XX_SYS_CLKOUT", 0x137, 0, 1, 1, 1)
+/* 24xx GPMC wait pin monitoring */
+MUX_CFG_24XX("L3_GPMC_WAIT0", 0x09a, 0, 1, 1, 1)
+MUX_CFG_24XX("N7_GPMC_WAIT1", 0x09b, 0, 1, 1, 1)
+MUX_CFG_24XX("M1_GPMC_WAIT2", 0x09c, 0, 1, 1, 1)
+MUX_CFG_24XX("P1_GPMC_WAIT3", 0x09d, 0, 1, 1, 1)
+
/* 24xx McBSP */
MUX_CFG_24XX("Y15_24XX_MCBSP2_CLKX", 0x124, 1, 1, 0, 1)
MUX_CFG_24XX("R14_24XX_MCBSP2_FSX", 0x125, 1, 1, 0, 1)
MUX_CFG_24XX("V15_24XX_MCBSP2_DX", 0x127, 1, 1, 0, 1)
/* 24xx GPIO */
-MUX_CFG_24XX("M21_242X_GPIO11", 0x0c9, 3, 1, 1, 1)
+MUX_CFG_24XX("M21_242X_GPIO11", 0x0c9, 3, 1, 1, 1)
MUX_CFG_24XX("AA10_242X_GPIO13", 0x0e5, 3, 0, 0, 1)
-MUX_CFG_24XX("AA6_242X_GPIO14", 0x0e6, 3, 0, 0, 1)
-MUX_CFG_24XX("AA4_242X_GPIO15", 0x0e7, 3, 0, 0, 1)
-MUX_CFG_24XX("Y11_242X_GPIO16", 0x0e8, 3, 0, 0, 1)
+MUX_CFG_24XX("AA6_242X_GPIO14", 0x0e6, 3, 0, 0, 1)
+MUX_CFG_24XX("AA4_242X_GPIO15", 0x0e7, 3, 0, 0, 1)
+MUX_CFG_24XX("Y11_242X_GPIO16", 0x0e8, 3, 0, 0, 1)
MUX_CFG_24XX("AA12_242X_GPIO17", 0x0e9, 3, 0, 0, 1)
-MUX_CFG_24XX("AA8_242X_GPIO58", 0x0ea, 3, 0, 0, 1)
+MUX_CFG_24XX("AA8_242X_GPIO58", 0x0ea, 3, 0, 0, 1)
MUX_CFG_24XX("Y20_24XX_GPIO60", 0x12c, 3, 0, 0, 1)
-MUX_CFG_24XX("W4__24XX_GPIO74", 0x0f2, 3, 0, 0, 1)
+MUX_CFG_24XX("W4__24XX_GPIO74", 0x0f2, 3, 0, 0, 1)
MUX_CFG_24XX("M15_24XX_GPIO92", 0x10a, 3, 0, 0, 1)
MUX_CFG_24XX("V14_24XX_GPIO117", 0x128, 3, 1, 0, 1)
+/* 242x DBG GPIO */
+MUX_CFG_24XX("V4_242X_GPIO49", 0xd3, 3, 0, 0, 1)
+MUX_CFG_24XX("W2_242X_GPIO50", 0xd4, 3, 0, 0, 1)
+MUX_CFG_24XX("U4_242X_GPIO51", 0xd5, 3, 0, 0, 1)
+MUX_CFG_24XX("V3_242X_GPIO52", 0xd6, 3, 0, 0, 1)
+MUX_CFG_24XX("V2_242X_GPIO53", 0xd7, 3, 0, 0, 1)
+MUX_CFG_24XX("V6_242X_GPIO53", 0xcf, 3, 0, 0, 1)
+MUX_CFG_24XX("T4_242X_GPIO54", 0xd8, 3, 0, 0, 1)
+MUX_CFG_24XX("Y4_242X_GPIO54", 0xd0, 3, 0, 0, 1)
+MUX_CFG_24XX("T3_242X_GPIO55", 0xd9, 3, 0, 0, 1)
+MUX_CFG_24XX("U2_242X_GPIO56", 0xda, 3, 0, 0, 1)
+
+/* 24xx external DMA requests */
+MUX_CFG_24XX("AA10_242X_DMAREQ0", 0x0e5, 2, 0, 0, 1)
+MUX_CFG_24XX("AA6_242X_DMAREQ1", 0x0e6, 2, 0, 0, 1)
+MUX_CFG_24XX("E4_242X_DMAREQ2", 0x074, 2, 0, 0, 1)
+MUX_CFG_24XX("G4_242X_DMAREQ3", 0x073, 2, 0, 0, 1)
+MUX_CFG_24XX("D3_242X_DMAREQ4", 0x072, 2, 0, 0, 1)
+MUX_CFG_24XX("E3_242X_DMAREQ5", 0x071, 2, 0, 0, 1)
+
/* TSC IRQ */
MUX_CFG_24XX("P20_24XX_TSC_IRQ", 0x108, 0, 0, 0, 1)
--- /dev/null
+/*
+ * linux/arch/arm/mach-omap2/pm-domain.c
+ *
+ * Power domain functions for OMAP2
+ *
+ * Copyright (C) 2006 Nokia Corporation
+ * Tony Lindgren <tony@atomide.com>
+ *
+ * Some code based on earlier OMAP2 sample PM code
+ * Copyright (C) 2005 Texas Instruments, Inc.
+ * Richard Woodruff <r-woodruff2@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/clk.h>
+
+#include <asm/io.h>
+
+#include "prcm-regs.h"
+
+/* Power domain offsets */
+#define PM_MPU_OFFSET 0x100
+#define PM_CORE_OFFSET 0x200
+#define PM_GFX_OFFSET 0x300
+#define PM_WKUP_OFFSET 0x400 /* Autoidle only */
+#define PM_PLL_OFFSET 0x500 /* Autoidle only */
+#define PM_DSP_OFFSET 0x800
+#define PM_MDM_OFFSET 0xc00
+
+/* Power domain wake-up dependency control register */
+#define PM_WKDEP_OFFSET 0xc8
+#define EN_MDM (1 << 5)
+#define EN_WKUP (1 << 4)
+#define EN_GFX (1 << 3)
+#define EN_DSP (1 << 2)
+#define EN_MPU (1 << 1)
+#define EN_CORE (1 << 0)
+
+/* Core power domain state transition control register */
+#define PM_PWSTCTRL_OFFSET 0xe0
+#define FORCESTATE (1 << 18) /* Only for DSP & GFX */
+#define MEM4RETSTATE (1 << 6)
+#define MEM3RETSTATE (1 << 5)
+#define MEM2RETSTATE (1 << 4)
+#define MEM1RETSTATE (1 << 3)
+#define LOGICRETSTATE (1 << 2) /* Logic is retained */
+#define POWERSTATE_OFF 0x3
+#define POWERSTATE_RETENTION 0x1
+#define POWERSTATE_ON 0x0
+
+/* Power domain state register */
+#define PM_PWSTST_OFFSET 0xe4
+
+/* Hardware supervised state transition control register */
+#define CM_CLKSTCTRL_OFFSET 0x48
+#define AUTOSTAT_MPU (1 << 0) /* MPU */
+#define AUTOSTAT_DSS (1 << 2) /* Core */
+#define AUTOSTAT_L4 (1 << 1) /* Core */
+#define AUTOSTAT_L3 (1 << 0) /* Core */
+#define AUTOSTAT_GFX (1 << 0) /* GFX */
+#define AUTOSTAT_IVA (1 << 8) /* 2420 IVA in DSP domain */
+#define AUTOSTAT_DSP (1 << 0) /* DSP */
+#define AUTOSTAT_MDM (1 << 0) /* MDM */
+
+/* Automatic control of interface clock idling */
+#define CM_AUTOIDLE1_OFFSET 0x30
+#define CM_AUTOIDLE2_OFFSET 0x34 /* Core only */
+#define CM_AUTOIDLE3_OFFSET 0x38 /* Core only */
+#define CM_AUTOIDLE4_OFFSET 0x3c /* Core only */
+#define AUTO_54M(x) (((x) & 0x3) << 6)
+#define AUTO_96M(x) (((x) & 0x3) << 2)
+#define AUTO_DPLL(x) (((x) & 0x3) << 0)
+#define AUTO_STOPPED 0x3
+#define AUTO_BYPASS_FAST 0x2 /* DPLL only */
+#define AUTO_BYPASS_LOW_POWER 0x1 /* DPLL only */
+#define AUTO_DISABLED 0x0
+
+/* Voltage control PRCM_VOLTCTRL bits */
+#define AUTO_EXTVOLT (1 << 15)
+#define FORCE_EXTVOLT (1 << 14)
+#define SETOFF_LEVEL(x) (((x) & 0x3) << 12)
+#define MEMRETCTRL (1 << 8)
+#define SETRET_LEVEL(x) (((x) & 0x3) << 6)
+#define VOLT_LEVEL(x) (((x) & 0x3) << 0)
+
+#define OMAP24XX_PRCM_VBASE IO_ADDRESS(OMAP24XX_PRCM_BASE)
+#define prcm_readl(r) __raw_readl(OMAP24XX_PRCM_VBASE + (r))
+#define prcm_writel(v, r) __raw_writel((v), OMAP24XX_PRCM_VBASE + (r))
+
+static u32 pmdomain_get_wakeup_dependencies(int domain_offset)
+{
+ return prcm_readl(domain_offset + PM_WKDEP_OFFSET);
+}
+
+static void pmdomain_set_wakeup_dependencies(u32 state, int domain_offset)
+{
+ prcm_writel(state, domain_offset + PM_WKDEP_OFFSET);
+}
+
+static u32 pmdomain_get_powerstate(int domain_offset)
+{
+ return prcm_readl(domain_offset + PM_PWSTCTRL_OFFSET);
+}
+
+static void pmdomain_set_powerstate(u32 state, int domain_offset)
+{
+ prcm_writel(state, domain_offset + PM_PWSTCTRL_OFFSET);
+}
+
+static u32 pmdomain_get_clock_autocontrol(int domain_offset)
+{
+ return prcm_readl(domain_offset + CM_CLKSTCTRL_OFFSET);
+}
+
+static void pmdomain_set_clock_autocontrol(u32 state, int domain_offset)
+{
+ prcm_writel(state, domain_offset + CM_CLKSTCTRL_OFFSET);
+}
+
+static u32 pmdomain_get_clock_autoidle1(int domain_offset)
+{
+ return prcm_readl(domain_offset + CM_AUTOIDLE1_OFFSET);
+}
+
+/* Core domain only */
+static u32 pmdomain_get_clock_autoidle2(int domain_offset)
+{
+ return prcm_readl(domain_offset + CM_AUTOIDLE2_OFFSET);
+}
+
+/* Core domain only */
+static u32 pmdomain_get_clock_autoidle3(int domain_offset)
+{
+ return prcm_readl(domain_offset + CM_AUTOIDLE3_OFFSET);
+}
+
+/* Core domain only */
+static u32 pmdomain_get_clock_autoidle4(int domain_offset)
+{
+ return prcm_readl(domain_offset + CM_AUTOIDLE4_OFFSET);
+}
+
+static void pmdomain_set_clock_autoidle1(u32 state, int domain_offset)
+{
+ prcm_writel(state, CM_AUTOIDLE1_OFFSET + domain_offset);
+}
+
+/* Core domain only */
+static void pmdomain_set_clock_autoidle2(u32 state, int domain_offset)
+{
+ prcm_writel(state, CM_AUTOIDLE2_OFFSET + domain_offset);
+}
+
+/* Core domain only */
+static void pmdomain_set_clock_autoidle3(u32 state, int domain_offset)
+{
+ prcm_writel(state, CM_AUTOIDLE3_OFFSET + domain_offset);
+}
+
+/* Core domain only */
+static void pmdomain_set_clock_autoidle4(u32 state, int domain_offset)
+{
+ prcm_writel(state, CM_AUTOIDLE4_OFFSET + domain_offset);
+}
+
+/*
+ * Configures power management domains to idle clocks automatically.
+ */
+void pmdomain_set_autoidle(void)
+{
+ u32 val;
+
+ /* Set PLL auto stop for 54M, 96M & DPLL */
+ pmdomain_set_clock_autoidle1(AUTO_54M(AUTO_STOPPED) |
+ AUTO_96M(AUTO_STOPPED) |
+ AUTO_DPLL(AUTO_STOPPED), PM_PLL_OFFSET);
+
+ /* External clock input control
+ * REVISIT: Should this be in clock framework?
+ */
+ PRCM_CLKSRC_CTRL |= (0x3 << 3);
+
+ /* Configure number of 32KHz clock cycles for sys_clk */
+ PRCM_CLKSSETUP = 0x00ff;
+
+ /* Configure automatic voltage transition */
+ PRCM_VOLTSETUP = 0;
+ val = PRCM_VOLTCTRL;
+ val &= ~(SETOFF_LEVEL(0x3) | VOLT_LEVEL(0x3));
+ val |= SETOFF_LEVEL(1) | VOLT_LEVEL(1) | AUTO_EXTVOLT;
+ PRCM_VOLTCTRL = val;
+
+ /* Disable emulation tools functional clock */
+ PRCM_CLKEMUL_CTRL = 0x0;
+
+ /* Set core memory retention state */
+ val = pmdomain_get_powerstate(PM_CORE_OFFSET);
+ if (cpu_is_omap2420()) {
+ val &= ~(0x7 << 3);
+ val |= (MEM3RETSTATE | MEM2RETSTATE | MEM1RETSTATE);
+ } else {
+ val &= ~(0xf << 3);
+ val |= (MEM4RETSTATE | MEM3RETSTATE | MEM2RETSTATE |
+ MEM1RETSTATE);
+ }
+ pmdomain_set_powerstate(val, PM_CORE_OFFSET);
+
+ /* OCP interface smart idle. REVISIT: Enable autoidle bit0 ? */
+ val = SMS_SYSCONFIG;
+ val &= ~(0x3 << 3);
+ val |= (0x2 << 3) | (1 << 0);
+ SMS_SYSCONFIG = val;
+
+ val = SDRC_SYSCONFIG;
+ val &= ~(0x3 << 3);
+ val |= (0x2 << 3);
+ SDRC_SYSCONFIG = val;
+
+ /* Configure L3 interface for smart idle.
+ * REVISIT: Enable autoidle bit0 ?
+ */
+ val = GPMC_SYSCONFIG;
+ val &= ~(0x3 << 3);
+ val |= (0x2 << 3) | (1 << 0);
+ GPMC_SYSCONFIG = val;
+
+ pmdomain_set_powerstate(LOGICRETSTATE | POWERSTATE_RETENTION,
+ PM_MPU_OFFSET);
+ pmdomain_set_powerstate(POWERSTATE_RETENTION, PM_CORE_OFFSET);
+ if (!cpu_is_omap2420())
+ pmdomain_set_powerstate(POWERSTATE_RETENTION, PM_MDM_OFFSET);
+
+ /* Assume suspend function has saved the state for DSP and GFX */
+ pmdomain_set_powerstate(FORCESTATE | POWERSTATE_OFF, PM_DSP_OFFSET);
+ pmdomain_set_powerstate(FORCESTATE | POWERSTATE_OFF, PM_GFX_OFFSET);
+
+#if 0
+ /* REVISIT: Internal USB needs special handling */
+ force_standby_usb();
+ if (cpu_is_omap2430())
+ force_hsmmc();
+ sdram_self_refresh_on_idle_req(1);
+#endif
+
+ /* Enable clock auto control for all domains.
+ * Note that the CORE domain also includes DSS, L4 & L3.
+ */
+ pmdomain_set_clock_autocontrol(AUTOSTAT_MPU, PM_MPU_OFFSET);
+ pmdomain_set_clock_autocontrol(AUTOSTAT_GFX, PM_GFX_OFFSET);
+ pmdomain_set_clock_autocontrol(AUTOSTAT_DSS | AUTOSTAT_L4 | AUTOSTAT_L3,
+ PM_CORE_OFFSET);
+ if (cpu_is_omap2420())
+ pmdomain_set_clock_autocontrol(AUTOSTAT_IVA | AUTOSTAT_DSP,
+ PM_DSP_OFFSET);
+ else {
+ pmdomain_set_clock_autocontrol(AUTOSTAT_DSP, PM_DSP_OFFSET);
+ pmdomain_set_clock_autocontrol(AUTOSTAT_MDM, PM_MDM_OFFSET);
+ }
+
+ /* Enable clock autoidle for all domains */
+ pmdomain_set_clock_autoidle1(0x2, PM_DSP_OFFSET);
+ if (cpu_is_omap2420()) {
+ pmdomain_set_clock_autoidle1(0xfffffff9, PM_CORE_OFFSET);
+ pmdomain_set_clock_autoidle2(0x7, PM_CORE_OFFSET);
+ pmdomain_set_clock_autoidle1(0x3f, PM_WKUP_OFFSET);
+ } else {
+ pmdomain_set_clock_autoidle1(0xeafffff1, PM_CORE_OFFSET);
+ pmdomain_set_clock_autoidle2(0xfff, PM_CORE_OFFSET);
+ pmdomain_set_clock_autoidle1(0x7f, PM_WKUP_OFFSET);
+ pmdomain_set_clock_autoidle1(0x3, PM_MDM_OFFSET);
+ }
+ pmdomain_set_clock_autoidle3(0x7, PM_CORE_OFFSET);
+ pmdomain_set_clock_autoidle4(0x1f, PM_CORE_OFFSET);
+}
+
+/*
+ * Initializes power domains by removing wake-up dependencies and powering
+ * down DSP and GFX. Called from PM init. Note that the DSP and IVA code
+ * must re-enable DSP and GFX when they are used.
+ */
+void __init pmdomain_init(void)
+{
+ /* Remove all domain wakeup dependencies */
+ pmdomain_set_wakeup_dependencies(EN_WKUP | EN_CORE, PM_MPU_OFFSET);
+ pmdomain_set_wakeup_dependencies(0, PM_DSP_OFFSET);
+ pmdomain_set_wakeup_dependencies(0, PM_GFX_OFFSET);
+ pmdomain_set_wakeup_dependencies(EN_WKUP | EN_MPU, PM_CORE_OFFSET);
+ if (cpu_is_omap2430())
+ pmdomain_set_wakeup_dependencies(0, PM_MDM_OFFSET);
+
+ /* Power down DSP and GFX */
+ pmdomain_set_powerstate(POWERSTATE_OFF | FORCESTATE, PM_DSP_OFFSET);
+ pmdomain_set_powerstate(POWERSTATE_OFF | FORCESTATE, PM_GFX_OFFSET);
+}
#include <linux/interrupt.h>
#include <linux/sysfs.h>
#include <linux/module.h>
+#include <linux/delay.h>
#include <asm/io.h>
#include <asm/irq.h>
#include <asm/arch/sram.h>
#include <asm/arch/pm.h>
+#include "prcm-regs.h"
+
static struct clk *vclk;
static void (*omap2_sram_idle)(void);
static void (*omap2_sram_suspend)(int dllctrl, int cpu_rev);
static void (*saved_idle)(void);
+extern void __init pmdomain_init(void);
+extern void pmdomain_set_autoidle(void);
+
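+/* Save area for registers preserved across suspend (see OMAP24XX_SAVE/RESTORE below) */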
+static unsigned int omap24xx_sleep_save[OMAP24XX_SLEEP_SAVE_SIZE];
+
void omap2_pm_idle(void)
{
local_irq_disable();
return error;
}
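+/* Interrupt sources left unmasked during suspend so they can wake the system */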
+#define INT0_WAKE_MASK (OMAP_IRQ_BIT(INT_24XX_GPIO_BANK1) | \
+ OMAP_IRQ_BIT(INT_24XX_GPIO_BANK2) | \
+ OMAP_IRQ_BIT(INT_24XX_GPIO_BANK3))
+
+#define INT1_WAKE_MASK (OMAP_IRQ_BIT(INT_24XX_GPIO_BANK4))
+
+#define INT2_WAKE_MASK (OMAP_IRQ_BIT(INT_24XX_UART1_IRQ) | \
+ OMAP_IRQ_BIT(INT_24XX_UART2_IRQ) | \
+ OMAP_IRQ_BIT(INT_24XX_UART3_IRQ))
+
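+/* Dump one register: symbolic name, address and current contents */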
+#define preg(reg) printk("%s\t(0x%p):\t0x%08x\n", #reg, &reg, reg);
+
+static void omap2_pm_debug(char * desc)
+{
+ printk("%s:\n", desc);
+
+ preg(CM_CLKSTCTRL_MPU);
+ preg(CM_CLKSTCTRL_CORE);
+ preg(CM_CLKSTCTRL_GFX);
+ preg(CM_CLKSTCTRL_DSP);
+ preg(CM_CLKSTCTRL_MDM);
+
+ preg(PM_PWSTCTRL_MPU);
+ preg(PM_PWSTCTRL_CORE);
+ preg(PM_PWSTCTRL_GFX);
+ preg(PM_PWSTCTRL_DSP);
+ preg(PM_PWSTCTRL_MDM);
+
+ preg(PM_PWSTST_MPU);
+ preg(PM_PWSTST_CORE);
+ preg(PM_PWSTST_GFX);
+ preg(PM_PWSTST_DSP);
+ preg(PM_PWSTST_MDM);
+
+ preg(CM_AUTOIDLE1_CORE);
+ preg(CM_AUTOIDLE2_CORE);
+ preg(CM_AUTOIDLE3_CORE);
+ preg(CM_AUTOIDLE4_CORE);
+ preg(CM_AUTOIDLE_WKUP);
+ preg(CM_AUTOIDLE_PLL);
+ preg(CM_AUTOIDLE_DSP);
+ preg(CM_AUTOIDLE_MDM);
+
+ preg(CM_ICLKEN1_CORE);
+ preg(CM_ICLKEN2_CORE);
+ preg(CM_ICLKEN3_CORE);
+ preg(CM_ICLKEN4_CORE);
+ preg(CM_ICLKEN_GFX);
+ preg(CM_ICLKEN_WKUP);
+ preg(CM_ICLKEN_DSP);
+ preg(CM_ICLKEN_MDM);
+
+ preg(CM_IDLEST1_CORE);
+ preg(CM_IDLEST2_CORE);
+ preg(CM_IDLEST3_CORE);
+ preg(CM_IDLEST4_CORE);
+ preg(CM_IDLEST_GFX);
+ preg(CM_IDLEST_WKUP);
+ preg(CM_IDLEST_CKGEN);
+ preg(CM_IDLEST_DSP);
+ preg(CM_IDLEST_MDM);
+
+ preg(RM_RSTST_MPU);
+ preg(RM_RSTST_GFX);
+ preg(RM_RSTST_WKUP);
+ preg(RM_RSTST_DSP);
+ preg(RM_RSTST_MDM);
+
+ preg(PM_WKDEP_MPU);
+ preg(PM_WKDEP_CORE);
+ preg(PM_WKDEP_GFX);
+ preg(PM_WKDEP_DSP);
+ preg(PM_WKDEP_MDM);
+
+ preg(CM_FCLKEN_WKUP);
+ preg(CM_ICLKEN_WKUP);
+ preg(CM_IDLEST_WKUP);
+ preg(CM_AUTOIDLE_WKUP);
+ preg(CM_CLKSEL_WKUP);
+
+ preg(PM_WKEN_WKUP);
+ preg(PM_WKST_WKUP);
+}
+
+static inline void omap2_pm_save_registers(void)
+{
+ /* Save interrupt registers */
+ OMAP24XX_SAVE(INTC_MIR0);
+ OMAP24XX_SAVE(INTC_MIR1);
+ OMAP24XX_SAVE(INTC_MIR2);
+
+ /* Save power control registers */
+ OMAP24XX_SAVE(CM_CLKSTCTRL_MPU);
+ OMAP24XX_SAVE(CM_CLKSTCTRL_CORE);
+ OMAP24XX_SAVE(CM_CLKSTCTRL_GFX);
+ OMAP24XX_SAVE(CM_CLKSTCTRL_DSP);
+ OMAP24XX_SAVE(CM_CLKSTCTRL_MDM);
+
+ /* Save power state registers */
+ OMAP24XX_SAVE(PM_PWSTCTRL_MPU);
+ OMAP24XX_SAVE(PM_PWSTCTRL_CORE);
+ OMAP24XX_SAVE(PM_PWSTCTRL_GFX);
+ OMAP24XX_SAVE(PM_PWSTCTRL_DSP);
+ OMAP24XX_SAVE(PM_PWSTCTRL_MDM);
+
+ /* Save autoidle registers */
+ OMAP24XX_SAVE(CM_AUTOIDLE1_CORE);
+ OMAP24XX_SAVE(CM_AUTOIDLE2_CORE);
+ OMAP24XX_SAVE(CM_AUTOIDLE3_CORE);
+ OMAP24XX_SAVE(CM_AUTOIDLE4_CORE);
+ OMAP24XX_SAVE(CM_AUTOIDLE_WKUP);
+ OMAP24XX_SAVE(CM_AUTOIDLE_PLL);
+ OMAP24XX_SAVE(CM_AUTOIDLE_DSP);
+ OMAP24XX_SAVE(CM_AUTOIDLE_MDM);
+
+ /* Save idle state registers */
+ OMAP24XX_SAVE(CM_IDLEST1_CORE);
+ OMAP24XX_SAVE(CM_IDLEST2_CORE);
+ OMAP24XX_SAVE(CM_IDLEST3_CORE);
+ OMAP24XX_SAVE(CM_IDLEST4_CORE);
+ OMAP24XX_SAVE(CM_IDLEST_GFX);
+ OMAP24XX_SAVE(CM_IDLEST_WKUP);
+ OMAP24XX_SAVE(CM_IDLEST_CKGEN);
+ OMAP24XX_SAVE(CM_IDLEST_DSP);
+ OMAP24XX_SAVE(CM_IDLEST_MDM);
+
+ /* Save clock registers */
+ OMAP24XX_SAVE(CM_FCLKEN1_CORE);
+ OMAP24XX_SAVE(CM_FCLKEN2_CORE);
+ OMAP24XX_SAVE(CM_ICLKEN1_CORE);
+ OMAP24XX_SAVE(CM_ICLKEN2_CORE);
+ OMAP24XX_SAVE(CM_ICLKEN3_CORE);
+ OMAP24XX_SAVE(CM_ICLKEN4_CORE);
+}
+
+static inline void omap2_pm_restore_registers(void)
+{
+ /* Restore clock state registers */
+ OMAP24XX_RESTORE(CM_CLKSTCTRL_MPU);
+ OMAP24XX_RESTORE(CM_CLKSTCTRL_CORE);
+ OMAP24XX_RESTORE(CM_CLKSTCTRL_GFX);
+ OMAP24XX_RESTORE(CM_CLKSTCTRL_DSP);
+ OMAP24XX_RESTORE(CM_CLKSTCTRL_MDM);
+
+ /* Restore power state registers */
+ OMAP24XX_RESTORE(PM_PWSTCTRL_MPU);
+ OMAP24XX_RESTORE(PM_PWSTCTRL_CORE);
+ OMAP24XX_RESTORE(PM_PWSTCTRL_GFX);
+ OMAP24XX_RESTORE(PM_PWSTCTRL_DSP);
+ OMAP24XX_RESTORE(PM_PWSTCTRL_MDM);
+
+ /* Restore idle state registers */
+ OMAP24XX_RESTORE(CM_IDLEST1_CORE);
+ OMAP24XX_RESTORE(CM_IDLEST2_CORE);
+ OMAP24XX_RESTORE(CM_IDLEST3_CORE);
+ OMAP24XX_RESTORE(CM_IDLEST4_CORE);
+ OMAP24XX_RESTORE(CM_IDLEST_GFX);
+ OMAP24XX_RESTORE(CM_IDLEST_WKUP);
+ OMAP24XX_RESTORE(CM_IDLEST_CKGEN);
+ OMAP24XX_RESTORE(CM_IDLEST_DSP);
+ OMAP24XX_RESTORE(CM_IDLEST_MDM);
+
+ /* Restore autoidle registers */
+ OMAP24XX_RESTORE(CM_AUTOIDLE1_CORE);
+ OMAP24XX_RESTORE(CM_AUTOIDLE2_CORE);
+ OMAP24XX_RESTORE(CM_AUTOIDLE3_CORE);
+ OMAP24XX_RESTORE(CM_AUTOIDLE4_CORE);
+ OMAP24XX_RESTORE(CM_AUTOIDLE_WKUP);
+ OMAP24XX_RESTORE(CM_AUTOIDLE_PLL);
+ OMAP24XX_RESTORE(CM_AUTOIDLE_DSP);
+ OMAP24XX_RESTORE(CM_AUTOIDLE_MDM);
+
+ /* Restore clock registers */
+ OMAP24XX_RESTORE(CM_FCLKEN1_CORE);
+ OMAP24XX_RESTORE(CM_FCLKEN2_CORE);
+ OMAP24XX_RESTORE(CM_ICLKEN1_CORE);
+ OMAP24XX_RESTORE(CM_ICLKEN2_CORE);
+ OMAP24XX_RESTORE(CM_ICLKEN3_CORE);
+ OMAP24XX_RESTORE(CM_ICLKEN4_CORE);
+
+ /* REVISIT: Clear interrupts here */
+
+ /* Restore interrupt registers */
+ OMAP24XX_RESTORE(INTC_MIR0);
+ OMAP24XX_RESTORE(INTC_MIR1);
+ OMAP24XX_RESTORE(INTC_MIR2);
+}
+
+static int omap2_pm_suspend(void)
+{
+ int processor_type = 0;
+
+ /* REVISIT: 0x21 or 0x26? */
+ if (cpu_is_omap2420())
+ processor_type = 0x21;
+
+ if (!processor_type)
+ return -ENOTSUPP;
+
+ local_irq_disable();
+ local_fiq_disable();
+
+ omap2_pm_save_registers();
+
+ /* Disable interrupts except for the wake events */
+ INTC_MIR_SET0 = 0xffffffff & ~INT0_WAKE_MASK;
+ INTC_MIR_SET1 = 0xffffffff & ~INT1_WAKE_MASK;
+ INTC_MIR_SET2 = 0xffffffff & ~INT2_WAKE_MASK;
+
+ pmdomain_set_autoidle();
+
+ /* Clear old wake-up events */
+ PM_WKST1_CORE = 0;
+ PM_WKST2_CORE = 0;
+ PM_WKST_WKUP = 0;
+
+ /* Enable wake-up events */
+ PM_WKEN1_CORE = (1 << 22) | (1 << 21); /* UART1 & 2 */
+ PM_WKEN2_CORE = (1 << 2); /* UART3 */
+ PM_WKEN_WKUP = (1 << 2) | (1 << 0); /* GPIO & GPT1 */
+
+ /* Disable clocks except for CM_ICLKEN2_CORE. It gets disabled
+ * in the SRAM suspend code */
+ CM_FCLKEN1_CORE = 0;
+ CM_FCLKEN2_CORE = 0;
+ CM_ICLKEN1_CORE = 0;
+ CM_ICLKEN3_CORE = 0;
+ CM_ICLKEN4_CORE = 0;
+
+ omap2_pm_debug("Status before suspend");
+
+ /* Must wait for serial buffers to clear */
+ mdelay(200);
+
+ /* Jump to SRAM suspend code
+ * REVISIT: When is this SDRC_DLLB_CTRL?
+ */
+ omap2_sram_suspend(SDRC_DLLA_CTRL, processor_type);
+
+ /* Back from sleep */
+ omap2_pm_restore_registers();
+
+ local_fiq_enable();
+ local_irq_enable();
+
+ return 0;
+}
+
static int omap2_pm_enter(suspend_state_t state)
{
+ int ret = 0;
+
switch (state)
{
case PM_SUSPEND_STANDBY:
case PM_SUSPEND_MEM:
- /* FIXME: Add suspend */
+ ret = omap2_pm_suspend();
break;
-
case PM_SUSPEND_DISK:
- return -ENOTSUPP;
-
+ ret = -ENOTSUPP;
+ break;
default:
- return -EINVAL;
+ ret = -EINVAL;
}
- return 0;
+ return ret;
}
static int omap2_pm_finish(suspend_state_t state)
pm_set_ops(&omap_pm_ops);
pm_idle = omap2_pm_idle;
+ pmdomain_init();
+
return 0;
}
* Copyright (C) 2005 Nokia Corporation
* Author: Paul Mundt <paul.mundt@nokia.com>
* Juha Yrjölä <juha.yrjola@nokia.com>
+ * OMAP Dual-mode timer framework support by Timo Teras
*
* Some parts based off of TI's 24xx code:
*
#include <linux/interrupt.h>
#include <linux/err.h>
#include <linux/clk.h>
+#include <linux/delay.h>
#include <asm/mach/time.h>
-#include <asm/delay.h>
-#include <asm/io.h>
+#include <asm/arch/dmtimer.h>
-#define OMAP2_GP_TIMER1_BASE 0x48028000
-#define OMAP2_GP_TIMER2_BASE 0x4802a000
-#define OMAP2_GP_TIMER3_BASE 0x48078000
-#define OMAP2_GP_TIMER4_BASE 0x4807a000
+static struct omap_dm_timer *gptimer;
-#define GP_TIMER_TIDR 0x00
-#define GP_TIMER_TISR 0x18
-#define GP_TIMER_TIER 0x1c
-#define GP_TIMER_TCLR 0x24
-#define GP_TIMER_TCRR 0x28
-#define GP_TIMER_TLDR 0x2c
-#define GP_TIMER_TSICR 0x40
-
-#define OS_TIMER_NR 1 /* GP timer 2 */
-
-static unsigned long timer_base[] = {
- IO_ADDRESS(OMAP2_GP_TIMER1_BASE),
- IO_ADDRESS(OMAP2_GP_TIMER2_BASE),
- IO_ADDRESS(OMAP2_GP_TIMER3_BASE),
- IO_ADDRESS(OMAP2_GP_TIMER4_BASE),
-};
-
-static inline unsigned int timer_read_reg(int nr, unsigned int reg)
-{
- return __raw_readl(timer_base[nr] + reg);
-}
-
-static inline void timer_write_reg(int nr, unsigned int reg, unsigned int val)
-{
- __raw_writel(val, timer_base[nr] + reg);
-}
-
-/* Note that we always enable the clock prescale divider bit */
-static inline void omap2_gp_timer_start(int nr, unsigned long load_val)
+static inline void omap2_gp_timer_start(unsigned long load_val)
{
- unsigned int tmp;
-
- tmp = 0xffffffff - load_val;
-
- timer_write_reg(nr, GP_TIMER_TLDR, tmp);
- timer_write_reg(nr, GP_TIMER_TCRR, tmp);
- timer_write_reg(nr, GP_TIMER_TIER, 1 << 1);
- timer_write_reg(nr, GP_TIMER_TCLR, (1 << 5) | (1 << 1) | 1);
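+ /* The timer counts up from (0xffffffff - load_val), so it overflows and interrupts every load_val ticks */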
+ omap_dm_timer_set_load(gptimer, 1, 0xffffffff - load_val);
+ omap_dm_timer_set_int_enable(gptimer, OMAP_TIMER_INT_OVERFLOW);
+ omap_dm_timer_start(gptimer);
}
static irqreturn_t omap2_gp_timer_interrupt(int irq, void *dev_id,
{
write_seqlock(&xtime_lock);
- timer_write_reg(OS_TIMER_NR, GP_TIMER_TISR, 1 << 1);
+ omap_dm_timer_write_status(gptimer, OMAP_TIMER_INT_OVERFLOW);
timer_tick(regs);
write_sequnlock(&xtime_lock);
static struct irqaction omap2_gp_timer_irq = {
.name = "gp timer",
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
.handler = omap2_gp_timer_interrupt,
};
static void __init omap2_gp_timer_init(void)
{
- struct clk * sys_ck;
- u32 tick_period = 120000;
- u32 l;
+ u32 tick_period;
- /* Reset clock and prescale value */
- timer_write_reg(OS_TIMER_NR, GP_TIMER_TCLR, 0);
+ omap_dm_timer_init();
+ gptimer = omap_dm_timer_request_specific(1);
+ BUG_ON(gptimer == NULL);
- sys_ck = clk_get(NULL, "sys_ck");
- if (IS_ERR(sys_ck))
- printk(KERN_ERR "Could not get sys_ck\n");
- else {
- clk_enable(sys_ck);
- tick_period = clk_get_rate(sys_ck) / 100;
- clk_put(sys_ck);
- }
-
- tick_period /= 2; /* Minimum prescale divider is 2 */
+ omap_dm_timer_set_source(gptimer, OMAP_TIMER_SRC_SYS_CLK);
+ tick_period = clk_get_rate(omap_dm_timer_get_fclk(gptimer)) / 100;
tick_period -= 1;
- l = timer_read_reg(OS_TIMER_NR, GP_TIMER_TIDR);
- printk(KERN_INFO "OMAP2 GP timer (HW version %d.%d)\n",
- (l >> 4) & 0x0f, l & 0x0f);
-
- setup_irq(38, &omap2_gp_timer_irq);
-
- omap2_gp_timer_start(OS_TIMER_NR, tick_period);
+ setup_irq(omap_dm_timer_get_irq(gptimer), &omap2_gp_timer_irq);
+ omap2_gp_timer_start(tick_period);
}
struct sys_timer omap_timer = {
.init = omap2_gp_timer_init,
};
-
static struct irqaction pnx4008_timer_irq = {
.name = "PNX4008 Tick Timer",
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
.handler = pnx4008_timer_interrupt
};
SL-C3000 (Spitz), SL-C3100 (Borzoi) or SL-C6000x (Tosa)
handheld computer.
+config MACH_TRIZEPS4
+ bool "Keith und Koep Trizeps4 DIMM-Module"
+ select PXA27x
+
endchoice
if PXA_SHARPSL
endif
+if MACH_TRIZEPS4
+
+choice
+ prompt "Select base board for Trizeps 4 module"
+
+config MACH_TRIZEPS4_CONXS
+ bool "ConXS Eval Board"
+
+config MACH_TRIZEPS4_ANY
+ bool "another Board"
+
+endchoice
+
+endif
+
endmenu
config MACH_POODLE
obj-$(CONFIG_MACH_LOGICPD_PXA270) += lpd270.o
obj-$(CONFIG_MACH_MAINSTONE) += mainstone.o
obj-$(CONFIG_ARCH_PXA_IDP) += idp.o
+obj-$(CONFIG_MACH_TRIZEPS4) += trizeps4.o
obj-$(CONFIG_PXA_SHARP_C7xx) += corgi.o corgi_ssp.o corgi_lcd.o sharpsl_pm.o corgi_pm.o
obj-$(CONFIG_PXA_SHARP_Cxx00) += spitz.o corgi_ssp.o corgi_lcd.o sharpsl_pm.o spitz_pm.o
obj-$(CONFIG_MACH_AKITA) += akita-ioexp.o
led-$(CONFIG_ARCH_LUBBOCK) += leds-lubbock.o
led-$(CONFIG_MACH_MAINSTONE) += leds-mainstone.o
led-$(CONFIG_ARCH_PXA_IDP) += leds-idp.o
+led-$(CONFIG_MACH_TRIZEPS4) += leds-trizeps4.o
obj-$(CONFIG_LEDS) += $(led-y)
corgi_mci_platform_data.detect_delay = msecs_to_jiffies(250);
err = request_irq(CORGI_IRQ_GPIO_nSD_DETECT, corgi_detect_int,
- SA_INTERRUPT | SA_TRIGGER_RISING | SA_TRIGGER_FALLING,
+ IRQF_DISABLED | IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
"MMC card detect", data);
if (err) {
printk(KERN_ERR "corgi_mci_init: MMC/SD: can't request MMC card detect IRQ\n");
#include <linux/init.h>
#include <linux/interrupt.h>
+#include <linux/irq.h>
#include <linux/platform_device.h>
#include <linux/fb.h>
--- /dev/null
+/*
+ * linux/arch/arm/mach-pxa/leds-trizeps4.c
+ *
+ * Author: Jürgen Schindele
+ * Created: 20 02, 2006
+ * Copyright: Jürgen Schindele
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+
+#include <asm/hardware.h>
+#include <asm/system.h>
+#include <asm/types.h>
+#include <asm/leds.h>
+
+#include <asm/arch/pxa-regs.h>
+#include <asm/arch/trizeps4.h>
+
+#include "leds.h"
+
+#define LED_STATE_ENABLED 1
+#define LED_STATE_CLAIMED 2
+
+#define SYS_BUSY 0x01
+#define HEARTBEAT 0x02
+#define BLINK 0x04
+
+static unsigned int led_state;
+static unsigned int hw_led_state;
+
+void trizeps4_leds_event(led_event_t evt)
+{
+ unsigned long flags;
+
+ local_irq_save(flags);
+
+ switch (evt) {
+ case led_start:
+ hw_led_state = 0;
+ pxa_gpio_mode( GPIO_SYS_BUSY_LED | GPIO_OUT); /* LED1 */
+ pxa_gpio_mode( GPIO_HEARTBEAT_LED | GPIO_OUT); /* LED2 */
+ led_state = LED_STATE_ENABLED;
+ break;
+
+ case led_stop:
+ led_state &= ~LED_STATE_ENABLED;
+ break;
+
+ case led_claim:
+ led_state |= LED_STATE_CLAIMED;
+ hw_led_state = 0;
+ break;
+
+ case led_release:
+ led_state &= ~LED_STATE_CLAIMED;
+ hw_led_state = 0;
+ break;
+
+#ifdef CONFIG_LEDS_TIMER
+ case led_timer:
+ hw_led_state ^= HEARTBEAT;
+ break;
+#endif
+
+#ifdef CONFIG_LEDS_CPU
+ case led_idle_start:
+ hw_led_state &= ~SYS_BUSY;
+ break;
+
+ case led_idle_end:
+ hw_led_state |= SYS_BUSY;
+ break;
+#endif
+
+ case led_halted:
+ break;
+
+ case led_green_on:
+ hw_led_state |= BLINK;
+ break;
+
+ case led_green_off:
+ hw_led_state &= ~BLINK;
+ break;
+
+ case led_amber_on:
+ break;
+
+ case led_amber_off:
+ break;
+
+ case led_red_on:
+ break;
+
+ case led_red_off:
+ break;
+
+ default:
+ break;
+ }
+
+ if (led_state & LED_STATE_ENABLED) {
+ switch (hw_led_state) {
+ case 0:
+ GPSR(GPIO_SYS_BUSY_LED) |= GPIO_bit(GPIO_SYS_BUSY_LED);
+ GPSR(GPIO_HEARTBEAT_LED) |= GPIO_bit(GPIO_HEARTBEAT_LED);
+ break;
+ case 1:
+ GPCR(GPIO_SYS_BUSY_LED) |= GPIO_bit(GPIO_SYS_BUSY_LED);
+ GPSR(GPIO_HEARTBEAT_LED) |= GPIO_bit(GPIO_HEARTBEAT_LED);
+ break;
+ case 2:
+ GPSR(GPIO_SYS_BUSY_LED) |= GPIO_bit(GPIO_SYS_BUSY_LED);
+ GPCR(GPIO_HEARTBEAT_LED) |= GPIO_bit(GPIO_HEARTBEAT_LED);
+ break;
+ case 3:
+ GPCR(GPIO_SYS_BUSY_LED) |= GPIO_bit(GPIO_SYS_BUSY_LED);
+ GPCR(GPIO_HEARTBEAT_LED) |= GPIO_bit(GPIO_HEARTBEAT_LED);
+ break;
+ }
+ }
+ else {
+ /* turn all off */
+ GPSR(GPIO_SYS_BUSY_LED) |= GPIO_bit(GPIO_SYS_BUSY_LED);
+ GPSR(GPIO_HEARTBEAT_LED) |= GPIO_bit(GPIO_HEARTBEAT_LED);
+ }
+
+ local_irq_restore(flags);
+}
leds_event = mainstone_leds_event;
if (machine_is_pxa_idp())
leds_event = idp_leds_event;
+ if (machine_is_trizeps4())
+ leds_event = trizeps4_leds_event;
leds_event(led_start);
return 0;
extern void idp_leds_event(led_event_t evt);
extern void lubbock_leds_event(led_event_t evt);
extern void mainstone_leds_event(led_event_t evt);
+extern void trizeps4_leds_event(led_event_t evt);
/* 5.7" TFT QVGA (LoLo display number 1) */
static struct pxafb_mach_info sharp_lq057q3dc02 __initdata = {
- .pixclock = 100000,
- .xres = 240,
- .yres = 320,
+ .pixclock = 150000,
+ .xres = 320,
+ .yres = 240,
.bpp = 16,
- .hsync_len = 64,
- .left_margin = 0x27,
- .right_margin = 0x09,
- .vsync_len = 0x04,
+ .hsync_len = 0x14,
+ .left_margin = 0x28,
+ .right_margin = 0x0a,
+ .vsync_len = 0x02,
.upper_margin = 0x08,
.lower_margin = 0x14,
- .sync = 0,
+ .sync = FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT,
.lccr0 = 0x07800080,
- .lccr3 = 0x04400007,
+ .lccr3 = 0x00400000,
+ .pxafb_backlight_power = lpd270_backlight_power,
+};
+
+/* 12.1" TFT SVGA (LoLo display number 2) */
+static struct pxafb_mach_info sharp_lq121s1dg31 __initdata = {
+ .pixclock = 50000,
+ .xres = 800,
+ .yres = 600,
+ .bpp = 16,
+ .hsync_len = 0x05,
+ .left_margin = 0x52,
+ .right_margin = 0x05,
+ .vsync_len = 0x04,
+ .upper_margin = 0x14,
+ .lower_margin = 0x0a,
+ .sync = FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT,
+ .lccr0 = 0x07800080,
+ .lccr3 = 0x00400000,
+ .pxafb_backlight_power = lpd270_backlight_power,
+};
+
+/* 3.6" TFT QVGA (LoLo display number 3) */
+static struct pxafb_mach_info sharp_lq036q1da01 __initdata = {
+ .pixclock = 150000,
+ .xres = 320,
+ .yres = 240,
+ .bpp = 16,
+ .hsync_len = 0x0e,
+ .left_margin = 0x04,
+ .right_margin = 0x0a,
+ .vsync_len = 0x03,
+ .upper_margin = 0x03,
+ .lower_margin = 0x03,
+ .sync = FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT,
+ .lccr0 = 0x07800080,
+ .lccr3 = 0x00400000,
.pxafb_backlight_power = lpd270_backlight_power,
};
/* 6.4" TFT VGA (LoLo display number 5) */
static struct pxafb_mach_info sharp_lq64d343 __initdata = {
- .pixclock = 20000,
+ .pixclock = 25000,
.xres = 640,
.yres = 480,
.bpp = 16,
- .hsync_len = 49,
+ .hsync_len = 0x31,
.left_margin = 0x89,
.right_margin = 0x19,
- .vsync_len = 18,
+ .vsync_len = 0x12,
.upper_margin = 0x22,
- .lower_margin = 0,
+ .lower_margin = 0x00,
.sync = FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT,
.lccr0 = 0x07800080,
- .lccr3 = 0x04400001,
+ .lccr3 = 0x00400000,
+ .pxafb_backlight_power = lpd270_backlight_power,
+};
+
+/* 10.4" TFT VGA (LoLo display number 7) */
+static struct pxafb_mach_info sharp_lq10d368 __initdata = {
+ .pixclock = 25000,
+ .xres = 640,
+ .yres = 480,
+ .bpp = 16,
+ .hsync_len = 0x31,
+ .left_margin = 0x89,
+ .right_margin = 0x19,
+ .vsync_len = 0x12,
+ .upper_margin = 0x22,
+ .lower_margin = 0x00,
+ .sync = FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT,
+ .lccr0 = 0x07800080,
+ .lccr3 = 0x00400000,
.pxafb_backlight_power = lpd270_backlight_power,
};
/* 3.5" TFT QVGA (LoLo display number 8) */
static struct pxafb_mach_info sharp_lq035q7db02_20 __initdata = {
- .pixclock = 100000,
+ .pixclock = 150000,
.xres = 240,
.yres = 320,
.bpp = 16,
- .hsync_len = 0x34,
- .left_margin = 0x09,
- .right_margin = 0x09,
- .vsync_len = 0x08,
+ .hsync_len = 0x0e,
+ .left_margin = 0x0a,
+ .right_margin = 0x0a,
+ .vsync_len = 0x03,
.upper_margin = 0x05,
.lower_margin = 0x14,
- .sync = 0,
+ .sync = FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT,
.lccr0 = 0x07800080,
- .lccr3 = 0x04400007,
+ .lccr3 = 0x00400000,
.pxafb_backlight_power = lpd270_backlight_power,
};
+static struct pxafb_mach_info *lpd270_lcd_to_use;
+
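+/* Select the LCD panel with the "lcd=" kernel command line option */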
+static int __init lpd270_set_lcd(char *str)
+{
+ if (!strnicmp(str, "lq057q3dc02", 11)) {
+ lpd270_lcd_to_use = &sharp_lq057q3dc02;
+ } else if (!strnicmp(str, "lq121s1dg31", 11)) {
+ lpd270_lcd_to_use = &sharp_lq121s1dg31;
+ } else if (!strnicmp(str, "lq036q1da01", 11)) {
+ lpd270_lcd_to_use = &sharp_lq036q1da01;
+ } else if (!strnicmp(str, "lq64d343", 8)) {
+ lpd270_lcd_to_use = &sharp_lq64d343;
+ } else if (!strnicmp(str, "lq10d368", 8)) {
+ lpd270_lcd_to_use = &sharp_lq10d368;
+ } else if (!strnicmp(str, "lq035q7db02-20", 14)) {
+ lpd270_lcd_to_use = &sharp_lq035q7db02_20;
+ } else {
+ printk(KERN_INFO "lpd270: unknown lcd panel [%s]\n", str);
+ }
+
+ return 1;
+}
+
+__setup("lcd=", lpd270_set_lcd);
+
static struct platform_device *platform_devices[] __initdata = {
&smc91x_device,
&lpd270_audio_device,
platform_add_devices(platform_devices, ARRAY_SIZE(platform_devices));
- // set_pxa_fb_info(&sharp_lq057q3dc02);
- set_pxa_fb_info(&sharp_lq64d343);
- // set_pxa_fb_info(&sharp_lq035q7db02_20);
+ if (lpd270_lcd_to_use != NULL)
+ set_pxa_fb_info(lpd270_lcd_to_use);
pxa_set_ohci_info(&lpd270_ohci_platform_data);
}
init_timer(&mmc_timer);
mmc_timer.data = (unsigned long) data;
return request_irq(LUBBOCK_SD_IRQ, lubbock_detect_int,
- SA_SAMPLE_RANDOM, "lubbock-sd-detect", data);
+ IRQF_SAMPLE_RANDOM, "lubbock-sd-detect", data);
}
static int lubbock_mci_get_ro(struct device *dev)
*/
MST_MSCWR1 &= ~MST_MSCWR1_MS_SEL;
- err = request_irq(MAINSTONE_MMC_IRQ, mstone_detect_int, SA_INTERRUPT,
+ err = request_irq(MAINSTONE_MMC_IRQ, mstone_detect_int, IRQF_DISABLED,
"MMC card detect", data);
if (err) {
printk(KERN_ERR "mainstone_mci_init: MMC/SD: can't request MMC card detect IRQ\n");
poodle_mci_platform_data.detect_delay = msecs_to_jiffies(250);
err = request_irq(POODLE_IRQ_GPIO_nSD_DETECT, poodle_detect_int,
- SA_INTERRUPT | SA_TRIGGER_RISING | SA_TRIGGER_FALLING,
+ IRQF_DISABLED | IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
"MMC card detect", data);
if (err) {
printk(KERN_ERR "poodle_mci_init: MMC/SD: can't request MMC card detect IRQ\n");
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/interrupt.h>
+#include <linux/irq.h>
#include <linux/platform_device.h>
#include <asm/hardware.h>
#include <asm/mach-types.h>
-#include <asm/irq.h>
#include <asm/apm.h>
#include <asm/arch/pm.h>
#include <asm/arch/pxa-regs.h>
pxa_gpio_mode(sharpsl_pm.machinfo->gpio_batlock | GPIO_IN);
/* Register interrupt handlers */
- if (request_irq(IRQ_GPIO(sharpsl_pm.machinfo->gpio_acin), sharpsl_ac_isr, SA_INTERRUPT, "AC Input Detect", sharpsl_ac_isr)) {
+ if (request_irq(IRQ_GPIO(sharpsl_pm.machinfo->gpio_acin), sharpsl_ac_isr, IRQF_DISABLED, "AC Input Detect", sharpsl_ac_isr)) {
dev_err(sharpsl_pm.dev, "Could not get irq %d.\n", IRQ_GPIO(sharpsl_pm.machinfo->gpio_acin));
}
else set_irq_type(IRQ_GPIO(sharpsl_pm.machinfo->gpio_acin),IRQT_BOTHEDGE);
- if (request_irq(IRQ_GPIO(sharpsl_pm.machinfo->gpio_batlock), sharpsl_fatal_isr, SA_INTERRUPT, "Battery Cover", sharpsl_fatal_isr)) {
+ if (request_irq(IRQ_GPIO(sharpsl_pm.machinfo->gpio_batlock), sharpsl_fatal_isr, IRQF_DISABLED, "Battery Cover", sharpsl_fatal_isr)) {
dev_err(sharpsl_pm.dev, "Could not get irq %d.\n", IRQ_GPIO(sharpsl_pm.machinfo->gpio_batlock));
}
else set_irq_type(IRQ_GPIO(sharpsl_pm.machinfo->gpio_batlock),IRQT_FALLING);
if (sharpsl_pm.machinfo->gpio_fatal) {
- if (request_irq(IRQ_GPIO(sharpsl_pm.machinfo->gpio_fatal), sharpsl_fatal_isr, SA_INTERRUPT, "Fatal Battery", sharpsl_fatal_isr)) {
+ if (request_irq(IRQ_GPIO(sharpsl_pm.machinfo->gpio_fatal), sharpsl_fatal_isr, IRQF_DISABLED, "Fatal Battery", sharpsl_fatal_isr)) {
dev_err(sharpsl_pm.dev, "Could not get irq %d.\n", IRQ_GPIO(sharpsl_pm.machinfo->gpio_fatal));
}
else set_irq_type(IRQ_GPIO(sharpsl_pm.machinfo->gpio_fatal),IRQT_FALLING);
if (sharpsl_pm.machinfo->batfull_irq)
{
/* Register interrupt handler. */
- if (request_irq(IRQ_GPIO(sharpsl_pm.machinfo->gpio_batfull), sharpsl_chrg_full_isr, SA_INTERRUPT, "CO", sharpsl_chrg_full_isr)) {
+ if (request_irq(IRQ_GPIO(sharpsl_pm.machinfo->gpio_batfull), sharpsl_chrg_full_isr, IRQF_DISABLED, "CO", sharpsl_chrg_full_isr)) {
dev_err(sharpsl_pm.dev, "Could not get irq %d.\n", IRQ_GPIO(sharpsl_pm.machinfo->gpio_batfull));
}
else set_irq_type(IRQ_GPIO(sharpsl_pm.machinfo->gpio_batfull),IRQT_RISING);
spitz_mci_platform_data.detect_delay = msecs_to_jiffies(250);
err = request_irq(SPITZ_IRQ_GPIO_nSD_DETECT, spitz_detect_int,
- SA_INTERRUPT | SA_TRIGGER_RISING | SA_TRIGGER_FALLING,
+ IRQF_DISABLED | IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
"MMC card detect", data);
if (err) {
printk(KERN_ERR "spitz_mci_init: MMC/SD: can't request MMC card detect IRQ\n");
static struct irqaction pxa_timer_irq = {
.name = "PXA Timer Tick",
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
.handler = pxa_timer_interrupt,
};
tosa_mci_platform_data.detect_delay = msecs_to_jiffies(250);
- err = request_irq(TOSA_IRQ_GPIO_nSD_DETECT, tosa_detect_int, SA_INTERRUPT,
+ err = request_irq(TOSA_IRQ_GPIO_nSD_DETECT, tosa_detect_int, IRQF_DISABLED,
"MMC/SD card detect", data);
if (err) {
printk(KERN_ERR "tosa_mci_init: MMC/SD: can't request MMC card detect IRQ\n");
--- /dev/null
+/*
+ * linux/arch/arm/mach-pxa/trizeps4.c
+ *
+ * Support for the Keith und Koep Trizeps4 Module Platform.
+ *
+ * Author: Jürgen Schindele
+ * Created: 20 02, 2006
+ * Copyright: Jürgen Schindele
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/platform_device.h>
+#include <linux/sysdev.h>
+#include <linux/interrupt.h>
+#include <linux/sched.h>
+#include <linux/bitops.h>
+#include <linux/fb.h>
+#include <linux/ioport.h>
+#include <linux/delay.h>
+#include <linux/serial_8250.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/partitions.h>
+
+#include <asm/types.h>
+#include <asm/setup.h>
+#include <asm/memory.h>
+#include <asm/mach-types.h>
+#include <asm/hardware.h>
+#include <asm/irq.h>
+#include <asm/sizes.h>
+
+#include <asm/mach/arch.h>
+#include <asm/mach/map.h>
+#include <asm/mach/irq.h>
+#include <asm/mach/flash.h>
+
+#include <asm/arch/pxa-regs.h>
+#include <asm/arch/trizeps4.h>
+#include <asm/arch/audio.h>
+#include <asm/arch/pxafb.h>
+#include <asm/arch/mmc.h>
+#include <asm/arch/irda.h>
+#include <asm/arch/ohci.h>
+
+#include "generic.h"
+
+/********************************************************************************************
+ * ONBOARD FLASH
+ ********************************************************************************************/
+static struct mtd_partition trizeps4_partitions[] = {
+ {
+ .name = "Bootloader",
+ .size = 0x00040000,
+ .offset = 0,
+ .mask_flags = MTD_WRITEABLE /* force read-only */
+ },{
+ .name = "Kernel",
+ .size = 0x00400000,
+ .offset = 0x00040000
+ },{
+ .name = "Filesystem",
+ .size = MTDPART_SIZ_FULL,
+ .offset = 0x00440000
+ }
+};
+
+static struct flash_platform_data trizeps4_flash_data[] = {
+ {
+ .map_name = "cfi_probe",
+ .parts = trizeps4_partitions,
+ .nr_parts = ARRAY_SIZE(trizeps4_partitions)
+ }
+};
+
+static struct resource flash_resource = {
+ .start = PXA_CS0_PHYS,
+ .end = PXA_CS0_PHYS + SZ_64M - 1,
+ .flags = IORESOURCE_MEM,
+};
+
+static struct platform_device flash_device = {
+ .name = "pxa2xx-flash",
+ .id = 0,
+ .dev = {
+ .platform_data = &trizeps4_flash_data,
+ },
+ .resource = &flash_resource,
+ .num_resources = 1,
+};
+
+/********************************************************************************************
+ * DAVICOM DM9000 Ethernet
+ ********************************************************************************************/
+static struct resource dm9000_resources[] = {
+ [0] = {
+ .start = TRIZEPS4_ETH_PHYS+0x300,
+ .end = TRIZEPS4_ETH_PHYS+0x400-1,
+ .flags = IORESOURCE_MEM,
+ },
+ [1] = {
+ .start = TRIZEPS4_ETH_PHYS+0x8300,
+ .end = TRIZEPS4_ETH_PHYS+0x8400-1,
+ .flags = IORESOURCE_MEM,
+ },
+ [2] = {
+ .start = TRIZEPS4_ETH_IRQ,
+ .end = TRIZEPS4_ETH_IRQ,
+ .flags = (IORESOURCE_IRQ | IRQT_RISING),
+ },
+};
+
+static struct platform_device dm9000_device = {
+ .name = "dm9000",
+ .id = -1,
+ .num_resources = ARRAY_SIZE(dm9000_resources),
+ .resource = dm9000_resources,
+};
+
+/********************************************************************************************
+ * PXA270 serial ports
+ ********************************************************************************************/
+static struct plat_serial8250_port tri_serial_ports[] = {
+#ifdef CONFIG_SERIAL_PXA
+ /* this uses PXA's own serial driver */
+ {
+ 0,
+ },
+#else
+ /* this uses the generic 8250 driver */
+ [0] = {
+ .membase = (void *)&FFUART,
+ .irq = IRQ_FFUART,
+ .flags = UPF_BOOT_AUTOCONF,
+ .iotype = UPIO_MEM32,
+ .regshift = 2,
+ .uartclk = (921600*16),
+ },
+ [1] = {
+ .membase = (void *)&BTUART,
+ .irq = IRQ_BTUART,
+ .flags = UPF_BOOT_AUTOCONF,
+ .iotype = UPIO_MEM32,
+ .regshift = 2,
+ .uartclk = (921600*16),
+ },
+ {
+ 0,
+ },
+#endif
+};
+
+static struct platform_device uart_devices = {
+ .name = "serial8250",
+ .id = 0,
+ .dev = {
+ .platform_data = tri_serial_ports,
+ },
+ .num_resources = 0,
+ .resource = NULL,
+};
+
+/********************************************************************************************
+ * PXA270 ac97 sound codec
+ ********************************************************************************************/
+static struct platform_device ac97_audio_device = {
+ .name = "pxa2xx-ac97",
+ .id = -1,
+};
+
+static struct platform_device * trizeps4_devices[] __initdata = {
+ &flash_device,
+ &uart_devices,
+ &dm9000_device,
+ &ac97_audio_device,
+};
+
+#ifdef CONFIG_MACH_TRIZEPS4_CONXS
+static short trizeps_conxs_bcr;
+
+/* PCCARD power switching supports 3.3V only */
+void board_pcmcia_power(int power)
+{
+ if (power) {
+ /* switch power on, put in reset and enable buffers */
+ trizeps_conxs_bcr |= power;
+ trizeps_conxs_bcr |= ConXS_BCR_CF_RESET;
+ trizeps_conxs_bcr &= ~(ConXS_BCR_CF_BUF_EN);
+ ConXS_BCR = trizeps_conxs_bcr;
+ /* wait a little */
+ udelay(2000);
+ /* take reset away */
+ trizeps_conxs_bcr &= ~(ConXS_BCR_CF_RESET);
+ ConXS_BCR = trizeps_conxs_bcr;
+ udelay(2000);
+ } else {
+ /* put in reset */
+ trizeps_conxs_bcr |= ConXS_BCR_CF_RESET;
+ ConXS_BCR = trizeps_conxs_bcr;
+ udelay(1000);
+ /* switch power off */
+ trizeps_conxs_bcr &= ~(0xf);
+ ConXS_BCR = trizeps_conxs_bcr;
+
+ }
+ pr_debug("%s: o%s 0x%x\n", __FUNCTION__, power ? "n": "ff", trizeps_conxs_bcr);
+}
+
+/* backlight power switching for LCD panel */
+static void board_backlight_power(int on)
+{
+ if (on) {
+ trizeps_conxs_bcr |= ConXS_BCR_L_DISP;
+ } else {
+ trizeps_conxs_bcr &= ~ConXS_BCR_L_DISP;
+ }
+ pr_debug("%s: o%s 0x%x\n", __FUNCTION__, on ? "n" : "ff", trizeps_conxs_bcr);
+ ConXS_BCR = trizeps_conxs_bcr;
+}
+
+/* Power supply for the MMC/SD card slot */
+static void board_mci_power(struct device *dev, unsigned int vdd)
+{
+ struct pxamci_platform_data* p_d = dev->platform_data;
+
+ if (( 1 << vdd) & p_d->ocr_mask) {
+ pr_debug("%s: on\n", __FUNCTION__);
+ /* FIXME fill in values here */
+ } else {
+ pr_debug("%s: off\n", __FUNCTION__);
+ /* FIXME fill in values here */
+ }
+}
+
+static short trizeps_conxs_ircr;
+
+/* Switch modes and power for the IrDA transceiver */
+static void board_irda_mode(struct device *dev, int mode)
+{
+ unsigned long flags;
+
+ local_irq_save(flags);
+ if (mode & IR_SIRMODE) {
+ /* Slow mode */
+ trizeps_conxs_ircr &= ~ConXS_IRCR_MODE;
+ } else if (mode & IR_FIRMODE) {
+ /* Fast mode */
+ trizeps_conxs_ircr |= ConXS_IRCR_MODE;
+ }
+ if (mode & IR_OFF) {
+ trizeps_conxs_ircr |= ConXS_IRCR_SD;
+ } else {
+ trizeps_conxs_ircr &= ~ConXS_IRCR_SD;
+ }
+ /* FIXME write values to register */
+ local_irq_restore(flags);
+}
+
+#else
+/* for other baseboards define dummies */
+void board_pcmcia_power(int power) {;}
+#define board_backlight_power NULL
+#define board_mci_power NULL
+#define board_irda_mode NULL
+
+#endif /* CONFIG_MACH_TRIZEPS4_CONXS */
+EXPORT_SYMBOL(board_pcmcia_power);
+
+static int trizeps4_mci_init(struct device *dev, irqreturn_t (*mci_detect_int)(int, void *, struct pt_regs *), void *data)
+{
+ int err;
+ /* setup GPIO for PXA27x MMC controller */
+ pxa_gpio_mode(GPIO32_MMCCLK_MD);
+ pxa_gpio_mode(GPIO112_MMCCMD_MD);
+ pxa_gpio_mode(GPIO92_MMCDAT0_MD);
+ pxa_gpio_mode(GPIO109_MMCDAT1_MD);
+ pxa_gpio_mode(GPIO110_MMCDAT2_MD);
+ pxa_gpio_mode(GPIO111_MMCDAT3_MD);
+
+ pxa_gpio_mode(GPIO_MMC_DET | GPIO_IN);
+
+ err = request_irq(TRIZEPS4_MMC_IRQ, mci_detect_int,
+ IRQF_DISABLED | IRQF_TRIGGER_RISING,
+ "MMC card detect", data);
+ if (err) {
+ printk(KERN_ERR "trizeps4_mci_init: MMC/SD: can't request MMC card detect IRQ\n");
+ return -1;
+ }
+ return 0;
+}
+
+static void trizeps4_mci_exit(struct device *dev, void *data)
+{
+ free_irq(TRIZEPS4_MMC_IRQ, data);
+}
+
+static struct pxamci_platform_data trizeps4_mci_platform_data = {
+ .ocr_mask = MMC_VDD_32_33|MMC_VDD_33_34,
+ .init = trizeps4_mci_init,
+ .exit = trizeps4_mci_exit,
+ .setpower = board_mci_power,
+};
+
+static struct pxaficp_platform_data trizeps4_ficp_platform_data = {
+ .transceiver_cap = IR_SIRMODE | IR_FIRMODE | IR_OFF,
+ .transceiver_mode = board_irda_mode,
+};
+
+static int trizeps4_ohci_init(struct device *dev)
+{
+ /* setup Port1 GPIO pin. */
+ pxa_gpio_mode( 88 | GPIO_ALT_FN_1_IN); /* USBHPWR1 */
+ pxa_gpio_mode( 89 | GPIO_ALT_FN_2_OUT); /* USBHPEN1 */
+
+ /* Set the Power Control Polarity Low and Power Sense
+ Polarity Low to active low. */
+ UHCHR = (UHCHR | UHCHR_PCPL | UHCHR_PSPL) &
+ ~(UHCHR_SSEP1 | UHCHR_SSEP2 | UHCHR_SSEP3 | UHCHR_SSE);
+
+ return 0;
+}
+
+static void trizeps4_ohci_exit(struct device *dev)
+{
+ ;
+}
+
+static struct pxaohci_platform_data trizeps4_ohci_platform_data = {
+ .port_mode = PMM_PERPORT_MODE,
+ .init = trizeps4_ohci_init,
+ .exit = trizeps4_ohci_exit,
+};
+
+static struct map_desc trizeps4_io_desc[] __initdata = {
+ { /* ConXS CFSR */
+ .virtual = TRIZEPS4_CFSR_VIRT,
+ .pfn = __phys_to_pfn(TRIZEPS4_CFSR_PHYS),
+ .length = 0x00001000,
+ .type = MT_DEVICE
+ },
+ { /* ConXS BCR */
+ .virtual = TRIZEPS4_BOCR_VIRT,
+ .pfn = __phys_to_pfn(TRIZEPS4_BOCR_PHYS),
+ .length = 0x00001000,
+ .type = MT_DEVICE
+ },
+ { /* ConXS IRCR */
+ .virtual = TRIZEPS4_IRCR_VIRT,
+ .pfn = __phys_to_pfn(TRIZEPS4_IRCR_PHYS),
+ .length = 0x00001000,
+ .type = MT_DEVICE
+ },
+ { /* ConXS DCR */
+ .virtual = TRIZEPS4_DICR_VIRT,
+ .pfn = __phys_to_pfn(TRIZEPS4_DICR_PHYS),
+ .length = 0x00001000,
+ .type = MT_DEVICE
+ },
+ { /* ConXS UPSR */
+ .virtual = TRIZEPS4_UPSR_VIRT,
+ .pfn = __phys_to_pfn(TRIZEPS4_UPSR_PHYS),
+ .length = 0x00001000,
+ .type = MT_DEVICE
+ }
+};
+
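+/* 640x480, 8bpp color dual-scan passive panel */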
+static struct pxafb_mach_info sharp_lcd __initdata = {
+ .pixclock = 78000,
+ .xres = 640,
+ .yres = 480,
+ .bpp = 8,
+ .hsync_len = 4,
+ .left_margin = 4,
+ .right_margin = 4,
+ .vsync_len = 2,
+ .upper_margin = 0,
+ .lower_margin = 0,
+ .sync = FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT,
+ .cmap_greyscale = 0,
+ .cmap_inverse = 0,
+ .cmap_static = 0,
+ .lccr0 = LCCR0_Color | LCCR0_Pas | LCCR0_Dual,
+ .lccr3 = 0x0340ff02,
+ .pxafb_backlight_power = board_backlight_power,
+};
+
+static void __init trizeps4_fixup(struct machine_desc *desc, struct tag *tags, char **cmdline, struct meminfo *mi)
+{
+}
+
+static void __init trizeps4_init(void)
+{
+ platform_add_devices(trizeps4_devices, ARRAY_SIZE(trizeps4_devices));
+
+ set_pxa_fb_info(&sharp_lcd);
+
+ pxa_set_mci_info(&trizeps4_mci_platform_data);
+ pxa_set_ficp_info(&trizeps4_ficp_platform_data);
+ pxa_set_ohci_info(&trizeps4_ohci_platform_data);
+}
+
+static void __init trizeps4_map_io(void)
+{
+ pxa_map_io();
+ iotable_init(trizeps4_io_desc, ARRAY_SIZE(trizeps4_io_desc));
+
+ /* for DiskOnChip */
+ pxa_gpio_mode(GPIO15_nCS_1_MD);
+
+ /* for off-module PIC on ConXS board */
+ pxa_gpio_mode(GPIO_PIC | GPIO_IN);
+
+ /* UCB1400 irq */
+ pxa_gpio_mode(GPIO_UCB1400 | GPIO_IN);
+
+ /* for DM9000 LAN */
+ pxa_gpio_mode(GPIO78_nCS_2_MD);
+ pxa_gpio_mode(GPIO_DM9000 | GPIO_IN);
+
+ /* for PCMCIA device */
+ pxa_gpio_mode(GPIO_PCD | GPIO_IN);
+ pxa_gpio_mode(GPIO_PRDY | GPIO_IN);
+
+ /* for I2C adapter */
+ pxa_gpio_mode(GPIO117_I2CSCL_MD);
+ pxa_gpio_mode(GPIO118_I2CSDA_MD);
+
+ /* MMC_DET, see above */
+ pxa_gpio_mode(GPIO_MMC_DET | GPIO_IN);
+
+ /* what's that for ??? */
+ pxa_gpio_mode(GPIO79_nCS_3_MD);
+
+ pxa_gpio_mode( GPIO_SYS_BUSY_LED | GPIO_OUT); /* LED1 */
+ pxa_gpio_mode( GPIO_HEARTBEAT_LED | GPIO_OUT); /* LED2 */
+
+#ifdef CONFIG_MACH_TRIZEPS4_CONXS
+#ifdef CONFIG_IDE_PXA_CF
+ /* if booting directly from CompactFlash, don't disable power */
+ trizeps_conxs_bcr = 0x0009;
+#else
+ /* this is the reset value */
+ trizeps_conxs_bcr = 0x00A0;
+#endif
+ ConXS_BCR = trizeps_conxs_bcr;
+#endif
+
+ PWER = 0x00000002;
+ PFER = 0x00000000;
+ PRER = 0x00000002;
+ PGSR0 = 0x0158C000;
+ PGSR1 = 0x00FF0080;
+ PGSR2 = 0x0001C004;
+ /* Stop 3.6MHz and drive HIGH to PCMCIA and CS */
+ PCFR |= PCFR_OPDE;
+}
+
+MACHINE_START(TRIZEPS4, "Keith und Koep Trizeps IV module")
+ /* MAINTAINER("Jürgen Schindele") */
+ .phys_io = 0x40000000,
+ .io_pg_offst = (io_p2v(0x40000000) >> 18) & 0xfffc,
+ .boot_params = TRIZEPS4_SDRAM_BASE + 0x100,
+ .fixup = trizeps4_fixup,
+ .init_machine = trizeps4_init,
+ .map_io = trizeps4_map_io,
+ .init_irq = pxa_init_irq,
+ .timer = &pxa_timer,
+MACHINE_END
+
static struct irqaction realview_timer_irq = {
.name = "RealView Timer Tick",
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
.handler = realview_timer_interrupt,
};
static int iomd_request_dma(dmach_t channel, dma_t *dma)
{
return request_irq(dma->dma_irq, iomd_dma_handle,
- SA_INTERRUPT, dma->device_id, dma);
+ IRQF_DISABLED, dma->device_id, dma);
}
static void iomd_free_dma(dmach_t channel, dma_t *dma)
for (i = 0; stat != 0; i++, stat >>= 1) {
if (stat & 1) {
irqno = bast_pc104_irqs[i];
-
- desc_handle_irq(irqno, irq_desc + irqno, regs);
+ desc = irq_desc + irqno;
+ desc_handle_irq(irqno, desc, regs);
}
}
}
set_irq_chained_handler(IRQ_ISA, bast_irq_pc104_demux);
- /* reigster our IRQs */
+ /* register our IRQs */
for (i = 0; i < 4; i++) {
unsigned int irqno = bast_pc104_irqs[i];
pr_debug("dma%d: %s : requesting irq %d\n",
channel, __FUNCTION__, chan->irq);
- err = request_irq(chan->irq, s3c2410_dma_irq, SA_INTERRUPT,
+ err = request_irq(chan->irq, s3c2410_dma_irq, IRQF_DISABLED,
client->name, (void *)chan);
if (err) {
#include <linux/sched.h>
#include <linux/init.h>
#include <linux/interrupt.h>
+#include <linux/irq.h>
#include <linux/err.h>
#include <linux/clk.h>
static struct irqaction s3c2410_timer_irq = {
.name = "S3C2410 Timer Tick",
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
.handler = s3c2410_timer_interrupt,
};
if (on) {
ret = request_irq(IRQ_USBOC, usb_simtec_ocirq,
- SA_INTERRUPT | SA_TRIGGER_RISING |
- SA_TRIGGER_FALLING,
+ IRQF_DISABLED | IRQF_TRIGGER_RISING |
+ IRQF_TRIGGER_FALLING,
"USB Over-current", info);
if (ret != 0) {
printk(KERN_ERR "failed to request usb oc irq\n");
#include <linux/kernel.h>
#include <linux/tty.h>
#include <linux/platform_device.h>
+#include <linux/irq.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>
}
/* Register interrupt handler. */
- if ((err = request_irq(COLLIE_IRQ_GPIO_AC_IN, sharpsl_ac_isr, SA_INTERRUPT,
+ if ((err = request_irq(COLLIE_IRQ_GPIO_AC_IN, sharpsl_ac_isr, IRQF_DISABLED,
"ACIN", sharpsl_ac_isr))) {
printk("Could not get irq %d.\n", COLLIE_IRQ_GPIO_AC_IN);
return;
}
- if ((err = request_irq(COLLIE_IRQ_GPIO_CO, sharpsl_chrg_full_isr, SA_INTERRUPT,
+ if ((err = request_irq(COLLIE_IRQ_GPIO_CO, sharpsl_chrg_full_isr, IRQF_DISABLED,
"CO", sharpsl_chrg_full_isr))) {
free_irq(COLLIE_IRQ_GPIO_AC_IN, sharpsl_ac_isr);
printk("Could not get irq %d.\n", COLLIE_IRQ_GPIO_CO);
* SDRAM reads (rev A0, B0, B1)
*
* We ignore rev. A0 and B0 devices; I don't think they're worth supporting.
+ *
+ * The SDRAM type can be passed on the command line as cpu_sa1110.sdram=type
*/
+#include <linux/moduleparam.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/sched.h>
static struct cpufreq_driver sa1110_driver;
struct sdram_params {
+ const char name[16];
u_char rows; /* bits */
u_char cas_latency; /* cycles */
u_char tck; /* clock cycle time (ns) */
u_int mdcas[3];
};
-static struct sdram_params tc59sm716_cl2_params __initdata = {
- .rows = 12,
- .tck = 10,
- .trcd = 20,
- .trp = 20,
- .twr = 10,
- .refresh = 64000,
- .cas_latency = 2,
-};
-
-static struct sdram_params tc59sm716_cl3_params __initdata = {
- .rows = 12,
- .tck = 8,
- .trcd = 20,
- .trp = 20,
- .twr = 8,
- .refresh = 64000,
- .cas_latency = 3,
-};
-
-static struct sdram_params samsung_k4s641632d_tc75 __initdata = {
- .rows = 14,
- .tck = 9,
- .trcd = 27,
- .trp = 20,
- .twr = 9,
- .refresh = 64000,
- .cas_latency = 3,
-};
-
-static struct sdram_params samsung_km416s4030ct __initdata = {
- .rows = 13,
- .tck = 8,
- .trcd = 24, /* 3 CLKs */
- .trp = 24, /* 3 CLKs */
- .twr = 16, /* Trdl: 2 CLKs */
- .refresh = 64000,
- .cas_latency = 3,
-};
-
-static struct sdram_params wbond_w982516ah75l_cl3_params __initdata = {
- .rows = 16,
- .tck = 8,
- .trcd = 20,
- .trp = 20,
- .twr = 8,
- .refresh = 64000,
- .cas_latency = 3,
+static struct sdram_params sdram_tbl[] __initdata = {
+ { /* Toshiba TC59SM716 CL2 */
+ .name = "TC59SM716-CL2",
+ .rows = 12,
+ .tck = 10,
+ .trcd = 20,
+ .trp = 20,
+ .twr = 10,
+ .refresh = 64000,
+ .cas_latency = 2,
+ }, { /* Toshiba TC59SM716 CL3 */
+ .name = "TC59SM716-CL3",
+ .rows = 12,
+ .tck = 8,
+ .trcd = 20,
+ .trp = 20,
+ .twr = 8,
+ .refresh = 64000,
+ .cas_latency = 3,
+ }, { /* Samsung K4S641632D TC75 */
+ .name = "K4S641632D",
+ .rows = 14,
+ .tck = 9,
+ .trcd = 27,
+ .trp = 20,
+ .twr = 9,
+ .refresh = 64000,
+ .cas_latency = 3,
+ }, { /* Samsung KM416S4030CT */
+ .name = "KM416S4030CT",
+ .rows = 13,
+ .tck = 8,
+ .trcd = 24, /* 3 CLKs */
+ .trp = 24, /* 3 CLKs */
+ .twr = 16, /* Trdl: 2 CLKs */
+ .refresh = 64000,
+ .cas_latency = 3,
+ }, { /* Winbond W982516AH75L CL3 */
+ .name = "W982516AH75L",
+ .rows = 16,
+ .tck = 8,
+ .trcd = 20,
+ .trp = 20,
+ .twr = 8,
+ .refresh = 64000,
+ .cas_latency = 3,
+ },
};
static struct sdram_params sdram_params;
.name = "sa1110",
};
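+/* Look up an SDRAM parameter set by name in the table above */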
+static struct sdram_params *sa1110_find_sdram(const char *name)
+{
+ struct sdram_params *sdram;
+
+ for (sdram = sdram_tbl; sdram < sdram_tbl + ARRAY_SIZE(sdram_tbl); sdram++)
+ if (strcmp(name, sdram->name) == 0)
+ return sdram;
+
+ return NULL;
+}
+
+static char sdram_name[16];
+
static int __init sa1110_clk_init(void)
{
- struct sdram_params *sdram = NULL;
+ struct sdram_params *sdram;
+ const char *name = sdram_name;
- if (machine_is_assabet())
- sdram = &tc59sm716_cl3_params;
+ if (!name[0]) {
+ if (machine_is_assabet())
+ name = "TC59SM716-CL3";
- if (machine_is_pt_system3())
- sdram = &samsung_k4s641632d_tc75;
+ if (machine_is_pt_system3())
+ name = "K4S641632D";
- if (machine_is_h3100())
- sdram = &samsung_km416s4030ct;
+ if (machine_is_h3100())
+ name = "KM416S4030CT";
+ }
+ sdram = sa1110_find_sdram(name);
if (sdram) {
printk(KERN_DEBUG "SDRAM: tck: %d trcd: %d trp: %d"
" twr: %d refresh: %d cas_latency: %d\n",
return 0;
}
+module_param_string(sdram, sdram_name, sizeof(sdram_name), 0);
arch_initcall(sa1110_clk_init);
i = dma - dma_chan;
regs = (dma_regs_t *)&DDAR(i);
- err = request_irq(IRQ_DMA0 + i, dma_irq_handler, SA_INTERRUPT,
+ err = request_irq(IRQ_DMA0 + i, dma_irq_handler, IRQF_DISABLED,
device_id, regs);
if (err) {
printk(KERN_ERR
static struct irqaction h3800_irq = {
.name = "h3800_asic",
.handler = h3800_IRQ_demux,
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
};
u32 kpio_int_shadow = 0;
}
#endif
set_irq_type(IRQ_GPIO_H3800_ASIC, IRQT_RISING);
- set_irq_chained_handler(IRQ_GPIO_H3800_ASIC, &h3800_IRQ_demux);
+ set_irq_chained_handler(IRQ_GPIO_H3800_ASIC, h3800_IRQ_demux);
}
*/
#include <linux/init.h>
#include <linux/module.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
#include <linux/ioport.h>
#include <linux/ptrace.h>
#include <linux/sysdev.h>
#include <asm/hardware.h>
-#include <asm/irq.h>
#include <asm/mach/irq.h>
#include "generic.h"
#include <linux/tty.h>
#include <linux/ioport.h>
#include <linux/platform_device.h>
+#include <linux/irq.h>
#include <linux/mtd/partitions.h>
#include <linux/init.h>
#include <linux/errno.h>
#include <linux/interrupt.h>
+#include <linux/irq.h>
#include <linux/timex.h>
#include <linux/signal.h>
static struct irqaction sa1100_timer_irq = {
.name = "SA11xx Timer Tick",
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
.handler = sa1100_timer_interrupt,
};
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/interrupt.h>
+#include <linux/irq.h>
#include <linux/sched.h>
#include <linux/serial_8250.h>
static struct irqaction shark_timer_irq = {
.name = "Shark Timer Tick",
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
.handler = shark_timer_interrupt,
};
static struct irqaction versatile_timer_irq = {
.name = "Versatile Timer Tick",
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
.handler = versatile_timer_interrupt,
};
# ARM926T
config CPU_ARM926T
bool "Support ARM926T processor"
- depends on ARCH_INTEGRATOR || ARCH_VERSATILE_PB || MACH_VERSATILE_AB || ARCH_OMAP730 || ARCH_OMAP16XX || MACH_REALVIEW_EB || ARCH_PNX4008 || ARCH_NETX || CPU_S3C2412
- default y if ARCH_VERSATILE_PB || MACH_VERSATILE_AB || ARCH_OMAP730 || ARCH_OMAP16XX || ARCH_PNX4008 || ARCH_NETX || CPU_S3C2412
+ depends on ARCH_INTEGRATOR || ARCH_VERSATILE_PB || MACH_VERSATILE_AB || ARCH_OMAP730 || ARCH_OMAP16XX || MACH_REALVIEW_EB || ARCH_PNX4008 || ARCH_NETX || CPU_S3C2412 || ARCH_AT91SAM9260 || ARCH_AT91SAM9261
+ default y if ARCH_VERSATILE_PB || MACH_VERSATILE_AB || ARCH_OMAP730 || ARCH_OMAP16XX || ARCH_PNX4008 || ARCH_NETX || CPU_S3C2412 || ARCH_AT91SAM9260 || ARCH_AT91SAM9261
select CPU_32v5
select CPU_ABRT_EV5TJ
select CPU_CACHE_VIVT
#include <asm/cacheflush.h>
#include <asm/io.h>
+#include <asm/mmu_context.h>
+#include <asm/pgalloc.h>
#include <asm/tlbflush.h>
+#include <asm/sizes.h>
+
+/*
+ * Used by ioremap() and iounmap() code to mark (super)section-mapped
+ * I/O regions in vm_struct->flags field.
+ */
+#define VM_ARM_SECTION_MAPPING 0x80000000
static inline void
remap_area_pte(pte_t * pte, unsigned long address, unsigned long size,
dir++;
} while (address && (address < end));
- flush_cache_vmap(start, end);
return err;
}
+
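+/*
+ * The vmalloc-area pgd entries of this mm may be stale; copy them from
+ * init_mm and repeat until the kernel mapping sequence number is stable.
+ */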
+void __check_kvm_seq(struct mm_struct *mm)
+{
+ unsigned int seq;
+
+ do {
+ seq = init_mm.context.kvm_seq;
+ memcpy(pgd_offset(mm, VMALLOC_START),
+ pgd_offset_k(VMALLOC_START),
+ sizeof(pgd_t) * (pgd_index(VMALLOC_END) -
+ pgd_index(VMALLOC_START)));
+ mm->context.kvm_seq = seq;
+ } while (seq != init_mm.context.kvm_seq);
+}
+
+#ifndef CONFIG_SMP
+/*
+ * Section support is unsafe on SMP - If you iounmap and ioremap a region,
+ * the other CPUs will not see this change until their next context switch.
+ * Meanwhile, (eg) if an interrupt comes in on one of those other CPUs
+ * which requires the new ioremap'd region to be referenced, the CPU will
+ * reference the _old_ region.
+ *
+ * Note that get_vm_area() allocates a guard 4K page, so we need to mask
+ * the size back to 1MB aligned or we will overflow in the loop below.
+ */
+static void unmap_area_sections(unsigned long virt, unsigned long size)
+{
+ unsigned long addr = virt, end = virt + (size & ~SZ_1M);
+ pgd_t *pgd;
+
+ flush_cache_vunmap(addr, end);
+ pgd = pgd_offset_k(addr);
+ do {
+ pmd_t pmd, *pmdp = pmd_offset(pgd, addr);
+
+ pmd = *pmdp;
+ if (!pmd_none(pmd)) {
+ /*
+ * Clear the PMD from the page table, and
+ * increment the kvm sequence so others
+ * notice this change.
+ *
+ * Note: this is still racy on SMP machines.
+ */
+ pmd_clear(pmdp);
+ init_mm.context.kvm_seq++;
+
+ /*
+ * Free the page table, if there was one.
+ */
+ if ((pmd_val(pmd) & PMD_TYPE_MASK) == PMD_TYPE_TABLE)
+ pte_free_kernel(pmd_page_kernel(pmd));
+ }
+
+ addr += PGDIR_SIZE;
+ pgd++;
+ } while (addr < end);
+
+ /*
+ * Ensure that the active_mm is up to date - we want to
+ * catch any use-after-iounmap cases.
+ */
+ if (current->active_mm->context.kvm_seq != init_mm.context.kvm_seq)
+ __check_kvm_seq(current->active_mm);
+
+ flush_tlb_kernel_range(virt, end);
+}
+
+static int
+remap_area_sections(unsigned long virt, unsigned long pfn,
+ unsigned long size, unsigned long flags)
+{
+ unsigned long prot, addr = virt, end = virt + size;
+ pgd_t *pgd;
+
+ /*
+ * Remove and free any PTE-based mapping, and
+ * sync the current kernel mapping.
+ */
+ unmap_area_sections(virt, size);
+
+ prot = PMD_TYPE_SECT | PMD_SECT_AP_WRITE | PMD_DOMAIN(DOMAIN_IO) |
+ (flags & (L_PTE_CACHEABLE | L_PTE_BUFFERABLE));
+
+ /*
+ * ARMv6 and above need XN set to prevent speculative prefetches
+ * hitting IO.
+ */
+ if (cpu_architecture() >= CPU_ARCH_ARMv6)
+ prot |= PMD_SECT_XN;
+
+ pgd = pgd_offset_k(addr);
+ do {
+ pmd_t *pmd = pmd_offset(pgd, addr);
+
+ pmd[0] = __pmd(__pfn_to_phys(pfn) | prot);
+ pfn += SZ_1M >> PAGE_SHIFT;
+ pmd[1] = __pmd(__pfn_to_phys(pfn) | prot);
+ pfn += SZ_1M >> PAGE_SHIFT;
+ flush_pmd_entry(pmd);
+
+ addr += PGDIR_SIZE;
+ pgd++;
+ } while (addr < end);
+
+ return 0;
+}
+
+static int
+remap_area_supersections(unsigned long virt, unsigned long pfn,
+ unsigned long size, unsigned long flags)
+{
+ unsigned long prot, addr = virt, end = virt + size;
+ pgd_t *pgd;
+
+ /*
+ * Remove and free any PTE-based mapping, and
+ * sync the current kernel mapping.
+ */
+ unmap_area_sections(virt, size);
+
+ prot = PMD_TYPE_SECT | PMD_SECT_SUPER | PMD_SECT_AP_WRITE |
+ PMD_DOMAIN(DOMAIN_IO) |
+ (flags & (L_PTE_CACHEABLE | L_PTE_BUFFERABLE));
+
+ /*
+ * ARMv6 and above need XN set to prevent speculative prefetches
+ * hitting IO.
+ */
+ if (cpu_architecture() >= CPU_ARCH_ARMv6)
+ prot |= PMD_SECT_XN;
+
+ pgd = pgd_offset_k(virt);
+ do {
+ unsigned long super_pmd_val, i;
+
+ super_pmd_val = __pfn_to_phys(pfn) | prot;
+ super_pmd_val |= ((pfn >> (32 - PAGE_SHIFT)) & 0xf) << 20;
+
+ for (i = 0; i < 8; i++) {
+ pmd_t *pmd = pmd_offset(pgd, addr);
+
+ pmd[0] = __pmd(super_pmd_val);
+ pmd[1] = __pmd(super_pmd_val);
+ flush_pmd_entry(pmd);
+
+ addr += PGDIR_SIZE;
+ pgd++;
+ }
+
+ pfn += SUPERSECTION_SIZE >> PAGE_SHIFT;
+ } while (addr < end);
+
+ return 0;
+}
+#endif
+
+
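The supersection path above packs physical address bits [35:32] into descriptor bits [23:20] (the ((pfn >> (32 - PAGE_SHIFT)) & 0xf) << 20 term) and repeats the descriptor across the first-level entries covering 16MB. A small stand-alone sketch of just that address encoding, assuming 4K pages; the example physical address is arbitrary and the function name is illustrative only:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT	12

/* Illustrative only: build the address bits of a supersection descriptor
 * from a pfn, mirroring the arithmetic in remap_area_supersections().
 * Protection bits are omitted.
 */
static uint32_t supersection_addr_bits(uint64_t pfn)
{
	uint32_t val;

	val  = (uint32_t)(pfn << PAGE_SHIFT);			/* low 32 bits */
	val |= ((pfn >> (32 - PAGE_SHIFT)) & 0xf) << 20;	/* phys [35:32] */
	return val;
}

int main(void)
{
	/* pfn for a physical address beyond 4GB: 0x2_4000_0000 */
	uint64_t pfn = 0x240000000ULL >> PAGE_SHIFT;

	printf("descriptor address bits: 0x%08x\n",
	       (unsigned)supersection_addr_bits(pfn));
	return 0;
}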
/*
* Remap an arbitrary physical address space into the kernel virtual
* address space. Needed when the kernel wants to access high addresses
__ioremap_pfn(unsigned long pfn, unsigned long offset, size_t size,
unsigned long flags)
{
+ int err;
unsigned long addr;
struct vm_struct * area;
+ /*
+ * High mappings must be supersection aligned
+ */
+ if (pfn >= 0x100000 && (__pfn_to_phys(pfn) & ~SUPERSECTION_MASK))
+ return NULL;
+
area = get_vm_area(size, VM_IOREMAP);
if (!area)
return NULL;
addr = (unsigned long)area->addr;
- if (remap_area_pages(addr, pfn, size, flags)) {
+
+#ifndef CONFIG_SMP
+ if ((((cpu_architecture() >= CPU_ARCH_ARMv6) && (get_cr() & CR_XP)) ||
+ cpu_is_xsc3()) &&
+ !((__pfn_to_phys(pfn) | size | addr) & ~SUPERSECTION_MASK)) {
+ area->flags |= VM_ARM_SECTION_MAPPING;
+ err = remap_area_supersections(addr, pfn, size, flags);
+ } else if (!((__pfn_to_phys(pfn) | size | addr) & ~PMD_MASK)) {
+ area->flags |= VM_ARM_SECTION_MAPPING;
+ err = remap_area_sections(addr, pfn, size, flags);
+ } else
+#endif
+ err = remap_area_pages(addr, pfn, size, flags);
+
+ if (err) {
vunmap((void *)addr);
return NULL;
}
- return (void __iomem *) (offset + (char *)addr);
+
+ flush_cache_vmap(addr, addr + size);
+ return (void __iomem *) (offset + addr);
}
EXPORT_SYMBOL(__ioremap_pfn);
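With the change above, an ordinary ioremap() call on a non-SMP kernel is transparently backed by a section or supersection mapping whenever the physical base, size and allocated virtual address are suitably aligned; drivers need no changes. A minimal usage sketch, where SOME_DEV_PHYS and the 1MB window are assumptions, not values taken from this patch:

/* Illustrative only: SOME_DEV_PHYS is an assumed constant. With an
 * SZ_1M-aligned base and size on a !SMP kernel, this request may now
 * be satisfied by a single section mapping instead of page mappings.
 */
static void __iomem *some_dev_regs;

static int __init some_dev_map(void)
{
	some_dev_regs = ioremap(SOME_DEV_PHYS, SZ_1M);
	if (!some_dev_regs)
		return -ENOMEM;
	return 0;
}

static void some_dev_unmap(void)
{
	iounmap(some_dev_regs);		/* tears down the section mapping */
}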
void __iounmap(void __iomem *addr)
{
- vunmap((void *)(PAGE_MASK & (unsigned long)addr));
+ struct vm_struct **p, *tmp;
+ unsigned int section_mapping = 0;
+
+ addr = (void __iomem *)(PAGE_MASK & (unsigned long)addr);
+
+#ifndef CONFIG_SMP
+ /*
+ * If this is a section based mapping we need to handle it
+	 * specially as the VM subsystem does not know how to handle
+	 * such a beast. We need the lock here because we need to clear
+ * all the mappings before the area can be reclaimed
+ * by someone else.
+ */
+ write_lock(&vmlist_lock);
+ for (p = &vmlist ; (tmp = *p) ; p = &tmp->next) {
+ if((tmp->flags & VM_IOREMAP) && (tmp->addr == addr)) {
+ if (tmp->flags & VM_ARM_SECTION_MAPPING) {
+ *p = tmp->next;
+ unmap_area_sections((unsigned long)tmp->addr,
+ tmp->size);
+ kfree(tmp);
+ section_mapping = 1;
+ }
+ break;
+ }
+ }
+ write_unlock(&vmlist_lock);
+#endif
+
+ if (!section_mapping)
+ vunmap(addr);
}
EXPORT_SYMBOL(__iounmap);
.prot_pte = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY |
L_PTE_WRITE,
.prot_l1 = PMD_TYPE_TABLE,
- .prot_sect = PMD_TYPE_SECT | PMD_SECT_UNCACHED |
+ .prot_sect = PMD_TYPE_SECT | PMD_BIT4 | PMD_SECT_UNCACHED |
PMD_SECT_AP_WRITE,
.domain = DOMAIN_IO,
},
[MT_CACHECLEAN] = {
- .prot_sect = PMD_TYPE_SECT,
+ .prot_sect = PMD_TYPE_SECT | PMD_BIT4,
.domain = DOMAIN_KERNEL,
},
[MT_MINICLEAN] = {
- .prot_sect = PMD_TYPE_SECT | PMD_SECT_MINICACHE,
+ .prot_sect = PMD_TYPE_SECT | PMD_BIT4 | PMD_SECT_MINICACHE,
.domain = DOMAIN_KERNEL,
},
[MT_LOW_VECTORS] = {
.domain = DOMAIN_USER,
},
[MT_MEMORY] = {
- .prot_sect = PMD_TYPE_SECT | PMD_SECT_AP_WRITE,
+ .prot_sect = PMD_TYPE_SECT | PMD_BIT4 | PMD_SECT_AP_WRITE,
.domain = DOMAIN_KERNEL,
},
[MT_ROM] = {
- .prot_sect = PMD_TYPE_SECT,
+ .prot_sect = PMD_TYPE_SECT | PMD_BIT4,
.domain = DOMAIN_KERNEL,
},
[MT_IXP2000_DEVICE] = { /* IXP2400 requires XCB=101 for on-chip I/O */
.prot_pte = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY |
L_PTE_WRITE,
.prot_l1 = PMD_TYPE_TABLE,
- .prot_sect = PMD_TYPE_SECT | PMD_SECT_UNCACHED |
+ .prot_sect = PMD_TYPE_SECT | PMD_BIT4 | PMD_SECT_UNCACHED |
PMD_SECT_AP_WRITE | PMD_SECT_BUFFERABLE |
PMD_SECT_TEX(1),
.domain = DOMAIN_IO,
},
[MT_NONSHARED_DEVICE] = {
.prot_l1 = PMD_TYPE_TABLE,
- .prot_sect = PMD_TYPE_SECT | PMD_SECT_NONSHARED_DEV |
+ .prot_sect = PMD_TYPE_SECT | PMD_BIT4 | PMD_SECT_NONSHARED_DEV |
PMD_SECT_AP_WRITE,
.domain = DOMAIN_IO,
}
ecc_mask = 0;
}
- if (cpu_arch <= CPU_ARCH_ARMv5TEJ && !cpu_is_xscale()) {
- for (i = 0; i < ARRAY_SIZE(mem_types); i++) {
+ /*
+ * Xscale must not have PMD bit 4 set for section mappings.
+ */
+ if (cpu_is_xscale())
+ for (i = 0; i < ARRAY_SIZE(mem_types); i++)
+ mem_types[i].prot_sect &= ~PMD_BIT4;
+
+ /*
+	 * On ARMv5 and lower, excluding Xscale, bit 4 must be set for
+ * page tables.
+ */
+ if (cpu_arch < CPU_ARCH_ARMv6 && !cpu_is_xscale())
+ for (i = 0; i < ARRAY_SIZE(mem_types); i++)
if (mem_types[i].prot_l1)
mem_types[i].prot_l1 |= PMD_BIT4;
- if (mem_types[i].prot_sect)
- mem_types[i].prot_sect |= PMD_BIT4;
- }
- }
cp = &cache_policies[cachepolicy];
kern_pgprot = user_pgprot = cp->pte;
* bit 4 becomes XN which we must clear for the
* kernel memory mapping.
*/
- mem_types[MT_MEMORY].prot_sect &= ~PMD_BIT4;
- mem_types[MT_ROM].prot_sect &= ~PMD_BIT4;
+ mem_types[MT_MEMORY].prot_sect &= ~PMD_SECT_XN;
+ mem_types[MT_ROM].prot_sect &= ~PMD_SECT_XN;
/*
* Mark cache clean areas and XIP ROM read only
#include <asm/procinfo.h>
#include <asm/ptrace.h>
+#include "proc-macros.S"
+
/*
* This is the maximum size of an area which will be invalidated
* using the single invalidate entry instructions. Anything larger
#ifdef CONFIG_MMU
mcr p15, 0, r0, c8, c7 @ invalidate I,D TLBs on v4
#endif
+
+ adr r5, arm1020_crval
+ ldmia r5, {r5, r6}
mrc p15, 0, r0, c1, c0 @ get control register v4
- ldr r5, arm1020_cr1_clear
bic r0, r0, r5
- ldr r5, arm1020_cr1_set
- orr r0, r0, r5
+ orr r0, r0, r6
#ifdef CONFIG_CPU_CACHE_ROUND_ROBIN
orr r0, r0, #0x4000 @ .R.. .... .... ....
#endif
* .RVI ZFRS BLDP WCAM
* .011 1001 ..11 0101
*/
- .type arm1020_cr1_clear, #object
- .type arm1020_cr1_set, #object
-arm1020_cr1_clear:
- .word 0x593f
-arm1020_cr1_set:
- .word 0x3935
+ .type arm1020_crval, #object
+arm1020_crval:
+ crval clear=0x0000593f, mmuset=0x00003935, ucset=0x00001930
__INITDATA
__arm1020_proc_info:
.long 0x4104a200 @ ARM 1020T (Architecture v5T)
.long 0xff0ffff0
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
.long PMD_TYPE_SECT | \
PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ
#include <asm/procinfo.h>
#include <asm/ptrace.h>
+#include "proc-macros.S"
+
/*
* This is the maximum size of an area which will be invalidated
* using the single invalidate entry instructions. Anything larger
#ifdef CONFIG_MMU
mcr p15, 0, r0, c8, c7 @ invalidate I,D TLBs on v4
#endif
+ adr r5, arm1020e_crval
+ ldmia r5, {r5, r6}
mrc p15, 0, r0, c1, c0 @ get control register v4
- ldr r5, arm1020e_cr1_clear
bic r0, r0, r5
- ldr r5, arm1020e_cr1_set
- orr r0, r0, r5
+ orr r0, r0, r6
#ifdef CONFIG_CPU_CACHE_ROUND_ROBIN
orr r0, r0, #0x4000 @ .R.. .... .... ....
#endif
* .RVI ZFRS BLDP WCAM
* .011 1001 ..11 0101
*/
- .type arm1020e_cr1_clear, #object
- .type arm1020e_cr1_set, #object
-arm1020e_cr1_clear:
- .word 0x5f3f
-arm1020e_cr1_set:
- .word 0x3935
+ .type arm1020e_crval, #object
+arm1020e_crval:
+ crval clear=0x00007f3f, mmuset=0x00003935, ucset=0x00001930
__INITDATA
.type cpu_arm1020e_name, #object
cpu_arm1020e_name:
- .ascii "ARM1020E"
-#ifndef CONFIG_CPU_ICACHE_DISABLE
- .ascii "i"
-#endif
-#ifndef CONFIG_CPU_DCACHE_DISABLE
- .ascii "d"
-#ifdef CONFIG_CPU_DCACHE_WRITETHROUGH
- .ascii "(wt)"
-#else
- .ascii "(wb)"
-#endif
-#endif
-#ifndef CONFIG_CPU_BPREDICT_DISABLE
- .ascii "B"
-#endif
-#ifdef CONFIG_CPU_CACHE_ROUND_ROBIN
- .ascii "RR"
-#endif
- .ascii "\0"
+ .asciz "ARM1020E"
.size cpu_arm1020e_name, . - cpu_arm1020e_name
.align
__arm1020e_proc_info:
.long 0x4105a200 @ ARM 1020TE (Architecture v5TE)
.long 0xff0ffff0
+ .long PMD_TYPE_SECT | \
+ PMD_BIT4 | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
.long PMD_TYPE_SECT | \
PMD_BIT4 | \
PMD_SECT_AP_WRITE | \
#include <asm/procinfo.h>
#include <asm/ptrace.h>
+#include "proc-macros.S"
+
/*
* This is the maximum size of an area which will be invalidated
* using the single invalidate entry instructions. Anything larger
#ifdef CONFIG_MMU
mcr p15, 0, r0, c8, c7 @ invalidate I,D TLBs on v4
#endif
+ adr r5, arm1022_crval
+ ldmia r5, {r5, r6}
mrc p15, 0, r0, c1, c0 @ get control register v4
- ldr r5, arm1022_cr1_clear
bic r0, r0, r5
- ldr r5, arm1022_cr1_set
- orr r0, r0, r5
+ orr r0, r0, r6
#ifdef CONFIG_CPU_CACHE_ROUND_ROBIN
orr r0, r0, #0x4000 @ .R..............
#endif
* .011 1001 ..11 0101
*
*/
- .type arm1022_cr1_clear, #object
- .type arm1022_cr1_set, #object
-arm1022_cr1_clear:
- .word 0x7f3f
-arm1022_cr1_set:
- .word 0x3935
+ .type arm1022_crval, #object
+arm1022_crval:
+ crval clear=0x00007f3f, mmuset=0x00003935, ucset=0x00001930
__INITDATA
.type cpu_arm1022_name, #object
cpu_arm1022_name:
- .ascii "arm1022"
-#ifndef CONFIG_CPU_ICACHE_DISABLE
- .ascii "i"
-#endif
-#ifndef CONFIG_CPU_DCACHE_DISABLE
- .ascii "d"
-#ifdef CONFIG_CPU_DCACHE_WRITETHROUGH
- .ascii "(wt)"
-#else
- .ascii "(wb)"
-#endif
-#endif
-#ifndef CONFIG_CPU_BPREDICT_DISABLE
- .ascii "B"
-#endif
-#ifdef CONFIG_CPU_CACHE_ROUND_ROBIN
- .ascii "RR"
-#endif
- .ascii "\0"
+ .asciz "ARM1022"
.size cpu_arm1022_name, . - cpu_arm1022_name
.align
__arm1022_proc_info:
.long 0x4105a220 @ ARM 1022E (v5TE)
.long 0xff0ffff0
+ .long PMD_TYPE_SECT | \
+ PMD_BIT4 | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
.long PMD_TYPE_SECT | \
PMD_BIT4 | \
PMD_SECT_AP_WRITE | \
#include <asm/procinfo.h>
#include <asm/ptrace.h>
+#include "proc-macros.S"
+
/*
* This is the maximum size of an area which will be invalidated
* using the single invalidate entry instructions. Anything larger
mov r0, #4 @ explicitly disable writeback
mcr p15, 7, r0, c15, c0, 0
#endif
+ adr r5, arm1026_crval
+ ldmia r5, {r5, r6}
mrc p15, 0, r0, c1, c0 @ get control register v4
- ldr r5, arm1026_cr1_clear
bic r0, r0, r5
- ldr r5, arm1026_cr1_set
- orr r0, r0, r5
+ orr r0, r0, r6
#ifdef CONFIG_CPU_CACHE_ROUND_ROBIN
orr r0, r0, #0x4000 @ .R.. .... .... ....
#endif
* .011 1001 ..11 0101
*
*/
- .type arm1026_cr1_clear, #object
- .type arm1026_cr1_set, #object
-arm1026_cr1_clear:
- .word 0x7f3f
-arm1026_cr1_set:
- .word 0x3935
+ .type arm1026_crval, #object
+arm1026_crval:
+ crval clear=0x00007f3f, mmuset=0x00003935, ucset=0x00001934
__INITDATA
.type cpu_arm1026_name, #object
cpu_arm1026_name:
- .ascii "ARM1026EJ-S"
-#ifndef CONFIG_CPU_ICACHE_DISABLE
- .ascii "i"
-#endif
-#ifndef CONFIG_CPU_DCACHE_DISABLE
- .ascii "d"
-#ifdef CONFIG_CPU_DCACHE_WRITETHROUGH
- .ascii "(wt)"
-#else
- .ascii "(wb)"
-#endif
-#endif
-#ifndef CONFIG_CPU_BPREDICT_DISABLE
- .ascii "B"
-#endif
-#ifdef CONFIG_CPU_CACHE_ROUND_ROBIN
- .ascii "RR"
-#endif
- .ascii "\0"
+ .asciz "ARM1026EJ-S"
.size cpu_arm1026_name, . - cpu_arm1026_name
.align
__arm1026_proc_info:
.long 0x4106a260 @ ARM 1026EJ-S (v5TEJ)
.long 0xff0ffff0
+ .long PMD_TYPE_SECT | \
+ PMD_BIT4 | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
.long PMD_TYPE_SECT | \
PMD_BIT4 | \
PMD_SECT_AP_WRITE | \
.long 0x41560600
.long 0xfffffff0
.long 0x00000c1e
+ .long PMD_TYPE_SECT | \
+ PMD_BIT4 | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __arm6_setup
.long cpu_arch_name
.long cpu_elf_name
.long 0x41560610
.long 0xfffffff0
.long 0x00000c1e
+ .long PMD_TYPE_SECT | \
+ PMD_BIT4 | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __arm6_setup
.long cpu_arch_name
.long cpu_elf_name
.long 0x41007000
.long 0xffffff00
.long 0x00000c1e
+ .long PMD_TYPE_SECT | \
+ PMD_BIT4 | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __arm7_setup
.long cpu_arch_name
.long cpu_elf_name
PMD_BIT4 | \
PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ
+ .long PMD_TYPE_SECT | \
+ PMD_BIT4 | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __arm7_setup
.long cpu_arch_name
.long cpu_elf_name
#include <asm/procinfo.h>
#include <asm/ptrace.h>
+#include "proc-macros.S"
+
/*
* Function: arm720_proc_init (void)
* : arm720_proc_fin (void)
#ifdef CONFIG_MMU
mcr p15, 0, r0, c8, c7, 0 @ flush TLB (v4)
#endif
+ adr r5, arm720_crval
+ ldmia r5, {r5, r6}
mrc p15, 0, r0, c1, c0 @ get control register
- ldr r5, arm720_cr1_clear
bic r0, r0, r5
- ldr r5, arm720_cr1_set
- orr r0, r0, r5
+ orr r0, r0, r6
mov pc, lr @ __ret (head.S)
.size __arm720_setup, . - __arm720_setup
* ..1. 1001 ..11 1101
*
*/
- .type arm720_cr1_clear, #object
- .type arm720_cr1_set, #object
-arm720_cr1_clear:
- .word 0x2f3f
-arm720_cr1_set:
- .word 0x213d
+ .type arm720_crval, #object
+arm720_crval:
+ crval clear=0x00002f3f, mmuset=0x0000213d, ucset=0x00000130
__INITDATA
PMD_BIT4 | \
PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ
+ .long PMD_TYPE_SECT | \
+ PMD_BIT4 | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __arm710_setup @ cpu_flush
.long cpu_arch_name @ arch_name
.long cpu_elf_name @ elf_name
PMD_BIT4 | \
PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ
+ .long PMD_TYPE_SECT | \
+ PMD_BIT4 | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __arm720_setup @ cpu_flush
.long cpu_arch_name @ arch_name
.long cpu_elf_name @ elf_name
#ifdef CONFIG_MMU
mcr p15, 0, r0, c8, c7 @ invalidate I,D TLBs on v4
#endif
+ adr r5, arm920_crval
+ ldmia r5, {r5, r6}
mrc p15, 0, r0, c1, c0 @ get control register v4
- ldr r5, arm920_cr1_clear
bic r0, r0, r5
- ldr r5, arm920_cr1_set
- orr r0, r0, r5
+ orr r0, r0, r6
mov pc, lr
.size __arm920_setup, . - __arm920_setup
* ..11 0001 ..11 0101
*
*/
- .type arm920_cr1_clear, #object
- .type arm920_cr1_set, #object
-arm920_cr1_clear:
- .word 0x3f3f
-arm920_cr1_set:
- .word 0x3135
+ .type arm920_crval, #object
+arm920_crval:
+ crval clear=0x00003f3f, mmuset=0x00003135, ucset=0x00001130
__INITDATA
.type cpu_arm920_name, #object
cpu_arm920_name:
- .ascii "ARM920T"
-#ifndef CONFIG_CPU_ICACHE_DISABLE
- .ascii "i"
-#endif
-#ifndef CONFIG_CPU_DCACHE_DISABLE
- .ascii "d"
-#ifdef CONFIG_CPU_DCACHE_WRITETHROUGH
- .ascii "(wt)"
-#else
- .ascii "(wb)"
-#endif
-#endif
- .ascii "\0"
+ .asciz "ARM920T"
.size cpu_arm920_name, . - cpu_arm920_name
.align
PMD_BIT4 | \
PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ
+ .long PMD_TYPE_SECT | \
+ PMD_BIT4 | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __arm920_setup
.long cpu_arch_name
.long cpu_elf_name
#ifdef CONFIG_MMU
mcr p15, 0, r0, c8, c7 @ invalidate I,D TLBs on v4
#endif
+ adr r5, arm922_crval
+ ldmia r5, {r5, r6}
mrc p15, 0, r0, c1, c0 @ get control register v4
- ldr r5, arm922_cr1_clear
bic r0, r0, r5
- ldr r5, arm922_cr1_set
- orr r0, r0, r5
+ orr r0, r0, r6
mov pc, lr
.size __arm922_setup, . - __arm922_setup
* ..11 0001 ..11 0101
*
*/
- .type arm922_cr1_clear, #object
- .type arm922_cr1_set, #object
-arm922_cr1_clear:
- .word 0x3f3f
-arm922_cr1_set:
- .word 0x3135
+ .type arm922_crval, #object
+arm922_crval:
+ crval clear=0x00003f3f, mmuset=0x00003135, ucset=0x00001130
__INITDATA
.type cpu_arm922_name, #object
cpu_arm922_name:
- .ascii "ARM922T"
-#ifndef CONFIG_CPU_ICACHE_DISABLE
- .ascii "i"
-#endif
-#ifndef CONFIG_CPU_DCACHE_DISABLE
- .ascii "d"
-#ifdef CONFIG_CPU_DCACHE_WRITETHROUGH
- .ascii "(wt)"
-#else
- .ascii "(wb)"
-#endif
-#endif
- .ascii "\0"
+ .asciz "ARM922T"
.size cpu_arm922_name, . - cpu_arm922_name
.align
PMD_BIT4 | \
PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ
+ .long PMD_TYPE_SECT | \
+ PMD_BIT4 | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __arm922_setup
.long cpu_arch_name
.long cpu_elf_name
mcr p15, 7, r0, c15, c0, 0
#endif
+ adr r5, arm925_crval
+ ldmia r5, {r5, r6}
mrc p15, 0, r0, c1, c0 @ get control register v4
- ldr r5, arm925_cr1_clear
bic r0, r0, r5
- ldr r5, arm925_cr1_set
- orr r0, r0, r5
+ orr r0, r0, r6
#ifdef CONFIG_CPU_CACHE_ROUND_ROBIN
orr r0, r0, #0x4000 @ .1.. .... .... ....
#endif
* .011 0001 ..11 1101
*
*/
- .type arm925_cr1_clear, #object
- .type arm925_cr1_set, #object
-arm925_cr1_clear:
- .word 0x7f3f
-arm925_cr1_set:
- .word 0x313d
+ .type arm925_crval, #object
+arm925_crval:
+ crval clear=0x00007f3f, mmuset=0x0000313d, ucset=0x00001130
__INITDATA
.type cpu_arm925_name, #object
cpu_arm925_name:
- .ascii "ARM925T"
-#ifndef CONFIG_CPU_ICACHE_DISABLE
- .ascii "i"
-#endif
-#ifndef CONFIG_CPU_DCACHE_DISABLE
- .ascii "d"
-#ifdef CONFIG_CPU_DCACHE_WRITETHROUGH
- .ascii "(wt)"
-#else
- .ascii "(wb)"
-#endif
-#ifdef CONFIG_CPU_CACHE_ROUND_ROBIN
- .ascii "RR"
-#endif
-#endif
- .ascii "\0"
+ .asciz "ARM925T"
.size cpu_arm925_name, . - cpu_arm925_name
.align
__arm925_proc_info:
.long 0x54029250
.long 0xfffffff0
+ .long PMD_TYPE_SECT | \
+ PMD_BIT4 | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
.long PMD_TYPE_SECT | \
PMD_BIT4 | \
PMD_SECT_AP_WRITE | \
__arm915_proc_info:
.long 0x54029150
.long 0xfffffff0
+ .long PMD_TYPE_SECT | \
+ PMD_BIT4 | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
.long PMD_TYPE_SECT | \
PMD_BIT4 | \
PMD_SECT_AP_WRITE | \
mcr p15, 7, r0, c15, c0, 0
#endif
+ adr r5, arm926_crval
+ ldmia r5, {r5, r6}
mrc p15, 0, r0, c1, c0 @ get control register v4
- ldr r5, arm926_cr1_clear
bic r0, r0, r5
- ldr r5, arm926_cr1_set
- orr r0, r0, r5
+ orr r0, r0, r6
#ifdef CONFIG_CPU_CACHE_ROUND_ROBIN
orr r0, r0, #0x4000 @ .1.. .... .... ....
#endif
* .011 0001 ..11 0101
*
*/
- .type arm926_cr1_clear, #object
- .type arm926_cr1_set, #object
-arm926_cr1_clear:
- .word 0x7f3f
-arm926_cr1_set:
- .word 0x3135
+ .type arm926_crval, #object
+arm926_crval:
+ crval clear=0x00007f3f, mmuset=0x00003135, ucset=0x00001134
__INITDATA
.type cpu_arm926_name, #object
cpu_arm926_name:
- .ascii "ARM926EJ-S"
-#ifndef CONFIG_CPU_ICACHE_DISABLE
- .ascii "i"
-#endif
-#ifndef CONFIG_CPU_DCACHE_DISABLE
- .ascii "d"
-#ifdef CONFIG_CPU_DCACHE_WRITETHROUGH
- .ascii "(wt)"
-#else
- .ascii "(wb)"
-#endif
-#ifdef CONFIG_CPU_CACHE_ROUND_ROBIN
- .ascii "RR"
-#endif
-#endif
- .ascii "\0"
+ .asciz "ARM926EJ-S"
.size cpu_arm926_name, . - cpu_arm926_name
.align
PMD_BIT4 | \
PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ
+ .long PMD_TYPE_SECT | \
+ PMD_BIT4 | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __arm926_setup
.long cpu_arch_name
.long cpu_elf_name
.macro asid, rd, rn
and \rd, \rn, #255
.endm
+
+ .macro crval, clear, mmuset, ucset
+#ifdef CONFIG_MMU
+ .word \clear
+ .word \mmuset
+#else
+ .word \clear
+ .word \ucset
+#endif
+ .endm
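The crval macro above replaces the per-CPU *_cr1_clear/*_cr1_set word pairs with one record: it always emits the clear mask first, then either the MMU or the uClinux (no-MMU) set mask depending on CONFIG_MMU, so the setup code can load both words with a single ldmia. The equivalent selection logic as a stand-alone C sketch; make_crval() is an illustrative name, not kernel API:

#include <stdio.h>
#include <stdint.h>

/* Illustrative only: mirrors what the crval assembler macro emits -
 * the clear mask plus whichever set mask matches the MMU configuration.
 */
struct crval {
	uint32_t clear;
	uint32_t set;
};

static struct crval make_crval(uint32_t clear, uint32_t mmuset,
			       uint32_t ucset, int have_mmu)
{
	struct crval v = { .clear = clear, .set = have_mmu ? mmuset : ucset };
	return v;
}

int main(void)
{
	/* values from arm920_crval in this patch, MMU enabled */
	struct crval v = make_crval(0x00003f3f, 0x00003135, 0x00001130, 1);

	printf("clear=0x%08x set=0x%08x\n", (unsigned)v.clear, (unsigned)v.set);
	return 0;
}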
#include <asm/pgtable.h>
#include <asm/ptrace.h>
+#include "proc-macros.S"
+
/*
* the cache line size of the I and D cache
*/
#ifdef CONFIG_MMU
mcr p15, 0, r10, c8, c7 @ invalidate I,D TLBs on v4
#endif
+
+ adr r5, sa110_crval
+ ldmia r5, {r5, r6}
mrc p15, 0, r0, c1, c0 @ get control register v4
- ldr r5, sa110_cr1_clear
bic r0, r0, r5
- ldr r5, sa110_cr1_set
- orr r0, r0, r5
+ orr r0, r0, r6
mov pc, lr
.size __sa110_setup, . - __sa110_setup
* ..01 0001 ..11 1101
*
*/
- .type sa110_cr1_clear, #object
- .type sa110_cr1_set, #object
-sa110_cr1_clear:
- .word 0x3f3f
-sa110_cr1_set:
- .word 0x113d
+ .type sa110_crval, #object
+sa110_crval:
+ crval clear=0x00003f3f, mmuset=0x0000113d, ucset=0x00001130
__INITDATA
PMD_SECT_CACHEABLE | \
PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __sa110_setup
.long cpu_arch_name
.long cpu_elf_name
#include <asm/pgtable-hwdef.h>
#include <asm/pgtable.h>
+#include "proc-macros.S"
+
/*
* the cache line size of the I and D cache
*/
#ifdef CONFIG_MMU
mcr p15, 0, r0, c8, c7 @ invalidate I,D TLBs on v4
#endif
+ adr r5, sa1100_crval
+ ldmia r5, {r5, r6}
mrc p15, 0, r0, c1, c0 @ get control register v4
- ldr r5, sa1100_cr1_clear
bic r0, r0, r5
- ldr r5, sa1100_cr1_set
- orr r0, r0, r5
+ orr r0, r0, r6
mov pc, lr
.size __sa1100_setup, . - __sa1100_setup
* ..11 0001 ..11 1101
*
*/
- .type sa1100_cr1_clear, #object
- .type sa1100_cr1_set, #object
-sa1100_cr1_clear:
- .word 0x3f3f
-sa1100_cr1_set:
- .word 0x313d
+ .type sa1100_crval, #object
+sa1100_crval:
+ crval clear=0x00003f3f, mmuset=0x0000313d, ucset=0x00001130
__INITDATA
PMD_SECT_CACHEABLE | \
PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __sa1100_setup
.long cpu_arch_name
.long cpu_elf_name
PMD_SECT_CACHEABLE | \
PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __sa1100_setup
.long cpu_arch_name
.long cpu_elf_name
orr r0, r0, #(0xf << 20)
mcr p15, 0, r0, c1, c0, 2 @ Enable full access to VFP
#endif
+ adr r5, v6_crval
+ ldmia r5, {r5, r6}
mrc p15, 0, r0, c1, c0, 0 @ read control register
- ldr r5, v6_cr1_clear @ get mask for bits to clear
	bic	r0, r0, r5			@ clear the bits
- ldr r5, v6_cr1_set @ get mask for bits to set
- orr r0, r0, r5 @ set them
+ orr r0, r0, r6 @ set them
mov pc, lr @ return to head.S:__ret
/*
* rrrr rrrx xxx0 0101 xxxx xxxx x111 xxxx < forced
* 0 110 0011 1.00 .111 1101 < we want
*/
- .type v6_cr1_clear, #object
- .type v6_cr1_set, #object
-v6_cr1_clear:
- .word 0x01e0fb7f
-v6_cr1_set:
- .word 0x00c0387d
+ .type v6_crval, #object
+v6_crval:
+ crval clear=0x01e0fb7f, mmuset=0x00c0387d, ucset=0x00c0187c
.type v6_processor_functions, #object
ENTRY(v6_processor_functions)
PMD_SECT_CACHEABLE | \
PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_XN | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __v6_setup
.long cpu_arch_name
.long cpu_elf_name
orr r0, r0, #(1 << 10) @ enable L2 for LLR cache
#endif
mcr p15, 0, r0, c1, c0, 1 @ set auxiliary control reg
+
+ adr r5, xsc3_crval
+ ldmia r5, {r5, r6}
mrc p15, 0, r0, c1, c0, 0 @ get control register
- bic r0, r0, #0x0002 @ .... .... .... ..A.
- orr r0, r0, #0x0005 @ .... .... .... .C.M
+ bic r0, r0, r5 @ .... .... .... ..A.
+ orr r0, r0, r6 @ .... .... .... .C.M
#if BTB_ENABLE
- bic r0, r0, #0x0200 @ .... ..R. .... ....
- orr r0, r0, #0x3900 @ ..VI Z..S .... ....
-#else
- bic r0, r0, #0x0a00 @ .... Z.R. .... ....
- orr r0, r0, #0x3100 @ ..VI ...S .... ....
+ orr r0, r0, #0x00000800 @ ..VI Z..S .... ....
#endif
#if L2_CACHE_ENABLE
- orr r0, r0, #0x4000000 @ L2 enable
+ orr r0, r0, #0x04000000 @ L2 enable
#endif
mov pc, lr
.size __xsc3_setup, . - __xsc3_setup
+ .type xsc3_crval, #object
+xsc3_crval:
+ crval clear=0x04003b02, mmuset=0x00003105, ucset=0x00001100
+
__INITDATA
/*
__xsc3_proc_info:
.long 0x69056000
.long 0xffffe000
- .long 0x00000c0e
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_BUFFERABLE | \
+ PMD_SECT_CACHEABLE | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __xsc3_setup
.long cpu_arch_name
.long cpu_elf_name
* to what would be the reset vector.
*
* loc: location to jump to for soft reset
+ *
+ * Beware PXA270 erratum E7.
*/
.align 5
ENTRY(cpu_xscale_reset)
mov r1, #PSR_F_BIT|PSR_I_BIT|SVC_MODE
msr cpsr_c, r1 @ reset CPSR
+ mcr p15, 0, r1, c10, c4, 1 @ unlock I-TLB
+ mcr p15, 0, r1, c8, c5, 0 @ invalidate I-TLB
mrc p15, 0, r1, c1, c0, 0 @ ctrl register
bic r1, r1, #0x0086 @ ........B....CA.
bic r1, r1, #0x3900 @ ..VIZ..S........
+ sub pc, pc, #4 @ flush pipeline
+ @ *** cache line aligned ***
mcr p15, 0, r1, c1, c0, 0 @ ctrl register
- mcr p15, 0, ip, c7, c7, 0 @ invalidate I,D caches & BTB
bic r1, r1, #0x0001 @ ...............M
+ mcr p15, 0, ip, c7, c7, 0 @ invalidate I,D caches & BTB
mcr p15, 0, r1, c1, c0, 0 @ ctrl register
@ CAUTION: MMU turned off from this point. We count on the pipeline
@ already containing those two last instructions to survive.
orr r0, r0, #1 << 6 @ cp6 for IOP3xx and Bulverde
	orr	r0, r0, #1 << 13		@ It's undefined whether this
mcr p15, 0, r0, c15, c1, 0 @ affects USR or SVC modes
+
+ adr r5, xscale_crval
+ ldmia r5, {r5, r6}
mrc p15, 0, r0, c1, c0, 0 @ get control register
- ldr r5, xscale_cr1_clear
bic r0, r0, r5
- ldr r5, xscale_cr1_set
- orr r0, r0, r5
+ orr r0, r0, r6
mov pc, lr
.size __xscale_setup, . - __xscale_setup
* ..11 1.01 .... .101
*
*/
- .type xscale_cr1_clear, #object
- .type xscale_cr1_set, #object
-xscale_cr1_clear:
- .word 0x3b07
-xscale_cr1_set:
- .word 0x3905
+ .type xscale_crval, #object
+xscale_crval:
+ crval clear=0x00003b07, mmuset=0x00003905, ucset=0x00001900
__INITDATA
PMD_SECT_CACHEABLE | \
PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __xscale_setup
.long cpu_arch_name
.long cpu_elf_name
PMD_SECT_CACHEABLE | \
PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __xscale_setup
.long cpu_arch_name
.long cpu_elf_name
PMD_SECT_CACHEABLE | \
PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __xscale_setup
.long cpu_arch_name
.long cpu_elf_name
PMD_SECT_CACHEABLE | \
PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __xscale_setup
.long cpu_arch_name
.long cpu_elf_name
PMD_SECT_CACHEABLE | \
PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __xscale_setup
.long cpu_arch_name
.long cpu_elf_name
PMD_SECT_CACHEABLE | \
PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __xscale_setup
.long cpu_arch_name
.long cpu_elf_name
PMD_SECT_CACHEABLE | \
PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __xscale_setup
.long cpu_arch_name
.long cpu_elf_name
PMD_SECT_CACHEABLE | \
PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __xscale_setup
.long cpu_arch_name
.long cpu_elf_name
__ixp46x_proc_info:
.long 0x69054200
.long 0xffffff00
- .long 0x00000c0e
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_BUFFERABLE | \
+ PMD_SECT_CACHEABLE | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __xscale_setup
.long cpu_arch_name
.long cpu_elf_name
PMD_SECT_CACHEABLE | \
PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __xscale_setup
.long cpu_arch_name
.long cpu_elf_name
PMD_SECT_CACHEABLE | \
PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __xscale_setup
.long cpu_arch_name
.long cpu_elf_name
int ret;
u32 pmnc = read_pmnc();
- ret = request_irq(XSCALE_PMU_IRQ, xscale_pmu_interrupt, SA_INTERRUPT,
+ ret = request_irq(XSCALE_PMU_IRQ, xscale_pmu_interrupt, IRQF_DISABLED,
"XScale PMU", (void *)results);
if (ret < 0) {
config OMAP_DM_TIMER
bool "Use dual-mode timer"
- depends on ARCH_OMAP16XX
+ depends on ARCH_OMAP16XX || ARCH_OMAP24XX
help
Select this option if you want to use OMAP Dual-Mode timers.
#include <asm/arch/clock.h>
-LIST_HEAD(clocks);
+static LIST_HEAD(clocks);
static DEFINE_MUTEX(clocks_mutex);
-DEFINE_SPINLOCK(clockfw_lock);
+static DEFINE_SPINLOCK(clockfw_lock);
static struct clk_functions *arch_clock;
#include <asm/io.h>
#include <asm/system.h>
+#define VERY_HI_RATE 900000000
+
+#ifdef CONFIG_ARCH_OMAP1
+#define MPU_CLK "mpu"
+#else
+#define MPU_CLK "virt_prcm_set"
+#endif
+
/* TODO: Add support for SDRAM timing changes */
int omap_verify_speed(struct cpufreq_policy *policy)
cpufreq_verify_within_limits(policy, policy->cpuinfo.min_freq,
policy->cpuinfo.max_freq);
- mpu_clk = clk_get(NULL, "mpu");
+ mpu_clk = clk_get(NULL, MPU_CLK);
if (IS_ERR(mpu_clk))
return PTR_ERR(mpu_clk);
policy->min = clk_round_rate(mpu_clk, policy->min * 1000) / 1000;
if (cpu)
return 0;
- mpu_clk = clk_get(NULL, "mpu");
+ mpu_clk = clk_get(NULL, MPU_CLK);
if (IS_ERR(mpu_clk))
return 0;
rate = clk_get_rate(mpu_clk) / 1000;
struct cpufreq_freqs freqs;
int ret = 0;
- mpu_clk = clk_get(NULL, "mpu");
+ mpu_clk = clk_get(NULL, MPU_CLK);
if (IS_ERR(mpu_clk))
return PTR_ERR(mpu_clk);
{
struct clk * mpu_clk;
- mpu_clk = clk_get(NULL, "mpu");
+ mpu_clk = clk_get(NULL, MPU_CLK);
if (IS_ERR(mpu_clk))
return PTR_ERR(mpu_clk);
policy->cur = policy->min = policy->max = omap_getspeed(0);
policy->governor = CPUFREQ_DEFAULT_GOVERNOR;
policy->cpuinfo.min_freq = clk_round_rate(mpu_clk, 0) / 1000;
- policy->cpuinfo.max_freq = clk_round_rate(mpu_clk, 216000000) / 1000;
+ policy->cpuinfo.max_freq = clk_round_rate(mpu_clk, VERY_HI_RATE) / 1000;
policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
clk_put(mpu_clk);
omap_cfg_reg(E20_1610_KBR3);
omap_cfg_reg(E19_1610_KBR4);
omap_cfg_reg(N19_1610_KBR5);
- } else if (machine_is_omap_perseus2()) {
+ } else if (machine_is_omap_perseus2() || machine_is_omap_fsample()) {
omap_cfg_reg(E2_730_KBR0);
omap_cfg_reg(J7_730_KBR1);
omap_cfg_reg(E1_730_KBR2);
static struct resource mmc1_resources[] = {
{
- .start = IO_ADDRESS(OMAP_MMC1_BASE),
- .end = IO_ADDRESS(OMAP_MMC1_BASE) + 0x7f,
+ .start = OMAP_MMC1_BASE,
+ .end = OMAP_MMC1_BASE + 0x7f,
.flags = IORESOURCE_MEM,
},
{
static struct resource mmc2_resources[] = {
{
- .start = IO_ADDRESS(OMAP_MMC2_BASE),
- .end = IO_ADDRESS(OMAP_MMC2_BASE) + 0x7f,
+ .start = OMAP_MMC2_BASE,
+ .end = OMAP_MMC2_BASE + 0x7f,
.flags = IORESOURCE_MEM,
},
{
#include <linux/spinlock.h>
#include <linux/errno.h>
#include <linux/interrupt.h>
+#include <linux/irq.h>
#include <asm/system.h>
-#include <asm/irq.h>
#include <asm/hardware.h>
#include <asm/dma.h>
#include <asm/io.h>
#define OMAP_DMA_ACTIVE 0x01
#define OMAP_DMA_CCR_EN (1 << 7)
+#define OMAP2_DMA_CSR_CLEAR_MASK 0xffe
#define OMAP_FUNC_MUX_ARM_BASE (0xfffe1000 + 0xec)
if (cpu_is_omap24xx() && dma_trigger) {
u32 val = OMAP_DMA_CCR_REG(lch);
+ val &= ~(3 << 19);
if (dma_trigger > 63)
val |= 1 << 20;
if (dma_trigger > 31)
val |= 1 << 19;
+ val &= ~(0x1f);
val |= (dma_trigger & 0x1f);
if (sync_mode & OMAP_DMA_SYNC_FRAME)
val |= 1 << 5;
+ else
+ val &= ~(1 << 5);
if (sync_mode & OMAP_DMA_SYNC_BLOCK)
val |= 1 << 18;
+ else
+ val &= ~(1 << 18);
if (src_or_dst_synch)
val |= 1 << 24; /* source synch */
void omap_set_dma_src_burst_mode(int lch, enum omap_dma_burst_mode burst_mode)
{
+ unsigned int burst = 0;
OMAP_DMA_CSDP_REG(lch) &= ~(0x03 << 7);
switch (burst_mode) {
case OMAP_DMA_DATA_BURST_DIS:
break;
case OMAP_DMA_DATA_BURST_4:
- OMAP_DMA_CSDP_REG(lch) |= (0x02 << 7);
+ if (cpu_is_omap24xx())
+ burst = 0x1;
+ else
+ burst = 0x2;
break;
case OMAP_DMA_DATA_BURST_8:
- /* not supported by current hardware
+ if (cpu_is_omap24xx()) {
+ burst = 0x2;
+ break;
+ }
+ /* not supported by current hardware on OMAP1
* w |= (0x03 << 7);
* fall through
*/
+ case OMAP_DMA_DATA_BURST_16:
+ if (cpu_is_omap24xx()) {
+ burst = 0x3;
+ break;
+ }
+		/* OMAP1 doesn't support burst 16
+ * fall through
+ */
default:
BUG();
}
+ OMAP_DMA_CSDP_REG(lch) |= (burst << 7);
}
/* Note that dest_port is only for OMAP1 */
void omap_set_dma_dest_burst_mode(int lch, enum omap_dma_burst_mode burst_mode)
{
+ unsigned int burst = 0;
OMAP_DMA_CSDP_REG(lch) &= ~(0x03 << 14);
switch (burst_mode) {
case OMAP_DMA_DATA_BURST_DIS:
break;
case OMAP_DMA_DATA_BURST_4:
- OMAP_DMA_CSDP_REG(lch) |= (0x02 << 14);
+ if (cpu_is_omap24xx())
+ burst = 0x1;
+ else
+ burst = 0x2;
break;
case OMAP_DMA_DATA_BURST_8:
- OMAP_DMA_CSDP_REG(lch) |= (0x03 << 14);
+ if (cpu_is_omap24xx())
+ burst = 0x2;
+ else
+ burst = 0x3;
break;
+ case OMAP_DMA_DATA_BURST_16:
+ if (cpu_is_omap24xx()) {
+ burst = 0x3;
+ break;
+ }
+		/* OMAP1 doesn't support burst 16
+ * fall through
+ */
default:
printk(KERN_ERR "Invalid DMA burst mode\n");
BUG();
return;
}
+ OMAP_DMA_CSDP_REG(lch) |= (burst << 14);
}
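The burst-mode rework above is needed because OMAP1 and OMAP24xx encode the CSDP burst fields differently: on the destination side OMAP1 uses 0x2 for 4 beats and 0x3 for 8 beats with no 16-beat option, while 24xx uses 0x1/0x2/0x3 for 4/8/16 beats, so the code computes a local burst value and ORs it in once. A stand-alone sketch of that mapping; the array names are illustrative and 0xff marks an unsupported mode:

#include <stdio.h>
#include <stdint.h>

/* Illustrative only: destination-side CSDP burst encodings as used by
 * omap_set_dma_dest_burst_mode() in this patch.
 * Index: 0 = disabled, 1 = 4 beats, 2 = 8 beats, 3 = 16 beats.
 */
static const uint8_t omap1_dest_burst[4]   = { 0x0, 0x2, 0x3, 0xff };
static const uint8_t omap24xx_dest_burst[4] = { 0x0, 0x1, 0x2, 0x3 };

int main(void)
{
	printf("4-beat burst: OMAP1=%#x OMAP24xx=%#x\n",
	       omap1_dest_burst[1], omap24xx_dest_burst[1]);
	return 0;
}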
static inline void omap_enable_channel_irq(int lch)
{
u32 status;
- /* Read CSR to make sure it's cleared. */
- status = OMAP_DMA_CSR_REG(lch);
+ /* Clear CSR */
+ if (cpu_class_is_omap1())
+ status = OMAP_DMA_CSR_REG(lch);
+ else if (cpu_is_omap24xx())
+ OMAP_DMA_CSR_REG(lch) = OMAP2_DMA_CSR_CLEAR_MASK;
/* Enable some nice interrupts. */
OMAP_DMA_CICR_REG(lch) = dma_chan[lch].enabled_irqs;
chan->dev_name = dev_name;
chan->callback = callback;
chan->data = data;
- chan->enabled_irqs = OMAP_DMA_TOUT_IRQ | OMAP_DMA_DROP_IRQ |
- OMAP_DMA_BLOCK_IRQ;
+ chan->enabled_irqs = OMAP_DMA_DROP_IRQ | OMAP_DMA_BLOCK_IRQ;
- if (cpu_is_omap24xx())
- chan->enabled_irqs |= OMAP2_DMA_TRANS_ERR_IRQ;
+ if (cpu_class_is_omap1())
+ chan->enabled_irqs |= OMAP1_DMA_TOUT_IRQ;
+ else if (cpu_is_omap24xx())
+ chan->enabled_irqs |= OMAP2_DMA_MISALIGNED_ERR_IRQ |
+ OMAP2_DMA_TRANS_ERR_IRQ;
if (cpu_is_omap16xx()) {
/* If the sync device is set, configure it dynamically. */
omap_enable_channel_irq(free_ch);
/* Clear the CSR register and IRQ status register */
- OMAP_DMA_CSR_REG(free_ch) = 0x0;
+ OMAP_DMA_CSR_REG(free_ch) = OMAP2_DMA_CSR_CLEAR_MASK;
omap_writel(~0x0, OMAP_DMA4_IRQSTATUS_L0);
}
omap_writel(val, OMAP_DMA4_IRQENABLE_L0);
/* Clear the CSR register and IRQ status register */
- OMAP_DMA_CSR_REG(lch) = 0x0;
+ OMAP_DMA_CSR_REG(lch) = OMAP2_DMA_CSR_CLEAR_MASK;
val = omap_readl(OMAP_DMA4_IRQSTATUS_L0);
val |= 1 << lch;
"%d (CSR %04x)\n", ch, csr);
return 0;
}
- if (unlikely(csr & OMAP_DMA_TOUT_IRQ))
+ if (unlikely(csr & OMAP1_DMA_TOUT_IRQ))
printk(KERN_WARNING "DMA timeout with device %d\n",
dma_chan[ch].dev_id);
if (unlikely(csr & OMAP_DMA_DROP_IRQ))
return 0;
if (unlikely(dma_chan[ch].dev_id == -1))
return 0;
- /* REVISIT: According to 24xx TRM, there's no TOUT_IE */
- if (unlikely(status & OMAP_DMA_TOUT_IRQ))
- printk(KERN_INFO "DMA timeout with device %d\n",
- dma_chan[ch].dev_id);
if (unlikely(status & OMAP_DMA_DROP_IRQ))
printk(KERN_INFO
"DMA synchronization event drop occurred with device "
"%d\n", dma_chan[ch].dev_id);
-
if (unlikely(status & OMAP2_DMA_TRANS_ERR_IRQ))
printk(KERN_INFO "DMA transaction error with device %d\n",
dma_chan[ch].dev_id);
+ if (unlikely(status & OMAP2_DMA_SECURE_ERR_IRQ))
+ printk(KERN_INFO "DMA secure error with device %d\n",
+ dma_chan[ch].dev_id);
+ if (unlikely(status & OMAP2_DMA_MISALIGNED_ERR_IRQ))
+ printk(KERN_INFO "DMA misaligned error with device %d\n",
+ dma_chan[ch].dev_id);
- OMAP_DMA_CSR_REG(ch) = 0x20;
+ OMAP_DMA_CSR_REG(ch) = OMAP2_DMA_CSR_CLEAR_MASK;
val = omap_readl(OMAP_DMA4_IRQSTATUS_L0);
/* ch in this function is from 0-31 while in register it is 1-32 */
static struct irqaction omap24xx_dma_irq = {
.name = "DMA",
.handler = omap2_dma_irq_handler,
- .flags = SA_INTERRUPT
+ .flags = IRQF_DISABLED
};
#else
* OMAP Dual-Mode Timers
*
* Copyright (C) 2005 Nokia Corporation
- * Author: Lauri Leukkunen <lauri.leukkunen@nokia.com>
+ * OMAP2 support by Juha Yrjola
+ * API improvements and OMAP2 clock framework support by Timo Teras
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
*/
#include <linux/init.h>
+#include <linux/spinlock.h>
+#include <linux/errno.h>
+#include <linux/list.h>
+#include <linux/clk.h>
+#include <linux/delay.h>
#include <asm/hardware.h>
#include <asm/arch/dmtimer.h>
#include <asm/io.h>
#include <asm/arch/irqs.h>
-#include <linux/spinlock.h>
-#include <linux/list.h>
-
-#define OMAP_TIMER_COUNT 8
+/* register offsets */
#define OMAP_TIMER_ID_REG 0x00
#define OMAP_TIMER_OCP_CFG_REG 0x10
#define OMAP_TIMER_SYS_STAT_REG 0x14
#define OMAP_TIMER_CAPTURE_REG 0x3c
#define OMAP_TIMER_IF_CTRL_REG 0x40
+/* timer control reg bits */
+#define OMAP_TIMER_CTRL_GPOCFG (1 << 14)
+#define OMAP_TIMER_CTRL_CAPTMODE (1 << 13)
+#define OMAP_TIMER_CTRL_PT (1 << 12)
+#define OMAP_TIMER_CTRL_TCM_LOWTOHIGH (0x1 << 8)
+#define OMAP_TIMER_CTRL_TCM_HIGHTOLOW (0x2 << 8)
+#define OMAP_TIMER_CTRL_TCM_BOTHEDGES (0x3 << 8)
+#define OMAP_TIMER_CTRL_SCPWM (1 << 7)
+#define OMAP_TIMER_CTRL_CE (1 << 6) /* compare enable */
+#define OMAP_TIMER_CTRL_PRE (1 << 5) /* prescaler enable */
+#define OMAP_TIMER_CTRL_PTV_SHIFT 2 /* how much to shift the prescaler value */
+#define OMAP_TIMER_CTRL_AR (1 << 1) /* auto-reload enable */
+#define OMAP_TIMER_CTRL_ST (1 << 0) /* start timer */
+
+struct omap_dm_timer {
+ unsigned long phys_base;
+ int irq;
+#ifdef CONFIG_ARCH_OMAP2
+ struct clk *iclk, *fclk;
+#endif
+ void __iomem *io_base;
+ unsigned reserved:1;
+};
-static struct dmtimer_info_struct {
- struct list_head unused_timers;
- struct list_head reserved_timers;
-} dm_timer_info;
+#ifdef CONFIG_ARCH_OMAP1
static struct omap_dm_timer dm_timers[] = {
- { .base=0xfffb1400, .irq=INT_1610_GPTIMER1 },
- { .base=0xfffb1c00, .irq=INT_1610_GPTIMER2 },
- { .base=0xfffb2400, .irq=INT_1610_GPTIMER3 },
- { .base=0xfffb2c00, .irq=INT_1610_GPTIMER4 },
- { .base=0xfffb3400, .irq=INT_1610_GPTIMER5 },
- { .base=0xfffb3c00, .irq=INT_1610_GPTIMER6 },
- { .base=0xfffb4400, .irq=INT_1610_GPTIMER7 },
- { .base=0xfffb4c00, .irq=INT_1610_GPTIMER8 },
- { .base=0x0 },
+ { .phys_base = 0xfffb1400, .irq = INT_1610_GPTIMER1 },
+ { .phys_base = 0xfffb1c00, .irq = INT_1610_GPTIMER2 },
+ { .phys_base = 0xfffb2400, .irq = INT_1610_GPTIMER3 },
+ { .phys_base = 0xfffb2c00, .irq = INT_1610_GPTIMER4 },
+ { .phys_base = 0xfffb3400, .irq = INT_1610_GPTIMER5 },
+ { .phys_base = 0xfffb3c00, .irq = INT_1610_GPTIMER6 },
+ { .phys_base = 0xfffb4400, .irq = INT_1610_GPTIMER7 },
+ { .phys_base = 0xfffb4c00, .irq = INT_1610_GPTIMER8 },
};
+#elif defined(CONFIG_ARCH_OMAP2)
+
+static struct omap_dm_timer dm_timers[] = {
+ { .phys_base = 0x48028000, .irq = INT_24XX_GPTIMER1 },
+ { .phys_base = 0x4802a000, .irq = INT_24XX_GPTIMER2 },
+ { .phys_base = 0x48078000, .irq = INT_24XX_GPTIMER3 },
+ { .phys_base = 0x4807a000, .irq = INT_24XX_GPTIMER4 },
+ { .phys_base = 0x4807c000, .irq = INT_24XX_GPTIMER5 },
+ { .phys_base = 0x4807e000, .irq = INT_24XX_GPTIMER6 },
+ { .phys_base = 0x48080000, .irq = INT_24XX_GPTIMER7 },
+ { .phys_base = 0x48082000, .irq = INT_24XX_GPTIMER8 },
+ { .phys_base = 0x48084000, .irq = INT_24XX_GPTIMER9 },
+ { .phys_base = 0x48086000, .irq = INT_24XX_GPTIMER10 },
+ { .phys_base = 0x48088000, .irq = INT_24XX_GPTIMER11 },
+ { .phys_base = 0x4808a000, .irq = INT_24XX_GPTIMER12 },
+};
+
+static const char *dm_source_names[] = {
+ "sys_ck",
+ "func_32k_ck",
+ "alt_ck"
+};
+static struct clk *dm_source_clocks[3];
+
+#else
+
+#error OMAP architecture not supported!
+
+#endif
+
+static const int dm_timer_count = ARRAY_SIZE(dm_timers);
static spinlock_t dm_timer_lock;
+static inline u32 omap_dm_timer_read_reg(struct omap_dm_timer *timer, int reg)
+{
+ return readl(timer->io_base + reg);
+}
-inline void omap_dm_timer_write_reg(struct omap_dm_timer *timer, int reg, u32 value)
+static void omap_dm_timer_write_reg(struct omap_dm_timer *timer, int reg, u32 value)
{
- omap_writel(value, timer->base + reg);
+ writel(value, timer->io_base + reg);
while (omap_dm_timer_read_reg(timer, OMAP_TIMER_WRITE_PEND_REG))
;
}
-u32 omap_dm_timer_read_reg(struct omap_dm_timer *timer, int reg)
+static void omap_dm_timer_wait_for_reset(struct omap_dm_timer *timer)
{
- return omap_readl(timer->base + reg);
+ int c;
+
+ c = 0;
+ while (!(omap_dm_timer_read_reg(timer, OMAP_TIMER_SYS_STAT_REG) & 1)) {
+ c++;
+ if (c > 100000) {
+ printk(KERN_ERR "Timer failed to reset\n");
+ return;
+ }
+ }
}
-int omap_dm_timers_active(void)
+static void omap_dm_timer_reset(struct omap_dm_timer *timer)
+{
+ u32 l;
+
+ if (timer != &dm_timers[0]) {
+ omap_dm_timer_write_reg(timer, OMAP_TIMER_IF_CTRL_REG, 0x06);
+ omap_dm_timer_wait_for_reset(timer);
+ }
+ omap_dm_timer_set_source(timer, OMAP_TIMER_SRC_SYS_CLK);
+
+ /* Set to smart-idle mode */
+ l = omap_dm_timer_read_reg(timer, OMAP_TIMER_OCP_CFG_REG);
+ l |= 0x02 << 3;
+ omap_dm_timer_write_reg(timer, OMAP_TIMER_OCP_CFG_REG, l);
+}
+
+static void omap_dm_timer_prepare(struct omap_dm_timer *timer)
+{
+#ifdef CONFIG_ARCH_OMAP2
+ clk_enable(timer->iclk);
+ clk_enable(timer->fclk);
+#endif
+ omap_dm_timer_reset(timer);
+}
+
+struct omap_dm_timer *omap_dm_timer_request(void)
+{
+ struct omap_dm_timer *timer = NULL;
+ unsigned long flags;
+ int i;
+
+ spin_lock_irqsave(&dm_timer_lock, flags);
+ for (i = 0; i < dm_timer_count; i++) {
+ if (dm_timers[i].reserved)
+ continue;
+
+ timer = &dm_timers[i];
+ timer->reserved = 1;
+ break;
+ }
+ spin_unlock_irqrestore(&dm_timer_lock, flags);
+
+ if (timer != NULL)
+ omap_dm_timer_prepare(timer);
+
+ return timer;
+}
+
+struct omap_dm_timer *omap_dm_timer_request_specific(int id)
{
struct omap_dm_timer *timer;
+ unsigned long flags;
- for (timer = &dm_timers[0]; timer->base; ++timer)
- if (omap_dm_timer_read_reg(timer, OMAP_TIMER_CTRL_REG) &
- OMAP_TIMER_CTRL_ST)
- return 1;
+ spin_lock_irqsave(&dm_timer_lock, flags);
+ if (id <= 0 || id > dm_timer_count || dm_timers[id-1].reserved) {
+ spin_unlock_irqrestore(&dm_timer_lock, flags);
+ printk("BUG: warning at %s:%d/%s(): unable to get timer %d\n",
+ __FILE__, __LINE__, __FUNCTION__, id);
+ dump_stack();
+ return NULL;
+ }
- return 0;
+ timer = &dm_timers[id-1];
+ timer->reserved = 1;
+ spin_unlock_irqrestore(&dm_timer_lock, flags);
+
+ omap_dm_timer_prepare(timer);
+
+ return timer;
}
+void omap_dm_timer_free(struct omap_dm_timer *timer)
+{
+ omap_dm_timer_reset(timer);
+#ifdef CONFIG_ARCH_OMAP2
+ clk_disable(timer->iclk);
+ clk_disable(timer->fclk);
+#endif
+ WARN_ON(!timer->reserved);
+ timer->reserved = 0;
+}
+
+int omap_dm_timer_get_irq(struct omap_dm_timer *timer)
+{
+ return timer->irq;
+}
+
+#if defined(CONFIG_ARCH_OMAP1)
+
+struct clk *omap_dm_timer_get_fclk(struct omap_dm_timer *timer)
+{
+ BUG();
+}
/**
* omap_dm_timer_modify_idlect_mask - Check if any running timers use ARMXOR
*/
__u32 omap_dm_timer_modify_idlect_mask(__u32 inputmask)
{
- int n;
+ int i;
/* If ARMXOR cannot be idled this function call is unnecessary */
if (!(inputmask & (1 << 1)))
return inputmask;
/* If any active timer is using ARMXOR return modified mask */
- for (n = 0; dm_timers[n].base; ++n)
- if (omap_dm_timer_read_reg(&dm_timers[n], OMAP_TIMER_CTRL_REG)&
- OMAP_TIMER_CTRL_ST) {
- if (((omap_readl(MOD_CONF_CTRL_1)>>(n*2)) & 0x03) == 0)
+ for (i = 0; i < dm_timer_count; i++) {
+ u32 l;
+
+ l = omap_dm_timer_read_reg(&dm_timers[i], OMAP_TIMER_CTRL_REG);
+ if (l & OMAP_TIMER_CTRL_ST) {
+ if (((omap_readl(MOD_CONF_CTRL_1) >> (i * 2)) & 0x03) == 0)
inputmask &= ~(1 << 1);
else
inputmask &= ~(1 << 2);
}
+ }
return inputmask;
}
+#elif defined(CONFIG_ARCH_OMAP2)
-void omap_dm_timer_set_source(struct omap_dm_timer *timer, int source)
+struct clk *omap_dm_timer_get_fclk(struct omap_dm_timer *timer)
{
- int n = (timer - dm_timers) << 1;
- u32 l;
+ return timer->fclk;
+}
- l = omap_readl(MOD_CONF_CTRL_1) & ~(0x03 << n);
- l |= source << n;
- omap_writel(l, MOD_CONF_CTRL_1);
+__u32 omap_dm_timer_modify_idlect_mask(__u32 inputmask)
+{
+ BUG();
}
+#endif
-static void omap_dm_timer_reset(struct omap_dm_timer *timer)
+void omap_dm_timer_trigger(struct omap_dm_timer *timer)
{
- /* Reset and set posted mode */
- omap_dm_timer_write_reg(timer, OMAP_TIMER_IF_CTRL_REG, 0x06);
- omap_dm_timer_write_reg(timer, OMAP_TIMER_OCP_CFG_REG, 0x02);
-
- omap_dm_timer_set_source(timer, OMAP_TIMER_SRC_ARMXOR);
+ omap_dm_timer_write_reg(timer, OMAP_TIMER_TRIGGER_REG, 0);
}
+void omap_dm_timer_start(struct omap_dm_timer *timer)
+{
+ u32 l;
+ l = omap_dm_timer_read_reg(timer, OMAP_TIMER_CTRL_REG);
+ if (!(l & OMAP_TIMER_CTRL_ST)) {
+ l |= OMAP_TIMER_CTRL_ST;
+ omap_dm_timer_write_reg(timer, OMAP_TIMER_CTRL_REG, l);
+ }
+}
-struct omap_dm_timer * omap_dm_timer_request(void)
+void omap_dm_timer_stop(struct omap_dm_timer *timer)
{
- struct omap_dm_timer *timer = NULL;
- unsigned long flags;
+ u32 l;
- spin_lock_irqsave(&dm_timer_lock, flags);
- if (!list_empty(&dm_timer_info.unused_timers)) {
- timer = (struct omap_dm_timer *)
- dm_timer_info.unused_timers.next;
- list_move_tail((struct list_head *)timer,
- &dm_timer_info.reserved_timers);
+ l = omap_dm_timer_read_reg(timer, OMAP_TIMER_CTRL_REG);
+ if (l & OMAP_TIMER_CTRL_ST) {
+ l &= ~0x1;
+ omap_dm_timer_write_reg(timer, OMAP_TIMER_CTRL_REG, l);
}
- spin_unlock_irqrestore(&dm_timer_lock, flags);
-
- return timer;
}
+#ifdef CONFIG_ARCH_OMAP1
-void omap_dm_timer_free(struct omap_dm_timer *timer)
+void omap_dm_timer_set_source(struct omap_dm_timer *timer, int source)
{
- unsigned long flags;
-
- omap_dm_timer_reset(timer);
+ int n = (timer - dm_timers) << 1;
+ u32 l;
- spin_lock_irqsave(&dm_timer_lock, flags);
- list_move_tail((struct list_head *)timer, &dm_timer_info.unused_timers);
- spin_unlock_irqrestore(&dm_timer_lock, flags);
+ l = omap_readl(MOD_CONF_CTRL_1) & ~(0x03 << n);
+ l |= source << n;
+ omap_writel(l, MOD_CONF_CTRL_1);
}
-void omap_dm_timer_set_int_enable(struct omap_dm_timer *timer,
- unsigned int value)
-{
- omap_dm_timer_write_reg(timer, OMAP_TIMER_INT_EN_REG, value);
-}
+#else
-unsigned int omap_dm_timer_read_status(struct omap_dm_timer *timer)
+void omap_dm_timer_set_source(struct omap_dm_timer *timer, int source)
{
- return omap_dm_timer_read_reg(timer, OMAP_TIMER_STAT_REG);
-}
+ if (source < 0 || source >= 3)
+ return;
-void omap_dm_timer_write_status(struct omap_dm_timer *timer, unsigned int value)
-{
- omap_dm_timer_write_reg(timer, OMAP_TIMER_STAT_REG, value);
+ clk_disable(timer->fclk);
+ clk_set_parent(timer->fclk, dm_source_clocks[source]);
+ clk_enable(timer->fclk);
+
+	/* When the functional clock disappears, writes that follow too
+	 * quickly seem to cause an abort. */
+ __delay(15000);
}
-void omap_dm_timer_enable_autoreload(struct omap_dm_timer *timer)
+#endif
+
+void omap_dm_timer_set_load(struct omap_dm_timer *timer, int autoreload,
+ unsigned int load)
{
u32 l;
+
l = omap_dm_timer_read_reg(timer, OMAP_TIMER_CTRL_REG);
- l |= OMAP_TIMER_CTRL_AR;
+ if (autoreload)
+ l |= OMAP_TIMER_CTRL_AR;
+ else
+ l &= ~OMAP_TIMER_CTRL_AR;
omap_dm_timer_write_reg(timer, OMAP_TIMER_CTRL_REG, l);
+ omap_dm_timer_write_reg(timer, OMAP_TIMER_LOAD_REG, load);
+ omap_dm_timer_write_reg(timer, OMAP_TIMER_TRIGGER_REG, 0);
}
-void omap_dm_timer_trigger(struct omap_dm_timer *timer)
-{
- omap_dm_timer_write_reg(timer, OMAP_TIMER_TRIGGER_REG, 1);
-}
-
-void omap_dm_timer_set_trigger(struct omap_dm_timer *timer, unsigned int value)
+void omap_dm_timer_set_match(struct omap_dm_timer *timer, int enable,
+ unsigned int match)
{
u32 l;
l = omap_dm_timer_read_reg(timer, OMAP_TIMER_CTRL_REG);
- l |= value & 0x3;
+ if (enable)
+ l |= OMAP_TIMER_CTRL_CE;
+ else
+ l &= ~OMAP_TIMER_CTRL_CE;
omap_dm_timer_write_reg(timer, OMAP_TIMER_CTRL_REG, l);
+ omap_dm_timer_write_reg(timer, OMAP_TIMER_MATCH_REG, match);
}
-void omap_dm_timer_start(struct omap_dm_timer *timer)
+
+void omap_dm_timer_set_pwm(struct omap_dm_timer *timer, int def_on,
+ int toggle, int trigger)
{
u32 l;
l = omap_dm_timer_read_reg(timer, OMAP_TIMER_CTRL_REG);
- l |= OMAP_TIMER_CTRL_ST;
+ l &= ~(OMAP_TIMER_CTRL_GPOCFG | OMAP_TIMER_CTRL_SCPWM |
+ OMAP_TIMER_CTRL_PT | (0x03 << 10));
+ if (def_on)
+ l |= OMAP_TIMER_CTRL_SCPWM;
+ if (toggle)
+ l |= OMAP_TIMER_CTRL_PT;
+ l |= trigger << 10;
omap_dm_timer_write_reg(timer, OMAP_TIMER_CTRL_REG, l);
}
-void omap_dm_timer_stop(struct omap_dm_timer *timer)
+void omap_dm_timer_set_prescaler(struct omap_dm_timer *timer, int prescaler)
{
u32 l;
l = omap_dm_timer_read_reg(timer, OMAP_TIMER_CTRL_REG);
- l &= ~0x1;
+ l &= ~(OMAP_TIMER_CTRL_PRE | (0x07 << 2));
+ if (prescaler >= 0x00 && prescaler <= 0x07) {
+ l |= OMAP_TIMER_CTRL_PRE;
+ l |= prescaler << 2;
+ }
omap_dm_timer_write_reg(timer, OMAP_TIMER_CTRL_REG, l);
}
-unsigned int omap_dm_timer_read_counter(struct omap_dm_timer *timer)
+void omap_dm_timer_set_int_enable(struct omap_dm_timer *timer,
+ unsigned int value)
{
- return omap_dm_timer_read_reg(timer, OMAP_TIMER_COUNTER_REG);
+ omap_dm_timer_write_reg(timer, OMAP_TIMER_INT_EN_REG, value);
}
-void omap_dm_timer_reset_counter(struct omap_dm_timer *timer)
+unsigned int omap_dm_timer_read_status(struct omap_dm_timer *timer)
{
- omap_dm_timer_write_reg(timer, OMAP_TIMER_COUNTER_REG, 0);
+ return omap_dm_timer_read_reg(timer, OMAP_TIMER_STAT_REG);
}
-void omap_dm_timer_set_load(struct omap_dm_timer *timer, unsigned int load)
+void omap_dm_timer_write_status(struct omap_dm_timer *timer, unsigned int value)
{
- omap_dm_timer_write_reg(timer, OMAP_TIMER_LOAD_REG, load);
+ omap_dm_timer_write_reg(timer, OMAP_TIMER_STAT_REG, value);
}
-void omap_dm_timer_set_match(struct omap_dm_timer *timer, unsigned int match)
+unsigned int omap_dm_timer_read_counter(struct omap_dm_timer *timer)
{
- omap_dm_timer_write_reg(timer, OMAP_TIMER_MATCH_REG, match);
+ return omap_dm_timer_read_reg(timer, OMAP_TIMER_COUNTER_REG);
}
-void omap_dm_timer_enable_compare(struct omap_dm_timer *timer)
+void omap_dm_timer_write_counter(struct omap_dm_timer *timer, unsigned int value)
{
- u32 l;
-
- l = omap_dm_timer_read_reg(timer, OMAP_TIMER_CTRL_REG);
- l |= OMAP_TIMER_CTRL_CE;
- omap_dm_timer_write_reg(timer, OMAP_TIMER_CTRL_REG, l);
+ return omap_dm_timer_write_reg(timer, OMAP_TIMER_COUNTER_REG, value);
}
+int omap_dm_timers_active(void)
+{
+ int i;
+
+ for (i = 0; i < dm_timer_count; i++) {
+ struct omap_dm_timer *timer;
+
+ timer = &dm_timers[i];
+ if (omap_dm_timer_read_reg(timer, OMAP_TIMER_CTRL_REG) &
+ OMAP_TIMER_CTRL_ST)
+ return 1;
+ }
+ return 0;
+}
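Taken together, the reworked dmtimer API above hands out timers through request/free and takes explicit autoreload, match and PWM parameters, so clients program a timer through these calls instead of poking registers directly. A minimal usage sketch for a free-running periodic timer; the load value is illustrative and error handling is trimmed:

/* Illustrative only: not taken from this patch. */
static struct omap_dm_timer *tick_timer;

static int tick_setup(void)
{
	tick_timer = omap_dm_timer_request();
	if (!tick_timer)
		return -EBUSY;

	omap_dm_timer_set_source(tick_timer, OMAP_TIMER_SRC_SYS_CLK);
	/* autoreload from an arbitrary load value */
	omap_dm_timer_set_load(tick_timer, 1, 0xffffffff - 0x1000);
	omap_dm_timer_set_int_enable(tick_timer, OMAP_TIMER_INT_OVERFLOW);
	omap_dm_timer_start(tick_timer);
	return 0;
}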
-static inline void __dm_timer_init(void)
+int omap_dm_timer_init(void)
{
struct omap_dm_timer *timer;
+ int i;
+
+ if (!(cpu_is_omap16xx() || cpu_is_omap24xx()))
+ return -ENODEV;
spin_lock_init(&dm_timer_lock);
- INIT_LIST_HEAD(&dm_timer_info.unused_timers);
- INIT_LIST_HEAD(&dm_timer_info.reserved_timers);
-
- timer = &dm_timers[0];
- while (timer->base) {
- list_add_tail((struct list_head *)timer, &dm_timer_info.unused_timers);
- omap_dm_timer_reset(timer);
- timer++;
+#ifdef CONFIG_ARCH_OMAP2
+ for (i = 0; i < ARRAY_SIZE(dm_source_names); i++) {
+ dm_source_clocks[i] = clk_get(NULL, dm_source_names[i]);
+ BUG_ON(dm_source_clocks[i] == NULL);
+ }
+#endif
+
+ for (i = 0; i < dm_timer_count; i++) {
+#ifdef CONFIG_ARCH_OMAP2
+ char clk_name[16];
+#endif
+
+ timer = &dm_timers[i];
+ timer->io_base = (void __iomem *) io_p2v(timer->phys_base);
+#ifdef CONFIG_ARCH_OMAP2
+ sprintf(clk_name, "gpt%d_ick", i + 1);
+ timer->iclk = clk_get(NULL, clk_name);
+ sprintf(clk_name, "gpt%d_fck", i + 1);
+ timer->fclk = clk_get(NULL, clk_name);
+#endif
}
-}
-static int __init omap_dm_timer_init(void)
-{
- if (cpu_is_omap16xx())
- __dm_timer_init();
return 0;
}
-
-arch_initcall(omap_dm_timer_init);
_clear_gpio_irqbank(bank, 1 << get_gpio_index(gpio));
}
+static u32 _get_gpio_irqbank_mask(struct gpio_bank *bank)
+{
+ void __iomem *reg = bank->base;
+ int inv = 0;
+ u32 l;
+ u32 mask;
+
+ switch (bank->method) {
+ case METHOD_MPUIO:
+ reg += OMAP_MPUIO_GPIO_MASKIT;
+ mask = 0xffff;
+ inv = 1;
+ break;
+ case METHOD_GPIO_1510:
+ reg += OMAP1510_GPIO_INT_MASK;
+ mask = 0xffff;
+ inv = 1;
+ break;
+ case METHOD_GPIO_1610:
+ reg += OMAP1610_GPIO_IRQENABLE1;
+ mask = 0xffff;
+ break;
+ case METHOD_GPIO_730:
+ reg += OMAP730_GPIO_INT_MASK;
+ mask = 0xffffffff;
+ inv = 1;
+ break;
+ case METHOD_GPIO_24XX:
+ reg += OMAP24XX_GPIO_IRQENABLE1;
+ mask = 0xffffffff;
+ break;
+ default:
+ BUG();
+ return 0;
+ }
+
+ l = __raw_readl(reg);
+ if (inv)
+ l = ~l;
+ l &= mask;
+ return l;
+}
+
static void _enable_gpio_irqbank(struct gpio_bank *bank, int gpio_mask, int enable)
{
void __iomem *reg = bank->base;
u32 isr;
unsigned int gpio_irq;
struct gpio_bank *bank;
+ u32 retrigger = 0;
+ int unmasked = 0;
desc->chip->ack(irq);
- bank = (struct gpio_bank *) desc->data;
+ bank = get_irq_data(irq);
if (bank->method == METHOD_MPUIO)
isr_reg = bank->base + OMAP_MPUIO_GPIO_INT;
#ifdef CONFIG_ARCH_OMAP15XX
#endif
while(1) {
u32 isr_saved, level_mask = 0;
+ u32 enabled;
- isr_saved = isr = __raw_readl(isr_reg);
+ enabled = _get_gpio_irqbank_mask(bank);
+ isr_saved = isr = __raw_readl(isr_reg) & enabled;
if (cpu_is_omap15xx() && (bank->method == METHOD_MPUIO))
isr &= 0x0000ffff;
- if (cpu_is_omap24xx())
+ if (cpu_is_omap24xx()) {
level_mask =
__raw_readl(bank->base +
OMAP24XX_GPIO_LEVELDETECT0) |
__raw_readl(bank->base +
OMAP24XX_GPIO_LEVELDETECT1);
+ level_mask &= enabled;
+ }
/* clear edge sensitive interrupts before handler(s) are
called so that we don't miss any interrupt occurred while
/* if there is only edge sensitive GPIO pin interrupts
configured, we could unmask GPIO bank interrupt immediately */
- if (!level_mask)
+ if (!level_mask && !unmasked) {
+ unmasked = 1;
desc->chip->unmask(irq);
+ }
+ isr |= retrigger;
+ retrigger = 0;
if (!isr)
break;
gpio_irq = bank->virtual_irq_start;
for (; isr != 0; isr >>= 1, gpio_irq++) {
struct irqdesc *d;
+ int irq_mask;
if (!(isr & 1))
continue;
d = irq_desc + gpio_irq;
+ /* Don't run the handler if it's already running
+			 * or was disabled lazily.
+ */
+ if (unlikely((d->depth ||
+ (d->status & IRQ_INPROGRESS)))) {
+ irq_mask = 1 <<
+ (gpio_irq - bank->virtual_irq_start);
+ /* The unmasking will be done by
+ * enable_irq in case it is disabled or
+ * after returning from the handler if
+ * it's already running.
+ */
+ _enable_gpio_irqbank(bank, irq_mask, 0);
+ if (!d->depth) {
+ /* Level triggered interrupts
+ * won't ever be reentered
+ */
+ BUG_ON(level_mask & irq_mask);
+ d->status |= IRQ_PENDING;
+ }
+ continue;
+ }
+
desc_handle_irq(gpio_irq, d, regs);
+
+ if (unlikely((d->status & IRQ_PENDING) && !d->depth)) {
+ irq_mask = 1 <<
+ (gpio_irq - bank->virtual_irq_start);
+ d->status &= ~IRQ_PENDING;
+ _enable_gpio_irqbank(bank, irq_mask, 1);
+ retrigger |= irq_mask;
+ }
}
if (cpu_is_omap24xx()) {
_enable_gpio_irqbank(bank, isr_saved & level_mask, 1);
}
- /* if bank has any level sensitive GPIO pin interrupt
- configured, we must unmask the bank interrupt only after
- handler(s) are executed in order to avoid spurious bank
- interrupt */
- if (level_mask)
- desc->chip->unmask(irq);
}
+ /* if bank has any level sensitive GPIO pin interrupt
+ configured, we must unmask the bank interrupt only after
+ handler(s) are executed in order to avoid spurious bank
+ interrupt */
+ if (!unmasked)
+ desc->chip->unmask(irq);
+
}
static void gpio_ack_irq(unsigned int irq)
static struct irqaction omap_wakeup_irq = {
.name = "peripheral wakeup",
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.handler = omap_wakeup_interrupt
};
{ /* .length gets filled in at runtime */
.virtual = OMAP1_SRAM_VA,
.pfn = __phys_to_pfn(OMAP1_SRAM_PA),
- .type = MT_DEVICE
+ .type = MT_MEMORY
}
};
/*
- * In order to use last 2kB of SRAM on 1611b, we must round the size
- * up to multiple of PAGE_SIZE. We cannot use ioremap for SRAM, as
- * clock init needs SRAM early.
+ * Note that we cannot use ioremap for SRAM, as clock init needs SRAM early.
*/
void __init omap_map_sram(void)
{
omap_sram_io_desc[0].pfn = __phys_to_pfn(base);
}
- omap_sram_io_desc[0].length = (omap_sram_size + PAGE_SIZE-1)/PAGE_SIZE;
- omap_sram_io_desc[0].length *= PAGE_SIZE;
+ omap_sram_io_desc[0].length = 1024 * 1024; /* Use section desc */
iotable_init(omap_sram_io_desc, ARRAY_SIZE(omap_sram_io_desc));
printk(KERN_INFO "SRAM: Mapped pa 0x%08lx to va 0x%08lx size: 0x%lx\n",
* Partial timer rewrite and additional dynamic tick timer support by
* Tony Lindgen <tony@atomide.com> and
* Tuukka Tikkanen <tuukka.tikkanen@elektrobit.com>
+ * OMAP Dual-mode timer framework support by Timo Teras
*
* MPU timer code based on the older MPU timer code for OMAP
* Copyright (C) 2000 RidgeRun, Inc.
#include <asm/irq.h>
#include <asm/mach/irq.h>
#include <asm/mach/time.h>
+#include <asm/arch/dmtimer.h>
struct sys_timer omap_timer;
#define OMAP1_32K_TIMER_TVR 0x00
#define OMAP1_32K_TIMER_TCR 0x04
-/* 24xx specific defines */
-#define OMAP2_GP_TIMER_BASE 0x48028000
-#define CM_CLKSEL_WKUP 0x48008440
-#define GP_TIMER_TIDR 0x00
-#define GP_TIMER_TISR 0x18
-#define GP_TIMER_TIER 0x1c
-#define GP_TIMER_TCLR 0x24
-#define GP_TIMER_TCRR 0x28
-#define GP_TIMER_TLDR 0x2c
-#define GP_TIMER_TTGR 0x30
-#define GP_TIMER_TSICR 0x40
-
#define OMAP_32K_TICKS_PER_HZ (32768 / HZ)
/*
#define JIFFIES_TO_HW_TICKS(nr_jiffies, clock_rate) \
(((nr_jiffies) * (clock_rate)) / HZ)
+#if defined(CONFIG_ARCH_OMAP1)
+
static inline void omap_32k_timer_write(int val, int reg)
{
- if (cpu_class_is_omap1())
- omap_writew(val, OMAP1_32K_TIMER_BASE + reg);
-
- if (cpu_is_omap24xx())
- omap_writel(val, OMAP2_GP_TIMER_BASE + reg);
+ omap_writew(val, OMAP1_32K_TIMER_BASE + reg);
}
static inline unsigned long omap_32k_timer_read(int reg)
{
- if (cpu_class_is_omap1())
- return omap_readl(OMAP1_32K_TIMER_BASE + reg) & 0xffffff;
+ return omap_readl(OMAP1_32K_TIMER_BASE + reg) & 0xffffff;
+}
- if (cpu_is_omap24xx())
- return omap_readl(OMAP2_GP_TIMER_BASE + reg);
+static inline void omap_32k_timer_start(unsigned long load_val)
+{
+ omap_32k_timer_write(load_val, OMAP1_32K_TIMER_TVR);
+ omap_32k_timer_write(0x0f, OMAP1_32K_TIMER_CR);
}
-/*
- * The 32KHz synchronized timer is an additional timer on 16xx.
- * It is always running.
- */
-static inline unsigned long omap_32k_sync_timer_read(void)
+static inline void omap_32k_timer_stop(void)
{
- return omap_readl(TIMER_32K_SYNCHRONIZED);
+ omap_32k_timer_write(0x0, OMAP1_32K_TIMER_CR);
}
+#define omap_32k_timer_ack_irq()
+
+#elif defined(CONFIG_ARCH_OMAP2)
+
+static struct omap_dm_timer *gptimer;
+
static inline void omap_32k_timer_start(unsigned long load_val)
{
- if (cpu_class_is_omap1()) {
- omap_32k_timer_write(load_val, OMAP1_32K_TIMER_TVR);
- omap_32k_timer_write(0x0f, OMAP1_32K_TIMER_CR);
- }
-
- if (cpu_is_omap24xx()) {
- omap_32k_timer_write(0xffffffff - load_val, GP_TIMER_TCRR);
- omap_32k_timer_write((1 << 1), GP_TIMER_TIER);
- omap_32k_timer_write((1 << 1) | 1, GP_TIMER_TCLR);
- }
+ omap_dm_timer_set_load(gptimer, 1, 0xffffffff - load_val);
+ omap_dm_timer_set_int_enable(gptimer, OMAP_TIMER_INT_OVERFLOW);
+ omap_dm_timer_start(gptimer);
}
static inline void omap_32k_timer_stop(void)
{
- if (cpu_class_is_omap1())
- omap_32k_timer_write(0x0, OMAP1_32K_TIMER_CR);
+ omap_dm_timer_stop(gptimer);
+}
- if (cpu_is_omap24xx())
- omap_32k_timer_write(0x0, GP_TIMER_TCLR);
+static inline void omap_32k_timer_ack_irq(void)
+{
+ u32 status = omap_dm_timer_read_status(gptimer);
+ omap_dm_timer_write_status(gptimer, status);
+}
+
+#endif
+
+/*
+ * The 32KHz synchronized timer is an additional timer on 16xx.
+ * It is always running.
+ */
+static inline unsigned long omap_32k_sync_timer_read(void)
+{
+ return omap_readl(TIMER_32K_SYNCHRONIZED);
}
/*
write_seqlock_irqsave(&xtime_lock, flags);
- if (cpu_is_omap24xx()) {
- u32 status = omap_32k_timer_read(GP_TIMER_TISR);
- omap_32k_timer_write(status, GP_TIMER_TISR);
- }
-
+ omap_32k_timer_ack_irq();
now = omap_32k_sync_timer_read();
while ((signed long)(now - omap_32k_last_tick)
static struct irqaction omap_32k_timer_irq = {
.name = "32KHz timer",
- .flags = SA_INTERRUPT | SA_TIMER,
+ .flags = IRQF_DISABLED | IRQF_TIMER,
.handler = omap_32k_timer_interrupt,
};
-static struct clk * gpt1_ick;
-static struct clk * gpt1_fck;
-
static __init void omap_init_32k_timer(void)
{
#ifdef CONFIG_NO_IDLE_HZ
if (cpu_class_is_omap1())
setup_irq(INT_OS_TIMER, &omap_32k_timer_irq);
- if (cpu_is_omap24xx())
- setup_irq(37, &omap_32k_timer_irq);
omap_timer.offset = omap_32k_timer_gettimeoffset;
omap_32k_last_tick = omap_32k_sync_timer_read();
+#ifdef CONFIG_ARCH_OMAP2
/* REVISIT: Check 24xx TIOCP_CFG settings after idle works */
if (cpu_is_omap24xx()) {
- omap_32k_timer_write(0, GP_TIMER_TCLR);
- omap_writel(0, CM_CLKSEL_WKUP); /* 32KHz clock source */
-
- gpt1_ick = clk_get(NULL, "gpt1_ick");
- if (IS_ERR(gpt1_ick))
- printk(KERN_ERR "Could not get gpt1_ick\n");
- else
- clk_enable(gpt1_ick);
-
- gpt1_fck = clk_get(NULL, "gpt1_fck");
- if (IS_ERR(gpt1_fck))
- printk(KERN_ERR "Could not get gpt1_fck\n");
- else
- clk_enable(gpt1_fck);
-
- mdelay(100); /* Wait for clocks to stabilize */
-
- omap_32k_timer_write(0x7, GP_TIMER_TISR);
+ gptimer = omap_dm_timer_request_specific(1);
+ BUG_ON(gptimer == NULL);
+
+ omap_dm_timer_set_source(gptimer, OMAP_TIMER_SRC_32_KHZ);
+ setup_irq(omap_dm_timer_get_irq(gptimer), &omap_32k_timer_irq);
+ omap_dm_timer_set_int_enable(gptimer,
+ OMAP_TIMER_INT_CAPTURE | OMAP_TIMER_INT_OVERFLOW |
+ OMAP_TIMER_INT_MATCH);
}
+#endif
omap_32k_timer_start(OMAP_32K_TIMER_TICK_PERIOD);
}
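
On 24xx the raw GP-timer register pokes are gone; everything goes through the dmtimer layer pulled in via <asm/arch/dmtimer.h>. A hedged sketch of driving a GP timer as a periodic tick using only the calls that appear above (the example_* names and the request_irq() wiring are illustrative; the hunk itself uses setup_irq() with a static irqaction):

#include <linux/errno.h>
#include <linux/interrupt.h>
#include <asm/arch/dmtimer.h>

static struct omap_dm_timer *tick_timer;

static int __init example_tick_init(unsigned long ticks_per_period,
				    irqreturn_t (*handler)(int, void *,
							   struct pt_regs *))
{
	tick_timer = omap_dm_timer_request_specific(1);
	if (!tick_timer)
		return -ENODEV;

	omap_dm_timer_set_source(tick_timer, OMAP_TIMER_SRC_32_KHZ);

	/* up-counter: load so that it overflows after ticks_per_period */
	omap_dm_timer_set_load(tick_timer, 1, 0xffffffff - ticks_per_period);
	omap_dm_timer_set_int_enable(tick_timer, OMAP_TIMER_INT_OVERFLOW);

	if (request_irq(omap_dm_timer_get_irq(tick_timer), handler,
			IRQF_DISABLED, "example tick", NULL))
		return -EBUSY;

	omap_dm_timer_start(tick_timer);
	return 0;
}

The load value is written as 0xffffffff minus the tick count so the up-counter overflows, and raises OMAP_TIMER_INT_OVERFLOW, after exactly that many 32 kHz ticks; the handler then acknowledges it by reading the status register and writing it back, as omap_32k_timer_ack_irq() does above.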
*/
static void __init omap_timer_init(void)
{
+#ifdef CONFIG_OMAP_DM_TIMER
+ omap_dm_timer_init();
+#endif
omap_init_32k_timer();
}
int ret;
spin_unlock(&irq_controller_lock);
- if (!(action->flags & SA_INTERRUPT))
+ if (!(action->flags & IRQF_DISABLED))
local_irq_enable();
status = 0;
action = action->next;
} while (action);
- if (status & SA_SAMPLE_RANDOM)
+ if (status & IRQF_SAMPLE_RANDOM)
add_interrupt_randomness(irq);
spin_lock_irq(&irq_controller_lock);
* so we have to be careful not to interfere with a
* running system.
*/
- if (new->flags & SA_SAMPLE_RANDOM) {
+ if (new->flags & IRQF_SAMPLE_RANDOM) {
/*
* This function might sleep, we want to call it first,
* outside of the atomic block.
p = &desc->action;
if ((old = *p) != NULL) {
/* Can't share interrupts unless both agree to */
- if (!(old->flags & new->flags & SA_SHIRQ)) {
+ if (!(old->flags & new->flags & IRQF_SHARED)) {
spin_unlock_irqrestore(&irq_controller_lock, flags);
return -EBUSY;
}
*
* Flags:
*
- * SA_SHIRQ Interrupt is shared
+ * IRQF_SHARED Interrupt is shared
*
- * SA_INTERRUPT Disable local interrupts while processing
+ * IRQF_DISABLED Disable local interrupts while processing
*
- * SA_SAMPLE_RANDOM The interrupt can be used for entropy
+ * IRQF_SAMPLE_RANDOM The interrupt can be used for entropy
*
*/
struct irqaction *action;
if (irq >= NR_IRQS || !irq_desc[irq].valid || !handler ||
- (irq_flags & SA_SHIRQ && !dev_id))
+ (irq_flags & IRQF_SHARED && !dev_id))
return -EINVAL;
	action = kmalloc(sizeof(struct irqaction), GFP_KERNEL);
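
The flag descriptions above translate directly into the request_irq() call a driver makes. A hedged example with a made-up device and handler, using the three-argument handler form this tree still uses (note that an IRQF_SHARED request must supply a non-NULL dev_id, which the check at the top of the function enforces):

#include <linux/interrupt.h>

/* Illustrative handler and attach function, not part of the patch. */
static irqreturn_t example_interrupt(int irq, void *dev_id,
				     struct pt_regs *regs)
{
	/* ... service the device identified by dev_id ... */
	return IRQ_HANDLED;
}

static int example_attach(void *dev, int irq)
{
	/* shared line, also usable as an entropy source */
	return request_irq(irq, example_interrupt,
			   IRQF_SHARED | IRQF_SAMPLE_RANDOM,
			   "example", dev);
}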
static struct irqaction timer_irq = {
.name = "timer",
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.handler = timer_interrupt,
};
* in some tests.
*/
if (request_irq(TIMER0_IRQ_NBR, gpio_poll_timer_interrupt,
- SA_SHIRQ | SA_INTERRUPT,"gpio poll", NULL)) {
+ IRQF_SHARED | IRQF_DISABLED,"gpio poll", NULL)) {
printk(KERN_CRIT "err: timer0 irq for gpio\n");
}
if (request_irq(PA_IRQ_NBR, gpio_pa_interrupt,
- SA_SHIRQ | SA_INTERRUPT,"gpio PA", NULL)) {
+ IRQF_SHARED | IRQF_DISABLED,"gpio PA", NULL)) {
printk(KERN_CRIT "err: PA irq for gpio\n");
}
return IRQ_HANDLED;
}
-/* timer is SA_SHIRQ so drivers can add stuff to the timer irq chain
- * it needs to be SA_INTERRUPT to make the jiffies update work properly
+/* timer is IRQF_SHARED so drivers can add stuff to the timer irq chain
+ * it needs to be IRQF_DISABLED to make the jiffies update work properly
*/
-static struct irqaction irq2 = { timer_interrupt, SA_SHIRQ | SA_INTERRUPT,
+static struct irqaction irq2 = { timer_interrupt, IRQF_SHARED | IRQF_DISABLED,
CPU_MASK_NONE, "timer", NULL, NULL};
void __init
* in some tests.
*/
if (request_irq(TIMER_INTR_VECT, gpio_poll_timer_interrupt,
- SA_SHIRQ | SA_INTERRUPT,"gpio poll", &alarmlist)) {
+ IRQF_SHARED | IRQF_DISABLED,"gpio poll", &alarmlist)) {
printk("err: timer0 irq for gpio\n");
}
if (request_irq(GEN_IO_INTR_VECT, gpio_pa_interrupt,
- SA_SHIRQ | SA_INTERRUPT,"gpio PA", &alarmlist)) {
+ IRQF_SHARED | IRQF_DISABLED,"gpio PA", &alarmlist)) {
printk("err: PA irq for gpio\n");
}
/* enable the gio and timer irq in global config */
crisv32_arbiter_config(EXT_REGION);
crisv32_arbiter_config(INT_REGION);
- if (request_irq(MEMARB_INTR_VECT, crisv32_arbiter_irq, SA_INTERRUPT,
+ if (request_irq(MEMARB_INTR_VECT, crisv32_arbiter_irq, IRQF_DISABLED,
"arbiter", NULL))
printk(KERN_ERR "Couldn't allocate arbiter IRQ\n");
proc_register_dynamic(&proc_root, &fasttimer_proc_entry);
#endif
#endif /* PROC_FS */
- if(request_irq(TIMER_INTR_VECT, timer_trig_interrupt, SA_INTERRUPT,
+ if(request_irq(TIMER_INTR_VECT, timer_trig_interrupt, IRQF_DISABLED,
"fast timer int", NULL))
{
printk("err: timer1 irq\n");
crisv32_do_IRQ(int irq, int block, struct pt_regs* regs)
{
/* Interrupts that may not be moved to another CPU and
- * are SA_INTERRUPT may skip blocking. This is currently
+ * are IRQF_DISABLED may skip blocking. This is currently
* only valid for the timer IRQ and the IPI and is used
* for the timer interrupt to avoid watchdog starvation.
*/
static irqreturn_t crisv32_ipi_interrupt(int irq, void *dev_id, struct pt_regs *regs);
static int send_ipi(int vector, int wait, cpumask_t cpu_mask);
-static struct irqaction irq_ipi = { crisv32_ipi_interrupt, SA_INTERRUPT,
+static struct irqaction irq_ipi = { crisv32_ipi_interrupt, IRQF_DISABLED,
CPU_MASK_NONE, "ipi", NULL, NULL};
extern void cris_mmu_init(void);
return IRQ_HANDLED;
}
-/* timer is SA_SHIRQ so drivers can add stuff to the timer irq chain
- * it needs to be SA_INTERRUPT to make the jiffies update work properly
+/* timer is IRQF_SHARED so drivers can add stuff to the timer irq chain
+ * it needs to be IRQF_DISABLED to make the jiffies update work properly
*/
-static struct irqaction irq_timer = { timer_interrupt, SA_SHIRQ | SA_INTERRUPT,
- CPU_MASK_NONE, "timer", NULL, NULL};
+static struct irqaction irq_timer = {
+	.handler = timer_interrupt,
+ .flags = IRQF_SHARED | IRQF_DISABLED,
+ .mask = CPU_MASK_NONE,
+ .name = "timer"
+};
void __init
cris_timer_init(void)
/* called by the assembler IRQ entry functions defined in irq.h
 * to dispatch the interrupts to registered handlers
* interrupts are disabled upon entry - depending on if the
- * interrupt was registred with SA_INTERRUPT or not, interrupts
+ * interrupt was registered with IRQF_DISABLED or not, interrupts
* are re-enabled or not.
*/
if (action) {
int status = 0;
-// if (!(action->flags & SA_INTERRUPT))
+// if (!(action->flags & IRQF_DISABLED))
// local_irq_enable();
do {
action = action->next;
} while (action);
- if (status & SA_SAMPLE_RANDOM)
+ if (status & IRQF_SAMPLE_RANDOM)
add_interrupt_randomness(irq);
local_irq_disable();
}
*
* Flags:
*
- * SA_SHIRQ Interrupt is shared
+ * IRQF_SHARED Interrupt is shared
*
- * SA_INTERRUPT Disable local interrupts while processing
+ * IRQF_DISABLED Disable local interrupts while processing
*
- * SA_SAMPLE_RANDOM The interrupt can be used for entropy
+ * IRQF_SAMPLE_RANDOM The interrupt can be used for entropy
*
*/
* to figure out which interrupt is which (messes up the
* interrupt freeing logic etc).
*/
- if (irqflags & SA_SHIRQ) {
+ if (irqflags & IRQF_SHARED) {
if (!dev_id)
printk("Bad boy: %s (at 0x%x) called us without a dev_id!\n",
devname, (&irq)[-1]);
* so we have to be careful not to interfere with a
* running system.
*/
- if (new->flags & SA_SAMPLE_RANDOM) {
+ if (new->flags & IRQF_SAMPLE_RANDOM) {
/*
* This function might sleep, we want to call it first,
* outside of the atomic block.
spin_lock_irqsave(&level->lock, flags);
/* can't share interrupts unless all parties agree to */
- if (level->usage != 0 && !(level->flags & new->flags & SA_SHIRQ)) {
+ if (level->usage != 0 && !(level->flags & new->flags & IRQF_SHARED)) {
spin_unlock_irqrestore(&level->lock,flags);
return -EBUSY;
}
* 2 of the License, or (at your option) any later version.
*/
-#include <linux/version.h>
+#include <linux/utsrelease.h>
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/delay.h>
static irqreturn_t timer_interrupt(int irq, void *dummy, struct pt_regs *regs);
static struct irqaction timer_irq = {
- timer_interrupt, SA_INTERRUPT, CPU_MASK_NONE, "timer", NULL, NULL
+ timer_interrupt, IRQF_DISABLED, CPU_MASK_NONE, "timer", NULL, NULL
};
static inline int set_rtc_mmss(unsigned long nowtime)
irq_handle->devname = devname;
irq_list[irq] = irq_handle;
- if (irq_handle->flags & SA_SAMPLE_RANDOM)
+ if (irq_handle->flags & IRQF_SAMPLE_RANDOM)
rand_initialize_irq(irq);
enable_irq(irq);
if (irq_list[irq]) {
irq_list[irq]->handler(irq, irq_list[irq]->dev_id, fp);
irq_list[irq]->count++;
- if (irq_list[irq]->flags & SA_SAMPLE_RANDOM)
+ if (irq_list[irq]->flags & IRQF_SAMPLE_RANDOM)
add_interrupt_randomness(irq);
}
} else {
irq_handle->dev_id = dev_id;
irq_handle->devname = devname;
irq_list[irq] = irq_handle;
- if (irq_handle->flags & SA_SAMPLE_RANDOM)
+ if (irq_handle->flags & IRQF_SAMPLE_RANDOM)
rand_initialize_irq(irq);
/* enable interrupt */
if (irq_list[vec]) {
irq_list[vec]->handler(vec, irq_list[vec]->dev_id, fp);
irq_list[vec]->count++;
- if (irq_list[vec]->flags & SA_SAMPLE_RANDOM)
+ if (irq_list[vec]->flags & IRQF_SAMPLE_RANDOM)
add_interrupt_randomness(vec);
}
} else {
bool
default y
+config LOCKDEP_SUPPORT
+ bool
+ default y
+
+config STACKTRACE_SUPPORT
+ bool
+ default y
+
config SEMAPHORE_SLEEPERS
bool
default y
menu "Kernel hacking"
+config TRACE_IRQFLAGS_SUPPORT
+ bool
+ default y
+
source "lib/Kconfig.debug"
config EARLY_PRINTK
This option will slow down process creation somewhat.
-config STACK_BACKTRACE_COLS
- int "Stack backtraces per line" if DEBUG_KERNEL
- range 1 3
- default 2
- help
- Selects how many stack backtrace entries per line to display.
-
- This can save screen space when displaying traces.
-
comment "Page alloc debug is incompatible with Software Suspend on i386"
depends on DEBUG_KERNEL && SOFTWARE_SUSPEND
*/
#include <asm/segment.h>
-#include <linux/version.h>
+#include <linux/utsrelease.h>
#include <linux/compile.h>
#include <asm/boot.h>
#include <asm/e820.h>
pci-dma.o i386_ksyms.o i387.o bootflag.o \
quirks.o i8237.o topology.o alternative.o i8253.o tsc.o
+obj-$(CONFIG_STACKTRACE) += stacktrace.o
obj-y += cpu/
obj-y += acpi/
obj-$(CONFIG_X86_BIOS_REBOOT) += reboot.o
struct smp_alt_module *mod;
unsigned long flags;
+#ifdef CONFIG_LOCKDEP
+ /*
+ * A not yet fixed binutils section handling bug prevents
+ * alternatives-replacement from working reliably, so turn
+ * it off:
+ */
+ printk("lockdep: not fixing up alternatives.\n");
+ return;
+#endif
+
if (no_replacement || smp_alt_once)
return;
BUG_ON(!smp && (num_online_cpus() > 1));
return err;
}
+#ifdef CONFIG_HOTPLUG_CPU
static int cpuid_class_cpu_callback(struct notifier_block *nfb, unsigned long action, void *hcpu)
{
unsigned int cpu = (unsigned long)hcpu;
{
.notifier_call = cpuid_class_cpu_callback,
};
+#endif /* CONFIG_HOTPLUG_CPU */
static int __init cpuid_init(void)
{
if (err != 0)
goto out_class;
}
- register_cpu_notifier(&cpuid_class_cpu_notifier);
+ register_hotcpu_notifier(&cpuid_class_cpu_notifier);
err = 0;
goto out;
class_device_destroy(cpuid_class, MKDEV(CPUID_MAJOR, cpu));
class_destroy(cpuid_class);
unregister_chrdev(CPUID_MAJOR, "cpu/cpuid");
- unregister_cpu_notifier(&cpuid_class_cpu_notifier);
+ unregister_hotcpu_notifier(&cpuid_class_cpu_notifier);
}
module_init(cpuid_init);
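
register_hotcpu_notifier()/unregister_hotcpu_notifier() compile to no-ops when CONFIG_HOTPLUG_CPU is off, which is why the callback and its notifier_block are now wrapped in the same #ifdef. A sketch of the pattern with an illustrative callback (the per-CPU work is only indicated by comments):

#include <linux/cpu.h>
#include <linux/init.h>
#include <linux/notifier.h>

static int example_cpu_callback(struct notifier_block *nfb,
				unsigned long action, void *hcpu)
{
	unsigned int cpu = (unsigned long)hcpu;

	switch (action) {
	case CPU_ONLINE:
		/* set up the per-CPU resource for 'cpu' */
		break;
	case CPU_DEAD:
		/* tear it down again */
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block example_cpu_notifier = {
	.notifier_call = example_cpu_callback,
};

static int __init example_init(void)
{
	/* a no-op on !CONFIG_HOTPLUG_CPU kernels */
	register_hotcpu_notifier(&example_cpu_notifier);
	return 0;
}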
#include <linux/linkage.h>
#include <asm/thread_info.h>
+#include <asm/irqflags.h>
#include <asm/errno.h>
#include <asm/segment.h>
#include <asm/smp.h>
VM_MASK = 0x00020000
#ifdef CONFIG_PREEMPT
-#define preempt_stop cli
+#define preempt_stop cli; TRACE_IRQS_OFF
#else
#define preempt_stop
#define resume_kernel restore_nocheck
#endif
+.macro TRACE_IRQS_IRET
+#ifdef CONFIG_TRACE_IRQFLAGS
+ testl $IF_MASK,EFLAGS(%esp) # interrupts off?
+ jz 1f
+ TRACE_IRQS_ON
+1:
+#endif
+.endm
+
#ifdef CONFIG_VM86
#define resume_userspace_sig check_userspace
#else
CFI_REGISTER esp, ebp
movl TSS_sysenter_esp0(%esp),%esp
sysenter_past_esp:
+ /*
+ * No need to follow this irqs on/off section: the syscall
+ * disabled irqs and here we enable it straight after entry:
+ */
sti
pushl $(__USER_DS)
CFI_ADJUST_CFA_OFFSET 4
call *sys_call_table(,%eax,4)
movl %eax,EAX(%esp)
cli
+ TRACE_IRQS_OFF
movl TI_flags(%ebp), %ecx
testw $_TIF_ALLWORK_MASK, %cx
jne syscall_exit_work
movl EIP(%esp), %edx
movl OLDESP(%esp), %ecx
xorl %ebp,%ebp
+ TRACE_IRQS_ON
sti
sysexit
CFI_ENDPROC
cli # make sure we don't miss an interrupt
# setting need_resched or sigpending
# between sampling and the iret
+ TRACE_IRQS_OFF
movl TI_flags(%ebp), %ecx
testw $_TIF_ALLWORK_MASK, %cx # current->work
jne syscall_exit_work
CFI_REMEMBER_STATE
je ldt_ss # returning to user-space with LDT SS
restore_nocheck:
+ TRACE_IRQS_IRET
+restore_nocheck_notrace:
RESTORE_REGS
addl $4, %esp
CFI_ADJUST_CFA_OFFSET -4
1: iret
.section .fixup,"ax"
iret_exc:
+ TRACE_IRQS_ON
sti
pushl $0 # no error code
pushl $do_iret_error
subl $8, %esp # reserve space for switch16 pointer
CFI_ADJUST_CFA_OFFSET 8
cli
+ TRACE_IRQS_OFF
movl %esp, %eax
/* Set up the 16bit stack frame with switch32 pointer on top,
* and a switch16 pointer on top of the current frame. */
call setup_x86_bogus_stack
CFI_ADJUST_CFA_OFFSET -8 # frame has moved
+ TRACE_IRQS_IRET
RESTORE_REGS
lss 20+4(%esp), %esp # switch to 16bit stack
1: iret
cli # make sure we don't miss an interrupt
# setting need_resched or sigpending
# between sampling and the iret
+ TRACE_IRQS_OFF
movl TI_flags(%ebp), %ecx
andl $_TIF_WORK_MASK, %ecx # is there any work to be done other
# than syscall tracing?
syscall_exit_work:
testb $(_TIF_SYSCALL_TRACE|_TIF_SYSCALL_AUDIT|_TIF_SINGLESTEP), %cl
jz work_pending
+ TRACE_IRQS_ON
sti # could let do_syscall_trace() call
# schedule() instead
movl %esp, %eax
vector=vector+1
.endr
+/*
+ * the CPU automatically disables interrupts when executing an IRQ vector,
+ * so IRQ-flags tracing has to follow that:
+ */
ALIGN
common_interrupt:
SAVE_ALL
+ TRACE_IRQS_OFF
movl %esp,%eax
call do_IRQ
jmp ret_from_intr
pushl $~(nr); \
CFI_ADJUST_CFA_OFFSET 4; \
SAVE_ALL; \
+ TRACE_IRQS_OFF \
movl %esp,%eax; \
call smp_/**/name; \
- jmp ret_from_intr; \
+ jmp ret_from_intr; \
CFI_ENDPROC
/* The include is where all of the SMP etc. interrupts come from */
xorl %edx,%edx # zero error code
movl %esp,%eax # pt_regs pointer
call do_nmi
- jmp restore_all
+ jmp restore_nocheck_notrace
CFI_ENDPROC
nmi_stack_fixup:
irqctx->tinfo.task = NULL;
irqctx->tinfo.exec_domain = NULL;
irqctx->tinfo.cpu = cpu;
- irqctx->tinfo.preempt_count = SOFTIRQ_OFFSET;
+ irqctx->tinfo.preempt_count = 0;
irqctx->tinfo.addr_limit = MAKE_MM_SEG(0);
softirq_ctx[cpu] = irqctx;
: "0"(isp)
: "memory", "cc", "edx", "ecx", "eax"
);
+ /*
+	 * Shouldn't happen, we returned above if in_interrupt():
+ */
+ WARN_ON_ONCE(softirq_count());
}
local_irq_restore(flags);
static __init void nmi_cpu_busy(void *data)
{
volatile int *endflag = data;
- local_irq_enable();
+ local_irq_enable_in_hardirq();
/* Intentionally don't use cpu_relax here. This is
to make sure that the performance counter really ticks,
even if there is a simulator or similar that catches the
--- /dev/null
+/*
+ * arch/i386/kernel/stacktrace.c
+ *
+ * Stack trace management functions
+ *
+ * Copyright (C) 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
+ */
+#include <linux/sched.h>
+#include <linux/stacktrace.h>
+
+static inline int valid_stack_ptr(struct thread_info *tinfo, void *p)
+{
+ return p > (void *)tinfo &&
+ p < (void *)tinfo + THREAD_SIZE - 3;
+}
+
+/*
+ * Save stack-backtrace addresses into a stack_trace buffer:
+ */
+static inline unsigned long
+save_context_stack(struct stack_trace *trace, unsigned int skip,
+ struct thread_info *tinfo, unsigned long *stack,
+ unsigned long ebp)
+{
+ unsigned long addr;
+
+#ifdef CONFIG_FRAME_POINTER
+ while (valid_stack_ptr(tinfo, (void *)ebp)) {
+ addr = *(unsigned long *)(ebp + 4);
+ if (!skip)
+ trace->entries[trace->nr_entries++] = addr;
+ else
+ skip--;
+ if (trace->nr_entries >= trace->max_entries)
+ break;
+ /*
+ * break out of recursive entries (such as
+ * end_of_stack_stop_unwind_function):
+ */
+ if (ebp == *(unsigned long *)ebp)
+ break;
+
+ ebp = *(unsigned long *)ebp;
+ }
+#else
+ while (valid_stack_ptr(tinfo, stack)) {
+ addr = *stack++;
+ if (__kernel_text_address(addr)) {
+ if (!skip)
+ trace->entries[trace->nr_entries++] = addr;
+ else
+ skip--;
+ if (trace->nr_entries >= trace->max_entries)
+ break;
+ }
+ }
+#endif
+
+ return ebp;
+}
+
+/*
+ * Save stack-backtrace addresses into a stack_trace buffer.
+ * If all_contexts is set, all contexts (hardirq, softirq and process)
+ * are saved. If not set then only the current context is saved.
+ */
+void save_stack_trace(struct stack_trace *trace,
+ struct task_struct *task, int all_contexts,
+ unsigned int skip)
+{
+ unsigned long ebp;
+ unsigned long *stack = &ebp;
+
+ WARN_ON(trace->nr_entries || !trace->max_entries);
+
+ if (!task || task == current) {
+ /* Grab ebp right from our regs: */
+ asm ("movl %%ebp, %0" : "=r" (ebp));
+ } else {
+ /* ebp is the last reg pushed by switch_to(): */
+ ebp = *(unsigned long *) task->thread.esp;
+ }
+
+ while (1) {
+ struct thread_info *context = (struct thread_info *)
+ ((unsigned long)stack & (~(THREAD_SIZE - 1)));
+
+ ebp = save_context_stack(trace, skip, context, stack, ebp);
+ stack = (unsigned long *)context->previous_esp;
+ if (!all_contexts || !stack ||
+ trace->nr_entries >= trace->max_entries)
+ break;
+ trace->entries[trace->nr_entries++] = ULONG_MAX;
+ if (trace->nr_entries >= trace->max_entries)
+ break;
+ }
+}
+
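+
For callers, the new interface boils down to filling a struct stack_trace with an entries buffer and its limits, then calling save_stack_trace(). A hedged usage sketch (buffer depth and printk format are arbitrary; the field names follow the ones the function itself manipulates above):

#include <linux/kernel.h>
#include <linux/stacktrace.h>

#define EXAMPLE_DEPTH 16

static void example_dump_stack(void)
{
	unsigned long buf[EXAMPLE_DEPTH];
	struct stack_trace trace = {
		.nr_entries	= 0,
		.max_entries	= EXAMPLE_DEPTH,
		.entries	= buf,
	};
	unsigned int i;

	/* current context only, skipping this helper's own frame */
	save_stack_trace(&trace, NULL, 0, 1);

	for (i = 0; i < trace.nr_entries; i++)
		printk(KERN_DEBUG "  [<%08lx>]\n", trace.entries[i]);
}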
}
/*
- * Print CONFIG_STACK_BACKTRACE_COLS address/symbol entries per line.
+ * Print one address/symbol entry per line.
*/
-static inline int print_addr_and_symbol(unsigned long addr, char *log_lvl,
- int printed)
+static inline void print_addr_and_symbol(unsigned long addr, char *log_lvl)
{
- if (!printed)
- printk(log_lvl);
-
-#if CONFIG_STACK_BACKTRACE_COLS == 1
printk(" [<%08lx>] ", addr);
-#else
- printk(" <%08lx> ", addr);
-#endif
- print_symbol("%s", addr);
- printed = (printed + 1) % CONFIG_STACK_BACKTRACE_COLS;
- if (printed)
- printk(" ");
- else
- printk("\n");
-
- return printed;
+ print_symbol("%s\n", addr);
}
static inline unsigned long print_context_stack(struct thread_info *tinfo,
char *log_lvl)
{
unsigned long addr;
- int printed = 0; /* nr of entries already printed on current line */
#ifdef CONFIG_FRAME_POINTER
while (valid_stack_ptr(tinfo, (void *)ebp)) {
addr = *(unsigned long *)(ebp + 4);
- printed = print_addr_and_symbol(addr, log_lvl, printed);
+ print_addr_and_symbol(addr, log_lvl);
/*
* break out of recursive entries (such as
* end_of_stack_stop_unwind_function):
while (valid_stack_ptr(tinfo, stack)) {
addr = *stack++;
if (__kernel_text_address(addr))
- printed = print_addr_and_symbol(addr, log_lvl, printed);
+ print_addr_and_symbol(addr, log_lvl);
}
#endif
- if (printed)
- printk("\n");
-
return ebp;
}
-static asmlinkage int show_trace_unwind(struct unwind_frame_info *info, void *log_lvl)
+static asmlinkage int
+show_trace_unwind(struct unwind_frame_info *info, void *log_lvl)
{
int n = 0;
- int printed = 0; /* nr of entries already printed on current line */
while (unwind(info) == 0 && UNW_PC(info)) {
- ++n;
- printed = print_addr_and_symbol(UNW_PC(info), log_lvl, printed);
+ n++;
+ print_addr_and_symbol(UNW_PC(info), log_lvl);
if (arch_unw_user_mode(info))
break;
}
- if (printed)
- printk("\n");
return n;
}
{
}
-static struct irqaction irq0 = { timer_interrupt, SA_INTERRUPT, CPU_MASK_NONE, "timer", NULL, NULL};
+static struct irqaction irq0 = { timer_interrupt, IRQF_DISABLED, CPU_MASK_NONE, "timer", NULL, NULL};
/**
* time_init_hook - do any specific initialisations for the system timer.
static struct irqaction irq0 = {
.handler = timer_interrupt,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.name = "timer",
};
{
}
-static struct irqaction irq0 = { timer_interrupt, SA_INTERRUPT, CPU_MASK_NONE, "timer", NULL, NULL};
+static struct irqaction irq0 = { timer_interrupt, IRQF_DISABLED, CPU_MASK_NONE, "timer", NULL, NULL};
void __init time_init_hook(void)
{
for (i = 0; i < 16; i++) {
if (!(mask & (1 << i)))
continue;
- if (pirq_penalty[i] < pirq_penalty[newirq] && can_request_irq(i, SA_SHIRQ))
+ if (pirq_penalty[i] < pirq_penalty[newirq] && can_request_irq(i, IRQF_SHARED))
newirq = i;
}
}
#define NR_PORTS 1 /* only one port for now */
-#define IRQ_T(info) ((info->flags & ASYNC_SHARE_IRQ) ? SA_SHIRQ : SA_INTERRUPT)
+#define IRQ_T(info) ((info->flags & ASYNC_SHARE_IRQ) ? IRQF_SHARED : IRQF_DISABLED)
#define SSC_GETCHAR 21
static struct irqaction ipi_irqaction = {
.handler = handle_IPI,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.name = "IPI"
};
#endif
*/
static void
-ia64_mca_modify_comm(const task_t *previous_current)
+ia64_mca_modify_comm(const struct task_struct *previous_current)
{
char *p, comm[sizeof(current->comm)];
if (previous_current->pid)
* that we can do backtrace on the MCA/INIT handler code itself.
*/
-static task_t *
+static struct task_struct *
ia64_mca_modify_original_stack(struct pt_regs *regs,
const struct switch_stack *sw,
struct ia64_sal_os_state *sos,
ia64_va va;
extern char ia64_leave_kernel[]; /* Need asm address, not function descriptor */
const pal_min_state_area_t *ms = sos->pal_min_state;
- task_t *previous_current;
+ struct task_struct *previous_current;
struct pt_regs *old_regs;
struct switch_stack *old_sw;
unsigned size = sizeof(struct pt_regs) +
pal_processor_state_info_t *psp = (pal_processor_state_info_t *)
&sos->proc_state_param;
int recover, cpu = smp_processor_id();
- task_t *previous_current;
+ struct task_struct *previous_current;
struct ia64_mca_notify_die nd =
{ .sos = sos, .monarch_cpu = &monarch_cpu };
{
static atomic_t slaves;
static atomic_t monarchs;
- task_t *previous_current;
+ struct task_struct *previous_current;
int cpu = smp_processor_id();
struct ia64_mca_notify_die nd =
{ .sos = sos, .monarch_cpu = &monarch_cpu };
static struct irqaction cmci_irqaction = {
.handler = ia64_mca_cmc_int_handler,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.name = "cmc_hndlr"
};
static struct irqaction cmcp_irqaction = {
.handler = ia64_mca_cmc_int_caller,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.name = "cmc_poll"
};
static struct irqaction mca_rdzv_irqaction = {
.handler = ia64_mca_rendez_int_handler,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.name = "mca_rdzv"
};
static struct irqaction mca_wkup_irqaction = {
.handler = ia64_mca_wakeup_int_handler,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.name = "mca_wkup"
};
#ifdef CONFIG_ACPI
static struct irqaction mca_cpe_irqaction = {
.handler = ia64_mca_cpe_int_handler,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.name = "cpe_hndlr"
};
static struct irqaction mca_cpep_irqaction = {
.handler = ia64_mca_cpe_int_caller,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.name = "cpe_poll"
};
#endif /* CONFIG_ACPI */
static struct irqaction perfmon_irqaction = {
.handler = pfm_interrupt_handler,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.name = "perfmon"
};
extern void start_ap (void);
extern unsigned long ia64_iobase;
-task_t *task_for_booting_cpu;
+struct task_struct *task_for_booting_cpu;
/*
* State for each CPU
static struct irqaction timer_irqaction = {
.handler = timer_interrupt,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.name = "timer"
};
*/
void hub_error_init(struct hubdev_info *hubdev_info)
{
- if (request_irq(SGI_II_ERROR, (void *)hub_eint_handler, SA_SHIRQ,
+ if (request_irq(SGI_II_ERROR, (void *)hub_eint_handler, IRQF_SHARED,
"SN_hub_error", (void *)hubdev_info))
printk("hub_error_init: Failed to request_irq for 0x%p\n",
hubdev_info);
void ice_error_init(struct hubdev_info *hubdev_info)
{
if (request_irq
- (SGI_TIO_ERROR, (void *)hub_eint_handler, SA_SHIRQ, "SN_TIO_error",
+ (SGI_TIO_ERROR, (void *)hub_eint_handler, IRQF_SHARED, "SN_TIO_error",
(void *)hubdev_info))
printk("ice_error_init: request_irq() error hubdev_info 0x%p\n",
hubdev_info);
init_waitqueue_head(&part->channel_mgr_wq);
sprintf(part->IPI_owner, "xpc%02d", partid);
- ret = request_irq(SGI_XPC_NOTIFY, xpc_notify_IRQ_handler, SA_SHIRQ,
+ ret = request_irq(SGI_XPC_NOTIFY, xpc_notify_IRQ_handler, IRQF_SHARED,
part->IPI_owner, (void *) (u64) partid);
if (ret != 0) {
dev_err(xpc_chan, "can't register NOTIFY IRQ handler, "
* register the bridge's error interrupt handler
*/
if (request_irq(SGI_PCIASIC_ERROR, (void *)pcibr_error_intr_handler,
- SA_SHIRQ, "PCIBR error", (void *)(soft))) {
+ IRQF_SHARED, "PCIBR error", (void *)(soft))) {
printk(KERN_WARNING
"pcibr cannot allocate interrupt for error handler\n");
}
if (request_irq(SGI_TIOCA_ERROR,
tioca_error_intr_handler,
- SA_SHIRQ, "TIOCA error", (void *)tioca_common))
+ IRQF_SHARED, "TIOCA error", (void *)tioca_common))
printk(KERN_WARNING
"%s: Unable to get irq %d. "
"Error interrupts won't be routed for TIOCA bus %d\n",
if (request_irq(SGI_PCIASIC_ERROR,
tioce_error_intr_handler,
- SA_SHIRQ, "TIOCE error", (void *)tioce_common))
+ IRQF_SHARED, "TIOCE error", (void *)tioce_common))
printk(KERN_WARNING
"%s: Unable to get irq %d. "
"Error interrupts won't be routed for "
return IRQ_HANDLED;
}
-struct irqaction irq0 = { timer_interrupt, SA_INTERRUPT, CPU_MASK_NONE,
+struct irqaction irq0 = { timer_interrupt, IRQF_DISABLED, CPU_MASK_NONE,
"MFT2", NULL, NULL };
void __init time_init(void)
*
* 07/08/99: rewamp of the interrupt handling - we now have two types of
* interrupts, normal and fast handlers, fast handlers being
- * marked with SA_INTERRUPT and runs with all other interrupts
+ * marked with IRQF_DISABLED and runs with all other interrupts
* disabled. Normal interrupts disable their own source but
* run with all other interrupt sources enabled.
* PORTS and EXTER interrupts are always shared even if the
/* override auto int and install CIA handler */
m68k_setup_irq_controller(&auto_irq_controller, base->handler_irq, 1);
m68k_irq_startup(base->handler_irq);
- request_irq(base->handler_irq, cia_handler, SA_SHIRQ, base->name, base);
+ request_irq(base->handler_irq, cia_handler, IRQF_SHARED, base->name, base);
}
prev = irq_list + irq;
if (*prev) {
/* Can't share interrupts unless both agree to */
- if (!((*prev)->flags & node->flags & SA_SHIRQ)) {
+ if (!((*prev)->flags & node->flags & IRQF_SHARED)) {
spin_unlock_irqrestore(&contr->lock, flags);
return -EBUSY;
}
volatile unsigned char *icrp;
volatile unsigned long *imrp;
- request_irq(MCFINT_VECBASE + MCFINT_PIT1, handler, SA_INTERRUPT,
+ request_irq(MCFINT_VECBASE + MCFINT_PIT1, handler, IRQF_DISABLED,
"ColdFire Timer", NULL);
icrp = (volatile unsigned char *) (MCF_IPSBAR + MCFICM_INTC0 +
__raw_writew(MCFTIMER_TMR_ENORI | MCFTIMER_TMR_CLK16 |
MCFTIMER_TMR_RESTART | MCFTIMER_TMR_ENABLE, TA(MCFTIMER_TMR));
- request_irq(mcf_timervector, handler, SA_INTERRUPT, "timer", NULL);
+ request_irq(mcf_timervector, handler, IRQF_DISABLED, "timer", NULL);
mcf_settimericr(1, mcf_timerlevel);
#ifdef CONFIG_HIGHPROFILE
MCFTIMER_TMR_RESTART | MCFTIMER_TMR_ENABLE, PA(MCFTIMER_TMR));
request_irq(mcf_profilevector, coldfire_profile_tick,
- (SA_INTERRUPT | IRQ_FLG_FAST), "profile timer", NULL);
+ (IRQF_DISABLED | IRQ_FLG_FAST), "profile timer", NULL);
mcf_settimericr(2, 7);
}
#error Unknown Au1x00 SOC
#endif
- if (request_irq(irq_nr, dbdma_interrupt, SA_INTERRUPT,
+ if (request_irq(irq_nr, dbdma_interrupt, IRQF_DISABLED,
"Au1xxx dbdma", (void *)dbdma_gptr))
printk("Can't get 1550 dbdma irq");
}
* can avoid it. --cgray
*/
action.dev_id = handler;
- action.flags = SA_INTERRUPT;
+ action.flags = IRQF_DISABLED;
cpus_clear(action.mask);
action.name = "Au1xxx TOY";
action.handler = handler;
*/
/* request the USB device transfer complete interrupt */
- if (request_irq(AU1000_USB_DEV_REQ_INT, req_sus_intr, SA_INTERRUPT,
+ if (request_irq(AU1000_USB_DEV_REQ_INT, req_sus_intr, IRQF_DISABLED,
"USBdev req", &usbdev)) {
err("Can't get device request intr");
ret = -ENXIO;
goto out;
}
/* request the USB device suspend interrupt */
- if (request_irq(AU1000_USB_DEV_SUS_INT, req_sus_intr, SA_INTERRUPT,
+ if (request_irq(AU1000_USB_DEV_SUS_INT, req_sus_intr, IRQF_DISABLED,
"USBdev sus", &usbdev)) {
err("Can't get device suspend intr");
ret = -ENXIO;
if ((ep0->indma = request_au1000_dma(ep_dma_id[0].id,
ep_dma_id[0].str,
dma_done_ep0_intr,
- SA_INTERRUPT,
+ IRQF_DISABLED,
&usbdev)) < 0) {
err("Can't get %s DMA", ep_dma_id[0].str);
ret = -ENXIO;
request_au1000_dma(ep_dma_id[ep->address].id,
ep_dma_id[ep->address].str,
dma_done_ep_intr,
- SA_INTERRUPT,
+ IRQF_DISABLED,
&usbdev);
if (ep->indma < 0) {
err("Can't get %s DMA",
static int iodev_open(struct inode *i, struct file *f)
{
- return request_irq(iodev_irq, iodev_irqhdl, SA_INTERRUPT,
+ return request_irq(iodev_irq, iodev_irqhdl, IRQF_DISABLED,
iodev_name, &miscdev);
}
};
static struct irqaction busirq = {
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.name = "bus error",
};
case MACH_DS23100: /* DS2100/DS3100 Pmin/Pmax */
board_be_handler = dec_kn01_be_handler;
busirq.handler = dec_kn01_be_interrupt;
- busirq.flags |= SA_SHIRQ;
+ busirq.flags |= IRQF_SHARED;
dec_kn01_be_init();
break;
case MACH_DS5000_1XX: /* DS5000/1xx 3min */
* the values to the correct interrupt line.
*/
timer.handler = gt64120_irq;
- timer.flags = SA_SHIRQ | SA_INTERRUPT;
+ timer.flags = IRQF_SHARED | IRQF_DISABLED;
timer.name = "timer";
timer.dev_id = NULL;
timer.next = NULL;
#endif
FEXPORT(ret_from_fork)
- jal schedule_tail # a0 = task_t *prev
+ jal schedule_tail # a0 = struct task_struct *prev
FEXPORT(syscall_exit)
local_irq_disable # make sure need_resched and
* used in sys_sched_set/getaffinity() in kernel/sched.c, so
* cloned here.
*/
-static inline task_t *find_process_by_pid(pid_t pid)
+static inline struct task_struct *find_process_by_pid(pid_t pid)
{
return pid ? find_task_by_pid(pid) : current;
}
cpumask_t new_mask;
cpumask_t effective_mask;
int retval;
- task_t *p;
+ struct task_struct *p;
if (len < sizeof(new_mask))
return -EINVAL;
unsigned int real_len;
cpumask_t mask;
int retval;
- task_t *p;
+ struct task_struct *p;
real_len = sizeof(mask);
if (len < real_len)
static struct irqaction rtlx_irq = {
.handler = rtlx_interrupt,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.name = "RTLX",
};
static struct irqaction irq_resched = {
.handler = ipi_resched_interrupt,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.name = "IPI_resched"
};
static struct irqaction irq_call = {
.handler = ipi_call_interrupt,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.name = "IPI_call"
};
set_vi_handler(MIPS_CPU_IPI_IRQ, ipi_irq_dispatch);
irq_ipi.handler = ipi_interrupt;
- irq_ipi.flags = SA_INTERRUPT;
+ irq_ipi.flags = IRQF_DISABLED;
irq_ipi.name = "SMTC_IPI";
setup_irq_smtc(cpu_ipi_irq, &irq_ipi, (0x100 << MIPS_CPU_IPI_IRQ));
static struct irqaction timer_irqaction = {
.handler = timer_interrupt,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.name = "timer",
};
}
static struct irqaction cascade_mv64340 = {
- no_action, SA_INTERRUPT, CPU_MASK_NONE, "MV64340-Cascade", NULL, NULL
+ no_action, IRQF_DISABLED, CPU_MASK_NONE, "MV64340-Cascade", NULL, NULL
};
void __init arch_init_irq(void)
#include <asm/system.h>
static struct irqaction cascade_mv64340 = {
- no_action, SA_INTERRUPT, CPU_MASK_NONE, "MV64340-Cascade", NULL, NULL
+ no_action, IRQF_DISABLED, CPU_MASK_NONE, "MV64340-Cascade", NULL, NULL
};
void __init arch_init_irq(void)
extern void cpci_irq_init(void);
static struct irqaction cascade_fpga = {
- no_action, SA_INTERRUPT, CPU_MASK_NONE, "cascade via FPGA", NULL, NULL
+ no_action, IRQF_DISABLED, CPU_MASK_NONE, "cascade via FPGA", NULL, NULL
};
static struct irqaction cascade_mv64340 = {
- no_action, SA_INTERRUPT, CPU_MASK_NONE, "cascade via MV64340", NULL, NULL
+ no_action, IRQF_DISABLED, CPU_MASK_NONE, "cascade via MV64340", NULL, NULL
};
extern void ll_uart_irq(struct pt_regs *regs);
* the values to the correct interrupt line.
*/
timer.handler = >64240_p0int_irq;
- timer.flags = SA_SHIRQ | SA_INTERRUPT;
+ timer.flags = IRQF_SHARED | IRQF_DISABLED;
timer.name = "timer";
timer.dev_id = NULL;
timer.next = NULL;
static struct irqaction gic_action = {
.handler = no_action,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.name = "GIC",
};
static struct irqaction timer_action = {
.handler = no_action,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.name = "Timer",
};
static struct irqaction local0_cascade = {
.handler = no_action,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.name = "local0 cascade",
};
static struct irqaction local1_cascade = {
.handler = no_action,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.name = "local1 cascade",
};
static struct irqaction buserr = {
.handler = no_action,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.name = "Bus Error",
};
static struct irqaction map0_cascade = {
.handler = no_action,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.name = "mapable0 cascade",
};
#ifdef USE_LIO3_IRQ
static struct irqaction map1_cascade = {
.handler = no_action,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.name = "mapable1 cascade",
};
#define SGI_INTERRUPTS SGINT_END
}
/*
- * This code is unnecessarily complex, because we do SA_INTERRUPT
+ * This code is unnecessarily complex, because we do IRQF_DISABLED
* intr enabling. Basically, once we grab the set of intrs we need
* to service, we must mask _all_ these interrupts; firstly, to make
* sure the same intr does not intr again, causing recursion that
static struct irqaction rt_irqaction = {
.handler = ip27_rt_timer_interrupt,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.mask = CPU_MASK_NONE,
.name = "timer"
};
extern irqreturn_t crime_cpuerr_intr (int irq, void *dev_id,
struct pt_regs *regs);
-struct irqaction memerr_irq = { crime_memerr_intr, SA_INTERRUPT,
+struct irqaction memerr_irq = { crime_memerr_intr, IRQF_DISABLED,
CPU_MASK_NONE, "CRIME memory error", NULL, NULL };
-struct irqaction cpuerr_irq = { crime_cpuerr_intr, SA_INTERRUPT,
+struct irqaction cpuerr_irq = { crime_cpuerr_intr, IRQF_DISABLED,
CPU_MASK_NONE, "CRIME CPU error", NULL, NULL };
/*
MACEISA_KEYB_POLL_INT | \
MACEISA_MOUSE_INT | \
MACEISA_MOUSE_POLL_INT | \
- MACEISA_TIMER0_INT | \
- MACEISA_TIMER1_INT | \
- MACEISA_TIMER2_INT)
+	MACEISA_TIMER0_INT | \
+	MACEISA_TIMER1_INT | \
+	MACEISA_TIMER2_INT)
#define MACEISA_SUPERIO_INT (MACEISA_PARALLEL_INT | \
MACEISA_PAR_CTXA_INT | \
MACEISA_PAR_CTXB_INT | \
case MACEISA_AUDIO_SW_IRQ ... MACEISA_AUDIO3_MERR_IRQ:
crime_int = MACE_AUDIO_INT;
break;
- case MACEISA_RTC_IRQ ... MACEISA_TIMER2_IRQ:
+	case MACEISA_RTC_IRQ ... MACEISA_TIMER2_IRQ:
crime_int = MACE_MISC_INT;
break;
case MACEISA_PARALLEL_IRQ ... MACEISA_SERIAL2_RDMAOR_IRQ:
}
//#define TOSHIBA_RBTX4927_PIC_ACTION(s) { no_action, 0, CPU_MASK_NONE, s, NULL, NULL }
-#define TOSHIBA_RBTX4927_PIC_ACTION(s) { no_action, SA_SHIRQ, CPU_MASK_NONE, s, NULL, NULL }
+#define TOSHIBA_RBTX4927_PIC_ACTION(s) { no_action, IRQF_SHARED, CPU_MASK_NONE, s, NULL, NULL }
static struct irqaction toshiba_rbtx4927_irq_ioc_action =
TOSHIBA_RBTX4927_PIC_ACTION(TOSHIBA_RBTX4927_IOC_NAME);
#ifdef CONFIG_TOSHIBA_FPCIB0
static struct irqaction timer_action = {
.handler = timer_interrupt,
.name = "timer",
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
};
#ifdef CONFIG_SMP
static struct irqaction ipi_action = {
.handler = ipi_interrupt,
.name = "IPI",
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
};
#endif
--- /dev/null
+#
+# Automatically generated make config: don't edit
+# Linux kernel version: 2.6.17
+# Mon Jul 3 12:08:41 2006
+#
+# CONFIG_PPC64 is not set
+CONFIG_PPC32=y
+CONFIG_PPC_MERGE=y
+CONFIG_MMU=y
+CONFIG_GENERIC_HARDIRQS=y
+CONFIG_RWSEM_XCHGADD_ALGORITHM=y
+CONFIG_GENERIC_HWEIGHT=y
+CONFIG_GENERIC_CALIBRATE_DELAY=y
+CONFIG_GENERIC_FIND_NEXT_BIT=y
+CONFIG_PPC=y
+CONFIG_EARLY_PRINTK=y
+CONFIG_GENERIC_NVRAM=y
+CONFIG_SCHED_NO_NO_OMIT_FRAME_POINTER=y
+CONFIG_ARCH_MAY_HAVE_PC_FDC=y
+CONFIG_PPC_OF=y
+CONFIG_PPC_UDBG_16550=y
+CONFIG_GENERIC_TBSYNC=y
+# CONFIG_DEFAULT_UIMAGE is not set
+
+#
+# Processor support
+#
+CONFIG_CLASSIC32=y
+# CONFIG_PPC_52xx is not set
+# CONFIG_PPC_82xx is not set
+# CONFIG_PPC_83xx is not set
+# CONFIG_PPC_85xx is not set
+# CONFIG_PPC_86xx is not set
+# CONFIG_40x is not set
+# CONFIG_44x is not set
+# CONFIG_8xx is not set
+# CONFIG_E200 is not set
+CONFIG_6xx=y
+CONFIG_PPC_FPU=y
+# CONFIG_ALTIVEC is not set
+CONFIG_PPC_STD_MMU=y
+CONFIG_PPC_STD_MMU_32=y
+CONFIG_SMP=y
+CONFIG_NR_CPUS=4
+
+#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+CONFIG_LOCK_KERNEL=y
+CONFIG_INIT_ENV_ARG_LIMIT=32
+
+#
+# General setup
+#
+CONFIG_LOCALVERSION=""
+# CONFIG_LOCALVERSION_AUTO is not set
+CONFIG_SWAP=y
+CONFIG_SYSVIPC=y
+CONFIG_POSIX_MQUEUE=y
+# CONFIG_BSD_PROCESS_ACCT is not set
+CONFIG_SYSCTL=y
+# CONFIG_AUDIT is not set
+CONFIG_IKCONFIG=y
+CONFIG_IKCONFIG_PROC=y
+# CONFIG_CPUSETS is not set
+# CONFIG_RELAY is not set
+CONFIG_INITRAMFS_SOURCE=""
+# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
+# CONFIG_EMBEDDED is not set
+CONFIG_KALLSYMS=y
+# CONFIG_KALLSYMS_ALL is not set
+# CONFIG_KALLSYMS_EXTRA_PASS is not set
+CONFIG_HOTPLUG=y
+CONFIG_PRINTK=y
+CONFIG_BUG=y
+CONFIG_ELF_CORE=y
+CONFIG_BASE_FULL=y
+CONFIG_FUTEX=y
+CONFIG_EPOLL=y
+CONFIG_SHMEM=y
+CONFIG_SLAB=y
+# CONFIG_TINY_SHMEM is not set
+CONFIG_BASE_SMALL=0
+# CONFIG_SLOB is not set
+
+#
+# Loadable module support
+#
+CONFIG_MODULES=y
+CONFIG_MODULE_UNLOAD=y
+CONFIG_MODULE_FORCE_UNLOAD=y
+# CONFIG_MODVERSIONS is not set
+# CONFIG_MODULE_SRCVERSION_ALL is not set
+CONFIG_KMOD=y
+CONFIG_STOP_MACHINE=y
+
+#
+# Block layer
+#
+CONFIG_LBD=y
+# CONFIG_BLK_DEV_IO_TRACE is not set
+# CONFIG_LSF is not set
+
+#
+# IO Schedulers
+#
+CONFIG_IOSCHED_NOOP=y
+CONFIG_IOSCHED_AS=y
+CONFIG_IOSCHED_DEADLINE=y
+CONFIG_IOSCHED_CFQ=y
+CONFIG_DEFAULT_AS=y
+# CONFIG_DEFAULT_DEADLINE is not set
+# CONFIG_DEFAULT_CFQ is not set
+# CONFIG_DEFAULT_NOOP is not set
+CONFIG_DEFAULT_IOSCHED="anticipatory"
+
+#
+# Platform support
+#
+CONFIG_PPC_MULTIPLATFORM=y
+# CONFIG_PPC_ISERIES is not set
+# CONFIG_EMBEDDED6xx is not set
+# CONFIG_APUS is not set
+CONFIG_PPC_CHRP=y
+# CONFIG_PPC_PMAC is not set
+# CONFIG_PPC_CELL is not set
+# CONFIG_PPC_CELL_NATIVE is not set
+CONFIG_MPIC=y
+CONFIG_PPC_RTAS=y
+# CONFIG_RTAS_ERROR_LOGGING is not set
+CONFIG_RTAS_PROC=y
+# CONFIG_MMIO_NVRAM is not set
+CONFIG_PPC_MPC106=y
+# CONFIG_PPC_970_NAP is not set
+# CONFIG_CPU_FREQ is not set
+# CONFIG_TAU is not set
+# CONFIG_WANT_EARLY_SERIAL is not set
+
+#
+# Kernel options
+#
+CONFIG_HIGHMEM=y
+# CONFIG_HZ_100 is not set
+CONFIG_HZ_250=y
+# CONFIG_HZ_1000 is not set
+CONFIG_HZ=250
+CONFIG_PREEMPT_NONE=y
+# CONFIG_PREEMPT_VOLUNTARY is not set
+# CONFIG_PREEMPT is not set
+CONFIG_PREEMPT_BKL=y
+CONFIG_BINFMT_ELF=y
+CONFIG_BINFMT_MISC=y
+# CONFIG_KEXEC is not set
+CONFIG_IRQ_ALL_CPUS=y
+CONFIG_ARCH_FLATMEM_ENABLE=y
+CONFIG_SELECT_MEMORY_MODEL=y
+CONFIG_FLATMEM_MANUAL=y
+# CONFIG_DISCONTIGMEM_MANUAL is not set
+# CONFIG_SPARSEMEM_MANUAL is not set
+CONFIG_FLATMEM=y
+CONFIG_FLAT_NODE_MEM_MAP=y
+# CONFIG_SPARSEMEM_STATIC is not set
+CONFIG_SPLIT_PTLOCK_CPUS=4
+CONFIG_PROC_DEVICETREE=y
+# CONFIG_CMDLINE_BOOL is not set
+# CONFIG_PM is not set
+CONFIG_SECCOMP=y
+CONFIG_ISA_DMA_API=y
+
+#
+# Bus options
+#
+CONFIG_ISA=y
+CONFIG_GENERIC_ISA_DMA=y
+CONFIG_PPC_I8259=y
+CONFIG_PPC_INDIRECT_PCI=y
+CONFIG_PCI=y
+CONFIG_PCI_DOMAINS=y
+# CONFIG_PCIEPORTBUS is not set
+# CONFIG_PCI_DEBUG is not set
+
+#
+# PCCARD (PCMCIA/CardBus) support
+#
+# CONFIG_PCCARD is not set
+
+#
+# PCI Hotplug Support
+#
+# CONFIG_HOTPLUG_PCI is not set
+
+#
+# Advanced setup
+#
+# CONFIG_ADVANCED_OPTIONS is not set
+
+#
+# Default settings for advanced configuration options are used
+#
+CONFIG_HIGHMEM_START=0xfe000000
+CONFIG_LOWMEM_SIZE=0x30000000
+CONFIG_KERNEL_START=0xc0000000
+CONFIG_TASK_SIZE=0x80000000
+CONFIG_BOOT_LOAD=0x00800000
+
+#
+# Networking
+#
+CONFIG_NET=y
+
+#
+# Networking options
+#
+# CONFIG_NETDEBUG is not set
+CONFIG_PACKET=y
+# CONFIG_PACKET_MMAP is not set
+CONFIG_UNIX=y
+# CONFIG_NET_KEY is not set
+CONFIG_INET=y
+CONFIG_IP_MULTICAST=y
+# CONFIG_IP_ADVANCED_ROUTER is not set
+CONFIG_IP_FIB_HASH=y
+# CONFIG_IP_PNP is not set
+# CONFIG_NET_IPIP is not set
+# CONFIG_NET_IPGRE is not set
+# CONFIG_IP_MROUTE is not set
+# CONFIG_ARPD is not set
+CONFIG_SYN_COOKIES=y
+# CONFIG_INET_AH is not set
+# CONFIG_INET_ESP is not set
+# CONFIG_INET_IPCOMP is not set
+# CONFIG_INET_XFRM_TUNNEL is not set
+# CONFIG_INET_TUNNEL is not set
+# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
+# CONFIG_INET_XFRM_MODE_TUNNEL is not set
+CONFIG_INET_DIAG=y
+CONFIG_INET_TCP_DIAG=y
+# CONFIG_TCP_CONG_ADVANCED is not set
+CONFIG_TCP_CONG_BIC=y
+
+#
+# IP: Virtual Server Configuration
+#
+# CONFIG_IP_VS is not set
+# CONFIG_IPV6 is not set
+# CONFIG_INET6_XFRM_TUNNEL is not set
+# CONFIG_INET6_TUNNEL is not set
+# CONFIG_NETWORK_SECMARK is not set
+CONFIG_NETFILTER=y
+# CONFIG_NETFILTER_DEBUG is not set
+
+#
+# Core Netfilter Configuration
+#
+# CONFIG_NETFILTER_NETLINK is not set
+# CONFIG_NETFILTER_XTABLES is not set
+
+#
+# IP: Netfilter Configuration
+#
+CONFIG_IP_NF_CONNTRACK=m
+# CONFIG_IP_NF_CT_ACCT is not set
+# CONFIG_IP_NF_CONNTRACK_MARK is not set
+# CONFIG_IP_NF_CONNTRACK_EVENTS is not set
+# CONFIG_IP_NF_CT_PROTO_SCTP is not set
+CONFIG_IP_NF_FTP=m
+CONFIG_IP_NF_IRC=m
+# CONFIG_IP_NF_NETBIOS_NS is not set
+CONFIG_IP_NF_TFTP=m
+CONFIG_IP_NF_AMANDA=m
+# CONFIG_IP_NF_PPTP is not set
+# CONFIG_IP_NF_H323 is not set
+# CONFIG_IP_NF_SIP is not set
+# CONFIG_IP_NF_QUEUE is not set
+
+#
+# DCCP Configuration (EXPERIMENTAL)
+#
+# CONFIG_IP_DCCP is not set
+
+#
+# SCTP Configuration (EXPERIMENTAL)
+#
+# CONFIG_IP_SCTP is not set
+
+#
+# TIPC Configuration (EXPERIMENTAL)
+#
+# CONFIG_TIPC is not set
+# CONFIG_ATM is not set
+# CONFIG_BRIDGE is not set
+# CONFIG_VLAN_8021Q is not set
+# CONFIG_DECNET is not set
+# CONFIG_LLC2 is not set
+# CONFIG_IPX is not set
+# CONFIG_ATALK is not set
+# CONFIG_X25 is not set
+# CONFIG_LAPB is not set
+# CONFIG_NET_DIVERT is not set
+# CONFIG_ECONET is not set
+# CONFIG_WAN_ROUTER is not set
+
+#
+# QoS and/or fair queueing
+#
+# CONFIG_NET_SCHED is not set
+
+#
+# Network testing
+#
+# CONFIG_NET_PKTGEN is not set
+# CONFIG_HAMRADIO is not set
+# CONFIG_IRDA is not set
+# CONFIG_BT is not set
+# CONFIG_IEEE80211 is not set
+
+#
+# Device Drivers
+#
+
+#
+# Generic Driver Options
+#
+# CONFIG_STANDALONE is not set
+CONFIG_PREVENT_FIRMWARE_BUILD=y
+# CONFIG_FW_LOADER is not set
+# CONFIG_DEBUG_DRIVER is not set
+# CONFIG_SYS_HYPERVISOR is not set
+
+#
+# Connector - unified userspace <-> kernelspace linker
+#
+# CONFIG_CONNECTOR is not set
+
+#
+# Memory Technology Devices (MTD)
+#
+# CONFIG_MTD is not set
+
+#
+# Parallel port support
+#
+# CONFIG_PARPORT is not set
+
+#
+# Plug and Play support
+#
+# CONFIG_PNP is not set
+
+#
+# Block devices
+#
+CONFIG_BLK_DEV_FD=y
+# CONFIG_BLK_DEV_XD is not set
+# CONFIG_BLK_CPQ_DA is not set
+# CONFIG_BLK_CPQ_CISS_DA is not set
+# CONFIG_BLK_DEV_DAC960 is not set
+# CONFIG_BLK_DEV_UMEM is not set
+# CONFIG_BLK_DEV_COW_COMMON is not set
+CONFIG_BLK_DEV_LOOP=y
+# CONFIG_BLK_DEV_CRYPTOLOOP is not set
+# CONFIG_BLK_DEV_NBD is not set
+# CONFIG_BLK_DEV_SX8 is not set
+# CONFIG_BLK_DEV_UB is not set
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_COUNT=16
+CONFIG_BLK_DEV_RAM_SIZE=4096
+CONFIG_BLK_DEV_INITRD=y
+# CONFIG_CDROM_PKTCDVD is not set
+# CONFIG_ATA_OVER_ETH is not set
+
+#
+# ATA/ATAPI/MFM/RLL support
+#
+CONFIG_IDE=y
+CONFIG_BLK_DEV_IDE=y
+
+#
+# Please see Documentation/ide.txt for help/info on IDE drives
+#
+# CONFIG_BLK_DEV_IDE_SATA is not set
+CONFIG_BLK_DEV_IDEDISK=y
+CONFIG_IDEDISK_MULTI_MODE=y
+CONFIG_BLK_DEV_IDECD=y
+# CONFIG_BLK_DEV_IDETAPE is not set
+# CONFIG_BLK_DEV_IDEFLOPPY is not set
+# CONFIG_BLK_DEV_IDESCSI is not set
+# CONFIG_IDE_TASK_IOCTL is not set
+
+#
+# IDE chipset support/bugfixes
+#
+CONFIG_IDE_GENERIC=y
+CONFIG_BLK_DEV_IDEPCI=y
+CONFIG_IDEPCI_SHARE_IRQ=y
+# CONFIG_BLK_DEV_OFFBOARD is not set
+CONFIG_BLK_DEV_GENERIC=y
+# CONFIG_BLK_DEV_OPTI621 is not set
+CONFIG_BLK_DEV_SL82C105=y
+CONFIG_BLK_DEV_IDEDMA_PCI=y
+# CONFIG_BLK_DEV_IDEDMA_FORCED is not set
+CONFIG_IDEDMA_PCI_AUTO=y
+# CONFIG_IDEDMA_ONLYDISK is not set
+# CONFIG_BLK_DEV_AEC62XX is not set
+# CONFIG_BLK_DEV_ALI15X3 is not set
+# CONFIG_BLK_DEV_AMD74XX is not set
+# CONFIG_BLK_DEV_CMD64X is not set
+# CONFIG_BLK_DEV_TRIFLEX is not set
+# CONFIG_BLK_DEV_CY82C693 is not set
+# CONFIG_BLK_DEV_CS5520 is not set
+# CONFIG_BLK_DEV_CS5530 is not set
+# CONFIG_BLK_DEV_HPT34X is not set
+# CONFIG_BLK_DEV_HPT366 is not set
+# CONFIG_BLK_DEV_SC1200 is not set
+# CONFIG_BLK_DEV_PIIX is not set
+# CONFIG_BLK_DEV_IT821X is not set
+# CONFIG_BLK_DEV_NS87415 is not set
+# CONFIG_BLK_DEV_PDC202XX_OLD is not set
+# CONFIG_BLK_DEV_PDC202XX_NEW is not set
+# CONFIG_BLK_DEV_SVWKS is not set
+# CONFIG_BLK_DEV_SIIMAGE is not set
+# CONFIG_BLK_DEV_SLC90E66 is not set
+# CONFIG_BLK_DEV_TRM290 is not set
+CONFIG_BLK_DEV_VIA82CXXX=y
+# CONFIG_IDE_ARM is not set
+# CONFIG_IDE_CHIPSETS is not set
+CONFIG_BLK_DEV_IDEDMA=y
+# CONFIG_IDEDMA_IVB is not set
+CONFIG_IDEDMA_AUTO=y
+# CONFIG_BLK_DEV_HD is not set
+
+#
+# SCSI device support
+#
+# CONFIG_RAID_ATTRS is not set
+CONFIG_SCSI=y
+CONFIG_SCSI_PROC_FS=y
+
+#
+# SCSI support type (disk, tape, CD-ROM)
+#
+CONFIG_BLK_DEV_SD=y
+CONFIG_CHR_DEV_ST=y
+# CONFIG_CHR_DEV_OSST is not set
+CONFIG_BLK_DEV_SR=y
+CONFIG_BLK_DEV_SR_VENDOR=y
+CONFIG_CHR_DEV_SG=y
+# CONFIG_CHR_DEV_SCH is not set
+
+#
+# Some SCSI devices (e.g. CD jukebox) support multiple LUNs
+#
+# CONFIG_SCSI_MULTI_LUN is not set
+CONFIG_SCSI_CONSTANTS=y
+# CONFIG_SCSI_LOGGING is not set
+
+#
+# SCSI Transport Attributes
+#
+CONFIG_SCSI_SPI_ATTRS=y
+# CONFIG_SCSI_FC_ATTRS is not set
+# CONFIG_SCSI_ISCSI_ATTRS is not set
+# CONFIG_SCSI_SAS_ATTRS is not set
+
+#
+# SCSI low-level drivers
+#
+# CONFIG_ISCSI_TCP is not set
+# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
+# CONFIG_SCSI_3W_9XXX is not set
+# CONFIG_SCSI_7000FASST is not set
+# CONFIG_SCSI_ACARD is not set
+# CONFIG_SCSI_AHA152X is not set
+# CONFIG_SCSI_AHA1542 is not set
+# CONFIG_SCSI_AACRAID is not set
+# CONFIG_SCSI_AIC7XXX is not set
+# CONFIG_SCSI_AIC7XXX_OLD is not set
+# CONFIG_SCSI_AIC79XX is not set
+# CONFIG_SCSI_DPT_I2O is not set
+# CONFIG_SCSI_IN2000 is not set
+# CONFIG_MEGARAID_NEWGEN is not set
+# CONFIG_MEGARAID_LEGACY is not set
+# CONFIG_MEGARAID_SAS is not set
+# CONFIG_SCSI_SATA is not set
+# CONFIG_SCSI_HPTIOP is not set
+# CONFIG_SCSI_BUSLOGIC is not set
+# CONFIG_SCSI_DMX3191D is not set
+# CONFIG_SCSI_DTC3280 is not set
+# CONFIG_SCSI_EATA is not set
+# CONFIG_SCSI_FUTURE_DOMAIN is not set
+# CONFIG_SCSI_GDTH is not set
+# CONFIG_SCSI_GENERIC_NCR5380 is not set
+# CONFIG_SCSI_GENERIC_NCR5380_MMIO is not set
+# CONFIG_SCSI_IPS is not set
+# CONFIG_SCSI_INITIO is not set
+# CONFIG_SCSI_INIA100 is not set
+# CONFIG_SCSI_NCR53C406A is not set
+CONFIG_SCSI_SYM53C8XX_2=y
+CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=0
+CONFIG_SCSI_SYM53C8XX_DEFAULT_TAGS=16
+CONFIG_SCSI_SYM53C8XX_MAX_TAGS=64
+CONFIG_SCSI_SYM53C8XX_MMIO=y
+# CONFIG_SCSI_IPR is not set
+# CONFIG_SCSI_PAS16 is not set
+# CONFIG_SCSI_PSI240I is not set
+# CONFIG_SCSI_QLOGIC_FAS is not set
+# CONFIG_SCSI_QLOGIC_1280 is not set
+# CONFIG_SCSI_QLA_FC is not set
+# CONFIG_SCSI_LPFC is not set
+# CONFIG_SCSI_SYM53C416 is not set
+# CONFIG_SCSI_DC395x is not set
+# CONFIG_SCSI_DC390T is not set
+# CONFIG_SCSI_T128 is not set
+# CONFIG_SCSI_U14_34F is not set
+# CONFIG_SCSI_NSP32 is not set
+# CONFIG_SCSI_DEBUG is not set
+
+#
+# Old CD-ROM drivers (not SCSI, not IDE)
+#
+# CONFIG_CD_NO_IDESCSI is not set
+
+#
+# Multi-device support (RAID and LVM)
+#
+# CONFIG_MD is not set
+
+#
+# Fusion MPT device support
+#
+# CONFIG_FUSION is not set
+# CONFIG_FUSION_SPI is not set
+# CONFIG_FUSION_FC is not set
+# CONFIG_FUSION_SAS is not set
+
+#
+# IEEE 1394 (FireWire) support
+#
+# CONFIG_IEEE1394 is not set
+
+#
+# I2O device support
+#
+# CONFIG_I2O is not set
+
+#
+# Macintosh device drivers
+#
+# CONFIG_WINDFARM is not set
+
+#
+# Network device support
+#
+CONFIG_NETDEVICES=y
+# CONFIG_DUMMY is not set
+# CONFIG_BONDING is not set
+# CONFIG_EQUALIZER is not set
+# CONFIG_TUN is not set
+
+#
+# ARCnet devices
+#
+# CONFIG_ARCNET is not set
+
+#
+# PHY device support
+#
+# CONFIG_PHYLIB is not set
+
+#
+# Ethernet (10 or 100Mbit)
+#
+CONFIG_NET_ETHERNET=y
+CONFIG_MII=y
+# CONFIG_HAPPYMEAL is not set
+# CONFIG_SUNGEM is not set
+# CONFIG_CASSINI is not set
+# CONFIG_NET_VENDOR_3COM is not set
+# CONFIG_LANCE is not set
+# CONFIG_NET_VENDOR_SMC is not set
+# CONFIG_NET_VENDOR_RACAL is not set
+
+#
+# Tulip family network device support
+#
+CONFIG_NET_TULIP=y
+# CONFIG_DE2104X is not set
+# CONFIG_TULIP is not set
+CONFIG_DE4X5=y
+# CONFIG_WINBOND_840 is not set
+# CONFIG_DM9102 is not set
+# CONFIG_ULI526X is not set
+# CONFIG_AT1700 is not set
+# CONFIG_DEPCA is not set
+# CONFIG_HP100 is not set
+# CONFIG_NET_ISA is not set
+CONFIG_NET_PCI=y
+CONFIG_PCNET32=y
+# CONFIG_AMD8111_ETH is not set
+# CONFIG_ADAPTEC_STARFIRE is not set
+# CONFIG_AC3200 is not set
+# CONFIG_APRICOT is not set
+# CONFIG_B44 is not set
+# CONFIG_FORCEDETH is not set
+# CONFIG_CS89x0 is not set
+# CONFIG_DGRS is not set
+# CONFIG_EEPRO100 is not set
+# CONFIG_E100 is not set
+# CONFIG_FEALNX is not set
+# CONFIG_NATSEMI is not set
+# CONFIG_NE2K_PCI is not set
+CONFIG_8139CP=y
+CONFIG_8139TOO=y
+# CONFIG_8139TOO_PIO is not set
+# CONFIG_8139TOO_TUNE_TWISTER is not set
+# CONFIG_8139TOO_8129 is not set
+# CONFIG_8139_OLD_RX_RESET is not set
+# CONFIG_SIS900 is not set
+# CONFIG_EPIC100 is not set
+# CONFIG_SUNDANCE is not set
+# CONFIG_TLAN is not set
+CONFIG_VIA_RHINE=y
+# CONFIG_VIA_RHINE_MMIO is not set
+
+#
+# Ethernet (1000 Mbit)
+#
+# CONFIG_ACENIC is not set
+# CONFIG_DL2K is not set
+# CONFIG_E1000 is not set
+# CONFIG_NS83820 is not set
+# CONFIG_HAMACHI is not set
+# CONFIG_YELLOWFIN is not set
+# CONFIG_R8169 is not set
+# CONFIG_SIS190 is not set
+# CONFIG_SKGE is not set
+# CONFIG_SKY2 is not set
+# CONFIG_SK98LIN is not set
+# CONFIG_VIA_VELOCITY is not set
+# CONFIG_TIGON3 is not set
+# CONFIG_BNX2 is not set
+CONFIG_MV643XX_ETH=y
+# CONFIG_MV643XX_ETH_0 is not set
+# CONFIG_MV643XX_ETH_1 is not set
+# CONFIG_MV643XX_ETH_2 is not set
+
+#
+# Ethernet (10000 Mbit)
+#
+# CONFIG_CHELSIO_T1 is not set
+# CONFIG_IXGB is not set
+# CONFIG_S2IO is not set
+# CONFIG_MYRI10GE is not set
+
+#
+# Token Ring devices
+#
+# CONFIG_TR is not set
+
+#
+# Wireless LAN (non-hamradio)
+#
+# CONFIG_NET_RADIO is not set
+
+#
+# Wan interfaces
+#
+# CONFIG_WAN is not set
+# CONFIG_FDDI is not set
+# CONFIG_HIPPI is not set
+CONFIG_PPP=m
+CONFIG_PPP_MULTILINK=y
+CONFIG_PPP_FILTER=y
+CONFIG_PPP_ASYNC=m
+CONFIG_PPP_SYNC_TTY=m
+CONFIG_PPP_DEFLATE=m
+CONFIG_PPP_BSDCOMP=m
+CONFIG_PPP_MPPE=m
+CONFIG_PPPOE=m
+# CONFIG_SLIP is not set
+# CONFIG_NET_FC is not set
+# CONFIG_SHAPER is not set
+# CONFIG_NETCONSOLE is not set
+# CONFIG_NETPOLL is not set
+# CONFIG_NET_POLL_CONTROLLER is not set
+
+#
+# ISDN subsystem
+#
+# CONFIG_ISDN is not set
+
+#
+# Telephony Support
+#
+# CONFIG_PHONE is not set
+
+#
+# Input device support
+#
+CONFIG_INPUT=y
+
+#
+# Userland interfaces
+#
+CONFIG_INPUT_MOUSEDEV=y
+CONFIG_INPUT_MOUSEDEV_PSAUX=y
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
+# CONFIG_INPUT_JOYDEV is not set
+# CONFIG_INPUT_TSDEV is not set
+CONFIG_INPUT_EVDEV=y
+# CONFIG_INPUT_EVBUG is not set
+
+#
+# Input Device Drivers
+#
+CONFIG_INPUT_KEYBOARD=y
+CONFIG_KEYBOARD_ATKBD=y
+# CONFIG_KEYBOARD_SUNKBD is not set
+# CONFIG_KEYBOARD_LKKBD is not set
+# CONFIG_KEYBOARD_XTKBD is not set
+# CONFIG_KEYBOARD_NEWTON is not set
+CONFIG_INPUT_MOUSE=y
+CONFIG_MOUSE_PS2=y
+# CONFIG_MOUSE_SERIAL is not set
+# CONFIG_MOUSE_INPORT is not set
+# CONFIG_MOUSE_LOGIBM is not set
+# CONFIG_MOUSE_PC110PAD is not set
+# CONFIG_MOUSE_VSXXXAA is not set
+# CONFIG_INPUT_JOYSTICK is not set
+# CONFIG_INPUT_TOUCHSCREEN is not set
+CONFIG_INPUT_MISC=y
+# CONFIG_INPUT_PCSPKR is not set
+CONFIG_INPUT_UINPUT=y
+
+#
+# Hardware I/O ports
+#
+CONFIG_SERIO=y
+CONFIG_SERIO_I8042=y
+CONFIG_SERIO_SERPORT=y
+# CONFIG_SERIO_PCIPS2 is not set
+CONFIG_SERIO_LIBPS2=y
+# CONFIG_SERIO_RAW is not set
+# CONFIG_GAMEPORT is not set
+
+#
+# Character devices
+#
+CONFIG_VT=y
+CONFIG_VT_CONSOLE=y
+CONFIG_HW_CONSOLE=y
+# CONFIG_SERIAL_NONSTANDARD is not set
+
+#
+# Serial drivers
+#
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_8250_PCI=y
+CONFIG_SERIAL_8250_NR_UARTS=4
+CONFIG_SERIAL_8250_RUNTIME_UARTS=4
+# CONFIG_SERIAL_8250_EXTENDED is not set
+
+#
+# Non-8250 serial port support
+#
+CONFIG_SERIAL_CORE=y
+CONFIG_SERIAL_CORE_CONSOLE=y
+# CONFIG_SERIAL_JSM is not set
+CONFIG_UNIX98_PTYS=y
+CONFIG_LEGACY_PTYS=y
+CONFIG_LEGACY_PTY_COUNT=256
+# CONFIG_HVC_RTAS is not set
+
+#
+# IPMI
+#
+# CONFIG_IPMI_HANDLER is not set
+
+#
+# Watchdog Cards
+#
+# CONFIG_WATCHDOG is not set
+CONFIG_NVRAM=y
+CONFIG_GEN_RTC=y
+# CONFIG_GEN_RTC_X is not set
+# CONFIG_DTLK is not set
+# CONFIG_R3964 is not set
+# CONFIG_APPLICOM is not set
+
+#
+# Ftape, the floppy tape device driver
+#
+# CONFIG_AGP is not set
+# CONFIG_DRM is not set
+# CONFIG_RAW_DRIVER is not set
+
+#
+# TPM devices
+#
+# CONFIG_TCG_TPM is not set
+# CONFIG_TELCLOCK is not set
+
+#
+# I2C support
+#
+CONFIG_I2C=y
+# CONFIG_I2C_CHARDEV is not set
+
+#
+# I2C Algorithms
+#
+CONFIG_I2C_ALGOBIT=y
+# CONFIG_I2C_ALGOPCF is not set
+# CONFIG_I2C_ALGOPCA is not set
+
+#
+# I2C Hardware Bus support
+#
+# CONFIG_I2C_ALI1535 is not set
+# CONFIG_I2C_ALI1563 is not set
+# CONFIG_I2C_ALI15X3 is not set
+# CONFIG_I2C_AMD756 is not set
+# CONFIG_I2C_AMD8111 is not set
+# CONFIG_I2C_HYDRA is not set
+# CONFIG_I2C_I801 is not set
+# CONFIG_I2C_I810 is not set
+# CONFIG_I2C_PIIX4 is not set
+# CONFIG_I2C_MPC is not set
+# CONFIG_I2C_NFORCE2 is not set
+# CONFIG_I2C_OCORES is not set
+# CONFIG_I2C_PARPORT_LIGHT is not set
+# CONFIG_I2C_PROSAVAGE is not set
+# CONFIG_I2C_SAVAGE4 is not set
+# CONFIG_I2C_SIS5595 is not set
+# CONFIG_I2C_SIS630 is not set
+# CONFIG_I2C_SIS96X is not set
+# CONFIG_I2C_STUB is not set
+# CONFIG_I2C_VIA is not set
+# CONFIG_I2C_VIAPRO is not set
+# CONFIG_I2C_VOODOO3 is not set
+# CONFIG_I2C_PCA_ISA is not set
+
+#
+# Miscellaneous I2C Chip support
+#
+# CONFIG_SENSORS_DS1337 is not set
+# CONFIG_SENSORS_DS1374 is not set
+# CONFIG_SENSORS_EEPROM is not set
+# CONFIG_SENSORS_PCF8574 is not set
+# CONFIG_SENSORS_PCA9539 is not set
+# CONFIG_SENSORS_PCF8591 is not set
+# CONFIG_SENSORS_M41T00 is not set
+# CONFIG_SENSORS_MAX6875 is not set
+# CONFIG_I2C_DEBUG_CORE is not set
+# CONFIG_I2C_DEBUG_ALGO is not set
+# CONFIG_I2C_DEBUG_BUS is not set
+# CONFIG_I2C_DEBUG_CHIP is not set
+
+#
+# SPI support
+#
+# CONFIG_SPI is not set
+# CONFIG_SPI_MASTER is not set
+
+#
+# Dallas's 1-wire bus
+#
+
+#
+# Hardware Monitoring support
+#
+# CONFIG_HWMON is not set
+# CONFIG_HWMON_VID is not set
+
+#
+# Misc devices
+#
+
+#
+# Multimedia devices
+#
+# CONFIG_VIDEO_DEV is not set
+CONFIG_VIDEO_V4L2=y
+
+#
+# Digital Video Broadcasting Devices
+#
+# CONFIG_DVB is not set
+# CONFIG_USB_DABUSB is not set
+
+#
+# Graphics support
+#
+CONFIG_FB=y
+CONFIG_FB_CFB_FILLRECT=y
+CONFIG_FB_CFB_COPYAREA=y
+CONFIG_FB_CFB_IMAGEBLIT=y
+CONFIG_FB_MACMODES=y
+CONFIG_FB_FIRMWARE_EDID=y
+# CONFIG_FB_BACKLIGHT is not set
+CONFIG_FB_MODE_HELPERS=y
+CONFIG_FB_TILEBLITTING=y
+# CONFIG_FB_CIRRUS is not set
+# CONFIG_FB_PM2 is not set
+# CONFIG_FB_CYBER2000 is not set
+CONFIG_FB_OF=y
+# CONFIG_FB_CT65550 is not set
+# CONFIG_FB_ASILIANT is not set
+# CONFIG_FB_IMSTT is not set
+# CONFIG_FB_VGA16 is not set
+# CONFIG_FB_S1D13XXX is not set
+# CONFIG_FB_NVIDIA is not set
+# CONFIG_FB_RIVA is not set
+CONFIG_FB_MATROX=y
+CONFIG_FB_MATROX_MILLENIUM=y
+CONFIG_FB_MATROX_MYSTIQUE=y
+CONFIG_FB_MATROX_G=y
+# CONFIG_FB_MATROX_I2C is not set
+# CONFIG_FB_MATROX_MULTIHEAD is not set
+CONFIG_FB_RADEON=y
+CONFIG_FB_RADEON_I2C=y
+# CONFIG_FB_RADEON_DEBUG is not set
+# CONFIG_FB_ATY128 is not set
+CONFIG_FB_ATY=y
+CONFIG_FB_ATY_CT=y
+# CONFIG_FB_ATY_GENERIC_LCD is not set
+CONFIG_FB_ATY_GX=y
+# CONFIG_FB_SAVAGE is not set
+# CONFIG_FB_SIS is not set
+# CONFIG_FB_NEOMAGIC is not set
+# CONFIG_FB_KYRO is not set
+CONFIG_FB_3DFX=y
+# CONFIG_FB_3DFX_ACCEL is not set
+# CONFIG_FB_VOODOO1 is not set
+# CONFIG_FB_TRIDENT is not set
+# CONFIG_FB_VIRTUAL is not set
+
+#
+# Console display driver support
+#
+CONFIG_VGA_CONSOLE=y
+# CONFIG_VGACON_SOFT_SCROLLBACK is not set
+# CONFIG_MDA_CONSOLE is not set
+CONFIG_DUMMY_CONSOLE=y
+CONFIG_FRAMEBUFFER_CONSOLE=y
+# CONFIG_FRAMEBUFFER_CONSOLE_ROTATION is not set
+# CONFIG_FONTS is not set
+CONFIG_FONT_8x8=y
+CONFIG_FONT_8x16=y
+
+#
+# Logo configuration
+#
+CONFIG_LOGO=y
+CONFIG_LOGO_LINUX_MONO=y
+CONFIG_LOGO_LINUX_VGA16=y
+CONFIG_LOGO_LINUX_CLUT224=y
+# CONFIG_BACKLIGHT_LCD_SUPPORT is not set
+
+#
+# Sound
+#
+# CONFIG_SOUND is not set
+
+#
+# USB support
+#
+CONFIG_USB_ARCH_HAS_HCD=y
+CONFIG_USB_ARCH_HAS_OHCI=y
+CONFIG_USB_ARCH_HAS_EHCI=y
+CONFIG_USB=y
+# CONFIG_USB_DEBUG is not set
+
+#
+# Miscellaneous USB options
+#
+CONFIG_USB_DEVICEFS=y
+# CONFIG_USB_BANDWIDTH is not set
+# CONFIG_USB_DYNAMIC_MINORS is not set
+# CONFIG_USB_OTG is not set
+
+#
+# USB Host Controller Drivers
+#
+CONFIG_USB_EHCI_HCD=m
+# CONFIG_USB_EHCI_SPLIT_ISO is not set
+# CONFIG_USB_EHCI_ROOT_HUB_TT is not set
+# CONFIG_USB_EHCI_TT_NEWSCHED is not set
+# CONFIG_USB_ISP116X_HCD is not set
+CONFIG_USB_OHCI_HCD=y
+# CONFIG_USB_OHCI_BIG_ENDIAN is not set
+CONFIG_USB_OHCI_LITTLE_ENDIAN=y
+CONFIG_USB_UHCI_HCD=y
+# CONFIG_USB_SL811_HCD is not set
+
+#
+# USB Device Class drivers
+#
+# CONFIG_USB_ACM is not set
+# CONFIG_USB_PRINTER is not set
+
+#
+# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support'
+#
+
+#
+# may also be needed; see USB_STORAGE Help for more information
+#
+CONFIG_USB_STORAGE=m
+# CONFIG_USB_STORAGE_DEBUG is not set
+# CONFIG_USB_STORAGE_DATAFAB is not set
+# CONFIG_USB_STORAGE_FREECOM is not set
+# CONFIG_USB_STORAGE_ISD200 is not set
+# CONFIG_USB_STORAGE_DPCM is not set
+# CONFIG_USB_STORAGE_USBAT is not set
+# CONFIG_USB_STORAGE_SDDR09 is not set
+# CONFIG_USB_STORAGE_SDDR55 is not set
+# CONFIG_USB_STORAGE_JUMPSHOT is not set
+# CONFIG_USB_STORAGE_ALAUDA is not set
+# CONFIG_USB_STORAGE_ONETOUCH is not set
+# CONFIG_USB_LIBUSUAL is not set
+
+#
+# USB Input Devices
+#
+CONFIG_USB_HID=y
+CONFIG_USB_HIDINPUT=y
+# CONFIG_USB_HIDINPUT_POWERBOOK is not set
+# CONFIG_HID_FF is not set
+# CONFIG_USB_HIDDEV is not set
+# CONFIG_USB_AIPTEK is not set
+# CONFIG_USB_WACOM is not set
+# CONFIG_USB_ACECAD is not set
+# CONFIG_USB_KBTAB is not set
+# CONFIG_USB_POWERMATE is not set
+# CONFIG_USB_TOUCHSCREEN is not set
+# CONFIG_USB_YEALINK is not set
+# CONFIG_USB_XPAD is not set
+# CONFIG_USB_ATI_REMOTE is not set
+# CONFIG_USB_ATI_REMOTE2 is not set
+# CONFIG_USB_KEYSPAN_REMOTE is not set
+# CONFIG_USB_APPLETOUCH is not set
+
+#
+# USB Imaging devices
+#
+# CONFIG_USB_MDC800 is not set
+# CONFIG_USB_MICROTEK is not set
+
+#
+# USB Network Adapters
+#
+# CONFIG_USB_CATC is not set
+# CONFIG_USB_KAWETH is not set
+# CONFIG_USB_PEGASUS is not set
+# CONFIG_USB_RTL8150 is not set
+# CONFIG_USB_USBNET is not set
+CONFIG_USB_MON=y
+
+#
+# USB port drivers
+#
+
+#
+# USB Serial Converter support
+#
+# CONFIG_USB_SERIAL is not set
+
+#
+# USB Miscellaneous drivers
+#
+# CONFIG_USB_EMI62 is not set
+# CONFIG_USB_EMI26 is not set
+# CONFIG_USB_AUERSWALD is not set
+# CONFIG_USB_RIO500 is not set
+# CONFIG_USB_LEGOTOWER is not set
+# CONFIG_USB_LCD is not set
+# CONFIG_USB_LED is not set
+# CONFIG_USB_CY7C63 is not set
+# CONFIG_USB_CYTHERM is not set
+# CONFIG_USB_PHIDGETKIT is not set
+# CONFIG_USB_PHIDGETSERVO is not set
+# CONFIG_USB_IDMOUSE is not set
+# CONFIG_USB_APPLEDISPLAY is not set
+# CONFIG_USB_SISUSBVGA is not set
+# CONFIG_USB_LD is not set
+# CONFIG_USB_TEST is not set
+
+#
+# USB DSL modem support
+#
+
+#
+# USB Gadget Support
+#
+# CONFIG_USB_GADGET is not set
+
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
+#
+# LED devices
+#
+# CONFIG_NEW_LEDS is not set
+
+#
+# LED drivers
+#
+
+#
+# LED Triggers
+#
+
+#
+# InfiniBand support
+#
+# CONFIG_INFINIBAND is not set
+
+#
+# EDAC - error detection and reporting (RAS) (EXPERIMENTAL)
+#
+
+#
+# Real Time Clock
+#
+# CONFIG_RTC_CLASS is not set
+
+#
+# DMA Engine support
+#
+# CONFIG_DMA_ENGINE is not set
+
+#
+# DMA Clients
+#
+
+#
+# DMA Devices
+#
+
+#
+# File systems
+#
+CONFIG_EXT2_FS=y
+# CONFIG_EXT2_FS_XATTR is not set
+# CONFIG_EXT2_FS_XIP is not set
+CONFIG_EXT3_FS=y
+CONFIG_EXT3_FS_XATTR=y
+# CONFIG_EXT3_FS_POSIX_ACL is not set
+# CONFIG_EXT3_FS_SECURITY is not set
+CONFIG_JBD=y
+# CONFIG_JBD_DEBUG is not set
+CONFIG_FS_MBCACHE=y
+# CONFIG_REISERFS_FS is not set
+# CONFIG_JFS_FS is not set
+# CONFIG_FS_POSIX_ACL is not set
+# CONFIG_XFS_FS is not set
+# CONFIG_OCFS2_FS is not set
+# CONFIG_MINIX_FS is not set
+# CONFIG_ROMFS_FS is not set
+CONFIG_INOTIFY=y
+CONFIG_INOTIFY_USER=y
+# CONFIG_QUOTA is not set
+CONFIG_DNOTIFY=y
+# CONFIG_AUTOFS_FS is not set
+# CONFIG_AUTOFS4_FS is not set
+# CONFIG_FUSE_FS is not set
+
+#
+# CD-ROM/DVD Filesystems
+#
+CONFIG_ISO9660_FS=y
+# CONFIG_JOLIET is not set
+# CONFIG_ZISOFS is not set
+# CONFIG_UDF_FS is not set
+
+#
+# DOS/FAT/NT Filesystems
+#
+CONFIG_FAT_FS=m
+CONFIG_MSDOS_FS=m
+CONFIG_VFAT_FS=m
+CONFIG_FAT_DEFAULT_CODEPAGE=437
+CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
+# CONFIG_NTFS_FS is not set
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+CONFIG_PROC_KCORE=y
+CONFIG_SYSFS=y
+CONFIG_TMPFS=y
+# CONFIG_HUGETLB_PAGE is not set
+CONFIG_RAMFS=y
+# CONFIG_CONFIGFS_FS is not set
+
+#
+# Miscellaneous filesystems
+#
+# CONFIG_ADFS_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_HFSPLUS_FS is not set
+# CONFIG_BEFS_FS is not set
+# CONFIG_BFS_FS is not set
+# CONFIG_EFS_FS is not set
+# CONFIG_CRAMFS is not set
+# CONFIG_VXFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_SYSV_FS is not set
+# CONFIG_UFS_FS is not set
+
+#
+# Network File Systems
+#
+# CONFIG_NFS_FS is not set
+# CONFIG_NFSD is not set
+# CONFIG_SMB_FS is not set
+# CONFIG_CIFS is not set
+# CONFIG_NCP_FS is not set
+# CONFIG_CODA_FS is not set
+# CONFIG_AFS_FS is not set
+# CONFIG_9P_FS is not set
+
+#
+# Partition Types
+#
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_ACORN_PARTITION is not set
+# CONFIG_OSF_PARTITION is not set
+# CONFIG_AMIGA_PARTITION is not set
+# CONFIG_ATARI_PARTITION is not set
+CONFIG_MAC_PARTITION=y
+CONFIG_MSDOS_PARTITION=y
+# CONFIG_BSD_DISKLABEL is not set
+# CONFIG_MINIX_SUBPARTITION is not set
+# CONFIG_SOLARIS_X86_PARTITION is not set
+# CONFIG_UNIXWARE_DISKLABEL is not set
+# CONFIG_LDM_PARTITION is not set
+# CONFIG_SGI_PARTITION is not set
+# CONFIG_ULTRIX_PARTITION is not set
+# CONFIG_SUN_PARTITION is not set
+# CONFIG_KARMA_PARTITION is not set
+# CONFIG_EFI_PARTITION is not set
+
+#
+# Native Language Support
+#
+CONFIG_NLS=y
+CONFIG_NLS_DEFAULT="iso8859-1"
+# CONFIG_NLS_CODEPAGE_437 is not set
+# CONFIG_NLS_CODEPAGE_737 is not set
+# CONFIG_NLS_CODEPAGE_775 is not set
+# CONFIG_NLS_CODEPAGE_850 is not set
+# CONFIG_NLS_CODEPAGE_852 is not set
+# CONFIG_NLS_CODEPAGE_855 is not set
+# CONFIG_NLS_CODEPAGE_857 is not set
+# CONFIG_NLS_CODEPAGE_860 is not set
+# CONFIG_NLS_CODEPAGE_861 is not set
+# CONFIG_NLS_CODEPAGE_862 is not set
+# CONFIG_NLS_CODEPAGE_863 is not set
+# CONFIG_NLS_CODEPAGE_864 is not set
+# CONFIG_NLS_CODEPAGE_865 is not set
+# CONFIG_NLS_CODEPAGE_866 is not set
+# CONFIG_NLS_CODEPAGE_869 is not set
+# CONFIG_NLS_CODEPAGE_936 is not set
+# CONFIG_NLS_CODEPAGE_950 is not set
+# CONFIG_NLS_CODEPAGE_932 is not set
+# CONFIG_NLS_CODEPAGE_949 is not set
+# CONFIG_NLS_CODEPAGE_874 is not set
+# CONFIG_NLS_ISO8859_8 is not set
+# CONFIG_NLS_CODEPAGE_1250 is not set
+# CONFIG_NLS_CODEPAGE_1251 is not set
+CONFIG_NLS_ASCII=y
+CONFIG_NLS_ISO8859_1=m
+# CONFIG_NLS_ISO8859_2 is not set
+# CONFIG_NLS_ISO8859_3 is not set
+# CONFIG_NLS_ISO8859_4 is not set
+# CONFIG_NLS_ISO8859_5 is not set
+# CONFIG_NLS_ISO8859_6 is not set
+# CONFIG_NLS_ISO8859_7 is not set
+# CONFIG_NLS_ISO8859_9 is not set
+# CONFIG_NLS_ISO8859_13 is not set
+# CONFIG_NLS_ISO8859_14 is not set
+# CONFIG_NLS_ISO8859_15 is not set
+# CONFIG_NLS_KOI8_R is not set
+# CONFIG_NLS_KOI8_U is not set
+# CONFIG_NLS_UTF8 is not set
+
+#
+# Library routines
+#
+CONFIG_CRC_CCITT=m
+# CONFIG_CRC16 is not set
+CONFIG_CRC32=y
+# CONFIG_LIBCRC32C is not set
+CONFIG_ZLIB_INFLATE=m
+CONFIG_ZLIB_DEFLATE=m
+CONFIG_TEXTSEARCH=y
+CONFIG_TEXTSEARCH_KMP=m
+
+#
+# Instrumentation Support
+#
+# CONFIG_PROFILING is not set
+
+#
+# Kernel hacking
+#
+# CONFIG_PRINTK_TIME is not set
+CONFIG_MAGIC_SYSRQ=y
+CONFIG_DEBUG_KERNEL=y
+CONFIG_LOG_BUF_SHIFT=15
+CONFIG_DETECT_SOFTLOCKUP=y
+# CONFIG_SCHEDSTATS is not set
+# CONFIG_DEBUG_SLAB is not set
+CONFIG_DEBUG_MUTEXES=y
+# CONFIG_DEBUG_SPINLOCK is not set
+CONFIG_DEBUG_SPINLOCK_SLEEP=y
+# CONFIG_DEBUG_KOBJECT is not set
+# CONFIG_DEBUG_HIGHMEM is not set
+# CONFIG_DEBUG_INFO is not set
+# CONFIG_DEBUG_FS is not set
+# CONFIG_DEBUG_VM is not set
+CONFIG_FORCED_INLINING=y
+# CONFIG_RCU_TORTURE_TEST is not set
+CONFIG_DEBUGGER=y
+CONFIG_XMON=y
+CONFIG_XMON_DEFAULT=y
+# CONFIG_BDI_SWITCH is not set
+# CONFIG_BOOTX_TEXT is not set
+# CONFIG_PPC_EARLY_DEBUG is not set
+
+#
+# Security options
+#
+# CONFIG_KEYS is not set
+# CONFIG_SECURITY is not set
+
+#
+# Cryptographic options
+#
+CONFIG_CRYPTO=y
+# CONFIG_CRYPTO_HMAC is not set
+# CONFIG_CRYPTO_NULL is not set
+# CONFIG_CRYPTO_MD4 is not set
+# CONFIG_CRYPTO_MD5 is not set
+CONFIG_CRYPTO_SHA1=m
+# CONFIG_CRYPTO_SHA256 is not set
+# CONFIG_CRYPTO_SHA512 is not set
+# CONFIG_CRYPTO_WP512 is not set
+# CONFIG_CRYPTO_TGR192 is not set
+# CONFIG_CRYPTO_DES is not set
+# CONFIG_CRYPTO_BLOWFISH is not set
+# CONFIG_CRYPTO_TWOFISH is not set
+# CONFIG_CRYPTO_SERPENT is not set
+# CONFIG_CRYPTO_AES is not set
+# CONFIG_CRYPTO_CAST5 is not set
+# CONFIG_CRYPTO_CAST6 is not set
+# CONFIG_CRYPTO_TEA is not set
+CONFIG_CRYPTO_ARC4=m
+# CONFIG_CRYPTO_KHAZAD is not set
+# CONFIG_CRYPTO_ANUBIS is not set
+# CONFIG_CRYPTO_DEFLATE is not set
+# CONFIG_CRYPTO_MICHAEL_MIC is not set
+# CONFIG_CRYPTO_CRC32C is not set
+# CONFIG_CRYPTO_TEST is not set
+
+#
+# Hardware crypto devices
+#
--- /dev/null
+#
+# Automatically generated make config: don't edit
+# Linux kernel version: 2.6.17
+# Fri Jun 30 17:53:25 2006
+#
+# CONFIG_PPC64 is not set
+CONFIG_PPC32=y
+CONFIG_PPC_MERGE=y
+CONFIG_MMU=y
+CONFIG_GENERIC_HARDIRQS=y
+CONFIG_IRQ_PER_CPU=y
+CONFIG_RWSEM_XCHGADD_ALGORITHM=y
+CONFIG_GENERIC_HWEIGHT=y
+CONFIG_GENERIC_CALIBRATE_DELAY=y
+CONFIG_GENERIC_FIND_NEXT_BIT=y
+CONFIG_PPC=y
+CONFIG_EARLY_PRINTK=y
+CONFIG_GENERIC_NVRAM=y
+CONFIG_SCHED_NO_NO_OMIT_FRAME_POINTER=y
+CONFIG_ARCH_MAY_HAVE_PC_FDC=y
+CONFIG_PPC_OF=y
+CONFIG_PPC_UDBG_16550=y
+# CONFIG_GENERIC_TBSYNC is not set
+CONFIG_DEFAULT_UIMAGE=y
+
+#
+# Processor support
+#
+# CONFIG_CLASSIC32 is not set
+# CONFIG_PPC_52xx is not set
+# CONFIG_PPC_82xx is not set
+CONFIG_PPC_83xx=y
+# CONFIG_PPC_85xx is not set
+# CONFIG_PPC_86xx is not set
+# CONFIG_40x is not set
+# CONFIG_44x is not set
+# CONFIG_8xx is not set
+# CONFIG_E200 is not set
+CONFIG_6xx=y
+CONFIG_83xx=y
+CONFIG_PPC_FPU=y
+CONFIG_PPC_STD_MMU=y
+CONFIG_PPC_STD_MMU_32=y
+# CONFIG_SMP is not set
+CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
+
+#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+CONFIG_BROKEN_ON_SMP=y
+CONFIG_INIT_ENV_ARG_LIMIT=32
+
+#
+# General setup
+#
+CONFIG_LOCALVERSION=""
+CONFIG_LOCALVERSION_AUTO=y
+CONFIG_SWAP=y
+CONFIG_SYSVIPC=y
+# CONFIG_POSIX_MQUEUE is not set
+# CONFIG_BSD_PROCESS_ACCT is not set
+CONFIG_SYSCTL=y
+# CONFIG_AUDIT is not set
+# CONFIG_IKCONFIG is not set
+# CONFIG_RELAY is not set
+CONFIG_INITRAMFS_SOURCE=""
+# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
+CONFIG_EMBEDDED=y
+# CONFIG_KALLSYMS is not set
+CONFIG_HOTPLUG=y
+CONFIG_PRINTK=y
+CONFIG_BUG=y
+CONFIG_ELF_CORE=y
+CONFIG_BASE_FULL=y
+CONFIG_RT_MUTEXES=y
+CONFIG_FUTEX=y
+# CONFIG_EPOLL is not set
+CONFIG_SHMEM=y
+CONFIG_SLAB=y
+# CONFIG_TINY_SHMEM is not set
+CONFIG_BASE_SMALL=0
+# CONFIG_SLOB is not set
+
+#
+# Loadable module support
+#
+CONFIG_MODULES=y
+CONFIG_MODULE_UNLOAD=y
+# CONFIG_MODULE_FORCE_UNLOAD is not set
+# CONFIG_MODVERSIONS is not set
+# CONFIG_MODULE_SRCVERSION_ALL is not set
+# CONFIG_KMOD is not set
+
+#
+# Block layer
+#
+# CONFIG_LBD is not set
+# CONFIG_BLK_DEV_IO_TRACE is not set
+# CONFIG_LSF is not set
+
+#
+# IO Schedulers
+#
+CONFIG_IOSCHED_NOOP=y
+CONFIG_IOSCHED_AS=y
+CONFIG_IOSCHED_DEADLINE=y
+CONFIG_IOSCHED_CFQ=y
+CONFIG_DEFAULT_AS=y
+# CONFIG_DEFAULT_DEADLINE is not set
+# CONFIG_DEFAULT_CFQ is not set
+# CONFIG_DEFAULT_NOOP is not set
+CONFIG_DEFAULT_IOSCHED="anticipatory"
+CONFIG_PPC_GEN550=y
+# CONFIG_WANT_EARLY_SERIAL is not set
+
+#
+# Platform support
+#
+# CONFIG_MPC834x_SYS is not set
+CONFIG_MPC834x_ITX=y
+CONFIG_MPC834x=y
+
+#
+# Kernel options
+#
+# CONFIG_HIGHMEM is not set
+# CONFIG_HZ_100 is not set
+CONFIG_HZ_250=y
+# CONFIG_HZ_1000 is not set
+CONFIG_HZ=250
+CONFIG_PREEMPT_NONE=y
+# CONFIG_PREEMPT_VOLUNTARY is not set
+# CONFIG_PREEMPT is not set
+CONFIG_BINFMT_ELF=y
+# CONFIG_BINFMT_MISC is not set
+CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
+CONFIG_ARCH_FLATMEM_ENABLE=y
+CONFIG_SELECT_MEMORY_MODEL=y
+CONFIG_FLATMEM_MANUAL=y
+# CONFIG_DISCONTIGMEM_MANUAL is not set
+# CONFIG_SPARSEMEM_MANUAL is not set
+CONFIG_FLATMEM=y
+CONFIG_FLAT_NODE_MEM_MAP=y
+# CONFIG_SPARSEMEM_STATIC is not set
+CONFIG_SPLIT_PTLOCK_CPUS=4
+# CONFIG_RESOURCES_64BIT is not set
+CONFIG_PROC_DEVICETREE=y
+# CONFIG_CMDLINE_BOOL is not set
+# CONFIG_PM is not set
+# CONFIG_SOFTWARE_SUSPEND is not set
+CONFIG_SECCOMP=y
+CONFIG_ISA_DMA_API=y
+
+#
+# Bus options
+#
+CONFIG_GENERIC_ISA_DMA=y
+# CONFIG_PPC_I8259 is not set
+CONFIG_PPC_INDIRECT_PCI=y
+CONFIG_FSL_SOC=y
+CONFIG_PCI=y
+CONFIG_PCI_DOMAINS=y
+# CONFIG_PCIEPORTBUS is not set
+# CONFIG_PCI_DEBUG is not set
+
+#
+# PCCARD (PCMCIA/CardBus) support
+#
+# CONFIG_PCCARD is not set
+
+#
+# PCI Hotplug Support
+#
+# CONFIG_HOTPLUG_PCI is not set
+
+#
+# Advanced setup
+#
+# CONFIG_ADVANCED_OPTIONS is not set
+
+#
+# Default settings for advanced configuration options are used
+#
+CONFIG_HIGHMEM_START=0xfe000000
+CONFIG_LOWMEM_SIZE=0x30000000
+CONFIG_KERNEL_START=0xc0000000
+CONFIG_TASK_SIZE=0x80000000
+CONFIG_BOOT_LOAD=0x00800000
+
+#
+# Networking
+#
+CONFIG_NET=y
+
+#
+# Networking options
+#
+# CONFIG_NETDEBUG is not set
+CONFIG_PACKET=y
+# CONFIG_PACKET_MMAP is not set
+CONFIG_UNIX=y
+CONFIG_XFRM=y
+# CONFIG_XFRM_USER is not set
+# CONFIG_NET_KEY is not set
+CONFIG_INET=y
+CONFIG_IP_MULTICAST=y
+# CONFIG_IP_ADVANCED_ROUTER is not set
+CONFIG_IP_FIB_HASH=y
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+CONFIG_IP_PNP_BOOTP=y
+# CONFIG_IP_PNP_RARP is not set
+# CONFIG_NET_IPIP is not set
+# CONFIG_NET_IPGRE is not set
+# CONFIG_IP_MROUTE is not set
+# CONFIG_ARPD is not set
+CONFIG_SYN_COOKIES=y
+# CONFIG_INET_AH is not set
+# CONFIG_INET_ESP is not set
+# CONFIG_INET_IPCOMP is not set
+# CONFIG_INET_XFRM_TUNNEL is not set
+# CONFIG_INET_TUNNEL is not set
+CONFIG_INET_XFRM_MODE_TRANSPORT=y
+CONFIG_INET_XFRM_MODE_TUNNEL=y
+CONFIG_INET_DIAG=y
+CONFIG_INET_TCP_DIAG=y
+# CONFIG_TCP_CONG_ADVANCED is not set
+CONFIG_TCP_CONG_BIC=y
+# CONFIG_IPV6 is not set
+# CONFIG_INET6_XFRM_TUNNEL is not set
+# CONFIG_INET6_TUNNEL is not set
+# CONFIG_NETWORK_SECMARK is not set
+# CONFIG_NETFILTER is not set
+
+#
+# DCCP Configuration (EXPERIMENTAL)
+#
+# CONFIG_IP_DCCP is not set
+
+#
+# SCTP Configuration (EXPERIMENTAL)
+#
+# CONFIG_IP_SCTP is not set
+
+#
+# TIPC Configuration (EXPERIMENTAL)
+#
+# CONFIG_TIPC is not set
+# CONFIG_ATM is not set
+# CONFIG_BRIDGE is not set
+# CONFIG_VLAN_8021Q is not set
+# CONFIG_DECNET is not set
+# CONFIG_LLC2 is not set
+# CONFIG_IPX is not set
+# CONFIG_ATALK is not set
+# CONFIG_X25 is not set
+# CONFIG_LAPB is not set
+# CONFIG_NET_DIVERT is not set
+# CONFIG_ECONET is not set
+# CONFIG_WAN_ROUTER is not set
+
+#
+# QoS and/or fair queueing
+#
+# CONFIG_NET_SCHED is not set
+
+#
+# Network testing
+#
+# CONFIG_NET_PKTGEN is not set
+# CONFIG_HAMRADIO is not set
+# CONFIG_IRDA is not set
+# CONFIG_BT is not set
+# CONFIG_IEEE80211 is not set
+
+#
+# Device Drivers
+#
+
+#
+# Generic Driver Options
+#
+CONFIG_STANDALONE=y
+CONFIG_PREVENT_FIRMWARE_BUILD=y
+# CONFIG_FW_LOADER is not set
+# CONFIG_DEBUG_DRIVER is not set
+# CONFIG_SYS_HYPERVISOR is not set
+
+#
+# Connector - unified userspace <-> kernelspace linker
+#
+# CONFIG_CONNECTOR is not set
+
+#
+# Memory Technology Devices (MTD)
+#
+CONFIG_MTD=y
+# CONFIG_MTD_DEBUG is not set
+# CONFIG_MTD_CONCAT is not set
+# CONFIG_MTD_PARTITIONS is not set
+
+#
+# User Modules And Translation Layers
+#
+CONFIG_MTD_CHAR=y
+# CONFIG_MTD_BLOCK is not set
+# CONFIG_MTD_BLOCK_RO is not set
+# CONFIG_FTL is not set
+# CONFIG_NFTL is not set
+# CONFIG_INFTL is not set
+# CONFIG_RFD_FTL is not set
+
+#
+# RAM/ROM/Flash chip drivers
+#
+CONFIG_MTD_CFI=y
+# CONFIG_MTD_JEDECPROBE is not set
+CONFIG_MTD_GEN_PROBE=y
+# CONFIG_MTD_CFI_ADV_OPTIONS is not set
+CONFIG_MTD_MAP_BANK_WIDTH_1=y
+CONFIG_MTD_MAP_BANK_WIDTH_2=y
+CONFIG_MTD_MAP_BANK_WIDTH_4=y
+# CONFIG_MTD_MAP_BANK_WIDTH_8 is not set
+# CONFIG_MTD_MAP_BANK_WIDTH_16 is not set
+# CONFIG_MTD_MAP_BANK_WIDTH_32 is not set
+CONFIG_MTD_CFI_I1=y
+CONFIG_MTD_CFI_I2=y
+# CONFIG_MTD_CFI_I4 is not set
+# CONFIG_MTD_CFI_I8 is not set
+# CONFIG_MTD_CFI_INTELEXT is not set
+CONFIG_MTD_CFI_AMDSTD=y
+# CONFIG_MTD_CFI_STAA is not set
+CONFIG_MTD_CFI_UTIL=y
+# CONFIG_MTD_RAM is not set
+# CONFIG_MTD_ROM is not set
+# CONFIG_MTD_ABSENT is not set
+# CONFIG_MTD_OBSOLETE_CHIPS is not set
+
+#
+# Mapping drivers for chip access
+#
+# CONFIG_MTD_COMPLEX_MAPPINGS is not set
+CONFIG_MTD_PHYSMAP=y
+CONFIG_MTD_PHYSMAP_START=0xfe000000
+CONFIG_MTD_PHYSMAP_LEN=0x1000000
+CONFIG_MTD_PHYSMAP_BANKWIDTH=2
+# CONFIG_MTD_PLATRAM is not set
+
+#
+# Self-contained MTD device drivers
+#
+# CONFIG_MTD_PMC551 is not set
+# CONFIG_MTD_DATAFLASH is not set
+# CONFIG_MTD_M25P80 is not set
+# CONFIG_MTD_SLRAM is not set
+# CONFIG_MTD_PHRAM is not set
+# CONFIG_MTD_MTDRAM is not set
+# CONFIG_MTD_BLOCK2MTD is not set
+
+#
+# Disk-On-Chip Device Drivers
+#
+# CONFIG_MTD_DOC2000 is not set
+# CONFIG_MTD_DOC2001 is not set
+# CONFIG_MTD_DOC2001PLUS is not set
+
+#
+# NAND Flash Device Drivers
+#
+# CONFIG_MTD_NAND is not set
+
+#
+# OneNAND Flash Device Drivers
+#
+# CONFIG_MTD_ONENAND is not set
+
+#
+# Parallel port support
+#
+# CONFIG_PARPORT is not set
+
+#
+# Plug and Play support
+#
+
+#
+# Block devices
+#
+# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_CPQ_DA is not set
+# CONFIG_BLK_CPQ_CISS_DA is not set
+# CONFIG_BLK_DEV_DAC960 is not set
+# CONFIG_BLK_DEV_UMEM is not set
+# CONFIG_BLK_DEV_COW_COMMON is not set
+CONFIG_BLK_DEV_LOOP=y
+# CONFIG_BLK_DEV_CRYPTOLOOP is not set
+# CONFIG_BLK_DEV_NBD is not set
+# CONFIG_BLK_DEV_SX8 is not set
+# CONFIG_BLK_DEV_UB is not set
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_COUNT=16
+CONFIG_BLK_DEV_RAM_SIZE=32768
+CONFIG_BLK_DEV_INITRD=y
+# CONFIG_CDROM_PKTCDVD is not set
+# CONFIG_ATA_OVER_ETH is not set
+
+#
+# ATA/ATAPI/MFM/RLL support
+#
+CONFIG_IDE=y
+# CONFIG_BLK_DEV_IDE is not set
+# CONFIG_BLK_DEV_HD_ONLY is not set
+# CONFIG_BLK_DEV_HD is not set
+
+#
+# SCSI device support
+#
+# CONFIG_RAID_ATTRS is not set
+CONFIG_SCSI=y
+CONFIG_SCSI_PROC_FS=y
+
+#
+# SCSI support type (disk, tape, CD-ROM)
+#
+CONFIG_BLK_DEV_SD=y
+# CONFIG_CHR_DEV_ST is not set
+# CONFIG_CHR_DEV_OSST is not set
+# CONFIG_BLK_DEV_SR is not set
+CONFIG_CHR_DEV_SG=y
+# CONFIG_CHR_DEV_SCH is not set
+
+#
+# Some SCSI devices (e.g. CD jukebox) support multiple LUNs
+#
+# CONFIG_SCSI_MULTI_LUN is not set
+# CONFIG_SCSI_CONSTANTS is not set
+# CONFIG_SCSI_LOGGING is not set
+
+#
+# SCSI Transport Attributes
+#
+CONFIG_SCSI_SPI_ATTRS=y
+# CONFIG_SCSI_FC_ATTRS is not set
+# CONFIG_SCSI_ISCSI_ATTRS is not set
+# CONFIG_SCSI_SAS_ATTRS is not set
+
+#
+# SCSI low-level drivers
+#
+# CONFIG_ISCSI_TCP is not set
+# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
+# CONFIG_SCSI_3W_9XXX is not set
+# CONFIG_SCSI_ACARD is not set
+# CONFIG_SCSI_AACRAID is not set
+# CONFIG_SCSI_AIC7XXX is not set
+# CONFIG_SCSI_AIC7XXX_OLD is not set
+# CONFIG_SCSI_AIC79XX is not set
+# CONFIG_SCSI_DPT_I2O is not set
+# CONFIG_MEGARAID_NEWGEN is not set
+# CONFIG_MEGARAID_LEGACY is not set
+# CONFIG_MEGARAID_SAS is not set
+CONFIG_SCSI_SATA=y
+# CONFIG_SCSI_SATA_AHCI is not set
+# CONFIG_SCSI_SATA_SVW is not set
+# CONFIG_SCSI_ATA_PIIX is not set
+# CONFIG_SCSI_SATA_MV is not set
+# CONFIG_SCSI_SATA_NV is not set
+# CONFIG_SCSI_PDC_ADMA is not set
+# CONFIG_SCSI_HPTIOP is not set
+# CONFIG_SCSI_SATA_QSTOR is not set
+# CONFIG_SCSI_SATA_PROMISE is not set
+# CONFIG_SCSI_SATA_SX4 is not set
+CONFIG_SCSI_SATA_SIL=y
+# CONFIG_SCSI_SATA_SIL24 is not set
+# CONFIG_SCSI_SATA_SIS is not set
+# CONFIG_SCSI_SATA_ULI is not set
+# CONFIG_SCSI_SATA_VIA is not set
+# CONFIG_SCSI_SATA_VITESSE is not set
+# CONFIG_SCSI_BUSLOGIC is not set
+# CONFIG_SCSI_DMX3191D is not set
+# CONFIG_SCSI_EATA is not set
+# CONFIG_SCSI_FUTURE_DOMAIN is not set
+# CONFIG_SCSI_GDTH is not set
+# CONFIG_SCSI_IPS is not set
+# CONFIG_SCSI_INITIO is not set
+# CONFIG_SCSI_INIA100 is not set
+# CONFIG_SCSI_SYM53C8XX_2 is not set
+# CONFIG_SCSI_IPR is not set
+# CONFIG_SCSI_QLOGIC_1280 is not set
+# CONFIG_SCSI_QLA_FC is not set
+# CONFIG_SCSI_LPFC is not set
+# CONFIG_SCSI_DC395x is not set
+# CONFIG_SCSI_DC390T is not set
+# CONFIG_SCSI_NSP32 is not set
+# CONFIG_SCSI_DEBUG is not set
+
+#
+# Multi-device support (RAID and LVM)
+#
+CONFIG_MD=y
+CONFIG_BLK_DEV_MD=y
+CONFIG_MD_LINEAR=y
+CONFIG_MD_RAID0=y
+CONFIG_MD_RAID1=y
+# CONFIG_MD_RAID10 is not set
+# CONFIG_MD_RAID456 is not set
+# CONFIG_MD_MULTIPATH is not set
+# CONFIG_MD_FAULTY is not set
+# CONFIG_BLK_DEV_DM is not set
+
+#
+# Fusion MPT device support
+#
+# CONFIG_FUSION is not set
+# CONFIG_FUSION_SPI is not set
+# CONFIG_FUSION_FC is not set
+# CONFIG_FUSION_SAS is not set
+
+#
+# IEEE 1394 (FireWire) support
+#
+# CONFIG_IEEE1394 is not set
+
+#
+# I2O device support
+#
+# CONFIG_I2O is not set
+
+#
+# Macintosh device drivers
+#
+# CONFIG_WINDFARM is not set
+
+#
+# Network device support
+#
+CONFIG_NETDEVICES=y
+# CONFIG_DUMMY is not set
+# CONFIG_BONDING is not set
+# CONFIG_EQUALIZER is not set
+# CONFIG_TUN is not set
+
+#
+# ARCnet devices
+#
+# CONFIG_ARCNET is not set
+
+#
+# PHY device support
+#
+CONFIG_PHYLIB=y
+
+#
+# MII PHY device drivers
+#
+# CONFIG_MARVELL_PHY is not set
+# CONFIG_DAVICOM_PHY is not set
+# CONFIG_QSEMI_PHY is not set
+# CONFIG_LXT_PHY is not set
+CONFIG_CICADA_PHY=y
+# CONFIG_VITESSE_PHY is not set
+# CONFIG_SMSC_PHY is not set
+
+#
+# Ethernet (10 or 100Mbit)
+#
+CONFIG_NET_ETHERNET=y
+CONFIG_MII=y
+# CONFIG_HAPPYMEAL is not set
+# CONFIG_SUNGEM is not set
+# CONFIG_CASSINI is not set
+# CONFIG_NET_VENDOR_3COM is not set
+
+#
+# Tulip family network device support
+#
+# CONFIG_NET_TULIP is not set
+# CONFIG_HP100 is not set
+CONFIG_NET_PCI=y
+# CONFIG_PCNET32 is not set
+# CONFIG_AMD8111_ETH is not set
+# CONFIG_ADAPTEC_STARFIRE is not set
+# CONFIG_B44 is not set
+# CONFIG_FORCEDETH is not set
+# CONFIG_DGRS is not set
+# CONFIG_EEPRO100 is not set
+CONFIG_E100=y
+# CONFIG_FEALNX is not set
+# CONFIG_NATSEMI is not set
+# CONFIG_NE2K_PCI is not set
+# CONFIG_8139CP is not set
+# CONFIG_8139TOO is not set
+# CONFIG_SIS900 is not set
+# CONFIG_EPIC100 is not set
+# CONFIG_SUNDANCE is not set
+# CONFIG_TLAN is not set
+# CONFIG_VIA_RHINE is not set
+
+#
+# Ethernet (1000 Mbit)
+#
+# CONFIG_ACENIC is not set
+# CONFIG_DL2K is not set
+# CONFIG_E1000 is not set
+# CONFIG_NS83820 is not set
+# CONFIG_HAMACHI is not set
+# CONFIG_YELLOWFIN is not set
+# CONFIG_R8169 is not set
+# CONFIG_SIS190 is not set
+# CONFIG_SKGE is not set
+# CONFIG_SKY2 is not set
+# CONFIG_SK98LIN is not set
+# CONFIG_VIA_VELOCITY is not set
+# CONFIG_TIGON3 is not set
+# CONFIG_BNX2 is not set
+CONFIG_GIANFAR=y
+CONFIG_GFAR_NAPI=y
+
+#
+# Ethernet (10000 Mbit)
+#
+# CONFIG_CHELSIO_T1 is not set
+# CONFIG_IXGB is not set
+# CONFIG_S2IO is not set
+# CONFIG_MYRI10GE is not set
+
+#
+# Token Ring devices
+#
+# CONFIG_TR is not set
+
+#
+# Wireless LAN (non-hamradio)
+#
+# CONFIG_NET_RADIO is not set
+
+#
+# Wan interfaces
+#
+# CONFIG_WAN is not set
+# CONFIG_FDDI is not set
+# CONFIG_HIPPI is not set
+# CONFIG_PPP is not set
+# CONFIG_SLIP is not set
+# CONFIG_NET_FC is not set
+# CONFIG_SHAPER is not set
+# CONFIG_NETCONSOLE is not set
+# CONFIG_NETPOLL is not set
+# CONFIG_NET_POLL_CONTROLLER is not set
+
+#
+# ISDN subsystem
+#
+# CONFIG_ISDN is not set
+
+#
+# Telephony Support
+#
+# CONFIG_PHONE is not set
+
+#
+# Input device support
+#
+CONFIG_INPUT=y
+
+#
+# Userland interfaces
+#
+# CONFIG_INPUT_MOUSEDEV is not set
+# CONFIG_INPUT_JOYDEV is not set
+# CONFIG_INPUT_TSDEV is not set
+# CONFIG_INPUT_EVDEV is not set
+# CONFIG_INPUT_EVBUG is not set
+
+#
+# Input Device Drivers
+#
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_INPUT_JOYSTICK is not set
+# CONFIG_INPUT_TOUCHSCREEN is not set
+# CONFIG_INPUT_MISC is not set
+
+#
+# Hardware I/O ports
+#
+# CONFIG_SERIO is not set
+# CONFIG_GAMEPORT is not set
+
+#
+# Character devices
+#
+# CONFIG_VT is not set
+# CONFIG_SERIAL_NONSTANDARD is not set
+
+#
+# Serial drivers
+#
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_8250_PCI=y
+CONFIG_SERIAL_8250_NR_UARTS=4
+CONFIG_SERIAL_8250_RUNTIME_UARTS=4
+# CONFIG_SERIAL_8250_EXTENDED is not set
+
+#
+# Non-8250 serial port support
+#
+CONFIG_SERIAL_CORE=y
+CONFIG_SERIAL_CORE_CONSOLE=y
+# CONFIG_SERIAL_JSM is not set
+CONFIG_UNIX98_PTYS=y
+CONFIG_LEGACY_PTYS=y
+CONFIG_LEGACY_PTY_COUNT=256
+
+#
+# IPMI
+#
+# CONFIG_IPMI_HANDLER is not set
+
+#
+# Watchdog Cards
+#
+CONFIG_WATCHDOG=y
+# CONFIG_WATCHDOG_NOWAYOUT is not set
+
+#
+# Watchdog Device Drivers
+#
+# CONFIG_SOFT_WATCHDOG is not set
+CONFIG_83xx_WDT=y
+
+#
+# PCI-based Watchdog Cards
+#
+# CONFIG_PCIPCWATCHDOG is not set
+# CONFIG_WDTPCI is not set
+
+#
+# USB-based Watchdog Cards
+#
+# CONFIG_USBPCWATCHDOG is not set
+CONFIG_HW_RANDOM=y
+# CONFIG_NVRAM is not set
+# CONFIG_GEN_RTC is not set
+# CONFIG_DTLK is not set
+# CONFIG_R3964 is not set
+# CONFIG_APPLICOM is not set
+
+#
+# Ftape, the floppy tape device driver
+#
+# CONFIG_AGP is not set
+# CONFIG_DRM is not set
+# CONFIG_RAW_DRIVER is not set
+
+#
+# TPM devices
+#
+# CONFIG_TCG_TPM is not set
+# CONFIG_TELCLOCK is not set
+
+#
+# I2C support
+#
+CONFIG_I2C=y
+CONFIG_I2C_CHARDEV=y
+
+#
+# I2C Algorithms
+#
+# CONFIG_I2C_ALGOBIT is not set
+# CONFIG_I2C_ALGOPCF is not set
+# CONFIG_I2C_ALGOPCA is not set
+
+#
+# I2C Hardware Bus support
+#
+# CONFIG_I2C_ALI1535 is not set
+# CONFIG_I2C_ALI1563 is not set
+# CONFIG_I2C_ALI15X3 is not set
+# CONFIG_I2C_AMD756 is not set
+# CONFIG_I2C_AMD8111 is not set
+# CONFIG_I2C_I801 is not set
+# CONFIG_I2C_I810 is not set
+# CONFIG_I2C_PIIX4 is not set
+CONFIG_I2C_MPC=y
+# CONFIG_I2C_NFORCE2 is not set
+# CONFIG_I2C_OCORES is not set
+# CONFIG_I2C_PARPORT_LIGHT is not set
+# CONFIG_I2C_PROSAVAGE is not set
+# CONFIG_I2C_SAVAGE4 is not set
+# CONFIG_I2C_SIS5595 is not set
+# CONFIG_I2C_SIS630 is not set
+# CONFIG_I2C_SIS96X is not set
+# CONFIG_I2C_STUB is not set
+# CONFIG_I2C_VIA is not set
+# CONFIG_I2C_VIAPRO is not set
+# CONFIG_I2C_VOODOO3 is not set
+# CONFIG_I2C_PCA_ISA is not set
+
+#
+# Miscellaneous I2C Chip support
+#
+# CONFIG_SENSORS_DS1337 is not set
+# CONFIG_SENSORS_DS1374 is not set
+# CONFIG_SENSORS_EEPROM is not set
+# CONFIG_SENSORS_PCF8574 is not set
+# CONFIG_SENSORS_PCA9539 is not set
+# CONFIG_SENSORS_PCF8591 is not set
+# CONFIG_SENSORS_M41T00 is not set
+# CONFIG_SENSORS_MAX6875 is not set
+# CONFIG_I2C_DEBUG_CORE is not set
+# CONFIG_I2C_DEBUG_ALGO is not set
+# CONFIG_I2C_DEBUG_BUS is not set
+# CONFIG_I2C_DEBUG_CHIP is not set
+
+#
+# SPI support
+#
+CONFIG_SPI=y
+# CONFIG_SPI_DEBUG is not set
+CONFIG_SPI_MASTER=y
+
+#
+# SPI Master Controller Drivers
+#
+CONFIG_SPI_BITBANG=y
+CONFIG_SPI_MPC83xx=y
+
+#
+# SPI Protocol Masters
+#
+
+#
+# Dallas's 1-wire bus
+#
+
+#
+# Hardware Monitoring support
+#
+CONFIG_HWMON=y
+# CONFIG_HWMON_VID is not set
+# CONFIG_SENSORS_ABITUGURU is not set
+# CONFIG_SENSORS_ADM1021 is not set
+# CONFIG_SENSORS_ADM1025 is not set
+# CONFIG_SENSORS_ADM1026 is not set
+# CONFIG_SENSORS_ADM1031 is not set
+# CONFIG_SENSORS_ADM9240 is not set
+# CONFIG_SENSORS_ASB100 is not set
+# CONFIG_SENSORS_ATXP1 is not set
+# CONFIG_SENSORS_DS1621 is not set
+# CONFIG_SENSORS_F71805F is not set
+# CONFIG_SENSORS_FSCHER is not set
+# CONFIG_SENSORS_FSCPOS is not set
+# CONFIG_SENSORS_GL518SM is not set
+# CONFIG_SENSORS_GL520SM is not set
+# CONFIG_SENSORS_IT87 is not set
+# CONFIG_SENSORS_LM63 is not set
+# CONFIG_SENSORS_LM70 is not set
+# CONFIG_SENSORS_LM75 is not set
+# CONFIG_SENSORS_LM77 is not set
+# CONFIG_SENSORS_LM78 is not set
+# CONFIG_SENSORS_LM80 is not set
+# CONFIG_SENSORS_LM83 is not set
+# CONFIG_SENSORS_LM85 is not set
+# CONFIG_SENSORS_LM87 is not set
+# CONFIG_SENSORS_LM90 is not set
+# CONFIG_SENSORS_LM92 is not set
+# CONFIG_SENSORS_MAX1619 is not set
+# CONFIG_SENSORS_PC87360 is not set
+# CONFIG_SENSORS_SIS5595 is not set
+# CONFIG_SENSORS_SMSC47M1 is not set
+# CONFIG_SENSORS_SMSC47M192 is not set
+# CONFIG_SENSORS_SMSC47B397 is not set
+# CONFIG_SENSORS_VIA686A is not set
+# CONFIG_SENSORS_VT8231 is not set
+# CONFIG_SENSORS_W83781D is not set
+# CONFIG_SENSORS_W83791D is not set
+# CONFIG_SENSORS_W83792D is not set
+# CONFIG_SENSORS_W83L785TS is not set
+# CONFIG_SENSORS_W83627HF is not set
+# CONFIG_SENSORS_W83627EHF is not set
+# CONFIG_HWMON_DEBUG_CHIP is not set
+
+#
+# Misc devices
+#
+
+#
+# Multimedia devices
+#
+# CONFIG_VIDEO_DEV is not set
+CONFIG_VIDEO_V4L2=y
+
+#
+# Digital Video Broadcasting Devices
+#
+# CONFIG_DVB is not set
+# CONFIG_USB_DABUSB is not set
+
+#
+# Graphics support
+#
+CONFIG_FIRMWARE_EDID=y
+# CONFIG_FB is not set
+
+#
+# Sound
+#
+# CONFIG_SOUND is not set
+
+#
+# USB support
+#
+CONFIG_USB_ARCH_HAS_HCD=y
+CONFIG_USB_ARCH_HAS_OHCI=y
+CONFIG_USB_ARCH_HAS_EHCI=y
+CONFIG_USB=y
+# CONFIG_USB_DEBUG is not set
+
+#
+# Miscellaneous USB options
+#
+CONFIG_USB_DEVICEFS=y
+# CONFIG_USB_BANDWIDTH is not set
+# CONFIG_USB_DYNAMIC_MINORS is not set
+# CONFIG_USB_OTG is not set
+
+#
+# USB Host Controller Drivers
+#
+CONFIG_USB_EHCI_HCD=y
+# CONFIG_USB_EHCI_SPLIT_ISO is not set
+# CONFIG_USB_EHCI_ROOT_HUB_TT is not set
+# CONFIG_USB_EHCI_TT_NEWSCHED is not set
+# CONFIG_USB_ISP116X_HCD is not set
+CONFIG_USB_OHCI_HCD=y
+# CONFIG_USB_OHCI_BIG_ENDIAN is not set
+CONFIG_USB_OHCI_LITTLE_ENDIAN=y
+CONFIG_USB_UHCI_HCD=y
+# CONFIG_USB_SL811_HCD is not set
+
+#
+# USB Device Class drivers
+#
+# CONFIG_USB_ACM is not set
+# CONFIG_USB_PRINTER is not set
+
+#
+# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support'
+#
+
+#
+# may also be needed; see USB_STORAGE Help for more information
+#
+CONFIG_USB_STORAGE=y
+# CONFIG_USB_STORAGE_DEBUG is not set
+# CONFIG_USB_STORAGE_DATAFAB is not set
+# CONFIG_USB_STORAGE_FREECOM is not set
+# CONFIG_USB_STORAGE_DPCM is not set
+# CONFIG_USB_STORAGE_USBAT is not set
+# CONFIG_USB_STORAGE_SDDR09 is not set
+# CONFIG_USB_STORAGE_SDDR55 is not set
+# CONFIG_USB_STORAGE_JUMPSHOT is not set
+# CONFIG_USB_STORAGE_ALAUDA is not set
+# CONFIG_USB_LIBUSUAL is not set
+
+#
+# USB Input Devices
+#
+# CONFIG_USB_HID is not set
+
+#
+# USB HID Boot Protocol drivers
+#
+# CONFIG_USB_KBD is not set
+# CONFIG_USB_MOUSE is not set
+# CONFIG_USB_AIPTEK is not set
+# CONFIG_USB_WACOM is not set
+# CONFIG_USB_ACECAD is not set
+# CONFIG_USB_KBTAB is not set
+# CONFIG_USB_POWERMATE is not set
+# CONFIG_USB_TOUCHSCREEN is not set
+# CONFIG_USB_YEALINK is not set
+# CONFIG_USB_XPAD is not set
+# CONFIG_USB_ATI_REMOTE is not set
+# CONFIG_USB_ATI_REMOTE2 is not set
+# CONFIG_USB_KEYSPAN_REMOTE is not set
+# CONFIG_USB_APPLETOUCH is not set
+
+#
+# USB Imaging devices
+#
+# CONFIG_USB_MDC800 is not set
+# CONFIG_USB_MICROTEK is not set
+
+#
+# USB Network Adapters
+#
+# CONFIG_USB_CATC is not set
+# CONFIG_USB_KAWETH is not set
+# CONFIG_USB_PEGASUS is not set
+# CONFIG_USB_RTL8150 is not set
+# CONFIG_USB_USBNET is not set
+CONFIG_USB_MON=y
+
+#
+# USB port drivers
+#
+
+#
+# USB Serial Converter support
+#
+# CONFIG_USB_SERIAL is not set
+
+#
+# USB Miscellaneous drivers
+#
+# CONFIG_USB_EMI62 is not set
+# CONFIG_USB_EMI26 is not set
+# CONFIG_USB_AUERSWALD is not set
+# CONFIG_USB_RIO500 is not set
+# CONFIG_USB_LEGOTOWER is not set
+# CONFIG_USB_LCD is not set
+# CONFIG_USB_LED is not set
+# CONFIG_USB_CY7C63 is not set
+# CONFIG_USB_CYTHERM is not set
+# CONFIG_USB_PHIDGETKIT is not set
+# CONFIG_USB_PHIDGETSERVO is not set
+# CONFIG_USB_IDMOUSE is not set
+# CONFIG_USB_APPLEDISPLAY is not set
+# CONFIG_USB_SISUSBVGA is not set
+# CONFIG_USB_LD is not set
+# CONFIG_USB_TEST is not set
+
+#
+# USB DSL modem support
+#
+
+#
+# USB Gadget Support
+#
+CONFIG_USB_GADGET=y
+# CONFIG_USB_GADGET_DEBUG_FILES is not set
+CONFIG_USB_GADGET_SELECTED=y
+CONFIG_USB_GADGET_NET2280=y
+CONFIG_USB_NET2280=y
+# CONFIG_USB_GADGET_PXA2XX is not set
+# CONFIG_USB_GADGET_GOKU is not set
+# CONFIG_USB_GADGET_LH7A40X is not set
+# CONFIG_USB_GADGET_OMAP is not set
+# CONFIG_USB_GADGET_AT91 is not set
+# CONFIG_USB_GADGET_DUMMY_HCD is not set
+CONFIG_USB_GADGET_DUALSPEED=y
+# CONFIG_USB_ZERO is not set
+CONFIG_USB_ETH=y
+CONFIG_USB_ETH_RNDIS=y
+# CONFIG_USB_GADGETFS is not set
+# CONFIG_USB_FILE_STORAGE is not set
+# CONFIG_USB_G_SERIAL is not set
+
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
+#
+# LED devices
+#
+# CONFIG_NEW_LEDS is not set
+
+#
+# LED drivers
+#
+
+#
+# LED Triggers
+#
+
+#
+# InfiniBand support
+#
+# CONFIG_INFINIBAND is not set
+
+#
+# EDAC - error detection and reporting (RAS) (EXPERIMENTAL)
+#
+
+#
+# Real Time Clock
+#
+CONFIG_RTC_LIB=y
+CONFIG_RTC_CLASS=y
+CONFIG_RTC_HCTOSYS=y
+CONFIG_RTC_HCTOSYS_DEVICE="rtc0"
+
+#
+# RTC interfaces
+#
+CONFIG_RTC_INTF_SYSFS=y
+CONFIG_RTC_INTF_PROC=y
+CONFIG_RTC_INTF_DEV=y
+CONFIG_RTC_INTF_DEV_UIE_EMUL=y
+
+#
+# RTC drivers
+#
+# CONFIG_RTC_DRV_X1205 is not set
+CONFIG_RTC_DRV_DS1307=y
+# CONFIG_RTC_DRV_DS1553 is not set
+# CONFIG_RTC_DRV_DS1672 is not set
+# CONFIG_RTC_DRV_DS1742 is not set
+# CONFIG_RTC_DRV_PCF8563 is not set
+# CONFIG_RTC_DRV_PCF8583 is not set
+# CONFIG_RTC_DRV_RS5C348 is not set
+# CONFIG_RTC_DRV_RS5C372 is not set
+# CONFIG_RTC_DRV_M48T86 is not set
+# CONFIG_RTC_DRV_TEST is not set
+# CONFIG_RTC_DRV_MAX6902 is not set
+# CONFIG_RTC_DRV_V3020 is not set
+
+#
+# DMA Engine support
+#
+CONFIG_DMA_ENGINE=y
+
+#
+# DMA Clients
+#
+CONFIG_NET_DMA=y
+
+#
+# DMA Devices
+#
+CONFIG_INTEL_IOATDMA=y
+
+#
+# File systems
+#
+CONFIG_EXT2_FS=y
+# CONFIG_EXT2_FS_XATTR is not set
+# CONFIG_EXT2_FS_XIP is not set
+CONFIG_EXT3_FS=y
+CONFIG_EXT3_FS_XATTR=y
+# CONFIG_EXT3_FS_POSIX_ACL is not set
+# CONFIG_EXT3_FS_SECURITY is not set
+CONFIG_JBD=y
+# CONFIG_JBD_DEBUG is not set
+CONFIG_FS_MBCACHE=y
+# CONFIG_REISERFS_FS is not set
+# CONFIG_JFS_FS is not set
+# CONFIG_FS_POSIX_ACL is not set
+# CONFIG_XFS_FS is not set
+# CONFIG_OCFS2_FS is not set
+# CONFIG_MINIX_FS is not set
+# CONFIG_ROMFS_FS is not set
+CONFIG_INOTIFY=y
+CONFIG_INOTIFY_USER=y
+# CONFIG_QUOTA is not set
+CONFIG_DNOTIFY=y
+# CONFIG_AUTOFS_FS is not set
+# CONFIG_AUTOFS4_FS is not set
+# CONFIG_FUSE_FS is not set
+
+#
+# CD-ROM/DVD Filesystems
+#
+# CONFIG_ISO9660_FS is not set
+# CONFIG_UDF_FS is not set
+
+#
+# DOS/FAT/NT Filesystems
+#
+# CONFIG_MSDOS_FS is not set
+# CONFIG_VFAT_FS is not set
+# CONFIG_NTFS_FS is not set
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+CONFIG_PROC_KCORE=y
+CONFIG_SYSFS=y
+CONFIG_TMPFS=y
+# CONFIG_HUGETLB_PAGE is not set
+CONFIG_RAMFS=y
+# CONFIG_CONFIGFS_FS is not set
+
+#
+# Miscellaneous filesystems
+#
+# CONFIG_ADFS_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_HFSPLUS_FS is not set
+# CONFIG_BEFS_FS is not set
+# CONFIG_BFS_FS is not set
+# CONFIG_EFS_FS is not set
+# CONFIG_JFFS_FS is not set
+# CONFIG_JFFS2_FS is not set
+# CONFIG_CRAMFS is not set
+# CONFIG_VXFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_SYSV_FS is not set
+# CONFIG_UFS_FS is not set
+
+#
+# Network File Systems
+#
+CONFIG_NFS_FS=y
+CONFIG_NFS_V3=y
+# CONFIG_NFS_V3_ACL is not set
+CONFIG_NFS_V4=y
+# CONFIG_NFS_DIRECTIO is not set
+# CONFIG_NFSD is not set
+CONFIG_ROOT_NFS=y
+CONFIG_LOCKD=y
+CONFIG_LOCKD_V4=y
+CONFIG_NFS_COMMON=y
+CONFIG_SUNRPC=y
+CONFIG_SUNRPC_GSS=y
+CONFIG_RPCSEC_GSS_KRB5=y
+# CONFIG_RPCSEC_GSS_SPKM3 is not set
+# CONFIG_SMB_FS is not set
+# CONFIG_CIFS is not set
+# CONFIG_CIFS_DEBUG2 is not set
+# CONFIG_NCP_FS is not set
+# CONFIG_CODA_FS is not set
+# CONFIG_AFS_FS is not set
+# CONFIG_9P_FS is not set
+
+#
+# Partition Types
+#
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_ACORN_PARTITION is not set
+# CONFIG_OSF_PARTITION is not set
+# CONFIG_AMIGA_PARTITION is not set
+# CONFIG_ATARI_PARTITION is not set
+# CONFIG_MAC_PARTITION is not set
+# CONFIG_MSDOS_PARTITION is not set
+# CONFIG_LDM_PARTITION is not set
+# CONFIG_SGI_PARTITION is not set
+# CONFIG_ULTRIX_PARTITION is not set
+# CONFIG_SUN_PARTITION is not set
+# CONFIG_KARMA_PARTITION is not set
+# CONFIG_EFI_PARTITION is not set
+
+#
+# Native Language Support
+#
+# CONFIG_NLS is not set
+
+#
+# Library routines
+#
+# CONFIG_CRC_CCITT is not set
+# CONFIG_CRC16 is not set
+CONFIG_CRC32=y
+# CONFIG_LIBCRC32C is not set
+CONFIG_PLIST=y
+
+#
+# Instrumentation Support
+#
+# CONFIG_PROFILING is not set
+
+#
+# Kernel hacking
+#
+CONFIG_PRINTK_TIME=y
+# CONFIG_MAGIC_SYSRQ is not set
+# CONFIG_UNUSED_SYMBOLS is not set
+CONFIG_DEBUG_KERNEL=y
+CONFIG_LOG_BUF_SHIFT=17
+CONFIG_DETECT_SOFTLOCKUP=y
+# CONFIG_SCHEDSTATS is not set
+# CONFIG_DEBUG_SLAB is not set
+# CONFIG_DEBUG_MUTEXES is not set
+# CONFIG_DEBUG_RT_MUTEXES is not set
+# CONFIG_RT_MUTEX_TESTER is not set
+# CONFIG_DEBUG_SPINLOCK is not set
+# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
+# CONFIG_DEBUG_KOBJECT is not set
+CONFIG_DEBUG_INFO=y
+# CONFIG_DEBUG_FS is not set
+# CONFIG_DEBUG_VM is not set
+CONFIG_FORCED_INLINING=y
+# CONFIG_RCU_TORTURE_TEST is not set
+# CONFIG_DEBUGGER is not set
+# CONFIG_BDI_SWITCH is not set
+CONFIG_BOOTX_TEXT=y
+CONFIG_SERIAL_TEXT_DEBUG=y
+# CONFIG_PPC_EARLY_DEBUG is not set
+
+#
+# Security options
+#
+# CONFIG_KEYS is not set
+# CONFIG_SECURITY is not set
+
+#
+# Cryptographic options
+#
+CONFIG_CRYPTO=y
+# CONFIG_CRYPTO_HMAC is not set
+# CONFIG_CRYPTO_NULL is not set
+# CONFIG_CRYPTO_MD4 is not set
+CONFIG_CRYPTO_MD5=y
+# CONFIG_CRYPTO_SHA1 is not set
+# CONFIG_CRYPTO_SHA256 is not set
+# CONFIG_CRYPTO_SHA512 is not set
+# CONFIG_CRYPTO_WP512 is not set
+# CONFIG_CRYPTO_TGR192 is not set
+CONFIG_CRYPTO_DES=y
+# CONFIG_CRYPTO_BLOWFISH is not set
+# CONFIG_CRYPTO_TWOFISH is not set
+# CONFIG_CRYPTO_SERPENT is not set
+# CONFIG_CRYPTO_AES is not set
+# CONFIG_CRYPTO_CAST5 is not set
+# CONFIG_CRYPTO_CAST6 is not set
+# CONFIG_CRYPTO_TEA is not set
+# CONFIG_CRYPTO_ARC4 is not set
+# CONFIG_CRYPTO_KHAZAD is not set
+# CONFIG_CRYPTO_ANUBIS is not set
+# CONFIG_CRYPTO_DEFLATE is not set
+# CONFIG_CRYPTO_MICHAEL_MIC is not set
+# CONFIG_CRYPTO_CRC32C is not set
+# CONFIG_CRYPTO_TEST is not set
+
+#
+# Hardware crypto devices
+#
logicalDisplayBase = (unsigned char *)address;
dispDeviceBase = (unsigned char *)address;
dispDeviceRowBytes = pitch;
- dispDeviceDepth = depth;
+ dispDeviceDepth = depth == 15 ? 16 : depth;
dispDeviceRect[0] = dispDeviceRect[1] = 0;
dispDeviceRect[2] = width;
dispDeviceRect[3] = height;
unsigned long address = 0;
u32 *prop;
- prop = (u32 *)get_property(np, "width", NULL);
+ prop = (u32 *)get_property(np, "linux,bootx-width", NULL);
+ if (prop == NULL)
+ prop = (u32 *)get_property(np, "width", NULL);
if (prop == NULL)
return -EINVAL;
width = *prop;
- prop = (u32 *)get_property(np, "height", NULL);
+ prop = (u32 *)get_property(np, "linux,bootx-height", NULL);
+ if (prop == NULL)
+ prop = (u32 *)get_property(np, "height", NULL);
if (prop == NULL)
return -EINVAL;
height = *prop;
- prop = (u32 *)get_property(np, "depth", NULL);
+ prop = (u32 *)get_property(np, "linux,bootx-depth", NULL);
+ if (prop == NULL)
+ prop = (u32 *)get_property(np, "depth", NULL);
if (prop == NULL)
return -EINVAL;
depth = *prop;
pitch = width * ((depth + 7) / 8);
- prop = (u32 *)get_property(np, "linebytes", NULL);
+ prop = (u32 *)get_property(np, "linux,bootx-linebytes", NULL);
+ if (prop == NULL)
+ prop = (u32 *)get_property(np, "linebytes", NULL);
if (prop)
pitch = *prop;
if (pitch == 1)
g_max_loc_Y = height / 16;
dispDeviceBase = (unsigned char *)address;
dispDeviceRowBytes = pitch;
- dispDeviceDepth = depth;
+ dispDeviceDepth = depth == 15 ? 16 : depth;
dispDeviceRect[0] = dispDeviceRect[1] = 0;
dispDeviceRect[2] = width;
dispDeviceRect[3] = height;
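The btext hunks above change the boot-console setup in two ways: a 15-bit depth reported by the firmware is now treated as 16 bits per pixel, and each display property is first looked up under the normalized "linux,bootx-*" name before falling back to the legacy name. A minimal sketch of that fallback pattern, using a hypothetical helper name (only the lookup order comes from the patch):

/* Hypothetical helper capturing the lookup pattern above; the name
 * bootx_get_prop is not in the patch, only the fallback order is. */
static u32 *bootx_get_prop(struct device_node *np,
                           const char *new_name, const char *legacy_name)
{
        u32 *prop = (u32 *)get_property(np, new_name, NULL);

        if (prop == NULL)
                prop = (u32 *)get_property(np, legacy_name, NULL);
        return prop;
}

/* usage, matching the width lookup above:
 *      prop = bootx_get_prop(np, "linux,bootx-width", "width");
 *      if (prop == NULL)
 *              return -EINVAL;
 *      width = *prop;
 */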
unsigned long irq_flags, const char * devname,
void *dev_id)
{
- unsigned int irq = virt_irq_create_mapping(ist);
+ unsigned int irq = irq_create_mapping(NULL, ist, 0);
if (irq == NO_IRQ)
return -EINVAL;
- irq = irq_offset_up(irq);
-
return request_irq(irq, handler,
irq_flags, devname, dev_id);
}
void ibmebus_free_irq(struct ibmebus_dev *dev, u32 ist, void *dev_id)
{
- unsigned int irq = virt_irq_create_mapping(ist);
+ unsigned int irq = irq_find_mapping(NULL, ist);
- irq = irq_offset_up(irq);
free_irq(irq, dev_id);
-
- return;
}
EXPORT_SYMBOL(ibmebus_free_irq);
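With virt_irq_create_mapping() and irq_offset_up() gone, ibmebus now goes through the generic reverse-mapping layer: the hardware interrupt source is turned into a virtual irq first, and that virq is what gets requested or freed. A minimal sketch of the request path, mirroring ibmebus_request_irq above (the three-argument handler signature of this kernel era is assumed, and example_request_irq is not part of the patch):

/* Sketch only: translate the hardware interrupt source into a virq
 * through the generic mapping layer (NULL selects the default host),
 * then request it. */
static int example_request_irq(u32 ist,
                irqreturn_t (*handler)(int, void *, struct pt_regs *),
                unsigned long flags, const char *name, void *dev_id)
{
        unsigned int irq = irq_create_mapping(NULL, ist, 0);

        if (irq == NO_IRQ)
                return -EINVAL;
        return request_irq(irq, handler, flags, name, dev_id);
}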
* to reduce code space and undefined function references.
*/
+#undef DEBUG
+
#include <linux/module.h>
#include <linux/threads.h>
#include <linux/kernel_stat.h>
#include <linux/cpumask.h>
#include <linux/profile.h>
#include <linux/bitops.h>
-#include <linux/pci.h>
+#include <linux/list.h>
+#include <linux/radix-tree.h>
+#include <linux/mutex.h>
+#include <linux/bootmem.h>
#include <asm/uaccess.h>
#include <asm/system.h>
#include <asm/prom.h>
#include <asm/ptrace.h>
#include <asm/machdep.h>
+#include <asm/udbg.h>
#ifdef CONFIG_PPC_ISERIES
#include <asm/paca.h>
#endif
int __irq_offset_value;
-#ifdef CONFIG_PPC32
-EXPORT_SYMBOL(__irq_offset_value);
-#endif
-
static int ppc_spurious_interrupts;
#ifdef CONFIG_PPC32
-#define NR_MASK_WORDS ((NR_IRQS + 31) / 32)
+EXPORT_SYMBOL(__irq_offset_value);
+atomic_t ppc_n_lost_interrupts;
+#ifndef CONFIG_PPC_MERGE
+#define NR_MASK_WORDS ((NR_IRQS + 31) / 32)
unsigned long ppc_cached_irq_mask[NR_MASK_WORDS];
-atomic_t ppc_n_lost_interrupts;
+#endif
#ifdef CONFIG_TAU_INT
extern int tau_initialized;
extern int tau_interrupts(int);
#endif
+#endif /* CONFIG_PPC32 */
#if defined(CONFIG_SMP) && !defined(CONFIG_PPC_MERGE)
extern atomic_t ipi_recv;
extern atomic_t ipi_sent;
#endif
-#endif /* CONFIG_PPC32 */
#ifdef CONFIG_PPC64
EXPORT_SYMBOL(irq_desc);
int distribute_irqs = 1;
-u64 ppc64_interrupt_controller;
#endif /* CONFIG_PPC64 */
int show_interrupts(struct seq_file *p, void *v)
void do_IRQ(struct pt_regs *regs)
{
- int irq;
+ unsigned int irq;
#ifdef CONFIG_IRQSTACKS
struct thread_info *curtp, *irqtp;
#endif
*/
irq = ppc_md.get_irq(regs);
- if (irq >= 0) {
+ if (irq != NO_IRQ && irq != NO_IRQ_IGNORE) {
#ifdef CONFIG_IRQSTACKS
/* Switch to the irq stack to handle this */
curtp = current_thread_info();
irqtp = hardirq_ctx[smp_processor_id()];
if (curtp != irqtp) {
+ struct irq_desc *desc = irq_desc + irq;
+ void *handler = desc->handle_irq;
+ if (handler == NULL)
+ handler = &__do_IRQ;
irqtp->task = curtp->task;
irqtp->flags = 0;
- call___do_IRQ(irq, regs, irqtp);
+ call_handle_irq(irq, desc, regs, irqtp, handler);
irqtp->task = NULL;
if (irqtp->flags)
set_bits(irqtp->flags, &curtp->flags);
} else
#endif
- __do_IRQ(irq, regs);
- } else if (irq != -2)
+ generic_handle_irq(irq, regs);
+ } else if (irq != NO_IRQ_IGNORE)
/* That's not SMP safe ... but who cares ? */
ppc_spurious_interrupts++;
void __init init_IRQ(void)
{
+ ppc_md.init_IRQ();
#ifdef CONFIG_PPC64
- static int once = 0;
+ irq_ctx_init();
+#endif
+}
+
+
+#ifdef CONFIG_IRQSTACKS
+struct thread_info *softirq_ctx[NR_CPUS] __read_mostly;
+struct thread_info *hardirq_ctx[NR_CPUS] __read_mostly;
+
+void irq_ctx_init(void)
+{
+ struct thread_info *tp;
+ int i;
+
+ for_each_possible_cpu(i) {
+ memset((void *)softirq_ctx[i], 0, THREAD_SIZE);
+ tp = softirq_ctx[i];
+ tp->cpu = i;
+ tp->preempt_count = SOFTIRQ_OFFSET;
+
+ memset((void *)hardirq_ctx[i], 0, THREAD_SIZE);
+ tp = hardirq_ctx[i];
+ tp->cpu = i;
+ tp->preempt_count = HARDIRQ_OFFSET;
+ }
+}
+
+static inline void do_softirq_onstack(void)
+{
+ struct thread_info *curtp, *irqtp;
+
+ curtp = current_thread_info();
+ irqtp = softirq_ctx[smp_processor_id()];
+ irqtp->task = curtp->task;
+ call_do_softirq(irqtp);
+ irqtp->task = NULL;
+}
- if (once)
+#else
+#define do_softirq_onstack() __do_softirq()
+#endif /* CONFIG_IRQSTACKS */
+
+void do_softirq(void)
+{
+ unsigned long flags;
+
+ if (in_interrupt())
return;
- once++;
+ local_irq_save(flags);
-#endif
- ppc_md.init_IRQ();
-#ifdef CONFIG_PPC64
- irq_ctx_init();
-#endif
+ if (local_softirq_pending())
+ do_softirq_onstack();
+
+ local_irq_restore(flags);
}
+EXPORT_SYMBOL(do_softirq);
+
-#ifdef CONFIG_PPC64
/*
- * Virtual IRQ mapping code, used on systems with XICS interrupt controllers.
+ * IRQ controller and virtual interrupts
*/
-#define UNDEFINED_IRQ 0xffffffff
-unsigned int virt_irq_to_real_map[NR_IRQS];
+#ifdef CONFIG_PPC_MERGE
-/*
- * Don't use virtual irqs 0, 1, 2 for devices.
- * The pcnet32 driver considers interrupt numbers < 2 to be invalid,
- * and 2 is the XICS IPI interrupt.
- * We limit virtual irqs to __irq_offet_value less than virt_irq_max so
- * that when we offset them we don't end up with an interrupt
- * number >= virt_irq_max.
- */
-#define MIN_VIRT_IRQ 3
+static LIST_HEAD(irq_hosts);
+static spinlock_t irq_big_lock = SPIN_LOCK_UNLOCKED;
-unsigned int virt_irq_max;
-static unsigned int max_virt_irq;
-static unsigned int nr_virt_irqs;
+struct irq_map_entry irq_map[NR_IRQS];
+static unsigned int irq_virq_count = NR_IRQS;
+static struct irq_host *irq_default_host;
-void
-virt_irq_init(void)
+struct irq_host *irq_alloc_host(unsigned int revmap_type,
+ unsigned int revmap_arg,
+ struct irq_host_ops *ops,
+ irq_hw_number_t inval_irq)
{
- int i;
+ struct irq_host *host;
+ unsigned int size = sizeof(struct irq_host);
+ unsigned int i;
+ unsigned int *rmap;
+ unsigned long flags;
- if ((virt_irq_max == 0) || (virt_irq_max > (NR_IRQS - 1)))
- virt_irq_max = NR_IRQS - 1;
- max_virt_irq = virt_irq_max - __irq_offset_value;
- nr_virt_irqs = max_virt_irq - MIN_VIRT_IRQ + 1;
+ /* Allocate structure and revmap table if using linear mapping */
+ if (revmap_type == IRQ_HOST_MAP_LINEAR)
+ size += revmap_arg * sizeof(unsigned int);
+ if (mem_init_done)
+ host = kzalloc(size, GFP_KERNEL);
+ else {
+ host = alloc_bootmem(size);
+ if (host)
+ memset(host, 0, size);
+ }
+ if (host == NULL)
+ return NULL;
- for (i = 0; i < NR_IRQS; i++)
- virt_irq_to_real_map[i] = UNDEFINED_IRQ;
+ /* Fill structure */
+ host->revmap_type = revmap_type;
+ host->inval_irq = inval_irq;
+ host->ops = ops;
+
+ spin_lock_irqsave(&irq_big_lock, flags);
+
+ /* If it's a legacy controller, check for duplicates and
+ * mark it as allocated (we use the irq 0 host pointer for that)
+ */
+ if (revmap_type == IRQ_HOST_MAP_LEGACY) {
+ if (irq_map[0].host != NULL) {
+ spin_unlock_irqrestore(&irq_big_lock, flags);
+ /* If we are in early boot, we can't free the structure,
+ * too bad...
+ * this will be fixed once slab is made available early
+ * instead of the current cruft
+ */
+ if (mem_init_done)
+ kfree(host);
+ return NULL;
+ }
+ irq_map[0].host = host;
+ }
+
+ list_add(&host->link, &irq_hosts);
+ spin_unlock_irqrestore(&irq_big_lock, flags);
+
+ /* Additional setups per revmap type */
+ switch(revmap_type) {
+ case IRQ_HOST_MAP_LEGACY:
+ /* 0 is always the invalid number for legacy */
+ host->inval_irq = 0;
+ /* setup us as the host for all legacy interrupts */
+ for (i = 1; i < NUM_ISA_INTERRUPTS; i++) {
+ irq_map[i].hwirq = 0;
+ smp_wmb();
+ irq_map[i].host = host;
+ smp_wmb();
+
+ /* Clear some flags */
+ get_irq_desc(i)->status
+ &= ~(IRQ_NOREQUEST | IRQ_LEVEL);
+
+ /* Legacy flags are left to default at this point,
+ * one can then use irq_create_mapping() to
+ * explicitly change them
+ */
+ ops->map(host, i, i, 0);
+ }
+ break;
+ case IRQ_HOST_MAP_LINEAR:
+ rmap = (unsigned int *)(host + 1);
+ for (i = 0; i < revmap_arg; i++)
+ rmap[i] = IRQ_NONE;
+ host->revmap_data.linear.size = revmap_arg;
+ smp_wmb();
+ host->revmap_data.linear.revmap = rmap;
+ break;
+ default:
+ break;
+ }
+
+ pr_debug("irq: Allocated host of type %d @0x%p\n", revmap_type, host);
+
+ return host;
}
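irq_alloc_host() is how an interrupt controller driver registers itself with the new remapping layer: it picks a reverse-map type (legacy, linear, or tree), supplies its irq_host_ops, and names the invalid hardware number. A minimal, hypothetical usage sketch (the mypic_* names and the 64-entry linear map are assumptions, not part of the patch):

/* Hypothetical PIC setup built on the allocator above: a linear
 * reverse map with 64 hardware sources, hwirq 0 marked invalid. */
static int mypic_map(struct irq_host *h, unsigned int virq,
                     irq_hw_number_t hw, unsigned int flags)
{
        /* a real driver would install its chip/handler here, e.g.
         * set_irq_chip_and_handler(virq, &mypic_chip, handle_level_irq); */
        return 0;
}

static struct irq_host_ops mypic_ops = {
        .map = mypic_map,
};

static void __init mypic_init_irq(void)
{
        struct irq_host *host;

        host = irq_alloc_host(IRQ_HOST_MAP_LINEAR, 64, &mypic_ops, 0);
        if (host == NULL)
                return;
        irq_set_default_host(host);
}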
-/* Create a mapping for a real_irq if it doesn't already exist.
- * Return the virtual irq as a convenience.
- */
-int virt_irq_create_mapping(unsigned int real_irq)
+struct irq_host *irq_find_host(struct device_node *node)
{
- unsigned int virq, first_virq;
- static int warned;
+ struct irq_host *h, *found = NULL;
+ unsigned long flags;
+
+ /* We might want to match the legacy controller last since
+ * it might potentially be set to match all interrupts in
+ * the absence of a device node. This isn't a problem so far,
+ * though...
+ */
+ spin_lock_irqsave(&irq_big_lock, flags);
+ list_for_each_entry(h, &irq_hosts, link)
+ if (h->ops->match == NULL || h->ops->match(h, node)) {
+ found = h;
+ break;
+ }
+ spin_unlock_irqrestore(&irq_big_lock, flags);
+ return found;
+}
+EXPORT_SYMBOL_GPL(irq_find_host);
+
+void irq_set_default_host(struct irq_host *host)
+{
+ pr_debug("irq: Default host set to @0x%p\n", host);
+
+ irq_default_host = host;
+}
- if (ppc64_interrupt_controller == IC_OPEN_PIC)
- return real_irq; /* no mapping for openpic (for now) */
+void irq_set_virq_count(unsigned int count)
+{
+ pr_debug("irq: Trying to set virq count to %d\n", count);
- if (ppc64_interrupt_controller == IC_CELL_PIC)
- return real_irq; /* no mapping for iic either */
+ BUG_ON(count < NUM_ISA_INTERRUPTS);
+ if (count < NR_IRQS)
+ irq_virq_count = count;
+}
- /* don't map interrupts < MIN_VIRT_IRQ */
- if (real_irq < MIN_VIRT_IRQ) {
- virt_irq_to_real_map[real_irq] = real_irq;
- return real_irq;
+unsigned int irq_create_mapping(struct irq_host *host,
+ irq_hw_number_t hwirq,
+ unsigned int flags)
+{
+ unsigned int virq, hint;
+
+ pr_debug("irq: irq_create_mapping(0x%p, 0x%lx, 0x%x)\n",
+ host, hwirq, flags);
+
+ /* Look for default host if necessary */
+ if (host == NULL)
+ host = irq_default_host;
+ if (host == NULL) {
+ printk(KERN_WARNING "irq_create_mapping called for"
+ " NULL host, hwirq=%lx\n", hwirq);
+ WARN_ON(1);
+ return NO_IRQ;
}
+ pr_debug("irq: -> using host @%p\n", host);
- /* map to a number between MIN_VIRT_IRQ and max_virt_irq */
- virq = real_irq;
- if (virq > max_virt_irq)
- virq = (virq % nr_virt_irqs) + MIN_VIRT_IRQ;
-
- /* search for this number or a free slot */
- first_virq = virq;
- while (virt_irq_to_real_map[virq] != UNDEFINED_IRQ) {
- if (virt_irq_to_real_map[virq] == real_irq)
- return virq;
- if (++virq > max_virt_irq)
- virq = MIN_VIRT_IRQ;
- if (virq == first_virq)
- goto nospace; /* oops, no free slots */
+ /* Check if mapping already exist, if it does, call
+ * host->ops->map() to update the flags
+ */
+ virq = irq_find_mapping(host, hwirq);
+ if (virq != IRQ_NONE) {
+ pr_debug("irq: -> existing mapping on virq %d\n", virq);
+ host->ops->map(host, virq, hwirq, flags);
+ return virq;
+ }
+
+ /* Get a virtual interrupt number */
+ if (host->revmap_type == IRQ_HOST_MAP_LEGACY) {
+ /* Handle legacy */
+ virq = (unsigned int)hwirq;
+ if (virq == 0 || virq >= NUM_ISA_INTERRUPTS)
+ return NO_IRQ;
+ return virq;
+ } else {
+ /* Allocate a virtual interrupt number */
+ hint = hwirq % irq_virq_count;
+ virq = irq_alloc_virt(host, 1, hint);
+ if (virq == NO_IRQ) {
+ pr_debug("irq: -> virq allocation failed\n");
+ return NO_IRQ;
+ }
}
+ pr_debug("irq: -> obtained virq %d\n", virq);
- virt_irq_to_real_map[virq] = real_irq;
+ /* Clear some flags */
+ get_irq_desc(virq)->status &= ~(IRQ_NOREQUEST | IRQ_LEVEL);
+
+ /* map it */
+ if (host->ops->map(host, virq, hwirq, flags)) {
+ pr_debug("irq: -> mapping failed, freeing\n");
+ irq_free_virt(virq, 1);
+ return NO_IRQ;
+ }
+ smp_wmb();
+ irq_map[virq].hwirq = hwirq;
+ smp_mb();
return virq;
+}
+EXPORT_SYMBOL_GPL(irq_create_mapping);
- nospace:
- if (!warned) {
- printk(KERN_CRIT "Interrupt table is full\n");
- printk(KERN_CRIT "Increase virt_irq_max (currently %d) "
- "in your kernel sources and rebuild.\n", virt_irq_max);
- warned = 1;
+extern unsigned int irq_create_of_mapping(struct device_node *controller,
+ u32 *intspec, unsigned int intsize)
+{
+ struct irq_host *host;
+ irq_hw_number_t hwirq;
+ unsigned int flags = IRQ_TYPE_NONE;
+
+ if (controller == NULL)
+ host = irq_default_host;
+ else
+ host = irq_find_host(controller);
+ if (host == NULL)
+ return NO_IRQ;
+
+ /* If host has no translation, then we assume interrupt line */
+ if (host->ops->xlate == NULL)
+ hwirq = intspec[0];
+ else {
+ if (host->ops->xlate(host, controller, intspec, intsize,
+ &hwirq, &flags))
+ return NO_IRQ;
}
- return NO_IRQ;
+
+ return irq_create_mapping(host, hwirq, flags);
}
+EXPORT_SYMBOL_GPL(irq_create_of_mapping);
-/*
- * In most cases will get a hit on the very first slot checked in the
- * virt_irq_to_real_map. Only when there are a large number of
- * IRQs will this be expensive.
- */
-unsigned int real_irq_to_virt_slowpath(unsigned int real_irq)
+unsigned int irq_of_parse_and_map(struct device_node *dev, int index)
{
- unsigned int virq;
- unsigned int first_virq;
+ struct of_irq oirq;
- virq = real_irq;
+ if (of_irq_map_one(dev, index, &oirq))
+ return NO_IRQ;
- if (virq > max_virt_irq)
- virq = (virq % nr_virt_irqs) + MIN_VIRT_IRQ;
+ return irq_create_of_mapping(oirq.controller, oirq.specifier,
+ oirq.size);
+}
+EXPORT_SYMBOL_GPL(irq_of_parse_and_map);
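irq_of_parse_and_map() is the piece most bus and driver code will call: it resolves the interrupt specifier for a device node, translates it through the owning host, and returns a ready-to-use virq. A minimal usage sketch (the example_probe_irq wrapper and np are placeholders, not taken from the patch):

/* Sketch: obtain the Linux irq number for a device node using the
 * helpers above; a failed lookup yields NO_IRQ. */
static int example_probe_irq(struct device_node *np)
{
        unsigned int virq = irq_of_parse_and_map(np, 0);

        if (virq == NO_IRQ)
                return -ENODEV;
        return virq;
}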
- first_virq = virq;
+void irq_dispose_mapping(unsigned int virq)
+{
+ struct irq_host *host = irq_map[virq].host;
+ irq_hw_number_t hwirq;
+ unsigned long flags;
- do {
- if (virt_irq_to_real_map[virq] == real_irq)
- return virq;
+ WARN_ON (host == NULL);
+ if (host == NULL)
+ return;
- virq++;
+ /* Never unmap legacy interrupts */
+ if (host->revmap_type == IRQ_HOST_MAP_LEGACY)
+ return;
- if (virq >= max_virt_irq)
- virq = 0;
+ /* remove chip and handler */
+ set_irq_chip_and_handler(virq, NULL, NULL);
+
+ /* Make sure it's completed */
+ synchronize_irq(virq);
+
+ /* Tell the PIC about it */
+ if (host->ops->unmap)
+ host->ops->unmap(host, virq);
+ smp_mb();
+
+ /* Clear reverse map */
+ hwirq = irq_map[virq].hwirq;
+ switch(host->revmap_type) {
+ case IRQ_HOST_MAP_LINEAR:
+ if (hwirq < host->revmap_data.linear.size)
+ host->revmap_data.linear.revmap[hwirq] = IRQ_NONE;
+ break;
+ case IRQ_HOST_MAP_TREE:
+ /* Check if radix tree allocated yet */
+ if (host->revmap_data.tree.gfp_mask == 0)
+ break;
+ /* XXX radix tree not safe ! remove lock when it becomes safe
+ * and use some RCU sync to make sure everything is ok before we
+ * can re-use that map entry
+ */
+ spin_lock_irqsave(&irq_big_lock, flags);
+ radix_tree_delete(&host->revmap_data.tree, hwirq);
+ spin_unlock_irqrestore(&irq_big_lock, flags);
+ break;
+ }
- } while (first_virq != virq);
+ /* Destroy map */
+ smp_mb();
+ irq_map[virq].hwirq = host->inval_irq;
- return NO_IRQ;
+ /* Set some flags */
+ get_irq_desc(virq)->status |= IRQ_NOREQUEST;
+ /* Free it */
+ irq_free_virt(virq, 1);
}
-#endif /* CONFIG_PPC64 */
+EXPORT_SYMBOL_GPL(irq_dispose_mapping);
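
For readers new to this interface, here is a minimal, hypothetical driver-side sketch of the mapping API added above; the xyz_* names are made up and the usual <linux/interrupt.h> / <asm/prom.h> includes are assumed:

    static irqreturn_t xyz_interrupt(int irq, void *dev_id, struct pt_regs *regs)
    {
            /* acknowledge the device here */
            return IRQ_HANDLED;
    }

    static int xyz_probe(struct device_node *np)
    {
            /* translate the "interrupts" property into a Linux virq */
            unsigned int virq = irq_of_parse_and_map(np, 0);

            if (virq == NO_IRQ)
                    return -ENODEV;
            if (request_irq(virq, xyz_interrupt, 0, "xyz", NULL)) {
                    irq_dispose_mapping(virq);
                    return -EBUSY;
            }
            /* teardown is the mirror image:
             * free_irq(virq, NULL); irq_dispose_mapping(virq);
             */
            return 0;
    }
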
-#ifdef CONFIG_IRQSTACKS
-struct thread_info *softirq_ctx[NR_CPUS] __read_mostly;
-struct thread_info *hardirq_ctx[NR_CPUS] __read_mostly;
+unsigned int irq_find_mapping(struct irq_host *host,
+ irq_hw_number_t hwirq)
+{
+ unsigned int i;
+ unsigned int hint = hwirq % irq_virq_count;
+
+ /* Look for default host if necessary */
+ if (host == NULL)
+ host = irq_default_host;
+ if (host == NULL)
+ return NO_IRQ;
+
+ /* legacy -> bail early */
+ if (host->revmap_type == IRQ_HOST_MAP_LEGACY)
+ return hwirq;
+
+ /* Slow path does a linear search of the map */
+ if (hint < NUM_ISA_INTERRUPTS)
+ hint = NUM_ISA_INTERRUPTS;
+ i = hint;
+ do {
+ if (irq_map[i].host == host &&
+ irq_map[i].hwirq == hwirq)
+ return i;
+ i++;
+ if (i >= irq_virq_count)
+ i = NUM_ISA_INTERRUPTS;
+ } while(i != hint);
+ return NO_IRQ;
+}
+EXPORT_SYMBOL_GPL(irq_find_mapping);
-void irq_ctx_init(void)
+
+unsigned int irq_radix_revmap(struct irq_host *host,
+ irq_hw_number_t hwirq)
{
- struct thread_info *tp;
- int i;
+ struct radix_tree_root *tree;
+ struct irq_map_entry *ptr;
+ unsigned int virq;
+ unsigned long flags;
- for_each_possible_cpu(i) {
- memset((void *)softirq_ctx[i], 0, THREAD_SIZE);
- tp = softirq_ctx[i];
- tp->cpu = i;
- tp->preempt_count = SOFTIRQ_OFFSET;
+ WARN_ON(host->revmap_type != IRQ_HOST_MAP_TREE);
- memset((void *)hardirq_ctx[i], 0, THREAD_SIZE);
- tp = hardirq_ctx[i];
- tp->cpu = i;
- tp->preempt_count = HARDIRQ_OFFSET;
+ /* Check if the radix tree exists yet. We test the value of
+ * the gfp_mask for that. Sneaky but saves another int in the
+ * structure. If not, we fall back to the slow path
+ */
+ tree = &host->revmap_data.tree;
+ if (tree->gfp_mask == 0)
+ return irq_find_mapping(host, hwirq);
+
+ /* XXX Current radix trees are NOT SMP safe !!! Remove that lock
+ * when that is fixed (when Nick's patch gets in)
+ */
+ spin_lock_irqsave(&irq_big_lock, flags);
+
+ /* Now try to resolve */
+ ptr = radix_tree_lookup(tree, hwirq);
+ /* Found it, return */
+ if (ptr) {
+ virq = ptr - irq_map;
+ goto bail;
}
+
+ /* If not there, try to insert it */
+ virq = irq_find_mapping(host, hwirq);
+ if (virq != NO_IRQ)
+ radix_tree_insert(tree, virq, &irq_map[virq]);
+ bail:
+ spin_unlock_irqrestore(&irq_big_lock, flags);
+ return virq;
}
-static inline void do_softirq_onstack(void)
+unsigned int irq_linear_revmap(struct irq_host *host,
+ irq_hw_number_t hwirq)
{
- struct thread_info *curtp, *irqtp;
+ unsigned int *revmap;
- curtp = current_thread_info();
- irqtp = softirq_ctx[smp_processor_id()];
- irqtp->task = curtp->task;
- call_do_softirq(irqtp);
- irqtp->task = NULL;
+ WARN_ON(host->revmap_type != IRQ_HOST_MAP_LINEAR);
+
+ /* Check revmap bounds */
+ if (unlikely(hwirq >= host->revmap_data.linear.size))
+ return irq_find_mapping(host, hwirq);
+
+ /* Check if revmap was allocated */
+ revmap = host->revmap_data.linear.revmap;
+ if (unlikely(revmap == NULL))
+ return irq_find_mapping(host, hwirq);
+
+ /* Fill up revmap with slow path if no mapping found */
+ if (unlikely(revmap[hwirq] == NO_IRQ))
+ revmap[hwirq] = irq_find_mapping(host, hwirq);
+
+ return revmap[hwirq];
}
-#else
-#define do_softirq_onstack() __do_softirq()
-#endif /* CONFIG_IRQSTACKS */
+unsigned int irq_alloc_virt(struct irq_host *host,
+ unsigned int count,
+ unsigned int hint)
+{
+ unsigned long flags;
+ unsigned int i, j, found = NO_IRQ;
+ unsigned int limit = irq_virq_count - count;
-void do_softirq(void)
+ if (count == 0 || count > (irq_virq_count - NUM_ISA_INTERRUPTS))
+ return NO_IRQ;
+
+ spin_lock_irqsave(&irq_big_lock, flags);
+
+ /* Use hint for 1 interrupt if any */
+ if (count == 1 && hint >= NUM_ISA_INTERRUPTS &&
+ hint < irq_virq_count && irq_map[hint].host == NULL) {
+ found = hint;
+ goto hint_found;
+ }
+
+ /* Look for count consecutive numbers in the allocatable
+ * (non-legacy) space
+ */
+ for (i = NUM_ISA_INTERRUPTS; i <= limit; ) {
+ for (j = i; j < (i + count); j++)
+ if (irq_map[j].host != NULL) {
+ i = j + 1;
+ continue;
+ }
+ found = i;
+ break;
+ }
+ if (found == NO_IRQ) {
+ spin_unlock_irqrestore(&irq_big_lock, flags);
+ return NO_IRQ;
+ }
+ hint_found:
+ for (i = found; i < (found + count); i++) {
+ irq_map[i].hwirq = host->inval_irq;
+ smp_wmb();
+ irq_map[i].host = host;
+ }
+ spin_unlock_irqrestore(&irq_big_lock, flags);
+ return found;
+}
+
+void irq_free_virt(unsigned int virq, unsigned int count)
{
unsigned long flags;
+ unsigned int i;
- if (in_interrupt())
- return;
+ WARN_ON (virq < NUM_ISA_INTERRUPTS);
+ WARN_ON (count == 0 || (virq + count) > irq_virq_count);
- local_irq_save(flags);
+ spin_lock_irqsave(&irq_big_lock, flags);
+ for (i = virq; i < (virq + count); i++) {
+ struct irq_host *host;
- if (local_softirq_pending()) {
- account_system_vtime(current);
- local_bh_disable();
- do_softirq_onstack();
- account_system_vtime(current);
- __local_bh_enable();
+ if (i < NUM_ISA_INTERRUPTS ||
+ (virq + count) > irq_virq_count)
+ continue;
+
+ host = irq_map[i].host;
+ irq_map[i].hwirq = host->inval_irq;
+ smp_wmb();
+ irq_map[i].host = NULL;
}
+ spin_unlock_irqrestore(&irq_big_lock, flags);
+}
- local_irq_restore(flags);
+void irq_early_init(void)
+{
+ unsigned int i;
+
+ for (i = 0; i < NR_IRQS; i++)
+ get_irq_desc(i)->status |= IRQ_NOREQUEST;
}
-EXPORT_SYMBOL(do_softirq);
+
+/* We need to create the radix trees late */
+static int irq_late_init(void)
+{
+ struct irq_host *h;
+ unsigned long flags;
+
+ spin_lock_irqsave(&irq_big_lock, flags);
+ list_for_each_entry(h, &irq_hosts, link) {
+ if (h->revmap_type == IRQ_HOST_MAP_TREE)
+ INIT_RADIX_TREE(&h->revmap_data.tree, GFP_ATOMIC);
+ }
+ spin_unlock_irqrestore(&irq_big_lock, flags);
+
+ return 0;
+}
+arch_initcall(irq_late_init);
+
+#endif /* CONFIG_PPC_MERGE */
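
To illustrate the other side of this interface, a hedged sketch of the irq_host callbacks a board PIC driver would provide follows; mypic_* is a made-up name and handle_level_irq is merely one plausible flow handler (compare the real iic_host_ops further down in this series):

    static struct irq_chip mypic_chip = {
            .typename = " MYPIC ",
            /* .mask/.unmask/.eoi would drive the real hardware */
    };

    static int mypic_host_map(struct irq_host *h, unsigned int virq,
                              irq_hw_number_t hw, unsigned int flags)
    {
            set_irq_chip_and_handler(virq, &mypic_chip, handle_level_irq);
            return 0;
    }

    static int mypic_host_xlate(struct irq_host *h, struct device_node *ct,
                                u32 *intspec, unsigned int intsize,
                                irq_hw_number_t *out_hwirq,
                                unsigned int *out_flags)
    {
            /* one cell = hw number, optional second cell = sense/flags */
            *out_hwirq = intspec[0];
            *out_flags = (intsize > 1) ? intspec[1] : IRQ_TYPE_NONE;
            return 0;
    }

    static struct irq_host_ops mypic_host_ops = {
            .map   = mypic_host_map,
            .xlate = mypic_host_xlate,
    };
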
#ifdef CONFIG_PCI_MSI
int pci_enable_msi(struct pci_dev * pdev)
struct device_node *np;
unsigned int speed;
unsigned int clock;
+ int irq_check_parent;
phys_addr_t taddr;
} legacy_serial_infos[MAX_LEGACY_SERIAL_PORTS];
static unsigned int legacy_serial_count;
static int __init add_legacy_port(struct device_node *np, int want_index,
int iotype, phys_addr_t base,
phys_addr_t taddr, unsigned long irq,
- upf_t flags)
+ upf_t flags, int irq_check_parent)
{
u32 *clk, *spd, clock = BASE_BAUD * 16;
int index;
if (legacy_serial_infos[index].np != 0) {
/* if we still have some room, move it, else override */
if (legacy_serial_count < MAX_LEGACY_SERIAL_PORTS) {
- printk(KERN_INFO "Moved legacy port %d -> %d\n",
+ printk(KERN_DEBUG "Moved legacy port %d -> %d\n",
index, legacy_serial_count);
legacy_serial_ports[legacy_serial_count] =
legacy_serial_ports[index];
legacy_serial_infos[index];
legacy_serial_count++;
} else {
- printk(KERN_INFO "Replacing legacy port %d\n", index);
+ printk(KERN_DEBUG "Replacing legacy port %d\n", index);
}
}
legacy_serial_infos[index].np = of_node_get(np);
legacy_serial_infos[index].clock = clock;
legacy_serial_infos[index].speed = spd ? *spd : 0;
+ legacy_serial_infos[index].irq_check_parent = irq_check_parent;
- printk(KERN_INFO "Found legacy serial port %d for %s\n",
+ printk(KERN_DEBUG "Found legacy serial port %d for %s\n",
index, np->full_name);
- printk(KERN_INFO " %s=%llx, taddr=%llx, irq=%lx, clk=%d, speed=%d\n",
+ printk(KERN_DEBUG " %s=%llx, taddr=%llx, irq=%lx, clk=%d, speed=%d\n",
(iotype == UPIO_PORT) ? "port" : "mem",
(unsigned long long)base, (unsigned long long)taddr, irq,
legacy_serial_ports[index].uartclk,
return -1;
addr = of_translate_address(soc_dev, addrp);
+ if (addr == OF_BAD_ADDR)
+ return -1;
/* Add port, irq will be dealt with later. We passed a translated
* IO port value. It will be fixed up later along with the irq
*/
- return add_legacy_port(np, -1, UPIO_MEM, addr, addr, NO_IRQ, flags);
+ return add_legacy_port(np, -1, UPIO_MEM, addr, addr, NO_IRQ, flags, 0);
}
static int __init add_legacy_isa_port(struct device_node *np,
int index = -1;
phys_addr_t taddr;
+ DBG(" -> add_legacy_isa_port(%s)\n", np->full_name);
+
/* Get the ISA port number */
reg = (u32 *)get_property(np, "reg", NULL);
if (reg == NULL)
/* Translate ISA address */
taddr = of_translate_address(np, reg);
+ if (taddr == OF_BAD_ADDR)
+ return -1;
/* Add port, irq will be dealt with later */
- return add_legacy_port(np, index, UPIO_PORT, reg[1], taddr, NO_IRQ, UPF_BOOT_AUTOCONF);
+ return add_legacy_port(np, index, UPIO_PORT, reg[1], taddr,
+ NO_IRQ, UPF_BOOT_AUTOCONF, 0);
}
unsigned int flags;
int iotype, index = -1, lindex = 0;
+ DBG(" -> add_legacy_pci_port(%s)\n", np->full_name);
+
/* We only support ports that have a clock frequency properly
* encoded in the device-tree (that is have an fcode). Anything
* else can't be used that early and will be normally probed by
/* We only support BAR 0 for now */
iotype = (flags & IORESOURCE_MEM) ? UPIO_MEM : UPIO_PORT;
addr = of_translate_address(pci_dev, addrp);
+ if (addr == OF_BAD_ADDR)
+ return -1;
/* Set the IO base to the same as the translated address for MMIO,
* or to the domain local IO base for PIO (it will be fixed up later)
/* Add port, irq will be dealt with later. We passed a translated
* IO port value. It will be fixed up later along with the irq
*/
- return add_legacy_port(np, index, iotype, base, addr, NO_IRQ, UPF_BOOT_AUTOCONF);
+ return add_legacy_port(np, index, iotype, base, addr, NO_IRQ,
+ UPF_BOOT_AUTOCONF, np != pci_dev);
}
#endif
struct device_node *np,
struct plat_serial8250_port *port)
{
+ unsigned int virq;
+
DBG("fixup_port_irq(%d)\n", index);
- /* Check for interrupts in that node */
- if (np->n_intrs > 0) {
- port->irq = np->intrs[0].line;
- DBG(" port %d (%s), irq=%d\n",
- index, np->full_name, port->irq);
- return;
+ virq = irq_of_parse_and_map(np, 0);
+ if (virq == NO_IRQ && legacy_serial_infos[index].irq_check_parent) {
+ np = of_get_parent(np);
+ if (np == NULL)
+ return;
+ virq = irq_of_parse_and_map(np, 0);
+ of_node_put(np);
}
-
- /* Check for interrupts in the parent */
- np = of_get_parent(np);
- if (np == NULL)
+ if (virq == NO_IRQ)
return;
- if (np->n_intrs > 0) {
- port->irq = np->intrs[0].line;
- DBG(" port %d (%s), irq=%d\n",
- index, np->full_name, port->irq);
- }
- of_node_put(np);
+ port->irq = virq;
}
static void __init fixup_port_pio(int index,
mtlr r0
blr
-_GLOBAL(call___do_IRQ)
+_GLOBAL(call_handle_irq)
+ ld r8,0(r7)
mflr r0
std r0,16(r1)
- stdu r1,THREAD_SIZE-112(r5)
- mr r1,r5
- bl .__do_IRQ
+ mtctr r8
+ stdu r1,THREAD_SIZE-112(r6)
+ mr r1,r6
+ bctrl
ld r1,0(r1)
ld r0,16(r1)
mtlr r0
/* XXX FIXME - update OF device tree node interrupt property */
}
+#ifdef CONFIG_PPC_MERGE
+/* XXX This is a copy of the ppc64 version. This is temporary until we start
+ * merging the 2 PCI layers
+ */
+/*
+ * Reads the interrupt pin to determine if the interrupt is used by the card.
+ * If it is, get the interrupt line from the Open Firmware device tree and
+ * set it in the pci_dev and in the PCI_INTERRUPT_LINE config register.
+ */
+int pci_read_irq_line(struct pci_dev *pci_dev)
+{
+ struct of_irq oirq;
+ unsigned int virq;
+
+ DBG("Try to map irq for %s...\n", pci_name(pci_dev));
+
+ if (of_irq_map_pci(pci_dev, &oirq)) {
+ DBG(" -> failed !\n");
+ return -1;
+ }
+
+ DBG(" -> got one, spec %d cells (0x%08x...) on %s\n",
+ oirq.size, oirq.specifier[0], oirq.controller->full_name);
+
+ virq = irq_create_of_mapping(oirq.controller, oirq.specifier, oirq.size);
+ if (virq == NO_IRQ) {
+ DBG(" -> failed to map !\n");
+ return -1;
+ }
+ pci_dev->irq = virq;
+ pci_write_config_byte(pci_dev, PCI_INTERRUPT_LINE, virq);
+
+ return 0;
+}
+EXPORT_SYMBOL(pci_read_irq_line);
+#endif /* CONFIG_PPC_MERGE */
+
int pcibios_enable_device(struct pci_dev *dev, int mask)
{
u16 cmd, old_cmd;
} else {
dev->hdr_type = PCI_HEADER_TYPE_NORMAL;
dev->rom_base_reg = PCI_ROM_ADDRESS;
+ /* Maybe do a default OF mapping here */
dev->irq = NO_IRQ;
- if (node->n_intrs > 0) {
- dev->irq = node->intrs[0].line;
- pci_write_config_byte(dev, PCI_INTERRUPT_LINE,
- dev->irq);
- }
}
pci_parse_of_addrs(node, dev);
*/
int pci_read_irq_line(struct pci_dev *pci_dev)
{
- u8 intpin;
- struct device_node *node;
-
- pci_read_config_byte(pci_dev, PCI_INTERRUPT_PIN, &intpin);
- if (intpin == 0)
- return 0;
+ struct of_irq oirq;
+ unsigned int virq;
- node = pci_device_to_OF_node(pci_dev);
- if (node == NULL)
- return -1;
+ DBG("Try to map irq for %s...\n", pci_name(pci_dev));
- if (node->n_intrs == 0)
+ if (of_irq_map_pci(pci_dev, &oirq)) {
+ DBG(" -> failed !\n");
return -1;
+ }
- pci_dev->irq = node->intrs[0].line;
+ DBG(" -> got one, spec %d cells (0x%08x...) on %s\n",
+ oirq.size, oirq.specifier[0], oirq.controller->full_name);
- pci_write_config_byte(pci_dev, PCI_INTERRUPT_LINE, pci_dev->irq);
+ virq = irq_create_of_mapping(oirq.controller, oirq.specifier, oirq.size);
+ if (virq == NO_IRQ) {
+ DBG(" -> failed to map !\n");
+ return -1;
+ }
+ pci_dev->irq = virq;
+ pci_write_config_byte(pci_dev, PCI_INTERRUPT_LINE, virq);
return 0;
}
#include <linux/module.h>
#include <linux/kexec.h>
#include <linux/debugfs.h>
+#include <linux/irq.h>
#include <asm/prom.h>
#include <asm/rtas.h>
/* export that to outside world */
struct device_node *of_chosen;
-struct device_node *dflt_interrupt_controller;
-int num_interrupt_controllers;
-
-/*
- * Wrapper for allocating memory for various data that needs to be
- * attached to device nodes as they are processed at boot or when
- * added to the device tree later (e.g. DLPAR). At boot there is
- * already a region reserved so we just increment *mem_start by size;
- * otherwise we call kmalloc.
- */
-static void * prom_alloc(unsigned long size, unsigned long *mem_start)
-{
- unsigned long tmp;
-
- if (!mem_start)
- return kmalloc(size, GFP_KERNEL);
-
- tmp = *mem_start;
- *mem_start += size;
- return (void *)tmp;
-}
-
-/*
- * Find the device_node with a given phandle.
- */
-static struct device_node * find_phandle(phandle ph)
-{
- struct device_node *np;
-
- for (np = allnodes; np != 0; np = np->allnext)
- if (np->linux_phandle == ph)
- return np;
- return NULL;
-}
-
-/*
- * Find the interrupt parent of a node.
- */
-static struct device_node * __devinit intr_parent(struct device_node *p)
-{
- phandle *parp;
-
- parp = (phandle *) get_property(p, "interrupt-parent", NULL);
- if (parp == NULL)
- return p->parent;
- p = find_phandle(*parp);
- if (p != NULL)
- return p;
- /*
- * On a powermac booted with BootX, we don't get to know the
- * phandles for any nodes, so find_phandle will return NULL.
- * Fortunately these machines only have one interrupt controller
- * so there isn't in fact any ambiguity. -- paulus
- */
- if (num_interrupt_controllers == 1)
- p = dflt_interrupt_controller;
- return p;
-}
-
-/*
- * Find out the size of each entry of the interrupts property
- * for a node.
- */
-int __devinit prom_n_intr_cells(struct device_node *np)
-{
- struct device_node *p;
- unsigned int *icp;
-
- for (p = np; (p = intr_parent(p)) != NULL; ) {
- icp = (unsigned int *)
- get_property(p, "#interrupt-cells", NULL);
- if (icp != NULL)
- return *icp;
- if (get_property(p, "interrupt-controller", NULL) != NULL
- || get_property(p, "interrupt-map", NULL) != NULL) {
- printk("oops, node %s doesn't have #interrupt-cells\n",
- p->full_name);
- return 1;
- }
- }
-#ifdef DEBUG_IRQ
- printk("prom_n_intr_cells failed for %s\n", np->full_name);
-#endif
- return 1;
-}
-
-/*
- * Map an interrupt from a device up to the platform interrupt
- * descriptor.
- */
-static int __devinit map_interrupt(unsigned int **irq, struct device_node **ictrler,
- struct device_node *np, unsigned int *ints,
- int nintrc)
-{
- struct device_node *p, *ipar;
- unsigned int *imap, *imask, *ip;
- int i, imaplen, match;
- int newintrc = 0, newaddrc = 0;
- unsigned int *reg;
- int naddrc;
-
- reg = (unsigned int *) get_property(np, "reg", NULL);
- naddrc = prom_n_addr_cells(np);
- p = intr_parent(np);
- while (p != NULL) {
- if (get_property(p, "interrupt-controller", NULL) != NULL)
- /* this node is an interrupt controller, stop here */
- break;
- imap = (unsigned int *)
- get_property(p, "interrupt-map", &imaplen);
- if (imap == NULL) {
- p = intr_parent(p);
- continue;
- }
- imask = (unsigned int *)
- get_property(p, "interrupt-map-mask", NULL);
- if (imask == NULL) {
- printk("oops, %s has interrupt-map but no mask\n",
- p->full_name);
- return 0;
- }
- imaplen /= sizeof(unsigned int);
- match = 0;
- ipar = NULL;
- while (imaplen > 0 && !match) {
- /* check the child-interrupt field */
- match = 1;
- for (i = 0; i < naddrc && match; ++i)
- match = ((reg[i] ^ imap[i]) & imask[i]) == 0;
- for (; i < naddrc + nintrc && match; ++i)
- match = ((ints[i-naddrc] ^ imap[i]) & imask[i]) == 0;
- imap += naddrc + nintrc;
- imaplen -= naddrc + nintrc;
- /* grab the interrupt parent */
- ipar = find_phandle((phandle) *imap++);
- --imaplen;
- if (ipar == NULL && num_interrupt_controllers == 1)
- /* cope with BootX not giving us phandles */
- ipar = dflt_interrupt_controller;
- if (ipar == NULL) {
- printk("oops, no int parent %x in map of %s\n",
- imap[-1], p->full_name);
- return 0;
- }
- /* find the parent's # addr and intr cells */
- ip = (unsigned int *)
- get_property(ipar, "#interrupt-cells", NULL);
- if (ip == NULL) {
- printk("oops, no #interrupt-cells on %s\n",
- ipar->full_name);
- return 0;
- }
- newintrc = *ip;
- ip = (unsigned int *)
- get_property(ipar, "#address-cells", NULL);
- newaddrc = (ip == NULL)? 0: *ip;
- imap += newaddrc + newintrc;
- imaplen -= newaddrc + newintrc;
- }
- if (imaplen < 0) {
- printk("oops, error decoding int-map on %s, len=%d\n",
- p->full_name, imaplen);
- return 0;
- }
- if (!match) {
-#ifdef DEBUG_IRQ
- printk("oops, no match in %s int-map for %s\n",
- p->full_name, np->full_name);
-#endif
- return 0;
- }
- p = ipar;
- naddrc = newaddrc;
- nintrc = newintrc;
- ints = imap - nintrc;
- reg = ints - naddrc;
- }
- if (p == NULL) {
-#ifdef DEBUG_IRQ
- printk("hmmm, int tree for %s doesn't have ctrler\n",
- np->full_name);
-#endif
- return 0;
- }
- *irq = ints;
- *ictrler = p;
- return nintrc;
-}
-
-static unsigned char map_isa_senses[4] = {
- IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE,
- IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE,
- IRQ_SENSE_EDGE | IRQ_POLARITY_NEGATIVE,
- IRQ_SENSE_EDGE | IRQ_POLARITY_POSITIVE
-};
-
-static unsigned char map_mpic_senses[4] = {
- IRQ_SENSE_EDGE | IRQ_POLARITY_POSITIVE,
- IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE,
- /* 2 seems to be used for the 8259 cascade... */
- IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE,
- IRQ_SENSE_EDGE | IRQ_POLARITY_NEGATIVE,
-};
-
-static int __devinit finish_node_interrupts(struct device_node *np,
- unsigned long *mem_start,
- int measure_only)
-{
- unsigned int *ints;
- int intlen, intrcells, intrcount;
- int i, j, n, sense;
- unsigned int *irq, virq;
- struct device_node *ic;
- int trace = 0;
-
- //#define TRACE(fmt...) do { if (trace) { printk(fmt); mdelay(1000); } } while(0)
-#define TRACE(fmt...)
-
- if (!strcmp(np->name, "smu-doorbell"))
- trace = 1;
-
- TRACE("Finishing SMU doorbell ! num_interrupt_controllers = %d\n",
- num_interrupt_controllers);
-
- if (num_interrupt_controllers == 0) {
- /*
- * Old machines just have a list of interrupt numbers
- * and no interrupt-controller nodes.
- */
- ints = (unsigned int *) get_property(np, "AAPL,interrupts",
- &intlen);
- /* XXX old interpret_pci_props looked in parent too */
- /* XXX old interpret_macio_props looked for interrupts
- before AAPL,interrupts */
- if (ints == NULL)
- ints = (unsigned int *) get_property(np, "interrupts",
- &intlen);
- if (ints == NULL)
- return 0;
-
- np->n_intrs = intlen / sizeof(unsigned int);
- np->intrs = prom_alloc(np->n_intrs * sizeof(np->intrs[0]),
- mem_start);
- if (!np->intrs)
- return -ENOMEM;
- if (measure_only)
- return 0;
-
- for (i = 0; i < np->n_intrs; ++i) {
- np->intrs[i].line = *ints++;
- np->intrs[i].sense = IRQ_SENSE_LEVEL
- | IRQ_POLARITY_NEGATIVE;
- }
- return 0;
- }
-
- ints = (unsigned int *) get_property(np, "interrupts", &intlen);
- TRACE("ints=%p, intlen=%d\n", ints, intlen);
- if (ints == NULL)
- return 0;
- intrcells = prom_n_intr_cells(np);
- intlen /= intrcells * sizeof(unsigned int);
- TRACE("intrcells=%d, new intlen=%d\n", intrcells, intlen);
- np->intrs = prom_alloc(intlen * sizeof(*(np->intrs)), mem_start);
- if (!np->intrs)
- return -ENOMEM;
-
- if (measure_only)
- return 0;
-
- intrcount = 0;
- for (i = 0; i < intlen; ++i, ints += intrcells) {
- n = map_interrupt(&irq, &ic, np, ints, intrcells);
- TRACE("map, irq=%d, ic=%p, n=%d\n", irq, ic, n);
- if (n <= 0)
- continue;
-
- /* don't map IRQ numbers under a cascaded 8259 controller */
- if (ic && device_is_compatible(ic, "chrp,iic")) {
- np->intrs[intrcount].line = irq[0];
- sense = (n > 1)? (irq[1] & 3): 3;
- np->intrs[intrcount].sense = map_isa_senses[sense];
- } else {
- virq = virt_irq_create_mapping(irq[0]);
- TRACE("virq=%d\n", virq);
-#ifdef CONFIG_PPC64
- if (virq == NO_IRQ) {
- printk(KERN_CRIT "Could not allocate interrupt"
- " number for %s\n", np->full_name);
- continue;
- }
-#endif
- np->intrs[intrcount].line = irq_offset_up(virq);
- sense = (n > 1)? (irq[1] & 3): 1;
-
- /* Apple uses bits in there in a different way, let's
- * only keep the real sense bit on macs
- */
- if (machine_is(powermac))
- sense &= 0x1;
- np->intrs[intrcount].sense = map_mpic_senses[sense];
- }
-
-#ifdef CONFIG_PPC64
- /* We offset irq numbers for the u3 MPIC by 128 in PowerMac */
- if (machine_is(powermac) && ic && ic->parent) {
- char *name = get_property(ic->parent, "name", NULL);
- if (name && !strcmp(name, "u3"))
- np->intrs[intrcount].line += 128;
- else if (!(name && (!strcmp(name, "mac-io") ||
- !strcmp(name, "u4"))))
- /* ignore other cascaded controllers, such as
- the k2-sata-root */
- break;
- }
-#endif /* CONFIG_PPC64 */
- if (n > 2) {
- printk("hmmm, got %d intr cells for %s:", n,
- np->full_name);
- for (j = 0; j < n; ++j)
- printk(" %d", irq[j]);
- printk("\n");
- }
- ++intrcount;
- }
- np->n_intrs = intrcount;
-
- return 0;
-}
-
-static int __devinit finish_node(struct device_node *np,
- unsigned long *mem_start,
- int measure_only)
-{
- struct device_node *child;
- int rc = 0;
-
- rc = finish_node_interrupts(np, mem_start, measure_only);
- if (rc)
- goto out;
-
- for (child = np->child; child != NULL; child = child->sibling) {
- rc = finish_node(child, mem_start, measure_only);
- if (rc)
- goto out;
- }
-out:
- return rc;
-}
-
-static void __init scan_interrupt_controllers(void)
-{
- struct device_node *np;
- int n = 0;
- char *name, *ic;
- int iclen;
-
- for (np = allnodes; np != NULL; np = np->allnext) {
- ic = get_property(np, "interrupt-controller", &iclen);
- name = get_property(np, "name", NULL);
- /* checking iclen makes sure we don't get a false
- match on /chosen.interrupt_controller */
- if ((name != NULL
- && strcmp(name, "interrupt-controller") == 0)
- || (ic != NULL && iclen == 0
- && strcmp(name, "AppleKiwi"))) {
- if (n == 0)
- dflt_interrupt_controller = np;
- ++n;
- }
- }
- num_interrupt_controllers = n;
-}
-
-/**
- * finish_device_tree is called once things are running normally
- * (i.e. with text and data mapped to the address they were linked at).
- * It traverses the device tree and fills in some of the additional,
- * fields in each node like {n_}addrs and {n_}intrs, the virt interrupt
- * mapping is also initialized at this point.
- */
-void __init finish_device_tree(void)
-{
- unsigned long start, end, size = 0;
-
- DBG(" -> finish_device_tree\n");
-
-#ifdef CONFIG_PPC64
- /* Initialize virtual IRQ map */
- virt_irq_init();
-#endif
- scan_interrupt_controllers();
-
- /*
- * Finish device-tree (pre-parsing some properties etc...)
- * We do this in 2 passes. One with "measure_only" set, which
- * will only measure the amount of memory needed, then we can
- * allocate that memory, and call finish_node again. However,
- * we must be careful as most routines will fail nowadays when
- * prom_alloc() returns 0, so we must make sure our first pass
- * doesn't start at 0. We pre-initialize size to 16 for that
- * reason and then remove those additional 16 bytes
- */
- size = 16;
- finish_node(allnodes, &size, 1);
- size -= 16;
-
- if (0 == size)
- end = start = 0;
- else
- end = start = (unsigned long)__va(lmb_alloc(size, 128));
-
- finish_node(allnodes, &end, 0);
- BUG_ON(end != start + size);
-
- DBG(" <- finish_device_tree\n");
-}
-
static inline char *find_flat_dt_string(u32 offset)
{
return ((char *)initial_boot_params) +
}
EXPORT_SYMBOL(prom_n_size_cells);
-/**
- * Work out the sense (active-low level / active-high edge)
- * of each interrupt from the device tree.
- */
-void __init prom_get_irq_senses(unsigned char *senses, int off, int max)
-{
- struct device_node *np;
- int i, j;
-
- /* default to level-triggered */
- memset(senses, IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE, max - off);
-
- for (np = allnodes; np != 0; np = np->allnext) {
- for (j = 0; j < np->n_intrs; j++) {
- i = np->intrs[j].line;
- if (i >= off && i < max)
- senses[i-off] = np->intrs[j].sense;
- }
- }
-}
-
/**
* Construct and return a list of the device_nodes with a given name.
*/
node->deadprops = NULL;
}
}
- kfree(node->intrs);
kfree(node->full_name);
kfree(node->data);
kfree(node);
#ifdef CONFIG_PPC_PSERIES
/*
* Fix up the uninitialized fields in a new device node:
- * name, type, n_addrs, addrs, n_intrs, intrs, and pci-specific fields
- *
- * A lot of boot-time code is duplicated here, because functions such
- * as finish_node_interrupts, interpret_pci_props, etc. cannot use the
- * slab allocator.
- *
- * This should probably be split up into smaller chunks.
+ * name, type and pci-specific fields
*/
static int of_finish_dynamic_node(struct device_node *node)
switch (action) {
case PSERIES_RECONFIG_ADD:
err = of_finish_dynamic_node(node);
- if (!err)
- finish_node(node, NULL, 0);
if (err < 0) {
printk(KERN_ERR "finish_node returned %d\n", err);
err = NOTIFY_BAD;
* Find a property with a given name for a given node
* and return the value.
*/
-unsigned char *get_property(struct device_node *np, const char *name,
- int *lenp)
+void *get_property(struct device_node *np, const char *name, int *lenp)
{
struct property *pp = of_find_property(np,name,lenp);
return pp ? pp->value : NULL;
static void __init fixup_device_tree_maple(void)
{
phandle isa;
+ u32 rloc = 0x01002000; /* IO space; PCI device = 4 */
u32 isa_ranges[6];
-
- isa = call_prom("finddevice", 1, 1, ADDR("/ht@0/isa@4"));
+ char *name;
+
+ name = "/ht@0/isa@4";
+ isa = call_prom("finddevice", 1, 1, ADDR(name));
+ if (!PHANDLE_VALID(isa)) {
+ name = "/ht@0/isa@6";
+ isa = call_prom("finddevice", 1, 1, ADDR(name));
+ rloc = 0x01003000; /* IO space; PCI device = 6 */
+ }
if (!PHANDLE_VALID(isa))
return;
+ if (prom_getproplen(isa, "ranges") != 12)
+ return;
if (prom_getprop(isa, "ranges", isa_ranges, sizeof(isa_ranges))
== PROM_ERROR)
return;
isa_ranges[2] != 0x00010000)
return;
- prom_printf("fixing up bogus ISA range on Maple...\n");
+ prom_printf("Fixing up bogus ISA range on Maple/Apache...\n");
isa_ranges[0] = 0x1;
isa_ranges[1] = 0x0;
- isa_ranges[2] = 0x01002000; /* IO space; PCI device = 4 */
+ isa_ranges[2] = rloc;
isa_ranges[3] = 0x0;
isa_ranges[4] = 0x0;
isa_ranges[5] = 0x00010000;
- prom_setprop(isa, "/ht@0/isa@4", "ranges",
+ prom_setprop(isa, name, "ranges",
isa_ranges, sizeof(isa_ranges));
}
#else
static void of_dump_addr(const char *s, u32 *addr, int na) { }
#endif
-/* Read a big address */
-static inline u64 of_read_addr(u32 *cell, int size)
-{
- u64 r = 0;
- while (size--)
- r = (r << 32) | *(cell++);
- return r;
-}
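
The of_read_addr() helper removed here is superseded in the hunks below by the generic of_read_number(), which concatenates big-endian 32-bit cells into a u64; for example, reading the two cells { 0x00000001, 0x80000000 } with of_read_number() yields 0x180000000.
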
/* Callbacks for bus specific translators */
struct of_bus {
{
u64 cp, s, da;
- cp = of_read_addr(range, na);
- s = of_read_addr(range + na + pna, ns);
- da = of_read_addr(addr, na);
+ cp = of_read_number(range, na);
+ s = of_read_number(range + na + pna, ns);
+ da = of_read_number(addr, na);
DBG("OF: default map, cp="PRu64", s="PRu64", da="PRu64"\n",
cp, s, da);
static int of_bus_default_translate(u32 *addr, u64 offset, int na)
{
- u64 a = of_read_addr(addr, na);
+ u64 a = of_read_number(addr, na);
memset(addr, 0, na * 4);
a += offset;
if (na > 1)
return OF_BAD_ADDR;
/* Read address values, skipping high cell */
- cp = of_read_addr(range + 1, na - 1);
- s = of_read_addr(range + na + pna, ns);
- da = of_read_addr(addr + 1, na - 1);
+ cp = of_read_number(range + 1, na - 1);
+ s = of_read_number(range + na + pna, ns);
+ da = of_read_number(addr + 1, na - 1);
DBG("OF: PCI map, cp="PRu64", s="PRu64", da="PRu64"\n", cp, s, da);
return OF_BAD_ADDR;
/* Read address values, skipping high cell */
- cp = of_read_addr(range + 1, na - 1);
- s = of_read_addr(range + na + pna, ns);
- da = of_read_addr(addr + 1, na - 1);
+ cp = of_read_number(range + 1, na - 1);
+ s = of_read_number(range + na + pna, ns);
+ da = of_read_number(addr + 1, na - 1);
DBG("OF: ISA map, cp="PRu64", s="PRu64", da="PRu64"\n", cp, s, da);
*/
ranges = (u32 *)get_property(parent, "ranges", &rlen);
if (ranges == NULL || rlen == 0) {
- offset = of_read_addr(addr, na);
+ offset = of_read_number(addr, na);
memset(addr, 0, pna * 4);
DBG("OF: no ranges, 1:1 translation\n");
goto finish;
/* If root, we have finished */
if (parent == NULL) {
DBG("OF: reached root node\n");
- result = of_read_addr(addr, na);
+ result = of_read_number(addr, na);
break;
}
for (i = 0; psize >= onesize; psize -= onesize, prop += onesize, i++)
if (i == index) {
if (size)
- *size = of_read_addr(prop + na, ns);
+ *size = of_read_number(prop + na, ns);
if (flags)
*flags = bus->get_flags(prop);
return prop;
for (i = 0; psize >= onesize; psize -= onesize, prop += onesize, i++)
if ((prop[0] & 0xff) == ((bar_no * 4) + PCI_BASE_ADDRESS_0)) {
if (size)
- *size = of_read_addr(prop + na, ns);
+ *size = of_read_number(prop + na, ns);
if (flags)
*flags = bus->get_flags(prop);
return prop;
prop = get_property(dn, "#address-cells", NULL);
cells = prop ? *(u32 *)prop : prom_n_addr_cells(dn);
- *phys = of_read_addr(dma_window, cells);
+ *phys = of_read_number(dma_window, cells);
dma_window += cells;
prop = get_property(dn, "ibm,#dma-size-cells", NULL);
cells = prop ? *(u32 *)prop : prom_n_size_cells(dn);
- *size = of_read_addr(dma_window, cells);
+ *size = of_read_number(dma_window, cells);
+}
+
+/*
+ * Interrupt remapper
+ */
+
+static unsigned int of_irq_workarounds;
+static struct device_node *of_irq_dflt_pic;
+
+static struct device_node *of_irq_find_parent(struct device_node *child)
+{
+ struct device_node *p;
+ phandle *parp;
+
+ if (!of_node_get(child))
+ return NULL;
+
+ do {
+ parp = (phandle *)get_property(child, "interrupt-parent", NULL);
+ if (parp == NULL)
+ p = of_get_parent(child);
+ else {
+ if (of_irq_workarounds & OF_IMAP_NO_PHANDLE)
+ p = of_node_get(of_irq_dflt_pic);
+ else
+ p = of_find_node_by_phandle(*parp);
+ }
+ of_node_put(child);
+ child = p;
+ } while (p && get_property(p, "#interrupt-cells", NULL) == NULL);
+
+ return p;
+}
+
+static u8 of_irq_pci_swizzle(u8 slot, u8 pin)
+{
+ return (((pin - 1) + slot) % 4) + 1;
}
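
As a quick check of the swizzle above (the standard PCI INTx rotation): a device in slot 2 raising INTA (pin 1) presents as pin ((1 - 1) + 2) % 4 + 1 = 3, i.e. INTC, at the bridge above it.
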
+
+/* This doesn't need to be called if you don't have any special workaround
+ * flags to pass
+ */
+void of_irq_map_init(unsigned int flags)
+{
+ of_irq_workarounds = flags;
+
+ /* OldWorld, don't bother looking at other things */
+ if (flags & OF_IMAP_OLDWORLD_MAC)
+ return;
+
+ /* If we don't have phandles, let's try to locate a default interrupt
+ * controller (happens when booting with BootX). We take the first match
+ * here; hopefully that only ever happens on machines with a single
+ * controller.
+ */
+ if (flags & OF_IMAP_NO_PHANDLE) {
+ struct device_node *np;
+
+ for(np = NULL; (np = of_find_all_nodes(np)) != NULL;) {
+ if (get_property(np, "interrupt-controller", NULL)
+ == NULL)
+ continue;
+ /* Skip /chosen/interrupt-controller */
+ if (strcmp(np->name, "chosen") == 0)
+ continue;
+ /* It seems like at least one person on this planet wants
+ * to use BootX on a machine with an AppleKiwi controller
+ * which happens to pretend to be an interrupt
+ * controller too.
+ */
+ if (strcmp(np->name, "AppleKiwi") == 0)
+ continue;
+ /* I think we found one ! */
+ of_irq_dflt_pic = np;
+ break;
+ }
+ }
+
+}
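
A platform that knows its firmware quirks would call this once from its early setup code; a hypothetical BootX-style call site would simply be:

    /* hypothetical call site: no phandles in the BootX-supplied tree */
    of_irq_map_init(OF_IMAP_NO_PHANDLE);
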
+
+int of_irq_map_raw(struct device_node *parent, u32 *intspec, u32 *addr,
+ struct of_irq *out_irq)
+{
+ struct device_node *ipar, *tnode, *old = NULL, *newpar = NULL;
+ u32 *tmp, *imap, *imask;
+ u32 intsize = 1, addrsize, newintsize = 0, newaddrsize = 0;
+ int imaplen, match, i;
+
+ ipar = of_node_get(parent);
+
+ /* First get the #interrupt-cells property of the current cursor
+ * that tells us how to interpret the passed-in intspec. If there
+ * is none, we are nice and just walk up the tree
+ */
+ do {
+ tmp = (u32 *)get_property(ipar, "#interrupt-cells", NULL);
+ if (tmp != NULL) {
+ intsize = *tmp;
+ break;
+ }
+ tnode = ipar;
+ ipar = of_irq_find_parent(ipar);
+ of_node_put(tnode);
+ } while (ipar);
+ if (ipar == NULL) {
+ DBG(" -> no parent found !\n");
+ goto fail;
+ }
+
+ DBG("of_irq_map_raw: ipar=%s, size=%d\n", ipar->full_name, intsize);
+
+ /* Look for the #address-cells property. We have to implement the old linux
+ * trick of looking for the parent here as some device-trees rely on it
+ */
+ old = of_node_get(ipar);
+ do {
+ tmp = (u32 *)get_property(old, "#address-cells", NULL);
+ tnode = of_get_parent(old);
+ of_node_put(old);
+ old = tnode;
+ } while(old && tmp == NULL);
+ of_node_put(old);
+ old = NULL;
+ addrsize = (tmp == NULL) ? 2 : *tmp;
+
+ DBG(" -> addrsize=%d\n", addrsize);
+
+ /* Now start the actual "proper" walk of the interrupt tree */
+ while (ipar != NULL) {
+ /* Now check if cursor is an interrupt-controller and if it is
+ * then we are done
+ */
+ if (get_property(ipar, "interrupt-controller", NULL) != NULL) {
+ DBG(" -> got it !\n");
+ memcpy(out_irq->specifier, intspec,
+ intsize * sizeof(u32));
+ out_irq->size = intsize;
+ out_irq->controller = ipar;
+ of_node_put(old);
+ return 0;
+ }
+
+ /* Now look for an interrupt-map */
+ imap = (u32 *)get_property(ipar, "interrupt-map", &imaplen);
+ /* No interrupt map, check for an interrupt parent */
+ if (imap == NULL) {
+ DBG(" -> no map, getting parent\n");
+ newpar = of_irq_find_parent(ipar);
+ goto skiplevel;
+ }
+ imaplen /= sizeof(u32);
+
+ /* Look for a mask */
+ imask = (u32 *)get_property(ipar, "interrupt-map-mask", NULL);
+
+ /* If we were passed no "reg" property and we attempt to parse
+ * an interrupt-map, then #address-cells must be 0.
+ * Fail if it's not.
+ */
+ if (addr == NULL && addrsize != 0) {
+ DBG(" -> no reg passed in when needed !\n");
+ goto fail;
+ }
+
+ /* Parse interrupt-map */
+ match = 0;
+ while (imaplen > (addrsize + intsize + 1) && !match) {
+ /* Compare specifiers */
+ match = 1;
+ for (i = 0; i < addrsize && match; ++i) {
+ u32 mask = imask ? imask[i] : 0xffffffffu;
+ match = ((addr[i] ^ imap[i]) & mask) == 0;
+ }
+ for (; i < (addrsize + intsize) && match; ++i) {
+ u32 mask = imask ? imask[i] : 0xffffffffu;
+ match =
+ ((intspec[i-addrsize] ^ imap[i]) & mask) == 0;
+ }
+ imap += addrsize + intsize;
+ imaplen -= addrsize + intsize;
+
+ DBG(" -> match=%d (imaplen=%d)\n", match, imaplen);
+
+ /* Get the interrupt parent */
+ if (of_irq_workarounds & OF_IMAP_NO_PHANDLE)
+ newpar = of_node_get(of_irq_dflt_pic);
+ else
+ newpar = of_find_node_by_phandle((phandle)*imap);
+ imap++;
+ --imaplen;
+
+ /* Check if not found */
+ if (newpar == NULL) {
+ DBG(" -> imap parent not found !\n");
+ goto fail;
+ }
+
+ /* Get #interrupt-cells and #address-cells of new
+ * parent
+ */
+ tmp = (u32 *)get_property(newpar, "#interrupt-cells",
+ NULL);
+ if (tmp == NULL) {
+ DBG(" -> parent lacks #interrupt-cells !\n");
+ goto fail;
+ }
+ newintsize = *tmp;
+ tmp = (u32 *)get_property(newpar, "#address-cells",
+ NULL);
+ newaddrsize = (tmp == NULL) ? 0 : *tmp;
+
+ DBG(" -> newintsize=%d, newaddrsize=%d\n",
+ newintsize, newaddrsize);
+
+ /* Check for malformed properties */
+ if (imaplen < (newaddrsize + newintsize))
+ goto fail;
+
+ imap += newaddrsize + newintsize;
+ imaplen -= newaddrsize + newintsize;
+
+ DBG(" -> imaplen=%d\n", imaplen);
+ }
+ if (!match)
+ goto fail;
+
+ of_node_put(old);
+ old = of_node_get(newpar);
+ addrsize = newaddrsize;
+ intsize = newintsize;
+ intspec = imap - intsize;
+ addr = intspec - addrsize;
+
+ skiplevel:
+ /* Iterate again with new parent */
+ DBG(" -> new parent: %s\n", newpar ? newpar->full_name : "<>");
+ of_node_put(ipar);
+ ipar = newpar;
+ newpar = NULL;
+ }
+ fail:
+ of_node_put(ipar);
+ of_node_put(old);
+ of_node_put(newpar);
+
+ return -EINVAL;
+}
+EXPORT_SYMBOL_GPL(of_irq_map_raw);
+
+#if defined(CONFIG_PPC_PMAC) && defined(CONFIG_PPC32)
+static int of_irq_map_oldworld(struct device_node *device, int index,
+ struct of_irq *out_irq)
+{
+ u32 *ints;
+ int intlen;
+
+ /*
+ * Old machines just have a list of interrupt numbers
+ * and no interrupt-controller nodes.
+ */
+ ints = (u32 *) get_property(device, "AAPL,interrupts", &intlen);
+ if (ints == NULL)
+ return -EINVAL;
+ intlen /= sizeof(u32);
+
+ if (index >= intlen)
+ return -EINVAL;
+
+ out_irq->controller = NULL;
+ out_irq->specifier[0] = ints[index];
+ out_irq->size = 1;
+
+ return 0;
+}
+#else /* defined(CONFIG_PPC_PMAC) && defined(CONFIG_PPC32) */
+static int of_irq_map_oldworld(struct device_node *device, int index,
+ struct of_irq *out_irq)
+{
+ return -EINVAL;
+}
+#endif /* !(defined(CONFIG_PPC_PMAC) && defined(CONFIG_PPC32)) */
+
+int of_irq_map_one(struct device_node *device, int index, struct of_irq *out_irq)
+{
+ struct device_node *p;
+ u32 *intspec, *tmp, intsize, intlen, *addr;
+ int res;
+
+ DBG("of_irq_map_one: dev=%s, index=%d\n", device->full_name, index);
+
+ /* OldWorld mac stuff is "special", handle out of line */
+ if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
+ return of_irq_map_oldworld(device, index, out_irq);
+
+ /* Get the interrupts property */
+ intspec = (u32 *)get_property(device, "interrupts", &intlen);
+ if (intspec == NULL)
+ return -EINVAL;
+ intlen /= sizeof(u32);
+
+ /* Get the reg property (if any) */
+ addr = (u32 *)get_property(device, "reg", NULL);
+
+ /* Look for the interrupt parent. */
+ p = of_irq_find_parent(device);
+ if (p == NULL)
+ return -EINVAL;
+
+ /* Get size of interrupt specifier */
+ tmp = (u32 *)get_property(p, "#interrupt-cells", NULL);
+ if (tmp == NULL) {
+ of_node_put(p);
+ return -EINVAL;
+ }
+ intsize = *tmp;
+
+ /* Check index */
+ if (index * intsize >= intlen)
+ return -EINVAL;
+
+ /* Get new specifier and map it */
+ res = of_irq_map_raw(p, intspec + index * intsize, addr, out_irq);
+ of_node_put(p);
+ return res;
+}
+EXPORT_SYMBOL_GPL(of_irq_map_one);
+
+int of_irq_map_pci(struct pci_dev *pdev, struct of_irq *out_irq)
+{
+ struct device_node *dn, *ppnode;
+ struct pci_dev *ppdev;
+ u32 lspec;
+ u32 laddr[3];
+ u8 pin;
+ int rc;
+
+ /* Check if we have a device node, if yes, fall back to standard OF
+ * parsing
+ */
+ dn = pci_device_to_OF_node(pdev);
+ if (dn)
+ return of_irq_map_one(dn, 0, out_irq);
+
+ /* Ok, we don't, time to have fun. Let's start by building up an
+ * interrupt spec. We assume #interrupt-cells is 1, which is standard
+ * for PCI. If you do things differently, don't use this routine.
+ */
+ rc = pci_read_config_byte(pdev, PCI_INTERRUPT_PIN, &pin);
+ if (rc != 0)
+ return rc;
+ /* No pin, exit */
+ if (pin == 0)
+ return -ENODEV;
+
+ /* Now we walk up the PCI tree */
+ lspec = pin;
+ for (;;) {
+ /* Get the pci_dev of our parent */
+ ppdev = pdev->bus->self;
+
+ /* Ouch, it's a host bridge... */
+ if (ppdev == NULL) {
+#ifdef CONFIG_PPC64
+ ppnode = pci_bus_to_OF_node(pdev->bus);
+#else
+ struct pci_controller *host;
+ host = pci_bus_to_host(pdev->bus);
+ ppnode = host ? host->arch_data : NULL;
+#endif
+ /* No node for host bridge ? give up */
+ if (ppnode == NULL)
+ return -EINVAL;
+ } else
+ /* We found a P2P bridge, check if it has a node */
+ ppnode = pci_device_to_OF_node(ppdev);
+
+ /* Ok, we have found a parent with a device-node, hand over to
+ * the OF parsing code.
+ * We build a unit address from the linux device to be used for
+ * resolution. Note that we use the linux bus number which may
+ * not match your firmware bus numbering.
+ * Fortunately, in most cases, interrupt-map-mask doesn't include
+ * the bus number as part of the matching.
+ * You should still be careful about that though if you intend
+ * to rely on this function (i.e. if you ship firmware that doesn't
+ * create device nodes for all PCI devices).
+ */
+ if (ppnode)
+ break;
+
+ /* We can only get here if we hit a P2P bridge with no node,
+ * let's do standard swizzling and try again
+ */
+ lspec = of_irq_pci_swizzle(PCI_SLOT(pdev->devfn), lspec);
+ pdev = ppdev;
+ }
+
+ laddr[0] = (pdev->bus->number << 16)
+ | (pdev->devfn << 8);
+ laddr[1] = laddr[2] = 0;
+ return of_irq_map_raw(ppnode, &lspec, laddr, out_irq);
+}
+EXPORT_SYMBOL_GPL(of_irq_map_pci);
+
struct device_node *node;
struct pci_controller *phb;
unsigned int index;
- unsigned int root_size_cells = 0;
- unsigned int *opprop = NULL;
struct device_node *root = of_find_node_by_path("/");
- if (ppc64_interrupt_controller == IC_OPEN_PIC) {
- opprop = (unsigned int *)get_property(root,
- "platform-open-pic", NULL);
- }
-
- root_size_cells = prom_n_size_cells(root);
-
index = 0;
-
for (node = of_get_next_child(root, NULL);
node != NULL;
node = of_get_next_child(root, node)) {
setup_phb(node, phb);
pci_process_bridge_OF_ranges(phb, node, 0);
pci_setup_phb_io(phb, index == 0);
-#ifdef CONFIG_PPC_PSERIES
- /* XXX This code need serious fixing ... --BenH */
- if (ppc64_interrupt_controller == IC_OPEN_PIC && pSeries_mpic) {
- int addr = root_size_cells * (index + 2) - 1;
- mpic_assign_isu(pSeries_mpic, index, opprop[addr]);
- }
-#endif
index++;
}
extern void bootx_init(unsigned long r4, unsigned long phys);
-boot_infos_t *boot_infos;
struct ide_machdep_calls ppc_ide_md;
int boot_cpuid;
ppc_md.init_early();
find_legacy_serial_ports();
- finish_device_tree();
smp_setup_cpu_maps();
/*
* Fill the ppc64_caches & systemcfg structures with informations
- * retrieved from the device-tree. Need to be called before
- * finish_device_tree() since the later requires some of the
- * informations filled up here to properly parse the interrupt tree.
+ * retrieved from the device-tree.
*/
initialize_cache_info();
+ /*
+ * Initialize irq remapping subsystem
+ */
+ irq_early_init();
+
#ifdef CONFIG_PPC_RTAS
/*
* Initialize RTAS if available
*/
find_legacy_serial_ports();
- /*
- * "Finish" the device-tree, that is do the actual parsing of
- * some of the properties like the interrupt map
- */
- finish_device_tree();
-
/*
* Initialize xmon
*/
printk("-----------------------------------------------------\n");
printk("ppc64_pft_size = 0x%lx\n", ppc64_pft_size);
- printk("ppc64_interrupt_controller = 0x%ld\n",
- ppc64_interrupt_controller);
printk("physicalMemorySize = 0x%lx\n", lmb_phys_mem_size());
printk("ppc64_caches.dcache_line_size = 0x%x\n",
ppc64_caches.dline_size);
{
struct vio_dev *viodev;
unsigned int *unit_address;
- unsigned int *irq_p;
/* we need the 'device_type' property, in order to match with drivers */
if (of_node->type == NULL) {
viodev->dev.platform_data = of_node_get(of_node);
- viodev->irq = NO_IRQ;
- irq_p = (unsigned int *)get_property(of_node, "interrupts", NULL);
- if (irq_p) {
- int virq = virt_irq_create_mapping(*irq_p);
- if (virq == NO_IRQ) {
- printk(KERN_ERR "Unable to allocate interrupt "
- "number for %s\n", of_node->full_name);
- } else
- viodev->irq = irq_offset_up(virq);
- }
+ viodev->irq = irq_of_parse_and_map(of_node, 0);
snprintf(viodev->dev.bus_id, BUS_ID_SIZE, "%x", *unit_address);
viodev->name = of_node->name;
3 PCI slots. The PIB's PCI initialization is the bootloader's
responsibility.
+config MPC834x_ITX
+ bool "Freescale MPC834x ITX"
+ select DEFAULT_UIMAGE
+ help
+ This option enables support for the MPC 834x ITX evaluation board.
+
+ Be aware that PCI initialization is the bootloader's
+ responsibility.
+
endchoice
config MPC834x
bool
select PPC_UDBG_16550
select PPC_INDIRECT_PCI
- default y if MPC834x_SYS
+ default y if MPC834x_SYS || MPC834x_ITX
endmenu
obj-y := misc.o
obj-$(CONFIG_PCI) += pci.o
obj-$(CONFIG_MPC834x_SYS) += mpc834x_sys.o
+obj-$(CONFIG_MPC834x_ITX) += mpc834x_itx.o
--- /dev/null
+/*
+ * arch/powerpc/platforms/83xx/mpc834x_itx.c
+ *
+ * MPC834x ITX board specific routines
+ *
+ * Maintainer: Kumar Gala <galak@kernel.crashing.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/stddef.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/reboot.h>
+#include <linux/pci.h>
+#include <linux/kdev_t.h>
+#include <linux/major.h>
+#include <linux/console.h>
+#include <linux/delay.h>
+#include <linux/seq_file.h>
+#include <linux/root_dev.h>
+
+#include <asm/system.h>
+#include <asm/atomic.h>
+#include <asm/time.h>
+#include <asm/io.h>
+#include <asm/machdep.h>
+#include <asm/ipic.h>
+#include <asm/bootinfo.h>
+#include <asm/irq.h>
+#include <asm/prom.h>
+#include <asm/udbg.h>
+#include <sysdev/fsl_soc.h>
+
+#include "mpc83xx.h"
+
+#include <platforms/83xx/mpc834x_sys.h>
+
+#ifndef CONFIG_PCI
+unsigned long isa_io_base = 0;
+unsigned long isa_mem_base = 0;
+#endif
+
+#ifdef CONFIG_PCI
+static int
+mpc83xx_map_irq(struct pci_dev *dev, unsigned char idsel, unsigned char pin)
+{
+ static char pci_irq_table[][4] =
+ /*
+ * PCI IDSEL/INTPIN->INTLINE
+ * A B C D
+ */
+ {
+ {PIRQB, PIRQC, PIRQD, PIRQA}, /* idsel 0x0e */
+ {PIRQA, PIRQB, PIRQC, PIRQD}, /* idsel 0x0f */
+ {PIRQC, PIRQD, PIRQA, PIRQB}, /* idsel 0x10 */
+ };
+
+ const long min_idsel = 0x0e, max_idsel = 0x10, irqs_per_slot = 4;
+ return PCI_IRQ_TABLE_LOOKUP;
+}
+#endif /* CONFIG_PCI */
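
Reading the routing table above: a device at idsel 0x10 asserting INTB is routed to PIRQD, which the board header added below ties to MPC83xx_IRQ_EXT7.
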
+
+/* ************************************************************************
+ *
+ * Setup the architecture
+ *
+ */
+static void __init mpc834x_itx_setup_arch(void)
+{
+ struct device_node *np;
+
+ if (ppc_md.progress)
+ ppc_md.progress("mpc834x_itx_setup_arch()", 0);
+
+ np = of_find_node_by_type(NULL, "cpu");
+ if (np != 0) {
+ unsigned int *fp =
+ (int *)get_property(np, "clock-frequency", NULL);
+ if (fp != 0)
+ loops_per_jiffy = *fp / HZ;
+ else
+ loops_per_jiffy = 50000000 / HZ;
+ of_node_put(np);
+ }
+#ifdef CONFIG_PCI
+ for (np = NULL; (np = of_find_node_by_type(np, "pci")) != NULL;)
+ add_bridge(np);
+
+ ppc_md.pci_swizzle = common_swizzle;
+ ppc_md.pci_map_irq = mpc83xx_map_irq;
+ ppc_md.pci_exclude_device = mpc83xx_exclude_device;
+#endif
+
+#ifdef CONFIG_ROOT_NFS
+ ROOT_DEV = Root_NFS;
+#else
+ ROOT_DEV = Root_HDA1;
+#endif
+}
+
+void __init mpc834x_itx_init_IRQ(void)
+{
+ u8 senses[8] = {
+ 0, /* EXT 0 */
+ IRQ_SENSE_LEVEL, /* EXT 1 */
+ IRQ_SENSE_LEVEL, /* EXT 2 */
+ 0, /* EXT 3 */
+#ifdef CONFIG_PCI
+ IRQ_SENSE_LEVEL, /* EXT 4 */
+ IRQ_SENSE_LEVEL, /* EXT 5 */
+ IRQ_SENSE_LEVEL, /* EXT 6 */
+ IRQ_SENSE_LEVEL, /* EXT 7 */
+#else
+ 0, /* EXT 4 */
+ 0, /* EXT 5 */
+ 0, /* EXT 6 */
+ 0, /* EXT 7 */
+#endif
+ };
+
+ ipic_init(get_immrbase() + 0x00700, 0, 0, senses, 8);
+
+ /* Initialize the default interrupt mapping priorities,
+ * in case the boot rom changed something on us.
+ */
+ ipic_set_default_priority();
+}
+
+/*
+ * Called very early, MMU is off, device-tree isn't unflattened
+ */
+static int __init mpc834x_itx_probe(void)
+{
+ /* We always match for now, eventually we should look at the flat
+ dev tree to ensure this is the board we are supposed to run on
+ */
+ return 1;
+}
+
+define_machine(mpc834x_itx) {
+ .name = "MPC834x ITX",
+ .probe = mpc834x_itx_probe,
+ .setup_arch = mpc834x_itx_setup_arch,
+ .init_IRQ = mpc834x_itx_init_IRQ,
+ .get_irq = ipic_get_irq,
+ .restart = mpc83xx_restart,
+ .time_init = mpc83xx_time_init,
+ .calibrate_decr = generic_calibrate_decr,
+ .progress = udbg_progress,
+};
--- /dev/null
+/*
+ * arch/powerpc/platforms/83xx/mpc834x_itx.h
+ *
+ * MPC834X ITX common board definitions
+ *
+ * Maintainer: Kumar Gala <galak@kernel.crashing.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ */
+
+#ifndef __MACH_MPC83XX_ITX_H__
+#define __MACH_MPC83XX_ITX_H__
+
+#define PIRQA MPC83xx_IRQ_EXT4
+#define PIRQB MPC83xx_IRQ_EXT5
+#define PIRQC MPC83xx_IRQ_EXT6
+#define PIRQD MPC83xx_IRQ_EXT7
+
+#endif /* __MACH_MPC83XX_ITX_H__ */
/*
* Cell Internal Interrupt Controller
*
+ * Copyright (C) 2006 Benjamin Herrenschmidt (benh@kernel.crashing.org)
+ * IBM, Corp.
+ *
* (C) Copyright IBM Deutschland Entwicklung GmbH 2005
*
* Author: Arnd Bergmann <arndb@de.ibm.com>
#include <linux/module.h>
#include <linux/percpu.h>
#include <linux/types.h>
+#include <linux/ioport.h>
#include <asm/io.h>
#include <asm/pgtable.h>
#include <asm/prom.h>
#include <asm/ptrace.h>
+#include <asm/machdep.h>
#include "interrupt.h"
#include "cbe_regs.h"
struct iic {
struct cbe_iic_thread_regs __iomem *regs;
u8 target_id;
+ u8 eoi_stack[16];
+ int eoi_ptr;
+ struct irq_host *host;
};
static DEFINE_PER_CPU(struct iic, iic);
+#define IIC_NODE_COUNT 2
+static struct irq_host *iic_hosts[IIC_NODE_COUNT];
-void iic_local_enable(void)
+/* Convert between "pending" bits and hw irq number */
+static irq_hw_number_t iic_pending_to_hwnum(struct cbe_iic_pending_bits bits)
{
- struct iic *iic = &__get_cpu_var(iic);
- u64 tmp;
-
- /*
- * There seems to be a bug that is present in DD2.x CPUs
- * and still only partially fixed in DD3.1.
- * This bug causes a value written to the priority register
- * not to make it there, resulting in a system hang unless we
- * write it again.
- * Masking with 0xf0 is done because the Cell BE does not
- * implement the lower four bits of the interrupt priority,
- * they always read back as zeroes, although future CPUs
- * might implement different bits.
- */
- do {
- out_be64(&iic->regs->prio, 0xff);
- tmp = in_be64(&iic->regs->prio);
- } while ((tmp & 0xf0) != 0xf0);
-}
-
-void iic_local_disable(void)
-{
- out_be64(&__get_cpu_var(iic).regs->prio, 0x0);
-}
+ unsigned char unit = bits.source & 0xf;
-static unsigned int iic_startup(unsigned int irq)
-{
- return 0;
+ if (bits.flags & CBE_IIC_IRQ_IPI)
+ return IIC_IRQ_IPI0 | (bits.prio >> 4);
+ else if (bits.class <= 3)
+ return (bits.class << 4) | unit;
+ else
+ return IIC_IRQ_INVALID;
}
-static void iic_enable(unsigned int irq)
+static void iic_mask(unsigned int irq)
{
- iic_local_enable();
}
-static void iic_disable(unsigned int irq)
+static void iic_unmask(unsigned int irq)
{
}
-static void iic_end(unsigned int irq)
+static void iic_eoi(unsigned int irq)
{
- iic_local_enable();
+ struct iic *iic = &__get_cpu_var(iic);
+ out_be64(&iic->regs->prio, iic->eoi_stack[--iic->eoi_ptr]);
+ BUG_ON(iic->eoi_ptr < 0);
}
-static struct hw_interrupt_type iic_pic = {
+static struct irq_chip iic_chip = {
.typename = " CELL-IIC ",
- .startup = iic_startup,
- .enable = iic_enable,
- .disable = iic_disable,
- .end = iic_end,
+ .mask = iic_mask,
+ .unmask = iic_unmask,
+ .eoi = iic_eoi,
};
-static int iic_external_get_irq(struct cbe_iic_pending_bits pending)
-{
- int irq;
- unsigned char node, unit;
-
- node = pending.source >> 4;
- unit = pending.source & 0xf;
- irq = -1;
-
- /*
- * This mapping is specific to the Cell Broadband
- * Engine. We might need to get the numbers
- * from the device tree to support future CPUs.
- */
- switch (unit) {
- case 0x00:
- case 0x0b:
- /*
- * One of these units can be connected
- * to an external interrupt controller.
- */
- if (pending.class != 2)
- break;
- irq = IIC_EXT_OFFSET
- + spider_get_irq(node)
- + node * IIC_NODE_STRIDE;
- break;
- case 0x01 ... 0x04:
- case 0x07 ... 0x0a:
- /*
- * These units are connected to the SPEs
- */
- if (pending.class > 2)
- break;
- irq = IIC_SPE_OFFSET
- + pending.class * IIC_CLASS_STRIDE
- + node * IIC_NODE_STRIDE
- + unit;
- break;
- }
- if (irq == -1)
- printk(KERN_WARNING "Unexpected interrupt class %02x, "
- "source %02x, prio %02x, cpu %02x\n", pending.class,
- pending.source, pending.prio, smp_processor_id());
- return irq;
-}
-
/* Get an IRQ number from the pending state register of the IIC */
-int iic_get_irq(struct pt_regs *regs)
+static unsigned int iic_get_irq(struct pt_regs *regs)
{
- struct iic *iic;
- int irq;
- struct cbe_iic_pending_bits pending;
-
- iic = &__get_cpu_var(iic);
- *(unsigned long *) &pending =
- in_be64((unsigned long __iomem *) &iic->regs->pending_destr);
-
- irq = -1;
- if (pending.flags & CBE_IIC_IRQ_VALID) {
- if (pending.flags & CBE_IIC_IRQ_IPI) {
- irq = IIC_IPI_OFFSET + (pending.prio >> 4);
-/*
- if (irq > 0x80)
- printk(KERN_WARNING "Unexpected IPI prio %02x"
- "on CPU %02x\n", pending.prio,
- smp_processor_id());
-*/
- } else {
- irq = iic_external_get_irq(pending);
- }
- }
- return irq;
-}
-
-/* hardcoded part to be compatible with older firmware */
-
-static int setup_iic_hardcoded(void)
-{
- struct device_node *np;
- int nodeid, cpu;
- unsigned long regs;
- struct iic *iic;
-
- for_each_possible_cpu(cpu) {
- iic = &per_cpu(iic, cpu);
- nodeid = cpu/2;
-
- for (np = of_find_node_by_type(NULL, "cpu");
- np;
- np = of_find_node_by_type(np, "cpu")) {
- if (nodeid == *(int *)get_property(np, "node-id", NULL))
- break;
- }
-
- if (!np) {
- printk(KERN_WARNING "IIC: CPU %d not found\n", cpu);
- iic->regs = NULL;
- iic->target_id = 0xff;
- return -ENODEV;
- }
-
- regs = *(long *)get_property(np, "iic", NULL);
-
- /* hack until we have decided on the devtree info */
- regs += 0x400;
- if (cpu & 1)
- regs += 0x20;
-
- printk(KERN_INFO "IIC for CPU %d at %lx\n", cpu, regs);
- iic->regs = ioremap(regs, sizeof(struct cbe_iic_thread_regs));
- iic->target_id = (nodeid << 4) + ((cpu & 1) ? 0xf : 0xe);
- }
-
- return 0;
-}
-
-static int setup_iic(void)
-{
- struct device_node *dn;
- unsigned long *regs;
- char *compatible;
- unsigned *np, found = 0;
- struct iic *iic = NULL;
-
- for (dn = NULL; (dn = of_find_node_by_name(dn, "interrupt-controller"));) {
- compatible = (char *)get_property(dn, "compatible", NULL);
-
- if (!compatible) {
- printk(KERN_WARNING "no compatible property found !\n");
- continue;
- }
-
- if (strstr(compatible, "IBM,CBEA-Internal-Interrupt-Controller"))
- regs = (unsigned long *)get_property(dn,"reg", NULL);
- else
- continue;
-
- if (!regs)
- printk(KERN_WARNING "IIC: no reg property\n");
-
- np = (unsigned int *)get_property(dn, "ibm,interrupt-server-ranges", NULL);
-
- if (!np) {
- printk(KERN_WARNING "IIC: CPU association not found\n");
- iic->regs = NULL;
- iic->target_id = 0xff;
- return -ENODEV;
- }
-
- iic = &per_cpu(iic, np[0]);
- iic->regs = ioremap(regs[0], sizeof(struct cbe_iic_thread_regs));
- iic->target_id = ((np[0] & 2) << 3) + ((np[0] & 1) ? 0xf : 0xe);
- printk("IIC for CPU %d at %lx mapped to %p\n", np[0], regs[0], iic->regs);
-
- iic = &per_cpu(iic, np[1]);
- iic->regs = ioremap(regs[2], sizeof(struct cbe_iic_thread_regs));
- iic->target_id = ((np[1] & 2) << 3) + ((np[1] & 1) ? 0xf : 0xe);
- printk("IIC for CPU %d at %lx mapped to %p\n", np[1], regs[2], iic->regs);
-
- found++;
- }
-
- if (found)
- return 0;
- else
- return -ENODEV;
+ struct cbe_iic_pending_bits pending;
+ struct iic *iic;
+
+ iic = &__get_cpu_var(iic);
+ *(unsigned long *) &pending =
+ in_be64((unsigned long __iomem *) &iic->regs->pending_destr);
+ iic->eoi_stack[++iic->eoi_ptr] = pending.prio;
+ BUG_ON(iic->eoi_ptr > 15);
+ if (pending.flags & CBE_IIC_IRQ_VALID)
+ return irq_linear_revmap(iic->host,
+ iic_pending_to_hwnum(pending));
+ return NO_IRQ;
}
#ifdef CONFIG_SMP
/* Use the highest interrupt priorities for IPI */
static inline int iic_ipi_to_irq(int ipi)
{
- return IIC_IPI_OFFSET + IIC_NUM_IPIS - 1 - ipi;
+ return IIC_IRQ_IPI0 + IIC_NUM_IPIS - 1 - ipi;
}
static inline int iic_irq_to_ipi(int irq)
{
- return IIC_NUM_IPIS - 1 - (irq - IIC_IPI_OFFSET);
+ return IIC_NUM_IPIS - 1 - (irq - IIC_IRQ_IPI0);
}
void iic_setup_cpu(void)
}
EXPORT_SYMBOL_GPL(iic_get_target_id);
+struct irq_host *iic_get_irq_host(int node)
+{
+ if (node < 0 || node >= IIC_NODE_COUNT)
+ return NULL;
+ return iic_hosts[node];
+}
+EXPORT_SYMBOL_GPL(iic_get_irq_host);
+
+
static irqreturn_t iic_ipi_action(int irq, void *dev_id, struct pt_regs *regs)
{
- smp_message_recv(iic_irq_to_ipi(irq), regs);
+ int ipi = (int)(long)dev_id;
+
+ smp_message_recv(ipi, regs);
+
return IRQ_HANDLED;
}
static void iic_request_ipi(int ipi, const char *name)
{
- int irq;
-
- irq = iic_ipi_to_irq(ipi);
- /* IPIs are marked SA_INTERRUPT as they must run with irqs
- * disabled */
- get_irq_desc(irq)->chip = &iic_pic;
- get_irq_desc(irq)->status |= IRQ_PER_CPU;
- request_irq(irq, iic_ipi_action, SA_INTERRUPT, name, NULL);
+ int node, virq;
+
+ for (node = 0; node < IIC_NODE_COUNT; node++) {
+ char *rname;
+ if (iic_hosts[node] == NULL)
+ continue;
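+		/* Each node has its own irq host; map and request the IPI once per node */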
+ virq = irq_create_mapping(iic_hosts[node],
+ iic_ipi_to_irq(ipi), 0);
+ if (virq == NO_IRQ) {
+ printk(KERN_ERR
+ "iic: failed to map IPI %s on node %d\n",
+ name, node);
+ continue;
+ }
+ rname = kzalloc(strlen(name) + 16, GFP_KERNEL);
+ if (rname)
+ sprintf(rname, "%s node %d", name, node);
+ else
+ rname = (char *)name;
+ if (request_irq(virq, iic_ipi_action, IRQF_DISABLED,
+ rname, (void *)(long)ipi))
+ printk(KERN_ERR
+ "iic: failed to request IPI %s on node %d\n",
+ name, node);
+ }
}
void iic_request_IPIs(void)
iic_request_ipi(PPC_MSG_DEBUGGER_BREAK, "IPI-debug");
#endif /* CONFIG_DEBUGGER */
}
+
#endif /* CONFIG_SMP */
-static void iic_setup_spe_handlers(void)
+
+static int iic_host_match(struct irq_host *h, struct device_node *node)
+{
+ return h->host_data != NULL && node == h->host_data;
+}
+
+static int iic_host_map(struct irq_host *h, unsigned int virq,
+ irq_hw_number_t hw, unsigned int flags)
+{
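+	/* Sources below IIC_IRQ_IPI0 (externals and SPEs) get the fasteoi
+	 * flow, the IPI range is handled as per-CPU interrupts
+	 */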
+ if (hw < IIC_IRQ_IPI0)
+ set_irq_chip_and_handler(virq, &iic_chip, handle_fasteoi_irq);
+ else
+ set_irq_chip_and_handler(virq, &iic_chip, handle_percpu_irq);
+ return 0;
+}
+
+static int iic_host_xlate(struct irq_host *h, struct device_node *ct,
+ u32 *intspec, unsigned int intsize,
+ irq_hw_number_t *out_hwirq, unsigned int *out_flags)
+
+{
+ /* Currently, we don't translate anything. That needs to be fixed as
+ * we get better defined device-trees. iic interrupts have to be
+	 * explicitly mapped by whoever needs them
+ */
+ return -ENODEV;
+}
+
+static struct irq_host_ops iic_host_ops = {
+ .match = iic_host_match,
+ .map = iic_host_map,
+ .xlate = iic_host_xlate,
+};
+
+static void __init init_one_iic(unsigned int hw_cpu, unsigned long addr,
+ struct irq_host *host)
{
- int be, isrc;
+ /* XXX FIXME: should locate the linux CPU number from the HW cpu
+ * number properly. We are lucky for now
+ */
+ struct iic *iic = &per_cpu(iic, hw_cpu);
+
+ iic->regs = ioremap(addr, sizeof(struct cbe_iic_thread_regs));
+ BUG_ON(iic->regs == NULL);
- /* Assume two threads per BE are present */
- for (be=0; be < num_present_cpus() / 2; be++) {
- for (isrc = 0; isrc < IIC_CLASS_STRIDE * 3; isrc++) {
- int irq = IIC_NODE_STRIDE * be + IIC_SPE_OFFSET + isrc;
- get_irq_desc(irq)->chip = &iic_pic;
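+	/* Destination id: BE node in the upper nibble, 0xe/0xf selects the HW thread */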
+ iic->target_id = ((hw_cpu & 2) << 3) | ((hw_cpu & 1) ? 0xf : 0xe);
+ iic->eoi_stack[0] = 0xff;
+ iic->host = host;
+ out_be64(&iic->regs->prio, 0);
+
+ printk(KERN_INFO "IIC for CPU %d at %lx mapped to %p, target id 0x%x\n",
+ hw_cpu, addr, iic->regs, iic->target_id);
+}
+
+static int __init setup_iic(void)
+{
+ struct device_node *dn;
+ struct resource r0, r1;
+ struct irq_host *host;
+ int found = 0;
+ u32 *np;
+
+ for (dn = NULL;
+ (dn = of_find_node_by_name(dn,"interrupt-controller")) != NULL;) {
+ if (!device_is_compatible(dn,
+ "IBM,CBEA-Internal-Interrupt-Controller"))
+ continue;
+ np = (u32 *)get_property(dn, "ibm,interrupt-server-ranges",
+ NULL);
+ if (np == NULL) {
+ printk(KERN_WARNING "IIC: CPU association not found\n");
+ of_node_put(dn);
+ return -ENODEV;
+ }
+ if (of_address_to_resource(dn, 0, &r0) ||
+ of_address_to_resource(dn, 1, &r1)) {
+ printk(KERN_WARNING "IIC: Can't resolve addresses\n");
+ of_node_put(dn);
+ return -ENODEV;
}
+ host = NULL;
+ if (found < IIC_NODE_COUNT) {
+ host = irq_alloc_host(IRQ_HOST_MAP_LINEAR,
+ IIC_SOURCE_COUNT,
+ &iic_host_ops,
+ IIC_IRQ_INVALID);
+ iic_hosts[found] = host;
+ BUG_ON(iic_hosts[found] == NULL);
+ iic_hosts[found]->host_data = of_node_get(dn);
+ found++;
+ }
+ init_one_iic(np[0], r0.start, host);
+ init_one_iic(np[1], r1.start, host);
}
+
+ if (found)
+ return 0;
+ else
+ return -ENODEV;
}
-void iic_init_IRQ(void)
+void __init iic_init_IRQ(void)
{
- int cpu, irq_offset;
- struct iic *iic;
-
+ /* Discover and initialize iics */
if (setup_iic() < 0)
- setup_iic_hardcoded();
+ panic("IIC: Failed to initialize !\n");
- irq_offset = 0;
- for_each_possible_cpu(cpu) {
- iic = &per_cpu(iic, cpu);
- if (iic->regs)
- out_be64(&iic->regs->prio, 0xff);
- }
- iic_setup_spe_handlers();
+ /* Set master interrupt handling function */
+ ppc_md.get_irq = iic_get_irq;
+
+ /* Enable on current CPU */
+ iic_setup_cpu();
}
*/
enum {
- IIC_EXT_OFFSET = 0x00, /* Start of south bridge IRQs */
- IIC_NUM_EXT = 0x40, /* Number of south bridge IRQs */
- IIC_SPE_OFFSET = 0x40, /* Start of SPE interrupts */
- IIC_CLASS_STRIDE = 0x10, /* SPE IRQs per class */
- IIC_IPI_OFFSET = 0x70, /* Start of IPI IRQs */
- IIC_NUM_IPIS = 0x10, /* IRQs reserved for IPI */
- IIC_NODE_STRIDE = 0x80, /* Total IRQs per node */
+ IIC_IRQ_INVALID = 0xff,
+ IIC_IRQ_MAX = 0x3f,
+ IIC_IRQ_EXT_IOIF0 = 0x20,
+ IIC_IRQ_EXT_IOIF1 = 0x2b,
+ IIC_IRQ_IPI0 = 0x40,
+ IIC_NUM_IPIS = 0x10, /* IRQs reserved for IPI */
+ IIC_SOURCE_COUNT = 0x50,
};
extern void iic_init_IRQ(void);
-extern int iic_get_irq(struct pt_regs *regs);
extern void iic_cause_IPI(int cpu, int mesg);
extern void iic_request_IPIs(void);
extern void iic_setup_cpu(void);
-extern void iic_local_enable(void);
-extern void iic_local_disable(void);
extern u8 iic_get_target_id(int cpu);
+extern struct irq_host *iic_get_irq_host(int node);
extern void spider_init_IRQ(void);
-extern int spider_get_irq(int node);
#endif
#endif /* ASM_CELL_PIC_H */
#include <asm/irq.h>
#include <asm/spu.h>
#include <asm/spu_priv1.h>
+#include <asm/udbg.h>
#include "interrupt.h"
#include "iommu.h"
printk("*** %04x : %s\n", hex, s ? s : "");
}
+static void __init cell_pcibios_fixup(void)
+{
+ struct pci_dev *dev = NULL;
+
+ for_each_pci_dev(dev)
+ pci_read_irq_line(dev);
+}
+
+static void __init cell_init_irq(void)
+{
+ iic_init_IRQ();
+ spider_init_IRQ();
+}
+
static void __init cell_setup_arch(void)
{
- ppc_md.init_IRQ = iic_init_IRQ;
- ppc_md.get_irq = iic_get_irq;
#ifdef CONFIG_SPU_BASE
spu_priv1_ops = &spu_priv1_mmio_ops;
#endif
/* Find and initialize PCI host bridges */
init_pci_config_tokens();
find_and_init_phbs();
- spider_init_IRQ();
cbe_pervasive_init();
#ifdef CONFIG_DUMMY_CONSOLE
conswitchp = &dummy_con;
cell_init_iommu();
- ppc64_interrupt_controller = IC_CELL_PIC;
-
DBG(" <- cell_init_early()\n");
}
.calibrate_decr = generic_calibrate_decr,
.check_legacy_ioport = cell_check_legacy_ioport,
.progress = cell_progress,
+ .init_IRQ = cell_init_irq,
+ .pcibios_fixup = cell_pcibios_fixup,
#ifdef CONFIG_KEXEC
.machine_kexec = default_machine_kexec,
.machine_kexec_prepare = default_machine_kexec_prepare,
#include <linux/interrupt.h>
#include <linux/irq.h>
+#include <linux/ioport.h>
#include <asm/pgtable.h>
#include <asm/prom.h>
REISWAITEN = 0x508, /* Reissue Wait Control*/
};
-static void __iomem *spider_pics[4];
+#define SPIDER_CHIP_COUNT 4
+#define SPIDER_SRC_COUNT 64
+#define SPIDER_IRQ_INVALID 63
-static void __iomem *spider_get_pic(int irq)
-{
- int node = irq / IIC_NODE_STRIDE;
- irq %= IIC_NODE_STRIDE;
-
- if (irq >= IIC_EXT_OFFSET &&
- irq < IIC_EXT_OFFSET + IIC_NUM_EXT &&
- spider_pics)
- return spider_pics[node];
- return NULL;
-}
+struct spider_pic {
+ struct irq_host *host;
+ struct device_node *of_node;
+ void __iomem *regs;
+ unsigned int node_id;
+};
+static struct spider_pic spider_pics[SPIDER_CHIP_COUNT];
-static int spider_get_nr(unsigned int irq)
+static struct spider_pic *spider_virq_to_pic(unsigned int virq)
{
- return (irq % IIC_NODE_STRIDE) - IIC_EXT_OFFSET;
+ return irq_map[virq].host->host_data;
}
-static void __iomem *spider_get_irq_config(int irq)
+static void __iomem *spider_get_irq_config(struct spider_pic *pic,
+ unsigned int src)
{
- void __iomem *pic;
- pic = spider_get_pic(irq);
- return pic + TIR_CFGA + 8 * spider_get_nr(irq);
+ return pic->regs + TIR_CFGA + 8 * src;
}
-static void spider_enable_irq(unsigned int irq)
+static void spider_unmask_irq(unsigned int virq)
{
- int nodeid = (irq / IIC_NODE_STRIDE) * 0x10;
- void __iomem *cfg = spider_get_irq_config(irq);
- irq = spider_get_nr(irq);
+ struct spider_pic *pic = spider_virq_to_pic(virq);
+ void __iomem *cfg = spider_get_irq_config(pic, irq_map[virq].hwirq);
- out_be32(cfg, (in_be32(cfg) & ~0xf0)| 0x3107000eu | nodeid);
- out_be32(cfg + 4, in_be32(cfg + 4) | 0x00020000u | irq);
+ /* We use no locking as we should be covered by the descriptor lock
+	 * for access to individual source configuration registers
+ */
+ out_be32(cfg, in_be32(cfg) | 0x30000000u);
}
-static void spider_disable_irq(unsigned int irq)
+static void spider_mask_irq(unsigned int virq)
{
- void __iomem *cfg = spider_get_irq_config(irq);
- irq = spider_get_nr(irq);
+ struct spider_pic *pic = spider_virq_to_pic(virq);
+ void __iomem *cfg = spider_get_irq_config(pic, irq_map[virq].hwirq);
+ /* We use no locking as we should be covered by the descriptor lock
+	 * for access to individual source configuration registers
+ */
out_be32(cfg, in_be32(cfg) & ~0x30000000u);
}
-static unsigned int spider_startup_irq(unsigned int irq)
+static void spider_ack_irq(unsigned int virq)
{
- spider_enable_irq(irq);
- return 0;
-}
+ struct spider_pic *pic = spider_virq_to_pic(virq);
+ unsigned int src = irq_map[virq].hwirq;
-static void spider_shutdown_irq(unsigned int irq)
-{
- spider_disable_irq(irq);
-}
+ /* Reset edge detection logic if necessary
+ */
+ if (get_irq_desc(virq)->status & IRQ_LEVEL)
+ return;
-static void spider_end_irq(unsigned int irq)
-{
- spider_enable_irq(irq);
-}
+ /* Only interrupts 47 to 50 can be set to edge */
+ if (src < 47 || src > 50)
+ return;
-static void spider_ack_irq(unsigned int irq)
-{
- spider_disable_irq(irq);
- iic_local_enable();
+ /* Perform the clear of the edge logic */
+ out_be32(pic->regs + TIR_EDC, 0x100 | (src & 0xf));
}
-static struct hw_interrupt_type spider_pic = {
+static struct irq_chip spider_pic = {
.typename = " SPIDER ",
- .startup = spider_startup_irq,
- .shutdown = spider_shutdown_irq,
- .enable = spider_enable_irq,
- .disable = spider_disable_irq,
+ .unmask = spider_unmask_irq,
+ .mask = spider_mask_irq,
.ack = spider_ack_irq,
- .end = spider_end_irq,
};
-int spider_get_irq(int node)
+static int spider_host_match(struct irq_host *h, struct device_node *node)
{
- unsigned long cs;
- void __iomem *regs = spider_pics[node];
-
- cs = in_be32(regs + TIR_CS) >> 24;
-
- if (cs == 63)
- return -1;
- else
- return cs;
+ struct spider_pic *pic = h->host_data;
+ return node == pic->of_node;
}
-/* hardcoded part to be compatible with older firmware */
-
-void spider_init_IRQ_hardcoded(void)
+static int spider_host_map(struct irq_host *h, unsigned int virq,
+ irq_hw_number_t hw, unsigned int flags)
{
- int node;
- long spiderpic;
- long pics[] = { 0x24000008000, 0x34000008000 };
- int n;
-
- pr_debug("%s(%d): Using hardcoded defaults\n", __FUNCTION__, __LINE__);
-
- for (node = 0; node < num_present_cpus()/2; node++) {
- spiderpic = pics[node];
- printk(KERN_DEBUG "SPIDER addr: %lx\n", spiderpic);
- spider_pics[node] = ioremap(spiderpic, 0x800);
- for (n = 0; n < IIC_NUM_EXT; n++) {
- int irq = n + IIC_EXT_OFFSET + node * IIC_NODE_STRIDE;
- get_irq_desc(irq)->chip = &spider_pic;
- }
-
- /* do not mask any interrupts because of level */
- out_be32(spider_pics[node] + TIR_MSK, 0x0);
-
- /* disable edge detection clear */
- /* out_be32(spider_pics[node] + TIR_EDC, 0x0); */
-
- /* enable interrupt packets to be output */
- out_be32(spider_pics[node] + TIR_PIEN,
- in_be32(spider_pics[node] + TIR_PIEN) | 0x1);
-
- /* Enable the interrupt detection enable bit. Do this last! */
- out_be32(spider_pics[node] + TIR_DEN,
- in_be32(spider_pics[node] + TIR_DEN) | 0x1);
+ unsigned int sense = flags & IRQ_TYPE_SENSE_MASK;
+ struct spider_pic *pic = h->host_data;
+ void __iomem *cfg = spider_get_irq_config(pic, hw);
+ int level = 0;
+ u32 ic;
+
+ /* Note that only level high is supported for most interrupts */
+ if (sense != IRQ_TYPE_NONE && sense != IRQ_TYPE_LEVEL_HIGH &&
+ (hw < 47 || hw > 50))
+ return -EINVAL;
+
+ /* Decode sense type */
+ switch(sense) {
+ case IRQ_TYPE_EDGE_RISING:
+ ic = 0x3;
+ break;
+ case IRQ_TYPE_EDGE_FALLING:
+ ic = 0x2;
+ break;
+ case IRQ_TYPE_LEVEL_LOW:
+ ic = 0x0;
+ level = 1;
+ break;
+ case IRQ_TYPE_LEVEL_HIGH:
+ case IRQ_TYPE_NONE:
+ ic = 0x1;
+ level = 1;
+ break;
+ default:
+ return -EINVAL;
}
-}
-void spider_init_IRQ(void)
-{
- long spider_reg;
- struct device_node *dn;
- char *compatible;
- int n, node = 0;
+ /* Configure the source. One gross hack that was there before and
+ * that I've kept around is the priority to the BE which I set to
+	 * be the same as the interrupt source number. I don't know whether
+	 * that's supposed to make any kind of sense; however, we'll have to
+ * decide that, but for now, I'm not changing the behaviour.
+ */
+ out_be32(cfg, (ic << 24) | (0x7 << 16) | (pic->node_id << 4) | 0xe);
+ out_be32(cfg + 4, (0x2 << 16) | (hw & 0xff));
+
+ if (level)
+ get_irq_desc(virq)->status |= IRQ_LEVEL;
+ set_irq_chip_and_handler(virq, &spider_pic, handle_level_irq);
+ return 0;
+}
- for (dn = NULL; (dn = of_find_node_by_name(dn, "interrupt-controller"));) {
- compatible = (char *)get_property(dn, "compatible", NULL);
+static int spider_host_xlate(struct irq_host *h, struct device_node *ct,
+ u32 *intspec, unsigned int intsize,
+ irq_hw_number_t *out_hwirq, unsigned int *out_flags)
- if (!compatible)
- continue;
+{
+ /* Spider interrupts have 2 cells, first is the interrupt source,
+ * second, well, I don't know for sure yet ... We mask the top bits
+ * because old device-trees encode a node number in there
+ */
+ *out_hwirq = intspec[0] & 0x3f;
+ *out_flags = IRQ_TYPE_LEVEL_HIGH;
+ return 0;
+}
- if (strstr(compatible, "CBEA,platform-spider-pic"))
- spider_reg = *(long *)get_property(dn,"reg", NULL);
- else if (strstr(compatible, "sti,platform-spider-pic")) {
- spider_init_IRQ_hardcoded();
- return;
- } else
- continue;
+static struct irq_host_ops spider_host_ops = {
+ .match = spider_host_match,
+ .map = spider_host_map,
+ .xlate = spider_host_xlate,
+};
- if (!spider_reg)
- printk("interrupt controller does not have reg property !\n");
+static void spider_irq_cascade(unsigned int irq, struct irq_desc *desc,
+ struct pt_regs *regs)
+{
+ struct spider_pic *pic = desc->handler_data;
+ unsigned int cs, virq;
- n = prom_n_addr_cells(dn);
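+	/* The CS register holds the pending source; SPIDER_IRQ_INVALID (63) means none */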
+ cs = in_be32(pic->regs + TIR_CS) >> 24;
+ if (cs == SPIDER_IRQ_INVALID)
+ virq = NO_IRQ;
+ else
+ virq = irq_linear_revmap(pic->host, cs);
+ if (virq != NO_IRQ)
+ generic_handle_irq(virq, regs);
+ desc->chip->eoi(irq);
+}
- if ( n != 2)
- printk("reg property with invalid number of elements \n");
+/* For hooking up the cascade we have a problem. Our device-tree is
+ * crap and we don't know on which BE iic interrupt we are hooked, at
+ * least not the "standard" way. We can reconstitute it based on two
+ * pieces of information though: which BE node we are connected to, and whether
+ * we are connected to IOIF0 or IOIF1. Right now, we really only care
+ * about the IBM cell blade and we know that its firmware gives us an
+ * interrupt-map property which is pretty strange.
+ */
+static unsigned int __init spider_find_cascade_and_node(struct spider_pic *pic)
+{
+ unsigned int virq;
+ u32 *imap, *tmp;
+ int imaplen, intsize, unit;
+ struct device_node *iic;
+ struct irq_host *iic_host;
+
+#if 0 /* Enable this when we have a way to retrieve the node as well */
+	/* First, we check whether we have a real "interrupts" property in the
+	 * device tree, in case the device-tree is ever fixed
+ */
+ struct of_irq oirq;
+ if (of_irq_map_one(pic->of_node, 0, &oirq) == 0) {
+ virq = irq_create_of_mapping(oirq.controller, oirq.specifier,
+ oirq.size);
+ goto bail;
+ }
+#endif
+
+ /* Now do the horrible hacks */
+ tmp = (u32 *)get_property(pic->of_node, "#interrupt-cells", NULL);
+ if (tmp == NULL)
+ return NO_IRQ;
+ intsize = *tmp;
+ imap = (u32 *)get_property(pic->of_node, "interrupt-map", &imaplen);
+ if (imap == NULL || imaplen < (intsize + 1))
+ return NO_IRQ;
+ iic = of_find_node_by_phandle(imap[intsize]);
+ if (iic == NULL)
+ return NO_IRQ;
+ imap += intsize + 1;
+ tmp = (u32 *)get_property(iic, "#interrupt-cells", NULL);
+ if (tmp == NULL)
+ return NO_IRQ;
+ intsize = *tmp;
+ /* Assume unit is last entry of interrupt specifier */
+ unit = imap[intsize - 1];
+ /* Ok, we have a unit, now let's try to get the node */
+ tmp = (u32 *)get_property(iic, "ibm,interrupt-server-ranges", NULL);
+ if (tmp == NULL) {
+ of_node_put(iic);
+ return NO_IRQ;
+ }
+ /* ugly as hell but works for now */
+ pic->node_id = (*tmp) >> 1;
+ of_node_put(iic);
+
+ /* Ok, now let's get cracking. You may ask me why I just didn't match
+ * the iic host from the iic OF node, but that way I'm still compatible
+	 * with really, really old firmwares for which we don't have a node
+ */
+ iic_host = iic_get_irq_host(pic->node_id);
+ if (iic_host == NULL)
+ return NO_IRQ;
+ /* Manufacture an IIC interrupt number of class 2 */
+ virq = irq_create_mapping(iic_host, 0x20 | unit, 0);
+ if (virq == NO_IRQ)
+		printk(KERN_ERR "spider_pic: failed to map cascade !\n");
+ return virq;
+}
- spider_pics[node] = ioremap(spider_reg, 0x800);
- printk("SPIDER addr: %lx with %i addr_cells mapped to %p\n",
- spider_reg, n, spider_pics[node]);
+static void __init spider_init_one(struct device_node *of_node, int chip,
+ unsigned long addr)
+{
+ struct spider_pic *pic = &spider_pics[chip];
+ int i, virq;
+
+ /* Map registers */
+ pic->regs = ioremap(addr, 0x1000);
+ if (pic->regs == NULL)
+ panic("spider_pic: can't map registers !");
+
+ /* Allocate a host */
+ pic->host = irq_alloc_host(IRQ_HOST_MAP_LINEAR, SPIDER_SRC_COUNT,
+ &spider_host_ops, SPIDER_IRQ_INVALID);
+ if (pic->host == NULL)
+ panic("spider_pic: can't allocate irq host !");
+ pic->host->host_data = pic;
+
+ /* Fill out other bits */
+ pic->of_node = of_node_get(of_node);
+
+ /* Go through all sources and disable them */
+ for (i = 0; i < SPIDER_SRC_COUNT; i++) {
+ void __iomem *cfg = pic->regs + TIR_CFGA + 8 * i;
+ out_be32(cfg, in_be32(cfg) & ~0x30000000u);
+ }
- for (n = 0; n < IIC_NUM_EXT; n++) {
- int irq = n + IIC_EXT_OFFSET + node * IIC_NODE_STRIDE;
- get_irq_desc(irq)->chip = &spider_pic;
- }
+ /* do not mask any interrupts because of level */
+ out_be32(pic->regs + TIR_MSK, 0x0);
- /* do not mask any interrupts because of level */
- out_be32(spider_pics[node] + TIR_MSK, 0x0);
+ /* enable interrupt packets to be output */
+ out_be32(pic->regs + TIR_PIEN, in_be32(pic->regs + TIR_PIEN) | 0x1);
- /* disable edge detection clear */
- /* out_be32(spider_pics[node] + TIR_EDC, 0x0); */
+ /* Hook up the cascade interrupt to the iic and nodeid */
+ virq = spider_find_cascade_and_node(pic);
+ if (virq == NO_IRQ)
+ return;
+ set_irq_data(virq, pic);
+ set_irq_chained_handler(virq, spider_irq_cascade);
- /* enable interrupt packets to be output */
- out_be32(spider_pics[node] + TIR_PIEN,
- in_be32(spider_pics[node] + TIR_PIEN) | 0x1);
+ printk(KERN_INFO "spider_pic: node %d, addr: 0x%lx %s\n",
+ pic->node_id, addr, of_node->full_name);
- /* Enable the interrupt detection enable bit. Do this last! */
- out_be32(spider_pics[node] + TIR_DEN,
- in_be32(spider_pics[node] + TIR_DEN) | 0x1);
+ /* Enable the interrupt detection enable bit. Do this last! */
+ out_be32(pic->regs + TIR_DEN, in_be32(pic->regs + TIR_DEN) | 0x1);
+}
- node++;
+void __init spider_init_IRQ(void)
+{
+ struct resource r;
+ struct device_node *dn;
+ int chip = 0;
+
+ /* XXX node numbers are totally bogus. We _hope_ we get the device
+	 * nodes in the right order here but that's definitely not guaranteed;
+ * we need to get the node from the device tree instead.
+ * There is currently no proper property for it (but our whole
+ * device-tree is bogus anyway) so all we can do is pray or maybe test
+ * the address and deduce the node-id
+ */
+ for (dn = NULL;
+ (dn = of_find_node_by_name(dn, "interrupt-controller"));) {
+ if (device_is_compatible(dn, "CBEA,platform-spider-pic")) {
+ if (of_address_to_resource(dn, 0, &r)) {
+ printk(KERN_WARNING "spider-pic: Failed\n");
+ continue;
+ }
+ } else if (device_is_compatible(dn, "sti,platform-spider-pic")
+ && (chip < 2)) {
+ static long hard_coded_pics[] =
+ { 0x24000008000, 0x34000008000 };
+ r.start = hard_coded_pics[chip];
+ } else
+ continue;
+ spider_init_one(dn, chip++, r.start);
}
}
return stat ? IRQ_HANDLED : IRQ_NONE;
}
-static int
-spu_request_irqs(struct spu *spu)
+static int spu_request_irqs(struct spu *spu)
{
- int ret;
- int irq_base;
-
- irq_base = IIC_NODE_STRIDE * spu->node + IIC_SPE_OFFSET;
-
- snprintf(spu->irq_c0, sizeof (spu->irq_c0), "spe%02d.0", spu->number);
- ret = request_irq(irq_base + spu->isrc,
- spu_irq_class_0, SA_INTERRUPT, spu->irq_c0, spu);
- if (ret)
- goto out;
-
- snprintf(spu->irq_c1, sizeof (spu->irq_c1), "spe%02d.1", spu->number);
- ret = request_irq(irq_base + IIC_CLASS_STRIDE + spu->isrc,
- spu_irq_class_1, SA_INTERRUPT, spu->irq_c1, spu);
- if (ret)
- goto out1;
+ int ret = 0;
- snprintf(spu->irq_c2, sizeof (spu->irq_c2), "spe%02d.2", spu->number);
- ret = request_irq(irq_base + 2*IIC_CLASS_STRIDE + spu->isrc,
- spu_irq_class_2, SA_INTERRUPT, spu->irq_c2, spu);
- if (ret)
- goto out2;
- goto out;
+ if (spu->irqs[0] != NO_IRQ) {
+ snprintf(spu->irq_c0, sizeof (spu->irq_c0), "spe%02d.0",
+ spu->number);
+ ret = request_irq(spu->irqs[0], spu_irq_class_0,
+ IRQF_DISABLED,
+ spu->irq_c0, spu);
+ if (ret)
+ goto bail0;
+ }
+ if (spu->irqs[1] != NO_IRQ) {
+ snprintf(spu->irq_c1, sizeof (spu->irq_c1), "spe%02d.1",
+ spu->number);
+ ret = request_irq(spu->irqs[1], spu_irq_class_1,
+ IRQF_DISABLED,
+ spu->irq_c1, spu);
+ if (ret)
+ goto bail1;
+ }
+ if (spu->irqs[2] != NO_IRQ) {
+ snprintf(spu->irq_c2, sizeof (spu->irq_c2), "spe%02d.2",
+ spu->number);
+ ret = request_irq(spu->irqs[2], spu_irq_class_2,
+ IRQF_DISABLED,
+ spu->irq_c2, spu);
+ if (ret)
+ goto bail2;
+ }
+ return 0;
-out2:
- free_irq(irq_base + IIC_CLASS_STRIDE + spu->isrc, spu);
-out1:
- free_irq(irq_base + spu->isrc, spu);
-out:
+bail2:
+ if (spu->irqs[1] != NO_IRQ)
+ free_irq(spu->irqs[1], spu);
+bail1:
+ if (spu->irqs[0] != NO_IRQ)
+ free_irq(spu->irqs[0], spu);
+bail0:
return ret;
}
-static void
-spu_free_irqs(struct spu *spu)
+static void spu_free_irqs(struct spu *spu)
{
- int irq_base;
-
- irq_base = IIC_NODE_STRIDE * spu->node + IIC_SPE_OFFSET;
-
- free_irq(irq_base + spu->isrc, spu);
- free_irq(irq_base + IIC_CLASS_STRIDE + spu->isrc, spu);
- free_irq(irq_base + 2*IIC_CLASS_STRIDE + spu->isrc, spu);
+ if (spu->irqs[0] != NO_IRQ)
+ free_irq(spu->irqs[0], spu);
+ if (spu->irqs[1] != NO_IRQ)
+ free_irq(spu->irqs[1], spu);
+ if (spu->irqs[2] != NO_IRQ)
+ free_irq(spu->irqs[2], spu);
}
static LIST_HEAD(spu_list);
iounmap((u8 __iomem *)spu->local_store);
}
+/* This function shall be abstracted for HV platforms */
+static int __init spu_map_interrupts(struct spu *spu, struct device_node *np)
+{
+ struct irq_host *host;
+ unsigned int isrc;
+ u32 *tmp;
+
+ host = iic_get_irq_host(spu->node);
+ if (host == NULL)
+ return -ENODEV;
+
+ /* Get the interrupt source from the device-tree */
+ tmp = (u32 *)get_property(np, "isrc", NULL);
+ if (!tmp)
+ return -ENODEV;
+ spu->isrc = isrc = tmp[0];
+
+ /* Now map interrupts of all 3 classes */
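+	/* The IIC hw number encodes the class in bits 4-5 and the source in the low bits */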
+ spu->irqs[0] = irq_create_mapping(host, 0x00 | isrc, 0);
+ spu->irqs[1] = irq_create_mapping(host, 0x10 | isrc, 0);
+ spu->irqs[2] = irq_create_mapping(host, 0x20 | isrc, 0);
+
+ /* Right now, we only fail if class 2 failed */
+ return spu->irqs[2] == NO_IRQ ? -EINVAL : 0;
+}
+
static int __init spu_map_device(struct spu *spu, struct device_node *node)
{
char *prop;
int ret;
ret = -ENODEV;
- prop = get_property(node, "isrc", NULL);
- if (!prop)
- goto out;
- spu->isrc = *(unsigned int *)prop;
-
spu->name = get_property(node, "name", NULL);
if (!spu->name)
goto out;
return ret;
}
- sysdev_create_file(&spu->sysdev, &attr_isrc);
+ if (spu->isrc != 0)
+ sysdev_create_file(&spu->sysdev, &attr_isrc);
sysfs_add_device_to_node(&spu->sysdev, spu->nid);
return 0;
spu->nid = of_node_to_nid(spe);
if (spu->nid == -1)
spu->nid = 0;
+ ret = spu_map_interrupts(spu, spe);
+ if (ret)
+ goto out_unmap;
spin_lock_init(&spu->register_lock);
spu_mfc_sdr_set(spu, mfspr(SPRN_SDR1));
spu_mfc_sr1_set(spu, 0x33);
#include <asm/machdep.h>
#include <asm/sections.h>
#include <asm/pci-bridge.h>
-#include <asm/open_pic.h>
#include <asm/grackle.h>
#include <asm/rtas.h>
chrp_pcibios_fixup(void)
{
struct pci_dev *dev = NULL;
- struct device_node *np;
- /* PCI interrupts are controlled by the OpenPIC */
- for_each_pci_dev(dev) {
- np = pci_device_to_OF_node(dev);
- if ((np != 0) && (np->n_intrs > 0) && (np->intrs[0].line != 0))
- dev->irq = np->intrs[0].line;
- pci_write_config_byte(dev, PCI_INTERRUPT_LINE, dev->irq);
- }
+ for_each_pci_dev(dev)
+ pci_read_irq_line(dev);
}
#define PRG_CL_RESET_VALID 0x00010000
#include <linux/reboot.h>
#include <linux/init.h>
#include <linux/pci.h>
-#include <linux/version.h>
+#include <linux/utsrelease.h>
#include <linux/adb.h>
#include <linux/module.h>
#include <linux/delay.h>
int _chrp_type;
EXPORT_SYMBOL(_chrp_type);
-struct mpic *chrp_mpic;
+static struct mpic *chrp_mpic;
/* Used for doing CHRP event-scans */
DEFINE_PER_CPU(struct timer_list, heartbeat_timer);
jiffies + event_scan_interval);
}
+static void chrp_8259_cascade(unsigned int irq, struct irq_desc *desc,
+ struct pt_regs *regs)
+{
+ unsigned int cascade_irq = i8259_irq(regs);
+ if (cascade_irq != NO_IRQ)
+ generic_handle_irq(cascade_irq, regs);
+ desc->chip->eoi(irq);
+}
+
/*
* Finds the open-pic node and sets up the mpic driver.
*/
static void __init chrp_find_openpic(void)
{
struct device_node *np, *root;
- int len, i, j, irq_count;
+ int len, i, j;
int isu_size, idu_size;
unsigned int *iranges, *opprop = NULL;
int oplen = 0;
unsigned long opaddr;
int na = 1;
- unsigned char init_senses[NR_IRQS - NUM_8259_INTERRUPTS];
- np = find_type_devices("open-pic");
+ np = of_find_node_by_type(NULL, "open-pic");
if (np == NULL)
return;
- root = find_path_device("/");
+ root = of_find_node_by_path("/");
if (root) {
opprop = (unsigned int *) get_property
(root, "platform-open-pic", &oplen);
oplen /= na * sizeof(unsigned int);
} else {
struct resource r;
- if (of_address_to_resource(np, 0, &r))
- return;
+ if (of_address_to_resource(np, 0, &r)) {
+ goto bail;
+ }
opaddr = r.start;
oplen = 0;
}
printk(KERN_INFO "OpenPIC at %lx\n", opaddr);
- irq_count = NR_IRQS - NUM_ISA_INTERRUPTS - 4; /* leave room for IPIs */
- prom_get_irq_senses(init_senses, NUM_ISA_INTERRUPTS, NR_IRQS - 4);
- /* i8259 cascade is always positive level */
- init_senses[0] = IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE;
-
iranges = (unsigned int *) get_property(np, "interrupt-ranges", &len);
if (iranges == NULL)
len = 0; /* non-distributed mpic */
if (len > 1)
isu_size = iranges[3];
- chrp_mpic = mpic_alloc(opaddr, MPIC_PRIMARY,
- isu_size, NUM_ISA_INTERRUPTS, irq_count,
- NR_IRQS - 4, init_senses, irq_count,
- " MPIC ");
+ chrp_mpic = mpic_alloc(np, opaddr, MPIC_PRIMARY,
+ isu_size, 0, " MPIC ");
if (chrp_mpic == NULL) {
printk(KERN_ERR "Failed to allocate MPIC structure\n");
- return;
+ goto bail;
}
-
j = na - 1;
for (i = 1; i < len; ++i) {
iranges += 2;
}
mpic_init(chrp_mpic);
- mpic_setup_cascade(NUM_ISA_INTERRUPTS, i8259_irq_cascade, NULL);
+ ppc_md.get_irq = mpic_get_irq;
+ bail:
+ of_node_put(root);
+ of_node_put(np);
}
#if defined(CONFIG_VT) && defined(CONFIG_INPUT_ADBHID) && defined(XMON)
};
#endif
-void __init chrp_init_IRQ(void)
+static void __init chrp_find_8259(void)
{
- struct device_node *np;
+ struct device_node *np, *pic = NULL;
unsigned long chrp_int_ack = 0;
-#if defined(CONFIG_VT) && defined(CONFIG_INPUT_ADBHID) && defined(XMON)
- struct device_node *kbd;
-#endif
+ unsigned int cascade_irq;
+
+ /* Look for cascade */
+ for_each_node_by_type(np, "interrupt-controller")
+ if (device_is_compatible(np, "chrp,iic")) {
+ pic = np;
+ break;
+ }
+ /* Ok, 8259 wasn't found. We need to handle the case where
+ * we have a pegasos that claims to be chrp but doesn't have
+ * a proper interrupt tree
+ */
+ if (pic == NULL && chrp_mpic != NULL) {
+ printk(KERN_ERR "i8259: Not found in device-tree"
+ " assuming no legacy interrupts\n");
+ return;
+ }
+ /* Look for intack. In a perfect world, we would look for it on
+ * the ISA bus that holds the 8259 but heh... Works that way. If
+ * we ever see a problem, we can try to re-use the pSeries code here.
+ * Also, Pegasos-type platforms don't have a proper node to start
+ * from anyway
+ */
for (np = find_devices("pci"); np != NULL; np = np->next) {
unsigned int *addrp = (unsigned int *)
get_property(np, "8259-interrupt-acknowledge", NULL);
break;
}
if (np == NULL)
- printk(KERN_ERR "Cannot find PCI interrupt acknowledge address\n");
+ printk(KERN_WARNING "Cannot find PCI interrupt acknowledge"
+ " address, polling\n");
+
+ i8259_init(pic, chrp_int_ack);
+ if (ppc_md.get_irq == NULL)
+ ppc_md.get_irq = i8259_irq;
+ if (chrp_mpic != NULL) {
+ cascade_irq = irq_of_parse_and_map(pic, 0);
+ if (cascade_irq == NO_IRQ)
+ printk(KERN_ERR "i8259: failed to map cascade irq\n");
+ else
+ set_irq_chained_handler(cascade_irq,
+ chrp_8259_cascade);
+ }
+}
+void __init chrp_init_IRQ(void)
+{
+#if defined(CONFIG_VT) && defined(CONFIG_INPUT_ADBHID) && defined(XMON)
+ struct device_node *kbd;
+#endif
chrp_find_openpic();
-
- i8259_init(chrp_int_ack, 0);
+ chrp_find_8259();
if (_chrp_type == _CHRP_Pegasos)
ppc_md.get_irq = i8259_irq;
DMA_MODE_READ = 0x44;
DMA_MODE_WRITE = 0x48;
isa_io_base = CHRP_ISA_IO_BASE; /* default value */
- ppc_do_canonicalize_irqs = 1;
-
- /* Assume we have an 8259... */
- __irq_offset_value = NUM_ISA_INTERRUPTS;
return 1;
}
.init = chrp_init2,
.show_cpuinfo = chrp_show_cpuinfo,
.init_IRQ = chrp_init_IRQ,
- .get_irq = mpic_get_irq,
.pcibios_fixup = chrp_pcibios_fixup,
.restart = rtas_restart,
.power_off = rtas_power_off,
#include <asm/smp.h>
#include <asm/residual.h>
#include <asm/time.h>
-#include <asm/open_pic.h>
#include <asm/machdep.h>
#include <asm/smp.h>
#include <asm/mpic.h>
printk(KERN_ERR "pci_event_handler: NULL event received\n");
}
-/*
- * This is called by init_IRQ. set in ppc_md.init_IRQ by iSeries_setup.c
- * It must be called before the bus walk.
- */
-void __init iSeries_init_IRQ(void)
-{
- /* Register PCI event handler and open an event path */
- int ret;
-
- ret = HvLpEvent_registerHandler(HvLpEvent_Type_PciIo,
- &pci_event_handler);
- if (ret == 0) {
- ret = HvLpEvent_openPath(HvLpEvent_Type_PciIo, 0);
- if (ret != 0)
- printk(KERN_ERR "iseries_init_IRQ: open event path "
- "failed with rc 0x%x\n", ret);
- } else
- printk(KERN_ERR "iseries_init_IRQ: register handler "
- "failed with rc 0x%x\n", ret);
-}
-
#define REAL_IRQ_TO_SUBBUS(irq) (((irq) >> 14) & 0xff)
#define REAL_IRQ_TO_BUS(irq) ((((irq) >> 6) & 0xff) + 1)
#define REAL_IRQ_TO_IDSEL(irq) ((((irq) >> 3) & 7) + 1)
{
u32 bus, dev_id, function, mask;
const u32 sub_bus = 0;
- unsigned int rirq = virt_irq_to_real_map[irq];
+ unsigned int rirq = (unsigned int)irq_map[irq].hwirq;
/* The IRQ has already been locked by the caller */
bus = REAL_IRQ_TO_BUS(rirq);
{
u32 bus, dev_id, function, mask;
const u32 sub_bus = 0;
- unsigned int rirq = virt_irq_to_real_map[irq];
+ unsigned int rirq = (unsigned int)irq_map[irq].hwirq;
bus = REAL_IRQ_TO_BUS(rirq);
function = REAL_IRQ_TO_FUNC(rirq);
{
u32 bus, dev_id, function, mask;
const u32 sub_bus = 0;
- unsigned int rirq = virt_irq_to_real_map[irq];
+ unsigned int rirq = (unsigned int)irq_map[irq].hwirq;
/* irq should be locked by the caller */
bus = REAL_IRQ_TO_BUS(rirq);
{
u32 bus, dev_id, function, mask;
const u32 sub_bus = 0;
- unsigned int rirq = virt_irq_to_real_map[irq];
+ unsigned int rirq = (unsigned int)irq_map[irq].hwirq;
/* The IRQ has already been locked by the caller */
bus = REAL_IRQ_TO_BUS(rirq);
static void iseries_end_IRQ(unsigned int irq)
{
- unsigned int rirq = virt_irq_to_real_map[irq];
+ unsigned int rirq = (unsigned int)irq_map[irq].hwirq;
HvCallPci_eoi(REAL_IRQ_TO_BUS(rirq), REAL_IRQ_TO_SUBBUS(rirq),
(REAL_IRQ_TO_IDSEL(rirq) << 4) + REAL_IRQ_TO_FUNC(rirq));
}
-static hw_irq_controller iSeries_IRQ_handler = {
- .typename = "iSeries irq controller",
- .startup = iseries_startup_IRQ,
- .shutdown = iseries_shutdown_IRQ,
- .enable = iseries_enable_IRQ,
- .disable = iseries_disable_IRQ,
- .end = iseries_end_IRQ
+static struct irq_chip iseries_pic = {
+ .typename = "iSeries irq controller",
+ .startup = iseries_startup_IRQ,
+ .shutdown = iseries_shutdown_IRQ,
+ .unmask = iseries_enable_IRQ,
+ .mask = iseries_disable_IRQ,
+ .eoi = iseries_end_IRQ
};
/*
int __init iSeries_allocate_IRQ(HvBusNumber bus,
HvSubBusNumber sub_bus, u32 bsubbus)
{
- int virtirq;
unsigned int realirq;
u8 idsel = ISERIES_GET_DEVICE_FROM_SUBBUS(bsubbus);
u8 function = ISERIES_GET_FUNCTION_FROM_SUBBUS(bsubbus);
realirq = (((((sub_bus << 8) + (bus - 1)) << 3) + (idsel - 1)) << 3)
+ function;
- virtirq = virt_irq_create_mapping(realirq);
- irq_desc[virtirq].chip = &iSeries_IRQ_handler;
- return virtirq;
+ return irq_create_mapping(NULL, realirq, IRQ_TYPE_NONE);
}
#endif /* CONFIG_PCI */
/*
* Get the next pending IRQ.
*/
-int iSeries_get_irq(struct pt_regs *regs)
+unsigned int iSeries_get_irq(struct pt_regs *regs)
{
- /* -2 means ignore this interrupt */
- int irq = -2;
+ int irq = NO_IRQ_IGNORE;
#ifdef CONFIG_SMP
if (get_lppaca()->int_dword.fields.ipi_cnt) {
}
spin_unlock(&pending_irqs_lock);
if (irq >= NR_IRQS)
- irq = -2;
+ irq = NO_IRQ_IGNORE;
}
#endif
return irq;
}
+
+static int iseries_irq_host_map(struct irq_host *h, unsigned int virq,
+ irq_hw_number_t hw, unsigned int flags)
+{
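+	/* The HV hands back our virq directly (no-map host), so just set the chip and flow */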
+ set_irq_chip_and_handler(virq, &iseries_pic, handle_fasteoi_irq);
+
+ return 0;
+}
+
+static struct irq_host_ops iseries_irq_host_ops = {
+ .map = iseries_irq_host_map,
+};
+
+/*
+ * This is called by init_IRQ. set in ppc_md.init_IRQ by iSeries_setup.c
+ * It must be called before the bus walk.
+ */
+void __init iSeries_init_IRQ(void)
+{
+ /* Register PCI event handler and open an event path */
+ struct irq_host *host;
+ int ret;
+
+ /*
+ * The Hypervisor only allows us up to 256 interrupt
+ * sources (the irq number is passed in a u8).
+ */
+ irq_set_virq_count(256);
+
+ /* Create irq host. No need for a revmap since HV will give us
+ * back our virtual irq number
+ */
+ host = irq_alloc_host(IRQ_HOST_MAP_NOMAP, 0, &iseries_irq_host_ops, 0);
+ BUG_ON(host == NULL);
+ irq_set_default_host(host);
+
+ ret = HvLpEvent_registerHandler(HvLpEvent_Type_PciIo,
+ &pci_event_handler);
+ if (ret == 0) {
+ ret = HvLpEvent_openPath(HvLpEvent_Type_PciIo, 0);
+ if (ret != 0)
+ printk(KERN_ERR "iseries_init_IRQ: open event path "
+ "failed with rc 0x%x\n", ret);
+ } else
+ printk(KERN_ERR "iseries_init_IRQ: register handler "
+ "failed with rc 0x%x\n", ret);
+}
+
extern void iSeries_init_IRQ(void);
extern int iSeries_allocate_IRQ(HvBusNumber, HvSubBusNumber, u32);
extern void iSeries_activate_IRQs(void);
-extern int iSeries_get_irq(struct pt_regs *);
+extern unsigned int iSeries_get_irq(struct pt_regs *);
#endif /* _ISERIES_IRQ_H */
{
DBG(" -> iSeries_init_early()\n");
- ppc64_interrupt_controller = IC_ISERIES;
-
#if defined(CONFIG_BLK_DEV_INITRD)
/*
* If the init RAM disk has been configured and there is
powerpc_firmware_features |= FW_FEATURE_ISERIES;
powerpc_firmware_features |= FW_FEATURE_LPAR;
- /*
- * The Hypervisor only allows us up to 256 interrupt
- * sources (the irq number is passed in a u8).
- */
- virt_irq_max = 255;
-
hpte_init_iSeries();
return 1;
int maple_pci_get_legacy_ide_irq(struct pci_dev *pdev, int channel)
{
struct device_node *np;
- int irq = channel ? 15 : 14;
+ unsigned int defirq = channel ? 15 : 14;
+ unsigned int irq;
if (pdev->vendor != PCI_VENDOR_ID_AMD ||
pdev->device != PCI_DEVICE_ID_AMD_8111_IDE)
- return irq;
+ return defirq;
np = pci_device_to_OF_node(pdev);
if (np == NULL)
- return irq;
- if (np->n_intrs < 2)
- return irq;
- return np->intrs[channel & 0x1].line;
+ return defirq;
+ irq = irq_of_parse_and_map(np, channel & 0x1);
+ if (irq == NO_IRQ) {
+ printk("Failed to map onboard IDE interrupt for channel %d\n",
+ channel);
+ return defirq;
+ }
+ return irq;
}
/* XXX: To remove once all firmwares are ok */
*
*/
-#define DEBUG
+#undef DEBUG
#include <linux/init.h>
#include <linux/errno.h>
{
DBG(" -> maple_init_early\n");
- /* Setup interrupt mapping options */
- ppc64_interrupt_controller = IC_OPEN_PIC;
-
iommu_init_early_dart();
DBG(" <- maple_init_early\n");
}
-
-static __init void maple_init_IRQ(void)
+/*
+ * This is almost identical to pSeries and CHRP. We need to make that
+ * code generic at some point, with appropriate bits in the device-tree to
+ * identify the presence of an HT APIC
+ */
+static void __init maple_init_IRQ(void)
{
- struct device_node *root;
+ struct device_node *root, *np, *mpic_node = NULL;
unsigned int *opprop;
- unsigned long opic_addr;
+ unsigned long openpic_addr = 0;
+ int naddr, n, i, opplen, has_isus = 0;
struct mpic *mpic;
- unsigned char senses[128];
- int n;
+ unsigned int flags = MPIC_PRIMARY;
- DBG(" -> maple_init_IRQ\n");
+ /* Locate MPIC in the device-tree. Note that there is a bug
+ * in Maple device-tree where the type of the controller is
+ * open-pic and not interrupt-controller
+ */
+ for_each_node_by_type(np, "open-pic") {
+ mpic_node = np;
+ break;
+ }
+ if (mpic_node == NULL) {
+ printk(KERN_ERR
+ "Failed to locate the MPIC interrupt controller\n");
+ return;
+ }
- /* XXX: Non standard, replace that with a proper openpic/mpic node
- * in the device-tree. Find the Open PIC if present */
+ /* Find address list in /platform-open-pic */
root = of_find_node_by_path("/");
- opprop = (unsigned int *) get_property(root,
- "platform-open-pic", NULL);
- if (opprop == 0)
- panic("OpenPIC not found !\n");
-
- n = prom_n_addr_cells(root);
- for (opic_addr = 0; n > 0; --n)
- opic_addr = (opic_addr << 32) + *opprop++;
+ naddr = prom_n_addr_cells(root);
+ opprop = (unsigned int *) get_property(root, "platform-open-pic",
+ &opplen);
+ if (opprop != 0) {
+ openpic_addr = of_read_number(opprop, naddr);
+ has_isus = (opplen > naddr);
+ printk(KERN_DEBUG "OpenPIC addr: %lx, has ISUs: %d\n",
+ openpic_addr, has_isus);
+ }
of_node_put(root);
- /* Obtain sense values from device-tree */
- prom_get_irq_senses(senses, 0, 128);
+ BUG_ON(openpic_addr == 0);
+
+ /* Check for a big endian MPIC */
+ if (get_property(np, "big-endian", NULL) != NULL)
+ flags |= MPIC_BIG_ENDIAN;
- mpic = mpic_alloc(opic_addr,
- MPIC_PRIMARY | MPIC_BIG_ENDIAN |
- MPIC_BROKEN_U3 | MPIC_WANTS_RESET,
- 0, 0, 128, 128, senses, 128, "U3-MPIC");
+ /* XXX Maple specific bits */
+ flags |= MPIC_BROKEN_U3 | MPIC_WANTS_RESET;
+
+	/* Setup the openpic driver. More device-tree junk; we hard-code no
+	 * ISUs for now. I'll have to revisit some of this with the folks doing
+	 * the firmware for those
+ */
+ mpic = mpic_alloc(mpic_node, openpic_addr, flags,
+ /*has_isus ? 16 :*/ 0, 0, " MPIC ");
BUG_ON(mpic == NULL);
- mpic_init(mpic);
- DBG(" <- maple_init_IRQ\n");
+ /* Add ISUs */
+ opplen /= sizeof(u32);
+ for (n = 0, i = naddr; i < opplen; i += naddr, n++) {
+ unsigned long isuaddr = of_read_number(opprop + i, naddr);
+ mpic_assign_isu(mpic, n, isuaddr);
+ }
+
+ /* All ISUs are setup, complete initialization */
+ mpic_init(mpic);
+ ppc_md.get_irq = mpic_get_irq;
+ of_node_put(mpic_node);
+ of_node_put(root);
}
static void __init maple_progress(char *s, unsigned short hex)
static int __init maple_probe(void)
{
unsigned long root = of_get_flat_dt_root();
- if (!of_flat_dt_is_compatible(root, "Momentum,Maple"))
+
+ if (!of_flat_dt_is_compatible(root, "Momentum,Maple") &&
+ !of_flat_dt_is_compatible(root, "Momentum,Apache"))
return 0;
/*
* On U3, the DART (iommu) must be allocated now since it
.setup_arch = maple_setup_arch,
.init_early = maple_init_early,
.init_IRQ = maple_init_IRQ,
- .get_irq = mpic_get_irq,
.pcibios_fixup = maple_pcibios_fixup,
.pci_get_legacy_ide_irq = maple_pci_get_legacy_ide_irq,
.restart = maple_restart,
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/init.h>
-#include <linux/version.h>
+#include <linux/utsrelease.h>
#include <asm/sections.h>
#include <asm/prom.h>
#include <asm/page.h>
{
u32 val;
+ bootx_dt_add_prop("linux,bootx", NULL, 0, mem_end);
+
if (bootx_info->kernelParamsOffset) {
char *args = (char *)((unsigned long)bootx_info) +
bootx_info->kernelParamsOffset;
static void __init bootx_add_display_props(unsigned long base,
unsigned long *mem_end)
{
+ boot_infos_t *bi = bootx_info;
+ u32 tmp;
+
bootx_dt_add_prop("linux,boot-display", NULL, 0, mem_end);
bootx_dt_add_prop("linux,opened", NULL, 0, mem_end);
+ tmp = bi->dispDeviceDepth;
+ bootx_dt_add_prop("linux,bootx-depth", &tmp, 4, mem_end);
+ tmp = bi->dispDeviceRect[2] - bi->dispDeviceRect[0];
+ bootx_dt_add_prop("linux,bootx-width", &tmp, 4, mem_end);
+ tmp = bi->dispDeviceRect[3] - bi->dispDeviceRect[1];
+ bootx_dt_add_prop("linux,bootx-height", &tmp, 4, mem_end);
+ tmp = bi->dispDeviceRowBytes;
+ bootx_dt_add_prop("linux,bootx-linebytes", &tmp, 4, mem_end);
+ tmp = (u32)bi->dispDeviceBase;
+ if (tmp == 0)
+ tmp = (u32)bi->logicalDisplayBase;
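+	/* Point at the top-left corner of the visible rectangle */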
+ tmp += bi->dispDeviceRect[1] * bi->dispDeviceRowBytes;
+ tmp += bi->dispDeviceRect[0] * ((bi->dispDeviceDepth + 7) / 8);
+ bootx_dt_add_prop("linux,bootx-addr", &tmp, 4, mem_end);
}
static void __init bootx_dt_add_string(char *s, unsigned long *mem_end)
if (!strcmp(namep, "/chosen")) {
DBG(" detected /chosen ! adding properties names !\n");
- bootx_dt_add_string("linux,platform", mem_end);
+ bootx_dt_add_string("linux,bootx", mem_end);
bootx_dt_add_string("linux,stdout-path", mem_end);
bootx_dt_add_string("linux,initrd-start", mem_end);
bootx_dt_add_string("linux,initrd-end", mem_end);
DBG(" detected display ! adding properties names !\n");
bootx_dt_add_string("linux,boot-display", mem_end);
bootx_dt_add_string("linux,opened", mem_end);
+ bootx_dt_add_string("linux,bootx-depth", mem_end);
+ bootx_dt_add_string("linux,bootx-width", mem_end);
+ bootx_dt_add_string("linux,bootx-height", mem_end);
+ bootx_dt_add_string("linux,bootx-linebytes", mem_end);
+ bootx_dt_add_string("linux,bootx-addr", mem_end);
strncpy(bootx_disp_path, namep, 255);
}
if (!BOOT_INFO_IS_V2_COMPATIBLE(bi))
bi->logicalDisplayBase = bi->dispDeviceBase;
+ /* Fixup depth 16 -> 15 as that's what MacOS calls 16bpp */
+ if (bi->dispDeviceDepth == 16)
+ bi->dispDeviceDepth = 15;
+
#ifdef CONFIG_BOOTX_TEXT
+ ptr = (unsigned long)bi->logicalDisplayBase;
+ ptr += bi->dispDeviceRect[1] * bi->dispDeviceRowBytes;
+ ptr += bi->dispDeviceRect[0] * ((bi->dispDeviceDepth + 7) / 8);
btext_setup_display(bi->dispDeviceRect[2] - bi->dispDeviceRect[0],
bi->dispDeviceRect[3] - bi->dispDeviceRect[1],
bi->dispDeviceDepth, bi->dispDeviceRowBytes,
host->speed = KW_I2C_MODE_25KHZ;
break;
}
- if (np->n_intrs > 0)
- host->irq = np->intrs[0].line;
- else
- host->irq = NO_IRQ;
+ host->irq = irq_of_parse_and_map(np, 0);
+ if (host->irq == NO_IRQ)
+ printk(KERN_WARNING
+ "low_i2c: Failed to map interrupt for %s\n",
+ np->full_name);
host->base = ioremap((*addrp), 0x1000);
if (host->base == NULL) {
#include <asm/machdep.h>
#include <asm/nvram.h>
+#include "pmac.h"
+
#define DEBUG
#ifdef DEBUG
// XXX Turn that into a sem
static DEFINE_SPINLOCK(nv_lock);
-extern int pmac_newworld;
-extern int system_running;
-
static int (*core99_write_bank)(int bank, u8* datas);
static int (*core99_erase_bank)(int bank);
static struct pci_controller *u3_agp;
static struct pci_controller *u4_pcie;
static struct pci_controller *u3_ht;
+#define has_second_ohare 0
+#else
+static int has_second_ohare;
#endif /* CONFIG_PPC64 */
extern u8 pci_cache_line_size;
early_write_config_word(hose, bus, devfn, PCI_BRIDGE_CONTROL, val);
}
+static void __init init_second_ohare(void)
+{
+ struct device_node *np = of_find_node_by_name(NULL, "pci106b,7");
+ unsigned char bus, devfn;
+ unsigned short cmd;
+
+ if (np == NULL)
+ return;
+
+ /* This must run before we initialize the PICs since the second
+ * ohare hosts a PIC that will be accessed there.
+ */
+ if (pci_device_from_OF_node(np, &bus, &devfn) == 0) {
+ struct pci_controller* hose =
+ pci_find_hose_for_OF_device(np);
+ if (!hose) {
+ printk(KERN_ERR "Can't find PCI hose for OHare2 !\n");
+ return;
+ }
+ early_read_config_word(hose, bus, devfn, PCI_COMMAND, &cmd);
+ cmd |= PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER;
+ cmd &= ~PCI_COMMAND_IO;
+ early_write_config_word(hose, bus, devfn, PCI_COMMAND, cmd);
+ }
+ has_second_ohare = 1;
+}
+
/*
* Some Apple desktop machines have a NEC PD720100A USB2 controller
* on the motherboard. Open Firmware, on these, will disable the
" EHCI, fixing up...\n");
data &= ~1UL;
early_write_config_dword(hose, bus, devfn, 0xe4, data);
- early_write_config_byte(hose, bus,
- devfn | 2, PCI_INTERRUPT_LINE,
- nec->intrs[0].line);
}
}
}
return 0;
}
-static void __init pcibios_fixup_OF_interrupts(void)
+void __init pmac_pcibios_fixup(void)
{
struct pci_dev* dev = NULL;
- /*
- * Open Firmware often doesn't initialize the
- * PCI_INTERRUPT_LINE config register properly, so we
- * should find the device node and apply the interrupt
- * obtained from the OF device-tree
- */
for_each_pci_dev(dev) {
- struct device_node *node;
- node = pci_device_to_OF_node(dev);
- /* this is the node, see if it has interrupts */
- if (node && node->n_intrs > 0)
- dev->irq = node->intrs[0].line;
- pci_write_config_byte(dev, PCI_INTERRUPT_LINE, dev->irq);
+ /* Read interrupt from the device-tree */
+ pci_read_irq_line(dev);
+
+		/* Fixup interrupt for the modem/ethernet combo controller
+		 * on machines with a second ohare chip.
+ * The number in the device tree (27) is bogus (correct for
+ * the ethernet-only board but not the combo ethernet/modem
+ * board). The real interrupt is 28 on the second controller
+ * -> 28+32 = 60.
+ */
+ if (has_second_ohare &&
+ dev->vendor == PCI_VENDOR_ID_DEC &&
+ dev->device == PCI_DEVICE_ID_DEC_TULIP_PLUS)
+ dev->irq = irq_create_mapping(NULL, 60, 0);
}
}
-void __init pmac_pcibios_fixup(void)
-{
- /* Fixup interrupts according to OF tree */
- pcibios_fixup_OF_interrupts();
-}
-
#ifdef CONFIG_PPC64
static void __init pmac_fixup_phb_resources(void)
{
#else /* CONFIG_PPC64 */
init_p2pbridge();
+ init_second_ohare();
fixup_nec_usb2();
/* We are still having some issues with the Xserve G4, enabling
static int macio_do_gpio_irq_enable(struct pmf_function *func)
{
- if (func->node->n_intrs < 1)
+ unsigned int irq = irq_of_parse_and_map(func->node, 0);
+ if (irq == NO_IRQ)
return -EINVAL;
-
- return request_irq(func->node->intrs[0].line, macio_gpio_irq, 0,
- func->node->name, func);
+ return request_irq(irq, macio_gpio_irq, 0, func->node->name, func);
}
static int macio_do_gpio_irq_disable(struct pmf_function *func)
{
- if (func->node->n_intrs < 1)
+ unsigned int irq = irq_of_parse_and_map(func->node, 0);
+ if (irq == NO_IRQ)
return -EINVAL;
-
- free_irq(func->node->intrs[0].line, func);
+ free_irq(irq, func);
return 0;
}
static DEFINE_SPINLOCK(pmac_pic_lock);
-#define GATWICK_IRQ_POOL_SIZE 10
-static struct interrupt_info gatwick_int_pool[GATWICK_IRQ_POOL_SIZE];
-
#define NR_MASK_WORDS ((NR_IRQS + 31) / 32)
static unsigned long ppc_lost_interrupts[NR_MASK_WORDS];
+static unsigned long ppc_cached_irq_mask[NR_MASK_WORDS];
+static int pmac_irq_cascade = -1;
+static struct irq_host *pmac_pic_host;
-/*
- * Mark an irq as "lost". This is only used on the pmac
- * since it can lose interrupts (see pmac_set_irq_mask).
- * -- Cort
- */
-void __set_lost(unsigned long irq_nr, int nokick)
+static void __pmac_retrigger(unsigned int irq_nr)
{
- if (!test_and_set_bit(irq_nr, ppc_lost_interrupts)) {
+ if (irq_nr >= max_real_irqs && pmac_irq_cascade > 0) {
+ __set_bit(irq_nr, ppc_lost_interrupts);
+ irq_nr = pmac_irq_cascade;
+ mb();
+ }
+ if (!__test_and_set_bit(irq_nr, ppc_lost_interrupts)) {
atomic_inc(&ppc_n_lost_interrupts);
- if (!nokick)
- set_dec(1);
+ set_dec(1);
}
}
-static void pmac_mask_and_ack_irq(unsigned int irq_nr)
+static void pmac_mask_and_ack_irq(unsigned int virq)
{
- unsigned long bit = 1UL << (irq_nr & 0x1f);
- int i = irq_nr >> 5;
+ unsigned int src = irq_map[virq].hwirq;
+ unsigned long bit = 1UL << (virq & 0x1f);
+ int i = virq >> 5;
unsigned long flags;
- if ((unsigned)irq_nr >= max_irqs)
- return;
-
- clear_bit(irq_nr, ppc_cached_irq_mask);
- if (test_and_clear_bit(irq_nr, ppc_lost_interrupts))
- atomic_dec(&ppc_n_lost_interrupts);
spin_lock_irqsave(&pmac_pic_lock, flags);
+ __clear_bit(src, ppc_cached_irq_mask);
+ if (__test_and_clear_bit(src, ppc_lost_interrupts))
+ atomic_dec(&ppc_n_lost_interrupts);
out_le32(&pmac_irq_hw[i]->enable, ppc_cached_irq_mask[i]);
out_le32(&pmac_irq_hw[i]->ack, bit);
do {
spin_unlock_irqrestore(&pmac_pic_lock, flags);
}
-static void pmac_set_irq_mask(unsigned int irq_nr, int nokicklost)
+static void pmac_ack_irq(unsigned int virq)
+{
+ unsigned int src = irq_map[virq].hwirq;
+ unsigned long bit = 1UL << (src & 0x1f);
+ int i = src >> 5;
+ unsigned long flags;
+
+ spin_lock_irqsave(&pmac_pic_lock, flags);
+ if (__test_and_clear_bit(src, ppc_lost_interrupts))
+ atomic_dec(&ppc_n_lost_interrupts);
+ out_le32(&pmac_irq_hw[i]->ack, bit);
+ (void)in_le32(&pmac_irq_hw[i]->ack);
+ spin_unlock_irqrestore(&pmac_pic_lock, flags);
+}
+
+static void __pmac_set_irq_mask(unsigned int irq_nr, int nokicklost)
{
unsigned long bit = 1UL << (irq_nr & 0x1f);
int i = irq_nr >> 5;
- unsigned long flags;
if ((unsigned)irq_nr >= max_irqs)
return;
- spin_lock_irqsave(&pmac_pic_lock, flags);
/* enable unmasked interrupts */
out_le32(&pmac_irq_hw[i]->enable, ppc_cached_irq_mask[i]);
* the bit in the flag register or request another interrupt.
*/
if (bit & ppc_cached_irq_mask[i] & in_le32(&pmac_irq_hw[i]->level))
- __set_lost((ulong)irq_nr, nokicklost);
- spin_unlock_irqrestore(&pmac_pic_lock, flags);
+ __pmac_retrigger(irq_nr);
}
/* When an irq gets requested for the first client, if it's an
* edge interrupt, we clear any previous one on the controller
*/
-static unsigned int pmac_startup_irq(unsigned int irq_nr)
+static unsigned int pmac_startup_irq(unsigned int virq)
{
- unsigned long bit = 1UL << (irq_nr & 0x1f);
- int i = irq_nr >> 5;
+ unsigned long flags;
+ unsigned int src = irq_map[virq].hwirq;
+ unsigned long bit = 1UL << (src & 0x1f);
+ int i = src >> 5;
- if ((irq_desc[irq_nr].status & IRQ_LEVEL) == 0)
+ spin_lock_irqsave(&pmac_pic_lock, flags);
+ if ((irq_desc[virq].status & IRQ_LEVEL) == 0)
out_le32(&pmac_irq_hw[i]->ack, bit);
- set_bit(irq_nr, ppc_cached_irq_mask);
- pmac_set_irq_mask(irq_nr, 0);
+ __set_bit(src, ppc_cached_irq_mask);
+ __pmac_set_irq_mask(src, 0);
+ spin_unlock_irqrestore(&pmac_pic_lock, flags);
return 0;
}
-static void pmac_mask_irq(unsigned int irq_nr)
+static void pmac_mask_irq(unsigned int virq)
{
- clear_bit(irq_nr, ppc_cached_irq_mask);
- pmac_set_irq_mask(irq_nr, 0);
- mb();
+ unsigned long flags;
+ unsigned int src = irq_map[virq].hwirq;
+
+ spin_lock_irqsave(&pmac_pic_lock, flags);
+ __clear_bit(src, ppc_cached_irq_mask);
+ __pmac_set_irq_mask(src, 0);
+ spin_unlock_irqrestore(&pmac_pic_lock, flags);
}
-static void pmac_unmask_irq(unsigned int irq_nr)
+static void pmac_unmask_irq(unsigned int virq)
{
- set_bit(irq_nr, ppc_cached_irq_mask);
- pmac_set_irq_mask(irq_nr, 0);
+ unsigned long flags;
+ unsigned int src = irq_map[virq].hwirq;
+
+ spin_lock_irqsave(&pmac_pic_lock, flags);
+ __set_bit(src, ppc_cached_irq_mask);
+ __pmac_set_irq_mask(src, 0);
+ spin_unlock_irqrestore(&pmac_pic_lock, flags);
}
-static void pmac_end_irq(unsigned int irq_nr)
+static int pmac_retrigger(unsigned int virq)
{
- if (!(irq_desc[irq_nr].status & (IRQ_DISABLED|IRQ_INPROGRESS))
- && irq_desc[irq_nr].action) {
- set_bit(irq_nr, ppc_cached_irq_mask);
- pmac_set_irq_mask(irq_nr, 1);
- }
-}
+ unsigned long flags;
+ spin_lock_irqsave(&pmac_pic_lock, flags);
+ __pmac_retrigger(irq_map[virq].hwirq);
+ spin_unlock_irqrestore(&pmac_pic_lock, flags);
+ return 1;
+}
-struct hw_interrupt_type pmac_pic = {
+static struct irq_chip pmac_pic = {
.typename = " PMAC-PIC ",
.startup = pmac_startup_irq,
- .enable = pmac_unmask_irq,
- .disable = pmac_mask_irq,
- .ack = pmac_mask_and_ack_irq,
- .end = pmac_end_irq,
-};
-
-struct hw_interrupt_type gatwick_pic = {
- .typename = " GATWICK ",
- .startup = pmac_startup_irq,
- .enable = pmac_unmask_irq,
- .disable = pmac_mask_irq,
- .ack = pmac_mask_and_ack_irq,
- .end = pmac_end_irq,
+ .mask = pmac_mask_irq,
+ .ack = pmac_ack_irq,
+ .mask_ack = pmac_mask_and_ack_irq,
+ .unmask = pmac_unmask_irq,
+ .retrigger = pmac_retrigger,
};
static irqreturn_t gatwick_action(int cpl, void *dev_id, struct pt_regs *regs)
{
+ unsigned long flags;
int irq, bits;
+ int rc = IRQ_NONE;
+ spin_lock_irqsave(&pmac_pic_lock, flags);
for (irq = max_irqs; (irq -= 32) >= max_real_irqs; ) {
int i = irq >> 5;
bits = in_le32(&pmac_irq_hw[i]->event) | ppc_lost_interrupts[i];
if (bits == 0)
continue;
irq += __ilog2(bits);
+ spin_unlock_irqrestore(&pmac_pic_lock, flags);
__do_IRQ(irq, regs);
- return IRQ_HANDLED;
+ spin_lock_irqsave(&pmac_pic_lock, flags);
+ rc = IRQ_HANDLED;
}
- printk("gatwick irq not from gatwick pic\n");
- return IRQ_NONE;
+ spin_unlock_irqrestore(&pmac_pic_lock, flags);
+ return rc;
}
-static int pmac_get_irq(struct pt_regs *regs)
+static unsigned int pmac_pic_get_irq(struct pt_regs *regs)
{
int irq;
unsigned long bits = 0;
+ unsigned long flags;
#ifdef CONFIG_SMP
void psurge_smp_message_recv(struct pt_regs *);
/* IPI's are a hack on the powersurge -- Cort */
if ( smp_processor_id() != 0 ) {
psurge_smp_message_recv(regs);
- return -2; /* ignore, already handled */
+ return NO_IRQ_IGNORE; /* ignore, already handled */
}
#endif /* CONFIG_SMP */
+ spin_lock_irqsave(&pmac_pic_lock, flags);
for (irq = max_real_irqs; (irq -= 32) >= 0; ) {
int i = irq >> 5;
bits = in_le32(&pmac_irq_hw[i]->event) | ppc_lost_interrupts[i];
irq += __ilog2(bits);
break;
}
-
- return irq;
-}
-
-/* This routine will fix some missing interrupt values in the device tree
- * on the gatwick mac-io controller used by some PowerBooks
- *
- * Walking of OF nodes could use a bit more fixing up here, but it's not
- * very important as this is all boot time code on static portions of the
- * device-tree.
- *
- * However, the modifications done to "intrs" will have to be removed and
- * replaced with proper updates of the "interrupts" properties or
- * AAPL,interrupts, yet to be decided, once the dynamic parsing is there.
- */
-static void __init pmac_fix_gatwick_interrupts(struct device_node *gw,
- int irq_base)
-{
- struct device_node *node;
- int count;
-
- memset(gatwick_int_pool, 0, sizeof(gatwick_int_pool));
- count = 0;
- for (node = NULL; (node = of_get_next_child(gw, node)) != NULL;) {
- /* Fix SCC */
- if ((strcasecmp(node->name, "escc") == 0) && node->child) {
- if (node->child->n_intrs < 3) {
- node->child->intrs = &gatwick_int_pool[count];
- count += 3;
- }
- node->child->n_intrs = 3;
- node->child->intrs[0].line = 15+irq_base;
- node->child->intrs[1].line = 4+irq_base;
- node->child->intrs[2].line = 5+irq_base;
- printk(KERN_INFO "irq: fixed SCC on gatwick"
- " (%d,%d,%d)\n",
- node->child->intrs[0].line,
- node->child->intrs[1].line,
- node->child->intrs[2].line);
- }
- /* Fix media-bay & left SWIM */
- if (strcasecmp(node->name, "media-bay") == 0) {
- struct device_node* ya_node;
-
- if (node->n_intrs == 0)
- node->intrs = &gatwick_int_pool[count++];
- node->n_intrs = 1;
- node->intrs[0].line = 29+irq_base;
- printk(KERN_INFO "irq: fixed media-bay on gatwick"
- " (%d)\n", node->intrs[0].line);
-
- ya_node = node->child;
- while(ya_node) {
- if (strcasecmp(ya_node->name, "floppy") == 0) {
- if (ya_node->n_intrs < 2) {
- ya_node->intrs = &gatwick_int_pool[count];
- count += 2;
- }
- ya_node->n_intrs = 2;
- ya_node->intrs[0].line = 19+irq_base;
- ya_node->intrs[1].line = 1+irq_base;
- printk(KERN_INFO "irq: fixed floppy on second controller (%d,%d)\n",
- ya_node->intrs[0].line, ya_node->intrs[1].line);
- }
- if (strcasecmp(ya_node->name, "ata4") == 0) {
- if (ya_node->n_intrs < 2) {
- ya_node->intrs = &gatwick_int_pool[count];
- count += 2;
- }
- ya_node->n_intrs = 2;
- ya_node->intrs[0].line = 14+irq_base;
- ya_node->intrs[1].line = 3+irq_base;
- printk(KERN_INFO "irq: fixed ide on second controller (%d,%d)\n",
- ya_node->intrs[0].line, ya_node->intrs[1].line);
- }
- ya_node = ya_node->sibling;
- }
- }
- }
- if (count > 10) {
- printk("WARNING !! Gatwick interrupt pool overflow\n");
- printk(" GATWICK_IRQ_POOL_SIZE = %d\n", GATWICK_IRQ_POOL_SIZE);
- printk(" requested = %d\n", count);
- }
-}
-
-/*
- * The PowerBook 3400/2400/3500 can have a combo ethernet/modem
- * card which includes an ohare chip that acts as a second interrupt
- * controller. If we find this second ohare, set it up and fix the
- * interrupt value in the device tree for the ethernet chip.
- */
-static void __init enable_second_ohare(struct device_node *np)
-{
- unsigned char bus, devfn;
- unsigned short cmd;
- struct device_node *ether;
-
- /* This code doesn't strictly belong here, it could be part of
- * either the PCI initialisation or the feature code. It's kept
- * here for historical reasons.
- */
- if (pci_device_from_OF_node(np, &bus, &devfn) == 0) {
- struct pci_controller* hose =
- pci_find_hose_for_OF_device(np);
- if (!hose) {
- printk(KERN_ERR "Can't find PCI hose for OHare2 !\n");
- return;
- }
- early_read_config_word(hose, bus, devfn, PCI_COMMAND, &cmd);
- cmd |= PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER;
- cmd &= ~PCI_COMMAND_IO;
- early_write_config_word(hose, bus, devfn, PCI_COMMAND, cmd);
- }
-
- /* Fix interrupt for the modem/ethernet combo controller. The number
- * in the device tree (27) is bogus (correct for the ethernet-only
- * board but not the combo ethernet/modem board).
- * The real interrupt is 28 on the second controller -> 28+32 = 60.
- */
- ether = of_find_node_by_name(NULL, "pci1011,14");
- if (ether && ether->n_intrs > 0) {
- ether->intrs[0].line = 60;
- printk(KERN_INFO "irq: Fixed ethernet IRQ to %d\n",
- ether->intrs[0].line);
- }
- of_node_put(ether);
+ spin_unlock_irqrestore(&pmac_pic_lock, flags);
+ if (unlikely(irq < 0))
+ return NO_IRQ;
+ return irq_linear_revmap(pmac_pic_host, irq);
}
#ifdef CONFIG_XMON
static struct irqaction gatwick_cascade_action = {
.handler = gatwick_action,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.mask = CPU_MASK_NONE,
.name = "cascade",
};
+static int pmac_pic_host_match(struct irq_host *h, struct device_node *node)
+{
+ /* We match all, we don't always have a node anyway */
+ return 1;
+}
+
+static int pmac_pic_host_map(struct irq_host *h, unsigned int virq,
+ irq_hw_number_t hw, unsigned int flags)
+{
+ struct irq_desc *desc = get_irq_desc(virq);
+ int level;
+
+ if (hw >= max_irqs)
+ return -EINVAL;
+
+ /* Mark level interrupts, set delayed disable for edge ones and set
+ * handlers
+ */
+ level = !!(level_mask[hw >> 5] & (1UL << (hw & 0x1f)));
+ if (level)
+ desc->status |= IRQ_LEVEL;
+ else
+ desc->status |= IRQ_DELAYED_DISABLE;
+ set_irq_chip_and_handler(virq, &pmac_pic, level ?
+ handle_level_irq : handle_edge_irq);
+ return 0;
+}
+
+static int pmac_pic_host_xlate(struct irq_host *h, struct device_node *ct,
+ u32 *intspec, unsigned int intsize,
+ irq_hw_number_t *out_hwirq,
+ unsigned int *out_flags)
+
+{
+ *out_hwirq = *intspec;
+ return 0;
+}
+
+static struct irq_host_ops pmac_pic_host_ops = {
+ .match = pmac_pic_host_match,
+ .map = pmac_pic_host_map,
+ .xlate = pmac_pic_host_xlate,
+};
+
static void __init pmac_pic_probe_oldstyle(void)
{
int i;
- int irq_cascade = -1;
struct device_node *master = NULL;
struct device_node *slave = NULL;
u8 __iomem *addr;
struct resource r;
/* Set our get_irq function */
- ppc_md.get_irq = pmac_get_irq;
+ ppc_md.get_irq = pmac_pic_get_irq;
/*
* Find the interrupt controller type & node
if (slave) {
max_irqs = 64;
level_mask[1] = OHARE_LEVEL_MASK;
- enable_second_ohare(slave);
}
} else if ((master = of_find_node_by_name(NULL, "mac-io")) != NULL) {
max_irqs = max_real_irqs = 64;
max_irqs = 128;
level_mask[2] = HEATHROW_LEVEL_MASK;
level_mask[3] = 0;
- pmac_fix_gatwick_interrupts(slave, max_real_irqs);
}
}
BUG_ON(master == NULL);
- /* Set the handler for the main PIC */
- for ( i = 0; i < max_real_irqs ; i++ )
- irq_desc[i].chip = &pmac_pic;
+ /*
+ * Allocate an irq host
+ */
+ pmac_pic_host = irq_alloc_host(IRQ_HOST_MAP_LINEAR, max_irqs,
+ &pmac_pic_host_ops,
+ max_irqs);
+ BUG_ON(pmac_pic_host == NULL);
+ irq_set_default_host(pmac_pic_host);
/* Get addresses of first controller if we have a node for it */
BUG_ON(of_address_to_resource(master, 0, &r));
pmac_irq_hw[i++] =
(volatile struct pmac_irq_hw __iomem *)
(addr + 0x10);
- irq_cascade = slave->intrs[0].line;
+ pmac_irq_cascade = irq_of_parse_and_map(slave, 0);
printk(KERN_INFO "irq: Found slave Apple PIC %s for %d irqs"
" cascade: %d\n", slave->full_name,
- max_irqs - max_real_irqs, irq_cascade);
+ max_irqs - max_real_irqs, pmac_irq_cascade);
}
of_node_put(slave);
- /* disable all interrupts in all controllers */
+ /* Disable all interrupts in all controllers */
for (i = 0; i * 32 < max_irqs; ++i)
out_le32(&pmac_irq_hw[i]->enable, 0);
- /* mark level interrupts */
- for (i = 0; i < max_irqs; i++)
- if (level_mask[i >> 5] & (1UL << (i & 0x1f)))
- irq_desc[i].status = IRQ_LEVEL;
+ /* Hookup cascade irq */
+ if (slave && pmac_irq_cascade != NO_IRQ)
+ setup_irq(pmac_irq_cascade, &gatwick_cascade_action);
- /* Setup handlers for secondary controller and hook cascade irq*/
- if (slave) {
- for ( i = max_real_irqs ; i < max_irqs ; i++ )
- irq_desc[i].chip = &gatwick_pic;
- setup_irq(irq_cascade, &gatwick_cascade_action);
- }
printk(KERN_INFO "irq: System has %d possible interrupts\n", max_irqs);
#ifdef CONFIG_XMON
- setup_irq(20, &xmon_action);
+ setup_irq(irq_create_mapping(NULL, 20, 0), &xmon_action);
#endif
}
#endif /* CONFIG_PPC32 */
-static int pmac_u3_cascade(struct pt_regs *regs, void *data)
+static void pmac_u3_cascade(unsigned int irq, struct irq_desc *desc,
+ struct pt_regs *regs)
{
- return mpic_get_one_irq((struct mpic *)data, regs);
+ struct mpic *mpic = desc->handler_data;
+
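+	/* Pull one interrupt from the cascaded MPIC, dispatch it through the
+	 * generic IRQ code, then EOI the cascade interrupt on the parent PIC.
+	 */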
+ unsigned int cascade_irq = mpic_get_one_irq(mpic, regs);
+ if (cascade_irq != NO_IRQ)
+ generic_handle_irq(cascade_irq, regs);
+ desc->chip->eoi(irq);
}
static void __init pmac_pic_setup_mpic_nmi(struct mpic *mpic)
int nmi_irq;
pswitch = of_find_node_by_name(NULL, "programmer-switch");
- if (pswitch && pswitch->n_intrs) {
- nmi_irq = pswitch->intrs[0].line;
- mpic_irq_set_priority(nmi_irq, 9);
- setup_irq(nmi_irq, &xmon_action);
+ if (pswitch) {
+ nmi_irq = irq_of_parse_and_map(pswitch, 0);
+ if (nmi_irq != NO_IRQ) {
+ mpic_irq_set_priority(nmi_irq, 9);
+ setup_irq(nmi_irq, &xmon_action);
+ }
+ of_node_put(pswitch);
}
- of_node_put(pswitch);
#endif /* defined(CONFIG_XMON) && defined(CONFIG_PPC32) */
}
static struct mpic * __init pmac_setup_one_mpic(struct device_node *np,
int master)
{
- unsigned char senses[128];
- int offset = master ? 0 : 128;
- int count = master ? 128 : 124;
const char *name = master ? " MPIC 1 " : " MPIC 2 ";
struct resource r;
struct mpic *mpic;
pmac_call_feature(PMAC_FTR_ENABLE_MPIC, np, 0, 0);
- prom_get_irq_senses(senses, offset, offset + count);
-
flags |= MPIC_WANTS_RESET;
if (get_property(np, "big-endian", NULL))
flags |= MPIC_BIG_ENDIAN;
if (master && (flags & MPIC_BIG_ENDIAN))
flags |= MPIC_BROKEN_U3;
- mpic = mpic_alloc(r.start, flags, 0, offset, count, master ? 252 : 0,
- senses, count, name);
+ mpic = mpic_alloc(np, r.start, flags, 0, 0, name);
if (mpic == NULL)
return NULL;
{
struct mpic *mpic1, *mpic2;
struct device_node *np, *master = NULL, *slave = NULL;
+ unsigned int cascade;
/* We can have up to 2 MPICs cascaded */
for (np = NULL; (np = of_find_node_by_type(np, "open-pic"))
of_node_put(master);
/* No slave, let's go out */
- if (slave == NULL || slave->n_intrs < 1)
+ if (slave == NULL)
+ return 0;
+
+ /* Get/Map slave interrupt */
+ cascade = irq_of_parse_and_map(slave, 0);
+ if (cascade == NO_IRQ) {
+ printk(KERN_ERR "Failed to map cascade IRQ\n");
return 0;
+ }
mpic2 = pmac_setup_one_mpic(slave, 0);
if (mpic2 == NULL) {
of_node_put(slave);
return 0;
}
- mpic_setup_cascade(slave->intrs[0].line, pmac_u3_cascade, mpic2);
+ set_irq_data(cascade, mpic2);
+ set_irq_chained_handler(cascade, pmac_u3_cascade);
of_node_put(slave);
return 0;
void __init pmac_pic_init(void)
{
+ unsigned int flags = 0;
+
+ /* We configure the OF parsing based on our oldworld vs. newworld
+	 * platform type and whether we were booted by BootX.
+ */
+#ifdef CONFIG_PPC32
+ if (!pmac_newworld)
+ flags |= OF_IMAP_OLDWORLD_MAC;
+ if (get_property(of_chosen, "linux,bootx", NULL) != NULL)
+ flags |= OF_IMAP_NO_PHANDLE;
+ of_irq_map_init(flags);
+#endif /* CONFIG_PPC32 */
+
/* We first try to detect Apple's new Core99 chipset, since mac-io
* is quite different on those machines and contains an IBM MPIC2.
*/
/* This used to be passed by the PMU driver but that link got
* broken with the new driver model. We use this tweak for now...
+ * We really want to do things differently though...
*/
static int pmacpic_find_viaint(void)
{
np = of_find_node_by_name(NULL, "via-pmu");
if (np == NULL)
goto not_found;
- viaint = np->intrs[0].line;
+	viaint = irq_of_parse_and_map(np, 0);
#endif /* CONFIG_ADB_PMU */
not_found:
struct rtc_time;
+extern int pmac_newworld;
+
extern long pmac_time_init(void);
extern unsigned long pmac_get_boot_time(void);
extern void pmac_get_rtc_time(struct rtc_time *);
udbg_adb_init(!!strstr(cmd_line, "btextdbg"));
#ifdef CONFIG_PPC64
- /* Setup interrupt mapping options */
- ppc64_interrupt_controller = IC_OPEN_PIC;
-
iommu_init_early_dart();
#endif
}
static struct irqaction psurge_irqaction = {
.handler = psurge_primary_intr,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.mask = CPU_MASK_NONE,
.name = "primary IPI",
};
/* #define DEBUG */
-static void request_ras_irqs(struct device_node *np, char *propname,
+
+static void request_ras_irqs(struct device_node *np,
irqreturn_t (*handler)(int, void *, struct pt_regs *),
const char *name)
{
- unsigned int *ireg, len, i;
- int virq, n_intr;
-
- ireg = (unsigned int *)get_property(np, propname, &len);
- if (ireg == NULL)
- return;
- n_intr = prom_n_intr_cells(np);
- len /= n_intr * sizeof(*ireg);
-
- for (i = 0; i < len; i++) {
- virq = virt_irq_create_mapping(*ireg);
- if (virq == NO_IRQ) {
- printk(KERN_ERR "Unable to allocate interrupt "
- "number for %s\n", np->full_name);
- return;
+ int i, index, count = 0;
+ struct of_irq oirq;
+ u32 *opicprop;
+ unsigned int opicplen;
+ unsigned int virqs[16];
+
+ /* Check for obsolete "open-pic-interrupt" property. If present, then
+ * map those interrupts using the default interrupt host and default
+ * trigger
+ */
+ opicprop = (u32 *)get_property(np, "open-pic-interrupt", &opicplen);
+ if (opicprop) {
+ opicplen /= sizeof(u32);
+ for (i = 0; i < opicplen; i++) {
+ if (count > 15)
+ break;
+ virqs[count] = irq_create_mapping(NULL, *(opicprop++),
+ IRQ_TYPE_NONE);
+ if (virqs[count] == NO_IRQ)
+ printk(KERN_ERR "Unable to allocate interrupt "
+ "number for %s\n", np->full_name);
+ else
+ count++;
+
}
- if (request_irq(irq_offset_up(virq), handler, 0, name, NULL)) {
+ }
+ /* Else use normal interrupt tree parsing */
+ else {
+ /* First try to do a proper OF tree parsing */
+ for (index = 0; of_irq_map_one(np, index, &oirq) == 0;
+ index++) {
+ if (count > 15)
+ break;
+ virqs[count] = irq_create_of_mapping(oirq.controller,
+ oirq.specifier,
+ oirq.size);
+ if (virqs[count] == NO_IRQ)
+ printk(KERN_ERR "Unable to allocate interrupt "
+ "number for %s\n", np->full_name);
+ else
+ count++;
+ }
+ }
+
+ /* Now request them */
+ for (i = 0; i < count; i++) {
+ if (request_irq(virqs[i], handler, 0, name, NULL)) {
printk(KERN_ERR "Unable to request interrupt %d for "
- "%s\n", irq_offset_up(virq), np->full_name);
+ "%s\n", virqs[i], np->full_name);
return;
}
- ireg += n_intr;
}
}
/* Internal Errors */
np = of_find_node_by_path("/event-sources/internal-errors");
if (np != NULL) {
- request_ras_irqs(np, "open-pic-interrupt", ras_error_interrupt,
- "RAS_ERROR");
- request_ras_irqs(np, "interrupts", ras_error_interrupt,
- "RAS_ERROR");
+ request_ras_irqs(np, ras_error_interrupt, "RAS_ERROR");
of_node_put(np);
}
/* EPOW Events */
np = of_find_node_by_path("/event-sources/epow-events");
if (np != NULL) {
- request_ras_irqs(np, "open-pic-interrupt", ras_epow_interrupt,
- "RAS_EPOW");
- request_ras_irqs(np, "interrupts", ras_epow_interrupt,
- "RAS_EPOW");
+ request_ras_irqs(np, ras_epow_interrupt, "RAS_EPOW");
of_node_put(np);
}
status = rtas_call(ras_check_exception_token, 6, 1, NULL,
RAS_VECTOR_OFFSET,
- virt_irq_to_real(irq_offset_down(irq)),
+ irq_map[irq].hwirq,
RTAS_EPOW_WARNING | RTAS_POWERMGM_EVENTS,
critical, __pa(&ras_log_buf),
rtas_get_error_log_max());
status = rtas_call(ras_check_exception_token, 6, 1, NULL,
RAS_VECTOR_OFFSET,
- virt_irq_to_real(irq_offset_down(irq)),
+ irq_map[irq].hwirq,
RTAS_INTERNAL_ERROR, 1 /*Time Critical */,
__pa(&ras_log_buf),
rtas_get_error_log_max());
#define DBG(fmt...)
#endif
+/* move these declarations to a header file */
+extern void smp_init_pseries_mpic(void);
+extern void smp_init_pseries_xics(void);
extern void find_udbg_vterm(void);
int fwnmi_active; /* TRUE if an FWNMI handler is present */
static void pseries_shared_idle_sleep(void);
static void pseries_dedicated_idle_sleep(void);
-struct mpic *pSeries_mpic;
+static struct device_node *pSeries_mpic_node;
static void pSeries_show_cpuinfo(struct seq_file *m)
{
fwnmi_active = 1;
}
-static void __init pSeries_init_mpic(void)
+void pseries_8259_cascade(unsigned int irq, struct irq_desc *desc,
+ struct pt_regs *regs)
{
- unsigned int *addrp;
- struct device_node *np;
- unsigned long intack = 0;
-
- /* All ISUs are setup, complete initialization */
- mpic_init(pSeries_mpic);
-
- /* Check what kind of cascade ACK we have */
- if (!(np = of_find_node_by_name(NULL, "pci"))
- || !(addrp = (unsigned int *)
- get_property(np, "8259-interrupt-acknowledge", NULL)))
- printk(KERN_ERR "Cannot find pci to get ack address\n");
- else
- intack = addrp[prom_n_addr_cells(np)-1];
- of_node_put(np);
-
- /* Setup the legacy interrupts & controller */
- i8259_init(intack, 0);
-
- /* Hook cascade to mpic */
- mpic_setup_cascade(NUM_ISA_INTERRUPTS, i8259_irq_cascade, NULL);
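+	/* Fetch the active legacy interrupt from the 8259, dispatch it, then
+	 * EOI the cascade interrupt on the MPIC.
+	 */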
+ unsigned int cascade_irq = i8259_irq(regs);
+ if (cascade_irq != NO_IRQ)
+ generic_handle_irq(cascade_irq, regs);
+ desc->chip->eoi(irq);
}
-static void __init pSeries_setup_mpic(void)
+static void __init pseries_mpic_init_IRQ(void)
{
+ struct device_node *np, *old, *cascade = NULL;
+ unsigned int *addrp;
+ unsigned long intack = 0;
unsigned int *opprop;
unsigned long openpic_addr = 0;
- unsigned char senses[NR_IRQS - NUM_ISA_INTERRUPTS];
- struct device_node *root;
- int irq_count;
+ unsigned int cascade_irq;
+ int naddr, n, i, opplen;
+ struct mpic *mpic;
- /* Find the Open PIC if present */
- root = of_find_node_by_path("/");
- opprop = (unsigned int *) get_property(root, "platform-open-pic", NULL);
+ np = of_find_node_by_path("/");
+ naddr = prom_n_addr_cells(np);
+ opprop = (unsigned int *) get_property(np, "platform-open-pic", &opplen);
if (opprop != 0) {
- int n = prom_n_addr_cells(root);
-
- for (openpic_addr = 0; n > 0; --n)
- openpic_addr = (openpic_addr << 32) + *opprop++;
+ openpic_addr = of_read_number(opprop, naddr);
printk(KERN_DEBUG "OpenPIC addr: %lx\n", openpic_addr);
}
- of_node_put(root);
+ of_node_put(np);
BUG_ON(openpic_addr == 0);
- /* Get the sense values from OF */
- prom_get_irq_senses(senses, NUM_ISA_INTERRUPTS, NR_IRQS);
-
/* Setup the openpic driver */
- irq_count = NR_IRQS - NUM_ISA_INTERRUPTS - 4; /* leave room for IPIs */
- pSeries_mpic = mpic_alloc(openpic_addr, MPIC_PRIMARY,
- 16, 16, irq_count, /* isu size, irq offset, irq count */
- NR_IRQS - 4, /* ipi offset */
- senses, irq_count, /* sense & sense size */
- " MPIC ");
+ mpic = mpic_alloc(pSeries_mpic_node, openpic_addr,
+ MPIC_PRIMARY,
+ 16, 250, /* isu size, irq count */
+ " MPIC ");
+ BUG_ON(mpic == NULL);
+
+ /* Add ISUs */
+ opplen /= sizeof(u32);
+ for (n = 0, i = naddr; i < opplen; i += naddr, n++) {
+ unsigned long isuaddr = of_read_number(opprop + i, naddr);
+ mpic_assign_isu(mpic, n, isuaddr);
+ }
+
+ /* All ISUs are setup, complete initialization */
+ mpic_init(mpic);
+
+ /* Look for cascade */
+ for_each_node_by_type(np, "interrupt-controller")
+ if (device_is_compatible(np, "chrp,iic")) {
+ cascade = np;
+ break;
+ }
+ if (cascade == NULL)
+ return;
+
+ cascade_irq = irq_of_parse_and_map(cascade, 0);
+	if (cascade_irq == NO_IRQ) {
+		printk(KERN_ERR "mpic: failed to map cascade interrupt");
+ return;
+ }
+
+ /* Check ACK type */
+ for (old = of_node_get(cascade); old != NULL ; old = np) {
+ np = of_get_parent(old);
+ of_node_put(old);
+ if (np == NULL)
+ break;
+ if (strcmp(np->name, "pci") != 0)
+ continue;
+ addrp = (u32 *)get_property(np, "8259-interrupt-acknowledge",
+ NULL);
+ if (addrp == NULL)
+ continue;
+ naddr = prom_n_addr_cells(np);
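+		/* Assemble the (possibly 64-bit) intack address from the last
+		 * one or two address cells of the property.
+		 */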
+ intack = addrp[naddr-1];
+ if (naddr > 1)
+ intack |= ((unsigned long)addrp[naddr-2]) << 32;
+ }
+ if (intack)
+ printk(KERN_DEBUG "mpic: PCI 8259 intack at 0x%016lx\n",
+ intack);
+ i8259_init(cascade, intack);
+ of_node_put(cascade);
+ set_irq_chained_handler(cascade_irq, pseries_8259_cascade);
}
static void pseries_lpar_enable_pmcs(void)
get_lppaca()->pmcregs_in_use = 1;
}
-static void __init pSeries_setup_arch(void)
+#ifdef CONFIG_KEXEC
+static void pseries_kexec_cpu_down_mpic(int crash_shutdown, int secondary)
{
- /* Fixup ppc_md depending on the type of interrupt controller */
- if (ppc64_interrupt_controller == IC_OPEN_PIC) {
- ppc_md.init_IRQ = pSeries_init_mpic;
- ppc_md.get_irq = mpic_get_irq;
- /* Allocate the mpic now, so that find_and_init_phbs() can
- * fill the ISUs */
- pSeries_setup_mpic();
- } else {
- ppc_md.init_IRQ = xics_init_IRQ;
- ppc_md.get_irq = xics_get_irq;
+ mpic_teardown_this_cpu(secondary);
+}
+
+static void pseries_kexec_cpu_down_xics(int crash_shutdown, int secondary)
+{
+ /* Don't risk a hypervisor call if we're crashing */
+ if (firmware_has_feature(FW_FEATURE_SPLPAR) && !crash_shutdown) {
+ unsigned long vpa = __pa(get_lppaca());
+
+ if (unregister_vpa(hard_smp_processor_id(), vpa)) {
+ printk("VPA deregistration of cpu %u (hw_cpu_id %d) "
+ "failed\n", smp_processor_id(),
+ hard_smp_processor_id());
+ }
}
+ xics_teardown_cpu(secondary);
+}
+#endif /* CONFIG_KEXEC */
+static void __init pseries_discover_pic(void)
+{
+ struct device_node *np;
+ char *typep;
+
+ for (np = NULL; (np = of_find_node_by_name(np,
+ "interrupt-controller"));) {
+ typep = (char *)get_property(np, "compatible", NULL);
+ if (strstr(typep, "open-pic")) {
+ pSeries_mpic_node = of_node_get(np);
+ ppc_md.init_IRQ = pseries_mpic_init_IRQ;
+ ppc_md.get_irq = mpic_get_irq;
+#ifdef CONFIG_KEXEC
+ ppc_md.kexec_cpu_down = pseries_kexec_cpu_down_mpic;
+#endif
+#ifdef CONFIG_SMP
+ smp_init_pseries_mpic();
+#endif
+ return;
+ } else if (strstr(typep, "ppc-xicp")) {
+ ppc_md.init_IRQ = xics_init_IRQ;
+#ifdef CONFIG_KEXEC
+ ppc_md.kexec_cpu_down = pseries_kexec_cpu_down_xics;
+#endif
#ifdef CONFIG_SMP
- smp_init_pSeries();
+ smp_init_pseries_xics();
#endif
+ return;
+ }
+ }
+ printk(KERN_ERR "pSeries_discover_pic: failed to recognize"
+ " interrupt-controller\n");
+}
+
+static void __init pSeries_setup_arch(void)
+{
+ /* Discover PIC type and setup ppc_md accordingly */
+ pseries_discover_pic();
+
/* openpic global configuration register (64-bit format). */
/* openpic Interrupt Source Unit pointer (64-bit format). */
/* python0 facility area (mmio) (64-bit format) REAL address. */
}
arch_initcall(pSeries_init_panel);
-static void __init pSeries_discover_pic(void)
-{
- struct device_node *np;
- char *typep;
-
- /*
- * Setup interrupt mapping options that are needed for finish_device_tree
- * to properly parse the OF interrupt tree & do the virtual irq mapping
- */
- __irq_offset_value = NUM_ISA_INTERRUPTS;
- ppc64_interrupt_controller = IC_INVALID;
- for (np = NULL; (np = of_find_node_by_name(np, "interrupt-controller"));) {
- typep = (char *)get_property(np, "compatible", NULL);
- if (strstr(typep, "open-pic")) {
- ppc64_interrupt_controller = IC_OPEN_PIC;
- break;
- } else if (strstr(typep, "ppc-xicp")) {
- ppc64_interrupt_controller = IC_PPC_XIC;
- break;
- }
- }
- if (ppc64_interrupt_controller == IC_INVALID)
- printk("pSeries_discover_pic: failed to recognize"
- " interrupt-controller\n");
-
-}
-
static void pSeries_mach_cpu_die(void)
{
local_irq_disable();
idle_task_exit();
- /* Some hardware requires clearing the CPPR, while other hardware does not
- * it is safe either way
- */
- pSeriesLP_cppr_info(0, 0);
+ xics_teardown_cpu(0);
rtas_stop_self();
/* Should never get here... */
BUG();
iommu_init_early_pSeries();
- pSeries_discover_pic();
-
DBG(" <- pSeries_init_early()\n");
}
return PCI_PROBE_NORMAL;
}
-#ifdef CONFIG_KEXEC
-static void pseries_kexec_cpu_down(int crash_shutdown, int secondary)
-{
- /* Don't risk a hypervisor call if we're crashing */
- if (firmware_has_feature(FW_FEATURE_SPLPAR) && !crash_shutdown) {
- unsigned long vpa = __pa(get_lppaca());
-
- if (unregister_vpa(hard_smp_processor_id(), vpa)) {
- printk("VPA deregistration of cpu %u (hw_cpu_id %d) "
- "failed\n", smp_processor_id(),
- hard_smp_processor_id());
- }
- }
-
- if (ppc64_interrupt_controller == IC_OPEN_PIC)
- mpic_teardown_this_cpu(secondary);
- else
- xics_teardown_cpu(secondary);
-}
-#endif
-
define_machine(pseries) {
.name = "pSeries",
.probe = pSeries_probe,
.system_reset_exception = pSeries_system_reset_exception,
.machine_check_exception = pSeries_machine_check_exception,
#ifdef CONFIG_KEXEC
- .kexec_cpu_down = pseries_kexec_cpu_down,
.machine_kexec = default_machine_kexec,
.machine_kexec_prepare = default_machine_kexec_prepare,
.machine_crash_shutdown = default_machine_crash_shutdown,
#endif
/* This is called very early */
-void __init smp_init_pSeries(void)
+static void __init smp_init_pseries(void)
{
int i;
DBG(" -> smp_init_pSeries()\n");
- switch (ppc64_interrupt_controller) {
-#ifdef CONFIG_MPIC
- case IC_OPEN_PIC:
- smp_ops = &pSeries_mpic_smp_ops;
- break;
-#endif
-#ifdef CONFIG_XICS
- case IC_PPC_XIC:
- smp_ops = &pSeries_xics_smp_ops;
- break;
-#endif
- default:
- panic("Invalid interrupt controller");
- }
-
#ifdef CONFIG_HOTPLUG_CPU
smp_ops->cpu_disable = pSeries_cpu_disable;
smp_ops->cpu_die = pSeries_cpu_die;
DBG(" <- smp_init_pSeries()\n");
}
+#ifdef CONFIG_MPIC
+void __init smp_init_pseries_mpic(void)
+{
+ smp_ops = &pSeries_mpic_smp_ops;
+
+ smp_init_pseries();
+}
+#endif
+
+void __init smp_init_pseries_xics(void)
+{
+ smp_ops = &pSeries_xics_smp_ops;
+
+ smp_init_pseries();
+}
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
+
+#undef DEBUG
+
#include <linux/types.h>
#include <linux/threads.h>
#include <linux/kernel.h>
#include <linux/gfp.h>
#include <linux/radix-tree.h>
#include <linux/cpu.h>
+
#include <asm/firmware.h>
#include <asm/prom.h>
#include <asm/io.h>
#include "xics.h"
-static unsigned int xics_startup(unsigned int irq);
-static void xics_enable_irq(unsigned int irq);
-static void xics_disable_irq(unsigned int irq);
-static void xics_mask_and_ack_irq(unsigned int irq);
-static void xics_end_irq(unsigned int irq);
-static void xics_set_affinity(unsigned int irq_nr, cpumask_t cpumask);
-
-static struct hw_interrupt_type xics_pic = {
- .typename = " XICS ",
- .startup = xics_startup,
- .enable = xics_enable_irq,
- .disable = xics_disable_irq,
- .ack = xics_mask_and_ack_irq,
- .end = xics_end_irq,
- .set_affinity = xics_set_affinity
-};
-
-/* This is used to map real irq numbers to virtual */
-static struct radix_tree_root irq_map = RADIX_TREE_INIT(GFP_ATOMIC);
-
#define XICS_IPI 2
#define XICS_IRQ_SPURIOUS 0
/*
* Mark IPIs as higher priority so we can take them inside interrupts that
- * arent marked SA_INTERRUPT
+ * aren't marked IRQF_DISABLED
*/
#define IPI_PRIORITY 4
static struct xics_ipl __iomem *xics_per_cpu[NR_CPUS];
-static int xics_irq_8259_cascade = 0;
-static int xics_irq_8259_cascade_real = 0;
static unsigned int default_server = 0xFF;
static unsigned int default_distrib_server = 0;
static unsigned int interrupt_server_size = 8;
+static struct irq_host *xics_host;
+
/*
* XICS only has a single IPI, so encode the messages per CPU
*/
static int ibm_int_on;
static int ibm_int_off;
-typedef struct {
- int (*xirr_info_get)(int cpu);
- void (*xirr_info_set)(int cpu, int val);
- void (*cppr_info)(int cpu, u8 val);
- void (*qirr_info)(int cpu, u8 val);
-} xics_ops;
+/* Direct HW low level accessors */
-/* SMP */
-static int pSeries_xirr_info_get(int n_cpu)
+static inline unsigned int direct_xirr_info_get(int n_cpu)
{
return in_be32(&xics_per_cpu[n_cpu]->xirr.word);
}
-static void pSeries_xirr_info_set(int n_cpu, int value)
+static inline void direct_xirr_info_set(int n_cpu, int value)
{
out_be32(&xics_per_cpu[n_cpu]->xirr.word, value);
}
-static void pSeries_cppr_info(int n_cpu, u8 value)
+static inline void direct_cppr_info(int n_cpu, u8 value)
{
out_8(&xics_per_cpu[n_cpu]->xirr.bytes[0], value);
}
-static void pSeries_qirr_info(int n_cpu, u8 value)
+static inline void direct_qirr_info(int n_cpu, u8 value)
{
out_8(&xics_per_cpu[n_cpu]->qirr.bytes[0], value);
}
-static xics_ops pSeries_ops = {
- pSeries_xirr_info_get,
- pSeries_xirr_info_set,
- pSeries_cppr_info,
- pSeries_qirr_info
-};
-static xics_ops *ops = &pSeries_ops;
+/* LPAR low level accessors */
-/* LPAR */
-
static inline long plpar_eoi(unsigned long xirr)
{
return plpar_hcall_norets(H_EOI, xirr);
return plpar_hcall(H_XIRR, 0, 0, 0, 0, xirr_ret, &dummy, &dummy);
}
-static int pSeriesLP_xirr_info_get(int n_cpu)
+static inline unsigned int lpar_xirr_info_get(int n_cpu)
{
unsigned long lpar_rc;
unsigned long return_value;
lpar_rc = plpar_xirr(&return_value);
if (lpar_rc != H_SUCCESS)
panic(" bad return code xirr - rc = %lx \n", lpar_rc);
- return (int)return_value;
+ return (unsigned int)return_value;
}
-static void pSeriesLP_xirr_info_set(int n_cpu, int value)
+static inline void lpar_xirr_info_set(int n_cpu, int value)
{
unsigned long lpar_rc;
unsigned long val64 = value & 0xffffffff;
val64);
}
-void pSeriesLP_cppr_info(int n_cpu, u8 value)
+static inline void lpar_cppr_info(int n_cpu, u8 value)
{
unsigned long lpar_rc;
panic("bad return code cppr - rc = %lx\n", lpar_rc);
}
-static void pSeriesLP_qirr_info(int n_cpu , u8 value)
+static inline void lpar_qirr_info(int n_cpu , u8 value)
{
unsigned long lpar_rc;
panic("bad return code qirr - rc = %lx\n", lpar_rc);
}
-xics_ops pSeriesLP_ops = {
- pSeriesLP_xirr_info_get,
- pSeriesLP_xirr_info_set,
- pSeriesLP_cppr_info,
- pSeriesLP_qirr_info
-};
-
-static unsigned int xics_startup(unsigned int virq)
-{
- unsigned int irq;
-
- irq = irq_offset_down(virq);
- if (radix_tree_insert(&irq_map, virt_irq_to_real(irq),
- &virt_irq_to_real_map[irq]) == -ENOMEM)
- printk(KERN_CRIT "Out of memory creating real -> virtual"
- " IRQ mapping for irq %u (real 0x%x)\n",
- virq, virt_irq_to_real(irq));
- xics_enable_irq(virq);
- return 0; /* return value is ignored */
-}
-static unsigned int real_irq_to_virt(unsigned int real_irq)
-{
- unsigned int *ptr;
+/* High level handlers and init code */
- ptr = radix_tree_lookup(&irq_map, real_irq);
- if (ptr == NULL)
- return NO_IRQ;
- return ptr - virt_irq_to_real_map;
-}
#ifdef CONFIG_SMP
-static int get_irq_server(unsigned int irq)
+static int get_irq_server(unsigned int virq)
{
unsigned int server;
/* For the moment only implement delivery to all cpus or one cpu */
- cpumask_t cpumask = irq_desc[irq].affinity;
+ cpumask_t cpumask = irq_desc[virq].affinity;
cpumask_t tmp = CPU_MASK_NONE;
if (!distribute_irqs)
}
#else
-static int get_irq_server(unsigned int irq)
+static int get_irq_server(unsigned int virq)
{
return default_server;
}
#endif
-static void xics_enable_irq(unsigned int virq)
+
+static void xics_unmask_irq(unsigned int virq)
{
unsigned int irq;
int call_status;
unsigned int server;
- irq = virt_irq_to_real(irq_offset_down(virq));
- if (irq == XICS_IPI)
+ pr_debug("xics: unmask virq %d\n", virq);
+
+ irq = (unsigned int)irq_map[virq].hwirq;
+ pr_debug(" -> map to hwirq 0x%x\n", irq);
+ if (irq == XICS_IPI || irq == XICS_IRQ_SPURIOUS)
return;
server = get_irq_server(virq);
+
call_status = rtas_call(ibm_set_xive, 3, 1, NULL, irq, server,
DEFAULT_PRIORITY);
if (call_status != 0) {
}
}
-static void xics_disable_real_irq(unsigned int irq)
+static void xics_mask_real_irq(unsigned int irq)
{
int call_status;
unsigned int server;
}
}
-static void xics_disable_irq(unsigned int virq)
+static void xics_mask_irq(unsigned int virq)
{
unsigned int irq;
- irq = virt_irq_to_real(irq_offset_down(virq));
- xics_disable_real_irq(irq);
+ pr_debug("xics: mask virq %d\n", virq);
+
+ irq = (unsigned int)irq_map[virq].hwirq;
+ if (irq == XICS_IPI || irq == XICS_IRQ_SPURIOUS)
+ return;
+ xics_mask_real_irq(irq);
+}
+
+static unsigned int xics_startup(unsigned int virq)
+{
+ unsigned int irq;
+
+ /* force a reverse mapping of the interrupt so it gets in the cache */
+ irq = (unsigned int)irq_map[virq].hwirq;
+ irq_radix_revmap(xics_host, irq);
+
+ /* unmask it */
+ xics_unmask_irq(virq);
+ return 0;
}
-static void xics_end_irq(unsigned int irq)
+static void xics_eoi_direct(unsigned int virq)
{
int cpu = smp_processor_id();
+ unsigned int irq = (unsigned int)irq_map[virq].hwirq;
iosync();
- ops->xirr_info_set(cpu, ((0xff << 24) |
- (virt_irq_to_real(irq_offset_down(irq)))));
-
+ direct_xirr_info_set(cpu, (0xff << 24) | irq);
}
-static void xics_mask_and_ack_irq(unsigned int irq)
+
+static void xics_eoi_lpar(unsigned int virq)
{
int cpu = smp_processor_id();
+ unsigned int irq = (unsigned int)irq_map[virq].hwirq;
- if (irq < irq_offset_value()) {
- i8259_pic.ack(irq);
- iosync();
- ops->xirr_info_set(cpu, ((0xff<<24) |
- xics_irq_8259_cascade_real));
- iosync();
- }
+ iosync();
+ lpar_xirr_info_set(cpu, (0xff << 24) | irq);
}
-int xics_get_irq(struct pt_regs *regs)
+static inline unsigned int xics_remap_irq(unsigned int vec)
{
- unsigned int cpu = smp_processor_id();
- unsigned int vec;
- int irq;
+ unsigned int irq;
- vec = ops->xirr_info_get(cpu);
- /* (vec >> 24) == old priority */
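+	/* The top byte of the XIRR is the priority; keep only the low 24 bits,
+	 * which hold the interrupt source number.
+	 */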
vec &= 0x00ffffff;
- /* for sanity, this had better be < NR_IRQS - 16 */
- if (vec == xics_irq_8259_cascade_real) {
- irq = i8259_irq(regs);
- xics_end_irq(irq_offset_up(xics_irq_8259_cascade));
- } else if (vec == XICS_IRQ_SPURIOUS) {
- irq = -1;
- } else {
- irq = real_irq_to_virt(vec);
- if (irq == NO_IRQ)
- irq = real_irq_to_virt_slowpath(vec);
- if (irq == NO_IRQ) {
- printk(KERN_ERR "Interrupt %u (real) is invalid,"
- " disabling it.\n", vec);
- xics_disable_real_irq(vec);
- } else
- irq = irq_offset_up(irq);
- }
- return irq;
+ if (vec == XICS_IRQ_SPURIOUS)
+ return NO_IRQ;
+ irq = irq_radix_revmap(xics_host, vec);
+ if (likely(irq != NO_IRQ))
+ return irq;
+
+ printk(KERN_ERR "Interrupt %u (real) is invalid,"
+ " disabling it.\n", vec);
+ xics_mask_real_irq(vec);
+ return NO_IRQ;
}
-#ifdef CONFIG_SMP
+static unsigned int xics_get_irq_direct(struct pt_regs *regs)
+{
+ unsigned int cpu = smp_processor_id();
-static irqreturn_t xics_ipi_action(int irq, void *dev_id, struct pt_regs *regs)
+ return xics_remap_irq(direct_xirr_info_get(cpu));
+}
+
+static unsigned int xics_get_irq_lpar(struct pt_regs *regs)
{
- int cpu = smp_processor_id();
+ unsigned int cpu = smp_processor_id();
+
+ return xics_remap_irq(lpar_xirr_info_get(cpu));
+}
- ops->qirr_info(cpu, 0xff);
+#ifdef CONFIG_SMP
+static irqreturn_t xics_ipi_dispatch(int cpu, struct pt_regs *regs)
+{
WARN_ON(cpu_is_offline(cpu));
while (xics_ipi_message[cpu].value) {
return IRQ_HANDLED;
}
+static irqreturn_t xics_ipi_action_direct(int irq, void *dev_id, struct pt_regs *regs)
+{
+ int cpu = smp_processor_id();
+
+ direct_qirr_info(cpu, 0xff);
+
+ return xics_ipi_dispatch(cpu, regs);
+}
+
+static irqreturn_t xics_ipi_action_lpar(int irq, void *dev_id, struct pt_regs *regs)
+{
+ int cpu = smp_processor_id();
+
+ lpar_qirr_info(cpu, 0xff);
+
+ return xics_ipi_dispatch(cpu, regs);
+}
+
void xics_cause_IPI(int cpu)
{
- ops->qirr_info(cpu, IPI_PRIORITY);
+ if (firmware_has_feature(FW_FEATURE_LPAR))
+ lpar_qirr_info(cpu, IPI_PRIORITY);
+ else
+ direct_qirr_info(cpu, IPI_PRIORITY);
}
+
#endif /* CONFIG_SMP */
+static void xics_set_cpu_priority(int cpu, unsigned char cppr)
+{
+ if (firmware_has_feature(FW_FEATURE_LPAR))
+ lpar_cppr_info(cpu, cppr);
+ else
+ direct_cppr_info(cpu, cppr);
+ iosync();
+}
+
+static void xics_set_affinity(unsigned int virq, cpumask_t cpumask)
+{
+ unsigned int irq;
+ int status;
+ int xics_status[2];
+ unsigned long newmask;
+ cpumask_t tmp = CPU_MASK_NONE;
+
+ irq = (unsigned int)irq_map[virq].hwirq;
+ if (irq == XICS_IPI || irq == XICS_IRQ_SPURIOUS)
+ return;
+
+ status = rtas_call(ibm_get_xive, 1, 3, xics_status, irq);
+
+ if (status) {
+ printk(KERN_ERR "xics_set_affinity: irq=%u ibm,get-xive "
+ "returns %d\n", irq, status);
+ return;
+ }
+
+ /* For the moment only implement delivery to all cpus or one cpu */
+ if (cpus_equal(cpumask, CPU_MASK_ALL)) {
+ newmask = default_distrib_server;
+ } else {
+ cpus_and(tmp, cpu_online_map, cpumask);
+ if (cpus_empty(tmp))
+ return;
+ newmask = get_hard_smp_processor_id(first_cpu(tmp));
+ }
+
+ status = rtas_call(ibm_set_xive, 3, 1, NULL,
+ irq, newmask, xics_status[1]);
+
+ if (status) {
+ printk(KERN_ERR "xics_set_affinity: irq=%u ibm,set-xive "
+ "returns %d\n", irq, status);
+ return;
+ }
+}
+
void xics_setup_cpu(void)
{
int cpu = smp_processor_id();
- ops->cppr_info(cpu, 0xff);
- iosync();
+ xics_set_cpu_priority(cpu, 0xff);
/*
* Put the calling processor into the GIQ. This is really only
(1UL << interrupt_server_size) - 1 - default_distrib_server, 1);
}
-void xics_init_IRQ(void)
+
+static struct irq_chip xics_pic_direct = {
+ .typename = " XICS ",
+ .startup = xics_startup,
+ .mask = xics_mask_irq,
+ .unmask = xics_unmask_irq,
+ .eoi = xics_eoi_direct,
+ .set_affinity = xics_set_affinity
+};
+
+
+static struct irq_chip xics_pic_lpar = {
+ .typename = " XICS ",
+ .startup = xics_startup,
+ .mask = xics_mask_irq,
+ .unmask = xics_unmask_irq,
+ .eoi = xics_eoi_lpar,
+ .set_affinity = xics_set_affinity
+};
+
+
+static int xics_host_match(struct irq_host *h, struct device_node *node)
+{
+ /* IBM machines have interrupt parents of various funky types for things
+ * like vdevices, events, etc... The trick we use here is to match
+	 * everything except the legacy 8259, which is compatible with "chrp,iic".
+ */
+ return !device_is_compatible(node, "chrp,iic");
+}
+
+static int xics_host_map_direct(struct irq_host *h, unsigned int virq,
+ irq_hw_number_t hw, unsigned int flags)
+{
+ unsigned int sense = flags & IRQ_TYPE_SENSE_MASK;
+
+ pr_debug("xics: map_direct virq %d, hwirq 0x%lx, flags: 0x%x\n",
+ virq, hw, flags);
+
+ if (sense && sense != IRQ_TYPE_LEVEL_LOW)
+ printk(KERN_WARNING "xics: using unsupported sense 0x%x"
+ " for irq %d (h: 0x%lx)\n", flags, virq, hw);
+
+ get_irq_desc(virq)->status |= IRQ_LEVEL;
+ set_irq_chip_and_handler(virq, &xics_pic_direct, handle_fasteoi_irq);
+ return 0;
+}
+
+static int xics_host_map_lpar(struct irq_host *h, unsigned int virq,
+ irq_hw_number_t hw, unsigned int flags)
+{
+ unsigned int sense = flags & IRQ_TYPE_SENSE_MASK;
+
+ pr_debug("xics: map_lpar virq %d, hwirq 0x%lx, flags: 0x%x\n",
+ virq, hw, flags);
+
+ if (sense && sense != IRQ_TYPE_LEVEL_LOW)
+ printk(KERN_WARNING "xics: using unsupported sense 0x%x"
+ " for irq %d (h: 0x%lx)\n", flags, virq, hw);
+
+ get_irq_desc(virq)->status |= IRQ_LEVEL;
+ set_irq_chip_and_handler(virq, &xics_pic_lpar, handle_fasteoi_irq);
+ return 0;
+}
+
+static int xics_host_xlate(struct irq_host *h, struct device_node *ct,
+ u32 *intspec, unsigned int intsize,
+ irq_hw_number_t *out_hwirq, unsigned int *out_flags)
+
+{
+ /* Current xics implementation translates everything
+ * to level. It is not technically right for MSIs but this
+ * is irrelevant at this point. We might get smarter in the future
+ */
+ *out_hwirq = intspec[0];
+ *out_flags = IRQ_TYPE_LEVEL_LOW;
+
+ return 0;
+}
+
+static struct irq_host_ops xics_host_direct_ops = {
+ .match = xics_host_match,
+ .map = xics_host_map_direct,
+ .xlate = xics_host_xlate,
+};
+
+static struct irq_host_ops xics_host_lpar_ops = {
+ .match = xics_host_match,
+ .map = xics_host_map_lpar,
+ .xlate = xics_host_xlate,
+};
+
+static void __init xics_init_host(void)
+{
+ struct irq_host_ops *ops;
+
+ if (firmware_has_feature(FW_FEATURE_LPAR))
+ ops = &xics_host_lpar_ops;
+ else
+ ops = &xics_host_direct_ops;
+ xics_host = irq_alloc_host(IRQ_HOST_MAP_TREE, 0, ops,
+ XICS_IRQ_SPURIOUS);
+ BUG_ON(xics_host == NULL);
+ irq_set_default_host(xics_host);
+}
+
+static void __init xics_map_one_cpu(int hw_id, unsigned long addr,
+ unsigned long size)
{
+#ifdef CONFIG_SMP
int i;
- unsigned long intr_size = 0;
- struct device_node *np;
- uint *ireg, ilen, indx = 0;
- unsigned long intr_base = 0;
- struct xics_interrupt_node {
- unsigned long addr;
- unsigned long size;
- } intnodes[NR_CPUS];
- ppc64_boot_msg(0x20, "XICS Init");
+	/* This may look gross, but it's good enough for now; we don't quite
+	 * have a hard -> linux processor id mapping.
+ */
+ for_each_possible_cpu(i) {
+ if (!cpu_present(i))
+ continue;
+ if (hw_id == get_hard_smp_processor_id(i)) {
+ xics_per_cpu[i] = ioremap(addr, size);
+ return;
+ }
+ }
+#else
+ if (hw_id != 0)
+ return;
+ xics_per_cpu[0] = ioremap(addr, size);
+#endif /* CONFIG_SMP */
+}
- ibm_get_xive = rtas_token("ibm,get-xive");
- ibm_set_xive = rtas_token("ibm,set-xive");
- ibm_int_on = rtas_token("ibm,int-on");
- ibm_int_off = rtas_token("ibm,int-off");
+static void __init xics_init_one_node(struct device_node *np,
+ unsigned int *indx)
+{
+ unsigned int ilen;
+ u32 *ireg;
- np = of_find_node_by_type(NULL, "PowerPC-External-Interrupt-Presentation");
- if (!np)
- panic("xics_init_IRQ: can't find interrupt presentation");
+	/* This code makes the theoretically broken assumption that the interrupt
+ * server numbers are the same as the hard CPU numbers.
+ * This happens to be the case so far but we are playing with fire...
+ * should be fixed one of these days. -BenH.
+ */
+ ireg = (u32 *)get_property(np, "ibm,interrupt-server-ranges", NULL);
-nextnode:
- ireg = (uint *)get_property(np, "ibm,interrupt-server-ranges", NULL);
+	/* Does that ever happen? We'll know soon enough... but even good old
+	 * f80 does have that property.
+ */
+ WARN_ON(ireg == NULL);
if (ireg) {
/*
* set node starting index for this node
*/
- indx = *ireg;
+ *indx = *ireg;
}
-
- ireg = (uint *)get_property(np, "reg", &ilen);
+ ireg = (u32 *)get_property(np, "reg", &ilen);
if (!ireg)
panic("xics_init_IRQ: can't find interrupt reg property");
- while (ilen) {
- intnodes[indx].addr = (unsigned long)*ireg++ << 32;
- ilen -= sizeof(uint);
- intnodes[indx].addr |= *ireg++;
- ilen -= sizeof(uint);
- intnodes[indx].size = (unsigned long)*ireg++ << 32;
- ilen -= sizeof(uint);
- intnodes[indx].size |= *ireg++;
- ilen -= sizeof(uint);
- indx++;
- if (indx >= NR_CPUS) break;
+ while (ilen >= (4 * sizeof(u32))) {
+ unsigned long addr, size;
+
+ /* XXX Use proper OF parsing code here !!! */
+ addr = (unsigned long)*ireg++ << 32;
+ ilen -= sizeof(u32);
+ addr |= *ireg++;
+ ilen -= sizeof(u32);
+ size = (unsigned long)*ireg++ << 32;
+ ilen -= sizeof(u32);
+ size |= *ireg++;
+ ilen -= sizeof(u32);
+ xics_map_one_cpu(*indx, addr, size);
+ (*indx)++;
+ }
+}
+
+
+static void __init xics_setup_8259_cascade(void)
+{
+ struct device_node *np, *old, *found = NULL;
+ int cascade, naddr;
+ u32 *addrp;
+ unsigned long intack = 0;
+
+ for_each_node_by_type(np, "interrupt-controller")
+ if (device_is_compatible(np, "chrp,iic")) {
+ found = np;
+ break;
+ }
+ if (found == NULL) {
+ printk(KERN_DEBUG "xics: no ISA interrupt controller\n");
+ return;
+ }
+ cascade = irq_of_parse_and_map(found, 0);
+ if (cascade == NO_IRQ) {
+ printk(KERN_ERR "xics: failed to map cascade interrupt");
+ return;
+ }
+ pr_debug("xics: cascade mapped to irq %d\n", cascade);
+
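+	/* Walk up the tree from the 8259 node looking for a PCI host bridge
+	 * that provides an 8259-interrupt-acknowledge address.
+	 */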
+ for (old = of_node_get(found); old != NULL ; old = np) {
+ np = of_get_parent(old);
+ of_node_put(old);
+ if (np == NULL)
+ break;
+ if (strcmp(np->name, "pci") != 0)
+ continue;
+ addrp = (u32 *)get_property(np, "8259-interrupt-acknowledge", NULL);
+ if (addrp == NULL)
+ continue;
+ naddr = prom_n_addr_cells(np);
+ intack = addrp[naddr-1];
+ if (naddr > 1)
+ intack |= ((unsigned long)addrp[naddr-2]) << 32;
+ }
+ if (intack)
+ printk(KERN_DEBUG "xics: PCI 8259 intack at 0x%016lx\n", intack);
+ i8259_init(found, intack);
+ of_node_put(found);
+ set_irq_chained_handler(cascade, pseries_8259_cascade);
+}
+
+void __init xics_init_IRQ(void)
+{
+ int i;
+ struct device_node *np;
+ u32 *ireg, ilen, indx = 0;
+ int found = 0;
+
+ ppc64_boot_msg(0x20, "XICS Init");
+
+ ibm_get_xive = rtas_token("ibm,get-xive");
+ ibm_set_xive = rtas_token("ibm,set-xive");
+ ibm_int_on = rtas_token("ibm,int-on");
+ ibm_int_off = rtas_token("ibm,int-off");
+
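+	/* Under LPAR the presentation registers are accessed via hypervisor
+	 * calls, so it is enough to know a node exists; otherwise map each
+	 * node's per-cpu registers.
+	 */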
+ for_each_node_by_type(np, "PowerPC-External-Interrupt-Presentation") {
+ found = 1;
+ if (firmware_has_feature(FW_FEATURE_LPAR))
+ break;
+ xics_init_one_node(np, &indx);
}
+ if (found == 0)
+ return;
- np = of_find_node_by_type(np, "PowerPC-External-Interrupt-Presentation");
- if ((indx < NR_CPUS) && np) goto nextnode;
+ xics_init_host();
/* Find the server numbers for the boot cpu. */
for (np = of_find_node_by_type(NULL, "cpu");
np;
np = of_find_node_by_type(np, "cpu")) {
- ireg = (uint *)get_property(np, "reg", &ilen);
+ ireg = (u32 *)get_property(np, "reg", &ilen);
if (ireg && ireg[0] == get_hard_smp_processor_id(boot_cpuid)) {
- ireg = (uint *)get_property(np, "ibm,ppc-interrupt-gserver#s",
- &ilen);
+ ireg = (u32 *)get_property(np,
+ "ibm,ppc-interrupt-gserver#s",
+ &ilen);
i = ilen / sizeof(int);
if (ireg && i > 0) {
default_server = ireg[0];
- default_distrib_server = ireg[i-1]; /* take last element */
+ /* take last element */
+ default_distrib_server = ireg[i-1];
}
- ireg = (uint *)get_property(np,
+ ireg = (u32 *)get_property(np,
"ibm,interrupt-server#-size", NULL);
if (ireg)
interrupt_server_size = *ireg;
}
of_node_put(np);
- intr_base = intnodes[0].addr;
- intr_size = intnodes[0].size;
-
- np = of_find_node_by_type(NULL, "interrupt-controller");
- if (!np) {
- printk(KERN_DEBUG "xics: no ISA interrupt controller\n");
- xics_irq_8259_cascade_real = -1;
- xics_irq_8259_cascade = -1;
- } else {
- ireg = (uint *) get_property(np, "interrupts", NULL);
- if (!ireg)
- panic("xics_init_IRQ: can't find ISA interrupts property");
-
- xics_irq_8259_cascade_real = *ireg;
- xics_irq_8259_cascade
- = virt_irq_create_mapping(xics_irq_8259_cascade_real);
- i8259_init(0, 0);
- of_node_put(np);
- }
-
if (firmware_has_feature(FW_FEATURE_LPAR))
- ops = &pSeriesLP_ops;
- else {
-#ifdef CONFIG_SMP
- for_each_possible_cpu(i) {
- int hard_id;
-
- /* FIXME: Do this dynamically! --RR */
- if (!cpu_present(i))
- continue;
-
- hard_id = get_hard_smp_processor_id(i);
- xics_per_cpu[i] = ioremap(intnodes[hard_id].addr,
- intnodes[hard_id].size);
- }
-#else
- xics_per_cpu[0] = ioremap(intr_base, intr_size);
-#endif /* CONFIG_SMP */
- }
-
- for (i = irq_offset_value(); i < NR_IRQS; ++i)
- get_irq_desc(i)->chip = &xics_pic;
+ ppc_md.get_irq = xics_get_irq_lpar;
+ else
+ ppc_md.get_irq = xics_get_irq_direct;
xics_setup_cpu();
+ xics_setup_8259_cascade();
+
ppc64_boot_msg(0x21, "XICS Done");
}
-/*
- * We cant do this in init_IRQ because we need the memory subsystem up for
- * request_irq()
- */
-static int __init xics_setup_i8259(void)
-{
- if (ppc64_interrupt_controller == IC_PPC_XIC &&
- xics_irq_8259_cascade != -1) {
- if (request_irq(irq_offset_up(xics_irq_8259_cascade),
- no_action, 0, "8259 cascade", NULL))
- printk(KERN_ERR "xics_setup_i8259: couldn't get 8259 "
- "cascade\n");
- }
- return 0;
-}
-arch_initcall(xics_setup_i8259);
#ifdef CONFIG_SMP
void xics_request_IPIs(void)
{
- virt_irq_to_real_map[XICS_IPI] = XICS_IPI;
+ unsigned int ipi;
- /* IPIs are marked SA_INTERRUPT as they must run with irqs disabled */
- request_irq(irq_offset_up(XICS_IPI), xics_ipi_action, SA_INTERRUPT,
- "IPI", NULL);
- get_irq_desc(irq_offset_up(XICS_IPI))->status |= IRQ_PER_CPU;
-}
-#endif
-
-static void xics_set_affinity(unsigned int virq, cpumask_t cpumask)
-{
- unsigned int irq;
- int status;
- int xics_status[2];
- unsigned long newmask;
- cpumask_t tmp = CPU_MASK_NONE;
-
- irq = virt_irq_to_real(irq_offset_down(virq));
- if (irq == XICS_IPI || irq == NO_IRQ)
- return;
-
- status = rtas_call(ibm_get_xive, 1, 3, xics_status, irq);
+ ipi = irq_create_mapping(xics_host, XICS_IPI, 0);
+ BUG_ON(ipi == NO_IRQ);
- if (status) {
- printk(KERN_ERR "xics_set_affinity: irq=%u ibm,get-xive "
- "returns %d\n", irq, status);
- return;
- }
-
- /* For the moment only implement delivery to all cpus or one cpu */
- if (cpus_equal(cpumask, CPU_MASK_ALL)) {
- newmask = default_distrib_server;
- } else {
- cpus_and(tmp, cpu_online_map, cpumask);
- if (cpus_empty(tmp))
- return;
- newmask = get_hard_smp_processor_id(first_cpu(tmp));
- }
-
- status = rtas_call(ibm_set_xive, 3, 1, NULL,
- irq, newmask, xics_status[1]);
-
- if (status) {
- printk(KERN_ERR "xics_set_affinity: irq=%u ibm,set-xive "
- "returns %d\n", irq, status);
- return;
- }
+ /*
+ * IPIs are marked IRQF_DISABLED as they must run with irqs
+ * disabled
+ */
+ set_irq_handler(ipi, handle_percpu_irq);
+ if (firmware_has_feature(FW_FEATURE_LPAR))
+ request_irq(ipi, xics_ipi_action_lpar, IRQF_DISABLED,
+ "IPI", NULL);
+ else
+ request_irq(ipi, xics_ipi_action_direct, IRQF_DISABLED,
+ "IPI", NULL);
}
+#endif /* CONFIG_SMP */
void xics_teardown_cpu(int secondary)
{
int cpu = smp_processor_id();
+ unsigned int ipi;
+ struct irq_desc *desc;
- ops->cppr_info(cpu, 0x00);
- iosync();
-
- /* Clear IPI */
- ops->qirr_info(cpu, 0xff);
+ xics_set_cpu_priority(cpu, 0);
/*
* we need to EOI the IPI if we got here from kexec down IPI
* should we be flagging idle loop instead?
* or creating some task to be scheduled?
*/
- ops->xirr_info_set(cpu, XICS_IPI);
+
+ ipi = irq_find_mapping(xics_host, XICS_IPI);
+ if (ipi == XICS_IRQ_SPURIOUS)
+ return;
+ desc = get_irq_desc(ipi);
+ if (desc->chip && desc->chip->eoi)
+ desc->chip->eoi(XICS_IPI);
/*
* Some machines need to have at least one cpu in the GIQ,
*/
if (secondary)
rtas_set_indicator(GLOBAL_INTERRUPT_QUEUE,
- (1UL << interrupt_server_size) - 1 -
- default_distrib_server, 0);
+ (1UL << interrupt_server_size) - 1 -
+ default_distrib_server, 0);
}
#ifdef CONFIG_HOTPLUG_CPU
unsigned int irq, virq, cpu = smp_processor_id();
/* Reject any interrupt that was queued to us... */
- ops->cppr_info(cpu, 0);
- iosync();
+ xics_set_cpu_priority(cpu, 0);
/* remove ourselves from the global interrupt queue */
status = rtas_set_indicator(GLOBAL_INTERRUPT_QUEUE,
WARN_ON(status < 0);
/* Allow IPIs again... */
- ops->cppr_info(cpu, DEFAULT_PRIORITY);
- iosync();
+ xics_set_cpu_priority(cpu, DEFAULT_PRIORITY);
for_each_irq(virq) {
- irq_desc_t *desc;
+ struct irq_desc *desc;
int xics_status[2];
unsigned long flags;
/* We cant set affinity on ISA interrupts */
- if (virq < irq_offset_value())
+ if (virq < NUM_ISA_INTERRUPTS)
continue;
-
- desc = get_irq_desc(virq);
- irq = virt_irq_to_real(irq_offset_down(virq));
-
+ if (irq_map[virq].host != xics_host)
+ continue;
+ irq = (unsigned int)irq_map[virq].hwirq;
/* We need to get IPIs still. */
- if (irq == XICS_IPI || irq == NO_IRQ)
+ if (irq == XICS_IPI || irq == XICS_IRQ_SPURIOUS)
continue;
+ desc = get_irq_desc(virq);
/* We only need to migrate enabled IRQS */
if (desc == NULL || desc->chip == NULL
#include <linux/cache.h>
-void xics_init_IRQ(void);
-int xics_get_irq(struct pt_regs *);
-void xics_setup_cpu(void);
-void xics_teardown_cpu(int secondary);
-void xics_cause_IPI(int cpu);
-void xics_request_IPIs(void);
-void xics_migrate_irqs_away(void);
+extern void xics_init_IRQ(void);
+extern void xics_setup_cpu(void);
+extern void xics_teardown_cpu(int secondary);
+extern void xics_cause_IPI(int cpu);
+extern void xics_request_IPIs(void);
+extern void xics_migrate_irqs_away(void);
/* first argument is ignored for now*/
void pSeriesLP_cppr_info(int n_cpu, u8 value);
extern struct xics_ipi_struct xics_ipi_message[NR_CPUS] __cacheline_aligned;
+struct irq_desc;
+extern void pseries_8259_cascade(unsigned int irq, struct irq_desc *desc,
+ struct pt_regs *regs);
+
#endif /* _POWERPC_KERNEL_XICS_H */
obj-$(CONFIG_MPIC) += mpic.o
obj-$(CONFIG_PPC_INDIRECT_PCI) += indirect_pci.o
-obj-$(CONFIG_PPC_I8259) += i8259.o
obj-$(CONFIG_PPC_MPC106) += grackle.o
obj-$(CONFIG_BOOKE) += dcr.o
obj-$(CONFIG_40x) += dcr.o
obj-$(CONFIG_FSL_SOC) += fsl_soc.o
obj-$(CONFIG_PPC_TODC) += todc.o
obj-$(CONFIG_TSI108_BRIDGE) += tsi108_pci.o tsi108_dev.o
+
+ifeq ($(CONFIG_PPC_MERGE),y)
+obj-$(CONFIG_PPC_I8259) += i8259.o
+endif
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
+#undef DEBUG
+
#include <linux/init.h>
#include <linux/ioport.h>
#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/delay.h>
#include <asm/io.h>
#include <asm/i8259.h>
+#include <asm/prom.h>
static volatile void __iomem *pci_intack; /* RO, gives us the irq vector */
static DEFINE_SPINLOCK(i8259_lock);
-static int i8259_pic_irq_offset;
+static struct device_node *i8259_node;
+static struct irq_host *i8259_host;
/*
* Acknowledge the IRQ using either the PCI host bridge's interrupt
* which is called. It should be noted that polling is broken on some
* IBM and Motorola PReP boxes so we must use the int-ack feature on them.
*/
-int i8259_irq(struct pt_regs *regs)
+unsigned int i8259_irq(struct pt_regs *regs)
{
int irq;
-
- spin_lock(&i8259_lock);
+ int lock = 0;
/* Either int-ack or poll for the IRQ */
if (pci_intack)
irq = readb(pci_intack);
else {
+ spin_lock(&i8259_lock);
+ lock = 1;
+
/* Perform an interrupt acknowledge cycle on controller 1. */
outb(0x0C, 0x20); /* prepare for poll */
irq = inb(0x20) & 7;
if (!pci_intack)
outb(0x0B, 0x20); /* ISR register */
if(~inb(0x20) & 0x80)
- irq = -1;
- }
+ irq = NO_IRQ;
+ } else if (irq == 0xff)
+ irq = NO_IRQ;
- spin_unlock(&i8259_lock);
- return irq + i8259_pic_irq_offset;
-}
-
-int i8259_irq_cascade(struct pt_regs *regs, void *unused)
-{
- return i8259_irq(regs);
+ if (lock)
+ spin_unlock(&i8259_lock);
+ return irq;
}
static void i8259_mask_and_ack_irq(unsigned int irq_nr)
unsigned long flags;
spin_lock_irqsave(&i8259_lock, flags);
- irq_nr -= i8259_pic_irq_offset;
if (irq_nr > 7) {
cached_A1 |= 1 << (irq_nr-8);
inb(0xA1); /* DUMMY */
{
unsigned long flags;
+ pr_debug("i8259_mask_irq(%d)\n", irq_nr);
+
spin_lock_irqsave(&i8259_lock, flags);
- irq_nr -= i8259_pic_irq_offset;
if (irq_nr < 8)
cached_21 |= 1 << irq_nr;
else
{
unsigned long flags;
+ pr_debug("i8259_unmask_irq(%d)\n", irq_nr);
+
spin_lock_irqsave(&i8259_lock, flags);
- irq_nr -= i8259_pic_irq_offset;
if (irq_nr < 8)
cached_21 &= ~(1 << irq_nr);
else
spin_unlock_irqrestore(&i8259_lock, flags);
}
-static void i8259_end_irq(unsigned int irq)
-{
- if (!(irq_desc[irq].status & (IRQ_DISABLED|IRQ_INPROGRESS))
- && irq_desc[irq].action)
- i8259_unmask_irq(irq);
-}
-
-struct hw_interrupt_type i8259_pic = {
- .typename = " i8259 ",
- .enable = i8259_unmask_irq,
- .disable = i8259_mask_irq,
- .ack = i8259_mask_and_ack_irq,
- .end = i8259_end_irq,
+static struct irq_chip i8259_pic = {
+ .typename = " i8259 ",
+ .mask = i8259_mask_irq,
+ .unmask = i8259_unmask_irq,
+ .mask_ack = i8259_mask_and_ack_irq,
};
static struct resource pic1_iores = {
.flags = IORESOURCE_BUSY,
};
-static struct irqaction i8259_irqaction = {
- .handler = no_action,
- .flags = SA_INTERRUPT,
- .mask = CPU_MASK_NONE,
- .name = "82c59 secondary cascade",
+static int i8259_host_match(struct irq_host *h, struct device_node *node)
+{
+ return i8259_node == NULL || i8259_node == node;
+}
+
+static int i8259_host_map(struct irq_host *h, unsigned int virq,
+ irq_hw_number_t hw, unsigned int flags)
+{
+ pr_debug("i8259_host_map(%d, 0x%lx)\n", virq, hw);
+
+ /* We block the internal cascade */
+ if (hw == 2)
+ get_irq_desc(virq)->status |= IRQ_NOREQUEST;
+
+	/* We use level handling only for now; we might want to
+	 * be more cautious here, but that works for now.
+ */
+ get_irq_desc(virq)->status |= IRQ_LEVEL;
+ set_irq_chip_and_handler(virq, &i8259_pic, handle_level_irq);
+ return 0;
+}
+
+static void i8259_host_unmap(struct irq_host *h, unsigned int virq)
+{
+ /* Make sure irq is masked in hardware */
+ i8259_mask_irq(virq);
+
+ /* remove chip and handler */
+ set_irq_chip_and_handler(virq, NULL, NULL);
+
+ /* Make sure it's completed */
+ synchronize_irq(virq);
+}
+
+static int i8259_host_xlate(struct irq_host *h, struct device_node *ct,
+ u32 *intspec, unsigned int intsize,
+ irq_hw_number_t *out_hwirq, unsigned int *out_flags)
+{
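+	/* ISA interrupt specifiers are <number> or <number, sense>; translate
+	 * the optional sense cell into a Linux IRQ_TYPE_* flag.
+	 */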
+ static unsigned char map_isa_senses[4] = {
+ IRQ_TYPE_LEVEL_LOW,
+ IRQ_TYPE_LEVEL_HIGH,
+ IRQ_TYPE_EDGE_FALLING,
+ IRQ_TYPE_EDGE_RISING,
+ };
+
+ *out_hwirq = intspec[0];
+ if (intsize > 1 && intspec[1] < 4)
+ *out_flags = map_isa_senses[intspec[1]];
+ else
+ *out_flags = IRQ_TYPE_NONE;
+
+ return 0;
+}
+
+static struct irq_host_ops i8259_host_ops = {
+ .match = i8259_host_match,
+ .map = i8259_host_map,
+ .unmap = i8259_host_unmap,
+ .xlate = i8259_host_xlate,
};
-/*
- * i8259_init()
- * intack_addr - PCI interrupt acknowledge (real) address which will return
- * the active irq from the 8259
+/**
+ * i8259_init - Initialize the legacy controller
+ * @node: device node of the legacy PIC (can be NULL, but then, it will match
+ * all interrupts, so beware)
+ * @intack_addr: PCI interrupt acknowledge (real) address which will return
+ * the active irq from the 8259
*/
-void __init i8259_init(unsigned long intack_addr, int offset)
+void i8259_init(struct device_node *node, unsigned long intack_addr)
{
unsigned long flags;
- int i;
+ /* initialize the controller */
spin_lock_irqsave(&i8259_lock, flags);
- i8259_pic_irq_offset = offset;
+
+ /* Mask all first */
+ outb(0xff, 0xA1);
+ outb(0xff, 0x21);
/* init master interrupt controller */
outb(0x11, 0x20); /* Start init sequence */
outb(0x02, 0xA1); /* edge triggered, Cascade (slave) on IRQ2 */
outb(0x01, 0xA1); /* Select 8086 mode */
+ /* That thing is slow */
+ udelay(100);
+
/* always read ISR */
outb(0x0B, 0x20);
outb(0x0B, 0xA0);
- /* Mask all interrupts */
+ /* Unmask the internal cascade */
+ cached_21 &= ~(1 << 2);
+
+ /* Set interrupt masks */
outb(cached_A1, 0xA1);
outb(cached_21, 0x21);
spin_unlock_irqrestore(&i8259_lock, flags);
- for (i = 0; i < NUM_ISA_INTERRUPTS; ++i)
- irq_desc[offset + i].chip = &i8259_pic;
+ /* create a legacy host */
+ if (node)
+ i8259_node = of_node_get(node);
+ i8259_host = irq_alloc_host(IRQ_HOST_MAP_LEGACY, 0, &i8259_host_ops, 0);
+ if (i8259_host == NULL) {
+ printk(KERN_ERR "i8259: failed to allocate irq host !\n");
+ return;
+ }
/* reserve our resources */
- setup_irq(offset + 2, &i8259_irqaction);
+	/* XXX should we continue doing that? It seems to cause problems
+ * with further requesting of PCI IO resources for that range...
+ * need to look into it.
+ */
request_resource(&ioport_resource, &pic1_iores);
request_resource(&ioport_resource, &pic2_iores);
request_resource(&ioport_resource, &pic_edgectrl_iores);
if (intack_addr != 0)
pci_intack = ioremap(intack_addr, 1);
+ printk(KERN_INFO "i8259 legacy interrupt controller initialized\n");
}
if (mpic->flags & MPIC_PRIMARY)
cpu = hard_smp_processor_id();
-
- return _mpic_read(mpic->flags & MPIC_BIG_ENDIAN, mpic->cpuregs[cpu], reg);
+ return _mpic_read(mpic->flags & MPIC_BIG_ENDIAN,
+ mpic->cpuregs[cpu], reg);
}
static inline void _mpic_cpu_write(struct mpic *mpic, unsigned int reg, u32 value)
#endif /* CONFIG_MPIC_BROKEN_U3 */
+#define mpic_irq_to_hw(virq) ((unsigned int)irq_map[virq].hwirq)
+
/* Find an mpic associated with a given linux interrupt */
static struct mpic *mpic_find(unsigned int irq, unsigned int *is_ipi)
{
- struct mpic *mpic = mpics;
-
- while(mpic) {
- /* search IPIs first since they may override the main interrupts */
- if (irq >= mpic->ipi_offset && irq < (mpic->ipi_offset + 4)) {
- if (is_ipi)
- *is_ipi = 1;
- return mpic;
- }
- if (irq >= mpic->irq_offset &&
- irq < (mpic->irq_offset + mpic->irq_count)) {
- if (is_ipi)
- *is_ipi = 0;
- return mpic;
- }
- mpic = mpic -> next;
- }
- return NULL;
+ unsigned int src = mpic_irq_to_hw(irq);
+
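+	/* ISA interrupts never belong to an MPIC; the owning mpic is kept in
+	 * the descriptor's chip_data, and IPIs are recognized by their
+	 * hardware vector range.
+	 */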
+ if (irq < NUM_ISA_INTERRUPTS)
+ return NULL;
+ if (is_ipi)
+ *is_ipi = (src >= MPIC_VEC_IPI_0 && src <= MPIC_VEC_IPI_3);
+
+ return irq_desc[irq].chip_data;
}
/* Convert a cpu mask from logical to physical cpu numbers. */
/* Get the mpic structure from the IPI number */
static inline struct mpic * mpic_from_ipi(unsigned int ipi)
{
- return container_of(irq_desc[ipi].chip, struct mpic, hc_ipi);
+ return irq_desc[ipi].chip_data;
}
#endif
/* Get the mpic structure from the irq number */
static inline struct mpic * mpic_from_irq(unsigned int irq)
{
- return container_of(irq_desc[irq].chip, struct mpic, hc_irq);
+ return irq_desc[irq].chip_data;
}
/* Send an EOI */
#ifdef CONFIG_SMP
static irqreturn_t mpic_ipi_action(int irq, void *dev_id, struct pt_regs *regs)
{
- struct mpic *mpic = dev_id;
-
- smp_message_recv(irq - mpic->ipi_offset, regs);
+ smp_message_recv(mpic_irq_to_hw(irq) - MPIC_VEC_IPI_0, regs);
return IRQ_HANDLED;
}
#endif /* CONFIG_SMP */
*/
-static void mpic_enable_irq(unsigned int irq)
+static void mpic_unmask_irq(unsigned int irq)
{
unsigned int loops = 100000;
struct mpic *mpic = mpic_from_irq(irq);
- unsigned int src = irq - mpic->irq_offset;
+ unsigned int src = mpic_irq_to_hw(irq);
DBG("%p: %s: enable_irq: %d (src %d)\n", mpic, mpic->name, irq, src);
break;
}
} while(mpic_irq_read(src, MPIC_IRQ_VECTOR_PRI) & MPIC_VECPRI_MASK);
-
-#ifdef CONFIG_MPIC_BROKEN_U3
- if (mpic->flags & MPIC_BROKEN_U3) {
- unsigned int src = irq - mpic->irq_offset;
- if (mpic_is_ht_interrupt(mpic, src) &&
- (irq_desc[irq].status & IRQ_LEVEL))
- mpic_ht_end_irq(mpic, src);
- }
-#endif /* CONFIG_MPIC_BROKEN_U3 */
-}
-
-static unsigned int mpic_startup_irq(unsigned int irq)
-{
-#ifdef CONFIG_MPIC_BROKEN_U3
- struct mpic *mpic = mpic_from_irq(irq);
- unsigned int src = irq - mpic->irq_offset;
-#endif /* CONFIG_MPIC_BROKEN_U3 */
-
- mpic_enable_irq(irq);
-
-#ifdef CONFIG_MPIC_BROKEN_U3
- if (mpic_is_ht_interrupt(mpic, src))
- mpic_startup_ht_interrupt(mpic, src, irq_desc[irq].status);
-#endif /* CONFIG_MPIC_BROKEN_U3 */
-
- return 0;
}
-static void mpic_disable_irq(unsigned int irq)
+static void mpic_mask_irq(unsigned int irq)
{
unsigned int loops = 100000;
struct mpic *mpic = mpic_from_irq(irq);
- unsigned int src = irq - mpic->irq_offset;
+ unsigned int src = mpic_irq_to_hw(irq);
DBG("%s: disable_irq: %d (src %d)\n", mpic->name, irq, src);
} while(!(mpic_irq_read(src, MPIC_IRQ_VECTOR_PRI) & MPIC_VECPRI_MASK));
}
-static void mpic_shutdown_irq(unsigned int irq)
+static void mpic_end_irq(unsigned int irq)
{
+ struct mpic *mpic = mpic_from_irq(irq);
+
+#ifdef DEBUG_IRQ
+ DBG("%s: end_irq: %d\n", mpic->name, irq);
+#endif
+ /* We always EOI on end_irq() even for edge interrupts since that
+ * should only lower the priority, the MPIC should have properly
+ * latched another edge interrupt coming in anyway
+ */
+
+ mpic_eoi(mpic);
+}
+
#ifdef CONFIG_MPIC_BROKEN_U3
+
+static void mpic_unmask_ht_irq(unsigned int irq)
+{
struct mpic *mpic = mpic_from_irq(irq);
- unsigned int src = irq - mpic->irq_offset;
+ unsigned int src = mpic_irq_to_hw(irq);
- if (mpic_is_ht_interrupt(mpic, src))
- mpic_shutdown_ht_interrupt(mpic, src, irq_desc[irq].status);
+ mpic_unmask_irq(irq);
-#endif /* CONFIG_MPIC_BROKEN_U3 */
+ if (irq_desc[irq].status & IRQ_LEVEL)
+ mpic_ht_end_irq(mpic, src);
+}
+
+static unsigned int mpic_startup_ht_irq(unsigned int irq)
+{
+ struct mpic *mpic = mpic_from_irq(irq);
+ unsigned int src = mpic_irq_to_hw(irq);
+
+ mpic_unmask_irq(irq);
+ mpic_startup_ht_interrupt(mpic, src, irq_desc[irq].status);
- mpic_disable_irq(irq);
+ return 0;
}
-static void mpic_end_irq(unsigned int irq)
+static void mpic_shutdown_ht_irq(unsigned int irq)
{
struct mpic *mpic = mpic_from_irq(irq);
+ unsigned int src = mpic_irq_to_hw(irq);
+
+ mpic_shutdown_ht_interrupt(mpic, src, irq_desc[irq].status);
+ mpic_mask_irq(irq);
+}
+
+static void mpic_end_ht_irq(unsigned int irq)
+{
+ struct mpic *mpic = mpic_from_irq(irq);
+ unsigned int src = mpic_irq_to_hw(irq);
#ifdef DEBUG_IRQ
DBG("%s: end_irq: %d\n", mpic->name, irq);
* latched another edge interrupt coming in anyway
*/
-#ifdef CONFIG_MPIC_BROKEN_U3
- if (mpic->flags & MPIC_BROKEN_U3) {
- unsigned int src = irq - mpic->irq_offset;
- if (mpic_is_ht_interrupt(mpic, src) &&
- (irq_desc[irq].status & IRQ_LEVEL))
- mpic_ht_end_irq(mpic, src);
- }
-#endif /* CONFIG_MPIC_BROKEN_U3 */
-
+ if (irq_desc[irq].status & IRQ_LEVEL)
+ mpic_ht_end_irq(mpic, src);
mpic_eoi(mpic);
}
+#endif /* CONFIG_MPIC_BROKEN_U3 */
+
#ifdef CONFIG_SMP
-static void mpic_enable_ipi(unsigned int irq)
+static void mpic_unmask_ipi(unsigned int irq)
{
struct mpic *mpic = mpic_from_ipi(irq);
- unsigned int src = irq - mpic->ipi_offset;
+ unsigned int src = mpic_irq_to_hw(irq) - MPIC_VEC_IPI_0;
DBG("%s: enable_ipi: %d (ipi %d)\n", mpic->name, irq, src);
mpic_ipi_write(src, mpic_ipi_read(src) & ~MPIC_VECPRI_MASK);
}
-static void mpic_disable_ipi(unsigned int irq)
+static void mpic_mask_ipi(unsigned int irq)
{
/* NEVER disable an IPI... that's just plain wrong! */
}
* IPIs are marked IRQ_PER_CPU. This has the side effect of
* preventing the IRQ_PENDING/IRQ_INPROGRESS logic from
* applying to them. We EOI them late to avoid re-entering.
- * We mark IPI's with SA_INTERRUPT as they must run with
+ * We mark IPI's with IRQF_DISABLED as they must run with
* irqs disabled.
*/
mpic_eoi(mpic);
static void mpic_set_affinity(unsigned int irq, cpumask_t cpumask)
{
struct mpic *mpic = mpic_from_irq(irq);
+ unsigned int src = mpic_irq_to_hw(irq);
cpumask_t tmp;
cpus_and(tmp, cpumask, cpu_online_map);
- mpic_irq_write(irq - mpic->irq_offset, MPIC_IRQ_DESTINATION,
+ mpic_irq_write(src, MPIC_IRQ_DESTINATION,
mpic_physmask(cpus_addr(tmp)[0]));
}
+static unsigned int mpic_flags_to_vecpri(unsigned int flags, int *level)
+{
+ unsigned int vecpri;
+
+ /* Now convert sense value */
+ switch(flags & IRQ_TYPE_SENSE_MASK) {
+ case IRQ_TYPE_EDGE_RISING:
+ vecpri = MPIC_VECPRI_SENSE_EDGE |
+ MPIC_VECPRI_POLARITY_POSITIVE;
+ *level = 0;
+ break;
+ case IRQ_TYPE_EDGE_FALLING:
+ vecpri = MPIC_VECPRI_SENSE_EDGE |
+ MPIC_VECPRI_POLARITY_NEGATIVE;
+ *level = 0;
+ break;
+ case IRQ_TYPE_LEVEL_HIGH:
+ vecpri = MPIC_VECPRI_SENSE_LEVEL |
+ MPIC_VECPRI_POLARITY_POSITIVE;
+ *level = 1;
+ break;
+ case IRQ_TYPE_LEVEL_LOW:
+ default:
+ vecpri = MPIC_VECPRI_SENSE_LEVEL |
+ MPIC_VECPRI_POLARITY_NEGATIVE;
+ *level = 1;
+ }
+ return vecpri;
+}
+
+static struct irq_chip mpic_irq_chip = {
+ .mask = mpic_mask_irq,
+ .unmask = mpic_unmask_irq,
+ .eoi = mpic_end_irq,
+};
+
+#ifdef CONFIG_SMP
+static struct irq_chip mpic_ipi_chip = {
+ .mask = mpic_mask_ipi,
+ .unmask = mpic_unmask_ipi,
+ .eoi = mpic_end_ipi,
+};
+#endif /* CONFIG_SMP */
+
+#ifdef CONFIG_MPIC_BROKEN_U3
+static struct irq_chip mpic_irq_ht_chip = {
+ .startup = mpic_startup_ht_irq,
+ .shutdown = mpic_shutdown_ht_irq,
+ .mask = mpic_mask_irq,
+ .unmask = mpic_unmask_ht_irq,
+ .eoi = mpic_end_ht_irq,
+};
+#endif /* CONFIG_MPIC_BROKEN_U3 */
+
+
+static int mpic_host_match(struct irq_host *h, struct device_node *node)
+{
+ struct mpic *mpic = h->host_data;
+
+ /* Exact match, unless mpic node is NULL */
+ return mpic->of_node == NULL || mpic->of_node == node;
+}
+
+static int mpic_host_map(struct irq_host *h, unsigned int virq,
+ irq_hw_number_t hw, unsigned int flags)
+{
+ struct irq_desc *desc = get_irq_desc(virq);
+ struct irq_chip *chip;
+ struct mpic *mpic = h->host_data;
+ unsigned int vecpri = MPIC_VECPRI_SENSE_LEVEL |
+ MPIC_VECPRI_POLARITY_NEGATIVE;
+ int level;
+
+ pr_debug("mpic: map virq %d, hwirq 0x%lx, flags: 0x%x\n",
+ virq, hw, flags);
+
+ if (hw == MPIC_VEC_SPURRIOUS)
+ return -EINVAL;
+#ifdef CONFIG_SMP
+ else if (hw >= MPIC_VEC_IPI_0) {
+ WARN_ON(!(mpic->flags & MPIC_PRIMARY));
+
+ pr_debug("mpic: mapping as IPI\n");
+ set_irq_chip_data(virq, mpic);
+ set_irq_chip_and_handler(virq, &mpic->hc_ipi,
+ handle_percpu_irq);
+ return 0;
+ }
+#endif /* CONFIG_SMP */
+
+ if (hw >= mpic->irq_count)
+ return -EINVAL;
+
+ /* If no sense provided, check default sense array */
+ if (((flags & IRQ_TYPE_SENSE_MASK) == IRQ_TYPE_NONE) &&
+ mpic->senses && hw < mpic->senses_count)
+ flags |= mpic->senses[hw];
+
+ vecpri = mpic_flags_to_vecpri(flags, &level);
+ if (level)
+ desc->status |= IRQ_LEVEL;
+ chip = &mpic->hc_irq;
+
+#ifdef CONFIG_MPIC_BROKEN_U3
+ /* Check for HT interrupts, override vecpri */
+ if (mpic_is_ht_interrupt(mpic, hw)) {
+ vecpri &= ~(MPIC_VECPRI_SENSE_MASK |
+ MPIC_VECPRI_POLARITY_MASK);
+ vecpri |= MPIC_VECPRI_POLARITY_POSITIVE;
+ chip = &mpic->hc_ht_irq;
+ }
+#endif
+
+ /* Reconfigure irq */
+ vecpri |= MPIC_VECPRI_MASK | hw | (8 << MPIC_VECPRI_PRIORITY_SHIFT);
+ mpic_irq_write(hw, MPIC_IRQ_VECTOR_PRI, vecpri);
+
+ pr_debug("mpic: mapping as IRQ\n");
+
+ set_irq_chip_data(virq, mpic);
+ set_irq_chip_and_handler(virq, chip, handle_fasteoi_irq);
+ return 0;
+}
+
+static int mpic_host_xlate(struct irq_host *h, struct device_node *ct,
+ u32 *intspec, unsigned int intsize,
+ irq_hw_number_t *out_hwirq, unsigned int *out_flags)
+
+{
+ static unsigned char map_mpic_senses[4] = {
+ IRQ_TYPE_EDGE_RISING,
+ IRQ_TYPE_LEVEL_LOW,
+ IRQ_TYPE_LEVEL_HIGH,
+ IRQ_TYPE_EDGE_FALLING,
+ };
+
+ *out_hwirq = intspec[0];
+ if (intsize > 1 && intspec[1] < 4)
+ *out_flags = map_mpic_senses[intspec[1]];
+ else
+ *out_flags = IRQ_TYPE_NONE;
+
+ return 0;
+}
+
+static struct irq_host_ops mpic_host_ops = {
+ .match = mpic_host_match,
+ .map = mpic_host_map,
+ .xlate = mpic_host_xlate,
+};
/*
* Exported functions
*/
-
-struct mpic * __init mpic_alloc(unsigned long phys_addr,
+struct mpic * __init mpic_alloc(struct device_node *node,
+ unsigned long phys_addr,
unsigned int flags,
unsigned int isu_size,
- unsigned int irq_offset,
unsigned int irq_count,
- unsigned int ipi_offset,
- unsigned char *senses,
- unsigned int senses_count,
const char *name)
{
struct mpic *mpic;
if (mpic == NULL)
return NULL;
-
memset(mpic, 0, sizeof(struct mpic));
mpic->name = name;
+ mpic->of_node = node ? of_node_get(node) : NULL;
+ mpic->irqhost = irq_alloc_host(IRQ_HOST_MAP_LINEAR, 256,
+ &mpic_host_ops,
+ MPIC_VEC_SPURRIOUS);
+ if (mpic->irqhost == NULL) {
+ of_node_put(node);
+ return NULL;
+ }
+
+ mpic->irqhost->host_data = mpic;
+ mpic->hc_irq = mpic_irq_chip;
mpic->hc_irq.typename = name;
- mpic->hc_irq.startup = mpic_startup_irq;
- mpic->hc_irq.shutdown = mpic_shutdown_irq;
- mpic->hc_irq.enable = mpic_enable_irq;
- mpic->hc_irq.disable = mpic_disable_irq;
- mpic->hc_irq.end = mpic_end_irq;
if (flags & MPIC_PRIMARY)
mpic->hc_irq.set_affinity = mpic_set_affinity;
+#ifdef CONFIG_MPIC_BROKEN_U3
+ mpic->hc_ht_irq = mpic_irq_ht_chip;
+ mpic->hc_ht_irq.typename = name;
+ if (flags & MPIC_PRIMARY)
+ mpic->hc_ht_irq.set_affinity = mpic_set_affinity;
+#endif /* CONFIG_MPIC_BROKEN_U3 */
#ifdef CONFIG_SMP
+ mpic->hc_ipi = mpic_ipi_chip;
mpic->hc_ipi.typename = name;
- mpic->hc_ipi.enable = mpic_enable_ipi;
- mpic->hc_ipi.disable = mpic_disable_ipi;
- mpic->hc_ipi.end = mpic_end_ipi;
#endif /* CONFIG_SMP */
mpic->flags = flags;
mpic->isu_size = isu_size;
- mpic->irq_offset = irq_offset;
mpic->irq_count = irq_count;
- mpic->ipi_offset = ipi_offset;
mpic->num_sources = 0; /* so far */
- mpic->senses = senses;
- mpic->senses_count = senses_count;
/* Map the global registers */
mpic->gregs = ioremap(phys_addr + MPIC_GREG_BASE, 0x1000);
mpic->next = mpics;
mpics = mpic;
- if (flags & MPIC_PRIMARY)
+ if (flags & MPIC_PRIMARY) {
mpic_primary = mpic;
+ irq_set_default_host(mpic->irqhost);
+ }
return mpic;
}
mpic->num_sources = isu_first + mpic->isu_size;
}
-void __init mpic_setup_cascade(unsigned int irq, mpic_cascade_t handler,
- void *data)
+void __init mpic_set_default_senses(struct mpic *mpic, u8 *senses, int count)
{
- struct mpic *mpic = mpic_find(irq, NULL);
- unsigned long flags;
-
- /* Synchronization here is a bit dodgy, so don't try to replace cascade
- * interrupts on the fly too often ... but normally it's set up at boot.
- */
- spin_lock_irqsave(&mpic_lock, flags);
- if (mpic->cascade)
- mpic_disable_irq(mpic->cascade_vec + mpic->irq_offset);
- mpic->cascade = NULL;
- wmb();
- mpic->cascade_vec = irq - mpic->irq_offset;
- mpic->cascade_data = data;
- wmb();
- mpic->cascade = handler;
- mpic_enable_irq(irq);
- spin_unlock_irqrestore(&mpic_lock, flags);
+ mpic->senses = senses;
+ mpic->senses_count = count;
}
void __init mpic_init(struct mpic *mpic)
int i;
BUG_ON(mpic->num_sources == 0);
+ WARN_ON(mpic->num_sources > MPIC_VEC_IPI_0);
+
+ /* Sanitize source count */
+ if (mpic->num_sources > MPIC_VEC_IPI_0)
+ mpic->num_sources = MPIC_VEC_IPI_0;
printk(KERN_INFO "mpic: Initializing for %d sources\n", mpic->num_sources);
MPIC_VECPRI_MASK |
(10 << MPIC_VECPRI_PRIORITY_SHIFT) |
(MPIC_VEC_IPI_0 + i));
-#ifdef CONFIG_SMP
- if (!(mpic->flags & MPIC_PRIMARY))
- continue;
- irq_desc[mpic->ipi_offset+i].status |= IRQ_PER_CPU;
- irq_desc[mpic->ipi_offset+i].chip = &mpic->hc_ipi;
-#endif /* CONFIG_SMP */
}
/* Initialize interrupt sources */
/* Do the HT PIC fixups on U3 broken mpic */
DBG("MPIC flags: %x\n", mpic->flags);
if ((mpic->flags & MPIC_BROKEN_U3) && (mpic->flags & MPIC_PRIMARY))
- mpic_scan_ht_pics(mpic);
+ mpic_scan_ht_pics(mpic);
#endif /* CONFIG_MPIC_BROKEN_U3 */
for (i = 0; i < mpic->num_sources; i++) {
/* start with vector = source number, and masked */
u32 vecpri = MPIC_VECPRI_MASK | i | (8 << MPIC_VECPRI_PRIORITY_SHIFT);
- int level = 0;
+ int level = 1;
- /* if it's an IPI, we skip it */
- if ((mpic->irq_offset + i) >= (mpic->ipi_offset + i) &&
- (mpic->irq_offset + i) < (mpic->ipi_offset + i + 4))
- continue;
-
/* do senses munging */
- if (mpic->senses && i < mpic->senses_count) {
- if (mpic->senses[i] & IRQ_SENSE_LEVEL)
- vecpri |= MPIC_VECPRI_SENSE_LEVEL;
- if (mpic->senses[i] & IRQ_POLARITY_POSITIVE)
- vecpri |= MPIC_VECPRI_POLARITY_POSITIVE;
- } else
+ if (mpic->senses && i < mpic->senses_count)
+ vecpri = mpic_flags_to_vecpri(mpic->senses[i],
+ &level);
+ else
vecpri |= MPIC_VECPRI_SENSE_LEVEL;
- /* remember if it was a level interrupts */
- level = (vecpri & MPIC_VECPRI_SENSE_LEVEL);
-
/* deal with broken U3 */
if (mpic->flags & MPIC_BROKEN_U3) {
#ifdef CONFIG_MPIC_BROKEN_U3
mpic_irq_write(i, MPIC_IRQ_VECTOR_PRI, vecpri);
mpic_irq_write(i, MPIC_IRQ_DESTINATION,
1 << hard_smp_processor_id());
-
- /* init linux descriptors */
- if (i < mpic->irq_count) {
- irq_desc[mpic->irq_offset+i].status = level ? IRQ_LEVEL : 0;
- irq_desc[mpic->irq_offset+i].chip = &mpic->hc_irq;
- }
}
/* Init spurrious vector */
{
int is_ipi;
struct mpic *mpic = mpic_find(irq, &is_ipi);
+ unsigned int src = mpic_irq_to_hw(irq);
unsigned long flags;
u32 reg;
spin_lock_irqsave(&mpic_lock, flags);
if (is_ipi) {
- reg = mpic_ipi_read(irq - mpic->ipi_offset) &
+ reg = mpic_ipi_read(src - MPIC_VEC_IPI_0) &
~MPIC_VECPRI_PRIORITY_MASK;
- mpic_ipi_write(irq - mpic->ipi_offset,
+ mpic_ipi_write(src - MPIC_VEC_IPI_0,
reg | (pri << MPIC_VECPRI_PRIORITY_SHIFT));
} else {
- reg = mpic_irq_read(irq - mpic->irq_offset,MPIC_IRQ_VECTOR_PRI)
+ reg = mpic_irq_read(src, MPIC_IRQ_VECTOR_PRI)
& ~MPIC_VECPRI_PRIORITY_MASK;
- mpic_irq_write(irq - mpic->irq_offset, MPIC_IRQ_VECTOR_PRI,
+ mpic_irq_write(src, MPIC_IRQ_VECTOR_PRI,
reg | (pri << MPIC_VECPRI_PRIORITY_SHIFT));
}
spin_unlock_irqrestore(&mpic_lock, flags);
{
int is_ipi;
struct mpic *mpic = mpic_find(irq, &is_ipi);
+ unsigned int src = mpic_irq_to_hw(irq);
unsigned long flags;
u32 reg;
spin_lock_irqsave(&mpic_lock, flags);
if (is_ipi)
- reg = mpic_ipi_read(irq - mpic->ipi_offset);
+ reg = mpic_ipi_read(src - MPIC_VEC_IPI_0);
else
- reg = mpic_irq_read(irq - mpic->irq_offset, MPIC_IRQ_VECTOR_PRI);
+ reg = mpic_irq_read(src, MPIC_IRQ_VECTOR_PRI);
spin_unlock_irqrestore(&mpic_lock, flags);
return (reg & MPIC_VECPRI_PRIORITY_MASK) >> MPIC_VECPRI_PRIORITY_SHIFT;
}
mpic_physmask(cpu_mask & cpus_addr(cpu_online_map)[0]));
}
-int mpic_get_one_irq(struct mpic *mpic, struct pt_regs *regs)
+unsigned int mpic_get_one_irq(struct mpic *mpic, struct pt_regs *regs)
{
- u32 irq;
+ u32 src;
- irq = mpic_cpu_read(MPIC_CPU_INTACK) & MPIC_VECPRI_VECTOR_MASK;
-#ifdef DEBUG_LOW
- DBG("%s: get_one_irq(): %d\n", mpic->name, irq);
-#endif
- if (mpic->cascade && irq == mpic->cascade_vec) {
+ src = mpic_cpu_read(MPIC_CPU_INTACK) & MPIC_VECPRI_VECTOR_MASK;
#ifdef DEBUG_LOW
- DBG("%s: cascading ...\n", mpic->name);
-#endif
- irq = mpic->cascade(regs, mpic->cascade_data);
- mpic_eoi(mpic);
- return irq;
- }
- if (unlikely(irq == MPIC_VEC_SPURRIOUS))
- return -1;
- if (irq < MPIC_VEC_IPI_0) {
-#ifdef DEBUG_IRQ
- DBG("%s: irq %d\n", mpic->name, irq + mpic->irq_offset);
-#endif
- return irq + mpic->irq_offset;
- }
-#ifdef DEBUG_IPI
- DBG("%s: ipi %d !\n", mpic->name, irq - MPIC_VEC_IPI_0);
+ DBG("%s: get_one_irq(): %d\n", mpic->name, src);
#endif
- return irq - MPIC_VEC_IPI_0 + mpic->ipi_offset;
+ if (unlikely(src == MPIC_VEC_SPURRIOUS))
+ return NO_IRQ;
+ return irq_linear_revmap(mpic->irqhost, src);
}
-int mpic_get_irq(struct pt_regs *regs)
+unsigned int mpic_get_irq(struct pt_regs *regs)
{
struct mpic *mpic = mpic_primary;
void mpic_request_ipis(void)
{
struct mpic *mpic = mpic_primary;
-
+ int i;
+ static char *ipi_names[] = {
+ "IPI0 (call function)",
+ "IPI1 (reschedule)",
+ "IPI2 (unused)",
+ "IPI3 (debugger break)",
+ };
BUG_ON(mpic == NULL);
-
- printk("requesting IPIs ... \n");
-
- /* IPIs are marked SA_INTERRUPT as they must run with irqs disabled */
- request_irq(mpic->ipi_offset+0, mpic_ipi_action, SA_INTERRUPT,
- "IPI0 (call function)", mpic);
- request_irq(mpic->ipi_offset+1, mpic_ipi_action, SA_INTERRUPT,
- "IPI1 (reschedule)", mpic);
- request_irq(mpic->ipi_offset+2, mpic_ipi_action, SA_INTERRUPT,
- "IPI2 (unused)", mpic);
- request_irq(mpic->ipi_offset+3, mpic_ipi_action, SA_INTERRUPT,
- "IPI3 (debugger break)", mpic);
-
- printk("IPIs requested... \n");
+
+ printk(KERN_INFO "mpic: requesting IPIs ... \n");
+
+ for (i = 0; i < 4; i++) {
+ unsigned int vipi = irq_create_mapping(mpic->irqhost,
+ MPIC_VEC_IPI_0 + i, 0);
+ if (vipi == NO_IRQ) {
+ printk(KERN_ERR "Failed to map IPI %d\n", i);
+ break;
+ }
+ request_irq(vipi, mpic_ipi_action, IRQF_DISABLED,
+ ipi_names[i], mpic);
+ }
}
void smp_mpic_message_pass(int target, int msg)
#ifdef PHY_INTERRUPT
#ifdef CONFIG_ADS8272
- if (request_irq(PHY_INTERRUPT, mii_link_interrupt, SA_SHIRQ,
+ if (request_irq(PHY_INTERRUPT, mii_link_interrupt, IRQF_SHARED,
"mii", dev) < 0)
printk(KERN_CRIT "Can't get MII IRQ %d\n", PHY_INTERRUPT);
#else
static struct irqaction cpm2_irqaction = {
.handler = cpm2_cascade,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.mask = CPU_MASK_NONE,
.name = "cpm2_cascade",
};
static struct irqaction cpm2_irqaction = {
.handler = cpm2_cascade,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.mask = CPU_MASK_NONE,
.name = "cpm2_cascade",
};
static struct irqaction cpm2_irqaction = {
.handler = cpm2_cascade,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.mask = CPU_MASK_NONE,
.name = "cpm2_cascade",
};
static struct irqaction cpm2_irqaction = {
.handler = cpm2_cascade,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.mask = CPU_MASK_NONE,
.name = "cpm2_cascade",
};
mv64x60_write(&bh, MV64360_CPU0_DOORBELL_CLR, 0xff);
mv64x60_write(&bh, MV64360_CPU0_DOORBELL_MASK, 0xff);
request_irq(60, hdpu_smp_cpu0_int_handler,
- SA_INTERRUPT, hdpu_smp0, 0);
+ IRQF_DISABLED, hdpu_smp0, 0);
}
if (cpu_nr == 1) {
mv64x60_write(&bh, MV64360_CPU1_DOORBELL_CLR, 0x0);
mv64x60_write(&bh, MV64360_CPU1_DOORBELL_MASK, 0xff);
request_irq(28, hdpu_smp_cpu1_int_handler,
- SA_INTERRUPT, hdpu_smp1, 0);
+ IRQF_DISABLED, hdpu_smp1, 0);
}
}
/* Hook up i8259 interrupt which is connected to GPP28 */
request_irq(mv64360_irq_base + MV64x60_IRQ_GPP28, ppc7d_i8259_intr,
- SA_INTERRUPT, "I8259 (GPP28) interrupt", (void *)0);
+ IRQF_DISABLED, "I8259 (GPP28) interrupt", (void *)0);
/* Configure MPP16 as watchdog NMI, MPP17 as watchdog WDE */
spin_lock_irqsave(&mv64x60_lock, flags);
static struct irqaction sbc82xx_i8259_irqaction = {
.handler = sbc82xx_i8259_demux,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.mask = CPU_MASK_NONE,
.name = "i8259 demux",
};
ifeq ($(CONFIG_PPC_MPC52xx),y)
obj-$(CONFIG_PCI) += mpc52xx_pci.o
endif
+
+obj-$(CONFIG_PPC_I8259) += i8259.o
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/init.h>
-#include <linux/version.h>
+#include <linux/utsrelease.h>
#include <asm/sections.h>
#include <asm/bootx.h>
/* Register CPU interface error interrupt handler */
if ((rc = request_irq(MV64x60_IRQ_CPU_ERR,
- gt64260_cpu_error_int_handler, SA_INTERRUPT, CPU_INTR_STR, 0)))
+ gt64260_cpu_error_int_handler, IRQF_DISABLED, CPU_INTR_STR, 0)))
printk(KERN_WARNING "Can't register cpu error handler: %d", rc);
mv64x60_write(&bh, MV64x60_CPU_ERR_MASK, 0);
/* Register PCI 0 error interrupt handler */
if ((rc = request_irq(MV64360_IRQ_PCI0, gt64260_pci_error_int_handler,
- SA_INTERRUPT, PCI0_INTR_STR, (void *)0)))
+ IRQF_DISABLED, PCI0_INTR_STR, (void *)0)))
printk(KERN_WARNING "Can't register pci 0 error handler: %d",
rc);
/* Register PCI 1 error interrupt handler */
if ((rc = request_irq(MV64360_IRQ_PCI1, gt64260_pci_error_int_handler,
- SA_INTERRUPT, PCI1_INTR_STR, (void *)1)))
+ IRQF_DISABLED, PCI1_INTR_STR, (void *)1)))
printk(KERN_WARNING "Can't register pci 1 error handler: %d",
rc);
--- /dev/null
+/*
+ * i8259 interrupt controller driver.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+#include <linux/init.h>
+#include <linux/ioport.h>
+#include <linux/interrupt.h>
+#include <asm/io.h>
+#include <asm/i8259.h>
+
+static volatile void __iomem *pci_intack; /* RO, gives us the irq vector */
+
+static unsigned char cached_8259[2] = { 0xff, 0xff };
+#define cached_A1 (cached_8259[0])
+#define cached_21 (cached_8259[1])
+
+static DEFINE_SPINLOCK(i8259_lock);
+
+static int i8259_pic_irq_offset;
+
+/*
+ * Acknowledge the IRQ using either the PCI host bridge's interrupt
+ * acknowledge feature or poll. How i8259_init() is called determines
+ * which is used. It should be noted that polling is broken on some
+ * IBM and Motorola PReP boxes so we must use the int-ack feature on them.
+ */
+int i8259_irq(struct pt_regs *regs)
+{
+ int irq;
+
+ spin_lock(&i8259_lock);
+
+ /* Either int-ack or poll for the IRQ */
+ if (pci_intack)
+ irq = readb(pci_intack);
+ else {
+ /* Perform an interrupt acknowledge cycle on controller 1. */
+ outb(0x0C, 0x20); /* prepare for poll */
+ irq = inb(0x20) & 7;
+ if (irq == 2 ) {
+ /*
+ * Interrupt is cascaded so perform interrupt
+ * acknowledge on controller 2.
+ */
+ outb(0x0C, 0xA0); /* prepare for poll */
+ irq = (inb(0xA0) & 7) + 8;
+ }
+ }
+
+ if (irq == 7) {
+ /*
+ * This may be a spurious interrupt.
+ *
+ * Read the interrupt status register (ISR). If the most
+ * significant bit is not set then there is no valid
+ * interrupt.
+ */
+ if (!pci_intack)
+ outb(0x0B, 0x20); /* ISR register */
+ if(~inb(0x20) & 0x80)
+ irq = -1;
+ }
+
+ spin_unlock(&i8259_lock);
+ return irq + i8259_pic_irq_offset;
+}
+
+static void i8259_mask_and_ack_irq(unsigned int irq_nr)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&i8259_lock, flags);
+ irq_nr -= i8259_pic_irq_offset;
+ if (irq_nr > 7) {
+ cached_A1 |= 1 << (irq_nr-8);
+ inb(0xA1); /* DUMMY */
+ outb(cached_A1, 0xA1);
+ outb(0x20, 0xA0); /* Non-specific EOI */
+ outb(0x20, 0x20); /* Non-specific EOI to cascade */
+ } else {
+ cached_21 |= 1 << irq_nr;
+ inb(0x21); /* DUMMY */
+ outb(cached_21, 0x21);
+ outb(0x20, 0x20); /* Non-specific EOI */
+ }
+ spin_unlock_irqrestore(&i8259_lock, flags);
+}
+
+static void i8259_set_irq_mask(int irq_nr)
+{
+ outb(cached_A1,0xA1);
+ outb(cached_21,0x21);
+}
+
+static void i8259_mask_irq(unsigned int irq_nr)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&i8259_lock, flags);
+ irq_nr -= i8259_pic_irq_offset;
+ if (irq_nr < 8)
+ cached_21 |= 1 << irq_nr;
+ else
+ cached_A1 |= 1 << (irq_nr-8);
+ i8259_set_irq_mask(irq_nr);
+ spin_unlock_irqrestore(&i8259_lock, flags);
+}
+
+static void i8259_unmask_irq(unsigned int irq_nr)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&i8259_lock, flags);
+ irq_nr -= i8259_pic_irq_offset;
+ if (irq_nr < 8)
+ cached_21 &= ~(1 << irq_nr);
+ else
+ cached_A1 &= ~(1 << (irq_nr-8));
+ i8259_set_irq_mask(irq_nr);
+ spin_unlock_irqrestore(&i8259_lock, flags);
+}
+
+static struct irq_chip i8259_pic = {
+ .typename = " i8259 ",
+ .mask = i8259_mask_irq,
+ .unmask = i8259_unmask_irq,
+ .mask_ack = i8259_mask_and_ack_irq,
+};
+
+static struct resource pic1_iores = {
+ .name = "8259 (master)",
+ .start = 0x20,
+ .end = 0x21,
+ .flags = IORESOURCE_BUSY,
+};
+
+static struct resource pic2_iores = {
+ .name = "8259 (slave)",
+ .start = 0xa0,
+ .end = 0xa1,
+ .flags = IORESOURCE_BUSY,
+};
+
+static struct resource pic_edgectrl_iores = {
+ .name = "8259 edge control",
+ .start = 0x4d0,
+ .end = 0x4d1,
+ .flags = IORESOURCE_BUSY,
+};
+
+static struct irqaction i8259_irqaction = {
+ .handler = no_action,
+ .flags = SA_INTERRUPT,
+ .mask = CPU_MASK_NONE,
+ .name = "82c59 secondary cascade",
+};
+
+/*
+ * i8259_init()
+ * intack_addr - PCI interrupt acknowledge (real) address which will return
+ * the active irq from the 8259
+ */
+void __init i8259_init(unsigned long intack_addr, int offset)
+{
+ unsigned long flags;
+ int i;
+
+ spin_lock_irqsave(&i8259_lock, flags);
+ i8259_pic_irq_offset = offset;
+
+ /* init master interrupt controller */
+ outb(0x11, 0x20); /* Start init sequence */
+ outb(0x00, 0x21); /* Vector base */
+ outb(0x04, 0x21); /* edge triggered, Cascade (slave) on IRQ2 */
+ outb(0x01, 0x21); /* Select 8086 mode */
+
+ /* init slave interrupt controller */
+ outb(0x11, 0xA0); /* Start init sequence */
+ outb(0x08, 0xA1); /* Vector base */
+ outb(0x02, 0xA1); /* edge triggered, Cascade (slave) on IRQ2 */
+ outb(0x01, 0xA1); /* Select 8086 mode */
+
+ /* always read ISR */
+ outb(0x0B, 0x20);
+ outb(0x0B, 0xA0);
+
+ /* Mask all interrupts */
+ outb(cached_A1, 0xA1);
+ outb(cached_21, 0x21);
+
+ spin_unlock_irqrestore(&i8259_lock, flags);
+
+ for (i = 0; i < NUM_ISA_INTERRUPTS; ++i) {
+ set_irq_chip_and_handler(offset + i, &i8259_pic,
+ handle_level_irq);
+ irq_desc[offset + i].status |= IRQ_LEVEL;
+ }
+
+ /* reserve our resources */
+ setup_irq(offset + 2, &i8259_irqaction);
+ request_resource(&ioport_resource, &pic1_iores);
+ request_resource(&ioport_resource, &pic2_iores);
+ request_resource(&ioport_resource, &pic_edgectrl_iores);
+
+ if (intack_addr != 0)
+ pci_intack = ioremap(intack_addr, 1);
+
+}
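A minimal sketch of board glue for this version of the driver, assuming the two-argument i8259_init() above (the names and the choice of poll mode are illustrative; a PReP-style box would pass the PCI interrupt-acknowledge address instead of 0):

#include <asm/ptrace.h>
#include <asm/i8259.h>

static void __init board_pic_init(void)
{
	/* intack_addr == 0: no PCI int-ack cycle, poll the 8259 instead */
	i8259_init(0, 0 /* Linux irq offset for the 16 ISA interrupts */);
}

/* The platform's interrupt dispatch then just forwards to the 8259: */
static int board_get_irq(struct pt_regs *regs)
{
	return i8259_irq(regs);
}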
unsigned long flags;
/* Install error handler */
- if (request_irq(87, l2c_error_handler, SA_INTERRUPT, "L2C", 0) < 0){
+ if (request_irq(87, l2c_error_handler, IRQF_DISABLED, "L2C", 0) < 0){
printk(KERN_ERR "Cannot install L2C error handler, cache is not enabled\n");
return;
}
static struct irqaction pq2pci_irqaction = {
.handler = pq2pci_irq_demux,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.mask = CPU_MASK_NONE,
.name = "PQ2 PCI cascade",
};
/* Clear old errors and register CPU interface error intr handler */
mv64x60_write(&bh, MV64x60_CPU_ERR_CAUSE, 0);
if ((rc = request_irq(MV64x60_IRQ_CPU_ERR + mv64360_irq_base,
- mv64360_cpu_error_int_handler, SA_INTERRUPT, CPU_INTR_STR, 0)))
+ mv64360_cpu_error_int_handler, IRQF_DISABLED, CPU_INTR_STR, 0)))
printk(KERN_WARNING "Can't register cpu error handler: %d", rc);
mv64x60_write(&bh, MV64x60_CPU_ERR_MASK, 0);
/* Clear old errors and register internal SRAM error intr handler */
mv64x60_write(&bh, MV64360_SRAM_ERR_CAUSE, 0);
if ((rc = request_irq(MV64360_IRQ_SRAM_PAR_ERR + mv64360_irq_base,
- mv64360_sram_error_int_handler,SA_INTERRUPT,SRAM_INTR_STR, 0)))
+ mv64360_sram_error_int_handler,IRQF_DISABLED,SRAM_INTR_STR, 0)))
printk(KERN_WARNING "Can't register SRAM error handler: %d",rc);
/* Clear old errors and register PCI 0 error intr handler */
mv64x60_write(&bh, MV64x60_PCI0_ERR_CAUSE, 0);
if ((rc = request_irq(MV64360_IRQ_PCI0 + mv64360_irq_base,
mv64360_pci_error_int_handler,
- SA_INTERRUPT, PCI0_INTR_STR, (void *)0)))
+ IRQF_DISABLED, PCI0_INTR_STR, (void *)0)))
printk(KERN_WARNING "Can't register pci 0 error handler: %d",
rc);
mv64x60_write(&bh, MV64x60_PCI1_ERR_CAUSE, 0);
if ((rc = request_irq(MV64360_IRQ_PCI1 + mv64360_irq_base,
mv64360_pci_error_int_handler,
- SA_INTERRUPT, PCI1_INTR_STR, (void *)1)))
+ IRQF_DISABLED, PCI1_INTR_STR, (void *)1)))
printk(KERN_WARNING "Can't register pci 1 error handler: %d",
rc);
if (OpenPIC == NULL)
return;
- /* IPIs are marked SA_INTERRUPT as they must run with irqs disabled */
+ /*
+ * IPIs are marked IRQF_DISABLED as they must run with irqs
+ * disabled
+ */
request_irq(OPENPIC_VEC_IPI+open_pic_irq_offset,
- openpic_ipi_action, SA_INTERRUPT,
+ openpic_ipi_action, IRQF_DISABLED,
"IPI0 (call function)", NULL);
request_irq(OPENPIC_VEC_IPI+open_pic_irq_offset+1,
- openpic_ipi_action, SA_INTERRUPT,
+ openpic_ipi_action, IRQF_DISABLED,
"IPI1 (reschedule)", NULL);
request_irq(OPENPIC_VEC_IPI+open_pic_irq_offset+2,
- openpic_ipi_action, SA_INTERRUPT,
+ openpic_ipi_action, IRQF_DISABLED,
"IPI2 (invalidate tlb)", NULL);
request_irq(OPENPIC_VEC_IPI+open_pic_irq_offset+3,
- openpic_ipi_action, SA_INTERRUPT,
+ openpic_ipi_action, IRQF_DISABLED,
"IPI3 (xmon break)", NULL);
for ( i = 0; i < OPENPIC_NUM_IPI ; i++ )
static struct irqaction openpic_cascade_irqaction = {
.handler = no_action,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.mask = CPU_MASK_NONE,
};
bool
default y
+config LOCKDEP_SUPPORT
+ bool
+ default y
+
+config STACKTRACE_SUPPORT
+ bool
+ default y
+
config RWSEM_GENERIC_SPINLOCK
bool
menu "Kernel hacking"
+config TRACE_IRQFLAGS_SUPPORT
+ bool
+ default y
+
source "lib/Kconfig.debug"
endmenu
cflags-$(CONFIG_MARCH_Z900) += $(call cc-option,-march=z900)
cflags-$(CONFIG_MARCH_Z990) += $(call cc-option,-march=z990)
+#
+# Prevent tail-call optimizations, to get clearer backtraces:
+#
+cflags-$(CONFIG_FRAME_POINTER) += -fno-optimize-sibling-calls
+
# old style option for packed stacks
ifeq ($(call cc-option-yn,-mkernel-backchain),y)
cflags-$(CONFIG_PACK_STACK) += -mkernel-backchain -D__PACK_STACK
obj-$(CONFIG_BINFMT_ELF32) += binfmt_elf32.o
obj-$(CONFIG_VIRT_TIMER) += vtime.o
+obj-$(CONFIG_STACKTRACE) += stacktrace.o
# Kexec part
S390_KEXEC_OBJS := machine_kexec.o crash.o
#define BASED(name) name-system_call(%r13)
+#ifdef CONFIG_TRACE_IRQFLAGS
+ .macro TRACE_IRQS_ON
+ l %r1,BASED(.Ltrace_irq_on)
+ basr %r14,%r1
+ .endm
+
+ .macro TRACE_IRQS_OFF
+ l %r1,BASED(.Ltrace_irq_off)
+ basr %r14,%r1
+ .endm
+#else
+#define TRACE_IRQS_ON
+#define TRACE_IRQS_OFF
+#endif
+
/*
* Register usage in interrupt handlers:
* R9 - pointer to current task structure
st %r15,SP_R15(%r15) # store stack pointer for new kthread
0: l %r1,BASED(.Lschedtail)
basr %r14,%r1
+ TRACE_IRQS_ON
stosm __SF_EMPTY(%r15),0x03 # reenable interrupts
b BASED(sysc_return)
mvc __THREAD_per+__PER_address(4,%r1),__LC_PER_ADDRESS
mvc __THREAD_per+__PER_access_id(1,%r1),__LC_PER_ACCESS_ID
oi __TI_flags+3(%r9),_TIF_SINGLE_STEP # set TIF_SINGLE_STEP
+ TRACE_IRQS_ON
stosm __SF_EMPTY(%r15),0x03 # reenable interrupts
b BASED(sysc_do_svc)
io_no_vtime:
#endif
l %r9,__LC_THREAD_INFO # load pointer to thread_info struct
+ TRACE_IRQS_OFF
l %r1,BASED(.Ldo_IRQ) # load address of do_IRQ
la %r2,SP_PTREGS(%r15) # address of register-save area
basr %r14,%r1 # branch to standard irq handler
+ TRACE_IRQS_ON
io_return:
tm SP_PSW+1(%r15),0x01 # returning to user ?
ext_no_vtime:
#endif
l %r9,__LC_THREAD_INFO # load pointer to thread_info struct
+ TRACE_IRQS_OFF
la %r2,SP_PTREGS(%r15) # address of register-save area
lh %r3,__LC_EXT_INT_CODE # get interruption code
l %r1,BASED(.Ldo_extint)
basr %r14,%r1
+ TRACE_IRQS_ON
b BASED(io_return)
__critical_end:
stosm __SF_EMPTY(%r15),0x04 # turn dat on
tm __TI_flags+3(%r9),_TIF_MCCK_PENDING
bno BASED(mcck_return)
+ TRACE_IRQS_OFF
l %r1,BASED(.Ls390_handle_mcck)
basr %r14,%r1 # call machine check handler
+ TRACE_IRQS_ON
mcck_return:
mvc __LC_RETURN_MCCK_PSW(8),SP_PSW(%r15) # move return PSW
ni __LC_RETURN_MCCK_PSW+1,0xfd # clear wait state bit
.Lvfork: .long sys_vfork
.Lschedtail: .long schedule_tail
.Lsysc_table: .long sys_call_table
-
+#ifdef CONFIG_TRACE_IRQFLAGS
+.Ltrace_irq_on:.long trace_hardirqs_on
+.Ltrace_irq_off:
+ .long trace_hardirqs_off
+#endif
.Lcritical_start:
.long __critical_start + 0x80000000
.Lcritical_end:
#define BASED(name) name-system_call(%r13)
+#ifdef CONFIG_TRACE_IRQFLAGS
+ .macro TRACE_IRQS_ON
+ brasl %r14,trace_hardirqs_on
+ .endm
+
+ .macro TRACE_IRQS_OFF
+ brasl %r14,trace_hardirqs_off
+ .endm
+#else
+#define TRACE_IRQS_ON
+#define TRACE_IRQS_OFF
+#endif
+
.macro STORE_TIMER lc_offset
#ifdef CONFIG_VIRT_CPU_ACCOUNTING
stpt \lc_offset
jo 0f
stg %r15,SP_R15(%r15) # store stack pointer for new kthread
0: brasl %r14,schedule_tail
+ TRACE_IRQS_ON
stosm 24(%r15),0x03 # reenable interrupts
j sysc_return
mvc __THREAD_per+__PER_address(8,%r1),__LC_PER_ADDRESS
mvc __THREAD_per+__PER_access_id(1,%r1),__LC_PER_ACCESS_ID
oi __TI_flags+7(%r9),_TIF_SINGLE_STEP # set TIF_SINGLE_STEP
+ TRACE_IRQS_ON
stosm __SF_EMPTY(%r15),0x03 # reenable interrupts
j sysc_do_svc
io_no_vtime:
#endif
lg %r9,__LC_THREAD_INFO # load pointer to thread_info struct
+ TRACE_IRQS_OFF
la %r2,SP_PTREGS(%r15) # address of register-save area
brasl %r14,do_IRQ # call standard irq handler
+ TRACE_IRQS_ON
io_return:
tm SP_PSW+1(%r15),0x01 # returning to user ?
ext_no_vtime:
#endif
lg %r9,__LC_THREAD_INFO # load pointer to thread_info struct
+ TRACE_IRQS_OFF
la %r2,SP_PTREGS(%r15) # address of register-save area
llgh %r3,__LC_EXT_INT_CODE # get interruption code
brasl %r14,do_extint
+ TRACE_IRQS_ON
j io_return
__critical_end:
stosm __SF_EMPTY(%r15),0x04 # turn dat on
tm __TI_flags+7(%r9),_TIF_MCCK_PENDING
jno mcck_return
+ TRACE_IRQS_OFF
brasl %r14,s390_handle_mcck
+ TRACE_IRQS_ON
mcck_return:
mvc __LC_RETURN_MCCK_PSW(16),SP_PSW(%r15) # move return PSW
ni __LC_RETURN_MCCK_PSW+1,0xfd # clear wait state bit
local_irq_save(flags);
- account_system_vtime(current);
-
- local_bh_disable();
-
if (local_softirq_pending()) {
/* Get current stack pointer. */
asm volatile("la %0,0(15)" : "=a" (old));
__do_softirq();
}
- account_system_vtime(current);
-
- __local_bh_enable();
-
local_irq_restore(flags);
}
return;
}
+ trace_hardirqs_on();
/* Wait for external, I/O or machine check interrupt. */
__load_psw_mask(PSW_KERNEL_BITS | PSW_MASK_WAIT |
PSW_MASK_IO | PSW_MASK_EXT);
--- /dev/null
+/*
+ * arch/s390/kernel/stacktrace.c
+ *
+ * Stack trace management functions
+ *
+ * Copyright (C) IBM Corp. 2006
+ * Author(s): Heiko Carstens <heiko.carstens@de.ibm.com>
+ */
+
+#include <linux/sched.h>
+#include <linux/stacktrace.h>
+#include <linux/kallsyms.h>
+
+static inline unsigned long save_context_stack(struct stack_trace *trace,
+ unsigned int *skip,
+ unsigned long sp,
+ unsigned long low,
+ unsigned long high)
+{
+ struct stack_frame *sf;
+ struct pt_regs *regs;
+ unsigned long addr;
+
+ while(1) {
+ sp &= PSW_ADDR_INSN;
+ if (sp < low || sp > high)
+ return sp;
+ sf = (struct stack_frame *)sp;
+ while(1) {
+ addr = sf->gprs[8] & PSW_ADDR_INSN;
+ if (!(*skip))
+ trace->entries[trace->nr_entries++] = addr;
+ else
+ (*skip)--;
+ if (trace->nr_entries >= trace->max_entries)
+ return sp;
+ low = sp;
+ sp = sf->back_chain & PSW_ADDR_INSN;
+ if (!sp)
+ break;
+ if (sp <= low || sp > high - sizeof(*sf))
+ return sp;
+ sf = (struct stack_frame *)sp;
+ }
+ /* Zero backchain detected, check for interrupt frame. */
+ sp = (unsigned long)(sf + 1);
+ if (sp <= low || sp > high - sizeof(*regs))
+ return sp;
+ regs = (struct pt_regs *)sp;
+ addr = regs->psw.addr & PSW_ADDR_INSN;
+ if (!(*skip))
+ trace->entries[trace->nr_entries++] = addr;
+ else
+ (*skip)--;
+ if (trace->nr_entries >= trace->max_entries)
+ return sp;
+ low = sp;
+ sp = regs->gprs[15];
+ }
+}
+
+void save_stack_trace(struct stack_trace *trace,
+ struct task_struct *task, int all_contexts,
+ unsigned int skip)
+{
+ register unsigned long sp asm ("15");
+ unsigned long orig_sp;
+
+ sp &= PSW_ADDR_INSN;
+ orig_sp = sp;
+
+ sp = save_context_stack(trace, &skip, sp,
+ S390_lowcore.panic_stack - PAGE_SIZE,
+ S390_lowcore.panic_stack);
+ if ((sp != orig_sp) && !all_contexts)
+ return;
+ sp = save_context_stack(trace, &skip, sp,
+ S390_lowcore.async_stack - ASYNC_SIZE,
+ S390_lowcore.async_stack);
+ if ((sp != orig_sp) && !all_contexts)
+ return;
+ if (task)
+ save_context_stack(trace, &skip, sp,
+ (unsigned long) task_stack_page(task),
+ (unsigned long) task_stack_page(task) + THREAD_SIZE);
+ else
+ save_context_stack(trace, &skip, sp, S390_lowcore.thread_info,
+ S390_lowcore.thread_info + THREAD_SIZE);
+ return;
+}
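The helper fills a caller-supplied struct stack_trace; a minimal sketch of a caller (buffer size, symbol printing and the function name are illustrative only):

#include <linux/stacktrace.h>
#include <linux/kallsyms.h>

static unsigned long entries[16];

static void dump_current_stack(void)
{
	int i;
	struct stack_trace trace = {
		.entries	= entries,
		.max_entries	= 16,
		.nr_entries	= 0,
	};

	/* task == NULL: current context only; skip no frames */
	save_stack_trace(&trace, NULL, 0, 0);

	for (i = 0; i < trace.nr_entries; i++)
		print_symbol("  [<%s>]\n", trace.entries[i]);
}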
{
printk("SnapGear: EraseConfig init\n");
/* Setup "EraseConfig" switch on external IRQ 0 */
- if (request_irq(IRL0_IRQ, eraseconfig_interrupt, SA_INTERRUPT,
+ if (request_irq(IRL0_IRQ, eraseconfig_interrupt, IRQF_DISABLED,
"Erase Config", NULL))
printk("SnapGear: failed to register IRQ%d for Reset witch\n",
IRL0_IRQ);
return __irq_demux(irq);
}
-static struct irqaction irq0 = { hd64461_interrupt, SA_INTERRUPT, CPU_MASK_NONE, "HD64461", NULL, NULL };
+static struct irqaction irq0 = { hd64461_interrupt, IRQF_DISABLED, CPU_MASK_NONE, "HD64461", NULL, NULL };
int __init setup_hd64461(void)
{
if (!request_region(HD64465_REG_GPACR, 0x1000, MODNAME))
return -EBUSY;
if (request_irq(HD64465_IRQ_GPIO, hd64465_gpio_interrupt,
- SA_INTERRUPT, MODNAME, 0))
+ IRQF_DISABLED, MODNAME, 0))
goto out_irqfailed;
printk("HD64465 GPIO layer on irq %d\n", HD64465_IRQ_GPIO);
return irq;
}
-static struct irqaction irq0 = { hd64465_interrupt, SA_INTERRUPT, CPU_MASK_NONE, "HD64465", NULL, NULL};
+static struct irqaction irq0 = { hd64465_interrupt, IRQF_DISABLED, CPU_MASK_NONE, "HD64465", NULL, NULL};
static int __init setup_hd64465(void)
static struct irqaction irq0 = {
.name = "voyagergx",
.handler = voyagergx_interrupt,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.mask = CPU_MASK_NONE,
};
static struct irqaction g2_dma_irq = {
.name = "g2 DMA handler",
.handler = g2_dma_interrupt,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
};
static int g2_enable_dma(struct dma_channel *chan)
static struct irqaction pvr2_dma_irq = {
.name = "pvr2 DMA handler",
.handler = pvr2_dma_interrupt,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
};
static struct dma_ops pvr2_dma_ops = {
chan->chan);
return request_irq(get_dmte_irq(chan->chan), dma_tei,
- SA_INTERRUPT, name, chan);
+ IRQF_DISABLED, name, chan);
}
static void sh_dmac_free_dma(struct dma_channel *chan)
#ifdef CONFIG_CPU_SH4
make_ipr_irq(DMAE_IRQ, DMA_IPR_ADDR, DMA_IPR_POS, DMA_PRIORITY);
- i = request_irq(DMAE_IRQ, dma_err, SA_INTERRUPT, "DMAC Address Error", 0);
+ i = request_irq(DMAE_IRQ, dma_err, IRQF_DISABLED, "DMAC Address Error", 0);
if (i < 0)
return i;
#endif
PHYSADDR(memory_end) - PHYSADDR(memory_start));
if (request_irq(ST40PCI_ERR_IRQ, st40_pci_irq,
- SA_INTERRUPT, "st40pci", NULL)) {
+ IRQF_DISABLED, "st40pci", NULL)) {
printk(KERN_ERR "st40pci: Cannot hook interrupt\n");
return -EIO;
}
static struct irqaction tmu_irq = {
.name = "timer",
.handler = tmu_timer_interrupt,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.mask = CPU_MASK_NONE,
};
static struct irqaction irq_dmte = {
.handler = dma_mte,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.name = "DMA MTE",
};
static struct irqaction irq_derr = {
.handler = dma_err,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.name = "DMA Error",
};
static int __init pcibios_init(void)
{
if (request_irq(IRQ_ERR, pcish5_err_irq,
- SA_INTERRUPT, "PCI Error",NULL) < 0) {
+ IRQF_DISABLED, "PCI Error",NULL) < 0) {
printk(KERN_ERR "PCISH5: Cannot hook PCI_PERR interrupt\n");
return -EINVAL;
}
if (request_irq(IRQ_SERR, pcish5_serr_irq,
- SA_INTERRUPT, "PCI SERR interrupt", NULL) < 0) {
+ IRQF_DISABLED, "PCI SERR interrupt", NULL) < 0) {
printk(KERN_ERR "PCISH5: Cannot hook PCI_SERR interrupt\n");
return -EINVAL;
}
return IRQ_HANDLED;
}
-static struct irqaction irq0 = { timer_interrupt, SA_INTERRUPT, CPU_MASK_NONE, "timer", NULL, NULL};
-static struct irqaction irq1 = { sh64_rtc_interrupt, SA_INTERRUPT, CPU_MASK_NONE, "rtc", NULL, NULL};
+static struct irqaction irq0 = { timer_interrupt, IRQF_DISABLED, CPU_MASK_NONE, "timer", NULL, NULL};
+static struct irqaction irq1 = { sh64_rtc_interrupt, IRQF_DISABLED, CPU_MASK_NONE, "rtc", NULL, NULL};
void __init time_init(void)
{
static struct irqaction cayman_action_smsc = {
.name = "Cayman SMSC Mux",
.handler = cayman_interrupt_smsc,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
};
static struct irqaction cayman_action_pci2 = {
.name = "Cayman PCI2 Mux",
.handler = cayman_interrupt_pci2,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
};
static void enable_cayman_irq(unsigned int irq)
}
#endif
seq_printf(p, " %c %s",
- (action->flags & SA_INTERRUPT) ? '+' : ' ',
+ (action->flags & IRQF_DISABLED) ? '+' : ' ',
action->name);
for (action=action->next; action; action = action->next) {
seq_printf(p, ",%s %s",
- (action->flags & SA_INTERRUPT) ? " +" : "",
+ (action->flags & IRQF_DISABLED) ? " +" : "",
action->name);
}
seq_putc(p, '\n');
printk("Trying to free free shared IRQ%d\n",irq);
goto out_unlock;
}
- } else if (action->flags & SA_SHIRQ) {
+ } else if (action->flags & IRQF_SHARED) {
printk("Trying to free shared IRQ%d with NULL device ID\n", irq);
goto out_unlock;
}
action = sparc_irq[cpu_irq].action;
if(action) {
- if(action->flags & SA_SHIRQ)
+ if(action->flags & IRQF_SHARED)
panic("Trying to register fast irq when already shared.\n");
- if(irqflags & SA_SHIRQ)
+ if(irqflags & IRQF_SHARED)
panic("Trying to register fast irq as shared.\n");
/* Anyway, someone already owns it so cannot be made fast. */
actionp = &sparc_irq[cpu_irq].action;
action = *actionp;
if (action) {
- if (!(action->flags & SA_SHIRQ) || !(irqflags & SA_SHIRQ)) {
+ if (!(action->flags & IRQF_SHARED) || !(irqflags & IRQF_SHARED)) {
ret = -EBUSY;
goto out_unlock;
}
- if ((action->flags & SA_INTERRUPT) != (irqflags & SA_INTERRUPT)) {
+ if ((action->flags & IRQF_DISABLED) != (irqflags & IRQF_DISABLED)) {
printk("Attempt to mix fast and slow interrupts on IRQ%d denied\n", irq);
ret = -EBUSY;
goto out_unlock;
writel (PCI_COUNTER_IRQ_SET(timer_irq, 0),
pcic->pcic_regs+PCI_COUNTER_IRQ);
irq = request_irq(timer_irq, pcic_timer_handler,
- (SA_INTERRUPT | SA_STATIC_ALLOC), "timer", NULL);
+ (IRQF_DISABLED | SA_STATIC_ALLOC), "timer", NULL);
if (irq) {
prom_printf("time_init: unable to attach IRQ%d\n", timer_irq);
prom_halt();
irq = request_irq(TIMER_IRQ,
counter_fn,
- (SA_INTERRUPT | SA_STATIC_ALLOC),
+ (IRQF_DISABLED | SA_STATIC_ALLOC),
"timer", NULL);
if (irq) {
prom_printf("time_init: unable to attach IRQ%d\n",TIMER_IRQ);
kstat_cpu(cpu_logical_map(x)).irqs[i]);
#endif
seq_printf(p, "%c %s",
- (action->flags & SA_INTERRUPT) ? '+' : ' ',
+ (action->flags & IRQF_DISABLED) ? '+' : ' ',
action->name);
action = action->next;
for (;;) {
for (; action; action = action->next) {
seq_printf(p, ",%s %s",
- (action->flags & SA_INTERRUPT) ? " +" : "",
+ (action->flags & IRQF_DISABLED) ? " +" : "",
action->name);
}
if (!sbusl) break;
printk("Trying to free free shared IRQ%d\n",irq);
goto out_unlock;
}
- } else if (action->flags & SA_SHIRQ) {
+ } else if (action->flags & IRQF_SHARED) {
printk("Trying to free shared IRQ%d with NULL device ID\n", irq);
goto out_unlock;
}
action = *actionp;
if (action) {
- if ((action->flags & SA_SHIRQ) && (irqflags & SA_SHIRQ)) {
+ if ((action->flags & IRQF_SHARED) && (irqflags & IRQF_SHARED)) {
for (tmp = action; tmp->next; tmp = tmp->next);
} else {
ret = -EBUSY;
goto out_unlock;
}
- if ((action->flags & SA_INTERRUPT) ^ (irqflags & SA_INTERRUPT)) {
+ if ((action->flags & IRQF_DISABLED) ^ (irqflags & IRQF_DISABLED)) {
printk("Attempt to mix fast and slow interrupts on IRQ%d denied\n", irq);
ret = -EBUSY;
goto out_unlock;
irq = request_irq(TIMER_IRQ,
counter_fn,
- (SA_INTERRUPT | SA_STATIC_ALLOC),
+ (IRQF_DISABLED | SA_STATIC_ALLOC),
"timer", NULL);
if (irq) {
prom_printf("time_init: unable to attach IRQ%d\n",TIMER_IRQ);
irq = request_irq(TIMER_IRQ,
counter_fn,
- (SA_INTERRUPT | SA_STATIC_ALLOC),
+ (IRQF_DISABLED | SA_STATIC_ALLOC),
"timer", NULL);
if (irq) {
prom_printf("time_init: unable to attach IRQ%d\n",TIMER_IRQ);
if (!request_irq(irq_nr,
handler,
- (SA_INTERRUPT | SA_STATIC_ALLOC),
+ (IRQF_DISABLED | SA_STATIC_ALLOC),
"counter14",
NULL)) {
install_linux_ticker();
if (on) {
if (p->flags & EBUS_DMA_FLAG_USE_EBDMA_HANDLER) {
- if (request_irq(p->irq, ebus_dma_irq, SA_SHIRQ, p->name, p))
+ if (request_irq(p->irq, ebus_dma_irq, IRQF_SHARED, p->name, p))
return -EBUSY;
}
if (op->num_irqs < 6)
return;
- request_irq(op->irqs[1], psycho_ue_intr, SA_SHIRQ, "PSYCHO UE", p);
- request_irq(op->irqs[2], psycho_ce_intr, SA_SHIRQ, "PSYCHO CE", p);
- request_irq(op->irqs[5], psycho_pcierr_intr, SA_SHIRQ,
+ request_irq(op->irqs[1], psycho_ue_intr, IRQF_SHARED, "PSYCHO UE", p);
+ request_irq(op->irqs[2], psycho_ce_intr, IRQF_SHARED, "PSYCHO CE", p);
+ request_irq(op->irqs[5], psycho_pcierr_intr, IRQF_SHARED,
"PSYCHO PCIERR-A", &p->pbm_A);
- request_irq(op->irqs[0], psycho_pcierr_intr, SA_SHIRQ,
+ request_irq(op->irqs[0], psycho_pcierr_intr, IRQF_SHARED,
"PSYCHO PCIERR-B", &p->pbm_B);
/* Enable UE and CE interrupts for controller. */
SABRE_UEAFSR_SDRD | SABRE_UEAFSR_SDWR |
SABRE_UEAFSR_SDTE | SABRE_UEAFSR_PDTE));
- request_irq(op->irqs[1], sabre_ue_intr, SA_SHIRQ, "SABRE UE", p);
+ request_irq(op->irqs[1], sabre_ue_intr, IRQF_SHARED, "SABRE UE", p);
sabre_write(base + SABRE_CE_AFSR,
(SABRE_CEAFSR_PDRD | SABRE_CEAFSR_PDWR |
SABRE_CEAFSR_SDRD | SABRE_CEAFSR_SDWR));
- request_irq(op->irqs[2], sabre_ce_intr, SA_SHIRQ, "SABRE CE", p);
- request_irq(op->irqs[0], sabre_pcierr_intr, SA_SHIRQ,
+ request_irq(op->irqs[2], sabre_ce_intr, IRQF_SHARED, "SABRE CE", p);
+ request_irq(op->irqs[0], sabre_pcierr_intr, IRQF_SHARED,
"SABRE PCIERR", p);
tmp = sabre_read(base + SABRE_PCICTRL);
pbm = pbm_for_ino(p, SCHIZO_UE_INO);
op = of_find_device_by_node(pbm->prom_node);
if (op)
- request_irq(op->irqs[1], schizo_ue_intr, SA_SHIRQ,
+ request_irq(op->irqs[1], schizo_ue_intr, IRQF_SHARED,
"TOMATILLO_UE", p);
pbm = pbm_for_ino(p, SCHIZO_CE_INO);
op = of_find_device_by_node(pbm->prom_node);
if (op)
- request_irq(op->irqs[2], schizo_ce_intr, SA_SHIRQ,
+ request_irq(op->irqs[2], schizo_ce_intr, IRQF_SHARED,
"TOMATILLO CE", p);
pbm = pbm_for_ino(p, SCHIZO_PCIERR_A_INO);
op = of_find_device_by_node(pbm->prom_node);
if (op)
- request_irq(op->irqs[0], schizo_pcierr_intr, SA_SHIRQ,
+ request_irq(op->irqs[0], schizo_pcierr_intr, IRQF_SHARED,
"TOMATILLO PCIERR-A", pbm);
pbm = pbm_for_ino(p, SCHIZO_PCIERR_B_INO);
op = of_find_device_by_node(pbm->prom_node);
if (op)
- request_irq(op->irqs[0], schizo_pcierr_intr, SA_SHIRQ,
+ request_irq(op->irqs[0], schizo_pcierr_intr, IRQF_SHARED,
"TOMATILLO PCIERR-B", pbm);
pbm = pbm_for_ino(p, SCHIZO_SERR_INO);
op = of_find_device_by_node(pbm->prom_node);
if (op)
- request_irq(op->irqs[3], schizo_safarierr_intr, SA_SHIRQ,
+ request_irq(op->irqs[3], schizo_safarierr_intr, IRQF_SHARED,
"TOMATILLO SERR", p);
/* Enable UE and CE interrupts for controller. */
pbm = pbm_for_ino(p, SCHIZO_UE_INO);
op = of_find_device_by_node(pbm->prom_node);
if (op)
- request_irq(op->irqs[1], schizo_ue_intr, SA_SHIRQ,
+ request_irq(op->irqs[1], schizo_ue_intr, IRQF_SHARED,
"SCHIZO_UE", p);
pbm = pbm_for_ino(p, SCHIZO_CE_INO);
op = of_find_device_by_node(pbm->prom_node);
if (op)
- request_irq(op->irqs[2], schizo_ce_intr, SA_SHIRQ,
+ request_irq(op->irqs[2], schizo_ce_intr, IRQF_SHARED,
"SCHIZO CE", p);
pbm = pbm_for_ino(p, SCHIZO_PCIERR_A_INO);
op = of_find_device_by_node(pbm->prom_node);
if (op)
- request_irq(op->irqs[0], schizo_pcierr_intr, SA_SHIRQ,
+ request_irq(op->irqs[0], schizo_pcierr_intr, IRQF_SHARED,
"SCHIZO PCIERR-A", pbm);
pbm = pbm_for_ino(p, SCHIZO_PCIERR_B_INO);
op = of_find_device_by_node(pbm->prom_node);
if (op)
- request_irq(op->irqs[0], schizo_pcierr_intr, SA_SHIRQ,
+ request_irq(op->irqs[0], schizo_pcierr_intr, IRQF_SHARED,
"SCHIZO PCIERR-B", pbm);
pbm = pbm_for_ino(p, SCHIZO_SERR_INO);
op = of_find_device_by_node(pbm->prom_node);
if (op)
- request_irq(op->irqs[3], schizo_safarierr_intr, SA_SHIRQ,
+ request_irq(op->irqs[3], schizo_safarierr_intr, IRQF_SHARED,
"SCHIZO SERR", p);
/* Enable UE and CE interrupts for controller. */
irq = sbus_build_irq(sbus, SYSIO_UE_INO);
if (request_irq(irq, sysio_ue_handler,
- SA_SHIRQ, "SYSIO UE", sbus) < 0) {
+ IRQF_SHARED, "SYSIO UE", sbus) < 0) {
prom_printf("SYSIO[%x]: Cannot register UE interrupt.\n",
sbus->portid);
prom_halt();
irq = sbus_build_irq(sbus, SYSIO_CE_INO);
if (request_irq(irq, sysio_ce_handler,
- SA_SHIRQ, "SYSIO CE", sbus) < 0) {
+ IRQF_SHARED, "SYSIO CE", sbus) < 0) {
prom_printf("SYSIO[%x]: Cannot register CE interrupt.\n",
sbus->portid);
prom_halt();
irq = sbus_build_irq(sbus, SYSIO_SBUSERR_INO);
if (request_irq(irq, sysio_sbus_error_handler,
- SA_SHIRQ, "SYSIO SBUS Error", sbus) < 0) {
+ IRQF_SHARED, "SYSIO SBUS Error", sbus) < 0) {
prom_printf("SYSIO[%x]: Cannot register SBUS Error interrupt.\n",
sbus->portid);
prom_halt();
int err;
/* Interrupts are enabled here because we registered the interrupt with
- * SA_INTERRUPT (see line_setup_irq).*/
+ * IRQF_DISABLED (see line_setup_irq).*/
spin_lock_irq(&line->lock);
err = flush_buffer(line);
int line_setup_irq(int fd, int input, int output, struct line *line, void *data)
{
struct line_driver *driver = line->driver;
- int err = 0, flags = SA_INTERRUPT | SA_SHIRQ | SA_SAMPLE_RANDOM;
+ int err = 0, flags = IRQF_DISABLED | IRQF_SHARED | IRQF_SAMPLE_RANDOM;
if (input)
err = um_request_irq(driver->read_irq, fd, IRQ_READ,
spin_unlock(&winch_handler_lock);
if(um_request_irq(WINCH_IRQ, fd, IRQ_READ, winch_interrupt,
- SA_INTERRUPT | SA_SHIRQ | SA_SAMPLE_RANDOM,
+ IRQF_DISABLED | IRQF_SHARED | IRQF_SAMPLE_RANDOM,
"winch", winch) < 0)
printk("register_winch_irq - failed to register IRQ\n");
}
register_reboot_notifier(&reboot_notifier);
err = um_request_irq(MCONSOLE_IRQ, sock, IRQ_READ, mconsole_interrupt,
- SA_INTERRUPT | SA_SHIRQ | SA_SAMPLE_RANDOM,
+ IRQF_DISABLED | IRQF_SHARED | IRQF_SAMPLE_RANDOM,
"mconsole", (void *)sock);
if (err){
printk("Failed to get IRQ for management console\n");
}
err = um_request_irq(dev->irq, lp->fd, IRQ_READ, uml_net_interrupt,
- SA_INTERRUPT | SA_SHIRQ, dev->name, dev);
+ IRQF_DISABLED | IRQF_SHARED, dev->name, dev);
if(err != 0){
printk(KERN_ERR "uml_net_open: failed to get irq(%d)\n", err);
err = -ENETUNREACH;
.port = port });
if(um_request_irq(TELNETD_IRQ, socket[0], IRQ_READ, pipe_interrupt,
- SA_INTERRUPT | SA_SHIRQ | SA_SAMPLE_RANDOM,
+ IRQF_DISABLED | IRQF_SHARED | IRQF_SAMPLE_RANDOM,
"telnetd", conn)){
printk(KERN_ERR "port_accept : failed to get IRQ for "
"telnetd\n");
goto out_free;
}
if(um_request_irq(ACCEPT_IRQ, fd, IRQ_READ, port_interrupt,
- SA_INTERRUPT | SA_SHIRQ | SA_SAMPLE_RANDOM, "port",
+ IRQF_DISABLED | IRQF_SHARED | IRQF_SAMPLE_RANDOM, "port",
port)){
printk(KERN_ERR "Failed to get IRQ for port %d\n", port_num);
goto out_close;
return(0);
}
err = um_request_irq(UBD_IRQ, thread_fd, IRQ_READ, ubd_intr,
- SA_INTERRUPT, "ubd", ubd_dev);
+ IRQF_DISABLED, "ubd", ubd_dev);
if(err != 0)
printk(KERN_ERR "um_request_irq failed - errno = %d\n", -err);
return 0;
init_completion(&data->ready);
err = um_request_irq(XTERM_IRQ, socket, IRQ_READ, xterm_interrupt,
- SA_INTERRUPT | SA_SHIRQ | SA_SAMPLE_RANDOM,
+ IRQF_DISABLED | IRQF_SHARED | IRQF_SAMPLE_RANDOM,
"xterm", data);
if (err){
printk(KERN_ERR "xterm_fd : failed to get IRQ for xterm, "
}
err = um_request_irq(irq, fds[0], IRQ_READ, handler,
- SA_INTERRUPT | SA_SAMPLE_RANDOM, name,
+ IRQF_DISABLED | IRQF_SAMPLE_RANDOM, name,
(void *) (long) fds[0]);
if (err) {
printk("init_aio_irq - : um_request_irq failed, err = %d\n",
int err;
err = um_request_irq(SIGIO_WRITE_IRQ, fd, IRQ_READ, sigio_interrupt,
- SA_INTERRUPT | SA_SAMPLE_RANDOM, "write sigio",
+ IRQF_DISABLED | IRQF_SAMPLE_RANDOM, "write sigio",
NULL);
if(err){
printk("write_sigio_irq : um_request_irq failed, err = %d\n",
int err;
user_time_init();
- err = request_irq(TIMER_IRQ, um_timer, SA_INTERRUPT, "timer", NULL);
+ err = request_irq(TIMER_IRQ, um_timer, IRQF_DISABLED, "timer", NULL);
if(err != 0)
printk(KERN_ERR "timer_init : request_irq failed - "
"errno = %d\n", -err);
panic("read failed in suspend_new_thread, err = %d", -err);
}
-void schedule_tail(task_t *prev);
+void schedule_tail(struct task_struct *prev);
static void new_thread_handler(int sig)
{
{
}
+#ifdef CONFIG_SMP
void alternatives_smp_module_add(struct module *mod, char *name,
void *locks, void *locks_end,
void *text, void *text_end)
void alternatives_smp_module_del(struct module *mod)
{
}
+#endif
/* First enable the CPU interrupt. */
int rval =
request_irq (IRQ_GINT(gint), gbus_int_handle_irq,
- SA_INTERRUPT,
+ IRQF_DISABLED,
"gbus_int_handler",
&gint_num_active_irqs[gint]);
if (rval != 0)
if (cb_pic_active_irqs == 0) {
rval = request_irq (IRQ_CB_PIC, cb_pic_handle_irq,
- SA_INTERRUPT, "cb_pic_handler", 0);
+ IRQF_DISABLED, "cb_pic_handler", 0);
if (rval != 0)
return rval;
}
static int timer_dev_id;
static struct irqaction timer_irqaction = {
timer_interrupt,
- SA_INTERRUPT,
+ IRQF_DISABLED,
CPU_MASK_NONE,
"timer",
&timer_dev_id,
bool
default y
+config LOCKDEP_SUPPORT
+ bool
+ default y
+
+config STACKTRACE_SUPPORT
+ bool
+ default y
+
config SEMAPHORE_SLEEPERS
bool
default y
menu "Kernel hacking"
+config TRACE_IRQFLAGS_SUPPORT
+ bool
+ default y
+
source "lib/Kconfig.debug"
config DEBUG_RODATA
*/
#include <asm/segment.h>
-#include <linux/version.h>
+#include <linux/utsrelease.h>
#include <linux/compile.h>
#include <asm/boot.h>
#include <asm/e820.h>
#include <asm/thread_info.h>
#include <asm/segment.h>
#include <asm/vsyscall32.h>
+#include <asm/irqflags.h>
#include <linux/linkage.h>
#define IA32_NR_syscalls ((ia32_syscall_end - ia32_sys_call_table)/8)
swapgs
movq %gs:pda_kernelstack, %rsp
addq $(PDA_STACKOFFSET),%rsp
+ /*
+ * No need to follow this irqs on/off section: the syscall
+ * disabled irqs and here we enable it straight after entry:
+ */
sti
movl %ebp,%ebp /* zero extension */
pushq $__USER32_DS
movq %rax,RAX-ARGOFFSET(%rsp)
GET_THREAD_INFO(%r10)
cli
+ TRACE_IRQS_OFF
testl $_TIF_ALLWORK_MASK,threadinfo_flags(%r10)
jnz int_ret_from_sys_call
andl $~TS_COMPAT,threadinfo_status(%r10)
CFI_REGISTER rsp,rcx
movl $VSYSCALL32_SYSEXIT,%edx /* User %eip */
CFI_REGISTER rip,rdx
+ TRACE_IRQS_ON
swapgs
sti /* sti only takes effect after the next instruction */
/* sysexit */
movl %esp,%r8d
CFI_REGISTER rsp,r8
movq %gs:pda_kernelstack,%rsp
+ /*
+ * No need to follow this irqs on/off section: the syscall
+ * disabled irqs and here we enable it straight after entry:
+ */
sti
SAVE_ARGS 8,1,1
movl %eax,%eax /* zero extension */
movq %rax,RAX-ARGOFFSET(%rsp)
GET_THREAD_INFO(%r10)
cli
+ TRACE_IRQS_OFF
testl $_TIF_ALLWORK_MASK,threadinfo_flags(%r10)
jnz int_ret_from_sys_call
andl $~TS_COMPAT,threadinfo_status(%r10)
CFI_REGISTER rip,rcx
movl EFLAGS-ARGOFFSET(%rsp),%r11d
/*CFI_REGISTER rflags,r11*/
+ TRACE_IRQS_ON
movl RSP-ARGOFFSET(%rsp),%esp
CFI_RESTORE rsp
swapgs
/*CFI_REL_OFFSET rflags,EFLAGS-RIP*/
/*CFI_REL_OFFSET cs,CS-RIP*/
CFI_REL_OFFSET rip,RIP-RIP
- swapgs
+ swapgs
+ /*
+ * No need to follow this irqs on/off section: the syscall
+ * disabled irqs and here we enable it straight after entry:
+ */
sti
movl %eax,%eax
pushq %rax
setup64.o bootflag.o e820.o reboot.o quirks.o i8237.o \
pci-dma.o pci-nommu.o alternative.o
+obj-$(CONFIG_STACKTRACE) += stacktrace.o
obj-$(CONFIG_X86_MCE) += mce.o
obj-$(CONFIG_X86_MCE_INTEL) += mce_intel.o
obj-$(CONFIG_X86_MCE_AMD) += mce_amd.o
#include <asm/thread_info.h>
#include <asm/hw_irq.h>
#include <asm/page.h>
+#include <asm/irqflags.h>
.code64
#ifndef CONFIG_PREEMPT
#define retint_kernel retint_restore_args
#endif
-
+
+
+.macro TRACE_IRQS_IRETQ offset=ARGOFFSET
+#ifdef CONFIG_TRACE_IRQFLAGS
+ bt $9,EFLAGS-\offset(%rsp) /* interrupts off? */
+ jnc 1f
+ TRACE_IRQS_ON
+1:
+#endif
+.endm
+
/*
* C code is not supposed to know about undefined top of stack. Every time
* a C function with an pt_regs argument is called from the SYSCALL based
swapgs
movq %rsp,%gs:pda_oldrsp
movq %gs:pda_kernelstack,%rsp
+ /*
+ * No need to follow this irqs off/on section - it's straight
+ * and short:
+ */
sti
SAVE_ARGS 8,1
movq %rax,ORIG_RAX-ARGOFFSET(%rsp)
sysret_check:
GET_THREAD_INFO(%rcx)
cli
+ TRACE_IRQS_OFF
movl threadinfo_flags(%rcx),%edx
andl %edi,%edx
CFI_REMEMBER_STATE
jnz sysret_careful
+ /*
+ * sysretq will re-enable interrupts:
+ */
+ TRACE_IRQS_ON
movq RIP-ARGOFFSET(%rsp),%rcx
CFI_REGISTER rip,rcx
RESTORE_ARGS 0,-ARG_SKIP,1
CFI_RESTORE_STATE
bt $TIF_NEED_RESCHED,%edx
jnc sysret_signal
+ TRACE_IRQS_ON
sti
pushq %rdi
CFI_ADJUST_CFA_OFFSET 8
/* Handle a signal */
sysret_signal:
+ TRACE_IRQS_ON
sti
testl $(_TIF_SIGPENDING|_TIF_NOTIFY_RESUME|_TIF_SINGLESTEP),%edx
jz 1f
/* Use IRET because user could have changed frame. This
works because ptregscall_common has called FIXUP_TOP_OF_STACK. */
cli
+ TRACE_IRQS_OFF
jmp int_with_check
badsys:
CFI_REL_OFFSET r10,R10-ARGOFFSET
CFI_REL_OFFSET r11,R11-ARGOFFSET
cli
+ TRACE_IRQS_OFF
testl $3,CS-ARGOFFSET(%rsp)
je retint_restore_args
movl $_TIF_ALLWORK_MASK,%edi
int_careful:
bt $TIF_NEED_RESCHED,%edx
jnc int_very_careful
+ TRACE_IRQS_ON
sti
pushq %rdi
CFI_ADJUST_CFA_OFFSET 8
popq %rdi
CFI_ADJUST_CFA_OFFSET -8
cli
+ TRACE_IRQS_OFF
jmp int_with_check
/* handle signals and tracing -- both require a full stack frame */
int_very_careful:
+ TRACE_IRQS_ON
sti
SAVE_REST
/* Check for syscall exit trace */
CFI_ADJUST_CFA_OFFSET -8
andl $~(_TIF_SYSCALL_TRACE|_TIF_SYSCALL_AUDIT|_TIF_SINGLESTEP),%edi
cli
+ TRACE_IRQS_OFF
jmp int_restore_rest
int_signal:
int_restore_rest:
RESTORE_REST
cli
+ TRACE_IRQS_OFF
jmp int_with_check
CFI_ENDPROC
END(int_ret_from_sys_call)
swapgs
1: incl %gs:pda_irqcount # RED-PEN should check preempt count
cmoveq %gs:pda_irqstackptr,%rsp
+ /*
+ * We entered an interrupt context - irqs are off:
+ */
+ TRACE_IRQS_OFF
call \func
.endm
/* 0(%rsp): oldrsp-ARGOFFSET */
ret_from_intr:
cli
+ TRACE_IRQS_OFF
decl %gs:pda_irqcount
leaveq
CFI_DEF_CFA_REGISTER rsp
CFI_REMEMBER_STATE
jnz retint_careful
retint_swapgs:
+ /*
+ * The iretq could re-enable interrupts:
+ */
+ cli
+ TRACE_IRQS_IRETQ
swapgs
+ jmp restore_args
+
retint_restore_args:
cli
+ /*
+ * The iretq could re-enable interrupts:
+ */
+ TRACE_IRQS_IRETQ
+restore_args:
RESTORE_ARGS 0,8,0
iret_label:
iretq
/* running with kernel gs */
bad_iret:
movq $11,%rdi /* SIGSEGV */
+ TRACE_IRQS_ON
sti
jmp do_exit
.previous
CFI_RESTORE_STATE
bt $TIF_NEED_RESCHED,%edx
jnc retint_signal
+ TRACE_IRQS_ON
sti
pushq %rdi
CFI_ADJUST_CFA_OFFSET 8
CFI_ADJUST_CFA_OFFSET -8
GET_THREAD_INFO(%rcx)
cli
+ TRACE_IRQS_OFF
jmp retint_check
retint_signal:
testl $(_TIF_SIGPENDING|_TIF_NOTIFY_RESUME|_TIF_SINGLESTEP),%edx
jz retint_swapgs
+ TRACE_IRQS_ON
sti
SAVE_REST
movq $-1,ORIG_RAX(%rsp)
call do_notify_resume
RESTORE_REST
cli
+ TRACE_IRQS_OFF
movl $_TIF_NEED_RESCHED,%edi
GET_THREAD_INFO(%rcx)
jmp retint_check
/* error code is on the stack already */
/* handle NMI like exceptions that can happen everywhere */
- .macro paranoidentry sym, ist=0
+ .macro paranoidentry sym, ist=0, irqtrace=1
SAVE_ALL
cld
movl $1,%ebx
addq $EXCEPTION_STKSZ, per_cpu__init_tss + TSS_ist + (\ist - 1) * 8(%rbp)
.endif
cli
+ .if \irqtrace
+ TRACE_IRQS_OFF
+ .endif
.endm
-
+
+ /*
+ * "Paranoid" exit path from exception stack.
+ * Paranoid because this is used by NMIs and cannot take
+ * any kernel state for granted.
+ * We don't do kernel preemption checks here, because only
+ * NMI should be common and it does not enable IRQs and
+ * cannot get reschedule ticks.
+ *
+ * "trace" is 0 for the NMI handler only, because irq-tracing
+ * is fundamentally NMI-unsafe. (we cannot change the soft and
+ * hard flags at once, atomically)
+ */
+ .macro paranoidexit trace=1
+ /* ebx: no swapgs flag */
+paranoid_exit\trace:
+ testl %ebx,%ebx /* swapgs needed? */
+ jnz paranoid_restore\trace
+ testl $3,CS(%rsp)
+ jnz paranoid_userspace\trace
+paranoid_swapgs\trace:
+ TRACE_IRQS_IRETQ 0
+ swapgs
+paranoid_restore\trace:
+ RESTORE_ALL 8
+ iretq
+paranoid_userspace\trace:
+ GET_THREAD_INFO(%rcx)
+ movl threadinfo_flags(%rcx),%ebx
+ andl $_TIF_WORK_MASK,%ebx
+ jz paranoid_swapgs\trace
+ movq %rsp,%rdi /* &pt_regs */
+ call sync_regs
+ movq %rax,%rsp /* switch stack for scheduling */
+ testl $_TIF_NEED_RESCHED,%ebx
+ jnz paranoid_schedule\trace
+ movl %ebx,%edx /* arg3: thread flags */
+ .if \trace
+ TRACE_IRQS_ON
+ .endif
+ sti
+ xorl %esi,%esi /* arg2: oldset */
+ movq %rsp,%rdi /* arg1: &pt_regs */
+ call do_notify_resume
+ cli
+ .if \trace
+ TRACE_IRQS_OFF
+ .endif
+ jmp paranoid_userspace\trace
+paranoid_schedule\trace:
+ .if \trace
+ TRACE_IRQS_ON
+ .endif
+ sti
+ call schedule
+ cli
+ .if \trace
+ TRACE_IRQS_OFF
+ .endif
+ jmp paranoid_userspace\trace
+ CFI_ENDPROC
+ .endm
+
/*
* Exception entry point. This expects an error code/orig_rax on the stack
* and the exception handler in %rax.
movl %ebx,%eax
RESTORE_REST
cli
+ TRACE_IRQS_OFF
GET_THREAD_INFO(%rcx)
testl %eax,%eax
jne retint_kernel
movl $_TIF_WORK_MASK,%edi
andl %edi,%edx
jnz retint_careful
+ /*
+ * The iret might restore flags:
+ */
+ TRACE_IRQS_IRETQ
swapgs
RESTORE_ARGS 0,8,0
jmp iret_label
pushq $0
CFI_ADJUST_CFA_OFFSET 8
paranoidentry do_debug, DEBUG_STACK
- jmp paranoid_exit
- CFI_ENDPROC
+ paranoidexit
END(debug)
.previous .text
INTR_FRAME
pushq $-1
CFI_ADJUST_CFA_OFFSET 8
- paranoidentry do_nmi
- /*
- * "Paranoid" exit path from exception stack.
- * Paranoid because this is used by NMIs and cannot take
- * any kernel state for granted.
- * We don't do kernel preemption checks here, because only
- * NMI should be common and it does not enable IRQs and
- * cannot get reschedule ticks.
- */
- /* ebx: no swapgs flag */
-paranoid_exit:
- testl %ebx,%ebx /* swapgs needed? */
- jnz paranoid_restore
- testl $3,CS(%rsp)
- jnz paranoid_userspace
-paranoid_swapgs:
- swapgs
-paranoid_restore:
- RESTORE_ALL 8
- iretq
-paranoid_userspace:
- GET_THREAD_INFO(%rcx)
- movl threadinfo_flags(%rcx),%ebx
- andl $_TIF_WORK_MASK,%ebx
- jz paranoid_swapgs
- movq %rsp,%rdi /* &pt_regs */
- call sync_regs
- movq %rax,%rsp /* switch stack for scheduling */
- testl $_TIF_NEED_RESCHED,%ebx
- jnz paranoid_schedule
- movl %ebx,%edx /* arg3: thread flags */
- sti
- xorl %esi,%esi /* arg2: oldset */
- movq %rsp,%rdi /* arg1: &pt_regs */
- call do_notify_resume
- cli
- jmp paranoid_userspace
-paranoid_schedule:
- sti
- call schedule
- cli
- jmp paranoid_userspace
- CFI_ENDPROC
+ paranoidentry do_nmi, 0, 0
+#ifdef CONFIG_TRACE_IRQFLAGS
+ paranoidexit 0
+#else
+ jmp paranoid_exit1
+ CFI_ENDPROC
+#endif
END(nmi)
.previous .text
pushq $0
CFI_ADJUST_CFA_OFFSET 8
paranoidentry do_int3, DEBUG_STACK
- jmp paranoid_exit
+ jmp paranoid_exit1
CFI_ENDPROC
END(int3)
.previous .text
ENTRY(double_fault)
XCPT_FRAME
paranoidentry do_double_fault
- jmp paranoid_exit
+ jmp paranoid_exit1
CFI_ENDPROC
END(double_fault)
ENTRY(stack_segment)
XCPT_FRAME
paranoidentry do_stack_segment
- jmp paranoid_exit
+ jmp paranoid_exit1
CFI_ENDPROC
END(stack_segment)
pushq $0
CFI_ADJUST_CFA_OFFSET 8
paranoidentry do_machine_check
- jmp paranoid_exit
+ jmp paranoid_exit1
CFI_ENDPROC
END(machine_check)
#endif
asm volatile("lidt %0" :: "m" (idt_descr));
clear_bss();
+ /*
+ * This must be called really, really early:
+ */
+ lockdep_init();
+
/*
* switch to init_level4_pgt from boot_level4_pgt
*/
local_irq_save(flags);
pending = local_softirq_pending();
/* Switch to interrupt stack */
- if (pending)
+ if (pending) {
call_softirq();
+ WARN_ON_ONCE(softirq_count());
+ }
local_irq_restore(flags);
}
EXPORT_SYMBOL(do_softirq);
static __init void nmi_cpu_busy(void *data)
{
volatile int *endflag = data;
- local_irq_enable();
+ local_irq_enable_in_hardirq();
/* Intentionally don't use cpu_relax here. This is
to make sure that the performance counter really ticks,
even if there is a simulator or similar that catches the
system_utsname.version);
printk("RIP: %04lx:[<%016lx>] ", regs->cs & 0xffff, regs->rip);
printk_address(regs->rip);
- printk("\nRSP: %04lx:%016lx EFLAGS: %08lx\n", regs->ss, regs->rsp,
+ printk("RSP: %04lx:%016lx EFLAGS: %08lx\n", regs->ss, regs->rsp,
regs->eflags);
printk("RAX: %016lx RBX: %016lx RCX: %016lx\n",
regs->rax, regs->rbx, regs->rcx);
};
DECLARE_WORK(work, do_fork_idle, &c_idle);
+ lockdep_set_class(&c_idle.done.wait.lock, &waitqueue_lock_key);
+
/* allocate memory for gdts of secondary cpus. Hotplug is considered */
if (!cpu_gdt_descr[cpu].address &&
!(cpu_gdt_descr[cpu].address = get_zeroed_page(GFP_KERNEL))) {
--- /dev/null
+/*
+ * arch/x86_64/kernel/stacktrace.c
+ *
+ * Stack trace management functions
+ *
+ * Copyright (C) 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
+ */
+#include <linux/sched.h>
+#include <linux/stacktrace.h>
+
+#include <asm/smp.h>
+
+static inline int
+in_range(unsigned long start, unsigned long addr, unsigned long end)
+{
+ return addr >= start && addr <= end;
+}
+
+static unsigned long
+get_stack_end(struct task_struct *task, unsigned long stack)
+{
+ unsigned long stack_start, stack_end, flags;
+ int i, cpu;
+
+ /*
+ * The most common case is that we are in the task stack:
+ */
+ stack_start = (unsigned long)task->thread_info;
+ stack_end = stack_start + THREAD_SIZE;
+
+ if (in_range(stack_start, stack, stack_end))
+ return stack_end;
+
+ /*
+ * We are in an interrupt if irqstackptr is set:
+ */
+ raw_local_irq_save(flags);
+ cpu = safe_smp_processor_id();
+ stack_end = (unsigned long)cpu_pda(cpu)->irqstackptr;
+
+ if (stack_end) {
+ stack_start = stack_end & ~(IRQSTACKSIZE-1);
+ if (in_range(stack_start, stack, stack_end))
+ goto out_restore;
+ /*
+ * We get here if we are in an IRQ context but we
+ * are also in an exception stack.
+ */
+ }
+
+ /*
+ * Iterate over all exception stacks, and figure out whether
+ * 'stack' is in one of them:
+ */
+ for (i = 0; i < N_EXCEPTION_STACKS; i++) {
+ /*
+ * set 'stack_end' to the end of the exception stack.
+ */
+ stack_end = per_cpu(init_tss, cpu).ist[i];
+ stack_start = stack_end - EXCEPTION_STKSZ;
+
+ /*
+ * Is 'stack' above this exception frame's end?
+ * If yes then skip to the next frame.
+ */
+ if (stack >= stack_end)
+ continue;
+ /*
+ * Is 'stack' above this exception frame's start address?
+ * If yes then we found the right frame.
+ */
+ if (stack >= stack_start)
+ goto out_restore;
+
+ /*
+ * If this is a debug stack, and if it has a larger size than
+ * the usual exception stacks, then 'stack' might still
+ * be within the lower portion of the debug stack:
+ */
+#if DEBUG_STKSZ > EXCEPTION_STKSZ
+ if (i == DEBUG_STACK - 1 && stack >= stack_end - DEBUG_STKSZ) {
+ /*
+ * Black magic. A large debug stack is composed of
+ * multiple exception stack entries, which we
+ * iterate through now. Don't look:
+ */
+ do {
+ stack_end -= EXCEPTION_STKSZ;
+ stack_start -= EXCEPTION_STKSZ;
+ } while (stack < stack_start);
+
+ goto out_restore;
+ }
+#endif
+ }
+ /*
+ * Ok, 'stack' is not pointing to any of the system stacks.
+ */
+ stack_end = 0;
+
+out_restore:
+ raw_local_irq_restore(flags);
+
+ return stack_end;
+}
+
+
+/*
+ * Save stack-backtrace addresses into a stack_trace buffer:
+ */
+static inline unsigned long
+save_context_stack(struct stack_trace *trace, unsigned int skip,
+ unsigned long stack, unsigned long stack_end)
+{
+ unsigned long addr;
+
+#ifdef CONFIG_FRAME_POINTER
+ unsigned long prev_stack = 0;
+
+ while (in_range(prev_stack, stack, stack_end)) {
+ pr_debug("stack: %p\n", (void *)stack);
+ addr = (unsigned long)(((unsigned long *)stack)[1]);
+ pr_debug("addr: %p\n", (void *)addr);
+ if (!skip)
+ trace->entries[trace->nr_entries++] = addr-1;
+ else
+ skip--;
+ if (trace->nr_entries >= trace->max_entries)
+ break;
+ if (!addr)
+ return 0;
+ /*
+ * Stack frames must go forwards (otherwise a loop could
+ * happen if the stackframe is corrupted), so we move
+ * prev_stack forwards:
+ */
+ prev_stack = stack;
+ stack = (unsigned long)(((unsigned long *)stack)[0]);
+ }
+ pr_debug("invalid: %p\n", (void *)stack);
+#else
+ while (stack < stack_end) {
+ addr = ((unsigned long *)stack)[0];
+ stack += sizeof(long);
+ if (__kernel_text_address(addr)) {
+ if (!skip)
+ trace->entries[trace->nr_entries++] = addr-1;
+ else
+ skip--;
+ if (trace->nr_entries >= trace->max_entries)
+ break;
+ }
+ }
+#endif
+ return stack;
+}
+
+#define MAX_STACKS 10
+
+/*
+ * Save stack-backtrace addresses into a stack_trace buffer.
+ * If all_contexts is set, all contexts (hardirq, softirq and process)
+ * are saved. If not set then only the current context is saved.
+ */
+void save_stack_trace(struct stack_trace *trace,
+ struct task_struct *task, int all_contexts,
+ unsigned int skip)
+{
+ unsigned long stack = (unsigned long)&stack;
+ int i, nr_stacks = 0, stacks_done[MAX_STACKS];
+
+ WARN_ON(trace->nr_entries || !trace->max_entries);
+
+ if (!task)
+ task = current;
+
+ pr_debug("task: %p, ti: %p\n", task, task->thread_info);
+
+ if (!task || task == current) {
+ /* Grab rbp right from our regs: */
+ asm ("mov %%rbp, %0" : "=r" (stack));
+ pr_debug("rbp: %p\n", (void *)stack);
+ } else {
+ /* rbp is the last reg pushed by switch_to(): */
+ stack = task->thread.rsp;
+ pr_debug("other task rsp: %p\n", (void *)stack);
+ stack = (unsigned long)(((unsigned long *)stack)[0]);
+ pr_debug("other task rbp: %p\n", (void *)stack);
+ }
+
+ while (1) {
+ unsigned long stack_end = get_stack_end(task, stack);
+
+ pr_debug("stack: %p\n", (void *)stack);
+ pr_debug("stack end: %p\n", (void *)stack_end);
+
+ /*
+ * Invalid stack address?
+ */
+ if (!stack_end)
+ return;
+ /*
+ * Were we in this stack already? (recursion)
+ */
+ for (i = 0; i < nr_stacks; i++)
+ if (stacks_done[i] == stack_end)
+ return;
+ stacks_done[nr_stacks] = stack_end;
+
+ stack = save_context_stack(trace, skip, stack, stack_end);
+ if (!all_contexts || !stack ||
+ trace->nr_entries >= trace->max_entries)
+ return;
+ trace->entries[trace->nr_entries++] = ULONG_MAX;
+ if (trace->nr_entries >= trace->max_entries)
+ return;
+ if (++nr_stacks >= MAX_STACKS)
+ return;
+ }
+}
+
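A hypothetical caller of the interface added above, showing how the entries buffer is laid out and consumed; the function name and buffer size are illustrative only:

#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/stacktrace.h>

static void example_dump_current_trace(void)
{
	unsigned long entries[16];
	struct stack_trace trace = {
		.nr_entries	= 0,
		.max_entries	= 16,
		.entries	= entries,
	};
	unsigned int i;

	/* current task, current context only (all_contexts == 0), skip 0 */
	save_stack_trace(&trace, current, 0, 0);

	for (i = 0; i < trace.nr_entries; i++)
		printk(" [<%016lx>]\n", trace.entries[i]);
}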
}
static struct irqaction irq0 = {
- timer_interrupt, SA_INTERRUPT, CPU_MASK_NONE, "timer", NULL, NULL
+ timer_interrupt, IRQF_DISABLED, CPU_MASK_NONE, "timer", NULL, NULL
};
void __init time_init(void)
static int call_trace = 1;
#ifdef CONFIG_KALLSYMS
-#include <linux/kallsyms.h>
-int printk_address(unsigned long address)
-{
+# include <linux/kallsyms.h>
+void printk_address(unsigned long address)
+{
unsigned long offset = 0, symsize;
const char *symname;
char *modname;
- char *delim = ":";
+ char *delim = ":";
char namebuf[128];
- symname = kallsyms_lookup(address, &symsize, &offset, &modname, namebuf);
- if (!symname)
- return printk("[<%016lx>]", address);
- if (!modname)
+ symname = kallsyms_lookup(address, &symsize, &offset,
+ &modname, namebuf);
+ if (!symname) {
+ printk(" [<%016lx>]\n", address);
+ return;
+ }
+ if (!modname)
modname = delim = "";
- return printk("<%016lx>{%s%s%s%s%+ld}",
- address, delim, modname, delim, symname, offset);
-}
+ printk(" [<%016lx>] %s%s%s%s+0x%lx/0x%lx\n",
+ address, delim, modname, delim, symname, offset, symsize);
+}
#else
-int printk_address(unsigned long address)
-{
- return printk("[<%016lx>]", address);
-}
+void printk_address(unsigned long address)
+{
+ printk(" [<%016lx>]\n", address);
+}
#endif
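With this change each resolved address is printed on its own line in the conventional oops style, e.g. (address, symbol name, offset and size below are made up):

 [<ffffffff80234a6c>] example_function+0x2c/0x90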
static unsigned long *in_exception_stack(unsigned cpu, unsigned long stack,
};
unsigned k;
+ /*
+ * Iterate over all exception stacks, and figure out whether
+ * 'stack' is in one of them:
+ */
for (k = 0; k < N_EXCEPTION_STACKS; k++) {
unsigned long end;
+ /*
+ * set 'end' to the end of the exception stack.
+ */
switch (k + 1) {
+ /*
+ * TODO: this block is not needed, I think, because
+ * setup64.c:cpu_init() sets up t->ist[DEBUG_STACK]
+ * properly too.
+ */
#if DEBUG_STKSZ > EXCEPTION_STKSZ
case DEBUG_STACK:
end = cpu_pda(cpu)->debugstack + DEBUG_STKSZ;
end = per_cpu(init_tss, cpu).ist[k];
break;
}
+ /*
+ * Is 'stack' above this exception frame's end?
+ * If yes then skip to the next frame.
+ */
if (stack >= end)
continue;
+ /*
+ * Is 'stack' above this exception frame's start address?
+ * If yes then we found the right frame.
+ */
if (stack >= end - EXCEPTION_STKSZ) {
+ /*
+ * Make sure we only iterate through an exception
+ * stack once. If it comes up for the second time
+ * then there's something wrong going on - just
+ * break out and return NULL:
+ */
if (*usedp & (1U << k))
break;
*usedp |= 1U << k;
*idp = ids[k];
return (unsigned long *)end;
}
+ /*
+ * If this is a debug stack, and if it has a larger size than
+ * the usual exception stacks, then 'stack' might still
+ * be within the lower portion of the debug stack:
+ */
#if DEBUG_STKSZ > EXCEPTION_STKSZ
if (k == DEBUG_STACK - 1 && stack >= end - DEBUG_STKSZ) {
unsigned j = N_EXCEPTION_STACKS - 1;
+ /*
+ * Black magic. A large debug stack is composed of
+ * multiple exception stack entries, which we
+ * iterate through now. Don't look:
+ */
do {
++j;
end -= EXCEPTION_STKSZ;
static int show_trace_unwind(struct unwind_frame_info *info, void *context)
{
- int i = 11, n = 0;
+ int n = 0;
while (unwind(info) == 0 && UNW_PC(info)) {
- ++n;
- if (i > 50) {
- printk("\n ");
- i = 7;
- } else
- i += printk(" ");
- i += printk_address(UNW_PC(info));
+ n++;
+ printk_address(UNW_PC(info));
if (arch_unw_user_mode(info))
break;
}
- printk("\n");
return n;
}
int i = 11;
unsigned used = 0;
- printk("\nCall Trace:");
+ printk("\nCall Trace:\n");
if (!tsk)
tsk = current;
}
}
+ /*
+ * Print function call entries within a stack. 'cond' is the
+ * "end of stackframe" condition, that the 'stack++'
+ * iteration will eventually trigger.
+ */
#define HANDLE_STACK(cond) \
do while (cond) { \
unsigned long addr = *stack++; \
if (kernel_text_address(addr)) { \
- if (i > 50) { \
- printk("\n "); \
- i = 0; \
- } \
- else \
- i += printk(" "); \
/* \
* If the address is either in the text segment of the \
* kernel, or in the region which contains vmalloc'ed \
* down the cause of the crash will be able to figure \
* out the call path that was taken. \
*/ \
- i += printk_address(addr); \
+ printk_address(addr); \
} \
} while (0)
- for(; ; ) {
+ /*
+ * Print function call entries in all stacks, starting at the
+ * current stack address. If the stacks consist of nested
+ * exceptions / interrupts, each stack is walked in turn and we
+ * follow the saved link to the next (outer) stack.
+ */
+ for ( ; ; ) {
const char *id;
unsigned long *estack_end;
estack_end = in_exception_stack(cpu, (unsigned long)stack,
&used, &id);
if (estack_end) {
- i += printk(" <%s>", id);
+ printk(" <%s>", id);
HANDLE_STACK (stack < estack_end);
- i += printk(" <EOE>");
+ printk(" <EOE>");
+ /*
+ * We link to the next stack via the
+ * second-to-last pointer (index -2 to end) in the
+ * exception stack:
+ */
stack = (unsigned long *) estack_end[-2];
continue;
}
(IRQSTACKSIZE - 64) / sizeof(*irqstack);
if (stack >= irqstack && stack < irqstack_end) {
- i += printk(" <IRQ>");
+ printk(" <IRQ>");
HANDLE_STACK (stack < irqstack_end);
+ /*
+ * We link to the next stack (which would be
+ * the process stack normally) the last
+ * pointer (index -1 to end) in the IRQ stack:
+ */
stack = (unsigned long *) (irqstack_end[-1]);
irqstack_end = NULL;
- i += printk(" <EOI>");
+ printk(" <EOI>");
continue;
}
}
break;
}
+ /*
+ * This prints the process stack:
+ */
HANDLE_STACK (((long) stack & (THREAD_SIZE-1)) != 0);
#undef HANDLE_STACK
+
printk("\n");
}
break;
}
if (i && ((i % 4) == 0))
- printk("\n ");
- printk("%016lx ", *stack++);
+ printk("\n");
+ printk(" %016lx", *stack++);
touch_nmi_watchdog();
}
show_trace(tsk, regs, rsp);
thunk_retrax __down_failed_interruptible,__down_interruptible
thunk_retrax __down_failed_trylock,__down_trylock
thunk __up_wakeup,__up
+
+#ifdef CONFIG_TRACE_IRQFLAGS
+ thunk trace_hardirqs_on_thunk,trace_hardirqs_on
+ thunk trace_hardirqs_off_thunk,trace_hardirqs_off
+#endif
/* SAVE_ARGS below is used only for the .cfi directives it contains. */
CFI_STARTPROC
printk(KERN_ALERT "Unable to handle kernel paging request");
printk(" at %016lx RIP: \n" KERN_ALERT,address);
printk_address(regs->rip);
- printk("\n");
dump_pagetable(address);
tsk->thread.cr2 = address;
tsk->thread.trap_no = 14;
static irqreturn_t timer_interrupt(int irq, void *dev_id, struct pt_regs *regs);
static struct irqaction timer_irqaction = {
.handler = timer_interrupt,
- .flags = SA_INTERRUPT,
+ .flags = IRQF_DISABLED,
.name = "timer",
};
int blk_execute_rq(request_queue_t *q, struct gendisk *bd_disk,
struct request *rq, int at_head)
{
- DECLARE_COMPLETION(wait);
+ DECLARE_COMPLETION_ONSTACK(wait);
char sense[SCSI_SENSE_BUFFERSIZE];
int err = 0;
printk("mfm: detected %d hard drive%s\n", mfm_drives,
mfm_drives == 1 ? "" : "s");
- ret = request_irq(mfm_irq, mfm_interrupt_handler, SA_INTERRUPT, "MFM harddisk", NULL);
+ ret = request_irq(mfm_irq, mfm_interrupt_handler, IRQF_DISABLED, "MFM harddisk", NULL);
if (ret) {
printk("mfm: unable to get IRQ%d\n", mfm_irq);
goto out4;
acpi_irq_handler = handler;
acpi_irq_context = context;
- if (request_irq(irq, acpi_irq, SA_SHIRQ, "acpi", acpi_irq)) {
+ if (request_irq(irq, acpi_irq, IRQF_SHARED, "acpi", acpi_irq)) {
printk(KERN_ERR PREFIX "SCI (IRQ%d) allocation failed\n", irq);
return AE_NOT_ACQUIRED;
}
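The SA_* to IRQF_* renames in this and the following driver hunks are mechanical. For reference, a minimal sketch of registering a shared handler with the new flag name, using the 2.6.18-era three-argument handler prototype seen in the surrounding drivers; the device structure and pending-check helper below are made up:

#include <linux/interrupt.h>

struct mydev {
	int irq;
	/* ... illustrative device state ... */
};

static irqreturn_t mydev_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
	struct mydev *dev = dev_id;

	/*
	 * On a shared line the handler must check whether its own device
	 * raised the interrupt and return IRQ_NONE if it did not.
	 */
	if (!mydev_irq_pending(dev))		/* hypothetical helper */
		return IRQ_NONE;

	/* ... acknowledge and service the device ... */
	return IRQ_HANDLED;
}

static int mydev_setup_irq(struct mydev *dev)
{
	/* IRQF_SHARED replaces SA_SHIRQ; dev_id must be non-NULL for sharing */
	return request_irq(dev->irq, mydev_interrupt, IRQF_SHARED,
			   "mydev", dev);
}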
#include <linux/atmdev.h>
#include <linux/delay.h>
#include <linux/interrupt.h>
+#include <linux/poison.h>
#include <asm/atomic.h>
#include <asm/io.h>
}
i += 1;
}
- if (*pointer == 0xdeadbeef) {
+ if (*pointer == ATM_POISON) {
return loader_start (lb, dev, ucode_start);
} else {
// cast needed as there is no %? for pointer differences
setup_pci_dev(pci_dev);
// grab (but share) IRQ and install handler
- err = request_irq(irq, interrupt_handler, SA_SHIRQ, DEV_LABEL, dev);
+ err = request_irq(irq, interrupt_handler, IRQF_SHARED, DEV_LABEL, dev);
if (err < 0) {
PRINTK (KERN_ERR, "request IRQ failed!");
goto out_reset;
DPRINTK(">eni_start\n");
eni_dev = ENI_DEV(dev);
- if (request_irq(eni_dev->irq,&eni_int,SA_SHIRQ,DEV_LABEL,dev)) {
+ if (request_irq(eni_dev->irq,&eni_int,IRQF_SHARED,DEV_LABEL,dev)) {
printk(KERN_ERR DEV_LABEL "(itf %d): IRQ%d is already in use\n",
dev->number,eni_dev->irq);
error = -EAGAIN;
init_q (dev, &dev->rx_rq[i], RXB_RQ(i), RXRQ_NENTRIES, 1);
dev->irq = pci_dev->irq;
- if (request_irq (dev->irq, fs_irq, SA_SHIRQ, "firestream", dev)) {
+ if (request_irq (dev->irq, fs_irq, IRQF_SHARED, "firestream", dev)) {
printk (KERN_WARNING "couldn't get irq %d for firestream.\n", pci_dev->irq);
/* XXX undo all previous stuff... */
return 1;
static int __devinit
fore200e_irq_request(struct fore200e* fore200e)
{
- if (request_irq(fore200e->irq, fore200e_interrupt, SA_SHIRQ, fore200e->name, fore200e->atm_dev) < 0) {
+ if (request_irq(fore200e->irq, fore200e_interrupt, IRQF_SHARED, fore200e->name, fore200e->atm_dev) < 0) {
printk(FORE200E "unable to reserve IRQ %s for device %s\n",
fore200e_irq_itoa(fore200e->irq), fore200e->name);
he_writel(he_dev, 0x0, GRP_54_MAP);
he_writel(he_dev, 0x0, GRP_76_MAP);
- if (request_irq(he_dev->pci_dev->irq, he_irq_handler, SA_INTERRUPT|SA_SHIRQ, DEV_LABEL, he_dev)) {
+ if (request_irq(he_dev->pci_dev->irq, he_irq_handler, IRQF_DISABLED|IRQF_SHARED, DEV_LABEL, he_dev)) {
hprintk("irq %d already in use\n", he_dev->pci_dev->irq);
return -EINVAL;
}
irq = pci_dev->irq;
if (request_irq(irq,
interrupt_handler,
- SA_SHIRQ, /* irqflags guess */
+ IRQF_SHARED, /* irqflags guess */
DEV_LABEL, /* name guess */
dev)) {
PRINTD(DBG_WARN, "request IRQ failed!");
#include <linux/module.h>
#include <linux/pci.h>
+#include <linux/poison.h>
#include <linux/skbuff.h>
#include <linux/kernel.h>
#include <linux/vmalloc.h>
writel(SAR_STAT_TMROF, SAR_REG_STAT);
}
IPRINTK("%s: Request IRQ ... ", card->name);
- if (request_irq(pcidev->irq, idt77252_interrupt, SA_INTERRUPT|SA_SHIRQ,
+ if (request_irq(pcidev->irq, idt77252_interrupt, IRQF_DISABLED|IRQF_SHARED,
card->name, card) != 0) {
printk("%s: can't allocate IRQ.\n", card->name);
deinit_card(card);
writel(SAR_CMD_WRITE_SRAM | (0 << 2), SAR_REG_CMD);
for (addr = 0x4000; addr < 0x80000; addr += 0x4000) {
- writel(0xdeadbeef, SAR_REG_DR0);
+ writel(ATM_POISON, SAR_REG_DR0);
writel(SAR_CMD_WRITE_SRAM | (addr << 2), SAR_REG_CMD);
writel(SAR_CMD_READ_SRAM | (0 << 2), SAR_REG_CMD);
u32 ctrl_reg;
IF_EVENT(printk(">ia_start\n");)
iadev = INPH_IA_DEV(dev);
- if (request_irq(iadev->irq, &ia_int, SA_SHIRQ, DEV_LABEL, dev)) {
+ if (request_irq(iadev->irq, &ia_int, IRQF_SHARED, DEV_LABEL, dev)) {
printk(KERN_ERR DEV_LABEL "(itf %d): IRQ%d is already in use\n",
dev->number, iadev->irq);
error = -EAGAIN;
conf2_write(lanai);
reg_write(lanai, TX_FIFO_DEPTH, TxDepth_Reg);
reg_write(lanai, 0, CBR_ICG_Reg); /* CBR defaults to no limit */
- if ((result = request_irq(lanai->pci->irq, lanai_int, SA_SHIRQ,
+ if ((result = request_irq(lanai->pci->irq, lanai_int, IRQF_SHARED,
DEV_LABEL, lanai)) != 0) {
printk(KERN_ERR DEV_LABEL ": can't allocate interrupt\n");
goto error_vcctable;
if (mac[i] == NULL)
nicstar_init_eprom(card->membase);
- if (request_irq(pcidev->irq, &ns_irq_handler, SA_INTERRUPT | SA_SHIRQ, "nicstar", card) != 0)
+ if (request_irq(pcidev->irq, &ns_irq_handler, IRQF_DISABLED | IRQF_SHARED, "nicstar", card) != 0)
{
printk("nicstar%d: can't allocate IRQ %d.\n", i, pcidev->irq);
error = 9;
zatm_dev->rx_map = zatm_dev->tx_map = NULL;
for (i = 0; i < NR_MBX; i++)
zatm_dev->mbx_start[i] = 0;
- error = request_irq(zatm_dev->irq, zatm_int, SA_SHIRQ, DEV_LABEL, dev);
+ error = request_irq(zatm_dev->irq, zatm_int, IRQF_SHARED, DEV_LABEL, dev);
if (error < 0) {
printk(KERN_ERR DEV_LABEL "(itf %d): IRQ%d is already in use\n",
dev->number,zatm_dev->irq);
Acquire shared access to the IRQ Channel.
*/
IRQ_Channel = PCI_Device->irq;
- if (request_irq(IRQ_Channel, InterruptHandler, SA_SHIRQ,
+ if (request_irq(IRQ_Channel, InterruptHandler, IRQF_SHARED,
Controller->FullModelName, Controller) < 0)
{
DAC960_Error("Unable to acquire IRQ Channel %d for Controller at\n",
/* make sure the board interrupts are off */
hba[i]->access.set_intr_mask(hba[i], CCISS_INTR_OFF);
if (request_irq(hba[i]->intr[SIMPLE_MODE_INT], do_cciss_intr,
- SA_INTERRUPT | SA_SHIRQ, hba[i]->devname, hba[i])) {
+ IRQF_DISABLED | IRQF_SHARED, hba[i]->devname, hba[i])) {
printk(KERN_ERR "cciss: Unable to get irq %d for %s\n",
hba[i]->intr[SIMPLE_MODE_INT], hba[i]->devname);
goto clean2;
}
hba[i]->access.set_intr_mask(hba[i], 0);
if (request_irq(hba[i]->intr, do_ida_intr,
- SA_INTERRUPT|SA_SHIRQ, hba[i]->devname, hba[i]))
+ IRQF_DISABLED|IRQF_SHARED, hba[i]->devname, hba[i]))
{
printk(KERN_ERR "cpqarray: Unable to get irq %d for %s\n",
hba[i]->intr, hba[i]->devname);
#include <linux/cdrom.h> /* for the compatibility eject ioctl */
#include <linux/completion.h>
-/*
- * Interrupt freeing also means /proc VFS work - dont do it
- * from interrupt context. We push this work into keventd:
- */
-static void fd_free_irq_fn(void *data)
-{
- fd_free_irq();
-}
-
-static DECLARE_WORK(fd_free_irq_work, fd_free_irq_fn, NULL);
-
-
static struct request *current_req;
static struct request_queue *floppy_queue;
static void do_fd_request(request_queue_t * q);
UDRS->select_date = jiffies;
}
}
- /*
- * We should propagate failures to grab the resources back
- * nicely from here. Actually we ought to rewrite the fd
- * driver some day too.
- */
- if (newdor & FLOPPY_MOTOR_MASK)
- floppy_grab_irq_and_dma();
- if (olddor & FLOPPY_MOTOR_MASK)
- floppy_release_irq_and_dma();
return olddor;
}
line);
return -1;
}
- if (floppy_grab_irq_and_dma() == -1)
- return -EBUSY;
if (test_and_set_bit(0, &fdc_busy)) {
DECLARE_WAITQUEUE(wait, current);
set_current_state(TASK_RUNNING);
remove_wait_queue(&fdc_wait, &wait);
+
+ flush_scheduled_work();
}
command_status = FD_COMMAND_NONE;
if (elv_next_request(floppy_queue))
do_fd_request(floppy_queue);
spin_unlock_irqrestore(&floppy_lock, flags);
- floppy_release_irq_and_dma();
wake_up(&fdc_wait);
}
}
if (!UDRS->fd_ref)
opened_bdev[drive] = NULL;
- floppy_release_irq_and_dma();
mutex_unlock(&open_lock);
+
return 0;
}
if (UDRS->fd_ref == -1 || (UDRS->fd_ref && (filp->f_flags & O_EXCL)))
goto out2;
- if (floppy_grab_irq_and_dma())
- goto out2;
-
if (filp->f_flags & O_EXCL)
UDRS->fd_ref = -1;
else
UDRS->fd_ref--;
if (!UDRS->fd_ref)
opened_bdev[drive] = NULL;
- floppy_release_irq_and_dma();
out2:
mutex_unlock(&open_lock);
return res;
return 1;
if (time_after(jiffies, UDRS->last_checked + UDP->checkfreq)) {
- if (floppy_grab_irq_and_dma()) {
- return 1;
- }
-
lock_fdc(drive, 0);
poll_drive(0, 0);
process_fd_request();
- floppy_release_irq_and_dma();
}
if (UTESTF(FD_DISK_CHANGED) ||
fdc = 0;
del_timer(&fd_timeout);
current_drive = 0;
- floppy_release_irq_and_dma();
initialising = 0;
if (have_no_fdc) {
DPRINT("no floppy controllers found\n");
if (irqdma_allocated) {
fd_disable_dma();
fd_free_dma();
- schedule_work(&fd_free_irq_work);
+ fd_free_irq();
irqdma_allocated = 0;
}
set_dor(0, ~0, 8);
/* eject disk, if any */
fd_eject(0);
- flush_scheduled_work(); /* fd_free_irq() might be pending */
-
wait_for_completion(&device_release);
}
/* try to grab IRQ, and try to grab a slow IRQ if it fails, so we can
share with the SCSI driver */
if (request_irq(PS2ESDI_IRQ, ps2esdi_interrupt_handler,
- SA_INTERRUPT | SA_SHIRQ, "PS/2 ESDI", &ps2esdi_gendisk)
+ IRQF_DISABLED | IRQF_SHARED, "PS/2 ESDI", &ps2esdi_gendisk)
&& request_irq(PS2ESDI_IRQ, ps2esdi_interrupt_handler,
- SA_SHIRQ, "PS/2 ESDI", &ps2esdi_gendisk)
+ IRQF_SHARED, "PS/2 ESDI", &ps2esdi_gendisk)
) {
printk("%s: Unable to get IRQ %d\n", DEVICE_NAME, PS2ESDI_IRQ);
error = -EBUSY;
static int floppy_release(struct inode *inode, struct file *filp);
static int floppy_check_change(struct gendisk *disk);
static int floppy_revalidate(struct gendisk *disk);
-static int swim3_add_device(struct device_node *swims);
-int swim3_init(void);
#ifndef CONFIG_PMAC_MEDIABAY
#define check_media_bay(which, what) 1
.revalidate_disk= floppy_revalidate,
};
-int swim3_init(void)
-{
- struct device_node *swim;
- int err = -ENOMEM;
- int i;
-
- swim = find_devices("floppy");
- while (swim && (floppy_count < MAX_FLOPPIES))
- {
- swim3_add_device(swim);
- swim = swim->next;
- }
-
- swim = find_devices("swim3");
- while (swim && (floppy_count < MAX_FLOPPIES))
- {
- swim3_add_device(swim);
- swim = swim->next;
- }
-
- if (!floppy_count)
- return -ENODEV;
-
- for (i = 0; i < floppy_count; i++) {
- disks[i] = alloc_disk(1);
- if (!disks[i])
- goto out;
- }
-
- if (register_blkdev(FLOPPY_MAJOR, "fd")) {
- err = -EBUSY;
- goto out;
- }
-
- swim3_queue = blk_init_queue(do_fd_request, &swim3_lock);
- if (!swim3_queue) {
- err = -ENOMEM;
- goto out_queue;
- }
-
- for (i = 0; i < floppy_count; i++) {
- struct gendisk *disk = disks[i];
- disk->major = FLOPPY_MAJOR;
- disk->first_minor = i;
- disk->fops = &floppy_fops;
- disk->private_data = &floppy_states[i];
- disk->queue = swim3_queue;
- disk->flags |= GENHD_FL_REMOVABLE;
- sprintf(disk->disk_name, "fd%d", i);
- set_capacity(disk, 2880);
- add_disk(disk);
- }
- return 0;
-
-out_queue:
- unregister_blkdev(FLOPPY_MAJOR, "fd");
-out:
- while (i--)
- put_disk(disks[i]);
- /* shouldn't we do something with results of swim_add_device()? */
- return err;
-}
-
-static int swim3_add_device(struct device_node *swim)
+static int swim3_add_device(struct macio_dev *mdev, int index)
{
+ struct device_node *swim = mdev->ofdev.node;
struct device_node *mediabay;
- struct floppy_state *fs = &floppy_states[floppy_count];
- struct resource res_reg, res_dma;
+ struct floppy_state *fs = &floppy_states[index];
+ int rc = -EBUSY;
- if (of_address_to_resource(swim, 0, &res_reg) ||
- of_address_to_resource(swim, 1, &res_dma)) {
- printk(KERN_ERR "swim3: Can't get addresses\n");
- return -EINVAL;
+ /* Check & Request resources */
+ if (macio_resource_count(mdev) < 2) {
+ printk(KERN_WARNING "ifd%d: no address for %s\n",
+ index, swim->full_name);
+ return -ENXIO;
}
- if (request_mem_region(res_reg.start, res_reg.end - res_reg.start + 1,
- " (reg)") == NULL) {
- printk(KERN_ERR "swim3: Can't request register space\n");
- return -EINVAL;
+ if (macio_irq_count(mdev) < 2) {
+ printk(KERN_WARNING "fd%d: no intrs for device %s\n",
+ index, swim->full_name);
}
- if (request_mem_region(res_dma.start, res_dma.end - res_dma.start + 1,
- " (dma)") == NULL) {
- release_mem_region(res_reg.start,
- res_reg.end - res_reg.start + 1);
- printk(KERN_ERR "swim3: Can't request DMA space\n");
- return -EINVAL;
+ if (macio_request_resource(mdev, 0, "swim3 (mmio)")) {
+ printk(KERN_ERR "fd%d: can't request mmio resource for %s\n",
+ index, swim->full_name);
+ return -EBUSY;
}
-
- if (swim->n_intrs < 2) {
- printk(KERN_INFO "swim3: expecting 2 intrs (n_intrs:%d)\n",
- swim->n_intrs);
- release_mem_region(res_reg.start,
- res_reg.end - res_reg.start + 1);
- release_mem_region(res_dma.start,
- res_dma.end - res_dma.start + 1);
- return -EINVAL;
+ if (macio_request_resource(mdev, 1, "swim3 (dma)")) {
+ printk(KERN_ERR "fd%d: can't request dma resource for %s\n",
+ index, swim->full_name);
+ macio_release_resource(mdev, 0);
+ return -EBUSY;
}
+ dev_set_drvdata(&mdev->ofdev.dev, fs);
- mediabay = (strcasecmp(swim->parent->type, "media-bay") == 0) ? swim->parent : NULL;
+ mediabay = (strcasecmp(swim->parent->type, "media-bay") == 0) ?
+ swim->parent : NULL;
if (mediabay == NULL)
pmac_call_feature(PMAC_FTR_SWIM3_ENABLE, swim, 0, 1);
memset(fs, 0, sizeof(*fs));
spin_lock_init(&fs->lock);
fs->state = idle;
- fs->swim3 = (struct swim3 __iomem *)ioremap(res_reg.start, 0x200);
- fs->dma = (struct dbdma_regs __iomem *)ioremap(res_dma.start, 0x200);
- fs->swim3_intr = swim->intrs[0].line;
- fs->dma_intr = swim->intrs[1].line;
+ fs->swim3 = (struct swim3 __iomem *)
+ ioremap(macio_resource_start(mdev, 0), 0x200);
+ if (fs->swim3 == NULL) {
+ printk("fd%d: couldn't map registers for %s\n",
+ index, swim->full_name);
+ rc = -ENOMEM;
+ goto out_release;
+ }
+ fs->dma = (struct dbdma_regs __iomem *)
+ ioremap(macio_resource_start(mdev, 1), 0x200);
+ if (fs->dma == NULL) {
+ printk("fd%d: couldn't map DMA for %s\n",
+ index, swim->full_name);
+ iounmap(fs->swim3);
+ rc = -ENOMEM;
+ goto out_release;
+ }
+ fs->swim3_intr = macio_irq(mdev, 0);
+ fs->dma_intr = macio_irq(mdev, 1);
fs->cur_cyl = -1;
fs->cur_sector = -1;
fs->secpercyl = 36;
st_le16(&fs->dma_cmd[1].command, DBDMA_STOP);
if (request_irq(fs->swim3_intr, swim3_interrupt, 0, "SWIM3", fs)) {
- printk(KERN_ERR "Couldn't get irq %d for SWIM3\n", fs->swim3_intr);
+ printk(KERN_ERR "fd%d: couldn't request irq %d for %s\n",
+ index, fs->swim3_intr, swim->full_name);
pmac_call_feature(PMAC_FTR_SWIM3_ENABLE, swim, 0, 0);
+ goto out_unmap;
return -EBUSY;
}
/*
if (request_irq(fs->dma_intr, fd_dma_interrupt, 0, "SWIM3-dma", fs)) {
printk(KERN_ERR "Couldn't get irq %d for SWIM3 DMA",
fs->dma_intr);
- pmac_call_feature(PMAC_FTR_SWIM3_ENABLE, swim, 0, 0);
return -EBUSY;
}
*/
printk(KERN_INFO "fd%d: SWIM3 floppy controller %s\n", floppy_count,
mediabay ? "in media bay" : "");
- floppy_count++;
-
+ return 0;
+
+ out_unmap:
+ iounmap(fs->dma);
+ iounmap(fs->swim3);
+
+ out_release:
+ macio_release_resource(mdev, 0);
+ macio_release_resource(mdev, 1);
+
+ return rc;
+}
+
+static int __devinit swim3_attach(struct macio_dev *mdev, const struct of_device_id *match)
+{
+ int i, rc;
+ struct gendisk *disk;
+
+ /* Add the drive */
+ rc = swim3_add_device(mdev, floppy_count);
+ if (rc)
+ return rc;
+
+ /* Now create the queue if not there yet */
+ if (swim3_queue == NULL) {
+ /* If we failed, there isn't much we can do as the driver is still
+ * too dumb to remove the device, just bail out
+ */
+ if (register_blkdev(FLOPPY_MAJOR, "fd"))
+ return 0;
+ swim3_queue = blk_init_queue(do_fd_request, &swim3_lock);
+ if (swim3_queue == NULL) {
+ unregister_blkdev(FLOPPY_MAJOR, "fd");
+ return 0;
+ }
+ }
+
+ /* Now register that disk. Same comment about failure handling */
+ i = floppy_count++;
+ disk = disks[i] = alloc_disk(1);
+ if (disk == NULL)
+ return 0;
+
+ disk->major = FLOPPY_MAJOR;
+ disk->first_minor = i;
+ disk->fops = &floppy_fops;
+ disk->private_data = &floppy_states[i];
+ disk->queue = swim3_queue;
+ disk->flags |= GENHD_FL_REMOVABLE;
+ sprintf(disk->disk_name, "fd%d", i);
+ set_capacity(disk, 2880);
+ add_disk(disk);
+
+ return 0;
+}
+
+static struct of_device_id swim3_match[] =
+{
+ {
+ .name = "swim3",
+ },
+ {
+ .compatible = "ohare-swim3"
+ },
+ {
+ .compatible = "swim3"
+ },
+};
+
+static struct macio_driver swim3_driver =
+{
+ .name = "swim3",
+ .match_table = swim3_match,
+ .probe = swim3_attach,
+#if 0
+ .suspend = swim3_suspend,
+ .resume = swim3_resume,
+#endif
+};
+
+
+int swim3_init(void)
+{
+ macio_register_driver(&swim3_driver);
return 0;
}
pci_set_master(pdev);
- rc = request_irq(pdev->irq, carm_interrupt, SA_SHIRQ, DRV_NAME, host);
+ rc = request_irq(pdev->irq, carm_interrupt, IRQF_SHARED, DRV_NAME, host);
if (rc) {
printk(KERN_ERR DRV_NAME "(%s): irq alloc failure\n",
pci_name(pdev));
card->win_size = data;
- if (request_irq(dev->irq, mm_interrupt, SA_SHIRQ, "pci-umem", card)) {
+ if (request_irq(dev->irq, mm_interrupt, IRQF_SHARED, "pci-umem", card)) {
printk(KERN_ERR "MM%d: Unable to allocate IRQ\n", card->card_number);
ret = -ENODEV;
hdev->type = HCI_PCCARD;
hdev->driver_data = info;
+ SET_HCIDEV_DEV(hdev, &info->p_dev->dev);
hdev->open = bluecard_hci_open;
hdev->close = bluecard_hci_close;
hdev->type = HCI_PCCARD;
hdev->driver_data = info;
+ SET_HCIDEV_DEV(hdev, &info->p_dev->dev);
hdev->open = bt3c_hci_open;
hdev->close = bt3c_hci_close;
hdev->type = HCI_PCCARD;
hdev->driver_data = info;
+ SET_HCIDEV_DEV(hdev, &info->p_dev->dev);
hdev->open = btuart_hci_open;
hdev->close = btuart_hci_close;
hdev->type = HCI_PCCARD;
hdev->driver_data = info;
+ SET_HCIDEV_DEV(hdev, &info->p_dev->dev);
hdev->open = dtl1_hci_open;
hdev->close = dtl1_hci_close;
/* RTX Telecom based adapter with buggy SCO support */
{ USB_DEVICE(0x0400, 0x0807), .driver_info = HCI_BROKEN_ISOC },
+ /* Belkin F8T012 */
+ { USB_DEVICE(0x050d, 0x0012), .driver_info = HCI_WRONG_SCO_MTU },
+
/* Digianswer devices */
{ USB_DEVICE(0x08fd, 0x0001), .driver_info = HCI_DIGIANSWER },
{ USB_DEVICE(0x08fd, 0x0002), .driver_info = HCI_IGNORE },
/* CSR BlueCore Bluetooth Sniffer */
{ USB_DEVICE(0x0a12, 0x0002), .driver_info = HCI_SNIFFER },
+ /* Frontline ComProbe Bluetooth Sniffer */
+ { USB_DEVICE(0x16d3, 0x0002), .driver_info = HCI_SNIFFER },
+
{ } /* Terminating entry */
};
if (reset || id->driver_info & HCI_RESET)
set_bit(HCI_QUIRK_RESET_ON_INIT, &hdev->quirks);
+ if (id->driver_info & HCI_WRONG_SCO_MTU)
+ set_bit(HCI_QUIRK_FIXUP_BUFFER_SIZE, &hdev->quirks);
+
if (id->driver_info & HCI_SNIFFER) {
if (le16_to_cpu(udev->descriptor.bcdDevice) > 0x997)
set_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks);
hci_free_dev(hdev);
}
+static int hci_usb_suspend(struct usb_interface *intf, pm_message_t message)
+{
+ struct hci_usb *husb = usb_get_intfdata(intf);
+ struct list_head killed;
+ unsigned long flags;
+ int i;
+
+ if (!husb || intf == husb->isoc_iface)
+ return 0;
+
+ hci_suspend_dev(husb->hdev);
+
+ INIT_LIST_HEAD(&killed);
+
+ for (i = 0; i < 4; i++) {
+ struct _urb_queue *q = &husb->pending_q[i];
+ struct _urb *_urb, *_tmp;
+
+ while ((_urb = _urb_dequeue(q))) {
+ /* reset queue since _urb_dequeue sets it to NULL */
+ _urb->queue = q;
+ usb_kill_urb(&_urb->urb);
+ list_add(&_urb->list, &killed);
+ }
+
+ spin_lock_irqsave(&q->lock, flags);
+
+ list_for_each_entry_safe(_urb, _tmp, &killed, list) {
+ list_move_tail(&_urb->list, &q->head);
+ }
+
+ spin_unlock_irqrestore(&q->lock, flags);
+ }
+
+ return 0;
+}
+
+static int hci_usb_resume(struct usb_interface *intf)
+{
+ struct hci_usb *husb = usb_get_intfdata(intf);
+ unsigned long flags;
+ int i, err = 0;
+
+ if (!husb || intf == husb->isoc_iface)
+ return 0;
+
+ for (i = 0; i < 4; i++) {
+ struct _urb_queue *q = &husb->pending_q[i];
+ struct _urb *_urb;
+
+ spin_lock_irqsave(&q->lock, flags);
+
+ list_for_each_entry(_urb, &q->head, list) {
+ err = usb_submit_urb(&_urb->urb, GFP_ATOMIC);
+ if (err)
+ break;
+ }
+
+ spin_unlock_irqrestore(&q->lock, flags);
+
+ if (err)
+ return -EIO;
+ }
+
+ hci_resume_dev(husb->hdev);
+
+ return 0;
+}
+
static struct usb_driver hci_usb_driver = {
.name = "hci_usb",
.probe = hci_usb_probe,
.disconnect = hci_usb_disconnect,
+ .suspend = hci_usb_suspend,
+ .resume = hci_usb_resume,
.id_table = bluetooth_ids,
};
#define HCI_SNIFFER 0x10
#define HCI_BCM92035 0x20
#define HCI_BROKEN_ISOC 0x40
+#define HCI_WRONG_SCO_MTU 0x80
#define HCI_MAX_IFACE_NUM 3
hdev->type = HCI_VHCI;
hdev->driver_data = vhci;
- SET_HCIDEV_DEV(hdev, vhci_miscdev.dev);
hdev->open = vhci_open_dev;
hdev->close = vhci_close_dev;
if (cdu31a_irq > 0) {
if (request_irq
- (cdu31a_irq, cdu31a_interrupt, SA_INTERRUPT,
+ (cdu31a_irq, cdu31a_interrupt, IRQF_DISABLED,
"cdu31a", NULL)) {
printk(KERN_WARNING PFX "Unable to grab IRQ%d for "
"the CDU31A driver\n", cdu31a_irq);
}
xtrace(INIT, "init() subscribe irq and i/o\n");
- if (request_irq(stuffp->irq, mcdx_intr, SA_INTERRUPT, "mcdx", stuffp)) {
+ if (request_irq(stuffp->irq, mcdx_intr, IRQF_DISABLED, "mcdx", stuffp)) {
release_region(stuffp->wreg_data, MCDX_IO_SIZE);
xwarn("%s=0x%03x,%d: Init failed. Can't get irq (%d).\n",
MCDX, stuffp->wreg_data, stuffp->irq, stuffp->irq);
}
if (sony535_irq_used > 0) {
if (request_irq(sony535_irq_used, cdu535_interrupt,
- SA_INTERRUPT, CDU535_HANDLE, NULL)) {
+ IRQF_DISABLED, CDU535_HANDLE, NULL)) {
printk("Unable to grab IRQ%d for the " CDU535_MESSAGE_NAME
" driver; polling instead.\n", sony535_irq_used);
sony535_irq_used = 0;
return ret_val;
}
-static struct file_operations agp_fops =
+static const struct file_operations agp_fops =
{
.owner = THIS_MODULE,
.llseek = no_llseek,
/* set ISRs, and then disable the rx interrupts */
request_irq(IRQ_AMIGA_TBE, ser_tx_int, 0, "serial TX", state);
- request_irq(IRQ_AMIGA_RBF, ser_rx_int, SA_INTERRUPT, "serial RX", state);
+ request_irq(IRQ_AMIGA_RBF, ser_rx_int, IRQF_DISABLED, "serial RX", state);
/* turn off Rx and Tx interrupts */
custom.intena = IF_RBF | IF_TBE;
unsigned long);
static irqreturn_t ac_interrupt(int, void *, struct pt_regs *);
-static struct file_operations ac_fops = {
+static const struct file_operations ac_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.read = ac_read,
continue;
}
- if (request_irq(dev->irq, &ac_interrupt, SA_SHIRQ, "Applicom PCI", &dummy)) {
+ if (request_irq(dev->irq, &ac_interrupt, IRQF_SHARED, "Applicom PCI", &dummy)) {
printk(KERN_INFO "Could not allocate IRQ %d for PCI Applicom device.\n", dev->irq);
iounmap(RamIO);
pci_disable_device(dev);
printk(KERN_NOTICE "Applicom ISA card found at mem 0x%lx, irq %d\n", mem + (LEN_RAM_IO*i), irq);
if (!numisa) {
- if (request_irq(irq, &ac_interrupt, SA_SHIRQ, "Applicom ISA", &dummy)) {
+ if (request_irq(irq, &ac_interrupt, IRQF_SHARED, "Applicom ISA", &dummy)) {
printk(KERN_WARNING "Could not allocate IRQ %d for ISA Applicom device.\n", irq);
iounmap(RamIO);
apbs[boardno - 1].RamIO = NULL;
return nonseekable_open(inode, file);
}
-static struct file_operations cs5535_gpio_fops = {
+static const struct file_operations cs5535_gpio_fops = {
.owner = THIS_MODULE,
.write = cs5535_gpio_write,
.read = cs5535_gpio_read,
/* allocate IRQ */
if(request_irq(cy_isa_irq, cyy_interrupt,
- SA_INTERRUPT, "Cyclom-Y", &cy_card[j]))
+ IRQF_DISABLED, "Cyclom-Y", &cy_card[j]))
{
printk("Cyclom-Y/ISA found at 0x%lx ",
(unsigned long) cy_isa_address);
/* allocate IRQ */
if(request_irq(cy_pci_irq, cyy_interrupt,
- SA_SHIRQ, "Cyclom-Y", &cy_card[j]))
+ IRQF_SHARED, "Cyclom-Y", &cy_card[j]))
{
printk("Cyclom-Y/PCI found at 0x%lx ",
(ulong) cy_pci_phys2);
/* allocate IRQ only if board has an IRQ */
if( (cy_pci_irq != 0) && (cy_pci_irq != 255) ) {
if(request_irq(cy_pci_irq, cyz_interrupt,
- SA_SHIRQ, "Cyclades-Z", &cy_card[j]))
+ IRQF_SHARED, "Cyclades-Z", &cy_card[j]))
{
printk("Cyclom-8Zo/PCI found at 0x%lx ",
(ulong) cy_pci_phys2);
/* allocate IRQ only if board has an IRQ */
if( (cy_pci_irq != 0) && (cy_pci_irq != 255) ) {
if(request_irq(cy_pci_irq, cyz_interrupt,
- SA_SHIRQ, "Cyclades-Z", &cy_card[j]))
+ IRQF_SHARED, "Cyclades-Z", &cy_card[j]))
{
printk("Cyclom-Ze/PCI found at 0x%lx ",
(ulong) cy_pci_phys2);
/* Install handler */
if (drm_core_check_feature(dev, DRIVER_IRQ_SHARED))
- sh_flags = SA_SHIRQ;
+ sh_flags = IRQF_SHARED;
ret = request_irq(dev->irq, dev->driver->irq_handler,
sh_flags, dev->devname, dev);
* The various file operations we support.
*/
-static struct file_operations ds1286_fops = {
+static const struct file_operations ds1286_fops = {
.llseek = no_llseek,
.read = ds1286_read,
.poll = ds1286_poll,
/* The various file operations we support. */
-static struct file_operations rtc_fops = {
+static const struct file_operations rtc_fops = {
.owner = THIS_MODULE,
.ioctl = rtc_ioctl,
};
static struct proc_dir_entry *proc_therm_ds1620;
#endif
-static struct file_operations ds1620_fops = {
+static const struct file_operations ds1620_fops = {
.owner = THIS_MODULE,
.open = nonseekable_open,
.read = ds1620_read,
return 0;
}
-static struct file_operations dsp56k_fops = {
+static const struct file_operations dsp56k_fops = {
.owner = THIS_MODULE,
.read = dsp56k_read,
.write = dsp56k_write,
static int dtlk_ioctl(struct inode *inode, struct file *file,
unsigned int cmd, unsigned long arg);
-static struct file_operations dtlk_fops =
+static const struct file_operations dtlk_fops =
{
.owner = THIS_MODULE,
.read = dtlk_read,
* The various file operations we support.
*/
-static struct file_operations efi_rtc_fops = {
+static const struct file_operations efi_rtc_fops = {
.owner = THIS_MODULE,
.ioctl = efi_rtc_ioctl,
.open = efi_rtc_open,
* Allocate the IRQ
*/
- retval = request_irq(info->irq, rs_interrupt_single, SA_SHIRQ,
+ retval = request_irq(info->irq, rs_interrupt_single, IRQF_SHARED,
"esp serial", info);
if (retval) {
/* Get fast interrupt handler.
*/
if (request_irq(fdc.irq, ftape_interrupt,
- SA_INTERRUPT, "ft", ftape_id)) {
+ IRQF_DISABLED, "ft", ftape_id)) {
TRACE_ABORT(-EIO, ft_t_bug,
"Unable to grab IRQ%d for ftape driver",
fdc.irq);
static ssize_t zft_write(struct file *fp, const char __user *buff,
size_t req_len, loff_t *ppos);
-static struct file_operations zft_cdev =
+static const struct file_operations zft_cdev =
{
.owner = THIS_MODULE,
.read = zft_read,
* The various file operations we support.
*/
-static struct file_operations gen_rtc_fops = {
+static const struct file_operations gen_rtc_fops = {
.owner = THIS_MODULE,
#ifdef CONFIG_GEN_RTC_X
.read = gen_rtc_read,
sprintf(devp->hd_name, "hpet%d", (int)(devp - hpetp->hp_dev));
irq_flags = devp->hd_flags & HPET_SHARED_IRQ
- ? SA_SHIRQ : SA_INTERRUPT;
+ ? IRQF_SHARED : IRQF_DISABLED;
if (request_irq(irq, hpet_interrupt, irq_flags,
devp->hd_name, (void *)devp)) {
printk(KERN_ERR "hpet: IRQ %d is not free\n", irq);
return err;
}
-static struct file_operations hpet_fops = {
+static const struct file_operations hpet_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.read = hpet_read,
spin_unlock_irqrestore(&hp->lock, flags);
/* check error, fallback to non-irq */
if (irq != NO_IRQ)
- rc = request_irq(irq, hvc_handle_interrupt, SA_INTERRUPT, "hvc_console", hp);
+ rc = request_irq(irq, hvc_handle_interrupt, IRQF_DISABLED, "hvc_console", hp);
/*
* If the request_irq() fails and we return an error. The tty layer
* the conn was registered and now.
*/
if (!(rc = request_irq(irq, &hvcs_handle_interrupt,
- SA_INTERRUPT, "ibmhvcs", hvcsd))) {
+ IRQF_DISABLED, "ibmhvcs", hvcsd))) {
/*
* It is possible the vty-server was removed after the irq was
* requested but before we have time to enable interrupts.
struct hvsi_struct *hp = &hvsi_ports[i];
int ret = 1;
- ret = request_irq(hp->virq, hvsi_interrupt, SA_INTERRUPT, "hvsi", hp);
+ ret = request_irq(hp->virq, hvsi_interrupt, IRQF_DISABLED, "hvsi", hp);
if (ret)
printk(KERN_ERR "HVSI: couldn't reserve irq 0x%x (error %i)\n",
hp->virq, ret);
hp->inbuf_end = hp->inbuf;
hp->state = HVSI_CLOSED;
hp->vtermno = *vtermno;
- hp->virq = virt_irq_create_mapping(irq[0]);
+ hp->virq = irq_create_mapping(NULL, irq[0], 0);
if (hp->virq == NO_IRQ) {
printk(KERN_ERR "%s: couldn't create irq mapping for 0x%x\n",
- __FUNCTION__, hp->virq);
+ __FUNCTION__, irq[0]);
continue;
- } else
- hp->virq = irq_offset_up(hp->virq);
+ }
hvsi_count++;
}
}
-static struct file_operations rng_chrdev_ops = {
+static const struct file_operations rng_chrdev_ops = {
.owner = THIS_MODULE,
.open = rng_dev_open,
.read = rng_dev_read,
static int i8k_ioctl(struct inode *, struct file *, unsigned int,
unsigned long);
-static struct file_operations i8k_fops = {
+static const struct file_operations i8k_fops = {
.open = i8k_open_fs,
.read = seq_read,
.llseek = seq_lseek,
/* This is the driver descriptor for the ip2ipl device, which is used to
* download the loadware to the boards.
*/
-static struct file_operations ip2_ipl = {
+static const struct file_operations ip2_ipl = {
.owner = THIS_MODULE,
.read = ip2_ipl_read,
.write = ip2_ipl_write,
/* initialisation of the devices and driver structures, and registers itself */
/* with the relevant kernel modules. */
/******************************************************************************/
-/* SA_INTERRUPT- if set blocks all interrupts else only this line */
-/* SA_SHIRQ - for shared irq PCI or maybe EISA only */
+/* IRQF_DISABLED - if set blocks all interrupts else only this line */
+/* IRQF_SHARED - for shared irq PCI or maybe EISA only */
/* SA_RANDOM - can be source for cert. random number generators */
#define IP2_SA_FLAGS 0
if (have_requested_irq(ip2config.irq[i]))
continue;
rc = request_irq( ip2config.irq[i], ip2_interrupt,
- IP2_SA_FLAGS | (ip2config.type[i] == PCI ? SA_SHIRQ : 0),
+ IP2_SA_FLAGS | (ip2config.type[i] == PCI ? IRQF_SHARED : 0),
pcName, (void *)&pcName);
if (rc) {
printk(KERN_ERR "IP2: an request_irq failed: error %d\n",rc);
* The various file operations we support.
*/
-static struct file_operations rtc_fops = {
+static const struct file_operations rtc_fops = {
.owner = THIS_MODULE,
.ioctl = rtc_ioctl,
.open = rtc_open,
}
#endif
-static struct file_operations ipmi_fops = {
+static const struct file_operations ipmi_fops = {
.owner = THIS_MODULE,
.ioctl = ipmi_ioctl,
#ifdef CONFIG_COMPAT
if (info->si_type == SI_BT) {
rv = request_irq(info->irq,
si_bt_irq_handler,
- SA_INTERRUPT,
+ IRQF_DISABLED,
DEVICE_NAME,
info);
if (!rv)
} else
rv = request_irq(info->irq,
si_irq_handler,
- SA_INTERRUPT,
+ IRQF_DISABLED,
DEVICE_NAME,
info);
if (rv) {
return 0;
}
-static struct file_operations ipmi_wdog_fops = {
+static const struct file_operations ipmi_wdog_fops = {
.owner = THIS_MODULE,
.read = ipmi_read,
.poll = ipmi_poll,
const unsigned int index)
{
struct isi_board *board = pci_get_drvdata(pdev);
- unsigned long irqflags = SA_INTERRUPT;
+ unsigned long irqflags = IRQF_DISABLED;
int retval = -EINVAL;
if (!board->base)
goto end;
if (board->isa == NO)
- irqflags |= SA_SHIRQ;
+ irqflags |= IRQF_SHARED;
retval = request_irq(board->irq, isicom_interrupt, irqflags,
ISICOM_NAME, board);
* will give access to the shared memory on the Stallion intelligent
* board. This is also a very useful debugging tool.
*/
-static struct file_operations stli_fsiomem = {
+static const struct file_operations stli_fsiomem = {
.owner = THIS_MODULE,
.read = stli_memread,
.write = stli_memwrite,
}
}
-static struct file_operations ite_gpio_fops = {
+static const struct file_operations ite_gpio_fops = {
.owner = THIS_MODULE,
.ioctl = ite_gpio_ioctl,
.open = ite_gpio_open,
init_waitqueue_head(&ite_gpio_wait[i]);
}
- if (request_irq(ite_gpio_irq, ite_gpio_irq_handler, SA_SHIRQ, "gpio", 0) < 0) {
+ if (request_irq(ite_gpio_irq, ite_gpio_irq_handler, IRQF_SHARED, "gpio", 0) < 0) {
misc_deregister(&ite_gpio_miscdev);
release_region(ite_gpio_base, 0x1c);
return 0;
* The various file operations we support.
*/
-static struct file_operations lcd_fops = {
+static const struct file_operations lcd_fops = {
.read = lcd_read,
.ioctl = lcd_ioctl,
.open = lcd_open,
return retval;
}
-static struct file_operations lp_fops = {
+static const struct file_operations lp_fops = {
.owner = THIS_MODULE,
.write = lp_write,
.ioctl = lp_ioctl,
getdma->intrHostDest = sn_irq->irq_xtalkaddr;
getdma->intrVector = sn_irq->irq_irq;
if (request_irq(sn_irq->irq_irq,
- (void *)mbcs_completion_intr_handler, SA_SHIRQ,
+ (void *)mbcs_completion_intr_handler, IRQF_SHARED,
"MBCS get intr", (void *)soft)) {
tiocx_irq_free(soft->get_sn_irq);
return -EAGAIN;
putdma->intrHostDest = sn_irq->irq_xtalkaddr;
putdma->intrVector = sn_irq->irq_irq;
if (request_irq(sn_irq->irq_irq,
- (void *)mbcs_completion_intr_handler, SA_SHIRQ,
+ (void *)mbcs_completion_intr_handler, IRQF_SHARED,
"MBCS put intr", (void *)soft)) {
tiocx_irq_free(soft->put_sn_irq);
free_irq(soft->get_sn_irq->irq_irq, soft);
algo->intrHostDest = sn_irq->irq_xtalkaddr;
algo->intrVector = sn_irq->irq_irq;
if (request_irq(sn_irq->irq_irq,
- (void *)mbcs_completion_intr_handler, SA_SHIRQ,
+ (void *)mbcs_completion_intr_handler, IRQF_SHARED,
"MBCS algo intr", (void *)soft)) {
tiocx_irq_free(soft->algo_sn_irq);
free_irq(soft->put_sn_irq->irq_irq, soft);
#define open_kmem open_mem
#define open_oldmem open_mem
-static struct file_operations mem_fops = {
+static const struct file_operations mem_fops = {
.llseek = memory_lseek,
.read = read_mem,
.write = write_mem,
.open = open_mem,
};
-static struct file_operations kmem_fops = {
+static const struct file_operations kmem_fops = {
.llseek = memory_lseek,
.read = read_kmem,
.write = write_kmem,
.open = open_kmem,
};
-static struct file_operations null_fops = {
+static const struct file_operations null_fops = {
.llseek = null_lseek,
.read = read_null,
.write = write_null,
};
#if defined(CONFIG_ISA) || !defined(__mc68000__)
-static struct file_operations port_fops = {
+static const struct file_operations port_fops = {
.llseek = memory_lseek,
.read = read_port,
.write = write_port,
};
#endif
-static struct file_operations zero_fops = {
+static const struct file_operations zero_fops = {
.llseek = zero_lseek,
.read = read_zero,
.write = write_zero,
.capabilities = BDI_CAP_MAP_COPY,
};
-static struct file_operations full_fops = {
+static const struct file_operations full_fops = {
.llseek = full_lseek,
.read = read_full,
.write = write_full,
};
#ifdef CONFIG_CRASH_DUMP
-static struct file_operations oldmem_fops = {
+static const struct file_operations oldmem_fops = {
.read = read_oldmem,
.open = open_oldmem,
};
return ret;
}
-static struct file_operations kmsg_fops = {
+static const struct file_operations kmsg_fops = {
.write = kmsg_write,
};
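The struct file_operations constifications in this series all follow the same pattern: the table is filled in at compile time and never written afterwards, so marking it const lets the compiler place it in read-only data. A minimal sketch with made-up names:

#include <linux/fs.h>
#include <linux/module.h>

static ssize_t exampledev_read(struct file *file, char __user *buf,
			       size_t count, loff_t *ppos)
{
	return 0;			/* illustrative stub */
}

/* const: never modified after initialization, can live in .rodata */
static const struct file_operations exampledev_fops = {
	.owner	= THIS_MODULE,
	.read	= exampledev_read,
	.open	= nonseekable_open,
};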
return 0;
}
-static struct file_operations memory_fops = {
+static const struct file_operations memory_fops = {
.open = memory_open, /* just a selector for the real open */
};
return seq_open(file, &misc_seq_ops);
}
-static struct file_operations misc_proc_fops = {
+static const struct file_operations misc_proc_fops = {
.owner = THIS_MODULE,
.open = misc_seq_open,
.read = seq_read,
*/
static struct class *misc_class;
-static struct file_operations misc_fops = {
+static const struct file_operations misc_fops = {
.owner = THIS_MODULE,
.open = misc_open,
};
*/
static unsigned long mmtimer_femtoperiod = 0;
-static struct file_operations mmtimer_fops = {
+static const struct file_operations mmtimer_fops = {
.owner = THIS_MODULE,
.mmap = mmtimer_mmap,
.ioctl = mmtimer_ioctl,
mmtimer_femtoperiod = ((unsigned long)1E15 + sn_rtc_cycles_per_second /
2) / sn_rtc_cycles_per_second;
- if (request_irq(SGI_MMTIMER_VECTOR, mmtimer_interrupt, SA_PERCPU_IRQ, MMTIMER_NAME, NULL)) {
+ if (request_irq(SGI_MMTIMER_VECTOR, mmtimer_interrupt, IRQF_PERCPU, MMTIMER_NAME, NULL)) {
printk(KERN_WARNING "%s: unable to allocate interrupt.",
MMTIMER_NAME);
return -1;
}
-static struct file_operations mwave_fops = {
+static const struct file_operations mwave_fops = {
.owner = THIS_MODULE,
.read = mwave_read,
.write = mwave_write,
#define RELEVANT_IFLAG(iflag) (iflag & (IGNBRK|BRKINT|IGNPAR|PARMRK|INPCK|\
IXON|IXOFF))
-#define IRQ_T(info) ((info->flags & ASYNC_SHARE_IRQ) ? SA_SHIRQ : SA_INTERRUPT)
+#define IRQ_T(info) ((info->flags & ASYNC_SHARE_IRQ) ? IRQF_SHARED : IRQF_DISABLED)
#define C168_ASIC_ID 1
#define C104_ASIC_ID 2
#endif /* CONFIG_PROC_FS */
-static struct file_operations nvram_fops = {
+static const struct file_operations nvram_fops = {
.owner = THIS_MODULE,
.llseek = nvram_llseek,
.read = nvram_read,
* attempts to perform these operations on the device.
*/
-static struct file_operations button_fops = {
+static const struct file_operations button_fops = {
.owner = THIS_MODULE,
.read = button_read,
};
return -EBUSY;
}
- if (request_irq (IRQ_NETWINDER_BUTTON, button_handler, SA_INTERRUPT,
+ if (request_irq (IRQ_NETWINDER_BUTTON, button_handler, IRQF_DISABLED,
"nwbutton", NULL)) {
printk (KERN_WARNING "nwbutton: IRQ %d is not free.\n",
IRQ_NETWINDER_BUTTON);
udelay(25);
}
-static struct file_operations flash_fops =
+static const struct file_operations flash_fops =
{
.owner = THIS_MODULE,
.llseek = flash_llseek,
return nonseekable_open(inode, file);
}
-static struct file_operations pc8736x_gpio_fops = {
+static const struct file_operations pc8736x_gpio_fops = {
.owner = THIS_MODULE,
.open = pc8736x_gpio_open,
.write = nsc_gpio_write,
return;
}
-static struct file_operations cm4000_fops = {
+static const struct file_operations cm4000_fops = {
.owner = THIS_MODULE,
.read = cmm_read,
.write = cmm_write,
return;
}
-static struct file_operations reader_fops = {
+static const struct file_operations reader_fops = {
.owner = THIS_MODULE,
.read = cm4040_read,
.write = cm4040_write,
static struct class *ppdev_class;
-static struct file_operations pp_fops = {
+static const struct file_operations pp_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.read = pp_read,
cir_port_init(cir);
retval = request_irq(IT8172_CIR0_IRQ, kbd_int_handler,
- (unsigned long )(SA_INTERRUPT|SA_SHIRQ),
+ (unsigned long )(IRQF_DISABLED|IRQF_SHARED),
(const char *)"Qtronix IR Keyboard", (void *)cir);
if (retval) {
.poolinfo = &poolinfo_table[0],
.name = "input",
.limit = 1,
- .lock = SPIN_LOCK_UNLOCKED,
+ .lock = __SPIN_LOCK_UNLOCKED(&input_pool.lock),
.pool = input_pool_data
};
.name = "blocking",
.limit = 1,
.pull = &input_pool,
- .lock = SPIN_LOCK_UNLOCKED,
+ .lock = __SPIN_LOCK_UNLOCKED(&blocking_pool.lock),
.pool = blocking_pool_data
};
.poolinfo = &poolinfo_table[1],
.name = "nonblocking",
.pull = &input_pool,
- .lock = SPIN_LOCK_UNLOCKED,
+ .lock = __SPIN_LOCK_UNLOCKED(&nonblocking_pool.lock),
.pool = nonblocking_pool_data
};
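The SPIN_LOCK_UNLOCKED replacements above switch the statically initialized pool locks to the lockdep-aware initializer: __SPIN_LOCK_UNLOCKED() records its (stringified) argument as the lock's name, so lockdep output can tell the pool locks apart. A sketch of the pattern with made-up names:

#include <linux/spinlock.h>

static struct example_pool {
	spinlock_t	lock;
	int		count;
} example_pool = {
	/* per-lock initializer instead of the shared SPIN_LOCK_UNLOCKED */
	.lock	= __SPIN_LOCK_UNLOCKED(&example_pool.lock),
	.count	= 0,
};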
static struct class *raw_class;
static struct raw_device_data raw_devices[MAX_RAW_MINORS];
static DEFINE_MUTEX(raw_mutex);
-static struct file_operations raw_ctl_fops; /* forward declaration */
+static const struct file_operations raw_ctl_fops; /* forward declaration */
/*
* Open/close code for raw IO.
}
-static struct file_operations raw_fops = {
+static const struct file_operations raw_fops = {
.read = generic_file_read,
.aio_read = generic_file_aio_read,
.write = raw_file_write,
.owner = THIS_MODULE,
};
-static struct file_operations raw_ctl_fops = {
+static const struct file_operations raw_ctl_fops = {
.ioctl = raw_ctl_ioctl,
.open = raw_open,
.owner = THIS_MODULE,
*
*/
-static struct file_operations rio_fw_fops = {
+static const struct file_operations rio_fw_fops = {
.owner = THIS_MODULE,
.ioctl = rio_fw_ioctl,
};
for (i = 0; i < p->RIONumHosts; i++) {
hp = &p->RIOHosts[i];
if (hp->Ivec) {
- int mode = SA_SHIRQ;
+ int mode = IRQF_SHARED;
if (hp->Ivec & 0x8000) {
mode = 0;
hp->Ivec &= 0x7fff;
if (bp->flags & RC_BOARD_ACTIVE)
return 0;
- error = request_irq(bp->irq, rc_interrupt, SA_INTERRUPT,
+ error = request_irq(bp->irq, rc_interrupt, IRQF_DISABLED,
"RISCom/8", NULL);
if (error)
return error;
#ifdef RTC_IRQ
/*
- * A very tiny interrupt handler. It runs with SA_INTERRUPT set,
+ * A very tiny interrupt handler. It runs with IRQF_DISABLED set,
 * but there is a possibility of conflicting with the set_rtc_mmss()
* call (the rtc irq and the timer irq can easily run at the same
* time in two different CPUs). So we need to serialize
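
The comment above notes that IRQF_DISABLED only keeps interrupts off on the local CPU, so the handler still has to be serialized against code (such as set_rtc_mmss()) running on another CPU. A sketch of that serialization, with hypothetical names:

	#include <linux/interrupt.h>
	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(example_rtc_lock);

	static irqreturn_t example_rtc_interrupt(int irq, void *dev_id,
	                                         struct pt_regs *regs)
	{
	        spin_lock(&example_rtc_lock);
	        /* read and clear the (hypothetical) RTC status register here */
	        spin_unlock(&example_rtc_lock);
	        return IRQ_HANDLED;
	}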
* The various file operations we support.
*/
-static struct file_operations rtc_fops = {
+static const struct file_operations rtc_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.read = rtc_read,
.fops = &rtc_fops,
};
-static struct file_operations rtc_proc_fops = {
+static const struct file_operations rtc_proc_fops = {
.owner = THIS_MODULE,
.open = rtc_proc_open,
.read = seq_read,
* XXX Interrupt pin #7 in Espresso is shared between RTC and
* PCI Slot 2 INTA# (and some INTx# in Slot 1).
*/
- if (request_irq(rtc_irq, rtc_interrupt, SA_SHIRQ, "rtc", (void *)&rtc_port)) {
+ if (request_irq(rtc_irq, rtc_interrupt, IRQF_SHARED, "rtc", (void *)&rtc_port)) {
printk(KERN_ERR "rtc: cannot register IRQ %d\n", rtc_irq);
return -EIO;
}
rtc_int_handler_ptr = rtc_interrupt;
}
- if(request_irq(RTC_IRQ, rtc_int_handler_ptr, SA_INTERRUPT, "rtc", NULL)) {
+ if(request_irq(RTC_IRQ, rtc_int_handler_ptr, IRQF_DISABLED, "rtc", NULL)) {
/* Yeah right, seeing as irq 8 doesn't even hit the bus. */
printk(KERN_ERR "rtc: IRQ %d is not free.\n", RTC_IRQ);
release_region(RTC_PORT(0), RTC_IO_EXTENT);
int ret;
ret = request_irq(s3c2410_rtc_alarmno, s3c2410_rtc_alarmirq,
- SA_INTERRUPT, "s3c2410-rtc alarm", NULL);
+ IRQF_DISABLED, "s3c2410-rtc alarm", NULL);
if (ret)
printk(KERN_ERR "IRQ%d already in use\n", s3c2410_rtc_alarmno);
ret = request_irq(s3c2410_rtc_tickno, s3c2410_rtc_tickirq,
- SA_INTERRUPT, "s3c2410-rtc tick", NULL);
+ IRQF_DISABLED, "s3c2410-rtc tick", NULL);
if (ret) {
printk(KERN_ERR "IRQ%d already in use\n", s3c2410_rtc_tickno);
}
-static struct file_operations scx200_gpio_fops = {
+static const struct file_operations scx200_gpio_fops = {
.owner = THIS_MODULE,
.write = nsc_gpio_write,
.read = nsc_gpio_read,
/* hook this subchannel up to the system controller interrupt */
rv = request_irq(SGI_UART_VECTOR, scdrv_interrupt,
- SA_SHIRQ | SA_INTERRUPT,
+ IRQF_SHARED | IRQF_DISABLED,
SYSCTL_BASENAME, sd);
if (rv) {
ia64_sn_irtr_close(sd->sd_nasid, sd->sd_subch);
return mask;
}
-static struct file_operations scdrv_fops = {
+static const struct file_operations scdrv_fops = {
.owner = THIS_MODULE,
.read = scdrv_read,
.write = scdrv_write,
/* hook event subchannel up to the system controller interrupt */
rv = request_irq(SGI_UART_VECTOR, scdrv_event_interrupt,
- SA_SHIRQ | SA_INTERRUPT,
+ IRQF_SHARED | IRQF_DISABLED,
"system controller events", event_sd);
if (rv) {
printk(KERN_WARNING "%s: irq request failed (%d)\n",
return ret;
}
-static struct file_operations sonypi_misc_fops = {
+static const struct file_operations sonypi_misc_fops = {
.owner = THIS_MODULE,
.read = sonypi_misc_read,
.poll = sonypi_misc_poll,
while (irq_list->irq) {
if (!request_irq(irq_list->irq, sonypi_irq,
- SA_SHIRQ, "sonypi", sonypi_irq)) {
+ IRQF_SHARED, "sonypi", sonypi_irq)) {
dev->irq = irq_list->irq;
dev->bits = irq_list->bits;
return 0;
return 0;
if (bp->flags & SX_BOARD_IS_PCI)
- error = request_irq(bp->irq, sx_interrupt, SA_INTERRUPT | SA_SHIRQ, "specialix IO8+", bp);
+ error = request_irq(bp->irq, sx_interrupt, IRQF_DISABLED | IRQF_SHARED, "specialix IO8+", bp);
else
- error = request_irq(bp->irq, sx_interrupt, SA_INTERRUPT, "specialix IO8+", bp);
+ error = request_irq(bp->irq, sx_interrupt, IRQF_DISABLED, "specialix IO8+", bp);
if (error)
return error;
* Define the driver info for a user level control device. Used mainly
* to get at port stats - only not using the port device itself.
*/
-static struct file_operations stl_fsiomem = {
+static const struct file_operations stl_fsiomem = {
.owner = THIS_MODULE,
.ioctl = stl_memioctl,
};
brdp->nrpanels = 1;
brdp->state |= BRD_FOUND;
brdp->hwid = status;
- if (request_irq(brdp->irq, stl_intr, SA_SHIRQ, name, brdp) != 0) {
+ if (request_irq(brdp->irq, stl_intr, IRQF_SHARED, name, brdp) != 0) {
printk("STALLION: failed to register interrupt "
"routine for %s irq=%d\n", name, brdp->irq);
rc = -ENODEV;
outb((brdp->ioctrlval | ECH_BRDDISABLE), brdp->ioctrl);
brdp->state |= BRD_FOUND;
- if (request_irq(brdp->irq, stl_intr, SA_SHIRQ, name, brdp) != 0) {
+ if (request_irq(brdp->irq, stl_intr, IRQF_SHARED, name, brdp) != 0) {
printk("STALLION: failed to register interrupt "
"routine for %s irq=%d\n", name, brdp->irq);
i = -ENODEV;
*
*/
-static struct file_operations sx_fw_fops = {
+static const struct file_operations sx_fw_fops = {
.owner = THIS_MODULE,
.ioctl = sx_fw_ioctl,
};
if(board->irq > 0) {
/* fixed irq, probably PCI */
if(sx_irqmask & (1 << board->irq)) { /* may we use this irq? */
- if(request_irq(board->irq, sx_interrupt, SA_SHIRQ | SA_INTERRUPT, "sx", board)) {
+ if(request_irq(board->irq, sx_interrupt, IRQF_SHARED | IRQF_DISABLED, "sx", board)) {
printk(KERN_ERR "sx: Cannot allocate irq %d.\n", board->irq);
board->irq = 0;
}
int irqmask = sx_irqmask & (IS_SX_BOARD(board) ? SX_ISA_IRQ_MASK : SI2_ISA_IRQ_MASK);
for(irqnr = 15; irqnr > 0; irqnr--)
if(irqmask & (1 << irqnr))
- if(! request_irq(irqnr, sx_interrupt, SA_SHIRQ | SA_INTERRUPT, "sx", board))
+ if(! request_irq(irqnr, sx_interrupt, IRQF_SHARED | IRQF_DISABLED, "sx", board))
break;
if(! irqnr)
printk(KERN_ERR "sx: Cannot allocate IRQ.\n");
info->bus_type = MGSL_BUS_TYPE_PCI;
info->io_addr_size = 8;
- info->irq_flags = SA_SHIRQ;
+ info->irq_flags = IRQF_SHARED;
if (dev->device == 0x0210) {
/* Version 1 PCI9030 based universal PCI adapter */
info->phys_reg_addr = pci_resource_start(pdev,0);
info->bus_type = MGSL_BUS_TYPE_PCI;
- info->irq_flags = SA_SHIRQ;
+ info->irq_flags = IRQF_SHARED;
info->init_error = -1; /* assume error, set to 0 on successful init */
}
info->phys_statctrl_base &= ~(PAGE_SIZE-1);
info->bus_type = MGSL_BUS_TYPE_PCI;
- info->irq_flags = SA_SHIRQ;
+ info->irq_flags = IRQF_SHARED;
init_timer(&info->tx_timer);
info->tx_timer.data = (unsigned long)info;
.enable_mask = SYSRQ_ENABLE_REMOUNT,
};
-#ifdef CONFIG_DEBUG_MUTEXES
+#ifdef CONFIG_LOCKDEP
static void sysrq_handle_showlocks(int key, struct pt_regs *pt_regs,
struct tty_struct *tty)
{
- mutex_debug_show_all_locks();
+ debug_show_all_locks();
}
+
static struct sysrq_key_op sysrq_showlocks_op = {
.handler = sysrq_handle_showlocks,
.help_msg = "show-all-locks(D)",
return 0;
}
-static struct file_operations tb0219_fops = {
+static const struct file_operations tb0219_fops = {
.owner = THIS_MODULE,
.read = tanbac_tb0219_read,
.write = tanbac_tb0219_write,
/* ----- kernel module registering ------------------------------------ */
-static struct file_operations tipar_fops = {
+static const struct file_operations tipar_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.read = tipar_read,
/* This device is wired through the FPGA IO space of the ATCA blade
* we can't share this IRQ */
result = request_irq(telclk_interrupt, &tlclk_interrupt,
- SA_INTERRUPT, "telco_clock", tlclk_interrupt);
+ IRQF_DISABLED, "telco_clock", tlclk_interrupt);
if (result == -EBUSY) {
printk(KERN_ERR "tlclk: Interrupt can't be reserved.\n");
return -EBUSY;
return 0;
}
-static struct file_operations tlclk_fops = {
+static const struct file_operations tlclk_fops = {
.read = tlclk_read,
.write = tlclk_write,
.open = tlclk_open,
unsigned long);
-static struct file_operations tosh_fops = {
+static const struct file_operations tosh_fops = {
.owner = THIS_MODULE,
.ioctl = tosh_ioctl,
};
return ioread8(chip->vendor.iobase + 1);
}
-static struct file_operations atmel_ops = {
+static const struct file_operations atmel_ops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.open = tpm_open,
static struct attribute_group inf_attr_grp = {.attrs = inf_attrs };
-static struct file_operations inf_ops = {
+static const struct file_operations inf_ops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.open = tpm_open,
return inb(chip->vendor.base + NSC_STATUS);
}
-static struct file_operations nsc_ops = {
+static const struct file_operations nsc_ops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.open = tpm_open,
return rc;
}
-static struct file_operations tis_ops = {
+static const struct file_operations tis_ops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.open = tpm_open,
iowrite8(i, chip->vendor.iobase +
TPM_INT_VECTOR(chip->vendor.locality));
if (request_irq
- (i, tis_int_probe, SA_SHIRQ,
+ (i, tis_int_probe, IRQF_SHARED,
chip->vendor.miscdev.name, chip) != 0) {
dev_info(chip->dev,
"Unable to request irq: %d for probe\n",
chip->vendor.iobase +
TPM_INT_VECTOR(chip->vendor.locality));
if (request_irq
- (chip->vendor.irq, tis_int_handler, SA_SHIRQ,
+ (chip->vendor.irq, tis_int_handler, IRQF_SHARED,
chip->vendor.miscdev.name, chip) != 0) {
dev_info(chip->dev,
"Unable to request irq: %d for use\n",
return cmd == TIOCSPGRP ? -ENOTTY : -EIO;
}
-static struct file_operations tty_fops = {
+static const struct file_operations tty_fops = {
.llseek = no_llseek,
.read = tty_read,
.write = tty_write,
};
#ifdef CONFIG_UNIX98_PTYS
-static struct file_operations ptmx_fops = {
+static const struct file_operations ptmx_fops = {
.llseek = no_llseek,
.read = tty_read,
.write = tty_write,
};
#endif
-static struct file_operations console_fops = {
+static const struct file_operations console_fops = {
.llseek = no_llseek,
.read = tty_read,
.write = redirected_tty_write,
.fasync = tty_fasync,
};
-static struct file_operations hung_up_tty_fops = {
+static const struct file_operations hung_up_tty_fops = {
.llseek = no_llseek,
.read = hung_up_tty_read,
.write = hung_up_tty_write,
static int tiocsctty(struct tty_struct *tty, int arg)
{
- task_t *p;
+ struct task_struct *p;
if (current->signal->leader &&
(current->signal->session == tty->session))
return 0;
}
-static struct file_operations vcs_fops = {
+static const struct file_operations vcs_fops = {
.llseek = vcs_lseek,
.read = vcs_read,
.write = vcs_write,
return single_open(file, proc_viotape_show, NULL);
}
-static struct file_operations proc_viotape_operations = {
+static const struct file_operations proc_viotape_operations = {
.open = proc_viotape_open,
.read = seq_read,
.llseek = seq_lseek,
port->datap = port->ctrlp + 1;
port->port_a = &scc_ports[0];
port->port_b = &scc_ports[1];
- request_irq(MVME147_IRQ_SCCA_TX, scc_tx_int, SA_INTERRUPT,
+ request_irq(MVME147_IRQ_SCCA_TX, scc_tx_int, IRQF_DISABLED,
"SCC-A TX", port);
- request_irq(MVME147_IRQ_SCCA_STAT, scc_stat_int, SA_INTERRUPT,
+ request_irq(MVME147_IRQ_SCCA_STAT, scc_stat_int, IRQF_DISABLED,
"SCC-A status", port);
- request_irq(MVME147_IRQ_SCCA_RX, scc_rx_int, SA_INTERRUPT,
+ request_irq(MVME147_IRQ_SCCA_RX, scc_rx_int, IRQF_DISABLED,
"SCC-A RX", port);
- request_irq(MVME147_IRQ_SCCA_SPCOND, scc_spcond_int, SA_INTERRUPT,
+ request_irq(MVME147_IRQ_SCCA_SPCOND, scc_spcond_int, IRQF_DISABLED,
"SCC-A special cond", port);
{
SCC_ACCESS_INIT(port);
port->datap = port->ctrlp + 1;
port->port_a = &scc_ports[0];
port->port_b = &scc_ports[1];
- request_irq(MVME147_IRQ_SCCB_TX, scc_tx_int, SA_INTERRUPT,
+ request_irq(MVME147_IRQ_SCCB_TX, scc_tx_int, IRQF_DISABLED,
"SCC-B TX", port);
- request_irq(MVME147_IRQ_SCCB_STAT, scc_stat_int, SA_INTERRUPT,
+ request_irq(MVME147_IRQ_SCCB_STAT, scc_stat_int, IRQF_DISABLED,
"SCC-B status", port);
- request_irq(MVME147_IRQ_SCCB_RX, scc_rx_int, SA_INTERRUPT,
+ request_irq(MVME147_IRQ_SCCB_RX, scc_rx_int, IRQF_DISABLED,
"SCC-B RX", port);
- request_irq(MVME147_IRQ_SCCB_SPCOND, scc_spcond_int, SA_INTERRUPT,
+ request_irq(MVME147_IRQ_SCCB_SPCOND, scc_spcond_int, IRQF_DISABLED,
"SCC-B special cond", port);
{
SCC_ACCESS_INIT(port);
port->datap = port->ctrlp + 2;
port->port_a = &scc_ports[0];
port->port_b = &scc_ports[1];
- request_irq(MVME162_IRQ_SCCA_TX, scc_tx_int, SA_INTERRUPT,
+ request_irq(MVME162_IRQ_SCCA_TX, scc_tx_int, IRQF_DISABLED,
"SCC-A TX", port);
- request_irq(MVME162_IRQ_SCCA_STAT, scc_stat_int, SA_INTERRUPT,
+ request_irq(MVME162_IRQ_SCCA_STAT, scc_stat_int, IRQF_DISABLED,
"SCC-A status", port);
- request_irq(MVME162_IRQ_SCCA_RX, scc_rx_int, SA_INTERRUPT,
+ request_irq(MVME162_IRQ_SCCA_RX, scc_rx_int, IRQF_DISABLED,
"SCC-A RX", port);
- request_irq(MVME162_IRQ_SCCA_SPCOND, scc_spcond_int, SA_INTERRUPT,
+ request_irq(MVME162_IRQ_SCCA_SPCOND, scc_spcond_int, IRQF_DISABLED,
"SCC-A special cond", port);
{
SCC_ACCESS_INIT(port);
port->datap = port->ctrlp + 2;
port->port_a = &scc_ports[0];
port->port_b = &scc_ports[1];
- request_irq(MVME162_IRQ_SCCB_TX, scc_tx_int, SA_INTERRUPT,
+ request_irq(MVME162_IRQ_SCCB_TX, scc_tx_int, IRQF_DISABLED,
"SCC-B TX", port);
- request_irq(MVME162_IRQ_SCCB_STAT, scc_stat_int, SA_INTERRUPT,
+ request_irq(MVME162_IRQ_SCCB_STAT, scc_stat_int, IRQF_DISABLED,
"SCC-B status", port);
- request_irq(MVME162_IRQ_SCCB_RX, scc_rx_int, SA_INTERRUPT,
+ request_irq(MVME162_IRQ_SCCB_RX, scc_rx_int, IRQF_DISABLED,
"SCC-B RX", port);
- request_irq(MVME162_IRQ_SCCB_SPCOND, scc_spcond_int, SA_INTERRUPT,
+ request_irq(MVME162_IRQ_SCCB_SPCOND, scc_spcond_int, IRQF_DISABLED,
"SCC-B special cond", port);
{
port->datap = port->ctrlp + 4;
port->port_a = &scc_ports[0];
port->port_b = &scc_ports[1];
- request_irq(BVME_IRQ_SCCA_TX, scc_tx_int, SA_INTERRUPT,
+ request_irq(BVME_IRQ_SCCA_TX, scc_tx_int, IRQF_DISABLED,
"SCC-A TX", port);
- request_irq(BVME_IRQ_SCCA_STAT, scc_stat_int, SA_INTERRUPT,
+ request_irq(BVME_IRQ_SCCA_STAT, scc_stat_int, IRQF_DISABLED,
"SCC-A status", port);
- request_irq(BVME_IRQ_SCCA_RX, scc_rx_int, SA_INTERRUPT,
+ request_irq(BVME_IRQ_SCCA_RX, scc_rx_int, IRQF_DISABLED,
"SCC-A RX", port);
- request_irq(BVME_IRQ_SCCA_SPCOND, scc_spcond_int, SA_INTERRUPT,
+ request_irq(BVME_IRQ_SCCA_SPCOND, scc_spcond_int, IRQF_DISABLED,
"SCC-A special cond", port);
{
SCC_ACCESS_INIT(port);
port->datap = port->ctrlp + 4;
port->port_a = &scc_ports[0];
port->port_b = &scc_ports[1];
- request_irq(BVME_IRQ_SCCB_TX, scc_tx_int, SA_INTERRUPT,
+ request_irq(BVME_IRQ_SCCB_TX, scc_tx_int, IRQF_DISABLED,
"SCC-B TX", port);
- request_irq(BVME_IRQ_SCCB_STAT, scc_stat_int, SA_INTERRUPT,
+ request_irq(BVME_IRQ_SCCB_STAT, scc_stat_int, IRQF_DISABLED,
"SCC-B status", port);
- request_irq(BVME_IRQ_SCCB_RX, scc_rx_int, SA_INTERRUPT,
+ request_irq(BVME_IRQ_SCCB_RX, scc_rx_int, IRQF_DISABLED,
"SCC-B RX", port);
- request_irq(BVME_IRQ_SCCB_SPCOND, scc_spcond_int, SA_INTERRUPT,
+ request_irq(BVME_IRQ_SCCB_SPCOND, scc_spcond_int, IRQF_DISABLED,
"SCC-B special cond", port);
{
return 0;
}
-static struct file_operations gpio_fops = {
+static const struct file_operations gpio_fops = {
.owner = THIS_MODULE,
.read = gpio_read,
.write = gpio_write,
if (vc_cons_allocated(currcons)) {
struct vc_data *vc = vc_cons[currcons].d;
vc->vc_sw->con_deinit(vc);
+ module_put(vc->vc_sw->owner);
if (vc->vc_kmalloced)
kfree(vc->vc_screenbuf);
if (currcons >= MIN_NR_CONSOLES)
* Kernel Interfaces
*/
-static struct file_operations acq_fops = {
+static const struct file_operations acq_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = acq_write,
* Kernel Interfaces
*/
-static struct file_operations advwdt_fops = {
+static const struct file_operations advwdt_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = advwdt_write,
* Kernel Interfaces
*/
-static struct file_operations ali_fops = {
+static const struct file_operations ali_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = ali_write,
}
}
-static struct file_operations wdt_fops = {
+static const struct file_operations wdt_fops = {
.owner= THIS_MODULE,
.llseek= no_llseek,
.write= fop_write,
/* ......................................................................... */
-static struct file_operations at91wdt_fops = {
+static const struct file_operations at91wdt_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.ioctl = at91_wdt_ioctl,
return 0;
}
-static struct file_operations booke_wdt_fops = {
+static const struct file_operations booke_wdt_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = booke_wdt_write,
return count;
}
-static struct file_operations cpu5wdt_fops = {
+static const struct file_operations cpu5wdt_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.ioctl = cpu5wdt_ioctl,
return 0;
}
-static struct file_operations ep93xx_wdt_fops = {
+static const struct file_operations ep93xx_wdt_fops = {
.owner = THIS_MODULE,
.write = ep93xx_wdt_write,
.ioctl = ep93xx_wdt_ioctl,
*/
-static struct file_operations eurwdt_fops = {
+static const struct file_operations eurwdt_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = eurwdt_write,
goto out;
}
- ret = request_irq(irq, eurwdt_interrupt, SA_INTERRUPT, "eurwdt", NULL);
+ ret = request_irq(irq, eurwdt_interrupt, IRQF_DISABLED, "eurwdt", NULL);
if(ret) {
printk(KERN_ERR "eurwdt: IRQ %d is not free.\n", irq);
goto outmisc;
* Kernel Interfaces
*/
-static struct file_operations esb_fops = {
+static const struct file_operations esb_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = esb_write,
* Kernel Interfaces
*/
-static struct file_operations i8xx_tco_fops = {
+static const struct file_operations i8xx_tco_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = i8xx_tco_write,
* Kernel Interfaces
*/
-static struct file_operations ibwdt_fops = {
+static const struct file_operations ibwdt_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = ibwdt_write,
return 0;
}
-static struct file_operations asr_fops = {
+static const struct file_operations asr_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = asr_write,
return NOTIFY_DONE;
}
-static struct file_operations indydog_fops = {
+static const struct file_operations indydog_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = indydog_write,
}
-static struct file_operations ixp2000_wdt_fops =
+static const struct file_operations ixp2000_wdt_fops =
{
.owner = THIS_MODULE,
.llseek = no_llseek,
}
-static struct file_operations ixp4xx_wdt_fops =
+static const struct file_operations ixp4xx_wdt_fops =
{
.owner = THIS_MODULE,
.llseek = no_llseek,
-static struct file_operations zf_fops = {
+static const struct file_operations zf_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = zf_write,
return 0;
}
-static struct file_operations mixcomwd_fops=
+static const struct file_operations mixcomwd_fops=
{
.owner = THIS_MODULE,
.llseek = no_llseek,
}
}
-static struct file_operations mpc83xx_wdt_fops = {
+static const struct file_operations mpc83xx_wdt_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = mpc83xx_wdt_write,
return 0;
}
-static struct file_operations mpc8xx_wdt_fops = {
+static const struct file_operations mpc8xx_wdt_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = mpc8xx_wdt_write,
/*
* Kernel Interfaces
*/
-static struct file_operations mpcore_wdt_fops = {
+static const struct file_operations mpcore_wdt_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = mpcore_wdt_write,
goto err_misc;
}
- ret = request_irq(wdt->irq, mpcore_wdt_fire, SA_INTERRUPT, "mpcore_wdt", wdt);
+ ret = request_irq(wdt->irq, mpcore_wdt_fire, IRQF_DISABLED, "mpcore_wdt", wdt);
if (ret) {
dev_printk(KERN_ERR, _dev, "cannot register IRQ%d for watchdog\n", wdt->irq);
goto err_irq;
return 0;
}
-static struct file_operations mv64x60_wdt_fops = {
+static const struct file_operations mv64x60_wdt_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = mv64x60_wdt_write,
* Kernel Interfaces
*/
-static struct file_operations pcwd_fops = {
+static const struct file_operations pcwd_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = pcwd_write,
.fops = &pcwd_fops,
};
-static struct file_operations pcwd_temp_fops = {
+static const struct file_operations pcwd_temp_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.read = pcwd_temp_read,
* Kernel Interfaces
*/
-static struct file_operations pcipcwd_fops = {
+static const struct file_operations pcipcwd_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = pcipcwd_write,
.fops = &pcipcwd_fops,
};
-static struct file_operations pcipcwd_temp_fops = {
+static const struct file_operations pcipcwd_temp_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.read = pcipcwd_temp_read,
* Kernel Interfaces
*/
-static struct file_operations usb_pcwd_fops = {
+static const struct file_operations usb_pcwd_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = usb_pcwd_write,
.fops = &usb_pcwd_fops,
};
-static struct file_operations usb_pcwd_temperature_fops = {
+static const struct file_operations usb_pcwd_temperature_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.read = usb_pcwd_temperature_read,
/* kernel interface */
-static struct file_operations s3c2410wdt_fops = {
+static const struct file_operations s3c2410wdt_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = s3c2410wdt_write,
return ret;
}
-static struct file_operations sa1100dog_fops =
+static const struct file_operations sa1100dog_fops =
{
.owner = THIS_MODULE,
.llseek = no_llseek,
}
}
-static struct file_operations wdt_fops = {
+static const struct file_operations wdt_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = fop_write,
* Kernel Interfaces
*/
-static struct file_operations sbc8360_fops = {
+static const struct file_operations sbc8360_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = sbc8360_write,
return NOTIFY_DONE;
}
-static struct file_operations epx_c3_fops = {
+static const struct file_operations epx_c3_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = epx_c3_write,
.notifier_call = sc1200wdt_notify_sys,
};
-static struct file_operations sc1200wdt_fops =
+static const struct file_operations sc1200wdt_fops =
{
.owner = THIS_MODULE,
.llseek = no_llseek,
}
}
-static struct file_operations wdt_fops = {
+static const struct file_operations wdt_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = fop_write,
}
}
-static struct file_operations scx200_wdt_fops = {
+static const struct file_operations scx200_wdt_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = scx200_wdt_write,
return NOTIFY_DONE;
}
-static struct file_operations sh_wdt_fops = {
+static const struct file_operations sh_wdt_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = sh_wdt_write,
* Kernel Interfaces
*/
-static struct file_operations softdog_fops = {
+static const struct file_operations softdog_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = softdog_write,
* Kernel Interfaces
*/
-static struct file_operations wdt_fops = {
+static const struct file_operations wdt_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = wdt_write,
}
}
-static struct file_operations wdt_fops = {
+static const struct file_operations wdt_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = fop_write,
return NOTIFY_DONE;
}
-static struct file_operations wdt_fops=
+static const struct file_operations wdt_fops=
{
.owner = THIS_MODULE,
.llseek = no_llseek,
* Kernel Interfaces
*/
-static struct file_operations wafwdt_fops = {
+static const struct file_operations wafwdt_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = wafwdt_write,
/*** initialization stuff */
-static struct file_operations wdrtas_fops = {
+static const struct file_operations wdrtas_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = wdrtas_write,
.fops = &wdrtas_fops,
};
-static struct file_operations wdrtas_temp_fops = {
+static const struct file_operations wdrtas_temp_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.read = wdrtas_temp_read,
*/
-static struct file_operations wdt_fops = {
+static const struct file_operations wdt_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = wdt_write,
};
#ifdef CONFIG_WDT_501
-static struct file_operations wdt_temp_fops = {
+static const struct file_operations wdt_temp_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.read = wdt_temp_read,
goto out;
}
- ret = request_irq(irq, wdt_interrupt, SA_INTERRUPT, "wdt501p", NULL);
+ ret = request_irq(irq, wdt_interrupt, IRQF_DISABLED, "wdt501p", NULL);
if(ret) {
printk(KERN_ERR "wdt: IRQ %d is not free.\n", irq);
goto outreg;
return ret;
}
-static struct file_operations watchdog_fops = {
+static const struct file_operations watchdog_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = watchdog_write,
return NOTIFY_DONE;
}
-static struct file_operations wdt977_fops=
+static const struct file_operations wdt977_fops=
{
.owner = THIS_MODULE,
.llseek = no_llseek,
*/
-static struct file_operations wdtpci_fops = {
+static const struct file_operations wdtpci_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = wdtpci_write,
};
#ifdef CONFIG_WDT_501_PCI
-static struct file_operations wdtpci_temp_fops = {
+static const struct file_operations wdtpci_temp_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.read = wdtpci_temp_read,
goto out_pci;
}
- if (request_irq (irq, wdtpci_interrupt, SA_INTERRUPT | SA_SHIRQ,
+ if (request_irq (irq, wdtpci_interrupt, IRQF_DISABLED | IRQF_SHARED,
"wdt_pci", &wdtpci_miscdev)) {
printk (KERN_ERR PFX "IRQ %d is not free\n", irq);
goto out_reg;
}
/**
- * dma_client_chan_free - release a DMA channel
- * @chan: &dma_chan
+ * dma_chan_cleanup - release a DMA channel's resources
+ * @kref: kernel reference structure that contains the DMA channel device
*/
void dma_chan_cleanup(struct kref *kref)
{
* dma_chans_rebalance - reallocate channels to clients
*
 * When the number of DMA channels in the system changes,
- * channels need to be rebalanced among clients
+ * channels need to be rebalanced among clients.
*/
static void dma_chans_rebalance(void)
{
/**
* dma_async_client_unregister - unregister a client and free the &dma_client
- * @client:
+ * @client: &dma_client to free
*
* Force frees any allocated DMA channels, frees the &dma_client memory
*/
}
/**
- * dma_async_device_register -
+ * dma_async_device_register - registers DMA devices found
* @device: &dma_device
*/
int dma_async_device_register(struct dma_device *device)
}
/**
- * dma_async_device_unregister -
- * @device: &dma_device
+ * dma_async_device_cleanup - function called when all references are released
+ * @kref: kernel reference object
*/
static void dma_async_device_cleanup(struct kref *kref)
{
complete(&device->done);
}
-void dma_async_device_unregister(struct dma_device* device)
+/**
+ * dma_async_device_unregister - unregisters DMA devices
+ * @device: &dma_device
+ */
+void dma_async_device_unregister(struct dma_device *device)
{
struct dma_chan *chan;
unsigned long flags;
/**
 * do_ioat_dma_memcpy - actual function that initiates an IOAT DMA transaction
- * @chan: IOAT DMA channel handle
+ * @ioat_chan: IOAT DMA channel handle
* @dest: DMA destination address
* @src: DMA source address
* @len: transaction length in bytes
* @dest_off: offset into that page
* @src_pg: pointer to the page to copy from
* @src_off: offset into that page
- * @len: transaction length in bytes. This is guaranteed to not make a copy
+ * @len: transaction length in bytes. This is guaranteed not to make a copy
* across a page boundary.
*/
}
/**
- * ioat_dma_memcpy_issue_pending - push potentially unrecognoized appended descriptors to hw
+ * ioat_dma_memcpy_issue_pending - push potentially unrecognized appended descriptors to hw
* @chan: DMA channel handle
*/
 * ioat_dma_is_complete - poll the status of an IOAT DMA transaction
* @chan: IOAT DMA channel handle
* @cookie: DMA transaction identifier
+ * @done: if not %NULL, updated with last completed transaction
+ * @used: if not %NULL, updated with last used transaction
*/
static enum dma_status ioat_dma_is_complete(struct dma_chan *chan,
device->msi = 0;
}
#endif
- err = request_irq(pdev->irq, &ioat_do_interrupt, SA_SHIRQ, "ioat",
+ err = request_irq(pdev->irq, &ioat_do_interrupt, IRQF_SHARED, "ioat",
device);
if (err)
goto err_irq;
/* if forced, worst case is that rmmod hangs */
__unsafe(THIS_MODULE);
- pci_module_init(&ioat_pci_drv);
+ return pci_module_init(&ioat_pci_drv);
}
module_init(ioat_init_module);
#define IOAT_CHANSTS_OFFSET 0x04 /* 64-bit Channel Status Register */
#define IOAT_CHANSTS_OFFSET_LOW 0x04
#define IOAT_CHANSTS_OFFSET_HIGH 0x08
-#define IOAT_CHANSTS_COMPLETED_DESCRIPTOR_ADDR 0xFFFFFFFFFFFFFFC0
+#define IOAT_CHANSTS_COMPLETED_DESCRIPTOR_ADDR 0xFFFFFFFFFFFFFFC0UL
#define IOAT_CHANSTS_SOFT_ERR 0x0000000000000010
#define IOAT_CHANSTS_DMA_TRANSFER_STATUS 0x0000000000000007
#define IOAT_CHANSTS_DMA_TRANSFER_STATUS_ACTIVE 0x0
#include <asm/io.h>
#include <asm/uaccess.h>
-int num_pages_spanned(struct iovec *iov)
+static int num_pages_spanned(struct iovec *iov)
{
return
((PAGE_ALIGN((unsigned long)iov->iov_base + iov->iov_len) -
irq = sdev->irqs[0];
- if (request_irq (irq, soc_intr, SA_SHIRQ, "SOC", (void *)s)) {
+ if (request_irq (irq, soc_intr, IRQF_SHARED, "SOC", (void *)s)) {
soc_printk ("Cannot order irq %d to go\n", irq);
socs = s->next;
return;
irq = sdev->irqs[0];
- if (request_irq (irq, socal_intr, SA_SHIRQ, "SOCAL", (void *)s)) {
+ if (request_irq (irq, socal_intr, IRQF_SHARED, "SOCAL", (void *)s)) {
socal_printk ("Cannot order irq %d to go\n", irq);
socals = s->next;
return;
if (i2c->irq != 0)
if ((result = request_irq(i2c->irq, mpc_i2c_isr,
- SA_SHIRQ, "i2c-mpc", i2c)) < 0) {
+ IRQF_SHARED, "i2c-mpc", i2c)) < 0) {
printk(KERN_ERR
"i2c-mpc - failed to attach interrupt\n");
goto fail_irq;
#endif
pxa_set_cken(CKEN14_I2C, 1);
- ret = request_irq(IRQ_I2C, i2c_pxa_handler, SA_INTERRUPT,
+ ret = request_irq(IRQ_I2C, i2c_pxa_handler, IRQF_DISABLED,
"pxa2xx-i2c", i2c);
if (ret)
goto out;
goto out;
}
- ret = request_irq(res->start, s3c24xx_i2c_irq, SA_INTERRUPT,
+ ret = request_irq(res->start, s3c24xx_i2c_irq, IRQF_DISABLED,
pdev->name, i2c);
if (ret != 0) {
if (otg_dev)
status = request_irq(otg_dev->resource[1].start, omap_otg_irq,
- SA_INTERRUPT, DRIVER_NAME, isp);
+ IRQF_DISABLED, DRIVER_NAME, isp);
else
status = -ENODEV;
}
status = request_irq(isp->irq, isp1301_irq,
- SA_SAMPLE_RANDOM, DRIVER_NAME, isp);
+ IRQF_SAMPLE_RANDOM, DRIVER_NAME, isp);
if (status < 0) {
dev_dbg(&i2c->dev, "can't get IRQ %d, err %d\n",
isp->irq, status);
}
#ifdef CONFIG_ARM
- irqflags = SA_SAMPLE_RANDOM | SA_TRIGGER_LOW;
+ irqflags = IRQF_SAMPLE_RANDOM | IRQF_TRIGGER_LOW;
if (machine_is_omap_h2()) {
tps->model = TPS65010;
omap_cfg_reg(W4_GPIO58);
tps->irq = OMAP_GPIO_IRQ(58);
omap_request_gpio(58);
omap_set_gpio_direction(58, 1);
- irqflags |= SA_TRIGGER_FALLING;
+ irqflags |= IRQF_TRIGGER_FALLING;
}
if (machine_is_omap_osk()) {
tps->model = TPS65010;
tps->irq = OMAP_GPIO_IRQ(OMAP_MPUIO(1));
omap_request_gpio(OMAP_MPUIO(1));
omap_set_gpio_direction(OMAP_MPUIO(1), 1);
- irqflags |= SA_TRIGGER_FALLING;
+ irqflags |= IRQF_TRIGGER_FALLING;
}
if (machine_is_omap_h3()) {
tps->model = TPS65013;
// FIXME set up this board's IRQ ...
}
#else
- irqflags = SA_SAMPLE_RANDOM;
+ irqflags = IRQF_SAMPLE_RANDOM;
#endif
if (tps->irq > 0) {
"transferred\n", pc->actually_transferred);
clear_bit(PC_DMA_IN_PROGRESS, &pc->flags);
- local_irq_enable();
+ local_irq_enable_in_hardirq();
if (status.b.check || test_bit(PC_DMA_ERROR, &pc->flags)) {
/* Error detected */
u8 stat = hwif->INB(IDE_STATUS_REG);
int retries = 10;
- local_irq_enable();
+ local_irq_enable_in_hardirq();
if ((stat & DRQ_STAT) && args && args[3]) {
u8 io_32bit = drive->io_32bit;
drive->io_32bit = 0;
if (masked_irq != IDE_NO_IRQ && hwif->irq != masked_irq)
disable_irq_nosync(hwif->irq);
spin_unlock(&ide_lock);
- local_irq_enable();
+ local_irq_enable_in_hardirq();
/* allow other IRQs while we start this request */
startstop = start_request(drive, rq);
spin_lock_irq(&ide_lock);
spin_unlock(&ide_lock);
if (drive->unmask)
- local_irq_enable();
+ local_irq_enable_in_hardirq();
/* service this interrupt, may set handler for next interrupt */
startstop = handler(drive);
spin_lock_irq(&ide_lock);
{
unsigned long flags;
ide_hwgroup_t *hwgroup = HWGROUP(drive);
- DECLARE_COMPLETION(wait);
+ DECLARE_COMPLETION_ONSTACK(wait);
int where = ELEVATOR_INSERT_BACK, err;
int must_wait = (action == ide_wait || action == ide_head_wait);
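
DECLARE_COMPLETION_ONSTACK gives an on-stack completion its own lockdep class instead of sharing the single static class that DECLARE_COMPLETION would use. A self-contained sketch of the idiom (the completer would normally be another context):

	#include <linux/completion.h>

	static void example_wait(void)
	{
	        DECLARE_COMPLETION_ONSTACK(done);

	        /* normally &done is handed off and completed elsewhere; completing
	         * it here just keeps the sketch from blocking */
	        complete(&done);
	        wait_for_completion(&done);
	}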
* and irq serialization situations. This is somewhat complex because
* it handles static as well as dynamic (PCMCIA) IDE interfaces.
*
- * The SA_INTERRUPT in sa_flags means ide_intr() is always entered with
+ * The IRQF_DISABLED in sa_flags means ide_intr() is always entered with
* interrupts completely disabled. This can be bad for interrupt latency,
* but anything else has led to problems on some machines. We re-enable
* interrupts as much as we can safely do in most places.
* Allocate the irq, if not already obtained for another hwif
*/
if (!match || match->irq != hwif->irq) {
- int sa = SA_INTERRUPT;
+ int sa = IRQF_DISABLED;
#if defined(__mc68000__) || defined(CONFIG_APUS)
- sa = SA_SHIRQ;
+ sa = IRQF_SHARED;
#endif /* __mc68000__ || CONFIG_APUS */
if (IDE_CHIPSET_IS_PCI(hwif->chipset)) {
- sa = SA_SHIRQ;
+ sa = IRQF_SHARED;
#ifndef CONFIG_IDEPCI_SHARE_IRQ
- sa |= SA_INTERRUPT;
+ sa |= IRQF_DISABLED;
#endif /* CONFIG_IDEPCI_SHARE_IRQ */
}
ide_hwif_t *hwif = HWIF(drive);
u8 stat;
- local_irq_enable();
+ local_irq_enable_in_hardirq();
if (!OK_STAT(stat = hwif->INB(IDE_STATUS_REG),READY_STAT,BAD_STAT)) {
return ide_error(drive, "task_no_data_intr", stat);
/* calls ide_end_drive_cmd */
};
/*
- * This is the hard disk IRQ description. The SA_INTERRUPT in sa_flags
+ * This is the hard disk IRQ description. The IRQF_DISABLED in sa_flags
* means we run the IRQ-handler with interrupts disabled: this is bad for
* interrupt latency, but anything else has led to problems on some
* machines.
p->cyl, p->head, p->sect);
}
- if (request_irq(HD_IRQ, hd_interrupt, SA_INTERRUPT, "hd", NULL)) {
+ if (request_irq(HD_IRQ, hd_interrupt, IRQF_DISABLED, "hd", NULL)) {
printk("hd: unable to get IRQ%d for the hard disk driver\n",
HD_IRQ);
goto out1;
*/
static DEFINE_MUTEX(host_num_alloc);
+/*
+ * The pending_packet_queue is special in that it's processed
+ * from hardirq context too (such as hpsb_bus_reset()). Hence
+ * split the lock class from the usual networking skb-head
+ * lock class by using a separate key for it:
+ */
+static struct lock_class_key pending_packet_queue_key;
+
struct hpsb_host *hpsb_alloc_host(struct hpsb_host_driver *drv, size_t extra,
struct device *dev)
{
h->driver = drv;
skb_queue_head_init(&h->pending_packet_queue);
+ lockdep_set_class(&h->pending_packet_queue.lock,
+ &pending_packet_queue_key);
INIT_LIST_HEAD(&h->addr_space);
for (i = 2; i < 16; i++)
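
The hunk above gives pending_packet_queue its own lock class because its lock is also taken from hardirq context, unlike ordinary skb queues. A minimal sketch of the same annotation with hypothetical names:

	#include <linux/lockdep.h>
	#include <linux/skbuff.h>

	static struct lock_class_key example_queue_key;

	static void example_init_queue(struct sk_buff_head *q)
	{
	        skb_queue_head_init(q);
	        /* move this queue's lock out of the generic skb-head lock class */
	        lockdep_set_class(&q->lock, &example_queue_key);
	}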
spin_lock_init(&ohci->event_lock);
/*
- * interrupts are disabled, all right, but... due to SA_SHIRQ we
+ * interrupts are disabled, all right, but... due to IRQF_SHARED we
* might get called anyway. We'll see no event, of course, but
* we need to get to that "no event", so enough should be initialized
* by that point.
*/
- if (request_irq(dev->irq, ohci_irq_handler, SA_SHIRQ,
+ if (request_irq(dev->irq, ohci_irq_handler, IRQF_SHARED,
OHCI1394_DRIVER_NAME, ohci))
FAIL(-ENOMEM, "Failed to allocate shared interrupt %d", dev->irq);
sprintf (irq_buf, "%d", dev->irq);
- if (!request_irq(dev->irq, lynx_irq_handler, SA_SHIRQ,
+ if (!request_irq(dev->irq, lynx_irq_handler, IRQF_SHARED,
PCILYNX_DRIVER_NAME, lynx)) {
PRINT(KERN_INFO, lynx->id, "allocated interrupt %s", irq_buf);
lynx->state = have_intr;
"continuing anyway\n");
/*
- * set up our interrupt handler; SA_SHIRQ probably not needed,
+ * set up our interrupt handler; IRQF_SHARED probably not needed,
* since MSI interrupts shouldn't be shared but won't hurt for now.
* check 0 irq after we return from chip-specific bus setup, since
* that can affect this due to setup
ipath_dev_err(dd, "irq is 0, BIOS error? Interrupts won't "
"work\n");
else {
- ret = request_irq(pdev->irq, ipath_intr, SA_SHIRQ,
+ ret = request_irq(pdev->irq, ipath_intr, IRQF_SHARED,
IPATH_DRV_NAME, dd);
if (ret) {
ipath_dev_err(dd, "Couldn't setup irq handler, "
mthca_is_memfree(dev) ?
mthca_arbel_interrupt :
mthca_tavor_interrupt,
- SA_SHIRQ, DRV_NAME, dev);
+ IRQF_SHARED, DRV_NAME, dev);
if (err)
goto err_out_cmd;
dev->eq_table.have_irq = 1;
}
static int
-iscsi_iser_conn_set_param(struct iscsi_cls_conn *cls_conn,
- enum iscsi_param param, uint32_t value)
+iscsi_iser_set_param(struct iscsi_cls_conn *cls_conn,
+ enum iscsi_param param, char *buf, int buflen)
{
- struct iscsi_conn *conn = cls_conn->dd_data;
- struct iscsi_session *session = conn->session;
-
- spin_lock_bh(&session->lock);
- if (conn->c_stage != ISCSI_CONN_INITIAL_STAGE &&
- conn->stop_stage != STOP_CONN_RECOVER) {
- printk(KERN_ERR "iscsi_iser: can not change parameter [%d]\n",
- param);
- spin_unlock_bh(&session->lock);
- return 0;
- }
- spin_unlock_bh(&session->lock);
+ int value;
switch (param) {
case ISCSI_PARAM_MAX_RECV_DLENGTH:
/* TBD */
break;
- case ISCSI_PARAM_MAX_XMIT_DLENGTH:
- conn->max_xmit_dlength = value;
- break;
case ISCSI_PARAM_HDRDGST_EN:
+ sscanf(buf, "%d", &value);
if (value) {
printk(KERN_ERR "DataDigest wasn't negotiated to None");
return -EPROTO;
}
break;
case ISCSI_PARAM_DATADGST_EN:
+ sscanf(buf, "%d", &value);
if (value) {
printk(KERN_ERR "DataDigest wasn't negotiated to None");
return -EPROTO;
}
break;
- case ISCSI_PARAM_INITIAL_R2T_EN:
- session->initial_r2t_en = value;
- break;
- case ISCSI_PARAM_IMM_DATA_EN:
- session->imm_data_en = value;
- break;
- case ISCSI_PARAM_FIRST_BURST:
- session->first_burst = value;
- break;
- case ISCSI_PARAM_MAX_BURST:
- session->max_burst = value;
- break;
- case ISCSI_PARAM_PDU_INORDER_EN:
- session->pdu_inorder_en = value;
- break;
- case ISCSI_PARAM_DATASEQ_INORDER_EN:
- session->dataseq_inorder_en = value;
- break;
- case ISCSI_PARAM_ERL:
- session->erl = value;
- break;
case ISCSI_PARAM_IFMARKER_EN:
+ sscanf(buf, "%d", &value);
if (value) {
printk(KERN_ERR "IFMarker wasn't negotiated to No");
return -EPROTO;
}
break;
case ISCSI_PARAM_OFMARKER_EN:
+ sscanf(buf, "%d", &value);
if (value) {
printk(KERN_ERR "OFMarker wasn't negotiated to No");
return -EPROTO;
}
break;
default:
- break;
- }
-
- return 0;
-}
-
-static int
-iscsi_iser_session_get_param(struct iscsi_cls_session *cls_session,
- enum iscsi_param param, uint32_t *value)
-{
- struct Scsi_Host *shost = iscsi_session_to_shost(cls_session);
- struct iscsi_session *session = iscsi_hostdata(shost->hostdata);
-
- switch (param) {
- case ISCSI_PARAM_INITIAL_R2T_EN:
- *value = session->initial_r2t_en;
- break;
- case ISCSI_PARAM_MAX_R2T:
- *value = session->max_r2t;
- break;
- case ISCSI_PARAM_IMM_DATA_EN:
- *value = session->imm_data_en;
- break;
- case ISCSI_PARAM_FIRST_BURST:
- *value = session->first_burst;
- break;
- case ISCSI_PARAM_MAX_BURST:
- *value = session->max_burst;
- break;
- case ISCSI_PARAM_PDU_INORDER_EN:
- *value = session->pdu_inorder_en;
- break;
- case ISCSI_PARAM_DATASEQ_INORDER_EN:
- *value = session->dataseq_inorder_en;
- break;
- case ISCSI_PARAM_ERL:
- *value = session->erl;
- break;
- case ISCSI_PARAM_IFMARKER_EN:
- *value = 0;
- break;
- case ISCSI_PARAM_OFMARKER_EN:
- *value = 0;
- break;
- default:
- return ISCSI_ERR_PARAM_NOT_FOUND;
- }
-
- return 0;
-}
-
-static int
-iscsi_iser_conn_get_param(struct iscsi_cls_conn *cls_conn,
- enum iscsi_param param, uint32_t *value)
-{
- struct iscsi_conn *conn = cls_conn->dd_data;
-
- switch(param) {
- case ISCSI_PARAM_MAX_RECV_DLENGTH:
- *value = conn->max_recv_dlength;
- break;
- case ISCSI_PARAM_MAX_XMIT_DLENGTH:
- *value = conn->max_xmit_dlength;
- break;
- case ISCSI_PARAM_HDRDGST_EN:
- *value = 0;
- break;
- case ISCSI_PARAM_DATADGST_EN:
- *value = 0;
- break;
- /*case ISCSI_PARAM_TARGET_RECV_DLENGTH:
- *value = conn->target_recv_dlength;
- break;
- case ISCSI_PARAM_INITIATOR_RECV_DLENGTH:
- *value = conn->initiator_recv_dlength;
- break;*/
- default:
- return ISCSI_ERR_PARAM_NOT_FOUND;
+ return iscsi_set_param(cls_conn, param, buf, buflen);
}
return 0;
}
-
static void
iscsi_iser_conn_get_stats(struct iscsi_cls_conn *cls_conn, struct iscsi_stats *stats)
{
ISCSI_FIRST_BURST |
ISCSI_MAX_BURST |
ISCSI_PDU_INORDER_EN |
- ISCSI_DATASEQ_INORDER_EN,
+ ISCSI_DATASEQ_INORDER_EN |
+ ISCSI_EXP_STATSN |
+ ISCSI_PERSISTENT_PORT |
+ ISCSI_PERSISTENT_ADDRESS |
+ ISCSI_TARGET_NAME |
+ ISCSI_TPGT,
.host_template = &iscsi_iser_sht,
.conndata_size = sizeof(struct iscsi_conn),
.max_lun = ISCSI_ISER_MAX_LUN,
.create_conn = iscsi_iser_conn_create,
.bind_conn = iscsi_iser_conn_bind,
.destroy_conn = iscsi_iser_conn_destroy,
- .set_param = iscsi_iser_conn_set_param,
- .get_conn_param = iscsi_iser_conn_get_param,
- .get_session_param = iscsi_iser_session_get_param,
+ .set_param = iscsi_iser_set_param,
+ .get_conn_param = iscsi_conn_get_param,
+ .get_session_param = iscsi_session_get_param,
.start_conn = iscsi_iser_conn_start,
.stop_conn = iscsi_conn_stop,
/* these are called as part of conn recovery */
for (i = 0; i < CORGI_KEY_SENSE_NUM; i++) {
pxa_gpio_mode(CORGI_GPIO_KEY_SENSE(i) | GPIO_IN);
if (request_irq(CORGI_IRQ_GPIO_KEY_SENSE(i), corgikbd_interrupt,
- SA_INTERRUPT | SA_TRIGGER_RISING,
+ IRQF_DISABLED | IRQF_TRIGGER_RISING,
"corgikbd", corgikbd))
printk(KERN_WARNING "corgikbd: Can't get IRQ: %d!\n", i);
}
for (i = 0; i < SPITZ_KEY_SENSE_NUM; i++) {
pxa_gpio_mode(spitz_senses[i] | GPIO_IN);
if (request_irq(IRQ_GPIO(spitz_senses[i]), spitzkbd_interrupt,
- SA_INTERRUPT|SA_TRIGGER_RISING,
+ IRQF_DISABLED|IRQF_TRIGGER_RISING,
"Spitzkbd Sense", spitzkbd))
printk(KERN_WARNING "spitzkbd: Can't get Sense IRQ: %d!\n", i);
}
pxa_gpio_mode(SPITZ_GPIO_SWB | GPIO_IN);
request_irq(SPITZ_IRQ_GPIO_SYNC, spitzkbd_interrupt,
- SA_INTERRUPT | SA_TRIGGER_RISING | SA_TRIGGER_FALLING,
+ IRQF_DISABLED | IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
"Spitzkbd Sync", spitzkbd);
request_irq(SPITZ_IRQ_GPIO_ON_KEY, spitzkbd_interrupt,
- SA_INTERRUPT | SA_TRIGGER_RISING | SA_TRIGGER_FALLING,
+ IRQF_DISABLED | IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
"Spitzkbd PwrOn", spitzkbd);
request_irq(SPITZ_IRQ_GPIO_SWA, spitzkbd_hinge_isr,
- SA_INTERRUPT | SA_TRIGGER_RISING | SA_TRIGGER_FALLING,
+ IRQF_DISABLED | IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
"Spitzkbd SWA", spitzkbd);
request_irq(SPITZ_IRQ_GPIO_SWB, spitzkbd_hinge_isr,
- SA_INTERRUPT | SA_TRIGGER_RISING | SA_TRIGGER_FALLING,
+ IRQF_DISABLED | IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
"Spitzkbd SWB", spitzkbd);
request_irq(SPITZ_IRQ_GPIO_AK_INT, spitzkbd_hinge_isr,
- SA_INTERRUPT | SA_TRIGGER_RISING | SA_TRIGGER_FALLING,
+ IRQF_DISABLED | IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
"Spitzkbd HP", spitzkbd);
printk(KERN_INFO "input: Spitz Keyboard Registered\n");
input_dev->event = ixp4xx_spkr_event;
err = request_irq(IRQ_IXP4XX_TIMER2, &ixp4xx_spkr_interrupt,
- SA_INTERRUPT | SA_TIMER, "ixp4xx-beeper", (void *) dev->id);
+ IRQF_DISABLED | IRQF_TIMER, "ixp4xx-beeper", (void *) dev->id);
if (err)
goto err_free_device;
rpcmouse_lastx = (short) iomd_readl(IOMD_MOUSEX);
rpcmouse_lasty = (short) iomd_readl(IOMD_MOUSEY);
- if (request_irq(IRQ_VSYNCPULSE, rpcmouse_irq, SA_SHIRQ, "rpcmouse", rpcmouse_dev)) {
+ if (request_irq(IRQ_VSYNCPULSE, rpcmouse_irq, IRQF_SHARED, "rpcmouse", rpcmouse_dev)) {
printk(KERN_ERR "rpcmouse: unable to allocate VSYNC interrupt\n");
input_free_device(rpcmouse_dev);
return -EBUSY;
serio->dev.parent = &dev->dev;
ret = -EBUSY;
- if (request_irq(dev->irq, gscps2_interrupt, SA_SHIRQ, ps2port->port->name, ps2port))
+ if (request_irq(dev->irq, gscps2_interrupt, IRQF_SHARED, ps2port->port->name, ps2port))
goto fail_miserably;
if (ps2port->id != GSC_ID_KEYBOARD && ps2port->id != GSC_ID_MOUSE) {
},
{},
};
-MODULE_DEVICE_TABLE(of, i8042_match);
+MODULE_DEVICE_TABLE(of, sparc_i8042_match);
static struct of_platform_driver sparc_i8042_driver = {
.name = "i8042",
return 0;
if (request_irq(port->irq, i8042_interrupt,
- SA_SHIRQ, "i8042", i8042_request_irq_cookie)) {
+ IRQF_SHARED, "i8042", i8042_request_irq_cookie)) {
printk(KERN_ERR "i8042.c: Can't get irq %d for %s, unregistering the port.\n", port->irq, port->name);
goto irq_fail;
}
*/
if (request_irq(i8042_ports[I8042_AUX_PORT_NO].irq, i8042_interrupt,
- SA_SHIRQ, "i8042", &i8042_check_aux_cookie))
+ IRQF_SHARED, "i8042", &i8042_check_aux_cookie))
return -1;
free_irq(i8042_ports[I8042_AUX_PORT_NO].irq, &i8042_check_aux_cookie);
return -1;
}
- mutex_lock(&ps2dev->cmd_mutex);
+ mutex_lock_nested(&ps2dev->cmd_mutex, SINGLE_DEPTH_NESTING);
serio_pause_rx(ps2dev->serio);
ps2dev->flags = command == PS2_CMD_GETID ? PS2_FLAG_WAITID : 0;
outb(PS2_CTRL_ENABLE, ps2if->base);
pcips2_flush_input(ps2if);
- ret = request_irq(ps2if->dev->irq, pcips2_interrupt, SA_SHIRQ,
+ ret = request_irq(ps2if->dev->irq, pcips2_interrupt, IRQF_SHARED,
"pcips2", ps2if);
if (ret == 0)
val = PS2_CTRL_ENABLE | PS2_CTRL_RXIRQ;
ts->last_msg = m;
- if (request_irq(spi->irq, ads7846_irq, SA_TRIGGER_FALLING,
+ if (request_irq(spi->irq, ads7846_irq, IRQF_TRIGGER_FALLING,
spi->dev.driver->name, ts)) {
dev_dbg(&spi->dev, "irq %d busy?\n", spi->irq);
err = -EBUSY;
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/slab.h>
-//#include <asm/irq.h>
+#include <linux/irq.h>
#include <asm/arch/sharpsl.h>
#include <asm/arch/hardware.h>
corgi_ssp_ads7846_putget((5u << ADSCTRL_ADR_SH) | ADSCTRL_STS);
mdelay(5);
- if (request_irq(corgi_ts->irq_gpio, ts_interrupt, SA_INTERRUPT, "ts", corgi_ts)) {
+ if (request_irq(corgi_ts->irq_gpio, ts_interrupt, IRQF_DISABLED, "ts", corgi_ts)) {
err = -EBUSY;
goto fail;
}
set_GPIO_IRQ_edge(GPIO_BITSY_NPOWER_BUTTON, GPIO_RISING_EDGE);
if (request_irq(IRQ_GPIO_BITSY_ACTION_BUTTON, action_button_handler,
- SA_SHIRQ | SA_INTERRUPT, "h3600_action", &ts->dev)) {
+ IRQF_SHARED | IRQF_DISABLED, "h3600_action", &ts->dev)) {
printk(KERN_ERR "h3600ts.c: Could not allocate Action Button IRQ!\n");
err = -EBUSY;
goto fail2;
}
if (request_irq(IRQ_GPIO_BITSY_NPOWER_BUTTON, npower_button_handler,
- SA_SHIRQ | SA_INTERRUPT, "h3600_suspend", &ts->dev)) {
+ IRQF_SHARED | IRQF_DISABLED, "h3600_suspend", &ts->dev)) {
printk(KERN_ERR "h3600ts.c: Could not allocate Power Button IRQ!\n");
err = -EBUSY;
goto fail3;
input_register_device(hp680_ts_dev);
if (request_irq(HP680_TS_IRQ, hp680_ts_interrupt,
- SA_INTERRUPT, MODNAME, 0) < 0) {
+ IRQF_DISABLED, MODNAME, 0) < 0) {
printk(KERN_ERR "hp680_touchscreen.c: Can't allocate irq %d\n",
HP680_TS_IRQ);
input_unregister_device(hp680_ts_dev);
b1_reset(card->port);
b1_getrevision(card);
- retval = request_irq(card->irq, b1_interrupt, SA_SHIRQ, card->name, card);
+ retval = request_irq(card->irq, b1_interrupt, IRQF_SHARED, card->name, card);
if (retval) {
printk(KERN_ERR "b1pci: unable to get IRQ %d.\n", card->irq);
retval = -EBUSY;
b1dma_reset(card);
b1_getrevision(card);
- retval = request_irq(card->irq, b1dma_interrupt, SA_SHIRQ, card->name, card);
+ retval = request_irq(card->irq, b1dma_interrupt, IRQF_SHARED, card->name, card);
if (retval) {
printk(KERN_ERR "b1pci: unable to get IRQ %d.\n",
card->irq);
card->irq = irq;
card->cardtype = cardtype;
- retval = request_irq(card->irq, b1_interrupt, SA_SHIRQ, card->name, card);
+ retval = request_irq(card->irq, b1_interrupt, IRQF_SHARED, card->name, card);
if (retval) {
printk(KERN_ERR "b1pcmcia: unable to get IRQ %d.\n",
card->irq);
}
c4_reset(card);
- retval = request_irq(card->irq, c4_interrupt, SA_SHIRQ, card->name, card);
+ retval = request_irq(card->irq, c4_interrupt, IRQF_SHARED, card->name, card);
if (retval) {
printk(KERN_ERR "c4: unable to get IRQ %d.\n",card->irq);
retval = -EBUSY;
}
b1dma_reset(card);
- retval = request_irq(card->irq, b1dma_interrupt, SA_SHIRQ, card->name, card);
+ retval = request_irq(card->irq, b1dma_interrupt, IRQF_SHARED, card->name, card);
if (retval) {
printk(KERN_ERR "t1pci: unable to get IRQ %d.\n", card->irq);
retval = -EBUSY;
int diva_os_register_irq(void *context, byte irq, const char *name)
{
int result = request_irq(irq, diva_os_irq_wrapper,
- SA_INTERRUPT | SA_SHIRQ, name, context);
+ IRQF_DISABLED | IRQF_SHARED, name, context);
return (result);
}
cs->BC_Write_Reg = &WriteHSCX;
cs->BC_Send_Data = &hscx_fill_fifo;
cs->cardmsg = &AVM_card_msg;
- cs->irq_flags = SA_SHIRQ;
+ cs->irq_flags = IRQF_SHARED;
cs->irq_func = &avm_a1p_interrupt;
ISACVersion(cs, "AVM A1 PCMCIA:");
printk(KERN_WARNING "FritzPCI: No PCI card found\n");
return(0);
}
- cs->irq_flags |= SA_SHIRQ;
+ cs->irq_flags |= IRQF_SHARED;
#else
printk(KERN_WARNING "FritzPCI: NO_PCI_BIOS\n");
return (0);
cs->BC_Send_Data = &jade_fill_fifo;
cs->cardmsg = &BKM_card_msg;
cs->irq_func = &bkm_interrupt;
- cs->irq_flags |= SA_SHIRQ;
+ cs->irq_flags |= IRQF_SHARED;
ISACVersion(cs, "Telekom A4T:");
/* Jade version */
JadeVersion(cs, "Telekom A4T:");
pci_ioaddr5 &= PCI_BASE_ADDRESS_IO_MASK;
/* Take over */
cs->irq = pci_irq;
- cs->irq_flags |= SA_SHIRQ;
+ cs->irq_flags |= IRQF_SHARED;
/* pci_ioaddr1 is unique to all subdevices */
/* pci_ioaddr2 is for the fourth subdevice only */
/* pci_ioaddr3 is for the third subdevice only */
printk(KERN_WARNING "Diva: No IO-Adr for PCI card found\n");
return(0);
}
- cs->irq_flags |= SA_SHIRQ;
+ cs->irq_flags |= IRQF_SHARED;
#else
printk(KERN_WARNING "Diva: cfgreg 0 and NO_PCI_BIOS\n");
printk(KERN_WARNING "Diva: unable to config DIVA PCI\n");
*** ***/
/* Config-Register (Read) */
-#define ELSA_TIMER_RUN 0x02 /* Bit 1 of the config register */
-#define ELSA_TIMER_RUN_PCC8 0x01 /* Bit 0 of the config register on PCC */
+#define ELIRQF_TIMER_RUN 0x02 /* Bit 1 of the config register */
+#define ELIRQF_TIMER_RUN_PCC8 0x01 /* Bit 0 of the config register on PCC */
#define ELSA_IRQ_IDX 0x38 /* Bits 3,4,5 of the config register */
#define ELSA_IRQ_IDX_PCC8 0x30 /* Bits 4,5 of the config register */
#define ELSA_IRQ_IDX_PC 0x0c /* Bits 2,3 of the config register */
#define ELSA_S0_POWER_BAD 0x08 /* Bit 3: S0 bus voltage missing */
/* Status Flags */
-#define ELSA_TIMER_AKTIV 1
+#define ELIRQF_TIMER_AKTIV 1
#define ELSA_BAD_PWR 2
#define ELSA_ASSIGN 4
v = bytein(cs->hw.elsa.cfg);
if ((cs->subtyp == ELSA_QS1000) || (cs->subtyp == ELSA_QS3000))
- return (0 == (v & ELSA_TIMER_RUN));
+ return (0 == (v & ELIRQF_TIMER_RUN));
else if (cs->subtyp == ELSA_PCC8)
- return (v & ELSA_TIMER_RUN_PCC8);
- return (v & ELSA_TIMER_RUN);
+ return (v & ELIRQF_TIMER_RUN_PCC8);
+ return (v & ELIRQF_TIMER_RUN);
}
/*
* fast interrupt HSCX stuff goes here
writereg(cs->hw.elsa.ale, cs->hw.elsa.hscx, HSCX_MASK, 0xFF);
writereg(cs->hw.elsa.ale, cs->hw.elsa.hscx, HSCX_MASK + 0x40, 0xFF);
writereg(cs->hw.elsa.ale, cs->hw.elsa.isac, ISAC_MASK, 0xFF);
- if (cs->hw.elsa.status & ELSA_TIMER_AKTIV) {
+ if (cs->hw.elsa.status & ELIRQF_TIMER_AKTIV) {
if (!TimerRun(cs)) {
/* Timer Restart */
byteout(cs->hw.elsa.timer, 0);
spin_lock_irqsave(&cs->lock, flags);
cs->hw.elsa.counter = 0;
cs->hw.elsa.ctrl_reg |= ELSA_ENA_TIMER_INT;
- cs->hw.elsa.status |= ELSA_TIMER_AKTIV;
+ cs->hw.elsa.status |= ELIRQF_TIMER_AKTIV;
byteout(cs->hw.elsa.ctrl, cs->hw.elsa.ctrl_reg);
byteout(cs->hw.elsa.timer, 0);
spin_unlock_irqrestore(&cs->lock, flags);
spin_lock_irqsave(&cs->lock, flags);
cs->hw.elsa.ctrl_reg &= ~ELSA_ENA_TIMER_INT;
byteout(cs->hw.elsa.ctrl, cs->hw.elsa.ctrl_reg);
- cs->hw.elsa.status &= ~ELSA_TIMER_AKTIV;
+ cs->hw.elsa.status &= ~ELIRQF_TIMER_AKTIV;
spin_unlock_irqrestore(&cs->lock, flags);
printk(KERN_INFO "Elsa: %d timer tics in 110 msek\n",
cs->hw.elsa.counter);
cs->hw.elsa.timer = 0;
cs->hw.elsa.trig = 0;
cs->hw.elsa.ctrl = 0;
- cs->irq_flags |= SA_SHIRQ;
+ cs->irq_flags |= IRQF_SHARED;
printk(KERN_INFO
"Elsa: %s defined at %#lx IRQ %d\n",
Elsa_Types[cs->subtyp],
test_and_set_bit(HW_IPAC, &cs->HW_Flags);
cs->hw.elsa.timer = 0;
cs->hw.elsa.trig = 0;
- cs->irq_flags |= SA_SHIRQ;
+ cs->irq_flags |= IRQF_SHARED;
printk(KERN_INFO
"Elsa: %s defined at %#lx/0x%x IRQ %d\n",
Elsa_Types[cs->subtyp],
cs->BC_Send_Data = &netjet_fill_dma;
cs->cardmsg = &enpci_card_msg;
cs->irq_func = &enpci_interrupt;
- cs->irq_flags |= SA_SHIRQ;
+ cs->irq_flags |= IRQF_SHARED;
return (1);
}
cs->hw.gazel.hscxfifo[0] = cs->hw.gazel.hscx[0];
cs->hw.gazel.hscxfifo[1] = cs->hw.gazel.hscx[1];
cs->irq = pci_irq;
- cs->irq_flags |= SA_SHIRQ;
+ cs->irq_flags |= IRQF_SHARED;
switch (seekcard) {
case PCI_DEVICE_ID_PLX_R685:
INIT_WORK(&hw->tqueue, (void *) (void *) hfc4s8s_bh, hw);
if (request_irq
- (hw->irq, hfc4s8s_interrupt, SA_SHIRQ, hw->card_name, hw)) {
+ (hw->irq, hfc4s8s_interrupt, IRQF_SHARED, hw->card_name, hw)) {
printk(KERN_INFO
"HFC-4S/8S: unable to alloc irq %d, card ignored\n",
hw->irq);
cs->BC_Read_Reg = NULL;
cs->BC_Write_Reg = NULL;
cs->irq_func = &hfcpci_interrupt;
- cs->irq_flags |= SA_SHIRQ;
+ cs->irq_flags |= IRQF_SHARED;
cs->hw.hfcpci.timer.function = (void *) hfcpci_Timer;
cs->hw.hfcpci.timer.data = (long) cs;
init_timer(&cs->hw.hfcpci.timer);
switch (adapter->type) {
case AVM_FRITZ_PCIV2:
- retval = request_irq(adapter->irq, fcpci2_irq, SA_SHIRQ,
+ retval = request_irq(adapter->irq, fcpci2_irq, IRQF_SHARED,
"fcpcipnp", adapter);
break;
case AVM_FRITZ_PCI:
- retval = request_irq(adapter->irq, fcpci_irq, SA_SHIRQ,
+ retval = request_irq(adapter->irq, fcpci_irq, IRQF_SHARED,
"fcpcipnp", adapter);
break;
case AVM_FRITZ_PNP:
printk(KERN_WARNING "Niccy: No PCI card found\n");
return(0);
}
- cs->irq_flags |= SA_SHIRQ;
+ cs->irq_flags |= IRQF_SHARED;
cs->hw.niccy.isac = pci_ioaddr + ISAC_PCI_DATA;
cs->hw.niccy.isac_ale = pci_ioaddr + ISAC_PCI_ADDR;
cs->hw.niccy.hscx = pci_ioaddr + HSCX_PCI_DATA;
setup_isac(cs);
cs->cardmsg = &NETjet_S_card_msg;
cs->irq_func = &netjet_s_interrupt;
- cs->irq_flags |= SA_SHIRQ;
+ cs->irq_flags |= IRQF_SHARED;
ISACVersion(cs, "NETjet-S:");
return (1);
}
cs->BC_Send_Data = &netjet_fill_dma;
cs->cardmsg = &NETjet_U_card_msg;
cs->irq_func = &netjet_u_interrupt;
- cs->irq_flags |= SA_SHIRQ;
+ cs->irq_flags |= IRQF_SHARED;
ICCVersion(cs, "NETspider-U:");
return (1);
}
printk(KERN_WARNING "Sedlbauer: No PCI card found\n");
return(0);
}
- cs->irq_flags |= SA_SHIRQ;
+ cs->irq_flags |= IRQF_SHARED;
cs->hw.sedl.bus = SEDL_BUS_PCI;
sub_vendor_id = dev_sedl->subsystem_vendor;
sub_id = dev_sedl->subsystem_device;
cs->hw.sedl.hscx = cs->hw.sedl.cfg_reg + SEDL_HSCX_PCMCIA_HSCX;
cs->hw.sedl.reset_on = cs->hw.sedl.cfg_reg + SEDL_HSCX_PCMCIA_RESET;
cs->hw.sedl.reset_off = cs->hw.sedl.cfg_reg + SEDL_HSCX_PCMCIA_RESET;
- cs->irq_flags |= SA_SHIRQ;
+ cs->irq_flags |= IRQF_SHARED;
} else {
cs->hw.sedl.adr = cs->hw.sedl.cfg_reg + SEDL_HSCX_ISA_ADR;
cs->hw.sedl.isac = cs->hw.sedl.cfg_reg + SEDL_HSCX_ISA_ISAC;
cs->hw.teles3.hscx[1] + 96);
return (0);
}
- cs->irq_flags |= SA_SHIRQ; /* cardbus can share */
+ cs->irq_flags |= IRQF_SHARED; /* cardbus can share */
} else {
if (cs->hw.teles3.cfg_reg) {
if (cs->typ == ISDN_CTYPE_COMPAQ_ISA) {
cs->BC_Send_Data = &hscx_fill_fifo;
cs->cardmsg = &TelesPCI_card_msg;
cs->irq_func = &telespci_interrupt;
- cs->irq_flags |= SA_SHIRQ;
+ cs->irq_flags |= IRQF_SHARED;
ISACVersion(cs, "TelesPCI:");
if (HscxVersion(cs, "TelesPCI:")) {
printk(KERN_WARNING
cs->BC_Send_Data = &W6692B_fill_fifo;
cs->cardmsg = &w6692_card_msg;
cs->irq_func = &W6692_interrupt;
- cs->irq_flags |= SA_SHIRQ;
+ cs->irq_flags |= IRQF_SHARED;
W6692Version(cs, "W6692:");
printk(KERN_INFO "W6692 ISTA=0x%X\n", ReadW6692(cs, W_ISTA));
printk(KERN_INFO "W6692 IMASK=0x%X\n", ReadW6692(cs, W_IMASK));
}
ergo_stopcard(card); /* disable interrupts */
- if (request_irq(card->irq, ergo_interrupt, SA_SHIRQ, "HYSDN", card)) {
+ if (request_irq(card->irq, ergo_interrupt, IRQF_SHARED, "HYSDN", card)) {
ergo_releasehardware(card); /* return the acquired hardware */
return (-1);
}
*/
sc_adapter[cinst]->interrupt = irq[b];
if (request_irq(sc_adapter[cinst]->interrupt, interrupt_handler,
- SA_INTERRUPT, interface->id, NULL))
+ IRQF_DISABLED, interface->id, NULL))
{
kfree(sc_adapter[cinst]->channel);
indicate_status(cinst, ISDN_STAT_UNLOAD, 0, NULL); /* Fix me */
{
struct device_node *adbs;
struct resource r;
+ unsigned int irq;
adbs = find_compatible_devices("adb", "chrp,adb0");
if (adbs == 0)
return -ENXIO;
-#if 0
- { int i = 0;
-
- printk("macio_adb_init: node = %p, addrs =", adbs->node);
- while(!of_address_to_resource(adbs, i, &r))
- printk(" %x(%x)", r.start, r.end - r.start);
- printk(", intrs =");
- for (i = 0; i < adbs->n_intrs; ++i)
- printk(" %x", adbs->intrs[i].line);
- printk("\n"); }
-#endif
if (of_address_to_resource(adbs, 0, &r))
return -ENXIO;
adb = ioremap(r.start, sizeof(struct adb_regs));
out_8(&adb->active_lo.r, 0xff);
out_8(&adb->autopoll.r, APE);
- if (request_irq(adbs->intrs[0].line, macio_adb_interrupt,
- 0, "ADB", (void *)0)) {
- printk(KERN_ERR "ADB: can't get irq %d\n",
- adbs->intrs[0].line);
+ irq = irq_of_parse_and_map(adbs, 0);
+ if (request_irq(irq, macio_adb_interrupt, 0, "ADB", (void *)0)) {
+ printk(KERN_ERR "ADB: can't get irq %d\n", irq);
return -EAGAIN;
}
out_8(&adb->intr_enb.r, DFB | TAG);
static int macio_resource_quirks(struct device_node *np, struct resource *res,
int index)
{
- if (res->flags & IORESOURCE_MEM) {
- /* Grand Central has too large resource 0 on some machines */
- if (index == 0 && !strcmp(np->name, "gc"))
- res->end = res->start + 0x1ffff;
+ /* Only quirks for memory resources for now */
+ if ((res->flags & IORESOURCE_MEM) == 0)
+ return 0;
+
+ /* Grand Central has too large resource 0 on some machines */
+ if (index == 0 && !strcmp(np->name, "gc"))
+ res->end = res->start + 0x1ffff;
- /* Airport has bogus resource 2 */
- if (index >= 2 && !strcmp(np->name, "radio"))
- return 1;
+ /* Airport has bogus resource 2 */
+ if (index >= 2 && !strcmp(np->name, "radio"))
+ return 1;
#ifndef CONFIG_PPC64
- /* DBDMAs may have bogus sizes */
- if ((res->start & 0x0001f000) == 0x00008000)
- res->end = res->start + 0xff;
+ /* DBDMAs may have bogus sizes */
+ if ((res->start & 0x0001f000) == 0x00008000)
+ res->end = res->start + 0xff;
#endif /* CONFIG_PPC64 */
- /* ESCC parent eats child resources. We could have added a
- * level of hierarchy, but I don't really feel the need
- * for it
- */
- if (!strcmp(np->name, "escc"))
- return 1;
-
- /* ESCC has bogus resources >= 3 */
- if (index >= 3 && !(strcmp(np->name, "ch-a") &&
- strcmp(np->name, "ch-b")))
- return 1;
-
- /* Media bay has too many resources, keep only first one */
- if (index > 0 && !strcmp(np->name, "media-bay"))
- return 1;
-
- /* Some older IDE resources have bogus sizes */
- if (!(strcmp(np->name, "IDE") && strcmp(np->name, "ATA") &&
- strcmp(np->type, "ide") && strcmp(np->type, "ata"))) {
- if (index == 0 && (res->end - res->start) > 0xfff)
- res->end = res->start + 0xfff;
- if (index == 1 && (res->end - res->start) > 0xff)
- res->end = res->start + 0xff;
- }
+ /* ESCC parent eats child resources. We could have added a
+ * level of hierarchy, but I don't really feel the need
+ * for it
+ */
+ if (!strcmp(np->name, "escc"))
+ return 1;
+
+ /* ESCC has bogus resources >= 3 */
+ if (index >= 3 && !(strcmp(np->name, "ch-a") &&
+ strcmp(np->name, "ch-b")))
+ return 1;
+
+ /* Media bay has too many resources, keep only first one */
+ if (index > 0 && !strcmp(np->name, "media-bay"))
+ return 1;
+
+ /* Some older IDE resources have bogus sizes */
+ if (!(strcmp(np->name, "IDE") && strcmp(np->name, "ATA") &&
+ strcmp(np->type, "ide") && strcmp(np->type, "ata"))) {
+ if (index == 0 && (res->end - res->start) > 0xfff)
+ res->end = res->start + 0xfff;
+ if (index == 1 && (res->end - res->start) > 0xff)
+ res->end = res->start + 0xff;
}
return 0;
}
+static void macio_create_fixup_irq(struct macio_dev *dev, int index,
+ unsigned int line)
+{
+ unsigned int irq;
-static void macio_setup_interrupts(struct macio_dev *dev)
+ irq = irq_create_mapping(NULL, line, 0);
+ if (irq != NO_IRQ) {
+ dev->interrupt[index].start = irq;
+ dev->interrupt[index].flags = IORESOURCE_IRQ;
+ dev->interrupt[index].name = dev->ofdev.dev.bus_id;
+ }
+ if (dev->n_interrupts <= index)
+ dev->n_interrupts = index + 1;
+}
+
+static void macio_add_missing_resources(struct macio_dev *dev)
{
struct device_node *np = dev->ofdev.node;
- int i,j;
+ unsigned int irq_base;
+
+ /* Gatwick has some missing interrupts on child nodes */
+ if (dev->bus->chip->type != macio_gatwick)
+ return;
- /* For now, we use pre-parsed entries in the device-tree for
- * interrupt routing and addresses, but we should change that
- * to dynamically parsed entries and so get rid of most of the
- * clutter in struct device_node
+ /* irq_base is always 64 on gatwick. I have no cleaner way to get
+ * that value from here at this point
*/
- for (i = j = 0; i < np->n_intrs; i++) {
+ irq_base = 64;
+
+ /* Fix SCC */
+ if (strcmp(np->name, "ch-a") == 0) {
+ macio_create_fixup_irq(dev, 0, 15 + irq_base);
+ macio_create_fixup_irq(dev, 1, 4 + irq_base);
+ macio_create_fixup_irq(dev, 2, 5 + irq_base);
+ printk(KERN_INFO "macio: fixed SCC irqs on gatwick\n");
+ }
+
+ /* Fix media-bay */
+ if (strcmp(np->name, "media-bay") == 0) {
+ macio_create_fixup_irq(dev, 0, 29 + irq_base);
+ printk(KERN_INFO "macio: fixed media-bay irq on gatwick\n");
+ }
+
+ /* Fix left media bay children */
+ if (dev->media_bay != NULL && strcmp(np->name, "floppy") == 0) {
+ macio_create_fixup_irq(dev, 0, 19 + irq_base);
+ macio_create_fixup_irq(dev, 1, 1 + irq_base);
+ printk(KERN_INFO "macio: fixed left floppy irqs\n");
+ }
+ if (dev->media_bay != NULL && strcasecmp(np->name, "ata4") == 0) {
+ macio_create_fixup_irq(dev, 0, 14 + irq_base);
+ macio_create_fixup_irq(dev, 1, 3 + irq_base);
+ printk(KERN_INFO "macio: fixed left ide irqs\n");
+ }
+}
+
+static void macio_setup_interrupts(struct macio_dev *dev)
+{
+ struct device_node *np = dev->ofdev.node;
+ unsigned int irq;
+ int i = 0, j = 0;
+
+ for (;;) {
struct resource *res = &dev->interrupt[j];
if (j >= MACIO_DEV_COUNT_IRQS)
break;
- res->start = np->intrs[i].line;
- res->flags = IORESOURCE_IO;
- if (np->intrs[j].sense)
- res->flags |= IORESOURCE_IRQ_LOWLEVEL;
- else
- res->flags |= IORESOURCE_IRQ_HIGHEDGE;
+ irq = irq_of_parse_and_map(np, i++);
+ if (irq == NO_IRQ)
+ break;
+ res->start = irq;
+ res->flags = IORESOURCE_IRQ;
res->name = dev->ofdev.dev.bus_id;
- if (macio_resource_quirks(np, res, i))
+ if (macio_resource_quirks(np, res, i - 1)) {
memset(res, 0, sizeof(struct resource));
- else
+ continue;
+ } else
j++;
}
dev->n_interrupts = j;
/* MacIO itself has a different reg, we use its PCI base */
if (np == chip->of_node) {
- sprintf(dev->ofdev.dev.bus_id, "%1d.%016llx:%.*s",
+ sprintf(dev->ofdev.dev.bus_id, "%1d.%08x:%.*s",
chip->lbus.index,
#ifdef CONFIG_PCI
- (unsigned long long)pci_resource_start(chip->lbus.pdev, 0),
+ (unsigned int)pci_resource_start(chip->lbus.pdev, 0),
#else
0, /* NuBus may want to do something better here */
#endif
/* Setup interrupts & resources */
macio_setup_interrupts(dev);
macio_setup_resources(dev, parent_res);
+ macio_add_missing_resources(dev);
/* Register with core */
if (of_device_register(&dev->ofdev) != 0) {
smu->doorbell = *data;
if (smu->doorbell < 0x50)
smu->doorbell += 0x50;
- if (np->n_intrs > 0)
- smu->db_irq = np->intrs[0].line;
+ smu->db_irq = irq_of_parse_and_map(np, 0);
of_node_put(np);
smu->msg = *data;
if (smu->msg < 0x50)
smu->msg += 0x50;
- if (np->n_intrs > 0)
- smu->msg_irq = np->intrs[0].line;
+ smu->msg_irq = irq_of_parse_and_map(np, 0);
of_node_put(np);
} while(0);
if (smu->db_irq != NO_IRQ) {
if (request_irq(smu->db_irq, smu_db_intr,
- SA_SHIRQ, "SMU doorbell", smu) < 0) {
+ IRQF_SHARED, "SMU doorbell", smu) < 0) {
printk(KERN_WARNING "SMU: can't "
"request interrupt %d\n",
smu->db_irq);
if (smu->msg_irq != NO_IRQ) {
if (request_irq(smu->msg_irq, smu_msg_intr,
- SA_SHIRQ, "SMU message", smu) < 0) {
+ IRQF_SHARED, "SMU message", smu) < 0) {
printk(KERN_WARNING "SMU: can't "
"request interrupt %d\n",
smu->msg_irq);
static volatile unsigned char __iomem *via;
static DEFINE_SPINLOCK(cuda_lock);
-#ifdef CONFIG_MAC
-#define CUDA_IRQ IRQ_MAC_ADB
-#define eieio()
-#else
-#define CUDA_IRQ vias->intrs[0].line
-#endif
-
/* VIA registers - spaced 0x200 bytes apart */
#define RS 0x200 /* skip between registers */
#define B 0 /* B-side data */
static int __init via_cuda_start(void)
{
+ unsigned int irq;
+
if (via == NULL)
return -ENODEV;
- if (request_irq(CUDA_IRQ, cuda_interrupt, 0, "ADB", cuda_interrupt)) {
- printk(KERN_ERR "cuda_init: can't get irq %d\n", CUDA_IRQ);
+#ifdef CONFIG_MAC
+ irq = IRQ_MAC_ADB;
+#else /* CONFIG_MAC */
+ irq = irq_of_parse_and_map(vias, 0);
+ if (irq == NO_IRQ) {
+ printk(KERN_ERR "via-cuda: can't map interrupts for %s\n",
+ vias->full_name);
+ return -ENODEV;
+ }
+#endif /* CONFIG_MAC */
+
+ if (request_irq(irq, cuda_interrupt, 0, "ADB", cuda_interrupt)) {
+ printk(KERN_ERR "via-cuda: can't request irq %d\n", irq);
return -EAGAIN;
}
#include <asm/backlight.h>
#endif
-#ifdef CONFIG_PPC32
-#include <asm/open_pic.h>
-#endif
-
#include "via-pmu-event.h"
/* Some compile options */
static int pmu_has_adb;
static struct device_node *gpio_node;
static unsigned char __iomem *gpio_reg = NULL;
-static int gpio_irq = -1;
+static int gpio_irq = NO_IRQ;
static int gpio_irq_enabled = -1;
static volatile int pmu_suspended = 0;
static spinlock_t pmu_lock;
*/
static int __init via_pmu_start(void)
{
+ unsigned int irq;
+
if (vias == NULL)
return -ENODEV;
batt_req.complete = 1;
-#ifndef CONFIG_PPC_MERGE
- if (pmu_kind == PMU_KEYLARGO_BASED)
- openpic_set_irq_priority(vias->intrs[0].line,
- OPENPIC_PRIORITY_DEFAULT + 1);
-#endif
-
- if (request_irq(vias->intrs[0].line, via_pmu_interrupt, 0, "VIA-PMU",
- (void *)0)) {
- printk(KERN_ERR "VIA-PMU: can't get irq %d\n",
- vias->intrs[0].line);
- return -EAGAIN;
+ irq = irq_of_parse_and_map(vias, 0);
+ if (irq == NO_IRQ) {
+ printk(KERN_ERR "via-pmu: can't map interruptn");
+ return -ENODEV;
+ }
+ if (request_irq(irq, via_pmu_interrupt, 0, "VIA-PMU", (void *)0)) {
+ printk(KERN_ERR "via-pmu: can't request irq %d\n", irq);
+ return -ENODEV;
}
if (pmu_kind == PMU_KEYLARGO_BASED) {
if (gpio_node == NULL)
gpio_node = of_find_node_by_name(NULL,
"pmu-interrupt");
- if (gpio_node && gpio_node->n_intrs > 0)
- gpio_irq = gpio_node->intrs[0].line;
+ if (gpio_node)
+ gpio_irq = irq_of_parse_and_map(gpio_node, 0);
- if (gpio_irq != -1) {
+ if (gpio_irq != NO_IRQ) {
if (request_irq(gpio_irq, gpio1_interrupt, 0,
"GPIO1 ADB", (void *)0))
printk(KERN_ERR "pmu: can't get irq %d"
struct block_device *bdev;
char b[BDEVNAME_SIZE];
- bdev = open_by_devnum(dev, FMODE_READ|FMODE_WRITE);
+ bdev = open_partition_by_devnum(dev, FMODE_READ|FMODE_WRITE);
if (IS_ERR(bdev)) {
printk(KERN_ERR "md: could not open %s.\n",
__bdevname(dev, b));
if (err) {
printk(KERN_ERR "md: could not bd_claim %s.\n",
bdevname(bdev, b));
- blkdev_put(bdev);
+ blkdev_put_partition(bdev);
return err;
}
rdev->bdev = bdev;
if (!bdev)
MD_BUG();
bd_release(bdev);
- blkdev_put(bdev);
+ blkdev_put_partition(bdev);
}
void md_autodetect_dev(dev_t dev);
saa7146_write(dev, MC2, 0xf8000000);
/* request an interrupt for the saa7146 */
- err = request_irq(pci->irq, interrupt_hw, SA_SHIRQ | SA_INTERRUPT,
+ err = request_irq(pci->irq, interrupt_hw, IRQF_SHARED | IRQF_DISABLED,
dev->name, dev);
if (err < 0) {
ERR(("request_irq() failed.\n"));
pci_set_drvdata(fc_pci->pdev, fc_pci);
if ((ret = request_irq(fc_pci->pdev->irq, flexcop_pci_isr,
- SA_SHIRQ, DRIVER_NAME, fc_pci)) != 0)
+ IRQF_SHARED, DRIVER_NAME, fc_pci)) != 0)
goto err_pci_iounmap;
spin_lock_init(&fc_pci->irq_lock);
btwrite(0, BT848_INT_MASK);
result = request_irq(bt->irq, bt878_irq,
- SA_SHIRQ | SA_INTERRUPT, "bt878",
+ IRQF_SHARED | IRQF_DISABLED, "bt878",
(void *) bt);
if (result == -EINVAL) {
printk(KERN_ERR "bt878(%d): Bad irq number or handler\n",
pci_set_drvdata(pdev, pluto);
- ret = request_irq(pdev->irq, pluto_irq, SA_SHIRQ, DRIVER_NAME, pluto);
+ ret = request_irq(pdev->irq, pluto_irq, IRQF_SHARED, DRIVER_NAME, pluto);
if (ret < 0)
goto err_pci_iounmap;
/* disable irqs, register irq handler */
btwrite(0, BT848_INT_MASK);
result = request_irq(btv->c.pci->irq, bttv_irq,
- SA_SHIRQ | SA_INTERRUPT,btv->c.name,(void *)btv);
+ IRQF_SHARED | IRQF_DISABLED,btv->c.name,(void *)btv);
if (result < 0) {
printk(KERN_ERR "bttv%d: can't get IRQ %d\n",
bttv_num,btv->c.pci->irq);
/* get irq */
err = request_irq(chip->pci->irq, cx8801_irq,
- SA_SHIRQ | SA_INTERRUPT, chip->core->name, chip);
+ IRQF_SHARED | IRQF_DISABLED, chip->core->name, chip);
if (err < 0) {
dprintk(0, "%s: can't get IRQ %d\n",
chip->core->name, chip->pci->irq);
/* get irq */
err = request_irq(dev->pci->irq, cx8802_irq,
- SA_SHIRQ | SA_INTERRUPT, dev->core->name, dev);
+ IRQF_SHARED | IRQF_DISABLED, dev->core->name, dev);
if (err < 0) {
printk(KERN_ERR "%s: can't get IRQ %d\n",
dev->core->name, dev->pci->irq);
/* get irq */
err = request_irq(pci_dev->irq, cx8800_irq,
- SA_SHIRQ | SA_INTERRUPT, core->name, dev);
+ IRQF_SHARED | IRQF_DISABLED, core->name, dev);
if (err < 0) {
printk(KERN_ERR "%s: can't get IRQ %d\n",
core->name,pci_dev->irq);
meye.mchip_irq = pcidev->irq;
if (request_irq(meye.mchip_irq, meye_irq,
- SA_INTERRUPT | SA_SHIRQ, "meye", meye_irq)) {
+ IRQF_DISABLED | IRQF_SHARED, "meye", meye_irq)) {
printk(KERN_ERR "meye: request_irq failed\n");
goto outreqirq;
}
err = request_irq(dev->pci->irq, saa7134_alsa_irq,
- SA_SHIRQ | SA_INTERRUPT, dev->name,
+ IRQF_SHARED | IRQF_DISABLED, dev->name,
(void*) &dev->dmasound);
if (err < 0) {
/* get irq */
err = request_irq(pci_dev->irq, saa7134_irq,
- SA_SHIRQ | SA_INTERRUPT, dev->name, dev);
+ IRQF_SHARED | IRQF_DISABLED, dev->name, dev);
if (err < 0) {
printk(KERN_ERR "%s: can't get IRQ %d\n",
dev->name,pci_dev->irq);
{
if ((request_irq(dev->pci->irq, saa7134_oss_irq,
- SA_SHIRQ | SA_INTERRUPT, dev->name,
+ IRQF_SHARED | IRQF_DISABLED, dev->name,
(void*) &dev->dmasound)) < 0)
return -1;
memcpy(&saa->video_dev, &saa_template, sizeof(saa_template));
saawrite(0, SAA7146_IER); /* turn off all interrupts */
- retval = request_irq(saa->irq, saa7146_irq, SA_SHIRQ | SA_INTERRUPT,
+ retval = request_irq(saa->irq, saa7146_irq, IRQF_SHARED | IRQF_DISABLED,
"stradis", saa);
if (retval == -EINVAL)
dev_err(&pdev->dev, "%d: Bad irq number or handler\n", num);
result = request_irq(zr->pci_dev->irq,
zoran_irq,
- SA_SHIRQ | SA_INTERRUPT,
+ IRQF_SHARED | IRQF_DISABLED,
ZR_DEVNAME(zr),
(void *) zr);
if (result < 0) {
DEBUG(printk(KERN_DEBUG "zoran: mapped-memory at 0x%p\n",ztv->zoran_mem));
result = request_irq(dev->irq, zoran_irq,
- SA_SHIRQ|SA_INTERRUPT,"zoran", ztv);
+ IRQF_SHARED|IRQF_DISABLED,"zoran", ztv);
if (result==-EINVAL)
{
iounmap(ztv->zoran_mem);
# For mptfc:
#CFLAGS_mptfc.o += -DMPT_DEBUG_FC
+# For mptsas:
+#CFLAGS_mptsas.o += -DMPT_DEBUG_SAS
+#CFLAGS_mptsas.o += -DMPT_DEBUG_SAS_WIDE
+
+
#=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-} LSI_LOGIC
obj-$(CONFIG_FUSION_SPI) += mptbase.o mptscsih.o mptspi.o
+++ /dev/null
-/*
- * Copyright (c) 2000-2001 LSI Logic Corporation. All rights reserved.
- *
- * NAME: fc_log.h
- * SUMMARY: MPI IocLogInfo definitions for the SYMFC9xx chips
- * DESCRIPTION: Contains the enumerated list of values that may be returned
- * in the IOCLogInfo field of a MPI Default Reply Message.
- *
- * CREATION DATE: 6/02/2000
- * ID: $Id: fc_log.h,v 4.6 2001/07/26 14:41:33 sschremm Exp $
- */
-
-
-/*
- * MpiIocLogInfo_t enum
- *
- * These 32 bit values are used in the IOCLogInfo field of the MPI reply
- * messages.
- * The value is 0xabcccccc where
- * a = The type of log info as per the MPI spec. Since these codes are
- * all for Fibre Channel this value will always be 2.
- * b = Specifies a subclass of the firmware where
- * 0 = FCP Initiator
- * 1 = FCP Target
- * 2 = LAN
- * 3 = MPI Message Layer
- * 4 = FC Link
- * 5 = Context Manager
- * 6 = Invalid Field Offset
- * 7 = State Change Info
- * all others are reserved for future use
- * c = A specific value within the subclass.
- *
- * NOTE: Any new values should be added to the end of each subclass so that the
- * codes remain consistent across firmware releases.
- */
-typedef enum _MpiIocLogInfoFc
-{
- MPI_IOCLOGINFO_FC_INIT_BASE = 0x20000000,
- MPI_IOCLOGINFO_FC_INIT_ERROR_OUT_OF_ORDER_FRAME = 0x20000001, /* received an out of order frame - unsupported */
- MPI_IOCLOGINFO_FC_INIT_ERROR_BAD_START_OF_FRAME = 0x20000002, /* Bad Rx Frame, bad start of frame primative */
- MPI_IOCLOGINFO_FC_INIT_ERROR_BAD_END_OF_FRAME = 0x20000003, /* Bad Rx Frame, bad end of frame primative */
- MPI_IOCLOGINFO_FC_INIT_ERROR_OVER_RUN = 0x20000004, /* Bad Rx Frame, overrun */
- MPI_IOCLOGINFO_FC_INIT_ERROR_RX_OTHER = 0x20000005, /* Other errors caught by IOC which require retries */
- MPI_IOCLOGINFO_FC_INIT_ERROR_SUBPROC_DEAD = 0x20000006, /* Main processor could not initialize sub-processor */
- MPI_IOCLOGINFO_FC_INIT_ERROR_RX_OVERRUN = 0x20000007, /* Scatter Gather overrun */
- MPI_IOCLOGINFO_FC_INIT_ERROR_RX_BAD_STATUS = 0x20000008, /* Receiver detected context mismatch via invalid header */
- MPI_IOCLOGINFO_FC_INIT_ERROR_RX_UNEXPECTED_FRAME= 0x20000009, /* CtxMgr detected unsupported frame type */
- MPI_IOCLOGINFO_FC_INIT_ERROR_LINK_FAILURE = 0x2000000A, /* Link failure occurred */
- MPI_IOCLOGINFO_FC_INIT_ERROR_TX_TIMEOUT = 0x2000000B, /* Transmitter timeout error */
-
- MPI_IOCLOGINFO_FC_TARGET_BASE = 0x21000000,
- MPI_IOCLOGINFO_FC_TARGET_NO_PDISC = 0x21000001, /* not sent because we are waiting for a PDISC from the initiator */
- MPI_IOCLOGINFO_FC_TARGET_NO_LOGIN = 0x21000002, /* not sent because we are not logged in to the remote node */
- MPI_IOCLOGINFO_FC_TARGET_DOAR_KILLED_BY_LIP = 0x21000003, /* Data Out, Auto Response, not sent due to a LIP */
- MPI_IOCLOGINFO_FC_TARGET_DIAR_KILLED_BY_LIP = 0x21000004, /* Data In, Auto Response, not sent due to a LIP */
- MPI_IOCLOGINFO_FC_TARGET_DIAR_MISSING_DATA = 0x21000005, /* Data In, Auto Response, missing data frames */
- MPI_IOCLOGINFO_FC_TARGET_DONR_KILLED_BY_LIP = 0x21000006, /* Data Out, No Response, not sent due to a LIP */
- MPI_IOCLOGINFO_FC_TARGET_WRSP_KILLED_BY_LIP = 0x21000007, /* Auto-response after a write not sent due to a LIP */
- MPI_IOCLOGINFO_FC_TARGET_DINR_KILLED_BY_LIP = 0x21000008, /* Data In, No Response, not completed due to a LIP */
- MPI_IOCLOGINFO_FC_TARGET_DINR_MISSING_DATA = 0x21000009, /* Data In, No Response, missing data frames */
- MPI_IOCLOGINFO_FC_TARGET_MRSP_KILLED_BY_LIP = 0x2100000a, /* Manual Response not sent due to a LIP */
- MPI_IOCLOGINFO_FC_TARGET_NO_CLASS_3 = 0x2100000b, /* not sent because remote node does not support Class 3 */
- MPI_IOCLOGINFO_FC_TARGET_LOGIN_NOT_VALID = 0x2100000c, /* not sent because login to remote node not validated */
- MPI_IOCLOGINFO_FC_TARGET_FROM_OUTBOUND = 0x2100000e, /* cleared from the outbound queue after a logout */
- MPI_IOCLOGINFO_FC_TARGET_WAITING_FOR_DATA_IN = 0x2100000f, /* cleared waiting for data after a logout */
-
- MPI_IOCLOGINFO_FC_LAN_BASE = 0x22000000,
- MPI_IOCLOGINFO_FC_LAN_TRANS_SGL_MISSING = 0x22000001, /* Transaction Context Sgl Missing */
- MPI_IOCLOGINFO_FC_LAN_TRANS_WRONG_PLACE = 0x22000002, /* Transaction Context found before an EOB */
- MPI_IOCLOGINFO_FC_LAN_TRANS_RES_BITS_SET = 0x22000003, /* Transaction Context value has reserved bits set */
- MPI_IOCLOGINFO_FC_LAN_WRONG_SGL_FLAG = 0x22000004, /* Invalid SGL Flags */
-
- MPI_IOCLOGINFO_FC_MSG_BASE = 0x23000000,
-
- MPI_IOCLOGINFO_FC_LINK_BASE = 0x24000000,
- MPI_IOCLOGINFO_FC_LINK_LOOP_INIT_TIMEOUT = 0x24000001, /* Loop initialization timed out */
- MPI_IOCLOGINFO_FC_LINK_ALREADY_INITIALIZED = 0x24000002, /* Another system controller already initialized the loop */
- MPI_IOCLOGINFO_FC_LINK_LINK_NOT_ESTABLISHED = 0x24000003, /* Not synchronized to signal or still negotiating (possible cable problem) */
- MPI_IOCLOGINFO_FC_LINK_CRC_ERROR = 0x24000004, /* CRC check detected error on received frame */
-
- MPI_IOCLOGINFO_FC_CTX_BASE = 0x25000000,
-
- MPI_IOCLOGINFO_FC_INVALID_FIELD_BYTE_OFFSET = 0x26000000, /* The lower 24 bits give the byte offset of the field in the request message that is invalid */
- MPI_IOCLOGINFO_FC_INVALID_FIELD_MAX_OFFSET = 0x26ffffff,
-
- MPI_IOCLOGINFO_FC_STATE_CHANGE = 0x27000000 /* The lower 24 bits give additional information concerning state change */
-
-} MpiIocLogInfoFc_t;
* Title: MPI Message independent structures and definitions
* Creation Date: July 27, 2000
*
- * mpi.h Version: 01.05.10
+ * mpi.h Version: 01.05.11
*
* Version History
* ---------------
* Added EEDP IOCStatus codes.
* 08-03-05 01.05.09 Bumped MPI_HEADER_VERSION_UNIT.
* 08-30-05 01.05.10 Added 2 new IOCStatus codes for Target.
+ * 03-27-06 01.05.11 Bumped MPI_HEADER_VERSION_UNIT.
* --------------------------------------------------------------------------
*/
/* Note: The major versions of 0xe0 through 0xff are reserved */
/* versioning for this MPI header set */
-#define MPI_HEADER_VERSION_UNIT (0x0C)
+#define MPI_HEADER_VERSION_UNIT (0x0D)
#define MPI_HEADER_VERSION_DEV (0x00)
#define MPI_HEADER_VERSION_UNIT_MASK (0xFF00)
#define MPI_HEADER_VERSION_UNIT_SHIFT (8)
* Title: MPI Config message, structures, and Pages
* Creation Date: July 27, 2000
*
- * mpi_cnfg.h Version: 01.05.11
+ * mpi_cnfg.h Version: 01.05.12
*
* Version History
* ---------------
* Added postpone SATA Init bit to SAS IO Unit Page 1
* ControlFlags.
* Changed LogEntry format for Log Page 0.
+ * 03-27-06 01.05.12 Added two new Flags defines for Manufacturing Page 4.
+ * Added Manufacturing Page 7.
+ * Added MPI_IOCPAGE2_CAP_FLAGS_RAID_64_BIT_ADDRESSING.
+ * Added IOC Page 6.
+ * Added PrevBootDeviceForm field to CONFIG_PAGE_BIOS_2.
+ * Added MaxLBAHigh field to RAID Volume Page 0.
+ * Added Nvdata version fields to SAS IO Unit Page 0.
+ * Added AdditionalControlFlags, MaxTargetPortConnectTime,
+ * ReportDeviceMissingDelay, and IODeviceMissingDelay
+ * fields to SAS IO Unit Page 1.
* --------------------------------------------------------------------------
*/
} CONFIG_PAGE_MANUFACTURING_4, MPI_POINTER PTR_CONFIG_PAGE_MANUFACTURING_4,
ManufacturingPage4_t, MPI_POINTER pManufacturingPage4_t;
-#define MPI_MANUFACTURING4_PAGEVERSION (0x03)
+#define MPI_MANUFACTURING4_PAGEVERSION (0x04)
/* defines for the Flags field */
+#define MPI_MANPAGE4_FORCE_BAD_BLOCK_TABLE (0x80)
+#define MPI_MANPAGE4_FORCE_OFFLINE_FAILOVER (0x40)
#define MPI_MANPAGE4_IME_DISABLE (0x20)
#define MPI_MANPAGE4_IM_DISABLE (0x10)
#define MPI_MANPAGE4_IS_DISABLE (0x08)
#define MPI_MANUFACTURING6_PAGEVERSION (0x00)
+typedef struct _MPI_MANPAGE7_CONNECTOR_INFO
+{
+ U32 Pinout; /* 00h */
+ U8 Connector[16]; /* 04h */
+ U8 Location; /* 14h */
+ U8 Reserved1; /* 15h */
+ U16 Slot; /* 16h */
+ U32 Reserved2; /* 18h */
+} MPI_MANPAGE7_CONNECTOR_INFO, MPI_POINTER PTR_MPI_MANPAGE7_CONNECTOR_INFO,
+ MpiManPage7ConnectorInfo_t, MPI_POINTER pMpiManPage7ConnectorInfo_t;
+
+/* defines for the Pinout field */
+#define MPI_MANPAGE7_PINOUT_SFF_8484_L4 (0x00080000)
+#define MPI_MANPAGE7_PINOUT_SFF_8484_L3 (0x00040000)
+#define MPI_MANPAGE7_PINOUT_SFF_8484_L2 (0x00020000)
+#define MPI_MANPAGE7_PINOUT_SFF_8484_L1 (0x00010000)
+#define MPI_MANPAGE7_PINOUT_SFF_8470_L4 (0x00000800)
+#define MPI_MANPAGE7_PINOUT_SFF_8470_L3 (0x00000400)
+#define MPI_MANPAGE7_PINOUT_SFF_8470_L2 (0x00000200)
+#define MPI_MANPAGE7_PINOUT_SFF_8470_L1 (0x00000100)
+#define MPI_MANPAGE7_PINOUT_SFF_8482 (0x00000002)
+#define MPI_MANPAGE7_PINOUT_CONNECTION_UNKNOWN (0x00000001)
+
+/* defines for the Location field */
+#define MPI_MANPAGE7_LOCATION_UNKNOWN (0x01)
+#define MPI_MANPAGE7_LOCATION_INTERNAL (0x02)
+#define MPI_MANPAGE7_LOCATION_EXTERNAL (0x04)
+#define MPI_MANPAGE7_LOCATION_SWITCHABLE (0x08)
+#define MPI_MANPAGE7_LOCATION_AUTO (0x10)
+#define MPI_MANPAGE7_LOCATION_NOT_PRESENT (0x20)
+#define MPI_MANPAGE7_LOCATION_NOT_CONNECTED (0x80)
+
+/*
+ * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
+ * one and check NumPhys at runtime.
+ */
+#ifndef MPI_MANPAGE7_CONNECTOR_INFO_MAX
+#define MPI_MANPAGE7_CONNECTOR_INFO_MAX (1)
+#endif
+
+typedef struct _CONFIG_PAGE_MANUFACTURING_7
+{
+ CONFIG_PAGE_HEADER Header; /* 00h */
+ U32 Reserved1; /* 04h */
+ U32 Reserved2; /* 08h */
+ U32 Flags; /* 0Ch */
+ U8 EnclosureName[16]; /* 10h */
+ U8 NumPhys; /* 20h */
+ U8 Reserved3; /* 21h */
+ U16 Reserved4; /* 22h */
+ MPI_MANPAGE7_CONNECTOR_INFO ConnectorInfo[MPI_MANPAGE7_CONNECTOR_INFO_MAX]; /* 24h */
+} CONFIG_PAGE_MANUFACTURING_7, MPI_POINTER PTR_CONFIG_PAGE_MANUFACTURING_7,
+ ManufacturingPage7_t, MPI_POINTER pManufacturingPage7_t;
+
+#define MPI_MANUFACTURING7_PAGEVERSION (0x00)
+
+/* defines for the Flags field */
+#define MPI_MANPAGE7_FLAG_USE_SLOT_INFO (0x00000001)
+
+
/****************************************************************************
* IO Unit Config Pages
****************************************************************************/
} CONFIG_PAGE_IOC_2, MPI_POINTER PTR_CONFIG_PAGE_IOC_2,
IOCPage2_t, MPI_POINTER pIOCPage2_t;
-#define MPI_IOCPAGE2_PAGEVERSION (0x03)
+#define MPI_IOCPAGE2_PAGEVERSION (0x04)
/* IOC Page 2 Capabilities flags */
#define MPI_IOCPAGE2_CAP_FLAGS_RAID_6_SUPPORT (0x00000010)
#define MPI_IOCPAGE2_CAP_FLAGS_RAID_10_SUPPORT (0x00000020)
#define MPI_IOCPAGE2_CAP_FLAGS_RAID_50_SUPPORT (0x00000040)
+#define MPI_IOCPAGE2_CAP_FLAGS_RAID_64_BIT_ADDRESSING (0x10000000)
#define MPI_IOCPAGE2_CAP_FLAGS_SES_SUPPORT (0x20000000)
#define MPI_IOCPAGE2_CAP_FLAGS_SAFTE_SUPPORT (0x40000000)
#define MPI_IOCPAGE2_CAP_FLAGS_CROSS_CHANNEL_SUPPORT (0x80000000)
#define MPI_IOCPAGE5_PAGEVERSION (0x00)
+typedef struct _CONFIG_PAGE_IOC_6
+{
+ CONFIG_PAGE_HEADER Header; /* 00h */
+ U32 CapabilitiesFlags; /* 04h */
+ U8 MaxDrivesIS; /* 08h */
+ U8 MaxDrivesIM; /* 09h */
+ U8 MaxDrivesIME; /* 0Ah */
+ U8 Reserved1; /* 0Bh */
+ U8 MinDrivesIS; /* 0Ch */
+ U8 MinDrivesIM; /* 0Dh */
+ U8 MinDrivesIME; /* 0Eh */
+ U8 Reserved2; /* 0Fh */
+ U8 MaxGlobalHotSpares; /* 10h */
+ U8 Reserved3; /* 11h */
+ U16 Reserved4; /* 12h */
+ U32 Reserved5; /* 14h */
+ U32 SupportedStripeSizeMapIS; /* 18h */
+ U32 SupportedStripeSizeMapIME; /* 1Ch */
+ U32 Reserved6; /* 20h */
+ U8 MetadataSize; /* 24h */
+ U8 Reserved7; /* 25h */
+ U16 Reserved8; /* 26h */
+ U16 MaxBadBlockTableEntries; /* 28h */
+ U16 Reserved9; /* 2Ah */
+ U16 IRNvsramUsage; /* 2Ch */
+ U16 Reserved10; /* 2Eh */
+ U32 IRNvsramVersion; /* 30h */
+ U32 Reserved11; /* 34h */
+ U32 Reserved12; /* 38h */
+} CONFIG_PAGE_IOC_6, MPI_POINTER PTR_CONFIG_PAGE_IOC_6,
+ IOCPage6_t, MPI_POINTER pIOCPage6_t;
+
+#define MPI_IOCPAGE6_PAGEVERSION (0x00)
+
+/* IOC Page 6 Capabilities Flags */
+
+#define MPI_IOCPAGE6_CAP_FLAGS_GLOBAL_HOT_SPARE (0x00000001)
+
/****************************************************************************
* BIOS Config Pages
U32 Reserved5; /* 14h */
U32 Reserved6; /* 18h */
U8 BootDeviceForm; /* 1Ch */
- U8 Reserved7; /* 1Dh */
+ U8 PrevBootDeviceForm; /* 1Dh */
U16 Reserved8; /* 1Eh */
MPI_BIOSPAGE2_BOOT_DEVICE BootDevice; /* 20h */
} CONFIG_PAGE_BIOS_2, MPI_POINTER PTR_CONFIG_PAGE_BIOS_2,
BIOSPage2_t, MPI_POINTER pBIOSPage2_t;
-#define MPI_BIOSPAGE2_PAGEVERSION (0x01)
+#define MPI_BIOSPAGE2_PAGEVERSION (0x02)
#define MPI_BIOSPAGE2_FORM_MASK (0x0F)
#define MPI_BIOSPAGE2_FORM_ADAPTER_ORDER (0x00)
RAID_VOL0_STATUS VolumeStatus; /* 08h */
RAID_VOL0_SETTINGS VolumeSettings; /* 0Ch */
U32 MaxLBA; /* 10h */
- U32 Reserved1; /* 14h */
+ U32 MaxLBAHigh; /* 14h */
U32 StripeSize; /* 18h */
U32 Reserved2; /* 1Ch */
U32 Reserved3; /* 20h */
} CONFIG_PAGE_RAID_VOL_0, MPI_POINTER PTR_CONFIG_PAGE_RAID_VOL_0,
RaidVolumePage0_t, MPI_POINTER pRaidVolumePage0_t;
-#define MPI_RAIDVOLPAGE0_PAGEVERSION (0x05)
+#define MPI_RAIDVOLPAGE0_PAGEVERSION (0x06)
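The new MaxLBAHigh field widens the volume size to 64 bits, matching the MPI_IOCPAGE2_CAP_FLAGS_RAID_64_BIT_ADDRESSING capability added above. A hedged sketch of how a consumer would reassemble the full LBA count; vol_pg0 is an assumed pointer to a fetched CONFIG_PAGE_RAID_VOL_0, and the endian conversion follows the usual convention that MPI config page fields are little-endian:

	u64 max_lba;

	/* low 32 bits in MaxLBA, high 32 bits in the new MaxLBAHigh field */
	max_lba = ((u64)le32_to_cpu(vol_pg0->MaxLBAHigh) << 32) |
		  le32_to_cpu(vol_pg0->MaxLBA);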
/* values for RAID Volume Page 0 InactiveStatus field */
#define MPI_RAIDVOLPAGE0_UNKNOWN_INACTIVE (0x00)
typedef struct _CONFIG_PAGE_SAS_IO_UNIT_0
{
CONFIG_EXTENDED_PAGE_HEADER Header; /* 00h */
- U32 Reserved1; /* 08h */
+ U16 NvdataVersionDefault; /* 08h */
+ U16 NvdataVersionPersistent; /* 0Ah */
U8 NumPhys; /* 0Ch */
U8 Reserved2; /* 0Dh */
U16 Reserved3; /* 0Eh */
} CONFIG_PAGE_SAS_IO_UNIT_0, MPI_POINTER PTR_CONFIG_PAGE_SAS_IO_UNIT_0,
SasIOUnitPage0_t, MPI_POINTER pSasIOUnitPage0_t;
-#define MPI_SASIOUNITPAGE0_PAGEVERSION (0x03)
+#define MPI_SASIOUNITPAGE0_PAGEVERSION (0x04)
/* values for SAS IO Unit Page 0 PortFlags */
#define MPI_SAS_IOUNIT0_PORT_FLAGS_DISCOVERY_IN_PROGRESS (0x08)
typedef struct _MPI_SAS_IO_UNIT1_PHY_DATA
{
- U8 Port; /* 00h */
- U8 PortFlags; /* 01h */
- U8 PhyFlags; /* 02h */
- U8 MaxMinLinkRate; /* 03h */
- U32 ControllerPhyDeviceInfo;/* 04h */
- U32 Reserved1; /* 08h */
+ U8 Port; /* 00h */
+ U8 PortFlags; /* 01h */
+ U8 PhyFlags; /* 02h */
+ U8 MaxMinLinkRate; /* 03h */
+ U32 ControllerPhyDeviceInfo; /* 04h */
+ U16 MaxTargetPortConnectTime; /* 08h */
+ U16 Reserved1; /* 0Ah */
} MPI_SAS_IO_UNIT1_PHY_DATA, MPI_POINTER PTR_MPI_SAS_IO_UNIT1_PHY_DATA,
SasIOUnit1PhyData, MPI_POINTER pSasIOUnit1PhyData;
CONFIG_EXTENDED_PAGE_HEADER Header; /* 00h */
U16 ControlFlags; /* 08h */
U16 MaxNumSATATargets; /* 0Ah */
- U32 Reserved1; /* 0Ch */
+ U16 AdditionalControlFlags; /* 0Ch */
+ U16 Reserved1; /* 0Eh */
U8 NumPhys; /* 10h */
U8 SATAMaxQDepth; /* 11h */
- U16 Reserved2; /* 12h */
+ U8 ReportDeviceMissingDelay; /* 12h */
+ U8 IODeviceMissingDelay; /* 13h */
MPI_SAS_IO_UNIT1_PHY_DATA PhyData[MPI_SAS_IOUNIT1_PHY_MAX]; /* 14h */
} CONFIG_PAGE_SAS_IO_UNIT_1, MPI_POINTER PTR_CONFIG_PAGE_SAS_IO_UNIT_1,
SasIOUnitPage1_t, MPI_POINTER pSasIOUnitPage1_t;
-#define MPI_SASIOUNITPAGE1_PAGEVERSION (0x05)
+#define MPI_SASIOUNITPAGE1_PAGEVERSION (0x06)
/* values for SAS IO Unit Page 1 ControlFlags */
#define MPI_SAS_IOUNIT1_CONTROL_DEVICE_SELF_TEST (0x8000)
#define MPI_SAS_IOUNIT1_CONTROL_FIRST_LVL_DISC_ONLY (0x0002)
#define MPI_SAS_IOUNIT1_CONTROL_CLEAR_AFFILIATION (0x0001)
+/* values for SAS IO Unit Page 1 AdditionalControlFlags */
+#define MPI_SAS_IOUNIT1_ACONTROL_ALLOW_TABLE_TO_TABLE (0x0001)
+
+/* defines for SAS IO Unit Page 1 ReportDeviceMissingDelay */
+#define MPI_SAS_IOUNIT1_REPORT_MISSING_TIMEOUT_MASK (0x7F)
+#define MPI_SAS_IOUNIT1_REPORT_MISSING_UNIT_16 (0x80)
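ReportDeviceMissingDelay packs a 7-bit timeout and a units flag into one byte. A sketch of the decode these two defines imply, assuming (as later MPT drivers do) that the UNIT_16 bit means the timeout is expressed in units of 16 seconds; the helper name is illustrative only:

	static inline unsigned int report_missing_delay_secs(u8 raw)
	{
		unsigned int secs = raw & MPI_SAS_IOUNIT1_REPORT_MISSING_TIMEOUT_MASK;

		if (raw & MPI_SAS_IOUNIT1_REPORT_MISSING_UNIT_16)
			secs *= 16;	/* assumption: flag selects 16-second units */
		return secs;
	}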
+
/* values for SAS IO Unit Page 1 PortFlags */
#define MPI_SAS_IOUNIT1_PORT_FLAGS_0_TARGET_IOC_NUM (0x00)
#define MPI_SAS_IOUNIT1_PORT_FLAGS_1_TARGET_IOC_NUM (0x04)
Copyright (c) 2000-2005 LSI Logic Corporation.
---------------------------------------
- Header Set Release Version: 01.05.12
- Header Set Release Date: 08-30-05
+ Header Set Release Version: 01.05.13
+ Header Set Release Date: 03-27-06
---------------------------------------
Filename Current version Prior version
---------- --------------- -------------
- mpi.h 01.05.10 01.05.09
- mpi_ioc.h 01.05.10 01.05.09
- mpi_cnfg.h 01.05.11 01.05.10
- mpi_init.h 01.05.06 01.05.06
- mpi_targ.h 01.05.05 01.05.05
+ mpi.h 01.05.11 01.05.10
+ mpi_ioc.h 01.05.11 01.05.10
+ mpi_cnfg.h 01.05.12 01.05.11
+ mpi_init.h 01.05.07 01.05.06
+ mpi_targ.h 01.05.06 01.05.05
mpi_fc.h 01.05.01 01.05.01
mpi_lan.h 01.05.01 01.05.01
mpi_raid.h 01.05.02 01.05.02
mpi_tool.h 01.05.03 01.05.03
mpi_inb.h 01.05.01 01.05.01
- mpi_sas.h 01.05.02 01.05.01
- mpi_type.h 01.05.02 01.05.01
- mpi_history.txt 01.05.12 01.05.11
+ mpi_sas.h 01.05.03 01.05.02
+ mpi_type.h 01.05.02 01.05.02
+ mpi_history.txt 01.05.13 01.05.12
* Date Version Description
* Added EEDP IOCStatus codes.
* 08-03-05 01.05.09 Bumped MPI_HEADER_VERSION_UNIT.
* 08-30-05 01.05.10 Added 2 new IOCStatus codes for Target.
+ * 03-27-06 01.05.11 Bumped MPI_HEADER_VERSION_UNIT.
* --------------------------------------------------------------------------
mpi_ioc.h
* Added new ReasonCode value for SAS Device Status Change
* event.
* Added new family code for FC949E.
+ * 03-27-06 01.05.11 Added MPI_IOCFACTS_CAPABILITY_TLR.
+ * Added additional Reason Codes and more event data fields
+ * to EVENT_DATA_SAS_DEVICE_STATUS_CHANGE.
+ * Added EVENT_DATA_SAS_BROADCAST_PRIMITIVE structure and
+ * new event.
+ * Added MPI_EVENT_SAS_SMP_ERROR and event data structure.
+ * Added MPI_EVENT_SAS_INIT_DEVICE_STATUS_CHANGE and event
+ * data structure.
+ * Added MPI_EVENT_SAS_INIT_TABLE_OVERFLOW and event
+ * data structure.
+ * Added MPI_EXT_IMAGE_TYPE_INITIALIZATION.
* --------------------------------------------------------------------------
mpi_cnfg.h
* Added postpone SATA Init bit to SAS IO Unit Page 1
* ControlFlags.
* Changed LogEntry format for Log Page 0.
+ * 03-27-06 01.05.12 Added two new Flags defines for Manufacturing Page 4.
+ * Added Manufacturing Page 7.
+ * Added MPI_IOCPAGE2_CAP_FLAGS_RAID_64_BIT_ADDRESSING.
+ * Added IOC Page 6.
+ * Added PrevBootDeviceForm field to CONFIG_PAGE_BIOS_2.
+ * Added MaxLBAHigh field to RAID Volume Page 0.
+ * Added Nvdata version fields to SAS IO Unit Page 0.
+ * Added AdditionalControlFlags, MaxTargetPortConnectTime,
+ * ReportDeviceMissingDelay, and IODeviceMissingDelay
+ * fields to SAS IO Unit Page 1.
* --------------------------------------------------------------------------
mpi_init.h
* Added four new defines for SEP SlotStatus.
* 08-03-05 01.05.06 Fixed some MPI_SCSIIO32_MSGFLGS_ defines to make them
* unique in the first 32 characters.
+ * 03-27-06 01.05.07 Added Task Management type of Clear ACA.
* --------------------------------------------------------------------------
mpi_targ.h
* 02-22-05 01.05.03 Changed a comment.
* 03-11-05 01.05.04 Removed TargetAssistExtended Request.
* 06-24-05 01.05.05 Added TargetAssistExtended structures and defines.
+ * 03-27-06 01.05.06 Added a comment.
* --------------------------------------------------------------------------
mpi_fc.h
* 08-30-05 01.05.02 Added DeviceInfo bit for SEP.
* Added PrimFlags and Primitive field to SAS IO Unit
* Control request, and added a new operation code.
+ * 03-27-06 01.05.03 Added Force Full Discovery, Transmit Port Select Signal,
+ * and Remove Device operations to SAS IO Unit Control.
+ * Added DevHandle field to SAS IO Unit Control request and
+ * reply.
* --------------------------------------------------------------------------
mpi_type.h
mpi_history.txt Parts list history
-Filename 01.05.12 01.05.11 01.05.10 01.05.09
----------- -------- -------- -------- --------
-mpi.h 01.05.10 01.05.09 01.05.08 01.05.07
-mpi_ioc.h 01.05.10 01.05.09 01.05.09 01.05.08
-mpi_cnfg.h 01.05.11 01.05.10 01.05.09 01.05.08
-mpi_init.h 01.05.06 01.05.06 01.05.05 01.05.04
-mpi_targ.h 01.05.05 01.05.05 01.05.05 01.05.04
-mpi_fc.h 01.05.01 01.05.01 01.05.01 01.05.01
-mpi_lan.h 01.05.01 01.05.01 01.05.01 01.05.01
-mpi_raid.h 01.05.02 01.05.02 01.05.02 01.05.02
-mpi_tool.h 01.05.03 01.05.03 01.05.03 01.05.03
-mpi_inb.h 01.05.01 01.05.01 01.05.01 01.05.01
-mpi_sas.h 01.05.02 01.05.01 01.05.01 01.05.01
-mpi_type.h 01.05.02 01.05.01 01.05.01 01.05.01
+Filename 01.05.13 01.05.12 01.05.11 01.05.10 01.05.09
+---------- -------- -------- -------- -------- --------
+mpi.h 01.05.11 01.05.10 01.05.09 01.05.08 01.05.07
+mpi_ioc.h 01.05.11 01.05.10 01.05.09 01.05.09 01.05.08
+mpi_cnfg.h 01.05.12 01.05.11 01.05.10 01.05.09 01.05.08
+mpi_init.h 01.05.07 01.05.06 01.05.06 01.05.05 01.05.04
+mpi_targ.h 01.05.06 01.05.05 01.05.05 01.05.05 01.05.04
+mpi_fc.h 01.05.01 01.05.01 01.05.01 01.05.01 01.05.01
+mpi_lan.h 01.05.01 01.05.01 01.05.01 01.05.01 01.05.01
+mpi_raid.h 01.05.02 01.05.02 01.05.02 01.05.02 01.05.02
+mpi_tool.h 01.05.03 01.05.03 01.05.03 01.05.03 01.05.03
+mpi_inb.h 01.05.01 01.05.01 01.05.01 01.05.01 01.05.01
+mpi_sas.h 01.05.03 01.05.02 01.05.01 01.05.01 01.05.01
+mpi_type.h 01.05.02 01.05.02 01.05.01 01.05.01 01.05.01
Filename 01.05.08 01.05.07 01.05.06 01.05.05 01.05.04 01.05.03
---------- -------- -------- -------- -------- -------- --------
* Title: MPI initiator mode messages and structures
* Creation Date: June 8, 2000
*
- * mpi_init.h Version: 01.05.06
+ * mpi_init.h Version: 01.05.07
*
* Version History
* ---------------
* Added four new defines for SEP SlotStatus.
* 08-03-05 01.05.06 Fixed some MPI_SCSIIO32_MSGFLGS_ defines to make them
* unique in the first 32 characters.
+ * 03-27-06 01.05.07 Added Task Management type of Clear ACA.
* --------------------------------------------------------------------------
*/
#define MPI_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET (0x05)
#define MPI_SCSITASKMGMT_TASKTYPE_CLEAR_TASK_SET (0x06)
#define MPI_SCSITASKMGMT_TASKTYPE_QUERY_TASK (0x07)
+#define MPI_SCSITASKMGMT_TASKTYPE_CLEAR_ACA (0x08)
/* MsgFlags bits */
#define MPI_SCSITASKMGMT_MSGFLAGS_TARGET_RESET_OPTION (0x00)
* Title: MPI IOC, Port, Event, FW Download, and FW Upload messages
* Creation Date: August 11, 2000
*
- * mpi_ioc.h Version: 01.05.10
+ * mpi_ioc.h Version: 01.05.11
*
* Version History
* ---------------
* Added new ReasonCode value for SAS Device Status Change
* event.
* Added new family code for FC949E.
+ * 03-27-06 01.05.11 Added MPI_IOCFACTS_CAPABILITY_TLR.
+ * Added additional Reason Codes and more event data fields
+ * to EVENT_DATA_SAS_DEVICE_STATUS_CHANGE.
+ * Added EVENT_DATA_SAS_BROADCAST_PRIMITIVE structure and
+ * new event.
+ * Added MPI_EVENT_SAS_SMP_ERROR and event data structure.
+ * Added MPI_EVENT_SAS_INIT_DEVICE_STATUS_CHANGE and event
+ * data structure.
+ * Added MPI_EVENT_SAS_INIT_TABLE_OVERFLOW and event
+ * data structure.
+ * Added MPI_EXT_IMAGE_TYPE_INITIALIZATION.
* --------------------------------------------------------------------------
*/
#define MPI_IOCFACTS_CAPABILITY_MULTICAST (0x00000100)
#define MPI_IOCFACTS_CAPABILITY_SCSIIO32 (0x00000200)
#define MPI_IOCFACTS_CAPABILITY_NO_SCSIIO16 (0x00000400)
+#define MPI_IOCFACTS_CAPABILITY_TLR (0x00000800)
/*****************************************************************************
/* Event */
-#define MPI_EVENT_NONE (0x00000000)
-#define MPI_EVENT_LOG_DATA (0x00000001)
-#define MPI_EVENT_STATE_CHANGE (0x00000002)
-#define MPI_EVENT_UNIT_ATTENTION (0x00000003)
-#define MPI_EVENT_IOC_BUS_RESET (0x00000004)
-#define MPI_EVENT_EXT_BUS_RESET (0x00000005)
-#define MPI_EVENT_RESCAN (0x00000006)
-#define MPI_EVENT_LINK_STATUS_CHANGE (0x00000007)
-#define MPI_EVENT_LOOP_STATE_CHANGE (0x00000008)
-#define MPI_EVENT_LOGOUT (0x00000009)
-#define MPI_EVENT_EVENT_CHANGE (0x0000000A)
-#define MPI_EVENT_INTEGRATED_RAID (0x0000000B)
-#define MPI_EVENT_SCSI_DEVICE_STATUS_CHANGE (0x0000000C)
-#define MPI_EVENT_ON_BUS_TIMER_EXPIRED (0x0000000D)
-#define MPI_EVENT_QUEUE_FULL (0x0000000E)
-#define MPI_EVENT_SAS_DEVICE_STATUS_CHANGE (0x0000000F)
-#define MPI_EVENT_SAS_SES (0x00000010)
-#define MPI_EVENT_PERSISTENT_TABLE_FULL (0x00000011)
-#define MPI_EVENT_SAS_PHY_LINK_STATUS (0x00000012)
-#define MPI_EVENT_SAS_DISCOVERY_ERROR (0x00000013)
-#define MPI_EVENT_IR_RESYNC_UPDATE (0x00000014)
-#define MPI_EVENT_IR2 (0x00000015)
-#define MPI_EVENT_SAS_DISCOVERY (0x00000016)
-#define MPI_EVENT_LOG_ENTRY_ADDED (0x00000021)
+#define MPI_EVENT_NONE (0x00000000)
+#define MPI_EVENT_LOG_DATA (0x00000001)
+#define MPI_EVENT_STATE_CHANGE (0x00000002)
+#define MPI_EVENT_UNIT_ATTENTION (0x00000003)
+#define MPI_EVENT_IOC_BUS_RESET (0x00000004)
+#define MPI_EVENT_EXT_BUS_RESET (0x00000005)
+#define MPI_EVENT_RESCAN (0x00000006)
+#define MPI_EVENT_LINK_STATUS_CHANGE (0x00000007)
+#define MPI_EVENT_LOOP_STATE_CHANGE (0x00000008)
+#define MPI_EVENT_LOGOUT (0x00000009)
+#define MPI_EVENT_EVENT_CHANGE (0x0000000A)
+#define MPI_EVENT_INTEGRATED_RAID (0x0000000B)
+#define MPI_EVENT_SCSI_DEVICE_STATUS_CHANGE (0x0000000C)
+#define MPI_EVENT_ON_BUS_TIMER_EXPIRED (0x0000000D)
+#define MPI_EVENT_QUEUE_FULL (0x0000000E)
+#define MPI_EVENT_SAS_DEVICE_STATUS_CHANGE (0x0000000F)
+#define MPI_EVENT_SAS_SES (0x00000010)
+#define MPI_EVENT_PERSISTENT_TABLE_FULL (0x00000011)
+#define MPI_EVENT_SAS_PHY_LINK_STATUS (0x00000012)
+#define MPI_EVENT_SAS_DISCOVERY_ERROR (0x00000013)
+#define MPI_EVENT_IR_RESYNC_UPDATE (0x00000014)
+#define MPI_EVENT_IR2 (0x00000015)
+#define MPI_EVENT_SAS_DISCOVERY (0x00000016)
+#define MPI_EVENT_SAS_BROADCAST_PRIMITIVE (0x00000017)
+#define MPI_EVENT_SAS_INIT_DEVICE_STATUS_CHANGE (0x00000018)
+#define MPI_EVENT_SAS_INIT_TABLE_OVERFLOW (0x00000019)
+#define MPI_EVENT_SAS_SMP_ERROR (0x0000001A)
+#define MPI_EVENT_LOG_ENTRY_ADDED (0x00000021)
/* AckRequired field values */
U8 PhyNum; /* 0Eh */
U8 Reserved1; /* 0Fh */
U64 SASAddress; /* 10h */
+ U8 LUN[8]; /* 18h */
+ U16 TaskTag; /* 20h */
+ U16 Reserved2; /* 22h */
} EVENT_DATA_SAS_DEVICE_STATUS_CHANGE,
MPI_POINTER PTR_EVENT_DATA_SAS_DEVICE_STATUS_CHANGE,
MpiEventDataSasDeviceStatusChange_t,
MPI_POINTER pMpiEventDataSasDeviceStatusChange_t;
/* MPI SAS Device Status Change Event data ReasonCode values */
-#define MPI_EVENT_SAS_DEV_STAT_RC_ADDED (0x03)
-#define MPI_EVENT_SAS_DEV_STAT_RC_NOT_RESPONDING (0x04)
-#define MPI_EVENT_SAS_DEV_STAT_RC_SMART_DATA (0x05)
-#define MPI_EVENT_SAS_DEV_STAT_RC_NO_PERSIST_ADDED (0x06)
-#define MPI_EVENT_SAS_DEV_STAT_RC_UNSUPPORTED (0x07)
-#define MPI_EVENT_SAS_DEV_STAT_RC_INTERNAL_DEVICE_RESET (0x08)
+#define MPI_EVENT_SAS_DEV_STAT_RC_ADDED (0x03)
+#define MPI_EVENT_SAS_DEV_STAT_RC_NOT_RESPONDING (0x04)
+#define MPI_EVENT_SAS_DEV_STAT_RC_SMART_DATA (0x05)
+#define MPI_EVENT_SAS_DEV_STAT_RC_NO_PERSIST_ADDED (0x06)
+#define MPI_EVENT_SAS_DEV_STAT_RC_UNSUPPORTED (0x07)
+#define MPI_EVENT_SAS_DEV_STAT_RC_INTERNAL_DEVICE_RESET (0x08)
+#define MPI_EVENT_SAS_DEV_STAT_RC_TASK_ABORT_INTERNAL (0x09)
+#define MPI_EVENT_SAS_DEV_STAT_RC_ABORT_TASK_SET_INTERNAL (0x0A)
+#define MPI_EVENT_SAS_DEV_STAT_RC_CLEAR_TASK_SET_INTERNAL (0x0B)
+#define MPI_EVENT_SAS_DEV_STAT_RC_QUERY_TASK_INTERNAL (0x0C)
/* SCSI Event data for Queue Full event */
} EVENT_DATA_SAS_SES, MPI_POINTER PTR_EVENT_DATA_SAS_SES,
MpiEventDataSasSes_t, MPI_POINTER pMpiEventDataSasSes_t;
+/* SAS Broadcast Primitive Event data */
+
+typedef struct _EVENT_DATA_SAS_BROADCAST_PRIMITIVE
+{
+ U8 PhyNum; /* 00h */
+ U8 Port; /* 01h */
+ U8 PortWidth; /* 02h */
+ U8 Primitive; /* 03h */
+} EVENT_DATA_SAS_BROADCAST_PRIMITIVE,
+ MPI_POINTER PTR_EVENT_DATA_SAS_BROADCAST_PRIMITIVE,
+ MpiEventDataSasBroadcastPrimitive_t,
+ MPI_POINTER pMpiEventDataSasBroadcastPrimitive_t;
+
+#define MPI_EVENT_PRIMITIVE_CHANGE (0x01)
+#define MPI_EVENT_PRIMITIVE_EXPANDER (0x03)
+#define MPI_EVENT_PRIMITIVE_RESERVED2 (0x04)
+#define MPI_EVENT_PRIMITIVE_RESERVED3 (0x05)
+#define MPI_EVENT_PRIMITIVE_RESERVED4 (0x06)
+#define MPI_EVENT_PRIMITIVE_CHANGE0_RESERVED (0x07)
+#define MPI_EVENT_PRIMITIVE_CHANGE1_RESERVED (0x08)
+
/* SAS Phy Link Status Event data */
typedef struct _EVENT_DATA_SAS_PHY_LINK_STATUS
#define MPI_EVENT_DSCVRY_ERR_DS_MULTPL_PATHS (0x00000800)
#define MPI_EVENT_DSCVRY_ERR_DS_MAX_SATA_TARGETS (0x00001000)
+/* SAS SMP Error Event data */
+
+typedef struct _EVENT_DATA_SAS_SMP_ERROR
+{
+ U8 Status; /* 00h */
+ U8 Port; /* 01h */
+ U8 SMPFunctionResult; /* 02h */
+ U8 Reserved1; /* 03h */
+ U64 SASAddress; /* 04h */
+} EVENT_DATA_SAS_SMP_ERROR, MPI_POINTER PTR_EVENT_DATA_SAS_SMP_ERROR,
+ MpiEventDataSasSmpError_t, MPI_POINTER pMpiEventDataSasSmpError_t;
+
+/* defines for the Status field of the SAS SMP Error event */
+#define MPI_EVENT_SAS_SMP_FUNCTION_RESULT_VALID (0x00)
+#define MPI_EVENT_SAS_SMP_CRC_ERROR (0x01)
+#define MPI_EVENT_SAS_SMP_TIMEOUT (0x02)
+#define MPI_EVENT_SAS_SMP_NO_DESTINATION (0x03)
+#define MPI_EVENT_SAS_SMP_BAD_DESTINATION (0x04)
+
+/* SAS Initiator Device Status Change Event data */
+
+typedef struct _EVENT_DATA_SAS_INIT_DEV_STATUS_CHANGE
+{
+ U8 ReasonCode; /* 00h */
+ U8 Port; /* 01h */
+ U16 DevHandle; /* 02h */
+ U64 SASAddress; /* 04h */
+} EVENT_DATA_SAS_INIT_DEV_STATUS_CHANGE,
+ MPI_POINTER PTR_EVENT_DATA_SAS_INIT_DEV_STATUS_CHANGE,
+ MpiEventDataSasInitDevStatusChange_t,
+ MPI_POINTER pMpiEventDataSasInitDevStatusChange_t;
+
+/* defines for the ReasonCode field of the SAS Initiator Device Status Change event */
+#define MPI_EVENT_SAS_INIT_RC_ADDED (0x01)
+
+/* SAS Initiator Device Table Overflow Event data */
+
+typedef struct _EVENT_DATA_SAS_INIT_TABLE_OVERFLOW
+{
+ U8 MaxInit; /* 00h */
+ U8 CurrentInit; /* 01h */
+ U16 Reserved1; /* 02h */
+} EVENT_DATA_SAS_INIT_TABLE_OVERFLOW,
+ MPI_POINTER PTR_EVENT_DATA_SAS_INIT_TABLE_OVERFLOW,
+ MpiEventDataSasInitTableOverflow_t,
+ MPI_POINTER pMpiEventDataSasInitTableOverflow_t;
+
/*****************************************************************************
*
#define MPI_EXT_IMAGE_TYPE_FW (0x01)
#define MPI_EXT_IMAGE_TYPE_NVDATA (0x03)
#define MPI_EXT_IMAGE_TYPE_BOOTLOADER (0x04)
+#define MPI_EXT_IMAGE_TYPE_INITIALIZATION (0x05)
#endif
#ifndef IOPI_IOCLOGINFO_H_INCLUDED
#define IOPI_IOCLOGINFO_H_INCLUDED
+#define SAS_LOGINFO_NEXUS_LOSS 0x31170000
+#define SAS_LOGINFO_MASK 0xFFFF0000
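SAS_LOGINFO_MASK and SAS_LOGINFO_NEXUS_LOSS are intended as a mask-and-compare pair for classifying IOCLogInfo values. A minimal sketch of that test; the helper name is illustrative and not taken from the driver:

	static inline int sas_loginfo_is_nexus_loss(u32 log_info)
	{
		return (log_info & SAS_LOGINFO_MASK) == SAS_LOGINFO_NEXUS_LOSS;
	}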
/****************************************************************************/
/* IOC LOGINFO defines, 0x00000000 - 0x0FFFFFFF */
#define IOP_LOGINFO_CODE_CONFIG_INVALID_PAGE_DNM (0x00030500) /* Device Not Mapped */
#define IOP_LOGINFO_CODE_CONFIG_INVALID_PAGE_PERSIST (0x00030600) /* Persistent Page not found */
#define IOP_LOGINFO_CODE_CONFIG_INVALID_PAGE_DEFAULT (0x00030700) /* Default Page not found */
+
+#define IOP_LOGINFO_CODE_DIAG_MSG_ERROR (0x00040000) /* Error handling diag msg - or'd with diag status */
+
#define IOP_LOGINFO_CODE_TASK_TERMINATED (0x00050000)
#define IOP_LOGINFO_CODE_ENCL_MGMT_READ_ACTION_ERR0R (0x00060001) /* Read Action not supported for SEP msg */
#define PL_LOGINFO_CODE_IO_EXECUTED (0x00140000)
#define PL_LOGINFO_CODE_PERS_RESV_OUT_NOT_AFFIL_OWNER (0x00150000)
#define PL_LOGINFO_CODE_OPEN_TXDMA_ABORT (0x00160000)
+#define PL_LOGINFO_CODE_IO_DEVICE_MISSING_DELAY_RETRY (0x00170000)
#define PL_LOGINFO_SUB_CODE_OPEN_FAILURE (0x00000100)
#define PL_LOGINFO_SUB_CODE_OPEN_FAILURE_NO_DEST_TIMEOUT (0x00000101)
#define PL_LOGINFO_SUB_CODE_OPEN_FAILURE_ORR_TIMEOUT (0x0000011A) /* Open Reject (Retry) Timeout */
/****************************************************************************/
/* IR LOGINFO_CODE defines, valid if IOC_LOGINFO_ORIGINATOR = IR */
/****************************************************************************/
-#define IR_LOGINFO_CODE_UNUSED1 (0x00010000)
-#define IR_LOGINFO_CODE_UNUSED2 (0x00020000)
+#define IR_LOGINFO_RAID_ACTION_ERROR (0x00010000)
+#define IR_LOGINFO_CODE_UNUSED2 (0x00020000)
+
+/* Amount of information passed down for Create Volume is too large */
+#define IR_LOGINFO_VOLUME_CREATE_INVALID_LENGTH (0x00010001)
+/* Creation of duplicate volume attempted (Bus/Target ID checked) */
+#define IR_LOGINFO_VOLUME_CREATE_DUPLICATE (0x00010002)
+/* Creation failed due to maximum number of supported volumes exceeded */
+#define IR_LOGINFO_VOLUME_CREATE_NO_SLOTS (0x00010003)
+/* Creation failed due to DMA error in trying to read from host */
+#define IR_LOGINFO_VOLUME_CREATE_DMA_ERROR (0x00010004)
+/* Creation failed due to invalid volume type passed down */
+#define IR_LOGINFO_VOLUME_CREATE_INVALID_VOLUME_TYPE (0x00010005)
+/* Creation failed due to error reading MFG Page 4 */
+#define IR_LOGINFO_VOLUME_MFG_PAGE4_ERROR (0x00010006)
+/* Creation failed when trying to create internal structures */
+#define IR_LOGINFO_VOLUME_INTERNAL_CONFIG_STRUCTURE_ERROR (0x00010007)
+
+/* Activation failed due to trying to activate an already active volume */
+#define IR_LOGINFO_VOLUME_ACTIVATING_AN_ACTIVE_VOLUME (0x00010010)
+/* Activation failed due to trying to activate an unsupported volume type */
+#define IR_LOGINFO_VOLUME_ACTIVATING_INVALID_VOLUME_TYPE (0x00010011)
+/* Activation failed due to trying to activate too many volumes */
+#define IR_LOGINFO_VOLUME_ACTIVATING_TOO_MANY_VOLUMES (0x00010012)
+/* Activation failed due to Volume ID in use already */
+#define IR_LOGINFO_VOLUME_ACTIVATING_VOLUME_ID_IN_USE (0x00010013)
+/* Activation failed, call to activateVolume returned failure */
+#define IR_LOGINFO_VOLUME_ACTIVATE_VOLUME_FAILED (0x00010014)
+/* Activation failed trying to import the volume */
+#define IR_LOGINFO_VOLUME_ACTIVATING_IMPORT_VOLUME_FAILED (0x00010015)
+
+/* Phys Disk failed, too many phys disks */
+#define IR_LOGINFO_PHYSDISK_CREATE_TOO_MANY_DISKS (0x00010020)
+/* Amount of information passed down for Create Physdisk is too large */
+#define IR_LOGINFO_PHYSDISK_CREATE_INVALID_LENGTH (0x00010021)
+/* Creation failed due to DMA error in trying to read from host */
+#define IR_LOGINFO_PHYSDISK_CREATE_DMA_ERROR (0x00010022)
+/* Creation failed due to invalid Bus TargetID passed down */
+#define IR_LOGINFO_PHYSDISK_CREATE_BUS_TID_INVALID (0x00010023)
+/* Creation failed due to error in creating RAID Phys Disk Config Page */
+#define IR_LOGINFO_PHYSDISK_CREATE_CONFIG_PAGE_ERROR (0x00010024)
+
+
+/* Compatibility Error : IR Disabled */
+#define IR_LOGINFO_COMPAT_ERROR_RAID_DISABLED (0x00010030)
+/* Compatibility Error : Inquiry Command failed */
+#define IR_LOGINFO_COMPAT_ERROR_INQUIRY_FAILED (0x00010031)
+/* Compatibility Error : Device not direct access device */
+#define IR_LOGINFO_COMPAT_ERROR_NOT_DIRECT_ACCESS (0x00010032)
+/* Compatibility Error : Removable device found */
+#define IR_LOGINFO_COMPAT_ERROR_REMOVABLE_FOUND (0x00010033)
+/* Compatibility Error : Device SCSI Version not 2 or higher */
+#define IR_LOGINFO_COMPAT_ERROR_NEED_SCSI_2_OR_HIGHER (0x00010034)
+/* Compatibility Error : SATA device, 48 BIT LBA not supported */
+#define IR_LOGINFO_COMPAT_ERROR_SATA_48BIT_LBA_NOT_SUPPORTED (0x00010035)
+/* Compatibility Error : Device does not have 512 byte block sizes */
+#define IR_LOGINFO_COMPAT_ERROR_DEVICE_NOT_512_BYTE_BLOCK (0x00010036)
+/* Compatibility Error : Volume Type check failed */
+#define IR_LOGINFO_COMPAT_ERROR_VOLUME_TYPE_CHECK_FAILED (0x00010037)
+/* Compatibility Error : Volume Type is unsupported by FW */
+#define IR_LOGINFO_COMPAT_ERROR_UNSUPPORTED_VOLUME_TYPE (0x00010038)
+/* Compatibility Error : Disk drive too small for use in volume */
+#define IR_LOGINFO_COMPAT_ERROR_DISK_TOO_SMALL (0x00010039)
+/* Compatibility Error : Phys disk for Create Volume not found */
+#define IR_LOGINFO_COMPAT_ERROR_PHYS_DISK_NOT_FOUND (0x0001003A)
+/* Compatibility Error : membership count error, too many or too few disks for volume type */
+#define IR_LOGINFO_COMPAT_ERROR_MEMBERSHIP_COUNT (0x0001003B)
+/* Compatibility Error : Disk stripe sizes must be 64KB */
+#define IR_LOGINFO_COMPAT_ERROR_NON_64K_STRIPE_SIZE (0x0001003C)
+/* Compatibility Error : IME size limited to < 2TB */
+#define IR_LOGINFO_COMPAT_ERROR_IME_VOL_NOT_CURRENTLY_SUPPORTED (0x0001003D)
+
/****************************************************************************/
-/* Defines for convienence */
+/* Defines for convenience */
/****************************************************************************/
#define IOC_LOGINFO_PREFIX_IOP ((MPI_IOCLOGINFO_TYPE_SAS << MPI_IOCLOGINFO_TYPE_SHIFT) | IOC_LOGINFO_ORIGINATOR_IOP)
#define IOC_LOGINFO_PREFIX_PL ((MPI_IOCLOGINFO_TYPE_SAS << MPI_IOCLOGINFO_TYPE_SHIFT) | IOC_LOGINFO_ORIGINATOR_PL)
* Title: MPI Serial Attached SCSI structures and definitions
* Creation Date: August 19, 2004
*
- * mpi_sas.h Version: 01.05.02
+ * mpi_sas.h Version: 01.05.03
*
* Version History
* ---------------
* 08-30-05 01.05.02 Added DeviceInfo bit for SEP.
* Added PrimFlags and Primitive field to SAS IO Unit
* Control request, and added a new operation code.
+ * 03-27-06 01.05.03 Added Force Full Discovery, Transmit Port Select Signal,
+ * and Remove Device operations to SAS IO Unit Control.
+ * Added DevHandle field to SAS IO Unit Control request and
+ * reply.
* --------------------------------------------------------------------------
*/
U8 Reserved1; /* 01h */
U8 ChainOffset; /* 02h */
U8 Function; /* 03h */
- U16 Reserved2; /* 04h */
+ U16 DevHandle; /* 04h */
U8 Reserved3; /* 06h */
U8 MsgFlags; /* 07h */
U32 MsgContext; /* 08h */
#define MPI_SAS_OP_PHY_CLEAR_ERROR_LOG (0x08)
#define MPI_SAS_OP_MAP_CURRENT (0x09)
#define MPI_SAS_OP_SEND_PRIMITIVE (0x0A)
+#define MPI_SAS_OP_FORCE_FULL_DISCOVERY (0x0B)
+#define MPI_SAS_OP_TRANSMIT_PORT_SELECT_SIGNAL (0x0C)
+#define MPI_SAS_OP_TRANSMIT_REMOVE_DEVICE (0x0D)
/* values for the PrimFlags field */
#define MPI_SAS_PRIMFLAGS_SINGLE (0x08)
U8 Reserved1; /* 01h */
U8 MsgLength; /* 02h */
U8 Function; /* 03h */
- U16 Reserved2; /* 04h */
+ U16 DevHandle; /* 04h */
U8 Reserved3; /* 06h */
U8 MsgFlags; /* 07h */
U32 MsgContext; /* 08h */
* Title: MPI Target mode messages and structures
* Creation Date: June 22, 2000
*
- * mpi_targ.h Version: 01.05.05
+ * mpi_targ.h Version: 01.05.06
*
* Version History
* ---------------
* 02-22-05 01.05.03 Changed a comment.
* 03-11-05 01.05.04 Removed TargetAssistExtended Request.
* 06-24-05 01.05.05 Added TargetAssistExtended structures and defines.
+ * 03-27-06 01.05.06 Added a comment.
* --------------------------------------------------------------------------
*/
#define TARGET_ASSIST_FLAGS_CONFIRMED (0x08)
#define TARGET_ASSIST_FLAGS_REPOST_CMD_BUFFER (0x80)
-
+/* Standard Target Mode Reply message */
typedef struct _MSG_TARGET_ERROR_REPLY
{
U16 Reserved; /* 00h */
mpt_interrupt(int irq, void *bus_id, struct pt_regs *r)
{
MPT_ADAPTER *ioc = bus_id;
- u32 pa;
+ u32 pa = CHIPREG_READ32_dmasync(&ioc->chip->ReplyFifo);
+
+ if (pa == 0xFFFFFFFF)
+ return IRQ_NONE;
/*
* Drain the reply FIFO!
*/
- while (1) {
- pa = CHIPREG_READ32_dmasync(&ioc->chip->ReplyFifo);
- if (pa == 0xFFFFFFFF)
- return IRQ_HANDLED;
- else if (pa & MPI_ADDRESS_REPLY_A_BIT)
+ do {
+ if (pa & MPI_ADDRESS_REPLY_A_BIT)
mpt_reply(ioc, pa);
else
mpt_turbo_reply(ioc, pa);
- }
+ pa = CHIPREG_READ32_dmasync(&ioc->chip->ReplyFifo);
+ } while (pa != 0xFFFFFFFF);
return IRQ_HANDLED;
}
port = psize = 0;
for (ii=0; ii < DEVICE_COUNT_RESOURCE; ii++) {
if (pci_resource_flags(pdev, ii) & PCI_BASE_ADDRESS_SPACE_IO) {
+ if (psize)
+ continue;
/* Get I/O space! */
port = pci_resource_start(pdev, ii);
psize = pci_resource_len(pdev,ii);
} else {
+ if (msize)
+ continue;
/* Get memmap */
mem_phys = pci_resource_start(pdev, ii);
msize = pci_resource_len(pdev,ii);
- break;
}
}
ioc->mem_size = msize;
- if (ii == DEVICE_COUNT_RESOURCE) {
- printk(KERN_ERR MYNAM ": ERROR - MPT adapter has no memory regions defined!\n");
- kfree(ioc);
- return -EINVAL;
- }
-
- dinitprintk((KERN_INFO MYNAM ": MPT adapter @ %lx, msize=%dd bytes\n", mem_phys, msize));
- dinitprintk((KERN_INFO MYNAM ": (port i/o @ %lx, psize=%dd bytes)\n", port, psize));
-
mem = NULL;
/* Get logical ptr for PciMem0 space */
/*mem = ioremap(mem_phys, msize);*/
- mem = ioremap(mem_phys, 0x100);
+ mem = ioremap(mem_phys, msize);
if (mem == NULL) {
printk(KERN_ERR MYNAM ": ERROR - Unable to map adapter memory!\n");
kfree(ioc);
ioc->bus_type = SAS;
ioc->errata_flag_1064 = 1;
}
- else if (pdev->device == MPI_MANUFACTPAGE_DEVID_SAS1066) {
- ioc->prod_name = "LSISAS1066";
- ioc->bus_type = SAS;
- ioc->errata_flag_1064 = 1;
- }
else if (pdev->device == MPI_MANUFACTPAGE_DEVID_SAS1068) {
ioc->prod_name = "LSISAS1068";
ioc->bus_type = SAS;
ioc->prod_name = "LSISAS1064E";
ioc->bus_type = SAS;
}
- else if (pdev->device == MPI_MANUFACTPAGE_DEVID_SAS1066E) {
- ioc->prod_name = "LSISAS1066E";
- ioc->bus_type = SAS;
- }
else if (pdev->device == MPI_MANUFACTPAGE_DEVID_SAS1068E) {
ioc->prod_name = "LSISAS1068E";
ioc->bus_type = SAS;
}
+ else if (pdev->device == MPI_MANUFACTPAGE_DEVID_SAS1078) {
+ ioc->prod_name = "LSISAS1078";
+ ioc->bus_type = SAS;
+ }
if (ioc->errata_flag_1064)
pci_disable_io_access(pdev);
printk(MYIOC_s_INFO_FMT "PCI-MSI enabled\n",
ioc->name);
rc = request_irq(ioc->pcidev->irq, mpt_interrupt,
- SA_SHIRQ, ioc->name, ioc);
+ IRQF_SHARED, ioc->name, ioc);
if (rc < 0) {
printk(MYIOC_s_ERR_FMT "Unable to allocate "
"interrupt %d!\n", ioc->name,
u32 diag1val = 0;
#endif
+ if (ioc->pcidev->device == MPI_MANUFACTPAGE_DEVID_SAS1078) {
+ drsprintk((MYIOC_s_WARN_FMT "%s: Doorbell=%p; 1078 reset "
+ "address=%p\n", ioc->name, __FUNCTION__,
+ &ioc->chip->Doorbell, &ioc->chip->Reset_1078));
+ CHIPREG_WRITE32(&ioc->chip->Reset_1078, 0x07);
+ if (sleepFlag == CAN_SLEEP)
+ msleep(1);
+ else
+ mdelay(1);
+
+ for (count = 0; count < 60; count ++) {
+ doorbell = CHIPREG_READ32(&ioc->chip->Doorbell);
+ doorbell &= MPI_IOC_STATE_MASK;
+
+ drsprintk((MYIOC_s_INFO_FMT
+ "looking for READY STATE: doorbell=%x"
+ " count=%d\n",
+ ioc->name, doorbell, count));
+ if (doorbell == MPI_IOC_STATE_READY) {
+ return 0;
+ }
+
+ /* wait 1 sec */
+ if (sleepFlag == CAN_SLEEP)
+ msleep(1000);
+ else
+ mdelay(1000);
+ }
+ return -1;
+ }
+
/* Clear any existing interrupts */
CHIPREG_WRITE32(&ioc->chip->IntStatus, 0);
#define COPYRIGHT "Copyright (c) 1999-2005 " MODULEAUTHOR
#endif
-#define MPT_LINUX_VERSION_COMMON "3.03.10"
-#define MPT_LINUX_PACKAGE_NAME "@(#)mptlinux-3.03.10"
+#define MPT_LINUX_VERSION_COMMON "3.04.00"
+#define MPT_LINUX_PACKAGE_NAME "@(#)mptlinux-3.04.00"
#define WHAT_MAGIC_STRING "@" "(" "#" ")"
#define show_mptmod_ver(s,ver) \
u32 HostIndex; /* 50 Host Index register */
u32 Reserved4[15]; /* 54-8F */
u32 Fubar; /* 90 For Fubar usage */
- u32 Reserved5[27]; /* 94-FF */
+ u32 Reserved5[1050];/* 94-10F8 */
+ u32 Reset_1078; /* 10FC Reset 1078 */
} SYSIF_REGS;
/*
u8 negoFlags; /* bit field, see above */
u8 raidVolume; /* set, if RAID Volume */
u8 type; /* byte 0 of Inquiry data */
+ u8 deleted; /* target in process of being removed */
u32 num_luns;
u32 luns[8]; /* Max LUNs is 256 */
} VirtTarget;
struct mutex sas_discovery_mutex;
u8 sas_discovery_runtime;
u8 sas_discovery_ignore_events;
+ u16 handle;
int sas_index; /* index referencing */
MPT_SAS_MGMT sas_mgmt;
int num_ports;
- struct work_struct mptscsih_persistTask;
+ struct work_struct sas_persist_task;
struct work_struct fc_setup_reset_work;
struct list_head fc_rports;
struct work_struct fc_rescan_work;
char fc_rescan_work_q_name[KOBJ_NAME_LEN];
struct workqueue_struct *fc_rescan_work_q;
+ u8 port_serial_number;
} MPT_ADAPTER;
/*
#define DBG_DUMP_REQUEST_FRAME_HDR(mfp)
#endif
+// debug sas wide ports
+#ifdef MPT_DEBUG_SAS_WIDE
+#define dsaswideprintk(x) printk x
+#else
+#define dsaswideprintk(x)
+#endif
+
/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
*/
static struct pci_device_id mptfc_pci_table[] = {
- { PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_FC909,
+ { PCI_VENDOR_ID_LSI_LOGIC, MPI_MANUFACTPAGE_DEVICEID_FC909,
PCI_ANY_ID, PCI_ANY_ID },
- { PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_FC919,
+ { PCI_VENDOR_ID_LSI_LOGIC, MPI_MANUFACTPAGE_DEVICEID_FC919,
PCI_ANY_ID, PCI_ANY_ID },
- { PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_FC929,
+ { PCI_VENDOR_ID_LSI_LOGIC, MPI_MANUFACTPAGE_DEVICEID_FC929,
PCI_ANY_ID, PCI_ANY_ID },
- { PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_FC919X,
+ { PCI_VENDOR_ID_LSI_LOGIC, MPI_MANUFACTPAGE_DEVICEID_FC919X,
PCI_ANY_ID, PCI_ANY_ID },
- { PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_FC929X,
+ { PCI_VENDOR_ID_LSI_LOGIC, MPI_MANUFACTPAGE_DEVICEID_FC929X,
PCI_ANY_ID, PCI_ANY_ID },
- { PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_FC939X,
+ { PCI_VENDOR_ID_LSI_LOGIC, MPI_MANUFACTPAGE_DEVICEID_FC939X,
PCI_ANY_ID, PCI_ANY_ID },
- { PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_FC949X,
+ { PCI_VENDOR_ID_LSI_LOGIC, MPI_MANUFACTPAGE_DEVICEID_FC949X,
PCI_ANY_ID, PCI_ANY_ID },
- { PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_FC949ES,
+ { PCI_VENDOR_ID_LSI_LOGIC, MPI_MANUFACTPAGE_DEVICEID_FC949E,
PCI_ANY_ID, PCI_ANY_ID },
{0} /* Terminating entry */
};
#include <linux/errno.h>
#include <linux/sched.h>
#include <linux/workqueue.h>
+#include <linux/delay.h> /* for mdelay */
+#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>
#include <scsi/scsi_transport_sas.h>
+#include <scsi/scsi_dbg.h>
#include "mptbase.h"
#include "mptscsih.h"
u32 device_info; /* bitfield detailed info about this device */
};
+/*
+ * Specific details on ports, wide/narrow
+ */
+struct mptsas_portinfo_details{
+ u8 port_id; /* port number provided to transport */
+ u16 num_phys; /* number of phys belonging to this port */
+ u64 phy_bitmask; /* TODO, extend support for 255 phys */
+ struct sas_rphy *rphy; /* transport layer rphy object */
+ struct sas_port *port; /* transport layer port object */
+ struct scsi_target *starget;
+ struct mptsas_portinfo *port_info;
+};
+
struct mptsas_phyinfo {
u8 phy_id; /* phy index */
- u8 port_id; /* port number this phy is part of */
+ u8 port_id; /* firmware port identifier */
u8 negotiated_link_rate; /* nego'd link rate for this phy */
u8 hw_link_rate; /* hardware max/min phys link rate */
u8 programmed_link_rate; /* programmed max/min phy link rate */
+ u8 sas_port_add_phy; /* flag to request sas_port_add_phy*/
struct mptsas_devinfo identify; /* point to phy device info */
struct mptsas_devinfo attached; /* point to attached device info */
- struct sas_phy *phy;
- struct sas_rphy *rphy;
- struct scsi_target *starget;
+ struct sas_phy *phy; /* transport layer phy object */
+ struct mptsas_portinfo *portinfo;
+ struct mptsas_portinfo_details * port_details;
};
struct mptsas_portinfo {
struct list_head list;
u16 handle; /* unique id to address this */
- u8 num_phys; /* number of phys */
+ u16 num_phys; /* number of phys */
struct mptsas_phyinfo *phy_info;
};
u8 sep_channel; /* SEP channel logical channel id */
};
-#ifdef SASDEBUG
+#ifdef MPT_DEBUG_SAS
static void mptsas_print_phy_data(MPI_SAS_IO_UNIT0_PHY_DATA *phy_data)
{
printk("---- IO UNIT PAGE 0 ------------\n");
static inline int
mptsas_is_end_device(struct mptsas_devinfo * attached)
{
- if ((attached->handle) &&
+ if ((attached->sas_address) &&
(attached->device_info &
MPI_SAS_DEVICE_INFO_END_DEVICE) &&
((attached->device_info &
return 0;
}
+/* no mutex */
+static void
+mptsas_port_delete(struct mptsas_portinfo_details * port_details)
+{
+ struct mptsas_portinfo *port_info;
+ struct mptsas_phyinfo *phy_info;
+ u8 i;
+
+ if (!port_details)
+ return;
+
+ port_info = port_details->port_info;
+ phy_info = port_info->phy_info;
+
+ dsaswideprintk((KERN_DEBUG "%s: [%p]: port=%02d num_phys=%02d "
+ "bitmask=0x%016llX\n",
+ __FUNCTION__, port_details, port_details->port_id,
+ port_details->num_phys, port_details->phy_bitmask));
+
+ for (i = 0; i < port_info->num_phys; i++, phy_info++) {
+ if(phy_info->port_details != port_details)
+ continue;
+ memset(&phy_info->attached, 0, sizeof(struct mptsas_devinfo));
+ phy_info->port_details = NULL;
+ }
+ kfree(port_details);
+}
+
+static inline struct sas_rphy *
+mptsas_get_rphy(struct mptsas_phyinfo *phy_info)
+{
+ if (phy_info->port_details)
+ return phy_info->port_details->rphy;
+ else
+ return NULL;
+}
+
+static inline void
+mptsas_set_rphy(struct mptsas_phyinfo *phy_info, struct sas_rphy *rphy)
+{
+ if (phy_info->port_details) {
+ phy_info->port_details->rphy = rphy;
+ dsaswideprintk((KERN_DEBUG "sas_rphy_add: rphy=%p\n", rphy));
+ }
+
+#ifdef MPT_DEBUG_SAS_WIDE
+ if (rphy) {
+ dev_printk(KERN_DEBUG, &rphy->dev, "add:");
+ printk("rphy=%p release=%p\n",
+ rphy, rphy->dev.release);
+ }
+#endif
+}
+
+static inline struct sas_port *
+mptsas_get_port(struct mptsas_phyinfo *phy_info)
+{
+ if (phy_info->port_details)
+ return phy_info->port_details->port;
+ else
+ return NULL;
+}
+
+static inline void
+mptsas_set_port(struct mptsas_phyinfo *phy_info, struct sas_port *port)
+{
+ if (phy_info->port_details)
+ phy_info->port_details->port = port;
+
+#ifdef MPT_DEBUG_SAS_WIDE
+ if (port) {
+ dev_printk(KERN_DEBUG, &port->dev, "add: ");
+ printk("port=%p release=%p\n",
+ port, port->dev.release);
+ }
+#endif
+}
+
+static inline struct scsi_target *
+mptsas_get_starget(struct mptsas_phyinfo *phy_info)
+{
+ if (phy_info->port_details)
+ return phy_info->port_details->starget;
+ else
+ return NULL;
+}
+
+static inline void
+mptsas_set_starget(struct mptsas_phyinfo *phy_info, struct scsi_target *
+starget)
+{
+ if (phy_info->port_details)
+ phy_info->port_details->starget = starget;
+}
+
+
+/*
+ * mptsas_setup_wide_ports
+ *
+ * Updates for new and existing narrow/wide port configuration
+ * in the sas_topology
+ */
+static void
+mptsas_setup_wide_ports(MPT_ADAPTER *ioc, struct mptsas_portinfo *port_info)
+{
+ struct mptsas_portinfo_details * port_details;
+ struct mptsas_phyinfo *phy_info, *phy_info_cmp;
+ u64 sas_address;
+ int i, j;
+
+ mutex_lock(&ioc->sas_topology_mutex);
+
+ phy_info = port_info->phy_info;
+ for (i = 0 ; i < port_info->num_phys ; i++, phy_info++) {
+ if (phy_info->attached.handle)
+ continue;
+ port_details = phy_info->port_details;
+ if (!port_details)
+ continue;
+ if (port_details->num_phys < 2)
+ continue;
+ /*
+ * Removing a phy from a port, letting the last
+ * phy be removed by firmware events.
+ */
+ dsaswideprintk((KERN_DEBUG
+ "%s: [%p]: port=%d deleting phy = %d\n",
+ __FUNCTION__, port_details,
+ port_details->port_id, i));
+ port_details->num_phys--;
+ port_details->phy_bitmask &= ~ (1 << phy_info->phy_id);
+ memset(&phy_info->attached, 0, sizeof(struct mptsas_devinfo));
+ sas_port_delete_phy(port_details->port, phy_info->phy);
+ phy_info->port_details = NULL;
+ }
+
+ /*
+ * Populate and refresh the tree
+ */
+ phy_info = port_info->phy_info;
+ for (i = 0 ; i < port_info->num_phys ; i++, phy_info++) {
+ sas_address = phy_info->attached.sas_address;
+ dsaswideprintk((KERN_DEBUG "phy_id=%d sas_address=0x%018llX\n",
+ i, sas_address));
+ if (!sas_address)
+ continue;
+ port_details = phy_info->port_details;
+ /*
+ * Forming a port
+ */
+ if (!port_details) {
+ port_details = kzalloc(sizeof(*port_details),
+ GFP_KERNEL);
+ if (!port_details)
+ goto out;
+ port_details->num_phys = 1;
+ port_details->port_info = port_info;
+ port_details->port_id = ioc->port_serial_number++;
+ if (phy_info->phy_id < 64 )
+ port_details->phy_bitmask |=
+ (1 << phy_info->phy_id);
+ phy_info->sas_port_add_phy=1;
+ dsaswideprintk((KERN_DEBUG "\t\tForming port\n\t\t"
+ "phy_id=%d sas_address=0x%018llX\n",
+ i, sas_address));
+ phy_info->port_details = port_details;
+ }
+
+ if (i == port_info->num_phys - 1)
+ continue;
+ phy_info_cmp = &port_info->phy_info[i + 1];
+ for (j = i + 1 ; j < port_info->num_phys ; j++,
+ phy_info_cmp++) {
+ if (!phy_info_cmp->attached.sas_address)
+ continue;
+ if (sas_address != phy_info_cmp->attached.sas_address)
+ continue;
+ if (phy_info_cmp->port_details == port_details )
+ continue;
+ dsaswideprintk((KERN_DEBUG
+ "\t\tphy_id=%d sas_address=0x%018llX\n",
+ j, phy_info_cmp->attached.sas_address));
+ if (phy_info_cmp->port_details) {
+ port_details->rphy =
+ mptsas_get_rphy(phy_info_cmp);
+ port_details->port =
+ mptsas_get_port(phy_info_cmp);
+ port_details->starget =
+ mptsas_get_starget(phy_info_cmp);
+ port_details->port_id =
+ phy_info_cmp->port_details->port_id;
+ port_details->num_phys =
+ phy_info_cmp->port_details->num_phys;
+// port_info->port_serial_number--;
+ ioc->port_serial_number--;
+ if (!phy_info_cmp->port_details->num_phys)
+ kfree(phy_info_cmp->port_details);
+ } else
+ phy_info_cmp->sas_port_add_phy=1;
+ /*
+ * Adding a phy to a port
+ */
+ phy_info_cmp->port_details = port_details;
+ if (phy_info_cmp->phy_id < 64 )
+ port_details->phy_bitmask |=
+ (1 << phy_info_cmp->phy_id);
+ port_details->num_phys++;
+ }
+ }
+
+ out:
+
+#ifdef MPT_DEBUG_SAS_WIDE
+ for (i = 0; i < port_info->num_phys; i++) {
+ port_details = port_info->phy_info[i].port_details;
+ if (!port_details)
+ continue;
+ dsaswideprintk((KERN_DEBUG
+ "%s: [%p]: phy_id=%02d port_id=%02d num_phys=%02d "
+ "bitmask=0x%016llX\n",
+ __FUNCTION__,
+ port_details, i, port_details->port_id,
+ port_details->num_phys, port_details->phy_bitmask));
+ dsaswideprintk((KERN_DEBUG"\t\tport = %p rphy=%p\n",
+ port_details->port, port_details->rphy));
+ }
+ dsaswideprintk((KERN_DEBUG"\n"));
+#endif
+ mutex_unlock(&ioc->sas_topology_mutex);
+}
+
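The wide-port handling above groups phys by the SAS address of the device they see: phys reporting the same attached address end up sharing one port_details object and therefore one transport-layer port. As a rough standalone sketch of just that grouping step (the struct phy type, its fields and group_phys_into_ports() are simplified illustrations, not driver code):

#include <stdio.h>
#include <stdint.h>

/* Simplified stand-ins for the driver structures. */
struct phy {
	uint64_t attached_sas_address;	/* 0 = nothing attached */
	int      port_id;		/* -1 = not assigned yet */
};

/*
 * Assign a port id to every phy; phys that see the same attached
 * SAS address land in the same (wide) port.
 */
static int group_phys_into_ports(struct phy *phys, int num_phys)
{
	int i, j, next_port = 0;

	for (i = 0; i < num_phys; i++) {
		if (!phys[i].attached_sas_address || phys[i].port_id >= 0)
			continue;
		phys[i].port_id = next_port;
		for (j = i + 1; j < num_phys; j++)
			if (phys[j].attached_sas_address ==
			    phys[i].attached_sas_address)
				phys[j].port_id = next_port;
		next_port++;
	}
	return next_port;	/* number of ports formed */
}

int main(void)
{
	struct phy phys[] = {
		{ 0x5000c50000000001ULL, -1 },
		{ 0x5000c50000000001ULL, -1 },	/* same target: wide port */
		{ 0x5000c50000000002ULL, -1 },
		{ 0,                     -1 },	/* nothing attached */
	};
	int n = group_phys_into_ports(phys, 4);

	printf("formed %d port(s)\n", n);	/* prints: formed 2 port(s) */
	return 0;
}

Run on the four sample phys this prints "formed 2 port(s)": the first two phys collapse into a single wide port, which is the effect mptsas_setup_wide_ports() achieves on the real topology.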
+static void
+mptsas_target_reset(MPT_ADAPTER *ioc, VirtTarget * vtarget)
+{
+ MPT_SCSI_HOST *hd = (MPT_SCSI_HOST *)ioc->sh->hostdata;
+
+ if (mptscsih_TMHandler(hd,
+ MPI_SCSITASKMGMT_TASKTYPE_TARGET_RESET,
+ vtarget->bus_id, vtarget->target_id, 0, 0, 5) < 0) {
+ hd->tmPending = 0;
+ hd->tmState = TM_STATE_NONE;
+ printk(MYIOC_s_WARN_FMT
+ "Error processing TaskMgmt id=%d TARGET_RESET\n",
+ ioc->name, vtarget->target_id);
+ }
+}
+
static int
mptsas_sas_enclosure_pg0(MPT_ADAPTER *ioc, struct mptsas_enclosure *enclosure,
u32 form, u32 form_specific)
return mptscsih_slave_configure(sdev);
}
-/*
- * This is pretty ugly. We will be able to seriously clean it up
- * once the DV code in mptscsih goes away and we can properly
- * implement ->target_alloc.
- */
+static int
+mptsas_target_alloc(struct scsi_target *starget)
+{
+ struct Scsi_Host *host = dev_to_shost(&starget->dev);
+ MPT_SCSI_HOST *hd = (MPT_SCSI_HOST *)host->hostdata;
+ VirtTarget *vtarget;
+ u32 target_id;
+ u32 channel;
+ struct sas_rphy *rphy;
+ struct mptsas_portinfo *p;
+ int i;
+
+ vtarget = kzalloc(sizeof(VirtTarget), GFP_KERNEL);
+ if (!vtarget)
+ return -ENOMEM;
+
+ vtarget->starget = starget;
+ vtarget->ioc_id = hd->ioc->id;
+ vtarget->tflags = MPT_TARGET_FLAGS_Q_YES|MPT_TARGET_FLAGS_VALID_INQUIRY;
+
+ target_id = starget->id;
+ channel = 0;
+
+ hd->Targets[target_id] = vtarget;
+
+ /*
+ * RAID volumes placed beyond the last expected port.
+ */
+ if (starget->channel == hd->ioc->num_ports)
+ goto out;
+
+ rphy = dev_to_rphy(starget->dev.parent);
+ mutex_lock(&hd->ioc->sas_topology_mutex);
+ list_for_each_entry(p, &hd->ioc->sas_topology, list) {
+ for (i = 0; i < p->num_phys; i++) {
+ if (p->phy_info[i].attached.sas_address !=
+ rphy->identify.sas_address)
+ continue;
+ target_id = p->phy_info[i].attached.id;
+ channel = p->phy_info[i].attached.channel;
+ mptsas_set_starget(&p->phy_info[i], starget);
+
+ /*
+ * Exposing hidden raid components
+ */
+ if (mptscsih_is_phys_disk(hd->ioc, target_id)) {
+ target_id = mptscsih_raid_id_to_num(hd,
+ target_id);
+ vtarget->tflags |=
+ MPT_TARGET_FLAGS_RAID_COMPONENT;
+ }
+ mutex_unlock(&hd->ioc->sas_topology_mutex);
+ goto out;
+ }
+ }
+ mutex_unlock(&hd->ioc->sas_topology_mutex);
+
+ kfree(vtarget);
+ return -ENXIO;
+
+ out:
+ vtarget->target_id = target_id;
+ vtarget->bus_id = channel;
+ starget->hostdata = vtarget;
+ return 0;
+}
+
+static void
+mptsas_target_destroy(struct scsi_target *starget)
+{
+ struct Scsi_Host *host = dev_to_shost(&starget->dev);
+ MPT_SCSI_HOST *hd = (MPT_SCSI_HOST *)host->hostdata;
+ struct sas_rphy *rphy;
+ struct mptsas_portinfo *p;
+ int i;
+
+ if (!starget->hostdata)
+ return;
+
+ if (starget->channel == hd->ioc->num_ports)
+ goto out;
+
+ rphy = dev_to_rphy(starget->dev.parent);
+ list_for_each_entry(p, &hd->ioc->sas_topology, list) {
+ for (i = 0; i < p->num_phys; i++) {
+ if (p->phy_info[i].attached.sas_address !=
+ rphy->identify.sas_address)
+ continue;
+ mptsas_set_starget(&p->phy_info[i], NULL);
+ goto out;
+ }
+ }
+
+ out:
+ kfree(starget->hostdata);
+ starget->hostdata = NULL;
+}
+
+
static int
mptsas_slave_alloc(struct scsi_device *sdev)
{
MPT_SCSI_HOST *hd = (MPT_SCSI_HOST *)host->hostdata;
struct sas_rphy *rphy;
struct mptsas_portinfo *p;
- VirtTarget *vtarget;
VirtDevice *vdev;
struct scsi_target *starget;
- u32 target_id;
- int i;
+ int i;
vdev = kzalloc(sizeof(VirtDevice), GFP_KERNEL);
if (!vdev) {
- printk(MYIOC_s_ERR_FMT "slave_alloc kmalloc(%zd) FAILED!\n",
+ printk(MYIOC_s_ERR_FMT "slave_alloc kzalloc(%zd) FAILED!\n",
hd->ioc->name, sizeof(VirtDevice));
return -ENOMEM;
}
- sdev->hostdata = vdev;
starget = scsi_target(sdev);
- vtarget = starget->hostdata;
- vtarget->ioc_id = hd->ioc->id;
- vdev->vtarget = vtarget;
- if (vtarget->num_luns == 0) {
- vtarget->tflags = MPT_TARGET_FLAGS_Q_YES|MPT_TARGET_FLAGS_VALID_INQUIRY;
- hd->Targets[sdev->id] = vtarget;
- }
+ vdev->vtarget = starget->hostdata;
/*
- RAID volumes placed beyond the last expected port.
- */
- if (sdev->channel == hd->ioc->num_ports) {
- target_id = sdev->id;
- vtarget->bus_id = 0;
- vdev->lun = 0;
+ * RAID volumes placed beyond the last expected port.
+ */
+ if (sdev->channel == hd->ioc->num_ports)
goto out;
- }
rphy = dev_to_rphy(sdev->sdev_target->dev.parent);
mutex_lock(&hd->ioc->sas_topology_mutex);
list_for_each_entry(p, &hd->ioc->sas_topology, list) {
for (i = 0; i < p->num_phys; i++) {
- if (p->phy_info[i].attached.sas_address ==
- rphy->identify.sas_address) {
- target_id = p->phy_info[i].attached.id;
- vtarget->bus_id = p->phy_info[i].attached.channel;
- vdev->lun = sdev->lun;
- p->phy_info[i].starget = sdev->sdev_target;
- /*
- * Exposing hidden disk (RAID)
- */
- if (mptscsih_is_phys_disk(hd->ioc, target_id)) {
- target_id = mptscsih_raid_id_to_num(hd,
- target_id);
- vdev->vtarget->tflags |=
- MPT_TARGET_FLAGS_RAID_COMPONENT;
- sdev->no_uld_attach = 1;
- }
- mutex_unlock(&hd->ioc->sas_topology_mutex);
- goto out;
- }
+ if (p->phy_info[i].attached.sas_address !=
+ rphy->identify.sas_address)
+ continue;
+ vdev->lun = sdev->lun;
+ /*
+ * Exposing hidden raid components
+ */
+ if (mptscsih_is_phys_disk(hd->ioc,
+ p->phy_info[i].attached.id))
+ sdev->no_uld_attach = 1;
+ mutex_unlock(&hd->ioc->sas_topology_mutex);
+ goto out;
}
}
mutex_unlock(&hd->ioc->sas_topology_mutex);
return -ENXIO;
out:
- vtarget->target_id = target_id;
- vtarget->num_luns++;
+ vdev->vtarget->num_luns++;
+ sdev->hostdata = vdev;
return 0;
}
-static void
-mptsas_slave_destroy(struct scsi_device *sdev)
+static int
+mptsas_qcmd(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *))
{
- struct Scsi_Host *host = sdev->host;
- MPT_SCSI_HOST *hd = (MPT_SCSI_HOST *)host->hostdata;
- VirtDevice *vdev;
+ VirtDevice *vdev = SCpnt->device->hostdata;
- /*
- * Issue target reset to flush firmware outstanding commands.
- */
- vdev = sdev->hostdata;
- if (vdev->configured_lun){
- if (mptscsih_TMHandler(hd,
- MPI_SCSITASKMGMT_TASKTYPE_TARGET_RESET,
- vdev->vtarget->bus_id,
- vdev->vtarget->target_id,
- 0, 0, 5 /* 5 second timeout */)
- < 0){
-
- /* The TM request failed!
- * Fatal error case.
- */
- printk(MYIOC_s_WARN_FMT
- "Error processing TaskMgmt id=%d TARGET_RESET\n",
- hd->ioc->name,
- vdev->vtarget->target_id);
-
- hd->tmPending = 0;
- hd->tmState = TM_STATE_NONE;
- }
+// scsi_print_command(SCpnt);
+ if (vdev->vtarget->deleted) {
+ SCpnt->result = DID_NO_CONNECT << 16;
+ done(SCpnt);
+ return 0;
}
- mptscsih_slave_destroy(sdev);
+
+ return mptscsih_qcmd(SCpnt,done);
}
+
static struct scsi_host_template mptsas_driver_template = {
.module = THIS_MODULE,
.proc_name = "mptsas",
.proc_info = mptscsih_proc_info,
.name = "MPT SPI Host",
.info = mptscsih_info,
- .queuecommand = mptscsih_qcmd,
- .target_alloc = mptscsih_target_alloc,
+ .queuecommand = mptsas_qcmd,
+ .target_alloc = mptsas_target_alloc,
.slave_alloc = mptsas_slave_alloc,
.slave_configure = mptsas_slave_configure,
- .target_destroy = mptscsih_target_destroy,
- .slave_destroy = mptsas_slave_destroy,
+ .target_destroy = mptsas_target_destroy,
+ .slave_destroy = mptscsih_slave_destroy,
.change_queue_depth = mptscsih_change_queue_depth,
.eh_abort_handler = mptscsih_abort,
.eh_device_reset_handler = mptscsih_dev_reset,
port_info->num_phys = buffer->NumPhys;
port_info->phy_info = kcalloc(port_info->num_phys,
- sizeof(struct mptsas_phyinfo),GFP_KERNEL);
+ sizeof(*port_info->phy_info),GFP_KERNEL);
if (!port_info->phy_info) {
error = -ENOMEM;
goto out_free_consistent;
buffer->PhyData[i].Port;
port_info->phy_info[i].negotiated_link_rate =
buffer->PhyData[i].NegotiatedLinkRate;
+ port_info->phy_info[i].portinfo = port_info;
}
out_free_consistent:
CONFIGPARMS cfg;
SasExpanderPage0_t *buffer;
dma_addr_t dma_handle;
- int error;
+ int i, error;
hdr.PageVersion = MPI_SASEXPANDER0_PAGEVERSION;
hdr.ExtPageLength = 0;
port_info->num_phys = buffer->NumPhys;
port_info->handle = le16_to_cpu(buffer->DevHandle);
port_info->phy_info = kcalloc(port_info->num_phys,
- sizeof(struct mptsas_phyinfo),GFP_KERNEL);
+ sizeof(*port_info->phy_info),GFP_KERNEL);
if (!port_info->phy_info) {
error = -ENOMEM;
goto out_free_consistent;
}
+ for (i = 0; i < port_info->num_phys; i++)
+ port_info->phy_info[i].portinfo = port_info;
+
out_free_consistent:
pci_free_consistent(ioc->pcidev, hdr.ExtPageLength * 4,
buffer, dma_handle);
{
MPT_ADAPTER *ioc;
struct sas_phy *phy;
- int error;
+ struct sas_port *port;
+ int error = 0;
- if (!dev)
- return -ENODEV;
+ if (!dev) {
+ error = -ENODEV;
+ goto out;
+ }
if (!phy_info->phy) {
phy = sas_phy_alloc(dev, index);
- if (!phy)
- return -ENOMEM;
+ if (!phy) {
+ error = -ENOMEM;
+ goto out;
+ }
} else
phy = phy_info->phy;
- phy->port_identifier = phy_info->port_id;
mptsas_parse_device_info(&phy->identify, &phy_info->identify);
/*
error = sas_phy_add(phy);
if (error) {
sas_phy_free(phy);
- return error;
+ goto out;
}
phy_info->phy = phy;
}
- if ((phy_info->attached.handle) &&
- (!phy_info->rphy)) {
+ if (!phy_info->attached.handle ||
+ !phy_info->port_details)
+ goto out;
+
+ port = mptsas_get_port(phy_info);
+ ioc = phy_to_ioc(phy_info->phy);
+
+ if (phy_info->sas_port_add_phy) {
+
+ if (!port) {
+ port = sas_port_alloc(dev,
+ phy_info->port_details->port_id);
+ dsaswideprintk((KERN_DEBUG
+ "sas_port_alloc: port=%p dev=%p port_id=%d\n",
+ port, dev, phy_info->port_details->port_id));
+ if (!port) {
+ error = -ENOMEM;
+ goto out;
+ }
+ error = sas_port_add(port);
+ if (error) {
+ dfailprintk((MYIOC_s_ERR_FMT
+ "%s: exit at line=%d\n", ioc->name,
+ __FUNCTION__, __LINE__));
+ goto out;
+ }
+ mptsas_set_port(phy_info, port);
+ }
+ dsaswideprintk((KERN_DEBUG "sas_port_add_phy: phy_id=%d\n",
+ phy_info->phy_id));
+ sas_port_add_phy(port, phy_info->phy);
+ phy_info->sas_port_add_phy = 0;
+ }
+
+ if (!mptsas_get_rphy(phy_info) && port && !port->rphy) {
struct sas_rphy *rphy;
+ struct device *parent;
struct sas_identify identify;
- ioc = phy_to_ioc(phy_info->phy);
-
+ parent = dev->parent->parent;
/*
* Let the hotplug_work thread handle processing
* the adding/removing of devices that occur
*/
if (ioc->sas_discovery_runtime &&
mptsas_is_end_device(&phy_info->attached))
- return 0;
+ goto out;
mptsas_parse_device_info(&identify, &phy_info->attached);
+ if (scsi_is_host_device(parent)) {
+ struct mptsas_portinfo *port_info;
+ int i;
+
+ mutex_lock(&ioc->sas_topology_mutex);
+ port_info = mptsas_find_portinfo_by_handle(ioc,
+ ioc->handle);
+ mutex_unlock(&ioc->sas_topology_mutex);
+
+ for (i = 0; i < port_info->num_phys; i++)
+ if (port_info->phy_info[i].identify.sas_address ==
+ identify.sas_address)
+ goto out;
+
+ } else if (scsi_is_sas_rphy(parent)) {
+ struct sas_rphy *parent_rphy = dev_to_rphy(parent);
+ if (identify.sas_address ==
+ parent_rphy->identify.sas_address)
+ goto out;
+ }
+
switch (identify.device_type) {
case SAS_END_DEVICE:
- rphy = sas_end_device_alloc(phy);
+ rphy = sas_end_device_alloc(port);
break;
case SAS_EDGE_EXPANDER_DEVICE:
case SAS_FANOUT_EXPANDER_DEVICE:
- rphy = sas_expander_alloc(phy, identify.device_type);
+ rphy = sas_expander_alloc(port, identify.device_type);
break;
default:
rphy = NULL;
break;
}
- if (!rphy)
- return 0; /* non-fatal: an rphy can be added later */
+ if (!rphy) {
+ dfailprintk((MYIOC_s_ERR_FMT
+ "%s: exit at line=%d\n", ioc->name,
+ __FUNCTION__, __LINE__));
+ goto out;
+ }
rphy->identify = identify;
-
error = sas_rphy_add(rphy);
if (error) {
+ dfailprintk((MYIOC_s_ERR_FMT
+ "%s: exit at line=%d\n", ioc->name,
+ __FUNCTION__, __LINE__));
sas_rphy_free(rphy);
- return error;
+ goto out;
}
-
- phy_info->rphy = rphy;
+ mptsas_set_rphy(phy_info, rphy);
}
- return 0;
+ out:
+ return error;
}
static int
goto out_free_port_info;
mutex_lock(&ioc->sas_topology_mutex);
+ ioc->handle = hba->handle;
port_info = mptsas_find_portinfo_by_handle(ioc, hba->handle);
if (!port_info) {
port_info = hba;
for (i = 0; i < hba->num_phys; i++)
port_info->phy_info[i].negotiated_link_rate =
hba->phy_info[i].negotiated_link_rate;
- if (hba->phy_info)
- kfree(hba->phy_info);
+ kfree(hba->phy_info);
kfree(hba);
hba = NULL;
}
port_info->phy_info[i].phy_id;
handle = port_info->phy_info[i].identify.handle;
- if (port_info->phy_info[i].attached.handle) {
+ if (port_info->phy_info[i].attached.handle)
mptsas_sas_device_pg0(ioc,
&port_info->phy_info[i].attached,
(MPI_SAS_DEVICE_PGAD_FORM_HANDLE <<
MPI_SAS_DEVICE_PGAD_FORM_SHIFT),
port_info->phy_info[i].attached.handle);
- }
+ }
+
+ mptsas_setup_wide_ports(ioc, port_info);
+ for (i = 0; i < port_info->num_phys; i++, ioc->sas_index++)
mptsas_probe_one_phy(&ioc->sh->shost_gendev,
&port_info->phy_info[i], ioc->sas_index, 1);
- ioc->sas_index++;
- }
return 0;
mptsas_probe_expander_phys(MPT_ADAPTER *ioc, u32 *handle)
{
struct mptsas_portinfo *port_info, *p, *ex;
+ struct device *parent;
+ struct sas_rphy *rphy;
int error = -ENOMEM, i, j;
ex = kzalloc(sizeof(*port_info), GFP_KERNEL);
list_add_tail(&port_info->list, &ioc->sas_topology);
} else {
port_info->handle = ex->handle;
- if (ex->phy_info)
- kfree(ex->phy_info);
+ kfree(ex->phy_info);
kfree(ex);
ex = NULL;
}
mutex_unlock(&ioc->sas_topology_mutex);
for (i = 0; i < port_info->num_phys; i++) {
- struct device *parent;
-
mptsas_sas_expander_pg1(ioc, &port_info->phy_info[i],
(MPI_SAS_EXPAND_PGAD_FORM_HANDLE_PHY_NUM <<
MPI_SAS_EXPAND_PGAD_FORM_SHIFT), (i << 16) + *handle);
port_info->phy_info[i].attached.phy_id =
port_info->phy_info[i].phy_id;
}
+ }
- /*
- * If we find a parent port handle this expander is
- * attached to another expander, else it hangs of the
- * HBA phys.
- */
- parent = &ioc->sh->shost_gendev;
+ parent = &ioc->sh->shost_gendev;
+ for (i = 0; i < port_info->num_phys; i++) {
mutex_lock(&ioc->sas_topology_mutex);
list_for_each_entry(p, &ioc->sas_topology, list) {
for (j = 0; j < p->num_phys; j++) {
- if (port_info->phy_info[i].identify.handle ==
+ if (port_info->phy_info[i].identify.handle !=
p->phy_info[j].attached.handle)
- parent = &p->phy_info[j].rphy->dev;
+ continue;
+ rphy = mptsas_get_rphy(&p->phy_info[j]);
+ parent = &rphy->dev;
}
}
mutex_unlock(&ioc->sas_topology_mutex);
+ }
+
+ mptsas_setup_wide_ports(ioc, port_info);
+ for (i = 0; i < port_info->num_phys; i++, ioc->sas_index++)
mptsas_probe_one_phy(parent, &port_info->phy_info[i],
ioc->sas_index, 0);
- ioc->sas_index++;
- }
return 0;
out_free_port_info:
if (ex) {
- if (ex->phy_info)
- kfree(ex->phy_info);
+ kfree(ex->phy_info);
kfree(ex);
}
out:
{
struct mptsas_portinfo buffer;
struct mptsas_portinfo *port_info, *n, *parent;
+ struct mptsas_phyinfo *phy_info;
+ struct scsi_target * starget;
+ VirtTarget * vtarget;
+ struct sas_port * port;
int i;
+ u64 expander_sas_address;
mutex_lock(&ioc->sas_topology_mutex);
list_for_each_entry_safe(port_info, n, &ioc->sas_topology, list) {
(MPI_SAS_EXPAND_PGAD_FORM_HANDLE <<
MPI_SAS_EXPAND_PGAD_FORM_SHIFT), port_info->handle)) {
+ /*
+ * Issue target reset to all child end devices
+ * then mark them deleted to prevent further
+ * IO going to them.
+ */
+ phy_info = port_info->phy_info;
+ for (i = 0; i < port_info->num_phys; i++, phy_info++) {
+ starget = mptsas_get_starget(phy_info);
+ if (!starget)
+ continue;
+ vtarget = starget->hostdata;
+ if(vtarget->deleted)
+ continue;
+ vtarget->deleted = 1;
+ mptsas_target_reset(ioc, vtarget);
+ sas_port_delete(mptsas_get_port(phy_info));
+ mptsas_port_delete(phy_info->port_details);
+ }
+
/*
* Obtain the port_info instance to the parent port
*/
if (!parent)
goto next_port;
+ expander_sas_address =
+ port_info->phy_info[0].identify.sas_address;
+
/*
* Delete rphys in the parent that point
* to this expander. The transport layer will
* cleanup all the children.
*/
- for (i = 0; i < parent->num_phys; i++) {
- if ((!parent->phy_info[i].rphy) ||
- (parent->phy_info[i].attached.sas_address !=
- port_info->phy_info[i].identify.sas_address))
+ phy_info = parent->phy_info;
+ for (i = 0; i < parent->num_phys; i++, phy_info++) {
+ port = mptsas_get_port(phy_info);
+ if (!port)
+ continue;
+ if (phy_info->attached.sas_address !=
+ expander_sas_address)
continue;
- sas_rphy_delete(parent->phy_info[i].rphy);
- memset(&parent->phy_info[i].attached, 0,
- sizeof(struct mptsas_devinfo));
- parent->phy_info[i].rphy = NULL;
- parent->phy_info[i].starget = NULL;
+#ifdef MPT_DEBUG_SAS_WIDE
+ dev_printk(KERN_DEBUG, &port->dev, "delete\n");
+#endif
+ sas_port_delete(port);
+ mptsas_port_delete(phy_info->port_details);
}
next_port:
+
+ phy_info = port_info->phy_info;
+ for (i = 0; i < port_info->num_phys; i++, phy_info++)
+ mptsas_port_delete(phy_info->port_details);
+
list_del(&port_info->list);
- if (port_info->phy_info)
- kfree(port_info->phy_info);
+ kfree(port_info->phy_info);
kfree(port_info);
}
/*
* Free this memory allocated from inside
* mptsas_sas_expander_pg0
*/
- if (buffer.phy_info)
- kfree(buffer.phy_info);
+ kfree(buffer.phy_info);
}
mutex_unlock(&ioc->sas_topology_mutex);
}
/*
* Work queue thread to handle Runtime discovery
* Mere purpose is the hot add/delete of expanders
+ *(Mutex UNLOCKED)
*/
static void
-mptscsih_discovery_work(void * arg)
+__mptsas_discovery_work(MPT_ADAPTER *ioc)
{
- struct mptsas_discovery_event *ev = arg;
- MPT_ADAPTER *ioc = ev->ioc;
u32 handle = 0xFFFF;
- mutex_lock(&ioc->sas_discovery_mutex);
ioc->sas_discovery_runtime=1;
mptsas_delete_expander_phys(ioc);
mptsas_probe_hba_phys(ioc);
while (!mptsas_probe_expander_phys(ioc, &handle))
;
- kfree(ev);
ioc->sas_discovery_runtime=0;
+}
+
+/*
+ * Work queue thread to handle Runtime discovery
+ * Mere purpose is the hot add/delete of expanders
+ *(Mutex LOCKED)
+ */
+static void
+mptsas_discovery_work(void * arg)
+{
+ struct mptsas_discovery_event *ev = arg;
+ MPT_ADAPTER *ioc = ev->ioc;
+
+ mutex_lock(&ioc->sas_discovery_mutex);
+ __mptsas_discovery_work(ioc);
mutex_unlock(&ioc->sas_discovery_mutex);
+ kfree(ev);
}
static struct mptsas_phyinfo *
-mptsas_find_phyinfo_by_parent(MPT_ADAPTER *ioc, u16 parent_handle, u8 phy_id)
+mptsas_find_phyinfo_by_sas_address(MPT_ADAPTER *ioc, u64 sas_address)
{
struct mptsas_portinfo *port_info;
- struct mptsas_devinfo device_info;
struct mptsas_phyinfo *phy_info = NULL;
- int i, error;
-
- /*
- * Retrieve the parent sas_address
- */
- error = mptsas_sas_device_pg0(ioc, &device_info,
- (MPI_SAS_DEVICE_PGAD_FORM_HANDLE <<
- MPI_SAS_DEVICE_PGAD_FORM_SHIFT),
- parent_handle);
- if (error)
- return NULL;
+ int i;
- /*
- * The phy_info structures are never deallocated during lifetime of
- * a host, so the code below is safe without additional refcounting.
- */
mutex_lock(&ioc->sas_topology_mutex);
list_for_each_entry(port_info, &ioc->sas_topology, list) {
for (i = 0; i < port_info->num_phys; i++) {
- if (port_info->phy_info[i].identify.sas_address ==
- device_info.sas_address &&
- port_info->phy_info[i].phy_id == phy_id) {
- phy_info = &port_info->phy_info[i];
- break;
- }
+ if (port_info->phy_info[i].attached.sas_address
+ != sas_address)
+ continue;
+ if (!mptsas_is_end_device(
+ &port_info->phy_info[i].attached))
+ continue;
+ phy_info = &port_info->phy_info[i];
+ break;
}
}
mutex_unlock(&ioc->sas_topology_mutex);
-
return phy_info;
}
struct mptsas_phyinfo *phy_info = NULL;
int i;
- /*
- * The phy_info structures are never deallocated during lifetime of
- * a host, so the code below is safe without additional refcounting.
- */
mutex_lock(&ioc->sas_topology_mutex);
list_for_each_entry(port_info, &ioc->sas_topology, list) {
- for (i = 0; i < port_info->num_phys; i++)
- if (mptsas_is_end_device(&port_info->phy_info[i].attached))
- if (port_info->phy_info[i].attached.id == id) {
- phy_info = &port_info->phy_info[i];
- break;
- }
+ for (i = 0; i < port_info->num_phys; i++) {
+ if (port_info->phy_info[i].attached.id != id)
+ continue;
+ if (!mptsas_is_end_device(
+ &port_info->phy_info[i].attached))
+ continue;
+ phy_info = &port_info->phy_info[i];
+ break;
+ }
}
mutex_unlock(&ioc->sas_topology_mutex);
-
return phy_info;
}
* Work queue thread to clear the persistency table
*/
static void
-mptscsih_sas_persist_clear_table(void * arg)
+mptsas_persist_clear_table(void * arg)
{
MPT_ADAPTER *ioc = (MPT_ADAPTER *)arg;
mptsas_reprobe_lun);
}
-
/*
* Work queue thread to handle SAS hotplug events
*/
MPT_ADAPTER *ioc = ev->ioc;
struct mptsas_phyinfo *phy_info;
struct sas_rphy *rphy;
+ struct sas_port *port;
struct scsi_device *sdev;
+ struct scsi_target * starget;
struct sas_identify identify;
char *ds = NULL;
struct mptsas_devinfo sas_device;
VirtTarget *vtarget;
+ VirtDevice *vdevice;
- mutex_lock(&ioc->sas_discovery_mutex);
+ mutex_lock(&ioc->sas_discovery_mutex);
switch (ev->event_type) {
case MPTSAS_DEL_DEVICE:
/*
* Sanity checks, for non-existing phys and remote rphys.
*/
- if (!phy_info)
+ if (!phy_info || !phy_info->port_details) {
+ dfailprintk((MYIOC_s_ERR_FMT
+ "%s: exit at line=%d\n", ioc->name,
+ __FUNCTION__, __LINE__));
break;
- if (!phy_info->rphy)
+ }
+ rphy = mptsas_get_rphy(phy_info);
+ if (!rphy) {
+ dfailprintk((MYIOC_s_ERR_FMT
+ "%s: exit at line=%d\n", ioc->name,
+ __FUNCTION__, __LINE__));
break;
- if (phy_info->starget) {
- vtarget = phy_info->starget->hostdata;
+ }
+ port = mptsas_get_port(phy_info);
+ if (!port) {
+ dfailprintk((MYIOC_s_ERR_FMT
+ "%s: exit at line=%d\n", ioc->name,
+ __FUNCTION__, __LINE__));
+ break;
+ }
- if (!vtarget)
+ starget = mptsas_get_starget(phy_info);
+ if (starget) {
+ vtarget = starget->hostdata;
+
+ if (!vtarget) {
+ dfailprintk((MYIOC_s_ERR_FMT
+ "%s: exit at line=%d\n", ioc->name,
+ __FUNCTION__, __LINE__));
break;
+ }
+
/*
* Handling RAID components
*/
if (ev->phys_disk_num_valid) {
vtarget->target_id = ev->phys_disk_num;
vtarget->tflags |= MPT_TARGET_FLAGS_RAID_COMPONENT;
- mptsas_reprobe_target(vtarget->starget, 1);
+ mptsas_reprobe_target(starget, 1);
break;
}
+
+ vtarget->deleted = 1;
+ mptsas_target_reset(ioc, vtarget);
}
if (phy_info->attached.device_info & MPI_SAS_DEVICE_INFO_SSP_TARGET)
"removing %s device, channel %d, id %d, phy %d\n",
ioc->name, ds, ev->channel, ev->id, phy_info->phy_id);
- sas_rphy_delete(phy_info->rphy);
- memset(&phy_info->attached, 0, sizeof(struct mptsas_devinfo));
- phy_info->rphy = NULL;
- phy_info->starget = NULL;
+#ifdef MPT_DEBUG_SAS_WIDE
+ dev_printk(KERN_DEBUG, &port->dev, "delete\n");
+#endif
+ sas_port_delete(port);
+ mptsas_port_delete(phy_info->port_details);
break;
case MPTSAS_ADD_DEVICE:
*/
if (mptsas_sas_device_pg0(ioc, &sas_device,
(MPI_SAS_DEVICE_PGAD_FORM_BUS_TARGET_ID <<
- MPI_SAS_DEVICE_PGAD_FORM_SHIFT), ev->id))
+ MPI_SAS_DEVICE_PGAD_FORM_SHIFT), ev->id)) {
+ dfailprintk((MYIOC_s_ERR_FMT
+ "%s: exit at line=%d\n", ioc->name,
+ __FUNCTION__, __LINE__));
break;
+ }
- phy_info = mptsas_find_phyinfo_by_parent(ioc,
- sas_device.handle_parent, sas_device.phy_id);
+ ssleep(2);
+ __mptsas_discovery_work(ioc);
- if (!phy_info) {
- u32 handle = 0xFFFF;
+ phy_info = mptsas_find_phyinfo_by_sas_address(ioc,
+ sas_device.sas_address);
- /*
- * Its possible when an expander has been hot added
- * containing attached devices, the sas firmware
- * may send a RC_ADDED event prior to the
- * DISCOVERY STOP event. If that occurs, our
- * view of the topology in the driver in respect to this
- * expander might of not been setup, and we hit this
- * condition.
- * Therefore, this code kicks off discovery to
- * refresh the data.
- * Then again, we check whether the parent phy has
- * been created.
- */
- ioc->sas_discovery_runtime=1;
- mptsas_delete_expander_phys(ioc);
- mptsas_probe_hba_phys(ioc);
- while (!mptsas_probe_expander_phys(ioc, &handle))
- ;
- ioc->sas_discovery_runtime=0;
-
- phy_info = mptsas_find_phyinfo_by_parent(ioc,
- sas_device.handle_parent, sas_device.phy_id);
- if (!phy_info)
- break;
+ if (!phy_info || !phy_info->port_details) {
+ dfailprintk((MYIOC_s_ERR_FMT
+ "%s: exit at line=%d\n", ioc->name,
+ __FUNCTION__, __LINE__));
+ break;
}
- if (phy_info->starget) {
- vtarget = phy_info->starget->hostdata;
+ starget = mptsas_get_starget(phy_info);
+ if (starget) {
+ vtarget = starget->hostdata;
- if (!vtarget)
+ if (!vtarget) {
+ dfailprintk((MYIOC_s_ERR_FMT
+ "%s: exit at line=%d\n", ioc->name,
+ __FUNCTION__, __LINE__));
break;
+ }
/*
* Handling RAID components
*/
if (vtarget->tflags & MPT_TARGET_FLAGS_RAID_COMPONENT) {
vtarget->tflags &= ~MPT_TARGET_FLAGS_RAID_COMPONENT;
vtarget->target_id = ev->id;
- mptsas_reprobe_target(phy_info->starget, 0);
+ mptsas_reprobe_target(starget, 0);
}
break;
}
- if (phy_info->rphy)
+ if (mptsas_get_rphy(phy_info)) {
+ dfailprintk((MYIOC_s_ERR_FMT
+ "%s: exit at line=%d\n", ioc->name,
+ __FUNCTION__, __LINE__));
break;
+ }
+ port = mptsas_get_port(phy_info);
+ if (!port) {
+ dfailprintk((MYIOC_s_ERR_FMT
+ "%s: exit at line=%d\n", ioc->name,
+ __FUNCTION__, __LINE__));
+ break;
+ }
memcpy(&phy_info->attached, &sas_device,
sizeof(struct mptsas_devinfo));
ioc->name, ds, ev->channel, ev->id, ev->phy_id);
mptsas_parse_device_info(&identify, &phy_info->attached);
- switch (identify.device_type) {
- case SAS_END_DEVICE:
- rphy = sas_end_device_alloc(phy_info->phy);
- break;
- case SAS_EDGE_EXPANDER_DEVICE:
- case SAS_FANOUT_EXPANDER_DEVICE:
- rphy = sas_expander_alloc(phy_info->phy, identify.device_type);
- break;
- default:
- rphy = NULL;
- break;
- }
- if (!rphy)
+ rphy = sas_end_device_alloc(port);
+ if (!rphy) {
+ dfailprintk((MYIOC_s_ERR_FMT
+ "%s: exit at line=%d\n", ioc->name,
+ __FUNCTION__, __LINE__));
break; /* non-fatal: an rphy can be added later */
+ }
rphy->identify = identify;
if (sas_rphy_add(rphy)) {
+ dfailprintk((MYIOC_s_ERR_FMT
+ "%s: exit at line=%d\n", ioc->name,
+ __FUNCTION__, __LINE__));
sas_rphy_free(rphy);
break;
}
-
- phy_info->rphy = rphy;
+ mptsas_set_rphy(phy_info, rphy);
break;
case MPTSAS_ADD_RAID:
sdev = scsi_device_lookup(
printk(MYIOC_s_INFO_FMT
"removing raid volume, channel %d, id %d\n",
ioc->name, ioc->num_ports, ev->id);
+ vdevice = sdev->hostdata;
+ vdevice->vtarget->deleted = 1;
+ mptsas_target_reset(ioc, vdevice->vtarget);
scsi_remove_device(sdev);
scsi_device_put(sdev);
mpt_findImVolumes(ioc);
break;
}
- kfree(ev);
mutex_unlock(&ioc->sas_discovery_mutex);
+ kfree(ev);
+
}
static void
-mptscsih_send_sas_event(MPT_ADAPTER *ioc,
+mptsas_send_sas_event(MPT_ADAPTER *ioc,
EVENT_DATA_SAS_DEVICE_STATUS_CHANGE *sas_event_data)
{
struct mptsas_hotplug_event *ev;
switch (sas_event_data->ReasonCode) {
case MPI_EVENT_SAS_DEV_STAT_RC_ADDED:
case MPI_EVENT_SAS_DEV_STAT_RC_NOT_RESPONDING:
- ev = kmalloc(sizeof(*ev), GFP_ATOMIC);
+ ev = kzalloc(sizeof(*ev), GFP_ATOMIC);
if (!ev) {
printk(KERN_WARNING "mptsas: lost hotplug event\n");
break;
/*
* Persistent table is full.
*/
- INIT_WORK(&ioc->mptscsih_persistTask,
- mptscsih_sas_persist_clear_table,
- (void *)ioc);
- schedule_work(&ioc->mptscsih_persistTask);
+ INIT_WORK(&ioc->sas_persist_task,
+ mptsas_persist_clear_table, (void *)ioc);
+ schedule_work(&ioc->sas_persist_task);
break;
case MPI_EVENT_SAS_DEV_STAT_RC_SMART_DATA:
/* TODO */
}
static void
-mptscsih_send_raid_event(MPT_ADAPTER *ioc,
+mptsas_send_raid_event(MPT_ADAPTER *ioc,
EVENT_DATA_RAID *raid_event_data)
{
struct mptsas_hotplug_event *ev;
if (ioc->bus_type != SAS)
return;
- ev = kmalloc(sizeof(*ev), GFP_ATOMIC);
+ ev = kzalloc(sizeof(*ev), GFP_ATOMIC);
if (!ev) {
printk(KERN_WARNING "mptsas: lost hotplug event\n");
return;
}
- memset(ev,0,sizeof(struct mptsas_hotplug_event));
INIT_WORK(&ev->work, mptsas_hotplug_work, ev);
ev->ioc = ioc;
ev->id = raid_event_data->VolumeID;
}
static void
-mptscsih_send_discovery(MPT_ADAPTER *ioc,
+mptsas_send_discovery_event(MPT_ADAPTER *ioc,
EVENT_DATA_SAS_DISCOVERY *discovery_data)
{
struct mptsas_discovery_event *ev;
if (discovery_data->DiscoveryStatus)
return;
- ev = kmalloc(sizeof(*ev), GFP_ATOMIC);
+ ev = kzalloc(sizeof(*ev), GFP_ATOMIC);
if (!ev)
return;
- memset(ev,0,sizeof(struct mptsas_discovery_event));
- INIT_WORK(&ev->work, mptscsih_discovery_work, ev);
+ INIT_WORK(&ev->work, mptsas_discovery_work, ev);
ev->ioc = ioc;
schedule_work(&ev->work);
};
switch (event) {
case MPI_EVENT_SAS_DEVICE_STATUS_CHANGE:
- mptscsih_send_sas_event(ioc,
+ mptsas_send_sas_event(ioc,
(EVENT_DATA_SAS_DEVICE_STATUS_CHANGE *)reply->Data);
break;
case MPI_EVENT_INTEGRATED_RAID:
- mptscsih_send_raid_event(ioc,
+ mptsas_send_raid_event(ioc,
(EVENT_DATA_RAID *)reply->Data);
break;
case MPI_EVENT_PERSISTENT_TABLE_FULL:
- INIT_WORK(&ioc->mptscsih_persistTask,
- mptscsih_sas_persist_clear_table,
+ INIT_WORK(&ioc->sas_persist_task,
+ mptsas_persist_clear_table,
(void *)ioc);
- schedule_work(&ioc->mptscsih_persistTask);
+ schedule_work(&ioc->sas_persist_task);
break;
case MPI_EVENT_SAS_DISCOVERY:
- mptscsih_send_discovery(ioc,
+ mptsas_send_discovery_event(ioc,
(EVENT_DATA_SAS_DISCOVERY *)reply->Data);
break;
default:
return 0;
-out_mptsas_probe:
+ out_mptsas_probe:
mptscsih_remove(pdev);
return error;
{
MPT_ADAPTER *ioc = pci_get_drvdata(pdev);
struct mptsas_portinfo *p, *n;
+ int i;
ioc->sas_discovery_ignore_events=1;
sas_remove_host(ioc->sh);
mutex_lock(&ioc->sas_topology_mutex);
list_for_each_entry_safe(p, n, &ioc->sas_topology, list) {
list_del(&p->list);
- if (p->phy_info)
- kfree(p->phy_info);
+ for (i = 0 ; i < p->num_phys ; i++)
+ mptsas_port_delete(p->phy_info[i].port_details);
+ kfree(p->phy_info);
kfree(p);
}
mutex_unlock(&ioc->sas_topology_mutex);
}
static struct pci_device_id mptsas_pci_table[] = {
- { PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_SAS1064,
- PCI_ANY_ID, PCI_ANY_ID },
- { PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_SAS1066,
+ { PCI_VENDOR_ID_LSI_LOGIC, MPI_MANUFACTPAGE_DEVID_SAS1064,
PCI_ANY_ID, PCI_ANY_ID },
- { PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_SAS1068,
+ { PCI_VENDOR_ID_LSI_LOGIC, MPI_MANUFACTPAGE_DEVID_SAS1068,
PCI_ANY_ID, PCI_ANY_ID },
- { PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_SAS1064E,
+ { PCI_VENDOR_ID_LSI_LOGIC, MPI_MANUFACTPAGE_DEVID_SAS1064E,
PCI_ANY_ID, PCI_ANY_ID },
- { PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_SAS1066E,
+ { PCI_VENDOR_ID_LSI_LOGIC, MPI_MANUFACTPAGE_DEVID_SAS1068E,
PCI_ANY_ID, PCI_ANY_ID },
- { PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_SAS1068E,
+ { PCI_VENDOR_ID_LSI_LOGIC, MPI_MANUFACTPAGE_DEVID_SAS1078,
PCI_ANY_ID, PCI_ANY_ID },
{0} /* Terminating entry */
};
*/
static struct pci_device_id mptspi_pci_table[] = {
- { PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_53C1030,
+ { PCI_VENDOR_ID_LSI_LOGIC, MPI_MANUFACTPAGE_DEVID_53C1030,
PCI_ANY_ID, PCI_ANY_ID },
- { PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_1030_53C1035,
+ { PCI_VENDOR_ID_LSI_LOGIC, MPI_MANUFACTPAGE_DEVID_53C1035,
PCI_ANY_ID, PCI_ANY_ID },
{0} /* Terminating entry */
};
writel(0xffffffff, c->irq_mask);
if (pdev->irq) {
- rc = request_irq(pdev->irq, i2o_pci_interrupt, SA_SHIRQ,
+ rc = request_irq(pdev->irq, i2o_pci_interrupt, IRQF_SHARED,
c->name, c);
if (rc < 0) {
printk(KERN_ERR "%s: unable to allocate interrupt %d."
goto err_free;
}
- ret = request_irq(ucb->irq, ucb1x00_irq, SA_TRIGGER_RISING,
+ ret = request_irq(ucb->irq, ucb1x00_irq, IRQF_TRIGGER_RISING,
"UCB1x00", ucb);
if (ret) {
printk(KERN_ERR "ucb1x00: unable to grab irq%d: %d\n",
goto error_ioremap;
}
- result = request_irq(sp->irq, ibmasm_interrupt_handler, SA_SHIRQ, sp->devname, (void*)sp);
+ result = request_irq(sp->irq, ibmasm_interrupt_handler, IRQF_SHARED, sp->devname, (void*)sp);
if (result) {
dev_err(sp->dev, "Failed to register interrupt handler\n");
goto error_request_irq;
/*
* Allocate the MCI interrupt
*/
- ret = request_irq(AT91_ID_MCI, at91_mci_irq, SA_SHIRQ, DRIVER_NAME, host);
+ ret = request_irq(AT91_ID_MCI, at91_mci_irq, IRQF_SHARED, DRIVER_NAME, host);
if (ret) {
printk(KERN_ERR "Failed to request MCI interrupt\n");
clk_disable(mci_clk);
int i, ret = 0;
/* The interrupt is shared among all controllers */
- ret = request_irq(AU1100_SD_IRQ, au1xmmc_irq, SA_INTERRUPT, "MMC", 0);
+ ret = request_irq(AU1100_SD_IRQ, au1xmmc_irq, IRQF_DISABLED, "MMC", 0);
if (ret) {
printk(DRIVER_NAME "ERROR: Couldn't get int %d: %d\n",
int mmc_wait_for_req(struct mmc_host *host, struct mmc_request *mrq)
{
- DECLARE_COMPLETION(complete);
+ DECLARE_COMPLETION_ONSTACK(complete);
mrq->done_data = &complete;
mrq->done = mmc_wait_done;
writel(0, host->base + MMCIMASK1);
writel(0xfff, host->base + MMCICLEAR);
- ret = request_irq(dev->irq[0], mmci_irq, SA_SHIRQ, DRIVER_NAME " (cmd)", host);
+ ret = request_irq(dev->irq[0], mmci_irq, IRQF_SHARED, DRIVER_NAME " (cmd)", host);
if (ret)
goto unmap;
- ret = request_irq(dev->irq[1], mmci_pio_irq, SA_SHIRQ, DRIVER_NAME " (pio)", host);
+ ret = request_irq(dev->irq[1], mmci_pio_irq, IRQF_SHARED, DRIVER_NAME " (pio)", host);
if (ret)
goto irq0_free;
unsigned char id; /* 16xx chips have 2 MMC blocks */
struct clk * iclk;
struct clk * fclk;
+ struct resource *res;
void __iomem *base;
int irq;
unsigned char bus_mode;
mmc_omap_xfer_data(struct mmc_omap_host *host, int write)
{
int n;
- void __iomem *reg;
- u16 *p;
if (host->buffer_bytes_left == 0) {
host->sg_idx++;
struct mmc_data *mmcdat = host->data;
if (unlikely(host->dma_ch < 0)) {
- dev_err(mmc_dev(host->mmc), "DMA callback while DMA not
- enabled\n");
+ dev_err(mmc_dev(host->mmc),
+ "DMA callback while DMA not enabled\n");
return;
}
/* FIXME: We really should do something to _handle_ the errors */
- if (ch_status & OMAP_DMA_TOUT_IRQ) {
+ if (ch_status & OMAP1_DMA_TOUT_IRQ) {
dev_err(mmc_dev(host->mmc),"DMA timeout\n");
return;
}
struct omap_mmc_conf *minfo = pdev->dev.platform_data;
struct mmc_host *mmc;
struct mmc_omap_host *host = NULL;
+ struct resource *r;
int ret = 0;
+ int irq;
- if (platform_get_resource(pdev, IORESOURCE_MEM, 0) ||
- platform_get_irq(pdev, IORESOURCE_IRQ, 0)) {
- dev_err(&pdev->dev, "mmc_omap_probe: invalid resource type\n");
- return -ENODEV;
- }
+ r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ irq = platform_get_irq(pdev, 0);
+ if (!r || irq < 0)
+ return -ENXIO;
- if (!request_mem_region(pdev->resource[0].start,
+ r = request_mem_region(pdev->resource[0].start,
pdev->resource[0].end - pdev->resource[0].start + 1,
- pdev->name)) {
- dev_dbg(&pdev->dev, "request_mem_region failed\n");
+ pdev->name);
+ if (!r)
return -EBUSY;
- }
mmc = mmc_alloc_host(sizeof(struct mmc_omap_host), &pdev->dev);
if (!mmc) {
host->dma_timer.data = (unsigned long) host;
host->id = pdev->id;
+ host->res = r;
+ host->irq = irq;
if (cpu_is_omap24xx()) {
host->iclk = clk_get(&pdev->dev, "mmc_ick");
host->dma_ch = -1;
host->irq = pdev->resource[1].start;
- host->base = ioremap(pdev->res.start, SZ_4K);
- if (!host->base) {
- ret = -ENOMEM;
- goto out;
- }
+ host->base = (void __iomem*)IO_ADDRESS(r->start);
- if (minfo->wire4)
+ if (minfo->wire4)
mmc->caps |= MMC_CAP_4_BIT_DATA;
mmc->ops = &mmc_omap_ops;
if (host->power_pin >= 0) {
if ((ret = omap_request_gpio(host->power_pin)) != 0) {
- dev_err(mmc_dev(host->mmc), "Unable to get GPIO
- pin for MMC power\n");
+ dev_err(mmc_dev(host->mmc),
+ "Unable to get GPIO pin for MMC power\n");
goto out;
}
omap_set_gpio_direction(host->power_pin, 0);
omap_set_gpio_direction(host->switch_pin, 1);
ret = request_irq(OMAP_GPIO_IRQ(host->switch_pin),
- mmc_omap_switch_irq, SA_TRIGGER_RISING, DRIVER_NAME, host);
+ mmc_omap_switch_irq, IRQF_TRIGGER_RISING, DRIVER_NAME, host);
if (ret) {
dev_warn(mmc_dev(host->mmc), "Unable to get IRQ for MMC cover switch\n");
omap_free_gpio(host->switch_pin);
device_remove_file(&pdev->dev, &dev_attr_cover_switch);
}
if (ret) {
- dev_wan(mmc_dev(host->mmc), "Unable to create sysfs attributes\n");
+ dev_warn(mmc_dev(host->mmc), "Unable to create sysfs attributes\n");
free_irq(OMAP_GPIO_IRQ(host->switch_pin), host);
omap_free_gpio(host->switch_pin);
host->switch_pin = -1;
* published by the Free Software Foundation.
*/
- /*
- * Note that PIO transfer is rather crappy atm. The buffer full/empty
- * interrupts aren't reliable so we currently transfer the entire buffer
- * directly. Patches to solve the problem are welcome.
- */
-
#include <linux/delay.h>
#include <linux/highmem.h>
#include <linux/pci.h>
#include "sdhci.h"
#define DRIVER_NAME "sdhci"
-#define DRIVER_VERSION "0.11"
+#define DRIVER_VERSION "0.12"
#define BUGMAIL "<sdhci-devel@list.drzeus.cx>"
#define DBG(f, x...) \
pr_debug(DRIVER_NAME " [%s()]: " f, __func__,## x)
+static unsigned int debug_nodma = 0;
+static unsigned int debug_forcedma = 0;
+static unsigned int debug_quirks = 0;
+
+#define SDHCI_QUIRK_CLOCK_BEFORE_RESET (1<<0)
+#define SDHCI_QUIRK_FORCE_DMA (1<<1)
+
static const struct pci_device_id pci_ids[] __devinitdata = {
- /* handle any SD host controller */
- {PCI_DEVICE_CLASS((PCI_CLASS_SYSTEM_SDHCI << 8), 0xFFFF00)},
+ {
+ .vendor = PCI_VENDOR_ID_RICOH,
+ .device = PCI_DEVICE_ID_RICOH_R5C822,
+ .subvendor = PCI_VENDOR_ID_IBM,
+ .subdevice = PCI_ANY_ID,
+ .driver_data = SDHCI_QUIRK_CLOCK_BEFORE_RESET |
+ SDHCI_QUIRK_FORCE_DMA,
+ },
+
+ {
+ .vendor = PCI_VENDOR_ID_RICOH,
+ .device = PCI_DEVICE_ID_RICOH_R5C822,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .driver_data = SDHCI_QUIRK_FORCE_DMA,
+ },
+
+ {
+ .vendor = PCI_VENDOR_ID_TI,
+ .device = PCI_DEVICE_ID_TI_XX21_XX11_SD,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .driver_data = SDHCI_QUIRK_FORCE_DMA,
+ },
+
+ { /* Generic SD host controller */
+ PCI_DEVICE_CLASS((PCI_CLASS_SYSTEM_SDHCI << 8), 0xFFFF00)
+ },
+
{ /* end: all zeroes */ },
};
static void sdhci_reset(struct sdhci_host *host, u8 mask)
{
+ unsigned long timeout;
+
writeb(mask, host->ioaddr + SDHCI_SOFTWARE_RESET);
- if (mask & SDHCI_RESET_ALL) {
+ if (mask & SDHCI_RESET_ALL)
host->clock = 0;
- mdelay(50);
+ /* Wait max 100 ms */
+ timeout = 100;
+
+ /* hw clears the bit when it's done */
+ while (readb(host->ioaddr + SDHCI_SOFTWARE_RESET) & mask) {
+ if (timeout == 0) {
+ printk(KERN_ERR "%s: Reset 0x%x never completed. "
+ "Please report this to " BUGMAIL ".\n",
+ mmc_hostname(host->mmc), (int)mask);
+ sdhci_dumpregs(host);
+ return;
+ }
+ timeout--;
+ mdelay(1);
}
}
sdhci_reset(host, SDHCI_RESET_ALL);
- intmask = ~(SDHCI_INT_CARD_INT | SDHCI_INT_BUF_EMPTY | SDHCI_INT_BUF_FULL);
+ intmask = SDHCI_INT_BUS_POWER | SDHCI_INT_DATA_END_BIT |
+ SDHCI_INT_DATA_CRC | SDHCI_INT_DATA_TIMEOUT | SDHCI_INT_INDEX |
+ SDHCI_INT_END_BIT | SDHCI_INT_CRC | SDHCI_INT_TIMEOUT |
+ SDHCI_INT_CARD_REMOVE | SDHCI_INT_CARD_INSERT |
+ SDHCI_INT_DATA_AVAIL | SDHCI_INT_SPACE_AVAIL |
+ SDHCI_INT_DMA_END | SDHCI_INT_DATA_END | SDHCI_INT_RESPONSE;
writel(intmask, host->ioaddr + SDHCI_INT_ENABLE);
writel(intmask, host->ioaddr + SDHCI_SIGNAL_ENABLE);
-
- /* This is unknown magic. */
- writeb(0xE, host->ioaddr + SDHCI_TIMEOUT_CONTROL);
}
static void sdhci_activate_led(struct sdhci_host *host)
return host->num_sg;
}
-static void sdhci_transfer_pio(struct sdhci_host *host)
+static void sdhci_read_block_pio(struct sdhci_host *host)
{
+ int blksize, chunk_remain;
+ u32 data;
char *buffer;
- u32 mask;
- int bytes, size;
- unsigned long max_jiffies;
+ int size;
- BUG_ON(!host->data);
+ DBG("PIO reading\n");
- if (host->num_sg == 0)
- return;
-
- bytes = 0;
- if (host->data->flags & MMC_DATA_READ)
- mask = SDHCI_DATA_AVAILABLE;
- else
- mask = SDHCI_SPACE_AVAILABLE;
+ blksize = host->data->blksz;
+ chunk_remain = 0;
+ data = 0;
buffer = sdhci_kmap_sg(host) + host->offset;
- /* Transfer shouldn't take more than 5 s */
- max_jiffies = jiffies + HZ * 5;
+ while (blksize) {
+ if (chunk_remain == 0) {
+ data = readl(host->ioaddr + SDHCI_BUFFER);
+ chunk_remain = min(blksize, 4);
+ }
- while (host->size > 0) {
- if (time_after(jiffies, max_jiffies)) {
- printk(KERN_ERR "%s: PIO transfer stalled. "
- "Please report this to "
- BUGMAIL ".\n", mmc_hostname(host->mmc));
- sdhci_dumpregs(host);
+ size = min(host->size, host->remain);
+ size = min(size, chunk_remain);
- sdhci_kunmap_sg(host);
+ chunk_remain -= size;
+ blksize -= size;
+ host->offset += size;
+ host->remain -= size;
+ host->size -= size;
+ while (size) {
+ *buffer = data & 0xFF;
+ buffer++;
+ data >>= 8;
+ size--;
+ }
- host->data->error = MMC_ERR_FAILED;
- sdhci_finish_data(host);
- return;
+ if (host->remain == 0) {
+ sdhci_kunmap_sg(host);
+ if (sdhci_next_sg(host) == 0) {
+ BUG_ON(blksize != 0);
+ return;
+ }
+ buffer = sdhci_kmap_sg(host);
}
+ }
- if (!(readl(host->ioaddr + SDHCI_PRESENT_STATE) & mask))
- continue;
+ sdhci_kunmap_sg(host);
+}
- size = min(host->size, host->remain);
+static void sdhci_write_block_pio(struct sdhci_host *host)
+{
+ int blksize, chunk_remain;
+ u32 data;
+ char *buffer;
+ int bytes, size;
- if (size >= 4) {
- if (host->data->flags & MMC_DATA_READ)
- *(u32*)buffer = readl(host->ioaddr + SDHCI_BUFFER);
- else
- writel(*(u32*)buffer, host->ioaddr + SDHCI_BUFFER);
- size = 4;
- } else if (size >= 2) {
- if (host->data->flags & MMC_DATA_READ)
- *(u16*)buffer = readw(host->ioaddr + SDHCI_BUFFER);
- else
- writew(*(u16*)buffer, host->ioaddr + SDHCI_BUFFER);
- size = 2;
- } else {
- if (host->data->flags & MMC_DATA_READ)
- *(u8*)buffer = readb(host->ioaddr + SDHCI_BUFFER);
- else
- writeb(*(u8*)buffer, host->ioaddr + SDHCI_BUFFER);
- size = 1;
- }
+ DBG("PIO writing\n");
+
+ blksize = host->data->blksz;
+ chunk_remain = 4;
+ data = 0;
- buffer += size;
+ bytes = 0;
+ buffer = sdhci_kmap_sg(host) + host->offset;
+
+ while (blksize) {
+ size = min(host->size, host->remain);
+ size = min(size, chunk_remain);
+
+ chunk_remain -= size;
+ blksize -= size;
host->offset += size;
host->remain -= size;
-
- bytes += size;
host->size -= size;
+ while (size) {
+ data >>= 8;
+ data |= (u32)*buffer << 24;
+ buffer++;
+ size--;
+ }
+
+ if (chunk_remain == 0) {
+ writel(data, host->ioaddr + SDHCI_BUFFER);
+ chunk_remain = min(blksize, 4);
+ }
if (host->remain == 0) {
sdhci_kunmap_sg(host);
if (sdhci_next_sg(host) == 0) {
- DBG("PIO transfer: %d bytes\n", bytes);
+ BUG_ON(blksize != 0);
return;
}
buffer = sdhci_kmap_sg(host);
}
sdhci_kunmap_sg(host);
+}
+
+static void sdhci_transfer_pio(struct sdhci_host *host)
+{
+ u32 mask;
+
+ BUG_ON(!host->data);
+
+ if (host->size == 0)
+ return;
+
+ if (host->data->flags & MMC_DATA_READ)
+ mask = SDHCI_DATA_AVAILABLE;
+ else
+ mask = SDHCI_SPACE_AVAILABLE;
+
+ while (readl(host->ioaddr + SDHCI_PRESENT_STATE) & mask) {
+ if (host->data->flags & MMC_DATA_READ)
+ sdhci_read_block_pio(host);
+ else
+ sdhci_write_block_pio(host);
+
+ if (host->size == 0)
+ break;
+
+ BUG_ON(host->num_sg == 0);
+ }
- DBG("PIO transfer: %d bytes\n", bytes);
+ DBG("PIO transfer complete.\n");
}
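The new block routines above move data between the 32-bit SDHCI_BUFFER register and the scatterlist one byte at a time, least-significant byte first. A minimal user-space sketch of just that packing and unpacking arithmetic (the register value and the helper names are made up for illustration):

#include <stdio.h>
#include <stdint.h>

/* Unpack a 32-bit FIFO word into bytes, LSB first (mirrors the read path). */
static void unpack_word(uint32_t data, uint8_t *buf)
{
	int i;

	for (i = 0; i < 4; i++) {
		buf[i] = data & 0xFF;
		data >>= 8;
	}
}

/* Pack four bytes into a 32-bit FIFO word (mirrors the write path). */
static uint32_t pack_word(const uint8_t *buf)
{
	uint32_t data = 0;
	int i;

	for (i = 0; i < 4; i++) {
		data >>= 8;
		data |= (uint32_t)buf[i] << 24;
	}
	return data;
}

int main(void)
{
	uint8_t bytes[4];
	uint32_t word = 0x44332211;	/* made-up register value */

	unpack_word(word, bytes);			/* bytes = 11 22 33 44 */
	printf("repacked: 0x%08x\n", pack_word(bytes));	/* 0x44332211 */
	return 0;
}

pack_word() mirrors the "data >>= 8; data |= (u32)*buffer << 24;" sequence in sdhci_write_block_pio(), so repacking the four bytes reproduces the original word.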
static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_data *data)
{
- u16 mode;
+ u8 count;
+ unsigned target_timeout, current_timeout;
WARN_ON(host->data);
- if (data == NULL) {
- writew(0, host->ioaddr + SDHCI_TRANSFER_MODE);
+ if (data == NULL)
return;
- }
DBG("blksz %04x blks %04x flags %08x\n",
data->blksz, data->blocks, data->flags);
DBG("tsac %d ms nsac %d clk\n",
data->timeout_ns / 1000000, data->timeout_clks);
- mode = SDHCI_TRNS_BLK_CNT_EN;
- if (data->blocks > 1)
- mode |= SDHCI_TRNS_MULTI;
- if (data->flags & MMC_DATA_READ)
- mode |= SDHCI_TRNS_READ;
- if (host->flags & SDHCI_USE_DMA)
- mode |= SDHCI_TRNS_DMA;
+ /* Sanity checks */
+ BUG_ON(data->blksz * data->blocks > 524288);
+ BUG_ON(data->blksz > host->max_block);
+ BUG_ON(data->blocks > 65535);
- writew(mode, host->ioaddr + SDHCI_TRANSFER_MODE);
+ /* timeout in us */
+ target_timeout = data->timeout_ns / 1000 +
+ data->timeout_clks / host->clock;
- writew(data->blksz, host->ioaddr + SDHCI_BLOCK_SIZE);
- writew(data->blocks, host->ioaddr + SDHCI_BLOCK_COUNT);
+ /*
+ * Figure out needed cycles.
+ * We do this in steps in order to fit inside a 32 bit int.
+ * The first step is the minimum timeout, which will have a
+ * minimum resolution of 6 bits:
+ * (1) 2^13*1000 > 2^22,
+ * (2) host->timeout_clk < 2^16
+ * =>
+ * (1) / (2) > 2^6
+ */
+ count = 0;
+ current_timeout = (1 << 13) * 1000 / host->timeout_clk;
+ while (current_timeout < target_timeout) {
+ count++;
+ current_timeout <<= 1;
+ if (count >= 0xF)
+ break;
+ }
+
+ if (count >= 0xF) {
+ printk(KERN_WARNING "%s: Too large timeout requested!\n",
+ mmc_hostname(host->mmc));
+ count = 0xE;
+ }
+
+ writeb(count, host->ioaddr + SDHCI_TIMEOUT_CONTROL);
if (host->flags & SDHCI_USE_DMA) {
int count;
host->offset = 0;
host->remain = host->cur_sg->length;
}
+
+ /* We do not handle DMA boundaries, so set it to max (512 KiB) */
+ writew(SDHCI_MAKE_BLKSZ(7, data->blksz),
+ host->ioaddr + SDHCI_BLOCK_SIZE);
+ writew(data->blocks, host->ioaddr + SDHCI_BLOCK_COUNT);
+}
+
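The timeout setup in sdhci_prepare_data() starts from the hardware minimum of 2^13 timeout clocks, expresses it in microseconds, and doubles it until it covers the requested timeout; the number of doublings is what gets written to SDHCI_TIMEOUT_CONTROL. A worked standalone version of that arithmetic with illustrative inputs (a 1 MHz timeout clock and a 100 ms request are examples, not values from the patch):

#include <stdio.h>

/*
 * target_timeout: requested timeout in us
 * timeout_clk:    timeout clock in kHz
 * Returns the TIMEOUT_CONTROL count (timeout = 2^(13 + count) clocks).
 */
static unsigned int sdhci_timeout_count(unsigned int target_timeout,
					unsigned int timeout_clk)
{
	unsigned int count = 0;
	unsigned int current_timeout = (1 << 13) * 1000 / timeout_clk;

	while (current_timeout < target_timeout && count < 0xF) {
		count++;
		current_timeout <<= 1;
	}
	return count >= 0xF ? 0xE : count;	/* clamp, as the driver does */
}

int main(void)
{
	/* e.g. 100 ms requested, 1 MHz timeout clock (illustrative) */
	printf("count = %u\n", sdhci_timeout_count(100 * 1000, 1000));
	return 0;
}

With these inputs the loop doubles 8192 us four times to 131072 us, so count = 4, i.e. a timeout of 2^17 timeout clocks.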
+static void sdhci_set_transfer_mode(struct sdhci_host *host,
+ struct mmc_data *data)
+{
+ u16 mode;
+
+ WARN_ON(host->data);
+
+ if (data == NULL)
+ return;
+
+ mode = SDHCI_TRNS_BLK_CNT_EN;
+ if (data->blocks > 1)
+ mode |= SDHCI_TRNS_MULTI;
+ if (data->flags & MMC_DATA_READ)
+ mode |= SDHCI_TRNS_READ;
+ if (host->flags & SDHCI_USE_DMA)
+ mode |= SDHCI_TRNS_DMA;
+
+ writew(mode, host->ioaddr + SDHCI_TRANSFER_MODE);
}
static void sdhci_finish_data(struct sdhci_host *host)
{
struct mmc_data *data;
- u32 intmask;
u16 blocks;
BUG_ON(!host->data);
if (host->flags & SDHCI_USE_DMA) {
pci_unmap_sg(host->chip->pdev, data->sg, data->sg_len,
(data->flags & MMC_DATA_READ)?PCI_DMA_FROMDEVICE:PCI_DMA_TODEVICE);
- } else {
- intmask = readl(host->ioaddr + SDHCI_SIGNAL_ENABLE);
- intmask &= ~(SDHCI_INT_BUF_EMPTY | SDHCI_INT_BUF_FULL);
- writel(intmask, host->ioaddr + SDHCI_SIGNAL_ENABLE);
-
- intmask = readl(host->ioaddr + SDHCI_INT_ENABLE);
- intmask &= ~(SDHCI_INT_BUF_EMPTY | SDHCI_INT_BUF_FULL);
- writel(intmask, host->ioaddr + SDHCI_INT_ENABLE);
}
/*
"though there were blocks left. Please report this "
"to " BUGMAIL ".\n", mmc_hostname(host->mmc));
data->error = MMC_ERR_FAILED;
- }
-
- if (host->size != 0) {
+ } else if (host->size != 0) {
printk(KERN_ERR "%s: %d bytes were left untransferred. "
"Please report this to " BUGMAIL ".\n",
mmc_hostname(host->mmc), host->size);
static void sdhci_send_command(struct sdhci_host *host, struct mmc_command *cmd)
{
int flags;
- u32 present;
- unsigned long max_jiffies;
+ u32 mask;
+ unsigned long timeout;
WARN_ON(host->cmd);
DBG("Sending cmd (%x)\n", cmd->opcode);
/* Wait max 10 ms */
- max_jiffies = jiffies + (HZ + 99)/100;
- do {
- if (time_after(jiffies, max_jiffies)) {
+ timeout = 10;
+
+ mask = SDHCI_CMD_INHIBIT;
+ if ((cmd->data != NULL) || (cmd->flags & MMC_RSP_BUSY))
+ mask |= SDHCI_DATA_INHIBIT;
+
+ /* We shouldn't wait for data inhibit for stop commands, even
+ though they might use busy signaling */
+ if (host->mrq->data && (cmd == host->mrq->data->stop))
+ mask &= ~SDHCI_DATA_INHIBIT;
+
+ while (readl(host->ioaddr + SDHCI_PRESENT_STATE) & mask) {
+ if (timeout == 0) {
printk(KERN_ERR "%s: Controller never released "
- "inhibit bits. Please report this to "
+ "inhibit bit(s). Please report this to "
BUGMAIL ".\n", mmc_hostname(host->mmc));
sdhci_dumpregs(host);
cmd->error = MMC_ERR_FAILED;
tasklet_schedule(&host->finish_tasklet);
return;
}
- present = readl(host->ioaddr + SDHCI_PRESENT_STATE);
- } while (present & (SDHCI_CMD_INHIBIT | SDHCI_DATA_INHIBIT));
+ timeout--;
+ mdelay(1);
+ }
mod_timer(&host->timer, jiffies + 10 * HZ);
writel(cmd->arg, host->ioaddr + SDHCI_ARGUMENT);
+ sdhci_set_transfer_mode(host, cmd->data);
+
if ((cmd->flags & MMC_RSP_136) && (cmd->flags & MMC_RSP_BUSY)) {
printk(KERN_ERR "%s: Unsupported response type! "
"Please report this to " BUGMAIL ".\n",
DBG("Ending cmd (%x)\n", host->cmd->opcode);
- if (host->cmd->data) {
- u32 intmask;
-
+ if (host->cmd->data)
host->data = host->cmd->data;
-
- if (!(host->flags & SDHCI_USE_DMA)) {
- /*
- * Don't enable the interrupts until now to make sure we
- * get stable handling of the FIFO.
- */
- intmask = readl(host->ioaddr + SDHCI_INT_ENABLE);
- intmask |= SDHCI_INT_BUF_EMPTY | SDHCI_INT_BUF_FULL;
- writel(intmask, host->ioaddr + SDHCI_INT_ENABLE);
-
- intmask = readl(host->ioaddr + SDHCI_SIGNAL_ENABLE);
- intmask |= SDHCI_INT_BUF_EMPTY | SDHCI_INT_BUF_FULL;
- writel(intmask, host->ioaddr + SDHCI_SIGNAL_ENABLE);
-
- /*
- * The buffer interrupts are to unreliable so we
- * start the transfer immediatly.
- */
- sdhci_transfer_pio(host);
- }
- } else
+ else
tasklet_schedule(&host->finish_tasklet);
host->cmd = NULL;
{
int div;
u16 clk;
- unsigned long max_jiffies;
+ unsigned long timeout;
if (clock == host->clock)
return;
writew(clk, host->ioaddr + SDHCI_CLOCK_CONTROL);
/* Wait max 10 ms */
- max_jiffies = jiffies + (HZ + 99)/100;
- do {
- if (time_after(jiffies, max_jiffies)) {
+ timeout = 10;
+ while (!((clk = readw(host->ioaddr + SDHCI_CLOCK_CONTROL))
+ & SDHCI_CLOCK_INT_STABLE)) {
+ if (timeout == 0) {
printk(KERN_ERR "%s: Internal clock never stabilised. "
"Please report this to " BUGMAIL ".\n",
mmc_hostname(host->mmc));
sdhci_dumpregs(host);
return;
}
- clk = readw(host->ioaddr + SDHCI_CLOCK_CONTROL);
- } while (!(clk & SDHCI_CLOCK_INT_STABLE));
+ timeout--;
+ mdelay(1);
+ }
clk |= SDHCI_CLOCK_CARD_EN;
writew(clk, host->ioaddr + SDHCI_CLOCK_CONTROL);
host->clock = clock;
}
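A brief aside on the waiting idiom above (and in sdhci_send_command() earlier): the bounded mdelay() loop replaces the jiffies-based deadline, presumably because these paths can run with interrupts disabled under the host lock, where jiffies may not advance. A generic sketch of the idiom, with the helper name invented purely for illustration:

	/* illustrative sketch only -- not part of the patch */
	static int wait_for_bits_clear(void __iomem *reg, u32 mask)
	{
		unsigned long timeout = 10;	/* roughly 10 ms in total */

		while (readl(reg) & mask) {
			if (timeout == 0)
				return -ETIMEDOUT;	/* caller reports and bails out */
			timeout--;
			mdelay(1);	/* busy-wait; safe with the lock held */
		}
		return 0;
	}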
+static void sdhci_set_power(struct sdhci_host *host, unsigned short power)
+{
+ u8 pwr;
+
+ if (host->power == power)
+ return;
+
+ writeb(0, host->ioaddr + SDHCI_POWER_CONTROL);
+
+ if (power == (unsigned short)-1)
+ goto out;
+
+ pwr = SDHCI_POWER_ON;
+
+ switch (power) {
+ case MMC_VDD_170:
+ case MMC_VDD_180:
+ case MMC_VDD_190:
+ pwr |= SDHCI_POWER_180;
+ break;
+ case MMC_VDD_290:
+ case MMC_VDD_300:
+ case MMC_VDD_310:
+ pwr |= SDHCI_POWER_300;
+ break;
+ case MMC_VDD_320:
+ case MMC_VDD_330:
+ case MMC_VDD_340:
+ pwr |= SDHCI_POWER_330;
+ break;
+ default:
+ BUG();
+ }
+
+ writeb(pwr, host->ioaddr + SDHCI_POWER_CONTROL);
+
+out:
+ host->power = power;
+}
+
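For orientation (not part of the patch): a 3.3 V request now flows through the table above instead of blindly writing 0xFF; with the SDHCI_POWER_* values added to the header further down, MMC_VDD_330 would end up as:

	/* illustrative only: 0x01 | 0x0E == 0x0F */
	writeb(SDHCI_POWER_ON | SDHCI_POWER_330,
	       host->ioaddr + SDHCI_POWER_CONTROL);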
/*****************************************************************************\
* *
* MMC callbacks *
*/
if (ios->power_mode == MMC_POWER_OFF) {
writel(0, host->ioaddr + SDHCI_SIGNAL_ENABLE);
- spin_unlock_irqrestore(&host->lock, flags);
sdhci_init(host);
- spin_lock_irqsave(&host->lock, flags);
}
sdhci_set_clock(host, ios->clock);
if (ios->power_mode == MMC_POWER_OFF)
- writeb(0, host->ioaddr + SDHCI_POWER_CONTROL);
+ sdhci_set_power(host, -1);
else
- writeb(0xFF, host->ioaddr + SDHCI_POWER_CONTROL);
+ sdhci_set_power(host, ios->vdd);
ctrl = readb(host->ioaddr + SDHCI_HOST_CONTROL);
if (ios->bus_width == MMC_BUS_WIDTH_4)
if ((mrq->cmd->error != MMC_ERR_NONE) ||
(mrq->data && ((mrq->data->error != MMC_ERR_NONE) ||
(mrq->data->stop && (mrq->data->stop->error != MMC_ERR_NONE))))) {
+
+ /* Some controllers need this kick or reset won't work here */
+ if (host->chip->quirks & SDHCI_QUIRK_CLOCK_BEFORE_RESET) {
+ unsigned int clock;
+
+ /* This is to force an update */
+ clock = host->clock;
+ host->clock = 0;
+ sdhci_set_clock(host, clock);
+ }
+
+ /* Spec says we should do both at the same time, but Ricoh
+ controllers do not like that. */
sdhci_reset(host, SDHCI_RESET_CMD);
sdhci_reset(host, SDHCI_RESET_DATA);
}
if (host->data->error != MMC_ERR_NONE)
sdhci_finish_data(host);
else {
- if (intmask & (SDHCI_INT_BUF_FULL | SDHCI_INT_BUF_EMPTY))
+ if (intmask & (SDHCI_INT_DATA_AVAIL | SDHCI_INT_SPACE_AVAIL))
sdhci_transfer_pio(host);
if (intmask & SDHCI_INT_DATA_END)
DBG("*** %s got interrupt: 0x%08x\n", host->slot_descr, intmask);
- if (intmask & (SDHCI_INT_CARD_INSERT | SDHCI_INT_CARD_REMOVE))
+ if (intmask & (SDHCI_INT_CARD_INSERT | SDHCI_INT_CARD_REMOVE)) {
+ writel(intmask & (SDHCI_INT_CARD_INSERT | SDHCI_INT_CARD_REMOVE),
+ host->ioaddr + SDHCI_INT_STATUS);
tasklet_schedule(&host->card_tasklet);
+ }
- if (intmask & SDHCI_INT_CMD_MASK) {
- sdhci_cmd_irq(host, intmask & SDHCI_INT_CMD_MASK);
+ intmask &= ~(SDHCI_INT_CARD_INSERT | SDHCI_INT_CARD_REMOVE);
+ if (intmask & SDHCI_INT_CMD_MASK) {
writel(intmask & SDHCI_INT_CMD_MASK,
host->ioaddr + SDHCI_INT_STATUS);
+ sdhci_cmd_irq(host, intmask & SDHCI_INT_CMD_MASK);
}
if (intmask & SDHCI_INT_DATA_MASK) {
- sdhci_data_irq(host, intmask & SDHCI_INT_DATA_MASK);
-
writel(intmask & SDHCI_INT_DATA_MASK,
host->ioaddr + SDHCI_INT_STATUS);
+ sdhci_data_irq(host, intmask & SDHCI_INT_DATA_MASK);
}
intmask &= ~(SDHCI_INT_CMD_MASK | SDHCI_INT_DATA_MASK);
- if (intmask & SDHCI_INT_CARD_INT) {
- printk(KERN_ERR "%s: Unexpected card interrupt. Please "
- "report this to " BUGMAIL ".\n",
- mmc_hostname(host->mmc));
- sdhci_dumpregs(host);
- }
-
if (intmask & SDHCI_INT_BUS_POWER) {
- printk(KERN_ERR "%s: Unexpected bus power interrupt. Please "
- "report this to " BUGMAIL ".\n",
+ printk(KERN_ERR "%s: Card is consuming too much power!\n",
mmc_hostname(host->mmc));
- sdhci_dumpregs(host);
+ writel(SDHCI_INT_BUS_POWER, host->ioaddr + SDHCI_INT_STATUS);
}
- if (intmask & SDHCI_INT_ACMD12ERR) {
- printk(KERN_ERR "%s: Unexpected auto CMD12 error. Please "
+ intmask &= ~SDHCI_INT_BUS_POWER;
+
+ if (intmask) {
+ printk(KERN_ERR "%s: Unexpected interrupt 0x%08x. Please "
"report this to " BUGMAIL ".\n",
- mmc_hostname(host->mmc));
+ mmc_hostname(host->mmc), intmask);
sdhci_dumpregs(host);
- writew(~0, host->ioaddr + SDHCI_ACMD12_ERR);
- }
-
- if (intmask)
writel(intmask, host->ioaddr + SDHCI_INT_STATUS);
+ }
result = IRQ_HANDLED;
static int __devinit sdhci_probe_slot(struct pci_dev *pdev, int slot)
{
int ret;
+ unsigned int version;
struct sdhci_chip *chip;
struct mmc_host *mmc;
struct sdhci_host *host;
return -ENODEV;
}
+ if ((pdev->class & 0x0000FF) == PCI_SDHCI_IFVENDOR) {
+ printk(KERN_ERR DRIVER_NAME ": Vendor specific interface. Aborting.\n");
+ return -ENODEV;
+ }
+
+ if ((pdev->class & 0x0000FF) > PCI_SDHCI_IFVENDOR) {
+ printk(KERN_ERR DRIVER_NAME ": Unknown interface. Aborting.\n");
+ return -ENODEV;
+ }
+
mmc = mmc_alloc_host(sizeof(struct sdhci_host), &pdev->dev);
if (!mmc)
return -ENOMEM;
goto release;
}
+ sdhci_reset(host, SDHCI_RESET_ALL);
+
+ version = readw(host->ioaddr + SDHCI_HOST_VERSION);
+ version = (version & SDHCI_SPEC_VER_MASK) >> SDHCI_SPEC_VER_SHIFT;
+ if (version != 0) {
+ printk(KERN_ERR "%s: Unknown controller version (%d). "
+ "Cowardly refusing to continue.\n", host->slot_descr,
+ version);
+ ret = -ENODEV;
+ goto unmap;
+ }
+
caps = readl(host->ioaddr + SDHCI_CAPABILITIES);
- if ((caps & SDHCI_CAN_DO_DMA) && ((pdev->class & 0x0000FF) == 0x01))
+ if (debug_nodma)
+ DBG("DMA forced off\n");
+ else if (debug_forcedma) {
+ DBG("DMA forced on\n");
+ host->flags |= SDHCI_USE_DMA;
+ } else if (chip->quirks & SDHCI_QUIRK_FORCE_DMA)
+ host->flags |= SDHCI_USE_DMA;
+ else if ((pdev->class & 0x0000FF) != PCI_SDHCI_IFDMA)
+ DBG("Controller doesn't have DMA interface\n");
+ else if (!(caps & SDHCI_CAN_DO_DMA))
+ DBG("Controller doesn't have DMA capability\n");
+ else
host->flags |= SDHCI_USE_DMA;
if (host->flags & SDHCI_USE_DMA) {
else /* XXX: Hack to get MMC layer to avoid highmem */
pdev->dma_mask = 0;
- host->max_clk = (caps & SDHCI_CLOCK_BASE_MASK) >> SDHCI_CLOCK_BASE_SHIFT;
+ host->max_clk =
+ (caps & SDHCI_CLOCK_BASE_MASK) >> SDHCI_CLOCK_BASE_SHIFT;
+ if (host->max_clk == 0) {
+ printk(KERN_ERR "%s: Hardware doesn't specify base clock "
+ "frequency.\n", host->slot_descr);
+ ret = -ENODEV;
+ goto unmap;
+ }
host->max_clk *= 1000000;
+ host->timeout_clk =
+ (caps & SDHCI_TIMEOUT_CLK_MASK) >> SDHCI_TIMEOUT_CLK_SHIFT;
+ if (host->timeout_clk == 0) {
+ printk(KERN_ERR "%s: Hardware doesn't specify timeout clock "
+ "frequency.\n", host->slot_descr);
+ ret = -ENODEV;
+ goto unmap;
+ }
+ if (caps & SDHCI_TIMEOUT_CLK_UNIT)
+ host->timeout_clk *= 1000;
+
+ host->max_block = (caps & SDHCI_MAX_BLOCK_MASK) >> SDHCI_MAX_BLOCK_SHIFT;
+ if (host->max_block >= 3) {
+ printk(KERN_ERR "%s: Invalid maximum block size.\n",
+ host->slot_descr);
+ ret = -ENODEV;
+ goto unmap;
+ }
+ host->max_block = 512 << host->max_block;
+
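To make the decode above concrete (illustrative, not part of the patch): the two-bit capability field maps 0, 1 and 2 to 512, 1024 and 2048 bytes, and 3 is rejected by the check before the shift:

	/* e.g. a caps field value of 2 gives */
	host->max_block = 512 << 2;	/* == 2048 bytes */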
/*
* Set host parameters.
*/
mmc->ops = &sdhci_ops;
mmc->f_min = host->max_clk / 256;
mmc->f_max = host->max_clk;
- mmc->ocr_avail = MMC_VDD_32_33|MMC_VDD_33_34;
mmc->caps = MMC_CAP_4_BIT_DATA;
+ mmc->ocr_avail = 0;
+ if (caps & SDHCI_CAN_VDD_330)
+ mmc->ocr_avail |= MMC_VDD_32_33|MMC_VDD_33_34;
+ else if (caps & SDHCI_CAN_VDD_300)
+ mmc->ocr_avail |= MMC_VDD_29_30|MMC_VDD_30_31;
+ else if (caps & SDHCI_CAN_VDD_180)
+ mmc->ocr_avail |= MMC_VDD_17_18|MMC_VDD_18_19;
+
+ if (mmc->ocr_avail == 0) {
+ printk(KERN_ERR "%s: Hardware doesn't report any "
+ "supported voltages.\n", host->slot_descr);
+ ret = -ENODEV;
+ goto unmap;
+ }
+
spin_lock_init(&host->lock);
/*
mmc->max_phys_segs = 16;
/*
- * Maximum number of sectors in one transfer. Limited by sector
- * count register.
+ * Maximum number of sectors in one transfer. Limited by DMA boundary
+ * size (512 KiB), which allows 512 KiB / 512 = 1024 sectors per transfer.
*/
- mmc->max_sectors = 0x3FFF;
+ mmc->max_sectors = 1024;
/*
* Maximum segment size. Could be one segment with the maximum number
setup_timer(&host->timer, sdhci_timeout_timer, (long)host);
- ret = request_irq(host->irq, sdhci_irq, SA_SHIRQ,
+ ret = request_irq(host->irq, sdhci_irq, IRQF_SHARED,
host->slot_descr, host);
if (ret)
- goto unmap;
+ goto untasklet;
sdhci_init(host);
return 0;
-unmap:
+untasklet:
tasklet_kill(&host->card_tasklet);
tasklet_kill(&host->finish_tasklet);
-
+unmap:
iounmap(host->ioaddr);
release:
pci_release_region(pdev, host->bar);
const struct pci_device_id *ent)
{
int ret, i;
- u8 slots;
+ u8 slots, rev;
struct sdhci_chip *chip;
BUG_ON(pdev == NULL);
BUG_ON(ent == NULL);
- DBG("found at %s\n", pci_name(pdev));
+ pci_read_config_byte(pdev, PCI_CLASS_REVISION, &rev);
+
+ printk(KERN_INFO DRIVER_NAME
+ ": SDHCI controller found at %s [%04x:%04x] (rev %x)\n",
+ pci_name(pdev), (int)pdev->vendor, (int)pdev->device,
+ (int)rev);
ret = pci_read_config_byte(pdev, PCI_SLOT_INFO, &slots);
if (ret)
}
chip->pdev = pdev;
+ chip->quirks = ent->driver_data;
+
+ if (debug_quirks)
+ chip->quirks = debug_quirks;
chip->num_slots = slots;
pci_set_drvdata(pdev, chip);
module_init(sdhci_drv_init);
module_exit(sdhci_drv_exit);
+module_param(debug_nodma, uint, 0444);
+module_param(debug_forcedma, uint, 0444);
+module_param(debug_quirks, uint, 0444);
+
MODULE_AUTHOR("Pierre Ossman <drzeus@drzeus.cx>");
MODULE_DESCRIPTION("Secure Digital Host Controller Interface driver");
MODULE_VERSION(DRIVER_VERSION);
MODULE_LICENSE("GPL");
+
+MODULE_PARM_DESC(debug_nodma, "Forcefully disable DMA transfers. (default 0)");
+MODULE_PARM_DESC(debug_forcedma, "Forcefully enable DMA transfers. (default 0)");
+MODULE_PARM_DESC(debug_quirks, "Force certain quirks.");
* PCI registers
*/
+#define PCI_SDHCI_IFPIO 0x00
+#define PCI_SDHCI_IFDMA 0x01
+#define PCI_SDHCI_IFVENDOR 0x02
+
#define PCI_SLOT_INFO 0x40 /* 8 bits */
#define PCI_SLOT_INFO_SLOTS(x) ((x >> 4) & 7)
#define PCI_SLOT_INFO_FIRST_BAR_MASK 0x07
#define SDHCI_DMA_ADDRESS 0x00
#define SDHCI_BLOCK_SIZE 0x04
+#define SDHCI_MAKE_BLKSZ(dma, blksz) (((dma & 0x7) << 12) | (blksz & 0xFFF))
#define SDHCI_BLOCK_COUNT 0x06
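For orientation (not part of the patch): SDHCI_MAKE_BLKSZ() above packs the SDMA buffer-boundary select into bits 12-14 and the block length into bits 0-11. Assuming the usual encoding where 7 selects the largest (512 KiB) boundary, 512-byte blocks would be programmed as:

	/* illustrative only: (7 << 12) | 512 == 0x7200 */
	writew(SDHCI_MAKE_BLKSZ(7, 512), host->ioaddr + SDHCI_BLOCK_SIZE);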
#define SDHCI_CTRL_4BITBUS 0x02
#define SDHCI_POWER_CONTROL 0x29
+#define SDHCI_POWER_ON 0x01
+#define SDHCI_POWER_180 0x0A
+#define SDHCI_POWER_300 0x0C
+#define SDHCI_POWER_330 0x0E
#define SDHCI_BLOCK_GAP_CONTROL 0x2A
#define SDHCI_INT_RESPONSE 0x00000001
#define SDHCI_INT_DATA_END 0x00000002
#define SDHCI_INT_DMA_END 0x00000008
-#define SDHCI_INT_BUF_EMPTY 0x00000010
-#define SDHCI_INT_BUF_FULL 0x00000020
+#define SDHCI_INT_SPACE_AVAIL 0x00000010
+#define SDHCI_INT_DATA_AVAIL 0x00000020
#define SDHCI_INT_CARD_INSERT 0x00000040
#define SDHCI_INT_CARD_REMOVE 0x00000080
#define SDHCI_INT_CARD_INT 0x00000100
#define SDHCI_INT_CMD_MASK (SDHCI_INT_RESPONSE | SDHCI_INT_TIMEOUT | \
SDHCI_INT_CRC | SDHCI_INT_END_BIT | SDHCI_INT_INDEX)
#define SDHCI_INT_DATA_MASK (SDHCI_INT_DATA_END | SDHCI_INT_DMA_END | \
- SDHCI_INT_BUF_EMPTY | SDHCI_INT_BUF_FULL | \
+ SDHCI_INT_DATA_AVAIL | SDHCI_INT_SPACE_AVAIL | \
SDHCI_INT_DATA_TIMEOUT | SDHCI_INT_DATA_CRC | \
SDHCI_INT_DATA_END_BIT)
/* 3E-3F reserved */
#define SDHCI_CAPABILITIES 0x40
-#define SDHCI_CAN_DO_DMA 0x00400000
+#define SDHCI_TIMEOUT_CLK_MASK 0x0000003F
+#define SDHCI_TIMEOUT_CLK_SHIFT 0
+#define SDHCI_TIMEOUT_CLK_UNIT 0x00000080
#define SDHCI_CLOCK_BASE_MASK 0x00003F00
#define SDHCI_CLOCK_BASE_SHIFT 8
+#define SDHCI_MAX_BLOCK_MASK 0x00030000
+#define SDHCI_MAX_BLOCK_SHIFT 16
+#define SDHCI_CAN_DO_DMA 0x00400000
+#define SDHCI_CAN_VDD_330 0x01000000
+#define SDHCI_CAN_VDD_300 0x02000000
+#define SDHCI_CAN_VDD_180 0x04000000
/* 44-47 reserved for more caps */
#define SDHCI_SLOT_INT_STATUS 0xFC
#define SDHCI_HOST_VERSION 0xFE
+#define SDHCI_VENDOR_VER_MASK 0xFF00
+#define SDHCI_VENDOR_VER_SHIFT 8
+#define SDHCI_SPEC_VER_MASK 0x00FF
+#define SDHCI_SPEC_VER_SHIFT 0
struct sdhci_chip;
#define SDHCI_USE_DMA (1<<0)
unsigned int max_clk; /* Max possible freq (MHz) */
+ unsigned int timeout_clk; /* Timeout freq (KHz) */
+ unsigned int max_block; /* Max block size (bytes) */
unsigned int clock; /* Current clock (MHz) */
+ unsigned short power; /* Current voltage */
struct mmc_request *mrq; /* Current request */
struct mmc_command *cmd; /* Current command */
struct sdhci_chip {
struct pci_dev *pdev;
+ unsigned long quirks;
+
int num_slots; /* Slots on controller */
struct sdhci_host *hosts[0]; /* Pointers to hosts */
};
* Allocate interrupt.
*/
- ret = request_irq(irq, wbsd_irq, SA_SHIRQ, DRIVER_NAME, host);
+ ret = request_irq(irq, wbsd_irq, IRQF_SHARED, DRIVER_NAME, host);
if (ret)
return ret;
size_t *retlen, u_char *buf);
static int doc_write(struct mtd_info *mtd, loff_t to, size_t len,
size_t *retlen, const u_char *buf);
-static int doc_read_ecc(struct mtd_info *mtd, loff_t from, size_t len,
- size_t *retlen, u_char *buf, u_char *eccbuf, struct nand_oobinfo *oobsel);
-static int doc_write_ecc(struct mtd_info *mtd, loff_t to, size_t len,
- size_t *retlen, const u_char *buf, u_char *eccbuf, struct nand_oobinfo *oobsel);
static int doc_read_oob(struct mtd_info *mtd, loff_t ofs,
struct mtd_oob_ops *ops);
static int doc_write_oob(struct mtd_info *mtd, loff_t ofs,
static int doc_read(struct mtd_info *mtd, loff_t from, size_t len,
size_t * retlen, u_char * buf)
-{
- /* Just a special case of doc_read_ecc */
- return doc_read_ecc(mtd, from, len, retlen, buf, NULL, NULL);
-}
-
-static int doc_read_ecc(struct mtd_info *mtd, loff_t from, size_t len,
- size_t * retlen, u_char * buf, u_char * eccbuf, struct nand_oobinfo *oobsel)
{
struct DiskOnChip *this = mtd->priv;
void __iomem *docptr = this->virtadr;
struct Nand *mychip;
- unsigned char syndrome[6];
+ unsigned char syndrome[6], eccbuf[6];
volatile char dummy;
int i, len256 = 0, ret=0;
size_t left = len;
DoC_Address(this, ADDR_COLUMN_PAGE, from, CDSN_CTRL_WP,
CDSN_CTRL_ECC_IO);
- if (eccbuf) {
- /* Prime the ECC engine */
- WriteDOC(DOC_ECC_RESET, docptr, ECCConf);
- WriteDOC(DOC_ECC_EN, docptr, ECCConf);
- } else {
- /* disable the ECC engine */
- WriteDOC(DOC_ECC_RESET, docptr, ECCConf);
- WriteDOC(DOC_ECC_DIS, docptr, ECCConf);
- }
+ /* Prime the ECC engine */
+ WriteDOC(DOC_ECC_RESET, docptr, ECCConf);
+ WriteDOC(DOC_ECC_EN, docptr, ECCConf);
/* treat crossing 256-byte sector for 2M x 8bits devices */
if (this->page256 && from + len > (from | 0xff) + 1) {
/* Let the caller know we completed it */
*retlen += len;
- if (eccbuf) {
- /* Read the ECC data through the DiskOnChip ECC logic */
- /* Note: this will work even with 2M x 8bit devices as */
- /* they have 8 bytes of OOB per 256 page. mf. */
- DoC_ReadBuf(this, eccbuf, 6);
-
- /* Flush the pipeline */
- if (DoC_is_Millennium(this)) {
- dummy = ReadDOC(docptr, ECCConf);
- dummy = ReadDOC(docptr, ECCConf);
- i = ReadDOC(docptr, ECCConf);
- } else {
- dummy = ReadDOC(docptr, 2k_ECCStatus);
- dummy = ReadDOC(docptr, 2k_ECCStatus);
- i = ReadDOC(docptr, 2k_ECCStatus);
- }
+ /* Read the ECC data through the DiskOnChip ECC logic */
+ /* Note: this will work even with 2M x 8bit devices as */
+ /* they have 8 bytes of OOB per 256 page. mf. */
+ DoC_ReadBuf(this, eccbuf, 6);
- /* Check the ECC Status */
- if (i & 0x80) {
- int nb_errors;
- /* There was an ECC error */
+ /* Flush the pipeline */
+ if (DoC_is_Millennium(this)) {
+ dummy = ReadDOC(docptr, ECCConf);
+ dummy = ReadDOC(docptr, ECCConf);
+ i = ReadDOC(docptr, ECCConf);
+ } else {
+ dummy = ReadDOC(docptr, 2k_ECCStatus);
+ dummy = ReadDOC(docptr, 2k_ECCStatus);
+ i = ReadDOC(docptr, 2k_ECCStatus);
+ }
+
+ /* Check the ECC Status */
+ if (i & 0x80) {
+ int nb_errors;
+ /* There was an ECC error */
#ifdef ECC_DEBUG
- printk(KERN_ERR "DiskOnChip ECC Error: Read at %lx\n", (long)from);
+ printk(KERN_ERR "DiskOnChip ECC Error: Read at %lx\n", (long)from);
#endif
- /* Read the ECC syndrom through the DiskOnChip ECC logic.
- These syndrome will be all ZERO when there is no error */
- for (i = 0; i < 6; i++) {
- syndrome[i] =
- ReadDOC(docptr, ECCSyndrome0 + i);
- }
- nb_errors = doc_decode_ecc(buf, syndrome);
+ /* Read the ECC syndrome through the DiskOnChip ECC
+ logic. These syndromes will be all ZERO when there
+ is no error */
+ for (i = 0; i < 6; i++) {
+ syndrome[i] =
+ ReadDOC(docptr, ECCSyndrome0 + i);
+ }
+ nb_errors = doc_decode_ecc(buf, syndrome);
#ifdef ECC_DEBUG
- printk(KERN_ERR "Errors corrected: %x\n", nb_errors);
+ printk(KERN_ERR "Errors corrected: %x\n", nb_errors);
#endif
- if (nb_errors < 0) {
- /* We return error, but have actually done the read. Not that
- this can be told to user-space, via sys_read(), but at least
- MTD-aware stuff can know about it by checking *retlen */
- ret = -EIO;
- }
+ if (nb_errors < 0) {
+ /* We return error, but have actually done the
+ read. Not that this can be told to
+ user-space, via sys_read(), but at least
+ MTD-aware stuff can know about it by
+ checking *retlen */
+ ret = -EIO;
}
+ }
#ifdef PSYCHO_DEBUG
- printk(KERN_DEBUG "ECC DATA at %lxB: %2.2X %2.2X %2.2X %2.2X %2.2X %2.2X\n",
- (long)from, eccbuf[0], eccbuf[1], eccbuf[2],
- eccbuf[3], eccbuf[4], eccbuf[5]);
+ printk(KERN_DEBUG "ECC DATA at %lxB: %2.2X %2.2X %2.2X %2.2X %2.2X %2.2X\n",
+ (long)from, eccbuf[0], eccbuf[1], eccbuf[2],
+ eccbuf[3], eccbuf[4], eccbuf[5]);
#endif
- /* disable the ECC engine */
- WriteDOC(DOC_ECC_DIS, docptr , ECCConf);
- }
+ /* disable the ECC engine */
+ WriteDOC(DOC_ECC_DIS, docptr , ECCConf);
/* according to 11.4.1, we need to wait for the busy line
* drop if we read to the end of the page. */
static int doc_write(struct mtd_info *mtd, loff_t to, size_t len,
size_t * retlen, const u_char * buf)
-{
- char eccbuf[6];
- return doc_write_ecc(mtd, to, len, retlen, buf, eccbuf, NULL);
-}
-
-static int doc_write_ecc(struct mtd_info *mtd, loff_t to, size_t len,
- size_t * retlen, const u_char * buf,
- u_char * eccbuf, struct nand_oobinfo *oobsel)
{
struct DiskOnChip *this = mtd->priv;
int di; /* Yes, DI is a hangover from when I was disassembling the binary driver */
void __iomem *docptr = this->virtadr;
+ unsigned char eccbuf[6];
volatile char dummy;
int len256 = 0;
struct Nand *mychip;
DoC_Command(this, NAND_CMD_SEQIN, 0);
DoC_Address(this, ADDR_COLUMN_PAGE, to, 0, CDSN_CTRL_ECC_IO);
- if (eccbuf) {
- /* Prime the ECC engine */
- WriteDOC(DOC_ECC_RESET, docptr, ECCConf);
- WriteDOC(DOC_ECC_EN | DOC_ECC_RW, docptr, ECCConf);
- } else {
- /* disable the ECC engine */
- WriteDOC(DOC_ECC_RESET, docptr, ECCConf);
- WriteDOC(DOC_ECC_DIS, docptr, ECCConf);
- }
+ /* Prime the ECC engine */
+ WriteDOC(DOC_ECC_RESET, docptr, ECCConf);
+ WriteDOC(DOC_ECC_EN | DOC_ECC_RW, docptr, ECCConf);
/* treat crossing 256-byte sector for 2M x 8bits devices */
if (this->page256 && to + len > (to | 0xff) + 1) {
DoC_WriteBuf(this, &buf[len256], len - len256);
- if (eccbuf) {
- WriteDOC(CDSN_CTRL_ECC_IO | CDSN_CTRL_CE, docptr,
- CDSNControl);
-
- if (DoC_is_Millennium(this)) {
- WriteDOC(0, docptr, NOP);
- WriteDOC(0, docptr, NOP);
- WriteDOC(0, docptr, NOP);
- } else {
- WriteDOC_(0, docptr, this->ioreg);
- WriteDOC_(0, docptr, this->ioreg);
- WriteDOC_(0, docptr, this->ioreg);
- }
+ WriteDOC(CDSN_CTRL_ECC_IO | CDSN_CTRL_CE, docptr, CDSNControl);
- WriteDOC(CDSN_CTRL_ECC_IO | CDSN_CTRL_FLASH_IO | CDSN_CTRL_CE, docptr,
- CDSNControl);
+ if (DoC_is_Millennium(this)) {
+ WriteDOC(0, docptr, NOP);
+ WriteDOC(0, docptr, NOP);
+ WriteDOC(0, docptr, NOP);
+ } else {
+ WriteDOC_(0, docptr, this->ioreg);
+ WriteDOC_(0, docptr, this->ioreg);
+ WriteDOC_(0, docptr, this->ioreg);
+ }
- /* Read the ECC data through the DiskOnChip ECC logic */
- for (di = 0; di < 6; di++) {
- eccbuf[di] = ReadDOC(docptr, ECCSyndrome0 + di);
- }
+ WriteDOC(CDSN_CTRL_ECC_IO | CDSN_CTRL_FLASH_IO | CDSN_CTRL_CE, docptr,
+ CDSNControl);
- /* Reset the ECC engine */
- WriteDOC(DOC_ECC_DIS, docptr, ECCConf);
+ /* Read the ECC data through the DiskOnChip ECC logic */
+ for (di = 0; di < 6; di++) {
+ eccbuf[di] = ReadDOC(docptr, ECCSyndrome0 + di);
+ }
+
+ /* Reset the ECC engine */
+ WriteDOC(DOC_ECC_DIS, docptr, ECCConf);
#ifdef PSYCHO_DEBUG
- printk
- ("OOB data at %lx is %2.2X %2.2X %2.2X %2.2X %2.2X %2.2X\n",
- (long) to, eccbuf[0], eccbuf[1], eccbuf[2], eccbuf[3],
- eccbuf[4], eccbuf[5]);
+ printk
+ ("OOB data at %lx is %2.2X %2.2X %2.2X %2.2X %2.2X %2.2X\n",
+ (long) to, eccbuf[0], eccbuf[1], eccbuf[2], eccbuf[3],
+ eccbuf[4], eccbuf[5]);
#endif
- }
-
DoC_Command(this, NAND_CMD_PAGEPROG, 0);
DoC_Command(this, NAND_CMD_STATUS, CDSN_CTRL_WP);
size_t *retlen, u_char *buf);
static int doc_write(struct mtd_info *mtd, loff_t to, size_t len,
size_t *retlen, const u_char *buf);
-static int doc_read_ecc(struct mtd_info *mtd, loff_t from, size_t len,
- size_t *retlen, u_char *buf, u_char *eccbuf,
- struct nand_oobinfo *oobsel);
-static int doc_write_ecc(struct mtd_info *mtd, loff_t to, size_t len,
- size_t *retlen, const u_char *buf, u_char *eccbuf,
- struct nand_oobinfo *oobsel);
static int doc_read_oob(struct mtd_info *mtd, loff_t ofs,
struct mtd_oob_ops *ops);
static int doc_write_oob(struct mtd_info *mtd, loff_t ofs,
static int doc_read (struct mtd_info *mtd, loff_t from, size_t len,
size_t *retlen, u_char *buf)
-{
- /* Just a special case of doc_read_ecc */
- return doc_read_ecc(mtd, from, len, retlen, buf, NULL, NULL);
-}
-
-static int doc_read_ecc (struct mtd_info *mtd, loff_t from, size_t len,
- size_t *retlen, u_char *buf, u_char *eccbuf,
- struct nand_oobinfo *oobsel)
{
int i, ret;
volatile char dummy;
- unsigned char syndrome[6];
+ unsigned char syndrome[6], eccbuf[6];
struct DiskOnChip *this = mtd->priv;
void __iomem *docptr = this->virtadr;
struct Nand *mychip = &this->chips[from >> (this->chipshift)];
DoC_Address(docptr, 3, from, CDSN_CTRL_WP, 0x00);
DoC_WaitReady(docptr);
- if (eccbuf) {
- /* init the ECC engine, see Reed-Solomon EDC/ECC 11.1 .*/
- WriteDOC (DOC_ECC_RESET, docptr, ECCConf);
- WriteDOC (DOC_ECC_EN, docptr, ECCConf);
- } else {
- /* disable the ECC engine */
- WriteDOC (DOC_ECC_RESET, docptr, ECCConf);
- WriteDOC (DOC_ECC_DIS, docptr, ECCConf);
- }
+ /* init the ECC engine, see Reed-Solomon EDC/ECC 11.1 .*/
+ WriteDOC (DOC_ECC_RESET, docptr, ECCConf);
+ WriteDOC (DOC_ECC_EN, docptr, ECCConf);
/* Read the data via the internal pipeline through CDSN IO register,
see Pipelined Read Operations 11.3 */
*retlen = len;
ret = 0;
- if (eccbuf) {
- /* Read the ECC data from Spare Data Area,
- see Reed-Solomon EDC/ECC 11.1 */
- dummy = ReadDOC(docptr, ReadPipeInit);
+ /* Read the ECC data from Spare Data Area,
+ see Reed-Solomon EDC/ECC 11.1 */
+ dummy = ReadDOC(docptr, ReadPipeInit);
#ifndef USE_MEMCPY
- for (i = 0; i < 5; i++) {
- /* N.B. you have to increase the source address in this way or the
- ECC logic will not work properly */
- eccbuf[i] = ReadDOC(docptr, Mil_CDSN_IO + i);
- }
+ for (i = 0; i < 5; i++) {
+ /* N.B. you have to increase the source address in this way or the
+ ECC logic will not work properly */
+ eccbuf[i] = ReadDOC(docptr, Mil_CDSN_IO + i);
+ }
#else
- memcpy_fromio(eccbuf, docptr + DoC_Mil_CDSN_IO, 5);
+ memcpy_fromio(eccbuf, docptr + DoC_Mil_CDSN_IO, 5);
#endif
- eccbuf[5] = ReadDOC(docptr, LastDataRead);
+ eccbuf[5] = ReadDOC(docptr, LastDataRead);
- /* Flush the pipeline */
- dummy = ReadDOC(docptr, ECCConf);
- dummy = ReadDOC(docptr, ECCConf);
+ /* Flush the pipeline */
+ dummy = ReadDOC(docptr, ECCConf);
+ dummy = ReadDOC(docptr, ECCConf);
- /* Check the ECC Status */
- if (ReadDOC(docptr, ECCConf) & 0x80) {
- int nb_errors;
- /* There was an ECC error */
+ /* Check the ECC Status */
+ if (ReadDOC(docptr, ECCConf) & 0x80) {
+ int nb_errors;
+ /* There was an ECC error */
#ifdef ECC_DEBUG
- printk("DiskOnChip ECC Error: Read at %lx\n", (long)from);
+ printk("DiskOnChip ECC Error: Read at %lx\n", (long)from);
#endif
- /* Read the ECC syndrom through the DiskOnChip ECC logic.
- These syndrome will be all ZERO when there is no error */
- for (i = 0; i < 6; i++) {
- syndrome[i] = ReadDOC(docptr, ECCSyndrome0 + i);
- }
- nb_errors = doc_decode_ecc(buf, syndrome);
+ /* Read the ECC syndrome through the DiskOnChip ECC logic.
+ These syndromes will be all ZERO when there is no error */
+ for (i = 0; i < 6; i++) {
+ syndrome[i] = ReadDOC(docptr, ECCSyndrome0 + i);
+ }
+ nb_errors = doc_decode_ecc(buf, syndrome);
#ifdef ECC_DEBUG
- printk("ECC Errors corrected: %x\n", nb_errors);
+ printk("ECC Errors corrected: %x\n", nb_errors);
#endif
- if (nb_errors < 0) {
- /* We return error, but have actually done the read. Not that
- this can be told to user-space, via sys_read(), but at least
- MTD-aware stuff can know about it by checking *retlen */
- ret = -EIO;
- }
+ if (nb_errors < 0) {
+ /* We return error, but have actually done the read. Not that
+ this can be told to user-space, via sys_read(), but at least
+ MTD-aware stuff can know about it by checking *retlen */
+ ret = -EIO;
}
+ }
#ifdef PSYCHO_DEBUG
- printk("ECC DATA at %lx: %2.2X %2.2X %2.2X %2.2X %2.2X %2.2X\n",
- (long)from, eccbuf[0], eccbuf[1], eccbuf[2], eccbuf[3],
- eccbuf[4], eccbuf[5]);
+ printk("ECC DATA at %lx: %2.2X %2.2X %2.2X %2.2X %2.2X %2.2X\n",
+ (long)from, eccbuf[0], eccbuf[1], eccbuf[2], eccbuf[3],
+ eccbuf[4], eccbuf[5]);
#endif
- /* disable the ECC engine */
- WriteDOC(DOC_ECC_DIS, docptr , ECCConf);
- }
+ /* disable the ECC engine */
+ WriteDOC(DOC_ECC_DIS, docptr , ECCConf);
return ret;
}
static int doc_write (struct mtd_info *mtd, loff_t to, size_t len,
size_t *retlen, const u_char *buf)
-{
- char eccbuf[6];
- return doc_write_ecc(mtd, to, len, retlen, buf, eccbuf, NULL);
-}
-
-static int doc_write_ecc (struct mtd_info *mtd, loff_t to, size_t len,
- size_t *retlen, const u_char *buf, u_char *eccbuf,
- struct nand_oobinfo *oobsel)
{
int i,ret = 0;
+ char eccbuf[6];
volatile char dummy;
struct DiskOnChip *this = mtd->priv;
void __iomem *docptr = this->virtadr;
DoC_Address(docptr, 3, to, 0x00, 0x00);
DoC_WaitReady(docptr);
- if (eccbuf) {
- /* init the ECC engine, see Reed-Solomon EDC/ECC 11.1 .*/
- WriteDOC (DOC_ECC_RESET, docptr, ECCConf);
- WriteDOC (DOC_ECC_EN | DOC_ECC_RW, docptr, ECCConf);
- } else {
- /* disable the ECC engine */
- WriteDOC (DOC_ECC_RESET, docptr, ECCConf);
- WriteDOC (DOC_ECC_DIS, docptr, ECCConf);
- }
+ /* init the ECC engine, see Reed-Solomon EDC/ECC 11.1 .*/
+ WriteDOC (DOC_ECC_RESET, docptr, ECCConf);
+ WriteDOC (DOC_ECC_EN | DOC_ECC_RW, docptr, ECCConf);
/* Write the data via the internal pipeline through CDSN IO register,
see Pipelined Write Operations 11.2 */
#endif
WriteDOC(0x00, docptr, WritePipeTerm);
- if (eccbuf) {
- /* Write ECC data to flash, the ECC info is generated by the DiskOnChip ECC logic
- see Reed-Solomon EDC/ECC 11.1 */
- WriteDOC(0, docptr, NOP);
- WriteDOC(0, docptr, NOP);
- WriteDOC(0, docptr, NOP);
+ /* Write ECC data to flash, the ECC info is generated by the DiskOnChip ECC logic
+ see Reed-Solomon EDC/ECC 11.1 */
+ WriteDOC(0, docptr, NOP);
+ WriteDOC(0, docptr, NOP);
+ WriteDOC(0, docptr, NOP);
- /* Read the ECC data through the DiskOnChip ECC logic */
- for (i = 0; i < 6; i++) {
- eccbuf[i] = ReadDOC(docptr, ECCSyndrome0 + i);
- }
+ /* Read the ECC data through the DiskOnChip ECC logic */
+ for (i = 0; i < 6; i++) {
+ eccbuf[i] = ReadDOC(docptr, ECCSyndrome0 + i);
+ }
- /* ignore the ECC engine */
- WriteDOC(DOC_ECC_DIS, docptr , ECCConf);
+ /* ignore the ECC engine */
+ WriteDOC(DOC_ECC_DIS, docptr , ECCConf);
#ifndef USE_MEMCPY
- /* Write the ECC data to flash */
- for (i = 0; i < 6; i++) {
- /* N.B. you have to increase the source address in this way or the
- ECC logic will not work properly */
- WriteDOC(eccbuf[i], docptr, Mil_CDSN_IO + i);
- }
+ /* Write the ECC data to flash */
+ for (i = 0; i < 6; i++) {
+ /* N.B. you have to increase the source address in this way or the
+ ECC logic will not work properly */
+ WriteDOC(eccbuf[i], docptr, Mil_CDSN_IO + i);
+ }
#else
- memcpy_toio(docptr + DoC_Mil_CDSN_IO, eccbuf, 6);
+ memcpy_toio(docptr + DoC_Mil_CDSN_IO, eccbuf, 6);
#endif
- /* write the block status BLOCK_USED (0x5555) at the end of ECC data
- FIXME: this is only a hack for programming the IPL area for LinuxBIOS
- and should be replace with proper codes in user space utilities */
- WriteDOC(0x55, docptr, Mil_CDSN_IO);
- WriteDOC(0x55, docptr, Mil_CDSN_IO + 1);
+ /* write the block status BLOCK_USED (0x5555) at the end of ECC data
+ FIXME: this is only a hack for programming the IPL area for LinuxBIOS
+ and should be replaced with proper code in user space utilities */
+ WriteDOC(0x55, docptr, Mil_CDSN_IO);
+ WriteDOC(0x55, docptr, Mil_CDSN_IO + 1);
- WriteDOC(0x00, docptr, WritePipeTerm);
+ WriteDOC(0x00, docptr, WritePipeTerm);
#ifdef PSYCHO_DEBUG
- printk("OOB data at %lx is %2.2X %2.2X %2.2X %2.2X %2.2X %2.2X\n",
- (long) to, eccbuf[0], eccbuf[1], eccbuf[2], eccbuf[3],
- eccbuf[4], eccbuf[5]);
+ printk("OOB data at %lx is %2.2X %2.2X %2.2X %2.2X %2.2X %2.2X\n",
+ (long) to, eccbuf[0], eccbuf[1], eccbuf[2], eccbuf[3],
+ eccbuf[4], eccbuf[5]);
#endif
- }
/* Commit the Page Program command and wait for ready
see Software Requirement 11.4 item 1.*/
size_t *retlen, u_char *buf);
static int doc_write(struct mtd_info *mtd, loff_t to, size_t len,
size_t *retlen, const u_char *buf);
-static int doc_read_ecc(struct mtd_info *mtd, loff_t from, size_t len,
- size_t *retlen, u_char *buf, u_char *eccbuf,
- struct nand_oobinfo *oobsel);
-static int doc_write_ecc(struct mtd_info *mtd, loff_t to, size_t len,
- size_t *retlen, const u_char *buf, u_char *eccbuf,
- struct nand_oobinfo *oobsel);
static int doc_read_oob(struct mtd_info *mtd, loff_t ofs,
struct mtd_oob_ops *ops);
static int doc_write_oob(struct mtd_info *mtd, loff_t ofs,
static int doc_read(struct mtd_info *mtd, loff_t from, size_t len,
size_t *retlen, u_char *buf)
-{
- /* Just a special case of doc_read_ecc */
- return doc_read_ecc(mtd, from, len, retlen, buf, NULL, NULL);
-}
-
-static int doc_read_ecc(struct mtd_info *mtd, loff_t from, size_t len,
- size_t *retlen, u_char *buf, u_char *eccbuf,
- struct nand_oobinfo *oobsel)
{
int ret, i;
volatile char dummy;
loff_t fofs;
- unsigned char syndrome[6];
+ unsigned char syndrome[6], eccbuf[6];
struct DiskOnChip *this = mtd->priv;
void __iomem * docptr = this->virtadr;
struct Nand *mychip = &this->chips[from >> (this->chipshift)];
WriteDOC(0, docptr, Mplus_FlashControl);
DoC_WaitReady(docptr);
- if (eccbuf) {
- /* init the ECC engine, see Reed-Solomon EDC/ECC 11.1 .*/
- WriteDOC(DOC_ECC_RESET, docptr, Mplus_ECCConf);
- WriteDOC(DOC_ECC_EN, docptr, Mplus_ECCConf);
- } else {
- /* disable the ECC engine */
- WriteDOC(DOC_ECC_RESET, docptr, Mplus_ECCConf);
- }
+ /* init the ECC engine, see Reed-Solomon EDC/ECC 11.1 .*/
+ WriteDOC(DOC_ECC_RESET, docptr, Mplus_ECCConf);
+ WriteDOC(DOC_ECC_EN, docptr, Mplus_ECCConf);
/* Let the caller know we completed it */
*retlen = len;
- ret = 0;
+ ret = 0;
ReadDOC(docptr, Mplus_ReadPipeInit);
ReadDOC(docptr, Mplus_ReadPipeInit);
- if (eccbuf) {
- /* Read the data via the internal pipeline through CDSN IO
- register, see Pipelined Read Operations 11.3 */
- MemReadDOC(docptr, buf, len);
+ /* Read the data via the internal pipeline through CDSN IO
+ register, see Pipelined Read Operations 11.3 */
+ MemReadDOC(docptr, buf, len);
- /* Read the ECC data following raw data */
- MemReadDOC(docptr, eccbuf, 4);
- eccbuf[4] = ReadDOC(docptr, Mplus_LastDataRead);
- eccbuf[5] = ReadDOC(docptr, Mplus_LastDataRead);
+ /* Read the ECC data following raw data */
+ MemReadDOC(docptr, eccbuf, 4);
+ eccbuf[4] = ReadDOC(docptr, Mplus_LastDataRead);
+ eccbuf[5] = ReadDOC(docptr, Mplus_LastDataRead);
- /* Flush the pipeline */
- dummy = ReadDOC(docptr, Mplus_ECCConf);
- dummy = ReadDOC(docptr, Mplus_ECCConf);
+ /* Flush the pipeline */
+ dummy = ReadDOC(docptr, Mplus_ECCConf);
+ dummy = ReadDOC(docptr, Mplus_ECCConf);
- /* Check the ECC Status */
- if (ReadDOC(docptr, Mplus_ECCConf) & 0x80) {
- int nb_errors;
- /* There was an ECC error */
+ /* Check the ECC Status */
+ if (ReadDOC(docptr, Mplus_ECCConf) & 0x80) {
+ int nb_errors;
+ /* There was an ECC error */
#ifdef ECC_DEBUG
- printk("DiskOnChip ECC Error: Read at %lx\n", (long)from);
+ printk("DiskOnChip ECC Error: Read at %lx\n", (long)from);
#endif
- /* Read the ECC syndrom through the DiskOnChip ECC logic.
- These syndrome will be all ZERO when there is no error */
- for (i = 0; i < 6; i++)
- syndrome[i] = ReadDOC(docptr, Mplus_ECCSyndrome0 + i);
+ /* Read the ECC syndrome through the DiskOnChip ECC logic.
+ These syndromes will be all ZERO when there is no error */
+ for (i = 0; i < 6; i++)
+ syndrome[i] = ReadDOC(docptr, Mplus_ECCSyndrome0 + i);
- nb_errors = doc_decode_ecc(buf, syndrome);
+ nb_errors = doc_decode_ecc(buf, syndrome);
#ifdef ECC_DEBUG
- printk("ECC Errors corrected: %x\n", nb_errors);
+ printk("ECC Errors corrected: %x\n", nb_errors);
#endif
- if (nb_errors < 0) {
- /* We return error, but have actually done the read. Not that
- this can be told to user-space, via sys_read(), but at least
- MTD-aware stuff can know about it by checking *retlen */
+ if (nb_errors < 0) {
+ /* We return error, but have actually done the
+ read. Not that this can be told to user-space, via
+ sys_read(), but at least MTD-aware stuff can know
+ about it by checking *retlen */
#ifdef ECC_DEBUG
printk("%s(%d): Millennium Plus ECC error (from=0x%x:\n",
__FILE__, __LINE__, (int)from);
eccbuf[3], eccbuf[4], eccbuf[5]);
#endif
ret = -EIO;
- }
}
+ }
#ifdef PSYCHO_DEBUG
- printk("ECC DATA at %lx: %2.2X %2.2X %2.2X %2.2X %2.2X %2.2X\n",
- (long)from, eccbuf[0], eccbuf[1], eccbuf[2], eccbuf[3],
- eccbuf[4], eccbuf[5]);
+ printk("ECC DATA at %lx: %2.2X %2.2X %2.2X %2.2X %2.2X %2.2X\n",
+ (long)from, eccbuf[0], eccbuf[1], eccbuf[2], eccbuf[3],
+ eccbuf[4], eccbuf[5]);
#endif
-
- /* disable the ECC engine */
- WriteDOC(DOC_ECC_DIS, docptr , Mplus_ECCConf);
- } else {
- /* Read the data via the internal pipeline through CDSN IO
- register, see Pipelined Read Operations 11.3 */
- MemReadDOC(docptr, buf, len-2);
- buf[len-2] = ReadDOC(docptr, Mplus_LastDataRead);
- buf[len-1] = ReadDOC(docptr, Mplus_LastDataRead);
- }
+ /* disable the ECC engine */
+ WriteDOC(DOC_ECC_DIS, docptr , Mplus_ECCConf);
/* Disable flash internally */
WriteDOC(0, docptr, Mplus_FlashSelect);
static int doc_write(struct mtd_info *mtd, loff_t to, size_t len,
size_t *retlen, const u_char *buf)
-{
- char eccbuf[6];
- return doc_write_ecc(mtd, to, len, retlen, buf, eccbuf, NULL);
-}
-
-static int doc_write_ecc(struct mtd_info *mtd, loff_t to, size_t len,
- size_t *retlen, const u_char *buf, u_char *eccbuf,
- struct nand_oobinfo *oobsel)
{
int i, before, ret = 0;
loff_t fto;
volatile char dummy;
+ char eccbuf[6];
struct DiskOnChip *this = mtd->priv;
void __iomem * docptr = this->virtadr;
struct Nand *mychip = &this->chips[to >> (this->chipshift)];
/* Disable the ECC engine */
WriteDOC(DOC_ECC_RESET, docptr, Mplus_ECCConf);
- if (eccbuf) {
- if (before) {
- /* Write the block status BLOCK_USED (0x5555) */
- WriteDOC(0x55, docptr, Mil_CDSN_IO);
- WriteDOC(0x55, docptr, Mil_CDSN_IO);
- }
-
- /* init the ECC engine, see Reed-Solomon EDC/ECC 11.1 .*/
- WriteDOC(DOC_ECC_EN | DOC_ECC_RW, docptr, Mplus_ECCConf);
+ if (before) {
+ /* Write the block status BLOCK_USED (0x5555) */
+ WriteDOC(0x55, docptr, Mil_CDSN_IO);
+ WriteDOC(0x55, docptr, Mil_CDSN_IO);
}
+ /* init the ECC engine, see Reed-Solomon EDC/ECC 11.1 .*/
+ WriteDOC(DOC_ECC_EN | DOC_ECC_RW, docptr, Mplus_ECCConf);
+
MemWriteDOC(docptr, (unsigned char *) buf, len);
- if (eccbuf) {
- /* Write ECC data to flash, the ECC info is generated by
- the DiskOnChip ECC logic see Reed-Solomon EDC/ECC 11.1 */
- DoC_Delay(docptr, 3);
+ /* Write ECC data to flash, the ECC info is generated by
+ the DiskOnChip ECC logic see Reed-Solomon EDC/ECC 11.1 */
+ DoC_Delay(docptr, 3);
- /* Read the ECC data through the DiskOnChip ECC logic */
- for (i = 0; i < 6; i++)
- eccbuf[i] = ReadDOC(docptr, Mplus_ECCSyndrome0 + i);
+ /* Read the ECC data through the DiskOnChip ECC logic */
+ for (i = 0; i < 6; i++)
+ eccbuf[i] = ReadDOC(docptr, Mplus_ECCSyndrome0 + i);
- /* disable the ECC engine */
- WriteDOC(DOC_ECC_DIS, docptr, Mplus_ECCConf);
+ /* disable the ECC engine */
+ WriteDOC(DOC_ECC_DIS, docptr, Mplus_ECCConf);
- /* Write the ECC data to flash */
- MemWriteDOC(docptr, eccbuf, 6);
+ /* Write the ECC data to flash */
+ MemWriteDOC(docptr, eccbuf, 6);
- if (!before) {
- /* Write the block status BLOCK_USED (0x5555) */
- WriteDOC(0x55, docptr, Mil_CDSN_IO+6);
- WriteDOC(0x55, docptr, Mil_CDSN_IO+7);
- }
+ if (!before) {
+ /* Write the block status BLOCK_USED (0x5555) */
+ WriteDOC(0x55, docptr, Mil_CDSN_IO+6);
+ WriteDOC(0x55, docptr, Mil_CDSN_IO+7);
+ }
#ifdef PSYCHO_DEBUG
- printk("OOB data at %lx is %2.2X %2.2X %2.2X %2.2X %2.2X %2.2X\n",
- (long) to, eccbuf[0], eccbuf[1], eccbuf[2], eccbuf[3],
- eccbuf[4], eccbuf[5]);
+ printk("OOB data at %lx is %2.2X %2.2X %2.2X %2.2X %2.2X %2.2X\n",
+ (long) to, eccbuf[0], eccbuf[1], eccbuf[2], eccbuf[3],
+ eccbuf[4], eccbuf[5]);
#endif
- }
WriteDOC(0x00, docptr, Mplus_WritePipeTerm);
WriteDOC(0x00, docptr, Mplus_WritePipeTerm);
/**
* nand_select_chip - [DEFAULT] control CE line
* @mtd: MTD device structure
- * @chip: chipnumber to select, -1 for deselect
+ * @chipnr: chipnumber to select, -1 for deselect
*
* Default select function for 1 chip devices.
*/
* Send command to NAND device. This is the version for the new large page
* devices We dont have the separate regions as we have in the small page
* devices. We must emulate NAND_CMD_READOOB to keep the code compatible.
- *
*/
static void nand_command_lp(struct mtd_info *mtd, unsigned int command,
int column, int page_addr)
/**
* nand_get_device - [GENERIC] Get chip for selected access
- * @this: the nand chip descriptor
+ * @chip: the nand chip descriptor
* @mtd: MTD device structure
* @new_state: the state which is requested
*
/**
* nand_wait - [DEFAULT] wait until the command is done
* @mtd: MTD device structure
- * @this: NAND chip structure
+ * @chip: NAND chip structure
*
* Wait for command done. This applies to erase and program only
* Erase can take up to 400ms and program up to 20ms according to
* general NAND and SmartMedia specs
- *
-*/
+ */
static int nand_wait(struct mtd_info *mtd, struct nand_chip *chip)
{
/**
* nand_transfer_oob - [Internal] Transfer oob to client buffer
* @chip: nand chip structure
+ * @oob: oob destination address
* @ops: oob ops structure
*/
static uint8_t *nand_transfer_oob(struct nand_chip *chip, uint8_t *oob,
*
* @mtd: MTD device structure
* @from: offset to read from
+ * @ops: oob ops structure
*
* Internal function. Called with chip held.
*/
/**
* nand_write_oob - [MTD Interface] NAND write data and/or out-of-band
* @mtd: MTD device structure
- * @from: offset to read from
+ * @to: offset to write to
* @ops: oob operation description structure
*/
static int nand_write_oob(struct mtd_info *mtd, loff_t to,
/**
* nand_block_isbad - [MTD Interface] Check if block at offset is bad
* @mtd: MTD device structure
- * @ofs: offset relative to mtd start
+ * @offs: offset relative to mtd start
*/
static int nand_block_isbad(struct mtd_info *mtd, loff_t offs)
{
};
/**
- * nand_calculate_ecc - [NAND Interface] Calculate 3 byte ECC code
- * for 256 byte block
+ * nand_calculate_ecc - [NAND Interface] Calculate 3-byte ECC for 256-byte block
* @mtd: MTD block structure
* @dat: raw data
* @ecc_code: buffer for ECC
}
}
- if (machine_is_husky() || machine_is_borzoi() || machine_is_akita()) {
- /* Need to use small eraseblock size for backward compatibility */
- sharpsl_mtd->flags |= MTD_NO_VIRTBLOCKS;
- }
-
add_mtd_partitions(sharpsl_mtd, sharpsl_partition_info, nr_partitions);
/* Return happy */
*/
static void __exit sharpsl_nand_cleanup(void)
{
- struct nand_chip *this = (struct nand_chip *)&sharpsl_mtd[1];
-
/* Release resources, unregister device */
nand_release(sharpsl_mtd);
vp->product_name, dev)) return -EAGAIN;
enable_dma(dev->dma);
set_dma_mode(dev->dma, DMA_MODE_CASCADE);
- } else if (request_irq(dev->irq, &corkscrew_interrupt, SA_SHIRQ,
+ } else if (request_irq(dev->irq, &corkscrew_interrupt, IRQF_SHARED,
vp->product_name, dev)) {
return -EAGAIN;
}
elmc_id_attn586(); /* disable interrupts */
- ret = request_irq(dev->irq, &elmc_interrupt, SA_SHIRQ | SA_SAMPLE_RANDOM,
+ ret = request_irq(dev->irq, &elmc_interrupt, IRQF_SHARED | IRQF_SAMPLE_RANDOM,
dev->name, dev);
if (ret) {
printk(KERN_ERR "%s: couldn't get irq %d\n", dev->name, dev->irq);
* Grab the IRQ
*/
- err = request_irq(dev->irq, &mc32_interrupt, SA_SHIRQ | SA_SAMPLE_RANDOM, DRV_NAME, dev);
+ err = request_irq(dev->irq, &mc32_interrupt, IRQF_SHARED | IRQF_SAMPLE_RANDOM, DRV_NAME, dev);
if (err) {
release_region(dev->base_addr, MC32_IO_EXTENT);
printk(KERN_ERR "%s: unable to get IRQ %d.\n", DRV_NAME, dev->irq);
pci_enable_device(pdev);
pci_set_master(pdev);
if (request_irq(dev->irq, vp->full_bus_master_rx ?
- &boomerang_interrupt : &vortex_interrupt, SA_SHIRQ, dev->name, dev)) {
+ &boomerang_interrupt : &vortex_interrupt, IRQF_SHARED, dev->name, dev)) {
printk(KERN_WARNING "%s: Could not reserve IRQ %d\n", dev->name, dev->irq);
pci_disable_device(pdev);
return -EBUSY;
/* Use the now-standard shared IRQ implementation. */
if ((retval = request_irq(dev->irq, vp->full_bus_master_rx ?
- &boomerang_interrupt : &vortex_interrupt, SA_SHIRQ, dev->name, dev))) {
+ &boomerang_interrupt : &vortex_interrupt, IRQF_SHARED, dev->name, dev))) {
printk(KERN_ERR "%s: Could not reserve IRQ %d\n", dev->name, dev->irq);
goto out;
}
printk(KERN_DEBUG "dev->watchdog_timeo=%d\n", dev->watchdog_timeo);
}
- disable_irq(dev->irq);
+ disable_irq_lockdep(dev->irq);
old_window = ioread16(ioaddr + EL3_CMD) >> 13;
EL3WINDOW(4);
media_status = ioread16(ioaddr + Wn4_Media);
dev->name, media_tbl[dev->if_port].name);
EL3WINDOW(old_window);
- enable_irq(dev->irq);
+ enable_irq_lockdep(dev->irq);
mod_timer(&vp->timer, RUN_AT(next_tick));
if (vp->deferred)
iowrite16(FakeIntr, ioaddr + EL3_CMD);
cp_init_hw(cp);
- rc = request_irq(dev->irq, cp_interrupt, SA_SHIRQ, dev->name, dev);
+ rc = request_irq(dev->irq, cp_interrupt, IRQF_SHARED, dev->name, dev);
if (rc)
goto err_out_hw;
int retval;
void __iomem *ioaddr = tp->mmio_addr;
- retval = request_irq (dev->irq, rtl8139_interrupt, SA_SHIRQ, dev->name, dev);
+ retval = request_irq (dev->irq, rtl8139_interrupt, IRQF_SHARED, dev->name, dev);
if (retval)
return retval;
/* Ugly but a reset can be slow, yet must be protected */
- disable_irq_nosync(dev->irq);
+ disable_irq_nosync_lockdep(dev->irq);
spin_lock(&ei_local->page_lock);
/* Try to restart the card. Perhaps the user has fixed something. */
NS8390_init(dev, 1);
spin_unlock(&ei_local->page_lock);
- enable_irq(dev->irq);
+ enable_irq_lockdep(dev->irq);
netif_wake_queue(dev);
}
ll->rdp = LE_C0_STOP;
/* Install the Interrupt handler */
- ret = request_irq(IRQ_AMIGA_PORTS, lance_interrupt, SA_SHIRQ,
+ ret = request_irq(IRQ_AMIGA_PORTS, lance_interrupt, IRQF_SHARED,
dev->name, dev);
if (ret) return ret;
goto init_error;
}
- ecode = request_irq(pdev->irq, ace_interrupt, SA_SHIRQ,
+ ecode = request_irq(pdev->irq, ace_interrupt, IRQF_SHARED,
DRV_NAME, dev);
if (ecode) {
printk(KERN_WARNING "%s: Requested IRQ %d is busy\n",
{
struct amd8111e_priv *lp = netdev_priv(dev);
- if(dev->irq ==0 || request_irq(dev->irq, amd8111e_interrupt, SA_SHIRQ,
+ if(dev->irq ==0 || request_irq(dev->irq, amd8111e_interrupt, IRQF_SHARED,
dev->name, dev))
return -EAGAIN;
dev->base_addr = ioaddr;
/* Install the Interrupt handler */
- i = request_irq(IRQ_AMIGA_PORTS, apne_interrupt, SA_SHIRQ, DRV_NAME, dev);
+ i = request_irq(IRQ_AMIGA_PORTS, apne_interrupt, IRQF_SHARED, DRV_NAME, dev);
if (i) return i;
for(i = 0; i < ETHER_ADDR_LEN; i++) {
goto out_port;
}
- if ((err = com20020_found(dev, SA_SHIRQ)) != 0)
+ if ((err = com20020_found(dev, IRQF_SHARED)) != 0)
goto out_port;
return 0;
netif_start_queue(dev);
- i = request_irq(IRQ_AMIGA_PORTS, ariadne_interrupt, SA_SHIRQ,
+ i = request_irq(IRQ_AMIGA_PORTS, ariadne_interrupt, IRQF_SHARED,
dev->name, dev);
if (i) return i;
b44_check_phy(bp);
- err = request_irq(dev->irq, b44_interrupt, SA_SHIRQ, dev->name, dev);
+ err = request_irq(dev->irq, b44_interrupt, IRQF_SHARED, dev->name, dev);
if (unlikely(err < 0)) {
b44_chip_reset(bp);
b44_free_rings(bp);
if (!netif_running(dev))
return 0;
- if (request_irq(dev->irq, b44_interrupt, SA_SHIRQ, dev->name, dev))
+ if (request_irq(dev->irq, b44_interrupt, IRQF_SHARED, dev->name, dev))
printk(KERN_ERR PFX "%s: request_irq failed\n", dev->name);
spin_lock_irq(&bp->lock);
}
else {
rc = request_irq(bp->pdev->irq, bnx2_interrupt,
- SA_SHIRQ, dev->name, dev);
+ IRQF_SHARED, dev->name, dev);
}
}
else {
- rc = request_irq(bp->pdev->irq, bnx2_interrupt, SA_SHIRQ,
+ rc = request_irq(bp->pdev->irq, bnx2_interrupt, IRQF_SHARED,
dev->name, dev);
}
if (rc) {
if (!rc) {
rc = request_irq(bp->pdev->irq, bnx2_interrupt,
- SA_SHIRQ, dev->name, dev);
+ IRQF_SHARED, dev->name, dev);
}
if (rc) {
bnx2_free_skbs(bp);
* mapping to expose them
*/
if (request_irq(cp->pdev->irq, cas_interrupt,
- SA_SHIRQ, dev->name, (void *) dev)) {
+ IRQF_SHARED, dev->name, (void *) dev)) {
printk(KERN_ERR "%s: failed to request irq !\n",
cp->dev->name);
err = -EAGAIN;
t1_interrupts_clear(adapter);
if ((err = request_irq(adapter->pdev->irq,
- t1_select_intr_handler(adapter), SA_SHIRQ,
+ t1_select_intr_handler(adapter), IRQF_SHARED,
adapter->name, adapter))) {
goto out_err;
}
/* allocate the irq corresponding to the receiving DMA */
if (request_irq(NETWORK_DMA_RX_IRQ_NBR, e100rxtx_interrupt,
- SA_SAMPLE_RANDOM, cardname, (void *)dev)) {
+ IRQF_SAMPLE_RANDOM, cardname, (void *)dev)) {
goto grace_exit0;
}
/* Register IRQ - support shared interrupts by passing device ptr */
- ret = request_irq(dev->irq, dfx_interrupt, SA_SHIRQ, dev->name, dev);
+ ret = request_irq(dev->irq, dfx_interrupt, IRQF_SHARED, dev->name, dev);
if (ret) {
printk(KERN_ERR "%s: Requested IRQ %d is busy\n", dev->name, dev->irq);
return ret;
if (priv->plxreg)
OUTL(dev->base_addr + PLX_LCL2PCI_DOORBELL, 1);
- rc = request_irq(dev->irq, &dgrs_intr, SA_SHIRQ, "RightSwitch", dev);
+ rc = request_irq(dev->irq, &dgrs_intr, IRQF_SHARED, "RightSwitch", dev);
if (rc)
goto err_out;
int i;
u16 macctrl;
- i = request_irq (dev->irq, &rio_interrupt, SA_SHIRQ, dev->name, dev);
+ i = request_irq (dev->irq, &rio_interrupt, IRQF_SHARED, dev->name, dev);
if (i)
return i;
PRINTK2("entering dm9000_open\n");
- if (request_irq(dev->irq, &dm9000_interrupt, SA_SHIRQ, dev->name, dev))
+ if (request_irq(dev->irq, &dm9000_interrupt, IRQF_SHARED, dev->name, dev))
return -EAGAIN;
/* Initialize DM9000 board */
e100_set_multicast_list(nic->netdev);
e100_start_receiver(nic, NULL);
mod_timer(&nic->watchdog, jiffies);
- if((err = request_irq(nic->pdev->irq, e100_intr, SA_SHIRQ,
+ if((err = request_irq(nic->pdev->irq, e100_intr, IRQF_SHARED,
nic->netdev->name, nic->netdev)))
goto err_no_irq;
netif_wake_queue(nic->netdev);
*data = 0;
/* Hook up test interrupt handler just for this test */
- if (!request_irq(irq, &e1000_test_intr, SA_PROBEIRQ, netdev->name,
- netdev)) {
+ if (!request_irq(irq, &e1000_test_intr, IRQF_PROBE_SHARED,
+ netdev->name, netdev)) {
shared_int = FALSE;
- } else if (request_irq(irq, &e1000_test_intr, SA_SHIRQ,
+ } else if (request_irq(irq, &e1000_test_intr, IRQF_SHARED,
netdev->name, netdev)){
*data = 1;
return -1;
}
#endif
if ((err = request_irq(adapter->pdev->irq, &e1000_intr,
- SA_SHIRQ | SA_SAMPLE_RANDOM,
+ IRQF_SHARED | IRQF_SAMPLE_RANDOM,
netdev->name, netdev))) {
DPRINTK(PROBE, ERR,
"Unable to allocate interrupt Error: %d\n", err);
eepro_sw2bank0(ioaddr); /* Switch back to Bank 0 */
- if (request_irq (*irqp, NULL, SA_SHIRQ, "bogus", dev) != EBUSY) {
+ if (request_irq (*irqp, NULL, IRQF_SHARED, "bogus", dev) != EBUSY) {
unsigned long irq_mask;
/* Twinkle the interrupt, and check if it's seen */
irq_mask = probe_irq_on();
sp->in_interrupt = 0;
/* .. we can safely take handler calls during init. */
- retval = request_irq(dev->irq, &speedo_interrupt, SA_SHIRQ, dev->name, dev);
+ retval = request_irq(dev->irq, &speedo_interrupt, IRQF_SHARED, dev->name, dev);
if (retval) {
return retval;
}
/* Soft reset the chip. */
outl(0x4001, ioaddr + GENCTL);
- if ((retval = request_irq(dev->irq, &epic_interrupt, SA_SHIRQ, dev->name, dev)))
+ if ((retval = request_irq(dev->irq, &epic_interrupt, IRQF_SHARED, dev->name, dev)))
return retval;
epic_init_ring(dev);
iowrite32(0x00000001, ioaddr + BCR); /* Reset */
- if (request_irq(dev->irq, &intr_handler, SA_SHIRQ, dev->name, dev))
+ if (request_irq(dev->irq, &intr_handler, IRQF_SHARED, dev->name, dev))
return -EAGAIN;
for (i = 0; i < 3; i++)
np->msi_flags |= NV_MSI_X_ENABLED;
if (optimization_mode == NV_OPTIMIZATION_MODE_THROUGHPUT && !intr_test) {
/* Request irq for rx handling */
- if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector, &nv_nic_irq_rx, SA_SHIRQ, dev->name, dev) != 0) {
+ if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector, &nv_nic_irq_rx, IRQF_SHARED, dev->name, dev) != 0) {
printk(KERN_INFO "forcedeth: request_irq failed for rx %d\n", ret);
pci_disable_msix(np->pci_dev);
np->msi_flags &= ~NV_MSI_X_ENABLED;
goto out_err;
}
/* Request irq for tx handling */
- if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector, &nv_nic_irq_tx, SA_SHIRQ, dev->name, dev) != 0) {
+ if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector, &nv_nic_irq_tx, IRQF_SHARED, dev->name, dev) != 0) {
printk(KERN_INFO "forcedeth: request_irq failed for tx %d\n", ret);
pci_disable_msix(np->pci_dev);
np->msi_flags &= ~NV_MSI_X_ENABLED;
goto out_free_rx;
}
/* Request irq for link and timer handling */
- if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector, &nv_nic_irq_other, SA_SHIRQ, dev->name, dev) != 0) {
+ if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector, &nv_nic_irq_other, IRQF_SHARED, dev->name, dev) != 0) {
printk(KERN_INFO "forcedeth: request_irq failed for link %d\n", ret);
pci_disable_msix(np->pci_dev);
np->msi_flags &= ~NV_MSI_X_ENABLED;
} else {
/* Request irq for all interrupts */
if ((!intr_test &&
- request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector, &nv_nic_irq, SA_SHIRQ, dev->name, dev) != 0) ||
+ request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector, &nv_nic_irq, IRQF_SHARED, dev->name, dev) != 0) ||
(intr_test &&
- request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector, &nv_nic_irq_test, SA_SHIRQ, dev->name, dev) != 0)) {
+ request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector, &nv_nic_irq_test, IRQF_SHARED, dev->name, dev) != 0)) {
printk(KERN_INFO "forcedeth: request_irq failed %d\n", ret);
pci_disable_msix(np->pci_dev);
np->msi_flags &= ~NV_MSI_X_ENABLED;
if (ret != 0 && np->msi_flags & NV_MSI_CAPABLE) {
if ((ret = pci_enable_msi(np->pci_dev)) == 0) {
np->msi_flags |= NV_MSI_ENABLED;
- if ((!intr_test && request_irq(np->pci_dev->irq, &nv_nic_irq, SA_SHIRQ, dev->name, dev) != 0) ||
- (intr_test && request_irq(np->pci_dev->irq, &nv_nic_irq_test, SA_SHIRQ, dev->name, dev) != 0)) {
+ if ((!intr_test && request_irq(np->pci_dev->irq, &nv_nic_irq, IRQF_SHARED, dev->name, dev) != 0) ||
+ (intr_test && request_irq(np->pci_dev->irq, &nv_nic_irq_test, IRQF_SHARED, dev->name, dev) != 0)) {
printk(KERN_INFO "forcedeth: request_irq failed %d\n", ret);
pci_disable_msi(np->pci_dev);
np->msi_flags &= ~NV_MSI_ENABLED;
}
}
if (ret != 0) {
- if ((!intr_test && request_irq(np->pci_dev->irq, &nv_nic_irq, SA_SHIRQ, dev->name, dev) != 0) ||
- (intr_test && request_irq(np->pci_dev->irq, &nv_nic_irq_test, SA_SHIRQ, dev->name, dev) != 0))
+ if ((!intr_test && request_irq(np->pci_dev->irq, &nv_nic_irq, IRQF_SHARED, dev->name, dev) != 0) ||
+ (intr_test && request_irq(np->pci_dev->irq, &nv_nic_irq_test, IRQF_SHARED, dev->name, dev) != 0))
goto out_err;
}
if (!using_multi_irqs(dev)) {
if (np->msi_flags & NV_MSI_X_ENABLED)
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
+ disable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
else
- disable_irq(dev->irq);
+ disable_irq_lockdep(dev->irq);
mask = np->irqmask;
} else {
if (np->nic_poll_irq & NVREG_IRQ_RX_ALL) {
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
+ disable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
mask |= NVREG_IRQ_RX_ALL;
}
if (np->nic_poll_irq & NVREG_IRQ_TX_ALL) {
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector);
+ disable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector);
mask |= NVREG_IRQ_TX_ALL;
}
if (np->nic_poll_irq & NVREG_IRQ_OTHER) {
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector);
+ disable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector);
mask |= NVREG_IRQ_OTHER;
}
}
pci_push(base);
if (!using_multi_irqs(dev)) {
- nv_nic_irq((int) 0, (void *) data, (struct pt_regs *) NULL);
+ nv_nic_irq(0, dev, NULL);
if (np->msi_flags & NV_MSI_X_ENABLED)
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
+ enable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
else
- enable_irq(dev->irq);
+ enable_irq_lockdep(dev->irq);
} else {
if (np->nic_poll_irq & NVREG_IRQ_RX_ALL) {
- nv_nic_irq_rx((int) 0, (void *) data, (struct pt_regs *) NULL);
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
+ nv_nic_irq_rx(0, dev, NULL);
+ enable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
}
if (np->nic_poll_irq & NVREG_IRQ_TX_ALL) {
- nv_nic_irq_tx((int) 0, (void *) data, (struct pt_regs *) NULL);
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector);
+ nv_nic_irq_tx(0, dev, NULL);
+ enable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector);
}
if (np->nic_poll_irq & NVREG_IRQ_OTHER) {
- nv_nic_irq_other((int) 0, (void *) data, (struct pt_regs *) NULL);
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector);
+ nv_nic_irq_other(0, dev, NULL);
+ enable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector);
}
}
}
struct fs_enet_private *fep = netdev_priv(dev);
(*fep->ops->pre_request_irq)(dev, irq);
- return request_irq(irq, irqf, SA_SHIRQ, name, dev);
+ return request_irq(irq, irqf, IRQF_SHARED, name, dev);
}
static void fs_free_irq(struct net_device *dev, int irq)
}
if ((retval = request_irq(dev->irq, >96100_interrupt,
- SA_SHIRQ, dev->name, dev))) {
+ IRQF_SHARED, dev->name, dev))) {
err("unable to get IRQ %d\n", dev->irq);
return retval;
}
u32 rx_int_var, tx_int_var;
u16 fifo_info;
- i = request_irq(dev->irq, &hamachi_interrupt, SA_SHIRQ, dev->name, dev);
+ i = request_irq(dev->irq, &hamachi_interrupt, IRQF_SHARED, dev->name, dev);
if (i)
return i;
outb(0, FCR(dev->base_addr)); /* disable FIFOs */
outb(0x0d, MCR(dev->base_addr));
outb(0, IER(dev->base_addr));
- if (request_irq(dev->irq, ser12_interrupt, SA_INTERRUPT | SA_SHIRQ,
+ if (request_irq(dev->irq, ser12_interrupt, IRQF_DISABLED | IRQF_SHARED,
"baycom_ser_fdx", dev)) {
release_region(dev->base_addr, SER12_EXTENT);
return -EBUSY;
outb(0, FCR(dev->base_addr)); /* disable FIFOs */
outb(0x0d, MCR(dev->base_addr));
outb(0, IER(dev->base_addr));
- if (request_irq(dev->irq, ser12_interrupt, SA_INTERRUPT | SA_SHIRQ,
+ if (request_irq(dev->irq, ser12_interrupt, IRQF_DISABLED | IRQF_SHARED,
"baycom_ser12", dev)) {
release_region(dev->base_addr, SER12_EXTENT);
return -EBUSY;
if (!Ivec[hwcfg.irq].used && hwcfg.irq)
{
- if (request_irq(hwcfg.irq, scc_isr, SA_INTERRUPT, "AX.25 SCC", NULL))
+ if (request_irq(hwcfg.irq, scc_isr, IRQF_DISABLED, "AX.25 SCC", NULL))
printk(KERN_WARNING "z8530drv: warning, cannot get IRQ %d\n", hwcfg.irq);
else
Ivec[hwcfg.irq].used = 1;
goto out_release_base;
}
outb(0, IER(dev->base_addr));
- if (request_irq(dev->irq, yam_interrupt, SA_INTERRUPT | SA_SHIRQ, dev->name, dev)) {
+ if (request_irq(dev->irq, yam_interrupt, IRQF_DISABLED | IRQF_SHARED, dev->name, dev)) {
printk(KERN_ERR "%s: irq %d busy\n", dev->name, dev->irq);
ret = -EBUSY;
goto out_release_base;
/* New: if bus is PCI or EISA, interrupts might be shared interrupts */
if (request_irq(dev->irq, hp100_interrupt,
lp->bus == HP100_BUS_PCI || lp->bus ==
- HP100_BUS_EISA ? SA_SHIRQ : SA_INTERRUPT,
+ HP100_BUS_EISA ? IRQF_SHARED : IRQF_DISABLED,
"hp100", dev)) {
printk("hp100: %s: unable to get IRQ %d\n", dev->name, dev->irq);
return -EAGAIN;
dev->irq = IRQ_AMIGA_PORTS;
/* Install the Interrupt handler */
- if (request_irq(IRQ_AMIGA_PORTS, ei_interrupt, SA_SHIRQ, "Hydra Ethernet",
+ if (request_irq(IRQ_AMIGA_PORTS, ei_interrupt, IRQF_SHARED, "Hydra Ethernet",
dev)) {
free_netdev(dev);
return -EAGAIN;
/* register resources - only necessary for IRQ */
- result = request_irq(priv->realirq, irq_handler, SA_SHIRQ | SA_SAMPLE_RANDOM, dev->name, dev);
+ result = request_irq(priv->realirq, irq_handler, IRQF_SHARED | IRQF_SAMPLE_RANDOM, dev->name, dev);
if (result != 0) {
printk(KERN_ERR "%s: failed to register irq %d\n", dev->name, dev->irq);
return result;
{
struct ioc3_private *ip = netdev_priv(dev);
- if (request_irq(dev->irq, ioc3_interrupt, SA_SHIRQ, ioc3_str, dev)) {
+ if (request_irq(dev->irq, ioc3_interrupt, IRQF_SHARED, ioc3_str, dev)) {
printk(KERN_ERR "%s: Can't get irq %d\n", dev->name, dev->irq);
return -EAGAIN;
return 0;
if (request_irq (self->io.irq, toshoboe_interrupt,
- SA_SHIRQ | SA_INTERRUPT, dev->name, (void *) self))
+ IRQF_SHARED | IRQF_DISABLED, dev->name, (void *) self))
{
return -EAGAIN;
}
self->io.fir_base = self->base;
self->io.fir_ext = OBOE_IO_EXTENT;
self->io.irq = pci_dev->irq;
- self->io.irqflags = SA_SHIRQ | SA_INTERRUPT;
+ self->io.irqflags = IRQF_SHARED | IRQF_DISABLED;
self->speed = self->io.speed = 9600;
self->async = 0;
outb(IRINTR_INT_MASK, ndev->base_addr+VLSI_PIO_IRINTR);
- if (request_irq(ndev->irq, vlsi_interrupt, SA_SHIRQ,
+ if (request_irq(ndev->irq, vlsi_interrupt, IRQF_SHARED,
drivername, ndev)) {
IRDA_WARNING("%s: couldn't get IRQ: %d\n",
__FUNCTION__, ndev->irq);
#endif
if((err = request_irq(adapter->pdev->irq, &ixgb_intr,
- SA_SHIRQ | SA_SAMPLE_RANDOM,
+ IRQF_SHARED | IRQF_SAMPLE_RANDOM,
netdev->name, netdev))) {
DPRINTK(PROBE, ERR,
"Unable to allocate interrupt Error: %d\n", err);
if (!nds_open++) {
err = request_irq(IRQ_IXP2000_THDA0, ixpdev_interrupt,
- SA_SHIRQ, "ixp2000_eth", nds);
+ IRQF_SHARED, "ixp2000_eth", nds);
if (err) {
nds_open--;
return err;
module_param(sonic_debug, int, 0);
MODULE_PARM_DESC(sonic_debug, "jazzsonic debug level (1-4)");
-#define SONIC_IRQ_FLAG SA_INTERRUPT
+#define SONIC_IRQ_FLAG IRQF_DISABLED
#include "sonic.c"
{
int i;
- i = request_irq(dev->irq, &i596_interrupt, SA_SHIRQ, dev->name, dev);
+ i = request_irq(dev->irq, &i596_interrupt, IRQF_SHARED, dev->name, dev);
if (i) {
printk(KERN_ERR "%s: IRQ %d not free\n", dev->name, dev->irq);
return i;
}
rc = request_irq(mp->tx_dma_intr, mace_txdma_intr, 0, "MACE-txdma", dev);
if (rc) {
- printk(KERN_ERR "MACE: can't get irq %d\n", mace->intrs[1].line);
+ printk(KERN_ERR "MACE: can't get irq %d\n", mp->tx_dma_intr);
goto err_free_irq;
}
rc = request_irq(mp->rx_dma_intr, mace_rxdma_intr, 0, "MACE-rxdma", dev);
if (rc) {
- printk(KERN_ERR "MACE: can't get irq %d\n", mace->intrs[2].line);
+ printk(KERN_ERR "MACE: can't get irq %d\n", mp->rx_dma_intr);
goto err_free_tx_irq;
}
pr_debug("%s: mipsnet_open\n", dev->name);
err = request_irq(dev->irq, &mipsnet_interrupt,
- SA_SHIRQ, dev->name, (void *) dev);
+ IRQF_SHARED, dev->name, (void *) dev);
if (err) {
pr_debug("%s: %s(): can't get irq %d\n",
int err;
err = request_irq(dev->irq, mv643xx_eth_int_handler,
- SA_SHIRQ | SA_SAMPLE_RANDOM, dev->name, dev);
+ IRQF_SHARED | IRQF_SAMPLE_RANDOM, dev->name, dev);
if (err) {
printk(KERN_ERR "Can not assign IRQ number to MV643XX_eth%d\n",
port_num);
pci_enable_device(pdev);
pci_set_master(pdev);
- status = request_irq(pdev->irq, myri10ge_intr, SA_SHIRQ,
+ status = request_irq(pdev->irq, myri10ge_intr, IRQF_SHARED,
netdev->name, mgp);
if (status != 0) {
dev_err(&pdev->dev, "failed to allocate IRQ\n");
mgp->msi_enabled = 1;
}
- status = request_irq(pdev->irq, myri10ge_intr, SA_SHIRQ,
+ status = request_irq(pdev->irq, myri10ge_intr, IRQF_SHARED,
netdev->name, mgp);
if (status != 0) {
dev_err(&pdev->dev, "failed to allocate IRQ\n");
/* Register interrupt handler now. */
DET(("Requesting MYRIcom IRQ line.\n"));
if (request_irq(dev->irq, &myri_interrupt,
- SA_SHIRQ, "MyriCOM Ethernet", (void *) dev)) {
+ IRQF_SHARED, "MyriCOM Ethernet", (void *) dev)) {
printk("MyriCOM: Cannot register interrupt handler.\n");
goto err;
}
/* Reset the chip, just in case. */
natsemi_reset(dev);
- i = request_irq(dev->irq, &intr_handler, SA_SHIRQ, dev->name, dev);
+ i = request_irq(dev->irq, &intr_handler, IRQF_SHARED, dev->name, dev);
if (i) return i;
if (netif_msg_ifup(np))
static int ne2k_pci_open(struct net_device *dev)
{
- int ret = request_irq(dev->irq, ei_interrupt, SA_SHIRQ, dev->name, dev);
+ int ret = request_irq(dev->irq, ei_interrupt, IRQF_SHARED, dev->name, dev);
if (ret)
return ret;
struct netx_eth_priv *priv = netdev_priv(ndev);
if (request_irq
- (ndev->irq, &netx_eth_interrupt, SA_SHIRQ, ndev->name, ndev))
+ (ndev->irq, &netx_eth_interrupt, IRQF_SHARED, ndev->name, ndev))
return -EAGAIN;
writel(ndev->dev_addr[0] |
dev->IMR_cache = 0;
- err = request_irq(pci_dev->irq, ns83820_irq, SA_SHIRQ,
+ err = request_irq(pci_dev->irq, ns83820_irq, IRQF_SHARED,
DRV_NAME, ndev);
if (err) {
printk(KERN_INFO "ns83820: unable to register irq %d\n",
DPRINTK ("ENTER\n");
- retval = request_irq (dev->irq, netdrv_interrupt, SA_SHIRQ, dev->name, dev);
+ retval = request_irq (dev->irq, netdrv_interrupt, IRQF_SHARED, dev->name, dev);
if (retval) {
DPRINTK ("EXIT, returning %d\n", retval);
return retval;
link->open++;
- request_irq(dev->irq, ei_irq_wrapper, SA_SHIRQ, "axnet_cs", dev);
+ request_irq(dev->irq, ei_irq_wrapper, IRQF_SHARED, "axnet_cs", dev);
info->link_status = 0x00;
init_timer(&info->watchdog);
link->open++;
set_misc_reg(dev);
- request_irq(dev->irq, ei_irq_wrapper, SA_SHIRQ, dev_info, dev);
+ request_irq(dev->irq, ei_irq_wrapper, IRQF_SHARED, dev_info, dev);
info->phy_id = info->eth_phy;
info->link_status = 0x00;
unsigned long flags;
if (request_irq(dev->irq, &pcnet32_interrupt,
- lp->shared_irq ? SA_SHIRQ : 0, dev->name,
+ lp->shared_irq ? IRQF_SHARED : 0, dev->name,
(void *)dev)) {
return -EAGAIN;
}
INIT_WORK(&phydev->phy_queue, phy_change, phydev);
if (request_irq(phydev->irq, phy_interrupt,
- SA_SHIRQ,
+ IRQF_SHARED,
"phy_interrupt",
phydev) < 0) {
printk(KERN_WARNING "%s: Can't get IRQ %d (PHY)\n",
rtl8169_set_rxbufsize(tp, dev);
retval =
- request_irq(dev->irq, rtl8169_interrupt, SA_SHIRQ, dev->name, dev);
+ request_irq(dev->irq, rtl8169_interrupt, IRQF_SHARED, dev->name, dev);
if (retval < 0)
goto out;
readl(&regs->HostCtrl);
spin_unlock_irqrestore(&rrpriv->lock, flags);
- if (request_irq(dev->irq, rr_interrupt, SA_SHIRQ, dev->name, dev)) {
+ if (request_irq(dev->irq, rr_interrupt, IRQF_SHARED, dev->name, dev)) {
printk(KERN_WARNING "%s: Requested IRQ %d is busy\n",
dev->name, dev->irq);
ecode = -EAGAIN;
/* After proper initialization of H/W, register ISR */
if (sp->intr_type == MSI) {
err = request_irq((int) sp->pdev->irq, s2io_msi_handle,
- SA_SHIRQ, sp->name, dev);
+ IRQF_SHARED, sp->name, dev);
if (err) {
DBG_PRINT(ERR_DBG, "%s: MSI registration \
failed\n", dev->name);
}
}
if (sp->intr_type == INTA) {
- err = request_irq((int) sp->pdev->irq, s2io_isr, SA_SHIRQ,
+ err = request_irq((int) sp->pdev->irq, s2io_isr, IRQF_SHARED,
sp->name, dev);
if (err) {
DBG_PRINT(ERR_DBG, "%s: ISR registration failed\n",
*/
__raw_readq(sc->sbm_isr);
- if (request_irq(dev->irq, &sbmac_intr, SA_SHIRQ, dev->name, dev))
+ if (request_irq(dev->irq, &sbmac_intr, IRQF_SHARED, dev->name, dev))
return -EBUSY;
/*
sis190_request_timer(dev);
- rc = request_irq(dev->irq, sis190_interrupt, SA_SHIRQ, dev->name, dev);
+ rc = request_irq(dev->irq, sis190_interrupt, IRQF_SHARED, dev->name, dev);
if (rc < 0)
goto err_release_timer_2;
/* Equalizer workaround Rule */
sis630_set_eq(net_dev, sis_priv->chipset_rev);
- ret = request_irq(net_dev->irq, &sis900_interrupt, SA_SHIRQ,
+ ret = request_irq(net_dev->irq, &sis900_interrupt, IRQF_SHARED,
net_dev->name, net_dev);
if (ret)
return ret;
spin_unlock_irqrestore(&pAC->SlowPathLock, Flags);
if (pAC->GIni.GIMacsFound == 2) {
- Ret = request_irq(dev->irq, SkGeIsr, SA_SHIRQ, "sk98lin", dev);
+ Ret = request_irq(dev->irq, SkGeIsr, IRQF_SHARED, "sk98lin", dev);
} else if (pAC->GIni.GIMacsFound == 1) {
- Ret = request_irq(dev->irq, SkGeIsrOnePort, SA_SHIRQ,
+ Ret = request_irq(dev->irq, SkGeIsrOnePort, IRQF_SHARED,
"sk98lin", dev);
} else {
printk(KERN_WARNING "sk98lin: Illegal number of ports: %d\n",
pci_enable_device(pdev);
pci_set_master(pdev);
if (pAC->GIni.GIMacsFound == 2)
- ret = request_irq(dev->irq, SkGeIsr, SA_SHIRQ, "sk98lin", dev);
+ ret = request_irq(dev->irq, SkGeIsr, IRQF_SHARED, "sk98lin", dev);
else
- ret = request_irq(dev->irq, SkGeIsrOnePort, SA_SHIRQ, "sk98lin", dev);
+ ret = request_irq(dev->irq, SkGeIsrOnePort, IRQF_SHARED, "sk98lin", dev);
if (ret) {
printk(KERN_WARNING "sk98lin: unable to acquire IRQ %d\n", dev->irq);
pAC->AllocFlag &= ~SK_ALLOC_IRQ;
/* register resources - only necessary for IRQ */
result =
request_irq(priv->realirq, irq_handler,
- SA_SHIRQ | SA_SAMPLE_RANDOM, "sk_mca", dev);
+ IRQF_SHARED | IRQF_SAMPLE_RANDOM, "sk_mca", dev);
if (result != 0) {
printk("%s: failed to register irq %d\n", dev->name,
dev->irq);
PRINTK(KERN_INFO "entering skfp_open\n");
/* Register IRQ - support shared interrupts by passing device ptr */
- err = request_irq(dev->irq, (void *) skfp_interrupt, SA_SHIRQ,
+ err = request_irq(dev->irq, (void *) skfp_interrupt, IRQF_SHARED,
dev->name, dev);
if (err)
return err;
goto err_out_free_hw;
}
- err = request_irq(pdev->irq, skge_intr, SA_SHIRQ, DRV_NAME, hw);
+ err = request_irq(pdev->irq, skge_intr, IRQF_SHARED, DRV_NAME, hw);
if (err) {
printk(KERN_ERR PFX "%s: cannot assign irq %d\n",
pci_name(pdev), pdev->irq);
sky2_write32(hw, B0_IMSK, Y2_IS_IRQ_SW);
- err = request_irq(pdev->irq, sky2_test_intr, SA_SHIRQ, DRV_NAME, hw);
+ err = request_irq(pdev->irq, sky2_test_intr, IRQF_SHARED, DRV_NAME, hw);
if (err) {
printk(KERN_ERR PFX "%s: cannot assign irq %d\n",
pci_name(pdev), pdev->irq);
goto err_out_unregister;
}
- err = request_irq(pdev->irq, sky2_intr, SA_SHIRQ, DRV_NAME, hw);
+ err = request_irq(pdev->irq, sky2_intr, IRQF_SHARED, DRV_NAME, hw);
if (err) {
printk(KERN_ERR PFX "%s: cannot assign irq %d\n",
pci_name(pdev), pdev->irq);
static int ultra32_open(struct net_device *dev)
{
int ioaddr = dev->base_addr - ULTRA32_NIC_OFFSET; /* ASIC addr */
- int irq_flags = (inb(ioaddr + ULTRA32_CFG5) & 0x08) ? 0 : SA_SHIRQ;
+ int irq_flags = (inb(ioaddr + ULTRA32_CFG5) & 0x08) ? 0 : IRQF_SHARED;
int retval;
retval = request_irq(dev->irq, ei_interrupt, irq_flags, dev->name, dev);
lp->ctl_rspeed = 100;
/* Grab the IRQ */
- retval = request_irq(dev->irq, &smc911x_interrupt, SA_SHIRQ, dev->name, dev);
+ retval = request_irq(dev->irq, &smc911x_interrupt, IRQF_SHARED, dev->name, dev);
if (retval)
goto err_out;
machine_is_omap_h2() \
|| machine_is_omap_h3() \
|| (machine_is_omap_innovator() && !cpu_is_omap1510()) \
- ) ? SA_TRIGGER_FALLING : SA_TRIGGER_RISING)
+ ) ? IRQF_TRIGGER_FALLING : IRQF_TRIGGER_RISING)
#elif defined(CONFIG_SH_SH4202_MICRODEV)
#endif
#ifndef SMC_IRQ_FLAGS
-#define SMC_IRQ_FLAGS SA_TRIGGER_RISING
+#define SMC_IRQ_FLAGS IRQF_TRIGGER_RISING
#endif
#ifndef SMC_INTERRUPT_PREAMBLE
result = -EBUSY;
if (request_irq(netdev->irq, spider_net_interrupt,
- SA_SHIRQ, netdev->name, netdev))
+ IRQF_SHARED, netdev->name, netdev))
goto register_int_failed;
spider_net_enable_card(card);
/* Do we ever need to reset the chip??? */
- retval = request_irq(dev->irq, &intr_handler, SA_SHIRQ, dev->name, dev);
+ retval = request_irq(dev->irq, &intr_handler, IRQF_SHARED, dev->name, dev);
if (retval)
return retval;
REGA(CSR0) = CSR0_STOP;
- request_irq(LANCE_IRQ, lance_interrupt, SA_INTERRUPT, "SUN3 Lance", dev);
+ request_irq(LANCE_IRQ, lance_interrupt, IRQF_DISABLED, "SUN3 Lance", dev);
dev->irq = (unsigned short)LANCE_IRQ;
struct bigmac *bp = (struct bigmac *) dev->priv;
int ret;
- ret = request_irq(dev->irq, &bigmac_interrupt, SA_SHIRQ, dev->name, bp);
+ ret = request_irq(dev->irq, &bigmac_interrupt, IRQF_SHARED, dev->name, bp);
if (ret) {
printk(KERN_ERR "BIGMAC: Can't order irq %d to go.\n", dev->irq);
return ret;
/* Do we need to reset the chip??? */
- i = request_irq(dev->irq, &intr_handler, SA_SHIRQ, dev->name, dev);
+ i = request_irq(dev->irq, &intr_handler, IRQF_SHARED, dev->name, dev);
if (i)
return i;
spin_unlock_irqrestore(&gp->lock, flags);
if (request_irq(gp->pdev->irq, gem_interrupt,
- SA_SHIRQ, dev->name, (void *)dev)) {
+ IRQF_SHARED, dev->name, (void *)dev)) {
printk(KERN_ERR "%s: failed to request irq !\n", gp->dev->name);
spin_lock_irqsave(&gp->lock, flags);
*/
if ((hp->happy_flags & (HFLAG_QUATTRO|HFLAG_PCI)) != HFLAG_QUATTRO) {
if (request_irq(dev->irq, &happy_meal_interrupt,
- SA_SHIRQ, dev->name, (void *)dev)) {
+ IRQF_SHARED, dev->name, (void *)dev)) {
HMD(("EAGAIN\n"));
printk(KERN_ERR "happy_meal(SBUS): Can't order irq %d to go.\n",
dev->irq);
err = request_irq(sdev->irqs[0],
quattro_sbus_interrupt,
- SA_SHIRQ, "Quattro",
+ IRQF_SHARED, "Quattro",
qp);
if (err != 0) {
printk(KERN_ERR "Quattro: Fatal IRQ registery error %d.\n", err);
STOP_LANCE(lp);
- if (request_irq(dev->irq, &lance_interrupt, SA_SHIRQ,
+ if (request_irq(dev->irq, &lance_interrupt, IRQF_SHARED,
lancestr, (void *) dev)) {
printk(KERN_ERR "Lance: Can't get irq %d\n", dev->irq);
return -EAGAIN;
qec_init_once(qecp, qec_sdev);
if (request_irq(qec_sdev->irqs[0], &qec_interrupt,
- SA_SHIRQ, "qec", (void *) qecp)) {
+ IRQF_SHARED, "qec", (void *) qecp)) {
printk(KERN_ERR "qec: Can't register irq.\n");
goto fail;
}
*/
if (dev->irq == 0 ||
- request_irq(dev->irq, &tc35815_interrupt, SA_SHIRQ, cardname, dev)) {
+ request_irq(dev->irq, &tc35815_interrupt, IRQF_SHARED, cardname, dev)) {
return -EAGAIN;
}
#define DRV_MODULE_NAME "tg3"
#define PFX DRV_MODULE_NAME ": "
-#define DRV_MODULE_VERSION "3.61"
-#define DRV_MODULE_RELDATE "June 29, 2006"
+#define DRV_MODULE_VERSION "3.62"
+#define DRV_MODULE_RELDATE "June 30, 2006"
#define TG3_DEF_MAC_MODE 0
#define TG3_DEF_RX_MODE 0
goto out_unlock;
}
- tcp_opt_len = ((skb->h.th->doff - 5) * 4);
- ip_tcp_len = (skb->nh.iph->ihl * 4) + sizeof(struct tcphdr);
+ if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6)
+ mss |= (skb_headlen(skb) - ETH_HLEN) << 9;
+ else {
+ tcp_opt_len = ((skb->h.th->doff - 5) * 4);
+ ip_tcp_len = (skb->nh.iph->ihl * 4) +
+ sizeof(struct tcphdr);
+
+ skb->nh.iph->check = 0;
+ skb->nh.iph->tot_len = htons(mss + ip_tcp_len +
+ tcp_opt_len);
+ mss |= (ip_tcp_len + tcp_opt_len) << 9;
+ }
base_flags |= (TXD_FLAG_CPU_PRE_DMA |
TXD_FLAG_CPU_POST_DMA);
- skb->nh.iph->check = 0;
- skb->nh.iph->tot_len = htons(mss + ip_tcp_len + tcp_opt_len);
-
skb->h.th->check = 0;
- mss |= (ip_tcp_len + tcp_opt_len) << 9;
}
else if (skb->ip_summed == CHECKSUM_HW)
base_flags |= TXD_FLAG_TCPUDP_CSUM;
fn = tg3_msi;
if (tp->tg3_flags2 & TG3_FLG2_1SHOT_MSI)
fn = tg3_msi_1shot;
- flags = SA_SAMPLE_RANDOM;
+ flags = IRQF_SAMPLE_RANDOM;
} else {
fn = tg3_interrupt;
if (tp->tg3_flags & TG3_FLAG_TAGGED_STATUS)
fn = tg3_interrupt_tagged;
- flags = SA_SHIRQ | SA_SAMPLE_RANDOM;
+ flags = IRQF_SHARED | IRQF_SAMPLE_RANDOM;
}
return (request_irq(tp->pdev->irq, fn, flags, dev->name, dev));
}
free_irq(tp->pdev->irq, dev);
err = request_irq(tp->pdev->irq, tg3_test_isr,
- SA_SHIRQ | SA_SAMPLE_RANDOM, dev->name, dev);
+ IRQF_SHARED | IRQF_SAMPLE_RANDOM, dev->name, dev);
if (err)
return err;
return -EINVAL;
return 0;
}
+ if (tp->tg3_flags2 & TG3_FLG2_HW_TSO_2) {
+ if (value)
+ dev->features |= NETIF_F_TSO6;
+ else
+ dev->features &= ~NETIF_F_TSO6;
+ }
return ethtool_op_set_tso(dev, value);
}
#endif
* Firmware TSO on older chips gives lower performance, so it
* is off by default, but can be enabled using ethtool.
*/
- if (tp->tg3_flags2 & TG3_FLG2_HW_TSO)
+ if (tp->tg3_flags2 & TG3_FLG2_HW_TSO) {
dev->features |= NETIF_F_TSO;
+ if (tp->tg3_flags2 & TG3_FLG2_HW_TSO_2)
+ dev->features |= NETIF_F_TSO6;
+ }
#endif
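The two tg3 hunks above wire TSO6 into the driver's ethtool set_tso handler and advertise NETIF_F_TSO6 whenever TG3_FLG2_HW_TSO_2 is set; the context comment notes that firmware TSO "can be enabled using ethtool". As a hedged userspace illustration of that toggle (not part of the patch; the interface name and minimal error handling are placeholders), this issues the ETHTOOL_STSO ioctl that "ethtool -K <dev> tso on" ends up using:

/* Sketch: enable TSO on an interface through SIOCETHTOOL/ETHTOOL_STSO. */
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

static int set_tso(const char *ifname, int enable)
{
        struct ethtool_value eval = { .cmd = ETHTOOL_STSO, .data = enable };
        struct ifreq ifr;
        int fd, ret;

        fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0)
                return -1;
        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        ifr.ifr_data = (void *)&eval;   /* the driver's .set_tso handler runs here */
        ret = ioctl(fd, SIOCETHTOOL, &ifr);
        close(fd);
        return ret;
}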
int err;
priv->tlanRev = TLan_DioRead8( dev->base_addr, TLAN_DEF_REVISION );
- err = request_irq( dev->irq, TLan_HandleInterrupt, SA_SHIRQ, TLanSignature, dev );
+ err = request_irq( dev->irq, TLan_HandleInterrupt, IRQF_SHARED, TLanSignature, dev );
if ( err ) {
printk(KERN_ERR "TLAN: Cannot open %s because IRQ %d is already in use.\n", dev->name, dev->irq );
u16 switchsettings, switchsettings_eeprom ;
- if(request_irq(dev->irq, &xl_interrupt, SA_SHIRQ , "3c359", dev)) {
+ if(request_irq(dev->irq, &xl_interrupt, IRQF_SHARED , "3c359", dev)) {
return -EAGAIN;
}
goto err_out_trdev;
}
- ret = request_irq(pdev->irq, tms380tr_interrupt, SA_SHIRQ,
+ ret = request_irq(pdev->irq, tms380tr_interrupt, IRQF_SHARED,
dev->name, dev);
if (ret)
goto err_out_region;
rc=streamer_reset(dev);
}
- if (request_irq(dev->irq, &streamer_interrupt, SA_SHIRQ, "lanstreamer", dev)) {
+ if (request_irq(dev->irq, &streamer_interrupt, IRQF_SHARED, "lanstreamer", dev)) {
return -EAGAIN;
}
#if STREAMER_DEBUG
*/
outb(0, dev->base_addr + MC_CONTROL_REG0); /* sanity */
madgemc_setsifsel(dev, 1);
- if (request_irq(dev->irq, madgemc_interrupt, SA_SHIRQ,
+ if (request_irq(dev->irq, madgemc_interrupt, IRQF_SHARED,
"madgemc", dev)) {
ret = -EBUSY;
goto getout3;
olympic_init(dev);
- if(request_irq(dev->irq, &olympic_interrupt, SA_SHIRQ , "olympic", dev)) {
+ if(request_irq(dev->irq, &olympic_interrupt, IRQF_SHARED , "olympic", dev)) {
return -EAGAIN;
}
dev->irq = 15;
break;
}
- if (request_irq(dev->irq, smctr_interrupt, SA_SHIRQ, smctr_name, dev)) {
+ if (request_irq(dev->irq, smctr_interrupt, IRQF_SHARED, smctr_name, dev)) {
release_region(dev->base_addr, SMCTR_IO_EXTENT);
return -ENODEV;
}
goto out2;
}
- if (request_irq(dev->irq, smctr_interrupt, SA_SHIRQ, smctr_name, dev))
+ if (request_irq(dev->irq, smctr_interrupt, IRQF_SHARED, smctr_name, dev))
goto out2;
/* Get 58x Rom Base */
goto err_out_trdev;
}
- ret = request_irq(pdev->irq, tms380tr_interrupt, SA_SHIRQ,
+ ret = request_irq(pdev->irq, tms380tr_interrupt, IRQF_SHARED,
dev->name, dev);
if (ret)
goto err_out_region;
dw32(IntrMask, 0);
- rc = request_irq(dev->irq, de_interrupt, SA_SHIRQ, dev->name, dev);
+ rc = request_irq(dev->irq, de_interrupt, IRQF_SHARED, dev->name, dev);
if (rc) {
printk(KERN_ERR "%s: IRQ %d request failure, err=%d\n",
dev->name, dev->irq, rc);
0.41 21-Mar-96 Don't check for get_hw_addr checksum unless DEC card
only <niles@axp745gsfc.nasa.gov>
Fix for multiple PCI cards reported by <jos@xos.nl>
- Duh, put the SA_SHIRQ flag into request_interrupt().
+ Duh, put the IRQF_SHARED flag into request_interrupt().
Fix SMC ethernet address in enet_det[].
Print chip name instead of "UNKNOWN" during boot.
0.42 26-Apr-96 Fix MII write TA bit error.
infoblocks.
Added DC21142 and DC21143 functions.
Added byte counters from <phil@tazenda.demon.co.uk>
- Added SA_INTERRUPT temporary fix from
+ Added IRQF_DISABLED temporary fix from
<mjacob@feral.com>.
0.53 12-Nov-97 Fix the *_probe() to include 'eth??' name during
module load: bug reported by
lp->state = OPEN;
de4x5_dbg_open(dev);
- if (request_irq(dev->irq, (void *)de4x5_interrupt, SA_SHIRQ,
+ if (request_irq(dev->irq, (void *)de4x5_interrupt, IRQF_SHARED,
lp->adapter_name, dev)) {
printk("de4x5_open(): Requested IRQ%d is busy - attemping FAST/SHARE...", dev->irq);
- if (request_irq(dev->irq, de4x5_interrupt, SA_INTERRUPT | SA_SHIRQ,
+ if (request_irq(dev->irq, de4x5_interrupt, IRQF_DISABLED | IRQF_SHARED,
lp->adapter_name, dev)) {
printk("\n Cannot get IRQ- reconfigure your hardware.\n");
disable_ast(dev);
DMFE_DBUG(0, "dmfe_open", 0);
- ret = request_irq(dev->irq, &dmfe_interrupt, SA_SHIRQ, dev->name, dev);
+ ret = request_irq(dev->irq, &dmfe_interrupt, IRQF_SHARED, dev->name, dev);
if (ret)
return ret;
{
int retval;
- if ((retval = request_irq(dev->irq, &tulip_interrupt, SA_SHIRQ, dev->name, dev)))
+ if ((retval = request_irq(dev->irq, &tulip_interrupt, IRQF_SHARED, dev->name, dev)))
return retval;
tulip_init_ring (dev);
pci_enable_device(pdev);
- if ((retval = request_irq(dev->irq, &tulip_interrupt, SA_SHIRQ, dev->name, dev))) {
+ if ((retval = request_irq(dev->irq, &tulip_interrupt, IRQF_SHARED, dev->name, dev))) {
printk (KERN_ERR "tulip: request_irq failed in resume\n");
return retval;
}
ULI526X_DBUG(0, "uli526x_open", 0);
- ret = request_irq(dev->irq, &uli526x_interrupt, SA_SHIRQ, dev->name, dev);
+ ret = request_irq(dev->irq, &uli526x_interrupt, IRQF_SHARED, dev->name, dev);
if (ret)
return ret;
iowrite32(0x00000001, ioaddr + PCIBusCfg); /* Reset */
netif_device_detach(dev);
- i = request_irq(dev->irq, &intr_handler, SA_SHIRQ, dev->name, dev);
+ i = request_irq(dev->irq, &intr_handler, IRQF_SHARED, dev->name, dev);
if (i)
goto out_err;
int retval;
enter("xircom_open");
printk(KERN_INFO "xircom cardbus adaptor found, registering as %s, using irq %i \n",dev->name,dev->irq);
- retval = request_irq(dev->irq, &xircom_interrupt, SA_SHIRQ, dev->name, dev);
+ retval = request_irq(dev->irq, &xircom_interrupt, IRQF_SHARED, dev->name, dev);
if (retval) {
leave("xircom_open - No IRQ");
return retval;
{
struct xircom_private *tp = netdev_priv(dev);
- if (request_irq(dev->irq, &xircom_interrupt, SA_SHIRQ, dev->name, dev))
+ if (request_irq(dev->irq, &xircom_interrupt, IRQF_SHARED, dev->name, dev))
return -EAGAIN;
xircom_up(dev);
goto out_sleep;
}
- err = request_irq(dev->irq, &typhoon_interrupt, SA_SHIRQ,
+ err = request_irq(dev->irq, &typhoon_interrupt, IRQF_SHARED,
dev->name, dev);
if(err < 0)
goto out_sleep;
void __iomem *ioaddr = rp->base;
int rc;
- rc = request_irq(rp->pdev->irq, &rhine_interrupt, SA_SHIRQ, dev->name,
+ rc = request_irq(rp->pdev->irq, &rhine_interrupt, IRQF_SHARED, dev->name,
dev);
if (rc)
return rc;
if (!netif_running(dev))
return 0;
- if (request_irq(dev->irq, rhine_interrupt, SA_SHIRQ, dev->name, dev))
+ if (request_irq(dev->irq, rhine_interrupt, IRQF_SHARED, dev->name, dev))
printk(KERN_ERR "via-rhine %s: request_irq failed\n", dev->name);
ret = pci_set_power_state(pdev, PCI_D0);
velocity_init_registers(vptr, VELOCITY_INIT_COLD);
- ret = request_irq(vptr->pdev->irq, &velocity_intr, SA_SHIRQ,
+ ret = request_irq(vptr->pdev->irq, &velocity_intr, IRQF_SHARED,
dev->name, dev);
if (ret < 0) {
/* Power down the chip */
priv = pci_get_drvdata(pdev);
- rc = request_irq(pdev->irq, dscc4_irq, SA_SHIRQ, DRV_NAME, priv->root);
+ rc = request_irq(pdev->irq, dscc4_irq, IRQF_SHARED, DRV_NAME, priv->root);
if (rc < 0) {
printk(KERN_WARNING "%s: IRQ %d busy\n", DRV_NAME, pdev->irq);
goto err_release_4;
dbg(DBG_PCI, "kernel mem %p, ctlmem %p\n", card->mem, card->ctlmem);
/* Register the interrupt handler */
- if (request_irq(pdev->irq, fst_intr, SA_SHIRQ, FST_DEV_NAME, card)) {
+ if (request_irq(pdev->irq, fst_intr, IRQF_SHARED, FST_DEV_NAME, card)) {
printk_err("Unable to register interrupt %d\n", card->irq);
pci_release_regions(pdev);
pci_disable_device(pdev);
/* We want a fast IRQ for this device. Actually we'd like an even faster
IRQ ;) - This is one driver RtLinux is made for */
- if(request_irq(irq, &z8530_interrupt, SA_INTERRUPT, "Hostess SV11", dev)<0)
+ if(request_irq(irq, &z8530_interrupt, IRQF_DISABLED, "Hostess SV11", dev)<0)
{
printk(KERN_WARNING "hostess: IRQ %d already in use.\n", irq);
goto fail1;
lmc_softreset (sc);
/* Since we have to use PCI bus, this should work on x86,alpha,ppc */
- if (request_irq (dev->irq, &lmc_interrupt, SA_SHIRQ, dev->name, dev)){
+ if (request_irq (dev->irq, &lmc_interrupt, IRQF_SHARED, dev->name, dev)){
printk(KERN_WARNING "%s: could not get irq: %d\n", dev->name, dev->irq);
lmc_trace(dev, "lmc_open irq failed out");
return -EAGAIN;
}
/* Allocate IRQ */
- if (request_irq(card->hw.irq, cpc_intr, SA_SHIRQ, "Cyclades-PC300", card)) {
+ if (request_irq(card->hw.irq, cpc_intr, IRQF_SHARED, "Cyclades-PC300", card)) {
printk ("PC300 found at RAM 0x%08x, but could not allocate IRQ%d.\n",
card->hw.ramphys, card->hw.irq);
goto err_io_unmap;
writew(readw(p) | 0x0040, p);
/* Allocate IRQ */
- if (request_irq(pdev->irq, sca_intr, SA_SHIRQ, devname, card)) {
+ if (request_irq(pdev->irq, sca_intr, IRQF_SHARED, devname, card)) {
printk(KERN_WARNING "pci200syn: could not allocate IRQ%d.\n",
pdev->irq);
pci200_pci_remove_one(pdev);
}
}
- if( request_irq(dev->irq, sbni_interrupt, SA_SHIRQ, dev->name, dev) ) {
+ if( request_irq(dev->irq, sbni_interrupt, IRQF_SHARED, dev->name, dev) ) {
printk( KERN_ERR "%s: unable to get IRQ %d.\n",
dev->name, dev->irq );
return -EAGAIN;
/* We want a fast IRQ for this device. Actually we'd like an even faster
IRQ ;) - This is one driver RtLinux is made for */
- if(request_irq(irq, &z8530_interrupt, SA_INTERRUPT, "SeaLevel", dev)<0)
+ if(request_irq(irq, &z8530_interrupt, IRQF_DISABLED, "SeaLevel", dev)<0)
{
printk(KERN_WARNING "sealevel: IRQ %d already in use.\n", irq);
goto fail1_1;
pci_name(pdev), plx_phy, ramsize / 1024, mem_phy, pdev->irq);
/* Allocate IRQ */
- if (request_irq(pdev->irq, wanxl_intr, SA_SHIRQ, "wanXL", card)) {
+ if (request_irq(pdev->irq, wanxl_intr, IRQF_SHARED, "wanXL", card)) {
printk(KERN_WARNING "wanXL %s: could not allocate IRQ%i.\n",
pci_name(pdev), pdev->irq);
wanxl_pci_remove_one(pdev);
reset_card (dev, 1);
msleep(400);
- rc = request_irq( dev->irq, airo_interrupt, SA_SHIRQ, dev->name, dev );
+ rc = request_irq( dev->irq, airo_interrupt, IRQF_SHARED, dev->name, dev );
if (rc) {
airo_print_err(dev->name, "register interrupt %d failed, rc %d",
irq, rc);
SET_NETDEV_DEV(dev, sys_dev);
- if ((rc = request_irq(dev->irq, service_interrupt, SA_SHIRQ, dev->name, dev))) {
+ if ((rc = request_irq(dev->irq, service_interrupt, IRQF_SHARED, dev->name, dev))) {
printk(KERN_ERR "%s: register interrupt %d failed, rc %d\n", dev->name, irq, rc);
goto err_out_free;
}
#include <linux/netdevice.h>
#include <linux/pci.h>
#include <linux/string.h>
-#include <linux/version.h>
+#include <linux/utsrelease.h>
static void bcm43xx_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info)
}
#endif
res = request_irq(bcm->irq, bcm43xx_interrupt_handler,
- SA_SHIRQ, KBUILD_MODNAME, bcm);
+ IRQF_SHARED, KBUILD_MODNAME, bcm);
if (res) {
printk(KERN_ERR PFX "Cannot register IRQ%d\n", bcm->irq);
return -ENODEV;
}
+/*
+ * HostAP uses two layers of net devices, where the inner
+ * layer gets called all the time from the outer layer.
+ * This is a natural nesting, which needs a split lock type.
+ */
+static struct lock_class_key hostap_netdev_xmit_lock_key;
+
+
static struct net_device *
prism2_init_local_data(struct prism2_helper_functions *funcs, int card_idx,
struct device *sdev)
SET_NETDEV_DEV(dev, sdev);
if (ret >= 0)
ret = register_netdevice(dev);
+
+ lockdep_set_class(&dev->_xmit_lock, &hostap_netdev_xmit_lock_key);
rtnl_unlock();
if (ret < 0) {
printk(KERN_WARNING "%s: register netdevice failed!\n",
pci_set_drvdata(pdev, dev);
- if (request_irq(dev->irq, prism2_interrupt, SA_SHIRQ, dev->name,
+ if (request_irq(dev->irq, prism2_interrupt, IRQF_SHARED, dev->name,
dev)) {
printk(KERN_WARNING "%s: request_irq failed\n", dev->name);
goto fail;
pci_set_drvdata(pdev, dev);
- if (request_irq(dev->irq, prism2_interrupt, SA_SHIRQ, dev->name,
+ if (request_irq(dev->irq, prism2_interrupt, IRQF_SHARED, dev->name,
dev)) {
printk(KERN_WARNING "%s: request_irq failed\n", dev->name);
goto fail;
ipw2100_queues_initialize(priv);
err = request_irq(pci_dev->irq,
- ipw2100_interrupt, SA_SHIRQ, dev->name, priv);
+ ipw2100_interrupt, IRQF_SHARED, dev->name, priv);
if (err) {
printk(KERN_WARNING DRV_NAME
"Error calling request_irq: %d.\n", pci_dev->irq);
ipw_sw_reset(priv, 1);
- err = request_irq(pdev->irq, ipw_isr, SA_SHIRQ, DRV_NAME, priv);
+ err = request_irq(pdev->irq, ipw_isr, IRQF_SHARED, DRV_NAME, priv);
if (err) {
IPW_ERROR("Error allocating IRQ %d\n", pdev->irq);
goto out_destroy_workqueue;
hermes_struct_init(&priv->hw, hermes_io, HERMES_16BIT_REGSPACING);
- err = request_irq(pdev->irq, orinoco_interrupt, SA_SHIRQ,
+ err = request_irq(pdev->irq, orinoco_interrupt, IRQF_SHARED,
dev->name, dev);
if (err) {
printk(KERN_ERR PFX "Cannot allocate IRQ %d\n", pdev->irq);
hermes_struct_init(&priv->hw, hermes_io, HERMES_32BIT_REGSPACING);
- err = request_irq(pdev->irq, orinoco_interrupt, SA_SHIRQ,
+ err = request_irq(pdev->irq, orinoco_interrupt, IRQF_SHARED,
dev->name, dev);
if (err) {
printk(KERN_ERR PFX "Cannot allocate IRQ %d\n", pdev->irq);
pci_enable_device(pdev);
pci_restore_state(pdev);
- err = request_irq(pdev->irq, orinoco_interrupt, SA_SHIRQ,
+ err = request_irq(pdev->irq, orinoco_interrupt, IRQF_SHARED,
dev->name, dev);
if (err) {
printk(KERN_ERR "%s: cannot re-allocate IRQ on resume\n",
hermes_struct_init(&priv->hw, hermes_io, HERMES_16BIT_REGSPACING);
- err = request_irq(pdev->irq, orinoco_interrupt, SA_SHIRQ,
+ err = request_irq(pdev->irq, orinoco_interrupt, IRQF_SHARED,
dev->name, dev);
if (err) {
printk(KERN_ERR PFX "Cannot allocate IRQ %d\n", pdev->irq);
hermes_struct_init(&priv->hw, hermes_io, HERMES_16BIT_REGSPACING);
- err = request_irq(pdev->irq, orinoco_interrupt, SA_SHIRQ,
+ err = request_irq(pdev->irq, orinoco_interrupt, IRQF_SHARED,
dev->name, dev);
if (err) {
printk(KERN_ERR PFX "Cannot allocate IRQ %d\n", pdev->irq);
/* request for the interrupt before uploading the firmware */
rvalue = request_irq(pdev->irq, &islpci_interrupt,
- SA_SHIRQ, ndev->name, priv);
+ IRQF_SHARED, ndev->name, priv);
if (rvalue) {
/* error, could not hook the handler to the irq */
/* Reset the chip. */
iowrite32(0x80000000, ioaddr + DMACtrl);
- i = request_irq(dev->irq, &yellowfin_interrupt, SA_SHIRQ, dev->name, dev);
+ i = request_irq(dev->irq, &yellowfin_interrupt, IRQF_SHARED, dev->name, dev);
if (i) return i;
if (yellowfin_debug > 1)
dev->irq = IRQ_AMIGA_PORTS;
/* Install the Interrupt handler */
- i = request_irq(IRQ_AMIGA_PORTS, ei_interrupt, SA_SHIRQ, DRV_NAME, dev);
+ i = request_irq(IRQ_AMIGA_PORTS, ei_interrupt, IRQF_SHARED, DRV_NAME, dev);
if (i) return i;
for(i = 0; i < ETHER_ADDR_LEN; i++) {
}
pcibios_register_hba(&eisa_dev.hba);
- result = request_irq(dev->irq, eisa_irq, SA_SHIRQ, "EISA", &eisa_dev);
+ result = request_irq(dev->irq, eisa_irq, IRQF_SHARED, "EISA", &eisa_dev);
if (result) {
printk(KERN_ERR "EISA: request_irq failed!\n");
return result;
else
printk(KERN_ERR PFX "USB regulator not initialized!\n");
- if (request_irq(pdev->irq, superio_interrupt, SA_INTERRUPT,
+ if (request_irq(pdev->irq, superio_interrupt, IRQF_DISABLED,
SUPERIO, (void *)sio)) {
printk(KERN_ERR PFX "could not get irq\n");
if (irq >= 0) {
/* request irq */
ret = request_irq(irq, parport_ax88796_interrupt,
- SA_TRIGGER_FALLING, pdev->name, pp);
+ IRQF_TRIGGER_FALLING, pdev->name, pp);
if (ret < 0)
goto exit_port;
if (p->irq != PARPORT_IRQ_NONE) {
if (use_cnt++ == 0)
- if (request_irq(IRQ_AMIGA_PORTS, mfc3_interrupt, SA_SHIRQ, p->name, &pp_mfc3_ops))
+ if (request_irq(IRQ_AMIGA_PORTS, mfc3_interrupt, IRQF_SHARED, p->name, &pp_mfc3_ops))
goto out_irq;
}
p->size = size;
if ((err = request_irq(p->irq, parport_sunbpp_interrupt,
- SA_SHIRQ, p->name, p)) != 0) {
+ IRQF_SHARED, p->name, p)) != 0) {
goto out_put_port;
}
dbg("entered cpci_hp_intr");
/* Check to see if it was our interrupt */
- if ((controller->irq_flags & SA_SHIRQ) &&
+ if ((controller->irq_flags & IRQF_SHARED) &&
!controller->ops->check_irq(controller->dev_id)) {
dbg("exited cpci_hp_intr, not our interrupt");
return IRQ_NONE;
#include <linux/init.h>
#include <linux/errno.h>
#include <linux/pci.h>
-#include <linux/signal.h> /* SA_SHIRQ */
+#include <linux/interrupt.h>
+#include <linux/signal.h> /* IRQF_SHARED */
#include "cpci_hotplug.h"
#include "cpcihp_zt5550.h"
zt5550_hpc.ops = &zt5550_hpc_ops;
if(!poll) {
zt5550_hpc.irq = hc_dev->irq;
- zt5550_hpc.irq_flags = SA_SHIRQ;
+ zt5550_hpc.irq_flags = IRQF_SHARED;
zt5550_hpc.dev_id = hc_dev;
zt5550_hpc_ops.enable_irq = zt5550_hc_enable_irq;
/* set up the interrupt */
dbg("HPC interrupt = %d \n", ctrl->interrupt);
if (request_irq(ctrl->interrupt, cpqhp_ctrl_intr,
- SA_SHIRQ, MY_NAME, ctrl)) {
+ IRQF_SHARED, MY_NAME, ctrl)) {
err("Can't get irq %d for the hotplug pci controller\n",
ctrl->interrupt);
rc = -ENODEV;
start_int_poll_timer( php_ctlr, 10 ); /* start with 10 second delay */
} else {
/* Installs the interrupt handler */
- rc = request_irq(php_ctlr->irq, pcie_isr, SA_SHIRQ, MY_NAME, (void *) ctrl);
+ rc = request_irq(php_ctlr->irq, pcie_isr, IRQF_SHARED, MY_NAME, (void *) ctrl);
dbg("%s: request_irq %d for hpc%d (returns %d)\n", __FUNCTION__, php_ctlr->irq, ctlr_seq_num, rc);
if (rc) {
err("Can't get irq %d for the hotplug controller\n", php_ctlr->irq);
} else
php_ctlr->irq = pdev->irq;
- rc = request_irq(php_ctlr->irq, shpc_isr, SA_SHIRQ, MY_NAME, (void *) ctrl);
+ rc = request_irq(php_ctlr->irq, shpc_isr, IRQF_SHARED, MY_NAME, (void *) ctrl);
dbg("%s: request_irq %d for hpc%d (returns %d)\n", __FUNCTION__, php_ctlr->irq, ctlr_seq_num, rc);
if (rc) {
err("Can't get irq %d for the hotplug controller\n", php_ctlr->irq);
/* must be a GPIO; ergo must trigger on both edges */
status = request_irq(board->det_pin, at91_cf_irq,
- SA_SAMPLE_RANDOM, driver_name, cf);
+ IRQF_SAMPLE_RANDOM, driver_name, cf);
if (status < 0)
goto fail0;
device_init_wakeup(&pdev->dev, 1);
*/
if (board->irq_pin) {
status = request_irq(board->irq_pin, at91_cf_irq,
- SA_SHIRQ, driver_name, cf);
+ IRQF_SHARED, driver_name, cf);
if (status < 0)
goto fail0a;
cf->socket.pci_irq = board->irq_pin;
hd64465_register_irq_demux(sp->irq, hs_irq_demux, sp);
- if ((err = request_irq(sp->irq, hs_interrupt, SA_INTERRUPT, MODNAME, sp)) < 0)
+ if ((err = request_irq(sp->irq, hs_interrupt, IRQF_DISABLED, MODNAME, sp)) < 0)
return err;
if (request_mem_region(sp->mem_base, sp->mem_length, MODNAME) == 0) {
sp->mem_base = 0;
/* Register the interrupt handler */
dprintk(KERN_DEBUG "Requesting interrupt %i \n",dev->irq);
- if ((ret = request_irq(dev->irq, i82092aa_interrupt, SA_SHIRQ, "i82092aa", i82092aa_interrupt))) {
+ if ((ret = request_irq(dev->irq, i82092aa_interrupt, IRQF_SHARED, "i82092aa", i82092aa_interrupt))) {
printk(KERN_ERR "i82092aa: Failed to register IRQ %d, aborting\n", dev->irq);
goto err_out_free_res;
}
static u_int __init test_irq(u_short sock, int irq)
{
debug(2, " testing ISA irq %d\n", irq);
- if (request_irq(irq, i365_count_irq, SA_PROBEIRQ, "scan",
+ if (request_irq(irq, i365_count_irq, IRQF_PROBE_SHARED, "scan",
i365_count_irq) != 0)
return 1;
irq_hits = 0; irq_sock = sock;
} else {
/* Fallback: just find interrupts that aren't in use */
for (i = 0; i < 16; i++)
- if ((mask0 & (1 << i)) && (_check_irq(i, SA_PROBEIRQ) == 0))
+ if ((mask0 & (1 << i)) && (_check_irq(i, IRQF_PROBE_SHARED) == 0))
mask1 |= (1 << i);
printk("default");
/* If scan failed, default to polled status */
u_int cs_mask = mask & ((cs_irq) ? (1<<cs_irq) : ~(1<<12));
for (cs_irq = 15; cs_irq > 0; cs_irq--)
if ((cs_mask & (1 << cs_irq)) &&
- (_check_irq(cs_irq, SA_PROBEIRQ) == 0))
+ (_check_irq(cs_irq, IRQF_PROBE_SHARED) == 0))
break;
if (cs_irq) {
grab_irq = 1;
dev_set_drvdata(dev, cf);
/* this primarily just shuts up irq handling noise */
- status = request_irq(irq, omap_cf_irq, SA_SHIRQ,
+ status = request_irq(irq, omap_cf_irq, IRQF_SHARED,
driver_name, cf);
if (status < 0)
goto fail0;
/* Decide what type of interrupt we are registering */
type = 0;
if (s->functions > 1) /* All of this ought to be handled higher up */
- type = SA_SHIRQ;
+ type = IRQF_SHARED;
if (req->Attributes & IRQ_TYPE_DYNAMIC_SHARING)
- type = SA_SHIRQ;
+ type = IRQF_SHARED;
#ifdef CONFIG_PCMCIA_PROBE
if (s->irq.AssignedIRQ != 0) {
if (ret && !s->irq.AssignedIRQ) {
if (!s->pci_irq)
return ret;
- type = SA_SHIRQ;
+ type = IRQF_SHARED;
irq = s->pci_irq;
}
}
/* Make sure the fact the request type was overridden is passed back */
- if (type == SA_SHIRQ && !(req->Attributes & IRQ_TYPE_DYNAMIC_SHARING)) {
+ if (type == IRQF_SHARED && !(req->Attributes & IRQ_TYPE_DYNAMIC_SHARING)) {
req->Attributes |= IRQ_TYPE_DYNAMIC_SHARING;
printk(KERN_WARNING "pcmcia: request for exclusive IRQ could not be fulfilled.\n");
printk(KERN_WARNING "pcmcia: the driver needs updating to supported shared IRQ lines.\n");
pci_set_drvdata(dev, socket);
if (irq_mode == 1) {
/* Register the interrupt handler */
- if ((ret = request_irq(dev->irq, pd6729_interrupt, SA_SHIRQ,
+ if ((ret = request_irq(dev->irq, pd6729_interrupt, IRQF_SHARED,
"pd6729", socket))) {
printk(KERN_ERR "pd6729: Failed to register irq %d, "
"aborting\n", dev->irq);
#include <linux/timer.h>
#include <linux/mm.h>
#include <linux/interrupt.h>
+#include <linux/irq.h>
#include <linux/spinlock.h>
#include <linux/cpufreq.h>
#include <asm/hardware.h>
#include <asm/io.h>
-#include <asm/irq.h>
#include <asm/system.h>
#include "soc_common.h"
if (irqs[i].sock != skt->nr)
continue;
res = request_irq(irqs[i].irq, soc_common_pcmcia_interrupt,
- SA_INTERRUPT, irqs[i].str, skt);
+ IRQF_DISABLED, irqs[i].str, skt);
if (res)
break;
set_irq_type(irqs[i].irq, IRQT_NOEDGE);
retval = vrc4171_add_sockets();
if (retval == 0)
- retval = request_irq(vrc4171_irq, pccard_interrupt, SA_SHIRQ,
+ retval = request_irq(vrc4171_irq, pccard_interrupt, IRQF_SHARED,
vrc4171_card_name, vrc4171_sockets);
if (retval < 0) {
return -ENOMEM;
}
- if (request_irq(dev->irq, cardu_interrupt, SA_SHIRQ, socket->name, socket) < 0) {
+ if (request_irq(dev->irq, cardu_interrupt, IRQF_SHARED, socket->name, socket) < 0) {
pcmcia_unregister_socket(socket->pcmcia_socket);
socket->pcmcia_socket = NULL;
iounmap(socket->base);
socket->probe_status = 0;
- if (request_irq(socket->cb_irq, yenta_probe_handler, SA_SHIRQ, "yenta", socket)) {
+ if (request_irq(socket->cb_irq, yenta_probe_handler, IRQF_SHARED, "yenta", socket)) {
printk(KERN_WARNING "Yenta: request_irq() in yenta_probe_cb_irq() failed!\n");
return -1;
}
/* We must finish initialization here */
- if (!socket->cb_irq || request_irq(socket->cb_irq, yenta_interrupt, SA_SHIRQ, "yenta", socket)) {
+ if (!socket->cb_irq || request_irq(socket->cb_irq, yenta_interrupt, IRQF_SHARED, "yenta", socket)) {
/* No IRQ or request_irq failed. Poll */
socket->cb_irq = 0; /* But zero is a valid IRQ number. */
init_timer(&socket->poll_timer);
static void
pnpacpi_parse_allocated_irqresource(struct pnp_resource_table *res, u32 gsi,
- int triggering, int polarity)
+ int triggering, int polarity, int shareable)
{
int i = 0;
int irq;
return;
}
+ if (shareable)
+ res->irq_resource[i].flags |= IORESOURCE_IRQ_SHAREABLE;
+
res->irq_resource[i].start = irq;
res->irq_resource[i].end = irq;
pcibios_penalize_isa_irq(irq, 1);
pnpacpi_parse_allocated_irqresource(res_table,
res->data.irq.interrupts[i],
res->data.irq.triggering,
- res->data.irq.polarity);
+ res->data.irq.polarity,
+ res->data.irq.sharable);
}
break;
pnpacpi_parse_allocated_irqresource(res_table,
res->data.extended_irq.interrupts[i],
res->data.extended_irq.triggering,
- res->data.extended_irq.polarity);
+ res->data.extended_irq.polarity,
+ res->data.extended_irq.sharable);
}
break;
* device is active because it itself may be in use */
if(!dev->active) {
if (request_irq(*irq, pnp_test_handler,
- SA_INTERRUPT|SA_PROBEIRQ, "pnp", NULL))
+ IRQF_DISABLED|IRQF_PROBE_SHARED, "pnp", NULL))
return 0;
free_irq(*irq, NULL);
}
AT91_RTC_CALEV);
ret = request_irq(AT91_ID_SYS, at91_rtc_interrupt,
- SA_SHIRQ, "at91_rtc", pdev);
+ IRQF_SHARED, "at91_rtc", pdev);
if (ret) {
printk(KERN_ERR "at91_rtc: IRQ %d already in use.\n",
AT91_ID_SYS);
if (pdata->irq >= 0) {
writeb(0, ioaddr + RTC_INTERRUPTS);
- if (request_irq(pdata->irq, ds1553_rtc_interrupt, SA_SHIRQ,
+ if (request_irq(pdata->irq, ds1553_rtc_interrupt, IRQF_SHARED,
pdev->name, pdev) < 0) {
dev_warn(&pdev->dev, "interrupt not available.\n");
pdata->irq = -1;
goto out_no_remap;
}
- if (request_irq(adev->irq[0], pl031_interrupt, SA_INTERRUPT,
+ if (request_irq(adev->irq[0], pl031_interrupt, IRQF_DISABLED,
"rtc-pl031", ldata->rtc)) {
ret = -EIO;
goto out_no_irq;
{
int ret;
- ret = request_irq(IRQ_RTC1Hz, sa1100_rtc_interrupt, SA_INTERRUPT,
+ ret = request_irq(IRQ_RTC1Hz, sa1100_rtc_interrupt, IRQF_DISABLED,
"rtc 1Hz", dev);
if (ret) {
dev_err(dev, "IRQ %d already in use.\n", IRQ_RTC1Hz);
goto fail_ui;
}
- ret = request_irq(IRQ_RTCAlrm, sa1100_rtc_interrupt, SA_INTERRUPT,
+ ret = request_irq(IRQ_RTCAlrm, sa1100_rtc_interrupt, IRQF_DISABLED,
"rtc Alrm", dev);
if (ret) {
dev_err(dev, "IRQ %d already in use.\n", IRQ_RTCAlrm);
goto fail_ai;
}
- ret = request_irq(IRQ_OST1, timer1_interrupt, SA_INTERRUPT,
+ ret = request_irq(IRQ_OST1, timer1_interrupt, IRQF_DISABLED,
"rtc timer", dev);
if (ret) {
dev_err(dev, "IRQ %d already in use.\n", IRQ_OST1);
spin_unlock_irq(&rtc_lock);
irq = ELAPSEDTIME_IRQ;
- retval = request_irq(irq, elapsedtime_interrupt, SA_INTERRUPT,
+ retval = request_irq(irq, elapsedtime_interrupt, IRQF_DISABLED,
"elapsed_time", pdev);
if (retval == 0) {
irq = RTCLONG1_IRQ;
- retval = request_irq(irq, rtclong1_interrupt, SA_INTERRUPT,
+ retval = request_irq(irq, rtclong1_interrupt, IRQF_DISABLED,
"rtclong1", pdev);
}
sclp_sync_wait(void)
{
unsigned long psw_mask;
+ unsigned long flags;
unsigned long cr0, cr0_sync;
u64 timeout;
sclp_tod_from_jiffies(sclp_request_timer.expires -
jiffies);
}
+ local_irq_save(flags);
/* Prevent bottom half from executing once we force interrupts open */
local_bh_disable();
/* Enable service-signal interruption, disable timer interrupts */
+ trace_hardirqs_on();
__ctl_store(cr0, 0, 0);
cr0_sync = cr0;
cr0_sync |= 0x00000200;
barrier();
cpu_relax();
}
- /* Restore interrupt settings */
- asm volatile ("SSM 0(%0)"
- : : "a" (&psw_mask) : "memory");
+ local_irq_disable();
__ctl_load(cr0, 0, 0);
- __local_bh_enable();
+ _local_bh_enable();
+ local_irq_restore(flags);
}
EXPORT_SYMBOL(sclp_sync_wait);
sch->driver->irq(&sch->dev);
spin_unlock(&sch->lock);
irq_exit ();
- __local_bh_enable();
+ _local_bh_enable();
return 1;
}
DEFINE_PER_CPU(char[256], qeth_dbf_txt_buf);
+static struct lock_class_key qdio_out_skb_queue_key;
+
/**
* some more definitions and declarations
*/
&card->qdio.out_qs[i]->qdio_bufs[j];
skb_queue_head_init(&card->qdio.out_qs[i]->bufs[j].
skb_list);
+ lockdep_set_class(
+ &card->qdio.out_qs[i]->bufs[j].skb_list.lock,
+ &qdio_out_skb_queue_key);
INIT_LIST_HEAD(&card->qdio.out_qs[i]->bufs[j].ctx_list);
}
}
struct sk_buff_head tmp_list;
skb_queue_head_init(&tmp_list);
+ lockdep_set_class(&tmp_list.lock, &qdio_out_skb_queue_key);
for(i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i){
while ((skb = skb_dequeue(&buf->skb_list))){
if (vlan_tx_tag_present(skb) &&
struct mcck_struct *mcck;
int umode;
+ lockdep_off();
+
mci = (struct mci *) &S390_lowcore.mcck_interruption_code;
mcck = &__get_cpu_var(cpu_mcck);
umode = user_mode(regs);
mcck->warning = 1;
set_thread_flag(TIF_MCCK_PENDING);
}
+ lockdep_on();
}
/*
atomic_clear_mask(ZFCP_STATUS_ADAPTER_HOST_CON_INIT,
&adapter->status);
ZFCP_LOG_DEBUG("Doing exchange config data\n");
- write_lock(&adapter->erp_lock);
+ write_lock_irq(&adapter->erp_lock);
zfcp_erp_action_to_running(erp_action);
- write_unlock(&adapter->erp_lock);
+ write_unlock_irq(&adapter->erp_lock);
zfcp_erp_timeout_init(erp_action);
if (zfcp_fsf_exchange_config_data(erp_action)) {
retval = ZFCP_ERP_FAILED;
adapter = erp_action->adapter;
atomic_clear_mask(ZFCP_STATUS_ADAPTER_XPORT_OK, &adapter->status);
- write_lock(&adapter->erp_lock);
+ write_lock_irq(&adapter->erp_lock);
zfcp_erp_action_to_running(erp_action);
- write_unlock(&adapter->erp_lock);
+ write_unlock_irq(&adapter->erp_lock);
zfcp_erp_timeout_init(erp_action);
ret = zfcp_fsf_exchange_port_data(erp_action, adapter, NULL);
zfcp_qdio_reqid_check(struct zfcp_adapter *adapter, void *sbale_addr)
{
struct zfcp_fsf_req *fsf_req;
+ unsigned long flags;
/* invalid (per convention used in this driver) */
if (unlikely(!sbale_addr)) {
fsf_req = (struct zfcp_fsf_req *) sbale_addr;
/* serialize with zfcp_fsf_req_dismiss_all */
- spin_lock(&adapter->fsf_req_list_lock);
+ spin_lock_irqsave(&adapter->fsf_req_list_lock, flags);
if (list_empty(&adapter->fsf_req_list_head)) {
- spin_unlock(&adapter->fsf_req_list_lock);
+ spin_unlock_irqrestore(&adapter->fsf_req_list_lock, flags);
return 0;
}
list_del(&fsf_req->list);
atomic_dec(&adapter->fsf_reqs_active);
- spin_unlock(&adapter->fsf_req_list_lock);
-
+ spin_unlock_irqrestore(&adapter->fsf_req_list_lock, flags);
+
if (unlikely(adapter != fsf_req->adapter)) {
ZFCP_LOG_NORMAL("bug: invalid reqid (fsf_req=%p, "
"fsf_req->adapter=%p, adapter=%p)\n",
printk("intr pri %d\n", grrr);
#endif
if ((bp->irq=irqs[bn]) && valid_irq(bp->irq) &&
- !request_irq(bp->irq|0x30, aurora_interrupt, SA_SHIRQ, "sio16", bp)) {
+ !request_irq(bp->irq|0x30, aurora_interrupt, IRQF_SHARED, "sio16", bp)) {
free_irq(bp->irq|0x30, bp);
} else
if ((bp->irq=prom_getint(sdev->prom_node, "bintr")) && valid_irq(bp->irq) &&
- !request_irq(bp->irq|0x30, aurora_interrupt, SA_SHIRQ, "sio16", bp)) {
+ !request_irq(bp->irq|0x30, aurora_interrupt, IRQF_SHARED, "sio16", bp)) {
free_irq(bp->irq|0x30, bp);
} else
if ((bp->irq=prom_getint(sdev->prom_node, "intr")) && valid_irq(bp->irq) &&
- !request_irq(bp->irq|0x30, aurora_interrupt, SA_SHIRQ, "sio16", bp)) {
+ !request_irq(bp->irq|0x30, aurora_interrupt, IRQF_SHARED, "sio16", bp)) {
free_irq(bp->irq|0x30, bp);
} else
for(grrr=0;grrr<TYPE_1_IRQS;grrr++) {
- if ((bp->irq=type_1_irq[grrr])&&!request_irq(bp->irq|0x30, aurora_interrupt, SA_SHIRQ, "sio16", bp)) {
+ if ((bp->irq=type_1_irq[grrr])&&!request_irq(bp->irq|0x30, aurora_interrupt, IRQF_SHARED, "sio16", bp)) {
free_irq(bp->irq|0x30, bp);
break;
} else {
#ifdef AURORA_ALLIRQ
int i;
for (i = 0; i < AURORA_ALLIRQ; i++) {
- error = request_irq(allirq[i]|0x30, aurora_interrupt, SA_SHIRQ,
+ error = request_irq(allirq[i]|0x30, aurora_interrupt, IRQF_SHARED,
"sio16", bp);
if (error)
printk(KERN_ERR "IRQ%d request error %d\n",
allirq[i], error);
}
#else
- error = request_irq(bp->irq|0x30, aurora_interrupt, SA_SHIRQ,
+ error = request_irq(bp->irq|0x30, aurora_interrupt, IRQF_SHARED,
"sio16", bp);
if (error) {
printk(KERN_ERR "IRQ request error %d\n", error);
bp->waiting = 0;
init_waitqueue_head(&bp->wq);
if (request_irq(edev->irqs[0], bbc_i2c_interrupt,
- SA_SHIRQ, "bbc_i2c", bp))
+ IRQF_SHARED, "bbc_i2c", bp))
goto fail;
bp->index = index;
{
if (request_irq(wd_dev.irq,
&wd_interrupt,
- SA_SHIRQ,
+ IRQF_SHARED,
WD_OBPNAME,
(void *)wd_dev.regs)) {
printk("%s: Cannot register IRQ %d\n",
TW_PARAM_PORTCOUNT, TW_PARAM_PORTCOUNT_LENGTH)));
/* Now setup the interrupt handler */
- retval = request_irq(pdev->irq, twa_interrupt, SA_SHIRQ, "3w-9xxx", tw_dev);
+ retval = request_irq(pdev->irq, twa_interrupt, IRQF_SHARED, "3w-9xxx", tw_dev);
if (retval) {
TW_PRINTK(tw_dev->host, TW_DRIVER, 0x30, "Error requesting IRQ");
goto out_remove_host;
printk(KERN_WARNING "3w-xxxx: scsi%d: Found a 3ware Storage Controller at 0x%x, IRQ: %d.\n", host->host_no, tw_dev->base_addr, pdev->irq);
/* Now setup the interrupt handler */
- retval = request_irq(pdev->irq, tw_interrupt, SA_SHIRQ, "3w-xxxx", tw_dev);
+ retval = request_irq(pdev->irq, tw_interrupt, IRQF_SHARED, "3w-xxxx", tw_dev);
if (retval) {
printk(KERN_WARNING "3w-xxxx: Error requesting IRQ.");
goto out_remove_host;
STATIC int NCR_700_host_reset(struct scsi_cmnd * SCpnt);
STATIC void NCR_700_chip_setup(struct Scsi_Host *host);
STATIC void NCR_700_chip_reset(struct Scsi_Host *host);
+STATIC int NCR_700_slave_alloc(struct scsi_device *SDpnt);
STATIC int NCR_700_slave_configure(struct scsi_device *SDpnt);
STATIC void NCR_700_slave_destroy(struct scsi_device *SDpnt);
static int NCR_700_change_queue_depth(struct scsi_device *SDpnt, int depth);
STATIC struct scsi_transport_template *NCR_700_transport_template = NULL;
-struct NCR_700_sense {
- unsigned char cmnd[MAX_COMMAND_SIZE];
-};
-
static char *NCR_700_phase[] = {
"",
"after selection",
tpnt->use_clustering = ENABLE_CLUSTERING;
tpnt->slave_configure = NCR_700_slave_configure;
tpnt->slave_destroy = NCR_700_slave_destroy;
+ tpnt->slave_alloc = NCR_700_slave_alloc;
tpnt->change_queue_depth = NCR_700_change_queue_depth;
tpnt->change_queue_type = NCR_700_change_queue_type;
struct NCR_700_command_slot *slot =
(struct NCR_700_command_slot *)SCp->host_scribble;
- NCR_700_unmap(hostdata, SCp, slot);
+ dma_unmap_single(hostdata->dev, slot->pCmd,
+ sizeof(SCp->cmnd), DMA_TO_DEVICE);
if (slot->flags == NCR_700_FLAG_AUTOSENSE) {
- struct NCR_700_sense *sense = SCp->device->hostdata;
+ char *cmnd = NCR_700_get_sense_cmnd(SCp->device);
#ifdef NCR_700_DEBUG
printk(" ORIGINAL CMD %p RETURNED %d, new return is %d sense is\n",
SCp, SCp->cmnd[7], result);
/* restore the old result if the request sense was
* successful */
if(result == 0)
- result = sense->cmnd[7];
+ result = cmnd[7];
} else
- dma_unmap_single(hostdata->dev, slot->pCmd,
- sizeof(SCp->cmnd), DMA_TO_DEVICE);
+ NCR_700_unmap(hostdata, SCp, slot);
free_slot(slot, hostdata);
#ifdef NCR_700_DEBUG
status_byte(hostdata->status[0]) == COMMAND_TERMINATED) {
struct NCR_700_command_slot *slot =
(struct NCR_700_command_slot *)SCp->host_scribble;
- if(SCp->cmnd[0] == REQUEST_SENSE) {
+ if(slot->flags == NCR_700_FLAG_AUTOSENSE) {
/* OOPS: bad device, returning another
* contingent allegiance condition */
scmd_printk(KERN_ERR, SCp,
"broken device is looping in contingent allegiance: ignoring\n");
NCR_700_scsi_done(hostdata, SCp, hostdata->status[0]);
} else {
- struct NCR_700_sense *sense = SCp->device->hostdata;
+ char *cmnd =
+ NCR_700_get_sense_cmnd(SCp->device);
#ifdef NCR_DEBUG
scsi_print_command(SCp);
printk(" cmd %p has status %d, requesting sense\n",
sizeof(SCp->cmnd),
DMA_TO_DEVICE);
- sense->cmnd[0] = REQUEST_SENSE;
- sense->cmnd[1] = (SCp->device->lun & 0x7) << 5;
- sense->cmnd[2] = 0;
- sense->cmnd[3] = 0;
- sense->cmnd[4] = sizeof(SCp->sense_buffer);
- sense->cmnd[5] = 0;
+ cmnd[0] = REQUEST_SENSE;
+ cmnd[1] = (SCp->device->lun & 0x7) << 5;
+ cmnd[2] = 0;
+ cmnd[3] = 0;
+ cmnd[4] = sizeof(SCp->sense_buffer);
+ cmnd[5] = 0;
/* Here's a quiet hack: the
* REQUEST_SENSE command is six bytes,
* so store a flag indicating that
* this was an internal sense request
* and the original status at the end
* of the command */
- sense->cmnd[6] = NCR_700_INTERNAL_SENSE_MAGIC;
- sense->cmnd[7] = hostdata->status[0];
- slot->pCmd = dma_map_single(hostdata->dev, sense->cmnd, sizeof(sense->cmnd), DMA_TO_DEVICE);
+ cmnd[6] = NCR_700_INTERNAL_SENSE_MAGIC;
+ cmnd[7] = hostdata->status[0];
+ slot->pCmd = dma_map_single(hostdata->dev, cmnd, MAX_COMMAND_SIZE, DMA_TO_DEVICE);
slot->dma_handle = dma_map_single(hostdata->dev, SCp->sense_buffer, sizeof(SCp->sense_buffer), DMA_FROM_DEVICE);
slot->SG[0].ins = bS_to_host(SCRIPT_MOVE_DATA_IN | sizeof(SCp->sense_buffer));
slot->SG[0].pAddr = bS_to_host(slot->dma_handle);
/* clear all the negotiated parameters */
__shost_for_each_device(SDp, host)
- SDp->hostdata = NULL;
+ NCR_700_clear_flag(SDp, ~0);
/* clear all the slots and their pending commands */
for(i = 0; i < NCR_700_COMMAND_SLOTS_PER_HOST; i++) {
spi_flags(STp) |= NCR_700_DEV_PRINT_SYNC_NEGOTIATION;
}
+STATIC int
+NCR_700_slave_alloc(struct scsi_device *SDp)
+{
+ SDp->hostdata = kzalloc(sizeof(struct NCR_700_Device_Parameters),
+ GFP_KERNEL);
+ if (!SDp->hostdata)
+ return -ENOMEM;
+
+ return 0;
+}
STATIC int
NCR_700_slave_configure(struct scsi_device *SDp)
struct NCR_700_Host_Parameters *hostdata =
(struct NCR_700_Host_Parameters *)SDp->host->hostdata[0];
- SDp->hostdata = kmalloc(GFP_KERNEL, sizeof(struct NCR_700_sense));
-
- if (!SDp->hostdata)
- return -ENOMEM;
-
/* to do here: allocate memory; build a queue_full list */
if(SDp->tagged_supported) {
scsi_set_tag_type(SDp, MSG_ORDERED_TAG);
#include <asm/io.h>
#include <scsi/scsi_device.h>
-
+#include <scsi/scsi_cmnd.h>
/* Turn on for general debugging---too verbose for normal use */
#undef NCR_700_DEBUG
#define SCRIPT_RETURN 0x90080000
};
-/* We use device->hostdata to store negotiated parameters. This is
- * supposed to be a pointer to a device private area, but we cannot
- * really use it as such since it will never be freed, so just use the
- * 32 bits to cram the information. The SYNC negotiation sequence looks
- * like:
+struct NCR_700_Device_Parameters {
+ /* space for creating a request sense command. Really, except
+ * for the annoying SCSI-2 requirement for LUN information in
+ * cmnd[1], this could be in static storage */
+ unsigned char cmnd[MAX_COMMAND_SIZE];
+ __u8 depth;
+};
+
+
+/* The SYNC negotiation sequence looks like:
*
* If DEV_NEGOTIATED_SYNC not set, tack and SDTR message on to the
* initial identify for the device and set DEV_BEGIN_SYNC_NEGOTATION
#define NCR_700_DEV_BEGIN_SYNC_NEGOTIATION (1<<17)
#define NCR_700_DEV_PRINT_SYNC_NEGOTIATION (1<<19)
+static inline char *NCR_700_get_sense_cmnd(struct scsi_device *SDp)
+{
+ struct NCR_700_Device_Parameters *hostdata = SDp->hostdata;
+
+ return hostdata->cmnd;
+}
+
static inline void
NCR_700_set_depth(struct scsi_device *SDp, __u8 depth)
{
- long l = (long)SDp->hostdata;
+ struct NCR_700_Device_Parameters *hostdata = SDp->hostdata;
- l &= 0xffff00ff;
- l |= 0xff00 & (depth << 8);
- SDp->hostdata = (void *)l;
+ hostdata->depth = depth;
}
static inline __u8
NCR_700_get_depth(struct scsi_device *SDp)
{
- return ((((unsigned long)SDp->hostdata) & 0xff00)>>8);
+ struct NCR_700_Device_Parameters *hostdata = SDp->hostdata;
+
+ return hostdata->depth;
}
static inline int
NCR_700_is_flag_set(struct scsi_device *SDp, __u32 flag)
NCR53c7x0_driver_init (host);
- if (request_irq(host->irq, NCR53c7x0_intr, SA_SHIRQ, "53c7xx", host))
+ if (request_irq(host->irq, NCR53c7x0_intr, IRQF_SHARED, "53c7xx", host))
{
printk("scsi%d : IRQ%d not free, detaching\n",
host->host_no, host->irq);
* Purpose : handle NCR53c7x0 interrupts for all NCR devices sharing
* the same IRQ line.
*
- * Inputs : Since we're using the SA_INTERRUPT interrupt handler
+ * Inputs : Since we're using the IRQF_DISABLED interrupt handler
* semantics, irq indicates the interrupt which invoked
* this handler.
*
/*
Acquire shared access to the IRQ Channel.
*/
- if (request_irq(HostAdapter->IRQ_Channel, BusLogic_InterruptHandler, SA_SHIRQ, HostAdapter->FullModelName, HostAdapter) < 0) {
+ if (request_irq(HostAdapter->IRQ_Channel, BusLogic_InterruptHandler, IRQF_SHARED, HostAdapter->FullModelName, HostAdapter) < 0) {
BusLogic_Error("UNABLE TO ACQUIRE IRQ CHANNEL %d - DETACHING\n", HostAdapter, HostAdapter->IRQ_Channel);
return false;
}
NCR5380_setup(instance);
for (trying_irqs = i = 0, mask = 1; i < 16; ++i, mask <<= 1)
- if ((mask & possible) && (request_irq(i, &probe_intr, SA_INTERRUPT, "NCR-probe", NULL) == 0))
+ if ((mask & possible) && (request_irq(i, &probe_intr, IRQF_DISABLED, "NCR-probe", NULL) == 0))
trying_irqs |= mask;
timeout = jiffies + (250 * HZ / 1000);
memset(p, '\0', sizeof(*p));
p->dev = dev;
snprintf(p->name, sizeof(p->name), "D700(%s)", dev->bus_id);
- if (request_irq(irq, NCR_D700_intr, SA_SHIRQ, p->name, p)) {
+ if (request_irq(irq, NCR_D700_intr, IRQF_SHARED, p->name, p)) {
printk(KERN_ERR "D700: request_irq failed\n");
kfree(p);
return -EBUSY;
p->irq = irq;
p->siops = siops;
- if (request_irq(irq, NCR_Q720_intr, SA_SHIRQ, "NCR_Q720", p)) {
+ if (request_irq(irq, NCR_Q720_intr, IRQF_SHARED, "NCR_Q720", p)) {
printk(KERN_ERR "NCR_Q720: request irq %d failed\n", irq);
goto out_release;
}
shost->sg_tablesize = TOTAL_SG_ENTRY;
/* Initial orc chip */
- error = request_irq(pdev->irq, inia100_intr, SA_SHIRQ,
+ error = request_irq(pdev->irq, inia100_intr, IRQF_SHARED,
"inia100", shost);
if (error < 0) {
printk(KERN_WARNING "inia100: unable to get irq %d\n",
regs.SASR = &(DMA(instance)->SASR);
regs.SCMD = &(DMA(instance)->SCMD);
wd33c93_init(instance, regs, dma_setup, dma_stop, WD33C93_FS_8_10);
- request_irq(IRQ_AMIGA_PORTS, a2091_intr, SA_SHIRQ, "A2091 SCSI",
+ request_irq(IRQ_AMIGA_PORTS, a2091_intr, IRQF_SHARED, "A2091 SCSI",
instance);
DMA(instance)->CNTR = CNTR_PDMD | CNTR_INTEN;
num_a2091++;
regs.SASR = &(DMA(a3000_host)->SASR);
regs.SCMD = &(DMA(a3000_host)->SCMD);
wd33c93_init(a3000_host, regs, dma_setup, dma_stop, WD33C93_FS_12_15);
- if (request_irq(IRQ_AMIGA_PORTS, a3000_intr, SA_SHIRQ, "A3000 SCSI",
+ if (request_irq(IRQ_AMIGA_PORTS, a3000_intr, IRQF_SHARED, "A3000 SCSI",
a3000_intr))
goto fail_irq;
DMA(a3000_host)->CNTR = CNTR_PDMD | CNTR_INTEN;
init->AdapterFibsPhysicalAddress = cpu_to_le32((u32)phys);
init->AdapterFibsSize = cpu_to_le32(fibsize);
init->AdapterFibAlign = cpu_to_le32(sizeof(struct hw_fib));
- /*
- * number of 4k pages of host physical memory. The aacraid fw needs
- * this number to be less than 4gb worth of pages. num_physpages is in
- * system page units. New firmware doesn't have any issues with the
- * mapping system, but older Firmware did, and had *troubles* dealing
- * with the math overloading past 32 bits, thus we must limit this
- * field.
- *
- * This assumes the memory is mapped zero->n, which isnt
- * always true on real computers. It also has some slight problems
- * with the GART on x86-64. I've btw never tried DMA from PCI space
- * on this platform but don't be surprised if its problematic.
- * [AK: something is very very wrong when a driver tests this symbol.
- * Someone should figure out what the comment writer really meant here and fix
- * the code. Or just remove that bad code. ]
- */
-#ifndef CONFIG_IOMMU
- if ((num_physpages << (PAGE_SHIFT - 12)) <= AAC_MAX_HOSTPHYSMEMPAGES) {
- init->HostPhysMemPages =
- cpu_to_le32(num_physpages << (PAGE_SHIFT-12));
- } else
-#endif
- {
- init->HostPhysMemPages = cpu_to_le32(AAC_MAX_HOSTPHYSMEMPAGES);
- }
+ init->HostPhysMemPages = cpu_to_le32(AAC_MAX_HOSTPHYSMEMPAGES);
init->InitFlags = 0;
if (dev->new_comm_interface) {
}
msleep(1);
}
- if (request_irq(dev->scsi_host_ptr->irq, aac_rkt_intr, SA_SHIRQ|SA_INTERRUPT, "aacraid", (void *)dev)<0)
+ if (request_irq(dev->scsi_host_ptr->irq, aac_rkt_intr, IRQF_SHARED|IRQF_DISABLED, "aacraid", (void *)dev)<0)
{
printk(KERN_ERR "%s%d: Interrupt unavailable.\n", name, instance);
goto error_iounmap;
}
msleep(1);
}
- if (request_irq(dev->scsi_host_ptr->irq, aac_rx_intr, SA_SHIRQ|SA_INTERRUPT, "aacraid", (void *)dev)<0)
+ if (request_irq(dev->scsi_host_ptr->irq, aac_rx_intr, IRQF_SHARED|IRQF_DISABLED, "aacraid", (void *)dev)<0)
{
printk(KERN_ERR "%s%d: Interrupt unavailable.\n", name, instance);
goto error_iounmap;
msleep(1);
}
- if (request_irq(dev->scsi_host_ptr->irq, aac_sa_intr, SA_SHIRQ|SA_INTERRUPT, "aacraid", (void *)dev ) < 0) {
+ if (request_irq(dev->scsi_host_ptr->irq, aac_sa_intr, IRQF_SHARED|IRQF_DISABLED, "aacraid", (void *)dev ) < 0) {
printk(KERN_WARNING "%s%d: Interrupt unavailable.\n", name, instance);
goto error_iounmap;
}
1.5 (8/8/96):
1. Add support for ABP-940U (PCI Ultra) adapter.
- 2. Add support for IRQ sharing by setting the SA_SHIRQ flag for
+ 2. Add support for IRQ sharing by setting the IRQF_SHARED flag for
request_irq and supplying a dev_id pointer to both request_irq()
and free_irq().
3. In AscSearchIOPortAddr11() restore a call to check_region() which
3. For v2.1.93 and newer kernels use CONFIG_PCI and new PCI BIOS
access functions.
4. Update board serial number printing.
- 5. Try allocating an IRQ both with and without the SA_INTERRUPT
+ 5. Try allocating an IRQ both with and without the IRQF_DISABLED
flag set to allow IRQ sharing with drivers that do not set
- the SA_INTERRUPT flag. Also display a more descriptive error
+ the IRQF_DISABLED flag. Also display a more descriptive error
message if request_irq() fails.
6. Update to latest Asc and Adv Libraries.
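The advansys changelog above spells out the shared-IRQ convention these patches apply tree-wide: pass IRQF_SHARED to request_irq() and give request_irq() and free_irq() the same dev_id pointer so the kernel can tell apart the handlers chained on one line, requesting IRQF_DISABLED only when the handler must run with local interrupts off. A minimal sketch of that pattern, using a hypothetical my_dev structure and my_intr handler rather than code from any driver in this series:

    #include <linux/interrupt.h>
    #include <asm/io.h>

    struct my_dev {
            void __iomem *regs;     /* device register window */
            int irq;
    };

    static irqreturn_t my_intr(int irq, void *dev_id, struct pt_regs *regs)
    {
            struct my_dev *dev = dev_id;    /* dev_id identifies our instance on the shared line */

            /* hypothetical status read; a real driver checks its own IRQ status register */
            if (!readl(dev->regs))
                    return IRQ_NONE;        /* not our interrupt, let other handlers run */

            /* ... acknowledge and service the device here ... */
            return IRQ_HANDLED;
    }

    static int my_attach(struct my_dev *dev)
    {
            /* the same dev pointer must be handed to free_irq() at teardown */
            if (request_irq(dev->irq, my_intr, IRQF_SHARED, "mydrv", dev))
                    return -EBUSY;
            return 0;
    }

    static void my_detach(struct my_dev *dev)
    {
            free_irq(dev->irq, dev);
    }

As in the advansys code that follows, a driver that prefers IRQF_DISABLED can first try request_irq() with IRQF_DISABLED | IRQF_SHARED and fall back to IRQF_SHARED alone if that fails.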
/* Register IRQ Number. */
ASC_DBG1(2, "advansys_detect: request_irq() %d\n", shp->irq);
/*
- * If request_irq() fails with the SA_INTERRUPT flag set,
- * then try again without the SA_INTERRUPT flag set. This
+ * If request_irq() fails with the IRQF_DISABLED flag set,
+ * then try again without the IRQF_DISABLED flag set. This
* allows IRQ sharing to work even with other drivers that
- * do not set the SA_INTERRUPT flag.
+ * do not set the IRQF_DISABLED flag.
*
- * If SA_INTERRUPT is not set, then interrupts are enabled
+ * If IRQF_DISABLED is not set, then interrupts are enabled
* before the driver interrupt function is called.
*/
if (((ret = request_irq(shp->irq, advansys_interrupt,
- SA_INTERRUPT | (share_irq == TRUE ? SA_SHIRQ : 0),
+ IRQF_DISABLED | (share_irq == TRUE ? IRQF_SHARED : 0),
"advansys", boardp)) != 0) &&
((ret = request_irq(shp->irq, advansys_interrupt,
- (share_irq == TRUE ? SA_SHIRQ : 0),
+ (share_irq == TRUE ? IRQF_SHARED : 0),
"advansys", boardp)) != 0))
{
if (ret == -EBUSY) {
SETPORT(SIMODE0, 0);
SETPORT(SIMODE1, 0);
- if( request_irq(shpnt->irq, swintr, SA_INTERRUPT|SA_SHIRQ, "aha152x", shpnt) ) {
+ if( request_irq(shpnt->irq, swintr, IRQF_DISABLED|IRQF_SHARED, "aha152x", shpnt) ) {
printk(KERN_ERR "aha152x%d: irq %d busy.\n", shpnt->host_no, shpnt->irq);
goto out_host_put;
}
SETPORT(SSTAT0, 0x7f);
SETPORT(SSTAT1, 0xef);
- if ( request_irq(shpnt->irq, intr, SA_INTERRUPT|SA_SHIRQ, "aha152x", shpnt) ) {
+ if ( request_irq(shpnt->irq, intr, IRQF_DISABLED|IRQF_SHARED, "aha152x", shpnt) ) {
printk(KERN_ERR "aha152x%d: failed to reassign irq %d.\n", shpnt->host_no, shpnt->irq);
goto out_host_put;
}
}
DEB(printk("aha1740_probe: enable interrupt channel %d\n",irq_level));
- if (request_irq(irq_level,aha1740_intr_handle,irq_type ? 0 : SA_SHIRQ,
+ if (request_irq(irq_level,aha1740_intr_handle,irq_type ? 0 : IRQF_SHARED,
"aha1740",shpnt)) {
printk(KERN_ERR "aha1740_probe: Unable to allocate IRQ %d.\n",
irq_level);
probe_ent->port_ops = ahci_port_info[board_idx].port_ops;
probe_ent->irq = pdev->irq;
- probe_ent->irq_flags = SA_SHIRQ;
+ probe_ent->irq_flags = IRQF_SHARED;
probe_ent->mmio_base = mmio_base;
probe_ent->private_data = hpriv;
shared = 0;
if ((ahc->flags & AHC_EDGE_INTERRUPT) == 0)
- shared = SA_SHIRQ;
+ shared = IRQF_SHARED;
error = request_irq(irq, ahc_linux_isr, shared, "aic7xxx", ahc);
if (error == 0)
} ahd_queue_alg;
void ahd_set_tags(struct ahd_softc *ahd,
+ struct scsi_cmnd *cmd,
struct ahd_devinfo *devinfo,
ahd_queue_alg alg);
/* Notify XPT */
ahd_send_async(ahd, devinfo.channel, devinfo.target,
- CAM_LUN_WILDCARD, AC_SENT_BDR, NULL);
+ CAM_LUN_WILDCARD, AC_SENT_BDR);
/*
* Allow the sequencer to continue with
tinfo->curr.ppr_options = ppr_options;
ahd_send_async(ahd, devinfo->channel, devinfo->target,
- CAM_LUN_WILDCARD, AC_TRANSFER_NEG, NULL);
+ CAM_LUN_WILDCARD, AC_TRANSFER_NEG);
if (bootverbose) {
if (offset != 0) {
int options;
tinfo->curr.width = width;
ahd_send_async(ahd, devinfo->channel, devinfo->target,
- CAM_LUN_WILDCARD, AC_TRANSFER_NEG, NULL);
+ CAM_LUN_WILDCARD, AC_TRANSFER_NEG);
if (bootverbose) {
printf("%s: target %d using %dbit transfers\n",
ahd_name(ahd), devinfo->target,
* Update the current state of tagged queuing for a given target.
*/
void
-ahd_set_tags(struct ahd_softc *ahd, struct ahd_devinfo *devinfo,
- ahd_queue_alg alg)
+ahd_set_tags(struct ahd_softc *ahd, struct scsi_cmnd *cmd,
+ struct ahd_devinfo *devinfo, ahd_queue_alg alg)
{
- ahd_platform_set_tags(ahd, devinfo, alg);
+ struct scsi_device *sdev = cmd->device;
+
+ ahd_platform_set_tags(ahd, sdev, devinfo, alg);
ahd_send_async(ahd, devinfo->channel, devinfo->target,
- devinfo->lun, AC_TRANSFER_NEG, &alg);
+ devinfo->lun, AC_TRANSFER_NEG);
}
static void
printf("(%s:%c:%d:%d): refuses tagged commands. "
"Performing non-tagged I/O\n", ahd_name(ahd),
devinfo->channel, devinfo->target, devinfo->lun);
- ahd_set_tags(ahd, devinfo, AHD_QUEUE_NONE);
+ ahd_set_tags(ahd, scb->io_ctx, devinfo, AHD_QUEUE_NONE);
mask = ~0x23;
} else {
printf("(%s:%c:%d:%d): refuses %s tagged commands. "
ahd_name(ahd), devinfo->channel, devinfo->target,
devinfo->lun, tag_type == MSG_ORDERED_TASK
? "ordered" : "head of queue");
- ahd_set_tags(ahd, devinfo, AHD_QUEUE_BASIC);
+ ahd_set_tags(ahd, scb->io_ctx, devinfo, AHD_QUEUE_BASIC);
mask = ~0x03;
}
if (status != CAM_SEL_TIMEOUT)
ahd_send_async(ahd, devinfo->channel, devinfo->target,
- CAM_LUN_WILDCARD, AC_SENT_BDR, NULL);
+ CAM_LUN_WILDCARD, AC_SENT_BDR);
if (message != NULL && bootverbose)
printf("%s: %s on %c:%d. %d SCBs aborted\n", ahd_name(ahd),
#endif
/* Notify the XPT that a bus reset occurred */
ahd_send_async(ahd, devinfo.channel, CAM_TARGET_WILDCARD,
- CAM_LUN_WILDCARD, AC_BUS_RESET, NULL);
+ CAM_LUN_WILDCARD, AC_BUS_RESET);
/*
* Revert to async/narrow transfers until we renegotiate.
struct seeprom_config *sc = ahd->seep_config;
unsigned long flags;
struct scsi_target **ahd_targp = ahd_linux_target_in_softc(starget);
- struct ahd_linux_target *targ = scsi_transport_target_data(starget);
struct ahd_devinfo devinfo;
struct ahd_initiator_tinfo *tinfo;
struct ahd_tmode_tstate *tstate;
BUG_ON(*ahd_targp != NULL);
*ahd_targp = starget;
- memset(targ, 0, sizeof(*targ));
if (sc) {
int flags = sc->device_flags[starget->id];
{
struct ahd_softc *ahd =
*((struct ahd_softc **)sdev->host->hostdata);
- struct scsi_target *starget = sdev->sdev_target;
- struct ahd_linux_target *targ = scsi_transport_target_data(starget);
struct ahd_linux_device *dev;
if (bootverbose)
printf("%s: Slave Alloc %d\n", ahd_name(ahd), sdev->id);
- BUG_ON(targ->sdev[sdev->lun] != NULL);
-
dev = scsi_transport_device_data(sdev);
memset(dev, 0, sizeof(*dev));
*/
dev->maxtags = 0;
- targ->sdev[sdev->lun] = sdev;
-
return (0);
}
return 0;
}
-static void
-ahd_linux_slave_destroy(struct scsi_device *sdev)
-{
- struct ahd_softc *ahd;
- struct ahd_linux_device *dev = scsi_transport_device_data(sdev);
- struct ahd_linux_target *targ = scsi_transport_target_data(sdev->sdev_target);
-
- ahd = *((struct ahd_softc **)sdev->host->hostdata);
- if (bootverbose)
- printf("%s: Slave Destroy %d\n", ahd_name(ahd), sdev->id);
-
- BUG_ON(dev->active);
-
- targ->sdev[sdev->lun] = NULL;
-
-}
-
#if defined(__i386__)
/*
* Return the disk geometry for the given SCSI device.
.use_clustering = ENABLE_CLUSTERING,
.slave_alloc = ahd_linux_slave_alloc,
.slave_configure = ahd_linux_slave_configure,
- .slave_destroy = ahd_linux_slave_destroy,
.target_alloc = ahd_linux_target_alloc,
.target_destroy = ahd_linux_target_destroy,
};
ahd_platform_free(struct ahd_softc *ahd)
{
struct scsi_target *starget;
- int i, j;
+ int i;
if (ahd->platform_data != NULL) {
/* destroy all of the device and target objects */
for (i = 0; i < AHD_NUM_TARGETS; i++) {
starget = ahd->platform_data->starget[i];
if (starget != NULL) {
- for (j = 0; j < AHD_NUM_LUNS; j++) {
- struct ahd_linux_target *targ =
- scsi_transport_target_data(starget);
- if (targ->sdev[j] == NULL)
- continue;
- targ->sdev[j] = NULL;
- }
ahd->platform_data->starget[i] = NULL;
}
}
}
void
-ahd_platform_set_tags(struct ahd_softc *ahd, struct ahd_devinfo *devinfo,
- ahd_queue_alg alg)
+ahd_platform_set_tags(struct ahd_softc *ahd, struct scsi_device *sdev,
+ struct ahd_devinfo *devinfo, ahd_queue_alg alg)
{
- struct scsi_target *starget;
- struct ahd_linux_target *targ;
struct ahd_linux_device *dev;
- struct scsi_device *sdev;
int was_queuing;
int now_queuing;
- starget = ahd->platform_data->starget[devinfo->target];
- targ = scsi_transport_target_data(starget);
- BUG_ON(targ == NULL);
- sdev = targ->sdev[devinfo->lun];
if (sdev == NULL)
return;
tags = ahd_linux_user_tagdepth(ahd, &devinfo);
if (tags != 0 && sdev->tagged_supported != 0) {
- ahd_set_tags(ahd, &devinfo, AHD_QUEUE_TAGGED);
+ ahd_platform_set_tags(ahd, sdev, &devinfo, AHD_QUEUE_TAGGED);
+ ahd_send_async(ahd, devinfo.channel, devinfo.target,
+ devinfo.lun, AC_TRANSFER_NEG);
ahd_print_devinfo(ahd, &devinfo);
printf("Tagged Queuing enabled. Depth %d\n", tags);
} else {
- ahd_set_tags(ahd, &devinfo, AHD_QUEUE_NONE);
+ ahd_platform_set_tags(ahd, sdev, &devinfo, AHD_QUEUE_NONE);
+ ahd_send_async(ahd, devinfo.channel, devinfo.target,
+ devinfo.lun, AC_TRANSFER_NEG);
}
}
void
ahd_send_async(struct ahd_softc *ahd, char channel,
- u_int target, u_int lun, ac_code code, void *arg)
+ u_int target, u_int lun, ac_code code)
{
switch (code) {
case AC_TRANSFER_NEG:
}
ahd_set_transaction_status(scb, CAM_REQUEUE_REQ);
ahd_set_scsi_status(scb, SCSI_STATUS_OK);
- ahd_platform_set_tags(ahd, &devinfo,
+ ahd_platform_set_tags(ahd, sdev, &devinfo,
(dev->flags & AHD_DEV_Q_BASIC)
? AHD_QUEUE_BASIC : AHD_QUEUE_TAGGED);
break;
* as if the target returned BUSY SCSI status.
*/
dev->openings = 1;
- ahd_platform_set_tags(ahd, &devinfo,
+ ahd_platform_set_tags(ahd, sdev, &devinfo,
(dev->flags & AHD_DEV_Q_BASIC)
? AHD_QUEUE_BASIC : AHD_QUEUE_TAGGED);
ahd_set_scsi_status(scb, SCSI_STATUS_BUSY);
if (!ahd_linux_transport_template)
return -ENODEV;
- scsi_transport_reserve_target(ahd_linux_transport_template,
- sizeof(struct ahd_linux_target));
scsi_transport_reserve_device(ahd_linux_transport_template,
sizeof(struct ahd_linux_device));
AHD_DEV_PERIODIC_OTAG = 0x40, /* Send OTAG to prevent starvation */
} ahd_linux_dev_flags;
-struct ahd_linux_target;
struct ahd_linux_device {
TAILQ_ENTRY(ahd_linux_device) links;
#define AHD_OTAG_THRESH 500
};
-struct ahd_linux_target {
- struct scsi_device *sdev[AHD_NUM_LUNS];
- struct ahd_transinfo last_tinfo;
- struct ahd_softc *ahd;
-};
-
/********************* Definitions Required by the Core ***********************/
/*
* Number of SG segments we require. So long as the S/G segments for
}
}
-void ahd_platform_set_tags(struct ahd_softc *ahd,
+void ahd_platform_set_tags(struct ahd_softc *ahd, struct scsi_device *sdev,
struct ahd_devinfo *devinfo, ahd_queue_alg);
int ahd_platform_abort_scbs(struct ahd_softc *ahd, int target,
char channel, int lun, u_int tag,
ahd_linux_isr(int irq, void *dev_id, struct pt_regs * regs);
void ahd_done(struct ahd_softc*, struct scb*);
void ahd_send_async(struct ahd_softc *, char channel,
- u_int target, u_int lun, ac_code, void *);
+ u_int target, u_int lun, ac_code);
void ahd_print_path(struct ahd_softc *, struct scb *);
#ifdef CONFIG_PCI
int error;
error = request_irq(ahd->dev_softc->irq, ahd_linux_isr,
- SA_SHIRQ, "aic79xx", ahd);
+ IRQF_SHARED, "aic79xx", ahd);
if (!error)
ahd->platform_data->irq = ahd->dev_softc->irq;
static void ahd_dump_target_state(struct ahd_softc *ahd,
struct info_str *info,
u_int our_id, char channel,
- u_int target_id, u_int target_offset);
+ u_int target_id);
static void ahd_dump_device_state(struct info_str *info,
struct scsi_device *sdev);
static int ahd_proc_write_seeprom(struct ahd_softc *ahd,
static void
ahd_dump_target_state(struct ahd_softc *ahd, struct info_str *info,
- u_int our_id, char channel, u_int target_id,
- u_int target_offset)
+ u_int our_id, char channel, u_int target_id)
{
- struct ahd_linux_target *targ;
struct scsi_target *starget;
struct ahd_initiator_tinfo *tinfo;
struct ahd_tmode_tstate *tstate;
copy_info(info, "Target %d Negotiation Settings\n", target_id);
copy_info(info, "\tUser: ");
ahd_format_transinfo(info, &tinfo->user);
- starget = ahd->platform_data->starget[target_offset];
+ starget = ahd->platform_data->starget[target_id];
if (starget == NULL)
return;
- targ = scsi_transport_target_data(starget);
copy_info(info, "\tGoal: ");
ahd_format_transinfo(info, &tinfo->goal);
for (lun = 0; lun < AHD_NUM_LUNS; lun++) {
struct scsi_device *dev;
- dev = targ->sdev[lun];
+ dev = scsi_device_lookup_by_target(starget, lun);
if (dev == NULL)
continue;
copy_info(&info, "Allocated SCBs: %d, SG List Length: %d\n\n",
ahd->scb_data.numscbs, AHD_NSEG);
- max_targ = 15;
+ max_targ = 16;
if (ahd->seep_config == NULL)
copy_info(&info, "No Serial EEPROM\n");
copy_info(&info, "\n");
if ((ahd->features & AHD_WIDE) == 0)
- max_targ = 7;
+ max_targ = 8;
- for (i = 0; i <= max_targ; i++) {
+ for (i = 0; i < max_targ; i++) {
ahd_dump_target_state(ahd, &info, ahd->our_id, 'A',
- /*target_id*/i, /*target_offset*/i);
+ /*target_id*/i);
}
retval = info.pos > info.offset ? info.pos - info.offset : 0;
done:
int error;
error = request_irq(ahc->dev_softc->irq, ahc_linux_isr,
- SA_SHIRQ, "aic7xxx", ahc);
+ IRQF_SHARED, "aic7xxx", ahc);
if (error == 0)
ahc->platform_data->irq = ahc->dev_softc->irq;
}
else
{
- result = (request_irq(p->irq, do_aic7xxx_isr, SA_SHIRQ,
+ result = (request_irq(p->irq, do_aic7xxx_isr, IRQF_SHARED,
"aic7xxx", p));
if (result < 0)
{
- result = (request_irq(p->irq, do_aic7xxx_isr, SA_INTERRUPT | SA_SHIRQ,
+ result = (request_irq(p->irq, do_aic7xxx_isr, IRQF_DISABLED | IRQF_SHARED,
"aic7xxx", p));
}
}
if (!request_region(host->io_port, 2048, "acornscsi(ram)"))
goto err_5;
- ret = request_irq(host->irq, acornscsi_intr, SA_INTERRUPT, "acornscsi", ashost);
+ ret = request_irq(host->irq, acornscsi_intr, IRQF_DISABLED, "acornscsi", ashost);
if (ret) {
printk(KERN_CRIT "scsi%d: IRQ%d not free: %d\n",
host->host_no, ashost->scsi.irq, ret);
((struct NCR5380_hostdata *)host->hostdata)->ctrl = 0;
outb(0x00, host->io_port - 577);
- ret = request_irq(host->irq, cumanascsi_intr, SA_INTERRUPT,
+ ret = request_irq(host->irq, cumanascsi_intr, IRQF_DISABLED,
"CumanaSCSI-1", host);
if (ret) {
printk("scsi%d: IRQ%d not free: %d\n",
goto out_free;
ret = request_irq(ec->irq, cumanascsi_2_intr,
- SA_INTERRUPT, "cumanascsi2", info);
+ IRQF_DISABLED, "cumanascsi2", info);
if (ret) {
printk("scsi%d: IRQ%d not free: %d\n",
host->host_no, ec->irq, ret);
goto out_free;
ret = request_irq(ec->irq, powertecscsi_intr,
- SA_INTERRUPT, "powertec", info);
+ IRQF_DISABLED, "powertec", info);
if (ret) {
printk("scsi%d: IRQ%d not free: %d\n",
host->host_no, ec->irq, ret);
unsigned int base_io, tmport, error,n;
unsigned char host_id;
struct Scsi_Host *shpnt = NULL;
- struct atp_unit atp_dev, *p;
+ struct atp_unit *atpdev, *p;
unsigned char setupdata[2][16];
int count = 0;
-
+
+ atpdev = kzalloc(sizeof(*atpdev), GFP_KERNEL);
+ if (!atpdev)
+ return -ENOMEM;
+
if (pci_enable_device(pdev))
- return -EIO;
+ goto err_eio;
if (!pci_set_dma_mask(pdev, DMA_32BIT_MASK)) {
printk(KERN_INFO "atp870u: use 32bit DMA mask.\n");
} else {
printk(KERN_ERR "atp870u: DMA mask required but not available.\n");
- return -EIO;
+ goto err_eio;
}
- memset(&atp_dev, 0, sizeof atp_dev);
/*
* It's probably easier to weed out some revisions like
* this than via the PCI device table
*/
if (ent->device == PCI_DEVICE_ID_ARTOP_AEC7610) {
- error = pci_read_config_byte(pdev, PCI_CLASS_REVISION, &atp_dev.chip_ver);
- if (atp_dev.chip_ver < 2)
- return -EIO;
+ error = pci_read_config_byte(pdev, PCI_CLASS_REVISION, &atpdev->chip_ver);
+ if (atpdev->chip_ver < 2)
+ goto err_eio;
}
switch (ent->device) {
case ATP880_DEVID1:
case ATP880_DEVID2:
case ATP885_DEVID:
- atp_dev.chip_ver = 0x04;
+ atpdev->chip_ver = 0x04;
default:
break;
}
base_io = pci_resource_start(pdev, 0);
base_io &= 0xfffffff8;
-
+
if ((ent->device == ATP880_DEVID1)||(ent->device == ATP880_DEVID2)) {
- error = pci_read_config_byte(pdev, PCI_CLASS_REVISION, &atp_dev.chip_ver);
+ error = pci_read_config_byte(pdev, PCI_CLASS_REVISION, &atpdev->chip_ver);
pci_write_config_byte(pdev, PCI_LATENCY_TIMER, 0x80);//JCC082803
host_id = inb(base_io + 0x39);
printk(KERN_INFO " ACARD AEC-67160 PCI Ultra3 LVD Host Adapter: %d"
" IO:%x, IRQ:%d.\n", count, base_io, pdev->irq);
- atp_dev.ioport[0] = base_io + 0x40;
- atp_dev.pciport[0] = base_io + 0x28;
- atp_dev.dev_id = ent->device;
- atp_dev.host_id[0] = host_id;
+ atpdev->ioport[0] = base_io + 0x40;
+ atpdev->pciport[0] = base_io + 0x28;
+ atpdev->dev_id = ent->device;
+ atpdev->host_id[0] = host_id;
tmport = base_io + 0x22;
- atp_dev.scam_on = inb(tmport);
+ atpdev->scam_on = inb(tmport);
tmport += 0x13;
- atp_dev.global_map[0] = inb(tmport);
+ atpdev->global_map[0] = inb(tmport);
tmport += 0x07;
- atp_dev.ultra_map[0] = inw(tmport);
+ atpdev->ultra_map[0] = inw(tmport);
n = 0x3f09;
next_fblk_880:
if (inb(base_io + 0x30) == 0xff)
goto flash_ok_880;
- atp_dev.sp[0][m++] = inb(base_io + 0x30);
- atp_dev.sp[0][m++] = inb(base_io + 0x31);
- atp_dev.sp[0][m++] = inb(base_io + 0x32);
- atp_dev.sp[0][m++] = inb(base_io + 0x33);
+ atpdev->sp[0][m++] = inb(base_io + 0x30);
+ atpdev->sp[0][m++] = inb(base_io + 0x31);
+ atpdev->sp[0][m++] = inb(base_io + 0x32);
+ atpdev->sp[0][m++] = inb(base_io + 0x33);
outw(n, base_io + 0x34);
n += 0x0002;
- atp_dev.sp[0][m++] = inb(base_io + 0x30);
- atp_dev.sp[0][m++] = inb(base_io + 0x31);
- atp_dev.sp[0][m++] = inb(base_io + 0x32);
- atp_dev.sp[0][m++] = inb(base_io + 0x33);
+ atpdev->sp[0][m++] = inb(base_io + 0x30);
+ atpdev->sp[0][m++] = inb(base_io + 0x31);
+ atpdev->sp[0][m++] = inb(base_io + 0x32);
+ atpdev->sp[0][m++] = inb(base_io + 0x33);
outw(n, base_io + 0x34);
n += 0x0002;
- atp_dev.sp[0][m++] = inb(base_io + 0x30);
- atp_dev.sp[0][m++] = inb(base_io + 0x31);
- atp_dev.sp[0][m++] = inb(base_io + 0x32);
- atp_dev.sp[0][m++] = inb(base_io + 0x33);
+ atpdev->sp[0][m++] = inb(base_io + 0x30);
+ atpdev->sp[0][m++] = inb(base_io + 0x31);
+ atpdev->sp[0][m++] = inb(base_io + 0x32);
+ atpdev->sp[0][m++] = inb(base_io + 0x33);
outw(n, base_io + 0x34);
n += 0x0002;
- atp_dev.sp[0][m++] = inb(base_io + 0x30);
- atp_dev.sp[0][m++] = inb(base_io + 0x31);
- atp_dev.sp[0][m++] = inb(base_io + 0x32);
- atp_dev.sp[0][m++] = inb(base_io + 0x33);
+ atpdev->sp[0][m++] = inb(base_io + 0x30);
+ atpdev->sp[0][m++] = inb(base_io + 0x31);
+ atpdev->sp[0][m++] = inb(base_io + 0x32);
+ atpdev->sp[0][m++] = inb(base_io + 0x33);
n += 0x0018;
goto next_fblk_880;
flash_ok_880:
outw(0, base_io + 0x34);
- atp_dev.ultra_map[0] = 0;
- atp_dev.async[0] = 0;
+ atpdev->ultra_map[0] = 0;
+ atpdev->async[0] = 0;
for (k = 0; k < 16; k++) {
n = 1;
n = n << k;
- if (atp_dev.sp[0][k] > 1) {
- atp_dev.ultra_map[0] |= n;
+ if (atpdev->sp[0][k] > 1) {
+ atpdev->ultra_map[0] |= n;
} else {
- if (atp_dev.sp[0][k] == 0)
- atp_dev.async[0] |= n;
+ if (atpdev->sp[0][k] == 0)
+ atpdev->async[0] |= n;
}
}
- atp_dev.async[0] = ~(atp_dev.async[0]);
- outb(atp_dev.global_map[0], base_io + 0x35);
+ atpdev->async[0] = ~(atpdev->async[0]);
+ outb(atpdev->global_map[0], base_io + 0x35);
shpnt = scsi_host_alloc(&atp870u_template, sizeof(struct atp_unit));
if (!shpnt)
- return -ENOMEM;
+ goto err_nomem;
p = (struct atp_unit *)&shpnt->hostdata;
- atp_dev.host = shpnt;
- atp_dev.pdev = pdev;
+ atpdev->host = shpnt;
+ atpdev->pdev = pdev;
pci_set_drvdata(pdev, p);
- memcpy(p, &atp_dev, sizeof atp_dev);
+ memcpy(p, atpdev, sizeof(*atpdev));
if (atp870u_init_tables(shpnt) < 0) {
printk(KERN_ERR "Unable to allocate tables for Acard controller\n");
goto unregister;
}
- if (request_irq(pdev->irq, atp870u_intr_handle, SA_SHIRQ, "atp880i", shpnt)) {
+ if (request_irq(pdev->irq, atp870u_intr_handle, IRQF_SHARED, "atp880i", shpnt)) {
printk(KERN_ERR "Unable to allocate IRQ%d for Acard controller.\n", pdev->irq);
goto free_tables;
}
printk(KERN_INFO " ACARD AEC-67162 PCI Ultra3 LVD Host Adapter: IO:%x, IRQ:%d.\n"
, base_io, pdev->irq);
- atp_dev.pdev = pdev;
- atp_dev.dev_id = ent->device;
- atp_dev.baseport = base_io;
- atp_dev.ioport[0] = base_io + 0x80;
- atp_dev.ioport[1] = base_io + 0xc0;
- atp_dev.pciport[0] = base_io + 0x40;
- atp_dev.pciport[1] = base_io + 0x50;
+ atpdev->pdev = pdev;
+ atpdev->dev_id = ent->device;
+ atpdev->baseport = base_io;
+ atpdev->ioport[0] = base_io + 0x80;
+ atpdev->ioport[1] = base_io + 0xc0;
+ atpdev->pciport[0] = base_io + 0x40;
+ atpdev->pciport[1] = base_io + 0x50;
shpnt = scsi_host_alloc(&atp870u_template, sizeof(struct atp_unit));
if (!shpnt)
- return -ENOMEM;
+ goto err_nomem;
p = (struct atp_unit *)&shpnt->hostdata;
- atp_dev.host = shpnt;
- atp_dev.pdev = pdev;
+ atpdev->host = shpnt;
+ atpdev->pdev = pdev;
pci_set_drvdata(pdev, p);
- memcpy(p, &atp_dev, sizeof(struct atp_unit));
+ memcpy(p, atpdev, sizeof(struct atp_unit));
if (atp870u_init_tables(shpnt) < 0)
goto unregister;
#ifdef ED_DBGP
printk("request_irq() shpnt %p hostdata %p\n", shpnt, p);
#endif
- if (request_irq(pdev->irq, atp870u_intr_handle, SA_SHIRQ, "atp870u", shpnt)) {
+ if (request_irq(pdev->irq, atp870u_intr_handle, IRQF_SHARED, "atp870u", shpnt)) {
printk(KERN_ERR "Unable to allocate IRQ for Acard controller.\n");
goto free_tables;
}
printk(KERN_INFO " ACARD AEC-671X PCI Ultra/W SCSI-2/3 Host Adapter: %d "
"IO:%x, IRQ:%d.\n", count, base_io, pdev->irq);
- atp_dev.ioport[0] = base_io;
- atp_dev.pciport[0] = base_io + 0x20;
- atp_dev.dev_id = ent->device;
+ atpdev->ioport[0] = base_io;
+ atpdev->pciport[0] = base_io + 0x20;
+ atpdev->dev_id = ent->device;
host_id &= 0x07;
- atp_dev.host_id[0] = host_id;
+ atpdev->host_id[0] = host_id;
tmport = base_io + 0x22;
- atp_dev.scam_on = inb(tmport);
+ atpdev->scam_on = inb(tmport);
tmport += 0x0b;
- atp_dev.global_map[0] = inb(tmport++);
- atp_dev.ultra_map[0] = inw(tmport);
+ atpdev->global_map[0] = inb(tmport++);
+ atpdev->ultra_map[0] = inw(tmport);
- if (atp_dev.ultra_map[0] == 0) {
- atp_dev.scam_on = 0x00;
- atp_dev.global_map[0] = 0x20;
- atp_dev.ultra_map[0] = 0xffff;
+ if (atpdev->ultra_map[0] == 0) {
+ atpdev->scam_on = 0x00;
+ atpdev->global_map[0] = 0x20;
+ atpdev->ultra_map[0] = 0xffff;
}
shpnt = scsi_host_alloc(&atp870u_template, sizeof(struct atp_unit));
if (!shpnt)
- return -ENOMEM;
+ goto err_nomem;
p = (struct atp_unit *)&shpnt->hostdata;
- atp_dev.host = shpnt;
- atp_dev.pdev = pdev;
+ atpdev->host = shpnt;
+ atpdev->pdev = pdev;
pci_set_drvdata(pdev, p);
- memcpy(p, &atp_dev, sizeof atp_dev);
+ memcpy(p, atpdev, sizeof(*atpdev));
if (atp870u_init_tables(shpnt) < 0)
goto unregister;
- if (request_irq(pdev->irq, atp870u_intr_handle, SA_SHIRQ, "atp870i", shpnt)) {
+ if (request_irq(pdev->irq, atp870u_intr_handle, IRQF_SHARED, "atp870i", shpnt)) {
printk(KERN_ERR "Unable to allocate IRQ%d for Acard controller.\n", pdev->irq);
goto free_tables;
}
spin_lock_irqsave(shpnt->host_lock, flags);
- if (atp_dev.chip_ver > 0x07) { /* check if atp876 chip then enable terminator */
+ if (atpdev->chip_ver > 0x07) { /* check if atp876 chip then enable terminator */
tmport = base_io + 0x3e;
outb(0x00, tmport);
}
outb((inb(tmport) & 0xef), tmport);
tmport++;
outb((inb(tmport) | 0x20), tmport);
- if (atp_dev.chip_ver == 4)
+ if (atpdev->chip_ver == 4)
shpnt->max_id = 16;
else
shpnt->max_id = 8;
printk("atp870u_prob:unregister\n");
scsi_host_put(shpnt);
return -1;
+err_eio:
+ kfree(atpdev);
+ return -EIO;
+err_nomem:
+ kfree(atpdev);
+ return -ENOMEM;
}
/* The abort command does not leave the device in a clean state where
esp->irq = IRQ_AMIGA_PORTS;
esp->slot = board+REAL_BLZ1230_ESP_ADDR;
- if (request_irq(IRQ_AMIGA_PORTS, esp_intr, SA_SHIRQ,
+ if (request_irq(IRQ_AMIGA_PORTS, esp_intr, IRQF_SHARED,
"Blizzard 1230 SCSI IV", esp->ehost))
goto err_out;
esp->esp_command_dvma = virt_to_bus((void *)cmd_buffer);
esp->irq = IRQ_AMIGA_PORTS;
- request_irq(IRQ_AMIGA_PORTS, esp_intr, SA_SHIRQ,
+ request_irq(IRQ_AMIGA_PORTS, esp_intr, IRQF_SHARED,
"Blizzard 2060 SCSI", esp->ehost);
/* Figure out our scsi ID on the bus */
esp->esp_command_dvma = virt_to_bus((void *)cmd_buffer);
esp->irq = IRQ_AMIGA_PORTS;
- request_irq(IRQ_AMIGA_PORTS, esp_intr, SA_SHIRQ,
+ request_irq(IRQ_AMIGA_PORTS, esp_intr, IRQF_SHARED,
"CyberStorm SCSI", esp->ehost);
/* Figure out our scsi ID on the bus */
/* The DMA cond flag contains a hardcoded jumper bit
esp->esp_command_dvma = virt_to_bus((void *)cmd_buffer);
esp->irq = IRQ_AMIGA_PORTS;
- request_irq(IRQ_AMIGA_PORTS, esp_intr, SA_SHIRQ,
+ request_irq(IRQ_AMIGA_PORTS, esp_intr, IRQF_SHARED,
"CyberStorm SCSI Mk II", esp->ehost);
/* Figure out our scsi ID on the bus */
acb->io_port_base = io_port;
acb->io_port_len = io_port_len;
- if (request_irq(irq, dc395x_interrupt, SA_SHIRQ, DC395X_NAME, acb)) {
+ if (request_irq(irq, dc395x_interrupt, IRQF_SHARED, DC395X_NAME, acb)) {
/* release the region we just claimed */
dprintkl(KERN_INFO, "Failed to register IRQ\n");
goto failed;
esp_initialize(esp);
- if (request_irq(esp->irq, esp_intr, SA_INTERRUPT,
+ if (request_irq(esp->irq, esp_intr, IRQF_DISABLED,
"ncr53c94", esp->ehost))
goto err_dealloc;
if (request_irq(dec_interrupt[DEC_IRQ_ASC_MERR],
- scsi_dma_merr_int, SA_INTERRUPT,
+ scsi_dma_merr_int, IRQF_DISABLED,
"ncr53c94 error", esp->ehost))
goto err_free_irq;
if (request_irq(dec_interrupt[DEC_IRQ_ASC_ERR],
- scsi_dma_err_int, SA_INTERRUPT,
+ scsi_dma_err_int, IRQF_DISABLED,
"ncr53c94 overrun", esp->ehost))
goto err_free_irq_merr;
if (request_irq(dec_interrupt[DEC_IRQ_ASC_DMA],
- scsi_dma_int, SA_INTERRUPT,
+ scsi_dma_int, IRQF_DISABLED,
"ncr53c94 dma", esp->ehost))
goto err_free_irq_err;
esp->dma_mmu_release_scsi_sgl = 0;
esp->dma_advance_sg = 0;
- if (request_irq(esp->irq, esp_intr, SA_INTERRUPT,
+ if (request_irq(esp->irq, esp_intr, IRQF_DISABLED,
"PMAZ_AA", esp->ehost)) {
esp_deallocate(esp);
release_tc_card(slot);
NCR5380_init(shost, FLAG_NO_PSEUDO_DMA | FLAG_DTC3181E);
- if (request_irq(pdev->irq, NCR5380_intr, SA_SHIRQ,
+ if (request_irq(pdev->irq, NCR5380_intr, IRQF_SHARED,
DMX3191D_DRIVER_NAME, shost)) {
/*
* Steam powered scsi controllers run without an IRQ anyway
printk(KERN_INFO" BAR1 %p - size= %x\n",msg_addr_virt,hba_map1_area_size);
}
- if (request_irq (pDev->irq, adpt_isr, SA_SHIRQ, pHba->name, pHba)) {
+ if (request_irq (pDev->irq, adpt_isr, IRQF_SHARED, pHba->name, pHba)) {
printk(KERN_ERR"%s: Couldn't register IRQ %d\n", pHba->name, pDev->irq);
adpt_i2o_delete_hba(pHba);
return -EINVAL;
/* With interrupts enabled, it will sometimes hang when doing heavy
* reads. So better not enable them until I finger it out. */
if (instance->irq != SCSI_IRQ_NONE)
- if (request_irq(instance->irq, dtc_intr, SA_INTERRUPT, "dtc", instance)) {
+ if (request_irq(instance->irq, dtc_intr, IRQF_DISABLED, "dtc", instance)) {
printk(KERN_ERR "scsi%d : IRQ%d not free, interrupts disabled\n", instance->host_no, instance->irq);
instance->irq = SCSI_IRQ_NONE;
}
/* Board detected, allocate its IRQ */
if (request_irq(irq, do_interrupt_handler,
- SA_INTERRUPT | ((subversion == ESA) ? SA_SHIRQ : 0),
+ IRQF_DISABLED | ((subversion == ESA) ? IRQF_SHARED : 0),
driver_name, (void *)&sha[j])) {
printk("%s: unable to allocate IRQ %u, detaching.\n", name,
irq);
return 0;
if (!reg_IRQ[gc->IRQ]) { /* Interrupt already registered ? */
- if (!request_irq(gc->IRQ, do_eata_pio_int_handler, SA_INTERRUPT, "EATA-PIO", sh)) {
+ if (!request_irq(gc->IRQ, do_eata_pio_int_handler, IRQF_DISABLED, "EATA-PIO", sh)) {
reg_IRQ[gc->IRQ]++;
if (!gc->IRQ_TR)
reg_IRQL[gc->IRQ] = 1; /* IRQ is edge triggered */
for (i = 0; i <= MAXIRQ; i++)
if (reg_IRQ[i])
- request_irq(i, do_eata_pio_int_handler, SA_INTERRUPT, "EATA-PIO", NULL);
+ request_irq(i, do_eata_pio_int_handler, IRQF_DISABLED, "EATA-PIO", NULL);
HBA_ptr = first_HBA;
* sanely maintain.
*/
if (request_irq(esp->ehost->irq, esp_intr,
- SA_SHIRQ, "ESP SCSI", esp)) {
+ IRQF_SHARED, "ESP SCSI", esp)) {
printk("esp%d: Cannot acquire irq line\n",
esp->esp_id);
return -1;
esp->irq = IRQ_AMIGA_PORTS;
esp->slot = board+FASTLANE_ESP_ADDR;
- if (request_irq(IRQ_AMIGA_PORTS, esp_intr, SA_SHIRQ,
+ if (request_irq(IRQ_AMIGA_PORTS, esp_intr, IRQF_SHARED,
"Fastlane SCSI", esp->ehost)) {
printk(KERN_WARNING "Fastlane: Could not get IRQ%d, aborting.\n", IRQ_AMIGA_PORTS);
goto err_unmap;
mca_set_adapter_name(slot - 1, fd_mcs_adapters[loop].name);
/* check irq/region */
- if (request_irq(irq, fd_mcs_intr, SA_SHIRQ, "fd_mcs", hosts)) {
+ if (request_irq(irq, fd_mcs_intr, IRQF_SHARED, "fd_mcs", hosts)) {
printk(KERN_ERR "fd_mcs: interrupt is not available, skipping...\n");
continue;
}
/* Register the IRQ with the kernel */
retcode = request_irq( interrupt_level,
- do_fdomain_16x0_intr, pdev?SA_SHIRQ:0, "fdomain", shpnt);
+ do_fdomain_16x0_intr, pdev?IRQF_SHARED:0, "fdomain", shpnt);
if (retcode < 0) {
if (retcode == -EINVAL) {
instance->irq = NCR5380_probe_irq(instance, 0xffff);
if (instance->irq != SCSI_IRQ_NONE)
- if (request_irq(instance->irq, generic_NCR5380_intr, SA_INTERRUPT, "NCR5380", instance)) {
+ if (request_irq(instance->irq, generic_NCR5380_intr, IRQF_DISABLED, "NCR5380", instance)) {
printk(KERN_WARNING "scsi%d : IRQ%d not free, interrupts disabled\n", instance->host_no, instance->irq);
instance->irq = SCSI_IRQ_NONE;
}
printk("Configuring GDT-ISA HA at BIOS 0x%05X IRQ %u DRQ %u\n",
isa_bios,ha->irq,ha->drq);
- if (request_irq(ha->irq,gdth_interrupt,SA_INTERRUPT,"gdth",ha)) {
+ if (request_irq(ha->irq,gdth_interrupt,IRQF_DISABLED,"gdth",ha)) {
printk("GDT-ISA: Unable to allocate IRQ\n");
scsi_unregister(shp);
continue;
printk("Configuring GDT-EISA HA at Slot %d IRQ %u\n",
eisa_slot>>12,ha->irq);
- if (request_irq(ha->irq,gdth_interrupt,SA_INTERRUPT,"gdth",ha)) {
+ if (request_irq(ha->irq,gdth_interrupt,IRQF_DISABLED,"gdth",ha)) {
printk("GDT-EISA: Unable to allocate IRQ\n");
scsi_unregister(shp);
continue;
pcistr[ctr].bus,PCI_SLOT(pcistr[ctr].device_fn),ha->irq);
if (request_irq(ha->irq, gdth_interrupt,
- SA_INTERRUPT|SA_SHIRQ, "gdth", ha))
+ IRQF_DISABLED|IRQF_SHARED, "gdth", ha))
{
printk("GDT-PCI: Unable to allocate IRQ\n");
scsi_unregister(shp);
(epc & GVP_SCSICLKMASK) ? WD33C93_FS_8_10
: WD33C93_FS_12_15);
- request_irq(IRQ_AMIGA_PORTS, gvp11_intr, SA_SHIRQ, "GVP11 SCSI",
+ request_irq(IRQ_AMIGA_PORTS, gvp11_intr, IRQF_SHARED, "GVP11 SCSI",
instance);
DMA(instance)->CNTR = GVP11_DMAC_INT_ENABLE;
num_gvp11++;
pci_set_drvdata(pcidev, host);
- if (request_irq(pcidev->irq, hptiop_intr, SA_SHIRQ,
+ if (request_irq(pcidev->irq, hptiop_intr, IRQF_SHARED,
driver_name, hba)) {
printk(KERN_ERR "scsi%d: request irq %d failed\n",
hba->host->host_no, pcidev->irq);
#endif
/* get interrupt request level */
- if (request_irq(IM_IRQ, interrupt_handler, SA_SHIRQ, "ibmmcascsi", hosts)) {
+ if (request_irq(IM_IRQ, interrupt_handler, IRQF_SHARED, "ibmmcascsi", hosts)) {
printk(KERN_ERR "IBM MCA SCSI: Unable to get shared IRQ %d.\n", IM_IRQ);
return 0;
} else
/* IRQ11 is used by SCSI-2 F/W Adapter/A */
printk(KERN_DEBUG "IBM MCA SCSI: SCSI-2 F/W adapter needs IRQ 11.\n");
/* get interrupt request level */
- if (request_irq(IM_IRQ_FW, interrupt_handler, SA_SHIRQ, "ibmmcascsi", hosts)) {
+ if (request_irq(IM_IRQ_FW, interrupt_handler, IRQF_SHARED, "ibmmcascsi", hosts)) {
printk(KERN_ERR "IBM MCA SCSI: Unable to get shared IRQ %d.\n", IM_IRQ_FW);
} else
IRQ11_registered++;
/* IRQ11 is used by SCSI-2 F/W Adapter/A */
printk(KERN_DEBUG "IBM MCA SCSI: SCSI-2 F/W adapter needs IRQ 11.\n");
/* get interrupt request level */
- if (request_irq(IM_IRQ_FW, interrupt_handler, SA_SHIRQ, "ibmmcascsi", hosts))
+ if (request_irq(IM_IRQ_FW, interrupt_handler, IRQF_SHARED, "ibmmcascsi", hosts))
printk(KERN_ERR "IBM MCA SCSI: Unable to get shared IRQ %d.\n", IM_IRQ_FW);
else
IRQ11_registered++;
struct ibmvscsi_host_data *hostdata)
{
u64 *crq_as_u64 = (u64 *) &evt_struct->crq;
+ int request_status;
int rc;
/* If we have exhausted our request limit, just fail this request.
* (such as task management requests) that the mid layer may think we
* can handle more requests (can_queue) when we actually can't
*/
- if ((evt_struct->crq.format == VIOSRP_SRP_FORMAT) &&
- (atomic_dec_if_positive(&hostdata->request_limit) < 0))
- goto send_error;
+ if (evt_struct->crq.format == VIOSRP_SRP_FORMAT) {
+ request_status =
+ atomic_dec_if_positive(&hostdata->request_limit);
+ /* If request limit was -1 when we started, it is now even
+ * less than that
+ */
+ if (request_status < -1)
+ goto send_error;
+ /* Otherwise, if we have run out of requests */
+ else if (request_status < 0)
+ goto send_busy;
+ }
/* Copy the IU into the transfer area */
*evt_struct->xfer_iu = evt_struct->iu;
return 0;
- send_error:
+ send_busy:
unmap_cmd_data(&evt_struct->iu.srp.cmd, evt_struct, hostdata->dev);
free_event_struct(&hostdata->pool, evt_struct);
return SCSI_MLQUEUE_HOST_BUSY;
+
+ send_error:
+ unmap_cmd_data(&evt_struct->iu.srp.cmd, evt_struct, hostdata->dev);
+
+ if (evt_struct->cmnd != NULL) {
+ evt_struct->cmnd->result = DID_ERROR << 16;
+ evt_struct->cmnd_done(evt_struct->cmnd);
+ } else if (evt_struct->done)
+ evt_struct->done(evt_struct);
+
+ free_event_struct(&hostdata->pool, evt_struct);
+ return 0;
}
/**
return;
case 0xFF: /* Hypervisor telling us the connection is closed */
scsi_block_requests(hostdata->host);
+ atomic_set(&hostdata->request_limit, 0);
if (crq->format == 0x06) {
/* We need to re-setup the interpartition connection */
printk(KERN_INFO
"ibmvscsi: Re-enabling adapter!\n");
- atomic_set(&hostdata->request_limit, -1);
purge_requests(hostdata, DID_REQUEUE);
- if (ibmvscsi_reenable_crq_queue(&hostdata->queue,
- hostdata) == 0)
- if (ibmvscsi_send_crq(hostdata,
- 0xC001000000000000LL, 0))
+ if ((ibmvscsi_reenable_crq_queue(&hostdata->queue,
+ hostdata) == 0) ||
+ (ibmvscsi_send_crq(hostdata,
+ 0xC001000000000000LL, 0))) {
+ atomic_set(&hostdata->request_limit,
+ -1);
printk(KERN_ERR
- "ibmvscsi: transmit error after"
+ "ibmvscsi: error after"
" enable\n");
+ }
} else {
printk(KERN_INFO
"ibmvscsi: Virtual adapter failed rc %d!\n",
crq->format);
- atomic_set(&hostdata->request_limit, -1);
purge_requests(hostdata, DID_ERROR);
- ibmvscsi_reset_crq_queue(&hostdata->queue, hostdata);
+ if ((ibmvscsi_reset_crq_queue(&hostdata->queue,
+ hostdata)) ||
+ (ibmvscsi_send_crq(hostdata,
+ 0xC001000000000000LL, 0))) {
+ atomic_set(&hostdata->request_limit,
+ -1);
+ printk(KERN_ERR
+ "ibmvscsi: error after reset\n");
+ }
}
scsi_unblock_requests(hostdata->host);
return;
struct Scsi_Host *host;
struct device *dev = &vdev->dev;
unsigned long wait_switch = 0;
+ int rc;
vdev->dev.driver_data = NULL;
atomic_set(&hostdata->request_limit, -1);
hostdata->host->max_sectors = 32 * 8; /* default max I/O 32 pages */
- if (ibmvscsi_init_crq_queue(&hostdata->queue, hostdata,
- max_requests) != 0) {
+ rc = ibmvscsi_init_crq_queue(&hostdata->queue, hostdata, max_requests);
+ if (rc != 0 && rc != H_RESOURCE) {
printk(KERN_ERR "ibmvscsi: couldn't initialize crq\n");
goto init_crq_failed;
}
* to fail if the other end is not active. In that case we don't
* want to scan
*/
- if (ibmvscsi_send_crq(hostdata, 0xC001000000000000LL, 0) == 0) {
+ if (ibmvscsi_send_crq(hostdata, 0xC001000000000000LL, 0) == 0
+ || rc == H_RESOURCE) {
/*
* Wait around max init_timeout secs for the adapter to finish
* initializing. When we are done initializing, we will have a
int max_requests)
{
int rc;
+ int retrc;
struct vio_dev *vdev = to_vio_dev(hostdata->dev);
queue->msgs = (struct viosrp_crq *)get_zeroed_page(GFP_KERNEL);
gather_partition_info();
set_adapter_info(hostdata);
- rc = plpar_hcall_norets(H_REG_CRQ,
+ retrc = rc = plpar_hcall_norets(H_REG_CRQ,
vdev->unit_address,
queue->msg_token, PAGE_SIZE);
if (rc == H_RESOURCE)
tasklet_init(&hostdata->srp_task, (void *)ibmvscsi_task,
(unsigned long)hostdata);
- return 0;
+ return retrc;
req_irq_failed:
do {
write1_io(0, IO_FIFO_READ); /* start fifo out in read mode */
write1_io(0, IO_INTR_MASK); /* allow all ints */
x = int_tab[(switches & (SW_INT0 | SW_INT1)) >> SW_INT_SHIFT];
- if (request_irq(x, in2000_intr, SA_INTERRUPT, "in2000", instance)) {
+ if (request_irq(x, in2000_intr, IRQF_DISABLED, "in2000", instance)) {
printk("in2000_detect: Unable to allocate IRQ.\n");
detect_count--;
continue;
hreg->sg_tablesize = TOTAL_SG_ENTRY; /* Maximum support is 32 */
/* Initial tulip chip */
- ok = request_irq(pHCB->HCS_Intr, i91u_intr, SA_INTERRUPT | SA_SHIRQ, "i91u", hreg);
+ ok = request_irq(pHCB->HCS_Intr, i91u_intr, IRQF_DISABLED | IRQF_SHARED, "i91u", hreg);
if (ok < 0) {
printk(KERN_WARNING "i91u: unable to request IRQ %d\n\n", pHCB->HCS_Intr);
return 0;
ioa_cfg->needs_hard_reset = 1;
ipr_mask_and_clear_interrupts(ioa_cfg, ~IPR_PCII_IOA_TRANS_TO_OPER);
- rc = request_irq(pdev->irq, ipr_isr, SA_SHIRQ, IPR_NAME, ioa_cfg);
+ rc = request_irq(pdev->irq, ipr_isr, IRQF_SHARED, IPR_NAME, ioa_cfg);
if (rc) {
dev_err(&pdev->dev, "Couldn't register IRQ %d! rc=%d\n",
memcpy(ha, oldha, sizeof (ips_ha_t));
free_irq(oldha->irq, oldha);
/* Install the interrupt handler with the new ha */
- if (request_irq(ha->irq, do_ipsintr, SA_SHIRQ, ips_name, ha)) {
+ if (request_irq(ha->irq, do_ipsintr, IRQF_SHARED, ips_name, ha)) {
IPS_PRINTK(KERN_WARNING, ha->pcidev,
"Unable to install interrupt handler\n");
scsi_host_put(sh);
}
/* Install the interrupt handler */
- if (request_irq(ha->irq, do_ipsintr, SA_SHIRQ, ips_name, ha)) {
+ if (request_irq(ha->irq, do_ipsintr, IRQF_SHARED, ips_name, ha)) {
IPS_PRINTK(KERN_WARNING, ha->pcidev,
"Unable to install interrupt handler\n");
return ips_abort_init(ha, index);
static int
iscsi_conn_set_param(struct iscsi_cls_conn *cls_conn, enum iscsi_param param,
- uint32_t value)
+ char *buf, int buflen)
{
struct iscsi_conn *conn = cls_conn->dd_data;
struct iscsi_session *session = conn->session;
struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
+ int value;
switch(param) {
case ISCSI_PARAM_MAX_RECV_DLENGTH: {
char *saveptr = tcp_conn->data;
gfp_t flags = GFP_KERNEL;
+ sscanf(buf, "%d", &value);
if (tcp_conn->data_size >= value) {
- conn->max_recv_dlength = value;
+ iscsi_set_param(cls_conn, param, buf, buflen);
break;
}
else
free_pages((unsigned long)saveptr,
get_order(tcp_conn->data_size));
- conn->max_recv_dlength = value;
+ iscsi_set_param(cls_conn, param, buf, buflen);
tcp_conn->data_size = value;
- }
- break;
- case ISCSI_PARAM_MAX_XMIT_DLENGTH:
- conn->max_xmit_dlength = value;
break;
+ }
case ISCSI_PARAM_HDRDGST_EN:
- conn->hdrdgst_en = value;
+ iscsi_set_param(cls_conn, param, buf, buflen);
tcp_conn->hdr_size = sizeof(struct iscsi_hdr);
if (conn->hdrdgst_en) {
tcp_conn->hdr_size += sizeof(__u32);
}
break;
case ISCSI_PARAM_DATADGST_EN:
- conn->datadgst_en = value;
+ iscsi_set_param(cls_conn, param, buf, buflen);
if (conn->datadgst_en) {
if (!tcp_conn->data_tx_tfm)
tcp_conn->data_tx_tfm =
tcp_conn->sendpage = conn->datadgst_en ?
sock_no_sendpage : tcp_conn->sock->ops->sendpage;
break;
- case ISCSI_PARAM_INITIAL_R2T_EN:
- session->initial_r2t_en = value;
- break;
case ISCSI_PARAM_MAX_R2T:
+ sscanf(buf, "%d", &value);
if (session->max_r2t == roundup_pow_of_two(value))
break;
iscsi_r2tpool_free(session);
- session->max_r2t = value;
+ iscsi_set_param(cls_conn, param, buf, buflen);
if (session->max_r2t & (session->max_r2t - 1))
session->max_r2t = roundup_pow_of_two(session->max_r2t);
if (iscsi_r2tpool_alloc(session))
return -ENOMEM;
break;
- case ISCSI_PARAM_IMM_DATA_EN:
- session->imm_data_en = value;
- break;
- case ISCSI_PARAM_FIRST_BURST:
- session->first_burst = value;
- break;
- case ISCSI_PARAM_MAX_BURST:
- session->max_burst = value;
- break;
- case ISCSI_PARAM_PDU_INORDER_EN:
- session->pdu_inorder_en = value;
- break;
- case ISCSI_PARAM_DATASEQ_INORDER_EN:
- session->dataseq_inorder_en = value;
- break;
- case ISCSI_PARAM_ERL:
- session->erl = value;
- break;
- case ISCSI_PARAM_IFMARKER_EN:
- BUG_ON(value);
- session->ifmarker_en = value;
- break;
- case ISCSI_PARAM_OFMARKER_EN:
- BUG_ON(value);
- session->ofmarker_en = value;
- break;
- case ISCSI_PARAM_EXP_STATSN:
- conn->exp_statsn = value;
- break;
- default:
- break;
- }
-
- return 0;
-}
-
-static int
-iscsi_session_get_param(struct iscsi_cls_session *cls_session,
- enum iscsi_param param, uint32_t *value)
-{
- struct Scsi_Host *shost = iscsi_session_to_shost(cls_session);
- struct iscsi_session *session = iscsi_hostdata(shost->hostdata);
-
- switch(param) {
- case ISCSI_PARAM_INITIAL_R2T_EN:
- *value = session->initial_r2t_en;
- break;
- case ISCSI_PARAM_MAX_R2T:
- *value = session->max_r2t;
- break;
- case ISCSI_PARAM_IMM_DATA_EN:
- *value = session->imm_data_en;
- break;
- case ISCSI_PARAM_FIRST_BURST:
- *value = session->first_burst;
- break;
- case ISCSI_PARAM_MAX_BURST:
- *value = session->max_burst;
- break;
- case ISCSI_PARAM_PDU_INORDER_EN:
- *value = session->pdu_inorder_en;
- break;
- case ISCSI_PARAM_DATASEQ_INORDER_EN:
- *value = session->dataseq_inorder_en;
- break;
- case ISCSI_PARAM_ERL:
- *value = session->erl;
- break;
- case ISCSI_PARAM_IFMARKER_EN:
- *value = session->ifmarker_en;
- break;
- case ISCSI_PARAM_OFMARKER_EN:
- *value = session->ofmarker_en;
- break;
default:
- return -EINVAL;
+ return iscsi_set_param(cls_conn, param, buf, buflen);
}
return 0;
}
static int
-iscsi_conn_get_param(struct iscsi_cls_conn *cls_conn,
- enum iscsi_param param, uint32_t *value)
+iscsi_tcp_conn_get_param(struct iscsi_cls_conn *cls_conn,
+ enum iscsi_param param, char *buf)
{
struct iscsi_conn *conn = cls_conn->dd_data;
struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
struct inet_sock *inet;
+ struct ipv6_pinfo *np;
+ struct sock *sk;
+ int len;
switch(param) {
- case ISCSI_PARAM_MAX_RECV_DLENGTH:
- *value = conn->max_recv_dlength;
- break;
- case ISCSI_PARAM_MAX_XMIT_DLENGTH:
- *value = conn->max_xmit_dlength;
- break;
- case ISCSI_PARAM_HDRDGST_EN:
- *value = conn->hdrdgst_en;
- break;
- case ISCSI_PARAM_DATADGST_EN:
- *value = conn->datadgst_en;
- break;
case ISCSI_PARAM_CONN_PORT:
mutex_lock(&conn->xmitmutex);
if (!tcp_conn->sock) {
}
inet = inet_sk(tcp_conn->sock->sk);
- *value = be16_to_cpu(inet->dport);
+ len = sprintf(buf, "%hu\n", be16_to_cpu(inet->dport));
mutex_unlock(&conn->xmitmutex);
- case ISCSI_PARAM_EXP_STATSN:
- *value = conn->exp_statsn;
break;
- default:
- return -EINVAL;
- }
-
- return 0;
-}
-
-static int
-iscsi_conn_get_str_param(struct iscsi_cls_conn *cls_conn,
- enum iscsi_param param, char *buf)
-{
- struct iscsi_conn *conn = cls_conn->dd_data;
- struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
- struct sock *sk;
- struct inet_sock *inet;
- struct ipv6_pinfo *np;
- int len = 0;
-
- switch (param) {
case ISCSI_PARAM_CONN_ADDRESS:
mutex_lock(&conn->xmitmutex);
if (!tcp_conn->sock) {
mutex_unlock(&conn->xmitmutex);
break;
default:
- return -EINVAL;
+ return iscsi_conn_get_param(cls_conn, param, buf);
}
return len;
ISCSI_ERL |
ISCSI_CONN_PORT |
ISCSI_CONN_ADDRESS |
- ISCSI_EXP_STATSN,
+ ISCSI_EXP_STATSN |
+ ISCSI_PERSISTENT_PORT |
+ ISCSI_PERSISTENT_ADDRESS |
+ ISCSI_TARGET_NAME |
+ ISCSI_TPGT,
.host_template = &iscsi_sht,
.conndata_size = sizeof(struct iscsi_conn),
.max_conn = 1,
.bind_conn = iscsi_tcp_conn_bind,
.destroy_conn = iscsi_tcp_conn_destroy,
.set_param = iscsi_conn_set_param,
- .get_conn_param = iscsi_conn_get_param,
- .get_conn_str_param = iscsi_conn_get_str_param,
+ .get_conn_param = iscsi_tcp_conn_get_param,
.get_session_param = iscsi_session_get_param,
.start_conn = iscsi_conn_start,
.stop_conn = iscsi_conn_stop,
esp->esp_command_dvma = vdma_alloc(CPHYSADDR(cmd_buffer), sizeof (cmd_buffer));
esp->irq = JAZZ_SCSI_IRQ;
- request_irq(JAZZ_SCSI_IRQ, esp_intr, SA_INTERRUPT, "JAZZ SCSI",
+ request_irq(JAZZ_SCSI_IRQ, esp_intr, IRQF_DISABLED, "JAZZ SCSI",
esp->ehost);
/*
host->this_id = 7;
host->base = base;
host->irq = dev->irq;
- if(request_irq(dev->irq, NCR_700_intr, SA_SHIRQ, "lasi700", host)) {
+ if(request_irq(dev->irq, NCR_700_intr, IRQF_SHARED, "lasi700", host)) {
printk(KERN_ERR "lasi700: request_irq failed!\n");
goto out_put_host;
}
return NULL;
probe_ent->irq = pdev->irq;
- probe_ent->irq_flags = SA_SHIRQ;
+ probe_ent->irq_flags = IRQF_SHARED;
probe_ent->private_data = port[0]->private_data;
if (ports & ATA_PORT_PRIMARY) {
struct ata_queued_cmd *qc;
unsigned int tag, preempted_tag;
u32 preempted_sactive, preempted_qc_active;
- DECLARE_COMPLETION(wait);
+ DECLARE_COMPLETION_ONSTACK(wait);
unsigned long flags;
unsigned int err_mask;
int rc;
if (scsi_add_host(shost, NULL))
goto add_host_fail;
+ if (!try_module_get(iscsit->owner))
+ goto cls_session_fail;
+
cls_session = iscsi_create_session(shost, iscsit, 0);
if (!cls_session)
- goto cls_session_fail;
+ goto module_put;
*(unsigned long*)shost->hostdata = (unsigned long)cls_session;
return cls_session;
+module_put:
+ module_put(iscsit->owner);
cls_session_fail:
scsi_remove_host(shost);
add_host_fail:
iscsi_destroy_session(cls_session);
scsi_host_put(shost);
+ module_put(cls_session->transport->owner);
}
EXPORT_SYMBOL_GPL(iscsi_session_teardown);
}
EXPORT_SYMBOL_GPL(iscsi_conn_bind);
+
+int iscsi_set_param(struct iscsi_cls_conn *cls_conn,
+ enum iscsi_param param, char *buf, int buflen)
+{
+ struct iscsi_conn *conn = cls_conn->dd_data;
+ struct iscsi_session *session = conn->session;
+ uint32_t value;
+
+ switch(param) {
+ case ISCSI_PARAM_MAX_RECV_DLENGTH:
+ sscanf(buf, "%d", &conn->max_recv_dlength);
+ break;
+ case ISCSI_PARAM_MAX_XMIT_DLENGTH:
+ sscanf(buf, "%d", &conn->max_xmit_dlength);
+ break;
+ case ISCSI_PARAM_HDRDGST_EN:
+ sscanf(buf, "%d", &conn->hdrdgst_en);
+ break;
+ case ISCSI_PARAM_DATADGST_EN:
+ sscanf(buf, "%d", &conn->datadgst_en);
+ break;
+ case ISCSI_PARAM_INITIAL_R2T_EN:
+ sscanf(buf, "%d", &session->initial_r2t_en);
+ break;
+ case ISCSI_PARAM_MAX_R2T:
+ sscanf(buf, "%d", &session->max_r2t);
+ break;
+ case ISCSI_PARAM_IMM_DATA_EN:
+ sscanf(buf, "%d", &session->imm_data_en);
+ break;
+ case ISCSI_PARAM_FIRST_BURST:
+ sscanf(buf, "%d", &session->first_burst);
+ break;
+ case ISCSI_PARAM_MAX_BURST:
+ sscanf(buf, "%d", &session->max_burst);
+ break;
+ case ISCSI_PARAM_PDU_INORDER_EN:
+ sscanf(buf, "%d", &session->pdu_inorder_en);
+ break;
+ case ISCSI_PARAM_DATASEQ_INORDER_EN:
+ sscanf(buf, "%d", &session->dataseq_inorder_en);
+ break;
+ case ISCSI_PARAM_ERL:
+ sscanf(buf, "%d", &session->erl);
+ break;
+ case ISCSI_PARAM_IFMARKER_EN:
+ sscanf(buf, "%d", &value);
+ BUG_ON(value);
+ break;
+ case ISCSI_PARAM_OFMARKER_EN:
+ sscanf(buf, "%d", &value);
+ BUG_ON(value);
+ break;
+ case ISCSI_PARAM_EXP_STATSN:
+ sscanf(buf, "%u", &conn->exp_statsn);
+ break;
+ case ISCSI_PARAM_TARGET_NAME:
+ /* this should not change between logins */
+ if (session->targetname)
+ break;
+
+ session->targetname = kstrdup(buf, GFP_KERNEL);
+ if (!session->targetname)
+ return -ENOMEM;
+ break;
+ case ISCSI_PARAM_TPGT:
+ sscanf(buf, "%d", &session->tpgt);
+ break;
+ case ISCSI_PARAM_PERSISTENT_PORT:
+ sscanf(buf, "%d", &conn->persistent_port);
+ break;
+ case ISCSI_PARAM_PERSISTENT_ADDRESS:
+ /*
+ * this is the address returned in discovery so it should
+ * not change between logins.
+ */
+ if (conn->persistent_address)
+ break;
+
+ conn->persistent_address = kstrdup(buf, GFP_KERNEL);
+ if (!conn->persistent_address)
+ return -ENOMEM;
+ break;
+ default:
+ return -ENOSYS;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(iscsi_set_param);
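With this conversion, set_param callbacks receive the parameter as a text buffer and the generic parsing lives in libiscsi; iscsi_tcp's default case above simply forwards to it. A transport with no per-parameter side effects could forward every parameter the same way. A minimal sketch, assuming a hypothetical my_iscsi transport and assuming the new helpers are declared in scsi/libiscsi.h as the EXPORT_SYMBOL_GPL lines suggest:

    #include <linux/module.h>
    #include <scsi/libiscsi.h>
    #include <scsi/scsi_transport_iscsi.h>

    static int
    my_iscsi_conn_set_param(struct iscsi_cls_conn *cls_conn,
                            enum iscsi_param param, char *buf, int buflen)
    {
            /* let the library parse the string and update the session/conn fields */
            return iscsi_set_param(cls_conn, param, buf, buflen);
    }

    static struct iscsi_transport my_iscsi_transport = {
            .owner             = THIS_MODULE,
            .name              = "my_iscsi",
            .set_param         = my_iscsi_conn_set_param,
            .get_conn_param    = iscsi_conn_get_param,
            .get_session_param = iscsi_session_get_param,
            /* ... remaining callbacks elided ... */
    };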
+
+int iscsi_session_get_param(struct iscsi_cls_session *cls_session,
+ enum iscsi_param param, char *buf)
+{
+ struct Scsi_Host *shost = iscsi_session_to_shost(cls_session);
+ struct iscsi_session *session = iscsi_hostdata(shost->hostdata);
+ int len;
+
+ switch(param) {
+ case ISCSI_PARAM_INITIAL_R2T_EN:
+ len = sprintf(buf, "%d\n", session->initial_r2t_en);
+ break;
+ case ISCSI_PARAM_MAX_R2T:
+ len = sprintf(buf, "%hu\n", session->max_r2t);
+ break;
+ case ISCSI_PARAM_IMM_DATA_EN:
+ len = sprintf(buf, "%d\n", session->imm_data_en);
+ break;
+ case ISCSI_PARAM_FIRST_BURST:
+ len = sprintf(buf, "%u\n", session->first_burst);
+ break;
+ case ISCSI_PARAM_MAX_BURST:
+ len = sprintf(buf, "%u\n", session->max_burst);
+ break;
+ case ISCSI_PARAM_PDU_INORDER_EN:
+ len = sprintf(buf, "%d\n", session->pdu_inorder_en);
+ break;
+ case ISCSI_PARAM_DATASEQ_INORDER_EN:
+ len = sprintf(buf, "%d\n", session->dataseq_inorder_en);
+ break;
+ case ISCSI_PARAM_ERL:
+ len = sprintf(buf, "%d\n", session->erl);
+ break;
+ case ISCSI_PARAM_TARGET_NAME:
+ len = sprintf(buf, "%s\n", session->targetname);
+ break;
+ case ISCSI_PARAM_TPGT:
+ len = sprintf(buf, "%d\n", session->tpgt);
+ break;
+ default:
+ return -ENOSYS;
+ }
+
+ return len;
+}
+EXPORT_SYMBOL_GPL(iscsi_session_get_param);
+
+int iscsi_conn_get_param(struct iscsi_cls_conn *cls_conn,
+ enum iscsi_param param, char *buf)
+{
+ struct iscsi_conn *conn = cls_conn->dd_data;
+ int len;
+
+ switch(param) {
+ case ISCSI_PARAM_MAX_RECV_DLENGTH:
+ len = sprintf(buf, "%u\n", conn->max_recv_dlength);
+ break;
+ case ISCSI_PARAM_MAX_XMIT_DLENGTH:
+ len = sprintf(buf, "%u\n", conn->max_xmit_dlength);
+ break;
+ case ISCSI_PARAM_HDRDGST_EN:
+ len = sprintf(buf, "%d\n", conn->hdrdgst_en);
+ break;
+ case ISCSI_PARAM_DATADGST_EN:
+ len = sprintf(buf, "%d\n", conn->datadgst_en);
+ break;
+ case ISCSI_PARAM_IFMARKER_EN:
+ len = sprintf(buf, "%d\n", conn->ifmarker_en);
+ break;
+ case ISCSI_PARAM_OFMARKER_EN:
+ len = sprintf(buf, "%d\n", conn->ofmarker_en);
+ break;
+ case ISCSI_PARAM_EXP_STATSN:
+ len = sprintf(buf, "%u\n", conn->exp_statsn);
+ break;
+ case ISCSI_PARAM_PERSISTENT_PORT:
+ len = sprintf(buf, "%d\n", conn->persistent_port);
+ break;
+ case ISCSI_PARAM_PERSISTENT_ADDRESS:
+ len = sprintf(buf, "%s\n", conn->persistent_address);
+ break;
+ default:
+ return -ENOSYS;
+ }
+
+ return len;
+}
+EXPORT_SYMBOL_GPL(iscsi_conn_get_param);
+
MODULE_AUTHOR("Mike Christie");
MODULE_DESCRIPTION("iSCSI library functions");
MODULE_LICENSE("GPL");
dma_addr_t slim2p_mapping;
uint16_t pci_cfg_value;
- struct semaphore hba_can_block;
int32_t hba_state;
#define LPFC_STATE_UNKNOWN 0 /* HBA state is unknown */
pring = &psli->ring[LPFC_ELS_RING]; /* ELS ring */
cmdsize = (sizeof (uint32_t) + sizeof (struct serv_parm));
- elsiocb = lpfc_prep_els_iocb(phba, 1, cmdsize, retry, 0, did,
+ elsiocb = lpfc_prep_els_iocb(phba, 1, cmdsize, retry, NULL, did,
ELS_CMD_PLOGI);
if (!elsiocb)
return 1;
ndlp = (struct lpfc_nodelist *) pmb->context2;
xri = (uint16_t) ((unsigned long)(pmb->context1));
- pmb->context1 = 0;
- pmb->context2 = 0;
+ pmb->context1 = NULL;
+ pmb->context2 = NULL;
if (mb->mbxStatus) {
mempool_free( pmb, phba->mbox_mem_pool);
"10-port ", "PCIe"};
break;
default:
- m = (typeof(m)){ 0 };
+ m = (typeof(m)){ NULL };
break;
}
break;
default:
- m = (typeof(m)){ 0 };
+ m = (typeof(m)){ NULL };
break;
}
goto out_put_host;
host->unique_id = phba->brd_no;
- init_MUTEX(&phba->hba_can_block);
INIT_LIST_HEAD(&phba->ctrspbuflist);
INIT_LIST_HEAD(&phba->rnidrspbuflist);
INIT_LIST_HEAD(&phba->freebufList);
if (error)
goto out_remove_host;
- error = request_irq(phba->pcidev->irq, lpfc_intr_handler, SA_SHIRQ,
+ error = request_irq(phba->pcidev->irq, lpfc_intr_handler, IRQF_SHARED,
LPFC_DRIVER_NAME, phba);
if (error) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
#define LPFC_ABORT_WAIT 2
-static inline void
-lpfc_block_requests(struct lpfc_hba * phba)
-{
- down(&phba->hba_can_block);
- scsi_block_requests(phba->host);
-}
-
-static inline void
-lpfc_unblock_requests(struct lpfc_hba * phba)
-{
- scsi_unblock_requests(phba->host);
- up(&phba->hba_can_block);
-}
-
/*
* This routine allocates a scsi buffer, which contains all the necessary
* information needed to initiate a SCSI I/O. The non-DMAable buffer region
unsigned int loop_count = 0;
int ret = SUCCESS;
- lpfc_block_requests(phba);
spin_lock_irq(shost->host_lock);
lpfc_cmd = (struct lpfc_scsi_buf *)cmnd->host_scribble;
cmnd->device->lun, cmnd->serial_number);
spin_unlock_irq(shost->host_lock);
- lpfc_unblock_requests(phba);
return ret;
}
int ret = FAILED;
int cnt, loopcnt;
- lpfc_block_requests(phba);
spin_lock_irq(shost->host_lock);
/*
* If target is not in a MAPPED state, delay the reset until
out:
spin_unlock_irq(shost->host_lock);
- lpfc_unblock_requests(phba);
return ret;
}
int cnt, loopcnt;
struct lpfc_scsi_buf * lpfc_cmd;
- lpfc_block_requests(phba);
spin_lock_irq(shost->host_lock);
lpfc_cmd = lpfc_get_scsi_buf(phba);
phba->brd_no, ret);
out:
spin_unlock_irq(shost->host_lock);
- lpfc_unblock_requests(phba);
return ret;
}
if (request_irq(irq, (adapter->flag & BOARD_MEMMAP) ?
megaraid_isr_memmapped : megaraid_isr_iomapped,
- SA_SHIRQ, "megaraid", adapter)) {
+ IRQF_SHARED, "megaraid", adapter)) {
printk(KERN_WARNING
"megaraid: Couldn't register IRQ %d!\n", irq);
goto out_free_scb_list;
//
// request IRQ and register the interrupt service routine
- if (request_irq(adapter->irq, megaraid_isr, SA_SHIRQ, "megaraid",
+ if (request_irq(adapter->irq, megaraid_isr, IRQF_SHARED, "megaraid",
adapter)) {
con_log(CL_ANN, (KERN_WARNING
* 2 of the License, or (at your option) any later version.
*
* FILE : megaraid_sas.c
- * Version : v00.00.02.04
+ * Version : v00.00.03.01
*
* Authors:
* Sreenivas Bagalkote <Sreenivas.Bagalkote@lsil.com>
{
PCI_VENDOR_ID_LSI_LOGIC,
- PCI_DEVICE_ID_LSI_SAS1064R, // xscale IOP
+ PCI_DEVICE_ID_LSI_SAS1064R, /* xscale IOP */
PCI_ANY_ID,
PCI_ANY_ID,
},
{
PCI_VENDOR_ID_LSI_LOGIC,
- PCI_DEVICE_ID_LSI_SAS1078R, // ppc IOP
+ PCI_DEVICE_ID_LSI_SAS1078R, /* ppc IOP */
PCI_ANY_ID,
PCI_ANY_ID,
},
+ {
+ PCI_VENDOR_ID_LSI_LOGIC,
+ PCI_DEVICE_ID_LSI_VERDE_ZCR, /* xscale IOP, vega */
+ PCI_ANY_ID,
+ PCI_ANY_ID,
+ },
{
PCI_VENDOR_ID_DELL,
- PCI_DEVICE_ID_DELL_PERC5, // xscale IOP
+ PCI_DEVICE_ID_DELL_PERC5, /* xscale IOP */
PCI_ANY_ID,
PCI_ANY_ID,
},
* @regs: MFI register set
*/
static inline void
-megasas_disable_intr(struct megasas_register_set __iomem * regs)
+megasas_disable_intr(struct megasas_instance *instance)
{
u32 mask = 0x1f;
+ struct megasas_register_set __iomem *regs = instance->reg_set;
+
+ if(instance->pdev->device == PCI_DEVICE_ID_LSI_SAS1078R)
+ mask = 0xffffffff;
+
writel(mask, &regs->outbound_intr_mask);
/* Dummy readl to force pci flush */
/*
* Bring it to READY state; assuming max wait 2 secs
*/
- megasas_disable_intr(instance->reg_set);
+ megasas_disable_intr(instance);
writel(MFI_INIT_READY, &instance->reg_set->inbound_doorbell);
max_wait = 10;
init_frame->data_xfer_len = sizeof(struct megasas_init_queue_info);
+ /*
+ * disable the intr before firing the init frame to FW
+ */
+ megasas_disable_intr(instance);
+
/*
* Issue the init frame in polled mode
*/
/*
* Register IRQ
*/
- if (request_irq(pdev->irq, megasas_isr, SA_SHIRQ, "megasas", instance)) {
+ if (request_irq(pdev->irq, megasas_isr, IRQF_SHARED, "megasas", instance)) {
printk(KERN_DEBUG "megasas: Failed to register IRQ\n");
goto fail_irq;
}
megasas_mgmt_info.max_index--;
pci_set_drvdata(pdev, NULL);
- megasas_disable_intr(instance->reg_set);
+ megasas_disable_intr(instance);
free_irq(instance->pdev->irq, instance);
megasas_release_mfi(instance);
pci_set_drvdata(instance->pdev, NULL);
- megasas_disable_intr(instance->reg_set);
+ megasas_disable_intr(instance);
free_irq(instance->pdev->irq, instance);
/**
* MegaRAID SAS Driver meta data
*/
-#define MEGASAS_VERSION "00.00.02.04"
-#define MEGASAS_RELDATE "Feb 03, 2006"
-#define MEGASAS_EXT_VERSION "Fri Feb 03 14:31:44 PST 2006"
+#define MEGASAS_VERSION "00.00.03.01"
+#define MEGASAS_RELDATE "May 14, 2006"
+#define MEGASAS_EXT_VERSION "Sun May 14 22:49:52 PDT 2006"
+
+/*
+ * Device IDs
+ */
+#define PCI_DEVICE_ID_LSI_SAS1078R 0x0060
+#define PCI_DEVICE_ID_LSI_VERDE_ZCR 0x0413
+
/*
* =====================================
* MegaRAID SAS MFI firmware definitions
#define MFI_POLL_TIMEOUT_SECS 10
#define MFI_REPLY_1078_MESSAGE_INTERRUPT 0x80000000
-#define PCI_DEVICE_ID_LSI_SAS1078R 0x00000060
+
+/*
+* register set for both 1068 and 1078 controllers
+* structure extended for 1078 registers
+*/
struct megasas_register_set {
u32 reserved_0[4]; /*0000h*/
struct compat_iovec sgl[MAX_IOCTL_SGE];
} __attribute__ ((packed));
+#define MEGASAS_IOC_FIRMWARE32 _IOWR('M', 1, struct compat_megasas_iocpacket)
#endif
#define MEGASAS_IOC_FIRMWARE _IOWR('M', 1, struct megasas_iocpacket)
-#define MEGASAS_IOC_FIRMWARE32 _IOWR('M', 1, struct compat_megasas_iocpacket)
#define MEGASAS_IOC_GET_AEN _IOW('M', 3, struct megasas_aen)
struct megasas_mgmt_info {
*/
nsp32_do_bus_reset(data);
- ret = request_irq(host->irq, do_nsp32_isr,
- SA_SHIRQ | SA_SAMPLE_RANDOM, "nsp32", data);
+ ret = request_irq(host->irq, do_nsp32_isr, IRQF_SHARED, "nsp32", data);
if (ret < 0) {
nsp32_msg(KERN_ERR, "Unable to allocate IRQ for NinjaSCSI32 "
"SCSI PCI controller. Interrupt: %d", host->irq);
}
#if (LINUX_VERSION_CODE > KERNEL_VERSION(2,5,73))
- scsi_add_host (host, &PCIDEV->dev);
+ ret = scsi_add_host(host, &PCIDEV->dev);
+ if (ret) {
+ nsp32_msg(KERN_ERR, "failed to add scsi host");
+ goto free_region;
+ }
scsi_scan_host(host);
#endif
pci_set_drvdata(PCIDEV, host);
return DETECT_OK;
+ free_region:
+ release_region(host->io_port, host->n_io_port);
+
free_irq:
free_irq(host->irq, data);
esp->esp_command_dvma = (__u32) cmd_buffer;
esp->irq = IRQ_AMIGA_PORTS;
- request_irq(IRQ_AMIGA_PORTS, esp_intr, SA_SHIRQ,
+ request_irq(IRQ_AMIGA_PORTS, esp_intr, IRQF_SHARED,
"BSC Oktagon SCSI", esp->ehost);
/* Figure out our scsi ID on the bus */
instance->irq = NCR5380_probe_irq(instance, PAS16_IRQS);
if (instance->irq != SCSI_IRQ_NONE)
- if (request_irq(instance->irq, pas16_intr, SA_INTERRUPT, "pas16", instance)) {
+ if (request_irq(instance->irq, pas16_intr, IRQF_DISABLED, "pas16", instance)) {
printk("scsi%d : IRQ%d not free, interrupts disabled\n",
instance->host_no, instance->irq);
instance->irq = SCSI_IRQ_NONE;
/* Interrupt handler */
link->irq.Handler = &nspintr;
link->irq.Instance = info;
- link->irq.Attributes |= (SA_SHIRQ | SA_SAMPLE_RANDOM);
+ link->irq.Attributes |= IRQF_SHARED;
/* General socket configuration */
link->conf.Attributes = CONF_ENABLE_IRQ;
data = (struct sym53c500_data *)host->hostdata;
if (irq_level > 0) {
- if (request_irq(irq_level, SYM53C500_intr, SA_SHIRQ, "SYM53C500", host)) {
+ if (request_irq(irq_level, SYM53C500_intr, IRQF_SHARED, "SYM53C500", host)) {
printk("SYM53C500: unable to allocate IRQ %d\n", irq_level);
goto err_free_scsi;
}
probe_ent->port_ops = adma_port_info[board_idx].port_ops;
probe_ent->irq = pdev->irq;
- probe_ent->irq_flags = SA_SHIRQ;
+ probe_ent->irq_flags = IRQF_SHARED;
probe_ent->mmio_base = mmio_base;
probe_ent->n_ports = ADMA_PORTS;
- Don't walk the entire list in qla1280_putq_t() just to directly
grab the pointer to the last element afterwards
Rev 3.23.5 Beta August 9, 2001, Jes Sorensen
- - Don't use SA_INTERRUPT, it's use is deprecated for this kinda driver
+ - Don't use IRQF_DISABLED, its use is deprecated for this kind of driver
Rev 3.23.4 Beta August 8, 2001, Jes Sorensen
- Set dev->max_sectors to 1024
Rev 3.23.3 Beta August 6, 2001, Jes Sorensen
}
-static int
+static int __init
qla1280_get_token(char *str)
{
char *sep;
/* Disable ISP interrupts. */
qla1280_disable_intrs(ha);
- if (request_irq(pdev->irq, qla1280_intr_handler, SA_SHIRQ,
+ if (request_irq(pdev->irq, qla1280_intr_handler, IRQF_SHARED,
"qla1280", ha)) {
printk("qla1280 : Failed to reserve interrupt %d already "
"in use\n", pdev->irq);
{
struct scsi_qla_host *ha = to_qla_host(dev_to_shost(container_of(kobj,
struct device, kobj)));
+ char *rbuf = (char *)ha->fw_dump;
if (ha->fw_dump_reading == 0)
return 0;
- if (off > ha->fw_dump_buffer_len)
- return 0;
- if (off + count > ha->fw_dump_buffer_len)
- count = ha->fw_dump_buffer_len - off;
+ if (off > ha->fw_dump_len)
+ return 0;
+ if (off + count > ha->fw_dump_len)
+ count = ha->fw_dump_len - off;
- memcpy(buf, &ha->fw_dump_buffer[off], count);
+ memcpy(buf, &rbuf[off], count);
return (count);
}
struct scsi_qla_host *ha = to_qla_host(dev_to_shost(container_of(kobj,
struct device, kobj)));
int reading;
- uint32_t dump_size;
if (off != 0)
return (0);
reading = simple_strtol(buf, NULL, 10);
switch (reading) {
case 0:
- if (ha->fw_dump_reading == 1) {
- qla_printk(KERN_INFO, ha,
- "Firmware dump cleared on (%ld).\n", ha->host_no);
+ if (!ha->fw_dump_reading)
+ break;
- vfree(ha->fw_dump_buffer);
- ha->fw_dump_buffer = NULL;
- ha->fw_dump_reading = 0;
- ha->fw_dumped = 0;
- }
+ qla_printk(KERN_INFO, ha,
+ "Firmware dump cleared on (%ld).\n", ha->host_no);
+
+ ha->fw_dump_reading = 0;
+ ha->fw_dumped = 0;
break;
case 1:
if (ha->fw_dumped && !ha->fw_dump_reading) {
ha->fw_dump_reading = 1;
- if (IS_QLA24XX(ha) || IS_QLA54XX(ha))
- dump_size = FW_DUMP_SIZE_24XX;
- else {
- dump_size = FW_DUMP_SIZE_1M;
- if (ha->fw_memory_size < 0x20000)
- dump_size = FW_DUMP_SIZE_128K;
- else if (ha->fw_memory_size < 0x80000)
- dump_size = FW_DUMP_SIZE_512K;
- }
- ha->fw_dump_buffer = (char *)vmalloc(dump_size);
- if (ha->fw_dump_buffer == NULL) {
- qla_printk(KERN_WARNING, ha,
- "Unable to allocate memory for firmware "
- "dump buffer (%d).\n", dump_size);
-
- ha->fw_dump_reading = 0;
- return (count);
- }
qla_printk(KERN_INFO, ha,
- "Firmware dump ready for read on (%ld).\n",
+ "Raw firmware dump ready for read on (%ld).\n",
ha->host_no);
- memset(ha->fw_dump_buffer, 0, dump_size);
- ha->isp_ops.ascii_fw_dump(ha);
- ha->fw_dump_buffer_len = strlen(ha->fw_dump_buffer);
}
break;
+ case 2:
+ qla2x00_alloc_fw_dump(ha);
+ break;
}
return (count);
}
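The rewritten fw_dump attribute now takes three control values: writing 0 clears a
dump that has been read, 1 marks an already-captured dump ready for reading, and
the new value 2 calls qla2x00_alloc_fw_dump(). Because the ASCII formatting path is
deleted below, a read returns the raw big-endian dump. A hypothetical user-space
sequence, only a sketch (the sysfs path and buffer size are illustrative, not taken
from the driver), could look like this:

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		/* Hypothetical attribute path; the real location depends on the host. */
		const char *path = "/sys/class/scsi_host/host0/device/fw_dump";
		char buf[4096];
		ssize_t n;
		int fd = open(path, O_RDWR);

		if (fd < 0)
			return 1;
		pwrite(fd, "1", 1, 0);		/* mark the captured dump ready for read */
		while ((n = read(fd, buf, sizeof(buf))) > 0)
			fwrite(buf, 1, n, stdout);	/* raw, big-endian dump data */
		pwrite(fd, "0", 1, 0);		/* clear the dump when finished */
		close(fd);
		return 0;
	}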
if (!capable(CAP_SYS_ADMIN) || off != 0)
return 0;
- if (!IS_QLA24XX(ha) && !IS_QLA54XX(ha))
- return -ENOTSUPP;
-
/* Read NVRAM. */
spin_lock_irqsave(&ha->hardware_lock, flags);
ha->isp_ops.read_nvram(ha, (uint8_t *)buf, ha->vpd_base, ha->vpd_size);
if (!capable(CAP_SYS_ADMIN) || off != 0 || count != ha->vpd_size)
return 0;
- if (!IS_QLA24XX(ha) && !IS_QLA54XX(ha))
- return -ENOTSUPP;
-
/* Write NVRAM. */
spin_lock_irqsave(&ha->hardware_lock, flags);
ha->isp_ops.write_nvram(ha, (uint8_t *)buf, ha->vpd_base, count);
.write = qla2x00_sysfs_write_vpd,
};
+static ssize_t
+qla2x00_sysfs_read_sfp(struct kobject *kobj, char *buf, loff_t off,
+ size_t count)
+{
+ struct scsi_qla_host *ha = to_qla_host(dev_to_shost(container_of(kobj,
+ struct device, kobj)));
+ uint16_t iter, addr, offset;
+ int rval;
+
+ if (!capable(CAP_SYS_ADMIN) || off != 0 || count != SFP_DEV_SIZE * 2)
+ return 0;
+
+ addr = 0xa0;
+ for (iter = 0, offset = 0; iter < (SFP_DEV_SIZE * 2) / SFP_BLOCK_SIZE;
+ iter++, offset += SFP_BLOCK_SIZE) {
+ if (iter == 4) {
+ /* Skip to next device address. */
+ addr = 0xa2;
+ offset = 0;
+ }
+
+ rval = qla2x00_read_sfp(ha, ha->sfp_data_dma, addr, offset,
+ SFP_BLOCK_SIZE);
+ if (rval != QLA_SUCCESS) {
+ qla_printk(KERN_WARNING, ha,
+ "Unable to read SFP data (%x/%x/%x).\n", rval,
+ addr, offset);
+ count = 0;
+ break;
+ }
+ memcpy(buf, ha->sfp_data, SFP_BLOCK_SIZE);
+ buf += SFP_BLOCK_SIZE;
+ }
+
+ return count;
+}
+
+static struct bin_attribute sysfs_sfp_attr = {
+ .attr = {
+ .name = "sfp",
+ .mode = S_IRUSR | S_IWUSR,
+ .owner = THIS_MODULE,
+ },
+ .size = SFP_DEV_SIZE * 2,
+ .read = qla2x00_sysfs_read_sfp,
+};
+
void
qla2x00_alloc_sysfs_attr(scsi_qla_host_t *ha)
{
sysfs_create_bin_file(&host->shost_gendev.kobj, &sysfs_optrom_attr);
sysfs_create_bin_file(&host->shost_gendev.kobj,
&sysfs_optrom_ctl_attr);
- sysfs_create_bin_file(&host->shost_gendev.kobj, &sysfs_vpd_attr);
+ if (IS_QLA24XX(ha) || IS_QLA54XX(ha)) {
+ sysfs_create_bin_file(&host->shost_gendev.kobj,
+ &sysfs_vpd_attr);
+ sysfs_create_bin_file(&host->shost_gendev.kobj,
+ &sysfs_sfp_attr);
+ }
}
void
sysfs_remove_bin_file(&host->shost_gendev.kobj, &sysfs_optrom_attr);
sysfs_remove_bin_file(&host->shost_gendev.kobj,
&sysfs_optrom_ctl_attr);
- sysfs_remove_bin_file(&host->shost_gendev.kobj, &sysfs_vpd_attr);
+ if (IS_QLA24XX(ha) || IS_QLA54XX(ha)) {
+ sysfs_remove_bin_file(&host->shost_gendev.kobj,
+ &sysfs_vpd_attr);
+ sysfs_remove_bin_file(&host->shost_gendev.kobj,
+ &sysfs_sfp_attr);
+ }
if (ha->beacon_blink_led == 1)
ha->isp_ops.beacon_off(ha);
#include <linux/delay.h>
-static int qla_uprintf(char **, char *, ...);
+static inline void
+qla2xxx_prep_dump(scsi_qla_host_t *ha, struct qla2xxx_fw_dump *fw_dump)
+{
+ fw_dump->fw_major_version = htonl(ha->fw_major_version);
+ fw_dump->fw_minor_version = htonl(ha->fw_minor_version);
+ fw_dump->fw_subminor_version = htonl(ha->fw_subminor_version);
+ fw_dump->fw_attributes = htonl(ha->fw_attributes);
+
+ fw_dump->vendor = htonl(ha->pdev->vendor);
+ fw_dump->device = htonl(ha->pdev->device);
+ fw_dump->subsystem_vendor = htonl(ha->pdev->subsystem_vendor);
+ fw_dump->subsystem_device = htonl(ha->pdev->subsystem_device);
+}
+
+static inline void *
+qla2xxx_copy_queues(scsi_qla_host_t *ha, void *ptr)
+{
+ /* Request queue. */
+ memcpy(ptr, ha->request_ring, ha->request_q_length *
+ sizeof(request_t));
+
+ /* Response queue. */
+ ptr += ha->request_q_length * sizeof(request_t);
+ memcpy(ptr, ha->response_ring, ha->response_q_length *
+ sizeof(response_t));
+
+ return ptr + (ha->response_q_length * sizeof(response_t));
+}
/**
* qla2300_fw_dump() - Dumps binary data from the 2300 firmware.
"request...\n", ha->fw_dump);
goto qla2300_fw_dump_failed;
}
- fw = ha->fw_dump;
+ fw = &ha->fw_dump->isp.isp23;
+ qla2xxx_prep_dump(ha, ha->fw_dump);
rval = QLA_SUCCESS;
- fw->hccr = RD_REG_WORD(&reg->hccr);
+ fw->hccr = htons(RD_REG_WORD(&reg->hccr));
/* Pause RISC. */
WRT_REG_WORD(&reg->hccr, HCCR_PAUSE_RISC);
if (rval == QLA_SUCCESS) {
dmp_reg = (uint16_t __iomem *)(reg + 0);
for (cnt = 0; cnt < sizeof(fw->pbiu_reg) / 2; cnt++)
- fw->pbiu_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->pbiu_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x10);
for (cnt = 0; cnt < sizeof(fw->risc_host_reg) / 2; cnt++)
- fw->risc_host_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->risc_host_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x40);
for (cnt = 0; cnt < sizeof(fw->mailbox_reg) / 2; cnt++)
- fw->mailbox_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->mailbox_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
WRT_REG_WORD(&reg->ctrl_status, 0x40);
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x80);
for (cnt = 0; cnt < sizeof(fw->resp_dma_reg) / 2; cnt++)
- fw->resp_dma_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->resp_dma_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
WRT_REG_WORD(&reg->ctrl_status, 0x50);
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x80);
for (cnt = 0; cnt < sizeof(fw->dma_reg) / 2; cnt++)
- fw->dma_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->dma_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
WRT_REG_WORD(&reg->ctrl_status, 0x00);
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0xA0);
for (cnt = 0; cnt < sizeof(fw->risc_hdw_reg) / 2; cnt++)
- fw->risc_hdw_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->risc_hdw_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
WRT_REG_WORD(&reg->pcr, 0x2000);
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x80);
for (cnt = 0; cnt < sizeof(fw->risc_gp0_reg) / 2; cnt++)
- fw->risc_gp0_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->risc_gp0_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
WRT_REG_WORD(&reg->pcr, 0x2200);
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x80);
for (cnt = 0; cnt < sizeof(fw->risc_gp1_reg) / 2; cnt++)
- fw->risc_gp1_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->risc_gp1_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
WRT_REG_WORD(&reg->pcr, 0x2400);
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x80);
for (cnt = 0; cnt < sizeof(fw->risc_gp2_reg) / 2; cnt++)
- fw->risc_gp2_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->risc_gp2_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
WRT_REG_WORD(&reg->pcr, 0x2600);
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x80);
for (cnt = 0; cnt < sizeof(fw->risc_gp3_reg) / 2; cnt++)
- fw->risc_gp3_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->risc_gp3_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
WRT_REG_WORD(&reg->pcr, 0x2800);
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x80);
for (cnt = 0; cnt < sizeof(fw->risc_gp4_reg) / 2; cnt++)
- fw->risc_gp4_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->risc_gp4_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
WRT_REG_WORD(&reg->pcr, 0x2A00);
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x80);
for (cnt = 0; cnt < sizeof(fw->risc_gp5_reg) / 2; cnt++)
- fw->risc_gp5_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->risc_gp5_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
WRT_REG_WORD(&reg->pcr, 0x2C00);
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x80);
for (cnt = 0; cnt < sizeof(fw->risc_gp6_reg) / 2; cnt++)
- fw->risc_gp6_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->risc_gp6_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
WRT_REG_WORD(&reg->pcr, 0x2E00);
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x80);
for (cnt = 0; cnt < sizeof(fw->risc_gp7_reg) / 2; cnt++)
- fw->risc_gp7_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->risc_gp7_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
WRT_REG_WORD(&reg->ctrl_status, 0x10);
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x80);
for (cnt = 0; cnt < sizeof(fw->frame_buf_hdw_reg) / 2; cnt++)
- fw->frame_buf_hdw_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->frame_buf_hdw_reg[cnt] =
+ htons(RD_REG_WORD(dmp_reg++));
WRT_REG_WORD(&reg->ctrl_status, 0x20);
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x80);
for (cnt = 0; cnt < sizeof(fw->fpm_b0_reg) / 2; cnt++)
- fw->fpm_b0_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->fpm_b0_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
WRT_REG_WORD(&reg->ctrl_status, 0x30);
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x80);
for (cnt = 0; cnt < sizeof(fw->fpm_b1_reg) / 2; cnt++)
- fw->fpm_b1_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->fpm_b1_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
/* Reset RISC. */
WRT_REG_WORD(&reg->ctrl_status, CSR_ISP_SOFT_RESET);
if (test_and_clear_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags)) {
rval = mb0 & MBS_MASK;
- fw->risc_ram[cnt] = mb2;
+ fw->risc_ram[cnt] = htons(mb2);
} else {
rval = QLA_FUNCTION_FAILED;
}
if (test_and_clear_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags)) {
rval = mb0 & MBS_MASK;
- fw->stack_ram[cnt] = mb2;
+ fw->stack_ram[cnt] = htons(mb2);
} else {
rval = QLA_FUNCTION_FAILED;
}
if (test_and_clear_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags)) {
rval = mb0 & MBS_MASK;
- fw->data_ram[cnt] = mb2;
+ fw->data_ram[cnt] = htons(mb2);
} else {
rval = QLA_FUNCTION_FAILED;
}
}
+ if (rval == QLA_SUCCESS)
+ qla2xxx_copy_queues(ha, &fw->data_ram[cnt]);
+
if (rval != QLA_SUCCESS) {
qla_printk(KERN_WARNING, ha,
"Failed to dump firmware (%x)!!!\n", rval);
spin_unlock_irqrestore(&ha->hardware_lock, flags);
}
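Note that every register value captured above is now stored in big-endian byte
order (htons()/htonl()), and qla2xxx_copy_queues() appends the request and response
rings after the RAM dump, so what user space reads from the fw_dump attribute is a
raw, endian-neutral blob instead of pre-formatted ASCII. A post-processing tool
therefore converts values back to host order; the fragment below is only an
illustration with made-up sample words, not real dump contents:

	#include <arpa/inet.h>
	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		/* Stand-ins for words read from a dump file; the driver stored
		 * them with htons()/htonl() as shown in the hunks above. */
		uint16_t hccr_wire = htons(0x0012);
		uint32_t attrs_wire = htonl(0x00000044);

		printf("HCCR:          %04x\n", ntohs(hccr_wire));
		printf("fw_attributes: %08x\n", ntohl(attrs_wire));
		return 0;
	}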
-/**
- * qla2300_ascii_fw_dump() - Converts a binary firmware dump to ASCII.
- * @ha: HA context
- */
-void
-qla2300_ascii_fw_dump(scsi_qla_host_t *ha)
-{
- uint32_t cnt;
- char *uiter;
- char fw_info[30];
- struct qla2300_fw_dump *fw;
- uint32_t data_ram_cnt;
-
- uiter = ha->fw_dump_buffer;
- fw = ha->fw_dump;
-
- qla_uprintf(&uiter, "%s Firmware Version %s\n", ha->model_number,
- ha->isp_ops.fw_version_str(ha, fw_info));
-
- qla_uprintf(&uiter, "\n[==>BEG]\n");
-
- qla_uprintf(&uiter, "HCCR Register:\n%04x\n\n", fw->hccr);
-
- qla_uprintf(&uiter, "PBIU Registers:");
- for (cnt = 0; cnt < sizeof (fw->pbiu_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->pbiu_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nReqQ-RspQ-Risc2Host Status registers:");
- for (cnt = 0; cnt < sizeof (fw->risc_host_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->risc_host_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nMailbox Registers:");
- for (cnt = 0; cnt < sizeof (fw->mailbox_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->mailbox_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nAuto Request Response DMA Registers:");
- for (cnt = 0; cnt < sizeof (fw->resp_dma_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->resp_dma_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nDMA Registers:");
- for (cnt = 0; cnt < sizeof (fw->dma_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->dma_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRISC Hardware Registers:");
- for (cnt = 0; cnt < sizeof (fw->risc_hdw_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->risc_hdw_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRISC GP0 Registers:");
- for (cnt = 0; cnt < sizeof (fw->risc_gp0_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->risc_gp0_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRISC GP1 Registers:");
- for (cnt = 0; cnt < sizeof (fw->risc_gp1_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->risc_gp1_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRISC GP2 Registers:");
- for (cnt = 0; cnt < sizeof (fw->risc_gp2_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->risc_gp2_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRISC GP3 Registers:");
- for (cnt = 0; cnt < sizeof (fw->risc_gp3_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->risc_gp3_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRISC GP4 Registers:");
- for (cnt = 0; cnt < sizeof (fw->risc_gp4_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->risc_gp4_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRISC GP5 Registers:");
- for (cnt = 0; cnt < sizeof (fw->risc_gp5_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->risc_gp5_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRISC GP6 Registers:");
- for (cnt = 0; cnt < sizeof (fw->risc_gp6_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->risc_gp6_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRISC GP7 Registers:");
- for (cnt = 0; cnt < sizeof (fw->risc_gp7_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->risc_gp7_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nFrame Buffer Hardware Registers:");
- for (cnt = 0; cnt < sizeof (fw->frame_buf_hdw_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->frame_buf_hdw_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nFPM B0 Registers:");
- for (cnt = 0; cnt < sizeof (fw->fpm_b0_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->fpm_b0_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nFPM B1 Registers:");
- for (cnt = 0; cnt < sizeof (fw->fpm_b1_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->fpm_b1_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nCode RAM Dump:");
- for (cnt = 0; cnt < sizeof (fw->risc_ram) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n%04x: ", cnt + 0x0800);
- }
- qla_uprintf(&uiter, "%04x ", fw->risc_ram[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nStack RAM Dump:");
- for (cnt = 0; cnt < sizeof (fw->stack_ram) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n%05x: ", cnt + 0x10000);
- }
- qla_uprintf(&uiter, "%04x ", fw->stack_ram[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nData RAM Dump:");
- data_ram_cnt = ha->fw_memory_size - 0x11000 + 1;
- for (cnt = 0; cnt < data_ram_cnt; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n%05x: ", cnt + 0x11000);
- }
- qla_uprintf(&uiter, "%04x ", fw->data_ram[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\n[<==END] ISP Debug Dump.");
-}
-
/**
* qla2100_fw_dump() - Dumps binary data from the 2100/2200 firmware.
* @ha: HA context
"request...\n", ha->fw_dump);
goto qla2100_fw_dump_failed;
}
- fw = ha->fw_dump;
+ fw = &ha->fw_dump->isp.isp21;
+ qla2xxx_prep_dump(ha, ha->fw_dump);
rval = QLA_SUCCESS;
- fw->hccr = RD_REG_WORD(&reg->hccr);
+ fw->hccr = htons(RD_REG_WORD(&reg->hccr));
/* Pause RISC. */
WRT_REG_WORD(&reg->hccr, HCCR_PAUSE_RISC);
if (rval == QLA_SUCCESS) {
dmp_reg = (uint16_t __iomem *)(reg + 0);
for (cnt = 0; cnt < sizeof(fw->pbiu_reg) / 2; cnt++)
- fw->pbiu_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->pbiu_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x10);
for (cnt = 0; cnt < ha->mbx_count; cnt++) {
if (cnt == 8) {
- dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0xe0);
+ dmp_reg = (uint16_t __iomem *)
+ ((uint8_t __iomem *)reg + 0xe0);
}
- fw->mailbox_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->mailbox_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
}
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x20);
for (cnt = 0; cnt < sizeof(fw->dma_reg) / 2; cnt++)
- fw->dma_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->dma_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
WRT_REG_WORD(&reg->ctrl_status, 0x00);
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0xA0);
for (cnt = 0; cnt < sizeof(fw->risc_hdw_reg) / 2; cnt++)
- fw->risc_hdw_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->risc_hdw_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
WRT_REG_WORD(&reg->pcr, 0x2000);
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x80);
for (cnt = 0; cnt < sizeof(fw->risc_gp0_reg) / 2; cnt++)
- fw->risc_gp0_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->risc_gp0_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
WRT_REG_WORD(&reg->pcr, 0x2100);
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x80);
for (cnt = 0; cnt < sizeof(fw->risc_gp1_reg) / 2; cnt++)
- fw->risc_gp1_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->risc_gp1_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
WRT_REG_WORD(&reg->pcr, 0x2200);
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x80);
for (cnt = 0; cnt < sizeof(fw->risc_gp2_reg) / 2; cnt++)
- fw->risc_gp2_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->risc_gp2_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
WRT_REG_WORD(&reg->pcr, 0x2300);
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x80);
for (cnt = 0; cnt < sizeof(fw->risc_gp3_reg) / 2; cnt++)
- fw->risc_gp3_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->risc_gp3_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
WRT_REG_WORD(&reg->pcr, 0x2400);
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x80);
for (cnt = 0; cnt < sizeof(fw->risc_gp4_reg) / 2; cnt++)
- fw->risc_gp4_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->risc_gp4_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
WRT_REG_WORD(&reg->pcr, 0x2500);
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x80);
for (cnt = 0; cnt < sizeof(fw->risc_gp5_reg) / 2; cnt++)
- fw->risc_gp5_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->risc_gp5_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
WRT_REG_WORD(&reg->pcr, 0x2600);
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x80);
for (cnt = 0; cnt < sizeof(fw->risc_gp6_reg) / 2; cnt++)
- fw->risc_gp6_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->risc_gp6_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
WRT_REG_WORD(&reg->pcr, 0x2700);
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x80);
for (cnt = 0; cnt < sizeof(fw->risc_gp7_reg) / 2; cnt++)
- fw->risc_gp7_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->risc_gp7_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
WRT_REG_WORD(&reg->ctrl_status, 0x10);
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x80);
for (cnt = 0; cnt < sizeof(fw->frame_buf_hdw_reg) / 2; cnt++)
- fw->frame_buf_hdw_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->frame_buf_hdw_reg[cnt] =
+ htons(RD_REG_WORD(dmp_reg++));
WRT_REG_WORD(&reg->ctrl_status, 0x20);
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x80);
for (cnt = 0; cnt < sizeof(fw->fpm_b0_reg) / 2; cnt++)
- fw->fpm_b0_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->fpm_b0_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
WRT_REG_WORD(&reg->ctrl_status, 0x30);
dmp_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x80);
for (cnt = 0; cnt < sizeof(fw->fpm_b1_reg) / 2; cnt++)
- fw->fpm_b1_reg[cnt] = RD_REG_WORD(dmp_reg++);
+ fw->fpm_b1_reg[cnt] = htons(RD_REG_WORD(dmp_reg++));
/* Reset the ISP. */
WRT_REG_WORD(&reg->ctrl_status, CSR_ISP_SOFT_RESET);
if (test_and_clear_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags)) {
rval = mb0 & MBS_MASK;
- fw->risc_ram[cnt] = mb2;
+ fw->risc_ram[cnt] = htons(mb2);
} else {
rval = QLA_FUNCTION_FAILED;
}
}
+ if (rval == QLA_SUCCESS)
+ qla2xxx_copy_queues(ha, &fw->risc_ram[cnt]);
+
if (rval != QLA_SUCCESS) {
qla_printk(KERN_WARNING, ha,
"Failed to dump firmware (%x)!!!\n", rval);
spin_unlock_irqrestore(&ha->hardware_lock, flags);
}
-/**
- * qla2100_ascii_fw_dump() - Converts a binary firmware dump to ASCII.
- * @ha: HA context
- */
-void
-qla2100_ascii_fw_dump(scsi_qla_host_t *ha)
-{
- uint32_t cnt;
- char *uiter;
- char fw_info[30];
- struct qla2100_fw_dump *fw;
-
- uiter = ha->fw_dump_buffer;
- fw = ha->fw_dump;
-
- qla_uprintf(&uiter, "%s Firmware Version %s\n", ha->model_number,
- ha->isp_ops.fw_version_str(ha, fw_info));
-
- qla_uprintf(&uiter, "\n[==>BEG]\n");
-
- qla_uprintf(&uiter, "HCCR Register:\n%04x\n\n", fw->hccr);
-
- qla_uprintf(&uiter, "PBIU Registers:");
- for (cnt = 0; cnt < sizeof (fw->pbiu_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->pbiu_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nMailbox Registers:");
- for (cnt = 0; cnt < sizeof (fw->mailbox_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->mailbox_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nDMA Registers:");
- for (cnt = 0; cnt < sizeof (fw->dma_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->dma_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRISC Hardware Registers:");
- for (cnt = 0; cnt < sizeof (fw->risc_hdw_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->risc_hdw_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRISC GP0 Registers:");
- for (cnt = 0; cnt < sizeof (fw->risc_gp0_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->risc_gp0_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRISC GP1 Registers:");
- for (cnt = 0; cnt < sizeof (fw->risc_gp1_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->risc_gp1_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRISC GP2 Registers:");
- for (cnt = 0; cnt < sizeof (fw->risc_gp2_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->risc_gp2_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRISC GP3 Registers:");
- for (cnt = 0; cnt < sizeof (fw->risc_gp3_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->risc_gp3_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRISC GP4 Registers:");
- for (cnt = 0; cnt < sizeof (fw->risc_gp4_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->risc_gp4_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRISC GP5 Registers:");
- for (cnt = 0; cnt < sizeof (fw->risc_gp5_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->risc_gp5_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRISC GP6 Registers:");
- for (cnt = 0; cnt < sizeof (fw->risc_gp6_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->risc_gp6_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRISC GP7 Registers:");
- for (cnt = 0; cnt < sizeof (fw->risc_gp7_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->risc_gp7_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nFrame Buffer Hardware Registers:");
- for (cnt = 0; cnt < sizeof (fw->frame_buf_hdw_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->frame_buf_hdw_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nFPM B0 Registers:");
- for (cnt = 0; cnt < sizeof (fw->fpm_b0_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->fpm_b0_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nFPM B1 Registers:");
- for (cnt = 0; cnt < sizeof (fw->fpm_b1_reg) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n");
- }
- qla_uprintf(&uiter, "%04x ", fw->fpm_b1_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRISC SRAM:");
- for (cnt = 0; cnt < sizeof (fw->risc_ram) / 2; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n%04x: ", cnt + 0x1000);
- }
- qla_uprintf(&uiter, "%04x ", fw->risc_ram[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\n[<==END] ISP Debug Dump.");
-
- return;
-}
-
-static int
-qla_uprintf(char **uiter, char *fmt, ...)
-{
- int iter, len;
- char buf[128];
- va_list args;
-
- va_start(args, fmt);
- len = vsprintf(buf, fmt, args);
- va_end(args);
-
- for (iter = 0; iter < len; iter++, *uiter += 1)
- *uiter[0] = buf[iter];
-
- return (len);
-}
-
-
void
qla24xx_fw_dump(scsi_qla_host_t *ha, int hardware_locked)
{
unsigned long flags;
struct qla24xx_fw_dump *fw;
uint32_t ext_mem_cnt;
+ void *eft;
risc_address = ext_mem_cnt = 0;
memset(mb, 0, sizeof(mb));
"request...\n", ha->fw_dump);
goto qla24xx_fw_dump_failed;
}
- fw = ha->fw_dump;
+ fw = &ha->fw_dump->isp.isp24;
+ qla2xxx_prep_dump(ha, ha->fw_dump);
rval = QLA_SUCCESS;
- fw->host_status = RD_REG_DWORD(&reg->host_status);
+ fw->host_status = htonl(RD_REG_DWORD(&reg->host_status));
/* Pause RISC. */
if ((RD_REG_DWORD(&reg->hccr) & HCCRX_RISC_PAUSE) == 0) {
/* Host interface registers. */
dmp_reg = (uint32_t __iomem *)(reg + 0);
for (cnt = 0; cnt < sizeof(fw->host_reg) / 4; cnt++)
- fw->host_reg[cnt] = RD_REG_DWORD(dmp_reg++);
+ fw->host_reg[cnt] = htonl(RD_REG_DWORD(dmp_reg++));
/* Disable interrupts. */
WRT_REG_DWORD(&reg->ictrl, 0);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xF0);
WRT_REG_DWORD(dmp_reg, 0xB0000000);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xFC);
- fw->shadow_reg[0] = RD_REG_DWORD(dmp_reg);
+ fw->shadow_reg[0] = htonl(RD_REG_DWORD(dmp_reg));
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xF0);
WRT_REG_DWORD(dmp_reg, 0xB0100000);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xFC);
- fw->shadow_reg[1] = RD_REG_DWORD(dmp_reg);
+ fw->shadow_reg[1] = htonl(RD_REG_DWORD(dmp_reg));
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xF0);
WRT_REG_DWORD(dmp_reg, 0xB0200000);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xFC);
- fw->shadow_reg[2] = RD_REG_DWORD(dmp_reg);
+ fw->shadow_reg[2] = htonl(RD_REG_DWORD(dmp_reg));
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xF0);
WRT_REG_DWORD(dmp_reg, 0xB0300000);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xFC);
- fw->shadow_reg[3] = RD_REG_DWORD(dmp_reg);
+ fw->shadow_reg[3] = htonl(RD_REG_DWORD(dmp_reg));
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xF0);
WRT_REG_DWORD(dmp_reg, 0xB0400000);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xFC);
- fw->shadow_reg[4] = RD_REG_DWORD(dmp_reg);
+ fw->shadow_reg[4] = htonl(RD_REG_DWORD(dmp_reg));
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xF0);
WRT_REG_DWORD(dmp_reg, 0xB0500000);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xFC);
- fw->shadow_reg[5] = RD_REG_DWORD(dmp_reg);
+ fw->shadow_reg[5] = htonl(RD_REG_DWORD(dmp_reg));
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xF0);
WRT_REG_DWORD(dmp_reg, 0xB0600000);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xFC);
- fw->shadow_reg[6] = RD_REG_DWORD(dmp_reg);
+ fw->shadow_reg[6] = htonl(RD_REG_DWORD(dmp_reg));
/* Mailbox registers. */
mbx_reg = (uint16_t __iomem *)((uint8_t __iomem *)reg + 0x80);
for (cnt = 0; cnt < sizeof(fw->mailbox_reg) / 2; cnt++)
- fw->mailbox_reg[cnt] = RD_REG_WORD(mbx_reg++);
+ fw->mailbox_reg[cnt] = htons(RD_REG_WORD(mbx_reg++));
/* Transfer sequence registers. */
iter_reg = fw->xseq_gp_reg;
WRT_REG_DWORD(&reg->iobase_addr, 0xBF00);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0xBF10);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0xBF20);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0xBF30);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0xBF40);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0xBF50);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0xBF60);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0xBF70);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0xBFE0);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < sizeof(fw->xseq_0_reg) / 4; cnt++)
- fw->xseq_0_reg[cnt] = RD_REG_DWORD(dmp_reg++);
+ fw->xseq_0_reg[cnt] = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0xBFF0);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < sizeof(fw->xseq_1_reg) / 4; cnt++)
- fw->xseq_1_reg[cnt] = RD_REG_DWORD(dmp_reg++);
+ fw->xseq_1_reg[cnt] = htonl(RD_REG_DWORD(dmp_reg++));
/* Receive sequence registers. */
iter_reg = fw->rseq_gp_reg;
WRT_REG_DWORD(&reg->iobase_addr, 0xFF00);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0xFF10);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0xFF20);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0xFF30);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0xFF40);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0xFF50);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0xFF60);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0xFF70);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0xFFD0);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < sizeof(fw->rseq_0_reg) / 4; cnt++)
- fw->rseq_0_reg[cnt] = RD_REG_DWORD(dmp_reg++);
+ fw->rseq_0_reg[cnt] = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0xFFE0);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < sizeof(fw->rseq_1_reg) / 4; cnt++)
- fw->rseq_1_reg[cnt] = RD_REG_DWORD(dmp_reg++);
+ fw->rseq_1_reg[cnt] = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0xFFF0);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < sizeof(fw->rseq_2_reg) / 4; cnt++)
- fw->rseq_2_reg[cnt] = RD_REG_DWORD(dmp_reg++);
+ fw->rseq_2_reg[cnt] = htonl(RD_REG_DWORD(dmp_reg++));
/* Command DMA registers. */
WRT_REG_DWORD(&reg->iobase_addr, 0x7100);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < sizeof(fw->cmd_dma_reg) / 4; cnt++)
- fw->cmd_dma_reg[cnt] = RD_REG_DWORD(dmp_reg++);
+ fw->cmd_dma_reg[cnt] = htonl(RD_REG_DWORD(dmp_reg++));
/* Queues. */
iter_reg = fw->req0_dma_reg;
WRT_REG_DWORD(&reg->iobase_addr, 0x7200);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 8; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xE4);
for (cnt = 0; cnt < 7; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
iter_reg = fw->resp0_dma_reg;
WRT_REG_DWORD(&reg->iobase_addr, 0x7300);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 8; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xE4);
for (cnt = 0; cnt < 7; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
iter_reg = fw->req1_dma_reg;
WRT_REG_DWORD(&reg->iobase_addr, 0x7400);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 8; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xE4);
for (cnt = 0; cnt < 7; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
/* Transmit DMA registers. */
iter_reg = fw->xmt0_dma_reg;
WRT_REG_DWORD(&reg->iobase_addr, 0x7600);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x7610);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
iter_reg = fw->xmt1_dma_reg;
WRT_REG_DWORD(&reg->iobase_addr, 0x7620);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x7630);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
iter_reg = fw->xmt2_dma_reg;
WRT_REG_DWORD(&reg->iobase_addr, 0x7640);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x7650);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
iter_reg = fw->xmt3_dma_reg;
WRT_REG_DWORD(&reg->iobase_addr, 0x7660);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x7670);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
iter_reg = fw->xmt4_dma_reg;
WRT_REG_DWORD(&reg->iobase_addr, 0x7680);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x7690);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x76A0);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < sizeof(fw->xmt_data_dma_reg) / 4; cnt++)
- fw->xmt_data_dma_reg[cnt] = RD_REG_DWORD(dmp_reg++);
+ fw->xmt_data_dma_reg[cnt] =
+ htonl(RD_REG_DWORD(dmp_reg++));
/* Receive DMA registers. */
iter_reg = fw->rcvt0_data_dma_reg;
WRT_REG_DWORD(&reg->iobase_addr, 0x7700);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x7710);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
iter_reg = fw->rcvt1_data_dma_reg;
WRT_REG_DWORD(&reg->iobase_addr, 0x7720);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x7730);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
/* RISC registers. */
iter_reg = fw->risc_gp_reg;
WRT_REG_DWORD(&reg->iobase_addr, 0x0F00);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x0F10);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x0F20);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x0F30);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x0F40);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x0F50);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x0F60);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x0F70);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
/* Local memory controller registers. */
iter_reg = fw->lmc_reg;
WRT_REG_DWORD(&reg->iobase_addr, 0x3000);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x3010);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x3020);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x3030);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x3040);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x3050);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x3060);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
/* Fibre Protocol Module registers. */
iter_reg = fw->fpm_hdw_reg;
WRT_REG_DWORD(&reg->iobase_addr, 0x4000);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x4010);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x4020);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x4030);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x4040);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x4050);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x4060);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x4070);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x4080);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x4090);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x40A0);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x40B0);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
/* Frame Buffer registers. */
iter_reg = fw->fb_hdw_reg;
WRT_REG_DWORD(&reg->iobase_addr, 0x6000);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x6010);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x6020);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x6030);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x6040);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x6100);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x6130);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x6150);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x6170);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x6190);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
WRT_REG_DWORD(&reg->iobase_addr, 0x61B0);
dmp_reg = (uint32_t __iomem *)((uint8_t __iomem *)reg + 0xC0);
for (cnt = 0; cnt < 16; cnt++)
- *iter_reg++ = RD_REG_DWORD(dmp_reg++);
+ *iter_reg++ = htonl(RD_REG_DWORD(dmp_reg++));
/* Reset RISC. */
WRT_REG_DWORD(&reg->ctrl_status,
if (test_and_clear_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags)) {
rval = mb[0] & MBS_MASK;
- fw->code_ram[cnt] = (mb[3] << 16) | mb[2];
+ fw->code_ram[cnt] = htonl((mb[3] << 16) | mb[2]);
} else {
rval = QLA_FUNCTION_FAILED;
}
if (test_and_clear_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags)) {
rval = mb[0] & MBS_MASK;
- fw->ext_mem[cnt] = (mb[3] << 16) | mb[2];
+ fw->ext_mem[cnt] = htonl((mb[3] << 16) | mb[2]);
} else {
rval = QLA_FUNCTION_FAILED;
}
}
+ if (rval == QLA_SUCCESS) {
+ eft = qla2xxx_copy_queues(ha, &fw->ext_mem[cnt]);
+ if (ha->eft)
+ memcpy(eft, ha->eft, ntohl(ha->fw_dump->eft_size));
+ }
+
if (rval != QLA_SUCCESS) {
qla_printk(KERN_WARNING, ha,
"Failed to dump firmware (%x)!!!\n", rval);
spin_unlock_irqrestore(&ha->hardware_lock, flags);
}
-void
-qla24xx_ascii_fw_dump(scsi_qla_host_t *ha)
-{
- uint32_t cnt;
- char *uiter;
- struct qla24xx_fw_dump *fw;
- uint32_t ext_mem_cnt;
-
- uiter = ha->fw_dump_buffer;
- fw = ha->fw_dump;
-
- qla_uprintf(&uiter, "ISP FW Version %d.%02d.%02d Attributes %04x\n",
- ha->fw_major_version, ha->fw_minor_version,
- ha->fw_subminor_version, ha->fw_attributes);
-
- qla_uprintf(&uiter, "\nR2H Status Register\n%04x\n", fw->host_status);
-
- qla_uprintf(&uiter, "\nHost Interface Registers");
- for (cnt = 0; cnt < sizeof(fw->host_reg) / 4; cnt++) {
- if (cnt % 8 == 0)
- qla_uprintf(&uiter, "\n");
-
- qla_uprintf(&uiter, "%08x ", fw->host_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nShadow Registers");
- for (cnt = 0; cnt < sizeof(fw->shadow_reg) / 4; cnt++) {
- if (cnt % 8 == 0)
- qla_uprintf(&uiter, "\n");
-
- qla_uprintf(&uiter, "%08x ", fw->shadow_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nMailbox Registers");
- for (cnt = 0; cnt < sizeof(fw->mailbox_reg) / 2; cnt++) {
- if (cnt % 8 == 0)
- qla_uprintf(&uiter, "\n");
-
- qla_uprintf(&uiter, "%08x ", fw->mailbox_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nXSEQ GP Registers");
- for (cnt = 0; cnt < sizeof(fw->xseq_gp_reg) / 4; cnt++) {
- if (cnt % 8 == 0)
- qla_uprintf(&uiter, "\n");
-
- qla_uprintf(&uiter, "%08x ", fw->xseq_gp_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nXSEQ-0 Registers");
- for (cnt = 0; cnt < sizeof(fw->xseq_0_reg) / 4; cnt++) {
- if (cnt % 8 == 0)
- qla_uprintf(&uiter, "\n");
-
- qla_uprintf(&uiter, "%08x ", fw->xseq_0_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nXSEQ-1 Registers");
- for (cnt = 0; cnt < sizeof(fw->xseq_1_reg) / 4; cnt++) {
- if (cnt % 8 == 0)
- qla_uprintf(&uiter, "\n");
-
- qla_uprintf(&uiter, "%08x ", fw->xseq_1_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRSEQ GP Registers");
- for (cnt = 0; cnt < sizeof(fw->rseq_gp_reg) / 4; cnt++) {
- if (cnt % 8 == 0)
- qla_uprintf(&uiter, "\n");
-
- qla_uprintf(&uiter, "%08x ", fw->rseq_gp_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRSEQ-0 Registers");
- for (cnt = 0; cnt < sizeof(fw->rseq_0_reg) / 4; cnt++) {
- if (cnt % 8 == 0)
- qla_uprintf(&uiter, "\n");
-
- qla_uprintf(&uiter, "%08x ", fw->rseq_0_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRSEQ-1 Registers");
- for (cnt = 0; cnt < sizeof(fw->rseq_1_reg) / 4; cnt++) {
- if (cnt % 8 == 0)
- qla_uprintf(&uiter, "\n");
-
- qla_uprintf(&uiter, "%08x ", fw->rseq_1_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRSEQ-2 Registers");
- for (cnt = 0; cnt < sizeof(fw->rseq_2_reg) / 4; cnt++) {
- if (cnt % 8 == 0)
- qla_uprintf(&uiter, "\n");
-
- qla_uprintf(&uiter, "%08x ", fw->rseq_2_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nCommand DMA Registers");
- for (cnt = 0; cnt < sizeof(fw->cmd_dma_reg) / 4; cnt++) {
- if (cnt % 8 == 0)
- qla_uprintf(&uiter, "\n");
-
- qla_uprintf(&uiter, "%08x ", fw->cmd_dma_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRequest0 Queue DMA Channel Registers");
- for (cnt = 0; cnt < sizeof(fw->req0_dma_reg) / 4; cnt++) {
- if (cnt % 8 == 0)
- qla_uprintf(&uiter, "\n");
-
- qla_uprintf(&uiter, "%08x ", fw->req0_dma_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nResponse0 Queue DMA Channel Registers");
- for (cnt = 0; cnt < sizeof(fw->resp0_dma_reg) / 4; cnt++) {
- if (cnt % 8 == 0)
- qla_uprintf(&uiter, "\n");
-
- qla_uprintf(&uiter, "%08x ", fw->resp0_dma_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRequest1 Queue DMA Channel Registers");
- for (cnt = 0; cnt < sizeof(fw->req1_dma_reg) / 4; cnt++) {
- if (cnt % 8 == 0)
- qla_uprintf(&uiter, "\n");
-
- qla_uprintf(&uiter, "%08x ", fw->req1_dma_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nXMT0 Data DMA Registers");
- for (cnt = 0; cnt < sizeof(fw->xmt0_dma_reg) / 4; cnt++) {
- if (cnt % 8 == 0)
- qla_uprintf(&uiter, "\n");
-
- qla_uprintf(&uiter, "%08x ", fw->xmt0_dma_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nXMT1 Data DMA Registers");
- for (cnt = 0; cnt < sizeof(fw->xmt1_dma_reg) / 4; cnt++) {
- if (cnt % 8 == 0)
- qla_uprintf(&uiter, "\n");
-
- qla_uprintf(&uiter, "%08x ", fw->xmt1_dma_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nXMT2 Data DMA Registers");
- for (cnt = 0; cnt < sizeof(fw->xmt2_dma_reg) / 4; cnt++) {
- if (cnt % 8 == 0)
- qla_uprintf(&uiter, "\n");
-
- qla_uprintf(&uiter, "%08x ", fw->xmt2_dma_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nXMT3 Data DMA Registers");
- for (cnt = 0; cnt < sizeof(fw->xmt3_dma_reg) / 4; cnt++) {
- if (cnt % 8 == 0)
- qla_uprintf(&uiter, "\n");
-
- qla_uprintf(&uiter, "%08x ", fw->xmt3_dma_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nXMT4 Data DMA Registers");
- for (cnt = 0; cnt < sizeof(fw->xmt4_dma_reg) / 4; cnt++) {
- if (cnt % 8 == 0)
- qla_uprintf(&uiter, "\n");
-
- qla_uprintf(&uiter, "%08x ", fw->xmt4_dma_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nXMT Data DMA Common Registers");
- for (cnt = 0; cnt < sizeof(fw->xmt_data_dma_reg) / 4; cnt++) {
- if (cnt % 8 == 0)
- qla_uprintf(&uiter, "\n");
-
- qla_uprintf(&uiter, "%08x ", fw->xmt_data_dma_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRCV Thread 0 Data DMA Registers");
- for (cnt = 0; cnt < sizeof(fw->rcvt0_data_dma_reg) / 4; cnt++) {
- if (cnt % 8 == 0)
- qla_uprintf(&uiter, "\n");
-
- qla_uprintf(&uiter, "%08x ", fw->rcvt0_data_dma_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRCV Thread 1 Data DMA Registers");
- for (cnt = 0; cnt < sizeof(fw->rcvt1_data_dma_reg) / 4; cnt++) {
- if (cnt % 8 == 0)
- qla_uprintf(&uiter, "\n");
-
- qla_uprintf(&uiter, "%08x ", fw->rcvt1_data_dma_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nRISC GP Registers");
- for (cnt = 0; cnt < sizeof(fw->risc_gp_reg) / 4; cnt++) {
- if (cnt % 8 == 0)
- qla_uprintf(&uiter, "\n");
-
- qla_uprintf(&uiter, "%08x ", fw->risc_gp_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nLMC Registers");
- for (cnt = 0; cnt < sizeof(fw->lmc_reg) / 4; cnt++) {
- if (cnt % 8 == 0)
- qla_uprintf(&uiter, "\n");
-
- qla_uprintf(&uiter, "%08x ", fw->lmc_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nFPM Hardware Registers");
- for (cnt = 0; cnt < sizeof(fw->fpm_hdw_reg) / 4; cnt++) {
- if (cnt % 8 == 0)
- qla_uprintf(&uiter, "\n");
-
- qla_uprintf(&uiter, "%08x ", fw->fpm_hdw_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nFB Hardware Registers");
- for (cnt = 0; cnt < sizeof(fw->fb_hdw_reg) / 4; cnt++) {
- if (cnt % 8 == 0)
- qla_uprintf(&uiter, "\n");
-
- qla_uprintf(&uiter, "%08x ", fw->fb_hdw_reg[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nCode RAM");
- for (cnt = 0; cnt < sizeof (fw->code_ram) / 4; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n%08x: ", cnt + 0x20000);
- }
- qla_uprintf(&uiter, "%08x ", fw->code_ram[cnt]);
- }
-
- qla_uprintf(&uiter, "\n\nExternal Memory");
- ext_mem_cnt = ha->fw_memory_size - 0x100000 + 1;
- for (cnt = 0; cnt < ext_mem_cnt; cnt++) {
- if (cnt % 8 == 0) {
- qla_uprintf(&uiter, "\n%08x: ", cnt + 0x100000);
- }
- qla_uprintf(&uiter, "%08x ", fw->ext_mem[cnt]);
- }
-
- qla_uprintf(&uiter, "\n[<==END] ISP Debug Dump");
-}
-
-
/****************************************************************************/
/* Driver Debug Functions. */
/****************************************************************************/
/*
 * Macros used for debugging the driver.
*/
-#undef ENTER_TRACE
-#if defined(ENTER_TRACE)
-#define ENTER(x) do { printk("qla2100 : Entering %s()\n", x); } while (0)
-#define LEAVE(x) do { printk("qla2100 : Leaving %s()\n", x); } while (0)
-#define ENTER_INTR(x) do { printk("qla2100 : Entering %s()\n", x); } while (0)
-#define LEAVE_INTR(x) do { printk("qla2100 : Leaving %s()\n", x); } while (0)
-#else
-#define ENTER(x) do {} while (0)
-#define LEAVE(x) do {} while (0)
-#define ENTER_INTR(x) do {} while (0)
-#define LEAVE_INTR(x) do {} while (0)
-#endif
-#if DEBUG_QLA2100
-#define DEBUG(x) do {x;} while (0);
-#else
-#define DEBUG(x) do {} while (0);
-#endif
+#define DEBUG(x) do { if (extended_error_logging) { x; } } while (0)
#if defined(QL_DEBUG_LEVEL_1)
-#define DEBUG1(x) do {x;} while (0);
+#define DEBUG1(x) do {x;} while (0)
#else
-#define DEBUG1(x) do {} while (0);
+#define DEBUG1(x) do {} while (0)
#endif
-#if defined(QL_DEBUG_LEVEL_2)
-#define DEBUG2(x) do {x;} while (0);
-#define DEBUG2_3(x) do {x;} while (0);
-#define DEBUG2_3_11(x) do {x;} while (0);
-#define DEBUG2_9_10(x) do {x;} while (0);
-#define DEBUG2_11(x) do {x;} while (0);
-#define DEBUG2_13(x) do {x;} while (0);
-#else
-#define DEBUG2(x) do {} while (0);
-#endif
+#define DEBUG2(x) do { if (extended_error_logging) { x; } } while (0)
+#define DEBUG2_3(x) do { if (extended_error_logging) { x; } } while (0)
+#define DEBUG2_3_11(x) do { if (extended_error_logging) { x; } } while (0)
+#define DEBUG2_9_10(x) do { if (extended_error_logging) { x; } } while (0)
+#define DEBUG2_11(x) do { if (extended_error_logging) { x; } } while (0)
+#define DEBUG2_13(x) do { if (extended_error_logging) { x; } } while (0)
#if defined(QL_DEBUG_LEVEL_3)
-#define DEBUG3(x) do {x;} while (0);
-#define DEBUG2_3(x) do {x;} while (0);
-#define DEBUG2_3_11(x) do {x;} while (0);
-#define DEBUG3_11(x) do {x;} while (0);
+#define DEBUG3(x) do {x;} while (0)
+#define DEBUG3_11(x) do {x;} while (0)
#else
-#define DEBUG3(x) do {} while (0);
- #if !defined(QL_DEBUG_LEVEL_2)
- #define DEBUG2_3(x) do {} while (0);
- #endif
+#define DEBUG3(x) do {} while (0)
#endif
#if defined(QL_DEBUG_LEVEL_4)
-#define DEBUG4(x) do {x;} while (0);
+#define DEBUG4(x) do {x;} while (0)
#else
-#define DEBUG4(x) do {} while (0);
+#define DEBUG4(x) do {} while (0)
#endif
#if defined(QL_DEBUG_LEVEL_5)
-#define DEBUG5(x) do {x;} while (0);
+#define DEBUG5(x) do {x;} while (0)
#else
-#define DEBUG5(x) do {} while (0);
+#define DEBUG5(x) do {} while (0)
#endif
#if defined(QL_DEBUG_LEVEL_7)
-#define DEBUG7(x) do {x;} while (0);
+#define DEBUG7(x) do {x;} while (0)
#else
-#define DEBUG7(x) do {} while (0);
+#define DEBUG7(x) do {} while (0)
#endif
#if defined(QL_DEBUG_LEVEL_9)
-#define DEBUG9(x) do {x;} while (0);
-#define DEBUG9_10(x) do {x;} while (0);
-#define DEBUG2_9_10(x) do {x;} while (0);
+#define DEBUG9(x) do {x;} while (0)
+#define DEBUG9_10(x) do {x;} while (0)
#else
-#define DEBUG9(x) do {} while (0);
+#define DEBUG9(x) do {} while (0)
#endif
#if defined(QL_DEBUG_LEVEL_10)
-#define DEBUG10(x) do {x;} while (0);
-#define DEBUG2_9_10(x) do {x;} while (0);
-#define DEBUG9_10(x) do {x;} while (0);
+#define DEBUG10(x) do {x;} while (0)
+#define DEBUG9_10(x) do {x;} while (0)
#else
-#define DEBUG10(x) do {} while (0);
- #if !defined(DEBUG2_9_10)
- #define DEBUG2_9_10(x) do {} while (0);
- #endif
+#define DEBUG10(x) do {} while (0)
#if !defined(DEBUG9_10)
- #define DEBUG9_10(x) do {} while (0);
+ #define DEBUG9_10(x) do {} while (0)
#endif
#endif
#if defined(QL_DEBUG_LEVEL_11)
-#define DEBUG11(x) do{x;} while(0);
-#if !defined(DEBUG2_11)
-#define DEBUG2_11(x) do{x;} while(0);
-#endif
-#if !defined(DEBUG2_3_11)
-#define DEBUG2_3_11(x) do{x;} while(0);
-#endif
+#define DEBUG11(x) do{x;} while(0)
#if !defined(DEBUG3_11)
-#define DEBUG3_11(x) do{x;} while(0);
+#define DEBUG3_11(x) do{x;} while(0)
#endif
#else
-#define DEBUG11(x) do{} while(0);
- #if !defined(QL_DEBUG_LEVEL_2)
- #define DEBUG2_11(x) do{} while(0);
- #if !defined(QL_DEBUG_LEVEL_3)
- #define DEBUG2_3_11(x) do{} while(0);
- #endif
- #endif
+#define DEBUG11(x) do{} while(0)
#if !defined(QL_DEBUG_LEVEL_3)
- #define DEBUG3_11(x) do{} while(0);
+ #define DEBUG3_11(x) do{} while(0)
#endif
#endif
#if defined(QL_DEBUG_LEVEL_12)
-#define DEBUG12(x) do {x;} while (0);
+#define DEBUG12(x) do {x;} while (0)
#else
-#define DEBUG12(x) do {} while (0);
+#define DEBUG12(x) do {} while (0)
#endif
#if defined(QL_DEBUG_LEVEL_13)
#define DEBUG13(x) do {x;} while (0)
-#if !defined(DEBUG2_13)
-#define DEBUG2_13(x) do {x;} while(0)
-#endif
#else
#define DEBUG13(x) do {} while (0)
-#if !defined(QL_DEBUG_LEVEL_2)
-#define DEBUG2_13(x) do {} while(0)
-#endif
#endif
#if defined(QL_DEBUG_LEVEL_14)
/*
* Firmware Dump structure definition
*/
-#define FW_DUMP_SIZE_128K 0xBC000
-#define FW_DUMP_SIZE_512K 0x2FC000
-#define FW_DUMP_SIZE_1M 0x5FC000
struct qla2300_fw_dump {
uint16_t hccr;
uint16_t risc_ram[0xf000];
};
-#define FW_DUMP_SIZE_24XX 0x2B0000
-
struct qla24xx_fw_dump {
uint32_t host_status;
uint32_t host_reg[32];
uint32_t code_ram[0x2000];
uint32_t ext_mem[1];
};
+
+#define EFT_NUM_BUFFERS 4
+#define EFT_BYTES_PER_BUFFER 0x4000
+#define EFT_SIZE ((EFT_BYTES_PER_BUFFER) * (EFT_NUM_BUFFERS))
+
+struct qla2xxx_fw_dump {
+ uint8_t signature[4];
+ uint32_t version;
+
+ uint32_t fw_major_version;
+ uint32_t fw_minor_version;
+ uint32_t fw_subminor_version;
+ uint32_t fw_attributes;
+
+ uint32_t vendor;
+ uint32_t device;
+ uint32_t subsystem_vendor;
+ uint32_t subsystem_device;
+
+ uint32_t fixed_size;
+ uint32_t mem_size;
+ uint32_t req_q_size;
+ uint32_t rsp_q_size;
+
+ uint32_t eft_size;
+ uint32_t eft_addr_l;
+ uint32_t eft_addr_h;
+
+ uint32_t header_size;
+
+ union {
+ struct qla2100_fw_dump isp21;
+ struct qla2300_fw_dump isp23;
+ struct qla24xx_fw_dump isp24;
+ } isp;
+};
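A minimal sketch, not part of the patch (the helper name is hypothetical): the header fields above are written big-endian via htonl() in qla2x00_alloc_fw_dump() further down, so a consumer of the dump has to byte-swap them to recover the sizes, e.g.:

/* Hypothetical helper -- recovers the total dump length from the
 * big-endian header written by qla2x00_alloc_fw_dump(). */
static inline uint32_t
qla2xxx_fw_dump_total_len(const struct qla2xxx_fw_dump *d)
{
	return ntohl(d->header_size) + ntohl(d->fixed_size) +
	    ntohl(d->mem_size) + ntohl(d->req_q_size) +
	    ntohl(d->rsp_q_size) + ntohl(d->eft_size);
}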
#define MBC_SERDES_PARAMS 0x10 /* Serdes Tx Parameters. */
#define MBC_GET_IOCB_STATUS 0x12 /* Get IOCB status command. */
#define MBC_GET_TIMEOUT_PARAMS 0x22 /* Get FW timeouts. */
+#define MBC_TRACE_CONTROL 0x27 /* Trace control command. */
#define MBC_GEN_SYSTEM_ERROR 0x2a /* Generate System Error. */
+#define MBC_READ_SFP 0x31 /* Read SFP Data. */
#define MBC_SET_TIMEOUT_PARAMS 0x32 /* Set FW timeouts. */
#define MBC_MID_INITIALIZE_FIRMWARE 0x48 /* MID Initialize firmware. */
#define MBC_MID_GET_VP_DATABASE 0x49 /* MID Get VP Database. */
#define MBC_GET_LINK_PRIV_STATS 0x6d /* Get link & private data. */
#define MBC_SET_VENDOR_ID 0x76 /* Set Vendor ID. */
+#define TC_ENABLE 4
+#define TC_DISABLE 5
+
/* Firmware return data sizes */
#define FCAL_MAP_SIZE 128
uint32_t);
void (*fw_dump) (struct scsi_qla_host *, int);
- void (*ascii_fw_dump) (struct scsi_qla_host *);
int (*beacon_on) (struct scsi_qla_host *);
int (*beacon_off) (struct scsi_qla_host *);
uint32_t enable_led_scheme :1;
uint32_t msi_enabled :1;
uint32_t msix_enabled :1;
+ uint32_t disable_serdes :1;
} flags;
atomic_t loop_state;
struct sns_cmd_pkt *sns_cmd;
dma_addr_t sns_cmd_dma;
+#define SFP_DEV_SIZE 256
+#define SFP_BLOCK_SIZE 64
+ void *sfp_data;
+ dma_addr_t sfp_data_dma;
+
struct task_struct *dpc_thread;
uint8_t dpc_active; /* DPC routine is active */
uint16_t fw_seriallink_options24[4];
/* Firmware dump information. */
- void *fw_dump;
+ struct qla2xxx_fw_dump *fw_dump;
+ uint32_t fw_dump_len;
int fw_dumped;
int fw_dump_reading;
- char *fw_dump_buffer;
- int fw_dump_buffer_len;
+ dma_addr_t eft_dma;
+ void *eft;
uint8_t host_str[16];
uint32_t pci_attr;
-#define QLA_MODEL_NAMES 0x4A
+#define QLA_MODEL_NAMES 0x57
/*
* Adapter model names and descriptions.
"QLE2440", "PCI-Express to 4Gb FC, Single Channel", /* 0x145 */
"QLE2464", "PCI-Express to 4Gb FC, Quad Channel", /* 0x146 */
"QLA2440", "PCI-X 2.0 to 4Gb FC, Single Channel", /* 0x147 */
- " ", " ", /* 0x148 */
+ "HP AE369A", "PCI-X 2.0 to 4Gb FC, Dual Channel", /* 0x148 */
"QLA2340", "Sun 133MHz PCI-X to 2Gb FC, Single Channel", /* 0x149 */
+ " ", " ", /* 0x14a */
+ " ", " ", /* 0x14b */
+ "QMC2432M", "IBM eServer BC 4Gb FC Expansion Card CFFE", /* 0x14c */
+ "QMC2422M", "IBM eServer BC 4Gb FC Expansion Card CFFX", /* 0x14d */
+ "QLE220", "Sun PCI-Express to 4Gb FC, Single Channel", /* 0x14e */
+ " ", " ", /* 0x14f */
+ " ", " ", /* 0x150 */
+ " ", " ", /* 0x151 */
+ "QME2462", "PCI-Express to 4Gb FC, Dual Channel Mezz HBA", /* 0x152 */
+ "QMH2462", "PCI-Express to 4Gb FC, Dual Channel Mezz HBA", /* 0x153 */
+ " ", " ", /* 0x154 */
+ "QLE220", "PCI-Express to 4Gb FC, Single Channel", /* 0x155 */
+ "QLE220", "PCI-Express to 4Gb FC, Single Channel", /* 0x156 */
};
* BIT 2 = Enable Memory Map BIOS
* BIT 3 = Enable Selectable Boot
* BIT 4 = Disable RISC code load
- * BIT 5 =
+ * BIT 5 = Disable Serdes
* BIT 6 =
* BIT 7 =
*
uint16_t response_q_length;
uint16_t request_q_length;
- uint16_t link_down_timeout; /* Milliseconds. */
+ uint16_t link_down_on_nos; /* Milliseconds. */
uint16_t prio_request_q_length;
extern void qla24xx_update_fw_options(scsi_qla_host_t *);
extern int qla2x00_load_risc(struct scsi_qla_host *, uint32_t *);
extern int qla24xx_load_risc(scsi_qla_host_t *, uint32_t *);
-extern int qla24xx_load_risc_flash(scsi_qla_host_t *, uint32_t *);
-
-extern fc_port_t *qla2x00_alloc_fcport(scsi_qla_host_t *, gfp_t);
extern int qla2x00_loop_resync(scsi_qla_host_t *);
-extern int qla2x00_find_new_loop_id(scsi_qla_host_t *, fc_port_t *);
extern int qla2x00_fabric_login(scsi_qla_host_t *, fc_port_t *, uint16_t *);
extern int qla2x00_local_device_login(scsi_qla_host_t *, fc_port_t *);
extern void qla2x00_update_fcport(scsi_qla_host_t *, fc_port_t *);
extern void qla2x00_reg_remote_port(scsi_qla_host_t *, fc_port_t *);
+extern void qla2x00_alloc_fw_dump(scsi_qla_host_t *);
+
/*
* Global Data in qla_os.c source file.
*/
extern int ql2xplogiabsentdevice;
extern int ql2xloginretrycount;
extern int ql2xfdmienable;
+extern int ql2xallocfwdump;
+extern int extended_error_logging;
extern void qla2x00_sp_compl(scsi_qla_host_t *, srb_t *);
/*
* Global Function Prototypes in qla_iocb.c source file.
*/
-extern void qla2x00_isp_cmd(scsi_qla_host_t *);
-
extern uint16_t qla2x00_calc_iocbs_32(uint16_t);
extern uint16_t qla2x00_calc_iocbs_64(uint16_t);
extern void qla2x00_build_scsi_iocbs_32(srb_t *, cmd_entry_t *, uint16_t);
extern int
qla2x00_stop_firmware(scsi_qla_host_t *);
+extern int
+qla2x00_trace_control(scsi_qla_host_t *, uint16_t, dma_addr_t, uint16_t);
+
+extern int
+qla2x00_read_sfp(scsi_qla_host_t *, dma_addr_t, uint16_t, uint16_t, uint16_t);
+
/*
* Global Function Prototypes in qla_isr.c source file.
*/
extern void qla2100_fw_dump(scsi_qla_host_t *, int);
extern void qla2300_fw_dump(scsi_qla_host_t *, int);
extern void qla24xx_fw_dump(scsi_qla_host_t *, int);
-extern void qla2100_ascii_fw_dump(scsi_qla_host_t *);
-extern void qla2300_ascii_fw_dump(scsi_qla_host_t *);
-extern void qla24xx_ascii_fw_dump(scsi_qla_host_t *);
extern void qla2x00_dump_regs(scsi_qla_host_t *);
extern void qla2x00_dump_buffer(uint8_t *, uint32_t);
extern void qla2x00_print_scsi_cmd(struct scsi_cmnd *);
extern void *qla24xx_prep_ms_fdmi_iocb(scsi_qla_host_t *, uint32_t, uint32_t);
extern int qla2x00_fdmi_register(scsi_qla_host_t *);
-/*
- * Global Function Prototypes in qla_xioctl.c source file.
- */
-#define qla2x00_enqueue_aen(ha, cmd, mode) do { } while (0)
-#define qla2x00_alloc_ioctl_mem(ha) (0)
-#define qla2x00_free_ioctl_mem(ha) do { } while (0)
-
/*
* Global Function Prototypes in qla_attr.c source file.
*/
static int qla2x00_restart_isp(scsi_qla_host_t *);
+static int qla2x00_find_new_loop_id(scsi_qla_host_t *ha, fc_port_t *dev);
+
/****************************************************************************/
/* QLogic ISP2x00 Hardware Support Functions. */
/****************************************************************************/
ha->isp_ops.nvram_config(ha);
+ if (ha->flags.disable_serdes) {
+ /* Mask HBA via NVRAM settings? */
+ qla_printk(KERN_INFO, ha, "Masking HBA WWPN "
+ "%02x%02x%02x%02x%02x%02x%02x%02x (via NVRAM).\n",
+ ha->port_name[0], ha->port_name[1],
+ ha->port_name[2], ha->port_name[3],
+ ha->port_name[4], ha->port_name[5],
+ ha->port_name[6], ha->port_name[7]);
+ return QLA_FUNCTION_FAILED;
+ }
+
qla_printk(KERN_INFO, ha, "Verifying loaded RISC code...\n");
retry = 10;
return rval;
}
-static void
+void
qla2x00_alloc_fw_dump(scsi_qla_host_t *ha)
{
- uint32_t dump_size = 0;
+ int rval;
+ uint32_t dump_size, fixed_size, mem_size, req_q_size, rsp_q_size,
+ eft_size;
+ dma_addr_t eft_dma;
+ void *eft;
+
+ if (ha->fw_dump) {
+ qla_printk(KERN_WARNING, ha,
+ "Firmware dump previously allocated.\n");
+ return;
+ }
ha->fw_dumped = 0;
+ fixed_size = mem_size = eft_size = 0;
if (IS_QLA2100(ha) || IS_QLA2200(ha)) {
- dump_size = sizeof(struct qla2100_fw_dump);
+ fixed_size = sizeof(struct qla2100_fw_dump);
} else if (IS_QLA23XX(ha)) {
- dump_size = sizeof(struct qla2300_fw_dump);
- dump_size += (ha->fw_memory_size - 0x11000) * sizeof(uint16_t);
- } else if (IS_QLA24XX(ha) || IS_QLA54XX(ha)) {
- dump_size = sizeof(struct qla24xx_fw_dump);
- dump_size += (ha->fw_memory_size - 0x100000) * sizeof(uint32_t);
+ fixed_size = offsetof(struct qla2300_fw_dump, data_ram);
+ mem_size = (ha->fw_memory_size - 0x11000 + 1) *
+ sizeof(uint16_t);
+ } else if (IS_QLA24XX(ha) || IS_QLA54XX(ha)) {
+ fixed_size = offsetof(struct qla24xx_fw_dump, ext_mem);
+ mem_size = (ha->fw_memory_size - 0x100000 + 1) *
+ sizeof(uint32_t);
+
+ /* Allocate memory for Extended Trace Buffer. */
+ eft = dma_alloc_coherent(&ha->pdev->dev, EFT_SIZE, &eft_dma,
+ GFP_KERNEL);
+ if (!eft) {
+ qla_printk(KERN_WARNING, ha, "Unable to allocate "
+ "(%d KB) for EFT.\n", EFT_SIZE / 1024);
+ goto cont_alloc;
+ }
+
+ rval = qla2x00_trace_control(ha, TC_ENABLE, eft_dma,
+ EFT_NUM_BUFFERS);
+ if (rval) {
+ qla_printk(KERN_WARNING, ha, "Unable to initialize "
+ "EFT (%d).\n", rval);
+ dma_free_coherent(&ha->pdev->dev, EFT_SIZE, eft,
+ eft_dma);
+ goto cont_alloc;
+ }
+
+ qla_printk(KERN_INFO, ha, "Allocated (%d KB) for EFT...\n",
+ EFT_SIZE / 1024);
+
+ eft_size = EFT_SIZE;
+ memset(eft, 0, eft_size);
+ ha->eft_dma = eft_dma;
+ ha->eft = eft;
}
+cont_alloc:
+ req_q_size = ha->request_q_length * sizeof(request_t);
+ rsp_q_size = ha->response_q_length * sizeof(response_t);
+
+ dump_size = offsetof(struct qla2xxx_fw_dump, isp);
+ dump_size += fixed_size + mem_size + req_q_size + rsp_q_size +
+ eft_size;
ha->fw_dump = vmalloc(dump_size);
- if (ha->fw_dump)
- qla_printk(KERN_INFO, ha, "Allocated (%d KB) for firmware "
- "dump...\n", dump_size / 1024);
- else
+ if (!ha->fw_dump) {
qla_printk(KERN_WARNING, ha, "Unable to allocate (%d KB) for "
"firmware dump!!!\n", dump_size / 1024);
+
+ if (ha->eft) {
+ dma_free_coherent(&ha->pdev->dev, eft_size, ha->eft,
+ ha->eft_dma);
+ ha->eft = NULL;
+ ha->eft_dma = 0;
+ }
+ return;
+ }
+
+ qla_printk(KERN_INFO, ha, "Allocated (%d KB) for firmware dump...\n",
+ dump_size / 1024);
+
+ ha->fw_dump_len = dump_size;
+ ha->fw_dump->signature[0] = 'Q';
+ ha->fw_dump->signature[1] = 'L';
+ ha->fw_dump->signature[2] = 'G';
+ ha->fw_dump->signature[3] = 'C';
+ ha->fw_dump->version = __constant_htonl(1);
+
+ ha->fw_dump->fixed_size = htonl(fixed_size);
+ ha->fw_dump->mem_size = htonl(mem_size);
+ ha->fw_dump->req_q_size = htonl(req_q_size);
+ ha->fw_dump->rsp_q_size = htonl(rsp_q_size);
+
+ ha->fw_dump->eft_size = htonl(eft_size);
+ ha->fw_dump->eft_addr_l = htonl(LSD(ha->eft_dma));
+ ha->fw_dump->eft_addr_h = htonl(MSD(ha->eft_dma));
+
+ ha->fw_dump->header_size =
+ htonl(offsetof(struct qla2xxx_fw_dump, isp));
}
/**
dma_addr_t request_dma;
request_t *request_ring;
- qla2x00_alloc_fw_dump(ha);
-
/* Valid only on recent ISPs. */
if (IS_QLA2100(ha) || IS_QLA2200(ha))
return;
&ha->fw_subminor_version,
&ha->fw_attributes, &ha->fw_memory_size);
qla2x00_resize_request_q(ha);
+
+ if (ql2xallocfwdump)
+ qla2x00_alloc_fw_dump(ha);
}
} else {
DEBUG2(printk(KERN_INFO
rval = QLA_FUNCTION_FAILED;
if (atomic_read(&ha->loop_down_timer) &&
- (fw_state >= FSTATE_LOSS_OF_SYNC ||
- fw_state == FSTATE_WAIT_AL_PA)) {
+ fw_state != FSTATE_READY) {
/* Loop down. Timeout on min_wait for states
* other than Wait for Login.
*/
/*
* Set host adapter parameters.
*/
+ if (nv->host_p[0] & BIT_7)
+ extended_error_logging = 1;
ha->flags.disable_risc_code_load = ((nv->host_p[0] & BIT_4) ? 1 : 0);
/* Always load RISC code on non ISP2[12]00 chips. */
if (!IS_QLA2100(ha) && !IS_QLA2200(ha))
ha->flags.enable_lip_full_login = ((nv->host_p[1] & BIT_2) ? 1 : 0);
ha->flags.enable_target_reset = ((nv->host_p[1] & BIT_3) ? 1 : 0);
ha->flags.enable_led_scheme = (nv->special_options[1] & BIT_4) ? 1 : 0;
+ ha->flags.disable_serdes = 0;
ha->operating_mode =
(icb->add_firmware_options[0] & (BIT_6 | BIT_5 | BIT_4)) >> 4;
*
* Returns a pointer to the allocated fcport, or NULL, if none available.
*/
-fc_port_t *
+static fc_port_t *
qla2x00_alloc_fcport(scsi_qla_host_t *ha, gfp_t flags)
{
fc_port_t *fcport;
* Context:
* Kernel context.
*/
-int
+static int
qla2x00_find_new_loop_id(scsi_qla_host_t *ha, fc_port_t *dev)
{
int rval;
ha->isp_abort_cnt--;
DEBUG(printk("qla%ld: ISP abort - "
"retry remaining %d\n",
- ha->host_no, ha->isp_abort_cnt);)
+ ha->host_no, ha->isp_abort_cnt));
status = 1;
}
} else {
ha->isp_abort_cnt = MAX_RETRIES_OF_ISP_ABORT;
DEBUG(printk("qla2x00(%ld): ISP error recovery "
"- retrying (%d) more times\n",
- ha->host_no, ha->isp_abort_cnt);)
+ ha->host_no, ha->isp_abort_cnt));
set_bit(ISP_ABORT_RETRY, &ha->dpc_flags);
status = 1;
}
} else {
DEBUG(printk(KERN_INFO
"qla2x00_abort_isp(%ld): exiting.\n",
- ha->host_no);)
+ ha->host_no));
}
return(status);
clear_bit(RESET_MARKER_NEEDED, &ha->dpc_flags);
if (!(status = qla2x00_fw_ready(ha))) {
DEBUG(printk("%s(): Start configure loop, "
- "status = %d\n", __func__, status);)
+ "status = %d\n", __func__, status));
/* Issue a marker after FW becomes ready. */
qla2x00_marker(ha, 0, 0, MK_SYNC_ALL);
DEBUG(printk("%s(): Configure loop done, status = 0x%x\n",
__func__,
- status);)
+ status));
}
return (status);
}
nv->node_name[6] = 0x55;
nv->node_name[7] = 0x86;
nv->login_retry_count = __constant_cpu_to_le16(8);
- nv->link_down_timeout = __constant_cpu_to_le16(200);
nv->interrupt_delay_timer = __constant_cpu_to_le16(0);
nv->login_timeout = __constant_cpu_to_le16(0);
nv->firmware_options_1 =
*dptr1++ = *dptr2++;
icb->login_retry_count = nv->login_retry_count;
- icb->link_down_timeout = nv->link_down_timeout;
+ icb->link_down_on_nos = nv->link_down_on_nos;
/* Copy 2nd segment. */
dptr1 = (uint8_t *)&icb->interrupt_delay_timer;
ha->flags.enable_lip_full_login = 1;
ha->flags.enable_target_reset = 1;
ha->flags.enable_led_scheme = 0;
+ ha->flags.disable_serdes = le32_to_cpu(nv->host_p) & BIT_5 ? 1: 0;
ha->operating_mode = (le32_to_cpu(icb->firmware_options_2) &
(BIT_6 | BIT_5 | BIT_4)) >> 4;
return (rval);
}
-int
+static int
qla24xx_load_risc_flash(scsi_qla_host_t *ha, uint32_t *srisc_addr)
{
int rval;
static inline cont_entry_t *qla2x00_prep_cont_type0_iocb(scsi_qla_host_t *);
static inline cont_a64_entry_t *qla2x00_prep_cont_type1_iocb(scsi_qla_host_t *);
static request_t *qla2x00_req_pkt(scsi_qla_host_t *ha);
+static void qla2x00_isp_cmd(scsi_qla_host_t *ha);
/**
* qla2x00_get_cmd_direction() - Determine control_flag data direction.
*
* Note: The caller must hold the hardware lock before calling this routine.
*/
-void
+static void
qla2x00_isp_cmd(scsi_qla_host_t *ha)
{
device_reg_t __iomem *reg = ha->iobase;
set_bit(REGISTER_FC4_NEEDED, &ha->dpc_flags);
ha->flags.management_server_logged_in = 0;
-
- /* Update AEN queue. */
- qla2x00_enqueue_aen(ha, MBA_LIP_OCCURRED, NULL);
-
break;
case MBA_LOOP_UP: /* Loop Up Event */
link_speed);
ha->flags.management_server_logged_in = 0;
-
- /* Update AEN queue. */
- qla2x00_enqueue_aen(ha, MBA_LOOP_UP, NULL);
break;
case MBA_LOOP_DOWN: /* Loop Down Event */
ha->link_data_rate = LDR_UNKNOWN;
if (ql2xfdmienable)
set_bit(REGISTER_FDMI_NEEDED, &ha->dpc_flags);
-
- /* Update AEN queue. */
- qla2x00_enqueue_aen(ha, MBA_LOOP_DOWN, NULL);
break;
case MBA_LIP_RESET: /* LIP reset occurred */
ha->operating_mode = LOOP;
ha->flags.management_server_logged_in = 0;
-
- /* Update AEN queue. */
- qla2x00_enqueue_aen(ha, MBA_LIP_RESET, NULL);
-
break;
case MBA_POINT_TO_POINT: /* Point-to-Point */
set_bit(LOOP_RESYNC_NEEDED, &ha->dpc_flags);
set_bit(LOCAL_LOOP_UPDATE, &ha->dpc_flags);
-
- /* Update AEN queue. */
- qla2x00_enqueue_aen(ha, MBA_PORT_UPDATE, NULL);
break;
case MBA_RSCN_UPDATE: /* State Change Registration */
set_bit(LOOP_RESYNC_NEEDED, &ha->dpc_flags);
set_bit(RSCN_UPDATE, &ha->dpc_flags);
-
- /* Update AEN queue. */
- qla2x00_enqueue_aen(ha, MBA_RSCN_UPDATE, &mb[0]);
break;
/* case MBA_RIO_RESPONSE: */
DEBUG3(printk("%s(%ld): pkt=%p pkthandle=%d.\n",
__func__, ha->host_no, pkt, pkt->handle));
- DEBUG9(printk("%s: ct pkt dump:\n", __func__);)
- DEBUG9(qla2x00_dump_buffer((void *)pkt, sizeof(struct ct_entry_24xx));)
+ DEBUG9(printk("%s: ct pkt dump:\n", __func__));
+ DEBUG9(qla2x00_dump_buffer((void *)pkt, sizeof(struct ct_entry_24xx)));
/* Validate handle. */
if (pkt->handle < MAX_OUTSTANDING_COMMANDS)
{
struct semaphore *sem_ptr = (struct semaphore *)data;
- DEBUG11(printk("qla2x00_sem_timeout: entered.\n");)
+ DEBUG11(printk("qla2x00_sem_timeout: entered.\n"));
if (sem_ptr != NULL) {
up(sem_ptr);
}
- DEBUG11(printk("qla2x00_mbx_sem_timeout: exiting.\n");)
+ DEBUG11(printk("qla2x00_mbx_sem_timeout: exiting.\n"));
}
/*
rval = QLA_SUCCESS;
abort_active = test_bit(ABORT_ISP_ACTIVE, &ha->dpc_flags);
- DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no);)
+ DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
/*
* Wait for active mailbox commands to finish by waiting at most tov
if (qla2x00_down_timeout(&ha->mbx_cmd_sem, mcp->tov * HZ)) {
/* Timeout occurred. Return error. */
DEBUG2_3_11(printk("%s(%ld): cmd access timeout. "
- "Exiting.\n", __func__, ha->host_no);)
+ "Exiting.\n", __func__, ha->host_no));
return QLA_FUNCTION_TIMEOUT;
}
}
spin_lock_irqsave(&ha->mbx_reg_lock, mbx_flags);
DEBUG11(printk("scsi(%ld): prepare to issue mbox cmd=0x%x.\n",
- ha->host_no, mcp->mb[0]);)
+ ha->host_no, mcp->mb[0]));
spin_lock_irqsave(&ha->hardware_lock, flags);
/* Unlock mbx registers and wait for interrupt */
DEBUG11(printk("%s(%ld): going to unlock irq & waiting for interrupt. "
- "jiffies=%lx.\n", __func__, ha->host_no, jiffies);)
+ "jiffies=%lx.\n", __func__, ha->host_no, jiffies));
/* Wait for mbx cmd completion until timeout */
if (!abort_active && io_lock_on) {
/* sleep on completion semaphore */
DEBUG11(printk("%s(%ld): INTERRUPT MODE. Initializing timer.\n",
- __func__, ha->host_no);)
+ __func__, ha->host_no));
init_timer(&tmp_intr_timer);
tmp_intr_timer.data = (unsigned long)&ha->mbx_intr_sem;
(void (*)(unsigned long))qla2x00_mbx_sem_timeout;
DEBUG11(printk("%s(%ld): Adding timer.\n", __func__,
- ha->host_no);)
+ ha->host_no));
add_timer(&tmp_intr_timer);
DEBUG11(printk("%s(%ld): going to unlock & sleep. "
- "time=0x%lx.\n", __func__, ha->host_no, jiffies);)
+ "time=0x%lx.\n", __func__, ha->host_no, jiffies));
set_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags);
down(&ha->mbx_intr_sem);
DEBUG11(printk("%s(%ld): waking up. time=0x%lx\n", __func__,
- ha->host_no, jiffies);)
+ ha->host_no, jiffies));
clear_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags);
/* delete the timer */
del_timer(&tmp_intr_timer);
} else {
DEBUG3_11(printk("%s(%ld): cmd=%x POLLING MODE.\n", __func__,
- ha->host_no, command);)
+ ha->host_no, command));
if (IS_QLA24XX(ha) || IS_QLA54XX(ha))
			WRT_REG_DWORD(&reg->isp24.hccr, HCCRX_SET_HOST_INT);
uint16_t *iptr2;
DEBUG3_11(printk("%s(%ld): cmd %x completed.\n", __func__,
- ha->host_no, command);)
+ ha->host_no, command));
/* Got interrupt. Clear the flag. */
ha->flags.mbox_int = 0;
if (!abort_active) {
DEBUG11(printk("%s(%ld): checking for additional resp "
- "interrupt.\n", __func__, ha->host_no);)
+ "interrupt.\n", __func__, ha->host_no));
/* polling mode for non isp_abort commands. */
qla2x00_poll(ha);
if (!io_lock_on || (mcp->flags & IOCTL_CMD)) {
/* not in dpc. schedule it for dpc to take over. */
DEBUG(printk("%s(%ld): timeout schedule "
- "isp_abort_needed.\n", __func__, ha->host_no);)
+ "isp_abort_needed.\n", __func__, ha->host_no));
DEBUG2_3_11(printk("%s(%ld): timeout schedule "
- "isp_abort_needed.\n", __func__, ha->host_no);)
+ "isp_abort_needed.\n", __func__, ha->host_no));
qla_printk(KERN_WARNING, ha,
"Mailbox command timeout occured. Scheduling ISP "
"abort.\n");
} else if (!abort_active) {
/* call abort directly since we are in the DPC thread */
DEBUG(printk("%s(%ld): timeout calling abort_isp\n",
- __func__, ha->host_no);)
+ __func__, ha->host_no));
DEBUG2_3_11(printk("%s(%ld): timeout calling "
- "abort_isp\n", __func__, ha->host_no);)
+ "abort_isp\n", __func__, ha->host_no));
qla_printk(KERN_WARNING, ha,
"Mailbox command timeout occured. Issuing ISP "
"abort.\n");
}
clear_bit(ABORT_ISP_ACTIVE, &ha->dpc_flags);
DEBUG(printk("%s(%ld): finished abort_isp\n", __func__,
- ha->host_no);)
+ ha->host_no));
DEBUG2_3_11(printk("%s(%ld): finished abort_isp\n",
- __func__, ha->host_no);)
+ __func__, ha->host_no));
}
}
if (rval) {
DEBUG2_3_11(printk("%s(%ld): **** FAILED. mbx0=%x, mbx1=%x, "
"mbx2=%x, cmd=%x ****\n", __func__, ha->host_no,
- mcp->mb[0], mcp->mb[1], mcp->mb[2], command);)
+ mcp->mb[0], mcp->mb[1], mcp->mb[2], command));
} else {
- DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no);)
+ DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no));
}
return rval;
mbx_cmd_t mc;
mbx_cmd_t *mcp = &mc;
- DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no);)
+ DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
mcp->mb[0] = MBC_EXECUTE_FIRMWARE;
mcp->out_mb = MBX_0;
} else {
if (IS_QLA24XX(ha) || IS_QLA54XX(ha)) {
DEBUG11(printk("%s(%ld): done exchanges=%x.\n",
- __func__, ha->host_no, mcp->mb[1]);)
+ __func__, ha->host_no, mcp->mb[1]));
} else {
DEBUG11(printk("%s(%ld): done.\n", __func__,
- ha->host_no);)
+ ha->host_no));
}
}
mbx_cmd_t mc;
mbx_cmd_t *mcp = &mc;
- DEBUG11(printk("qla2x00_mbx_reg_test(%ld): entered.\n", ha->host_no);)
+ DEBUG11(printk("qla2x00_mbx_reg_test(%ld): entered.\n", ha->host_no));
mcp->mb[0] = MBC_MAILBOX_REGISTER_TEST;
mcp->mb[1] = 0xAAAA;
if (rval != QLA_SUCCESS) {
/*EMPTY*/
DEBUG2_3_11(printk("qla2x00_mbx_reg_test(%ld): failed=%x.\n",
- ha->host_no, rval);)
+ ha->host_no, rval));
} else {
/*EMPTY*/
DEBUG11(printk("qla2x00_mbx_reg_test(%ld): done.\n",
- ha->host_no);)
+ ha->host_no));
}
return rval;
mbx_cmd_t mc;
mbx_cmd_t *mcp = &mc;
- DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no);)
+ DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
mcp->mb[0] = MBC_VERIFY_CHECKSUM;
mcp->out_mb = MBX_0;
if (rval != QLA_SUCCESS) {
DEBUG2_3_11(printk("%s(%ld): failed=%x chk sum=%x.\n", __func__,
ha->host_no, rval, (IS_QLA24XX(ha) || IS_QLA54XX(ha) ?
- (mcp->mb[2] << 16) | mcp->mb[1]: mcp->mb[1]));)
+ (mcp->mb[2] << 16) | mcp->mb[1]: mcp->mb[1])));
} else {
- DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no);)
+ DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no));
}
return rval;
if (rval != QLA_SUCCESS) {
/*EMPTY*/
DEBUG(printk("qla2x00_issue_iocb(%ld): failed rval 0x%x\n",
- ha->host_no, rval);)
+ ha->host_no, rval));
DEBUG2(printk("qla2x00_issue_iocb(%ld): failed rval 0x%x\n",
- ha->host_no, rval);)
+ ha->host_no, rval));
} else {
sts_entry_t *sts_entry = (sts_entry_t *) buffer;
mbx_cmd_t mc;
mbx_cmd_t *mcp = &mc;
- DEBUG11(printk("qla2x00_abort_command(%ld): entered.\n", ha->host_no);)
+ DEBUG11(printk("qla2x00_abort_command(%ld): entered.\n", ha->host_no));
fcport = sp->fcport;
if (rval != QLA_SUCCESS) {
DEBUG2_3_11(printk("qla2x00_abort_command(%ld): failed=%x.\n",
- ha->host_no, rval);)
+ ha->host_no, rval));
} else {
sp->flags |= SRB_ABORT_PENDING;
DEBUG11(printk("qla2x00_abort_command(%ld): done.\n",
- ha->host_no);)
+ ha->host_no));
}
return rval;
if (fcport == NULL)
return 0;
- DEBUG11(printk("%s(%ld): entered.\n", __func__, fcport->ha->host_no);)
+ DEBUG11(printk("%s(%ld): entered.\n", __func__, fcport->ha->host_no));
ha = fcport->ha;
mcp->mb[0] = MBC_ABORT_TARGET;
if (rval != QLA_SUCCESS) {
DEBUG2_3_11(printk("qla2x00_abort_target(%ld): failed=%x.\n",
- ha->host_no, rval);)
+ ha->host_no, rval));
} else {
/*EMPTY*/
DEBUG11(printk("qla2x00_abort_target(%ld): done.\n",
- ha->host_no);)
+ ha->host_no));
}
return rval;
mbx_cmd_t *mcp = &mc;
DEBUG11(printk("qla2x00_get_adapter_id(%ld): entered.\n",
- ha->host_no);)
+ ha->host_no));
mcp->mb[0] = MBC_GET_ADAPTER_LOOP_ID;
mcp->out_mb = MBX_0;
if (rval != QLA_SUCCESS) {
/*EMPTY*/
DEBUG2_3_11(printk("qla2x00_get_adapter_id(%ld): failed=%x.\n",
- ha->host_no, rval);)
+ ha->host_no, rval));
} else {
/*EMPTY*/
DEBUG11(printk("qla2x00_get_adapter_id(%ld): done.\n",
- ha->host_no);)
+ ha->host_no));
}
return rval;
mbx_cmd_t *mcp = &mc;
DEBUG11(printk("qla2x00_get_retry_cnt(%ld): entered.\n",
- ha->host_no);)
+ ha->host_no));
mcp->mb[0] = MBC_GET_RETRY_COUNT;
mcp->out_mb = MBX_0;
if (rval != QLA_SUCCESS) {
/*EMPTY*/
DEBUG2_3_11(printk("qla2x00_get_retry_cnt(%ld): failed = %x.\n",
- ha->host_no, mcp->mb[0]);)
+ ha->host_no, mcp->mb[0]));
} else {
/* Convert returned data and check our values. */
*r_a_tov = mcp->mb[3] / 2;
}
DEBUG11(printk("qla2x00_get_retry_cnt(%ld): done. mb3=%d "
- "ratov=%d.\n", ha->host_no, mcp->mb[3], ratov);)
+ "ratov=%d.\n", ha->host_no, mcp->mb[3], ratov));
}
return rval;
mbx_cmd_t *mcp = &mc;
DEBUG11(printk("qla2x00_init_firmware(%ld): entered.\n",
- ha->host_no);)
+ ha->host_no));
mcp->mb[0] = MBC_INITIALIZE_FIRMWARE;
mcp->mb[2] = MSW(ha->init_cb_dma);
/*EMPTY*/
DEBUG2_3_11(printk("qla2x00_init_firmware(%ld): failed=%x "
"mb0=%x.\n",
- ha->host_no, rval, mcp->mb[0]);)
+ ha->host_no, rval, mcp->mb[0]));
} else {
/*EMPTY*/
DEBUG11(printk("qla2x00_init_firmware(%ld): done.\n",
- ha->host_no);)
+ ha->host_no));
}
return rval;
struct port_database_24xx *pd24;
dma_addr_t pd_dma;
- DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no);)
+ DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
pd24 = NULL;
pd = dma_pool_alloc(ha->s_dma_pool, GFP_KERNEL, &pd_dma);
mbx_cmd_t *mcp = &mc;
DEBUG11(printk("qla2x00_get_firmware_state(%ld): entered.\n",
- ha->host_no);)
+ ha->host_no));
mcp->mb[0] = MBC_GET_FIRMWARE_STATE;
mcp->out_mb = MBX_0;
if (rval != QLA_SUCCESS) {
/*EMPTY*/
DEBUG2_3_11(printk("qla2x00_get_firmware_state(%ld): "
- "failed=%x.\n", ha->host_no, rval);)
+ "failed=%x.\n", ha->host_no, rval));
} else {
/*EMPTY*/
DEBUG11(printk("qla2x00_get_firmware_state(%ld): done.\n",
- ha->host_no);)
+ ha->host_no));
}
return rval;
mbx_cmd_t *mcp = &mc;
DEBUG11(printk("qla2x00_get_port_name(%ld): entered.\n",
- ha->host_no);)
+ ha->host_no));
mcp->mb[0] = MBC_GET_PORT_NAME;
mcp->out_mb = MBX_1|MBX_0;
if (rval != QLA_SUCCESS) {
/*EMPTY*/
DEBUG2_3_11(printk("qla2x00_get_port_name(%ld): failed=%x.\n",
- ha->host_no, rval);)
+ ha->host_no, rval));
} else {
if (name != NULL) {
/* This function returns name in big endian. */
}
DEBUG11(printk("qla2x00_get_port_name(%ld): done.\n",
- ha->host_no);)
+ ha->host_no));
}
return rval;
mbx_cmd_t mc;
mbx_cmd_t *mcp = &mc;
- DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no);)
+ DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
if (IS_QLA24XX(ha) || IS_QLA54XX(ha)) {
mcp->mb[0] = MBC_LIP_FULL_LOGIN;
if (rval != QLA_SUCCESS) {
/*EMPTY*/
DEBUG2_3_11(printk("%s(%ld): failed=%x.\n",
- __func__, ha->host_no, rval);)
+ __func__, ha->host_no, rval));
} else {
/*EMPTY*/
- DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no);)
+ DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no));
}
return rval;
mbx_cmd_t *mcp = &mc;
DEBUG11(printk("qla2x00_send_sns(%ld): entered.\n",
- ha->host_no);)
+ ha->host_no));
DEBUG11(printk("qla2x00_send_sns: retry cnt=%d ratov=%d total "
- "tov=%d.\n", ha->retry_count, ha->login_timeout, mcp->tov);)
+ "tov=%d.\n", ha->retry_count, ha->login_timeout, mcp->tov));
mcp->mb[0] = MBC_SEND_SNS_COMMAND;
mcp->mb[1] = cmd_size;
if (rval != QLA_SUCCESS) {
/*EMPTY*/
DEBUG(printk("qla2x00_send_sns(%ld): failed=%x mb[0]=%x "
- "mb[1]=%x.\n", ha->host_no, rval, mcp->mb[0], mcp->mb[1]);)
+ "mb[1]=%x.\n", ha->host_no, rval, mcp->mb[0], mcp->mb[1]));
DEBUG2_3_11(printk("qla2x00_send_sns(%ld): failed=%x mb[0]=%x "
- "mb[1]=%x.\n", ha->host_no, rval, mcp->mb[0], mcp->mb[1]);)
+ "mb[1]=%x.\n", ha->host_no, rval, mcp->mb[0], mcp->mb[1]));
} else {
/*EMPTY*/
- DEBUG11(printk("qla2x00_send_sns(%ld): done.\n", ha->host_no);)
+ DEBUG11(printk("qla2x00_send_sns(%ld): done.\n", ha->host_no));
}
return rval;
dma_addr_t lg_dma;
uint32_t iop[2];
- DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no);)
+ DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
lg = dma_pool_alloc(ha->s_dma_pool, GFP_KERNEL, &lg_dma);
if (lg == NULL) {
lg->control_flags = __constant_cpu_to_le16(LCF_COMMAND_PLOGI);
if (opt & BIT_0)
lg->control_flags |= __constant_cpu_to_le16(LCF_COND_PLOGI);
+ if (opt & BIT_1)
+ lg->control_flags |= __constant_cpu_to_le16(LCF_SKIP_PRLI);
lg->port_id[0] = al_pa;
lg->port_id[1] = area;
lg->port_id[2] = domain;
rval = qla2x00_issue_iocb(ha, lg, lg_dma, 0);
if (rval != QLA_SUCCESS) {
DEBUG2_3_11(printk("%s(%ld): failed to issue Login IOCB "
- "(%x).\n", __func__, ha->host_no, rval);)
+ "(%x).\n", __func__, ha->host_no, rval));
} else if (lg->entry_status != 0) {
DEBUG2_3_11(printk("%s(%ld): failed to complete IOCB "
"-- error status (%x).\n", __func__, ha->host_no,
break;
}
} else {
- DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no);)
+ DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no));
iop[0] = le32_to_cpu(lg->io_parameter[0]);
mbx_cmd_t mc;
mbx_cmd_t *mcp = &mc;
- DEBUG11(printk("qla2x00_login_fabric(%ld): entered.\n", ha->host_no);)
+ DEBUG11(printk("qla2x00_login_fabric(%ld): entered.\n", ha->host_no));
mcp->mb[0] = MBC_LOGIN_FABRIC_PORT;
mcp->out_mb = MBX_3|MBX_2|MBX_1|MBX_0;
/*EMPTY*/
DEBUG2_3_11(printk("qla2x00_login_fabric(%ld): failed=%x "
"mb[0]=%x mb[1]=%x mb[2]=%x.\n", ha->host_no, rval,
- mcp->mb[0], mcp->mb[1], mcp->mb[2]);)
+ mcp->mb[0], mcp->mb[1], mcp->mb[2]));
} else {
/*EMPTY*/
DEBUG11(printk("qla2x00_login_fabric(%ld): done.\n",
- ha->host_no);)
+ ha->host_no));
}
return rval;
fcport->d_id.b.domain, fcport->d_id.b.area,
fcport->d_id.b.al_pa, mb_ret, opt);
- DEBUG3(printk("%s(%ld): entered.\n", __func__, ha->host_no);)
+ DEBUG3(printk("%s(%ld): entered.\n", __func__, ha->host_no));
mcp->mb[0] = MBC_LOGIN_LOOP_PORT;
if (HAS_EXTENDED_IDS(ha))
DEBUG(printk("%s(%ld): failed=%x mb[0]=%x mb[1]=%x "
"mb[6]=%x mb[7]=%x.\n", __func__, ha->host_no, rval,
- mcp->mb[0], mcp->mb[1], mcp->mb[6], mcp->mb[7]);)
+ mcp->mb[0], mcp->mb[1], mcp->mb[6], mcp->mb[7]));
DEBUG2_3(printk("%s(%ld): failed=%x mb[0]=%x mb[1]=%x "
"mb[6]=%x mb[7]=%x.\n", __func__, ha->host_no, rval,
- mcp->mb[0], mcp->mb[1], mcp->mb[6], mcp->mb[7]);)
+ mcp->mb[0], mcp->mb[1], mcp->mb[6], mcp->mb[7]));
} else {
/*EMPTY*/
- DEBUG3(printk("%s(%ld): done.\n", __func__, ha->host_no);)
+ DEBUG3(printk("%s(%ld): done.\n", __func__, ha->host_no));
}
return (rval);
struct logio_entry_24xx *lg;
dma_addr_t lg_dma;
- DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no);)
+ DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
lg = dma_pool_alloc(ha->s_dma_pool, GFP_KERNEL, &lg_dma);
if (lg == NULL) {
rval = qla2x00_issue_iocb(ha, lg, lg_dma, 0);
if (rval != QLA_SUCCESS) {
DEBUG2_3_11(printk("%s(%ld): failed to issue Logout IOCB "
- "(%x).\n", __func__, ha->host_no, rval);)
+ "(%x).\n", __func__, ha->host_no, rval));
} else if (lg->entry_status != 0) {
DEBUG2_3_11(printk("%s(%ld): failed to complete IOCB "
"-- error status (%x).\n", __func__, ha->host_no,
"-- completion status (%x) ioparam=%x/%x.\n", __func__,
ha->host_no, le16_to_cpu(lg->comp_status),
le32_to_cpu(lg->io_parameter[0]),
- le32_to_cpu(lg->io_parameter[1]));)
+ le32_to_cpu(lg->io_parameter[1])));
} else {
/*EMPTY*/
- DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no);)
+ DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no));
}
dma_pool_free(ha->s_dma_pool, lg, lg_dma);
mbx_cmd_t *mcp = &mc;
DEBUG11(printk("qla2x00_fabric_logout(%ld): entered.\n",
- ha->host_no);)
+ ha->host_no));
mcp->mb[0] = MBC_LOGOUT_FABRIC_PORT;
mcp->out_mb = MBX_1|MBX_0;
if (rval != QLA_SUCCESS) {
/*EMPTY*/
DEBUG2_3_11(printk("qla2x00_fabric_logout(%ld): failed=%x "
- "mbx1=%x.\n", ha->host_no, rval, mcp->mb[1]);)
+ "mbx1=%x.\n", ha->host_no, rval, mcp->mb[1]));
} else {
/*EMPTY*/
DEBUG11(printk("qla2x00_fabric_logout(%ld): done.\n",
- ha->host_no);)
+ ha->host_no));
}
return rval;
mbx_cmd_t *mcp = &mc;
DEBUG11(printk("qla2x00_full_login_lip(%ld): entered.\n",
- ha->host_no);)
+ ha->host_no));
mcp->mb[0] = MBC_LIP_FULL_LOGIN;
mcp->mb[1] = 0;
if (rval != QLA_SUCCESS) {
/*EMPTY*/
DEBUG2_3_11(printk("qla2x00_full_login_lip(%ld): failed=%x.\n",
- ha->host_no, rval);)
+ ha->host_no, rval));
} else {
/*EMPTY*/
DEBUG11(printk("qla2x00_full_login_lip(%ld): done.\n",
- ha->host_no);)
+ ha->host_no));
}
return rval;
mbx_cmd_t *mcp = &mc;
DEBUG11(printk("qla2x00_get_id_list(%ld): entered.\n",
- ha->host_no);)
+ ha->host_no));
if (id_list == NULL)
return QLA_FUNCTION_FAILED;
if (rval != QLA_SUCCESS) {
/*EMPTY*/
DEBUG2_3_11(printk("qla2x00_get_id_list(%ld): failed=%x.\n",
- ha->host_no, rval);)
+ ha->host_no, rval));
} else {
*entries = mcp->mb[1];
DEBUG11(printk("qla2x00_get_id_list(%ld): done.\n",
- ha->host_no);)
+ ha->host_no));
}
return rval;
if (rval != QLA_SUCCESS) {
/*EMPTY*/
DEBUG2_3_11(printk("%s(%ld): failed = %x.\n", __func__,
- ha->host_no, mcp->mb[0]);)
+ ha->host_no, mcp->mb[0]));
} else {
DEBUG11(printk("%s(%ld): done. mb1=%x mb2=%x mb3=%x mb6=%x "
"mb7=%x mb10=%x.\n", __func__, ha->host_no,
link_stat_t *stat_buf;
dma_addr_t stat_buf_dma;
- DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no);)
+ DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
stat_buf = dma_pool_alloc(ha->s_dma_pool, GFP_ATOMIC, &stat_buf_dma);
if (stat_buf == NULL) {
if (rval == QLA_SUCCESS) {
if (mcp->mb[0] != MBS_COMMAND_COMPLETE) {
DEBUG2_3_11(printk("%s(%ld): cmd failed. mbx0=%x.\n",
- __func__, ha->host_no, mcp->mb[0]);)
+ __func__, ha->host_no, mcp->mb[0]));
status[0] = mcp->mb[0];
rval = BIT_1;
} else {
stat_buf->loss_sync_cnt, stat_buf->loss_sig_cnt,
stat_buf->prim_seq_err_cnt,
stat_buf->inval_xmit_word_cnt,
- stat_buf->inval_crc_cnt);)
+ stat_buf->inval_crc_cnt));
}
} else {
/* Failed. */
DEBUG2_3_11(printk("%s(%ld): failed=%x.\n", __func__,
- ha->host_no, rval);)
+ ha->host_no, rval));
rval = BIT_1;
}
uint32_t *sbuf, *siter;
dma_addr_t sbuf_dma;
- DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no);)
+ DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
if (dwords > (DMA_POOL_SIZE / 4)) {
DEBUG2_3_11(printk("%s(%ld): Unabled to retrieve %d DWORDs "
dma_addr_t abt_dma;
uint32_t handle;
- DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no);)
+ DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
fcport = sp->fcport;
rval = qla2x00_issue_iocb(ha, abt, abt_dma, 0);
if (rval != QLA_SUCCESS) {
DEBUG2_3_11(printk("%s(%ld): failed to issue IOCB (%x).\n",
- __func__, ha->host_no, rval);)
+ __func__, ha->host_no, rval));
} else if (abt->entry_status != 0) {
DEBUG2_3_11(printk("%s(%ld): failed to complete IOCB "
"-- error status (%x).\n", __func__, ha->host_no,
} else if (abt->nport_handle != __constant_cpu_to_le16(0)) {
DEBUG2_3_11(printk("%s(%ld): failed to complete IOCB "
"-- completion status (%x).\n", __func__, ha->host_no,
- le16_to_cpu(abt->nport_handle));)
+ le16_to_cpu(abt->nport_handle)));
rval = QLA_FUNCTION_FAILED;
} else {
- DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no);)
+ DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no));
sp->flags |= SRB_ABORT_PENDING;
}
if (fcport == NULL)
return 0;
- DEBUG11(printk("%s(%ld): entered.\n", __func__, fcport->ha->host_no);)
+ DEBUG11(printk("%s(%ld): entered.\n", __func__, fcport->ha->host_no));
ha = fcport->ha;
tsk = dma_pool_alloc(ha->s_dma_pool, GFP_KERNEL, &tsk_dma);
rval = qla2x00_issue_iocb(ha, tsk, tsk_dma, 0);
if (rval != QLA_SUCCESS) {
DEBUG2_3_11(printk("%s(%ld): failed to issue Target Reset IOCB "
- "(%x).\n", __func__, ha->host_no, rval);)
+ "(%x).\n", __func__, ha->host_no, rval));
goto atarget_done;
} else if (tsk->p.sts.entry_status != 0) {
DEBUG2_3_11(printk("%s(%ld): failed to complete IOCB "
__constant_cpu_to_le16(CS_COMPLETE)) {
DEBUG2_3_11(printk("%s(%ld): failed to complete IOCB "
"-- completion status (%x).\n", __func__,
- ha->host_no, le16_to_cpu(tsk->p.sts.comp_status));)
+ ha->host_no, le16_to_cpu(tsk->p.sts.comp_status)));
rval = QLA_FUNCTION_FAILED;
goto atarget_done;
}
rval = qla2x00_marker(ha, fcport->loop_id, 0, MK_SYNC_ID);
if (rval != QLA_SUCCESS) {
DEBUG2_3_11(printk("%s(%ld): failed to issue Marker IOCB "
- "(%x).\n", __func__, ha->host_no, rval);)
+ "(%x).\n", __func__, ha->host_no, rval));
} else {
- DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no);)
+ DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no));
}
atarget_done:
return rval;
}
+
+int
+qla2x00_trace_control(scsi_qla_host_t *ha, uint16_t ctrl, dma_addr_t eft_dma,
+ uint16_t buffers)
+{
+ int rval;
+ mbx_cmd_t mc;
+ mbx_cmd_t *mcp = &mc;
+
+ if (!IS_QLA24XX(ha) && !IS_QLA54XX(ha))
+ return QLA_FUNCTION_FAILED;
+
+ DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
+
+ mcp->mb[0] = MBC_TRACE_CONTROL;
+ mcp->mb[1] = ctrl;
+ mcp->out_mb = MBX_1|MBX_0;
+ mcp->in_mb = MBX_1|MBX_0;
+ if (ctrl == TC_ENABLE) {
+ mcp->mb[2] = LSW(eft_dma);
+ mcp->mb[3] = MSW(eft_dma);
+ mcp->mb[4] = LSW(MSD(eft_dma));
+ mcp->mb[5] = MSW(MSD(eft_dma));
+ mcp->mb[6] = buffers;
+ mcp->mb[7] = buffers;
+ mcp->out_mb |= MBX_7|MBX_6|MBX_5|MBX_4|MBX_3|MBX_2;
+ }
+ mcp->tov = 30;
+ mcp->flags = 0;
+ rval = qla2x00_mailbox_command(ha, mcp);
+
+ if (rval != QLA_SUCCESS) {
+ DEBUG2_3_11(printk("%s(%ld): failed=%x mb[0]=%x mb[1]=%x.\n",
+ __func__, ha->host_no, rval, mcp->mb[0], mcp->mb[1]));
+ } else {
+ DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no));
+ }
+
+ return rval;
+}
+
+int
+qla2x00_read_sfp(scsi_qla_host_t *ha, dma_addr_t sfp_dma, uint16_t addr,
+ uint16_t off, uint16_t count)
+{
+ int rval;
+ mbx_cmd_t mc;
+ mbx_cmd_t *mcp = &mc;
+
+ if (!IS_QLA24XX(ha) && !IS_QLA54XX(ha))
+ return QLA_FUNCTION_FAILED;
+
+ DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
+
+ mcp->mb[0] = MBC_READ_SFP;
+ mcp->mb[1] = addr;
+ mcp->mb[2] = MSW(sfp_dma);
+ mcp->mb[3] = LSW(sfp_dma);
+ mcp->mb[6] = MSW(MSD(sfp_dma));
+ mcp->mb[7] = LSW(MSD(sfp_dma));
+ mcp->mb[8] = count;
+ mcp->mb[9] = off;
+ mcp->mb[10] = 0;
+ mcp->out_mb = MBX_10|MBX_9|MBX_8|MBX_7|MBX_6|MBX_3|MBX_2|MBX_1|MBX_0;
+ mcp->in_mb = MBX_0;
+ mcp->tov = 30;
+ mcp->flags = 0;
+ rval = qla2x00_mailbox_command(ha, mcp);
+
+ if (rval != QLA_SUCCESS) {
+ DEBUG2_3_11(printk("%s(%ld): failed=%x (%x).\n", __func__,
+ ha->host_no, rval, mcp->mb[0]));
+ } else {
+ DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no));
+ }
+
+ return rval;
+}
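A minimal usage sketch under stated assumptions, not taken from the patch: the helper name is hypothetical and 0xa0 is assumed as the conventional SFP EEPROM two-wire address. The SFP_DEV_SIZE/SFP_BLOCK_SIZE defines and the sfp_data DMA buffer allocated in qla2x00_mem_alloc() suggest the transceiver data is fetched in 64-byte chunks:

/* Hypothetical helper -- reads the whole 256-byte SFP EEPROM in
 * SFP_BLOCK_SIZE chunks through the pre-allocated ha->sfp_data
 * DMA buffer.  The 0xa0 device address is an assumption. */
static int
qla2xxx_read_sfp_dev(scsi_qla_host_t *ha, uint8_t *buf)
{
	int rval;
	uint16_t off;

	for (off = 0; off < SFP_DEV_SIZE; off += SFP_BLOCK_SIZE) {
		rval = qla2x00_read_sfp(ha, ha->sfp_data_dma, 0xa0, off,
		    SFP_BLOCK_SIZE);
		if (rval != QLA_SUCCESS)
			return rval;
		memcpy(buf + off, ha->sfp_data, SFP_BLOCK_SIZE);
	}
	return QLA_SUCCESS;
}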
int qlport_down_retry = 30;
module_param(qlport_down_retry, int, S_IRUGO|S_IRUSR);
MODULE_PARM_DESC(qlport_down_retry,
- "Maximum number of command retries to a port that returns"
+ "Maximum number of command retries to a port that returns "
"a PORT-DOWN status.");
int ql2xplogiabsentdevice;
module_param(ql2xplogiabsentdevice, int, S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(ql2xplogiabsentdevice,
"Option to enable PLOGI to devices that are not present after "
- "a Fabric scan. This is needed for several broken switches."
+ "a Fabric scan. This is needed for several broken switches. "
"Default is 0 - no PLOGI. 1 - perfom PLOGI.");
int ql2xloginretrycount = 0;
MODULE_PARM_DESC(ql2xloginretrycount,
"Specify an alternate value for the NVRAM login retry count.");
+int ql2xallocfwdump = 1;
+module_param(ql2xallocfwdump, int, S_IRUGO|S_IRUSR);
+MODULE_PARM_DESC(ql2xallocfwdump,
+ "Option to enable allocation of memory for a firmware dump "
+ "during HBA initialization. Memory allocation requirements "
+ "vary by ISP type. Default is 1 - allocate memory.");
+
+int extended_error_logging;
+module_param(extended_error_logging, int, S_IRUGO|S_IRUSR);
+MODULE_PARM_DESC(extended_error_logging,
+ "Option to enable extended error logging, "
+ "Default is 0 - no logging. 1 - log errors.");
+
static void qla2x00_free_device(scsi_qla_host_t *);
static void qla2x00_config_dma_addressing(scsi_qla_host_t *ha);
DEBUG2(printk("%s(%ld): aborting sp %p from RISC. pid=%ld.\n",
__func__, ha->host_no, sp, serial));
- DEBUG3(qla2x00_print_scsi_cmd(cmd);)
+ DEBUG3(qla2x00_print_scsi_cmd(cmd));
spin_unlock_irqrestore(&ha->hardware_lock, flags);
if (ha->isp_ops.abort_command(ha, sp)) {
#endif
} else {
DEBUG2(printk(KERN_INFO
- "%s failed: loop not ready\n",__func__);)
+ "%s failed: loop not ready\n",__func__));
}
if (ret == FAILED) {
/* Empty */
DEBUG2_3(printk("%s(%ld): **** FAILED ****\n",
__func__,
- ha->host_no);)
+ ha->host_no));
} else {
/* Empty */
DEBUG3(printk("%s(%ld): exiting normally.\n",
__func__,
- ha->host_no);)
+ ha->host_no));
}
return(status);
/*
* PCI driver interface
*/
-static int qla2x00_probe_one(struct pci_dev *pdev)
+static int __devinit
+qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
{
int ret = -ENODEV;
device_reg_t __iomem *reg;
ha->isp_ops.read_nvram = qla2x00_read_nvram_data;
ha->isp_ops.write_nvram = qla2x00_write_nvram_data;
ha->isp_ops.fw_dump = qla2100_fw_dump;
- ha->isp_ops.ascii_fw_dump = qla2100_ascii_fw_dump;
ha->isp_ops.read_optrom = qla2x00_read_optrom_data;
ha->isp_ops.write_optrom = qla2x00_write_optrom_data;
if (IS_QLA2100(ha)) {
ha->isp_ops.pci_config = qla2300_pci_config;
ha->isp_ops.intr_handler = qla2300_intr_handler;
ha->isp_ops.fw_dump = qla2300_fw_dump;
- ha->isp_ops.ascii_fw_dump = qla2300_ascii_fw_dump;
ha->isp_ops.beacon_on = qla2x00_beacon_on;
ha->isp_ops.beacon_off = qla2x00_beacon_off;
ha->isp_ops.beacon_blink = qla2x00_beacon_blink;
ha->isp_ops.read_nvram = qla24xx_read_nvram_data;
ha->isp_ops.write_nvram = qla24xx_write_nvram_data;
ha->isp_ops.fw_dump = qla24xx_fw_dump;
- ha->isp_ops.ascii_fw_dump = qla24xx_ascii_fw_dump;
ha->isp_ops.read_optrom = qla24xx_read_optrom_data;
ha->isp_ops.write_optrom = qla24xx_write_optrom_data;
ha->isp_ops.beacon_on = qla24xx_beacon_on;
host->transportt = qla2xxx_transport_template;
ret = request_irq(pdev->irq, ha->isp_ops.intr_handler,
- SA_INTERRUPT|SA_SHIRQ, QLA2XXX_DRIVER_NAME, ha);
+ IRQF_DISABLED|IRQF_SHARED, QLA2XXX_DRIVER_NAME, ha);
if (ret) {
qla_printk(KERN_WARNING, ha,
"Failed to reserve interrupt %d already in use.\n",
return ret;
}
-static void qla2x00_remove_one(struct pci_dev *pdev)
+static void __devexit
+qla2x00_remove_one(struct pci_dev *pdev)
{
scsi_qla_host_t *ha;
kthread_stop(t);
}
+ if (ha->eft)
+ qla2x00_trace_control(ha, TC_DISABLE, 0, 0);
+
/* Stop currently executing firmware. */
qla2x00_stop_firmware(ha);
}
memset(ha->init_cb, 0, ha->init_cb_size);
- /* Allocate ioctl related memory. */
- if (qla2x00_alloc_ioctl_mem(ha)) {
- qla_printk(KERN_WARNING, ha,
- "Memory Allocation failed - ioctl_mem\n");
-
- qla2x00_mem_free(ha);
- msleep(100);
-
- continue;
- }
-
if (qla2x00_allocate_sp_pool(ha)) {
qla_printk(KERN_WARNING, ha,
"Memory Allocation failed - "
continue;
}
memset(ha->ct_sns, 0, sizeof(struct ct_sns_pkt));
+
+ if (IS_QLA24XX(ha) || IS_QLA54XX(ha)) {
+ /*
+ * Get consistent memory allocated for SFP
+ * block.
+ */
+ ha->sfp_data = dma_pool_alloc(ha->s_dma_pool,
+ GFP_KERNEL, &ha->sfp_data_dma);
+ if (ha->sfp_data == NULL) {
+ qla_printk(KERN_WARNING, ha,
+ "Memory Allocation failed - "
+ "sfp_data\n");
+
+ qla2x00_mem_free(ha);
+ msleep(100);
+
+ continue;
+ }
+ memset(ha->sfp_data, 0, SFP_BLOCK_SIZE);
+ }
}
/* Done all allocations without any error. */
return;
}
- /* free ioctl memory */
- qla2x00_free_ioctl_mem(ha);
-
/* free sp pool */
qla2x00_free_sp_pool(ha);
+ if (ha->fw_dump) {
+ if (ha->eft)
+ dma_free_coherent(&ha->pdev->dev,
+ ntohl(ha->fw_dump->eft_size), ha->eft, ha->eft_dma);
+ vfree(ha->fw_dump);
+ }
+
if (ha->sns_cmd)
dma_free_coherent(&ha->pdev->dev, sizeof(struct sns_cmd_pkt),
ha->sns_cmd, ha->sns_cmd_dma);
dma_free_coherent(&ha->pdev->dev, sizeof(struct ct_sns_pkt),
ha->ct_sns, ha->ct_sns_dma);
+ if (ha->sfp_data)
+ dma_pool_free(ha->s_dma_pool, ha->sfp_data, ha->sfp_data_dma);
+
if (ha->ms_iocb)
dma_pool_free(ha->s_dma_pool, ha->ms_iocb, ha->ms_iocb_dma);
(ha->request_q_length + 1) * sizeof(request_t),
ha->request_ring, ha->request_dma);
+ ha->eft = NULL;
+ ha->eft_dma = 0;
ha->sns_cmd = NULL;
ha->sns_cmd_dma = 0;
ha->ct_sns = NULL;
}
INIT_LIST_HEAD(&ha->fcports);
- vfree(ha->fw_dump);
- vfree(ha->fw_dump_buffer);
-
ha->fw_dump = NULL;
ha->fw_dumped = 0;
ha->fw_dump_reading = 0;
- ha->fw_dump_buffer = NULL;
vfree(ha->optrom_buffer);
}
};
MODULE_DEVICE_TABLE(pci, qla2xxx_pci_tbl);
-static int __devinit
-qla2xxx_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
-{
- return qla2x00_probe_one(pdev);
-}
-
-static void __devexit
-qla2xxx_remove_one(struct pci_dev *pdev)
-{
- qla2x00_remove_one(pdev);
-}
-
static struct pci_driver qla2xxx_pci_driver = {
.name = QLA2XXX_DRIVER_NAME,
.driver = {
.owner = THIS_MODULE,
},
.id_table = qla2xxx_pci_tbl,
- .probe = qla2xxx_probe_one,
- .remove = __devexit_p(qla2xxx_remove_one),
+ .probe = qla2x00_probe_one,
+ .remove = __devexit_p(qla2x00_remove_one),
};
-static inline int
-qla2x00_pci_module_init(void)
-{
- return pci_module_init(&qla2xxx_pci_driver);
-}
-
-static inline void
-qla2x00_pci_module_exit(void)
-{
- pci_unregister_driver(&qla2xxx_pci_driver);
-}
-
/**
* qla2x00_module_init - Module initialization.
**/
/* Derive version string. */
strcpy(qla2x00_version_str, QLA2XXX_VERSION);
-#if DEBUG_QLA2100
- strcat(qla2x00_version_str, "-debug");
-#endif
+ if (extended_error_logging)
+ strcat(qla2x00_version_str, "-debug");
+
qla2xxx_transport_template =
fc_attach_transport(&qla2xxx_transport_functions);
if (!qla2xxx_transport_template)
return -ENODEV;
printk(KERN_INFO "QLogic Fibre Channel HBA Driver\n");
- ret = qla2x00_pci_module_init();
+ ret = pci_register_driver(&qla2xxx_pci_driver);
if (ret) {
kmem_cache_destroy(srb_cachep);
fc_release_transport(qla2xxx_transport_template);
static void __exit
qla2x00_module_exit(void)
{
- qla2x00_pci_module_exit();
+ pci_unregister_driver(&qla2xxx_pci_driver);
qla2x00_release_firmware();
kmem_cache_destroy(srb_cachep);
fc_release_transport(qla2xxx_transport_template);
/*
* Driver version
*/
-#define QLA2XXX_VERSION "8.01.05-k2"
+#define QLA2XXX_VERSION "8.01.05-k3"
#define QLA_DRIVER_MAJOR_VER 8
#define QLA_DRIVER_MINOR_VER 1
* sanely maintain.
*/
if (request_irq(qpti->irq, qpti_intr,
- SA_SHIRQ, "Qlogic/PTI", qpti))
+ IRQF_SHARED, "Qlogic/PTI", qpti))
goto fail;
printk("qpti%d: IRQ %d ", qpti->qpti_id, qpti->irq);
probe_ent->port_ops = mv_port_info[board_idx].port_ops;
probe_ent->irq = pdev->irq;
- probe_ent->irq_flags = SA_SHIRQ;
+ probe_ent->irq_flags = IRQF_SHARED;
probe_ent->mmio_base = mmio_base;
probe_ent->private_data = hpriv;
probe_ent->port_ops = pdc_port_info[board_idx].port_ops;
probe_ent->irq = pdev->irq;
- probe_ent->irq_flags = SA_SHIRQ;
+ probe_ent->irq_flags = IRQF_SHARED;
probe_ent->mmio_base = mmio_base;
pdc_ata_setup_port(&probe_ent->port[0], base + 0x200);
probe_ent->port_ops = qs_port_info[board_idx].port_ops;
probe_ent->irq = pdev->irq;
- probe_ent->irq_flags = SA_SHIRQ;
+ probe_ent->irq_flags = IRQF_SHARED;
probe_ent->mmio_base = mmio_base;
probe_ent->n_ports = QS_PORTS;
probe_ent->mwdma_mask = sil_port_info[ent->driver_data].mwdma_mask;
probe_ent->udma_mask = sil_port_info[ent->driver_data].udma_mask;
probe_ent->irq = pdev->irq;
- probe_ent->irq_flags = SA_SHIRQ;
+ probe_ent->irq_flags = IRQF_SHARED;
probe_ent->host_flags = sil_port_info[ent->driver_data].host_flags;
mmio_base = pci_iomap(pdev, 5, 0);
probe_ent->n_ports = SIL24_FLAG2NPORTS(pinfo->host_flags);
probe_ent->irq = pdev->irq;
- probe_ent->irq_flags = SA_SHIRQ;
+ probe_ent->irq_flags = IRQF_SHARED;
probe_ent->mmio_base = port_base;
probe_ent->private_data = hpriv;
probe_ent->port_ops = &k2_sata_ops;
probe_ent->n_ports = 4;
probe_ent->irq = pdev->irq;
- probe_ent->irq_flags = SA_SHIRQ;
+ probe_ent->irq_flags = IRQF_SHARED;
probe_ent->mmio_base = mmio_base;
/* We don't care much about the PIO/UDMA masks, but the core won't like us
probe_ent->port_ops = pdc_port_info[board_idx].port_ops;
probe_ent->irq = pdev->irq;
- probe_ent->irq_flags = SA_SHIRQ;
+ probe_ent->irq_flags = IRQF_SHARED;
probe_ent->mmio_base = mmio_base;
probe_ent->private_data = hpriv;
probe_ent->port_ops = &svia_sata_ops;
probe_ent->n_ports = N_PORTS;
probe_ent->irq = pdev->irq;
- probe_ent->irq_flags = SA_SHIRQ;
+ probe_ent->irq_flags = IRQF_SHARED;
probe_ent->pio_mask = 0x1f;
probe_ent->mwdma_mask = 0x07;
probe_ent->udma_mask = 0x7f;
probe_ent->port_ops = &vsc_sata_ops;
probe_ent->n_ports = 4;
probe_ent->irq = pdev->irq;
- probe_ent->irq_flags = SA_SHIRQ;
+ probe_ent->irq_flags = IRQF_SHARED;
probe_ent->mmio_base = mmio_base;
/* We don't care much about the PIO/UDMA masks, but the core won't like us
#include "scsi_logging.h"
#include "scsi_debug.h"
-#define SCSI_DEBUG_VERSION "1.75"
-static const char * scsi_debug_version_date = "20050113";
+#define SCSI_DEBUG_VERSION "1.79"
+static const char * scsi_debug_version_date = "20060604";
/* Additional Sense Code (ASC) used */
-#define NO_ADDED_SENSE 0x0
+#define NO_ADDITIONAL_SENSE 0x0
+#define LOGICAL_UNIT_NOT_READY 0x4
#define UNRECOVERED_READ_ERR 0x11
+#define PARAMETER_LIST_LENGTH_ERR 0x1a
#define INVALID_OPCODE 0x20
#define ADDR_OUT_OF_RANGE 0x21
#define INVALID_FIELD_IN_CDB 0x24
+#define INVALID_FIELD_IN_PARAM_LIST 0x26
#define POWERON_RESET 0x29
#define SAVING_PARAMS_UNSUP 0x39
-#define THRESHHOLD_EXCEEDED 0x5d
+#define THRESHOLD_EXCEEDED 0x5d
+#define LOW_POWER_COND_ON 0x5e
#define SDEBUG_TAGGED_QUEUING 0 /* 0 | MSG_SIMPLE_TAG | MSG_ORDERED_TAG */
#define DEF_SCSI_LEVEL 5 /* INQUIRY, byte2 [5->SPC-3] */
#define DEF_PTYPE 0
#define DEF_D_SENSE 0
+#define DEF_NO_LUN_0 0
+#define DEF_VIRTUAL_GB 0
/* bit mask values for scsi_debug_opts */
#define SCSI_DEBUG_OPT_NOISE 1
/* If REPORT LUNS has luns >= 256 it can choose "flat space" (value 1)
* or "peripheral device" addressing (value 0) */
#define SAM2_LUN_ADDRESS_METHOD 0
+#define SAM2_WLUN_REPORT_LUNS 0xc101
static int scsi_debug_add_host = DEF_NUM_HOST;
static int scsi_debug_delay = DEF_DELAY;
static int scsi_debug_scsi_level = DEF_SCSI_LEVEL;
static int scsi_debug_ptype = DEF_PTYPE; /* SCSI peripheral type (0==disk) */
static int scsi_debug_dsense = DEF_D_SENSE;
+static int scsi_debug_no_lun_0 = DEF_NO_LUN_0;
+static int scsi_debug_virtual_gb = DEF_VIRTUAL_GB;
static int scsi_debug_cmnd_count = 0;
#define DEV_READONLY(TGT) (0)
#define DEV_REMOVEABLE(TGT) (0)
-static unsigned long sdebug_store_size; /* in bytes */
+static unsigned int sdebug_store_size; /* in bytes */
+static unsigned int sdebug_store_sectors;
static sector_t sdebug_capacity; /* in sectors */
/* old BIOS stuff, kernel may get rid of them but some mode sense pages
unsigned int target;
unsigned int lun;
struct sdebug_host_info *sdbg_host;
+ unsigned int wlun;
char reset;
+ char stopped;
char used;
};
.bios_param = scsi_debug_biosparam,
.can_queue = SCSI_DEBUG_CANQUEUE,
.this_id = 7,
- .sg_tablesize = 64,
- .cmd_per_lun = 3,
- .max_sectors = 4096,
+ .sg_tablesize = 256,
+ .cmd_per_lun = 16,
+ .max_sectors = 0xffff,
.unchecked_isa_dma = 0,
- .use_clustering = DISABLE_CLUSTERING,
+ .use_clustering = ENABLE_CLUSTERING,
.module = THIS_MODULE,
};
static const int check_condition_result =
(DRIVER_SENSE << 24) | SAM_STAT_CHECK_CONDITION;
+static unsigned char ctrl_m_pg[] = {0xa, 10, 2, 0, 0, 0, 0, 0,
+ 0, 0, 0x2, 0x4b};
+static unsigned char iec_m_pg[] = {0x1c, 0xa, 0x08, 0, 0, 0, 0, 0,
+ 0, 0, 0x0, 0x0};
+
/* function declarations */
static int resp_inquiry(struct scsi_cmnd * SCpnt, int target,
struct sdebug_dev_info * devip);
static int resp_requests(struct scsi_cmnd * SCpnt,
struct sdebug_dev_info * devip);
+static int resp_start_stop(struct scsi_cmnd * scp,
+ struct sdebug_dev_info * devip);
static int resp_readcap(struct scsi_cmnd * SCpnt,
struct sdebug_dev_info * devip);
-static int resp_mode_sense(struct scsi_cmnd * SCpnt, int target,
+static int resp_readcap16(struct scsi_cmnd * SCpnt,
+ struct sdebug_dev_info * devip);
+static int resp_mode_sense(struct scsi_cmnd * scp, int target,
struct sdebug_dev_info * devip);
-static int resp_read(struct scsi_cmnd * SCpnt, int upper_blk, int block,
- int num, struct sdebug_dev_info * devip);
-static int resp_write(struct scsi_cmnd * SCpnt, int upper_blk, int block,
- int num, struct sdebug_dev_info * devip);
+static int resp_mode_select(struct scsi_cmnd * scp, int mselect6,
+ struct sdebug_dev_info * devip);
+static int resp_log_sense(struct scsi_cmnd * scp,
+ struct sdebug_dev_info * devip);
+static int resp_read(struct scsi_cmnd * SCpnt, unsigned long long lba,
+ unsigned int num, struct sdebug_dev_info * devip);
+static int resp_write(struct scsi_cmnd * SCpnt, unsigned long long lba,
+ unsigned int num, struct sdebug_dev_info * devip);
static int resp_report_luns(struct scsi_cmnd * SCpnt,
struct sdebug_dev_info * devip);
static int fill_from_dev_buffer(struct scsi_cmnd * scp, unsigned char * arr,
static struct sdebug_dev_info * devInfoReg(struct scsi_device * sdev);
static void mk_sense_buffer(struct sdebug_dev_info * devip, int key,
int asc, int asq);
-static int check_reset(struct scsi_cmnd * SCpnt,
- struct sdebug_dev_info * devip);
+static int check_readiness(struct scsi_cmnd * SCpnt, int reset_only,
+ struct sdebug_dev_info * devip);
static int schedule_resp(struct scsi_cmnd * cmnd,
struct sdebug_dev_info * devip,
done_funct_t done, int scsi_result, int delta_jiff);
static void __init init_all_queued(void);
static void stop_all_queued(void);
static int stop_queued_cmnd(struct scsi_cmnd * cmnd);
-static int inquiry_evpd_83(unsigned char * arr, int dev_id_num,
- const char * dev_id_str, int dev_id_str_len);
+static int inquiry_evpd_83(unsigned char * arr, int target_dev_id,
+ int dev_id_num, const char * dev_id_str,
+ int dev_id_str_len);
+static int inquiry_evpd_88(unsigned char * arr, int target_dev_id);
static void do_create_driverfs_files(void);
static void do_remove_driverfs_files(void);
int scsi_debug_queuecommand(struct scsi_cmnd * SCpnt, done_funct_t done)
{
unsigned char *cmd = (unsigned char *) SCpnt->cmnd;
- int block, upper_blk, num, k;
+ int len, k, j;
+ unsigned int num;
+ unsigned long long lba;
int errsts = 0;
- int target = scmd_id(SCpnt);
+ int target = SCpnt->device->id;
struct sdebug_dev_info * devip = NULL;
int inj_recovered = 0;
+ int delay_override = 0;
if (done == NULL)
return 0; /* assume mid level reprocessing command */
+ SCpnt->resid = 0;
if ((SCSI_DEBUG_OPT_NOISE & scsi_debug_opts) && cmd) {
printk(KERN_INFO "scsi_debug: cmd ");
- for (k = 0, num = SCpnt->cmd_len; k < num; ++k)
+ for (k = 0, len = SCpnt->cmd_len; k < len; ++k)
printk("%02x ", (int)cmd[k]);
printk("\n");
}
DID_NO_CONNECT << 16, 0);
}
- if (SCpnt->device->lun >= scsi_debug_max_luns)
+ if ((SCpnt->device->lun >= scsi_debug_max_luns) &&
+ (SCpnt->device->lun != SAM2_WLUN_REPORT_LUNS))
return schedule_resp(SCpnt, NULL, done,
DID_NO_CONNECT << 16, 0);
devip = devInfoReg(SCpnt->device);
inj_recovered = 1; /* to reads and writes below */
}
+ if (devip->wlun) {
+ switch (*cmd) {
+ case INQUIRY:
+ case REQUEST_SENSE:
+ case TEST_UNIT_READY:
+ case REPORT_LUNS:
+ break; /* only allowable wlun commands */
+ default:
+ if (SCSI_DEBUG_OPT_NOISE & scsi_debug_opts)
+ printk(KERN_INFO "scsi_debug: Opcode: 0x%x "
+ "not supported for wlun\n", *cmd);
+ mk_sense_buffer(devip, ILLEGAL_REQUEST,
+ INVALID_OPCODE, 0);
+ errsts = check_condition_result;
+ return schedule_resp(SCpnt, devip, done, errsts,
+ 0);
+ }
+ }
+
switch (*cmd) {
case INQUIRY: /* mandatory, ignore unit attention */
+ delay_override = 1;
errsts = resp_inquiry(SCpnt, target, devip);
break;
case REQUEST_SENSE: /* mandatory, ignore unit attention */
+ delay_override = 1;
errsts = resp_requests(SCpnt, devip);
break;
case REZERO_UNIT: /* actually this is REWIND for SSC */
case START_STOP:
- errsts = check_reset(SCpnt, devip);
+ errsts = resp_start_stop(SCpnt, devip);
break;
case ALLOW_MEDIUM_REMOVAL:
- if ((errsts = check_reset(SCpnt, devip)))
+ if ((errsts = check_readiness(SCpnt, 1, devip)))
break;
if (SCSI_DEBUG_OPT_NOISE & scsi_debug_opts)
printk(KERN_INFO "scsi_debug: Medium removal %s\n",
cmd[4] ? "inhibited" : "enabled");
break;
case SEND_DIAGNOSTIC: /* mandatory */
- errsts = check_reset(SCpnt, devip);
+ errsts = check_readiness(SCpnt, 1, devip);
break;
case TEST_UNIT_READY: /* mandatory */
- errsts = check_reset(SCpnt, devip);
+ delay_override = 1;
+ errsts = check_readiness(SCpnt, 0, devip);
break;
case RESERVE:
- errsts = check_reset(SCpnt, devip);
+ errsts = check_readiness(SCpnt, 1, devip);
break;
case RESERVE_10:
- errsts = check_reset(SCpnt, devip);
+ errsts = check_readiness(SCpnt, 1, devip);
break;
case RELEASE:
- errsts = check_reset(SCpnt, devip);
+ errsts = check_readiness(SCpnt, 1, devip);
break;
case RELEASE_10:
- errsts = check_reset(SCpnt, devip);
+ errsts = check_readiness(SCpnt, 1, devip);
break;
case READ_CAPACITY:
errsts = resp_readcap(SCpnt, devip);
break;
+ case SERVICE_ACTION_IN:
+ if (SAI_READ_CAPACITY_16 != cmd[1]) {
+ mk_sense_buffer(devip, ILLEGAL_REQUEST,
+ INVALID_OPCODE, 0);
+ errsts = check_condition_result;
+ break;
+ }
+ errsts = resp_readcap16(SCpnt, devip);
+ break;
case READ_16:
case READ_12:
case READ_10:
case READ_6:
- if ((errsts = check_reset(SCpnt, devip)))
+ if ((errsts = check_readiness(SCpnt, 0, devip)))
break;
- upper_blk = 0;
if ((*cmd) == READ_16) {
- upper_blk = cmd[5] + (cmd[4] << 8) +
- (cmd[3] << 16) + (cmd[2] << 24);
- block = cmd[9] + (cmd[8] << 8) +
- (cmd[7] << 16) + (cmd[6] << 24);
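+ /* READ(16): build the 64 bit LBA from the eight big-endian CDB bytes 2..9 */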
+ for (lba = 0, j = 0; j < 8; ++j) {
+ if (j > 0)
+ lba <<= 8;
+ lba += cmd[2 + j];
+ }
num = cmd[13] + (cmd[12] << 8) +
(cmd[11] << 16) + (cmd[10] << 24);
} else if ((*cmd) == READ_12) {
- block = cmd[5] + (cmd[4] << 8) +
+ lba = cmd[5] + (cmd[4] << 8) +
(cmd[3] << 16) + (cmd[2] << 24);
num = cmd[9] + (cmd[8] << 8) +
(cmd[7] << 16) + (cmd[6] << 24);
} else if ((*cmd) == READ_10) {
- block = cmd[5] + (cmd[4] << 8) +
+ lba = cmd[5] + (cmd[4] << 8) +
(cmd[3] << 16) + (cmd[2] << 24);
num = cmd[8] + (cmd[7] << 8);
- } else {
- block = cmd[3] + (cmd[2] << 8) +
+ } else { /* READ (6) */
+ lba = cmd[3] + (cmd[2] << 8) +
((cmd[1] & 0x1f) << 16);
- num = cmd[4];
+ num = (0 == cmd[4]) ? 256 : cmd[4];
}
- errsts = resp_read(SCpnt, upper_blk, block, num, devip);
+ errsts = resp_read(SCpnt, lba, num, devip);
if (inj_recovered && (0 == errsts)) {
mk_sense_buffer(devip, RECOVERED_ERROR,
- THRESHHOLD_EXCEEDED, 0);
+ THRESHOLD_EXCEEDED, 0);
errsts = check_condition_result;
}
break;
case REPORT_LUNS: /* mandatory, ignore unit attention */
+ delay_override = 1;
errsts = resp_report_luns(SCpnt, devip);
break;
case VERIFY: /* 10 byte SBC-2 command */
- errsts = check_reset(SCpnt, devip);
+ errsts = check_readiness(SCpnt, 0, devip);
break;
case WRITE_16:
case WRITE_12:
case WRITE_10:
case WRITE_6:
- if ((errsts = check_reset(SCpnt, devip)))
+ if ((errsts = check_readiness(SCpnt, 0, devip)))
break;
- upper_blk = 0;
if ((*cmd) == WRITE_16) {
- upper_blk = cmd[5] + (cmd[4] << 8) +
- (cmd[3] << 16) + (cmd[2] << 24);
- block = cmd[9] + (cmd[8] << 8) +
- (cmd[7] << 16) + (cmd[6] << 24);
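+ /* WRITE(16): build the 64 bit LBA from the eight big-endian CDB bytes 2..9 */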
+ for (lba = 0, j = 0; j < 8; ++j) {
+ if (j > 0)
+ lba <<= 8;
+ lba += cmd[2 + j];
+ }
num = cmd[13] + (cmd[12] << 8) +
(cmd[11] << 16) + (cmd[10] << 24);
} else if ((*cmd) == WRITE_12) {
- block = cmd[5] + (cmd[4] << 8) +
+ lba = cmd[5] + (cmd[4] << 8) +
(cmd[3] << 16) + (cmd[2] << 24);
num = cmd[9] + (cmd[8] << 8) +
(cmd[7] << 16) + (cmd[6] << 24);
} else if ((*cmd) == WRITE_10) {
- block = cmd[5] + (cmd[4] << 8) +
+ lba = cmd[5] + (cmd[4] << 8) +
(cmd[3] << 16) + (cmd[2] << 24);
num = cmd[8] + (cmd[7] << 8);
- } else {
- block = cmd[3] + (cmd[2] << 8) +
+ } else { /* WRITE (6) */
+ lba = cmd[3] + (cmd[2] << 8) +
((cmd[1] & 0x1f) << 16);
- num = cmd[4];
+ num = (0 == cmd[4]) ? 256 : cmd[4];
}
- errsts = resp_write(SCpnt, upper_blk, block, num, devip);
+ errsts = resp_write(SCpnt, lba, num, devip);
if (inj_recovered && (0 == errsts)) {
mk_sense_buffer(devip, RECOVERED_ERROR,
- THRESHHOLD_EXCEEDED, 0);
+ THRESHOLD_EXCEEDED, 0);
errsts = check_condition_result;
}
break;
case MODE_SENSE_10:
errsts = resp_mode_sense(SCpnt, target, devip);
break;
+ case MODE_SELECT:
+ errsts = resp_mode_select(SCpnt, 1, devip);
+ break;
+ case MODE_SELECT_10:
+ errsts = resp_mode_select(SCpnt, 0, devip);
+ break;
+ case LOG_SENSE:
+ errsts = resp_log_sense(SCpnt, devip);
+ break;
case SYNCHRONIZE_CACHE:
- errsts = check_reset(SCpnt, devip);
+ delay_override = 1;
+ errsts = check_readiness(SCpnt, 0, devip);
break;
default:
if (SCSI_DEBUG_OPT_NOISE & scsi_debug_opts)
printk(KERN_INFO "scsi_debug: Opcode: 0x%x not "
"supported\n", *cmd);
- if ((errsts = check_reset(SCpnt, devip)))
+ if ((errsts = check_readiness(SCpnt, 1, devip)))
break; /* Unit attention takes precedence */
mk_sense_buffer(devip, ILLEGAL_REQUEST, INVALID_OPCODE, 0);
errsts = check_condition_result;
break;
}
- return schedule_resp(SCpnt, devip, done, errsts, scsi_debug_delay);
+ return schedule_resp(SCpnt, devip, done, errsts,
+ (delay_override ? 0 : scsi_debug_delay));
}
static int scsi_debug_ioctl(struct scsi_device *dev, int cmd, void __user *arg)
/* return -ENOTTY; // correct return but upsets fdisk */
}
-static int check_reset(struct scsi_cmnd * SCpnt, struct sdebug_dev_info * devip)
+static int check_readiness(struct scsi_cmnd * SCpnt, int reset_only,
+ struct sdebug_dev_info * devip)
{
if (devip->reset) {
if (SCSI_DEBUG_OPT_NOISE & scsi_debug_opts)
mk_sense_buffer(devip, UNIT_ATTENTION, POWERON_RESET, 0);
return check_condition_result;
}
+ if ((0 == reset_only) && devip->stopped) {
+ if (SCSI_DEBUG_OPT_NOISE & scsi_debug_opts)
+ printk(KERN_INFO "scsi_debug: Reporting Not "
+ "ready: initializing command required\n");
+ mk_sense_buffer(devip, NOT_READY, LOGICAL_UNIT_NOT_READY,
+ 0x2);
+ return check_condition_result;
+ }
return 0;
}
req_len = scp->request_bufflen;
act_len = (req_len < arr_len) ? req_len : arr_len;
memcpy(scp->request_buffer, arr, act_len);
- scp->resid = req_len - act_len;
+ if (scp->resid)
+ scp->resid -= act_len;
+ else
+ scp->resid = req_len - act_len;
return 0;
}
sgpnt = (struct scatterlist *)scp->request_buffer;
}
req_len += sgpnt->length;
}
- scp->resid = req_len - act_len;
+ if (scp->resid)
+ scp->resid -= act_len;
+ else
+ scp->resid = req_len - act_len;
return 0;
}
static const char * inq_product_id = "scsi_debug ";
static const char * inq_product_rev = "0004";
-static int inquiry_evpd_83(unsigned char * arr, int dev_id_num,
- const char * dev_id_str, int dev_id_str_len)
+static int inquiry_evpd_83(unsigned char * arr, int target_dev_id,
+ int dev_id_num, const char * dev_id_str,
+ int dev_id_str_len)
{
- int num;
+ int num, port_a;
+ char b[32];
- /* Two identification descriptors: */
+ port_a = target_dev_id + 1;
/* T10 vendor identifier field format (faked) */
arr[0] = 0x2; /* ASCII */
arr[1] = 0x1;
num = 8 + 16 + dev_id_str_len;
arr[3] = num;
num += 4;
- /* NAA IEEE registered identifier (faked) */
- arr[num] = 0x1; /* binary */
- arr[num + 1] = 0x3;
- arr[num + 2] = 0x0;
- arr[num + 3] = 0x8;
- arr[num + 4] = 0x51; /* ieee company id=0x123456 (faked) */
- arr[num + 5] = 0x23;
- arr[num + 6] = 0x45;
- arr[num + 7] = 0x60;
- arr[num + 8] = (dev_id_num >> 24);
- arr[num + 9] = (dev_id_num >> 16) & 0xff;
- arr[num + 10] = (dev_id_num >> 8) & 0xff;
- arr[num + 11] = dev_id_num & 0xff;
- return num + 12;
+ if (dev_id_num >= 0) {
+ /* NAA-5, Logical unit identifier (binary) */
+ arr[num++] = 0x1; /* binary (not necessarily sas) */
+ arr[num++] = 0x3; /* PIV=0, lu, naa */
+ arr[num++] = 0x0;
+ arr[num++] = 0x8;
+ arr[num++] = 0x53; /* naa-5 ieee company id=0x333333 (fake) */
+ arr[num++] = 0x33;
+ arr[num++] = 0x33;
+ arr[num++] = 0x30;
+ arr[num++] = (dev_id_num >> 24);
+ arr[num++] = (dev_id_num >> 16) & 0xff;
+ arr[num++] = (dev_id_num >> 8) & 0xff;
+ arr[num++] = dev_id_num & 0xff;
+ /* Target relative port number */
+ arr[num++] = 0x61; /* proto=sas, binary */
+ arr[num++] = 0x94; /* PIV=1, target port, rel port */
+ arr[num++] = 0x0; /* reserved */
+ arr[num++] = 0x4; /* length */
+ arr[num++] = 0x0; /* reserved */
+ arr[num++] = 0x0; /* reserved */
+ arr[num++] = 0x0;
+ arr[num++] = 0x1; /* relative port A */
+ }
+ /* NAA-5, Target port identifier */
+ arr[num++] = 0x61; /* proto=sas, binary */
+ arr[num++] = 0x93; /* piv=1, target port, naa */
+ arr[num++] = 0x0;
+ arr[num++] = 0x8;
+ arr[num++] = 0x52; /* naa-5, company id=0x222222 (fake) */
+ arr[num++] = 0x22;
+ arr[num++] = 0x22;
+ arr[num++] = 0x20;
+ arr[num++] = (port_a >> 24);
+ arr[num++] = (port_a >> 16) & 0xff;
+ arr[num++] = (port_a >> 8) & 0xff;
+ arr[num++] = port_a & 0xff;
+ /* NAA-5, Target device identifier */
+ arr[num++] = 0x61; /* proto=sas, binary */
+ arr[num++] = 0xa3; /* piv=1, target device, naa */
+ arr[num++] = 0x0;
+ arr[num++] = 0x8;
+ arr[num++] = 0x52; /* naa-5, company id=0x222222 (fake) */
+ arr[num++] = 0x22;
+ arr[num++] = 0x22;
+ arr[num++] = 0x20;
+ arr[num++] = (target_dev_id >> 24);
+ arr[num++] = (target_dev_id >> 16) & 0xff;
+ arr[num++] = (target_dev_id >> 8) & 0xff;
+ arr[num++] = target_dev_id & 0xff;
+ /* SCSI name string: Target device identifier */
+ arr[num++] = 0x63; /* proto=sas, UTF-8 */
+ arr[num++] = 0xa8; /* piv=1, target device, SCSI name string */
+ arr[num++] = 0x0;
+ arr[num++] = 24;
+ memcpy(arr + num, "naa.52222220", 12);
+ num += 12;
+ snprintf(b, sizeof(b), "%08X", target_dev_id);
+ memcpy(arr + num, b, 8);
+ num += 8;
+ memset(arr + num, 0, 4);
+ num += 4;
+ return num;
+}
+
+
+static unsigned char vpd84_data[] = {
+/* from 4th byte */ 0x22,0x22,0x22,0x0,0xbb,0x0,
+ 0x22,0x22,0x22,0x0,0xbb,0x1,
+ 0x22,0x22,0x22,0x0,0xbb,0x2,
+};
+
+static int inquiry_evpd_84(unsigned char * arr)
+{
+ memcpy(arr, vpd84_data, sizeof(vpd84_data));
+ return sizeof(vpd84_data);
+}
+
+static int inquiry_evpd_85(unsigned char * arr)
+{
+ int num = 0;
+ const char * na1 = "https://www.kernel.org/config";
+ const char * na2 = "http://www.kernel.org/log";
+ int plen, olen;
+
+ arr[num++] = 0x1; /* lu, storage config */
+ arr[num++] = 0x0; /* reserved */
+ arr[num++] = 0x0;
+ olen = strlen(na1);
+ plen = olen + 1;
+ if (plen % 4)
+ plen = ((plen / 4) + 1) * 4;
+ arr[num++] = plen; /* length, null terminated, padded */
+ memcpy(arr + num, na1, olen);
+ memset(arr + num + olen, 0, plen - olen);
+ num += plen;
+
+ arr[num++] = 0x4; /* lu, logging */
+ arr[num++] = 0x0; /* reserved */
+ arr[num++] = 0x0;
+ olen = strlen(na2);
+ plen = olen + 1;
+ if (plen % 4)
+ plen = ((plen / 4) + 1) * 4;
+ arr[num++] = plen; /* length, null terminated, padded */
+ memcpy(arr + num, na2, olen);
+ memset(arr + num + olen, 0, plen - olen);
+ num += plen;
+
+ return num;
+}
+
+/* SCSI ports VPD page */
+static int inquiry_evpd_88(unsigned char * arr, int target_dev_id)
+{
+ int num = 0;
+ int port_a, port_b;
+
+ port_a = target_dev_id + 1;
+ port_b = port_a + 1;
+ arr[num++] = 0x0; /* reserved */
+ arr[num++] = 0x0; /* reserved */
+ arr[num++] = 0x0;
+ arr[num++] = 0x1; /* relative port 1 (primary) */
+ memset(arr + num, 0, 6);
+ num += 6;
+ arr[num++] = 0x0;
+ arr[num++] = 12; /* length tp descriptor */
+ /* naa-5 target port identifier (A) */
+ arr[num++] = 0x61; /* proto=sas, binary */
+ arr[num++] = 0x93; /* PIV=1, target port, NAA */
+ arr[num++] = 0x0; /* reserved */
+ arr[num++] = 0x8; /* length */
+ arr[num++] = 0x52; /* NAA-5, company_id=0x222222 (fake) */
+ arr[num++] = 0x22;
+ arr[num++] = 0x22;
+ arr[num++] = 0x20;
+ arr[num++] = (port_a >> 24);
+ arr[num++] = (port_a >> 16) & 0xff;
+ arr[num++] = (port_a >> 8) & 0xff;
+ arr[num++] = port_a & 0xff;
+
+ arr[num++] = 0x0; /* reserved */
+ arr[num++] = 0x0; /* reserved */
+ arr[num++] = 0x0;
+ arr[num++] = 0x2; /* relative port 2 (secondary) */
+ memset(arr + num, 0, 6);
+ num += 6;
+ arr[num++] = 0x0;
+ arr[num++] = 12; /* length tp descriptor */
+ /* naa-5 target port identifier (B) */
+ arr[num++] = 0x61; /* proto=sas, binary */
+ arr[num++] = 0x93; /* PIV=1, target port, NAA */
+ arr[num++] = 0x0; /* reserved */
+ arr[num++] = 0x8; /* length */
+ arr[num++] = 0x52; /* NAA-5, company_id=0x222222 (fake) */
+ arr[num++] = 0x22;
+ arr[num++] = 0x22;
+ arr[num++] = 0x20;
+ arr[num++] = (port_b >> 24);
+ arr[num++] = (port_b >> 16) & 0xff;
+ arr[num++] = (port_b >> 8) & 0xff;
+ arr[num++] = port_b & 0xff;
+
+ return num;
+}
+
+
+static unsigned char vpd89_data[] = {
+/* from 4th byte */ 0,0,0,0,
+'l','i','n','u','x',' ',' ',' ',
+'S','A','T',' ','s','c','s','i','_','d','e','b','u','g',' ',' ',
+'1','2','3','4',
+0x34,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,
+0xec,0,0,0,
+0x5a,0xc,0xff,0x3f,0x37,0xc8,0x10,0,0,0,0,0,0x3f,0,0,0,
+0,0,0,0,0x58,0x58,0x58,0x58,0x58,0x58,0x58,0x58,0x20,0x20,0x20,0x20,
+0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0,0,0,0x40,0x4,0,0x2e,0x33,
+0x38,0x31,0x20,0x20,0x20,0x20,0x54,0x53,0x38,0x33,0x30,0x30,0x33,0x31,
+0x53,0x41,
+0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,
+0x20,0x20,
+0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,
+0x10,0x80,
+0,0,0,0x2f,0,0,0,0x2,0,0x2,0x7,0,0xff,0xff,0x1,0,
+0x3f,0,0xc1,0xff,0x3e,0,0x10,0x1,0xb0,0xf8,0x50,0x9,0,0,0x7,0,
+0x3,0,0x78,0,0x78,0,0xf0,0,0x78,0,0,0,0,0,0,0,
+0,0,0,0,0,0,0,0,0x2,0,0,0,0,0,0,0,
+0x7e,0,0x1b,0,0x6b,0x34,0x1,0x7d,0x3,0x40,0x69,0x34,0x1,0x3c,0x3,0x40,
+0x7f,0x40,0,0,0,0,0xfe,0xfe,0,0,0,0,0,0xfe,0,0,
+0,0,0,0,0,0,0,0,0xb0,0xf8,0x50,0x9,0,0,0,0,
+0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
+0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
+0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
+0x1,0,0xb0,0xf8,0x50,0x9,0xb0,0xf8,0x50,0x9,0x20,0x20,0x2,0,0xb6,0x42,
+0,0x80,0x8a,0,0x6,0x3c,0xa,0x3c,0xff,0xff,0xc6,0x7,0,0x1,0,0x8,
+0xf0,0xf,0,0x10,0x2,0,0x30,0,0,0,0,0,0,0,0x6,0xfe,
+0,0,0x2,0,0x50,0,0x8a,0,0x4f,0x95,0,0,0x21,0,0xb,0,
+0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
+0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
+0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
+0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
+0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
+0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
+0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
+0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
+0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
+0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
+0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
+0,0,0,0,0,0,0,0,0,0,0,0,0,0,0xa5,0x51,
+};
+
+static int inquiry_evpd_89(unsigned char * arr)
+{
+ memcpy(arr, vpd89_data, sizeof(vpd89_data));
+ return sizeof(vpd89_data);
+}
+
+
+static unsigned char vpdb0_data[] = {
+ /* from 4th byte */ 0,0,0,4,
+ 0,0,0x4,0,
+ 0,0,0,64,
+};
+
+static int inquiry_evpd_b0(unsigned char * arr)
+{
+ memcpy(arr, vpdb0_data, sizeof(vpdb0_data));
+ if (sdebug_store_sectors > 0x400) {
+ arr[4] = (sdebug_store_sectors >> 24) & 0xff;
+ arr[5] = (sdebug_store_sectors >> 16) & 0xff;
+ arr[6] = (sdebug_store_sectors >> 8) & 0xff;
+ arr[7] = sdebug_store_sectors & 0xff;
+ }
+ return sizeof(vpdb0_data);
}
#define SDEBUG_LONG_INQ_SZ 96
-#define SDEBUG_MAX_INQ_ARR_SZ 128
+#define SDEBUG_MAX_INQ_ARR_SZ 584
static int resp_inquiry(struct scsi_cmnd * scp, int target,
struct sdebug_dev_info * devip)
unsigned char pq_pdt;
unsigned char arr[SDEBUG_MAX_INQ_ARR_SZ];
unsigned char *cmd = (unsigned char *)scp->cmnd;
- int alloc_len;
+ int alloc_len, n;
alloc_len = (cmd[3] << 8) + cmd[4];
memset(arr, 0, SDEBUG_MAX_INQ_ARR_SZ);
- pq_pdt = (scsi_debug_ptype & 0x1f);
+ if (devip->wlun)
+ pq_pdt = 0x1e; /* present, wlun */
+ else if (scsi_debug_no_lun_0 && (0 == devip->lun))
+ pq_pdt = 0x7f; /* not present, no device type */
+ else
+ pq_pdt = (scsi_debug_ptype & 0x1f);
arr[0] = pq_pdt;
if (0x2 & cmd[1]) { /* CMDDT bit set */
mk_sense_buffer(devip, ILLEGAL_REQUEST, INVALID_FIELD_IN_CDB,
0);
return check_condition_result;
} else if (0x1 & cmd[1]) { /* EVPD bit set */
- int dev_id_num, len;
- char dev_id_str[6];
+ int lu_id_num, target_dev_id, len;
+ char lu_id_str[6];
+ int host_no = devip->sdbg_host->shost->host_no;
- dev_id_num = ((devip->sdbg_host->shost->host_no + 1) * 2000) +
- (devip->target * 1000) + devip->lun;
- len = scnprintf(dev_id_str, 6, "%d", dev_id_num);
+ lu_id_num = devip->wlun ? -1 : (((host_no + 1) * 2000) +
+ (devip->target * 1000) + devip->lun);
+ target_dev_id = ((host_no + 1) * 2000) +
+ (devip->target * 1000) - 3;
+ len = scnprintf(lu_id_str, 6, "%d", lu_id_num);
if (0 == cmd[2]) { /* supported vital product data pages */
- arr[3] = 3;
- arr[4] = 0x0; /* this page */
- arr[5] = 0x80; /* unit serial number */
- arr[6] = 0x83; /* device identification */
+ arr[1] = cmd[2]; /*sanity */
+ n = 4;
+ arr[n++] = 0x0; /* this page */
+ arr[n++] = 0x80; /* unit serial number */
+ arr[n++] = 0x83; /* device identification */
+ arr[n++] = 0x84; /* software interface ident. */
+ arr[n++] = 0x85; /* management network addresses */
+ arr[n++] = 0x86; /* extended inquiry */
+ arr[n++] = 0x87; /* mode page policy */
+ arr[n++] = 0x88; /* SCSI ports */
+ arr[n++] = 0x89; /* ATA information */
+ arr[n++] = 0xb0; /* Block limits (SBC) */
+ arr[3] = n - 4; /* number of supported VPD pages */
} else if (0x80 == cmd[2]) { /* unit serial number */
- arr[1] = 0x80;
+ arr[1] = cmd[2]; /*sanity */
arr[3] = len;
- memcpy(&arr[4], dev_id_str, len);
+ memcpy(&arr[4], lu_id_str, len);
} else if (0x83 == cmd[2]) { /* device identification */
- arr[1] = 0x83;
- arr[3] = inquiry_evpd_83(&arr[4], dev_id_num,
- dev_id_str, len);
+ arr[1] = cmd[2]; /*sanity */
+ arr[3] = inquiry_evpd_83(&arr[4], target_dev_id,
+ lu_id_num, lu_id_str, len);
+ } else if (0x84 == cmd[2]) { /* Software interface ident. */
+ arr[1] = cmd[2]; /*sanity */
+ arr[3] = inquiry_evpd_84(&arr[4]);
+ } else if (0x85 == cmd[2]) { /* Management network addresses */
+ arr[1] = cmd[2]; /*sanity */
+ arr[3] = inquiry_evpd_85(&arr[4]);
+ } else if (0x86 == cmd[2]) { /* extended inquiry */
+ arr[1] = cmd[2]; /*sanity */
+ arr[3] = 0x3c; /* number of following entries */
+ arr[4] = 0x0; /* no protection stuff */
+ arr[5] = 0x7; /* head of q, ordered + simple q's */
+ } else if (0x87 == cmd[2]) { /* mode page policy */
+ arr[1] = cmd[2]; /*sanity */
+ arr[3] = 0x8; /* number of following entries */
+ arr[4] = 0x2; /* disconnect-reconnect mp */
+ arr[6] = 0x80; /* mlus, shared */
+ arr[8] = 0x18; /* protocol specific lu */
+ arr[10] = 0x82; /* mlus, per initiator port */
+ } else if (0x88 == cmd[2]) { /* SCSI Ports */
+ arr[1] = cmd[2]; /*sanity */
+ arr[3] = inquiry_evpd_88(&arr[4], target_dev_id);
+ } else if (0x89 == cmd[2]) { /* ATA information */
+ arr[1] = cmd[2]; /*sanity */
+ n = inquiry_evpd_89(&arr[4]);
+ arr[2] = (n >> 8);
+ arr[3] = (n & 0xff);
+ } else if (0xb0 == cmd[2]) { /* Block limits (SBC) */
+ arr[1] = cmd[2]; /*sanity */
+ arr[3] = inquiry_evpd_b0(&arr[4]);
} else {
/* Illegal request, invalid field in cdb */
mk_sense_buffer(devip, ILLEGAL_REQUEST,
INVALID_FIELD_IN_CDB, 0);
return check_condition_result;
}
+ len = min(((arr[2] << 8) + arr[3]) + 4, alloc_len);
return fill_from_dev_buffer(scp, arr,
- min(alloc_len, SDEBUG_MAX_INQ_ARR_SZ));
+ min(len, SDEBUG_MAX_INQ_ARR_SZ));
}
/* drops through here for a standard inquiry */
arr[1] = DEV_REMOVEABLE(target) ? 0x80 : 0; /* Removable disk */
arr[2] = scsi_debug_scsi_level;
arr[3] = 2; /* response_data_format==2 */
arr[4] = SDEBUG_LONG_INQ_SZ - 5;
- arr[6] = 0x1; /* claim: ADDR16 */
+ arr[6] = 0x10; /* claim: MultiP */
/* arr[6] |= 0x40; ... claim: EncServ (enclosure services) */
- arr[7] = 0x3a; /* claim: WBUS16, SYNC, LINKED + CMDQUE */
+ arr[7] = 0xa; /* claim: LINKED + CMDQUE */
memcpy(&arr[8], inq_vendor_id, 8);
memcpy(&arr[16], inq_product_id, 16);
memcpy(&arr[32], inq_product_rev, 4);
/* version descriptors (2 bytes each) follow */
- arr[58] = 0x0; arr[59] = 0x40; /* SAM-2 */
- arr[60] = 0x3; arr[61] = 0x0; /* SPC-3 */
+ arr[58] = 0x0; arr[59] = 0x77; /* SAM-3 ANSI */
+ arr[60] = 0x3; arr[61] = 0x14; /* SPC-3 ANSI */
+ n = 62;
if (scsi_debug_ptype == 0) {
- arr[62] = 0x1; arr[63] = 0x80; /* SBC */
+ arr[n++] = 0x3; arr[n++] = 0x3d; /* SBC-2 ANSI */
} else if (scsi_debug_ptype == 1) {
- arr[62] = 0x2; arr[63] = 0x00; /* SSC */
+ arr[n++] = 0x3; arr[n++] = 0x60; /* SSC-2 no version */
}
+ arr[n++] = 0xc; arr[n++] = 0xf; /* SAS-1.1 rev 10 */
return fill_from_dev_buffer(scp, arr,
min(alloc_len, SDEBUG_LONG_INQ_SZ));
}
unsigned char * sbuff;
unsigned char *cmd = (unsigned char *)scp->cmnd;
unsigned char arr[SDEBUG_SENSE_LEN];
+ int want_dsense;
int len = 18;
- memset(arr, 0, SDEBUG_SENSE_LEN);
+ memset(arr, 0, sizeof(arr));
if (devip->reset == 1)
- mk_sense_buffer(devip, 0, NO_ADDED_SENSE, 0);
+ mk_sense_buffer(devip, 0, NO_ADDITIONAL_SENSE, 0);
+ want_dsense = !!(cmd[1] & 1) || scsi_debug_dsense;
sbuff = devip->sense_buff;
- if ((cmd[1] & 1) && (! scsi_debug_dsense)) {
- /* DESC bit set and sense_buff in fixed format */
- arr[0] = 0x72;
- arr[1] = sbuff[2]; /* sense key */
- arr[2] = sbuff[12]; /* asc */
- arr[3] = sbuff[13]; /* ascq */
- len = 8;
- } else
+ if ((iec_m_pg[2] & 0x4) && (6 == (iec_m_pg[3] & 0xf))) {
+ if (want_dsense) {
+ arr[0] = 0x72;
+ arr[1] = 0x0; /* NO_SENSE in sense_key */
+ arr[2] = THRESHOLD_EXCEEDED;
+ arr[3] = 0xff; /* TEST set and MRIE==6 */
+ } else {
+ arr[0] = 0x70;
+ arr[2] = 0x0; /* NO_SENSE in sense_key */
+ arr[7] = 0xa; /* 18 byte sense buffer */
+ arr[12] = THRESHOLD_EXCEEDED;
+ arr[13] = 0xff; /* TEST set and MRIE==6 */
+ }
+ } else if (devip->stopped) {
+ if (want_dsense) {
+ arr[0] = 0x72;
+ arr[1] = 0x0; /* NO_SENSE in sense_key */
+ arr[2] = LOW_POWER_COND_ON;
+ arr[3] = 0x0; /* TEST set and MRIE==6 */
+ } else {
+ arr[0] = 0x70;
+ arr[2] = 0x0; /* NO_SENSE in sense_key */
+ arr[7] = 0xa; /* 18 byte sense buffer */
+ arr[12] = LOW_POWER_COND_ON;
+ arr[13] = 0x0; /* TEST set and MRIE==6 */
+ }
+ } else {
memcpy(arr, sbuff, SDEBUG_SENSE_LEN);
- mk_sense_buffer(devip, 0, NO_ADDED_SENSE, 0);
+ if ((cmd[1] & 1) && (! scsi_debug_dsense)) {
+ /* DESC bit set and sense_buff in fixed format */
+ memset(arr, 0, sizeof(arr));
+ arr[0] = 0x72;
+ arr[1] = sbuff[2]; /* sense key */
+ arr[2] = sbuff[12]; /* asc */
+ arr[3] = sbuff[13]; /* ascq */
+ len = 8;
+ }
+ }
+ mk_sense_buffer(devip, 0, NO_ADDITIONAL_SENSE, 0);
return fill_from_dev_buffer(scp, arr, len);
}
+static int resp_start_stop(struct scsi_cmnd * scp,
+ struct sdebug_dev_info * devip)
+{
+ unsigned char *cmd = (unsigned char *)scp->cmnd;
+ int power_cond, errsts, start;
+
+ if ((errsts = check_readiness(scp, 1, devip)))
+ return errsts;
+ power_cond = (cmd[4] & 0xf0) >> 4;
+ if (power_cond) {
+ mk_sense_buffer(devip, ILLEGAL_REQUEST, INVALID_FIELD_IN_CDB,
+ 0);
+ return check_condition_result;
+ }
+ start = cmd[4] & 1;
+ if (start == devip->stopped)
+ devip->stopped = !start;
+ return 0;
+}
+
#define SDEBUG_READCAP_ARR_SZ 8
static int resp_readcap(struct scsi_cmnd * scp,
struct sdebug_dev_info * devip)
{
unsigned char arr[SDEBUG_READCAP_ARR_SZ];
- unsigned long capac;
+ unsigned int capac;
int errsts;
- if ((errsts = check_reset(scp, devip)))
+ if ((errsts = check_readiness(scp, 1, devip)))
return errsts;
+ /* following just in case virtual_gb changed */
+ if (scsi_debug_virtual_gb > 0) {
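+ /* 2048 * 1024 sectors of 512 bytes per gigabyte */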
+ sdebug_capacity = 2048 * 1024;
+ sdebug_capacity *= scsi_debug_virtual_gb;
+ } else
+ sdebug_capacity = sdebug_store_sectors;
memset(arr, 0, SDEBUG_READCAP_ARR_SZ);
- capac = (unsigned long)sdebug_capacity - 1;
- arr[0] = (capac >> 24);
- arr[1] = (capac >> 16) & 0xff;
- arr[2] = (capac >> 8) & 0xff;
- arr[3] = capac & 0xff;
+ if (sdebug_capacity < 0xffffffff) {
+ capac = (unsigned int)sdebug_capacity - 1;
+ arr[0] = (capac >> 24);
+ arr[1] = (capac >> 16) & 0xff;
+ arr[2] = (capac >> 8) & 0xff;
+ arr[3] = capac & 0xff;
+ } else {
+ arr[0] = 0xff;
+ arr[1] = 0xff;
+ arr[2] = 0xff;
+ arr[3] = 0xff;
+ }
arr[6] = (SECT_SIZE_PER(target) >> 8) & 0xff;
arr[7] = SECT_SIZE_PER(target) & 0xff;
return fill_from_dev_buffer(scp, arr, SDEBUG_READCAP_ARR_SZ);
}
+#define SDEBUG_READCAP16_ARR_SZ 32
+static int resp_readcap16(struct scsi_cmnd * scp,
+ struct sdebug_dev_info * devip)
+{
+ unsigned char *cmd = (unsigned char *)scp->cmnd;
+ unsigned char arr[SDEBUG_READCAP16_ARR_SZ];
+ unsigned long long capac;
+ int errsts, k, alloc_len;
+
+ if ((errsts = check_readiness(scp, 1, devip)))
+ return errsts;
+ alloc_len = ((cmd[10] << 24) + (cmd[11] << 16) + (cmd[12] << 8)
+ + cmd[13]);
+ /* following just in case virtual_gb changed */
+ if (scsi_debug_virtual_gb > 0) {
+ sdebug_capacity = 2048 * 1024;
+ sdebug_capacity *= scsi_debug_virtual_gb;
+ } else
+ sdebug_capacity = sdebug_store_sectors;
+ memset(arr, 0, SDEBUG_READCAP16_ARR_SZ);
+ capac = sdebug_capacity - 1;
+ for (k = 0; k < 8; ++k, capac >>= 8)
+ arr[7 - k] = capac & 0xff;
+ arr[8] = (SECT_SIZE_PER(target) >> 24) & 0xff;
+ arr[9] = (SECT_SIZE_PER(target) >> 16) & 0xff;
+ arr[10] = (SECT_SIZE_PER(target) >> 8) & 0xff;
+ arr[11] = SECT_SIZE_PER(target) & 0xff;
+ return fill_from_dev_buffer(scp, arr,
+ min(alloc_len, SDEBUG_READCAP16_ARR_SZ));
+}
+
/* <<Following mode page info copied from ST318451LW>> */
static int resp_err_recov_pg(unsigned char * p, int pcontrol, int target)
static int resp_ctrl_m_pg(unsigned char * p, int pcontrol, int target)
{ /* Control mode page for mode_sense */
- unsigned char ctrl_m_pg[] = {0xa, 10, 2, 0, 0, 0, 0, 0,
+ unsigned char ch_ctrl_m_pg[] = {/* 0xa, 10, */ 0x6, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0};
+ unsigned char d_ctrl_m_pg[] = {0xa, 10, 2, 0, 0, 0, 0, 0,
0, 0, 0x2, 0x4b};
if (scsi_debug_dsense)
ctrl_m_pg[2] |= 0x4;
+ else
+ ctrl_m_pg[2] &= ~0x4;
memcpy(p, ctrl_m_pg, sizeof(ctrl_m_pg));
if (1 == pcontrol)
- memset(p + 2, 0, sizeof(ctrl_m_pg) - 2);
+ memcpy(p + 2, ch_ctrl_m_pg, sizeof(ch_ctrl_m_pg));
+ else if (2 == pcontrol)
+ memcpy(p, d_ctrl_m_pg, sizeof(d_ctrl_m_pg));
return sizeof(ctrl_m_pg);
}
+
static int resp_iec_m_pg(unsigned char * p, int pcontrol, int target)
{ /* Informational Exceptions control mode page for mode_sense */
- unsigned char iec_m_pg[] = {0x1c, 0xa, 0x08, 0, 0, 0, 0, 0,
- 0, 0, 0x0, 0x0};
+ unsigned char ch_iec_m_pg[] = {/* 0x1c, 0xa, */ 0x4, 0xf, 0, 0, 0, 0,
+ 0, 0, 0x0, 0x0};
+ unsigned char d_iec_m_pg[] = {0x1c, 0xa, 0x08, 0, 0, 0, 0, 0,
+ 0, 0, 0x0, 0x0};
+
memcpy(p, iec_m_pg, sizeof(iec_m_pg));
if (1 == pcontrol)
- memset(p + 2, 0, sizeof(iec_m_pg) - 2);
+ memcpy(p + 2, ch_iec_m_pg, sizeof(ch_iec_m_pg));
+ else if (2 == pcontrol)
+ memcpy(p, d_iec_m_pg, sizeof(d_iec_m_pg));
return sizeof(iec_m_pg);
}
+static int resp_sas_sf_m_pg(unsigned char * p, int pcontrol, int target)
+{ /* SAS SSP mode page - short format for mode_sense */
+ unsigned char sas_sf_m_pg[] = {0x19, 0x6,
+ 0x6, 0x0, 0x7, 0xd0, 0x0, 0x0};
+
+ memcpy(p, sas_sf_m_pg, sizeof(sas_sf_m_pg));
+ if (1 == pcontrol)
+ memset(p + 2, 0, sizeof(sas_sf_m_pg) - 2);
+ return sizeof(sas_sf_m_pg);
+}
+
+
+static int resp_sas_pcd_m_spg(unsigned char * p, int pcontrol, int target,
+ int target_dev_id)
+{ /* SAS phy control and discover mode page for mode_sense */
+ unsigned char sas_pcd_m_pg[] = {0x59, 0x1, 0, 0x64, 0, 0x6, 0, 2,
+ 0, 0, 0, 0, 0x10, 0x9, 0x8, 0x0,
+ 0x52, 0x22, 0x22, 0x20, 0x0, 0x0, 0x0, 0x0,
+ 0x51, 0x11, 0x11, 0x10, 0x0, 0x0, 0x0, 0x1,
+ 0x2, 0, 0, 0, 0, 0, 0, 0,
+ 0x88, 0x99, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 1, 0, 0, 0x10, 0x9, 0x8, 0x0,
+ 0x52, 0x22, 0x22, 0x20, 0x0, 0x0, 0x0, 0x0,
+ 0x51, 0x11, 0x11, 0x10, 0x0, 0x0, 0x0, 0x1,
+ 0x3, 0, 0, 0, 0, 0, 0, 0,
+ 0x88, 0x99, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ };
+ int port_a, port_b;
+
+ port_a = target_dev_id + 1;
+ port_b = port_a + 1;
+ memcpy(p, sas_pcd_m_pg, sizeof(sas_pcd_m_pg));
+ p[20] = (port_a >> 24);
+ p[21] = (port_a >> 16) & 0xff;
+ p[22] = (port_a >> 8) & 0xff;
+ p[23] = port_a & 0xff;
+ p[48 + 20] = (port_b >> 24);
+ p[48 + 21] = (port_b >> 16) & 0xff;
+ p[48 + 22] = (port_b >> 8) & 0xff;
+ p[48 + 23] = port_b & 0xff;
+ if (1 == pcontrol)
+ memset(p + 4, 0, sizeof(sas_pcd_m_pg) - 4);
+ return sizeof(sas_pcd_m_pg);
+}
+
+static int resp_sas_sha_m_spg(unsigned char * p, int pcontrol)
+{ /* SAS SSP shared protocol specific port mode subpage */
+ unsigned char sas_sha_m_pg[] = {0x59, 0x2, 0, 0xc, 0, 0x6, 0x10, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ };
+
+ memcpy(p, sas_sha_m_pg, sizeof(sas_sha_m_pg));
+ if (1 == pcontrol)
+ memset(p + 4, 0, sizeof(sas_sha_m_pg) - 4);
+ return sizeof(sas_sha_m_pg);
+}
+
#define SDEBUG_MAX_MSENSE_SZ 256
static int resp_mode_sense(struct scsi_cmnd * scp, int target,
unsigned char dbd;
int pcontrol, pcode, subpcode;
unsigned char dev_spec;
- int alloc_len, msense_6, offset, len, errsts;
+ int alloc_len, msense_6, offset, len, errsts, target_dev_id;
unsigned char * ap;
unsigned char arr[SDEBUG_MAX_MSENSE_SZ];
unsigned char *cmd = (unsigned char *)scp->cmnd;
- if ((errsts = check_reset(scp, devip)))
+ if ((errsts = check_readiness(scp, 1, devip)))
return errsts;
dbd = cmd[1] & 0x8;
pcontrol = (cmd[2] & 0xc0) >> 6;
0);
return check_condition_result;
}
+ target_dev_id = ((devip->sdbg_host->shost->host_no + 1) * 2000) +
+ (devip->target * 1000) - 3;
dev_spec = DEV_READONLY(target) ? 0x80 : 0x0;
if (msense_6) {
arr[2] = dev_spec;
}
ap = arr + offset;
- if (0 != subpcode) { /* TODO: Control Extension page */
+ if ((subpcode > 0x0) && (subpcode < 0xff) && (0x19 != pcode)) {
+ /* TODO: Control Extension page */
mk_sense_buffer(devip, ILLEGAL_REQUEST, INVALID_FIELD_IN_CDB,
0);
return check_condition_result;
len = resp_ctrl_m_pg(ap, pcontrol, target);
offset += len;
break;
+ case 0x19: /* if spc==1 then sas phy, control+discover */
+ if ((subpcode > 0x2) && (subpcode < 0xff)) {
+ mk_sense_buffer(devip, ILLEGAL_REQUEST,
+ INVALID_FIELD_IN_CDB, 0);
+ return check_condition_result;
+ }
+ len = 0;
+ if ((0x0 == subpcode) || (0xff == subpcode))
+ len += resp_sas_sf_m_pg(ap + len, pcontrol, target);
+ if ((0x1 == subpcode) || (0xff == subpcode))
+ len += resp_sas_pcd_m_spg(ap + len, pcontrol, target,
+ target_dev_id);
+ if ((0x2 == subpcode) || (0xff == subpcode))
+ len += resp_sas_sha_m_spg(ap + len, pcontrol);
+ offset += len;
+ break;
case 0x1c: /* Informational Exceptions Mode page, all devices */
len = resp_iec_m_pg(ap, pcontrol, target);
offset += len;
break;
case 0x3f: /* Read all Mode pages */
- len = resp_err_recov_pg(ap, pcontrol, target);
- len += resp_disconnect_pg(ap + len, pcontrol, target);
- len += resp_format_pg(ap + len, pcontrol, target);
- len += resp_caching_pg(ap + len, pcontrol, target);
- len += resp_ctrl_m_pg(ap + len, pcontrol, target);
- len += resp_iec_m_pg(ap + len, pcontrol, target);
+ if ((0 == subpcode) || (0xff == subpcode)) {
+ len = resp_err_recov_pg(ap, pcontrol, target);
+ len += resp_disconnect_pg(ap + len, pcontrol, target);
+ len += resp_format_pg(ap + len, pcontrol, target);
+ len += resp_caching_pg(ap + len, pcontrol, target);
+ len += resp_ctrl_m_pg(ap + len, pcontrol, target);
+ len += resp_sas_sf_m_pg(ap + len, pcontrol, target);
+ if (0xff == subpcode) {
+ len += resp_sas_pcd_m_spg(ap + len, pcontrol,
+ target, target_dev_id);
+ len += resp_sas_sha_m_spg(ap + len, pcontrol);
+ }
+ len += resp_iec_m_pg(ap + len, pcontrol, target);
+ } else {
+ mk_sense_buffer(devip, ILLEGAL_REQUEST,
+ INVALID_FIELD_IN_CDB, 0);
+ return check_condition_result;
+ }
offset += len;
break;
default:
return fill_from_dev_buffer(scp, arr, min(alloc_len, offset));
}
-static int resp_read(struct scsi_cmnd * SCpnt, int upper_blk, int block,
- int num, struct sdebug_dev_info * devip)
+#define SDEBUG_MAX_MSELECT_SZ 512
+
+static int resp_mode_select(struct scsi_cmnd * scp, int mselect6,
+ struct sdebug_dev_info * devip)
+{
+ int pf, sp, ps, md_len, bd_len, off, spf, pg_len;
+ int param_len, res, errsts, mpage;
+ unsigned char arr[SDEBUG_MAX_MSELECT_SZ];
+ unsigned char *cmd = (unsigned char *)scp->cmnd;
+
+ if ((errsts = check_readiness(scp, 1, devip)))
+ return errsts;
+ memset(arr, 0, sizeof(arr));
+ pf = cmd[1] & 0x10;
+ sp = cmd[1] & 0x1;
+ param_len = mselect6 ? cmd[4] : ((cmd[7] << 8) + cmd[8]);
+ if ((0 == pf) || sp || (param_len > SDEBUG_MAX_MSELECT_SZ)) {
+ mk_sense_buffer(devip, ILLEGAL_REQUEST,
+ INVALID_FIELD_IN_CDB, 0);
+ return check_condition_result;
+ }
+ res = fetch_to_dev_buffer(scp, arr, param_len);
+ if (-1 == res)
+ return (DID_ERROR << 16);
+ else if ((res < param_len) &&
+ (SCSI_DEBUG_OPT_NOISE & scsi_debug_opts))
+ printk(KERN_INFO "scsi_debug: mode_select: cdb indicated=%d, "
+ " IO sent=%d bytes\n", param_len, res);
+ md_len = mselect6 ? (arr[0] + 1) : ((arr[0] << 8) + arr[1] + 2);
+ bd_len = mselect6 ? arr[3] : ((arr[6] << 8) + arr[7]);
+ if ((md_len > 2) || (0 != bd_len)) {
+ mk_sense_buffer(devip, ILLEGAL_REQUEST,
+ INVALID_FIELD_IN_PARAM_LIST, 0);
+ return check_condition_result;
+ }
+ off = bd_len + (mselect6 ? 4 : 8);
+ mpage = arr[off] & 0x3f;
+ ps = !!(arr[off] & 0x80);
+ if (ps) {
+ mk_sense_buffer(devip, ILLEGAL_REQUEST,
+ INVALID_FIELD_IN_PARAM_LIST, 0);
+ return check_condition_result;
+ }
+ spf = !!(arr[off] & 0x40);
+ pg_len = spf ? ((arr[off + 2] << 8) + arr[off + 3] + 4) :
+ (arr[off + 1] + 2);
+ if ((pg_len + off) > param_len) {
+ mk_sense_buffer(devip, ILLEGAL_REQUEST,
+ PARAMETER_LIST_LENGTH_ERR, 0);
+ return check_condition_result;
+ }
+ switch (mpage) {
+ case 0xa: /* Control Mode page */
+ if (ctrl_m_pg[1] == arr[off + 1]) {
+ memcpy(ctrl_m_pg + 2, arr + off + 2,
+ sizeof(ctrl_m_pg) - 2);
+ scsi_debug_dsense = !!(ctrl_m_pg[2] & 0x4);
+ return 0;
+ }
+ break;
+ case 0x1c: /* Informational Exceptions Mode page */
+ if (iec_m_pg[1] == arr[off + 1]) {
+ memcpy(iec_m_pg + 2, arr + off + 2,
+ sizeof(iec_m_pg) - 2);
+ return 0;
+ }
+ break;
+ default:
+ break;
+ }
+ mk_sense_buffer(devip, ILLEGAL_REQUEST,
+ INVALID_FIELD_IN_PARAM_LIST, 0);
+ return check_condition_result;
+}
+
+static int resp_temp_l_pg(unsigned char * arr)
+{
+ unsigned char temp_l_pg[] = {0x0, 0x0, 0x3, 0x2, 0x0, 38,
+ 0x0, 0x1, 0x3, 0x2, 0x0, 65,
+ };
+
+ memcpy(arr, temp_l_pg, sizeof(temp_l_pg));
+ return sizeof(temp_l_pg);
+}
+
+static int resp_ie_l_pg(unsigned char * arr)
+{
+ unsigned char ie_l_pg[] = {0x0, 0x0, 0x3, 0x3, 0x0, 0x0, 38,
+ };
+
+ memcpy(arr, ie_l_pg, sizeof(ie_l_pg));
+ if (iec_m_pg[2] & 0x4) { /* TEST bit set */
+ arr[4] = THRESHOLD_EXCEEDED;
+ arr[5] = 0xff;
+ }
+ return sizeof(ie_l_pg);
+}
+
+#define SDEBUG_MAX_LSENSE_SZ 512
+
+static int resp_log_sense(struct scsi_cmnd * scp,
+ struct sdebug_dev_info * devip)
+{
+ int ppc, sp, pcontrol, pcode, alloc_len, errsts, len, n;
+ unsigned char arr[SDEBUG_MAX_LSENSE_SZ];
+ unsigned char *cmd = (unsigned char *)scp->cmnd;
+
+ if ((errsts = check_readiness(scp, 1, devip)))
+ return errsts;
+ memset(arr, 0, sizeof(arr));
+ ppc = cmd[1] & 0x2;
+ sp = cmd[1] & 0x1;
+ if (ppc || sp) {
+ mk_sense_buffer(devip, ILLEGAL_REQUEST,
+ INVALID_FIELD_IN_CDB, 0);
+ return check_condition_result;
+ }
+ pcontrol = (cmd[2] & 0xc0) >> 6;
+ pcode = cmd[2] & 0x3f;
+ alloc_len = (cmd[7] << 8) + cmd[8];
+ arr[0] = pcode;
+ switch (pcode) {
+ case 0x0: /* Supported log pages log page */
+ n = 4;
+ arr[n++] = 0x0; /* this page */
+ arr[n++] = 0xd; /* Temperature */
+ arr[n++] = 0x2f; /* Informational exceptions */
+ arr[3] = n - 4;
+ break;
+ case 0xd: /* Temperature log page */
+ arr[3] = resp_temp_l_pg(arr + 4);
+ break;
+ case 0x2f: /* Informational exceptions log page */
+ arr[3] = resp_ie_l_pg(arr + 4);
+ break;
+ default:
+ mk_sense_buffer(devip, ILLEGAL_REQUEST,
+ INVALID_FIELD_IN_CDB, 0);
+ return check_condition_result;
+ }
+ len = min(((arr[2] << 8) + arr[3]) + 4, alloc_len);
+ return fill_from_dev_buffer(scp, arr,
+ min(len, SDEBUG_MAX_INQ_ARR_SZ));
+}
+
+static int resp_read(struct scsi_cmnd * SCpnt, unsigned long long lba,
+ unsigned int num, struct sdebug_dev_info * devip)
{
unsigned long iflags;
+ unsigned int block, from_bottom;
+ unsigned long long u;
int ret;
- if (upper_blk || (block + num > sdebug_capacity)) {
+ if (lba + num > sdebug_capacity) {
mk_sense_buffer(devip, ILLEGAL_REQUEST, ADDR_OUT_OF_RANGE,
0);
return check_condition_result;
}
+ /* transfer length excessive (tie in to block limits VPD page) */
+ if (num > sdebug_store_sectors) {
+ mk_sense_buffer(devip, ILLEGAL_REQUEST, INVALID_FIELD_IN_CDB,
+ 0);
+ return check_condition_result;
+ }
if ((SCSI_DEBUG_OPT_MEDIUM_ERR & scsi_debug_opts) &&
- (block <= OPT_MEDIUM_ERR_ADDR) &&
- ((block + num) > OPT_MEDIUM_ERR_ADDR)) {
+ (lba <= OPT_MEDIUM_ERR_ADDR) &&
+ ((lba + num) > OPT_MEDIUM_ERR_ADDR)) {
+ /* claim unrecoverable read error */
mk_sense_buffer(devip, MEDIUM_ERROR, UNRECOVERED_READ_ERR,
0);
- /* claim unrecoverable read error */
+ /* set info field and valid bit for fixed descriptor */
+ if (0x70 == (devip->sense_buff[0] & 0x7f)) {
+ devip->sense_buff[0] |= 0x80; /* Valid bit */
+ ret = OPT_MEDIUM_ERR_ADDR;
+ devip->sense_buff[3] = (ret >> 24) & 0xff;
+ devip->sense_buff[4] = (ret >> 16) & 0xff;
+ devip->sense_buff[5] = (ret >> 8) & 0xff;
+ devip->sense_buff[6] = ret & 0xff;
+ }
return check_condition_result;
}
read_lock_irqsave(&atomic_rw, iflags);
- ret = fill_from_dev_buffer(SCpnt, fake_storep + (block * SECT_SIZE),
- num * SECT_SIZE);
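+ /* with virtual_gb set, capacity may exceed the backing store; wrap reads that run past it modulo sdebug_store_sectors */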
+ if ((lba + num) <= sdebug_store_sectors)
+ ret = fill_from_dev_buffer(SCpnt,
+ fake_storep + (lba * SECT_SIZE),
+ num * SECT_SIZE);
+ else {
+ /* modulo when one arg is 64 bits needs do_div() */
+ u = lba;
+ block = do_div(u, sdebug_store_sectors);
+ from_bottom = 0;
+ if ((block + num) > sdebug_store_sectors)
+ from_bottom = (block + num) - sdebug_store_sectors;
+ ret = fill_from_dev_buffer(SCpnt,
+ fake_storep + (block * SECT_SIZE),
+ (num - from_bottom) * SECT_SIZE);
+ if ((0 == ret) && (from_bottom > 0))
+ ret = fill_from_dev_buffer(SCpnt, fake_storep,
+ from_bottom * SECT_SIZE);
+ }
read_unlock_irqrestore(&atomic_rw, iflags);
return ret;
}
-static int resp_write(struct scsi_cmnd * SCpnt, int upper_blk, int block,
- int num, struct sdebug_dev_info * devip)
+static int resp_write(struct scsi_cmnd * SCpnt, unsigned long long lba,
+ unsigned int num, struct sdebug_dev_info * devip)
{
unsigned long iflags;
+ unsigned int block, to_bottom;
+ unsigned long long u;
int res;
- if (upper_blk || (block + num > sdebug_capacity)) {
+ if (lba + num > sdebug_capacity) {
mk_sense_buffer(devip, ILLEGAL_REQUEST, ADDR_OUT_OF_RANGE,
0);
return check_condition_result;
}
+ /* transfer length excessive (tie in to block limits VPD page) */
+ if (num > sdebug_store_sectors) {
+ mk_sense_buffer(devip, ILLEGAL_REQUEST, INVALID_FIELD_IN_CDB,
+ 0);
+ return check_condition_result;
+ }
write_lock_irqsave(&atomic_rw, iflags);
- res = fetch_to_dev_buffer(SCpnt, fake_storep + (block * SECT_SIZE),
- num * SECT_SIZE);
+ if ((lba + num) <= sdebug_store_sectors)
+ res = fetch_to_dev_buffer(SCpnt,
+ fake_storep + (lba * SECT_SIZE),
+ num * SECT_SIZE);
+ else {
+ /* modulo when one arg is 64 bits needs do_div() */
+ u = lba;
+ block = do_div(u, sdebug_store_sectors);
+ to_bottom = 0;
+ if ((block + num) > sdebug_store_sectors)
+ to_bottom = (block + num) - sdebug_store_sectors;
+ res = fetch_to_dev_buffer(SCpnt,
+ fake_storep + (block * SECT_SIZE),
+ (num - to_bottom) * SECT_SIZE);
+ if ((0 == res) && (to_bottom > 0))
+ res = fetch_to_dev_buffer(SCpnt, fake_storep,
+ to_bottom * SECT_SIZE);
+ }
write_unlock_irqrestore(&atomic_rw, iflags);
if (-1 == res)
return (DID_ERROR << 16);
else if ((res < (num * SECT_SIZE)) &&
(SCSI_DEBUG_OPT_NOISE & scsi_debug_opts))
- printk(KERN_INFO "scsi_debug: write: cdb indicated=%d, "
+ printk(KERN_INFO "scsi_debug: write: cdb indicated=%u, "
" IO sent=%d bytes\n", num * SECT_SIZE, res);
return 0;
}
-#define SDEBUG_RLUN_ARR_SZ 128
+#define SDEBUG_RLUN_ARR_SZ 256
static int resp_report_luns(struct scsi_cmnd * scp,
struct sdebug_dev_info * devip)
{
unsigned int alloc_len;
- int lun_cnt, i, upper;
+ int lun_cnt, i, upper, num, n, wlun, lun;
unsigned char *cmd = (unsigned char *)scp->cmnd;
int select_report = (int)cmd[2];
struct scsi_lun *one_lun;
unsigned char arr[SDEBUG_RLUN_ARR_SZ];
+ unsigned char * max_addr;
alloc_len = cmd[9] + (cmd[8] << 8) + (cmd[7] << 16) + (cmd[6] << 24);
- if ((alloc_len < 16) || (select_report > 2)) {
+ if ((alloc_len < 4) || (select_report > 2)) {
mk_sense_buffer(devip, ILLEGAL_REQUEST, INVALID_FIELD_IN_CDB,
0);
return check_condition_result;
/* can produce response with up to 16k luns (lun 0 to lun 16383) */
memset(arr, 0, SDEBUG_RLUN_ARR_SZ);
lun_cnt = scsi_debug_max_luns;
- arr[2] = ((sizeof(struct scsi_lun) * lun_cnt) >> 8) & 0xff;
- arr[3] = (sizeof(struct scsi_lun) * lun_cnt) & 0xff;
- lun_cnt = min((int)((SDEBUG_RLUN_ARR_SZ - 8) /
- sizeof(struct scsi_lun)), lun_cnt);
+ if (1 == select_report)
+ lun_cnt = 0;
+ else if (scsi_debug_no_lun_0 && (lun_cnt > 0))
+ --lun_cnt;
+ wlun = (select_report > 0) ? 1 : 0;
+ num = lun_cnt + wlun;
+ arr[2] = ((sizeof(struct scsi_lun) * num) >> 8) & 0xff;
+ arr[3] = (sizeof(struct scsi_lun) * num) & 0xff;
+ n = min((int)((SDEBUG_RLUN_ARR_SZ - 8) /
+ sizeof(struct scsi_lun)), num);
+ if (n < num) {
+ wlun = 0;
+ lun_cnt = n;
+ }
one_lun = (struct scsi_lun *) &arr[8];
- for (i = 0; i < lun_cnt; i++) {
- upper = (i >> 8) & 0x3f;
+ max_addr = arr + SDEBUG_RLUN_ARR_SZ;
+ for (i = 0, lun = (scsi_debug_no_lun_0 ? 1 : 0);
+ ((i < lun_cnt) && ((unsigned char *)(one_lun + i) < max_addr));
+ i++, lun++) {
+ upper = (lun >> 8) & 0x3f;
if (upper)
one_lun[i].scsi_lun[0] =
(upper | (SAM2_LUN_ADDRESS_METHOD << 6));
- one_lun[i].scsi_lun[1] = i & 0xff;
+ one_lun[i].scsi_lun[1] = lun & 0xff;
+ }
+ if (wlun) {
+ one_lun[i].scsi_lun[0] = (SAM2_WLUN_REPORT_LUNS >> 8) & 0xff;
+ one_lun[i].scsi_lun[1] = SAM2_WLUN_REPORT_LUNS & 0xff;
+ i++;
}
+ alloc_len = (unsigned char *)(one_lun + i) - arr;
return fill_from_dev_buffer(scp, arr,
min((int)alloc_len, SDEBUG_RLUN_ARR_SZ));
}
static int scsi_debug_slave_alloc(struct scsi_device * sdp)
{
if (SCSI_DEBUG_OPT_NOISE & scsi_debug_opts)
- sdev_printk(KERN_INFO, sdp, "scsi_debug: slave_alloc\n");
+ printk(KERN_INFO "scsi_debug: slave_alloc <%u %u %u %u>\n",
+ sdp->host->host_no, sdp->channel, sdp->id, sdp->lun);
return 0;
}
struct sdebug_dev_info * devip;
if (SCSI_DEBUG_OPT_NOISE & scsi_debug_opts)
- sdev_printk(KERN_INFO, sdp, "scsi_debug: slave_configure\n");
+ printk(KERN_INFO "scsi_debug: slave_configure <%u %u %u %u>\n",
+ sdp->host->host_no, sdp->channel, sdp->id, sdp->lun);
if (sdp->host->max_cmd_len != SCSI_DEBUG_MAX_CMD_LEN)
sdp->host->max_cmd_len = SCSI_DEBUG_MAX_CMD_LEN;
devip = devInfoReg(sdp);
if (sdp->host->cmd_per_lun)
scsi_adjust_queue_depth(sdp, SDEBUG_TAGGED_QUEUING,
sdp->host->cmd_per_lun);
+ blk_queue_max_segment_size(sdp->request_queue, 256 * 1024);
return 0;
}
(struct sdebug_dev_info *)sdp->hostdata;
if (SCSI_DEBUG_OPT_NOISE & scsi_debug_opts)
- sdev_printk(KERN_INFO, sdp, "scsi_debug: slave_destroy\n");
+ printk(KERN_INFO "scsi_debug: slave_destroy <%u %u %u %u>\n",
+ sdp->host->host_no, sdp->channel, sdp->id, sdp->lun);
if (devip) {
/* make this slot available for re-use */
devip->used = 0;
open_devip->sense_buff[0] = 0x70;
open_devip->sense_buff[7] = 0xa;
}
+ if (sdev->lun == SAM2_WLUN_REPORT_LUNS)
+ open_devip->wlun = SAM2_WLUN_REPORT_LUNS & 0xff;
return open_devip;
}
return NULL;
printk(KERN_WARNING "scsi_debug:build_parts: reducing "
"partitions to %d\n", SDEBUG_MAX_PARTS);
}
- num_sectors = (int)(sdebug_store_size / SECT_SIZE);
+ num_sectors = (int)sdebug_store_sectors;
sectors_per_part = (num_sectors - sdebug_sectors_per)
/ scsi_debug_num_parts;
heads_by_sects = sdebug_heads * sdebug_sectors_per;
if (scsi_result) {
struct scsi_device * sdp = cmnd->device;
- sdev_printk(KERN_INFO, sdp,
- "non-zero result=0x%x\n",
- scsi_result);
+ printk(KERN_INFO "scsi_debug: <%u %u %u %u> "
+ "non-zero result=0x%x\n", sdp->host->host_no,
+ sdp->channel, sdp->id, sdp->lun, scsi_result);
}
}
if (cmnd && devip) {
}
}
-/* Set 'perm' (4th argument) to 0 to disable module_param's definition
- * of sysfs parameters (which module_param doesn't yet support).
- * Sysfs parameters defined explicitly below.
- */
-module_param_named(add_host, scsi_debug_add_host, int, 0); /* perm=0644 */
-module_param_named(delay, scsi_debug_delay, int, 0); /* perm=0644 */
-module_param_named(dev_size_mb, scsi_debug_dev_size_mb, int, 0);
-module_param_named(dsense, scsi_debug_dsense, int, 0);
-module_param_named(every_nth, scsi_debug_every_nth, int, 0);
-module_param_named(max_luns, scsi_debug_max_luns, int, 0);
-module_param_named(num_parts, scsi_debug_num_parts, int, 0);
-module_param_named(num_tgts, scsi_debug_num_tgts, int, 0);
-module_param_named(opts, scsi_debug_opts, int, 0); /* perm=0644 */
-module_param_named(ptype, scsi_debug_ptype, int, 0);
-module_param_named(scsi_level, scsi_debug_scsi_level, int, 0);
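+/* sysfs visible module parameters: S_IRUGO|S_IWUSR is mode 0644, S_IRUGO alone is 0444 */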
+module_param_named(add_host, scsi_debug_add_host, int, S_IRUGO | S_IWUSR);
+module_param_named(delay, scsi_debug_delay, int, S_IRUGO | S_IWUSR);
+module_param_named(dev_size_mb, scsi_debug_dev_size_mb, int, S_IRUGO);
+module_param_named(dsense, scsi_debug_dsense, int, S_IRUGO | S_IWUSR);
+module_param_named(every_nth, scsi_debug_every_nth, int, S_IRUGO | S_IWUSR);
+module_param_named(max_luns, scsi_debug_max_luns, int, S_IRUGO | S_IWUSR);
+module_param_named(no_lun_0, scsi_debug_no_lun_0, int, S_IRUGO | S_IWUSR);
+module_param_named(num_parts, scsi_debug_num_parts, int, S_IRUGO);
+module_param_named(num_tgts, scsi_debug_num_tgts, int, S_IRUGO | S_IWUSR);
+module_param_named(opts, scsi_debug_opts, int, S_IRUGO | S_IWUSR);
+module_param_named(ptype, scsi_debug_ptype, int, S_IRUGO | S_IWUSR);
+module_param_named(scsi_level, scsi_debug_scsi_level, int, S_IRUGO);
+module_param_named(virtual_gb, scsi_debug_virtual_gb, int, S_IRUGO | S_IWUSR);
MODULE_AUTHOR("Eric Youngdale + Douglas Gilbert");
MODULE_DESCRIPTION("SCSI debug adapter driver");
MODULE_PARM_DESC(add_host, "0..127 hosts allowed(def=1)");
MODULE_PARM_DESC(delay, "# of jiffies to delay response(def=1)");
-MODULE_PARM_DESC(dev_size_mb, "size in MB of ram shared by devs");
-MODULE_PARM_DESC(dsense, "use descriptor sense format(def: fixed)");
+MODULE_PARM_DESC(dev_size_mb, "size in MB of ram shared by devs(def=8)");
+MODULE_PARM_DESC(dsense, "use descriptor sense format(def=0 -> fixed)");
MODULE_PARM_DESC(every_nth, "timeout every nth command(def=100)");
-MODULE_PARM_DESC(max_luns, "number of SCSI LUNs per target to simulate");
+MODULE_PARM_DESC(max_luns, "number of LUNs per target to simulate(def=1)");
+MODULE_PARM_DESC(no_lun_0, "no LU number 0 (def=0 -> have lun 0)");
MODULE_PARM_DESC(num_parts, "number of partitions(def=0)");
-MODULE_PARM_DESC(num_tgts, "number of SCSI targets per host to simulate");
-MODULE_PARM_DESC(opts, "1->noise, 2->medium_error, 4->...");
+MODULE_PARM_DESC(num_tgts, "number of targets per host to simulate(def=1)");
+MODULE_PARM_DESC(opts, "1->noise, 2->medium_error, 4->... (def=0)");
MODULE_PARM_DESC(ptype, "SCSI peripheral type(def=0[disk])");
MODULE_PARM_DESC(scsi_level, "SCSI level to simulate(def=5[SPC-3])");
+MODULE_PARM_DESC(virtual_gb, "virtual gigabyte size (def=0 -> use dev_size_mb)");
static char sdebug_info[256];
DRIVER_ATTR(dsense, S_IRUGO | S_IWUSR, sdebug_dsense_show,
sdebug_dsense_store);
+static ssize_t sdebug_no_lun_0_show(struct device_driver * ddp, char * buf)
+{
+ return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_no_lun_0);
+}
+static ssize_t sdebug_no_lun_0_store(struct device_driver * ddp,
+ const char * buf, size_t count)
+{
+ int n;
+
+ if ((count > 0) && (1 == sscanf(buf, "%d", &n)) && (n >= 0)) {
+ scsi_debug_no_lun_0 = n;
+ return count;
+ }
+ return -EINVAL;
+}
+DRIVER_ATTR(no_lun_0, S_IRUGO | S_IWUSR, sdebug_no_lun_0_show,
+ sdebug_no_lun_0_store);
+
static ssize_t sdebug_num_tgts_show(struct device_driver * ddp, char * buf)
{
return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_num_tgts);
}
DRIVER_ATTR(scsi_level, S_IRUGO, sdebug_scsi_level_show, NULL);
+static ssize_t sdebug_virtual_gb_show(struct device_driver * ddp, char * buf)
+{
+ return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_virtual_gb);
+}
+static ssize_t sdebug_virtual_gb_store(struct device_driver * ddp,
+ const char * buf, size_t count)
+{
+ int n;
+
+ if ((count > 0) && (1 == sscanf(buf, "%d", &n)) && (n >= 0)) {
+ scsi_debug_virtual_gb = n;
+ if (scsi_debug_virtual_gb > 0) {
+ sdebug_capacity = 2048 * 1024;
+ sdebug_capacity *= scsi_debug_virtual_gb;
+ } else
+ sdebug_capacity = sdebug_store_sectors;
+ return count;
+ }
+ return -EINVAL;
+}
+DRIVER_ATTR(virtual_gb, S_IRUGO | S_IWUSR, sdebug_virtual_gb_show,
+ sdebug_virtual_gb_store);
+
static ssize_t sdebug_add_host_show(struct device_driver * ddp, char * buf)
{
return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_add_host);
static int __init scsi_debug_init(void)
{
- unsigned long sz;
+ unsigned int sz;
int host_to_add;
int k;
if (scsi_debug_dev_size_mb < 1)
scsi_debug_dev_size_mb = 1; /* force minimum 1 MB ramdisk */
- sdebug_store_size = (unsigned long)scsi_debug_dev_size_mb * 1048576;
- sdebug_capacity = sdebug_store_size / SECT_SIZE;
+ sdebug_store_size = (unsigned int)scsi_debug_dev_size_mb * 1048576;
+ sdebug_store_sectors = sdebug_store_size / SECT_SIZE;
+ if (scsi_debug_virtual_gb > 0) {
+ sdebug_capacity = 2048 * 1024;
+ sdebug_capacity *= scsi_debug_virtual_gb;
+ } else
+ sdebug_capacity = sdebug_store_sectors;
/* play around with geometry, don't waste too much on track 0 */
sdebug_heads = 8;
struct sdebug_dev_info *sdbg_devinfo;
struct list_head *lh, *lh_sf;
- sdbg_host = kzalloc(sizeof(*sdbg_host), GFP_KERNEL);
+ sdbg_host = kzalloc(sizeof(*sdbg_host),GFP_KERNEL);
if (NULL == sdbg_host) {
printk(KERN_ERR "%s: out of memory at line %d\n",
devs_per_host = scsi_debug_num_tgts * scsi_debug_max_luns;
for (k = 0; k < devs_per_host; k++) {
- sdbg_devinfo = kzalloc(sizeof(*sdbg_devinfo), GFP_KERNEL);
+ sdbg_devinfo = kzalloc(sizeof(*sdbg_devinfo),GFP_KERNEL);
if (NULL == sdbg_devinfo) {
printk(KERN_ERR "%s: out of memory at line %d\n",
__FUNCTION__, __LINE__);
hpnt->max_id = scsi_debug_num_tgts + 1;
else
hpnt->max_id = scsi_debug_num_tgts;
- hpnt->max_lun = scsi_debug_max_luns;
+ hpnt->max_lun = SAM2_WLUN_REPORT_LUNS; /* = scsi_debug_max_luns; */
error = scsi_add_host(hpnt, &sdbg_host->dev);
if (error) {
hpnt->max_id = scsi_debug_num_tgts + 1;
else
hpnt->max_id = scsi_debug_num_tgts;
- hpnt->max_lun = scsi_debug_max_luns;
+ hpnt->max_lun = SAM2_WLUN_REPORT_LUNS; /* scsi_debug_max_luns; */
}
spin_unlock(&sdebug_host_list_lock);
}
{"HITACHI", "DISK-SUBSYSTEM", "*", BLIST_ATTACH_PQ3 | BLIST_SPARSELUN | BLIST_LARGELUN},
{"HITACHI", "OPEN-E", "*", BLIST_ATTACH_PQ3 | BLIST_SPARSELUN | BLIST_LARGELUN},
{"HP", "A6189A", NULL, BLIST_SPARSELUN | BLIST_LARGELUN}, /* HP VA7400 */
- {"HP", "OPEN-", "*", BLIST_SPARSELUN | BLIST_LARGELUN}, /* HP XP Arrays */
+ {"HP", "OPEN-", "*", BLIST_REPORTLUN2}, /* HP XP Arrays */
{"HP", "NetRAID-4M", NULL, BLIST_FORCELUN},
{"HP", "HSV100", NULL, BLIST_REPORTLUN2 | BLIST_NOSTARTONADD},
{"HP", "C1557A", NULL, BLIST_FORCELUN},
scsi_reset_provider(struct scsi_device *dev, int flag)
{
struct scsi_cmnd *scmd = scsi_get_command(dev, GFP_KERNEL);
+ struct Scsi_Host *shost = dev->host;
struct request req;
+ unsigned long flags;
int rtn;
scmd->request = &req;
*/
scmd->pid = 0;
+ spin_lock_irqsave(shost->host_lock, flags);
+ shost->tmf_in_progress = 1;
+ spin_unlock_irqrestore(shost->host_lock, flags);
+
switch (flag) {
case SCSI_TRY_RESET_DEVICE:
rtn = scsi_try_bus_device_reset(scmd);
rtn = FAILED;
}
+ spin_lock_irqsave(shost->host_lock, flags);
+ shost->tmf_in_progress = 0;
+ spin_unlock_irqrestore(shost->host_lock, flags);
+
+ /*
+ * be sure to wake up anyone who was sleeping or had their queue
+ * suspended while we performed the TMF.
+ */
+ SCSI_LOG_ERROR_RECOVERY(3,
+ printk("%s: waking up host to restart after TMF\n",
+ __FUNCTION__));
+
+ wake_up(&shost->host_wait);
+
+ scsi_run_host_queues(shost);
+
scsi_next_command(scmd);
return rtn;
}
* b) We can just use scsi_requeue_command() here. This would
* be used if we just wanted to retry, for example.
*/
-void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes,
- unsigned int block_bytes)
+void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes)
{
int result = cmd->result;
int this_count = cmd->bufflen;
* Next deal with any sectors which we were able to correctly
* handle.
*/
- if (good_bytes >= 0) {
- SCSI_LOG_HLCOMPLETE(1, printk("%ld sectors total, %d bytes done.\n",
- req->nr_sectors, good_bytes));
- SCSI_LOG_HLCOMPLETE(1, printk("use_sg is %d\n", cmd->use_sg));
+ SCSI_LOG_HLCOMPLETE(1, printk("%ld sectors total, "
+ "%d bytes done.\n",
+ req->nr_sectors, good_bytes));
+ SCSI_LOG_HLCOMPLETE(1, printk("use_sg is %d\n", cmd->use_sg));
- if (clear_errors)
- req->errors = 0;
- /*
- * If multiple sectors are requested in one buffer, then
- * they will have been finished off by the first command.
- * If not, then we have a multi-buffer command.
- *
- * If block_bytes != 0, it means we had a medium error
- * of some sort, and that we want to mark some number of
- * sectors as not uptodate. Thus we want to inhibit
- * requeueing right here - we will requeue down below
- * when we handle the bad sectors.
- */
+ if (clear_errors)
+ req->errors = 0;
- /*
- * If the command completed without error, then either
- * finish off the rest of the command, or start a new one.
- */
- if (scsi_end_request(cmd, 1, good_bytes, result == 0) == NULL)
- return;
- }
- /*
- * Now, if we were good little boys and girls, Santa left us a request
- * sense buffer. We can extract information from this, so we
- * can choose a block to remap, etc.
+ /* A number of bytes were successfully read. If there
+ * are leftovers and there is some kind of error
+ * (result != 0), retry the rest.
+ */
+ if (scsi_end_request(cmd, 1, good_bytes, result == 0) == NULL)
+ return;
+
+ /* good_bytes = 0, or (inclusive) there were leftovers and
+ * result = 0, so scsi_end_request couldn't retry.
*/
if (sense_valid && !sense_deferred) {
switch (sshdr.sense_key) {
case UNIT_ATTENTION:
if (cmd->device->removable) {
- /* detected disc change. set a bit
+ /* Detected disc change. Set a bit
* and quietly refuse further access.
*/
cmd->device->changed = 1;
- scsi_end_request(cmd, 0,
- this_count, 1);
+ scsi_end_request(cmd, 0, this_count, 1);
return;
} else {
- /*
- * Must have been a power glitch, or a
- * bus reset. Could not have been a
- * media change, so we just retry the
- * request and see what happens.
- */
+ /* Must have been a power glitch, or a
+ * bus reset. Could not have been a
+ * media change, so we just retry the
+ * request and see what happens.
+ */
scsi_requeue_command(q, cmd);
return;
}
break;
case ILLEGAL_REQUEST:
- /*
- * If we had an ILLEGAL REQUEST returned, then we may
- * have performed an unsupported command. The only
- * thing this should be would be a ten byte read where
- * only a six byte read was supported. Also, on a
- * system where READ CAPACITY failed, we may have read
- * past the end of the disk.
- */
+ /* If we had an ILLEGAL REQUEST returned, then
+ * we may have performed an unsupported
+ * command. The only thing this should be
+ * would be a ten byte read where only a six
+ * byte read was supported. Also, on a system
+ * where READ CAPACITY failed, we may have
+ * read past the end of the disk.
+ */
if ((cmd->device->use_10_for_rw &&
sshdr.asc == 0x20 && sshdr.ascq == 0x00) &&
(cmd->cmnd[0] == READ_10 ||
cmd->cmnd[0] == WRITE_10)) {
cmd->device->use_10_for_rw = 0;
- /*
- * This will cause a retry with a 6-byte
- * command.
+ /* This will cause a retry with a
+ * 6-byte command.
*/
scsi_requeue_command(q, cmd);
- result = 0;
+ return;
} else {
scsi_end_request(cmd, 0, this_count, 1);
return;
}
break;
case NOT_READY:
- /*
- * If the device is in the process of becoming
+ /* If the device is in the process of becoming
* ready, or has a temporary blockage, retry.
*/
if (sshdr.asc == 0x04) {
}
if (!(req->flags & REQ_QUIET)) {
scmd_printk(KERN_INFO, cmd,
- "Device not ready: ");
+ "Device not ready: ");
scsi_print_sense_hdr("", &sshdr);
}
scsi_end_request(cmd, 0, this_count, 1);
case VOLUME_OVERFLOW:
if (!(req->flags & REQ_QUIET)) {
scmd_printk(KERN_INFO, cmd,
- "Volume overflow, CDB: ");
+ "Volume overflow, CDB: ");
__scsi_print_command(cmd->data_cmnd);
scsi_print_sense("", cmd);
}
- scsi_end_request(cmd, 0, block_bytes, 1);
+ /* See SSC3rXX or current. */
+ scsi_end_request(cmd, 0, this_count, 1);
return;
default:
break;
}
- } /* driver byte != 0 */
+ }
if (host_byte(result) == DID_RESET) {
- /*
- * Third party bus reset or reset for error
- * recovery reasons. Just retry the request
- * and see what happens.
+ /* Third party bus reset or reset for error recovery
+ * reasons. Just retry the request and see what
+ * happens.
*/
scsi_requeue_command(q, cmd);
return;
if (result) {
if (!(req->flags & REQ_QUIET)) {
scmd_printk(KERN_INFO, cmd,
- "SCSI error: return code = 0x%x\n", result);
-
+ "SCSI error: return code = 0x%08x\n",
+ result);
if (driver_byte(result) & DRIVER_SENSE)
scsi_print_sense("", cmd);
}
- /*
- * Mark a single buffer as not uptodate. Queue the remainder.
- * We sometimes get this cruft in the event that a medium error
- * isn't properly reported.
- */
- block_bytes = req->hard_cur_sectors << 9;
- if (!block_bytes)
- block_bytes = req->data_len;
- scsi_end_request(cmd, 0, block_bytes, 1);
}
+ scsi_end_request(cmd, 0, this_count, !result);
}
EXPORT_SYMBOL(scsi_io_completion);
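/*
 * Illustrative sketch (editorial, not from the patch): the call convention
 * for the reduced scsi_io_completion() prototype as an upper-level done()
 * routine would use it.  Sense inspection and partial-completion math (see
 * the sd_rw_intr() rework further down) are elided.
 */
#include <scsi/scsi_cmnd.h>

/* prototype as changed by the hunk above */
extern void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes);

static void example_rw_intr(struct scsi_cmnd *cmd)
{
	/* credit the whole transfer on success, nothing on error ... */
	unsigned int good_bytes = cmd->result ? 0 : cmd->request_bufflen;

	/* ... a real driver may adjust good_bytes from the sense data here */
	scsi_io_completion(cmd, good_bytes);
}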
* successfully. Since this is a REQ_BLOCK_PC command the
* caller should check the request's errors value
*/
- scsi_io_completion(cmd, cmd->bufflen, 0);
+ scsi_io_completion(cmd, cmd->bufflen);
}
static void scsi_setup_blk_pc_cmnd(struct scsi_cmnd *cmd)
switch (oldstate) {
case SDEV_CREATED:
case SDEV_RUNNING:
+ case SDEV_QUIESCE:
case SDEV_OFFLINE:
case SDEV_BLOCK:
break;
case SDEV_DEL:
switch (oldstate) {
+ case SDEV_CREATED:
+ case SDEV_RUNNING:
+ case SDEV_OFFLINE:
case SDEV_CANCEL:
break;
default:
* classes.
*/
-#define SCSI_DEVICE_BLOCK_MAX_TIMEOUT (HZ*60)
+#define SCSI_DEVICE_BLOCK_MAX_TIMEOUT 600 /* units in seconds */
extern int scsi_internal_device_block(struct scsi_device *sdev);
extern int scsi_internal_device_unblock(struct scsi_device *sdev);
#define _SCSI_SAS_INTERNAL_H
#define SAS_HOST_ATTRS 0
-#define SAS_PORT_ATTRS 17
+#define SAS_PHY_ATTRS 17
+#define SAS_PORT_ATTRS 1
#define SAS_RPORT_ATTRS 7
#define SAS_END_DEV_ATTRS 3
#define SAS_EXPANDER_ATTRS 7
struct sas_domain_function_template *dft;
struct class_device_attribute private_host_attrs[SAS_HOST_ATTRS];
- struct class_device_attribute private_phy_attrs[SAS_PORT_ATTRS];
+ struct class_device_attribute private_phy_attrs[SAS_PHY_ATTRS];
+ struct class_device_attribute private_port_attrs[SAS_PORT_ATTRS];
struct class_device_attribute private_rphy_attrs[SAS_RPORT_ATTRS];
struct class_device_attribute private_end_dev_attrs[SAS_END_DEV_ATTRS];
struct class_device_attribute private_expander_attrs[SAS_EXPANDER_ATTRS];
struct transport_container phy_attr_cont;
+ struct transport_container port_attr_cont;
struct transport_container rphy_attr_cont;
struct transport_container end_dev_attr_cont;
struct transport_container expander_attr_cont;
* needed by scsi_sysfs.c
*/
struct class_device_attribute *host_attrs[SAS_HOST_ATTRS + 1];
- struct class_device_attribute *phy_attrs[SAS_PORT_ATTRS + 1];
+ struct class_device_attribute *phy_attrs[SAS_PHY_ATTRS + 1];
+ struct class_device_attribute *port_attrs[SAS_PORT_ATTRS + 1];
struct class_device_attribute *rphy_attrs[SAS_RPORT_ATTRS + 1];
struct class_device_attribute *end_dev_attrs[SAS_END_DEV_ATTRS + 1];
struct class_device_attribute *expander_attrs[SAS_EXPANDER_ATTRS + 1];
static inline void scsi_destroy_sdev(struct scsi_device *sdev)
{
+ scsi_device_set_state(sdev, SDEV_DEL);
if (sdev->host->hostt->slave_destroy)
sdev->host->hostt->slave_destroy(sdev);
transport_destroy_device(&sdev->sdev_gendev);
* should insulate the loss of a remote port.
* The maximum will be capped by the value of SCSI_DEVICE_BLOCK_MAX_TIMEOUT.
*/
-static unsigned int fc_dev_loss_tmo = SCSI_DEVICE_BLOCK_MAX_TIMEOUT;
+static unsigned int fc_dev_loss_tmo = 60; /* seconds */
module_param_named(dev_loss_tmo, fc_dev_loss_tmo, int, S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(dev_loss_tmo,
* @work: Work to queue for execution.
*
* Return value:
- * 0 on success / != 0 for error
+ * 1 - work queued for execution
+ * 0 - work is already queued
+ * -EINVAL - work queue doesn't exist
**/
static int
fc_queue_work(struct Scsi_Host *shost, struct work_struct *work)
struct Scsi_Host *shost = rport_to_shost(rport);
unsigned long flags;
- scsi_target_unblock(&rport->dev);
-
spin_lock_irqsave(shost->host_lock, flags);
if (rport->flags & FC_RPORT_DEVLOSS_PENDING) {
spin_unlock_irqrestore(shost->host_lock, flags);
transport_remove_device(dev);
device_del(dev);
transport_destroy_device(dev);
- put_device(&shost->shost_gendev);
+ put_device(&shost->shost_gendev); /* for fc_host->rport list */
+ put_device(dev); /* for self-reference */
}
else
rport->scsi_target_id = -1;
list_add_tail(&rport->peers, &fc_host->rports);
- get_device(&shost->shost_gendev);
+ get_device(&shost->shost_gendev); /* for fc_host->rport list */
spin_unlock_irqrestore(shost->host_lock, flags);
dev = &rport->dev;
- device_initialize(dev);
- dev->parent = get_device(&shost->shost_gendev);
+ device_initialize(dev); /* takes self reference */
+ dev->parent = get_device(&shost->shost_gendev); /* parent reference */
dev->release = fc_rport_dev_release;
sprintf(dev->bus_id, "rport-%d:%d-%d",
shost->host_no, channel, rport->number);
delete_rport:
transport_destroy_device(dev);
- put_device(dev->parent);
spin_lock_irqsave(shost->host_lock, flags);
list_del(&rport->peers);
- put_device(&shost->shost_gendev);
+ put_device(&shost->shost_gendev); /* for fc_host->rport list */
spin_unlock_irqrestore(shost->host_lock, flags);
put_device(dev->parent);
kfree(rport);
spin_unlock_irqrestore(shost->host_lock, flags);
+ scsi_target_unblock(&rport->dev);
+
return rport;
}
}
/* initiate a scan of the target */
rport->flags |= FC_RPORT_SCAN_PENDING;
scsi_queue_work(shost, &rport->scan_work);
- }
-
- spin_unlock_irqrestore(shost->host_lock, flags);
+ spin_unlock_irqrestore(shost->host_lock, flags);
+ scsi_target_unblock(&rport->dev);
+ } else
+ spin_unlock_irqrestore(shost->host_lock, flags);
return rport;
}
rport->flags |= FC_RPORT_SCAN_PENDING;
scsi_queue_work(shost, &rport->scan_work);
spin_unlock_irqrestore(shost->host_lock, flags);
+ scsi_target_unblock(&rport->dev);
}
}
EXPORT_SYMBOL(fc_remote_port_rolechg);
dev_printk(KERN_ERR, &rport->dev,
"blocked FC remote port time out: no longer"
" a FCP target, removing starget\n");
- fc_queue_work(shost, &rport->stgt_delete_work);
spin_unlock_irqrestore(shost->host_lock, flags);
+ scsi_target_unblock(&rport->dev);
+ fc_queue_work(shost, &rport->stgt_delete_work);
return;
}
* went away and didn't come back - we'll remove
* all attached scsi devices.
*/
- fc_queue_work(shost, &rport->stgt_delete_work);
-
spin_unlock_irqrestore(shost->host_lock, flags);
+
+ scsi_target_unblock(&rport->dev);
+ fc_queue_work(shost, &rport->stgt_delete_work);
}
/**
* fc_scsi_scan_rport - called to perform a scsi scan on a remote port.
*
- * Will unblock the target (in case it went away and has now come back),
- * then invoke a scan.
- *
* @data: remote port to be scanned.
**/
static void
if ((rport->port_state == FC_PORTSTATE_ONLINE) &&
(rport->roles & FC_RPORT_ROLE_FCP_TARGET)) {
- scsi_target_unblock(&rport->dev);
scsi_scan_target(&rport->dev, rport->channel,
rport->scsi_target_id, SCAN_WILD_CARD, 1);
}
static void iscsi_session_release(struct device *dev)
{
struct iscsi_cls_session *session = iscsi_dev_to_session(dev);
- struct iscsi_transport *transport = session->transport;
struct Scsi_Host *shost;
shost = iscsi_session_to_shost(session);
scsi_host_put(shost);
- kfree(session->targetname);
kfree(session);
- module_put(transport->owner);
}
static int iscsi_is_session_dev(const struct device *dev)
mutex_lock(&ihost->mutex);
list_for_each_entry(session, &ihost->sessions, host_list) {
- if ((channel == SCAN_WILD_CARD ||
- channel == session->channel) &&
+ if ((channel == SCAN_WILD_CARD || channel == 0) &&
(id == SCAN_WILD_CARD || id == session->target_id))
- scsi_scan_target(&session->dev, session->channel,
+ scsi_scan_target(&session->dev, 0,
session->target_id, lun, 1);
}
mutex_unlock(&ihost->mutex);
}
EXPORT_SYMBOL_GPL(iscsi_block_session);
-/**
- * iscsi_create_session - create iscsi class session
- * @shost: scsi host
- * @transport: iscsi transport
- *
- * This can be called from a LLD or iscsi_transport.
- **/
struct iscsi_cls_session *
-iscsi_create_session(struct Scsi_Host *shost,
- struct iscsi_transport *transport, int channel)
+iscsi_alloc_session(struct Scsi_Host *shost,
+ struct iscsi_transport *transport)
{
- struct iscsi_host *ihost;
struct iscsi_cls_session *session;
- int err;
-
- if (!try_module_get(transport->owner))
- return NULL;
session = kzalloc(sizeof(*session) + transport->sessiondata_size,
GFP_KERNEL);
if (!session)
- goto module_put;
+ return NULL;
+
session->transport = transport;
session->recovery_tmo = 120;
INIT_WORK(&session->recovery_work, session_recovery_timedout, session);
INIT_LIST_HEAD(&session->host_list);
INIT_LIST_HEAD(&session->sess_list);
+ /* this is released in the dev's release function */
+ scsi_host_get(shost);
+ session->dev.parent = &shost->shost_gendev;
+ session->dev.release = iscsi_session_release;
+ device_initialize(&session->dev);
if (transport->sessiondata_size)
session->dd_data = &session[1];
+ return session;
+}
+EXPORT_SYMBOL_GPL(iscsi_alloc_session);
- /* this is released in the dev's release function */
- scsi_host_get(shost);
- ihost = shost->shost_data;
+int iscsi_add_session(struct iscsi_cls_session *session, unsigned int target_id)
+{
+ struct Scsi_Host *shost = iscsi_session_to_shost(session);
+ struct iscsi_host *ihost;
+ int err;
+ ihost = shost->shost_data;
session->sid = iscsi_session_nr++;
- session->channel = channel;
- session->target_id = ihost->next_target_id++;
+ session->target_id = target_id;
snprintf(session->dev.bus_id, BUS_ID_SIZE, "session%u",
session->sid);
- session->dev.parent = &shost->shost_gendev;
- session->dev.release = iscsi_session_release;
- err = device_register(&session->dev);
+ err = device_add(&session->dev);
if (err) {
dev_printk(KERN_ERR, &session->dev, "iscsi: could not "
"register session's dev\n");
- goto free_session;
+ goto release_host;
}
transport_register_device(&session->dev);
mutex_lock(&ihost->mutex);
list_add(&session->host_list, &ihost->sessions);
mutex_unlock(&ihost->mutex);
+ return 0;
- return session;
-
-free_session:
- kfree(session);
-module_put:
- module_put(transport->owner);
- return NULL;
+release_host:
+ scsi_host_put(shost);
+ return err;
}
-
-EXPORT_SYMBOL_GPL(iscsi_create_session);
+EXPORT_SYMBOL_GPL(iscsi_add_session);
/**
- * iscsi_destroy_session - destroy iscsi session
- * @session: iscsi_session
+ * iscsi_create_session - create iscsi class session
+ * @shost: scsi host
+ * @transport: iscsi transport
*
- * Can be called by a LLD or iscsi_transport. There must not be
- * any running connections.
+ * This can be called from a LLD or iscsi_transport.
**/
-int iscsi_destroy_session(struct iscsi_cls_session *session)
+struct iscsi_cls_session *
+iscsi_create_session(struct Scsi_Host *shost,
+ struct iscsi_transport *transport,
+ unsigned int target_id)
+{
+ struct iscsi_cls_session *session;
+
+ session = iscsi_alloc_session(shost, transport);
+ if (!session)
+ return NULL;
+
+ if (iscsi_add_session(session, target_id)) {
+ iscsi_free_session(session);
+ return NULL;
+ }
+ return session;
+}
+EXPORT_SYMBOL_GPL(iscsi_create_session);
+
+void iscsi_remove_session(struct iscsi_cls_session *session)
{
struct Scsi_Host *shost = iscsi_session_to_shost(session);
struct iscsi_host *ihost = shost->shost_data;
list_del(&session->host_list);
mutex_unlock(&ihost->mutex);
+ scsi_remove_target(&session->dev);
+
transport_unregister_device(&session->dev);
- device_unregister(&session->dev);
- return 0;
+ device_del(&session->dev);
+}
+EXPORT_SYMBOL_GPL(iscsi_remove_session);
+
+void iscsi_free_session(struct iscsi_cls_session *session)
+{
+ put_device(&session->dev);
}
+EXPORT_SYMBOL_GPL(iscsi_free_session);
+
+/**
+ * iscsi_destroy_session - destroy iscsi session
+ * @session: iscsi_session
+ *
+ * Can be called by a LLD or iscsi_transport. There must not be
+ * any running connections.
+ **/
+int iscsi_destroy_session(struct iscsi_cls_session *session)
+{
+ iscsi_remove_session(session);
+ iscsi_free_session(session);
+ return 0;
+}
EXPORT_SYMBOL_GPL(iscsi_destroy_session);
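/*
 * Illustrative sketch (editorial, not from the patch): how an iSCSI LLD can
 * use the split allocation API introduced above when it needs to touch the
 * session between allocation and registration.  The helper name is invented;
 * the transport template and target id are supplied by the caller, and the
 * declarations are assumed to live in <scsi/scsi_transport_iscsi.h>.
 */
#include <scsi/scsi_host.h>
#include <scsi/scsi_transport_iscsi.h>

static struct iscsi_cls_session *
example_setup_session(struct Scsi_Host *shost, struct iscsi_transport *tt,
		      unsigned int target_id)
{
	struct iscsi_cls_session *session;

	session = iscsi_alloc_session(shost, tt);
	if (!session)
		return NULL;

	/* LLD-private setup may run here, before the session is visible */

	if (iscsi_add_session(session, target_id)) {
		iscsi_free_session(session);
		return NULL;
	}
	return session;
}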
+static void mempool_zone_destroy(struct mempool_zone *zp)
+{
+ mempool_destroy(zp->pool);
+ kfree(zp);
+}
+
+static void*
+mempool_zone_alloc_skb(gfp_t gfp_mask, void *pool_data)
+{
+ struct mempool_zone *zone = pool_data;
+
+ return alloc_skb(zone->size, gfp_mask);
+}
+
+static void
+mempool_zone_free_skb(void *element, void *pool_data)
+{
+ kfree_skb(element);
+}
+
+static struct mempool_zone *
+mempool_zone_init(unsigned max, unsigned size, unsigned hiwat)
+{
+ struct mempool_zone *zp;
+
+ zp = kzalloc(sizeof(*zp), GFP_KERNEL);
+ if (!zp)
+ return NULL;
+
+ zp->size = size;
+ zp->hiwat = hiwat;
+ INIT_LIST_HEAD(&zp->freequeue);
+ spin_lock_init(&zp->freelock);
+ atomic_set(&zp->allocated, 0);
+
+ zp->pool = mempool_create(max, mempool_zone_alloc_skb,
+ mempool_zone_free_skb, zp);
+ if (!zp->pool) {
+ kfree(zp);
+ return NULL;
+ }
+
+ return zp;
+}
+
static void iscsi_conn_release(struct device *dev)
{
struct iscsi_cls_conn *conn = iscsi_dev_to_conn(dev);
struct device *parent = conn->dev.parent;
- kfree(conn->persistent_address);
+ mempool_zone_destroy(conn->z_pdu);
+ mempool_zone_destroy(conn->z_error);
+
kfree(conn);
put_device(parent);
}
return dev->release == iscsi_conn_release;
}
+static int iscsi_create_event_pools(struct iscsi_cls_conn *conn)
+{
+ conn->z_pdu = mempool_zone_init(Z_MAX_PDU,
+ NLMSG_SPACE(sizeof(struct iscsi_uevent) +
+ sizeof(struct iscsi_hdr) +
+ DEFAULT_MAX_RECV_DATA_SEGMENT_LENGTH),
+ Z_HIWAT_PDU);
+ if (!conn->z_pdu) {
+ dev_printk(KERN_ERR, &conn->dev, "iscsi: can not allocate "
+ "pdu zone for new conn\n");
+ return -ENOMEM;
+ }
+
+ conn->z_error = mempool_zone_init(Z_MAX_ERROR,
+ NLMSG_SPACE(sizeof(struct iscsi_uevent)),
+ Z_HIWAT_ERROR);
+ if (!conn->z_error) {
+ dev_printk(KERN_ERR, &conn->dev, "iscsi: can not allocate "
+ "error zone for new conn\n");
+ mempool_zone_destroy(conn->z_pdu);
+ return -ENOMEM;
+ }
+ return 0;
+}
+
/**
* iscsi_create_conn - create iscsi class connection
* @session: iscsi cls session
conn->transport = transport;
conn->cid = cid;
+ if (iscsi_create_event_pools(conn))
+ goto free_conn;
+
/* this is released in the dev's release function */
if (!get_device(&session->dev))
- goto free_conn;
+ goto free_conn_pools;
snprintf(conn->dev.bus_id, BUS_ID_SIZE, "connection%d:%u",
session->sid, cid);
release_parent_ref:
put_device(&session->dev);
+free_conn_pools:
+
free_conn:
kfree(conn);
return NULL;
return (struct list_head *)&skb->cb;
}
-static void*
-mempool_zone_alloc_skb(gfp_t gfp_mask, void *pool_data)
-{
- struct mempool_zone *zone = pool_data;
-
- return alloc_skb(zone->size, gfp_mask);
-}
-
-static void
-mempool_zone_free_skb(void *element, void *pool_data)
-{
- kfree_skb(element);
-}
-
static void
mempool_zone_complete(struct mempool_zone *zone)
{
spin_unlock_irqrestore(&zone->freelock, flags);
}
-static struct mempool_zone *
-mempool_zone_init(unsigned max, unsigned size, unsigned hiwat)
-{
- struct mempool_zone *zp;
-
- zp = kzalloc(sizeof(*zp), GFP_KERNEL);
- if (!zp)
- return NULL;
-
- zp->size = size;
- zp->hiwat = hiwat;
- INIT_LIST_HEAD(&zp->freequeue);
- spin_lock_init(&zp->freelock);
- atomic_set(&zp->allocated, 0);
-
- zp->pool = mempool_create(max, mempool_zone_alloc_skb,
- mempool_zone_free_skb, zp);
- if (!zp->pool) {
- kfree(zp);
- return NULL;
- }
-
- return zp;
-}
-
-static void mempool_zone_destroy(struct mempool_zone *zp)
-{
- mempool_destroy(zp->pool);
- kfree(zp);
-}
-
static struct sk_buff*
mempool_zone_get_skb(struct mempool_zone *zone)
{
return skb;
}
+static int
+iscsi_broadcast_skb(struct mempool_zone *zone, struct sk_buff *skb)
+{
+ unsigned long flags;
+ int rc;
+
+ skb_get(skb);
+ rc = netlink_broadcast(nls, skb, 0, 1, GFP_KERNEL);
+ if (rc < 0) {
+ mempool_free(skb, zone->pool);
+ printk(KERN_ERR "iscsi: can not broadcast skb (%d)\n", rc);
+ return rc;
+ }
+
+ spin_lock_irqsave(&zone->freelock, flags);
+ INIT_LIST_HEAD(skb_to_lh(skb));
+ list_add(skb_to_lh(skb), &zone->freequeue);
+ spin_unlock_irqrestore(&zone->freelock, flags);
+ return 0;
+}
+
static int
iscsi_unicast_skb(struct mempool_zone *zone, struct sk_buff *skb, int pid)
{
ev->r.connerror.cid = conn->cid;
ev->r.connerror.sid = iscsi_conn_get_sid(conn);
- iscsi_unicast_skb(conn->z_error, skb, priv->daemon_pid);
+ iscsi_broadcast_skb(conn->z_error, skb);
dev_printk(KERN_INFO, &conn->dev, "iscsi: detected conn error (%d)\n",
error);
return err;
}
+/**
+ * iscsi_if_destroy_session_done - send session destruction completion event
+ * @conn: last connection for session
+ *
+ * This is called by HW iSCSI LLDs to notify userspace that their hardware has
+ * removed a session.
+ **/
+int iscsi_if_destroy_session_done(struct iscsi_cls_conn *conn)
+{
+ struct iscsi_internal *priv;
+ struct iscsi_cls_session *session;
+ struct Scsi_Host *shost;
+ struct iscsi_uevent *ev;
+ struct sk_buff *skb;
+ struct nlmsghdr *nlh;
+ unsigned long flags;
+ int rc, len = NLMSG_SPACE(sizeof(*ev));
+
+ priv = iscsi_if_transport_lookup(conn->transport);
+ if (!priv)
+ return -EINVAL;
+
+ session = iscsi_dev_to_session(conn->dev.parent);
+ shost = iscsi_session_to_shost(session);
+
+ mempool_zone_complete(conn->z_pdu);
+
+ skb = mempool_zone_get_skb(conn->z_pdu);
+ if (!skb) {
+ dev_printk(KERN_ERR, &conn->dev, "Cannot notify userspace of "
+ "session creation event\n");
+ return -ENOMEM;
+ }
+
+ nlh = __nlmsg_put(skb, priv->daemon_pid, 0, 0, (len - sizeof(*nlh)), 0);
+ ev = NLMSG_DATA(nlh);
+ ev->transport_handle = iscsi_handle(conn->transport);
+ ev->type = ISCSI_KEVENT_DESTROY_SESSION;
+ ev->r.d_session.host_no = shost->host_no;
+ ev->r.d_session.sid = session->sid;
+
+ /*
+ * This will occur if the daemon is not up, so we just warn
+ * the user; when the daemon is restarted it will handle the event.
+ */
+ rc = iscsi_broadcast_skb(conn->z_pdu, skb);
+ if (rc < 0)
+ dev_printk(KERN_ERR, &conn->dev, "Cannot notify userspace of "
+ "session destruction event. Check iscsi daemon\n");
+
+ spin_lock_irqsave(&sesslock, flags);
+ list_del(&session->sess_list);
+ spin_unlock_irqrestore(&sesslock, flags);
+
+ spin_lock_irqsave(&connlock, flags);
+ conn->active = 0;
+ list_del(&conn->conn_list);
+ spin_unlock_irqrestore(&connlock, flags);
+
+ return rc;
+}
+EXPORT_SYMBOL_GPL(iscsi_if_destroy_session_done);
+
+/**
+ * iscsi_if_create_session_done - send session creation completion event
+ * @conn: leading connection for session
+ *
+ * This is called by HW iSCSI LLDs to notify userspace that their hardware has
+ * created a session or that an existing session is back in the logged-in state.
+ **/
+int iscsi_if_create_session_done(struct iscsi_cls_conn *conn)
+{
+ struct iscsi_internal *priv;
+ struct iscsi_cls_session *session;
+ struct Scsi_Host *shost;
+ struct iscsi_uevent *ev;
+ struct sk_buff *skb;
+ struct nlmsghdr *nlh;
+ unsigned long flags;
+ int rc, len = NLMSG_SPACE(sizeof(*ev));
+
+ priv = iscsi_if_transport_lookup(conn->transport);
+ if (!priv)
+ return -EINVAL;
+
+ session = iscsi_dev_to_session(conn->dev.parent);
+ shost = iscsi_session_to_shost(session);
+
+ mempool_zone_complete(conn->z_pdu);
+
+ skb = mempool_zone_get_skb(conn->z_pdu);
+ if (!skb) {
+ dev_printk(KERN_ERR, &conn->dev, "Cannot notify userspace of "
+ "session creation event\n");
+ return -ENOMEM;
+ }
+
+ nlh = __nlmsg_put(skb, priv->daemon_pid, 0, 0, (len - sizeof(*nlh)), 0);
+ ev = NLMSG_DATA(nlh);
+ ev->transport_handle = iscsi_handle(conn->transport);
+ ev->type = ISCSI_UEVENT_CREATE_SESSION;
+ ev->r.c_session_ret.host_no = shost->host_no;
+ ev->r.c_session_ret.sid = session->sid;
+
+ /*
+ * This will occur if the daemon is not up, so we just warn
+ * the user; when the daemon is restarted it will handle the event.
+ */
+ rc = iscsi_broadcast_skb(conn->z_pdu, skb);
+ if (rc < 0)
+ dev_printk(KERN_ERR, &conn->dev, "Cannot notify userspace of "
+ "session creation event. Check iscsi daemon\n");
+
+ spin_lock_irqsave(&sesslock, flags);
+ list_add(&session->sess_list, &sesslist);
+ spin_unlock_irqrestore(&sesslock, flags);
+
+ spin_lock_irqsave(&connlock, flags);
+ list_add(&conn->conn_list, &connlist);
+ conn->active = 1;
+ spin_unlock_irqrestore(&connlock, flags);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(iscsi_if_create_session_done);
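/*
 * Illustrative sketch (editorial, not from the patch): the points at which a
 * hardware iSCSI LLD would call the two notification helpers exported above.
 * The firmware event hooks are invented names; only
 * iscsi_if_create_session_done() and iscsi_if_destroy_session_done() come
 * from this code.
 */
#include <scsi/scsi_transport_iscsi.h>

static void example_fw_session_up(struct iscsi_cls_conn *cls_conn)
{
	/* firmware logged the session (back) in */
	if (iscsi_if_create_session_done(cls_conn) < 0)
		dev_printk(KERN_WARNING, &cls_conn->dev,
			   "example: could not notify userspace\n");
}

static void example_fw_session_gone(struct iscsi_cls_conn *cls_conn)
{
	/* firmware removed the session; let iscsid clean up its record */
	iscsi_if_destroy_session_done(cls_conn);
}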
+
static int
iscsi_if_create_session(struct iscsi_internal *priv, struct iscsi_uevent *ev)
{
return -ENOMEM;
}
- conn->z_pdu = mempool_zone_init(Z_MAX_PDU,
- NLMSG_SPACE(sizeof(struct iscsi_uevent) +
- sizeof(struct iscsi_hdr) +
- DEFAULT_MAX_RECV_DATA_SEGMENT_LENGTH),
- Z_HIWAT_PDU);
- if (!conn->z_pdu) {
- dev_printk(KERN_ERR, &conn->dev, "iscsi: can not allocate "
- "pdu zone for new conn\n");
- goto destroy_conn;
- }
-
- conn->z_error = mempool_zone_init(Z_MAX_ERROR,
- NLMSG_SPACE(sizeof(struct iscsi_uevent)),
- Z_HIWAT_ERROR);
- if (!conn->z_error) {
- dev_printk(KERN_ERR, &conn->dev, "iscsi: can not allocate "
- "error zone for new conn\n");
- goto free_pdu_pool;
- }
-
ev->r.c_conn_ret.sid = session->sid;
ev->r.c_conn_ret.cid = conn->cid;
spin_unlock_irqrestore(&connlock, flags);
return 0;
-
-free_pdu_pool:
- mempool_zone_destroy(conn->z_pdu);
-destroy_conn:
- if (transport->destroy_conn)
- transport->destroy_conn(conn->dd_data);
- return -ENOMEM;
}
static int
{
unsigned long flags;
struct iscsi_cls_conn *conn;
- struct mempool_zone *z_error, *z_pdu;
conn = iscsi_conn_lookup(ev->u.d_conn.sid, ev->u.d_conn.cid);
if (!conn)
list_del(&conn->conn_list);
spin_unlock_irqrestore(&connlock, flags);
- z_pdu = conn->z_pdu;
- z_error = conn->z_error;
-
if (transport->destroy_conn)
transport->destroy_conn(conn);
-
- mempool_zone_destroy(z_pdu);
- mempool_zone_destroy(z_error);
-
return 0;
}
-static void
-iscsi_copy_param(struct iscsi_uevent *ev, uint32_t *value, char *data)
-{
- if (ev->u.set_param.len != sizeof(uint32_t))
- BUG();
- memcpy(value, data, min_t(uint32_t, sizeof(uint32_t),
- ev->u.set_param.len));
-}
-
static int
iscsi_set_param(struct iscsi_transport *transport, struct iscsi_uevent *ev)
{
char *data = (char*)ev + sizeof(*ev);
struct iscsi_cls_conn *conn;
struct iscsi_cls_session *session;
- int err = 0;
- uint32_t value = 0;
+ int err = 0, value = 0;
session = iscsi_session_lookup(ev->u.set_param.sid);
conn = iscsi_conn_lookup(ev->u.set_param.sid, ev->u.set_param.cid);
switch (ev->u.set_param.param) {
case ISCSI_PARAM_SESS_RECOVERY_TMO:
- iscsi_copy_param(ev, &value, data);
+ sscanf(data, "%d", &value);
if (value != 0)
session->recovery_tmo = value;
break;
- case ISCSI_PARAM_TARGET_NAME:
- /* this should not change between logins */
- if (session->targetname)
- return 0;
-
- session->targetname = kstrdup(data, GFP_KERNEL);
- if (!session->targetname)
- return -ENOMEM;
- break;
- case ISCSI_PARAM_TPGT:
- iscsi_copy_param(ev, &value, data);
- session->tpgt = value;
- break;
- case ISCSI_PARAM_PERSISTENT_PORT:
- iscsi_copy_param(ev, &value, data);
- conn->persistent_port = value;
- break;
- case ISCSI_PARAM_PERSISTENT_ADDRESS:
- /*
- * this is the address returned in discovery so it should
- * not change between logins.
- */
- if (conn->persistent_address)
- return 0;
-
- conn->persistent_address = kstrdup(data, GFP_KERNEL);
- if (!conn->persistent_address)
- return -ENOMEM;
- break;
default:
- iscsi_copy_param(ev, &value, data);
- err = transport->set_param(conn, ev->u.set_param.param, value);
+ err = transport->set_param(conn, ev->u.set_param.param,
+ data, ev->u.set_param.len);
}
return err;
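/*
 * Illustrative sketch (editorial, not from the patch): an LLD set_param()
 * hook under the new string-based convention, matching the
 * (conn, param, data, len) call made above.  The parameter constants are the
 * existing ISCSI_PARAM_* values; the per-connection storage is a placeholder.
 */
#include <linux/kernel.h>
#include <scsi/scsi_transport_iscsi.h>

static int example_set_param(struct iscsi_cls_conn *cls_conn,
			     enum iscsi_param param, char *buf, int buflen)
{
	int value;

	switch (param) {
	case ISCSI_PARAM_MAX_RECV_DLENGTH:
		sscanf(buf, "%d", &value);
		/* store value in the driver's per-connection data here */
		return 0;
	case ISCSI_PARAM_HDRDGST_EN:
		sscanf(buf, "%d", &value);
		return 0;
	default:
		return -ENOSYS;
	}
}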
return rc;
}
+static int
+iscsi_tgt_dscvr(struct iscsi_transport *transport,
+ struct iscsi_uevent *ev)
+{
+ struct sockaddr *dst_addr;
+
+ if (!transport->tgt_dscvr)
+ return -EINVAL;
+
+ dst_addr = (struct sockaddr *)((char*)ev + sizeof(*ev));
+ return transport->tgt_dscvr(ev->u.tgt_dscvr.type,
+ ev->u.tgt_dscvr.host_no,
+ ev->u.tgt_dscvr.enable, dst_addr);
+}
+
static int
iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh)
{
case ISCSI_UEVENT_TRANSPORT_EP_DISCONNECT:
err = iscsi_if_transport_ep(transport, ev, nlh->nlmsg_type);
break;
+ case ISCSI_UEVENT_TGT_DSCVR:
+ err = iscsi_tgt_dscvr(transport, ev);
+ break;
default:
err = -EINVAL;
break;
/*
* iSCSI connection attrs
*/
-#define iscsi_conn_int_attr_show(param, format) \
-static ssize_t \
-show_conn_int_param_##param(struct class_device *cdev, char *buf) \
-{ \
- uint32_t value = 0; \
- struct iscsi_cls_conn *conn = iscsi_cdev_to_conn(cdev); \
- struct iscsi_transport *t = conn->transport; \
- \
- t->get_conn_param(conn, param, &value); \
- return snprintf(buf, 20, format"\n", value); \
-}
-
-#define iscsi_conn_int_attr(field, param, format) \
- iscsi_conn_int_attr_show(param, format) \
-static ISCSI_CLASS_ATTR(conn, field, S_IRUGO, show_conn_int_param_##param, \
- NULL);
-
-iscsi_conn_int_attr(max_recv_dlength, ISCSI_PARAM_MAX_RECV_DLENGTH, "%u");
-iscsi_conn_int_attr(max_xmit_dlength, ISCSI_PARAM_MAX_XMIT_DLENGTH, "%u");
-iscsi_conn_int_attr(header_digest, ISCSI_PARAM_HDRDGST_EN, "%d");
-iscsi_conn_int_attr(data_digest, ISCSI_PARAM_DATADGST_EN, "%d");
-iscsi_conn_int_attr(ifmarker, ISCSI_PARAM_IFMARKER_EN, "%d");
-iscsi_conn_int_attr(ofmarker, ISCSI_PARAM_OFMARKER_EN, "%d");
-iscsi_conn_int_attr(persistent_port, ISCSI_PARAM_PERSISTENT_PORT, "%d");
-iscsi_conn_int_attr(port, ISCSI_PARAM_CONN_PORT, "%d");
-iscsi_conn_int_attr(exp_statsn, ISCSI_PARAM_EXP_STATSN, "%u");
-
-#define iscsi_conn_str_attr_show(param) \
+#define iscsi_conn_attr_show(param) \
static ssize_t \
-show_conn_str_param_##param(struct class_device *cdev, char *buf) \
+show_conn_param_##param(struct class_device *cdev, char *buf) \
{ \
struct iscsi_cls_conn *conn = iscsi_cdev_to_conn(cdev); \
struct iscsi_transport *t = conn->transport; \
- return t->get_conn_str_param(conn, param, buf); \
+ return t->get_conn_param(conn, param, buf); \
}
-#define iscsi_conn_str_attr(field, param) \
- iscsi_conn_str_attr_show(param) \
-static ISCSI_CLASS_ATTR(conn, field, S_IRUGO, show_conn_str_param_##param, \
+#define iscsi_conn_attr(field, param) \
+ iscsi_conn_attr_show(param) \
+static ISCSI_CLASS_ATTR(conn, field, S_IRUGO, show_conn_param_##param, \
NULL);
-iscsi_conn_str_attr(persistent_address, ISCSI_PARAM_PERSISTENT_ADDRESS);
-iscsi_conn_str_attr(address, ISCSI_PARAM_CONN_ADDRESS);
+iscsi_conn_attr(max_recv_dlength, ISCSI_PARAM_MAX_RECV_DLENGTH);
+iscsi_conn_attr(max_xmit_dlength, ISCSI_PARAM_MAX_XMIT_DLENGTH);
+iscsi_conn_attr(header_digest, ISCSI_PARAM_HDRDGST_EN);
+iscsi_conn_attr(data_digest, ISCSI_PARAM_DATADGST_EN);
+iscsi_conn_attr(ifmarker, ISCSI_PARAM_IFMARKER_EN);
+iscsi_conn_attr(ofmarker, ISCSI_PARAM_OFMARKER_EN);
+iscsi_conn_attr(persistent_port, ISCSI_PARAM_PERSISTENT_PORT);
+iscsi_conn_attr(port, ISCSI_PARAM_CONN_PORT);
+iscsi_conn_attr(exp_statsn, ISCSI_PARAM_EXP_STATSN);
+iscsi_conn_attr(persistent_address, ISCSI_PARAM_PERSISTENT_ADDRESS);
+iscsi_conn_attr(address, ISCSI_PARAM_CONN_ADDRESS);
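/*
 * Illustrative expansion (editorial, not from the patch): approximately what
 * iscsi_conn_attr(port, ISCSI_PARAM_CONN_PORT) above generates, assuming the
 * usual ISCSI_CLASS_ATTR/__ATTR expansion.  Shown only to make the new
 * string-returning get_conn_param() contract explicit.
 */
static ssize_t
show_conn_param_ISCSI_PARAM_CONN_PORT(struct class_device *cdev, char *buf)
{
	struct iscsi_cls_conn *conn = iscsi_cdev_to_conn(cdev);
	struct iscsi_transport *t = conn->transport;

	return t->get_conn_param(conn, ISCSI_PARAM_CONN_PORT, buf);
}
static struct class_device_attribute class_device_attr_conn_port =
	__ATTR(port, S_IRUGO, show_conn_param_ISCSI_PARAM_CONN_PORT, NULL);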
#define iscsi_cdev_to_session(_cdev) \
iscsi_dev_to_session(_cdev->dev)
/*
* iSCSI session attrs
*/
-#define iscsi_session_int_attr_show(param, format) \
-static ssize_t \
-show_session_int_param_##param(struct class_device *cdev, char *buf) \
-{ \
- uint32_t value = 0; \
- struct iscsi_cls_session *session = iscsi_cdev_to_session(cdev); \
- struct iscsi_transport *t = session->transport; \
- \
- t->get_session_param(session, param, &value); \
- return snprintf(buf, 20, format"\n", value); \
-}
-
-#define iscsi_session_int_attr(field, param, format) \
- iscsi_session_int_attr_show(param, format) \
-static ISCSI_CLASS_ATTR(sess, field, S_IRUGO, show_session_int_param_##param, \
- NULL);
-
-iscsi_session_int_attr(initial_r2t, ISCSI_PARAM_INITIAL_R2T_EN, "%d");
-iscsi_session_int_attr(max_outstanding_r2t, ISCSI_PARAM_MAX_R2T, "%hu");
-iscsi_session_int_attr(immediate_data, ISCSI_PARAM_IMM_DATA_EN, "%d");
-iscsi_session_int_attr(first_burst_len, ISCSI_PARAM_FIRST_BURST, "%u");
-iscsi_session_int_attr(max_burst_len, ISCSI_PARAM_MAX_BURST, "%u");
-iscsi_session_int_attr(data_pdu_in_order, ISCSI_PARAM_PDU_INORDER_EN, "%d");
-iscsi_session_int_attr(data_seq_in_order, ISCSI_PARAM_DATASEQ_INORDER_EN, "%d");
-iscsi_session_int_attr(erl, ISCSI_PARAM_ERL, "%d");
-iscsi_session_int_attr(tpgt, ISCSI_PARAM_TPGT, "%d");
-
-#define iscsi_session_str_attr_show(param) \
+#define iscsi_session_attr_show(param) \
static ssize_t \
-show_session_str_param_##param(struct class_device *cdev, char *buf) \
+show_session_param_##param(struct class_device *cdev, char *buf) \
{ \
struct iscsi_cls_session *session = iscsi_cdev_to_session(cdev); \
struct iscsi_transport *t = session->transport; \
- return t->get_session_str_param(session, param, buf); \
+ return t->get_session_param(session, param, buf); \
}
-#define iscsi_session_str_attr(field, param) \
- iscsi_session_str_attr_show(param) \
-static ISCSI_CLASS_ATTR(sess, field, S_IRUGO, show_session_str_param_##param, \
+#define iscsi_session_attr(field, param) \
+ iscsi_session_attr_show(param) \
+static ISCSI_CLASS_ATTR(sess, field, S_IRUGO, show_session_param_##param, \
NULL);
-iscsi_session_str_attr(targetname, ISCSI_PARAM_TARGET_NAME);
+iscsi_session_attr(targetname, ISCSI_PARAM_TARGET_NAME);
+iscsi_session_attr(initial_r2t, ISCSI_PARAM_INITIAL_R2T_EN);
+iscsi_session_attr(max_outstanding_r2t, ISCSI_PARAM_MAX_R2T);
+iscsi_session_attr(immediate_data, ISCSI_PARAM_IMM_DATA_EN);
+iscsi_session_attr(first_burst_len, ISCSI_PARAM_FIRST_BURST);
+iscsi_session_attr(max_burst_len, ISCSI_PARAM_MAX_BURST);
+iscsi_session_attr(data_pdu_in_order, ISCSI_PARAM_PDU_INORDER_EN);
+iscsi_session_attr(data_seq_in_order, ISCSI_PARAM_DATASEQ_INORDER_EN);
+iscsi_session_attr(erl, ISCSI_PARAM_ERL);
+iscsi_session_attr(tpgt, ISCSI_PARAM_TPGT);
-/*
- * Private session and conn attrs. userspace uses several iscsi values
- * to identify each session between reboots. Some of these values may not
- * be present in the iscsi_transport/LLD driver becuase userspace handles
- * login (and failback for login redirect) so for these type of drivers
- * the class manages the attrs and values for the iscsi_transport/LLD
- */
#define iscsi_priv_session_attr_show(field, format) \
static ssize_t \
-show_priv_session_##field(struct class_device *cdev, char *buf) \
+show_priv_session_##field(struct class_device *cdev, char *buf) \
{ \
- struct iscsi_cls_session *session = iscsi_cdev_to_session(cdev); \
+ struct iscsi_cls_session *session = iscsi_cdev_to_session(cdev);\
return sprintf(buf, format"\n", session->field); \
}
iscsi_priv_session_attr_show(field, format) \
static ISCSI_CLASS_ATTR(priv_sess, field, S_IRUGO, show_priv_session_##field, \
NULL)
-iscsi_priv_session_attr(targetname, "%s");
-iscsi_priv_session_attr(tpgt, "%d");
iscsi_priv_session_attr(recovery_tmo, "%d");
-#define iscsi_priv_conn_attr_show(field, format) \
-static ssize_t \
-show_priv_conn_##field(struct class_device *cdev, char *buf) \
-{ \
- struct iscsi_cls_conn *conn = iscsi_cdev_to_conn(cdev); \
- return sprintf(buf, format"\n", conn->field); \
-}
-
-#define iscsi_priv_conn_attr(field, format) \
- iscsi_priv_conn_attr_show(field, format) \
-static ISCSI_CLASS_ATTR(priv_conn, field, S_IRUGO, show_priv_conn_##field, \
- NULL)
-iscsi_priv_conn_attr(persistent_address, "%s");
-iscsi_priv_conn_attr(persistent_port, "%d");
-
#define SETUP_PRIV_SESSION_RD_ATTR(field) \
do { \
priv->session_attrs[count] = &class_device_attr_priv_sess_##field; \
count++; \
} while (0)
+
#define SETUP_SESSION_RD_ATTR(field, param_flag) \
do { \
if (tt->param_mask & param_flag) { \
} \
} while (0)
-#define SETUP_PRIV_CONN_RD_ATTR(field) \
-do { \
- priv->conn_attrs[count] = &class_device_attr_priv_conn_##field; \
- count++; \
-} while (0)
-
#define SETUP_CONN_RD_ATTR(field, param_flag) \
do { \
if (tt->param_mask & param_flag) { \
if (!priv)
return NULL;
INIT_LIST_HEAD(&priv->list);
+ priv->daemon_pid = -1;
priv->iscsi_transport = tt;
priv->t.user_scan = iscsi_user_scan;
SETUP_CONN_RD_ATTR(address, ISCSI_CONN_ADDRESS);
SETUP_CONN_RD_ATTR(port, ISCSI_CONN_PORT);
SETUP_CONN_RD_ATTR(exp_statsn, ISCSI_EXP_STATSN);
-
- if (tt->param_mask & ISCSI_PERSISTENT_ADDRESS)
- SETUP_CONN_RD_ATTR(persistent_address, ISCSI_PERSISTENT_ADDRESS);
- else
- SETUP_PRIV_CONN_RD_ATTR(persistent_address);
-
- if (tt->param_mask & ISCSI_PERSISTENT_PORT)
- SETUP_CONN_RD_ATTR(persistent_port, ISCSI_PERSISTENT_PORT);
- else
- SETUP_PRIV_CONN_RD_ATTR(persistent_port);
+ SETUP_CONN_RD_ATTR(persistent_address, ISCSI_PERSISTENT_ADDRESS);
+ SETUP_CONN_RD_ATTR(persistent_port, ISCSI_PERSISTENT_PORT);
BUG_ON(count > ISCSI_CONN_ATTRS);
priv->conn_attrs[count] = NULL;
SETUP_SESSION_RD_ATTR(data_pdu_in_order, ISCSI_PDU_INORDER_EN);
SETUP_SESSION_RD_ATTR(data_seq_in_order, ISCSI_DATASEQ_INORDER_EN);
SETUP_SESSION_RD_ATTR(erl, ISCSI_ERL);
+ SETUP_SESSION_RD_ATTR(targetname, ISCSI_TARGET_NAME);
+ SETUP_SESSION_RD_ATTR(tpgt, ISCSI_TPGT);
SETUP_PRIV_SESSION_RD_ATTR(recovery_tmo);
- if (tt->param_mask & ISCSI_TARGET_NAME)
- SETUP_SESSION_RD_ATTR(targetname, ISCSI_TARGET_NAME);
- else
- SETUP_PRIV_SESSION_RD_ATTR(targetname);
-
- if (tt->param_mask & ISCSI_TPGT)
- SETUP_SESSION_RD_ATTR(tpgt, ISCSI_TPGT);
- else
- SETUP_PRIV_SESSION_RD_ATTR(tpgt);
-
BUG_ON(count > ISCSI_SESSION_ATTRS);
priv->session_attrs[count] = NULL;
static int do_sas_phy_delete(struct device *dev, void *data)
{
- if (scsi_is_sas_phy(dev))
+ int pass = (int)(unsigned long)data;
+
+ if (pass == 0 && scsi_is_sas_port(dev))
+ sas_port_delete(dev_to_sas_port(dev));
+ else if (pass == 1 && scsi_is_sas_phy(dev))
sas_phy_delete(dev_to_phy(dev));
return 0;
}
+/**
+ * sas_remove_children -- tear down a devices SAS data structures
+ * @dev: device belonging to the sas object
+ *
+ * Removes all SAS PHYs and remote PHYs for a given object
+ */
+void sas_remove_children(struct device *dev)
+{
+ device_for_each_child(dev, (void *)0, do_sas_phy_delete);
+ device_for_each_child(dev, (void *)1, do_sas_phy_delete);
+}
+EXPORT_SYMBOL(sas_remove_children);
+
/**
* sas_remove_host -- tear down a Scsi_Host's SAS data structures
* @shost: Scsi Host that is torn down
*/
void sas_remove_host(struct Scsi_Host *shost)
{
- device_for_each_child(&shost->shost_gendev, NULL, do_sas_phy_delete);
+ sas_remove_children(&shost->shost_gendev);
}
EXPORT_SYMBOL(sas_remove_host);
/*
- * SAS Port attributes
+ * SAS Phy attributes
*/
#define sas_phy_show_simple(field, name, format_string, cast) \
sas_phy_simple_attr(identify.sas_address, sas_address, "0x%016llx\n",
unsigned long long);
sas_phy_simple_attr(identify.phy_identifier, phy_identifier, "%d\n", u8);
-sas_phy_simple_attr(port_identifier, port_identifier, "%d\n", u8);
+//sas_phy_simple_attr(port_identifier, port_identifier, "%d\n", u8);
sas_phy_linkspeed_attr(negotiated_linkrate);
sas_phy_linkspeed_attr(minimum_linkrate_hw);
sas_phy_linkspeed_attr(minimum_linkrate);
device_initialize(&phy->dev);
phy->dev.parent = get_device(parent);
phy->dev.release = sas_phy_release;
+ INIT_LIST_HEAD(&phy->port_siblings);
if (scsi_is_sas_expander_device(parent)) {
struct sas_rphy *rphy = dev_to_rphy(parent);
- sprintf(phy->dev.bus_id, "phy-%d-%d:%d", shost->host_no,
+ sprintf(phy->dev.bus_id, "phy-%d:%d:%d", shost->host_no,
rphy->scsi_target_id, number);
} else
sprintf(phy->dev.bus_id, "phy-%d:%d", shost->host_no, number);
{
struct device *dev = &phy->dev;
- if (phy->rphy)
- sas_rphy_delete(phy->rphy);
+ /* this happens if the phy is still part of a port when deleted */
+ BUG_ON(!list_empty(&phy->port_siblings));
transport_remove_device(dev);
device_del(dev);
}
EXPORT_SYMBOL(scsi_is_sas_phy);
+/*
+ * SAS Port attributes
+ */
+#define sas_port_show_simple(field, name, format_string, cast) \
+static ssize_t \
+show_sas_port_##name(struct class_device *cdev, char *buf) \
+{ \
+ struct sas_port *port = transport_class_to_sas_port(cdev); \
+ \
+ return snprintf(buf, 20, format_string, cast port->field); \
+}
+
+#define sas_port_simple_attr(field, name, format_string, type) \
+ sas_port_show_simple(field, name, format_string, (type)) \
+static CLASS_DEVICE_ATTR(name, S_IRUGO, show_sas_port_##name, NULL)
+
+sas_port_simple_attr(num_phys, num_phys, "%d\n", int);
+
+static DECLARE_TRANSPORT_CLASS(sas_port_class,
+ "sas_port", NULL, NULL, NULL);
+
+static int sas_port_match(struct attribute_container *cont, struct device *dev)
+{
+ struct Scsi_Host *shost;
+ struct sas_internal *i;
+
+ if (!scsi_is_sas_port(dev))
+ return 0;
+ shost = dev_to_shost(dev->parent);
+
+ if (!shost->transportt)
+ return 0;
+ if (shost->transportt->host_attrs.ac.class !=
+ &sas_host_class.class)
+ return 0;
+
+ i = to_sas_internal(shost->transportt);
+ return &i->port_attr_cont.ac == cont;
+}
+
+
+static void sas_port_release(struct device *dev)
+{
+ struct sas_port *port = dev_to_sas_port(dev);
+
+ BUG_ON(!list_empty(&port->phy_list));
+
+ put_device(dev->parent);
+ kfree(port);
+}
+
+static void sas_port_create_link(struct sas_port *port,
+ struct sas_phy *phy)
+{
+ sysfs_create_link(&port->dev.kobj, &phy->dev.kobj, phy->dev.bus_id);
+ sysfs_create_link(&phy->dev.kobj, &port->dev.kobj, "port");
+}
+
+static void sas_port_delete_link(struct sas_port *port,
+ struct sas_phy *phy)
+{
+ sysfs_remove_link(&port->dev.kobj, phy->dev.bus_id);
+ sysfs_remove_link(&phy->dev.kobj, "port");
+}
+
+/**
+ * sas_port_alloc - allocate and initialize a SAS port structure
+ *
+ * @parent: parent device
+ * @port_id: port number
+ *
+ * Allocates a SAS port structure. It will be added to the device tree
+ * below the device specified by @parent, which must be either a Scsi_Host
+ * or a sas_expander_device.
+ *
+ * Returns %NULL on error
+ */
+struct sas_port *sas_port_alloc(struct device *parent, int port_id)
+{
+ struct Scsi_Host *shost = dev_to_shost(parent);
+ struct sas_port *port;
+
+ port = kzalloc(sizeof(*port), GFP_KERNEL);
+ if (!port)
+ return NULL;
+
+ port->port_identifier = port_id;
+
+ device_initialize(&port->dev);
+
+ port->dev.parent = get_device(parent);
+ port->dev.release = sas_port_release;
+
+ mutex_init(&port->phy_list_mutex);
+ INIT_LIST_HEAD(&port->phy_list);
+
+ if (scsi_is_sas_expander_device(parent)) {
+ struct sas_rphy *rphy = dev_to_rphy(parent);
+ sprintf(port->dev.bus_id, "port-%d:%d:%d", shost->host_no,
+ rphy->scsi_target_id, port->port_identifier);
+ } else
+ sprintf(port->dev.bus_id, "port-%d:%d", shost->host_no,
+ port->port_identifier);
+
+ transport_setup_device(&port->dev);
+
+ return port;
+}
+EXPORT_SYMBOL(sas_port_alloc);
+
+/**
+ * sas_port_add - add a SAS port to the device hierarchy
+ *
+ * @port: port to be added
+ *
+ * publishes a port to the rest of the system
+ */
+int sas_port_add(struct sas_port *port)
+{
+ int error;
+
+ /* No phys should be added until this is made visible */
+ BUG_ON(!list_empty(&port->phy_list));
+
+ error = device_add(&port->dev);
+
+ if (error)
+ return error;
+
+ transport_add_device(&port->dev);
+ transport_configure_device(&port->dev);
+
+ return 0;
+}
+EXPORT_SYMBOL(sas_port_add);
+
+/**
+ * sas_port_free -- free a SAS PORT
+ * @port: SAS PORT to free
+ *
+ * Frees the specified SAS PORT.
+ *
+ * Note:
+ * This function must only be called on a PORT that has not
+ * successfully been added using sas_port_add().
+ */
+void sas_port_free(struct sas_port *port)
+{
+ transport_destroy_device(&port->dev);
+ put_device(&port->dev);
+}
+EXPORT_SYMBOL(sas_port_free);
+
+/**
+ * sas_port_delete -- remove SAS PORT
+ * @port: SAS PORT to remove
+ *
+ * Removes the specified SAS PORT. If the SAS PORT has
+ * associated phys, they are unlinked from the port as well.
+ */
+void sas_port_delete(struct sas_port *port)
+{
+ struct device *dev = &port->dev;
+ struct sas_phy *phy, *tmp_phy;
+
+ if (port->rphy) {
+ sas_rphy_delete(port->rphy);
+ port->rphy = NULL;
+ }
+
+ mutex_lock(&port->phy_list_mutex);
+ list_for_each_entry_safe(phy, tmp_phy, &port->phy_list,
+ port_siblings) {
+ sas_port_delete_link(port, phy);
+ list_del_init(&phy->port_siblings);
+ }
+ mutex_unlock(&port->phy_list_mutex);
+
+ transport_remove_device(dev);
+ device_del(dev);
+ transport_destroy_device(dev);
+ put_device(dev);
+}
+EXPORT_SYMBOL(sas_port_delete);
+
+/**
+ * scsi_is_sas_port -- check if a struct device represents a SAS port
+ * @dev: device to check
+ *
+ * Returns:
+ * %1 if the device represents a SAS Port, %0 else
+ */
+int scsi_is_sas_port(const struct device *dev)
+{
+ return dev->release == sas_port_release;
+}
+EXPORT_SYMBOL(scsi_is_sas_port);
+
+/**
+ * sas_port_add_phy - add another phy to a port to form a wide port
+ * @port: port to add the phy to
+ * @phy: phy to add
+ *
+ * When a port is initially created, it is empty (has no phys). All
+ * ports must have at least one phy to operate, and all wide ports
+ * must have at least two. The current code makes no distinction
+ * between ports and wide ports, but the only object that can be
+ * connected to a remote device is a port, so ports must be formed on
+ * all devices with phys if they're connected to anything.
+ */
+void sas_port_add_phy(struct sas_port *port, struct sas_phy *phy)
+{
+ mutex_lock(&port->phy_list_mutex);
+ if (unlikely(!list_empty(&phy->port_siblings))) {
+ /* make sure we're already on this port */
+ struct sas_phy *tmp;
+
+ list_for_each_entry(tmp, &port->phy_list, port_siblings)
+ if (tmp == phy)
+ break;
+ /* If this trips, you added a phy that was already
+ * part of a different port */
+ if (unlikely(tmp != phy)) {
+ dev_printk(KERN_ERR, &port->dev, "trying to add phy %s fails: it's already part of another port\n", phy->dev.bus_id);
+ BUG();
+ }
+ } else {
+ sas_port_create_link(port, phy);
+ list_add_tail(&phy->port_siblings, &port->phy_list);
+ port->num_phys++;
+ }
+ mutex_unlock(&port->phy_list_mutex);
+}
+EXPORT_SYMBOL(sas_port_add_phy);
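/*
 * Illustrative sketch (editorial, not from the patch): forming a (wide) port
 * with the new API - allocate the port below the host, publish it, then
 * attach phys the LLD has already registered.  Error handling is minimal and
 * the helper name is invented.
 */
#include <scsi/scsi_host.h>
#include <scsi/scsi_transport_sas.h>

static struct sas_port *example_form_wide_port(struct Scsi_Host *shost,
					       struct sas_phy *phy0,
					       struct sas_phy *phy1)
{
	struct sas_port *port;

	port = sas_port_alloc(&shost->shost_gendev, 0);
	if (!port)
		return NULL;

	if (sas_port_add(port)) {
		sas_port_free(port);	/* only legal before a successful add */
		return NULL;
	}

	sas_port_add_phy(port, phy0);
	sas_port_add_phy(port, phy1);	/* second phy makes this a wide port */
	return port;
}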
+
+/**
+ * sas_port_delete_phy - remove a phy from a port or wide port
+ * @port: port to remove the phy from
+ * @phy: phy to remove
+ *
+ * This operation is used for tearing down ports again. It must be
+ * done to every port or wide port before calling sas_port_delete.
+ */
+void sas_port_delete_phy(struct sas_port *port, struct sas_phy *phy)
+{
+ mutex_lock(&port->phy_list_mutex);
+ sas_port_delete_link(port, phy);
+ list_del_init(&phy->port_siblings);
+ port->num_phys--;
+ mutex_unlock(&port->phy_list_mutex);
+}
+EXPORT_SYMBOL(sas_port_delete_phy);
+
/*
* SAS remote PHY attributes.
*/
* Returns:
* SAS PHY allocated or %NULL if the allocation failed.
*/
-struct sas_rphy *sas_end_device_alloc(struct sas_phy *parent)
+struct sas_rphy *sas_end_device_alloc(struct sas_port *parent)
{
struct Scsi_Host *shost = dev_to_shost(&parent->dev);
struct sas_end_device *rdev;
device_initialize(&rdev->rphy.dev);
rdev->rphy.dev.parent = get_device(&parent->dev);
rdev->rphy.dev.release = sas_end_device_release;
- sprintf(rdev->rphy.dev.bus_id, "end_device-%d:%d-%d",
- shost->host_no, parent->port_identifier, parent->number);
+ if (scsi_is_sas_expander_device(parent->dev.parent)) {
+ struct sas_rphy *rphy = dev_to_rphy(parent->dev.parent);
+ sprintf(rdev->rphy.dev.bus_id, "end_device-%d:%d:%d",
+ shost->host_no, rphy->scsi_target_id, parent->port_identifier);
+ } else
+ sprintf(rdev->rphy.dev.bus_id, "end_device-%d:%d",
+ shost->host_no, parent->port_identifier);
rdev->rphy.identify.device_type = SAS_END_DEVICE;
sas_rphy_initialize(&rdev->rphy);
transport_setup_device(&rdev->rphy.dev);
* Returns:
* SAS PHY allocated or %NULL if the allocation failed.
*/
-struct sas_rphy *sas_expander_alloc(struct sas_phy *parent,
+struct sas_rphy *sas_expander_alloc(struct sas_port *parent,
enum sas_device_type type)
{
struct Scsi_Host *shost = dev_to_shost(&parent->dev);
*/
int sas_rphy_add(struct sas_rphy *rphy)
{
- struct sas_phy *parent = dev_to_phy(rphy->dev.parent);
+ struct sas_port *parent = dev_to_sas_port(rphy->dev.parent);
struct Scsi_Host *shost = dev_to_shost(parent->dev.parent);
struct sas_host_attrs *sas_host = to_sas_host_attrs(shost);
struct sas_identify *identify = &rphy->identify;
sas_rphy_delete(struct sas_rphy *rphy)
{
struct device *dev = &rphy->dev;
- struct sas_phy *parent = dev_to_phy(dev->parent);
+ struct sas_port *parent = dev_to_sas_port(dev->parent);
struct Scsi_Host *shost = dev_to_shost(parent->dev.parent);
struct sas_host_attrs *sas_host = to_sas_host_attrs(shost);
break;
case SAS_EDGE_EXPANDER_DEVICE:
case SAS_FANOUT_EXPANDER_DEVICE:
- device_for_each_child(dev, NULL, do_sas_phy_delete);
+ sas_remove_children(dev);
break;
default:
break;
mutex_lock(&sas_host->lock);
list_for_each_entry(rphy, &sas_host->rphy_list, list) {
- struct sas_phy *parent = dev_to_phy(rphy->dev.parent);
+ struct sas_port *parent = dev_to_sas_port(rphy->dev.parent);
if (rphy->identify.device_type != SAS_END_DEVICE ||
rphy->scsi_target_id == -1)
#define SETUP_OPTIONAL_RPORT_ATTRIBUTE(field, func) \
SETUP_TEMPLATE(rphy_attrs, field, S_IRUGO, i->f->func)
-#define SETUP_PORT_ATTRIBUTE(field) \
+#define SETUP_PHY_ATTRIBUTE(field) \
SETUP_TEMPLATE(phy_attrs, field, S_IRUGO, 1)
-#define SETUP_OPTIONAL_PORT_ATTRIBUTE(field, func) \
+#define SETUP_PORT_ATTRIBUTE(field) \
+ SETUP_TEMPLATE(port_attrs, field, S_IRUGO, 1)
+
+#define SETUP_OPTIONAL_PHY_ATTRIBUTE(field, func) \
SETUP_TEMPLATE(phy_attrs, field, S_IRUGO, i->f->func)
-#define SETUP_PORT_ATTRIBUTE_WRONLY(field) \
+#define SETUP_PHY_ATTRIBUTE_WRONLY(field) \
SETUP_TEMPLATE(phy_attrs, field, S_IWUGO, 1)
-#define SETUP_OPTIONAL_PORT_ATTRIBUTE_WRONLY(field, func) \
+#define SETUP_OPTIONAL_PHY_ATTRIBUTE_WRONLY(field, func) \
SETUP_TEMPLATE(phy_attrs, field, S_IWUGO, i->f->func)
#define SETUP_END_DEV_ATTRIBUTE(field) \
i->phy_attr_cont.ac.match = sas_phy_match;
transport_container_register(&i->phy_attr_cont);
+ i->port_attr_cont.ac.class = &sas_port_class.class;
+ i->port_attr_cont.ac.attrs = &i->port_attrs[0];
+ i->port_attr_cont.ac.match = sas_port_match;
+ transport_container_register(&i->port_attr_cont);
+
i->rphy_attr_cont.ac.class = &sas_rphy_class.class;
i->rphy_attr_cont.ac.attrs = &i->rphy_attrs[0];
i->rphy_attr_cont.ac.match = sas_rphy_match;
i->f = ft;
count = 0;
+ SETUP_PORT_ATTRIBUTE(num_phys);
i->host_attrs[count] = NULL;
count = 0;
- SETUP_PORT_ATTRIBUTE(initiator_port_protocols);
- SETUP_PORT_ATTRIBUTE(target_port_protocols);
- SETUP_PORT_ATTRIBUTE(device_type);
- SETUP_PORT_ATTRIBUTE(sas_address);
- SETUP_PORT_ATTRIBUTE(phy_identifier);
- SETUP_PORT_ATTRIBUTE(port_identifier);
- SETUP_PORT_ATTRIBUTE(negotiated_linkrate);
- SETUP_PORT_ATTRIBUTE(minimum_linkrate_hw);
- SETUP_PORT_ATTRIBUTE(minimum_linkrate);
- SETUP_PORT_ATTRIBUTE(maximum_linkrate_hw);
- SETUP_PORT_ATTRIBUTE(maximum_linkrate);
-
- SETUP_PORT_ATTRIBUTE(invalid_dword_count);
- SETUP_PORT_ATTRIBUTE(running_disparity_error_count);
- SETUP_PORT_ATTRIBUTE(loss_of_dword_sync_count);
- SETUP_PORT_ATTRIBUTE(phy_reset_problem_count);
- SETUP_OPTIONAL_PORT_ATTRIBUTE_WRONLY(link_reset, phy_reset);
- SETUP_OPTIONAL_PORT_ATTRIBUTE_WRONLY(hard_reset, phy_reset);
+ SETUP_PHY_ATTRIBUTE(initiator_port_protocols);
+ SETUP_PHY_ATTRIBUTE(target_port_protocols);
+ SETUP_PHY_ATTRIBUTE(device_type);
+ SETUP_PHY_ATTRIBUTE(sas_address);
+ SETUP_PHY_ATTRIBUTE(phy_identifier);
+ //SETUP_PHY_ATTRIBUTE(port_identifier);
+ SETUP_PHY_ATTRIBUTE(negotiated_linkrate);
+ SETUP_PHY_ATTRIBUTE(minimum_linkrate_hw);
+ SETUP_PHY_ATTRIBUTE(minimum_linkrate);
+ SETUP_PHY_ATTRIBUTE(maximum_linkrate_hw);
+ SETUP_PHY_ATTRIBUTE(maximum_linkrate);
+
+ SETUP_PHY_ATTRIBUTE(invalid_dword_count);
+ SETUP_PHY_ATTRIBUTE(running_disparity_error_count);
+ SETUP_PHY_ATTRIBUTE(loss_of_dword_sync_count);
+ SETUP_PHY_ATTRIBUTE(phy_reset_problem_count);
+ SETUP_OPTIONAL_PHY_ATTRIBUTE_WRONLY(link_reset, phy_reset);
+ SETUP_OPTIONAL_PHY_ATTRIBUTE_WRONLY(hard_reset, phy_reset);
i->phy_attrs[count] = NULL;
+ count = 0;
+ SETUP_PORT_ATTRIBUTE(num_phys);
+ i->port_attrs[count] = NULL;
+
count = 0;
SETUP_RPORT_ATTRIBUTE(rphy_initiator_port_protocols);
SETUP_RPORT_ATTRIBUTE(rphy_target_port_protocols);
transport_container_unregister(&i->t.host_attrs);
transport_container_unregister(&i->phy_attr_cont);
+ transport_container_unregister(&i->port_attr_cont);
transport_container_unregister(&i->rphy_attr_cont);
transport_container_unregister(&i->end_dev_attr_cont);
transport_container_unregister(&i->expander_attr_cont);
error = transport_class_register(&sas_phy_class);
if (error)
goto out_unregister_transport;
- error = transport_class_register(&sas_rphy_class);
+ error = transport_class_register(&sas_port_class);
if (error)
goto out_unregister_phy;
+ error = transport_class_register(&sas_rphy_class);
+ if (error)
+ goto out_unregister_port;
error = transport_class_register(&sas_end_dev_class);
if (error)
goto out_unregister_rphy;
transport_class_unregister(&sas_end_dev_class);
out_unregister_rphy:
transport_class_unregister(&sas_rphy_class);
+ out_unregister_port:
+ transport_class_unregister(&sas_port_class);
out_unregister_phy:
transport_class_unregister(&sas_phy_class);
out_unregister_transport:
{
transport_class_unregister(&sas_host_class);
transport_class_unregister(&sas_phy_class);
+ transport_class_unregister(&sas_port_class);
transport_class_unregister(&sas_rphy_class);
transport_class_unregister(&sas_end_dev_class);
transport_class_unregister(&sas_expander_class);
int scsicam_bios_param(struct block_device *bdev, sector_t capacity, int *ip)
{
unsigned char *p;
+ u64 capacity64 = capacity; /* Suppress gcc warning */
int ret;
p = scsi_bios_ptable(bdev);
(unsigned int *)ip + 0, (unsigned int *)ip + 1);
kfree(p);
- if (ret == -1) {
+ if (ret == -1 && capacity64 < (1ULL << 32)) {
/* pick some standard mapping with at most 1024 cylinders,
and at most 62 sectors per track - this works up to
7905 MB */
return count;
}
+static ssize_t sd_store_allow_restart(struct class_device *cdev, const char *buf,
+ size_t count)
+{
+ struct scsi_disk *sdkp = to_scsi_disk(cdev);
+ struct scsi_device *sdp = sdkp->device;
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EACCES;
+
+ if (sdp->type != TYPE_DISK)
+ return -EINVAL;
+
+ sdp->allow_restart = simple_strtoul(buf, NULL, 10);
+
+ return count;
+}
+
static ssize_t sd_show_cache_type(struct class_device *cdev, char *buf)
{
struct scsi_disk *sdkp = to_scsi_disk(cdev);
return snprintf(buf, 20, "%u\n", sdkp->DPOFUA);
}
+static ssize_t sd_show_allow_restart(struct class_device *cdev, char *buf)
+{
+ struct scsi_disk *sdkp = to_scsi_disk(cdev);
+
+ return snprintf(buf, 40, "%d\n", sdkp->device->allow_restart);
+}
+
static struct class_device_attribute sd_disk_attrs[] = {
__ATTR(cache_type, S_IRUGO|S_IWUSR, sd_show_cache_type,
sd_store_cache_type),
__ATTR(FUA, S_IRUGO, sd_show_fua, NULL),
+ __ATTR(allow_restart, S_IRUGO|S_IWUSR, sd_show_allow_restart,
+ sd_store_allow_restart),
__ATTR_NULL,
};
static void sd_rw_intr(struct scsi_cmnd * SCpnt)
{
int result = SCpnt->result;
- int this_count = SCpnt->request_bufflen;
- int good_bytes = (result == 0 ? this_count : 0);
- sector_t block_sectors = 1;
- u64 first_err_block;
- sector_t error_sector;
+ unsigned int xfer_size = SCpnt->request_bufflen;
+ unsigned int good_bytes = result ? 0 : xfer_size;
+ u64 start_lba = SCpnt->request->sector;
+ u64 bad_lba;
struct scsi_sense_hdr sshdr;
int sense_valid = 0;
int sense_deferred = 0;
if (sense_valid)
sense_deferred = scsi_sense_is_deferred(&sshdr);
}
-
#ifdef CONFIG_SCSI_LOGGING
SCSI_LOG_HLCOMPLETE(1, printk("sd_rw_intr: %s: res=0x%x\n",
SCpnt->request->rq_disk->disk_name, result));
sshdr.sense_key, sshdr.asc, sshdr.ascq));
}
#endif
- /*
- Handle MEDIUM ERRORs that indicate partial success. Since this is a
- relatively rare error condition, no care is taken to avoid
- unnecessary additional work such as memcpy's that could be avoided.
- */
- if (driver_byte(result) != 0 &&
- sense_valid && !sense_deferred) {
- switch (sshdr.sense_key) {
- case MEDIUM_ERROR:
- if (!blk_fs_request(SCpnt->request))
- break;
- info_valid = scsi_get_sense_info_fld(
- SCpnt->sense_buffer, SCSI_SENSE_BUFFERSIZE,
- &first_err_block);
- /*
- * May want to warn and skip if following cast results
- * in actual truncation (if sector_t < 64 bits)
- */
- error_sector = (sector_t)first_err_block;
- if (SCpnt->request->bio != NULL)
- block_sectors = bio_sectors(SCpnt->request->bio);
- switch (SCpnt->device->sector_size) {
- case 1024:
- error_sector <<= 1;
- if (block_sectors < 2)
- block_sectors = 2;
- break;
- case 2048:
- error_sector <<= 2;
- if (block_sectors < 4)
- block_sectors = 4;
- break;
- case 4096:
- error_sector <<=3;
- if (block_sectors < 8)
- block_sectors = 8;
- break;
- case 256:
- error_sector >>= 1;
- break;
- default:
- break;
- }
+ if (driver_byte(result) != DRIVER_SENSE &&
+ (!sense_valid || sense_deferred))
+ goto out;
- error_sector &= ~(block_sectors - 1);
- good_bytes = (error_sector - SCpnt->request->sector) << 9;
- if (good_bytes < 0 || good_bytes >= this_count)
- good_bytes = 0;
+ switch (sshdr.sense_key) {
+ case HARDWARE_ERROR:
+ case MEDIUM_ERROR:
+ if (!blk_fs_request(SCpnt->request))
+ goto out;
+ info_valid = scsi_get_sense_info_fld(SCpnt->sense_buffer,
+ SCSI_SENSE_BUFFERSIZE,
+ &bad_lba);
+ if (!info_valid)
+ goto out;
+ if (xfer_size <= SCpnt->device->sector_size)
+ goto out;
+ switch (SCpnt->device->sector_size) {
+ case 256:
+ start_lba <<= 1;
break;
-
- case RECOVERED_ERROR: /* an error occurred, but it recovered */
- case NO_SENSE: /* LLDD got sense data */
- /*
- * Inform the user, but make sure that it's not treated
- * as a hard error.
- */
- scsi_print_sense("sd", SCpnt);
- SCpnt->result = 0;
- memset(SCpnt->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
- good_bytes = this_count;
+ case 512:
break;
-
- case ILLEGAL_REQUEST:
- if (SCpnt->device->use_10_for_rw &&
- (SCpnt->cmnd[0] == READ_10 ||
- SCpnt->cmnd[0] == WRITE_10))
- SCpnt->device->use_10_for_rw = 0;
- if (SCpnt->device->use_10_for_ms &&
- (SCpnt->cmnd[0] == MODE_SENSE_10 ||
- SCpnt->cmnd[0] == MODE_SELECT_10))
- SCpnt->device->use_10_for_ms = 0;
+ case 1024:
+ start_lba >>= 1;
+ break;
+ case 2048:
+ start_lba >>= 2;
+ break;
+ case 4096:
+ start_lba >>= 3;
break;
-
default:
+ /* Print a rate-limited warning here. */
+ goto out;
break;
}
+ /* This computation should always be done in terms of
+ * the resolution of the device's medium.
+ */
+ good_bytes = (bad_lba - start_lba)*SCpnt->device->sector_size;
+ break;
+ case RECOVERED_ERROR:
+ case NO_SENSE:
+ /* Inform the user, but make sure that it's not treated
+ * as a hard error.
+ */
+ scsi_print_sense("sd", SCpnt);
+ SCpnt->result = 0;
+ memset(SCpnt->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
+ good_bytes = xfer_size;
+ break;
+ case ILLEGAL_REQUEST:
+ if (SCpnt->device->use_10_for_rw &&
+ (SCpnt->cmnd[0] == READ_10 ||
+ SCpnt->cmnd[0] == WRITE_10))
+ SCpnt->device->use_10_for_rw = 0;
+ if (SCpnt->device->use_10_for_ms &&
+ (SCpnt->cmnd[0] == MODE_SENSE_10 ||
+ SCpnt->cmnd[0] == MODE_SELECT_10))
+ SCpnt->device->use_10_for_ms = 0;
+ break;
+ default:
+ break;
}
- /*
- * This calls the generic completion function, now that we know
- * how many actual sectors finished, and how many sectors we need
- * to say have failed.
- */
- scsi_io_completion(SCpnt, good_bytes, block_sectors << 9);
+ out:
+ scsi_io_completion(SCpnt, good_bytes);
}
static int media_not_present(struct scsi_disk *sdkp,
return 0;
hostno = instance->host_no;
- if (request_irq (irq, do_seagate_reconnect_intr, SA_INTERRUPT, (controller_type == SEAGATE) ? "seagate" : "tmc-8xx", instance)) {
+ if (request_irq (irq, do_seagate_reconnect_intr, IRQF_DISABLED, (controller_type == SEAGATE) ? "seagate" : "tmc-8xx", instance)) {
printk(KERN_ERR "scsi%d : unable to allocate IRQ%d\n", hostno, irq);
return 0;
}
Sg_device *sdp = NULL;
struct cdev * cdev = NULL;
int error, k;
+ unsigned long iflags;
disk = alloc_disk(1);
if (!disk) {
error = cdev_add(cdev, MKDEV(SCSI_GENERIC_MAJOR, k), 1);
if (error)
- goto out;
+ goto cdev_add_err;
sdp->cdev = cdev;
if (sg_sysfs_valid) {
return 0;
+cdev_add_err:
+ write_lock_irqsave(&sg_dev_arr_lock, iflags);
+ kfree(sg_dev_arr[k]);
+ sg_dev_arr[k] = NULL;
+ sg_nr_dev--;
+ write_unlock_irqrestore(&sg_dev_arr_lock, iflags);
+
out:
put_disk(disk);
if (cdev)
host->this_id = scsi_id;
host->base = base_addr;
host->irq = irq;
- if (request_irq(irq, NCR_700_intr, SA_SHIRQ, "sim710", host)) {
+ if (request_irq(irq, NCR_700_intr, IRQF_SHARED, "sim710", host)) {
printk(KERN_ERR "sim710: request_irq failed\n");
goto out_put_host;
}
* how many actual sectors finished, and how many sectors we need
* to say have failed.
*/
- scsi_io_completion(SCpnt, good_bytes, block_sectors << 9);
+ scsi_io_completion(SCpnt, good_bytes);
}
static int sr_init_command(struct scsi_cmnd * SCpnt)
tb->use_sg = max_sg;
tb->frp = (struct st_buf_fragment *)(&(tb->sg[0]) + max_sg);
- tb->in_use = 1;
tb->dma = need_dma;
tb->buffer_size = got;
/* The tape buffer descriptor. */
struct st_buffer {
- unsigned char in_use;
unsigned char dma; /* DMA-able buffer */
unsigned char do_dio; /* direct i/o set up? */
int buffer_size;
esp->esp_command_dvma = dvma_vtob((unsigned long)esp->esp_command);
esp->irq = 2;
- if (request_irq(esp->irq, esp_intr, SA_INTERRUPT,
+ if (request_irq(esp->irq, esp_intr, IRQF_DISABLED,
"SUN3X SCSI", esp->ehost)) {
esp_deallocate(esp);
return 0;
* If we synchronize the C code with SCRIPTS on interrupt,
* we do not want to share the INTR line at all.
*/
- if (request_irq(pdev->irq, sym53c8xx_intr, SA_SHIRQ, NAME53C8XX, np)) {
+ if (request_irq(pdev->irq, sym53c8xx_intr, IRQF_SHARED, NAME53C8XX, np)) {
printf_err("%s: request irq %d failure\n",
sym_name(np), pdev->irq);
goto attach_failed;
instance->irq = NCR5380_probe_irq(instance, T128_IRQS);
if (instance->irq != SCSI_IRQ_NONE)
- if (request_irq(instance->irq, t128_intr, SA_INTERRUPT, "t128", instance)) {
+ if (request_irq(instance->irq, t128_intr, IRQF_DISABLED, "t128", instance)) {
printk("scsi%d : IRQ%d not free, interrupts disabled\n",
instance->host_no, instance->irq);
instance->irq = SCSI_IRQ_NONE;
/* Reset Pending INT */
DC390_read8_(INT_Status, io_port);
- if (request_irq(pdev->irq, do_DC390_Interrupt, SA_SHIRQ,
+ if (request_irq(pdev->irq, do_DC390_Interrupt, IRQF_SHARED,
"tmscsim", pACB)) {
printk(KERN_ERR "DC390: register IRQ error!\n");
goto out_release_region;
/* Board detected, allocate its IRQ */
if (request_irq(irq, do_interrupt_handler,
- SA_INTERRUPT | ((subversion == ESA) ? SA_SHIRQ : 0),
+ IRQF_DISABLED | ((subversion == ESA) ? IRQF_SHARED : 0),
driver_name, (void *) &sha[j])) {
printk("%s: unable to allocate IRQ %u, detaching.\n", name, irq);
goto freelock;
return 0;
- if (request_irq(host->irq, wd7000_intr, SA_INTERRUPT, "wd7000", host)) {
+ if (request_irq(host->irq, wd7000_intr, IRQF_DISABLED, "wd7000", host)) {
printk("wd7000_init: can't get IRQ %d.\n", host->irq);
return (0);
}
if (!host)
goto fail;
- if (request_irq(dev->irq, ncr53c8xx_intr, SA_SHIRQ, "zalon", host)) {
+ if (request_irq(dev->irq, ncr53c8xx_intr, IRQF_SHARED, "zalon", host)) {
printk(KERN_ERR "%s: irq problem with %d, detaching\n ",
dev->dev.bus_id, dev->irq);
goto fail;
/*
* Configuration:
- * share_irqs - whether we pass SA_SHIRQ to request_irq(). This option
+ * share_irqs - whether we pass IRQF_SHARED to request_irq(). This option
* is unsafe when used on edge-triggered interrupts.
*/
static unsigned int share_irqs = SERIAL8250_SHARE_IRQS;
static int serial_link_irq_chain(struct uart_8250_port *up)
{
struct irq_info *i = irq_lists + up->port.irq;
- int ret, irq_flags = up->port.flags & UPF_SHARE_IRQ ? SA_SHIRQ : 0;
+ int ret, irq_flags = up->port.flags & UPF_SHARE_IRQ ? IRQF_SHARED : 0;
spin_lock_irq(&i->lock);
* and Keystone have one Diva chip with 3 UARTs. Some later machines have
* one Diva chip, but it has been expanded to 5 UARTs.
*/
-static int __devinit pci_hp_diva_init(struct pci_dev *dev)
+static int pci_hp_diva_init(struct pci_dev *dev)
{
int rc = 0;
/*
* Added for EKF Intel i960 serial boards
*/
-static int __devinit pci_inteli960ni_init(struct pci_dev *dev)
+static int pci_inteli960ni_init(struct pci_dev *dev)
{
unsigned long oldval;
* seems to be mainly needed on card using the PLX which also use I/O
* mapped memory.
*/
-static int __devinit pci_plx9050_init(struct pci_dev *dev)
+static int pci_plx9050_init(struct pci_dev *dev)
{
u8 irq_config;
void __iomem *p;
/* global control register offset for SBS PMC-OctalPro */
#define OCT_REG_CR_OFF 0x500
-static int __devinit sbs_init(struct pci_dev *dev)
+static int sbs_init(struct pci_dev *dev)
{
u8 __iomem *p;
{ 0, NULL }
};
-static int __devinit pci_timedia_init(struct pci_dev *dev)
+static int pci_timedia_init(struct pci_dev *dev)
{
unsigned short *ids;
int i, j;
return setup_port(priv, port, bar, offset, board->reg_shift);
}
-static int __devinit pci_xircom_init(struct pci_dev *dev)
+static int pci_xircom_init(struct pci_dev *dev)
{
msleep(100);
return 0;
}
-static int __devinit pci_netmos_init(struct pci_dev *dev)
+static int pci_netmos_init(struct pci_dev *dev)
{
/* subdevice 0x00PS means <P> parallel, <S> serial */
unsigned int num_serial = dev->subsystem_device & 0xf;
*/
static struct pci_serial_quirk pci_serial_quirks[] = {
/*
- * AFAVLAB cards.
+ * AFAVLAB cards - these may be called via parport_serial
* It is not clear whether this applies to all products.
*/
{
.exit = __devexit_p(sbs_exit),
},
/*
- * SIIG cards.
+ * SIIG cards - these may be called via parport_serial
*/
{
.vendor = PCI_VENDOR_ID_SIIG,
.setup = pci_default_setup,
},
/*
- * Netmos cards
+ * Netmos cards - these may be called via parport_serial
*/
{
.vendor = PCI_VENDOR_ID_NETMOS,
#endif
port.flags |= UPF_SKIP_TEST | UPF_BOOT_AUTOCONF;
+ if (pnp_irq_flags(dev, 0) & IORESOURCE_IRQ_SHAREABLE)
+ port.flags |= UPF_SHARE_IRQ;
port.uartclk = 1843200;
port.dev = &dev->dev;
/*
* Allocate the IRQ
*/
- retval = request_irq(port->irq, at91_interrupt, SA_SHIRQ, "at91_serial", port);
+ retval = request_irq(port->irq, at91_interrupt, IRQF_SHARED, "at91_serial", port);
if (retval) {
printk("at91_serial: at91_startup - Can't get irq\n");
return retval;
* Fixed DEF_TX value that caused the serial transmitter pin (txd) to go to 0 when
* closing the last filehandle, NASTY!.
* Added break generation, not tested though!
- * Use SA_SHIRQ when request_irq() for ser2 and ser3 (shared with) par0 and par1.
+ * Use IRQF_SHARED when request_irq() for ser2 and ser3 (shared with) par0 and par1.
* You can't use them at the same time (yet..), but you can hopefully switch
* between ser2/par0, ser3/par1 with the same kernel config.
* Replaced some magic constants with defines
/* Not needed in simulator. May only complicate stuff. */
/* hook the irq's for DMA channel 6 and 7, serial output and input, and some more... */
- if (request_irq(SERIAL_IRQ_NBR, ser_interrupt, SA_SHIRQ | SA_INTERRUPT, "serial ", NULL))
+ if (request_irq(SERIAL_IRQ_NBR, ser_interrupt, IRQF_SHARED | IRQF_DISABLED, "serial ", NULL))
panic("irq8");
#ifdef CONFIG_ETRAX_SERIAL_PORT0
#ifdef CONFIG_ETRAX_SERIAL_PORT0_DMA6_OUT
- if (request_irq(SER0_DMA_TX_IRQ_NBR, tr_interrupt, SA_INTERRUPT, "serial 0 dma tr", NULL))
+ if (request_irq(SER0_DMA_TX_IRQ_NBR, tr_interrupt, IRQF_DISABLED, "serial 0 dma tr", NULL))
panic("irq22");
#endif
#ifdef CONFIG_ETRAX_SERIAL_PORT0_DMA7_IN
- if (request_irq(SER0_DMA_RX_IRQ_NBR, rec_interrupt, SA_INTERRUPT, "serial 0 dma rec", NULL))
+ if (request_irq(SER0_DMA_RX_IRQ_NBR, rec_interrupt, IRQF_DISABLED, "serial 0 dma rec", NULL))
panic("irq23");
#endif
#endif
#ifdef CONFIG_ETRAX_SERIAL_PORT1
#ifdef CONFIG_ETRAX_SERIAL_PORT1_DMA8_OUT
- if (request_irq(SER1_DMA_TX_IRQ_NBR, tr_interrupt, SA_INTERRUPT, "serial 1 dma tr", NULL))
+ if (request_irq(SER1_DMA_TX_IRQ_NBR, tr_interrupt, IRQF_DISABLED, "serial 1 dma tr", NULL))
panic("irq24");
#endif
#ifdef CONFIG_ETRAX_SERIAL_PORT1_DMA9_IN
- if (request_irq(SER1_DMA_RX_IRQ_NBR, rec_interrupt, SA_INTERRUPT, "serial 1 dma rec", NULL))
+ if (request_irq(SER1_DMA_RX_IRQ_NBR, rec_interrupt, IRQF_DISABLED, "serial 1 dma rec", NULL))
panic("irq25");
#endif
#endif
#ifdef CONFIG_ETRAX_SERIAL_PORT2
/* DMA Shared with par0 (and SCSI0 and ATA) */
#ifdef CONFIG_ETRAX_SERIAL_PORT2_DMA2_OUT
- if (request_irq(SER2_DMA_TX_IRQ_NBR, tr_interrupt, SA_SHIRQ | SA_INTERRUPT, "serial 2 dma tr", NULL))
+ if (request_irq(SER2_DMA_TX_IRQ_NBR, tr_interrupt, IRQF_SHARED | IRQF_DISABLED, "serial 2 dma tr", NULL))
panic("irq18");
#endif
#ifdef CONFIG_ETRAX_SERIAL_PORT2_DMA3_IN
- if (request_irq(SER2_DMA_RX_IRQ_NBR, rec_interrupt, SA_SHIRQ | SA_INTERRUPT, "serial 2 dma rec", NULL))
+ if (request_irq(SER2_DMA_RX_IRQ_NBR, rec_interrupt, IRQF_SHARED | IRQF_DISABLED, "serial 2 dma rec", NULL))
panic("irq19");
#endif
#endif
#ifdef CONFIG_ETRAX_SERIAL_PORT3
/* DMA Shared with par1 (and SCSI1 and Extern DMA 0) */
#ifdef CONFIG_ETRAX_SERIAL_PORT3_DMA4_OUT
- if (request_irq(SER3_DMA_TX_IRQ_NBR, tr_interrupt, SA_SHIRQ | SA_INTERRUPT, "serial 3 dma tr", NULL))
+ if (request_irq(SER3_DMA_TX_IRQ_NBR, tr_interrupt, IRQF_SHARED | IRQF_DISABLED, "serial 3 dma tr", NULL))
panic("irq20");
#endif
#ifdef CONFIG_ETRAX_SERIAL_PORT3_DMA5_IN
- if (request_irq(SER3_DMA_RX_IRQ_NBR, rec_interrupt, SA_SHIRQ | SA_INTERRUPT, "serial 3 dma rec", NULL))
+ if (request_irq(SER3_DMA_RX_IRQ_NBR, rec_interrupt, IRQF_SHARED | IRQF_DISABLED, "serial 3 dma rec", NULL))
panic("irq21");
#endif
#endif
#ifdef CONFIG_ETRAX_SERIAL_FLUSH_DMA_FAST
- if (request_irq(TIMER1_IRQ_NBR, timeout_interrupt, SA_SHIRQ | SA_INTERRUPT,
+ if (request_irq(TIMER1_IRQ_NBR, timeout_interrupt, IRQF_SHARED | IRQF_DISABLED,
"fast serial dma timeout", NULL)) {
printk(KERN_CRIT "err: timer1 irq\n");
}
restore_flags(flags);
if (request_irq(dz_ports[0].port.irq, dz_interrupt,
- SA_INTERRUPT, "DZ", &dz_ports[0]))
+ IRQF_DISABLED, "DZ", &dz_ports[0]))
panic("Unable to register DZ interrupt");
ret = uart_register_driver(&dz_reg);
/* save off irq and request irq line */
if ( (retval = request_irq(dev->irq, icom_interrupt,
- SA_INTERRUPT | SA_SHIRQ, ICOM_DRIVER_NAME,
+ IRQF_DISABLED | IRQF_SHARED, ICOM_DRIVER_NAME,
(void *) icom_adapter))) {
goto probe_exit2;
}
if (retval) goto error_out2;
retval = request_irq(sport->rtsirq, imx_rtsint,
- SA_TRIGGER_FALLING | SA_TRIGGER_RISING,
+ IRQF_TRIGGER_FALLING | IRQF_TRIGGER_RISING,
DRIVER_NAME, sport);
if (retval) goto error_out3;
control->ic_soft = soft;
/* Hook up interrupt handler */
- if (!request_irq(idd->idd_pdev->irq, ioc4_intr, SA_SHIRQ,
+ if (!request_irq(idd->idd_pdev->irq, ioc4_intr, IRQF_SHARED,
"sgi-ioc4serial", soft)) {
control->ic_irq = idd->idd_pdev->irq;
} else {
}
rc = request_irq(brd->irq, brd->bd_ops->intr,
- SA_INTERRUPT|SA_SHIRQ, "JSM", brd);
+ IRQF_DISABLED|IRQF_SHARED, "JSM", brd);
if (rc) {
printk(KERN_WARNING "Failed to hook IRQ %d\n",brd->irq);
goto out_iounmap;
static int serial_link_irq_chain(struct uart_sio_port *up)
{
struct irq_info *i = irq_lists + up->port.irq;
- int ret, irq_flags = up->port.flags & UPF_SHARE_IRQ ? SA_SHIRQ : 0;
+ int ret, irq_flags = up->port.flags & UPF_SHARE_IRQ ? IRQF_SHARED : 0;
spin_lock_irq(&i->lock);
/* Clear mask, so no surprise interrupts. */
uartp[MCFUART_UIMR] = 0;
- if (request_irq(info->irq, mcfrs_interrupt, SA_INTERRUPT,
+ if (request_irq(info->irq, mcfrs_interrupt, IRQF_DISABLED,
"ColdFire UART", NULL)) {
printk("MCFRS: Unable to attach ColdFire UART %d interrupt "
"vector=%d\n", info->line, info->irq);
/* Request IRQ */
ret = request_irq(port->irq, mpc52xx_uart_int,
- SA_INTERRUPT | SA_SAMPLE_RANDOM, "mpc52xx_psc_uart", port);
+ IRQF_DISABLED | IRQF_SAMPLE_RANDOM, "mpc52xx_psc_uart", port);
if (ret)
return ret;
spin_lock_init(&port->lock);
port->uartclk = __res.bi_ipbfreq / 2; /* Look at CTLR doc */
- port->fifosize = 255; /* Should be 512 ! But it can't be */
- /* stored in a unsigned char */
+ port->fifosize = 512;
port->iotype = UPIO_MEM;
port->flags = UPF_BOOT_AUTOCONF |
( uart_console(port) ? 0 : UPF_IOREMAP );
/* If irq's are shared, need to set flag */
if (mpsc_ports[0].port.irq == mpsc_ports[1].port.irq)
- flag = SA_SHIRQ;
+ flag = IRQF_SHARED;
if (request_irq(pi->port.irq, mpsc_sdma_intr, flag,
"mpsc-sdma", pi))
}
pmz_get_port_A(uap)->flags |= PMACZILOG_FLAG_IS_IRQ_ON;
- if (request_irq(uap->port.irq, pmz_interrupt, SA_SHIRQ, "PowerMac Zilog", uap)) {
+ if (request_irq(uap->port.irq, pmz_interrupt, IRQF_SHARED, "PowerMac Zilog", uap)) {
dev_err(&uap->dev->ofdev.dev,
"Unable to register zs interrupt handler.\n");
pmz_set_scc_power(uap, 0);
uap->flags &= ~PMACZILOG_FLAG_HAS_DMA;
goto no_dma;
}
- uap->tx_dma_irq = np->intrs[1].line;
- uap->rx_dma_irq = np->intrs[2].line;
+ uap->tx_dma_irq = irq_of_parse_and_map(np, 1);
+ uap->rx_dma_irq = irq_of_parse_and_map(np, 2);
}
no_dma:
* Init remaining bits of "port" structure
*/
uap->port.iotype = UPIO_MEM;
- uap->port.irq = np->intrs[0].line;
+ uap->port.irq = irq_of_parse_and_map(np, 0);
uap->port.uartclk = ZS_CLOCK;
uap->port.fifosize = 1;
uap->port.ops = &pmz_pops;
*/
static DEFINE_MUTEX(port_mutex);
+/*
+ * lockdep: port->lock is initialized in two places, but we
+ * want only one lock-class:
+ */
+static struct lock_class_key port_lock_key;
+
#define HIGH_BITS_OFFSET ((sizeof(long)-sizeof(int))*8)
#define uart_users(state) ((state)->count + ((state)->info ? (state)->info->blocked_open : 0))
(new_serial.baud_base != port->uartclk / 16) ||
(close_delay != state->close_delay) ||
(closing_wait != state->closing_wait) ||
- (new_serial.xmit_fifo_size != port->fifosize) ||
+ (new_serial.xmit_fifo_size &&
+ new_serial.xmit_fifo_size != port->fifosize) ||
(((new_flags ^ old_flags) & ~UPF_USR_MASK) != 0))
goto exit;
port->flags = ((port->flags & ~UPF_USR_MASK) |
port->custom_divisor = new_serial.custom_divisor;
state->close_delay = close_delay;
state->closing_wait = closing_wait;
- port->fifosize = new_serial.xmit_fifo_size;
+ if (new_serial.xmit_fifo_size)
+ port->fifosize = new_serial.xmit_fifo_size;
if (state->info->tty)
state->info->tty->low_latency =
(port->flags & UPF_LOW_LATENCY) ? 1 : 0;
* early.
*/
spin_lock_init(&port->lock);
+ lockdep_set_class(&port->lock, &port_lock_key);
memset(&termios, 0, sizeof(struct termios));
* If this port is a console, then the spinlock is already
* initialised.
*/
- if (!(uart_console(port) && (port->cons->flags & CON_ENABLED)))
+ if (!(uart_console(port) && (port->cons->flags & CON_ENABLED))) {
spin_lock_init(&port->lock);
+ lockdep_set_class(&port->lock, &port_lock_key);
+ }
uart_configure_port(drv, state, port);
sio_out(up, TXX9_SIDISR, 0);
retval = request_irq(up->port.irq, serial_txx9_interrupt,
- SA_SHIRQ, "serial_txx9", up);
+ IRQF_SHARED, "serial_txx9", up);
if (retval)
return retval;
printk(KERN_ERR "sci: Cannot allocate irq.(IRQ=0)\n");
return -ENODEV;
}
- if (request_irq(port->irqs[0], sci_mpxed_interrupt, SA_INTERRUPT,
+ if (request_irq(port->irqs[0], sci_mpxed_interrupt, IRQF_DISABLED,
"sci", port)) {
printk(KERN_ERR "sci: Cannot allocate irq.\n");
return -ENODEV;
for (i = 0; i < ARRAY_SIZE(handlers); i++) {
if (!port->irqs[i])
continue;
- if (request_irq(port->irqs[i], handlers[i], SA_INTERRUPT,
+ if (request_irq(port->irqs[i], handlers[i], IRQF_DISABLED,
desc[i], port)) {
printk(KERN_ERR "sci: Cannot allocate irq.\n");
return -ENODEV;
static int sn_sal_connect_interrupt(struct sn_cons_port *port)
{
if (request_irq(SGI_UART_VECTOR, sn_sal_interrupt,
- SA_INTERRUPT | SA_SHIRQ,
+ IRQF_DISABLED | IRQF_SHARED,
"SAL console driver", port) >= 0) {
return SGI_UART_VECTOR;
}
int err;
err = request_irq(up->port.irq, sunsab_interrupt,
- SA_SHIRQ, "sab", up);
+ IRQF_SHARED, "sab", up);
if (err) {
of_iounmap(up->port.membase,
sizeof(union sab82532_async_regs));
if (up->su_type != SU_PORT_PORT) {
retval = request_irq(up->port.irq, sunsu_kbd_ms_interrupt,
- SA_SHIRQ, su_typev[up->su_type], up);
+ IRQF_SHARED, su_typev[up->su_type], up);
} else {
retval = request_irq(up->port.irq, sunsu_serial_interrupt,
- SA_SHIRQ, su_typev[up->su_type], up);
+ IRQF_SHARED, su_typev[up->su_type], up);
}
if (retval) {
printk("su: Cannot register IRQ %d\n", up->port.irq);
if (zilog_irq == -1) {
zilog_irq = op->irqs[0];
- err = request_irq(zilog_irq, sunzilog_interrupt, SA_SHIRQ,
+ err = request_irq(zilog_irq, sunzilog_interrupt, IRQF_SHARED,
"zs", sunzilog_irq_chain);
if (err) {
of_iounmap(rp, sizeof(struct zilog_layout));
/* Alloc RX irq. */
err = request_irq (V850E_UART_RX_IRQ (port->line), v850e_uart_rx_irq,
- SA_INTERRUPT, "v850e_uart", port);
+ IRQF_DISABLED, "v850e_uart", port);
if (err)
return err;
/* Alloc TX irq. */
err = request_irq (V850E_UART_TX_IRQ (port->line), v850e_uart_tx_irq,
- SA_INTERRUPT, "v850e_uart", port);
+ IRQF_DISABLED, "v850e_uart", port);
if (err) {
free_irq (V850E_UART_RX_IRQ (port->line), port);
return err;
writel(~0, &idd->vma->eisr);
idd->dual_irq = 1;
- if (!request_irq(pdev->irq, ioc3_intr_eth, SA_SHIRQ,
+ if (!request_irq(pdev->irq, ioc3_intr_eth, IRQF_SHARED,
"ioc3-eth", (void *)idd)) {
idd->irq_eth = pdev->irq;
} else {
"%s : request_irq fails for IRQ 0x%x\n ",
__FUNCTION__, pdev->irq);
}
- if (!request_irq(pdev->irq+2, ioc3_intr_io, SA_SHIRQ,
+ if (!request_irq(pdev->irq+2, ioc3_intr_io, IRQF_SHARED,
"ioc3-io", (void *)idd)) {
idd->irq_io = pdev->irq+2;
} else {
__FUNCTION__, pdev->irq+2);
}
} else {
- if (!request_irq(pdev->irq, ioc3_intr_io, SA_SHIRQ,
+ if (!request_irq(pdev->irq, ioc3_intr_io, IRQF_SHARED,
"ioc3", (void *)idd)) {
idd->irq_io = pdev->irq;
} else {
*/
int spi_sync(struct spi_device *spi, struct spi_message *message)
{
- DECLARE_COMPLETION(done);
+ DECLARE_COMPLETION_ONSTACK(done);
int status;
message->complete = spi_complete;
zs_soft[channel].clk_divisor = 16;
zs_soft[channel].zs_baud = get_zsbaud(&zs_soft[channel]);
- if (request_irq(zs_soft[channel].irq, rs_interrupt, SA_SHIRQ,
+ if (request_irq(zs_soft[channel].irq, rs_interrupt, IRQF_SHARED,
"scc", &zs_soft[channel]))
printk(KERN_ERR "decserial: can't get irq %d\n",
zs_soft[channel].irq);
pci_set_master (dev);
- retval = usb_add_hcd (hcd, dev->irq, SA_SHIRQ);
+ retval = usb_add_hcd (hcd, dev->irq, IRQF_SHARED);
if (retval != 0)
goto err4;
return retval;
if (!root)
return;
- mutex_lock(&root->d_inode->i_mutex);
+ mutex_lock_nested(&root->d_inode->i_mutex, I_MUTEX_PARENT);
list_for_each_entry(bus, &root->d_subdirs, d_u.d_child) {
if (bus->d_inode) {
if (!parent || !parent->d_inode)
return;
- mutex_lock(&parent->d_inode->i_mutex);
+ mutex_lock_nested(&parent->d_inode->i_mutex, I_MUTEX_PARENT);
if (usbfs_positive(dentry)) {
if (dentry->d_inode) {
if (S_ISDIR(dentry->d_inode->i_mode))
pullup(udc, 0);
/* request UDC and maybe VBUS irqs */
- if (request_irq(AT91_ID_UDP, at91_udc_irq, SA_INTERRUPT, driver_name, udc)) {
+ if (request_irq(AT91_ID_UDP, at91_udc_irq, IRQF_DISABLED, driver_name, udc)) {
DBG("request irq %d failed\n", AT91_ID_UDP);
retval = -EBUSY;
goto fail1;
}
if (udc->board.vbus_pin > 0) {
- if (request_irq(udc->board.vbus_pin, at91_vbus_irq, SA_INTERRUPT, driver_name, udc)) {
+ if (request_irq(udc->board.vbus_pin, at91_vbus_irq, IRQF_DISABLED, driver_name, udc)) {
DBG("request vbus irq %d failed\n", udc->board.vbus_pin);
free_irq(AT91_ID_UDP, udc);
retval = -EBUSY;
/* init to known state, then setup irqs */
udc_reset(dev);
udc_reinit (dev);
- if (request_irq(pdev->irq, goku_irq, SA_SHIRQ/*|SA_SAMPLE_RANDOM*/,
+ if (request_irq(pdev->irq, goku_irq, IRQF_SHARED/*|IRQF_SAMPLE_RANDOM*/,
driver_name, dev) != 0) {
DBG(dev, "request interrupt %d failed\n", pdev->irq);
retval = -EBUSY;
/* irq setup after old hardware state is cleaned up */
retval =
- request_irq(IRQ_USBINTR, lh7a40x_udc_irq, SA_INTERRUPT, driver_name,
+ request_irq(IRQ_USBINTR, lh7a40x_udc_irq, IRQF_DISABLED, driver_name,
dev);
if (retval != 0) {
DEBUG(KERN_ERR "%s: can't get irq %i, err %d\n", driver_name,
static struct platform_driver udc_driver = {
.probe = lh7a40x_udc_probe,
- .remove = lh7a40x_udc_remove
+ .remove = lh7a40x_udc_remove,
/* FIXME power management support */
/* .suspend = ... disable UDC */
/* .resume = ... re-enable UDC */
goto done;
}
- if (request_irq (pdev->irq, net2280_irq, SA_SHIRQ, driver_name, dev)
+ if (request_irq (pdev->irq, net2280_irq, IRQF_SHARED, driver_name, dev)
!= 0) {
ERROR (dev, "request interrupt %d failed\n", pdev->irq);
retval = -EBUSY;
struct omap_ep *ep = data;
/* if ch_status & OMAP_DMA_DROP_IRQ ... */
- /* if ch_status & OMAP_DMA_TOUT_IRQ ... */
+ /* if ch_status & OMAP1_DMA_TOUT_IRQ ... */
ERR("%s dma error, lch %d status %02x\n", ep->ep.name, lch, ch_status);
/* complete current transfer ... */
/* USB general purpose IRQ: ep0, state changes, dma, etc */
status = request_irq(pdev->resource[1].start, omap_udc_irq,
- SA_SAMPLE_RANDOM, driver_name, udc);
+ IRQF_SAMPLE_RANDOM, driver_name, udc);
if (status != 0) {
ERR( "can't get irq %ld, err %d\n",
pdev->resource[1].start, status);
/* USB "non-iso" IRQ (PIO for all but ep0) */
status = request_irq(pdev->resource[2].start, omap_udc_pio_irq,
- SA_SAMPLE_RANDOM, "omap_udc pio", udc);
+ IRQF_SAMPLE_RANDOM, "omap_udc pio", udc);
if (status != 0) {
ERR( "can't get irq %ld, err %d\n",
pdev->resource[2].start, status);
}
#ifdef USE_ISO
status = request_irq(pdev->resource[3].start, omap_udc_iso_irq,
- SA_INTERRUPT, "omap_udc iso", udc);
+ IRQF_DISABLED, "omap_udc iso", udc);
if (status != 0) {
ERR("can't get irq %ld, err %d\n",
pdev->resource[3].start, status);
/* irq setup after old hardware state is cleaned up */
retval = request_irq(IRQ_USB, pxa2xx_udc_irq,
- SA_INTERRUPT, driver_name, dev);
+ IRQF_DISABLED, driver_name, dev);
if (retval != 0) {
printk(KERN_ERR "%s: can't get irq %i, err %d\n",
driver_name, IRQ_USB, retval);
if (machine_is_lubbock()) {
retval = request_irq(LUBBOCK_USB_DISC_IRQ,
lubbock_vbus_irq,
- SA_INTERRUPT | SA_SAMPLE_RANDOM,
+ IRQF_DISABLED | IRQF_SAMPLE_RANDOM,
driver_name, dev);
if (retval != 0) {
printk(KERN_ERR "%s: can't get irq %i, err %d\n",
}
retval = request_irq(LUBBOCK_USB_IRQ,
lubbock_vbus_irq,
- SA_INTERRUPT | SA_SAMPLE_RANDOM,
+ IRQF_DISABLED | IRQF_SAMPLE_RANDOM,
driver_name, dev);
if (retval != 0) {
printk(KERN_ERR "%s: can't get irq %i, err %d\n",
/* ehci_hcd_init(hcd_to_ehci(hcd)); */
retval =
- usb_add_hcd(hcd, dev->resource[1].start, SA_INTERRUPT | SA_SHIRQ);
+ usb_add_hcd(hcd, dev->resource[1].start, IRQF_DISABLED | IRQF_SHARED);
if (retval == 0)
return retval;
temp = in_le32(hcd->regs + 0x1a8);
out_le32(hcd->regs + 0x1a8, temp | 0x3);
- retval = usb_add_hcd(hcd, irq, SA_SHIRQ);
+ retval = usb_add_hcd(hcd, irq, IRQF_SHARED);
if (retval != 0)
goto err4;
return retval;
goto err6;
}
- ret = usb_add_hcd(hcd, irq, SA_INTERRUPT);
+ ret = usb_add_hcd(hcd, irq, IRQF_DISABLED);
if (ret)
goto err6;
at91_start_hc(pdev);
ohci_hcd_init(hcd_to_ohci(hcd));
- retval = usb_add_hcd(hcd, pdev->resource[1].start, SA_INTERRUPT);
+ retval = usb_add_hcd(hcd, pdev->resource[1].start, IRQF_DISABLED);
if (retval == 0)
return retval;
au1xxx_start_ohc(dev);
ohci_hcd_init(hcd_to_ohci(hcd));
- retval = usb_add_hcd(hcd, dev->resource[1].start, SA_INTERRUPT | SA_SHIRQ);
+ retval = usb_add_hcd(hcd, dev->resource[1].start, IRQF_DISABLED | IRQF_SHARED);
if (retval == 0)
return retval;
lh7a404_start_hc(dev);
ohci_hcd_init(hcd_to_ohci(hcd));
- retval = usb_add_hcd(hcd, dev->resource[1].start, SA_INTERRUPT);
+ retval = usb_add_hcd(hcd, dev->resource[1].start, IRQF_DISABLED);
if (retval == 0)
return retval;
* This file is licenced under the GPL.
*/
-#include <linux/signal.h> /* SA_INTERRUPT */
+#include <linux/signal.h> /* IRQF_DISABLED */
#include <linux/jiffies.h>
#include <linux/platform_device.h>
#include <linux/clk.h>
retval = -ENXIO;
goto err2;
}
- retval = usb_add_hcd(hcd, irq, SA_INTERRUPT);
+ retval = usb_add_hcd(hcd, irq, IRQF_DISABLED);
if (retval == 0)
return retval;
ohci->flags |= OHCI_BIG_ENDIAN;
ohci_hcd_init(ohci);
- retval = usb_add_hcd(hcd, irq, SA_INTERRUPT);
+ retval = usb_add_hcd(hcd, irq, IRQF_DISABLED);
if (retval == 0)
return retval;
ohci_hcd_init(hcd_to_ohci(hcd));
- retval = usb_add_hcd(hcd, pdev->resource[1].start, SA_INTERRUPT);
+ retval = usb_add_hcd(hcd, pdev->resource[1].start, IRQF_DISABLED);
if (retval == 0)
return retval;
ohci_hcd_init(hcd_to_ohci(hcd));
- retval = usb_add_hcd(hcd, dev->resource[1].start, SA_INTERRUPT);
+ retval = usb_add_hcd(hcd, dev->resource[1].start, IRQF_DISABLED);
if (retval != 0)
goto err_ioremap;
sa1111_start_hc(dev);
ohci_hcd_init(hcd_to_ohci(hcd));
- retval = usb_add_hcd(hcd, dev->irq[1], SA_INTERRUPT);
+ retval = usb_add_hcd(hcd, dev->irq[1], IRQF_DISABLED);
if (retval == 0)
return retval;
* was on a system with single edge triggering, so most sorts of
* triggering arrangement should work.
*/
- retval = usb_add_hcd(hcd, irq, SA_INTERRUPT | SA_SHIRQ);
+ retval = usb_add_hcd(hcd, irq, IRQF_DISABLED | IRQF_SHARED);
if (retval != 0)
goto err6;
Turn on debugging messages. Note that you can set/unset at run time
through sysfs
+config FB_PNX4008_DUM
+ tristate "Display Update Module support on Philips PNX4008 board"
+ depends on FB && ARCH_PNX4008
+ ---help---
+ Say Y here to enable support for PNX4008 Display Update Module (DUM)
+
+config FB_PNX4008_DUM_RGB
+ tristate "RGB Framebuffer support on Philips PNX4008 board"
+ depends on FB_PNX4008_DUM
+ select FB_CFB_FILLRECT
+ select FB_CFB_COPYAREA
+ select FB_CFB_IMAGEBLIT
+ ---help---
+ Say Y here to enable support for PNX4008 RGB Framebuffer
+
config FB_VIRTUAL
tristate "Virtual Frame Buffer support (ONLY FOR TESTING!)"
depends on FB
obj-$(CONFIG_FB_S1D13XXX) += s1d13xxxfb.o
obj-$(CONFIG_FB_IMX) += imxfb.o
obj-$(CONFIG_FB_S3C2410) += s3c2410fb.o
+obj-$(CONFIG_FB_PNX4008_DUM) += pnx4008/
+obj-$(CONFIG_FB_PNX4008_DUM_RGB) += pnx4008/
# Platform or fallback drivers go here
obj-$(CONFIG_FB_VESA) += vesafb.o
platform_set_drvdata(dev, info);
if (irq) {
par->irq = irq;
- if (request_irq(par->irq, &arcfb_interrupt, SA_SHIRQ,
+ if (request_irq(par->irq, &arcfb_interrupt, IRQF_SHARED,
"arcfb", info)) {
printk(KERN_INFO
"arcfb: Failed req IRQ %d\n", par->irq);
u32 int_cntl;
if (!test_and_set_bit(0, &par->irq_flags)) {
- if (request_irq(par->irq, aty_irq, SA_SHIRQ, "atyfb", par)) {
+ if (request_irq(par->irq, aty_irq, IRQF_SHARED, "atyfb", par)) {
clear_bit(0, &par->irq_flags);
return -EINVAL;
}
/* Now hook interrupt too */
if ((ret = request_irq(AU1200_LCD_INT, au1200fb_handle_irq,
- SA_INTERRUPT | SA_SHIRQ, "lcd", (void *)dev)) < 0) {
+ IRQF_DISABLED | IRQF_SHARED, "lcd", (void *)dev)) < 0) {
print_err("fail to request interrupt line %d (err: %d)",
AU1200_LCD_INT, ret);
goto failed;
if (!test_and_set_bit(0, &ACCESS_FBINFO(irq_flags))) {
if (request_irq(ACCESS_FBINFO(pcidev)->irq, matrox_irq,
- SA_SHIRQ, "matroxfb", MINFO)) {
+ IRQF_SHARED, "matroxfb", MINFO)) {
clear_bit(0, &ACCESS_FBINFO(irq_flags));
return -EINVAL;
}
u_int transp, struct fb_info *info)
{
struct offb_par *par = (struct offb_par *) info->par;
+ int i, depth;
+ u32 *pal = info->pseudo_palette;
- if (!par->cmap_adr || regno > 255)
+ depth = info->var.bits_per_pixel;
+ if (depth == 16)
+ depth = (info->var.green.length == 5) ? 15 : 16;
+
+ if (regno > 255 ||
+ (depth == 16 && regno > 63) ||
+ (depth == 15 && regno > 31))
return 1;
+ if (regno < 16) {
+ switch (depth) {
+ case 15:
+ pal[regno] = (regno << 10) | (regno << 5) | regno;
+ break;
+ case 16:
+ pal[regno] = (regno << 11) | (regno << 5) | regno;
+ break;
+ case 24:
+ pal[regno] = (regno << 16) | (regno << 8) | regno;
+ break;
+ case 32:
+ i = (regno << 8) | regno;
+ pal[regno] = (i << 16) | i;
+ break;
+ }
+ }
+
red >>= 8;
green >>= 8;
blue >>= 8;
+ if (!par->cmap_adr)
+ return 0;
+
switch (par->cmap_type) {
case cmap_m64:
writeb(regno, par->cmap_adr);
break;
}
- if (regno < 16)
- switch (info->var.bits_per_pixel) {
- case 16:
- ((u16 *) (info->pseudo_palette))[regno] =
- (regno << 10) | (regno << 5) | regno;
- break;
- case 32:
- {
- int i = (regno << 8) | regno;
- ((u32 *) (info->pseudo_palette))[regno] =
- (i << 16) | i;
- break;
- }
- }
return 0;
}
{
struct device_node *dp = NULL, *boot_disp = NULL;
-#if defined(CONFIG_BOOTX_TEXT) && defined(CONFIG_PPC32)
- struct device_node *macos_display = NULL;
-#endif
if (fb_get_options("offb", NULL))
return -ENODEV;
-#if defined(CONFIG_BOOTX_TEXT) && defined(CONFIG_PPC32)
- /* If we're booted from BootX... */
- if (boot_infos != 0) {
- unsigned long addr =
- (unsigned long) boot_infos->dispDeviceBase;
- u32 *addrp;
- u64 daddr, dsize;
- unsigned int flags;
-
- /* find the device node corresponding to the macos display */
- while ((dp = of_find_node_by_type(dp, "display"))) {
- int i;
-
- /*
- * Look for an AAPL,address property first.
- */
- unsigned int na;
- unsigned int *ap =
- (unsigned int *)get_property(dp, "AAPL,address",
- &na);
- if (ap != 0) {
- for (na /= sizeof(unsigned int); na > 0;
- --na, ++ap)
- if (*ap <= addr &&
- addr < *ap + 0x1000000) {
- macos_display = dp;
- goto foundit;
- }
- }
-
- /*
- * See if the display address is in one of the address
- * ranges for this display.
- */
- i = 0;
- for (;;) {
- addrp = of_get_address(dp, i++, &dsize, &flags);
- if (addrp == NULL)
- break;
- if (!(flags & IORESOURCE_MEM))
- continue;
- daddr = of_translate_address(dp, addrp);
- if (daddr == OF_BAD_ADDR)
- continue;
- if (daddr <= addr && addr < (daddr + dsize)) {
- macos_display = dp;
- goto foundit;
- }
- }
- foundit:
- if (macos_display) {
- printk(KERN_INFO "MacOS display is %s\n",
- dp->full_name);
- break;
- }
- }
-
- /* initialize it */
- offb_init_fb(macos_display ? macos_display->
- name : "MacOS display",
- macos_display ? macos_display->
- full_name : "MacOS display",
- boot_infos->dispDeviceRect[2],
- boot_infos->dispDeviceRect[3],
- boot_infos->dispDeviceDepth,
- boot_infos->dispDeviceRowBytes, addr, NULL);
- }
-#endif /* defined(CONFIG_BOOTX_TEXT) && defined(CONFIG_PPC32) */
-
for (dp = NULL; (dp = of_find_node_by_type(dp, "display"));) {
if (get_property(dp, "linux,opened", NULL) &&
get_property(dp, "linux,boot-display", NULL)) {
static void __init offb_init_nodriver(struct device_node *dp)
{
- int *pp, i;
unsigned int len;
- int width = 640, height = 480, depth = 8, pitch;
- unsigned int flags, rsize, *up;
- u64 address = OF_BAD_ADDR;
- u32 *addrp;
+ int i, width = 640, height = 480, depth = 8, pitch = 640;
+ unsigned int flags, rsize, addr_prop = 0;
+ unsigned long max_size = 0;
+ u64 rstart, address = OF_BAD_ADDR;
+ u32 *pp, *addrp, *up;
u64 asize;
- if ((pp = (int *) get_property(dp, "depth", &len)) != NULL
- && len == sizeof(int))
+ pp = (u32 *)get_property(dp, "linux,bootx-depth", &len);
+ if (pp == NULL)
+ pp = (u32 *)get_property(dp, "depth", &len);
+ if (pp && len == sizeof(u32))
depth = *pp;
- if ((pp = (int *) get_property(dp, "width", &len)) != NULL
- && len == sizeof(int))
+
+ pp = (u32 *)get_property(dp, "linux,bootx-width", &len);
+ if (pp == NULL)
+ pp = (u32 *)get_property(dp, "width", &len);
+ if (pp && len == sizeof(u32))
width = *pp;
- if ((pp = (int *) get_property(dp, "height", &len)) != NULL
- && len == sizeof(int))
+
+ pp = (u32 *)get_property(dp, "linux,bootx-height", &len);
+ if (pp == NULL)
+ pp = (u32 *)get_property(dp, "height", &len);
+ if (pp && len == sizeof(u32))
height = *pp;
- if ((pp = (int *) get_property(dp, "linebytes", &len)) != NULL
- && len == sizeof(int)) {
+
+ pp = (u32 *)get_property(dp, "linux,bootx-linebytes", &len);
+ if (pp == NULL)
+ pp = (u32 *)get_property(dp, "linebytes", &len);
+ if (pp && len == sizeof(u32))
pitch = *pp;
- if (pitch == 1)
- pitch = 0x1000;
- } else
- pitch = width;
-
- rsize = (unsigned long)pitch * (unsigned long)height *
- (unsigned long)(depth / 8);
-
- /* Try to match device to a PCI device in order to get a properly
- * translated address rather then trying to decode the open firmware
- * stuff in various incorrect ways
- */
-#ifdef CONFIG_PCI
- /* First try to locate the PCI device if any */
- {
- struct pci_dev *pdev = NULL;
-
- for_each_pci_dev(pdev) {
- if (dp == pci_device_to_OF_node(pdev))
- break;
- }
- if (pdev) {
- for (i = 0; i < 6 && address == OF_BAD_ADDR; i++) {
- if ((pci_resource_flags(pdev, i) &
- IORESOURCE_MEM) &&
- (pci_resource_len(pdev, i) >= rsize))
- address = pci_resource_start(pdev, i);
- }
- pci_dev_put(pdev);
- }
- }
-#endif /* CONFIG_PCI */
-
- /* This one is dodgy, we may drop it ... */
- if (address == OF_BAD_ADDR &&
- (up = (unsigned *) get_property(dp, "address", &len)) != NULL &&
- len == sizeof(unsigned int))
- address = (u64) * up;
-
- if (address == OF_BAD_ADDR) {
- for (i = 0; (addrp = of_get_address(dp, i, &asize, &flags))
- != NULL; i++) {
- if (!(flags & IORESOURCE_MEM))
- continue;
- if (asize >= pitch * height * depth / 8)
- break;
- }
- if (addrp == NULL) {
- printk(KERN_ERR
- "no framebuffer address found for %s\n",
- dp->full_name);
- return;
- }
- address = of_translate_address(dp, addrp);
- if (address == OF_BAD_ADDR) {
- printk(KERN_ERR
- "can't translate framebuffer address for %s\n",
- dp->full_name);
- return;
+ else
+ pitch = width * ((depth + 7) / 8);
+
+ rsize = (unsigned long)pitch * (unsigned long)height;
+
+ /* Ok, now we try to figure out the address of the framebuffer.
+ *
+ * Unfortunately, Open Firmware doesn't provide a standard way to do
+ * so. All we can do is a dodgy heuristic that happens to work in
+ * practice. On most machines, the "address" property contains what
+ * we need, though not on Matrox cards found in IBM machines. What I've
+ * found to give good results is to go through the PCI ranges and pick
+ * one that is both big enough and, if possible, encloses the "address"
+ * property. If none matches, we pick the biggest
+ */
+ up = (u32 *)get_property(dp, "linux,bootx-addr", &len);
+ if (up == NULL)
+ up = (u32 *)get_property(dp, "address", &len);
+ if (up && len == sizeof(u32))
+ addr_prop = *up;
+
+ for (i = 0; (addrp = of_get_address(dp, i, &asize, &flags))
+ != NULL; i++) {
+ int match_addrp = 0;
+
+ if (!(flags & IORESOURCE_MEM))
+ continue;
+ if (asize < rsize)
+ continue;
+ rstart = of_translate_address(dp, addrp);
+ if (rstart == OF_BAD_ADDR)
+ continue;
+ if (addr_prop && (rstart <= addr_prop) &&
+ ((rstart + asize) >= (addr_prop + rsize)))
+ match_addrp = 1;
+ if (match_addrp) {
+ address = addr_prop;
+ break;
}
+ if (rsize > max_size) {
+ max_size = rsize;
+ address = OF_BAD_ADDR;
+ }
+ if (address == OF_BAD_ADDR)
+ address = rstart;
+ }
+ if (address == OF_BAD_ADDR && addr_prop)
+ address = (u64)addr_prop;
+ if (address != OF_BAD_ADDR) {
/* kludge for valkyrie */
if (strcmp(dp->name, "valkyrie") == 0)
address += 0x1000;
+ offb_init_fb(dp->name, dp->full_name, width, height, depth,
+ pitch, address, dp);
}
- offb_init_fb(dp->name, dp->full_name, width, height, depth,
- pitch, address, dp);
-
}
static void __init offb_init_fb(const char *name, const char *full_name,
int pitch, unsigned long address,
struct device_node *dp)
{
- unsigned long res_size = pitch * height * depth / 8;
+ unsigned long res_size = pitch * height * (depth + 7) / 8;
struct offb_par *par = &default_par;
unsigned long res_start = address;
struct fb_fix_screeninfo *fix;
printk(KERN_INFO
"Using unsupported %dx%d %s at %lx, depth=%d, pitch=%d\n",
width, height, name, address, depth, pitch);
- if (depth != 8 && depth != 16 && depth != 32) {
+ if (depth != 8 && depth != 15 && depth != 16 && depth != 32) {
printk(KERN_ERR "%s: can't use depth = %d\n", full_name,
depth);
release_mem_region(res_start, res_size);
: */ FB_VISUAL_TRUECOLOR;
var->xoffset = var->yoffset = 0;
- var->bits_per_pixel = depth;
switch (depth) {
case 8:
var->bits_per_pixel = 8;
var->transp.offset = 0;
var->transp.length = 0;
break;
- case 16: /* RGB 555 */
+ case 15: /* RGB 555 */
var->bits_per_pixel = 16;
var->red.offset = 10;
var->red.length = 5;
var->transp.offset = 0;
var->transp.length = 0;
break;
+ case 16: /* RGB 565 */
+ var->bits_per_pixel = 16;
+ var->red.offset = 11;
+ var->red.length = 5;
+ var->green.offset = 5;
+ var->green.length = 6;
+ var->blue.offset = 0;
+ var->blue.length = 5;
+ var->transp.offset = 0;
+ var->transp.length = 0;
+ break;
case 32: /* RGB 888 */
var->bits_per_pixel = 32;
var->red.offset = 16;
--- /dev/null
+#
+# Makefile for the new PNX4008 framebuffer device driver
+#
+
+obj-$(CONFIG_FB_PNX4008_DUM) += sdum.o
+obj-$(CONFIG_FB_PNX4008_DUM_RGB) += pnxrgbfb.o
+
--- /dev/null
+/*
+ * linux/drivers/video/pnx4008/dum.h
+ *
+ * Internal header for SDUM
+ *
+ * 2005 (c) Koninklijke Philips N.V. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+
+#ifndef __PNX008_DUM_H__
+#define __PNX008_DUM_H__
+
+#include <asm/arch/platform.h>
+
+#define PNX4008_DUMCONF_VA_BASE IO_ADDRESS(PNX4008_DUMCONF_BASE)
+#define PNX4008_DUM_MAIN_VA_BASE IO_ADDRESS(PNX4008_DUM_MAINCFG_BASE)
+
+/* DUM CFG ADDRESSES */
+#define DUM_CH_BASE_ADR (PNX4008_DUMCONF_VA_BASE + 0x00)
+#define DUM_CH_MIN_ADR (PNX4008_DUMCONF_VA_BASE + 0x00)
+#define DUM_CH_MAX_ADR (PNX4008_DUMCONF_VA_BASE + 0x04)
+#define DUM_CH_CONF_ADR (PNX4008_DUMCONF_VA_BASE + 0x08)
+#define DUM_CH_STAT_ADR (PNX4008_DUMCONF_VA_BASE + 0x0C)
+#define DUM_CH_CTRL_ADR (PNX4008_DUMCONF_VA_BASE + 0x10)
+
+#define CH_MARG (0x100 / sizeof(u32))
+#define DUM_CH_MIN(i) (*((volatile u32 *)DUM_CH_MIN_ADR + (i) * CH_MARG))
+#define DUM_CH_MAX(i) (*((volatile u32 *)DUM_CH_MAX_ADR + (i) * CH_MARG))
+#define DUM_CH_CONF(i) (*((volatile u32 *)DUM_CH_CONF_ADR + (i) * CH_MARG))
+#define DUM_CH_STAT(i) (*((volatile u32 *)DUM_CH_STAT_ADR + (i) * CH_MARG))
+#define DUM_CH_CTRL(i) (*((volatile u32 *)DUM_CH_CTRL_ADR + (i) * CH_MARG))
+
+#define DUM_CONF_ADR (PNX4008_DUM_MAIN_VA_BASE + 0x00)
+#define DUM_CTRL_ADR (PNX4008_DUM_MAIN_VA_BASE + 0x04)
+#define DUM_STAT_ADR (PNX4008_DUM_MAIN_VA_BASE + 0x08)
+#define DUM_DECODE_ADR (PNX4008_DUM_MAIN_VA_BASE + 0x0C)
+#define DUM_COM_BASE_ADR (PNX4008_DUM_MAIN_VA_BASE + 0x10)
+#define DUM_SYNC_C_ADR (PNX4008_DUM_MAIN_VA_BASE + 0x14)
+#define DUM_CLK_DIV_ADR (PNX4008_DUM_MAIN_VA_BASE + 0x18)
+#define DUM_DIRTY_LOW_ADR (PNX4008_DUM_MAIN_VA_BASE + 0x20)
+#define DUM_DIRTY_HIGH_ADR (PNX4008_DUM_MAIN_VA_BASE + 0x24)
+#define DUM_FORMAT_ADR (PNX4008_DUM_MAIN_VA_BASE + 0x28)
+#define DUM_WTCFG1_ADR (PNX4008_DUM_MAIN_VA_BASE + 0x30)
+#define DUM_RTCFG1_ADR (PNX4008_DUM_MAIN_VA_BASE + 0x34)
+#define DUM_WTCFG2_ADR (PNX4008_DUM_MAIN_VA_BASE + 0x38)
+#define DUM_RTCFG2_ADR (PNX4008_DUM_MAIN_VA_BASE + 0x3C)
+#define DUM_TCFG_ADR (PNX4008_DUM_MAIN_VA_BASE + 0x40)
+#define DUM_OUTP_FORMAT1_ADR (PNX4008_DUM_MAIN_VA_BASE + 0x44)
+#define DUM_OUTP_FORMAT2_ADR (PNX4008_DUM_MAIN_VA_BASE + 0x48)
+#define DUM_SYNC_MODE_ADR (PNX4008_DUM_MAIN_VA_BASE + 0x4C)
+#define DUM_SYNC_OUT_C_ADR (PNX4008_DUM_MAIN_VA_BASE + 0x50)
+
+#define DUM_CONF (*(volatile u32 *)(DUM_CONF_ADR))
+#define DUM_CTRL (*(volatile u32 *)(DUM_CTRL_ADR))
+#define DUM_STAT (*(volatile u32 *)(DUM_STAT_ADR))
+#define DUM_DECODE (*(volatile u32 *)(DUM_DECODE_ADR))
+#define DUM_COM_BASE (*(volatile u32 *)(DUM_COM_BASE_ADR))
+#define DUM_SYNC_C (*(volatile u32 *)(DUM_SYNC_C_ADR))
+#define DUM_CLK_DIV (*(volatile u32 *)(DUM_CLK_DIV_ADR))
+#define DUM_DIRTY_LOW (*(volatile u32 *)(DUM_DIRTY_LOW_ADR))
+#define DUM_DIRTY_HIGH (*(volatile u32 *)(DUM_DIRTY_HIGH_ADR))
+#define DUM_FORMAT (*(volatile u32 *)(DUM_FORMAT_ADR))
+#define DUM_WTCFG1 (*(volatile u32 *)(DUM_WTCFG1_ADR))
+#define DUM_RTCFG1 (*(volatile u32 *)(DUM_RTCFG1_ADR))
+#define DUM_WTCFG2 (*(volatile u32 *)(DUM_WTCFG2_ADR))
+#define DUM_RTCFG2 (*(volatile u32 *)(DUM_RTCFG2_ADR))
+#define DUM_TCFG (*(volatile u32 *)(DUM_TCFG_ADR))
+#define DUM_OUTP_FORMAT1 (*(volatile u32 *)(DUM_OUTP_FORMAT1_ADR))
+#define DUM_OUTP_FORMAT2 (*(volatile u32 *)(DUM_OUTP_FORMAT2_ADR))
+#define DUM_SYNC_MODE (*(volatile u32 *)(DUM_SYNC_MODE_ADR))
+#define DUM_SYNC_OUT_C (*(volatile u32 *)(DUM_SYNC_OUT_C_ADR))
+
+/* DUM SLAVE ADDRESSES */
+#define DUM_SLAVE_WRITE_ADR (PNX4008_DUM_MAINCFG_BASE + 0x0000000)
+#define DUM_SLAVE_READ1_I_ADR (PNX4008_DUM_MAINCFG_BASE + 0x1000000)
+#define DUM_SLAVE_READ1_R_ADR (PNX4008_DUM_MAINCFG_BASE + 0x1000004)
+#define DUM_SLAVE_READ2_I_ADR (PNX4008_DUM_MAINCFG_BASE + 0x1000008)
+#define DUM_SLAVE_READ2_R_ADR (PNX4008_DUM_MAINCFG_BASE + 0x100000C)
+
+#define DUM_SLAVE_WRITE_W ((volatile u32 *)(DUM_SLAVE_WRITE_ADR))
+#define DUM_SLAVE_WRITE_HW ((volatile u16 *)(DUM_SLAVE_WRITE_ADR))
+#define DUM_SLAVE_READ1_I ((volatile u8 *)(DUM_SLAVE_READ1_I_ADR))
+#define DUM_SLAVE_READ1_R ((volatile u16 *)(DUM_SLAVE_READ1_R_ADR))
+#define DUM_SLAVE_READ2_I ((volatile u8 *)(DUM_SLAVE_READ2_I_ADR))
+#define DUM_SLAVE_READ2_R ((volatile u16 *)(DUM_SLAVE_READ2_R_ADR))
+
+/* Sony display register addresses */
+#define DISP_0_REG (0x00)
+#define DISP_1_REG (0x01)
+#define DISP_CAL_REG (0x20)
+#define DISP_ID_REG (0x2A)
+#define DISP_XMIN_L_REG (0x30)
+#define DISP_XMIN_H_REG (0x31)
+#define DISP_YMIN_REG (0x32)
+#define DISP_XMAX_L_REG (0x34)
+#define DISP_XMAX_H_REG (0x35)
+#define DISP_YMAX_REG (0x36)
+#define DISP_SYNC_EN_REG (0x38)
+#define DISP_SYNC_RISE_L_REG (0x3C)
+#define DISP_SYNC_RISE_H_REG (0x3D)
+#define DISP_SYNC_FALL_L_REG (0x3E)
+#define DISP_SYNC_FALL_H_REG (0x3F)
+#define DISP_PIXEL_REG (0x0B)
+#define DISP_DUMMY1_REG (0x28)
+#define DISP_DUMMY2_REG (0x29)
+#define DISP_TIMING_REG (0x98)
+#define DISP_DUMP_REG (0x99)
+
+/* Sony display constants */
+#define SONY_ID1 (0x22)
+#define SONY_ID2 (0x23)
+
+/* Philips display register addresses */
+#define PH_DISP_ORIENT_REG (0x003)
+#define PH_DISP_YPOINT_REG (0x200)
+#define PH_DISP_XPOINT_REG (0x201)
+#define PH_DISP_PIXEL_REG (0x202)
+#define PH_DISP_YMIN_REG (0x406)
+#define PH_DISP_YMAX_REG (0x407)
+#define PH_DISP_XMIN_REG (0x408)
+#define PH_DISP_XMAX_REG (0x409)
+
+/* Misc constants */
+#define NO_VALID_DISPLAY_FOUND (0)
+#define DISPLAY2_IS_NOT_CONNECTED (0)
+
+/* register values */
+#define V_BAC_ENABLE (BIT(0))
+#define V_BAC_DISABLE_IDLE (BIT(1))
+#define V_BAC_DISABLE_TRIG (BIT(2))
+#define V_DUM_RESET (BIT(3))
+#define V_MUX_RESET (BIT(4))
+#define BAC_ENABLED (BIT(0))
+#define BAC_DISABLED 0
+
+/* Sony LCD commands */
+#define V_LCD_STANDBY_OFF ((BIT(25)) | (0 << 16) | DISP_0_REG)
+#define V_LCD_USE_9BIT_BUS ((BIT(25)) | (2 << 16) | DISP_1_REG)
+#define V_LCD_SYNC_RISE_L ((BIT(25)) | (0 << 16) | DISP_SYNC_RISE_L_REG)
+#define V_LCD_SYNC_RISE_H ((BIT(25)) | (0 << 16) | DISP_SYNC_RISE_H_REG)
+#define V_LCD_SYNC_FALL_L ((BIT(25)) | (160 << 16) | DISP_SYNC_FALL_L_REG)
+#define V_LCD_SYNC_FALL_H ((BIT(25)) | (0 << 16) | DISP_SYNC_FALL_H_REG)
+#define V_LCD_SYNC_ENABLE ((BIT(25)) | (128 << 16) | DISP_SYNC_EN_REG)
+#define V_LCD_DISPLAY_ON ((BIT(25)) | (64 << 16) | DISP_0_REG)
+
+enum {
+ PAD_NONE,
+ PAD_512,
+ PAD_1024
+};
+
+enum {
+ RGB888,
+ RGB666,
+ RGB565,
+ BGR565,
+ ARGB1555,
+ ABGR1555,
+ ARGB4444,
+ ABGR4444
+};
+
+struct dum_setup {
+ int sync_neg_edge;
+ int round_robin;
+ int mux_int;
+ int synced_dirty_flag_int;
+ int dirty_flag_int;
+ int error_int;
+ int pf_empty_int;
+ int sf_empty_int;
+ int bac_dis_int;
+ u32 dirty_base_adr;
+ u32 command_base_adr;
+ u32 sync_clk_div;
+ int sync_output;
+ u32 sync_restart_val;
+ u32 set_sync_high;
+ u32 set_sync_low;
+};
+
+struct dum_ch_setup {
+ int disp_no;
+ u32 xmin;
+ u32 ymin;
+ u32 xmax;
+ u32 ymax;
+ int xmirror;
+ int ymirror;
+ int rotate;
+ u32 minadr;
+ u32 maxadr;
+ u32 dirtybuffer;
+ int pad;
+ int format;
+ int hwdirty;
+ int slave_trans;
+};
+
+struct disp_window {
+ u32 xmin_l;
+ u32 xmin_h;
+ u32 ymin;
+ u32 xmax_l;
+ u32 xmax_h;
+ u32 ymax;
+};
+
+#endif /* #ifndef __PNX008_DUM_H__ */
--- /dev/null
+/*
+ * Copyright (C) 2005 Philips Semiconductors
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; see the file COPYING. If not, write to
+ * the Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 02111-1307, USA, or http://www.gnu.org/licenses/gpl.html
+*/
+
+#define QCIF_W (176)
+#define QCIF_H (144)
+
+#define CIF_W (352)
+#define CIF_H (288)
+
+#define LCD_X_RES 208
+#define LCD_Y_RES 320
+#define LCD_X_PAD 256
+#define LCD_BBP 4 /* Bytes Per Pixel */
+
+#define DISP_MAX_X_SIZE (320)
+#define DISP_MAX_Y_SIZE (208)
+
+#define RETURNVAL_BASE (0x400)
+
+enum fb_ioctl_returntype {
+ ENORESOURCESLEFT = RETURNVAL_BASE,
+ ERESOURCESNOTFREED,
+ EPROCNOTOWNER,
+ EFBNOTOWNER,
+ ECOPYFAILED,
+ EIOREMAPFAILED,
+};
--- /dev/null
+/*
+ * drivers/video/pnx4008/pnxrgbfb.c
+ *
+ * PNX4008's framebuffer support
+ *
+ * Author: Grigory Tolstolytkin <gtolstolytkin@ru.mvista.com>
+ * Based on Philips Semiconductors' code
+ *
+ * Copyright (c) 2005 MontaVista Software, Inc.
+ * Copyright (c) 2005 Philips Semiconductors
+ * This file is licensed under the terms of the GNU General Public License
+ * version 2. This program is licensed "as is" without any warranty of any
+ * kind, whether express or implied.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/mm.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/fb.h>
+#include <linux/init.h>
+#include <linux/platform_device.h>
+
+#include <asm/uaccess.h>
+#include "sdum.h"
+#include "fbcommon.h"
+
+static u32 colreg[16];
+
+static struct fb_var_screeninfo rgbfb_var __initdata = {
+ .xres = LCD_X_RES,
+ .yres = LCD_Y_RES,
+ .xres_virtual = LCD_X_RES,
+ .yres_virtual = LCD_Y_RES,
+ .bits_per_pixel = 32,
+ .red.offset = 16,
+ .red.length = 8,
+ .green.offset = 8,
+ .green.length = 8,
+ .blue.offset = 0,
+ .blue.length = 8,
+ .left_margin = 0,
+ .right_margin = 0,
+ .upper_margin = 0,
+ .lower_margin = 0,
+ .vmode = FB_VMODE_NONINTERLACED,
+};
+static struct fb_fix_screeninfo rgbfb_fix __initdata = {
+ .id = "RGBFB",
+ .line_length = LCD_X_RES * LCD_BBP,
+ .type = FB_TYPE_PACKED_PIXELS,
+ .visual = FB_VISUAL_TRUECOLOR,
+ .xpanstep = 0,
+ .ypanstep = 0,
+ .ywrapstep = 0,
+ .accel = FB_ACCEL_NONE,
+};
+
+static int channel_owned;
+
+static int no_cursor(struct fb_info *info, struct fb_cursor *cursor)
+{
+ return 0;
+}
+
+static int rgbfb_setcolreg(u_int regno, u_int red, u_int green, u_int blue,
+ u_int transp, struct fb_info *info)
+{
+ if (regno > 15)
+ return 1;
+
+ colreg[regno] = ((red & 0xff00) << 8) | (green & 0xff00) |
+ ((blue & 0xff00) >> 8);
+ return 0;
+}
+
+static int rgbfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+{
+ return pnx4008_sdum_mmap(info, vma, NULL);
+}
+
+static struct fb_ops rgbfb_ops = {
+ .fb_mmap = rgbfb_mmap,
+ .fb_setcolreg = rgbfb_setcolreg,
+ .fb_fillrect = cfb_fillrect,
+ .fb_copyarea = cfb_copyarea,
+ .fb_imageblit = cfb_imageblit,
+};
+
+static int rgbfb_remove(struct platform_device *pdev)
+{
+ struct fb_info *info = platform_get_drvdata(pdev);
+
+ if (info) {
+ unregister_framebuffer(info);
+ fb_dealloc_cmap(&info->cmap);
+ framebuffer_release(info);
+ platform_set_drvdata(pdev, NULL);
+ }
+
+ pnx4008_free_dum_channel(channel_owned, pdev->id);
+ pnx4008_set_dum_exit_notification(pdev->id);
+
+ return 0;
+}
+
+static int __devinit rgbfb_probe(struct platform_device *pdev)
+{
+ struct fb_info *info;
+ struct dumchannel_uf chan_uf;
+ int ret;
+ char *option;
+
+ info = framebuffer_alloc(sizeof(u32) * 16, &pdev->dev);
+ if (!info) {
+ ret = -ENOMEM;
+ goto err;
+ }
+
+ pnx4008_get_fb_addresses(FB_TYPE_RGB, (void **)&info->screen_base,
+ (dma_addr_t *) &rgbfb_fix.smem_start,
+ &rgbfb_fix.smem_len);
+
+ if ((ret = pnx4008_alloc_dum_channel(pdev->id)) < 0)
+ goto err0;
+ else {
+ channel_owned = ret;
+ chan_uf.channelnr = channel_owned;
+ chan_uf.dirty = (u32 *) NULL;
+ chan_uf.source = (u32 *) rgbfb_fix.smem_start;
+ chan_uf.x_offset = 0;
+ chan_uf.y_offset = 0;
+ chan_uf.width = LCD_X_RES;
+ chan_uf.height = LCD_Y_RES;
+
+ if ((ret = pnx4008_put_dum_channel_uf(chan_uf, pdev->id))< 0)
+ goto err1;
+
+ if ((ret =
+ pnx4008_set_dum_channel_sync(channel_owned, CONF_SYNC_ON,
+ pdev->id)) < 0)
+ goto err1;
+
+ if ((ret =
+ pnx4008_set_dum_channel_dirty_detect(channel_owned,
+ CONF_DIRTYDETECTION_ON,
+ pdev->id)) < 0)
+ goto err1;
+ }
+
+ if (!fb_get_options("pnxrgbfb", &option) && option && !strcmp(option, "nocursor"))
+ rgbfb_ops.fb_cursor = no_cursor;
+
+ info->node = -1;
+ info->flags = FBINFO_FLAG_DEFAULT;
+ info->fbops = &rgbfb_ops;
+ info->fix = rgbfb_fix;
+ info->var = rgbfb_var;
+ info->screen_size = rgbfb_fix.smem_len;
+ info->pseudo_palette = info->par;
+ info->par = NULL;
+
+ ret = fb_alloc_cmap(&info->cmap, 256, 0);
+ if (ret < 0)
+ goto err2;
+
+ ret = register_framebuffer(info);
+ if (ret < 0)
+ goto err3;
+ platform_set_drvdata(pdev, info);
+
+ return 0;
+
+err3:
+ fb_dealloc_cmap(&info->cmap);
+err2:
+err1:
+ pnx4008_free_dum_channel(channel_owned, pdev->id);
+err0:
+ framebuffer_release(info);
+err:
+ return ret;
+}
+
+static struct platform_driver rgbfb_driver = {
+ .driver = {
+ .name = "rgbfb",
+ },
+ .probe = rgbfb_probe,
+ .remove = rgbfb_remove,
+};
+
+static int __init rgbfb_init(void)
+{
+ return platform_driver_register(&rgbfb_driver);
+}
+
+static void __exit rgbfb_exit(void)
+{
+ platform_driver_unregister(&rgbfb_driver);
+}
+
+module_init(rgbfb_init);
+module_exit(rgbfb_exit);
+
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * drivers/video/pnx4008/sdum.c
+ *
+ * Display Update Master support
+ *
+ * Authors: Grigory Tolstolytkin <gtolstolytkin@ru.mvista.com>
+ * Vitaly Wool <vitalywool@gmail.com>
+ * Based on Philips Semiconductors' code
+ *
+ * Copyright (c) 2005-2006 MontaVista Software, Inc.
+ * Copyright (c) 2005 Philips Semiconductors
+ * This file is licensed under the terms of the GNU General Public License
+ * version 2. This program is licensed "as is" without any warranty of any
+ * kind, whether express or implied.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/mm.h>
+#include <linux/tty.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/platform_device.h>
+#include <linux/fb.h>
+#include <linux/init.h>
+#include <linux/dma-mapping.h>
+#include <linux/clk.h>
+#include <asm/uaccess.h>
+#include <asm/arch/gpio.h>
+
+#include "sdum.h"
+#include "fbcommon.h"
+#include "dum.h"
+
+/* Framebuffers we have */
+
+static struct pnx4008_fb_addr {
+ int fb_type;
+ long addr_offset;
+ long fb_length;
+} fb_addr[] = {
+ [0] = {
+ FB_TYPE_YUV, 0, 0xB0000
+ },
+ [1] = {
+ FB_TYPE_RGB, 0xB0000, 0x50000
+ },
+};
+
+static struct dum_data {
+ u32 lcd_phys_start;
+ u32 lcd_virt_start;
+ u32 slave_phys_base;
+ u32 *slave_virt_base;
+ int fb_owning_channel[MAX_DUM_CHANNELS];
+ struct dumchannel_uf chan_uf_store[MAX_DUM_CHANNELS];
+} dum_data;
+
+/* Different local helper functions */
+
+static u32 nof_pixels_dx(struct dum_ch_setup *ch_setup)
+{
+ return (ch_setup->xmax - ch_setup->xmin + 1);
+}
+
+static u32 nof_pixels_dy(struct dum_ch_setup *ch_setup)
+{
+ return (ch_setup->ymax - ch_setup->ymin + 1);
+}
+
+static u32 nof_pixels_dxy(struct dum_ch_setup *ch_setup)
+{
+ return (nof_pixels_dx(ch_setup) * nof_pixels_dy(ch_setup));
+}
+
+static u32 nof_bytes(struct dum_ch_setup *ch_setup)
+{
+ u32 r = nof_pixels_dxy(ch_setup);
+ switch (ch_setup->format) {
+ case RGB888:
+ case RGB666:
+ r *= 4;
+ break;
+
+ default:
+ r *= 2;
+ break;
+ }
+ return r;
+}
+
+static u32 build_command(int disp_no, u32 reg, u32 val)
+{
+ return ((disp_no << 26) | BIT(25) | (val << 16) | (disp_no << 10) |
+ (reg << 0));
+}
+
+static u32 build_double_index(int disp_no, u32 val)
+{
+ return ((disp_no << 26) | (val << 16) | (disp_no << 10) | (val << 0));
+}
+
+static void build_disp_window(struct dum_ch_setup * ch_setup, struct disp_window * dw)
+{
+ dw->ymin = ch_setup->ymin;
+ dw->ymax = ch_setup->ymax;
+ dw->xmin_l = ch_setup->xmin & 0xFF;
+ dw->xmin_h = (ch_setup->xmin & BIT(8)) >> 8;
+ dw->xmax_l = ch_setup->xmax & 0xFF;
+ dw->xmax_h = (ch_setup->xmax & BIT(8)) >> 8;
+}
+
+static int put_channel(struct dumchannel chan)
+{
+ int i = chan.channelnr;
+
+ if (i < 0 || i >= MAX_DUM_CHANNELS)
+ return -EINVAL;
+ else {
+ DUM_CH_MIN(i) = chan.dum_ch_min;
+ DUM_CH_MAX(i) = chan.dum_ch_max;
+ DUM_CH_CONF(i) = chan.dum_ch_conf;
+ DUM_CH_CTRL(i) = chan.dum_ch_ctrl;
+ }
+
+ return 0;
+}
+
+static void clear_channel(int channr)
+{
+ struct dumchannel chan;
+
+ chan.channelnr = channr;
+ chan.dum_ch_min = 0;
+ chan.dum_ch_max = 0;
+ chan.dum_ch_conf = 0;
+ chan.dum_ch_ctrl = 0;
+
+ put_channel(chan);
+}
+
+static int put_cmd_string(struct cmdstring cmds)
+{
+ u16 *cmd_str_virtaddr;
+ u32 *cmd_ptr0_virtaddr;
+ u32 cmd_str_physaddr;
+
+ int i = cmds.channelnr;
+
+ if (i < 0 || i >= MAX_DUM_CHANNELS)
+ return -EINVAL;
+ else if ((cmd_ptr0_virtaddr =
+ (int *)ioremap_nocache(DUM_COM_BASE,
+ sizeof(int) * MAX_DUM_CHANNELS)) ==
+ NULL)
+ return -EIOREMAPFAILED;
+ else {
+ cmd_str_physaddr = ioread32(&cmd_ptr0_virtaddr[cmds.channelnr]);
+ if ((cmd_str_virtaddr =
+ (u16 *) ioremap_nocache(cmd_str_physaddr,
+ sizeof(cmds))) == NULL) {
+ iounmap(cmd_ptr0_virtaddr);
+ return -EIOREMAPFAILED;
+ } else {
+ int t;
+ for (t = 0; t < 8; t++)
+ iowrite16(*((u16 *)&cmds.prestringlen + t),
+ cmd_str_virtaddr + t);
+
+ for (t = 0; t < cmds.prestringlen / 2; t++)
+ iowrite16(*((u16 *)&cmds.precmd + t),
+ cmd_str_virtaddr + t + 8);
+
+ for (t = 0; t < cmds.poststringlen / 2; t++)
+ iowrite16(*((u16 *)&cmds.postcmd + t),
+ cmd_str_virtaddr + t + 8 +
+ cmds.prestringlen / 2);
+
+ iounmap(cmd_ptr0_virtaddr);
+ iounmap(cmd_str_virtaddr);
+ }
+ }
+
+ return 0;
+}
+
+static u32 dum_ch_setup(int ch_no, struct dum_ch_setup * ch_setup)
+{
+ struct cmdstring cmds_c;
+ struct cmdstring *cmds = &cmds_c;
+ struct disp_window dw;
+ int standard;
+ u32 orientation = 0;
+ struct dumchannel chan = { 0 };
+ int ret;
+
+ if ((ch_setup->xmirror) || (ch_setup->ymirror) || (ch_setup->rotate)) {
+ standard = 0;
+
+ orientation = BIT(1); /* always set 9-bit-bus */
+ if (ch_setup->xmirror)
+ orientation |= BIT(4);
+ if (ch_setup->ymirror)
+ orientation |= BIT(3);
+ if (ch_setup->rotate)
+ orientation |= BIT(0);
+ } else
+ standard = 1;
+
+ cmds->channelnr = ch_no;
+
+ /* build command string header */
+ if (standard) {
+ cmds->prestringlen = 32;
+ cmds->poststringlen = 0;
+ } else {
+ cmds->prestringlen = 48;
+ cmds->poststringlen = 16;
+ }
+
+ cmds->format =
+ (u16) ((ch_setup->disp_no << 4) | (BIT(3)) | (ch_setup->format));
+ cmds->reserved = 0x0;
+ cmds->startaddr_low = (ch_setup->minadr & 0xFFFF);
+ cmds->startaddr_high = (ch_setup->minadr >> 16);
+
+ if ((ch_setup->minadr == 0) && (ch_setup->maxadr == 0)
+ && (ch_setup->xmin == 0)
+ && (ch_setup->ymin == 0) && (ch_setup->xmax == 0)
+ && (ch_setup->ymax == 0)) {
+ cmds->pixdatlen_low = 0;
+ cmds->pixdatlen_high = 0;
+ } else {
+ u32 nbytes = nof_bytes(ch_setup);
+ cmds->pixdatlen_low = (nbytes & 0xFFFF);
+ cmds->pixdatlen_high = (nbytes >> 16);
+ }
+
+ if (ch_setup->slave_trans)
+ cmds->pixdatlen_high |= BIT(15);
+
+ /* build pre-string */
+ build_disp_window(ch_setup, &dw);
+
+ if (standard) {
+ cmds->precmd[0] =
+ build_command(ch_setup->disp_no, DISP_XMIN_L_REG, 0x99);
+ cmds->precmd[1] =
+ build_command(ch_setup->disp_no, DISP_XMIN_L_REG,
+ dw.xmin_l);
+ cmds->precmd[2] =
+ build_command(ch_setup->disp_no, DISP_XMIN_H_REG,
+ dw.xmin_h);
+ cmds->precmd[3] =
+ build_command(ch_setup->disp_no, DISP_YMIN_REG, dw.ymin);
+ cmds->precmd[4] =
+ build_command(ch_setup->disp_no, DISP_XMAX_L_REG,
+ dw.xmax_l);
+ cmds->precmd[5] =
+ build_command(ch_setup->disp_no, DISP_XMAX_H_REG,
+ dw.xmax_h);
+ cmds->precmd[6] =
+ build_command(ch_setup->disp_no, DISP_YMAX_REG, dw.ymax);
+ cmds->precmd[7] =
+ build_double_index(ch_setup->disp_no, DISP_PIXEL_REG);
+ } else {
+ if (dw.xmin_l == ch_no)
+ cmds->precmd[0] =
+ build_command(ch_setup->disp_no, DISP_XMIN_L_REG,
+ 0x99);
+ else
+ cmds->precmd[0] =
+ build_command(ch_setup->disp_no, DISP_XMIN_L_REG,
+ ch_no);
+
+ cmds->precmd[1] =
+ build_command(ch_setup->disp_no, DISP_XMIN_L_REG,
+ dw.xmin_l);
+ cmds->precmd[2] =
+ build_command(ch_setup->disp_no, DISP_XMIN_H_REG,
+ dw.xmin_h);
+ cmds->precmd[3] =
+ build_command(ch_setup->disp_no, DISP_YMIN_REG, dw.ymin);
+ cmds->precmd[4] =
+ build_command(ch_setup->disp_no, DISP_XMAX_L_REG,
+ dw.xmax_l);
+ cmds->precmd[5] =
+ build_command(ch_setup->disp_no, DISP_XMAX_H_REG,
+ dw.xmax_h);
+ cmds->precmd[6] =
+ build_command(ch_setup->disp_no, DISP_YMAX_REG, dw.ymax);
+ cmds->precmd[7] =
+ build_command(ch_setup->disp_no, DISP_1_REG, orientation);
+ cmds->precmd[8] =
+ build_double_index(ch_setup->disp_no, DISP_PIXEL_REG);
+ cmds->precmd[9] =
+ build_double_index(ch_setup->disp_no, DISP_PIXEL_REG);
+ cmds->precmd[0xA] =
+ build_double_index(ch_setup->disp_no, DISP_PIXEL_REG);
+ cmds->precmd[0xB] =
+ build_double_index(ch_setup->disp_no, DISP_PIXEL_REG);
+ cmds->postcmd[0] =
+ build_command(ch_setup->disp_no, DISP_1_REG, BIT(1));
+ cmds->postcmd[1] =
+ build_command(ch_setup->disp_no, DISP_DUMMY1_REG, 1);
+ cmds->postcmd[2] =
+ build_command(ch_setup->disp_no, DISP_DUMMY1_REG, 2);
+ cmds->postcmd[3] =
+ build_command(ch_setup->disp_no, DISP_DUMMY1_REG, 3);
+ }
+
+ if ((ret = put_cmd_string(cmds_c)) != 0) {
+ return ret;
+ }
+
+ chan.channelnr = cmds->channelnr;
+ chan.dum_ch_min = ch_setup->dirtybuffer + ch_setup->minadr;
+ chan.dum_ch_max = ch_setup->dirtybuffer + ch_setup->maxadr;
+ chan.dum_ch_conf = 0x002;
+ chan.dum_ch_ctrl = 0x04;
+
+ put_channel(chan);
+
+ return 0;
+}
+
+static int display_open(int ch_no, int auto_update, u32 *dirty_buffer,
+ u32 *frame_buffer, u32 xpos, u32 ypos, u32 w, u32 h)
+{
+
+ struct dum_ch_setup k;
+ int ret;
+
+ /* keep width & height within display area */
+ if ((xpos + w) > DISP_MAX_X_SIZE)
+ w = DISP_MAX_X_SIZE - xpos;
+
+ if ((ypos + h) > DISP_MAX_Y_SIZE)
+ h = DISP_MAX_Y_SIZE - ypos;
+
+ /* assume 1 display only */
+ k.disp_no = 0;
+ k.xmin = xpos;
+ k.ymin = ypos;
+ k.xmax = xpos + (w - 1);
+ k.ymax = ypos + (h - 1);
+
+ /* adjust min and max values if necessary */
+ if (k.xmin > DISP_MAX_X_SIZE - 1)
+ k.xmin = DISP_MAX_X_SIZE - 1;
+ if (k.ymin > DISP_MAX_Y_SIZE - 1)
+ k.ymin = DISP_MAX_Y_SIZE - 1;
+
+ if (k.xmax > DISP_MAX_X_SIZE - 1)
+ k.xmax = DISP_MAX_X_SIZE - 1;
+ if (k.ymax > DISP_MAX_Y_SIZE - 1)
+ k.ymax = DISP_MAX_Y_SIZE - 1;
+
+ k.xmirror = 0;
+ k.ymirror = 0;
+ k.rotate = 0;
+ k.minadr = (u32) frame_buffer;
+ k.maxadr = (u32) frame_buffer + (((w - 1) << 10) | ((h << 2) - 2));
+ k.pad = PAD_1024;
+ k.dirtybuffer = (u32) dirty_buffer;
+ k.format = RGB888;
+ k.hwdirty = 0;
+ k.slave_trans = 0;
+
+ ret = dum_ch_setup(ch_no, &k);
+
+ return ret;
+}
+
+static void lcd_reset(void)
+{
+ u32 *dum_pio_base = (u32 *)IO_ADDRESS(PNX4008_PIO_BASE);
+
+ udelay(1);
+ iowrite32(BIT(19), &dum_pio_base[2]);
+ udelay(1);
+ iowrite32(BIT(19), &dum_pio_base[1]);
+ udelay(1);
+}
+
+static int dum_init(struct platform_device *pdev)
+{
+ struct clk *clk;
+
+ /* enable DUM clock */
+ clk = clk_get(&pdev->dev, "dum_ck");
+ if (IS_ERR(clk)) {
+ printk(KERN_ERR "pnx4008_dum: Unable to access DUM clock\n");
+ return PTR_ERR(clk);
+ }
+
+ clk_set_rate(clk, 1);
+ clk_put(clk);
+
+ DUM_CTRL = V_DUM_RESET;
+
+ /* set priority to "round-robin". All other params to "false" */
+ DUM_CONF = BIT(9);
+
+ /* Display 1 */
+ DUM_WTCFG1 = PNX4008_DUM_WT_CFG;
+ DUM_RTCFG1 = PNX4008_DUM_RT_CFG;
+ DUM_TCFG = PNX4008_DUM_T_CFG;
+
+ return 0;
+}
+
+static void dum_chan_init(void)
+{
+ int i = 0, ch = 0;
+ u32 *cmdptrs;
+ u32 *cmdstrings;
+
+ DUM_COM_BASE =
+ CMDSTRING_BASEADDR + BYTES_PER_CMDSTRING * NR_OF_CMDSTRINGS;
+
+ if ((cmdptrs =
+ (u32 *) ioremap_nocache(DUM_COM_BASE,
+ sizeof(u32) * NR_OF_CMDSTRINGS)) == NULL)
+ return;
+
+ for (ch = 0; ch < NR_OF_CMDSTRINGS; ch++)
+ iowrite32(CMDSTRING_BASEADDR + BYTES_PER_CMDSTRING * ch,
+ cmdptrs + ch);
+
+ for (ch = 0; ch < MAX_DUM_CHANNELS; ch++)
+ clear_channel(ch);
+
+ /* Clear the cmdstrings */
+ cmdstrings =
+ (u32 *)ioremap_nocache(ioread32(cmdptrs),
+ BYTES_PER_CMDSTRING * NR_OF_CMDSTRINGS);
+
+ if (!cmdstrings)
+ goto out;
+
+ for (i = 0; i < NR_OF_CMDSTRINGS * BYTES_PER_CMDSTRING / sizeof(u32);
+ i++)
+ iowrite32(0, cmdstrings + i);
+
+ iounmap((u32 *)cmdstrings);
+
+out:
+ iounmap((u32 *)cmdptrs);
+}
+
+static void lcd_init(void)
+{
+ lcd_reset();
+
+ DUM_OUTP_FORMAT1 = 0; /* RGB666 */
+
+ udelay(1);
+ iowrite32(V_LCD_STANDBY_OFF, dum_data.slave_virt_base);
+ udelay(1);
+ iowrite32(V_LCD_USE_9BIT_BUS, dum_data.slave_virt_base);
+ udelay(1);
+ iowrite32(V_LCD_SYNC_RISE_L, dum_data.slave_virt_base);
+ udelay(1);
+ iowrite32(V_LCD_SYNC_RISE_H, dum_data.slave_virt_base);
+ udelay(1);
+ iowrite32(V_LCD_SYNC_FALL_L, dum_data.slave_virt_base);
+ udelay(1);
+ iowrite32(V_LCD_SYNC_FALL_H, dum_data.slave_virt_base);
+ udelay(1);
+ iowrite32(V_LCD_SYNC_ENABLE, dum_data.slave_virt_base);
+ udelay(1);
+ iowrite32(V_LCD_DISPLAY_ON, dum_data.slave_virt_base);
+ udelay(1);
+}
+
+/* Interface exported to framebuffer drivers */
+
+int pnx4008_get_fb_addresses(int fb_type, void **virt_addr,
+ dma_addr_t *phys_addr, int *fb_length)
+{
+ int i;
+ int ret = -1;
+ for (i = 0; i < ARRAY_SIZE(fb_addr); i++)
+ if (fb_addr[i].fb_type == fb_type) {
+ *virt_addr = (void *)(dum_data.lcd_virt_start +
+ fb_addr[i].addr_offset);
+ *phys_addr =
+ dum_data.lcd_phys_start + fb_addr[i].addr_offset;
+ *fb_length = fb_addr[i].fb_length;
+ ret = 0;
+ break;
+ }
+
+ return ret;
+}
+
+EXPORT_SYMBOL(pnx4008_get_fb_addresses);
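+
+/*
+ * Illustrative sketch (not taken from the driver itself): a framebuffer
+ * driver is presumably expected to look up its buffer like this; the
+ * fb_type constants come from the header added further below, and the
+ * error handling shown here is an assumption.
+ *
+ *	void *fb_virt;
+ *	dma_addr_t fb_phys;
+ *	int fb_len;
+ *
+ *	if (pnx4008_get_fb_addresses(FB_TYPE_RGB, &fb_virt,
+ *				     &fb_phys, &fb_len) != 0)
+ *		return -ENODEV;
+ */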
+
+int pnx4008_alloc_dum_channel(int dev_id)
+{
+ int i = 0;
+
+ while ((i < MAX_DUM_CHANNELS) && (dum_data.fb_owning_channel[i] != -1))
+ i++;
+
+ if (i == MAX_DUM_CHANNELS)
+ return -ENORESOURCESLEFT;
+ else {
+ dum_data.fb_owning_channel[i] = dev_id;
+ return i;
+ }
+}
+
+EXPORT_SYMBOL(pnx4008_alloc_dum_channel);
+
+int pnx4008_free_dum_channel(int channr, int dev_id)
+{
+ if (channr < 0 || channr >= MAX_DUM_CHANNELS)
+ return -EINVAL;
+ else if (dum_data.fb_owning_channel[channr] != dev_id)
+ return -EFBNOTOWNER;
+ else {
+ clear_channel(channr);
+ dum_data.fb_owning_channel[channr] = -1;
+ }
+
+ return 0;
+}
+
+EXPORT_SYMBOL(pnx4008_free_dum_channel);
+
+int pnx4008_put_dum_channel_uf(struct dumchannel_uf chan_uf, int dev_id)
+{
+ int i = chan_uf.channelnr;
+ int ret;
+
+ if (i < 0 || i >= MAX_DUM_CHANNELS)
+ return -EINVAL;
+ else if (dum_data.fb_owning_channel[i] != dev_id)
+ return -EFBNOTOWNER;
+ else if ((ret =
+ display_open(chan_uf.channelnr, 0, chan_uf.dirty,
+ chan_uf.source, chan_uf.y_offset,
+ chan_uf.x_offset, chan_uf.height,
+ chan_uf.width)) != 0)
+ return ret;
+ else {
+ dum_data.chan_uf_store[i].dirty = chan_uf.dirty;
+ dum_data.chan_uf_store[i].source = chan_uf.source;
+ dum_data.chan_uf_store[i].x_offset = chan_uf.x_offset;
+ dum_data.chan_uf_store[i].y_offset = chan_uf.y_offset;
+ dum_data.chan_uf_store[i].width = chan_uf.width;
+ dum_data.chan_uf_store[i].height = chan_uf.height;
+ }
+
+ return 0;
+}
+
+EXPORT_SYMBOL(pnx4008_put_dum_channel_uf);
+
+int pnx4008_set_dum_channel_sync(int channr, int val, int dev_id)
+{
+ if (channr < 0 || channr >= MAX_DUM_CHANNELS)
+ return -EINVAL;
+ else if (dum_data.fb_owning_channel[channr] != dev_id)
+ return -EFBNOTOWNER;
+ else {
+ if (val == CONF_SYNC_ON) {
+ DUM_CH_CONF(channr) |= CONF_SYNCENABLE;
+ DUM_CH_CONF(channr) |= DUM_CHANNEL_CFG_SYNC_MASK |
+ DUM_CHANNEL_CFG_SYNC_MASK_SET;
+ } else if (val == CONF_SYNC_OFF)
+ DUM_CH_CONF(channr) &= ~CONF_SYNCENABLE;
+ else
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+EXPORT_SYMBOL(pnx4008_set_dum_channel_sync);
+
+int pnx4008_set_dum_channel_dirty_detect(int channr, int val, int dev_id)
+{
+ if (channr < 0 || channr >= MAX_DUM_CHANNELS)
+ return -EINVAL;
+ else if (dum_data.fb_owning_channel[channr] != dev_id)
+ return -EFBNOTOWNER;
+ else {
+ if (val == CONF_DIRTYDETECTION_ON)
+ DUM_CH_CONF(channr) |= CONF_DIRTYENABLE;
+ else if (val == CONF_DIRTYDETECTION_OFF)
+ DUM_CH_CONF(channr) &= ~CONF_DIRTYENABLE;
+ else
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+EXPORT_SYMBOL(pnx4008_set_dum_channel_dirty_detect);
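+
+/*
+ * Illustrative sketch (not taken from the driver itself) of the expected
+ * calling sequence for the exported channel interface; dev_id, dirty_flags,
+ * frame_buffer, w and h are placeholders supplied by the framebuffer driver.
+ *
+ *	struct dumchannel_uf uf;
+ *	int ch = pnx4008_alloc_dum_channel(dev_id);
+ *
+ *	if (ch < 0)
+ *		return ch;
+ *
+ *	uf.channelnr = ch;
+ *	uf.dirty = dirty_flags;
+ *	uf.source = frame_buffer;
+ *	uf.x_offset = 0;
+ *	uf.y_offset = 0;
+ *	uf.width = w;
+ *	uf.height = h;
+ *
+ *	if (pnx4008_put_dum_channel_uf(uf, dev_id) == 0)
+ *		pnx4008_set_dum_channel_dirty_detect(ch,
+ *				CONF_DIRTYDETECTION_ON, dev_id);
+ *
+ *	and on teardown:
+ *	pnx4008_free_dum_channel(ch, dev_id);
+ */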
+
+#if 0 /* Functions not currently used, but likely to be needed in the future */
+
+static int get_channel(struct dumchannel *p_chan)
+{
+ int i = p_chan->channelnr;
+
+ if (i < 0 || i >= MAX_DUM_CHANNELS)
+ return -EINVAL;
+ else {
+ p_chan->dum_ch_min = DUM_CH_MIN(i);
+ p_chan->dum_ch_max = DUM_CH_MAX(i);
+ p_chan->dum_ch_conf = DUM_CH_CONF(i);
+ p_chan->dum_ch_stat = DUM_CH_STAT(i);
+ p_chan->dum_ch_ctrl = 0; /* WriteOnly control register */
+ }
+
+ return 0;
+}
+
+int pnx4008_get_dum_channel_uf(struct dumchannel_uf *p_chan_uf, int dev_id)
+{
+ int i = p_chan_uf->channelnr;
+
+ if (i < 0 || i >= MAX_DUM_CHANNELS)
+ return -EINVAL;
+ else if (dum_data.fb_owning_channel[i] != dev_id)
+ return -EFBNOTOWNER;
+ else {
+ p_chan_uf->dirty = dum_data.chan_uf_store[i].dirty;
+ p_chan_uf->source = dum_data.chan_uf_store[i].source;
+ p_chan_uf->x_offset = dum_data.chan_uf_store[i].x_offset;
+ p_chan_uf->y_offset = dum_data.chan_uf_store[i].y_offset;
+ p_chan_uf->width = dum_data.chan_uf_store[i].width;
+ p_chan_uf->height = dum_data.chan_uf_store[i].height;
+ }
+
+ return 0;
+}
+
+EXPORT_SYMBOL(pnx4008_get_dum_channel_uf);
+
+int pnx4008_get_dum_channel_config(int channr, int dev_id)
+{
+ int ret;
+ struct dumchannel chan;
+
+ if (channr < 0 || channr >= MAX_DUM_CHANNELS)
+ return -EINVAL;
+ else if (dum_data.fb_owning_channel[channr] != dev_id)
+ return -EFBNOTOWNER;
+ else {
+ chan.channelnr = channr;
+ if ((ret = get_channel(&chan)) != 0)
+ return ret;
+ }
+
+ return (chan.dum_ch_conf & DUM_CHANNEL_CFG_MASK);
+}
+
+EXPORT_SYMBOL(pnx4008_get_dum_channel_config);
+
+int pnx4008_force_update_dum_channel(int channr, int dev_id)
+{
+ if (channr < 0 || channr >= MAX_DUM_CHANNELS)
+ return -EINVAL;
+
+ else if (dum_data.fb_owning_channel[channr] != dev_id)
+ return -EFBNOTOWNER;
+ else
+ DUM_CH_CTRL(channr) = CTRL_SETDIRTY;
+
+ return 0;
+}
+
+EXPORT_SYMBOL(pnx4008_force_update_dum_channel);
+
+#endif
+
+int pnx4008_sdum_mmap(struct fb_info *info, struct vm_area_struct *vma,
+ struct device *dev)
+{
+ unsigned long off = vma->vm_pgoff << PAGE_SHIFT;
+
+ if (off < info->fix.smem_len) {
+ vma->vm_pgoff += 1;
+ return dma_mmap_writecombine(dev, vma,
+ (void *)dum_data.lcd_virt_start,
+ dum_data.lcd_phys_start,
+ FB_DMA_SIZE);
+ }
+ return -EINVAL;
+}
+
+EXPORT_SYMBOL(pnx4008_sdum_mmap);
+
+int pnx4008_set_dum_exit_notification(int dev_id)
+{
+ int i;
+
+ for (i = 0; i < MAX_DUM_CHANNELS; i++)
+ if (dum_data.fb_owning_channel[i] == dev_id)
+ return -ERESOURCESNOTFREED;
+
+ return 0;
+}
+
+EXPORT_SYMBOL(pnx4008_set_dum_exit_notification);
+
+/* Platform device driver for DUM */
+
+static int sdum_suspend(struct platform_device *pdev, pm_message_t state)
+{
+ int retval = 0;
+ struct clk *clk;
+
+ clk = clk_get(0, "dum_ck");
+ if (!IS_ERR(clk)) {
+ clk_set_rate(clk, 0);
+ clk_put(clk);
+ } else
+ retval = PTR_ERR(clk);
+
+ /* disable BAC */
+ DUM_CTRL = V_BAC_DISABLE_IDLE;
+
+ /* LCD standby & turn off display */
+ lcd_reset();
+
+ return retval;
+}
+
+static int sdum_resume(struct platform_device *pdev)
+{
+ int retval = 0;
+ struct clk *clk;
+
+ clk = clk_get(0, "dum_ck");
+ if (!IS_ERR(clk)) {
+ clk_set_rate(clk, 1);
+ clk_put(clk);
+ } else
+ retval = PTR_ERR(clk);
+
+ /* wait for BAC disable */
+ DUM_CTRL = V_BAC_DISABLE_TRIG;
+
+ while (DUM_CTRL & BAC_ENABLED)
+ udelay(10);
+
+ /* re-init LCD */
+ lcd_init();
+
+ /* enable BAC and reset MUX */
+ DUM_CTRL = V_BAC_ENABLE;
+ udelay(1);
+ DUM_CTRL = V_MUX_RESET;
+ return retval;
+}
+
+static int __devinit sdum_probe(struct platform_device *pdev)
+{
+ int ret = 0, i = 0;
+
+ /* map frame buffer */
+ dum_data.lcd_virt_start = (u32) dma_alloc_writecombine(&pdev->dev,
+ FB_DMA_SIZE,
+ &dum_data.lcd_phys_start,
+ GFP_KERNEL);
+
+ if (!dum_data.lcd_virt_start) {
+ ret = -ENOMEM;
+ goto out_3;
+ }
+
+ /* map slave registers */
+ dum_data.slave_phys_base = PNX4008_DUM_SLAVE_BASE;
+ dum_data.slave_virt_base =
+ (u32 *) ioremap_nocache(dum_data.slave_phys_base, sizeof(u32));
+
+ if (dum_data.slave_virt_base == NULL) {
+ ret = -ENOMEM;
+ goto out_2;
+ }
+
+ /* initialize DUM and LCD display */
+ ret = dum_init(pdev);
+ if (ret)
+ goto out_1;
+
+ dum_chan_init();
+ lcd_init();
+
+ DUM_CTRL = V_BAC_ENABLE;
+ udelay(1);
+ DUM_CTRL = V_MUX_RESET;
+
+ /* set decode address and sync clock divider */
+ DUM_DECODE = dum_data.lcd_phys_start & DUM_DECODE_MASK;
+ DUM_CLK_DIV = PNX4008_DUM_CLK_DIV;
+
+ for (i = 0; i < MAX_DUM_CHANNELS; i++)
+ dum_data.fb_owning_channel[i] = -1;
+
+ /* set up wakeup interrupt */
+ start_int_set_rising_edge(SE_DISP_SYNC_INT);
+ start_int_ack(SE_DISP_SYNC_INT);
+ start_int_umask(SE_DISP_SYNC_INT);
+
+ return 0;
+
+out_1:
+ iounmap((void *)dum_data.slave_virt_base);
+out_2:
+ dma_free_writecombine(&pdev->dev, FB_DMA_SIZE,
+ (void *)dum_data.lcd_virt_start,
+ dum_data.lcd_phys_start);
+out_3:
+ return ret;
+}
+
+static int sdum_remove(struct platform_device *pdev)
+{
+ struct clk *clk;
+
+ start_int_mask(SE_DISP_SYNC_INT);
+
+ clk = clk_get(0, "dum_ck");
+ if (!IS_ERR(clk)) {
+ clk_set_rate(clk, 0);
+ clk_put(clk);
+ }
+
+ iounmap((void *)dum_data.slave_virt_base);
+
+ dma_free_writecombine(&pdev->dev, FB_DMA_SIZE,
+ (void *)dum_data.lcd_virt_start,
+ dum_data.lcd_phys_start);
+
+ return 0;
+}
+
+static struct platform_driver sdum_driver = {
+ .driver = {
+ .name = "sdum",
+ },
+ .probe = sdum_probe,
+ .remove = sdum_remove,
+ .suspend = sdum_suspend,
+ .resume = sdum_resume,
+};
+
+int __init sdum_init(void)
+{
+ return platform_driver_register(&sdum_driver);
+}
+
+static void __exit sdum_exit(void)
+{
+ platform_driver_unregister(&sdum_driver);
+}
+
+module_init(sdum_init);
+module_exit(sdum_exit);
+
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * Copyright (C) 2005 Philips Semiconductors
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; see the file COPYING. If not, write to
+ * the Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 02111-1307, USA, or http://www.gnu.org/licenses/gpl.html
+*/
+
+#define MAX_DUM_CHANNELS 64
+
+#define RGB_MEM_WINDOW(x) (0x10000000 + (x)*0x00100000)
+
+#define QCIF_OFFSET(x) (((x) == 0) ? 0x00000: ((x) == 1) ? 0x30000: -1)
+#define CIF_OFFSET(x) (((x) == 0) ? 0x00000: ((x) == 1) ? 0x60000: -1)
+
+#define CTRL_SETDIRTY (0x00000001)
+#define CONF_DIRTYENABLE (0x00000020)
+#define CONF_SYNCENABLE (0x00000004)
+
+#define DIRTY_ENABLED(conf) ((conf) & 0x0020)
+#define SYNC_ENABLED(conf) ((conf) & 0x0004)
+
+/* Display 1 & 2 Write Timing Configuration */
+#define PNX4008_DUM_WT_CFG 0x00372000
+
+/* Display 1 & 2 Read Timing Configuration */
+#define PNX4008_DUM_RT_CFG 0x00003A47
+
+/* DUM Transit State Timing Configuration */
+#define PNX4008_DUM_T_CFG 0x1D /* 29 HCLK cycles */
+
+/* DUM Sync count clock divider */
+#define PNX4008_DUM_CLK_DIV 0x02DD
+
+/* Memory size for framebuffer, allocated through dma_alloc_writecombine().
+ * Must be PAGE aligned
+ */
+#define FB_DMA_SIZE (PAGE_ALIGN(SZ_1M + PAGE_SIZE))
+
+#define OFFSET_RGBBUFFER (0xB0000)
+#define OFFSET_YUVBUFFER (0x00000)
+
+#define YUVBUFFER (lcd_video_start + OFFSET_YUVBUFFER)
+#define RGBBUFFER (lcd_video_start + OFFSET_RGBBUFFER)
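+
+/*
+ * Both offsets presumably resolve against the single writecombined region of
+ * FB_DMA_SIZE bytes allocated in sdum_probe(): the YUV buffer sits at offset
+ * 0 and the RGB buffer at offset 0xB0000 above the start of that region.
+ */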
+
+#define CMDSTRING_BASEADDR (0x00C000) /* iram */
+#define BYTES_PER_CMDSTRING (0x80)
+#define NR_OF_CMDSTRINGS (64)
+
+#define MAX_NR_PRESTRINGS (0x40)
+#define MAX_NR_POSTSTRINGS (0x40)
+
+/* various mask definitions */
+#define DUM_CLK_ENABLE 0x01
+#define DUM_CLK_DISABLE 0
+#define DUM_DECODE_MASK 0x1FFFFFFF
+#define DUM_CHANNEL_CFG_MASK 0x01FF
+#define DUM_CHANNEL_CFG_SYNC_MASK 0xFFFE00FF
+#define DUM_CHANNEL_CFG_SYNC_MASK_SET 0x0CA00
+
+#define SDUM_RETURNVAL_BASE (0x500)
+
+#define CONF_SYNC_OFF (0x602)
+#define CONF_SYNC_ON (0x603)
+
+#define CONF_DIRTYDETECTION_OFF (0x600)
+#define CONF_DIRTYDETECTION_ON (0x601)
+
+/* Build a mask with bit n set. */
+#define BIT(n) (0x1U << (n))
+
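+/*
+ * Per-channel update parameters handed in by a framebuffer driver: the
+ * dirty-flag buffer, the source frame buffer and the window geometry
+ * (offsets and size) that the channel transfers to the display.
+ */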
+struct dumchannel_uf {
+ int channelnr;
+ u32 *dirty;
+ u32 *source;
+ u32 x_offset;
+ u32 y_offset;
+ u32 width;
+ u32 height;
+};
+
+enum {
+ FB_TYPE_YUV,
+ FB_TYPE_RGB
+};
+
+struct cmdstring {
+ int channelnr;
+ uint16_t prestringlen;
+ uint16_t poststringlen;
+ uint16_t format;
+ uint16_t reserved;
+ uint16_t startaddr_low;
+ uint16_t startaddr_high;
+ uint16_t pixdatlen_low;
+ uint16_t pixdatlen_high;
+ u32 precmd[MAX_NR_PRESTRINGS];
+ u32 postcmd[MAX_NR_POSTSTRINGS];
+
+};
+
+struct dumchannel {
+ int channelnr;
+ int dum_ch_min;
+ int dum_ch_max;
+ int dum_ch_conf;
+ int dum_ch_stat;
+ int dum_ch_ctrl;
+};
+
+int pnx4008_alloc_dum_channel(int dev_id);
+int pnx4008_free_dum_channel(int channr, int dev_id);
+
+int pnx4008_get_dum_channel_uf(struct dumchannel_uf *pChan_uf, int dev_id);
+int pnx4008_put_dum_channel_uf(struct dumchannel_uf chan_uf, int dev_id);
+
+int pnx4008_set_dum_channel_sync(int channr, int val, int dev_id);
+int pnx4008_set_dum_channel_dirty_detect(int channr, int val, int dev_id);
+
+int pnx4008_force_update_dum_channel(int channr, int dev_id);
+
+int pnx4008_get_dum_channel_config(int channr, int dev_id);
+
+int pnx4008_sdum_mmap(struct fb_info *info, struct vm_area_struct *vma, struct device *dev);
+int pnx4008_set_dum_exit_notification(int dev_id);
+
+int pnx4008_get_fb_addresses(int fb_type, void **virt_addr,
+ dma_addr_t *phys_addr, int *fb_length);
goto failed;
}
- ret = request_irq(IRQ_LCD, pxafb_handle_irq, SA_INTERRUPT, "LCD", fbi);
+ ret = request_irq(IRQ_LCD, pxafb_handle_irq, IRQF_DISABLED, "LCD", fbi);
if (ret) {
dev_err(&dev->dev, "request_irq failed: %d\n", ret);
ret = -EBUSY;
dprintk("got LCD region\n");
- ret = request_irq(irq, s3c2410fb_irq, SA_INTERRUPT, pdev->name, info);
+ ret = request_irq(irq, s3c2410fb_irq, IRQF_DISABLED, pdev->name, info);
if (ret) {
dev_err(&pdev->dev, "cannot get irq %d - err %d\n", irq, ret);
ret = -EBUSY;
if (ret)
goto failed;
- ret = request_irq(irq, sa1100fb_handle_irq, SA_INTERRUPT,
+ ret = request_irq(irq, sa1100fb_handle_irq, IRQF_DISABLED,
"LCD", fbi);
if (ret) {
printk(KERN_ERR "sa1100fb: request_irq failed: %d\n", ret);
.min_coredump = ELF_EXEC_PAGESIZE
};
-#define BAD_ADDR(x) ((unsigned long)(x) > TASK_SIZE)
+#define BAD_ADDR(x) ((unsigned long)(x) >= TASK_SIZE)
static int set_brk(unsigned long start, unsigned long end)
{
* <= p_memsize so it's only necessary to check p_memsz.
*/
k = load_addr + eppnt->p_vaddr;
- if (k > TASK_SIZE ||
+ if (BAD_ADDR(k) ||
eppnt->p_filesz > eppnt->p_memsz ||
eppnt->p_memsz > TASK_SIZE ||
TASK_SIZE - eppnt->p_memsz < k) {
* allowed task size. Note that p_filesz must always be
* <= p_memsz so it is only necessary to check p_memsz.
*/
- if (k > TASK_SIZE || elf_ppnt->p_filesz > elf_ppnt->p_memsz ||
+ if (BAD_ADDR(k) || elf_ppnt->p_filesz > elf_ppnt->p_memsz ||
elf_ppnt->p_memsz > TASK_SIZE ||
TASK_SIZE - elf_ppnt->p_memsz < k) {
/* set_brk can never work. Avoid overflows. */
interpreter,
&interp_load_addr);
if (BAD_ADDR(elf_entry)) {
- printk(KERN_ERR "Unable to load interpreter %.128s\n",
- elf_interpreter);
force_sig(SIGSEGV, current);
- retval = -ENOEXEC; /* Nobody gets to see this, but.. */
+ retval = IS_ERR((void *)elf_entry) ?
+ (int)elf_entry : -EINVAL;
goto out_free_dentry;
}
reloc_func_desc = interp_load_addr;
} else {
elf_entry = loc->elf_ex.e_entry;
if (BAD_ADDR(elf_entry)) {
- send_sig(SIGSEGV, current, 0);
- retval = -ENOEXEC; /* Nobody gets to see this, but.. */
+ force_sig(SIGSEGV, current);
+ retval = -EINVAL;
goto out_free_dentry;
}
}
if (!bo)
return -ENOMEM;
- mutex_lock(&bdev->bd_mutex);
+ mutex_lock_nested(&bdev->bd_mutex, BD_MUTEX_PARTITION);
res = bd_claim(bdev, holder);
if (res || !add_bd_holder(bdev, bo))
free_bd_holder(bo);
if (!kobj)
return;
- mutex_lock(&bdev->bd_mutex);
+ mutex_lock_nested(&bdev->bd_mutex, BD_MUTEX_PARTITION);
bd_release(bdev);
if ((bo = del_bd_holder(bdev, kobj)))
free_bd_holder(bo);
EXPORT_SYMBOL(open_by_devnum);
+static int
+blkdev_get_partition(struct block_device *bdev, mode_t mode, unsigned flags);
+
+struct block_device *open_partition_by_devnum(dev_t dev, unsigned mode)
+{
+ struct block_device *bdev = bdget(dev);
+ int err = -ENOMEM;
+ int flags = mode & FMODE_WRITE ? O_RDWR : O_RDONLY;
+ if (bdev)
+ err = blkdev_get_partition(bdev, mode, flags);
+ return err ? ERR_PTR(err) : bdev;
+}
+
+EXPORT_SYMBOL(open_partition_by_devnum);
+
+
/*
* This routine checks whether a removable media has been changed,
* and invalidates all buffer-cache-entries in that case. This
}
EXPORT_SYMBOL(bd_set_size);
-static int do_open(struct block_device *bdev, struct file *file)
+static int
+blkdev_get_whole(struct block_device *bdev, mode_t mode, unsigned flags);
+
+static int
+do_open(struct block_device *bdev, struct file *file, unsigned int subclass)
{
struct module *owner = NULL;
struct gendisk *disk;
}
owner = disk->fops->owner;
- mutex_lock(&bdev->bd_mutex);
+ mutex_lock_nested(&bdev->bd_mutex, subclass);
+
if (!bdev->bd_openers) {
bdev->bd_disk = disk;
bdev->bd_contains = bdev;
ret = -ENOMEM;
if (!whole)
goto out_first;
- ret = blkdev_get(whole, file->f_mode, file->f_flags);
+ ret = blkdev_get_whole(whole, file->f_mode, file->f_flags);
if (ret)
goto out_first;
bdev->bd_contains = whole;
- mutex_lock(&whole->bd_mutex);
+ mutex_lock_nested(&whole->bd_mutex, BD_MUTEX_WHOLE);
whole->bd_part_count++;
p = disk->part[part - 1];
bdev->bd_inode->i_data.backing_dev_info =
if (bdev->bd_invalidated)
rescan_partitions(bdev->bd_disk, bdev);
} else {
- mutex_lock(&bdev->bd_contains->bd_mutex);
+ mutex_lock_nested(&bdev->bd_contains->bd_mutex,
+ BD_MUTEX_PARTITION);
bdev->bd_contains->bd_part_count++;
mutex_unlock(&bdev->bd_contains->bd_mutex);
}
fake_file.f_dentry = &fake_dentry;
fake_dentry.d_inode = bdev->bd_inode;
- return do_open(bdev, &fake_file);
+ return do_open(bdev, &fake_file, BD_MUTEX_NORMAL);
}
EXPORT_SYMBOL(blkdev_get);
+static int
+blkdev_get_whole(struct block_device *bdev, mode_t mode, unsigned flags)
+{
+ /*
+ * This crockload is due to bad choice of ->open() type.
+ * It will go away.
+ * For now, block device ->open() routine must _not_
+ * examine anything in 'inode' argument except ->i_rdev.
+ */
+ struct file fake_file = {};
+ struct dentry fake_dentry = {};
+ fake_file.f_mode = mode;
+ fake_file.f_flags = flags;
+ fake_file.f_dentry = &fake_dentry;
+ fake_dentry.d_inode = bdev->bd_inode;
+
+ return do_open(bdev, &fake_file, BD_MUTEX_WHOLE);
+}
+
+static int
+blkdev_get_partition(struct block_device *bdev, mode_t mode, unsigned flags)
+{
+ /*
+ * This crockload is due to bad choice of ->open() type.
+ * It will go away.
+ * For now, block device ->open() routine must _not_
+ * examine anything in 'inode' argument except ->i_rdev.
+ */
+ struct file fake_file = {};
+ struct dentry fake_dentry = {};
+ fake_file.f_mode = mode;
+ fake_file.f_flags = flags;
+ fake_file.f_dentry = &fake_dentry;
+ fake_dentry.d_inode = bdev->bd_inode;
+
+ return do_open(bdev, &fake_file, BD_MUTEX_PARTITION);
+}
+
static int blkdev_open(struct inode * inode, struct file * filp)
{
struct block_device *bdev;
bdev = bd_acquire(inode);
- res = do_open(bdev, filp);
+ res = do_open(bdev, filp, BD_MUTEX_NORMAL);
if (res)
return res;
return res;
}
-int blkdev_put(struct block_device *bdev)
+static int __blkdev_put(struct block_device *bdev, unsigned int subclass)
{
int ret = 0;
struct inode *bd_inode = bdev->bd_inode;
struct gendisk *disk = bdev->bd_disk;
- mutex_lock(&bdev->bd_mutex);
+ mutex_lock_nested(&bdev->bd_mutex, subclass);
lock_kernel();
if (!--bdev->bd_openers) {
sync_blockdev(bdev);
if (disk->fops->release)
ret = disk->fops->release(bd_inode, NULL);
} else {
- mutex_lock(&bdev->bd_contains->bd_mutex);
+ mutex_lock_nested(&bdev->bd_contains->bd_mutex,
+ subclass + 1);
bdev->bd_contains->bd_part_count--;
mutex_unlock(&bdev->bd_contains->bd_mutex);
}
}
bdev->bd_disk = NULL;
bdev->bd_inode->i_data.backing_dev_info = &default_backing_dev_info;
- if (bdev != bdev->bd_contains) {
- blkdev_put(bdev->bd_contains);
- }
+ if (bdev != bdev->bd_contains)
+ __blkdev_put(bdev->bd_contains, subclass + 1);
bdev->bd_contains = NULL;
}
unlock_kernel();
return ret;
}
+int blkdev_put(struct block_device *bdev)
+{
+ return __blkdev_put(bdev, BD_MUTEX_NORMAL);
+}
+
EXPORT_SYMBOL(blkdev_put);
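+
+/*
+ * Presumably the counterpart of open_partition_by_devnum() and
+ * blkdev_get_partition(): releasing through blkdev_put_partition() lets
+ * lockdep see the same BD_MUTEX_PARTITION subclass on the release path as
+ * was used when the partition was opened.
+ */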
+int blkdev_put_partition(struct block_device *bdev)
+{
+ return __blkdev_put(bdev, BD_MUTEX_PARTITION);
+}
+
+EXPORT_SYMBOL(blkdev_put_partition);
+
static int blkdev_close(struct inode * inode, struct file * filp)
{
struct block_device *bdev = I_BDEV(filp->f_mapping->host);
EXPORT_SYMBOL_GPL(sysctl_vfs_cache_pressure);
__cacheline_aligned_in_smp DEFINE_SPINLOCK(dcache_lock);
-static seqlock_t rename_lock __cacheline_aligned_in_smp = SEQLOCK_UNLOCKED;
+static __cacheline_aligned_in_smp DEFINE_SEQLOCK(rename_lock);
EXPORT_SYMBOL(dcache_lock);
*/
if (target < dentry) {
spin_lock(&target->d_lock);
- spin_lock(&dentry->d_lock);
+ spin_lock_nested(&dentry->d_lock, DENTRY_D_LOCK_NESTED);
} else {
spin_lock(&dentry->d_lock);
- spin_lock(&target->d_lock);
+ spin_lock_nested(&target->d_lock, DENTRY_D_LOCK_NESTED);
}
/* Move the dentry to the target hash queue, if on different bucket */
if (dio->end_io && dio->result)
dio->end_io(dio->iocb, offset, bytes, dio->map_bh.b_private);
if (dio->lock_type == DIO_LOCKING)
- up_read(&dio->inode->i_alloc_sem);
+ /* lockdep: non-owner release */
+ up_read_non_owner(&dio->inode->i_alloc_sem);
}
/*
}
if (dio_lock_type == DIO_LOCKING)
- down_read(&inode->i_alloc_sem);
+ /* lockdep: not the owner will release it */
+ down_read_non_owner(&inode->i_alloc_sem);
}
/*
*/
struct wake_task_node {
struct list_head llink;
- task_t *task;
+ struct task_struct *task;
wait_queue_head_t *wq;
};
{
int wake_nests = 0;
unsigned long flags;
- task_t *this_task = current;
+ struct task_struct *this_task = current;
struct list_head *lsthead = &psw->wake_task_list, *lnk;
struct wake_task_node *tncur;
struct wake_task_node tnode;
struct buffer_head tmp_bh;
struct buffer_head *bh;
- mutex_lock(&inode->i_mutex);
+ mutex_lock_nested(&inode->i_mutex, I_MUTEX_QUOTA);
while (towrite > 0) {
tocopy = sb->s_blocksize - offset < towrite ?
sb->s_blocksize - offset : towrite;
struct buffer_head *bh;
handle_t *handle = journal_current_handle();
- mutex_lock(&inode->i_mutex);
+ mutex_lock_nested(&inode->i_mutex, I_MUTEX_QUOTA);
while (towrite > 0) {
tocopy = sb->s_blocksize - offset < towrite ?
sb->s_blocksize - offset : towrite;
return rc;
}
-void jffs2_clear_acl(struct inode *inode)
+void jffs2_clear_acl(struct jffs2_inode_info *f)
{
- struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode);
-
if (f->i_acl_access && f->i_acl_access != JFFS2_ACL_NOT_CACHED) {
posix_acl_release(f->i_acl_access);
f->i_acl_access = JFFS2_ACL_NOT_CACHED;
extern int jffs2_permission(struct inode *, int, struct nameidata *);
extern int jffs2_acl_chmod(struct inode *);
extern int jffs2_init_acl(struct inode *, struct inode *);
-extern void jffs2_clear_acl(struct inode *);
+extern void jffs2_clear_acl(struct jffs2_inode_info *);
extern struct xattr_handler jffs2_acl_access_xattr_handler;
extern struct xattr_handler jffs2_acl_default_xattr_handler;
#define jffs2_permission NULL
#define jffs2_acl_chmod(inode) (0)
#define jffs2_init_acl(inode,dir) (0)
-#define jffs2_clear_acl(inode)
+#define jffs2_clear_acl(f)
#endif /* CONFIG_JFFS2_FS_POSIX_ACL */
kmem_cache_free(tmp_dnode_info_slab, x);
}
-struct jffs2_raw_node_ref *jffs2_alloc_refblock(void)
+static struct jffs2_raw_node_ref *jffs2_alloc_refblock(void)
{
struct jffs2_raw_node_ref *ret;
/* scan.c */
int jffs2_scan_medium(struct jffs2_sb_info *c);
void jffs2_rotate_lists(struct jffs2_sb_info *c);
-int jffs2_fill_scan_buf(struct jffs2_sb_info *c, void *buf,
- uint32_t ofs, uint32_t len);
struct jffs2_inode_cache *jffs2_scan_make_ino_cache(struct jffs2_sb_info *c, uint32_t ino);
int jffs2_scan_classify_jeb(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb);
int jffs2_scan_dirty_space(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, uint32_t size);
struct jffs2_full_dirent *fd, *fds;
int deleted;
+ jffs2_clear_acl(f);
jffs2_xattr_delete_inode(c, f->inocache);
down(&f->sem);
deleted = f->inocache && !f->inocache->nlink;
return ret;
}
-int jffs2_fill_scan_buf (struct jffs2_sb_info *c, void *buf,
- uint32_t ofs, uint32_t len)
+static int jffs2_fill_scan_buf(struct jffs2_sb_info *c, void *buf,
+ uint32_t ofs, uint32_t len)
{
int ret;
size_t retlen;
* is used to write xdatum to medium. xd->version will be incremented.
* create_xattr_datum(c, xprefix, xname, xvalue, xsize)
* is used to create new xdatum and write to medium.
- * delete_xattr_datum(c, xd)
- * is used to delete a xdatum. It marks xd JFFS2_XFLAGS_DEAD, and allows
- * GC to reclaim those physical nodes.
+ * unrefer_xattr_datum(c, xd)
+ * is used to delete an xdatum. When nobody refers to this xdatum any more,
+ * JFFS2_XFLAGS_DEAD is set on xd->flags and it is either chained to
+ * xattr_dead_list or released immediately.
+ * In the first case, the garbage collector releases it later.
* -------------------------------------------------- */
static uint32_t xattr_datum_hashkey(int xprefix, const char *xname, const char *xvalue, int xsize)
{
return xd;
}
-static void delete_xattr_datum(struct jffs2_sb_info *c, struct jffs2_xattr_datum *xd)
+static void unrefer_xattr_datum(struct jffs2_sb_info *c, struct jffs2_xattr_datum *xd)
{
/* must be called under down_write(xattr_sem) */
- BUG_ON(atomic_read(&xd->refcnt));
+ if (atomic_dec_and_lock(&xd->refcnt, &c->erase_completion_lock)) {
+ uint32_t xid = xd->xid, version = xd->version;
- unload_xattr_datum(c, xd);
- xd->flags |= JFFS2_XFLAGS_DEAD;
- spin_lock(&c->erase_completion_lock);
- if (xd->node == (void *)xd) {
- BUG_ON(!(xd->flags & JFFS2_XFLAGS_INVALID));
- jffs2_free_xattr_datum(xd);
- } else {
- list_add(&xd->xindex, &c->xattr_dead_list);
+ unload_xattr_datum(c, xd);
+ xd->flags |= JFFS2_XFLAGS_DEAD;
+ if (xd->node == (void *)xd) {
+ BUG_ON(!(xd->flags & JFFS2_XFLAGS_INVALID));
+ jffs2_free_xattr_datum(xd);
+ } else {
+ list_add(&xd->xindex, &c->xattr_dead_list);
+ }
+ spin_unlock(&c->erase_completion_lock);
+
+ dbg_xattr("xdatum(xid=%u, version=%u) was removed.\n", xid, version);
}
- spin_unlock(&c->erase_completion_lock);
- dbg_xattr("xdatum(xid=%u, version=%u) was removed.\n", xd->xid, xd->version);
}
/* -------- xref related functions ------------------
dbg_xattr("xref(ino=%u, xid=%u, xseqno=%u) was removed.\n",
ref->ino, ref->xid, ref->xseqno);
- if (atomic_dec_and_test(&xd->refcnt))
- delete_xattr_datum(c, xd);
+ unrefer_xattr_datum(c, xd);
}
void jffs2_xattr_delete_inode(struct jffs2_sb_info *c, struct jffs2_inode_cache *ic)
ref->next = c->xref_dead_list;
c->xref_dead_list = ref;
spin_unlock(&c->erase_completion_lock);
- if (atomic_dec_and_test(&xd->refcnt))
- delete_xattr_datum(c, xd);
+ unrefer_xattr_datum(c, xd);
} else {
ref->ic = ic;
ref->xd = xd;
down_write(&c->xattr_sem);
if (rc) {
JFFS2_WARNING("jffs2_reserve_space()=%d, request=%u\n", rc, request);
- if (atomic_dec_and_test(&xd->refcnt))
- delete_xattr_datum(c, xd);
+ unrefer_xattr_datum(c, xd);
up_write(&c->xattr_sem);
return rc;
}
ic->xref = ref;
}
rc = PTR_ERR(newref);
- if (atomic_dec_and_test(&xd->refcnt))
- delete_xattr_datum(c, xd);
+ unrefer_xattr_datum(c, xd);
} else if (ref) {
delete_xattr_ref(c, ref);
}
struct dentry *p;
if (p1 == p2) {
- mutex_lock(&p1->d_inode->i_mutex);
+ mutex_lock_nested(&p1->d_inode->i_mutex, I_MUTEX_PARENT);
return NULL;
}
for (p = p1; p->d_parent != p; p = p->d_parent) {
if (p->d_parent == p2) {
- mutex_lock(&p2->d_inode->i_mutex);
- mutex_lock(&p1->d_inode->i_mutex);
+ mutex_lock_nested(&p2->d_inode->i_mutex, I_MUTEX_PARENT);
+ mutex_lock_nested(&p1->d_inode->i_mutex, I_MUTEX_CHILD);
return p;
}
}
for (p = p2; p->d_parent != p; p = p->d_parent) {
if (p->d_parent == p1) {
- mutex_lock(&p1->d_inode->i_mutex);
- mutex_lock(&p2->d_inode->i_mutex);
+ mutex_lock_nested(&p1->d_inode->i_mutex, I_MUTEX_PARENT);
+ mutex_lock_nested(&p2->d_inode->i_mutex, I_MUTEX_CHILD);
return p;
}
}
- mutex_lock(&p1->d_inode->i_mutex);
- mutex_lock(&p2->d_inode->i_mutex);
+ mutex_lock_nested(&p1->d_inode->i_mutex, I_MUTEX_PARENT);
+ mutex_lock_nested(&p2->d_inode->i_mutex, I_MUTEX_CHILD);
return NULL;
}
{
struct dentry *dentry = ERR_PTR(-EEXIST);
- mutex_lock(&nd->dentry->d_inode->i_mutex);
+ mutex_lock_nested(&nd->dentry->d_inode->i_mutex, I_MUTEX_PARENT);
/*
* Yucky last component or no last component at all?
* (foo/., foo/.., /////)
error = -EBUSY;
goto exit1;
}
- mutex_lock(&nd.dentry->d_inode->i_mutex);
+ mutex_lock_nested(&nd.dentry->d_inode->i_mutex, I_MUTEX_PARENT);
dentry = lookup_hash(&nd);
error = PTR_ERR(dentry);
if (!IS_ERR(dentry)) {
error = -EISDIR;
if (nd.last_type != LAST_NORM)
goto exit1;
- mutex_lock(&nd.dentry->d_inode->i_mutex);
+ mutex_lock_nested(&nd.dentry->d_inode->i_mutex, I_MUTEX_PARENT);
dentry = lookup_hash(&nd);
error = PTR_ERR(dentry);
if (!IS_ERR(dentry)) {
#ifdef CONFIG_NFS_V4
extern struct file_system_type clone_nfs4_fs_type;
#endif
-#ifdef CONFIG_PROC_FS
+
extern struct rpc_stat nfs_rpcstat;
-#endif
+
extern int __init register_nfs_fs(void);
extern void __exit unregister_nfs_fs(void);
kmem_cache_free(ntfs_inode_cache, ni);
}
+/*
+ * The attribute runlist lock has separate locking rules from the
+ * normal runlist lock, so split the two lock-classes:
+ */
+static struct lock_class_key attr_list_rl_lock_class;
+
/**
* __ntfs_init_inode - initialize ntfs specific part of an inode
* @sb: super block of mounted volume
ni->attr_list_size = 0;
ni->attr_list = NULL;
ntfs_init_runlist(&ni->attr_list_rl);
+ lockdep_set_class(&ni->attr_list_rl.lock,
+ &attr_list_rl_lock_class);
ni->itype.index.bmp_ino = NULL;
ni->itype.index.block_size = 0;
ni->itype.index.vcn_size = 0;
ni->ext.base_ntfs_ino = NULL;
}
+/*
+ * Extent inodes get MFT-mapped in a nested way, while the base inode
+ * is still mapped. Teach this nesting to the lock validator by creating
+ * a separate class for the nested inodes' mrec_lock:
+ */
+static struct lock_class_key extent_inode_mrec_lock_key;
+
inline ntfs_inode *ntfs_new_extent_inode(struct super_block *sb,
unsigned long mft_no)
{
ntfs_debug("Entering.");
if (likely(ni != NULL)) {
__ntfs_init_inode(sb, ni);
+ lockdep_set_class(&ni->mrec_lock, &extent_inode_mrec_lock_key);
ni->mft_no = mft_no;
ni->type = AT_UNUSED;
ni->name = NULL;
return err;
}
+/*
+ * The MFT inode has special locking, so teach the lock validator
+ * about this by splitting off the locking rules of the MFT from
+ * the locking rules of other inodes. The MFT inode can never be
+ * accessed from the VFS side (or even internally), only by the
+ * map_mft functions.
+ */
+static struct lock_class_key mft_ni_runlist_lock_key, mft_ni_mrec_lock_key;
+
/**
* ntfs_read_inode_mount - special read_inode for mount time use only
* @vi: inode to read
ntfs_attr_put_search_ctx(ctx);
ntfs_debug("Done.");
ntfs_free(m);
+
+ /*
+ * Split the locking rules of the MFT inode from the
+ * locking rules of other inodes:
+ */
+ lockdep_set_class(&ni->runlist.lock, &mft_ni_runlist_lock_key);
+ lockdep_set_class(&ni->mrec_lock, &mft_ni_mrec_lock_key);
+
return 0;
em_put_err_out:
return FALSE;
}
+/*
+ * The lcn and mft bitmap inodes are NTFS-internal inodes with
+ * their own special locking rules:
+ */
+static struct lock_class_key
+ lcnbmp_runlist_lock_key, lcnbmp_mrec_lock_key,
+ mftbmp_runlist_lock_key, mftbmp_mrec_lock_key;
+
/**
* load_system_files - open the system files using normal functions
* @vol: ntfs super block describing device whose system files to load
ntfs_error(sb, "Failed to load $MFT/$BITMAP attribute.");
goto iput_mirr_err_out;
}
+ lockdep_set_class(&NTFS_I(vol->mftbmp_ino)->runlist.lock,
+ &mftbmp_runlist_lock_key);
+ lockdep_set_class(&NTFS_I(vol->mftbmp_ino)->mrec_lock,
+ &mftbmp_mrec_lock_key);
/* Read upcase table and setup @vol->upcase and @vol->upcase_len. */
if (!load_and_init_upcase(vol))
goto iput_mftbmp_err_out;
iput(vol->lcnbmp_ino);
goto bitmap_failed;
}
+ lockdep_set_class(&NTFS_I(vol->lcnbmp_ino)->runlist.lock,
+ &lcnbmp_runlist_lock_key);
+ lockdep_set_class(&NTFS_I(vol->lcnbmp_ino)->mrec_lock,
+ &lcnbmp_mrec_lock_key);
+
NInoSetSparseDisabled(NTFS_I(vol->lcnbmp_ino));
if ((vol->nr_clusters + 7) >> 3 > i_size_read(vol->lcnbmp_ino)) {
iput(vol->lcnbmp_ino);
struct inode *tmp_ino;
int blocksize, result;
+ /*
+ * We do a pretty difficult piece of bootstrap by reading the
+ * MFT (and other metadata) from disk into memory. We'll only
+ * release this metadata during umount, so the locking patterns
+ * observed during bootstrap do not count. So turn off the
+ * observation of locking patterns (strictly for this context
+ * only) while mounting NTFS. [The validator is still active
+ * otherwise, even for this context: it will for example record
+ * lock class registrations.]
+ */
+ lockdep_off();
ntfs_debug("Entering.");
#ifndef NTFS_RW
sb->s_flags |= MS_RDONLY;
if (!silent)
ntfs_error(sb, "Allocation of NTFS volume structure "
"failed. Aborting mount...");
+ lockdep_on();
return -ENOMEM;
}
/* Initialize ntfs_volume structure. */
mutex_unlock(&ntfs_lock);
sb->s_export_op = &ntfs_export_ops;
lock_kernel();
+ lockdep_on();
return 0;
}
ntfs_error(sb, "Failed to allocate root directory.");
sb->s_fs_info = NULL;
kfree(vol);
ntfs_debug("Failed, returning -EINVAL.");
+ lockdep_on();
return -EINVAL;
}
size_t towrite = len;
struct buffer_head tmp_bh, *bh;
- mutex_lock(&inode->i_mutex);
+ mutex_lock_nested(&inode->i_mutex, I_MUTEX_QUOTA);
while (towrite > 0) {
tocopy = sb->s_blocksize - offset < towrite ?
sb->s_blocksize - offset : towrite;
* Allocates and initializes a new &struct super_block. alloc_super()
* returns a pointer new superblock or %NULL if allocation had failed.
*/
-static struct super_block *alloc_super(void)
+static struct super_block *alloc_super(struct file_system_type *type)
{
struct super_block *s = kzalloc(sizeof(struct super_block), GFP_USER);
static struct super_operations default_op;
INIT_LIST_HEAD(&s->s_inodes);
init_rwsem(&s->s_umount);
mutex_init(&s->s_lock);
+ lockdep_set_class(&s->s_umount, &type->s_umount_key);
+ /*
+ * The locking rules for s_lock are up to the
+ * filesystem. For example ext3fs has different
+ * lock ordering than usbfs:
+ */
+ lockdep_set_class(&s->s_lock, &type->s_lock_key);
down_write(&s->s_umount);
s->s_count = S_BIAS;
atomic_set(&s->s_active, 1);
}
if (!s) {
spin_unlock(&sb_lock);
- s = alloc_super();
+ s = alloc_super(type);
if (!s)
return ERR_PTR(-ENOMEM);
goto retry;
size_t towrite = len;
struct buffer_head *bh;
- mutex_lock(&inode->i_mutex);
+ mutex_lock_nested(&inode->i_mutex, I_MUTEX_QUOTA);
while (towrite > 0) {
tocopy = sb->s_blocksize - offset < towrite ?
sb->s_blocksize - offset : towrite;
#define fd_disable_irq() disable_irq(FLOPPY_IRQ)
#define fd_cacheflush(addr,size) /* nothing */
#define fd_request_irq() request_irq(FLOPPY_IRQ, floppy_interrupt,\
- SA_INTERRUPT, "floppy", NULL)
+ IRQF_DISABLED, "floppy", NULL)
#define fd_free_irq() free_irq(FLOPPY_IRQ, NULL);
#ifdef CONFIG_PCI
#define RWSEM_ACTIVE_WRITE_BIAS (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)
spinlock_t wait_lock;
struct list_head wait_list;
-#if RWSEM_DEBUG
- int debug;
-#endif
};
-#if RWSEM_DEBUG
-#define __RWSEM_DEBUG_INIT , 0
-#else
-#define __RWSEM_DEBUG_INIT /* */
-#endif
-
#define __RWSEM_INITIALIZER(name) \
{ RWSEM_UNLOCKED_VALUE, SPIN_LOCK_UNLOCKED, \
- LIST_HEAD_INIT((name).wait_list) __RWSEM_DEBUG_INIT }
+ LIST_HEAD_INIT((name).wait_list) }
#define DECLARE_RWSEM(name) \
struct rw_semaphore name = __RWSEM_INITIALIZER(name)
sem->count = RWSEM_UNLOCKED_VALUE;
spin_lock_init(&sem->wait_lock);
INIT_LIST_HEAD(&sem->wait_list);
-#if RWSEM_DEBUG
- sem->debug = 0;
-#endif
}
static inline void __down_read(struct rw_semaphore *sem)
* SA_FLAGS values:
*
* SA_ONSTACK indicates that a registered stack_t will be used.
- * SA_INTERRUPT is a no-op, but left due to historical reasons. Use the
* SA_RESTART flag to get restarting signals (which were the default long ago)
* SA_NOCLDSTOP flag to turn off SIGCHLD when children stop.
* SA_RESETHAND clears the handler when the signal is delivered.
#define SA_ONESHOT SA_RESETHAND
#define SA_NOMASK SA_NODEFER
-#define SA_INTERRUPT 0x20000000 /* dummy -- ignored */
/*
* sigaltstack controls
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-omap/board-fsample.h
+ *
+ * Board-specific goodies for TI F-Sample.
+ *
+ * Copyright (C) 2006 Google, Inc.
+ * Author: Brian Swetland <swetland@google.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __ASM_ARCH_OMAP_FSAMPLE_H
+#define __ASM_ARCH_OMAP_FSAMPLE_H
+
+/* fsample is pretty close to p2-sample */
+#include <asm/arch/board-perseus2.h>
+
+#define fsample_cpld_read(reg) __raw_readb(reg)
+#define fsample_cpld_write(val, reg) __raw_writeb(val, reg)
+
+#define FSAMPLE_CPLD_BASE 0xE8100000
+#define FSAMPLE_CPLD_SIZE SZ_4K
+#define FSAMPLE_CPLD_START 0x05080000
+
+#define FSAMPLE_CPLD_REG_A (FSAMPLE_CPLD_BASE + 0x00)
+#define FSAMPLE_CPLD_SWITCH (FSAMPLE_CPLD_BASE + 0x02)
+#define FSAMPLE_CPLD_UART (FSAMPLE_CPLD_BASE + 0x02)
+#define FSAMPLE_CPLD_REG_B (FSAMPLE_CPLD_BASE + 0x04)
+#define FSAMPLE_CPLD_VERSION (FSAMPLE_CPLD_BASE + 0x06)
+#define FSAMPLE_CPLD_SET_CLR (FSAMPLE_CPLD_BASE + 0x06)
+
+#define FSAMPLE_CPLD_BIT_BT_RESET 0
+#define FSAMPLE_CPLD_BIT_LCD_RESET 1
+#define FSAMPLE_CPLD_BIT_CAM_PWDN 2
+#define FSAMPLE_CPLD_BIT_CHARGER_ENABLE 3
+#define FSAMPLE_CPLD_BIT_SD_MMC_EN 4
+#define FSAMPLE_CPLD_BIT_aGPS_PWREN 5
+#define FSAMPLE_CPLD_BIT_BACKLIGHT 6
+#define FSAMPLE_CPLD_BIT_aGPS_EN_RESET 7
+#define FSAMPLE_CPLD_BIT_aGPS_SLEEPx_N 8
+#define FSAMPLE_CPLD_BIT_OTG_RESET 9
+
+#define fsample_cpld_set(bit) \
+ fsample_cpld_write((((bit) & 15) << 4) | 0x0f, FSAMPLE_CPLD_SET_CLR)
+
+#define fsample_cpld_clear(bit) \
+ fsample_cpld_write(0xf0 | ((bit) & 15), FSAMPLE_CPLD_SET_CLR)
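+
+/*
+ * Illustrative use (not taken from this patch; whether this actually drives
+ * the backlight depends on the board wiring):
+ *
+ *	fsample_cpld_set(FSAMPLE_CPLD_BIT_BACKLIGHT);
+ *	fsample_cpld_clear(FSAMPLE_CPLD_BIT_BACKLIGHT);
+ */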
+
+#endif
#define OMAP_TAG_UART 0x4f07
#define OMAP_TAG_FBMEM 0x4f08
#define OMAP_TAG_STI_CONSOLE 0x4f09
+#define OMAP_TAG_CAMERA_SENSOR 0x4f0a
#define OMAP_TAG_BOOT_REASON 0x4f80
#define OMAP_TAG_FLASH_PART 0x4f81
u8 channel;
};
+struct omap_camera_sensor_config {
+ u16 reset_gpio;
+ int (*power_on)(void * data);
+ int (*power_off)(void * data);
+};
+
struct omap_usb_config {
/* Configure drivers according to the connectors on your board:
* - "A" connector (rectagular)
/* DMA channels for 24xx */
#define OMAP24XX_DMA_NO_DEVICE 0
#define OMAP24XX_DMA_XTI_DMA 1 /* S_DMA_0 */
-#define OMAP24XX_DMA_EXT_NDMA_REQ0 2 /* S_DMA_1 */
-#define OMAP24XX_DMA_EXT_NDMA_REQ1 3 /* S_DMA_2 */
+#define OMAP24XX_DMA_EXT_DMAREQ0 2 /* S_DMA_1 */
+#define OMAP24XX_DMA_EXT_DMAREQ1 3 /* S_DMA_2 */
#define OMAP24XX_DMA_GPMC 4 /* S_DMA_3 */
#define OMAP24XX_DMA_GFX 5 /* S_DMA_4 */
#define OMAP24XX_DMA_DSS 6 /* S_DMA_5 */
#define OMAP24XX_DMA_DES_TX 11 /* S_DMA_10 */
#define OMAP24XX_DMA_DES_RX 12 /* S_DMA_11 */
#define OMAP24XX_DMA_SHA1MD5_RX 13 /* S_DMA_12 */
-
+#define OMAP24XX_DMA_EXT_DMAREQ2 14 /* S_DMA_13 */
+#define OMAP24XX_DMA_EXT_DMAREQ3 15 /* S_DMA_14 */
+#define OMAP24XX_DMA_EXT_DMAREQ4 16 /* S_DMA_15 */
#define OMAP24XX_DMA_EAC_AC_RD 17 /* S_DMA_16 */
#define OMAP24XX_DMA_EAC_AC_WR 18 /* S_DMA_17 */
#define OMAP24XX_DMA_EAC_MD_UL_RD 19 /* S_DMA_18 */
#define OMAP24XX_DMA_MMC1_TX 61 /* SDMA_60 */
#define OMAP24XX_DMA_MMC1_RX 62 /* SDMA_61 */
#define OMAP24XX_DMA_MS 63 /* SDMA_62 */
+#define OMAP24XX_DMA_EXT_DMAREQ5 64 /* S_DMA_63 */
/*----------------------------------------------------------------------------*/
#define OMAP1610_DMA_LCD_LCH_CTRL (OMAP1610_DMA_LCD_BASE + 0xea)
#define OMAP1610_DMA_LCD_SRC_FI_B1_U (OMAP1610_DMA_LCD_BASE + 0xf4)
-#define OMAP_DMA_TOUT_IRQ (1 << 0) /* Only on omap1 */
+#define OMAP1_DMA_TOUT_IRQ (1 << 0)
#define OMAP_DMA_DROP_IRQ (1 << 1)
#define OMAP_DMA_HALF_IRQ (1 << 2)
#define OMAP_DMA_FRAME_IRQ (1 << 3)
OMAP_LCD_DMA_B2_BOTTOM
};
-/* REVISIT: Check if BURST_4 is really 1 (or 2) */
enum omap_dma_burst_mode {
OMAP_DMA_DATA_BURST_DIS = 0,
OMAP_DMA_DATA_BURST_4,
- OMAP_DMA_DATA_BURST_8
+ OMAP_DMA_DATA_BURST_8,
+ OMAP_DMA_DATA_BURST_16,
};
enum omap_dma_color_mode {
*
* Copyright (C) 2005 Nokia Corporation
* Author: Lauri Leukkunen <lauri.leukkunen@nokia.com>
+ * PWM and clock framework support by Timo Teras.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* 675 Mass Ave, Cambridge, MA 02139, USA.
*/
-#ifndef __ASM_ARCH_TIMER_H
-#define __ASM_ARCH_TIMER_H
-
-#include <linux/list.h>
-
-#define OMAP_TIMER_SRC_ARMXOR 0x00
-#define OMAP_TIMER_SRC_32_KHZ 0x01
-#define OMAP_TIMER_SRC_EXT_CLK 0x02
-
-/* timer control reg bits */
-#define OMAP_TIMER_CTRL_CAPTMODE (1 << 13)
-#define OMAP_TIMER_CTRL_PT (1 << 12)
-#define OMAP_TIMER_CTRL_TRG_OVERFLOW (0x1 << 10)
-#define OMAP_TIMER_CTRL_TRG_OFANDMATCH (0x2 << 10)
-#define OMAP_TIMER_CTRL_TCM_LOWTOHIGH (0x1 << 8)
-#define OMAP_TIMER_CTRL_TCM_HIGHTOLOW (0x2 << 8)
-#define OMAP_TIMER_CTRL_TCM_BOTHEDGES (0x3 << 8)
-#define OMAP_TIMER_CTRL_SCPWM (1 << 7)
-#define OMAP_TIMER_CTRL_CE (1 << 6) /* compare enable */
-#define OMAP_TIMER_CTRL_PRE (1 << 5) /* prescaler enable */
-#define OMAP_TIMER_CTRL_PTV_SHIFT 2 /* how much to shift the prescaler value */
-#define OMAP_TIMER_CTRL_AR (1 << 1) /* auto-reload enable */
-#define OMAP_TIMER_CTRL_ST (1 << 0) /* start timer */
+#ifndef __ASM_ARCH_DMTIMER_H
+#define __ASM_ARCH_DMTIMER_H
-/* timer interrupt enable bits */
-#define OMAP_TIMER_INT_CAPTURE (1 << 2)
-#define OMAP_TIMER_INT_OVERFLOW (1 << 1)
-#define OMAP_TIMER_INT_MATCH (1 << 0)
+/* clock sources */
+#define OMAP_TIMER_SRC_SYS_CLK 0x00
+#define OMAP_TIMER_SRC_32_KHZ 0x01
+#define OMAP_TIMER_SRC_EXT_CLK 0x02
+/* timer interrupt enable bits */
+#define OMAP_TIMER_INT_CAPTURE (1 << 2)
+#define OMAP_TIMER_INT_OVERFLOW (1 << 1)
+#define OMAP_TIMER_INT_MATCH (1 << 0)
-struct omap_dm_timer {
- struct list_head timer_list;
+/* trigger types */
+#define OMAP_TIMER_TRIGGER_NONE 0x00
+#define OMAP_TIMER_TRIGGER_OVERFLOW 0x01
+#define OMAP_TIMER_TRIGGER_OVERFLOW_AND_COMPARE 0x02
- u32 base;
- unsigned int irq;
-};
+struct omap_dm_timer;
+struct clk;
-u32 omap_dm_timer_read_reg(struct omap_dm_timer *timer, int reg);
-void omap_dm_timer_write_reg(struct omap_dm_timer *timer, int reg, u32 value);
+int omap_dm_timer_init(void);
-struct omap_dm_timer * omap_dm_timer_request(void);
+struct omap_dm_timer *omap_dm_timer_request(void);
+struct omap_dm_timer *omap_dm_timer_request_specific(int timer_id);
void omap_dm_timer_free(struct omap_dm_timer *timer);
-void omap_dm_timer_set_source(struct omap_dm_timer *timer, int source);
-void omap_dm_timer_set_int_enable(struct omap_dm_timer *timer, unsigned int value);
-void omap_dm_timer_set_trigger(struct omap_dm_timer *timer, unsigned int value);
-void omap_dm_timer_enable_compare(struct omap_dm_timer *timer);
-void omap_dm_timer_enable_autoreload(struct omap_dm_timer *timer);
+int omap_dm_timer_get_irq(struct omap_dm_timer *timer);
+
+u32 omap_dm_timer_modify_idlect_mask(u32 inputmask);
+struct clk *omap_dm_timer_get_fclk(struct omap_dm_timer *timer);
void omap_dm_timer_trigger(struct omap_dm_timer *timer);
void omap_dm_timer_start(struct omap_dm_timer *timer);
void omap_dm_timer_stop(struct omap_dm_timer *timer);
-void omap_dm_timer_set_load(struct omap_dm_timer *timer, unsigned int load);
-void omap_dm_timer_set_match(struct omap_dm_timer *timer, unsigned int match);
+void omap_dm_timer_set_source(struct omap_dm_timer *timer, int source);
+void omap_dm_timer_set_load(struct omap_dm_timer *timer, int autoreload, unsigned int value);
+void omap_dm_timer_set_match(struct omap_dm_timer *timer, int enable, unsigned int match);
+void omap_dm_timer_set_pwm(struct omap_dm_timer *timer, int def_on, int toggle, int trigger);
+void omap_dm_timer_set_prescaler(struct omap_dm_timer *timer, int prescaler);
+
+void omap_dm_timer_set_int_enable(struct omap_dm_timer *timer, unsigned int value);
unsigned int omap_dm_timer_read_status(struct omap_dm_timer *timer);
void omap_dm_timer_write_status(struct omap_dm_timer *timer, unsigned int value);
-
unsigned int omap_dm_timer_read_counter(struct omap_dm_timer *timer);
-void omap_dm_timer_reset_counter(struct omap_dm_timer *timer);
+void omap_dm_timer_write_counter(struct omap_dm_timer *timer, unsigned int value);
int omap_dm_timers_active(void);
-u32 omap_dm_timer_modify_idlect_mask(u32 inputmask);
-#endif /* __ASM_ARCH_TIMER_H */
+
+#endif /* __ASM_ARCH_DMTIMER_H */
--- /dev/null
+/*
+ * General-Purpose Memory Controller for OMAP2
+ *
+ * Copyright (C) 2005-2006 Nokia Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __OMAP2_GPMC_H
+#define __OMAP2_GPMC_H
+
+#define GPMC_CS_CONFIG1 0x00
+#define GPMC_CS_CONFIG2 0x04
+#define GPMC_CS_CONFIG3 0x08
+#define GPMC_CS_CONFIG4 0x0c
+#define GPMC_CS_CONFIG5 0x10
+#define GPMC_CS_CONFIG6 0x14
+#define GPMC_CS_CONFIG7 0x18
+#define GPMC_CS_NAND_COMMAND 0x1c
+#define GPMC_CS_NAND_ADDRESS 0x20
+#define GPMC_CS_NAND_DATA 0x24
+
+#define GPMC_CONFIG1_WRAPBURST_SUPP (1 << 31)
+#define GPMC_CONFIG1_READMULTIPLE_SUPP (1 << 20)
+#define GPMC_CONFIG1_READTYPE_ASYNC (0 << 29)
+#define GPMC_CONFIG1_READTYPE_SYNC (1 << 29)
+#define GPMC_CONFIG1_WRITETYPE_ASYNC (0 << 27)
+#define GPMC_CONFIG1_WRITETYPE_SYNC (1 << 27)
+#define GPMC_CONFIG1_CLKACTIVATIONTIME(val) ((val & 3) << 25)
+#define GPMC_CONFIG1_PAGE_LEN(val) ((val & 3) << 23)
+#define GPMC_CONFIG1_WAIT_READ_MON (1 << 22)
+#define GPMC_CONFIG1_WAIT_WRITE_MON (1 << 21)
+#define GPMC_CONFIG1_WAIT_MON_IIME(val) ((val & 3) << 18)
+#define GPMC_CONFIG1_WAIT_PIN_SEL(val) ((val & 3) << 16)
+#define GPMC_CONFIG1_DEVICESIZE(val) ((val & 3) << 12)
+#define GPMC_CONFIG1_DEVICESIZE_16 GPMC_CONFIG1_DEVICESIZE(1)
+#define GPMC_CONFIG1_DEVICETYPE(val) ((val & 3) << 10)
+#define GPMC_CONFIG1_DEVICETYPE_NOR GPMC_CONFIG1_DEVICETYPE(0)
+#define GPMC_CONFIG1_DEVICETYPE_NAND GPMC_CONFIG1_DEVICETYPE(1)
+#define GPMC_CONFIG1_MUXADDDATA (1 << 9)
+#define GPMC_CONFIG1_TIME_PARA_GRAN (1 << 4)
+#define GPMC_CONFIG1_FCLK_DIV(val) (val & 3)
+#define GPMC_CONFIG1_FCLK_DIV2 (GPMC_CONFIG1_FCLK_DIV(1))
+#define GPMC_CONFIG1_FCLK_DIV3 (GPMC_CONFIG1_FCLK_DIV(2))
+#define GPMC_CONFIG1_FCLK_DIV4 (GPMC_CONFIG1_FCLK_DIV(3))
+
+/*
+ * Note that all values in this struct are in nanoseconds, while
+ * the register values are in gpmc_fck cycles.
+ */
+struct gpmc_timings {
+ /* Minimum clock period for synchronous mode */
+ u16 sync_clk;
+
+ /* Chip-select signal timings corresponding to GPMC_CS_CONFIG2 */
+ u16 cs_on; /* Assertion time */
+ u16 cs_rd_off; /* Read deassertion time */
+ u16 cs_wr_off; /* Write deassertion time */
+
+ /* ADV signal timings corresponding to GPMC_CONFIG3 */
+ u16 adv_on; /* Assertion time */
+ u16 adv_rd_off; /* Read deassertion time */
+ u16 adv_wr_off; /* Write deassertion time */
+
+ /* WE signals timings corresponding to GPMC_CONFIG4 */
+ u16 we_on; /* WE assertion time */
+ u16 we_off; /* WE deassertion time */
+
+ /* OE signals timings corresponding to GPMC_CONFIG4 */
+ u16 oe_on; /* OE assertion time */
+ u16 oe_off; /* OE deassertion time */
+
+ /* Access time and cycle time timings corresponding to GPMC_CONFIG5 */
+ u16 page_burst_access; /* Multiple access word delay */
+ u16 access; /* Start-cycle to first data valid delay */
+ u16 rd_cycle; /* Total read cycle time */
+ u16 wr_cycle; /* Total write cycle time */
+};
+
+extern unsigned int gpmc_ns_to_ticks(unsigned int time_ns);
+
+extern void gpmc_cs_write_reg(int cs, int idx, u32 val);
+extern u32 gpmc_cs_read_reg(int cs, int idx);
+extern int gpmc_cs_calc_divider(int cs, unsigned int sync_clk);
+extern int gpmc_cs_set_timings(int cs, const struct gpmc_timings *t);
+extern unsigned long gpmc_cs_get_base_addr(int cs);
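+
+/*
+ * Illustrative sketch (not taken from this patch): a board file would be
+ * expected to fill a gpmc_timings structure in nanoseconds and apply it to a
+ * chip-select with gpmc_cs_set_timings(); cs and the numbers below are
+ * placeholders, not timings for any real device.
+ *
+ *	struct gpmc_timings t = { 0 };
+ *
+ *	t.sync_clk  = 0;
+ *	t.cs_on     = 0;
+ *	t.cs_rd_off = 120;
+ *	t.cs_wr_off = 120;
+ *	t.rd_cycle  = 150;
+ *	t.wr_cycle  = 150;
+ *
+ *	err = gpmc_cs_set_timings(cs, &t);
+ */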
+
+
+#endif
#include "board-perseus2.h"
#endif
+#ifdef CONFIG_MACH_OMAP_FSAMPLE
+#include "board-fsample.h"
+#endif
+
#ifdef CONFIG_MACH_OMAP_H3
#include "board-h3.h"
#endif
#define INT_24XX_GPIO_BANK2 30
#define INT_24XX_GPIO_BANK3 31
#define INT_24XX_GPIO_BANK4 32
+#define INT_24XX_GPTIMER1 37
+#define INT_24XX_GPTIMER2 38
+#define INT_24XX_GPTIMER3 39
+#define INT_24XX_GPTIMER4 40
+#define INT_24XX_GPTIMER5 41
+#define INT_24XX_GPTIMER6 42
+#define INT_24XX_GPTIMER7 43
+#define INT_24XX_GPTIMER8 44
+#define INT_24XX_GPTIMER9 45
+#define INT_24XX_GPTIMER10 46
+#define INT_24XX_GPTIMER11 47
+#define INT_24XX_GPTIMER12 48
#define INT_24XX_MCBSP1_IRQ_TX 59
#define INT_24XX_MCBSP1_IRQ_RX 60
#define INT_24XX_MCBSP2_IRQ_TX 62
#define INT_24XX_MCBSP2_IRQ_RX 63
+#define INT_24XX_UART1_IRQ 72
+#define INT_24XX_UART2_IRQ 73
#define INT_24XX_UART3_IRQ 74
/* Max. 128 level 2 IRQs (OMAP1610), 192 GPIOs (OMAP730) and
/* 24xx clock */
W14_24XX_SYS_CLKOUT,
+ /* 24xx GPMC wait pin monitoring */
+ L3_GPMC_WAIT0,
+ N7_GPMC_WAIT1,
+ M1_GPMC_WAIT2,
+ P1_GPMC_WAIT3,
+
/* 242X McBSP */
Y15_24XX_MCBSP2_CLKX,
R14_24XX_MCBSP2_FSX,
M15_24XX_GPIO92,
V14_24XX_GPIO117,
+ /* 242x DBG GPIO */
+ V4_242X_GPIO49,
+ W2_242X_GPIO50,
+ U4_242X_GPIO51,
+ V3_242X_GPIO52,
+ V2_242X_GPIO53,
+ V6_242X_GPIO53,
+ T4_242X_GPIO54,
+ Y4_242X_GPIO54,
+ T3_242X_GPIO55,
+ U2_242X_GPIO56,
+
+ /* 24xx external DMA requests */
+ AA10_242X_DMAREQ0,
+ AA6_242X_DMAREQ1,
+ E4_242X_DMAREQ2,
+ G4_242X_DMAREQ3,
+ D3_242X_DMAREQ4,
+ E3_242X_DMAREQ5,
+
P20_24XX_TSC_IRQ,
/* UART3 */
OMAP24XX_SLEEP_SAVE_INTC_MIR0,
OMAP24XX_SLEEP_SAVE_INTC_MIR1,
OMAP24XX_SLEEP_SAVE_INTC_MIR2,
+
+ OMAP24XX_SLEEP_SAVE_CM_CLKSTCTRL_MPU,
+ OMAP24XX_SLEEP_SAVE_CM_CLKSTCTRL_CORE,
+ OMAP24XX_SLEEP_SAVE_CM_CLKSTCTRL_GFX,
+ OMAP24XX_SLEEP_SAVE_CM_CLKSTCTRL_DSP,
+ OMAP24XX_SLEEP_SAVE_CM_CLKSTCTRL_MDM,
+
+ OMAP24XX_SLEEP_SAVE_PM_PWSTCTRL_MPU,
+ OMAP24XX_SLEEP_SAVE_PM_PWSTCTRL_CORE,
+ OMAP24XX_SLEEP_SAVE_PM_PWSTCTRL_GFX,
+ OMAP24XX_SLEEP_SAVE_PM_PWSTCTRL_DSP,
+ OMAP24XX_SLEEP_SAVE_PM_PWSTCTRL_MDM,
+
+ OMAP24XX_SLEEP_SAVE_CM_IDLEST1_CORE,
+ OMAP24XX_SLEEP_SAVE_CM_IDLEST2_CORE,
+ OMAP24XX_SLEEP_SAVE_CM_IDLEST3_CORE,
+ OMAP24XX_SLEEP_SAVE_CM_IDLEST4_CORE,
+ OMAP24XX_SLEEP_SAVE_CM_IDLEST_GFX,
+ OMAP24XX_SLEEP_SAVE_CM_IDLEST_WKUP,
+ OMAP24XX_SLEEP_SAVE_CM_IDLEST_CKGEN,
+ OMAP24XX_SLEEP_SAVE_CM_IDLEST_DSP,
+ OMAP24XX_SLEEP_SAVE_CM_IDLEST_MDM,
+
+ OMAP24XX_SLEEP_SAVE_CM_AUTOIDLE1_CORE,
+ OMAP24XX_SLEEP_SAVE_CM_AUTOIDLE2_CORE,
+ OMAP24XX_SLEEP_SAVE_CM_AUTOIDLE3_CORE,
+ OMAP24XX_SLEEP_SAVE_CM_AUTOIDLE4_CORE,
+ OMAP24XX_SLEEP_SAVE_CM_AUTOIDLE_WKUP,
+ OMAP24XX_SLEEP_SAVE_CM_AUTOIDLE_PLL,
+ OMAP24XX_SLEEP_SAVE_CM_AUTOIDLE_DSP,
+ OMAP24XX_SLEEP_SAVE_CM_AUTOIDLE_MDM,
+
OMAP24XX_SLEEP_SAVE_CM_FCLKEN1_CORE,
OMAP24XX_SLEEP_SAVE_CM_FCLKEN2_CORE,
OMAP24XX_SLEEP_SAVE_CM_ICLKEN1_CORE,
OMAP24XX_SLEEP_SAVE_CM_ICLKEN2_CORE,
+ OMAP24XX_SLEEP_SAVE_CM_ICLKEN3_CORE,
OMAP24XX_SLEEP_SAVE_CM_ICLKEN4_CORE,
OMAP24XX_SLEEP_SAVE_GPIO1_IRQENABLE1,
OMAP24XX_SLEEP_SAVE_GPIO2_IRQENABLE1,
#define GPIO84_NSRXD 84 /* NSSP receive */
#define GPIO85_nPCE_1 85 /* Card Enable for Card Space (PXA27x) */
#define GPIO92_MMCDAT0 92 /* MMC DAT0 (PXA27x) */
+#define GPIO102_nPCE_1 102 /* PCMCIA (PXA27x) */
#define GPIO109_MMCDAT1 109 /* MMC DAT1 (PXA27x) */
#define GPIO110_MMCDAT2 110 /* MMC DAT2 (PXA27x) */
#define GPIO110_MMCCS0 110 /* MMC Chip Select 0 (PXA27x) */
#define GPIO84_NSSP_RX (84 | GPIO_ALT_FN_2_IN)
#define GPIO85_nPCE_1_MD (85 | GPIO_ALT_FN_1_OUT)
#define GPIO92_MMCDAT0_MD (92 | GPIO_ALT_FN_1_OUT)
+#define GPIO102_nPCE_1_MD (102 | GPIO_ALT_FN_1_OUT)
#define GPIO104_pSKTSEL_MD (104 | GPIO_ALT_FN_1_OUT)
#define GPIO109_MMCDAT1_MD (109 | GPIO_ALT_FN_1_OUT)
#define GPIO110_MMCDAT2_MD (110 | GPIO_ALT_FN_1_OUT)
--- /dev/null
+/************************************************************************
+ * Include file for TRIZEPS4 SoM and ConXS eval-board
+ * Copyright (c) Jürgen Schindele
+ * 2006
+ ************************************************************************/
+
+/*
+ * Includes/Defines
+ */
+#ifndef _TRIPEPS4_H_
+#define _TRIPEPS4_H_
+
+/* physical memory regions */
+#define TRIZEPS4_FLASH_PHYS (PXA_CS0_PHYS) /* Flash region */
+#define TRIZEPS4_DISK_PHYS (PXA_CS1_PHYS) /* Disk On Chip region */
+#define TRIZEPS4_ETH_PHYS (PXA_CS2_PHYS) /* Ethernet DM9000 region */
+#define TRIZEPS4_PIC_PHYS (PXA_CS3_PHYS) /* Logic chip on ConXS-Board */
+#define TRIZEPS4_SDRAM_BASE 0xa0000000 /* SDRAM region */
+
+#define TRIZEPS4_CFSR_PHYS (PXA_CS3_PHYS) /* Logic chip on ConXS-Board CSFR register */
+#define TRIZEPS4_BOCR_PHYS (PXA_CS3_PHYS+0x02000000) /* Logic chip on ConXS-Board BOCR register */
+#define TRIZEPS4_IRCR_PHYS (PXA_CS3_PHYS+0x02400000) /* Logic chip on ConXS-Board IRCR register */
+#define TRIZEPS4_UPSR_PHYS (PXA_CS3_PHYS+0x02800000) /* Logic chip on ConXS-Board UPSR register */
+#define TRIZEPS4_DICR_PHYS (PXA_CS3_PHYS+0x03800000) /* Logic chip on ConXS-Board DICR register */
+
+/* virtual memory regions */
+#define TRIZEPS4_DISK_VIRT 0xF0000000 /* Disk On Chip region */
+
+#define TRIZEPS4_PIC_VIRT 0xF0100000 /* not used */
+#define TRIZEPS4_CFSR_VIRT 0xF0100000
+#define TRIZEPS4_BOCR_VIRT 0xF0200000
+#define TRIZEPS4_DICR_VIRT 0xF0300000
+#define TRIZEPS4_IRCR_VIRT 0xF0400000
+#define TRIZEPS4_UPSR_VIRT 0xF0500000
+
+/* size of flash */
+#define TRIZEPS4_FLASH_SIZE 0x02000000 /* Flash size 32 MB */
+
+/* Ethernet Controller Davicom DM9000 */
+#define GPIO_DM9000 101
+#define TRIZEPS4_ETH_IRQ IRQ_GPIO(GPIO_DM9000)
+
+/* UCB1400 audio / TS-controller */
+#define GPIO_UCB1400 1
+#define TRIZEPS4_UCB1400_IRQ IRQ_GPIO(GPIO_UCB1400)
+
+/* PCMCIA socket Compact Flash */
+#define GPIO_PCD 11 /* PCMCIA Card Detect */
+#define TRIZEPS4_CD_IRQ IRQ_GPIO(GPIO_PCD)
+#define GPIO_PRDY 13 /* READY / nINT */
+#define TRIZEPS4_READY_NINT IRQ_GPIO(GPIO_PRDY)
+
+/* MMC socket */
+#define GPIO_MMC_DET 12
+#define TRIZEPS4_MMC_IRQ IRQ_GPIO(GPIO_MMC_DET)
+
+/* LEDS using tx2 / rx2 */
+#define GPIO_SYS_BUSY_LED 46
+#define GPIO_HEARTBEAT_LED 47
+
+/* Off-module PIC on ConXS board */
+#define GPIO_PIC 0
+#define TRIZEPS4_PIC_IRQ IRQ_GPIO(GPIO_PIC)
+
+#define CFSR_P2V(x) ((x) - TRIZEPS4_CFSR_PHYS + TRIZEPS4_CFSR_VIRT)
+#define CFSR_V2P(x) ((x) - TRIZEPS4_CFSR_VIRT + TRIZEPS4_CFSR_PHYS)
+
+#define BCR_P2V(x) ((x) - TRIZEPS4_BOCR_PHYS + TRIZEPS4_BOCR_VIRT)
+#define BCR_V2P(x) ((x) - TRIZEPS4_BOCR_VIRT + TRIZEPS4_BOCR_PHYS)
+
+#define DCR_P2V(x) ((x) - TRIZEPS4_DICR_PHYS + TRIZEPS4_DICR_VIRT)
+#define DCR_V2P(x) ((x) - TRIZEPS4_DICR_VIRT + TRIZEPS4_DICR_PHYS)
+
+#ifndef __ASSEMBLY__
+#define ConXS_CFSR (*((volatile unsigned short *)CFSR_P2V(0x0C000000)))
+#define ConXS_BCR (*((volatile unsigned short *)BCR_P2V(0x0E000000)))
+#define ConXS_DCR (*((volatile unsigned short *)DCR_P2V(0x0F800000)))
+#else
+#define ConXS_CFSR CFSR_P2V(0x0C000000)
+#define ConXS_BCR BCR_P2V(0x0E000000)
+#define ConXS_DCR DCR_P2V(0x0F800000)
+#endif
+
+#define ConXS_CFSR_BVD_MASK 0x0003
+#define ConXS_CFSR_BVD1 (1 << 0)
+#define ConXS_CFSR_BVD2 (1 << 1)
+#define ConXS_CFSR_VS_MASK 0x000C
+#define ConXS_CFSR_VS1 (1 << 2)
+#define ConXS_CFSR_VS2 (1 << 3)
+#define ConXS_CFSR_VS_5V (0x3 << 2)
+#define ConXS_CFSR_VS_3V3 0x0
+
+#define ConXS_BCR_S0_POW_EN0 (1 << 0)
+#define ConXS_BCR_S0_POW_EN1 (1 << 1)
+#define ConXS_BCR_L_DISP (1 << 4)
+#define ConXS_BCR_CF_BUF_EN (1 << 5)
+#define ConXS_BCR_CF_RESET (1 << 7)
+#define ConXS_BCR_S0_VCC_3V3 0x1
+#define ConXS_BCR_S0_VCC_5V0 0x2
+#define ConXS_BCR_S0_VPP_12V 0x4
+#define ConXS_BCR_S0_VPP_3V3 0x8
+
+#define ConXS_IRCR_MODE (1 << 0)
+#define ConXS_IRCR_SD (1 << 1)
+
+#endif /* _TRIPEPS4_H_ */
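For illustration only (not in the patch): the ConXS_* accessors above are volatile 16-bit accesses to the statically mapped logic chip, so board code could drive the CompactFlash buffer/reset bits roughly like this. The helper and its sequencing are assumptions, not code from the board support.

/* hypothetical helper -- illustration only */
static void example_conxs_cf_power_on(void)
{
	unsigned short bcr = ConXS_BCR;

	/* enable the CF buffers while holding the card in reset */
	ConXS_BCR = bcr | ConXS_BCR_CF_BUF_EN | ConXS_BCR_CF_RESET;

	/* ... delay as required by the hardware, then release reset */
	ConXS_BCR = (bcr | ConXS_BCR_CF_BUF_EN) & ~ConXS_BCR_CF_RESET;
}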
--- /dev/null
+#ifndef _ASMARM_DYNTICK_H
+#define _ASMARM_DYNTICK_H
+
+#include <asm/mach/time.h>
+
+#endif /* _ASMARM_DYNTICK_H */
#define fd_inb(port) inb((port))
#define fd_request_irq() request_irq(IRQ_FLOPPYDISK,floppy_interrupt,\
- SA_INTERRUPT,"floppy",NULL)
+ IRQF_DISABLED,"floppy",NULL)
#define fd_free_irq() free_irq(IRQ_FLOPPYDISK,NULL)
#define fd_disable_irq() disable_irq(IRQ_FLOPPYDISK)
#define fd_enable_irq() enable_irq(IRQ_FLOPPYDISK)
--- /dev/null
+/*
+ * Nothing to see here yet
+ */
+#ifndef _ARCH_ARM_HW_IRQ_H
+#define _ARCH_ARM_HW_IRQ_H
+
+#include <asm/mach/irq.h>
+
+#if defined(CONFIG_NO_IDLE_HZ)
+# include <asm/dyntick.h>
+# define handle_dynamic_tick(action) \
+ if (!(action->flags & IRQF_TIMER) && system_timer->dyn_tick) { \
+ write_seqlock(&xtime_lock); \
+ if (system_timer->dyn_tick->state & DYN_TICK_ENABLED) \
+ system_timer->dyn_tick->handler(irq, 0, regs); \
+ write_sequnlock(&xtime_lock); \
+ }
+#endif
+
+#endif
struct irqaction;
-extern void disable_irq_nosync(unsigned int);
-extern void disable_irq(unsigned int);
-extern void enable_irq(unsigned int);
-
/*
- * These correspond with the SA_TRIGGER_* defines, and therefore the
- * IORESOURCE_IRQ_* defines.
+ * Migration helpers
*/
-#define __IRQT_RISEDGE (1 << 0)
-#define __IRQT_FALEDGE (1 << 1)
-#define __IRQT_HIGHLVL (1 << 2)
-#define __IRQT_LOWLVL (1 << 3)
+#define __IRQT_FALEDGE IRQ_TYPE_EDGE_FALLING
+#define __IRQT_RISEDGE IRQ_TYPE_EDGE_RISING
+#define __IRQT_LOWLVL IRQ_TYPE_LEVEL_LOW
+#define __IRQT_HIGHLVL IRQ_TYPE_LEVEL_HIGH
#define IRQT_NOEDGE (0)
#define IRQT_RISING (__IRQT_RISEDGE)
#define IRQT_BOTHEDGE (__IRQT_RISEDGE|__IRQT_FALEDGE)
#define IRQT_LOW (__IRQT_LOWLVL)
#define IRQT_HIGH (__IRQT_HIGHLVL)
-#define IRQT_PROBE (1 << 4)
-
-int set_irq_type(unsigned int irq, unsigned int type);
-void disable_irq_wake(unsigned int irq);
-void enable_irq_wake(unsigned int irq);
-int setup_irq(unsigned int, struct irqaction *);
+#define IRQT_PROBE IRQ_TYPE_PROBE
extern void migrate_irqs(void);
#endif
#ifndef __ASM_ARM_MACH_IRQ_H
#define __ASM_ARM_MACH_IRQ_H
-struct irqdesc;
-struct pt_regs;
-struct seq_file;
-
-typedef void (*irq_handler_t)(unsigned int, struct irqdesc *, struct pt_regs *);
-typedef void (*irq_control_t)(unsigned int);
-
-struct irqchip {
- /*
- * Acknowledge the IRQ.
- * If this is a level-based IRQ, then it is expected to mask the IRQ
- * as well.
- */
- void (*ack)(unsigned int);
- /*
- * Mask the IRQ in hardware.
- */
- void (*mask)(unsigned int);
- /*
- * Unmask the IRQ in hardware.
- */
- void (*unmask)(unsigned int);
- /*
- * Ask the hardware to re-trigger the IRQ.
- * Note: This method _must_ _not_ call the interrupt handler.
- * If you are unable to retrigger the interrupt, do not
- * provide a function, or if you do, return non-zero.
- */
- int (*retrigger)(unsigned int);
- /*
- * Set the type of the IRQ.
- */
- int (*set_type)(unsigned int, unsigned int);
- /*
- * Set wakeup-enable on the selected IRQ
- */
- int (*set_wake)(unsigned int, unsigned int);
-
-#ifdef CONFIG_SMP
- /*
- * Route an interrupt to a CPU
- */
- void (*set_cpu)(struct irqdesc *desc, unsigned int irq, unsigned int cpu);
-#endif
-};
-
-struct irqdesc {
- irq_handler_t handle;
- struct irqchip *chip;
- struct irqaction *action;
- struct list_head pend;
- void __iomem *base;
- void *data;
- unsigned int disable_depth;
-
- unsigned int triggered: 1; /* IRQ has occurred */
- unsigned int running : 1; /* IRQ is running */
- unsigned int pending : 1; /* IRQ is pending */
- unsigned int probing : 1; /* IRQ in use for a probe */
- unsigned int probe_ok : 1; /* IRQ can be used for probe */
- unsigned int valid : 1; /* IRQ claimable */
- unsigned int noautoenable : 1; /* don't automatically enable IRQ */
- unsigned int unused :25;
-
- unsigned int irqs_unhandled;
- struct proc_dir_entry *procdir;
-
-#ifdef CONFIG_SMP
- cpumask_t affinity;
- unsigned int cpu;
-#endif
-
- /*
- * IRQ lock detection
- */
- unsigned int lck_cnt;
- unsigned int lck_pc;
- unsigned int lck_jif;
-};
-
-extern struct irqdesc irq_desc[];
+#include <linux/irq.h>
-/*
- * Helpful inline function for calling irq descriptor handlers.
- */
-static inline void desc_handle_irq(unsigned int irq, struct irqdesc *desc, struct pt_regs *regs)
-{
- desc->handle(irq, desc, regs);
-}
+struct seq_file;
/*
* This is internal. Do not use it.
extern void (*init_arch_irq)(void);
extern void init_FIQ(void);
extern int show_fiq_list(struct seq_file *, void *);
-void __set_irq_handler(unsigned int irq, irq_handler_t, int);
/*
- * External stuff.
+ * Function wrappers
+ */
+#define set_irq_chipdata(irq, d) set_irq_chip_data(irq, d)
+#define get_irq_chipdata(irq) get_irq_chip_data(irq)
+
+/*
+ * Obsolete inline function for calling irq descriptor handlers.
*/
-#define set_irq_handler(irq,handler) __set_irq_handler(irq,handler,0)
-#define set_irq_chained_handler(irq,handler) __set_irq_handler(irq,handler,1)
-#define set_irq_data(irq,d) do { irq_desc[irq].data = d; } while (0)
-#define set_irq_chipdata(irq,d) do { irq_desc[irq].base = d; } while (0)
-#define get_irq_chipdata(irq) (irq_desc[irq].base)
+static inline void desc_handle_irq(unsigned int irq, struct irq_desc *desc,
+ struct pt_regs *regs)
+{
+ desc->handle_irq(irq, desc, regs);
+}
-void set_irq_chip(unsigned int irq, struct irqchip *);
void set_irq_flags(unsigned int irq, unsigned int flags);
#define IRQF_VALID (1 << 0)
#define IRQF_NOAUTOEN (1 << 2)
/*
- * Built-in IRQ handlers.
+ * This is for easy migration, but should be changed in the source
*/
-void do_level_IRQ(unsigned int irq, struct irqdesc *desc, struct pt_regs *regs);
-void do_edge_IRQ(unsigned int irq, struct irqdesc *desc, struct pt_regs *regs);
-void do_simple_IRQ(unsigned int irq, struct irqdesc *desc, struct pt_regs *regs);
-void do_bad_IRQ(unsigned int irq, struct irqdesc *desc, struct pt_regs *regs);
-void dummy_mask_unmask_irq(unsigned int irq);
+#define do_level_IRQ handle_level_irq
+#define do_edge_IRQ handle_edge_irq
+#define do_simple_IRQ handle_simple_irq
+#define irqdesc irq_desc
+#define irqchip irq_chip
+
+#define do_bad_IRQ(irq,desc,regs) \
+do { \
+ spin_lock(&desc->lock); \
+ handle_bad_irq(irq, desc, regs); \
+ spin_unlock(&desc->lock); \
+} while(0)
+
+extern unsigned long irq_err_count;
+static inline void ack_bad_irq(int irq)
+{
+ irq_err_count++;
+}
#endif
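With irqdesc/irqchip now just aliases for the generic structures, a machine's IRQ setup is expected to call the genirq entry points directly. A minimal sketch, assuming the generic set_irq_chip()/set_irq_handler() from linux/irq.h; my_chip is a hypothetical struct irq_chip supplied by the PIC driver.

static struct irq_chip my_chip;		/* placeholder PIC operations */

static void __init example_wire_up_irq(unsigned int irq)
{
	set_irq_chip(irq, &my_chip);
	set_irq_handler(irq, handle_level_irq);	/* formerly do_level_IRQ */
	set_irq_flags(irq, IRQF_VALID);
}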
/*
* Kernel time keeping support.
*/
+struct timespec;
extern int (*set_rtc)(void);
extern void save_time_delta(struct timespec *delta, struct timespec *rtc);
extern void restore_time_delta(struct timespec *delta, struct timespec *rtc);
*/
#define XIP_VIRT_ADDR(physaddr) (MODULE_START + ((physaddr) & 0x000fffff))
+/*
+ * Allow 16MB-aligned ioremap pages
+ */
+#define IOREMAP_MAX_ORDER 24
+
#else /* CONFIG_MMU */
/*
#if __LINUX_ARM_ARCH__ >= 6
unsigned int id;
#endif
+ unsigned int kvm_seq;
} mm_context_t;
#if __LINUX_ARM_ARCH__ >= 6
#include <asm/cacheflush.h>
#include <asm/proc-fns.h>
+void __check_kvm_seq(struct mm_struct *mm);
+
#if __LINUX_ARM_ARCH__ >= 6
/*
{
if (unlikely((mm->context.id ^ cpu_last_asid) >> ASID_BITS))
__new_context(mm);
+
+ if (unlikely(mm->context.kvm_seq != init_mm.context.kvm_seq))
+ __check_kvm_seq(mm);
}
#define init_new_context(tsk,mm) (__init_new_context(tsk,mm),0)
#else
-#define check_context(mm) do { } while (0)
+static inline void check_context(struct mm_struct *mm)
+{
+ if (unlikely(mm->context.kvm_seq != init_mm.context.kvm_seq))
+ __check_kvm_seq(mm);
+}
+
#define init_new_context(tsk,mm) 0
#endif
*/
#define PMD_SECT_BUFFERABLE (1 << 2)
#define PMD_SECT_CACHEABLE (1 << 3)
+#define PMD_SECT_XN (1 << 4) /* v6 */
#define PMD_SECT_AP_WRITE (1 << 10)
#define PMD_SECT_AP_READ (1 << 11)
#define PMD_SECT_TEX(x) ((x) << 12) /* v5 */
struct proc_info_list {
unsigned int cpu_val;
unsigned int cpu_mask;
- unsigned long __cpu_mmu_flags; /* used by head.S */
+ unsigned long __cpu_mm_mmu_flags; /* used by head.S */
+ unsigned long __cpu_io_mmu_flags; /* used by head.S */
unsigned long __cpu_flush; /* used by head.S */
const char *arch_name;
const char *elf_name;
* is running in 26-bit.
* SA_ONSTACK allows alternate signal stacks (see sigaltstack(2)).
* SA_RESTART flag to get restarting signals (which were the default long ago)
- * SA_INTERRUPT is a no-op, but left due to historical reasons. Use the
* SA_NODEFER prevents the current signal from being masked in the handler.
* SA_RESETHAND clears the handler when the signal is delivered.
*
#define SA_NOMASK SA_NODEFER
#define SA_ONESHOT SA_RESETHAND
-#define SA_INTERRUPT 0x20000000 /* dummy -- ignored */
/*
#define MINSIGSTKSZ 2048
#define SIGSTKSZ 8192
-#ifdef __KERNEL__
-#define SA_TIMER 0x40000000
-#endif
-
#include <asm-generic/signal.h>
#ifdef __KERNEL__
extern void iwmmxt_task_copy(struct thread_info *, void *);
extern void iwmmxt_task_restore(struct thread_info *, void *);
extern void iwmmxt_task_release(struct thread_info *);
+extern void iwmmxt_task_switch(struct thread_info *);
#endif
#define fd_inb(port) inb((port))
#define fd_request_irq() request_irq(IRQ_FLOPPYDISK,floppy_interrupt,\
- SA_INTERRUPT,"floppy",NULL)
+ IRQF_DISABLED,"floppy",NULL)
#define fd_free_irq() free_irq(IRQ_FLOPPYDISK,NULL)
#define fd_disable_irq() disable_irq(IRQ_FLOPPYDISK)
#define fd_enable_irq() enable_irq(IRQ_FLOPPYDISK)
* is running in 26-bit.
* SA_ONSTACK allows alternate signal stacks (see sigaltstack(2)).
* SA_RESTART flag to get restarting signals (which were the default long ago)
- * SA_INTERRUPT is a no-op, but left due to historical reasons. Use the
* SA_NODEFER prevents the current signal from being masked in the handler.
* SA_RESETHAND clears the handler when the signal is delivered.
*
#define SA_NOMASK SA_NODEFER
#define SA_ONESHOT SA_RESETHAND
-#define SA_INTERRUPT 0x20000000 /* dummy -- ignored */
/*
* it here, we would not get the multiple_irq at all.
*
* The non-blocking here is based on the knowledge that the timer interrupt is
- * registred as a fast interrupt (SA_INTERRUPT) so that we _know_ there will not
+ * registered as a fast interrupt (IRQF_DISABLED) so that we _know_ there will not
* be an sti() before the timer irq handler is run to acknowledge the interrupt.
*/
* if we had BLOCK'edit here, we would not get the multiple_irq at all.
*
* The non-blocking here is based on the knowledge that the timer interrupt is
- * registred as a fast interrupt (SA_INTERRUPT) so that we _know_ there will not
+ * registered as a fast interrupt (IRQF_DISABLED) so that we _know_ there will not
* be an sti() before the timer irq handler is run to acknowledge the interrupt.
*/
#define BUILD_TIMER_IRQ(nr, mask) \
* SA_FLAGS values:
*
* SA_ONSTACK indicates that a registered stack_t will be used.
- * SA_INTERRUPT is a no-op, but left due to historical reasons. Use the
* SA_RESTART flag to get restarting signals (which were the default long ago)
* SA_NOCLDSTOP flag to turn off SIGCHLD when children stop.
* SA_RESETHAND clears the handler when the signal is delivered.
#define SA_NOMASK SA_NODEFER
#define SA_ONESHOT SA_RESETHAND
-#define SA_INTERRUPT 0x20000000 /* dummy -- ignored */
#define SA_RESTORER 0x04000000
struct irq_level {
int usage;
int disable_count;
- unsigned long flags; /* current SA_INTERRUPT and SA_SHIRQ settings */
+ unsigned long flags; /* current IRQF_DISABLED and IRQF_SHARED settings */
spinlock_t lock;
struct irq_source *sources;
};
* SA_FLAGS values:
*
* SA_ONSTACK indicates that a registered stack_t will be used.
- * SA_INTERRUPT is a no-op, but left due to historical reasons. Use the
* SA_RESTART flag to get restarting signals (which were the default long ago)
* SA_NOCLDSTOP flag to turn off SIGCHLD when children stop.
* SA_RESETHAND clears the handler when the signal is delivered.
#define SA_NOMASK SA_NODEFER
#define SA_ONESHOT SA_RESETHAND
-#define SA_INTERRUPT 0x20000000 /* dummy -- ignored */
#define SA_RESTORER 0x04000000
#ifndef _ASM_GENERIC_MUTEX_NULL_H
#define _ASM_GENERIC_MUTEX_NULL_H
-/* extra parameter only needed for mutex debugging: */
-#ifndef __IP__
-# define __IP__
-#endif
-
-#define __mutex_fastpath_lock(count, fail_fn) fail_fn(count __RET_IP__)
-#define __mutex_fastpath_lock_retval(count, fail_fn) fail_fn(count __RET_IP__)
-#define __mutex_fastpath_unlock(count, fail_fn) fail_fn(count __RET_IP__)
-#define __mutex_fastpath_trylock(count, fail_fn) fail_fn(count)
-#define __mutex_slowpath_needs_to_unlock() 1
+#define __mutex_fastpath_lock(count, fail_fn) fail_fn(count)
+#define __mutex_fastpath_lock_retval(count, fail_fn) fail_fn(count)
+#define __mutex_fastpath_unlock(count, fail_fn) fail_fn(count)
+#define __mutex_fastpath_trylock(count, fail_fn) fail_fn(count)
+#define __mutex_slowpath_needs_to_unlock() 1
#endif
extern unsigned long __per_cpu_offset[NR_CPUS];
+#define per_cpu_offset(x) (__per_cpu_offset[x])
+
/* Separate out the type, so (int[3], foo) works. */
#define DEFINE_PER_CPU(type, name) \
__attribute__((__section__(".data.percpu"))) __typeof__(type) per_cpu__##name
* SA_FLAGS values:
*
* SA_ONSTACK indicates that a registered stack_t will be used.
- * SA_INTERRUPT is a no-op, but left due to historical reasons. Use the
* SA_RESTART flag to get restarting signals (which were the default long ago)
* SA_NOCLDSTOP flag to turn off SIGCHLD when children stop.
* SA_RESETHAND clears the handler when the signal is delivered.
#define SA_NOMASK SA_NODEFER
#define SA_ONESHOT SA_RESETHAND
-#define SA_INTERRUPT 0x20000000 /* dummy -- ignored */
#define SA_RESTORER 0x04000000
static int fd_request_irq(void)
{
if(can_use_virtual_dma)
- return request_irq(FLOPPY_IRQ, floppy_hardint,SA_INTERRUPT,
- "floppy", NULL);
+ return request_irq(FLOPPY_IRQ, floppy_hardint,
+ IRQF_DISABLED, "floppy", NULL);
else
- return request_irq(FLOPPY_IRQ, floppy_interrupt, SA_INTERRUPT,
- "floppy", NULL);
+ return request_irq(FLOPPY_IRQ, floppy_interrupt,
+ IRQF_DISABLED, "floppy", NULL);
}
--- /dev/null
+/*
+ * include/asm-i386/irqflags.h
+ *
+ * IRQ flags handling
+ *
+ * This file gets included from lowlevel asm headers too, to provide
+ * wrapped versions of the local_irq_*() APIs, based on the
+ * raw_local_irq_*() functions from the lowlevel headers.
+ */
+#ifndef _ASM_IRQFLAGS_H
+#define _ASM_IRQFLAGS_H
+
+#ifndef __ASSEMBLY__
+
+static inline unsigned long __raw_local_save_flags(void)
+{
+ unsigned long flags;
+
+ __asm__ __volatile__(
+ "pushfl ; popl %0"
+ : "=g" (flags)
+ : /* no input */
+ );
+
+ return flags;
+}
+
+#define raw_local_save_flags(flags) \
+ do { (flags) = __raw_local_save_flags(); } while (0)
+
+static inline void raw_local_irq_restore(unsigned long flags)
+{
+ __asm__ __volatile__(
+ "pushl %0 ; popfl"
+ : /* no output */
+ :"g" (flags)
+ :"memory", "cc"
+ );
+}
+
+static inline void raw_local_irq_disable(void)
+{
+ __asm__ __volatile__("cli" : : : "memory");
+}
+
+static inline void raw_local_irq_enable(void)
+{
+ __asm__ __volatile__("sti" : : : "memory");
+}
+
+/*
+ * Used in the idle loop; sti takes one instruction cycle
+ * to complete:
+ */
+static inline void raw_safe_halt(void)
+{
+ __asm__ __volatile__("sti; hlt" : : : "memory");
+}
+
+/*
+ * Used when interrupts are already enabled or to
+ * shutdown the processor:
+ */
+static inline void halt(void)
+{
+ __asm__ __volatile__("hlt": : :"memory");
+}
+
+static inline int raw_irqs_disabled_flags(unsigned long flags)
+{
+ return !(flags & (1 << 9));
+}
+
+static inline int raw_irqs_disabled(void)
+{
+ unsigned long flags = __raw_local_save_flags();
+
+ return raw_irqs_disabled_flags(flags);
+}
+
+/*
+ * For spinlocks, etc:
+ */
+static inline unsigned long __raw_local_irq_save(void)
+{
+ unsigned long flags = __raw_local_save_flags();
+
+ raw_local_irq_disable();
+
+ return flags;
+}
+
+#define raw_local_irq_save(flags) \
+ do { (flags) = __raw_local_irq_save(); } while (0)
+
+#endif /* __ASSEMBLY__ */
+
+/*
+ * Do the CPU's IRQ-state tracing from assembly code. We call a
+ * C function, so save all the C-clobbered registers:
+ */
+#ifdef CONFIG_TRACE_IRQFLAGS
+
+# define TRACE_IRQS_ON \
+ pushl %eax; \
+ pushl %ecx; \
+ pushl %edx; \
+ call trace_hardirqs_on; \
+ popl %edx; \
+ popl %ecx; \
+ popl %eax;
+
+# define TRACE_IRQS_OFF \
+ pushl %eax; \
+ pushl %ecx; \
+ pushl %edx; \
+ call trace_hardirqs_off; \
+ popl %edx; \
+ popl %ecx; \
+ popl %eax;
+
+#else
+# define TRACE_IRQS_ON
+# define TRACE_IRQS_OFF
+#endif
+
+#endif
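For context, the generic <linux/irqflags.h> wrapper is expected to build the local_irq_*() macros on top of these raw primitives and insert the tracing hooks when CONFIG_TRACE_IRQFLAGS is set. Roughly, as a simplified sketch (not the actual header):

#define local_irq_enable() \
	do { trace_hardirqs_on(); raw_local_irq_enable(); } while (0)
#define local_irq_disable() \
	do { raw_local_irq_disable(); trace_hardirqs_off(); } while (0)
#define local_irq_save(flags) \
	do { raw_local_irq_save(flags); trace_hardirqs_off(); } while (0)
#define local_irq_restore(flags)				\
	do {							\
		if (raw_irqs_disabled_flags(flags)) {		\
			raw_local_irq_restore(flags);		\
			trace_hardirqs_off();			\
		} else {					\
			trace_hardirqs_on();			\
			raw_local_irq_restore(flags);		\
		}						\
	} while (0)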
#include <linux/list.h>
#include <linux/spinlock.h>
+#include <linux/lockdep.h>
struct rwsem_waiter;
#define RWSEM_ACTIVE_WRITE_BIAS (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)
spinlock_t wait_lock;
struct list_head wait_list;
-#if RWSEM_DEBUG
- int debug;
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+ struct lockdep_map dep_map;
#endif
};
-/*
- * initialisation
- */
-#if RWSEM_DEBUG
-#define __RWSEM_DEBUG_INIT , 0
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+# define __RWSEM_DEP_MAP_INIT(lockname) , .dep_map = { .name = #lockname }
#else
-#define __RWSEM_DEBUG_INIT /* */
+# define __RWSEM_DEP_MAP_INIT(lockname)
#endif
+
#define __RWSEM_INITIALIZER(name) \
{ RWSEM_UNLOCKED_VALUE, SPIN_LOCK_UNLOCKED, LIST_HEAD_INIT((name).wait_list) \
- __RWSEM_DEBUG_INIT }
+ __RWSEM_DEP_MAP_INIT(name) }
#define DECLARE_RWSEM(name) \
struct rw_semaphore name = __RWSEM_INITIALIZER(name)
-static inline void init_rwsem(struct rw_semaphore *sem)
-{
- sem->count = RWSEM_UNLOCKED_VALUE;
- spin_lock_init(&sem->wait_lock);
- INIT_LIST_HEAD(&sem->wait_list);
-#if RWSEM_DEBUG
- sem->debug = 0;
-#endif
-}
+extern void __init_rwsem(struct rw_semaphore *sem, const char *name,
+ struct lock_class_key *key);
+
+#define init_rwsem(sem) \
+do { \
+ static struct lock_class_key __key; \
+ \
+ __init_rwsem((sem), #sem, &__key); \
+} while (0)
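Callers are unaffected by the lockdep plumbing: both the static initializer and the init_rwsem() wrapper supply the lock class key behind the scenes. A trivial usage sketch (example names only):

static DECLARE_RWSEM(example_sem);	/* dep_map named after the variable */

static void example_reader(void)
{
	down_read(&example_sem);
	/* ... read-side critical section ... */
	up_read(&example_sem);
}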
/*
* lock for reading
/*
* lock for writing
*/
-static inline void __down_write(struct rw_semaphore *sem)
+static inline void __down_write_nested(struct rw_semaphore *sem, int subclass)
{
int tmp;
: "memory", "cc");
}
+static inline void __down_write(struct rw_semaphore *sem)
+{
+ __down_write_nested(sem, 0);
+}
+
/*
* trylock for writing -- returns 1 if successful, 0 if contention
*/
* SA_FLAGS values:
*
* SA_ONSTACK indicates that a registered stack_t will be used.
- * SA_INTERRUPT is a no-op, but left due to historical reasons. Use the
* SA_RESTART flag to get restarting signals (which were the default long ago)
* SA_NOCLDSTOP flag to turn off SIGCHLD when children stop.
* SA_RESETHAND clears the handler when the signal is delivered.
#define SA_NOMASK SA_NODEFER
#define SA_ONESHOT SA_RESETHAND
-#define SA_INTERRUPT 0x20000000 /* dummy -- ignored */
#define SA_RESTORER 0x04000000
"jmp 1b\n" \
"3:\n\t"
+/*
+ * NOTE: there's an irqs-on section here, which normally would have to be
+ * irq-traced, but on CONFIG_TRACE_IRQFLAGS we never use
+ * __raw_spin_lock_string_flags().
+ */
#define __raw_spin_lock_string_flags \
"\n1:\t" \
"lock ; decb %0\n\t" \
"=m" (lock->slock) : : "memory");
}
+/*
+ * It is easier for the lock validator if interrupts are not re-enabled
+ * in the middle of a lock-acquire. This is a performance feature anyway
+ * so we turn it off:
+ */
+#ifndef CONFIG_PROVE_LOCKING
static inline void __raw_spin_lock_flags(raw_spinlock_t *lock, unsigned long flags)
{
alternative_smp(
__raw_spin_lock_string_up,
"=m" (lock->slock) : "r" (flags) : "memory");
}
+#endif
static inline int __raw_spin_trylock(raw_spinlock_t *lock)
{
#define set_wmb(var, value) do { var = value; wmb(); } while (0)
-/* interrupt control.. */
-#define local_save_flags(x) do { typecheck(unsigned long,x); __asm__ __volatile__("pushfl ; popl %0":"=g" (x): /* no input */); } while (0)
-#define local_irq_restore(x) do { typecheck(unsigned long,x); __asm__ __volatile__("pushl %0 ; popfl": /* no output */ :"g" (x):"memory", "cc"); } while (0)
-#define local_irq_disable() __asm__ __volatile__("cli": : :"memory")
-#define local_irq_enable() __asm__ __volatile__("sti": : :"memory")
-/* used in the idle loop; sti takes one instruction cycle to complete */
-#define safe_halt() __asm__ __volatile__("sti; hlt": : :"memory")
-/* used when interrupts are already enabled or to shutdown the processor */
-#define halt() __asm__ __volatile__("hlt": : :"memory")
-
-#define irqs_disabled() \
-({ \
- unsigned long flags; \
- local_save_flags(flags); \
- !(flags & (1<<9)); \
-})
-
-/* For spinlocks etc */
-#define local_irq_save(x) __asm__ __volatile__("pushfl ; popl %0 ; cli":"=g" (x): /* no input */ :"memory")
+#include <linux/irqflags.h>
/*
* disable hlt during certain critical i/o operations
#ifdef CONFIG_SMP
extern unsigned long __per_cpu_offset[NR_CPUS];
+#define per_cpu_offset(x) (__per_cpu_offset[x])
/* Equal to __per_cpu_offset[smp_processor_id()], but faster to access: */
DECLARE_PER_CPU(unsigned long, local_per_cpu_offset);
signed long count;
spinlock_t wait_lock;
struct list_head wait_list;
-#if RWSEM_DEBUG
- int debug;
-#endif
};
#define RWSEM_UNLOCKED_VALUE __IA64_UL_CONST(0x0000000000000000)
#define RWSEM_ACTIVE_READ_BIAS RWSEM_ACTIVE_BIAS
#define RWSEM_ACTIVE_WRITE_BIAS (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)
-/*
- * initialization
- */
-#if RWSEM_DEBUG
-#define __RWSEM_DEBUG_INIT , 0
-#else
-#define __RWSEM_DEBUG_INIT /* */
-#endif
-
#define __RWSEM_INITIALIZER(name) \
{ RWSEM_UNLOCKED_VALUE, SPIN_LOCK_UNLOCKED, \
- LIST_HEAD_INIT((name).wait_list) \
- __RWSEM_DEBUG_INIT }
+ LIST_HEAD_INIT((name).wait_list) }
#define DECLARE_RWSEM(name) \
struct rw_semaphore name = __RWSEM_INITIALIZER(name)
sem->count = RWSEM_UNLOCKED_VALUE;
spin_lock_init(&sem->wait_lock);
INIT_LIST_HEAD(&sem->wait_list);
-#if RWSEM_DEBUG
- sem->debug = 0;
-#endif
}
/*
* SA_FLAGS values:
*
* SA_ONSTACK indicates that a registered stack_t will be used.
- * SA_INTERRUPT is a no-op, but left due to historical reasons.
* SA_RESTART flag to get restarting signals (which were the default long ago)
* SA_NOCLDSTOP flag to turn off SIGCHLD when children stop.
* SA_RESETHAND clears the handler when the signal is delivered.
#define SA_NOMASK SA_NODEFER
#define SA_ONESHOT SA_RESETHAND
-#define SA_INTERRUPT 0x20000000 /* dummy -- ignored */
#define SA_RESTORER 0x04000000
#define _NSIG_BPW 64
#define _NSIG_WORDS (_NSIG / _NSIG_BPW)
-#define SA_PERCPU_IRQ 0x02000000
-
#endif /* __KERNEL__ */
#include <asm-generic/signal.h>
#define end_of_stack(p) (unsigned long *)((void *)(p) + IA64_RBS_OFFSET)
#define __HAVE_ARCH_TASK_STRUCT_ALLOCATOR
-#define alloc_task_struct() ((task_t *)__get_free_pages(GFP_KERNEL | __GFP_COMP, KERNEL_STACK_SIZE_ORDER))
+#define alloc_task_struct() ((struct task_struct *)__get_free_pages(GFP_KERNEL | __GFP_COMP, KERNEL_STACK_SIZE_ORDER))
#define free_task_struct(tsk) free_pages((unsigned long) (tsk), KERNEL_STACK_SIZE_ORDER)
#endif /* !__ASSEMBLY */
* SA_FLAGS values:
*
* SA_ONSTACK indicates that a registered stack_t will be used.
- * SA_INTERRUPT is a no-op, but left due to historical reasons. Use the
* SA_RESTART flag to get restarting signals (which were the default long ago)
* SA_NOCLDSTOP flag to turn off SIGCHLD when children stop.
* SA_RESETHAND clears the handler when the signal is delivered.
#define SA_NOMASK SA_NODEFER
#define SA_ONESHOT SA_RESETHAND
-#define SA_INTERRUPT 0x20000000 /* dummy -- ignored */
#define SA_RESTORER 0x04000000
* switch_to(prev, next) should switch from task `prev' to `next'
* `prev' will never be the same as `next'.
*
- * `next' and `prev' should be task_t, but it isn't always defined
+ * `next' and `prev' should be struct task_struct, but it isn't always defined
*/
#define switch_to(prev, next, last) do { \
static int fd_request_irq(void)
{
if(MACH_IS_Q40)
- return request_irq(FLOPPY_IRQ, floppy_hardint,SA_INTERRUPT,
- "floppy", floppy_hardint);
+ return request_irq(FLOPPY_IRQ, floppy_hardint,
+ IRQF_DISABLED, "floppy", floppy_hardint);
else if(MACH_IS_SUN3X)
return sun3xflop_request_irq();
return -ENXIO;
/*
* various flags for request_irq() - the Amiga now uses the standard
- * mechanism like all other architectures - SA_INTERRUPT and SA_SHIRQ
- * are your friends.
+ * mechanism like all other architectures - IRQF_DISABLED and
+ * IRQF_SHARED are your friends.
*/
#ifndef MACH_AMIGA_ONLY
#define IRQ_FLG_LOCK (0x0001) /* handler is not replaceable */
* SA_FLAGS values:
*
* SA_ONSTACK indicates that a registered stack_t will be used.
- * SA_INTERRUPT is a no-op, but left due to historical reasons. Use the
* SA_RESTART flag to get restarting signals (which were the default long ago)
* SA_NOCLDSTOP flag to turn off SIGCHLD when children stop.
* SA_RESETHAND clears the handler when the signal is delivered.
#define SA_NOMASK SA_NODEFER
#define SA_ONESHOT SA_RESETHAND
-#define SA_INTERRUPT 0x20000000 /* dummy -- ignored */
/*
* sigaltstack controls
if(!once) {
once = 1;
- error = request_irq(FLOPPY_IRQ, sun3xflop_hardint, SA_INTERRUPT, "floppy", NULL);
+ error = request_irq(FLOPPY_IRQ, sun3xflop_hardint,
+ IRQF_DISABLED, "floppy", NULL);
return ((error == 0) ? 0 : -1);
} else return 0;
}
/*
* various flags for request_irq() - the Amiga now uses the standard
- * mechanism like all other architectures - SA_INTERRUPT and SA_SHIRQ
- * are your friends.
+ * mechanism like all other architectures - IRQF_DISABLED and
+ * IRQF_SHARED are your friends.
*/
#define IRQ_FLG_LOCK (0x0001) /* handler is not replaceable */
#define IRQ_FLG_REPLACE (0x0002) /* replace existing handler */
* SA_FLAGS values:
*
* SA_ONSTACK indicates that a registered stack_t will be used.
- * SA_INTERRUPT is a no-op, but left due to historical reasons. Use the
* SA_RESTART flag to get restarting signals (which were the default long ago)
* SA_NOCLDSTOP flag to turn off SIGCHLD when children stop.
* SA_RESETHAND clears the handler when the signal is delivered.
#define SA_NOMASK SA_NODEFER
#define SA_ONESHOT SA_RESETHAND
-#define SA_INTERRUPT 0x20000000 /* dummy -- ignored */
/*
* sigaltstack controls
static inline int fd_request_irq(void)
{
return request_irq(FLOPPY_IRQ, floppy_interrupt,
- SA_INTERRUPT, "floppy", NULL);
+ IRQF_DISABLED, "floppy", NULL);
}
static inline void fd_free_irq(void)
static inline int fd_request_irq(void)
{
return request_irq(FLOPPY_IRQ, floppy_interrupt,
- SA_INTERRUPT, "floppy", NULL);
+ IRQF_DISABLED, "floppy", NULL);
}
static inline void fd_free_irq(void)
* SA_FLAGS values:
*
* SA_ONSTACK indicates that a registered stack_t will be used.
- * SA_INTERRUPT is a no-op, but left due to historical reasons. Use the
* SA_RESTART flag to get restarting signals (which were the default long ago)
* SA_NOCLDSTOP flag to turn off SIGCHLD when children stop.
* SA_RESETHAND clears the handler when the signal is delivered.
#define SA_NOMASK SA_NODEFER
#define SA_ONESHOT SA_RESETHAND
-#define SA_INTERRUPT 0x20000000 /* dummy -- ignored */
#define SA_RESTORER 0x04000000 /* Only for o32 */
#ifdef __KERNEL__
-/*
- * These values of sa_flags are used only by the kernel as part of the
- * irq handling routines.
- *
- * SA_INTERRUPT is also used by the irq handling routines.
- * SA_SHIRQ flag is for shared interrupt support on PCI and EISA.
- */
-#define SA_SAMPLE_RANDOM SA_RESTART
-
#ifdef CONFIG_TRAD_SIGNALS
#define sig_uses_siginfo(ka) ((ka)->sa.sa_flags & SA_SIGINFO)
#else
static int fd_request_irq(void)
{
if(can_use_virtual_dma)
- return request_irq(FLOPPY_IRQ, floppy_hardint,SA_INTERRUPT,
- "floppy", NULL);
+ return request_irq(FLOPPY_IRQ, floppy_hardint,
+ IRQF_DISABLED, "floppy", NULL);
else
- return request_irq(FLOPPY_IRQ, floppy_interrupt, SA_INTERRUPT,
- "floppy", NULL);
+ return request_irq(FLOPPY_IRQ, floppy_interrupt,
+ IRQF_DISABLED, "floppy", NULL);
}
static unsigned long dma_mem_alloc(unsigned long size)
* SA_FLAGS values:
*
* SA_ONSTACK indicates that a registered stack_t will be used.
- * SA_INTERRUPT is a no-op, but left due to historical reasons. Use the
* SA_RESTART flag to get restarting signals (which were the default long ago)
* SA_NOCLDSTOP flag to turn off SIGCHLD when children stop.
* SA_RESETHAND clears the handler when the signal is delivered.
#define SA_NOMASK SA_NODEFER
#define SA_ONESHOT SA_RESETHAND
-#define SA_INTERRUPT 0x20000000 /* dummy -- ignored */
#define SA_RESTORER 0x04000000 /* obsolete -- ignored */
#define fd_disable_irq() disable_irq(FLOPPY_IRQ)
#define fd_cacheflush(addr,size) /* nothing */
#define fd_request_irq() request_irq(FLOPPY_IRQ, floppy_interrupt, \
- SA_INTERRUPT, "floppy", NULL)
+ IRQF_DISABLED, "floppy", NULL)
#define fd_free_irq() free_irq(FLOPPY_IRQ, NULL);
#ifdef CONFIG_PCI
#include <linux/irq.h>
-extern struct hw_interrupt_type i8259_pic;
-
+#ifdef CONFIG_PPC_MERGE
+extern void i8259_init(struct device_node *node, unsigned long intack_addr);
+extern unsigned int i8259_irq(struct pt_regs *regs);
+#else
extern void i8259_init(unsigned long intack_addr, int offset);
extern int i8259_irq(struct pt_regs *regs);
-extern int i8259_irq_cascade(struct pt_regs *regs, void *unused);
+#endif
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_I8259_H */
* 2 of the License, or (at your option) any later version.
*/
+#include <linux/config.h>
#include <linux/threads.h>
+#include <linux/list.h>
+#include <linux/radix-tree.h>
#include <asm/types.h>
#include <asm/atomic.h>
-/* this number is used when no interrupt has been assigned */
-#define NO_IRQ (-1)
-
-/*
- * These constants are used for passing information about interrupt
- * signal polarity and level/edge sensing to the low-level PIC chip
- * drivers.
- */
-#define IRQ_SENSE_MASK 0x1
-#define IRQ_SENSE_LEVEL 0x1 /* interrupt on active level */
-#define IRQ_SENSE_EDGE 0x0 /* interrupt triggered by edge */
-
-#define IRQ_POLARITY_MASK 0x2
-#define IRQ_POLARITY_POSITIVE 0x2 /* high level or low->high edge */
-#define IRQ_POLARITY_NEGATIVE 0x0 /* low level or high->low edge */
#define get_irq_desc(irq) (&irq_desc[(irq)])
#define for_each_irq(i) \
for ((i) = 0; (i) < NR_IRQS; ++(i))
-#ifdef CONFIG_PPC64
+extern atomic_t ppc_n_lost_interrupts;
-/*
- * Maximum number of interrupt sources that we can handle.
+#ifdef CONFIG_PPC_MERGE
+
+/* This number is used when no interrupt has been assigned */
+#define NO_IRQ (0)
+
+/* This is a special irq number to return from get_irq() to tell that
+ * no interrupt happened _and_ ignore it (don't count it as bad). Some
+ * platforms like iSeries rely on that.
*/
+#define NO_IRQ_IGNORE ((unsigned int)-1)
+
+/* Total number of virqs in the platform (make it a CONFIG_* option?) */
#define NR_IRQS 512
-/* Interrupt numbers are virtual in case they are sparsely
- * distributed by the hardware.
+/* Number of irqs reserved for the legacy controller */
+#define NUM_ISA_INTERRUPTS 16
+
+/* This type is the placeholder for a hardware interrupt number. It has to
+ * be big enough to enclose whatever representation is used by a given
+ * platform.
+ */
+typedef unsigned long irq_hw_number_t;
+
+/* Interrupt controller "host" data structure. This could be defined as an
+ * irq domain controller. That is, it handles the mapping between hardware
+ * and virtual interrupt numbers for a given interrupt domain. The host
+ * structure is generally created by the PIC code for a given PIC instance
+ * (though a host can cover more than one PIC if they have a flat number
+ * model). It's the host callbacks that are responsible for setting the
+ * irq_chip on a given irq_desc after it's been mapped.
+ *
+ * The host code and data structures are fairly agnostic to the fact that
+ * we use an open firmware device-tree. We do have references to struct
+ * device_node in two places: in irq_find_host() to find the host matching
+ * a given interrupt controller node, and of course as an argument to its
+ * counterpart host->ops->match() callback. However, those are treated as
+ * generic pointers by the core and the fact that it's actually a device-node
+ * pointer is purely a convention between callers and implementation. This
+ * code could thus be used on other architectures by replacing those two
+ * by some sort of arch-specific void * "token" used to identify interrupt
+ * controllers.
*/
-extern unsigned int virt_irq_to_real_map[NR_IRQS];
+struct irq_host;
+struct radix_tree_root;
-/* The maximum virtual IRQ number that we support. This
- * can be set by the platform and will be reduced by the
- * value of __irq_offset_value. It defaults to and is
- * capped by (NR_IRQS - 1).
+/* Functions below are provided by the host and called whenever a new mapping
+ * is created or an old mapping is disposed. The host can then proceed to
+ * whatever internal data structure management is required. It also needs
+ * to set up the irq_desc when returning from map().
*/
-extern unsigned int virt_irq_max;
+struct irq_host_ops {
+ /* Match an interrupt controller device node to a host, returns
+ * 1 on a match
+ */
+ int (*match)(struct irq_host *h, struct device_node *node);
+
+ /* Create or update a mapping between a virtual irq number and a hw
+ * irq number. This can be called several times for the same mapping
+ * but with different flags, though unmap shall always be called
+ * before the virq->hw mapping is changed.
+ */
+ int (*map)(struct irq_host *h, unsigned int virq,
+ irq_hw_number_t hw, unsigned int flags);
+
+ /* Dispose of such a mapping */
+ void (*unmap)(struct irq_host *h, unsigned int virq);
+
+ /* Translate device-tree interrupt specifier from raw format coming
+ * from the firmware to a irq_hw_number_t (interrupt line number) and
+ * trigger flags that can be passed to irq_create_mapping().
+ * If no translation is provided, raw format is assumed to be one cell
+ * for interrupt line and default sense.
+ */
+ int (*xlate)(struct irq_host *h, struct device_node *ctrler,
+ u32 *intspec, unsigned int intsize,
+ irq_hw_number_t *out_hwirq, unsigned int *out_flags);
+};
+
+struct irq_host {
+ struct list_head link;
+
+ /* type of reverse mapping technique */
+ unsigned int revmap_type;
+#define IRQ_HOST_MAP_LEGACY 0 /* legacy 8259, gets irqs 1..15 */
+#define IRQ_HOST_MAP_NOMAP 1 /* no fast reverse mapping */
+#define IRQ_HOST_MAP_LINEAR 2 /* linear map of interrupts */
+#define IRQ_HOST_MAP_TREE 3 /* radix tree */
+ union {
+ struct {
+ unsigned int size;
+ unsigned int *revmap;
+ } linear;
+ struct radix_tree_root tree;
+ } revmap_data;
+ struct irq_host_ops *ops;
+ void *host_data;
+ irq_hw_number_t inval_irq;
+};
+
+/* The main irq map itself is an array of NR_IRQS entries containing the
+ * associated host and irq number. An entry with a host of NULL is free.
+ * An entry can be allocated if it's free; the allocator always then sets
+ * hwirq first to the host's invalid irq number and then fills ops.
+ */
+struct irq_map_entry {
+ irq_hw_number_t hwirq;
+ struct irq_host *host;
+};
+
+extern struct irq_map_entry irq_map[NR_IRQS];
+
-/* Create a mapping for a real_irq if it doesn't already exist.
- * Return the virtual irq as a convenience.
+/***
+ * irq_alloc_host - Allocate a new irq_host data structure
+ * @node: device-tree node of the interrupt controller
+ * @revmap_type: type of reverse mapping to use
+ * @revmap_arg: for IRQ_HOST_MAP_LINEAR only: size of the map
+ * @ops: map/unmap host callbacks
+ * @inval_irq: provide a hw number in that host space that is always invalid
+ *
+ * Allocates and initializes an irq_host structure. Note that in the case of
+ * IRQ_HOST_MAP_LEGACY, the map() callback will be called before this returns
+ * for all legacy interrupts except 0 (which is always the invalid irq for
+ * a legacy controller). For an IRQ_HOST_MAP_LINEAR, the map is allocated by
+ * this call as well. For an IRQ_HOST_MAP_TREE, the radix tree will be allocated
+ * later during boot automatically (the reverse mapping will use the slow path
+ * until that happens).
+ */
+extern struct irq_host *irq_alloc_host(unsigned int revmap_type,
+ unsigned int revmap_arg,
+ struct irq_host_ops *ops,
+ irq_hw_number_t inval_irq);
+
+
+/***
+ * irq_find_host - Locates a host for a given device node
+ * @node: device-tree node of the interrupt controller
+ */
+extern struct irq_host *irq_find_host(struct device_node *node);
+
+
+/***
+ * irq_set_default_host - Set a "default" host
+ * @host: default host pointer
+ *
+ * For convenience, it's possible to set a "default" host that will be used
+ * whenever NULL is passed to irq_create_mapping(). It makes life easier for
+ * platforms that want to manipulate a few hard coded interrupt numbers that
+ * aren't properly represented in the device-tree.
+ */
+extern void irq_set_default_host(struct irq_host *host);
+
+
+/***
+ * irq_set_virq_count - Set the maximum number of virt irqs
+ * @count: number of linux virtual irqs, capped with NR_IRQS
+ *
+ * This is mainly for use by platforms like iSeries that want to program
+ * the virtual irq number in the controller to avoid the reverse mapping.
+ */
+extern void irq_set_virq_count(unsigned int count);
+
+
+/***
+ * irq_create_mapping - Map a hardware interrupt into linux virq space
+ * @host: host owning this hardware interrupt or NULL for default host
+ * @hwirq: hardware irq number in that host space
+ * @flags: flags passed to the controller. Contains the trigger type among
+ * others. Use IRQ_TYPE_* defined in include/linux/irq.h
+ *
+ * Only one mapping per hardware interrupt is permitted. Returns a linux
+ * virq number. The flags can be used to provide sense information to the
+ * controller (typically extracted from the device-tree). If no information
+ * is passed, the controller defaults will apply (for example, xics can only
+ * do edge so flags are irrelevant for some pseries specific irqs).
+ *
+ * The device-tree generally contains the trigger info in an encoding that is
+ * specific to a given type of controller. In that case, you can directly use
+ * host->ops->trigger_xlate() to translate that.
+ *
+ * It is recommended that new PICs that don't have existing OF bindings choose
+ * to use a representation of triggers identical to linux.
+ */
+extern unsigned int irq_create_mapping(struct irq_host *host,
+ irq_hw_number_t hwirq,
+ unsigned int flags);
+
+
+/***
+ * irq_dispose_mapping - Unmap an interrupt
+ * @virq: linux virq number of the interrupt to unmap
+ */
+extern void irq_dispose_mapping(unsigned int virq);
+
+/***
+ * irq_find_mapping - Find a linux virq from an hw irq number.
+ * @host: host owning this hardware interrupt
+ * @hwirq: hardware irq number in that host space
+ *
+ * This is a slow path, for use by generic code. It's expected that an
+ * irq controller implementation directly calls the appropriate low level
+ * mapping function.
*/
-int virt_irq_create_mapping(unsigned int real_irq);
-void virt_irq_init(void);
+extern unsigned int irq_find_mapping(struct irq_host *host,
+ irq_hw_number_t hwirq);
-static inline unsigned int virt_irq_to_real(unsigned int virt_irq)
+
+/***
+ * irq_radix_revmap - Find a linux virq from a hw irq number.
+ * @host: host owning this hardware interrupt
+ * @hwirq: hardware irq number in that host space
+ *
+ * This is a fast path, for use by irq controller code that uses radix tree
+ * revmaps
+ */
+extern unsigned int irq_radix_revmap(struct irq_host *host,
+ irq_hw_number_t hwirq);
+
+/***
+ * irq_linear_revmap - Find a linux virq from a hw irq number.
+ * @host: host owning this hardware interrupt
+ * @hwirq: hardware irq number in that host space
+ *
+ * This is a fast path, for use by irq controller code that uses linear
+ * revmaps. It falls back to the slow path if the revmap doesn't exist
+ * yet and will create the revmap entry with appropriate locking
+ */
+
+extern unsigned int irq_linear_revmap(struct irq_host *host,
+ irq_hw_number_t hwirq);
+
+
+
+/***
+ * irq_alloc_virt - Allocate virtual irq numbers
+ * @host: host owning these new virtual irqs
+ * @count: number of consecutive numbers to allocate
+ * @hint: pass a hint number; the allocator will try to use a 1:1 mapping
+ *
+ * This is a low level function that is used internally by irq_create_mapping()
+ * and that can be used by some irq controllers implementations for things
+ * like allocating ranges of numbers for MSIs. The revmaps are left untouched.
+ */
+extern unsigned int irq_alloc_virt(struct irq_host *host,
+ unsigned int count,
+ unsigned int hint);
+
+/***
+ * irq_free_virt - Free virtual irq numbers
+ * @virq: virtual irq number of the first interrupt to free
+ * @count: number of interrupts to free
+ *
+ * This function is the opposite of irq_alloc_virt. It will not clear reverse
+ * maps; this should be done beforehand by unmapping the interrupt. In fact,
+ * all interrupts covered by the range being freed should have been unmapped
+ * prior to calling this.
+ */
+extern void irq_free_virt(unsigned int virq, unsigned int count);
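A hypothetical PIC driver sketch (not part of the patch) pulling the pieces above together: allocate a linear-revmap host and let the map() callback install the chip and flow handler on each newly created virq. my_pic_chip, my_pic_host_map and MY_PIC_NR_IRQS are invented names for illustration.

#define MY_PIC_NR_IRQS	32		/* invented source count */

static struct irq_chip my_pic_chip;	/* mask/unmask/ack ops omitted */

static int my_pic_host_map(struct irq_host *h, unsigned int virq,
			   irq_hw_number_t hw, unsigned int flags)
{
	set_irq_chip_data(virq, h->host_data);
	set_irq_chip_and_handler(virq, &my_pic_chip, handle_level_irq);
	return 0;
}

static struct irq_host_ops my_pic_host_ops = {
	.map	= my_pic_host_map,
};

static void __init my_pic_init(void)
{
	struct irq_host *host;

	host = irq_alloc_host(IRQ_HOST_MAP_LINEAR, MY_PIC_NR_IRQS,
			      &my_pic_host_ops, 0);
	if (!host)
		return;
	/* interrupt users then obtain a virq via
	 * irq_create_mapping(host, hwirq, 0) or irq_of_parse_and_map() */
}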
+
+
+/* -- OF helpers -- */
+
+/* irq_create_of_mapping - Map a hardware interrupt into linux virq space
+ * @controller: Device node of the interrupt controller
+ * @inspec: Interrupt specifier from the device-tree
+ * @intsize: Size of the interrupt specifier from the device-tree
+ *
+ * This function is identical to irq_create_mapping except that it takes
+ * as input information straight from the device-tree (typically the results
+ * of the of_irq_map_*() functions)
+ */
+extern unsigned int irq_create_of_mapping(struct device_node *controller,
+ u32 *intspec, unsigned int intsize);
+
+
+/* irq_of_parse_and_map - Parse and map an interrupt into linux virq space
+ * @device: Device node of the device whose interrupt is to be mapped
+ * @index: Index of the interrupt to map
+ *
+ * This function is a wrapper that chains of_irq_map_one() and
+ * irq_create_of_mapping() to make things easier for callers
+ */
+extern unsigned int irq_of_parse_and_map(struct device_node *dev, int index);
+
+/* -- End OF helpers -- */
+
+/***
+ * irq_early_init - Init irq remapping subsystem
+ */
+extern void irq_early_init(void);
+
+static __inline__ int irq_canonicalize(int irq)
{
- return virt_irq_to_real_map[virt_irq];
+ return irq;
}
-extern unsigned int real_irq_to_virt_slowpath(unsigned int real_irq);
+
+#else /* CONFIG_PPC_MERGE */
+
+/* This number is used when no interrupt has been assigned */
+#define NO_IRQ (-1)
+#define NO_IRQ_IGNORE (-2)
+
/*
- * List of interrupt controllers.
+ * These constants are used for passing information about interrupt
+ * signal polarity and level/edge sensing to the low-level PIC chip
+ * drivers.
*/
-#define IC_INVALID 0
-#define IC_OPEN_PIC 1
-#define IC_PPC_XIC 2
-#define IC_CELL_PIC 3
-#define IC_ISERIES 4
+#define IRQ_SENSE_MASK 0x1
+#define IRQ_SENSE_LEVEL 0x1 /* interrupt on active level */
+#define IRQ_SENSE_EDGE 0x0 /* interrupt triggered by edge */
-extern u64 ppc64_interrupt_controller;
+#define IRQ_POLARITY_MASK 0x2
+#define IRQ_POLARITY_POSITIVE 0x2 /* high level or low->high edge */
+#define IRQ_POLARITY_NEGATIVE 0x0 /* low level or high->low edge */
-#else /* 32-bit */
#if defined(CONFIG_40x)
#include <asm/ibm4xx.h>
#endif /* CONFIG_8260 */
-#endif
+#endif /* Whatever way too big #ifdef */
#define NR_MASK_WORDS ((NR_IRQS + 31) / 32)
/* pedantic: these are long because they are used with set_bit --RR */
extern unsigned long ppc_cached_irq_mask[NR_MASK_WORDS];
-extern atomic_t ppc_n_lost_interrupts;
-
-#define virt_irq_create_mapping(x) (x)
-
-#endif
/*
* Because many systems have two overlapping names spaces for
irq = 9;
return irq;
}
+#endif /* CONFIG_PPC_MERGE */
extern int distribute_irqs;
extern void irq_ctx_init(void);
extern void call_do_softirq(struct thread_info *tp);
-extern int call___do_IRQ(int irq, struct pt_regs *regs,
- struct thread_info *tp);
-
+extern int call_handle_irq(int irq, void *p1, void *p2,
+ struct thread_info *tp, void *func);
#else
#define irq_ctx_init()
--- /dev/null
+/*
+ * include/asm-powerpc/irqflags.h
+ *
+ * IRQ flags handling
+ *
+ * This file gets included from lowlevel asm headers too, to provide
+ * wrapped versions of the local_irq_*() APIs, based on the
+ * raw_local_irq_*() macros from the lowlevel headers.
+ */
+#ifndef _ASM_IRQFLAGS_H
+#define _ASM_IRQFLAGS_H
+
+/*
+ * Get definitions for raw_local_save_flags(x), etc.
+ */
+#include <asm-powerpc/hw_irq.h>
+
+/*
+ * Do the CPU's IRQ-state tracing from assembly code. We call a
+ * C function, so save all the C-clobbered registers:
+ */
+#ifdef CONFIG_TRACE_IRQFLAGS
+
+#error No support on PowerPC yet for CONFIG_TRACE_IRQFLAGS
+
+#else
+# define TRACE_IRQS_ON
+# define TRACE_IRQS_OFF
+#endif
+
+#endif
void (*show_percpuinfo)(struct seq_file *m, int i);
void (*init_IRQ)(void);
- int (*get_irq)(struct pt_regs *);
+ unsigned int (*get_irq)(struct pt_regs *);
#ifdef CONFIG_KEXEC
void (*kexec_cpu_down)(int crash_shutdown, int secondary);
#endif
#define MPIC_VEC_TIMER_1 248
#define MPIC_VEC_TIMER_0 247
-/* Type definition of the cascade handler */
-typedef int (*mpic_cascade_t)(struct pt_regs *regs, void *data);
-
#ifdef CONFIG_MPIC_BROKEN_U3
/* Fixup table entry */
struct mpic_irq_fixup
/* The instance data of a given MPIC */
struct mpic
{
+ /* The device node of the interrupt controller */
+ struct device_node *of_node;
+
+ /* The remapper for this MPIC */
+ struct irq_host *irqhost;
+
/* The "linux" controller struct */
- hw_irq_controller hc_irq;
+ struct irq_chip hc_irq;
+#ifdef CONFIG_MPIC_BROKEN_U3
+ struct irq_chip hc_ht_irq;
+#endif
#ifdef CONFIG_SMP
- hw_irq_controller hc_ipi;
+ struct irq_chip hc_ipi;
#endif
const char *name;
/* Flags */
unsigned int isu_size;
unsigned int isu_shift;
unsigned int isu_mask;
- /* Offset of irq vector numbers */
- unsigned int irq_offset;
unsigned int irq_count;
- /* Offset of ipi vector numbers */
- unsigned int ipi_offset;
/* Number of sources */
unsigned int num_sources;
/* Number of CPUs */
unsigned int num_cpus;
- /* cascade handler */
- mpic_cascade_t cascade;
- void *cascade_data;
- unsigned int cascade_vec;
- /* senses array */
+ /* default senses array */
unsigned char *senses;
unsigned int senses_count;
* The values in the array start at the first source of the MPIC,
* that is senses[0] correspond to linux irq "irq_offset".
*/
-extern struct mpic *mpic_alloc(unsigned long phys_addr,
+extern struct mpic *mpic_alloc(struct device_node *node,
+ unsigned long phys_addr,
unsigned int flags,
unsigned int isu_size,
- unsigned int irq_offset,
unsigned int irq_count,
- unsigned int ipi_offset,
- unsigned char *senses,
- unsigned int senses_num,
const char *name);
/* Assign ISUs, to call before mpic_init()
extern void mpic_assign_isu(struct mpic *mpic, unsigned int isu_num,
unsigned long phys_addr);
+/* Set default sense codes
+ *
+ * @mpic: controller
+ * @senses: array of sense codes
+ * @count: size of above array
+ *
+ * Optionally provide an array (indexed on hardware interrupt numbers
+ * for this MPIC) of default sense codes for the chip. Those are linux
+ * sense codes IRQ_TYPE_*
+ *
+ * The driver gets ownership of the pointer; don't dispose of it or
+ * anything like that. __init only.
+ */
+extern void mpic_set_default_senses(struct mpic *mpic, u8 *senses, int count);
+
+
/* Initialize the controller. After this has been called, none of the above
* should be called again for this mpic
*/
extern void mpic_init(struct mpic *mpic);
-/* Setup a cascade. Currently, only one cascade is supported this
- * way, though you can always do a normal request_irq() and add
- * other cascades this way. You should call this _after_ having
- * added all the ISUs
- *
- * @irq_no: "linux" irq number of the cascade (that is offset'ed vector)
- * @handler: cascade handler function
- */
-extern void mpic_setup_cascade(unsigned int irq_no, mpic_cascade_t hanlder,
- void *data);
-
/*
* All of the following functions must only be used after the
* ISUs have been assigned and the controller fully initialized
void smp_mpic_message_pass(int target, int msg);
/* Fetch interrupt from a given mpic */
-extern int mpic_get_one_irq(struct mpic *mpic, struct pt_regs *regs);
+extern unsigned int mpic_get_one_irq(struct mpic *mpic, struct pt_regs *regs);
/* This one gets to the primary mpic */
-extern int mpic_get_irq(struct pt_regs *regs);
+extern unsigned int mpic_get_irq(struct pt_regs *regs);
/* Set the EPIC clock ratio */
void mpic_set_clk_ratio(struct mpic *mpic, u32 clock_ratio);
/* Enable/Disable EPIC serial interrupt mode */
void mpic_set_serial_int(struct mpic *mpic, int enable);
-/* global mpic for pSeries */
-extern struct mpic *pSeries_mpic;
-
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_MPIC_H */
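For illustration only (not from the patch): with the remapper, a platform is expected to hand mpic_alloc() the controller's device node and no longer passes vector/IPI offsets or registers a cascade handler. The flag and sizes below are placeholders.

static void __init example_mpic_setup(struct device_node *np,
				      unsigned long paddr)
{
	struct mpic *mpic;

	mpic = mpic_alloc(np, paddr, MPIC_PRIMARY /* flag assumed */,
			  0 /* isu_size */, 0 /* irq_count */, " MPIC ");
	if (mpic)
		mpic_init(mpic);
}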
#define __per_cpu_offset(cpu) (paca[cpu].data_offset)
#define __my_cpu_offset() get_paca()->data_offset
+#define per_cpu_offset(x) (__per_cpu_offset(x))
/* Separate out the type, so (int[3], foo) works. */
#define DEFINE_PER_CPU(type, name) \
typedef u32 phandle;
typedef u32 ihandle;
-struct interrupt_info {
- int line;
- int sense; /* +ve/-ve logic, edge or level, etc. */
-};
-
struct property {
char *name;
int length;
char *type;
phandle node;
phandle linux_phandle;
- int n_intrs;
- struct interrupt_info *intrs;
char *full_name;
struct property *properties;
extern void early_init_devtree(void *);
extern int device_is_compatible(struct device_node *device, const char *);
extern int machine_is_compatible(const char *compat);
-extern unsigned char *get_property(struct device_node *node, const char *name,
- int *lenp);
+extern void *get_property(struct device_node *node, const char *name,
+ int *lenp);
extern void print_properties(struct device_node *node);
extern int prom_n_addr_cells(struct device_node* np);
extern int prom_n_size_cells(struct device_node* np);
*/
+/* Helper to read a big number */
+static inline u64 of_read_number(u32 *cell, int size)
+{
+ u64 r = 0;
+ while (size--)
+ r = (r << 32) | *(cell++);
+ return r;
+}
+
/* Translate an OF address block into a CPU physical address
*/
#define OF_BAD_ADDR ((u64)-1)
/* CPU OF node matching */
struct device_node *of_get_cpu_node(int cpu, unsigned int *thread);
+
+/*
+ * OF interrupt mapping
+ */
+
+/* This structure is returned when an interrupt is mapped. The controller
+ * field needs to be put() after use
+ */
+
+#define OF_MAX_IRQ_SPEC 4 /* We handle specifiers of at most 4 cells */
+
+struct of_irq {
+ struct device_node *controller; /* Interrupt controller node */
+ u32 size; /* Specifier size */
+ u32 specifier[OF_MAX_IRQ_SPEC]; /* Specifier copy */
+};
+
+/***
+ * of_irq_map_init - Initialize the irq remapper
+ * @flags: flags defining workarounds to enable
+ *
+ * Some machines have bugs in the device-tree which require certain workarounds
+ * to be applied. Call this before any interrupt mapping attempts to enable
+ * those workarounds.
+ */
+#define OF_IMAP_OLDWORLD_MAC 0x00000001
+#define OF_IMAP_NO_PHANDLE 0x00000002
+
+extern void of_irq_map_init(unsigned int flags);
+
+/***
+ * of_irq_map_raw - Low level interrupt tree parsing
+ * @parent: the device interrupt parent
+ * @intspec: interrupt specifier ("interrupts" property of the device)
+ * @addr: address specifier (start of "reg" property of the device)
+ * @out_irq: structure of_irq filled by this function
+ *
+ * Returns 0 on success and a negative number on error
+ *
+ * This function is a low-level interrupt tree walking function. It
+ * can be used to do a partial walk with synthesized reg and interrupts
+ * properties, for example when resolving PCI interrupts when no device
+ * node exists for the parent.
+ *
+ */
+
+extern int of_irq_map_raw(struct device_node *parent, u32 *intspec, u32 *addr,
+ struct of_irq *out_irq);
+
+
+/***
+ * of_irq_map_one - Resolve an interrupt for a device
+ * @device: the device whose interrupt is to be resolved
+ * @index: index of the interrupt to resolve
+ * @out_irq: structure of_irq filled by this function
+ *
+ * This function resolves an interrupt, walking the tree, for a given
+ * device-tree node. It's the high level counterpart to of_irq_map_raw().
+ * It also implements the workarounds for OldWorld Macs.
+ */
+extern int of_irq_map_one(struct device_node *device, int index,
+ struct of_irq *out_irq);
+
+/***
+ * of_irq_map_pci - Resolve the interrupt for a PCI device
+ * @pdev: the device whose interrupt is to be resolved
+ * @out_irq: structure of_irq filled by this function
+ *
+ * This function resolves the PCI interrupt for a given PCI device. If a
+ * device-node exists for a given pci_dev, it will use normal OF tree
+ * walking. If not, it will implement standard swizzling and walk up the
+ * PCI tree until an device-node is found, at which point it will finish
+ * resolving using the OF tree walking.
+ */
+struct pci_dev;
+extern int of_irq_map_pci(struct pci_dev *pdev, struct of_irq *out_irq);
+
+
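As a rough usage sketch (not from the patch), resolving a device's first interrupt and releasing the controller reference could look like the following, assuming of_irq_map_one() follows the same 0-on-success convention documented for of_irq_map_raw(); the device_node pointer is illustrative:

	/* Hypothetical caller: map interrupt index 0 of a device node */
	struct of_irq oirq;

	if (of_irq_map_one(dev_node, 0, &oirq) == 0) {
		/* oirq.specifier[0..oirq.size-1] describes the interrupt */
		of_node_put(oirq.controller);	/* controller must be put() after use */
	}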
#endif /* __KERNEL__ */
#endif /* _POWERPC_PROM_H */
#define RWSEM_ACTIVE_WRITE_BIAS (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)
spinlock_t wait_lock;
struct list_head wait_list;
-#if RWSEM_DEBUG
- int debug;
-#endif
};
-/*
- * initialisation
- */
-#if RWSEM_DEBUG
-#define __RWSEM_DEBUG_INIT , 0
-#else
-#define __RWSEM_DEBUG_INIT /* */
-#endif
-
#define __RWSEM_INITIALIZER(name) \
{ RWSEM_UNLOCKED_VALUE, SPIN_LOCK_UNLOCKED, \
- LIST_HEAD_INIT((name).wait_list) \
- __RWSEM_DEBUG_INIT }
+ LIST_HEAD_INIT((name).wait_list) }
#define DECLARE_RWSEM(name) \
struct rw_semaphore name = __RWSEM_INITIALIZER(name)
sem->count = RWSEM_UNLOCKED_VALUE;
spin_lock_init(&sem->wait_lock);
INIT_LIST_HEAD(&sem->wait_list);
-#if RWSEM_DEBUG
- sem->debug = 0;
-#endif
}
/*
* SA_FLAGS values:
*
* SA_ONSTACK is not currently supported, but will allow sigaltstack(2).
- * SA_INTERRUPT is a no-op, but left due to historical reasons. Use the
* SA_RESTART flag to get restarting signals (which were the default long ago)
* SA_NOCLDSTOP flag to turn off SIGCHLD when children stop.
* SA_RESETHAND clears the handler when the signal is delivered.
#define SA_NOMASK SA_NODEFER
#define SA_ONESHOT SA_RESETHAND
-#define SA_INTERRUPT 0x20000000u /* dummy -- ignored */
#define SA_RESTORER 0x04000000U
struct list_head sched_list;
int number;
int nid;
+ unsigned int irqs[3];
u32 isrc;
u32 node;
u64 flags;
static int fd_request_irq(void)
{
if (can_use_virtual_dma)
- return request_irq(FLOPPY_IRQ, floppy_hardint,SA_INTERRUPT,
- "floppy", NULL);
+ return request_irq(FLOPPY_IRQ, floppy_hardint,
+ IRQF_DISABLED, "floppy", NULL);
else
- return request_irq(FLOPPY_IRQ, floppy_interrupt, SA_INTERRUPT,
- "floppy", NULL);
+ return request_irq(FLOPPY_IRQ, floppy_interrupt,
+ IRQF_DISABLED, "floppy", NULL);
}
static int vdma_dma_setup(char *addr, unsigned long size, int mode, int io)
--- /dev/null
+/*
+ * include/asm-s390/irqflags.h
+ *
+ * Copyright (C) IBM Corp. 2006
+ * Author(s): Heiko Carstens <heiko.carstens@de.ibm.com>
+ */
+
+#ifndef __ASM_IRQFLAGS_H
+#define __ASM_IRQFLAGS_H
+
+#ifdef __KERNEL__
+
+/* interrupt control.. */
+#define raw_local_irq_enable() ({ \
+ unsigned long __dummy; \
+ __asm__ __volatile__ ( \
+ "stosm 0(%1),0x03" \
+ : "=m" (__dummy) : "a" (&__dummy) : "memory" ); \
+ })
+
+#define raw_local_irq_disable() ({ \
+ unsigned long __flags; \
+ __asm__ __volatile__ ( \
+ "stnsm 0(%1),0xfc" : "=m" (__flags) : "a" (&__flags) ); \
+ __flags; \
+ })
+
+#define raw_local_save_flags(x) \
+ __asm__ __volatile__("stosm 0(%1),0" : "=m" (x) : "a" (&x), "m" (x) )
+
+#define raw_local_irq_restore(x) \
+ __asm__ __volatile__("ssm 0(%0)" : : "a" (&x), "m" (x) : "memory")
+
+#define raw_irqs_disabled() \
+({ \
+ unsigned long flags; \
+ local_save_flags(flags); \
+ !((flags >> __FLAG_SHIFT) & 3); \
+})
+
+static inline int raw_irqs_disabled_flags(unsigned long flags)
+{
+ return !((flags >> __FLAG_SHIFT) & 3);
+}
+
+/* For spinlocks etc */
+#define raw_local_irq_save(x) ((x) = raw_local_irq_disable())
+
+#endif /* __KERNEL__ */
+#endif /* __ASM_IRQFLAGS_H */
#define __get_cpu_var(var) __reloc_hide(var,S390_lowcore.percpu_offset)
#define __raw_get_cpu_var(var) __reloc_hide(var,S390_lowcore.percpu_offset)
#define per_cpu(var,cpu) __reloc_hide(var,__per_cpu_offset[cpu])
+#define per_cpu_offset(x) (__per_cpu_offset[x])
/* A macro to avoid #include hell... */
#define percpu_modcopy(pcpudst, src, size) \
signed long count;
spinlock_t wait_lock;
struct list_head wait_list;
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+ struct lockdep_map dep_map;
+#endif
};
#ifndef __s390x__
/*
* initialisation
*/
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+# define __RWSEM_DEP_MAP_INIT(lockname) , .dep_map = { .name = #lockname }
+#else
+# define __RWSEM_DEP_MAP_INIT(lockname)
+#endif
+
#define __RWSEM_INITIALIZER(name) \
-{ RWSEM_UNLOCKED_VALUE, SPIN_LOCK_UNLOCKED, LIST_HEAD_INIT((name).wait_list) }
+{ RWSEM_UNLOCKED_VALUE, SPIN_LOCK_UNLOCKED, LIST_HEAD_INIT((name).wait_list) \
+ __RWSEM_DEP_MAP_INIT(name) }
#define DECLARE_RWSEM(name) \
struct rw_semaphore name = __RWSEM_INITIALIZER(name)
INIT_LIST_HEAD(&sem->wait_list);
}
+extern void __init_rwsem(struct rw_semaphore *sem, const char *name,
+ struct lock_class_key *key);
+
+#define init_rwsem(sem) \
+do { \
+ static struct lock_class_key __key; \
+ \
+ __init_rwsem((sem), #sem, &__key); \
+} while (0)
+
+
/*
* lock for reading
*/
/*
* lock for writing
*/
-static inline void __down_write(struct rw_semaphore *sem)
+static inline void __down_write_nested(struct rw_semaphore *sem, int subclass)
{
signed long old, new, tmp;
rwsem_down_write_failed(sem);
}
+static inline void __down_write(struct rw_semaphore *sem)
+{
+ __down_write_nested(sem, 0);
+}
+
/*
* trylock for writing -- returns 1 if successful, 0 if contention
*/
static inline void sema_init (struct semaphore *sem, int val)
{
- *sem = (struct semaphore) __SEMAPHORE_INITIALIZER((*sem),val);
+ atomic_set(&sem->count, val);
+ init_waitqueue_head(&sem->wait);
}
static inline void init_MUTEX (struct semaphore *sem)
* SA_FLAGS values:
*
* SA_ONSTACK indicates that a registered stack_t will be used.
- * SA_INTERRUPT is a no-op, but left due to historical reasons. Use the
* SA_RESTART flag to get restarting signals (which were the default long ago)
* SA_NOCLDSTOP flag to turn off SIGCHLD when children stop.
* SA_RESETHAND clears the handler when the signal is delivered.
#define SA_NOMASK SA_NODEFER
#define SA_ONESHOT SA_RESETHAND
-#define SA_INTERRUPT 0x20000000 /* dummy -- ignored */
#define SA_RESTORER 0x04000000
#define set_mb(var, value) do { var = value; mb(); } while (0)
#define set_wmb(var, value) do { var = value; wmb(); } while (0)
-/* interrupt control.. */
-#define local_irq_enable() ({ \
- unsigned long __dummy; \
- __asm__ __volatile__ ( \
- "stosm 0(%1),0x03" \
- : "=m" (__dummy) : "a" (&__dummy) : "memory" ); \
- })
-
-#define local_irq_disable() ({ \
- unsigned long __flags; \
- __asm__ __volatile__ ( \
- "stnsm 0(%1),0xfc" : "=m" (__flags) : "a" (&__flags) ); \
- __flags; \
- })
-
-#define local_save_flags(x) \
- __asm__ __volatile__("stosm 0(%1),0" : "=m" (x) : "a" (&x), "m" (x) )
-
-#define local_irq_restore(x) \
- __asm__ __volatile__("ssm 0(%0)" : : "a" (&x), "m" (x) : "memory")
-
-#define irqs_disabled() \
-({ \
- unsigned long flags; \
- local_save_flags(flags); \
- !((flags >> __FLAG_SHIFT) & 3); \
-})
-
#ifdef __s390x__
#define __ctl_load(array, low, high) ({ \
})
#endif /* __s390x__ */
-/* For spinlocks etc */
-#define local_irq_save(x) ((x) = local_irq_disable())
+#include <linux/irqflags.h>
/*
* Use to set psw mask except for the first byte which
#endif /* __KERNEL__ */
#endif
-
static int fd_request_irq(void)
{
if(can_use_virtual_dma)
- return request_irq(FLOPPY_IRQ, floppy_hardint,SA_INTERRUPT,
- "floppy", NULL);
+ return request_irq(FLOPPY_IRQ, floppy_hardint,
+ IRQF_DISABLED, "floppy", NULL);
else
- return request_irq(FLOPPY_IRQ, floppy_interrupt, SA_INTERRUPT,
- "floppy", NULL);
-
+ return request_irq(FLOPPY_IRQ, floppy_interrupt,
+ IRQF_DISABLED, "floppy", NULL);
}
static unsigned long dma_mem_alloc(unsigned long size)
#define AUX_IRQ 12
#define aux_request_irq(hand, dev_id) \
- request_irq(AUX_IRQ, hand, SA_SHIRQ, "PS2 Mouse", dev_id)
+ request_irq(AUX_IRQ, hand, IRQF_SHARED, "PS2 Mouse", dev_id)
#define aux_free_irq(dev_id) free_irq(AUX_IRQ, dev_id)
#define RWSEM_ACTIVE_WRITE_BIAS (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)
spinlock_t wait_lock;
struct list_head wait_list;
-#if RWSEM_DEBUG
- int debug;
-#endif
};
-/*
- * initialisation
- */
-#if RWSEM_DEBUG
-#define __RWSEM_DEBUG_INIT , 0
-#else
-#define __RWSEM_DEBUG_INIT /* */
-#endif
-
#define __RWSEM_INITIALIZER(name) \
{ RWSEM_UNLOCKED_VALUE, SPIN_LOCK_UNLOCKED, \
- LIST_HEAD_INIT((name).wait_list) \
- __RWSEM_DEBUG_INIT }
+ LIST_HEAD_INIT((name).wait_list) }
#define DECLARE_RWSEM(name) \
struct rw_semaphore name = __RWSEM_INITIALIZER(name)
sem->count = RWSEM_UNLOCKED_VALUE;
spin_lock_init(&sem->wait_lock);
INIT_LIST_HEAD(&sem->wait_list);
-#if RWSEM_DEBUG
- sem->debug = 0;
-#endif
}
/*
* SA_FLAGS values:
*
* SA_ONSTACK indicates that a registered stack_t will be used.
- * SA_INTERRUPT is a no-op, but left due to historical reasons. Use the
* SA_RESTART flag to get restarting signals (which were the default long ago)
* SA_NOCLDSTOP flag to turn off SIGCHLD when children stop.
* SA_RESETHAND clears the handler when the signal is delivered.
#define SA_NOMASK SA_NODEFER
#define SA_ONESHOT SA_RESETHAND
-#define SA_INTERRUPT 0x20000000 /* dummy -- ignored */
#define SA_RESTORER 0x04000000
*/
#define switch_to(prev, next, last) do { \
- task_t *__last; \
+ struct task_struct *__last; \
register unsigned long *__ts1 __asm__ ("r1") = &prev->thread.sp; \
register unsigned long *__ts2 __asm__ ("r2") = &prev->thread.pc; \
register unsigned long *__ts4 __asm__ ("r4") = (unsigned long *)prev; \
#endif
#define aux_request_irq(hand, dev_id) \
- request_irq(AUX_IRQ, hand, SA_SHIRQ, "PS2 Mouse", dev_id)
+ request_irq(AUX_IRQ, hand, IRQF_SHARED, "PS2 Mouse", dev_id)
#define aux_free_irq(dev_id) free_irq(AUX_IRQ, dev_id)
* SA_FLAGS values:
*
* SA_ONSTACK indicates that a registered stack_t will be used.
- * SA_INTERRUPT is a no-op, but left due to historical reasons. Use the
* SA_RESTART flag to get restarting signals (which were the default long ago)
* SA_NOCLDSTOP flag to turn off SIGCHLD when children stop.
* SA_RESETHAND clears the handler when the signal is delivered.
#define SA_NOMASK SA_NODEFER
#define SA_ONESHOT SA_RESETHAND
-#define SA_INTERRUPT 0x20000000 /* dummy -- ignored */
#define SA_RESTORER 0x04000000
if(!once) {
once = 1;
- error = request_fast_irq(FLOPPY_IRQ, floppy_hardint, SA_INTERRUPT, "floppy");
+ error = request_fast_irq(FLOPPY_IRQ, floppy_hardint,
+ IRQF_DISABLED, "floppy");
return ((error == 0) ? 0 : -1);
} else return 0;
}
* usage of signal stacks by using the (now obsolete) sa_restorer field in
* the sigaction structure as a stack pointer. This is now possible due to
* the changes in signal handling. LBT 010493.
- * SA_INTERRUPT is a no-op, but left due to historical reasons. Use the
* SA_RESTART flag to get restarting signals (which were the default long ago)
- * SA_SHIRQ flag is for shared interrupt support on PCI and EISA.
*/
#define SA_NOCLDSTOP _SV_IGNCHILD
#define SA_STACK _SV_SSTACK
#define SA_ONSTACK _SV_SSTACK
#define SA_RESTART _SV_INTR
#define SA_ONESHOT _SV_RESET
-#define SA_INTERRUPT 0x10u
#define SA_NOMASK 0x20u
#define SA_NOCLDWAIT 0x100u
#define SA_SIGINFO 0x200u
once = 1;
error = request_irq(FLOPPY_IRQ, sparc_floppy_irq,
- SA_INTERRUPT, "floppy", NULL);
+ IRQF_DISABLED, "floppy", NULL);
return ((error == 0) ? 0 : -1);
}
extern unsigned long __per_cpu_shift;
#define __per_cpu_offset(__cpu) \
(__per_cpu_base + ((unsigned long)(__cpu) << __per_cpu_shift))
+#define per_cpu_offset(x) (__per_cpu_offset(x))
/* Separate out the type, so (int[3], foo) works. */
#define DEFINE_PER_CPU(type, name) \
* usage of signal stacks by using the (now obsolete) sa_restorer field in
* the sigaction structure as a stack pointer. This is now possible due to
* the changes in signal handling. LBT 010493.
- * SA_INTERRUPT is a no-op, but left due to historical reasons. Use the
* SA_RESTART flag to get restarting signals (which were the default long ago)
- * SA_SHIRQ flag is for shared interrupt support on PCI and EISA.
*/
#define SA_NOCLDSTOP _SV_IGNCHILD
#define SA_STACK _SV_SSTACK
#define SA_ONSTACK _SV_SSTACK
#define SA_RESTART _SV_INTR
#define SA_ONESHOT _SV_RESET
-#define SA_INTERRUPT 0x10u
#define SA_NOMASK 0x20u
#define SA_NOCLDWAIT 0x100u
#define SA_SIGINFO 0x200u
* SA_FLAGS values:
*
* SA_ONSTACK indicates that a registered stack_t will be used.
- * SA_INTERRUPT is a no-op, but left due to historical reasons. Use the
* SA_RESTART flag to get restarting signals (which were the default long ago)
* SA_NOCLDSTOP flag to turn off SIGCHLD when children stop.
* SA_RESETHAND clears the handler when the signal is delivered.
#define SA_NOMASK SA_NODEFER
#define SA_ONESHOT SA_RESETHAND
-#define SA_INTERRUPT 0x20000000 /* dummy -- ignored */
#define SA_RESTORER 0x04000000
static int fd_request_irq(void)
{
if(can_use_virtual_dma)
- return request_irq(FLOPPY_IRQ, floppy_hardint,SA_INTERRUPT,
- "floppy", NULL);
+ return request_irq(FLOPPY_IRQ, floppy_hardint,
+ IRQF_DISABLED, "floppy", NULL);
else
- return request_irq(FLOPPY_IRQ, floppy_interrupt, SA_INTERRUPT,
- "floppy", NULL);
+ return request_irq(FLOPPY_IRQ, floppy_interrupt,
+ IRQF_DISABLED, "floppy", NULL);
}
static unsigned long dma_mem_alloc(unsigned long size)
--- /dev/null
+/*
+ * include/asm-x86_64/irqflags.h
+ *
+ * IRQ flags handling
+ *
+ * This file gets included from lowlevel asm headers too, to provide
+ * wrapped versions of the local_irq_*() APIs, based on the
+ * raw_local_irq_*() functions from the lowlevel headers.
+ */
+#ifndef _ASM_IRQFLAGS_H
+#define _ASM_IRQFLAGS_H
+
+#ifndef __ASSEMBLY__
+/*
+ * Interrupt control:
+ */
+
+static inline unsigned long __raw_local_save_flags(void)
+{
+ unsigned long flags;
+
+ __asm__ __volatile__(
+ "# __raw_save_flags\n\t"
+ "pushfq ; popq %q0"
+ : "=g" (flags)
+ : /* no input */
+ : "memory"
+ );
+
+ return flags;
+}
+
+#define raw_local_save_flags(flags) \
+ do { (flags) = __raw_local_save_flags(); } while (0)
+
+static inline void raw_local_irq_restore(unsigned long flags)
+{
+ __asm__ __volatile__(
+ "pushq %0 ; popfq"
+ : /* no output */
+ :"g" (flags)
+ :"memory", "cc"
+ );
+}
+
+#ifdef CONFIG_X86_VSMP
+
+/*
+ * Interrupt control for the VSMP architecture:
+ */
+
+static inline void raw_local_irq_disable(void)
+{
+ unsigned long flags = __raw_local_save_flags();
+
+ raw_local_irq_restore((flags & ~(1 << 9)) | (1 << 18));
+}
+
+static inline void raw_local_irq_enable(void)
+{
+ unsigned long flags = __raw_local_save_flags();
+
+ raw_local_irq_restore((flags | (1 << 9)) & ~(1 << 18));
+}
+
+static inline int raw_irqs_disabled_flags(unsigned long flags)
+{
+ return !(flags & (1<<9)) || (flags & (1 << 18));
+}
+
+#else /* CONFIG_X86_VSMP */
+
+static inline void raw_local_irq_disable(void)
+{
+ __asm__ __volatile__("cli" : : : "memory");
+}
+
+static inline void raw_local_irq_enable(void)
+{
+ __asm__ __volatile__("sti" : : : "memory");
+}
+
+static inline int raw_irqs_disabled_flags(unsigned long flags)
+{
+ return !(flags & (1 << 9));
+}
+
+#endif
+
+/*
+ * For spinlocks, etc.:
+ */
+
+static inline unsigned long __raw_local_irq_save(void)
+{
+ unsigned long flags = __raw_local_save_flags();
+
+ raw_local_irq_disable();
+
+ return flags;
+}
+
+#define raw_local_irq_save(flags) \
+ do { (flags) = __raw_local_irq_save(); } while (0)
+
+static inline int raw_irqs_disabled(void)
+{
+ unsigned long flags = __raw_local_save_flags();
+
+ return raw_irqs_disabled_flags(flags);
+}
+
+/*
+ * Used in the idle loop; sti takes one instruction cycle
+ * to complete:
+ */
+static inline void raw_safe_halt(void)
+{
+ __asm__ __volatile__("sti; hlt" : : : "memory");
+}
+
+/*
+ * Used when interrupts are already enabled or to
+ * shutdown the processor:
+ */
+static inline void halt(void)
+{
+ __asm__ __volatile__("hlt": : :"memory");
+}
+
+#else /* __ASSEMBLY__: */
+# ifdef CONFIG_TRACE_IRQFLAGS
+# define TRACE_IRQS_ON call trace_hardirqs_on_thunk
+# define TRACE_IRQS_OFF call trace_hardirqs_off_thunk
+# else
+# define TRACE_IRQS_ON
+# define TRACE_IRQS_OFF
+# endif
+#endif
+
+#endif
return atomic_notifier_call_chain(&die_chain, val, &args);
}
-extern int printk_address(unsigned long address);
+extern void printk_address(unsigned long address);
extern void die(const char *,struct pt_regs *,long);
extern void __die(const char *,struct pt_regs *,long);
extern void show_registers(struct pt_regs *regs);
#define __per_cpu_offset(cpu) (cpu_pda(cpu)->data_offset)
#define __my_cpu_offset() read_pda(data_offset)
+#define per_cpu_offset(x) (__per_cpu_offset(x))
+
/* Separate out the type, so (int[3], foo) works. */
#define DEFINE_PER_CPU(type, name) \
__attribute__((__section__(".data.percpu"))) __typeof__(type) per_cpu__##name
* SA_FLAGS values:
*
* SA_ONSTACK indicates that a registered stack_t will be used.
- * SA_INTERRUPT is a no-op, but left due to historical reasons. Use the
* SA_RESTART flag to get restarting signals (which were the default long ago)
* SA_NOCLDSTOP flag to turn off SIGCHLD when children stop.
* SA_RESETHAND clears the handler when the signal is delivered.
#define SA_NOMASK SA_NODEFER
#define SA_ONESHOT SA_RESETHAND
-#define SA_INTERRUPT 0x20000000 /* dummy -- ignored */
#define SA_RESTORER 0x04000000
#define warn_if_not_ulong(x) do { unsigned long foo; (void) (&(x) == &foo); } while (0)
-/* interrupt control.. */
-#define local_save_flags(x) do { warn_if_not_ulong(x); __asm__ __volatile__("# save_flags \n\t pushfq ; popq %q0":"=g" (x): /* no input */ :"memory"); } while (0)
-#define local_irq_restore(x) __asm__ __volatile__("# restore_flags \n\t pushq %0 ; popfq": /* no output */ :"g" (x):"memory", "cc")
-
-#ifdef CONFIG_X86_VSMP
-/* Interrupt control for VSMP architecture */
-#define local_irq_disable() do { unsigned long flags; local_save_flags(flags); local_irq_restore((flags & ~(1 << 9)) | (1 << 18)); } while (0)
-#define local_irq_enable() do { unsigned long flags; local_save_flags(flags); local_irq_restore((flags | (1 << 9)) & ~(1 << 18)); } while (0)
-
-#define irqs_disabled() \
-({ \
- unsigned long flags; \
- local_save_flags(flags); \
- (flags & (1<<18)) || !(flags & (1<<9)); \
-})
-
-/* For spinlocks etc */
-#define local_irq_save(x) do { local_save_flags(x); local_irq_restore((x & ~(1 << 9)) | (1 << 18)); } while (0)
-#else /* CONFIG_X86_VSMP */
-#define local_irq_disable() __asm__ __volatile__("cli": : :"memory")
-#define local_irq_enable() __asm__ __volatile__("sti": : :"memory")
-
-#define irqs_disabled() \
-({ \
- unsigned long flags; \
- local_save_flags(flags); \
- !(flags & (1<<9)); \
-})
-
-/* For spinlocks etc */
-#define local_irq_save(x) do { warn_if_not_ulong(x); __asm__ __volatile__("# local_irq_save \n\t pushfq ; popq %0 ; cli":"=g" (x): /* no input */ :"memory"); } while (0)
-#endif
-
-/* used in the idle loop; sti takes one instruction cycle to complete */
-#define safe_halt() __asm__ __volatile__("sti; hlt": : :"memory")
-/* used when interrupts are already enabled or to shutdown the processor */
-#define halt() __asm__ __volatile__("hlt": : :"memory")
+#include <linux/irqflags.h>
void cpu_idle_wait(void);
#define RWSEM_ACTIVE_WRITE_BIAS (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)
spinlock_t wait_lock;
struct list_head wait_list;
-#if RWSEM_DEBUG
- int debug;
-#endif
};
-/*
- * initialisation
- */
-#if RWSEM_DEBUG
-#define __RWSEM_DEBUG_INIT , 0
-#else
-#define __RWSEM_DEBUG_INIT /* */
-#endif
-
#define __RWSEM_INITIALIZER(name) \
{ RWSEM_UNLOCKED_VALUE, SPIN_LOCK_UNLOCKED, \
- LIST_HEAD_INIT((name).wait_list) \
- __RWSEM_DEBUG_INIT }
+ LIST_HEAD_INIT((name).wait_list) }
#define DECLARE_RWSEM(name) \
struct rw_semaphore name = __RWSEM_INITIALIZER(name)
sem->count = RWSEM_UNLOCKED_VALUE;
spin_lock_init(&sem->wait_lock);
INIT_LIST_HEAD(&sem->wait_list);
-#if RWSEM_DEBUG
- sem->debug = 0;
-#endif
}
/*
* SA_FLAGS values:
*
* SA_ONSTACK indicates that a registered stack_t will be used.
- * SA_INTERRUPT is a no-op, but left due to historical reasons. Use the
* SA_RESTART flag to get restarting signals (which were the default long ago)
* SA_NOCLDSTOP flag to turn off SIGCHLD when children stop.
* SA_RESETHAND clears the handler when the signal is delivered.
#define SA_NOMASK SA_NODEFER
#define SA_ONESHOT SA_RESETHAND
-#define SA_INTERRUPT 0x20000000 /* dummy -- ignored */
#define SA_RESTORER 0x04000000
#define SIGSTKSZ 8192
#ifndef __ASSEMBLY__
-#ifdef __KERNEL__
-
-/*
- * These values of sa_flags are used only by the kernel as part of the
- * irq handling routines.
- *
- * SA_INTERRUPT is also used by the irq handling routines.
- * SA_SHIRQ is for shared interrupt support on PCI and EISA.
- */
-#define SA_SAMPLE_RANDOM SA_RESTART
-#define SA_SHIRQ 0x04000000
-#define SA_PROBEIRQ 0x08000000
-#endif
#define SIG_BLOCK 0 /* for blocking signals */
#define SIG_UNBLOCK 1 /* for unblocking signals */
#define DECLARE_COMPLETION(work) \
struct completion work = COMPLETION_INITIALIZER(work)
+/*
+ * Lockdep needs to run a non-constant initializer for on-stack
+ * completions - so we use the _ONSTACK() variant for those that
+ * are on the kernel stack:
+ */
+#ifdef CONFIG_LOCKDEP
+# define DECLARE_COMPLETION_ONSTACK(work) \
+ struct completion work = ({ init_completion(&work); work; })
+#else
+# define DECLARE_COMPLETION_ONSTACK(work) DECLARE_COMPLETION(work)
+#endif
+
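A short sketch of the intended use, assuming a caller that hands the completion to asynchronous work; start_async_work() is a hypothetical helper that eventually calls complete():

	/* Hypothetical example: lockdep-friendly on-stack completion */
	void wait_for_work(void)
	{
		DECLARE_COMPLETION_ONSTACK(done);

		start_async_work(&done);	/* assumed to call complete(&done) later */
		wait_for_completion(&done);
	}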
static inline void init_completion(struct completion *x)
{
x->done = 0;
unsigned char d_iname[DNAME_INLINE_LEN_MIN]; /* small names */
};
+/*
+ * dentry->d_lock spinlock nesting subclasses:
+ *
+ * 0: normal
+ * 1: nested
+ */
+enum dentry_d_lock_class
+{
+ DENTRY_D_LOCK_NORMAL, /* implicitly used by plain spin_lock() APIs. */
+ DENTRY_D_LOCK_NESTED
+};
+
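Where two d_lock instances must be held at once, the nested subclass would be passed to spin_lock_nested(); a hedged sketch with illustrative dentry pointers:

	/* Hypothetical example: annotate nested d_lock acquisition for the validator */
	spin_lock(&parent->d_lock);			/* DENTRY_D_LOCK_NORMAL, implicit */
	spin_lock_nested(&dentry->d_lock, DENTRY_D_LOCK_NESTED);
	/* ... operate on both dentries ... */
	spin_unlock(&dentry->d_lock);
	spin_unlock(&parent->d_lock);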
struct dentry_operations {
int (*d_revalidate)(struct dentry *, struct nameidata *);
int (*d_hash) (struct dentry *, struct qstr *);
--- /dev/null
+#ifndef __LINUX_DEBUG_LOCKING_H
+#define __LINUX_DEBUG_LOCKING_H
+
+extern int debug_locks;
+extern int debug_locks_silent;
+
+/*
+ * Generic 'turn off all lock debugging' function:
+ */
+extern int debug_locks_off(void);
+
+/*
+ * In the debug case we carry the caller's instruction pointer into
+ * other functions, but we don't want the function argument overhead
+ * in the non-debug case - hence these macros:
+ */
+#define _RET_IP_ (unsigned long)__builtin_return_address(0)
+#define _THIS_IP_ ({ __label__ __here; __here: (unsigned long)&&__here; })
+
+#define DEBUG_LOCKS_WARN_ON(c) \
+({ \
+ int __ret = 0; \
+ \
+ if (unlikely(c)) { \
+ if (debug_locks_off()) \
+ WARN_ON(1); \
+ __ret = 1; \
+ } \
+ __ret; \
+})
+
+#ifdef CONFIG_SMP
+# define SMP_DEBUG_LOCKS_WARN_ON(c) DEBUG_LOCKS_WARN_ON(c)
+#else
+# define SMP_DEBUG_LOCKS_WARN_ON(c) do { } while (0)
+#endif
+
+#ifdef CONFIG_DEBUG_LOCKING_API_SELFTESTS
+ extern void locking_selftest(void);
+#else
+# define locking_selftest() do { } while (0)
+#endif
+
+#ifdef CONFIG_LOCKDEP
+extern void debug_show_all_locks(void);
+extern void debug_show_held_locks(struct task_struct *task);
+extern void debug_check_no_locks_freed(const void *from, unsigned long len);
+extern void debug_check_no_locks_held(struct task_struct *task);
+#else
+static inline void debug_show_all_locks(void)
+{
+}
+
+static inline void debug_show_held_locks(struct task_struct *task)
+{
+}
+
+static inline void
+debug_check_no_locks_freed(const void *from, unsigned long len)
+{
+}
+
+static inline void
+debug_check_no_locks_held(struct task_struct *task)
+{
+}
+#endif
+
+#endif
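As an illustration (not from the patch), a subsystem could use the helper to assert a locking precondition; the function name is hypothetical:

	/* Hypothetical example: warn and disable further lock debugging on violation */
	static void assumed_callback(void)
	{
		DEBUG_LOCKS_WARN_ON(!irqs_disabled());
	}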
};
/**
- * typedef dma_cookie_t
+ * typedef dma_cookie_t - an opaque DMA cookie
*
* if dma_cookie_t is >0 it's a DMA request cookie, <0 it's an error code
*/
/**
* struct dma_chan - devices supply DMA channels, clients use them
- * @client: ptr to the client user of this chan, will be NULL when unused
- * @device: ptr to the dma device who supplies this channel, always !NULL
+ * @client: ptr to the client user of this chan, will be %NULL when unused
+ * @device: ptr to the dma device who supplies this channel, always !%NULL
* @cookie: last cookie value returned to client
- * @chan_id:
- * @class_dev:
+ * @chan_id: channel ID for sysfs
+ * @class_dev: class device for sysfs
* @refcount: kref, used in "bigref" slow-mode
- * @slow_ref:
- * @rcu:
+ * @slow_ref: indicates that the DMA channel is free
+ * @rcu: the DMA channel's RCU head
* @client_node: used to add this to the client chan list
* @device_node: used to add this to the device chan list
* @local: per-cpu pointer to a struct dma_chan_percpu
* @chancnt: how many DMA channels are supported
* @channels: the list of struct dma_chan
* @global_node: list_head for global dma_device_list
- * @refcount:
- * @done:
- * @dev_id:
- * Other func ptrs: used to make use of this device's capabilities
+ * @refcount: reference count
+ * @done: IO completion struct
+ * @dev_id: unique device ID
+ * @device_alloc_chan_resources: allocate resources and return the
+ * number of allocated descriptors
+ * @device_free_chan_resources: release DMA channel's resources
+ * @device_memcpy_buf_to_buf: memcpy buf pointer to buf pointer
+ * @device_memcpy_buf_to_pg: memcpy buf pointer to struct page
+ * @device_memcpy_pg_to_pg: memcpy struct page/offset to struct page/offset
+ * @device_memcpy_complete: poll the status of an IOAT DMA transaction
+ * @device_memcpy_issue_pending: push appended descriptors to hardware
*/
struct dma_device {
* Both @dest and @src must be mappable to a bus address according to the
* DMA mapping API rules for streaming mappings.
* Both @dest and @src must stay memory resident (kernel memory or locked
- * user space pages)
+ * user space pages).
*/
static inline dma_cookie_t dma_async_memcpy_buf_to_buf(struct dma_chan *chan,
void *dest, void *src, size_t len)
}
/**
- * dma_async_memcpy_buf_to_pg - offloaded copy
+ * dma_async_memcpy_buf_to_pg - offloaded copy from address to page
* @chan: DMA channel to offload copy to
* @page: destination page
* @offset: offset in page to copy to
}
/**
- * dma_async_memcpy_buf_to_pg - offloaded copy
+ * dma_async_memcpy_pg_to_pg - offloaded copy from page to page
* @chan: DMA channel to offload copy to
- * @dest_page: destination page
+ * @dest_pg: destination page
* @dest_off: offset in page to copy to
- * @src_page: source page
+ * @src_pg: source page
* @src_off: offset in page to copy from
* @len: length
*
* Both @dest_page/@dest_off and @src_page/@src_off must be mappable to a bus
* address according to the DMA mapping API rules for streaming mappings.
* Both @dest_page/@dest_off and @src_page/@src_off must stay memory resident
- * (kernel memory or locked user space pages)
+ * (kernel memory or locked user space pages).
*/
static inline dma_cookie_t dma_async_memcpy_pg_to_pg(struct dma_chan *chan,
struct page *dest_pg, unsigned int dest_off, struct page *src_pg,
/**
* dma_async_memcpy_issue_pending - flush pending copies to HW
- * @chan:
+ * @chan: target DMA channel
*
* This allows drivers to push copies to HW in batches,
* reducing MMIO writes where possible.
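Putting the pieces together, a hedged sketch of queueing an offloaded copy and flushing it to hardware; channel setup, the buffers and the error handling are illustrative:

	/* Hypothetical example: queue an async memcpy, then kick the hardware */
	dma_cookie_t cookie;

	cookie = dma_async_memcpy_buf_to_buf(chan, dst, src, len);
	if (cookie < 0)
		return cookie;			/* negative cookie is an error code */
	dma_async_memcpy_issue_pending(chan);	/* push queued descriptors to HW */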
unsigned long bd_private;
};
+/*
+ * bdev->bd_mutex nesting subclasses for the lock validator:
+ *
+ * 0: normal
+ * 1: 'whole'
+ * 2: 'partition'
+ */
+enum bdev_bd_mutex_lock_class
+{
+ BD_MUTEX_NORMAL,
+ BD_MUTEX_WHOLE,
+ BD_MUTEX_PARTITION
+};
+
+
/*
* Radix-tree tags, for tagging dirty and writeback pages within the pagecache
* radix trees
#endif
};
+/*
+ * inode->i_mutex nesting subclasses for the lock validator:
+ *
+ * 0: the object of the current VFS operation
+ * 1: parent
+ * 2: child/target
+ * 3: quota file
+ *
+ * The locking order between these classes is
+ * parent -> child -> normal -> quota
+ */
+enum inode_i_mutex_lock_class
+{
+ I_MUTEX_NORMAL,
+ I_MUTEX_PARENT,
+ I_MUTEX_CHILD,
+ I_MUTEX_QUOTA
+};
+
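These subclasses are meant to be passed to mutex_lock_nested(); a hedged sketch of taking a parent directory and a child in the documented order (inode pointers illustrative):

	/* Hypothetical example: parent -> child i_mutex ordering for the validator */
	mutex_lock_nested(&dir->i_mutex, I_MUTEX_PARENT);
	mutex_lock_nested(&inode->i_mutex, I_MUTEX_CHILD);
	/* ... VFS operation on both inodes ... */
	mutex_unlock(&inode->i_mutex);
	mutex_unlock(&dir->i_mutex);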
/*
* NOTE: in a 32bit arch with a preemptable kernel and
* an UP compile the i_size_read/write must be atomic
struct module *owner;
struct file_system_type * next;
struct list_head fs_supers;
+ struct lock_class_key s_lock_key;
+ struct lock_class_key s_umount_key;
};
extern int get_sb_bdev(struct file_system_type *fs_type,
extern void bd_forget(struct inode *inode);
extern void bdput(struct block_device *);
extern struct block_device *open_by_devnum(dev_t, unsigned);
+extern struct block_device *open_partition_by_devnum(dev_t, unsigned);
extern const struct file_operations def_blk_fops;
extern const struct address_space_operations def_blk_aops;
extern const struct file_operations def_chr_fops;
extern long compat_blkdev_ioctl(struct file *, unsigned, unsigned long);
extern int blkdev_get(struct block_device *, mode_t, unsigned);
extern int blkdev_put(struct block_device *);
+extern int blkdev_put_partition(struct block_device *);
extern int bd_claim(struct block_device *, void *);
extern void bd_release(struct block_device *);
#ifdef CONFIG_SYSFS
#include <linux/preempt.h>
#include <linux/smp_lock.h>
+#include <linux/lockdep.h>
#include <asm/hardirq.h>
#include <asm/system.h>
# define synchronize_irq(irq) barrier()
#endif
-#define nmi_enter() irq_enter()
-#define nmi_exit() sub_preempt_count(HARDIRQ_OFFSET)
-
struct task_struct;
#ifndef CONFIG_VIRT_CPU_ACCOUNTING
}
#endif
+/*
+ * It is safe to do non-atomic ops on ->hardirq_context,
+ * because NMI handlers may not preempt and the ops are
+ * always balanced, so the interrupted value of ->hardirq_context
+ * will always be restored.
+ */
#define irq_enter() \
do { \
account_system_vtime(current); \
add_preempt_count(HARDIRQ_OFFSET); \
+ trace_hardirq_enter(); \
+ } while (0)
+
+/*
+ * Exit irq context without processing softirqs:
+ */
+#define __irq_exit() \
+ do { \
+ trace_hardirq_exit(); \
+ account_system_vtime(current); \
+ sub_preempt_count(HARDIRQ_OFFSET); \
} while (0)
+/*
+ * Exit irq context and process softirqs if needed:
+ */
extern void irq_exit(void);
+#define nmi_enter() do { lockdep_off(); irq_enter(); } while (0)
+#define nmi_exit() do { __irq_exit(); lockdep_on(); } while (0)
+
#endif /* LINUX_HARDIRQ_H */
ktime_t (*get_softirq_time)(void);
struct hrtimer *curr_timer;
ktime_t softirq_time;
+ struct lock_class_key lock_key;
};
/*
* ide_drive_t->hwif: constant, no locking
*/
-#define local_irq_set(flags) do { local_save_flags((flags)); local_irq_enable(); } while (0)
+#define local_irq_set(flags) do { local_save_flags((flags)); local_irq_enable_in_hardirq(); } while (0)
extern struct bus_type ide_bus_type;
.id_free = NULL, \
.layers = 0, \
.id_free_cnt = 0, \
- .lock = SPIN_LOCK_UNLOCKED, \
+ .lock = __SPIN_LOCK_UNLOCKED(name.lock), \
}
#define DEFINE_IDR(name) struct idr name = IDR_INIT(name)
#include <linux/file.h>
#include <linux/rcupdate.h>
+#include <linux/irqflags.h>
+#include <linux/lockdep.h>
#define INIT_FDTABLE \
{ \
.count = ATOMIC_INIT(1), \
.fdt = &init_files.fdtab, \
.fdtab = INIT_FDTABLE, \
- .file_lock = SPIN_LOCK_UNLOCKED, \
+ .file_lock = __SPIN_LOCK_UNLOCKED(init_task.file_lock), \
.next_fd = 0, \
.close_on_exec_init = { { 0, } }, \
.open_fds_init = { { 0, } }, \
.user_id = 0, \
.next = NULL, \
.wait = __WAIT_QUEUE_HEAD_INITIALIZER(name.wait), \
- .ctx_lock = SPIN_LOCK_UNLOCKED, \
+ .ctx_lock = __SPIN_LOCK_UNLOCKED(name.ctx_lock), \
.reqs_active = 0U, \
.max_reqs = ~0U, \
}
.mm_users = ATOMIC_INIT(2), \
.mm_count = ATOMIC_INIT(1), \
.mmap_sem = __RWSEM_INITIALIZER(name.mmap_sem), \
- .page_table_lock = SPIN_LOCK_UNLOCKED, \
+ .page_table_lock = __SPIN_LOCK_UNLOCKED(name.page_table_lock), \
.mmlist = LIST_HEAD_INIT(name.mmlist), \
.cpu_vm_mask = CPU_MASK_ALL, \
}
#define INIT_SIGHAND(sighand) { \
.count = ATOMIC_INIT(1), \
.action = { { { .sa_handler = NULL, } }, }, \
- .siglock = SPIN_LOCK_UNLOCKED, \
+ .siglock = __SPIN_LOCK_UNLOCKED(sighand.siglock), \
}
extern struct group_info init_groups;
.list = LIST_HEAD_INIT(tsk.pending.list), \
.signal = {{0}}}, \
.blocked = {{0}}, \
- .alloc_lock = SPIN_LOCK_UNLOCKED, \
+ .alloc_lock = __SPIN_LOCK_UNLOCKED(tsk.alloc_lock), \
.journal_info = NULL, \
.cpu_timers = INIT_CPU_TIMERS(tsk.cpu_timers), \
.fs_excl = ATOMIC_INIT(0), \
.pi_lock = SPIN_LOCK_UNLOCKED, \
- INIT_RT_MUTEXES(tsk) \
+ INIT_TRACE_IRQFLAGS \
+ INIT_LOCKDEP \
}
#include <linux/irqreturn.h>
#include <linux/hardirq.h>
#include <linux/sched.h>
+#include <linux/irqflags.h>
#include <asm/atomic.h>
#include <asm/ptrace.h>
#include <asm/system.h>
+/*
+ * These correspond to the IORESOURCE_IRQ_* defines in
+ * linux/ioport.h to select the interrupt line behaviour. When
+ * requesting an interrupt without specifying an IRQF_TRIGGER, the
+ * setting should be assumed to be "as already configured", which
+ * may be as per machine or firmware initialisation.
+ */
+#define IRQF_TRIGGER_NONE 0x00000000
+#define IRQF_TRIGGER_RISING 0x00000001
+#define IRQF_TRIGGER_FALLING 0x00000002
+#define IRQF_TRIGGER_HIGH 0x00000004
+#define IRQF_TRIGGER_LOW 0x00000008
+#define IRQF_TRIGGER_MASK (IRQF_TRIGGER_HIGH | IRQF_TRIGGER_LOW | \
+ IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING)
+#define IRQF_TRIGGER_PROBE 0x00000010
+
+/*
+ * These flags are used only by the kernel as part of the
+ * irq handling routines.
+ *
+ * IRQF_DISABLED - keep irqs disabled when calling the action handler
+ * IRQF_SAMPLE_RANDOM - irq is used to feed the random generator
+ * IRQF_SHARED - allow sharing the irq among several devices
+ * IRQF_PROBE_SHARED - set by callers when they expect sharing mismatches to occur
+ * IRQF_TIMER - Flag to mark this interrupt as timer interrupt
+ */
+#define IRQF_DISABLED 0x00000020
+#define IRQF_SAMPLE_RANDOM 0x00000040
+#define IRQF_SHARED 0x00000080
+#define IRQF_PROBE_SHARED 0x00000100
+#define IRQF_TIMER 0x00000200
+#define IRQF_PERCPU 0x00000400
+
+/*
+ * Migration helpers. Scheduled for removal in 1/2007
+ * Do not use for new code!
+ */
+#define SA_INTERRUPT IRQF_DISABLED
+#define SA_SAMPLE_RANDOM IRQF_SAMPLE_RANDOM
+#define SA_SHIRQ IRQF_SHARED
+#define SA_PROBEIRQ IRQF_PROBE_SHARED
+#define SA_PERCPU IRQF_PERCPU
+
+#define SA_TRIGGER_LOW IRQF_TRIGGER_LOW
+#define SA_TRIGGER_HIGH IRQF_TRIGGER_HIGH
+#define SA_TRIGGER_FALLING IRQF_TRIGGER_FALLING
+#define SA_TRIGGER_RISING IRQF_TRIGGER_RISING
+#define SA_TRIGGER_MASK IRQF_TRIGGER_MASK
+
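A hedged example of requesting an interrupt with the new-style flags; the handler, IRQ number and device pointer are illustrative (with IRQF_SHARED the dev_id must be non-NULL):

	/* Hypothetical example: a shared, falling-edge triggered interrupt */
	ret = request_irq(irq, my_handler,
			  IRQF_SHARED | IRQF_TRIGGER_FALLING,
			  "mydev", dev);
	if (ret)
		return ret;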
struct irqaction {
irqreturn_t (*handler)(int, void *, struct pt_regs *);
unsigned long flags;
unsigned long, const char *, void *);
extern void free_irq(unsigned int, void *);
+/*
+ * On lockdep we don't want to enable hardirqs in hardirq
+ * context. Use local_irq_enable_in_hardirq() to annotate
+ * kernel code that has to do this nevertheless (pretty much
+ * the only valid case is for old/broken hardware that is
+ * insanely slow).
+ *
+ * NOTE: in theory this might break fragile code that relies
+ * on hardirq delivery - in practice we don't seem to have such
+ * places left. So the only effect should be slightly increased
+ * irqs-off latencies.
+ */
+#ifdef CONFIG_LOCKDEP
+# define local_irq_enable_in_hardirq() do { } while (0)
+#else
+# define local_irq_enable_in_hardirq() local_irq_enable()
+#endif
#ifdef CONFIG_GENERIC_HARDIRQS
extern void disable_irq_nosync(unsigned int irq);
extern void disable_irq(unsigned int irq);
extern void enable_irq(unsigned int irq);
+/*
+ * Special lockdep variants of irq disabling/enabling.
+ * These should be used for locking constructs that
+ * know that a particular irq context which is disabled,
+ * and which is the only irq-context user of a lock,
+ * that it's safe to take the lock in the irq-disabled
+ * section without disabling hardirqs.
+ *
+ * On !CONFIG_LOCKDEP they are equivalent to the normal
+ * irq disable/enable methods.
+ */
+static inline void disable_irq_nosync_lockdep(unsigned int irq)
+{
+ disable_irq_nosync(irq);
+#ifdef CONFIG_LOCKDEP
+ local_irq_disable();
+#endif
+}
+
+static inline void disable_irq_lockdep(unsigned int irq)
+{
+ disable_irq(irq);
+#ifdef CONFIG_LOCKDEP
+ local_irq_disable();
+#endif
+}
+
+static inline void enable_irq_lockdep(unsigned int irq)
+{
+#ifdef CONFIG_LOCKDEP
+ local_irq_enable();
+#endif
+ enable_irq(irq);
+}
+
/* IRQ wakeup (PM) control: */
extern int set_irq_wake(unsigned int irq, unsigned int on);
return set_irq_wake(irq, 0);
}
-#endif
+#else /* !CONFIG_GENERIC_HARDIRQS */
+/*
+ * NOTE: non-genirq architectures, if they want to support the lock
+ * validator, need to define the methods below in their asm/irq.h
+ * files, under an #ifdef CONFIG_LOCKDEP section.
+ */
+# ifndef CONFIG_LOCKDEP
+# define disable_irq_nosync_lockdep(irq) disable_irq_nosync(irq)
+# define disable_irq_lockdep(irq) disable_irq(irq)
+# define enable_irq_lockdep(irq) enable_irq(irq)
+# endif
+
+#endif /* CONFIG_GENERIC_HARDIRQS */
#ifndef __ARCH_SET_SOFTIRQ_PENDING
#define set_softirq_pending(x) (local_softirq_pending() = (x))
#define save_and_cli(x) save_and_cli(&x)
#endif /* CONFIG_SMP */
-/* SoftIRQ primitives. */
-#define local_bh_disable() \
- do { add_preempt_count(SOFTIRQ_OFFSET); barrier(); } while (0)
-#define __local_bh_enable() \
- do { barrier(); sub_preempt_count(SOFTIRQ_OFFSET); } while (0)
-
+extern void local_bh_disable(void);
+extern void __local_bh_enable(void);
+extern void _local_bh_enable(void);
extern void local_bh_enable(void);
+extern void local_bh_enable_ip(unsigned long ip);
/* PLEASE, avoid to allocate new softirqs, if you need not _really_ high
frequency threaded job scheduling. For almost all the purposes
#define IORESOURCE_IRQ_LOWEDGE (1<<1)
#define IORESOURCE_IRQ_HIGHLEVEL (1<<2)
#define IORESOURCE_IRQ_LOWLEVEL (1<<3)
+#define IORESOURCE_IRQ_SHAREABLE (1<<4)
/* ISA PnP DMA specific bits (IORESOURCE_BITS) */
#define IORESOURCE_DMA_TYPE_MASK (3<<0)
/*
* IRQ line status.
+ *
+ * Bits 0-16 are reserved for the IRQF_* bits in linux/interrupt.h
+ *
+ * IRQ types
*/
-#define IRQ_INPROGRESS 1 /* IRQ handler active - do not enter! */
-#define IRQ_DISABLED 2 /* IRQ disabled - do not enter! */
-#define IRQ_PENDING 4 /* IRQ pending - replay on enable */
-#define IRQ_REPLAY 8 /* IRQ has been replayed but not acked yet */
-#define IRQ_AUTODETECT 16 /* IRQ is being autodetected */
-#define IRQ_WAITING 32 /* IRQ not yet seen - for autodetection */
-#define IRQ_LEVEL 64 /* IRQ level triggered */
-#define IRQ_MASKED 128 /* IRQ masked - shouldn't be seen again */
+#define IRQ_TYPE_NONE 0x00000000 /* Default, unspecified type */
+#define IRQ_TYPE_EDGE_RISING 0x00000001 /* Edge rising type */
+#define IRQ_TYPE_EDGE_FALLING 0x00000002 /* Edge falling type */
+#define IRQ_TYPE_EDGE_BOTH (IRQ_TYPE_EDGE_FALLING | IRQ_TYPE_EDGE_RISING)
+#define IRQ_TYPE_LEVEL_HIGH 0x00000004 /* Level high type */
+#define IRQ_TYPE_LEVEL_LOW 0x00000008 /* Level low type */
+#define IRQ_TYPE_SENSE_MASK 0x0000000f /* Mask of the above */
+#define IRQ_TYPE_PROBE 0x00000010 /* Probing in progress */
+
+/* Internal flags */
+#define IRQ_INPROGRESS 0x00010000 /* IRQ handler active - do not enter! */
+#define IRQ_DISABLED 0x00020000 /* IRQ disabled - do not enter! */
+#define IRQ_PENDING 0x00040000 /* IRQ pending - replay on enable */
+#define IRQ_REPLAY 0x00080000 /* IRQ has been replayed but not acked yet */
+#define IRQ_AUTODETECT 0x00100000 /* IRQ is being autodetected */
+#define IRQ_WAITING 0x00200000 /* IRQ not yet seen - for autodetection */
+#define IRQ_LEVEL 0x00400000 /* IRQ level triggered */
+#define IRQ_MASKED 0x00800000 /* IRQ masked - shouldn't be seen again */
#ifdef CONFIG_IRQ_PER_CPU
-# define IRQ_PER_CPU 256 /* IRQ is per CPU */
+# define IRQ_PER_CPU 0x01000000 /* IRQ is per CPU */
# define CHECK_IRQ_PER_CPU(var) ((var) & IRQ_PER_CPU)
#else
# define CHECK_IRQ_PER_CPU(var) 0
#endif
-#define IRQ_NOPROBE 512 /* IRQ is not valid for probing */
-#define IRQ_NOREQUEST 1024 /* IRQ cannot be requested */
-#define IRQ_NOAUTOEN 2048 /* IRQ will not be enabled on request irq */
-#define IRQ_DELAYED_DISABLE \
- 4096 /* IRQ disable (masking) happens delayed. */
-
-/*
- * IRQ types, see also include/linux/interrupt.h
- */
-#define IRQ_TYPE_NONE 0x0000 /* Default, unspecified type */
-#define IRQ_TYPE_EDGE_RISING 0x0001 /* Edge rising type */
-#define IRQ_TYPE_EDGE_FALLING 0x0002 /* Edge falling type */
-#define IRQ_TYPE_EDGE_BOTH (IRQ_TYPE_EDGE_FALLING | IRQ_TYPE_EDGE_RISING)
-#define IRQ_TYPE_LEVEL_HIGH 0x0004 /* Level high type */
-#define IRQ_TYPE_LEVEL_LOW 0x0008 /* Level low type */
-#define IRQ_TYPE_SENSE_MASK 0x000f /* Mask of the above */
-#define IRQ_TYPE_SIMPLE 0x0010 /* Simple type */
-#define IRQ_TYPE_PERCPU 0x0020 /* Per CPU type */
-#define IRQ_TYPE_PROBE 0x0040 /* Probing in progress */
+#define IRQ_NOPROBE 0x02000000 /* IRQ is not valid for probing */
+#define IRQ_NOREQUEST 0x04000000 /* IRQ cannot be requested */
+#define IRQ_NOAUTOEN 0x08000000 /* IRQ will not be enabled on request irq */
+#define IRQ_DELAYED_DISABLE 0x10000000 /* IRQ disable (masking) happens delayed. */
struct proc_dir_entry;
#ifdef CONFIG_GENERIC_HARDIRQS
+#ifndef handle_dynamic_tick
+# define handle_dynamic_tick(a) do { } while (0)
+#endif
+
#ifdef CONFIG_SMP
static inline void set_native_irq_info(int irq, cpumask_t mask)
{
/* Checks whether the interrupt can be requested by request_irq(): */
extern int can_request_irq(unsigned int irq, unsigned long irqflags);
-/* Dummy irq-chip implementation: */
+/* Dummy irq-chip implementations: */
extern struct irq_chip no_irq_chip;
+extern struct irq_chip dummy_irq_chip;
extern void
set_irq_chip_and_handler(unsigned int irq, struct irq_chip *chip,
--- /dev/null
+/*
+ * include/linux/irqflags.h
+ *
+ * IRQ flags tracing: follow the state of the hardirq and softirq flags and
+ * provide callbacks for transitions between ON and OFF states.
+ *
+ * This file gets included from lowlevel asm headers too, to provide
+ * wrapped versions of the local_irq_*() APIs, based on the
+ * raw_local_irq_*() macros from the lowlevel headers.
+ */
+#ifndef _LINUX_TRACE_IRQFLAGS_H
+#define _LINUX_TRACE_IRQFLAGS_H
+
+#ifdef CONFIG_TRACE_IRQFLAGS
+ extern void trace_hardirqs_on(void);
+ extern void trace_hardirqs_off(void);
+ extern void trace_softirqs_on(unsigned long ip);
+ extern void trace_softirqs_off(unsigned long ip);
+# define trace_hardirq_context(p) ((p)->hardirq_context)
+# define trace_softirq_context(p) ((p)->softirq_context)
+# define trace_hardirqs_enabled(p) ((p)->hardirqs_enabled)
+# define trace_softirqs_enabled(p) ((p)->softirqs_enabled)
+# define trace_hardirq_enter() do { current->hardirq_context++; } while (0)
+# define trace_hardirq_exit() do { current->hardirq_context--; } while (0)
+# define trace_softirq_enter() do { current->softirq_context++; } while (0)
+# define trace_softirq_exit() do { current->softirq_context--; } while (0)
+# define INIT_TRACE_IRQFLAGS .softirqs_enabled = 1,
+#else
+# define trace_hardirqs_on() do { } while (0)
+# define trace_hardirqs_off() do { } while (0)
+# define trace_softirqs_on(ip) do { } while (0)
+# define trace_softirqs_off(ip) do { } while (0)
+# define trace_hardirq_context(p) 0
+# define trace_softirq_context(p) 0
+# define trace_hardirqs_enabled(p) 0
+# define trace_softirqs_enabled(p) 0
+# define trace_hardirq_enter() do { } while (0)
+# define trace_hardirq_exit() do { } while (0)
+# define trace_softirq_enter() do { } while (0)
+# define trace_softirq_exit() do { } while (0)
+# define INIT_TRACE_IRQFLAGS
+#endif
+
+#ifdef CONFIG_TRACE_IRQFLAGS_SUPPORT
+
+#include <asm/irqflags.h>
+
+#define local_irq_enable() \
+ do { trace_hardirqs_on(); raw_local_irq_enable(); } while (0)
+#define local_irq_disable() \
+ do { raw_local_irq_disable(); trace_hardirqs_off(); } while (0)
+#define local_irq_save(flags) \
+ do { raw_local_irq_save(flags); trace_hardirqs_off(); } while (0)
+
+#define local_irq_restore(flags) \
+ do { \
+ if (raw_irqs_disabled_flags(flags)) { \
+ raw_local_irq_restore(flags); \
+ trace_hardirqs_off(); \
+ } else { \
+ trace_hardirqs_on(); \
+ raw_local_irq_restore(flags); \
+ } \
+ } while (0)
+#else /* !CONFIG_TRACE_IRQFLAGS_SUPPORT */
+/*
+ * The local_irq_*() APIs are equal to the raw_local_irq*()
+ * if !TRACE_IRQFLAGS.
+ */
+# define raw_local_irq_disable() local_irq_disable()
+# define raw_local_irq_enable() local_irq_enable()
+# define raw_local_irq_save(flags) local_irq_save(flags)
+# define raw_local_irq_restore(flags) local_irq_restore(flags)
+#endif /* CONFIG_TRACE_IRQFLAGS_SUPPORT */
+
+#ifdef CONFIG_TRACE_IRQFLAGS_SUPPORT
+#define safe_halt() \
+ do { \
+ trace_hardirqs_on(); \
+ raw_safe_halt(); \
+ } while (0)
+
+#define local_save_flags(flags) raw_local_save_flags(flags)
+
+#define irqs_disabled() \
+({ \
+ unsigned long flags; \
+ \
+ raw_local_save_flags(flags); \
+ raw_irqs_disabled_flags(flags); \
+})
+
+#define irqs_disabled_flags(flags) raw_irqs_disabled_flags(flags)
+#endif /* CONFIG_TRACE_IRQFLAGS_SUPPORT */
+
+#endif
#define print_fn_descriptor_symbol(fmt, addr) print_symbol(fmt, addr)
#endif
-#define print_symbol(fmt, addr) \
-do { \
- __check_printsym_format(fmt, ""); \
- __print_symbol(fmt, addr); \
+static inline void print_symbol(const char *fmt, unsigned long addr)
+{
+ __check_printsym_format(fmt, "");
+ __print_symbol(fmt, (unsigned long)
+ __builtin_extract_return_addr((void *)addr));
+}
+
+#ifndef CONFIG_64BIT
+#define print_ip_sym(ip) \
+do { \
+ printk("[<%08lx>]", ip); \
+ print_symbol(" %s\n", ip); \
} while(0)
+#else
+#define print_ip_sym(ip) \
+do { \
+ printk("[<%016lx>]", ip); \
+ print_symbol(" %s\n", ip); \
+} while(0)
+#endif
#endif /*_LINUX_KALLSYMS_H*/
--- /dev/null
+/*
+ * Runtime locking correctness validator
+ *
+ * Copyright (C) 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
+ *
+ * see Documentation/lockdep-design.txt for more details.
+ */
+#ifndef __LINUX_LOCKDEP_H
+#define __LINUX_LOCKDEP_H
+
+#include <linux/linkage.h>
+#include <linux/list.h>
+#include <linux/debug_locks.h>
+#include <linux/stacktrace.h>
+
+#ifdef CONFIG_LOCKDEP
+
+/*
+ * Lock-class usage-state bits:
+ */
+enum lock_usage_bit
+{
+ LOCK_USED = 0,
+ LOCK_USED_IN_HARDIRQ,
+ LOCK_USED_IN_SOFTIRQ,
+ LOCK_ENABLED_SOFTIRQS,
+ LOCK_ENABLED_HARDIRQS,
+ LOCK_USED_IN_HARDIRQ_READ,
+ LOCK_USED_IN_SOFTIRQ_READ,
+ LOCK_ENABLED_SOFTIRQS_READ,
+ LOCK_ENABLED_HARDIRQS_READ,
+ LOCK_USAGE_STATES
+};
+
+/*
+ * Usage-state bitmasks:
+ */
+#define LOCKF_USED (1 << LOCK_USED)
+#define LOCKF_USED_IN_HARDIRQ (1 << LOCK_USED_IN_HARDIRQ)
+#define LOCKF_USED_IN_SOFTIRQ (1 << LOCK_USED_IN_SOFTIRQ)
+#define LOCKF_ENABLED_HARDIRQS (1 << LOCK_ENABLED_HARDIRQS)
+#define LOCKF_ENABLED_SOFTIRQS (1 << LOCK_ENABLED_SOFTIRQS)
+
+#define LOCKF_ENABLED_IRQS (LOCKF_ENABLED_HARDIRQS | LOCKF_ENABLED_SOFTIRQS)
+#define LOCKF_USED_IN_IRQ (LOCKF_USED_IN_HARDIRQ | LOCKF_USED_IN_SOFTIRQ)
+
+#define LOCKF_USED_IN_HARDIRQ_READ (1 << LOCK_USED_IN_HARDIRQ_READ)
+#define LOCKF_USED_IN_SOFTIRQ_READ (1 << LOCK_USED_IN_SOFTIRQ_READ)
+#define LOCKF_ENABLED_HARDIRQS_READ (1 << LOCK_ENABLED_HARDIRQS_READ)
+#define LOCKF_ENABLED_SOFTIRQS_READ (1 << LOCK_ENABLED_SOFTIRQS_READ)
+
+#define LOCKF_ENABLED_IRQS_READ \
+ (LOCKF_ENABLED_HARDIRQS_READ | LOCKF_ENABLED_SOFTIRQS_READ)
+#define LOCKF_USED_IN_IRQ_READ \
+ (LOCKF_USED_IN_HARDIRQ_READ | LOCKF_USED_IN_SOFTIRQ_READ)
+
+#define MAX_LOCKDEP_SUBCLASSES 8UL
+
+/*
+ * Lock-classes are keyed via unique addresses, by embedding the
+ * lockclass-key into the kernel (or module) .data section. (For
+ * static locks we use the lock address itself as the key.)
+ */
+struct lockdep_subclass_key {
+ char __one_byte;
+} __attribute__ ((__packed__));
+
+struct lock_class_key {
+ struct lockdep_subclass_key subkeys[MAX_LOCKDEP_SUBCLASSES];
+};
+
+/*
+ * The lock-class itself:
+ */
+struct lock_class {
+ /*
+ * class-hash:
+ */
+ struct list_head hash_entry;
+
+ /*
+ * global list of all lock-classes:
+ */
+ struct list_head lock_entry;
+
+ struct lockdep_subclass_key *key;
+ unsigned int subclass;
+
+ /*
+ * IRQ/softirq usage tracking bits:
+ */
+ unsigned long usage_mask;
+ struct stack_trace usage_traces[LOCK_USAGE_STATES];
+
+ /*
+ * These fields represent a directed graph of lock dependencies,
+ * to every node we attach a list of "forward" and a list of
+ * "backward" graph nodes.
+ */
+ struct list_head locks_after, locks_before;
+
+ /*
+ * Generation counter, when doing certain classes of graph walking,
+ * to ensure that we check one node only once:
+ */
+ unsigned int version;
+
+ /*
+ * Statistics counter:
+ */
+ unsigned long ops;
+
+ const char *name;
+ int name_version;
+};
+
+/*
+ * Map the lock object (the lock instance) to the lock-class object.
+ * This is embedded into specific lock instances:
+ */
+struct lockdep_map {
+ struct lock_class_key *key;
+ struct lock_class *class[MAX_LOCKDEP_SUBCLASSES];
+ const char *name;
+};
+
+/*
+ * Every lock has a list of other locks that were taken after it.
+ * We only grow the list, never remove from it:
+ */
+struct lock_list {
+ struct list_head entry;
+ struct lock_class *class;
+ struct stack_trace trace;
+};
+
+/*
+ * We record lock dependency chains, so that we can cache them:
+ */
+struct lock_chain {
+ struct list_head entry;
+ u64 chain_key;
+};
+
+struct held_lock {
+ /*
+ * One-way hash of the dependency chain up to this point. We
+ * hash the hashes step by step as the dependency chain grows.
+ *
+ * We use it for dependency-caching and we skip detection
+ * passes and dependency-updates if there is a cache-hit, so
+ * it is absolutely critical for 100% coverage of the validator
+ * to have a unique key value for every unique dependency path
+ * that can occur in the system, to make a unique hash value
+ * as likely as possible - hence the 64-bit width.
+ *
+ * The task struct holds the current hash value (initialized
+ * with zero), here we store the previous hash value:
+ */
+ u64 prev_chain_key;
+ struct lock_class *class;
+ unsigned long acquire_ip;
+ struct lockdep_map *instance;
+
+ /*
+ * The lock-stack is unified in that the lock chains of interrupt
+	 * contexts nest on top of process context chains, but we 'separate'
+	 * the hashes by starting with 0 if we cross into an interrupt
+	 * context, and we also do not add cross-context lock
+ * dependencies - the lock usage graph walking covers that area
+ * anyway, and we'd just unnecessarily increase the number of
+ * dependencies otherwise. [Note: hardirq and softirq contexts
+ * are separated from each other too.]
+ *
+ * The following field is used to detect when we cross into an
+ * interrupt context:
+ */
+ int irq_context;
+ int trylock;
+ int read;
+ int check;
+ int hardirqs_off;
+};
+
+/*
+ * Initialization, self-test and debugging-output methods:
+ */
+extern void lockdep_init(void);
+extern void lockdep_info(void);
+extern void lockdep_reset(void);
+extern void lockdep_reset_lock(struct lockdep_map *lock);
+extern void lockdep_free_key_range(void *start, unsigned long size);
+
+extern void lockdep_off(void);
+extern void lockdep_on(void);
+extern int lockdep_internal(void);
+
+/*
+ * These methods are used by specific locking variants (spinlocks,
+ * rwlocks, mutexes and rwsems) to pass init/acquire/release events
+ * to lockdep:
+ */
+
+extern void lockdep_init_map(struct lockdep_map *lock, const char *name,
+ struct lock_class_key *key);
+
+/*
+ * Reinitialize a lock key - for cases where special locking or special
+ * initialization of locks would otherwise make the validator get the scope
+ * of dependencies wrong: they are either too broad (they need a class-split)
+ * or they are too narrow (they suffer from a false class-split):
+ */
+#define lockdep_set_class(lock, key) \
+ lockdep_init_map(&(lock)->dep_map, #key, key)
+#define lockdep_set_class_and_name(lock, key, name) \
+ lockdep_init_map(&(lock)->dep_map, name, key)
+
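A hedged usage sketch: giving one instance of a lock its own class so the validator keeps it separate from other instances, assuming the lock type carries a dep_map (CONFIG_DEBUG_LOCK_ALLOC); the object and key names are illustrative:

	/* Hypothetical example: re-key a single lock instance */
	static struct lock_class_key my_lock_key;

	spin_lock_init(&obj->lock);
	lockdep_set_class(&obj->lock, &my_lock_key);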
+/*
+ * Acquire a lock.
+ *
+ * Values for "read":
+ *
+ * 0: exclusive (write) acquire
+ * 1: read-acquire (no recursion allowed)
+ * 2: read-acquire with same-instance recursion allowed
+ *
+ * Values for check:
+ *
+ * 0: disabled
+ * 1: simple checks (freeing, held-at-exit-time, etc.)
+ * 2: full validation
+ */
+extern void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
+ int trylock, int read, int check, unsigned long ip);
+
+extern void lock_release(struct lockdep_map *lock, int nested,
+ unsigned long ip);
+
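For orientation, a hedged sketch of how a locking primitive would feed these hooks, mirroring the mapping macros further below; the dep_map field on the lock object is assumed:

	/* Hypothetical example: exclusive, fully-checked acquire/release */
	lock_acquire(&lock->dep_map, 0, 0, 0, 2, _RET_IP_);	/* subclass 0, trylock 0, write, full check */
	/* ... critical section ... */
	lock_release(&lock->dep_map, 0, _RET_IP_);		/* non-nested release */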
+# define INIT_LOCKDEP .lockdep_recursion = 0,
+
+#else /* !LOCKDEP */
+
+static inline void lockdep_off(void)
+{
+}
+
+static inline void lockdep_on(void)
+{
+}
+
+static inline int lockdep_internal(void)
+{
+ return 0;
+}
+
+# define lock_acquire(l, s, t, r, c, i) do { } while (0)
+# define lock_release(l, n, i) do { } while (0)
+# define lockdep_init() do { } while (0)
+# define lockdep_info() do { } while (0)
+# define lockdep_init_map(lock, name, key) do { (void)(key); } while (0)
+# define lockdep_set_class(lock, key) do { (void)(key); } while (0)
+# define lockdep_set_class_and_name(lock, key, name) \
+ do { (void)(key); } while (0)
+# define INIT_LOCKDEP
+# define lockdep_reset() do { debug_locks = 1; } while (0)
+# define lockdep_free_key_range(start, size) do { } while (0)
+/*
+ * The class key takes no space if lockdep is disabled:
+ */
+struct lock_class_key { };
+#endif /* !LOCKDEP */
+
+#if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_GENERIC_HARDIRQS)
+extern void early_init_irq_lock_class(void);
+#else
+# define early_init_irq_lock_class() do { } while (0)
+#endif
+
+#ifdef CONFIG_TRACE_IRQFLAGS
+extern void early_boot_irqs_off(void);
+extern void early_boot_irqs_on(void);
+#else
+# define early_boot_irqs_off() do { } while (0)
+# define early_boot_irqs_on() do { } while (0)
+#endif
+
+/*
+ * For trivial one-depth nesting of a lock-class, the following
+ * global define can be used. (Subsystems with multiple levels
+ * of nesting should define their own lock-nesting subclasses.)
+ */
+#define SINGLE_DEPTH_NESTING 1
+
+/*
+ * Map the dependency ops to NOP or to real lockdep ops, depending
+ * on the per lock-class debug mode:
+ */
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+# ifdef CONFIG_PROVE_LOCKING
+# define spin_acquire(l, s, t, i) lock_acquire(l, s, t, 0, 2, i)
+# else
+# define spin_acquire(l, s, t, i) lock_acquire(l, s, t, 0, 1, i)
+# endif
+# define spin_release(l, n, i) lock_release(l, n, i)
+#else
+# define spin_acquire(l, s, t, i) do { } while (0)
+# define spin_release(l, n, i) do { } while (0)
+#endif
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+# ifdef CONFIG_PROVE_LOCKING
+# define rwlock_acquire(l, s, t, i) lock_acquire(l, s, t, 0, 2, i)
+# define rwlock_acquire_read(l, s, t, i) lock_acquire(l, s, t, 2, 2, i)
+# else
+# define rwlock_acquire(l, s, t, i) lock_acquire(l, s, t, 0, 1, i)
+# define rwlock_acquire_read(l, s, t, i) lock_acquire(l, s, t, 2, 1, i)
+# endif
+# define rwlock_release(l, n, i) lock_release(l, n, i)
+#else
+# define rwlock_acquire(l, s, t, i) do { } while (0)
+# define rwlock_acquire_read(l, s, t, i) do { } while (0)
+# define rwlock_release(l, n, i) do { } while (0)
+#endif
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+# ifdef CONFIG_PROVE_LOCKING
+# define mutex_acquire(l, s, t, i) lock_acquire(l, s, t, 0, 2, i)
+# else
+# define mutex_acquire(l, s, t, i) lock_acquire(l, s, t, 0, 1, i)
+# endif
+# define mutex_release(l, n, i) lock_release(l, n, i)
+#else
+# define mutex_acquire(l, s, t, i) do { } while (0)
+# define mutex_release(l, n, i) do { } while (0)
+#endif
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+# ifdef CONFIG_PROVE_LOCKING
+# define rwsem_acquire(l, s, t, i) lock_acquire(l, s, t, 0, 2, i)
+# define rwsem_acquire_read(l, s, t, i) lock_acquire(l, s, t, 1, 2, i)
+# else
+# define rwsem_acquire(l, s, t, i) lock_acquire(l, s, t, 0, 1, i)
+# define rwsem_acquire_read(l, s, t, i) lock_acquire(l, s, t, 1, 1, i)
+# endif
+# define rwsem_release(l, n, i) lock_release(l, n, i)
+#else
+# define rwsem_acquire(l, s, t, i) do { } while (0)
+# define rwsem_acquire_read(l, s, t, i) do { } while (0)
+# define rwsem_release(l, n, i) do { } while (0)
+#endif
+
+#endif /* __LINUX_LOCKDEP_H */
#include <linux/prio_tree.h>
#include <linux/fs.h>
#include <linux/mutex.h>
+#include <linux/debug_locks.h>
struct mempolicy;
struct anon_vma;
}
#endif /* CONFIG_PROC_FS */
-static inline void
-debug_check_no_locks_freed(const void *from, unsigned long len)
-{
- mutex_debug_check_no_locks_freed(from, len);
- rt_mutex_debug_check_no_locks_freed(from, len);
-}
-
#ifndef CONFIG_DEBUG_PAGEALLOC
static inline void
kernel_map_pages(struct page *page, int numpages, int enable)
unsigned long lowmem_reserve[MAX_NR_ZONES];
#ifdef CONFIG_NUMA
+ /*
+ * zone reclaim becomes active if more unmapped pages exist.
+ */
+ unsigned long min_unmapped_ratio;
struct per_cpu_pageset *pageset[NR_CPUS];
#else
struct per_cpu_pageset pageset[NR_CPUS];
void __user *, size_t *, loff_t *);
int percpu_pagelist_fraction_sysctl_handler(struct ctl_table *, int, struct file *,
void __user *, size_t *, loff_t *);
+int sysctl_min_unmapped_ratio_sysctl_handler(struct ctl_table *, int,
+ struct file *, void __user *, size_t *, loff_t *);
#include <linux/topology.h>
/* Returns the number of the current Node. */
/* Is this address in a module? (second is with no locks, for oops) */
struct module *module_text_address(unsigned long addr);
struct module *__module_text_address(unsigned long addr);
+int is_module_address(unsigned long addr);
/* Returns module and fills in value, defined and namebuf, or NULL if
symnum out of range. */
return NULL;
}
+static inline int is_module_address(unsigned long addr)
+{
+ return 0;
+}
+
/* Get/put a kernel symbol (calls should be symmetric) */
#define symbol_get(x) ({ extern typeof(x) x __attribute__((weak)); &(x); })
#define symbol_put(x) do { } while(0)
/**
* struct nand_bbt_descr - bad block table descriptor
- * @param options options for this descriptor
- * @param pages the page(s) where we find the bbt, used with
+ * @options: options for this descriptor
+ * @pages: the page(s) where we find the bbt, used with
* option BBT_ABSPAGE when bbt is searched,
* then we store the found bbts pages here.
 *		It's an array and supports up to 8 chips now
- * @param offs offset of the pattern in the oob area of the page
- * @param veroffs offset of the bbt version counter in the oob are of the page
- * @param version version read from the bbt page during scan
- * @param len length of the pattern, if 0 no pattern check is performed
- * @param maxblocks maximum number of blocks to search for a bbt. This number of
- * blocks is reserved at the end of the device
+ * @offs: offset of the pattern in the oob area of the page
+ * @veroffs: offset of the bbt version counter in the oob area of the page
+ * @version: version read from the bbt page during scan
+ * @len: length of the pattern, if 0 no pattern check is performed
+ * @maxblocks: maximum number of blocks to search for a bbt. This
+ * number of blocks is reserved at the end of the device
* where the tables are written.
- * @param reserved_block_code if non-0, this pattern denotes a reserved
+ * @reserved_block_code: if non-0, this pattern denotes a reserved
* (rather than bad) block in the stored bbt
- * @param pattern pattern to identify bad block table or factory marked
+ * @pattern: pattern to identify bad block table or factory marked
* good / bad blocks, can be NULL, if len = 0
*
* Descriptor for the bad block table marker and the descriptor for the
#define ONENAND_BADBLOCK_POS 0
/**
- * struct bbt_info - [GENERIC] Bad Block Table data structure
- * @param bbt_erase_shift [INTERN] number of address bits in a bbt entry
- * @param badblockpos [INTERN] position of the bad block marker in the oob area
- * @param bbt [INTERN] bad block table pointer
- * @param badblock_pattern [REPLACEABLE] bad block scan pattern used for initial bad block scan
- * @param priv [OPTIONAL] pointer to private bbm date
+ * struct bbm_info - [GENERIC] Bad Block Table data structure
+ * @bbt_erase_shift: [INTERN] number of address bits in a bbt entry
+ * @badblockpos: [INTERN] position of the bad block marker in the oob area
+ * @options: options for this descriptor
+ * @bbt: [INTERN] bad block table pointer
+ * @isbad_bbt: function to determine if a block is bad
+ * @badblock_pattern: [REPLACEABLE] bad block scan pattern used for
+ * initial bad block scan
+ * @priv: [OPTIONAL] pointer to private bbm data
*/
struct bbm_info {
int bbt_erase_shift;
*
* @len: number of bytes to write/read. When a data buffer is given
* (datbuf != NULL) this is the number of data bytes. When
- + no data buffer is available this is the number of oob bytes.
+ * no data buffer is available this is the number of oob bytes.
*
* @retlen: number of bytes written/read. When a data buffer is given
* (datbuf != NULL) this is the number of data bytes. When
- + no data buffer is available this is the number of oob bytes.
+ * no data buffer is available this is the number of oob bytes.
*
* @ooblen: number of oob bytes per page
* @ooboffs: offset of oob data in the oob area (only relevant when
struct nand_chip;
/**
- * struct nand_hw_control - Control structure for hardware controller (e.g ECC generator) shared among independend devices
+ * struct nand_hw_control - Control structure for hardware controller (e.g. ECC generator) shared among independent devices
* @lock: protection lock
* @active: the mtd device which holds the controller currently
* @wq: wait queue to sleep on if a NAND operation is in progress
* @total: total number of ecc bytes per page
* @prepad: padding information for syndrome based ecc generators
* @postpad: padding information for syndrome based ecc generators
+ * @layout: ECC layout control struct pointer
* @hwctl: function to control hardware ecc generator. Must only
* be provided if an hardware ECC is available
* @calculate: function for ecc calculation or readback from ecc hardware
* @correct: function for ecc correction, matching to ecc generator (sw/hw)
* @read_page: function to read a page according to the ecc generator requirements
* @write_page: function to write a page according to the ecc generator requirements
+ * @read_oob: function to read chip OOB data
+ * @write_oob: function to write chip OOB data
*/
struct nand_ecc_ctrl {
nand_ecc_modes_t mode;
* @cmdfunc: [REPLACEABLE] hardwarespecific function for writing commands to the chip
* @waitfunc: [REPLACEABLE] hardwarespecific function for wait on ready
* @ecc: [BOARDSPECIFIC] ecc control ctructure
+ * @buffers: buffer structure for read/write
+ * @hwcontrol: platform-specific hardware control structure
+ * @ops: oob operation operands
* @erase_cmd: [INTERN] erase command write function, selectable due to AND support
* @scan_bbt: [REPLACEABLE] function to scan bad block table
* @chip_delay: [BOARDSPECIFIC] chip dependent delay for transfering data from array to read regs (tR)
* @wq: [INTERN] wait queue to sleep on if a NAND operation is in progress
* @state: [INTERN] the current state of the NAND device
+ * @oob_poi: poison value buffer
* @page_shift: [INTERN] number of address bits in a page (column address bits)
* @phys_erase_shift: [INTERN] number of address bits in a physical eraseblock
* @bbt_erase_shift: [INTERN] number of address bits in a bbt entry
/**
* struct nand_flash_dev - NAND Flash Device ID Structure
- *
* @name: Identify the device type
* @id: device ID code
* @pagesize: Pagesize in bytes. Either 256 or 512 or 0
/**
* struct platform_nand_chip - chip level device structure
- *
* @nr_chips: max. number of chips to scan for
- * @chip_offs: chip number offset
+ * @chip_offset: chip number offset
* @nr_partitions: number of partitions pointed to by partitions (or zero)
* @partitions: mtd partition list
* @chip_delay: R/B delay value in us
/**
* struct platform_nand_ctrl - controller level device structure
- *
* @hwcontrol: platform specific hardware control structure
* @dev_ready: platform specific function to read ready/busy pin
* @select_chip: platform specific chip select function
- * @priv_data: private data to transport driver specific settings
+ * @priv: private data to transport driver specific settings
*
* All fields are optional and depend on the hardware driver requirements
*/
/* Free resources held by the OneNAND device */
extern void onenand_release(struct mtd_info *mtd);
-/**
+/*
* onenand_state_t - chip states
* Enumeration for OneNAND flash chip state
*/
/**
* struct onenand_bufferram - OneNAND BufferRAM Data
- * @param block block address in BufferRAM
- * @param page page address in BufferRAM
- * @param valid valid flag
+ * @block: block address in BufferRAM
+ * @page: page address in BufferRAM
+ * @valid: valid flag
*/
struct onenand_bufferram {
int block;
/**
* struct onenand_chip - OneNAND Private Flash Chip Data
- * @param base [BOARDSPECIFIC] address to access OneNAND
- * @param chipsize [INTERN] the size of one chip for multichip arrays
- * @param device_id [INTERN] device ID
- * @param verstion_id [INTERN] version ID
- * @param options [BOARDSPECIFIC] various chip options. They can partly be set to inform onenand_scan about
- * @param erase_shift [INTERN] number of address bits in a block
- * @param page_shift [INTERN] number of address bits in a page
- * @param ppb_shift [INTERN] number of address bits in a pages per block
- * @param page_mask [INTERN] a page per block mask
- * @param bufferam_index [INTERN] BufferRAM index
- * @param bufferam [INTERN] BufferRAM info
- * @param readw [REPLACEABLE] hardware specific function for read short
- * @param writew [REPLACEABLE] hardware specific function for write short
- * @param command [REPLACEABLE] hardware specific function for writing commands to the chip
- * @param wait [REPLACEABLE] hardware specific function for wait on ready
- * @param read_bufferram [REPLACEABLE] hardware specific function for BufferRAM Area
- * @param write_bufferram [REPLACEABLE] hardware specific function for BufferRAM Area
- * @param read_word [REPLACEABLE] hardware specific function for read register of OneNAND
- * @param write_word [REPLACEABLE] hardware specific function for write register of OneNAND
- * @param scan_bbt [REPLACEALBE] hardware specific function for scaning Bad block Table
- * @param chip_lock [INTERN] spinlock used to protect access to this structure and the chip
- * @param wq [INTERN] wait queue to sleep on if a OneNAND operation is in progress
- * @param state [INTERN] the current state of the OneNAND device
- * @param ecclayout [REPLACEABLE] the default ecc placement scheme
- * @param bbm [REPLACEABLE] pointer to Bad Block Management
- * @param priv [OPTIONAL] pointer to private chip date
+ * @base: [BOARDSPECIFIC] address to access OneNAND
+ * @chipsize: [INTERN] the size of one chip for multichip arrays
+ * @device_id: [INTERN] device ID
+ * @density_mask: chip density, used for DDP devices
+ * @version_id: [INTERN] version ID
+ * @options: [BOARDSPECIFIC] various chip options. They can
+ * partly be set to inform onenand_scan about
+ * @erase_shift: [INTERN] number of address bits in a block
+ * @page_shift: [INTERN] number of address bits in a page
+ * @ppb_shift: [INTERN] number of address bits in a pages per block
+ * @page_mask: [INTERN] a page per block mask
+ * @bufferram_index: [INTERN] BufferRAM index
+ * @bufferram: [INTERN] BufferRAM info
+ * @readw: [REPLACEABLE] hardware specific function for read short
+ * @writew: [REPLACEABLE] hardware specific function for write short
+ * @command: [REPLACEABLE] hardware specific function for writing
+ * commands to the chip
+ * @wait: [REPLACEABLE] hardware specific function for wait on ready
+ * @read_bufferram: [REPLACEABLE] hardware specific function for BufferRAM Area
+ * @write_bufferram: [REPLACEABLE] hardware specific function for BufferRAM Area
+ * @read_word: [REPLACEABLE] hardware specific function for read
+ * register of OneNAND
+ * @write_word: [REPLACEABLE] hardware specific function for write
+ * register of OneNAND
+ * @mmcontrol: sync burst read function
+ * @block_markbad: function to mark a block as bad
+ * @scan_bbt: [REPLACEABLE] hardware specific function for scanning
+ * Bad block Table
+ * @chip_lock: [INTERN] spinlock used to protect access to this
+ * structure and the chip
+ * @wq: [INTERN] wait queue to sleep on if a OneNAND
+ * operation is in progress
+ * @state: [INTERN] the current state of the OneNAND device
+ * @page_buf: data buffer
+ * @ecclayout: [REPLACEABLE] the default ecc placement scheme
+ * @bbm: [REPLACEABLE] pointer to Bad Block Management
+ * @priv: [OPTIONAL] pointer to private chip data
*/
struct onenand_chip {
void __iomem *base;
#define ONENAND_MFR_SAMSUNG 0xec
/**
- * struct nand_manufacturers - NAND Flash Manufacturer ID Structure
- * @param name: Manufacturer name
- * @param id: manufacturer ID code of device.
+ * struct onenand_manufacturers - OneNAND Flash Manufacturer ID Structure
+ * @name: Manufacturer name
+ * @id: manufacturer ID code of device.
*/
struct onenand_manufacturers {
int id;
#define __LINUX_MUTEX_DEBUG_H
#include <linux/linkage.h>
+#include <linux/lockdep.h>
/*
* Mutexes - debugging helpers:
*/
-#define __DEBUG_MUTEX_INITIALIZER(lockname) \
- , .held_list = LIST_HEAD_INIT(lockname.held_list), \
- .name = #lockname , .magic = &lockname
+#define __DEBUG_MUTEX_INITIALIZER(lockname) \
+ , .magic = &lockname
-#define mutex_init(sem) __mutex_init(sem, __FUNCTION__)
+#define mutex_init(mutex) \
+do { \
+ static struct lock_class_key __key; \
+ \
+ __mutex_init((mutex), #mutex, &__key); \
+} while (0)
extern void FASTCALL(mutex_destroy(struct mutex *lock));
-extern void mutex_debug_show_all_locks(void);
-extern void mutex_debug_show_held_locks(struct task_struct *filter);
-extern void mutex_debug_check_no_locks_held(struct task_struct *task);
-extern void mutex_debug_check_no_locks_freed(const void *from, unsigned long len);
-
#endif
#include <linux/list.h>
#include <linux/spinlock_types.h>
#include <linux/linkage.h>
+#include <linux/lockdep.h>
#include <asm/atomic.h>
struct list_head wait_list;
#ifdef CONFIG_DEBUG_MUTEXES
struct thread_info *owner;
- struct list_head held_list;
- unsigned long acquire_ip;
const char *name;
void *magic;
#endif
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+ struct lockdep_map dep_map;
+#endif
};
/*
# include <linux/mutex-debug.h>
#else
# define __DEBUG_MUTEX_INITIALIZER(lockname)
-# define mutex_init(mutex) __mutex_init(mutex, NULL)
+# define mutex_init(mutex) \
+do { \
+ static struct lock_class_key __key; \
+ \
+ __mutex_init((mutex), #mutex, &__key); \
+} while (0)
# define mutex_destroy(mutex) do { } while (0)
-# define mutex_debug_show_all_locks() do { } while (0)
-# define mutex_debug_show_held_locks(p) do { } while (0)
-# define mutex_debug_check_no_locks_held(task) do { } while (0)
-# define mutex_debug_check_no_locks_freed(from, len) do { } while (0)
+#endif
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+# define __DEP_MAP_MUTEX_INITIALIZER(lockname) \
+ , .dep_map = { .name = #lockname }
+#else
+# define __DEP_MAP_MUTEX_INITIALIZER(lockname)
#endif
#define __MUTEX_INITIALIZER(lockname) \
{ .count = ATOMIC_INIT(1) \
, .wait_lock = SPIN_LOCK_UNLOCKED \
, .wait_list = LIST_HEAD_INIT(lockname.wait_list) \
- __DEBUG_MUTEX_INITIALIZER(lockname) }
+ __DEBUG_MUTEX_INITIALIZER(lockname) \
+ __DEP_MAP_MUTEX_INITIALIZER(lockname) }
#define DEFINE_MUTEX(mutexname) \
struct mutex mutexname = __MUTEX_INITIALIZER(mutexname)
-extern void fastcall __mutex_init(struct mutex *lock, const char *name);
+extern void __mutex_init(struct mutex *lock, const char *name,
+ struct lock_class_key *key);
/***
* mutex_is_locked - is the mutex locked
*/
extern void fastcall mutex_lock(struct mutex *lock);
extern int fastcall mutex_lock_interruptible(struct mutex *lock);
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+extern void mutex_lock_nested(struct mutex *lock, unsigned int subclass);
+#else
+# define mutex_lock_nested(lock, subclass) mutex_lock(lock)
+#endif
+
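mutex_lock_nested() lets code that legitimately takes two mutexes of the same lock class tell the validator which acquisition is the inner one, instead of tripping a false deadlock report. A hedged sketch, assuming two mutexes of the same class taken in address order (the helper name is illustrative):

	static void lock_mutex_pair(struct mutex *m1, struct mutex *m2)
	{
		if (m1 > m2) {			/* impose a stable lock order */
			struct mutex *tmp = m1;

			m1 = m2;
			m2 = tmp;
		}
		mutex_lock(m1);
		/* same class as m1, so annotate it as one level deeper */
		mutex_lock_nested(m2, SINGLE_DEPTH_NESTING);
	}

Without CONFIG_DEBUG_LOCK_ALLOC the nested variant falls back to a plain mutex_lock(), so callers need no ifdefs.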
/*
* NOTE: mutex_trylock() follows the spin_trylock() convention,
* not the down_trylock() convention!
} while (0)
#define ATOMIC_NOTIFIER_INIT(name) { \
- .lock = SPIN_LOCK_UNLOCKED, \
+ .lock = __SPIN_LOCK_UNLOCKED(name.lock), \
.head = NULL }
#define BLOCKING_NOTIFIER_INIT(name) { \
.rwsem = __RWSEM_INITIALIZER((name).rwsem), \
#define PCI_DEVICE_ID_TI_TVP4020 0x3d07
#define PCI_DEVICE_ID_TI_4450 0x8011
#define PCI_DEVICE_ID_TI_XX21_XX11 0x8031
+#define PCI_DEVICE_ID_TI_XX21_XX11_SD 0x8034
#define PCI_DEVICE_ID_TI_X515 0x8036
#define PCI_DEVICE_ID_TI_XX12 0x8039
#define PCI_DEVICE_ID_TI_1130 0xac12
#define PCI_DEVICE_ID_RICOH_RL5C475 0x0475
#define PCI_DEVICE_ID_RICOH_RL5C476 0x0476
#define PCI_DEVICE_ID_RICOH_RL5C478 0x0478
+#define PCI_DEVICE_ID_RICOH_R5C822 0x0822
#define PCI_VENDOR_ID_DLINK 0x1186
#define PCI_DEVICE_ID_DLINK_DGE510T 0x4c00
/********** drivers/atm/ **********/
#define ATM_POISON_FREE 0x12
+#define ATM_POISON 0xdeadbeef
+
+/********** net/ **********/
+#define NEIGHBOR_DEAD 0xdeadbeef
+#define NETFILTER_LINK_POISON 0xdead57ac
/********** kernel/mutexes **********/
#define MUTEX_DEBUG_INIT 0x11
struct task_struct *owner;
#ifdef CONFIG_DEBUG_RT_MUTEXES
int save_state;
- struct list_head held_list_entry;
- unsigned long acquire_ip;
const char *name, *file;
int line;
void *magic;
extern void rt_mutex_unlock(struct rt_mutex *lock);
-#ifdef CONFIG_DEBUG_RT_MUTEXES
-# define INIT_RT_MUTEX_DEBUG(tsk) \
- .held_list_head = LIST_HEAD_INIT(tsk.held_list_head), \
- .held_list_lock = SPIN_LOCK_UNLOCKED
-#else
-# define INIT_RT_MUTEX_DEBUG(tsk)
-#endif
-
#ifdef CONFIG_RT_MUTEXES
# define INIT_RT_MUTEXES(tsk) \
.pi_waiters = PLIST_HEAD_INIT(tsk.pi_waiters, tsk.pi_lock), \
__s32 activity;
spinlock_t wait_lock;
struct list_head wait_list;
-#if RWSEM_DEBUG
- int debug;
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+ struct lockdep_map dep_map;
#endif
};
-/*
- * initialisation
- */
-#if RWSEM_DEBUG
-#define __RWSEM_DEBUG_INIT , 0
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+# define __RWSEM_DEP_MAP_INIT(lockname) , .dep_map = { .name = #lockname }
#else
-#define __RWSEM_DEBUG_INIT /* */
+# define __RWSEM_DEP_MAP_INIT(lockname)
#endif
#define __RWSEM_INITIALIZER(name) \
-{ 0, SPIN_LOCK_UNLOCKED, LIST_HEAD_INIT((name).wait_list) __RWSEM_DEBUG_INIT }
+{ 0, SPIN_LOCK_UNLOCKED, LIST_HEAD_INIT((name).wait_list) __RWSEM_DEP_MAP_INIT(name) }
#define DECLARE_RWSEM(name) \
struct rw_semaphore name = __RWSEM_INITIALIZER(name)
-extern void FASTCALL(init_rwsem(struct rw_semaphore *sem));
+extern void __init_rwsem(struct rw_semaphore *sem, const char *name,
+ struct lock_class_key *key);
+
+#define init_rwsem(sem) \
+do { \
+ static struct lock_class_key __key; \
+ \
+ __init_rwsem((sem), #sem, &__key); \
+} while (0)
+
extern void FASTCALL(__down_read(struct rw_semaphore *sem));
extern int FASTCALL(__down_read_trylock(struct rw_semaphore *sem));
extern void FASTCALL(__down_write(struct rw_semaphore *sem));
+extern void FASTCALL(__down_write_nested(struct rw_semaphore *sem, int subclass));
extern int FASTCALL(__down_write_trylock(struct rw_semaphore *sem));
extern void FASTCALL(__up_read(struct rw_semaphore *sem));
extern void FASTCALL(__up_write(struct rw_semaphore *sem));
#include <linux/linkage.h>
-#define RWSEM_DEBUG 0
-
#ifdef __KERNEL__
#include <linux/types.h>
#include <asm/rwsem.h> /* use an arch-specific implementation */
#endif
-#ifndef rwsemtrace
-#if RWSEM_DEBUG
-extern void FASTCALL(rwsemtrace(struct rw_semaphore *sem, const char *str));
-#else
-#define rwsemtrace(SEM,FMT)
-#endif
-#endif
-
/*
* lock for reading
*/
-static inline void down_read(struct rw_semaphore *sem)
-{
- might_sleep();
- rwsemtrace(sem,"Entering down_read");
- __down_read(sem);
- rwsemtrace(sem,"Leaving down_read");
-}
+extern void down_read(struct rw_semaphore *sem);
/*
* trylock for reading -- returns 1 if successful, 0 if contention
*/
-static inline int down_read_trylock(struct rw_semaphore *sem)
-{
- int ret;
- rwsemtrace(sem,"Entering down_read_trylock");
- ret = __down_read_trylock(sem);
- rwsemtrace(sem,"Leaving down_read_trylock");
- return ret;
-}
+extern int down_read_trylock(struct rw_semaphore *sem);
/*
* lock for writing
*/
-static inline void down_write(struct rw_semaphore *sem)
-{
- might_sleep();
- rwsemtrace(sem,"Entering down_write");
- __down_write(sem);
- rwsemtrace(sem,"Leaving down_write");
-}
+extern void down_write(struct rw_semaphore *sem);
/*
* trylock for writing -- returns 1 if successful, 0 if contention
*/
-static inline int down_write_trylock(struct rw_semaphore *sem)
-{
- int ret;
- rwsemtrace(sem,"Entering down_write_trylock");
- ret = __down_write_trylock(sem);
- rwsemtrace(sem,"Leaving down_write_trylock");
- return ret;
-}
+extern int down_write_trylock(struct rw_semaphore *sem);
/*
* release a read lock
*/
-static inline void up_read(struct rw_semaphore *sem)
-{
- rwsemtrace(sem,"Entering up_read");
- __up_read(sem);
- rwsemtrace(sem,"Leaving up_read");
-}
+extern void up_read(struct rw_semaphore *sem);
/*
* release a write lock
*/
-static inline void up_write(struct rw_semaphore *sem)
-{
- rwsemtrace(sem,"Entering up_write");
- __up_write(sem);
- rwsemtrace(sem,"Leaving up_write");
-}
+extern void up_write(struct rw_semaphore *sem);
/*
* downgrade write lock to read lock
*/
-static inline void downgrade_write(struct rw_semaphore *sem)
-{
- rwsemtrace(sem,"Entering downgrade_write");
- __downgrade_write(sem);
- rwsemtrace(sem,"Leaving downgrade_write");
-}
+extern void downgrade_write(struct rw_semaphore *sem);
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+/*
+ * nested locking:
+ */
+extern void down_read_nested(struct rw_semaphore *sem, int subclass);
+extern void down_write_nested(struct rw_semaphore *sem, int subclass);
+/*
+ * Take/release a lock when not the owner will release it:
+ */
+extern void down_read_non_owner(struct rw_semaphore *sem);
+extern void up_read_non_owner(struct rw_semaphore *sem);
+#else
+# define down_read_nested(sem, subclass) down_read(sem)
+# define down_write_nested(sem, subclass) down_write(sem)
+# define down_read_non_owner(sem) down_read(sem)
+# define up_read_non_owner(sem) up_read(sem)
+#endif
#endif /* __KERNEL__ */
#endif /* _LINUX_RWSEM_H */
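The rwsem interface gains the same nested annotations plus the non_owner variants for the rare case where the task that releases a read lock is not the one that took it. A short sketch of the nested form, assuming a parent and a child semaphore that share a lock class (both names are illustrative):

	static void lock_parent_then_child(struct rw_semaphore *parent,
					   struct rw_semaphore *child)
	{
		down_write(parent);
		/* same class as the parent; mark it as subclass 1 */
		down_write_nested(child, SINGLE_DEPTH_NESTING);
	}

As with the mutex variants, the annotations vanish when CONFIG_DEBUG_LOCK_ALLOC is not set.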
extern rwlock_t tasklist_lock;
extern spinlock_t mmlist_lock;
-typedef struct task_struct task_t;
+struct task_struct;
extern void sched_init(void);
extern void sched_init_smp(void);
-extern void init_idle(task_t *idle, int cpu);
+extern void init_idle(struct task_struct *idle, int cpu);
extern cpumask_t nohz_cpu_mask;
wait_queue_head_t wait_chldexit; /* for wait4() */
/* current thread group signal load-balancing target: */
- task_t *curr_target;
+ struct task_struct *curr_target;
/* shared signal handling: */
struct sigpending shared_pending;
extern struct user_struct root_user;
#define INIT_USER (&root_user)
-typedef struct prio_array prio_array_t;
struct backing_dev_info;
struct reclaim_state;
((gi)->blocks[(i)/NGROUPS_PER_BLOCK][(i)%NGROUPS_PER_BLOCK])
#ifdef ARCH_HAS_PREFETCH_SWITCH_STACK
-extern void prefetch_stack(struct task_struct*);
+extern void prefetch_stack(struct task_struct *t);
#else
static inline void prefetch_stack(struct task_struct *t) { }
#endif
SLEEP_INTERRUPTED,
};
+struct prio_array;
+
struct task_struct {
volatile long state; /* -1 unrunnable, 0 runnable, >0 stopped */
struct thread_info *thread_info;
int load_weight; /* for niceness load balancing purposes */
int prio, static_prio, normal_prio;
struct list_head run_list;
- prio_array_t *array;
+ struct prio_array *array;
unsigned short ioprio;
unsigned int btrace_seq;
struct plist_head pi_waiters;
/* Deadlock detection and priority inheritance handling */
struct rt_mutex_waiter *pi_blocked_on;
-# ifdef CONFIG_DEBUG_RT_MUTEXES
- spinlock_t held_list_lock;
- struct list_head held_list_head;
-# endif
#endif
#ifdef CONFIG_DEBUG_MUTEXES
/* mutex deadlock detection */
struct mutex_waiter *blocked_on;
#endif
+#ifdef CONFIG_TRACE_IRQFLAGS
+ unsigned int irq_events;
+ int hardirqs_enabled;
+ unsigned long hardirq_enable_ip;
+ unsigned int hardirq_enable_event;
+ unsigned long hardirq_disable_ip;
+ unsigned int hardirq_disable_event;
+ int softirqs_enabled;
+ unsigned long softirq_disable_ip;
+ unsigned int softirq_disable_event;
+ unsigned long softirq_enable_ip;
+ unsigned int softirq_enable_event;
+ int hardirq_context;
+ int softirq_context;
+#endif
+#ifdef CONFIG_LOCKDEP
+# define MAX_LOCK_DEPTH 30UL
+ u64 curr_chain_key;
+ int lockdep_depth;
+ struct held_lock held_locks[MAX_LOCK_DEPTH];
+ unsigned int lockdep_recursion;
+#endif
/* journalling filesystem info */
void *journal_info;
#define used_math() tsk_used_math(current)
#ifdef CONFIG_SMP
-extern int set_cpus_allowed(task_t *p, cpumask_t new_mask);
+extern int set_cpus_allowed(struct task_struct *p, cpumask_t new_mask);
#else
-static inline int set_cpus_allowed(task_t *p, cpumask_t new_mask)
+static inline int set_cpus_allowed(struct task_struct *p, cpumask_t new_mask)
{
if (!cpu_isset(0, new_mask))
return -EINVAL;
#endif
extern unsigned long long sched_clock(void);
-extern unsigned long long current_sched_time(const task_t *current_task);
+extern unsigned long long
+current_sched_time(const struct task_struct *current_task);
/* sched_exec is called by processes performing an exec */
#ifdef CONFIG_SMP
extern void sched_idle_next(void);
#ifdef CONFIG_RT_MUTEXES
-extern int rt_mutex_getprio(task_t *p);
-extern void rt_mutex_setprio(task_t *p, int prio);
-extern void rt_mutex_adjust_pi(task_t *p);
+extern int rt_mutex_getprio(struct task_struct *p);
+extern void rt_mutex_setprio(struct task_struct *p, int prio);
+extern void rt_mutex_adjust_pi(struct task_struct *p);
#else
-static inline int rt_mutex_getprio(task_t *p)
+static inline int rt_mutex_getprio(struct task_struct *p)
{
return p->normal_prio;
}
# define rt_mutex_adjust_pi(p) do { } while (0)
#endif
-extern void set_user_nice(task_t *p, long nice);
-extern int task_prio(const task_t *p);
-extern int task_nice(const task_t *p);
-extern int can_nice(const task_t *p, const int nice);
-extern int task_curr(const task_t *p);
+extern void set_user_nice(struct task_struct *p, long nice);
+extern int task_prio(const struct task_struct *p);
+extern int task_nice(const struct task_struct *p);
+extern int can_nice(const struct task_struct *p, const int nice);
+extern int task_curr(const struct task_struct *p);
extern int idle_cpu(int cpu);
extern int sched_setscheduler(struct task_struct *, int, struct sched_param *);
-extern task_t *idle_task(int cpu);
-extern task_t *curr_task(int cpu);
-extern void set_curr_task(int cpu, task_t *p);
+extern struct task_struct *idle_task(int cpu);
+extern struct task_struct *curr_task(int cpu);
+extern void set_curr_task(int cpu, struct task_struct *p);
void yield(void);
#else
static inline void kick_process(struct task_struct *tsk) { }
#endif
-extern void FASTCALL(sched_fork(task_t * p, int clone_flags));
-extern void FASTCALL(sched_exit(task_t * p));
+extern void FASTCALL(sched_fork(struct task_struct * p, int clone_flags));
+extern void FASTCALL(sched_exit(struct task_struct * p));
extern int in_group_p(gid_t);
extern int in_egroup_p(gid_t);
extern void daemonize(const char *, ...);
extern int allow_signal(int);
extern int disallow_signal(int);
-extern task_t *child_reaper;
+extern struct task_struct *child_reaper;
extern int do_execve(char *, char __user * __user *, char __user * __user *, struct pt_regs *);
extern long do_fork(unsigned long, unsigned long, struct pt_regs *, unsigned long, int __user *, int __user *);
-task_t *fork_idle(int);
+struct task_struct *fork_idle(int);
extern void set_task_comm(struct task_struct *tsk, char *from);
extern void get_task_comm(char *to, struct task_struct *tsk);
#ifdef CONFIG_SMP
-extern void wait_task_inactive(task_t * p);
+extern void wait_task_inactive(struct task_struct * p);
#else
#define wait_task_inactive(p) do { } while (0)
#endif
/* de_thread depends on thread_group_leader not being a pid based check */
#define thread_group_leader(p) (p == p->group_leader)
-static inline task_t *next_thread(const task_t *p)
+static inline struct task_struct *next_thread(const struct task_struct *p)
{
return list_entry(rcu_dereference(p->thread_group.next),
- task_t, thread_group);
+ struct task_struct, thread_group);
}
-static inline int thread_group_empty(task_t *p)
+static inline int thread_group_empty(struct task_struct *p)
{
return list_empty(&p->thread_group);
}
* These macros triggered gcc-3.x compile-time problems. We think these are
* OK now. Be cautious.
*/
-#define SEQLOCK_UNLOCKED { 0, SPIN_LOCK_UNLOCKED }
-#define seqlock_init(x) do { *(x) = (seqlock_t) SEQLOCK_UNLOCKED; } while (0)
+#define __SEQLOCK_UNLOCKED(lockname) \
+ { 0, __SPIN_LOCK_UNLOCKED(lockname) }
+#define SEQLOCK_UNLOCKED \
+ __SEQLOCK_UNLOCKED(old_style_seqlock_init)
+
+#define seqlock_init(x) \
+ do { *(x) = (seqlock_t) __SEQLOCK_UNLOCKED(x); } while (0)
+
+#define DEFINE_SEQLOCK(x) \
+ seqlock_t x = __SEQLOCK_UNLOCKED(x)
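__SEQLOCK_UNLOCKED() now carries the lock name, so statically allocated seqlocks can be declared with DEFINE_SEQLOCK() and get a properly named lock class; the old SEQLOCK_UNLOCKED initializer lumps everything into one old_style class. A small usage sketch (the lock and the counter it protects are illustrative):

	static DEFINE_SEQLOCK(stats_lock);
	static unsigned long long stats_count;

	static void stats_inc(void)
	{
		write_seqlock(&stats_lock);
		stats_count++;
		write_sequnlock(&stats_lock);
	}

	static unsigned long long stats_read(void)
	{
		unsigned long long val;
		unsigned int seq;

		do {
			seq = read_seqbegin(&stats_lock);
			val = stats_count;
		} while (read_seqretry(&stats_lock, seq));

		return val;
	}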
/* Lock out other writers and update the count.
* Acts like a normal spin_lock/unlock.
unsigned char __iomem *membase; /* read/write[bwl] */
unsigned int irq; /* irq number */
unsigned int uartclk; /* base uart clock */
- unsigned char fifosize; /* tx fifo size */
+ unsigned int fifosize; /* tx fifo size */
unsigned char x_char; /* xon/xoff char */
unsigned char regshift; /* reg offset shift */
unsigned char iotype; /* io access style */
+ unsigned char unused1;
#define UPIO_PORT (0)
#define UPIO_HUB6 (1)
#include <linux/list.h>
#include <linux/spinlock.h>
-/*
- * These values of sa_flags are used only by the kernel as part of the
- * irq handling routines.
- *
- * SA_INTERRUPT is also used by the irq handling routines.
- * SA_SHIRQ is for shared interrupt support on PCI and EISA.
- * SA_PROBEIRQ is set by callers when they expect sharing mismatches to occur
- */
-#define SA_SAMPLE_RANDOM SA_RESTART
-#define SA_SHIRQ 0x04000000
-#define SA_PROBEIRQ 0x08000000
-
-/*
- * As above, these correspond to the IORESOURCE_IRQ_* defines in
- * linux/ioport.h to select the interrupt line behaviour. When
- * requesting an interrupt without specifying a SA_TRIGGER, the
- * setting should be assumed to be "as already configured", which
- * may be as per machine or firmware initialisation.
- */
-#define SA_TRIGGER_LOW 0x00000008
-#define SA_TRIGGER_HIGH 0x00000004
-#define SA_TRIGGER_FALLING 0x00000002
-#define SA_TRIGGER_RISING 0x00000001
-#define SA_TRIGGER_MASK (SA_TRIGGER_HIGH|SA_TRIGGER_LOW|\
- SA_TRIGGER_RISING|SA_TRIGGER_FALLING)
-
/*
* Real Time signals may be queued.
*/
return list_->qlen;
}
+extern struct lock_class_key skb_queue_lock_key;
+
static inline void skb_queue_head_init(struct sk_buff_head *list)
{
spin_lock_init(&list->lock);
+ lockdep_set_class(&list->lock, &skb_queue_lock_key);
list->prev = list->next = (struct sk_buff *)list;
list->qlen = 0;
}
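Every sk_buff_head is initialized through the same spin_lock_init() call site, so without help all of those locks would share one lock class; lockdep_set_class() moves the lock into a dedicated class right after initialization. The same idiom applies to any object whose lock is set up by common library code. A hedged sketch with invented names:

	struct my_queue {
		spinlock_t		lock;
		struct list_head	items;
	};

	/* one class for every my_queue lock, distinct from other spinlocks */
	static struct lock_class_key my_queue_lock_key;

	static void my_queue_init(struct my_queue *q)
	{
		spin_lock_init(&q->lock);
		lockdep_set_class(&q->lock, &my_queue_lock_key);
		INIT_LIST_HEAD(&q->items);
	}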
/*
* Pull the __raw*() functions/declarations (UP-nondebug doesnt need them):
*/
-#if defined(CONFIG_SMP)
+#ifdef CONFIG_SMP
# include <asm/spinlock.h>
#else
# include <linux/spinlock_up.h>
#endif
-#define spin_lock_init(lock) do { *(lock) = SPIN_LOCK_UNLOCKED; } while (0)
-#define rwlock_init(lock) do { *(lock) = RW_LOCK_UNLOCKED; } while (0)
+#ifdef CONFIG_DEBUG_SPINLOCK
+ extern void __spin_lock_init(spinlock_t *lock, const char *name,
+ struct lock_class_key *key);
+# define spin_lock_init(lock) \
+do { \
+ static struct lock_class_key __key; \
+ \
+ __spin_lock_init((lock), #lock, &__key); \
+} while (0)
+
+#else
+# define spin_lock_init(lock) \
+ do { *(lock) = SPIN_LOCK_UNLOCKED; } while (0)
+#endif
+
+#ifdef CONFIG_DEBUG_SPINLOCK
+ extern void __rwlock_init(rwlock_t *lock, const char *name,
+ struct lock_class_key *key);
+# define rwlock_init(lock) \
+do { \
+ static struct lock_class_key __key; \
+ \
+ __rwlock_init((lock), #lock, &__key); \
+} while (0)
+#else
+# define rwlock_init(lock) \
+ do { *(lock) = RW_LOCK_UNLOCKED; } while (0)
+#endif
#define spin_is_locked(lock) __raw_spin_is_locked(&(lock)->raw_lock)
#define _raw_spin_lock_flags(lock, flags) _raw_spin_lock(lock)
extern int _raw_spin_trylock(spinlock_t *lock);
extern void _raw_spin_unlock(spinlock_t *lock);
-
extern void _raw_read_lock(rwlock_t *lock);
extern int _raw_read_trylock(rwlock_t *lock);
extern void _raw_read_unlock(rwlock_t *lock);
extern int _raw_write_trylock(rwlock_t *lock);
extern void _raw_write_unlock(rwlock_t *lock);
#else
-# define _raw_spin_unlock(lock) __raw_spin_unlock(&(lock)->raw_lock)
-# define _raw_spin_trylock(lock) __raw_spin_trylock(&(lock)->raw_lock)
# define _raw_spin_lock(lock) __raw_spin_lock(&(lock)->raw_lock)
# define _raw_spin_lock_flags(lock, flags) \
__raw_spin_lock_flags(&(lock)->raw_lock, *(flags))
+# define _raw_spin_trylock(lock) __raw_spin_trylock(&(lock)->raw_lock)
+# define _raw_spin_unlock(lock) __raw_spin_unlock(&(lock)->raw_lock)
# define _raw_read_lock(rwlock) __raw_read_lock(&(rwlock)->raw_lock)
-# define _raw_write_lock(rwlock) __raw_write_lock(&(rwlock)->raw_lock)
-# define _raw_read_unlock(rwlock) __raw_read_unlock(&(rwlock)->raw_lock)
-# define _raw_write_unlock(rwlock) __raw_write_unlock(&(rwlock)->raw_lock)
# define _raw_read_trylock(rwlock) __raw_read_trylock(&(rwlock)->raw_lock)
+# define _raw_read_unlock(rwlock) __raw_read_unlock(&(rwlock)->raw_lock)
+# define _raw_write_lock(rwlock) __raw_write_lock(&(rwlock)->raw_lock)
# define _raw_write_trylock(rwlock) __raw_write_trylock(&(rwlock)->raw_lock)
+# define _raw_write_unlock(rwlock) __raw_write_unlock(&(rwlock)->raw_lock)
#endif
#define read_can_lock(rwlock) __raw_read_can_lock(&(rwlock)->raw_lock)
#define write_trylock(lock) __cond_lock(_write_trylock(lock))
#define spin_lock(lock) _spin_lock(lock)
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+# define spin_lock_nested(lock, subclass) _spin_lock_nested(lock, subclass)
+#else
+# define spin_lock_nested(lock, subclass) _spin_lock(lock)
+#endif
+
#define write_lock(lock) _write_lock(lock)
#define read_lock(lock) _read_lock(lock)
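spin_lock_nested() is the spinlock counterpart of mutex_lock_nested(): without CONFIG_DEBUG_LOCK_ALLOC it is a plain spin_lock(); with it, the inner acquisition is recorded at a different subclass so hierarchies of same-class locks do not look like self-deadlocks. A minimal sketch, assuming objects whose locks share a class and are always taken parent first (struct my_node is illustrative):

	struct my_node {
		spinlock_t	lock;
		struct my_node	*parent;
	};

	static void lock_node_and_parent(struct my_node *n)
	{
		spin_lock(&n->parent->lock);
		/* same lock class as the parent lock, one level deeper */
		spin_lock_nested(&n->lock, SINGLE_DEPTH_NESTING);
	}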
/*
* We inline the unlock functions in the nondebug case:
*/
-#if defined(CONFIG_DEBUG_SPINLOCK) || defined(CONFIG_PREEMPT) || !defined(CONFIG_SMP)
+#if defined(CONFIG_DEBUG_SPINLOCK) || defined(CONFIG_PREEMPT) || \
+ !defined(CONFIG_SMP)
# define spin_unlock(lock) _spin_unlock(lock)
# define read_unlock(lock) _read_unlock(lock)
# define write_unlock(lock) _write_unlock(lock)
-#else
-# define spin_unlock(lock) __raw_spin_unlock(&(lock)->raw_lock)
-# define read_unlock(lock) __raw_read_unlock(&(lock)->raw_lock)
-# define write_unlock(lock) __raw_write_unlock(&(lock)->raw_lock)
-#endif
-
-#if defined(CONFIG_DEBUG_SPINLOCK) || defined(CONFIG_PREEMPT) || !defined(CONFIG_SMP)
# define spin_unlock_irq(lock) _spin_unlock_irq(lock)
# define read_unlock_irq(lock) _read_unlock_irq(lock)
# define write_unlock_irq(lock) _write_unlock_irq(lock)
#else
+# define spin_unlock(lock) __raw_spin_unlock(&(lock)->raw_lock)
+# define read_unlock(lock) __raw_read_unlock(&(lock)->raw_lock)
+# define write_unlock(lock) __raw_write_unlock(&(lock)->raw_lock)
# define spin_unlock_irq(lock) \
do { __raw_spin_unlock(&(lock)->raw_lock); local_irq_enable(); } while (0)
# define read_unlock_irq(lock) \
#define assert_spin_locked(x) BUG_ON(!spin_is_locked(x))
void __lockfunc _spin_lock(spinlock_t *lock) __acquires(spinlock_t);
+void __lockfunc _spin_lock_nested(spinlock_t *lock, int subclass)
+ __acquires(spinlock_t);
void __lockfunc _read_lock(rwlock_t *lock) __acquires(rwlock_t);
void __lockfunc _write_lock(rwlock_t *lock) __acquires(rwlock_t);
void __lockfunc _spin_lock_bh(spinlock_t *lock) __acquires(spinlock_t);
do { local_irq_restore(flags); __UNLOCK(lock); } while (0)
#define _spin_lock(lock) __LOCK(lock)
+#define _spin_lock_nested(lock, subclass) __LOCK(lock)
#define _read_lock(lock) __LOCK(lock)
#define _write_lock(lock) __LOCK(lock)
#define _spin_lock_bh(lock) __LOCK_BH(lock)
* Released under the General Public License (GPL).
*/
+#include <linux/lockdep.h>
+
#if defined(CONFIG_SMP)
# include <asm/spinlock_types.h>
#else
unsigned int magic, owner_cpu;
void *owner;
#endif
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+ struct lockdep_map dep_map;
+#endif
} spinlock_t;
#define SPINLOCK_MAGIC 0xdead4ead
unsigned int magic, owner_cpu;
void *owner;
#endif
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+ struct lockdep_map dep_map;
+#endif
} rwlock_t;
#define RWLOCK_MAGIC 0xdeaf1eed
#define SPINLOCK_OWNER_INIT ((void *)-1L)
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+# define SPIN_DEP_MAP_INIT(lockname) .dep_map = { .name = #lockname }
+#else
+# define SPIN_DEP_MAP_INIT(lockname)
+#endif
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+# define RW_DEP_MAP_INIT(lockname) .dep_map = { .name = #lockname }
+#else
+# define RW_DEP_MAP_INIT(lockname)
+#endif
+
#ifdef CONFIG_DEBUG_SPINLOCK
-# define SPIN_LOCK_UNLOCKED \
+# define __SPIN_LOCK_UNLOCKED(lockname) \
(spinlock_t) { .raw_lock = __RAW_SPIN_LOCK_UNLOCKED, \
.magic = SPINLOCK_MAGIC, \
.owner = SPINLOCK_OWNER_INIT, \
- .owner_cpu = -1 }
-#define RW_LOCK_UNLOCKED \
+ .owner_cpu = -1, \
+ SPIN_DEP_MAP_INIT(lockname) }
+#define __RW_LOCK_UNLOCKED(lockname) \
(rwlock_t) { .raw_lock = __RAW_RW_LOCK_UNLOCKED, \
.magic = RWLOCK_MAGIC, \
.owner = SPINLOCK_OWNER_INIT, \
- .owner_cpu = -1 }
+ .owner_cpu = -1, \
+ RW_DEP_MAP_INIT(lockname) }
#else
-# define SPIN_LOCK_UNLOCKED \
- (spinlock_t) { .raw_lock = __RAW_SPIN_LOCK_UNLOCKED }
-#define RW_LOCK_UNLOCKED \
- (rwlock_t) { .raw_lock = __RAW_RW_LOCK_UNLOCKED }
+# define __SPIN_LOCK_UNLOCKED(lockname) \
+ (spinlock_t) { .raw_lock = __RAW_SPIN_LOCK_UNLOCKED, \
+ SPIN_DEP_MAP_INIT(lockname) }
+#define __RW_LOCK_UNLOCKED(lockname) \
+ (rwlock_t) { .raw_lock = __RAW_RW_LOCK_UNLOCKED, \
+ RW_DEP_MAP_INIT(lockname) }
#endif
-#define DEFINE_SPINLOCK(x) spinlock_t x = SPIN_LOCK_UNLOCKED
-#define DEFINE_RWLOCK(x) rwlock_t x = RW_LOCK_UNLOCKED
+#define SPIN_LOCK_UNLOCKED __SPIN_LOCK_UNLOCKED(old_style_spin_init)
+#define RW_LOCK_UNLOCKED __RW_LOCK_UNLOCKED(old_style_rw_init)
+
+#define DEFINE_SPINLOCK(x) spinlock_t x = __SPIN_LOCK_UNLOCKED(x)
+#define DEFINE_RWLOCK(x) rwlock_t x = __RW_LOCK_UNLOCKED(x)
#endif /* __LINUX_SPINLOCK_TYPES_H */
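SPIN_LOCK_UNLOCKED and RW_LOCK_UNLOCKED survive only as compatibility aliases, and every lock initialized through them shares the single old_style_* class; new static locks should use the name-carrying forms so each gets its own class. A brief sketch (lock names invented for illustration):

	/* legacy: all such locks fold into the old_style_spin_init class */
	static spinlock_t legacy_lock = SPIN_LOCK_UNLOCKED;

	/* preferred: the variable name becomes the lock-class name */
	static DEFINE_SPINLOCK(driver_lock);
	static DEFINE_RWLOCK(driver_rwlock);

	/* locks embedded in structures are classed per spin_lock_init() call site */
	struct my_dev {
		spinlock_t	lock;
	};

	static void my_dev_init(struct my_dev *dev)
	{
		spin_lock_init(&dev->lock);
	}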
* Released under the General Public License (GPL).
*/
-#ifdef CONFIG_DEBUG_SPINLOCK
+#if defined(CONFIG_DEBUG_SPINLOCK) || \
+ defined(CONFIG_DEBUG_LOCK_ALLOC)
typedef struct {
volatile unsigned int slock;
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+ struct lockdep_map dep_map;
+#endif
} raw_spinlock_t;
#define __RAW_SPIN_LOCK_UNLOCKED { 1 }
typedef struct {
/* no debug version on UP */
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+ struct lockdep_map dep_map;
+#endif
} raw_rwlock_t;
#define __RAW_RW_LOCK_UNLOCKED { }
*/
#ifdef CONFIG_DEBUG_SPINLOCK
-
#define __raw_spin_is_locked(x) ((x)->slock == 0)
static inline void __raw_spin_lock(raw_spinlock_t *lock)
--- /dev/null
+#ifndef __LINUX_STACKTRACE_H
+#define __LINUX_STACKTRACE_H
+
+#ifdef CONFIG_STACKTRACE
+struct stack_trace {
+ unsigned int nr_entries, max_entries;
+ unsigned long *entries;
+};
+
+extern void save_stack_trace(struct stack_trace *trace,
+ struct task_struct *task, int all_contexts,
+ unsigned int skip);
+
+extern void print_stack_trace(struct stack_trace *trace, int spaces);
+#else
+# define save_stack_trace(trace, task, all, skip) do { } while (0)
+# define print_stack_trace(trace, spaces) do { } while (0)
+#endif
+
+#endif
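The new <linux/stacktrace.h> interface saves a backtrace into a caller-supplied buffer rather than printing it directly, which is what facilities like lockdep use to record acquisition points. A minimal usage sketch under the signatures above (the helper name and buffer size are illustrative):

	#include <linux/kernel.h>
	#include <linux/sched.h>
	#include <linux/stacktrace.h>

	static void dump_current_backtrace(void)
	{
		unsigned long entries[16];
		struct stack_trace trace = {
			.nr_entries	= 0,
			.max_entries	= ARRAY_SIZE(entries),
			.entries	= entries,
		};

		/* current task, current context only, skip no frames */
		save_stack_trace(&trace, current, 0, 0);
		print_stack_trace(&trace, 0);		/* no extra indentation */
	}

When CONFIG_STACKTRACE is disabled, both calls compile away to empty statements.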
#ifdef CONFIG_NUMA
extern int zone_reclaim_mode;
+extern int sysctl_min_unmapped_ratio;
extern int zone_reclaim(struct zone *, gfp_t, unsigned int);
#else
#define zone_reclaim_mode 0
VM_DROP_PAGECACHE=29, /* int: nuke lots of pagecache */
VM_PERCPU_PAGELIST_FRACTION=30,/* int: fraction of pages in each percpu_pagelist */
VM_ZONE_RECLAIM_MODE=31, /* reclaim local zone memory before going off node */
- VM_ZONE_RECLAIM_INTERVAL=32, /* time period to wait after reclaim failure */
+ VM_MIN_UNMAPPED=32, /* Set min percent of unmapped pages */
VM_PANIC_ON_OOM=33, /* panic at out-of-memory */
VM_VDSO_ENABLED=34, /* map VDSO into new processes? */
};
-#include <linux/version.h>
+#include <linux/utsrelease.h>
#include <linux/module.h>
/* Simply sanity version stamp for modules. */
wait_queue_t name = __WAITQUEUE_INITIALIZER(name, tsk)
#define __WAIT_QUEUE_HEAD_INITIALIZER(name) { \
- .lock = SPIN_LOCK_UNLOCKED, \
+ .lock = __SPIN_LOCK_UNLOCKED(name.lock), \
.task_list = { &(name).task_list, &(name).task_list } }
#define DECLARE_WAIT_QUEUE_HEAD(name) \
#define __WAIT_BIT_KEY_INITIALIZER(word, bit) \
{ .flags = word, .bit_nr = bit, }
+/*
+ * lockdep: we want one lock-class for all waitqueue locks.
+ */
+extern struct lock_class_key waitqueue_lock_key;
+
static inline void init_waitqueue_head(wait_queue_head_t *q)
{
spin_lock_init(&q->lock);
+ lockdep_set_class(&q->lock, &waitqueue_lock_key);
INIT_LIST_HEAD(&q->task_list);
}
};
/**
- * struct mtd_ecc_stats - error correction status
+ * struct mtd_ecc_stats - error correction stats
*
* @corrected: number of corrected bits
* @failed: number of uncorrectable errors
#define unix_state_rlock(s) spin_lock(&unix_sk(s)->lock)
#define unix_state_runlock(s) spin_unlock(&unix_sk(s)->lock)
#define unix_state_wlock(s) spin_lock(&unix_sk(s)->lock)
+#define unix_state_wlock_nested(s) \
+ spin_lock_nested(&unix_sk(s)->lock, \
+ SINGLE_DEPTH_NESTING)
#define unix_state_wunlock(s) spin_unlock(&unix_sk(s)->lock)
#ifdef __KERNEL__
typedef struct ax25_route {
struct ax25_route *next;
- atomic_t ref;
+ atomic_t refcount;
ax25_address callsign;
struct net_device *dev;
ax25_digi *digipeat;
char ip_mode;
- struct timer_list timer;
} ax25_route;
+static inline void ax25_hold_route(ax25_route *ax25_rt)
+{
+ atomic_inc(&ax25_rt->refcount);
+}
+
+extern void __ax25_put_route(ax25_route *ax25_rt);
+
+static inline void ax25_put_route(ax25_route *ax25_rt)
+{
+ if (atomic_dec_and_test(&ax25_rt->refcount))
+ __ax25_put_route(ax25_rt);
+}
+
typedef struct {
char slave; /* slave_mode? */
struct timer_list slave_timer; /* timeout timer */
extern void ax25_rt_device_down(struct net_device *);
extern int ax25_rt_ioctl(unsigned int, void __user *);
extern struct file_operations ax25_route_fops;
+extern ax25_route *ax25_get_route(ax25_address *addr, struct net_device *dev);
extern int ax25_rt_autobind(ax25_cb *, ax25_address *);
-extern ax25_route *ax25_rt_find_route(ax25_route *, ax25_address *,
- struct net_device *);
extern struct sk_buff *ax25_rt_build_path(struct sk_buff *, ax25_address *, ax25_address *, ax25_digi *);
extern void ax25_rt_free(void);
-static inline void ax25_put_route(ax25_route *ax25_rt)
-{
- atomic_dec(&ax25_rt->ref);
-}
-
/* ax25_std_in.c */
extern int ax25_std_frame_in(ax25_cb *, struct sk_buff *, int);
extern int bt_sysfs_init(void);
extern void bt_sysfs_cleanup(void);
-extern struct class bt_class;
+extern struct class *bt_class;
#endif /* __BLUETOOTH_H */
/* HCI device quirks */
enum {
HCI_QUIRK_RESET_ON_INIT,
- HCI_QUIRK_RAW_DEVICE
+ HCI_QUIRK_RAW_DEVICE,
+ HCI_QUIRK_FIXUP_BUFFER_SIZE
};
/* HCI device flags */
#define HCIINQUIRY _IOR('H', 240, int)
/* HCI timeouts */
-#define HCI_CONN_TIMEOUT (HZ * 40)
-#define HCI_DISCONN_TIMEOUT (HZ * 2)
-#define HCI_CONN_IDLE_TIMEOUT (HZ * 60)
+#define HCI_CONNECT_TIMEOUT (40000) /* 40 seconds */
+#define HCI_DISCONN_TIMEOUT (2000) /* 2 seconds */
+#define HCI_IDLE_TIMEOUT (6000) /* 6 seconds */
+#define HCI_INIT_TIMEOUT (10000) /* 10 seconds */
/* HCI Packet types */
#define HCI_COMMAND_PKT 0x01
#define LMP_TACCURACY 0x10
#define LMP_RSWITCH 0x20
#define LMP_HOLD 0x40
-#define LMP_SNIF 0x80
+#define LMP_SNIFF 0x80
#define LMP_PARK 0x01
#define LMP_RSSI 0x02
#define LMP_PSCHEME 0x02
#define LMP_PCONTROL 0x04
+#define LMP_SNIFF_SUBR 0x02
+
+/* Connection modes */
+#define HCI_CM_ACTIVE 0x0000
+#define HCI_CM_HOLD 0x0001
+#define HCI_CM_SNIFF 0x0002
+#define HCI_CM_PARK 0x0003
+
/* Link policies */
#define HCI_LP_RSWITCH 0x0001
#define HCI_LP_HOLD 0x0002
#define HCI_LP_SNIFF 0x0004
#define HCI_LP_PARK 0x0008
-/* Link mode */
+/* Link modes */
#define HCI_LM_ACCEPT 0x8000
#define HCI_LM_MASTER 0x0001
#define HCI_LM_AUTH 0x0002
} __attribute__ ((packed));
#define OCF_READ_LOCAL_FEATURES 0x0003
-struct hci_rp_read_loc_features {
+struct hci_rp_read_local_features {
__u8 status;
__u8 features[8];
} __attribute__ ((packed));
} __attribute__ ((packed));
#define OCF_READ_REMOTE_FEATURES 0x001B
-struct hci_cp_read_rmt_features {
+struct hci_cp_read_remote_features {
__le16 handle;
} __attribute__ ((packed));
#define OCF_READ_REMOTE_VERSION 0x001D
-struct hci_cp_read_rmt_version {
+struct hci_cp_read_remote_version {
__le16 handle;
} __attribute__ ((packed));
/* Link Policy */
-#define OGF_LINK_POLICY 0x02
+#define OGF_LINK_POLICY 0x02
+
+#define OCF_SNIFF_MODE 0x0003
+struct hci_cp_sniff_mode {
+ __le16 handle;
+ __le16 max_interval;
+ __le16 min_interval;
+ __le16 attempt;
+ __le16 timeout;
+} __attribute__ ((packed));
+
+#define OCF_EXIT_SNIFF_MODE 0x0004
+struct hci_cp_exit_sniff_mode {
+ __le16 handle;
+} __attribute__ ((packed));
+
#define OCF_ROLE_DISCOVERY 0x0009
struct hci_cp_role_discovery {
__le16 handle;
__le16 policy;
} __attribute__ ((packed));
-#define OCF_SWITCH_ROLE 0x000B
+#define OCF_SWITCH_ROLE 0x000B
struct hci_cp_switch_role {
bdaddr_t bdaddr;
__u8 role;
__le16 handle;
} __attribute__ ((packed));
+#define OCF_SNIFF_SUBRATE 0x0011
+struct hci_cp_sniff_subrate {
+ __le16 handle;
+ __le16 max_latency;
+ __le16 min_remote_timeout;
+ __le16 min_local_timeout;
+} __attribute__ ((packed));
+
/* Status params */
#define OGF_STATUS_PARAM 0x05
__u8 key_type;
} __attribute__ ((packed));
-#define HCI_EV_RMT_FEATURES 0x0B
-struct hci_ev_rmt_features {
+#define HCI_EV_REMOTE_FEATURES 0x0B
+struct hci_ev_remote_features {
__u8 status;
__le16 handle;
__u8 features[8];
} __attribute__ ((packed));
-#define HCI_EV_RMT_VERSION 0x0C
-struct hci_ev_rmt_version {
+#define HCI_EV_REMOTE_VERSION 0x0C
+struct hci_ev_remote_version {
__u8 status;
__le16 handle;
__u8 lmp_ver;
__u8 pscan_rep_mode;
} __attribute__ ((packed));
+#define HCI_EV_SNIFF_SUBRATE 0x2E
+struct hci_ev_sniff_subrate {
+ __u8 status;
+ __le16 handle;
+ __le16 max_tx_latency;
+ __le16 max_rx_latency;
+ __le16 max_remote_timeout;
+ __le16 max_local_timeout;
+} __attribute__ ((packed));
+
/* Internal events generated by Bluetooth stack */
#define HCI_EV_STACK_INTERNAL 0xFD
struct hci_ev_stack_internal {
#define HCI_PROTO_L2CAP 0
#define HCI_PROTO_SCO 1
-#define HCI_INIT_TIMEOUT (HZ * 10)
-
/* HCI Core structures */
-
struct inquiry_data {
bdaddr_t bdaddr;
__u8 pscan_rep_mode;
__u16 link_policy;
__u16 link_mode;
+ __u32 idle_timeout;
+ __u16 sniff_min_interval;
+ __u16 sniff_max_interval;
+
unsigned long quirks;
atomic_t cmd_cnt;
atomic_t promisc;
- struct class_device class_dev;
+ struct device *parent;
+ struct device dev;
struct module *owner;
bdaddr_t dst;
__u16 handle;
__u16 state;
+ __u8 mode;
__u8 type;
__u8 out;
__u8 dev_class[3];
+ __u8 features[8];
+ __u16 interval;
+ __u16 link_policy;
__u32 link_mode;
+ __u8 power_save;
unsigned long pend;
-
+
unsigned int sent;
-
+
struct sk_buff_head data_q;
- struct timer_list timer;
-
+ struct timer_list disc_timer;
+ struct timer_list idle_timer;
+
struct hci_dev *hdev;
void *l2cap_data;
void *sco_data;
enum {
HCI_CONN_AUTH_PEND,
HCI_CONN_ENCRYPT_PEND,
- HCI_CONN_RSWITCH_PEND
+ HCI_CONN_RSWITCH_PEND,
+ HCI_CONN_MODE_CHANGE_PEND,
};
static inline void hci_conn_hash_init(struct hci_dev *hdev)
int hci_conn_change_link_key(struct hci_conn *conn);
int hci_conn_switch_role(struct hci_conn *conn, uint8_t role);
-static inline void hci_conn_set_timer(struct hci_conn *conn, unsigned long timeout)
-{
- mod_timer(&conn->timer, jiffies + timeout);
-}
-
-static inline void hci_conn_del_timer(struct hci_conn *conn)
-{
- del_timer(&conn->timer);
-}
+void hci_conn_enter_active_mode(struct hci_conn *conn);
+void hci_conn_enter_sniff_mode(struct hci_conn *conn);
static inline void hci_conn_hold(struct hci_conn *conn)
{
atomic_inc(&conn->refcnt);
- hci_conn_del_timer(conn);
+ del_timer(&conn->disc_timer);
}
static inline void hci_conn_put(struct hci_conn *conn)
{
if (atomic_dec_and_test(&conn->refcnt)) {
+ unsigned long timeo;
if (conn->type == ACL_LINK) {
- unsigned long timeo = (conn->out) ?
- HCI_DISCONN_TIMEOUT : HCI_DISCONN_TIMEOUT * 2;
- hci_conn_set_timer(conn, timeo);
+ timeo = msecs_to_jiffies(HCI_DISCONN_TIMEOUT);
+ if (!conn->out)
+ timeo *= 2;
+ del_timer(&conn->idle_timer);
} else
- hci_conn_set_timer(conn, HZ / 100);
+ timeo = msecs_to_jiffies(10);
+ mod_timer(&conn->disc_timer, jiffies + timeo);
}
}
int hci_register_sysfs(struct hci_dev *hdev);
void hci_unregister_sysfs(struct hci_dev *hdev);
-#define SET_HCIDEV_DEV(hdev, pdev) ((hdev)->class_dev.dev = (pdev))
+#define SET_HCIDEV_DEV(hdev, pdev) ((hdev)->parent = (pdev))
/* ----- LMP capabilities ----- */
-#define lmp_rswitch_capable(dev) (dev->features[0] & LMP_RSWITCH)
-#define lmp_encrypt_capable(dev) (dev->features[0] & LMP_ENCRYPT)
+#define lmp_rswitch_capable(dev) ((dev)->features[0] & LMP_RSWITCH)
+#define lmp_encrypt_capable(dev) ((dev)->features[0] & LMP_ENCRYPT)
+#define lmp_sniff_capable(dev) ((dev)->features[0] & LMP_SNIFF)
+#define lmp_sniffsubr_capable(dev) ((dev)->features[5] & LMP_SNIFF_SUBR)
/* ----- HCI protocols ----- */
struct hci_proto {
int irq, irq2; /* Interrupts used */
int dma, dma2; /* DMA channel(s) used */
int fifo_size; /* FIFO size */
- int irqflags; /* interrupt flags (ie, SA_SHIRQ|SA_INTERRUPT) */
+ int irqflags; /* interrupt flags (i.e., IRQF_SHARED|IRQF_DISABLED) */
int direction; /* Link direction, used by some FIR drivers */
int enabled; /* Powered on? */
int suspended; /* Suspended by APM */
#include <linux/timer.h>
#include <linux/cache.h>
#include <linux/module.h>
+#include <linux/lockdep.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h> /* struct sk_buff */
#include <linux/security.h>
spinlock_t slock;
struct sock_iocb *owner;
wait_queue_head_t wq;
+ /*
+ * We express the mutex-alike socket_lock semantics
+ * to the lock validator by explicitly managing
+ * the slock as a lock variant (in addition to
+ * the slock itself):
+ */
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+ struct lockdep_map dep_map;
+#endif
} socket_lock_t;
-#define sock_lock_init(__sk) \
-do { spin_lock_init(&((__sk)->sk_lock.slock)); \
- (__sk)->sk_lock.owner = NULL; \
- init_waitqueue_head(&((__sk)->sk_lock.wq)); \
-} while(0)
-
struct sock;
struct proto;
/* BH context may only use the following locking interface. */
#define bh_lock_sock(__sk) spin_lock(&((__sk)->sk_lock.slock))
+#define bh_lock_sock_nested(__sk) \
+ spin_lock_nested(&((__sk)->sk_lock.slock), \
+ SINGLE_DEPTH_NESTING)
#define bh_unlock_sock(__sk) spin_unlock(&((__sk)->sk_lock.slock))
extern struct sock *sk_alloc(int family,
ISCSI_UEVENT_TRANSPORT_EP_POLL = UEVENT_BASE + 13,
ISCSI_UEVENT_TRANSPORT_EP_DISCONNECT = UEVENT_BASE + 14,
+ ISCSI_UEVENT_TGT_DSCVR = UEVENT_BASE + 15,
+
/* up events */
ISCSI_KEVENT_RECV_PDU = KEVENT_BASE + 1,
ISCSI_KEVENT_CONN_ERROR = KEVENT_BASE + 2,
ISCSI_KEVENT_IF_ERROR = KEVENT_BASE + 3,
+ ISCSI_KEVENT_DESTROY_SESSION = KEVENT_BASE + 4,
+};
+
+enum iscsi_tgt_dscvr {
+ ISCSI_TGT_DSCVR_SEND_TARGETS = 1,
+ ISCSI_TGT_DSCVR_ISNS = 2,
+ ISCSI_TGT_DSCVR_SLP = 3,
};
struct iscsi_uevent {
struct msg_transport_disconnect {
uint64_t ep_handle;
} ep_disconnect;
+ struct msg_tgt_dscvr {
+ enum iscsi_tgt_dscvr type;
+ uint32_t host_no;
+ /*
+ * enable = 1 to establish a new connection
+ * with the server. enable = 0 to disconnect
+ * from the server. Used primarily to switch
+ * from one iSNS server to another.
+ */
+ uint32_t enable;
+ } tgt_dscvr;
} u;
union {
/* messages k -> u */
uint32_t cid;
uint32_t error; /* enum iscsi_err */
} connerror;
+ struct msg_session_destroyed {
+ uint32_t host_no;
+ uint32_t sid;
+ } d_session;
struct msg_transport_connect_ret {
uint64_t handle;
} ep_connect_ret;
int max_xmit_dlength; /* target_max_recv_dsl */
int hdrdgst_en;
int datadgst_en;
+ int ifmarker_en;
+ int ofmarker_en;
+ /* values userspace uses to id a conn */
+ int persistent_port;
+ char *persistent_address;
/* MIB-statistics */
uint64_t txdata_octets;
int pdu_inorder_en;
int dataseq_inorder_en;
int erl;
- int ifmarker_en;
- int ofmarker_en;
+ int tpgt;
+ char *targetname;
/* control data */
struct iscsi_transport *tt;
extern void iscsi_session_teardown(struct iscsi_cls_session *);
extern struct iscsi_session *class_to_transport_session(struct iscsi_cls_session *);
extern void iscsi_session_recovery_timedout(struct iscsi_cls_session *);
+extern int iscsi_set_param(struct iscsi_cls_conn *cls_conn,
+ enum iscsi_param param, char *buf, int buflen);
+extern int iscsi_session_get_param(struct iscsi_cls_session *cls_session,
+ enum iscsi_param param, char *buf);
#define session_to_cls(_sess) \
hostdata_session(_sess->host->hostdata)
extern int iscsi_conn_bind(struct iscsi_cls_session *, struct iscsi_cls_conn *,
int);
extern void iscsi_conn_failure(struct iscsi_conn *conn, enum iscsi_err err);
+extern int iscsi_conn_get_param(struct iscsi_cls_conn *cls_conn,
+ enum iscsi_param param, char *buf);
/*
* pdu and task processing
extern struct scsi_cmnd *scsi_get_command(struct scsi_device *, gfp_t);
extern void scsi_put_command(struct scsi_cmnd *);
-extern void scsi_io_completion(struct scsi_cmnd *, unsigned int, unsigned int);
+extern void scsi_io_completion(struct scsi_cmnd *, unsigned int);
extern void scsi_finish_command(struct scsi_cmnd *cmd);
extern void scsi_req_abort_cmd(struct scsi_cmnd *cmd);
*/
unsigned ordered_tag:1;
+ /* task mgmt function in progress */
+ unsigned tmf_in_progress:1;
+
/*
* Optional work queue to be utilized by the transport
*/
{
return shost->shost_state == SHOST_RECOVERY ||
shost->shost_state == SHOST_CANCEL_RECOVERY ||
- shost->shost_state == SHOST_DEL_RECOVERY;
+ shost->shost_state == SHOST_DEL_RECOVERY ||
+ shost->tmf_in_progress;
}
extern int scsi_queue_work(struct Scsi_Host *, struct work_struct *);
struct iscsi_conn;
struct iscsi_cmd_task;
struct iscsi_mgmt_task;
+struct sockaddr;
/**
* struct iscsi_transport - iSCSI Transport template
* @bind_conn: associate this connection with existing iSCSI session
* and specified transport descriptor
* @destroy_conn: destroy inactive iSCSI connection
- * @set_param: set iSCSI Data-Path operational parameter
+ * @set_param: set iSCSI parameter. Return 0 on success, -ENODATA
+ * when param is not supported, and a -Exx value on other
+ * error.
+ * @get_param: get iSCSI parameter. Must return number of bytes
+ * copied to buffer on success, -ENODATA when param
+ * is not supported, and a -Exx value on other error
* @start_conn: set connection to be operational
* @stop_conn: suspend/recover/terminate connection
* @send_pdu: send iSCSI PDU, Login, Logout, NOP-Out, Reject, Text.
void (*stop_conn) (struct iscsi_cls_conn *conn, int flag);
void (*destroy_conn) (struct iscsi_cls_conn *conn);
int (*set_param) (struct iscsi_cls_conn *conn, enum iscsi_param param,
- uint32_t value);
+ char *buf, int buflen);
int (*get_conn_param) (struct iscsi_cls_conn *conn,
- enum iscsi_param param, uint32_t *value);
+ enum iscsi_param param, char *buf);
int (*get_session_param) (struct iscsi_cls_session *session,
- enum iscsi_param param, uint32_t *value);
- int (*get_conn_str_param) (struct iscsi_cls_conn *conn,
- enum iscsi_param param, char *buf);
- int (*get_session_str_param) (struct iscsi_cls_session *session,
- enum iscsi_param param, char *buf);
+ enum iscsi_param param, char *buf);
int (*send_pdu) (struct iscsi_cls_conn *conn, struct iscsi_hdr *hdr,
char *data, uint32_t data_size);
void (*get_stats) (struct iscsi_cls_conn *conn,
uint64_t *ep_handle);
int (*ep_poll) (uint64_t ep_handle, int timeout_ms);
void (*ep_disconnect) (uint64_t ep_handle);
+ int (*tgt_dscvr) (enum iscsi_tgt_dscvr type, uint32_t host_no,
+ uint32_t enable, struct sockaddr *dst_addr);
};
/*
struct iscsi_transport *transport;
uint32_t cid; /* connection id */
- /* portal/group values we got during discovery */
- char *persistent_address;
- int persistent_port;
- /* portal/group values we are currently using */
- char *address;
- int port;
-
int active; /* must be accessed with the connlock */
struct device dev; /* sysfs transport/container device */
struct mempool_zone *z_error;
struct list_head host_list;
struct iscsi_transport *transport;
- /* iSCSI values used as unique id by userspace. */
- char *targetname;
- int tpgt;
-
/* recovery fields */
int recovery_tmo;
struct work_struct recovery_work;
int target_id;
- int channel;
int sid; /* session id */
void *dd_data; /* LLD private data */
#define iscsi_session_to_shost(_session) \
dev_to_shost(_session->dev.parent)
+#define starget_to_session(_stgt) \
+ iscsi_dev_to_session(_stgt->dev.parent)
+
struct iscsi_host {
- int next_target_id;
struct list_head sessions;
struct mutex mutex;
};
/*
* session and connection functions that can be used by HW iSCSI LLDs
*/
+extern struct iscsi_cls_session *iscsi_alloc_session(struct Scsi_Host *shost,
+ struct iscsi_transport *transport);
+extern int iscsi_add_session(struct iscsi_cls_session *session,
+ unsigned int target_id);
+extern int iscsi_if_create_session_done(struct iscsi_cls_conn *conn);
+extern int iscsi_if_destroy_session_done(struct iscsi_cls_conn *conn);
extern struct iscsi_cls_session *iscsi_create_session(struct Scsi_Host *shost,
- struct iscsi_transport *t, int channel);
+ struct iscsi_transport *t,
+ unsigned int target_id);
+extern void iscsi_remove_session(struct iscsi_cls_session *session);
+extern void iscsi_free_session(struct iscsi_cls_session *session);
extern int iscsi_destroy_session(struct iscsi_cls_session *session);
extern struct iscsi_cls_conn *iscsi_create_conn(struct iscsi_cls_session *sess,
uint32_t cid);
extern void iscsi_unblock_session(struct iscsi_cls_session *session);
extern void iscsi_block_session(struct iscsi_cls_session *session);
+
#endif
#include <linux/transport_class.h>
#include <linux/types.h>
+#include <linux/mutex.h>
struct scsi_transport_template;
struct sas_rphy;
enum sas_linkrate minimum_linkrate;
enum sas_linkrate maximum_linkrate_hw;
enum sas_linkrate maximum_linkrate;
- u8 port_identifier;
/* internal state */
unsigned int local_attached : 1;
u32 loss_of_dword_sync_count;
u32 phy_reset_problem_count;
- /* the other end of the link */
- struct sas_rphy *rphy;
+ /* for the list of phys belonging to a port */
+ struct list_head port_siblings;
};
#define dev_to_phy(d) \
#define rphy_to_expander_device(r) \
container_of((r), struct sas_expander_device, rphy)
+struct sas_port {
+ struct device dev;
+
+ u8 port_identifier;
+ int num_phys;
+
+ /* the other end of the link */
+ struct sas_rphy *rphy;
+
+ struct mutex phy_list_mutex;
+ struct list_head phy_list;
+};
+
+#define dev_to_sas_port(d) \
+ container_of((d), struct sas_port, dev)
+#define transport_class_to_sas_port(cdev) \
+ dev_to_sas_port((cdev)->dev)
+
/* The functions by which the transport class and the driver communicate */
struct sas_function_template {
int (*get_linkerrors)(struct sas_phy *);
};
+void sas_remove_children(struct device *);
extern void sas_remove_host(struct Scsi_Host *);
extern struct sas_phy *sas_phy_alloc(struct device *, int);
extern void sas_phy_delete(struct sas_phy *);
extern int scsi_is_sas_phy(const struct device *);
-extern struct sas_rphy *sas_end_device_alloc(struct sas_phy *);
-extern struct sas_rphy *sas_expander_alloc(struct sas_phy *, enum sas_device_type);
+extern struct sas_rphy *sas_end_device_alloc(struct sas_port *);
+extern struct sas_rphy *sas_expander_alloc(struct sas_port *, enum sas_device_type);
void sas_rphy_free(struct sas_rphy *);
extern int sas_rphy_add(struct sas_rphy *);
extern void sas_rphy_delete(struct sas_rphy *);
extern int scsi_is_sas_rphy(const struct device *);
+struct sas_port *sas_port_alloc(struct device *, int);
+int sas_port_add(struct sas_port *);
+void sas_port_free(struct sas_port *);
+void sas_port_delete(struct sas_port *);
+void sas_port_add_phy(struct sas_port *, struct sas_phy *);
+void sas_port_delete_phy(struct sas_port *, struct sas_phy *);
+int scsi_is_sas_port(const struct device *);
+
extern struct scsi_transport_template *
sas_attach_transport(struct sas_function_template *);
extern void sas_release_transport(struct scsi_transport_template *);
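
/*
 * Rough usage sketch for the new sas_port object above (an assumed
 * LLD-side pattern, not code taken from this diff): a port is built
 * explicitly from its phys, and the remote rphy now hangs off the
 * port rather than off a single phy.
 */
static struct sas_rphy *example_wide_port_attach(struct device *parent,
						 int port_id,
						 struct sas_phy *phy)
{
	struct sas_port *port;
	struct sas_rphy *rphy;

	port = sas_port_alloc(parent, port_id);
	if (!port)
		return NULL;
	if (sas_port_add(port)) {
		sas_port_free(port);
		return NULL;
	}

	sas_port_add_phy(port, phy);	/* repeat for each phy forming the port */

	rphy = sas_end_device_alloc(port);
	if (rphy && sas_rphy_add(rphy)) {
		sas_rphy_free(rphy);
		rphy = NULL;
	}
	return rphy;
}
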
{
while (*irq_table != -1) {
if (!request_irq(*irq_table, snd_legacy_empty_irq_handler,
- SA_INTERRUPT | SA_PROBEIRQ, "ALSA Test IRQ",
+ IRQF_DISABLED | IRQF_PROBE_SHARED, "ALSA Test IRQ",
(void *) irq_table)) {
free_irq(*irq_table, (void *) irq_table);
return *irq_table;
#include <linux/key.h>
#include <linux/unwind.h>
#include <linux/buffer_head.h>
+#include <linux/debug_locks.h>
+#include <linux/lockdep.h>
#include <asm/io.h>
#include <asm/bugs.h>
smp_setup_processor_id();
+ /*
+ * Need to run as early as possible, to initialize the
+ * lockdep hash:
+ */
+ lockdep_init();
+
+ local_irq_disable();
+ early_boot_irqs_off();
+ early_init_irq_lock_class();
+
/*
* Interrupts are still disabled. Do necessary setups, then
* enable them
init_timers();
hrtimers_init();
softirq_init();
- time_init();
timekeeping_init();
+ time_init();
+ profile_init();
+ if (!irqs_disabled())
+ printk("start_kernel(): bug: interrupts were enabled early\n");
+ early_boot_irqs_on();
+ local_irq_enable();
/*
* HACK ALERT! This is early. We're enabling the console before
console_init();
if (panic_later)
panic(panic_later, panic_param);
- profile_init();
- local_irq_enable();
+
+ lockdep_info();
+
+ /*
+ * Need to run this when irqs are enabled, because it wants
+ * to self-test [hard/soft]-irqs on/off lock inversion bugs
+ * too:
+ */
+ locking_selftest();
+
#ifdef CONFIG_BLK_DEV_INITRD
if (initrd_start && !initrd_below_start_ok &&
initrd_start < min_low_pfn << PAGE_SHIFT) {
#include <linux/module.h>
#include <linux/uts.h>
#include <linux/utsname.h>
+#include <linux/utsrelease.h>
#include <linux/version.h>
#define version(a) Version_ ## a
signal.o sys.o kmod.o workqueue.o pid.o \
rcupdate.o extable.o params.o posix-timers.o \
kthread.o wait.o kfifo.o sys_ni.o posix-cpu-timers.o mutex.o \
- hrtimer.o
+ hrtimer.o rwsem.o
+obj-$(CONFIG_STACKTRACE) += stacktrace.o
obj-y += time/
obj-$(CONFIG_DEBUG_MUTEXES) += mutex-debug.o
+obj-$(CONFIG_LOCKDEP) += lockdep.o
+ifeq ($(CONFIG_PROC_FS),y)
+obj-$(CONFIG_LOCKDEP) += lockdep_proc.o
+endif
obj-$(CONFIG_FUTEX) += futex.o
ifeq ($(CONFIG_COMPAT),y)
obj-$(CONFIG_FUTEX) += futex_compat.o
obj-$(CONFIG_GENERIC_ISA_DMA) += dma.o
obj-$(CONFIG_SMP) += cpu.o spinlock.o
obj-$(CONFIG_DEBUG_SPINLOCK) += spinlock.o
+obj-$(CONFIG_PROVE_LOCKING) += spinlock.o
obj-$(CONFIG_UID16) += uid16.o
obj-$(CONFIG_MODULES) += module.o
obj-$(CONFIG_KALLSYMS) += kallsyms.o
int ret = 0;
pid_t pid;
__u32 version;
- task_t *target;
+ struct task_struct *target;
struct __user_cap_data_struct data;
if (get_user(version, &header->version))
kernel_cap_t *inheritable,
kernel_cap_t *permitted)
{
- task_t *g, *target;
+ struct task_struct *g, *target;
int ret = -EPERM;
int found = 0;
kernel_cap_t *inheritable,
kernel_cap_t *permitted)
{
- task_t *g, *target;
+ struct task_struct *g, *target;
int ret = -EPERM;
int found = 0;
{
kernel_cap_t inheritable, permitted, effective;
__u32 version;
- task_t *target;
+ struct task_struct *target;
int ret;
pid_t pid;
void release_task(struct task_struct * p)
{
+ struct task_struct *leader;
int zap_leader;
- task_t *leader;
repeat:
atomic_dec(&p->user->processes);
write_lock_irq(&tasklist_lock);
*
* "I ask you, have you ever known what it is to be an orphan?"
*/
-static int will_become_orphaned_pgrp(int pgrp, task_t *ignored_task)
+static int will_become_orphaned_pgrp(int pgrp, struct task_struct *ignored_task)
{
struct task_struct *p;
int ret = 1;
mmput(mm);
}
-static inline void choose_new_parent(task_t *p, task_t *reaper)
+static inline void
+choose_new_parent(struct task_struct *p, struct task_struct *reaper)
{
/*
* Make sure we're not reparenting to ourselves and that
p->real_parent = reaper;
}
-static void reparent_thread(task_t *p, task_t *father, int traced)
+static void
+reparent_thread(struct task_struct *p, struct task_struct *father, int traced)
{
/* We don't want people slaying init. */
if (p->exit_signal != -1)
* group, and if no such member exists, give it to
* the global child reaper process (ie "init")
*/
-static void forget_original_parent(struct task_struct * father,
- struct list_head *to_release)
+static void
+forget_original_parent(struct task_struct *father, struct list_head *to_release)
{
struct task_struct *p, *reaper = father;
struct list_head *_p, *_n;
*/
list_for_each_safe(_p, _n, &father->children) {
int ptrace;
- p = list_entry(_p,struct task_struct,sibling);
+ p = list_entry(_p, struct task_struct, sibling);
ptrace = p->ptrace;
list_add(&p->ptrace_list, to_release);
}
list_for_each_safe(_p, _n, &father->ptrace_children) {
- p = list_entry(_p,struct task_struct,ptrace_list);
+ p = list_entry(_p, struct task_struct, ptrace_list);
choose_new_parent(p, reaper);
reparent_thread(p, father, 1);
}
list_for_each_safe(_p, _n, &ptrace_dead) {
list_del_init(_p);
- t = list_entry(_p,struct task_struct,ptrace_list);
+ t = list_entry(_p, struct task_struct, ptrace_list);
release_task(t);
}
if (unlikely(current->pi_state_cache))
kfree(current->pi_state_cache);
/*
- * If DEBUG_MUTEXES is on, make sure we are holding no locks:
+ * Make sure we are holding no locks:
*/
- mutex_debug_check_no_locks_held(tsk);
- rt_mutex_debug_check_no_locks_held(tsk);
+ debug_check_no_locks_held(tsk);
if (tsk->io_context)
exit_io_context();
do_group_exit((error_code & 0xff) << 8);
}
-static int eligible_child(pid_t pid, int options, task_t *p)
+static int eligible_child(pid_t pid, int options, struct task_struct *p)
{
if (pid > 0) {
if (p->pid != pid)
return 1;
}
-static int wait_noreap_copyout(task_t *p, pid_t pid, uid_t uid,
+static int wait_noreap_copyout(struct task_struct *p, pid_t pid, uid_t uid,
int why, int status,
struct siginfo __user *infop,
struct rusage __user *rusagep)
{
int retval = rusagep ? getrusage(p, RUSAGE_BOTH, rusagep) : 0;
+
put_task_struct(p);
if (!retval)
retval = put_user(SIGCHLD, &infop->si_signo);
* the lock and this task is uninteresting. If we return nonzero, we have
* released the lock and the system call should return.
*/
-static int wait_task_zombie(task_t *p, int noreap,
+static int wait_task_zombie(struct task_struct *p, int noreap,
struct siginfo __user *infop,
int __user *stat_addr, struct rusage __user *ru)
{
* the lock and this task is uninteresting. If we return nonzero, we have
* released the lock and the system call should return.
*/
-static int wait_task_stopped(task_t *p, int delayed_group_leader, int noreap,
- struct siginfo __user *infop,
+static int wait_task_stopped(struct task_struct *p, int delayed_group_leader,
+ int noreap, struct siginfo __user *infop,
int __user *stat_addr, struct rusage __user *ru)
{
int retval, exit_code;
* the lock and this task is uninteresting. If we return nonzero, we have
* released the lock and the system call should return.
*/
-static int wait_task_continued(task_t *p, int noreap,
+static int wait_task_continued(struct task_struct *p, int noreap,
struct siginfo __user *infop,
int __user *stat_addr, struct rusage __user *ru)
{
int ret;
list_for_each(_p,&tsk->children) {
- p = list_entry(_p,struct task_struct,sibling);
+ p = list_entry(_p, struct task_struct, sibling);
ret = eligible_child(pid, options, p);
if (!ret)
down_write(&oldmm->mmap_sem);
flush_cache_mm(oldmm);
- down_write(&mm->mmap_sem);
+ /*
+ * Not linked in yet - no deadlock potential:
+ */
+ down_write_nested(&mm->mmap_sem, SINGLE_DEPTH_NESTING);
mm->locked_vm = 0;
mm->mmap = NULL;
spin_lock_init(&p->pi_lock);
plist_head_init(&p->pi_waiters, &p->pi_lock);
p->pi_blocked_on = NULL;
-# ifdef CONFIG_DEBUG_RT_MUTEXES
- spin_lock_init(&p->held_list_lock);
- INIT_LIST_HEAD(&p->held_list_head);
-# endif
#endif
}
* parts of the process environment (as per the clone
* flags). The actual kick-off is left to the caller.
*/
-static task_t *copy_process(unsigned long clone_flags,
- unsigned long stack_start,
- struct pt_regs *regs,
- unsigned long stack_size,
- int __user *parent_tidptr,
- int __user *child_tidptr,
- int pid)
+static struct task_struct *copy_process(unsigned long clone_flags,
+ unsigned long stack_start,
+ struct pt_regs *regs,
+ unsigned long stack_size,
+ int __user *parent_tidptr,
+ int __user *child_tidptr,
+ int pid)
{
int retval;
struct task_struct *p = NULL;
if (!p)
goto fork_out;
+#ifdef CONFIG_TRACE_IRQFLAGS
+ DEBUG_LOCKS_WARN_ON(!p->hardirqs_enabled);
+ DEBUG_LOCKS_WARN_ON(!p->softirqs_enabled);
+#endif
retval = -EAGAIN;
if (atomic_read(&p->user->processes) >=
p->signal->rlim[RLIMIT_NPROC].rlim_cur) {
}
mpol_fix_fork_child_flag(p);
#endif
+#ifdef CONFIG_TRACE_IRQFLAGS
+ p->irq_events = 0;
+ p->hardirqs_enabled = 0;
+ p->hardirq_enable_ip = 0;
+ p->hardirq_enable_event = 0;
+ p->hardirq_disable_ip = _THIS_IP_;
+ p->hardirq_disable_event = 0;
+ p->softirqs_enabled = 1;
+ p->softirq_enable_ip = _THIS_IP_;
+ p->softirq_enable_event = 0;
+ p->softirq_disable_ip = 0;
+ p->softirq_disable_event = 0;
+ p->hardirq_context = 0;
+ p->softirq_context = 0;
+#endif
+#ifdef CONFIG_LOCKDEP
+ p->lockdep_depth = 0; /* no locks held yet */
+ p->curr_chain_key = 0;
+ p->lockdep_recursion = 0;
+#endif
rt_mutex_init_task(p);
return regs;
}
-task_t * __devinit fork_idle(int cpu)
+struct task_struct * __devinit fork_idle(int cpu)
{
- task_t *task;
+ struct task_struct *task;
struct pt_regs regs;
	task = copy_process(CLONE_VM, 0, idle_regs(&regs), 0, NULL, NULL, 0);
return 0;
}
+/*
+ * Express the locking dependencies for lockdep:
+ */
+static inline void
+double_lock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2)
+{
+ if (hb1 <= hb2) {
+ spin_lock(&hb1->lock);
+ if (hb1 < hb2)
+ spin_lock_nested(&hb2->lock, SINGLE_DEPTH_NESTING);
+ } else { /* hb1 > hb2 */
+ spin_lock(&hb2->lock);
+ spin_lock_nested(&hb1->lock, SINGLE_DEPTH_NESTING);
+ }
+}
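
/*
 * Assumed caller shape (a sketch, not lifted from this diff): both
 * hash buckets are taken in address order so that every path agrees
 * on the ordering, and spin_lock_nested() tells lockdep that the
 * second acquisition of the same lock class is intentional:
 *
 *	double_lock_hb(hb1, hb2);
 *	... wake or requeue the waiters ...
 *	spin_unlock(&hb1->lock);
 *	if (hb1 != hb2)
 *		spin_unlock(&hb2->lock);
 */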
+
/*
* Wake up all waiters hashed on the physical page that is mapped
* to this virtual address:
hb2 = hash_futex(&key2);
retry:
- if (hb1 < hb2)
- spin_lock(&hb1->lock);
- spin_lock(&hb2->lock);
- if (hb1 > hb2)
- spin_lock(&hb1->lock);
+ double_lock_hb(hb1, hb2);
op_ret = futex_atomic_op_inuser(op, uaddr2);
if (unlikely(op_ret < 0)) {
hb1 = hash_futex(&key1);
hb2 = hash_futex(&key2);
- if (hb1 < hb2)
- spin_lock(&hb1->lock);
- spin_lock(&hb2->lock);
- if (hb1 > hb2)
- spin_lock(&hb1->lock);
+ double_lock_hb(hb1, hb2);
if (likely(cmpval != NULL)) {
u32 curval;
return HRTIMER_NORESTART;
}
-void hrtimer_init_sleeper(struct hrtimer_sleeper *sl, task_t *task)
+void hrtimer_init_sleeper(struct hrtimer_sleeper *sl, struct task_struct *task)
{
sl->timer.function = hrtimer_wakeup;
sl->task = task;
struct hrtimer_base *base = per_cpu(hrtimer_bases, cpu);
int i;
- for (i = 0; i < MAX_HRTIMER_BASES; i++, base++)
+ for (i = 0; i < MAX_HRTIMER_BASES; i++, base++) {
spin_lock_init(&base->lock);
+ lockdep_set_class(&base->lock, &base->lock_key);
+ }
}
#ifdef CONFIG_HOTPLUG_CPU
* keep it masked and get out of here
*/
action = desc->action;
- if (unlikely(!action || (desc->status & IRQ_DISABLED)))
+ if (unlikely(!action || (desc->status & IRQ_DISABLED))) {
+ desc->status |= IRQ_PENDING;
goto out;
+ }
desc->status |= IRQ_INPROGRESS;
+ desc->status &= ~IRQ_PENDING;
spin_unlock(&desc->lock);
action_ret = handle_IRQ_event(irq, regs, action);
if (!handle)
handle = handle_bad_irq;
- if (is_chained && desc->chip == &no_irq_chip)
- printk(KERN_WARNING "Trying to install "
- "chained interrupt type for IRQ%d\n", irq);
+ if (desc->chip == &no_irq_chip) {
+ printk(KERN_WARNING "Trying to install %sinterrupt handler "
+ "for IRQ%d\n", is_chained ? "chained " : " ", irq);
+ /*
+ * Some ARM implementations install a handler for really dumb
+ * interrupt hardware without setting an irq_chip. This worked
+ * with the ARM no_irq_chip but the check in setup_irq would
+	 * prevent us from setting up the interrupt at all. Switch it to
+ * dummy_irq_chip for easy transition.
+ */
+ desc->chip = &dummy_irq_chip;
+ }
spin_lock_irqsave(&desc->lock, flags);
.end = noop,
};
+/*
+ * Generic dummy implementation which can be used for
+ * real dumb interrupt sources
+ */
+struct irq_chip dummy_irq_chip = {
+ .name = "dummy",
+ .startup = noop_ret,
+ .shutdown = noop,
+ .enable = noop,
+ .disable = noop,
+ .ack = noop,
+ .mask = noop,
+ .unmask = noop,
+ .end = noop,
+};
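
/*
 * Illustrative driver-side use (an assumed pattern, not part of this
 * diff): a platform with trivially simple interrupt hardware can now
 * hand its lines to the dummy chip explicitly instead of relying on
 * the fallback above:
 *
 *	set_irq_chip_and_handler(irq, &dummy_irq_chip, handle_simple_irq);
 */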
+
/*
* Special, empty irq handler:
*/
irqreturn_t ret, retval = IRQ_NONE;
unsigned int status = 0;
- if (!(action->flags & SA_INTERRUPT))
- local_irq_enable();
+ handle_dynamic_tick(action);
+
+ if (!(action->flags & IRQF_DISABLED))
+ local_irq_enable_in_hardirq();
do {
ret = action->handler(irq, action->dev_id, regs);
action = action->next;
} while (action);
- if (status & SA_SAMPLE_RANDOM)
+ if (status & IRQF_SAMPLE_RANDOM)
add_interrupt_randomness(irq);
local_irq_disable();
return 1;
}
+#ifdef CONFIG_TRACE_IRQFLAGS
+
+/*
+ * lockdep: we want to handle all irq_desc locks as a single lock-class:
+ */
+static struct lock_class_key irq_desc_lock_class;
+
+void early_init_irq_lock_class(void)
+{
+ int i;
+
+ for (i = 0; i < NR_IRQS; i++)
+ lockdep_set_class(&irq_desc[i].lock, &irq_desc_lock_class);
+}
+
+#endif
action = irq_desc[irq].action;
if (action)
- if (irqflags & action->flags & SA_SHIRQ)
+ if (irqflags & action->flags & IRQF_SHARED)
action = NULL;
return !action;
* so we have to be careful not to interfere with a
* running system.
*/
- if (new->flags & SA_SAMPLE_RANDOM) {
+ if (new->flags & IRQF_SAMPLE_RANDOM) {
/*
* This function might sleep, we want to call it first,
* outside of the atomic block.
/*
* Can't share interrupts unless both agree to and are
* the same type (level, edge, polarity). So both flag
- * fields must have SA_SHIRQ set and the bits which
+ * fields must have IRQF_SHARED set and the bits which
* set the trigger type must match.
*/
- if (!((old->flags & new->flags) & SA_SHIRQ) ||
- ((old->flags ^ new->flags) & SA_TRIGGER_MASK))
+ if (!((old->flags & new->flags) & IRQF_SHARED) ||
+ ((old->flags ^ new->flags) & IRQF_TRIGGER_MASK))
goto mismatch;
-#if defined(CONFIG_IRQ_PER_CPU) && defined(SA_PERCPU_IRQ)
+#if defined(CONFIG_IRQ_PER_CPU)
/* All handlers must agree on per-cpuness */
- if ((old->flags & SA_PERCPU_IRQ) !=
- (new->flags & SA_PERCPU_IRQ))
+ if ((old->flags & IRQF_PERCPU) !=
+ (new->flags & IRQF_PERCPU))
goto mismatch;
#endif
}
*p = new;
-#if defined(CONFIG_IRQ_PER_CPU) && defined(SA_PERCPU_IRQ)
- if (new->flags & SA_PERCPU_IRQ)
+#if defined(CONFIG_IRQ_PER_CPU)
+ if (new->flags & IRQF_PERCPU)
desc->status |= IRQ_PER_CPU;
#endif
if (!shared) {
irq_chip_set_defaults(desc->chip);
/* Setup the type (level, edge polarity) if configured: */
- if (new->flags & SA_TRIGGER_MASK) {
+ if (new->flags & IRQF_TRIGGER_MASK) {
if (desc->chip && desc->chip->set_type)
desc->chip->set_type(irq,
- new->flags & SA_TRIGGER_MASK);
+ new->flags & IRQF_TRIGGER_MASK);
else
/*
- * SA_TRIGGER_* but the PIC does not support
+ * IRQF_TRIGGER_* but the PIC does not support
* multiple flow-types?
*/
- printk(KERN_WARNING "No SA_TRIGGER set_type "
+ printk(KERN_WARNING "No IRQF_TRIGGER set_type "
"function for IRQ %d (%s)\n", irq,
desc->chip ? desc->chip->name :
"unknown");
mismatch:
spin_unlock_irqrestore(&desc->lock, flags);
- if (!(new->flags & SA_PROBEIRQ)) {
+ if (!(new->flags & IRQF_PROBE_SHARED)) {
printk(KERN_ERR "IRQ handler type mismatch for IRQ %d\n", irq);
dump_stack();
}
*
* Flags:
*
- * SA_SHIRQ Interrupt is shared
- * SA_INTERRUPT Disable local interrupts while processing
- * SA_SAMPLE_RANDOM The interrupt can be used for entropy
+ * IRQF_SHARED Interrupt is shared
+ * IRQF_DISABLED Disable local interrupts while processing
+ * IRQF_SAMPLE_RANDOM The interrupt can be used for entropy
*
*/
int request_irq(unsigned int irq,
struct irqaction *action;
int retval;
+#ifdef CONFIG_LOCKDEP
+ /*
+ * Lockdep wants atomic interrupt handlers:
+ */
+	irqflags |= IRQF_DISABLED;
+#endif
/*
* Sanity-check: shared interrupts must pass in a real dev-ID,
* otherwise we'll have trouble later trying to figure out
* which interrupt is which (messes up the interrupt freeing
* logic etc).
*/
- if ((irqflags & SA_SHIRQ) && !dev_id)
+ if ((irqflags & IRQF_SHARED) && !dev_id)
return -EINVAL;
if (irq >= NR_IRQS)
return -EINVAL;
* Already running: If it is shared get the other
* CPU to go looking for our mystery interrupt too
*/
- if (desc->action && (desc->action->flags & SA_SHIRQ))
+ if (desc->action && (desc->action->flags & IRQF_SHARED))
desc->status |= IRQ_PENDING;
spin_unlock(&desc->lock);
continue;
while (action) {
/* Only shared IRQ handlers are safe to call */
- if (action->flags & SA_SHIRQ) {
+ if (action->flags & IRQF_SHARED) {
if (action->handler(i, action->dev_id, regs) ==
IRQ_HANDLED)
ok = 1;
int call_usermodehelper_keys(char *path, char **argv, char **envp,
struct key *session_keyring, int wait)
{
- DECLARE_COMPLETION(done);
+ DECLARE_COMPLETION_ONSTACK(done);
struct subprocess_info sub_info = {
.complete = &done,
.path = path,
--- /dev/null
+/*
+ * kernel/lockdep.c
+ *
+ * Runtime locking correctness validator
+ *
+ * Started by Ingo Molnar:
+ *
+ * Copyright (C) 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
+ *
+ * this code maps all the lock dependencies as they occur in a live kernel
+ * and will warn about the following classes of locking bugs:
+ *
+ * - lock inversion scenarios
+ * - circular lock dependencies
+ * - hardirq/softirq safe/unsafe locking bugs
+ *
+ * Bugs are reported even if the current locking scenario does not cause
+ * any deadlock at this point.
+ *
+ * I.e. if at any time in the past two locks were taken in a different
+ * order - even if it happened in another task, and even if those were
+ * different lock instances (but of the same class) - this code will detect it.
+ *
+ * Thanks to Arjan van de Ven for coming up with the initial idea of
+ * mapping lock dependencies runtime.
+ */
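
/*
 * A minimal sketch of the simplest bug this catches (illustrative
 * only, not code from this file): two code paths that take the same
 * two spinlocks in opposite order are reported as a possible
 * circular dependency once both orders have been observed, whether
 * or not the two paths ever actually raced:
 *
 *	path 1:  spin_lock(&a);  spin_lock(&b);
 *	path 2:  spin_lock(&b);  spin_lock(&a);
 */
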
+#include <linux/mutex.h>
+#include <linux/sched.h>
+#include <linux/delay.h>
+#include <linux/module.h>
+#include <linux/proc_fs.h>
+#include <linux/seq_file.h>
+#include <linux/spinlock.h>
+#include <linux/kallsyms.h>
+#include <linux/interrupt.h>
+#include <linux/stacktrace.h>
+#include <linux/debug_locks.h>
+#include <linux/irqflags.h>
+
+#include <asm/sections.h>
+
+#include "lockdep_internals.h"
+
+/*
+ * hash_lock: protects the lockdep hashes and class/list/hash allocators.
+ *
+ * This is one of the rare exceptions where it's justified
+ * to use a raw spinlock - we really don't want the spinlock
+ * code to recurse back into the lockdep code.
+ */
+static raw_spinlock_t hash_lock = (raw_spinlock_t)__RAW_SPIN_LOCK_UNLOCKED;
+
+static int lockdep_initialized;
+
+unsigned long nr_list_entries;
+static struct lock_list list_entries[MAX_LOCKDEP_ENTRIES];
+
+/*
+ * Allocate a lockdep entry. (assumes hash_lock is held; returns
+ * NULL on failure)
+ */
+static struct lock_list *alloc_list_entry(void)
+{
+ if (nr_list_entries >= MAX_LOCKDEP_ENTRIES) {
+ __raw_spin_unlock(&hash_lock);
+ debug_locks_off();
+ printk("BUG: MAX_LOCKDEP_ENTRIES too low!\n");
+ printk("turning off the locking correctness validator.\n");
+ return NULL;
+ }
+ return list_entries + nr_list_entries++;
+}
+
+/*
+ * All data structures here are protected by the global hash_lock.
+ *
+ * Mutex key structs only get allocated, once during bootup, and never
+ * get freed - this significantly simplifies the debugging code.
+ */
+unsigned long nr_lock_classes;
+static struct lock_class lock_classes[MAX_LOCKDEP_KEYS];
+
+/*
+ * We keep a global list of all lock classes. The list only grows,
+ * never shrinks. The list is only accessed with the lockdep
+ * spinlock held.
+ */
+LIST_HEAD(all_lock_classes);
+
+/*
+ * The lockdep classes are in a hash-table as well, for fast lookup:
+ */
+#define CLASSHASH_BITS (MAX_LOCKDEP_KEYS_BITS - 1)
+#define CLASSHASH_SIZE (1UL << CLASSHASH_BITS)
+#define CLASSHASH_MASK (CLASSHASH_SIZE - 1)
+#define __classhashfn(key) ((((unsigned long)key >> CLASSHASH_BITS) + (unsigned long)key) & CLASSHASH_MASK)
+#define classhashentry(key) (classhash_table + __classhashfn((key)))
+
+static struct list_head classhash_table[CLASSHASH_SIZE];
+
+unsigned long nr_lock_chains;
+static struct lock_chain lock_chains[MAX_LOCKDEP_CHAINS];
+
+/*
+ * We put the lock dependency chains into a hash-table as well, to cache
+ * their existence:
+ */
+#define CHAINHASH_BITS (MAX_LOCKDEP_CHAINS_BITS-1)
+#define CHAINHASH_SIZE (1UL << CHAINHASH_BITS)
+#define CHAINHASH_MASK (CHAINHASH_SIZE - 1)
+#define __chainhashfn(chain) \
+ (((chain >> CHAINHASH_BITS) + chain) & CHAINHASH_MASK)
+#define chainhashentry(chain) (chainhash_table + __chainhashfn((chain)))
+
+static struct list_head chainhash_table[CHAINHASH_SIZE];
+
+/*
+ * The hash key of the lock dependency chains is a hash itself too:
+ * it's a hash of all locks taken up to that lock, including that lock.
+ * It's a 64-bit hash, because it's important for the keys to be
+ * unique.
+ */
+#define iterate_chain_key(key1, key2) \
+ (((key1) << MAX_LOCKDEP_KEYS_BITS/2) ^ \
+ ((key1) >> (64-MAX_LOCKDEP_KEYS_BITS/2)) ^ \
+ (key2))
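
/*
 * A small sketch of how the key above is meant to be built up
 * (hypothetical class ids, not code from this file):
 *
 *	u64 chain_key = 0;
 *
 *	chain_key = iterate_chain_key(chain_key, id_a);
 *	chain_key = iterate_chain_key(chain_key, id_b);
 *
 * Taking the same locks in another order mixes the ids in a
 * different sequence and thus yields a different key, so each
 * distinct lock sequence gets its own entry in chainhash_table.
 */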
+
+void lockdep_off(void)
+{
+ current->lockdep_recursion++;
+}
+
+EXPORT_SYMBOL(lockdep_off);
+
+void lockdep_on(void)
+{
+ current->lockdep_recursion--;
+}
+
+EXPORT_SYMBOL(lockdep_on);
+
+int lockdep_internal(void)
+{
+ return current->lockdep_recursion != 0;
+}
+
+EXPORT_SYMBOL(lockdep_internal);
+
+/*
+ * Debugging switches:
+ */
+
+#define VERBOSE 0
+#ifdef VERBOSE
+# define VERY_VERBOSE 0
+#endif
+
+#if VERBOSE
+# define HARDIRQ_VERBOSE 1
+# define SOFTIRQ_VERBOSE 1
+#else
+# define HARDIRQ_VERBOSE 0
+# define SOFTIRQ_VERBOSE 0
+#endif
+
+#if VERBOSE || HARDIRQ_VERBOSE || SOFTIRQ_VERBOSE
+/*
+ * Quick filtering for interesting events:
+ */
+static int class_filter(struct lock_class *class)
+{
+ if (class->name_version == 1 &&
+ !strcmp(class->name, "&rl->lock"))
+ return 1;
+ if (class->name_version == 1 &&
+ !strcmp(class->name, "&ni->mrec_lock"))
+ return 1;
+ if (class->name_version == 1 &&
+ !strcmp(class->name, "mft_ni_runlist_lock"))
+ return 1;
+ if (class->name_version == 1 &&
+ !strcmp(class->name, "mft_ni_mrec_lock"))
+ return 1;
+ if (class->name_version == 1 &&
+ !strcmp(class->name, "&vol->lcnbmp_lock"))
+ return 1;
+ return 0;
+}
+#endif
+
+static int verbose(struct lock_class *class)
+{
+#if VERBOSE
+ return class_filter(class);
+#endif
+ return 0;
+}
+
+#ifdef CONFIG_TRACE_IRQFLAGS
+
+static int hardirq_verbose(struct lock_class *class)
+{
+#if HARDIRQ_VERBOSE
+ return class_filter(class);
+#endif
+ return 0;
+}
+
+static int softirq_verbose(struct lock_class *class)
+{
+#if SOFTIRQ_VERBOSE
+ return class_filter(class);
+#endif
+ return 0;
+}
+
+#endif
+
+/*
+ * Stack-trace: tightly packed array of stack backtrace
+ * addresses. Protected by the hash_lock.
+ */
+unsigned long nr_stack_trace_entries;
+static unsigned long stack_trace[MAX_STACK_TRACE_ENTRIES];
+
+static int save_trace(struct stack_trace *trace)
+{
+ trace->nr_entries = 0;
+ trace->max_entries = MAX_STACK_TRACE_ENTRIES - nr_stack_trace_entries;
+ trace->entries = stack_trace + nr_stack_trace_entries;
+
+ save_stack_trace(trace, NULL, 0, 3);
+
+ trace->max_entries = trace->nr_entries;
+
+ nr_stack_trace_entries += trace->nr_entries;
+ if (DEBUG_LOCKS_WARN_ON(nr_stack_trace_entries > MAX_STACK_TRACE_ENTRIES))
+ return 0;
+
+ if (nr_stack_trace_entries == MAX_STACK_TRACE_ENTRIES) {
+ __raw_spin_unlock(&hash_lock);
+ if (debug_locks_off()) {
+ printk("BUG: MAX_STACK_TRACE_ENTRIES too low!\n");
+ printk("turning off the locking correctness validator.\n");
+ dump_stack();
+ }
+ return 0;
+ }
+
+ return 1;
+}
+
+unsigned int nr_hardirq_chains;
+unsigned int nr_softirq_chains;
+unsigned int nr_process_chains;
+unsigned int max_lockdep_depth;
+unsigned int max_recursion_depth;
+
+#ifdef CONFIG_DEBUG_LOCKDEP
+/*
+ * We cannot printk in early bootup code; not even early_printk()
+ * may work. So we record any initialization errors and print
+ * them later on, in lockdep_info().
+ */
+static int lockdep_init_error;
+
+/*
+ * Various lockdep statistics:
+ */
+atomic_t chain_lookup_hits;
+atomic_t chain_lookup_misses;
+atomic_t hardirqs_on_events;
+atomic_t hardirqs_off_events;
+atomic_t redundant_hardirqs_on;
+atomic_t redundant_hardirqs_off;
+atomic_t softirqs_on_events;
+atomic_t softirqs_off_events;
+atomic_t redundant_softirqs_on;
+atomic_t redundant_softirqs_off;
+atomic_t nr_unused_locks;
+atomic_t nr_cyclic_checks;
+atomic_t nr_cyclic_check_recursions;
+atomic_t nr_find_usage_forwards_checks;
+atomic_t nr_find_usage_forwards_recursions;
+atomic_t nr_find_usage_backwards_checks;
+atomic_t nr_find_usage_backwards_recursions;
+# define debug_atomic_inc(ptr) atomic_inc(ptr)
+# define debug_atomic_dec(ptr) atomic_dec(ptr)
+# define debug_atomic_read(ptr) atomic_read(ptr)
+#else
+# define debug_atomic_inc(ptr) do { } while (0)
+# define debug_atomic_dec(ptr) do { } while (0)
+# define debug_atomic_read(ptr) 0
+#endif
+
+/*
+ * Locking printouts:
+ */
+
+static const char *usage_str[] =
+{
+ [LOCK_USED] = "initial-use ",
+ [LOCK_USED_IN_HARDIRQ] = "in-hardirq-W",
+ [LOCK_USED_IN_SOFTIRQ] = "in-softirq-W",
+ [LOCK_ENABLED_SOFTIRQS] = "softirq-on-W",
+ [LOCK_ENABLED_HARDIRQS] = "hardirq-on-W",
+ [LOCK_USED_IN_HARDIRQ_READ] = "in-hardirq-R",
+ [LOCK_USED_IN_SOFTIRQ_READ] = "in-softirq-R",
+ [LOCK_ENABLED_SOFTIRQS_READ] = "softirq-on-R",
+ [LOCK_ENABLED_HARDIRQS_READ] = "hardirq-on-R",
+};
+
+const char * __get_key_name(struct lockdep_subclass_key *key, char *str)
+{
+ unsigned long offs, size;
+ char *modname;
+
+ return kallsyms_lookup((unsigned long)key, &size, &offs, &modname, str);
+}
+
+void
+get_usage_chars(struct lock_class *class, char *c1, char *c2, char *c3, char *c4)
+{
+ *c1 = '.', *c2 = '.', *c3 = '.', *c4 = '.';
+
+ if (class->usage_mask & LOCKF_USED_IN_HARDIRQ)
+ *c1 = '+';
+ else
+ if (class->usage_mask & LOCKF_ENABLED_HARDIRQS)
+ *c1 = '-';
+
+ if (class->usage_mask & LOCKF_USED_IN_SOFTIRQ)
+ *c2 = '+';
+ else
+ if (class->usage_mask & LOCKF_ENABLED_SOFTIRQS)
+ *c2 = '-';
+
+ if (class->usage_mask & LOCKF_ENABLED_HARDIRQS_READ)
+ *c3 = '-';
+ if (class->usage_mask & LOCKF_USED_IN_HARDIRQ_READ) {
+ *c3 = '+';
+ if (class->usage_mask & LOCKF_ENABLED_HARDIRQS_READ)
+ *c3 = '?';
+ }
+
+ if (class->usage_mask & LOCKF_ENABLED_SOFTIRQS_READ)
+ *c4 = '-';
+ if (class->usage_mask & LOCKF_USED_IN_SOFTIRQ_READ) {
+ *c4 = '+';
+ if (class->usage_mask & LOCKF_ENABLED_SOFTIRQS_READ)
+ *c4 = '?';
+ }
+}
+
+static void print_lock_name(struct lock_class *class)
+{
+ char str[128], c1, c2, c3, c4;
+ const char *name;
+
+ get_usage_chars(class, &c1, &c2, &c3, &c4);
+
+ name = class->name;
+ if (!name) {
+ name = __get_key_name(class->key, str);
+ printk(" (%s", name);
+ } else {
+ printk(" (%s", name);
+ if (class->name_version > 1)
+ printk("#%d", class->name_version);
+ if (class->subclass)
+ printk("/%d", class->subclass);
+ }
+ printk("){%c%c%c%c}", c1, c2, c3, c4);
+}
+
+static void print_lockdep_cache(struct lockdep_map *lock)
+{
+ const char *name;
+ char str[128];
+
+ name = lock->name;
+ if (!name)
+ name = __get_key_name(lock->key->subkeys, str);
+
+ printk("%s", name);
+}
+
+static void print_lock(struct held_lock *hlock)
+{
+ print_lock_name(hlock->class);
+ printk(", at: ");
+ print_ip_sym(hlock->acquire_ip);
+}
+
+static void lockdep_print_held_locks(struct task_struct *curr)
+{
+ int i, depth = curr->lockdep_depth;
+
+ if (!depth) {
+ printk("no locks held by %s/%d.\n", curr->comm, curr->pid);
+ return;
+ }
+ printk("%d lock%s held by %s/%d:\n",
+ depth, depth > 1 ? "s" : "", curr->comm, curr->pid);
+
+ for (i = 0; i < depth; i++) {
+ printk(" #%d: ", i);
+ print_lock(curr->held_locks + i);
+ }
+}
+/*
+ * Helper to print a nice hierarchy of lock dependencies:
+ */
+static void print_spaces(int nr)
+{
+ int i;
+
+ for (i = 0; i < nr; i++)
+ printk(" ");
+}
+
+static void print_lock_class_header(struct lock_class *class, int depth)
+{
+ int bit;
+
+ print_spaces(depth);
+ printk("->");
+ print_lock_name(class);
+ printk(" ops: %lu", class->ops);
+ printk(" {\n");
+
+ for (bit = 0; bit < LOCK_USAGE_STATES; bit++) {
+ if (class->usage_mask & (1 << bit)) {
+ int len = depth;
+
+ print_spaces(depth);
+ len += printk(" %s", usage_str[bit]);
+ len += printk(" at:\n");
+ print_stack_trace(class->usage_traces + bit, len);
+ }
+ }
+ print_spaces(depth);
+ printk(" }\n");
+
+ print_spaces(depth);
+ printk(" ... key at: ");
+ print_ip_sym((unsigned long)class->key);
+}
+
+/*
+ * printk all lock dependencies starting at <entry>:
+ */
+static void print_lock_dependencies(struct lock_class *class, int depth)
+{
+ struct lock_list *entry;
+
+ if (DEBUG_LOCKS_WARN_ON(depth >= 20))
+ return;
+
+ print_lock_class_header(class, depth);
+
+ list_for_each_entry(entry, &class->locks_after, entry) {
+ DEBUG_LOCKS_WARN_ON(!entry->class);
+ print_lock_dependencies(entry->class, depth + 1);
+
+ print_spaces(depth);
+ printk(" ... acquired at:\n");
+ print_stack_trace(&entry->trace, 2);
+ printk("\n");
+ }
+}
+
+/*
+ * Add a new dependency to the head of the list:
+ */
+static int add_lock_to_list(struct lock_class *class, struct lock_class *this,
+ struct list_head *head, unsigned long ip)
+{
+ struct lock_list *entry;
+ /*
+ * Lock not present yet - get a new dependency struct and
+ * add it to the list:
+ */
+ entry = alloc_list_entry();
+ if (!entry)
+ return 0;
+
+ entry->class = this;
+ save_trace(&entry->trace);
+
+ /*
+	 * Since we never remove from the dependency list, the list can
+	 * be walked locklessly by other CPUs; only the allocation itself
+	 * must be protected by the spinlock. But this also means a new
+	 * entry may be made visible only once the writes to the entry
+	 * itself are visible - hence the RCU op:
+ */
+ list_add_tail_rcu(&entry->entry, head);
+
+ return 1;
+}
+
+/*
+ * Recursive, forwards-direction lock-dependency checking, used for
+ * both noncyclic checking and for hardirq-unsafe/softirq-unsafe
+ * checking.
+ *
+ * (to keep the stackframe of the recursive functions small we
+ * use these global variables, and we also mark various helper
+ * functions as noinline.)
+ */
+static struct held_lock *check_source, *check_target;
+
+/*
+ * Print a dependency chain entry (this is only done when a deadlock
+ * has been detected):
+ */
+static noinline int
+print_circular_bug_entry(struct lock_list *target, unsigned int depth)
+{
+ if (debug_locks_silent)
+ return 0;
+ printk("\n-> #%u", depth);
+ print_lock_name(target->class);
+ printk(":\n");
+ print_stack_trace(&target->trace, 6);
+
+ return 0;
+}
+
+/*
+ * When a circular dependency is detected, print the
+ * header first:
+ */
+static noinline int
+print_circular_bug_header(struct lock_list *entry, unsigned int depth)
+{
+ struct task_struct *curr = current;
+
+ __raw_spin_unlock(&hash_lock);
+ debug_locks_off();
+ if (debug_locks_silent)
+ return 0;
+
+ printk("\n=======================================================\n");
+ printk( "[ INFO: possible circular locking dependency detected ]\n");
+ printk( "-------------------------------------------------------\n");
+ printk("%s/%d is trying to acquire lock:\n",
+ curr->comm, curr->pid);
+ print_lock(check_source);
+ printk("\nbut task is already holding lock:\n");
+ print_lock(check_target);
+ printk("\nwhich lock already depends on the new lock.\n\n");
+ printk("\nthe existing dependency chain (in reverse order) is:\n");
+
+ print_circular_bug_entry(entry, depth);
+
+ return 0;
+}
+
+static noinline int print_circular_bug_tail(void)
+{
+ struct task_struct *curr = current;
+ struct lock_list this;
+
+ if (debug_locks_silent)
+ return 0;
+
+ this.class = check_source->class;
+ save_trace(&this.trace);
+ print_circular_bug_entry(&this, 0);
+
+ printk("\nother info that might help us debug this:\n\n");
+ lockdep_print_held_locks(curr);
+
+ printk("\nstack backtrace:\n");
+ dump_stack();
+
+ return 0;
+}
+
+static noinline int print_infinite_recursion_bug(void)
+{
+ __raw_spin_unlock(&hash_lock);
+ DEBUG_LOCKS_WARN_ON(1);
+
+ return 0;
+}
+
+/*
+ * Prove that the dependency graph starting at <source> cannot
+ * lead to <target>. Print an error and return 0 if it does.
+ */
+static noinline int
+check_noncircular(struct lock_class *source, unsigned int depth)
+{
+ struct lock_list *entry;
+
+ debug_atomic_inc(&nr_cyclic_check_recursions);
+ if (depth > max_recursion_depth)
+ max_recursion_depth = depth;
+ if (depth >= 20)
+ return print_infinite_recursion_bug();
+ /*
+ * Check this lock's dependency list:
+ */
+ list_for_each_entry(entry, &source->locks_after, entry) {
+ if (entry->class == check_target->class)
+ return print_circular_bug_header(entry, depth+1);
+ debug_atomic_inc(&nr_cyclic_checks);
+ if (!check_noncircular(entry->class, depth+1))
+ return print_circular_bug_entry(entry, depth+1);
+ }
+ return 1;
+}
+
+static int very_verbose(struct lock_class *class)
+{
+#if VERY_VERBOSE
+ return class_filter(class);
+#endif
+ return 0;
+}
+#ifdef CONFIG_TRACE_IRQFLAGS
+
+/*
+ * Forwards and backwards subgraph searching, for the purposes of
+ * proving that two subgraphs can be connected by a new dependency
+ * without creating any illegal irq-safe -> irq-unsafe lock dependency.
+ */
+static enum lock_usage_bit find_usage_bit;
+static struct lock_class *forwards_match, *backwards_match;
+
+/*
+ * Find a node in the forwards-direction dependency sub-graph starting
+ * at <source> that matches <find_usage_bit>.
+ *
+ * Return 2 if such a node exists in the subgraph, and put that node
+ * into <forwards_match>.
+ *
+ * Return 1 otherwise and keep <forwards_match> unchanged.
+ * Return 0 on error.
+ */
+static noinline int
+find_usage_forwards(struct lock_class *source, unsigned int depth)
+{
+ struct lock_list *entry;
+ int ret;
+
+ if (depth > max_recursion_depth)
+ max_recursion_depth = depth;
+ if (depth >= 20)
+ return print_infinite_recursion_bug();
+
+ debug_atomic_inc(&nr_find_usage_forwards_checks);
+ if (source->usage_mask & (1 << find_usage_bit)) {
+ forwards_match = source;
+ return 2;
+ }
+
+ /*
+ * Check this lock's dependency list:
+ */
+ list_for_each_entry(entry, &source->locks_after, entry) {
+ debug_atomic_inc(&nr_find_usage_forwards_recursions);
+ ret = find_usage_forwards(entry->class, depth+1);
+ if (ret == 2 || ret == 0)
+ return ret;
+ }
+ return 1;
+}
+
+/*
+ * Find a node in the backwards-direction dependency sub-graph starting
+ * at <source> that matches <find_usage_bit>.
+ *
+ * Return 2 if such a node exists in the subgraph, and put that node
+ * into <backwards_match>.
+ *
+ * Return 1 otherwise and keep <backwards_match> unchanged.
+ * Return 0 on error.
+ */
+static noinline int
+find_usage_backwards(struct lock_class *source, unsigned int depth)
+{
+ struct lock_list *entry;
+ int ret;
+
+ if (depth > max_recursion_depth)
+ max_recursion_depth = depth;
+ if (depth >= 20)
+ return print_infinite_recursion_bug();
+
+ debug_atomic_inc(&nr_find_usage_backwards_checks);
+ if (source->usage_mask & (1 << find_usage_bit)) {
+ backwards_match = source;
+ return 2;
+ }
+
+ /*
+ * Check this lock's dependency list:
+ */
+ list_for_each_entry(entry, &source->locks_before, entry) {
+ debug_atomic_inc(&nr_find_usage_backwards_recursions);
+ ret = find_usage_backwards(entry->class, depth+1);
+ if (ret == 2 || ret == 0)
+ return ret;
+ }
+ return 1;
+}
+
+static int
+print_bad_irq_dependency(struct task_struct *curr,
+ struct held_lock *prev,
+ struct held_lock *next,
+ enum lock_usage_bit bit1,
+ enum lock_usage_bit bit2,
+ const char *irqclass)
+{
+ __raw_spin_unlock(&hash_lock);
+ debug_locks_off();
+ if (debug_locks_silent)
+ return 0;
+
+ printk("\n======================================================\n");
+ printk( "[ INFO: %s-safe -> %s-unsafe lock order detected ]\n",
+ irqclass, irqclass);
+ printk( "------------------------------------------------------\n");
+ printk("%s/%d [HC%u[%lu]:SC%u[%lu]:HE%u:SE%u] is trying to acquire:\n",
+ curr->comm, curr->pid,
+ curr->hardirq_context, hardirq_count() >> HARDIRQ_SHIFT,
+ curr->softirq_context, softirq_count() >> SOFTIRQ_SHIFT,
+ curr->hardirqs_enabled,
+ curr->softirqs_enabled);
+ print_lock(next);
+
+ printk("\nand this task is already holding:\n");
+ print_lock(prev);
+ printk("which would create a new lock dependency:\n");
+ print_lock_name(prev->class);
+ printk(" ->");
+ print_lock_name(next->class);
+ printk("\n");
+
+ printk("\nbut this new dependency connects a %s-irq-safe lock:\n",
+ irqclass);
+ print_lock_name(backwards_match);
+ printk("\n... which became %s-irq-safe at:\n", irqclass);
+
+ print_stack_trace(backwards_match->usage_traces + bit1, 1);
+
+ printk("\nto a %s-irq-unsafe lock:\n", irqclass);
+ print_lock_name(forwards_match);
+ printk("\n... which became %s-irq-unsafe at:\n", irqclass);
+ printk("...");
+
+ print_stack_trace(forwards_match->usage_traces + bit2, 1);
+
+ printk("\nother info that might help us debug this:\n\n");
+ lockdep_print_held_locks(curr);
+
+ printk("\nthe %s-irq-safe lock's dependencies:\n", irqclass);
+ print_lock_dependencies(backwards_match, 0);
+
+ printk("\nthe %s-irq-unsafe lock's dependencies:\n", irqclass);
+ print_lock_dependencies(forwards_match, 0);
+
+ printk("\nstack backtrace:\n");
+ dump_stack();
+
+ return 0;
+}
+
+static int
+check_usage(struct task_struct *curr, struct held_lock *prev,
+ struct held_lock *next, enum lock_usage_bit bit_backwards,
+ enum lock_usage_bit bit_forwards, const char *irqclass)
+{
+ int ret;
+
+ find_usage_bit = bit_backwards;
+ /* fills in <backwards_match> */
+ ret = find_usage_backwards(prev->class, 0);
+ if (!ret || ret == 1)
+ return ret;
+
+ find_usage_bit = bit_forwards;
+ ret = find_usage_forwards(next->class, 0);
+ if (!ret || ret == 1)
+ return ret;
+ /* ret == 2 */
+ return print_bad_irq_dependency(curr, prev, next,
+ bit_backwards, bit_forwards, irqclass);
+}
+
+#endif
+
+static int
+print_deadlock_bug(struct task_struct *curr, struct held_lock *prev,
+ struct held_lock *next)
+{
+ debug_locks_off();
+ __raw_spin_unlock(&hash_lock);
+ if (debug_locks_silent)
+ return 0;
+
+ printk("\n=============================================\n");
+ printk( "[ INFO: possible recursive locking detected ]\n");
+ printk( "---------------------------------------------\n");
+ printk("%s/%d is trying to acquire lock:\n",
+ curr->comm, curr->pid);
+ print_lock(next);
+ printk("\nbut task is already holding lock:\n");
+ print_lock(prev);
+
+ printk("\nother info that might help us debug this:\n");
+ lockdep_print_held_locks(curr);
+
+ printk("\nstack backtrace:\n");
+ dump_stack();
+
+ return 0;
+}
+
+/*
+ * Check whether we are holding such a class already.
+ *
+ * (Note that this has to be done separately, because the graph cannot
+ * detect such classes of deadlocks.)
+ *
+ * Returns: 0 on deadlock detected, 1 on OK, 2 on recursive read
+ */
+static int
+check_deadlock(struct task_struct *curr, struct held_lock *next,
+ struct lockdep_map *next_instance, int read)
+{
+ struct held_lock *prev;
+ int i;
+
+ for (i = 0; i < curr->lockdep_depth; i++) {
+ prev = curr->held_locks + i;
+ if (prev->class != next->class)
+ continue;
+ /*
+ * Allow read-after-read recursion of the same
+ * lock class (i.e. read_lock(lock)+read_lock(lock)):
+ */
+ if ((read == 2) && prev->read)
+ return 2;
+ return print_deadlock_bug(curr, prev, next);
+ }
+ return 1;
+}
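
/*
 * The '2' return above deliberately tolerates the one legal
 * self-nesting pattern - recursive reads of the same class on an
 * rwlock (a sketch; tasklist_lock is used only as a familiar
 * example):
 *
 *	read_lock(&tasklist_lock);
 *	...
 *	read_lock(&tasklist_lock);	- read-after-read: allowed
 *	read_unlock(&tasklist_lock);
 *	read_unlock(&tasklist_lock);
 *
 * Taking the same spinlock or mutex twice, by contrast, trips
 * print_deadlock_bug().
 */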
+
+/*
+ * There was a chain-cache miss, and we are about to add a new dependency
+ * to a previous lock. We recursively validate the following rules:
+ *
+ * - would the adding of the <prev> -> <next> dependency create a
+ * circular dependency in the graph? [== circular deadlock]
+ *
+ * - does the new prev->next dependency connect any hardirq-safe lock
+ * (in the full backwards-subgraph starting at <prev>) with any
+ * hardirq-unsafe lock (in the full forwards-subgraph starting at
+ * <next>)? [== illegal lock inversion with hardirq contexts]
+ *
+ * - does the new prev->next dependency connect any softirq-safe lock
+ * (in the full backwards-subgraph starting at <prev>) with any
+ * softirq-unsafe lock (in the full forwards-subgraph starting at
+ * <next>)? [== illegal lock inversion with softirq contexts]
+ *
+ * any of these scenarios could lead to a deadlock.
+ *
+ * Then if all the validations pass, we add the forwards and backwards
+ * dependency.
+ */
+static int
+check_prev_add(struct task_struct *curr, struct held_lock *prev,
+ struct held_lock *next)
+{
+ struct lock_list *entry;
+ int ret;
+
+ /*
+ * Prove that the new <prev> -> <next> dependency would not
+ * create a circular dependency in the graph. (We do this by
+ * forward-recursing into the graph starting at <next>, and
+ * checking whether we can reach <prev>.)
+ *
+ * We are using global variables to control the recursion, to
+ * keep the stackframe size of the recursive functions low:
+ */
+ check_source = next;
+ check_target = prev;
+ if (!(check_noncircular(next->class, 0)))
+ return print_circular_bug_tail();
+
+#ifdef CONFIG_TRACE_IRQFLAGS
+ /*
+ * Prove that the new dependency does not connect a hardirq-safe
+ * lock with a hardirq-unsafe lock - to achieve this we search
+ * the backwards-subgraph starting at <prev>, and the
+ * forwards-subgraph starting at <next>:
+ */
+ if (!check_usage(curr, prev, next, LOCK_USED_IN_HARDIRQ,
+ LOCK_ENABLED_HARDIRQS, "hard"))
+ return 0;
+
+ /*
+ * Prove that the new dependency does not connect a hardirq-safe-read
+ * lock with a hardirq-unsafe lock - to achieve this we search
+ * the backwards-subgraph starting at <prev>, and the
+ * forwards-subgraph starting at <next>:
+ */
+ if (!check_usage(curr, prev, next, LOCK_USED_IN_HARDIRQ_READ,
+ LOCK_ENABLED_HARDIRQS, "hard-read"))
+ return 0;
+
+ /*
+ * Prove that the new dependency does not connect a softirq-safe
+ * lock with a softirq-unsafe lock - to achieve this we search
+ * the backwards-subgraph starting at <prev>, and the
+ * forwards-subgraph starting at <next>:
+ */
+ if (!check_usage(curr, prev, next, LOCK_USED_IN_SOFTIRQ,
+ LOCK_ENABLED_SOFTIRQS, "soft"))
+ return 0;
+ /*
+ * Prove that the new dependency does not connect a softirq-safe-read
+ * lock with a softirq-unsafe lock - to achieve this we search
+ * the backwards-subgraph starting at <prev>, and the
+ * forwards-subgraph starting at <next>:
+ */
+ if (!check_usage(curr, prev, next, LOCK_USED_IN_SOFTIRQ_READ,
+				LOCK_ENABLED_SOFTIRQS, "soft-read"))
+ return 0;
+#endif
+ /*
+ * For recursive read-locks we do all the dependency checks,
+	 * but we don't store read-triggered dependencies (only
+ * write-triggered dependencies). This ensures that only the
+ * write-side dependencies matter, and that if for example a
+ * write-lock never takes any other locks, then the reads are
+ * equivalent to a NOP.
+ */
+ if (next->read == 2 || prev->read == 2)
+ return 1;
+ /*
+ * Is the <prev> -> <next> dependency already present?
+ *
+ * (this may occur even though this is a new chain: consider
+ * e.g. the L1 -> L2 -> L3 -> L4 and the L5 -> L1 -> L2 -> L3
+ * chains - the second one will be new, but L1 already has
+ * L2 added to its dependency list, due to the first chain.)
+ */
+ list_for_each_entry(entry, &prev->class->locks_after, entry) {
+ if (entry->class == next->class)
+ return 2;
+ }
+
+ /*
+ * Ok, all validations passed, add the new lock
+ * to the previous lock's dependency list:
+ */
+ ret = add_lock_to_list(prev->class, next->class,
+ &prev->class->locks_after, next->acquire_ip);
+ if (!ret)
+ return 0;
+ /*
+	 * Return value of 2 signals 'dependency already added';
+	 * in that case we don't have to add the backlink either.
+ */
+ if (ret == 2)
+ return 2;
+ ret = add_lock_to_list(next->class, prev->class,
+ &next->class->locks_before, next->acquire_ip);
+
+ /*
+ * Debugging printouts:
+ */
+ if (verbose(prev->class) || verbose(next->class)) {
+ __raw_spin_unlock(&hash_lock);
+ printk("\n new dependency: ");
+ print_lock_name(prev->class);
+ printk(" => ");
+ print_lock_name(next->class);
+ printk("\n");
+ dump_stack();
+ __raw_spin_lock(&hash_lock);
+ }
+ return 1;
+}
+
+/*
+ * Add the dependency to all directly-previous locks that are 'relevant'.
+ * The ones that are relevant are (in increasing distance from curr):
+ * all consecutive trylock entries and the final non-trylock entry - or
+ * the end of this context's lock-chain - whichever comes first.
+ */
+static int
+check_prevs_add(struct task_struct *curr, struct held_lock *next)
+{
+ int depth = curr->lockdep_depth;
+ struct held_lock *hlock;
+
+ /*
+ * Debugging checks.
+ *
+ * Depth must not be zero for a non-head lock:
+ */
+ if (!depth)
+ goto out_bug;
+ /*
+ * At least two relevant locks must exist for this
+ * to be a head:
+ */
+ if (curr->held_locks[depth].irq_context !=
+ curr->held_locks[depth-1].irq_context)
+ goto out_bug;
+
+ for (;;) {
+ hlock = curr->held_locks + depth-1;
+ /*
+ * Only non-recursive-read entries get new dependencies
+ * added:
+ */
+ if (hlock->read != 2) {
+ check_prev_add(curr, hlock, next);
+ /*
+ * Stop after the first non-trylock entry,
+ * as non-trylock entries have added their
+ * own direct dependencies already, so this
+ * lock is connected to them indirectly:
+ */
+ if (!hlock->trylock)
+ break;
+ }
+ depth--;
+ /*
+ * End of lock-stack?
+ */
+ if (!depth)
+ break;
+ /*
+ * Stop the search if we cross into another context:
+ */
+ if (curr->held_locks[depth].irq_context !=
+ curr->held_locks[depth-1].irq_context)
+ break;
+ }
+ return 1;
+out_bug:
+ __raw_spin_unlock(&hash_lock);
+ DEBUG_LOCKS_WARN_ON(1);
+
+ return 0;
+}
+
+
+/*
+ * Is this the address of a static object:
+ */
+static int static_obj(void *obj)
+{
+ unsigned long start = (unsigned long) &_stext,
+ end = (unsigned long) &_end,
+ addr = (unsigned long) obj;
+#ifdef CONFIG_SMP
+ int i;
+#endif
+
+ /*
+ * static variable?
+ */
+ if ((addr >= start) && (addr < end))
+ return 1;
+
+#ifdef CONFIG_SMP
+ /*
+ * percpu var?
+ */
+ for_each_possible_cpu(i) {
+ start = (unsigned long) &__per_cpu_start + per_cpu_offset(i);
+ end = (unsigned long) &__per_cpu_end + per_cpu_offset(i);
+
+ if ((addr >= start) && (addr < end))
+ return 1;
+ }
+#endif
+
+ /*
+ * module var?
+ */
+ return is_module_address(addr);
+}
+
+/*
+ * To make lock name printouts unique, we calculate a per-name
+ * class->name_version generation counter:
+ */
+static int count_matching_names(struct lock_class *new_class)
+{
+ struct lock_class *class;
+ int count = 0;
+
+ if (!new_class->name)
+ return 0;
+
+ list_for_each_entry(class, &all_lock_classes, lock_entry) {
+ if (new_class->key - new_class->subclass == class->key)
+ return class->name_version;
+ if (class->name && !strcmp(class->name, new_class->name))
+ count = max(count, class->name_version);
+ }
+
+ return count + 1;
+}
+
+extern void __error_too_big_MAX_LOCKDEP_SUBCLASSES(void);
+
+/*
+ * Register a lock's class in the hash-table, if the class is not present
+ * yet. Otherwise we look it up. We cache the result in the lock object
+ * itself, so the actual hash lookup happens only once per lock object.
+ */
+static inline struct lock_class *
+register_lock_class(struct lockdep_map *lock, unsigned int subclass)
+{
+ struct lockdep_subclass_key *key;
+ struct list_head *hash_head;
+ struct lock_class *class;
+
+#ifdef CONFIG_DEBUG_LOCKDEP
+ /*
+ * If the architecture calls into lockdep before initializing
+ * the hashes then we'll warn about it later. (we cannot printk
+ * right now)
+ */
+ if (unlikely(!lockdep_initialized)) {
+ lockdep_init();
+ lockdep_init_error = 1;
+ }
+#endif
+
+ /*
+ * Static locks do not have their class-keys yet - for them the key
+ * is the lock object itself:
+ */
+ if (unlikely(!lock->key))
+ lock->key = (void *)lock;
+
+ /*
+ * NOTE: the class-key must be unique. For dynamic locks, a static
+ * lock_class_key variable is passed in through the mutex_init()
+ * (or spin_lock_init()) call - which acts as the key. For static
+ * locks we use the lock object itself as the key.
+ */
+ if (sizeof(struct lock_class_key) > sizeof(struct lock_class))
+ __error_too_big_MAX_LOCKDEP_SUBCLASSES();
+
+ key = lock->key->subkeys + subclass;
+
+ hash_head = classhashentry(key);
+
+ /*
+ * We can walk the hash lockfree, because the hash only
+ * grows, and we are careful when adding entries to the end:
+ */
+ list_for_each_entry(class, hash_head, hash_entry)
+ if (class->key == key)
+ goto out_set;
+
+ /*
+ * Debug-check: all keys must be persistent!
+ */
+ if (!static_obj(lock->key)) {
+ debug_locks_off();
+ printk("INFO: trying to register non-static key.\n");
+ printk("the code is fine but needs lockdep annotation.\n");
+ printk("turning off the locking correctness validator.\n");
+ dump_stack();
+
+ return NULL;
+ }
+
+ __raw_spin_lock(&hash_lock);
+ /*
+ * We have to do the hash-walk again, to avoid races
+ * with another CPU:
+ */
+ list_for_each_entry(class, hash_head, hash_entry)
+ if (class->key == key)
+ goto out_unlock_set;
+ /*
+ * Allocate a new key from the static array, and add it to
+ * the hash:
+ */
+ if (nr_lock_classes >= MAX_LOCKDEP_KEYS) {
+ __raw_spin_unlock(&hash_lock);
+ debug_locks_off();
+ printk("BUG: MAX_LOCKDEP_KEYS too low!\n");
+ printk("turning off the locking correctness validator.\n");
+ return NULL;
+ }
+ class = lock_classes + nr_lock_classes++;
+ debug_atomic_inc(&nr_unused_locks);
+ class->key = key;
+ class->name = lock->name;
+ class->subclass = subclass;
+ INIT_LIST_HEAD(&class->lock_entry);
+ INIT_LIST_HEAD(&class->locks_before);
+ INIT_LIST_HEAD(&class->locks_after);
+ class->name_version = count_matching_names(class);
+ /*
+ * We use RCU's safe list-add method to make
+ * parallel walking of the hash-list safe:
+ */
+ list_add_tail_rcu(&class->hash_entry, hash_head);
+
+ if (verbose(class)) {
+ __raw_spin_unlock(&hash_lock);
+ printk("\nnew class %p: %s", class->key, class->name);
+ if (class->name_version > 1)
+ printk("#%d", class->name_version);
+ printk("\n");
+ dump_stack();
+ __raw_spin_lock(&hash_lock);
+ }
+out_unlock_set:
+ __raw_spin_unlock(&hash_lock);
+
+out_set:
+ lock->class[subclass] = class;
+
+ DEBUG_LOCKS_WARN_ON(class->subclass != subclass);
+
+ return class;
+}
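
/*
 * Where the keys come from in practice (a sketch of the assumed
 * caller side, not code from this file): spin_lock_init() and
 * mutex_init() are macros that declare a static lock_class_key at
 * the initialization site, so every lock initialized by one call
 * site shares a class; a lock that needs to be told apart can be
 * re-keyed explicitly, as the hrtimer change in this patch does:
 *
 *	static struct lock_class_key my_key;	- hypothetical key
 *
 *	spin_lock_init(&obj->lock);
 *	lockdep_set_class(&obj->lock, &my_key);
 */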
+
+/*
+ * Look up a dependency chain. If the key is not present yet then
+ * add it and return 1 - in this case the new dependency chain will
+ * be validated by the caller. If the key is already hashed, return 0.
+ */
+static inline int lookup_chain_cache(u64 chain_key)
+{
+ struct list_head *hash_head = chainhashentry(chain_key);
+ struct lock_chain *chain;
+
+ DEBUG_LOCKS_WARN_ON(!irqs_disabled());
+ /*
+ * We can walk it lock-free, because entries only get added
+ * to the hash:
+ */
+ list_for_each_entry(chain, hash_head, entry) {
+ if (chain->chain_key == chain_key) {
+cache_hit:
+ debug_atomic_inc(&chain_lookup_hits);
+ /*
+ * In the debugging case, force redundant checking
+ * by returning 1:
+ */
+#ifdef CONFIG_DEBUG_LOCKDEP
+ __raw_spin_lock(&hash_lock);
+ return 1;
+#endif
+ return 0;
+ }
+ }
+ /*
+ * Allocate a new chain entry from the static array, and add
+ * it to the hash:
+ */
+ __raw_spin_lock(&hash_lock);
+ /*
+ * We have to walk the chain again locked - to avoid duplicates:
+ */
+ list_for_each_entry(chain, hash_head, entry) {
+ if (chain->chain_key == chain_key) {
+ __raw_spin_unlock(&hash_lock);
+ goto cache_hit;
+ }
+ }
+ if (unlikely(nr_lock_chains >= MAX_LOCKDEP_CHAINS)) {
+ __raw_spin_unlock(&hash_lock);
+ debug_locks_off();
+ printk("BUG: MAX_LOCKDEP_CHAINS too low!\n");
+ printk("turning off the locking correctness validator.\n");
+ return 0;
+ }
+ chain = lock_chains + nr_lock_chains++;
+ chain->chain_key = chain_key;
+ list_add_tail_rcu(&chain->entry, hash_head);
+ debug_atomic_inc(&chain_lookup_misses);
+#ifdef CONFIG_TRACE_IRQFLAGS
+ if (current->hardirq_context)
+ nr_hardirq_chains++;
+ else {
+ if (current->softirq_context)
+ nr_softirq_chains++;
+ else
+ nr_process_chains++;
+ }
+#else
+ nr_process_chains++;
+#endif
+
+ return 1;
+}
+
+/*
+ * We are building curr_chain_key incrementally, so double-check
+ * it from scratch, to make sure that it's done correctly:
+ */
+static void check_chain_key(struct task_struct *curr)
+{
+#ifdef CONFIG_DEBUG_LOCKDEP
+ struct held_lock *hlock, *prev_hlock = NULL;
+ unsigned int i, id;
+ u64 chain_key = 0;
+
+ for (i = 0; i < curr->lockdep_depth; i++) {
+ hlock = curr->held_locks + i;
+ if (chain_key != hlock->prev_chain_key) {
+ debug_locks_off();
+ printk("hm#1, depth: %u [%u], %016Lx != %016Lx\n",
+ curr->lockdep_depth, i,
+ (unsigned long long)chain_key,
+ (unsigned long long)hlock->prev_chain_key);
+ WARN_ON(1);
+ return;
+ }
+ id = hlock->class - lock_classes;
+ DEBUG_LOCKS_WARN_ON(id >= MAX_LOCKDEP_KEYS);
+ if (prev_hlock && (prev_hlock->irq_context !=
+ hlock->irq_context))
+ chain_key = 0;
+ chain_key = iterate_chain_key(chain_key, id);
+ prev_hlock = hlock;
+ }
+ if (chain_key != curr->curr_chain_key) {
+ debug_locks_off();
+ printk("hm#2, depth: %u [%u], %016Lx != %016Lx\n",
+ curr->lockdep_depth, i,
+ (unsigned long long)chain_key,
+ (unsigned long long)curr->curr_chain_key);
+ WARN_ON(1);
+ }
+#endif
+}
+
+#ifdef CONFIG_TRACE_IRQFLAGS
+
+/*
+ * print irq inversion bug:
+ */
+static int
+print_irq_inversion_bug(struct task_struct *curr, struct lock_class *other,
+ struct held_lock *this, int forwards,
+ const char *irqclass)
+{
+ __raw_spin_unlock(&hash_lock);
+ debug_locks_off();
+ if (debug_locks_silent)
+ return 0;
+
+ printk("\n=========================================================\n");
+ printk( "[ INFO: possible irq lock inversion dependency detected ]\n");
+ printk( "---------------------------------------------------------\n");
+ printk("%s/%d just changed the state of lock:\n",
+ curr->comm, curr->pid);
+ print_lock(this);
+ if (forwards)
+ printk("but this lock took another, %s-irq-unsafe lock in the past:\n", irqclass);
+ else
+ printk("but this lock was taken by another, %s-irq-safe lock in the past:\n", irqclass);
+ print_lock_name(other);
+ printk("\n\nand interrupts could create inverse lock ordering between them.\n\n");
+
+ printk("\nother info that might help us debug this:\n");
+ lockdep_print_held_locks(curr);
+
+ printk("\nthe first lock's dependencies:\n");
+ print_lock_dependencies(this->class, 0);
+
+ printk("\nthe second lock's dependencies:\n");
+ print_lock_dependencies(other, 0);
+
+ printk("\nstack backtrace:\n");
+ dump_stack();
+
+ return 0;
+}
+
+/*
+ * Prove that in the forwards-direction subgraph starting at <this>
+ * there is no lock matching <mask>:
+ */
+static int
+check_usage_forwards(struct task_struct *curr, struct held_lock *this,
+ enum lock_usage_bit bit, const char *irqclass)
+{
+ int ret;
+
+ find_usage_bit = bit;
+ /* fills in <forwards_match> */
+ ret = find_usage_forwards(this->class, 0);
+ if (!ret || ret == 1)
+ return ret;
+
+ return print_irq_inversion_bug(curr, forwards_match, this, 1, irqclass);
+}
+
+/*
+ * Prove that in the backwards-direction subgraph starting at <this>
+ * there is no lock matching <mask>:
+ */
+static int
+check_usage_backwards(struct task_struct *curr, struct held_lock *this,
+ enum lock_usage_bit bit, const char *irqclass)
+{
+ int ret;
+
+ find_usage_bit = bit;
+ /* fills in <backwards_match> */
+ ret = find_usage_backwards(this->class, 0);
+ if (!ret || ret == 1)
+ return ret;
+
+ return print_irq_inversion_bug(curr, backwards_match, this, 0, irqclass);
+}
+
+static inline void print_irqtrace_events(struct task_struct *curr)
+{
+ printk("irq event stamp: %u\n", curr->irq_events);
+ printk("hardirqs last enabled at (%u): ", curr->hardirq_enable_event);
+ print_ip_sym(curr->hardirq_enable_ip);
+ printk("hardirqs last disabled at (%u): ", curr->hardirq_disable_event);
+ print_ip_sym(curr->hardirq_disable_ip);
+ printk("softirqs last enabled at (%u): ", curr->softirq_enable_event);
+ print_ip_sym(curr->softirq_enable_ip);
+ printk("softirqs last disabled at (%u): ", curr->softirq_disable_event);
+ print_ip_sym(curr->softirq_disable_ip);
+}
+
+#else
+static inline void print_irqtrace_events(struct task_struct *curr)
+{
+}
+#endif
+
+static int
+print_usage_bug(struct task_struct *curr, struct held_lock *this,
+ enum lock_usage_bit prev_bit, enum lock_usage_bit new_bit)
+{
+ __raw_spin_unlock(&hash_lock);
+ debug_locks_off();
+ if (debug_locks_silent)
+ return 0;
+
+ printk("\n=================================\n");
+ printk( "[ INFO: inconsistent lock state ]\n");
+ printk( "---------------------------------\n");
+
+ printk("inconsistent {%s} -> {%s} usage.\n",
+ usage_str[prev_bit], usage_str[new_bit]);
+
+ printk("%s/%d [HC%u[%lu]:SC%u[%lu]:HE%u:SE%u] takes:\n",
+ curr->comm, curr->pid,
+ trace_hardirq_context(curr), hardirq_count() >> HARDIRQ_SHIFT,
+ trace_softirq_context(curr), softirq_count() >> SOFTIRQ_SHIFT,
+ trace_hardirqs_enabled(curr),
+ trace_softirqs_enabled(curr));
+ print_lock(this);
+
+ printk("{%s} state was registered at:\n", usage_str[prev_bit]);
+ print_stack_trace(this->class->usage_traces + prev_bit, 1);
+
+ print_irqtrace_events(curr);
+ printk("\nother info that might help us debug this:\n");
+ lockdep_print_held_locks(curr);
+
+ printk("\nstack backtrace:\n");
+ dump_stack();
+
+ return 0;
+}
+
+/*
+ * Print out an error if an invalid bit is set:
+ */
+static inline int
+valid_state(struct task_struct *curr, struct held_lock *this,
+ enum lock_usage_bit new_bit, enum lock_usage_bit bad_bit)
+{
+ if (unlikely(this->class->usage_mask & (1 << bad_bit)))
+ return print_usage_bug(curr, this, bad_bit, new_bit);
+ return 1;
+}
+
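+/*
+ * When set, read-lock usage is validated (almost) as strictly as
+ * write-lock usage in the irq-safety checks below:
+ */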
+#define STRICT_READ_CHECKS 1
+
+/*
+ * Mark a lock with a usage bit, and validate the state transition:
+ */
+static int mark_lock(struct task_struct *curr, struct held_lock *this,
+ enum lock_usage_bit new_bit, unsigned long ip)
+{
+ unsigned int new_mask = 1 << new_bit, ret = 1;
+
+ /*
+ * If already set then do not dirty the cacheline,
+ * nor do any checks:
+ */
+ if (likely(this->class->usage_mask & new_mask))
+ return 1;
+
+ __raw_spin_lock(&hash_lock);
+ /*
+	 * Make sure we didn't race:
+ */
+ if (unlikely(this->class->usage_mask & new_mask)) {
+ __raw_spin_unlock(&hash_lock);
+ return 1;
+ }
+
+ this->class->usage_mask |= new_mask;
+
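+	/*
+	 * For the irqs-enabled bits, record the callsite where irqs
+	 * were (re)enabled, rather than the lock-acquire site:
+	 */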
+#ifdef CONFIG_TRACE_IRQFLAGS
+ if (new_bit == LOCK_ENABLED_HARDIRQS ||
+ new_bit == LOCK_ENABLED_HARDIRQS_READ)
+ ip = curr->hardirq_enable_ip;
+ else if (new_bit == LOCK_ENABLED_SOFTIRQS ||
+ new_bit == LOCK_ENABLED_SOFTIRQS_READ)
+ ip = curr->softirq_enable_ip;
+#endif
+ if (!save_trace(this->class->usage_traces + new_bit))
+ return 0;
+
+ switch (new_bit) {
+#ifdef CONFIG_TRACE_IRQFLAGS
+ case LOCK_USED_IN_HARDIRQ:
+ if (!valid_state(curr, this, new_bit, LOCK_ENABLED_HARDIRQS))
+ return 0;
+ if (!valid_state(curr, this, new_bit,
+ LOCK_ENABLED_HARDIRQS_READ))
+ return 0;
+ /*
+ * just marked it hardirq-safe, check that this lock
+ * took no hardirq-unsafe lock in the past:
+ */
+ if (!check_usage_forwards(curr, this,
+ LOCK_ENABLED_HARDIRQS, "hard"))
+ return 0;
+#if STRICT_READ_CHECKS
+ /*
+ * just marked it hardirq-safe, check that this lock
+ * took no hardirq-unsafe-read lock in the past:
+ */
+ if (!check_usage_forwards(curr, this,
+ LOCK_ENABLED_HARDIRQS_READ, "hard-read"))
+ return 0;
+#endif
+ if (hardirq_verbose(this->class))
+ ret = 2;
+ break;
+ case LOCK_USED_IN_SOFTIRQ:
+ if (!valid_state(curr, this, new_bit, LOCK_ENABLED_SOFTIRQS))
+ return 0;
+ if (!valid_state(curr, this, new_bit,
+ LOCK_ENABLED_SOFTIRQS_READ))
+ return 0;
+ /*
+ * just marked it softirq-safe, check that this lock
+ * took no softirq-unsafe lock in the past:
+ */
+ if (!check_usage_forwards(curr, this,
+ LOCK_ENABLED_SOFTIRQS, "soft"))
+ return 0;
+#if STRICT_READ_CHECKS
+ /*
+ * just marked it softirq-safe, check that this lock
+ * took no softirq-unsafe-read lock in the past:
+ */
+ if (!check_usage_forwards(curr, this,
+ LOCK_ENABLED_SOFTIRQS_READ, "soft-read"))
+ return 0;
+#endif
+ if (softirq_verbose(this->class))
+ ret = 2;
+ break;
+ case LOCK_USED_IN_HARDIRQ_READ:
+ if (!valid_state(curr, this, new_bit, LOCK_ENABLED_HARDIRQS))
+ return 0;
+ /*
+ * just marked it hardirq-read-safe, check that this lock
+ * took no hardirq-unsafe lock in the past:
+ */
+ if (!check_usage_forwards(curr, this,
+ LOCK_ENABLED_HARDIRQS, "hard"))
+ return 0;
+ if (hardirq_verbose(this->class))
+ ret = 2;
+ break;
+ case LOCK_USED_IN_SOFTIRQ_READ:
+ if (!valid_state(curr, this, new_bit, LOCK_ENABLED_SOFTIRQS))
+ return 0;
+ /*
+ * just marked it softirq-read-safe, check that this lock
+ * took no softirq-unsafe lock in the past:
+ */
+ if (!check_usage_forwards(curr, this,
+ LOCK_ENABLED_SOFTIRQS, "soft"))
+ return 0;
+ if (softirq_verbose(this->class))
+ ret = 2;
+ break;
+ case LOCK_ENABLED_HARDIRQS:
+ if (!valid_state(curr, this, new_bit, LOCK_USED_IN_HARDIRQ))
+ return 0;
+ if (!valid_state(curr, this, new_bit,
+ LOCK_USED_IN_HARDIRQ_READ))
+ return 0;
+ /*
+ * just marked it hardirq-unsafe, check that no hardirq-safe
+ * lock in the system ever took it in the past:
+ */
+ if (!check_usage_backwards(curr, this,
+ LOCK_USED_IN_HARDIRQ, "hard"))
+ return 0;
+#if STRICT_READ_CHECKS
+ /*
+ * just marked it hardirq-unsafe, check that no
+ * hardirq-safe-read lock in the system ever took
+ * it in the past:
+ */
+ if (!check_usage_backwards(curr, this,
+ LOCK_USED_IN_HARDIRQ_READ, "hard-read"))
+ return 0;
+#endif
+ if (hardirq_verbose(this->class))
+ ret = 2;
+ break;
+ case LOCK_ENABLED_SOFTIRQS:
+ if (!valid_state(curr, this, new_bit, LOCK_USED_IN_SOFTIRQ))
+ return 0;
+ if (!valid_state(curr, this, new_bit,
+ LOCK_USED_IN_SOFTIRQ_READ))
+ return 0;
+ /*
+ * just marked it softirq-unsafe, check that no softirq-safe
+ * lock in the system ever took it in the past:
+ */
+ if (!check_usage_backwards(curr, this,
+ LOCK_USED_IN_SOFTIRQ, "soft"))
+ return 0;
+#if STRICT_READ_CHECKS
+ /*
+ * just marked it softirq-unsafe, check that no
+ * softirq-safe-read lock in the system ever took
+ * it in the past:
+ */
+ if (!check_usage_backwards(curr, this,
+ LOCK_USED_IN_SOFTIRQ_READ, "soft-read"))
+ return 0;
+#endif
+ if (softirq_verbose(this->class))
+ ret = 2;
+ break;
+ case LOCK_ENABLED_HARDIRQS_READ:
+ if (!valid_state(curr, this, new_bit, LOCK_USED_IN_HARDIRQ))
+ return 0;
+#if STRICT_READ_CHECKS
+ /*
+ * just marked it hardirq-read-unsafe, check that no
+ * hardirq-safe lock in the system ever took it in the past:
+ */
+ if (!check_usage_backwards(curr, this,
+ LOCK_USED_IN_HARDIRQ, "hard"))
+ return 0;
+#endif
+ if (hardirq_verbose(this->class))
+ ret = 2;
+ break;
+ case LOCK_ENABLED_SOFTIRQS_READ:
+ if (!valid_state(curr, this, new_bit, LOCK_USED_IN_SOFTIRQ))
+ return 0;
+#if STRICT_READ_CHECKS
+ /*
+ * just marked it softirq-read-unsafe, check that no
+ * softirq-safe lock in the system ever took it in the past:
+ */
+ if (!check_usage_backwards(curr, this,
+ LOCK_USED_IN_SOFTIRQ, "soft"))
+ return 0;
+#endif
+ if (softirq_verbose(this->class))
+ ret = 2;
+ break;
+#endif
+ case LOCK_USED:
+ /*
+ * Add it to the global list of classes:
+ */
+ list_add_tail_rcu(&this->class->lock_entry, &all_lock_classes);
+ debug_atomic_dec(&nr_unused_locks);
+ break;
+ default:
+ debug_locks_off();
+ WARN_ON(1);
+ return 0;
+ }
+
+ __raw_spin_unlock(&hash_lock);
+
+ /*
+ * We must printk outside of the hash_lock:
+ */
+ if (ret == 2) {
+ printk("\nmarked lock as {%s}:\n", usage_str[new_bit]);
+ print_lock(this);
+ print_irqtrace_events(curr);
+ dump_stack();
+ }
+
+ return ret;
+}
+
+#ifdef CONFIG_TRACE_IRQFLAGS
+/*
+ * Mark all held locks with a usage bit:
+ */
+static int
+mark_held_locks(struct task_struct *curr, int hardirq, unsigned long ip)
+{
+ enum lock_usage_bit usage_bit;
+ struct held_lock *hlock;
+ int i;
+
+ for (i = 0; i < curr->lockdep_depth; i++) {
+ hlock = curr->held_locks + i;
+
+ if (hardirq) {
+ if (hlock->read)
+ usage_bit = LOCK_ENABLED_HARDIRQS_READ;
+ else
+ usage_bit = LOCK_ENABLED_HARDIRQS;
+ } else {
+ if (hlock->read)
+ usage_bit = LOCK_ENABLED_SOFTIRQS_READ;
+ else
+ usage_bit = LOCK_ENABLED_SOFTIRQS;
+ }
+ if (!mark_lock(curr, hlock, usage_bit, ip))
+ return 0;
+ }
+
+ return 1;
+}
+
+/*
+ * Debugging helper: via this flag we know that we are in
+ * 'early bootup code', and will warn about any invalid irqs-on event:
+ */
+static int early_boot_irqs_enabled;
+
+void early_boot_irqs_off(void)
+{
+ early_boot_irqs_enabled = 0;
+}
+
+void early_boot_irqs_on(void)
+{
+ early_boot_irqs_enabled = 1;
+}
+
+/*
+ * Hardirqs will be enabled:
+ */
+void trace_hardirqs_on(void)
+{
+ struct task_struct *curr = current;
+ unsigned long ip;
+
+ if (unlikely(!debug_locks || current->lockdep_recursion))
+ return;
+
+ if (DEBUG_LOCKS_WARN_ON(unlikely(!early_boot_irqs_enabled)))
+ return;
+
+ if (unlikely(curr->hardirqs_enabled)) {
+ debug_atomic_inc(&redundant_hardirqs_on);
+ return;
+ }
+ /* we'll do an OFF -> ON transition: */
+ curr->hardirqs_enabled = 1;
+ ip = (unsigned long) __builtin_return_address(0);
+
+ if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
+ return;
+ if (DEBUG_LOCKS_WARN_ON(current->hardirq_context))
+ return;
+ /*
+ * We are going to turn hardirqs on, so set the
+ * usage bit for all held locks:
+ */
+ if (!mark_held_locks(curr, 1, ip))
+ return;
+ /*
+ * If we have softirqs enabled, then set the usage
+ * bit for all held locks. (disabled hardirqs prevented
+ * this bit from being set before)
+ */
+ if (curr->softirqs_enabled)
+ if (!mark_held_locks(curr, 0, ip))
+ return;
+
+ curr->hardirq_enable_ip = ip;
+ curr->hardirq_enable_event = ++curr->irq_events;
+ debug_atomic_inc(&hardirqs_on_events);
+}
+
+EXPORT_SYMBOL(trace_hardirqs_on);
+
+/*
+ * Hardirqs were disabled:
+ */
+void trace_hardirqs_off(void)
+{
+ struct task_struct *curr = current;
+
+ if (unlikely(!debug_locks || current->lockdep_recursion))
+ return;
+
+ if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
+ return;
+
+ if (curr->hardirqs_enabled) {
+ /*
+ * We have done an ON -> OFF transition:
+ */
+ curr->hardirqs_enabled = 0;
+ curr->hardirq_disable_ip = _RET_IP_;
+ curr->hardirq_disable_event = ++curr->irq_events;
+ debug_atomic_inc(&hardirqs_off_events);
+ } else
+ debug_atomic_inc(&redundant_hardirqs_off);
+}
+
+EXPORT_SYMBOL(trace_hardirqs_off);
+
+/*
+ * Softirqs will be enabled:
+ */
+void trace_softirqs_on(unsigned long ip)
+{
+ struct task_struct *curr = current;
+
+ if (unlikely(!debug_locks))
+ return;
+
+ if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
+ return;
+
+ if (curr->softirqs_enabled) {
+ debug_atomic_inc(&redundant_softirqs_on);
+ return;
+ }
+
+ /*
+ * We'll do an OFF -> ON transition:
+ */
+ curr->softirqs_enabled = 1;
+ curr->softirq_enable_ip = ip;
+ curr->softirq_enable_event = ++curr->irq_events;
+ debug_atomic_inc(&softirqs_on_events);
+ /*
+ * We are going to turn softirqs on, so set the
+ * usage bit for all held locks, if hardirqs are
+ * enabled too:
+ */
+ if (curr->hardirqs_enabled)
+ mark_held_locks(curr, 0, ip);
+}
+
+/*
+ * Softirqs were disabled:
+ */
+void trace_softirqs_off(unsigned long ip)
+{
+ struct task_struct *curr = current;
+
+ if (unlikely(!debug_locks))
+ return;
+
+ if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
+ return;
+
+ if (curr->softirqs_enabled) {
+ /*
+ * We have done an ON -> OFF transition:
+ */
+ curr->softirqs_enabled = 0;
+ curr->softirq_disable_ip = ip;
+ curr->softirq_disable_event = ++curr->irq_events;
+ debug_atomic_inc(&softirqs_off_events);
+ DEBUG_LOCKS_WARN_ON(!softirq_count());
+ } else
+ debug_atomic_inc(&redundant_softirqs_off);
+}
+
+#endif
+
+/*
+ * Initialize a lock instance's lock-class mapping info:
+ */
+void lockdep_init_map(struct lockdep_map *lock, const char *name,
+ struct lock_class_key *key)
+{
+ if (unlikely(!debug_locks))
+ return;
+
+ if (DEBUG_LOCKS_WARN_ON(!key))
+ return;
+ if (DEBUG_LOCKS_WARN_ON(!name))
+ return;
+ /*
+ * Sanity check, the lock-class key must be persistent:
+ */
+ if (!static_obj(key)) {
+ printk("BUG: key %p not in .data!\n", key);
+ DEBUG_LOCKS_WARN_ON(1);
+ return;
+ }
+ lock->name = name;
+ lock->key = key;
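+	/* forget any cached per-subclass lock classes: */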
+ memset(lock->class, 0, sizeof(lock->class[0])*MAX_LOCKDEP_SUBCLASSES);
+}
+
+EXPORT_SYMBOL_GPL(lockdep_init_map);
+
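+/*
+ * How a locking primitive hooks into the validator (simplified
+ * sketch, not a real in-tree user): embed a struct lockdep_map in
+ * the lock, give each lock-init site a static lock_class_key, and
+ * bracket the acquire/release paths:
+ *
+ *	static struct lock_class_key my_key;
+ *
+ *	my_lock_init(l):
+ *		lockdep_init_map(&l->dep_map, "my_lock", &my_key);
+ *	my_lock(l):
+ *		lock_acquire(&l->dep_map, 0, 0, 0, 2, _RET_IP_);
+ *		<take the actual lock>
+ *	my_unlock(l):
+ *		lock_release(&l->dep_map, 1, _RET_IP_);
+ *		<release the actual lock>
+ */
+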
+/*
+ * This gets called for every mutex_lock*()/spin_lock*() operation.
+ * We maintain the dependency maps and validate the locking attempt:
+ */
+static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
+ int trylock, int read, int check, int hardirqs_off,
+ unsigned long ip)
+{
+ struct task_struct *curr = current;
+ struct held_lock *hlock;
+ struct lock_class *class;
+ unsigned int depth, id;
+ int chain_head = 0;
+ u64 chain_key;
+
+ if (unlikely(!debug_locks))
+ return 0;
+
+ if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
+ return 0;
+
+ if (unlikely(subclass >= MAX_LOCKDEP_SUBCLASSES)) {
+ debug_locks_off();
+ printk("BUG: MAX_LOCKDEP_SUBCLASSES too low!\n");
+ printk("turning off the locking correctness validator.\n");
+ return 0;
+ }
+
+ class = lock->class[subclass];
+ /* not cached yet? */
+ if (unlikely(!class)) {
+ class = register_lock_class(lock, subclass);
+ if (!class)
+ return 0;
+ }
+ debug_atomic_inc((atomic_t *)&class->ops);
+ if (very_verbose(class)) {
+ printk("\nacquire class [%p] %s", class->key, class->name);
+ if (class->name_version > 1)
+ printk("#%d", class->name_version);
+ printk("\n");
+ dump_stack();
+ }
+
+ /*
+ * Add the lock to the list of currently held locks.
+	 * (we don't increase the depth just yet, up until the
+ * dependency checks are done)
+ */
+ depth = curr->lockdep_depth;
+ if (DEBUG_LOCKS_WARN_ON(depth >= MAX_LOCK_DEPTH))
+ return 0;
+
+ hlock = curr->held_locks + depth;
+
+ hlock->class = class;
+ hlock->acquire_ip = ip;
+ hlock->instance = lock;
+ hlock->trylock = trylock;
+ hlock->read = read;
+ hlock->check = check;
+ hlock->hardirqs_off = hardirqs_off;
+
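+	/*
+	 * Only full validation (check == 2) marks the lock's usage
+	 * state; lesser check levels just maintain the held-lock
+	 * stack and the chain hash:
+	 */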
+ if (check != 2)
+ goto out_calc_hash;
+#ifdef CONFIG_TRACE_IRQFLAGS
+ /*
+ * If non-trylock use in a hardirq or softirq context, then
+ * mark the lock as used in these contexts:
+ */
+ if (!trylock) {
+ if (read) {
+ if (curr->hardirq_context)
+ if (!mark_lock(curr, hlock,
+ LOCK_USED_IN_HARDIRQ_READ, ip))
+ return 0;
+ if (curr->softirq_context)
+ if (!mark_lock(curr, hlock,
+ LOCK_USED_IN_SOFTIRQ_READ, ip))
+ return 0;
+ } else {
+ if (curr->hardirq_context)
+ if (!mark_lock(curr, hlock, LOCK_USED_IN_HARDIRQ, ip))
+ return 0;
+ if (curr->softirq_context)
+ if (!mark_lock(curr, hlock, LOCK_USED_IN_SOFTIRQ, ip))
+ return 0;
+ }
+ }
+ if (!hardirqs_off) {
+ if (read) {
+ if (!mark_lock(curr, hlock,
+ LOCK_ENABLED_HARDIRQS_READ, ip))
+ return 0;
+ if (curr->softirqs_enabled)
+ if (!mark_lock(curr, hlock,
+ LOCK_ENABLED_SOFTIRQS_READ, ip))
+ return 0;
+ } else {
+ if (!mark_lock(curr, hlock,
+ LOCK_ENABLED_HARDIRQS, ip))
+ return 0;
+ if (curr->softirqs_enabled)
+ if (!mark_lock(curr, hlock,
+ LOCK_ENABLED_SOFTIRQS, ip))
+ return 0;
+ }
+ }
+#endif
+ /* mark it as used: */
+ if (!mark_lock(curr, hlock, LOCK_USED, ip))
+ return 0;
+out_calc_hash:
+ /*
+	 * Calculate the chain hash: it's the combined hash of all the
+ * lock keys along the dependency chain. We save the hash value
+ * at every step so that we can get the current hash easily
+ * after unlock. The chain hash is then used to cache dependency
+ * results.
+ *
+	 * We drive the hash with the 'key ID' (the class's index in
+	 * lock_classes[]), which is more compact than class->key.
+ */
+ id = class - lock_classes;
+ if (DEBUG_LOCKS_WARN_ON(id >= MAX_LOCKDEP_KEYS))
+ return 0;
+
+ chain_key = curr->curr_chain_key;
+ if (!depth) {
+ if (DEBUG_LOCKS_WARN_ON(chain_key != 0))
+ return 0;
+ chain_head = 1;
+ }
+
+ hlock->prev_chain_key = chain_key;
+
+#ifdef CONFIG_TRACE_IRQFLAGS
+ /*
+ * Keep track of points where we cross into an interrupt context:
+ */
+ hlock->irq_context = 2*(curr->hardirq_context ? 1 : 0) +
+ curr->softirq_context;
+ if (depth) {
+ struct held_lock *prev_hlock;
+
+ prev_hlock = curr->held_locks + depth-1;
+ /*
+ * If we cross into another context, reset the
+ * hash key (this also prevents the checking and the
+ * adding of the dependency to 'prev'):
+ */
+ if (prev_hlock->irq_context != hlock->irq_context) {
+ chain_key = 0;
+ chain_head = 1;
+ }
+ }
+#endif
+ chain_key = iterate_chain_key(chain_key, id);
+ curr->curr_chain_key = chain_key;
+
+ /*
+ * Trylock needs to maintain the stack of held locks, but it
+ * does not add new dependencies, because trylock can be done
+ * in any order.
+ *
+ * We look up the chain_key and do the O(N^2) check and update of
+ * the dependencies only if this is a new dependency chain.
+ * (If lookup_chain_cache() returns with 1 it acquires
+ * hash_lock for us)
+ */
+ if (!trylock && (check == 2) && lookup_chain_cache(chain_key)) {
+ /*
+ * Check whether last held lock:
+ *
+ * - is irq-safe, if this lock is irq-unsafe
+ * - is softirq-safe, if this lock is hardirq-unsafe
+ *
+ * And check whether the new lock's dependency graph
+ * could lead back to the previous lock.
+ *
+	 * Any of these scenarios could lead to a deadlock. If all
+	 * validations pass, we add the new lock to the dependency
+	 * list of the previous lock.
+ */
+ int ret = check_deadlock(curr, hlock, lock, read);
+
+ if (!ret)
+ return 0;
+ /*
+ * Mark recursive read, as we jump over it when
+ * building dependencies (just like we jump over
+ * trylock entries):
+ */
+ if (ret == 2)
+ hlock->read = 2;
+ /*
+ * Add dependency only if this lock is not the head
+ * of the chain, and if it's not a secondary read-lock:
+ */
+ if (!chain_head && ret != 2)
+ if (!check_prevs_add(curr, hlock))
+ return 0;
+ __raw_spin_unlock(&hash_lock);
+ }
+ curr->lockdep_depth++;
+ check_chain_key(curr);
+ if (unlikely(curr->lockdep_depth >= MAX_LOCK_DEPTH)) {
+ debug_locks_off();
+ printk("BUG: MAX_LOCK_DEPTH too low!\n");
+ printk("turning off the locking correctness validator.\n");
+ return 0;
+ }
+ if (unlikely(curr->lockdep_depth > max_lockdep_depth))
+ max_lockdep_depth = curr->lockdep_depth;
+
+ return 1;
+}
+
+static int
+print_unlock_inbalance_bug(struct task_struct *curr, struct lockdep_map *lock,
+ unsigned long ip)
+{
+ if (!debug_locks_off())
+ return 0;
+ if (debug_locks_silent)
+ return 0;
+
+ printk("\n=====================================\n");
+ printk( "[ BUG: bad unlock balance detected! ]\n");
+ printk( "-------------------------------------\n");
+ printk("%s/%d is trying to release lock (",
+ curr->comm, curr->pid);
+ print_lockdep_cache(lock);
+ printk(") at:\n");
+ print_ip_sym(ip);
+ printk("but there are no more locks to release!\n");
+ printk("\nother info that might help us debug this:\n");
+ lockdep_print_held_locks(curr);
+
+ printk("\nstack backtrace:\n");
+ dump_stack();
+
+ return 0;
+}
+
+/*
+ * Common debugging checks for both nested and non-nested unlock:
+ */
+static int check_unlock(struct task_struct *curr, struct lockdep_map *lock,
+ unsigned long ip)
+{
+ if (unlikely(!debug_locks))
+ return 0;
+ if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
+ return 0;
+
+ if (curr->lockdep_depth <= 0)
+ return print_unlock_inbalance_bug(curr, lock, ip);
+
+ return 1;
+}
+
+/*
+ * Remove the lock from the list of currently held locks in a
+ * potentially non-nested (out of order) manner. This is a
+ * relatively rare operation, as all the unlock APIs default
+ * to nested mode (which uses lock_release()):
+ */
+static int
+lock_release_non_nested(struct task_struct *curr,
+ struct lockdep_map *lock, unsigned long ip)
+{
+ struct held_lock *hlock, *prev_hlock;
+ unsigned int depth;
+ int i;
+
+ /*
+ * Check whether the lock exists in the current stack
+ * of held locks:
+ */
+ depth = curr->lockdep_depth;
+ if (DEBUG_LOCKS_WARN_ON(!depth))
+ return 0;
+
+ prev_hlock = NULL;
+ for (i = depth-1; i >= 0; i--) {
+ hlock = curr->held_locks + i;
+ /*
+ * We must not cross into another context:
+ */
+ if (prev_hlock && prev_hlock->irq_context != hlock->irq_context)
+ break;
+ if (hlock->instance == lock)
+ goto found_it;
+ prev_hlock = hlock;
+ }
+ return print_unlock_inbalance_bug(curr, lock, ip);
+
+found_it:
+ /*
+ * We have the right lock to unlock, 'hlock' points to it.
+ * Now we remove it from the stack, and add back the other
+ * entries (if any), recalculating the hash along the way:
+ */
+ curr->lockdep_depth = i;
+ curr->curr_chain_key = hlock->prev_chain_key;
+
+ for (i++; i < depth; i++) {
+ hlock = curr->held_locks + i;
+ if (!__lock_acquire(hlock->instance,
+ hlock->class->subclass, hlock->trylock,
+ hlock->read, hlock->check, hlock->hardirqs_off,
+ hlock->acquire_ip))
+ return 0;
+ }
+
+ if (DEBUG_LOCKS_WARN_ON(curr->lockdep_depth != depth - 1))
+ return 0;
+ return 1;
+}
+
+/*
+ * Remove the lock from the list of currently held locks - this gets
+ * called on mutex_unlock()/spin_unlock*() (or on a failed
+ * mutex_lock_interruptible()). This is done for unlocks that nest
+ * perfectly. (i.e. the current top of the lock-stack is unlocked)
+ */
+static int lock_release_nested(struct task_struct *curr,
+ struct lockdep_map *lock, unsigned long ip)
+{
+ struct held_lock *hlock;
+ unsigned int depth;
+
+ /*
+ * Pop off the top of the lock stack:
+ */
+ depth = curr->lockdep_depth - 1;
+ hlock = curr->held_locks + depth;
+
+ /*
+ * Is the unlock non-nested:
+ */
+ if (hlock->instance != lock)
+ return lock_release_non_nested(curr, lock, ip);
+ curr->lockdep_depth--;
+
+ if (DEBUG_LOCKS_WARN_ON(!depth && (hlock->prev_chain_key != 0)))
+ return 0;
+
+ curr->curr_chain_key = hlock->prev_chain_key;
+
+#ifdef CONFIG_DEBUG_LOCKDEP
+ hlock->prev_chain_key = 0;
+ hlock->class = NULL;
+ hlock->acquire_ip = 0;
+ hlock->irq_context = 0;
+#endif
+ return 1;
+}
+
+/*
+ * Remove the lock from the list of currently held locks - this gets
+ * called on mutex_unlock()/spin_unlock*() (or on a failed
+ * mutex_lock_interruptible()). Both perfectly nested and out-of-order
+ * unlocks are handled here; the nested case is the common one.
+ */
+static void
+__lock_release(struct lockdep_map *lock, int nested, unsigned long ip)
+{
+ struct task_struct *curr = current;
+
+ if (!check_unlock(curr, lock, ip))
+ return;
+
+ if (nested) {
+ if (!lock_release_nested(curr, lock, ip))
+ return;
+ } else {
+ if (!lock_release_non_nested(curr, lock, ip))
+ return;
+ }
+
+ check_chain_key(curr);
+}
+
+/*
+ * Check whether we follow the irq-flags state precisely:
+ */
+static void check_flags(unsigned long flags)
+{
+#if defined(CONFIG_DEBUG_LOCKDEP) && defined(CONFIG_TRACE_IRQFLAGS)
+ if (!debug_locks)
+ return;
+
+ if (irqs_disabled_flags(flags))
+ DEBUG_LOCKS_WARN_ON(current->hardirqs_enabled);
+ else
+ DEBUG_LOCKS_WARN_ON(!current->hardirqs_enabled);
+
+ /*
+	 * We don't accurately track softirq state in e.g.
+ * hardirq contexts (such as on 4KSTACKS), so only
+ * check if not in hardirq contexts:
+ */
+ if (!hardirq_count()) {
+ if (softirq_count())
+ DEBUG_LOCKS_WARN_ON(current->softirqs_enabled);
+ else
+ DEBUG_LOCKS_WARN_ON(!current->softirqs_enabled);
+ }
+
+ if (!debug_locks)
+ print_irqtrace_events(current);
+#endif
+}
+
+/*
+ * We are not always called with irqs disabled - do that here,
+ * and also avoid lockdep recursion:
+ */
+void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
+ int trylock, int read, int check, unsigned long ip)
+{
+ unsigned long flags;
+
+ if (unlikely(current->lockdep_recursion))
+ return;
+
+ raw_local_irq_save(flags);
+ check_flags(flags);
+
+ current->lockdep_recursion = 1;
+ __lock_acquire(lock, subclass, trylock, read, check,
+ irqs_disabled_flags(flags), ip);
+ current->lockdep_recursion = 0;
+ raw_local_irq_restore(flags);
+}
+
+EXPORT_SYMBOL_GPL(lock_acquire);
+
+void lock_release(struct lockdep_map *lock, int nested, unsigned long ip)
+{
+ unsigned long flags;
+
+ if (unlikely(current->lockdep_recursion))
+ return;
+
+ raw_local_irq_save(flags);
+ check_flags(flags);
+ current->lockdep_recursion = 1;
+ __lock_release(lock, nested, ip);
+ current->lockdep_recursion = 0;
+ raw_local_irq_restore(flags);
+}
+
+EXPORT_SYMBOL_GPL(lock_release);
+
+/*
+ * Used by the testsuite, sanitize the validator state
+ * after a simulated failure:
+ */
+
+void lockdep_reset(void)
+{
+ unsigned long flags;
+
+ raw_local_irq_save(flags);
+ current->curr_chain_key = 0;
+ current->lockdep_depth = 0;
+ current->lockdep_recursion = 0;
+ memset(current->held_locks, 0, MAX_LOCK_DEPTH*sizeof(struct held_lock));
+ nr_hardirq_chains = 0;
+ nr_softirq_chains = 0;
+ nr_process_chains = 0;
+ debug_locks = 1;
+ raw_local_irq_restore(flags);
+}
+
+static void zap_class(struct lock_class *class)
+{
+ int i;
+
+ /*
+ * Remove all dependencies this lock is
+ * involved in:
+ */
+ for (i = 0; i < nr_list_entries; i++) {
+ if (list_entries[i].class == class)
+ list_del_rcu(&list_entries[i].entry);
+ }
+ /*
+ * Unhash the class and remove it from the all_lock_classes list:
+ */
+ list_del_rcu(&class->hash_entry);
+ list_del_rcu(&class->lock_entry);
+}
+
+static inline int within(void *addr, void *start, unsigned long size)
+{
+ return addr >= start && addr < start + size;
+}
+
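+/*
+ * Called when a module's memory range is freed: forget all lock
+ * classes whose key was located inside that range.
+ */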
+void lockdep_free_key_range(void *start, unsigned long size)
+{
+ struct lock_class *class, *next;
+ struct list_head *head;
+ unsigned long flags;
+ int i;
+
+ raw_local_irq_save(flags);
+ __raw_spin_lock(&hash_lock);
+
+ /*
+ * Unhash all classes that were created by this module:
+ */
+ for (i = 0; i < CLASSHASH_SIZE; i++) {
+ head = classhash_table + i;
+ if (list_empty(head))
+ continue;
+ list_for_each_entry_safe(class, next, head, hash_entry)
+ if (within(class->key, start, size))
+ zap_class(class);
+ }
+
+ __raw_spin_unlock(&hash_lock);
+ raw_local_irq_restore(flags);
+}
+
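+/*
+ * Forget all lock classes that this particular lock instance has
+ * ever mapped to:
+ */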
+void lockdep_reset_lock(struct lockdep_map *lock)
+{
+ struct lock_class *class, *next, *entry;
+ struct list_head *head;
+ unsigned long flags;
+ int i, j;
+
+ raw_local_irq_save(flags);
+ __raw_spin_lock(&hash_lock);
+
+ /*
+ * Remove all classes this lock has:
+ */
+ for (i = 0; i < CLASSHASH_SIZE; i++) {
+ head = classhash_table + i;
+ if (list_empty(head))
+ continue;
+ list_for_each_entry_safe(class, next, head, hash_entry) {
+ for (j = 0; j < MAX_LOCKDEP_SUBCLASSES; j++) {
+ entry = lock->class[j];
+ if (class == entry) {
+ zap_class(class);
+ lock->class[j] = NULL;
+ break;
+ }
+ }
+ }
+ }
+
+ /*
+ * Debug check: in the end all mapped classes should
+ * be gone.
+ */
+ for (j = 0; j < MAX_LOCKDEP_SUBCLASSES; j++) {
+ entry = lock->class[j];
+ if (!entry)
+ continue;
+ __raw_spin_unlock(&hash_lock);
+ DEBUG_LOCKS_WARN_ON(1);
+ raw_local_irq_restore(flags);
+ return;
+ }
+
+ __raw_spin_unlock(&hash_lock);
+ raw_local_irq_restore(flags);
+}
+
+void __init lockdep_init(void)
+{
+ int i;
+
+ /*
+ * Some architectures have their own start_kernel()
+ * code which calls lockdep_init(), while we also
+	 * call lockdep_init() from start_kernel() itself,
+ * and we want to initialize the hashes only once:
+ */
+ if (lockdep_initialized)
+ return;
+
+ for (i = 0; i < CLASSHASH_SIZE; i++)
+ INIT_LIST_HEAD(classhash_table + i);
+
+ for (i = 0; i < CHAINHASH_SIZE; i++)
+ INIT_LIST_HEAD(chainhash_table + i);
+
+ lockdep_initialized = 1;
+}
+
+void __init lockdep_info(void)
+{
+ printk("Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar\n");
+
+ printk("... MAX_LOCKDEP_SUBCLASSES: %lu\n", MAX_LOCKDEP_SUBCLASSES);
+ printk("... MAX_LOCK_DEPTH: %lu\n", MAX_LOCK_DEPTH);
+ printk("... MAX_LOCKDEP_KEYS: %lu\n", MAX_LOCKDEP_KEYS);
+ printk("... CLASSHASH_SIZE: %lu\n", CLASSHASH_SIZE);
+ printk("... MAX_LOCKDEP_ENTRIES: %lu\n", MAX_LOCKDEP_ENTRIES);
+ printk("... MAX_LOCKDEP_CHAINS: %lu\n", MAX_LOCKDEP_CHAINS);
+ printk("... CHAINHASH_SIZE: %lu\n", CHAINHASH_SIZE);
+
+ printk(" memory used by lock dependency info: %lu kB\n",
+ (sizeof(struct lock_class) * MAX_LOCKDEP_KEYS +
+ sizeof(struct list_head) * CLASSHASH_SIZE +
+ sizeof(struct lock_list) * MAX_LOCKDEP_ENTRIES +
+ sizeof(struct lock_chain) * MAX_LOCKDEP_CHAINS +
+ sizeof(struct list_head) * CHAINHASH_SIZE) / 1024);
+
+ printk(" per task-struct memory footprint: %lu bytes\n",
+ sizeof(struct held_lock) * MAX_LOCK_DEPTH);
+
+#ifdef CONFIG_DEBUG_LOCKDEP
+ if (lockdep_init_error)
+		printk("WARNING: lockdep init error! Arch code didn't call lockdep_init() early enough?\n");
+#endif
+}
+
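+/* is <addr> inside the [start, end] range? (both ends inclusive) */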
+static inline int in_range(const void *start, const void *addr, const void *end)
+{
+ return addr >= start && addr <= end;
+}
+
+static void
+print_freed_lock_bug(struct task_struct *curr, const void *mem_from,
+ const void *mem_to)
+{
+ if (!debug_locks_off())
+ return;
+ if (debug_locks_silent)
+ return;
+
+ printk("\n=========================\n");
+ printk( "[ BUG: held lock freed! ]\n");
+ printk( "-------------------------\n");
+ printk("%s/%d is freeing memory %p-%p, with a lock still held there!\n",
+ curr->comm, curr->pid, mem_from, mem_to-1);
+ lockdep_print_held_locks(curr);
+
+ printk("\nstack backtrace:\n");
+ dump_stack();
+}
+
+/*
+ * Called when kernel memory is freed (or unmapped), or if a lock
+ * is destroyed or reinitialized - this code checks whether there is
+ * any held lock in the memory range of <from> to <to>:
+ */
+void debug_check_no_locks_freed(const void *mem_from, unsigned long mem_len)
+{
+ const void *mem_to = mem_from + mem_len, *lock_from, *lock_to;
+ struct task_struct *curr = current;
+ struct held_lock *hlock;
+ unsigned long flags;
+ int i;
+
+ if (unlikely(!debug_locks))
+ return;
+
+ local_irq_save(flags);
+ for (i = 0; i < curr->lockdep_depth; i++) {
+ hlock = curr->held_locks + i;
+
+ lock_from = (void *)hlock->instance;
+ lock_to = (void *)(hlock->instance + 1);
+
+ if (!in_range(mem_from, lock_from, mem_to) &&
+ !in_range(mem_from, lock_to, mem_to))
+ continue;
+
+ print_freed_lock_bug(curr, mem_from, mem_to);
+ break;
+ }
+ local_irq_restore(flags);
+}
+
+static void print_held_locks_bug(struct task_struct *curr)
+{
+ if (!debug_locks_off())
+ return;
+ if (debug_locks_silent)
+ return;
+
+ printk("\n=====================================\n");
+ printk( "[ BUG: lock held at task exit time! ]\n");
+ printk( "-------------------------------------\n");
+ printk("%s/%d is exiting with locks still held!\n",
+ curr->comm, curr->pid);
+ lockdep_print_held_locks(curr);
+
+ printk("\nstack backtrace:\n");
+ dump_stack();
+}
+
+void debug_check_no_locks_held(struct task_struct *task)
+{
+ if (unlikely(task->lockdep_depth > 0))
+ print_held_locks_bug(task);
+}
+
+void debug_show_all_locks(void)
+{
+ struct task_struct *g, *p;
+ int count = 10;
+ int unlock = 1;
+
+ printk("\nShowing all locks held in the system:\n");
+
+ /*
+	 * Here we try to get the tasklist_lock as hard as possible;
+ * if not successful after 2 seconds we ignore it (but keep
+ * trying). This is to enable a debug printout even if a
+ * tasklist_lock-holding task deadlocks or crashes.
+ */
+retry:
+ if (!read_trylock(&tasklist_lock)) {
+ if (count == 10)
+ printk("hm, tasklist_lock locked, retrying... ");
+ if (count) {
+ count--;
+ printk(" #%d", 10-count);
+ mdelay(200);
+ goto retry;
+ }
+ printk(" ignoring it.\n");
+ unlock = 0;
+ }
+ if (count != 10)
+ printk(" locked it.\n");
+
+ do_each_thread(g, p) {
+ if (p->lockdep_depth)
+ lockdep_print_held_locks(p);
+ if (!unlock)
+ if (read_trylock(&tasklist_lock))
+ unlock = 1;
+ } while_each_thread(g, p);
+
+ printk("\n");
+ printk("=============================================\n\n");
+
+ if (unlock)
+ read_unlock(&tasklist_lock);
+}
+
+EXPORT_SYMBOL_GPL(debug_show_all_locks);
+
+void debug_show_held_locks(struct task_struct *task)
+{
+ lockdep_print_held_locks(task);
+}
+
+EXPORT_SYMBOL_GPL(debug_show_held_locks);
+
--- /dev/null
+/*
+ * kernel/lockdep_internals.h
+ *
+ * Runtime locking correctness validator
+ *
+ * lockdep subsystem internal functions and variables.
+ */
+
+/*
+ * MAX_LOCKDEP_ENTRIES is the maximum number of lock dependencies
+ * we track.
+ *
+ * We use the per-lock dependency maps in two ways: we grow them by
+ * adding every to-be-taken lock to the dependency table of every
+ * currently held lock (if it's not there yet), and we check them for
+ * lock order conflicts and deadlocks.
+ */
+#define MAX_LOCKDEP_ENTRIES 8192UL
+
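+/*
+ * MAX_LOCKDEP_KEYS bounds the number of distinct lock classes; a
+ * class index must fit into MAX_LOCKDEP_KEYS_BITS, as it is used as
+ * the compact 'key ID' that drives the chain hash.
+ */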
+#define MAX_LOCKDEP_KEYS_BITS 11
+#define MAX_LOCKDEP_KEYS (1UL << MAX_LOCKDEP_KEYS_BITS)
+
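+/*
+ * MAX_LOCKDEP_CHAINS bounds the number of distinct dependency chains
+ * we cache (looked up by their chain key):
+ */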
+#define MAX_LOCKDEP_CHAINS_BITS 13
+#define MAX_LOCKDEP_CHAINS (1UL << MAX_LOCKDEP_CHAINS_BITS)
+
+/*
+ * Stack-trace: tightly packed array of stack backtrace
+ * addresses. Protected by the hash_lock.
+ */
+#define MAX_STACK_TRACE_ENTRIES 131072UL
+
+extern struct list_head all_lock_classes;
+
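+/*
+ * Fill in the four irq-usage characters of a class, as printed in
+ * the /proc/lockdep output:
+ */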
+extern void
+get_usage_chars(struct lock_class *class, char *c1, char *c2, char *c3, char *c4);
+
+extern const char * __get_key_name(struct lockdep_subclass_key *key, char *str);
+
+extern unsigned long nr_lock_classes;
+extern unsigned long nr_list_entries;
+extern unsigned long nr_lock_chains;
+extern unsigned long nr_stack_trace_entries;
+
+extern unsigned int nr_hardirq_chains;
+extern unsigned int nr_softirq_chains;
+extern unsigned int nr_process_chains;
+extern unsigned int max_lockdep_depth;
+extern unsigned int max_recursion_depth;
+
+#ifdef CONFIG_DEBUG_LOCKDEP
+/*
+ * Various lockdep statistics:
+ */
+extern atomic_t chain_lookup_hits;
+extern atomic_t chain_lookup_misses;
+extern atomic_t hardirqs_on_events;
+extern atomic_t hardirqs_off_events;
+extern atomic_t redundant_hardirqs_on;
+extern atomic_t redundant_hardirqs_off;
+extern atomic_t softirqs_on_events;
+extern atomic_t softirqs_off_events;
+extern atomic_t redundant_softirqs_on;
+extern atomic_t redundant_softirqs_off;
+extern atomic_t nr_unused_locks;
+extern atomic_t nr_cyclic_checks;
+extern atomic_t nr_cyclic_check_recursions;
+extern atomic_t nr_find_usage_forwards_checks;
+extern atomic_t nr_find_usage_forwards_recursions;
+extern atomic_t nr_find_usage_backwards_checks;
+extern atomic_t nr_find_usage_backwards_recursions;
+# define debug_atomic_inc(ptr) atomic_inc(ptr)
+# define debug_atomic_dec(ptr) atomic_dec(ptr)
+# define debug_atomic_read(ptr) atomic_read(ptr)
+#else
+# define debug_atomic_inc(ptr) do { } while (0)
+# define debug_atomic_dec(ptr) do { } while (0)
+# define debug_atomic_read(ptr) 0
+#endif
--- /dev/null
+/*
+ * kernel/lockdep_proc.c
+ *
+ * Runtime locking correctness validator
+ *
+ * Started by Ingo Molnar:
+ *
+ * Copyright (C) 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
+ *
+ * Code for /proc/lockdep and /proc/lockdep_stats:
+ *
+ */
+#include <linux/sched.h>
+#include <linux/module.h>
+#include <linux/proc_fs.h>
+#include <linux/seq_file.h>
+#include <linux/kallsyms.h>
+#include <linux/debug_locks.h>
+
+#include "lockdep_internals.h"
+
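+/*
+ * /proc/lockdep: iterate over all_lock_classes, one class per line.
+ * The current position is kept in m->private rather than being
+ * derived from *pos:
+ */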
+static void *l_next(struct seq_file *m, void *v, loff_t *pos)
+{
+ struct lock_class *class = v;
+
+ (*pos)++;
+
+ if (class->lock_entry.next != &all_lock_classes)
+ class = list_entry(class->lock_entry.next, struct lock_class,
+ lock_entry);
+ else
+ class = NULL;
+ m->private = class;
+
+ return class;
+}
+
+static void *l_start(struct seq_file *m, loff_t *pos)
+{
+ struct lock_class *class = m->private;
+
+ if (&class->lock_entry == all_lock_classes.next)
+ seq_printf(m, "all lock classes:\n");
+
+ return class;
+}
+
+static void l_stop(struct seq_file *m, void *v)
+{
+}
+
+static unsigned long count_forward_deps(struct lock_class *class)
+{
+ struct lock_list *entry;
+ unsigned long ret = 1;
+
+ /*
+ * Recurse this class's dependency list:
+ */
+ list_for_each_entry(entry, &class->locks_after, entry)
+ ret += count_forward_deps(entry->class);
+
+ return ret;
+}
+
+static unsigned long count_backward_deps(struct lock_class *class)
+{
+ struct lock_list *entry;
+ unsigned long ret = 1;
+
+ /*
+ * Recurse this class's dependency list:
+ */
+ list_for_each_entry(entry, &class->locks_before, entry)
+ ret += count_backward_deps(entry->class);
+
+ return ret;
+}
+
+static int l_show(struct seq_file *m, void *v)
+{
+ unsigned long nr_forward_deps, nr_backward_deps;
+ struct lock_class *class = m->private;
+ char str[128], c1, c2, c3, c4;
+ const char *name;
+
+ seq_printf(m, "%p", class->key);
+#ifdef CONFIG_DEBUG_LOCKDEP
+ seq_printf(m, " OPS:%8ld", class->ops);
+#endif
+ nr_forward_deps = count_forward_deps(class);
+ seq_printf(m, " FD:%5ld", nr_forward_deps);
+
+ nr_backward_deps = count_backward_deps(class);
+ seq_printf(m, " BD:%5ld", nr_backward_deps);
+
+ get_usage_chars(class, &c1, &c2, &c3, &c4);
+ seq_printf(m, " %c%c%c%c", c1, c2, c3, c4);
+
+ name = class->name;
+ if (!name) {
+ name = __get_key_name(class->key, str);
+ seq_printf(m, ": %s", name);
+	} else {
+ seq_printf(m, ": %s", name);
+ if (class->name_version > 1)
+ seq_printf(m, "#%d", class->name_version);
+ if (class->subclass)
+ seq_printf(m, "/%d", class->subclass);
+ }
+ seq_puts(m, "\n");
+
+ return 0;
+}
+
+static struct seq_operations lockdep_ops = {
+ .start = l_start,
+ .next = l_next,
+ .stop = l_stop,
+ .show = l_show,
+};
+
+static int lockdep_open(struct inode *inode, struct file *file)
+{
+ int res = seq_open(file, &lockdep_ops);
+ if (!res) {
+ struct seq_file *m = file->private_data;
+
+ if (!list_empty(&all_lock_classes))
+ m->private = list_entry(all_lock_classes.next,
+ struct lock_class, lock_entry);
+ else
+ m->private = NULL;
+ }
+ return res;
+}
+
+static struct file_operations proc_lockdep_operations = {
+ .open = lockdep_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release,
+};
+
+static void lockdep_stats_debug_show(struct seq_file *m)
+{
+#ifdef CONFIG_DEBUG_LOCKDEP
+ unsigned int hi1 = debug_atomic_read(&hardirqs_on_events),
+ hi2 = debug_atomic_read(&hardirqs_off_events),
+ hr1 = debug_atomic_read(&redundant_hardirqs_on),
+ hr2 = debug_atomic_read(&redundant_hardirqs_off),
+ si1 = debug_atomic_read(&softirqs_on_events),
+ si2 = debug_atomic_read(&softirqs_off_events),
+ sr1 = debug_atomic_read(&redundant_softirqs_on),
+ sr2 = debug_atomic_read(&redundant_softirqs_off);
+
+ seq_printf(m, " chain lookup misses: %11u\n",
+ debug_atomic_read(&chain_lookup_misses));
+ seq_printf(m, " chain lookup hits: %11u\n",
+ debug_atomic_read(&chain_lookup_hits));
+ seq_printf(m, " cyclic checks: %11u\n",
+ debug_atomic_read(&nr_cyclic_checks));
+ seq_printf(m, " cyclic-check recursions: %11u\n",
+ debug_atomic_read(&nr_cyclic_check_recursions));
+ seq_printf(m, " find-mask forwards checks: %11u\n",
+ debug_atomic_read(&nr_find_usage_forwards_checks));
+ seq_printf(m, " find-mask forwards recursions: %11u\n",
+ debug_atomic_read(&nr_find_usage_forwards_recursions));
+ seq_printf(m, " find-mask backwards checks: %11u\n",
+ debug_atomic_read(&nr_find_usage_backwards_checks));
+ seq_printf(m, " find-mask backwards recursions:%11u\n",
+ debug_atomic_read(&nr_find_usage_backwards_recursions));
+
+ seq_printf(m, " hardirq on events: %11u\n", hi1);
+ seq_printf(m, " hardirq off events: %11u\n", hi2);
+ seq_printf(m, " redundant hardirq ons: %11u\n", hr1);
+ seq_printf(m, " redundant hardirq offs: %11u\n", hr2);
+ seq_printf(m, " softirq on events: %11u\n", si1);
+ seq_printf(m, " softirq off events: %11u\n", si2);
+ seq_printf(m, " redundant softirq ons: %11u\n", sr1);
+ seq_printf(m, " redundant softirq offs: %11u\n", sr2);
+#endif
+}
+
+static int lockdep_stats_show(struct seq_file *m, void *v)
+{
+ struct lock_class *class;
+ unsigned long nr_unused = 0, nr_uncategorized = 0,
+ nr_irq_safe = 0, nr_irq_unsafe = 0,
+ nr_softirq_safe = 0, nr_softirq_unsafe = 0,
+ nr_hardirq_safe = 0, nr_hardirq_unsafe = 0,
+ nr_irq_read_safe = 0, nr_irq_read_unsafe = 0,
+ nr_softirq_read_safe = 0, nr_softirq_read_unsafe = 0,
+ nr_hardirq_read_safe = 0, nr_hardirq_read_unsafe = 0,
+ sum_forward_deps = 0, factor = 0;
+
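+	/*
+	 * Categorize every known lock class by its recorded usage bits:
+	 */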
+ list_for_each_entry(class, &all_lock_classes, lock_entry) {
+
+ if (class->usage_mask == 0)
+ nr_unused++;
+ if (class->usage_mask == LOCKF_USED)
+ nr_uncategorized++;
+ if (class->usage_mask & LOCKF_USED_IN_IRQ)
+ nr_irq_safe++;
+ if (class->usage_mask & LOCKF_ENABLED_IRQS)
+ nr_irq_unsafe++;
+ if (class->usage_mask & LOCKF_USED_IN_SOFTIRQ)
+ nr_softirq_safe++;
+ if (class->usage_mask & LOCKF_ENABLED_SOFTIRQS)
+ nr_softirq_unsafe++;
+ if (class->usage_mask & LOCKF_USED_IN_HARDIRQ)
+ nr_hardirq_safe++;
+ if (class->usage_mask & LOCKF_ENABLED_HARDIRQS)
+ nr_hardirq_unsafe++;
+ if (class->usage_mask & LOCKF_USED_IN_IRQ_READ)
+ nr_irq_read_safe++;
+ if (class->usage_mask & LOCKF_ENABLED_IRQS_READ)
+ nr_irq_read_unsafe++;
+ if (class->usage_mask & LOCKF_USED_IN_SOFTIRQ_READ)
+ nr_softirq_read_safe++;
+ if (class->usage_mask & LOCKF_ENABLED_SOFTIRQS_READ)
+ nr_softirq_read_unsafe++;
+ if (class->usage_mask & LOCKF_USED_IN_HARDIRQ_READ)
+ nr_hardirq_read_safe++;
+ if (class->usage_mask & LOCKF_ENABLED_HARDIRQS_READ)
+ nr_hardirq_read_unsafe++;
+
+ sum_forward_deps += count_forward_deps(class);
+ }
+#ifdef CONFIG_DEBUG_LOCKDEP
+ DEBUG_LOCKS_WARN_ON(debug_atomic_read(&nr_unused_locks) != nr_unused);
+#endif
+ seq_printf(m, " lock-classes: %11lu [max: %lu]\n",
+ nr_lock_classes, MAX_LOCKDEP_KEYS);
+ seq_printf(m, " direct dependencies: %11lu [max: %lu]\n",
+ nr_list_entries, MAX_LOCKDEP_ENTRIES);
+ seq_printf(m, " indirect dependencies: %11lu\n",
+ sum_forward_deps);
+
+ /*
+ * Total number of dependencies:
+ *
+ * All irq-safe locks may nest inside irq-unsafe locks,
+ * plus all the other known dependencies:
+ */
+ seq_printf(m, " all direct dependencies: %11lu\n",
+ nr_irq_unsafe * nr_irq_safe +
+ nr_hardirq_unsafe * nr_hardirq_safe +
+ nr_list_entries);
+
+ /*
+ * Estimated factor between direct and indirect
+ * dependencies:
+ */
+ if (nr_list_entries)
+ factor = sum_forward_deps / nr_list_entries;
+
+ seq_printf(m, " dependency chains: %11lu [max: %lu]\n",
+ nr_lock_chains, MAX_LOCKDEP_CHAINS);
+
+#ifdef CONFIG_TRACE_IRQFLAGS
+ seq_printf(m, " in-hardirq chains: %11u\n",
+ nr_hardirq_chains);
+ seq_printf(m, " in-softirq chains: %11u\n",
+ nr_softirq_chains);
+#endif
+ seq_printf(m, " in-process chains: %11u\n",
+ nr_process_chains);
+ seq_printf(m, " stack-trace entries: %11lu [max: %lu]\n",
+ nr_stack_trace_entries, MAX_STACK_TRACE_ENTRIES);
+ seq_printf(m, " combined max dependencies: %11u\n",
+ (nr_hardirq_chains + 1) *
+ (nr_softirq_chains + 1) *
+ (nr_process_chains + 1)
+ );
+ seq_printf(m, " hardirq-safe locks: %11lu\n",
+ nr_hardirq_safe);
+ seq_printf(m, " hardirq-unsafe locks: %11lu\n",
+ nr_hardirq_unsafe);
+ seq_printf(m, " softirq-safe locks: %11lu\n",
+ nr_softirq_safe);
+ seq_printf(m, " softirq-unsafe locks: %11lu\n",
+ nr_softirq_unsafe);
+ seq_printf(m, " irq-safe locks: %11lu\n",
+ nr_irq_safe);
+ seq_printf(m, " irq-unsafe locks: %11lu\n",
+ nr_irq_unsafe);
+
+ seq_printf(m, " hardirq-read-safe locks: %11lu\n",
+ nr_hardirq_read_safe);
+ seq_printf(m, " hardirq-read-unsafe locks: %11lu\n",
+ nr_hardirq_read_unsafe);
+ seq_printf(m, " softirq-read-safe locks: %11lu\n",
+ nr_softirq_read_safe);
+ seq_printf(m, " softirq-read-unsafe locks: %11lu\n",
+ nr_softirq_read_unsafe);
+ seq_printf(m, " irq-read-safe locks: %11lu\n",
+ nr_irq_read_safe);
+ seq_printf(m, " irq-read-unsafe locks: %11lu\n",
+ nr_irq_read_unsafe);
+
+ seq_printf(m, " uncategorized locks: %11lu\n",
+ nr_uncategorized);
+ seq_printf(m, " unused locks: %11lu\n",
+ nr_unused);
+ seq_printf(m, " max locking depth: %11u\n",
+ max_lockdep_depth);
+ seq_printf(m, " max recursion depth: %11u\n",
+ max_recursion_depth);
+ lockdep_stats_debug_show(m);
+ seq_printf(m, " debug_locks: %11u\n",
+ debug_locks);
+
+ return 0;
+}
+
+static int lockdep_stats_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, lockdep_stats_show, NULL);
+}
+
+static struct file_operations proc_lockdep_stats_operations = {
+ .open = lockdep_stats_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release,
+};
+
+static int __init lockdep_proc_init(void)
+{
+ struct proc_dir_entry *entry;
+
+ entry = create_proc_entry("lockdep", S_IRUSR, NULL);
+ if (entry)
+ entry->proc_fops = &proc_lockdep_operations;
+
+ entry = create_proc_entry("lockdep_stats", S_IRUSR, NULL);
+ if (entry)
+ entry->proc_fops = &proc_lockdep_stats_operations;
+
+ return 0;
+}
+
+__initcall(lockdep_proc_init);
+
if (mod->percpu)
percpu_modfree(mod->percpu);
+ /* Free lock-classes: */
+ lockdep_free_key_range(mod->module_core, mod->core_size);
+
/* Finally, free the core (containing the module structure) */
module_free(mod, mod->module_core);
}
return e;
}
+/*
+ * Is this a valid module address?
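+ * (lockdep's static_obj() uses this to validate lock-class keys
+ * located in module memory)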
+ */
+int is_module_address(unsigned long addr)
+{
+ unsigned long flags;
+ struct module *mod;
+
+ spin_lock_irqsave(&modlist_lock, flags);
+
+ list_for_each_entry(mod, &modules, list) {
+ if (within(addr, mod->module_core, mod->core_size)) {
+ spin_unlock_irqrestore(&modlist_lock, flags);
+ return 1;
+ }
+ }
+
+ spin_unlock_irqrestore(&modlist_lock, flags);
+
+ return 0;
+}
+
+
/* Is this a valid kernel address? We don't grab the lock: we are oopsing. */
struct module *__module_text_address(unsigned long addr)
{
#include <linux/spinlock.h>
#include <linux/kallsyms.h>
#include <linux/interrupt.h>
+#include <linux/debug_locks.h>
#include "mutex-debug.h"
-/*
- * We need a global lock when we walk through the multi-process
- * lock tree. Only used in the deadlock-debugging case.
- */
-DEFINE_SPINLOCK(debug_mutex_lock);
-
-/*
- * All locks held by all tasks, in a single global list:
- */
-LIST_HEAD(debug_mutex_held_locks);
-
-/*
- * In the debug case we carry the caller's instruction pointer into
- * other functions, but we dont want the function argument overhead
- * in the nondebug case - hence these macros:
- */
-#define __IP_DECL__ , unsigned long ip
-#define __IP__ , ip
-#define __RET_IP__ , (unsigned long)__builtin_return_address(0)
-
-/*
- * "mutex debugging enabled" flag. We turn it off when we detect
- * the first problem because we dont want to recurse back
- * into the tracing code when doing error printk or
- * executing a BUG():
- */
-int debug_mutex_on = 1;
-
-static void printk_task(struct task_struct *p)
-{
- if (p)
- printk("%16s:%5d [%p, %3d]", p->comm, p->pid, p, p->prio);
- else
- printk("<none>");
-}
-
-static void printk_ti(struct thread_info *ti)
-{
- if (ti)
- printk_task(ti->task);
- else
- printk("<none>");
-}
-
-static void printk_task_short(struct task_struct *p)
-{
- if (p)
- printk("%s/%d [%p, %3d]", p->comm, p->pid, p, p->prio);
- else
- printk("<none>");
-}
-
-static void printk_lock(struct mutex *lock, int print_owner)
-{
- printk(" [%p] {%s}\n", lock, lock->name);
-
- if (print_owner && lock->owner) {
- printk(".. held by: ");
- printk_ti(lock->owner);
- printk("\n");
- }
- if (lock->owner) {
- printk("... acquired at: ");
- print_symbol("%s\n", lock->acquire_ip);
- }
-}
-
-/*
- * printk locks held by a task:
- */
-static void show_task_locks(struct task_struct *p)
-{
- switch (p->state) {
- case TASK_RUNNING: printk("R"); break;
- case TASK_INTERRUPTIBLE: printk("S"); break;
- case TASK_UNINTERRUPTIBLE: printk("D"); break;
- case TASK_STOPPED: printk("T"); break;
- case EXIT_ZOMBIE: printk("Z"); break;
- case EXIT_DEAD: printk("X"); break;
- default: printk("?"); break;
- }
- printk_task(p);
- if (p->blocked_on) {
- struct mutex *lock = p->blocked_on->lock;
-
- printk(" blocked on mutex:");
- printk_lock(lock, 1);
- } else
- printk(" (not blocked on mutex)\n");
-}
-
-/*
- * printk all locks held in the system (if filter == NULL),
- * or all locks belonging to a single task (if filter != NULL):
- */
-void show_held_locks(struct task_struct *filter)
-{
- struct list_head *curr, *cursor = NULL;
- struct mutex *lock;
- struct thread_info *t;
- unsigned long flags;
- int count = 0;
-
- if (filter) {
- printk("------------------------------\n");
- printk("| showing all locks held by: | (");
- printk_task_short(filter);
- printk("):\n");
- printk("------------------------------\n");
- } else {
- printk("---------------------------\n");
- printk("| showing all locks held: |\n");
- printk("---------------------------\n");
- }
-
- /*
- * Play safe and acquire the global trace lock. We
- * cannot printk with that lock held so we iterate
- * very carefully:
- */
-next:
- debug_spin_lock_save(&debug_mutex_lock, flags);
- list_for_each(curr, &debug_mutex_held_locks) {
- if (cursor && curr != cursor)
- continue;
- lock = list_entry(curr, struct mutex, held_list);
- t = lock->owner;
- if (filter && (t != filter->thread_info))
- continue;
- count++;
- cursor = curr->next;
- debug_spin_unlock_restore(&debug_mutex_lock, flags);
-
- printk("\n#%03d: ", count);
- printk_lock(lock, filter ? 0 : 1);
- goto next;
- }
- debug_spin_unlock_restore(&debug_mutex_lock, flags);
- printk("\n");
-}
-
-void mutex_debug_show_all_locks(void)
-{
- struct task_struct *g, *p;
- int count = 10;
- int unlock = 1;
-
- printk("\nShowing all blocking locks in the system:\n");
-
- /*
- * Here we try to get the tasklist_lock as hard as possible,
- * if not successful after 2 seconds we ignore it (but keep
- * trying). This is to enable a debug printout even if a
- * tasklist_lock-holding task deadlocks or crashes.
- */
-retry:
- if (!read_trylock(&tasklist_lock)) {
- if (count == 10)
- printk("hm, tasklist_lock locked, retrying... ");
- if (count) {
- count--;
- printk(" #%d", 10-count);
- mdelay(200);
- goto retry;
- }
- printk(" ignoring it.\n");
- unlock = 0;
- }
- if (count != 10)
- printk(" locked it.\n");
-
- do_each_thread(g, p) {
- show_task_locks(p);
- if (!unlock)
- if (read_trylock(&tasklist_lock))
- unlock = 1;
- } while_each_thread(g, p);
-
- printk("\n");
- show_held_locks(NULL);
- printk("=============================================\n\n");
-
- if (unlock)
- read_unlock(&tasklist_lock);
-}
-
-static void report_deadlock(struct task_struct *task, struct mutex *lock,
- struct mutex *lockblk, unsigned long ip)
-{
- printk("\n%s/%d is trying to acquire this lock:\n",
- current->comm, current->pid);
- printk_lock(lock, 1);
- printk("... trying at: ");
- print_symbol("%s\n", ip);
- show_held_locks(current);
-
- if (lockblk) {
- printk("but %s/%d is deadlocking current task %s/%d!\n\n",
- task->comm, task->pid, current->comm, current->pid);
- printk("\n%s/%d is blocked on this lock:\n",
- task->comm, task->pid);
- printk_lock(lockblk, 1);
-
- show_held_locks(task);
-
- printk("\n%s/%d's [blocked] stackdump:\n\n",
- task->comm, task->pid);
- show_stack(task, NULL);
- }
-
- printk("\n%s/%d's [current] stackdump:\n\n",
- current->comm, current->pid);
- dump_stack();
- mutex_debug_show_all_locks();
- printk("[ turning off deadlock detection. Please report this. ]\n\n");
- local_irq_disable();
-}
-
-/*
- * Recursively check for mutex deadlocks:
- */
-static int check_deadlock(struct mutex *lock, int depth,
- struct thread_info *ti, unsigned long ip)
-{
- struct mutex *lockblk;
- struct task_struct *task;
-
- if (!debug_mutex_on)
- return 0;
-
- ti = lock->owner;
- if (!ti)
- return 0;
-
- task = ti->task;
- lockblk = NULL;
- if (task->blocked_on)
- lockblk = task->blocked_on->lock;
-
- /* Self-deadlock: */
- if (current == task) {
- DEBUG_OFF();
- if (depth)
- return 1;
- printk("\n==========================================\n");
- printk( "[ BUG: lock recursion deadlock detected! |\n");
- printk( "------------------------------------------\n");
- report_deadlock(task, lock, NULL, ip);
- return 0;
- }
-
- /* Ugh, something corrupted the lock data structure? */
- if (depth > 20) {
- DEBUG_OFF();
- printk("\n===========================================\n");
- printk( "[ BUG: infinite lock dependency detected!? |\n");
- printk( "-------------------------------------------\n");
- report_deadlock(task, lock, lockblk, ip);
- return 0;
- }
-
- /* Recursively check for dependencies: */
- if (lockblk && check_deadlock(lockblk, depth+1, ti, ip)) {
- printk("\n============================================\n");
- printk( "[ BUG: circular locking deadlock detected! ]\n");
- printk( "--------------------------------------------\n");
- report_deadlock(task, lock, lockblk, ip);
- return 0;
- }
- return 0;
-}
-
-/*
- * Called when a task exits, this function checks whether the
- * task is holding any locks, and reports the first one if so:
- */
-void mutex_debug_check_no_locks_held(struct task_struct *task)
-{
- struct list_head *curr, *next;
- struct thread_info *t;
- unsigned long flags;
- struct mutex *lock;
-
- if (!debug_mutex_on)
- return;
-
- debug_spin_lock_save(&debug_mutex_lock, flags);
- list_for_each_safe(curr, next, &debug_mutex_held_locks) {
- lock = list_entry(curr, struct mutex, held_list);
- t = lock->owner;
- if (t != task->thread_info)
- continue;
- list_del_init(curr);
- DEBUG_OFF();
- debug_spin_unlock_restore(&debug_mutex_lock, flags);
-
- printk("BUG: %s/%d, lock held at task exit time!\n",
- task->comm, task->pid);
- printk_lock(lock, 1);
- if (lock->owner != task->thread_info)
- printk("exiting task is not even the owner??\n");
- return;
- }
- debug_spin_unlock_restore(&debug_mutex_lock, flags);
-}
-
-/*
- * Called when kernel memory is freed (or unmapped), or if a mutex
- * is destroyed or reinitialized - this code checks whether there is
- * any held lock in the memory range of <from> to <to>:
- */
-void mutex_debug_check_no_locks_freed(const void *from, unsigned long len)
-{
- struct list_head *curr, *next;
- const void *to = from + len;
- unsigned long flags;
- struct mutex *lock;
- void *lock_addr;
-
- if (!debug_mutex_on)
- return;
-
- debug_spin_lock_save(&debug_mutex_lock, flags);
- list_for_each_safe(curr, next, &debug_mutex_held_locks) {
- lock = list_entry(curr, struct mutex, held_list);
- lock_addr = lock;
- if (lock_addr < from || lock_addr >= to)
- continue;
- list_del_init(curr);
- DEBUG_OFF();
- debug_spin_unlock_restore(&debug_mutex_lock, flags);
-
- printk("BUG: %s/%d, active lock [%p(%p-%p)] freed!\n",
- current->comm, current->pid, lock, from, to);
- dump_stack();
- printk_lock(lock, 1);
- if (lock->owner != current_thread_info())
- printk("freeing task is not even the owner??\n");
- return;
- }
- debug_spin_unlock_restore(&debug_mutex_lock, flags);
-}
-
/*
* Must be called with lock->wait_lock held.
*/
-void debug_mutex_set_owner(struct mutex *lock,
- struct thread_info *new_owner __IP_DECL__)
+void debug_mutex_set_owner(struct mutex *lock, struct thread_info *new_owner)
{
lock->owner = new_owner;
- DEBUG_WARN_ON(!list_empty(&lock->held_list));
- if (debug_mutex_on) {
- list_add_tail(&lock->held_list, &debug_mutex_held_locks);
- lock->acquire_ip = ip;
- }
}
-void debug_mutex_init_waiter(struct mutex_waiter *waiter)
+void debug_mutex_lock_common(struct mutex *lock, struct mutex_waiter *waiter)
{
memset(waiter, MUTEX_DEBUG_INIT, sizeof(*waiter));
waiter->magic = waiter;
void debug_mutex_wake_waiter(struct mutex *lock, struct mutex_waiter *waiter)
{
- SMP_DEBUG_WARN_ON(!spin_is_locked(&lock->wait_lock));
- DEBUG_WARN_ON(list_empty(&lock->wait_list));
- DEBUG_WARN_ON(waiter->magic != waiter);
- DEBUG_WARN_ON(list_empty(&waiter->list));
+ SMP_DEBUG_LOCKS_WARN_ON(!spin_is_locked(&lock->wait_lock));
+ DEBUG_LOCKS_WARN_ON(list_empty(&lock->wait_list));
+ DEBUG_LOCKS_WARN_ON(waiter->magic != waiter);
+ DEBUG_LOCKS_WARN_ON(list_empty(&waiter->list));
}
void debug_mutex_free_waiter(struct mutex_waiter *waiter)
{
- DEBUG_WARN_ON(!list_empty(&waiter->list));
+ DEBUG_LOCKS_WARN_ON(!list_empty(&waiter->list));
memset(waiter, MUTEX_DEBUG_FREE, sizeof(*waiter));
}
void debug_mutex_add_waiter(struct mutex *lock, struct mutex_waiter *waiter,
- struct thread_info *ti __IP_DECL__)
+ struct thread_info *ti)
{
- SMP_DEBUG_WARN_ON(!spin_is_locked(&lock->wait_lock));
- check_deadlock(lock, 0, ti, ip);
+ SMP_DEBUG_LOCKS_WARN_ON(!spin_is_locked(&lock->wait_lock));
+
/* Mark the current thread as blocked on the lock: */
ti->task->blocked_on = waiter;
waiter->lock = lock;
void mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
struct thread_info *ti)
{
- DEBUG_WARN_ON(list_empty(&waiter->list));
- DEBUG_WARN_ON(waiter->task != ti->task);
- DEBUG_WARN_ON(ti->task->blocked_on != waiter);
+ DEBUG_LOCKS_WARN_ON(list_empty(&waiter->list));
+ DEBUG_LOCKS_WARN_ON(waiter->task != ti->task);
+ DEBUG_LOCKS_WARN_ON(ti->task->blocked_on != waiter);
ti->task->blocked_on = NULL;
list_del_init(&waiter->list);
void debug_mutex_unlock(struct mutex *lock)
{
- DEBUG_WARN_ON(lock->magic != lock);
- DEBUG_WARN_ON(!lock->wait_list.prev && !lock->wait_list.next);
- DEBUG_WARN_ON(lock->owner != current_thread_info());
- if (debug_mutex_on) {
- DEBUG_WARN_ON(list_empty(&lock->held_list));
- list_del_init(&lock->held_list);
- }
+ DEBUG_LOCKS_WARN_ON(lock->owner != current_thread_info());
+ DEBUG_LOCKS_WARN_ON(lock->magic != lock);
+ DEBUG_LOCKS_WARN_ON(!lock->wait_list.prev && !lock->wait_list.next);
}
-void debug_mutex_init(struct mutex *lock, const char *name)
+void debug_mutex_init(struct mutex *lock, const char *name,
+ struct lock_class_key *key)
{
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
/*
* Make sure we are not reinitializing a held lock:
*/
- mutex_debug_check_no_locks_freed((void *)lock, sizeof(*lock));
+ debug_check_no_locks_freed((void *)lock, sizeof(*lock));
+ lockdep_init_map(&lock->dep_map, name, key);
+#endif
lock->owner = NULL;
- INIT_LIST_HEAD(&lock->held_list);
- lock->name = name;
lock->magic = lock;
}
*/
void fastcall mutex_destroy(struct mutex *lock)
{
- DEBUG_WARN_ON(mutex_is_locked(lock));
+ DEBUG_LOCKS_WARN_ON(mutex_is_locked(lock));
lock->magic = NULL;
}
* More details are in kernel/mutex-debug.c.
*/
-extern spinlock_t debug_mutex_lock;
-extern struct list_head debug_mutex_held_locks;
-extern int debug_mutex_on;
-
-/*
- * In the debug case we carry the caller's instruction pointer into
- * other functions, but we dont want the function argument overhead
- * in the nondebug case - hence these macros:
- */
-#define __IP_DECL__ , unsigned long ip
-#define __IP__ , ip
-#define __RET_IP__ , (unsigned long)__builtin_return_address(0)
-
/*
* This must be called with lock->wait_lock held.
*/
-extern void debug_mutex_set_owner(struct mutex *lock,
- struct thread_info *new_owner __IP_DECL__);
+extern void
+debug_mutex_set_owner(struct mutex *lock, struct thread_info *new_owner);
static inline void debug_mutex_clear_owner(struct mutex *lock)
{
lock->owner = NULL;
}
-extern void debug_mutex_init_waiter(struct mutex_waiter *waiter);
+extern void debug_mutex_lock_common(struct mutex *lock,
+ struct mutex_waiter *waiter);
extern void debug_mutex_wake_waiter(struct mutex *lock,
struct mutex_waiter *waiter);
extern void debug_mutex_free_waiter(struct mutex_waiter *waiter);
extern void debug_mutex_add_waiter(struct mutex *lock,
struct mutex_waiter *waiter,
- struct thread_info *ti __IP_DECL__);
+ struct thread_info *ti);
extern void mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
struct thread_info *ti);
extern void debug_mutex_unlock(struct mutex *lock);
-extern void debug_mutex_init(struct mutex *lock, const char *name);
-
-#define debug_spin_lock_save(lock, flags) \
- do { \
- local_irq_save(flags); \
- if (debug_mutex_on) \
- spin_lock(lock); \
- } while (0)
-
-#define debug_spin_unlock_restore(lock, flags) \
- do { \
- if (debug_mutex_on) \
- spin_unlock(lock); \
- local_irq_restore(flags); \
- preempt_check_resched(); \
- } while (0)
+extern void debug_mutex_init(struct mutex *lock, const char *name,
+ struct lock_class_key *key);
#define spin_lock_mutex(lock, flags) \
do { \
struct mutex *l = container_of(lock, struct mutex, wait_lock); \
\
- DEBUG_WARN_ON(in_interrupt()); \
- debug_spin_lock_save(&debug_mutex_lock, flags); \
- spin_lock(lock); \
- DEBUG_WARN_ON(l->magic != l); \
+ DEBUG_LOCKS_WARN_ON(in_interrupt()); \
+ local_irq_save(flags); \
+ __raw_spin_lock(&(lock)->raw_lock); \
+ DEBUG_LOCKS_WARN_ON(l->magic != l); \
} while (0)
#define spin_unlock_mutex(lock, flags) \
do { \
- spin_unlock(lock); \
- debug_spin_unlock_restore(&debug_mutex_lock, flags); \
+ __raw_spin_unlock(&(lock)->raw_lock); \
+ local_irq_restore(flags); \
+ preempt_check_resched(); \
} while (0)
-
-#define DEBUG_OFF() \
-do { \
- if (debug_mutex_on) { \
- debug_mutex_on = 0; \
- console_verbose(); \
- if (spin_is_locked(&debug_mutex_lock)) \
- spin_unlock(&debug_mutex_lock); \
- } \
-} while (0)
-
-#define DEBUG_BUG() \
-do { \
- if (debug_mutex_on) { \
- DEBUG_OFF(); \
- BUG(); \
- } \
-} while (0)
-
-#define DEBUG_WARN_ON(c) \
-do { \
- if (unlikely(c && debug_mutex_on)) { \
- DEBUG_OFF(); \
- WARN_ON(1); \
- } \
-} while (0)
-
-# define DEBUG_BUG_ON(c) \
-do { \
- if (unlikely(c)) \
- DEBUG_BUG(); \
-} while (0)
-
-#ifdef CONFIG_SMP
-# define SMP_DEBUG_WARN_ON(c) DEBUG_WARN_ON(c)
-# define SMP_DEBUG_BUG_ON(c) DEBUG_BUG_ON(c)
-#else
-# define SMP_DEBUG_WARN_ON(c) do { } while (0)
-# define SMP_DEBUG_BUG_ON(c) do { } while (0)
-#endif
-
#include <linux/module.h>
#include <linux/spinlock.h>
#include <linux/interrupt.h>
+#include <linux/debug_locks.h>
/*
* In the DEBUG case we are using the "NULL fastpath" for mutexes,
*
* It is not allowed to initialize an already locked mutex.
*/
-void fastcall __mutex_init(struct mutex *lock, const char *name)
+void
+__mutex_init(struct mutex *lock, const char *name, struct lock_class_key *key)
{
atomic_set(&lock->count, 1);
spin_lock_init(&lock->wait_lock);
INIT_LIST_HEAD(&lock->wait_list);
- debug_mutex_init(lock, name);
+ debug_mutex_init(lock, name, key);
}
EXPORT_SYMBOL(__mutex_init);
* branch is predicted by the CPU as default-untaken.
*/
static void fastcall noinline __sched
-__mutex_lock_slowpath(atomic_t *lock_count __IP_DECL__);
+__mutex_lock_slowpath(atomic_t *lock_count);
/***
* mutex_lock - acquire the mutex
*
* This function is similar to (but not equivalent to) down().
*/
-void fastcall __sched mutex_lock(struct mutex *lock)
+void inline fastcall __sched mutex_lock(struct mutex *lock)
{
might_sleep();
/*
EXPORT_SYMBOL(mutex_lock);
static void fastcall noinline __sched
-__mutex_unlock_slowpath(atomic_t *lock_count __IP_DECL__);
+__mutex_unlock_slowpath(atomic_t *lock_count);
/***
* mutex_unlock - release the mutex
* Lock a mutex (possibly interruptible), slowpath:
*/
static inline int __sched
-__mutex_lock_common(struct mutex *lock, long state __IP_DECL__)
+__mutex_lock_common(struct mutex *lock, long state, unsigned int subclass)
{
struct task_struct *task = current;
struct mutex_waiter waiter;
unsigned int old_val;
unsigned long flags;
- debug_mutex_init_waiter(&waiter);
-
spin_lock_mutex(&lock->wait_lock, flags);
- debug_mutex_add_waiter(lock, &waiter, task->thread_info, ip);
+ debug_mutex_lock_common(lock, &waiter);
+ mutex_acquire(&lock->dep_map, subclass, 0, _RET_IP_);
+ debug_mutex_add_waiter(lock, &waiter, task->thread_info);
/* add waiting tasks to the end of the waitqueue (FIFO): */
list_add_tail(&waiter.list, &lock->wait_list);
if (unlikely(state == TASK_INTERRUPTIBLE &&
signal_pending(task))) {
mutex_remove_waiter(lock, &waiter, task->thread_info);
+ mutex_release(&lock->dep_map, 1, _RET_IP_);
spin_unlock_mutex(&lock->wait_lock, flags);
debug_mutex_free_waiter(&waiter);
/* got the lock - rejoice! */
mutex_remove_waiter(lock, &waiter, task->thread_info);
- debug_mutex_set_owner(lock, task->thread_info __IP__);
+ debug_mutex_set_owner(lock, task->thread_info);
/* set it to 0 if there are no waiters left: */
if (likely(list_empty(&lock->wait_list)))
debug_mutex_free_waiter(&waiter);
- DEBUG_WARN_ON(list_empty(&lock->held_list));
- DEBUG_WARN_ON(lock->owner != task->thread_info);
-
return 0;
}
static void fastcall noinline __sched
-__mutex_lock_slowpath(atomic_t *lock_count __IP_DECL__)
+__mutex_lock_slowpath(atomic_t *lock_count)
{
struct mutex *lock = container_of(lock_count, struct mutex, count);
- __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE __IP__);
+ __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 0);
+}
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+void __sched
+mutex_lock_nested(struct mutex *lock, unsigned int subclass)
+{
+ might_sleep();
+ __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, subclass);
}
+EXPORT_SYMBOL_GPL(mutex_lock_nested);
+#endif
+
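
The hunk above adds mutex_lock_nested(), which takes a lockdep subclass so that
two mutexes belonging to the same lock class can legitimately nest without a
false-positive deadlock report. A minimal illustrative sketch of a caller, not
part of this patch: the demo_node structure and demo_link() are hypothetical,
and SINGLE_DEPTH_NESTING is assumed to be provided by the lockdep headers
(falling back to plain mutex_lock() when CONFIG_DEBUG_LOCK_ALLOC is off).

#include <linux/mutex.h>

struct demo_node {
	struct mutex lock;
};

static void demo_link(struct demo_node *parent, struct demo_node *child)
{
	mutex_lock(&parent->lock);
	/* child->lock is in the same lock class as parent->lock,
	 * so annotate it as subclass 1 for lockdep: */
	mutex_lock_nested(&child->lock, SINGLE_DEPTH_NESTING);

	/* ... link child under parent ... */

	mutex_unlock(&child->lock);
	mutex_unlock(&parent->lock);
}
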
/*
* Release the lock, slowpath:
*/
-static fastcall noinline void
-__mutex_unlock_slowpath(atomic_t *lock_count __IP_DECL__)
+static fastcall inline void
+__mutex_unlock_common_slowpath(atomic_t *lock_count, int nested)
{
struct mutex *lock = container_of(lock_count, struct mutex, count);
unsigned long flags;
- DEBUG_WARN_ON(lock->owner != current_thread_info());
-
spin_lock_mutex(&lock->wait_lock, flags);
+ mutex_release(&lock->dep_map, nested, _RET_IP_);
+ debug_mutex_unlock(lock);
/*
* some architectures leave the lock unlocked in the fastpath failure
if (__mutex_slowpath_needs_to_unlock())
atomic_set(&lock->count, 1);
- debug_mutex_unlock(lock);
-
if (!list_empty(&lock->wait_list)) {
/* get the first entry from the wait-list: */
struct mutex_waiter *waiter =
spin_unlock_mutex(&lock->wait_lock, flags);
}
+/*
+ * Release the lock, slowpath:
+ */
+static fastcall noinline void
+__mutex_unlock_slowpath(atomic_t *lock_count)
+{
+ __mutex_unlock_common_slowpath(lock_count, 1);
+}
+
/*
* Here come the less common (and hence less performance-critical) APIs:
* mutex_lock_interruptible() and mutex_trylock().
*/
static int fastcall noinline __sched
-__mutex_lock_interruptible_slowpath(atomic_t *lock_count __IP_DECL__);
+__mutex_lock_interruptible_slowpath(atomic_t *lock_count);
/***
 * mutex_lock_interruptible - acquire the mutex, interruptible
EXPORT_SYMBOL(mutex_lock_interruptible);
static int fastcall noinline __sched
-__mutex_lock_interruptible_slowpath(atomic_t *lock_count __IP_DECL__)
+__mutex_lock_interruptible_slowpath(atomic_t *lock_count)
{
struct mutex *lock = container_of(lock_count, struct mutex, count);
- return __mutex_lock_common(lock, TASK_INTERRUPTIBLE __IP__);
+ return __mutex_lock_common(lock, TASK_INTERRUPTIBLE, 0);
}
/*
spin_lock_mutex(&lock->wait_lock, flags);
prev = atomic_xchg(&lock->count, -1);
- if (likely(prev == 1))
- debug_mutex_set_owner(lock, current_thread_info() __RET_IP__);
+ if (likely(prev == 1)) {
+ debug_mutex_set_owner(lock, current_thread_info());
+ mutex_acquire(&lock->dep_map, 0, 1, _RET_IP_);
+ }
/* Set it back to 0 if there are no waiters: */
if (likely(list_empty(&lock->wait_list)))
atomic_set(&lock->count, 0);
* This function must not be used in interrupt context. The
* mutex must be released by the same task that acquired it.
*/
-int fastcall mutex_trylock(struct mutex *lock)
+int fastcall __sched mutex_trylock(struct mutex *lock)
{
return __mutex_fastpath_trylock(&lock->count,
__mutex_trylock_slowpath);
#define mutex_remove_waiter(lock, waiter, ti) \
__list_del((waiter)->list.prev, (waiter)->list.next)
-#define DEBUG_WARN_ON(c) do { } while (0)
#define debug_mutex_set_owner(lock, new_owner) do { } while (0)
#define debug_mutex_clear_owner(lock) do { } while (0)
-#define debug_mutex_init_waiter(waiter) do { } while (0)
#define debug_mutex_wake_waiter(lock, waiter) do { } while (0)
#define debug_mutex_free_waiter(waiter) do { } while (0)
-#define debug_mutex_add_waiter(lock, waiter, ti, ip) do { } while (0)
+#define debug_mutex_add_waiter(lock, waiter, ti) do { } while (0)
#define debug_mutex_unlock(lock) do { } while (0)
-#define debug_mutex_init(lock, name) do { } while (0)
-
-/*
- * Return-address parameters/declarations. They are very useful for
- * debugging, but add overhead in the !DEBUG case - so we go the
- * trouble of using this not too elegant but zero-cost solution:
- */
-#define __IP_DECL__
-#define __IP__
-#define __RET_IP__
+#define debug_mutex_init(lock, name, key) do { } while (0)
+static inline void
+debug_mutex_lock_common(struct mutex *lock, struct mutex_waiter *waiter)
+{
+}
return NULL;
}
-int fastcall attach_pid(task_t *task, enum pid_type type, int nr)
+int fastcall attach_pid(struct task_struct *task, enum pid_type type, int nr)
{
struct pid_link *link;
struct pid *pid;
return 0;
}
-void fastcall detach_pid(task_t *task, enum pid_type type)
+void fastcall detach_pid(struct task_struct *task, enum pid_type type)
{
struct pid_link *link;
struct pid *pid;
/*
* Must be called under rcu_read_lock() or with tasklist_lock read-held.
*/
-task_t *find_task_by_pid_type(int type, int nr)
+struct task_struct *find_task_by_pid_type(int type, int nr)
{
return pid_task(find_pid(nr), type);
}
zap_locks();
/* This stops the holder of console_sem just where we want him */
- spin_lock_irqsave(&logbuf_lock, flags);
+ local_irq_save(flags);
+ lockdep_off();
+ spin_lock(&logbuf_lock);
printk_cpu = smp_processor_id();
/* Emit the output into the temporary buffer */
*/
console_locked = 1;
printk_cpu = UINT_MAX;
- spin_unlock_irqrestore(&logbuf_lock, flags);
+ spin_unlock(&logbuf_lock);
/*
* Console drivers may assume that per-cpu resources have
console_locked = 0;
up(&console_sem);
}
+ lockdep_on();
+ local_irq_restore(flags);
} else {
/*
* Someone else owns the drivers. We drop the spinlock, which
* console drivers with the output which we just produced.
*/
printk_cpu = UINT_MAX;
- spin_unlock_irqrestore(&logbuf_lock, flags);
+ spin_unlock(&logbuf_lock);
+ lockdep_on();
+ local_irq_restore(flags);
}
preempt_enable();
console_may_schedule = 0;
up(&console_sem);
spin_unlock_irqrestore(&logbuf_lock, flags);
- if (wake_klogd && !oops_in_progress && waitqueue_active(&log_wait))
- wake_up_interruptible(&log_wait);
+ if (wake_klogd && !oops_in_progress && waitqueue_active(&log_wait)) {
+ /*
+ * If we printk from within the lock dependency code,
+ * from within the scheduler code, then do not lock
+ * up due to self-recursion:
+ */
+ if (!lockdep_internal())
+ wake_up_interruptible(&log_wait);
+ }
}
EXPORT_SYMBOL(release_console_sem);
*
* Must be called with the tasklist lock write-held.
*/
-void __ptrace_link(task_t *child, task_t *new_parent)
+void __ptrace_link(struct task_struct *child, struct task_struct *new_parent)
{
BUG_ON(!list_empty(&child->ptrace_list));
if (child->parent == new_parent)
* TASK_TRACED, resume it now.
* Requires that irqs be disabled.
*/
-void ptrace_untrace(task_t *child)
+void ptrace_untrace(struct task_struct *child)
{
spin_lock(&child->sighand->siglock);
if (child->state == TASK_TRACED) {
*
* Must be called with the tasklist lock write-held.
*/
-void __ptrace_unlink(task_t *child)
+void __ptrace_unlink(struct task_struct *child)
{
BUG_ON(!child->ptrace);
static struct rcu_ctrlblk rcu_ctrlblk = {
.cur = -300,
.completed = -300,
- .lock = SPIN_LOCK_UNLOCKED,
+ .lock = __SPIN_LOCK_UNLOCKED(&rcu_ctrlblk.lock),
.cpumask = CPU_MASK_NONE,
};
static struct rcu_ctrlblk rcu_bh_ctrlblk = {
.cur = -300,
.completed = -300,
- .lock = SPIN_LOCK_UNLOCKED,
+ .lock = __SPIN_LOCK_UNLOCKED(&rcu_bh_ctrlblk.lock),
.cpumask = CPU_MASK_NONE,
};
#include <linux/interrupt.h>
#include <linux/plist.h>
#include <linux/fs.h>
+#include <linux/debug_locks.h>
#include "rtmutex_common.h"
console_verbose(); \
		if (spin_is_locked(&current->pi_lock))		\
			spin_unlock(&current->pi_lock);		\
-		if (spin_is_locked(&current->held_list_lock))	\
-			spin_unlock(&current->held_list_lock);	\
} \
} while (0)
rt_trace_on = 0;
}
-static void printk_task(task_t *p)
+static void printk_task(struct task_struct *p)
{
if (p)
printk("%16s:%5d [%p, %3d]", p->comm, p->pid, p, p->prio);
printk("<none>");
}
-static void printk_task_short(task_t *p)
-{
- if (p)
- printk("%s/%d [%p, %3d]", p->comm, p->pid, p, p->prio);
- else
- printk("<none>");
-}
-
static void printk_lock(struct rt_mutex *lock, int print_owner)
{
if (lock->name)
printk_task(rt_mutex_owner(lock));
printk("\n");
}
- if (rt_mutex_owner(lock)) {
- printk("... acquired at: ");
- print_symbol("%s\n", lock->acquire_ip);
- }
-}
-
-static void printk_waiter(struct rt_mutex_waiter *w)
-{
- printk("-------------------------\n");
- printk("| waiter struct %p:\n", w);
- printk("| w->list_entry: [DP:%p/%p|SP:%p/%p|PRI:%d]\n",
- w->list_entry.plist.prio_list.prev, w->list_entry.plist.prio_list.next,
- w->list_entry.plist.node_list.prev, w->list_entry.plist.node_list.next,
- w->list_entry.prio);
- printk("| w->pi_list_entry: [DP:%p/%p|SP:%p/%p|PRI:%d]\n",
- w->pi_list_entry.plist.prio_list.prev, w->pi_list_entry.plist.prio_list.next,
- w->pi_list_entry.plist.node_list.prev, w->pi_list_entry.plist.node_list.next,
- w->pi_list_entry.prio);
- printk("\n| lock:\n");
- printk_lock(w->lock, 1);
- printk("| w->ti->task:\n");
- printk_task(w->task);
- printk("| blocked at: ");
- print_symbol("%s\n", w->ip);
- printk("-------------------------\n");
-}
-
-static void show_task_locks(task_t *p)
-{
- switch (p->state) {
- case TASK_RUNNING: printk("R"); break;
- case TASK_INTERRUPTIBLE: printk("S"); break;
- case TASK_UNINTERRUPTIBLE: printk("D"); break;
- case TASK_STOPPED: printk("T"); break;
- case EXIT_ZOMBIE: printk("Z"); break;
- case EXIT_DEAD: printk("X"); break;
- default: printk("?"); break;
- }
- printk_task(p);
- if (p->pi_blocked_on) {
- struct rt_mutex *lock = p->pi_blocked_on->lock;
-
- printk(" blocked on:");
- printk_lock(lock, 1);
- } else
- printk(" (not blocked)\n");
-}
-
-void rt_mutex_show_held_locks(task_t *task, int verbose)
-{
- struct list_head *curr, *cursor = NULL;
- struct rt_mutex *lock;
- task_t *t;
- unsigned long flags;
- int count = 0;
-
- if (!rt_trace_on)
- return;
-
- if (verbose) {
- printk("------------------------------\n");
- printk("| showing all locks held by: | (");
- printk_task_short(task);
- printk("):\n");
- printk("------------------------------\n");
- }
-
-next:
- spin_lock_irqsave(&task->held_list_lock, flags);
- list_for_each(curr, &task->held_list_head) {
- if (cursor && curr != cursor)
- continue;
- lock = list_entry(curr, struct rt_mutex, held_list_entry);
- t = rt_mutex_owner(lock);
- WARN_ON(t != task);
- count++;
- cursor = curr->next;
- spin_unlock_irqrestore(&task->held_list_lock, flags);
-
- printk("\n#%03d: ", count);
- printk_lock(lock, 0);
- goto next;
- }
- spin_unlock_irqrestore(&task->held_list_lock, flags);
-
- printk("\n");
-}
-
-void rt_mutex_show_all_locks(void)
-{
- task_t *g, *p;
- int count = 10;
- int unlock = 1;
-
- printk("\n");
- printk("----------------------\n");
- printk("| showing all tasks: |\n");
- printk("----------------------\n");
-
- /*
- * Here we try to get the tasklist_lock as hard as possible,
- * if not successful after 2 seconds we ignore it (but keep
- * trying). This is to enable a debug printout even if a
- * tasklist_lock-holding task deadlocks or crashes.
- */
-retry:
- if (!read_trylock(&tasklist_lock)) {
- if (count == 10)
- printk("hm, tasklist_lock locked, retrying... ");
- if (count) {
- count--;
- printk(" #%d", 10-count);
- mdelay(200);
- goto retry;
- }
- printk(" ignoring it.\n");
- unlock = 0;
- }
- if (count != 10)
- printk(" locked it.\n");
-
- do_each_thread(g, p) {
- show_task_locks(p);
- if (!unlock)
- if (read_trylock(&tasklist_lock))
- unlock = 1;
- } while_each_thread(g, p);
-
- printk("\n");
-
- printk("-----------------------------------------\n");
- printk("| showing all locks held in the system: |\n");
- printk("-----------------------------------------\n");
-
- do_each_thread(g, p) {
- rt_mutex_show_held_locks(p, 0);
- if (!unlock)
- if (read_trylock(&tasklist_lock))
- unlock = 1;
- } while_each_thread(g, p);
-
-
- printk("=============================================\n\n");
-
- if (unlock)
- read_unlock(&tasklist_lock);
-}
-
-void rt_mutex_debug_check_no_locks_held(task_t *task)
-{
- struct rt_mutex_waiter *w;
- struct list_head *curr;
- struct rt_mutex *lock;
-
- if (!rt_trace_on)
- return;
- if (!rt_prio(task->normal_prio) && rt_prio(task->prio)) {
- printk("BUG: PI priority boost leaked!\n");
- printk_task(task);
- printk("\n");
- }
- if (list_empty(&task->held_list_head))
- return;
-
- spin_lock(&task->pi_lock);
- plist_for_each_entry(w, &task->pi_waiters, pi_list_entry) {
- TRACE_OFF();
-
- printk("hm, PI interest held at exit time? Task:\n");
- printk_task(task);
- printk_waiter(w);
- return;
- }
- spin_unlock(&task->pi_lock);
-
- list_for_each(curr, &task->held_list_head) {
- lock = list_entry(curr, struct rt_mutex, held_list_entry);
-
- printk("BUG: %s/%d, lock held at task exit time!\n",
- task->comm, task->pid);
- printk_lock(lock, 1);
- if (rt_mutex_owner(lock) != task)
- printk("exiting task is not even the owner??\n");
- }
-}
-
-int rt_mutex_debug_check_no_locks_freed(const void *from, unsigned long len)
-{
- const void *to = from + len;
- struct list_head *curr;
- struct rt_mutex *lock;
- unsigned long flags;
- void *lock_addr;
-
- if (!rt_trace_on)
- return 0;
-
-	spin_lock_irqsave(&current->held_list_lock, flags);
-	list_for_each(curr, &current->held_list_head) {
- lock = list_entry(curr, struct rt_mutex, held_list_entry);
- lock_addr = lock;
- if (lock_addr < from || lock_addr >= to)
- continue;
- TRACE_OFF();
-
- printk("BUG: %s/%d, active lock [%p(%p-%p)] freed!\n",
- current->comm, current->pid, lock, from, to);
- dump_stack();
- printk_lock(lock, 1);
- if (rt_mutex_owner(lock) != current)
- printk("freeing task is not even the owner??\n");
- return 1;
- }
-	spin_unlock_irqrestore(&current->held_list_lock, flags);
-
- return 0;
}
void rt_mutex_debug_task_free(struct task_struct *task)
current->comm, current->pid);
printk_lock(waiter->lock, 1);
- printk("... trying at: ");
- print_symbol("%s\n", waiter->ip);
-
printk("\n2) %s/%d is blocked on this lock:\n", task->comm, task->pid);
printk_lock(waiter->deadlock_lock, 1);
- rt_mutex_show_held_locks(current, 1);
- rt_mutex_show_held_locks(task, 1);
+ debug_show_held_locks(current);
+ debug_show_held_locks(task);
printk("\n%s/%d's [blocked] stackdump:\n\n", task->comm, task->pid);
show_stack(task, NULL);
printk("\n%s/%d's [current] stackdump:\n\n",
current->comm, current->pid);
dump_stack();
- rt_mutex_show_all_locks();
+ debug_show_all_locks();
+
printk("[ turning off deadlock detection."
"Please report this trace. ]\n\n");
local_irq_disable();
}
-void debug_rt_mutex_lock(struct rt_mutex *lock __IP_DECL__)
+void debug_rt_mutex_lock(struct rt_mutex *lock)
{
- unsigned long flags;
-
- if (rt_trace_on) {
- TRACE_WARN_ON_LOCKED(!list_empty(&lock->held_list_entry));
-
-		spin_lock_irqsave(&current->held_list_lock, flags);
-		list_add_tail(&lock->held_list_entry, &current->held_list_head);
-		spin_unlock_irqrestore(&current->held_list_lock, flags);
-
- lock->acquire_ip = ip;
- }
}
void debug_rt_mutex_unlock(struct rt_mutex *lock)
{
- unsigned long flags;
-
- if (rt_trace_on) {
- TRACE_WARN_ON_LOCKED(rt_mutex_owner(lock) != current);
- TRACE_WARN_ON_LOCKED(list_empty(&lock->held_list_entry));
-
-		spin_lock_irqsave(&current->held_list_lock, flags);
-		list_del_init(&lock->held_list_entry);
-		spin_unlock_irqrestore(&current->held_list_lock, flags);
- }
+ TRACE_WARN_ON_LOCKED(rt_mutex_owner(lock) != current);
}
-void debug_rt_mutex_proxy_lock(struct rt_mutex *lock,
- struct task_struct *powner __IP_DECL__)
+void
+debug_rt_mutex_proxy_lock(struct rt_mutex *lock, struct task_struct *powner)
{
- unsigned long flags;
-
- if (rt_trace_on) {
- TRACE_WARN_ON_LOCKED(!list_empty(&lock->held_list_entry));
-
- spin_lock_irqsave(&powner->held_list_lock, flags);
- list_add_tail(&lock->held_list_entry, &powner->held_list_head);
- spin_unlock_irqrestore(&powner->held_list_lock, flags);
-
- lock->acquire_ip = ip;
- }
}
void debug_rt_mutex_proxy_unlock(struct rt_mutex *lock)
{
- unsigned long flags;
-
- if (rt_trace_on) {
- struct task_struct *owner = rt_mutex_owner(lock);
-
- TRACE_WARN_ON_LOCKED(!owner);
- TRACE_WARN_ON_LOCKED(list_empty(&lock->held_list_entry));
-
- spin_lock_irqsave(&owner->held_list_lock, flags);
- list_del_init(&lock->held_list_entry);
- spin_unlock_irqrestore(&owner->held_list_lock, flags);
- }
+ TRACE_WARN_ON_LOCKED(!rt_mutex_owner(lock));
}
void debug_rt_mutex_init_waiter(struct rt_mutex_waiter *waiter)
void debug_rt_mutex_init(struct rt_mutex *lock, const char *name)
{
- void *addr = lock;
-
- if (rt_trace_on) {
- rt_mutex_debug_check_no_locks_freed(addr,
- sizeof(struct rt_mutex));
- INIT_LIST_HEAD(&lock->held_list_entry);
- lock->name = name;
- }
+ /*
+ * Make sure we are not reinitializing a held lock:
+ */
+ debug_check_no_locks_freed((void *)lock, sizeof(*lock));
+ lock->name = name;
}
-void rt_mutex_deadlock_account_lock(struct rt_mutex *lock, task_t *task)
+void
+rt_mutex_deadlock_account_lock(struct rt_mutex *lock, struct task_struct *task)
{
}
* This file contains macros used solely by rtmutex.c. Debug version.
*/
-#define __IP_DECL__ , unsigned long ip
-#define __IP__ , ip
-#define __RET_IP__ , (unsigned long)__builtin_return_address(0)
-
extern void
rt_mutex_deadlock_account_lock(struct rt_mutex *lock, struct task_struct *task);
extern void rt_mutex_deadlock_account_unlock(struct task_struct *task);
extern void debug_rt_mutex_init_waiter(struct rt_mutex_waiter *waiter);
extern void debug_rt_mutex_free_waiter(struct rt_mutex_waiter *waiter);
extern void debug_rt_mutex_init(struct rt_mutex *lock, const char *name);
-extern void debug_rt_mutex_lock(struct rt_mutex *lock __IP_DECL__);
+extern void debug_rt_mutex_lock(struct rt_mutex *lock);
extern void debug_rt_mutex_unlock(struct rt_mutex *lock);
extern void debug_rt_mutex_proxy_lock(struct rt_mutex *lock,
- struct task_struct *powner __IP_DECL__);
+ struct task_struct *powner);
extern void debug_rt_mutex_proxy_unlock(struct rt_mutex *lock);
extern void debug_rt_mutex_deadlock(int detect, struct rt_mutex_waiter *waiter,
struct rt_mutex *lock);
};
static struct test_thread_data thread_data[MAX_RT_TEST_THREADS];
-static task_t *threads[MAX_RT_TEST_THREADS];
+static struct task_struct *threads[MAX_RT_TEST_THREADS];
static struct rt_mutex mutexes[MAX_RT_TEST_MUTEXES];
enum test_opcodes {
static ssize_t sysfs_test_status(struct sys_device *dev, char *buf)
{
struct test_thread_data *td;
+ struct task_struct *tsk;
char *curr = buf;
- task_t *tsk;
int i;
td = container_of(dev, struct test_thread_data, sysdev);
* Decreases task's usage by one - may thus free the task.
* Returns 0 or -EDEADLK.
*/
-static int rt_mutex_adjust_prio_chain(task_t *task,
+static int rt_mutex_adjust_prio_chain(struct task_struct *task,
int deadlock_detect,
struct rt_mutex *orig_lock,
struct rt_mutex_waiter *orig_waiter,
- struct task_struct *top_task
- __IP_DECL__)
+ struct task_struct *top_task)
{
struct rt_mutex *lock;
struct rt_mutex_waiter *waiter, *top_waiter = orig_waiter;
spin_unlock_irqrestore(&task->pi_lock, flags);
out_put_task:
put_task_struct(task);
+
return ret;
}
*
* Must be called with lock->wait_lock held.
*/
-static int try_to_take_rt_mutex(struct rt_mutex *lock __IP_DECL__)
+static int try_to_take_rt_mutex(struct rt_mutex *lock)
{
/*
* We have to be careful here if the atomic speedups are
return 0;
/* We got the lock. */
- debug_rt_mutex_lock(lock __IP__);
+ debug_rt_mutex_lock(lock);
rt_mutex_set_owner(lock, current, 0);
*/
static int task_blocks_on_rt_mutex(struct rt_mutex *lock,
struct rt_mutex_waiter *waiter,
- int detect_deadlock
- __IP_DECL__)
+ int detect_deadlock)
{
+ struct task_struct *owner = rt_mutex_owner(lock);
struct rt_mutex_waiter *top_waiter = waiter;
- task_t *owner = rt_mutex_owner(lock);
- int boost = 0, res;
unsigned long flags;
+ int boost = 0, res;
	spin_lock_irqsave(&current->pi_lock, flags);
__rt_mutex_adjust_prio(current);
spin_unlock(&lock->wait_lock);
res = rt_mutex_adjust_prio_chain(owner, detect_deadlock, lock, waiter,
- current __IP__);
+ current);
spin_lock(&lock->wait_lock);
* Must be called with lock->wait_lock held
*/
static void remove_waiter(struct rt_mutex *lock,
- struct rt_mutex_waiter *waiter __IP_DECL__)
+ struct rt_mutex_waiter *waiter)
{
int first = (waiter == rt_mutex_top_waiter(lock));
- int boost = 0;
- task_t *owner = rt_mutex_owner(lock);
+ struct task_struct *owner = rt_mutex_owner(lock);
unsigned long flags;
+ int boost = 0;
	spin_lock_irqsave(&current->pi_lock, flags);
plist_del(&waiter->list_entry, &lock->wait_list);
spin_unlock(&lock->wait_lock);
- rt_mutex_adjust_prio_chain(owner, 0, lock, NULL, current __IP__);
+ rt_mutex_adjust_prio_chain(owner, 0, lock, NULL, current);
spin_lock(&lock->wait_lock);
}
get_task_struct(task);
spin_unlock_irqrestore(&task->pi_lock, flags);
- rt_mutex_adjust_prio_chain(task, 0, NULL, NULL, task __RET_IP__);
+ rt_mutex_adjust_prio_chain(task, 0, NULL, NULL, task);
}
/*
static int __sched
rt_mutex_slowlock(struct rt_mutex *lock, int state,
struct hrtimer_sleeper *timeout,
- int detect_deadlock __IP_DECL__)
+ int detect_deadlock)
{
struct rt_mutex_waiter waiter;
int ret = 0;
spin_lock(&lock->wait_lock);
/* Try to acquire the lock again: */
- if (try_to_take_rt_mutex(lock __IP__)) {
+ if (try_to_take_rt_mutex(lock)) {
spin_unlock(&lock->wait_lock);
return 0;
}
for (;;) {
/* Try to acquire the lock: */
- if (try_to_take_rt_mutex(lock __IP__))
+ if (try_to_take_rt_mutex(lock))
break;
/*
*/
if (!waiter.task) {
ret = task_blocks_on_rt_mutex(lock, &waiter,
- detect_deadlock __IP__);
+ detect_deadlock);
/*
* If we got woken up by the owner then start loop
* all over without going into schedule to try
set_current_state(TASK_RUNNING);
if (unlikely(waiter.task))
- remove_waiter(lock, &waiter __IP__);
+ remove_waiter(lock, &waiter);
/*
* try_to_take_rt_mutex() sets the waiter bit
* Slow path try-lock function:
*/
static inline int
-rt_mutex_slowtrylock(struct rt_mutex *lock __IP_DECL__)
+rt_mutex_slowtrylock(struct rt_mutex *lock)
{
int ret = 0;
if (likely(rt_mutex_owner(lock) != current)) {
- ret = try_to_take_rt_mutex(lock __IP__);
+ ret = try_to_take_rt_mutex(lock);
/*
* try_to_take_rt_mutex() sets the lock waiters
* bit unconditionally. Clean this up.
int detect_deadlock,
int (*slowfn)(struct rt_mutex *lock, int state,
struct hrtimer_sleeper *timeout,
- int detect_deadlock __IP_DECL__))
+ int detect_deadlock))
{
if (!detect_deadlock && likely(rt_mutex_cmpxchg(lock, NULL, current))) {
rt_mutex_deadlock_account_lock(lock, current);
return 0;
} else
- return slowfn(lock, state, NULL, detect_deadlock __RET_IP__);
+ return slowfn(lock, state, NULL, detect_deadlock);
}
static inline int
struct hrtimer_sleeper *timeout, int detect_deadlock,
int (*slowfn)(struct rt_mutex *lock, int state,
struct hrtimer_sleeper *timeout,
- int detect_deadlock __IP_DECL__))
+ int detect_deadlock))
{
if (!detect_deadlock && likely(rt_mutex_cmpxchg(lock, NULL, current))) {
rt_mutex_deadlock_account_lock(lock, current);
return 0;
} else
- return slowfn(lock, state, timeout, detect_deadlock __RET_IP__);
+ return slowfn(lock, state, timeout, detect_deadlock);
}
static inline int
rt_mutex_fasttrylock(struct rt_mutex *lock,
- int (*slowfn)(struct rt_mutex *lock __IP_DECL__))
+ int (*slowfn)(struct rt_mutex *lock))
{
if (likely(rt_mutex_cmpxchg(lock, NULL, current))) {
rt_mutex_deadlock_account_lock(lock, current);
return 1;
}
- return slowfn(lock __RET_IP__);
+ return slowfn(lock);
}
static inline void
struct task_struct *proxy_owner)
{
__rt_mutex_init(lock, NULL);
- debug_rt_mutex_proxy_lock(lock, proxy_owner __RET_IP__);
+ debug_rt_mutex_proxy_lock(lock, proxy_owner);
rt_mutex_set_owner(lock, proxy_owner, 0);
rt_mutex_deadlock_account_lock(lock, proxy_owner);
}
* Non-debug version.
*/
-#define __IP_DECL__
-#define __IP__
-#define __RET_IP__
#define rt_mutex_deadlock_check(l) (0)
#define rt_mutex_deadlock_account_lock(m, t) do { } while (0)
#define rt_mutex_deadlock_account_unlock(l) do { } while (0)
--- /dev/null
+/* kernel/rwsem.c: R/W semaphores, public implementation
+ *
+ * Written by David Howells (dhowells@redhat.com).
+ * Derived from asm-i386/semaphore.h
+ */
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/rwsem.h>
+
+#include <asm/system.h>
+#include <asm/atomic.h>
+
+/*
+ * lock for reading
+ */
+void down_read(struct rw_semaphore *sem)
+{
+ might_sleep();
+ rwsem_acquire_read(&sem->dep_map, 0, 0, _RET_IP_);
+
+ __down_read(sem);
+}
+
+EXPORT_SYMBOL(down_read);
+
+/*
+ * trylock for reading -- returns 1 if successful, 0 if contention
+ */
+int down_read_trylock(struct rw_semaphore *sem)
+{
+ int ret = __down_read_trylock(sem);
+
+ if (ret == 1)
+ rwsem_acquire_read(&sem->dep_map, 0, 1, _RET_IP_);
+ return ret;
+}
+
+EXPORT_SYMBOL(down_read_trylock);
+
+/*
+ * lock for writing
+ */
+void down_write(struct rw_semaphore *sem)
+{
+ might_sleep();
+ rwsem_acquire(&sem->dep_map, 0, 0, _RET_IP_);
+
+ __down_write(sem);
+}
+
+EXPORT_SYMBOL(down_write);
+
+/*
+ * trylock for writing -- returns 1 if successful, 0 if contention
+ */
+int down_write_trylock(struct rw_semaphore *sem)
+{
+ int ret = __down_write_trylock(sem);
+
+ if (ret == 1)
+ rwsem_acquire(&sem->dep_map, 0, 0, _RET_IP_);
+ return ret;
+}
+
+EXPORT_SYMBOL(down_write_trylock);
+
+/*
+ * release a read lock
+ */
+void up_read(struct rw_semaphore *sem)
+{
+ rwsem_release(&sem->dep_map, 1, _RET_IP_);
+
+ __up_read(sem);
+}
+
+EXPORT_SYMBOL(up_read);
+
+/*
+ * release a write lock
+ */
+void up_write(struct rw_semaphore *sem)
+{
+ rwsem_release(&sem->dep_map, 1, _RET_IP_);
+
+ __up_write(sem);
+}
+
+EXPORT_SYMBOL(up_write);
+
+/*
+ * downgrade write lock to read lock
+ */
+void downgrade_write(struct rw_semaphore *sem)
+{
+ /*
+ * lockdep: a downgraded write will live on as a write
+ * dependency.
+ */
+ __downgrade_write(sem);
+}
+
+EXPORT_SYMBOL(downgrade_write);
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+
+void down_read_nested(struct rw_semaphore *sem, int subclass)
+{
+ might_sleep();
+ rwsem_acquire_read(&sem->dep_map, subclass, 0, _RET_IP_);
+
+ __down_read(sem);
+}
+
+EXPORT_SYMBOL(down_read_nested);
+
+void down_read_non_owner(struct rw_semaphore *sem)
+{
+ might_sleep();
+
+ __down_read(sem);
+}
+
+EXPORT_SYMBOL(down_read_non_owner);
+
+void down_write_nested(struct rw_semaphore *sem, int subclass)
+{
+ might_sleep();
+ rwsem_acquire(&sem->dep_map, subclass, 0, _RET_IP_);
+
+ __down_write_nested(sem, subclass);
+}
+
+EXPORT_SYMBOL(down_write_nested);
+
+void up_read_non_owner(struct rw_semaphore *sem)
+{
+ __up_read(sem);
+}
+
+EXPORT_SYMBOL(up_read_non_owner);
+
+#endif
+
+
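
The new kernel/rwsem.c above wraps the architecture __down_read()/__down_write()
primitives with lockdep acquire/release annotations and adds _nested and
_non_owner variants. As an illustration only, not part of the patch (demo_sem,
demo_submit() and demo_complete() are hypothetical names), the _non_owner calls
are meant for the case where the task releasing the rwsem is not the task that
acquired it, so lockdep deliberately does not track the dependency:

#include <linux/rwsem.h>

static DECLARE_RWSEM(demo_sem);

/* Submission path: pin the shared state for the duration of async work. */
static void demo_submit(void)
{
	down_read_non_owner(&demo_sem);
	/* ... queue asynchronous work that will call demo_complete() ... */
}

/* Completion path, possibly running in a different task: drop the
 * reader that demo_submit() took. */
static void demo_complete(void)
{
	up_read_non_owner(&demo_sem);
}
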
#include <linux/capability.h>
#include <linux/completion.h>
#include <linux/kernel_stat.h>
+#include <linux/debug_locks.h>
#include <linux/security.h>
#include <linux/notifier.h>
#include <linux/profile.h>
return SCALE_PRIO(DEF_TIMESLICE, static_prio);
}
-static inline unsigned int task_timeslice(task_t *p)
+static inline unsigned int task_timeslice(struct task_struct *p)
{
return static_prio_timeslice(p->static_prio);
}
-#define task_hot(p, now, sd) ((long long) ((now) - (p)->last_ran) \
- < (long long) (sd)->cache_hot_time)
-
/*
* These are the runqueue data structures:
*/
-typedef struct runqueue runqueue_t;
-
struct prio_array {
unsigned int nr_active;
DECLARE_BITMAP(bitmap, MAX_PRIO+1); /* include 1 bit for delimiter */
* (such as the load balancing or the thread migration code), lock
* acquire operations must be ordered by ascending &runqueue.
*/
-struct runqueue {
+struct rq {
spinlock_t lock;
/*
unsigned long expired_timestamp;
unsigned long long timestamp_last_tick;
- task_t *curr, *idle;
+ struct task_struct *curr, *idle;
struct mm_struct *prev_mm;
- prio_array_t *active, *expired, arrays[2];
+ struct prio_array *active, *expired, arrays[2];
int best_expired_prio;
atomic_t nr_iowait;
int active_balance;
int push_cpu;
- task_t *migration_thread;
+ struct task_struct *migration_thread;
struct list_head migration_queue;
#endif
unsigned long ttwu_cnt;
unsigned long ttwu_local;
#endif
+ struct lock_class_key rq_lock_key;
};
-static DEFINE_PER_CPU(struct runqueue, runqueues);
+static DEFINE_PER_CPU(struct rq, runqueues);
/*
* The domain tree (rq->sd) is protected by RCU's quiescent state transition.
* The domain tree of any CPU may only be accessed from within
* preempt-disabled sections.
*/
-#define for_each_domain(cpu, domain) \
-for (domain = rcu_dereference(cpu_rq(cpu)->sd); domain; domain = domain->parent)
+#define for_each_domain(cpu, __sd) \
+ for (__sd = rcu_dereference(cpu_rq(cpu)->sd); __sd; __sd = __sd->parent)
#define cpu_rq(cpu) (&per_cpu(runqueues, (cpu)))
#define this_rq() (&__get_cpu_var(runqueues))
#endif
#ifndef __ARCH_WANT_UNLOCKED_CTXSW
-static inline int task_running(runqueue_t *rq, task_t *p)
+static inline int task_running(struct rq *rq, struct task_struct *p)
{
return rq->curr == p;
}
-static inline void prepare_lock_switch(runqueue_t *rq, task_t *next)
+static inline void prepare_lock_switch(struct rq *rq, struct task_struct *next)
{
}
-static inline void finish_lock_switch(runqueue_t *rq, task_t *prev)
+static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
{
#ifdef CONFIG_DEBUG_SPINLOCK
/* this is a valid case when another task releases the spinlock */
rq->lock.owner = current;
#endif
+ /*
+ * If we are tracking spinlock dependencies then we have to
+ * fix up the runqueue lock - which gets 'carried over' from
+ * prev into current:
+ */
+ spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
+
spin_unlock_irq(&rq->lock);
}
#else /* __ARCH_WANT_UNLOCKED_CTXSW */
-static inline int task_running(runqueue_t *rq, task_t *p)
+static inline int task_running(struct rq *rq, struct task_struct *p)
{
#ifdef CONFIG_SMP
return p->oncpu;
#endif
}
-static inline void prepare_lock_switch(runqueue_t *rq, task_t *next)
+static inline void prepare_lock_switch(struct rq *rq, struct task_struct *next)
{
#ifdef CONFIG_SMP
/*
#endif
}
-static inline void finish_lock_switch(runqueue_t *rq, task_t *prev)
+static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
{
#ifdef CONFIG_SMP
/*
* __task_rq_lock - lock the runqueue a given task resides on.
* Must be called interrupts disabled.
*/
-static inline runqueue_t *__task_rq_lock(task_t *p)
+static inline struct rq *__task_rq_lock(struct task_struct *p)
__acquires(rq->lock)
{
- struct runqueue *rq;
+ struct rq *rq;
repeat_lock_task:
rq = task_rq(p);
* interrupts. Note the ordering: we can safely lookup the task_rq without
* explicitly disabling preemption.
*/
-static runqueue_t *task_rq_lock(task_t *p, unsigned long *flags)
+static struct rq *task_rq_lock(struct task_struct *p, unsigned long *flags)
__acquires(rq->lock)
{
- struct runqueue *rq;
+ struct rq *rq;
repeat_lock_task:
local_irq_save(*flags);
return rq;
}
-static inline void __task_rq_unlock(runqueue_t *rq)
+static inline void __task_rq_unlock(struct rq *rq)
__releases(rq->lock)
{
spin_unlock(&rq->lock);
}
-static inline void task_rq_unlock(runqueue_t *rq, unsigned long *flags)
+static inline void task_rq_unlock(struct rq *rq, unsigned long *flags)
__releases(rq->lock)
{
spin_unlock_irqrestore(&rq->lock, *flags);
seq_printf(seq, "version %d\n", SCHEDSTAT_VERSION);
seq_printf(seq, "timestamp %lu\n", jiffies);
for_each_online_cpu(cpu) {
- runqueue_t *rq = cpu_rq(cpu);
+ struct rq *rq = cpu_rq(cpu);
#ifdef CONFIG_SMP
struct sched_domain *sd;
int dcnt = 0;
/*
* rq_lock - lock a given runqueue and disable interrupts.
*/
-static inline runqueue_t *this_rq_lock(void)
+static inline struct rq *this_rq_lock(void)
__acquires(rq->lock)
{
- runqueue_t *rq;
+ struct rq *rq;
local_irq_disable();
rq = this_rq();
* long it was from the *first* time it was queued to the time that it
* finally hit a cpu.
*/
-static inline void sched_info_dequeued(task_t *t)
+static inline void sched_info_dequeued(struct task_struct *t)
{
t->sched_info.last_queued = 0;
}
* long it was waiting to run. We also note when it began so that we
* can keep stats on how long its timeslice is.
*/
-static void sched_info_arrive(task_t *t)
+static void sched_info_arrive(struct task_struct *t)
{
unsigned long now = jiffies, diff = 0;
- struct runqueue *rq = task_rq(t);
+ struct rq *rq = task_rq(t);
if (t->sched_info.last_queued)
diff = now - t->sched_info.last_queued;
* the timestamp if it is already not set. It's assumed that
* sched_info_dequeued() will clear that stamp when appropriate.
*/
-static inline void sched_info_queued(task_t *t)
+static inline void sched_info_queued(struct task_struct *t)
{
if (!t->sched_info.last_queued)
t->sched_info.last_queued = jiffies;
* Called when a process ceases being the active-running process, either
* voluntarily or involuntarily. Now we can calculate how long we ran.
*/
-static inline void sched_info_depart(task_t *t)
+static inline void sched_info_depart(struct task_struct *t)
{
- struct runqueue *rq = task_rq(t);
+ struct rq *rq = task_rq(t);
unsigned long diff = jiffies - t->sched_info.last_arrival;
t->sched_info.cpu_time += diff;
* their time slice. (This may also be called when switching to or from
* the idle task.) We are only called when prev != next.
*/
-static inline void sched_info_switch(task_t *prev, task_t *next)
+static inline void
+sched_info_switch(struct task_struct *prev, struct task_struct *next)
{
- struct runqueue *rq = task_rq(prev);
+ struct rq *rq = task_rq(prev);
/*
* prev now departs the cpu. It's not interesting to record
/*
* Adding/removing a task to/from a priority array:
*/
-static void dequeue_task(struct task_struct *p, prio_array_t *array)
+static void dequeue_task(struct task_struct *p, struct prio_array *array)
{
array->nr_active--;
list_del(&p->run_list);
__clear_bit(p->prio, array->bitmap);
}
-static void enqueue_task(struct task_struct *p, prio_array_t *array)
+static void enqueue_task(struct task_struct *p, struct prio_array *array)
{
sched_info_queued(p);
list_add_tail(&p->run_list, array->queue + p->prio);
* Put task to the end of the run list without the overhead of dequeue
* followed by enqueue.
*/
-static void requeue_task(struct task_struct *p, prio_array_t *array)
+static void requeue_task(struct task_struct *p, struct prio_array *array)
{
list_move_tail(&p->run_list, array->queue + p->prio);
}
-static inline void enqueue_task_head(struct task_struct *p, prio_array_t *array)
+static inline void
+enqueue_task_head(struct task_struct *p, struct prio_array *array)
{
list_add(&p->run_list, array->queue + p->prio);
__set_bit(p->prio, array->bitmap);
* Both properties are important to certain workloads.
*/
-static inline int __normal_prio(task_t *p)
+static inline int __normal_prio(struct task_struct *p)
{
int bonus, prio;
#define RTPRIO_TO_LOAD_WEIGHT(rp) \
(PRIO_TO_LOAD_WEIGHT(MAX_RT_PRIO) + LOAD_WEIGHT(rp))
-static void set_load_weight(task_t *p)
+static void set_load_weight(struct task_struct *p)
{
if (has_rt_policy(p)) {
#ifdef CONFIG_SMP
p->load_weight = PRIO_TO_LOAD_WEIGHT(p->static_prio);
}
-static inline void inc_raw_weighted_load(runqueue_t *rq, const task_t *p)
+static inline void
+inc_raw_weighted_load(struct rq *rq, const struct task_struct *p)
{
rq->raw_weighted_load += p->load_weight;
}
-static inline void dec_raw_weighted_load(runqueue_t *rq, const task_t *p)
+static inline void
+dec_raw_weighted_load(struct rq *rq, const struct task_struct *p)
{
rq->raw_weighted_load -= p->load_weight;
}
-static inline void inc_nr_running(task_t *p, runqueue_t *rq)
+static inline void inc_nr_running(struct task_struct *p, struct rq *rq)
{
rq->nr_running++;
inc_raw_weighted_load(rq, p);
}
-static inline void dec_nr_running(task_t *p, runqueue_t *rq)
+static inline void dec_nr_running(struct task_struct *p, struct rq *rq)
{
rq->nr_running--;
dec_raw_weighted_load(rq, p);
* setprio syscalls, and whenever the interactivity
* estimator recalculates.
*/
-static inline int normal_prio(task_t *p)
+static inline int normal_prio(struct task_struct *p)
{
int prio;
* interactivity modifiers. Will be RT if the task got
* RT-boosted. If not then it returns p->normal_prio.
*/
-static int effective_prio(task_t *p)
+static int effective_prio(struct task_struct *p)
{
p->normal_prio = normal_prio(p);
/*
/*
* __activate_task - move a task to the runqueue.
*/
-static void __activate_task(task_t *p, runqueue_t *rq)
+static void __activate_task(struct task_struct *p, struct rq *rq)
{
- prio_array_t *target = rq->active;
+ struct prio_array *target = rq->active;
if (batch_task(p))
target = rq->expired;
/*
* __activate_idle_task - move idle task to the _front_ of runqueue.
*/
-static inline void __activate_idle_task(task_t *p, runqueue_t *rq)
+static inline void __activate_idle_task(struct task_struct *p, struct rq *rq)
{
enqueue_task_head(p, rq->active);
inc_nr_running(p, rq);
* Recalculate p->normal_prio and p->prio after having slept,
* updating the sleep-average too:
*/
-static int recalc_task_prio(task_t *p, unsigned long long now)
+static int recalc_task_prio(struct task_struct *p, unsigned long long now)
{
/* Caller must always ensure 'now >= p->timestamp' */
unsigned long sleep_time = now - p->timestamp;
* Update all the scheduling statistics stuff. (sleep average
* calculation, priority modifiers, etc.)
*/
-static void activate_task(task_t *p, runqueue_t *rq, int local)
+static void activate_task(struct task_struct *p, struct rq *rq, int local)
{
unsigned long long now;
#ifdef CONFIG_SMP
if (!local) {
/* Compensate for drifting sched_clock */
- runqueue_t *this_rq = this_rq();
+ struct rq *this_rq = this_rq();
now = (now - this_rq->timestamp_last_tick)
+ rq->timestamp_last_tick;
}
/*
* deactivate_task - remove a task from the runqueue.
*/
-static void deactivate_task(struct task_struct *p, runqueue_t *rq)
+static void deactivate_task(struct task_struct *p, struct rq *rq)
{
dec_nr_running(p, rq);
dequeue_task(p, p->array);
#define tsk_is_polling(t) test_tsk_thread_flag(t, TIF_POLLING_NRFLAG)
#endif
-static void resched_task(task_t *p)
+static void resched_task(struct task_struct *p)
{
int cpu;
smp_send_reschedule(cpu);
}
#else
-static inline void resched_task(task_t *p)
+static inline void resched_task(struct task_struct *p)
{
assert_spin_locked(&task_rq(p)->lock);
set_tsk_need_resched(p);
* task_curr - is this task currently executing on a CPU?
* @p: the task in question.
*/
-inline int task_curr(const task_t *p)
+inline int task_curr(const struct task_struct *p)
{
return cpu_curr(task_cpu(p)) == p;
}
}
#ifdef CONFIG_SMP
-typedef struct {
+struct migration_req {
struct list_head list;
- task_t *task;
+ struct task_struct *task;
int dest_cpu;
struct completion done;
-} migration_req_t;
+};
/*
* The task's runqueue lock must be held.
* Returns true if you have to wait for migration thread.
*/
-static int migrate_task(task_t *p, int dest_cpu, migration_req_t *req)
+static int
+migrate_task(struct task_struct *p, int dest_cpu, struct migration_req *req)
{
- runqueue_t *rq = task_rq(p);
+ struct rq *rq = task_rq(p);
/*
* If the task is not on a runqueue (and not running), then
req->task = p;
req->dest_cpu = dest_cpu;
list_add(&req->list, &rq->migration_queue);
+
return 1;
}
* smp_call_function() if an IPI is sent by the same process we are
* waiting to become inactive.
*/
-void wait_task_inactive(task_t *p)
+void wait_task_inactive(struct task_struct *p)
{
unsigned long flags;
- runqueue_t *rq;
+ struct rq *rq;
int preempted;
repeat:
* to another CPU then no harm is done and the purpose has been
* achieved as well.
*/
-void kick_process(task_t *p)
+void kick_process(struct task_struct *p)
{
int cpu;
*/
static inline unsigned long source_load(int cpu, int type)
{
- runqueue_t *rq = cpu_rq(cpu);
+ struct rq *rq = cpu_rq(cpu);
if (type == 0)
return rq->raw_weighted_load;
*/
static inline unsigned long target_load(int cpu, int type)
{
- runqueue_t *rq = cpu_rq(cpu);
+ struct rq *rq = cpu_rq(cpu);
if (type == 0)
return rq->raw_weighted_load;
*/
static inline unsigned long cpu_avg_load_per_task(int cpu)
{
- runqueue_t *rq = cpu_rq(cpu);
+ struct rq *rq = cpu_rq(cpu);
unsigned long n = rq->nr_running;
- return n ? rq->raw_weighted_load / n : SCHED_LOAD_SCALE;
+ return n ? rq->raw_weighted_load / n : SCHED_LOAD_SCALE;
}
/*
* Returns the CPU we should wake onto.
*/
#if defined(ARCH_HAS_SCHED_WAKE_IDLE)
-static int wake_idle(int cpu, task_t *p)
+static int wake_idle(int cpu, struct task_struct *p)
{
cpumask_t tmp;
struct sched_domain *sd;
return cpu;
}
#else
-static inline int wake_idle(int cpu, task_t *p)
+static inline int wake_idle(int cpu, struct task_struct *p)
{
return cpu;
}
*
* returns failure only if the task is already active.
*/
-static int try_to_wake_up(task_t *p, unsigned int state, int sync)
+static int try_to_wake_up(struct task_struct *p, unsigned int state, int sync)
{
int cpu, this_cpu, success = 0;
unsigned long flags;
long old_state;
- runqueue_t *rq;
+ struct rq *rq;
#ifdef CONFIG_SMP
- unsigned long load, this_load;
struct sched_domain *sd, *this_sd = NULL;
+ unsigned long load, this_load;
int new_cpu;
#endif
return success;
}
-int fastcall wake_up_process(task_t *p)
+int fastcall wake_up_process(struct task_struct *p)
{
return try_to_wake_up(p, TASK_STOPPED | TASK_TRACED |
TASK_INTERRUPTIBLE | TASK_UNINTERRUPTIBLE, 0);
}
-
EXPORT_SYMBOL(wake_up_process);
-int fastcall wake_up_state(task_t *p, unsigned int state)
+int fastcall wake_up_state(struct task_struct *p, unsigned int state)
{
return try_to_wake_up(p, state, 0);
}
* Perform scheduler related setup for a newly forked process p.
* p is forked by current.
*/
-void fastcall sched_fork(task_t *p, int clone_flags)
+void fastcall sched_fork(struct task_struct *p, int clone_flags)
{
int cpu = get_cpu();
* that must be done for every newly created context, then puts the task
* on the runqueue and wakes it.
*/
-void fastcall wake_up_new_task(task_t *p, unsigned long clone_flags)
+void fastcall wake_up_new_task(struct task_struct *p, unsigned long clone_flags)
{
+ struct rq *rq, *this_rq;
unsigned long flags;
int this_cpu, cpu;
- runqueue_t *rq, *this_rq;
rq = task_rq_lock(p, &flags);
BUG_ON(p->state != TASK_RUNNING);
* artificially, because any timeslice recovered here
* was given away by the parent in the first place.)
*/
-void fastcall sched_exit(task_t *p)
+void fastcall sched_exit(struct task_struct *p)
{
unsigned long flags;
- runqueue_t *rq;
+ struct rq *rq;
/*
* If the child was a (relative-) CPU hog then decrease
* prepare_task_switch sets up locking and calls architecture specific
* hooks.
*/
-static inline void prepare_task_switch(runqueue_t *rq, task_t *next)
+static inline void prepare_task_switch(struct rq *rq, struct task_struct *next)
{
prepare_lock_switch(rq, next);
prepare_arch_switch(next);
* with the lock held can cause deadlocks; see schedule() for
* details.)
*/
-static inline void finish_task_switch(runqueue_t *rq, task_t *prev)
+static inline void finish_task_switch(struct rq *rq, struct task_struct *prev)
__releases(rq->lock)
{
struct mm_struct *mm = rq->prev_mm;
* schedule_tail - first thing a freshly forked thread must call.
* @prev: the thread we just switched away from.
*/
-asmlinkage void schedule_tail(task_t *prev)
+asmlinkage void schedule_tail(struct task_struct *prev)
__releases(rq->lock)
{
- runqueue_t *rq = this_rq();
+ struct rq *rq = this_rq();
+
finish_task_switch(rq, prev);
#ifdef __ARCH_WANT_UNLOCKED_CTXSW
/* In this case, finish_task_switch does not reenable preemption */
* context_switch - switch to the new MM and the new
* thread's register state.
*/
-static inline
-task_t * context_switch(runqueue_t *rq, task_t *prev, task_t *next)
+static inline struct task_struct *
+context_switch(struct rq *rq, struct task_struct *prev,
+ struct task_struct *next)
{
struct mm_struct *mm = next->mm;
struct mm_struct *oldmm = prev->active_mm;
WARN_ON(rq->prev_mm);
rq->prev_mm = oldmm;
}
+ spin_release(&rq->lock.dep_map, 1, _THIS_IP_);
/* Here we just switch the register state and the stack. */
switch_to(prev, next, prev);
#ifdef CONFIG_SMP
+/*
+ * Is this task likely cache-hot:
+ */
+static inline int
+task_hot(struct task_struct *p, unsigned long long now, struct sched_domain *sd)
+{
+ return (long long)(now - p->last_ran) < (long long)sd->cache_hot_time;
+}
+
/*
* double_rq_lock - safely lock two runqueues
*
* Note this does not disable interrupts like task_rq_lock,
* you need to do so manually before calling.
*/
-static void double_rq_lock(runqueue_t *rq1, runqueue_t *rq2)
+static void double_rq_lock(struct rq *rq1, struct rq *rq2)
__acquires(rq1->lock)
__acquires(rq2->lock)
{
* Note this does not restore interrupts like task_rq_unlock,
* you need to do so manually after calling.
*/
-static void double_rq_unlock(runqueue_t *rq1, runqueue_t *rq2)
+static void double_rq_unlock(struct rq *rq1, struct rq *rq2)
__releases(rq1->lock)
__releases(rq2->lock)
{
/*
* double_lock_balance - lock the busiest runqueue, this_rq is locked already.
*/
-static void double_lock_balance(runqueue_t *this_rq, runqueue_t *busiest)
+static void double_lock_balance(struct rq *this_rq, struct rq *busiest)
__releases(this_rq->lock)
__acquires(busiest->lock)
__acquires(this_rq->lock)
* allow dest_cpu, which will force the cpu onto dest_cpu. Then
* the cpu_allowed mask is restored.
*/
-static void sched_migrate_task(task_t *p, int dest_cpu)
+static void sched_migrate_task(struct task_struct *p, int dest_cpu)
{
- migration_req_t req;
- runqueue_t *rq;
+ struct migration_req req;
unsigned long flags;
+ struct rq *rq;
rq = task_rq_lock(p, &flags);
if (!cpu_isset(dest_cpu, p->cpus_allowed)
if (migrate_task(p, dest_cpu, &req)) {
/* Need to wait for migration thread (might exit: take ref). */
struct task_struct *mt = rq->migration_thread;
+
get_task_struct(mt);
task_rq_unlock(rq, &flags);
wake_up_process(mt);
put_task_struct(mt);
wait_for_completion(&req.done);
+
return;
}
out:
* pull_task - move a task from a remote runqueue to the local runqueue.
* Both runqueues must be locked.
*/
-static
-void pull_task(runqueue_t *src_rq, prio_array_t *src_array, task_t *p,
- runqueue_t *this_rq, prio_array_t *this_array, int this_cpu)
+static void pull_task(struct rq *src_rq, struct prio_array *src_array,
+ struct task_struct *p, struct rq *this_rq,
+ struct prio_array *this_array, int this_cpu)
{
dequeue_task(p, src_array);
dec_nr_running(p, src_rq);
* can_migrate_task - may task p from runqueue rq be migrated to this_cpu?
*/
static
-int can_migrate_task(task_t *p, runqueue_t *rq, int this_cpu,
+int can_migrate_task(struct task_struct *p, struct rq *rq, int this_cpu,
struct sched_domain *sd, enum idle_type idle,
int *all_pinned)
{
}
#define rq_best_prio(rq) min((rq)->curr->prio, (rq)->best_expired_prio)
+
/*
* move_tasks tries to move up to max_nr_move tasks and max_load_move weighted
* load from busiest to this_rq, as part of a balancing operation within
*
* Called with both runqueues locked.
*/
-static int move_tasks(runqueue_t *this_rq, int this_cpu, runqueue_t *busiest,
+static int move_tasks(struct rq *this_rq, int this_cpu, struct rq *busiest,
unsigned long max_nr_move, unsigned long max_load_move,
struct sched_domain *sd, enum idle_type idle,
int *all_pinned)
{
- prio_array_t *array, *dst_array;
+ int idx, pulled = 0, pinned = 0, this_best_prio, best_prio,
+ best_prio_seen, skip_for_load;
+ struct prio_array *array, *dst_array;
struct list_head *head, *curr;
- int idx, pulled = 0, pinned = 0, this_best_prio, busiest_best_prio;
- int busiest_best_prio_seen;
- int skip_for_load; /* skip the task based on weighted load issues */
+ struct task_struct *tmp;
long rem_load_move;
- task_t *tmp;
if (max_nr_move == 0 || max_load_move == 0)
goto out;
rem_load_move = max_load_move;
pinned = 1;
this_best_prio = rq_best_prio(this_rq);
- busiest_best_prio = rq_best_prio(busiest);
+ best_prio = rq_best_prio(busiest);
/*
* Enable handling of the case where there is more than one task
* with the best priority. If the current running task is one
- * of those with prio==busiest_best_prio we know it won't be moved
+ * of those with prio==best_prio we know it won't be moved
* and therefore it's safe to override the skip (based on load) of
* any task we find with that prio.
*/
- busiest_best_prio_seen = busiest_best_prio == busiest->curr->prio;
+ best_prio_seen = best_prio == busiest->curr->prio;
/*
* We first consider expired tasks. Those will likely not be
head = array->queue + idx;
curr = head->prev;
skip_queue:
- tmp = list_entry(curr, task_t, run_list);
+ tmp = list_entry(curr, struct task_struct, run_list);
curr = curr->prev;
*/
skip_for_load = tmp->load_weight > rem_load_move;
if (skip_for_load && idx < this_best_prio)
- skip_for_load = !busiest_best_prio_seen && idx == busiest_best_prio;
+ skip_for_load = !best_prio_seen && idx == best_prio;
if (skip_for_load ||
!can_migrate_task(tmp, busiest, this_cpu, sd, idle, &pinned)) {
- busiest_best_prio_seen |= idx == busiest_best_prio;
+
+ best_prio_seen |= idx == best_prio;
if (curr != head)
goto skip_queue;
idx++;
/*
* find_busiest_group finds and returns the busiest CPU group within the
- * domain. It calculates and returns the amount of weighted load which should be
- * moved to restore balance via the imbalance parameter.
+ * domain. It calculates and returns the amount of weighted load which
+ * should be moved to restore balance via the imbalance parameter.
*/
static struct sched_group *
find_busiest_group(struct sched_domain *sd, int this_cpu,
sum_weighted_load = sum_nr_running = avg_load = 0;
for_each_cpu_mask(i, group->cpumask) {
- runqueue_t *rq = cpu_rq(i);
+ struct rq *rq = cpu_rq(i);
if (*sd_idle && !idle_cpu(i))
*sd_idle = 0;
* capacity but still has some space to pick up some load
* from other group and save more power
*/
- if (sum_nr_running <= group_capacity - 1)
+ if (sum_nr_running <= group_capacity - 1) {
if (sum_nr_running > leader_nr_running ||
(sum_nr_running == leader_nr_running &&
first_cpu(group->cpumask) >
group_leader = group;
leader_nr_running = sum_nr_running;
}
-
+ }
group_next:
#endif
group = group->next;
* moved
*/
if (*imbalance < busiest_load_per_task) {
- unsigned long pwr_now, pwr_move;
- unsigned long tmp;
+ unsigned long tmp, pwr_now, pwr_move;
unsigned int imbn;
small_imbalance:
/*
* find_busiest_queue - find the busiest runqueue among the cpus in group.
*/
-static runqueue_t *find_busiest_queue(struct sched_group *group,
- enum idle_type idle, unsigned long imbalance)
+static struct rq *
+find_busiest_queue(struct sched_group *group, enum idle_type idle,
+ unsigned long imbalance)
{
+ struct rq *busiest = NULL, *rq;
unsigned long max_load = 0;
- runqueue_t *busiest = NULL, *rqi;
int i;
for_each_cpu_mask(i, group->cpumask) {
- rqi = cpu_rq(i);
+ rq = cpu_rq(i);
- if (rqi->nr_running == 1 && rqi->raw_weighted_load > imbalance)
+ if (rq->nr_running == 1 && rq->raw_weighted_load > imbalance)
continue;
- if (rqi->raw_weighted_load > max_load) {
- max_load = rqi->raw_weighted_load;
- busiest = rqi;
+ if (rq->raw_weighted_load > max_load) {
+ max_load = rq->raw_weighted_load;
+ busiest = rq;
}
}
*/
#define MAX_PINNED_INTERVAL 512
-#define minus_1_or_zero(n) ((n) > 0 ? (n) - 1 : 0)
+static inline unsigned long minus_1_or_zero(unsigned long n)
+{
+ return n > 0 ? n - 1 : 0;
+}
+
/*
* Check this_cpu to ensure it is balanced within domain. Attempt to move
* tasks if there is an imbalance.
*
* Called with this_rq unlocked.
*/
-static int load_balance(int this_cpu, runqueue_t *this_rq,
+static int load_balance(int this_cpu, struct rq *this_rq,
struct sched_domain *sd, enum idle_type idle)
{
+ int nr_moved, all_pinned = 0, active_balance = 0, sd_idle = 0;
struct sched_group *group;
- runqueue_t *busiest;
unsigned long imbalance;
- int nr_moved, all_pinned = 0;
- int active_balance = 0;
- int sd_idle = 0;
+ struct rq *busiest;
if (idle != NOT_IDLE && sd->flags & SD_SHARE_CPUPOWER &&
!sched_smt_power_savings)
*/
double_rq_lock(this_rq, busiest);
nr_moved = move_tasks(this_rq, this_cpu, busiest,
- minus_1_or_zero(busiest->nr_running),
- imbalance, sd, idle, &all_pinned);
+ minus_1_or_zero(busiest->nr_running),
+ imbalance, sd, idle, &all_pinned);
double_rq_unlock(this_rq, busiest);
/* All tasks on this runqueue were pinned by CPU affinity */
(sd->balance_interval < sd->max_interval))
sd->balance_interval *= 2;
- if (!sd_idle && sd->flags & SD_SHARE_CPUPOWER && !sched_smt_power_savings)
+ if (!sd_idle && sd->flags & SD_SHARE_CPUPOWER &&
+ !sched_smt_power_savings)
return -1;
return 0;
}
* Called from schedule when this_rq is about to become idle (NEWLY_IDLE).
* this_rq is locked.
*/
-static int load_balance_newidle(int this_cpu, runqueue_t *this_rq,
- struct sched_domain *sd)
+static int
+load_balance_newidle(int this_cpu, struct rq *this_rq, struct sched_domain *sd)
{
struct sched_group *group;
- runqueue_t *busiest = NULL;
+ struct rq *busiest = NULL;
unsigned long imbalance;
int nr_moved = 0;
int sd_idle = 0;
out_balanced:
schedstat_inc(sd, lb_balanced[NEWLY_IDLE]);
- if (!sd_idle && sd->flags & SD_SHARE_CPUPOWER && !sched_smt_power_savings)
+ if (!sd_idle && sd->flags & SD_SHARE_CPUPOWER &&
+ !sched_smt_power_savings)
return -1;
sd->nr_balance_failed = 0;
+
return 0;
}
* idle_balance is called by schedule() if this_cpu is about to become
* idle. Attempts to pull tasks from other CPUs.
*/
-static void idle_balance(int this_cpu, runqueue_t *this_rq)
+static void idle_balance(int this_cpu, struct rq *this_rq)
{
struct sched_domain *sd;
for_each_domain(this_cpu, sd) {
if (sd->flags & SD_BALANCE_NEWIDLE) {
- if (load_balance_newidle(this_cpu, this_rq, sd)) {
- /* We've pulled tasks over so stop searching */
+ /* If we've pulled tasks over stop searching: */
+ if (load_balance_newidle(this_cpu, this_rq, sd))
break;
- }
}
}
}
*
* Called with busiest_rq locked.
*/
-static void active_load_balance(runqueue_t *busiest_rq, int busiest_cpu)
+static void active_load_balance(struct rq *busiest_rq, int busiest_cpu)
{
- struct sched_domain *sd;
- runqueue_t *target_rq;
int target_cpu = busiest_rq->push_cpu;
+ struct sched_domain *sd;
+ struct rq *target_rq;
+ /* Is there any task to move? */
if (busiest_rq->nr_running <= 1)
- /* no task to move */
return;
target_rq = cpu_rq(target_cpu);
/* Search for an sd spanning us and the target CPU. */
for_each_domain(target_cpu, sd) {
if ((sd->flags & SD_LOAD_BALANCE) &&
- cpu_isset(busiest_cpu, sd->span))
+ cpu_isset(busiest_cpu, sd->span))
break;
}
- if (unlikely(sd == NULL))
- goto out;
-
- schedstat_inc(sd, alb_cnt);
+ if (likely(sd)) {
+ schedstat_inc(sd, alb_cnt);
- if (move_tasks(target_rq, target_cpu, busiest_rq, 1,
- RTPRIO_TO_LOAD_WEIGHT(100), sd, SCHED_IDLE, NULL))
- schedstat_inc(sd, alb_pushed);
- else
- schedstat_inc(sd, alb_failed);
-out:
+ if (move_tasks(target_rq, target_cpu, busiest_rq, 1,
+ RTPRIO_TO_LOAD_WEIGHT(100), sd, SCHED_IDLE,
+ NULL))
+ schedstat_inc(sd, alb_pushed);
+ else
+ schedstat_inc(sd, alb_failed);
+ }
spin_unlock(&target_rq->lock);
}
* Balancing parameters are set up in arch_init_sched_domains.
*/
-/* Don't have all balancing operations going off at once */
-#define CPU_OFFSET(cpu) (HZ * cpu / NR_CPUS)
+/* Don't have all balancing operations going off at once: */
+static inline unsigned long cpu_offset(int cpu)
+{
+ return jiffies + cpu * HZ / NR_CPUS;
+}
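As a rough illustration of the staggering (values assumed, not taken from this change): with HZ=1000 and NR_CPUS=4, cpu_offset() shifts the balancing clock of CPUs 0..3 by 0, 250, 500 and 750 ticks respectively, so their rebalance intervals do not all expire on the same jiffy.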
-static void rebalance_tick(int this_cpu, runqueue_t *this_rq,
- enum idle_type idle)
+static void
+rebalance_tick(int this_cpu, struct rq *this_rq, enum idle_type idle)
{
- unsigned long old_load, this_load;
- unsigned long j = jiffies + CPU_OFFSET(this_cpu);
+ unsigned long this_load, interval, j = cpu_offset(this_cpu);
struct sched_domain *sd;
- int i;
+ int i, scale;
this_load = this_rq->raw_weighted_load;
- /* Update our load */
- for (i = 0; i < 3; i++) {
- unsigned long new_load = this_load;
- int scale = 1 << i;
+
+ /* Update our load: */
+ for (i = 0, scale = 1; i < 3; i++, scale <<= 1) {
+ unsigned long old_load, new_load;
+
old_load = this_rq->cpu_load[i];
+ new_load = this_load;
/*
* Round up the averaging division if load is increasing. This
* prevents us from getting stuck on 9 if the load is 10, for
}
for_each_domain(this_cpu, sd) {
- unsigned long interval;
-
if (!(sd->flags & SD_LOAD_BALANCE))
continue;
/*
* on UP we do not need to balance between CPUs:
*/
-static inline void rebalance_tick(int cpu, runqueue_t *rq, enum idle_type idle)
+static inline void rebalance_tick(int cpu, struct rq *rq, enum idle_type idle)
{
}
-static inline void idle_balance(int cpu, runqueue_t *rq)
+static inline void idle_balance(int cpu, struct rq *rq)
{
}
#endif
-static inline int wake_priority_sleeper(runqueue_t *rq)
+static inline int wake_priority_sleeper(struct rq *rq)
{
int ret = 0;
+
#ifdef CONFIG_SCHED_SMT
spin_lock(&rq->lock);
/*
* This is called on clock ticks and on context switches.
* Bank in p->sched_time the ns elapsed since the last tick or switch.
*/
-static inline void update_cpu_clock(task_t *p, runqueue_t *rq,
- unsigned long long now)
+static inline void
+update_cpu_clock(struct task_struct *p, struct rq *rq, unsigned long long now)
{
- unsigned long long last = max(p->timestamp, rq->timestamp_last_tick);
- p->sched_time += now - last;
+ p->sched_time += now - max(p->timestamp, rq->timestamp_last_tick);
}
/*
* Return current->sched_time plus any more ns on the sched_clock
* that have not yet been banked.
*/
-unsigned long long current_sched_time(const task_t *tsk)
+unsigned long long current_sched_time(const struct task_struct *p)
{
unsigned long long ns;
unsigned long flags;
+
local_irq_save(flags);
- ns = max(tsk->timestamp, task_rq(tsk)->timestamp_last_tick);
- ns = tsk->sched_time + (sched_clock() - ns);
+ ns = max(p->timestamp, task_rq(p)->timestamp_last_tick);
+ ns = p->sched_time + sched_clock() - ns;
local_irq_restore(flags);
+
return ns;
}
* increasing number of running tasks. We also ignore the interactivity
* if a better static_prio task has expired:
*/
-#define EXPIRED_STARVING(rq) \
- ((STARVATION_LIMIT && ((rq)->expired_timestamp && \
- (jiffies - (rq)->expired_timestamp >= \
- STARVATION_LIMIT * ((rq)->nr_running) + 1))) || \
- ((rq)->curr->static_prio > (rq)->best_expired_prio))
+static inline int expired_starving(struct rq *rq)
+{
+ if (rq->curr->static_prio > rq->best_expired_prio)
+ return 1;
+ if (!STARVATION_LIMIT || !rq->expired_timestamp)
+ return 0;
+ if (jiffies - rq->expired_timestamp > STARVATION_LIMIT * rq->nr_running)
+ return 1;
+ return 0;
+}
/*
* Account user cpu time to a process.
cputime_t cputime)
{
struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat;
- runqueue_t *rq = this_rq();
+ struct rq *rq = this_rq();
cputime64_t tmp;
p->stime = cputime_add(p->stime, cputime);
{
struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat;
cputime64_t tmp = cputime_to_cputime64(steal);
- runqueue_t *rq = this_rq();
+ struct rq *rq = this_rq();
if (p == rq->idle) {
p->stime = cputime_add(p->stime, steal);
*/
void scheduler_tick(void)
{
- int cpu = smp_processor_id();
- runqueue_t *rq = this_rq();
- task_t *p = current;
unsigned long long now = sched_clock();
+ struct task_struct *p = current;
+ int cpu = smp_processor_id();
+ struct rq *rq = cpu_rq(cpu);
update_cpu_clock(p, rq, now);
if (!rq->expired_timestamp)
rq->expired_timestamp = jiffies;
- if (!TASK_INTERACTIVE(p) || EXPIRED_STARVING(rq)) {
+ if (!TASK_INTERACTIVE(p) || expired_starving(rq)) {
enqueue_task(p, rq->expired);
if (p->static_prio < rq->best_expired_prio)
rq->best_expired_prio = p->static_prio;
}
#ifdef CONFIG_SCHED_SMT
-static inline void wakeup_busy_runqueue(runqueue_t *rq)
+static inline void wakeup_busy_runqueue(struct rq *rq)
{
/* If an SMT runqueue is sleeping due to priority reasons wake it up */
if (rq->curr == rq->idle && rq->nr_running)
return;
for_each_cpu_mask(i, sd->span) {
- runqueue_t *smt_rq = cpu_rq(i);
+ struct rq *smt_rq = cpu_rq(i);
if (i == this_cpu)
continue;
* utilize, if another task runs on a sibling. This models the
* slowdown effect of other tasks running on siblings:
*/
-static inline unsigned long smt_slice(task_t *p, struct sched_domain *sd)
+static inline unsigned long
+smt_slice(struct task_struct *p, struct sched_domain *sd)
{
return p->time_slice * (100 - sd->per_cpu_gain) / 100;
}
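For example, with a per_cpu_gain of 25 (an assumed value, used only to make the arithmetic concrete), a task holding a 100-tick timeslice on a sibling is modelled as keeping that CPU busy for only 75 ticks.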
* acquire their lock. As we only trylock the normal locking order does not
* need to be obeyed.
*/
-static int dependent_sleeper(int this_cpu, runqueue_t *this_rq, task_t *p)
+static int
+dependent_sleeper(int this_cpu, struct rq *this_rq, struct task_struct *p)
{
struct sched_domain *tmp, *sd = NULL;
int ret = 0, i;
return 0;
for_each_cpu_mask(i, sd->span) {
- runqueue_t *smt_rq;
- task_t *smt_curr;
+ struct task_struct *smt_curr;
+ struct rq *smt_rq;
if (i == this_cpu)
continue;
static inline void wake_sleeping_dependent(int this_cpu)
{
}
-
-static inline int dependent_sleeper(int this_cpu, runqueue_t *this_rq,
- task_t *p)
+static inline int
+dependent_sleeper(int this_cpu, struct rq *this_rq, struct task_struct *p)
{
return 0;
}
/*
* Underflow?
*/
- BUG_ON((preempt_count() < 0));
+ if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0)))
+ return;
preempt_count() += val;
/*
* Spinlock count overflowing soon?
*/
- BUG_ON((preempt_count() & PREEMPT_MASK) >= PREEMPT_MASK-10);
+ DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >= PREEMPT_MASK-10);
}
EXPORT_SYMBOL(add_preempt_count);
/*
* Underflow?
*/
- BUG_ON(val > preempt_count());
+ if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
+ return;
/*
* Is the spinlock portion underflowing?
*/
- BUG_ON((val < PREEMPT_MASK) && !(preempt_count() & PREEMPT_MASK));
+ if (DEBUG_LOCKS_WARN_ON((val < PREEMPT_MASK) &&
+ !(preempt_count() & PREEMPT_MASK)))
+ return;
+
preempt_count() -= val;
}
EXPORT_SYMBOL(sub_preempt_count);
*/
asmlinkage void __sched schedule(void)
{
- long *switch_count;
- task_t *prev, *next;
- runqueue_t *rq;
- prio_array_t *array;
+ struct task_struct *prev, *next;
+ struct prio_array *array;
struct list_head *queue;
unsigned long long now;
unsigned long run_time;
int cpu, idx, new_prio;
+ long *switch_count;
+ struct rq *rq;
/*
* Test if we are atomic. Since do_exit() needs to call into
idx = sched_find_first_bit(array->bitmap);
queue = array->queue + idx;
- next = list_entry(queue->next, task_t, run_list);
+ next = list_entry(queue->next, struct task_struct, run_list);
if (!rt_task(next) && interactive_sleep(next->sleep_type)) {
unsigned long long delta = now - next->timestamp;
if (unlikely(test_thread_flag(TIF_NEED_RESCHED)))
goto need_resched;
}
-
EXPORT_SYMBOL(schedule);
#ifdef CONFIG_PREEMPT
if (unlikely(test_thread_flag(TIF_NEED_RESCHED)))
goto need_resched;
}
-
EXPORT_SYMBOL(preempt_schedule);
/*
int default_wake_function(wait_queue_t *curr, unsigned mode, int sync,
void *key)
{
- task_t *p = curr->private;
- return try_to_wake_up(p, mode, sync);
+ return try_to_wake_up(curr->private, mode, sync);
}
-
EXPORT_SYMBOL(default_wake_function);
/*
struct list_head *tmp, *next;
list_for_each_safe(tmp, next, &q->task_list) {
- wait_queue_t *curr;
- unsigned flags;
- curr = list_entry(tmp, wait_queue_t, task_list);
- flags = curr->flags;
+ wait_queue_t *curr = list_entry(tmp, wait_queue_t, task_list);
+ unsigned flags = curr->flags;
+
if (curr->func(curr, mode, sync, key) &&
- (flags & WQ_FLAG_EXCLUSIVE) &&
- !--nr_exclusive)
+ (flags & WQ_FLAG_EXCLUSIVE) && !--nr_exclusive)
break;
}
}
__wake_up_common(q, mode, nr_exclusive, 0, key);
spin_unlock_irqrestore(&q->lock, flags);
}
-
EXPORT_SYMBOL(__wake_up);
/*
void fastcall __sched wait_for_completion(struct completion *x)
{
might_sleep();
+
spin_lock_irq(&x->wait.lock);
if (!x->done) {
DECLARE_WAITQUEUE(wait, current);
schedule();
SLEEP_ON_TAIL
}
-
EXPORT_SYMBOL(interruptible_sleep_on);
long fastcall __sched
return timeout;
}
-
EXPORT_SYMBOL(interruptible_sleep_on_timeout);
void fastcall __sched sleep_on(wait_queue_head_t *q)
schedule();
SLEEP_ON_TAIL
}
-
EXPORT_SYMBOL(sleep_on);
long fastcall __sched sleep_on_timeout(wait_queue_head_t *q, long timeout)
*
* Used by the rt_mutex code to implement priority inheritance logic.
*/
-void rt_mutex_setprio(task_t *p, int prio)
+void rt_mutex_setprio(struct task_struct *p, int prio)
{
+ struct prio_array *array;
unsigned long flags;
- prio_array_t *array;
- runqueue_t *rq;
+ struct rq *rq;
int oldprio;
BUG_ON(prio < 0 || prio > MAX_PRIO);
#endif
-void set_user_nice(task_t *p, long nice)
+void set_user_nice(struct task_struct *p, long nice)
{
- unsigned long flags;
- prio_array_t *array;
- runqueue_t *rq;
+ struct prio_array *array;
int old_prio, delta;
+ unsigned long flags;
+ struct rq *rq;
if (TASK_NICE(p) == nice || nice < -20 || nice > 19)
return;
* @p: task
* @nice: nice value
*/
-int can_nice(const task_t *p, const int nice)
+int can_nice(const struct task_struct *p, const int nice)
{
/* convert nice value [19,-20] to rlimit style value [1,40] */
int nice_rlim = 20 - nice;
+
return (nice_rlim <= p->signal->rlim[RLIMIT_NICE].rlim_cur ||
capable(CAP_SYS_NICE));
}
*/
asmlinkage long sys_nice(int increment)
{
- int retval;
- long nice;
+ long nice, retval;
/*
* Setpriority might change our priority at the same moment.
* RT tasks are offset by -200. Normal tasks are centered
* around 0, value goes from -16 to +15.
*/
-int task_prio(const task_t *p)
+int task_prio(const struct task_struct *p)
{
return p->prio - MAX_RT_PRIO;
}
* task_nice - return the nice value of a given task.
* @p: the task in question.
*/
-int task_nice(const task_t *p)
+int task_nice(const struct task_struct *p)
{
return TASK_NICE(p);
}
* idle_task - return the idle task for a given cpu.
* @cpu: the processor in question.
*/
-task_t *idle_task(int cpu)
+struct task_struct *idle_task(int cpu)
{
return cpu_rq(cpu)->idle;
}
* find_process_by_pid - find a process with a matching PID value.
* @pid: the pid in question.
*/
-static inline task_t *find_process_by_pid(pid_t pid)
+static inline struct task_struct *find_process_by_pid(pid_t pid)
{
return pid ? find_task_by_pid(pid) : current;
}
static void __setscheduler(struct task_struct *p, int policy, int prio)
{
BUG_ON(p->array);
+
p->policy = policy;
p->rt_priority = prio;
p->normal_prio = normal_prio(p);
int sched_setscheduler(struct task_struct *p, int policy,
struct sched_param *param)
{
- int retval;
- int oldprio, oldpolicy = -1;
- prio_array_t *array;
+ int retval, oldprio, oldpolicy = -1;
+ struct prio_array *array;
unsigned long flags;
- runqueue_t *rq;
+ struct rq *rq;
/* may grab non-irq protected spin_locks */
BUG_ON(in_interrupt());
static int
do_sched_setscheduler(pid_t pid, int policy, struct sched_param __user *param)
{
- int retval;
struct sched_param lparam;
struct task_struct *p;
+ int retval;
if (!param || pid < 0)
return -EINVAL;
read_unlock_irq(&tasklist_lock);
retval = sched_setscheduler(p, policy, &lparam);
put_task_struct(p);
+
return retval;
}
*/
asmlinkage long sys_sched_getscheduler(pid_t pid)
{
+ struct task_struct *p;
int retval = -EINVAL;
- task_t *p;
if (pid < 0)
goto out_nounlock;
asmlinkage long sys_sched_getparam(pid_t pid, struct sched_param __user *param)
{
struct sched_param lp;
+ struct task_struct *p;
int retval = -EINVAL;
- task_t *p;
if (!param || pid < 0)
goto out_nounlock;
long sched_setaffinity(pid_t pid, cpumask_t new_mask)
{
- task_t *p;
- int retval;
cpumask_t cpus_allowed;
+ struct task_struct *p;
+ int retval;
lock_cpu_hotplug();
read_lock(&tasklist_lock);
long sched_getaffinity(pid_t pid, cpumask_t *mask)
{
+ struct task_struct *p;
int retval;
- task_t *p;
lock_cpu_hotplug();
read_lock(&tasklist_lock);
*/
asmlinkage long sys_sched_yield(void)
{
- runqueue_t *rq = this_rq_lock();
- prio_array_t *array = current->array;
- prio_array_t *target = rq->expired;
+ struct rq *rq = this_rq_lock();
+ struct prio_array *array = current->array, *target = rq->expired;
schedstat_inc(rq, yld_cnt);
/*
* no need to preempt or enable interrupts:
*/
__release(rq->lock);
+ spin_release(&rq->lock.dep_map, 1, _THIS_IP_);
_raw_spin_unlock(&rq->lock);
preempt_enable_no_resched();
spin_lock(lock);
}
if (need_resched() && __resched_legal()) {
+ spin_release(&lock->dep_map, 1, _THIS_IP_);
_raw_spin_unlock(lock);
preempt_enable_no_resched();
__cond_resched();
BUG_ON(!in_softirq());
if (need_resched() && __resched_legal()) {
- __local_bh_enable();
+ raw_local_irq_disable();
+ _local_bh_enable();
+ raw_local_irq_enable();
__cond_resched();
local_bh_disable();
return 1;
set_current_state(TASK_RUNNING);
sys_sched_yield();
}
-
EXPORT_SYMBOL(yield);
/*
*/
void __sched io_schedule(void)
{
- struct runqueue *rq = &__raw_get_cpu_var(runqueues);
+ struct rq *rq = &__raw_get_cpu_var(runqueues);
atomic_inc(&rq->nr_iowait);
schedule();
atomic_dec(&rq->nr_iowait);
}
-
EXPORT_SYMBOL(io_schedule);
long __sched io_schedule_timeout(long timeout)
{
- struct runqueue *rq = &__raw_get_cpu_var(runqueues);
+ struct rq *rq = &__raw_get_cpu_var(runqueues);
long ret;
atomic_inc(&rq->nr_iowait);
asmlinkage
long sys_sched_rr_get_interval(pid_t pid, struct timespec __user *interval)
{
+ struct task_struct *p;
int retval = -EINVAL;
struct timespec t;
- task_t *p;
if (pid < 0)
goto out_nounlock;
static inline struct task_struct *eldest_child(struct task_struct *p)
{
- if (list_empty(&p->children)) return NULL;
+ if (list_empty(&p->children))
+ return NULL;
return list_entry(p->children.next,struct task_struct,sibling);
}
static inline struct task_struct *older_sibling(struct task_struct *p)
{
- if (p->sibling.prev==&p->parent->children) return NULL;
+ if (p->sibling.prev==&p->parent->children)
+ return NULL;
return list_entry(p->sibling.prev,struct task_struct,sibling);
}
static inline struct task_struct *younger_sibling(struct task_struct *p)
{
- if (p->sibling.next==&p->parent->children) return NULL;
+ if (p->sibling.next==&p->parent->children)
+ return NULL;
return list_entry(p->sibling.next,struct task_struct,sibling);
}
-static void show_task(task_t *p)
+static const char *stat_nam[] = { "R", "S", "D", "T", "t", "Z", "X" };
+
+static void show_task(struct task_struct *p)
{
- task_t *relative;
- unsigned state;
+ struct task_struct *relative;
unsigned long free = 0;
- static const char *stat_nam[] = { "R", "S", "D", "T", "t", "Z", "X" };
+ unsigned state;
printk("%-13.13s ", p->comm);
state = p->state ? __ffs(p->state) + 1 : 0;
void show_state(void)
{
- task_t *g, *p;
+ struct task_struct *g, *p;
#if (BITS_PER_LONG == 32)
printk("\n"
} while_each_thread(g, p);
read_unlock(&tasklist_lock);
- mutex_debug_show_all_locks();
+ debug_show_all_locks();
}
/**
* NOTE: this function does not set the idle thread's NEED_RESCHED
* flag, to make booting more robust.
*/
-void __devinit init_idle(task_t *idle, int cpu)
+void __devinit init_idle(struct task_struct *idle, int cpu)
{
- runqueue_t *rq = cpu_rq(cpu);
+ struct rq *rq = cpu_rq(cpu);
unsigned long flags;
idle->timestamp = sched_clock();
/*
* This is how migration works:
*
- * 1) we queue a migration_req_t structure in the source CPU's
+ * 1) we queue a struct migration_req structure in the source CPU's
* runqueue and wake up that CPU's migration thread.
* 2) we down() the locked semaphore => thread blocks.
* 3) migration thread wakes up (implicitly it forces the migrated
* task must not exit() & deallocate itself prematurely. The
* call is not atomic; no spinlocks may be held.
*/
-int set_cpus_allowed(task_t *p, cpumask_t new_mask)
+int set_cpus_allowed(struct task_struct *p, cpumask_t new_mask)
{
+ struct migration_req req;
unsigned long flags;
+ struct rq *rq;
int ret = 0;
- migration_req_t req;
- runqueue_t *rq;
rq = task_rq_lock(p, &flags);
if (!cpus_intersects(new_mask, cpu_online_map)) {
}
out:
task_rq_unlock(rq, &flags);
+
return ret;
}
-
EXPORT_SYMBOL_GPL(set_cpus_allowed);
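The simplest caller of this interface appears later in this series, where stop_machine() pins its helper thread to a single CPU:

	set_cpus_allowed(current, cpumask_of_cpu(cpu));

A negative return value corresponds to the error path guarded by the cpus_intersects() check above, i.e. none of the requested CPUs is online.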
/*
*/
static int __migrate_task(struct task_struct *p, int src_cpu, int dest_cpu)
{
- runqueue_t *rq_dest, *rq_src;
+ struct rq *rq_dest, *rq_src;
int ret = 0;
if (unlikely(cpu_is_offline(dest_cpu)))
*/
static int migration_thread(void *data)
{
- runqueue_t *rq;
int cpu = (long)data;
+ struct rq *rq;
rq = cpu_rq(cpu);
BUG_ON(rq->migration_thread != current);
set_current_state(TASK_INTERRUPTIBLE);
while (!kthread_should_stop()) {
+ struct migration_req *req;
struct list_head *head;
- migration_req_t *req;
try_to_freeze();
set_current_state(TASK_INTERRUPTIBLE);
continue;
}
- req = list_entry(head->next, migration_req_t, list);
+ req = list_entry(head->next, struct migration_req, list);
list_del_init(head->next);
spin_unlock(&rq->lock);
#ifdef CONFIG_HOTPLUG_CPU
/* Figure out where task on dead CPU should go, use force if necessary. */
-static void move_task_off_dead_cpu(int dead_cpu, struct task_struct *tsk)
+static void move_task_off_dead_cpu(int dead_cpu, struct task_struct *p)
{
- runqueue_t *rq;
unsigned long flags;
- int dest_cpu;
cpumask_t mask;
+ struct rq *rq;
+ int dest_cpu;
restart:
/* On same node? */
mask = node_to_cpumask(cpu_to_node(dead_cpu));
- cpus_and(mask, mask, tsk->cpus_allowed);
+ cpus_and(mask, mask, p->cpus_allowed);
dest_cpu = any_online_cpu(mask);
/* On any allowed CPU? */
if (dest_cpu == NR_CPUS)
- dest_cpu = any_online_cpu(tsk->cpus_allowed);
+ dest_cpu = any_online_cpu(p->cpus_allowed);
/* No more Mr. Nice Guy. */
if (dest_cpu == NR_CPUS) {
- rq = task_rq_lock(tsk, &flags);
- cpus_setall(tsk->cpus_allowed);
- dest_cpu = any_online_cpu(tsk->cpus_allowed);
+ rq = task_rq_lock(p, &flags);
+ cpus_setall(p->cpus_allowed);
+ dest_cpu = any_online_cpu(p->cpus_allowed);
task_rq_unlock(rq, &flags);
/*
* kernel threads (both mm NULL), since they never
* leave kernel.
*/
- if (tsk->mm && printk_ratelimit())
+ if (p->mm && printk_ratelimit())
printk(KERN_INFO "process %d (%s) no "
"longer affine to cpu%d\n",
- tsk->pid, tsk->comm, dead_cpu);
+ p->pid, p->comm, dead_cpu);
}
- if (!__migrate_task(tsk, dead_cpu, dest_cpu))
+ if (!__migrate_task(p, dead_cpu, dest_cpu))
goto restart;
}
* their home CPUs. So we just add the counter to another CPU's counter,
* to keep the global sum constant after CPU-down:
*/
-static void migrate_nr_uninterruptible(runqueue_t *rq_src)
+static void migrate_nr_uninterruptible(struct rq *rq_src)
{
- runqueue_t *rq_dest = cpu_rq(any_online_cpu(CPU_MASK_ALL));
+ struct rq *rq_dest = cpu_rq(any_online_cpu(CPU_MASK_ALL));
unsigned long flags;
local_irq_save(flags);
/* Run through task list and migrate tasks from the dead cpu. */
static void migrate_live_tasks(int src_cpu)
{
- struct task_struct *tsk, *t;
+ struct task_struct *p, *t;
write_lock_irq(&tasklist_lock);
- do_each_thread(t, tsk) {
- if (tsk == current)
+ do_each_thread(t, p) {
+ if (p == current)
continue;
- if (task_cpu(tsk) == src_cpu)
- move_task_off_dead_cpu(src_cpu, tsk);
- } while_each_thread(t, tsk);
+ if (task_cpu(p) == src_cpu)
+ move_task_off_dead_cpu(src_cpu, p);
+ } while_each_thread(t, p);
write_unlock_irq(&tasklist_lock);
}
/* Schedules idle task to be the next runnable task on current CPU.
* It does so by boosting its priority to highest possible and adding it to
- * the _front_ of runqueue. Used by CPU offline code.
+ * the _front_ of the runqueue. Used by CPU offline code.
*/
void sched_idle_next(void)
{
- int cpu = smp_processor_id();
- runqueue_t *rq = this_rq();
+ int this_cpu = smp_processor_id();
+ struct rq *rq = cpu_rq(this_cpu);
struct task_struct *p = rq->idle;
unsigned long flags;
/* cpu has to be offline */
- BUG_ON(cpu_online(cpu));
+ BUG_ON(cpu_online(this_cpu));
- /* Strictly not necessary since rest of the CPUs are stopped by now
- * and interrupts disabled on current cpu.
+ /*
+ * Strictly not necessary since rest of the CPUs are stopped by now
+ * and interrupts disabled on the current cpu.
*/
spin_lock_irqsave(&rq->lock, flags);
__setscheduler(p, SCHED_FIFO, MAX_RT_PRIO-1);
- /* Add idle task to _front_ of it's priority queue */
+
+ /* Add idle task to the _front_ of its priority queue: */
__activate_idle_task(p, rq);
spin_unlock_irqrestore(&rq->lock, flags);
}
-/* Ensures that the idle task is using init_mm right before its cpu goes
+/*
+ * Ensures that the idle task is using init_mm right before its cpu goes
* offline.
*/
void idle_task_exit(void)
mmdrop(mm);
}
-static void migrate_dead(unsigned int dead_cpu, task_t *tsk)
+static void migrate_dead(unsigned int dead_cpu, struct task_struct *p)
{
- struct runqueue *rq = cpu_rq(dead_cpu);
+ struct rq *rq = cpu_rq(dead_cpu);
/* Must be exiting, otherwise would be on tasklist. */
- BUG_ON(tsk->exit_state != EXIT_ZOMBIE && tsk->exit_state != EXIT_DEAD);
+ BUG_ON(p->exit_state != EXIT_ZOMBIE && p->exit_state != EXIT_DEAD);
/* Cannot have done final schedule yet: would have vanished. */
- BUG_ON(tsk->flags & PF_DEAD);
+ BUG_ON(p->flags & PF_DEAD);
- get_task_struct(tsk);
+ get_task_struct(p);
/*
* Drop lock around migration; if someone else moves it,
* fine.
*/
spin_unlock_irq(&rq->lock);
- move_task_off_dead_cpu(dead_cpu, tsk);
+ move_task_off_dead_cpu(dead_cpu, p);
spin_lock_irq(&rq->lock);
- put_task_struct(tsk);
+ put_task_struct(p);
}
/* release_task() removes task from tasklist, so we won't find dead tasks. */
static void migrate_dead_tasks(unsigned int dead_cpu)
{
- unsigned arr, i;
- struct runqueue *rq = cpu_rq(dead_cpu);
+ struct rq *rq = cpu_rq(dead_cpu);
+ unsigned int arr, i;
for (arr = 0; arr < 2; arr++) {
for (i = 0; i < MAX_PRIO; i++) {
struct list_head *list = &rq->arrays[arr].queue[i];
+
while (!list_empty(list))
- migrate_dead(dead_cpu,
- list_entry(list->next, task_t,
- run_list));
+ migrate_dead(dead_cpu, list_entry(list->next,
+ struct task_struct, run_list));
}
}
}
* migration_call - callback that gets triggered when a CPU is added.
* Here we can start up the necessary migration thread for the new CPU.
*/
-static int __cpuinit migration_call(struct notifier_block *nfb,
- unsigned long action,
- void *hcpu)
+static int __cpuinit
+migration_call(struct notifier_block *nfb, unsigned long action, void *hcpu)
{
- int cpu = (long)hcpu;
struct task_struct *p;
- struct runqueue *rq;
+ int cpu = (long)hcpu;
unsigned long flags;
+ struct rq *rq;
switch (action) {
case CPU_UP_PREPARE:
task_rq_unlock(rq, &flags);
cpu_rq(cpu)->migration_thread = p;
break;
+
case CPU_ONLINE:
		/* Strictly unnecessary, as first user will wake it. */
wake_up_process(cpu_rq(cpu)->migration_thread);
break;
+
#ifdef CONFIG_HOTPLUG_CPU
case CPU_UP_CANCELED:
if (!cpu_rq(cpu)->migration_thread)
kthread_stop(cpu_rq(cpu)->migration_thread);
cpu_rq(cpu)->migration_thread = NULL;
break;
+
case CPU_DEAD:
migrate_live_tasks(cpu);
rq = cpu_rq(cpu);
* the requestors. */
spin_lock_irq(&rq->lock);
while (!list_empty(&rq->migration_queue)) {
- migration_req_t *req;
+ struct migration_req *req;
+
req = list_entry(rq->migration_queue.next,
- migration_req_t, list);
+ struct migration_req, list);
list_del_init(&req->list);
complete(&req->done);
}
int __init migration_init(void)
{
void *cpu = (void *)(long)smp_processor_id();
- /* Start one for boot CPU. */
+
+ /* Start one for the boot CPU: */
migration_call(&migration_notifier, CPU_UP_PREPARE, cpu);
migration_call(&migration_notifier, CPU_ONLINE, cpu);
register_cpu_notifier(&migration_notifier);
+
return 0;
}
#endif
} while (sd);
}
#else
-#define sched_domain_debug(sd, cpu) {}
+# define sched_domain_debug(sd, cpu) do { } while (0)
#endif
static int sd_degenerate(struct sched_domain *sd)
return 1;
}
-static int sd_parent_degenerate(struct sched_domain *sd,
- struct sched_domain *parent)
+static int
+sd_parent_degenerate(struct sched_domain *sd, struct sched_domain *parent)
{
unsigned long cflags = sd->flags, pflags = parent->flags;
*/
static void cpu_attach_domain(struct sched_domain *sd, int cpu)
{
- runqueue_t *rq = cpu_rq(cpu);
+ struct rq *rq = cpu_rq(cpu);
struct sched_domain *tmp;
/* Remove the sched domains which do not contribute to scheduling. */
/*
* Measure the cache-cost of one task migration. Returns in units of nsec.
*/
-static unsigned long long measure_one(void *cache, unsigned long size,
- int source, int target)
+static unsigned long long
+measure_one(void *cache, unsigned long size, int source, int target)
{
cpumask_t mask, saved_mask;
unsigned long long t0, t1, t2, t3, cost;
*/
static cpumask_t sched_domain_node_span(int node)
{
- int i;
- cpumask_t span, nodemask;
DECLARE_BITMAP(used_nodes, MAX_NUMNODES);
+ cpumask_t span, nodemask;
+ int i;
cpus_clear(span);
bitmap_zero(used_nodes, MAX_NUMNODES);
for (i = 1; i < SD_NODES_PER_DOMAIN; i++) {
int next_node = find_next_best_node(node, used_nodes);
+
nodemask = node_to_cpumask(next_node);
cpus_or(span, span, nodemask);
}
#endif
int sched_smt_power_savings = 0, sched_mc_power_savings = 0;
+
/*
- * At the moment, CONFIG_SCHED_SMT is never defined, but leave it in so we
- * can switch it on easily if needed.
+ * SMT sched-domains:
*/
#ifdef CONFIG_SCHED_SMT
static DEFINE_PER_CPU(struct sched_domain, cpu_domains);
static struct sched_group sched_group_cpus[NR_CPUS];
+
static int cpu_to_cpu_group(int cpu)
{
return cpu;
}
#endif
+/*
+ * multi-core sched-domains:
+ */
#ifdef CONFIG_SCHED_MC
static DEFINE_PER_CPU(struct sched_domain, core_domains);
static struct sched_group *sched_group_core_bycpu[NR_CPUS];
static DEFINE_PER_CPU(struct sched_domain, phys_domains);
static struct sched_group *sched_group_phys_bycpu[NR_CPUS];
+
static int cpu_to_phys_group(int cpu)
{
-#if defined(CONFIG_SCHED_MC)
+#ifdef CONFIG_SCHED_MC
cpumask_t mask = cpu_coregroup_map(cpu);
return first_cpu(mask);
#elif defined(CONFIG_SCHED_SMT)
int sched_create_sysfs_power_savings_entries(struct sysdev_class *cls)
{
int err = 0;
+
#ifdef CONFIG_SCHED_SMT
if (smt_capable())
err = sysfs_create_file(&cls->kset.kobj,
{
return sprintf(page, "%u\n", sched_mc_power_savings);
}
-static ssize_t sched_mc_power_savings_store(struct sys_device *dev, const char *buf, size_t count)
+static ssize_t sched_mc_power_savings_store(struct sys_device *dev,
+ const char *buf, size_t count)
{
return sched_power_savings_store(buf, count, 0);
}
{
return sprintf(page, "%u\n", sched_smt_power_savings);
}
-static ssize_t sched_smt_power_savings_store(struct sys_device *dev, const char *buf, size_t count)
+static ssize_t sched_smt_power_savings_store(struct sys_device *dev,
+ const char *buf, size_t count)
{
return sched_power_savings_store(buf, count, 1);
}
{
/* Linker adds these: start and end of __sched functions */
extern char __sched_text_start[], __sched_text_end[];
+
return in_lock_functions(addr) ||
(addr >= (unsigned long)__sched_text_start
&& addr < (unsigned long)__sched_text_end);
void __init sched_init(void)
{
- runqueue_t *rq;
int i, j, k;
for_each_possible_cpu(i) {
- prio_array_t *array;
+ struct prio_array *array;
+ struct rq *rq;
rq = cpu_rq(i);
spin_lock_init(&rq->lock);
+ lockdep_set_class(&rq->lock, &rq->rq_lock_key);
rq->nr_running = 0;
rq->active = rq->arrays;
rq->expired = rq->arrays + 1;
#ifdef CONFIG_DEBUG_SPINLOCK_SLEEP
void __might_sleep(char *file, int line)
{
-#if defined(in_atomic)
+#ifdef in_atomic
static unsigned long prev_jiffy; /* ratelimiting */
if ((in_atomic() || irqs_disabled()) &&
#ifdef CONFIG_MAGIC_SYSRQ
void normalize_rt_tasks(void)
{
+ struct prio_array *array;
struct task_struct *p;
- prio_array_t *array;
unsigned long flags;
- runqueue_t *rq;
+ struct rq *rq;
read_lock_irq(&tasklist_lock);
for_each_process(p) {
*
* ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
*/
-task_t *curr_task(int cpu)
+struct task_struct *curr_task(int cpu)
{
return cpu_curr(cpu);
}
*
* ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
*/
-void set_curr_task(int cpu, task_t *p)
+void set_curr_task(int cpu, struct task_struct *p)
{
cpu_curr(cpu) = p;
}
wake_up_process(tsk);
}
+/*
+ * This one is for softirq.c-internal use,
+ * where hardirqs are disabled legitimately:
+ */
+static void __local_bh_disable(unsigned long ip)
+{
+ unsigned long flags;
+
+ WARN_ON_ONCE(in_irq());
+
+ raw_local_irq_save(flags);
+ add_preempt_count(SOFTIRQ_OFFSET);
+ /*
+ * Were softirqs turned off above:
+ */
+ if (softirq_count() == SOFTIRQ_OFFSET)
+ trace_softirqs_off(ip);
+ raw_local_irq_restore(flags);
+}
+
+void local_bh_disable(void)
+{
+ __local_bh_disable((unsigned long)__builtin_return_address(0));
+}
+
+EXPORT_SYMBOL(local_bh_disable);
+
+void __local_bh_enable(void)
+{
+ WARN_ON_ONCE(in_irq());
+
+ /*
+ * softirqs should never be enabled by __local_bh_enable(),
+ * it always nests inside local_bh_enable() sections:
+ */
+ WARN_ON_ONCE(softirq_count() == SOFTIRQ_OFFSET);
+
+ sub_preempt_count(SOFTIRQ_OFFSET);
+}
+EXPORT_SYMBOL_GPL(__local_bh_enable);
+
+/*
+ * Special-case - softirqs can safely be enabled in
+ * cond_resched_softirq(), or by __do_softirq(),
+ * without processing still-pending softirqs:
+ */
+void _local_bh_enable(void)
+{
+ WARN_ON_ONCE(in_irq());
+ WARN_ON_ONCE(!irqs_disabled());
+
+ if (softirq_count() == SOFTIRQ_OFFSET)
+ trace_softirqs_on((unsigned long)__builtin_return_address(0));
+ sub_preempt_count(SOFTIRQ_OFFSET);
+}
+
+EXPORT_SYMBOL(_local_bh_enable);
+
+void local_bh_enable(void)
+{
+ unsigned long flags;
+
+ WARN_ON_ONCE(in_irq());
+ WARN_ON_ONCE(irqs_disabled());
+
+ local_irq_save(flags);
+ /*
+ * Are softirqs going to be turned on now:
+ */
+ if (softirq_count() == SOFTIRQ_OFFSET)
+ trace_softirqs_on((unsigned long)__builtin_return_address(0));
+ /*
+ * Keep preemption disabled until we are done with
+ * softirq processing:
+ */
+ sub_preempt_count(SOFTIRQ_OFFSET - 1);
+
+ if (unlikely(!in_interrupt() && local_softirq_pending()))
+ do_softirq();
+
+ dec_preempt_count();
+ local_irq_restore(flags);
+ preempt_check_resched();
+}
+EXPORT_SYMBOL(local_bh_enable);
+
+void local_bh_enable_ip(unsigned long ip)
+{
+ unsigned long flags;
+
+ WARN_ON_ONCE(in_irq());
+
+ local_irq_save(flags);
+ /*
+ * Are softirqs going to be turned on now:
+ */
+ if (softirq_count() == SOFTIRQ_OFFSET)
+ trace_softirqs_on(ip);
+ /*
+ * Keep preemption disabled until we are done with
+ * softirq processing:
+ */
+ sub_preempt_count(SOFTIRQ_OFFSET - 1);
+
+ if (unlikely(!in_interrupt() && local_softirq_pending()))
+ do_softirq();
+
+ dec_preempt_count();
+ local_irq_restore(flags);
+ preempt_check_resched();
+}
+EXPORT_SYMBOL(local_bh_enable_ip);
+
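Assuming a single nesting level, the two-step count manipulation in both enable paths is what keeps the softirq run non-preemptible:

	preempt_count() == SOFTIRQ_OFFSET	after local_bh_disable()
	preempt_count() == 1			after sub_preempt_count(SOFTIRQ_OFFSET - 1)
	preempt_count() == 0			after the final dec_preempt_count()

so do_softirq() still runs with preemption disabled, and only the last decrement lets preempt_check_resched() take effect.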
/*
* We restart softirq processing MAX_SOFTIRQ_RESTART times,
* and we fall back to softirqd after that.
int cpu;
pending = local_softirq_pending();
+ account_system_vtime(current);
+
+ __local_bh_disable((unsigned long)__builtin_return_address(0));
+ trace_softirq_enter();
- local_bh_disable();
cpu = smp_processor_id();
restart:
/* Reset the pending bitmask before enabling irqs */
if (pending)
wakeup_softirqd();
- __local_bh_enable();
+ trace_softirq_exit();
+
+ account_system_vtime(current);
+ _local_bh_enable();
}
#ifndef __ARCH_HAS_DO_SOFTIRQ
#endif
-void local_bh_enable(void)
-{
- WARN_ON(irqs_disabled());
- /*
- * Keep preemption disabled until we are done with
- * softirq processing:
- */
- sub_preempt_count(SOFTIRQ_OFFSET - 1);
-
- if (unlikely(!in_interrupt() && local_softirq_pending()))
- do_softirq();
-
- dec_preempt_count();
- preempt_check_resched();
-}
-EXPORT_SYMBOL(local_bh_enable);
-
#ifdef __ARCH_IRQ_EXIT_IRQS_DISABLED
# define invoke_softirq() __do_softirq()
#else
void irq_exit(void)
{
account_system_vtime(current);
+ trace_hardirq_exit();
sub_preempt_count(IRQ_EXIT_OFFSET);
if (!in_interrupt() && local_softirq_pending())
invoke_softirq();
#include <linux/preempt.h>
#include <linux/spinlock.h>
#include <linux/interrupt.h>
+#include <linux/debug_locks.h>
#include <linux/module.h>
/*
int __lockfunc _spin_trylock(spinlock_t *lock)
{
preempt_disable();
- if (_raw_spin_trylock(lock))
+ if (_raw_spin_trylock(lock)) {
+ spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);
return 1;
+ }
preempt_enable();
return 0;
int __lockfunc _read_trylock(rwlock_t *lock)
{
preempt_disable();
- if (_raw_read_trylock(lock))
+ if (_raw_read_trylock(lock)) {
+ rwlock_acquire_read(&lock->dep_map, 0, 1, _RET_IP_);
return 1;
+ }
preempt_enable();
return 0;
int __lockfunc _write_trylock(rwlock_t *lock)
{
preempt_disable();
- if (_raw_write_trylock(lock))
+ if (_raw_write_trylock(lock)) {
+ rwlock_acquire(&lock->dep_map, 0, 1, _RET_IP_);
return 1;
+ }
preempt_enable();
return 0;
}
EXPORT_SYMBOL(_write_trylock);
-#if !defined(CONFIG_PREEMPT) || !defined(CONFIG_SMP)
+/*
+ * If lockdep is enabled then we use the non-preemption spin-ops
+ * even on CONFIG_PREEMPT, because lockdep assumes that interrupts are
+ * not re-enabled during lock-acquire (which the preempt-spin-ops do):
+ */
+#if !defined(CONFIG_PREEMPT) || !defined(CONFIG_SMP) || \
+ defined(CONFIG_PROVE_LOCKING)
void __lockfunc _read_lock(rwlock_t *lock)
{
preempt_disable();
+ rwlock_acquire_read(&lock->dep_map, 0, 0, _RET_IP_);
_raw_read_lock(lock);
}
EXPORT_SYMBOL(_read_lock);
local_irq_save(flags);
preempt_disable();
+ spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
+ /*
+	 * On lockdep we don't want the hand-coded irq-enable of
+ * _raw_spin_lock_flags() code, because lockdep assumes
+ * that interrupts are not re-enabled during lock-acquire:
+ */
+#ifdef CONFIG_PROVE_LOCKING
+ _raw_spin_lock(lock);
+#else
_raw_spin_lock_flags(lock, &flags);
+#endif
return flags;
}
EXPORT_SYMBOL(_spin_lock_irqsave);
{
local_irq_disable();
preempt_disable();
+ spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
_raw_spin_lock(lock);
}
EXPORT_SYMBOL(_spin_lock_irq);
{
local_bh_disable();
preempt_disable();
+ spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
_raw_spin_lock(lock);
}
EXPORT_SYMBOL(_spin_lock_bh);
local_irq_save(flags);
preempt_disable();
+ rwlock_acquire_read(&lock->dep_map, 0, 0, _RET_IP_);
_raw_read_lock(lock);
return flags;
}
{
local_irq_disable();
preempt_disable();
+ rwlock_acquire_read(&lock->dep_map, 0, 0, _RET_IP_);
_raw_read_lock(lock);
}
EXPORT_SYMBOL(_read_lock_irq);
{
local_bh_disable();
preempt_disable();
+ rwlock_acquire_read(&lock->dep_map, 0, 0, _RET_IP_);
_raw_read_lock(lock);
}
EXPORT_SYMBOL(_read_lock_bh);
local_irq_save(flags);
preempt_disable();
+ rwlock_acquire(&lock->dep_map, 0, 0, _RET_IP_);
_raw_write_lock(lock);
return flags;
}
{
local_irq_disable();
preempt_disable();
+ rwlock_acquire(&lock->dep_map, 0, 0, _RET_IP_);
_raw_write_lock(lock);
}
EXPORT_SYMBOL(_write_lock_irq);
{
local_bh_disable();
preempt_disable();
+ rwlock_acquire(&lock->dep_map, 0, 0, _RET_IP_);
_raw_write_lock(lock);
}
EXPORT_SYMBOL(_write_lock_bh);
void __lockfunc _spin_lock(spinlock_t *lock)
{
preempt_disable();
+ spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
_raw_spin_lock(lock);
}
void __lockfunc _write_lock(rwlock_t *lock)
{
preempt_disable();
+ rwlock_acquire(&lock->dep_map, 0, 0, _RET_IP_);
_raw_write_lock(lock);
}
#endif /* CONFIG_PREEMPT */
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+
+void __lockfunc _spin_lock_nested(spinlock_t *lock, int subclass)
+{
+ preempt_disable();
+ spin_acquire(&lock->dep_map, subclass, 0, _RET_IP_);
+ _raw_spin_lock(lock);
+}
+
+EXPORT_SYMBOL(_spin_lock_nested);
+
+#endif
+
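Read together, the call sites above fix the annotation convention for this file: the second argument of spin_acquire()/rwlock_acquire() is the subclass (non-zero only in _spin_lock_nested()), the third distinguishes trylock paths (1) from blocking acquires (0), and the last is the caller's return address recorded for lockdep reports. For instance:

	spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);	/* trylock, default subclass */
	spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);	/* blocking acquire */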
void __lockfunc _spin_unlock(spinlock_t *lock)
{
+ spin_release(&lock->dep_map, 1, _RET_IP_);
_raw_spin_unlock(lock);
preempt_enable();
}
void __lockfunc _write_unlock(rwlock_t *lock)
{
+ rwlock_release(&lock->dep_map, 1, _RET_IP_);
_raw_write_unlock(lock);
preempt_enable();
}
void __lockfunc _read_unlock(rwlock_t *lock)
{
+ rwlock_release(&lock->dep_map, 1, _RET_IP_);
_raw_read_unlock(lock);
preempt_enable();
}
void __lockfunc _spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
{
+ spin_release(&lock->dep_map, 1, _RET_IP_);
_raw_spin_unlock(lock);
local_irq_restore(flags);
preempt_enable();
void __lockfunc _spin_unlock_irq(spinlock_t *lock)
{
+ spin_release(&lock->dep_map, 1, _RET_IP_);
_raw_spin_unlock(lock);
local_irq_enable();
preempt_enable();
void __lockfunc _spin_unlock_bh(spinlock_t *lock)
{
+ spin_release(&lock->dep_map, 1, _RET_IP_);
_raw_spin_unlock(lock);
preempt_enable_no_resched();
- local_bh_enable();
+ local_bh_enable_ip((unsigned long)__builtin_return_address(0));
}
EXPORT_SYMBOL(_spin_unlock_bh);
void __lockfunc _read_unlock_irqrestore(rwlock_t *lock, unsigned long flags)
{
+ rwlock_release(&lock->dep_map, 1, _RET_IP_);
_raw_read_unlock(lock);
local_irq_restore(flags);
preempt_enable();
void __lockfunc _read_unlock_irq(rwlock_t *lock)
{
+ rwlock_release(&lock->dep_map, 1, _RET_IP_);
_raw_read_unlock(lock);
local_irq_enable();
preempt_enable();
void __lockfunc _read_unlock_bh(rwlock_t *lock)
{
+ rwlock_release(&lock->dep_map, 1, _RET_IP_);
_raw_read_unlock(lock);
preempt_enable_no_resched();
- local_bh_enable();
+ local_bh_enable_ip((unsigned long)__builtin_return_address(0));
}
EXPORT_SYMBOL(_read_unlock_bh);
void __lockfunc _write_unlock_irqrestore(rwlock_t *lock, unsigned long flags)
{
+ rwlock_release(&lock->dep_map, 1, _RET_IP_);
_raw_write_unlock(lock);
local_irq_restore(flags);
preempt_enable();
void __lockfunc _write_unlock_irq(rwlock_t *lock)
{
+ rwlock_release(&lock->dep_map, 1, _RET_IP_);
_raw_write_unlock(lock);
local_irq_enable();
preempt_enable();
void __lockfunc _write_unlock_bh(rwlock_t *lock)
{
+ rwlock_release(&lock->dep_map, 1, _RET_IP_);
_raw_write_unlock(lock);
preempt_enable_no_resched();
- local_bh_enable();
+ local_bh_enable_ip((unsigned long)__builtin_return_address(0));
}
EXPORT_SYMBOL(_write_unlock_bh);
{
local_bh_disable();
preempt_disable();
- if (_raw_spin_trylock(lock))
+ if (_raw_spin_trylock(lock)) {
+ spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);
return 1;
+ }
preempt_enable_no_resched();
- local_bh_enable();
+ local_bh_enable_ip((unsigned long)__builtin_return_address(0));
return 0;
}
EXPORT_SYMBOL(_spin_trylock_bh);
--- /dev/null
+/*
+ * kernel/stacktrace.c
+ *
+ * Stack trace management functions
+ *
+ * Copyright (C) 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
+ */
+#include <linux/sched.h>
+#include <linux/kallsyms.h>
+#include <linux/stacktrace.h>
+
+void print_stack_trace(struct stack_trace *trace, int spaces)
+{
+ int i, j;
+
+ for (i = 0; i < trace->nr_entries; i++) {
+ unsigned long ip = trace->entries[i];
+
+ for (j = 0; j < spaces + 1; j++)
+ printk(" ");
+ print_ip_sym(ip);
+ }
+}
+
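A minimal usage sketch for the new helper (the entries are filled by hand purely for illustration; real traces come from an architecture backend):

	unsigned long entries[2] = {
		(unsigned long)schedule, (unsigned long)printk
	};
	struct stack_trace trace = { .nr_entries = 2, .entries = entries };

	print_stack_trace(&trace, 0);	/* one print_ip_sym() line per entry */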
#include <linux/cpu.h>
#include <linux/err.h>
#include <linux/syscalls.h>
-#include <linux/kthread.h>
#include <asm/atomic.h>
#include <asm/semaphore.h>
#include <asm/uaccess.h>
static atomic_t stopmachine_thread_ack;
static DECLARE_MUTEX(stopmachine_mutex);
-static int stopmachine(void *unused)
+static int stopmachine(void *cpu)
{
int irqs_disabled = 0;
int prepared = 0;
+ set_cpus_allowed(current, cpumask_of_cpu((int)(long)cpu));
+
/* Ack: we are alive */
smp_mb(); /* Theoretically the ack = 0 might not be on this CPU yet. */
atomic_inc(&stopmachine_thread_ack);
static int stop_machine(void)
{
- int ret = 0;
- unsigned int i;
+ int i, ret = 0;
struct sched_param param = { .sched_priority = MAX_RT_PRIO-1 };
/* One high-prio thread per cpu. We'll do this one. */
stopmachine_state = STOPMACHINE_WAIT;
for_each_online_cpu(i) {
- struct task_struct *tsk;
if (i == raw_smp_processor_id())
continue;
- tsk = kthread_create(stopmachine, NULL, "stopmachine");
- if (IS_ERR(tsk)) {
- ret = PTR_ERR(tsk);
+		ret = kernel_thread(stopmachine, (void *)(long)i, CLONE_KERNEL);
+ if (ret < 0)
break;
- }
- kthread_bind(tsk, i);
- wake_up_process(tsk);
stopmachine_num_threads++;
}
.strategy = &sysctl_intvec,
.extra1 = &zero,
},
+ {
+ .ctl_name = VM_MIN_UNMAPPED,
+ .procname = "min_unmapped_ratio",
+ .data = &sysctl_min_unmapped_ratio,
+ .maxlen = sizeof(sysctl_min_unmapped_ratio),
+ .mode = 0644,
+ .proc_handler = &sysctl_min_unmapped_ratio_sysctl_handler,
+ .strategy = &sysctl_intvec,
+ .extra1 = &zero,
+ .extra2 = &one_hundred,
+ },
#endif
#ifdef CONFIG_X86_32
{
* playing with xtime and avenrun.
*/
#ifndef ARCH_HAVE_XTIME_LOCK
-seqlock_t xtime_lock __cacheline_aligned_in_smp = SEQLOCK_UNLOCKED;
+__cacheline_aligned_in_smp DEFINE_SEQLOCK(xtime_lock);
EXPORT_SYMBOL(xtime_lock);
#endif
static void process_timeout(unsigned long __data)
{
- wake_up_process((task_t *)__data);
+ wake_up_process((struct task_struct *)__data);
}
/**
return 0;
}
+/*
+ * lockdep: we want to track each per-CPU base as a separate lock-class,
+ * but timer-bases are kmalloc()-ed, so we need to attach separate
+ * keys to them:
+ */
+static struct lock_class_key base_lock_keys[NR_CPUS];
+
static int __devinit init_timers_cpu(int cpu)
{
int j;
}
spin_lock_init(&base->lock);
+ lockdep_set_class(&base->lock, base_lock_keys + cpu);
+
for (j = 0; j < TVN_SIZE; j++) {
INIT_LIST_HEAD(base->tv5.vec + j);
INIT_LIST_HEAD(base->tv4.vec + j);
#include <linux/wait.h>
#include <linux/hash.h>
+struct lock_class_key waitqueue_lock_key;
+
+EXPORT_SYMBOL(waitqueue_lock_key);
+
void fastcall add_wait_queue(wait_queue_head_t *q, wait_queue_t *wait)
{
unsigned long flags;
wait_queue_head_t work_done;
struct workqueue_struct *wq;
- task_t *thread;
+ struct task_struct *thread;
int run_depth; /* Detect run_workqueue() recursion depth */
} ____cacheline_aligned;
config LOG_BUF_SHIFT
int "Kernel log buffer size (16 => 64KB, 17 => 128KB)" if DEBUG_KERNEL
range 12 21
- default 17 if S390
+ default 17 if S390 || LOCKDEP
default 16 if X86_NUMAQ || IA64
default 15 if SMP
default 14
config DEBUG_PREEMPT
bool "Debug preemptible kernel"
- depends on DEBUG_KERNEL && PREEMPT
+ depends on DEBUG_KERNEL && PREEMPT && TRACE_IRQFLAGS_SUPPORT
default y
help
If you say Y here then the kernel will use a debug variant of the
if kernel code uses it in a preemption-unsafe way. Also, the kernel
will detect preemption count underflows.
-config DEBUG_MUTEXES
- bool "Mutex debugging, deadlock detection"
- default n
- depends on DEBUG_KERNEL
- help
- This allows mutex semantics violations and mutex related deadlocks
- (lockups) to be detected and reported automatically.
-
config DEBUG_RT_MUTEXES
bool "RT Mutex debugging, deadlock detection"
depends on DEBUG_KERNEL && RT_MUTEXES
This option enables a rt-mutex tester.
config DEBUG_SPINLOCK
- bool "Spinlock debugging"
+ bool "Spinlock and rw-lock debugging: basic checks"
depends on DEBUG_KERNEL
help
Say Y here and build SMP to catch missing spinlock initialization
best used in conjunction with the NMI watchdog so that spinlock
deadlocks are also debuggable.
+config DEBUG_MUTEXES
+ bool "Mutex debugging: basic checks"
+ depends on DEBUG_KERNEL
+ help
+ This feature allows mutex semantics violations to be detected and
+ reported.
+
+config DEBUG_RWSEMS
+ bool "RW-sem debugging: basic checks"
+ depends on DEBUG_KERNEL
+ help
+ This feature allows read-write semaphore semantics violations to
+ be detected and reported.
+
+config DEBUG_LOCK_ALLOC
+ bool "Lock debugging: detect incorrect freeing of live locks"
+ depends on TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
+ select DEBUG_SPINLOCK
+ select DEBUG_MUTEXES
+ select DEBUG_RWSEMS
+ select LOCKDEP
+ help
+ This feature will check whether any held lock (spinlock, rwlock,
+ mutex or rwsem) is incorrectly freed by the kernel, via any of the
+ memory-freeing routines (kfree(), kmem_cache_free(), free_pages(),
+ vfree(), etc.), whether a live lock is incorrectly reinitialized via
+ spin_lock_init()/mutex_init()/etc., or whether there is any lock
+ held during task exit.
+
+config PROVE_LOCKING
+ bool "Lock debugging: prove locking correctness"
+ depends on TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
+ select LOCKDEP
+ select DEBUG_SPINLOCK
+ select DEBUG_MUTEXES
+ select DEBUG_RWSEMS
+ select DEBUG_LOCK_ALLOC
+ default n
+ help
+ This feature enables the kernel to prove that all locking
+ that occurs in the kernel runtime is mathematically
+ correct: that under no circumstance could an arbitrary (and
+ not yet triggered) combination of observed locking
+ sequences (on an arbitrary number of CPUs, running an
+ arbitrary number of tasks and interrupt contexts) cause a
+ deadlock.
+
+ In short, this feature enables the kernel to report locking
+ related deadlocks before they actually occur.
+
+ The proof does not depend on how hard and complex a
+ deadlock scenario would be to trigger: how many
+ participant CPUs, tasks and irq-contexts would be needed
+ for it to trigger. The proof also does not depend on
+ timing: if a race and a resulting deadlock is possible
+ theoretically (no matter how unlikely the race scenario
+ is), it will be proven so and will immediately be
+ reported by the kernel (once the event is observed that
+ makes the deadlock theoretically possible).
+
+ If a deadlock is impossible (i.e. the locking rules, as
+ observed by the kernel, are mathematically correct), the
+ kernel reports nothing.
+
+ NOTE: this feature can also be enabled for rwlocks, mutexes
+ and rwsems - in which case all dependencies between these
+ different locking variants are observed and mapped too, and
+ the proof of observed correctness is also maintained for an
+ arbitrary combination of these separate locking variants.
+
+ For more details, see Documentation/lockdep-design.txt.
+
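The canonical case this catches is lock-order inversion. As a two-CPU sketch (lock names invented for illustration):

	CPU0:	spin_lock(&A); spin_lock(&B);	/* records the dependency A -> B */
	CPU1:	spin_lock(&B); spin_lock(&A);	/* inverse order: reported, even if
						   the two paths never actually race */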
+config LOCKDEP
+ bool
+ depends on TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
+ select STACKTRACE
+ select FRAME_POINTER
+ select KALLSYMS
+ select KALLSYMS_ALL
+
+config DEBUG_LOCKDEP
+ bool "Lock dependency engine debugging"
+ depends on LOCKDEP
+ help
+ If you say Y here, the lock dependency engine will do
+ additional runtime checks to debug itself, at the price
+ of more runtime overhead.
+
+config TRACE_IRQFLAGS
+ bool
+ default y
+ depends on TRACE_IRQFLAGS_SUPPORT
+ depends on PROVE_LOCKING
+
config DEBUG_SPINLOCK_SLEEP
- bool "Sleep-inside-spinlock checking"
+ bool "Spinlock debugging: sleep-inside-spinlock checking"
depends on DEBUG_KERNEL
help
If you say Y here, various routines which may sleep will become very
noisy if they are called with a spinlock held.
+config DEBUG_LOCKING_API_SELFTESTS
+ bool "Locking API boot-time self-tests"
+ depends on DEBUG_KERNEL
+ help
+ Say Y here if you want the kernel to run a short self-test during
+ bootup. The self-test checks whether common types of locking bugs
+	  are detected by debugging mechanisms or not. (If you disable
+	  lock debugging then those bugs won't be detected, of course.)
+ The following locking APIs are covered: spinlocks, rwlocks,
+ mutexes and rwsems.
+
+config STACKTRACE
+ bool
+ depends on STACKTRACE_SUPPORT
+
config DEBUG_KOBJECT
bool "kobject debugging"
depends on DEBUG_KERNEL
config FRAME_POINTER
bool "Compile the kernel with frame pointers"
- depends on DEBUG_KERNEL && (X86 || CRIS || M68K || M68KNOMMU || FRV || UML)
+ depends on DEBUG_KERNEL && (X86 || CRIS || M68K || M68KNOMMU || FRV || UML || S390)
default y if DEBUG_INFO && UML
help
If you say Y here the resulting kernel image will be slightly larger
lib-y += kobject.o kref.o kobject_uevent.o klist.o
-obj-y += sort.o parser.o halfmd4.o iomap_copy.o
+obj-y += sort.o parser.o halfmd4.o iomap_copy.o debug_locks.o
ifeq ($(CONFIG_DEBUG_KOBJECT),y)
CFLAGS_kobject.o += -DDEBUG
CFLAGS_kobject_uevent.o += -DDEBUG
endif
+obj-$(CONFIG_DEBUG_LOCKING_API_SELFTESTS) += locking-selftest.o
obj-$(CONFIG_DEBUG_SPINLOCK) += spinlock_debug.o
lib-$(CONFIG_RWSEM_GENERIC_SPINLOCK) += rwsem-spinlock.o
lib-$(CONFIG_RWSEM_XCHGADD_ALGORITHM) += rwsem.o
--- /dev/null
+/*
+ * lib/debug_locks.c
+ *
+ * Generic place for common debugging facilities for various locks:
+ * spinlocks, rwlocks, mutexes and rwsems.
+ *
+ * Started by Ingo Molnar:
+ *
+ * Copyright (C) 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
+ */
+#include <linux/rwsem.h>
+#include <linux/mutex.h>
+#include <linux/module.h>
+#include <linux/spinlock.h>
+#include <linux/debug_locks.h>
+
+/*
+ * We want to turn all lock-debugging facilities on/off at once,
+ * via a global flag. The reason is that once a single bug has been
+ * detected and reported, there might be a cascade of follow-up bugs
+ * that would just muddy the log. So we report the first one and
+ * shut up after that.
+ */
+int debug_locks = 1;
+
+/*
+ * The locking-testsuite uses <debug_locks_silent> to get a
+ * 'silent failure': nothing is printed to the console when
+ * a locking bug is detected.
+ */
+int debug_locks_silent;
+
+/*
+ * Generic 'turn off all lock debugging' function:
+ */
+int debug_locks_off(void)
+{
+ if (xchg(&debug_locks, 0)) {
+ if (!debug_locks_silent) {
+ console_verbose();
+ return 1;
+ }
+ }
+ return 0;
+}
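A hypothetical caller follows the pattern implied by the return value: print a report only when this call actually turned debugging off and silent mode is not set:

	if (debug_locks_off())
		printk(KERN_ERR "BUG: ...\n");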
static inline void __unlock_kernel(void)
{
- spin_unlock(&kernel_flag);
+ /*
+ * the BKL is not covered by lockdep, so we open-code the
+ * unlocking sequence (and thus avoid the dep-chain ops):
+ */
+ _raw_spin_unlock(&kernel_flag);
+ preempt_enable();
}
/*
--- /dev/null
+#undef IRQ_DISABLE
+#undef IRQ_ENABLE
+#undef IRQ_ENTER
+#undef IRQ_EXIT
+
+#define IRQ_ENABLE HARDIRQ_ENABLE
+#define IRQ_DISABLE HARDIRQ_DISABLE
+#define IRQ_ENTER HARDIRQ_ENTER
+#define IRQ_EXIT HARDIRQ_EXIT
--- /dev/null
+#undef LOCK
+#define LOCK ML
+
+#undef UNLOCK
+#define UNLOCK MU
+
+#undef RLOCK
+#undef WLOCK
+
+#undef INIT
+#define INIT MI
--- /dev/null
+#include "locking-selftest-rlock.h"
+#include "locking-selftest-hardirq.h"
--- /dev/null
+#include "locking-selftest-rlock.h"
+#include "locking-selftest-softirq.h"
--- /dev/null
+#undef LOCK
+#define LOCK RL
+
+#undef UNLOCK
+#define UNLOCK RU
+
+#undef RLOCK
+#define RLOCK RL
+
+#undef WLOCK
+#define WLOCK WL
+
+#undef INIT
+#define INIT RWI
--- /dev/null
+#undef LOCK
+#define LOCK RSL
+
+#undef UNLOCK
+#define UNLOCK RSU
+
+#undef RLOCK
+#define RLOCK RSL
+
+#undef WLOCK
+#define WLOCK WSL
+
+#undef INIT
+#define INIT RWSI
--- /dev/null
+#undef IRQ_DISABLE
+#undef IRQ_ENABLE
+#undef IRQ_ENTER
+#undef IRQ_EXIT
+
+#define IRQ_DISABLE SOFTIRQ_DISABLE
+#define IRQ_ENABLE SOFTIRQ_ENABLE
+#define IRQ_ENTER SOFTIRQ_ENTER
+#define IRQ_EXIT SOFTIRQ_EXIT
--- /dev/null
+#include "locking-selftest-spin.h"
+#include "locking-selftest-hardirq.h"
--- /dev/null
+#include "locking-selftest-spin.h"
+#include "locking-selftest-softirq.h"
--- /dev/null
+#undef LOCK
+#define LOCK L
+
+#undef UNLOCK
+#define UNLOCK U
+
+#undef RLOCK
+#undef WLOCK
+
+#undef INIT
+#define INIT SI
--- /dev/null
+#include "locking-selftest-wlock.h"
+#include "locking-selftest-hardirq.h"
--- /dev/null
+#include "locking-selftest-wlock.h"
+#include "locking-selftest-softirq.h"
--- /dev/null
+#undef LOCK
+#define LOCK WL
+
+#undef UNLOCK
+#define UNLOCK WU
+
+#undef RLOCK
+#define RLOCK RL
+
+#undef WLOCK
+#define WLOCK WL
+
+#undef INIT
+#define INIT RWI
--- /dev/null
+#undef LOCK
+#define LOCK WSL
+
+#undef UNLOCK
+#define UNLOCK WSU
+
+#undef RLOCK
+#define RLOCK RSL
+
+#undef WLOCK
+#define WLOCK WSL
+
+#undef INIT
+#define INIT RWSI
--- /dev/null
+/*
+ * lib/locking-selftest.c
+ *
+ * Testsuite for various locking APIs: spinlocks, rwlocks,
+ * mutexes and rw-semaphores.
+ *
+ * It checks for both false positives and false negatives.
+ *
+ * Started by Ingo Molnar:
+ *
+ * Copyright (C) 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
+ */
+#include <linux/rwsem.h>
+#include <linux/mutex.h>
+#include <linux/sched.h>
+#include <linux/delay.h>
+#include <linux/module.h>
+#include <linux/lockdep.h>
+#include <linux/spinlock.h>
+#include <linux/kallsyms.h>
+#include <linux/interrupt.h>
+#include <linux/debug_locks.h>
+#include <linux/irqflags.h>
+
+/*
+ * Change this to 1 if you want to see the failure printouts:
+ */
+static unsigned int debug_locks_verbose;
+
+static int __init setup_debug_locks_verbose(char *str)
+{
+ get_option(&str, &debug_locks_verbose);
+
+ return 1;
+}
+
+__setup("debug_locks_verbose=", setup_debug_locks_verbose);
+
+#define FAILURE 0
+#define SUCCESS 1
+
+#define LOCKTYPE_SPIN 0x1
+#define LOCKTYPE_RWLOCK 0x2
+#define LOCKTYPE_MUTEX 0x4
+#define LOCKTYPE_RWSEM 0x8
+
+/*
+ * Normal standalone locks, for the circular and irq-context
+ * dependency tests:
+ */
+static DEFINE_SPINLOCK(lock_A);
+static DEFINE_SPINLOCK(lock_B);
+static DEFINE_SPINLOCK(lock_C);
+static DEFINE_SPINLOCK(lock_D);
+
+static DEFINE_RWLOCK(rwlock_A);
+static DEFINE_RWLOCK(rwlock_B);
+static DEFINE_RWLOCK(rwlock_C);
+static DEFINE_RWLOCK(rwlock_D);
+
+static DEFINE_MUTEX(mutex_A);
+static DEFINE_MUTEX(mutex_B);
+static DEFINE_MUTEX(mutex_C);
+static DEFINE_MUTEX(mutex_D);
+
+static DECLARE_RWSEM(rwsem_A);
+static DECLARE_RWSEM(rwsem_B);
+static DECLARE_RWSEM(rwsem_C);
+static DECLARE_RWSEM(rwsem_D);
+
+/*
+ * Locks that we initialize dynamically as well so that
+ * e.g. X1 and X2 becomes two instances of the same class,
+ * but X* and Y* are different classes. We do this so that
+ * we do not trigger a real lockup:
+ */
+static DEFINE_SPINLOCK(lock_X1);
+static DEFINE_SPINLOCK(lock_X2);
+static DEFINE_SPINLOCK(lock_Y1);
+static DEFINE_SPINLOCK(lock_Y2);
+static DEFINE_SPINLOCK(lock_Z1);
+static DEFINE_SPINLOCK(lock_Z2);
+
+static DEFINE_RWLOCK(rwlock_X1);
+static DEFINE_RWLOCK(rwlock_X2);
+static DEFINE_RWLOCK(rwlock_Y1);
+static DEFINE_RWLOCK(rwlock_Y2);
+static DEFINE_RWLOCK(rwlock_Z1);
+static DEFINE_RWLOCK(rwlock_Z2);
+
+static DEFINE_MUTEX(mutex_X1);
+static DEFINE_MUTEX(mutex_X2);
+static DEFINE_MUTEX(mutex_Y1);
+static DEFINE_MUTEX(mutex_Y2);
+static DEFINE_MUTEX(mutex_Z1);
+static DEFINE_MUTEX(mutex_Z2);
+
+static DECLARE_RWSEM(rwsem_X1);
+static DECLARE_RWSEM(rwsem_X2);
+static DECLARE_RWSEM(rwsem_Y1);
+static DECLARE_RWSEM(rwsem_Y2);
+static DECLARE_RWSEM(rwsem_Z1);
+static DECLARE_RWSEM(rwsem_Z2);
+
+/*
+ * non-inlined runtime initializers, to let separate locks share
+ * the same lock-class:
+ */
+#define INIT_CLASS_FUNC(class) \
+static noinline void \
+init_class_##class(spinlock_t *lock, rwlock_t *rwlock, struct mutex *mutex, \
+ struct rw_semaphore *rwsem) \
+{ \
+ spin_lock_init(lock); \
+ rwlock_init(rwlock); \
+ mutex_init(mutex); \
+ init_rwsem(rwsem); \
+}
+
+INIT_CLASS_FUNC(X)
+INIT_CLASS_FUNC(Y)
+INIT_CLASS_FUNC(Z)
+
+static void init_shared_classes(void)
+{
+ init_class_X(&lock_X1, &rwlock_X1, &mutex_X1, &rwsem_X1);
+ init_class_X(&lock_X2, &rwlock_X2, &mutex_X2, &rwsem_X2);
+
+ init_class_Y(&lock_Y1, &rwlock_Y1, &mutex_Y1, &rwsem_Y1);
+ init_class_Y(&lock_Y2, &rwlock_Y2, &mutex_Y2, &rwsem_Y2);
+
+ init_class_Z(&lock_Z1, &rwlock_Z1, &mutex_Z1, &rwsem_Z1);
+ init_class_Z(&lock_Z2, &rwlock_Z2, &mutex_Z2, &rwsem_Z2);
+}
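A note on the class trick above (a sketch of the intent, not part of the patch text): each *_init() macro expansion supplies its own static lock_class_key (see the __spin_lock_init()/__init_rwsem() hunks further down), so routing both X1 and X2 through the single, non-inlined init_class_X() puts lock_X1 and lock_X2 (and the matching rwlocks, mutexes and rwsems) into one lockdep class, while the Y* and Z* locks get classes of their own. The AA testcases below can therefore take two different lock instances of the same class: lockdep reports the self-deadlock, but no real lockup can occur.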
+
+/*
+ * For spinlocks and rwlocks we also do hardirq-safe / softirq-safe tests.
+ * The following functions use a lock from a simulated hardirq/softirq
+ * context, causing the locks to be marked as hardirq-safe/softirq-safe:
+ */
+
+#define HARDIRQ_DISABLE local_irq_disable
+#define HARDIRQ_ENABLE local_irq_enable
+
+#define HARDIRQ_ENTER() \
+ local_irq_disable(); \
+ irq_enter(); \
+ WARN_ON(!in_irq());
+
+#define HARDIRQ_EXIT() \
+ __irq_exit(); \
+ local_irq_enable();
+
+#define SOFTIRQ_DISABLE local_bh_disable
+#define SOFTIRQ_ENABLE local_bh_enable
+
+#define SOFTIRQ_ENTER() \
+ local_bh_disable(); \
+ local_irq_disable(); \
+ trace_softirq_enter(); \
+ WARN_ON(!in_softirq());
+
+#define SOFTIRQ_EXIT() \
+ trace_softirq_exit(); \
+ local_irq_enable(); \
+ local_bh_enable();
+
+/*
+ * Shortcuts for lock/unlock API variants, to keep
+ * the testcases compact:
+ */
+#define L(x) spin_lock(&lock_##x)
+#define U(x) spin_unlock(&lock_##x)
+#define LU(x) L(x); U(x)
+#define SI(x) spin_lock_init(&lock_##x)
+
+#define WL(x) write_lock(&rwlock_##x)
+#define WU(x) write_unlock(&rwlock_##x)
+#define WLU(x) WL(x); WU(x)
+
+#define RL(x) read_lock(&rwlock_##x)
+#define RU(x) read_unlock(&rwlock_##x)
+#define RLU(x) RL(x); RU(x)
+#define RWI(x) rwlock_init(&rwlock_##x)
+
+#define ML(x) mutex_lock(&mutex_##x)
+#define MU(x) mutex_unlock(&mutex_##x)
+#define MI(x) mutex_init(&mutex_##x)
+
+#define WSL(x) down_write(&rwsem_##x)
+#define WSU(x) up_write(&rwsem_##x)
+
+#define RSL(x) down_read(&rwsem_##x)
+#define RSU(x) up_read(&rwsem_##x)
+#define RWSI(x) init_rwsem(&rwsem_##x)
+
+#define LOCK_UNLOCK_2(x,y) LOCK(x); LOCK(y); UNLOCK(y); UNLOCK(x)
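As a concrete illustration (assuming the spinlock variant header shown earlier is in effect, i.e. LOCK is L and UNLOCK is U), LOCK_UNLOCK_2(A, B) expands to:

	spin_lock(&lock_A);
	spin_lock(&lock_B);
	spin_unlock(&lock_B);
	spin_unlock(&lock_A);

which is exactly the A -> B ordering that the ABBA-style testcases below combine into circular dependencies.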
+
+/*
+ * Generate different permutations of the same testcase, using
+ * the same basic lock-dependency/state events:
+ */
+
+#define GENERATE_TESTCASE(name) \
+ \
+static void name(void) { E(); }
+
+#define GENERATE_PERMUTATIONS_2_EVENTS(name) \
+ \
+static void name##_12(void) { E1(); E2(); } \
+static void name##_21(void) { E2(); E1(); }
+
+#define GENERATE_PERMUTATIONS_3_EVENTS(name) \
+ \
+static void name##_123(void) { E1(); E2(); E3(); } \
+static void name##_132(void) { E1(); E3(); E2(); } \
+static void name##_213(void) { E2(); E1(); E3(); } \
+static void name##_231(void) { E2(); E3(); E1(); } \
+static void name##_312(void) { E3(); E1(); E2(); } \
+static void name##_321(void) { E3(); E2(); E1(); }
+
+/*
+ * AA deadlock:
+ */
+
+#define E() \
+ \
+ LOCK(X1); \
+ LOCK(X2); /* this one should fail */
+
+/*
+ * 6 testcases:
+ */
+#include "locking-selftest-spin.h"
+GENERATE_TESTCASE(AA_spin)
+#include "locking-selftest-wlock.h"
+GENERATE_TESTCASE(AA_wlock)
+#include "locking-selftest-rlock.h"
+GENERATE_TESTCASE(AA_rlock)
+#include "locking-selftest-mutex.h"
+GENERATE_TESTCASE(AA_mutex)
+#include "locking-selftest-wsem.h"
+GENERATE_TESTCASE(AA_wsem)
+#include "locking-selftest-rsem.h"
+GENERATE_TESTCASE(AA_rsem)
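After preprocessing, each of the six generated functions is just the E() body with the shortcuts resolved for its lock type; for the spinlock case the result is roughly:

	static void AA_spin(void)
	{
		spin_lock(&lock_X1);
		spin_lock(&lock_X2);	/* same lockdep class as lock_X1 */
	}

Since lock_X1 and lock_X2 share a class, lockdep treats this as a recursive A-A deadlock; dotest() later runs it with the FAILURE expectation, so the testcase counts as "ok" only if lockdep actually complains and clears debug_locks.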
+
+#undef E
+
+/*
+ * Special case for read-locking: read-locks are
+ * allowed to recurse on the same lock class:
+ */
+static void rlock_AA1(void)
+{
+ RL(X1);
+ RL(X1); // this one should NOT fail
+}
+
+static void rlock_AA1B(void)
+{
+ RL(X1);
+ RL(X2); // this one should NOT fail
+}
+
+static void rsem_AA1(void)
+{
+ RSL(X1);
+ RSL(X1); // this one should fail
+}
+
+static void rsem_AA1B(void)
+{
+ RSL(X1);
+ RSL(X2); // this one should fail
+}
+/*
+ * The mixing of read and write locks is not allowed:
+ */
+static void rlock_AA2(void)
+{
+ RL(X1);
+ WL(X2); // this one should fail
+}
+
+static void rsem_AA2(void)
+{
+ RSL(X1);
+ WSL(X2); // this one should fail
+}
+
+static void rlock_AA3(void)
+{
+ WL(X1);
+ RL(X2); // this one should fail
+}
+
+static void rsem_AA3(void)
+{
+ WSL(X1);
+ RSL(X2); // this one should fail
+}
+
+/*
+ * ABBA deadlock:
+ */
+
+#define E() \
+ \
+ LOCK_UNLOCK_2(A, B); \
+ LOCK_UNLOCK_2(B, A); /* fail */
+
+/*
+ * 6 testcases:
+ */
+#include "locking-selftest-spin.h"
+GENERATE_TESTCASE(ABBA_spin)
+#include "locking-selftest-wlock.h"
+GENERATE_TESTCASE(ABBA_wlock)
+#include "locking-selftest-rlock.h"
+GENERATE_TESTCASE(ABBA_rlock)
+#include "locking-selftest-mutex.h"
+GENERATE_TESTCASE(ABBA_mutex)
+#include "locking-selftest-wsem.h"
+GENERATE_TESTCASE(ABBA_wsem)
+#include "locking-selftest-rsem.h"
+GENERATE_TESTCASE(ABBA_rsem)
+
+#undef E
+
+/*
+ * AB BC CA deadlock:
+ */
+
+#define E() \
+ \
+ LOCK_UNLOCK_2(A, B); \
+ LOCK_UNLOCK_2(B, C); \
+ LOCK_UNLOCK_2(C, A); /* fail */
+
+/*
+ * 6 testcases:
+ */
+#include "locking-selftest-spin.h"
+GENERATE_TESTCASE(ABBCCA_spin)
+#include "locking-selftest-wlock.h"
+GENERATE_TESTCASE(ABBCCA_wlock)
+#include "locking-selftest-rlock.h"
+GENERATE_TESTCASE(ABBCCA_rlock)
+#include "locking-selftest-mutex.h"
+GENERATE_TESTCASE(ABBCCA_mutex)
+#include "locking-selftest-wsem.h"
+GENERATE_TESTCASE(ABBCCA_wsem)
+#include "locking-selftest-rsem.h"
+GENERATE_TESTCASE(ABBCCA_rsem)
+
+#undef E
+
+/*
+ * AB CA BC deadlock:
+ */
+
+#define E() \
+ \
+ LOCK_UNLOCK_2(A, B); \
+ LOCK_UNLOCK_2(C, A); \
+ LOCK_UNLOCK_2(B, C); /* fail */
+
+/*
+ * 6 testcases:
+ */
+#include "locking-selftest-spin.h"
+GENERATE_TESTCASE(ABCABC_spin)
+#include "locking-selftest-wlock.h"
+GENERATE_TESTCASE(ABCABC_wlock)
+#include "locking-selftest-rlock.h"
+GENERATE_TESTCASE(ABCABC_rlock)
+#include "locking-selftest-mutex.h"
+GENERATE_TESTCASE(ABCABC_mutex)
+#include "locking-selftest-wsem.h"
+GENERATE_TESTCASE(ABCABC_wsem)
+#include "locking-selftest-rsem.h"
+GENERATE_TESTCASE(ABCABC_rsem)
+
+#undef E
+
+/*
+ * AB BC CD DA deadlock:
+ */
+
+#define E() \
+ \
+ LOCK_UNLOCK_2(A, B); \
+ LOCK_UNLOCK_2(B, C); \
+ LOCK_UNLOCK_2(C, D); \
+ LOCK_UNLOCK_2(D, A); /* fail */
+
+/*
+ * 6 testcases:
+ */
+#include "locking-selftest-spin.h"
+GENERATE_TESTCASE(ABBCCDDA_spin)
+#include "locking-selftest-wlock.h"
+GENERATE_TESTCASE(ABBCCDDA_wlock)
+#include "locking-selftest-rlock.h"
+GENERATE_TESTCASE(ABBCCDDA_rlock)
+#include "locking-selftest-mutex.h"
+GENERATE_TESTCASE(ABBCCDDA_mutex)
+#include "locking-selftest-wsem.h"
+GENERATE_TESTCASE(ABBCCDDA_wsem)
+#include "locking-selftest-rsem.h"
+GENERATE_TESTCASE(ABBCCDDA_rsem)
+
+#undef E
+
+/*
+ * AB CD BD DA deadlock:
+ */
+#define E() \
+ \
+ LOCK_UNLOCK_2(A, B); \
+ LOCK_UNLOCK_2(C, D); \
+ LOCK_UNLOCK_2(B, D); \
+ LOCK_UNLOCK_2(D, A); /* fail */
+
+/*
+ * 6 testcases:
+ */
+#include "locking-selftest-spin.h"
+GENERATE_TESTCASE(ABCDBDDA_spin)
+#include "locking-selftest-wlock.h"
+GENERATE_TESTCASE(ABCDBDDA_wlock)
+#include "locking-selftest-rlock.h"
+GENERATE_TESTCASE(ABCDBDDA_rlock)
+#include "locking-selftest-mutex.h"
+GENERATE_TESTCASE(ABCDBDDA_mutex)
+#include "locking-selftest-wsem.h"
+GENERATE_TESTCASE(ABCDBDDA_wsem)
+#include "locking-selftest-rsem.h"
+GENERATE_TESTCASE(ABCDBDDA_rsem)
+
+#undef E
+
+/*
+ * AB CD BC DA deadlock:
+ */
+#define E() \
+ \
+ LOCK_UNLOCK_2(A, B); \
+ LOCK_UNLOCK_2(C, D); \
+ LOCK_UNLOCK_2(B, C); \
+ LOCK_UNLOCK_2(D, A); /* fail */
+
+/*
+ * 6 testcases:
+ */
+#include "locking-selftest-spin.h"
+GENERATE_TESTCASE(ABCDBCDA_spin)
+#include "locking-selftest-wlock.h"
+GENERATE_TESTCASE(ABCDBCDA_wlock)
+#include "locking-selftest-rlock.h"
+GENERATE_TESTCASE(ABCDBCDA_rlock)
+#include "locking-selftest-mutex.h"
+GENERATE_TESTCASE(ABCDBCDA_mutex)
+#include "locking-selftest-wsem.h"
+GENERATE_TESTCASE(ABCDBCDA_wsem)
+#include "locking-selftest-rsem.h"
+GENERATE_TESTCASE(ABCDBCDA_rsem)
+
+#undef E
+
+/*
+ * Double unlock:
+ */
+#define E() \
+ \
+ LOCK(A); \
+ UNLOCK(A); \
+ UNLOCK(A); /* fail */
+
+/*
+ * 6 testcases:
+ */
+#include "locking-selftest-spin.h"
+GENERATE_TESTCASE(double_unlock_spin)
+#include "locking-selftest-wlock.h"
+GENERATE_TESTCASE(double_unlock_wlock)
+#include "locking-selftest-rlock.h"
+GENERATE_TESTCASE(double_unlock_rlock)
+#include "locking-selftest-mutex.h"
+GENERATE_TESTCASE(double_unlock_mutex)
+#include "locking-selftest-wsem.h"
+GENERATE_TESTCASE(double_unlock_wsem)
+#include "locking-selftest-rsem.h"
+GENERATE_TESTCASE(double_unlock_rsem)
+
+#undef E
+
+/*
+ * Bad unlock ordering:
+ */
+#define E() \
+ \
+ LOCK(A); \
+ LOCK(B); \
+ UNLOCK(A); /* fail */ \
+ UNLOCK(B);
+
+/*
+ * 6 testcases:
+ */
+#include "locking-selftest-spin.h"
+GENERATE_TESTCASE(bad_unlock_order_spin)
+#include "locking-selftest-wlock.h"
+GENERATE_TESTCASE(bad_unlock_order_wlock)
+#include "locking-selftest-rlock.h"
+GENERATE_TESTCASE(bad_unlock_order_rlock)
+#include "locking-selftest-mutex.h"
+GENERATE_TESTCASE(bad_unlock_order_mutex)
+#include "locking-selftest-wsem.h"
+GENERATE_TESTCASE(bad_unlock_order_wsem)
+#include "locking-selftest-rsem.h"
+GENERATE_TESTCASE(bad_unlock_order_rsem)
+
+#undef E
+
+/*
+ * initializing a held lock:
+ */
+#define E() \
+ \
+ LOCK(A); \
+ INIT(A); /* fail */
+
+/*
+ * 6 testcases:
+ */
+#include "locking-selftest-spin.h"
+GENERATE_TESTCASE(init_held_spin)
+#include "locking-selftest-wlock.h"
+GENERATE_TESTCASE(init_held_wlock)
+#include "locking-selftest-rlock.h"
+GENERATE_TESTCASE(init_held_rlock)
+#include "locking-selftest-mutex.h"
+GENERATE_TESTCASE(init_held_mutex)
+#include "locking-selftest-wsem.h"
+GENERATE_TESTCASE(init_held_wsem)
+#include "locking-selftest-rsem.h"
+GENERATE_TESTCASE(init_held_rsem)
+
+#undef E
+
+/*
+ * locking an irq-safe lock with irqs enabled:
+ */
+#define E1() \
+ \
+ IRQ_ENTER(); \
+ LOCK(A); \
+ UNLOCK(A); \
+ IRQ_EXIT();
+
+#define E2() \
+ \
+ LOCK(A); \
+ UNLOCK(A);
+
+/*
+ * Generate 24 testcases:
+ */
+#include "locking-selftest-spin-hardirq.h"
+GENERATE_PERMUTATIONS_2_EVENTS(irqsafe1_hard_spin)
+
+#include "locking-selftest-rlock-hardirq.h"
+GENERATE_PERMUTATIONS_2_EVENTS(irqsafe1_hard_rlock)
+
+#include "locking-selftest-wlock-hardirq.h"
+GENERATE_PERMUTATIONS_2_EVENTS(irqsafe1_hard_wlock)
+
+#include "locking-selftest-spin-softirq.h"
+GENERATE_PERMUTATIONS_2_EVENTS(irqsafe1_soft_spin)
+
+#include "locking-selftest-rlock-softirq.h"
+GENERATE_PERMUTATIONS_2_EVENTS(irqsafe1_soft_rlock)
+
+#include "locking-selftest-wlock-softirq.h"
+GENERATE_PERMUTATIONS_2_EVENTS(irqsafe1_soft_wlock)
+
+#undef E1
+#undef E2
+
+/*
+ * Enabling hardirqs with a softirq-safe lock held:
+ */
+#define E1() \
+ \
+ SOFTIRQ_ENTER(); \
+ LOCK(A); \
+ UNLOCK(A); \
+ SOFTIRQ_EXIT();
+
+#define E2() \
+ \
+ HARDIRQ_DISABLE(); \
+ LOCK(A); \
+ HARDIRQ_ENABLE(); \
+ UNLOCK(A);
+
+/*
+ * Generate 12 testcases:
+ */
+#include "locking-selftest-spin.h"
+GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2A_spin)
+
+#include "locking-selftest-wlock.h"
+GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2A_wlock)
+
+#include "locking-selftest-rlock.h"
+GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2A_rlock)
+
+#undef E1
+#undef E2
+
+/*
+ * Enabling irqs with an irq-safe lock held:
+ */
+#define E1() \
+ \
+ IRQ_ENTER(); \
+ LOCK(A); \
+ UNLOCK(A); \
+ IRQ_EXIT();
+
+#define E2() \
+ \
+ IRQ_DISABLE(); \
+ LOCK(A); \
+ IRQ_ENABLE(); \
+ UNLOCK(A);
+
+/*
+ * Generate 24 testcases:
+ */
+#include "locking-selftest-spin-hardirq.h"
+GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2B_hard_spin)
+
+#include "locking-selftest-rlock-hardirq.h"
+GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2B_hard_rlock)
+
+#include "locking-selftest-wlock-hardirq.h"
+GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2B_hard_wlock)
+
+#include "locking-selftest-spin-softirq.h"
+GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2B_soft_spin)
+
+#include "locking-selftest-rlock-softirq.h"
+GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2B_soft_rlock)
+
+#include "locking-selftest-wlock-softirq.h"
+GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2B_soft_wlock)
+
+#undef E1
+#undef E2
+
+/*
+ * Acquiring an irq-unsafe lock while holding an irq-safe lock:
+ */
+#define E1() \
+ \
+ LOCK(A); \
+ LOCK(B); \
+ UNLOCK(B); \
+ UNLOCK(A); \
+
+#define E2() \
+ \
+ LOCK(B); \
+ UNLOCK(B);
+
+#define E3() \
+ \
+ IRQ_ENTER(); \
+ LOCK(A); \
+ UNLOCK(A); \
+ IRQ_EXIT();
+
+/*
+ * Generate 36 testcases:
+ */
+#include "locking-selftest-spin-hardirq.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irqsafe3_hard_spin)
+
+#include "locking-selftest-rlock-hardirq.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irqsafe3_hard_rlock)
+
+#include "locking-selftest-wlock-hardirq.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irqsafe3_hard_wlock)
+
+#include "locking-selftest-spin-softirq.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irqsafe3_soft_spin)
+
+#include "locking-selftest-rlock-softirq.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irqsafe3_soft_rlock)
+
+#include "locking-selftest-wlock-softirq.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irqsafe3_soft_wlock)
+
+#undef E1
+#undef E2
+#undef E3
+
+/*
+ * If a lock becomes softirq-safe, but earlier took
+ * a softirq-unsafe lock:
+ */
+
+#define E1() \
+ IRQ_DISABLE(); \
+ LOCK(A); \
+ LOCK(B); \
+ UNLOCK(B); \
+ UNLOCK(A); \
+ IRQ_ENABLE();
+
+#define E2() \
+ LOCK(B); \
+ UNLOCK(B);
+
+#define E3() \
+ IRQ_ENTER(); \
+ LOCK(A); \
+ UNLOCK(A); \
+ IRQ_EXIT();
+
+/*
+ * Generate 36 testcases:
+ */
+#include "locking-selftest-spin-hardirq.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irqsafe4_hard_spin)
+
+#include "locking-selftest-rlock-hardirq.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irqsafe4_hard_rlock)
+
+#include "locking-selftest-wlock-hardirq.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irqsafe4_hard_wlock)
+
+#include "locking-selftest-spin-softirq.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irqsafe4_soft_spin)
+
+#include "locking-selftest-rlock-softirq.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irqsafe4_soft_rlock)
+
+#include "locking-selftest-wlock-softirq.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irqsafe4_soft_wlock)
+
+#undef E1
+#undef E2
+#undef E3
+
+/*
+ * read-lock / write-lock irq inversion.
+ *
+ * Deadlock scenario:
+ *
+ * CPU#1 is at #1, i.e. it has write-locked A, but has not
+ * taken B yet.
+ *
+ * CPU#2 is at #2, i.e. it has locked B.
+ *
+ * Hardirq hits CPU#2 at point #2 and is trying to read-lock A.
+ *
+ * The deadlock occurs because CPU#1 will spin on B, and CPU#2
+ * will spin on A.
+ */
+
+#define E1() \
+ \
+ IRQ_DISABLE(); \
+ WL(A); \
+ LOCK(B); \
+ UNLOCK(B); \
+ WU(A); \
+ IRQ_ENABLE();
+
+#define E2() \
+ \
+ LOCK(B); \
+ UNLOCK(B);
+
+#define E3() \
+ \
+ IRQ_ENTER(); \
+ RL(A); \
+ RU(A); \
+ IRQ_EXIT();
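Mapping these events onto the scenario in the comment above, roughly: E1 plays CPU#1 (write-lock A with interrupts disabled, then take B), E2 plays CPU#2 taking B in process context with interrupts enabled, and E3 plays the hardirq that hits CPU#2 and read-locks A. lockdep learns from E1 that B nests inside A, from E2 that B is also taken with interrupts on, and from E3 that A is read-locked from hardirq context; together those facts reproduce the inversion without the selftest ever needing two CPUs.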
+
+/*
+ * Generate 36 testcases:
+ */
+#include "locking-selftest-spin-hardirq.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irq_inversion_hard_spin)
+
+#include "locking-selftest-rlock-hardirq.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irq_inversion_hard_rlock)
+
+#include "locking-selftest-wlock-hardirq.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irq_inversion_hard_wlock)
+
+#include "locking-selftest-spin-softirq.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irq_inversion_soft_spin)
+
+#include "locking-selftest-rlock-softirq.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irq_inversion_soft_rlock)
+
+#include "locking-selftest-wlock-softirq.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irq_inversion_soft_wlock)
+
+#undef E1
+#undef E2
+#undef E3
+
+/*
+ * read-lock / write-lock recursion that is actually safe.
+ */
+
+#define E1() \
+ \
+ IRQ_DISABLE(); \
+ WL(A); \
+ WU(A); \
+ IRQ_ENABLE();
+
+#define E2() \
+ \
+ RL(A); \
+ RU(A); \
+
+#define E3() \
+ \
+ IRQ_ENTER(); \
+ RL(A); \
+ L(B); \
+ U(B); \
+ RU(A); \
+ IRQ_EXIT();
+
+/*
+ * Generate 12 testcases:
+ */
+#include "locking-selftest-hardirq.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion_hard)
+
+#include "locking-selftest-softirq.h"
+GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion_soft)
+
+#undef E1
+#undef E2
+#undef E3
+
+/*
+ * read-lock / write-lock recursion that is unsafe.
+ */
+
+#define E1() \
+ \
+ IRQ_DISABLE(); \
+ L(B); \
+ WL(A); \
+ WU(A); \
+ U(B); \
+ IRQ_ENABLE();
+
+#define E2() \
+ \
+ RL(A); \
+ RU(A); \
+
+#define E3() \
+ \
+ IRQ_ENTER(); \
+ L(B); \
+ U(B); \
+ IRQ_EXIT();
+
+/*
+ * Generate 12 testcases:
+ */
+#include "locking-selftest-hardirq.h"
+// GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion2_hard)
+
+#include "locking-selftest-softirq.h"
+// GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion2_soft)
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+# define I_SPINLOCK(x) lockdep_reset_lock(&lock_##x.dep_map)
+# define I_RWLOCK(x) lockdep_reset_lock(&rwlock_##x.dep_map)
+# define I_MUTEX(x) lockdep_reset_lock(&mutex_##x.dep_map)
+# define I_RWSEM(x) lockdep_reset_lock(&rwsem_##x.dep_map)
+#else
+# define I_SPINLOCK(x)
+# define I_RWLOCK(x)
+# define I_MUTEX(x)
+# define I_RWSEM(x)
+#endif
+
+#define I1(x) \
+ do { \
+ I_SPINLOCK(x); \
+ I_RWLOCK(x); \
+ I_MUTEX(x); \
+ I_RWSEM(x); \
+ } while (0)
+
+#define I2(x) \
+ do { \
+ spin_lock_init(&lock_##x); \
+ rwlock_init(&rwlock_##x); \
+ mutex_init(&mutex_##x); \
+ init_rwsem(&rwsem_##x); \
+ } while (0)
+
+static void reset_locks(void)
+{
+ local_irq_disable();
+ I1(A); I1(B); I1(C); I1(D);
+ I1(X1); I1(X2); I1(Y1); I1(Y2); I1(Z1); I1(Z2);
+ lockdep_reset();
+ I2(A); I2(B); I2(C); I2(D);
+ init_shared_classes();
+ local_irq_enable();
+}
+
+#undef I
+
+static int testcase_total;
+static int testcase_successes;
+static int expected_testcase_failures;
+static int unexpected_testcase_failures;
+
+static void dotest(void (*testcase_fn)(void), int expected, int lockclass_mask)
+{
+ unsigned long saved_preempt_count = preempt_count();
+ int expected_failure = 0;
+
+ WARN_ON(irqs_disabled());
+
+ testcase_fn();
+ /*
+ * Filter out expected failures:
+ */
+#ifndef CONFIG_PROVE_LOCKING
+ if ((lockclass_mask & LOCKTYPE_SPIN) && debug_locks != expected)
+ expected_failure = 1;
+ if ((lockclass_mask & LOCKTYPE_RWLOCK) && debug_locks != expected)
+ expected_failure = 1;
+ if ((lockclass_mask & LOCKTYPE_MUTEX) && debug_locks != expected)
+ expected_failure = 1;
+ if ((lockclass_mask & LOCKTYPE_RWSEM) && debug_locks != expected)
+ expected_failure = 1;
+#endif
+ if (debug_locks != expected) {
+ if (expected_failure) {
+ expected_testcase_failures++;
+ printk("failed|");
+ } else {
+ unexpected_testcase_failures++;
+ printk("FAILED|");
+ }
+ } else {
+ testcase_successes++;
+ printk(" ok |");
+ }
+ testcase_total++;
+
+ if (debug_locks_verbose)
+ printk(" lockclass mask: %x, debug_locks: %d, expected: %d\n",
+ lockclass_mask, debug_locks, expected);
+ /*
+ * Some tests (e.g. double-unlock) might corrupt the preemption
+ * count, so restore it:
+ */
+ preempt_count() = saved_preempt_count;
+#ifdef CONFIG_TRACE_IRQFLAGS
+ if (softirq_count())
+ current->softirqs_enabled = 0;
+ else
+ current->softirqs_enabled = 1;
+#endif
+
+ reset_locks();
+}
+
+static inline void print_testname(const char *testname)
+{
+ printk("%33s:", testname);
+}
+
+#define DO_TESTCASE_1(desc, name, nr) \
+ print_testname(desc"/"#nr); \
+ dotest(name##_##nr, SUCCESS, LOCKTYPE_RWLOCK); \
+ printk("\n");
+
+#define DO_TESTCASE_1B(desc, name, nr) \
+ print_testname(desc"/"#nr); \
+ dotest(name##_##nr, FAILURE, LOCKTYPE_RWLOCK); \
+ printk("\n");
+
+#define DO_TESTCASE_3(desc, name, nr) \
+ print_testname(desc"/"#nr); \
+ dotest(name##_spin_##nr, FAILURE, LOCKTYPE_SPIN); \
+ dotest(name##_wlock_##nr, FAILURE, LOCKTYPE_RWLOCK); \
+ dotest(name##_rlock_##nr, SUCCESS, LOCKTYPE_RWLOCK); \
+ printk("\n");
+
+#define DO_TESTCASE_3RW(desc, name, nr) \
+ print_testname(desc"/"#nr); \
+ dotest(name##_spin_##nr, FAILURE, LOCKTYPE_SPIN|LOCKTYPE_RWLOCK);\
+ dotest(name##_wlock_##nr, FAILURE, LOCKTYPE_RWLOCK); \
+ dotest(name##_rlock_##nr, SUCCESS, LOCKTYPE_RWLOCK); \
+ printk("\n");
+
+#define DO_TESTCASE_6(desc, name) \
+ print_testname(desc); \
+ dotest(name##_spin, FAILURE, LOCKTYPE_SPIN); \
+ dotest(name##_wlock, FAILURE, LOCKTYPE_RWLOCK); \
+ dotest(name##_rlock, FAILURE, LOCKTYPE_RWLOCK); \
+ dotest(name##_mutex, FAILURE, LOCKTYPE_MUTEX); \
+ dotest(name##_wsem, FAILURE, LOCKTYPE_RWSEM); \
+ dotest(name##_rsem, FAILURE, LOCKTYPE_RWSEM); \
+ printk("\n");
+
+#define DO_TESTCASE_6_SUCCESS(desc, name) \
+ print_testname(desc); \
+ dotest(name##_spin, SUCCESS, LOCKTYPE_SPIN); \
+ dotest(name##_wlock, SUCCESS, LOCKTYPE_RWLOCK); \
+ dotest(name##_rlock, SUCCESS, LOCKTYPE_RWLOCK); \
+ dotest(name##_mutex, SUCCESS, LOCKTYPE_MUTEX); \
+ dotest(name##_wsem, SUCCESS, LOCKTYPE_RWSEM); \
+ dotest(name##_rsem, SUCCESS, LOCKTYPE_RWSEM); \
+ printk("\n");
+
+/*
+ * 'read' variant: rlocks must not trigger.
+ */
+#define DO_TESTCASE_6R(desc, name) \
+ print_testname(desc); \
+ dotest(name##_spin, FAILURE, LOCKTYPE_SPIN); \
+ dotest(name##_wlock, FAILURE, LOCKTYPE_RWLOCK); \
+ dotest(name##_rlock, SUCCESS, LOCKTYPE_RWLOCK); \
+ dotest(name##_mutex, FAILURE, LOCKTYPE_MUTEX); \
+ dotest(name##_wsem, FAILURE, LOCKTYPE_RWSEM); \
+ dotest(name##_rsem, FAILURE, LOCKTYPE_RWSEM); \
+ printk("\n");
+
+#define DO_TESTCASE_2I(desc, name, nr) \
+ DO_TESTCASE_1("hard-"desc, name##_hard, nr); \
+ DO_TESTCASE_1("soft-"desc, name##_soft, nr);
+
+#define DO_TESTCASE_2IB(desc, name, nr) \
+ DO_TESTCASE_1B("hard-"desc, name##_hard, nr); \
+ DO_TESTCASE_1B("soft-"desc, name##_soft, nr);
+
+#define DO_TESTCASE_6I(desc, name, nr) \
+ DO_TESTCASE_3("hard-"desc, name##_hard, nr); \
+ DO_TESTCASE_3("soft-"desc, name##_soft, nr);
+
+#define DO_TESTCASE_6IRW(desc, name, nr) \
+ DO_TESTCASE_3RW("hard-"desc, name##_hard, nr); \
+ DO_TESTCASE_3RW("soft-"desc, name##_soft, nr);
+
+#define DO_TESTCASE_2x3(desc, name) \
+ DO_TESTCASE_3(desc, name, 12); \
+ DO_TESTCASE_3(desc, name, 21);
+
+#define DO_TESTCASE_2x6(desc, name) \
+ DO_TESTCASE_6I(desc, name, 12); \
+ DO_TESTCASE_6I(desc, name, 21);
+
+#define DO_TESTCASE_6x2(desc, name) \
+ DO_TESTCASE_2I(desc, name, 123); \
+ DO_TESTCASE_2I(desc, name, 132); \
+ DO_TESTCASE_2I(desc, name, 213); \
+ DO_TESTCASE_2I(desc, name, 231); \
+ DO_TESTCASE_2I(desc, name, 312); \
+ DO_TESTCASE_2I(desc, name, 321);
+
+#define DO_TESTCASE_6x2B(desc, name) \
+ DO_TESTCASE_2IB(desc, name, 123); \
+ DO_TESTCASE_2IB(desc, name, 132); \
+ DO_TESTCASE_2IB(desc, name, 213); \
+ DO_TESTCASE_2IB(desc, name, 231); \
+ DO_TESTCASE_2IB(desc, name, 312); \
+ DO_TESTCASE_2IB(desc, name, 321);
+
+#define DO_TESTCASE_6x6(desc, name) \
+ DO_TESTCASE_6I(desc, name, 123); \
+ DO_TESTCASE_6I(desc, name, 132); \
+ DO_TESTCASE_6I(desc, name, 213); \
+ DO_TESTCASE_6I(desc, name, 231); \
+ DO_TESTCASE_6I(desc, name, 312); \
+ DO_TESTCASE_6I(desc, name, 321);
+
+#define DO_TESTCASE_6x6RW(desc, name) \
+ DO_TESTCASE_6IRW(desc, name, 123); \
+ DO_TESTCASE_6IRW(desc, name, 132); \
+ DO_TESTCASE_6IRW(desc, name, 213); \
+ DO_TESTCASE_6IRW(desc, name, 231); \
+ DO_TESTCASE_6IRW(desc, name, 312); \
+ DO_TESTCASE_6IRW(desc, name, 321);
+
+
+void locking_selftest(void)
+{
+ /*
+ * Got a locking failure before the selftest ran?
+ */
+ if (!debug_locks) {
+ printk("----------------------------------\n");
+ printk("| Locking API testsuite disabled |\n");
+ printk("----------------------------------\n");
+ return;
+ }
+
+ /*
+ * Run the testsuite:
+ */
+ printk("------------------------\n");
+ printk("| Locking API testsuite:\n");
+ printk("----------------------------------------------------------------------------\n");
+ printk(" | spin |wlock |rlock |mutex | wsem | rsem |\n");
+ printk(" --------------------------------------------------------------------------\n");
+
+ init_shared_classes();
+ debug_locks_silent = !debug_locks_verbose;
+
+ DO_TESTCASE_6R("A-A deadlock", AA);
+ DO_TESTCASE_6R("A-B-B-A deadlock", ABBA);
+ DO_TESTCASE_6R("A-B-B-C-C-A deadlock", ABBCCA);
+ DO_TESTCASE_6R("A-B-C-A-B-C deadlock", ABCABC);
+ DO_TESTCASE_6R("A-B-B-C-C-D-D-A deadlock", ABBCCDDA);
+ DO_TESTCASE_6R("A-B-C-D-B-D-D-A deadlock", ABCDBDDA);
+ DO_TESTCASE_6R("A-B-C-D-B-C-D-A deadlock", ABCDBCDA);
+ DO_TESTCASE_6("double unlock", double_unlock);
+ DO_TESTCASE_6("initialize held", init_held);
+ DO_TESTCASE_6_SUCCESS("bad unlock order", bad_unlock_order);
+
+ printk(" --------------------------------------------------------------------------\n");
+ print_testname("recursive read-lock");
+ printk(" |");
+ dotest(rlock_AA1, SUCCESS, LOCKTYPE_RWLOCK);
+ printk(" |");
+ dotest(rsem_AA1, FAILURE, LOCKTYPE_RWSEM);
+ printk("\n");
+
+ print_testname("recursive read-lock #2");
+ printk(" |");
+ dotest(rlock_AA1B, SUCCESS, LOCKTYPE_RWLOCK);
+ printk(" |");
+ dotest(rsem_AA1B, FAILURE, LOCKTYPE_RWSEM);
+ printk("\n");
+
+ print_testname("mixed read-write-lock");
+ printk(" |");
+ dotest(rlock_AA2, FAILURE, LOCKTYPE_RWLOCK);
+ printk(" |");
+ dotest(rsem_AA2, FAILURE, LOCKTYPE_RWSEM);
+ printk("\n");
+
+ print_testname("mixed write-read-lock");
+ printk(" |");
+ dotest(rlock_AA3, FAILURE, LOCKTYPE_RWLOCK);
+ printk(" |");
+ dotest(rsem_AA3, FAILURE, LOCKTYPE_RWSEM);
+ printk("\n");
+
+ printk(" --------------------------------------------------------------------------\n");
+
+ /*
+ * irq-context testcases:
+ */
+ DO_TESTCASE_2x6("irqs-on + irq-safe-A", irqsafe1);
+ DO_TESTCASE_2x3("sirq-safe-A => hirqs-on", irqsafe2A);
+ DO_TESTCASE_2x6("safe-A + irqs-on", irqsafe2B);
+ DO_TESTCASE_6x6("safe-A + unsafe-B #1", irqsafe3);
+ DO_TESTCASE_6x6("safe-A + unsafe-B #2", irqsafe4);
+ DO_TESTCASE_6x6RW("irq lock-inversion", irq_inversion);
+
+ DO_TESTCASE_6x2("irq read-recursion", irq_read_recursion);
+// DO_TESTCASE_6x2B("irq read-recursion #2", irq_read_recursion2);
+
+ if (unexpected_testcase_failures) {
+ printk("-----------------------------------------------------------------\n");
+ debug_locks = 0;
+ printk("BUG: %3d unexpected failures (out of %3d) - debugging disabled! |\n",
+ unexpected_testcase_failures, testcase_total);
+ printk("-----------------------------------------------------------------\n");
+ } else if (expected_testcase_failures && testcase_successes) {
+ printk("--------------------------------------------------------\n");
+ printk("%3d out of %3d testcases failed, as expected. |\n",
+ expected_testcase_failures, testcase_total);
+ printk("----------------------------------------------------\n");
+ debug_locks = 1;
+ } else if (expected_testcase_failures && !testcase_successes) {
+ printk("--------------------------------------------------------\n");
+ printk("All %3d testcases failed, as expected. |\n",
+ expected_testcase_failures);
+ printk("----------------------------------------\n");
+ debug_locks = 1;
+ } else {
+ printk("-------------------------------------------------------\n");
+ printk("Good, all %3d testcases passed! |\n",
+ testcase_successes);
+ printk("---------------------------------\n");
+ debug_locks = 1;
+ }
+ debug_locks_silent = 0;
+}
#define RWSEM_WAITING_FOR_WRITE 0x00000002
};
-#if RWSEM_DEBUG
-void rwsemtrace(struct rw_semaphore *sem, const char *str)
-{
- if (sem->debug)
- printk("[%d] %s({%d,%d})\n",
- current->pid, str, sem->activity,
- list_empty(&sem->wait_list) ? 0 : 1);
-}
-#endif
-
/*
* initialise the semaphore
*/
-void fastcall init_rwsem(struct rw_semaphore *sem)
+void __init_rwsem(struct rw_semaphore *sem, const char *name,
+ struct lock_class_key *key)
{
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+ /*
+ * Make sure we are not reinitializing a held semaphore:
+ */
+ debug_check_no_locks_freed((void *)sem, sizeof(*sem));
+ lockdep_init_map(&sem->dep_map, name, key);
+#endif
sem->activity = 0;
spin_lock_init(&sem->wait_lock);
INIT_LIST_HEAD(&sem->wait_list);
-#if RWSEM_DEBUG
- sem->debug = 0;
-#endif
}
/*
struct task_struct *tsk;
int woken;
- rwsemtrace(sem, "Entering __rwsem_do_wake");
-
waiter = list_entry(sem->wait_list.next, struct rwsem_waiter, list);
if (!wakewrite) {
sem->activity += woken;
out:
- rwsemtrace(sem, "Leaving __rwsem_do_wake");
return sem;
}
struct rwsem_waiter waiter;
struct task_struct *tsk;
- rwsemtrace(sem, "Entering __down_read");
-
spin_lock_irq(&sem->wait_lock);
if (sem->activity >= 0 && list_empty(&sem->wait_list)) {
}
tsk->state = TASK_RUNNING;
-
out:
- rwsemtrace(sem, "Leaving __down_read");
+ ;
}
/*
unsigned long flags;
int ret = 0;
- rwsemtrace(sem, "Entering __down_read_trylock");
spin_lock_irqsave(&sem->wait_lock, flags);
spin_unlock_irqrestore(&sem->wait_lock, flags);
- rwsemtrace(sem, "Leaving __down_read_trylock");
return ret;
}
* get a write lock on the semaphore
* - we increment the waiting count anyway to indicate an exclusive lock
*/
-void fastcall __sched __down_write(struct rw_semaphore *sem)
+void fastcall __sched __down_write_nested(struct rw_semaphore *sem, int subclass)
{
struct rwsem_waiter waiter;
struct task_struct *tsk;
- rwsemtrace(sem, "Entering __down_write");
-
spin_lock_irq(&sem->wait_lock);
if (sem->activity == 0 && list_empty(&sem->wait_list)) {
}
tsk->state = TASK_RUNNING;
-
out:
- rwsemtrace(sem, "Leaving __down_write");
+ ;
+}
+
+void fastcall __sched __down_write(struct rw_semaphore *sem)
+{
+ __down_write_nested(sem, 0);
}
/*
unsigned long flags;
int ret = 0;
- rwsemtrace(sem, "Entering __down_write_trylock");
-
spin_lock_irqsave(&sem->wait_lock, flags);
if (sem->activity == 0 && list_empty(&sem->wait_list)) {
spin_unlock_irqrestore(&sem->wait_lock, flags);
- rwsemtrace(sem, "Leaving __down_write_trylock");
return ret;
}
{
unsigned long flags;
- rwsemtrace(sem, "Entering __up_read");
-
spin_lock_irqsave(&sem->wait_lock, flags);
if (--sem->activity == 0 && !list_empty(&sem->wait_list))
sem = __rwsem_wake_one_writer(sem);
spin_unlock_irqrestore(&sem->wait_lock, flags);
-
- rwsemtrace(sem, "Leaving __up_read");
}
/*
{
unsigned long flags;
- rwsemtrace(sem, "Entering __up_write");
-
spin_lock_irqsave(&sem->wait_lock, flags);
sem->activity = 0;
sem = __rwsem_do_wake(sem, 1);
spin_unlock_irqrestore(&sem->wait_lock, flags);
-
- rwsemtrace(sem, "Leaving __up_write");
}
/*
{
unsigned long flags;
- rwsemtrace(sem, "Entering __downgrade_write");
-
spin_lock_irqsave(&sem->wait_lock, flags);
sem->activity = 1;
sem = __rwsem_do_wake(sem, 0);
spin_unlock_irqrestore(&sem->wait_lock, flags);
-
- rwsemtrace(sem, "Leaving __downgrade_write");
}
-EXPORT_SYMBOL(init_rwsem);
+EXPORT_SYMBOL(__init_rwsem);
EXPORT_SYMBOL(__down_read);
EXPORT_SYMBOL(__down_read_trylock);
+EXPORT_SYMBOL(__down_write_nested);
EXPORT_SYMBOL(__down_write);
EXPORT_SYMBOL(__down_write_trylock);
EXPORT_SYMBOL(__up_read);
EXPORT_SYMBOL(__up_write);
EXPORT_SYMBOL(__downgrade_write);
-#if RWSEM_DEBUG
-EXPORT_SYMBOL(rwsemtrace);
-#endif
#include <linux/init.h>
#include <linux/module.h>
+/*
+ * Initialize an rwsem:
+ */
+void __init_rwsem(struct rw_semaphore *sem, const char *name,
+ struct lock_class_key *key)
+{
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+ /*
+ * Make sure we are not reinitializing a held semaphore:
+ */
+ debug_check_no_locks_freed((void *)sem, sizeof(*sem));
+ lockdep_init_map(&sem->dep_map, name, key);
+#endif
+ sem->count = RWSEM_UNLOCKED_VALUE;
+ spin_lock_init(&sem->wait_lock);
+ INIT_LIST_HEAD(&sem->wait_list);
+}
+
+EXPORT_SYMBOL(__init_rwsem);
+
struct rwsem_waiter {
struct list_head list;
struct task_struct *task;
#define RWSEM_WAITING_FOR_WRITE 0x00000002
};
-#if RWSEM_DEBUG
-#undef rwsemtrace
-void rwsemtrace(struct rw_semaphore *sem, const char *str)
-{
- printk("sem=%p\n", sem);
- printk("(sem)=%08lx\n", sem->count);
- if (sem->debug)
- printk("[%d] %s({%08lx})\n", current->pid, str, sem->count);
-}
-#endif
-
/*
* handle the lock release when processes blocked on it that can now run
* - if we come here from up_xxxx(), then:
struct list_head *next;
signed long oldcount, woken, loop;
- rwsemtrace(sem, "Entering __rwsem_do_wake");
-
if (downgrading)
goto dont_wake_writers;
next->prev = &sem->wait_list;
out:
- rwsemtrace(sem, "Leaving __rwsem_do_wake");
return sem;
/* undo the change to count, but check for a transition 1->0 */
{
struct rwsem_waiter waiter;
- rwsemtrace(sem, "Entering rwsem_down_read_failed");
-
waiter.flags = RWSEM_WAITING_FOR_READ;
rwsem_down_failed_common(sem, &waiter,
RWSEM_WAITING_BIAS - RWSEM_ACTIVE_BIAS);
-
- rwsemtrace(sem, "Leaving rwsem_down_read_failed");
return sem;
}
{
struct rwsem_waiter waiter;
- rwsemtrace(sem, "Entering rwsem_down_write_failed");
-
waiter.flags = RWSEM_WAITING_FOR_WRITE;
rwsem_down_failed_common(sem, &waiter, -RWSEM_ACTIVE_BIAS);
- rwsemtrace(sem, "Leaving rwsem_down_write_failed");
return sem;
}
{
unsigned long flags;
- rwsemtrace(sem, "Entering rwsem_wake");
-
spin_lock_irqsave(&sem->wait_lock, flags);
/* do nothing if list empty */
spin_unlock_irqrestore(&sem->wait_lock, flags);
- rwsemtrace(sem, "Leaving rwsem_wake");
-
return sem;
}
{
unsigned long flags;
- rwsemtrace(sem, "Entering rwsem_downgrade_wake");
-
spin_lock_irqsave(&sem->wait_lock, flags);
/* do nothing if list empty */
spin_unlock_irqrestore(&sem->wait_lock, flags);
- rwsemtrace(sem, "Leaving rwsem_downgrade_wake");
return sem;
}
EXPORT_SYMBOL(rwsem_down_write_failed);
EXPORT_SYMBOL(rwsem_wake);
EXPORT_SYMBOL(rwsem_downgrade_wake);
-#if RWSEM_DEBUG
-EXPORT_SYMBOL(rwsemtrace);
-#endif
#include <linux/spinlock.h>
#include <linux/interrupt.h>
+#include <linux/debug_locks.h>
#include <linux/delay.h>
+#include <linux/module.h>
+
+void __spin_lock_init(spinlock_t *lock, const char *name,
+ struct lock_class_key *key)
+{
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+ /*
+ * Make sure we are not reinitializing a held lock:
+ */
+ debug_check_no_locks_freed((void *)lock, sizeof(*lock));
+ lockdep_init_map(&lock->dep_map, name, key);
+#endif
+ lock->raw_lock = (raw_spinlock_t)__RAW_SPIN_LOCK_UNLOCKED;
+ lock->magic = SPINLOCK_MAGIC;
+ lock->owner = SPINLOCK_OWNER_INIT;
+ lock->owner_cpu = -1;
+}
+
+EXPORT_SYMBOL(__spin_lock_init);
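Callers do not normally invoke __spin_lock_init() directly; the spin_lock_init() wrapper (not part of this hunk, shown here only as a sketch of the expected shape) supplies the name and a per-call-site static key:

	#define spin_lock_init(lock)					\
	do {								\
		static struct lock_class_key __key;			\
									\
		__spin_lock_init((lock), #lock, &__key);		\
	} while (0)

so every static expansion of spin_lock_init() contributes one lockdep class, and all locks initialized through the same expansion share it.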
+
+void __rwlock_init(rwlock_t *lock, const char *name,
+ struct lock_class_key *key)
+{
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+ /*
+ * Make sure we are not reinitializing a held lock:
+ */
+ debug_check_no_locks_freed((void *)lock, sizeof(*lock));
+ lockdep_init_map(&lock->dep_map, name, key);
+#endif
+ lock->raw_lock = (raw_rwlock_t) __RAW_RW_LOCK_UNLOCKED;
+ lock->magic = RWLOCK_MAGIC;
+ lock->owner = SPINLOCK_OWNER_INIT;
+ lock->owner_cpu = -1;
+}
+
+EXPORT_SYMBOL(__rwlock_init);
static void spin_bug(spinlock_t *lock, const char *msg)
{
- static long print_once = 1;
struct task_struct *owner = NULL;
- if (xchg(&print_once, 0)) {
- if (lock->owner && lock->owner != SPINLOCK_OWNER_INIT)
- owner = lock->owner;
- printk(KERN_EMERG "BUG: spinlock %s on CPU#%d, %s/%d\n",
- msg, raw_smp_processor_id(),
- current->comm, current->pid);
- printk(KERN_EMERG " lock: %p, .magic: %08x, .owner: %s/%d, "
- ".owner_cpu: %d\n",
- lock, lock->magic,
- owner ? owner->comm : "<none>",
- owner ? owner->pid : -1,
- lock->owner_cpu);
- dump_stack();
-#ifdef CONFIG_SMP
- /*
- * We cannot continue on SMP:
- */
-// panic("bad locking");
-#endif
- }
+ if (!debug_locks_off())
+ return;
+
+ if (lock->owner && lock->owner != SPINLOCK_OWNER_INIT)
+ owner = lock->owner;
+ printk(KERN_EMERG "BUG: spinlock %s on CPU#%d, %s/%d\n",
+ msg, raw_smp_processor_id(),
+ current->comm, current->pid);
+ printk(KERN_EMERG " lock: %p, .magic: %08x, .owner: %s/%d, "
+ ".owner_cpu: %d\n",
+ lock, lock->magic,
+ owner ? owner->comm : "<none>",
+ owner ? owner->pid : -1,
+ lock->owner_cpu);
+ dump_stack();
}
#define SPIN_BUG_ON(cond, lock, msg) if (unlikely(cond)) spin_bug(lock, msg)
-static inline void debug_spin_lock_before(spinlock_t *lock)
+static inline void
+debug_spin_lock_before(spinlock_t *lock)
{
SPIN_BUG_ON(lock->magic != SPINLOCK_MAGIC, lock, "bad magic");
SPIN_BUG_ON(lock->owner == current, lock, "recursion");
static void rwlock_bug(rwlock_t *lock, const char *msg)
{
- static long print_once = 1;
-
- if (xchg(&print_once, 0)) {
- printk(KERN_EMERG "BUG: rwlock %s on CPU#%d, %s/%d, %p\n",
- msg, raw_smp_processor_id(), current->comm,
- current->pid, lock);
- dump_stack();
-#ifdef CONFIG_SMP
- /*
- * We cannot continue on SMP:
- */
- panic("bad locking");
-#endif
- }
+ if (!debug_locks_off())
+ return;
+
+ printk(KERN_EMERG "BUG: rwlock %s on CPU#%d, %s/%d, %p\n",
+ msg, raw_smp_processor_id(), current->comm,
+ current->pid, lock);
+ dump_stack();
}
#define RWLOCK_BUG_ON(cond, lock, msg) if (unlikely(cond)) rwlock_bug(lock, msg)
return -ENOMEM;
src_pte = pte_offset_map_nested(src_pmd, addr);
src_ptl = pte_lockptr(src_mm, src_pmd);
- spin_lock(src_ptl);
+ spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
do {
/*
new_pte = pte_offset_map_nested(new_pmd, new_addr);
new_ptl = pte_lockptr(mm, new_pmd);
if (new_ptl != old_ptl)
- spin_lock(new_ptl);
+ spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
for (; old_addr < old_end; old_pte++, old_addr += PAGE_SIZE,
new_pte++, new_addr += PAGE_SIZE) {
* CAP_SYS_RAW_IO set, send SIGTERM instead (but it's unlikely that
* we select a process with CAP_SYS_RAW_IO set).
*/
-static void __oom_kill_task(task_t *p, const char *message)
+static void __oom_kill_task(struct task_struct *p, const char *message)
{
if (p->pid == 1) {
WARN_ON(1);
force_sig(SIGKILL, p);
}
-static int oom_kill_task(task_t *p, const char *message)
+static int oom_kill_task(struct task_struct *p, const char *message)
{
struct mm_struct *mm;
- task_t * g, * q;
+ struct task_struct *g, *q;
mm = p->mm;
*/
void out_of_memory(struct zonelist *zonelist, gfp_t gfp_mask, int order)
{
- task_t *p;
+ struct task_struct *p;
unsigned long points = 0;
if (printk_ratelimit()) {
zone->spanned_pages = size;
zone->present_pages = realsize;
+#ifdef CONFIG_NUMA
+ zone->min_unmapped_ratio = (realsize*sysctl_min_unmapped_ratio)
+ / 100;
+#endif
zone->name = zone_names[j];
spin_lock_init(&zone->lock);
spin_lock_init(&zone->lru_lock);
return 0;
}
+#ifdef CONFIG_NUMA
+int sysctl_min_unmapped_ratio_sysctl_handler(ctl_table *table, int write,
+ struct file *file, void __user *buffer, size_t *length, loff_t *ppos)
+{
+ struct zone *zone;
+ int rc;
+
+ rc = proc_dointvec_minmax(table, write, file, buffer, length, ppos);
+ if (rc)
+ return rc;
+
+ for_each_zone(zone)
+ zone->min_unmapped_ratio = (zone->present_pages *
+ sysctl_min_unmapped_ratio) / 100;
+ return 0;
+}
+#endif
+
/*
* lowmem_reserve_ratio_sysctl_handler - just a wrapper around
* proc_dointvec() so that we can call setup_per_zone_lowmem_reserve()
}
}
-static inline int cache_free_alien(struct kmem_cache *cachep, void *objp)
+static inline int cache_free_alien(struct kmem_cache *cachep, void *objp,
+ int nesting)
{
struct slab *slabp = virt_to_slab(objp);
int nodeid = slabp->nodeid;
STATS_INC_NODEFREES(cachep);
if (l3->alien && l3->alien[nodeid]) {
alien = l3->alien[nodeid];
- spin_lock(&alien->lock);
+ spin_lock_nested(&alien->lock, nesting);
if (unlikely(alien->avail == alien->limit)) {
STATS_INC_ACOVERFLOW(cachep);
__drain_alien_cache(cachep, alien, nodeid);
{
}
-static inline int cache_free_alien(struct kmem_cache *cachep, void *objp)
+static inline int cache_free_alien(struct kmem_cache *cachep, void *objp,
+ int nesting)
{
return 0;
}
local_irq_disable();
memcpy(ptr, list, sizeof(struct kmem_list3));
+ /*
+ * Do not assume that spinlocks can be initialized via memcpy:
+ */
+ spin_lock_init(&ptr->list_lock);
+
MAKE_ALL_LISTS(cachep, ptr, nodeid);
cachep->nodelists[nodeid] = ptr;
local_irq_enable();
}
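The reason for the explicit spin_lock_init() after the memcpy() above (and for the analogous re-inits in the following hunks) is presumably that copying the bootstrap structure wholesale also copies whatever debug and lockdep state is embedded in its spinlock; re-initializing the relocated lock in place gives it clean state instead of a bitwise copy of the boot-time lock.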
/* 4) Replace the bootstrap head arrays */
{
- void *ptr;
+ struct array_cache *ptr;
ptr = kmalloc(sizeof(struct arraycache_init), GFP_KERNEL);
BUG_ON(cpu_cache_get(&cache_cache) != &initarray_cache.cache);
memcpy(ptr, cpu_cache_get(&cache_cache),
sizeof(struct arraycache_init));
+ /*
+ * Do not assume that spinlocks can be initialized via memcpy:
+ */
+ spin_lock_init(&ptr->lock);
+
cache_cache.array[smp_processor_id()] = ptr;
local_irq_enable();
!= &initarray_generic.cache);
memcpy(ptr, cpu_cache_get(malloc_sizes[INDEX_AC].cs_cachep),
sizeof(struct arraycache_init));
+ /*
+ * Do not assume that spinlocks can be initialized via memcpy:
+ */
+ spin_lock_init(&ptr->lock);
+
malloc_sizes[INDEX_AC].cs_cachep->array[smp_processor_id()] =
ptr;
local_irq_enable();
}
#endif
+static void __cache_free(struct kmem_cache *cachep, void *objp, int nesting);
+
/**
* slab_destroy - destroy and release all objects in a slab
* @cachep: cache pointer being destroyed
call_rcu(&slab_rcu->head, kmem_rcu_free);
} else {
kmem_freepages(cachep, addr);
- if (OFF_SLAB(cachep))
- kmem_cache_free(cachep->slabp_cache, slabp);
+ if (OFF_SLAB(cachep)) {
+ unsigned long flags;
+
+ /*
+ * lockdep: we may nest inside an already held
+ * ac->lock, so pass in a nesting flag:
+ */
+ local_irq_save(flags);
+ __cache_free(cachep->slabp_cache, slabp, 1);
+ local_irq_restore(flags);
+ }
}
}
if (slabp->inuse == 0) {
if (l3->free_objects > l3->free_limit) {
l3->free_objects -= cachep->num;
+ /*
+ * It is safe to drop the lock. The slab is
+ * no longer linked to the cache. cachep
+ * cannot disappear - we are using it and
+ * all destruction of caches must be
+ * serialized properly by the user.
+ */
+ spin_unlock(&l3->list_lock);
slab_destroy(cachep, slabp);
+ spin_lock(&l3->list_lock);
} else {
list_add(&slabp->list, &l3->slabs_free);
}
#endif
check_irq_off();
l3 = cachep->nodelists[node];
- spin_lock(&l3->list_lock);
+ spin_lock_nested(&l3->list_lock, SINGLE_DEPTH_NESTING);
if (l3->shared) {
struct array_cache *shared_array = l3->shared;
int max = shared_array->limit - shared_array->avail;
* Release an obj back to its cache. If the obj has a constructed state, it must
* be in this state _before_ it is released. Called with disabled ints.
*/
-static inline void __cache_free(struct kmem_cache *cachep, void *objp)
+static void __cache_free(struct kmem_cache *cachep, void *objp, int nesting)
{
struct array_cache *ac = cpu_cache_get(cachep);
check_irq_off();
objp = cache_free_debugcheck(cachep, objp, __builtin_return_address(0));
- if (cache_free_alien(cachep, objp))
+ if (cache_free_alien(cachep, objp, nesting))
return;
if (likely(ac->avail < ac->limit)) {
BUG_ON(virt_to_cache(objp) != cachep);
local_irq_save(flags);
- __cache_free(cachep, objp);
+ __cache_free(cachep, objp, 0);
local_irq_restore(flags);
}
EXPORT_SYMBOL(kmem_cache_free);
kfree_debugcheck(objp);
c = virt_to_cache(objp);
debug_check_no_locks_freed(objp, obj_size(c));
- __cache_free(c, (void *)objp);
+ __cache_free(c, (void *)objp, 0);
local_irq_restore(flags);
}
EXPORT_SYMBOL(kfree);
struct address_space swapper_space = {
.page_tree = RADIX_TREE_INIT(GFP_ATOMIC|__GFP_NOWARN),
- .tree_lock = RW_LOCK_UNLOCKED,
+ .tree_lock = __RW_LOCK_UNLOCKED(swapper_space.tree_lock),
.a_ops = &swap_aops,
.i_mmap_nonlinear = LIST_HEAD_INIT(swapper_space.i_mmap_nonlinear),
.backing_dev_info = &swap_backing_dev_info,
return;
}
+ debug_check_no_locks_freed(addr, area->size);
+
if (deallocate_pages) {
int i;
*
* If non-zero call zone_reclaim when the number of free pages falls below
* the watermarks.
- *
- * In the future we may add flags to the mode. However, the page allocator
- * should only have to check that zone_reclaim_mode != 0 before calling
- * zone_reclaim().
*/
int zone_reclaim_mode __read_mostly;
*/
#define ZONE_RECLAIM_PRIORITY 4
+/*
+ * Percentage of pages in a zone that must be unmapped for zone_reclaim to
+ * occur.
+ */
+int sysctl_min_unmapped_ratio = 1;
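Despite the name, the per-zone min_unmapped_ratio set up in the page_alloc hunk above (and recomputed by sysctl_min_unmapped_ratio_sysctl_handler()) is stored as a page count: present_pages * sysctl_min_unmapped_ratio / 100. With the default of 1, a zone with, say, 1,000,000 present pages gets a threshold of 10,000 pages, and zone_reclaim() bails out unless NR_FILE_PAGES - NR_FILE_MAPPED exceeds that.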
+
/*
* Try to free up some pages from this zone through reclaim.
*/
int node_id;
/*
- * Do not reclaim if there are not enough reclaimable pages in this
- * zone that would satify this allocations.
+ * Zone reclaim reclaims unmapped file backed pages.
*
- * All unmapped pagecache pages are reclaimable.
- *
- * Both counters may be temporarily off a bit so we use
- * SWAP_CLUSTER_MAX as the boundary. It may also be good to
- * leave a few frequently used unmapped pagecache pages around.
+ * A small portion of unmapped file backed pages is needed for
+ * file I/O otherwise pages read by file I/O will be immediately
+ * thrown out if the zone is overallocated. So we do not reclaim
+ * if less than a specified percentage of the zone is used by
+ * unmapped file backed pages.
*/
if (zone_page_state(zone, NR_FILE_PAGES) -
- zone_page_state(zone, NR_FILE_MAPPED) < SWAP_CLUSTER_MAX)
- return 0;
+ zone_page_state(zone, NR_FILE_MAPPED) <= zone->min_unmapped_ratio)
+ return 0;
/*
* Avoid concurrent zone reclaims, do not reclaim in a zone that does
}
}
+/*
+ * vlan network devices have devices nesting below them, and are a special
+ * "super class" of normal network devices; split their locks off into a
+ * separate class since they always nest.
+ */
+static struct lock_class_key vlan_netdev_xmit_lock_key;
+
+
/* Attach a VLAN device to a mac address (ie Ethernet Card).
* Returns the device that was created, or NULL if there was
* an error of some kind.
new_dev = alloc_netdev(sizeof(struct vlan_dev_info), name,
vlan_setup);
+
if (new_dev == NULL)
goto out_unlock;
if (register_netdevice(new_dev))
goto out_free_newdev;
+ lockdep_set_class(&new_dev->_xmit_lock, &vlan_netdev_xmit_lock_key);
+
new_dev->iflink = real_dev->ifindex;
vlan_transfer_operstate(real_dev, new_dev);
linkwatch_fire_event(new_dev); /* _MUST_ call rfc2863_policy() */
#include <linux/if.h> /* for IFF_UP */
#include <linux/inetdevice.h>
#include <linux/bitops.h>
+#include <linux/poison.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <linux/rcupdate.h>
DPRINTK("clip_neigh_destroy (neigh %p)\n", neigh);
if (NEIGH2ENTRY(neigh)->vccs)
printk(KERN_CRIT "clip_neigh_destroy: vccs != NULL !!!\n");
- NEIGH2ENTRY(neigh)->vccs = (void *) 0xdeadbeef;
+ NEIGH2ENTRY(neigh)->vccs = (void *) NEIGHBOR_DEAD;
}
static void clip_neigh_solicit(struct neighbour *neigh, struct sk_buff *skb)
{
struct sk_buff *ourskb;
unsigned char *bp = skb->data;
- struct net_device *dev;
+ ax25_route *route;
+ struct net_device *dev = NULL;
ax25_address *src, *dst;
+ ax25_digi *digipeat = NULL;
ax25_dev *ax25_dev;
- ax25_route _route, *route = &_route;
ax25_cb *ax25;
+ char ip_mode = ' ';
dst = (ax25_address *)(bp + 1);
src = (ax25_address *)(bp + 8);
if (arp_find(bp + 1, skb))
return 1;
- route = ax25_rt_find_route(route, dst, NULL);
- dev = route->dev;
+ route = ax25_get_route(dst, NULL);
+ if (route) {
+ digipeat = route->digipeat;
+ dev = route->dev;
+ ip_mode = route->ip_mode;
+ };
if (dev == NULL)
dev = skb->dev;
}
if (bp[16] == AX25_P_IP) {
- if (route->ip_mode == 'V' || (route->ip_mode == ' ' && ax25_dev->values[AX25_VALUES_IPDEFMODE])) {
+ if (ip_mode == 'V' || (ip_mode == ' ' && ax25_dev->values[AX25_VALUES_IPDEFMODE])) {
/*
* We copy the buffer and release the original thereby
* keeping it straight
ourskb,
ax25_dev->values[AX25_VALUES_PACLEN],
&src_c,
- &dst_c, route->digipeat, dev);
+ &dst_c, digipeat, dev);
if (ax25) {
ax25_cb_put(ax25);
}
skb_pull(skb, AX25_KISS_HEADER_LEN);
- if (route->digipeat != NULL) {
+ if (digipeat != NULL) {
if ((ourskb = ax25_rt_build_path(skb, src, dst, route->digipeat)) == NULL) {
kfree_skb(skb);
goto put;
ax25_queue_xmit(skb, dev);
put:
- ax25_put_route(route);
+ if (route)
+ ax25_put_route(route);
return 1;
}
static ax25_route *ax25_route_list;
static DEFINE_RWLOCK(ax25_route_lock);
-static ax25_route *ax25_get_route(ax25_address *, struct net_device *);
-
void ax25_rt_device_down(struct net_device *dev)
{
ax25_route *s, *t, *ax25_rt;
return -ENOMEM;
}
- atomic_set(&ax25_rt->ref, 0);
+ atomic_set(&ax25_rt->refcount, 1);
ax25_rt->callsign = route->dest_addr;
ax25_rt->dev = ax25_dev->dev;
ax25_rt->digipeat = NULL;
return 0;
}
-static void ax25_rt_destroy(ax25_route *ax25_rt)
+void __ax25_put_route(ax25_route *ax25_rt)
{
- if (atomic_read(&ax25_rt->ref) == 0) {
- kfree(ax25_rt->digipeat);
- kfree(ax25_rt);
- return;
- }
-
- /*
- * Uh... Route is still in use; we can't yet destroy it. Retry later.
- */
- init_timer(&ax25_rt->timer);
- ax25_rt->timer.data = (unsigned long) ax25_rt;
- ax25_rt->timer.function = (void *) ax25_rt_destroy;
- ax25_rt->timer.expires = jiffies + 5 * HZ;
-
- add_timer(&ax25_rt->timer);
+ kfree(ax25_rt->digipeat);
+ kfree(ax25_rt);
}
static int ax25_rt_del(struct ax25_routes_struct *route)
ax25cmp(&route->dest_addr, &s->callsign) == 0) {
if (ax25_route_list == s) {
ax25_route_list = s->next;
- ax25_rt_destroy(s);
+ ax25_put_route(s);
} else {
for (t = ax25_route_list; t != NULL; t = t->next) {
if (t->next == s) {
t->next = s->next;
- ax25_rt_destroy(s);
+ ax25_put_route(s);
break;
}
}
*
* Only routes with a reference count of zero can be destroyed.
*/
-static ax25_route *ax25_get_route(ax25_address *addr, struct net_device *dev)
+ax25_route *ax25_get_route(ax25_address *addr, struct net_device *dev)
{
ax25_route *ax25_spe_rt = NULL;
ax25_route *ax25_def_rt = NULL;
ax25_rt = ax25_spe_rt;
if (ax25_rt != NULL)
- atomic_inc(&ax25_rt->ref);
+ ax25_hold_route(ax25_rt);
read_unlock(&ax25_route_lock);
return 0;
}
-ax25_route *ax25_rt_find_route(ax25_route * route, ax25_address *addr,
- struct net_device *dev)
-{
- ax25_route *ax25_rt;
-
- if ((ax25_rt = ax25_get_route(addr, dev)))
- return ax25_rt;
-
- route->next = NULL;
- atomic_set(&route->ref, 1);
- route->callsign = *addr;
- route->dev = dev;
- route->digipeat = NULL;
- route->ip_mode = ' ';
-
- return route;
-}
-
struct sk_buff *ax25_rt_build_path(struct sk_buff *skb, ax25_address *src,
ax25_address *dest, ax25_digi *digi)
{
#define BT_DBG(D...)
#endif
-#define VERSION "2.8"
+#define VERSION "2.10"
/* Bluetooth sockets */
#define BT_MAX_PROTO 8
static int __init bt_init(void)
{
+ int err;
+
BT_INFO("Core ver %s", VERSION);
- sock_register(&bt_sock_family_ops);
+ err = bt_sysfs_init();
+ if (err < 0)
+ return err;
- BT_INFO("HCI device and connection manager initialized");
+ err = sock_register(&bt_sock_family_ops);
+ if (err < 0) {
+ bt_sysfs_cleanup();
+ return err;
+ }
- bt_sysfs_init();
+ BT_INFO("HCI device and connection manager initialized");
hci_sock_init();
{
hci_sock_cleanup();
- bt_sysfs_cleanup();
-
sock_unregister(PF_BLUETOOTH);
+
+ bt_sysfs_cleanup();
}
subsys_initcall(bt_init);
static void hci_conn_timeout(unsigned long arg)
{
- struct hci_conn *conn = (void *)arg;
- struct hci_dev *hdev = conn->hdev;
+ struct hci_conn *conn = (void *) arg;
+ struct hci_dev *hdev = conn->hdev;
BT_DBG("conn %p state %d", conn, conn->state);
return;
}
-static void hci_conn_init_timer(struct hci_conn *conn)
+static void hci_conn_idle(unsigned long arg)
{
- init_timer(&conn->timer);
- conn->timer.function = hci_conn_timeout;
- conn->timer.data = (unsigned long)conn;
+ struct hci_conn *conn = (void *) arg;
+
+ BT_DBG("conn %p mode %d", conn, conn->mode);
+
+ hci_conn_enter_sniff_mode(conn);
}
struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst)
BT_DBG("%s dst %s", hdev->name, batostr(dst));
- if (!(conn = kmalloc(sizeof(struct hci_conn), GFP_ATOMIC)))
+ conn = kzalloc(sizeof(struct hci_conn), GFP_ATOMIC);
+ if (!conn)
return NULL;
- memset(conn, 0, sizeof(struct hci_conn));
bacpy(&conn->dst, dst);
- conn->type = type;
conn->hdev = hdev;
+ conn->type = type;
+ conn->mode = HCI_CM_ACTIVE;
conn->state = BT_OPEN;
+ conn->power_save = 1;
+
skb_queue_head_init(&conn->data_q);
- hci_conn_init_timer(conn);
+
+ init_timer(&conn->disc_timer);
+ conn->disc_timer.function = hci_conn_timeout;
+ conn->disc_timer.data = (unsigned long) conn;
+
+ init_timer(&conn->idle_timer);
+ conn->idle_timer.function = hci_conn_idle;
+ conn->idle_timer.data = (unsigned long) conn;
atomic_set(&conn->refcnt, 0);
BT_DBG("%s conn %p handle %d", hdev->name, conn, conn->handle);
- hci_conn_del_timer(conn);
+ del_timer(&conn->idle_timer);
+
+ del_timer(&conn->disc_timer);
if (conn->type == SCO_LINK) {
struct hci_conn *acl = conn->link;
}
EXPORT_SYMBOL(hci_conn_switch_role);
+/* Enter active mode */
+void hci_conn_enter_active_mode(struct hci_conn *conn)
+{
+ struct hci_dev *hdev = conn->hdev;
+
+ BT_DBG("conn %p mode %d", conn, conn->mode);
+
+ if (test_bit(HCI_RAW, &hdev->flags))
+ return;
+
+ if (conn->mode != HCI_CM_SNIFF || !conn->power_save)
+ goto timer;
+
+ if (!test_and_set_bit(HCI_CONN_MODE_CHANGE_PEND, &conn->pend)) {
+ struct hci_cp_exit_sniff_mode cp;
+ cp.handle = __cpu_to_le16(conn->handle);
+ hci_send_cmd(hdev, OGF_LINK_POLICY,
+ OCF_EXIT_SNIFF_MODE, sizeof(cp), &cp);
+ }
+
+timer:
+ if (hdev->idle_timeout > 0)
+ mod_timer(&conn->idle_timer,
+ jiffies + msecs_to_jiffies(hdev->idle_timeout));
+}
+
+/* Enter sniff mode */
+void hci_conn_enter_sniff_mode(struct hci_conn *conn)
+{
+ struct hci_dev *hdev = conn->hdev;
+
+ BT_DBG("conn %p mode %d", conn, conn->mode);
+
+ if (test_bit(HCI_RAW, &hdev->flags))
+ return;
+
+ if (!lmp_sniff_capable(hdev) || !lmp_sniff_capable(conn))
+ return;
+
+ if (conn->mode != HCI_CM_ACTIVE || !(conn->link_policy & HCI_LP_SNIFF))
+ return;
+
+ if (lmp_sniffsubr_capable(hdev) && lmp_sniffsubr_capable(conn)) {
+ struct hci_cp_sniff_subrate cp;
+ cp.handle = __cpu_to_le16(conn->handle);
+ cp.max_latency = __constant_cpu_to_le16(0);
+ cp.min_remote_timeout = __constant_cpu_to_le16(0);
+ cp.min_local_timeout = __constant_cpu_to_le16(0);
+ hci_send_cmd(hdev, OGF_LINK_POLICY,
+ OCF_SNIFF_SUBRATE, sizeof(cp), &cp);
+ }
+
+ if (!test_and_set_bit(HCI_CONN_MODE_CHANGE_PEND, &conn->pend)) {
+ struct hci_cp_sniff_mode cp;
+ cp.handle = __cpu_to_le16(conn->handle);
+ cp.max_interval = __cpu_to_le16(hdev->sniff_max_interval);
+ cp.min_interval = __cpu_to_le16(hdev->sniff_min_interval);
+ cp.attempt = __constant_cpu_to_le16(4);
+ cp.timeout = __constant_cpu_to_le16(1);
+ hci_send_cmd(hdev, OGF_LINK_POLICY,
+ OCF_SNIFF_MODE, sizeof(cp), &cp);
+ }
+}
+
/* Drop all connection on the device */
void hci_conn_hash_flush(struct hci_dev *hdev)
{
}
hci_dev_unlock_bh(hdev);
- timeo = ir.length * 2 * HZ;
+ timeo = ir.length * msecs_to_jiffies(2000);
if (do_inquiry && (err = hci_request(hdev, hci_inq_req, (unsigned long)&ir, timeo)) < 0)
goto done;
set_bit(HCI_INIT, &hdev->flags);
//__hci_request(hdev, hci_reset_req, 0, HZ);
- ret = __hci_request(hdev, hci_init_req, 0, HCI_INIT_TIMEOUT);
+ ret = __hci_request(hdev, hci_init_req, 0,
+ msecs_to_jiffies(HCI_INIT_TIMEOUT));
clear_bit(HCI_INIT, &hdev->flags);
}
atomic_set(&hdev->cmd_cnt, 1);
if (!test_bit(HCI_RAW, &hdev->flags)) {
set_bit(HCI_INIT, &hdev->flags);
- __hci_request(hdev, hci_reset_req, 0, HZ/4);
+ __hci_request(hdev, hci_reset_req, 0,
+ msecs_to_jiffies(250));
clear_bit(HCI_INIT, &hdev->flags);
}
hdev->acl_cnt = 0; hdev->sco_cnt = 0;
if (!test_bit(HCI_RAW, &hdev->flags))
- ret = __hci_request(hdev, hci_reset_req, 0, HCI_INIT_TIMEOUT);
+ ret = __hci_request(hdev, hci_reset_req, 0,
+ msecs_to_jiffies(HCI_INIT_TIMEOUT));
done:
tasklet_enable(&hdev->tx_task);
switch (cmd) {
case HCISETAUTH:
- err = hci_request(hdev, hci_auth_req, dr.dev_opt, HCI_INIT_TIMEOUT);
+ err = hci_request(hdev, hci_auth_req, dr.dev_opt,
+ msecs_to_jiffies(HCI_INIT_TIMEOUT));
break;
case HCISETENCRYPT:
if (!test_bit(HCI_AUTH, &hdev->flags)) {
/* Auth must be enabled first */
- err = hci_request(hdev, hci_auth_req,
- dr.dev_opt, HCI_INIT_TIMEOUT);
+ err = hci_request(hdev, hci_auth_req, dr.dev_opt,
+ msecs_to_jiffies(HCI_INIT_TIMEOUT));
if (err)
break;
}
- err = hci_request(hdev, hci_encrypt_req,
- dr.dev_opt, HCI_INIT_TIMEOUT);
+ err = hci_request(hdev, hci_encrypt_req, dr.dev_opt,
+ msecs_to_jiffies(HCI_INIT_TIMEOUT));
break;
case HCISETSCAN:
- err = hci_request(hdev, hci_scan_req, dr.dev_opt, HCI_INIT_TIMEOUT);
+ err = hci_request(hdev, hci_scan_req, dr.dev_opt,
+ msecs_to_jiffies(HCI_INIT_TIMEOUT));
break;
case HCISETPTYPE:
{
skb_queue_purge(&hdev->driver_init);
- /* will free via class release */
- class_device_put(&hdev->class_dev);
+ /* will free via device release */
+ put_device(&hdev->dev);
}
EXPORT_SYMBOL(hci_free_dev);
hdev->pkt_type = (HCI_DM1 | HCI_DH1 | HCI_HV1);
hdev->link_mode = (HCI_LM_ACCEPT);
+ hdev->idle_timeout = 0;
+ hdev->sniff_max_interval = 800;
+ hdev->sniff_min_interval = 80;
+
tasklet_init(&hdev->cmd_task, hci_cmd_task,(unsigned long) hdev);
tasklet_init(&hdev->rx_task, hci_rx_task, (unsigned long) hdev);
tasklet_init(&hdev->tx_task, hci_tx_task, (unsigned long) hdev);
while (hdev->acl_cnt && (conn = hci_low_sent(hdev, ACL_LINK, "e))) {
while (quote-- && (skb = skb_dequeue(&conn->data_q))) {
BT_DBG("skb %p len %d", skb, skb->len);
+
+ hci_conn_enter_active_mode(conn);
+
hci_send_frame(skb);
hdev->acl_last_tx = jiffies;
if (conn) {
register struct hci_proto *hp;
+ hci_conn_enter_active_mode(conn);
+
/* Send to upper protocol */
if ((hp = hci_proto[HCI_PROTO_L2CAP]) && hp->recv_acldata) {
hp->recv_acldata(conn, skb, flags);
{
struct hci_conn *conn;
struct hci_rp_role_discovery *rd;
+ struct hci_rp_write_link_policy *lp;
+ void *sent;
BT_DBG("%s ocf 0x%x", hdev->name, ocf);
hci_dev_unlock(hdev);
break;
+ case OCF_WRITE_LINK_POLICY:
+ sent = hci_sent_cmd_data(hdev, OGF_LINK_POLICY, OCF_WRITE_LINK_POLICY);
+ if (!sent)
+ break;
+
+ lp = (struct hci_rp_write_link_policy *) skb->data;
+
+ if (lp->status)
+ break;
+
+ hci_dev_lock(hdev);
+
+ conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(lp->handle));
+ if (conn) {
+ __le16 policy = get_unaligned((__le16 *) (sent + 2));
+ conn->link_policy = __le16_to_cpu(policy);
+ }
+
+ hci_dev_unlock(hdev);
+ break;
+
default:
BT_DBG("%s: Command complete: ogf LINK_POLICY ocf %x",
hdev->name, ocf);
/* Command Complete OGF INFO_PARAM */
static void hci_cc_info_param(struct hci_dev *hdev, __u16 ocf, struct sk_buff *skb)
{
- struct hci_rp_read_loc_features *lf;
+ struct hci_rp_read_local_features *lf;
struct hci_rp_read_buffer_size *bs;
struct hci_rp_read_bd_addr *ba;
switch (ocf) {
case OCF_READ_LOCAL_FEATURES:
- lf = (struct hci_rp_read_loc_features *) skb->data;
+ lf = (struct hci_rp_read_local_features *) skb->data;
if (lf->status) {
BT_DBG("%s READ_LOCAL_FEATURES failed %d", hdev->name, lf->status);
}
hdev->acl_mtu = __le16_to_cpu(bs->acl_mtu);
- hdev->sco_mtu = bs->sco_mtu ? bs->sco_mtu : 64;
- hdev->acl_pkts = hdev->acl_cnt = __le16_to_cpu(bs->acl_max_pkt);
- hdev->sco_pkts = hdev->sco_cnt = __le16_to_cpu(bs->sco_max_pkt);
+ hdev->sco_mtu = bs->sco_mtu;
+ hdev->acl_pkts = __le16_to_cpu(bs->acl_max_pkt);
+ hdev->sco_pkts = __le16_to_cpu(bs->sco_max_pkt);
+
+ if (test_bit(HCI_QUIRK_FIXUP_BUFFER_SIZE, &hdev->quirks)) {
+ hdev->sco_mtu = 64;
+ hdev->sco_pkts = 8;
+ }
+
+ hdev->acl_cnt = hdev->acl_pkts;
+ hdev->sco_cnt = hdev->sco_pkts;
BT_DBG("%s mtu: acl %d, sco %d max_pkt: acl %d, sco %d", hdev->name,
hdev->acl_mtu, hdev->sco_mtu, hdev->acl_pkts, hdev->sco_pkts);
BT_DBG("%s ocf 0x%x", hdev->name, ocf);
switch (ocf) {
+ case OCF_SNIFF_MODE:
+ if (status) {
+ struct hci_conn *conn;
+ struct hci_cp_sniff_mode *cp = hci_sent_cmd_data(hdev, OGF_LINK_POLICY, OCF_SNIFF_MODE);
+
+ if (!cp)
+ break;
+
+ hci_dev_lock(hdev);
+
+ conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(cp->handle));
+ if (conn) {
+ clear_bit(HCI_CONN_MODE_CHANGE_PEND, &conn->pend);
+ }
+
+ hci_dev_unlock(hdev);
+ }
+ break;
+
+ case OCF_EXIT_SNIFF_MODE:
+ if (status) {
+ struct hci_conn *conn;
+ struct hci_cp_exit_sniff_mode *cp = hci_sent_cmd_data(hdev, OGF_LINK_POLICY, OCF_EXIT_SNIFF_MODE);
+
+ if (!cp)
+ break;
+
+ hci_dev_lock(hdev);
+
+ conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(cp->handle));
+ if (conn) {
+ clear_bit(HCI_CONN_MODE_CHANGE_PEND, &conn->pend);
+ }
+
+ hci_dev_unlock(hdev);
+ }
+ break;
+
default:
- BT_DBG("%s Command status: ogf HOST_POLICY ocf %x", hdev->name, ocf);
+ BT_DBG("%s Command status: ogf LINK_POLICY ocf %x", hdev->name, ocf);
break;
}
}
else
cp.role = 0x01; /* Remain slave */
- hci_send_cmd(hdev, OGF_LINK_CTL, OCF_ACCEPT_CONN_REQ, sizeof(cp), &cp);
+ hci_send_cmd(hdev, OGF_LINK_CTL,
+ OCF_ACCEPT_CONN_REQ, sizeof(cp), &cp);
} else {
/* Connection rejected */
struct hci_cp_reject_conn_req cp;
bacpy(&cp.bdaddr, &ev->bdaddr);
cp.reason = 0x0f;
- hci_send_cmd(hdev, OGF_LINK_CTL, OCF_REJECT_CONN_REQ, sizeof(cp), &cp);
+ hci_send_cmd(hdev, OGF_LINK_CTL,
+ OCF_REJECT_CONN_REQ, sizeof(cp), &cp);
}
}
static inline void hci_conn_complete_evt(struct hci_dev *hdev, struct sk_buff *skb)
{
struct hci_ev_conn_complete *ev = (struct hci_ev_conn_complete *) skb->data;
- struct hci_conn *conn = NULL;
+ struct hci_conn *conn;
BT_DBG("%s", hdev->name);
if (test_bit(HCI_ENCRYPT, &hdev->flags))
conn->link_mode |= HCI_LM_ENCRYPT;
+ /* Get remote features */
+ if (conn->type == ACL_LINK) {
+ struct hci_cp_read_remote_features cp;
+ cp.handle = ev->handle;
+ hci_send_cmd(hdev, OGF_LINK_CTL,
+ OCF_READ_REMOTE_FEATURES, sizeof(cp), &cp);
+ }
+
/* Set link policy */
if (conn->type == ACL_LINK && hdev->link_policy) {
struct hci_cp_write_link_policy cp;
cp.handle = ev->handle;
cp.policy = __cpu_to_le16(hdev->link_policy);
- hci_send_cmd(hdev, OGF_LINK_POLICY, OCF_WRITE_LINK_POLICY, sizeof(cp), &cp);
+ hci_send_cmd(hdev, OGF_LINK_POLICY,
+ OCF_WRITE_LINK_POLICY, sizeof(cp), &cp);
}
/* Set packet type for incoming connection */
__cpu_to_le16(hdev->pkt_type & ACL_PTYPE_MASK):
__cpu_to_le16(hdev->pkt_type & SCO_PTYPE_MASK);
- hci_send_cmd(hdev, OGF_LINK_CTL, OCF_CHANGE_CONN_PTYPE, sizeof(cp), &cp);
+ hci_send_cmd(hdev, OGF_LINK_CTL,
+ OCF_CHANGE_CONN_PTYPE, sizeof(cp), &cp);
}
} else
conn->state = BT_CLOSED;
static inline void hci_disconn_complete_evt(struct hci_dev *hdev, struct sk_buff *skb)
{
struct hci_ev_disconn_complete *ev = (struct hci_ev_disconn_complete *) skb->data;
- struct hci_conn *conn = NULL;
- __u16 handle = __le16_to_cpu(ev->handle);
+ struct hci_conn *conn;
BT_DBG("%s status %d", hdev->name, ev->status);
hci_dev_lock(hdev);
- conn = hci_conn_hash_lookup_handle(hdev, handle);
+ conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(ev->handle));
if (conn) {
conn->state = BT_CLOSED;
hci_proto_disconn_ind(conn, ev->reason);
static inline void hci_role_change_evt(struct hci_dev *hdev, struct sk_buff *skb)
{
struct hci_ev_role_change *ev = (struct hci_ev_role_change *) skb->data;
- struct hci_conn *conn = NULL;
+ struct hci_conn *conn;
BT_DBG("%s status %d", hdev->name, ev->status);
hci_dev_unlock(hdev);
}
+/* Mode Change */
+static inline void hci_mode_change_evt(struct hci_dev *hdev, struct sk_buff *skb)
+{
+ struct hci_ev_mode_change *ev = (struct hci_ev_mode_change *) skb->data;
+ struct hci_conn *conn;
+
+ BT_DBG("%s status %d", hdev->name, ev->status);
+
+ hci_dev_lock(hdev);
+
+ conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(ev->handle));
+ if (conn) {
+ conn->mode = ev->mode;
+ conn->interval = __le16_to_cpu(ev->interval);
+
+ if (!test_and_clear_bit(HCI_CONN_MODE_CHANGE_PEND, &conn->pend)) {
+ if (conn->mode == HCI_CM_ACTIVE)
+ conn->power_save = 1;
+ else
+ conn->power_save = 0;
+ }
+ }
+
+ hci_dev_unlock(hdev);
+}
+
/* Authentication Complete */
static inline void hci_auth_complete_evt(struct hci_dev *hdev, struct sk_buff *skb)
{
struct hci_ev_auth_complete *ev = (struct hci_ev_auth_complete *) skb->data;
- struct hci_conn *conn = NULL;
- __u16 handle = __le16_to_cpu(ev->handle);
+ struct hci_conn *conn;
BT_DBG("%s status %d", hdev->name, ev->status);
hci_dev_lock(hdev);
- conn = hci_conn_hash_lookup_handle(hdev, handle);
+ conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(ev->handle));
if (conn) {
if (!ev->status)
conn->link_mode |= HCI_LM_AUTH;
cp.handle = __cpu_to_le16(conn->handle);
cp.encrypt = 1;
hci_send_cmd(conn->hdev, OGF_LINK_CTL,
- OCF_SET_CONN_ENCRYPT,
- sizeof(cp), &cp);
+ OCF_SET_CONN_ENCRYPT, sizeof(cp), &cp);
} else {
clear_bit(HCI_CONN_ENCRYPT_PEND, &conn->pend);
hci_encrypt_cfm(conn, ev->status, 0x00);
static inline void hci_encrypt_change_evt(struct hci_dev *hdev, struct sk_buff *skb)
{
struct hci_ev_encrypt_change *ev = (struct hci_ev_encrypt_change *) skb->data;
- struct hci_conn *conn = NULL;
- __u16 handle = __le16_to_cpu(ev->handle);
+ struct hci_conn *conn;
BT_DBG("%s status %d", hdev->name, ev->status);
hci_dev_lock(hdev);
- conn = hci_conn_hash_lookup_handle(hdev, handle);
+ conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(ev->handle));
if (conn) {
if (!ev->status) {
if (ev->encrypt)
static inline void hci_change_conn_link_key_complete_evt(struct hci_dev *hdev, struct sk_buff *skb)
{
struct hci_ev_change_conn_link_key_complete *ev = (struct hci_ev_change_conn_link_key_complete *) skb->data;
- struct hci_conn *conn = NULL;
- __u16 handle = __le16_to_cpu(ev->handle);
+ struct hci_conn *conn;
BT_DBG("%s status %d", hdev->name, ev->status);
hci_dev_lock(hdev);
- conn = hci_conn_hash_lookup_handle(hdev, handle);
+ conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(ev->handle));
if (conn) {
if (!ev->status)
conn->link_mode |= HCI_LM_SECURE;
{
}
+/* Remote Features */
+static inline void hci_remote_features_evt(struct hci_dev *hdev, struct sk_buff *skb)
+{
+ struct hci_ev_remote_features *ev = (struct hci_ev_remote_features *) skb->data;
+ struct hci_conn *conn;
+
+ BT_DBG("%s status %d", hdev->name, ev->status);
+
+ hci_dev_lock(hdev);
+
+ conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(ev->handle));
+ if (conn && !ev->status) {
+ memcpy(conn->features, ev->features, sizeof(conn->features));
+ }
+
+ hci_dev_unlock(hdev);
+}
+
/* Clock Offset */
static inline void hci_clock_offset_evt(struct hci_dev *hdev, struct sk_buff *skb)
{
struct hci_ev_clock_offset *ev = (struct hci_ev_clock_offset *) skb->data;
- struct hci_conn *conn = NULL;
- __u16 handle = __le16_to_cpu(ev->handle);
+ struct hci_conn *conn;
BT_DBG("%s status %d", hdev->name, ev->status);
hci_dev_lock(hdev);
- conn = hci_conn_hash_lookup_handle(hdev, handle);
+ conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(ev->handle));
if (conn && !ev->status) {
struct inquiry_entry *ie;
hci_dev_unlock(hdev);
}
+/* Sniff Subrate */
+static inline void hci_sniff_subrate_evt(struct hci_dev *hdev, struct sk_buff *skb)
+{
+ struct hci_ev_sniff_subrate *ev = (struct hci_ev_sniff_subrate *) skb->data;
+ struct hci_conn *conn;
+
+ BT_DBG("%s status %d", hdev->name, ev->status);
+
+ hci_dev_lock(hdev);
+
+ conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(ev->handle));
+ if (conn) {
+ }
+
+ hci_dev_unlock(hdev);
+}
+
void hci_event_packet(struct hci_dev *hdev, struct sk_buff *skb)
{
struct hci_event_hdr *hdr = (struct hci_event_hdr *) skb->data;
hci_role_change_evt(hdev, skb);
break;
+ case HCI_EV_MODE_CHANGE:
+ hci_mode_change_evt(hdev, skb);
+ break;
+
case HCI_EV_AUTH_COMPLETE:
hci_auth_complete_evt(hdev, skb);
break;
hci_link_key_notify_evt(hdev, skb);
break;
+ case HCI_EV_REMOTE_FEATURES:
+ hci_remote_features_evt(hdev, skb);
+ break;
+
case HCI_EV_CLOCK_OFFSET:
hci_clock_offset_evt(hdev, skb);
break;
hci_pscan_rep_mode_evt(hdev, skb);
break;
+ case HCI_EV_SNIFF_SUBRATE:
+ hci_sniff_subrate_evt(hdev, skb);
+ break;
+
case HCI_EV_CMD_STATUS:
cs = (struct hci_ev_cmd_status *) skb->data;
skb_pull(skb, sizeof(cs));
#include <linux/kernel.h>
#include <linux/init.h>
+#include <linux/platform_device.h>
+
#include <net/bluetooth/bluetooth.h>
#include <net/bluetooth/hci_core.h>
#define BT_DBG(D...)
#endif
-static ssize_t show_name(struct class_device *cdev, char *buf)
+static ssize_t show_name(struct device *dev, struct device_attribute *attr, char *buf)
{
- struct hci_dev *hdev = class_get_devdata(cdev);
+ struct hci_dev *hdev = dev_get_drvdata(dev);
return sprintf(buf, "%s\n", hdev->name);
}
-static ssize_t show_type(struct class_device *cdev, char *buf)
+static ssize_t show_type(struct device *dev, struct device_attribute *attr, char *buf)
{
- struct hci_dev *hdev = class_get_devdata(cdev);
+ struct hci_dev *hdev = dev_get_drvdata(dev);
return sprintf(buf, "%d\n", hdev->type);
}
-static ssize_t show_address(struct class_device *cdev, char *buf)
+static ssize_t show_address(struct device *dev, struct device_attribute *attr, char *buf)
{
- struct hci_dev *hdev = class_get_devdata(cdev);
+ struct hci_dev *hdev = dev_get_drvdata(dev);
bdaddr_t bdaddr;
baswap(&bdaddr, &hdev->bdaddr);
return sprintf(buf, "%s\n", batostr(&bdaddr));
}
-static ssize_t show_flags(struct class_device *cdev, char *buf)
+static ssize_t show_flags(struct device *dev, struct device_attribute *attr, char *buf)
{
- struct hci_dev *hdev = class_get_devdata(cdev);
+ struct hci_dev *hdev = dev_get_drvdata(dev);
return sprintf(buf, "0x%lx\n", hdev->flags);
}
-static ssize_t show_inquiry_cache(struct class_device *cdev, char *buf)
+static ssize_t show_inquiry_cache(struct device *dev, struct device_attribute *attr, char *buf)
{
- struct hci_dev *hdev = class_get_devdata(cdev);
+ struct hci_dev *hdev = dev_get_drvdata(dev);
struct inquiry_cache *cache = &hdev->inq_cache;
struct inquiry_entry *e;
int n = 0;
return n;
}
-static CLASS_DEVICE_ATTR(name, S_IRUGO, show_name, NULL);
-static CLASS_DEVICE_ATTR(type, S_IRUGO, show_type, NULL);
-static CLASS_DEVICE_ATTR(address, S_IRUGO, show_address, NULL);
-static CLASS_DEVICE_ATTR(flags, S_IRUGO, show_flags, NULL);
-static CLASS_DEVICE_ATTR(inquiry_cache, S_IRUGO, show_inquiry_cache, NULL);
+static ssize_t show_idle_timeout(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct hci_dev *hdev = dev_get_drvdata(dev);
+ return sprintf(buf, "%d\n", hdev->idle_timeout);
+}
-static struct class_device_attribute *bt_attrs[] = {
- &class_device_attr_name,
- &class_device_attr_type,
- &class_device_attr_address,
- &class_device_attr_flags,
- &class_device_attr_inquiry_cache,
- NULL
-};
+static ssize_t store_idle_timeout(struct device *dev, struct device_attribute *attr, const char *buf, size_t count)
+{
+ struct hci_dev *hdev = dev_get_drvdata(dev);
+ char *ptr;
+ __u32 val;
+
+ val = simple_strtoul(buf, &ptr, 10);
+ if (ptr == buf)
+ return -EINVAL;
-#ifdef CONFIG_HOTPLUG
-static int bt_uevent(struct class_device *cdev, char **envp, int num_envp, char *buf, int size)
+ if (val != 0 && (val < 500 || val > 3600000))
+ return -EINVAL;
+
+ hdev->idle_timeout = val;
+
+ return count;
+}
+
+static ssize_t show_sniff_max_interval(struct device *dev, struct device_attribute *attr, char *buf)
{
- struct hci_dev *hdev = class_get_devdata(cdev);
- int n, i = 0;
+ struct hci_dev *hdev = dev_get_drvdata(dev);
+ return sprintf(buf, "%d\n", hdev->sniff_max_interval);
+}
- envp[i++] = buf;
- n = snprintf(buf, size, "INTERFACE=%s", hdev->name) + 1;
- buf += n;
- size -= n;
+static ssize_t store_sniff_max_interval(struct device *dev, struct device_attribute *attr, const char *buf, size_t count)
+{
+ struct hci_dev *hdev = dev_get_drvdata(dev);
+ char *ptr;
+ __u16 val;
- if ((size <= 0) || (i >= num_envp))
- return -ENOMEM;
+ val = simple_strtoul(buf, &ptr, 10);
+ if (ptr == buf)
+ return -EINVAL;
- envp[i] = NULL;
- return 0;
+ if (val < 0x0002 || val > 0xFFFE || val % 2)
+ return -EINVAL;
+
+ if (val < hdev->sniff_min_interval)
+ return -EINVAL;
+
+ hdev->sniff_max_interval = val;
+
+ return count;
+}
+
+static ssize_t show_sniff_min_interval(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct hci_dev *hdev = dev_get_drvdata(dev);
+ return sprintf(buf, "%d\n", hdev->sniff_min_interval);
}
-#endif
-static void bt_release(struct class_device *cdev)
+static ssize_t store_sniff_min_interval(struct device *dev, struct device_attribute *attr, const char *buf, size_t count)
{
- struct hci_dev *hdev = class_get_devdata(cdev);
+ struct hci_dev *hdev = dev_get_drvdata(dev);
+ char *ptr;
+ __u16 val;
- kfree(hdev);
+ val = simple_strtoul(buf, &ptr, 10);
+ if (ptr == buf)
+ return -EINVAL;
+
+ if (val < 0x0002 || val > 0xFFFE || val % 2)
+ return -EINVAL;
+
+ if (val > hdev->sniff_max_interval)
+ return -EINVAL;
+
+ hdev->sniff_min_interval = val;
+
+ return count;
}
-struct class bt_class = {
- .name = "bluetooth",
- .release = bt_release,
-#ifdef CONFIG_HOTPLUG
- .uevent = bt_uevent,
-#endif
+static DEVICE_ATTR(name, S_IRUGO, show_name, NULL);
+static DEVICE_ATTR(type, S_IRUGO, show_type, NULL);
+static DEVICE_ATTR(address, S_IRUGO, show_address, NULL);
+static DEVICE_ATTR(flags, S_IRUGO, show_flags, NULL);
+static DEVICE_ATTR(inquiry_cache, S_IRUGO, show_inquiry_cache, NULL);
+
+static DEVICE_ATTR(idle_timeout, S_IRUGO | S_IWUSR,
+ show_idle_timeout, store_idle_timeout);
+static DEVICE_ATTR(sniff_max_interval, S_IRUGO | S_IWUSR,
+ show_sniff_max_interval, store_sniff_max_interval);
+static DEVICE_ATTR(sniff_min_interval, S_IRUGO | S_IWUSR,
+ show_sniff_min_interval, store_sniff_min_interval);
+
+static struct device_attribute *bt_attrs[] = {
+ &dev_attr_name,
+ &dev_attr_type,
+ &dev_attr_address,
+ &dev_attr_flags,
+ &dev_attr_inquiry_cache,
+ &dev_attr_idle_timeout,
+ &dev_attr_sniff_max_interval,
+ &dev_attr_sniff_min_interval,
+ NULL
};
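(Editorial note: once hci_register_sysfs() below creates these attribute files, the new tunables sit next to the existing read-only ones in the controller's sysfs directory; assuming a controller named hci0 and the "bluetooth" class created in bt_sysfs_init(), that would be e.g. /sys/class/bluetooth/hci0/idle_timeout, writable by root within the bounds the store handlers above enforce.)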
+struct class *bt_class = NULL;
EXPORT_SYMBOL_GPL(bt_class);
+static struct bus_type bt_bus = {
+ .name = "bluetooth",
+};
+
+static struct platform_device *bt_platform;
+
+static void bt_release(struct device *dev)
+{
+ struct hci_dev *hdev = dev_get_drvdata(dev);
+ kfree(hdev);
+}
+
int hci_register_sysfs(struct hci_dev *hdev)
{
- struct class_device *cdev = &hdev->class_dev;
+ struct device *dev = &hdev->dev;
unsigned int i;
int err;
BT_DBG("%p name %s type %d", hdev, hdev->name, hdev->type);
- cdev->class = &bt_class;
- class_set_devdata(cdev, hdev);
+ dev->class = bt_class;
+
+ if (hdev->parent)
+ dev->parent = hdev->parent;
+ else
+ dev->parent = &bt_platform->dev;
+
+ strlcpy(dev->bus_id, hdev->name, BUS_ID_SIZE);
+
+ dev->release = bt_release;
- strlcpy(cdev->class_id, hdev->name, BUS_ID_SIZE);
- err = class_device_register(cdev);
+ dev_set_drvdata(dev, hdev);
+
+ err = device_register(dev);
if (err < 0)
return err;
for (i = 0; bt_attrs[i]; i++)
- class_device_create_file(cdev, bt_attrs[i]);
+ device_create_file(dev, bt_attrs[i]);
return 0;
}
void hci_unregister_sysfs(struct hci_dev *hdev)
{
- struct class_device * cdev = &hdev->class_dev;
+ struct device *dev = &hdev->dev;
BT_DBG("%p name %s type %d", hdev, hdev->name, hdev->type);
- class_device_del(cdev);
+ device_del(dev);
}
int __init bt_sysfs_init(void)
{
- return class_register(&bt_class);
+ int err;
+
+ bt_platform = platform_device_register_simple("bluetooth", -1, NULL, 0);
+ if (IS_ERR(bt_platform))
+ return PTR_ERR(bt_platform);
+
+ err = bus_register(&bt_bus);
+ if (err < 0) {
+ platform_device_unregister(bt_platform);
+ return err;
+ }
+
+ bt_class = class_create(THIS_MODULE, "bluetooth");
+ if (IS_ERR(bt_class)) {
+ bus_unregister(&bt_bus);
+ platform_device_unregister(bt_platform);
+ return PTR_ERR(bt_class);
+ }
+
+ return 0;
}
void __exit bt_sysfs_cleanup(void)
{
- class_unregister(&bt_class);
+ class_destroy(bt_class);
+
+ bus_unregister(&bt_bus);
+
+ platform_device_unregister(bt_platform);
}
.lock = RW_LOCK_UNLOCKED
};
-static int l2cap_conn_del(struct hci_conn *conn, int err);
-
-static void __l2cap_chan_add(struct l2cap_conn *conn, struct sock *sk, struct sock *parent);
-static void l2cap_chan_del(struct sock *sk, int err);
-
static void __l2cap_sock_close(struct sock *sk, int reason);
static void l2cap_sock_close(struct sock *sk);
static void l2cap_sock_kill(struct sock *sk);
sk->sk_timer.data = (unsigned long)sk;
}
+/* ---- L2CAP channels ---- */
+static struct sock *__l2cap_get_chan_by_dcid(struct l2cap_chan_list *l, u16 cid)
+{
+ struct sock *s;
+ for (s = l->head; s; s = l2cap_pi(s)->next_c) {
+ if (l2cap_pi(s)->dcid == cid)
+ break;
+ }
+ return s;
+}
+
+static struct sock *__l2cap_get_chan_by_scid(struct l2cap_chan_list *l, u16 cid)
+{
+ struct sock *s;
+ for (s = l->head; s; s = l2cap_pi(s)->next_c) {
+ if (l2cap_pi(s)->scid == cid)
+ break;
+ }
+ return s;
+}
+
+/* Find channel with given SCID.
+ * Returns locked socket */
+static inline struct sock *l2cap_get_chan_by_scid(struct l2cap_chan_list *l, u16 cid)
+{
+ struct sock *s;
+ read_lock(&l->lock);
+ s = __l2cap_get_chan_by_scid(l, cid);
+ if (s) bh_lock_sock(s);
+ read_unlock(&l->lock);
+ return s;
+}
+
+static struct sock *__l2cap_get_chan_by_ident(struct l2cap_chan_list *l, u8 ident)
+{
+ struct sock *s;
+ for (s = l->head; s; s = l2cap_pi(s)->next_c) {
+ if (l2cap_pi(s)->ident == ident)
+ break;
+ }
+ return s;
+}
+
+static inline struct sock *l2cap_get_chan_by_ident(struct l2cap_chan_list *l, u8 ident)
+{
+ struct sock *s;
+ read_lock(&l->lock);
+ s = __l2cap_get_chan_by_ident(l, ident);
+ if (s) bh_lock_sock(s);
+ read_unlock(&l->lock);
+ return s;
+}
+
+static u16 l2cap_alloc_cid(struct l2cap_chan_list *l)
+{
+ u16 cid = 0x0040;
+
+ for (; cid < 0xffff; cid++) {
+ if(!__l2cap_get_chan_by_scid(l, cid))
+ return cid;
+ }
+
+ return 0;
+}
+
+static inline void __l2cap_chan_link(struct l2cap_chan_list *l, struct sock *sk)
+{
+ sock_hold(sk);
+
+ if (l->head)
+ l2cap_pi(l->head)->prev_c = sk;
+
+ l2cap_pi(sk)->next_c = l->head;
+ l2cap_pi(sk)->prev_c = NULL;
+ l->head = sk;
+}
+
+static inline void l2cap_chan_unlink(struct l2cap_chan_list *l, struct sock *sk)
+{
+ struct sock *next = l2cap_pi(sk)->next_c, *prev = l2cap_pi(sk)->prev_c;
+
+ write_lock(&l->lock);
+ if (sk == l->head)
+ l->head = next;
+
+ if (next)
+ l2cap_pi(next)->prev_c = prev;
+ if (prev)
+ l2cap_pi(prev)->next_c = next;
+ write_unlock(&l->lock);
+
+ __sock_put(sk);
+}
+
+static void __l2cap_chan_add(struct l2cap_conn *conn, struct sock *sk, struct sock *parent)
+{
+ struct l2cap_chan_list *l = &conn->chan_list;
+
+ BT_DBG("conn %p, psm 0x%2.2x, dcid 0x%4.4x", conn, l2cap_pi(sk)->psm, l2cap_pi(sk)->dcid);
+
+ l2cap_pi(sk)->conn = conn;
+
+ if (sk->sk_type == SOCK_SEQPACKET) {
+ /* Alloc CID for connection-oriented socket */
+ l2cap_pi(sk)->scid = l2cap_alloc_cid(l);
+ } else if (sk->sk_type == SOCK_DGRAM) {
+ /* Connectionless socket */
+ l2cap_pi(sk)->scid = 0x0002;
+ l2cap_pi(sk)->dcid = 0x0002;
+ l2cap_pi(sk)->omtu = L2CAP_DEFAULT_MTU;
+ } else {
+ /* Raw socket can send/recv signalling messages only */
+ l2cap_pi(sk)->scid = 0x0001;
+ l2cap_pi(sk)->dcid = 0x0001;
+ l2cap_pi(sk)->omtu = L2CAP_DEFAULT_MTU;
+ }
+
+ __l2cap_chan_link(l, sk);
+
+ if (parent)
+ bt_accept_enqueue(parent, sk);
+}
+
+/* Delete channel.
+ * Must be called on the locked socket. */
+static void l2cap_chan_del(struct sock *sk, int err)
+{
+ struct l2cap_conn *conn = l2cap_pi(sk)->conn;
+ struct sock *parent = bt_sk(sk)->parent;
+
+ l2cap_sock_clear_timer(sk);
+
+ BT_DBG("sk %p, conn %p, err %d", sk, conn, err);
+
+ if (conn) {
+ /* Unlink from channel list */
+ l2cap_chan_unlink(&conn->chan_list, sk);
+ l2cap_pi(sk)->conn = NULL;
+ hci_conn_put(conn->hcon);
+ }
+
+ sk->sk_state = BT_CLOSED;
+ sock_set_flag(sk, SOCK_ZAPPED);
+
+ if (err)
+ sk->sk_err = err;
+
+ if (parent) {
+ bt_accept_unlink(sk);
+ parent->sk_data_ready(parent, 0);
+ } else
+ sk->sk_state_change(sk);
+}
+
/* ---- L2CAP connections ---- */
static struct l2cap_conn *l2cap_conn_add(struct hci_conn *hcon, u8 status)
{
- struct l2cap_conn *conn;
-
- if ((conn = hcon->l2cap_data))
- return conn;
+ struct l2cap_conn *conn = hcon->l2cap_data;
- if (status)
+ if (conn || status)
return conn;
- if (!(conn = kmalloc(sizeof(struct l2cap_conn), GFP_ATOMIC)))
+ conn = kzalloc(sizeof(struct l2cap_conn), GFP_ATOMIC);
+ if (!conn)
return NULL;
- memset(conn, 0, sizeof(struct l2cap_conn));
hcon->l2cap_data = conn;
conn->hcon = hcon;
+ BT_DBG("hcon %p conn %p", hcon, conn);
+
conn->mtu = hcon->hdev->acl_mtu;
conn->src = &hcon->hdev->bdaddr;
conn->dst = &hcon->dst;
spin_lock_init(&conn->lock);
rwlock_init(&conn->chan_list.lock);
- BT_DBG("hcon %p conn %p", hcon, conn);
return conn;
}
-static int l2cap_conn_del(struct hci_conn *hcon, int err)
+static void l2cap_conn_del(struct hci_conn *hcon, int err)
{
- struct l2cap_conn *conn;
+ struct l2cap_conn *conn = hcon->l2cap_data;
struct sock *sk;
- if (!(conn = hcon->l2cap_data))
- return 0;
+ if (!conn)
+ return;
BT_DBG("hcon %p conn %p, err %d", hcon, conn, err);
hcon->l2cap_data = NULL;
kfree(conn);
- return 0;
}
static inline void l2cap_chan_add(struct l2cap_conn *conn, struct sock *sk, struct sock *parent)
return err;
}
-/* ---- L2CAP channels ---- */
-static struct sock *__l2cap_get_chan_by_dcid(struct l2cap_chan_list *l, u16 cid)
-{
- struct sock *s;
- for (s = l->head; s; s = l2cap_pi(s)->next_c) {
- if (l2cap_pi(s)->dcid == cid)
- break;
- }
- return s;
-}
-
-static struct sock *__l2cap_get_chan_by_scid(struct l2cap_chan_list *l, u16 cid)
-{
- struct sock *s;
- for (s = l->head; s; s = l2cap_pi(s)->next_c) {
- if (l2cap_pi(s)->scid == cid)
- break;
- }
- return s;
-}
-
-/* Find channel with given SCID.
- * Returns locked socket */
-static inline struct sock *l2cap_get_chan_by_scid(struct l2cap_chan_list *l, u16 cid)
-{
- struct sock *s;
- read_lock(&l->lock);
- s = __l2cap_get_chan_by_scid(l, cid);
- if (s) bh_lock_sock(s);
- read_unlock(&l->lock);
- return s;
-}
-
-static struct sock *__l2cap_get_chan_by_ident(struct l2cap_chan_list *l, u8 ident)
-{
- struct sock *s;
- for (s = l->head; s; s = l2cap_pi(s)->next_c) {
- if (l2cap_pi(s)->ident == ident)
- break;
- }
- return s;
-}
-
-static inline struct sock *l2cap_get_chan_by_ident(struct l2cap_chan_list *l, u8 ident)
-{
- struct sock *s;
- read_lock(&l->lock);
- s = __l2cap_get_chan_by_ident(l, ident);
- if (s) bh_lock_sock(s);
- read_unlock(&l->lock);
- return s;
-}
-
-static u16 l2cap_alloc_cid(struct l2cap_chan_list *l)
-{
- u16 cid = 0x0040;
-
- for (; cid < 0xffff; cid++) {
- if(!__l2cap_get_chan_by_scid(l, cid))
- return cid;
- }
-
- return 0;
-}
-
-static inline void __l2cap_chan_link(struct l2cap_chan_list *l, struct sock *sk)
-{
- sock_hold(sk);
-
- if (l->head)
- l2cap_pi(l->head)->prev_c = sk;
-
- l2cap_pi(sk)->next_c = l->head;
- l2cap_pi(sk)->prev_c = NULL;
- l->head = sk;
-}
-
-static inline void l2cap_chan_unlink(struct l2cap_chan_list *l, struct sock *sk)
-{
- struct sock *next = l2cap_pi(sk)->next_c, *prev = l2cap_pi(sk)->prev_c;
-
- write_lock(&l->lock);
- if (sk == l->head)
- l->head = next;
-
- if (next)
- l2cap_pi(next)->prev_c = prev;
- if (prev)
- l2cap_pi(prev)->next_c = next;
- write_unlock(&l->lock);
-
- __sock_put(sk);
-}
-
-static void __l2cap_chan_add(struct l2cap_conn *conn, struct sock *sk, struct sock *parent)
-{
- struct l2cap_chan_list *l = &conn->chan_list;
-
- BT_DBG("conn %p, psm 0x%2.2x, dcid 0x%4.4x", conn, l2cap_pi(sk)->psm, l2cap_pi(sk)->dcid);
-
- l2cap_pi(sk)->conn = conn;
-
- if (sk->sk_type == SOCK_SEQPACKET) {
- /* Alloc CID for connection-oriented socket */
- l2cap_pi(sk)->scid = l2cap_alloc_cid(l);
- } else if (sk->sk_type == SOCK_DGRAM) {
- /* Connectionless socket */
- l2cap_pi(sk)->scid = 0x0002;
- l2cap_pi(sk)->dcid = 0x0002;
- l2cap_pi(sk)->omtu = L2CAP_DEFAULT_MTU;
- } else {
- /* Raw socket can send/recv signalling messages only */
- l2cap_pi(sk)->scid = 0x0001;
- l2cap_pi(sk)->dcid = 0x0001;
- l2cap_pi(sk)->omtu = L2CAP_DEFAULT_MTU;
- }
-
- __l2cap_chan_link(l, sk);
-
- if (parent)
- bt_accept_enqueue(parent, sk);
-}
-
-/* Delete channel.
- * Must be called on the locked socket. */
-static void l2cap_chan_del(struct sock *sk, int err)
-{
- struct l2cap_conn *conn = l2cap_pi(sk)->conn;
- struct sock *parent = bt_sk(sk)->parent;
-
- l2cap_sock_clear_timer(sk);
-
- BT_DBG("sk %p, conn %p, err %d", sk, conn, err);
-
- if (conn) {
- /* Unlink from channel list */
- l2cap_chan_unlink(&conn->chan_list, sk);
- l2cap_pi(sk)->conn = NULL;
- hci_conn_put(conn->hcon);
- }
-
- sk->sk_state = BT_CLOSED;
- sock_set_flag(sk, SOCK_ZAPPED);
-
- if (err)
- sk->sk_err = err;
-
- if (parent) {
- bt_accept_unlink(sk);
- parent->sk_data_ready(parent, 0);
- } else
- sk->sk_state_change(sk);
-}
-
static void l2cap_conn_ready(struct l2cap_conn *conn)
{
struct l2cap_chan_list *l = &conn->chan_list;
kfree_skb(skb);
done:
- if (sk) bh_unlock_sock(sk);
+ if (sk)
+ bh_unlock_sock(sk);
+
return 0;
}
static int l2cap_connect_cfm(struct hci_conn *hcon, u8 status)
{
+ struct l2cap_conn *conn;
+
BT_DBG("hcon %p bdaddr %s status %d", hcon, batostr(&hcon->dst), status);
if (hcon->type != ACL_LINK)
return 0;
if (!status) {
- struct l2cap_conn *conn;
-
conn = l2cap_conn_add(hcon, status);
if (conn)
l2cap_conn_ready(conn);
- } else
+ } else
l2cap_conn_del(hcon, bt_err(status));
return 0;
return 0;
l2cap_conn_del(hcon, bt_err(reason));
+
return 0;
}
static int l2cap_auth_cfm(struct hci_conn *hcon, u8 status)
{
struct l2cap_chan_list *l;
- struct l2cap_conn *conn;
+	struct l2cap_conn *conn = hcon->l2cap_data;
struct l2cap_conn_rsp rsp;
struct sock *sk;
int result;
- if (!(conn = hcon->l2cap_data))
+ if (!conn)
return 0;
+
l = &conn->chan_list;
BT_DBG("conn %p", conn);
static int l2cap_encrypt_cfm(struct hci_conn *hcon, u8 status)
{
struct l2cap_chan_list *l;
- struct l2cap_conn *conn;
+ struct l2cap_conn *conn = hcon->l2cap_data;
struct l2cap_conn_rsp rsp;
struct sock *sk;
int result;
- if (!(conn = hcon->l2cap_data))
+ if (!conn)
return 0;
+
l = &conn->chan_list;
BT_DBG("conn %p", conn);
goto error;
}
- class_create_file(&bt_class, &class_attr_l2cap);
+ class_create_file(bt_class, &class_attr_l2cap);
BT_INFO("L2CAP ver %s", VERSION);
BT_INFO("L2CAP socket layer initialized");
static void __exit l2cap_exit(void)
{
- class_remove_file(&bt_class, &class_attr_l2cap);
+ class_remove_file(bt_class, &class_attr_l2cap);
if (bt_sock_unregister(BTPROTO_L2CAP) < 0)
BT_ERR("L2CAP socket unregistration failed");
#define BT_DBG(D...)
#endif
-#define VERSION "1.7"
+#define VERSION "1.8"
+static int disable_cfc = 0;
static unsigned int l2cap_mtu = RFCOMM_MAX_L2CAP_MTU;
static struct task_struct *rfcomm_thread;
s->sock = sock;
s->mtu = RFCOMM_DEFAULT_MTU;
- s->cfc = RFCOMM_CFC_UNKNOWN;
+ s->cfc = disable_cfc ? RFCOMM_CFC_DISABLED : RFCOMM_CFC_UNKNOWN;
/* Do not increment module usage count for listening sessions.
* Otherwise we won't be able to unload the module. */
static void rfcomm_dlc_accept(struct rfcomm_dlc *d)
{
+ struct sock *sk = d->session->sock->sk;
+
BT_DBG("dlc %p", d);
rfcomm_send_ua(d->session, d->dlci);
d->state_change(d, 0);
rfcomm_dlc_unlock(d);
+ if (d->link_mode & RFCOMM_LM_MASTER)
+ hci_conn_switch_role(l2cap_pi(sk)->conn->hcon, 0x00);
+
rfcomm_send_msc(d->session, 1, d->dlci, d->v24_sig);
}
BT_DBG("dlc %p state %ld dlci %d mtu %d fc 0x%x credits %d",
d, d->state, d->dlci, pn->mtu, pn->flow_ctrl, pn->credits);
- if (pn->flow_ctrl == 0xf0 || pn->flow_ctrl == 0xe0) {
- d->cfc = s->cfc = RFCOMM_CFC_ENABLED;
+ if ((pn->flow_ctrl == 0xf0 && s->cfc != RFCOMM_CFC_DISABLED) ||
+ pn->flow_ctrl == 0xe0) {
+ d->cfc = RFCOMM_CFC_ENABLED;
d->tx_credits = pn->credits;
} else {
- d->cfc = s->cfc = RFCOMM_CFC_DISABLED;
+ d->cfc = RFCOMM_CFC_DISABLED;
set_bit(RFCOMM_TX_THROTTLED, &d->flags);
}
+ if (s->cfc == RFCOMM_CFC_UNKNOWN)
+ s->cfc = d->cfc;
+
d->priority = pn->priority;
d->mtu = s->mtu = btohs(pn->mtu);
kernel_thread(rfcomm_run, NULL, CLONE_KERNEL);
- class_create_file(&bt_class, &class_attr_rfcomm_dlc);
+ class_create_file(bt_class, &class_attr_rfcomm_dlc);
rfcomm_init_sockets();
static void __exit rfcomm_exit(void)
{
- class_remove_file(&bt_class, &class_attr_rfcomm_dlc);
+ class_remove_file(bt_class, &class_attr_rfcomm_dlc);
hci_unregister_cb(&rfcomm_cb);
module_init(rfcomm_init);
module_exit(rfcomm_exit);
+module_param(disable_cfc, bool, 0644);
+MODULE_PARM_DESC(disable_cfc, "Disable credit based flow control");
+
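(Editorial note: disable_cfc makes new RFCOMM sessions start with credit-based flow control disabled instead of negotiating it, as the session setup change above shows; it can be given at load time, e.g. modprobe rfcomm disable_cfc=1, or flipped later through /sys/module/rfcomm/parameters/disable_cfc since the parameter is registered with mode 0644.)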
module_param(l2cap_mtu, uint, 0644);
MODULE_PARM_DESC(l2cap_mtu, "Default MTU for the L2CAP connection");
if (err < 0)
goto error;
- class_create_file(&bt_class, &class_attr_rfcomm);
+ class_create_file(bt_class, &class_attr_rfcomm);
BT_INFO("RFCOMM socket layer initialized");
void __exit rfcomm_cleanup_sockets(void)
{
- class_remove_file(&bt_class, &class_attr_rfcomm);
+ class_remove_file(bt_class, &class_attr_rfcomm);
if (bt_sock_unregister(BTPROTO_RFCOMM) < 0)
BT_ERR("RFCOMM socket layer unregistration failed");
goto error;
}
- class_create_file(&bt_class, &class_attr_sco);
+ class_create_file(bt_class, &class_attr_sco);
BT_INFO("SCO (Voice Link) ver %s", VERSION);
BT_INFO("SCO socket layer initialized");
static void __exit sco_exit(void)
{
- class_remove_file(&bt_class, &class_attr_sco);
+ class_remove_file(bt_class, &class_attr_sco);
if (bt_sock_unregister(BTPROTO_SCO) < 0)
BT_ERR("SCO socket unregistration failed");
continue;
if (idx < s_idx)
- continue;
+ goto cont;
err = br_fill_ifinfo(skb, p, NETLINK_CB(cb->skb).pid,
cb->nlh->nlmsg_seq, RTM_NEWLINK, NLM_F_MULTI);
if (err <= 0)
break;
+cont:
++idx;
}
read_unlock(&dev_base_lock);
static kmem_cache_t *skbuff_head_cache __read_mostly;
static kmem_cache_t *skbuff_fclone_cache __read_mostly;
+/*
+ * lockdep: lock class key used by skb_queue_head_init():
+ */
+struct lock_class_key skb_queue_lock_key;
+
+EXPORT_SYMBOL(skb_queue_lock_key);
+
/*
* Keep out-of-line to prevent kernel bloat.
* __builtin_return_address is not used because it is not always
#include <net/tcp.h>
#endif
+/*
+ * Each address family might have different locking rules, so we have
+ * one slock key per address family:
+ */
+static struct lock_class_key af_family_keys[AF_MAX];
+static struct lock_class_key af_family_slock_keys[AF_MAX];
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+/*
+ * Make lock validator output more readable. (we pre-construct these
+ * strings at build time, so that runtime initialization of socket
+ * locks is fast):
+ */
+static const char *af_family_key_strings[AF_MAX+1] = {
+ "sk_lock-AF_UNSPEC", "sk_lock-AF_UNIX" , "sk_lock-AF_INET" ,
+ "sk_lock-AF_AX25" , "sk_lock-AF_IPX" , "sk_lock-AF_APPLETALK",
+ "sk_lock-AF_NETROM", "sk_lock-AF_BRIDGE" , "sk_lock-AF_ATMPVC" ,
+ "sk_lock-AF_X25" , "sk_lock-AF_INET6" , "sk_lock-AF_ROSE" ,
+ "sk_lock-AF_DECnet", "sk_lock-AF_NETBEUI" , "sk_lock-AF_SECURITY" ,
+ "sk_lock-AF_KEY" , "sk_lock-AF_NETLINK" , "sk_lock-AF_PACKET" ,
+ "sk_lock-AF_ASH" , "sk_lock-AF_ECONET" , "sk_lock-AF_ATMSVC" ,
+ "sk_lock-21" , "sk_lock-AF_SNA" , "sk_lock-AF_IRDA" ,
+ "sk_lock-AF_PPPOX" , "sk_lock-AF_WANPIPE" , "sk_lock-AF_LLC" ,
+ "sk_lock-27" , "sk_lock-28" , "sk_lock-29" ,
+ "sk_lock-AF_TIPC" , "sk_lock-AF_BLUETOOTH", "sk_lock-AF_MAX"
+};
+static const char *af_family_slock_key_strings[AF_MAX+1] = {
+ "slock-AF_UNSPEC", "slock-AF_UNIX" , "slock-AF_INET" ,
+ "slock-AF_AX25" , "slock-AF_IPX" , "slock-AF_APPLETALK",
+ "slock-AF_NETROM", "slock-AF_BRIDGE" , "slock-AF_ATMPVC" ,
+ "slock-AF_X25" , "slock-AF_INET6" , "slock-AF_ROSE" ,
+ "slock-AF_DECnet", "slock-AF_NETBEUI" , "slock-AF_SECURITY" ,
+ "slock-AF_KEY" , "slock-AF_NETLINK" , "slock-AF_PACKET" ,
+ "slock-AF_ASH" , "slock-AF_ECONET" , "slock-AF_ATMSVC" ,
+ "slock-21" , "slock-AF_SNA" , "slock-AF_IRDA" ,
+ "slock-AF_PPPOX" , "slock-AF_WANPIPE" , "slock-AF_LLC" ,
+ "slock-27" , "slock-28" , "slock-29" ,
+ "slock-AF_TIPC" , "slock-AF_BLUETOOTH", "slock-AF_MAX"
+};
+#endif
+
+/*
+ * sk_callback_lock locking rules are per-address-family,
+ * so split the lock classes by using a per-AF key:
+ */
+static struct lock_class_key af_callback_keys[AF_MAX];
+
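(Editorial note: the per-family key arrays declared here are only storage; they are wired up further down in this patch, via lockdep_set_class_and_name() on sk_lock.slock in sock_lock_init() and lockdep_set_class() on sk_callback_lock in the clone and init paths, so lockdep can distinguish socket locks by address family instead of collapsing them into a single class.)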
/* Take into consideration the size of the struct sk_buff overhead in the
* determination of these values, since that is non-constant across
* platforms. This makes socket queueing behavior and performance
skb->dev = NULL;
bh_lock_sock(sk);
- if (!sock_owned_by_user(sk))
+ if (!sock_owned_by_user(sk)) {
+ /*
+ * trylock + unlock semantics:
+ */
+ mutex_acquire(&sk->sk_lock.dep_map, 0, 1, _RET_IP_);
+
rc = sk->sk_backlog_rcv(sk, skb);
- else
+
+ mutex_release(&sk->sk_lock.dep_map, 1, _RET_IP_);
+ } else
sk_add_backlog(sk, skb);
bh_unlock_sock(sk);
out:
return 0;
}
+/*
+ * Initialize an sk_lock.
+ *
+ * (We also register the sk_lock with the lock validator.)
+ */
+static inline void sock_lock_init(struct sock *sk)
+{
+ spin_lock_init(&sk->sk_lock.slock);
+ sk->sk_lock.owner = NULL;
+ init_waitqueue_head(&sk->sk_lock.wq);
+ /*
+ * Make sure we are not reinitializing a held lock:
+ */
+ debug_check_no_locks_freed((void *)&sk->sk_lock, sizeof(sk->sk_lock));
+
+ /*
+ * Mark both the sk_lock and the sk_lock.slock as a
+ * per-address-family lock class:
+ */
+ lockdep_set_class_and_name(&sk->sk_lock.slock,
+ af_family_slock_keys + sk->sk_family,
+ af_family_slock_key_strings[sk->sk_family]);
+ lockdep_init_map(&sk->sk_lock.dep_map,
+ af_family_key_strings[sk->sk_family],
+ af_family_keys + sk->sk_family);
+}
+
/**
* sk_alloc - All socket objects are allocated here
* @family: protocol family
rwlock_init(&newsk->sk_dst_lock);
rwlock_init(&newsk->sk_callback_lock);
+ lockdep_set_class(&newsk->sk_callback_lock,
+ af_callback_keys + newsk->sk_family);
newsk->sk_dst_cache = NULL;
newsk->sk_wmem_queued = 0;
rwlock_init(&sk->sk_dst_lock);
rwlock_init(&sk->sk_callback_lock);
+ lockdep_set_class(&sk->sk_callback_lock,
+ af_callback_keys + sk->sk_family);
sk->sk_state_change = sock_def_wakeup;
sk->sk_data_ready = sock_def_readable;
void fastcall lock_sock(struct sock *sk)
{
might_sleep();
- spin_lock_bh(&(sk->sk_lock.slock));
+ spin_lock_bh(&sk->sk_lock.slock);
if (sk->sk_lock.owner)
__lock_sock(sk);
sk->sk_lock.owner = (void *)1;
- spin_unlock_bh(&(sk->sk_lock.slock));
+ spin_unlock(&sk->sk_lock.slock);
+ /*
+ * The sk_lock has mutex_lock() semantics here:
+ */
+ mutex_acquire(&sk->sk_lock.dep_map, 0, 0, _RET_IP_);
+ local_bh_enable();
}
EXPORT_SYMBOL(lock_sock);
void fastcall release_sock(struct sock *sk)
{
- spin_lock_bh(&(sk->sk_lock.slock));
+ /*
+ * The sk_lock has mutex_unlock() semantics:
+ */
+ mutex_release(&sk->sk_lock.dep_map, 1, _RET_IP_);
+
+ spin_lock_bh(&sk->sk_lock.slock);
if (sk->sk_backlog.tail)
__release_sock(sk);
sk->sk_lock.owner = NULL;
- if (waitqueue_active(&(sk->sk_lock.wq)))
- wake_up(&(sk->sk_lock.wq));
- spin_unlock_bh(&(sk->sk_lock.slock));
+ if (waitqueue_active(&sk->sk_lock.wq))
+ wake_up(&sk->sk_lock.wq);
+ spin_unlock_bh(&sk->sk_lock.slock);
}
EXPORT_SYMBOL(release_sock);
int ihl;
int id;
- if (!pskb_may_pull(skb, sizeof(*iph)))
+ if (unlikely(skb_shinfo(skb)->gso_type &
+ ~(SKB_GSO_TCPV4 |
+ SKB_GSO_UDP |
+ SKB_GSO_DODGY |
+ SKB_GSO_TCP_ECN |
+ 0)))
+ goto out;
+
+ if (unlikely(!pskb_may_pull(skb, sizeof(*iph))))
goto out;
iph = skb->nh.iph;
if (ihl < sizeof(*iph))
goto out;
- if (!pskb_may_pull(skb, ihl))
+ if (unlikely(!pskb_may_pull(skb, ihl)))
goto out;
skb->h.raw = __skb_pull(skb, ihl);
rcu_read_lock();
ops = rcu_dereference(inet_protos[proto]);
- if (ops && ops->gso_segment)
+ if (likely(ops && ops->gso_segment))
segs = ops->gso_segment(skb, features);
rcu_read_unlock();
struct rt_hash_bucket {
struct rtable *chain;
};
-#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
+#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK) || \
+ defined(CONFIG_PROVE_LOCKING)
/*
* Instead of using one spinlock for each rt_hash_bucket, we use a table of spinlocks
* The size of this table is a power of two and depends on the number of CPUS.
+ * (on lockdep we have a quite big spinlock_t, so keep the size down there)
*/
-#if NR_CPUS >= 32
-#define RT_HASH_LOCK_SZ 4096
-#elif NR_CPUS >= 16
-#define RT_HASH_LOCK_SZ 2048
-#elif NR_CPUS >= 8
-#define RT_HASH_LOCK_SZ 1024
-#elif NR_CPUS >= 4
-#define RT_HASH_LOCK_SZ 512
+#ifdef CONFIG_LOCKDEP
+# define RT_HASH_LOCK_SZ 256
#else
-#define RT_HASH_LOCK_SZ 256
+# if NR_CPUS >= 32
+# define RT_HASH_LOCK_SZ 4096
+# elif NR_CPUS >= 16
+# define RT_HASH_LOCK_SZ 2048
+# elif NR_CPUS >= 8
+# define RT_HASH_LOCK_SZ 1024
+# elif NR_CPUS >= 4
+# define RT_HASH_LOCK_SZ 512
+# else
+# define RT_HASH_LOCK_SZ 256
+# endif
#endif
static spinlock_t *rt_hash_locks;
if (skb_gso_ok(skb, features | NETIF_F_GSO_ROBUST)) {
/* Packet is from an untrusted source, reset gso_segs. */
- int mss = skb_shinfo(skb)->gso_size;
+ int type = skb_shinfo(skb)->gso_type;
+ int mss;
+
+ if (unlikely(type &
+ ~(SKB_GSO_TCPV4 |
+ SKB_GSO_DODGY |
+ SKB_GSO_TCP_ECN |
+ SKB_GSO_TCPV6 |
+ 0) ||
+ !(type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6))))
+ goto out;
+ mss = skb_shinfo(skb)->gso_size;
skb_shinfo(skb)->gso_segs = (skb->len + mss - 1) / mss;
segs = NULL;
void tcp_v4_send_check(struct sock *sk, int len, struct sk_buff *skb);
struct inet_hashinfo __cacheline_aligned tcp_hashinfo = {
- .lhash_lock = RW_LOCK_UNLOCKED,
+ .lhash_lock = __RW_LOCK_UNLOCKED(tcp_hashinfo.lhash_lock),
.lhash_users = ATOMIC_INIT(0),
.lhash_wait = __WAIT_QUEUE_HEAD_INITIALIZER(tcp_hashinfo.lhash_wait),
};
skb->dev = NULL;
- bh_lock_sock(sk);
+ bh_lock_sock_nested(sk);
ret = 0;
if (!sock_owned_by_user(sk)) {
#ifdef CONFIG_NET_DMA
struct inet_timewait_death_row tcp_death_row = {
.sysctl_max_tw_buckets = NR_FILE * 2,
.period = TCP_TIMEWAIT_LEN / INET_TWDR_TWKILL_SLOTS,
- .death_lock = SPIN_LOCK_UNLOCKED,
+ .death_lock = __SPIN_LOCK_UNLOCKED(tcp_death_row.death_lock),
.hashinfo = &tcp_hashinfo,
.tw_timer = TIMER_INITIALIZER(inet_twdr_hangman, 0,
(unsigned long)&tcp_death_row),
struct inet6_protocol *ops;
int proto;
+ if (unlikely(skb_shinfo(skb)->gso_type &
+ ~(SKB_GSO_UDP |
+ SKB_GSO_DODGY |
+ SKB_GSO_TCP_ECN |
+ SKB_GSO_TCPV6 |
+ 0)))
+ goto out;
+
if (unlikely(!pskb_may_pull(skb, sizeof(*ipv6h))))
goto out;
for (skb = segs; skb; skb = skb->next) {
ipv6h = skb->nh.ipv6h;
- ipv6h->payload_len = htons(skb->len - skb->mac_len);
+ ipv6h->payload_len = htons(skb->len - skb->mac_len -
+ sizeof(*ipv6h));
}
out:
#include <linux/vmalloc.h>
#include <linux/netdevice.h>
#include <linux/module.h>
+#include <linux/poison.h>
#include <linux/icmpv6.h>
#include <net/ipv6.h>
#include <asm/uaccess.h>
} while (!hotdrop);
#ifdef CONFIG_NETFILTER_DEBUG
- ((struct ip6t_entry *)table_base)->comefrom = 0xdead57ac;
+ ((struct ip6t_entry *)table_base)->comefrom = NETFILTER_LINK_POISON;
#endif
read_unlock_bh(&table->lock);
static void netlink_table_grab(void)
{
- write_lock_bh(&nl_table_lock);
+ write_lock_irq(&nl_table_lock);
if (atomic_read(&nl_table_users)) {
DECLARE_WAITQUEUE(wait, current);
set_current_state(TASK_UNINTERRUPTIBLE);
if (atomic_read(&nl_table_users) == 0)
break;
- write_unlock_bh(&nl_table_lock);
+ write_unlock_irq(&nl_table_lock);
schedule();
- write_lock_bh(&nl_table_lock);
+ write_lock_irq(&nl_table_lock);
}
__set_current_state(TASK_RUNNING);
static __inline__ void netlink_table_ungrab(void)
{
- write_unlock_bh(&nl_table_lock);
+ write_unlock_irq(&nl_table_lock);
wake_up(&nl_table_wait);
}
/* Now attach up the new socket */
kfree_skb(skb);
- sk->sk_ack_backlog--;
+ sk_acceptq_removed(sk);
newsock->sk = newsk;
out:
nr_make->vr = 0;
nr_make->vl = 0;
nr_make->state = NR_STATE_3;
- sk->sk_ack_backlog++;
+ sk_acceptq_added(sk);
nr_insert_socket(make);
rose_insert_socket(sk); /* Finish the bind */
}
-
+rose_try_next_neigh:
rose->dest_addr = addr->srose_addr;
rose->dest_call = addr->srose_call;
rose->rand = ((long)rose & 0xFFFF) + rose->lci;
}
if (sk->sk_state != TCP_ESTABLISHED) {
+ /* Try next neighbour */
+ rose->neighbour = rose_get_neigh(&addr->srose_addr, &cause, &diagnostic);
+ if (rose->neighbour)
+ goto rose_try_next_neigh;
+ /* No more neighbour */
sock->state = SS_UNCONNECTED;
return sock_error(sk); /* Always set at this point */
}
struct net_device_stats *stats = netdev_priv(dev);
unsigned char *bp = (unsigned char *)skb->data;
struct sk_buff *skbn;
+ unsigned int len;
#ifdef CONFIG_INET
if (arp_find(bp + 7, skb)) {
kfree_skb(skb);
+ len = skbn->len;
+
if (!rose_route_frame(skbn, NULL)) {
kfree_skb(skbn);
stats->tx_errors++;
}
stats->tx_packets++;
- stats->tx_bytes += skbn->len;
+ stats->tx_bytes += len;
#endif
return 1;
}
struct dentry *dentry, *dvec[10];
int n = 0;
- mutex_lock(&dir->i_mutex);
+ mutex_lock_nested(&dir->i_mutex, I_MUTEX_CHILD);
repeat:
spin_lock(&dcache_lock);
list_for_each_safe(pos, next, &parent->d_subdirs) {
if ((error = rpc_lookup_parent(path, nd)) != 0)
return ERR_PTR(error);
dir = nd->dentry->d_inode;
- mutex_lock(&dir->i_mutex);
+ mutex_lock_nested(&dir->i_mutex, I_MUTEX_PARENT);
dentry = lookup_one_len(nd->last.name, nd->dentry, nd->last.len);
if (IS_ERR(dentry))
goto out_err;
if ((error = rpc_lookup_parent(path, &nd)) != 0)
return error;
dir = nd.dentry->d_inode;
- mutex_lock(&dir->i_mutex);
+ mutex_lock_nested(&dir->i_mutex, I_MUTEX_PARENT);
dentry = lookup_one_len(nd.last.name, nd.dentry, nd.last.len);
if (IS_ERR(dentry)) {
error = PTR_ERR(dentry);
if ((error = rpc_lookup_parent(path, &nd)) != 0)
return error;
dir = nd.dentry->d_inode;
- mutex_lock(&dir->i_mutex);
+ mutex_lock_nested(&dir->i_mutex, I_MUTEX_PARENT);
dentry = lookup_one_len(nd.last.name, nd.dentry, nd.last.len);
if (IS_ERR(dentry)) {
error = PTR_ERR(dentry);
* buf_acquire - creates a TIPC message buffer
* @size: message size (including TIPC header)
*
- * Returns a new buffer. Space is reserved for a data link header.
+ * Returns a new buffer with data pointers set to the specified size.
+ *
+ * NOTE: Headroom is reserved to allow prepending of a data link header.
+ * There may also be unrequested tailroom present at the buffer's end.
*/
static inline struct sk_buff *buf_acquire(u32 size)
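(Editorial sketch, not part of the patch: the headroom arrangement the updated comment describes is the usual alloc_skb()/skb_reserve()/skb_put() pattern; BUF_HEADROOM below stands in for whatever reserve TIPC actually defines for the data link header.)

	struct sk_buff *skb = alloc_skb(BUF_HEADROOM + size, GFP_ATOMIC);

	if (skb) {
		skb_reserve(skb, BUF_HEADROOM);	/* room to prepend a data link header later */
		skb_put(skb, size);		/* data area covers the requested size */
	}
	return skb;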
return 0;
if (skb_tailroom(bundler) < (pad + size))
return 0;
+ if (link_max_pkt(l_ptr) < (to_pos + size))
+ return 0;
skb_put(bundler, pad + size);
memcpy(bundler->data + to_pos, buf->data, size);
scm->seclen = *UNIXSECLEN(skb);
}
#else
-static void unix_get_peersec_dgram(struct sk_buff *skb)
+static inline void unix_get_peersec_dgram(struct sk_buff *skb)
{ }
static inline void unix_set_secdata(struct scm_cookie *scm, struct sk_buff *skb)
.obj_size = sizeof(struct unix_sock),
};
+/*
+ * AF_UNIX sockets do not interact with hardware, hence they
+ * don't trigger interrupts - so it's safe for them to have
+ * bh-unsafe locking for their sk_receive_queue.lock. Split off
+ * this special lock-class by reinitializing the spinlock key:
+ */
+static struct lock_class_key af_unix_sk_receive_queue_lock_key;
+
static struct sock * unix_create1(struct socket *sock)
{
struct sock *sk = NULL;
atomic_inc(&unix_nr_socks);
sock_init_data(sock,sk);
+ lockdep_set_class(&sk->sk_receive_queue.lock,
+ &af_unix_sk_receive_queue_lock_key);
sk->sk_write_space = unix_write_space;
sk->sk_max_ack_backlog = sysctl_unix_max_dgram_qlen;
goto out_unlock;
}
- unix_state_wlock(sk);
+ unix_state_wlock_nested(sk);
if (sk->sk_state != st) {
unix_state_wunlock(sk);
#! /usr/bin/perl
#
-# checkversion find uses of LINUX_VERSION_CODE, KERNEL_VERSION, or
-# UTS_RELEASE without including <linux/version.h>, or cases of
+# checkversion finds uses of LINUX_VERSION_CODE or KERNEL_VERSION
+# without including <linux/version.h>, or cases of
# including <linux/version.h> that don't need it.
# Copyright (C) 2003, Randy Dunlap <rdunlap@xenotime.net>
}
# Look for uses: LINUX_VERSION_CODE, KERNEL_VERSION, UTS_RELEASE
- if (($_ =~ /LINUX_VERSION_CODE/) || ($_ =~ /\WKERNEL_VERSION/) ||
- ($_ =~ /UTS_RELEASE/)) {
+ if (($_ =~ /LINUX_VERSION_CODE/) || ($_ =~ /\WKERNEL_VERSION/)) {
$fUseVersion = 1;
last LINE if $iLinuxVersion;
}
static void get_irq(struct device_node * np, int *irqptr)
{
- *irqptr = -1;
- if (!np)
- return;
- if (np->n_intrs != 1)
- return;
- *irqptr = np->intrs[0].line;
+ *irqptr = irq_of_parse_and_map(np, 0);
}
/* 0x4 is outenable, 0x1 is out, thus 4 or 5 */
if (strncmp(np->name, "i2s-", 4))
return 0;
- if (np->n_intrs != 3)
+ if (macio_irq_count(macio) != 3)
return 0;
dev = kzalloc(sizeof(struct i2sbus_dev), GFP_KERNEL);
snprintf(dev->rnames[i], sizeof(dev->rnames[i]), rnames[i], np->name);
}
for (i=0;i<3;i++) {
- if (request_irq(np->intrs[i].line, ints[i], 0, dev->rnames[i], dev))
+ if (request_irq(macio_irq(macio, i), ints[i], 0,
+ dev->rnames[i], dev))
goto err;
- dev->interrupts[i] = np->intrs[i].line;
+ dev->interrupts[i] = macio_irq(macio, i);
}
for (i=0;i<3;i++) {
if (ret)
goto out;
- ret = request_irq(aaci->dev->irq[0], aaci_irq, SA_SHIRQ|SA_INTERRUPT,
+ ret = request_irq(aaci->dev->irq[0], aaci_irq, IRQF_SHARED|IRQF_DISABLED,
DRIVER_NAME, aaci);
if (ret)
goto out;
/* set up driver entry */
strlcpy(ops->id, id, sizeof(ops->id));
mutex_init(&ops->reg_mutex);
+ /*
+ * The ->reg_mutex locking rules are per-driver, so we create
+ * separate per-driver lock classes:
+ */
+ lockdep_set_class(&ops->reg_mutex, (struct lock_class_key *)id);
+
ops->driver = DRIVER_EMPTY;
INIT_LIST_HEAD(&ops->dev_list);
/* lock this instance */
atomic_set(&subs->ref_count, 2);
down_write(&src->list_mutex);
- down_write(&dest->list_mutex);
+ down_write_nested(&dest->list_mutex, SINGLE_DEPTH_NESTING);
exclusive = info->flags & SNDRV_SEQ_PORT_SUBS_EXCLUSIVE ? 1 : 0;
err = -EBUSY;
unsigned long flags;
down_write(&src->list_mutex);
- down_write(&dest->list_mutex);
+ down_write_nested(&dest->list_mutex, SINGLE_DEPTH_NESTING);
/* look for the connection */
list_for_each(p, &src->list_head) {
if ((err = snd_mpu401_uart_new(card, 0,
MPU401_HW_MPU401,
port[dev], 0,
- irq[dev], irq[dev] >= 0 ? SA_INTERRUPT : 0, NULL)) < 0) {
+ irq[dev], irq[dev] >= 0 ? IRQF_DISABLED : 0, NULL)) < 0) {
printk(KERN_ERR "MPU401 not detected at 0x%lx\n", port[dev]);
goto _err;
}
return -EBUSY;
}
mcard->port = port;
- if (request_irq(irq, snd_mtpav_irqh, SA_INTERRUPT, "MOTU MTPAV", mcard)) {
+ if (request_irq(irq, snd_mtpav_irqh, IRQF_DISABLED, "MOTU MTPAV", mcard)) {
snd_printk("MTVAP IRQ %d busy\n", irq);
return -EBUSY;
}
if (irq >= 0 && irq != SNDRV_AUTO_IRQ) {
if (request_irq(irq, snd_uart16550_interrupt,
- SA_INTERRUPT, "Serial MIDI", (void *) uart)) {
+ IRQF_DISABLED, "Serial MIDI", (void *) uart)) {
snd_printk("irq %d busy. Using Polling.\n", irq);
} else {
uart->irq = irq;
if (mpu_port[dev] > 0) {
if (snd_mpu401_uart_new(card, 0, MPU401_HW_MPU401,
- mpu_port[dev], 0, mpu_irq[dev], SA_INTERRUPT,
+ mpu_port[dev], 0, mpu_irq[dev], IRQF_DISABLED,
NULL) < 0)
printk(KERN_ERR PFX "no MPU-401 device at 0x%lx.\n", mpu_port[dev]);
}
snd_ad1816a_free(chip);
return -EBUSY;
}
- if (request_irq(irq, snd_ad1816a_interrupt, SA_INTERRUPT, "AD1816A", (void *) chip)) {
+ if (request_irq(irq, snd_ad1816a_interrupt, IRQF_DISABLED, "AD1816A", (void *) chip)) {
snd_printk(KERN_ERR "ad1816a: can't grab IRQ %d\n", irq);
snd_ad1816a_free(chip);
return -EBUSY;
snd_ad1848_free(chip);
return -EBUSY;
}
- if (request_irq(irq, snd_ad1848_interrupt, SA_INTERRUPT, "AD1848", (void *) chip)) {
+ if (request_irq(irq, snd_ad1848_interrupt, IRQF_DISABLED, "AD1848", (void *) chip)) {
snd_printk(KERN_ERR "ad1848: can't grab IRQ %d\n", irq);
snd_ad1848_free(chip);
return -EBUSY;
if (mpu_port[dev] > 0 && mpu_port[dev] != SNDRV_AUTO_PORT) {
if (snd_mpu401_uart_new(card, 0, MPU401_HW_ALS100,
mpu_port[dev], 0,
- mpu_irq[dev], SA_INTERRUPT,
+ mpu_irq[dev], IRQF_DISABLED,
NULL) < 0)
snd_printk(KERN_ERR PFX "no MPU-401 device at 0x%lx\n", mpu_port[dev]);
}
if (mpu_port[dev] > 0 && mpu_port[dev] != SNDRV_AUTO_PORT) {
if (snd_mpu401_uart_new(card, 0, MPU401_HW_AZT2320,
mpu_port[dev], 0,
- mpu_irq[dev], SA_INTERRUPT,
+ mpu_irq[dev], IRQF_DISABLED,
NULL) < 0)
snd_printk(KERN_ERR PFX "no MPU-401 device at 0x%lx\n", mpu_port[dev]);
}
if (snd_mpu401_uart_new(card, 0, MPU401_HW_CS4232,
mpu_port[dev], 0,
mpu_irq[dev],
- mpu_irq[dev] >= 0 ? SA_INTERRUPT : 0,
+ mpu_irq[dev] >= 0 ? IRQF_DISABLED : 0,
NULL) < 0)
printk(KERN_WARNING "cs4231: MPU401 not detected\n");
}
return -ENODEV;
}
chip->cport = cport;
- if (!(hwshare & CS4231_HWSHARE_IRQ) && request_irq(irq, snd_cs4231_interrupt, SA_INTERRUPT, "CS4231", (void *) chip)) {
+ if (!(hwshare & CS4231_HWSHARE_IRQ) && request_irq(irq, snd_cs4231_interrupt, IRQF_DISABLED, "CS4231", (void *) chip)) {
snd_printk(KERN_ERR "cs4231: can't grab IRQ %d\n", irq);
snd_cs4231_free(chip);
return -EBUSY;
if (snd_mpu401_uart_new(card, 0, MPU401_HW_CS4232,
mpu_port[dev], 0,
mpu_irq[dev],
- mpu_irq[dev] >= 0 ? SA_INTERRUPT : 0, NULL) < 0)
+ mpu_irq[dev] >= 0 ? IRQF_DISABLED : 0, NULL) < 0)
printk(KERN_WARNING IDENT ": MPU401 not detected\n");
}
MPU401_HW_MPU401,
mpu_port[dev], 0,
mpu_irq[dev],
- mpu_irq[dev] >= 0 ? SA_INTERRUPT : 0,
+ mpu_irq[dev] >= 0 ? IRQF_DISABLED : 0,
NULL) < 0)
snd_printk(KERN_ERR PFX "no MPU-401 device at 0x%lx ?\n", mpu_port[dev]);
}
if ((err = snd_mpu401_uart_new(card, 0, MPU401_HW_ES1688,
chip->mpu_port, 0,
xmpu_irq,
- SA_INTERRUPT,
+ IRQF_DISABLED,
NULL)) < 0)
goto _err;
}
snd_es1688_free(chip);
return -EBUSY;
}
- if (request_irq(irq, snd_es1688_interrupt, SA_INTERRUPT, "ES1688", (void *) chip)) {
+ if (request_irq(irq, snd_es1688_interrupt, IRQF_DISABLED, "ES1688", (void *) chip)) {
snd_printk(KERN_ERR "es1688: can't grab IRQ %d\n", irq);
snd_es1688_free(chip);
return -EBUSY;
return -EBUSY;
}
- if (request_irq(irq, snd_es18xx_interrupt, SA_INTERRUPT, "ES18xx", (void *) chip)) {
+ if (request_irq(irq, snd_es18xx_interrupt, IRQF_DISABLED, "ES18xx", (void *) chip)) {
snd_es18xx_free(chip);
snd_printk(KERN_ERR PFX "unable to grap IRQ %d\n", irq);
return -EBUSY;
snd_gus_free(gus);
return -EBUSY;
}
- if (irq >= 0 && request_irq(irq, snd_gus_interrupt, SA_INTERRUPT, "GUS GF1", (void *) gus)) {
+ if (irq >= 0 && request_irq(irq, snd_gus_interrupt, IRQF_DISABLED, "GUS GF1", (void *) gus)) {
snd_printk(KERN_ERR "gus: can't grab irq %d\n", irq);
snd_gus_free(gus);
return -EBUSY;
(err = snd_mpu401_uart_new(card, 0, MPU401_HW_ES1688,
es1688->mpu_port, 0,
xmpu_irq,
- SA_INTERRUPT,
+ IRQF_DISABLED,
NULL)) < 0)
goto out;
goto _err;
}
- if (request_irq(xirq, snd_gusmax_interrupt, SA_INTERRUPT, "GUS MAX", (void *)maxcard)) {
+ if (request_irq(xirq, snd_gusmax_interrupt, IRQF_DISABLED, "GUS MAX", (void *)maxcard)) {
snd_printk(KERN_ERR PFX "unable to grab IRQ %d\n", xirq);
err = -EBUSY;
goto _err;
if ((err = snd_gus_initialize(gus)) < 0)
return err;
- if (request_irq(xirq, snd_interwave_interrupt, SA_INTERRUPT,
+ if (request_irq(xirq, snd_interwave_interrupt, IRQF_DISABLED,
"InterWave", iwcard)) {
snd_printk(KERN_ERR PFX "unable to grab IRQ %d\n", xirq);
return -EBUSY;
chip->single_dma = 1;
if ((err = snd_opl3sa2_detect(chip)) < 0)
return err;
- if (request_irq(xirq, snd_opl3sa2_interrupt, SA_INTERRUPT, "OPL3-SA2", chip)) {
+ if (request_irq(xirq, snd_opl3sa2_interrupt, IRQF_DISABLED, "OPL3-SA2", chip)) {
snd_printk(KERN_ERR PFX "can't grab IRQ %d\n", xirq);
return -ENODEV;
}
rmidi = NULL;
else
if ((error = snd_mpu401_uart_new(card, 0, MPU401_HW_MPU401,
- miro->mpu_port, 0, miro->mpu_irq, SA_INTERRUPT,
+ miro->mpu_port, 0, miro->mpu_irq, IRQF_DISABLED,
&rmidi)))
snd_printk(KERN_WARNING "no MPU-401 device at 0x%lx?\n", miro->mpu_port);
}
codec->dma2 = chip->dma2;
- if (request_irq(chip->irq, snd_opti93x_interrupt, SA_INTERRUPT, DRIVER_NAME" - WSS", codec)) {
+ if (request_irq(chip->irq, snd_opti93x_interrupt, IRQF_DISABLED, DRIVER_NAME" - WSS", codec)) {
snd_printk(KERN_ERR "opti9xx: can't grab IRQ %d\n", chip->irq);
snd_opti93x_free(codec);
return -EBUSY;
rmidi = NULL;
else
if ((error = snd_mpu401_uart_new(card, 0, MPU401_HW_MPU401,
- chip->mpu_port, 0, chip->mpu_irq, SA_INTERRUPT,
+ chip->mpu_port, 0, chip->mpu_irq, IRQF_DISABLED,
&rmidi)))
snd_printk(KERN_WARNING "no MPU-401 device at 0x%lx?\n",
chip->mpu_port);
chip->port = port;
if (request_irq(irq, irq_handler, hardware == SB_HW_ALS4000 ?
- SA_INTERRUPT | SA_SHIRQ : SA_INTERRUPT,
+ IRQF_DISABLED | IRQF_SHARED : IRQF_DISABLED,
"SoundBlaster", (void *) chip)) {
snd_printk(KERN_ERR "sb: can't grab irq %d\n", irq);
snd_sbdsp_free(chip);
if (tmp < 0)
return -EINVAL;
- if (request_irq(irq, snd_sgalaxy_dummy_interrupt, SA_INTERRUPT, "sgalaxy", NULL)) {
+ if (request_irq(irq, snd_sgalaxy_dummy_interrupt, IRQF_DISABLED, "sgalaxy", NULL)) {
snd_printk(KERN_ERR "sgalaxy: can't grab irq %d\n", irq);
return -EIO;
}
if ((err = snd_mpu401_uart_new(card, devnum,
MPU401_HW_MPU401,
port, MPU401_INFO_INTEGRATED,
- irq, SA_INTERRUPT,
+ irq, IRQF_DISABLED,
&rawmidi)) == 0) {
struct snd_mpu401 *mpu = (struct snd_mpu401 *) rawmidi->private_data;
mpu->open_input = mpu401_open;
return -EBUSY;
}
if (request_irq(ics2115_irq[dev], snd_wavefront_ics2115_interrupt,
- SA_INTERRUPT, "ICS2115", acard)) {
+ IRQF_DISABLED, "ICS2115", acard)) {
snd_printk(KERN_ERR "unable to use ICS2115 IRQ %d\n", ics2115_irq[dev]);
return -EBUSY;
}
if ((err = snd_mpu401_uart_new(card, midi_dev, MPU401_HW_CS4232,
cs4232_mpu_port[dev], 0,
cs4232_mpu_irq[dev],
- SA_INTERRUPT,
+ IRQF_DISABLED,
NULL)) < 0) {
snd_printk (KERN_ERR "can't allocate CS4232 MPU-401 device\n");
return err;
flags = claim_dma_lock();
if ((au1000->stream[PLAYBACK]->dma = request_au1000_dma(DMA_ID_AC97C_TX,
- "AC97 TX", au1000_dma_interrupt, SA_INTERRUPT,
+ "AC97 TX", au1000_dma_interrupt, IRQF_DISABLED,
au1000->stream[PLAYBACK])) < 0) {
release_dma_lock(flags);
return -EBUSY;
}
if ((au1000->stream[CAPTURE]->dma = request_au1000_dma(DMA_ID_AC97C_RX,
- "AC97 RX", au1000_dma_interrupt, SA_INTERRUPT,
+ "AC97 RX", au1000_dma_interrupt, IRQF_DISABLED,
au1000->stream[CAPTURE])) < 0){
release_dma_lock(flags);
return -EBUSY;
goto out2;
}
- if (request_irq(pcidev->irq, ad1889_interrupt, SA_SHIRQ, DEVNAME, dev) != 0) {
+ if (request_irq(pcidev->irq, ad1889_interrupt, IRQF_SHARED, DEVNAME, dev) != 0) {
printk(KERN_ERR DEVNAME ": unable to request interrupt\n");
goto out3;
}
card->channel[4].num = 4;
/* claim our iospace and irq */
request_region(card->iobase, 256, card_names[pci_id->driver_data]);
- if (request_irq(card->irq, &ali_interrupt, SA_SHIRQ,
+ if (request_irq(card->irq, &ali_interrupt, IRQF_SHARED,
card_names[pci_id->driver_data], card)) {
printk(KERN_ERR "ali_audio: unable to allocate irq %d\n",
card->irq);
if ((s->dma_dac.dmanr = request_au1000_dma(DMA_ID_AC97C_TX,
"audio DAC",
dac_dma_interrupt,
- SA_INTERRUPT, s)) < 0) {
+ IRQF_DISABLED, s)) < 0) {
err("Can't get DAC DMA");
goto err_dma1;
}
if ((s->dma_adc.dmanr = request_au1000_dma(DMA_ID_AC97C_RX,
"audio ADC",
adc_dma_interrupt,
- SA_INTERRUPT, s)) < 0) {
+ IRQF_DISABLED, s)) < 0) {
err("Can't get ADC DMA");
goto err_dma2;
}
btwrite(~0U, REG_INT_STAT);
pci_set_master(pci_dev);
- if ((rc = request_irq(bta->irq, btaudio_irq, SA_SHIRQ|SA_INTERRUPT,
+ if ((rc = request_irq(bta->irq, btaudio_irq, IRQF_SHARED|IRQF_DISABLED,
"btaudio",(void *)bta)) < 0) {
printk(KERN_WARNING
"btaudio: can't request irq (rc=%d)\n",rc);
wrmixer(s, DSP_MIX_DATARESETIDX, 0);
/* request irq */
- if ((ret = request_irq(s->irq, cm_interrupt, SA_SHIRQ, "cmpci", s))) {
+ if ((ret = request_irq(s->irq, cm_interrupt, IRQF_SHARED, "cmpci", s))) {
printk(KERN_ERR "cmpci: irq %u in use\n", s->irq);
goto err_irq;
}
s->pcidev = pcidev;
s->irq = pcidev->irq;
if (request_irq
- (s->irq, cs4281_interrupt, SA_SHIRQ, "Crystal CS4281", s)) {
+ (s->irq, cs4281_interrupt, IRQF_SHARED, "Crystal CS4281", s)) {
CS_DBGOUT(CS_INIT | CS_ERROR, 1,
printk(KERN_ERR "cs4281: irq %u in use\n", s->irq));
goto err_irq;
card->ba1.name.reg == 0)
goto fail2;
- if (request_irq(card->irq, &cs_interrupt, SA_SHIRQ, "cs46xx", card)) {
+ if (request_irq(card->irq, &cs_interrupt, IRQF_SHARED, "cs46xx", card)) {
printk(KERN_ERR "cs46xx: unable to allocate irq %d\n", card->irq);
goto fail2;
}
*gpio_pol = *pp;
else
*gpio_pol = 1;
- if (np->n_intrs > 0)
- return np->intrs[0].line;
-
- return 0;
+ return irq_of_parse_and_map(np, 0);
}
static inline void
* other info if necessary (early AWACS we want to read chip ids)
*/
- if (of_get_address(io, 2, NULL, NULL) == NULL || io->n_intrs < 3) {
+ if (of_get_address(io, 2, NULL, NULL) == NULL) {
/* OK - maybe we need to use the 'awacs' node (on earlier
* machines).
*/
if (awacs_node) {
io = awacs_node ;
- if (of_get_address(io, 2, NULL, NULL) == NULL ||
- io->n_intrs < 3) {
+ if (of_get_address(io, 2, NULL, NULL) == NULL) {
printk("dmasound_pmac: can't use %s\n",
io->full_name);
return -ENODEV;
if (awacs_revision == AWACS_SCREAMER && awacs)
awacs_recalibrate();
- awacs_irq = io->intrs[0].line;
- awacs_tx_irq = io->intrs[1].line;
- awacs_rx_irq = io->intrs[2].line;
+ awacs_irq = irq_of_parse_and_map(io, 0);
+ awacs_tx_irq = irq_of_parse_and_map(io, 1);
+ awacs_rx_irq = irq_of_parse_and_map(io, 2);
/* Hack for legacy crap that will be killed someday */
awacs_node = io;
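
The PowerMac dmasound hunks above stop reading the firmware interrupt table directly (np->intrs[i].line, guarded by np->n_intrs) and instead call irq_of_parse_and_map(), which parses the device-tree interrupt specifier and returns a virtual IRQ number, or NO_IRQ when no mapping can be made. A short sketch of that pattern, with a hypothetical helper name and includes assumed from the powerpc tree of this era:

	#include <linux/interrupt.h>
	#include <asm/prom.h>		/* irq_of_parse_and_map() */

	/* Hypothetical helper: resolve interrupt 'index' of a device node and
	 * request it.  Failure is reported as NO_IRQ (0 on powerpc), not as a
	 * negative value. */
	static int example_request_node_irq(struct device_node *np, int index,
			irqreturn_t (*handler)(int, void *, struct pt_regs *),
			void *dev_id)
	{
		unsigned int virq = irq_of_parse_and_map(np, index);

		if (virq == NO_IRQ)
			return -ENODEV;
		return request_irq(virq, handler, 0, "example", dev_id);
	}
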
card->pci_dev = pci_dev;
/* Reserve IRQ Line */
- if (request_irq(card->irq, emu10k1_interrupt, SA_SHIRQ, card_names[pci_id->driver_data], card)) {
+ if (request_irq(card->irq, emu10k1_interrupt, IRQF_SHARED, card_names[pci_id->driver_data], card)) {
printk(KERN_ERR "emu10k1: IRQ in use\n");
ret = -EBUSY;
goto err_irq;
ret = -EBUSY;
goto err_region;
}
- if ((ret=request_irq(s->irq, es1370_interrupt, SA_SHIRQ, "es1370",s))) {
+ if ((ret=request_irq(s->irq, es1370_interrupt, IRQF_SHARED, "es1370",s))) {
printk(KERN_ERR "es1370: irq %u in use\n", s->irq);
goto err_irq;
}
res = -EBUSY;
goto err_region;
}
- if ((res=request_irq(s->irq, es1371_interrupt, SA_SHIRQ, "es1371",s))) {
+ if ((res=request_irq(s->irq, es1371_interrupt, IRQF_SHARED, "es1371",s))) {
printk(KERN_ERR PFX "irq %u in use\n", s->irq);
goto err_irq;
}
printk(KERN_ERR "solo1: io ports in use\n");
goto err_region4;
}
- if ((ret=request_irq(s->irq,solo1_interrupt,SA_SHIRQ,"ESS Solo1",s))) {
+ if ((ret=request_irq(s->irq,solo1_interrupt,IRQF_SHARED,"ESS Solo1",s))) {
printk(KERN_ERR "solo1: irq %u in use\n", s->irq);
goto err_irq;
}
chip->iobase = pci_resource_start (pci_dev, 0);
chip->irq = pci_dev->irq;
- if (request_irq (chip->irq, forte_interrupt, SA_SHIRQ, DRIVER_NAME,
+ if (request_irq (chip->irq, forte_interrupt, IRQF_SHARED, DRIVER_NAME,
chip)) {
printk (KERN_WARNING PFX "Unable to reserve IRQ");
ret = -EIO;
hpc3->pbus_dmacfg[hal2->dac.pbus.pbusnr][0] = 0x8208844;
hpc3->pbus_dmacfg[hal2->adc.pbus.pbusnr][0] = 0x8208844;
- if (request_irq(SGI_HPCDMA_IRQ, hal2_interrupt, SA_SHIRQ,
+ if (request_irq(SGI_HPCDMA_IRQ, hal2_interrupt, IRQF_SHARED,
hal2str, hal2)) {
printk(KERN_ERR "HAL2: Can't get irq %d\n", SGI_HPCDMA_IRQ);
ret = -EAGAIN;
goto out_iospace;
}
- if (request_irq(card->irq, &i810_interrupt, SA_SHIRQ,
+ if (request_irq(card->irq, &i810_interrupt, IRQF_SHARED,
card_names[pci_id->driver_data], card)) {
printk(KERN_ERR "i810_audio: unable to allocate irq %d\n", card->irq);
goto out_iospace;
s->io, s->io + pci_resource_len(pcidev,0)-1);
goto err_region;
}
- if (request_irq(s->irq, it8172_interrupt, SA_INTERRUPT,
+ if (request_irq(s->irq, it8172_interrupt, IRQF_DISABLED,
IT8172_MODULE_NAME, s)) {
err("irq %u in use", s->irq);
goto err_irq;
mixer_push_state(card);
}
- if((ret=request_irq(card->irq, ess_interrupt, SA_SHIRQ, card_names[card_type], card)))
+ if((ret=request_irq(card->irq, ess_interrupt, IRQF_SHARED, card_names[card_type], card)))
{
printk(KERN_ERR "maestro: unable to allocate irq %d,\n", card->irq);
unregister_sound_mixer(card->dev_mixer);
}
}
- if(request_irq(card->irq, m3_interrupt, SA_SHIRQ, card_names[card->card_type], card)) {
+ if(request_irq(card->irq, m3_interrupt, IRQF_SHARED, card_names[card->card_type], card)) {
printk(KERN_ERR PFX "unable to allocate irq %d,\n", card->irq);
s->io, s->io + pci_resource_len(pcidev,0)-1);
goto err_region;
}
- if (request_irq(s->irq, vrc5477_ac97_interrupt, SA_INTERRUPT,
+ if (request_irq(s->irq, vrc5477_ac97_interrupt, IRQF_DISABLED,
VRC5477_AC97_MODULE_NAME, s)) {
printk(KERN_ERR PFX "irq %u in use\n", s->irq);
goto err_irq;
nm256_grabInterrupt (struct nm256_info *card)
{
if (card->has_irq++ == 0) {
- if (request_irq (card->irq, card->introutine, SA_SHIRQ,
+ if (request_irq (card->irq, card->introutine, IRQF_SHARED,
"NM256_audio", card) < 0) {
printk (KERN_ERR "NM256: can't obtain IRQ %d\n", card->irq);
return -1;
if (pci_enable_device(pcidev))
goto err_irq;
- if (request_irq(s->irq, rme96xx_interrupt, SA_SHIRQ, "rme96xx", s)) {
+ if (request_irq(s->irq, rme96xx_interrupt, IRQF_SHARED, "rme96xx", s)) {
printk(KERN_ERR RME_MESS" irq %u in use\n", s->irq);
goto err_irq;
}
* will get shared PCI irq lines we must cope.
*/
- int i=(devc->caps&SB_PCI_IRQ)?SA_SHIRQ:0;
+ int i=(devc->caps&SB_PCI_IRQ)?IRQF_SHARED:0;
if (request_irq(hw_config->irq, sbintr, i, "soundblaster", devc) < 0)
{
dac_audio_set_rate();
retval =
- request_irq(TIMER1_IRQ, timer1_interrupt, SA_INTERRUPT, MODNAME, 0);
+ request_irq(TIMER1_IRQ, timer1_interrupt, IRQF_DISABLED, MODNAME, 0);
if (retval < 0) {
printk(KERN_ERR "sh_dac_audio: IRQ %d request failed\n",
TIMER1_IRQ);
wrindir(s, SV_CIPCMSR1, ((8000 * 65536 / FULLRATE) >> 8) & 0xff);
wrindir(s, SV_CIADCOUTPUT, 0);
/* request irq */
- if ((ret=request_irq(s->irq,sv_interrupt,SA_SHIRQ,"S3 SonicVibes",s))) {
+ if ((ret=request_irq(s->irq,sv_interrupt,IRQF_SHARED,"S3 SonicVibes",s))) {
printk(KERN_ERR "sv: irq %u in use\n", s->irq);
goto err_irq;
}
/* claim our irq */
rc = -ENODEV;
- if (request_irq(card->irq, &trident_interrupt, SA_SHIRQ,
+ if (request_irq(card->irq, &trident_interrupt, IRQF_SHARED,
card_names[pci_id->driver_data], card)) {
printk(KERN_ERR "trident: unable to allocate irq %d\n",
card->irq);
tmp8 |= VIA_CR48_FM_TRAP_TO_NMI;
pci_write_config_byte (card->pdev, VIA_FM_NMI_CTRL, tmp8);
}
- if (request_irq (card->pdev->irq, via_interrupt, SA_SHIRQ, VIA_MODULE_NAME, card)) {
+ if (request_irq (card->pdev->irq, via_interrupt, IRQF_SHARED, VIA_MODULE_NAME, card)) {
printk (KERN_ERR PFX "unable to obtain IRQ %d, aborting\n",
card->pdev->irq);
DPRINTK ("EXIT, returning -EBUSY\n");
}
else
{
- if (request_irq (card->pdev->irq, via_new_interrupt, SA_SHIRQ, VIA_MODULE_NAME, card)) {
+ if (request_irq (card->pdev->irq, via_new_interrupt, IRQF_SHARED, VIA_MODULE_NAME, card)) {
printk (KERN_ERR PFX "unable to obtain IRQ %d, aborting\n",
card->pdev->irq);
DPRINTK ("EXIT, returning -EBUSY\n");
}
if (request_irq (dev.irq, wavefrontintr,
- SA_INTERRUPT|SA_SHIRQ,
+ IRQF_DISABLED|IRQF_SHARED,
"wavefront synth", &dev) < 0) {
printk (KERN_WARNING LOGNAME "IRQ %d not available!\n",
dev.irq);
/* OK, now we're configured to handle an interrupt ... */
- if (request_irq (phys_dev->irq, wf_mpuintr, SA_INTERRUPT|SA_SHIRQ,
+ if (request_irq (phys_dev->irq, wf_mpuintr, IRQF_DISABLED|IRQF_SHARED,
"wavefront midi", phys_dev) < 0) {
printk (KERN_ERR "WF-MPU: Failed to allocate IRQ%d\n",
goto out_disable_dsp;
ymf_memload(codec);
- if (request_irq(pcidev->irq, ymf_interrupt, SA_SHIRQ, "ymfpci", codec) != 0) {
+ if (request_irq(pcidev->irq, ymf_interrupt, IRQF_SHARED, "ymfpci", codec) != 0) {
printk(KERN_ERR "ymfpci: unable to request IRQ %d\n",
pcidev->irq);
goto out_memfree;
spin_lock_init(&chip->lock); /* only now can we call ad1889_free */
if (request_irq(pci->irq, snd_ad1889_interrupt,
- SA_INTERRUPT|SA_SHIRQ, card->driver, (void*)chip)) {
+ IRQF_DISABLED|IRQF_SHARED, card->driver, (void*)chip)) {
printk(KERN_ERR PFX "cannot obtain IRQ %d\n", pci->irq);
snd_ad1889_free(chip);
return -EBUSY;
return err;
codec->port = pci_resource_start(codec->pci, 0);
- if (request_irq(codec->pci->irq, snd_ali_card_interrupt, SA_INTERRUPT|SA_SHIRQ, "ALI 5451", (void *)codec)) {
+ if (request_irq(codec->pci->irq, snd_ali_card_interrupt, IRQF_DISABLED|IRQF_SHARED, "ALI 5451", (void *)codec)) {
snd_printk(KERN_ERR "Unable to request irq.\n");
return -EBUSY;
}
else
irq_handler = snd_als300_interrupt;
- if (request_irq(pci->irq, irq_handler, SA_INTERRUPT|SA_SHIRQ,
+ if (request_irq(pci->irq, irq_handler, IRQF_DISABLED|IRQF_SHARED,
card->shortname, (void *)chip)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", pci->irq);
snd_als300_free(chip);
return -EIO;
}
- if (request_irq(pci->irq, snd_atiixp_interrupt, SA_INTERRUPT|SA_SHIRQ,
+ if (request_irq(pci->irq, snd_atiixp_interrupt, IRQF_DISABLED|IRQF_SHARED,
card->shortname, chip)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", pci->irq);
snd_atiixp_free(chip);
return -EIO;
}
- if (request_irq(pci->irq, snd_atiixp_interrupt, SA_INTERRUPT|SA_SHIRQ,
+ if (request_irq(pci->irq, snd_atiixp_interrupt, IRQF_DISABLED|IRQF_SHARED,
card->shortname, chip)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", pci->irq);
snd_atiixp_free(chip);
}
if ((err = request_irq(pci->irq, vortex_interrupt,
- SA_INTERRUPT | SA_SHIRQ, CARD_NAME_SHORT,
+ IRQF_DISABLED | IRQF_SHARED, CARD_NAME_SHORT,
chip)) != 0) {
printk(KERN_ERR "cannot grab irq\n");
goto irq_out;
chip->synth_port = pci_resource_start(pci, 3);
chip->mixer_port = pci_resource_start(pci, 4);
- if (request_irq(pci->irq, snd_azf3328_interrupt, SA_INTERRUPT|SA_SHIRQ, card->shortname, (void *)chip)) {
+ if (request_irq(pci->irq, snd_azf3328_interrupt, IRQF_DISABLED|IRQF_SHARED, card->shortname, (void *)chip)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", pci->irq);
err = -EBUSY;
goto out_err;
snd_bt87x_writel(chip, REG_INT_MASK, 0);
snd_bt87x_writel(chip, REG_INT_STAT, MY_INTERRUPTS);
- if (request_irq(pci->irq, snd_bt87x_interrupt, SA_INTERRUPT | SA_SHIRQ,
+ if (request_irq(pci->irq, snd_bt87x_interrupt, IRQF_DISABLED | IRQF_SHARED,
"Bt87x audio", chip)) {
snd_bt87x_free(chip);
snd_printk(KERN_ERR "cannot grab irq\n");
}
if (request_irq(pci->irq, snd_ca0106_interrupt,
- SA_INTERRUPT|SA_SHIRQ, "snd_ca0106",
+ IRQF_DISABLED|IRQF_SHARED, "snd_ca0106",
(void *)chip)) {
snd_ca0106_free(chip);
printk(KERN_ERR "cannot grab irq\n");
cm->iobase = pci_resource_start(pci, 0);
if (request_irq(pci->irq, snd_cmipci_interrupt,
- SA_INTERRUPT|SA_SHIRQ, card->driver, cm)) {
+ IRQF_DISABLED|IRQF_SHARED, card->driver, cm)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", pci->irq);
snd_cmipci_free(cm);
return -EBUSY;
return -ENOMEM;
}
- if (request_irq(pci->irq, snd_cs4281_interrupt, SA_INTERRUPT|SA_SHIRQ,
+ if (request_irq(pci->irq, snd_cs4281_interrupt, IRQF_DISABLED|IRQF_SHARED,
"CS4281", chip)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", pci->irq);
snd_cs4281_free(chip);
}
}
- if (request_irq(pci->irq, snd_cs46xx_interrupt, SA_INTERRUPT|SA_SHIRQ,
+ if (request_irq(pci->irq, snd_cs46xx_interrupt, IRQF_DISABLED|IRQF_SHARED,
"CS46XX", chip)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", pci->irq);
snd_cs46xx_free(chip);
cs5535au->port = pci_resource_start(pci, 0);
if (request_irq(pci->irq, snd_cs5535audio_interrupt,
- SA_INTERRUPT|SA_SHIRQ, "CS5535 Audio", cs5535au)) {
+ IRQF_DISABLED|IRQF_SHARED, "CS5535 Audio", cs5535au)) {
snd_printk("unable to grab IRQ %d\n", pci->irq);
err = -EBUSY;
goto sndfail;
chip->dsp_registers = (volatile u32 __iomem *)
ioremap_nocache(chip->dsp_registers_phys, sz);
- if (request_irq(pci->irq, snd_echo_interrupt, SA_INTERRUPT | SA_SHIRQ,
+ if (request_irq(pci->irq, snd_echo_interrupt, IRQF_DISABLED | IRQF_SHARED,
ECHOCARD_NAME, (void *)chip)) {
snd_echo_free(chip);
snd_printk(KERN_ERR "cannot grab irq\n");
}
emu->port = pci_resource_start(pci, 0);
- if (request_irq(pci->irq, snd_emu10k1_interrupt, SA_INTERRUPT|SA_SHIRQ, "EMU10K1", (void *)emu)) {
+ if (request_irq(pci->irq, snd_emu10k1_interrupt, IRQF_DISABLED|IRQF_SHARED, "EMU10K1", (void *)emu)) {
err = -EBUSY;
goto error;
}
}
if (request_irq(pci->irq, snd_emu10k1x_interrupt,
- SA_INTERRUPT|SA_SHIRQ, "EMU10K1X",
+ IRQF_DISABLED|IRQF_SHARED, "EMU10K1X",
(void *)chip)) {
snd_printk(KERN_ERR "emu10k1x: cannot grab irq %d\n", pci->irq);
snd_emu10k1x_free(chip);
return err;
}
ensoniq->port = pci_resource_start(pci, 0);
- if (request_irq(pci->irq, snd_audiopci_interrupt, SA_INTERRUPT|SA_SHIRQ,
+ if (request_irq(pci->irq, snd_audiopci_interrupt, IRQF_DISABLED|IRQF_SHARED,
"Ensoniq AudioPCI", ensoniq)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", pci->irq);
snd_ensoniq_free(ensoniq);
pci_restore_state(pci);
pci_enable_device(pci);
request_irq(pci->irq, snd_es1938_interrupt,
- SA_INTERRUPT|SA_SHIRQ, "ES1938", chip);
+ IRQF_DISABLED|IRQF_SHARED, "ES1938", chip);
chip->irq = pci->irq;
snd_es1938_chip_init(chip);
chip->vc_port = pci_resource_start(pci, 2);
chip->mpu_port = pci_resource_start(pci, 3);
chip->game_port = pci_resource_start(pci, 4);
- if (request_irq(pci->irq, snd_es1938_interrupt, SA_INTERRUPT|SA_SHIRQ,
+ if (request_irq(pci->irq, snd_es1938_interrupt, IRQF_DISABLED|IRQF_SHARED,
"ES1938", chip)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", pci->irq);
snd_es1938_free(chip);
return err;
}
chip->io_port = pci_resource_start(pci, 0);
- if (request_irq(pci->irq, snd_es1968_interrupt, SA_INTERRUPT|SA_SHIRQ,
+ if (request_irq(pci->irq, snd_es1968_interrupt, IRQF_DISABLED|IRQF_SHARED,
"ESS Maestro", (void*)chip)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", pci->irq);
snd_es1968_free(chip);
return err;
}
chip->port = pci_resource_start(pci, 0);
- if (request_irq(pci->irq, snd_fm801_interrupt, SA_INTERRUPT|SA_SHIRQ,
+ if (request_irq(pci->irq, snd_fm801_interrupt, IRQF_DISABLED|IRQF_SHARED,
"FM801", chip)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", chip->irq);
snd_fm801_free(chip);
goto errout;
}
- if (request_irq(pci->irq, azx_interrupt, SA_INTERRUPT|SA_SHIRQ,
+ if (request_irq(pci->irq, azx_interrupt, IRQF_DISABLED|IRQF_SHARED,
"HDA Intel", (void*)chip)) {
snd_printk(KERN_ERR SFX "unable to grab IRQ %d\n", pci->irq);
err = -EBUSY;
ice->dmapath_port = pci_resource_start(pci, 2);
ice->profi_port = pci_resource_start(pci, 3);
- if (request_irq(pci->irq, snd_ice1712_interrupt, SA_INTERRUPT|SA_SHIRQ,
+ if (request_irq(pci->irq, snd_ice1712_interrupt, IRQF_DISABLED|IRQF_SHARED,
"ICE1712", ice)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", pci->irq);
snd_ice1712_free(ice);
ice->profi_port = pci_resource_start(pci, 1);
if (request_irq(pci->irq, snd_vt1724_interrupt,
- SA_INTERRUPT|SA_SHIRQ, "ICE1724", ice)) {
+ IRQF_DISABLED|IRQF_SHARED, "ICE1724", ice)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", pci->irq);
snd_vt1724_free(ice);
return -EIO;
pci_restore_state(pci);
pci_enable_device(pci);
pci_set_master(pci);
- request_irq(pci->irq, snd_intel8x0_interrupt, SA_INTERRUPT|SA_SHIRQ,
+ request_irq(pci->irq, snd_intel8x0_interrupt, IRQF_DISABLED|IRQF_SHARED,
card->shortname, chip);
chip->irq = pci->irq;
synchronize_irq(chip->irq);
/* request irq after initializing int_sta_mask, etc */
if (request_irq(pci->irq, snd_intel8x0_interrupt,
- SA_INTERRUPT|SA_SHIRQ, card->shortname, chip)) {
+ IRQF_DISABLED|IRQF_SHARED, card->shortname, chip)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", pci->irq);
snd_intel8x0_free(chip);
return -EBUSY;
}
port_inited:
- if (request_irq(pci->irq, snd_intel8x0_interrupt, SA_INTERRUPT|SA_SHIRQ,
+ if (request_irq(pci->irq, snd_intel8x0_interrupt, IRQF_DISABLED|IRQF_SHARED,
card->shortname, chip)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", pci->irq);
snd_intel8x0_free(chip);
}
err = request_irq(pci->irq, snd_korg1212_interrupt,
- SA_INTERRUPT|SA_SHIRQ,
+ IRQF_DISABLED|IRQF_SHARED,
"korg1212", korg1212);
if (err) {
tasklet_init(&chip->hwvol_tq, snd_m3_update_hw_volume, (unsigned long)chip);
- if (request_irq(pci->irq, snd_m3_interrupt, SA_INTERRUPT|SA_SHIRQ,
+ if (request_irq(pci->irq, snd_m3_interrupt, IRQF_DISABLED|IRQF_SHARED,
card->driver, chip)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", pci->irq);
snd_m3_free(chip);
pci_resource_len(pci, i));
}
- if (request_irq(pci->irq, snd_mixart_interrupt, SA_INTERRUPT|SA_SHIRQ, CARD_NAME, (void *)mgr)) {
+ if (request_irq(pci->irq, snd_mixart_interrupt, IRQF_DISABLED|IRQF_SHARED, CARD_NAME, (void *)mgr)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", pci->irq);
snd_mixart_free(mgr);
return -EBUSY;
{
mutex_lock(&chip->irq_mutex);
if (chip->irq < 0) {
- if (request_irq(chip->pci->irq, chip->interrupt, SA_INTERRUPT|SA_SHIRQ,
+ if (request_irq(chip->pci->irq, chip->interrupt, IRQF_DISABLED|IRQF_SHARED,
chip->card->driver, chip)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", chip->pci->irq);
mutex_unlock(&chip->irq_mutex);
mgr->pci = pci;
mgr->irq = -1;
- if (request_irq(pci->irq, pcxhr_interrupt, SA_INTERRUPT|SA_SHIRQ,
+ if (request_irq(pci->irq, pcxhr_interrupt, IRQF_DISABLED|IRQF_SHARED,
card_name, mgr)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", pci->irq);
pcxhr_free(mgr);
UNSET_AIE(hwport);
if (request_irq
- (pci->irq, snd_riptide_interrupt, SA_INTERRUPT | SA_SHIRQ,
+ (pci->irq, snd_riptide_interrupt, IRQF_DISABLED | IRQF_SHARED,
"RIPTIDE", chip)) {
snd_printk(KERN_ERR "Riptide: unable to grab IRQ %d\n",
pci->irq);
return -ENOMEM;
}
- if (request_irq(pci->irq, snd_rme32_interrupt, SA_INTERRUPT | SA_SHIRQ, "RME32", (void *) rme32)) {
+ if (request_irq(pci->irq, snd_rme32_interrupt, IRQF_DISABLED | IRQF_SHARED, "RME32", (void *) rme32)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", pci->irq);
return -EBUSY;
}
return -ENOMEM;
}
- if (request_irq(pci->irq, snd_rme96_interrupt, SA_INTERRUPT|SA_SHIRQ, "RME96", (void *)rme96)) {
+ if (request_irq(pci->irq, snd_rme96_interrupt, IRQF_DISABLED|IRQF_SHARED, "RME96", (void *)rme96)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", pci->irq);
return -EBUSY;
}
return -EBUSY;
}
- if (request_irq(pci->irq, snd_hdsp_interrupt, SA_INTERRUPT|SA_SHIRQ, "hdsp", (void *)hdsp)) {
+ if (request_irq(pci->irq, snd_hdsp_interrupt, IRQF_DISABLED|IRQF_SHARED, "hdsp", (void *)hdsp)) {
snd_printk(KERN_ERR "Hammerfall-DSP: unable to use IRQ %d\n", pci->irq);
return -EBUSY;
}
hdspm->port + io_extent - 1);
if (request_irq(pci->irq, snd_hdspm_interrupt,
- SA_INTERRUPT | SA_SHIRQ, "hdspm",
+ IRQF_DISABLED | IRQF_SHARED, "hdspm",
(void *) hdspm)) {
snd_printk(KERN_ERR "HDSPM: unable to use IRQ %d\n", pci->irq);
return -EBUSY;
return -EBUSY;
}
- if (request_irq(pci->irq, snd_rme9652_interrupt, SA_INTERRUPT|SA_SHIRQ, "rme9652", (void *)rme9652)) {
+ if (request_irq(pci->irq, snd_rme9652_interrupt, IRQF_DISABLED|IRQF_SHARED, "rme9652", (void *)rme9652)) {
snd_printk(KERN_ERR "unable to request IRQ %d\n", pci->irq);
return -EBUSY;
}
sonic->midi_port = pci_resource_start(pci, 3);
sonic->game_port = pci_resource_start(pci, 4);
- if (request_irq(pci->irq, snd_sonicvibes_interrupt, SA_INTERRUPT|SA_SHIRQ, "S3 SonicVibes", (void *)sonic)) {
+ if (request_irq(pci->irq, snd_sonicvibes_interrupt, IRQF_DISABLED|IRQF_SHARED, "S3 SonicVibes", (void *)sonic)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", pci->irq);
snd_sonicvibes_free(sonic);
return -EBUSY;
}
trident->port = pci_resource_start(pci, 0);
- if (request_irq(pci->irq, snd_trident_interrupt, SA_INTERRUPT|SA_SHIRQ,
+ if (request_irq(pci->irq, snd_trident_interrupt, IRQF_DISABLED|IRQF_SHARED,
"Trident Audio", trident)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", pci->irq);
snd_trident_free(trident);
if (request_irq(pci->irq,
chip_type == TYPE_VIA8233 ?
snd_via8233_interrupt : snd_via686_interrupt,
- SA_INTERRUPT|SA_SHIRQ,
+ IRQF_DISABLED|IRQF_SHARED,
card->driver, chip)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", pci->irq);
snd_via82xx_free(chip);
return err;
}
chip->port = pci_resource_start(pci, 0);
- if (request_irq(pci->irq, snd_via82xx_interrupt, SA_INTERRUPT|SA_SHIRQ,
+ if (request_irq(pci->irq, snd_via82xx_interrupt, IRQF_DISABLED|IRQF_SHARED,
card->driver, chip)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", pci->irq);
snd_via82xx_free(chip);
for (i = 0; i < 2; i++)
vx->port[i] = pci_resource_start(pci, i + 1);
- if (request_irq(pci->irq, snd_vx_irq_handler, SA_INTERRUPT|SA_SHIRQ,
+ if (request_irq(pci->irq, snd_vx_irq_handler, IRQF_DISABLED|IRQF_SHARED,
CARD_NAME, (void *) chip)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", pci->irq);
snd_vx222_free(chip);
snd_ymfpci_free(chip);
return -EBUSY;
}
- if (request_irq(pci->irq, snd_ymfpci_interrupt, SA_INTERRUPT|SA_SHIRQ, "YMFPCI", (void *) chip)) {
+ if (request_irq(pci->irq, snd_ymfpci_interrupt, IRQF_DISABLED|IRQF_SHARED, "YMFPCI", (void *) chip)) {
snd_printk(KERN_ERR "unable to grab IRQ %d\n", pci->irq);
snd_ymfpci_free(chip);
return -EBUSY;
struct snd_pmac *chip;
struct device_node *np;
int i, err;
+ unsigned int irq;
unsigned long ctrl_addr, txdma_addr, rxdma_addr;
static struct snd_device_ops ops = {
.dev_free = snd_pmac_dev_free,
if (chip->is_k2) {
static char *rnames[] = {
"Sound Control", "Sound DMA" };
- if (np->n_intrs < 3) {
- err = -ENODEV;
- goto __error;
- }
for (i = 0; i < 2; i ++) {
if (of_address_to_resource(np->parent, i,
&chip->rsrc[i])) {
chip->rsrc[i].start + 1,
rnames[i]) == NULL) {
printk(KERN_ERR "snd: can't request rsrc "
- " %d (%s: 0x%016lx:%016lx)\n",
+ " %d (%s: 0x%016llx:%016llx)\n",
i, rnames[i],
(unsigned long long)chip->rsrc[i].start,
(unsigned long long)chip->rsrc[i].end);
} else {
static char *rnames[] = {
"Sound Control", "Sound Tx DMA", "Sound Rx DMA" };
- if (np->n_intrs < 3) {
- err = -ENODEV;
- goto __error;
- }
for (i = 0; i < 3; i ++) {
if (of_address_to_resource(np, i,
&chip->rsrc[i])) {
chip->playback.dma = ioremap(txdma_addr, 0x100);
chip->capture.dma = ioremap(rxdma_addr, 0x100);
if (chip->model <= PMAC_BURGUNDY) {
- if (request_irq(np->intrs[0].line, snd_pmac_ctrl_intr, 0,
+ irq = irq_of_parse_and_map(np, 0);
+ if (request_irq(irq, snd_pmac_ctrl_intr, 0,
"PMac", (void*)chip)) {
- snd_printk(KERN_ERR "pmac: unable to grab IRQ %d\n", np->intrs[0].line);
+ snd_printk(KERN_ERR "pmac: unable to grab IRQ %d\n",
+ irq);
err = -EBUSY;
goto __error;
}
- chip->irq = np->intrs[0].line;
+ chip->irq = irq;
}
- if (request_irq(np->intrs[1].line, snd_pmac_tx_intr, 0,
- "PMac Output", (void*)chip)) {
- snd_printk(KERN_ERR "pmac: unable to grab IRQ %d\n", np->intrs[1].line);
+ irq = irq_of_parse_and_map(np, 1);
+ if (request_irq(irq, snd_pmac_tx_intr, 0, "PMac Output", (void*)chip)){
+ snd_printk(KERN_ERR "pmac: unable to grab IRQ %d\n", irq);
err = -EBUSY;
goto __error;
}
- chip->tx_irq = np->intrs[1].line;
- if (request_irq(np->intrs[2].line, snd_pmac_rx_intr, 0,
- "PMac Input", (void*)chip)) {
- snd_printk(KERN_ERR "pmac: unable to grab IRQ %d\n", np->intrs[2].line);
+ chip->tx_irq = irq;
+ irq = irq_of_parse_and_map(np, 2);
+ if (request_irq(irq, snd_pmac_rx_intr, 0, "PMac Input", (void*)chip)) {
+ snd_printk(KERN_ERR "pmac: unable to grab IRQ %d\n", irq);
err = -EBUSY;
goto __error;
}
- chip->rx_irq = np->intrs[2].line;
+ chip->rx_irq = irq;
snd_pmac_sound_feature(chip, 1);
DBG("(I) GPIO device %s found, offset: %x, active state: %d !\n",
device, gp->addr, gp->active_state);
- return (node->n_intrs > 0) ? node->intrs[0].line : 0;
+ return irq_of_parse_and_map(node, 0);
}
/* reset audio */
&mix->line_mute, 1);
irq = tumbler_find_device("headphone-detect",
NULL, &mix->hp_detect, 0);
- if (irq < 0)
+ if (irq <= NO_IRQ)
irq = tumbler_find_device("headphone-detect",
NULL, &mix->hp_detect, 1);
- if (irq < 0)
+ if (irq <= NO_IRQ)
irq = tumbler_find_device("keywest-gpio15",
NULL, &mix->hp_detect, 1);
mix->headphone_irq = irq;
irq = tumbler_find_device("line-output-detect",
NULL, &mix->line_detect, 0);
- if (irq < 0)
+ if (irq <= NO_IRQ)
irq = tumbler_find_device("line-output-detect",
NULL, &mix->line_detect, 1);
mix->lineout_irq = irq;
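
Because irq_of_parse_and_map() signals failure with NO_IRQ (0 on powerpc) rather than a negative number, the tumbler detection checks above change from "irq < 0" to "irq <= NO_IRQ". A compressed sketch of the idea, reusing the includes from the previous example and a hypothetical helper name:

	/* Sketch: "no interrupt" is now anything <= NO_IRQ, since the mapping
	 * routine never returns a negative errno. */
	static int example_detect_irq(struct device_node *np)
	{
		int irq = irq_of_parse_and_map(np, 0);

		return (irq <= NO_IRQ) ? -ENODEV : irq;
	}
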
amd7930_idle(amd);
if (request_irq(irq, snd_amd7930_interrupt,
- SA_INTERRUPT | SA_SHIRQ, "amd7930", amd)) {
+ IRQF_DISABLED | IRQF_SHARED, "amd7930", amd)) {
snd_printk("amd7930-%d: Unable to grab IRQ %d\n",
dev, irq);
snd_amd7930_free(amd);
strcpy(card->driver, "AMD7930");
strcpy(card->shortname, "Sun AMD7930");
- sprintf(card->longname, "%s at 0x%02lx:0x%08lx, irq %d",
+ sprintf(card->longname, "%s at 0x%02lx:0x%08Lx, irq %d",
card->shortname,
rp->flags & 0xffL,
- rp->start,
+ (unsigned long long)rp->start,
irq);
if ((err = snd_amd7930_create(card, rp,
chip->c_dma.preallocate = sbus_dma_preallocate;
if (request_irq(sdev->irqs[0], snd_cs4231_sbus_interrupt,
- SA_SHIRQ, "cs4231", chip)) {
+ IRQF_SHARED, "cs4231", chip)) {
snd_printdd("cs4231-%d: Unable to grab SBUS IRQ %d\n",
dev, sdev->irqs[0]);
snd_cs4231_sbus_free(chip);
if (err)
return err;
- sprintf(card->longname, "%s at 0x%02lx:0x%016lx, irq %d",
+ sprintf(card->longname, "%s at 0x%02lx:0x%016Lx, irq %d",
card->shortname,
rp->flags & 0xffL,
(unsigned long long)rp->start,
return -EIO;
}
- err = request_irq(dbri->irq, snd_dbri_interrupt, SA_SHIRQ,
+ err = request_irq(dbri->irq, snd_dbri_interrupt, IRQF_SHARED,
"DBRI audio", dbri);
if (err) {
printk(KERN_ERR "DBRI: Can't get irq %d\n", dbri->irq);
strcpy(card->driver, "DBRI");
strcpy(card->shortname, "Sun DBRI");
rp = &sdev->resource[0];
- sprintf(card->longname, "%s at 0x%02lx:0x%016lx, irq %d",
+ sprintf(card->longname, "%s at 0x%02lx:0x%016Lx, irq %d",
card->shortname,
rp->flags & 0xffL, (unsigned long long)rp->start, irq.pri);