Commit 7591103c authored by Linus Torvalds

Merge git://git.kernel.org/pub/scm/linux/kernel/git/bart/ide-2.6

* git://git.kernel.org/pub/scm/linux/kernel/git/bart/ide-2.6: (66 commits)
  ata: Add documentation for hard disk shock protection interface (v3)
  ide: Implement disk shock protection support (v4)
  ide-cd: fix printk format warning
  piix: add Hercules EC-900 mini-notebook to ich_laptop short cable list
  ide-atapi: assign taskfile flags per device type
  ide-cd: move cdrom_info.dma to ide_drive_t.dma
  ide: add ide_drive_t.dma flag
  ide-cd: add a debug_mask module parameter
  ide-cd: convert driver to new ide debugging macro (v3)
  ide: move SFF DMA code to ide-dma-sff.c
  ide: cleanup ide-dma.c
  ide: cleanup ide_build_dmatable()
  ide: remove needless includes from ide-dma.c
  ide: switch to DMA-mapping API part #2
  ide: make ide_dma_timeout() available also for CONFIG_BLK_DEV_IDEDMA_SFF=n
  ide: make ide_dma_lost_irq() available also for CONFIG_BLK_DEV_IDEDMA_SFF=n
  ide: __ide_dma_end() -> ide_dma_end()
  pmac: remove needless pmac_ide_destroy_dmatable() wrapper
  pmac: remove superfluous pmif == NULL checks
  ide: Two fixes regarding memory allocation
  ...
Hard disk shock protection
==========================
Author: Elias Oltmanns <eo@nebensachen.de>
Last modified: 2008-10-03
0. Contents
-----------
1. Intro
2. The interface
3. References
4. CREDITS
1. Intro
--------
ATA/ATAPI-7 specifies the IDLE IMMEDIATE command with unload feature.
Issuing this command should cause the drive to switch to idle mode and
unload disk heads. This feature is being used in modern laptops in
conjunction with accelerometers and appropriate software to implement
a shock protection facility. The idea is to stop all I/O operations on
the internal hard drive and park its heads on the ramp when critical
situations are anticipated. The desire to have such a feature
available on GNU/Linux systems was the original motivation for
implementing a generic disk head parking interface in the Linux kernel.
Please note, however, that other components have to be set up on your
system in order to get disk shock protection working (see
section 3. References below for pointers to more information about
that).
2. The interface
----------------
For each ATA device, the kernel exports the file
block/*/device/unload_heads in sysfs (here assumed to be mounted under
/sys). Access to /sys/block/*/device/unload_heads is denied with
-EOPNOTSUPP if the device does not support the unload feature.
Otherwise, writing an integer value to this file will take the heads
of the respective drive off the platter and block all I/O operations
for the specified number of milliseconds. When the timeout expires and
no further disk head park request has been issued in the meantime,
normal operation will be resumed. The maximum value accepted for a
timeout is 30000 milliseconds. Specifying a larger value returns
-EOVERFLOW, but the heads will be parked anyway and the timeout will be
set to 30 seconds. However, you can always change a timeout to any
value between 0 and 30000 by issuing a subsequent head park request
before the timeout of the previous one has expired. In particular, the
total timeout can exceed 30 seconds and, more importantly, you can
cancel a previously set timeout and resume normal operation
immediately by specifying a timeout of 0. Values below -2 are rejected
with -EINVAL (see below for the special meaning of -1 and -2). If the
timeout specified for a recent head park request has not yet expired,
reading from /sys/block/*/device/unload_heads will report the number
of milliseconds remaining until normal operation is resumed;
otherwise, reading the unload_heads attribute will return 0.
For example, do the following in order to park the heads of drive
/dev/sda and stop all I/O operations for five seconds:
# echo 5000 > /sys/block/sda/device/unload_heads
A simple
# cat /sys/block/sda/device/unload_heads
will show you how many milliseconds are left before normal operation
is resumed.
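
The same interface can, of course, be driven from a program instead of
the shell. The following is a minimal user-space sketch (illustrative
only: the park_heads() helper is invented, while the sysfs path and the
accepted value range come from the description above). It parks the
heads of /dev/sda for five seconds and then reads back the remaining
time; a cancel is simply a subsequent write of 0, and the special
values -1/-2 described further below are written the same way.

#include <stdio.h>
#include <string.h>
#include <errno.h>

/*
 * Write a timeout (in milliseconds, 0..30000) to the unload_heads
 * attribute of the given block device, e.g. "sda".  Returns 0 on
 * success and -1 on error.  Since stdio buffers the output, errors
 * such as EOPNOTSUPP or EOVERFLOW may only be reported when the
 * stream is flushed, hence the check on fclose().
 */
static int park_heads(const char *dev, int timeout_ms)
{
	char path[128];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/block/%s/device/unload_heads", dev);
	f = fopen(path, "w");
	if (!f)
		return -1;
	if (fprintf(f, "%d", timeout_ms) < 0) {
		fclose(f);
		return -1;
	}
	return fclose(f) ? -1 : 0;
}

int main(void)
{
	char buf[32];
	FILE *f;

	if (park_heads("sda", 5000)) {
		fprintf(stderr, "park failed: %s\n", strerror(errno));
		return 1;
	}

	/* report how many milliseconds are left until normal operation */
	f = fopen("/sys/block/sda/device/unload_heads", "r");
	if (f && fgets(buf, sizeof(buf), f))
		printf("remaining: %s", buf);
	if (f)
		fclose(f);
	return 0;
}
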
A word of caution: The fact that the interface operates on a basis of
milliseconds may raise expectations that cannot be satisfied in
reality. In fact, the ATA specs clearly state that the time for an
unload operation to complete is vendor specific. The hint in ATA-7
that this will typically be within 500 milliseconds has apparently
been dropped in ATA-8.
There is a technical detail of this implementation that may cause some
confusion and should be discussed here. When a head park request has
been issued to a device successfully, all I/O operations on the
controller port this device is attached to will be deferred. That is
to say, any other device that may be connected to the same port will
be affected too. The only exception is that a subsequent head unload
request to that other device will be executed immediately. Further
operations on that port will be deferred until the timeout specified
for either device on the port has expired. As far as PATA (old style
IDE) configurations are concerned, there can only be two devices
attached to any single port. In the SATA world there are port
multipliers, which means that a user-issued head parking request to
one device may actually stop I/O to a whole group of devices. However,
since this feature is intended for laptops and does not seem to be
very useful in any other environment, there will mostly be one
device per port. Even if the CD/DVD writer happens to be connected to
the same port as the hard drive, it generally *should* recover just
fine from the occasional buffer under-run incurred by a head park
request to the HD. Actually, when you are using an ide driver rather
than its libata counterpart (i.e. your disk is called /dev/hda
instead of /dev/sda), then parking the heads of one drive (drive X)
will generally not affect the mode of operation of another drive
(drive Y) on the same port as described above. It is only when a port
reset is required to recover from an exception on drive Y that further
I/O operations on that drive (and the reset itself) will be delayed
until drive X is no longer in the parked state.
Finally, there are some hard drives that only comply with an earlier
version of the ATA standard than ATA-7, but do support the unload
feature nonetheless. Unfortunately, there is no safe way Linux can
detect these devices, so you won't be able to write to the
unload_heads attribute. If you know that your device really does
support the unload feature (for instance, because the vendor of your
laptop or the hard drive itself told you so), then you can tell the
kernel to enable the usage of this feature for that drive by writing
the special value -1 to the unload_heads attribute:
# echo -1 > /sys/block/sda/device/unload_heads
will enable the feature for /dev/sda, and giving -2 instead of -1 will
disable it again.
3. References
-------------
There are several laptops from different vendors featuring shock
protection capabilities. As manufacturers have refused to support open
source development of the required software components so far, Linux
support for shock protection varies considerably between different
hardware implementations. Ideally, this section would contain a list
of pointers to the different projects aiming to implement shock
protection on different systems. Unfortunately, I only know of a
single project which, although still considered experimental, is fit
for use. Please feel free to add projects that have been the victims
of my ignorance.
- http://www.thinkwiki.org/wiki/HDAPS
See this page for information about Linux support of the hard disk
active protection system as implemented in IBM/Lenovo Thinkpads.
4. CREDITS
----------
This implementation of disk head parking has been inspired by a patch
originally published by Jon Escombe <lists@dresco.co.uk>. My efforts
to develop an implementation of this feature that is fit to be merged
into mainline have been aided by various kernel developers, in
particular by Tejun Heo and Bartlomiej Zolnierkiewicz.
......@@ -19,35 +19,6 @@
#include <linux/stddef.h>
#include <asm/processor.h>
static __inline__ int ide_probe_legacy(void)
{
#ifdef CONFIG_PCI
struct pci_dev *dev;
/*
* This can be called on the ide_setup() path, super-early in
* boot. But the down_read() will enable local interrupts,
* which can cause some machines to crash. So here we detect
* and flag that situation and bail out early.
*/
if (no_pci_devices())
return 0;
dev = pci_get_class(PCI_CLASS_BRIDGE_EISA << 8, NULL);
if (dev)
goto found;
dev = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, NULL);
if (dev)
goto found;
return 0;
found:
pci_dev_put(dev);
return 1;
#elif defined(CONFIG_EISA) || defined(CONFIG_ISA)
return 1;
#else
return 0;
#endif
}
/* MIPS port and memory-mapped I/O string operations. */
static inline void __ide_flush_prologue(void)
{
......
......@@ -53,11 +53,6 @@ extern struct fd_ops no_fd_ops;
struct fd_ops *fd_ops;
#endif
#if defined(CONFIG_BLK_DEV_IDE) || defined(CONFIG_BLK_DEV_IDE_MODULE)
extern struct ide_ops no_ide_ops;
struct ide_ops *ide_ops;
#endif
extern struct rtc_ops no_rtc_ops;
struct rtc_ops *rtc_ops;
......
......@@ -54,38 +54,6 @@ menuconfig IDE
if IDE
config BLK_DEV_IDE
tristate "Enhanced IDE/MFM/RLL disk/cdrom/tape/floppy support"
---help---
If you say Y here, you will use the full-featured IDE driver to
control up to ten ATA/IDE interfaces, each being able to serve a
"master" and a "slave" device, for a total of up to twenty ATA/IDE
disk/cdrom/tape/floppy drives.
Useful information about large (>540 MB) IDE disks, multiple
interfaces, what to do if ATA/IDE devices are not automatically
detected, sound card ATA/IDE ports, module support, and other
topics, is contained in <file:Documentation/ide/ide.txt>. For detailed
information about hard drives, consult the Disk-HOWTO and the
Multi-Disk-HOWTO, available from
<http://www.tldp.org/docs.html#howto>.
To fine-tune ATA/IDE drive/interface parameters for improved
performance, look for the hdparm package at
<ftp://ibiblio.org/pub/Linux/system/hardware/>.
To compile this driver as a module, choose M here and read
<file:Documentation/ide/ide.txt>. The module will be called ide-mod.
Do not compile this driver as a module if your root file system (the
one containing the directory /) is located on an IDE device.
If you have one or more IDE drives, say Y or M here. If your system
has no IDE drives, or if memory requirements are really tight, you
could say N here, and select the "Old hard disk driver" below
instead to save about 13 KB of memory in the kernel.
if BLK_DEV_IDE
comment "Please see Documentation/ide/ide.txt for help/info on IDE drives"
config IDE_TIMINGS
......@@ -348,7 +316,7 @@ config BLK_DEV_IDEPCI
config IDEPCI_PCIBUS_ORDER
bool "Probe IDE PCI devices in the PCI bus order (DEPRECATED)"
depends on BLK_DEV_IDE=y && BLK_DEV_IDEPCI
depends on IDE=y && BLK_DEV_IDEPCI
default y
help
Probe IDE PCI devices in the order in which they appear on the
......@@ -729,7 +697,7 @@ endif
config BLK_DEV_IDE_PMAC
tristate "PowerMac on-board IDE support"
depends on PPC_PMAC && IDE=y && BLK_DEV_IDE=y
depends on PPC_PMAC && IDE=y
select IDE_TIMINGS
help
This driver provides support for the on-board IDE controller on
......@@ -963,6 +931,4 @@ config BLK_DEV_IDEDMA
def_bool BLK_DEV_IDEDMA_SFF || BLK_DEV_IDEDMA_PMAC || \
BLK_DEV_IDEDMA_ICS || BLK_DEV_IDE_AU1XXX_MDMA2_DBDMA
endif
endif # IDE
......@@ -5,24 +5,25 @@
EXTRA_CFLAGS += -Idrivers/ide
ide-core-y += ide.o ide-ioctls.o ide-io.o ide-iops.o ide-lib.o ide-probe.o \
ide-taskfile.o ide-pio-blacklist.o
ide-taskfile.o ide-park.o ide-pio-blacklist.o
# core IDE code
ide-core-$(CONFIG_IDE_TIMINGS) += ide-timings.o
ide-core-$(CONFIG_IDE_ATAPI) += ide-atapi.o
ide-core-$(CONFIG_BLK_DEV_IDEPCI) += setup-pci.o
ide-core-$(CONFIG_BLK_DEV_IDEDMA) += ide-dma.o
ide-core-$(CONFIG_BLK_DEV_IDEDMA_SFF) += ide-dma-sff.o
ide-core-$(CONFIG_IDE_PROC_FS) += ide-proc.o
ide-core-$(CONFIG_BLK_DEV_IDEACPI) += ide-acpi.o
obj-$(CONFIG_BLK_DEV_IDE) += ide-core.o
obj-$(CONFIG_IDE) += ide-core.o
ifeq ($(CONFIG_IDE_ARM), y)
ide-arm-core-y += arm/ide_arm.o
obj-y += ide-arm-core.o
endif
obj-$(CONFIG_BLK_DEV_IDE) += legacy/ pci/
obj-$(CONFIG_IDE) += legacy/ pci/
obj-$(CONFIG_IDEPCI_PCIBUS_ORDER) += ide-scan-pci.o
......@@ -31,15 +32,21 @@ ifeq ($(CONFIG_BLK_DEV_CMD640), y)
obj-y += cmd640-core.o
endif
obj-$(CONFIG_BLK_DEV_IDE) += ppc/
obj-$(CONFIG_IDE) += ppc/
obj-$(CONFIG_IDE_H8300) += h8300/
obj-$(CONFIG_IDE_GENERIC) += ide-generic.o
obj-$(CONFIG_BLK_DEV_IDEPNP) += ide-pnp.o
ide-disk_mod-y += ide-disk.o ide-disk_ioctl.o
ide-cd_mod-y += ide-cd.o ide-cd_ioctl.o ide-cd_verbose.o
ide-floppy_mod-y += ide-floppy.o ide-floppy_ioctl.o
obj-$(CONFIG_BLK_DEV_IDEDISK) += ide-disk.o
ifeq ($(CONFIG_IDE_PROC_FS), y)
ide-disk_mod-y += ide-disk_proc.o
ide-floppy_mod-y += ide-floppy_proc.o
endif
obj-$(CONFIG_BLK_DEV_IDEDISK) += ide-disk_mod.o
obj-$(CONFIG_BLK_DEV_IDECD) += ide-cd_mod.o
obj-$(CONFIG_BLK_DEV_IDEFLOPPY) += ide-floppy_mod.o
obj-$(CONFIG_BLK_DEV_IDETAPE) += ide-tape.o
......@@ -54,4 +61,4 @@ ifeq ($(CONFIG_BLK_DEV_PLATFORM), y)
obj-y += ide-platform-core.o
endif
obj-$(CONFIG_BLK_DEV_IDE) += arm/ mips/
obj-$(CONFIG_IDE) += arm/ mips/
......@@ -372,25 +372,6 @@ static int icside_dma_test_irq(ide_drive_t *drive)
ICS_ARCIN_V6_INTRSTAT_1)) & 1;
}
static void icside_dma_timeout(ide_drive_t *drive)
{
ide_hwif_t *hwif = drive->hwif;
printk(KERN_ERR "%s: DMA timeout occurred: ", drive->name);
if (icside_dma_test_irq(drive))
return;
ide_dump_status(drive, "DMA timeout", hwif->tp_ops->read_status(hwif));
icside_dma_end(drive);
}
static void icside_dma_lost_irq(ide_drive_t *drive)
{
printk(KERN_ERR "%s: IRQ lost\n", drive->name);
}
static int icside_dma_init(ide_hwif_t *hwif, const struct ide_port_info *d)
{
hwif->dmatable_cpu = NULL;
......@@ -406,8 +387,8 @@ static const struct ide_dma_ops icside_v6_dma_ops = {
.dma_start = icside_dma_start,
.dma_end = icside_dma_end,
.dma_test_irq = icside_dma_test_irq,
.dma_timeout = icside_dma_timeout,
.dma_lost_irq = icside_dma_lost_irq,
.dma_timeout = ide_dma_timeout,
.dma_lost_irq = ide_dma_lost_irq,
};
#else
#define icside_v6_dma_ops NULL
......
......@@ -80,7 +80,7 @@ static void h8300_tf_load(ide_drive_t *drive, ide_task_t *task)
outb(tf->lbah, io_ports->lbah_addr);
if (task->tf_flags & IDE_TFLAG_OUT_DEVICE)
outb((tf->device & HIHI) | drive->select.all,
outb((tf->device & HIHI) | drive->select,
io_ports->device_addr);
}
......
......@@ -290,7 +290,7 @@ static int do_drive_get_GTF(ide_drive_t *drive,
DEBPRINT("ENTER: %s at %s, port#: %d, hard_port#: %d\n",
hwif->name, dev->bus_id, port, hwif->channel);
if (!drive->present) {
if ((drive->dev_flags & IDE_DFLAG_PRESENT) == 0) {
DEBPRINT("%s drive %d:%d not present\n",
hwif->name, hwif->channel, port);
goto out;
......@@ -420,8 +420,9 @@ static int do_drive_set_taskfiles(ide_drive_t *drive,
DEBPRINT("ENTER: %s, hard_port#: %d\n", drive->name, drive->dn);
if (!drive->present)
if ((drive->dev_flags & IDE_DFLAG_PRESENT) == 0)
goto out;
if (!gtf_count) /* shouldn't be here */
goto out;
......@@ -660,7 +661,8 @@ void ide_acpi_set_state(ide_hwif_t *hwif, int on)
if (!drive->acpidata->obj_handle)
drive->acpidata->obj_handle = ide_acpi_drive_get_handle(drive);
if (drive->acpidata->obj_handle && drive->present) {
if (drive->acpidata->obj_handle &&
(drive->dev_flags & IDE_DFLAG_PRESENT)) {
acpi_bus_set_power(drive->acpidata->obj_handle,
on? ACPI_STATE_D0: ACPI_STATE_D3);
}
......@@ -720,7 +722,7 @@ void ide_acpi_port_init_devices(ide_hwif_t *hwif)
memset(drive->acpidata, 0, sizeof(*drive->acpidata));
if (!drive->present)
if ((drive->dev_flags & IDE_DFLAG_PRESENT) == 0)
continue;
err = taskfile_lib_get_identify(drive, drive->acpidata->idbuff);
......@@ -745,7 +747,7 @@ void ide_acpi_port_init_devices(ide_hwif_t *hwif)
for (i = 0; i < MAX_DRIVES; i++) {
drive = &hwif->drives[i];
if (drive->present)
if (drive->dev_flags & IDE_DFLAG_PRESENT)
/* Execute ACPI startup code */
ide_acpi_exec_tfs(drive);
}
......
......@@ -124,8 +124,8 @@ EXPORT_SYMBOL_GPL(ide_init_pc);
* the current request, so that it will be processed immediately, on the next
* pass through the driver.
*/
void ide_queue_pc_head(ide_drive_t *drive, struct gendisk *disk,
struct ide_atapi_pc *pc, struct request *rq)
static void ide_queue_pc_head(ide_drive_t *drive, struct gendisk *disk,
struct ide_atapi_pc *pc, struct request *rq)
{
blk_rq_init(NULL, rq);
rq->cmd_type = REQ_TYPE_SPECIAL;
......@@ -137,7 +137,6 @@ void ide_queue_pc_head(ide_drive_t *drive, struct gendisk *disk,
rq->cmd[13] = REQ_IDETAPE_PC1;
ide_do_drive_cmd(drive, rq);
}
EXPORT_SYMBOL_GPL(ide_queue_pc_head);
/*
* Add a special packet command request to the tail of the request queue,
......@@ -203,25 +202,80 @@ int ide_set_media_lock(ide_drive_t *drive, struct gendisk *disk, int on)
}
EXPORT_SYMBOL_GPL(ide_set_media_lock);
/* TODO: unify the code thus making some arguments go away */
ide_startstop_t ide_pc_intr(ide_drive_t *drive, struct ide_atapi_pc *pc,
ide_handler_t *handler, unsigned int timeout, ide_expiry_t *expiry,
void (*update_buffers)(ide_drive_t *, struct ide_atapi_pc *),
void (*retry_pc)(ide_drive_t *), void (*dsc_handle)(ide_drive_t *),
int (*io_buffers)(ide_drive_t *, struct ide_atapi_pc *, unsigned, int))
void ide_create_request_sense_cmd(ide_drive_t *drive, struct ide_atapi_pc *pc)
{
ide_init_pc(pc);
pc->c[0] = REQUEST_SENSE;
if (drive->media == ide_floppy) {
pc->c[4] = 255;
pc->req_xfer = 18;
} else {
pc->c[4] = 20;
pc->req_xfer = 20;
}
}
EXPORT_SYMBOL_GPL(ide_create_request_sense_cmd);
/*
* Called when an error was detected during the last packet command.
* We queue a request sense packet command in the head of the request list.
*/
void ide_retry_pc(ide_drive_t *drive, struct gendisk *disk)
{
struct request *rq = &drive->request_sense_rq;
struct ide_atapi_pc *pc = &drive->request_sense_pc;
(void)ide_read_error(drive);
ide_create_request_sense_cmd(drive, pc);
if (drive->media == ide_tape)
set_bit(IDE_AFLAG_IGNORE_DSC, &drive->atapi_flags);
ide_queue_pc_head(drive, disk, pc, rq);
}
EXPORT_SYMBOL_GPL(ide_retry_pc);
int ide_scsi_expiry(ide_drive_t *drive)
{
struct ide_atapi_pc *pc = drive->pc;
debug_log("%s called for %lu at %lu\n", __func__,
pc->scsi_cmd->serial_number, jiffies);
pc->flags |= PC_FLAG_TIMEDOUT;
return 0; /* we do not want the IDE subsystem to retry */
}
EXPORT_SYMBOL_GPL(ide_scsi_expiry);
/*
* This is the usual interrupt handler which will be called during a packet
* command. We will transfer some of the data (as requested by the drive)
* and will re-point interrupt handler to us.
*/
static ide_startstop_t ide_pc_intr(ide_drive_t *drive)
{
struct ide_atapi_pc *pc = drive->pc;
ide_hwif_t *hwif = drive->hwif;
struct request *rq = hwif->hwgroup->rq;
const struct ide_tp_ops *tp_ops = hwif->tp_ops;
xfer_func_t *xferfunc;
unsigned int temp;
ide_expiry_t *expiry;
unsigned int timeout, temp;
u16 bcount;
u8 stat, ireason, scsi = drive->scsi;
u8 stat, ireason, scsi = !!(drive->dev_flags & IDE_DFLAG_SCSI), dsc = 0;
debug_log("Enter %s - interrupt handler\n", __func__);
if (scsi) {
timeout = ide_scsi_get_timeout(pc);
expiry = ide_scsi_expiry;
} else {
timeout = (drive->media == ide_floppy) ? WAIT_FLOPPY_CMD
: WAIT_TAPE_CMD;
expiry = NULL;
}
if (pc->flags & PC_FLAG_TIMEDOUT) {
drive->pc_callback(drive);
drive->pc_callback(drive, 0);
return ide_stopped;
}
......@@ -238,8 +292,8 @@ ide_startstop_t ide_pc_intr(ide_drive_t *drive, struct ide_atapi_pc *pc,
pc->flags |= PC_FLAG_DMA_ERROR;
} else {
pc->xferred = pc->req_xfer;
if (update_buffers)
update_buffers(drive, pc);
if (drive->pc_update_buffers)
drive->pc_update_buffers(drive, pc);
}
debug_log("%s: DMA finished\n", drive->name);
}
......@@ -276,21 +330,19 @@ ide_startstop_t ide_pc_intr(ide_drive_t *drive, struct ide_atapi_pc *pc,
debug_log("[cmd %x]: check condition\n", rq->cmd[0]);
/* Retry operation */
retry_pc(drive);
ide_retry_pc(drive, rq->rq_disk);
/* queued, but not started */
return ide_stopped;
}
cmd_finished:
pc->error = 0;
if ((pc->flags & PC_FLAG_WAIT_FOR_DSC) &&
(stat & ATA_DSC) == 0) {
dsc_handle(drive);
return ide_stopped;
}
if ((pc->flags & PC_FLAG_WAIT_FOR_DSC) && (stat & ATA_DSC) == 0)
dsc = 1;
/* Command finished - Call the callback function */
drive->pc_callback(drive);
drive->pc_callback(drive, dsc);
return ide_stopped;
}
......@@ -336,7 +388,8 @@ ide_startstop_t ide_pc_intr(ide_drive_t *drive, struct ide_atapi_pc *pc,
temp = 0;
if (temp) {
if (pc->sg)
io_buffers(drive, pc, temp, 0);
drive->pc_io_buffers(drive, pc,
temp, 0);
else
tp_ops->input_data(drive, NULL,
pc->cur_pos, temp);
......@@ -348,9 +401,7 @@ ide_startstop_t ide_pc_intr(ide_drive_t *drive, struct ide_atapi_pc *pc,
pc->xferred += temp;
pc->cur_pos += temp;
ide_pad_transfer(drive, 0, bcount - temp);
ide_set_handler(drive, handler, timeout,
expiry);
return ide_started;
goto next_irq;
}
debug_log("The device wants to send us more data than "
"expected - allowing transfer\n");
......@@ -362,7 +413,7 @@ ide_startstop_t ide_pc_intr(ide_drive_t *drive, struct ide_atapi_pc *pc,
if ((drive->media == ide_floppy && !scsi && !pc->buf) ||
(drive->media == ide_tape && !scsi && pc->bh) ||
(scsi && pc->sg)) {
int done = io_buffers(drive, pc, bcount,
int done = drive->pc_io_buffers(drive, pc, bcount,
!!(pc->flags & PC_FLAG_WRITING));
/* FIXME: don't do partial completions */
......@@ -377,12 +428,11 @@ ide_startstop_t ide_pc_intr(ide_drive_t *drive, struct ide_atapi_pc *pc,
debug_log("[cmd %x] transferred %d bytes on that intr.\n",
rq->cmd[0], bcount);
next_irq:
/* And set the interrupt handler again */
ide_set_handler(drive, handler, timeout, expiry);
ide_set_handler(drive, ide_pc_intr, timeout, expiry);
return ide_started;
}
EXPORT_SYMBOL_GPL(ide_pc_intr);
static u8 ide_read_ireason(ide_drive_t *drive)
{
......@@ -418,12 +468,22 @@ static u8 ide_wait_ireason(ide_drive_t *drive, u8 ireason)
return ireason;
}
ide_startstop_t ide_transfer_pc(ide_drive_t *drive, struct ide_atapi_pc *pc,
ide_handler_t *handler, unsigned int timeout,
ide_expiry_t *expiry)
static int ide_delayed_transfer_pc(ide_drive_t *drive)
{
/* Send the actual packet */
drive->hwif->tp_ops->output_data(drive, NULL, drive->pc->c, 12);
/* Timeout for the packet command */
return WAIT_FLOPPY_CMD;
}
static ide_startstop_t ide_transfer_pc(ide_drive_t *drive)
{
struct ide_atapi_pc *pc = drive->pc;
ide_hwif_t *hwif = drive->hwif;
struct request *rq = hwif->hwgroup->rq;
ide_expiry_t *expiry;
unsigned int timeout;
ide_startstop_t startstop;
u8 ireason;
......@@ -434,7 +494,8 @@ ide_startstop_t ide_transfer_pc(ide_drive_t *drive, struct ide_atapi_pc *pc,
}
ireason = ide_read_ireason(drive);
if (drive->media == ide_tape && !drive->scsi)
if (drive->media == ide_tape &&
(drive->dev_flags & IDE_DFLAG_SCSI) == 0)
ireason = ide_wait_ireason(drive, ireason);
if ((ireason & ATAPI_COD) == 0 || (ireason & ATAPI_IO)) {
......@@ -443,8 +504,27 @@ ide_startstop_t ide_transfer_pc(ide_drive_t *drive, struct ide_atapi_pc *pc,
return ide_do_reset(drive);
}
/*
* If necessary schedule the packet transfer to occur 'timeout'
* milliseconds later in ide_delayed_transfer_pc() after the device
* says it's ready for a packet.
*/
if (drive->atapi_flags & IDE_AFLAG_ZIP_DRIVE) {
timeout = drive->pc_delay;
expiry = &ide_delayed_transfer_pc;
} else {
if (drive->dev_flags & IDE_DFLAG_SCSI) {
timeout = ide_scsi_get_timeout(pc);
expiry = ide_scsi_expiry;
} else {
timeout = (drive->media == ide_floppy) ? WAIT_FLOPPY_CMD
: WAIT_TAPE_CMD;
expiry = NULL;
}
}
/* Set the interrupt routine */
ide_set_handler(drive, handler, timeout, expiry);
ide_set_handler(drive, ide_pc_intr, timeout, expiry);
/* Begin DMA, if necessary */
if (pc->flags & PC_FLAG_DMA_OK) {
......@@ -458,22 +538,22 @@ ide_startstop_t ide_transfer_pc(ide_drive_t *drive, struct ide_atapi_pc *pc,
return ide_started;
}
EXPORT_SYMBOL_GPL(ide_transfer_pc);
ide_startstop_t ide_issue_pc(ide_drive_t *drive, struct ide_atapi_pc *pc,
ide_handler_t *handler, unsigned int timeout,
ide_startstop_t ide_issue_pc(ide_drive_t *drive, unsigned int timeout,
ide_expiry_t *expiry)
{
struct ide_atapi_pc *pc = drive->pc;
ide_hwif_t *hwif = drive->hwif;
u32 tf_flags;
u16 bcount;
u8 dma = 0;
u8 scsi = !!(drive->dev_flags & IDE_DFLAG_SCSI);
/* We haven't transferred any data yet */
pc->xferred = 0;
pc->cur_pos = pc->buf;
/* Request to transfer the entire buffer at once */
if (drive->media == ide_tape && !drive->scsi)
if (drive->media == ide_tape && scsi == 0)
bcount = pc->req_xfer;
else
bcount = min(pc->req_xfer, 63 * 1024);
......@@ -483,28 +563,35 @@ ide_startstop_t ide_issue_pc(ide_drive_t *drive, struct ide_atapi_pc *pc,
ide_dma_off(drive);
}
if ((pc->flags & PC_FLAG_DMA_OK) && drive->using_dma) {
if (drive->scsi)
if ((pc->flags & PC_FLAG_DMA_OK) &&
(drive->dev_flags & IDE_DFLAG_USING_DMA)) {
if (scsi)
hwif->sg_mapped = 1;
dma = !hwif->dma_ops->dma_setup(drive);
if (drive->scsi)
drive->dma = !hwif->dma_ops->dma_setup(drive);
if (scsi)
hwif->sg_mapped = 0;
}
if (!dma)
if (!drive->dma)
pc->flags &= ~PC_FLAG_DMA_OK;
ide_pktcmd_tf_load(drive, drive->scsi ? 0 : IDE_TFLAG_OUT_DEVICE,
bcount, dma);
if (scsi)
tf_flags = 0;
else if (drive->media == ide_cdrom || drive->media == ide_optical)
tf_flags = IDE_TFLAG_OUT_NSECT | IDE_TFLAG_OUT_LBAL;
else
tf_flags = IDE_TFLAG_OUT_DEVICE;
ide_pktcmd_tf_load(drive, tf_flags, bcount, drive->dma);
/* Issue the packet command */
if (drive->atapi_flags & IDE_AFLAG_DRQ_INTERRUPT) {
ide_execute_command(drive, ATA_CMD_PACKET, handler,
ide_execute_command(drive, ATA_CMD_PACKET, ide_transfer_pc,
timeout, NULL);
return ide_started;
} else {
ide_execute_pkt_cmd(drive);
return (*handler)(drive);
return ide_transfer_pc(drive);
}
}
EXPORT_SYMBOL_GPL(ide_issue_pc);
This diff has been collapsed.
......@@ -88,7 +88,6 @@ struct cdrom_info {
struct request_sense sense_data;
struct request request_sense_request;
int dma;
unsigned long last_block;
unsigned long start_seek;
......
......@@ -45,21 +45,12 @@
#define IDE_DISK_MINORS 0
#endif
struct ide_disk_obj {
ide_drive_t *drive;
ide_driver_t *driver;
struct gendisk *disk;
struct kref kref;
unsigned int openers; /* protected by BKL for now */
};
#include "ide-disk.h"
static DEFINE_MUTEX(idedisk_ref_mutex);
#define to_ide_disk(obj) container_of(obj, struct ide_disk_obj, kref)
#define ide_disk_g(disk) \
container_of((disk)->private_data, struct ide_disk_obj, driver)
static void ide_disk_release(struct kref *);
static struct ide_disk_obj *ide_disk_get(struct gendisk *disk)
......@@ -140,9 +131,9 @@ static ide_startstop_t __ide_do_rw_disk(ide_drive_t *drive, struct request *rq,
sector_t block)
{
ide_hwif_t *hwif = HWIF(drive);
unsigned int dma = drive->using_dma;
u16 nsectors = (u16)rq->nr_sectors;
u8 lba48 = (drive->addressing == 1) ? 1 : 0;
u8 lba48 = !!(drive->dev_flags & IDE_DFLAG_LBA48);
u8 dma = !!(drive->dev_flags & IDE_DFLAG_USING_DMA);
ide_task_t task;
struct ide_taskfile *tf = &task.tf;
ide_startstop_t rc;
......@@ -162,7 +153,7 @@ static ide_startstop_t __ide_do_rw_disk(ide_drive_t *drive, struct request *rq,
memset(&task, 0, sizeof(task));
task.tf_flags = IDE_TFLAG_TF | IDE_TFLAG_DEVICE;
if (drive->select.b.lba) {
if (drive->dev_flags & IDE_DFLAG_LBA) {
if (lba48) {
pr_debug("%s: LBA=0x%012llx\n", drive->name,
(unsigned long long)block);
......@@ -187,6 +178,8 @@ static ide_startstop_t __ide_do_rw_disk(ide_drive_t *drive, struct request *rq,
tf->lbah = block >>= 8;
tf->device = (block >> 8) & 0xf;
}
tf->device |= ATA_LBA;
} else {
unsigned int sect, head, cyl, track;
......@@ -237,7 +230,7 @@ static ide_startstop_t ide_do_rw_disk(ide_drive_t *drive, struct request *rq,
{
ide_hwif_t *hwif = HWIF(drive);
BUG_ON(drive->blocked);
BUG_ON(drive->dev_flags & IDE_DFLAG_BLOCKED);
if (!blk_fs_request(rq)) {
blk_dump_rq_flags(rq, "ide_do_rw_disk - bad command");
......@@ -384,139 +377,39 @@ static void idedisk_check_hpa(ide_drive_t *drive)
static void init_idedisk_capacity(ide_drive_t *drive)
{
u16 *id = drive->id;
/*
* If this drive supports the Host Protected Area feature set,
* then we may need to change our opinion about the drive's capacity.
*/
int hpa = ata_id_hpa_enabled(id);
int lba;
if (ata_id_lba48_enabled(id)) {
/* drive speaks 48-bit LBA */
drive->select.b.lba = 1;
lba = 1;
drive->capacity64 = ata_id_u64(id, ATA_ID_LBA_CAPACITY_2);
if (hpa)
idedisk_check_hpa(drive);
} else if (ata_id_has_lba(id) && ata_id_is_lba_capacity_ok(id)) {
/* drive speaks 28-bit LBA */
drive->select.b.lba = 1;
lba = 1;
drive->capacity64 = ata_id_u32(id, ATA_ID_LBA_CAPACITY);
if (hpa)
idedisk_check_hpa(drive);
} else {
/* drive speaks boring old 28-bit CHS */
lba = 0;
drive->capacity64 = drive->cyl * drive->head * drive->sect;
}
}
static sector_t idedisk_capacity(ide_drive_t *drive)
{
return drive->capacity64;
}
if (lba) {
drive->dev_flags |= IDE_DFLAG_LBA;
#ifdef CONFIG_IDE_PROC_FS
static int smart_enable(ide_drive_t *drive)
{
ide_task_t args;
struct ide_taskfile *tf = &args.tf;
memset(&args, 0, sizeof(ide_task_t));
tf->feature = ATA_SMART_ENABLE;
tf->lbam = ATA_SMART_LBAM_PASS;
tf->lbah = ATA_SMART_LBAH_PASS;
tf->command = ATA_CMD_SMART;
args.tf_flags = IDE_TFLAG_TF | IDE_TFLAG_DEVICE;
return ide_no_data_taskfile(drive, &args);
}
static int get_smart_data(ide_drive_t *drive, u8 *buf, u8 sub_cmd)
{
ide_task_t args;
struct ide_taskfile *tf = &args.tf;
memset(&args, 0, sizeof(ide_task_t));
tf->feature = sub_cmd;
tf->nsect = 0x01;
tf->lbam = ATA_SMART_LBAM_PASS;
tf->lbah = ATA_SMART_LBAH_PASS;
tf->command = ATA_CMD_SMART;
args.tf_flags = IDE_TFLAG_TF | IDE_TFLAG_DEVICE;
args.data_phase = TASKFILE_IN;
(void) smart_enable(drive);
return ide_raw_taskfile(drive, &args, buf, 1);
}
static int proc_idedisk_read_cache
(char *page, char **start, off_t off, int count, int *eof, void *data)
{
ide_drive_t *drive = (ide_drive_t *) data;
char *out = page;
int len;
if (drive->id_read)
len = sprintf(out, "%i\n", drive->id[ATA_ID_BUF_SIZE] / 2);
else
len = sprintf(out, "(none)\n");
PROC_IDE_READ_RETURN(page, start, off, count, eof, len);
}
static int proc_idedisk_read_capacity
(char *page, char **start, off_t off, int count, int *eof, void *data)
{
ide_drive_t*drive = (ide_drive_t *)data;
int len;
len = sprintf(page, "%llu\n", (long long)idedisk_capacity(drive));
PROC_IDE_READ_RETURN(page, start, off, count, eof, len);
}
static int proc_idedisk_read_smart(char *page, char **start, off_t off,
int count, int *eof, void *data, u8 sub_cmd)
{
ide_drive_t *drive = (ide_drive_t *)data;
int len = 0, i = 0;
if (get_smart_data(drive, page, sub_cmd) == 0) {
unsigned short *val = (unsigned short *) page;
char *out = (char *)val + SECTOR_SIZE;
page = out;
do {
out += sprintf(out, "%04x%c", le16_to_cpu(*val),
(++i & 7) ? ' ' : '\n');
val += 1;
} while (i < SECTOR_SIZE / 2);
len = out - page;
/*
* If this device supports the Host Protected Area feature set,
* then we may need to change our opinion about its capacity.
*/
if (ata_id_hpa_enabled(id))
idedisk_check_hpa(drive);
}
PROC_IDE_READ_RETURN(page, start, off, count, eof, len);
}
static int proc_idedisk_read_sv
(char *page, char **start, off_t off, int count, int *eof, void *data)
sector_t ide_disk_capacity(ide_drive_t *drive)
{
return proc_idedisk_read_smart(page, start, off, count, eof, data,
ATA_SMART_READ_VALUES);
}
static int proc_idedisk_read_st
(char *page, char **start, off_t off, int count, int *eof, void *data)
{
return proc_idedisk_read_smart(page, start, off, count, eof, data,
ATA_SMART_READ_THRESHOLDS);
return drive->capacity64;
}
static ide_proc_entry_t idedisk_proc[] = {
{ "cache", S_IFREG|S_IRUGO, proc_idedisk_read_cache, NULL },
{ "capacity", S_IFREG|S_IRUGO, proc_idedisk_read_capacity, NULL },
{ "geometry", S_IFREG|S_IRUGO, proc_ide_read_geometry, NULL },
{ "smart_values", S_IFREG|S_IRUSR, proc_idedisk_read_sv, NULL },
{ "smart_thresholds", S_IFREG|S_IRUSR, proc_idedisk_read_st, NULL },
{ NULL, 0, NULL, NULL }
};
#endif /* CONFIG_IDE_PROC_FS */
static void idedisk_prepare_flush(struct request_queue *q, struct request *rq)
{
ide_drive_t *drive = q->queuedata;
......@@ -568,25 +461,43 @@ static int set_multcount(ide_drive_t *drive, int arg)
return (drive->mult_count == arg) ? 0 : -EIO;
}
ide_devset_get(nowerr, nowerr);
ide_devset_get_flag(nowerr, IDE_DFLAG_NOWERR);
static int set_nowerr(ide_drive_t *drive, int arg)
{
if (arg < 0 || arg > 1)
return -EINVAL;
drive->nowerr = arg;
if (arg)
drive->dev_flags |= IDE_DFLAG_NOWERR;
else
drive->dev_flags &= ~IDE_DFLAG_NOWERR;
drive->bad_wstat = arg ? BAD_R_STAT : BAD_W_STAT;
return 0;
}
static int ide_do_setfeature(ide_drive_t *drive, u8 feature, u8 nsect)
{
ide_task_t task;
memset(&task, 0, sizeof(task));
task.tf.feature = feature;
task.tf.nsect = nsect;
task.tf.command = ATA_CMD_SET_FEATURES;
task.tf_flags = IDE_TFLAG_TF | IDE_TFLAG_DEVICE;
return ide_no_data_taskfile(drive, &task);
}
static void update_ordered(ide_drive_t *drive)
{
u16 *id = drive->id;
unsigned ordered = QUEUE_ORDERED_NONE;
prepare_flush_fn *prep_fn = NULL;
if (drive->wcache) {
if (drive->dev_flags & IDE_DFLAG_WCACHE) {
unsigned long long capacity;
int barrier;
/*
......@@ -597,9 +508,11 @@ static void update_ordered(ide_drive_t *drive)
* time we have trimmed the drive capacity if LBA48 is
* not available so we don't need to recheck that.
*/
capacity = idedisk_capacity(drive);
barrier = ata_id_flush_enabled(id) && !drive->noflush &&
(drive->addressing == 0 || capacity <= (1ULL << 28) ||
capacity = ide_disk_capacity(drive);
barrier = ata_id_flush_enabled(id) &&
(drive->dev_flags & IDE_DFLAG_NOFLUSH) == 0 &&
((drive->dev_flags & IDE_DFLAG_LBA48) == 0 ||
capacity <= (1ULL << 28) ||
ata_id_flush_ext_enabled(id));
printk(KERN_INFO "%s: cache flushes %ssupported\n",
......@@ -615,25 +528,24 @@ static void update_ordered(ide_drive_t *drive)
blk_queue_ordered(drive->queue, ordered, prep_fn);
}
ide_devset_get(wcache, wcache);
ide_devset_get_flag(wcache, IDE_DFLAG_WCACHE);
static int set_wcache(ide_drive_t *drive, int arg)
{
ide_task_t args;
int err = 1;
if (arg < 0 || arg > 1)
return -EINVAL;
if (ata_id_flush_enabled(drive->id)) {
memset(&args, 0, sizeof(ide_task_t));
args.tf.feature = arg ?
SETFEATURES_WC_ON : SETFEATURES_WC_OFF;
args.tf.command = ATA_CMD_SET_FEATURES;
args.tf_flags = IDE_TFLAG_TF | IDE_TFLAG_DEVICE;
err = ide_no_data_taskfile(drive, &args);
if (err == 0)
drive->wcache = arg;
err = ide_do_setfeature(drive,
arg ? SETFEATURES_WC_ON : SETFEATURES_WC_OFF, 0);
if (err == 0) {
if (arg)
drive->dev_flags |= IDE_DFLAG_WCACHE;
else
drive->dev_flags &= ~IDE_DFLAG_WCACHE;
}
}
update_ordered(drive);
......@@ -658,22 +570,18 @@ ide_devset_get(acoustic, acoustic);
static int set_acoustic(ide_drive_t *drive, int arg)
{
ide_task_t args;
if (arg < 0 || arg > 254)
return -EINVAL;
memset(&args, 0, sizeof(ide_task_t));
args.tf.feature = arg ? SETFEATURES_AAM_ON : SETFEATURES_AAM_OFF;
args.tf.nsect = arg;
args.tf.command = ATA_CMD_SET_FEATURES;
args.tf_flags = IDE_TFLAG_TF | IDE_TFLAG_DEVICE;
ide_no_data_taskfile(drive, &args);
ide_do_setfeature(drive,
arg ? SETFEATURES_AAM_ON : SETFEATURES_AAM_OFF, arg);
drive->acoustic = arg;
return 0;
}
ide_devset_get(addressing, addressing);
ide_devset_get_flag(addressing, IDE_DFLAG_LBA48);
/*
* drive->addressing:
......@@ -686,49 +594,27 @@ static int set_addressing(ide_drive_t *drive, int arg)
if (arg < 0 || arg > 2)
return -EINVAL;
drive->addressing = 0;
if (drive->hwif->host_flags & IDE_HFLAG_NO_LBA48)
return 0;
if (ata_id_lba48_enabled(drive->id) == 0)
if (arg && ((drive->hwif->host_flags & IDE_HFLAG_NO_LBA48) ||
ata_id_lba48_enabled(drive->id) == 0))
return -EIO;
drive->addressing = arg;
if (arg == 2)
arg = 0;
if (arg)
drive->dev_flags |= IDE_DFLAG_LBA48;
else
drive->dev_flags &= ~IDE_DFLAG_LBA48;
return 0;
}
ide_devset_rw(acoustic, acoustic);
ide_devset_rw(address, addressing);
ide_devset_rw(multcount, multcount);
ide_devset_rw(wcache, wcache);
ide_devset_rw_sync(nowerr, nowerr);
ide_ext_devset_rw(acoustic, acoustic);
ide_ext_devset_rw(address, addressing);
ide_ext_devset_rw(multcount, multcount);
ide_ext_devset_rw(wcache, wcache);
#ifdef CONFIG_IDE_PROC_FS
ide_devset_rw_field(bios_cyl, bios_cyl);
ide_devset_rw_field(bios_head, bios_head);
ide_devset_rw_field(bios_sect, bios_sect);
ide_devset_rw_field(failures, failures);
ide_devset_rw_field(lun, lun);
ide_devset_rw_field(max_failures, max_failures);
static const struct ide_proc_devset idedisk_settings[] = {
IDE_PROC_DEVSET(acoustic, 0, 254),
IDE_PROC_DEVSET(address, 0, 2),
IDE_PROC_DEVSET(bios_cyl, 0, 65535),
IDE_PROC_DEVSET(bios_head, 0, 255),
IDE_PROC_DEVSET(bios_sect, 0, 63),
IDE_PROC_DEVSET(failures, 0, 65535),
IDE_PROC_DEVSET(lun, 0, 7),
IDE_PROC_DEVSET(max_failures, 0, 65535),
IDE_PROC_DEVSET(multcount, 0, 16),
IDE_PROC_DEVSET(nowerr, 0, 1),
IDE_PROC_DEVSET(wcache, 0, 1),
{ 0 },
};
#endif
ide_ext_devset_rw_sync(nowerr, nowerr);
static void idedisk_setup(ide_drive_t *drive)
{
......@@ -740,20 +626,20 @@ static void idedisk_setup(ide_drive_t *drive)
ide_proc_register_driver(drive, idkp->driver);
if (drive->id_read == 0)
if ((drive->dev_flags & IDE_DFLAG_ID_READ) == 0)
return;
if (drive->removable) {
if (drive->dev_flags & IDE_DFLAG_REMOVABLE) {
/*
* Removable disks (eg. SYQUEST); ignore 'WD' drives
*/
if (m[0] != 'W' || m[1] != 'D')
drive->doorlocking = 1;
drive->dev_flags |= IDE_DFLAG_DOORLOCKING;
}
(void)set_addressing(drive, 1);
if (drive->addressing == 1) {
if (drive->dev_flags & IDE_DFLAG_LBA48) {
int max_s = 2048;
if (max_s > hwif->rqsize)
......@@ -769,7 +655,8 @@ static void idedisk_setup(ide_drive_t *drive)
init_idedisk_capacity(drive);
/* limit drive capacity to 137GB if LBA48 cannot be used */
if (drive->addressing == 0 && drive->capacity64 > 1ULL << 28) {
if ((drive->dev_flags & IDE_DFLAG_LBA48) == 0 &&
drive->capacity64 > 1ULL << 28) {
printk(KERN_WARNING "%s: cannot use LBA48 - full capacity "
"%llu sectors (%llu MB)\n",
drive->name, (unsigned long long)drive->capacity64,
......@@ -777,22 +664,23 @@ static void idedisk_setup(ide_drive_t *drive)
drive->capacity64 = 1ULL << 28;
}
if ((hwif->host_flags & IDE_HFLAG_NO_LBA48_DMA) && drive->addressing) {
if ((hwif->host_flags & IDE_HFLAG_NO_LBA48_DMA) &&
(drive->dev_flags & IDE_DFLAG_LBA48)) {
if (drive->capacity64 > 1ULL << 28) {
printk(KERN_INFO "%s: cannot use LBA48 DMA - PIO mode"
" will be used for accessing sectors "
"> %u\n", drive->name, 1 << 28);
} else
drive->addressing = 0;
drive->dev_flags &= ~IDE_DFLAG_LBA48;
}
/*
* if possible, give fdisk access to more of the drive,
* by correcting bios_cyls:
*/
capacity = idedisk_capacity(drive);
capacity = ide_disk_capacity(drive);
if (!drive->forced_geom) {
if ((drive->dev_flags & IDE_DFLAG_FORCED_GEOM) == 0) {
if (ata_id_lba48_enabled(drive->id)) {
/* compatibility */
drive->bios_sect = 63;
......@@ -827,14 +715,15 @@ static void idedisk_setup(ide_drive_t *drive)
/* write cache enabled? */
if ((id[ATA_ID_CSFO] & 1) || ata_id_wcache_enabled(id))
drive->wcache = 1;
drive->dev_flags |= IDE_DFLAG_WCACHE;
set_wcache(drive, 1);
}
static void ide_cacheflush_p(ide_drive_t *drive)
{
if (!drive->wcache || ata_id_flush_enabled(drive->id) == 0)
if (ata_id_flush_enabled(drive->id) == 0 ||
(drive->dev_flags & IDE_DFLAG_WCACHE) == 0)
return;
if (do_idedisk_flushcache(drive))
......@@ -918,13 +807,12 @@ static ide_driver_t idedisk_driver = {
.resume = ide_disk_resume,
.shutdown = ide_device_shutdown,
.version = IDEDISK_VERSION,
.media = ide_disk,
.do_request = ide_do_rw_disk,
.end_request = ide_end_request,
.error = __ide_error,
#ifdef CONFIG_IDE_PROC_FS
.proc = idedisk_proc,
.settings = idedisk_settings,
.proc = ide_disk_proc,
.settings = ide_disk_settings,
#endif
};
......@@ -953,15 +841,16 @@ static int idedisk_open(struct inode *inode, struct file *filp)
idkp->openers++;
if (drive->removable && idkp->openers == 1) {
if ((drive->dev_flags & IDE_DFLAG_REMOVABLE) && idkp->openers == 1) {
check_disk_change(inode->i_bdev);
/*
* Ignore the return code from door_lock,
* since the open() has already succeeded,
* and the door_lock is irrelevant at this point.
*/
if (drive->doorlocking && idedisk_set_doorlock(drive, 1))
drive->doorlocking = 0;
if ((drive->dev_flags & IDE_DFLAG_DOORLOCKING) &&
idedisk_set_doorlock(drive, 1))
drive->dev_flags &= ~IDE_DFLAG_DOORLOCKING;
}
return 0;
}
......@@ -975,9 +864,10 @@ static int idedisk_release(struct inode *inode, struct file *filp)
if (idkp->openers == 1)
ide_cacheflush_p(drive);
if (drive->removable && idkp->openers == 1) {
if (drive->doorlocking && idedisk_set_doorlock(drive, 0))
drive->doorlocking = 0;
if ((drive->dev_flags & IDE_DFLAG_REMOVABLE) && idkp->openers == 1) {
if ((drive->dev_flags & IDE_DFLAG_DOORLOCKING) &&
idedisk_set_doorlock(drive, 0))
drive->dev_flags &= ~IDE_DFLAG_DOORLOCKING;
}
idkp->openers--;
......@@ -998,48 +888,25 @@ static int idedisk_getgeo(struct block_device *bdev, struct hd_geometry *geo)
return 0;
}
static const struct ide_ioctl_devset ide_disk_ioctl_settings[] = {
{ HDIO_GET_ADDRESS, HDIO_SET_ADDRESS, &ide_devset_address },
{ HDIO_GET_MULTCOUNT, HDIO_SET_MULTCOUNT, &ide_devset_multcount },
{ HDIO_GET_NOWERR, HDIO_SET_NOWERR, &ide_devset_nowerr },
{ HDIO_GET_WCACHE, HDIO_SET_WCACHE, &ide_devset_wcache },
{ HDIO_GET_ACOUSTIC, HDIO_SET_ACOUSTIC, &ide_devset_acoustic },
{ 0 }
};
static int idedisk_ioctl(struct inode *inode, struct file *file,
unsigned int cmd, unsigned long arg)
{
struct block_device *bdev = inode->i_bdev;
struct ide_disk_obj *idkp = ide_disk_g(bdev->bd_disk);
ide_drive_t *drive = idkp->drive;
int err;
err = ide_setting_ioctl(drive, bdev, cmd, arg, ide_disk_ioctl_settings);
if (err != -EOPNOTSUPP)
return err;
return generic_ide_ioctl(drive, file, bdev, cmd, arg);
}
static int idedisk_media_changed(struct gendisk *disk)
{
struct ide_disk_obj *idkp = ide_disk_g(disk);
ide_drive_t *drive = idkp->drive;
/* do not scan partitions twice if this is a removable device */
if (drive->attach) {
drive->attach = 0;
if (drive->dev_flags & IDE_DFLAG_ATTACH) {
drive->dev_flags &= ~IDE_DFLAG_ATTACH;
return 0;
}
/* if removable, always assume it was changed */
return drive->removable;
return !!(drive->dev_flags & IDE_DFLAG_REMOVABLE);
}
static int idedisk_revalidate_disk(struct gendisk *disk)
{
struct ide_disk_obj *idkp = ide_disk_g(disk);
set_capacity(disk, idedisk_capacity(idkp->drive));
set_capacity(disk, ide_disk_capacity(idkp->drive));
return 0;
}
......@@ -1047,7 +914,7 @@ static struct block_device_operations idedisk_ops = {
.owner = THIS_MODULE,
.open = idedisk_open,
.release = idedisk_release,
.ioctl = idedisk_ioctl,
.ioctl = ide_disk_ioctl,
.getgeo = idedisk_getgeo,
.media_changed = idedisk_media_changed,
.revalidate_disk = idedisk_revalidate_disk
......@@ -1088,19 +955,20 @@ static int ide_disk_probe(ide_drive_t *drive)
drive->driver_data = idkp;
idedisk_setup(drive);
if ((!drive->head || drive->head > 16) && !drive->select.b.lba) {
if ((drive->dev_flags & IDE_DFLAG_LBA) == 0 &&
(drive->head == 0 || drive->head > 16)) {
printk(KERN_ERR "%s: INVALID GEOMETRY: %d PHYSICAL HEADS?\n",
drive->name, drive->head);
drive->attach = 0;
drive->dev_flags &= ~IDE_DFLAG_ATTACH;
} else
drive->attach = 1;
drive->dev_flags |= IDE_DFLAG_ATTACH;
g->minors = IDE_DISK_MINORS;
g->driverfs_dev = &drive->gendev;
g->flags |= GENHD_FL_EXT_DEVT;
if (drive->removable)
g->flags |= GENHD_FL_REMOVABLE;
set_capacity(g, idedisk_capacity(drive));
if (drive->dev_flags & IDE_DFLAG_REMOVABLE)
g->flags = GENHD_FL_REMOVABLE;
set_capacity(g, ide_disk_capacity(drive));
g->fops = &idedisk_ops;
add_disk(g);
return 0;
......@@ -1122,6 +990,7 @@ static int __init idedisk_init(void)
}
MODULE_ALIAS("ide:*m-disk*");
MODULE_ALIAS("ide-disk");
module_init(idedisk_init);
module_exit(idedisk_exit);
MODULE_LICENSE("GPL");
#ifndef __IDE_DISK_H
#define __IDE_DISK_H
struct ide_disk_obj {
ide_drive_t *drive;
ide_driver_t *driver;
struct gendisk *disk;
struct kref kref;
unsigned int openers; /* protected by BKL for now */
};
#define ide_disk_g(disk) \
container_of((disk)->private_data, struct ide_disk_obj, driver)
/* ide-disk.c */
sector_t ide_disk_capacity(ide_drive_t *);
ide_decl_devset(address);
ide_decl_devset(multcount);
ide_decl_devset(nowerr);
ide_decl_devset(wcache);
ide_decl_devset(acoustic);
/* ide-disk_ioctl.c */
int ide_disk_ioctl(struct inode *, struct file *, unsigned int, unsigned long);
#ifdef CONFIG_IDE_PROC_FS
/* ide-disk_proc.c */
extern ide_proc_entry_t ide_disk_proc[];
extern const struct ide_proc_devset ide_disk_settings[];
#endif
#endif /* __IDE_DISK_H */
#include <linux/kernel.h>
#include <linux/ide.h>
#include <linux/hdreg.h>
#include "ide-disk.h"
static const struct ide_ioctl_devset ide_disk_ioctl_settings[] = {
{ HDIO_GET_ADDRESS, HDIO_SET_ADDRESS, &ide_devset_address },
{ HDIO_GET_MULTCOUNT, HDIO_SET_MULTCOUNT, &ide_devset_multcount },
{ HDIO_GET_NOWERR, HDIO_SET_NOWERR, &ide_devset_nowerr },
{ HDIO_GET_WCACHE, HDIO_SET_WCACHE, &ide_devset_wcache },
{ HDIO_GET_ACOUSTIC, HDIO_SET_ACOUSTIC, &ide_devset_acoustic },
{ 0 }
};
int ide_disk_ioctl(struct inode *inode, struct file *file,
unsigned int cmd, unsigned long arg)
{
struct block_device *bdev = inode->i_bdev;
struct ide_disk_obj *idkp = ide_disk_g(bdev->bd_disk);
ide_drive_t *drive = idkp->drive;
int err;
err = ide_setting_ioctl(drive, bdev, cmd, arg, ide_disk_ioctl_settings);
if (err != -EOPNOTSUPP)
return err;
return generic_ide_ioctl(drive, file, bdev, cmd, arg);
}
#include <linux/kernel.h>
#include <linux/ide.h>
#include <linux/hdreg.h>
#include "ide-disk.h"
static int smart_enable(ide_drive_t *drive)
{
ide_task_t args;
struct ide_taskfile *tf = &args.tf;
memset(&args, 0, sizeof(ide_task_t));
tf->feature = ATA_SMART_ENABLE;
tf->lbam = ATA_SMART_LBAM_PASS;
tf->lbah = ATA_SMART_LBAH_PASS;
tf->command = ATA_CMD_SMART;
args.tf_flags = IDE_TFLAG_TF | IDE_TFLAG_DEVICE;
return ide_no_data_taskfile(drive, &args);
}
static int get_smart_data(ide_drive_t *drive, u8 *buf, u8 sub_cmd)
{
ide_task_t args;
struct ide_taskfile *tf = &args.tf;
memset(&args, 0, sizeof(ide_task_t));
tf->feature = sub_cmd;
tf->nsect = 0x01;
tf->lbam = ATA_SMART_LBAM_PASS;
tf->lbah = ATA_SMART_LBAH_PASS;
tf->command = ATA_CMD_SMART;
args.tf_flags = IDE_TFLAG_TF | IDE_TFLAG_DEVICE;
args.data_phase = TASKFILE_IN;
(void) smart_enable(drive);
return ide_raw_taskfile(drive, &args, buf, 1);
}
static int proc_idedisk_read_cache
(char *page, char **start, off_t off, int count, int *eof, void *data)
{
ide_drive_t *drive = (ide_drive_t *) data;
char *out = page;
int len;
if (drive->dev_flags & IDE_DFLAG_ID_READ)
len = sprintf(out, "%i\n", drive->id[ATA_ID_BUF_SIZE] / 2);
else
len = sprintf(out, "(none)\n");
PROC_IDE_READ_RETURN(page, start, off, count, eof, len);
}
static int proc_idedisk_read_capacity
(char *page, char **start, off_t off, int count, int *eof, void *data)
{
ide_drive_t*drive = (ide_drive_t *)data;
int len;
len = sprintf(page, "%llu\n", (long long)ide_disk_capacity(drive));
PROC_IDE_READ_RETURN(page, start, off, count, eof, len);
}
static int proc_idedisk_read_smart(char *page, char **start, off_t off,
int count, int *eof, void *data, u8 sub_cmd)
{
ide_drive_t *drive = (ide_drive_t *)data;
int len = 0, i = 0;
if (get_smart_data(drive, page, sub_cmd) == 0) {
unsigned short *val = (unsigned short *) page;
char *out = (char *)val + SECTOR_SIZE;
page = out;
do {
out += sprintf(out, "%04x%c", le16_to_cpu(*val),
(++i & 7) ? ' ' : '\n');
val += 1;
} while (i < SECTOR_SIZE / 2);
len = out - page;
}
PROC_IDE_READ_RETURN(page, start, off, count, eof, len);
}
static int proc_idedisk_read_sv
(char *page, char **start, off_t off, int count, int *eof, void *data)
{
return proc_idedisk_read_smart(page, start, off, count, eof, data,
ATA_SMART_READ_VALUES);
}
static int proc_idedisk_read_st
(char *page, char **start, off_t off, int count, int *eof, void *data)
{
return proc_idedisk_read_smart(page, start, off, count, eof, data,
ATA_SMART_READ_THRESHOLDS);
}
ide_proc_entry_t ide_disk_proc[] = {
{ "cache", S_IFREG|S_IRUGO, proc_idedisk_read_cache, NULL },
{ "capacity", S_IFREG|S_IRUGO, proc_idedisk_read_capacity, NULL },
{ "geometry", S_IFREG|S_IRUGO, proc_ide_read_geometry, NULL },
{ "smart_values", S_IFREG|S_IRUSR, proc_idedisk_read_sv, NULL },
{ "smart_thresholds", S_IFREG|S_IRUSR, proc_idedisk_read_st, NULL },
{ NULL, 0, NULL, NULL }
};
ide_devset_rw_field(bios_cyl, bios_cyl);
ide_devset_rw_field(bios_head, bios_head);
ide_devset_rw_field(bios_sect, bios_sect);
ide_devset_rw_field(failures, failures);
ide_devset_rw_field(lun, lun);
ide_devset_rw_field(max_failures, max_failures);
const struct ide_proc_devset ide_disk_settings[] = {
IDE_PROC_DEVSET(acoustic, 0, 254),
IDE_PROC_DEVSET(address, 0, 2),
IDE_PROC_DEVSET(bios_cyl, 0, 65535),
IDE_PROC_DEVSET(bios_head, 0, 255),
IDE_PROC_DEVSET(bios_sect, 0, 63),
IDE_PROC_DEVSET(failures, 0, 65535),
IDE_PROC_DEVSET(lun, 0, 7),
IDE_PROC_DEVSET(max_failures, 0, 65535),
IDE_PROC_DEVSET(multcount, 0, 16),
IDE_PROC_DEVSET(nowerr, 0, 1),
IDE_PROC_DEVSET(wcache, 0, 1),
{ 0 },
};
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/ide.h>
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>
#include <linux/io.h>
/**
* config_drive_for_dma - attempt to activate IDE DMA
* @drive: the drive to place in DMA mode
*
* If the drive supports at least mode 2 DMA or UDMA of any kind
* then attempt to place it into DMA mode. Drives that are known to
* support DMA but predate the DMA properties or that are known
* to have DMA handling bugs are also set up appropriately based
* on the good/bad drive lists.
*/
int config_drive_for_dma(ide_drive_t *drive)
{
ide_hwif_t *hwif = drive->hwif;
u16 *id = drive->id;
if (drive->media != ide_disk) {
if (hwif->host_flags & IDE_HFLAG_NO_ATAPI_DMA)
return 0;
}
/*
* Enable DMA on any drive that has
* UltraDMA (mode 0/1/2/3/4/5/6) enabled
*/
if ((id[ATA_ID_FIELD_VALID] & 4) &&
((id[ATA_ID_UDMA_MODES] >> 8) & 0x7f))
return 1;
/*
* Enable DMA on any drive that has mode2 DMA
* (multi or single) enabled
*/
if (id[ATA_ID_FIELD_VALID] & 2) /* regular DMA */
if ((id[ATA_ID_MWDMA_MODES] & 0x404) == 0x404 ||
(id[ATA_ID_SWDMA_MODES] & 0x404) == 0x404)
return 1;
/* Consult the list of known "good" drives */
if (ide_dma_good_drive(drive))
return 1;
return 0;
}
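
For readers who do not have the identify-word layout memorized, the
checks performed by config_drive_for_dma() above can be exercised in
isolation. The sketch below is illustrative only: dma_enabled_in_id()
and the sample words are invented, the whitelist lookup done by
ide_dma_good_drive() is omitted, and only the word offsets (53, 62,
63, 88) and the bit tests mirror the function above.

#include <stdio.h>

#define ID_FIELD_VALID	53	/* validity bits for later identify words */
#define ID_SWDMA_MODES	62	/* SWDMA modes supported/selected */
#define ID_MWDMA_MODES	63	/* MWDMA modes supported/selected */
#define ID_UDMA_MODES	88	/* UDMA modes supported/selected */

static int dma_enabled_in_id(const unsigned short *id)
{
	/* word 88 valid and some UDMA mode currently enabled (bits 8..14)? */
	if ((id[ID_FIELD_VALID] & 4) && ((id[ID_UDMA_MODES] >> 8) & 0x7f))
		return 1;

	/*
	 * Otherwise look for mode 2 MWDMA/SWDMA: supported (bit 2) and
	 * selected (bit 10), i.e. the 0x404 pattern tested above.
	 */
	if (id[ID_FIELD_VALID] & 2)
		if ((id[ID_MWDMA_MODES] & 0x404) == 0x404 ||
		    (id[ID_SWDMA_MODES] & 0x404) == 0x404)
			return 1;

	return 0;
}

int main(void)
{
	unsigned short id[256] = { 0 };

	id[ID_FIELD_VALID] = 0x0006;	/* words 64-70 and word 88 valid */
	id[ID_UDMA_MODES]  = 0x203f;	/* UDMA 0-5 supported, UDMA5 selected */

	printf("DMA %s\n", dma_enabled_in_id(id) ? "usable" : "not usable");
	return 0;
}
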
/**
* ide_dma_host_set - Enable/disable DMA on a host
* @drive: drive to control
*
* Enable/disable DMA on an IDE controller following generic
* bus-mastering IDE controller behaviour.
*/
void ide_dma_host_set(ide_drive_t *drive, int on)
{
ide_hwif_t *hwif = drive->hwif;
u8 unit = drive->dn & 1;
u8 dma_stat = hwif->tp_ops->read_sff_dma_status(hwif);
if (on)
dma_stat |= (1 << (5 + unit));
else
dma_stat &= ~(1 << (5 + unit));
if (hwif->host_flags & IDE_HFLAG_MMIO)
writeb(dma_stat,
(void __iomem *)(hwif->dma_base + ATA_DMA_STATUS));
else
outb(dma_stat, hwif->dma_base + ATA_DMA_STATUS);
}
EXPORT_SYMBOL_GPL(ide_dma_host_set);
/**
* ide_build_dmatable - build IDE DMA table
*
* ide_build_dmatable() prepares a dma request. We map the command
* to get the pci bus addresses of the buffers and then build up
* the PRD table that the IDE layer wants to be fed.
*
* Most chipsets correctly interpret a length of 0x0000 as 64KB,
* but at least one (e.g. CS5530) misinterprets it as zero (!).
* So we break the 64KB entry into two 32KB entries instead.
*
* Returns the number of built PRD entries if all went okay,
* returns 0 otherwise.
*
* May also be invoked from trm290.c
*/
int ide_build_dmatable(ide_drive_t *drive, struct request *rq)
{
ide_hwif_t *hwif = drive->hwif;
__le32 *table = (__le32 *)hwif->dmatable_cpu;
unsigned int is_trm290 = (hwif->chipset == ide_trm290) ? 1 : 0;
unsigned int count = 0;
int i;
struct scatterlist *sg;
hwif->sg_nents = ide_build_sglist(drive, rq);
if (hwif->sg_nents == 0)
return 0;
for_each_sg(hwif->sg_table, sg, hwif->sg_nents, i) {
u32 cur_addr, cur_len, xcount, bcount;
cur_addr = sg_dma_address(sg);
cur_len = sg_dma_len(sg);
/*
* Fill in the dma table, without crossing any 64kB boundaries.
* Most hardware requires 16-bit alignment of all blocks,
* but the trm290 requires 32-bit alignment.
*/
while (cur_len) {
if (count++ >= PRD_ENTRIES)
goto use_pio_instead;
bcount = 0x10000 - (cur_addr & 0xffff);
if (bcount > cur_len)
bcount = cur_len;
*table++ = cpu_to_le32(cur_addr);
xcount = bcount & 0xffff;
if (is_trm290)
xcount = ((xcount >> 2) - 1) << 16;
if (xcount == 0x0000) {
if (count++ >= PRD_ENTRIES)
goto use_pio_instead;
*table++ = cpu_to_le32(0x8000);
*table++ = cpu_to_le32(cur_addr + 0x8000);
xcount = 0x8000;
}
*table++ = cpu_to_le32(xcount);
cur_addr += bcount;
cur_len -= bcount;
}
}
if (count) {
if (!is_trm290)
*--table |= cpu_to_le32(0x80000000);
return count;
}
use_pio_instead:
printk(KERN_ERR "%s: %s\n", drive->name,
count ? "DMA table too small" : "empty DMA table?");
ide_destroy_dmatable(drive);
return 0; /* revert to PIO for this request */
}
EXPORT_SYMBOL_GPL(ide_build_dmatable);
/**
* ide_dma_setup - begin a DMA phase
* @drive: target device
*
* Build an IDE DMA PRD (IDE speak for scatter gather table)
* and then set up the DMA transfer registers for a device
* that follows generic IDE PCI DMA behaviour. Controllers can
* override this function if they need to
*
* Returns 0 on success. If a PIO fallback is required then 1
* is returned.
*/
int ide_dma_setup(ide_drive_t *drive)
{
ide_hwif_t *hwif = drive->hwif;
struct request *rq = hwif->hwgroup->rq;
unsigned int reading;
u8 mmio = (hwif->host_flags & IDE_HFLAG_MMIO) ? 1 : 0;
u8 dma_stat;
if (rq_data_dir(rq))
reading = 0;
else
reading = 1 << 3;
/* fall back to pio! */
if (!ide_build_dmatable(drive, rq)) {
ide_map_sg(drive, rq);
return 1;
}
/* PRD table */
if (hwif->host_flags & IDE_HFLAG_MMIO)
writel(hwif->dmatable_dma,
(void __iomem *)(hwif->dma_base + ATA_DMA_TABLE_OFS));
else
outl(hwif->dmatable_dma, hwif->dma_base + ATA_DMA_TABLE_OFS);
/* specify r/w */
if (mmio)
writeb(reading, (void __iomem *)(hwif->dma_base + ATA_DMA_CMD));
else
outb(reading, hwif->dma_base + ATA_DMA_CMD);
/* read DMA status for INTR & ERROR flags */
dma_stat = hwif->tp_ops->read_sff_dma_status(hwif);
/* clear INTR & ERROR flags */
if (mmio)
writeb(dma_stat | 6,
(void __iomem *)(hwif->dma_base + ATA_DMA_STATUS));
else
outb(dma_stat | 6, hwif->dma_base + ATA_DMA_STATUS);
drive->waiting_for_dma = 1;
return 0;
}
EXPORT_SYMBOL_GPL(ide_dma_setup);
/**
* dma_timer_expiry - handle a DMA timeout
* @drive: Drive that timed out
*
* An IDE DMA transfer timed out. In the event of an error we ask
* the driver to resolve the problem, if a DMA transfer is still
* in progress we continue to wait (arguably we need to add a
* secondary 'I don't care what the drive thinks' timeout here)
* Finally if we have an interrupt we let it complete the I/O.
* But only one time - we clear expiry and if it's still not
* completed after WAIT_CMD, we error and retry in PIO.
* This can occur if an interrupt is lost or due to hang or bugs.
*/
static int dma_timer_expiry(ide_drive_t *drive)
{
ide_hwif_t *hwif = drive->hwif;
u8 dma_stat = hwif->tp_ops->read_sff_dma_status(hwif);
printk(KERN_WARNING "%s: %s: DMA status (0x%02x)\n",
drive->name, __func__, dma_stat);
if ((dma_stat & 0x18) == 0x18) /* BUSY Stupid Early Timer !! */
return WAIT_CMD;
hwif->hwgroup->expiry = NULL; /* one free ride for now */
/* 1 dmaing, 2 error, 4 intr */
if (dma_stat & 2) /* ERROR */
return -1;
if (dma_stat & 1) /* DMAing */
return WAIT_CMD;
if (dma_stat & 4) /* Got an Interrupt */
return WAIT_CMD;
return 0; /* Status is unknown -- reset the bus */
}
void ide_dma_exec_cmd(ide_drive_t *drive, u8 command)
{
/* issue cmd to drive */
ide_execute_command(drive, command, &ide_dma_intr, 2 * WAIT_CMD,
dma_timer_expiry);
}
EXPORT_SYMBOL_GPL(ide_dma_exec_cmd);
void ide_dma_start(ide_drive_t *drive)
{
ide_hwif_t *hwif = drive->hwif;
u8 dma_cmd;
/* Note that this is done *after* the cmd has
* been issued to the drive, as per the BM-IDE spec.
* The Promise Ultra33 doesn't work correctly when
* we do this part before issuing the drive cmd.
*/
if (hwif->host_flags & IDE_HFLAG_MMIO) {
dma_cmd = readb((void __iomem *)(hwif->dma_base + ATA_DMA_CMD));
/* start DMA */
writeb(dma_cmd | 1,
(void __iomem *)(hwif->dma_base + ATA_DMA_CMD));
} else {
dma_cmd = inb(hwif->dma_base + ATA_DMA_CMD);
outb(dma_cmd | 1, hwif->dma_base + ATA_DMA_CMD);
}
wmb();
}
EXPORT_SYMBOL_GPL(ide_dma_start);
/* returns 1 on error, 0 otherwise */
int ide_dma_end(ide_drive_t *drive)
{
ide_hwif_t *hwif = drive->hwif;
u8 mmio = (hwif->host_flags & IDE_HFLAG_MMIO) ? 1 : 0;
u8 dma_stat = 0, dma_cmd = 0;
drive->waiting_for_dma = 0;
if (mmio) {
/* get DMA command mode */
dma_cmd = readb((void __iomem *)(hwif->dma_base + ATA_DMA_CMD));
/* stop DMA */
writeb(dma_cmd & ~1,
(void __iomem *)(hwif->dma_base + ATA_DMA_CMD));
} else {
dma_cmd = inb(hwif->dma_base + ATA_DMA_CMD);
outb(dma_cmd & ~1, hwif->dma_base + ATA_DMA_CMD);
}
/* get DMA status */
dma_stat = hwif->tp_ops->read_sff_dma_status(hwif);
if (mmio)
/* clear the INTR & ERROR bits */
writeb(dma_stat | 6,
(void __iomem *)(hwif->dma_base + ATA_DMA_STATUS));
else
outb(dma_stat | 6, hwif->dma_base + ATA_DMA_STATUS);
/* purge DMA mappings */
ide_destroy_dmatable(drive);
/* verify good DMA status */
wmb();
return (dma_stat & 7) != 4 ? (0x10 | dma_stat) : 0;
}
EXPORT_SYMBOL_GPL(ide_dma_end);
/* returns 1 if dma irq issued, 0 otherwise */
int ide_dma_test_irq(ide_drive_t *drive)
{
ide_hwif_t *hwif = drive->hwif;
u8 dma_stat = hwif->tp_ops->read_sff_dma_status(hwif);
/* return 1 if INTR asserted */
if ((dma_stat & 4) == 4)
return 1;
return 0;
}
EXPORT_SYMBOL_GPL(ide_dma_test_irq);
const struct ide_dma_ops sff_dma_ops = {
.dma_host_set = ide_dma_host_set,
.dma_setup = ide_dma_setup,
.dma_exec_cmd = ide_dma_exec_cmd,
.dma_start = ide_dma_start,
.dma_end = ide_dma_end,
.dma_test_irq = ide_dma_test_irq,
.dma_timeout = ide_dma_timeout,
.dma_lost_irq = ide_dma_lost_irq,
};
EXPORT_SYMBOL_GPL(sff_dma_ops);
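As a usage illustration (not part of this patch set), a SFF-8038i compatible host driver would typically hand these generic operations to the core through its ide_port_info; the driver name and transfer-mode masks below are invented for the sketch:
static const struct ide_port_info example_port_info = {
	.name		= "example",		/* hypothetical driver name */
	.dma_ops	= &sff_dma_ops,		/* reuse the generic BM-DMA engine above */
	.host_flags	= IDE_HFLAG_MMIO,	/* take the write[bl]() code paths */
	.pio_mask	= ATA_PIO4,
	.mwdma_mask	= ATA_MWDMA2,
	.udma_mask	= ATA_UDMA5,
};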
......@@ -28,24 +28,13 @@
* for supplying a Promise UDMA board & WD UDMA drive for this work!
*/
#include <linux/module.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/timer.h>
#include <linux/mm.h>
#include <linux/interrupt.h>
#include <linux/pci.h>
#include <linux/init.h>
#include <linux/ide.h>
#include <linux/delay.h>
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>
#include <asm/io.h>
#include <asm/irq.h>
static const struct drive_list_entry drive_whitelist [] = {
static const struct drive_list_entry drive_whitelist[] = {
{ "Micropolis 2112A" , NULL },
{ "CONNER CTMA 4000" , NULL },
{ "CONNER CTT8000-A" , NULL },
......@@ -53,8 +42,7 @@ static const struct drive_list_entry drive_whitelist [] = {
{ NULL , NULL }
};
static const struct drive_list_entry drive_blacklist [] = {
static const struct drive_list_entry drive_blacklist[] = {
{ "WDC AC11000H" , NULL },
{ "WDC AC22100H" , NULL },
{ "WDC AC32500H" , NULL },
......@@ -94,11 +82,11 @@ static const struct drive_list_entry drive_blacklist [] = {
* ide_dma_intr - IDE DMA interrupt handler
* @drive: the drive the interrupt is for
*
* Handle an interrupt completing a read/write DMA transfer on an
* IDE device
*/
ide_startstop_t ide_dma_intr (ide_drive_t *drive)
ide_startstop_t ide_dma_intr(ide_drive_t *drive)
{
ide_hwif_t *hwif = drive->hwif;
u8 stat = 0, dma_stat = 0;
......@@ -108,20 +96,19 @@ ide_startstop_t ide_dma_intr (ide_drive_t *drive)
if (OK_STAT(stat, DRIVE_READY, drive->bad_wstat | ATA_DRQ)) {
if (!dma_stat) {
struct request *rq = HWGROUP(drive)->rq;
struct request *rq = hwif->hwgroup->rq;
task_end_request(drive, rq, stat);
return ide_stopped;
}
printk(KERN_ERR "%s: dma_intr: bad DMA status (dma_stat=%x)\n",
drive->name, dma_stat);
printk(KERN_ERR "%s: %s: bad DMA status (0x%02x)\n",
drive->name, __func__, dma_stat);
}
return ide_error(drive, "dma_intr", stat);
}
EXPORT_SYMBOL_GPL(ide_dma_intr);
static int ide_dma_good_drive(ide_drive_t *drive)
int ide_dma_good_drive(ide_drive_t *drive)
{
return ide_in_drive_list(drive->id, drive_whitelist);
}
......@@ -139,7 +126,7 @@ static int ide_dma_good_drive(ide_drive_t *drive)
int ide_build_sglist(ide_drive_t *drive, struct request *rq)
{
ide_hwif_t *hwif = HWIF(drive);
ide_hwif_t *hwif = drive->hwif;
struct scatterlist *sg = hwif->sg_table;
ide_map_sg(drive, rq);
......@@ -152,106 +139,8 @@ int ide_build_sglist(ide_drive_t *drive, struct request *rq)
return dma_map_sg(hwif->dev, sg, hwif->sg_nents,
hwif->sg_dma_direction);
}
EXPORT_SYMBOL_GPL(ide_build_sglist);
#ifdef CONFIG_BLK_DEV_IDEDMA_SFF
/**
* ide_build_dmatable - build IDE DMA table
*
* ide_build_dmatable() prepares a dma request. We map the command
* to get the pci bus addresses of the buffers and then build up
* the PRD table that the IDE layer wants to be fed. The code
* knows about the 64K wrap bug in the CS5530.
*
* Returns the number of built PRD entries if all went okay,
* returns 0 otherwise.
*
* May also be invoked from trm290.c
*/
int ide_build_dmatable (ide_drive_t *drive, struct request *rq)
{
ide_hwif_t *hwif = HWIF(drive);
__le32 *table = (__le32 *)hwif->dmatable_cpu;
unsigned int is_trm290 = (hwif->chipset == ide_trm290) ? 1 : 0;
unsigned int count = 0;
int i;
struct scatterlist *sg;
hwif->sg_nents = i = ide_build_sglist(drive, rq);
if (!i)
return 0;
sg = hwif->sg_table;
while (i) {
u32 cur_addr;
u32 cur_len;
cur_addr = sg_dma_address(sg);
cur_len = sg_dma_len(sg);
/*
* Fill in the dma table, without crossing any 64kB boundaries.
* Most hardware requires 16-bit alignment of all blocks,
* but the trm290 requires 32-bit alignment.
*/
while (cur_len) {
if (count++ >= PRD_ENTRIES) {
printk(KERN_ERR "%s: DMA table too small\n", drive->name);
goto use_pio_instead;
} else {
u32 xcount, bcount = 0x10000 - (cur_addr & 0xffff);
if (bcount > cur_len)
bcount = cur_len;
*table++ = cpu_to_le32(cur_addr);
xcount = bcount & 0xffff;
if (is_trm290)
xcount = ((xcount >> 2) - 1) << 16;
else if (xcount == 0x0000) {
/*
* Most chipsets correctly interpret a length of 0x0000 as 64KB,
* but at least one (e.g. CS5530) misinterprets it as zero (!).
* So here we break the 64KB entry into two 32KB entries instead.
*/
if (count++ >= PRD_ENTRIES) {
printk(KERN_ERR "%s: DMA table too small\n", drive->name);
goto use_pio_instead;
}
*table++ = cpu_to_le32(0x8000);
*table++ = cpu_to_le32(cur_addr + 0x8000);
xcount = 0x8000;
}
*table++ = cpu_to_le32(xcount);
cur_addr += bcount;
cur_len -= bcount;
}
}
sg = sg_next(sg);
i--;
}
if (count) {
if (!is_trm290)
*--table |= cpu_to_le32(0x80000000);
return count;
}
printk(KERN_ERR "%s: empty DMA table?\n", drive->name);
use_pio_instead:
ide_destroy_dmatable(drive);
return 0; /* revert to PIO for this request */
}
EXPORT_SYMBOL_GPL(ide_build_dmatable);
#endif
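For reference, the descriptors emitted by ide_build_dmatable() above follow the conventional SFF-8038i PRD layout; a minimal sketch (the struct name is illustrative, not kernel API):
struct example_prd {
	__le32 addr;		/* physical buffer address; must not cross a 64 KiB boundary */
	__le32 flags_len;	/* bits 15:0 = byte count (0x0000 means 64 KiB),
				   bit 31 = end-of-table marker on the final entry */
};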
/**
* ide_destroy_dmatable - clean up DMA mapping
* @drive: The drive to unmap
......@@ -262,147 +151,30 @@ EXPORT_SYMBOL_GPL(ide_build_dmatable);
* an oops as only one mapping can be live for each target at a given
* time.
*/
void ide_destroy_dmatable (ide_drive_t *drive)
void ide_destroy_dmatable(ide_drive_t *drive)
{
ide_hwif_t *hwif = drive->hwif;
dma_unmap_sg(hwif->dev, hwif->sg_table, hwif->sg_nents,
hwif->sg_dma_direction);
}
EXPORT_SYMBOL_GPL(ide_destroy_dmatable);
#ifdef CONFIG_BLK_DEV_IDEDMA_SFF
/**
* config_drive_for_dma - attempt to activate IDE DMA
* @drive: the drive to place in DMA mode
*
* If the drive supports at least mode 2 DMA or UDMA of any kind
* then attempt to place it into DMA mode. Drives that are known to
* support DMA but predate the DMA properties or that are known
* to have DMA handling bugs are also set up appropriately based
* on the good/bad drive lists.
*/
static int config_drive_for_dma (ide_drive_t *drive)
{
ide_hwif_t *hwif = drive->hwif;
u16 *id = drive->id;
if (drive->media != ide_disk) {
if (hwif->host_flags & IDE_HFLAG_NO_ATAPI_DMA)
return 0;
}
/*
* Enable DMA on any drive that has
* UltraDMA (mode 0/1/2/3/4/5/6) enabled
*/
if ((id[ATA_ID_FIELD_VALID] & 4) &&
((id[ATA_ID_UDMA_MODES] >> 8) & 0x7f))
return 1;
/*
* Enable DMA on any drive that has mode2 DMA
* (multi or single) enabled
*/
if (id[ATA_ID_FIELD_VALID] & 2) /* regular DMA */
if ((id[ATA_ID_MWDMA_MODES] & 0x404) == 0x404 ||
(id[ATA_ID_SWDMA_MODES] & 0x404) == 0x404)
return 1;
/* Consult the list of known "good" drives */
if (ide_dma_good_drive(drive))
return 1;
return 0;
}
/**
* dma_timer_expiry - handle a DMA timeout
* @drive: Drive that timed out
*
* An IDE DMA transfer timed out. In the event of an error we ask
* the driver to resolve the problem, if a DMA transfer is still
* in progress we continue to wait (arguably we need to add a
* secondary 'I don't care what the drive thinks' timeout here)
* Finally if we have an interrupt we let it complete the I/O.
* But only one time - we clear expiry and if it's still not
* completed after WAIT_CMD, we error and retry in PIO.
* This can occur if an interrupt is lost or due to hang or bugs.
*/
static int dma_timer_expiry (ide_drive_t *drive)
{
ide_hwif_t *hwif = HWIF(drive);
u8 dma_stat = hwif->tp_ops->read_sff_dma_status(hwif);
printk(KERN_WARNING "%s: dma_timer_expiry: dma status == 0x%02x\n",
drive->name, dma_stat);
if ((dma_stat & 0x18) == 0x18) /* BUSY Stupid Early Timer !! */
return WAIT_CMD;
HWGROUP(drive)->expiry = NULL; /* one free ride for now */
/* 1 dmaing, 2 error, 4 intr */
if (dma_stat & 2) /* ERROR */
return -1;
if (dma_stat & 1) /* DMAing */
return WAIT_CMD;
if (dma_stat & 4) /* Got an Interrupt */
return WAIT_CMD;
return 0; /* Status is unknown -- reset the bus */
}
/**
* ide_dma_host_set - Enable/disable DMA on a host
* @drive: drive to control
*
* Enable/disable DMA on an IDE controller following generic
* bus-mastering IDE controller behaviour.
*/
void ide_dma_host_set(ide_drive_t *drive, int on)
{
ide_hwif_t *hwif = HWIF(drive);
u8 unit = (drive->select.b.unit & 0x01);
u8 dma_stat = hwif->tp_ops->read_sff_dma_status(hwif);
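/*
 * Bits 5 and 6 of the BM-DMA status register are the software-set
 * "drive 0 / drive 1 DMA capable" flags; only this unit's bit is
 * toggled below.
 */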
if (on)
dma_stat |= (1 << (5 + unit));
else
dma_stat &= ~(1 << (5 + unit));
if (hwif->host_flags & IDE_HFLAG_MMIO)
writeb(dma_stat,
(void __iomem *)(hwif->dma_base + ATA_DMA_STATUS));
else
outb(dma_stat, hwif->dma_base + ATA_DMA_STATUS);
}
EXPORT_SYMBOL_GPL(ide_dma_host_set);
#endif /* CONFIG_BLK_DEV_IDEDMA_SFF */
/**
* ide_dma_off_quietly - Generic DMA kill
* @drive: drive to control
*
* Turn off the current DMA on this IDE controller.
*/
void ide_dma_off_quietly(ide_drive_t *drive)
{
drive->using_dma = 0;
drive->dev_flags &= ~IDE_DFLAG_USING_DMA;
ide_toggle_bounce(drive, 0);
drive->hwif->dma_ops->dma_host_set(drive, 0);
}
EXPORT_SYMBOL(ide_dma_off_quietly);
/**
......@@ -418,7 +190,6 @@ void ide_dma_off(ide_drive_t *drive)
printk(KERN_INFO "%s: DMA disabled\n", drive->name);
ide_dma_off_quietly(drive);
}
EXPORT_SYMBOL(ide_dma_off);
/**
......@@ -430,167 +201,13 @@ EXPORT_SYMBOL(ide_dma_off);
void ide_dma_on(ide_drive_t *drive)
{
drive->using_dma = 1;
drive->dev_flags |= IDE_DFLAG_USING_DMA;
ide_toggle_bounce(drive, 1);
drive->hwif->dma_ops->dma_host_set(drive, 1);
}
#ifdef CONFIG_BLK_DEV_IDEDMA_SFF
/**
* ide_dma_setup - begin a DMA phase
* @drive: target device
*
* Build an IDE DMA PRD (IDE speak for scatter gather table)
* and then set up the DMA transfer registers for a device
* that follows generic IDE PCI DMA behaviour. Controllers can
* override this function if they need to
*
* Returns 0 on success. If a PIO fallback is required then 1
* is returned.
*/
int ide_dma_setup(ide_drive_t *drive)
{
ide_hwif_t *hwif = drive->hwif;
struct request *rq = HWGROUP(drive)->rq;
unsigned int reading;
u8 mmio = (hwif->host_flags & IDE_HFLAG_MMIO) ? 1 : 0;
u8 dma_stat;
if (rq_data_dir(rq))
reading = 0;
else
reading = 1 << 3;
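/*
 * Bit 3 of the BM-DMA command register selects the direction:
 * set = bus-master writes to memory (a disk read),
 * clear = bus-master reads from memory (a disk write).
 */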
/* fall back to pio! */
if (!ide_build_dmatable(drive, rq)) {
ide_map_sg(drive, rq);
return 1;
}
/* PRD table */
if (hwif->host_flags & IDE_HFLAG_MMIO)
writel(hwif->dmatable_dma,
(void __iomem *)(hwif->dma_base + ATA_DMA_TABLE_OFS));
else
outl(hwif->dmatable_dma, hwif->dma_base + ATA_DMA_TABLE_OFS);
/* specify r/w */
if (mmio)
writeb(reading, (void __iomem *)(hwif->dma_base + ATA_DMA_CMD));
else
outb(reading, hwif->dma_base + ATA_DMA_CMD);
/* read DMA status for INTR & ERROR flags */
dma_stat = hwif->tp_ops->read_sff_dma_status(hwif);
/* clear INTR & ERROR flags */
if (mmio)
writeb(dma_stat | 6,
(void __iomem *)(hwif->dma_base + ATA_DMA_STATUS));
else
outb(dma_stat | 6, hwif->dma_base + ATA_DMA_STATUS);
drive->waiting_for_dma = 1;
return 0;
}
EXPORT_SYMBOL_GPL(ide_dma_setup);
void ide_dma_exec_cmd(ide_drive_t *drive, u8 command)
{
/* issue cmd to drive */
ide_execute_command(drive, command, &ide_dma_intr, 2*WAIT_CMD, dma_timer_expiry);
}
EXPORT_SYMBOL_GPL(ide_dma_exec_cmd);
void ide_dma_start(ide_drive_t *drive)
{
ide_hwif_t *hwif = drive->hwif;
u8 dma_cmd;
/* Note that this is done *after* the cmd has
* been issued to the drive, as per the BM-IDE spec.
* The Promise Ultra33 doesn't work correctly when
* we do this part before issuing the drive cmd.
*/
if (hwif->host_flags & IDE_HFLAG_MMIO) {
dma_cmd = readb((void __iomem *)(hwif->dma_base + ATA_DMA_CMD));
/* start DMA */
writeb(dma_cmd | 1,
(void __iomem *)(hwif->dma_base + ATA_DMA_CMD));
} else {
dma_cmd = inb(hwif->dma_base + ATA_DMA_CMD);
outb(dma_cmd | 1, hwif->dma_base + ATA_DMA_CMD);
}
hwif->dma = 1;
wmb();
}
EXPORT_SYMBOL_GPL(ide_dma_start);
/* returns 1 on error, 0 otherwise */
int __ide_dma_end (ide_drive_t *drive)
{
ide_hwif_t *hwif = drive->hwif;
u8 mmio = (hwif->host_flags & IDE_HFLAG_MMIO) ? 1 : 0;
u8 dma_stat = 0, dma_cmd = 0;
drive->waiting_for_dma = 0;
if (mmio) {
/* get DMA command mode */
dma_cmd = readb((void __iomem *)(hwif->dma_base + ATA_DMA_CMD));
/* stop DMA */
writeb(dma_cmd & ~1,
(void __iomem *)(hwif->dma_base + ATA_DMA_CMD));
} else {
dma_cmd = inb(hwif->dma_base + ATA_DMA_CMD);
outb(dma_cmd & ~1, hwif->dma_base + ATA_DMA_CMD);
}
/* get DMA status */
dma_stat = hwif->tp_ops->read_sff_dma_status(hwif);
if (mmio)
/* clear the INTR & ERROR bits */
writeb(dma_stat | 6,
(void __iomem *)(hwif->dma_base + ATA_DMA_STATUS));
else
outb(dma_stat | 6, hwif->dma_base + ATA_DMA_STATUS);
/* purge DMA mappings */
ide_destroy_dmatable(drive);
/* verify good DMA status */
hwif->dma = 0;
wmb();
return (dma_stat & 7) != 4 ? (0x10 | dma_stat) : 0;
}
EXPORT_SYMBOL(__ide_dma_end);
/* returns 1 if dma irq issued, 0 otherwise */
int ide_dma_test_irq(ide_drive_t *drive)
{
ide_hwif_t *hwif = HWIF(drive);
u8 dma_stat = hwif->tp_ops->read_sff_dma_status(hwif);
/* return 1 if INTR asserted */
if ((dma_stat & 4) == 4)
return 1;
if (!drive->waiting_for_dma)
printk(KERN_WARNING "%s: (%s) called while not waiting\n",
drive->name, __func__);
return 0;
}
EXPORT_SYMBOL_GPL(ide_dma_test_irq);
#else
static inline int config_drive_for_dma(ide_drive_t *drive) { return 0; }
#endif /* CONFIG_BLK_DEV_IDEDMA_SFF */
int __ide_dma_bad_drive (ide_drive_t *drive)
int __ide_dma_bad_drive(ide_drive_t *drive)
{
u16 *id = drive->id;
......@@ -602,7 +219,6 @@ int __ide_dma_bad_drive (ide_drive_t *drive)
}
return 0;
}
EXPORT_SYMBOL(__ide_dma_bad_drive);
static const u8 xfer_mode_bases[] = {
......@@ -618,7 +234,7 @@ static unsigned int ide_get_mode_mask(ide_drive_t *drive, u8 base, u8 req_mode)
const struct ide_port_ops *port_ops = hwif->port_ops;
unsigned int mask = 0;
switch(base) {
switch (base) {
case XFER_UDMA_0:
if ((id[ATA_ID_FIELD_VALID] & 4) == 0)
break;
......@@ -719,7 +335,6 @@ u8 ide_find_dma_mode(ide_drive_t *drive, u8 req_mode)
return mode;
}
EXPORT_SYMBOL_GPL(ide_find_dma_mode);
static int ide_tune_dma(ide_drive_t *drive)
......@@ -727,7 +342,8 @@ static int ide_tune_dma(ide_drive_t *drive)
ide_hwif_t *hwif = drive->hwif;
u8 speed;
if (drive->nodma || ata_id_has_dma(drive->id) == 0)
if (ata_id_has_dma(drive->id) == 0 ||
(drive->dev_flags & IDE_DFLAG_NODMA))
return 0;
/* consult the list of known "bad" drives */
......@@ -827,66 +443,59 @@ void ide_check_dma_crc(ide_drive_t *drive)
ide_dma_on(drive);
}
#ifdef CONFIG_BLK_DEV_IDEDMA_SFF
void ide_dma_lost_irq (ide_drive_t *drive)
void ide_dma_lost_irq(ide_drive_t *drive)
{
printk("%s: DMA interrupt recovery\n", drive->name);
printk(KERN_ERR "%s: DMA interrupt recovery\n", drive->name);
}
EXPORT_SYMBOL_GPL(ide_dma_lost_irq);
EXPORT_SYMBOL(ide_dma_lost_irq);
void ide_dma_timeout (ide_drive_t *drive)
void ide_dma_timeout(ide_drive_t *drive)
{
ide_hwif_t *hwif = HWIF(drive);
ide_hwif_t *hwif = drive->hwif;
printk(KERN_ERR "%s: timeout waiting for DMA\n", drive->name);
if (hwif->dma_ops->dma_test_irq(drive))
return;
ide_dump_status(drive, "DMA timeout", hwif->tp_ops->read_status(hwif));
hwif->dma_ops->dma_end(drive);
}
EXPORT_SYMBOL(ide_dma_timeout);
EXPORT_SYMBOL_GPL(ide_dma_timeout);
void ide_release_dma_engine(ide_hwif_t *hwif)
{
if (hwif->dmatable_cpu) {
struct pci_dev *pdev = to_pci_dev(hwif->dev);
int prd_size = hwif->prd_max_nents * hwif->prd_ent_size;
pci_free_consistent(pdev, PRD_ENTRIES * PRD_BYTES,
hwif->dmatable_cpu, hwif->dmatable_dma);
dma_free_coherent(hwif->dev, prd_size,
hwif->dmatable_cpu, hwif->dmatable_dma);
hwif->dmatable_cpu = NULL;
}
}
EXPORT_SYMBOL_GPL(ide_release_dma_engine);
int ide_allocate_dma_engine(ide_hwif_t *hwif)
{
struct pci_dev *pdev = to_pci_dev(hwif->dev);
int prd_size;
hwif->dmatable_cpu = pci_alloc_consistent(pdev,
PRD_ENTRIES * PRD_BYTES,
&hwif->dmatable_dma);
if (hwif->prd_max_nents == 0)
hwif->prd_max_nents = PRD_ENTRIES;
if (hwif->prd_ent_size == 0)
hwif->prd_ent_size = PRD_BYTES;
if (hwif->dmatable_cpu)
return 0;
prd_size = hwif->prd_max_nents * hwif->prd_ent_size;
printk(KERN_ERR "%s: -- Error, unable to allocate DMA table.\n",
hwif->dmatable_cpu = dma_alloc_coherent(hwif->dev, prd_size,
&hwif->dmatable_dma,
GFP_ATOMIC);
if (hwif->dmatable_cpu == NULL) {
printk(KERN_ERR "%s: unable to allocate PRD table\n",
hwif->name);
return -ENOMEM;
}
return 1;
return 0;
}
EXPORT_SYMBOL_GPL(ide_allocate_dma_engine);
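Illustrative sketch only: a host whose DMA engine uses a non-standard descriptor table can now override the defaults before the allocation runs; the helper name and the sizes below are made up:
static int example_setup_prd_table(ide_hwif_t *hwif)
{
	hwif->prd_max_nents = 128;	/* default would be PRD_ENTRIES */
	hwif->prd_ent_size  = 16;	/* default would be PRD_BYTES (8) */

	return ide_allocate_dma_engine(hwif);	/* 0 on success, -ENOMEM otherwise */
}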
const struct ide_dma_ops sff_dma_ops = {
.dma_host_set = ide_dma_host_set,
.dma_setup = ide_dma_setup,
.dma_exec_cmd = ide_dma_exec_cmd,
.dma_start = ide_dma_start,
.dma_end = __ide_dma_end,
.dma_test_irq = ide_dma_test_irq,
.dma_timeout = ide_dma_timeout,
.dma_lost_irq = ide_dma_lost_irq,
};
EXPORT_SYMBOL_GPL(sff_dma_ops);
#endif /* CONFIG_BLK_DEV_IDEDMA_SFF */
......@@ -13,20 +13,14 @@ typedef struct ide_floppy_obj {
struct kref kref;
unsigned int openers; /* protected by BKL for now */
/* Current packet command */
struct ide_atapi_pc *pc;
/* Last failed packet command */
struct ide_atapi_pc *failed_pc;
/* used for blk_{fs,pc}_request() requests */
struct ide_atapi_pc queued_pc;
struct ide_atapi_pc request_sense_pc;
struct request request_sense_rq;
/* Last error information */
u8 sense_key, asc, ascq;
/* delay this long before sending packet command */
u8 ticks;
int progress_indication;
/* Device information */
......@@ -54,10 +48,15 @@ typedef struct ide_floppy_obj {
/* ide-floppy.c */
void ide_floppy_create_mode_sense_cmd(struct ide_atapi_pc *, u8);
void ide_floppy_create_read_capacity_cmd(struct ide_atapi_pc *);
void ide_floppy_create_request_sense_cmd(struct ide_atapi_pc *);
sector_t ide_floppy_capacity(ide_drive_t *);
/* ide-floppy_ioctl.c */
int ide_floppy_format_ioctl(ide_drive_t *, struct file *, unsigned int,
void __user *);
int ide_floppy_ioctl(struct inode *, struct file *, unsigned, unsigned long);
#ifdef CONFIG_IDE_PROC_FS
/* ide-floppy_proc.c */
extern ide_proc_entry_t ide_floppy_proc[];
extern const struct ide_proc_devset ide_floppy_settings[];
#endif
#endif /*__IDE_FLOPPY_H */
......@@ -195,7 +195,7 @@ static int ide_floppy_get_format_progress(ide_drive_t *drive, int __user *arg)
int progress_indication = 0x10000;
if (drive->atapi_flags & IDE_AFLAG_SRFP) {
ide_floppy_create_request_sense_cmd(&pc);
ide_create_request_sense_cmd(drive, &pc);
if (ide_queue_pc_tail(drive, floppy->disk, &pc))
return -EIO;
......@@ -223,8 +223,26 @@ static int ide_floppy_get_format_progress(ide_drive_t *drive, int __user *arg)
return 0;
}
int ide_floppy_format_ioctl(ide_drive_t *drive, struct file *file,
unsigned int cmd, void __user *argp)
static int ide_floppy_lockdoor(ide_drive_t *drive, struct ide_atapi_pc *pc,
unsigned long arg, unsigned int cmd)
{
idefloppy_floppy_t *floppy = drive->driver_data;
struct gendisk *disk = floppy->disk;
int prevent = (arg && cmd != CDROMEJECT) ? 1 : 0;
if (floppy->openers > 1)
return -EBUSY;
ide_set_media_lock(drive, disk, prevent);
if (cmd == CDROMEJECT)
ide_do_start_stop(drive, disk, 2);
return 0;
}
static int ide_floppy_format_ioctl(ide_drive_t *drive, struct file *file,
unsigned int cmd, void __user *argp)
{
switch (cmd) {
case IDEFLOPPY_IOCTL_FORMAT_SUPPORTED:
......@@ -241,3 +259,35 @@ int ide_floppy_format_ioctl(ide_drive_t *drive, struct file *file,
return -ENOTTY;
}
}
int ide_floppy_ioctl(struct inode *inode, struct file *file,
unsigned int cmd, unsigned long arg)
{
struct block_device *bdev = inode->i_bdev;
struct ide_floppy_obj *floppy = ide_drv_g(bdev->bd_disk,
ide_floppy_obj);
ide_drive_t *drive = floppy->drive;
struct ide_atapi_pc pc;
void __user *argp = (void __user *)arg;
int err;
if (cmd == CDROMEJECT || cmd == CDROM_LOCKDOOR)
return ide_floppy_lockdoor(drive, &pc, arg, cmd);
err = ide_floppy_format_ioctl(drive, file, cmd, argp);
if (err != -ENOTTY)
return err;
/*
* skip SCSI_IOCTL_SEND_COMMAND (deprecated)
* and CDROM_SEND_PACKET (legacy) ioctls
*/
if (cmd != CDROM_SEND_PACKET && cmd != SCSI_IOCTL_SEND_COMMAND)
err = scsi_cmd_ioctl(file, bdev->bd_disk->queue,
bdev->bd_disk, cmd, argp);
if (err == -ENOTTY)
err = generic_ide_ioctl(drive, file, bdev, cmd, arg);
return err;
}
#include <linux/kernel.h>
#include <linux/ide.h>
#include "ide-floppy.h"
static int proc_idefloppy_read_capacity(char *page, char **start, off_t off,
int count, int *eof, void *data)
{
ide_drive_t *drive = (ide_drive_t *)data;
int len;
len = sprintf(page, "%llu\n", (long long)ide_floppy_capacity(drive));
PROC_IDE_READ_RETURN(page, start, off, count, eof, len);
}
ide_proc_entry_t ide_floppy_proc[] = {
{ "capacity", S_IFREG|S_IRUGO, proc_idefloppy_read_capacity, NULL },
{ "geometry", S_IFREG|S_IRUGO, proc_ide_read_geometry, NULL },
{ NULL, 0, NULL, NULL }
};
ide_devset_rw_field(bios_cyl, bios_cyl);
ide_devset_rw_field(bios_head, bios_head);
ide_devset_rw_field(bios_sect, bios_sect);
ide_devset_rw_field(ticks, pc_delay);
const struct ide_proc_devset ide_floppy_settings[] = {
IDE_PROC_DEVSET(bios_cyl, 0, 1023),
IDE_PROC_DEVSET(bios_head, 0, 255),
IDE_PROC_DEVSET(bios_sect, 0, 63),
IDE_PROC_DEVSET(ticks, 0, 255),
{ 0 },
};
......@@ -137,15 +137,10 @@ static void ide_generic_check_pci_legacy_iobases(int *primary, int *secondary)
static int __init ide_generic_init(void)
{
hw_regs_t hw[MAX_HWIFS], *hws[MAX_HWIFS];
struct ide_host *host;
hw_regs_t hw, *hws[] = { &hw, NULL, NULL, NULL };
unsigned long io_addr;
int i, rc, primary = 0, secondary = 0;
int i, rc = 0, primary = 0, secondary = 0;
#ifdef CONFIG_MIPS
if (!ide_probe_legacy())
return -ENODEV;
#endif
ide_generic_check_pci_legacy_iobases(&primary, &secondary);
if (!probe_mask) {
......@@ -161,13 +156,9 @@ static int __init ide_generic_init(void)
printk(KERN_INFO DRV_NAME ": enforcing probing of I/O ports "
"upon user request\n");
memset(hws, 0, sizeof(hw_regs_t *) * MAX_HWIFS);
for (i = 0; i < ARRAY_SIZE(legacy_bases); i++) {
io_addr = legacy_bases[i];
hws[i] = NULL;
if ((probe_mask & (1 << i)) && io_addr) {
if (!request_region(io_addr, 8, DRV_NAME)) {
printk(KERN_ERR "%s: I/O resource 0x%lX-0x%lX "
......@@ -184,45 +175,27 @@ static int __init ide_generic_init(void)
continue;
}
memset(&hw[i], 0, sizeof(hw[i]));
ide_std_init_ports(&hw[i], io_addr, io_addr + 0x206);
memset(&hw, 0, sizeof(hw));
ide_std_init_ports(&hw, io_addr, io_addr + 0x206);
#ifdef CONFIG_IA64
hw[i].irq = isa_irq_to_vector(legacy_irqs[i]);
hw.irq = isa_irq_to_vector(legacy_irqs[i]);
#else
hw[i].irq = legacy_irqs[i];
hw.irq = legacy_irqs[i];
#endif
hw[i].chipset = ide_generic;
hw.chipset = ide_generic;
hws[i] = &hw[i];
rc = ide_host_add(NULL, hws, NULL);
if (rc) {
release_region(io_addr + 0x206, 1);
release_region(io_addr, 8);
}
}
}
host = ide_host_alloc_all(NULL, hws);
if (host == NULL) {
rc = -ENOMEM;
goto err;
}
rc = ide_host_register(host, NULL, hws);
if (rc)
goto err_free;
if (ide_generic_sysfs_init())
printk(KERN_ERR DRV_NAME ": failed to create ide_generic "
"class\n");
return 0;
err_free:
ide_host_free(host);
err:
for (i = 0; i < MAX_HWIFS; i++) {
if (hws[i] == NULL)
continue;
io_addr = hws[i]->io_ports.data_addr;
release_region(io_addr + 0x206, 1);
release_region(io_addr, 8);
}
return rc;
}
......
......@@ -78,8 +78,9 @@ static int __ide_end_request(ide_drive_t *drive, struct request *rq,
* decide whether to reenable DMA -- 3 is a random magic for now,
* if we DMA timeout more than 3 times, just stay in PIO
*/
if (drive->state == DMA_PIO_RETRY && drive->retry_pio <= 3) {
drive->state = 0;
if ((drive->dev_flags & IDE_DFLAG_DMA_PIO_RETRY) &&
drive->retry_pio <= 3) {
drive->dev_flags &= ~IDE_DFLAG_DMA_PIO_RETRY;
ide_dma_on(drive);
}
......@@ -131,21 +132,6 @@ int ide_end_request (ide_drive_t *drive, int uptodate, int nr_sectors)
}
EXPORT_SYMBOL(ide_end_request);
/*
* Power Management state machine. This one is rather trivial for now,
* we should probably add more, like switching back to PIO on suspend
* to help some BIOSes, re-do the door locking on resume, etc...
*/
enum {
ide_pm_flush_cache = ide_pm_state_start_suspend,
idedisk_pm_standby,
idedisk_pm_restore_pio = ide_pm_state_start_resume,
idedisk_pm_idle,
ide_pm_restore_dma,
};
static void ide_complete_power_step(ide_drive_t *drive, struct request *rq, u8 stat, u8 error)
{
struct request_pm_state *pm = rq->data;
......@@ -154,20 +140,20 @@ static void ide_complete_power_step(ide_drive_t *drive, struct request *rq, u8 s
return;
switch (pm->pm_step) {
case ide_pm_flush_cache: /* Suspend step 1 (flush cache) complete */
case IDE_PM_FLUSH_CACHE: /* Suspend step 1 (flush cache) */
if (pm->pm_state == PM_EVENT_FREEZE)
pm->pm_step = ide_pm_state_completed;
pm->pm_step = IDE_PM_COMPLETED;
else
pm->pm_step = idedisk_pm_standby;
pm->pm_step = IDE_PM_STANDBY;
break;
case idedisk_pm_standby: /* Suspend step 2 (standby) complete */
pm->pm_step = ide_pm_state_completed;
case IDE_PM_STANDBY: /* Suspend step 2 (standby) */
pm->pm_step = IDE_PM_COMPLETED;
break;
case idedisk_pm_restore_pio: /* Resume step 1 complete */
pm->pm_step = idedisk_pm_idle;
case IDE_PM_RESTORE_PIO: /* Resume step 1 (restore PIO) */
pm->pm_step = IDE_PM_IDLE;
break;
case idedisk_pm_idle: /* Resume step 2 (idle) complete */
pm->pm_step = ide_pm_restore_dma;
case IDE_PM_IDLE: /* Resume step 2 (idle)*/
pm->pm_step = IDE_PM_RESTORE_DMA;
break;
}
}
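/*
 * Summary (added for clarity) of the resulting step sequences:
 *
 *   suspend: IDE_PM_FLUSH_CACHE -> IDE_PM_STANDBY -> IDE_PM_COMPLETED
 *            (PM_EVENT_FREEZE jumps from FLUSH_CACHE straight to COMPLETED)
 *   resume:  IDE_PM_RESTORE_PIO -> IDE_PM_IDLE -> IDE_PM_RESTORE_DMA
 *            -> IDE_PM_COMPLETED (ATAPI devices skip IDE_PM_IDLE)
 */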
......@@ -180,11 +166,12 @@ static ide_startstop_t ide_start_power_step(ide_drive_t *drive, struct request *
memset(args, 0, sizeof(*args));
switch (pm->pm_step) {
case ide_pm_flush_cache: /* Suspend step 1 (flush cache) */
case IDE_PM_FLUSH_CACHE: /* Suspend step 1 (flush cache) */
if (drive->media != ide_disk)
break;
/* Not supported? Switch to next step now. */
if (!drive->wcache || ata_id_flush_enabled(drive->id) == 0) {
if (ata_id_flush_enabled(drive->id) == 0 ||
(drive->dev_flags & IDE_DFLAG_WCACHE) == 0) {
ide_complete_power_step(drive, rq, 0, 0);
return ide_stopped;
}
......@@ -193,27 +180,23 @@ static ide_startstop_t ide_start_power_step(ide_drive_t *drive, struct request *
else
args->tf.command = ATA_CMD_FLUSH;
goto out_do_tf;
case idedisk_pm_standby: /* Suspend step 2 (standby) */
case IDE_PM_STANDBY: /* Suspend step 2 (standby) */
args->tf.command = ATA_CMD_STANDBYNOW1;
goto out_do_tf;
case idedisk_pm_restore_pio: /* Resume step 1 (restore PIO) */
case IDE_PM_RESTORE_PIO: /* Resume step 1 (restore PIO) */
ide_set_max_pio(drive);
/*
* skip idedisk_pm_idle for ATAPI devices
* skip IDE_PM_IDLE for ATAPI devices
*/
if (drive->media != ide_disk)
pm->pm_step = ide_pm_restore_dma;
pm->pm_step = IDE_PM_RESTORE_DMA;
else
ide_complete_power_step(drive, rq, 0, 0);
return ide_stopped;
case idedisk_pm_idle: /* Resume step 2 (idle) */
case IDE_PM_IDLE: /* Resume step 2 (idle) */
args->tf.command = ATA_CMD_IDLEIMMEDIATE;
goto out_do_tf;
case ide_pm_restore_dma: /* Resume step 3 (restore DMA) */
case IDE_PM_RESTORE_DMA: /* Resume step 3 (restore DMA) */
/*
* Right now, all we do is call ide_set_dma(drive),
* we could be smarter and check for current xfer_speed
......@@ -222,12 +205,13 @@ static ide_startstop_t ide_start_power_step(ide_drive_t *drive, struct request *
if (drive->hwif->dma_ops == NULL)
break;
/*
* TODO: respect ->using_dma setting
* TODO: respect IDE_DFLAG_USING_DMA
*/
ide_set_dma(drive);
break;
}
pm->pm_step = ide_pm_state_completed;
pm->pm_step = IDE_PM_COMPLETED;
return ide_stopped;
out_do_tf:
......@@ -287,7 +271,7 @@ static void ide_complete_pm_request (ide_drive_t *drive, struct request *rq)
if (blk_pm_suspend_request(rq)) {
blk_stop_queue(drive->queue);
} else {
drive->blocked = 0;
drive->dev_flags &= ~IDE_DFLAG_BLOCKED;
blk_start_queue(drive->queue);
}
HWGROUP(drive)->rq = NULL;
......@@ -343,7 +327,7 @@ void ide_end_drive_cmd (ide_drive_t *drive, u8 stat, u8 err)
drive->name, rq->pm->pm_step, stat, err);
#endif
ide_complete_power_step(drive, rq, stat, err);
if (pm->pm_step == ide_pm_state_completed)
if (pm->pm_step == IDE_PM_COMPLETED)
ide_complete_pm_request(drive, rq);
return;
}
......@@ -374,13 +358,14 @@ static ide_startstop_t ide_ata_error(ide_drive_t *drive, struct request *rq, u8
{
ide_hwif_t *hwif = drive->hwif;
if ((stat & ATA_BUSY) || ((stat & ATA_DF) && !drive->nowerr)) {
if ((stat & ATA_BUSY) ||
((stat & ATA_DF) && (drive->dev_flags & IDE_DFLAG_NOWERR) == 0)) {
/* other bits are useless when BUSY */
rq->errors |= ERROR_RESET;
} else if (stat & ATA_ERR) {
/* err has different meaning on cdrom and tape */
if (err == ATA_ABORTED) {
if (drive->select.b.lba &&
if ((drive->dev_flags & IDE_DFLAG_LBA) &&
/* some newer drives don't support ATA_CMD_INIT_DEV_PARAMS */
hwif->tp_ops->read_status(hwif) == ATA_CMD_INIT_DEV_PARAMS)
return ide_stopped;
......@@ -428,7 +413,8 @@ static ide_startstop_t ide_atapi_error(ide_drive_t *drive, struct request *rq, u
{
ide_hwif_t *hwif = drive->hwif;
if ((stat & ATA_BUSY) || ((stat & ATA_DF) && !drive->nowerr)) {
if ((stat & ATA_BUSY) ||
((stat & ATA_DF) && (drive->dev_flags & IDE_DFLAG_NOWERR) == 0)) {
/* other bits are useless when BUSY */
rq->errors |= ERROR_RESET;
} else {
......@@ -509,7 +495,7 @@ static void ide_tf_set_specify_cmd(ide_drive_t *drive, struct ide_taskfile *tf)
tf->lbal = drive->sect;
tf->lbam = drive->cyl;
tf->lbah = drive->cyl >> 8;
tf->device = ((drive->head - 1) | drive->select.all) & ~ATA_LBA;
tf->device = (drive->head - 1) | drive->select;
tf->command = ATA_CMD_INIT_DEV_PARAMS;
}
......@@ -557,30 +543,6 @@ static ide_startstop_t ide_disk_special(ide_drive_t *drive)
return ide_started;
}
/*
* handle HDIO_SET_PIO_MODE ioctl abusers here, eventually it will go away
*/
static int set_pio_mode_abuse(ide_hwif_t *hwif, u8 req_pio)
{
switch (req_pio) {
case 202:
case 201:
case 200:
case 102:
case 101:
case 100:
return (hwif->host_flags & IDE_HFLAG_ABUSE_DMA_MODES) ? 1 : 0;
case 9:
case 8:
return (hwif->host_flags & IDE_HFLAG_ABUSE_PREFETCH) ? 1 : 0;
case 7:
case 6:
return (hwif->host_flags & IDE_HFLAG_ABUSE_FAST_DEVSEL) ? 1 : 0;
default:
return 0;
}
}
/**
* do_special - issue some special commands
* @drive: drive the command is for
......@@ -598,45 +560,12 @@ static ide_startstop_t do_special (ide_drive_t *drive)
#ifdef DEBUG
printk("%s: do_special: 0x%02x\n", drive->name, s->all);
#endif
if (s->b.set_tune) {
ide_hwif_t *hwif = drive->hwif;
const struct ide_port_ops *port_ops = hwif->port_ops;
u8 req_pio = drive->tune_req;
s->b.set_tune = 0;
if (set_pio_mode_abuse(drive->hwif, req_pio)) {
/*
* take ide_lock for drive->[no_]unmask/[no_]io_32bit
*/
if (req_pio == 8 || req_pio == 9) {
unsigned long flags;
spin_lock_irqsave(&ide_lock, flags);
port_ops->set_pio_mode(drive, req_pio);
spin_unlock_irqrestore(&ide_lock, flags);
} else
port_ops->set_pio_mode(drive, req_pio);
} else {
int keep_dma = drive->using_dma;
ide_set_pio(drive, req_pio);
if (hwif->host_flags & IDE_HFLAG_SET_PIO_MODE_KEEP_DMA) {
if (keep_dma)
ide_dma_on(drive);
}
}
return ide_stopped;
} else {
if (drive->media == ide_disk)
return ide_disk_special(drive);
if (drive->media == ide_disk)
return ide_disk_special(drive);
s->all = 0;
drive->mult_req = 0;
return ide_stopped;
}
s->all = 0;
drive->mult_req = 0;
return ide_stopped;
}
void ide_map_sg(ide_drive_t *drive, struct request *rq)
......@@ -726,10 +655,7 @@ int ide_devset_execute(ide_drive_t *drive, const struct ide_devset *setting,
if (!(setting->flags & DS_SYNC))
return setting->set(drive, arg);
rq = blk_get_request(q, READ, GFP_KERNEL);
if (!rq)
return -ENOMEM;
rq = blk_get_request(q, READ, __GFP_WAIT);
rq->cmd_type = REQ_TYPE_SPECIAL;
rq->cmd_len = 5;
rq->cmd[0] = REQ_DEVSET_EXEC;
......@@ -746,7 +672,32 @@ EXPORT_SYMBOL_GPL(ide_devset_execute);
static ide_startstop_t ide_special_rq(ide_drive_t *drive, struct request *rq)
{
switch (rq->cmd[0]) {
u8 cmd = rq->cmd[0];
if (cmd == REQ_PARK_HEADS || cmd == REQ_UNPARK_HEADS) {
ide_task_t task;
struct ide_taskfile *tf = &task.tf;
memset(&task, 0, sizeof(task));
if (cmd == REQ_PARK_HEADS) {
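/*
 * Note (added): FEATURES = 0x44 together with LBA Low/Mid/High =
 * 0x4c/0x4e/0x55 is the ATA/ATAPI-7 IDLE IMMEDIATE "unload feature"
 * signature that the drive verifies before parking its heads on
 * the ramp.
 */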
drive->sleep = *(unsigned long *)rq->special;
drive->dev_flags |= IDE_DFLAG_SLEEPING;
tf->command = ATA_CMD_IDLEIMMEDIATE;
tf->feature = 0x44;
tf->lbal = 0x4c;
tf->lbam = 0x4e;
tf->lbah = 0x55;
task.tf_flags |= IDE_TFLAG_CUSTOM_HANDLER;
} else /* cmd == REQ_UNPARK_HEADS */
tf->command = ATA_CMD_CHK_POWER;
task.tf_flags |= IDE_TFLAG_TF | IDE_TFLAG_DEVICE;
task.rq = rq;
drive->hwif->data_phase = task.data_phase = TASKFILE_NO_DATA;
return do_rw_taskfile(drive, &task);
}
switch (cmd) {
case REQ_DEVSET_EXEC:
{
int err, (*setfunc)(ide_drive_t *, int) = rq->special;
......@@ -773,11 +724,11 @@ static void ide_check_pm_state(ide_drive_t *drive, struct request *rq)
struct request_pm_state *pm = rq->data;
if (blk_pm_suspend_request(rq) &&
pm->pm_step == ide_pm_state_start_suspend)
pm->pm_step == IDE_PM_START_SUSPEND)
/* Mark drive blocked when starting the suspend sequence. */
drive->blocked = 1;
drive->dev_flags |= IDE_DFLAG_BLOCKED;
else if (blk_pm_resume_request(rq) &&
pm->pm_step == ide_pm_state_start_resume) {
pm->pm_step == IDE_PM_START_RESUME) {
/*
* The first thing we do on wakeup is to wait for BSY bit to
* go away (with a looong timeout) as a drive on this hwif may
......@@ -857,7 +808,7 @@ static ide_startstop_t start_request (ide_drive_t *drive, struct request *rq)
#endif
startstop = ide_start_power_step(drive, rq);
if (startstop == ide_stopped &&
pm->pm_step == ide_pm_state_completed)
pm->pm_step == IDE_PM_COMPLETED)
ide_complete_pm_request(drive, rq);
return startstop;
} else if (!rq->rq_disk && blk_special_request(rq))
......@@ -895,7 +846,7 @@ void ide_stall_queue (ide_drive_t *drive, unsigned long timeout)
if (timeout > WAIT_WORSTCASE)
timeout = WAIT_WORSTCASE;
drive->sleep = timeout + jiffies;
drive->sleeping = 1;
drive->dev_flags |= IDE_DFLAG_SLEEPING;
}
EXPORT_SYMBOL(ide_stall_queue);
......@@ -935,18 +886,23 @@ static inline ide_drive_t *choose_drive (ide_hwgroup_t *hwgroup)
}
do {
if ((!drive->sleeping || time_after_eq(jiffies, drive->sleep))
&& !elv_queue_empty(drive->queue)) {
if (!best
|| (drive->sleeping && (!best->sleeping || time_before(drive->sleep, best->sleep)))
|| (!best->sleeping && time_before(WAKEUP(drive), WAKEUP(best))))
{
u8 dev_s = !!(drive->dev_flags & IDE_DFLAG_SLEEPING);
u8 best_s = (best && !!(best->dev_flags & IDE_DFLAG_SLEEPING));
if ((dev_s == 0 || time_after_eq(jiffies, drive->sleep)) &&
!elv_queue_empty(drive->queue)) {
if (best == NULL ||
(dev_s && (best_s == 0 || time_before(drive->sleep, best->sleep))) ||
(best_s == 0 && time_before(WAKEUP(drive), WAKEUP(best)))) {
if (!blk_queue_plugged(drive->queue))
best = drive;
}
}
} while ((drive = drive->next) != hwgroup->drive);
if (best && best->nice1 && !best->sleeping && best != hwgroup->drive && best->service_time > WAIT_MIN_SLEEP) {
if (best && (best->dev_flags & IDE_DFLAG_NICE1) &&
(best->dev_flags & IDE_DFLAG_SLEEPING) == 0 &&
best != hwgroup->drive && best->service_time > WAIT_MIN_SLEEP) {
long t = (signed long)(WAKEUP(best) - jiffies);
if (t >= WAIT_MIN_SLEEP) {
/*
......@@ -955,7 +911,7 @@ static inline ide_drive_t *choose_drive (ide_hwgroup_t *hwgroup)
*/
drive = best->next;
do {
if (!drive->sleeping
if ((drive->dev_flags & IDE_DFLAG_SLEEPING) == 0
&& time_before(jiffies - best->service_time, WAKEUP(drive))
&& time_before(WAKEUP(drive), jiffies + t))
{
......@@ -1026,7 +982,9 @@ static void ide_do_request (ide_hwgroup_t *hwgroup, int masked_irq)
hwgroup->rq = NULL;
drive = hwgroup->drive;
do {
if (drive->sleeping && (!sleeping || time_before(drive->sleep, sleep))) {
if ((drive->dev_flags & IDE_DFLAG_SLEEPING) &&
(sleeping == 0 ||
time_before(drive->sleep, sleep))) {
sleeping = 1;
sleep = drive->sleep;
}
......@@ -1075,7 +1033,7 @@ static void ide_do_request (ide_hwgroup_t *hwgroup, int masked_irq)
}
hwgroup->hwif = hwif;
hwgroup->drive = drive;
drive->sleeping = 0;
drive->dev_flags &= ~(IDE_DFLAG_SLEEPING | IDE_DFLAG_PARKED);
drive->service_start = jiffies;
if (blk_queue_plugged(drive->queue)) {
......@@ -1109,7 +1067,9 @@ static void ide_do_request (ide_hwgroup_t *hwgroup, int masked_irq)
* We count how many times we loop here to make sure we service
* all drives in the hwgroup without looping for ever
*/
if (drive->blocked && !blk_pm_request(rq) && !(rq->cmd_flags & REQ_PREEMPT)) {
if ((drive->dev_flags & IDE_DFLAG_BLOCKED) &&
blk_pm_request(rq) == 0 &&
(rq->cmd_flags & REQ_PREEMPT) == 0) {
drive = drive->next ? drive->next : hwgroup->drive;
if (loops++ < 4 && !blk_queue_plugged(drive->queue))
goto again;
......@@ -1182,8 +1142,8 @@ static ide_startstop_t ide_dma_timeout_retry(ide_drive_t *drive, int error)
* a timeout -- we'll reenable after we finish this next request
* (or rather the first chunk of it) in pio.
*/
drive->dev_flags |= IDE_DFLAG_DMA_PIO_RETRY;
drive->retry_pio++;
drive->state = DMA_PIO_RETRY;
ide_dma_off_quietly(drive);
/*
......@@ -1480,23 +1440,16 @@ irqreturn_t ide_intr (int irq, void *dev_id)
del_timer(&hwgroup->timer);
spin_unlock(&ide_lock);
/* Some controllers might set DMA INTR no matter DMA or PIO;
* bmdma status might need to be cleared even for
* PIO interrupts to prevent spurious/lost irq.
*/
if (hwif->ide_dma_clear_irq && !(drive->waiting_for_dma))
/* ide_dma_end() needs bmdma status for error checking.
* So, skip clearing bmdma status here and leave it
* to ide_dma_end() if this is dma interrupt.
*/
hwif->ide_dma_clear_irq(drive);
if (hwif->port_ops && hwif->port_ops->clear_irq)
hwif->port_ops->clear_irq(drive);
if (drive->unmask)
if (drive->dev_flags & IDE_DFLAG_UNMASK)
local_irq_enable_in_hardirq();
/* service this interrupt, may set handler for next interrupt */
startstop = handler(drive);
spin_lock_irq(&ide_lock);
/*
* Note that handler() may have set things up for another
* interrupt to occur soon, but it cannot happen until
......
......@@ -62,7 +62,7 @@ static int ide_get_identity_ioctl(ide_drive_t *drive, unsigned int cmd,
int size = (cmd == HDIO_GET_IDENTITY) ? (ATA_ID_WORDS * 2) : 142;
int rc = 0;
if (drive->id_read == 0) {
if ((drive->dev_flags & IDE_DFLAG_ID_READ) == 0) {
rc = -ENOMSG;
goto out;
}
......@@ -86,8 +86,10 @@ static int ide_get_identity_ioctl(ide_drive_t *drive, unsigned int cmd,
static int ide_get_nice_ioctl(ide_drive_t *drive, unsigned long arg)
{
return put_user((drive->dsc_overlap << IDE_NICE_DSC_OVERLAP) |
(drive->nice1 << IDE_NICE_1), (long __user *)arg);
return put_user((!!(drive->dev_flags & IDE_DFLAG_DSC_OVERLAP)
<< IDE_NICE_DSC_OVERLAP) |
(!!(drive->dev_flags & IDE_DFLAG_NICE1)
<< IDE_NICE_1), (long __user *)arg);
}
static int ide_set_nice_ioctl(ide_drive_t *drive, unsigned long arg)
......@@ -97,11 +99,18 @@ static int ide_set_nice_ioctl(ide_drive_t *drive, unsigned long arg)
if (((arg >> IDE_NICE_DSC_OVERLAP) & 1) &&
(drive->media == ide_disk || drive->media == ide_floppy ||
drive->scsi))
(drive->dev_flags & IDE_DFLAG_SCSI)))
return -EPERM;
drive->dsc_overlap = (arg >> IDE_NICE_DSC_OVERLAP) & 1;
drive->nice1 = (arg >> IDE_NICE_1) & 1;
if ((arg >> IDE_NICE_DSC_OVERLAP) & 1)
drive->dev_flags |= IDE_DFLAG_DSC_OVERLAP;
else
drive->dev_flags &= ~IDE_DFLAG_DSC_OVERLAP;
if ((arg >> IDE_NICE_1) & 1)
drive->dev_flags |= IDE_DFLAG_NICE1;
else
drive->dev_flags &= ~IDE_DFLAG_NICE1;
return 0;
}
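A hedged userspace sketch of how these bits are driven from the other side of the ioctl, assuming the usual <linux/hdreg.h> definitions of HDIO_SET_NICE and IDE_NICE_1; the device path handling and return convention are illustrative only:
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/hdreg.h>

/* enable the "nice1" behaviour toggled by ide_set_nice_ioctl() above;
   typically requires CAP_SYS_ADMIN */
int example_set_nice1(const char *dev)
{
	int fd = open(dev, O_RDONLY | O_NONBLOCK);
	int ret;

	if (fd < 0)
		return -1;
	ret = ioctl(fd, HDIO_SET_NICE, 1UL << IDE_NICE_1);
	close(fd);
	return ret;
}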
......
......@@ -181,7 +181,7 @@ void ide_tf_load(ide_drive_t *drive, ide_task_t *task)
tf_outb(tf->lbah, io_ports->lbah_addr);
if (task->tf_flags & IDE_TFLAG_OUT_DEVICE)
tf_outb((tf->device & HIHI) | drive->select.all,
tf_outb((tf->device & HIHI) | drive->select,
io_ports->device_addr);
}
EXPORT_SYMBOL_GPL(ide_tf_load);
......@@ -647,7 +647,7 @@ u8 eighty_ninty_three (ide_drive_t *drive)
return 1;
no_80w:
if (drive->udma33_warned == 1)
if (drive->dev_flags & IDE_DFLAG_UDMA33_WARNED)
return 0;
printk(KERN_WARNING "%s: %s side 80-wire cable detection failed, "
......@@ -655,7 +655,7 @@ u8 eighty_ninty_three (ide_drive_t *drive)
drive->name,
hwif->cbl == ATA_CBL_PATA80 ? "drive" : "host");
drive->udma33_warned = 1;
drive->dev_flags |= IDE_DFLAG_UDMA33_WARNED;
return 0;
}
......@@ -711,7 +711,7 @@ int ide_driveid_update(ide_drive_t *drive)
kfree(id);
if (drive->using_dma && ide_id_dma_bug(drive))
if ((drive->dev_flags & IDE_DFLAG_USING_DMA) && ide_id_dma_bug(drive))
ide_dma_off(drive);
return 1;
......@@ -790,7 +790,7 @@ int ide_config_drive_speed(ide_drive_t *drive, u8 speed)
skip:
#ifdef CONFIG_BLK_DEV_IDEDMA
if (speed >= XFER_SW_DMA_0 && drive->using_dma)
if (speed >= XFER_SW_DMA_0 && (drive->dev_flags & IDE_DFLAG_USING_DMA))
hwif->dma_ops->dma_host_set(drive, 1);
else if (hwif->dma_ops) /* check if host supports DMA */
ide_dma_off_quietly(drive);
......@@ -940,6 +940,25 @@ static ide_startstop_t atapi_reset_pollfunc (ide_drive_t *drive)
return ide_stopped;
}
static void ide_reset_report_error(ide_hwif_t *hwif, u8 err)
{
static const char *err_master_vals[] =
{ NULL, "passed", "formatter device error",
"sector buffer error", "ECC circuitry error",
"controlling MPU error" };
u8 err_master = err & 0x7f;
printk(KERN_ERR "%s: reset: master: ", hwif->name);
if (err_master && err_master < 6)
printk(KERN_CONT "%s", err_master_vals[err_master]);
else
printk(KERN_CONT "error (0x%02x?)", err);
if (err & 0x80)
printk(KERN_CONT "; slave: failed");
printk(KERN_CONT "\n");
}
/*
* reset_pollfunc() gets invoked to poll the interface for completion every 50ms
* during an ide reset operation. If the drives have not yet responded,
......@@ -975,31 +994,14 @@ static ide_startstop_t reset_pollfunc (ide_drive_t *drive)
drive->failures++;
err = -EIO;
} else {
printk("%s: reset: ", hwif->name);
tmp = ide_read_error(drive);
if (tmp == 1) {
printk("success\n");
printk(KERN_INFO "%s: reset: success\n", hwif->name);
drive->failures = 0;
} else {
ide_reset_report_error(hwif, tmp);
drive->failures++;
printk("master: ");
switch (tmp & 0x7f) {
case 1: printk("passed");
break;
case 2: printk("formatter device error");
break;
case 3: printk("sector buffer error");
break;
case 4: printk("ECC circuitry error");
break;
case 5: printk("controlling MPU error");
break;
default:printk("error (0x%02x?)", tmp);
}
if (tmp & 0x80)
printk("; slave: failed");
printk("\n");
err = -EIO;
}
}
......@@ -1016,9 +1018,14 @@ static void ide_disk_pre_reset(ide_drive_t *drive)
drive->special.all = 0;
drive->special.b.set_geometry = legacy;
drive->special.b.recalibrate = legacy;
drive->mult_count = 0;
if (!drive->keep_settings && !drive->using_dma)
drive->dev_flags &= ~IDE_DFLAG_PARKED;
if ((drive->dev_flags & IDE_DFLAG_KEEP_SETTINGS) == 0 &&
(drive->dev_flags & IDE_DFLAG_USING_DMA) == 0)
drive->mult_req = 0;
if (drive->mult_req != drive->mult_count)
drive->special.b.set_multmode = 1;
}
......@@ -1030,18 +1037,18 @@ static void pre_reset(ide_drive_t *drive)
if (drive->media == ide_disk)
ide_disk_pre_reset(drive);
else
drive->post_reset = 1;
drive->dev_flags |= IDE_DFLAG_POST_RESET;
if (drive->using_dma) {
if (drive->dev_flags & IDE_DFLAG_USING_DMA) {
if (drive->crc_count)
ide_check_dma_crc(drive);
else
ide_dma_off(drive);
}
if (!drive->keep_settings) {
if (!drive->using_dma) {
drive->unmask = 0;
if ((drive->dev_flags & IDE_DFLAG_KEEP_SETTINGS) == 0) {
if ((drive->dev_flags & IDE_DFLAG_USING_DMA) == 0) {
drive->dev_flags &= ~IDE_DFLAG_UNMASK;
drive->io_32bit = 0;
}
return;
......@@ -1073,12 +1080,13 @@ static void pre_reset(ide_drive_t *drive)
static ide_startstop_t do_reset1 (ide_drive_t *drive, int do_not_try_atapi)
{
unsigned int unit;
unsigned long flags;
unsigned long flags, timeout;
ide_hwif_t *hwif;
ide_hwgroup_t *hwgroup;
struct ide_io_ports *io_ports;
const struct ide_tp_ops *tp_ops;
const struct ide_port_ops *port_ops;
DEFINE_WAIT(wait);
spin_lock_irqsave(&ide_lock, flags);
hwif = HWIF(drive);
......@@ -1105,6 +1113,31 @@ static ide_startstop_t do_reset1 (ide_drive_t *drive, int do_not_try_atapi)
return ide_started;
}
/* We must not disturb devices in the IDE_DFLAG_PARKED state. */
do {
unsigned long now;
prepare_to_wait(&ide_park_wq, &wait, TASK_UNINTERRUPTIBLE);
timeout = jiffies;
for (unit = 0; unit < MAX_DRIVES; unit++) {
ide_drive_t *tdrive = &hwif->drives[unit];
if (tdrive->dev_flags & IDE_DFLAG_PRESENT &&
tdrive->dev_flags & IDE_DFLAG_PARKED &&
time_after(tdrive->sleep, timeout))
timeout = tdrive->sleep;
}
now = jiffies;
if (time_before_eq(timeout, now))
break;
spin_unlock_irqrestore(&ide_lock, flags);
timeout = schedule_timeout_uninterruptible(timeout - now);
spin_lock_irqsave(&ide_lock, flags);
} while (timeout);
finish_wait(&ide_park_wq, &wait);
/*
* First, reset any device state data we were maintaining
* for any of the drives on this interface.
......
......@@ -317,7 +317,7 @@ static void ide_dump_sector(ide_drive_t *drive)
{
ide_task_t task;
struct ide_taskfile *tf = &task.tf;
int lba48 = (drive->addressing == 1) ? 1 : 0;
u8 lba48 = !!(drive->dev_flags & IDE_DFLAG_LBA48);
memset(&task, 0, sizeof(task));
if (lba48)
......
......@@ -227,7 +227,7 @@ static int set_xfer_rate (ide_drive_t *drive, int arg)
ide_devset_rw(current_speed, xfer_rate);
ide_devset_rw_field(init_speed, init_speed);
ide_devset_rw_field(nice1, nice1);
ide_devset_rw_flag(nice1, IDE_DFLAG_NICE1);
ide_devset_rw_field(number, dn);
static const struct ide_proc_devset ide_generic_settings[] = {
......@@ -622,9 +622,7 @@ void ide_proc_port_register_devices(ide_hwif_t *hwif)
for (d = 0; d < MAX_DRIVES; d++) {
ide_drive_t *drive = &hwif->drives[d];
if (!drive->present)
continue;
if (drive->proc)
if ((drive->dev_flags & IDE_DFLAG_PRESENT) == 0 || drive->proc)
continue;
drive->proc = proc_mkdir(drive->name, parent);
......
......@@ -131,7 +131,7 @@ static void ali14xx_set_pio_mode(ide_drive_t *drive, const u8 pio)
drive->name, pio, time1, time2, param1, param2, param3, param4);
/* stuff timing parameters into controller registers */
driveNum = (HWIF(drive)->index << 1) + drive->select.b.unit;
driveNum = (drive->hwif->index << 1) + (drive->dn & 1);
spin_lock_irqsave(&ali14xx_lock, flags);
outb_p(regOn, basePort);
outReg(param1, regTab[driveNum].reg1);
......
......@@ -120,7 +120,8 @@ static void ht6560b_selectproc (ide_drive_t *drive)
* Need to enforce prefetch sometimes because otherwise
* it'll hang (hard).
*/
if (drive->media != ide_disk || !drive->present)
if (drive->media != ide_disk ||
(drive->dev_flags & IDE_DFLAG_PRESENT) == 0)
select |= HT_PREFETCH_MODE;
if (select != current_select || timing != current_timing) {
......@@ -249,11 +250,11 @@ static void ht_set_prefetch(ide_drive_t *drive, u8 state)
*/
if (state) {
drive->drive_data |= t; /* enable prefetch mode */
drive->no_unmask = 1;
drive->unmask = 0;
drive->dev_flags |= IDE_DFLAG_NO_UNMASK;
drive->dev_flags &= ~IDE_DFLAG_UNMASK;
} else {
drive->drive_data &= ~t; /* disable prefetch mode */
drive->no_unmask = 0;
drive->dev_flags &= ~IDE_DFLAG_NO_UNMASK;
}
spin_unlock_irqrestore(&ht6560b_lock, flags);
......
......@@ -14,7 +14,7 @@ MODULE_PARM_DESC(probe, "probe for generic IDE chipset with 4 drives/port");
static void ide_4drives_init_dev(ide_drive_t *drive)
{
if (drive->hwif->channel)
drive->select.all ^= 0x20;
drive->select ^= 0x20;
}
static const struct ide_port_ops ide_4drives_port_ops = {
......
......@@ -305,7 +305,7 @@ static void __init qd6580_init_dev(ide_drive_t *drive)
} else
t2 = t1 = hwif->channel ? QD6580_DEF_DATA2 : QD6580_DEF_DATA;
drive->drive_data = drive->select.b.unit ? t2 : t1;
drive->drive_data = (drive->dn & 1) ? t2 : t1;
}
static const struct ide_port_ops qd6500_port_ops = {
......
......@@ -115,7 +115,7 @@ static void aec6260_set_mode(ide_drive_t *drive, const u8 speed)
struct pci_dev *dev = to_pci_dev(hwif->dev);
struct ide_host *host = pci_get_drvdata(dev);
struct chipset_bus_clock_list_entry *bus_clock = host->host_priv;
u8 unit = (drive->select.b.unit & 0x01);
u8 unit = drive->dn & 1;
u8 tmp1 = 0, tmp2 = 0;
u8 ultra = 0, drive_conf = 0, ultra_conf = 0;
unsigned long flags;
......@@ -302,7 +302,7 @@ static const struct pci_device_id aec62xx_pci_tbl[] = {
};
MODULE_DEVICE_TABLE(pci, aec62xx_pci_tbl);
static struct pci_driver driver = {
static struct pci_driver aec62xx_pci_driver = {
.name = "AEC62xx_IDE",
.id_table = aec62xx_pci_tbl,
.probe = aec62xx_init_one,
......@@ -313,12 +313,12 @@ static struct pci_driver driver = {
static int __init aec62xx_ide_init(void)
{
return ide_pci_register_driver(&driver);
return ide_pci_register_driver(&aec62xx_pci_driver);
}
static void __exit aec62xx_ide_exit(void)
{
pci_unregister_driver(&driver);
pci_unregister_driver(&aec62xx_pci_driver);
}
module_init(aec62xx_ide_init);
......
......@@ -145,7 +145,7 @@ static const struct pci_device_id cs5520_pci_tbl[] = {
};
MODULE_DEVICE_TABLE(pci, cs5520_pci_tbl);
static struct pci_driver driver = {
static struct pci_driver cs5520_pci_driver = {
.name = "Cyrix_IDE",
.id_table = cs5520_pci_tbl,
.probe = cs5520_init_one,
......@@ -155,7 +155,7 @@ static struct pci_driver driver = {
static int __init cs5520_ide_init(void)
{
return ide_pci_register_driver(&driver);
return ide_pci_register_driver(&cs5520_pci_driver);
}
module_init(cs5520_ide_init);
......