Merge by hand (conflicts between pending drivers and kfree cleanups)

Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
James Bottomley 2005-11-08 12:50:26 -05:00
commit 383f974950
33 changed files with 1416 additions and 4793 deletions


@@ -133,3 +133,32 @@ hardware and it is important to prevent the kernel from attempting to directly
access these devices too, as if the array controller were merely a SCSI
controller in the same way that we are allowing it to access SCSI tape drives.
SCSI error handling for tape drives and medium changers
-------------------------------------------------------
The linux SCSI mid layer provides an error handling protocol which
kicks into gear whenever a SCSI command fails to complete within a
certain amount of time (which can vary depending on the command).
The cciss driver participates in this protocol to some extent. The
normal protocol is a four step process. First the device is told
to abort the command. If that doesn't work, the device is reset.
If that doesn't work, the SCSI bus is reset. If that doesn't work
the host bus adapter is reset. Because the cciss driver is a block
driver as well as a SCSI driver and only the tape drives and medium
changers are presented to the SCSI mid layer, and unlike more
straightforward SCSI drivers, disk i/o continues through the block
side during the SCSI error recovery process, the cciss driver only
implements the first two of these actions, aborting the command, and
resetting the device. Additionally, most tape drives will not oblige
in aborting commands, and sometimes it appears they will not even
obey a reset command, though in most circumstances they will. In
the case that the command cannot be aborted and the device cannot be
reset, the device will be set offline.
In the event the error handling code is triggered and a tape drive is
successfully reset or the tardy command is successfully aborted, the
tape drive may still not allow i/o to continue until some command
is issued which positions the tape to a known position. Typically you
must rewind the tape (by issuing "mt -f /dev/st0 rewind" for example)
before i/o can proceed again to a tape drive which was reset.


@@ -52,8 +52,6 @@ ppa.txt
	- info on driver for IOmega zip drive
qlogicfas.txt
	- info on driver for QLogic FASxxx based adapters
-qlogicisp.txt
-	- info on driver for QLogic ISP 1020 based adapters
scsi-generic.txt
	- info on the sg driver for generic (non-disk/CD/tape) SCSI devices.
scsi.txt


@@ -11,8 +11,7 @@ Qlogic boards:
 * IQ-PCI-10
 * IQ-PCI-D
-is provided by the qlogicisp.c driver. Check README.qlogicisp for
-details.
+is provided by the qla1280 driver.
Nor does it support the PCI-Basic, which is supported by the
'am53c974' driver.


@@ -1,30 +0,0 @@
Notes for the QLogic ISP1020 PCI SCSI Driver:
This driver works well in practice, but does not support disconnect/
reconnect, which makes using it with tape drives impractical.
It should work for most host adaptors with the ISP1020 chip. The
QLogic Corporation produces several PCI SCSI adapters which should
work:
* IQ-PCI
* IQ-PCI-10
* IQ-PCI-D
This driver may work with boards containing the ISP1020A or ISP1040A
chips, but that has not been tested.
This driver will NOT work with:
* ISA or VL Bus Qlogic cards (they use the 'qlogicfas' driver)
* PCI-basic (it uses the 'am53c974' driver)
Much thanks to QLogic's tech support for providing the latest ISP1020
firmware, and for taking the time to review my code.
Erik Moe
ehm@cris.com
Revised:
Michael A. Griffith
grif@cs.ucr.edu


@@ -83,11 +83,11 @@ with the command.
The timeout handler is scsi_times_out().  When a timeout occurs, this
function
-1. invokes optional hostt->eh_timedout() callback.  Return value can
+1. invokes optional hostt->eh_timed_out() callback.  Return value can
   be one of

   - EH_HANDLED
-	This indicates that eh_timedout() dealt with the timeout.  The
+	This indicates that eh_timed_out() dealt with the timeout.  The
	scmd is passed to __scsi_done() and thus linked into per-cpu
	scsi_done_q.  Normal command completion described in [1-2-1]
	follows.
@@ -105,7 +105,7 @@ function
	command will time out again.

   - EH_NOT_HANDLED
-	This is the same as when eh_timedout() callback doesn't exist.
+	This is the same as when eh_timed_out() callback doesn't exist.
	Step #2 is taken.

2. scsi_eh_scmd_add(scmd, SCSI_EH_CANCEL_CMD) is invoked for the
@@ -142,7 +142,7 @@ are linked on shost->eh_cmd_q.
Note that this does not mean lower layers are quiescent.  If a LLDD
completed a scmd with error status, the LLDD and lower layers are
assumed to forget about the scmd at that point.  However, if a scmd
-has timed out, unless hostt->eh_timedout() made lower layers forget
+has timed out, unless hostt->eh_timed_out() made lower layers forget
about the scmd, which currently no LLDD does, the command is still
active as long as lower layers are concerned and completion could
occur at any time.  Of course, all such completions are ignored as the


@@ -148,6 +148,7 @@ static struct board_type products[] = {
static ctlr_info_t *hba[MAX_CTLR];

static void do_cciss_request(request_queue_t *q);
+static irqreturn_t do_cciss_intr(int irq, void *dev_id, struct pt_regs *regs);
static int cciss_open(struct inode *inode, struct file *filep);
static int cciss_release(struct inode *inode, struct file *filep);
static int cciss_ioctl(struct inode *inode, struct file *filep,
@@ -1583,6 +1584,24 @@ static int fill_cmd(CommandList_struct *c, __u8 cmd, int ctlr, void *buff,
		}
	} else if (cmd_type == TYPE_MSG) {
		switch (cmd) {
		case 0:	/* ABORT message */
			c->Request.CDBLen = 12;
			c->Request.Type.Attribute = ATTR_SIMPLE;
			c->Request.Type.Direction = XFER_WRITE;
			c->Request.Timeout = 0;
			c->Request.CDB[0] = cmd; /* abort */
			c->Request.CDB[1] = 0;   /* abort a command */
			/* buff contains the tag of the command to abort */
			memcpy(&c->Request.CDB[4], buff, 8);
			break;
		case 1:	/* RESET message */
			c->Request.CDBLen = 12;
			c->Request.Type.Attribute = ATTR_SIMPLE;
			c->Request.Type.Direction = XFER_WRITE;
			c->Request.Timeout = 0;
			memset(&c->Request.CDB[0], 0, sizeof(c->Request.CDB));
			c->Request.CDB[0] = cmd;  /* reset */
			c->Request.CDB[1] = 0x04; /* reset a LUN */
			break;
		case 3:	/* No-Op message */
			c->Request.CDBLen = 1;
			c->Request.Type.Attribute = ATTR_SIMPLE;
@@ -1869,6 +1888,52 @@ static unsigned long pollcomplete(int ctlr)
	/* Invalid address to tell caller we ran out of time */
	return 1;
}
static int add_sendcmd_reject(__u8 cmd, int ctlr, unsigned long complete)
{
	/* We get in here if sendcmd() is polling for completions
	   and gets some command back that it wasn't expecting --
	   something other than that which it just sent down.
	   Ordinarily, that shouldn't happen, but it can happen when
	   the scsi tape stuff gets into error handling mode, and
	   starts using sendcmd() to try to abort commands and
	   reset tape drives.  In that case, sendcmd may pick up
	   completions of commands that were sent to logical drives
	   through the block i/o system, or cciss ioctls completing, etc.
	   In that case, we need to save those completions for later
	   processing by the interrupt handler.
	*/
#ifdef CONFIG_CISS_SCSI_TAPE
	struct sendcmd_reject_list *srl = &hba[ctlr]->scsi_rejects;

	/* If it's not the scsi tape stuff doing error handling, (abort */
	/* or reset) then we don't expect anything weird. */
	if (cmd != CCISS_RESET_MSG && cmd != CCISS_ABORT_MSG) {
#endif
		printk(KERN_WARNING "cciss cciss%d: SendCmd "
			"Invalid command list address returned! (%lx)\n",
			ctlr, complete);
		/* not much we can do. */
#ifdef CONFIG_CISS_SCSI_TAPE
		return 1;
	}

	/* We've sent down an abort or reset, but something else
	   has completed */
	if (srl->ncompletions >= (NR_CMDS + 2)) {
		/* Uh oh.  No room to save it for later... */
		printk(KERN_WARNING "cciss%d: Sendcmd: Invalid command addr, "
			"reject list overflow, command lost!\n", ctlr);
		return 1;
	}
	/* Save it for later */
	srl->complete[srl->ncompletions] = complete;
	srl->ncompletions++;
#endif
	return 0;
}
/*
 * Send a command to the controller, and wait for it to complete.
 * Only used at init time.
 */
@@ -1891,7 +1956,7 @@ static int sendcmd(
	unsigned long complete;
	ctlr_info_t *info_p = hba[ctlr];
	u64bit buff_dma_handle;
-	int status;
+	int status, done = 0;

	if ((c = cmd_alloc(info_p, 1)) == NULL) {
		printk(KERN_WARNING "cciss: unable to get memory");
@@ -1913,7 +1978,9 @@ resend_cmd1:
	info_p->access.set_intr_mask(info_p, CCISS_INTR_OFF);
	/* Make sure there is room in the command FIFO */
-	/* Actually it should be completely empty at this time. */
+	/* Actually it should be completely empty at this time */
+	/* unless we are in here doing error handling for the scsi */
+	/* tape side of the driver. */
	for (i = 200000; i > 0; i--)
	{
		/* if fifo isn't full go */
@@ -1930,13 +1997,25 @@ resend_cmd1:
	 * Send the cmd
	 */
	info_p->access.submit_command(info_p, c);
-	complete = pollcomplete(ctlr);
+	done = 0;
+	do {
+		complete = pollcomplete(ctlr);
#ifdef CCISS_DEBUG
		printk(KERN_DEBUG "cciss: command completed\n");
#endif /* CCISS_DEBUG */
-	if (complete != 1) {
+		if (complete == 1) {
+			printk(KERN_WARNING
+				"cciss cciss%d: SendCmd Timeout out, "
+				"No command list address returned!\n",
+				ctlr);
+			status = IO_ERROR;
+			done = 1;
+			break;
+		}

+		/* This will need to change for direct lookup completions */
		if ((complete & CISS_ERROR_BIT)
		    && (complete & ~CISS_ERROR_BIT) == c->busaddr)
		{
@@ -1976,6 +2055,10 @@ resend_cmd1:
					status = IO_ERROR;
					goto cleanup1;
				}
+			} else if (c->err_info->CommandStatus == CMD_UNABORTABLE) {
+				printk(KERN_WARNING "cciss%d: command could not be aborted.\n", ctlr);
+				status = IO_ERROR;
+				goto cleanup1;
			}
			printk(KERN_WARNING "ciss ciss%d: sendcmd"
				" Error %x \n", ctlr,
@@ -1990,20 +2073,15 @@ resend_cmd1:
				goto cleanup1;
			}
		}
+		/* This will need changing for direct lookup completions */
		if (complete != c->busaddr) {
-			printk( KERN_WARNING "cciss cciss%d: SendCmd "
-				"Invalid command list address returned! (%lx)\n",
-				ctlr, complete);
-			status = IO_ERROR;
-			goto cleanup1;
-		}
-	} else {
-		printk( KERN_WARNING
-			"cciss cciss%d: SendCmd Timeout out, "
-			"No command list address returned!\n",
-			ctlr);
-		status = IO_ERROR;
-	}
+			if (add_sendcmd_reject(cmd, ctlr, complete) != 0) {
+				BUG(); /* we are pretty much hosed if we get here. */
+			}
+			continue;
+		} else
+			done = 1;
+	} while (!done);
cleanup1:
	/* unlock the data buffer from DMA */
@@ -2011,6 +2089,11 @@ cleanup1:
	buff_dma_handle.val32.upper = c->SG[0].Addr.upper;
	pci_unmap_single(info_p->pdev, (dma_addr_t) buff_dma_handle.val,
			 c->SG[0].Len, PCI_DMA_BIDIRECTIONAL);
#ifdef CONFIG_CISS_SCSI_TAPE
	/* if we saved some commands for later, process them now. */
	if (info_p->scsi_rejects.ncompletions > 0)
		do_cciss_intr(0, info_p, NULL);
#endif
	cmd_free(info_p, c, 1);
	return (status);
}
@@ -2335,6 +2418,48 @@ startio:
	start_io(h);
}
static inline unsigned long get_next_completion(ctlr_info_t *h)
{
#ifdef CONFIG_CISS_SCSI_TAPE
	/* Any rejects from sendcmd() lying around? Process them first */
	if (h->scsi_rejects.ncompletions == 0)
		return h->access.command_completed(h);
	else {
		struct sendcmd_reject_list *srl;
		int n;
		srl = &h->scsi_rejects;
		n = --srl->ncompletions;
		/* printk("cciss%d: processing saved reject\n", h->ctlr); */
		printk("p");
		return srl->complete[n];
	}
#else
	return h->access.command_completed(h);
#endif
}

static inline int interrupt_pending(ctlr_info_t *h)
{
#ifdef CONFIG_CISS_SCSI_TAPE
	return (h->access.intr_pending(h)
		|| (h->scsi_rejects.ncompletions > 0));
#else
	return h->access.intr_pending(h);
#endif
}

static inline long interrupt_not_for_us(ctlr_info_t *h)
{
#ifdef CONFIG_CISS_SCSI_TAPE
	return (((h->access.intr_pending(h) == 0) ||
		 (h->interrupts_enabled == 0))
		&& (h->scsi_rejects.ncompletions == 0));
#else
	return (((h->access.intr_pending(h) == 0) ||
		 (h->interrupts_enabled == 0)));
#endif
}
static irqreturn_t do_cciss_intr(int irq, void *dev_id, struct pt_regs *regs)
{
	ctlr_info_t *h = dev_id;
@@ -2344,19 +2469,15 @@ static irqreturn_t do_cciss_intr(int irq, void *dev_id, struct pt_regs *regs)
	int j;
	int start_queue = h->next_to_run;

-	/* Is this interrupt for us? */
-	if (( h->access.intr_pending(h) == 0) || (h->interrupts_enabled == 0))
+	if (interrupt_not_for_us(h))
		return IRQ_NONE;

	/*
	 * If there are completed commands in the completion queue,
	 * we had better do something about it.
	 */
	spin_lock_irqsave(CCISS_LOCK(h->ctlr), flags);
-	while( h->access.intr_pending(h))
-	{
-		while((a = h->access.command_completed(h)) != FIFO_EMPTY)
-		{
+	while (interrupt_pending(h)) {
+		while((a = get_next_completion(h)) != FIFO_EMPTY) {
			a1 = a;
			if ((a & 0x04)) {
				a2 = (a >> 3);
@@ -2963,7 +3084,15 @@ static int __devinit cciss_init_one(struct pci_dev *pdev,
		printk( KERN_ERR "cciss: out of memory");
		goto clean4;
	}
#ifdef CONFIG_CISS_SCSI_TAPE
	hba[i]->scsi_rejects.complete =
		kmalloc(sizeof(hba[i]->scsi_rejects.complete[0]) *
			(NR_CMDS + 5), GFP_KERNEL);
	if (hba[i]->scsi_rejects.complete == NULL) {
		printk( KERN_ERR "cciss: out of memory");
		goto clean4;
	}
#endif
	spin_lock_init(&hba[i]->lock);

	/* Initialize the pdev driver private data.
@@ -3031,6 +3160,10 @@ static int __devinit cciss_init_one(struct pci_dev *pdev,
	return(1);

clean4:
#ifdef CONFIG_CISS_SCSI_TAPE
	if(hba[i]->scsi_rejects.complete)
		kfree(hba[i]->scsi_rejects.complete);
#endif
	kfree(hba[i]->cmd_pool_bits);
	if(hba[i]->cmd_pool)
		pci_free_consistent(hba[i]->pdev,
@@ -3103,6 +3236,9 @@ static void __devexit cciss_remove_one (struct pci_dev *pdev)
	pci_free_consistent(hba[i]->pdev, NR_CMDS * sizeof( ErrorInfo_struct),
		hba[i]->errinfo_pool, hba[i]->errinfo_pool_dhandle);
	kfree(hba[i]->cmd_pool_bits);
#ifdef CONFIG_CISS_SCSI_TAPE
	kfree(hba[i]->scsi_rejects.complete);
#endif
	release_io_mem(hba[i]);
	free_hba(i);
}


@@ -44,6 +44,14 @@ typedef struct _drive_info_struct
	 */
} drive_info_struct;
#ifdef CONFIG_CISS_SCSI_TAPE

struct sendcmd_reject_list {
	int ncompletions;
	unsigned long *complete; /* array of NR_CMDS tags */
};

#endif
struct ctlr_info
{
	int ctlr;
@@ -100,6 +108,9 @@ struct ctlr_info
	struct gendisk *gendisk[NWD];
#ifdef CONFIG_CISS_SCSI_TAPE
	void *scsi_ctlr; /* ptr to structure containing scsi related stuff */
	/* list of block side commands the scsi error handling sucked up */
	/* and saved for later processing */
	struct sendcmd_reject_list scsi_rejects;
#endif
	unsigned char alive;
};


@@ -42,6 +42,9 @@
#include "cciss_scsi.h"

+#define CCISS_ABORT_MSG 0x00
+#define CCISS_RESET_MSG 0x01

/* some prototypes... */
static int sendcmd(
	__u8 cmd,
@@ -67,6 +70,8 @@ static int cciss_scsi_proc_info(
static int cciss_scsi_queue_command (struct scsi_cmnd *cmd,
		void (* done)(struct scsi_cmnd *));
+static int cciss_eh_device_reset_handler(struct scsi_cmnd *);
+static int cciss_eh_abort_handler(struct scsi_cmnd *);

static struct cciss_scsi_hba_t ccissscsi[MAX_CTLR] = {
	{ .name = "cciss0", .ndevices = 0 },
@@ -90,6 +95,9 @@ static struct scsi_host_template cciss_driver_template = {
	.sg_tablesize		= MAXSGENTRIES,
	.cmd_per_lun		= 1,
	.use_clustering		= DISABLE_CLUSTERING,
+	/* Can't have eh_bus_reset_handler or eh_host_reset_handler for cciss */
+	.eh_device_reset_handler= cciss_eh_device_reset_handler,
+	.eh_abort_handler	= cciss_eh_abort_handler,
};

#pragma pack(1)
@@ -247,7 +255,7 @@ scsi_cmd_stack_free(int ctlr)
#define DEVICETYPE(n) (n<0 || n>MAX_SCSI_DEVICE_CODE) ? \
	"Unknown" : scsi_device_types[n]

-#if 0
+#if 1
static int xmargin=8;
static int amargin=60;
@@ -1448,6 +1456,78 @@ cciss_proc_tape_report(int ctlr, unsigned char *buffer, off_t *pos, off_t *len)
	*pos += size; *len += size;
}
/* Need at least one of these error handlers to keep ../scsi/hosts.c from
 * complaining.  Doing a host- or bus-reset can't do anything good here.
 * Despite what it might say in scsi_error.c, there may well be commands
 * on the controller, as the cciss driver registers twice, once as a block
 * device for the logical drives, and once as a scsi device, for any tape
 * drives.  So we know there are no commands out on the tape drives, but we
 * don't know there are no commands on the controller, and it is likely
 * that there probably are, as the cciss block device is most commonly used
 * as a boot device (embedded controller on HP/Compaq systems.)
 */

static int cciss_eh_device_reset_handler(struct scsi_cmnd *scsicmd)
{
	int rc;
	CommandList_struct *cmd_in_trouble;
	ctlr_info_t **c;
	int ctlr;

	/* find the controller to which the command to be aborted was sent */
	c = (ctlr_info_t **) &scsicmd->device->host->hostdata[0];
	if (c == NULL) /* paranoia */
		return FAILED;
	ctlr = (*c)->ctlr;
	printk(KERN_WARNING "cciss%d: resetting tape drive or medium changer.\n", ctlr);

	/* find the command that's giving us trouble */
	cmd_in_trouble = (CommandList_struct *) scsicmd->host_scribble;
	if (cmd_in_trouble == NULL) { /* paranoia */
		return FAILED;
	}
	/* send a reset to the SCSI LUN which the command was sent to */
	rc = sendcmd(CCISS_RESET_MSG, ctlr, NULL, 0, 2, 0, 0,
		(unsigned char *) &cmd_in_trouble->Header.LUN.LunAddrBytes[0],
		TYPE_MSG);
	/* sendcmd turned off interrupts on the board, turn 'em back on. */
	(*c)->access.set_intr_mask(*c, CCISS_INTR_ON);
	if (rc == 0)
		return SUCCESS;
	printk(KERN_WARNING "cciss%d: resetting device failed.\n", ctlr);
	return FAILED;
}

static int cciss_eh_abort_handler(struct scsi_cmnd *scsicmd)
{
	int rc;
	CommandList_struct *cmd_to_abort;
	ctlr_info_t **c;
	int ctlr;

	/* find the controller to which the command to be aborted was sent */
	c = (ctlr_info_t **) &scsicmd->device->host->hostdata[0];
	if (c == NULL) /* paranoia */
		return FAILED;
	ctlr = (*c)->ctlr;
	printk(KERN_WARNING "cciss%d: aborting tardy SCSI cmd\n", ctlr);

	/* find the command to be aborted */
	cmd_to_abort = (CommandList_struct *) scsicmd->host_scribble;
	if (cmd_to_abort == NULL) /* paranoia */
		return FAILED;
	rc = sendcmd(CCISS_ABORT_MSG, ctlr, &cmd_to_abort->Header.Tag,
		0, 2, 0, 0,
		(unsigned char *) &cmd_to_abort->Header.LUN.LunAddrBytes[0],
		TYPE_MSG);
	/* sendcmd turned off interrupts on the board, turn 'em back on. */
	(*c)->access.set_intr_mask(*c, CCISS_INTR_ON);
	if (rc == 0)
		return SUCCESS;
	return FAILED;
}
#else /* no CONFIG_CISS_SCSI_TAPE */

/* If no tape support, then these become defined out of existence */


@@ -1295,27 +1295,6 @@ config SCSI_QLOGIC_FAS
	  To compile this driver as a module, choose M here: the
	  module will be called qlogicfas.
config SCSI_QLOGIC_ISP
	tristate "Qlogic ISP SCSI support (old driver)"
	depends on PCI && SCSI && BROKEN
	---help---
	  This driver works for all QLogic PCI SCSI host adapters (IQ-PCI,
	  IQ-PCI-10, IQ_PCI-D) except for the PCI-basic card.  (This latter
	  card is supported by the "AM53/79C974 PCI SCSI" driver.)

	  If you say Y here, make sure to choose "BIOS" at the question "PCI
	  access mode".

	  Please read the file <file:Documentation/scsi/qlogicisp.txt>.  You
	  should also read the SCSI-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>.

	  To compile this driver as a module, choose M here: the
	  module will be called qlogicisp.

	  These days the hardware is also supported by the more modern qla1280
	  driver.  In doubt use that one instead of qlogicisp.
config SCSI_QLOGIC_FC
	tristate "Qlogic ISP FC SCSI support"
	depends on PCI && SCSI
@@ -1342,14 +1321,6 @@ config SCSI_QLOGIC_1280
	  To compile this driver as a module, choose M here: the
	  module will be called qla1280.
config SCSI_QLOGIC_1280_1040
	bool "Qlogic QLA 1020/1040 SCSI support"
	depends on SCSI_QLOGIC_1280 && SCSI_QLOGIC_ISP!=y
	help
	  Say Y here if you have a QLogic ISP1020/1040 SCSI host adapter and
	  do not want to use the old driver.  This option enables support in
	  the qla1280 driver for those host adapters.
config SCSI_QLOGICPTI
	tristate "PTI Qlogic, ISP Driver"
	depends on SBUS && SCSI


@@ -78,7 +78,6 @@ obj-$(CONFIG_SCSI_NCR_Q720)	+= NCR_Q720_mod.o
obj-$(CONFIG_SCSI_SYM53C416)	+= sym53c416.o
obj-$(CONFIG_SCSI_QLOGIC_FAS)	+= qlogicfas408.o qlogicfas.o
obj-$(CONFIG_PCMCIA_QLOGIC)	+= qlogicfas408.o
-obj-$(CONFIG_SCSI_QLOGIC_ISP)	+= qlogicisp.o
obj-$(CONFIG_SCSI_QLOGIC_FC)	+= qlogicfc.o
obj-$(CONFIG_SCSI_QLOGIC_1280)	+= qla1280.o
obj-$(CONFIG_SCSI_QLA2XXX)	+= qla2xxx/


@@ -436,29 +436,20 @@ ahd_linux_queue(struct scsi_cmnd * cmd, void (*scsi_done) (struct scsi_cmnd *))
{
	struct ahd_softc *ahd;
	struct ahd_linux_device *dev = scsi_transport_device_data(cmd->device);
+	int rtn = SCSI_MLQUEUE_HOST_BUSY;
+	unsigned long flags;

	ahd = *(struct ahd_softc **)cmd->device->host->hostdata;

-	/*
-	 * Close the race of a command that was in the process of
-	 * being queued to us just as our simq was frozen.  Let
-	 * DV commands through so long as we are only frozen to
-	 * perform DV.
-	 */
-	if (ahd->platform_data->qfrozen != 0) {
-		printf("%s: queue frozen\n", ahd_name(ahd));
-		return SCSI_MLQUEUE_HOST_BUSY;
-	}
-
-	/*
-	 * Save the callback on completion function.
-	 */
-	cmd->scsi_done = scsi_done;
-	cmd->result = CAM_REQ_INPROG << 16;
-	return ahd_linux_run_command(ahd, dev, cmd);
+	ahd_lock(ahd, &flags);
+	if (ahd->platform_data->qfrozen == 0) {
+		cmd->scsi_done = scsi_done;
+		cmd->result = CAM_REQ_INPROG << 16;
+		rtn = ahd_linux_run_command(ahd, dev, cmd);
+	}
+	ahd_unlock(ahd, &flags);
+
+	return rtn;
}
static inline struct scsi_target **
@@ -1081,7 +1072,6 @@ ahd_linux_register_host(struct ahd_softc *ahd, struct scsi_host_template *templa
	*((struct ahd_softc **)host->hostdata) = ahd;
	ahd_lock(ahd, &s);
-	scsi_assign_lock(host, &ahd->platform_data->spin_lock);
	ahd->platform_data->host = host;
	host->can_queue = AHD_MAX_QUEUE;
	host->cmd_per_lun = 2;
@@ -2062,6 +2052,7 @@ ahd_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag)
	int wait;
	int disconnected;
	ahd_mode_state saved_modes;
+	unsigned long flags;

	pending_scb = NULL;
	paused = FALSE;
@@ -2077,7 +2068,7 @@ ahd_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag)
		printf(" 0x%x", cmd->cmnd[cdb_byte]);
	printf("\n");

-	spin_lock_irq(&ahd->platform_data->spin_lock);
+	ahd_lock(ahd, &flags);

	/*
	 * First determine if we currently own this command.
@@ -2291,7 +2282,8 @@ done:
		int ret;

		ahd->platform_data->flags |= AHD_SCB_UP_EH_SEM;
-		spin_unlock_irq(&ahd->platform_data->spin_lock);
+		ahd_unlock(ahd, &flags);
		init_timer(&timer);
		timer.data = (u_long)ahd;
		timer.expires = jiffies + (5 * HZ);
@@ -2305,9 +2297,8 @@ done:
			printf("Timer Expired\n");
			retval = FAILED;
		}
-		spin_lock_irq(&ahd->platform_data->spin_lock);
	}
-	spin_unlock_irq(&ahd->platform_data->spin_lock);
+	ahd_unlock(ahd, &flags);
	return (retval);
}


@@ -476,26 +476,20 @@ ahc_linux_queue(struct scsi_cmnd * cmd, void (*scsi_done) (struct scsi_cmnd *))
{
	struct ahc_softc *ahc;
	struct ahc_linux_device *dev = scsi_transport_device_data(cmd->device);
+	int rtn = SCSI_MLQUEUE_HOST_BUSY;
+	unsigned long flags;

	ahc = *(struct ahc_softc **)cmd->device->host->hostdata;

-	/*
-	 * Save the callback on completion function.
-	 */
-	cmd->scsi_done = scsi_done;
-
-	/*
-	 * Close the race of a command that was in the process of
-	 * being queued to us just as our simq was frozen.  Let
-	 * DV commands through so long as we are only frozen to
-	 * perform DV.
-	 */
-	if (ahc->platform_data->qfrozen != 0)
-		return SCSI_MLQUEUE_HOST_BUSY;
-
-	cmd->result = CAM_REQ_INPROG << 16;
-	return ahc_linux_run_command(ahc, dev, cmd);
+	ahc_lock(ahc, &flags);
+	if (ahc->platform_data->qfrozen == 0) {
+		cmd->scsi_done = scsi_done;
+		cmd->result = CAM_REQ_INPROG << 16;
+		rtn = ahc_linux_run_command(ahc, dev, cmd);
+	}
+	ahc_unlock(ahc, &flags);
+
+	return rtn;
}
static inline struct scsi_target **
@@ -1079,7 +1073,6 @@ ahc_linux_register_host(struct ahc_softc *ahc, struct scsi_host_template *templa
	*((struct ahc_softc **)host->hostdata) = ahc;
	ahc_lock(ahc, &s);
-	scsi_assign_lock(host, &ahc->platform_data->spin_lock);
	ahc->platform_data->host = host;
	host->can_queue = AHC_MAX_QUEUE;
	host->cmd_per_lun = 2;
@@ -2111,6 +2104,7 @@ ahc_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag)
	int paused;
	int wait;
	int disconnected;
+	unsigned long flags;

	pending_scb = NULL;
	paused = FALSE;
@@ -2125,7 +2119,7 @@ ahc_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag)
		printf(" 0x%x", cmd->cmnd[cdb_byte]);
	printf("\n");

-	spin_lock_irq(&ahc->platform_data->spin_lock);
+	ahc_lock(ahc, &flags);

	/*
	 * First determine if we currently own this command.
@@ -2357,7 +2351,8 @@ done:
		int ret;

		ahc->platform_data->flags |= AHC_UP_EH_SEMAPHORE;
-		spin_unlock_irq(&ahc->platform_data->spin_lock);
+		ahc_unlock(ahc, &flags);
		init_timer(&timer);
		timer.data = (u_long)ahc;
		timer.expires = jiffies + (5 * HZ);
@@ -2371,10 +2366,8 @@ done:
			printf("Timer Expired\n");
			retval = FAILED;
		}
-		spin_lock_irq(&ahc->platform_data->spin_lock);
-	}
-	spin_unlock_irq(&ahc->platform_data->spin_lock);
+	} else
+		ahc_unlock(ahc, &flags);
	return (retval);
}


@@ -395,6 +395,7 @@ static int idescsi_end_request (ide_drive_t *drive, int uptodate, int nrsecs)
	int log = test_bit(IDESCSI_LOG_CMD, &scsi->log);
	struct Scsi_Host *host;
	u8 *scsi_buf;
+	int errors = rq->errors;
	unsigned long flags;

	if (!(rq->flags & (REQ_SPECIAL|REQ_SENSE))) {
@@ -421,11 +422,11 @@ static int idescsi_end_request (ide_drive_t *drive, int uptodate, int nrsecs)
		printk (KERN_WARNING "ide-scsi: %s: timed out for %lu\n",
				drive->name, pc->scsi_cmd->serial_number);
		pc->scsi_cmd->result = DID_TIME_OUT << 16;
-	} else if (rq->errors >= ERROR_MAX) {
+	} else if (errors >= ERROR_MAX) {
		pc->scsi_cmd->result = DID_ERROR << 16;
		if (log)
			printk ("ide-scsi: %s: I/O error for %lu\n", drive->name, pc->scsi_cmd->serial_number);
-	} else if (rq->errors) {
+	} else if (errors) {
		if (log)
			printk ("ide-scsi: %s: check condition for %lu\n", drive->name, pc->scsi_cmd->serial_number);
		if (!idescsi_check_condition(drive, rq))

File diff suppressed because it is too large

View File

@@ -36,23 +36,8 @@
 /*
  * Literals
  */
-#define IPR_DRIVER_VERSION "2.0.14"
-#define IPR_DRIVER_DATE "(May 2, 2005)"
-
-/*
- * IPR_DBG_TRACE: Setting this to 1 will turn on some general function tracing
- *	resulting in a bunch of extra debugging printks to the console
- *
- * IPR_DEBUG:	Setting this to 1 will turn on some error path tracing.
- *		Enables the ipr_trace macro.
- */
-#ifdef IPR_DEBUG_ALL
-#define IPR_DEBUG		1
-#define IPR_DBG_TRACE		1
-#else
-#define IPR_DEBUG		0
-#define IPR_DBG_TRACE		0
-#endif
+#define IPR_DRIVER_VERSION "2.1.0"
+#define IPR_DRIVER_DATE "(October 31, 2005)"
 
 /*
  * IPR_MAX_CMD_PER_LUN: This defines the maximum number of outstanding
@@ -76,6 +61,10 @@
 #define IPR_SUBS_DEV_ID_571A	0x02C0
 #define IPR_SUBS_DEV_ID_571B	0x02BE
 #define IPR_SUBS_DEV_ID_571E	0x02BF
+#define IPR_SUBS_DEV_ID_571F	0x02D5
+#define IPR_SUBS_DEV_ID_572A	0x02C1
+#define IPR_SUBS_DEV_ID_572B	0x02C2
+#define IPR_SUBS_DEV_ID_575B	0x030D
 
 #define IPR_NAME "ipr"
@@ -95,7 +84,10 @@
 #define IPR_IOASC_HW_DEV_BUS_STATUS	0x04448500
 #define IPR_IOASC_IOASC_MASK		0xFFFFFF00
 #define IPR_IOASC_SCSI_STATUS_MASK	0x000000FF
+#define IPR_IOASC_IR_INVALID_REQ_TYPE_OR_PKT	0x05240000
 #define IPR_IOASC_IR_RESOURCE_HANDLE	0x05250000
+#define IPR_IOASC_IR_NO_CMDS_TO_2ND_IOA	0x05258100
+#define IPR_IOASA_IR_DUAL_IOA_DISABLED	0x052C8000
 #define IPR_IOASC_BUS_WAS_RESET		0x06290000
 #define IPR_IOASC_BUS_WAS_RESET_BY_OTHER	0x06298000
 #define IPR_IOASC_ABORTED_CMD_TERM_BY_HOST	0x0B5A0000
@@ -107,14 +99,14 @@
 #define IPR_NUM_LOG_HCAMS		2
 #define IPR_NUM_CFG_CHG_HCAMS		2
 #define IPR_NUM_HCAMS	(IPR_NUM_LOG_HCAMS + IPR_NUM_CFG_CHG_HCAMS)
-#define IPR_MAX_NUM_TARGETS_PER_BUS	0x10
+#define IPR_MAX_NUM_TARGETS_PER_BUS	256
 #define IPR_MAX_NUM_LUNS_PER_TARGET	256
 #define IPR_MAX_NUM_VSET_LUNS_PER_TARGET	8
 #define IPR_VSET_BUS			0xff
 #define IPR_IOA_BUS			0xff
 #define IPR_IOA_TARGET			0xff
 #define IPR_IOA_LUN			0xff
-#define IPR_MAX_NUM_BUSES		4
+#define IPR_MAX_NUM_BUSES		8
 #define IPR_MAX_BUS_TO_SCAN		IPR_MAX_NUM_BUSES
 
 #define IPR_NUM_RESET_RELOAD_RETRIES	3
@@ -205,6 +197,7 @@
 #define IPR_SDT_FMT2_EXP_ROM_SEL	0x8
 #define IPR_FMT2_SDT_READY_TO_USE	0xC4D4E3F2
 #define IPR_DOORBELL			0x82800000
+#define IPR_RUNTIME_RESET		0x40000000
 
 #define IPR_PCII_IOA_TRANS_TO_OPER	(0x80000000 >> 0)
 #define IPR_PCII_IOARCB_XFER_FAILED	(0x80000000 >> 3)
@@ -261,6 +254,16 @@ struct ipr_std_inq_vpids {
 	u8 product_id[IPR_PROD_ID_LEN];
 }__attribute__((packed));
 
+struct ipr_vpd {
+	struct ipr_std_inq_vpids vpids;
+	u8 sn[IPR_SERIAL_NUM_LEN];
+}__attribute__((packed));
+
+struct ipr_ext_vpd {
+	struct ipr_vpd vpd;
+	__be32 wwid[2];
+}__attribute__((packed));
+
 struct ipr_std_inq_data {
 	u8 peri_qual_dev_type;
 #define IPR_STD_INQ_PERI_QUAL(peri)	((peri) >> 5)
@@ -304,6 +307,10 @@ struct ipr_config_table_entry {
 #define IPR_SUBTYPE_GENERIC_SCSI	1
 #define IPR_SUBTYPE_VOLUME_SET		2
 
+#define IPR_QUEUEING_MODEL(res)	((((res)->cfgte.flags) & 0x70) >> 4)
+#define IPR_QUEUE_FROZEN_MODEL	0
+#define IPR_QUEUE_NACA_MODEL	1
+
 	struct ipr_res_addr res_addr;
 	__be32 res_handle;
 	__be32 reserved4[2];
@@ -410,23 +417,26 @@ struct ipr_ioadl_desc {
 struct ipr_ioasa_vset {
 	__be32 failing_lba_hi;
 	__be32 failing_lba_lo;
-	__be32 ioa_data[22];
+	__be32 reserved;
 }__attribute__((packed, aligned (4)));
 
 struct ipr_ioasa_af_dasd {
 	__be32 failing_lba;
+	__be32 reserved[2];
 }__attribute__((packed, aligned (4)));
 
 struct ipr_ioasa_gpdd {
 	u8 end_state;
 	u8 bus_phase;
 	__be16 reserved;
-	__be32 ioa_data[23];
+	__be32 ioa_data[2];
 }__attribute__((packed, aligned (4)));
 
-struct ipr_ioasa_raw {
-	__be32 ioa_data[24];
-}__attribute__((packed, aligned (4)));
+struct ipr_auto_sense {
+	__be16 auto_sense_len;
+	__be16 ioa_data_len;
+	__be32 data[SCSI_SENSE_BUFFERSIZE/sizeof(__be32)];
+};
 
 struct ipr_ioasa {
 	__be32 ioasc;
@@ -453,6 +463,8 @@ struct ipr_ioasa {
 	__be32 fd_res_handle;
 
 	__be32 ioasc_specific;	/* status code specific field */
+#define IPR_ADDITIONAL_STATUS_FMT		0x80000000
+#define IPR_AUTOSENSE_VALID			0x40000000
 #define IPR_IOASC_SPECIFIC_MASK		0x00ffffff
 #define IPR_FIELD_POINTER_VALID		(0x80000000 >> 8)
 #define IPR_FIELD_POINTER_MASK		0x0000ffff
@@ -461,8 +473,9 @@ struct ipr_ioasa {
 		struct ipr_ioasa_vset vset;
 		struct ipr_ioasa_af_dasd dasd;
 		struct ipr_ioasa_gpdd gpdd;
-		struct ipr_ioasa_raw raw;
 	} u;
+
+	struct ipr_auto_sense auto_sense;
 }__attribute__((packed, aligned (4)));
 
 struct ipr_mode_parm_hdr {
@@ -536,28 +549,49 @@ struct ipr_inquiry_page3 {
 	u8 patch_number[4];
 }__attribute__((packed));
 
+#define IPR_INQUIRY_PAGE0_ENTRIES 20
+struct ipr_inquiry_page0 {
+	u8 peri_qual_dev_type;
+	u8 page_code;
+	u8 reserved1;
+	u8 len;
+	u8 page[IPR_INQUIRY_PAGE0_ENTRIES];
+}__attribute__((packed));
+
 struct ipr_hostrcb_device_data_entry {
-	struct ipr_std_inq_vpids dev_vpids;
-	u8 dev_sn[IPR_SERIAL_NUM_LEN];
+	struct ipr_vpd vpd;
 	struct ipr_res_addr dev_res_addr;
-	struct ipr_std_inq_vpids new_dev_vpids;
-	u8 new_dev_sn[IPR_SERIAL_NUM_LEN];
-	struct ipr_std_inq_vpids ioa_last_with_dev_vpids;
-	u8 ioa_last_with_dev_sn[IPR_SERIAL_NUM_LEN];
-	struct ipr_std_inq_vpids cfc_last_with_dev_vpids;
-	u8 cfc_last_with_dev_sn[IPR_SERIAL_NUM_LEN];
+	struct ipr_vpd new_vpd;
+	struct ipr_vpd ioa_last_with_dev_vpd;
+	struct ipr_vpd cfc_last_with_dev_vpd;
 	__be32 ioa_data[5];
 }__attribute__((packed, aligned (4)));
 
+struct ipr_hostrcb_device_data_entry_enhanced {
+	struct ipr_ext_vpd vpd;
+	u8 ccin[4];
+	struct ipr_res_addr dev_res_addr;
+	struct ipr_ext_vpd new_vpd;
+	u8 new_ccin[4];
+	struct ipr_ext_vpd ioa_last_with_dev_vpd;
+	struct ipr_ext_vpd cfc_last_with_dev_vpd;
+}__attribute__((packed, aligned (4)));
+
 struct ipr_hostrcb_array_data_entry {
-	struct ipr_std_inq_vpids vpids;
-	u8 serial_num[IPR_SERIAL_NUM_LEN];
+	struct ipr_vpd vpd;
+	struct ipr_res_addr expected_dev_res_addr;
+	struct ipr_res_addr dev_res_addr;
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hostrcb_array_data_entry_enhanced {
+	struct ipr_ext_vpd vpd;
+	u8 ccin[4];
 	struct ipr_res_addr expected_dev_res_addr;
 	struct ipr_res_addr dev_res_addr;
 }__attribute__((packed, aligned (4)));
 
 struct ipr_hostrcb_type_ff_error {
-	__be32 ioa_data[246];
+	__be32 ioa_data[502];
 }__attribute__((packed, aligned (4)));
 
 struct ipr_hostrcb_type_01_error {
@@ -568,47 +602,75 @@ struct ipr_hostrcb_type_01_error {
 }__attribute__((packed, aligned (4)));
 
 struct ipr_hostrcb_type_02_error {
-	struct ipr_std_inq_vpids ioa_vpids;
-	u8 ioa_sn[IPR_SERIAL_NUM_LEN];
-	struct ipr_std_inq_vpids cfc_vpids;
-	u8 cfc_sn[IPR_SERIAL_NUM_LEN];
-	struct ipr_std_inq_vpids ioa_last_attached_to_cfc_vpids;
-	u8 ioa_last_attached_to_cfc_sn[IPR_SERIAL_NUM_LEN];
-	struct ipr_std_inq_vpids cfc_last_attached_to_ioa_vpids;
-	u8 cfc_last_attached_to_ioa_sn[IPR_SERIAL_NUM_LEN];
+	struct ipr_vpd ioa_vpd;
+	struct ipr_vpd cfc_vpd;
+	struct ipr_vpd ioa_last_attached_to_cfc_vpd;
+	struct ipr_vpd cfc_last_attached_to_ioa_vpd;
+	__be32 ioa_data[3];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hostrcb_type_12_error {
+	struct ipr_ext_vpd ioa_vpd;
+	struct ipr_ext_vpd cfc_vpd;
+	struct ipr_ext_vpd ioa_last_attached_to_cfc_vpd;
+	struct ipr_ext_vpd cfc_last_attached_to_ioa_vpd;
 	__be32 ioa_data[3];
-	u8 reserved[844];
 }__attribute__((packed, aligned (4)));
 
 struct ipr_hostrcb_type_03_error {
-	struct ipr_std_inq_vpids ioa_vpids;
-	u8 ioa_sn[IPR_SERIAL_NUM_LEN];
-	struct ipr_std_inq_vpids cfc_vpids;
-	u8 cfc_sn[IPR_SERIAL_NUM_LEN];
+	struct ipr_vpd ioa_vpd;
+	struct ipr_vpd cfc_vpd;
 	__be32 errors_detected;
 	__be32 errors_logged;
 	u8 ioa_data[12];
-	struct ipr_hostrcb_device_data_entry dev_entry[3];
-	u8 reserved[444];
+	struct ipr_hostrcb_device_data_entry dev[3];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hostrcb_type_13_error {
+	struct ipr_ext_vpd ioa_vpd;
+	struct ipr_ext_vpd cfc_vpd;
+	__be32 errors_detected;
+	__be32 errors_logged;
+	struct ipr_hostrcb_device_data_entry_enhanced dev[3];
 }__attribute__((packed, aligned (4)));
 
 struct ipr_hostrcb_type_04_error {
-	struct ipr_std_inq_vpids ioa_vpids;
-	u8 ioa_sn[IPR_SERIAL_NUM_LEN];
-	struct ipr_std_inq_vpids cfc_vpids;
-	u8 cfc_sn[IPR_SERIAL_NUM_LEN];
+	struct ipr_vpd ioa_vpd;
+	struct ipr_vpd cfc_vpd;
 	u8 ioa_data[12];
 	struct ipr_hostrcb_array_data_entry array_member[10];
 	__be32 exposed_mode_adn;
 	__be32 array_id;
-	struct ipr_std_inq_vpids incomp_dev_vpids;
-	u8 incomp_dev_sn[IPR_SERIAL_NUM_LEN];
+	struct ipr_vpd incomp_dev_vpd;
 	__be32 ioa_data2;
 	struct ipr_hostrcb_array_data_entry array_member2[8];
 	struct ipr_res_addr last_func_vset_res_addr;
 	u8 vset_serial_num[IPR_SERIAL_NUM_LEN];
 	u8 protection_level[8];
-	u8 reserved[124];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hostrcb_type_14_error {
+	struct ipr_ext_vpd ioa_vpd;
+	struct ipr_ext_vpd cfc_vpd;
+	__be32 exposed_mode_adn;
+	__be32 array_id;
+	struct ipr_res_addr last_func_vset_res_addr;
+	u8 vset_serial_num[IPR_SERIAL_NUM_LEN];
+	u8 protection_level[8];
+	__be32 num_entries;
+	struct ipr_hostrcb_array_data_entry_enhanced array_member[18];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hostrcb_type_07_error {
+	u8 failure_reason[64];
+	struct ipr_vpd vpd;
+	u32 data[222];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hostrcb_type_17_error {
+	u8 failure_reason[64];
+	struct ipr_ext_vpd vpd;
+	u32 data[476];
 }__attribute__((packed, aligned (4)));
 struct ipr_hostrcb_error {
@@ -622,6 +684,11 @@ struct ipr_hostrcb_error {
 		struct ipr_hostrcb_type_02_error type_02_error;
 		struct ipr_hostrcb_type_03_error type_03_error;
 		struct ipr_hostrcb_type_04_error type_04_error;
+		struct ipr_hostrcb_type_07_error type_07_error;
+		struct ipr_hostrcb_type_12_error type_12_error;
+		struct ipr_hostrcb_type_13_error type_13_error;
+		struct ipr_hostrcb_type_14_error type_14_error;
+		struct ipr_hostrcb_type_17_error type_17_error;
 	} u;
 }__attribute__((packed, aligned (4)));
@@ -655,6 +722,12 @@ struct ipr_hcam {
 #define IPR_HOST_RCB_OVERLAY_ID_3	0x03
 #define IPR_HOST_RCB_OVERLAY_ID_4	0x04
 #define IPR_HOST_RCB_OVERLAY_ID_6	0x06
+#define IPR_HOST_RCB_OVERLAY_ID_7	0x07
+#define IPR_HOST_RCB_OVERLAY_ID_12	0x12
+#define IPR_HOST_RCB_OVERLAY_ID_13	0x13
+#define IPR_HOST_RCB_OVERLAY_ID_14	0x14
+#define IPR_HOST_RCB_OVERLAY_ID_16	0x16
+#define IPR_HOST_RCB_OVERLAY_ID_17	0x17
 #define IPR_HOST_RCB_OVERLAY_ID_DEFAULT	0xFF
 
 	u8 reserved1[3];
@@ -743,6 +816,7 @@ struct ipr_resource_table {
 
 struct ipr_misc_cbs {
 	struct ipr_ioa_vpd ioa_vpd;
+	struct ipr_inquiry_page0 page0_data;
 	struct ipr_inquiry_page3 page3_data;
 	struct ipr_mode_pages mode_pages;
 	struct ipr_supported_device supp_dev;
@@ -813,6 +887,7 @@ struct ipr_trace_entry {
 struct ipr_sglist {
 	u32 order;
 	u32 num_sg;
+	u32 num_dma_sg;
 	u32 buffer_len;
 	struct scatterlist scatterlist[1];
 };
@@ -825,6 +900,13 @@ enum ipr_sdt_state {
 	DUMP_OBTAINED
 };
 
+enum ipr_cache_state {
+	CACHE_NONE,
+	CACHE_DISABLED,
+	CACHE_ENABLED,
+	CACHE_INVALID
+};
+
 /* Per-controller data */
 struct ipr_ioa_cfg {
 	char eye_catcher[8];
@@ -841,6 +923,7 @@ struct ipr_ioa_cfg {
 	u8 allow_cmds:1;
 	u8 allow_ml_add_del:1;
 
+	enum ipr_cache_state cache_state;
 	u16 type; /* CCIN of the card */
 
 	u8 log_level;
@@ -911,6 +994,7 @@ struct ipr_ioa_cfg {
 	u16 reset_retries;
 
 	u32 errors_logged;
+	u32 doorbell;
 
 	struct Scsi_Host *host;
 	struct pci_dev *pdev;
@@ -948,6 +1032,7 @@ struct ipr_cmnd {
 	struct timer_list timer;
 	void (*done) (struct ipr_cmnd *);
 	int (*job_step) (struct ipr_cmnd *);
+	int (*job_step_failed) (struct ipr_cmnd *);
 	u16 cmd_index;
 	u8 sense_buffer[SCSI_SENSE_BUFFERSIZE];
 	dma_addr_t sense_buffer_dma;
@@ -1083,11 +1168,7 @@ struct ipr_ucode_image_header {
 /*
  * Macros
  */
-#if IPR_DEBUG
-#define IPR_DBG_CMD(CMD) do { CMD; } while (0)
-#else
-#define IPR_DBG_CMD(CMD)
-#endif
+#define IPR_DBG_CMD(CMD) if (ipr_debug) { CMD; }
 
 #ifdef CONFIG_SCSI_IPR_TRACE
 #define ipr_create_trace_file(kobj, attr) sysfs_create_bin_file(kobj, attr)
@@ -1135,16 +1216,22 @@ struct ipr_ucode_image_header {
 #define ipr_res_dbg(ioa_cfg, res, fmt, ...) \
 	IPR_DBG_CMD(ipr_res_printk(KERN_INFO, ioa_cfg, res, fmt, ##__VA_ARGS__))
 
+#define ipr_phys_res_err(ioa_cfg, res, fmt, ...)			\
+{									\
+	if ((res).bus >= IPR_MAX_NUM_BUSES) {				\
+		ipr_err(fmt": unknown\n", ##__VA_ARGS__);		\
+	} else {							\
+		ipr_err(fmt": %d:%d:%d:%d\n",				\
+			##__VA_ARGS__, (ioa_cfg)->host->host_no,	\
+			(res).bus, (res).target, (res).lun);		\
+	}								\
+}
+
 #define ipr_trace ipr_dbg("%s: %s: Line: %d\n",\
 	__FILE__, __FUNCTION__, __LINE__)
 
-#if IPR_DBG_TRACE
-#define ENTER printk(KERN_INFO IPR_NAME": Entering %s\n", __FUNCTION__)
-#define LEAVE printk(KERN_INFO IPR_NAME": Leaving %s\n", __FUNCTION__)
-#else
-#define ENTER
-#define LEAVE
-#endif
+#define ENTER IPR_DBG_CMD(printk(KERN_INFO IPR_NAME": Entering %s\n", __FUNCTION__))
+#define LEAVE IPR_DBG_CMD(printk(KERN_INFO IPR_NAME": Leaving %s\n", __FUNCTION__))
 
 #define ipr_err_separator \
 ipr_err("----------------------------------------------------------\n")
@@ -1216,6 +1303,20 @@ static inline int ipr_is_gscsi(struct ipr_resource_entry *res)
 	return 0;
 }
 
+/**
+ * ipr_is_naca_model - Determine if a resource is using NACA queueing model
+ * @res:	resource entry struct
+ *
+ * Return value:
+ * 	1 if NACA queueing model / 0 if not NACA queueing model
+ **/
+static inline int ipr_is_naca_model(struct ipr_resource_entry *res)
+{
+	if (ipr_is_gscsi(res) && IPR_QUEUEING_MODEL(res) == IPR_QUEUE_NACA_MODEL)
+		return 1;
+	return 0;
+}
+
 /**
  * ipr_is_device - Determine if resource address is that of a device
  * @res_addr:	resource address struct
@@ -1226,7 +1327,7 @@ static inline int ipr_is_gscsi(struct ipr_resource_entry *res)
 static inline int ipr_is_device(struct ipr_res_addr *res_addr)
 {
 	if ((res_addr->bus < IPR_MAX_NUM_BUSES) &&
-	    (res_addr->target < IPR_MAX_NUM_TARGETS_PER_BUS))
+	    (res_addr->target < (IPR_MAX_NUM_TARGETS_PER_BUS - 1)))
 		return 1;
 
 	return 0;

View File

@@ -139,6 +139,7 @@
 /*          - Remove 3 unused "inline" functions                             */
 /* 7.12.xx  - Use STATIC functions whereever possible                        */
 /*          - Clean up deprecated MODULE_PARM calls                          */
+/* 7.12.05  - Remove Version Matching per IBM request                        */
 /*****************************************************************************/
 
 /*
@@ -210,7 +211,7 @@ module_param(ips, charp, 0);
  * DRIVER_VER
  */
 #define IPS_VERSION_HIGH        "7.12"
-#define IPS_VERSION_LOW         ".02 "
+#define IPS_VERSION_LOW         ".05 "
 
 #if !defined(__i386__) && !defined(__ia64__) && !defined(__x86_64__)
 #warning "This driver has only been tested on the x86/ia64/x86_64 platforms"
@@ -347,8 +348,6 @@ static int ips_proc_info(struct Scsi_Host *, char *, char **, off_t, int, int);
 static int ips_host_info(ips_ha_t *, char *, off_t, int);
 static void copy_mem_info(IPS_INFOSTR *, char *, int);
 static int copy_info(IPS_INFOSTR *, char *, ...);
-static int ips_get_version_info(ips_ha_t * ha, dma_addr_t, int intr);
-static void ips_version_check(ips_ha_t * ha, int intr);
 static int ips_abort_init(ips_ha_t * ha, int index);
 static int ips_init_phase2(int index);
 
@@ -406,8 +405,6 @@ static Scsi_Host_Template ips_driver_template = {
 #endif
 };
 
-static IPS_DEFINE_COMPAT_TABLE( Compatable );	/* Version Compatability Table */
-
 /* This table describes all ServeRAID Adapters */
 static struct pci_device_id  ips_pci_table[] = {
@@ -5930,7 +5927,7 @@ ips_write_driver_status(ips_ha_t * ha, int intr)
 	strncpy((char *) ha->nvram->bios_high, ha->bios_version, 4);
 	strncpy((char *) ha->nvram->bios_low, ha->bios_version + 4, 4);
 
-	ips_version_check(ha, intr);	/* Check BIOS/FW/Driver Versions */
+	ha->nvram->versioning = 0;	/* Indicate the Driver Does Not Support Versioning */
 
 	/* now update the page */
 	if (!ips_readwrite_page5(ha, TRUE, intr)) {
@@ -6847,135 +6844,6 @@ ips_verify_bios_memio(ips_ha_t * ha, char *buffer, uint32_t buffersize,
 	return (0);
 }
-/*---------------------------------------------------------------------------*/
-/*   Routine Name: ips_version_check                                         */
-/*                                                                           */
-/*   Dependencies:                                                           */
-/*     Assumes that ips_read_adapter_status() is called first filling in     */
-/*     the data for SubSystem Parameters.                                    */
-/*     Called from ips_write_driver_status() so it also assumes NVRAM Page 5 */
-/*     Data is available.                                                    */
-/*                                                                           */
-/*---------------------------------------------------------------------------*/
-static void
-ips_version_check(ips_ha_t * ha, int intr)
-{
-	IPS_VERSION_DATA *VersionInfo;
-	uint8_t FirmwareVersion[IPS_COMPAT_ID_LENGTH + 1];
-	uint8_t BiosVersion[IPS_COMPAT_ID_LENGTH + 1];
-	int MatchError;
-	int rc;
-	char BiosString[10];
-	char FirmwareString[10];
-
-	METHOD_TRACE("ips_version_check", 1);
-
-	VersionInfo = ( IPS_VERSION_DATA * ) ha->ioctl_data;
-
-	memset(FirmwareVersion, 0, IPS_COMPAT_ID_LENGTH + 1);
-	memset(BiosVersion, 0, IPS_COMPAT_ID_LENGTH + 1);
-
-	/* Get the Compatible BIOS Version from NVRAM Page 5 */
-	memcpy(BiosVersion, ha->nvram->BiosCompatibilityID,
-	       IPS_COMPAT_ID_LENGTH);
-
-	rc = IPS_FAILURE;
-	if (ha->subsys->param[4] & IPS_GET_VERSION_SUPPORT) {	/* If Versioning is Supported */
-		/* Get the Version Info with a Get Version Command */
-		memset( VersionInfo, 0, sizeof (IPS_VERSION_DATA));
-		rc = ips_get_version_info(ha, ha->ioctl_busaddr, intr);
-		if (rc == IPS_SUCCESS)
-			memcpy(FirmwareVersion, VersionInfo->compatibilityId,
-			       IPS_COMPAT_ID_LENGTH);
-	}
-
-	if (rc != IPS_SUCCESS) {	/* If Data Not Obtainable from a GetVersion Command */
-		/* Get the Firmware Version from Enquiry Data */
-		memcpy(FirmwareVersion, ha->enq->CodeBlkVersion,
-		       IPS_COMPAT_ID_LENGTH);
-	}
-
-	/* printk(KERN_WARNING "Adapter's BIOS Version = %s\n", BiosVersion); */
-	/* printk(KERN_WARNING "BIOS Compatible Version = %s\n", IPS_COMPAT_BIOS); */
-	/* printk(KERN_WARNING "Adapter's Firmware Version = %s\n", FirmwareVersion); */
-	/* printk(KERN_WARNING "Firmware Compatible Version = %s \n", Compatable[ ha->nvram->adapter_type ]); */
-
-	MatchError = 0;
-
-	if (strncmp
-	    (FirmwareVersion, Compatable[ha->nvram->adapter_type],
-	     IPS_COMPAT_ID_LENGTH) != 0)
-		MatchError = 1;
-
-	if (strncmp(BiosVersion, IPS_COMPAT_BIOS, IPS_COMPAT_ID_LENGTH) != 0)
-		MatchError = 1;
-
-	ha->nvram->versioning = 1;	/* Indicate the Driver Supports Versioning */
-
-	if (MatchError) {
-		ha->nvram->version_mismatch = 1;
-		if (ips_cd_boot == 0) {
-			strncpy(&BiosString[0], ha->nvram->bios_high, 4);
-			strncpy(&BiosString[4], ha->nvram->bios_low, 4);
-			BiosString[8] = 0;
-
-			strncpy(&FirmwareString[0], ha->enq->CodeBlkVersion, 8);
-			FirmwareString[8] = 0;
-
-			IPS_PRINTK(KERN_WARNING, ha->pcidev,
-				   "Warning ! ! ! ServeRAID Version Mismatch\n");
-			IPS_PRINTK(KERN_WARNING, ha->pcidev,
-				   "Bios = %s, Firmware = %s, Device Driver = %s%s\n",
-				   BiosString, FirmwareString, IPS_VERSION_HIGH,
-				   IPS_VERSION_LOW);
-			IPS_PRINTK(KERN_WARNING, ha->pcidev,
-				   "These levels should match to avoid possible compatibility problems.\n");
-		}
-	} else {
-		ha->nvram->version_mismatch = 0;
-	}
-
-	return;
-}
-
-/*---------------------------------------------------------------------------*/
-/*   Routine Name: ips_get_version_info                                      */
-/*                                                                           */
-/*   Routine Description:                                                    */
-/*     Issue an internal GETVERSION Command                                  */
-/*                                                                           */
-/*   Return Value:                                                           */
-/*     0 if Successful, else non-zero                                        */
-/*---------------------------------------------------------------------------*/
-static int
-ips_get_version_info(ips_ha_t * ha, dma_addr_t Buffer, int intr)
-{
-	ips_scb_t *scb;
-	int rc;
-
-	METHOD_TRACE("ips_get_version_info", 1);
-
-	scb = &ha->scbs[ha->max_cmds - 1];
-
-	ips_init_scb(ha, scb);
-
-	scb->timeout = ips_cmd_timeout;
-	scb->cdb[0] = IPS_CMD_GET_VERSION_INFO;
-	scb->cmd.version_info.op_code = IPS_CMD_GET_VERSION_INFO;
-	scb->cmd.version_info.command_id = IPS_COMMAND_ID(ha, scb);
-	scb->cmd.version_info.reserved = 0;
-	scb->cmd.version_info.count = sizeof (IPS_VERSION_DATA);
-	scb->cmd.version_info.reserved2 = 0;
-
-	scb->data_len = sizeof (IPS_VERSION_DATA);
-	scb->data_busaddr = Buffer;
-	scb->cmd.version_info.buffer_addr = Buffer;
-	scb->flags = 0;
-
-	/* issue command */
-	rc = ips_send_wait(ha, scb, ips_cmd_timeout, intr);
-	return (rc);
-}
-
 /****************************************************************************/
 /*                                                                          */
 /*   Routine Name: ips_abort_init                                           */

View File

@@ -362,6 +362,7 @@ megaraid_queue(Scsi_Cmnd *scmd, void (*done)(Scsi_Cmnd *))
 	adapter_t	*adapter;
 	scb_t	*scb;
 	int	busy=0;
+	unsigned long flags;
 
 	adapter = (adapter_t *)scmd->device->host->hostdata;
 
@@ -377,6 +378,7 @@ megaraid_queue(Scsi_Cmnd *scmd, void (*done)(Scsi_Cmnd *))
 	 * return 0 in that case.
 	 */
+	spin_lock_irqsave(&adapter->lock, flags);
 	scb = mega_build_cmd(adapter, scmd, &busy);
 
 	if(scb) {
@@ -393,6 +395,7 @@ megaraid_queue(Scsi_Cmnd *scmd, void (*done)(Scsi_Cmnd *))
 		}
 		return 0;
 	}
+	spin_unlock_irqrestore(&adapter->lock, flags);
 
 	return busy;
 }
@@ -1981,7 +1984,7 @@ megaraid_reset(struct scsi_cmnd *cmd)
 	mc.cmd = MEGA_CLUSTER_CMD;
 	mc.opcode = MEGA_RESET_RESERVATIONS;
 
-	if( mega_internal_command(adapter, LOCK_INT, &mc, NULL) != 0 ) {
+	if( mega_internal_command(adapter, &mc, NULL) != 0 ) {
 		printk(KERN_WARNING
 			"megaraid: reservation reset failed.\n");
 	}
@@ -3011,7 +3014,7 @@ proc_rdrv(adapter_t *adapter, char *page, int start, int end )
 	mc.cmd = FC_NEW_CONFIG;
 	mc.opcode = OP_DCMD_READ_CONFIG;
 
-	if( mega_internal_command(adapter, LOCK_INT, &mc, NULL) ) {
+	if( mega_internal_command(adapter, &mc, NULL) ) {
 		len = sprintf(page, "40LD read config failed.\n");
@@ -3029,11 +3032,11 @@ proc_rdrv(adapter_t *adapter, char *page, int start, int end )
 	else {
 		mc.cmd = NEW_READ_CONFIG_8LD;
 
-		if( mega_internal_command(adapter, LOCK_INT, &mc, NULL) ) {
+		if( mega_internal_command(adapter, &mc, NULL) ) {
 			mc.cmd = READ_CONFIG_8LD;
 
-			if( mega_internal_command(adapter, LOCK_INT, &mc,
+			if( mega_internal_command(adapter, &mc,
 						NULL) ){
 				len = sprintf(page,
@@ -3632,7 +3635,7 @@ megadev_ioctl(struct inode *inode, struct file *filep, unsigned int cmd,
 	/*
 	 * Issue the command
 	 */
-	mega_internal_command(adapter, LOCK_INT, &mc, pthru);
+	mega_internal_command(adapter, &mc, pthru);
 
 	rval = mega_n_to_m((void __user *)arg, &mc);
@@ -3715,7 +3718,7 @@ freemem_and_return:
 	/*
 	 * Issue the command
 	 */
-	mega_internal_command(adapter, LOCK_INT, &mc, NULL);
+	mega_internal_command(adapter, &mc, NULL);
 
 	rval = mega_n_to_m((void __user *)arg, &mc);
@@ -4234,7 +4237,7 @@ mega_do_del_logdrv(adapter_t *adapter, int logdrv)
 	mc.opcode = OP_DEL_LOGDRV;
 	mc.subopcode = logdrv;
 
-	rval = mega_internal_command(adapter, LOCK_INT, &mc, NULL);
+	rval = mega_internal_command(adapter, &mc, NULL);
 
 	/* log this event */
 	if(rval) {
@@ -4367,7 +4370,7 @@ mega_adapinq(adapter_t *adapter, dma_addr_t dma_handle)
 	mc.xferaddr = (u32)dma_handle;
 
-	if ( mega_internal_command(adapter, LOCK_INT, &mc, NULL) != 0 ) {
+	if ( mega_internal_command(adapter, &mc, NULL) != 0 ) {
 		return -1;
 	}
@@ -4435,7 +4438,7 @@ mega_internal_dev_inquiry(adapter_t *adapter, u8 ch, u8 tgt,
 	mc.cmd = MEGA_MBOXCMD_PASSTHRU;
 	mc.xferaddr = (u32)pthru_dma_handle;
 
-	rval = mega_internal_command(adapter, LOCK_INT, &mc, pthru);
+	rval = mega_internal_command(adapter, &mc, pthru);
 
 	pci_free_consistent(pdev, sizeof(mega_passthru), pthru,
 			pthru_dma_handle);
@@ -4449,7 +4452,6 @@ mega_internal_dev_inquiry(adapter_t *adapter, u8 ch, u8 tgt,
 /**
  * mega_internal_command()
  * @adapter - pointer to our soft state
- * @ls - the scope of the exclusion lock.
  * @mc - the mailbox command
  * @pthru - Passthru structure for DCDB commands
  *
@@ -4463,8 +4465,7 @@ mega_internal_dev_inquiry(adapter_t *adapter, u8 ch, u8 tgt,
 * Note: parameter 'pthru' is null for non-passthru commands.
 */
 static int
-mega_internal_command(adapter_t *adapter, lockscope_t ls, megacmd_t *mc,
-		mega_passthru *pthru )
+mega_internal_command(adapter_t *adapter, megacmd_t *mc, mega_passthru *pthru)
 {
 	Scsi_Cmnd	*scmd;
 	struct scsi_device *sdev;
@@ -4508,15 +4509,8 @@ mega_internal_command(adapter_t *adapter, megacmd_t *mc, mega_passthru *pthru)
 	scb->idx = CMDID_INT_CMDS;
 
-	/*
-	 * Get the lock only if the caller has not acquired it already
-	 */
-	if( ls == LOCK_INT ) spin_lock_irqsave(&adapter->lock, flags);
-
+	spin_lock_irqsave(&adapter->lock, flags);
 	megaraid_queue(scmd, mega_internal_done);
-
-	if( ls == LOCK_INT ) spin_unlock_irqrestore(&adapter->lock, flags);
+	spin_unlock_irqrestore(&adapter->lock, flags);
 
 	wait_for_completion(&adapter->int_waitq);
 
 	rval = scmd->result;

View File

@@ -925,13 +925,6 @@ struct mega_hbas {
 #define MEGA_BULK_DATA			0x0001
 #define MEGA_SGLIST			0x0002
 
-/*
- * lockscope definitions, callers can specify the lock scope with this data
- * type. LOCK_INT would mean the caller has not acquired the lock before
- * making the call and LOCK_EXT would mean otherwise.
- */
-typedef enum { LOCK_INT, LOCK_EXT } lockscope_t;
-
 /*
  * Parameters for the io-mapped controllers
 */
@@ -1062,8 +1055,7 @@ static int mega_support_random_del(adapter_t *);
 static int mega_del_logdrv(adapter_t *, int);
 static int mega_do_del_logdrv(adapter_t *, int);
 static void mega_get_max_sgl(adapter_t *);
-static int mega_internal_command(adapter_t *, lockscope_t, megacmd_t *,
-		mega_passthru *);
+static int mega_internal_command(adapter_t *, megacmd_t *, mega_passthru *);
 static void mega_internal_done(Scsi_Cmnd *);
 static int mega_support_cluster(adapter_t *);
 #endif

View File

@@ -97,7 +97,6 @@ typedef struct {
  * @param dpc_h			: tasklet handle
  * @param pdev			: pci configuration pointer for kernel
  * @param host			: pointer to host structure of mid-layer
- * @param host_lock		: pointer to appropriate lock
  * @param lock			: synchronization lock for mid-layer and driver
  * @param quiescent		: driver is quiescent for now.
  * @param outstanding_cmds	: number of commands pending in the driver
@@ -152,7 +151,6 @@ typedef struct {
 	struct tasklet_struct		dpc_h;
 	struct pci_dev			*pdev;
 	struct Scsi_Host		*host;
-	spinlock_t			*host_lock;
 	spinlock_t			lock;
 	uint8_t				quiescent;
 	int				outstanding_cmds;

View File

@@ -533,8 +533,6 @@ megaraid_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
 	// Initialize the synchronization lock for kernel and LLD
 	spin_lock_init(&adapter->lock);
-	adapter->host_lock = &adapter->lock;
-
 	// Initialize the command queues: the list of free SCBs and the list
 	// of pending SCBs.
@@ -715,9 +713,6 @@ megaraid_io_attach(adapter_t *adapter)
 	SCSIHOST2ADAP(host)	= (caddr_t)adapter;
 	adapter->host		= host;
 
-	// export the parameters required by the mid-layer
-	scsi_assign_lock(host, adapter->host_lock);
-
 	host->irq		= adapter->irq;
 	host->unique_id		= adapter->unique_id;
 	host->can_queue		= adapter->max_cmds;
@@ -1560,10 +1555,6 @@ megaraid_queue_command(struct scsi_cmnd *scp, void (* done)(struct scsi_cmnd *))
 	scp->scsi_done	= done;
 	scp->result	= 0;
 
-	assert_spin_locked(adapter->host_lock);
-
-	spin_unlock(adapter->host_lock);
-
 	/*
 	 * Allocate and build a SCB request
 	 * if_busy flag will be set if megaraid_mbox_build_cmd() command could
@@ -1573,23 +1564,16 @@ megaraid_queue_command(struct scsi_cmnd *scp, void (* done)(struct scsi_cmnd *))
 	 * return 0 in that case, and we would do the callback right away.
 	 */
 	if_busy	= 0;
 	scb	= megaraid_mbox_build_cmd(adapter, scp, &if_busy);
-	if (scb) {
-		megaraid_mbox_runpendq(adapter, scb);
-	}
-
-	spin_lock(adapter->host_lock);
-
 	if (!scb) {	// command already completed
 		done(scp);
 		return 0;
 	}
 
+	megaraid_mbox_runpendq(adapter, scb);
 	return if_busy;
 }
 
 /**
  * megaraid_mbox_build_cmd - transform the mid-layer scsi command to megaraid
  * firmware lingua
@@ -2546,9 +2530,7 @@ megaraid_mbox_dpc(unsigned long devp)
 		megaraid_dealloc_scb(adapter, scb);
 
 		// send the scsi packet back to kernel
-		spin_lock(adapter->host_lock);
 		scp->scsi_done(scp);
-		spin_unlock(adapter->host_lock);
 	}
 
 	return;
@@ -2563,7 +2545,7 @@ megaraid_mbox_dpc(unsigned long devp)
  * aborted. All the commands issued to the F/W must complete.
  **/
 static int
-__megaraid_abort_handler(struct scsi_cmnd *scp)
+megaraid_abort_handler(struct scsi_cmnd *scp)
 {
 	adapter_t		*adapter;
 	mraid_device_t		*raid_dev;
@@ -2577,8 +2559,6 @@ __megaraid_abort_handler(struct scsi_cmnd *scp)
 	adapter		= SCP2ADAPTER(scp);
 	raid_dev	= ADAP2RAIDDEV(adapter);
 
-	assert_spin_locked(adapter->host_lock);
-
 	con_log(CL_ANN, (KERN_WARNING
 		"megaraid: aborting-%ld cmd=%x <c=%d t=%d l=%d>\n",
 		scp->serial_number, scp->cmnd[0], SCP2CHANNEL(scp),
@@ -2658,6 +2638,7 @@ __megaraid_abort_handler(struct scsi_cmnd *scp)
 	// traverse through the list of all SCB, since driver does not
 	// maintain these SCBs on any list
 	found = 0;
+	spin_lock_irq(&adapter->lock);
 	for (i = 0; i < MBOX_MAX_SCSI_CMDS; i++) {
 		scb = adapter->kscb_list + i;
@@ -2680,6 +2661,7 @@ __megaraid_abort_handler(struct scsi_cmnd *scp)
 			}
 		}
 	}
+	spin_unlock_irq(&adapter->lock);
 
 	if (!found) {
 		con_log(CL_ANN, (KERN_WARNING
@@ -2696,22 +2678,6 @@ __megaraid_abort_handler(struct scsi_cmnd *scp)
 	return FAILED;
 }
 
-static int
-megaraid_abort_handler(struct scsi_cmnd *scp)
-{
-	adapter_t	*adapter;
-	int		rc;
-
-	adapter = SCP2ADAPTER(scp);
-
-	spin_lock_irq(adapter->host_lock);
-	rc = __megaraid_abort_handler(scp);
-	spin_unlock_irq(adapter->host_lock);
-
-	return rc;
-}
-
 /**
  * megaraid_reset_handler - device reset hadler for mailbox based driver
  * @scp		: reference command
@@ -2723,7 +2689,7 @@ megaraid_abort_handler(struct scsi_cmnd *scp)
  * host
  **/
 static int
-__megaraid_reset_handler(struct scsi_cmnd *scp)
+megaraid_reset_handler(struct scsi_cmnd *scp)
 {
 	adapter_t	*adapter;
 	scb_t		*scb;
@@ -2739,10 +2705,6 @@ __megaraid_reset_handler(struct scsi_cmnd *scp)
 	adapter		= SCP2ADAPTER(scp);
 	raid_dev	= ADAP2RAIDDEV(adapter);
 
-	assert_spin_locked(adapter->host_lock);
-
-	con_log(CL_ANN, (KERN_WARNING "megaraid: reseting the host...\n"));
-
 	// return failure if adapter is not responding
 	if (raid_dev->hw_error) {
 		con_log(CL_ANN, (KERN_NOTICE
@@ -2779,8 +2741,6 @@ __megaraid_reset_handler(struct scsi_cmnd *scp)
 			adapter->outstanding_cmds, MBOX_RESET_WAIT));
 	}
 
-	spin_unlock(adapter->host_lock);
-
 	recovery_window = MBOX_RESET_WAIT + MBOX_RESET_EXT_WAIT;
 
 	recovering = adapter->outstanding_cmds;
@@ -2806,7 +2766,7 @@ __megaraid_reset_handler(struct scsi_cmnd *scp)
 		msleep(1000);
 	}
 
-	spin_lock(adapter->host_lock);
+	spin_lock(&adapter->lock);
 
 	// If still outstanding commands, bail out
 	if (adapter->outstanding_cmds) {
@@ -2815,7 +2775,8 @@ __megaraid_reset_handler(struct scsi_cmnd *scp)
 
 		raid_dev->hw_error = 1;
-		return FAILED;
+		rval = FAILED;
+		goto out;
 	}
 	else {
 		con_log(CL_ANN, (KERN_NOTICE
@@ -2824,7 +2785,10 @@ __megaraid_reset_handler(struct scsi_cmnd *scp)
 	}
 
 	// If the controller supports clustering, reset reservations
-	if (!adapter->ha) return SUCCESS;
+	if (!adapter->ha) {
+		rval = SUCCESS;
+		goto out;
+	}
 
 	// clear reservations if any
 	raw_mbox[0] = CLUSTER_CMD;
@@ -2841,22 +2805,11 @@ __megaraid_reset_handler(struct scsi_cmnd *scp)
 			"megaraid: reservation reset failed\n"));
 	}
 
+ out:
+	spin_unlock_irq(&adapter->lock);
 	return rval;
 }
 
-static int
-megaraid_reset_handler(struct scsi_cmnd *cmd)
-{
-	int	rc;
-
-	spin_lock_irq(cmd->device->host->host_lock);
-	rc = __megaraid_reset_handler(cmd);
-	spin_unlock_irq(cmd->device->host->host_lock);
-
-	return rc;
-}
-
 /*
  * START: internal commands library
  *
@@ -3776,9 +3729,9 @@ wait_till_fw_empty(adapter_t *adapter)
 	/*
 	 * Set the quiescent flag to stop issuing cmds to FW.
 	 */
-	spin_lock_irqsave(adapter->host_lock, flags);
+	spin_lock_irqsave(&adapter->lock, flags);
 	adapter->quiescent++;
-	spin_unlock_irqrestore(adapter->host_lock, flags);
+	spin_unlock_irqrestore(&adapter->lock, flags);
 
 	/*
	 * Wait till there are no more cmds outstanding at FW. Try for at most

View File

@@ -767,17 +767,12 @@ static int megasas_generic_reset(struct scsi_cmnd *scmd)
 		return FAILED;
 	}
 
-	spin_unlock(scmd->device->host->host_lock);
-
 	ret_val = megasas_wait_for_outstanding(instance);
 
 	if (ret_val == SUCCESS)
 		printk(KERN_NOTICE "megasas: reset successful \n");
 	else
 		printk(KERN_ERR "megasas: failed to do reset\n");
 
-	spin_lock(scmd->device->host->host_lock);
-
 	return ret_val;
 }

View File

@@ -639,10 +639,8 @@ struct qla_boards {
 static struct pci_device_id qla1280_pci_tbl[] = {
 	{PCI_VENDOR_ID_QLOGIC, PCI_DEVICE_ID_QLOGIC_ISP12160,
 		PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
-#ifdef CONFIG_SCSI_QLOGIC_1280_1040
 	{PCI_VENDOR_ID_QLOGIC, PCI_DEVICE_ID_QLOGIC_ISP1020,
 		PCI_ANY_ID, PCI_ANY_ID, 0, 0, 1},
-#endif
 	{PCI_VENDOR_ID_QLOGIC, PCI_DEVICE_ID_QLOGIC_ISP1080,
 		PCI_ANY_ID, PCI_ANY_ID, 0, 0, 2},
 	{PCI_VENDOR_ID_QLOGIC, PCI_DEVICE_ID_QLOGIC_ISP1240,

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,5 +1,13 @@
 /*
- * RAID Attributes
+ * raid_class.c - implementation of a simple raid visualisation class
+ *
+ * Copyright (c) 2005 - James Bottomley <James.Bottomley@steeleye.com>
+ *
+ * This file is licensed under GPLv2
+ *
+ * This class is designed to allow raid attributes to be visualised and
+ * manipulated in a form independent of the underlying raid.  Ultimately this
+ * should work for both hardware and software raids.
  */
 #include <linux/init.h>
 #include <linux/module.h>
@@ -24,7 +32,7 @@ struct raid_internal {
 
 struct raid_component {
 	struct list_head node;
-	struct device *dev;
+	struct class_device cdev;
 	int num;
 };
 
@@ -74,11 +82,10 @@ static int raid_setup(struct transport_container *tc, struct device *dev,
 
 	BUG_ON(class_get_devdata(cdev));
 
-	rd = kmalloc(sizeof(*rd), GFP_KERNEL);
+	rd = kzalloc(sizeof(*rd), GFP_KERNEL);
 	if (!rd)
 		return -ENOMEM;
 
-	memset(rd, 0, sizeof(*rd));
 	INIT_LIST_HEAD(&rd->component_list);
 	class_set_devdata(cdev, rd);
 
@@ -90,15 +97,15 @@ static int raid_remove(struct transport_container *tc, struct device *dev,
 {
 	struct raid_data *rd = class_get_devdata(cdev);
 	struct raid_component *rc, *next;
+	dev_printk(KERN_ERR, dev, "RAID REMOVE\n");
 	class_set_devdata(cdev, NULL);
 	list_for_each_entry_safe(rc, next, &rd->component_list, node) {
-		char buf[40];
-		snprintf(buf, sizeof(buf), "component-%d", rc->num);
 		list_del(&rc->node);
-		sysfs_remove_link(&cdev->kobj, buf);
-		kfree(rc);
+		dev_printk(KERN_ERR, rc->cdev.dev, "RAID COMPONENT REMOVE\n");
+		class_device_unregister(&rc->cdev);
 	}
-	kfree(class_get_devdata(cdev));
+	dev_printk(KERN_ERR, dev, "RAID REMOVE DONE\n");
+	kfree(rd);
 	return 0;
 }
 
@@ -112,10 +119,11 @@ static struct {
 	enum raid_state	value;
 	char		*name;
 } raid_states[] = {
-	{ RAID_ACTIVE, "active" },
-	{ RAID_DEGRADED, "degraded" },
-	{ RAID_RESYNCING, "resyncing" },
-	{ RAID_OFFLINE, "offline" },
+	{ RAID_STATE_UNKNOWN, "unknown" },
+	{ RAID_STATE_ACTIVE, "active" },
+	{ RAID_STATE_DEGRADED, "degraded" },
+	{ RAID_STATE_RESYNCING, "resyncing" },
+	{ RAID_STATE_OFFLINE, "offline" },
 };
 
 static const char *raid_state_name(enum raid_state state)
@@ -132,6 +140,33 @@ static const char *raid_state_name(enum raid_state state)
 	return name;
 }
 
+static struct {
+	enum raid_level value;
+	char *name;
+} raid_levels[] = {
+	{ RAID_LEVEL_UNKNOWN, "unknown" },
+	{ RAID_LEVEL_LINEAR, "linear" },
+	{ RAID_LEVEL_0, "raid0" },
+	{ RAID_LEVEL_1, "raid1" },
+	{ RAID_LEVEL_3, "raid3" },
+	{ RAID_LEVEL_4, "raid4" },
+	{ RAID_LEVEL_5, "raid5" },
+	{ RAID_LEVEL_6, "raid6" },
+};
+
+static const char *raid_level_name(enum raid_level level)
+{
+	int i;
+	char *name = NULL;
+
+	for (i = 0; i < sizeof(raid_levels)/sizeof(raid_levels[0]); i++) {
+		if (raid_levels[i].value == level) {
+			name = raid_levels[i].name;
+			break;
+		}
+	}
+	return name;
+}
+
 
 #define raid_attr_show_internal(attr, fmt, var, code)			\
 static ssize_t raid_show_##attr(struct class_device *cdev, char *buf)	\
@@ -161,11 +196,22 @@ static CLASS_DEVICE_ATTR(attr, S_IRUGO, raid_show_##attr, NULL)
 
 #define raid_attr_ro(attr)	raid_attr_ro_internal(attr, )
 #define raid_attr_ro_fn(attr)	raid_attr_ro_internal(attr, ATTR_CODE(attr))
-#define raid_attr_ro_state(attr)	raid_attr_ro_states(attr, attr, ATTR_CODE(attr))
+#define raid_attr_ro_state(attr)	raid_attr_ro_states(attr, attr, )
+#define raid_attr_ro_state_fn(attr)	raid_attr_ro_states(attr, attr, ATTR_CODE(attr))
 
-raid_attr_ro(level);
+raid_attr_ro_state(level);
 raid_attr_ro_fn(resync);
-raid_attr_ro_state(state);
+raid_attr_ro_state_fn(state);
+
+static void raid_component_release(struct class_device *cdev)
+{
+	struct raid_component *rc = container_of(cdev, struct raid_component,
+						 cdev);
+	dev_printk(KERN_ERR, rc->cdev.dev, "COMPONENT RELEASE\n");
+	put_device(rc->cdev.dev);
+	kfree(rc);
+}
 
 void raid_component_add(struct raid_template *r,struct device *raid_dev,
 			struct device *component_dev)
@@ -175,34 +221,36 @@ void raid_component_add(struct raid_template *r,struct device *raid_dev,
 							  raid_dev);
 	struct raid_component *rc;
 	struct raid_data *rd = class_get_devdata(cdev);
-	char buf[40];
 
-	rc = kmalloc(sizeof(*rc), GFP_KERNEL);
+	rc = kzalloc(sizeof(*rc), GFP_KERNEL);
 	if (!rc)
 		return;
 
 	INIT_LIST_HEAD(&rc->node);
-	rc->dev = component_dev;
+	class_device_initialize(&rc->cdev);
+	rc->cdev.release = raid_component_release;
+	rc->cdev.dev = get_device(component_dev);
 	rc->num = rd->component_count++;
 
-	snprintf(buf, sizeof(buf), "component-%d", rc->num);
+	snprintf(rc->cdev.class_id, sizeof(rc->cdev.class_id),
+		 "component-%d", rc->num);
 	list_add_tail(&rc->node, &rd->component_list);
-	sysfs_create_link(&cdev->kobj, &component_dev->kobj, buf);
+	rc->cdev.parent = cdev;
+	rc->cdev.class = &raid_class.class;
+	class_device_add(&rc->cdev);
 }
 EXPORT_SYMBOL(raid_component_add);
 
 struct raid_template *
 raid_class_attach(struct raid_function_template *ft)
 {
-	struct raid_internal *i = kmalloc(sizeof(struct raid_internal),
+	struct raid_internal *i = kzalloc(sizeof(struct raid_internal),
 					  GFP_KERNEL);
 	int count = 0;
 
 	if (unlikely(!i))
 		return NULL;
 
-	memset(i, 0, sizeof(*i));
-
 	i->f = ft;
 	i->r.raid_attrs.ac.class = &raid_class.class;

View File

@@ -416,44 +416,16 @@ static int scsi_eh_completed_normally(struct scsi_cmnd *scmd)
 	return FAILED;
 }
 
-/**
- * scsi_eh_times_out - timeout function for error handling.
- * @scmd:	Cmd that is timing out.
- *
- * Notes:
- *    During error handling, the kernel thread will be sleeping waiting
- *    for some action to complete on the device.  our only job is to
- *    record that it timed out, and to wake up the thread.
- **/
-static void scsi_eh_times_out(struct scsi_cmnd *scmd)
-{
-	scmd->eh_eflags |= SCSI_EH_REC_TIMEOUT;
-	SCSI_LOG_ERROR_RECOVERY(3, printk("%s: scmd:%p\n", __FUNCTION__,
-					  scmd));
-
-	up(scmd->device->host->eh_action);
-}
-
 /**
  * scsi_eh_done - Completion function for error handling.
  * @scmd:	Cmd that is done.
  **/
 static void scsi_eh_done(struct scsi_cmnd *scmd)
 {
-	/*
-	 * if the timeout handler is already running, then just set the
-	 * flag which says we finished late, and return.  we have no
-	 * way of stopping the timeout handler from running, so we must
-	 * always defer to it.
-	 */
-	if (del_timer(&scmd->eh_timeout)) {
-		scmd->request->rq_status = RQ_SCSI_DONE;
-
-		SCSI_LOG_ERROR_RECOVERY(3, printk("%s scmd: %p result: %x\n",
-					   __FUNCTION__, scmd, scmd->result));
-
-		up(scmd->device->host->eh_action);
-	}
+	SCSI_LOG_ERROR_RECOVERY(3,
+		printk("%s scmd: %p result: %x\n",
+			__FUNCTION__, scmd, scmd->result));
+	complete(scmd->device->host->eh_action);
 }
 
 /**
@@ -461,10 +433,6 @@ static void scsi_eh_done(struct scsi_cmnd *scmd)
  * @scmd:	SCSI Cmd to send.
  * @timeout:	Timeout for cmd.
  *
- * Notes:
- *    The initialization of the structures is quite a bit different in
- *    this case, and furthermore, there is a different completion handler
- *    vs scsi_dispatch_cmd.
  * Return value:
  *    SUCCESS or FAILED or NEEDS_RETRY
  **/
@@ -472,24 +440,16 @@ static int scsi_send_eh_cmnd(struct scsi_cmnd *scmd, int timeout)
 {
 	struct scsi_device *sdev = scmd->device;
 	struct Scsi_Host *shost = sdev->host;
-	DECLARE_MUTEX_LOCKED(sem);
+	DECLARE_COMPLETION(done);
+	unsigned long timeleft;
 	unsigned long flags;
-	int rtn = SUCCESS;
+	int rtn;
 
-	/*
-	 * we will use a queued command if possible, otherwise we will
-	 * emulate the queuing and calling of completion function ourselves.
-	 */
 	if (sdev->scsi_level <= SCSI_2)
 		scmd->cmnd[1] = (scmd->cmnd[1] & 0x1f) |
 			(sdev->lun << 5 & 0xe0);
 
-	scsi_add_timer(scmd, timeout, scsi_eh_times_out);
-
-	/*
-	 * set up the semaphore so we wait for the command to complete.
-	 */
-	shost->eh_action = &sem;
+	shost->eh_action = &done;
 	scmd->request->rq_status = RQ_SCSI_BUSY;
 
 	spin_lock_irqsave(shost->host_lock, flags);
@@ -497,47 +457,29 @@ static int scsi_send_eh_cmnd(struct scsi_cmnd *scmd, int timeout)
 	shost->hostt->queuecommand(scmd, scsi_eh_done);
 	spin_unlock_irqrestore(shost->host_lock, flags);
 
-	down(&sem);
-	scsi_log_completion(scmd, SUCCESS);
-
-	scmd->request->rq_status = RQ_SCSI_DONE;
+	timeleft = wait_for_completion_timeout(&done, timeout);
+
 	shost->eh_action = NULL;
 
-	/*
-	 * see if timeout.  if so, tell the host to forget about it.
-	 * in other words, we don't want a callback any more.
-	 */
-	if (scmd->eh_eflags & SCSI_EH_REC_TIMEOUT) {
-		scmd->eh_eflags &= ~SCSI_EH_REC_TIMEOUT;
-
-		/*
-		 * as far as the low level driver is
-		 * concerned, this command is still active, so
-		 * we must give the low level driver a chance
-		 * to abort it. (db)
-		 *
-		 * FIXME(eric) - we are not tracking whether we could
-		 * abort a timed out command or not.  not sure how
-		 * we should treat them differently anyways.
-		 */
-		if (shost->hostt->eh_abort_handler)
-			shost->hostt->eh_abort_handler(scmd);
-
-		scmd->request->rq_status = RQ_SCSI_DONE;
-		rtn = FAILED;
-	}
-
-	SCSI_LOG_ERROR_RECOVERY(3, printk("%s: scmd: %p, rtn:%x\n",
-					  __FUNCTION__, scmd, rtn));
+	scsi_log_completion(scmd, SUCCESS);
+
+	SCSI_LOG_ERROR_RECOVERY(3,
+		printk("%s: scmd: %p, timeleft: %ld\n",
+			__FUNCTION__, scmd, timeleft));
 
 	/*
-	 * now examine the actual status codes to see whether the command
-	 * actually did complete normally.
+	 * If there is time left scsi_eh_done got called, and we will
+	 * examine the actual status codes to see whether the command
+	 * actually did complete normally, else tell the host to forget
+	 * about this command.
 	 */
-	if (rtn == SUCCESS) {
+	if (timeleft) {
 		rtn = scsi_eh_completed_normally(scmd);
 		SCSI_LOG_ERROR_RECOVERY(3,
			printk("%s: scsi_eh_completed_normally %x\n",
				__FUNCTION__, rtn));
 		switch (rtn) {
 		case SUCCESS:
 		case NEEDS_RETRY:
@@ -547,6 +489,15 @@ static int scsi_send_eh_cmnd(struct scsi_cmnd *scmd, int timeout)
 			rtn = FAILED;
 			break;
 		}
+	} else {
+		/*
+		 * FIXME(eric) - we are not tracking whether we could
+		 * abort a timed out command or not.  not sure how
+		 * we should treat them differently anyways.
+		 */
+		if (shost->hostt->eh_abort_handler)
+			shost->hostt->eh_abort_handler(scmd);
+		rtn = FAILED;
 	}
 
 	return rtn;
@@ -1571,50 +1522,41 @@ static void scsi_unjam_host(struct Scsi_Host *shost)
 }
 
 /**
- * scsi_error_handler - Handle errors/timeouts of SCSI cmds.
+ * scsi_error_handler - SCSI error handler thread
  * @data:	Host for which we are running.
  *
  * Notes:
- *    This is always run in the context of a kernel thread.  The idea is
- *    that we start this thing up when the kernel starts up (one per host
- *    that we detect), and it immediately goes to sleep and waits for some
- *    event (i.e. failure).  When this takes place, we have the job of
- *    trying to unjam the bus and restarting things.
+ *    This is the main error handling loop.  This is run as a kernel thread
+ *    for every SCSI host and handles all error handling activity.
  **/
 int scsi_error_handler(void *data)
 {
-	struct Scsi_Host *shost = (struct Scsi_Host *) data;
-	int rtn;
+	struct Scsi_Host *shost = data;
 
 	current->flags |= PF_NOFREEZE;
 
 	/*
-	 * Note - we always use TASK_INTERRUPTIBLE even if the module
-	 * was loaded as part of the kernel.  The reason is that
-	 * UNINTERRUPTIBLE would cause this thread to be counted in
-	 * the load average as a running process, and an interruptible
-	 * wait doesn't.
+	 * We use TASK_INTERRUPTIBLE so that the thread is not
+	 * counted against the load average as a running process.
+	 * We never actually get interrupted because kthread_run
+	 * disables singal delivery for the created thread.
 	 */
 	set_current_state(TASK_INTERRUPTIBLE);
 	while (!kthread_should_stop()) {
 		if (shost->host_failed == 0 ||
 		    shost->host_failed != shost->host_busy) {
-			SCSI_LOG_ERROR_RECOVERY(1, printk("Error handler"
-							  " scsi_eh_%d"
-							  " sleeping\n",
-							  shost->host_no));
+			SCSI_LOG_ERROR_RECOVERY(1,
+				printk("Error handler scsi_eh_%d sleeping\n",
					shost->host_no));
 			schedule();
 			set_current_state(TASK_INTERRUPTIBLE);
 			continue;
 		}
 
 		__set_current_state(TASK_RUNNING);
-		SCSI_LOG_ERROR_RECOVERY(1, printk("Error handler"
-						  " scsi_eh_%d waking"
-						  " up\n",shost->host_no));
-
-		shost->eh_active = 1;
+		SCSI_LOG_ERROR_RECOVERY(1,
			printk("Error handler scsi_eh_%d waking up\n",
				shost->host_no));
 
 		/*
 		 * We have a host that is failing for some reason.  Figure out
@@ -1622,12 +1564,10 @@ int scsi_error_handler(void *data)
 		 * If we fail, we end up taking the thing offline.
 		 */
 		if (shost->hostt->eh_strategy_handler)
-			rtn = shost->hostt->eh_strategy_handler(shost);
+			shost->hostt->eh_strategy_handler(shost);
 		else
 			scsi_unjam_host(shost);
 
-		shost->eh_active = 0;
-
 		/*
 		 * Note - if the above fails completely, the action is to take
 		 * individual devices offline and flush the queue of any
@@ -1638,15 +1578,10 @@ int scsi_error_handler(void *data)
 		scsi_restart_operations(shost);
 		set_current_state(TASK_INTERRUPTIBLE);
 	}
 
 	__set_current_state(TASK_RUNNING);
 
-	SCSI_LOG_ERROR_RECOVERY(1, printk("Error handler scsi_eh_%d"
-					  " exiting\n",shost->host_no));
-
-	/*
-	 * Make sure that nobody tries to wake us up again.
-	 */
+	SCSI_LOG_ERROR_RECOVERY(1,
		printk("Error handler scsi_eh_%d exiting\n", shost->host_no));
+
 	shost->ehandler = NULL;
 	return 0;
 }

View File

@@ -254,55 +254,6 @@ void scsi_do_req(struct scsi_request *sreq, const void *cmnd,
 }
 EXPORT_SYMBOL(scsi_do_req);
 
-/* This is the end routine we get to if a command was never attached
- * to the request.  Simply complete the request without changing
- * rq_status; this will cause a DRIVER_ERROR. */
-static void scsi_wait_req_end_io(struct request *req)
-{
-	BUG_ON(!req->waiting);
-
-	complete(req->waiting);
-}
-
-void scsi_wait_req(struct scsi_request *sreq, const void *cmnd, void *buffer,
-		   unsigned bufflen, int timeout, int retries)
-{
-	DECLARE_COMPLETION(wait);
-	int write = (sreq->sr_data_direction == DMA_TO_DEVICE);
-	struct request *req;
-
-	req = blk_get_request(sreq->sr_device->request_queue, write,
-			      __GFP_WAIT);
-	if (bufflen && blk_rq_map_kern(sreq->sr_device->request_queue, req,
-				       buffer, bufflen, __GFP_WAIT)) {
-		sreq->sr_result = DRIVER_ERROR << 24;
-		blk_put_request(req);
-		return;
-	}
-
-	req->flags |= REQ_NOMERGE;
-	req->waiting = &wait;
-	req->end_io = scsi_wait_req_end_io;
-	req->cmd_len = COMMAND_SIZE(((u8 *)cmnd)[0]);
-	req->sense = sreq->sr_sense_buffer;
-	req->sense_len = 0;
-	memcpy(req->cmd, cmnd, req->cmd_len);
-	req->timeout = timeout;
-	req->flags |= REQ_BLOCK_PC;
-	req->rq_disk = NULL;
-	blk_insert_request(sreq->sr_device->request_queue, req,
-			   sreq->sr_data_direction == DMA_TO_DEVICE, NULL);
-	wait_for_completion(&wait);
-	sreq->sr_request->waiting = NULL;
-	sreq->sr_result = req->errors;
-	if (req->errors)
-		sreq->sr_result |= (DRIVER_ERROR << 24);
-
-	blk_put_request(req);
-}
-EXPORT_SYMBOL(scsi_wait_req);
-
 /**
  * scsi_execute - insert request and wait for the result
  * @sdev:	scsi device

View File

@@ -22,7 +22,6 @@ struct Scsi_Host;
  * Scsi Error Handler Flags
  */
 #define SCSI_EH_CANCEL_CMD	0x0001	/* Cancel this cmd */
-#define SCSI_EH_REC_TIMEOUT	0x0002	/* EH retry timed out */
 
 #define SCSI_SENSE_VALID(scmd) \
 	(((scmd)->sense_buffer[0] & 0x70) == 0x70)

View File

@@ -691,16 +691,19 @@ int scsi_sysfs_add_sdev(struct scsi_device *sdev)
 
 void __scsi_remove_device(struct scsi_device *sdev)
 {
+	struct device *dev = &sdev->sdev_gendev;
+
 	if (scsi_device_set_state(sdev, SDEV_CANCEL) != 0)
 		return;
 
 	class_device_unregister(&sdev->sdev_classdev);
-	device_del(&sdev->sdev_gendev);
+	transport_remove_device(dev);
+	device_del(dev);
 	scsi_device_set_state(sdev, SDEV_DEL);
 	if (sdev->host->hostt->slave_destroy)
 		sdev->host->hostt->slave_destroy(sdev);
-	transport_unregister_device(&sdev->sdev_gendev);
-	put_device(&sdev->sdev_gendev);
+	transport_destroy_device(dev);
+	put_device(dev);
 }
/** /**

View File

@@ -441,6 +441,7 @@
 #define PCI_DEVICE_ID_IBM_SNIPE		0x0180
 #define PCI_DEVICE_ID_IBM_CITRINE	0x028C
 #define PCI_DEVICE_ID_IBM_GEMSTONE	0xB166
+#define PCI_DEVICE_ID_IBM_OBSIDIAN	0x02BD
 #define PCI_DEVICE_ID_IBM_ICOM_DEV_ID_1	0x0031
 #define PCI_DEVICE_ID_IBM_ICOM_DEV_ID_2	0x0219
 #define PCI_DEVICE_ID_IBM_ICOM_V2_TWO_PORTS_RVX	0x021A
@@ -2144,6 +2145,7 @@
 #define PCI_DEVICE_ID_ADAPTEC2_7899B	0x00c1
 #define PCI_DEVICE_ID_ADAPTEC2_7899D	0x00c3
 #define PCI_DEVICE_ID_ADAPTEC2_7899P	0x00cf
+#define PCI_DEVICE_ID_ADAPTEC2_OBSIDIAN	0x0500
 #define PCI_DEVICE_ID_ADAPTEC2_SCAMP	0x0503

View File

@@ -1,4 +1,9 @@
 /*
+ * raid_class.h - a generic raid visualisation class
+ *
+ * Copyright (c) 2005 - James Bottomley <James.Bottomley@steeleye.com>
+ *
+ * This file is licensed under GPLv2
  */
 #include <linux/transport_class.h>
 
@@ -14,20 +19,35 @@ struct raid_function_template {
 };
 
 enum raid_state {
-	RAID_ACTIVE = 1,
-	RAID_DEGRADED,
-	RAID_RESYNCING,
-	RAID_OFFLINE,
+	RAID_STATE_UNKNOWN = 0,
+	RAID_STATE_ACTIVE,
+	RAID_STATE_DEGRADED,
+	RAID_STATE_RESYNCING,
+	RAID_STATE_OFFLINE,
+};
+
+enum raid_level {
+	RAID_LEVEL_UNKNOWN = 0,
+	RAID_LEVEL_LINEAR,
+	RAID_LEVEL_0,
+	RAID_LEVEL_1,
+	RAID_LEVEL_3,
+	RAID_LEVEL_4,
+	RAID_LEVEL_5,
+	RAID_LEVEL_6,
 };
 
 struct raid_data {
 	struct list_head component_list;
 	int component_count;
-	int level;
+	enum raid_level level;
 	enum raid_state state;
 	int resync;
 };
 
+/* resync complete goes from 0 to this */
+#define RAID_MAX_RESYNC (10000)
+
 #define DEFINE_RAID_ATTRIBUTE(type, attr)				   \
 static inline void							   \
 raid_set_##attr(struct raid_template *r, struct device *dev, type value) { \
@@ -48,7 +68,7 @@ raid_get_##attr(struct raid_template *r, struct device *dev) { \
 	return rd->attr;						   \
 }
 
-DEFINE_RAID_ATTRIBUTE(int, level)
+DEFINE_RAID_ATTRIBUTE(enum raid_level, level)
 DEFINE_RAID_ATTRIBUTE(int, resync)
 DEFINE_RAID_ATTRIBUTE(enum raid_state, state)

View File

@@ -7,6 +7,7 @@
 #include <linux/workqueue.h>
 
 struct block_device;
+struct completion;
 struct module;
 struct scsi_cmnd;
 struct scsi_device;
@@ -467,10 +468,8 @@ struct Scsi_Host {
 
 	struct list_head	eh_cmd_q;
 	struct task_struct    * ehandler;	/* Error recovery thread. */
-	struct semaphore      * eh_action;	/* Wait for specific actions on the
-						   host. */
-	unsigned int	        eh_active:1;	/* Indicates the eh thread is awake and active if
-						   this is true. */
+	struct completion     * eh_action;	/* Wait for specific actions on the
+						   host. */
 	wait_queue_head_t       host_wait;
 	struct scsi_host_template *hostt;
 	struct scsi_transport_template *transportt;

View File

@@ -47,9 +47,6 @@ struct scsi_request {
 
 extern struct scsi_request *scsi_allocate_request(struct scsi_device *, gfp_t);
 extern void scsi_release_request(struct scsi_request *);
-extern void scsi_wait_req(struct scsi_request *, const void *cmnd,
-			  void *buffer, unsigned bufflen,
-			  int timeout, int retries);
 extern void scsi_do_req(struct scsi_request *, const void *cmnd,
 			void *buffer, unsigned bufflen,
 			void (*done) (struct scsi_cmnd *),