nvme: fix boot hang with only being able to get one IRQ vector

NVMe always asks for io_queues + 1 worth of IRQ vectors, which
means that even when we scale all the way down, we still ask
for 2 vectors and get -ENOSPC in return if the system can't
support more than 1.

Getting just 1 vector is fine; it just means that we'll have
1 IO queue and 1 admin queue, with a shared vector between
them. Check for this case and don't add our + 1 if it happens.
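
As a rough illustration (not the driver code itself; the real
nvme_setup_irqs() uses pci_alloc_irq_vectors_affinity() with IRQ
affinity sets, and the helper name here is made up), the retry
logic boils down to:

#include <linux/pci.h>

/* Hypothetical sketch of the vector allocation retry loop. */
static int nvme_irq_retry_sketch(struct pci_dev *pdev, int nr_io_queues)
{
	int result = 0;

	for (;;) {
		int nr_vecs;

		/*
		 * Normally ask for one vector per IO queue plus one
		 * for the admin queue. If a previous attempt with a
		 * single IO queue failed with -ENOSPC, drop the +1
		 * and let the admin and IO queue share vector 0.
		 */
		if (result == -ENOSPC && nr_io_queues == 1)
			nr_vecs = 1;
		else
			nr_vecs = nr_io_queues + 1;

		result = pci_alloc_irq_vectors(pdev, nr_vecs, nr_vecs,
					       PCI_IRQ_ALL_TYPES);
		if (result > 0)
			return result;	/* allocated nr_vecs vectors */
		if (result != -ENOSPC || nr_vecs == 1)
			return result;	/* hard failure, or already minimal */
		if (nr_io_queues > 1)
			nr_io_queues--;	/* retry with one less IO queue */
		/* else: retry once more with the shared-vector request */
	}
}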

Fixes: 3b6592f70a ("nvme: utilize two queue maps, one for reads and one for writes")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
commit 30e066286e
parent d16a67667c
Author: Jens Axboe <axboe@kernel.dk>
Date:   2018-11-14 10:13:50 -07:00


@@ -2073,7 +2073,7 @@ static int nvme_setup_irqs(struct nvme_dev *dev, int nr_io_queues)
 		.nr_sets = ARRAY_SIZE(irq_sets),
 		.sets = irq_sets,
 	};
-	int result;
+	int result = 0;
 
 	/*
 	 * For irq sets, we have to ask for minvec == maxvec. This passes
@@ -2088,9 +2088,16 @@ static int nvme_setup_irqs(struct nvme_dev *dev, int nr_io_queues)
 			affd.nr_sets = 1;
 
 		/*
-		 * Need IRQs for read+write queues, and one for the admin queue
+		 * Need IRQs for read+write queues, and one for the admin queue.
+		 * If we can't get more than one vector, we have to share the
+		 * admin queue and IO queue vector. For that case, don't add
+		 * an extra vector for the admin queue, or we'll continue
+		 * asking for 2 and get -ENOSPC in return.
 		 */
-		nr_io_queues = irq_sets[0] + irq_sets[1] + 1;
+		if (result == -ENOSPC && nr_io_queues == 1)
+			nr_io_queues = 1;
+		else
+			nr_io_queues = irq_sets[0] + irq_sets[1] + 1;
 
 		result = pci_alloc_irq_vectors_affinity(pdev, nr_io_queues,
 				nr_io_queues,
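
For context on the "minvec == maxvec" comment in the first hunk, here
is a rough sketch of how the IRQ affinity sets get handed to the
allocator. Only the .nr_sets and .sets assignments are visible in the
hunk; the .pre_vectors field, the helper name, and the meaning of the
two sets are assumptions for illustration:

#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/pci.h>

/*
 * Hypothetical helper illustrating the affinity-set allocation.
 * .pre_vectors = 1 (reserving vector 0 for the admin queue) and the
 * read/write split of the two sets are assumed, not shown above.
 */
static int alloc_io_vectors_sketch(struct pci_dev *pdev, int nr_io_queues,
				   int nr_write_queues, int nr_read_queues)
{
	int irq_sets[2] = { nr_write_queues, nr_read_queues };
	struct irq_affinity affd = {
		.pre_vectors	= 1,
		.nr_sets	= ARRAY_SIZE(irq_sets),
		.sets		= irq_sets,
	};

	/*
	 * With affinity sets, minvec must equal maxvec: the spread
	 * across the sets is fixed up front, so the allocator cannot
	 * silently return fewer vectors. Any reduction comes back as
	 * an error for the caller to retry, which is exactly what the
	 * -ENOSPC handling above is for.
	 */
	return pci_alloc_irq_vectors_affinity(pdev, nr_io_queues,
			nr_io_queues,
			PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
}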