Commit e8613084 authored by James Smart, committed by Martin K. Petersen

scsi: lpfc: Remove use of kmalloc() in trace event logging

There are instances when trace event logs are triggered from an interrupt
context. The trace event logging may attempt to allocate memory, causing
"scheduling while atomic" BUG call traces.

Remove the need for the kmalloc'ed vport array when checking the
log_verbose flag, which eliminates the need for any allocation.

Link: https://lore.kernel.org/r/20210707184351.67872-3-jsmart2021@gmail.com
Co-developed-by: Justin Tee <justin.tee@broadcom.com>
Signed-off-by: Justin Tee <justin.tee@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Parent ae463b60
@@ -14162,8 +14162,9 @@ void lpfc_dmp_dbg(struct lpfc_hba *phba)
 	unsigned int temp_idx;
 	int i;
 	int j = 0;
-	unsigned long rem_nsec;
-	struct lpfc_vport **vports;
+	unsigned long rem_nsec, iflags;
+	bool log_verbose = false;
+	struct lpfc_vport *port_iterator;
 
 	/* Don't dump messages if we explicitly set log_verbose for the
 	 * physical port or any vport.
@@ -14171,16 +14172,24 @@ void lpfc_dmp_dbg(struct lpfc_hba *phba)
 	if (phba->cfg_log_verbose)
 		return;
 
-	vports = lpfc_create_vport_work_array(phba);
-	if (vports != NULL) {
-		for (i = 0; i <= phba->max_vpi && vports[i] != NULL; i++) {
-			if (vports[i]->cfg_log_verbose) {
-				lpfc_destroy_vport_work_array(phba, vports);
+	spin_lock_irqsave(&phba->port_list_lock, iflags);
+	list_for_each_entry(port_iterator, &phba->port_list, listentry) {
+		if (port_iterator->load_flag & FC_UNLOADING)
+			continue;
+		if (scsi_host_get(lpfc_shost_from_vport(port_iterator))) {
+			if (port_iterator->cfg_log_verbose)
+				log_verbose = true;
+
+			scsi_host_put(lpfc_shost_from_vport(port_iterator));
+
+			if (log_verbose) {
+				spin_unlock_irqrestore(&phba->port_list_lock,
+						       iflags);
 				return;
 			}
 		}
 	}
-	lpfc_destroy_vport_work_array(phba, vports);
+	spin_unlock_irqrestore(&phba->port_list_lock, iflags);
 
 	if (atomic_cmpxchg(&phba->dbg_log_dmping, 0, 1) != 0)
 		return;