Merge tag 'mhi-for-v5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/mani/mhi into char-misc-next

Manivannan writes:

MHI changes for v5.13

core:

- Added support for the Flash Programmer execution environment, which allows a
  host machine (e.g., x86) to flash the modem firmware to the NAND or eMMC
  storage in the modem. The MHI bus exposes the EDL channels (34, 35), and the
  open-source QDL tool [1] can then be used to flash the firmware from the host.
- Added an internal helper for polling MHI registers with a retry interval.
  The helper is now used to poll for the MHI ready state in the MHISTATUS
  register (see the usage sketch after this list).
- Various fixes for issues found during the bring-up of SDX24/SDX55-based
  Quectel and Telit modems.
- Updates to the execution environment handling so that the AMSS image is
  properly downloaded from SBL (Secondary Bootloader) mode.
- Added support for sending the STOP channel command to the MHI device, and
  reworked the MHI core to properly handle channel stop and restart.
- Fixed the runtime PM handling in the core by keeping the device awake until
  TX completion while allowing it to suspend for RX.
- Added sanity checks on values read from the device to avoid crashes if they
  are somehow corrupted.
- Fixed warnings generated by sparse (W=2).
- A couple of kernel-doc cleanups in mhi.h.
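
A rough usage sketch of the new polling helper (hedged: the call matches the
mhi_poll_reg_field() signature added in main.c below, and the 25 ms interval
mirrors the ready-state wait in pm.c):

  /* Poll every 25 ms until the READY bit is set in MHISTATUS */
  ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHISTATUS,
                           MHISTATUS_READY_MASK, MHISTATUS_READY_SHIFT,
                           1, 25000);
  if (ret)
          dev_err(dev, "Device failed to enter MHI Ready\n");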

pci_generic:

- Added support for runtime PM and generic PM.
- Added Firehose channels for flashing the firmware (sketched after the link
  below).
- Added support for modems such as the Quectel EM1XX series (EM120R-GL and
  EM160R-GL), SDX24, SDX65, and Foxconn T99W175, exposing the relevant
  channels.

[1] https://git.linaro.org/landing-teams/working/qualcomm/qdl.git
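
As sketched here, the Firehose channels pair the new Flash Programmer (FP)
channel macros with the EDL channel numbers; example_channels[] is a
hypothetical array name, while the macros and values come from the pci_generic
changes below:

  static const struct mhi_channel_config example_channels[] = {
          /* EDL channels 34/35 carry the Firehose protocol in the FP EE */
          MHI_CHANNEL_CONFIG_UL_FP(34, "FIREHOSE", 32, 0),
          MHI_CHANNEL_CONFIG_DL_FP(35, "FIREHOSE", 32, 0),
  };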

* tag 'mhi-for-v5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/mani/mhi: (49 commits)
  bus: mhi: fix typo in comments for struct mhi_channel_config
  bus: mhi: core: Fix shadow declarations
  bus: mhi: pci_generic: Constify mhi_controller_config struct definitions
  bus: mhi: pci_generic: Introduce Foxconn T99W175 support
  bus: mhi: core: Sanity check values from remote device before use
  bus: mhi: pci_generic: Add FIREHOSE channels
  bus: mhi: pci_generic: Implement PCI shutdown callback
  bus: mhi: Improve documentation on channel transfer setup APIs
  bus: mhi: core: Remove __ prefix for MHI channel unprepare function
  bus: mhi: core: Check channel execution environment before issuing reset
  bus: mhi: core: Clear configuration from channel context during reset
  bus: mhi: core: Hold device wake for channel update commands
  bus: mhi: core: Update debug messages to use client device
  bus: mhi: core: Improvements to the channel handling state machine
  bus: mhi: core: Clear context for stopped channels from remove()
  bus: mhi: core: Allow sending the STOP channel command
  bus: mhi: pci_generic: Add SDX65 based modem support
  bus: mhi: core: Remove pre_init flag used for power purposes
  bus: mhi: pm: reduce PM state change verbosity
  bus: mhi: core: Fix MHI runtime_pm behavior
  ...
Committed by Greg Kroah-Hartman, 2021-04-11 08:53:17 +02:00, commit 31d8df9f4a.
8 changed files with 798 additions and 299 deletions.

drivers/bus/mhi/core/boot.c

@ -389,7 +389,6 @@ static void mhi_firmware_copy(struct mhi_controller *mhi_cntrl,
void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl)
{
const struct firmware *firmware = NULL;
struct image_info *image_info;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
const char *fw_name;
void *buf;
@ -417,9 +416,9 @@ void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl)
}
}
/* If device is in pass through, do reset to ready state transition */
if (mhi_cntrl->ee == MHI_EE_PTHRU)
goto fw_load_ee_pthru;
/* wait for ready on pass through or any other execution environment */
if (mhi_cntrl->ee != MHI_EE_EDL && mhi_cntrl->ee != MHI_EE_PBL)
goto fw_load_ready_state;
fw_name = (mhi_cntrl->ee == MHI_EE_EDL) ?
mhi_cntrl->edl_image : mhi_cntrl->fw_image;
@ -461,9 +460,10 @@ void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl)
goto error_fw_load;
}
if (mhi_cntrl->ee == MHI_EE_EDL) {
/* Wait for ready since EDL image was loaded */
if (fw_name == mhi_cntrl->edl_image) {
release_firmware(firmware);
return;
goto fw_load_ready_state;
}
write_lock_irq(&mhi_cntrl->pm_lock);
@ -488,47 +488,45 @@ void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl)
release_firmware(firmware);
fw_load_ee_pthru:
fw_load_ready_state:
/* Transitioning into MHI RESET->READY state */
ret = mhi_ready_state_transition(mhi_cntrl);
if (!mhi_cntrl->fbc_download)
return;
if (ret) {
dev_err(dev, "MHI did not enter READY state\n");
goto error_ready_state;
}
/* Wait for the SBL event */
ret = wait_event_timeout(mhi_cntrl->state_event,
mhi_cntrl->ee == MHI_EE_SBL ||
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
dev_err(dev, "MHI did not enter SBL\n");
goto error_ready_state;
}
/* Start full firmware image download */
image_info = mhi_cntrl->fbc_image;
ret = mhi_fw_load_bhie(mhi_cntrl,
/* Vector table is the last entry */
&image_info->mhi_buf[image_info->entries - 1]);
if (ret) {
dev_err(dev, "MHI did not load image over BHIe, ret: %d\n",
ret);
goto error_fw_load;
}
dev_info(dev, "Wait for device to enter SBL or Mission mode\n");
return;
error_ready_state:
mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image);
mhi_cntrl->fbc_image = NULL;
if (mhi_cntrl->fbc_download) {
mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image);
mhi_cntrl->fbc_image = NULL;
}
error_fw_load:
mhi_cntrl->pm_state = MHI_PM_FW_DL_ERR;
wake_up_all(&mhi_cntrl->state_event);
}
int mhi_download_amss_image(struct mhi_controller *mhi_cntrl)
{
struct image_info *image_info = mhi_cntrl->fbc_image;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
int ret;
if (!image_info)
return -EIO;
ret = mhi_fw_load_bhie(mhi_cntrl,
/* Vector table is the last entry */
&image_info->mhi_buf[image_info->entries - 1]);
if (ret) {
dev_err(dev, "MHI did not load AMSS, ret:%d\n", ret);
mhi_cntrl->pm_state = MHI_PM_FW_DL_ERR;
wake_up_all(&mhi_cntrl->state_event);
}
return ret;
}

drivers/bus/mhi/core/debugfs.c

@ -377,7 +377,7 @@ static struct dentry *mhi_debugfs_root;
void mhi_create_debugfs(struct mhi_controller *mhi_cntrl)
{
mhi_cntrl->debugfs_dentry =
debugfs_create_dir(dev_name(mhi_cntrl->cntrl_dev),
debugfs_create_dir(dev_name(&mhi_cntrl->mhi_dev->dev),
mhi_debugfs_root);
debugfs_create_file("states", 0444, mhi_cntrl->debugfs_dentry,

drivers/bus/mhi/core/init.c

@ -22,13 +22,14 @@
static DEFINE_IDA(mhi_controller_ida);
const char * const mhi_ee_str[MHI_EE_MAX] = {
[MHI_EE_PBL] = "PBL",
[MHI_EE_SBL] = "SBL",
[MHI_EE_AMSS] = "AMSS",
[MHI_EE_RDDM] = "RDDM",
[MHI_EE_WFW] = "WFW",
[MHI_EE_PTHRU] = "PASS THRU",
[MHI_EE_EDL] = "EDL",
[MHI_EE_PBL] = "PRIMARY BOOTLOADER",
[MHI_EE_SBL] = "SECONDARY BOOTLOADER",
[MHI_EE_AMSS] = "MISSION MODE",
[MHI_EE_RDDM] = "RAMDUMP DOWNLOAD MODE",
[MHI_EE_WFW] = "WLAN FIRMWARE",
[MHI_EE_PTHRU] = "PASS THROUGH",
[MHI_EE_EDL] = "EMERGENCY DOWNLOAD",
[MHI_EE_FP] = "FLASH PROGRAMMER",
[MHI_EE_DISABLE_TRANSITION] = "DISABLE",
[MHI_EE_NOT_SUPPORTED] = "NOT SUPPORTED",
};
@ -37,8 +38,9 @@ const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX] = {
[DEV_ST_TRANSITION_PBL] = "PBL",
[DEV_ST_TRANSITION_READY] = "READY",
[DEV_ST_TRANSITION_SBL] = "SBL",
[DEV_ST_TRANSITION_MISSION_MODE] = "MISSION_MODE",
[DEV_ST_TRANSITION_SYS_ERR] = "SYS_ERR",
[DEV_ST_TRANSITION_MISSION_MODE] = "MISSION MODE",
[DEV_ST_TRANSITION_FP] = "FLASH PROGRAMMER",
[DEV_ST_TRANSITION_SYS_ERR] = "SYS ERROR",
[DEV_ST_TRANSITION_DISABLE] = "DISABLE",
};
@ -49,24 +51,30 @@ const char * const mhi_state_str[MHI_STATE_MAX] = {
[MHI_STATE_M1] = "M1",
[MHI_STATE_M2] = "M2",
[MHI_STATE_M3] = "M3",
[MHI_STATE_M3_FAST] = "M3_FAST",
[MHI_STATE_M3_FAST] = "M3 FAST",
[MHI_STATE_BHI] = "BHI",
[MHI_STATE_SYS_ERR] = "SYS_ERR",
[MHI_STATE_SYS_ERR] = "SYS ERROR",
};
const char * const mhi_ch_state_type_str[MHI_CH_STATE_TYPE_MAX] = {
[MHI_CH_STATE_TYPE_RESET] = "RESET",
[MHI_CH_STATE_TYPE_STOP] = "STOP",
[MHI_CH_STATE_TYPE_START] = "START",
};
static const char * const mhi_pm_state_str[] = {
[MHI_PM_STATE_DISABLE] = "DISABLE",
[MHI_PM_STATE_POR] = "POR",
[MHI_PM_STATE_POR] = "POWER ON RESET",
[MHI_PM_STATE_M0] = "M0",
[MHI_PM_STATE_M2] = "M2",
[MHI_PM_STATE_M3_ENTER] = "M?->M3",
[MHI_PM_STATE_M3] = "M3",
[MHI_PM_STATE_M3_EXIT] = "M3->M0",
[MHI_PM_STATE_FW_DL_ERR] = "FW DL Error",
[MHI_PM_STATE_SYS_ERR_DETECT] = "SYS_ERR Detect",
[MHI_PM_STATE_SYS_ERR_PROCESS] = "SYS_ERR Process",
[MHI_PM_STATE_FW_DL_ERR] = "Firmware Download Error",
[MHI_PM_STATE_SYS_ERR_DETECT] = "SYS ERROR Detect",
[MHI_PM_STATE_SYS_ERR_PROCESS] = "SYS ERROR Process",
[MHI_PM_STATE_SHUTDOWN_PROCESS] = "SHUTDOWN Process",
[MHI_PM_STATE_LD_ERR_FATAL_DETECT] = "LD or Error Fatal Detect",
[MHI_PM_STATE_LD_ERR_FATAL_DETECT] = "Linkdown or Error Fatal Detect",
};
const char *to_mhi_pm_state_str(enum mhi_pm_state state)
@ -508,8 +516,6 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
/* Setup wake db */
mhi_cntrl->wake_db = base + val + (8 * MHI_DEV_WAKE_DB);
mhi_write_reg(mhi_cntrl, mhi_cntrl->wake_db, 4, 0);
mhi_write_reg(mhi_cntrl, mhi_cntrl->wake_db, 0, 0);
mhi_cntrl->wake_set = false;
/* Setup channel db address for each channel in tre_ring */
@ -552,6 +558,7 @@ void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
struct mhi_ring *buf_ring;
struct mhi_ring *tre_ring;
struct mhi_chan_ctxt *chan_ctxt;
u32 tmp;
buf_ring = &mhi_chan->buf_ring;
tre_ring = &mhi_chan->tre_ring;
@ -565,7 +572,19 @@ void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
vfree(buf_ring->base);
buf_ring->base = tre_ring->base = NULL;
tre_ring->ctxt_wp = NULL;
chan_ctxt->rbase = 0;
chan_ctxt->rlen = 0;
chan_ctxt->rp = 0;
chan_ctxt->wp = 0;
tmp = chan_ctxt->chcfg;
tmp &= ~CHAN_CTX_CHSTATE_MASK;
tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
chan_ctxt->chcfg = tmp;
/* Update to all cores */
smp_wmb();
}
int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
@ -863,12 +882,10 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl,
u32 soc_info;
int ret, i;
if (!mhi_cntrl)
return -EINVAL;
if (!mhi_cntrl->runtime_get || !mhi_cntrl->runtime_put ||
if (!mhi_cntrl || !mhi_cntrl->cntrl_dev || !mhi_cntrl->regs ||
!mhi_cntrl->runtime_get || !mhi_cntrl->runtime_put ||
!mhi_cntrl->status_cb || !mhi_cntrl->read_reg ||
!mhi_cntrl->write_reg || !mhi_cntrl->nr_irqs)
!mhi_cntrl->write_reg || !mhi_cntrl->nr_irqs || !mhi_cntrl->irq)
return -EINVAL;
ret = parse_config(mhi_cntrl, config);
@ -890,8 +907,7 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl,
INIT_WORK(&mhi_cntrl->st_worker, mhi_pm_st_worker);
init_waitqueue_head(&mhi_cntrl->state_event);
mhi_cntrl->hiprio_wq = alloc_ordered_workqueue
("mhi_hiprio_wq", WQ_MEM_RECLAIM | WQ_HIGHPRI);
mhi_cntrl->hiprio_wq = alloc_ordered_workqueue("mhi_hiprio_wq", WQ_HIGHPRI);
if (!mhi_cntrl->hiprio_wq) {
dev_err(mhi_cntrl->cntrl_dev, "Failed to allocate workqueue\n");
ret = -ENOMEM;
@ -1083,8 +1099,6 @@ int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl)
mhi_rddm_prepare(mhi_cntrl, mhi_cntrl->rddm_image);
}
mhi_cntrl->pre_init = true;
mutex_unlock(&mhi_cntrl->pm_mutex);
return 0;
@ -1115,7 +1129,6 @@ void mhi_unprepare_after_power_down(struct mhi_controller *mhi_cntrl)
}
mhi_deinit_dev_ctxt(mhi_cntrl);
mhi_cntrl->pre_init = false;
}
EXPORT_SYMBOL_GPL(mhi_unprepare_after_power_down);
@ -1296,7 +1309,8 @@ static int mhi_driver_remove(struct device *dev)
mutex_lock(&mhi_chan->mutex);
if (ch_state[dir] == MHI_CH_STATE_ENABLED &&
if ((ch_state[dir] == MHI_CH_STATE_ENABLED ||
ch_state[dir] == MHI_CH_STATE_STOP) &&
!mhi_chan->offload_ch)
mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan);

drivers/bus/mhi/core/internal.h

@ -369,6 +369,18 @@ enum mhi_ch_state {
MHI_CH_STATE_ERROR = 0x5,
};
enum mhi_ch_state_type {
MHI_CH_STATE_TYPE_RESET,
MHI_CH_STATE_TYPE_STOP,
MHI_CH_STATE_TYPE_START,
MHI_CH_STATE_TYPE_MAX,
};
extern const char * const mhi_ch_state_type_str[MHI_CH_STATE_TYPE_MAX];
#define TO_CH_STATE_TYPE_STR(state) (((state) >= MHI_CH_STATE_TYPE_MAX) ? \
"INVALID_STATE" : \
mhi_ch_state_type_str[(state)])
#define MHI_INVALID_BRSTMODE(mode) (mode != MHI_DB_BRST_DISABLE && \
mode != MHI_DB_BRST_ENABLE)
@ -379,13 +391,15 @@ extern const char * const mhi_ee_str[MHI_EE_MAX];
#define MHI_IN_PBL(ee) (ee == MHI_EE_PBL || ee == MHI_EE_PTHRU || \
ee == MHI_EE_EDL)
#define MHI_IN_MISSION_MODE(ee) (ee == MHI_EE_AMSS || ee == MHI_EE_WFW)
#define MHI_IN_MISSION_MODE(ee) (ee == MHI_EE_AMSS || ee == MHI_EE_WFW || \
ee == MHI_EE_FP)
enum dev_st_transition {
DEV_ST_TRANSITION_PBL,
DEV_ST_TRANSITION_READY,
DEV_ST_TRANSITION_SBL,
DEV_ST_TRANSITION_MISSION_MODE,
DEV_ST_TRANSITION_FP,
DEV_ST_TRANSITION_SYS_ERR,
DEV_ST_TRANSITION_DISABLE,
DEV_ST_TRANSITION_MAX,
@ -619,6 +633,7 @@ int mhi_pm_m3_transition(struct mhi_controller *mhi_cntrl);
int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl);
int mhi_send_cmd(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
enum mhi_cmd_type cmd);
int mhi_download_amss_image(struct mhi_controller *mhi_cntrl);
static inline bool mhi_is_active(struct mhi_controller *mhi_cntrl)
{
return (mhi_cntrl->dev_state >= MHI_STATE_M0 &&
@ -643,6 +658,9 @@ int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
void __iomem *base, u32 offset, u32 mask,
u32 shift, u32 *out);
int __must_check mhi_poll_reg_field(struct mhi_controller *mhi_cntrl,
void __iomem *base, u32 offset, u32 mask,
u32 shift, u32 val, u32 delayus);
void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
u32 offset, u32 val);
void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,

drivers/bus/mhi/core/main.c

@ -4,6 +4,7 @@
*
*/
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/dma-direction.h>
#include <linux/dma-mapping.h>
@ -37,6 +38,28 @@ int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
return 0;
}
int __must_check mhi_poll_reg_field(struct mhi_controller *mhi_cntrl,
void __iomem *base, u32 offset,
u32 mask, u32 shift, u32 val, u32 delayus)
{
int ret;
u32 out, retry = (mhi_cntrl->timeout_ms * 1000) / delayus;
while (retry--) {
ret = mhi_read_reg_field(mhi_cntrl, base, offset, mask, shift,
&out);
if (ret)
return ret;
if (out == val)
return 0;
fsleep(delayus);
}
return -ETIMEDOUT;
}
void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
u32 offset, u32 val)
{
@ -242,10 +265,17 @@ static void mhi_del_ring_element(struct mhi_controller *mhi_cntrl,
smp_wmb();
}
static bool is_valid_ring_ptr(struct mhi_ring *ring, dma_addr_t addr)
{
return addr >= ring->iommu_base && addr < ring->iommu_base + ring->len;
}
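
/* Illustration with made-up values: a ring mapped at iommu_base 0x1000 with
 * len 0x800 only accepts device-supplied pointers in [0x1000, 0x1800); any
 * other value is rejected before mhi_to_virtual() dereferences it.
 */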
int mhi_destroy_device(struct device *dev, void *data)
{
struct mhi_chan *ul_chan, *dl_chan;
struct mhi_device *mhi_dev;
struct mhi_controller *mhi_cntrl;
enum mhi_ee_type ee = MHI_EE_MAX;
if (dev->bus != &mhi_bus_type)
return 0;
@ -257,6 +287,17 @@ int mhi_destroy_device(struct device *dev, void *data)
if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
return 0;
ul_chan = mhi_dev->ul_chan;
dl_chan = mhi_dev->dl_chan;
/*
* If execution environment is specified, remove only those devices that
* started in them based on ee_mask for the channels as we move on to a
* different execution environment
*/
if (data)
ee = *(enum mhi_ee_type *)data;
/*
* For the suspend and resume case, this function will get called
* without mhi_unregister_controller(). Hence, we need to drop the
@ -264,11 +305,19 @@ int mhi_destroy_device(struct device *dev, void *data)
* be sure that there will be no instances of mhi_dev left after
* this.
*/
if (mhi_dev->ul_chan)
put_device(&mhi_dev->ul_chan->mhi_dev->dev);
if (ul_chan) {
if (ee != MHI_EE_MAX && !(ul_chan->ee_mask & BIT(ee)))
return 0;
if (mhi_dev->dl_chan)
put_device(&mhi_dev->dl_chan->mhi_dev->dev);
put_device(&ul_chan->mhi_dev->dev);
}
if (dl_chan) {
if (ee != MHI_EE_MAX && !(dl_chan->ee_mask & BIT(ee)))
return 0;
put_device(&dl_chan->mhi_dev->dev);
}
dev_dbg(&mhi_cntrl->mhi_dev->dev, "destroy device for chan:%s\n",
mhi_dev->name);
@ -383,7 +432,16 @@ irqreturn_t mhi_irq_handler(int irq_number, void *dev)
struct mhi_event_ctxt *er_ctxt =
&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
struct mhi_ring *ev_ring = &mhi_event->ring;
void *dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
dma_addr_t ptr = er_ctxt->rp;
void *dev_rp;
if (!is_valid_ring_ptr(ev_ring, ptr)) {
dev_err(&mhi_cntrl->mhi_dev->dev,
"Event ring rp points outside of the event ring\n");
return IRQ_HANDLED;
}
dev_rp = mhi_to_virtual(ev_ring, ptr);
/* Only proceed if event ring has pending events */
if (ev_ring->rp == dev_rp)
@ -407,9 +465,9 @@ irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *priv)
{
struct mhi_controller *mhi_cntrl = priv;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
enum mhi_state state = MHI_STATE_MAX;
enum mhi_state state;
enum mhi_pm_state pm_state = 0;
enum mhi_ee_type ee = 0;
enum mhi_ee_type ee;
write_lock_irq(&mhi_cntrl->pm_lock);
if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
@ -418,11 +476,11 @@ irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *priv)
}
state = mhi_get_mhi_state(mhi_cntrl);
ee = mhi_cntrl->ee;
mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl);
dev_dbg(dev, "local ee:%s device ee:%s dev_state:%s\n",
TO_MHI_EXEC_STR(mhi_cntrl->ee), TO_MHI_EXEC_STR(ee),
TO_MHI_STATE_STR(state));
ee = mhi_get_exec_env(mhi_cntrl);
dev_dbg(dev, "local ee: %s state: %s device ee: %s state: %s\n",
TO_MHI_EXEC_STR(mhi_cntrl->ee),
TO_MHI_STATE_STR(mhi_cntrl->dev_state),
TO_MHI_EXEC_STR(ee), TO_MHI_STATE_STR(state));
if (state == MHI_STATE_SYS_ERR) {
dev_dbg(dev, "System error detected\n");
@ -431,27 +489,30 @@ irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *priv)
}
write_unlock_irq(&mhi_cntrl->pm_lock);
/* If device supports RDDM don't bother processing SYS error */
if (mhi_cntrl->rddm_image) {
/* host may be performing a device power down already */
if (!mhi_is_active(mhi_cntrl))
goto exit_intvec;
if (pm_state != MHI_PM_SYS_ERR_DETECT || ee == mhi_cntrl->ee)
goto exit_intvec;
if (mhi_cntrl->ee == MHI_EE_RDDM && mhi_cntrl->ee != ee) {
switch (ee) {
case MHI_EE_RDDM:
/* proceed if power down is not already in progress */
if (mhi_cntrl->rddm_image && mhi_is_active(mhi_cntrl)) {
mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_EE_RDDM);
mhi_cntrl->ee = ee;
wake_up_all(&mhi_cntrl->state_event);
}
goto exit_intvec;
}
if (pm_state == MHI_PM_SYS_ERR_DETECT) {
break;
case MHI_EE_PBL:
case MHI_EE_EDL:
case MHI_EE_PTHRU:
mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_FATAL_ERROR);
mhi_cntrl->ee = ee;
wake_up_all(&mhi_cntrl->state_event);
/* For fatal errors, we let controller decide next step */
if (MHI_IN_PBL(ee))
mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_FATAL_ERROR);
else
mhi_pm_sys_err_handler(mhi_cntrl);
mhi_pm_sys_err_handler(mhi_cntrl);
break;
default:
wake_up_all(&mhi_cntrl->state_event);
mhi_pm_sys_err_handler(mhi_cntrl);
break;
}
exit_intvec:
@ -536,6 +597,11 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
struct mhi_buf_info *buf_info;
u16 xfer_len;
if (!is_valid_ring_ptr(tre_ring, ptr)) {
dev_err(&mhi_cntrl->mhi_dev->dev,
"Event element points outside of the tre ring\n");
break;
}
/* Get the TRB this event points to */
ev_tre = mhi_to_virtual(tre_ring, ptr);
@ -570,8 +636,11 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
/* notify client */
mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
if (mhi_chan->dir == DMA_TO_DEVICE)
if (mhi_chan->dir == DMA_TO_DEVICE) {
atomic_dec(&mhi_cntrl->pending_pkts);
/* Release the reference got from mhi_queue() */
mhi_cntrl->runtime_put(mhi_cntrl);
}
/*
* Recycle the buffer if buffer is pre-allocated,
@ -595,15 +664,15 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
case MHI_EV_CC_OOB:
case MHI_EV_CC_DB_MODE:
{
unsigned long flags;
unsigned long pm_lock_flags;
mhi_chan->db_cfg.db_mode = 1;
read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
read_lock_irqsave(&mhi_cntrl->pm_lock, pm_lock_flags);
if (tre_ring->wp != tre_ring->rp &&
MHI_DB_ACCESS_VALID(mhi_cntrl)) {
mhi_ring_chan_db(mhi_cntrl, mhi_chan);
}
read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
read_unlock_irqrestore(&mhi_cntrl->pm_lock, pm_lock_flags);
break;
}
case MHI_EV_CC_BAD_TRE:
@ -695,6 +764,12 @@ static void mhi_process_cmd_completion(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan;
u32 chan;
if (!is_valid_ring_ptr(mhi_ring, ptr)) {
dev_err(&mhi_cntrl->mhi_dev->dev,
"Event element points outside of the cmd ring\n");
return;
}
cmd_pkt = mhi_to_virtual(mhi_ring, ptr);
chan = MHI_TRE_GET_CMD_CHID(cmd_pkt);
@ -719,6 +794,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
struct device *dev = &mhi_cntrl->mhi_dev->dev;
u32 chan;
int count = 0;
dma_addr_t ptr = er_ctxt->rp;
/*
* This is a quick check to avoid unnecessary event processing
@ -728,7 +804,13 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
return -EIO;
dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
if (!is_valid_ring_ptr(ev_ring, ptr)) {
dev_err(&mhi_cntrl->mhi_dev->dev,
"Event ring rp points outside of the event ring\n");
return -EIO;
}
dev_rp = mhi_to_virtual(ev_ring, ptr);
local_rp = ev_ring->rp;
while (dev_rp != local_rp) {
@ -771,14 +853,14 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
break;
case MHI_STATE_SYS_ERR:
{
enum mhi_pm_state new_state;
enum mhi_pm_state pm_state;
dev_dbg(dev, "System error detected\n");
write_lock_irq(&mhi_cntrl->pm_lock);
new_state = mhi_tryset_pm_state(mhi_cntrl,
pm_state = mhi_tryset_pm_state(mhi_cntrl,
MHI_PM_SYS_ERR_DETECT);
write_unlock_irq(&mhi_cntrl->pm_lock);
if (new_state == MHI_PM_SYS_ERR_DETECT)
if (pm_state == MHI_PM_SYS_ERR_DETECT)
mhi_pm_sys_err_handler(mhi_cntrl);
break;
}
@ -807,6 +889,9 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
case MHI_EE_AMSS:
st = DEV_ST_TRANSITION_MISSION_MODE;
break;
case MHI_EE_FP:
st = DEV_ST_TRANSITION_FP;
break;
case MHI_EE_RDDM:
mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_EE_RDDM);
write_lock_irq(&mhi_cntrl->pm_lock);
@ -834,6 +919,8 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
*/
if (chan < mhi_cntrl->max_chan) {
mhi_chan = &mhi_cntrl->mhi_chan[chan];
if (!mhi_chan->configured)
break;
parse_xfer_event(mhi_cntrl, local_rp, mhi_chan);
event_quota--;
}
@ -845,7 +932,15 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
local_rp = ev_ring->rp;
dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
ptr = er_ctxt->rp;
if (!is_valid_ring_ptr(ev_ring, ptr)) {
dev_err(&mhi_cntrl->mhi_dev->dev,
"Event ring rp points outside of the event ring\n");
return -EIO;
}
dev_rp = mhi_to_virtual(ev_ring, ptr);
count++;
}
@ -868,11 +963,18 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
int count = 0;
u32 chan;
struct mhi_chan *mhi_chan;
dma_addr_t ptr = er_ctxt->rp;
if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
return -EIO;
dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
if (!is_valid_ring_ptr(ev_ring, ptr)) {
dev_err(&mhi_cntrl->mhi_dev->dev,
"Event ring rp points outside of the event ring\n");
return -EIO;
}
dev_rp = mhi_to_virtual(ev_ring, ptr);
local_rp = ev_ring->rp;
while (dev_rp != local_rp && event_quota > 0) {
@ -886,7 +988,8 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
* Only process the event ring elements whose channel
* ID is within the maximum supported range.
*/
if (chan < mhi_cntrl->max_chan) {
if (chan < mhi_cntrl->max_chan &&
mhi_cntrl->mhi_chan[chan].configured) {
mhi_chan = &mhi_cntrl->mhi_chan[chan];
if (likely(type == MHI_PKT_TYPE_TX_EVENT)) {
@ -900,7 +1003,15 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
local_rp = ev_ring->rp;
dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
ptr = er_ctxt->rp;
if (!is_valid_ring_ptr(ev_ring, ptr)) {
dev_err(&mhi_cntrl->mhi_dev->dev,
"Event ring rp points outside of the event ring\n");
return -EIO;
}
dev_rp = mhi_to_virtual(ev_ring, ptr);
count++;
}
read_lock_bh(&mhi_cntrl->pm_lock);
@ -996,7 +1107,7 @@ static int mhi_queue(struct mhi_device *mhi_dev, struct mhi_buf_info *buf_info,
ret = mhi_is_ring_full(mhi_cntrl, tre_ring);
if (unlikely(ret)) {
ret = -ENOMEM;
ret = -EAGAIN;
goto exit_unlock;
}
@ -1004,9 +1115,11 @@ static int mhi_queue(struct mhi_device *mhi_dev, struct mhi_buf_info *buf_info,
if (unlikely(ret))
goto exit_unlock;
/* trigger M3 exit if necessary */
if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
mhi_trigger_resume(mhi_cntrl);
/* Packet is queued, take a usage ref to exit M3 if necessary
* for host->device buffer, balanced put is done on buffer completion
* for device->host buffer, balanced put is after ringing the DB
*/
mhi_cntrl->runtime_get(mhi_cntrl);
/* Assert dev_wake (to exit/prevent M1/M2)*/
mhi_cntrl->wake_toggle(mhi_cntrl);
@ -1014,12 +1127,11 @@ static int mhi_queue(struct mhi_device *mhi_dev, struct mhi_buf_info *buf_info,
if (mhi_chan->dir == DMA_TO_DEVICE)
atomic_inc(&mhi_cntrl->pending_pkts);
if (unlikely(!MHI_DB_ACCESS_VALID(mhi_cntrl))) {
ret = -EIO;
goto exit_unlock;
}
if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl)))
mhi_ring_chan_db(mhi_cntrl, mhi_chan);
mhi_ring_chan_db(mhi_cntrl, mhi_chan);
if (dir == DMA_FROM_DEVICE)
mhi_cntrl->runtime_put(mhi_cntrl);
exit_unlock:
read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
@ -1162,6 +1274,11 @@ int mhi_send_cmd(struct mhi_controller *mhi_cntrl,
cmd_tre->dword[0] = MHI_TRE_CMD_RESET_DWORD0;
cmd_tre->dword[1] = MHI_TRE_CMD_RESET_DWORD1(chan);
break;
case MHI_CMD_STOP_CHAN:
cmd_tre->ptr = MHI_TRE_CMD_STOP_PTR;
cmd_tre->dword[0] = MHI_TRE_CMD_STOP_DWORD0;
cmd_tre->dword[1] = MHI_TRE_CMD_STOP_DWORD1(chan);
break;
case MHI_CMD_START_CHAN:
cmd_tre->ptr = MHI_TRE_CMD_START_PTR;
cmd_tre->dword[0] = MHI_TRE_CMD_START_DWORD0;
@ -1183,56 +1300,125 @@ int mhi_send_cmd(struct mhi_controller *mhi_cntrl,
return 0;
}
static void __mhi_unprepare_channel(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan)
static int mhi_update_channel_state(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan,
enum mhi_ch_state_type to_state)
{
struct device *dev = &mhi_chan->mhi_dev->dev;
enum mhi_cmd_type cmd = MHI_CMD_NOP;
int ret;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
dev_dbg(dev, "Entered: unprepare channel:%d\n", mhi_chan->chan);
dev_dbg(dev, "%d: Updating channel state to: %s\n", mhi_chan->chan,
TO_CH_STATE_TYPE_STR(to_state));
/* no more processing events for this channel */
mutex_lock(&mhi_chan->mutex);
write_lock_irq(&mhi_chan->lock);
if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED &&
mhi_chan->ch_state != MHI_CH_STATE_SUSPENDED) {
switch (to_state) {
case MHI_CH_STATE_TYPE_RESET:
write_lock_irq(&mhi_chan->lock);
if (mhi_chan->ch_state != MHI_CH_STATE_STOP &&
mhi_chan->ch_state != MHI_CH_STATE_ENABLED &&
mhi_chan->ch_state != MHI_CH_STATE_SUSPENDED) {
write_unlock_irq(&mhi_chan->lock);
return -EINVAL;
}
mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
write_unlock_irq(&mhi_chan->lock);
mutex_unlock(&mhi_chan->mutex);
return;
cmd = MHI_CMD_RESET_CHAN;
break;
case MHI_CH_STATE_TYPE_STOP:
if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED)
return -EINVAL;
cmd = MHI_CMD_STOP_CHAN;
break;
case MHI_CH_STATE_TYPE_START:
if (mhi_chan->ch_state != MHI_CH_STATE_STOP &&
mhi_chan->ch_state != MHI_CH_STATE_DISABLED)
return -EINVAL;
cmd = MHI_CMD_START_CHAN;
break;
default:
dev_err(dev, "%d: Channel state update to %s not allowed\n",
mhi_chan->chan, TO_CH_STATE_TYPE_STR(to_state));
return -EINVAL;
}
/* bring host and device out of suspended states */
ret = mhi_device_get_sync(mhi_cntrl->mhi_dev);
if (ret)
return ret;
mhi_cntrl->runtime_get(mhi_cntrl);
reinit_completion(&mhi_chan->completion);
ret = mhi_send_cmd(mhi_cntrl, mhi_chan, cmd);
if (ret) {
dev_err(dev, "%d: Failed to send %s channel command\n",
mhi_chan->chan, TO_CH_STATE_TYPE_STR(to_state));
goto exit_channel_update;
}
ret = wait_for_completion_timeout(&mhi_chan->completion,
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (!ret || mhi_chan->ccs != MHI_EV_CC_SUCCESS) {
dev_err(dev,
"%d: Failed to receive %s channel command completion\n",
mhi_chan->chan, TO_CH_STATE_TYPE_STR(to_state));
ret = -EIO;
goto exit_channel_update;
}
ret = 0;
if (to_state != MHI_CH_STATE_TYPE_RESET) {
write_lock_irq(&mhi_chan->lock);
mhi_chan->ch_state = (to_state == MHI_CH_STATE_TYPE_START) ?
MHI_CH_STATE_ENABLED : MHI_CH_STATE_STOP;
write_unlock_irq(&mhi_chan->lock);
}
dev_dbg(dev, "%d: Channel state change to %s successful\n",
mhi_chan->chan, TO_CH_STATE_TYPE_STR(to_state));
exit_channel_update:
mhi_cntrl->runtime_put(mhi_cntrl);
mhi_device_put(mhi_cntrl->mhi_dev);
return ret;
}
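
/* Sketch of the transitions driven by mhi_update_channel_state() (assumed
 * flow, matching the switch above):
 *
 *   DISABLED/STOP          -> ENABLED  via MHI_CMD_START_CHAN (TYPE_START)
 *   ENABLED                -> STOP     via MHI_CMD_STOP_CHAN  (TYPE_STOP)
 *   ENABLED/STOP/SUSPENDED -> DISABLED via MHI_CMD_RESET_CHAN (TYPE_RESET)
 */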
static void mhi_unprepare_channel(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan)
{
int ret;
struct device *dev = &mhi_chan->mhi_dev->dev;
mutex_lock(&mhi_chan->mutex);
if (!(BIT(mhi_cntrl->ee) & mhi_chan->ee_mask)) {
dev_dbg(dev, "Current EE: %s Required EE Mask: 0x%x\n",
TO_MHI_EXEC_STR(mhi_cntrl->ee), mhi_chan->ee_mask);
goto exit_unprepare_channel;
}
/* no more processing events for this channel */
ret = mhi_update_channel_state(mhi_cntrl, mhi_chan,
MHI_CH_STATE_TYPE_RESET);
if (ret)
dev_err(dev, "%d: Failed to reset channel, still resetting\n",
mhi_chan->chan);
exit_unprepare_channel:
write_lock_irq(&mhi_chan->lock);
mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
write_unlock_irq(&mhi_chan->lock);
reinit_completion(&mhi_chan->completion);
read_lock_bh(&mhi_cntrl->pm_lock);
if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
read_unlock_bh(&mhi_cntrl->pm_lock);
goto error_invalid_state;
}
mhi_cntrl->wake_toggle(mhi_cntrl);
read_unlock_bh(&mhi_cntrl->pm_lock);
mhi_cntrl->runtime_get(mhi_cntrl);
mhi_cntrl->runtime_put(mhi_cntrl);
ret = mhi_send_cmd(mhi_cntrl, mhi_chan, MHI_CMD_RESET_CHAN);
if (ret)
goto error_invalid_state;
/* even if it fails we will still reset */
ret = wait_for_completion_timeout(&mhi_chan->completion,
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (!ret || mhi_chan->ccs != MHI_EV_CC_SUCCESS)
dev_err(dev,
"Failed to receive cmd completion, still resetting\n");
error_invalid_state:
if (!mhi_chan->offload_ch) {
mhi_reset_chan(mhi_cntrl, mhi_chan);
mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan);
}
dev_dbg(dev, "chan:%d successfully resetted\n", mhi_chan->chan);
dev_dbg(dev, "%d: successfully reset\n", mhi_chan->chan);
mutex_unlock(&mhi_chan->mutex);
}
@ -1240,28 +1426,16 @@ int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan)
{
int ret = 0;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
dev_dbg(dev, "Preparing channel: %d\n", mhi_chan->chan);
struct device *dev = &mhi_chan->mhi_dev->dev;
if (!(BIT(mhi_cntrl->ee) & mhi_chan->ee_mask)) {
dev_err(dev,
"Current EE: %s Required EE Mask: 0x%x for chan: %s\n",
TO_MHI_EXEC_STR(mhi_cntrl->ee), mhi_chan->ee_mask,
mhi_chan->name);
dev_err(dev, "Current EE: %s Required EE Mask: 0x%x\n",
TO_MHI_EXEC_STR(mhi_cntrl->ee), mhi_chan->ee_mask);
return -ENOTCONN;
}
mutex_lock(&mhi_chan->mutex);
/* If channel is not in disable state, do not allow it to start */
if (mhi_chan->ch_state != MHI_CH_STATE_DISABLED) {
ret = -EIO;
dev_dbg(dev, "channel: %d is not in disabled state\n",
mhi_chan->chan);
goto error_init_chan;
}
/* Check of client manages channel context for offload channels */
if (!mhi_chan->offload_ch) {
ret = mhi_init_chan_ctxt(mhi_cntrl, mhi_chan);
@ -1269,34 +1443,11 @@ int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
goto error_init_chan;
}
reinit_completion(&mhi_chan->completion);
read_lock_bh(&mhi_cntrl->pm_lock);
if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
read_unlock_bh(&mhi_cntrl->pm_lock);
ret = -EIO;
goto error_pm_state;
}
mhi_cntrl->wake_toggle(mhi_cntrl);
read_unlock_bh(&mhi_cntrl->pm_lock);
mhi_cntrl->runtime_get(mhi_cntrl);
mhi_cntrl->runtime_put(mhi_cntrl);
ret = mhi_send_cmd(mhi_cntrl, mhi_chan, MHI_CMD_START_CHAN);
ret = mhi_update_channel_state(mhi_cntrl, mhi_chan,
MHI_CH_STATE_TYPE_START);
if (ret)
goto error_pm_state;
ret = wait_for_completion_timeout(&mhi_chan->completion,
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (!ret || mhi_chan->ccs != MHI_EV_CC_SUCCESS) {
ret = -EIO;
goto error_pm_state;
}
write_lock_irq(&mhi_chan->lock);
mhi_chan->ch_state = MHI_CH_STATE_ENABLED;
write_unlock_irq(&mhi_chan->lock);
/* Pre-allocate buffer for xfer ring */
if (mhi_chan->pre_alloc) {
int nr_el = get_nr_avail_ring_elements(mhi_cntrl,
@ -1334,9 +1485,6 @@ int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
mutex_unlock(&mhi_chan->mutex);
dev_dbg(dev, "Chan: %d successfully moved to start state\n",
mhi_chan->chan);
return 0;
error_pm_state:
@ -1350,7 +1498,7 @@ error_init_chan:
error_pre_alloc:
mutex_unlock(&mhi_chan->mutex);
__mhi_unprepare_channel(mhi_cntrl, mhi_chan);
mhi_unprepare_channel(mhi_cntrl, mhi_chan);
return ret;
}
@ -1365,6 +1513,7 @@ static void mhi_mark_stale_events(struct mhi_controller *mhi_cntrl,
struct mhi_ring *ev_ring;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
unsigned long flags;
dma_addr_t ptr;
dev_dbg(dev, "Marking all events for chan: %d as stale\n", chan);
@ -1372,7 +1521,15 @@ static void mhi_mark_stale_events(struct mhi_controller *mhi_cntrl,
/* mark all stale events related to channel as STALE event */
spin_lock_irqsave(&mhi_event->lock, flags);
dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
ptr = er_ctxt->rp;
if (!is_valid_ring_ptr(ev_ring, ptr)) {
dev_err(&mhi_cntrl->mhi_dev->dev,
"Event ring rp points outside of the event ring\n");
dev_rp = ev_ring->rp;
} else {
dev_rp = mhi_to_virtual(ev_ring, ptr);
}
local_rp = ev_ring->rp;
while (dev_rp != local_rp) {
@ -1403,8 +1560,11 @@ static void mhi_reset_data_chan(struct mhi_controller *mhi_cntrl,
while (tre_ring->rp != tre_ring->wp) {
struct mhi_buf_info *buf_info = buf_ring->rp;
if (mhi_chan->dir == DMA_TO_DEVICE)
if (mhi_chan->dir == DMA_TO_DEVICE) {
atomic_dec(&mhi_cntrl->pending_pkts);
/* Release the reference got from mhi_queue() */
mhi_cntrl->runtime_put(mhi_cntrl);
}
if (!buf_info->pre_mapped)
mhi_cntrl->unmap_single(mhi_cntrl, buf_info);
@ -1467,7 +1627,7 @@ error_open_chan:
if (!mhi_chan)
continue;
__mhi_unprepare_channel(mhi_cntrl, mhi_chan);
mhi_unprepare_channel(mhi_cntrl, mhi_chan);
}
return ret;
@ -1485,7 +1645,7 @@ void mhi_unprepare_from_transfer(struct mhi_device *mhi_dev)
if (!mhi_chan)
continue;
__mhi_unprepare_channel(mhi_cntrl, mhi_chan);
mhi_unprepare_channel(mhi_cntrl, mhi_chan);
}
}
EXPORT_SYMBOL_GPL(mhi_unprepare_from_transfer);

drivers/bus/mhi/core/pm.c

@ -153,35 +153,33 @@ static void mhi_toggle_dev_wake(struct mhi_controller *mhi_cntrl)
/* Handle device ready state transition */
int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
{
void __iomem *base = mhi_cntrl->regs;
struct mhi_event *mhi_event;
enum mhi_pm_state cur_state;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
u32 reset = 1, ready = 0;
u32 interval_us = 25000; /* poll register field every 25 milliseconds */
int ret, i;
/* Wait for RESET to be cleared and READY bit to be set by the device */
wait_event_timeout(mhi_cntrl->state_event,
MHI_PM_IN_FATAL_STATE(mhi_cntrl->pm_state) ||
mhi_read_reg_field(mhi_cntrl, base, MHICTRL,
MHICTRL_RESET_MASK,
MHICTRL_RESET_SHIFT, &reset) ||
mhi_read_reg_field(mhi_cntrl, base, MHISTATUS,
MHISTATUS_READY_MASK,
MHISTATUS_READY_SHIFT, &ready) ||
(!reset && ready),
msecs_to_jiffies(mhi_cntrl->timeout_ms));
/* Check if device entered error state */
if (MHI_PM_IN_FATAL_STATE(mhi_cntrl->pm_state)) {
dev_err(dev, "Device link is not accessible\n");
return -EIO;
}
/* Timeout if device did not transition to ready state */
if (reset || !ready) {
dev_err(dev, "Device Ready timeout\n");
return -ETIMEDOUT;
/* Wait for RESET to be cleared and READY bit to be set by the device */
ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 0,
interval_us);
if (ret) {
dev_err(dev, "Device failed to clear MHI Reset\n");
return ret;
}
ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHISTATUS,
MHISTATUS_READY_MASK, MHISTATUS_READY_SHIFT, 1,
interval_us);
if (ret) {
dev_err(dev, "Device failed to enter MHI Ready\n");
return ret;
}
dev_dbg(dev, "Device in READY State\n");
@ -377,24 +375,28 @@ static int mhi_pm_mission_mode_transition(struct mhi_controller *mhi_cntrl)
{
struct mhi_event *mhi_event;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
enum mhi_ee_type ee = MHI_EE_MAX, current_ee = mhi_cntrl->ee;
int i, ret;
dev_dbg(dev, "Processing Mission Mode transition\n");
write_lock_irq(&mhi_cntrl->pm_lock);
if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl);
ee = mhi_get_exec_env(mhi_cntrl);
if (!MHI_IN_MISSION_MODE(mhi_cntrl->ee)) {
if (!MHI_IN_MISSION_MODE(ee)) {
mhi_cntrl->pm_state = MHI_PM_LD_ERR_FATAL_DETECT;
write_unlock_irq(&mhi_cntrl->pm_lock);
wake_up_all(&mhi_cntrl->state_event);
return -EIO;
}
mhi_cntrl->ee = ee;
write_unlock_irq(&mhi_cntrl->pm_lock);
wake_up_all(&mhi_cntrl->state_event);
device_for_each_child(&mhi_cntrl->mhi_dev->dev, &current_ee,
mhi_destroy_device);
mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_EE_MISSION_MODE);
/* Force MHI to be in M0 state before continuing */
@ -560,6 +562,7 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl)
static void mhi_pm_sys_error_transition(struct mhi_controller *mhi_cntrl)
{
enum mhi_pm_state cur_state, prev_state;
enum dev_st_transition next_state;
struct mhi_event *mhi_event;
struct mhi_cmd_ctxt *cmd_ctxt;
struct mhi_cmd *mhi_cmd;
@ -673,7 +676,23 @@ static void mhi_pm_sys_error_transition(struct mhi_controller *mhi_cntrl)
er_ctxt->wp = er_ctxt->rbase;
}
mhi_ready_state_transition(mhi_cntrl);
/* Transition to next state */
if (MHI_IN_PBL(mhi_get_exec_env(mhi_cntrl))) {
write_lock_irq(&mhi_cntrl->pm_lock);
cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_POR);
write_unlock_irq(&mhi_cntrl->pm_lock);
if (cur_state != MHI_PM_POR) {
dev_err(dev, "Error moving to state %s from %s\n",
to_mhi_pm_state_str(MHI_PM_POR),
to_mhi_pm_state_str(cur_state));
goto exit_sys_error_transition;
}
next_state = DEV_ST_TRANSITION_PBL;
} else {
next_state = DEV_ST_TRANSITION_READY;
}
mhi_queue_state_transition(mhi_cntrl, next_state);
exit_sys_error_transition:
dev_dbg(dev, "Exiting with PM state: %s, MHI state: %s\n",
@ -742,8 +761,7 @@ void mhi_pm_st_worker(struct work_struct *work)
if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl);
write_unlock_irq(&mhi_cntrl->pm_lock);
if (MHI_IN_PBL(mhi_cntrl->ee))
mhi_fw_load_handler(mhi_cntrl);
mhi_fw_load_handler(mhi_cntrl);
break;
case DEV_ST_TRANSITION_SBL:
write_lock_irq(&mhi_cntrl->pm_lock);
@ -755,10 +773,18 @@ void mhi_pm_st_worker(struct work_struct *work)
* either SBL or AMSS states
*/
mhi_create_devices(mhi_cntrl);
if (mhi_cntrl->fbc_download)
mhi_download_amss_image(mhi_cntrl);
break;
case DEV_ST_TRANSITION_MISSION_MODE:
mhi_pm_mission_mode_transition(mhi_cntrl);
break;
case DEV_ST_TRANSITION_FP:
write_lock_irq(&mhi_cntrl->pm_lock);
mhi_cntrl->ee = MHI_EE_FP;
write_unlock_irq(&mhi_cntrl->pm_lock);
mhi_create_devices(mhi_cntrl);
break;
case DEV_ST_TRANSITION_READY:
mhi_ready_state_transition(mhi_cntrl);
break;
@ -822,7 +848,7 @@ int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)
return -EBUSY;
}
dev_info(dev, "Allowing M3 transition\n");
dev_dbg(dev, "Allowing M3 transition\n");
new_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M3_ENTER);
if (new_state != MHI_PM_M3_ENTER) {
write_unlock_irq(&mhi_cntrl->pm_lock);
@ -836,7 +862,7 @@ int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)
/* Set MHI to M3 and wait for completion */
mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M3);
write_unlock_irq(&mhi_cntrl->pm_lock);
dev_info(dev, "Wait for M3 completion\n");
dev_dbg(dev, "Waiting for M3 completion\n");
ret = wait_event_timeout(mhi_cntrl->state_event,
mhi_cntrl->dev_state == MHI_STATE_M3 ||
@ -870,9 +896,9 @@ int mhi_pm_resume(struct mhi_controller *mhi_cntrl)
enum mhi_pm_state cur_state;
int ret;
dev_info(dev, "Entered with PM state: %s, MHI state: %s\n",
to_mhi_pm_state_str(mhi_cntrl->pm_state),
TO_MHI_STATE_STR(mhi_cntrl->dev_state));
dev_dbg(dev, "Entered with PM state: %s, MHI state: %s\n",
to_mhi_pm_state_str(mhi_cntrl->pm_state),
TO_MHI_STATE_STR(mhi_cntrl->dev_state));
if (mhi_cntrl->pm_state == MHI_PM_DISABLE)
return 0;
@ -880,6 +906,9 @@ int mhi_pm_resume(struct mhi_controller *mhi_cntrl)
if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
return -EIO;
if (mhi_get_mhi_state(mhi_cntrl) != MHI_STATE_M3)
return -EINVAL;
/* Notify clients about exiting LPM */
list_for_each_entry_safe(itr, tmp, &mhi_cntrl->lpm_chans, node) {
mutex_lock(&itr->mutex);
@ -1033,13 +1062,6 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
mutex_lock(&mhi_cntrl->pm_mutex);
mhi_cntrl->pm_state = MHI_PM_DISABLE;
if (!mhi_cntrl->pre_init) {
/* Setup device context */
ret = mhi_init_dev_ctxt(mhi_cntrl);
if (ret)
goto error_dev_ctxt;
}
ret = mhi_init_irq_setup(mhi_cntrl);
if (ret)
goto error_setup_irq;
@ -1092,7 +1114,7 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
&val) ||
!val,
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (ret) {
if (!ret) {
ret = -EIO;
dev_info(dev, "Failed to reset MHI due to syserr state\n");
goto error_bhi_offset;
@ -1121,10 +1143,7 @@ error_bhi_offset:
mhi_deinit_free_irq(mhi_cntrl);
error_setup_irq:
if (!mhi_cntrl->pre_init)
mhi_deinit_dev_ctxt(mhi_cntrl);
error_dev_ctxt:
mhi_cntrl->pm_state = MHI_PM_DISABLE;
mutex_unlock(&mhi_cntrl->pm_mutex);
return ret;
@ -1136,12 +1155,19 @@ void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful)
enum mhi_pm_state cur_state, transition_state;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
mutex_lock(&mhi_cntrl->pm_mutex);
write_lock_irq(&mhi_cntrl->pm_lock);
cur_state = mhi_cntrl->pm_state;
if (cur_state == MHI_PM_DISABLE) {
write_unlock_irq(&mhi_cntrl->pm_lock);
mutex_unlock(&mhi_cntrl->pm_mutex);
return; /* Already powered down */
}
/* If it's not a graceful shutdown, force MHI to linkdown state */
transition_state = (graceful) ? MHI_PM_SHUTDOWN_PROCESS :
MHI_PM_LD_ERR_FATAL_DETECT;
mutex_lock(&mhi_cntrl->pm_mutex);
write_lock_irq(&mhi_cntrl->pm_lock);
cur_state = mhi_tryset_pm_state(mhi_cntrl, transition_state);
if (cur_state != transition_state) {
dev_err(dev, "Failed to move to state: %s from: %s\n",
@ -1166,15 +1192,6 @@ void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful)
flush_work(&mhi_cntrl->st_worker);
free_irq(mhi_cntrl->irq[0], mhi_cntrl);
if (!mhi_cntrl->pre_init) {
/* Free all allocated resources */
if (mhi_cntrl->fbc_image) {
mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image);
mhi_cntrl->fbc_image = NULL;
}
mhi_deinit_dev_ctxt(mhi_cntrl);
}
}
EXPORT_SYMBOL_GPL(mhi_power_down);

drivers/bus/mhi/pci_generic.c

@ -14,6 +14,7 @@
#include <linux/mhi.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/pm_runtime.h>
#include <linux/timer.h>
#include <linux/workqueue.h>
@ -71,9 +72,9 @@ struct mhi_pci_dev_info {
.doorbell_mode_switch = false, \
}
#define MHI_EVENT_CONFIG_CTRL(ev_ring) \
#define MHI_EVENT_CONFIG_CTRL(ev_ring, el_count) \
{ \
.num_elements = 64, \
.num_elements = el_count, \
.irq_moderation_ms = 0, \
.irq = (ev_ring) + 1, \
.priority = 1, \
@ -114,9 +115,69 @@ struct mhi_pci_dev_info {
.doorbell_mode_switch = true, \
}
#define MHI_EVENT_CONFIG_DATA(ev_ring) \
#define MHI_CHANNEL_CONFIG_UL_SBL(ch_num, ch_name, el_count, ev_ring) \
{ \
.num = ch_num, \
.name = ch_name, \
.num_elements = el_count, \
.event_ring = ev_ring, \
.dir = DMA_TO_DEVICE, \
.ee_mask = BIT(MHI_EE_SBL), \
.pollcfg = 0, \
.doorbell = MHI_DB_BRST_DISABLE, \
.lpm_notify = false, \
.offload_channel = false, \
.doorbell_mode_switch = false, \
} \
#define MHI_CHANNEL_CONFIG_DL_SBL(ch_num, ch_name, el_count, ev_ring) \
{ \
.num = ch_num, \
.name = ch_name, \
.num_elements = el_count, \
.event_ring = ev_ring, \
.dir = DMA_FROM_DEVICE, \
.ee_mask = BIT(MHI_EE_SBL), \
.pollcfg = 0, \
.doorbell = MHI_DB_BRST_DISABLE, \
.lpm_notify = false, \
.offload_channel = false, \
.doorbell_mode_switch = false, \
}
#define MHI_CHANNEL_CONFIG_UL_FP(ch_num, ch_name, el_count, ev_ring) \
{ \
.num = ch_num, \
.name = ch_name, \
.num_elements = el_count, \
.event_ring = ev_ring, \
.dir = DMA_TO_DEVICE, \
.ee_mask = BIT(MHI_EE_FP), \
.pollcfg = 0, \
.doorbell = MHI_DB_BRST_DISABLE, \
.lpm_notify = false, \
.offload_channel = false, \
.doorbell_mode_switch = false, \
} \
#define MHI_CHANNEL_CONFIG_DL_FP(ch_num, ch_name, el_count, ev_ring) \
{ \
.num = ch_num, \
.name = ch_name, \
.num_elements = el_count, \
.event_ring = ev_ring, \
.dir = DMA_FROM_DEVICE, \
.ee_mask = BIT(MHI_EE_FP), \
.pollcfg = 0, \
.doorbell = MHI_DB_BRST_DISABLE, \
.lpm_notify = false, \
.offload_channel = false, \
.doorbell_mode_switch = false, \
}
#define MHI_EVENT_CONFIG_DATA(ev_ring, el_count) \
{ \
.num_elements = 128, \
.num_elements = el_count, \
.irq_moderation_ms = 5, \
.irq = (ev_ring) + 1, \
.priority = 1, \
@ -127,9 +188,9 @@ struct mhi_pci_dev_info {
.offload_channel = false, \
}
#define MHI_EVENT_CONFIG_HW_DATA(ev_ring, ch_num) \
#define MHI_EVENT_CONFIG_HW_DATA(ev_ring, el_count, ch_num) \
{ \
.num_elements = 2048, \
.num_elements = el_count, \
.irq_moderation_ms = 1, \
.irq = (ev_ring) + 1, \
.priority = 1, \
@ -150,21 +211,23 @@ static const struct mhi_channel_config modem_qcom_v1_mhi_channels[] = {
MHI_CHANNEL_CONFIG_DL(15, "QMI", 4, 0),
MHI_CHANNEL_CONFIG_UL(20, "IPCR", 8, 0),
MHI_CHANNEL_CONFIG_DL(21, "IPCR", 8, 0),
MHI_CHANNEL_CONFIG_UL_FP(34, "FIREHOSE", 32, 0),
MHI_CHANNEL_CONFIG_DL_FP(35, "FIREHOSE", 32, 0),
MHI_CHANNEL_CONFIG_HW_UL(100, "IP_HW0", 128, 2),
MHI_CHANNEL_CONFIG_HW_DL(101, "IP_HW0", 128, 3),
};
static struct mhi_event_config modem_qcom_v1_mhi_events[] = {
/* first ring is control+data ring */
MHI_EVENT_CONFIG_CTRL(0),
MHI_EVENT_CONFIG_CTRL(0, 64),
/* DIAG dedicated event ring */
MHI_EVENT_CONFIG_DATA(1),
MHI_EVENT_CONFIG_DATA(1, 128),
/* Hardware channels request dedicated hardware event rings */
MHI_EVENT_CONFIG_HW_DATA(2, 100),
MHI_EVENT_CONFIG_HW_DATA(3, 101)
MHI_EVENT_CONFIG_HW_DATA(2, 1024, 100),
MHI_EVENT_CONFIG_HW_DATA(3, 2048, 101)
};
static struct mhi_controller_config modem_qcom_v1_mhiv_config = {
static const struct mhi_controller_config modem_qcom_v1_mhiv_config = {
.max_channels = 128,
.timeout_ms = 8000,
.num_channels = ARRAY_SIZE(modem_qcom_v1_mhi_channels),
@ -173,6 +236,15 @@ static struct mhi_controller_config modem_qcom_v1_mhiv_config = {
.event_cfg = modem_qcom_v1_mhi_events,
};
static const struct mhi_pci_dev_info mhi_qcom_sdx65_info = {
.name = "qcom-sdx65m",
.fw = "qcom/sdx65m/xbl.elf",
.edl = "qcom/sdx65m/edl.mbn",
.config = &modem_qcom_v1_mhiv_config,
.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
.dma_data_width = 32
};
static const struct mhi_pci_dev_info mhi_qcom_sdx55_info = {
.name = "qcom-sdx55m",
.fw = "qcom/sdx55m/sbl1.mbn",
@ -182,15 +254,121 @@ static const struct mhi_pci_dev_info mhi_qcom_sdx55_info = {
.dma_data_width = 32
};
static const struct mhi_pci_dev_info mhi_qcom_sdx24_info = {
.name = "qcom-sdx24",
.edl = "qcom/prog_firehose_sdx24.mbn",
.config = &modem_qcom_v1_mhiv_config,
.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
.dma_data_width = 32
};
static const struct mhi_channel_config mhi_quectel_em1xx_channels[] = {
MHI_CHANNEL_CONFIG_UL(0, "NMEA", 32, 0),
MHI_CHANNEL_CONFIG_DL(1, "NMEA", 32, 0),
MHI_CHANNEL_CONFIG_UL_SBL(2, "SAHARA", 32, 0),
MHI_CHANNEL_CONFIG_DL_SBL(3, "SAHARA", 32, 0),
MHI_CHANNEL_CONFIG_UL(4, "DIAG", 32, 1),
MHI_CHANNEL_CONFIG_DL(5, "DIAG", 32, 1),
MHI_CHANNEL_CONFIG_UL(12, "MBIM", 32, 0),
MHI_CHANNEL_CONFIG_DL(13, "MBIM", 32, 0),
MHI_CHANNEL_CONFIG_UL(32, "DUN", 32, 0),
MHI_CHANNEL_CONFIG_DL(33, "DUN", 32, 0),
/* The EDL firmware is a flash-programmer exposing firehose protocol */
MHI_CHANNEL_CONFIG_UL_FP(34, "FIREHOSE", 32, 0),
MHI_CHANNEL_CONFIG_DL_FP(35, "FIREHOSE", 32, 0),
MHI_CHANNEL_CONFIG_HW_UL(100, "IP_HW0_MBIM", 128, 2),
MHI_CHANNEL_CONFIG_HW_DL(101, "IP_HW0_MBIM", 128, 3),
};
static struct mhi_event_config mhi_quectel_em1xx_events[] = {
MHI_EVENT_CONFIG_CTRL(0, 128),
MHI_EVENT_CONFIG_DATA(1, 128),
MHI_EVENT_CONFIG_HW_DATA(2, 1024, 100),
MHI_EVENT_CONFIG_HW_DATA(3, 1024, 101)
};
static const struct mhi_controller_config modem_quectel_em1xx_config = {
.max_channels = 128,
.timeout_ms = 20000,
.num_channels = ARRAY_SIZE(mhi_quectel_em1xx_channels),
.ch_cfg = mhi_quectel_em1xx_channels,
.num_events = ARRAY_SIZE(mhi_quectel_em1xx_events),
.event_cfg = mhi_quectel_em1xx_events,
};
static const struct mhi_pci_dev_info mhi_quectel_em1xx_info = {
.name = "quectel-em1xx",
.edl = "qcom/prog_firehose_sdx24.mbn",
.config = &modem_quectel_em1xx_config,
.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
.dma_data_width = 32
};
static const struct mhi_channel_config mhi_foxconn_sdx55_channels[] = {
MHI_CHANNEL_CONFIG_UL(0, "LOOPBACK", 32, 0),
MHI_CHANNEL_CONFIG_DL(1, "LOOPBACK", 32, 0),
MHI_CHANNEL_CONFIG_UL(4, "DIAG", 32, 1),
MHI_CHANNEL_CONFIG_DL(5, "DIAG", 32, 1),
MHI_CHANNEL_CONFIG_UL(12, "MBIM", 32, 0),
MHI_CHANNEL_CONFIG_DL(13, "MBIM", 32, 0),
MHI_CHANNEL_CONFIG_UL(32, "AT", 32, 0),
MHI_CHANNEL_CONFIG_DL(33, "AT", 32, 0),
MHI_CHANNEL_CONFIG_HW_UL(100, "IP_HW0_MBIM", 128, 2),
MHI_CHANNEL_CONFIG_HW_DL(101, "IP_HW0_MBIM", 128, 3),
};
static struct mhi_event_config mhi_foxconn_sdx55_events[] = {
MHI_EVENT_CONFIG_CTRL(0, 128),
MHI_EVENT_CONFIG_DATA(1, 128),
MHI_EVENT_CONFIG_HW_DATA(2, 1024, 100),
MHI_EVENT_CONFIG_HW_DATA(3, 1024, 101)
};
static const struct mhi_controller_config modem_foxconn_sdx55_config = {
.max_channels = 128,
.timeout_ms = 20000,
.num_channels = ARRAY_SIZE(mhi_foxconn_sdx55_channels),
.ch_cfg = mhi_foxconn_sdx55_channels,
.num_events = ARRAY_SIZE(mhi_foxconn_sdx55_events),
.event_cfg = mhi_foxconn_sdx55_events,
};
static const struct mhi_pci_dev_info mhi_foxconn_sdx55_info = {
.name = "foxconn-sdx55",
.fw = "qcom/sdx55m/sbl1.mbn",
.edl = "qcom/sdx55m/edl.mbn",
.config = &modem_foxconn_sdx55_config,
.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
.dma_data_width = 32
};
static const struct pci_device_id mhi_pci_id_table[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0306),
.driver_data = (kernel_ulong_t) &mhi_qcom_sdx55_info },
{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0304),
.driver_data = (kernel_ulong_t) &mhi_qcom_sdx24_info },
{ PCI_DEVICE(0x1eac, 0x1001), /* EM120R-GL (sdx24) */
.driver_data = (kernel_ulong_t) &mhi_quectel_em1xx_info },
{ PCI_DEVICE(0x1eac, 0x1002), /* EM160R-GL (sdx24) */
.driver_data = (kernel_ulong_t) &mhi_quectel_em1xx_info },
{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0308),
.driver_data = (kernel_ulong_t) &mhi_qcom_sdx65_info },
/* T99W175 (sdx55), Both for eSIM and Non-eSIM */
{ PCI_DEVICE(PCI_VENDOR_ID_FOXCONN, 0xe0ab),
.driver_data = (kernel_ulong_t) &mhi_foxconn_sdx55_info },
/* DW5930e (sdx55), With eSIM, It's also T99W175 */
{ PCI_DEVICE(PCI_VENDOR_ID_FOXCONN, 0xe0b0),
.driver_data = (kernel_ulong_t) &mhi_foxconn_sdx55_info },
/* DW5930e (sdx55), Non-eSIM, It's also T99W175 */
{ PCI_DEVICE(PCI_VENDOR_ID_FOXCONN, 0xe0b1),
.driver_data = (kernel_ulong_t) &mhi_foxconn_sdx55_info },
{ }
};
MODULE_DEVICE_TABLE(pci, mhi_pci_id_table);
enum mhi_pci_device_status {
MHI_PCI_DEV_STARTED,
MHI_PCI_DEV_SUSPENDED,
};
struct mhi_pci_device {
@ -224,12 +402,31 @@ static void mhi_pci_status_cb(struct mhi_controller *mhi_cntrl,
case MHI_CB_FATAL_ERROR:
case MHI_CB_SYS_ERROR:
dev_warn(&pdev->dev, "firmware crashed (%u)\n", cb);
pm_runtime_forbid(&pdev->dev);
break;
case MHI_CB_EE_MISSION_MODE:
pm_runtime_allow(&pdev->dev);
break;
default:
break;
}
}
static void mhi_pci_wake_get_nop(struct mhi_controller *mhi_cntrl, bool force)
{
/* no-op */
}
static void mhi_pci_wake_put_nop(struct mhi_controller *mhi_cntrl, bool override)
{
/* no-op */
}
static void mhi_pci_wake_toggle_nop(struct mhi_controller *mhi_cntrl)
{
/* no-op */
}
static bool mhi_pci_is_alive(struct mhi_controller *mhi_cntrl)
{
struct pci_dev *pdev = to_pci_dev(mhi_cntrl->cntrl_dev);
@ -330,13 +527,19 @@ static int mhi_pci_get_irqs(struct mhi_controller *mhi_cntrl,
static int mhi_pci_runtime_get(struct mhi_controller *mhi_cntrl)
{
/* no PM for now */
return 0;
/* The runtime_get() MHI callback means:
* Do whatever is requested to leave M3.
*/
return pm_runtime_get(mhi_cntrl->cntrl_dev);
}
static void mhi_pci_runtime_put(struct mhi_controller *mhi_cntrl)
{
/* no PM for now */
/* The runtime_put() MHI callback means:
* Device can be moved in M3 state.
*/
pm_runtime_mark_last_busy(mhi_cntrl->cntrl_dev);
pm_runtime_put(mhi_cntrl->cntrl_dev);
}
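
/* Hedged summary of how these callbacks pair with mhi_queue() in main.c
 * above:
 *   TX: runtime_get() when the buffer is queued, runtime_put() on transfer
 *       completion, so the link stays in M0 until TX is done.
 *   RX: runtime_get() when the buffer is queued, runtime_put() right after
 *       ringing the doorbell, so the device may enter M3 while idle.
 */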
static void mhi_pci_recovery_work(struct work_struct *work)
@ -350,6 +553,7 @@ static void mhi_pci_recovery_work(struct work_struct *work)
dev_warn(&pdev->dev, "device recovery started\n");
del_timer(&mhi_pdev->health_check_timer);
pm_runtime_forbid(&pdev->dev);
/* Clean up MHI state */
if (test_and_clear_bit(MHI_PCI_DEV_STARTED, &mhi_pdev->status)) {
@ -357,7 +561,6 @@ static void mhi_pci_recovery_work(struct work_struct *work)
mhi_unprepare_after_power_down(mhi_cntrl);
}
/* Check if we can recover without full reset */
pci_set_power_state(pdev, PCI_D0);
pci_load_saved_state(pdev, mhi_pdev->pci_state);
pci_restore_state(pdev);
@ -391,6 +594,10 @@ static void health_check(struct timer_list *t)
struct mhi_pci_device *mhi_pdev = from_timer(mhi_pdev, t, health_check_timer);
struct mhi_controller *mhi_cntrl = &mhi_pdev->mhi_cntrl;
if (!test_bit(MHI_PCI_DEV_STARTED, &mhi_pdev->status) ||
test_bit(MHI_PCI_DEV_SUSPENDED, &mhi_pdev->status))
return;
if (!mhi_pci_is_alive(mhi_cntrl)) {
dev_err(mhi_cntrl->cntrl_dev, "Device died\n");
queue_work(system_long_wq, &mhi_pdev->recovery_work);
@ -433,6 +640,9 @@ static int mhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
mhi_cntrl->status_cb = mhi_pci_status_cb;
mhi_cntrl->runtime_get = mhi_pci_runtime_get;
mhi_cntrl->runtime_put = mhi_pci_runtime_put;
mhi_cntrl->wake_get = mhi_pci_wake_get_nop;
mhi_cntrl->wake_put = mhi_pci_wake_put_nop;
mhi_cntrl->wake_toggle = mhi_pci_wake_toggle_nop;
err = mhi_pci_claim(mhi_cntrl, info->bar_num, DMA_BIT_MASK(info->dma_data_width));
if (err)
@ -444,9 +654,12 @@ static int mhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
pci_set_drvdata(pdev, mhi_pdev);
/* Have stored pci confspace at hand for restore in sudden PCI error */
/* Have stored pci confspace at hand for restore in sudden PCI error.
* cache the state locally and discard the PCI core one.
*/
pci_save_state(pdev);
mhi_pdev->pci_state = pci_store_saved_state(pdev);
pci_load_saved_state(pdev, NULL);
pci_enable_pcie_error_reporting(pdev);
@ -472,6 +685,14 @@ static int mhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
/* start health check */
mod_timer(&mhi_pdev->health_check_timer, jiffies + HEALTH_CHECK_PERIOD);
/* Only allow runtime-suspend if PME capable (for wakeup) */
if (pci_pme_capable(pdev, PCI_D3hot)) {
pm_runtime_set_autosuspend_delay(&pdev->dev, 2000);
pm_runtime_use_autosuspend(&pdev->dev);
pm_runtime_mark_last_busy(&pdev->dev);
pm_runtime_put_noidle(&pdev->dev);
}
return 0;
err_unprepare:
@ -495,9 +716,19 @@ static void mhi_pci_remove(struct pci_dev *pdev)
mhi_unprepare_after_power_down(mhi_cntrl);
}
+	/* balancing probe put_noidle */
+	if (pci_pme_capable(pdev, PCI_D3hot))
+		pm_runtime_get_noresume(&pdev->dev);
mhi_unregister_controller(mhi_cntrl);
}
+static void mhi_pci_shutdown(struct pci_dev *pdev)
+{
+	mhi_pci_remove(pdev);
+	pci_set_power_state(pdev, PCI_D3hot);
+}
static void mhi_pci_reset_prepare(struct pci_dev *pdev)
{
struct mhi_pci_device *mhi_pdev = pci_get_drvdata(pdev);
@@ -605,41 +836,59 @@ static const struct pci_error_handlers mhi_pci_err_handler = {
.reset_done = mhi_pci_reset_done,
};
-static int __maybe_unused mhi_pci_suspend(struct device *dev)
-{
-	struct pci_dev *pdev = to_pci_dev(dev);
-	struct mhi_pci_device *mhi_pdev = dev_get_drvdata(dev);
-	struct mhi_controller *mhi_cntrl = &mhi_pdev->mhi_cntrl;
-
-	del_timer(&mhi_pdev->health_check_timer);
-	cancel_work_sync(&mhi_pdev->recovery_work);
-
-	/* Transition to M3 state */
-	mhi_pm_suspend(mhi_cntrl);
-
-	pci_save_state(pdev);
-	pci_disable_device(pdev);
-	pci_wake_from_d3(pdev, true);
-	pci_set_power_state(pdev, PCI_D3hot);
-
-	return 0;
-}
-
-static int __maybe_unused mhi_pci_resume(struct device *dev)
+static int __maybe_unused mhi_pci_runtime_suspend(struct device *dev)
{
struct pci_dev *pdev = to_pci_dev(dev);
struct mhi_pci_device *mhi_pdev = dev_get_drvdata(dev);
struct mhi_controller *mhi_cntrl = &mhi_pdev->mhi_cntrl;
int err;
-	pci_set_power_state(pdev, PCI_D0);
-	pci_restore_state(pdev);
-	pci_set_master(pdev);
+	if (test_and_set_bit(MHI_PCI_DEV_SUSPENDED, &mhi_pdev->status))
+		return 0;
+
+	del_timer(&mhi_pdev->health_check_timer);
+	cancel_work_sync(&mhi_pdev->recovery_work);
+
+	if (!test_bit(MHI_PCI_DEV_STARTED, &mhi_pdev->status) ||
+			mhi_cntrl->ee != MHI_EE_AMSS)
+		goto pci_suspend; /* Nothing to do at MHI level */
+
+	/* Transition to M3 state */
+	err = mhi_pm_suspend(mhi_cntrl);
+	if (err) {
+		dev_err(&pdev->dev, "failed to suspend device: %d\n", err);
+		clear_bit(MHI_PCI_DEV_SUSPENDED, &mhi_pdev->status);
+		return -EBUSY;
+	}
+
+pci_suspend:
+	pci_disable_device(pdev);
+	pci_wake_from_d3(pdev, true);
+
+	return 0;
+}
+static int __maybe_unused mhi_pci_runtime_resume(struct device *dev)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+	struct mhi_pci_device *mhi_pdev = dev_get_drvdata(dev);
+	struct mhi_controller *mhi_cntrl = &mhi_pdev->mhi_cntrl;
+	int err;
+
+	if (!test_and_clear_bit(MHI_PCI_DEV_SUSPENDED, &mhi_pdev->status))
+		return 0;
+
+	err = pci_enable_device(pdev);
+	if (err)
+		goto err_recovery;
+
+	pci_set_master(pdev);
+	pci_wake_from_d3(pdev, false);
+
+	if (!test_bit(MHI_PCI_DEV_STARTED, &mhi_pdev->status) ||
+			mhi_cntrl->ee != MHI_EE_AMSS)
+		return 0; /* Nothing to do at MHI level */
/* Exit M3, transition to M0 state */
err = mhi_pm_resume(mhi_cntrl);
if (err) {
@@ -650,16 +899,44 @@ static int __maybe_unused mhi_pci_resume(struct device *dev)
/* Resume health check */
mod_timer(&mhi_pdev->health_check_timer, jiffies + HEALTH_CHECK_PERIOD);
+	/* It can be a remote wakeup (no mhi runtime_get), update access time */
+	pm_runtime_mark_last_busy(dev);
return 0;
err_recovery:
-	/* The device may have loose power or crashed, try recovering it */
+	/* Do not fail to not mess up our PCI device state, the device likely
+	 * lost power (d3cold) and we simply need to reset it from the recovery
+	 * procedure, trigger the recovery asynchronously to prevent system
+	 * suspend exit delaying.
+	 */
queue_work(system_long_wq, &mhi_pdev->recovery_work);
+	pm_runtime_mark_last_busy(dev);
-	return err;
+	return 0;
}
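
The tail of mhi_pci_runtime_resume() deserves a second look: resume errors are deliberately swallowed (return 0 replaces the old return err) because failing the callback would leave the PM core and the PCI device state out of sync. The device most likely lost power in D3cold, the fix is a full reset, and that reset is queued to system_long_wq so exit from system suspend is not delayed. The shape of that pattern, reduced to a sketch (my_runtime_resume() and my_hw_resume() are hypothetical; the workqueue API is real):

	#include <linux/errno.h>
	#include <linux/workqueue.h>

	static struct work_struct my_recovery_work;	/* INIT_WORK()ed at probe */

	static int my_hw_resume(struct device *dev)
	{
		return -ETIMEDOUT;	/* stub: pretend the device is gone */
	}

	static int my_runtime_resume(struct device *dev)
	{
		if (my_hw_resume(dev)) {
			/* Heavy reset work runs later, off the PM path */
			queue_work(system_long_wq, &my_recovery_work);
		}

		return 0;	/* always succeed from the PM core's view */
	}
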
+static int __maybe_unused mhi_pci_suspend(struct device *dev)
+{
+	pm_runtime_disable(dev);
+	return mhi_pci_runtime_suspend(dev);
+}
+
+static int __maybe_unused mhi_pci_resume(struct device *dev)
+{
+	int ret;
+
+	/* Depending the platform, device may have lost power (d3cold), we need
+	 * to resume it now to check its state and recover when necessary.
+	 */
+	ret = mhi_pci_runtime_resume(dev);
+	pm_runtime_enable(dev);
+
+	return ret;
+}
static const struct dev_pm_ops mhi_pci_pm_ops = {
+	SET_RUNTIME_PM_OPS(mhi_pci_runtime_suspend, mhi_pci_runtime_resume, NULL)
SET_SYSTEM_SLEEP_PM_OPS(mhi_pci_suspend, mhi_pci_resume)
};
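
Two details make this table work. First, the system-sleep wrappers disable runtime PM before delegating, so the two state machines cannot race during a sleep transition. Second, MHI_PCI_DEV_SUSPENDED turns both entry points into idempotent operations: system sleep can safely call mhi_pci_runtime_suspend() even when the device is already runtime-suspended. That guard is the classic atomic-bitops idiom; a minimal sketch (MY_DEV_SUSPENDED and my_status are hypothetical):

	#include <linux/bitops.h>

	#define MY_DEV_SUSPENDED	0	/* bit number in the flags word */
	static unsigned long my_status;

	static int my_suspend(void)
	{
		/* test_and_set_bit() returns the previous value: a second
		 * caller sees 1 and backs off, so the body runs exactly once.
		 */
		if (test_and_set_bit(MY_DEV_SUSPENDED, &my_status))
			return 0;	/* already suspended */
		/* ... real suspend work ... */
		return 0;
	}

	static int my_resume(void)
	{
		if (!test_and_clear_bit(MY_DEV_SUSPENDED, &my_status))
			return 0;	/* was not suspended */
		/* ... real resume work ... */
		return 0;
	}
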
@@ -668,6 +945,7 @@ static struct pci_driver mhi_pci_driver = {
.id_table = mhi_pci_id_table,
.probe = mhi_pci_probe,
.remove = mhi_pci_remove,
+	.shutdown = mhi_pci_shutdown,
.err_handler = &mhi_pci_err_handler,
.driver.pm = &mhi_pci_pm_ops
};

diff --git a/include/linux/mhi.h b/include/linux/mhi.h

@@ -117,6 +117,7 @@ struct mhi_link_info {
* @MHI_EE_WFW: WLAN firmware mode
* @MHI_EE_PTHRU: Passthrough
* @MHI_EE_EDL: Embedded downloader
+ * @MHI_EE_FP: Flash Programmer Environment
*/
enum mhi_ee_type {
MHI_EE_PBL,
@@ -126,7 +127,8 @@ enum mhi_ee_type {
MHI_EE_WFW,
MHI_EE_PTHRU,
MHI_EE_EDL,
-	MHI_EE_MAX_SUPPORTED = MHI_EE_EDL,
+	MHI_EE_FP,
+	MHI_EE_MAX_SUPPORTED = MHI_EE_FP,
MHI_EE_DISABLE_TRANSITION, /* local EE, not related to mhi spec */
MHI_EE_NOT_SUPPORTED,
MHI_EE_MAX,
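
With MHI_EE_FP defined, a controller driver can describe channels that only exist while the modem runs the flash-programmer firmware by including the new environment in the channel's ee_mask. A hedged sketch follows; the field names come from the struct mhi_channel_config kernel-doc below, channel 34 is one of the EDL channels called out in the merge description, and the element counts are illustrative:

	/* Sketch: an EDL channel exposed only in the Flash Programmer EE */
	static const struct mhi_channel_config my_edl_ul_channel = {
		.name = "EDL",
		.num = 34,
		.num_elements = 64,
		.event_ring = 1,
		.dir = DMA_TO_DEVICE,
		.ee_mask = BIT(MHI_EE_FP),	/* hidden in all other EEs */
	};
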
@@ -203,7 +205,7 @@ enum mhi_db_brst_mode {
* @num: The number assigned to this channel
* @num_elements: The number of elements that can be queued to this channel
* @local_elements: The local ring length of the channel
- * @event_ring: The event rung index that services this channel
+ * @event_ring: The event ring index that services this channel
* @dir: Direction that data may flow on this channel
* @type: Channel type
* @ee_mask: Execution Environment mask for this channel
@@ -296,7 +298,7 @@ struct mhi_controller_config {
* @wake_db: MHI WAKE doorbell register address
* @iova_start: IOMMU starting address for data (required)
* @iova_stop: IOMMU stop address for data (required)
- * @fw_image: Firmware image name for normal booting (required)
+ * @fw_image: Firmware image name for normal booting (optional)
* @edl_image: Firmware image name for emergency download mode (optional)
* @rddm_size: RAM dump size that host should allocate for debugging purpose
* @sbl_size: SBL image size downloaded through BHIe (optional)
@@ -352,7 +354,6 @@ struct mhi_controller_config {
* @index: Index of the MHI controller instance
* @bounce_buf: Use of bounce buffer
* @fbc_download: MHI host needs to do complete image transfer (optional)
- * @pre_init: MHI host needs to do pre-initialization before power up
* @wake_set: Device wakeup set flag
* @irq_flags: irq flags passed to request_irq (optional)
*
@@ -445,7 +446,6 @@ struct mhi_controller_config {
int index;
bool bounce_buf;
bool fbc_download;
-	bool pre_init;
bool wake_set;
unsigned long irq_flags;
};
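
For orientation, the struct being trimmed here is what a controller driver (such as pci_generic above) fills in before registering with the core; with the pre_init flag gone, the split between preparing the MHI context and powering the device up is explicit on the controller side. A compressed bring-up sketch using the mhi.h entry points, assuming my_cntrl and my_config are placeholders prepared by the caller:

	#include <linux/mhi.h>

	/* Sketch: minimal controller bring-up with unwinding on failure */
	static int my_bringup(struct mhi_controller *my_cntrl,
			      const struct mhi_controller_config *my_config)
	{
		int err;

		err = mhi_register_controller(my_cntrl, my_config);
		if (err)
			return err;

		err = mhi_prepare_for_power_up(my_cntrl);	/* alloc MHI context */
		if (err)
			goto unregister;

		err = mhi_sync_power_up(my_cntrl);	/* boot up to mission mode */
		if (err)
			goto unprepare;
		return 0;

	unprepare:
		mhi_unprepare_after_power_down(my_cntrl);
	unregister:
		mhi_unregister_controller(my_cntrl);
		return err;
	}
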
@@ -712,13 +712,27 @@ int mhi_device_get_sync(struct mhi_device *mhi_dev);
void mhi_device_put(struct mhi_device *mhi_dev);
/**
- * mhi_prepare_for_transfer - Setup channel for data transfer
+ * mhi_prepare_for_transfer - Setup UL and DL channels for data transfer.
+ *                            Allocate and initialize the channel context and
+ *                            also issue the START channel command to both
+ *                            channels. Channels can be started only if both
+ *                            host and device execution environments match and
+ *                            channels are in a DISABLED state.
* @mhi_dev: Device associated with the channels
*/
int mhi_prepare_for_transfer(struct mhi_device *mhi_dev);
/**
- * mhi_unprepare_from_transfer - Unprepare the channels
+ * mhi_unprepare_from_transfer - Reset UL and DL channels for data transfer.
+ *                               Issue the RESET channel command and let the
+ *                               device clean-up the context so no incoming
+ *                               transfers are seen on the host. Free memory
+ *                               associated with the context on host. If device
+ *                               is unresponsive, only perform a host side
+ *                               clean-up. Channels can be reset only if both
+ *                               host and device execution environments match
+ *                               and channels are in an ENABLED, STOPPED or
+ *                               SUSPENDED state.
* @mhi_dev: Device associated with the channels
*/
void mhi_unprepare_from_transfer(struct mhi_device *mhi_dev);
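
Taken together, the two kernel-doc blocks describe the bracket a client driver puts around its I/O: prepare (START) on the way in, unprepare (RESET) on the way out. A minimal client sketch under those assumptions; my_client_probe()/my_client_remove() are hypothetical names, while the two mhi_* calls are the documented API:

	#include <linux/mhi.h>

	static int my_client_probe(struct mhi_device *mhi_dev,
				   const struct mhi_device_id *id)
	{
		int err;

		/* Starts both UL and DL channels; fails unless host and device
		 * execution environments match and the channels are DISABLED.
		 */
		err = mhi_prepare_for_transfer(mhi_dev);
		if (err)
			return err;

		/* ... allocate private state, queue initial DL buffers ... */
		return 0;
	}

	static void my_client_remove(struct mhi_device *mhi_dev)
	{
		mhi_unprepare_from_transfer(mhi_dev);	/* RESET both channels */
	}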