Merge tag 'mhi-for-v6.8' of git://git.kernel.org/pub/scm/linux/kernel/git/mani/mhi into char-misc-next

Manivannan writes:

MHI Host
========

- Added an alignment check for the event ring read pointer to avoid potential
  buffer corruption (see the sketch after this list).
- Added support for the SDX75 modem, which takes a longer time to enter the
  READY state.
- Added a spinlock to protect against concurrent access while queuing transfer
  ring elements.
- Dropped the read channel lock before invoking the client callback, as the
  client can potentially queue buffers and end up with a soft lockup.
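
A minimal sketch of the alignment check from the first item above
(ring_ptr_is_sane() is an illustrative name; the helper actually extended in
the diff below is is_valid_ring_ptr()):

/*
 * Reject a device-reported event ring pointer that lies outside the ring
 * or is not aligned to the ring element size, before it is dereferenced.
 */
static bool ring_ptr_is_sane(struct mhi_ring *ring, dma_addr_t addr)
{
	return addr >= ring->iommu_base &&
	       addr < ring->iommu_base + ring->len &&
	       !(addr & (sizeof(struct mhi_ring_element) - 1));
}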

MHI Endpoint
============

- Used kzalloc() to allocate event ring elements instead of allocating the
  elements on the stack, as the endpoint controller trying to queue them
  (using DMA) may not be able to use vmalloc memory.
- Used the slab allocator to allocate memory for objects that are used
  frequently and are of fixed size.
- Added support for the interrupt moderation timer feature, which is used by
  the host to limit the number of interrupts raised by the device for an
  event ring.
- Added async read/write DMA support for transferring data between the host
  and the endpoint. So far, the MHI EP stack assumed that data would be
  transferred synchronously (i.e., it sends the completion event once the
  transfer APIs return). But this hurts throughput if the controller is using
  DMA to do the transfer.

  To add async support, the existing sync transfer APIs are renamed to
  {read/write}_sync and two new APIs, {read/write}_async, are introduced for
  carrying out async transfers.

  Controllers implementing the async APIs should queue the buffers and return
  immediately without waiting for transfer completion. Once the transfer
  completes later, they should invoke the completion callback so that the MHI
  EP stack can send the completion event to the host (see the sketch after
  this list).

  The controller driver patches (PCI EPF) for this async support are also
  merged into the MHI tree with Acks from the PCI maintainers.
- Fixed the DMA data direction in the error path of the PCI EPF driver.
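
A minimal sketch of this async contract (my_dma_submit(), my_dma_done() and
my_write_async() are illustrative names, not real APIs; the actual controller
implementations are pci_epf_mhi_edma_{read,write}_async() in the diff below):

/* Completion handler for the hypothetical my_dma_submit() helper */
static void my_dma_done(void *param)
{
	struct mhi_ep_buf_info *buf_info = param;

	/* Transfer finished: let the MHI EP stack send the completion event */
	buf_info->cb(buf_info);
}

static int my_write_async(struct mhi_ep_cntrl *mhi_cntrl,
			  struct mhi_ep_buf_info *buf_info)
{
	/* Queue the device-to-host transfer and return without waiting */
	return my_dma_submit(mhi_cntrl, buf_info->dev_addr, buf_info->host_addr,
			     buf_info->size, my_dma_done, buf_info);
}

A controller driver would assign such functions to mhi_cntrl->read_async and
mhi_cntrl->write_async before calling mhi_ep_register_controller(), which now
rejects controllers that do not provide all four read/write callbacks.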

* tag 'mhi-for-v6.8' of git://git.kernel.org/pub/scm/linux/kernel/git/mani/mhi:
  bus: mhi: host: Drop chan lock before queuing buffers
  bus: mhi: host: Add spinlock to protect WP access when queueing TREs
  PCI: epf-mhi: Fix the DMA data direction of dma_unmap_single()
  bus: mhi: ep: Add checks for read/write callbacks while registering controllers
  bus: mhi: ep: Add support for async DMA read operation
  bus: mhi: ep: Add support for async DMA write operation
  PCI: epf-mhi: Enable MHI async read/write support
  PCI: epf-mhi: Add support for DMA async read/write operation
  PCI: epf-mhi: Simulate async read/write using iATU
  bus: mhi: ep: Introduce async read/write callbacks
  bus: mhi: ep: Rename read_from_host() and write_to_host() APIs
  bus: mhi: ep: Pass mhi_ep_buf_info struct to read/write APIs
  bus: mhi: ep: Add support for interrupt moderation timer
  bus: mhi: ep: Use slab allocator where applicable
  bus: mhi: host: Add alignment check for event ring read pointer
  bus: mhi: host: pci_generic: Add SDX75 based modem support
  bus: mhi: host: Add a separate timeout parameter for waiting ready
  bus: mhi: ep: Do not allocate event ring element on stack
commit 687a28590c
Greg Kroah-Hartman, 2023-12-19 08:57:03 +01:00
11 changed files with 680 additions and 208 deletions


@ -126,6 +126,7 @@ struct mhi_ep_ring {
union mhi_ep_ring_ctx *ring_ctx;
struct mhi_ring_element *ring_cache;
enum mhi_ep_ring_type type;
struct delayed_work intmodt_work;
u64 rbase;
size_t rd_offset;
size_t wr_offset;
@ -135,7 +136,9 @@ struct mhi_ep_ring {
u32 ch_id;
u32 er_index;
u32 irq_vector;
u32 intmodt;
bool started;
bool irq_pending;
};
struct mhi_ep_cmd {
@ -159,6 +162,7 @@ struct mhi_ep_chan {
void (*xfer_cb)(struct mhi_ep_device *mhi_dev, struct mhi_result *result);
enum mhi_ch_state state;
enum dma_data_direction dir;
size_t rd_offset;
u64 tre_loc;
u32 tre_size;
u32 tre_bytes_left;


@ -54,11 +54,27 @@ static int mhi_ep_send_event(struct mhi_ep_cntrl *mhi_cntrl, u32 ring_idx,
mutex_unlock(&mhi_cntrl->event_lock);
/*
* Raise IRQ to host only if the BEI flag is not set in TRE. Host might
* set this flag for interrupt moderation as per MHI protocol.
* As per the MHI specification, section 4.3, Interrupt moderation:
*
* 1. If BEI flag is not set, cancel any pending intmodt work if started
* for the event ring and raise IRQ immediately.
*
* 2. If both BEI and intmodt are set, and if no IRQ is pending for the
* same event ring, start the IRQ delayed work as per the value of
* intmodt. If previous IRQ is pending, then do nothing as the pending
* IRQ is enough for the host to process the current event ring element.
*
* 3. If BEI is set and intmodt is not set, no need to raise IRQ.
*/
if (!bei)
if (!bei) {
if (READ_ONCE(ring->irq_pending))
cancel_delayed_work(&ring->intmodt_work);
mhi_cntrl->raise_irq(mhi_cntrl, ring->irq_vector);
} else if (ring->intmodt && !READ_ONCE(ring->irq_pending)) {
WRITE_ONCE(ring->irq_pending, true);
schedule_delayed_work(&ring->intmodt_work, msecs_to_jiffies(ring->intmodt));
}
return 0;
@ -71,45 +87,77 @@ err_unlock:
static int mhi_ep_send_completion_event(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring,
struct mhi_ring_element *tre, u32 len, enum mhi_ev_ccs code)
{
struct mhi_ring_element event = {};
struct mhi_ring_element *event;
int ret;
event.ptr = cpu_to_le64(ring->rbase + ring->rd_offset * sizeof(*tre));
event.dword[0] = MHI_TRE_EV_DWORD0(code, len);
event.dword[1] = MHI_TRE_EV_DWORD1(ring->ch_id, MHI_PKT_TYPE_TX_EVENT);
event = kmem_cache_zalloc(mhi_cntrl->ev_ring_el_cache, GFP_KERNEL | GFP_DMA);
if (!event)
return -ENOMEM;
return mhi_ep_send_event(mhi_cntrl, ring->er_index, &event, MHI_TRE_DATA_GET_BEI(tre));
event->ptr = cpu_to_le64(ring->rbase + ring->rd_offset * sizeof(*tre));
event->dword[0] = MHI_TRE_EV_DWORD0(code, len);
event->dword[1] = MHI_TRE_EV_DWORD1(ring->ch_id, MHI_PKT_TYPE_TX_EVENT);
ret = mhi_ep_send_event(mhi_cntrl, ring->er_index, event, MHI_TRE_DATA_GET_BEI(tre));
kmem_cache_free(mhi_cntrl->ev_ring_el_cache, event);
return ret;
}
int mhi_ep_send_state_change_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state state)
{
struct mhi_ring_element event = {};
struct mhi_ring_element *event;
int ret;
event.dword[0] = MHI_SC_EV_DWORD0(state);
event.dword[1] = MHI_SC_EV_DWORD1(MHI_PKT_TYPE_STATE_CHANGE_EVENT);
event = kmem_cache_zalloc(mhi_cntrl->ev_ring_el_cache, GFP_KERNEL | GFP_DMA);
if (!event)
return -ENOMEM;
return mhi_ep_send_event(mhi_cntrl, 0, &event, 0);
event->dword[0] = MHI_SC_EV_DWORD0(state);
event->dword[1] = MHI_SC_EV_DWORD1(MHI_PKT_TYPE_STATE_CHANGE_EVENT);
ret = mhi_ep_send_event(mhi_cntrl, 0, event, 0);
kmem_cache_free(mhi_cntrl->ev_ring_el_cache, event);
return ret;
}
int mhi_ep_send_ee_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ee_type exec_env)
{
struct mhi_ring_element event = {};
struct mhi_ring_element *event;
int ret;
event.dword[0] = MHI_EE_EV_DWORD0(exec_env);
event.dword[1] = MHI_SC_EV_DWORD1(MHI_PKT_TYPE_EE_EVENT);
event = kmem_cache_zalloc(mhi_cntrl->ev_ring_el_cache, GFP_KERNEL | GFP_DMA);
if (!event)
return -ENOMEM;
return mhi_ep_send_event(mhi_cntrl, 0, &event, 0);
event->dword[0] = MHI_EE_EV_DWORD0(exec_env);
event->dword[1] = MHI_SC_EV_DWORD1(MHI_PKT_TYPE_EE_EVENT);
ret = mhi_ep_send_event(mhi_cntrl, 0, event, 0);
kmem_cache_free(mhi_cntrl->ev_ring_el_cache, event);
return ret;
}
static int mhi_ep_send_cmd_comp_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ev_ccs code)
{
struct mhi_ep_ring *ring = &mhi_cntrl->mhi_cmd->ring;
struct mhi_ring_element event = {};
struct mhi_ring_element *event;
int ret;
event.ptr = cpu_to_le64(ring->rbase + ring->rd_offset * sizeof(struct mhi_ring_element));
event.dword[0] = MHI_CC_EV_DWORD0(code);
event.dword[1] = MHI_CC_EV_DWORD1(MHI_PKT_TYPE_CMD_COMPLETION_EVENT);
event = kmem_cache_zalloc(mhi_cntrl->ev_ring_el_cache, GFP_KERNEL | GFP_DMA);
if (!event)
return -ENOMEM;
return mhi_ep_send_event(mhi_cntrl, 0, &event, 0);
event->ptr = cpu_to_le64(ring->rbase + ring->rd_offset * sizeof(struct mhi_ring_element));
event->dword[0] = MHI_CC_EV_DWORD0(code);
event->dword[1] = MHI_CC_EV_DWORD1(MHI_PKT_TYPE_CMD_COMPLETION_EVENT);
ret = mhi_ep_send_event(mhi_cntrl, 0, event, 0);
kmem_cache_free(mhi_cntrl->ev_ring_el_cache, event);
return ret;
}
static int mhi_ep_process_cmd_ring(struct mhi_ep_ring *ring, struct mhi_ring_element *el)
@ -151,6 +199,8 @@ static int mhi_ep_process_cmd_ring(struct mhi_ep_ring *ring, struct mhi_ring_ele
goto err_unlock;
}
mhi_chan->rd_offset = ch_ring->rd_offset;
}
/* Set channel state to RUNNING */
@ -280,22 +330,85 @@ bool mhi_ep_queue_is_empty(struct mhi_ep_device *mhi_dev, enum dma_data_directio
struct mhi_ep_cntrl *mhi_cntrl = mhi_dev->mhi_cntrl;
struct mhi_ep_ring *ring = &mhi_cntrl->mhi_chan[mhi_chan->chan].ring;
return !!(ring->rd_offset == ring->wr_offset);
return !!(mhi_chan->rd_offset == ring->wr_offset);
}
EXPORT_SYMBOL_GPL(mhi_ep_queue_is_empty);
static void mhi_ep_read_completion(struct mhi_ep_buf_info *buf_info)
{
struct mhi_ep_device *mhi_dev = buf_info->mhi_dev;
struct mhi_ep_cntrl *mhi_cntrl = mhi_dev->mhi_cntrl;
struct mhi_ep_chan *mhi_chan = mhi_dev->ul_chan;
struct mhi_ep_ring *ring = &mhi_cntrl->mhi_chan[mhi_chan->chan].ring;
struct mhi_ring_element *el = &ring->ring_cache[ring->rd_offset];
struct mhi_result result = {};
int ret;
if (mhi_chan->xfer_cb) {
result.buf_addr = buf_info->cb_buf;
result.dir = mhi_chan->dir;
result.bytes_xferd = buf_info->size;
mhi_chan->xfer_cb(mhi_dev, &result);
}
/*
* The host will split the data packet into multiple TREs if it can't fit
* the packet in a single TRE. In that case, CHAIN flag will be set by the
* host for all TREs except the last one.
*/
if (buf_info->code != MHI_EV_CC_OVERFLOW) {
if (MHI_TRE_DATA_GET_CHAIN(el)) {
/*
* IEOB (Interrupt on End of Block) flag will be set by the host if
* it expects the completion event for all TREs of a TD.
*/
if (MHI_TRE_DATA_GET_IEOB(el)) {
ret = mhi_ep_send_completion_event(mhi_cntrl, ring, el,
MHI_TRE_DATA_GET_LEN(el),
MHI_EV_CC_EOB);
if (ret < 0) {
dev_err(&mhi_chan->mhi_dev->dev,
"Error sending transfer compl. event\n");
goto err_free_tre_buf;
}
}
} else {
/*
* IEOT (Interrupt on End of Transfer) flag will be set by the host
* for the last TRE of the TD and expects the completion event for
* the same.
*/
if (MHI_TRE_DATA_GET_IEOT(el)) {
ret = mhi_ep_send_completion_event(mhi_cntrl, ring, el,
MHI_TRE_DATA_GET_LEN(el),
MHI_EV_CC_EOT);
if (ret < 0) {
dev_err(&mhi_chan->mhi_dev->dev,
"Error sending transfer compl. event\n");
goto err_free_tre_buf;
}
}
}
}
mhi_ep_ring_inc_index(ring);
err_free_tre_buf:
kmem_cache_free(mhi_cntrl->tre_buf_cache, buf_info->cb_buf);
}
static int mhi_ep_read_channel(struct mhi_ep_cntrl *mhi_cntrl,
struct mhi_ep_ring *ring,
struct mhi_result *result,
u32 len)
struct mhi_ep_ring *ring)
{
struct mhi_ep_chan *mhi_chan = &mhi_cntrl->mhi_chan[ring->ch_id];
struct device *dev = &mhi_cntrl->mhi_dev->dev;
size_t tr_len, read_offset, write_offset;
struct mhi_ep_buf_info buf_info = {};
u32 len = MHI_EP_DEFAULT_MTU;
struct mhi_ring_element *el;
bool tr_done = false;
void *write_addr;
u64 read_addr;
void *buf_addr;
u32 buf_left;
int ret;
@ -308,7 +421,7 @@ static int mhi_ep_read_channel(struct mhi_ep_cntrl *mhi_cntrl,
return -ENODEV;
}
el = &ring->ring_cache[ring->rd_offset];
el = &ring->ring_cache[mhi_chan->rd_offset];
/* Check if there is data pending to be read from previous read operation */
if (mhi_chan->tre_bytes_left) {
@ -324,81 +437,51 @@ static int mhi_ep_read_channel(struct mhi_ep_cntrl *mhi_cntrl,
read_offset = mhi_chan->tre_size - mhi_chan->tre_bytes_left;
write_offset = len - buf_left;
read_addr = mhi_chan->tre_loc + read_offset;
write_addr = result->buf_addr + write_offset;
buf_addr = kmem_cache_zalloc(mhi_cntrl->tre_buf_cache, GFP_KERNEL | GFP_DMA);
if (!buf_addr)
return -ENOMEM;
buf_info.host_addr = mhi_chan->tre_loc + read_offset;
buf_info.dev_addr = buf_addr + write_offset;
buf_info.size = tr_len;
buf_info.cb = mhi_ep_read_completion;
buf_info.cb_buf = buf_addr;
buf_info.mhi_dev = mhi_chan->mhi_dev;
if (mhi_chan->tre_bytes_left - tr_len)
buf_info.code = MHI_EV_CC_OVERFLOW;
dev_dbg(dev, "Reading %zd bytes from channel (%u)\n", tr_len, ring->ch_id);
ret = mhi_cntrl->read_from_host(mhi_cntrl, read_addr, write_addr, tr_len);
ret = mhi_cntrl->read_async(mhi_cntrl, &buf_info);
if (ret < 0) {
dev_err(&mhi_chan->mhi_dev->dev, "Error reading from channel\n");
return ret;
goto err_free_buf_addr;
}
buf_left -= tr_len;
mhi_chan->tre_bytes_left -= tr_len;
/*
* Once the TRE (Transfer Ring Element) of a TD (Transfer Descriptor) has been
* read completely:
*
* 1. Send completion event to the host based on the flags set in TRE.
* 2. Increment the local read offset of the transfer ring.
*/
if (!mhi_chan->tre_bytes_left) {
/*
* The host will split the data packet into multiple TREs if it can't fit
* the packet in a single TRE. In that case, CHAIN flag will be set by the
* host for all TREs except the last one.
*/
if (MHI_TRE_DATA_GET_CHAIN(el)) {
/*
* IEOB (Interrupt on End of Block) flag will be set by the host if
* it expects the completion event for all TREs of a TD.
*/
if (MHI_TRE_DATA_GET_IEOB(el)) {
ret = mhi_ep_send_completion_event(mhi_cntrl, ring, el,
MHI_TRE_DATA_GET_LEN(el),
MHI_EV_CC_EOB);
if (ret < 0) {
dev_err(&mhi_chan->mhi_dev->dev,
"Error sending transfer compl. event\n");
return ret;
}
}
} else {
/*
* IEOT (Interrupt on End of Transfer) flag will be set by the host
* for the last TRE of the TD and expects the completion event for
* the same.
*/
if (MHI_TRE_DATA_GET_IEOT(el)) {
ret = mhi_ep_send_completion_event(mhi_cntrl, ring, el,
MHI_TRE_DATA_GET_LEN(el),
MHI_EV_CC_EOT);
if (ret < 0) {
dev_err(&mhi_chan->mhi_dev->dev,
"Error sending transfer compl. event\n");
return ret;
}
}
if (MHI_TRE_DATA_GET_IEOT(el))
tr_done = true;
}
mhi_ep_ring_inc_index(ring);
mhi_chan->rd_offset = (mhi_chan->rd_offset + 1) % ring->ring_size;
}
result->bytes_xferd += tr_len;
} while (buf_left && !tr_done);
return 0;
err_free_buf_addr:
kmem_cache_free(mhi_cntrl->tre_buf_cache, buf_addr);
return ret;
}
static int mhi_ep_process_ch_ring(struct mhi_ep_ring *ring, struct mhi_ring_element *el)
static int mhi_ep_process_ch_ring(struct mhi_ep_ring *ring)
{
struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
struct mhi_result result = {};
u32 len = MHI_EP_DEFAULT_MTU;
struct mhi_ep_chan *mhi_chan;
int ret;
@ -419,44 +502,59 @@ static int mhi_ep_process_ch_ring(struct mhi_ep_ring *ring, struct mhi_ring_elem
mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
} else {
/* UL channel */
result.buf_addr = kzalloc(len, GFP_KERNEL);
if (!result.buf_addr)
return -ENOMEM;
do {
ret = mhi_ep_read_channel(mhi_cntrl, ring, &result, len);
ret = mhi_ep_read_channel(mhi_cntrl, ring);
if (ret < 0) {
dev_err(&mhi_chan->mhi_dev->dev, "Failed to read channel\n");
kfree(result.buf_addr);
return ret;
}
result.dir = mhi_chan->dir;
mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
result.bytes_xferd = 0;
memset(result.buf_addr, 0, len);
/* Read until the ring becomes empty */
} while (!mhi_ep_queue_is_empty(mhi_chan->mhi_dev, DMA_TO_DEVICE));
kfree(result.buf_addr);
}
return 0;
}
static void mhi_ep_skb_completion(struct mhi_ep_buf_info *buf_info)
{
struct mhi_ep_device *mhi_dev = buf_info->mhi_dev;
struct mhi_ep_cntrl *mhi_cntrl = mhi_dev->mhi_cntrl;
struct mhi_ep_chan *mhi_chan = mhi_dev->dl_chan;
struct mhi_ep_ring *ring = &mhi_cntrl->mhi_chan[mhi_chan->chan].ring;
struct mhi_ring_element *el = &ring->ring_cache[ring->rd_offset];
struct device *dev = &mhi_dev->dev;
struct mhi_result result = {};
int ret;
if (mhi_chan->xfer_cb) {
result.buf_addr = buf_info->cb_buf;
result.dir = mhi_chan->dir;
result.bytes_xferd = buf_info->size;
mhi_chan->xfer_cb(mhi_dev, &result);
}
ret = mhi_ep_send_completion_event(mhi_cntrl, ring, el, buf_info->size,
buf_info->code);
if (ret) {
dev_err(dev, "Error sending transfer completion event\n");
return;
}
mhi_ep_ring_inc_index(ring);
}
/* TODO: Handle partially formed TDs */
int mhi_ep_queue_skb(struct mhi_ep_device *mhi_dev, struct sk_buff *skb)
{
struct mhi_ep_cntrl *mhi_cntrl = mhi_dev->mhi_cntrl;
struct mhi_ep_chan *mhi_chan = mhi_dev->dl_chan;
struct device *dev = &mhi_chan->mhi_dev->dev;
struct mhi_ep_buf_info buf_info = {};
struct mhi_ring_element *el;
u32 buf_left, read_offset;
struct mhi_ep_ring *ring;
enum mhi_ev_ccs code;
void *read_addr;
u64 write_addr;
size_t tr_len;
u32 tre_len;
int ret;
@ -480,40 +578,44 @@ int mhi_ep_queue_skb(struct mhi_ep_device *mhi_dev, struct sk_buff *skb)
goto err_exit;
}
el = &ring->ring_cache[ring->rd_offset];
el = &ring->ring_cache[mhi_chan->rd_offset];
tre_len = MHI_TRE_DATA_GET_LEN(el);
tr_len = min(buf_left, tre_len);
read_offset = skb->len - buf_left;
read_addr = skb->data + read_offset;
write_addr = MHI_TRE_DATA_GET_PTR(el);
dev_dbg(dev, "Writing %zd bytes to channel (%u)\n", tr_len, ring->ch_id);
ret = mhi_cntrl->write_to_host(mhi_cntrl, read_addr, write_addr, tr_len);
if (ret < 0) {
dev_err(dev, "Error writing to the channel\n");
goto err_exit;
}
buf_info.dev_addr = skb->data + read_offset;
buf_info.host_addr = MHI_TRE_DATA_GET_PTR(el);
buf_info.size = tr_len;
buf_info.cb = mhi_ep_skb_completion;
buf_info.cb_buf = skb;
buf_info.mhi_dev = mhi_dev;
buf_left -= tr_len;
/*
* For all TREs queued by the host for DL channel, only the EOT flag will be set.
* If the packet doesn't fit into a single TRE, send the OVERFLOW event to
* the host so that the host can adjust the packet boundary to next TREs. Else send
* the EOT event to the host indicating the packet boundary.
*/
if (buf_left)
code = MHI_EV_CC_OVERFLOW;
if (buf_left - tr_len)
buf_info.code = MHI_EV_CC_OVERFLOW;
else
code = MHI_EV_CC_EOT;
buf_info.code = MHI_EV_CC_EOT;
ret = mhi_ep_send_completion_event(mhi_cntrl, ring, el, tr_len, code);
if (ret) {
dev_err(dev, "Error sending transfer completion event\n");
dev_dbg(dev, "Writing %zd bytes to channel (%u)\n", tr_len, ring->ch_id);
ret = mhi_cntrl->write_async(mhi_cntrl, &buf_info);
if (ret < 0) {
dev_err(dev, "Error writing to the channel\n");
goto err_exit;
}
mhi_ep_ring_inc_index(ring);
buf_left -= tr_len;
/*
* Update the read offset cached in mhi_chan. Actual read offset
* will be updated by the completion handler.
*/
mhi_chan->rd_offset = (mhi_chan->rd_offset + 1) % ring->ring_size;
} while (buf_left);
mutex_unlock(&mhi_chan->lock);
@ -714,7 +816,6 @@ static void mhi_ep_ch_ring_worker(struct work_struct *work)
struct mhi_ep_cntrl *mhi_cntrl = container_of(work, struct mhi_ep_cntrl, ch_ring_work);
struct device *dev = &mhi_cntrl->mhi_dev->dev;
struct mhi_ep_ring_item *itr, *tmp;
struct mhi_ring_element *el;
struct mhi_ep_ring *ring;
struct mhi_ep_chan *chan;
unsigned long flags;
@ -748,31 +849,29 @@ static void mhi_ep_ch_ring_worker(struct work_struct *work)
if (ret) {
dev_err(dev, "Error updating write offset for ring\n");
mutex_unlock(&chan->lock);
kfree(itr);
kmem_cache_free(mhi_cntrl->ring_item_cache, itr);
continue;
}
/* Sanity check to make sure there are elements in the ring */
if (ring->rd_offset == ring->wr_offset) {
if (chan->rd_offset == ring->wr_offset) {
mutex_unlock(&chan->lock);
kfree(itr);
kmem_cache_free(mhi_cntrl->ring_item_cache, itr);
continue;
}
el = &ring->ring_cache[ring->rd_offset];
dev_dbg(dev, "Processing the ring for channel (%u)\n", ring->ch_id);
ret = mhi_ep_process_ch_ring(ring, el);
ret = mhi_ep_process_ch_ring(ring);
if (ret) {
dev_err(dev, "Error processing ring for channel (%u): %d\n",
ring->ch_id, ret);
mutex_unlock(&chan->lock);
kfree(itr);
kmem_cache_free(mhi_cntrl->ring_item_cache, itr);
continue;
}
mutex_unlock(&chan->lock);
kfree(itr);
kmem_cache_free(mhi_cntrl->ring_item_cache, itr);
}
}
@ -828,7 +927,7 @@ static void mhi_ep_queue_channel_db(struct mhi_ep_cntrl *mhi_cntrl, unsigned lon
u32 ch_id = ch_idx + i;
ring = &mhi_cntrl->mhi_chan[ch_id].ring;
item = kzalloc(sizeof(*item), GFP_ATOMIC);
item = kmem_cache_zalloc(mhi_cntrl->ring_item_cache, GFP_ATOMIC);
if (!item)
return;
@ -1365,6 +1464,10 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
if (!mhi_cntrl || !mhi_cntrl->cntrl_dev || !mhi_cntrl->mmio || !mhi_cntrl->irq)
return -EINVAL;
if (!mhi_cntrl->read_sync || !mhi_cntrl->write_sync ||
!mhi_cntrl->read_async || !mhi_cntrl->write_async)
return -EINVAL;
ret = mhi_ep_chan_init(mhi_cntrl, config);
if (ret)
return ret;
@ -1375,6 +1478,29 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
goto err_free_ch;
}
mhi_cntrl->ev_ring_el_cache = kmem_cache_create("mhi_ep_event_ring_el",
sizeof(struct mhi_ring_element), 0,
SLAB_CACHE_DMA, NULL);
if (!mhi_cntrl->ev_ring_el_cache) {
ret = -ENOMEM;
goto err_free_cmd;
}
mhi_cntrl->tre_buf_cache = kmem_cache_create("mhi_ep_tre_buf", MHI_EP_DEFAULT_MTU, 0,
SLAB_CACHE_DMA, NULL);
if (!mhi_cntrl->tre_buf_cache) {
ret = -ENOMEM;
goto err_destroy_ev_ring_el_cache;
}
mhi_cntrl->ring_item_cache = kmem_cache_create("mhi_ep_ring_item",
sizeof(struct mhi_ep_ring_item), 0,
0, NULL);
if (!mhi_cntrl->ring_item_cache) {
ret = -ENOMEM;
goto err_destroy_tre_buf_cache;
}
INIT_WORK(&mhi_cntrl->state_work, mhi_ep_state_worker);
INIT_WORK(&mhi_cntrl->reset_work, mhi_ep_reset_worker);
INIT_WORK(&mhi_cntrl->cmd_ring_work, mhi_ep_cmd_ring_worker);
@ -1383,7 +1509,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
mhi_cntrl->wq = alloc_workqueue("mhi_ep_wq", 0, 0);
if (!mhi_cntrl->wq) {
ret = -ENOMEM;
goto err_free_cmd;
goto err_destroy_ring_item_cache;
}
INIT_LIST_HEAD(&mhi_cntrl->st_transition_list);
@ -1442,6 +1568,12 @@ err_ida_free:
ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
err_destroy_wq:
destroy_workqueue(mhi_cntrl->wq);
err_destroy_ring_item_cache:
kmem_cache_destroy(mhi_cntrl->ring_item_cache);
err_destroy_ev_ring_el_cache:
kmem_cache_destroy(mhi_cntrl->ev_ring_el_cache);
err_destroy_tre_buf_cache:
kmem_cache_destroy(mhi_cntrl->tre_buf_cache);
err_free_cmd:
kfree(mhi_cntrl->mhi_cmd);
err_free_ch:
@ -1463,6 +1595,9 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
free_irq(mhi_cntrl->irq, mhi_cntrl);
kmem_cache_destroy(mhi_cntrl->tre_buf_cache);
kmem_cache_destroy(mhi_cntrl->ev_ring_el_cache);
kmem_cache_destroy(mhi_cntrl->ring_item_cache);
kfree(mhi_cntrl->mhi_cmd);
kfree(mhi_cntrl->mhi_chan);


@ -30,7 +30,8 @@ static int __mhi_ep_cache_ring(struct mhi_ep_ring *ring, size_t end)
{
struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
size_t start, copy_size;
struct mhi_ep_buf_info buf_info = {};
size_t start;
int ret;
/* Don't proceed in the case of event ring. This happens during mhi_ep_ring_start(). */
@ -43,30 +44,34 @@ static int __mhi_ep_cache_ring(struct mhi_ep_ring *ring, size_t end)
start = ring->wr_offset;
if (start < end) {
copy_size = (end - start) * sizeof(struct mhi_ring_element);
ret = mhi_cntrl->read_from_host(mhi_cntrl, ring->rbase +
(start * sizeof(struct mhi_ring_element)),
&ring->ring_cache[start], copy_size);
buf_info.size = (end - start) * sizeof(struct mhi_ring_element);
buf_info.host_addr = ring->rbase + (start * sizeof(struct mhi_ring_element));
buf_info.dev_addr = &ring->ring_cache[start];
ret = mhi_cntrl->read_sync(mhi_cntrl, &buf_info);
if (ret < 0)
return ret;
} else {
copy_size = (ring->ring_size - start) * sizeof(struct mhi_ring_element);
ret = mhi_cntrl->read_from_host(mhi_cntrl, ring->rbase +
(start * sizeof(struct mhi_ring_element)),
&ring->ring_cache[start], copy_size);
buf_info.size = (ring->ring_size - start) * sizeof(struct mhi_ring_element);
buf_info.host_addr = ring->rbase + (start * sizeof(struct mhi_ring_element));
buf_info.dev_addr = &ring->ring_cache[start];
ret = mhi_cntrl->read_sync(mhi_cntrl, &buf_info);
if (ret < 0)
return ret;
if (end) {
ret = mhi_cntrl->read_from_host(mhi_cntrl, ring->rbase,
&ring->ring_cache[0],
end * sizeof(struct mhi_ring_element));
buf_info.host_addr = ring->rbase;
buf_info.dev_addr = &ring->ring_cache[0];
buf_info.size = end * sizeof(struct mhi_ring_element);
ret = mhi_cntrl->read_sync(mhi_cntrl, &buf_info);
if (ret < 0)
return ret;
}
}
dev_dbg(dev, "Cached ring: start %zu end %zu size %zu\n", start, end, copy_size);
dev_dbg(dev, "Cached ring: start %zu end %zu size %zu\n", start, end, buf_info.size);
return 0;
}
@ -102,6 +107,7 @@ int mhi_ep_ring_add_element(struct mhi_ep_ring *ring, struct mhi_ring_element *e
{
struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
struct mhi_ep_buf_info buf_info = {};
size_t old_offset = 0;
u32 num_free_elem;
__le64 rp;
@ -133,12 +139,11 @@ int mhi_ep_ring_add_element(struct mhi_ep_ring *ring, struct mhi_ring_element *e
rp = cpu_to_le64(ring->rd_offset * sizeof(*el) + ring->rbase);
memcpy_toio((void __iomem *) &ring->ring_ctx->generic.rp, &rp, sizeof(u64));
ret = mhi_cntrl->write_to_host(mhi_cntrl, el, ring->rbase + (old_offset * sizeof(*el)),
sizeof(*el));
if (ret < 0)
return ret;
buf_info.host_addr = ring->rbase + (old_offset * sizeof(*el));
buf_info.dev_addr = el;
buf_info.size = sizeof(*el);
return 0;
return mhi_cntrl->write_sync(mhi_cntrl, &buf_info);
}
void mhi_ep_ring_init(struct mhi_ep_ring *ring, enum mhi_ep_ring_type type, u32 id)
@ -157,6 +162,15 @@ void mhi_ep_ring_init(struct mhi_ep_ring *ring, enum mhi_ep_ring_type type, u32
}
}
static void mhi_ep_raise_irq(struct work_struct *work)
{
struct mhi_ep_ring *ring = container_of(work, struct mhi_ep_ring, intmodt_work.work);
struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
mhi_cntrl->raise_irq(mhi_cntrl, ring->irq_vector);
WRITE_ONCE(ring->irq_pending, false);
}
int mhi_ep_ring_start(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring,
union mhi_ep_ring_ctx *ctx)
{
@ -173,8 +187,13 @@ int mhi_ep_ring_start(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring,
if (ring->type == RING_TYPE_CH)
ring->er_index = le32_to_cpu(ring->ring_ctx->ch.erindex);
if (ring->type == RING_TYPE_ER)
if (ring->type == RING_TYPE_ER) {
ring->irq_vector = le32_to_cpu(ring->ring_ctx->ev.msivec);
ring->intmodt = FIELD_GET(EV_CTX_INTMODT_MASK,
le32_to_cpu(ring->ring_ctx->ev.intmod));
INIT_DELAYED_WORK(&ring->intmodt_work, mhi_ep_raise_irq);
}
/* During ring init, both rp and wp are equal */
memcpy_fromio(&val, (void __iomem *) &ring->ring_ctx->generic.rp, sizeof(u64));
@ -201,6 +220,9 @@ int mhi_ep_ring_start(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring,
void mhi_ep_ring_reset(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring)
{
if (ring->type == RING_TYPE_ER)
cancel_delayed_work_sync(&ring->intmodt_work);
ring->started = false;
kfree(ring->ring_cache);
ring->ring_cache = NULL;


@ -881,6 +881,7 @@ static int parse_config(struct mhi_controller *mhi_cntrl,
if (!mhi_cntrl->timeout_ms)
mhi_cntrl->timeout_ms = MHI_TIMEOUT_MS;
mhi_cntrl->ready_timeout_ms = config->ready_timeout_ms;
mhi_cntrl->bounce_buf = config->use_bounce_buf;
mhi_cntrl->buffer_len = config->buf_len;
if (!mhi_cntrl->buffer_len)


@ -321,7 +321,7 @@ int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
u32 *out);
int __must_check mhi_poll_reg_field(struct mhi_controller *mhi_cntrl,
void __iomem *base, u32 offset, u32 mask,
u32 val, u32 delayus);
u32 val, u32 delayus, u32 timeout_ms);
void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
u32 offset, u32 val);
int __must_check mhi_write_reg_field(struct mhi_controller *mhi_cntrl,


@ -40,10 +40,11 @@ int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
int __must_check mhi_poll_reg_field(struct mhi_controller *mhi_cntrl,
void __iomem *base, u32 offset,
u32 mask, u32 val, u32 delayus)
u32 mask, u32 val, u32 delayus,
u32 timeout_ms)
{
int ret;
u32 out, retry = (mhi_cntrl->timeout_ms * 1000) / delayus;
u32 out, retry = (timeout_ms * 1000) / delayus;
while (retry--) {
ret = mhi_read_reg_field(mhi_cntrl, base, offset, mask, &out);
@ -268,7 +269,8 @@ static void mhi_del_ring_element(struct mhi_controller *mhi_cntrl,
static bool is_valid_ring_ptr(struct mhi_ring *ring, dma_addr_t addr)
{
return addr >= ring->iommu_base && addr < ring->iommu_base + ring->len;
return addr >= ring->iommu_base && addr < ring->iommu_base + ring->len &&
!(addr & (sizeof(struct mhi_ring_element) - 1));
}
int mhi_destroy_device(struct device *dev, void *data)
@ -642,6 +644,8 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
mhi_del_ring_element(mhi_cntrl, tre_ring);
local_rp = tre_ring->rp;
read_unlock_bh(&mhi_chan->lock);
/* notify client */
mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
@ -667,6 +671,8 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
kfree(buf_info->cb_buf);
}
}
read_lock_bh(&mhi_chan->lock);
}
break;
} /* CC_EOT */
@ -1122,17 +1128,15 @@ static int mhi_queue(struct mhi_device *mhi_dev, struct mhi_buf_info *buf_info,
if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)))
return -EIO;
read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
ret = mhi_is_ring_full(mhi_cntrl, tre_ring);
if (unlikely(ret)) {
ret = -EAGAIN;
goto exit_unlock;
}
if (unlikely(ret))
return -EAGAIN;
ret = mhi_gen_tre(mhi_cntrl, mhi_chan, buf_info, mflags);
if (unlikely(ret))
goto exit_unlock;
return ret;
read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
/* Packet is queued, take a usage ref to exit M3 if necessary
* for host->device buffer, balanced put is done on buffer completion
@ -1152,7 +1156,6 @@ static int mhi_queue(struct mhi_device *mhi_dev, struct mhi_buf_info *buf_info,
if (dir == DMA_FROM_DEVICE)
mhi_cntrl->runtime_put(mhi_cntrl);
exit_unlock:
read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
return ret;
@ -1204,6 +1207,9 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
int eot, eob, chain, bei;
int ret;
/* Protect accesses for reading and incrementing WP */
write_lock_bh(&mhi_chan->lock);
buf_ring = &mhi_chan->buf_ring;
tre_ring = &mhi_chan->tre_ring;
@ -1221,8 +1227,10 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
if (!info->pre_mapped) {
ret = mhi_cntrl->map_single(mhi_cntrl, buf_info);
if (ret)
if (ret) {
write_unlock_bh(&mhi_chan->lock);
return ret;
}
}
eob = !!(flags & MHI_EOB);
@ -1239,6 +1247,8 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
mhi_add_ring_element(mhi_cntrl, tre_ring);
mhi_add_ring_element(mhi_cntrl, buf_ring);
write_unlock_bh(&mhi_chan->lock);
return 0;
}


@ -269,6 +269,16 @@ static struct mhi_event_config modem_qcom_v1_mhi_events[] = {
MHI_EVENT_CONFIG_HW_DATA(5, 2048, 101)
};
static const struct mhi_controller_config modem_qcom_v2_mhiv_config = {
.max_channels = 128,
.timeout_ms = 8000,
.ready_timeout_ms = 50000,
.num_channels = ARRAY_SIZE(modem_qcom_v1_mhi_channels),
.ch_cfg = modem_qcom_v1_mhi_channels,
.num_events = ARRAY_SIZE(modem_qcom_v1_mhi_events),
.event_cfg = modem_qcom_v1_mhi_events,
};
static const struct mhi_controller_config modem_qcom_v1_mhiv_config = {
.max_channels = 128,
.timeout_ms = 8000,
@ -278,6 +288,16 @@ static const struct mhi_controller_config modem_qcom_v1_mhiv_config = {
.event_cfg = modem_qcom_v1_mhi_events,
};
static const struct mhi_pci_dev_info mhi_qcom_sdx75_info = {
.name = "qcom-sdx75m",
.fw = "qcom/sdx75m/xbl.elf",
.edl = "qcom/sdx75m/edl.mbn",
.config = &modem_qcom_v2_mhiv_config,
.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
.dma_data_width = 32,
.sideband_wake = false,
};
static const struct mhi_pci_dev_info mhi_qcom_sdx65_info = {
.name = "qcom-sdx65m",
.fw = "qcom/sdx65m/xbl.elf",
@ -600,6 +620,8 @@ static const struct pci_device_id mhi_pci_id_table[] = {
.driver_data = (kernel_ulong_t) &mhi_telit_fn990_info },
{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0308),
.driver_data = (kernel_ulong_t) &mhi_qcom_sdx65_info },
{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0309),
.driver_data = (kernel_ulong_t) &mhi_qcom_sdx75_info },
{ PCI_DEVICE(PCI_VENDOR_ID_QUECTEL, 0x1001), /* EM120R-GL (sdx24) */
.driver_data = (kernel_ulong_t) &mhi_quectel_em1xx_info },
{ PCI_DEVICE(PCI_VENDOR_ID_QUECTEL, 0x1002), /* EM160R-GL (sdx24) */


@ -163,6 +163,7 @@ int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
enum mhi_pm_state cur_state;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
u32 interval_us = 25000; /* poll register field every 25 milliseconds */
u32 timeout_ms;
int ret, i;
/* Check if device entered error state */
@ -173,14 +174,18 @@ int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
/* Wait for RESET to be cleared and READY bit to be set by the device */
ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
MHICTRL_RESET_MASK, 0, interval_us);
MHICTRL_RESET_MASK, 0, interval_us,
mhi_cntrl->timeout_ms);
if (ret) {
dev_err(dev, "Device failed to clear MHI Reset\n");
return ret;
}
timeout_ms = mhi_cntrl->ready_timeout_ms ?
mhi_cntrl->ready_timeout_ms : mhi_cntrl->timeout_ms;
ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHISTATUS,
MHISTATUS_READY_MASK, 1, interval_us);
MHISTATUS_READY_MASK, 1, interval_us,
timeout_ms);
if (ret) {
dev_err(dev, "Device failed to enter MHI Ready\n");
return ret;
@ -479,7 +484,7 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl)
/* Wait for the reset bit to be cleared by the device */
ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
MHICTRL_RESET_MASK, 0, 25000);
MHICTRL_RESET_MASK, 0, 25000, mhi_cntrl->timeout_ms);
if (ret)
dev_err(dev, "Device failed to clear MHI Reset\n");
@ -492,8 +497,8 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl)
if (!MHI_IN_PBL(mhi_get_exec_env(mhi_cntrl))) {
/* wait for ready to be set */
ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs,
MHISTATUS,
MHISTATUS_READY_MASK, 1, 25000);
MHISTATUS, MHISTATUS_READY_MASK,
1, 25000, mhi_cntrl->timeout_ms);
if (ret)
dev_err(dev, "Device failed to enter READY state\n");
}
@ -1111,7 +1116,8 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
if (state == MHI_STATE_SYS_ERR) {
mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET);
ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
MHICTRL_RESET_MASK, 0, interval_us);
MHICTRL_RESET_MASK, 0, interval_us,
mhi_cntrl->timeout_ms);
if (ret) {
dev_info(dev, "Failed to reset MHI due to syserr state\n");
goto error_exit;
@ -1202,14 +1208,18 @@ EXPORT_SYMBOL_GPL(mhi_power_down);
int mhi_sync_power_up(struct mhi_controller *mhi_cntrl)
{
int ret = mhi_async_power_up(mhi_cntrl);
u32 timeout_ms;
if (ret)
return ret;
/* Some devices need more time to set ready during power up */
timeout_ms = mhi_cntrl->ready_timeout_ms ?
mhi_cntrl->ready_timeout_ms : mhi_cntrl->timeout_ms;
wait_event_timeout(mhi_cntrl->state_event,
MHI_IN_MISSION_MODE(mhi_cntrl->ee) ||
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
msecs_to_jiffies(mhi_cntrl->timeout_ms));
msecs_to_jiffies(timeout_ms));
ret = (MHI_IN_MISSION_MODE(mhi_cntrl->ee)) ? 0 : -ETIMEDOUT;
if (ret)


@ -21,6 +21,15 @@
/* Platform specific flags */
#define MHI_EPF_USE_DMA BIT(0)
struct pci_epf_mhi_dma_transfer {
struct pci_epf_mhi *epf_mhi;
struct mhi_ep_buf_info buf_info;
struct list_head node;
dma_addr_t paddr;
enum dma_data_direction dir;
size_t size;
};
struct pci_epf_mhi_ep_info {
const struct mhi_ep_cntrl_config *config;
struct pci_epf_header *epf_header;
@ -124,6 +133,10 @@ struct pci_epf_mhi {
resource_size_t mmio_phys;
struct dma_chan *dma_chan_tx;
struct dma_chan *dma_chan_rx;
struct workqueue_struct *dma_wq;
struct work_struct dma_work;
struct list_head dma_list;
spinlock_t list_lock;
u32 mmio_size;
int irq;
};
@ -209,59 +222,65 @@ static void pci_epf_mhi_raise_irq(struct mhi_ep_cntrl *mhi_cntrl, u32 vector)
vector + 1);
}
static int pci_epf_mhi_iatu_read(struct mhi_ep_cntrl *mhi_cntrl, u64 from,
void *to, size_t size)
static int pci_epf_mhi_iatu_read(struct mhi_ep_cntrl *mhi_cntrl,
struct mhi_ep_buf_info *buf_info)
{
struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
size_t offset = get_align_offset(epf_mhi, from);
size_t offset = get_align_offset(epf_mhi, buf_info->host_addr);
void __iomem *tre_buf;
phys_addr_t tre_phys;
int ret;
mutex_lock(&epf_mhi->lock);
ret = __pci_epf_mhi_alloc_map(mhi_cntrl, from, &tre_phys, &tre_buf,
offset, size);
ret = __pci_epf_mhi_alloc_map(mhi_cntrl, buf_info->host_addr, &tre_phys,
&tre_buf, offset, buf_info->size);
if (ret) {
mutex_unlock(&epf_mhi->lock);
return ret;
}
memcpy_fromio(to, tre_buf, size);
memcpy_fromio(buf_info->dev_addr, tre_buf, buf_info->size);
__pci_epf_mhi_unmap_free(mhi_cntrl, from, tre_phys, tre_buf, offset,
size);
__pci_epf_mhi_unmap_free(mhi_cntrl, buf_info->host_addr, tre_phys,
tre_buf, offset, buf_info->size);
mutex_unlock(&epf_mhi->lock);
if (buf_info->cb)
buf_info->cb(buf_info);
return 0;
}
static int pci_epf_mhi_iatu_write(struct mhi_ep_cntrl *mhi_cntrl,
void *from, u64 to, size_t size)
struct mhi_ep_buf_info *buf_info)
{
struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
size_t offset = get_align_offset(epf_mhi, to);
size_t offset = get_align_offset(epf_mhi, buf_info->host_addr);
void __iomem *tre_buf;
phys_addr_t tre_phys;
int ret;
mutex_lock(&epf_mhi->lock);
ret = __pci_epf_mhi_alloc_map(mhi_cntrl, to, &tre_phys, &tre_buf,
offset, size);
ret = __pci_epf_mhi_alloc_map(mhi_cntrl, buf_info->host_addr, &tre_phys,
&tre_buf, offset, buf_info->size);
if (ret) {
mutex_unlock(&epf_mhi->lock);
return ret;
}
memcpy_toio(tre_buf, from, size);
memcpy_toio(tre_buf, buf_info->dev_addr, buf_info->size);
__pci_epf_mhi_unmap_free(mhi_cntrl, to, tre_phys, tre_buf, offset,
size);
__pci_epf_mhi_unmap_free(mhi_cntrl, buf_info->host_addr, tre_phys,
tre_buf, offset, buf_info->size);
mutex_unlock(&epf_mhi->lock);
if (buf_info->cb)
buf_info->cb(buf_info);
return 0;
}
@ -270,8 +289,8 @@ static void pci_epf_mhi_dma_callback(void *param)
complete(param);
}
static int pci_epf_mhi_edma_read(struct mhi_ep_cntrl *mhi_cntrl, u64 from,
void *to, size_t size)
static int pci_epf_mhi_edma_read(struct mhi_ep_cntrl *mhi_cntrl,
struct mhi_ep_buf_info *buf_info)
{
struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
struct device *dma_dev = epf_mhi->epf->epc->dev.parent;
@ -284,13 +303,13 @@ static int pci_epf_mhi_edma_read(struct mhi_ep_cntrl *mhi_cntrl, u64 from,
dma_addr_t dst_addr;
int ret;
if (size < SZ_4K)
return pci_epf_mhi_iatu_read(mhi_cntrl, from, to, size);
if (buf_info->size < SZ_4K)
return pci_epf_mhi_iatu_read(mhi_cntrl, buf_info);
mutex_lock(&epf_mhi->lock);
config.direction = DMA_DEV_TO_MEM;
config.src_addr = from;
config.src_addr = buf_info->host_addr;
ret = dmaengine_slave_config(chan, &config);
if (ret) {
@ -298,14 +317,16 @@ static int pci_epf_mhi_edma_read(struct mhi_ep_cntrl *mhi_cntrl, u64 from,
goto err_unlock;
}
dst_addr = dma_map_single(dma_dev, to, size, DMA_FROM_DEVICE);
dst_addr = dma_map_single(dma_dev, buf_info->dev_addr, buf_info->size,
DMA_FROM_DEVICE);
ret = dma_mapping_error(dma_dev, dst_addr);
if (ret) {
dev_err(dev, "Failed to map remote memory\n");
goto err_unlock;
}
desc = dmaengine_prep_slave_single(chan, dst_addr, size, DMA_DEV_TO_MEM,
desc = dmaengine_prep_slave_single(chan, dst_addr, buf_info->size,
DMA_DEV_TO_MEM,
DMA_CTRL_ACK | DMA_PREP_INTERRUPT);
if (!desc) {
dev_err(dev, "Failed to prepare DMA\n");
@ -332,15 +353,15 @@ static int pci_epf_mhi_edma_read(struct mhi_ep_cntrl *mhi_cntrl, u64 from,
}
err_unmap:
dma_unmap_single(dma_dev, dst_addr, size, DMA_FROM_DEVICE);
dma_unmap_single(dma_dev, dst_addr, buf_info->size, DMA_FROM_DEVICE);
err_unlock:
mutex_unlock(&epf_mhi->lock);
return ret;
}
static int pci_epf_mhi_edma_write(struct mhi_ep_cntrl *mhi_cntrl, void *from,
u64 to, size_t size)
static int pci_epf_mhi_edma_write(struct mhi_ep_cntrl *mhi_cntrl,
struct mhi_ep_buf_info *buf_info)
{
struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
struct device *dma_dev = epf_mhi->epf->epc->dev.parent;
@ -353,13 +374,13 @@ static int pci_epf_mhi_edma_write(struct mhi_ep_cntrl *mhi_cntrl, void *from,
dma_addr_t src_addr;
int ret;
if (size < SZ_4K)
return pci_epf_mhi_iatu_write(mhi_cntrl, from, to, size);
if (buf_info->size < SZ_4K)
return pci_epf_mhi_iatu_write(mhi_cntrl, buf_info);
mutex_lock(&epf_mhi->lock);
config.direction = DMA_MEM_TO_DEV;
config.dst_addr = to;
config.dst_addr = buf_info->host_addr;
ret = dmaengine_slave_config(chan, &config);
if (ret) {
@ -367,14 +388,16 @@ static int pci_epf_mhi_edma_write(struct mhi_ep_cntrl *mhi_cntrl, void *from,
goto err_unlock;
}
src_addr = dma_map_single(dma_dev, from, size, DMA_TO_DEVICE);
src_addr = dma_map_single(dma_dev, buf_info->dev_addr, buf_info->size,
DMA_TO_DEVICE);
ret = dma_mapping_error(dma_dev, src_addr);
if (ret) {
dev_err(dev, "Failed to map remote memory\n");
goto err_unlock;
}
desc = dmaengine_prep_slave_single(chan, src_addr, size, DMA_MEM_TO_DEV,
desc = dmaengine_prep_slave_single(chan, src_addr, buf_info->size,
DMA_MEM_TO_DEV,
DMA_CTRL_ACK | DMA_PREP_INTERRUPT);
if (!desc) {
dev_err(dev, "Failed to prepare DMA\n");
@ -401,7 +424,199 @@ static int pci_epf_mhi_edma_write(struct mhi_ep_cntrl *mhi_cntrl, void *from,
}
err_unmap:
dma_unmap_single(dma_dev, src_addr, size, DMA_FROM_DEVICE);
dma_unmap_single(dma_dev, src_addr, buf_info->size, DMA_TO_DEVICE);
err_unlock:
mutex_unlock(&epf_mhi->lock);
return ret;
}
static void pci_epf_mhi_dma_worker(struct work_struct *work)
{
struct pci_epf_mhi *epf_mhi = container_of(work, struct pci_epf_mhi, dma_work);
struct device *dma_dev = epf_mhi->epf->epc->dev.parent;
struct pci_epf_mhi_dma_transfer *itr, *tmp;
struct mhi_ep_buf_info *buf_info;
unsigned long flags;
LIST_HEAD(head);
spin_lock_irqsave(&epf_mhi->list_lock, flags);
list_splice_tail_init(&epf_mhi->dma_list, &head);
spin_unlock_irqrestore(&epf_mhi->list_lock, flags);
list_for_each_entry_safe(itr, tmp, &head, node) {
list_del(&itr->node);
dma_unmap_single(dma_dev, itr->paddr, itr->size, itr->dir);
buf_info = &itr->buf_info;
buf_info->cb(buf_info);
kfree(itr);
}
}
static void pci_epf_mhi_dma_async_callback(void *param)
{
struct pci_epf_mhi_dma_transfer *transfer = param;
struct pci_epf_mhi *epf_mhi = transfer->epf_mhi;
spin_lock(&epf_mhi->list_lock);
list_add_tail(&transfer->node, &epf_mhi->dma_list);
spin_unlock(&epf_mhi->list_lock);
queue_work(epf_mhi->dma_wq, &epf_mhi->dma_work);
}
static int pci_epf_mhi_edma_read_async(struct mhi_ep_cntrl *mhi_cntrl,
struct mhi_ep_buf_info *buf_info)
{
struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
struct device *dma_dev = epf_mhi->epf->epc->dev.parent;
struct pci_epf_mhi_dma_transfer *transfer = NULL;
struct dma_chan *chan = epf_mhi->dma_chan_rx;
struct device *dev = &epf_mhi->epf->dev;
DECLARE_COMPLETION_ONSTACK(complete);
struct dma_async_tx_descriptor *desc;
struct dma_slave_config config = {};
dma_cookie_t cookie;
dma_addr_t dst_addr;
int ret;
mutex_lock(&epf_mhi->lock);
config.direction = DMA_DEV_TO_MEM;
config.src_addr = buf_info->host_addr;
ret = dmaengine_slave_config(chan, &config);
if (ret) {
dev_err(dev, "Failed to configure DMA channel\n");
goto err_unlock;
}
dst_addr = dma_map_single(dma_dev, buf_info->dev_addr, buf_info->size,
DMA_FROM_DEVICE);
ret = dma_mapping_error(dma_dev, dst_addr);
if (ret) {
dev_err(dev, "Failed to map remote memory\n");
goto err_unlock;
}
desc = dmaengine_prep_slave_single(chan, dst_addr, buf_info->size,
DMA_DEV_TO_MEM,
DMA_CTRL_ACK | DMA_PREP_INTERRUPT);
if (!desc) {
dev_err(dev, "Failed to prepare DMA\n");
ret = -EIO;
goto err_unmap;
}
transfer = kzalloc(sizeof(*transfer), GFP_KERNEL);
if (!transfer) {
ret = -ENOMEM;
goto err_unmap;
}
transfer->epf_mhi = epf_mhi;
transfer->paddr = dst_addr;
transfer->size = buf_info->size;
transfer->dir = DMA_FROM_DEVICE;
memcpy(&transfer->buf_info, buf_info, sizeof(*buf_info));
desc->callback = pci_epf_mhi_dma_async_callback;
desc->callback_param = transfer;
cookie = dmaengine_submit(desc);
ret = dma_submit_error(cookie);
if (ret) {
dev_err(dev, "Failed to do DMA submit\n");
goto err_free_transfer;
}
dma_async_issue_pending(chan);
goto err_unlock;
err_free_transfer:
kfree(transfer);
err_unmap:
dma_unmap_single(dma_dev, dst_addr, buf_info->size, DMA_FROM_DEVICE);
err_unlock:
mutex_unlock(&epf_mhi->lock);
return ret;
}
static int pci_epf_mhi_edma_write_async(struct mhi_ep_cntrl *mhi_cntrl,
struct mhi_ep_buf_info *buf_info)
{
struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
struct device *dma_dev = epf_mhi->epf->epc->dev.parent;
struct pci_epf_mhi_dma_transfer *transfer = NULL;
struct dma_chan *chan = epf_mhi->dma_chan_tx;
struct device *dev = &epf_mhi->epf->dev;
DECLARE_COMPLETION_ONSTACK(complete);
struct dma_async_tx_descriptor *desc;
struct dma_slave_config config = {};
dma_cookie_t cookie;
dma_addr_t src_addr;
int ret;
mutex_lock(&epf_mhi->lock);
config.direction = DMA_MEM_TO_DEV;
config.dst_addr = buf_info->host_addr;
ret = dmaengine_slave_config(chan, &config);
if (ret) {
dev_err(dev, "Failed to configure DMA channel\n");
goto err_unlock;
}
src_addr = dma_map_single(dma_dev, buf_info->dev_addr, buf_info->size,
DMA_TO_DEVICE);
ret = dma_mapping_error(dma_dev, src_addr);
if (ret) {
dev_err(dev, "Failed to map remote memory\n");
goto err_unlock;
}
desc = dmaengine_prep_slave_single(chan, src_addr, buf_info->size,
DMA_MEM_TO_DEV,
DMA_CTRL_ACK | DMA_PREP_INTERRUPT);
if (!desc) {
dev_err(dev, "Failed to prepare DMA\n");
ret = -EIO;
goto err_unmap;
}
transfer = kzalloc(sizeof(*transfer), GFP_KERNEL);
if (!transfer) {
ret = -ENOMEM;
goto err_unmap;
}
transfer->epf_mhi = epf_mhi;
transfer->paddr = src_addr;
transfer->size = buf_info->size;
transfer->dir = DMA_TO_DEVICE;
memcpy(&transfer->buf_info, buf_info, sizeof(*buf_info));
desc->callback = pci_epf_mhi_dma_async_callback;
desc->callback_param = transfer;
cookie = dmaengine_submit(desc);
ret = dma_submit_error(cookie);
if (ret) {
dev_err(dev, "Failed to do DMA submit\n");
goto err_free_transfer;
}
dma_async_issue_pending(chan);
goto err_unlock;
err_free_transfer:
kfree(transfer);
err_unmap:
dma_unmap_single(dma_dev, src_addr, buf_info->size, DMA_TO_DEVICE);
err_unlock:
mutex_unlock(&epf_mhi->lock);
@ -431,6 +646,7 @@ static int pci_epf_mhi_dma_init(struct pci_epf_mhi *epf_mhi)
struct device *dev = &epf_mhi->epf->dev;
struct epf_dma_filter filter;
dma_cap_mask_t mask;
int ret;
dma_cap_zero(mask);
dma_cap_set(DMA_SLAVE, mask);
@ -449,16 +665,35 @@ static int pci_epf_mhi_dma_init(struct pci_epf_mhi *epf_mhi)
&filter);
if (IS_ERR_OR_NULL(epf_mhi->dma_chan_rx)) {
dev_err(dev, "Failed to request rx channel\n");
dma_release_channel(epf_mhi->dma_chan_tx);
epf_mhi->dma_chan_tx = NULL;
return -ENODEV;
ret = -ENODEV;
goto err_release_tx;
}
epf_mhi->dma_wq = alloc_workqueue("pci_epf_mhi_dma_wq", 0, 0);
if (!epf_mhi->dma_wq) {
ret = -ENOMEM;
goto err_release_rx;
}
INIT_LIST_HEAD(&epf_mhi->dma_list);
INIT_WORK(&epf_mhi->dma_work, pci_epf_mhi_dma_worker);
spin_lock_init(&epf_mhi->list_lock);
return 0;
err_release_rx:
dma_release_channel(epf_mhi->dma_chan_rx);
epf_mhi->dma_chan_rx = NULL;
err_release_tx:
dma_release_channel(epf_mhi->dma_chan_tx);
epf_mhi->dma_chan_tx = NULL;
return ret;
}
static void pci_epf_mhi_dma_deinit(struct pci_epf_mhi *epf_mhi)
{
destroy_workqueue(epf_mhi->dma_wq);
dma_release_channel(epf_mhi->dma_chan_tx);
dma_release_channel(epf_mhi->dma_chan_rx);
epf_mhi->dma_chan_tx = NULL;
@ -531,12 +766,13 @@ static int pci_epf_mhi_link_up(struct pci_epf *epf)
mhi_cntrl->raise_irq = pci_epf_mhi_raise_irq;
mhi_cntrl->alloc_map = pci_epf_mhi_alloc_map;
mhi_cntrl->unmap_free = pci_epf_mhi_unmap_free;
mhi_cntrl->read_sync = mhi_cntrl->read_async = pci_epf_mhi_iatu_read;
mhi_cntrl->write_sync = mhi_cntrl->write_async = pci_epf_mhi_iatu_write;
if (info->flags & MHI_EPF_USE_DMA) {
mhi_cntrl->read_from_host = pci_epf_mhi_edma_read;
mhi_cntrl->write_to_host = pci_epf_mhi_edma_write;
} else {
mhi_cntrl->read_from_host = pci_epf_mhi_iatu_read;
mhi_cntrl->write_to_host = pci_epf_mhi_iatu_write;
mhi_cntrl->read_sync = pci_epf_mhi_edma_read;
mhi_cntrl->write_sync = pci_epf_mhi_edma_write;
mhi_cntrl->read_async = pci_epf_mhi_edma_read_async;
mhi_cntrl->write_async = pci_epf_mhi_edma_write_async;
}
/* Register the MHI EP controller */


@ -266,6 +266,7 @@ struct mhi_event_config {
* struct mhi_controller_config - Root MHI controller configuration
* @max_channels: Maximum number of channels supported
* @timeout_ms: Timeout value for operations. 0 means use default
* @ready_timeout_ms: Timeout value for waiting device to be ready (optional)
* @buf_len: Size of automatically allocated buffers. 0 means use default
* @num_channels: Number of channels defined in @ch_cfg
* @ch_cfg: Array of defined channels
@ -277,6 +278,7 @@ struct mhi_event_config {
struct mhi_controller_config {
u32 max_channels;
u32 timeout_ms;
u32 ready_timeout_ms;
u32 buf_len;
u32 num_channels;
const struct mhi_channel_config *ch_cfg;
@ -330,6 +332,7 @@ struct mhi_controller_config {
* @pm_mutex: Mutex for suspend/resume operation
* @pm_lock: Lock for protecting MHI power management state
* @timeout_ms: Timeout in ms for state transitions
* @ready_timeout_ms: Timeout in ms for waiting device to be ready (optional)
* @pm_state: MHI power management state
* @db_access: DB access states
* @ee: MHI device execution environment
@ -419,6 +422,7 @@ struct mhi_controller {
struct mutex pm_mutex;
rwlock_t pm_lock;
u32 timeout_ms;
u32 ready_timeout_ms;
u32 pm_state;
u32 db_access;
enum mhi_ee_type ee;


@ -49,6 +49,27 @@ struct mhi_ep_db_info {
u32 status;
};
/**
* struct mhi_ep_buf_info - MHI Endpoint transfer buffer info
* @mhi_dev: MHI device associated with this buffer
* @dev_addr: Address of the buffer in endpoint
* @host_addr: Address of the buffer in host
* @size: Size of the buffer
* @code: Transfer completion code
* @cb: Callback to be executed by controller drivers after transfer completion (async)
* @cb_buf: Opaque buffer to be passed to the callback
*/
struct mhi_ep_buf_info {
struct mhi_ep_device *mhi_dev;
void *dev_addr;
u64 host_addr;
size_t size;
int code;
void (*cb)(struct mhi_ep_buf_info *buf_info);
void *cb_buf;
};
/**
* struct mhi_ep_cntrl - MHI Endpoint controller structure
* @cntrl_dev: Pointer to the struct device of physical bus acting as the MHI
@ -82,8 +103,10 @@ struct mhi_ep_db_info {
* @raise_irq: CB function for raising IRQ to the host
* @alloc_map: CB function for allocating memory in endpoint for storing host context and mapping it
* @unmap_free: CB function to unmap and free the allocated memory in endpoint for storing host context
* @read_from_host: CB function for reading from host memory from endpoint
* @write_to_host: CB function for writing to host memory from endpoint
* @read_sync: CB function for reading from host memory synchronously
* @write_sync: CB function for writing to host memory synchronously
* @read_async: CB function for reading from host memory asynchronously
* @write_async: CB function for writing to host memory asynchronously
* @mhi_state: MHI Endpoint state
* @max_chan: Maximum channels supported by the endpoint controller
* @mru: MRU (Maximum Receive Unit) value of the endpoint controller
@ -128,14 +151,19 @@ struct mhi_ep_cntrl {
struct work_struct reset_work;
struct work_struct cmd_ring_work;
struct work_struct ch_ring_work;
struct kmem_cache *ring_item_cache;
struct kmem_cache *ev_ring_el_cache;
struct kmem_cache *tre_buf_cache;
void (*raise_irq)(struct mhi_ep_cntrl *mhi_cntrl, u32 vector);
int (*alloc_map)(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, phys_addr_t *phys_ptr,
void __iomem **virt, size_t size);
void (*unmap_free)(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, phys_addr_t phys,
void __iomem *virt, size_t size);
int (*read_from_host)(struct mhi_ep_cntrl *mhi_cntrl, u64 from, void *to, size_t size);
int (*write_to_host)(struct mhi_ep_cntrl *mhi_cntrl, void *from, u64 to, size_t size);
int (*read_sync)(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_buf_info *buf_info);
int (*write_sync)(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_buf_info *buf_info);
int (*read_async)(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_buf_info *buf_info);
int (*write_async)(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_buf_info *buf_info);
enum mhi_state mhi_state;