drm/i915/guc: Add CT size delay helper
As of now, there is no mechanism for tracking a given request's progress
through the queue. Instead, add a helper that returns an estimated maximum
time the queue should take to drain if completely full.

Suggested-by: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
Acked-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Nirmoy Das <nirmoy.das@intel.com>
Reviewed-by: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Andi Shyti <andi.shyti@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231017180806.3054290-3-jonathan.cavitt@intel.com
commit ff0dac080a (parent 29e6683375)
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -103,6 +103,33 @@ enum { CTB_SEND = 0, CTB_RECV = 1 };
 
 enum { CTB_OWNER_HOST = 0 };
 
+/*
+ * Some H2G commands involve a synchronous response that the driver needs
+ * to wait for. In such cases, a timeout is required to prevent the driver
+ * from waiting forever in the case of an error (either no error response
+ * is defined in the protocol or something has died and requires a reset).
+ * The specific command may be defined as having a time bound response but
+ * the CT is a queue and that time guarantee only starts from the point
+ * when the command reaches the head of the queue and is processed by GuC.
+ *
+ * Ideally there would be a helper to report the progress of a given
+ * command through the CT. However, that would require a significant
+ * amount of work in the CT layer. In the meantime, provide a reasonable
+ * estimation of the worst case latency it should take for the entire
+ * queue to drain. And therefore, how long a caller should wait before
+ * giving up on their request. The current estimate is based on empirical
+ * measurement of a test that fills the buffer with context creation and
+ * destruction requests as they seem to be the slowest operation.
+ */
+long intel_guc_ct_max_queue_time_jiffies(void)
+{
+	/*
+	 * A 4KB buffer full of context destroy commands takes a little
+	 * over a second to process so bump that to 2s to be super safe.
+	 */
+	return (CTB_H2G_BUFFER_SIZE * HZ) / SZ_2K;
+}
+
 static void ct_receive_tasklet_func(struct tasklet_struct *t);
 static void ct_incoming_request_worker_func(struct work_struct *w);
 
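For reference, the return value can be sanity-checked by hand. Assuming CTB_H2G_BUFFER_SIZE is SZ_4K (its definition elsewhere in intel_guc_ct.c) and SZ_2K is 2048, the expression works out to:

    (SZ_4K * HZ) / SZ_2K = (4096 / 2048) * HZ = 2 * HZ

that is, two seconds expressed in jiffies, matching the "2s to be super safe" bound described in the in-code comment.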
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h
@@ -104,6 +104,8 @@ struct intel_guc_ct {
 #endif
 };
 
+long intel_guc_ct_max_queue_time_jiffies(void);
+
 void intel_guc_ct_init_early(struct intel_guc_ct *ct);
 int intel_guc_ct_init(struct intel_guc_ct *ct);
 void intel_guc_ct_fini(struct intel_guc_ct *ct);
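To illustrate how a caller might consume the new helper, here is a minimal, hypothetical sketch that is not part of this patch: it bounds a wait for a G2H reply by the worst-case time for a completely full CT queue to drain. The guc_wait_for_reply() wrapper and the reply_done completion (signalled from the G2H handler) are assumptions made up for the example.

/*
 * Hypothetical caller, not part of this patch: give up on a GuC reply
 * once the worst-case CT drain time has elapsed.
 */
#include <linux/completion.h>
#include <linux/errno.h>

#include "intel_guc_ct.h"

static int guc_wait_for_reply(struct completion *reply_done)
{
	long timeout = intel_guc_ct_max_queue_time_jiffies();

	/* wait_for_completion_timeout() returns 0 if the timeout expired. */
	if (!wait_for_completion_timeout(reply_done, timeout))
		return -ETIME;

	return 0;
}

Any caller that would otherwise wait indefinitely on a stuck CT could follow the same pattern, using the helper's estimate rather than inventing a per-call timeout.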