kvmgt gets the msi eventfd_ctx when QEMU's vfio sets the irq eventfd, so
the msi eventfd_ctx should be put at some point.
The first point is when kvmgt handles QEMU's vfio_disable_irqindex()
call, which has DATA_NONE and ACTION_TRIGGER set in flags.
If QEMU doesn't call vfio_disable_irqindex(), the second point is the
vgpu release function.
v2: Don't inject an msi interrupt into the guest once the eventfd_ctx
has been put.
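A minimal sketch of the put helper; the field and function names below
are assumptions for illustration, not necessarily the patch's own:

static void intel_vgpu_put_msi_eventfd(struct intel_vgpu *vgpu)
{
        /* drop the reference taken when QEMU's vfio set the irq eventfd */
        if (vgpu->vdev.msi_trigger) {
                eventfd_ctx_put(vgpu->vdev.msi_trigger);
                vgpu->vdev.msi_trigger = NULL;
        }
}

Such a helper would be called both from the DATA_NONE + ACTION_TRIGGER
disable path and from the vgpu release function.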
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The error exit path when a duplicate is found does not kfree the cmd_entry
struct and hence there is a small memory leak. Fix this by kfree'ing it.
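A sketch of the fixed error path; the surrounding helper names are
assumed from the command scanner code:

        e = kzalloc(sizeof(*e), GFP_KERNEL);
        if (!e)
                return -ENOMEM;

        e->info = &cmd_info[i];
        info = find_cmd_entry_any_ring(gvt, e->info->opcode, e->info->rings);
        if (info) {
                gvt_err("%s %s duplicated\n", e->info->name, info->name);
                kfree(e);       /* free the entry on the duplicate path */
                return -EEXIST;
        }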
Detected by CoverityScan, CID#1370198 ("Resource Leak")
Fixes: be1da7070aea ("drm/i915/gvt: vGPU command scanner")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Commit 'aee3bac0a3a8 ("drm/i915/psr: Tie PSR2 support to Y
coordinate requirement")' got merged to drm-intel-next-queued,
but the variable was defined in commit 'c5fe47327b06 ("drm: Add PSR
version 3 macro")', which was merged through drm-misc.
So backmerge to get drm-intel-next-queued compiling again.
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Add the drm_format_mod update, which was omitted.
Fixes: e546e281 ("drm/i915/gvt: Dmabuf support for GVT-g")
Cc: stable@vger.kernel.org
Signed-off-by: Tina Zhang <tina.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Many errors show up in the host dmesg during guest boot-up with local
display enabled:
gvt: vgpu 1: invalid range gmadr 0x0 size 0x0
This error happens when QEMU gets the dmabuf info while the virtual
display plane is enabled but its base address is still an invalid 0,
which can be the case before the guest enables its plane. At that
moment, the plane state is copied from the host, where the plane may be
enabled.
This patch disables the primary/sprite/cursor planes at virtual display
initialization, so intel_vgpu_decode_primary/cursor/sprite can return
early because the plane is disabled; the plane base check is then
skipped and the error message disappears.
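A sketch of the initialization change; the register accessor and bit
macros below are assumed for illustration:

        /* start with all planes disabled so the decode paths return early */
        vgpu_vreg_t(vgpu, DSPCNTR(pipe)) &= ~DISPLAY_PLANE_ENABLE;
        vgpu_vreg_t(vgpu, SPRCTL(pipe)) &= ~SPRITE_ENABLE;
        vgpu_vreg_t(vgpu, CURCNTR(pipe)) &= ~CURSOR_MODE;
        vgpu_vreg_t(vgpu, CURCNTR(pipe)) |= CURSOR_MODE_DISABLE;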
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Many error messages show up in the host dmesg when a guest boots up with
local display enabled.
[ 167.680011] gvt: vgpu 1: invalid range gmadr 0x0 size 0x0
[ 167.680013] gvt: vgpu 1: invalid gma address: 0
The second error line duplicates the first one, so this patch removes
the redundant error message and makes the remaining error message
clearer.
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
We have canceled the dma map for ppgtt entries. We also need to do it
for ggtt entries when they are invalidated.
This fixes a task hang issue such as:
[13517.791767] INFO: task gvt_service_thr:1081 blocked for more than 120 seconds.
[13517.792584] Not tainted 4.14.15+ #3
[13517.793417] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[13517.794267] gvt_service_thr D 0 1081 2 0x80000000
[13517.795132] Call Trace:
[13517.795996] ? __schedule+0x493/0x77b
[13517.796859] schedule+0x79/0x82
[13517.797740] schedule_preempt_disabled+0x5/0x6
[13517.798614] __mutex_lock.isra.0+0x2b5/0x445
[13517.799504] ? __switch_to_asm+0x24/0x60
[13517.800381] ? intel_gvt_cleanup+0x10/0x10
[13517.801261] ? intel_gvt_schedule+0x19/0x2b9
[13517.802107] intel_gvt_schedule+0x19/0x2b9
[13517.802954] ? intel_gvt_cleanup+0x10/0x10
[13517.803824] gvt_service_thread+0xe3/0x10d
[13517.804704] ? wait_woken+0x68/0x68
[13517.805588] kthread+0x118/0x120
[13517.806478] ? kthread_create_on_node+0x3a/0x3a
[13517.807381] ? call_usermodehelper_exec_async+0x113/0x11a
[13517.808307] ret_from_fork+0x35/0x40
v3: split out ggtt reset case.
v2: also unmap ggtt during reset.
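A sketch of the invalidate path; the function and field names here are
assumptions, not necessarily the exact patch:

static void ggtt_invalidate_pte(struct intel_vgpu *vgpu,
                                struct intel_gvt_gtt_entry *entry)
{
        struct intel_gvt_gtt_pte_ops *ops = vgpu->gvt->gtt.pte_ops;
        unsigned long pfn = ops->get_pfn(entry);

        /* skip the scratch page, unmap anything else that was mapped */
        if (pfn != vgpu->gvt->gtt.scratch_mfn)
                intel_gvt_hypervisor_dma_unmap_guest_page(vgpu,
                                                pfn << PAGE_SHIFT);
}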
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
GVT-g dispatches requests to host i915 and depends on the i915 notify
ring interrupt mechanism to check request completion.
For now, MI_USER_INTERRUPT in guest requests is passed through by the
GVT-g cmd parser and i915 does not use it, which causes unnecessary
interrupt handling in i915.
On the other hand, if several guest requests are combined into one
request and MI_USER_INTERRUPT sits in the middle of the combined
request, GVT-g still has to wait for the whole request to complete
before injecting user interrupts into the guest.
This patch turns every MI_USER_INTERRUPT into a NOP to save some
interrupt handling.
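A minimal sketch of the idea in the cmd parser; the helper names are
assumed from cmd_parser.c:

static int cmd_handler_mi_user_interrupt(struct parser_exec_state *s)
{
        set_bit(cmd_interrupt_events[s->ring_id].mi_user_interrupt,
                s->workload->pending_events);

        /* GVT-g injects the guest interrupt on workload completion, so
         * host i915 never needs to see this command: make it a NOP. */
        patch_value(s, cmd_ptr(s, 0), MI_NOOP);
        return 0;
}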
Here is the test result of running glmark2 in the guest for 10 seconds:
the number of host master interrupts is reduced from 16021 to 11162,
and the number of host user interrupts is reduced from 7936 to 3536.
v2:
- revise commit message. (Kevin)
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhipeng Gong <zhipeng.gong@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
In preparation to enabling -Wimplicit-fallthrough, mark switch cases
where we are expecting to fall through.
Addresses-Coverity-ID: 1466154 ("Missing break in switch")
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Backmerge tag 'v4.16-rc7' into drm-next
Linux 4.16-rc7
This was requested by Daniel, and things were getting a bit hard to
reconcile; most of the conflicts were trivial, though.
On unknown/unhandled ioctls the driver should return an error, so
userspace knows it tried to use something unsupported.
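A sketch of the idea; -ENOTTY is the conventional return for an unknown
ioctl, and the case labels here are only illustrative:

        switch (cmd) {
        case VFIO_DEVICE_GET_INFO:
                /* ... handled ioctls ... */
                break;
        default:
                /* unknown/unhandled ioctl: tell userspace instead of
                 * silently returning success */
                return -ENOTTY;
        }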
Cc: stable@vger.kernel.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Our shadow context content comes from the guest, but for masked control
registers like CTX_CONTEXT_CONTROL we need to make sure all settings
from the guest take effect when this context is on hardware. This change
forces the mask enable bits on for all of them, so that every bit
setting becomes effective on hardware.
One regression was found related to the inhibit bit: once it is set, the
GPU engine keeps working in the inhibit state until an MI_LOAD_REG_IMM
command or the context image clears the inhibit bit with the mask bit
set to 1 and the value bit set to 0. In GVT-g the workload currently has
the highest priority, so a GVT-g workload can easily trigger the preempt
context, which sets the inhibit bit; the GVT-g workload is then
scheduled in, but its shadow context image usually doesn't set the
inhibit mask bit, so the GPU is still in the inhibit state while the GVT
workload is running. This caused a GPU hang.
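For masked registers, bits 31:16 of a write select which of bits 15:0
actually change. A sketch of the forced-mask idea, with the field names
assumed for illustration:

        /* make every bit copied from the guest image take effect */
        shadow_ring_context->ctx_ctrl.val =
                guest_ring_context->ctx_ctrl.val | (0xffff << 16);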
Suggested-by: Zhang, Xiong <xiong.y.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Reviewed-by: Zhang, Xiong <xiong.y.zhang@intel.com>
The PDPs of a shadow page will only be valid after a vGPU mm is pinned.
So the PDPs in the shadow context should be updated then.
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
As different OSes might handle the GVT PPGTT creation/destroy
notification differently during a vGPU reset, a better approach is to
invalidate all vGPU PPGTT mm objects during the vGPU reset.
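A sketch of the reset-time invalidation; the list and helper names are
assumptions:

void intel_vgpu_invalidate_ppgtt(struct intel_vgpu *vgpu)
{
        struct intel_vgpu_mm *mm;
        struct list_head *pos, *n;

        /* walk every PPGTT mm of this vGPU and drop its shadow state */
        list_for_each_safe(pos, n, &vgpu->gtt.ppgtt_mm_list_head) {
                mm = container_of(pos, struct intel_vgpu_mm, ppgtt_mm.list);
                if (mm->type == INTEL_GVT_MM_PPGTT)
                        invalidate_ppgtt_mm(mm);
        }
}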
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Trivial fix to spelling mistake in gvt_err error message text.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Redundant messages are printed when:
- a Linux guest is being created.
- a dma-buf Win10 guest boots.
- Xonotic stress testing runs in a Linux guest.
Add the below registers to the default MMIO handler:
0xd00, RPM_CONFIG0
0xd40, RC6_LOCATION
0x65010, HSW_AUD_MISC_CTRL
0x6671c,
0x700a0, CUR_FBC_CTL
0x7239c,
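A sketch of how the named offsets might be registered with the default
handler; the macro usage is assumed from handlers.c:

        MMIO_D(RPM_CONFIG0, D_ALL);
        MMIO_D(RC6_LOCATION, D_ALL);
        MMIO_D(HSW_AUD_MISC_CTRL, D_ALL);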
v2:
- Should init i915_reg_t using uint32_t instead of _MMIO macro.
(compiling errors)
- Use the defined offsets in i915_reg.h. (Zhenyu)
Signed-off-by: Colin Xu <colin.xu@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Once the ring buffer is copied to ring_scan_buffer and scanned, the
shadow batch buffer start address is only updated into
ring_scan_buffer, not the real ring allocated through intel_ring_begin
in the later copy_workload_to_ring_buffer.
This patch only sets the right shadow batch buffer address from the
ring buffer; it does not cover the shadow_wa_ctx.
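A sketch of the idea, with field names assumed: remember where the
MI_BATCH_BUFFER_START sits inside the scan copy, then recompute its
address once the commands land in the real shadow ring:

        /* at scan time: offset of the bb start command in ring_scan_buffer */
        bb->bb_offset = s->ip_va - s->rb_va;

        /* in copy_workload_to_ring_buffer(), after intel_ring_begin() */
        bb->bb_start_cmd_va = workload->shadow_ring_buffer_va + bb->bb_offset;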
v2:
- refine some comments. (Zhenyu)
v3:
- fix typo in title. (Zhenyu)
v4:
- remove the unnecessary comments. (Zhenyu)
- add comments in bb_start_cmd_va update. (Zhenyu)
Fixes: 0a53bc07f044 ("drm/i915/gvt: Separate cmd scan from request allocation")
Cc: stable@vger.kernel.org # v4.15
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Cc: Yulei Zhang <yulei.zhang@intel.com>
Signed-off-by: fred gao <fred.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
When populating the shadow ctx from the guest, we should handle the OA
related registers in the hw ctx, so that they will not be overwritten by
guest OA configs. This patch makes it possible to capture OA data from
the host for both the host and guests.
Signed-off-by: Min He <min.he@intel.com>
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
If a user continuously creates a vgpu, boots the guest, shuts down the
guest and destroys the vgpu from remote, the following call trace
sometimes shows up in dmesg:
[ 6412.954721] RPM wakelock ref not held during HW access
[ 6412.954795] WARNING: CPU: 7 PID: 11941 at
linux/drivers/gpu/drm/i915/intel_drv.h:1800
intel_uncore_forcewake_get.part.7+0x96/0xa0 [i915]
[ 6412.954915] Call Trace:
[ 6412.954951] intel_uncore_forcewake_get+0x18/0x20 [i915]
[ 6412.954989] intel_gvt_switch_mmio+0x8e/0x770 [i915]
[ 6412.954996] ? __slab_free+0x14d/0x2c0
[ 6412.955001] ? __slab_free+0x14d/0x2c0
[ 6412.955006] ? __slab_free+0x14d/0x2c0
[ 6412.955041] intel_vgpu_stop_schedule+0x92/0xd0 [i915]
[ 6412.955073] intel_gvt_deactivate_vgpu+0x48/0x60 [i915]
[ 6412.955078] __intel_vgpu_release+0x55/0x260 [kvmgt]
When this happens, gvt_switch_mmio is called at vgpu destroy while host
i915 is idle and doesn't hold the RPM wakelock, so the iGD is in
powersave mode; but gvt_switch_mmio requires the iGD to be powered on to
access registers. intel_runtime_pm_get should therefore be added to make
sure the iGD is powered on before gvt_switch_mmio.
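A sketch of the v2 shape, with the wakeref taken inside the switch
routine; the function and field names are assumed from mmio_context.c:

void intel_gvt_switch_mmio(struct intel_vgpu *pre, struct intel_vgpu *next,
                           int ring_id)
{
        struct drm_i915_private *dev_priv;

        dev_priv = pre ? pre->gvt->dev_priv : next->gvt->dev_priv;

        /* keep the device powered while mmio is saved/restored */
        intel_runtime_pm_get(dev_priv);
        switch_mmio(pre, next, ring_id);
        intel_runtime_pm_put(dev_priv);
}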
v2: Move runtime_pm_get/put into gvt_switch_mmio.(Zhenyu)
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
In XenGT, ioreq copy is used to trap mmio writes and ppgtt writes. Both
of them are memory writes, so the ioreq handler can't distinguish them.
The ioreq handler therefore probes the ppgtt write handler: if it
succeeds, the ioreq is a ppgtt write, otherwise it is an mmio write.
So the ppgtt write handler should return an error when it fails to find
the page track; this is essential for implementing the ioreq handler in
XenGT.
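A sketch of the XenGT dispatch this enables; the names and the exact
error code here are assumptions:

        ret = intel_vgpu_page_track_handler(vgpu, gpa, data, bytes);
        if (ret == -ENXIO)
                /* no page track found: not a ppgtt write, emulate as mmio */
                ret = intel_vgpu_emulate_mmio_write(vgpu, gpa, data, bytes);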
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
page_track_handler takes the lock at the beginning, so the lock should
also be released when finding the page track fails. Otherwise a deadlock
will happen.
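A sketch of the fixed flow; the lock and helper names are assumed:

        mutex_lock(&gvt->lock);

        page_track = intel_vgpu_find_page_track(vgpu, gpa >> PAGE_SHIFT);
        if (!page_track) {
                ret = -ENXIO;
                goto out;       /* release the lock on this path too */
        }
        /* ... invoke the track handler ... */
out:
        mutex_unlock(&gvt->lock);
        return ret;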
Fixes: e502a2af4c35 ("drm/i915/gvt: Provide generic page_track infrastructure for write-protected page")
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Add a new debugfs entry kvmgt_nr_cache_entries under the vgpu directory
which shows the number of entries in the dma cache.
$ cat /sys/kernel/debug/gvt/vgpu1/kvmgt_nr_cache_entries
10101
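A sketch of the debugfs hookup; the field names are assumed:

        debugfs_create_ulong("kvmgt_nr_cache_entries", 0444,
                             vgpu->debugfs, &vgpu->vdev.nr_cache_entries);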
v3: fix compiling error for some configuration. (Xiong Zhang <xiong.y.zhang@intel.com>)
v2: keep debugfs layout flat.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The current kvmgt implementation implicitly sets up the dma mapping in
the MPT API gfn_to_mfn. First, this design goes against the API's
original purpose. Second, there is no unmap in this design, so the dma
mappings keep growing larger and larger. For the multi-VM case, they
quickly consume the low 4GB of the IOMMU IOVA address space, and tons of
rbtree entries are created in the IOMMU IOVA allocator. Eventually a
single IOVA allocation can take as long as ~70ms. Such latency is
intolerable.
To address both issues above, this patch introduces two new MPT APIs:
o dma_map_guest_page - setup dma map for guest page
o dma_unmap_guest_page - cancel dma map for guest page
kvmgt implements these two APIs. To reduce the dma setup overhead for
duplicated pages (e.g. scratch pages), two caches are used: one maps gfn
to struct gvt_dma, the other maps dma addr to struct gvt_dma.
With these two new APIs, the gtt is now able to cancel a dma mapping
when a page table is invalidated, so the dma mappings no longer grow
without bound.
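Sketched prototypes of the two hooks as seen from the gtt code; the
exact signatures are assumed:

int intel_gvt_hypervisor_dma_map_guest_page(struct intel_vgpu *vgpu,
                                            unsigned long gfn,
                                            dma_addr_t *dma_addr);
void intel_gvt_hypervisor_dma_unmap_guest_page(struct intel_vgpu *vgpu,
                                               dma_addr_t dma_addr);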
v2: follow the old logic for VFIO_IOMMU_NOTIFY_DMA_UNMAP at this point.
Cc: Hang Yuan <hang.yuan@intel.com>
Cc: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Fix below error with minor code refactor.
CHECK drivers/gpu/drm/i915//gvt/handlers.c
drivers/gpu/drm/i915//gvt/handlers.c:203 sanitize_fence_mmio_access() error: 'vgpu' dereferencing possible ERR_PTR()
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Fix check error at
CHECK drivers/gpu/drm/i915//gvt/kvmgt.c
drivers/gpu/drm/i915//gvt/kvmgt.c:455 intel_vgpu_create() error: we previously assumed 'vgpu' could be null (see line 454)
For a failed vgpu create, just show the error return value in the
failure message.
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
There is an issue related to Coarse Power Gating (CPG) on the KBL NUC
in GVT-g: the vgpu can't get the correct default context by updating the
registers before inhibit-context submission. It always gets back the
hardware default value unless the inhibit-context submission happens
before the first forcewake put. With this wrong default context, the
vgpu runs with an incorrect state and hits unknown issues.
The solution is to initialize these mmios by adding LRI commands to the
ring buffer of the inhibit context; the GPU hardware then has no chance
to go down into RC6 while the LRI commands are being executed, and the
vgpu can get a correct default context for further use.
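A sketch of the LRI emission; the helper and structure names are
assumed, and the mmio_list/count parameters reflect the v4 note rather
than the exact patch:

static int restore_context_mmio_for_inhibit(struct intel_vgpu *vgpu,
                                            struct i915_request *req,
                                            struct engine_mmio *mmio_list,
                                            unsigned int count)
{
        struct engine_mmio *mmio;
        u32 *cs;

        cs = intel_ring_begin(req, count * 2 + 2);
        if (IS_ERR(cs))
                return PTR_ERR(cs);

        /* write the engine mmio list while the LRI keeps the GPU awake */
        *cs++ = MI_LOAD_REGISTER_IMM(count);
        for (mmio = mmio_list; mmio < mmio_list + count; mmio++) {
                *cs++ = i915_mmio_reg_offset(mmio->reg);
                *cs++ = vgpu_vreg_t(vgpu, mmio->reg) | (mmio->mask << 16);
        }
        *cs++ = MI_NOOP;

        intel_ring_advance(req, cs);
        return 0;
}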
v3:
- fix code fault, use 'for' to loop through mmio render list(Zhenyu)
v4:
- save the count of engine mmio need to be restored for inhibit context and
refine some comments. (Kevin)
v5:
- code rebase
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
No functional change, just for ease of use.
v4:
- refine comment (Kevin)
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
No functional change. This definition will also be used in future
patches.
v4:
- refine patch description (Kevin)
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
We don't know how many page tables will be shadowed; it varies
considerably with guest load. A radix tree is a better choice for us,
since the Page Frame Number is used as the key and most of the key bits
are common.
Here is some performance data (duration in us) for looking up an
element:
Before: (aka. ppgtt_find_shadow_page)
0.308 0.292 0.246 0.432 0.143 ... 0.311 0.225 0.382 0.199 0.325
After: (aka. intel_vgpu_find_spt_by_mfn)
0.106 0.106 0.107 0.106 0.105 0.107 ... 0.107 0.109 0.105 0.108
This time I didn't get the early data for the hash table. The data is
measured when the desktop is shown.
As with the last change, the overall benchmark is almost unchanged, but
we get better scalability.
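A sketch of the radix tree usage; the tree and field names are
assumptions:

        /* at vgpu gtt init */
        INIT_RADIX_TREE(&vgpu->gtt.spt_tree, GFP_KERNEL);

        /* insert, keyed by the shadow page mfn */
        ret = radix_tree_insert(&vgpu->gtt.spt_tree, spt->shadow_page.mfn, spt);

        /* lookup by mfn */
        spt = radix_tree_lookup(&vgpu->gtt.spt_tree, mfn);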
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This patch provides a generic page_track infrastructure for
write-protected guest pages. The old page_track logic is rewritten and
now lives in a new standalone page_track.c. This page track
infrastructure can be used by both vGuC and GTT shadowing.
The important change is that it uses a radix tree instead of a hash
table, since we don't have a predictable number of pages that will be
tracked.
Here is some performance data (duration in us) for looking up an
element:
Before: (aka. intel_vgpu_find_tracked_page)
0.091 0.089 0.090 ... 0.093 0.091 0.087 ... 0.292 0.285 0.292 0.291
After: (aka. intel_vgpu_find_page_track)
0.104 0.105 0.100 0.102 0.102 0.100 ... 0.101 0.101 0.105 0.105
The hash table has good performance at the beginning, but degrades as
more pages are tracked, even when no 3D applications are running. As
expected, the radix tree has a stable and very quick lookup duration.
The overall benchmark (tested with Heaven Benchmark) improved
marginally since this is not the bottleneck. What we gain more from this
change is scalability.
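Sketched prototypes of the resulting interface; the handler typedef and
names are assumed:

int intel_vgpu_register_page_track(struct intel_vgpu *vgpu, unsigned long gfn,
                                   gvt_page_track_handler_t handler,
                                   void *priv);
void intel_vgpu_unregister_page_track(struct intel_vgpu *vgpu,
                                      unsigned long gfn);
struct intel_vgpu_page_track *intel_vgpu_find_page_track(
                struct intel_vgpu *vgpu, unsigned long gfn);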
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Don't extend page_track into the mpt layer; keep MPT simple and clean.
Meanwhile, remove gtt.n_tracked_guest_page, which doesn't make much
sense.
v2: clean up gtt.n_tracked_guest_page.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The kvmgt implementation of the mpt APIs {set,unset}_wp_page is not
real write-protection - the data gets written before these two APIs are
invoked. As discussed, change the mpt APIs to match the real behavior.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The target structure of some functions is struct intel_vgpu_ppgtt_spt,
yet their names are xxx_shadow_page; they should be
xxx_shadow_page_table. Let's use the short name 'spt' instead to reduce
the length, and do the same for the hash table name.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This is another big one: the GVT shadow page management code is heavily
refined.
The new code only uses struct intel_vgpu_ppgtt_spt to represent a vgpu
shadow page table - with or without a guest page associated. A pure
shadow page (no guest page associated) will be used to shadow a split
2M huge gtt. In this case, spt.guest_page.gfn should be zero.
To search for an existing shadow page table, we have two new interfaces:
- intel_vgpu_find_spt_by_gfn(), find a spt by guest gfn. It must not
be a pure spt.
- intel_vgpu_find_spt_by_mfn(), find the spt using the shadow page mfn
in a shadowed PTE.
The oos_page management remains as it was.
v2: Split some changes into small standalone patches.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Make the shadow PTE population code clear. Later we will add huge gtt
support based on this.
v2:
- rebase to latest code.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Reviewed-by: Zhi Wang <zhi.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
A GTT entry has a similar format to a CPU PTE. We'd prefer named macros
instead of hardcoded values.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Factor out these two interfaces so we can kill some duplicated code in
scheduler.c.
v2:
- rename to intel_vgpu_{get,put}_ppgtt_mm
- refine handle_g2v_notification
Signed-off-by: Changbin Du <changbin.du@intel.com>
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Accurate names help to avoid confusion and improve readability.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>