The ring index will always collide when used as a hash into the fence list, so use
the context number instead. That can still cause collisions, but they
are less likely than with ring indices.
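A minimal stand-alone sketch of the idea (all names and the bucket count are made up for illustration; this is not the driver code): with a fixed number of buckets, hashing by ring index maps every fence from a given ring into the same bucket, while hashing by the per-submission context number spreads the entries out.

    #include <stdint.h>

    #define NUM_FENCE_BUCKETS 128   /* illustrative bucket count */

    struct example_fence {
        uint32_t ring_idx;   /* index of the ring that emitted the fence */
        uint64_t context;    /* per-submission context number */
    };

    /* Old scheme: every fence from the same ring lands in the same bucket. */
    static unsigned fence_bucket_by_ring(const struct example_fence *f)
    {
        return f->ring_idx % NUM_FENCE_BUCKETS;
    }

    /* New scheme: collisions remain possible, but are much less likely. */
    static unsigned fence_bucket_by_context(const struct example_fence *f)
    {
        return (unsigned)(f->context % NUM_FENCE_BUCKETS);
    }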
Signed-off-by: Christian König <christian.koenig@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
That avoids lock inversion between the BO reservation lock
and the anon_vma lock.
v2:
* Changed amdgpu_bo_list_entry.user_pages to an array of pointers
* Lock mmap_sem only for get_user_pages
* Added invalidation of unbound userpointer BOs
* Fixed memory leak and page reference leak
v3 (chk):
* Revert locking mmap_sem only for get_user_pages
* Revert adding invalidation of unbound userpointer BOs
* Sanitize and fix error handling
v4 (chk):
* Init userpages pointer everywhere.
* Fix error handling when get_user_pages() fails.
* Add invalidation of unbound userpointer BOs again.
v5 (chk):
* Add maximum number of tries (see the sketch below).
v6 (chk):
* Fix error handling when we run out of tries.
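Roughly, the bounded retry from v5/v6 can be sketched as below; the helper functions and the error code are invented stand-ins for the real get_user_pages()/MMU-notifier plumbing, not the driver's API.

    #include <errno.h>
    #include <stdbool.h>

    #define MAX_USERPTR_TRIES 10   /* illustrative limit, not the driver's value */

    /* Stand-ins for pinning the BO's user pages and for checking whether
     * an MMU invalidation raced with us. */
    int  example_pin_user_pages(void *bo);
    void example_unpin_user_pages(void *bo);
    bool example_pages_invalidated(const void *bo);

    static int example_setup_userptr(void *bo)
    {
        for (int tries = 0; tries < MAX_USERPTR_TRIES; ++tries) {
            int r = example_pin_user_pages(bo);
            if (r)
                return r;                   /* pinning itself failed */
            if (!example_pages_invalidated(bo))
                return 0;                   /* pages stayed valid, done */
            example_unpin_user_pages(bo);   /* drop references and retry */
        }
        return -EDEADLK;                    /* out of tries; fail cleanly */
    }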
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> (v4)
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Switching the GDS space too often seems to be problematic.
This patch, together with the following one, can avoid VM faults on context switch.
v2: extend commit message a bit
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com> (v1)
Reviewed-by: Chunming Zhou <david1.zhou@amd.com> (v1)
The owner must be per ring as long as we don't
support sharing VMIDs per process. Also move the
assigned VMID and page directory address into the
IB structure.
v3: assign the VMID to all IBs, not just the first one.
v4: use correct pointer for owner
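A loose sketch of the data-structure side (names and the ring count are invented for illustration): the VM ID and the page directory address travel with each IB, and ownership is tracked per ring.

    #include <stdint.h>

    #define EXAMPLE_MAX_RINGS 16   /* illustrative ring count */

    struct example_ib {
        uint64_t gpu_addr;      /* where the command stream lives */
        uint32_t length_dw;     /* command stream size in dwords */
        unsigned vm_id;         /* assigned VM ID, now carried by the IB */
        uint64_t vm_pd_addr;    /* page directory address, now carried by the IB */
    };

    struct example_vm {
        /* As long as VMIDs are not shared per process, the owner has to
         * be tracked for each ring separately. */
        void *id_owner[EXAMPLE_MAX_RINGS];
    };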
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Remove the double housekeeping and use something sane to
forcefully delete BOs on unload.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Read back harvest configuration from registers and simplify
calculations. No need to program the raster config registers.
These are programmed as golden registers and the user mode
drivers program them as well.
v2: rebase on Tom's patches
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This allows us to remove the kernel context and use a better
priority for the submissions.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
This allows us to remove the kernel context and use a better
priority for the submissions.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Distribute the load on both rings.
v2: use a loop for the initialization
v3: agd: rebase on upstream
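A tiny stand-alone illustration of the round-robin distribution and the v2 initialization loop (ring and client counts are made up):

    #include <stdio.h>

    #define EXAMPLE_NUM_RINGS   2   /* both rings */
    #define EXAMPLE_NUM_CLIENTS 8

    int main(void)
    {
        int client_ring[EXAMPLE_NUM_CLIENTS];

        /* v2: one loop instead of hand-written per-ring setup. */
        for (int i = 0; i < EXAMPLE_NUM_CLIENTS; ++i)
            client_ring[i] = i % EXAMPLE_NUM_RINGS;

        for (int i = 0; i < EXAMPLE_NUM_CLIENTS; ++i)
            printf("client %d -> ring %d\n", i, client_ring[i]);
        return 0;
    }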
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Updates from different VMs can be processed independently.
v2: agd: rebase on upstream
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
The padding depends on the firmware version and we need that for BO moves as
well, not only for VM updates.
v2: new approach of making pad_ib a ring function
v3: fix typo in macro name
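A hedged sketch of the v2 approach (struct and function names are illustrative only): padding becomes a per-ring callback, so BO moves and VM updates share one entry point and each engine pads according to its firmware requirements.

    struct example_ib;   /* opaque for this sketch */

    struct example_ring_funcs {
        /* Pads the IB to whatever alignment the engine/firmware requires. */
        void (*pad_ib)(struct example_ib *ib);
    };

    struct example_ring {
        const struct example_ring_funcs *funcs;
    };

    static inline void example_ring_pad_ib(struct example_ring *ring,
                                           struct example_ib *ib)
    {
        ring->funcs->pad_ib(ib);   /* used by BO moves and VM updates alike */
    }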
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Check the usermm when we try to use the BO in the IOCTLs
instead of when we try to bind it.
Signed-off-by: Christian König <christian.koenig@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
No need to duplicate that code over and over again. Also stop using the
flags to determine if we need to map the addresses.
v2: constify the pages_addr
v3: rebased, fix typo in commit message
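A simplified sketch of the shared-helper idea, assuming 4KB pages and made-up names (the const array mirrors the v2 change): whether an address needs translating is decided by whether a pages_addr array was handed in at all, not by mapping flags.

    #include <stdint.h>

    static uint64_t example_map_gart(const uint64_t *pages_addr, uint64_t addr)
    {
        if (!pages_addr)
            return addr;   /* nothing to translate */

        /* Page part comes from the array, the in-page offset is kept. */
        return (pages_addr[addr >> 12] & ~0xfffULL) | (addr & 0xfffULL);
    }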
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The new sysfs interfaces:
pp_num_states: Read-only, returns the number of available pp states, or 0 if powerplay is not available.
pp_cur_state: Read-only, returns the index number of the current pp state.
pp_force_state: Read-write, writing a power state index forcibly switches to the selected state and enables forced state mode; writing an empty string (such as "echo >...") disables forced state mode.
pp_table: Read-write, binary, used to read or write the dpm table; the maximum file size is 4KB (one page).
pp_dpm_sclk: Read-write, reading returns the list of dpm levels; writing an index number forces powerplay to set the corresponding dpm level.
pp_dpm_mclk: same as sclk.
pp_dpm_pcie: same as sclk.
Also add a new setting "manual" to the existing power_dpm_force_performance_level interface.
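As a hedged userspace example of how the new files might be exercised (the card index, the chosen level, and whether power_dpm_force_performance_level must be set to "manual" first all depend on the system; the path is an assumption):

    #include <stdio.h>

    int main(void)
    {
        const char *path = "/sys/class/drm/card0/device/pp_dpm_sclk";
        char line[256];
        FILE *f = fopen(path, "r");

        if (!f) {
            perror(path);
            return 1;
        }
        /* Reading lists the available sclk dpm levels. */
        while (fgets(line, sizeof(line), f))
            fputs(line, stdout);
        fclose(f);

        /* Writing an index number forces the corresponding level
         * (typically needs root privileges). */
        f = fopen(path, "w");
        if (f) {
            fprintf(f, "0\n");
            fclose(f);
        }
        return 0;
    }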
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Eric Huang <JinHuiEric.Huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This allows the scheduler to handle the dependencies caused by VM ID contention as well.
v2: grab id only once
v3: use a separate lock for the VMIDs
v4: cleanup after semaphore removal
v5: minor coding style change
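A loose userspace-flavoured sketch of the v2/v3 points with invented names (a pthread mutex stands in for the kernel-side lock): the ID is grabbed exactly once per job, under a lock dedicated to the VMID bookkeeping.

    #include <pthread.h>
    #include <stddef.h>

    #define EXAMPLE_NUM_VMIDS 16   /* illustrative ID count */

    struct example_vmid_mgr {
        pthread_mutex_t lock;             /* v3: separate lock just for VMIDs */
        void *owner[EXAMPLE_NUM_VMIDS];   /* which job currently holds each ID */
    };

    static int example_grab_vmid(struct example_vmid_mgr *mgr, void *job)
    {
        int id = -1;

        pthread_mutex_lock(&mgr->lock);
        for (int i = 0; i < EXAMPLE_NUM_VMIDS; ++i) {
            if (mgr->owner[i] == NULL) {
                mgr->owner[i] = job;   /* v2: grabbed only once per job */
                id = i;
                break;
            }
        }
        pthread_mutex_unlock(&mgr->lock);
        return id;   /* -1 means every ID is currently contended */
    }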
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
It doesn't currently do anything, and there's no need for it
going forward since a PCI config reset will be required as a
fallback even when we have fine-grained reset implemented.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>