- To enable the CUDA tests, lit detects `libcudart.so` in
`LD_LIBRARY_PATH`. Now lit also sets the CUDA SDK root of
`libcudart.so` as a cling argument (`--cuda-path`) in the tests.
- Pass through the environment variable `CUDA_VISIBLE_DEVICES`.
- minor changes to comments and code style
- use const in IncrementalCUDADeviceCompiler where possible
- move the CUDA device code compiler instance to IncrementalParser
- change the members of CuArgs to const and adjust the setCuArgs method
- use std::vector<std::string> instead of llvm::SmallVector<const char *> to build argv for executeAndWait
- improve the error messages of generatePCH(), generatePTX() and generateFatbinary()
- replace m_Counter with a copy in IncrementalCUDADeviceCompiler to avoid unintended changes
- add cudaDeviceSynchronize() after every kernel launch (see the first sketch below)
- fix a small addressing bug in cudaMemcpy when a host array is used
- in parallel test cases, replace the fixed thread number with a variable
- rework the shared memory kernel
- CUDA __constant__ memory
- CUDA global __device__ memory
- CUDA __host__ prefix
- CUDA kernel launch with arguments
- CUDA templated kernels
- CUDA shared memory with a size set dynamically at runtime (see the second sketch below)
- CUDA streams
- test whether a CUDA device is available
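
The items on kernel launches with arguments and on calling cudaDeviceSynchronize() after every launch can be illustrated with a short, self-contained sketch. It is not taken from the actual test suite; the `add` kernel and the host/device array names are hypothetical.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel: element-wise addition of two arrays.
__global__ void add(const int* a, const int* b, int* c, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) c[i] = a[i] + b[i];
}

int main() {
  const int n = 4;
  int hostA[n] = {1, 2, 3, 4};
  int hostB[n] = {10, 20, 30, 40};
  int hostC[n] = {0, 0, 0, 0};

  int *devA = nullptr, *devB = nullptr, *devC = nullptr;
  cudaMalloc(&devA, n * sizeof(int));
  cudaMalloc(&devB, n * sizeof(int));
  cudaMalloc(&devC, n * sizeof(int));

  // Copy the host arrays to the device; the array names decay to pointers.
  cudaMemcpy(devA, hostA, n * sizeof(int), cudaMemcpyHostToDevice);
  cudaMemcpy(devB, hostB, n * sizeof(int), cudaMemcpyHostToDevice);

  // Kernel launch with arguments, synchronized directly afterwards so the
  // host only continues once the kernel has finished.
  add<<<1, n>>>(devA, devB, devC, n);
  cudaDeviceSynchronize();

  cudaMemcpy(hostC, devC, n * sizeof(int), cudaMemcpyDeviceToHost);
  for (int i = 0; i < n; ++i) printf("%d\n", hostC[i]);

  cudaFree(devA);
  cudaFree(devB);
  cudaFree(devC);
  return 0;
}
```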
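
The second sketch (again illustrative only, with a hypothetical `reverse` kernel) combines three of the listed topics: a templated kernel, shared memory whose size is only known at launch time, and a user-created stream. The dynamic shared-memory size and the stream are passed as the third and fourth launch-configuration arguments.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical templated kernel: reverse an array via dynamically sized
// shared memory.
template <typename T>
__global__ void reverse(T* data, int n) {
  extern __shared__ unsigned char smemRaw[];  // sized at launch time
  T* smem = reinterpret_cast<T*>(smemRaw);
  int i = threadIdx.x;
  if (i < n) smem[i] = data[i];
  __syncthreads();
  if (i < n) data[i] = smem[n - 1 - i];
}

int main() {
  const int n = 8;
  int host[n] = {0, 1, 2, 3, 4, 5, 6, 7};
  int* dev = nullptr;
  cudaMalloc(&dev, n * sizeof(int));

  cudaStream_t stream;
  cudaStreamCreate(&stream);

  cudaMemcpyAsync(dev, host, n * sizeof(int), cudaMemcpyHostToDevice, stream);
  // Third launch argument: dynamic shared-memory size; fourth: the stream.
  reverse<int><<<1, n, n * sizeof(int), stream>>>(dev, n);
  cudaMemcpyAsync(host, dev, n * sizeof(int), cudaMemcpyDeviceToHost, stream);
  cudaStreamSynchronize(stream);

  for (int i = 0; i < n; ++i) printf("%d ", host[i]);
  printf("\n");

  cudaStreamDestroy(stream);
  cudaFree(dev);
  return 0;
}
```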