- add cudaDeviceSynchronize() after every kernel launch (see the sketch after this list)
- fix a small address bug in cudaMemcpy when a host array is used
- in parallel test cases, replace the fixed thread number with a variable
- rework the shared memory kernel
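A minimal CUDA sketch of the pattern these fixes describe: synchronize after the kernel launch, size the cudaMemcpy from the host array, and keep the thread count in a variable. The kernel, array size, and thread count are illustrative only.

```
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative kernel: copies input to output, one element per thread.
__global__ void copyKernel(const int *in, int *out, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n)
    out[i] = in[i];
}

int main() {
  const int n = 256;
  int host[n];
  for (int i = 0; i < n; ++i)
    host[i] = i;

  int *devIn = nullptr, *devOut = nullptr;
  cudaMalloc(&devIn, n * sizeof(int));
  cudaMalloc(&devOut, n * sizeof(int));

  // Host array as source: the size is n * sizeof(int), not the size of a pointer.
  cudaMemcpy(devIn, host, n * sizeof(int), cudaMemcpyHostToDevice);

  const int threads = 128;                 // variable, not a hard-coded magic number
  const int blocks = (n + threads - 1) / threads;
  copyKernel<<<blocks, threads>>>(devIn, devOut, n);
  cudaDeviceSynchronize();                 // synchronize after every kernel launch

  cudaMemcpy(host, devOut, n * sizeof(int), cudaMemcpyDeviceToHost);
  printf("host[42] = %d\n", host[42]);

  cudaFree(devIn);
  cudaFree(devOut);
  return 0;
}
```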
- CUDA __constant__ memory
- CUDA global __device__ memory
- CUDA __host__ prefix
- CUDA kernel launch with arguments
- CUDA templated kernels
- CUDA shared memory with a size set dynamically at runtime
- CUDA Streams
- test whether a CUDA device is available (see the sketch after this list)
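A minimal, self-contained CUDA sketch touching the listed features: __constant__ and global __device__ memory, a __host__ helper that tests whether a device is available, a templated kernel launched with arguments and dynamic shared memory, and a stream. All names and sizes are illustrative.

```
#include <cstdio>
#include <cuda_runtime.h>

__constant__ float scale;          // __constant__ memory
__device__ int globalCounter = 0;  // global __device__ memory

// Templated kernel taking arguments and using dynamically sized shared memory.
template <typename T>
__global__ void scaleKernel(const T *in, T *out, int n) {
  extern __shared__ unsigned char smemRaw[];  // size is set at launch time
  T *smem = reinterpret_cast<T *>(smemRaw);
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) {
    smem[threadIdx.x] = in[i] * static_cast<T>(scale);
    out[i] = smem[threadIdx.x];
    atomicAdd(&globalCounter, 1);
  }
}

__host__ bool cudaDeviceAvailable() {  // __host__ prefix; device availability test
  int count = 0;
  return cudaGetDeviceCount(&count) == cudaSuccess && count > 0;
}

int main() {
  if (!cudaDeviceAvailable()) {
    printf("no CUDA device available\n");
    return 0;
  }

  const int n = 128, threads = 64;
  float h[n];
  for (int i = 0; i < n; ++i)
    h[i] = 1.0f;
  float s = 2.0f;
  cudaMemcpyToSymbol(scale, &s, sizeof(float));

  float *dIn = nullptr, *dOut = nullptr;
  cudaMalloc(&dIn, n * sizeof(float));
  cudaMalloc(&dOut, n * sizeof(float));
  cudaMemcpy(dIn, h, n * sizeof(float), cudaMemcpyHostToDevice);

  cudaStream_t stream;  // CUDA stream
  cudaStreamCreate(&stream);
  scaleKernel<float><<<n / threads, threads, threads * sizeof(float), stream>>>(dIn, dOut, n);
  cudaStreamSynchronize(stream);

  cudaMemcpy(h, dOut, n * sizeof(float), cudaMemcpyDeviceToHost);
  int launched = 0;
  cudaMemcpyFromSymbol(&launched, globalCounter, sizeof(int));
  printf("h[0] = %f, threads that ran = %d\n", h[0], launched);

  cudaStreamDestroy(stream);
  cudaFree(dIn);
  cudaFree(dOut);
  return 0;
}
```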
Before, the clang++ bundled with cling could not be found unless cling was started from the bin folder ('./cling -xcuda'). Now it is also possible, for example, to start cling with 'bin/cling -xcuda'.
Fix a bug that prevented starting './cling -xcuda -fsyntax-only'.
In some cases, the path of the cling temp folder contained non-printable characters at the end.
Change the handling of the path string to solve this problem.
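A minimal sketch of one way to sanitize such a path string; the helper name is illustrative, not the actual cling code.

```
#include <cctype>
#include <string>

// Drop non-printable characters from the end of a path string.
std::string stripTrailingNonPrintable(std::string path) {
  while (!path.empty() &&
         !std::isprint(static_cast<unsigned char>(path.back())))
    path.pop_back();
  return path;
}
```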
Now it is possible to declare variables at the prompt that are visible to other statements.
The problem was that cling wraps every statement in a function to get valid input, so each variable was only visible inside its own wrapper function. To solve this, cling changes the local variable declaration into a global declaration.
The implementation for the CUDA compiler checks whether this unwrapping happened. If it did, the C++ code of the unwrapped variable declaration (from the AST printer) is written to the .cu file instead of the raw input.
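A conceptual sketch of the transformation; the wrapper names are made up and the real code cling generates looks different.

```
// Input at the cling prompt:
//   int x = 42;

// Naive wrapping would hide the variable inside one function:
void __cling_wrapper_1() {  // hypothetical wrapper name
  int x = 42;               // only visible inside this wrapper
}

// After unwrapping, the declaration is emitted at global scope, so later
// statements see it and the AST-printed form can be written to the .cu file:
int x = 42;
```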
At the moment, to extend the AST of the device code, we use PCH files to extend the existing device code with new lines of code. In detail, if we want to create a new PTX file, we use the CUDA code (.cu file) and a PCH file with the existing AST as input and generate a new PCH file that contains the whole AST. The PCH file is then compiled to a PTX file.
A bug in clang prevents us from generating more than 5 new PCH files this way. The bug is not easy to fix, so I wrote a small workaround: instead of using a PCH file that contains the AST, we generate a complete new AST from all .cu files every time.
The workaround is temporary and should be removed once clang is patched.
Now it is possible to set some arguments of the clang nvptx and fatbinary tools via arguments at cling start. The arguments are filtered, so not every argument is possible at the moment. The arguments cannot be changed at runtime, because the PCH files forbid it. For example, clang nvptx uses the optimization level that was set at the start of cling.
At the moment, the debug options for clang nvptx are simple: if any debug option is detected, just a -g is added to the clang nvptx invocation.
Additional PTX options for clang nvptx do not work at the moment; there is a parsing problem at cling startup.
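A minimal sketch of the filtering idea; the whitelist below is invented and does not match cling's actual list of accepted flags.

```
#include <string>
#include <vector>

// Keep only flags we know how to forward to the device-side clang/fatbinary.
std::vector<std::string> filterDeviceArgs(const std::vector<std::string> &args) {
  static const std::vector<std::string> allowedPrefixes = {
      "-O", "-g", "--cuda-gpu-arch="};  // illustrative whitelist
  std::vector<std::string> filtered;
  for (const std::string &arg : args) {
    for (const std::string &prefix : allowedPrefixes) {
      if (arg.compare(0, prefix.size(), prefix) == 0) {
        filtered.push_back(arg);
        break;
      }
    }
  }
  return filtered;
}
```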
I replaced copies of the include paths with a pointer to the headerSearchOptions. Explicit handling of the include paths is no longer necessary. Include paths that were declared via arguments at cling start are also picked up.
The class IncrementalCUDADeviceCompiler uses external tools to generate PTX and CUDA fatbin files. It runs the tools clang and fatbinary via llvm::sys::ExecuteAndWait. The class also handles including new code in the existing code. The steps of the compiler pipeline are:
- clang: CUDA C++ + previous PCH -> PCH
- clang: PCH -> PTX
- fatbinary: PTX -> fatbin
There is no selection of code; every input to cling is passed to the IncrementalCUDADeviceCompiler.
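A minimal sketch of driving the external tools via llvm::sys::ExecuteAndWait. Tool paths, file names, and flags are illustrative, the incremental PCH step is collapsed into a single device compile, and the two-argument ExecuteAndWait call assumes an LLVM version where it takes ArrayRef<StringRef> arguments.

```
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/Program.h"

// Run one external tool and wait for it; args[0] is the program name itself.
static bool runTool(llvm::StringRef exe, llvm::ArrayRef<llvm::StringRef> args) {
  return llvm::sys::ExecuteAndWait(exe, args) == 0;
}

bool buildFatbin() {
  // CUDA C++ -> PTX for the device side (illustrative flags and paths).
  if (!runTool("/usr/bin/clang++",
               {"clang++", "--cuda-device-only", "--cuda-gpu-arch=sm_35",
                "-S", "cling.cu", "-o", "cling.ptx"}))
    return false;
  // PTX -> fatbin that can be registered with the CUDA runtime.
  return runTool("/usr/bin/fatbinary",
                 {"fatbinary", "--cuda", "-64", "--create=cling.fatbin",
                  "--image=profile=compute_35,file=cling.ptx"});
}
```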
Now it is possible to define functions with C++ attributes without the .rawInput mode, for example `[[ noreturn ]] foo() { ... }` or `[[deprecated]] [[nodiscard]] int bar(){ … }`.
COMPILER="ccache clang" gets lost in CMake; using ccache does not work as there is no ccache-wrapper for clang-3.9.
So just use clang-3.9 without ccache.
This reverts commit 011aa8200277cd31957e222afd9b37415458b31f.
This is a revert of a revert. I reverted the first commit because adding
"." to prebuiltmodulepath was causing failures in runtime modules, but
now we skip "." in TCling::LazyFunctionCreatorAutoloadForModule, so it
doesn't matter even if we have ".".
We had test failures in runtime nightlies such as this one:
https://epsft-jenkins.cern.ch/view/ROOT/job/root-nightly-runtime-cxxmodules/95/BUILDTYPE=Debug,COMPILER=gcc62,LABEL=slc6/testReport/junit/projectroot.roottest.root.math/smatrix/roottest_root_math_smatrix_testKalman/
Failures were due to what @pcanal commented in #2135: some .so files in
roottest don't have external linkage. (This means that if you call
dlopen(libfoo.so), the dynamic loader can't find the dependent libraries and
emits an "undefined symbol" error when it tries to initialize global
variables in libfoo.so but can't find the symbol definition.)
With the PCH, rootmap files were providing information about the depending libraries.
However, we stopped generating rootmap files in #2127, and that's why we
got these failures. To fix this issue, I implemented a callback in
TCling which gets called when DynamicLibraryManager fails. The callback
passes the error message to TCling, which handles the message if it contains "undefined symbol".
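A minimal sketch of the callback idea, with invented class and member names; the real TCling and DynamicLibraryManager interfaces differ.

```
#include <functional>
#include <string>

// Hypothetical hook: the library manager reports a load failure and the
// interpreter decides whether it is an "undefined symbol" case.
using LoadFailureCallback = std::function<void(const std::string &)>;

struct MiniLibraryManager {  // stand-in for DynamicLibraryManager
  LoadFailureCallback onLoadFailure;
  void reportFailure(const std::string &errMsg) {
    if (onLoadFailure)
      onLoadFailure(errMsg);
  }
};

void installCallback(MiniLibraryManager &mgr) {
  mgr.onLoadFailure = [](const std::string &errMsg) {
    if (errMsg.find("undefined symbol") != std::string::npos) {
      // Here the interpreter would look up which library provides the
      // missing symbol and load it before retrying.
    }
  };
}
```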
This reverts commit 5298b418eec4129351888f41cb7c3bfc90161e22.
This commit was committed by mistake. The PR was opened in #1730, but it was
closed and moved to #1761. I didn't notice this and created another PR
in #1980.
This change was causing 100+ failures in runtime cxxmodules nightlies.
(Eg. https://epsft-jenkins.cern.ch/job/root-pullrequests-build/29183/testReport/junit/projectroot/runtutorials/tutorial_fit_FittingDemo/)
We want to have **proper** PrebuildModulesPaths whose information is
extracted from LD_LIBRARY_PATH and DYLD_LIBRARY_PATH, not a random ".".
Because of this commit, we were trying to autoload libraries generated
by roottest on demand (for example "./h1analysisTreeReader_C.so"). This
is not intentional behavior; these autogenerated libraries are
already loaded by roottest, and what we want is to load **proper**
libraries like libHist.so instead.
In the previous allmodules&autoloading patch, we used a callback from the
DeserializationListener to get the Decl and loaded the corresponding libraries.
It worked, but the performance was bad because ROOT was loading
excessive libraries.
In this patch, we use TCling::LazyFunctionCreatorAutoloadForModule. This
function gets called back when "mangled_name" was not found in the loaded
libraries, so we have to load the corresponding library and do the lookup
again.
I used an unordered_map to store mangled identifier and library pairs. As an
optimization, I hash the mangled name and store the library not by
name but by a uint8 index, keeping the uint8-to-name mapping in another vector.
I also tried std::map, but unordered_map was more performant. There are
better hash tables, such as:
https://probablydance.com/2018/05/28/a-new-fast-hash-table-in-response-to-googles-new-fast-hash-table/
We can try them if this part becomes crucial.
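A minimal sketch of that data layout, with invented names; the actual ROOT implementation differs.

```
#include <algorithm>
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// Map a hash of the mangled symbol name to a small library index; the index
// points into a separate table of library names (at most 256 with uint8).
class SymbolToLibraryMap {
  std::unordered_map<uint64_t, uint8_t> fSymbolHashToLibIndex;
  std::vector<std::string> fLibraries;

public:
  void Insert(const std::string &mangledName, const std::string &library) {
    auto it = std::find(fLibraries.begin(), fLibraries.end(), library);
    uint8_t index = static_cast<uint8_t>(it - fLibraries.begin());
    if (it == fLibraries.end())
      fLibraries.push_back(library);
    fSymbolHashToLibIndex[std::hash<std::string>{}(mangledName)] = index;
  }

  // Return the library providing mangledName, or an empty string.
  std::string Lookup(const std::string &mangledName) const {
    auto it = fSymbolHashToLibIndex.find(std::hash<std::string>{}(mangledName));
    return it == fSymbolHashToLibIndex.end() ? std::string()
                                             : fLibraries[it->second];
  }
};
```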
With this patch:
```
Processing tutorials/hsimple.C...
hsimple : Real Time = 0.04 seconds Cpu Time = 0.03 seconds
(TFile *) 0x562b37a14fe0
Processing /home/yuka/CERN/ROOT/memory.C...
cpu time = 0.362307 seconds
sys time = 0.039741 seconds
res memory = 278.215 Mbytes
vir memory = 448.973 Mbytes
```
W/o this patch:
```
Processing tutorials/hsimple.C...
hsimple : Real Time = 0.08 seconds Cpu Time = 0.07 seconds
(TFile *) 0x5563018a1d30
Processing /home/yuka/CERN/ROOT/memory.C...
cpu time = 1.524314 seconds
sys time = 0.157075 seconds
res memory = 546.867 Mbytes
vir memory = 895.184 Mbytes
```
So it improves time by about 4x and memory by about 2x.
Before this commit, cpt.py attempted `"3.11.1" < "3.4.3"`, which
incorrectly returns `True`. This commit adds a function that splits the
string into version identifiers and compares them individually.
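The same idea, sketched in C++ for illustration (cpt.py itself is Python); the function names are made up.

```
#include <sstream>
#include <string>
#include <vector>

// Split "3.11.1" into {3, 11, 1} so the components compare numerically
// instead of lexicographically as characters.
std::vector<int> splitVersion(const std::string &version) {
  std::vector<int> parts;
  std::stringstream ss(version);
  std::string item;
  while (std::getline(ss, item, '.'))
    parts.push_back(std::stoi(item));
  return parts;
}

bool versionLess(const std::string &a, const std::string &b) {
  return splitVersion(a) < splitVersion(b);  // element-wise comparison
}

// versionLess("3.11.1", "3.4.3") is false, whereas the plain string
// comparison "3.11.1" < "3.4.3" is true.
```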
Add llvm module pass to generate unique cuda module ctor/dtor names.
This llvm module pass addresses the following problem: every llvm module has a CUDA ctor and dtor (if a CUDA fatbinary exists), with at least a function call to register the fatbinary. The ctor/dtor can also include function calls to register global functions and variables at runtime, depending on the user's code. Lazy compilation detects functions by name: if the name (symbol) already exists, it uses the existing translation; otherwise it translates the function on first use (but never translates it twice). Without the module pass, cling would always use the translation of the first module.
The test case uses reflection on the gCling interpreter object. It takes two random modules and compares the symbols of the CUDA module ctor and dtor.
Also add a function that changes the symbols of the CUDA module ctor and dtor to preprocessor-compliant symbols.
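A minimal sketch of such a pass, assuming the legacy pass manager and that clang names the generated functions __cuda_module_ctor and __cuda_module_dtor; the suffixing scheme is illustrative.

```
#include <string>
#include "llvm/IR/Function.h"
#include "llvm/IR/Module.h"
#include "llvm/Pass.h"

namespace {
// Rename the CUDA module ctor/dtor so every incrementally generated module
// gets its own symbol and lazy compilation does not reuse the first module's
// translation.
struct UniqueCudaCtorDtorNames : public llvm::ModulePass {
  static char ID;
  unsigned ModuleCounter;  // illustrative way to make the names unique

  explicit UniqueCudaCtorDtorNames(unsigned Counter = 0)
      : llvm::ModulePass(ID), ModuleCounter(Counter) {}

  bool runOnModule(llvm::Module &M) override {
    bool Changed = false;
    for (const char *Name : {"__cuda_module_ctor", "__cuda_module_dtor"}) {
      if (llvm::Function *F = M.getFunction(Name)) {
        F->setName(std::string(Name) + "_cling_" + std::to_string(ModuleCounter));
        Changed = true;
      }
    }
    return Changed;
  }
};
} // namespace

char UniqueCudaCtorDtorNames::ID = 0;
```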