We have been relying on this for a while, for example for re-emission of
template symbols. At the moment we get the incremental extensions
because Preprocessor::enableIncrementalProcessing() turns them on
internally, but this will change with LLVM 18, where this method only
controls incremental processing of a single Preprocessor object.
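A minimal sketch of what requesting the extensions explicitly could look like, assuming the LangOptions::IncrementalExtensions flag available in recent clang; this is an illustration, not the actual cling change.
```
// Hypothetical sketch: ask for the incremental extensions on the invocation
// itself instead of relying on Preprocessor::enableIncrementalProcessing()
// turning them on as a side effect (assumes clang >= 16, where
// LangOptions::IncrementalExtensions exists).
#include "clang/Frontend/CompilerInstance.h"

void setupIncrementalCompiler(clang::CompilerInstance &CI) {
  // Language-level incremental extensions, independent of any single
  // Preprocessor object.
  CI.getLangOpts().IncrementalExtensions = true;

  // Incremental processing of the current Preprocessor still has to be
  // requested separately.
  if (CI.hasPreprocessor())
    CI.getPreprocessor().enableIncrementalProcessing();
}
```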
Sema::AddTemplateOverloadCandidate() can issue diagnostics as part of the
template instantiation it performs. The LookupHelper just wants to know
whether there is a matching function, which is similar to a SFINAE context
and a good reason to silence potential diagnostics if asked to do so!
To get there, simply increase the scope of the setSuppressAllDiagnostics()
calls, and make it an RAII for robustness.
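A minimal sketch of the kind of RAII meant here, built on clang::DiagnosticsEngine's setSuppressAllDiagnostics()/getSuppressAllDiagnostics(); the class name is made up for illustration.
```
// Hypothetical guard: silence diagnostics for the whole scope and restore
// the previous state on destruction, so no exit path from the lookup can
// leave suppression switched on by accident.
#include "clang/Basic/Diagnostic.h"

class SuppressDiagsRAII {
  clang::DiagnosticsEngine &m_Diags;
  bool m_Old;

public:
  SuppressDiagsRAII(clang::DiagnosticsEngine &Diags, bool Suppress)
      : m_Diags(Diags), m_Old(Diags.getSuppressAllDiagnostics()) {
    if (Suppress)
      m_Diags.setSuppressAllDiagnostics(true);
  }
  ~SuppressDiagsRAII() { m_Diags.setSuppressAllDiagnostics(m_Old); }
};
```
Wrapping the region around Sema::AddTemplateOverloadCandidate() in such a guard keeps the speculative instantiation quiet no matter how the function returns.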
Outer RAIIs might still reference the Transaction, while unload assumes
that it owns the transaction and can delete it / put it into the
TransactionPool.
This fixes https://github.com/root-project/root/issues/7657
We still need to track ownership as what has happened here (unload
of a Transaction held by an RAII) can happen again / elsewhere.
This will be addressed by a subsequent PR in master.
Before, pre-existing fwd decls of specializations got unloaded.
OTOH, any decl produced during (failed) template instantiation must
also be unloaded; see #6331. Those are most easily identified by unloading
the whole (failed) transaction.
```
error: no member named 'value' in 'std::__and_<std::is_copy_assignable<std::__cxx11::basic_string<char> >, std::is_copy_assignable<Inner<int> > >'
is_copy_assignable<_T2>>::value,
~~~~~~~~~~~~~~~~~~~~~~~~~~^
```
which is due to the first decl being invalid (as `Inner<int>` does not have a definition and a `static_assert` at /usr/lib/gcc/x86_64-redhat-linux/10/../../../../include/c++/10/type_traits:1093 is triggered), then not unloaded, and then picked up again where we *do* have the definition for `Inner<int>`.
Add a test for template specialization lookup vs unloading.
When doing lookup on templates, instantiation can fail.
This can be triggered during template instantiation somewhere
in clang, during autoloading, or during cling lookup - and SFINAE errors
that occur in cling's lookup must not bubble up to clang, or else
clang will think that there was a problem (where there was
none - just e.g. ROOT trying to autoload a bogus template).
In this concrete case, a template specialized with a lambda
was not found by clang, an attempt was made to autoload it, TMetaUtils
produced a broken normalized type name, lookup on the broken
type name failed with a SFINAE error - and that error ended up being
swallowed by a clang SFINAETrap in the topmost lookup.
Instead, keep SFINAE errors to ourselves.
This is an improved version of 40673e583fceda8b362a0902c6aba371ddd0cfd8
which lowers the SFINAE-reset to Lookup, as AutoParse can still
see partial template specialization scopes. It fixes ROOT-10777.
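A minimal sketch of one way to keep such SFINAE errors local to the lookup, assuming clang::Sema's public NumSFINAEErrors counter; this is illustrative, not the exact cling patch.
```
// Hypothetical sketch: remember Sema's SFINAE error count before the
// speculative lookup and restore it afterwards, so failures caused by
// cling's own lookup are invisible to clang's surrounding instantiation.
#include "clang/Sema/Sema.h"

class SFINAEErrorsRAII {
  clang::Sema &m_Sema;
  unsigned m_Saved;

public:
  explicit SFINAEErrorsRAII(clang::Sema &S)
      : m_Sema(S), m_Saved(S.NumSFINAEErrors) {}
  ~SFINAEErrorsRAII() { m_Sema.NumSFINAEErrors = m_Saved; }
};
```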
The implementation of the class->library mapping calls
LookupHelper::findScope. This makes recursive invocations
LookupHelper::findScope -> ... -> LookupHelper::findScope happen more often.
The operations done by the LookupHelper are costly in both memory and
performance. Almost every operation requires memory allocation and parsing
of often non-trivial C++ code.
Unfortunately, the LookupHelper is used very intensively by rootcling and
ROOT. The callers usually do not use any caching mechanisms and redo the
expensive operations over and over even though the answer is known to be
the same as before. For instance, building the dictionaries of MathCore and TreePlayer shows:
```
MathCore:
Cached entries: 217
Total parse requests: 54051
Cache hits: 53834
TreePlayer:
Cached entries: 183
Total parse requests: 57697
Cache hits: 57514
```
This patch introduces the first set of caching functionality. In
particular, each LookupHelper::find* function allocates a memory buffer
which is then stored in the clang::SourceManager. We hash the buffer
content and keep a mapping between a hash and FileID and next time we
encounter the same content we do not allocate a new FileID but reuse the
old one. We see a 7% decrease in memory footprint for non-cxxmodules ROOT.
For cxxmodules we see a significant reduction of the pcm sizes (by half),
which translates into RSS improvements:
```
master before:
cpu time = 0.291462 seconds
sys time = 0.064409 seconds
res memory = 345.816 Mbytes
vir memory = 573.508 Mbytes
master after:
cpu time = 0.235828 seconds
sys time = 0.098327 seconds
res memory = 260.012 Mbytes
vir memory = 377.945 Mbytes
```
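A minimal sketch of the buffer-content cache described above; the class and member names are illustrative, and the hashing/SourceManager wiring is an assumption about how such a cache could look.
```
// Hypothetical sketch: hash the would-be parse buffer and reuse the FileID
// created for identical content instead of registering a new buffer in the
// SourceManager on every LookupHelper::find* call.
#include "clang/Basic/SourceManager.h"
#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/Hashing.h"
#include "llvm/Support/MemoryBuffer.h"

class ParseBufferCache {
  llvm::DenseMap<size_t, clang::FileID> m_HashToFileID;

public:
  clang::FileID getFileID(clang::SourceManager &SM, llvm::StringRef Content) {
    size_t Hash = llvm::hash_value(Content);
    auto It = m_HashToFileID.find(Hash);
    if (It != m_HashToFileID.end())
      return It->second; // same content seen before: reuse the old FileID

    clang::FileID FID = SM.createFileID(
        llvm::MemoryBuffer::getMemBufferCopy(Content, "lookup.helper"));
    m_HashToFileID[Hash] = FID;
    return FID;
  }
};
```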
Patch by Yuka Takahashi and me.
In the previous allmodules&autoloading patch, we used a callback from the
DeserializationListener to get the Decl and loaded the corresponding libraries.
It worked, but the performance was bad because ROOT was loading
excessive libraries.
In this patch, we use TCling::LazyFunctionCreatorAutoloadForModule. This
function is called back when "mangled_name" was not found in the loaded
libraries, so we have to load the corresponding library and look up
again.
I used an unordered_map to store mangled-identifier/library pairs. As an
optimization, I hash the mangled name and store the library not by name
but by a uint8 index, holding the index-to-name information in another
vector. I also tried std::map, but unordered_map was more performant.
There are better hash tables, such as:
https://probablydance.com/2018/05/28/a-new-fast-hash-table-in-response-to-googles-new-fast-hash-table/
We can try them if this part becomes crucial.
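A minimal sketch of the data layout described above; class and member names are illustrative, not the actual ROOT code.
```
// Hypothetical sketch: map a hash of the mangled name to a one-byte library
// index, and keep the index -> library-name mapping in a separate vector,
// so each table entry stores a uint8 instead of a full library name.
#include <algorithm>
#include <cstdint>
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

class MangledName2LibraryMap {
  std::vector<std::string> m_Libraries;       // uint8 index -> library name
  std::unordered_map<size_t, uint8_t> m_Map;  // hash(mangled name) -> index

public:
  void insert(const std::string &MangledName, const std::string &Library) {
    auto Found = std::find(m_Libraries.begin(), m_Libraries.end(), Library);
    uint8_t Idx;
    if (Found != m_Libraries.end()) {
      Idx = static_cast<uint8_t>(Found - m_Libraries.begin());
    } else {
      Idx = static_cast<uint8_t>(m_Libraries.size());
      m_Libraries.push_back(Library);
    }
    m_Map[std::hash<std::string>{}(MangledName)] = Idx;
  }

  // Returns the library name for a mangled symbol, or "" if unknown.
  std::string lookup(const std::string &MangledName) const {
    auto It = m_Map.find(std::hash<std::string>{}(MangledName));
    return It == m_Map.end() ? std::string() : m_Libraries[It->second];
  }
};
```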
With this patch:
```
Processing tutorials/hsimple.C...
hsimple : Real Time = 0.04 seconds Cpu Time = 0.03 seconds
(TFile *) 0x562b37a14fe0
Processing /home/yuka/CERN/ROOT/memory.C...
cpu time = 0.362307 seconds
sys time = 0.039741 seconds
res memory = 278.215 Mbytes
vir memory = 448.973 Mbytes
```
W/o this patch:
```
Processing tutorials/hsimple.C...
hsimple : Real Time = 0.08 seconds Cpu Time = 0.07 seconds
(TFile *) 0x5563018a1d30
Processing /home/yuka/CERN/ROOT/memory.C...
cpu time = 1.524314 seconds
sys time = 0.157075 seconds
res memory = 546.867 Mbytes
vir memory = 895.184 Mbytes
```
So it improves time by 4x and memory by 2x.
Preloading all the modules has several advantages:
1. We do not have to rely on rootmap files, which don't support some
   features (namespaces and templates).
2. Lookup is faster because we don't have to go through the rootmap-file
   trampoline.
Autoloading libraries when decls are deserialized gives us correctness.
However, we still need to optimize the performance by reducing the number
of loaded libraries and improving Clang performance.
Fixes an issue for ATLAS, where Property<string>::Property<string&> cannot be instantiated by ROOT.
ROOT is trying to do that because the function is templated, with all template parameters having defaults.
(This "let us instantiate if all template params have defaults" is needed e.g. to see pair::pair() which is sfinae protected.)
If the Transaction that holds the std::string type is unloaded, LookupHelper::m_StringTy will point to invalid memory.
LookupHelper::findType can return an invalid QualType, causing a null dereference.
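A minimal sketch of the guard this implies, assuming cling::LookupHelper::findType's StringRef/DiagSetting interface; treat the exact signature as an assumption.
```
// Hypothetical sketch: never use the result of findType() without checking
// it, since a failed lookup (or an unloaded Transaction) can leave us with
// a null/invalid QualType.
#include "cling/Interpreter/LookupHelper.h"
#include "clang/AST/Type.h"

const clang::Type *getTypePtrSafely(const cling::LookupHelper &LH,
                                    llvm::StringRef Name) {
  clang::QualType QT = LH.findType(Name, cling::LookupHelper::NoDiagnostics);
  if (QT.isNull())
    return nullptr; // do not dereference a null QualType
  return QT.getTypePtr();
}
```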
This was happening when a namespace had the same name as a function in a namespace that was 'used'. Namely,
in the issue report it was the 'next' namespace and the function std::next.
This required the quick search function in LookupHelper.cxx to properly handle the return value of utils::Lookup::Named.